
(You have no contact info in your profile, so I'll just post this publicly.)

I've been thinking a lot of similar thoughts and have been wishing I had more people to discuss this with.

If it sounds good to you, reach out to me at the email in my profile. I'd be interested in hearing more about your diagnosis of how systems are malfunctioning, what systemic fixes might look like, and how it could be practical to work on that and still pay bills.


Do you have a link to an example of a hoax story? I haven't seen that.


Take any racial media story...

From the Duke-lacrosse rape case, to the shooting of Michael Brown, the "clock boy", the black woman killed in jail, and everything in-between and beyond.

All proved to be false narratives.

The list is so large that I'm constantly surprised by the opinion that such a thing does not exist.


I was really confused when I read "the constrain() call takes expressions that define the relationships you want to impose" (emphasis mine) and saw this code:

  container.constrain(button.TOP == container.TOP + 50)
Python doesn't let you pass unevaluated expressions! Why isn't that just getting evaluated to true or false? Is this not vanilla Python? Is there a pre-processor?

I dove into the code and found the answer here[1]. button.TOP and container.TOP are both Toga Attributes, which have their equality operators redefined.

Very interesting and clever use of operator overloading.
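If it helps, here's a stripped-down sketch of the pattern (my own illustration, not Toga's actual classes; the names are made up):

  class Constraint:
      """Records a relationship between two attribute expressions."""
      def __init__(self, lhs, rhs):
          self.lhs, self.rhs = lhs, rhs

  class Expression:
      """An attribute plus a constant offset, e.g. container.TOP + 50."""
      def __init__(self, attribute, offset):
          self.attribute, self.offset = attribute, offset

  class Attribute:
      def __init__(self, widget, name):
          self.widget, self.name = widget, name
      def __add__(self, offset):
          return Expression(self, offset)
      def __eq__(self, other):
          # Instead of evaluating to True/False, build a Constraint
          # object that the layout engine can store and solve later.
          return Constraint(self, other)

  button_top = Attribute("button", "TOP")
  container_top = Attribute("container", "TOP")
  c = (button_top == container_top + 50)
  print(type(c).__name__, c.rhs.offset)  # Constraint 50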

[1] https://github.com/pybee/toga/blob/master/toga/constraint.py


I am very conflicted about that trick. On the one hand it is a neat bit of syntax to express what you want, but it goes completely against your expectations of how Python syntax is parsed. 'Clever use of operator overloading' historically has always meant 'confusing use of operator overloading' - something the C++ community took some time to learn.

Python is very good at behaving as you expect it to, despite the fact that you can implement pretty much any magic. This is down to library design more than it is language design.


At least it's less magic than puLP:

  prob += x*2 + y, "foo"
This sets the objective function of prob to (2x+y), and assigns it a name of "foo".

  prob += x*2 + y > 3, "abcd"
That adds a constraint that 2x + y > 3, named "abcd".

On both of these the string is optional.

I'm also conflicted. On one hand, it's about the most compact syntax you can get for something like this - when similar libraries in languages without operator overloading resort to passing strings into functions... On the other hand, it can be utterly incomprehensible if you haven't gone through the documentation.
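For anyone who hasn't used puLP, here's a minimal sketch of how those lines appear in context (the problem and variable names are mine; I've written the constraint with >=):

  from pulp import LpProblem, LpVariable, LpMinimize

  prob = LpProblem("example", LpMinimize)
  x = LpVariable("x", lowBound=0)
  y = LpVariable("y", lowBound=0)

  prob += x*2 + y, "foo"          # objective function, named "foo"
  prob += x*2 + y >= 3, "abcd"    # constraint 2x + y >= 3, named "abcd"

  prob.solve()
  print(x.value(), y.value())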


FWIW, I've seen the double-equals syntax in GSS as well. It's a common idiom in the constraint programming community; it's supposed to symbolize that the two sides change in sync -- container.TOP can change button.TOP or vice-versa.

I see your argument about overloaded operators being dangerous for newcomers, but the syntax isn't completely fabricated.


Thanks for the compliment. As others have pointed out, this isn't an idea of my own creation - others (e.g., SQLAlchemy) have taken similar approaches in other APIs. I'm just exploiting a pattern that I've seen used to good effect elsewhere.


Some ORMs do operator overloading too, e.g. SQLAlchemy: http://docs.sqlalchemy.org/en/rel_0_9/core/sqlelement.html


Google use operator overloading in their appengine stuff.


Happily, your first example is actually already supported in the latest Underscore (1.6.0):

  _.each([1,2,3], _.partial(something, _, 'param'))
Example:

  _.map([1, 2, 3, 4], _.partial(Math.pow, _, 8))
  // [1, 256, 6561, 65536] 
Admittedly, "_.partial" is 3 more characters than "$.call", but if you really used it a lot, you could alias it to something short like "_p".

Unfortunately, this doesn't seem to be present in the latest Lo-Dash (which I generally prefer).


I agree. The combination of Backbone models/collections and React views bound together by React.Backbone[1] so that the views automatically re-render after any changes to the models/collections has made the data-heavy application I'm working on surprisingly easy to develop.

The real issue I've been having with React -- one that I'd like to see dealt with by a library/design pattern -- is dealing with transitions between view states. Re-rendering the view on any data change can result in very abrupt changes. For instance, my app displays multiple lists of items. When the user edits an item, that can result in the item suddenly disappearing from underneath their cursor to reappear somewhere else in that list, in a different list, or nowhere within view.

I've been able to handle each issue of this nature as it arises in an ad-hoc fashion, but I'd really like a more formalized way in React to say, "when attribute X changes, use transition Y to change from the old view state to the new one".

Does anyone know of an existing solution to this problem?

[1] https://github.com/usepropeller/react.backbone


Check this out: https://github.com/andreypopp/react-router-page-transition

Not simply React, but you can get the idea.


Obligatory Wikipedia link for those of us (like me) who don't know what materialized views are: http://en.wikipedia.org/wiki/Materialized_view

In short: "In a database management system following the relational model, a view is a virtual table representing the result of a database query. Whenever a query or an update addresses an ordinary view's virtual table, the DBMS converts these into queries or updates against the underlying base tables. A materialized view takes a different approach in which the query result is cached as a concrete table that may be updated from the original base tables from time to time. This enables much more efficient access, at the cost of some data being potentially out-of-date. It is most useful in data warehousing scenarios, where frequent queries of the actual base tables can be extremely expensive."


Ha, I remember trying to make sites that looked this good, with this aesthetic, back in the 90s.

But now, I totally agree. The stock photo girl with headphones, along with a lot of other subtle things, would make me instantly think "this is a fake/auto-generated/contentless site" and hit the back button without even reading it.

It's interesting how strongly websites can give off vibes.


I think this has a lot to do with it. After an hour of reading, watching and thinking, I can't come up with any way to put it into one paragraph.

Here's the shortest what and why I could come up with:

Questioning Assumptions

Many relational databases today operate on assumptions that were true in the 1970s but are no longer true. Newer solutions such as key-value stores ("NoSQL") make unnecessary compromises in their ability to perform queries or to make consistency guarantees. Datomic reconsiders the database in light of current hardware and deployments: disks and RAM that are millions of times larger and faster, and distributed architectures connected over the internet.

Data Model

Instead of using table-based storage with explicit schemas, Datomic uses a simpler model wherein the database is made up of a large collection of "datoms" or facts. Each datom has 4 parts: an entity, an attribute, a value, and a time (denoted by the transaction number that added it to the database). Example:

  John, :street, "23 Swift St.", T27
This simple data model has two main benefits: it makes your data less rigid, and hence more agile and easier to change, and it makes it easy to handle data in non-traditional structures such as hierarchies, sets or sparse tables. It also enables Datomic's time model...

Time

Like Clojure, Datomic incorporates an explicit model of time. All data is associated with a time and new data does not replace old data, but is added to it. Returning to our previous example, if John later changes his address, a new datom would be added to the database, e.g.

  John, :street, "17 Maple St.", T43
This mirrors the real world, where the fact that John has moved does not erase the fact that John once lived on Swift St. This has multiple benefits: you can view the database at a point in time other than the present, no data is ever lost, and the immutability of each datom allows for easy and pervasive caching.
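To make the accumulate-only idea concrete, here's a toy Python sketch (my own illustration, not how Datomic is actually implemented):

  from collections import namedtuple

  Datom = namedtuple("Datom", ["entity", "attribute", "value", "tx"])

  db = [
      Datom("john", ":street", "23 Swift St.", 27),
      Datom("john", ":street", "17 Maple St.", 43),  # added, not overwritten
  ]

  def as_of(db, tx):
      # the database as it looked at transaction tx
      return [d for d in db if d.tx <= tx]

  def value(db, entity, attribute):
      # the latest asserted value: the matching fact with the highest tx
      facts = [d for d in db if d.entity == entity and d.attribute == attribute]
      return max(facts, key=lambda d: d.tx).value if facts else None

  print(value(db, "john", ":street"))             # 17 Maple St.
  print(value(as_of(db, 30), "john", ":street"))  # 23 Swift St.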

Move Data and Data Processing to Peers

Traditionally, databases use a client-server model where clients send queries and commands to a central database. This database holds all the data, performs all data processing, and manages data storage and synchronization. Clients may only access the data through the interface the server provides - typically SQL strings, which may include a (relatively small) set of functions provided by the database.

Datomic breaks this system apart. The only centralized component is data storage. Peers access the data storage through a new distributed component called a transactor. Finally, the most important part, data processing, now happens in the clients, which, considering their importance, have been renamed "peers".

Queries are made in a declarative language called Datalog, which is similar to but better than SQL. It's better because it more closely matches the model of the data itself (rather than making you think in terms of how tables are implemented in the database). It's also not restricted the way SQL is: you have your full programming language available, you can write reusable rules that can be composed in queries, and you can call any of your own functions. This is a big step up in power, and it's made practical by the distribution. If you ran your query on a central server, you'd have to worry about tying up a scarce resource with a long-running query. When processing locally, that's not a concern.

When a query is performed, the data is loaded from central storage and placed into RAM (if it will fit). Later queries can run quickly against this locally cached data.
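Here's a rough sketch of what that looks like in practice: the peer filters its locally cached facts, and any function you write can participate (plain Python here; real Datomic queries are written in Datalog):

  cached_datoms = [
      ("john", ":age", 41, 27),
      ("mary", ":age", 29, 31),
      ("john", ":street", "17 Maple St.", 43),
  ]

  def over_40(age):        # an ordinary user-written function
      return age >= 40

  # A long-running or ad-hoc predicate is fine: it ties up the peer's
  # own resources, not a shared central server.
  names = [e for (e, a, v, tx) in cached_datoms if a == ":age" and over_40(v)]
  print(names)  # ['john']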

----

That's definitely not all it does or all the benefits, but hopefully that's a good start.


I would add the following:

Transactions as First-Class Entities

Transactions are just data like everything else, and you can add facts about them like anything else: for example, who created the transaction, or what the database looked like before and after the transaction.

Additionally, you can subscribe to the queue of transactions if you want to watch for and react to events of a certain nature. This is very difficult in most other systems.
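A hypothetical sketch of what reacting to that feed could look like (names are illustrative, not Datomic's actual API):

  tx_reports = [
      {"tx": 44, "who": "alice", "datoms": [("john", ":street", "17 Maple St.", 44)]},
      {"tx": 45, "who": "bob",   "datoms": [("acct1", ":balance", 120, 45)]},
  ]

  for report in tx_reports:
      # react only to events of a certain nature, e.g. balance changes
      if any(attr == ":balance" for (_, attr, _, _) in report["datoms"]):
          print("balance changed in tx", report["tx"], "by", report["who"])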


> a time (denoted by the transaction number that added it to the database).

Do transaction numbers have total order or just partial order? Total order is serializing. (And no, using real time as the transaction number doesn't help because it's impossible to keep an interesting number of servers time-synched.) Partial order is "interesting".


It is totally ordered.

The transactor is a single point of failure.

However, since its only job is doing the transactions, the idea is it can be faster than a database server that does both the transactions and the queries.


Hmm... presumably an application can act in read-only mode in the absence of a transactor. That's an interesting thought :-)


The transactor is only used for writes, so if the transactor went down, you could still run queries.


I think their statement about ACID is too bold.

How does somebody do read-modify-write style transactions?

Say I want to bump some counter. So I retract the old fact and assert a new fact. But the new fact needs to be exactly 1 + the old value of the counter. With transactions as a simple "add this and remove that", you seemingly cannot do that. So it's not ACID. Right?


Transactions are not limited to add/retract. There are also things we call data functions, which are arbitrary, user-written expansion functions that are passed the current value of the db (within the transaction) and any arbitrary args (passed in the call), and that emit a list of adds/retracts and/or other data function calls. This result gets spliced in place of the data function call. This expansion continues until the resulting transaction is only asserts/retracts, then it gets applied. With this, increments, CAS and much more are possible.

We are still finalizing the API for installing your own data functions. The :db.fn/retractEntity call in the tutorial is an example of a data function. (retractEntity is built-in).

This call:

    [:db.fn/retractEntity entity-id]
must find all the in- and out-bound attributes relating to that entity-id (and does so via a query) and emit retracts for them. You will be able to write data functions of similar power. Sorry for the confusion, more and better docs are coming.
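Roughly, the expansion idea could be sketched like this in toy Python (illustrative names only, not the actual API):

  def increment(db, entity, attribute, amount):
      # the current value of the db, as seen within the transaction
      old = db[(entity, attribute)]
      return [
          (":db/retract", entity, attribute, old),
          (":db/add",     entity, attribute, old + amount),
      ]

  # The transactor expands the call against the db value it holds, so the
  # new value is exactly old + amount, applied atomically with the retract.
  db = {("counter", ":hits"): 7}   # stand-in for the in-transaction db value
  print(increment(db, "counter", ":hits", 1))
  # [(':db/retract', 'counter', ':hits', 7), (':db/add', 'counter', ':hits', 8)]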


From what I remember, compare-and-swap semantics are in place for that kind of case.

If that were not the case, you could still model such an order-dependent update as the fact that the counter has seen one more hit (sketched below). Let the final query reduce that to the final count, let the local cache implementation optimize that cost away for all but the first query, and then incrementally optimize further queries as they see an increased count.

That said, I'm pretty sure I've seen the simpler CAS semantics support. (The CAS-successful update, if CAS is really supported, is still implemented as an "upsert", which means old counter values remain accessible if you query the past of the DB.)
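A quick Python sketch of the hit-counting approach mentioned above: record each hit as its own fact and reduce to a count at query time:

  hits = [
      ("counter", ":hit", 1, 27),
      ("counter", ":hit", 1, 31),
      ("counter", ":hit", 1, 43),
  ]

  def count_as_of(facts, tx):
      return sum(v for (_, a, v, t) in facts if a == ":hit" and t <= tx)

  print(count_as_of(hits, 43))  # 3
  print(count_as_of(hits, 30))  # 1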


Forget my last paragraph. Anyways, richhickey answered. :)


> However, since [the transactor's] only job is doing the transactions

Huh? How is that consistent with:

> access the data storage through a new distributed component called a transactor.

If "doing the transactions" consists of more than passing out incrementing transaction tokens, won't the transactor be a bottleneck?


Yeah, it looks like I got that part wrong. (I intentionally skimmed over the transactor, because I was avoiding "how" issues and because my understanding of it wasn't that clear.)

The transactor is involved in just writes, not reads. (So that helps.) It's not distributed, and cannot be distributed in this system, because it ensures consistency, so yes, it is potentially a bottleneck. In blog comments[1], Rich Hickey states:

"Writes can’t be distributed, and that is one of the many tradeoffs that preclude the possibility of any universal data solution. The idea is that, by stripping out all the other work normally done by the server (queries, reads, locking, disk sync), many workloads will be supported by this configuration. We don’t target the highest write volumes, as those workloads require different tradeoffs."

Presumably, 1) the creators of Datomic think that performance can be good enough to be useful, and 2) this is a new model that will probably require testing to prove it is practical.

[1] Multiple people have linked to it, but for convenience: http://blog.fogus.me/2012/03/05/datomic/comment-page-1/#comm...


Isn't this the same compromise we would already have had to make if we just used postgres?


It's actually slightly better than a SQL database. If your master SQL database gets fried, there's a chance you could lose some data. Datomic's transactor only handles atomicity, not writes, so if the transactor dies, nothing written to the database will be lost.


Yeah, I had a similar concern but it's just a confusion from thinking that every peer has to be equally powerful (i.e. all weak) and has to run on the end-user's computer.

If you performed this type of calculation before with a traditional database, you had to have a computer powerful enough to perform it. In this model, you would still have that computer; it's just now a "peer".

If millions of people want the same piece of data that requires a huge calculation to get, then you would set up one powerful machine of your own just to do this calculation and then write the result to the database, so the many "thin" peers can just read the result.


These are my favorites:

Rich Hickey: Keynote -- Unfortunately not another philosophy-meets-programming talk, but interesting nonetheless. Rich talks about possible future features of Clojure: different build profiles (like a lean version for Android), using logic programming to add language features like program analysis or predicate dispatch, an extensible reader, and multiple other topics. It's always interesting to hear Rich talk because he shows such a clarity of thought about language design issues.

Kevin Lynagh: Extending Javascript Libraries from ClojureScript -- I thought this was the most immediately likeable talk. Other talks had deeper technical content but Kevin is an engaging speaker with an interesting topic.

Daniel Solano Gómez: Clojure and Android -- Covers the speaker's original work to get Clojure to run and run performantly on Android and the Dalvik VM.

Mark McGranaghan: Logs as Data -- Mark works at Heroku managing the system to log and analyze the huge number of events occurring on that platform. He makes a compelling case that using data structures (like the ones in Clojure) is a much more powerful way to store log data than the traditional text file format.

Neal Ford: Master Plan for Clojure Enterprise Mindshare Domination -- Neal works for a company that surveys new technologies and makes technology recommendations to enterprise companies. That puts him in a very good position to give this clear talk on how languages/technologies become popular in the enterprise and how to advance that process.

Daniel Spiewak: Extreme Cleverness: Functional Data Structures in Scala -- The least boring talk on data structures you'll ever hear. This guy really does have energy (to the point that I found his pacing distracting, but his speaking made up for it). The title says Scala, but these are largely the same data structures and concepts used in Clojure. Most Clojurists probably know about persistent data structures and structural sharing by now, but if you don't, this is a good intro.

Michael Fogus: The Macronomicon -- A very clear discussion of macros. Most of what I've seen and read about macros has been very theoretical, but here they're presented as just another interesting tool for a practicing programmer. This talk isn't really about how, but more why and where to use macros.

Ambrose Bonnaire-Sergeant: Introduction to Logic Programming with Clojure -- An accessible introduction to logic programming and some interesting examples in 40 minutes.

Almost all the other talks are good too. If you see any talk on a topic that interests you, it's probably worth checking out.

