1. We establish causal consistency using vector clocks to order operations, rather than timestamps synchronized via the Network Time Protocol (NTP), which are unreliable over WAN distances.
2. We replicate database state changes between Points of Presence (PoPs) in the cluster using asynchronous streams that use a pull model rather than push, which maintains message ordering over the network. This has the added benefit of letting us know exactly how current each PoP is with changes.
3. We don't use quorums between PoPs to establish consistency - there's a white paper on our site that explains the how and why of coordination-free replication. The gist: we have developed a generalized operational CRDT model that gives us associative, commutative, idempotent convergence of changes between PoPs in the cluster without needing a quorum.
4. The DB is multi-master - you can read and change data at any PoP. It's also multi-model, letting you query your data through multiple interfaces such as key/value, document (JSON), graph, etc.
5. The DB automatically generates GraphQL and REST APIs from your schema, taking away the complexity and effort of a lot of boilerplate backend development.
6. The DB is available as a managed service in 25 PoPs today - you can request an account and we will give you one. We will be generally available with a free tier in April, at which point you can sign up online and self-administer your cluster.
7. You can access the DB via a CLI, a GUI, or code that uses REST, GraphQL, or native language drivers - JavaScript and Python today (we are working on other languages, with a view to releasing them over the next few months).
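Point 1 above can be sketched in a few lines. This is an illustrative vector-clock implementation (not Macrometa's internals or API): each PoP keeps a counter per node, and comparing clocks tells you whether one operation causally preceded another or the two were concurrent - no wall-clock time or NTP involved.

```python
# Illustrative vector clocks for causal ordering between PoPs.

def increment(clock, node):
    """Return a copy of `clock` with `node`'s own counter bumped (a local write)."""
    c = dict(clock)
    c[node] = c.get(node, 0) + 1
    return c

def merge(a, b):
    """Element-wise max: the clock a node holds after receiving a remote update."""
    return {n: max(a.get(n, 0), b.get(n, 0)) for n in a.keys() | b.keys()}

def happened_before(a, b):
    """True if `a` causally precedes `b`: all counters <=, at least one strictly <."""
    nodes = a.keys() | b.keys()
    return (all(a.get(n, 0) <= b.get(n, 0) for n in nodes)
            and any(a.get(n, 0) < b.get(n, 0) for n in nodes))

# Two PoPs write independently:
pop_a = increment({}, "pop-a")   # {"pop-a": 1}
pop_b = increment({}, "pop-b")   # {"pop-b": 1}
# Neither clock dominates the other -> the writes are concurrent:
assert not happened_before(pop_a, pop_b) and not happened_before(pop_b, pop_a)
# After pop-b receives pop-a's update, its next write causally follows both:
pop_b2 = increment(merge(pop_b, pop_a), "pop-b")
assert happened_before(pop_a, pop_b2)
```

Concurrent writes (the first assertion) are exactly the cases the CRDT merge in point 3 has to resolve.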
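The associative/commutative/idempotent convergence claimed in point 3 is easiest to see on the textbook state-based CRDT, a grow-only counter (G-Counter). This is a generic sketch, not Macrometa's actual CRDT model: each PoP increments only its own slot, states are gossiped in any order, and merges still converge without any quorum coordination.

```python
# G-Counter: the simplest state-based CRDT, merged by element-wise max.

def merge(a, b):
    """Join two G-Counter states."""
    return {p: max(a.get(p, 0), b.get(p, 0)) for p in a.keys() | b.keys()}

def value(state):
    """The counter's value is the sum of every PoP's slot."""
    return sum(state.values())

# Three PoPs increment independently, then gossip their states around.
a = {"pop-a": 3}
b = {"pop-b": 5}
c = {"pop-c": 2}

# Commutative: merge order between peers doesn't matter.
assert merge(a, b) == merge(b, a)
# Associative: grouping of merges doesn't matter.
assert merge(merge(a, b), c) == merge(a, merge(b, c))
# Idempotent: re-delivering the same update changes nothing.
assert merge(merge(a, b), b) == merge(a, b)
# So every PoP converges to the same value regardless of delivery order: 3 + 5 + 2.
assert value(merge(merge(a, b), c)) == 10
```

Because merges have these three properties, updates can arrive in any order, any number of times, over any topology, and all PoPs still agree - which is what makes the replication coordination-free.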
The REST interface is good, but could we add business validation before mutating data? The bulk of what a backend does is this kind of business logic. How could we do this?
The short answer is yes. Macrometa integrates a functions-as-a-service (FaaS) capability which can be hooked into the database and triggered by events on a stream or a data collection.
So you can for example do the following:
Expose a REST or GraphQL API (including deeply nested queries in GraphQL) for one or more collections. When mutating, attach a validation function to the collection as a trigger that is called before the mutation is applied to the DB. You can also have a trigger that calls a function after the mutation completes.
One can also do this on streams, with functions triggered by messages on a specific topic.
Lastly - there is full support for running containers as well, and you can use the endpoints exposed by a container as a trigger.
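The pre-mutation validation pattern described above can be sketched like this. Everything here is hypothetical - `Rejected`, `validate_order`, and `apply_mutation` are stand-ins for illustration, not Macrometa's actual FaaS or driver API: a function registered as a pre-mutation trigger inspects the incoming document and either lets the write through or aborts it.

```python
# Hypothetical sketch: business validation as a pre-mutation trigger.

class Rejected(Exception):
    """Raised by a trigger to abort the mutation before it reaches the DB."""

def validate_order(doc):
    """Business validation run before an 'orders' mutation is applied."""
    if doc.get("quantity", 0) <= 0:
        raise Rejected("quantity must be positive")
    if not doc.get("customer_id"):
        raise Rejected("customer_id is required")
    return doc  # mutation proceeds unchanged

def apply_mutation(collection, doc, pre_triggers=()):
    """Tiny stand-in for the DB's write path: run pre-triggers, then write."""
    for trigger in pre_triggers:
        doc = trigger(doc)          # any Rejected aborts the write
    collection.append(doc)
    return doc

orders = []
apply_mutation(orders, {"customer_id": "c1", "quantity": 2},
               pre_triggers=[validate_order])
try:
    apply_mutation(orders, {"quantity": -1}, pre_triggers=[validate_order])
except Rejected as e:
    print("rejected:", e)           # the bad write never lands
assert len(orders) == 1
```

A post-mutation trigger would be the mirror image: the same hook point, but called after the write commits (e.g. to fan out notifications) rather than before it.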
Oh, and one more thing - the DB is real-time. It will notify clients of updates to collections automatically (like Firebase).
We share some feature overlap with DynamoDB (key/value and document interfaces). Where we differentiate: global replication across all 25 of our global PoPs (50 by end of 2019); an integrated GraphQL generator (REST as well); real-time - the DB notifies clients of changes to data, i.e. no need to poll; tight integration with streams and pub/sub; functions and containers as triggers or stored procedures on the DB; geo queries by lat/long/height; and Elasticsearch integration (July 2019). There's more - we'll announce in April.
That's quite a harsh comment without anything substantial behind it. It's really unfair to the parent commenter, who attempted to help the HN reader crowd with a summary.