Most graph databases are used as secondary systems, often as indexes over data held in other databases. We believe this is due to a lack of features or performance at scale.

Our goal at Dgraph is to be used as the main data store, similar to how people use Postgres or Mongo.

Happy to ask Manish (the author of the post and CEO of Dgraph) for more details!




Congrats on the funding! I've been looking into Dgraph (and also played a bit with Badger), as well as other graph databases, as a way to store chronographic event data while enabling rich relationships between the observed artifacts belonging to each event. The problem is that the solutions I find seem like an ugly hack compared to a relational solution. Can you point me to any specific Dgraph documentation or case studies for these kinds of workloads?


Let's chat! We're working on improving our docs specifically regarding the data modeling aspects.

If your use case is open source we could even use it as one of our case studies :)

francesc@dgraph.io


By chronological event data, you mean time-series data? Dgraph can be used for storing that, though it's not specifically designed to store data that "flat".

Conceptually it should work. For smaller datasets (gigabytes or so) it should be alright, but for bigger datasets (terabytes) I think a dedicated time-series DB would make more sense.

However, you could take the aggregations from there and store those, along with the relationships, in Dgraph. That'd be a perfect fit.
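To make that concrete, here's a rough sketch of how an event linked to its artifacts could be modeled using the dgo Go client. The predicate names (timestamp, observed, name) and the schema are just made up for illustration, not an official Dgraph example:

    package main

    import (
        "context"
        "encoding/json"
        "log"

        "github.com/dgraph-io/dgo"
        "github.com/dgraph-io/dgo/protos/api"
        "google.golang.org/grpc"
    )

    func main() {
        conn, err := grpc.Dial("localhost:9080", grpc.WithInsecure())
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        dg := dgo.NewDgraphClient(api.NewDgraphClient(conn))
        ctx := context.Background()

        // Index event timestamps so time-range queries stay cheap.
        // Predicate names are illustrative only.
        op := &api.Operation{Schema: `
            timestamp: datetime @index(hour) .
            observed:  uid @reverse .
            name:      string @index(exact) .
        `}
        if err := dg.Alter(ctx, op); err != nil {
            log.Fatal(err)
        }

        // One event linked to the artifacts observed in it.
        event := map[string]interface{}{
            "timestamp": "2019-07-29T12:30:00Z",
            "observed": []map[string]interface{}{
                {"name": "host-42"},
                {"name": "malware.bin"},
            },
        }
        pb, err := json.Marshal(event)
        if err != nil {
            log.Fatal(err)
        }

        txn := dg.NewTxn()
        defer txn.Discard(ctx)
        _, err = txn.Mutate(ctx, &api.Mutation{SetJson: pb, CommitNow: true})
        if err != nil {
            log.Fatal(err)
        }
    }

With the timestamp index you can run time-range queries over events and traverse the observed edges to the artifacts in the same query, while the raw time-series points (or their aggregations) live wherever makes sense for their volume.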


fyi - We built a product around a way to store chronographic data in a graph database, capable of handling any resource, relationship, property, and/or event detail you can throw at it. It's called IBM Agile Service Manager.

Take a look, feel free to reach out if you'd like to know more:

Update from our technical team on the latest version: https://www.linkedin.com/pulse/new-ibm-asm-v115-some-paralle...

Black Friday with IBM Agile Service Manager (video from our technical team): https://www.youtube.com/watch?v=lJGVAJU6qp8


Generally I think of graph DBs as being fast to read, slow to write - does Dgraph have this issue? This is actually why I don't think of graph DBs as the source-of-truth DB.


Dgraph's writes are actually very fast. The one-time bulk loader loads millions of records per second. With recent optimizations [1], the live loader can load 21M records in 5 mins without indices and 20 mins with tons of indices. Note that live data loads also do a WAL write, consensus, and disk syncs before every write call finishes, to ensure crash resilience.

[1]: https://github.com/dgraph-io/dgraph/commit/d697ca0898f0ac951...
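For a sense of what that write path looks like from the client side, here's a rough sketch (not the actual live loader, just a toy with the dgo client): batch up records as N-Quads and send mutations from a few goroutines; each Mutate only returns after the server has persisted the write (WAL, consensus, disk sync).

    package main

    import (
        "context"
        "fmt"
        "log"
        "sync"

        "github.com/dgraph-io/dgo"
        "github.com/dgraph-io/dgo/protos/api"
        "google.golang.org/grpc"
    )

    func main() {
        conn, err := grpc.Dial("localhost:9080", grpc.WithInsecure())
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        dg := dgo.NewDgraphClient(api.NewDgraphClient(conn))

        batches := make(chan []byte)
        var wg sync.WaitGroup

        // A few workers, each committing one batch of N-Quads per call.
        for i := 0; i < 8; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for nquads := range batches {
                    mu := &api.Mutation{SetNquads: nquads, CommitNow: true}
                    if _, err := dg.NewTxn().Mutate(context.Background(), mu); err != nil {
                        log.Println("mutation failed:", err)
                    }
                }
            }()
        }

        // Toy producer: a thousand trivial records.
        for i := 0; i < 1000; i++ {
            batches <- []byte(fmt.Sprintf(`_:r <name> "record-%d" .`, i))
        }
        close(batches)
        wg.Wait()
    }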



