The broad set of computer science problems faced at cloud database companies (davidgomes.com)
240 points by davidgomes on Aug 21, 2023 | 60 comments



I've been working hard to upskill on the consistency and distributed systems sides of things. General recommendations:

- Designing Data Intensive Applications. Great overview of... basically everything, and every chapter has dozens of references. Can't recommend it enough.

- Read papers. I've had lots of a-ha moments going to wikipedia and looking up the oldest paper on a topic (wtf was in the water in Massachusetts in the 70s..). Yes they're challenging, no they're not impossible if you have a compsci undergrad equivalent level of knowledge.

- Try and build toy systems. Built out some small and trivial implementations of CRDTs here https://lewiscampbell.tech/sync.html, mainly by reading the papers. They're subtle but they're not rocket science - mere mortals can do this if they apply themselves!

- Follow cool people in the field. Tigerbeetle stands out to me despite sitting at the opposite end of the consistency/availability corner where I've made my nest. They really are poring over applied dist sys papers and implementing them. I joke that Joran is a dangerous man to listen to because his talks can send you down rabbit-holes and you begin to think maybe he isn't insane for writing his own storage layer..

- Did I mention read papers? Seriously, the research of the smartest people on planet earth is on the internet, available for your consumption, for free. Take a moment to reflect on how incredible that is. Anyone anywhere on planet earth can git gud if they apply themselves.
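To make the "toy systems" point concrete, here's roughly what the smallest state-based CRDT looks like - a grow-only counter in Python (a generic sketch, not taken from the linked page):

```python
# Toy G-Counter CRDT: each node only increments its own slot, and merge
# takes a pointwise max. Merge is commutative, associative, and idempotent,
# so replicas converge no matter the order (or duplication) of deliveries.
class GCounter:
    def __init__(self, node_id):
        self.node_id = node_id
        self.counts = {}  # node_id -> count

    def increment(self, n=1):
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Pointwise max over all known nodes.
        for node, count in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), count)
```

Merging the same state twice changes nothing, which is exactly why this survives unreliable delivery.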


>- Did I mention read papers? Seriously, the research of the smartest people on planet earth is on the internet, available for your consumption, for free. Take a moment to reflect on how incredible that is. Anyone anywhere on planet earth can git gud if they apply themselves.

There is a flood of papers out there with unrepeatable processes. Where can you find quality papers to read?


This is a great point. It's true that there's a wealth of good information out there. But there's so much bad information that we now struggle with a signal vs noise problem. If you don't have enough context and knowledge yet to make the distinction, it's very easy to go down a wild goose chase. Having access to an expert in the field who can mentor and direct you is invaluable.


If a CS paper is unrepeatable, all of this has been a freaking waste of everyone's time.


Idempotency, testing, readability, and longevity are all too often ignored in CS and tech in the name of speed and simplicity.


Can you elaborate on idempotency in the context of CS papers?

I'm familiar with the concept wrt mathematics (in particular in the context of ultrafilters, as my favorite professor would say it), but I don't see the necessity in most CS research.


Idempotency (as I understand it, what maths people might call an 'idempotent function') is a very core idea in distributed systems - networks are unreliable, stuff may get lost, or you might not get an acknowledgment back, so the ability to send the same thing once or a million times and end up with the same state is useful.

Or have I completely missed the point of your question..?


It is also super useful in ye olde batch process / etl process. Designing an ingest-analyze-report process to checkpoint its work and recover gracefully even when started at an unexpected time or place means you can retry safely rather than have to manually clear out the detritus of a partial job run.
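As a sketch of that idea (the names here are illustrative, not from any real API): keying each operation on a client-supplied request id turns a retry into a no-op, so a partially failed run can simply be re-run.

```python
# Making a non-idempotent operation (a deposit) idempotent by remembering
# which request ids have already been applied. Delivering the same request
# twice leaves the account in the same state.
class Account:
    def __init__(self):
        self.balance = 0
        self.applied = set()  # request ids already processed

    def deposit(self, request_id, amount):
        if request_id in self.applied:
            return self.balance  # duplicate delivery: no-op
        self.applied.add(request_id)
        self.balance += amount
        return self.balance
```

Real systems persist the applied-id set (or a checkpoint) durably, but the retry-safety property is the same.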


I think autotune was saying that software often does shortcut hacks violating some of those principles as an optimization. This can be a topic for research into the tradeoffs (e.g., CAP theorem), but may be more common in non-research-based implementations (e.g., NoSQL databases because ACID is slow/constraining/etc.)


it is very rare that algorithms from theoretical comp sci papers actually get implemented and executed.


Can you explain what you mean by an un-repeatable process? This isn't a physical science, you don't need your own reactor or chemical lab or anything to repeat what they've done.


"But, you are not Google" - https://medium.com/humans-of-devops/you-dont-need-sre-what-y.... Just an example of what I'm talking about. Practices that work at one org may not easily transfer over to another, be that taking on kubernetes vs standard ec2 instances, on-prem vs cloud, etc. etc. Maybe un-repeatable isn't the best word so much as non-portable?


Ah I think I see what you're getting at.

Most of the stuff I read isn't "your SaaS app should have atomic clocks" and more "here's some maths on why it works, here's some explanation of what we were going for, here's some pseudocode".


Designing Data-Intensive Applications is frequently recommended, but is 6 years old now. I know that doesn’t sound like a lot, but the state of distributed compute is a lot different than it was 6 years ago. Do you feel like it holds up well?

A year or two ago I read Ralph Kimball’s seminal The Data Warehouse Toolkit. While I could see why it’s still often recommended, it was showing its age in many ways (though it's a fair bit older than DDIA). It felt like a mix of best practices and dated advice, but it was hard to tell for certain what was what.


A lot of what DDIA covers is pretty fundamental stuff. I expect it will age fairly well.

It's not really a book about 'best practices', despite the name. It's more like an encyclopaedia, covering every approach out there, putting them in context, linking to copious reference papers, and talking about their properties on a very conceptual and practical level. It's not really like 'hey use this database vendor!!!'.


Concur. In large part, the book feels like it's making the forest of fundamental research papers, published across decades, accessible by putting them into context, ordering them, and "dumbing them down" for us mere mortals.

Most of that research is decades old. I specifically remember Lamport timestamps. Not only has that held up, it's unlikely to go anywhere anytime soon. Most topics covered are as fundamental as the two generals problem; almost philosophical.

No database vendor can solve the issues around two concurrently existing write masters. Sync will be necessary, conflicts will occur. A concrete vendor could only hope to make that less painful (CRDTs for automatic conflict resolution, ...). That's kind of the level that book operates at.
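For instance, the Lamport timestamps mentioned above fit in a dozen lines - a toy Python sketch of the 1978 idea (an ordering consistent with causality, not wall-clock time):

```python
# Toy Lamport clock: bump on every local event, and on receive take
# max(local, received) + 1. The resulting timestamps respect causality:
# if event A happens-before event B, then ts(A) < ts(B).
class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1
        return self.time

    def send(self):
        self.time += 1
        return self.time  # timestamp attached to the outgoing message

    def receive(self, msg_time):
        self.time = max(self.time, msg_time) + 1
        return self.time
```

The converse doesn't hold (ts(A) < ts(B) doesn't imply causality), which is what version vectors later fixed - and both are covered in DDIA.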


DDIA partly goes into implementation details (for example, transaction implementations in popular databases), but the reasoning behind it is always rather fundamental. I think it's going to age well unless some breakthrough happens – but even then only that particular section would be affected.


> but is 6 years old now

Hmm. The techniques used today were invented in the 1970s-80s.


I was gifted Designing Data-Intensive Applications in high school and it changed the course of my CS education. Beautiful, beautiful book.


Thanks for the story, I gifted a copy to an intern at work this year and I hope they enjoy it.


Other than DDIA do you have a heuristic for finding the older papers? I often look for the seminal works but it’s hard when you don’t know who the key people are in a new field. Do you just look for highest number of citations on google scholar or something else?

For example in the world of engines, Heywood is basically god: (https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=John...) has 29k citations!


Yeah just go to wikipedia and find out who coined the term :) I love the original version vector paper, it's from 1983 and kind of funny in parts:

"Network partitioning can completely destroy mutual consistency in the worst case, and this fact has led to a certain amount of restrictiveness, vagueness, and even nervousness in past discussions, of how it may be handled"

https://pages.cs.wisc.edu/~remzi/Classes/739/Fall2015/Papers...

But as a general starting point, all roads seem to lead to Lamport 78 (Time, Clocks). If you have a specific area of interest I or others might be able to point you in the right direction.
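The core of the version vector idea from that paper is just a componentwise comparison - a toy Python sketch (vectors modeled as dicts of per-replica counters):

```python
# Compare two version vectors. If one dominates componentwise, the writes
# are causally ordered; if neither dominates, they are concurrent - the
# post-partition conflict the 1983 paper is nervous about.
def compare(a, b):
    keys = set(a) | set(b)
    a_le_b = all(a.get(k, 0) <= b.get(k, 0) for k in keys)
    b_le_a = all(b.get(k, 0) <= a.get(k, 0) for k in keys)
    if a_le_b and b_le_a:
        return "equal"
    if a_le_b:
        return "before"      # a happened-before b
    if b_le_a:
        return "after"       # b happened-before a
    return "concurrent"      # neither dominates: a real conflict
```

The "concurrent" case is where CRDTs or application-level merge logic take over.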


> - Follow cool people in the field.

Any advice how to approach this?


A few cool accounts I follow:

https://twitter.com/DominikTornow

https://twitter.com/jorandirkgreef

https://twitter.com/JungleSilicon

You can also follow me. Not saying I'm cool but I do re-tweet cool people:

https://twitter.com/LewisCTech


Any paper recommendations?



When I read that Google installed their own atomic clocks in each datacenter for Spanner, I knew they were doing some real computer science (and probably general relativity?) work: https://www.theverge.com/2012/11/26/3692392/google-spanner-a...


We had hardware for time sync at Google 15 years ago; that's not a new thing. Actually, we had time sync hardware (via GPS) in the early Twitter datacenters as well, until it became clear that was impossible to support without getting roof access. =)


Accurate clocks in data centers predates Google. Telecom needed accurate clocks for their time sliced fiber infrastructure. Same for cellular infrastructure.


Yeah, synchronizing clocks across distributed systems is really hard without expensive hardware.


...but for distributed databases specifically, you can use a different algorithm like Calvin[0] (which Fauna[1] implements a variation of) that does not require external atomic clocks… but the CS point and the wealth of info in research papers (on distributed systems) are solid

...but there is a lot of noise in those software papers, too - you are often disappointed by the fine print, so it helps to have good curators/thought-leaders [2] - we all should share names ;)

enjoying the discussion though - very timely if you ask me.

-L, author of [1] below.

[0] - The original Calvin paper - https://cs.yale.edu/homes/thomson/publications/calvin-sigmod...

[1] - How Fauna implements a variation of Calvin - https://fauna.com/blog/inside-faunas-distributed-transaction...

[2] - A great article about Calvin by Mohammad Roohitavaf - https://www.mydistributed.systems/2020/08/calvin.html?m=1#:~....


I hear this from tech people, but HFT people are happily humming along with highly synchronized clocks (MiFID II requires clocks to be synchronized to 100µs). I wouldn't say it's "easy" but apparently if you need it then you do it and it's not that bad.


> (MiFID II requires clocks to be synchronized to 100µs)

That only applies if the clocks are within 1ms of each other, so around 100 miles (or equivalently: within a single cloud region), and only came into force in 2014.

The bound that Spanner-likes keep is ~3ms for datacenters across continents, and that was in 2012.
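The way Spanner-likes use such a bound is "commit wait": pick a timestamp at the top of the clock's uncertainty interval and wait the interval out before making the write visible, so the timestamp is definitely in the past on every node. A toy sketch (function names and the ε parameter are illustrative, not Spanner's actual API):

```python
import time

# Commit-wait sketch: given a clock with known error bound epsilon_s,
# choose the latest possible "true" time as the commit timestamp, then
# wait until even the earliest possible clock reading has passed it.
def commit_with_wait(now_fn, epsilon_s):
    ts = now_fn() + epsilon_s          # top of the uncertainty interval
    while now_fn() - epsilon_s < ts:   # wait until ts is definitely past
        time.sleep(epsilon_s / 10)
    return ts
```

The wait is roughly 2ε, which is why shrinking ε with GPS/atomic-clock hardware directly lowers commit latency.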


I wonder if you have to account for localized gravity differences for that kind of thing.


ChatGPT tells me that the time difference between the top of Mount Everest and sea level is in the nanosecond range - so completely dwarfed by network latency, and maybe doesn't matter?

Pure speculation on my behalf though.


If you were to supply that answer in a group concerned with time and gravity, they might ask you if that was the right question to ask wrt the "greatest relativistic time separation between two points on 'the earth's surface'".

ChatGPT likely won't help - but you could look into the fact that the earth isn't round, being an oblate spheroid with 20km less radius at the poles than the equator.

Of course the fact that the ideal WGS84 ellipsoid, the official mean global sea level, and the geoid (surface of gravitational equipotential) don't all align must surely come into play here - and that bloody great "gravitational hole" somewhere south of Ceylon.

https://www.e-education.psu.edu/geog862/node/1820

https://en.wikipedia.org/wiki/Gravity_of_Earth

https://en.wikipedia.org/wiki/World_Geodetic_System


Yeap I am way out of my depth here :)


No drama, the greatest difference seen on the walkable|sailable Earth is less than a tenth of sweet bugger all (as you noticed).

I just felt like pointing out a rabbit hole that some might like to dive into!


I’ve built pretty much my entire career around this problem and it still feels evergreen. If you want a meaningful and rich career as a software engineer, distributed storage is a great area to focus on.


I'm a principal engineer in a FAANG-adjacent ecomm domain. I've been doing this for 11 years and I'm absolutely sick of it. There are no novel problems, just quarterly goals that compete with and dominate engineering timelines.

how did you get started, and what would you recommend for pivoting into this space?


Could I send an email? I'm sunsetting my CRUD career and going down this route, would appreciate some perspective.


I was hoping that the blog post would actually spell out examples of problems. Is it just me or have there been a lot of shorter blog posts on HN lately that are really no more than an introduction section rather than an actual full article?


If you are interested in the performance aspects of databases, I would recommend watching this great talk [0] from Alexey, a ClickHouse developer, where he talks about various aspects like designing systems by first realising the hardware capabilities and understanding the problem landscape.

[0]: https://www.youtube.com/watch?v=ZOZQCQEtrz8


I worked at a specialty database software vendor for almost 4 years, albeit I worked on ML connectors. I recall some of the hardest challenges as figuring out each cloud vendor's poorly documented and rapidly changing/breaking marketplace launch mechanisms (usually built atop k8s using their own flavor: EKS, AKS, GKE, etc.).


There are so many interesting problems to solve. I just wish there were libraries or solutions available that solved a lot of them for the least cost, so that I could build on some good foundations.

RocksDB is an example of that.

I am playing around with SIMD, multithreaded queues and barriers. (Not on the same problem)

I haven't read the DDIA book.

I used Michael Nielsen's consistent hashing code for distributing SQL database rows between shards.
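For anyone curious, the core of consistent hashing is small - a toy ring with virtual nodes (this is a generic sketch, not Nielsen's code):

```python
import bisect
import hashlib

def _h(s):
    # Stable hash onto a large integer ring.
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

# Each shard gets many points ("virtual nodes") on the ring; a key is
# owned by the first shard point clockwise from the key's hash. Adding
# or removing a shard only remaps the keys that lived on its points.
class Ring:
    def __init__(self, shards, vnodes=64):
        self.ring = sorted((_h(f"{s}:{v}"), s)
                           for s in shards for v in range(vnodes))
        self.points = [p for p, _ in self.ring]

    def shard_for(self, key):
        i = bisect.bisect(self.points, _h(key)) % len(self.ring)
        return self.ring[i][1]
```

The virtual nodes smooth out the load: with only one point per shard, the ring segments can be wildly uneven.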

I have an eventually consistent protocol that is not linearizable.

I am currently investigating how to schedule system events, such as TCP ready for reading (EPOLLIN) or ready for writing (EPOLLOUT), efficiently, rather than data events.

I want super flexible scheduling styles of control flow. I'm looking at barriers right now.

I am thinking how to respond to events with low latency and across threads.

I'm playing with some coroutines in assembly by Marce Coll and looking at algebraic effects.
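A minimal illustration of readiness-based (EPOLLIN-style) event handling, using Python's stdlib selectors module, which sits on top of epoll on Linux:

```python
import selectors
import socket

# Block until a socket is ready for reading (the EPOLLIN condition),
# or the timeout expires. Returns True if the socket became readable.
def wait_readable(sock, timeout=1.0):
    sel = selectors.DefaultSelector()   # epoll on Linux, kqueue on BSD, ...
    sel.register(sock, selectors.EVENT_READ)
    events = sel.select(timeout)
    sel.close()
    return bool(events)
```

Real event loops keep one long-lived selector and dispatch callbacks per ready file descriptor, but the readiness-vs-data distinction is the same.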


From the article:

> Another example is figuring out the right tradeoffs between using local SSD disks and block-storage services (AWS EBS and others).

Local disks on AWS are not appropriate for long-term storage, because when an instance reboots the data will be lost. AWS also doesn't offer huge amounts of local storage.


Unless you manage the replication across different local disks yourself


Yeah, that is part of the trade-off. Using an ephemeral SSD (for a database) means the database needs to have another means of making the data durable (replication, storing data in S3, etc.).

There are AWS instance types (i3en) with large and very fast SSDs (many times higher IOPS than EBS).


I'm kind of confused by companies like the one in the post. What is the selling point of these hosted DB companies running in AWS, when AWS and the rest of the providers themselves provide pretty good, probably much better, DB services? Is there that much money to be made running DBs on EC2 compared to the existing offerings they have?

Amazon, Google, MS - these companies print money and have built up massive engineering cultures to run reliable storage. I just don't see what the value is in trusting data with some VC-funded group over proven engineering work.

I worked on one of these in-house storage systems; all we did was look at how the cloud providers did things already for inspiration. Might as well just use those. IDK, maybe someone can convince me of the value?


Because they provide solutions AWS does not? SingleStore is hybrid OLAP/OLTP; AWS does not have one of those. Neither does it have a proper horizontally scalable NewSQL DBMS like CockroachDB. Snowflake is way better than what you can get from AWS for OLAP. And so on.


Sometimes the value is not in having a zillion configuration options with these providers but, instead, having a more accessible service that doesn't require a PhD.

And some of the people in those VC-funded groups were alumni of those providers too. :)

-L


The risk is not that AWS offers a better service now (many of these companies are actually offering something better). But that they copy your idea or service and do it later.

I'm given to understand Snowflake runs its own cloud platform, at least in part.


> For instance, blob storages such as S3 have enabled cloud database providers to offer flexible, unlimited storage (SingleStoreDB even coined the term “bottomless storage” for this).

Can someone please elaborate on that? What does it mean in conjunction with S3 and a DB? I know how traditional DBs work (PostgreSQL and MySQL). I know how S3 works (open-source implementations like MinIO). But S3 is not a random-access file on block storage, which is a prerequisite for PostgreSQL and MySQL. How is that solved for S3-based DBs? Can someone point me to the docs, or even better, an open-source implementation?


It's a popular design for SQL data warehouses. I think almost all of them (Snowflake, Redshift, etc.) store cold data in S3 and hot data on local disk[1][2].

It works well if the data is stored as immutable files (e.g., a log-structured merge tree) or is not indexed at all (classical columnstores). S3 doesn't provide an efficient way to update a file.

[1] https://dl.acm.org/doi/10.1145/2882903.2903741 (Snowflake SIGMOD paper)

[2] https://dl.acm.org/doi/10.1145/3514221.3526055 (SingleStore SIGMOD paper)


ClickHouse has two modes of operation on S3:

1. S3 as the main storage with a write-through cache.

2. S3 as a cold tier in tiered storage.

It works well because the data is organized as a set of immutable parts (the MergeTree structure). These data parts are atomically created, merged, and deleted, but never modified.

S3 does not work well with random access... but it isn't needed either.


Both BigTable and Spanner use Colossus for storage, and those are for OLTP and random access.


One ends up with a 3-tier access hierarchy for accessing a given page:

Present in buffer pool? -> Present on local disk? -> Retrieve from S3/Azure/GCP.

The challenge becomes optimizing this -- speculatively pulling pages in, background evictions, etc.

Garbage collecting old pages also turns out to be complicated. Doing a full trace for expired versions in secondary storage on disk is slow but conceivable. Doing it across petabytes in the cloud, with all the problematic latencies and reliability issues that come with network access... limits the approaches you can take.

They are not new problems -- DBMS development has always been about juggling the trade-offs in performance of different levels in the memory hierarchy. But it permits higher scale.
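The lookup chain described above can be sketched in a few lines (all names illustrative; real systems add eviction, prefetching, and the write path):

```python
# Toy 3-tier page lookup: buffer pool (RAM) -> local disk cache (SSD) ->
# object store (S3-like). A miss at one tier falls through to the next,
# and the fetched page is promoted back up the hierarchy.
class TieredPages:
    def __init__(self, object_store):
        self.buffer_pool = {}             # page_id -> bytes, in RAM
        self.local_disk = {}              # page_id -> bytes, SSD modeled as a dict
        self.object_store = object_store  # page_id -> bytes, slow remote tier

    def read(self, page_id):
        if page_id in self.buffer_pool:
            return self.buffer_pool[page_id]       # fastest tier hit
        if page_id in self.local_disk:
            page = self.local_disk[page_id]        # SSD hit
        else:
            page = self.object_store[page_id]      # remote fetch (the slow path)
            self.local_disk[page_id] = page        # cache on disk
        self.buffer_pool[page_id] = page           # promote to RAM
        return page
```

The optimization problem the parent mentions is deciding what to promote, evict, or fetch speculatively at each tier, since the cost ratios between tiers span orders of magnitude.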


S3 is a key-value store with random access and various pricing / limits. If you can make the disk IO part of your DB map to S3 API there is nothing stopping you other than network latency, and ensuring the way you use S3 is cost effective.


You usually have to redesign the storage from the ground up around S3. Some databases do transparent data tiering with S3 which can work ok too depending on the use case.


Very similar to the problems traditionally faced by engineers of purpose-built storage systems one to two decades ago. There's a mix of those engineers and newer folks from academia or cloud in general leading the solutions for the cloud.



