Hacker News | kakakiki's favorites

Or you can cut the intermediate links and go straight to https://www.youtube.com/@hamelhusain7140/playlists



Surprised no one has mentioned another great and similar resource called Rustlings [0] (yes, very punny name). You are given files with TODO statements that you need to fix so the code compiles and passes all the tests. It's an interactive way to learn, which is what got me through learning Rust a few years ago.

[0] https://github.com/rust-lang/rustlings
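For readers unfamiliar with the format: a Rustlings exercise is an ordinary Rust file that ships broken (often a `todo!()` body or a deliberate compile error) plus a test you must make pass. A minimal sketch of what a solved exercise looks like — the exercise itself is hypothetical, not taken from the repo:

```rust
// Rustlings-style exercise (hypothetical): the stub ships with a
// `todo!()` body and a failing test; you replace the body so the
// file compiles and the test passes.
fn add_one(x: i32) -> i32 {
    // original stub: todo!("return x plus one")
    x + 1
}

fn main() {
    // the exercise's check, which the fixed code now passes
    assert_eq!(add_one(5), 6);
    println!("exercise passed");
}
```

The real exercises cover ownership, borrowing, traits, error handling, and so on, and the `rustlings` watcher re-runs the checks every time you save.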


If you find the official release notes a bit dry, I've made an interactive version: https://antonz.org/go-1-22

I don't get the "it's hard to measure throughput" line. I'm using RDS at work. At some point we had 20 TB of data, with daily 500 GB (batch) writes into indexed tables. Same order of magnitude of cost, sure. But the combination of the RDS instance monitor, Performance Insights, and the pgAdmin dashboard gives you: a visual query plan with optional profiling (pgAdmin); live tracking of SQL invocations, with invocations per second and average rows per invocation; sampling-based bottleneck analysis (disk reads, locks, CPU, throttling, network reads, sending data to the client, etc.); plus per-disk read/write throughput (MB/s), IOPS in use, network throughput, and so on. What I mostly felt was lacking was the ability to understand why PG was using so much CPU/disk throughput (e.g. on inserts into indexed tables), but the disk throughput the instance was under was always very visible.

The article also doesn't mention anything about using provisioned-IOPS instances, nor which architectures have the highest provisioned-IOPS ceiling.
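For what it's worth, metrics like invocations per second and average rows per invocation are just deltas over cumulative counters, the kind `pg_stat_statements` exposes. A minimal sketch of that derivation with made-up numbers — the `Snapshot` type and values here are illustrative, not any RDS or Postgres API:

```rust
// Illustrative only: derive rate metrics from two samples of
// cumulative counters (as pg_stat_statements reports them).
struct Snapshot {
    calls: u64,   // cumulative call count for a statement
    rows: u64,    // cumulative rows returned/affected
    at_secs: f64, // sample timestamp, in seconds
}

/// Returns (calls per second, average rows per call) between two samples.
fn rates(a: &Snapshot, b: &Snapshot) -> (f64, f64) {
    let dcalls = (b.calls - a.calls) as f64;
    let drows = (b.rows - a.rows) as f64;
    let dt = b.at_secs - a.at_secs;
    let calls_per_sec = dcalls / dt;
    let rows_per_call = if dcalls > 0.0 { drows / dcalls } else { 0.0 };
    (calls_per_sec, rows_per_call)
}

fn main() {
    let t0 = Snapshot { calls: 1_000, rows: 50_000, at_secs: 0.0 };
    let t1 = Snapshot { calls: 1_600, rows: 80_000, at_secs: 60.0 };
    let (cps, rpc) = rates(&t0, &t1);
    // 600 extra calls over 60 s → 10 calls/s; 30_000 rows / 600 calls → 50 rows/call
    assert_eq!(cps, 10.0);
    assert_eq!(rpc, 50.0);
    println!("{cps} calls/s, {rpc} rows/call");
}
```

This is essentially what the dashboards do for you continuously, with sampling on top for the bottleneck breakdown.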


At the risk of displaying my ignorance and lack of knowledge about this area, one part of this article I found very familiar is that the interactions in his apps didn't actually touch the blockchain, but essentially went through two centralized services.

My very limited understanding is that blockchains are distributed essentially by giving every node a full copy. That sounds awfully expensive in the long run. My intuition is that once running a node is expensive enough, the system is no longer truly decentralized: if I can't get the fundamental information out of a blockchain myself, on hardware I can afford, its actual properties don't matter anymore, because I can't access them myself.

The moment you need to rely on third parties, you lose any unique properties a blockchain might have. I don't know how this would work if blockchains are inherently inefficient enough that you always need a way around querying them directly. I find the idea of a distributed, trustless database interesting, but if it's so inefficient that I can't actually access it myself, that idea doesn't seem so interesting anymore.

