"If you don't find yourself in need to add back at least 20% of what you removed, you didn't remove enough". Elon's word. Sounds like an excellent engineering principle, with some caveats of course.
I talked to a McDonnell Aircraft structural engineer in the 80s. He said something like, "Old Man McDonnell told us to design for a 0.85 factor of safety on the F-101. If anything broke during testing, we'd redesign it."
I'm not sure if that was how it really worked out.
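For context (my own gloss; the only number from the story is the 0.85): factor of safety is the ratio of the load at which a part fails to the load it's designed to carry,

    FoS = F_fail / F_design

so a target of 0.85 means each part is expected to break at roughly 85% of its design load. That only makes sense as a deliberate strategy: build everything too light, test to failure, and reinforce only the parts that actually break.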
That's as much an "engineering principle" as his belief that an item's minimum character level matters more than its actual attributes in PoE2. He's just saying stuff his target audience thinks is smart, and whenever he's called on it he lashes out. He's got nothing.
I'm looking forward to Elon applying the same strategy to aviation. I wonder how many crashes it'll take for him to realize that this is a horrible engineering principle?
Totally agree. I wrote a few Play apps way back when and really enjoyed it. I was so excited about the future of the framework and how it would beat out Java for web apps, and steal folks away from the Rails ecosystem.
And then it just…stopped. Not sure what happened there, honestly.
I have nothing but fond memories of reading Beej's guides.
It's also this sort of work that's becoming less necessary with AI, for better or worse. This appears to be a crazy good guide, but I bet asking, e.g., Claude to teach you about git (specific concepts, or generating the whole guide outline and going wide on it) would be at least as good.
Seems more efficient to have one reference book rather than generating entire new 20-chapter books for every person.
I also think that if you are at the “don’t know what you don’t know” point of learning a topic, it’s very hard to direct an AI to generate comprehensive learning material.
> Seems more efficient to have one reference book rather than generating entire new 20-chapter books for every person.
The main advantage of LLMs is that you can ask specific questions about things that confuse you, which makes iterating toward a correct mental model much faster. It's like having your own personal tutor at your beck and call. Good guidebooks attempt to do this statically: they anticipate questions and confusions at the right points, and doing that well is a great skill. But it's still not the same as full interactivity.
I think a mix is the right approach. I’ve used LLMs to learn a variety of topics. I like having a good book to provide structure and a foundation to anchor my learning; then I use LLMs to explore the topics I need more help with.
When it’s just a book, I find myself having questions like you mentioned. When it’s just LLMs, I feel like I don’t have any structure for my mind to hold on to.
I also feel like there is an art to picking the right order in which to approach a topic, and authors are better at that than LLMs.
A good book by an expert is still better than LLMs at providing high-level priorities, a roadmap for new territory, and an introduction to the way practitioners think about their subject (though for many subjects LLMs are pretty good at this too). But the LLMs boost a book's effectiveness by being your individualized tutor.
This is a bit of a stretch, but it's a little like distillation: you are extracting patterns from the vast knowledge of the LLM and inserting them into your brain. Where you have an incomplete or uncertain mental model, you ask a tutor to fill in the blanks.
True, although the "don't know what you don't know" aspect is where LLMs will be magic. I envy today's youth for having them (and I'm not that old at all).
I remember fumbling around for ages when I first started coding, trying to work out how to save data from my programs. Obviously I wanted a file, but 13-year-old me took a surprisingly long time to work that out.
Almost impossible to imagine with AI on hand, but we will see more slop-merchants.
Definitely more efficient in terms of power consumed; less so in terms of the human effort needed to build such guides across nearly every topic one could think of. But you're right, we shouldn't ignore the power consumption.
I have found that asking AI "You are an expert teacher in X. I'd like to learn about X, where should I start?" is actually wildly effective.
Whoever, or whatever, is creating the thing that needs reference materials would have to seed the initial set (just as they/it seeded the thing itself) and then go from there.
If you didn't, then you won't be included in the training set (obviously) and the AI won't easily know about you. Sort of like how, if you start a really cool company but don't make a website, Google doesn't know about you and can't return you in its search results. It's valuable for Google (AI) to know about you, so it's valuable to build the sites (docs) to get indexed (trained on).
I don't get this type of attitude. Surely using the source signal, before an LLM adds noise, would be much preferable. Besides, there seem to be heavily diminishing returns to de-noising LLM output, and even a hard barrier to how much we can denoise it. Yet people claim they prefer the noisy data and don't consider the risk that they are learning the noise instead of the signal, because by definition they have no way of knowing what is signal and what is noise when they ask an LLM to teach them something. Because the noise is friendly-sounding and on demand?
I don't disagree, but since the quality of AI is largely a function of the quality of human content, there's always going to be value in well-written human content. If humans stop producing content, I think the value of AI/LLMs drops significantly as well.
Current-gen AI can spit out some very, very basic web sites (I won't even elevate them to the word "app") with some handholding and poking and prodding to get it to correct its own mistakes.
There is no one out there building real, marketable production apps where AI "codes everything for them". At least not yet, but even in the future it seems infeasible because of context. I think even the most pro-AI people out there are vastly underestimating the amount of context that humans have and need to manage in order to build fully fledged software.
It is pretty great as a ride-along pair programmer, though. I've been using Cursor as my IDE and can't imagine going back to a non-AI coding experience.
Presumably it's because the stressors of poverty force one to think in the short term, while chess incentivizes long-term strategy.
If you learn not to take a pawn because 5 moves later you'll lose a bishop, maybe you won't take on credit card debt when 5 months later you owe 10% more in interest.
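To put rough numbers on the analogy (my own illustration, assuming a card at roughly 24% APR, i.e. about 2% per month compounded):

    1000 * 1.02^5 ≈ 1104

so a $1,000 balance carried for 5 months costs you about 10% extra, the same kind of delayed penalty as the bishop you lose 5 moves after grabbing the pawn.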
It also incentivizes taking as much material as you can right away to gain an advantage. It’s all positional and contextual. Not sure you can conclude that there’s any concrete life lesson in here.
lol, credit cards? That's a million privilege steps ahead. Most children in the world do not have two hours to spend on their own education, much less on chess; they work, etc.