Ask HN: What rabbit hole(s) did you dive into recently?
238 points by RetroTechie 8 months ago | 380 comments
You get nerd-sniped. Assigned a bug to squash. Some new tech or gadget arrived, to familiarize yourself with.

While researching / reading up / debugging, you stumble upon something interesting. Upon looking into that, yet another subject catches your attention.

You know how this goes. So... (see title). Bonus questions: what intermediate steps did you pass along the way? What stuck in your mind the most?




So, I decided to install Linux on my formerly-Windows-only laptop, and thought it was cool enough to go full time and ditch Windows completely. The downside was the lack of access to top tier games. No problem though, my plan was to take a break from gaming, figuring that by the time Linux had caught up with compatibility, computers would also be much more powerful and I'd be able to resume gaming at some point in the future on better kit, and not have to worry about janky framerates on struggling hardware.

Linux proved interesting enough that I kept finding all sorts of cool new rabbit holes to go down - shell scripting, filesystems, Python, databases. It was side-quests within side-quests! Plus, having kicked my gaming habit, I had plenty of time to explore these.

Anyway, to cut a long story short, that was 23 years ago. I ended up getting a career in tech, relocated, got married, had kids, lived the American Dream... The "life" rabbit hole kind of got in the way of my plans, so I can't wait to finally get back on track and play GTA III on a decent box.


The old-laptop -> install-linux-because-windows-wont-run -> tech-career pipeline is absolutely real.

It got me to learn C, graphics programming, operating systems, networks and firewalls; literally everything I wanted to do required a couple of days deep in the Arch Linux wiki, learning about all kinds of interconnected systems.


Yep, this is how I spent High School, although I did have prior interest in computers. I started tinkering with Debian on a hand-me-down HP pavilion and ended with a ThinkPad T60 with Arch. Developed a lot of skills that helped me get started in my career.


Ditching Windows and installing Ubuntu 8.04 on my personal computer was my gateway into further tech (building PCs, using a terminal, programming, etc.) and set the foundation for my career.



This was a phenomenal read. Thanks for linking


It was a big moment for me too when I realized that the real world has infinitely many interesting things to do and explore. The real world is incredibly detailed.


Hard disagree.

The real world only has pain and suffering. Endless trials and never a payout.

Games, on the other hand, are very detailed and have a well defined path to success.


This, oddly enough, has been quite a big issue in my life as of late.

I need to get off my ass and start working towards better things in real life in order to (potentially) better my situation e.g. new job, better hobbies, etc..

However, I have always had issues with gaming on and off throughout my life (perhaps a lot more on than off). I seriously think that a lot of my issue with gaming is that games are preferable to life in many regards. In a game, I know if I work hard and follow the steps/guides/quests, then I will be rewarded. Goals are attainable, in that if I fail to achieve them it is my fault -- because I did something incorrectly.

Sadly, when I take breaks from gaming, I am not a productivity machine. I just find something else to waste the time with.

In the back of my mind, I want to believe that if I work hard and better my situation, then I will finally be rewarded. But I have worked hard to get where I am, and I am still awaiting the reward, so to speak.

So, I think a part of my brain has taken the shortcut to destroy my motivation because I know that Sisyphus isn't the only one rolling the boulder up the hill thinking, "maybe this time will be different?"


Try recording yourself playing video games. You will quickly snap out of the illusion and realize that you're passing through life staring at a screen.


This is funny considering the whole streaming ecosystem. You might be out of touch.


I am also a software engineer, so wouldn't this still be true for my work?


I had an unhealthy comfort meal that I couldn't kick. I filmed myself eating it and was disgusted by the sight of myself. That worked. I still eat it sometimes but in much smaller portions and not nearly as often.


Yes this is the exact effect I am aiming for. There's just something about watching yourself from a different perspective that lets you have fresh judgment on it as if it were a different person.


But you won't be rewarded? It's a game and has no bearing on your real life, I don't follow this train of thought at all.


I perhaps did not explain myself well.

The areas I want to improve in my life mainly pertain to my career. I believe it is important to note that I am not a Type-A, career focused, go-getter kind of person.

I am a developer like many others on this site. I am also quite unfulfilled and unsatisfied in my current position. I am grateful to be employed, but when I describe my position to others, I am generally met with a response like, "Run! Now, and never look back."

So, in order for me to get out of my current position, I am going to have to put in a lot of work in my free time. That's not necessarily a bad thing, of course. However, say I land a new job due to my efforts. I seriously do not think it will be fulfilling nor increase my happiness in the grand scheme of life. I seriously think wherever I end up will be a different round of the same "game" with different players, so to speak.

I do not feel that my motivation to change comes from some internal burning passion, but rather some obligatory "should" feeling i.e. I should find a better job, I should study <insert topic>, I should work out more, etc..

Though, you are right -- gaming has no bearing on my real life. The rewards from any game are, more or less, fruitless. But I guess my point is that real life is pretty damn fruitless too. If anything, life just feels like chasing a carrot hanging from a stick. I have lived my entire life with this concept of an ever-moving delayed gratification. There is no end to the real life grind, no clear guaranteed steps on how to progress, no checkpoints, no redos, etc. Seriously, what am I missing? What am I grinding for?


The hardest part of life is to reward the process over the outcomes. Videogames don't have that constraint.


But when you succeed in the game, then what? Yes, the real world is full of endless trials and never a payout. But the entire point is that there is no point. The fun is in the journey. The real treasure is the friends you make along the way, to quote the meme.


Yes, but aren't the suffering and the occasional ecstatic wins the whole point? Learn to enjoy the journey.


The real problem with the real world is that it is so boring most of the time.

Well designed video games can be hard or even harder than real life but they give you constant endorphins or dopamine hits to keep you going.

And you can take a break from video games without any negative repercussions.


I think you may be confusing what is providing the payout. There is not a single game I've played that paid me money that also had a well defined path to success. Gambling on any simple game is a near guaranteed loss of money. Poker can pay out, but it's a truly grindy game, and one bad night can wipe out months of grinding. Blackjack can have positive expected value, but good luck successfully counting cards for more than a couple hours without getting permabanned. So what game are you playing that provides a payout that isn't based mostly on dopamine that is provided by you in the real world? And if it's just numbers-go-brr that is the payout, you can always buy a counter and watch those numbers go up. In fact, you might be able to train yourself to get a hit of dopamine if you get a clicking counter that goes up when you click it…

Still, it sounds like you may be in a bad place, and my heart goes out to you. Hang in there. Life doesn’t always get better, but seasons change, and this too will pass.


Let's hear your distro journey! What did you start with, what do you currently use?


I started with Mandrake (ironically for the game ban, I bought a retail version that came bundled with a copy of The Sims that had been rigged to run with wine), then tried Linux From Scratch (very educational), then a brief spell on Gentoo (because my Dell laptop needed some extra config to get the wifi working anyway, so I figured why not compile all the packages - then I figured out why). Then I liked KDE so moved to openSUSE, but then I stopped liking KDE when 4 came out and broke everything, so I moved to Ubuntu. Then I stopped liking GNOME when 3 broke everything, and at any rate I didn’t have so much time to fiddle any more, so I moved to Xubuntu, and there I remain.


Big fan of Xubuntu! Recently switched to openSUSE Tumbleweed so I could have a rolling release, but I still use XFCE as my desktop environment!


This is the one!

Same here.

Life happened.

What I told my dad would never happen (not using Linux From Scratch or Gentoo - like how can he use Windows and like just get things done instead of digging deep and solving problems all the time?) did happen.

I now use Debian derived but more up-to-date and easy to use for mainstream people distros (like shudder Ubuntu) and they're relegated to server and TV duty and I hope nothing breaks when I upgrade from LTS to LTS version coz "I don't have time for this!" and I play a few games here and there on my Windows laptop.

But I think it's important to have gone through the "rabbit hole" in the middle. All the digging and understanding I did I still do all the time at work. I just no longer spend the other half of my life on it. I spend it digging into other things.


I think the ??? --> Gentoo --> Something Stable pipeline is a defining characteristic of 90s/00s nerds.

I _think_ the first Linux distro I ever installed was Slackware, but ashamedly, I can't remember. In my defense, I would've been about 10 years old at the time. I played around with a ton of others (I'm fairly certain I tried Debian Woody), but settled on Gentoo, because of course. Forget just tying up the phone line for dial-up, I tied up both of the family computers with distcc.

After a career break wherein I did relatively little with computers for a decade, I got back into Linux and quickly realized I did not care about -funroll-loops. I've been running Debian non-stop since Jessie, on everything from repurposed laptops, to ancient tower servers, to slightly-less-ancient racks.

> But I think it's important to have gone through the "rabbit hole" in the middle.

This right here. There is an endless stream of "how do I learn Linux?" questions on Reddit, and the answers are always some variation of "read this book," "take this course," etc. Perhaps there is value there, but I learned it by trying to do stuff. Like getting an HP PSC 2610 to talk to hplip and CUPS over LAN. Or getting a Chaintech AV-710's (an obscure sound card that happened to use an excellent DAC for 2-channel output) to work under ALSA. Doing these kinds of things forced you to read man pages, forums, newsgroups, etc. And when you succeeded, you could write up a HOWTO, and the three other people in the world who also needed this particular combination would give thanks.


You just described my life back when Slackware OG was alpha in 1993, but on a 386 PC. 31 years later I never used my EE degree but traveled through all sorts of aspects of IT and IS.


You'll have to do some research to figure out how to get a decent copy of the old, moddable GTA III, and not the remastered, ported ones.


A lot of the top tier games run fine on Linux using the Steam/Proton compatibility layer or just pure Wine. Do explore that.


Eerily similar story! Linux is the ultimate rabbit hole.


PDF files, and why the heck they are so slow to read. Hours upon hours of perf(1) and fiddling with ugly things in C. My main takeaway is everyone in the world is doing things HORRIBLY wrong and there's no way to stop them.

(Digression: did you know libpng, the one everyone uses, is not supposed to be an optimized production library—rather it's a reference implementation? It's almost completely unoptimized, no really, take a look, anywhere in the codebase. Critical hot loops are 15 year old C that doesn't autovectorize. I easily got a 200% speedup with a 30-line patch, on something I cared about (their decoding of 1-bit bilevel to RGBA). I'm using that modified libpng right now. I know of nowhere to submit this patch. Why the heck is everyone using libpng?)
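For a sense of what that kind of pixel-format conversion looks like when it's written so it can vectorize, here's a minimal sketch in Python/numpy (my own toy, not libpng's actual code; it assumes 1 = white, matching PNG's 1-bit grayscale convention):

```python
import numpy as np

def bilevel_to_rgba(packed: np.ndarray, width: int) -> np.ndarray:
    """Expand 1-bit-per-pixel rows (MSB first) to 8-bit RGBA.

    A vectorized version of the kind of conversion loop described
    above -- an illustration, not libpng's actual implementation.
    """
    bits = np.unpackbits(packed, axis=-1)[..., :width]   # one 0/1 per pixel
    gray = (bits * 255).astype(np.uint8)                 # 1 = white, 0 = black
    rgba = np.empty(bits.shape + (4,), dtype=np.uint8)
    rgba[..., 0] = rgba[..., 1] = rgba[..., 2] = gray    # replicate to R, G, B
    rgba[..., 3] = 255                                   # fully opaque alpha
    return rgba
```

A scalar C loop walks this bit by bit; expressed over whole arrays like this, the same conversion runs as wide bulk operations, which is roughly the kind of win a vectorizable rewrite of the hot loop buys you.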

The worst offender (so far) is the JBIG2 format (several major libraries, including jbig2dec), a very popular format that gets EXTREMELY high compression ratios on bilevel images of types typical to scanned pdfs. But: it's also a format that's pretty slow to decompress—not something you want in a UI loop, like a PDF reader is! And, there's no way around that—if you look at the hot loop, which is arithmetic coding, it's a mess of highly branchy code that's purely serial and cannot be thread- nor SIMD- parallelized. (Standardized in 2000, so it wasn't an obvious downside then). I want to try to deep-dive into this one (as best as my limited skill allows), but I think it's unlikely there's any low-hanging optimization fruit, like there's so much of in libpng. It's all wrong that everyone's using this slow, non-optimizable compression format in PDF's today, but, no one really cares. Everyone's doing things wrong and there is no way to stop them.

Another observation: lots of people create PDF's at print-quality pixel density that's useless for screens, and greatly increases rendering latency. Does JBIG2 support interlacing or progressive decoding, to sidestep this challenge? Of course it doesn't.

Everyone's doing PDF things wrong and there is no way under the blue sky to make them stop.


> The worst offender (so far) is the JBIG2 format (several major libraries, including jbig2dec), a very popular format that gets EXTREMELY high compression ratios on bilevel images of types typical to scanned pdfs. But: it's also a format that's pretty slow to decompress—not something you want in a UI loop, like a PDF reader is! And, there's no way around that—if you look at the hot loop, which is arithmetic coding, it's a mess of highly branchy code that's purely serial and cannot be thread- nor SIMD- parallelized.

Looking at the jbig2dec code, there appears to be some room for improvement. If my observations are correct, each segment has its own arithmetic decoder state, and thus can be decoded in its own thread. The main reader loop[1] is basically a state machine which attempts to load each segment in sequence[2], but it should not need to. The file has segment headers which contain the segment offsets and sizes. It should be possible to first parse the headers to populate the segment table, then spawn N threads to decode N segments in parallel. Obviously, you don't want the threads competing for the file resource, so you could load each segment into its own buffer first, or mmap the whole file into memory.

[1]:https://github.com/ArtifexSoftware/jbig2dec/blob/master/jbig...

[2]:https://github.com/ArtifexSoftware/jbig2dec/blob/master/jbig...
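To make the fan-out concrete, here's a minimal sketch of the idea in Python (stand-ins only, not jbig2dec's API; `decode_one` is a hypothetical per-segment decoder that owns its own arithmetic-coder state):

```python
from concurrent.futures import ThreadPoolExecutor

def decode_segments_parallel(segments, decode_one, workers=4):
    """Decode independent segments across a thread pool.

    `segments` is a list of per-segment payloads already sliced out of
    the file via the segment headers' offset/size fields; `decode_one`
    is a stand-in per-segment decoder. Executor.map guarantees results
    come back in submission (i.e. file) order.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(decode_one, segments))
```

The key precondition, as noted, is that each segment really is self-contained; slicing buffers up front (or mmapping the file) avoids the threads contending on I/O.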


- "If my observations are correct, each segment has its own arithmetic decoder state, and thus can be decoded in its own thread."

Yeah, but real-world PDF JBIG2's seem to usually have one segment! One of the first things I checked—they wouldn't have made it that easy, the world's too cruel.

It's sort of a generic problem with compression formats—lots of files could easily be multiple segments that decompress in parallel, but aren't—if people don't encode them in multiple segments, you can't decompress them in multiple segments. Most formats support something like that in the spec, but most tools either don't implement that, or don't have it as the default.

e.g. https://news.ycombinator.com/item?id=33238283 ("pigz: A parallel implementation of gzip for multi-core machines" —fully compatible with the gzip format and with gzip(1)! No one uses it).


Yikes! Doesn't seem like there's anything that can be done to solve that then.

I guess the only way to tackle it would be to target the popular software or libraries for producing PDFs to begin with and try to upstream parallel encoding into them.

Or is it possible to "convert" existing PDFs from single-segment to multi-segment PDFs, to make for faster reading on existing software?


Conversion's a very good solution for files you're storing locally! I'm working on polishing a script workflow to implement this—I haven't figured out which format to store things into yet. I don't consider it a full solution to the problem—more of a bandaid/workaround.

The downside is that any PDF conversion is a long-running batch job, one that probably shouldn't be part of any UX sequence—it's way too slow.

Emacs' PDF reader does something like this: when it loads a pdf, its default behavior is to start a background script that converts every page into a PNG, which decodes much more quickly than typical PDF formats. (You can start reading the PDF right away, and at the end of the conversion it becomes more responsive.) I think it's a questionable design choice: it's a high-CPU task during a UI interaction, and potentially a long-running one, for a large PDF. (This is why I was profiling libpng, incidentally).

https://www.gnu.org/software/emacs/manual/html_node/emacs/Do...


> a very popular format that gets EXTREMELY high compression ratios on bilevel images of types typical to scanned pdfs

Funny you say that, https://en.wikipedia.org/wiki/JBIG2#Character_substitution_e...


I can still remember this cool talk about that. https://www.youtube.com/watch?v=7FeqF1-Z1g0


Question, since you are probably knowledgeable about it right now.

> Another observation: lots of people create PDF's at print-quality pixel density that's useless for screens, and greatly increases rendering latency.

Is this relevant to text in the PDF? I would assume text is vectorized, meaning resolution is not relevant until you _actually_ print it?

Or is it just relevant to rasterized content like embedded images?


Your understanding's right: PDF's that are text + fonts are easy and fast. I'm concerned about the other kind, that's scanned pages. Any sheet music from Petrucci / imslp.org for one example. That kind is a sequence of raster images, stored in compressed-image formats which most people aren't familiar with, because they're specialized to bi-level (1-bit, black and white) images. A separate class from photo-type images. The big two seem to be JBIG2 [0], and CCITT Group 4 [1], which was standardized for fax machines in the 1980's (and still works well!)

[0] https://en.wikipedia.org/wiki/JBIG2

[1] https://en.wikipedia.org/wiki/Fax#Modified_Modified_READ

(You can examine this stuff with pdfimages(1)—or just rg -a for strings like /JBIG2Decode or /CCITTFaxDecode and poke around).
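If you'd rather script that check than eyeball it, here's a crude sketch that just counts filter-name occurrences in a PDF's raw bytes (the names are standard PDF stream filters; a real inspection should use pdfimages(1) or a proper PDF library, since raw byte counts can overcount):

```python
def pdf_filters(path):
    """Crude scan of a PDF's raw bytes for image-stream filter names,
    in the spirit of the `rg -a` trick above."""
    names = [b"/JBIG2Decode", b"/CCITTFaxDecode",
             b"/DCTDecode", b"/JPXDecode", b"/FlateDecode"]
    data = open(path, "rb").read()
    # Report only the filters that actually appear, with raw counts.
    return {n.decode(): data.count(n) for n in names if n in data}
```

Running it on a scanned-pages PDF typically turns up /JBIG2Decode or /CCITTFaxDecode; a text+fonts PDF is usually all /FlateDecode.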


Personally I have the impression that CCITT group 4 compressed PDFs are displayed very quickly, unless they are scanned at 3000 DPI... Can't say the same for JBIG2 or JPEG/JPEG2000 based ones.


> A separate class from photo-type images.

I'd assume that the photo-type image decoder is optimized, right? If so, how does the optimized photo-type decoder compare to the apparently unoptimizable JBIG2 decoder?


I'm not knowledgeable enough to speak to that, but just to clarify—the low-hanging fruit in libpng I mentioned is in simple, vectorizable loops—conversions between pixel formats in buffers. Not in its compression algorithm (which isn't part of libpng—it calls out to zlib for that).


> I know of nowhere to submit this patch.

How about the folks listed as "Authors":

* http://www.libpng.org/pub/png/libpng.html

> Why the heck is everyone using libpng?

What is the alternative(s)?


sumatrapdf seems better than most at reading them?


Is there any ffmpeg-like command line program for PDFs? Creating, appending/removing pages, viewing, etc?


For appending, removing, merging pages, there’s pdftk: https://www.pdflabs.com/tools/pdftk-server/


gs (ghostscript), mutool, ocrmypdf...

To add/remove: mutool merge -h

To split PDF pages: mutool poster -h

I made a script here that I use frequently for scanned documents: https://github.com/chapmanjacobd/computer/blob/main/bin/pdf_...


For me, it's DIY audio.

The thing about diy'ing audio (primarily speakers but also amps, DACs etc) is that you can get top of the line performance for a fraction of the market price. A $50,000 speaker setup that would bring tears to your eyes could be made for perhaps $5000. A DIY $500 kit can perform similar to a $2-3000 set of speakers. Open source amps with gerber files on github are amazing.

The biggest reason it's so easy to get amazing value is because that $600 speaker only has $150 of materials. Upgrading its $25 woofer to a $80 one would help a lot, but no company would do that and not sell it now for $1000 if they could.

However, the biggest allure for me is not beating commercial systems on cost, but making what I want. A small speaker with deep bass? Easy. Speakers with quasi-active noise cancellation behind them? Sure, why not. Speakers that'll make the most overpowered/fancy beach-boombox sound like a crappy toy? Simple.

The only limit is your imagination and time/money.

I'd very much recommend diyaudio.com, but be warned, parts of this field are mature while others are still in effective infancy. Also, being an engineer (electrical/mechanical) helps a lot, there's a ton of signals processing and electrical/mech oscillation.


All that build up and not a link to be seen?!?! Where do I go to spend $ on a new hobby I didn't need?


The easiest "take my money" approach would be to look at some Troels Gravesen speakers on his website and/or find kits being sold on sites like parts express (if in the US). Jeff Bagby and Paul Carmody are two other well known designers, the latter having more budget-friendly builds.

Additionally, sites like diyaudio.com are better when you want a specific thing built and are looking to learn more about techniques, new parts etc.


This is a rabbit hole worth jumping into when time allows it.


Check out Hexabase on YouTube, and then check out DIYAudio.com. Those will whet your appetite, I hope.


I think you mean HexiBase?


Can you actually outperform the KEFs and Perlistens of the world? They seem to have so much engineering put into their designs I don't believe a hobbyist can realistically match them.

I will admit that stuff like "speakers with quasi-active noise cancellation behind them" sounds intriguing. That's probably a good reason to get into this rabbit hole!


> Can you actually outperform the KEFs and Perlistens of the world? They seem to have so much engineering put into their designs I don't believe a hobbyist can realistically match them.

Absolutely. Don't forget, these guys are: a. Humans, and b. Operating for a company to make a profit. When you're DIYing you're (generally) not concerned about the latter part at all.

There's a few more reasons why DIY is so capable:

1. High quality drivers are available to purchase. There are companies like Tymphany/SB Acoustics etc that are OEM/ODM manufacturers selling to the big names. You can get the same/very similar models from parts express and other sites.

2. A lot of the engineering principles are well understood, public science. In fact many experts hang out on websites like DIYaudio.com. They're human. You can see their workings, opinions, doubts etc up close.

3. Some speakers like the Dutch&Dutch 8c's started their lives on forums like diyaudio. Which is to say, they went from DIY level to "well-reviewed" level in a manner that's quite clear/transparent to anyone familiar with the forum/DIY. No "hidden" black magic involved.

4. You have a lot of amazing designers on these forums putting their designs out for free. Jeff Bagby, Paul Carmody, Troels Gravesen, Perry Marshall etc. Check out Perry's comment on his speaker below. Btw, he's a professional designer having worked across a number of audio & car companies designing AV systems.

Now, if you want to design your own speakers and not use an existing model, yes you'll need to learn a lot. But it's very much doable. It may take time/money/effort, but beating a top of the line system for a fraction of the (material, not labour) cost is possible and has happened.

[0] - https://www.diyaudio.com/community/threads/ultimate-open-baf...


BTW Perry had made another comment about how his speakers sounded better than almost all other speakers at AXPONA and his kids agreed, but I couldn't find it right now. And many of those speakers were high 5/6 figure speakers.


This for me, for quite a while now. I've built my own speakers, microphones and now I'm in a DIY synth module hole.

Latest at https://fourays.lon.dev

I've just got to the point where I think I know what the module is going to be, but last night found out that PCB manufacture puts additional constraints on the PCB design, so I have to go back and re-do a lot of it, including probably dropping some features to make it simpler. The learning never ends.


All I want is a small stereo speaker to replace my awful monitor speakers. Like a small soundbar that doesn't suck. I might dive into this rabbit hole


Check out the speaker kit called C-Notes. They're really well rated and a pretty simple/budget-friendly option, especially if you're okay applying some EQ to them. Paul Carmody has many good designs.


"Small" and "decent audio response in bass" don't really go together. But you can go surprisingly far with a pair of "bookshelf" speakers.


With passive speakers, I agree, but with active/DSP speakers you can do ludicrous things. There's this build on diyaudio called something like "compact active 3 way", that'll give you an idea of how decently powerful a small speaker can go (and that's despite some design flaws in the build like the choice of a passive radiator).


Check out Hexabase on YT. He has really punchy small speakers that he has released designs for.


> Check out Hexabase on YT.

https://www.youtube.com/@HexiBase in case you're having trouble finding it with that spelling.


Thank you. Mea culpa.


How hard is it to make something better than my HS8s? I want an upgrade but the next tier is like 3k.


How much are you willing to spend? Troels Gravesen has many builds on his website and I'm sure some/many of them would be better than the HS8s. You could also search for "hs8" on diyaudio.com and see posts of people in a similar position.


I mean, hopefully less than 2k for something that'd be worth the 3k for a pair of Barefoot03's?


Do you mean the barefoots cost 3k and you'd like to beat it by spending around 2k? You could see if there's any design you like from Troel's page, though you could also make an account & post the question on diyaudio.com. Unfortunately I'm not very familiar with DIY studio monitors, I know speakers like the Hitmakers (by Paul Carmody) exist but I'm more aware of domestic speakers.


I did not realize that learning Fusion 360 was going to be such a huge chapter in my current journey. Looking back, I'm kind of stumped at how I avoided it as long as I did.

I would now put learning CAD in the same category of mandatory life skill as learning to code. The ability to translate what you see in your mind to something that can be repeatably fabricated is an incredible power move, akin to learning how to communicate complex ideas with empathetic language.

My advice is to start by following this tutorial step-by-step. It's a 90 minute video that took me ten days to get through. Step two is to take an existing project and change it in a significant way. Step three is to create something from scratch which solves a problem that you have.

https://www.youtube.com/watch?v=mK60ROb2RKI


If you don't know about Plasticity 3D, you should know about Plasticity 3D. It is different, but still parametric, and CADish.


Thank you for this! I was completely unaware that it was out there, and I really dig a lot of what the creator is going for. For example, I'm not a Blender user but lots of people are, so the idea that he created a WebSockets-powered real time bridge is the kind of innovation that makes the world better.

However, I do have two nits. The first is the overwhelming focus on pricing model as a feature. I get it; SaaS is frustrating AF. However, I also understand that it's very difficult to build a functioning business around tooling that is free. There's a huge number of people who just can't understand why we don't have free Fusion/Z-Brush equivalents. It's easy to fix... just set up a $5-10M/year donation schedule to a group of people currently working for Autodesk and you could definitely have an OSS competitor to Fusion in a year or two.

In reality, people using powerful tools are used to paying a bit of money for those tools, and I honestly feel like that's how it should be. The people screaming the loudest for free-as-in-beer CAD that doesn't suck are also likely the worst customers that you wish you didn't have. Anyone who has ever noticed that non-profit clients always want to argue about billing the most will be nodding.

Second nit is that as awesome as Plasticity is, it's really a modelling tool (like Z-Brush) that is influenced by CAD, not the other way around. And I believe that there's a huge market segment for this! Game asset creators come to mind.

But there are huge swathes of workflow functionality that Fusion nails which just don't seem to be present in Plasticity. There's no component hierarchy, no timeline. The whole relationship between sketches and operations is lost in favour of just slicing through stuff... which is cool until you need to change your primary enclosure dimensions and expect every aspect of your design to re-calculate and adapt.

There's more to "parametric" than being able to change parameters on a tool. I try to describe to friends that in a tool like Fusion, the geometry itself is a parameter, the operations are like lambda functions, and the timeline can be rolled backwards and forwards like git commits.

When you have the lightbulb moment for all of this, it's really hard not to be annoyed when people attempt to shame you for not using FreeCAD, as is happening elsewhere in this thread.
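The rollback-and-replay idea is easy to sketch: treat each operation as a function of the model state and rebuild by replaying the list (a toy model of my own, nothing like Fusion's actual internals):

```python
class Timeline:
    """Toy model of a parametric CAD timeline.

    Each operation is a function from the current model state to a new
    state; changing an early parameter and replaying the timeline
    recomputes everything downstream, like rolling git commits forward.
    """
    def __init__(self, params):
        self.params = dict(params)  # sketch parameters seed the model
        self.ops = []               # ordered operations ("commits")

    def add(self, op):
        self.ops.append(op)

    def rebuild(self, upto=None):
        """Replay the timeline (optionally only the first `upto` ops)."""
        state = dict(self.params)
        for op in self.ops[:upto]:
            state = op(state)
        return state
```

For example, an op that derives an enclosure's inner width from its outer width keeps tracking the outer dimension whenever you change it and rebuild; passing a smaller `upto` is the "roll the timeline back" move.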


I'm with you on the timeline/component stuff. I came from SolidWorks and SolidEdge and I still pine for a complete CAD package that is actually functional. Fusion360 is foreign to me and I doubt I'll ever be in a place where I could afford an annual SolidEdge subscription. Note, the community edition is free [0]. I may have to brush off my CAD skills and give it a go.

[0] https://resources.sw.siemens.com/en-US/download-solid-edge-c...


For what it's worth, I definitely felt like Fusion 360 was foreign until I did the tutorial that I linked to. It's very good.


> The people screaming the loudest for free-as-in-beer CAD that doesn't suck are also likely the worst customers that you wish you didn't have.

These are often ambitious hobbyists who have quite some free time for their hobby, but little money (also since they don't earn money by using the CAD program).

The most rational choice would, in my opinion, be to sell very cheap licenses without any support to these people: you don't lose any money, because they wouldn't be customers for the expensive licenses anyway; on the other hand, because of some of these people's devotion, it makes the CAD software a possibly better choice for many companies because of the existence of more people to hire who know the software quite well.


I use it these days for my basic indie game models, it's great.


Not only that, but after learning Fusion 360 (or similar software), a very interesting world opened up when I started learning G-code so that I could write, optimize and modify post processors.

Learning G-code lets you start using CNC machines. Even a simple 2-axis plasma cutter can do interesting things, and 3-axis machining centers can make things that are quite remarkable. I am now stepping into 5-axis.
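To give a taste: a post processor ultimately just emits lines of G-code. Here's a hypothetical Python sketch that generates the moves for a simple square profile (the function name and feed rate are made up for illustration):

```python
# Hypothetical sketch of what a post processor emits: G-code for a
# square profile. G21/G90 set units and positioning mode; G0 is a
# rapid move, G1 a feed move at F mm/min.
def square_gcode(side, feed):
    corners = [(side, 0), (side, side), (0, side), (0, 0)]
    lines = [
        "G21 ; units: millimetres",
        "G90 ; absolute positioning",
        "G0 X0 Y0 ; rapid to origin",
    ]
    lines += [f"G1 X{x} Y{y} F{feed}" for x, y in corners]
    return "\n".join(lines)

print(square_gcode(50, 600))
```

Real post processors handle tool changes, spindle speeds, arcs and machine-specific quirks on top of this, but the core is exactly this kind of templated line emission.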


When 3D printing became accessible, I joined the subreddit for inspiration about what uses I could put it to, and the opposite happened. Everyone was making plastic toys and ironic reaches like an “egg holster”. Every so often someone would fabricate a replacement part for something, which I thought was the coolest, but beyond that there wasn’t much to be motivated by. I never got into it.

Here I wonder the same thing. Not that everything joyful must be productive. But if there was a way to apply this to something that was neat in the real world I think I’d be far more motivated to learn the skill. And enjoy it more.


I can empathize with your perspective. For every genuinely useful thing like the Gridfinity storage ecosystem, there's a mountain of future landfill. There are only so many wifi router wall-mount brackets you need.

However, I am going to gently push back by pointing out that you're not connecting the dots between knowing how to use CAD to create solutions to problems, and having cheap 3D printers available that can make those solutions real.

In other words, your mistake might be looking externally for what you should be making. It's not so much a failure of imagination as not training your brain to make the possibility of creating objects one of the first steps on the path to problem solving. Perhaps a good analogy is how people go from asking GPT-4 things they've heard other people try, to asking GPT-4 about everything as naturally as brushing their teeth.

So like, as much as it's awesome that I could realize I can print my own reels (for pick and place) from an STL off Thingiverse, my main use of my 3D printer at this point is to print off plastic prototypes of circuit boards and custom enclosures that I'm working on. Not only does this allow me to verify clearance (I actually saved myself five digits and months of pain recently by realizing that the 1/4" audio jacks would not allow my board to be inserted as designed) but it gives me something I can put in people's hands. I've found that, over many years, you can describe things to people and they will nod like they get it, and then when I put the real thing in their hands, they say something roughly like, "oh, this is what you meant". Which I used to find frustrating, and now I just accept it.

Right now, I'm working with the company in China that makes hard shell cases for basically every consumer product. They are sending me revisions of the insert that will hold everything safely. I print them off and then send photos and measurements back of how everything fits (or doesn't) which completely avoids the expensive and slow process of them making a mold, sending me a sample and me testing it. I've literally saved months and thousands doing this. It's awesome.

Similarly, you might have heard that injection molding is incredibly expensive to get started with and that there are fussy design rules you must follow. Well, engineers have recently clued in to the realization that we can essentially 3D print the molds, saving thousands and many lost weeks. Right now there's this crazy arbitrage where about 90% of product designers don't appear to realize that this is a thing, yet.

I could go on and on. The only takeaway is that as you normalize CAD and 3D printing as a go to tool the same way you probably think screwdrivers are pretty normal, you realize that you have more things you need a 3D printer for than things you need a screwdriver for. And that escalation can be really fast.

Addendum 1: Also, remember that it's not just 3D printing. Creating photo-realistic renders of something that doesn't exist yet can save the day. But there's also subtractive processes like CNC which is in some ways even more useful than additive processes like 3D printing. There's a Kickstarter right now for Carvera Air that a lot of folks should get in on.

Addendum 2: One of my very favourite theoretical use-cases for 3D printing is printing prosthetic limbs for animals. I say theoretical because I've never done it personally... but I intend to. I'm a total sucker for this concept and I want to have time to get involved someday. Lots of videos on YouTube, like https://www.youtube.com/watch?v=dP3Kizf-Zqg and https://www.youtube.com/watch?v=EynjYK45dyg and https://www.youtube.com/watch?v=sdFtMRko2GU


Can second this!

However, I would recommend the open source https://solvespace.com! It hits a sweet spot between features vs complexity/learning effort. (And as a programmer I dig the terminal aesthetics)


I learned CAD / Fusion for both work and a burgeoning interest in 3D printing. I'm a product manager in my day job, and being proficient in CAD and able to do simple renderings has been extremely helpful at work.

I'm now looking at going down the Blender / render rabbit hole as Fusion can only get you so far.


Cool! I keep telling my SO that she would be able to express things better at work, too.

What sort of problems do you hope Blender can address that aren't tackled by Fusion?

In a similar vein, I also recommend taking a long look at game development engines, as they are in the same category of tool that have many uses beyond making games:

- quick UI mockups
- environment walk-throughs
- product demos
- VR environments
- best way IMO to teach coding to kids

I use and generally like Unity, although if I was starting over today, I'd be taking a good look at Godot.

TL;DR if you need to do interactive renders/movies a gamedev engine might actually be more generally useful to a coder than Blender.


>What sort of problems do you hope Blender can address that aren't tackled by Fusion?

I felt that Fusion was limited with the textures and materials available for rendering. I think Fusion is good enough for internal purposes but I want better looking renders for external uses.

I've worked on scrappy product teams that lacked a dedicated industrial designer. Usually, the mechanical engineer on the team would come up with a design and I'd come up with simple renderings for marketing docs, show investors, etc.

Most of the industrial designers I follow on linkedin all use Blender, I hadn't even considered Unity, I'll check it out.


Thanks for the push to learn this as well!

Curious how you best learned how to communicate complex ideas with empathetic language?


This was super cool. I guess if you want to "simulate", it's a whole different world?


Not really! Many of the major CAD packages have built-in or add-on packages for FEA. Some of them will even do dynamic multiphysics simulations -- so for example you could model an internal combustion engine, simulate the combustion cycle, and measure the torque.

(That's not a beginner-level project, to be clear)

example: https://www.youtube.com/watch?v=dejend_kx94


Do NOT use this proprietary nonsense, learn FreeCAD


Friends don't tell friends to learn FreeCAD.

I'm a decently able self-taught CAD user now, of the level where I can reasonably quickly pick up a new piece of software. And yet... I've lost count of the number of times I've reinstalled FreeCAD thinking "this time it will be different"... and then quickly removed it again. Compared to anything reasonable it's just an awful hot mess to try and figure out, with huge quirks, a weird interface, and unhelpful error messages.

Given the reasonable pricing, I'm interested to try Plasticity, although it's not strictly CAD in the sense of Fusion360/Solidworks/etc - it's currently more of a modelling program. It also doesn't have the parametric + history features that are really valuable in other products.

The truth is, we're crying out for a decent open-source CAD program. Everything currently available (FreeCAD, OpenSCAD, SolveSpace, CadQuery, etc.) has huge usability and/or feature deficits compared to the commercial offerings.


To quote myself from a previous blog post of mine:

> I tried and tried and tried to get into [FreeCAD]. It promises so much, but there are two fatal flaws in my opinion. First, the UI is a nightmare. I have no idea which "workbench" I am supposed to be using, and there are so many similar choices available, each with subtly different tools and ... I gave up trying to make sense of it. Secondly, even when following tutorials to get some basic modelling done, I found lack of sensible keyboard control and having to click almost everything a real distraction. Not a good experience.

It's an absolute nightmare to use. Really the worst UX I've ever seen.


I am a developer; FreeCAD is a bit of fun when wrangling a model out as a puzzle, but it's not even in the same universe as Fusion360.

I'd love something that is to CAD what KiCad is to the EDA world.

I fully expect Autodesk to pull the rug from under the hobbyist licence at any moment, after which I'll not be able to use it anymore.

While I feel kinda 'dirty' when I use it, I have been using it for maybe 8 years and the thing is a masterpiece. It's like a third arm for me.

Once your brain knows the whole constraints workflow it is so natural. I can very quickly model up pretty much anything I can think of to make.


Fusion360 was good until it became SaaS


I'm not saying that Fusion is the best CAD software, but FreeCAD is worth less than you pay for it. The UI/UX is in the bottom three most user-hostile nightmares I've ever experienced.

Time spent thinking "I'm sure it's just me" in FreeCAD is time you will never get back.


I use Blender for CAD. It's not designed for that, but I've developed a particular workflow that fits my needs quite well. I use it to design everything from small parts to buildings. I've used FreeCAD, but it's just too time-consuming.


mind elaborating on this? as an outsider to CAD software i don't know the landscape — i'm finding some concerning things about fusion360 when i search for pros/cons but i'm not sure how to weigh the opinions, e.g how relevant the commercial lock-in could be for a hobbyist


One-time FreeCAD user here: FreeCAD couldn't solve a very basic constraint. Every time, it got stuck and said it was too hard. Fusion360 had zero issues.

Licensing is a thing with Fusion360, but last time I looked it was free (as in beer) for hobbyists. The grey zone is when you turn your hobby into a small-scale business.


I picked it up to do CNC hobby work and I can’t get enough of it.


Self-hosting and homeserver stuff!

I started out when I got a new phone and didn't know what to do with the old one. One of the ideas I had was a homeserver. Turns out it's not trivial to run Docker even on rooted Android phones; you need a lot of kernel patching and tweaking, and it still had issues after that.

The next step was when I figured out I could install postmarketOS on it, and I managed to flash it, SSH into it and set up Nextcloud for our photos and unbound as a recursive DNS for my home network. I thoroughly recommend postmarketOS, and the contributors are amazing as well.

I was however running out of storage, so I ordered a 256GB SD card and set up mergerFS between it and local storage; it worked fine.

After some time, however, I got paranoid about having an old device with a LiPo battery constantly being charged in my home, so I decided to get a mini PC from AliExpress and chucked a 2TB SSD in.

In the meantime, I discovered Immich, which turned out to be much better for photos than Nextcloud, and fell in love with it.

The final thing I added was a miniDLNA service to play my local movies and shows on my LG TV without having to bother with Plex/Jellyfin and reencode anything. Unfortunately, it kept disappearing after roughly 2 days of operation, so I just added a cron job to restart it at 5 AM.

For the time being, I don't need anything more and am turning my attention to other things.


I’ve fallen into the plant tissue culturing rabbit hole. I was selling excess trimmings of aquarium plants locally on Facebook marketplace and made a surprising amount of money, and my kids really enjoyed it (they got a cut for helping me out). I thought hmm, this could be a great excuse to make a little business around this, teach them some skills, get them thinking more constructively and feeling a sense of agency and ability, etc. Plus earning money is really nice when you’re a teenager.

The challenge is that in an aquarium, plants grow reasonably fast but not fast enough to sell regularly for a decent income. You need ways to produce more plants faster, more reliably, and without taking up too much space. That’s where tissue culturing comes in.

It has reaaaally sucked me in. I’m culturing everything I can find. I’m also propagating aquatic plants through more typical means, and that’s fun too. I’m out of space though.

Tissue culturing is a really fascinating science and practice. I love keeping track of the media recipes, results, growth rates, etc. I’m too early to have had meaningful results, but I look forward to tracking those as well.


This is really cool. I work in the plant industry, and most people don't know this but huge numbers of the plants bought in the U.S. are grown from tissue culture. I would guess it could be in the ballpark of half of all small and medium-sized plants (< 10" diameter), because it brings some benefits when propagating at scale for certain genera. For others, it's too difficult to grow a stable plant from tissue culture, so they need to be propagated vegetatively.


You're an industry plant.


Sweet. I spent a few years cultivating mushrooms and found the hobby to be very rewarding. One can grow (legal) medicinal and gourmet mushrooms on a small scale with very little money. I recommend people interested give it a go because it's fun and offers a lot to learn. :)


Mushrooms are what gave me the confidence to try tissue culturing! I’d grown mycelium from spores, made liquid cultures, and generally figured out the sterile workflows and understood the gist of things. I figured it shouldn’t be too different or difficult.

Plants have been slightly more challenging, but not as much as I expected. If you aren’t optimizing for profits to keep a facility open, mediocre results are still awesome and you’ve got plenty of time to keep experimenting. I guess it’s much the same with mushrooms. If you don’t get 5 pounds on your first flush, it’s still great fun. The beauty of mycelium is that turn around times are an order of magnitude shorter. Tissue cultures are very, very slow.


Did you collect the spores yourself? Because that's the point where most people give up.


Do you have a biology/botany/lab tech background?

I've always really wanted to get into this but really don't know where to start.

Also, "Murashige and Skoog" is such a silly name; I love it.


Ha, I love that name too.

I have a background in absolutely nothing. Anyone can do this stuff.

I mentioned in another comment, the book “Plants from Test Tubes: An Introduction to Micro-Propagation” was incredibly helpful to me. You can get a cursory understanding of things from YouTube or similar, but the book does a great job of explaining what you’re actually doing, how, and why.

A good place to start is prepping some media and containers, collecting some plant tissues, sterilizing it, and dropping it into the media in the container! I know each step here is a subject within itself, but it really is this simple when you zoom out a bit. If you aren’t sure about how to make MS media, start with pre-made options from a company like Plant Cell Technologies. You can use glass jars with autoclaveable plastic lids as the containers. You can use a pressure cooker as your autoclave. Collecting the plant material can be simple or extremely difficult (collecting meristem material can be excruciating if you haven’t worked under a microscope before), and it’s fine to start simple (just use a piece of a leaf). You’ll want a flow hood or still air box, and you can make these for peanuts or buy small solutions for pretty reasonable prices.

As someone once told me: There’s nothin to do but to do it


Thank you for that; I appreciate you taking the time to post that. You've inspired me.

> You’ll want a flow hood or still air box

This was what I was going to ask next. I live in the middle of the Pacific ocean so shipping large things here is prohibitively expensive (for a hobby project) so I guess I'll have to make one. I'll do some googling.


This is really interesting! Wondering if there’s any good reading/books about it you’d recommend?


I started with “Plants from Test Tubes: An Introduction to Micro-Propagation”. After reading that I’ve perused various sites, YouTube, and I find research papers are remarkably plentiful and useful when you’ve got a specific question.

For example, I want to culture some plant. How do I get the best tissue from the plant and which media and hormones seem to work best? Should I use agar, gellan, multiply in a temporary immersion bioreactor, are there any special deflasking notes, etc. The information is out there for a ton of species!


Any online communities?


I’m not a part of any, but that’s a great idea. There are probably several out there.


Do you think this would work with wasabi? Notoriously difficult to grow.


I've been attempting to relearn music theory for the third time.

The first attempt was over a decade ago, and I never really quite got it.

The second attempt was about 6 years ago, and while the fundamentals clicked and I improved dramatically as a musician/composer, I still only ever learned some small portion of the basics.

This time, I'm taking it a few steps further. I'm going back to the basics first to recheck my existing understanding of things, and really trying to take my time and understand each new concept before moving on.

I'm currently working my way through this online textbook: https://musictheory.pugetsound.edu/mt21c/frontmatter.html

I've also picked up a simple little budget keyboard (a 61-key Casio CT-S1) that does just what I need at a great price.

Having a lot of fun so far and learning a ton! :)


I love the incremental re-visits where you, after the fact, realize that you have subconsciously progressed.


Wow, thanks for the link! I'm in a similar situation; I'd like to improve my music theory knowledge as well.

I realized, 15 years after finishing music school, how chord progressions would speed up learning new songs on piano and learning to play jazz. So I'm looking for any kind of literature that would improve my knowledge there.


I deep-dove into the identity of a spammer. They were sending me the crappiest, absolutely lowest-denominator hodge-podge of spam. Fly repellent, ultrasonic dog whistles, knives. It all seemed like legit businesses, so it was just some marketing firm that bought an email list somewhere.

Whenever I get spammed, I check the email headers. Turns out they were using an email with the owner’s initials in the unsubscribe header. From it, I was able to easily guess their actual name. I found them on LinkedIn! I used that, plus the list of all the domains they used for marketing and sending mails, to build a pretty comprehensive map of their operations.

I thought for a while about what I could do with this info… but in the end, reporting them to my country’s consumer rights authority for spam did the trick. No reason to get in trouble myself, as fun as it could be.

So the lesson is: look at email headers! There’s fun stuff in there!
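If you've never poked at headers, the Python standard library makes it painless. A toy sketch (the message below is entirely made up):

```python
# Parse a raw message with the stdlib and pull out the headers that
# tend to leak identity. The sample message is invented.
from email import message_from_string

raw = """\
From: promo@example-shop.test
To: you@example.org
List-Unsubscribe: <mailto:jd-unsub@mailer.example-shop.test>
Subject: Amazing fly repellent deals

Buy now!
"""

msg = message_from_string(raw)
# The unsubscribe address often reveals more than the From: line does.
print(msg["From"])
print(msg["List-Unsubscribe"])
```

In real mail you'd also want the Received: chain and any DKIM signature domains, which are harder to forge than the From: line.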


What about absurdly targeted spam just to mess with them?

https://ghostinfluence.com/the-ultimate-retaliation-pranking...


I started by looking into LLMs outside of ChatGPT (self-hosting ollama, API services, etc.) I got annoyed that they all had their own 'chat' interface, or that all the chat interfaces were some bad rebuild of a chat application.

So I wrote https://github.com/arcuru/chaz to connect any LLM to Matrix so that I could use any Matrix client with any LLM.

That got me writing a Matrix bot account in Rust, and I couldn't find a simple bot framework so I split out a bot framework in https://github.com/arcuru/headjack

But _then_ I realized a really easy bot I could write that would also be very useful for me, and I wrote https://github.com/arcuru/pokem which is sort of a clone of ntfy.sh but using Matrix. Send yourself (or others, or a group) a ping on Matrix using an HTTP request or a simple CLI app.


Chaz looks really cool! It'd be awesome if you posted it to This Week In Matrix (room: [0]).

[0]: https://matrix.to/#/%23thisweekinmatrix%3Amatrix.org?via=mat...


MicroPython on an Adafruit Feather to make a dual-deploy altimeter for high power rocketry. Got really into control theory and all the fancy math for determining the state of the rocket on the way up. Then got into device drivers, improving on the ones that come with CircuitPython to get higher performance out of barometers and IMUs. Then got into circuit board design and fabrication, starting from zero, for an I2C pyro board that fires the ejection charges (little tubes of black powder and an e-match used to eject parachutes). That was pretty interesting; I have the advantage of a handful of hardware engineer friends to answer my questions when I get stuck.

It's all sitting on my desk, first flight will likely be in May.
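To give a flavor of the state-estimation side, here's a toy Python sketch of the simplest possible version: detecting apogee from barometric altitude alone. Real flight code fuses baro and IMU data and debounces; everything here is invented for illustration.

```python
# Toy apogee detector: smooth the altitude samples with a moving
# average and flag the first point where the smoothed series falls.
def detect_apogee(altitudes, window=5):
    smoothed = []
    for i in range(len(altitudes)):
        chunk = altitudes[max(0, i - window + 1): i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    for i in range(1, len(smoothed)):
        if smoothed[i] < smoothed[i - 1]:
            return i  # first sample index after the peak
    return None  # still ascending

# Ascent then descent: the detector fires shortly after the true peak.
print(detect_apogee([0, 10, 20, 30, 40, 30, 20, 10, 0], window=3))
```

The smoothing lag is the tradeoff: a bigger window rejects sensor noise but fires the charges later, which is exactly the kind of thing the fancier filters are for.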


ESP32 + CircuitPython is really cool.


I recently fell into the Linux from Scratch rabbithole. The extra unexpected fun came while attempting to build LFS from within NixOS (my daily driver, which I only had a basic proficiency in). I quickly realized the challenges of following the LFS project's guidance, due in large part to how the Nix store is implemented. So I fell down another rabbithole of learning derivations and the Nix language - which took me a good way though building the LFS toolchain within Nix's intended sandbox. However, achieving FHS-compliance became another issue as I attempted to build essential LFS system tools within a chroot-like environment, but sought to do so in roughly the same declarative/reproducible manner as Nix's ideal. After a few lost hairs, I discovered the wonders of Nix's FHS build environment/bubblewrap implementation. A few hoop-jumps later, handling permissions and migrating to the final build partition, the project was complete with a mostly declarative, mostly reproducible, functional, minimal, bootable LFS build.

What sticks with me most through this experience is the brilliance of the open-source community (and a special satisfaction with now being able to say "I use nix btw")


That sounds super interesting, could you share the code for that?


I've been making milk punch for friends as a gift for years now. On a lark I wanted to figure out how to produce it in larger batches with less manual labor and discovered the tip of the iceberg of what is the field of beverage filtration and food chemistry.

Turns out getting particulates out of a solution is a massive, massive industry with a large body of science, literature, and engineering practice behind it.

EDIT: Here's a few wiki entries I found as OK overviews. ChatGPT was handy for figuring out what relevant literature in the field was and terminology I could use to find more pertinent resources:

1. https://en.wikipedia.org/wiki/Food_engineering

2. https://en.wikipedia.org/wiki/Ultrafiltration

3. Food Chemistry: https://www.amazon.com/Fennemas-Food-Chemistry-Srinivasan-Da...

4. Introduction to Food Engineering: https://www.sciencedirect.com/book/9780123985309/introductio...

5. Handbook of Food Engineering Practice: https://www.routledge.com/Handbook-of-Food-Engineering-Pract...


What's your milk punch recipe?


I got my first intro to milk punch from How To Drink: https://youtu.be/zr8dtT9siq4?si=akpHbgmLrtIcgXJk

I originally used their spec, but have since learned milk punch is pretty forgiving. It works really well for complex flavors that have a lot of tannins or volatile constituents (tea, wine, citrus, etc). I’ve found good black tea and a deep, sweet port tends to be a winning combination.

Done correctly, the resulting punch is shelf stable. That said, it has a neat trick: since no filtration is perfect, you end up with trace amounts of milk fats in the solution that continue to react with any left over volatiles. This leads to a smoother, rounder flavor over time. The last batch I made with a bergamot tea and port ended up tasting like a fruity, complex boba tea after a couple months of rest.


I’d never heard of milk punch, had to look that up. And now I have my own new rabbit hole! Much thanks!


Be warned that getting good clarity takes time and persistence if you’re going the coffee filter method. You’ll also get better flavor if you let the solution sit for up to a day before filtering. Enjoy!


I was curious about how people with seizure disorders safely click links not knowing what lies ahead. Investigating the current best practices for building for epileptics, using this great resource (1) as my starting point.

This took me down a rabbit hole on current methods to detect seizure onset... I came across a very interesting journal article on applying ML in an implantable that can detect seizures within 3 seconds, which spurred my current research on less invasive detection methods. Like any good rabbit hole, I've strayed from the original mission.

Seizures seem scary and I don't want to give them to people, but the causes of their onset seem to be too nuanced and patient-specific to build with any guarantees. The best I can do is avoid the obvious and hope the cutting edge detection and mitigation research bears fruit.

1. https://developer.mozilla.org/en-US/docs/Web/Accessibility/S...



Figure skating. It should be illegal to have so much fun. The feeling of gliding out there on the ice with nearly zero friction is amazing. It's a great community too. Not cheap, but a lot cheaper than many hobbies. Less than $1k will get you a nice pair of skates, plenty of group lessons, and access to the rink for a few months. It's fun for the family or single adults. There are always tons of people just starting. Private lessons can get expensive, but for the average interested adult, you can probably do a single private lesson like once a month and then work on what you learned at the rink.


I totally agree. We are lucky here in Sweden to have free access to ice rinks; basic lessons are also rather cheap and usually run by local not-for-profit organizations.


Where I live it is hot and humid for much of the year, so even though we get cold winters, it isn't enough to have outdoor ice rinks everywhere like in the Northern US, Canada, Eastern Europe, and Scandinavia. I'm jealous of the skating culture where you're at!

One of my Canadian friends was explaining that pretty much everyone chooses to do either hockey or figure skating from a young age with a small minority doing speed skating. She says every neighborhood has its own ice rink too. Also, apparently the women's league for hockey is becoming pretty popular in certain places, so that is cool. I'm guessing it's similar to that where you live?


I live in a suburb of Stockholm and it is like that, every neighborhood having its own ice rink. Some are indoor and some are open-air, mostly free to use, with hours divided between hockey clubs and open skating.


That sounds so freaking cool. I cannot imagine such a world. I bet it also does a lot to bring the community together.

In my area there was practically nothing, but recently a bunch of things got built like walking paths connecting the neighborhoods, a school, sports centers, a park not too far away...etc. I think it had an overwhelmingly positive impact on the community and only some of it came from tax dollars too.


Sports in general are a big thing here in Sweden, and in the same way the money comes only partially from taxpayers; a lot of those activities are supported by the community and the participants.


I started hockey last year. When I started, I looked like Bambi learning to walk. Now I'm at the point where I can hold my own in a game; I even scored a goal in our last game!

Learning to skate is amazing. Once the motions start to “click” and you can start flying around the ice it is so much fun. Wearing full hockey pads makes the learning process a little easier on the body. Falls on the ice hurt!


Spring Boot. What I learned is: I had no idea how much I hate "configuration".

Spring is a Java dependency injection framework that originally used XML-based configuration. Recognising that XML sucked, they later added two additional ways to specify configuration, which they call annotation-based configuration and Java-based configuration. Both kinds use annotations. Both kinds use Java.

Spring Boot is a layer on top of regular Spring that tries to make things simpler by automatically guessing what you're trying to do and configuring it for you, with something it calls auto-configuration.

Just trying to understand what makes what happen in a (very simple) Spring Boot app sucked weeks from my life.


What's funny is that Spring Boot powers some of the World's largest enterprises and applications.


Texas Hold'em. I know there are probably plenty of experts here but after decades avoiding it, I'm getting into it because I found a group of friends that played online.

I first discovered that there are tables (and tables and tables) of preflop card combinations that tell you whether to raise/call/fold based off of where you are in the dealing order and how many players are active.

Then I learned that those tables are basically derived from Monte Carlo simulations that calculate your chance of winning at any given moment (equity). It seems it's probably more accurate to make your decision based on equity and pot odds, i.e. the Kelly criterion, if I can get fast enough at estimating that in my head.
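The Kelly math itself is only a couple of lines. A hedged Python sketch (the function name and example numbers are mine, not from any poker library):

```python
# Kelly fraction: f* = (b*p - q) / b, where p is your win probability
# (equity), q = 1 - p, and b is the net odds the pot is laying you.
def kelly_fraction(equity, pot_odds):
    b, p = pot_odds, equity
    q = 1.0 - p
    return max((b * p - q) / b, 0.0)  # never bet a negative-edge spot

# 40% equity getting 3-to-1 from the pot -> bet 20% of bankroll
print(round(kelly_fraction(0.40, 3.0), 3))  # 0.2
```

The hard part at the table isn't this formula; it's estimating the equity input quickly and honestly.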

Also having fun trying to find open source libraries to do those simulations so I can create my own drilling exercises. Honestly, I'm having more fun doing that stuff than playing it.

The biggest dumbest problem is when blinds accelerate to the point that the minimum bet is greater than what Kelly would recommend. Some online poker sites are aggressive about this, to the point that you are forced to make irrational choices pretty quickly or get blinded out. A way to juice the house advantage I guess. It's almost enough to make me give up and find the next project.


>The biggest dumbest problem is when blinds accelerate to the point that the minimum bet is greater than what Kelly would recommend. Some online poker sites are aggressive about this, to the point that you are forced to make irrational choices pretty quickly or get blinded out. A way to juice the house advantage I guess. It's almost enough to make me give up and find the next project.

The house does not have an advantage with poker. With player games, they earn money from hosting by taking vig(orish). But rapidly rising blinds in tournament poker is a way for them to get more games played, hence introducing turbos in tournament poker and Zone-type cash games.

Better understanding common betting patterns is more useful than hand simulators or whatever. Being able to accurately update your opponent's range throughout the hand is essentially how to play well.


Did you write an algorithmic betting software?


Nah. When playing solitaire I just use a Kelly calculator. I plug in estimated equity and pot odds, and get a recommendation of what percentage of my stack to bet. It's just to get a sense of how to make riskier bets when the pot is large.


Graph visualization.

I've been on a journey for a while to understand how to layout diagrams / graphs in an "aesthetically pleasing but structured" way. Long story short, DOT[0] is the best language I've found for defining graphs (compared to doing something with Mermaid.js or any other markup language), but rendering with the DOT engine in GraphViz fails the "aesthetic" test for me.

Did a bit of a literature review[1] to understand better the different approaches, and to understand the scope of the field. This book does a great job of defining and providing the keywords for the different levels of requirements, starting with "principles" that are provable in the academic sense, to "conventions" that are like principles but cannot necessarily be computed (e.g. NP-hard, so requiring heuristics or simulations to achieve), and ending with actual "aesthetics", where things get very subjective.

Ultimately got pretty deep writing my own force-directed graph simulation in Rust and visualizing with egui[2] (needed an excuse to work on UIs and I've always wanted to write less Python), but I'm taking a break to use what I've learned writing Rust to shore up the REST API testing suite for my dayjob.

[0]: https://graphviz.org/doc/info/lang.html [1]: https://www.amazon.com/Graph-Drawing-Algorithms-Visualizatio... [2]: https://docs.rs/egui/latest/egui/
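
As a flavor of what such a simulation boils down to, here's a from-scratch Python sketch of the classic Fruchterman-Reingold scheme (not the commenter's Rust/egui code - constants and the cooling schedule are my own choices):

```python
import math

def fr_layout(nodes, edges, iters=300):
    """Toy Fruchterman-Reingold: springs on edges, repulsion between all pairs."""
    n = len(nodes)
    k = 1.0 / math.sqrt(n)                       # ideal edge length
    # deterministic start: nodes spread on a unit circle
    pos = {v: (math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
           for i, v in enumerate(nodes)}
    t = 0.1                                      # "temperature" caps per-step movement
    for _ in range(iters):
        disp = {v: [0.0, 0.0] for v in nodes}
        for i, v in enumerate(nodes):            # repulsion: every pair pushes apart
            for u in nodes[i + 1:]:
                dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[v][0] += dx / d * f; disp[v][1] += dy / d * f
                disp[u][0] -= dx / d * f; disp[u][1] -= dy / d * f
        for v, u in edges:                       # attraction: edges pull together
            dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            disp[v][0] -= dx / d * f; disp[v][1] -= dy / d * f
            disp[u][0] += dx / d * f; disp[u][1] += dy / d * f
        for v in nodes:                          # move, clamped by temperature
            dx, dy = disp[v]
            d = math.hypot(dx, dy) or 1e-9
            step = min(d, t)
            pos[v] = (pos[v][0] + dx / d * step, pos[v][1] + dy / d * step)
        t *= 0.99                                # cool so the layout settles
    return pos

# a 3-node path should settle into a line with the middle node in the middle
layout = fr_layout([0, 1, 2], [(0, 1), (1, 2)])
```

"Aesthetics" then becomes the hard part: this gets you an untangled drawing, not necessarily a pleasing one.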


My two rabbit holes from the last ~year:

JellyFin. Amazing. A little clunky here and there, so some automation is needed. Then the weird bugs - gotta get a debug instance going to see what is going on. Once you start thinking about it as a "personal Netflix" you start building out a larger collection which needs organizing. Then your friends want an account. Then you realize your satellite tuner has a streaming function so you start reading through the plugin code for other streaming boxes...

Next one was the Japanese PS2 DVR combo unit. Sold as junk (which mostly they are - none can read discs anymore basically) but very interesting in that they are beautiful and cool and weird. The English OSD translations will often break your unit in subtle ways - so I joined the discord and identified that this was actually a configuration management problem - you have to only ever use a translation which was done to a set of OSD files which match your firmware revision. So I started writing a framework to auto-translate the XML file strings. Then the guys on the community mentioned the images with strings, as well as detecting when your translated string is too long for the UI element... I am still working on the framework. No dealbreakers yet! I will probably buy a few more to generate all my own OSD translations. My wife is going to kill me - I have 2 PSX DVR's now, and probably will buy more on our trip to Japan soon...
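
I don't know the actual PSX OSD file format, so purely as an illustration of the "translated string too long for the UI element" check, with a made-up XML schema and made-up translations:

```python
import xml.etree.ElementTree as ET

# hypothetical OSD strings file; the real firmware files differ per revision
OSD_XML = """
<osd>
  <string id="menu.play" maxlen="10">再生</string>
  <string id="menu.settings" maxlen="8">設定</string>
</osd>
"""

# hypothetical machine translations to vet before writing them back
translations = {
    "menu.play": "Play",
    "menu.settings": "Configuration",   # 13 chars, too long for maxlen=8
}

def too_long(osd_xml, translations):
    """Return ids whose translation exceeds the element's declared max length."""
    root = ET.fromstring(osd_xml)
    bad = []
    for s in root.iter("string"):
        sid, maxlen = s.get("id"), int(s.get("maxlen"))
        if len(translations.get(sid, "")) > maxlen:
            bad.append(sid)
    return bad

print(too_long(OSD_XML, translations))   # ['menu.settings']
```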


Try Veso, which is a Jellyfin fork: https://github.com/vesoapp/veso


How the public transport services in my local area are run! There's a lot to it: route planning, scheduling, ticketing... and a large amount of digitalization too.

It's interesting to discover that a lot of 'facts' about public transport people take for granted just aren't true. The names and liveries by which vehicles go often don't actually correspond with their actual operators and owners. The company that's named on your ticket might not actually get any money from your purchase - or they might make money from passengers who don't pay at all, due to a myriad of subsidy schemes run by different levels of government.

Waves of privatization and re-nationalization with political motivations at every turn have produced a system which is amazingly efficient in some ways, and appallingly wasteful in others. Workforce strikes are obvious to the general public, but what's not obvious is who the negotiating parties even are, with various trade unions (and unions of unions) competing against various management groups (and groups of groups).

Some things are pleasantly surprising. Without any fanfare, digital systems for vehicle tracking have been introduced with remarkable efficiency. Then, for me, there's the astonishment of discovering that not only is every timetable published in a consistent, nation-wide data format, but one that has been utilised in production for twenty years!

It all makes me realise how limited the public discourse about public transport and 'green' mobility policies is in my region. It is simply impossible to grasp the true consequences of any given proposal in the meagre columns that they're given in the newspapers and the two-minute reports in which they feature on the television. Diving into this rabbit hole has led me to respect the complexity of the field much more than I did before, and fills me with both hope and despair on topics which I had hitherto scarcely lent a thought.


Bottle digging, i.e. looking for and collecting old bottles, typically buried or submerged.

Wife and I live on a floating house in Oregon outside of Portland. Taking my paddle board to various beaches on the river to pick up trash has become a beloved hobby of mine. Fantastic way to get exercise, get out in nature, and clean up my community.

I kept finding old, antique bottles during outings: Coke bottles from the 40s and 50s, little medicine bottles from who knows when, old beer bottles from brands no longer in business. I eventually learned that there's a whole hobby around this called "bottle digging", even an active subreddit (r/bottledigging).

I've since bought a nice snorkel and mask and have been diving for old bottles on clear days, when river visibility allows it. Thinking about taking it to the next level and getting scuba certified.

If you like finding things, you might like bottle digging.


I enjoy this as well, but live in an area where finding bottles isn't too common. That hobby has been replaced with shark teeth hunting. I have a decent collection of them (maybe ~300).

I've also learned how to make pendants out of them.

Although, if I were you, I'd be looking for old necklaces, watches, etc. in that river. It excites me to imagine finding some old gold coins or something.


My dad and I did that when I was a kid... we were always in search of the Ruby Red beer bottles. We also looked for insulators on the ground near the telephone poles next to old railroads.

We frequented a subdivision that was being built on land that was a former city dump.


Mine all feel like "we do these not because they're easy, but because we thought they would be easy".

Turning a thousand-page book - PAIP - into a stack of Markdown files in a git repo, readable online. The print book received more editing and revisions than the ebook. I converted an ebook's ... odd formatting ... into Markdown, remade diagrams, generated new ePub and pdf files, and had the spine cut off a print copy to make a fresh scan. Working on that scan, I made Scantailor, an X program, easier to access from a Mac, via Docker. I tried different OCR engines, and pored over the diffs, incorporating dozens (hundreds?) of improvements. I got to find so many differences between Markdown engines. I have ideas on how to make Pandoc links between chapters. There's still a lot to do!

My current WIP: Lars Wirzenius posted about file systems with a billion (empty) files. I started exploring because I was curious, if I was remembering correctly, how well a mostly empty image file would recompress - like, drive_image.gz.gz. Lars offered a Rust program; I was curious about how other methods compared. Like, how about nested shell loops, tar, and touch? And, hey, how well can we archive and compress them? I've gotten to see some issues, bottlenecks, and outright failure modes with SMR hard drives, Samba re: sparse files, and parallel gzip compression. I've accumulated some shell script boilerplate to make it easier to go back and verify my processes, and harder to accidentally wipe out past work if I rerun it.
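
A Python-flavored version of that experiment - pure stdlib, scaled down to a thousand files so it runs in seconds (the counts and names are mine, not Lars's):

```python
import gzip
import io
import os
import tarfile
import tempfile

N = 1_000  # scale up toward a billion at your own risk

with tempfile.TemporaryDirectory() as d:
    for i in range(N):                       # the `touch` loop
        open(os.path.join(d, f"f{i:06d}"), "w").close()

    buf = io.BytesIO()                       # tar in memory
    with tarfile.open(fileobj=buf, mode="w") as tar:
        tar.add(d, arcname="files")
    raw = buf.getvalue()

    packed = gzip.compress(raw)              # headers are repetitive: huge ratio
    print(f"tar: {len(raw):,} bytes, tar.gz: {len(packed):,} bytes")
```

Empty files still cost a 512-byte tar header each, which is exactly why the archive compresses so well - and why filesystems with a billion of them get interesting.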


I’ve spent half a month's salary on literature auctions. French, Latin, English, Swedish, 1700s and up. History, memoirs, philosophy, religion, science and math, poetry, stories. Art as well - happened to buy prints of Girodet and Dürer that I enjoy very much.

Did some Duolingo for French just recently and already did Latin two years ago. It’s difficult to read them, but broad context is enough for an overview understanding, and I feel my detailed understanding increasing by the chapter from exposure alone.

Apart from the actual books I enjoy the bidding - similar to, but healthier than, gambling on stocks - and as for the "winner's curse" of overpaying for stuff I really like, eh, I'm aware of it but the dopamine is worth it too. There are some arbitrage opportunities as well, as in finds at local auctions that would sell internationally, but the international shipping for heavier stuff like entire œuvres, and just the time investment to pack and ship something, deter me from doing that. Wouldn't want to damage 1600s books in shipping, I suppose; they go quite cheap locally but fetch more internationally.


Nice. When cash was more prevalent in my life (ie before kids) I got into buying 1700s and earlier books on eBay. Magnificent things. I couldn’t ever part with them, users shouldn’t be dealers!

I love how many absolutely insanely interesting books from that period are affordable. I have a copy of the book in which Bishop Ussher put forward his theory that the earth was made on "the entrance of the night preceding the 23rd day of October... the year before Christ 4004"; that is, around 6 pm on 22 October 4004 BC.

I use it as a cautionary example of data-driven analysis. It was all from lineages in the Bible and ages mentioned there. Seemed like a solid methodology at the time!


This past month, I’ve been reevaluating my dev environment and workflow. My goals are to reduce RSI, be more efficient, as well as learn all my tools as deeply as possible. And have fun!

- I’ve ditched VSCode and gone all-in on NeoVim. I’ve spent a bunch of time watching Primeagen, etc., tweaking my vim config and learning how to navigate as efficiently as possible.

- Switched from QWERTY to Colemak-DH to hopefully reduce RSI. I’m at about 70wpm with decent accuracy after 4 weeks. My QWERTY skills are gone. I like Colemak, but we’ll see how I feel in another month or two.

- Finished my custom hot swappable Sofle keyboard, and spent many hours customizing the layout. I think I’m pretty close to feeling comfortable. I’m using home row mods, which I love. Currently using Kailh box whites (clicky). Might switch to Gateron Brown Pros.

- Been going through a “Build your own git” course, to understand git as deeply as possible.


Strongly urge you to acquire a Logitech MX Vertical mouse. It takes a few days to get used to; now you'd have to pry it from my dead hands.


I went down this path and tried various vertical mice, and settled on the MX V for a few months in the pursuit of reducing wrist pain. After about 4-5 months, I started getting strong wrist pain again, and switched back to a standard mouse. At that point I started looking elsewhere, specifically at strengthening my wrists and joints. I've been doing this about 5 times a week for probably 5 months now, and most of my mouse-hand wrist pain has subsided: https://www.youtube.com/watch?v=iVum3vWlh4Q


Thanks for the video. I’ll definitely try it out. In my original post, I should’ve emphasized that I’ve also been focusing on strengthening/mobilizing my wrists. It’s only been a month or two of concentrated effort, but I think they’re improving a bit.


Congratulations on learning Colemak! I made that journey myself and haven't regretted it once since. I did have to relearn QWERTY, frustratingly, but luckily it isn't nearly as difficult as learning a layout for the first time, and I can now switch between them relatively easily. (For me, a few years on, I now type at 100-120 WPM on Colemak; I was also at 70 WPM four weeks after starting.)


Thanks!

That’s encouraging to hear that you can switch between the two. Awesome!

I’m afraid to start practicing QWERTY too soon, and risk losing my progress with Colemak. Maybe I’ll attempt it in a few months.


I didn't go back to QWERTY until having had a couple of months of uninterrupted Colemak use, so I don't know what the effects of that would be. But yes, I can now switch between the two :) After a few minutes of 'warm up', I can type about 70 WPM in QWERTY and then go straight back to >100 WPM Colemak. Also, I always use QWERTY on my mobile phone, since 'swipe-typing' doesn't really benefit from having frequently-used letters close to each other nor from having them on the home row.


For my issues, the combination of apple trackpad and a proper ergonomic keyboard with a keywell did it: at first Kinesis Advantage 2, then Glove80. Model 100 is also quite good.


Building custom keyboards is fun, but I might bite the bullet and buy a Glove 80.

I don’t own a 3d printer, so designing a custom Dactyl is not very feasible for me.


oooh the sofle. i'm curious about your layout! been using a Moonlander for a few years and while I like it, it's just too big. ordered a sofle variant recently and I've been thinking about switching back to a dvorak or trying colemak when it arrives.


The Sofle has been decent for me. I’m not the biggest fan of thumb key positions (especially the outer one), but I’m getting used to them.

There are only two thumb keys per side. I’ve had to get a bit creative with my layout. One trick I’ve discovered is Mod-Tap. This lets me use my space bar as a layer key (when held), or a normal “space” when tapped. Two functions on a single key. Awesome.

I’ve also been reading this person’s blog to improve my symbol layer and vim navigation (I’m tempted to try the Engram layout, but I’ll stick with Colemak for now): https://sunaku.github.io/engram-keyboard-layout.html


Group theory.

Something is incredibly beautiful to me about classifying the kinds of symmetry things can have.

I’m trying to understand where the sporadic simple groups come from. Starting with the Mathieu groups. So far it seems to be due to some anomaly in Pascal’s triangle, but I’m still trying to put it together. “Another Roof” on YouTube has a good video about this.


I'm over a decade out from my maths undergrad degree, but I still remember how beautiful Group Theory felt. Enjoy it!


I was interested in signal processing with group theory; it was quite interesting to see how matrix multiplication corresponds to operations modulo a finite group. (I am not a math major.)
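
One concrete version of that correspondence (my example, plain Python): multiplying by a circulant matrix is exactly circular convolution, with indices taken mod n - i.e. over the cyclic group Z/nZ.

```python
def circulant(first_col):
    """Circulant matrix: each column is the previous one rotated down by one."""
    n = len(first_col)
    return [[first_col[(i - j) % n] for j in range(n)] for i in range(n)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def circ_conv(a, b):
    """Circular convolution: (a * b)[i] = sum_j a[j] * b[(i - j) mod n]."""
    n = len(a)
    return [sum(a[j] * b[(i - j) % n] for j in range(n)) for i in range(n)]

a = [1, 2, 3]
b = [0, 1, 0]                      # convolving with a shifted delta rotates a
print(circ_conv(a, b))             # [3, 1, 2]
print(matvec(circulant(b), a))     # same thing
```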


Not just multiplication: https://mathoverflow.net/a/63298/5734


I got into cross-compiling Python wheels (e.g., building macOS wheels on Linux and vice versa). Zig's `zig cc` does much of the heavy lifting, but one step in building a portable wheel is the "repair" process, which vendors native library dependencies into the wheel and necessitates binary patching (auditwheel does this for Linux, delocate for macOS).

I wanted to be able to do this cross platform, so I re-implemented ELF patching and Mach-O patching and adhoc signing in Python, and wrapped them into a tool called repairwheel: https://github.com/jvolkman/repairwheel
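
The ELF side of that is less scary than it sounds - the file header is just a fixed struct. A toy parser over a hand-built 64-bit header (my illustration, not repairwheel's actual code):

```python
import struct

# build a minimal ELF64 header by hand: magic, 64-bit class, little-endian,
# e_type=ET_DYN (shared object), e_machine=EM_X86_64
ident = b"\x7fELF" + bytes([2, 1, 1, 0]) + b"\x00" * 8
header = ident + struct.pack("<HHIQQQIHHHHHH",
    3,        # e_type: ET_DYN
    62,       # e_machine: EM_X86_64
    1,        # e_version
    0, 0, 0,  # e_entry, e_phoff, e_shoff
    0,        # e_flags
    64,       # e_ehsize
    56, 0,    # e_phentsize, e_phnum
    64, 0, 0) # e_shentsize, e_shnum, e_shstrndx

def parse_elf_header(data):
    """Pick a few fields out of an ELF64 header."""
    assert data[:4] == b"\x7fELF", "not an ELF file"
    fields = struct.unpack_from("<HHIQQQIHHHHHH", data, 16)
    return {"class": {1: "ELF32", 2: "ELF64"}[data[4]],
            "type": fields[0], "machine": fields[1]}

print(parse_elf_header(header))   # {'class': 'ELF64', 'type': 3, 'machine': 62}
```

Real patching (rewriting DT_NEEDED entries, rpaths, etc.) means chasing offsets into the dynamic section the same way - everything is "seek, unpack, rewrite, repack".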


I've been doing some modernization on an old scripting language used by the game engine I work on [1]. Added a garbage collector, simplified how internal symbols are defined, added a VS code extension with some niceties like syntax highlighting, "Go to Definition", and doc tooltips. Also recently added support for websockets and plan to tackle JSON soon. Oh, and so much refactoring.

https://github.com/ZQuestClassic/ZQuestClassic
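
For anyone curious what "added a garbage collector" involves at its simplest, a mark-and-sweep sketch in Python (the engine's real collector is surely more involved; names here are mine):

```python
class Obj:
    """A heap object that may reference other heap objects."""
    def __init__(self, name):
        self.name = name
        self.refs = []

def mark(roots):
    """Walk everything reachable from the roots."""
    marked, stack = set(), list(roots)
    while stack:
        obj = stack.pop()
        if obj not in marked:
            marked.add(obj)
            stack.extend(obj.refs)
    return marked

def collect(heap, roots):
    """Sweep: keep only marked objects, 'free' the rest."""
    live = mark(roots)
    freed = [o.name for o in heap if o not in live]
    heap[:] = [o for o in heap if o in live]
    return freed

a, b, c, d = Obj("a"), Obj("b"), Obj("c"), Obj("d")
a.refs = [b]
c.refs = [d]; d.refs = [c]          # an unreachable cycle: refcounting alone misses it
heap = [a, b, c, d]
print(sorted(collect(heap, roots=[a])))   # ['c', 'd']
```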


Our company decided that they want to make an electronic device. They've found an electronics engineer who devised a schematics and wrote some kind of PoC firmware. I wrote "driver" for it (really just wrapper around serial port) for our software to consume it.

That guy was busy with other tasks, so iterating on firmware was too slow. So I decided to dive in. I mean I knew C a bit.

So I had to learn STM32 arm, I had to learn low level C, I had to learn assembly, I had to get some understanding of those electronics things to get some sense of it, I had to read tons of manuals and datasheets.

Long story short, I rewrote this PoC firmware into something I could bear. It's so nice to control all software from the start to the end.

Now our company wants to rework this device into "smart", add display with touchscreen and stuff. So I'm digging into embedded Linux programming, LoL.

I generally consider myself a full-stack developer, so I can write frontend, backend, Kubernetes, set up servers, deal with cloud stuff. However, digging this deep feels like testing my limits.


"Now our company wants to rework this device into "smart", add display with touchscreen and stuff."

please don't do that. Add some kind of interface, eg bluetooth, and an open-source app, or open-source/documented protocol, to control it.


USB! I’ve tried and failed a couple of times at understanding how USB works under the hood, from the electrical level, to the protocol, then classes, and also Power Delivery. This time around things seem to make more sense. It started out as an ambitious goal to emulate an FTDI USB DMX converter with the ESP32-S2/S3, but realizing that might be too big a goal, I’m starting small. I want to be able to make a custom device class on the ESP32, and write a driver with libusb.
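
One "start small" exercise that helps: the 18-byte device descriptor is easy to pick apart by hand. A sketch in Python - the example bytes describe a hypothetical FT232-style device, using 0x0403/0x6001, the vendor/product IDs commonly associated with FTDI serial chips:

```python
import struct

# 18-byte USB device descriptor, little-endian, per the USB 2.0 spec layout
raw = struct.pack("<BBHBBBBHHHBBBB",
    18, 1,        # bLength, bDescriptorType (1 = device)
    0x0200,       # bcdUSB: USB 2.0
    0, 0, 0,      # class/subclass/protocol: deferred to the interfaces
    8,            # bMaxPacketSize0
    0x0403,       # idVendor
    0x6001,       # idProduct
    0x0600,       # bcdDevice
    1, 2, 3,      # iManufacturer, iProduct, iSerialNumber (string indexes)
    1)            # bNumConfigurations

def parse_device_descriptor(data):
    fields = struct.unpack("<BBHBBBBHHHBBBB", data)
    return {"bcdUSB": hex(fields[2]),
            "idVendor": hex(fields[7]),
            "idProduct": hex(fields[8])}

print(parse_device_descriptor(raw))
```

Enumeration is basically the host reading this (and the configuration/interface/endpoint descriptors after it) and matching a driver on the class or VID/PID.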


USB is a frighteningly deep rabbit hole. I thought it was going to be easy when I first dipped my toes into the USB stack but boy was I in for a shock.


I had a peek at the USB spec recently, no desire. I thought it would be trivial to even just enumerate USB devices on a Windows box. It is not.


Metal-air batteries/fuel cells. Made a mini aluminum-air battery (you can easily DIY one with household items). It seems that most people consider metal-air batteries to be a dead end, since they aren't green and are generally non-rechargeable, and air cathodes are tricky (sluggish, exotic materials, expensive catalysts). I dove into "alternative" battery and fuel cell research after looking into how to extend the range of my electric motorcycle. I love the electric drivetrain, especially on motorcycles, but lithium-ion isn't up to the task as far as capacity for anything beyond an hour or two of high performance fun. If I could get a compact metal-air battery or hydrogen fuel cell to output just 1 kW for a hybrid drivetrain, range issues could be solved.


I would be interested in discussing your project further, for use on my e-bike.


It's still more of an idea/crude experiment than a project right now. There are some neat videos of DIYers with similar projects though - small homemade metal air batteries and materials experimentation. Aluminum air seems to attract the most attention. There is also a lot of available research.


I was writing an article on reverse-engineering vintage synthesisers[0], and I ended up getting majorly sidetracked trying to find out exactly what year the Hitachi HD44780 LCD controller was first manufactured, or try and find any background information on it. The earliest reference to it I can find online is a 'preliminary' user's manual[1], dated March 1981. I know it's a bit of a weird rabbit hole to go down, but I figured I'd get to the bottom of this mystery. It's a shame that there's so little background information available about one of the best-known ICs in history.

0: https://ajxs.me/blog/Introduction_to_Reverse-Engineering_Vin...

1: https://archive.org/details/Hitachi-DotMarixLiquidCrystalDis...


GoLang and Azure APIs.

I've been attempting to add an OAuth2 device code flow to a Tacacs server with the goal of extending Azure accounts to access network device management planes. Pretty neat, I can get an "enter this device at URI" from the router/switch and let Azure do its 2FA/compliance etc. Currently trying to get token validation working on the tacacs server =).

Ultimate goal is have a reverse proxy web front end kind of like Apache Guacamole that does the Oauth for the user and when they click on a network device, the JWT is passed through to the network device over SSH and thus the tacacs server which is relatively local to the network device which will validate it and let the user into the network device.

Playing around with GPT4/Opus a lot lately and man... I have feelings. They've been a great learning tool to learn the basics of Go though so I'm thankful.

It's going swimmingly /s but I seem to be making progress. Slowly. I'll bake this into my bigger network management tool if it can be secure and makes sense to do so...


Chrome extension build steps. Because the Chrome manifest requires an actual file, you can’t rely on a dev server and hot module reloading. So I’ve hacked Vite's build command and watch command to constantly recompile Chrome extension code as I develop it (TypeScript + frontend frameworks etc.).

On top of that, you have to manually reload the unpacked Chrome extension to apply your new changes, so I’ve hooked solution 1 up to a WebSocket server and a custom Chrome extension that watches all your other extensions and talks to the WebSocket server to auto-reload an unpacked extension anytime the build step completes.

It’s a nightmare hack, but I may just be the worlds most productive chrome extension builder as a result! I’ve released 5 extensions in the last 5 months :)

About to release a big one, but it’s also probably the world's most complicated Chrome extension (LLM + Firebase store + Stripe + Auth + Serverless functions)


Sounds very cool. Do you have plans to open source that?


Here’s a project that does HMR and more: https://wxt.dev/


I haven’t used it but I think I was put off by the fact that updating your manifest restarts the browser

My custom workflow handles everything for me, I never need to restart the browser of anything like that


I think I should but it’s currently a private repo, here’s the current Vite plug-in:

https://www.npmjs.com/package/vite-plugin-extension-reloader

The extension is of the same name


I've been bitten by the Meshtastic / LoRa bug. It's fascinating to see how far these little inexpensive units can reach.

This weekend I was able to reach my home node from a state park 8.2 km away and have been giddy since.


Chasing memory leaks in BCP [1] data load implementation in Babelfish [2]. Spotted unusually high RAM usage, decided to look deeper and got into a rabbit hole. Haven't had this fun of overnight debugging (of Postgres guts) for a long time. As a result found 4 different leaks and one (unlikely to be triggered) crash. And now have BCP data load impl with 100% constant-bounded RAM usage in a DB server process.

[1] https://learn.microsoft.com/en-us/sql/tools/bcp-utility?view... [2] https://babelfishpg.org/


Back in the day my uncle told me about this new thing called Debian that he had compiled over the last couple of days. That was at the end of 1996. I went on to use it for a while in parallel to try things out, and was quite happy in the endless learning bubble.

Fast forward into my first job, there was a Windows machine with a crashed HDD and my task was to recover as much data as I could. Windows tools all sucked hard, even the ones that we had at the company that cost thousands of bucks per month. That's when the power of the Linux ecosystem hit me.

Went on to being an IT forensics guy, then went into pentesting, then into blueteaming, and now I have my own startup that builds better EDR software.

I still have to think about that coincidence. Literally nerd-sniped my life, otherwise I probably would have still been a sysadmin or something.

On the way I learned a ton of new things, from programming languages and compiler bugs to exploit techniques and kernel development and even hardware design. If you go deep enough in hardware pentesting, the whole phreaking scene is amazingly welcoming. The CCC chapters are also amazing, and there's just so many opportunities to grow your knowledge and experience in the field. It never gets boring!


History of early Christianity. I found the Gospel of Judas to be extremely fascinating.


I find that subject fascinating - can you recommend any resources or literature on this? I'm not religious, but I find religion fascinating!


Honestly, I mostly watch podcasts. Misquoting Jesus, ReligionForBreakfast, Centre Place, and History Valley are some great ones.


Fun exercise to try and list them out! My last couple of weeks:

- 3D-printable parts storage solutions (via: I found some part storage bins in the discard pile at a local hackerspace)

- MITM proxy to snoop on GitHub Copilot API requests (via: we're building a Jupyter AI assistant thing and got curious how other players do it).

- DIY robot arms (via: I'm making several for a nested 'you pass butter' joke, via a casual conversation about robotics being accessible now. YouTube is amazing at surfacing smaller makers once you start watching a few videos on a given topic)

- Learning about Oauth and JWT (via: 'why is auth still a pain?')

- Invertebrate UV fluorescence (via: that millipede is glowing under my UV torch!)

(a small subset of these end up documented https://johnowhitaker.dev/all.html eventually if you're curious to see a longer historical list)

I like rabbit holes where following the curiosity gradient to a satisfying conclusion is possible. "How does X work" leads eventually to code that does X. I'm less happy when they lead into a tangle of complexity, like digging into a library only to find weird abstractions 6 layers deep or trying to compare 18 different alternatives in a field I don't know very well.

OP I'd also like to hear yours!


> OP I'd also like to hear yours!

Today I gave some thought to what would be a fitting name for my boat (if I were to rename it).

One option: the glider pattern from Conway's Game of Life. Instantly recognizable by true hackers, just a weird symbol to others.

Of course a quick check on Wikipedia. Knowing that I'm always interested in things small / simple / computing, so... cellular automata. Which led me to varieties used to simulate or help understand biological systems ("systems biology" - if only that field had existed back when I left high school).
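
The glider itself fits in a few lines, for anyone who wants to watch the boat's would-be namesake crawl - a set-based Life step (my own toy code):

```python
from collections import Counter

def step(live):
    """One Game of Life generation over an infinite grid of live cells."""
    counts = Counter((r + dr, c + dc)
                     for r, c in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
gen = glider
for _ in range(4):
    gen = step(gen)

# after 4 generations the glider has moved one cell down-right
print(gen == {(r + 1, c + 1) for r, c in glider})   # True
```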

From there on: artificial life, Core Wars & co, self-replicating machinery, and... Astro-chicken (deserves a HN post of its own, imho).

Btw. it's amazing to see how many big, open questions there still are, related to the origins of (biological) life, and evolution. Eg. full simulation of a single cell organism: never been done (too complex).

Next up: a cup of hot chocolate.


Started like this. I had a few talks with startup founders interested in if they were hitting the market at the right time (their "why now"). Seeing more interest I ended up running a workshop on the timing topic for a few accelerators. I kept digging into what clues there are that a product will hit the market at the right time. Researched how the same products have been reintroduced again and again over years until finally they work out. Most surprising to me is how many people realize timing is important, yet how few have a framework to assess timing.

Fast forward a few years... I just wrote a book on the timing topic, called Why Now: How Good Timing Makes Great Products.


Ham radio!

I got into it through wanting some cheap radios to keep in my house, so I wouldn't have to go all the way upstairs when I needed to communicate with my daughter.

Well let me tell you Baofeng radios are extremely cheap but really flexible. I got these things for the simplest possible use case but after realizing their potential I just had to learn more about the space. You can adjust their configuration with a tool called Chirp and you're off to the races!

I attended a local severe weather awareness event where I met some hams who were part of an emergency response network. It's really cool to learn about how these communities operate. It's legal to receive even without a license - you only need the license to transmit.

I plan to take the technician test soon and get my license so I can help out at a nearby bike event. The area is incredibly rural so there's no cell coverage and the ham operators are really helpful in coordinating things.

Anyway, I feel like the hobby is a bit of a dying art, but it's something that seems like it would have a lot of appeal to the programmer crowd.


If you want to transmit with your Baofeng before getting a HAM license you can always get a GMRS license. There's no test, and the license is a household license that lasts 10 years. IIRC the license costs about $30, you can get one on the FCC website (https://www.fcc.gov/wireless/support/knowledge-base/universa...).

If you have a repeater in your area there may be a net where other licensed operators check in every week. I've heard people check in from over 60 miles before. You can check this website for repeaters in your area: https://mygmrs.com/repeaters. Good luck on your technician test!


Golf simulation. I found myself in a unique situation where I had the space, computer and projector. So I got a mat, screen and the key piece, a launch monitor. Now I’m playing 9-36 holes of golf simulation almost every day. The tech is great, the setup is never ending. Golf simulation tech is blowing up right now too. I’m not even necessarily a golfer fwiw. But I love the simulator.


I'll never set it up but I'd love to hear about the tech choices/options.


The software that I am using to simulate the courses is GSPro. There are hundreds of courses on there. It's got just about every course you could imagine.

Launch monitors come with their own software too and there are a few other options for simulating courses. E6 is one and GolfClub2019 is another. AwesomeGolf. GSPro is hard to beat though.

For launch monitors, there are two main types: camera based and radar.

Garmin offers the most affordable radar based option with R10 for around $600.

Bushnell offers an all camera model and for $2000 you get all the ball data. For a subscription fee you can play GSPro and other 3rd party golf apps using the Bushnell and for another fee you can get all the club data. This model sits beside the ball.

FlightScope has a launch monitor that operates with radar and/or camera and sits behind the player. For about $1800 you can get the ball data, and it's free to connect it to GSPro. For another $1200 you can get club data, with impact location. It's unreal how accurate this thing is. I've had mine for about a year, and since then they have pushed some incredible updates, including what they call "Fusion", which combines the camera and radar for readings. It's how their really expensive $20k+ unit works.

At the very top end there are monitors that go on the ceiling and give you readings from there. And then there are commercial simulators where the floor will move up and down. It really never ends. One company showcased lighting from above that shows you where you should putt. It never ends...

https://gsprogolf.com/

https://www.garmin.com/en-US/p/695391

https://www.bushnellgolf.com/products/launch-monitors/launch...

https://flightscope.com/

https://www.foresightsports.com/pages/gchawk

https://uneekor.com/


To what extent does it make you a better real life golfer?


It’s helped me immensely. Just recently I feel like I’ve really broken through to another level. I played real golf last week and had the best round of my life. Shot par+2 on the back on what’s considered one of the toughest courses in the area. Crushed the group I was with. I was teeing off first - I’m usually the guy not keeping score because it’s gonna be 110-130…

I had never practiced golf in my life (late 30s) and would play 5-8 rounds every summer for the last 10-ish years. So I had a ton of room for improvement. I also grew up playing competitive tennis, so the repetition of hitting a lot of balls is familiar and scratches an itch I’ve longed for.

I’ve just learned so much about golf from simulating golf that I never would have taken the time to learn.

And I’m sorta anti lessons and more learn on my own. So the sim is right up my alley. It’s also great exercise. 330 cals in less than an hour and I can play 18 in less than 40 mins…

I think everyone agrees it makes you a great ball striker. At the very least. Then the rest is golf. Which can be very very very tricky, even for the best of players.


OK, yeah. I'm never setting that up : D


Rebuilding a Ford Cleveland V8 - the Australian specific 302 cubic inch version.

One broken engine and one non-operational one, turned into a single good motor. American thin-wall cast V8 engines are fairly similar, but different enough that if you're not having them professionally built you have to do a bit of puzzle solving (especially in the timing case). Plenty of youtube videos and forum posts on the Cleveland and it's been fun piecing it back together and learning about new things like installing cam bearings.


Visiting and documenting abandoned mine sites across the U.S. desert southwest.


+1 for literally rabbit hole'ing :)


Is hantavirus a possibility?


How dangerous is that on a scale from 1 to 10?


Trying to _really_ understand the postgres query planner's `EXPLAIN` output. We have long-running embarrassingly parallel processes where the throughput will sometimes completely tank. Got worse when we upgraded to PG16.

Trying to compare good query plans with bad ones, and then work out what changes we need to make to the slow queries is ... interesting.
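
The comparison gets less painful with `EXPLAIN (ANALYZE, FORMAT JSON)`, which returns a machine-readable plan tree you can walk programmatically instead of eyeballing. A rough sketch that flags nodes where the planner's row estimate diverges badly from reality (the helper names and the 10x threshold are invented for illustration, not anything built into postgres):

```python
import json

def walk(node, depth=0):
    """Yield (depth, node) for every node in an EXPLAIN (ANALYZE, FORMAT JSON) plan tree."""
    yield depth, node
    for child in node.get("Plans", []):
        yield from walk(child, depth + 1)

def misestimates(plan_json, factor=10):
    """Return nodes whose actual row count differs from the planner's
    estimate by more than `factor` in either direction -- a common
    reason a good plan flips to a bad join strategy."""
    plan = json.loads(plan_json)[0]["Plan"]
    bad = []
    for depth, node in walk(plan):
        est, act = node.get("Plan Rows", 0), node.get("Actual Rows", 0)
        if est and act and max(est / act, act / est) > factor:
            bad.append((depth, node["Node Type"], est, act))
    return bad

# Toy fragment in the shape `EXPLAIN (ANALYZE, FORMAT JSON)` emits:
sample = json.dumps([{"Plan": {
    "Node Type": "Nested Loop", "Plan Rows": 1, "Actual Rows": 50000,
    "Plans": [{"Node Type": "Seq Scan", "Plan Rows": 100, "Actual Rows": 120}]
}}])
print(misestimates(sample))  # → [(0, 'Nested Loop', 1, 50000)]
```

Nodes with a big estimate/actual gap are usually where the plan went wrong, so they're a decent place to start when diffing plans across PG versions.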


I picked up a carbon-steel pan at IKEA because that was all that was left. I knew it takes some work to season. I did not know there's a whole community of carbon-steel nerds committed to getting the most perfect seasoning to fry an egg. Now I know more about the topic than practical or necessary.


I switched to Neovim from Sublime Text after trying copilot in Sublime, feeling sad, and then watching The Primeagen and his glorious mustache for too long.

Ostensibly I wanted to be able to code on the production server like a miscreant with the same tools as my laptop.

However I just wanted to regain command of my dev environment after years not coding.

I also reorganized the furniture in my office and got weirder lighting to make it hacker friendly. I bought a new desk to solder electronics.

Most people know me as a partnerships marketer or product manager but I am a compsci at heart. This made me happy.


I've been attempting to switch to neovim off and on for about a year now. VS Code is so much easier to get started with though. And adding support for a new language is just an extension-install away.


That’s fair. I can’t use vs code on the server was my logic. But also it was a hacking challenge.


Why not really? Remote editing through ssh is vscode's superpower!


There’s no reason. I just wanted to be cool and use neovim! Lol :)


Neovim — as good for a mid-life crisis as a Porsche, and a fair bit cheaper.


True. My friend is selling electrified retromodded Porsches. I want one but I am poor.

However I can salve my ego spending a day flipping through neovim colour themes.


So, how was the switch to Neovim? Which plugins did you settle on?


I used nvchad and I am configuring it from there. Here’s my fork.

https://github.com/sunir/NvChad

Overall I still think I am faster in sublime text. I get stuck in the different modes. I find shift select and grep to be pretty frustrating.

However I will muscle through this. Every challenge is another set of vim stuff to learn. I have faith I will love it later.


MH370. There was no intermediate step: I saw somewhere it was the 10th anniversary of its disappearance and decided to get a quick update on the current status of all the evidence and theories. Ended up spending the entire day reading about radars, pings and aviation controls.


What did you find is the most reasonable explanation?


Pilot suicide by a long shot. There is no other alternative theory that comes even close.

What was interesting was seeing them piece the theory together from very fragmentary and scant evidence (it's even more mind-blowing how little evidence there was to play with, given we are talking about a 747).

The true mystery left is how did he execute the suicide and why. How did he dispatch the co-pilot? (There was a very small window to do so). What did he do in the final hours (was he alive for the entire duration)?


Did you chance upon @MentourPilot on youtube and his breakdown of the MH370 investigation + recent update? If not, I am afraid you may have another hour to spend, but it's very likely you have already seen this.



It was a 777. Back to the rabbit hole with you!


"Recently" (~5 years ago) I got very interested in mathematics, and decided to self-study up to an undergrad degree-equivalent level.

I researched what topics are typically included in a 4-year math major university program and what textbooks are used to teach those topics at MIT. Then started grinding all the way through from beginning to end.

It was so awesome that upon finishing, I promptly started all over again... but with physics instead.


The wonderful world of research paper identifiers.

DOI (Digital Object Identifiers) are used by many modern research papers as sort of a UUID for papers, run by doi.org.

But they're discipline-specific. So they're used widely by certain disciplines. But others use different databases.

So for biology-related papers, NIH's PubMed ID. Or for Astronomers, Bibcode.

All are "global" identifiers and each has some kind of consortium that's trying to make theirs the One ID. DOI seems to be the closest.


DOI is the main one. Most published papers and many other things have them. They're newer, so older identifiers often hang around alongside them; I'd wager most new PubMed articles have DOIs too.

There are several registries, crossref is the big one in the west but it's not the only one. They have probably the best access to the data out of all of the larger registries though.

DOIs are pretty good, though not truly persistent, and there's no versioning built in, so people invent their own formats.

I spent a lot of time working with these as part of https://dimensions.ai for a decade or so. Happy to chat if you want to delve in more.


Cool!

I know you guys work with ORCID too, right?

We work with CrossRef to get data but if a DOI is missing, then things get harder to find in CrossRef in our experience.


Man, I envy the dedication and energy y'all have. I retired 10 years ago and have completely lost motivation and the ability to stick with anything.


Interesting, why did you lose motivation?

I saw my grandparents go through 10 years of doing all the retirement things (beach, hobbies, social hour, etc.) to keep them more than busy. Then they kind of did it all and just started having social hour earlier and earlier every year into their 80s. Cocktail hour started about 11:30 am in the end.

I'm starting my retirement and have so many interests/subjects I want to learn, so I feel like I can stay pretty entertained for more than 10 years, but maybe not.


Dunno. I just have little motivation to put hours and hours into a challenging project that I probably won't complete. When I was working, I never had any side projects.

It's not like I'm sitting around on my ass all day. I walk the dogs, bicycle, rock climb indoors, and travel. I'm starting Tai Chi and just bought an e-drum kit. After I retired I spent a year renovating one house and have spent a lot of time this winter on another. Also lived off-grid in a forest for three summers and am doing it again this year. Off-grid ain't easy. Can't just turn on the faucet and presto hot water.


Ooh, another one I have: I cosplay as a sysadmin.

A while back I had bought a domain for my email, and I thought “I should write a blog about creating a blog”. At first I hosted it in GitHub Pages, but then I realized I have a perfectly good Raspberry Pi. It’s not like I’m ever gonna get a lot of traffic… so why not self-host?

That sent me into a very deep rabbit hole. How do I make sure my website doesn’t go down if my IP address changes (no static IPs for me, sadly)? How do I create and automatically renew a certificate? How do I achieve high availability?

A few years passed, and now I have a cluster of a few Raspberry Pis running Docker Swarm, managed by Portainer, with stacks running multiple websites and services I self-host. I’ve learned a lot!

My next move is going to be a full overhaul: Docker Swarm is blocking me from setting up some things the way I want to, so I want to build a new cluster using Kubernetes. I’ll use the opportunity to overhaul the network layout as well.

The funniest part is that I haven’t written a single blog post in 3 years. I wanted to add responsive images so I could add diagrams and photos. Somewhere along the way I realized I shaved too many yaks.


What's your stance on using something like cloudflare to proxy your Pi fleet?

Cloudflare tunnel would solve the issue of static IPs and you also get DDoS mitigation and caching. Caching on the edge would be especially beneficial for something like a blog which is likely to be fully static or SSG at most. Although it won't help you with the writing part (;


I’ve never even considered it, really… I wanted to stick to self-hosting and FOSS as much as I could, and it all gets very little traffic.

You’re right that it’s a good idea, though. I’ll look into it!


Can't believe I haven't heard this usage of the phrase "yak shaving" before, I only know it from the classic Ren and Stimpy christmas album that I had when I was a kid :)

https://www.youtube.com/watch?v=5mmISldi060

I realise now that I've spent most of my career just shaving yaks.


The big hole - I basically have learned the fun bits of STEM(3d Printing, Laser cutting, CNC, microcontrollers/electronics etc) over the last few years. I graduated with a ME degree 25 years ago but left engineering shortly after to go into the startup world. Built a career out of helping scale startups and now decided to retire and focus on my interest as I never had time or energy.

My current rabbit hole is tuning my home's boiler to be more efficient in its use of gas. It is an interesting engineering problem: the lack of a feedback loop (the thermostat is dumb) and my home's specific variables make smart thermostats useless, and boiler sizing is more complex than most installers understand. My goal is to add some features specific to my home to reduce gas consumption based on more variables - outside temp, minimum outside temp, sun/clouds and home variables like brick wall temp, fireplace heat, boiler controller settings, as well as the wife's 'I'm cold' variable.

I am using a microcontroller and custom current switch as well as IOTSTACK to send inlet/outlet temp and gas valve & circulator state (on/off) to influxdb/grafana so I can see what is happening between thermostat and boiler controller. I have identified a few freebies in terms of consumption and inefficiencies. I have added a relay to delay the gas valve once the boiler starts cycling to reduce "short cycling", which wastes gas on startup and causes a mini explosion every time the gas lights. I have managed to cut cycles in half, which helps with wear and tear as well as the number of boom sounds coming from my boiler room :)
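
The anti-short-cycle relay behaves roughly like a hold-off timer: only open the gas valve once the call for heat has persisted for a while. A toy sketch of that logic (invented for illustration; the real relay is hardware, not code):

```python
def valve_state(call_samples, delay):
    """Anti-short-cycle sketch: the gas valve opens only after the
    thermostat's call-for-heat has been continuously on for `delay`
    samples, filtering out the brief cycles that waste gas on
    ignition. `call_samples` is a per-second on/off signal."""
    out, run = [], 0
    for call in call_samples:
        run = run + 1 if call else 0   # length of the current "on" streak
        out.append(run > delay)
    return out

# A 2-sample blip never lights the burner; a sustained call does:
print(valve_state([1, 1, 0, 1, 1, 1, 1, 1], delay=2))
# → [False, False, False, False, False, True, True, True]
```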

I would love to go down the simulink rabbit hole but I think I will not.


Why doesn't my Powershell terminal work in Visual Studio Code?

What do you mean my launch.json file is missing? It was there yesterday?

Wait, I can set up custom launch settings in my launch.json? What else?

Ok, so I've got seven different launch settings in there, and now to see if I can have one used for markdown for my markdown word editor.

Oh, neat, lots of extensions for markdown.

Wait, you can install vim?

An hour later, and I've completely re-broken my VSC and am reinstalling from scratch.


Was into cine drones for a while; then discovered FPV. Spent dozens of hours on simulators, now flying actual FPV drones in manual mode. Incredibly addictive. I'm having a hard time focusing on anything else right now.


Sounds very fun! I also spent many hours with simulators, but unfortunately never got to the real-life stage, as that was just about the same time as FPV flying was made effectively illegal here in the United Kingdom.

For those who are curious, it is not one single law that makes this the case, which is why many people will tell you that it's still allowed. However, with the combination of privacy laws forbidding the use of aerial cameras, UAV regulations limiting the locations you can fly, limitations on certain radio bands as well as specific rules about maintaining line-of-sight, it is not usually possible. How could it work? Perhaps if you live in a rural area with no nearby air traffic, have permission and access to use a large patch of land (such as a farm) from which to take off, a willing partner to maintain line of sight (which of course precludes many of the stunts often associated with competitive flying) and you have done the paperwork. Not really hobby territory for most people at that point :(


Heh. I was building/programming/racing FPV drones in 2013/14, back before teenaged reflexes and rich dads paying thousands for top-line gear became a thing and easily started beating my 50-year-old eyesight and reflexes and self-imposed budgets of only a few hundred bucks...


Any tips on getting started?

Is it possible to start super cheap just to see if I like it and then upgrade?


For starting on a simulator you need a remote control (gamepads can technically work but are not recommended). The Jumper T-Lite V2 is around $60. Liftoff is one of the most versatile simulators and costs around $20 on Steam (there are lots of others).

Then to start in real life, complete kits from BetaFPV or GepRC are around $200 (including drone, rc and analog goggles); you can find them used for about half that, in excellent condition.

But there is NO POINT in trying to fly an actual drone before doing plenty of hours on a simulator: you would crash constantly and destroy the drone before you even get started. So just start on a simulator. 10 hours is the absolute minimum you'll find everywhere, but I'd recommend around 50 (you can listen to podcasts at the same time).

If you want to go the extremely cheap route you can start with a cheap simulator (FPV Freerider, $5) or even a free one (FPV SkyDive?) and use an existing gamepad -- but gamepads really are confusing and don't work like RCs (the throttle joystick should not center automatically).


Suno and Udio.

Spent WAY too much time adapting a Buddhist sutra into a heavy metal banger: https://www.youtube.com/watch?v=H-5Y9Z7DK4s


Yeah, digging this!


I got a ZSA Voyager split keyboard and then spent weeks exploring custom layouts. The first question was QWERTY vs something better. Then there were layers and layer navigation. And should I swap out the key switches? And Keyboard Maestro.

Now I'm trying to abandon 30 years of muscle memory and typing at 4 wpm while I learn Colemak-DH. Maybe what I should really do is build a custom 34-key board...


I've been researching and planting fruit trees and edible plants. Looking at paw paws, peaches, pears, berries, persimmons and tea bushes.


I'm in a paw paw/persimmon zone, and figs grow nicely. One of the lowest-maintenance trees that don't have much insect pressure. I thought they were notably absent from your list. The variety I chose tastes much like a peach.


Yes, they are good too. I have planted three of those. The hardest part for me is making sure I prune them right. The jam reminds me of strawberry...


Reverse engineering android apps. I wrote a bit about it in [0]. In the weekend I also started doing another one. It's interesting to see how these apps behave.

[0] https://github.com/benhamad/blog/blob/main/2024-04-12-dramal...


I couldn't find any containers for paper filters for the AeroPress XL that didn't look like cheap 3d-printed garbage, so I've been going down a rabbit hole of how to build bronze articulated joints.

I'm building a paperweight inspired by vintage brass table lamps to hold the papers in place on a wooden platform.


Got an ARTinoise Re.corder (an electronic wind instrument, basically shaped like a recorder, but MIDI) for my wife. Works with an iOS app for basic sounds. She's spent very little time with it :-)

The REAL rabbit hole is the astounding amount, and quality, of AUv3 plugins for iOS. Sounds, effects, looping tools, MIDI things, just... wow. And almost all of them are under $20, and many are free! I've spent less on a dozen software toys than on the first two guitar pedals I got. And infinitely more powerful.

Check out this video of someone doing the looping thing way way way better than I'll ever be able to (but it's fun to work towards a goal). Software she's using is called Loopy Pro, another amazing thing:

https://www.youtube.com/watch?v=T1O0pwUMbnw


I spent the last week evaluating drag and drop form designers and came to the reluctant conclusion that I’d be better off building my own. Long story. So now I’m building a drag and drop form designer, quite the deviation from the road I need to travel down.

It's almost certainly a rabbit hole, but at least I’ve forewarned myself.


I'd be interested in code if you produce something you like.


Let’s see how I go. There are quite a few out there - formio and formily came closest. I would probably have chosen formily, but the main docs were all in Chinese and I couldn’t get it to work.

I also looked at vueforms and surveyjs, the builders are not free though.


Lots of really cool answers here.

Electric Bikes: hub motors and mid drives are a really great spring/summer rabbit hole to go down. So many form factors of ride, and you can also kill two birds with one stone by going for 60V lawn care equipment (there are adapters on eBay to connect the batteries to your bike).


Two very very deep rabbit holes in the last 6 months:

- Designed/built a small USB controlled pan/tilt camera head to control the mirrorless I use as a webcam (couple of servos, gears, belts), and then designed/built a custom ortholinear keyboard with a joystick to control the camera (custom PCB, CNC'd aluminum case, etc)

- I'm a pretty big runner, built my own web based calendar UI that integrates with Google Calendar where I can type in workouts like "1 mile warmup @z2 + 5x(30 seconds @ 6:00/mile + 0.5 miles recovery) + 1 mile cooldown" and this gets parsed/total weekly mileage gets tallied. The next step down this rabbit hole is building a small iOS app to automatically generate Apple Watch Workouts using WorkoutKit.
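
The mileage-tallying half of a notation like that is mostly two regexes: expand the `Nx(...)` repeat groups, then sum the explicit mile terms. A toy sketch under invented assumptions (repeat groups aren't nested, and paces like `6:00/mile` are ignored since they aren't distances); this is not the parent's actual parser:

```python
import re

def total_miles(workout: str) -> float:
    """Tally explicit mileage in a workout string like
    '1 mile warmup @z2 + 5x(30s @ 6:00/mile + 0.5 miles recovery) + 1 mile cooldown'."""
    # Expand Nx(...) repeat groups into N copies of the body.
    def expand(m):
        n, body = int(m.group(1)), m.group(2)
        return " + ".join([body] * n)
    workout = re.sub(r"(\d+)x\(([^)]*)\)", expand, workout)
    # Sum 'X mile(s)' terms; '/mile' pace notation has no whitespace
    # before 'mile', so the pattern skips it automatically.
    miles = re.findall(r"(\d+(?:\.\d+)?)\s+miles?\b", workout)
    return sum(float(m) for m in miles)

print(total_miles(
    "1 mile warmup @z2 + 5x(30 seconds @ 6:00/mile + 0.5 miles recovery) + 1 mile cooldown"
))  # → 4.5  (1 + 5*0.5 + 1)
```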


Very interested in your first rabbit hole. Which servos did you use? Which gears? For me, it would be to use with an action camera. How many hours did you spend on it before you were satisfied? I've seen some arduino-based projects to do that, but servos look quite bulky... and with the right gears, very little torque / power should be necessary. But i have not spent the time yet.


Apologies for the metacommentary but do you understand what you've done here?? You've collected together a bunch of rabbit holes into one all-powerful thread that some people might never be able to leave! So I'm gonna keep it bookmarked for later, thanks...


I haven’t had time to even look, but etcd is on my list. It is a distributed key/value store for node configuration (I think!).

The reason why I want to learn more about it is I feel it is like a base building block of distributed systems and may be easier to grok and even write a toy version than a bigger thing like kubernetes or a leaderless distributed datastore. I would also learn some go and know how a critical piece of kubernetes works.

What led me there is practicing for a damn system design interview. As much as this whole topic is controversial on HN the grinding has really got me curious about the tech that runs at larger scales and how it works under the hood.


Well... I wanted a dedicated AI/ML box here at home, with enough horsepower to do some interesting AI stuff (but without breaking the bank totally). So I built myself a new PC from parts for the first time in around 20 years. That was ... interesting. Building a PC is - in many ways - not that different than it was in 2005. But it's also changed in many ways. So I spent a lot of time researching and picking out parts, and then building the system. Which led me to things like spending hours watching youtube videos on "What's the best thermal paste" and "what's the best CPU cooler" and "are thermal pads better or worse than thermal paste" and "Do you need one of these CPU gasket things if you're running socket AM5" and so on.

I just got the box finished up over the weekend and it's working really nicely so far. I went with an MSI MEG x670e "Godlike" motherboard, AMD Ryzen 9 7950X CPU, MSI Gaming Radeon RX 7900 XTX GPU, and 64GB of DDR5 RAM. The whole thing is running Ubuntu 22.04 (I was going to use Alma Linux, but the Alma installer wouldn't even start), ROCm 6.1.0 and Ollama. So far I've mostly been working with the LLM stuff using Ollama and the llama3 model.

Now I've started using Spring AI to interface with the Ollama API. Next steps: figure out "function calling" with Ollama (which doesn't seem to be supported by Spring AI yet, boo) and some "agentic workflows" and multi-agent stuff...


Unrelated to work: the black hole rabbit hole.

It started with an article about the hypothesis that planet nine may be a primordial black hole with 3-6 Earth masses.

What’s a primordial black hole? It’s one that formed in the first seconds after the Big Bang. We don’t know for sure they exist but many theories and simulations predict them.

They’re an excellent dark matter candidate. Could it be that simple? Could at least a lot of the missing mass be tied up in little baseball sized embers from the birth of the universe that rarely interact with anything so we don’t see them? They’d be small, would rarely interact, and unless they are sucking in mass (causing a hot accretion disk) would be dark.

Then I got onto Hawking radiation and whether micro black holes could exist. Along the way I read about loop quantum gravity (LQG) which looks to me like a decent stab at unifying QM and GR that’s much less baroque and more testable than string theory.

That then led to the LQG “bounce” hypothesis for black holes. See LQG does away with true infinite mass singularities. Instead a black hole would be matter packed to its theoretical maximum density (which is still insane). From there it would quickly “bounce” and become a white hole.

So wait… how do black holes persist then? Time dilation! From the hole’s frame of reference it collapses and then instantly bounces and goes kaboom. From our frame of reference, though, all that gravity slows it to such a crawl that the black hole phase at or near max density looks like it’s stable. The bounce takes billions to even trillions of years!

Last but not least I learned about the black hole starship idea. It’s a set of ideas about how far future intelligences could use black holes as mass energy converters to reach relativistic velocities. Might be somewhat easier (for crazy sci-fi values of “easy”) than handling antimatter. This also gives SETI yet another wild extreme technosignature to look for.

… and back to the beginning I found a post about how if planet nine were a PBH we could use it to yeet probes to the stars at meaningful fractions of c… at least if we could make them able to survive insane g forces. Unlike the black hole starship this would be feasible today. It’d just be a gravity assist off a ludicrous gravity well.

Here I thought black holes were dull. Turns out they’re the most extreme objects in the universe and a whole lot of the most amazing physics intersects around them. If there is any way we could tap into the phenomenon we could potentially access sci-fi levels of energy too.


Neat! Is the bounce idea incompatible with hawking radiation?


No. Unless I didn’t get it Hawking radiation is part of what this looks like to an outside observer. But I would lean on a physicist more familiar with the loop quantum gravity model to explain how these relate.

The bounce idea is super neat because it feels less “magical” than a true singularity. A black hole is just a whole lot of mass stuck in a time dilation tar pit… from our frame of reference.

There are other implications too. From what I read LQG may allow stable micro black holes due to quantum effects dominating at small mass, naked singularities (well not true singularities but regions of off the charts mass energy concentration not hidden behind an event horizon), and Hawking radiation subject to quantum spectral effects similar to how emission spectra work.

It also resolves the black hole information paradox. All the information just bounces back out. Easy.


Building a CRUD app.

It's been a while since I've worked on a CRUD app so I'm finding the whole thing quite interesting. The purpose of the app is to solve a scheduling problem.

I've written my own CDCL SAT solver (now just using Google OR-Tools), and on the app side I've jumped from Phoenix (Elixir) -> Dream (OCaml) -> axum (Rust) -> Django. I feel like Phoenix probably perfectly suits what I'd like to do with this app (long-running tasks and collaborative editing), but I'm at the point where I want to support this app long term and I don't see me not being familiar with Python anytime soon.


Interesting, what’s the scheduling problem?


Writers Festival program: basically you have 100+ authors, a handful of venues, and 3 or 4 days to run the festival. The nature of writers festivals is interesting because there are a lot of panel sessions (multiple authors in one session). The formulated problem is to produce a valid schedule such that the number of required accommodation nights is minimised, ultimately to reduce operating costs.
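
At toy scale the objective is easy to state in code: assign sessions to days under a per-day venue capacity, and charge each author one night per day between their first and last session. A brute-force sketch (structure invented for illustration; the real problem sizes need a SAT/CP solver, as the parent describes):

```python
from itertools import product

def min_accommodation_nights(sessions, days, capacity):
    """Brute-force a tiny festival: assign each session (a set of
    author names) to a day, with at most `capacity` sessions per day,
    minimising total accommodation nights. An author whose sessions
    span days d_min..d_max needs (d_max - d_min) nights."""
    authors = {a for s in sessions for a in s}
    best = None
    for assign in product(range(days), repeat=len(sessions)):
        if any(assign.count(d) > capacity for d in range(days)):
            continue  # a day is over venue capacity
        nights = 0
        for a in authors:
            ds = [d for s, d in zip(sessions, assign) if a in s]
            nights += max(ds) - min(ds)
        if best is None or nights < best[0]:
            best = (nights, assign)
    return best

# Three panels over two days, two venues (so capacity 2 per day):
sessions = [{"alice", "bob"}, {"bob", "carol"}, {"alice", "dave"}]
print(min_accommodation_nights(sessions, days=2, capacity=2))
```

With all three sessions unable to share a day, someone has to stay overnight; the search finds an assignment costing just one night.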


"What if I were to gather these five recipes that really worked in a future... book?"

Bookbinding has fascinating details.


Due to all the recent BSD posts I have spent the past month exploring NetBSD and OpenBSD. Really enjoying the journey and finding that I can do everything I am currently doing easily on both.


I tried creative coding. Sonic Pi, Processing, p5js, there are so many frameworks built around creative coding that I didnt know even existed. It was really fun.


Installing and exploring V7 unix (1979) on a PDP11 emulator.

Crazy how familiar and yet different things are.


Are you a reader of Virtually Fun?

That's exactly the kind of topics they explore.


I wasn't, but I am now. Thanks for the tip!


I'm slowly simplifying all of my projects. I'm constantly aiming for simpler, lighter, lower maintenance: swapping Nginx for Caddy, PostgreSQL for SQLite, long configs for short ones, dynamic sites for static site generators.

It's fun to reach a point where things just run forever at zero expense.

In parallel, I work hard on developer experience just for myself. I finally get the greybeards and their keyboard incantations. No UI can beat scripts and muscle reflex.


ERC-404 draft standard

an amusing AI generated origin that entertains the humans sufficiently well, while solving some frictions the humans encountered

basically somebody launched a crypto token and rugged it for some quick cash. it was a hit because their code, coaxed on and perhaps entirely generated by ChatGPT, did fun things:

- issues an NFT from a collection whenever you buy a whole token, but not when buying fractions

- transferring the NFT transfers the whole token

- while selling a non-whole unit of the token irreparably burns the NFT

community said “no that was awesome, let's keep doing that”, but the original dev could only think about doing more rugs, so some in the community made the Pandora token as the new reference implementation and are pushing this draft standard’s development along

in reality it solves some frictions with token + NFT launches, any NFT collection launch, and multiseries management within 1 contract. Something ERC-1155 tried to achieve in a more complex way. now people can launch NFT collections directly from an AMM’s liquidity pool like Uniswap. No direct boutique websites or NFT marketplaces needed to trade the NFT or create liquidity. And it has a fun incentive for holding on to the tokens.

so far in practice, communities make separate markets for the NFTs, with them trading at a premium to the token, as the NFTs in the limited-size collection get burned over time by sellers and develop their own scarcity attributes, while the fungible token trades at a different, lower price, allowing for roundtrip arbitrage. Automatic community with the arbitrage features. i.e. a Pandora NFT trades at $7,000 and comes with 1 token, while the token market alone trades at $5,800.


AWS Bedrock. Set it up to experiment with different models in one place. Turned out to be fascinating; the UI is far more friendly than AWS usually is. Cool stuff:

Guardrails: No longer do we have to use the AI to fight AI, AWS has presets for blocking inappropriate content going into or out of the AI block.

Evaluation: Basically just give it a JSONL file of all the inputs and expected outputs, and compare two models or judge the quality of a model. This can be done by a human (it even manages the access of this team of humans). But more interestingly, it can be automated or done by AWS workers.

Knowledge base and agents: Seems like it's only for Claude, but damn, Claude has been impressive for a good price. Claude Sonnet (the mid-range) has been about as good as GPT-4 for half the price. The weakness has been things like knowledge base and AWS lets us just quickly set up vector DBs and embeddings. Then the agents feature lets us connect with it and combine it with all of the above.


Bluetooth coffee mugs. I disliked Ember’s app, particularly on Mac, so I built a substitute for the menu bar. While expanding to knockoff mugs on Amazon, it was entertaining to see how bizarre and inefficient some BLE implementations were. Some write every second unnecessarily. Ember starts the heater during cooldown, even though they could do so closer to target temp and save 5-10% battery.


Writing a low-level code editor in C, going down the rabbit hole of gap buffers and piece tables. Fun though.

I first wrote it the dumbest way possible, one big array with padding at the back. Worked fine actually for most modern use cases, but as this is also a learning experience, I want it to be best in class in performance.

I think I'll settle on gap buffer because the performance is great and it doesn't hurt my head.
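
For anyone tempted by the same hole, the entire gap buffer idea fits in a few dozen lines. A Python sketch of the structure being described (illustrative only, not the actual C editor's code):

```python
class GapBuffer:
    """Minimal gap buffer: the text lives in one array with a movable
    gap at the cursor, so inserts and deletes at the cursor are O(1);
    only moving the cursor pays a copy cost proportional to the move."""
    def __init__(self, text="", gap=16):
        self.buf = list(text) + [None] * gap
        self.gap_start = len(text)          # cursor position
        self.gap_end = len(self.buf)

    def move(self, pos):
        """Slide the gap so it starts at `pos` (the new cursor)."""
        while self.gap_start > pos:         # shift gap left
            self.gap_start -= 1
            self.gap_end -= 1
            self.buf[self.gap_end] = self.buf[self.gap_start]
        while self.gap_start < pos:         # shift gap right
            self.buf[self.gap_start] = self.buf[self.gap_end]
            self.gap_start += 1
            self.gap_end += 1

    def insert(self, ch):
        if self.gap_start == self.gap_end:  # gap exhausted: regrow it
            grow = max(16, len(self.buf))
            self.buf[self.gap_end:self.gap_end] = [None] * grow
            self.gap_end += grow
        self.buf[self.gap_start] = ch
        self.gap_start += 1

    def delete(self):
        """Backspace: swallow the char before the cursor into the gap."""
        if self.gap_start > 0:
            self.gap_start -= 1

    def text(self):
        return "".join(self.buf[:self.gap_start] + self.buf[self.gap_end:])
```

Typical typing (a burst of inserts at one spot, then a cursor move) touches the gap ends only, which is why it holds up so well in practice.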


A quick and dirty, shallow one, that I just opted to brute force out of curiosity.

An online game I play includes an optional two player Russian Roulette type feature (non-fatal). I got to wondering if there was an optimal betting percentage to use, if you set aside some money as a betting seed. So I spent time coding up a really ugly brute force "just run lots of games and see".

Pretty much the answer is you'll lose more often than you win, looks like your best bets are around 2% of whatever money you have left of your betting money.

If you play 75 games, at 2% of your betting pool, you'll come out ahead only about 49.8% of the time.

There are more efficient ways of working that out than I bothered to do, which was to create a basic abstraction for a gun. For example, your odds of winning are essentially 50%, given two players. For every "game" I simulated, I could have just picked a random integer between 0 and 1 instead. Faster and the same effect.

As best as I could find, there are no good betting strategies on a coin toss (which is what this really is)
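
The brute-force version really is just a few lines once the gun is abstracted away. A Monte Carlo sketch of the setup described above (starting pool of 1.0, even-money 50/50 games, fixed bet fraction; parameter names are invented):

```python
import random

def simulate(games=75, fraction=0.02, trials=20_000, seed=42):
    """Monte Carlo sketch of the 50/50 duel: bet `fraction` of the
    remaining pool each game; a win adds the stake, a loss forfeits it.
    Returns the share of trials ending above the starting pool."""
    rng = random.Random(seed)
    ahead = 0
    for _ in range(trials):
        pool = 1.0
        for _ in range(games):
            stake = pool * fraction
            pool += stake if rng.random() < 0.5 else -stake
        ahead += pool > 1.0
    return ahead / trials

print(simulate())  # hovers around 0.5 -- no edge to find on a fair coin
```

The slight asymmetry (losses hurt more than equal-sized wins help, since they compound) is why the "come out ahead" rate lands a hair under 50% rather than exactly at it.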


> a really ugly brute force "just run lots of games and see".

This is usually called https://en.wikipedia.org/wiki/Monte_Carlo_method


You might find the Kelly Criterion interesting and/or useful for optimal bet sizing. This rabbit hole goes deeeep

https://en.m.wikipedia.org/wiki/Kelly_criterion


Oh boy, further down the rabbit hole!


Okay, unless I'm missing something, Kelly Criterion puts it at 0%, which is about what I'd expect.

With p = 0.5, q = 0.5, and b = 1.0:

f* = p - q/b = 0.5 - (0.5 / 1.0) = 0.0
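That arithmetic as a quick sanity check in code (a sketch; `kelly_fraction` is my name for it, not from any library):

```python
def kelly_fraction(p, b):
    """Kelly criterion f* = p - q/b, where p is the win probability,
    b is the net odds received on a win (1.0 for even money), q = 1 - p."""
    q = 1.0 - p
    return p - q / b

print(kelly_fraction(0.5, 1.0))   # fair even-money coin toss -> 0.0
print(kelly_fraction(0.55, 1.0))  # a 55% edge at even money -> ~0.10
```

With b = 1 and p = 0.5 it comes out to exactly zero: Kelly says don't bet, matching the parent's point that no sizing strategy beats a fair coin.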


Formal methods; and I don't think I'm going to get out of this rabbit hole anytime soon.

More than just using some formal language/tool, I wanted to learn about the mathematics/ideas behind formal methods and how they are embodied in TLA/Z/the B method/etc. After a survey of available books I zeroed in on Understanding Formal Methods by Jean-François Monin, hoping to get an overall idea of how everything comes together. But what I got was a fire-hose/mishmash of so many different sub-fields/notations/abstractions used in the field that it is quite a struggle to get a good grasp on anything. The author's writing style is obtuse/challenging, and the contents are more of a survey/introduction than detailed explanations.

The result is that I am now interested in figuring out a whole lot of mathematical/logical sub-fields, which I suspect is going to occupy a lot of my time in the future.


Yesterday's submission may interest you: https://blog.brownplt.org/2024/04/21/forge.html (Forge: A Tool to Teach Formal Methods)


After the post last week about the DIY GPS receiver, I decided to get out my RTL-SDR and set it up. It took a couple of days of fiddling around, but now I've got SDRTrunk set up. I spent one morning listening to the public safety radio traffic. That was a wild ride on its own, with all sorts of things going on in this metropolitan area.


So, I was looking at some performance issues which seemed to be stemming from the Linux networking stack. We use eBPF with Docker/k8s a fair bit as a PaaS, and I ended up getting into the weeds with Linux sys/procfs and kernel tracing. One thing led to another, and after quickly dispatching the perf issue, I was supposed to be on a planned week-long holiday, which turned into a deep dive into the Linux kernel + networking.

Intermediate step - I feel pretty confident with the gory details of the kernel code now. I can build a custom kernel, and boot qemu with either a simple C+assembly bare-metal kernel or the self-compiled one. It feels like the clouds have cleared and I can see the sun. Incidentally, the kernel source code is pretty well documented, but one thing that's missing is a much smaller list of the files that matter most. True Pareto here: 20% of the files carry the weight. You also need to know the subsystem you want to touch; chances are that subsystem is a much smaller set of files.

Finally - got to reading about kernel packet handling, at the L2/L3/L7 levels, from NIC hardware to userspace. Turns out that eBPF [hello, old friend!] has a networking avatar called XDP, a pretty recent [<5 years] way of doing high-performance networking in the Linux kernel. Along the way I got to know about the kernel's network performance optimizations for modern multicore systems, like RPS/RSS/aRFS, and about DPDK/fd.io/VPP.

And now I feel the itch to apply this to some of our networks. In particular, bare-metal servers on Equinix Metal + AWS EC2 + Azure can be peered with VPP or Bird to make a p2p connection that is a factor more performant than the off-the-shelf VPC interconnects/gateways.

I might extend the holiday by a few days, and I would love to talk to people who have hands-on experience with any of this. It's hard to contain my excitement tbh.



Hi, I would greatly enjoy talking to you.

I play with io_uring and multithreading. I am looking at event loops.


Video console emulators and AI. Emulators made me dig into video game history, some of which I know first hand. I'm utterly amazed by the tools people are developing everywhere. AI, well, it marvels me every single day. Funny enough, both topics end up hitting, again and again, another obsession of mine: the dreadful laws of copyright.


Writing a beginner intro to wealth tax and assembling and organizing all the resources I had consumed over the years. https://start.me/p/1997pL/inequality-for-the-pragmatic-ideal...


How to improve one's speaking voice. Mine is tiny: no strength, no quality, monotone. I found something called "kharaj ka riyaz" and practiced it for a few weeks, and voice quality and strength improved. Then the quality went down and my voice is now irritated and scratchy. I hope it recovers.


I didn't start out planning to write a text layout engine. But that's the way it ended up. I need a different hobby!

https://github.com/KaliedaRik/Scrawl-canvas/pull/75


So many great rabbit holes in this list. There are several that have blown years of my spare time.

Mine, at the moment, is precision timekeeping. Time nuts. I have a pile of oven-controlled crystal oscillators and GPSs (even the time-specific u-blox LEA-M8T).

I got a BG7TBL counter and multiple cheap GPSDOs to test my own.

I have a DAC1220 20-bit DAC on an ESP32 disciplining a TCXO from an old phone base station, counting the 10 MHz using the PCNT peripheral gated off the GPS 1PPS.

Meantime, I learnt the esp-idf so I could have more control over things. Everything done with the esp-idf is way, way more stable than using the Arduino wrapper; no idea why, maybe later versions?

The disciplining/tracking parameters are exposed by http and mqtt and put into influx.
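The disciplining idea (count 10 MHz cycles between GPS 1PPS edges, steer the DAC by the error) boils down to a PI control loop. A toy Python sketch of just the control step, with invented gains and an invented oscillator model, nothing from the actual ESP32 firmware:

```python
def discipline_step(count, dac, integ, kp=0.2, ki=0.02, nominal=10_000_000):
    """One 1PPS interval of a toy PI discipline loop: 'count' is the
    number of 10 MHz cycles seen between PPS edges; the error steers
    the DAC word that tunes the TCXO. Gains are illustration values."""
    error = count - nominal       # cycles fast (+) or slow (-) this second
    integ += error                # integral term removes steady-state offset
    dac += kp * error + ki * integ
    return dac, integ

# Toy oscillator: runs 40 cycles/s fast, and each DAC count pulls it
# down by 1 cycle/s (an invented model, just to show convergence).
dac, integ = 0.0, 0.0
for _ in range(200):
    count = 10_000_000 + 40 - dac
    dac, integ = discipline_step(count, dac, integ)
# dac settles near 40, cancelling the frequency offset
```

A real loop also has to handle PPS jitter, GPS holdover, and DAC quantization; this only shows the control idea.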

I have a 5.5-digit multimeter (I repaired the classic HP 3478A). Maybe I need another digit; there goes another rabbit hole. Voltnuts.


You might explore clock hardware used in the film and music industries. Ranges from $200 temperature controlled audio clocks to very expensive video black generators. Some links to get started:

https://www.sweetwater.com/c953--Synchronizers

https://en.wikipedia.org/wiki/Black_and_burst

https://www.sweetwater.com/insync/black-burst/

https://www.ebay.com/sch/i.html?_from=R40&_trksid=p2334524.m...

Might be interesting to convert an audio precision word clock into a precision computer clock. Audio atomic clocks are available ($7000):

https://en.antelopeaudio.com/products/10mx/


I assume my GPS-disciplined, oven-controlled crystal oscillator is going to be much the same as most of those. Short term: 0.1 mHz.

Without GPS, yes, a rubidium oscillator would be fun to have, and the phase noise may be better than the disciplined TCXO, but the system I have, with the LEA-M8T and a good external antenna used in the 'surveyed-in base station' mode, is more stable. I've been keeping an eye on eBay for an old one at a reasonable price. They do wear out, so it's a risky buy.

The black burst stuff is funny. I used to work in TV as a kid. The plate on my car was PAL-443


Juggling. I found a link to the Library of Juggling on HN [1]. I've been practicing for the last month, learning basic 3-ball patterns.

[1] https://news.ycombinator.com/item?id=39601201


Been tinkering with my personal site again for no particular reason.

1. Looked at what it would take to turn it into a sort of "pubnix" for some friends

2. ...which got me looking into how to set up Postfix to manage local emails (allegedly this works out of the box, but I must have screwed something up since I never did get my test messages from one user to the other)

3. Then on to looking at BBS systems, starting from Enigma 1/2. Didn't get too far into that since the theme customization scared me away (and not enough of my RL friends are nerdy enough to get into it)

4. Finally backing away from the pubnix thing again because of insufficiently nerdy friends (although one is humoring me in experimenting with SSB), I then instead set up a Synapse server to have my own identity in the Matrix ecosystem.


Oof, Pulumi here. A module was wrapping a Terraform module, which wraps a REST client, which wraps the provider’s API. Errors only surfaced during a `pulumi up` instead of being caught during a `preview`. Fun times digging through those multiple layers of abstraction.


Bartending / cocktails / "mixology".

It's more fun than I thought, because whipping up every cocktail you've ever heard of is actually quite simple. There are also some cool generalizations to uncover, so you can Pareto-principle your way to doing almost everything with a few tricks and ~10 liquids. In an evening I could teach someone more than what two-thirds of bartenders know (for most it's just a job / the basics). Then once you've got a good handle on the basics, there's an endless adventure into variations and history. There are a lot of flavors out there you wouldn't believe existed; many concoctions of herbs and botanicals. It's a great social activity too.


I installed NixOS and went down into that black hole.


Tell me more. What was your use-case for installing NixOS? I’ve been thinking about putting together a new media server and saw a recommendation to try NixOS. (I’ve previously used Unraid). That was a couple weeks ago, and it was the first time I’d heard about it


My use-case for NixOS: got tired of breaking Arch and never wanted to solve package dependency issues again, so I installed it on my workstation laptop.

I haven't tried it yet for home server stuff. I am still running containers on a Proxmox host.

Nix documentation is bad/incomplete, so I helped myself with some YouTube videos to get started.

This might get you started https://jvns.ca/blog/2024/01/01/some-notes-on-nixos/


I take photos for fun in my spare time and recently started shooting on film again. This time around I've been interested in how film actually functions. SmarterEveryDay has a fantastic 3-part YouTube series of the Kodak manufacturing process as he tours through the facility. I'm now amazed that film is still even being manufactured today, and can't even imagine what the production line was like during the heyday of film.

I've developed my own film in the past but knowing so little about chemistry myself, it's still pretty much magic to me even after digesting all of the info from the series.


This is only partially relevant, but there is some analog film theory buried about halfway through.

https://www.youtube.com/watch?v=YE9rEQAGpLw


I wish I could do this, but the financial aspect just isn't working where I am.

I've been looking at digital backs for old cameras, such as the 'blad I have gathering dust. Sadly, they're either completely impractical or way above my budget. I'm hopeful that some day, something better than a Polaroid back can be used to resurrect my old Hasselblads.


Everything. Everything I look at or touch, I soon realise is way bigger and deeper than I thought. You could choose nearly any topic and turn the study of it into a PhD if you really wanted.


Miniature painting

I wanted to be able to easily distinguish between the 60+ figurines of the Zombicide board game, so I figured I could paint them "quick and dirty".

Well 8 months later, I'm not finished because I "had to" learn about paint, color theory, paint mixing, human vision, brush types...

Being colorblind, I paid no attention to the colors around me, but I have since discovered I can see more shades than I was aware of, and I'm having a blast just looking at the foliage... which does not speed up the painting!


I have spent the last few years (especially during COVID) collecting and exploring the world of stationery. I now have over 2,500 items ranging from pens, pencils, and stickers to a 50-year-old (functioning) typewriter, note pads, old diaries, planners, gifting items, collectibles like army and navy pen sets, card holders, and so much more, from around the world. It fascinates me to know such an amazing rabbit hole exists.


I'm going to try to build my own Corne keyboard, so I recently dove into the custom mechanical keyboard rabbit hole, which seems to be more of a black hole from which nothing escapes, not even light. (Plot twist: I have no experience with soldering, so that's gonna be fun...)

Otherwise, ThePrimeagen pisses me off, always talking about Neovim and Vim motions, so I am right now in the painstaking process of learning Vim motions, and I want my life to end because of the learning curve.


I've been interested in making a homebrew PlayStation 3 (childhood console :D) game for a while, and finally got an environment set up yesterday.

Ended up getting a simple Rust function built, with some slight miscompilations via wasm + wasm2c. Now I'm going to try to get graphics working.

There is a surprising amount of public code containing calls to Sony's licensed SDK, if you know what to search for (not to mention the SDK itself, which was "obtained" dubiously). Fascinating stuff.


This is very cool, do you have a place where you post updates?

I was playing with w2c2 last week to try and get some stuff running on Mac OS X Leopard, but I couldn't quite get it working :(.


Not yet; there is a monumental amount to learn before I can make anything cool :( If I get something cool working, maybe I'll start a blog.


This is pretty fitting for HN, especially considering I've tried to ignore it this whole time... but AI.

Watched a couple of videos on making AI play video games, looked into LocalAI and GPT Pilot, and failed to get it to write a Qt application. Played around with TTS and STT, and now I'm stuck in prompt hell with diffusers/DreamShaper.

(It's mostly there, but I feel like it's screwing with me because it always finds a way to screw up the picture.)


I recently bought a bunch of ICs from 8080/8086-era computers (mostly peripherals like PICs, I/O, timers, memory) for next to nothing, and I am trying to build a "retro" synth. It is way more fun and challenging than Arduino or ESP32, and not really complicated! A lot of instruments of the era (Roland Juno, Oberheim Matrix 6, Akai AX-80) use these chips too, so when in doubt I can take a peek at the schematics and move on with construction.


The Rust 3D rendering stacks. I just want to use them, not work on them. But there are low-level problems and I had to open the lid on Rend3->WGPU->Vulkan.


Over the last little while, I've fallen down the Nix rabbit hole.

I don't currently use it in any serious projects aside from tinkering about with it, but it has been a lot of fun to learn and study.

Between the Nix package manager, the associated language, etc., there has been a lot to learn about, and it's been good fun. I have nixOS on my spare Thinkpad for toying with, and I have Nix on my main Debian systems, if I want to pull something from nixpkgs.


I started looking into the Gospel of Thomas right before the start of the pandemic, which led to a number of rabbit holes:

* Turns out the work is not 'weird' or 'Gnostic' but is directly addressing details from Lucretius, including paraphrasing his view of evolution and atomism, but refuting the claim there's no afterlife by basically appealing to the idea we're in a simulated copy of an original physical world where the spirit doesn't actually depend on a body, because there is no actual body.

* As I dug more into the various mystery religions the followers of the work claimed as informing their views, I saw a number of those were associated with figures various Greek historians were saying came from the same Exodus from Egypt as Moses.

* Turns out a lot of the ahistorical details in the Biblical Exodus narrative better fit the joint sea peoples and Libyan resistance, who end up forcibly resettled into the Southern Levant later on. In the past decade we've also started finding early Iron Age evidence of Aegean and Anatolian settlement and trade previously unknown in the area, including in supposed Israelite settlements like Tel Dan, lending support to the theory that Dan were the Denyen sea peoples.

* Also turns out that in just the past few years, a number of Ashkenazi users have been puzzled by their genomic similarity to ancient DNA samples: either the closest overall match in a DNA bank being 3,500-year-old Minoan graves sequenced in 2017, or having such a high amount of Neolithic Anatolian ancestry (which the 2017 study found was effectively identical to Minoan).

* The G2019S LRRK2 mutation that's almost only found among the Libyan Berbers and the Ashkenazi appears to have originated with the former but entered the ancestry of the latter ~4,500 plus/minus 1k years ago. That window predates the emergence of the Israelites in the first place, but is on the cusp of the sea peoples/Libyan alliance.

* There's also been discovery of endogamy among some of the Minoan populations. Did the Ashkenazi endogamy evidenced from their emergence in Europe and the bottleneck in the first millennium CE actually go back much further than we've been thinking? Maybe Tacitus wasn't so off base when he talked about how some claimed the Exodus involved people from Crete hiding out in Libya.

Anyways, that's a very rough summary of some of the rabbit holes I was going down.

Bonus: Herodotus's description of Helen of Troy spending the whole time in Egypt has two datable markers to the 18th dynasty, which is when Nefertiti, "beautiful woman who arrived" is around during a complete change to Egyptian art and religion while she's the only woman in history to be depicted in the smiting pose, with her only noted relatives being a sister and wetnurse.


> appealing to the idea we're in a simulated copy of an original physical world where the spirit doesn't actually depend on a body

Sounds a bit gnostic no?


It was what resulted in Gnosticism, not the other way around.

You had this first century response to Epicureanism's naturalism as a foundation. In that paradigm, the Platonist demiurge recreating the physical world that preceded it was an agent of salvation, liberating the copies from the certainty of death in the Epicurean original.

What happens is that Epicureanism falls from popularity over the second century, so in parallel to the increased resurgence of Platonism, Plato's forms becomes the foundation instead. For Plato, there was a perfect world of the blueprints of everything, the corrupted physical versions of those forms, and then the worst of all was the images of the physical. So the Thomasine salvation by being in the images of physical originals is through that lens corruptive.

So as the foundation shifted from the Epicurean original world of evolution (Lucretius straight up described survival of the fittest in book 5) to Plato's perfect forms, a demiurge creating a copy of what predated it shifted from being a good thing to trapping people in a corrupted copy.

For the first 50 years of the discovery of the Gospel of Thomas, it was mistakenly thought to be Gnostic. This changed at the turn of the 21st century with the efforts of Michael Allen Williams and Karen King, and it's now labeled as "proto-Gnostic." It's absent a lot of the features typically associated with 'Gnosticism' though that term in general should be retired as it's turned out that there isn't any single set of beliefs to be considered 'Gnostic' in the first place (this was the chief realization of scholars over the past twenty years).


Is that the one where Jesus sells Thomas into slavery because he didn't go obediently to minister to the people in India?


No, that's the mythology that develops around the travels of an apostle Thomas much later on.

I'm pretty sure that there was no 'Thomas.' My guess is that the philosophy of being in a twin universe and a twin of an original humanity ends up anthropomorphized by or before "doubting Thomas" in John and ends up credited with the tradition making those philosophical claims which was also denying the physical resurrection.

In the Gospel of Thomas itself, there are only two mentions of a 'Thomas,' both likely later additions. Moreover, the work features him having female disciples and discussions directly with them, and the only later tradition following it claimed a female teacher named Mary as the starting point of their sect.

The Gospel of Thomas is a collection of sayings, and that core may have gone by different names before the 2nd century when it's rolled up in a more secretive context as attributed to 'Thomas' (despite the core itself seemingly being more anti-secretive than any other texts in the early Christian tradition).


My project uses a library, which in turn uses a WSGI library. During a vulnerability scan, my web UI would crash, but with no error messages. The rabbit hole!

Despite filing a bug ticket with the library, I received no response after a month. I struggled to debug this library within a library. One day I realized what a rabbit hole I was in, switched over to Tornado, and have had no problems since.


I’m learning about LLMs and AI, and I’m building a multi-modal, full-stack LLM chat agent. I got very fascinated by it. [0]

Using semantic-router for dynamic conversation routing, and LiteLLM for model providers.

It was lots of fun to learn and build. I will be adding function calling support (tool use) in the future, to make it more capable, like an agent.


I watched Lex Fridman interview Richard Wolff and have spent two weeks going hard into Marxist and anarchist theory and practice. Working through two books, a dozen browser tabs, interviews, etc. It's rare that something catches my interest like this (especially something non-technical). But I'm really enjoying all the different perspectives and formulating my own fantasy scenarios.


If it hasn't made it onto your list yet, Ursula K. Le Guin's book titled "The Dispossessed" is a great exploration of anarchism in practice through a sci-fi lens


Do you have any recommendations on books that talks about Leninism like it's foundational ideas, etc. and how it departs from Marxism?

Thanks in advance.


Leninism doesn't quite depart from Marxism really; Lenin expanded upon Marx's ideas and applied them to the evolving material conditions of Tsarist Russia. In a manner of speaking he departed from Marx in that Marx thought revolution would occur in developed industrialized nations like Germany and France, not rural agrarian societies like Tsarist Russia. Another thing to note is that a lot of what Lenin wrote was scathing and somewhat exaggerated polemics against others who he often worked alongside strategically, but argued with for the purpose of directing the course of action correctly. His ideas, and the conditions and people he was responding to, evolved over time as well, so it's very important to understand those contexts going in, and this book does an excellent job of synthesizing and condensing some of that well: https://www.goodreads.com/book/show/26268757-revolution-mani...

I think this does a good job too: https://www.goodreads.com/en/book/show/50284837

I would also recommend The People's Forum / The Socialist Program's recent class on Lenin: https://m.soundcloud.com/thesocialistprogram/sets/lenin-and-...


Used TensorFlow since its creation until I changed companies a year ago, where I only use PyTorch Lightning. I'm fixing Lightning to give it some of the niceties that TensorFlow has. Also finally fixing the graph engine in TensorFlow so the abstraction actually works and doesn't have sharp edges around things like models of models... aka graphs of graphs.


Several renowned mechanics in my area don't work on cars older than OBD-II. So I went down the rabbit hole of:

One-person brake fluid replacement / bleeding after replacing a brake master cylinder. For my car, I can now do this without taking the wheels off. I'll be done within a couple of days, and after that I'll be kind of an expert.


Collatz conjecture


I've been looking at that particular rabbit hole since a professor of mine mentioned it in 2003 or something. Once or twice a year, I'll read about some theorem or something and think it can be applied to Collatz somehow and dive back in.

I've actually proved it several times... except for some insignificant-seeming detail I glossed over that tanks the proof.

Someday I'll have to publish my "book of lemmas that don't prove the collatz conjecture."
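For anyone eyeing the same hole: the map itself is two lines, and brute-force checking small numbers is a classic first step (a generic sketch, nothing to do with the commenter's lemmas):

```python
def collatz_steps(n):
    """Steps for n to reach 1 under the 3n+1 map (assuming it does)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps: the classic long small-number orbit
print(max(range(1, 10_000), key=collatz_steps))  # longest orbit under 10,000
```

The hard part, as the comment attests, is everything after the brute force: the trajectories have been verified to astronomical bounds, but a proof is another matter.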


I found my two old mechanical PC keyboards from the mid-'80s in the garage, took them in and cleaned them. Bought some VIAL adaptors for them and new Alps keycaps for one of them. Ended up reading about mechanical keyboard history, keycaps, VIAL/VIA/QMK, etc. Also bought an old IBM Model M keyboard yesterday.


Been nerd-sniped and diving down the rabbit hole on a few topics in the past few months:

Some history podcasts had me digging into the Napoleonic Wars and Israel/Palestine.

Also a recent interest in human health and diseases has basically sent me down the path of self-study equivalent to a Kinesiology/Exercise Science/Sports Physiology degree.


> Israel/Palestine

This is more than a rabbit hole, it's a fractal that changes as you zoom in and out


Mostly Accurate.


In-browser time-travel debugging in JavaScript... trying to implement it for an open source hobby project.


OpenWRT, CD and DVD Players, Kindles, and OpenBSD due to some recent HN article.

I knew about most of them a little or fair bit already, but there's always something to learn the deeper you go :)


I’ve been trying to fit something into the GCP free tier while also designing it to be scalable.

Been tinkering with that for an absurdly long time vs. just throwing some Python onto a $5 VPS, but it’s been fun, and I learned a decent bit about over-engineering pitfalls along the way.


TempleOS, the Terry A. Davis one.


How to break substitution ciphers.

Somehow, and unintentionally (the search began from some random article I'd read), this seemingly unrelated subject ended up uncovering some insights into a problem with deduping database rows I'd been working on for another project.
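The classic first attack on a monoalphabetic substitution cipher is frequency analysis. A hedged Python sketch (function and constant names are mine; real breaks refine this with bigrams and word patterns):

```python
from collections import Counter

# English letters by typical frequency, most common first ("ETAOIN SHRDLU...").
ENGLISH_ORDER = "ETAOINSHRDLCUMWFGYPBVKJXQZ"

def frequency_guess(ciphertext):
    """First-cut attack on a monoalphabetic substitution cipher:
    map each ciphertext letter to the English letter of matching
    frequency rank. Crude, but a starting point for refinement."""
    counts = Counter(c for c in ciphertext.upper() if c.isalpha())
    ranked = [c for c, _ in counts.most_common()]
    key = {c: ENGLISH_ORDER[i] for i, c in enumerate(ranked)}
    return "".join(key.get(c, c) for c in ciphertext.upper())
```

On ciphertext of decent length this usually gets the top few letters right; the rest is iterative guessing, which is where the puzzle-solving fun is.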


I'm trying to beat level 19 in terminator dash: https://www.atarimania.com/pgesoft.awp?version=37332


I've been fascinated with the VESC project (https://vesc-project.com). I had no idea that controlling a BLDC motor could be so complicated.


Local brewery is doing a cinco de mayo event and we started talking about pre-gaming with margaritas:

Margaritas -> Jello Shots -> Chamoy/Tajin rim/topper -> Pop Rocks -> History of Pop Rocks


JPEG XL encoding. Its API is almost OpenSSL-style confusing. I have found no way yet to encode an image buffer to a file.

ChatGPT gave some nice-looking code, but it wouldn't work at all. Good ideas, though.


I used Termux to get SIMH running on my Android-based smartphone.

I then set up a VAX 11/780 running OpenVMS 7.3, which I can telnet into from outside. ;-)


Started on the Nagoya Protocol, then PCR of wastewater on airplanes, mechanical engineering of lavatory fittings, then metagenomic shotgun sequencing, and now Bloom filters.


It is such a good feeling to deploy a bloom filter in production. There aren't many times it will help but when it helps it helps a TON.
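For anyone who hasn't met them: a Bloom filter answers "definitely not present" or "probably present" using k hash probes into a bit array, with false positives possible but never false negatives. A minimal sketch in Python (names and sizing parameters invented for illustration):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash probes into an m-bit array."""

    def __init__(self, m_bits=1024, k=3):
        self.m = m_bits
        self.k = k
        self.bits = bytearray(m_bits // 8 + 1)

    def _probes(self, item):
        # Derive k indices from slices of one digest
        # (double hashing is the more usual trick).
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.k):
            chunk = digest[i * 4:(i + 1) * 4]
            yield int.from_bytes(chunk, "big") % self.m

    def add(self, item):
        for idx in self._probes(item):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def __contains__(self, item):
        return all(self.bits[idx // 8] & (1 << (idx % 8))
                   for idx in self._probes(item))
```

The production win the parent describes comes from the asymmetry: a few bits per key let you skip an expensive lookup (disk, network) for almost every miss, at the cost of a tunable false-positive rate.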


I'm playing Zelda: Tears of the Kingdom... almost infinite side quests.


The differences between Sony camera models.


Now that's one tricky rabbit hole. If you are planning to buy a camera, the deliberation seems to be part of the shopping process. After buying, though, the differences seem either very important or not to matter at all. Depends on how serious you want to get with the cameras.


Today in Toronto, Canada, a man who was charged with the first-degree murder of a police officer was acquitted. I'd only really absorbed the initial news story in passing. The initial news releases relayed the police account of the incident (a man running over a police officer with his car): that it was a "deliberate and intentional act". During the trial, and with today's acquittal and subsequent lifting of a publication ban, we learned how different the public perception of the initial events was compared to what actually transpired. Thus, it's interesting to use technological tools (search, or the Wayback Machine) to see how the story was originally framed and how subtle differences can affect perception. My hope is that the event at least makes people question their own assumptions, borne of limited or misleading information in the earliest publications of any news event involving the death of a police officer.

Even today, I'm seeing conflicting reporting of quotes. For example the CBC has one article that quotes the police chief:

> Outside court, however, Toronto Police Chief Myron Demkiw struck a different tone. "While we respect the judicial process and appreciate the work of everyone involved in this difficult case, we were hoping for a different outcome," he said.

https://www.cbc.ca/news/canada/toronto/umar-zameer-acquittal....

However, the video of the Chief's statement is slightly different (and quoted correctly in another article):

> "While we respect the judicial process and appreciate the work of the 12 citizens who sat on a very difficult case, I share the feelings of our members who were hoping for a different outcome," Demkiw said.

https://www.cbc.ca/news/canada/toronto/umar-zameer-verdict-1...

The first quote cuts out essential information and does not note that the quote was not verbatim (as presented). It's especially problematic as the quotes might have subtly different interpretations now that his suggestion of "hoping for a different outcome" is being addressed. (There's a subtle difference between 'sharing the feelings of someone who hoped for a certain outcome' and 'hoping for a different outcome' yourself -- and this could be major if it's seen as the police service suggesting they wanted the man, who was declared not guilty, to have been convicted.)

(One other possibility is that the Chief did in fact mention both lines, and that they are so similar because it was based on his official public statement).


update: The quote discrepancy has been mostly corrected today. We'll see if there are any other ramifications considering the slight-misquote did lead to other criticisms.

It's a dangerous game when news articles rewrite content pulled from news wires (e.g., Canadian Press in this case) and it's not always clear which sections are pulled from where. There's an argument for legitimate AI rewriting that at least leaves in direct sources for the content it pulls in/summarizes (a task that ends up being a hindrance for deadline-facing humans).


San Francisco politics in the late 19th century.


Did the phrase 'knife fight in a phone booth' still apply to SF politics back then?


Very much. The more things change the more they stay the same.


I recently fell into internet comics from the early 2000s. Stuff like Absurd Notions, Sluggy Freelance, and Freefall


ESP32 modules, Gaussian splatting.


People keep talking about blimps, but they neglect the rising price of helium.


A while back I wrote a game in assembly for CP/M, since I have a single-board Z80-based computer to run it on.

I later ported the game to the ZX Spectrum, because that was a fun challenge, and I only needed a few basic I/O operations - "write to screen", "read a line of input", etc, etc.

It occurred to me that I could reimplement the very few CP/M BIOS functions it needs and combine those implementations with a Z80 emulator to run it "natively". So I did that, then I wondered what it would take to run Zork and other games.

Slowly I've been reimplementing the necessary CP/M BDOS functions so that I can run more and more applications. I'm not going to go crazy, anything with sectors/disks is out of scope, but adding the file-based I/O functions takes me pretty far.

At the moment I've got an annoying bug where the Aztec C-compiler doesn't quite work under my emulator and I'm trying to track it down. The C-compiler produces an assembly file which is 100% identical to that produced on my real hardware, but for some reason the assembler output from compiling that file is broken - I suspect I've got something wrong with my file-based I/O, but I've not yet resolved the problem.

TLDR; writing a CP/M emulator in golang, and getting more and more software running on it - https://github.com/skx/cpmulator


I was YouTubing this last weekend and ran down quite the rabbit hole. This all seems like mumbo-jumbo to me.

I could not derive a single piece of solid science in any of it.

It was remarkable how much content there was on this subject with little to no actual information - enjoy:

https://www.youtube.com/@MFMP


Sound event recognition is so interesting, and it seems to be lagging behind the other ML domains.


There's speech recognition (pretty good these days, but probably still not as good as it could be) and gunshot detection (which doesn't work so well). There has been a fair amount of work on emotion detection from speech over the last decade, again with not-so-great results, though it's used in call centers. As far as I can tell from public news (though I haven't gone looking for it), deep learning has not been applied much to emotion recognition or other types of sound events (animal noises, human presence, fire detection, etc.). There has been a huge effort to detect anomalies in machine noises and vibrations, since that has industrial uses in detecting machine failure (why doesn't it exist in cars?). Music recognition works very well (Shazam). I haven't seen an app that can tell people "what made that noise in my house, and where?"


Color theory, human color vision, color models and so on


Tracing the lineage of the British royal family back to Elizabeth I.


The Night The Stars Fell


I created a startup 2 years ago around a gigantic rabbit hole: understanding cause-and-effect and how you can move towards the outcome you want in a consistent way.

In case you want to have unlimited fun yourself, ask yourself: "What is the purpose of X?" and then "how can you measure/assess the fit-for-purpose of it?"

Possible side effects: #1 you might get disgusted and even angry with the self-declared "experts" who have not understood even the basic concepts.

#2 you might learn how little you understand yourself and how deep the rabbit hole goes.

Example in software development: understand the quality dimensions of a "definition of ready", and what impact a good, actually-used DoR has on the efficiency and effectiveness of a software development process compared to a bad or unused one.


Cheap Chinese retro consoles. Bought one in December; now I have seven, with different Linux configurations. Also a Steam Deck.


Saw all the CSPAN videos for the hearings on the January 6 2021 attack on the US Capitol, listened to Trump's call with Brad Raffensperger, and The Trump Tapes with Bob Woodward, and Liz Cheney's Oath and Honor.


Different tabletop role-playing game systems.


I fell into the rabbit hole of delinking programs back into object files.

Long story short, I was inspired by the Super Mario 64 and REDRIVER2 decompilation projects and wanted to do one. I picked a PlayStation video game from my childhood, started Ghidra and then I quickly realized that the game code's a complete mess. It's bad enough that I don't see myself ever finishing this project unless I can somehow divide-and-conquer this problem into manageable pieces. But you can't exactly break a program into pieces... can you?

So I've started to think for a bit and remembered the basic toolchain workflow: source files are compiled into assembly files, which are assembled into object files, which are all linked together into a program. The last bit stood out to me and I wondered: what if I could undo the work of the linker? I'd get a bunch of object files, dividing the original reverse-engineering problem into smaller pieces.

I searched online and found absolutely nothing on the topic. That should've tipped me off, but instead I started scribbling on a piece of paper. Object files are made up of sections (named arrays of bytes), symbols (named offsets within these sections) and relocations (spots to patch with a symbol's address). The linker lays out the sections, computes the addresses of the symbols and then patches the relocation spots to produce the program. I can't just take the program bytes and stuff them into object files because of these applied relocations, but if I could somehow undo them...

The good idea fairy struck, and the fairy struck hard.

I write scripts in Jython and, after a couple hundred lines, get results on sample test cases. I try them on the game and they take forever due to algorithmic complexity. I write a new implementation in Java, forking Ghidra in the process. I rewrite it a couple more times because my analyzer keeps hitting edge cases. I build an elaborate and exhaustive test harness because I keep introducing hard-to-track-down regressions. I submit a couple of pull requests to Ghidra to solve some pain points and reduce the size of the diff, which spans thousands of lines. I reply to the questions from the Ghidra team with walls of text trying to explain my use case, but the PRs get rejected because they don't fit Ghidra's current design well.

When the Ghidra team rejected my stuff, probably because at this point I was speaking in the native language of Cthulhu, I really should've taken the hint.

Instead, I spin off my fork as a Ghidra extension to alleviate the maintenance burden, which by now was getting closer to ten thousand lines. I keep rewriting my MIPS relocation analyzers again and again to improve their correctness, always hitting a new edge case. I've decided to start a blog, because I'm tired of explaining this stuff from first principles to people since there's no literature on the topic. I get side-tracked writing a complete series of articles on the basics of reverse-engineering to introduce the topic. I get side-tracked again writing a series of articles on the applications of delinking to software ports, with a case study on an x86 program that requires me to write relocation analyzers for that architecture and refactor to support multiple ISAs and object file formats.

I'm finally back on reverse-engineering the video game that started all of this, and get side-tracked once more because I'm documenting the process in another series of articles. By sheer luck I stumble upon a SYM debugging symbols file, but I don't have the matching executable for it, so I build a placeholder one that matches its shape, then import the placeholder into Ghidra, then write about a thousand lines of Java to import this data on top of the placeholder, then write a bunch of scripts and my own correlators to version-track it onto an executable I do have, because Ghidra doesn't know what to do with a source executable that doesn't have a single initialized byte to its name. I've tried to engage with the Ghidra community about this latest problem, but no answer. I assume they're probably busy trying to find an exorcist, so I carry on regardless.

Two years. Two years I've spent digging this rabbit hole that's probably worth a thesis or two. I know enough about delinking now that I could probably write a book that would read like a Lovecraftian horror story to people who develop linkers for a living. I've automated this stuff down to making a selection inside Ghidra and clicking "File > Export Program...", but there's only so much you can do to make accessible, or even understandable, a technology that lets you literally rip code out of a Linux program and shove it into a Windows program, or out of a PlayStation game and into a Linux program, and have it work, in spite of ABIs or common sense.

TL;DR I've developed a reverse-engineering technique by accident that would give professors teaching Computer Sciences 101 an existential crisis.


I am not sure if I follow correctly. Please clarify the following for me. Have you succeeded in making a tool which undoes the work of the linker?


I did, you can find the Ghidra extension there: https://github.com/boricj/ghidra-delinker-extension

The problem is properly identifying the relocation spots and their targets inside a Ghidra database, which is based on references. On x86 it's fairly easy, because there's usually a 4-byte absolute or relative immediate operand within the instruction that carries the reference. On MIPS it's very hard because of split MIPS_HI16/MIPS_LO16 relocations, where the actual reference can be hundreds of instructions away.

So you need both instruction flow analysis strong enough to handle large functions and code built with optimizations, as well as pattern matching for the various possible instruction sequences, some of them overlapping and others looking like regular expressions in the case of accessing multi-dimensional arrays. All of that while trying to avoid algorithms with bad worst cases because it'll take too long to run on large functions (each ADDU instruction generates two paths to analyze because of the two source registers).

Besides that, you're working on top of a Ghidra database mostly filled by Ghidra's analyzers, which aren't perfect. Incorrect data within that database, like constants mistaken for addresses, off-by-n references or missing references will lead to very exotic undefined behaviors by the delinked code unless cleaned up by hand. I have some diagnostics to help identify some of these cases, but it's very tricky.

On top of that, the delinked object file doesn't have debugging symbols, so it's a challenge to figure out what's going wrong with a debugger when there's a failure in a program that uses it. It could be an immediate segmentation fault, or the program can work without crashing but with its execution flow incorrect or generating incorrect data as output. I've thought about generating DWARF or STABS debugging data from Ghidra's database, but it sounds like yet another rabbit hole.

I'm on my fifth or sixth iteration of the MIPS analyzer, each one better than the previous one, but it's still choking on kilobytes-long functions.

Also, I've only covered 32-bit x86 and MIPS on ELF, for C code. The matrix of ISAs and object file formats (ELF, Mach-O, COFF, a.out, OMF...) is rather large. C++ or Fortran would require special considerations for COMMON sections (vtables, typeinfos, inline functions, default constructors/destructors, implicit template instantiations...). And you need to mend potentially incompatible ABIs together when you mix and match different platforms. This is why I think there's a thesis or two to be done here; the rabbit hole really is that deep once you start digging.

Sorry for the walls of text, but without literature on this I'm forced to build up my explanations from basic principles just so that people have a chance of following along.


It is not easy to follow. And there are many things worth discussing. I understand there are complications with MIPS and C++ just to name a few.

But let me stick with some basics. So, I can write and compile an x86 test.c program. Then, I use your extension and undo the linking. Then, I use the results to link again into a new executable? Are the executables identical? When does it break?

How much of a task is it to make it a standalone program? What about x64 support?


> But let me stick with some basics. So, I can write and compile an x86 test.c program. Then, I use your extension and undo the linking. Then, I use the results to link again into a new executable?

There are links in the README of my Ghidra extension repository that explain these use-cases in-depth on my blog, but as a summary:

- You can delink the program as a whole and relink it. This can port a program from one file format to another (a.out -> ELF) and change its base address.

- You can delink parts of a program and relink them into a program. This can accomplish a number of things, like transforming a statically-linked program into a dynamically-linked one, swapping the statically linked C standard library for another one, making a port of the program to a foreign system, creating binary patches by swapping out functions or data with new implementations...

- You can delink parts of a program and turn them into a library. For example, I've ripped out the archive code from a PlayStation game built by a COFF toolchain, turned it into a Linux MIPS ELF object file and made an asset extractor that leverages it, without actually figuring out the archive file format or even how this archive code works.

You can probably do even crazier stuff than these examples. This basically turns programs into Lego blocks: as long as you can mend them together, you can do pretty much anything you want. You can probably work on object files and dynamically-linked libraries too, but I haven't tried it myself.

> Are the executables identical?

Probably not byte-identical, but you can make executables that have the same observable behavior if you don't swap out anything in a manner that impacts it. The interesting stuff happens when you start mixing things up.

> When does it break?

Whenever the object file produced is incorrect or when you don't properly mend together incompatible ABIs. The first case happens mostly when the resynthesized relocations are missing or incorrect, corrupting section bytes in various ways. The second case can happen if you start moving object files across operating systems, file formats, toolchains or platforms.

> How much of a task is it to make it a standalone program?

My analyzers rely on a Ghidra database for symbols, data types, references and disassembly. You can probably port/rewrite that to run on top of another reverse-engineering framework. I don't think turning it into a standalone program would be practical because you'll need to provide either an equivalent database or the analyzers to build it, alongside the UI to fix errors.

> What about x64 support?

Should be fairly straightforward since I already have 32-bit x86 support, so the bulk of the logic is already there.

I encourage you to read my blog if you want to get an idea how this delinking stuff works in practice. You can also send an email to me if you want, Hacker News isn't really set up for long, in-depth technical discussions.


This sounds like it would benefit from modifications to linkers to make decomposition easier. The benefits of code reuse might make it worthwhile, although the security implications of code reuse without having any idea of what's in the code seem formidable.


Some linkers can be instructed to leave the relocation sections in the output (-q/--emit-relocs for gold/mold), but it's extremely unlikely that an artifact you would care about was built with this obscure option.

I'm mostly using this delinking technique on PlayStation video games, Linux programs from the 90s and my own test programs, so I'm not that worried about security implications in my case. If you're stuffing bits and pieces taken from artifacts with questionable origins into programs and then execute them without due diligence, that's another story.


Give us a blog link!


It's on my profile page: https://boricj.net


FOUNTAIN PENS


vtuber?


[flagged]


Well, it's good to have hobbies. Keeps ya out of trouble.


[flagged]


We managed to get Twitter bots into HN. Maybe tomorrow pigs will fly.



