
I'm definitely not one of those people who thinks Windows should stay frozen forever as Windows XP; I appreciate that we're finally getting 15-20-year-old features like search filtering in the task manager.

But whoever's bonus metric is tricking people into signing in with MS accounts is really making it a gross experience. I managed to finally get my new machine's install on a local account using "OOBE\BYPASSNRO", although none of the instructions I found online worked for some reason; I eventually had to find the script myself. Then, when trying to eventually register Windows, it tricked me into converting to an MS account, because the license troubleshooter forces you to log in to do anything.

The next time I booted my machine, it was asking for my MS password instead of my local password... So that's taught me to never enter my MS account into anything on the desktop, because there is a risk it will silently do that.

You have a "Home" and a "Pro" version, really just wish for a world where they do this silly scammy behavior in the Home version and let the Pro version just be an actual tool to use hardware. I'm not using Windows because it's a great operating system I'm only using it because it's the only OS in the Venn diagram of "Supports Nvidia GPUs" and "Runs Adobe CC", can't they just be happy I use it at all.


I recently broke a piece of glassware that had been with me for over a decade. I have all the pieces, and I'm wondering if it's possible to stick it in a kiln and run a heat cycle that will cause all the cracks to flow together without deforming the shape, and, barring that, whether there's a thin gold glue I could use to do some approximation of this technique, but with cracks that are essentially zero width.

Asyncio allows you to replace the event loop with an implementation of your own. For Temporal Python we represent workflows as custom, durable asyncio event loops, so things like asyncio.sleep become durable timers (i.e. code can resume on another machine, so you can sleep for weeks). Here is a post explaining how it's done: https://temporal.io/blog/durable-distributed-asyncio-event-l....
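As a minimal sketch of the hook this relies on (illustrative only, nothing like Temporal's actual implementation; RecordingLoop is a made-up name): every asyncio.sleep is routed through the loop's call_later, so a custom loop can intercept those timers, and a durable one could persist them.

    import asyncio

    class RecordingLoop(asyncio.SelectorEventLoop):
        # Toy loop: log every scheduled timer. A durable loop would persist
        # this state instead, so the workflow could resume elsewhere.
        def call_later(self, delay, callback, *args, **kwargs):
            print(f"timer scheduled for {delay}s")
            return super().call_later(delay, callback, *args, **kwargs)

    async def main():
        await asyncio.sleep(0.5)  # goes through RecordingLoop.call_later

    loop = RecordingLoop()
    try:
        loop.run_until_complete(main())
    finally:
        loop.close()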

The biggest problem with asyncio is how easy and common it is in Python to block the asyncio thread with synchronous calls, gumming up the whole system. Python sorely needs a static analysis tool that can build a call graph to help detect when a known thread-blocking call is reached directly or indirectly from an async def.
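A tiny illustration of the failure mode (time.sleep standing in for any synchronous call) and the usual escape hatch: the first handler freezes every other task on the loop for a full second, while the second offloads the call to a worker thread.

    import asyncio
    import time

    async def bad_handler():
        time.sleep(1)  # synchronous call: blocks the entire event loop

    async def good_handler():
        await asyncio.to_thread(time.sleep, 1)  # runs in a worker thread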


Discussion (such as it is) [0] (28 points, 5 hours ago, 4 comments)

[0]: https://news.ycombinator.com/item?id=40267446


This was very good!

I was really happy when the first code on the page, for once, did not look like it was written by someone writing C when they would much rather write something else.

This line:

    struct article *my_article = malloc(sizeof *my_article);
is almost exactly how I would have written it, and (to me) does three things very right:

- Does not use a pointless, wordy, bloated cast to somehow "convert" the pointer to the proper type. The point of malloc() returning a void pointer is that no such cast is needed.

- Does not hardcode a size, but uses the sizeof operator to let the compiler compute it.

- Does not repeat the type name, which is brittle and wordy, but instead (again) lets the compiler compute it from the target pointer.

I was a bit more hesitant about this line, later in the article:

    return (void *)b + sizeof(block_t) + b->size;
Not sure what language standard is targeted by the code in the article, but pointer arithmetic on void pointers is a GCC extension, which should at least be mentioned; the portable version casts to char * first: return (char *)b + sizeof(block_t) + b->size;

We (Marqo) are doing a lot on 1 and 2. There is a huge amount to be done on the ML side of vector search, and we are investing heavily in it. I think it has not quite sunk in that vector search systems are ML systems, with everything that comes with that. I would love to chat about 1 and 2, so feel free to email me (email is in my profile).

Reminds me of “The Birth & Death of JavaScript”

https://www.destroyallsoftware.com/talks/the-birth-and-death...


Copy and patch is a variant of QEMU's original "dyngen" backend by Fabrice Bellard[1][2], with more help from the compiler to avoid the maintainability issues that ultimately led QEMU to use a custom code generator.

[1] https://www.usenix.org/legacy/event/usenix05/tech/freenix/fu...

[2] https://review.gerrithub.io/plugins/gitiles/spdk/qemu/+/5a24...


That's such a sweet idea. If you're looking for a weekend project idea, I recommend flipping it 180: https://days.sonnet.io

How I bail myself out of Rust lifetime problems, as somebody who probably learned Rust the wrong way (by just trying to build stuff as if it were C/node.js and running into problems, instead of slowing down and reading):

1. .clone() / .to_owned()

2. String vs &String/&str with a & borrow

3. lazy_static / OnceCell + Lazy / OnceLock

4. Arc<Mutex<T>> with lots of .lock(). What would start off as NULL in Java/C becomes Arc<Mutex<Option<T>>>, and you have to check if it .is_none()

I think that's about it. I rarely introduce a lifetime to a struct/impl. I try to avoid it honestly (probably for worse). Arc kind of bails you out of a lot (whether that's for good or not I don't know).

edit: Remembered another. I think the compiler/borrow checker is kind of weird/too verbose when you have a borrowed primitive like u32 and have to dereference or clone it.


I recently had a coworker share a resume they had created with LaTeX. It was beautiful.

As someone not as interested in committing fully to LaTeX, but wanting a similar outcome, I found that I could get a pretty but easy-to-edit resume by writing Markdown and rendering it via Pandoc (e.g. pandoc resume.md -o resume.pdf), since Pandoc supports LaTeX (among many other formats).

Here is a great GitHub repo that helped me get started: https://github.com/mszep/pandoc_resume

I would love to hear of other low(er) barrier-to-entry ways to use LaTeX, because it’s a pretty steep commitment for someone who isn’t a professional writer.


I don't remember a time when online certificates from MIT, Berkeley, etc. would have been anti-signals.

https://www.edx.org/learn/computer-programming/massachusetts...


Oh?

I applied to Google in late 2021 / early 2022. The suggestion was to grind leetcode.

The recruiter emailed me after my interviews to say they'd resulted in good news. She set up a phone call shortly afterward in which she told me I'd passed the interviews, I should prepare for a series of "team fit" interviews, I should see a job offer in about 6 weeks ("the end of February", when the call occurred in mid-January), and congratulations!

I was never offered, or contacted about, a single "team fit" interview. When the end of February rolled around, she informed me that, because I'd done poorly in the interviews (the same ones mentioned above; my results were good in January, but by February they had apparently spoiled), Google was uninterested in hiring me.

No one has really been able to explain why, in Google's eyes, "you did so poorly we're rejecting you" is a message to congratulate the candidate over, or why performing at that level is considered "passing" the interviews.


Those seemingly imperfect attributes were placed there by the devil (a Perl dev) to cause fear and doubt.


I have been training a natural intelligence model for 3 years now and she still doesn’t get nuance. Things are either good or bad in her book: nothing in between. My plan is to let her train with binary good/bad labels till the age of 5 and then start smoothing the labels after that. Wonder if that works for your AI.

I think this is a lot of the mathematics of scaling LLM training, which is quite important!

One fundamental requirement though for any machine learning engineer working on these kinds of systems is https://people.math.harvard.edu/~ctm/home/text/others/shanno.... I do not want to be entirely hypocritical as I am still ingesting this theory myself (started several years ago!), but I've found it _absolutely crucial_ in working in ML, as it implicitly informs every single decision you make when designing, deploying, and scaling neural networks.

Without it, I feel the field turns into an empirical "stabby stab-around in the dark" kind of game, which very much has its dopamine highs and lows but, à la Sutton, does not scale very well in the long run. ;P


cough mobilism.org/libgen.is/irc channels/z-library cough

I've been reading this paper, which gives pseudocode for various transformers, and finding it helpful: https://arxiv.org/abs/2207.09238

"This document aims to be a self-contained, mathematically precise overview of transformer architectures and algorithms (not results). It covers what transformers are, how they are trained, what they are used for, their key architectural components, and a preview of the most prominent models."

