
Why not a tracing GC instead?

Inko used to have a tracing GC (see https://yorickpeterse.com/articles/friendship-ended-with-the... for more details), but this was removed due to the various problems the presence of a tracing GC introduces.

Thanks for pointing me to that. Upon reflection, there's a further conversation to be had in PL design:

> The rationale for this is that at some point, all garbage collected languages run into the same issue: the workload is too great for the garbage collector to keep up.

What this actually means is "The program I wrote is a poor fit for the garbage collector I am using." which can be fixed by either changing the program or changing the garbage collector. People often focus on the latter and forget the former is a possibility[1].

Similarly with single-ownership and borrows, you can write a program that is fighting the memory management system (see e.g. any article on writing a doubly-linked list in Rust).

In other structured memory allocation systems (hierarchical, pools &c.), the memory allocation lends itself well to certain architectures of code.

As far as I know, nobody has done a comparison of various memory management systems and how they enable and hinder various forms of program design.

1: The "time" macro in SBCL shows run time plus allocation and GC statistics by default. I think this nudges people into thinking about how their program is allocating at exactly the right time: when they are worried about how long something is taking.


That's an interesting way to read the specification; it seems contrary to the intent and differs from how historic Unix systems have implemented it. The author of the man page also disagrees with you on it being allowed by the standard ("If the standardized behavior is required srand_deterministic() can be substituted for srand()...").

Fucking things up for people who understand how PRNGs work just to fix things for people who don't seems backwards; things like this are why configure scripts take over a minute to run. I might be okay with the non-determinism in rand() if srand() is never called, but intentionally ignoring the stated intent of the program just pisses me off.


> non-determinism in rand() if srand isn't ever called

This is the path Go took for its math/rand library eventually: if you don't seed the RNG, it's a CSPRNG, but if you do seed it, it's deterministic.

However, there is lots of "advice" out there to seed your RNG with "random" values like the current time to make it "more secure" (and to be fair, in some cases you may get sufficient entropy for the purpose at hand by doing this, but in most cases you won't, and what little you have is spent very quickly). So a call to srand may indicate that you know what you're doing, want determinism, and understand the consequences, but it doesn't necessarily mean that.


> seed your RNG with "random" values like the current time

Good point. srand(time(NULL)) is a common, yet nondeterministic, seed. The caller clearly doesn't expect rand() to generate a particular sequence of numbers, so srand() could check whether the given seed looks like a time_t close to the current time() and, if it does, then use a proper CSPRNG instead.


That's a horrifying suggestion.

Any good solution would start with deleting rand and srand from the C standard library entirely, but that's never going to happen, so we only have not-good solutions to consider.

Amusingly, in Go 1.24 they're turning srand into a no-op: https://github.com/golang/go/issues/67273

The spec itself contains an informative section which says:

> The following code defines a pair of functions that could be incorporated into applications wishing to ensure that the same sequence of numbers is generated across different machines.

And then gives code which does not use rand/srand. The intent of the spec is very clearly not for it to be portably reproducible.


There's something weird with the rendering of the runways on my machine. For example, IAD's 1R/19L is grey while 1C/19C and 1L/19R both show up blue, as expected. 12/30 is the proper yellow.

> ...where he compares the rather top-down, leader driven culture of Unix development to the free-for-all style of Linux.

The Cathedral example was actually GNU, which is Not Unix (it's in the name!).


Shoot, you're absolutely right! It's been a long while since I last re-read the article, and I had forgotten how "targeted" (for lack of a better term) it was at certain specific individuals.

My wife asked me this question, since I'm a considerably faster reader than her. We discovered that (if I try reading aloud as fast as possible) I can read aloud faster than she can read without speaking, though my tongue does occasionally trip over words at that speed. I read in my head faster yet.

I read light prose several times faster than an audio book; 60-100 pages in an hour for a typical paperback. That works out to 240-400 pages for the author's 240 minutes.

I read Going Postal by Pratchett in under 4 hours.

[edit]

Also, consumers of audiobooks may turn the speed up. My wife listens at 1.25-1.5x speed, depending on the narrator's pace.


How do you do that? Are you hearing the words out loud in your head? Is every sentence being parsed? Or is it like touch typing where you're hitting just close enough to get meaning but not enough to go deep?

Also, what is light prose? Is it just something you don't care too much about extracting meaning from?


Some books are easier to read than others. They use more common words, or contain a more traditional narrative that is easy to understand, or both.

> How do you do that? Are you hearing the words out loud in your head? Is every sentence being parsed?

Yes, I still have an internal voice when I read (and when I touch type for that matter).

> Also, what is light prose? Is it just something you don't care too much about extracting meaning from?

- The words are familiar (a counterexample would be any work written in Elizabethan English, since I'll trip over archaic word meanings)

- The organization makes it easy to understand what is happening. Some authors do the opposite as a literary device; highly non-linear works or an extremely unreliable narrator are counterexamples. This does not exclude allegory: Arthur Miller's The Crucible is a straightforward dramatization of a story in colonial Salem, but is rather transparently an allegory for McCarthyism.

I think that extracting meaning is largely orthogonal to whether prose is light or not. Certainly many important works are intentionally dense, but there's also overly pretentious drivel dressed up in hard-to-understand clothing.


> Is reading faster than listening for everyone?

In my native language, absolutely.

In non-native languages, often not.

One of the interesting things I've found in picking up a few languages is that listening to a foreign language is quite different from reading one.

When listening, especially to a recording (where you can't simply ask someone to repeat themselves, though you can usually replay a passage), when I hear a word I don't immediately recognise, my mind just sort of skips over it. Often I have some idea of its meaning (from previous encounters or, more often, context), even if that's vague. When reading, however, strange words cause me to stumble and I'll slow considerably. The consequence is that I can listen faster than I can read.

The dual option of listening and reading simultaneously is particularly effective, and I'm fond of options (often podcasts) which offer both audio and text transcripts to read along. This also seems to be more useful for expanding my language skills.


Was that for Yore?

Um, yeah. How do you know the name?

I have never mentioned it by name, I think. Only linked to it occasionally.

Edit: Another reason I am surprised is that Yore became its name only a few months ago.


I'll admit I had to search for the name. I recognized your username as "that Yao guy" and remembered that you were also working on a build system and VCS. Since yore is in the same repository as yao, it wasn't hard to find.

Ah, cool!

I think it stands for "Very Old Games On New Systems"

The sad irony of members of a forum named as such harassing someone who is indeed building a very old MIDI synthesizer for new hardware.

It's specifically about the limits of incremental design.

TFA's thesis is roughly that incremental design dooms you to a local maximum:

Since Jeffries (the TDD/Sudoku guy you seem to be aware of) starts out with a suboptimal representation for the board, there is no small change that can turn the bad code into good code. At some point along the line, he makes motions in the direction of the design that Norvig used, but as there is no incremental way to get there (maintaining two representations was a dead-end since it hurt performance so much), he never does.


Thanks! Annoyed that the link still isn't loading for me.

I'm curious on the thesis. I'm assuming "locked in by tests" increments are the problem? I'm curious why you couldn't treat this like any learning task where you can take efforts that are effectively larger steps to see where they can get you?

I should also note that I am not clear I understand how bad of a representation of the board you could get locked with. I got a working solver years ago with what is almost certainly a poor representation. https://taeric.github.io/Sudoku.html


> I'm curious on the thesis. I'm assuming "locked in by tests" increments are the problem? I'm curious why you couldn't treat this like any learning task where you can take efforts that are effectively larger steps to see where they can get you?

Here's a quote from TFA on this (using >> for quotes from TFA)

>> But Jeffries isn't in the business of starting over. He not only believes in incremental design, but in using the smallest possible increments. In his posts, he regularly returns to GeePaw Hill's maxim of "many more much smaller steps." He is only interested in designs that are reachable through a series of small, discrete steps:

and later

>> Jeffries, however, does not believe in bigger pictures; his approach to software design is proudly myopic. He prevents himself from seeing the forest by pressing his face against the trees.

> I should also note that I am not clear I understand how bad of a representation of the board you could get locked with. I got a working solver years ago with what is almost certainly a poor representation. https://taeric.github.io/Sudoku.html

First, a point of clarification: Jeffries also gets a working solver, not in the original episode you may have heard of, but in a series of forty-five articles 18[1] years after the infamous incident; TFA focuses almost entirely on this (successful) attempt.

Once Jeffries has a working solver he attempts to simplify it, and TFA makes the claim that these attempts are hindered by the choice of Option[Int] for each cell rather than a Set[Int] (i.e. a set of remaining legal values). This results in Norvig's code being significantly more succinct than Jeffries' code, even when implementing the same heuristic.

1: This originally read "two" due to quick skimming on my part.


Oh wow, 45 articles still feels like a lot.

Link still isn't loading for me. I'm forced to assume it is a problem on my end, at this point. Going to be hilarious to find this is from some sort of content block on my side.


> two years after the infamous incident

Actually 18 years later.


Thanks, that's right; it's two years after an interview that was fifteen years later. Fixed.

Is Comcast still charging content providers and CDNs for peering?

Yes.

And peering with Comcast is almost the same price as transit.

Deutsche Telekom, Telstra, and the Korean telcos also do this.

