Hacker News | Snarwin's comments

It looks like bans for odd/even license plates were used in Paris in 2014 [1] and 1997 [2], but not before then. However, a similar scheme was used to ration gasoline in the US during the 70s [3].

The only source I can find for the claim about police confusion is the one cited by Wikipedia [4], whose reliability I'm inclined to doubt based on the 1997/1977 discrepancy.

[1] https://www.npr.org/sections/thetwo-way/2014/03/17/290849704...

[2] https://www.wired.com/1997/09/paris-smog/

[3] https://www.npr.org/sections/pictureshow/2012/11/10/16479229...

[4] https://en.m.wikipedia.org/wiki/Odd%E2%80%93even_rationing#D...


The Wikipedia article matched my vague recollection of the event, but if they got the year wrong, it may as well be a rumour at this point.


Thanks!


It looks like this is what git merge-base --fork-point is supposed to do, although according to the docs it is not 100% reliable.


Based on all the discussions I’ve seen I think it’s impossible to programmatically find the “base” in general. Maybe it’s possible for most cases though.


Interestingly, C already does this for function calls: "fptr()" is considered equivalent to "(*fptr)()".


A mistake, IMO; I want to be able to see right away whether a call target is static or dynamic.


Page is completely unreadable in Firefox for Android.


No, it's that Dark Reader doesn't play nice with the background texture on that site. Perfectly readable with Dark Reader disabled.


Thank you kind soul!


I'd say C is actually a pretty good choice for an educational project like this. Having to write out all of your data structures by hand and manage your memory manually is a good learning experience, and since you're not writing serious production code, you don't have to worry too much about making mistakes.


_someone_ has to learn to build runtimes


_someone_ “else” :)


I don't think these authors intend for you to interpret their criticism of Audible's business practices as a statement about your moral character. One of their central points is that Audible's business practices have severely limited consumer choice in the audiobook market. You can't be held morally responsible for a choice that's been taken out of your hands.


May not be their intent, but declaring at large that something is bad and is "enshittifying" the internet is a very transitive label, no?

And again, there is no evidence that Audible is lessening consumer choice. There is some evidence that they lessen producers' practical choice, but even that is weak. With few exceptions, mostly Audible-produced works, I can find all of the books I care about elsewhere.


The cumulative point of this thread seems to be that they get to work with 75% of 90% of the Audible book market (you claim the books are available elsewhere = 25% commission).

You can call that whatever you want, but it sounds a lot like a descent to the pre-Internet situation, which was no marker of stable quality and which many people who survived the 1980s don't want to see again.


I'm not clear on this post. What do you mean?

They likely have 90% of the audiobook market, but they do not have exclusives on that much. I'd hazard that 90% of the available book market still goes through the major publishers. At large, this makes sense, as most books are still written with upfront payments from publishers.

I would also not really shed much of a tear for Audible. I just don't understand the idea that getting rid of them moves us further from the old style. It seems more likely that the publishers would browbeat any future entrant into the market until things were much closer to how they were in the 80s and 90s.


Most likely the intended audience is the press. Now that this letter has been published, journalists can write stories like "Online Safety Bill Under Fire From Security Experts", which make the issue digestible to a lay audience.


> Case in point at 9:30[0] he talks about localised memory patterns as though he's the first to come up with it

This seems like an uncharitable interpretation to me. When he says "I'm not sure if anyone else has ever used this approach," it's pretty clear from context that the "approach" he's referring to is interleaving the sine and cosine tables to improve cache usage. And he doesn't even claim to be the first to come up with it, just to have independently discovered it.


Just telling you why, even as a fan, I can see why he rubs people the wrong way.

If you look at previous HN threads on his content many have similar complaints.

Everyone loves the content and premise of revisiting SM64.

Many don't like the presentation.

If anything I think I'm being charitable assuming it's optimized for the lay public.


First result on DDG is a Wikipedia article that says

> The term was coined in 1998 by Vincent Flanders, author of the book and accompanying website Web Pages That Suck.


A number of command-line programs are actually just wrappers around a more strongly-typed library API. For example, curl and libcurl, or ffmpeg and a whole collection of audio/video libraries. [1]

Of course, in the POSIX world, "library API" means "C API", so if you want to write nice-looking Python code like in the article, there is still some more work to do.

[1] https://trac.ffmpeg.org/wiki/Using%20libav*


ffmpeg is a bad example, as libav is a library specialized toward making ffmpeg. It's not a generic audio/video manipulation library.


Where does the requirement for it being a generic library come from? The point is that a lot of things that are available as command-line utilities for shell scripts already have a C API available as well.

And libav* is used by a lot more than /usr/bin/ffmpeg.

