
Totally!

Pro tip: if you really do know that contention is unlikely, and uncontended acquisition is super important, then it's theoretically impossible to do better than a spinlock.

Reason: locks that have the ability to put the thread to sleep on a queue must do compare-and-swap (or at least an atomic RMW) on `unlock`. But spinlocks can get away with just doing a store-release (or just a store with a compiler fence on X86) to `unlock`.

Spinlocks also have excellent throughput under most contention scenarios, though at the cost of power and of being unkind to other apps on the system. If you want your spinlock to be hella fast under contention, just make sure you `sched_yield` before each retry (or `SwitchToThread` on Windows; on Darwin you can do a bit better with `thread_switch(MACH_PORT_NULL, SWITCH_OPTION_DEPRESS, 1)`).
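
To make that concrete, here is a minimal sketch in C11 atomics (my illustration, not from the original comment): the uncontended path is a single atomic exchange to lock and a store-release to unlock, and contended retries yield before spinning again.

  /* Minimal C11 sketch of the idea above (illustrative only).
     Initialize with: spinlock l = { false };                              */
  #include <stdatomic.h>
  #include <stdbool.h>
  #include <sched.h>   /* sched_yield(); use SwitchToThread() on Windows */

  typedef struct { atomic_bool locked; } spinlock;

  static void spin_lock(spinlock *l) {
      for (;;) {
          /* Fast path: one atomic exchange, no syscall, no wait queue. */
          if (!atomic_exchange_explicit(&l->locked, true, memory_order_acquire))
              return;
          /* Contended: yield before spinning on a plain load, so we don't
             hammer the cache line or starve the current lock holder. */
          sched_yield();
          while (atomic_load_explicit(&l->locked, memory_order_relaxed))
              sched_yield();
      }
  }

  static void spin_unlock(spinlock *l) {
      /* The point from the comment: unlock is just a store-release;
         no compare-and-swap is needed because there is no wait queue. */
      atomic_store_explicit(&l->locked, false, memory_order_release);
  }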


AI, at least in its current form, is not so much replacing human expertise as it is augmenting and redistributing it.

Just a note from having worked in a Congressional office: phones are usually answered by interns and a staff assistant, and mail is opened almost exclusively by the staff assistant, but that is still the lowest-level position in an office, so the marginal difference is pretty minimal. And for any issue that's time-sensitive, don't bother physically mailing, because it has to go through extensive security checks before it gets delivered to us. I'd say the ideal way to maximize impact with an elected official here is to ask them the question publicly (FB, at a town hall) and try to garner as much support as you can, so they feel like they have to answer you positively.

And yet, the reason tracing GCs are chosen by virtually all high-performance languages that heavily rely on GC is that they've been found to be faster in practice for the common workloads.

One of the reasons your intuition is not so straightforward is that a tracing GC needs to do no work whatsoever when the number of references is zero. One of the common ways to teach the basics of GC is to start by looking at tracing and refcounting as duals: refcounting must do work to free objects, while tracing does work to keep them alive. If you're thinking in terms of what work needs to be done to promptly determine when an object becomes garbage, then you're already not thinking in terms of tracing, because tracing never actually needs to learn when an object becomes garbage (this isn't actually true when collectors do reference processing, but that's another story).

Or, if you want to think about it another way: in a tracing collector there are only ever two cases no matter how many pointers there are to an object, reachable or not, i.e. the same one and zero as in your case, only there isn't even a need to ever set the counter to zero.

In principle, tracing and refcounting can be quite similar (https://www.cs.cornell.edu/courses/cs6120/2019fa/blog/unifie...) in their behaviour, but in practice most refcounting GCs in industrial use are crude and don't match the performance of the tracing GCs in common use, which are quite sophisticated.
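
To make the duality concrete, here is a toy sketch (my illustration, not from the linked post): refcounting does bookkeeping on every pointer store so it can notice death promptly, while tracing makes stores plain writes and only ever does work proportional to the objects that are still live. `free_obj` here is just a stand-in reclaim routine.

  /* Toy sketch of the refcounting/tracing duality (illustrative only). */
  #include <stdlib.h>

  typedef struct obj {
      long refcount;        /* used only by the refcounting scheme */
      int  marked;          /* used only by the tracing scheme */
      struct obj *field;    /* a single outgoing reference */
  } obj;

  static void free_obj(obj *o) { free(o); }  /* a real one would drop o->field too */

  /* Refcounting: work happens on every pointer store, so garbage is noticed
     promptly, at the cost of touching counts on the hot path. */
  void rc_store(obj **slot, obj *new_ref) {
      obj *old = *slot;
      if (new_ref) new_ref->refcount++;
      *slot = new_ref;
      if (old && --old->refcount == 0)
          free_obj(old);
  }

  /* Tracing: stores are plain writes; the collector only ever visits objects
     that are still reachable and never needs to notice garbage at all. */
  void trace_store(obj **slot, obj *new_ref) { *slot = new_ref; }

  void mark(obj *o) {
      if (!o || o->marked) return;
      o->marked = 1;
      mark(o->field);
  }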


I was this kid in my day. I had an extremely high-end PC that I built myself (to date myself, it cost well over $6,000[0]): a home-built 486DX50 (right when they came out), maxed-out motherboard RAM (that might have been 16MB, but memory escapes me), a Turtle Beach wavetable synth sound card, and a video card capable of driving a 16" CRT display at 1152x1024. I ran a multi-node BBS on two 9600, then 16.8K, modems.

Had it not been for the experience of "tinkering" with that PC and its predecessor (a ten-year-old 8088), I would likely never have become a software developer.

I had the honor of building a gaming PC for a neighbor kid and watched over four years as he (unbeknownst to him) became a very competent geek. A few years of tweaking settings in games to eke out a higher frame rate, fighting with firmware updates, video card drivers, software updates, and every other bit of mayhem that the words "I want to game on my PC" evoke, and you pick up a wealth of knowledge "by accident".

I hadn't talked to the kid in four years; we met up again recently, and I was blown away. First thing he does is pull a "Flipper Zero" out of his pocket and tell me about all of the crazy things he's done with it.

He is as capable as any systems guy I've ever talked to. He has no clue that he has this skill, either.

It was such a good experience that I opted to buy my son a decent gaming laptop to graduate him from consoles. Over the last year, the same thing has happened to him. This year, my daughter (she's two years younger) begged me to build her a desktop, so she's getting a really nice Christmas followed by a year of learning/frustration. :)

Oh, and the fam got a Flipper Zero this year, too. Can't have the neighbor kid have all the fun.

[0] I saved up the money to afford it by building computers for other people.


Svalbard has a very weird legal status. Anyone has a visa-free right to live and work there. Russia maintains a population there at a loss-making coal mine, arguably to keep a persistent claim of having a national interest in it. The location is also ideal for polar satellite comms, and with the opening of trade routes past Svalbard as the Arctic melts, suddenly a lot of nations care a lot more; Norway and its NATO allies have picked up the pace on exercises in the region. I have reason to believe that "what do we do if Russia decides it wants to take Svalbard" is regularly considered in military planning. The Svalbard treaty is a weird leftover from a different time; a time when Svalbard was much less valuable.

Svalbard has a surprisingly large Thai population. I don't know about today, but at one point about ten percent of the population were Thai citizens.


In many ways it's harder to see something through water than it is to see it through rock. Ground-penetrating microwave radar can get through tens of meters of quartz but only centimeters of water. VLF and ULF penetrate 10-100x farther through the ground than through seawater. The frequencies that can penetrate more than a few hundred meters of water are around the same as the ones powering your lights. Antennas at those frequencies are miles long, and you need special, non-conductive soils and bedrock to make them work. In short, it's a real pain in the butt.

After 200 meters the ocean is practically opaque. Objects much deeper than that reflect back only a handful of photons. Below 500-1000 meters you're talking about single photons per second reaching the surface.
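
For a rough sense of scale, here is a back-of-envelope Beer-Lambert estimate (my numbers, not the commenter's), assuming clear ocean water attenuates blue-green light at roughly 0.05 per meter and full sunlight delivers on the order of 1e21 visible photons per square meter per second:

  /* Back-of-envelope Beer-Lambert estimate (assumed numbers, not measured). */
  #include <stdio.h>
  #include <math.h>

  int main(void) {
      const double k = 0.05;        /* assumed attenuation for clear water, 1/m */
      const double photons = 1e21;  /* assumed surface flux, photons/m^2/s */
      const double depths[] = {200, 500, 1000};
      for (int i = 0; i < 3; i++) {
          double frac = exp(-2 * k * depths[i]);  /* down to the object and back */
          printf("%5.0f m: %.2g of surface light returns (~%.2g photons/m^2/s)\n",
                 depths[i], frac, photons * frac);
      }
      return 0;
  }

Under those assumptions, only a few parts per billion of the light makes the round trip at 200 m, and by 500 m the returning flux over a whole square meter is down to a fraction of a photon per second.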

Water is one of a quite small number of general radiation absorbers. To block alpha/beta/gamma radiation you need sheer density: more mass per volume to slow down high-energy particles. You can use heavy atoms like lead or uranium, or very densely packed lighter atoms. Neutron radiation is different: there, the number of atoms per volume is what matters, to maximize the number of scattering events. That means you use things like polymers (hydrocarbons), so you have as many small atoms (hydrogen) in a volume as possible. Water is one of the denser liquids while still being 11% hydrogen by mass (2 of H2O's 18 mass units), vs 14% for pure polyethylene (2 of CH2's 14). That makes it quite good.

Even sound isn't great underwater, relatively speaking: because the medium is a fluid in motion, there's a constantly changing distortion on everything. The thermal conductivity also means thermal signatures spread out quickly.


Speaking as a PhD, I can assure you that just because PhDs research ways to make a thing work does not mean it does. Nor are they, or those hiring them, immune to presenting their results in the most optimistic possible way.

I am personally acquainted with several active fields that have been trying to make things work for 20+ years with very moderate success. They present results that amount to "barely better than nothing" in order to stay funded. They also make the same argument you do, "it stands to reason that it should work", to keep the money flowing.

There is the drug industry, which is chock full of new drugs that are barely better than placebo or generics, if they are at all. The results are hyped because it keeps the cash flowing. It would not be entirely true, but nor would it be too far off the mark, to say that almost the entire drug industry is based on fraud and exaggeration. And that industry is far more transparent WRT data and superficially altruistic than the advertising industry. This example is perfectly parallel because no one denies advertising works, just like no one denies antibiotics work, but the difference between new and old drugs, just like new and old advertising methods, seems to be greatly exaggerated.

Google built its entire empire on contextual advertising, not behavioral targeting. With Facebook you may have an argument, but Facebook has a very special dataset not available anywhere else, and I would also add that Facebook makes the same amount of money whether behavioral targeting really works or they've just convinced their advertisers that it does.

I am neutral on the subject of whether it works because I have never looked into it, but "lots of self-interested people say that it does" is not convincing, and the fact that such an argument is so frequently made makes me think there is no actual proof.


I have been building a new not-exactly-a-programming-language using Racket and it has been an absolute pleasure, and the community has been extremely friendly and helpful.

One stumbling block I encountered along the way is that I could not find documentation for the full process of setting up a new packaged language (I have been known to fail hard at finding the right documents, so this may just be my problem). https://beautifulracket.com/ has excellent coverage of the general process, but the differences between #lang br and #lang racket/base are just large enough that I couldn't fill in the gaps without going and looking through multiple repos to see how others had laid out theirs, and then intuiting which setup commands I needed to run.

If I find time I intend to write up my findings and submit them to the documentation; the short version is here in case someone finds it useful:

  repo-base
  ├── .git
  ├── my-lang       ; run `raco pkg install` here
  │   └── info.rkt  ; meta package depends on my-lang-lib
  └── my-lang-lib   ; run `raco pkg install` here
      ├── info.rkt  ; include actual dependencies
      └── my-lang   ; implementation goes here
          └── main.rkt  ; in theory you can put your whole implementation here
Once that setup is complete you should be able to use `#lang my-lang`.

Hy is a self-compiling macro. Macros are just programmable mini-compilers embedded in the host language. This lets you do neat things like take a declarative description of a process and compile it into an executable function:

https://github.com/paultag/snitch/blob/master/example.hy

I gave a talk at PyCon last year that discussed, among other things, the implementation of a constraint solver for games. You can implement an answer-set language on top of it to make it easier to use for yourself or collaborators in pure Python... but writing AST-transforming code using Python's `ast` module is a huge pain. Just sprinkle some parens around and you can treat the Python AST as if it were just another plain old data structure. It becomes orders of magnitude easier to write your answer-set solver's language interface.

And because Hy is freely importable from Python code you can simply use it to write that front-end compiler for your constraint solver. The rest of your code can be in Python if that works better for you.

But it doesn't stop there... you also get Hy's core library, which provides some nice higher-order functions and macros that compile down to really nice, idiomatic Python code: the kind of code you'd want to write to express that idea.

It's a nice tool to have.

(as a contributor I'm slightly biased).


You could have a look at Siskind's paper, Flow-Directed Lightweight Closure Conversion. Warning, it's not exactly light reading:

  ftp://ftp.ecn.purdue.edu/qobi/fdlcc.pdf
Although the subject is nominally closure conversion, he describes how flow analysis is used to propagate information about values around the compiler's model of the program. Because the analysis is done on a whole-program basis it need not be very conservative: in a whole-program compiler you never have to give up and do something the slow way because a value might be used in a place you can't see.

This aggressive flow analysis gives the compiler enough information to pick nice flat C-like representations, inline objects inside other objects, use more specialised calling conventions, etc.

For example, a whole-program compiler might determine that the elements of an array are never compared with eq or have their mutable parts assigned to, which could allow the elements to be inlined into the array. Doing this is tough or impossible in a traditional compiler because the contents of files that have yet to be compiled are unknown, forcing the compiler to assume the worst.
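
A rough illustration (mine, not Siskind's) of what that representation choice means in C-like terms: if the analysis proves the elements are never compared with eq or mutated through another reference, they can be stored unboxed, inline in the array, rather than behind per-element pointers.

  /* Illustration (not from the paper) of the representation choice:
     what a vector of pairs might compile to in each case. */
  typedef struct pair { double x, y; } pair;

  /* Conservative layout: every element is a separately heap-allocated,
     pointer-identified object, as eq comparison or shared mutation of
     elements would require. */
  typedef struct { int length; pair **elems; } boxed_vector;

  /* Layout justified by whole-program flow analysis: elements proven never
     to be compared with eq or mutated through another reference can live
     inline, giving one allocation, no pointer chasing, and better locality. */
  typedef struct { int length; pair elems[]; } flat_vector;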

Pretty neat stuff.

