inopinatus's favorites | Hacker News

Surprisingly, storing a game (with all moves) can take less space than encoding a single board position. This is because you can effectively encode a move in a single byte: there are fewer than 256 legal moves in any position, so each move can be stored as an index into the list of legal moves. Applying compression to the resulting binary string reduces the space even more.
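
For a concrete flavour of the move-index idea, here's a minimal sketch using the python-chess library (the naming and the sort order are mine; Lichess additionally orders candidate moves with a heuristic so that common moves get small indices, which compresses better):

    import chess

    def encode_game(moves_uci):
        """Encode a game as one byte per move: the move's index in the
        sorted list of legal moves for the current position."""
        board = chess.Board()
        out = bytearray()
        for uci in moves_uci:
            legal = sorted(board.legal_moves, key=lambda m: m.uci())
            move = chess.Move.from_uci(uci)
            out.append(legal.index(move))  # always fits in a single byte
            board.push(move)
        return bytes(out)

    def decode_game(data):
        """Replay the byte string back into a board position."""
        board = chess.Board()
        for idx in data:
            legal = sorted(board.legal_moves, key=lambda m: m.uci())
            board.push(legal[idx])
        return board

    print(encode_game(["e2e4", "e7e5", "g1f3"]).hex())  # three moves, three bytes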

Check out this great blog post from Lichess for more information: https://lichess.org/blog/Wqa7GiAAAOIpBLoY/developer-update-2...

And a shameless plug: using this encoding, I'm storing millions of games on https://www.chessmonitor.com/ There you can link your Chess.com or Lichess account and view all kinds of statistics about your games.


“Those spikes for std::lower_bound are on powers of two, where it is somehow much slower. I looked into it a little bit but can’t come up with an easy explanation. The Clang version has the same spikes even though it compiles to very different assembly.”

I saw this and immediately went “oh, those look like Intel hardware”.

Intel uses 12-bit memory port quick addressing in their hardware, resulting in an issue known as “4K Aliasing”. When addresses are the same modulo 4K, it causes a collision that has to be mitigated by completing the associated prior memory operation to free up the use of the address in the load/store port system, effectively serializing operations and making performance very dependent on the data stride.

I first bumped up against this when running vertical passes of image processing algorithms that got very slow at certain image sizes, a problem that could be avoided by using an oversized buffer and correspondingly oversized per-line “pitch” to diagonally offset aliased addresses (at a small cost to inter-line cache line overlap).
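
For illustration, a minimal numpy sketch of that padding idea (sizes and names are mine): with a 4096-wide float32 image the row pitch is exactly 16 KiB, so element (i, j) and element (i+1, j) collide modulo 4 KiB on every step of a vertical pass; oversizing each row by one cache line staggers the addresses.

    import numpy as np

    h, w = 4096, 4096
    pad = 64 // 4                       # one 64-byte cache line of float32s
    buf = np.empty((h, w + pad), dtype=np.float32)
    img = buf[:, :w]                    # logical image with an oversized row pitch

    col_sums = img.sum(axis=0)          # vertical pass; rows now offset by 64 bytes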


Hi! I invented replayable event graphs. I'm writing a paper at the moment about it, which hopefully should be out in a month or so. Send me a private email and I can mail you the current draft if you like.

> If you remove these ops from history, does that remove the ability to time travel

Yes it does. You also need the ops from history to be able to merge changes. You can only merge changes so long as you have the operations going back to the point at which the fork happened.

> is it possible to put the removed historical/tombstone ops into a "cold storage" that's optional and only loaded for time-travel use?

Absolutely. And this is very practically useful. For example, you could have a web page which loads the current state of a document (just a string. Unlike CRDTs, it needs no additional metadata!). Then if some merge happens while you have the document open, the browser could just fetch the operations from the server back as far as it needs to be able to merge. But in normal operation, none of the historical operations need to be loaded at all.

All this said, with text documents the overhead of just keeping the historical operations is pretty tiny anyway. In my testing using diamond types (same algorithm, different library), storing the entire set of historical operations usually increases the file size by less than 50% compared to just storing the final text string. It's much more efficient on disk than git, and more efficient than other CRDTs like automerge and Yjs. So I think most of the time it's easier to just keep the history around and not worry about the complexity.


The best advice I can give you is to use bigserial for B-tree friendly primary keys and consider a string-encoded UUID as one of your external record locator options. Consider other simple options like PNR-style (airline booking) locators first, especially if nontechnical users will quote them. It may even be OK if they’re reused every few years. Do not mix PK types within the schema for a service or application, especially a line-of-business application. Use UUIDv7 only as an identifier for data that is inherently timecoded, otherwise it leaks information (even if timeshifted). Do not use hashids - they have no cryptographic qualities and are less friendly to everyday humans than the integers they represent; you may as well just use the sequence ID. As for the encoding, do not use base64 or other hyphenated alphabets, nor any identifier scheme that can produce a leading ‘0’ (zero) or ‘+’ (plus) when encoded (for the day your stuff is pasted via Excel).

Generally, the principles of separation of concerns and mechanical sympathy should be top of mind when designing a lasting and purposeful database schema.

Finally, since folks often say “I like Stripe’s typed random IDs” in these kinds of threads: Stripe are lying when they say their IDs are random. They have some random parts, but when analyzed in sets, a large chunk of the binary layout is clearly metadata, including embedded timestamps, shard and reference keys, and versioning, in varying combinations depending on the service. I estimate they typically have 48-64 bits of randomness. That’s still plenty for most systems; you can do the same. Personally I am very fond of base58-encoded AES-encrypted bigserial+HMAC locators with a leading type prefix and a trailing metadata digit, and you can in a pinch even do this inside the database with plv8.
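
Since that last recipe is dense, here is a minimal sketch of the kind of thing I mean (the key handling, prefix, and check-digit construction are all illustrative choices, not a standard; requires the `cryptography` package). Encrypting the bigserial as a single AES block keeps the mapping deterministic and reversible server-side:

    import hmac, hashlib
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"
    KEY = bytes(32)  # demo key only; load a real secret in practice

    def b58(data: bytes) -> str:
        n, out = int.from_bytes(data, "big"), ""
        while n:
            n, r = divmod(n, 58)
            out = ALPHABET[r] + out
        return out or ALPHABET[0]

    def locator(pk: int, prefix: str = "ord") -> str:
        block = pk.to_bytes(16, "big")          # bigserial fits in one AES block
        enc = Cipher(algorithms.AES(KEY), modes.ECB()).encryptor()
        token = enc.update(block) + enc.finalize()
        digit = hmac.new(KEY, token, hashlib.sha256).digest()[0] % 10
        return f"{prefix}_{b58(token)}{digit}"  # type prefix + trailing check digit

    print(locator(42))

(ECB is tolerable here only because exactly one block is ever encrypted per ID.)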


To this day I maintain that a large part of IPv4 space wastage is due to the HTTP WG's longtime avoidance of adopting SRV or SRV-like DNS records, or even the DNS itself, as normative for HTTP/HTTPS. Instead, the HTTP RFCs were allowed to just vaguely suggest that DNS might be one way to resolve the IP address of origin servers, whilst in practice squatting on the A (address) record like they owned it (and worse, all the apex records). This inspired a vast chorus of LIRs applying for /19 allocations over the years "for SSL hosting", and they kept doing so long past the introduction of SNI (RFC 3546). I saw this behaviour firsthand as a European LIR operator with friends at RIPE. Is it cracked down on now? Yes. Are there whole swathes of IPv4 space that remain unassigned or entirely unannounced? Yes. Does every large-scale DNS hosting service have some hackish way to work around the prohibition of CNAME records at the zone apex? They sure do, and HTTP is why.

Paul Vixie saw it coming: the very first example in the original SRV proposal (RFC 2052, 1996) is the resolution of HTTP. Alas, that example was omitted from later editions. The new SVCB/HTTPS RRs (RFC 9460, 2023) are literally decades overdue.


> people who didn’t understand networking

A couple of decades ago I witnessed a classic demonstration of Weinberg's Corollary [1] when a spanning tree misconfiguration on a freshly-installed BlackDiamond disabled LINX for hours, throwing half of Britain's ISP peering into chaos. The switch in question was evidently cursed: it'd been dropped on my foot earlier that week, thus disabling me for hours, and everyone else involved in that project was dead within two years.

__

[1] "an expert is a person who avoids the small errors while sweeping on to the grand fallacy"


There is a tale - perhaps apocryphal - handed down between generations of AWS staff, of a customer that was all-in on spot instances, until one day the price and availability of their preferred instances took an unfortunate turn, which is to say, all their stuff went away, including most dramatically the customer data that was on the instance storages, and including the replicas that had been mistakenly presumed a backstop against instance loss, and sadly - but not surprisingly - this was pretty much terminal for their startup.

    On Tue, 10 Oct 2023 07:55 +1100 jameshart wrote:
    > Are there any pre-eternal-September warriors still sticking
    > Emily-Post-like to their 1992-era netiquette standards

    Yes.

    --
    inopinatus

    y = ->(f) {
      ->(x) { x.(x) }.(
      ->(x) { f.(->(v) { x.(x).(v) }) } )
    }

Here's a thought: HTML does not separate structure, or content, from presentation. I'm not sure where the myth of the web's separation of concerns first arose - possibly from some architecture astronaut who'd never written an element in their life, perhaps from someone who really knew better but needed to slip something past the mediocrity police - but irrespective of what any non-normative reference may say, HTML does in practice blend structure, semantics, content, and presentation. CSS and JS substantially augment the presentation and semantics, but fighting against the grain of what HTML fundamentally embodies, instead of leaning on the framework, is a prime recipe for technical debt, not to mention opening the door for an awful lot of NIH.

Tailwind responds to that with an organising principle of styles refactored at the abstraction level of page components. It is not supplying an off-the-shelf design system for front-end novices à la Bootstrap/Material Design. That is an excellent fit for a template-driven world, which is to say, most applications of scale. It is also an excellent fit for anyone (or any HTML generation language) that thinks of data as functional and anonymous. Naming things is hard, so the fewer things we're forced to name the better. My CSS has even improved from using Tailwind, not just thanks to the excellent docs but because it's been easy to experiment with alternative compositions.
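
A made-up fragment to illustrate the organising principle: the "component" is the template partial itself, so nothing in it needs a bespoke class name.

    <!-- button.html partial: utility classes compose the style in place -->
    <button class="rounded-lg bg-indigo-600 px-4 py-2 text-white hover:bg-indigo-500">
      Save changes
    </button>

Reuse comes from including the partial, not from inventing and maintaining a .btn-primary class.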

To your specific inquiry - I've been maintaining several Tailwind frontends for a few years now, and it's been more reusable, more composable, and more maintainable than any of the hand-crafted CSS I might've written in the past, no matter how polished and proud I was of it.


> One way to enhance the usability of unique identifiers is by making them easily copyable.

No matter what your identifiers look like, if you want them to be easily copyable you should add `user-select: all` to the element containing them.

If you do this, all of the text will be selected automatically when you click on the element.

https://developer.mozilla.org/en-US/docs/Web/CSS/user-select
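
A minimal example (the class name and ID format here are invented):

    <style>
      .record-id { user-select: all; }  /* one click selects the whole ID */
    </style>
    <p>Your reference: <code class="record-id">INV-4G8TZ9Q2</code></p>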


I have twice had the experience of meeting a professor who had done their PhD in the late 1980s/early 1990s on Dataflow and was lamenting having wasted their time since it turned out to be a dead end. I pointed out to them that the computer they are currently using has a hidden Dataflow machine inside of it - it just has a front end converting the x86 code stored in memory into the dataflow graph (reorder buffers, reservation stations and so on) before actually executing it.

Perhaps one day the "all advanced processors are really RISC inside" meme will get replaced with "all advanced processors are really dataflow inside".

Of course, there is the option to make the dataflow visible to the outside with the EDGE (explicit data graph execution) architecture (Microsoft even showed Windows running on one).


I'll throw out a VC's perspective on liquidation prefs:

1) I think 1x is very fair and meant to protect investors from bad company behavior. If you didn't have 1x preference, this would be an easy way for an unscrupulous founder to cash out: raise $X for 20% of the company, no liquidation preference. The next day, sell the company and its assets ($X in cash) for, say, 0.9x. If there's no liquidation preference, the VC gets back 0.18x and the founder gets 0.72x, even though all that the founder did was sell the VC's cash at a discount the day after getting it. (There's a toy payout calculation at the end of this comment.)

2) >1x liquidation preferences are sometimes the founder's fault and sometimes the VC's fault. Sometimes it's an investor exploiting a position of leverage just to be more extractive. That sucks. But other times it's a founder intentionally exchanging worse terms for a higher/vanity valuation.

For example, let's say a founder raised a round at $500m, then the company didn't do as well as hoped, and now realistically the company is worth $250m. The founder wants to raise more to try to regain momentum.

A VC comes and says "ok, company is worth $250m, how about I put in $50m at a $250m valuation?"

Founder says "you know, I really don't want a down round. I think it would hurt morale, upset previous investors, be bad press, etc. What would it take for you to invest at a $500m+ valuation like last time?"

VC thinks and says "ok, how about $500m valuation, 3x liquidation preference?"

The founder can now pick between a $250m valuation with a 1x pref, or a $500m valuation with a 3x pref. Many will pick the former, but many others will pick the latter.

It's a rational VC offer -- if the company is worth $250m but wants to raise at $500m, then a liquidation preference can bridge that gap. The solution is kind of elegant, IMHO. But it can also lead to situations like the one described in the article above where a company has a good exit that gets swallowed up by the liquidation preference.

3) Generally, both sides have good lawyers (especially at later stages of funding), so the liquidation preference decision is likely made knowingly.

Related to #3, if you're fundraising, please work with a good lawyer. There are a few firms that handle most tech startup financings, and they will have a much better understanding of terms and term benchmarks than everyone else. Gunderson, Goodwin, Cooley, Wilson Sonsini, and Latham Watkins are the firms I tend to see over and over.
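
To put numbers on points 1) and 2), here's a toy calculation (the function and parameter names are mine; it assumes a plain non-participating preference, where the investor takes the larger of the preference or their pro-rata share):

    def payouts(exit_value, invested, vc_ownership, pref_multiple=1.0):
        preference = min(exit_value, invested * pref_multiple)
        vc = max(preference, exit_value * vc_ownership)
        return vc, exit_value - vc

    # 1) Raise X = 1.0 for 20%, sell the company for 0.9 the next day:
    print(payouts(0.9, 1.0, 0.20))                   # with 1x pref: (0.9, 0.0)
    print(payouts(0.9, 1.0, 0.20, pref_multiple=0))  # no pref: (0.18, 0.72)

    # 2) $50m at a $500m valuation (10%) with a 3x pref, then a $250m exit:
    print(payouts(250, 50, 0.10, pref_multiple=3))   # VC takes 150 of the 250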


> Of course, designers may not like the way this looks and we want to create a great looking custom switch.

The general UI rule is to use a switch when toggling has an immediate effect (similar to pressing a light switch) and a checkbox when there's a submission step before it takes effect (similar to ticking a paper form and then mailing it). See:

https://www.nngroup.com/articles/toggle-switch-guidelines/

The difference wasn't intuitive to me before, but it made sense after learning this rule. A lot of UIs get it wrong (e.g. switches in forms, checkboxes for settings that take effect immediately), though it only bugs me now that I know which one to use. And it's not merely a cosmetic distinction.


Andreas Fredriksson demonstrates exactly that in this video: https://vimeo.com/644068002

PHYSICIST: I offer you the following gamble. I toss a fair coin, and if it comes up heads I’ll add 50% to your current wealth; if it c—

ROSENCRANTZ: (interrupting) Do it.


This is an excellent article, and I enjoyed skimming it, as I am only faintly familiar with Blessed John Duns Scotus and his work. I've spent more time around the Dominicans, and so I adopted a Thomistic outlook and studied the Angelic Doctor more than the others.

This controversy that Blessed Duns participated in is only the tip of the iceberg. The Dominicans and Franciscans are contemporaries, both founded in the early 13th Century, and the Jesuits, though they came around 300 years later, joined in a very vigorous, sometimes brotherly, rivalry among the three.

The three orders often had debates regarding the doctrine of the Immaculate Conception, which was not defined as dogma until the 19th Century, and which is basically rejected by the Eastern Churches, unless you can reformulate it carefully in Greek.

They also had debates about missionary activity, and the "Chinese Rites Controversy" was a huge factor in how East Asia would be evangelized, how they would worship, and whether indigenous peoples worldwide are able to contribute their faith practices to Roman Catholic liturgies at this late stage of development.

There's an old joke they have, which goes: "The Dominicans were founded to combat the Gnostic heresy of the Albigensians. The Jesuits were founded to combat the heresy of Protestantism. Who was more successful? Have you met any Albigensians lately?"


“75% chance of rain”:

1. It will definitely rain, on 75% of the relevant area.

2. It will definitely rain, for 75% of the relevant time period.

3. It will rain with an intensity of 75% of the maximum our instruments can measure.

4. Three out of four meteorologists think it will rain.

5. It will rain on 75% of the population.

6. It will rain on everyone, but 75% of the population forgot their umbrella.

7. It will rai

8. 25% chance of dry.

9. 25% chance of snow.

10. When you become trapped in a Groundhog Day-type loop and are forced to repeat today three more times, then a subsequent analysis will show that it rained on exactly three of the four total days. Probably.


Entrepreneurship is like one of those carnival games where you throw darts or something.

Middle class kids can afford one throw. Most miss. A few hit the target and get a small prize. A very few hit the center bullseye and get a bigger prize. Rags to riches! The American Dream lives on.

Rich kids can afford many throws. If they want to, they can try over and over and over again until they hit something and feel good about themselves. Some keep going until they hit the center bullseye, then they give speeches or write blog posts about "meritocracy" and the salutary effects of hard work.

Poor kids aren't visiting the carnival. They're the ones working it.


Reminds me of this good old rant from Peter Welch, Programming Sucks

    ...

    Every programmer occasionally, when nobody’s home, turns off the lights, pours a glass of scotch, puts on some light German electronica, and opens up a file on their computer. It’s a different file for every programmer. Sometimes they wrote it, sometimes they found it and knew they had to save it. They read over the lines, and weep at their beauty, then the tears turn bitter as they remember the rest of the files and the inevitable collapse of all that is good and true in the world.

    This file is Good Code. It has sensible and consistent names for functions and variables. It’s concise. It doesn’t do anything obviously stupid. It has never had to live in the wild, or answer to a sales team. It does exactly one, mundane, specific thing, and it does it well. It was written by a single person, and never touched by another. It reads like poetry written by someone over thirty.

    ... 
[ https://www.stilldrinking.org/programming-sucks ]

Slightly offtopic, but anyone with a dark sense of humour would do well to check out Chris Morris's stuff - I get the feeling most younger Brits haven't heard of it. The Day Today and Brass Eye, both still funny, are wonderful time capsules satirising Britain as it was thirty years ago.

But IMO his finest work was Blue Jam - the radio comedy not the TV incarnation, hour-long episodes of low-key music and surreal sketches. Absolutely brilliant even today. Archive.org has a copy at https://archive.org/details/chrismorris_bluejam. Best enjoyed late at night.

Trigger warning: basically everything. The BBC would never get away with broadcasting it now.


Very interesting essay. Reminds me of how Donald Knuth describes his job:

> Email is a wonderful thing for people whose role in life is to be on top of things. But not for me; my role is to be on the bottom of things. What I do takes long hours of studying and uninterruptible concentration. I try to learn certain areas of computer science exhaustively; then I try to digest that knowledge into a form that is accessible to people who don't have time for such study.

https://www-cs-faculty.stanford.edu/~knuth/email.html

It's an aspiration for how I want my career to go, though I haven't been very effective at moving in that direction.


:focus-within is a favourite of mine since it is one of the few CSS selectors where child element state is significant. Thus it is very nice for drop-downs. It also works with CSS transitions, so my pure CSS drop-downs have a 150ms easing in & out (tip: transition the visibility property, since display:none can't be delayed).
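
A minimal sketch of such a drop-down (the class names are mine), using opacity for the 150ms easing and visibility so the hidden menu is also unclickable, since display:none can't be delayed:

    <style>
      .menu .items {
        visibility: hidden;
        opacity: 0;
        transition: opacity 150ms ease, visibility 150ms;
      }
      .menu:focus-within .items {
        visibility: visible;
        opacity: 1;
      }
    </style>
    <div class="menu">
      <button>Open me</button>
      <ul class="items"><li><a href="#a">Item A</a></li></ul>
    </div>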

There is another, however. An element whose state depends on that of other elements, and it's even more general. A form element moves between :valid and :invalid based on its inputs, allowing us to use form:{in}valid with any of the descendant, child, sibling or adjacent combinators. A hidden required checkbox is sufficient. Radio inputs work too (tip: you can clear radios back to :invalid with an input type="reset").

The really, truly monstrous part of this, however, is that the associated input doesn't even have to be a child of the form. Using the form attribute (i.e., <input form="whatever">) means they can be anywhere on the page, and they can themselves be hidden and targeted from somewhere else on the page again, with a <label> element.

I once documented the horrifying potential of this in a company wiki, along with a lovely modal slideover that was wrapped in a <form> element and transitioned to visible based on form:valid via a hidden required radio button, and whose backdrop was a reset button, and this was rightly labelled NSFW and banned by popular acclaim from ever appearing in our HTML.
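
For the morbidly curious, a minimal reconstruction of that monstrosity (all names are mine): the form is :valid only while the hidden required radio is checked; the radio lives outside the form via the form attribute; a label anywhere on the page is the trigger; and the backdrop is a reset button that unchecks it all again.

    <style>
      #modal-state .backdrop, #modal-state .modal { visibility: hidden; }
      #modal-state:valid .backdrop, #modal-state:valid .modal { visibility: visible; }
    </style>
    <form id="modal-state">
      <input type="reset" class="backdrop" value="">
      <div class="modal">Hello, I am a pure-CSS modal. Click outside to close.</div>
    </form>
    <input type="radio" id="open-modal" name="open" form="modal-state" required hidden>
    <label for="open-modal">Open the modal</label>

(Positioning the backdrop and modal is left as an exercise; the point is the state machine.)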


BoiledCabbage: You are on the right track!

[Dedekind 1888] started out on the correct path by defining natural numbers up to a unique isomorphism. Unfortunately, there was a long detour through 1st-order logic :-(

Powerful foundations are now urgently required to prevent successful cyberattacks.

See the following: https://www.youtube.com/watch?v=AJP1VL7shiI


Remember that Apple is a global empire. If you have met any of their people, you know they learn a kind of cultural language that diverts and uncenters the self, so that no one person other than Cook can be seen to represent the brand. Understanding their brand language starts with the idea that Apple is an ideal of perfection, and beneath it are the ideals of harmony and flow. As an ideal, Apple does not have defects. Only things, and maybe the past, can have defects.

These other things like bugs and vulnerabilities are external to its perfection, and so they originate elsewhere, maybe in the past, maybe as something random, but certainly something devoid of meaning when compared to the ideal and its experience. Individual products are not Apple, because they are not perfection, but they align to it, and perfection is what makes it always seem just out of reach. The Apple experience exists above and over the material bounds of memory handling and input validation, so these things are external, and in the brand language, they are only ever allowed to exist in the past. By way of example, this is why their security advisories can come off as weird to analysts who confront and solve things.

The best writers speak the language of memory, and a trillion dollar company probably has more than a few of them. Consider that 80% or more of what you believe about reality comes through one of their products, and you are in effect entranced by them. Their responsibility is to sustain this experience of hypnotic comfort and perfection. The advisory language is reduced until there is nothing left to remove, then calibrated to cause nothing more than a small ripple in your bliss.


I always use antirez's (the Redis creator's) `sds` and advertise it whenever I get the chance. Thanks to whoever recommended it on HN some years ago. It's a joy to use.

https://github.com/antirez/sds

The trick is that the size is hidden before the address of the buffer. ("Learn this one simple trick that will change your life forever.")

From the README:

```

Advantage #1: you can pass SDS strings to functions designed for C functions without accessing a struct member or calling a function

Advantage #2: accessing individual chars is straightforward.

Advantage #3: single allocation has better cache locality. Usually when you access a string created by a string library using a structure, you have two different allocations for the structure representing the string, and the actual buffer holding the string. Over the time the buffer is reallocated, and it is likely that it ends in a totally different part of memory compared to the structure itself. Since modern programs performances are often dominated by cache misses, SDS may perform better in many workloads.

```
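
To make the layout concrete, here's a tiny illustration of the header-before-buffer trick (my own simplified version, not the real sds API, which also tracks capacity and uses several header sizes):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct hdr { size_t len; };

    char *mystr_new(const char *init) {
        size_t len = strlen(init);
        struct hdr *h = malloc(sizeof *h + len + 1);
        h->len = len;
        char *s = (char *)(h + 1);   /* buffer starts right after the header */
        memcpy(s, init, len + 1);
        return s;                    /* callers see an ordinary C string */
    }

    size_t mystr_len(const char *s) {
        return ((const struct hdr *)s - 1)->len;  /* step back to the header */
    }

    int main(void) {
        char *s = mystr_new("hello");
        printf("%s has length %zu\n", s, mystr_len(s));  /* plain printf works */
        free((struct hdr *)s - 1);   /* must free from the header, not the buffer */
        return 0;
    }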


Very often, in engineering or design or even art, the best feature is actually a constraint.

I used to do research at the Shedd Aquarium. After watching a particularly clever octopus defeat every attempt to prevent him leaving his enclosure at night, I am absolutely confident that, just like certain aquatic mammals such as orca whales, we only devalue their intelligence because they were unfortunate enough not to be in a situation where they could develop significant tool use and the cultural artifacts that such tool use and creation enable.

In previous threads on this topic I don't think I've explained deeply why I find their defeat of our team so impressive. This research group contained a multidisciplinary set of scientists. I was the only member that did not have a PhD; almost every team member had completed at least one significant postdoc as well (think Stanford, UChicago, Caltech, prestigious national labs, etc). We had applied science and engineering talents in addition to pure science, so this wasn't a case of not being able to develop realistic escape prevention mechanisms due to the team being too theoretical. The longest we were able to stop this clever guy from escaping with one of our implementations was 4 days. He usually made us look like idiots the very evening after we installed our new prevention device.

Not only this, the octopus made very clear that he had an extremely well developed memory. He clearly recalled his favorite scientist, who hadn't visited in a few years: as soon as said scientist entered the room, the octopus ignored the rest of us and followed him the entire time he was in the room. He also became what I can only describe as depressed when that individual departed once again; this was a period when he stopped eating as much, moved much more lethargically, and his escape attempts were half-hearted - this was the period when he finally took more than one day to break our attempts at keeping him in.

Further, he absolutely had a sense of humour. We had put some plastic rings in his tank (while setting up some sort of exam whose purpose I can no longer recall), and after seeing us crack up laughing at him wearing them as jewelry on his tentacles, he would do so every time we entered the room. Otherwise, he ignored the rings entirely when we were not present.

I did not eat octopus prior to this, but after this experience I became a firm advocate of everyone avoiding octopus and related creatures. Not only do I think they're immensely intelligent animals, I am firmly convinced that this particular specimen was smarter than a number of humans I have met.


it's tuples all the way down

Related: here [1] is an excellent hour-long talk by Kerry Davis of Valve about how much thought they had to put into doors in VR while working on Half-Life: Alyx.

[1] https://www.youtube.com/watch?v=9kzu2Y33yKM

(edit: it looks like DigiPen also posted the talk themselves; their upload doesn't have the ~10-15 minute gap the VNN one has, but it seems not to have the slides. Take your pick! https://www.youtube.com/watch?v=8OWjxGL8PDM0)

