westbywest's comments

The "Soliton Wave" from TNG was very far down on the list of things I would have expected to see in real life.


Good thing the 304L (etc.) comprising Starship's shell won't be subjected to repeated heating cycles. https://app.aws.org/forum/topic_show.pl?tid=7585


Although the end of the Weimar Republic was essentially an electoral choice, significant chunks of the electorate by then had been skewed, divided, disenfranchised, or even displaced, so it wouldn't be accurate to call the elections fully representative. And yes, similar efforts are underway in the US too.


The increasingly toxic politics (arguably by design) emerging around school board elections is a very recent addition to the disincentives. I live in a Midwest city where the suburban school districts that previously wooed young professionals away from the urban core now frequently feature nasty culture-war fights, book banning, etc. It's notable for impacting generally wealthier households that could bear the expense of relocating to suburban municipalities with a higher cost of living and taxes to access better schools.


I wonder why all of the troublemakers aren't ejected from the system. You would think any "komsomol" types would be flushed out by a democratic system, whereas they would be retained in an assignment-based system. Are there parents who genuinely want the culture-war / book-banning types deciding how their children study?


Increasingly, the troublemakers are gaining overwhelming support from the electorate. We used to rely on the fact that the belligerent, crazy, culture-war / book-burning folks were a tiny minority, kept powerless by the democratic system. As their numbers grow, the democratic system starts to work for them. Many places (at least in the USA) have crossed the Rubicon, demographics-wise, and the inmates are now running the asylum.


An underlying motivation for this new power-mongering in school district politics is that education is typically the 2nd-largest budget item in most US states. The "troublemakers," despite their theatrics, also tend as a category to overwhelmingly support schemes that divert education funds to private entities, e.g. vouchers and tax-credit scholarships.


So instead of helping the vulnerable groups get an education, they will try to divert as much funding as possible from that goal?

I also thought that getting an education outside of the one-size-fits-all public school was a conservative thing (religious schools, private schools) - especially if enabled by a voucher. Is that what the politics are about?


The conflict of self-interest and common good becomes especially perverse when power utilities themselves start mining coin at their own facilities (for additional profit). https://www.datacenterdynamics.com/en/news/ameren-deploys-bi...


At a previous job I was checking out MicroPython due to its support on LEON4 rad-hardened CPUs like the GR740. It was appealing as a possible design path from proof-of-concept implementations with desktop Python/numpy (etc.) to space-certified platforms, ideally reducing the quantity of code to reimplement in C. https://essr.esa.int/project/micropython-for-leon-pre-qualif...


60GHz wifi products are pretty common now. With the notable quirk of having difficulty passing through paper.


Paranoiacs will finally be able to wear regular hats!


Highlighting Vienna's district heating system, which does this, partly because the facility in Spittelau had artist Friedensreich Hundertwasser design it to look like a psychedelic castle. https://hundertwasser.com/en/architecture/910_arch73_fernwae...


Probably the flexibility of that router having 3 MiniPCIe slots for the wireless card(s), from the standpoint of ease in firmware development. The cheaper routers with embedded radio chipsets range from very good to crappy in their support for open-source firmware. But few people are going to be interested in a US$350 product when an equivalent is available for a fraction of that.


I think the author phrased it somewhat differently, but my understanding is that grep's high throughput also comes from its use of what we (a computer engineering grad research group) referred to as a variable state machine. A colleague was researching an analogous implementation on an FPGA, for gigabit line-speed throughput. The preferred computer science term is apparently Thompson's NFA. https://en.wikipedia.org/wiki/Thompson%27s_construction#The_...
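Roughly, the simulation just keeps a set of live states and advances all of them per input character. A toy sketch in Python (the NFA here is hand-built for the pattern a(b|c)*d, since I'm skipping the pattern-to-NFA compiler entirely):

    # Hand-built Thompson-style NFA for a(b|c)*d (illustration only).
    # TRANS: transitions that consume a character; EPSILON: free moves.
    TRANS = {(0, 'a'): {1}, (3, 'b'): {5}, (4, 'c'): {5}, (6, 'd'): {7}}
    EPSILON = {1: {2, 6}, 2: {3, 4}, 5: {2, 6}}
    ACCEPT = {7}

    def eps_closure(states):
        # Follow epsilon edges until no new states are reachable.
        stack, closure = list(states), set(states)
        while stack:
            for t in EPSILON.get(stack.pop(), ()):
                if t not in closure:
                    closure.add(t)
                    stack.append(t)
        return closure

    def matches(text):
        current = eps_closure({0})
        for ch in text:
            # Advance every live state in lockstep: O(m) work per character.
            current = eps_closure({t for s in current for t in TRANS.get((s, ch), ())})
            if not current:
                return False
        return bool(current & ACCEPT)

    print(matches("abcbd"), matches("ad"), matches("abc"))  # True True False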


Thompson's NFA construction, when used directly to search via an NFA simulation, is dog slow. It does give you an O(nm) search, but in practice it's slow. AIUI, GNU grep uses a lazy DFA, which uses Thompson's NFA construction to build a DFA at search time. This does indeed lead to pretty good performance for the regex engine. But GNU grep's speed largely comes from optimizing the common case: extracting literals from your pattern, identifying candidate matching lines, and then checking with the full regex engine to confirm (or reject) the match.
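A rough sketch of what that common case looks like (not GNU grep's actual code; the pattern and the extracted literal here are made up):

    import re

    pattern = re.compile(r"ERROR \d+: (timeout|refused)")
    required_literal = "ERROR "  # every possible match must contain this

    def grep(lines):
        for line in lines:
            # Cheap filter: a memchr/substring scan rejects most lines.
            if required_literal not in line:
                continue
            # Expensive confirm: run the full regex only on candidates.
            if pattern.search(line):
                yield line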


I suspect Thompson's NFA is not inherently dog slow (Glushkov can be done reasonably fast for decent-sized NFAs). The fact is that most Thompson-lineage engines opted for the 'lazy DFA' approach and optimized that (which is effective until it isn't). I imagine a more aggressive 'native' Thompson NFA is possible. A nice benefit of that is not having to write to your bytecode - there's a good deal of systems-level complexity in RE2 that is a consequence of the 'lazy DFA construction' decision.

That being said, matching literals is always going to be faster, especially if you decompose the pattern to get more use out of your literal matcher - the downside of filtration is that if the literal is always present, you are just doing strictly more work. At least with decomposition you've taken the literal out of the picture. See https://branchfree.org/2019/02/28/paper-hyperscan-a-fast-mul... for those who don't know what I'm talking about (I know you've read it).
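A toy illustration of the difference (made-up pattern; Hyperscan's real decomposition is far more involved): with plain filtration the full regex, literal included, still runs on every candidate, whereas decomposition anchors the remaining factor at the literal hit so the literal is matched exactly once.

    import re

    SUFFIX = re.compile(r"\d+bar")  # the factor to the right of the literal "foo"

    def find_first(text):
        start = 0
        while True:
            i = text.find("foo", start)    # fast literal scan
            if i == -1:
                return None
            m = SUFFIX.match(text, i + 3)  # verify only the remainder, anchored at the hit
            if m:
                return text[i:m.end()]
            start = i + 1

    print(find_first("xx foo123bar zz"))  # foo123bar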

Am flirting with doing another regex engine that gets some of the benefit of decomposition and literal matching without taking on the nosebleed complexity of Hyperscan...


Do you know of any fast Thompson NFA simulation implementation? I don't think I've seen one outside of a JIT.

Is there a fast Glushkov implementation that isn't bit-parallel? I've never been able to figure out how to use bit-parallel approaches with large Unicode classes. Just using a single Unicode-aware \w puts it into the weeds pretty quickly. That's where the lazy DFA shines, because it doesn't need to build the full DFA for \w (which is quite large, even after the standard DFA compression tricks).
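For concreteness, this is the flavor of bit-parallel scan I mean, i.e. Shift-And over plain bytes: one mask per possible input byte, one bit per pattern position. A Unicode-aware \w needs the equivalent of that mask table over a vastly larger symbol set (or a much bigger byte-level automaton), which is where it goes into the weeds.

    def shift_and_search(pattern: bytes, text: bytes):
        m = len(pattern)
        # mask[b] has bit i set iff pattern[i] == b (this is the table
        # that blows up once "b" is no longer one of 256 byte values).
        mask = [0] * 256
        for i, b in enumerate(pattern):
            mask[b] |= 1 << i
        accept = 1 << (m - 1)
        state = 0
        for pos, b in enumerate(text):
            # Advance all partial matches one position and start a new one.
            state = ((state << 1) | 1) & mask[b]
            if state & accept:
                return pos - m + 1  # start offset of the first match
        return -1

    print(shift_and_search(b"needle", b"a haystack with a needle in it"))  # 18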


Unicode is a PITA. In Hyperscan, it's not pretty what gets generated for a bare \w in UCP mode if you force it into an NFA (it's rather more tractable as a DFA, even if you aren't lazily generating, although of course betting the farm that you can always 'busily' generate a DFA isn't great).

I've always thought that a better job of doing NFAs (Glushkov or otherwise) while staying bit-parallel would be done by having character reachability on codepoints, not bytes, generally remapping down to 'which codepoints make an actual difference'. This sounds ugly/terrifying, but the nice thing is that remapping a long stream of codepoints could be done in parallel (as it's not hard to find boundaries) and with SIMD. Step-by-step NFA or DFA work is more ponderous, as every state depends on previous states.
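Very roughly, something like this, with the caveat that the class boundaries here are invented; in practice they'd be derived from the character classes that actually appear in the pattern set:

    # Map each codepoint to a small equivalence-class id; the NFA/DFA then
    # runs over these ids instead of raw codepoints or UTF-8 bytes.
    def class_id(cp: int) -> int:
        if 0x30 <= cp <= 0x39:                        # digits in the pattern
            return 0
        if 0x41 <= cp <= 0x5A or 0x61 <= cp <= 0x7A:  # ASCII letters in the pattern
            return 1
        return 2                                      # "doesn't make a difference"

    def remap(text: str):
        # Each codepoint maps independently, so this pass is trivially
        # parallel / SIMD-friendly, unlike the state-by-state NFA steps.
        return [class_id(ord(c)) for c in text]

    print(remap("ab1\u00fc"))  # [1, 1, 0, 2]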


Yeah, I've looked at Glushkov based primarily on your comments about it, but Unicode is always where I get stuck. In my regex engine, Unicode is enabled by default and \w is fairly common, so it needs to be handled well.

And of course, one doesn't need to bet the farm on a lazy DFA if you have one, although it is quite robust in a large number of practical scenarios. (I think RE2 does bet the farm, to be fair.)


Unicode + UCP is a perfectly principled thing, but it wasn't a design point that made any sense for Hyperscan as a default. The bulk of our customers were not interested in turning 1 state for ASCII \w into 600 states for UCP \w unless it was free.

I think both Glushkov and Thompson can be done fast, but I agree that they are both going to be Really Big for UCP stuff. Idle discussion among the ex-Hyperscan folks generally leans towards 'NFA over codepoints' being the right way of doing things.

Occam's razor suggests if you do only 1 thing in a regex system (i.e. designing for simplicity/elegance, which would be an interesting change after Hyperscan) it must be NFA, as not all patterns determinize. If you are OK with a lazy DFA system that can be made to create a new state per byte of input (in the worst case) then I guess you can do that too.

I am not sure how to solve the problem of "NFA over codepoints", btw. Having no more than 256 distinct characters was easy, but even with remapping, the prospect of having to handle arbitrary Unicode is... unnerving.


Yeah, my Thompson NFA uses codepoints for those reasons. But not in a particularly smart way; mostly just to reduce space usage. It is indeed an annoying problem to deal with!


... and no, I don't know of any fast Thompson NFA simulations, but I don't see why they shouldn't be possible. They have a very simple "next" function, modulo the awfulness of getting past epsilons, but that seems to be roughly parallel to the awfulness of computing arbitrary 'next' functions in Glushkov-land. I'm not aware of anyone that's actually tried.

