Note that if you do this, you will opt out of any security updates not just for jq but also for its regular-expression-parsing dependency, Oniguruma. For example, there was a security update for Oniguruma previously; if this sort of thing happens again, you'd be vulnerable, and jq is often used to parse untrusted JSON.
Indeed, there are many methods to have a custom build and still get security updates, including at least one method that is native to Ubuntu and doesn’t need any external tooling. However my warning refers to the method presented in the article, where this isn’t the case.
But isn't there still the kernel of an idea here for a package management system that intelligently decides to build based on platform? Seems like a lot of performance to leave on the table.
Rebuilding from scratch also takes longer than installing a prebuilt package. So while it might be worth it for a heavily used application, in general I doubt it.
Also, I think in earlier days the argument for building yourself was that you could optimize the application for the specific capabilities of your system, like the supported SIMD instruction set. Nowadays I think that is much less of a factor. Instead, it would probably be better to do things like that at the package or distribution level (i.e. have the distribution prebuild one binary package per set of CPU capabilities).
I'm curious how applicable these are, in general? Feels like pointing out that using interior doors in your house misses out on the security afforded from a vault door. Not wrong, but there is also a reason every door in a bank is not a vault door.
That is, I don't want to devalue the CVE system; but it is also undeniable that there are major differences in impact between findings?
Sure, but jq is very much a "front door" in your analogy. You'd have to look at each individual CVE to assess the risk for your specific case, but for jq, claimed security vulnerabilities are worth paying attention to.
This is certainly true. Also, by replacing the allocator and changing compiler flags, you're possibly immunizing yourself from attacks that rely on some specific memory layout.
By hardwiring the allocator you may end up with binaries that load two different allocators. It is great fun to debug a program that uses jemalloc's free() to release memory allocated by glibc's malloc(). Unless you know what you are doing, it is better to leave it as is.
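For anyone who hasn't been bitten by this yet, here's roughly what the failure mode looks like, as a sketch with a hypothetical library (call it liba) that was linked against jemalloc while the main program uses glibc:

    #include <stdlib.h>
    #include <string.h>

    /* Stand-in for a function exported by the hypothetical jemalloc-linked
     * library: the buffer it returns comes from jemalloc's heap, not glibc's.
     * (Sketched here with plain malloc so the file compiles on its own.) */
    char *liba_make_greeting(const char *name) {
        char *s = malloc(strlen(name) + 8);
        if (s) { strcpy(s, "hello "); strcat(s, name); }
        return s;
    }

    int main(void) {
        char *s = liba_make_greeting("world");
        /* ... use s ... */
        free(s);  /* if liba really used jemalloc, this glibc free() on a
                     jemalloc pointer corrupts the heap, and the crash usually
                     shows up much later, far away from this call */
        return 0;
    }

The usual ways out are for the library to export its own liba_free(), or to interpose a single allocator consistently across the whole process (e.g. jemalloc via LD_PRELOAD), which is the "know what you are doing" part.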
UBSAN is usually a debug build only thing. You can run it in production for some added safety, but it comes at a performance cost and theoretically, if you test all execution paths on a debug build and fix all complaints, there should be no benefit to running it in production.
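For the curious, here's a minimal sketch of the kind of thing UBSan catches, nothing jq-specific (the overflow depends on argc so the compiler can't just fold it away):

    /* Build with: gcc -O2 -g -fsanitize=undefined ubsan_demo.c */
    #include <limits.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        (void)argv;
        int x = INT_MAX;
        x += argc;   /* argc >= 1, so this signed addition overflows; UBSan
                        reports "signed integer overflow" at run time instead
                        of letting the program silently continue */
        printf("%d\n", x);
        return 0;
    }

If you do want the checks in production, Clang's -fsanitize-trap=undefined (GCC: -fsanitize-undefined-trap-on-error) turns each check into a trap instruction, which skips the runtime library and keeps the overhead down.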
I think it's time for the C/C++ communities to consider a mindset shift and pivot to having almost all protectors, canaries, sanitizers, and assertions (e.g. via _GLIBCXX_ASSERTIONS) on by default and recommended for use in release builds in production. The opposite (i.e., the current state of affairs) should be discouraged and begrudgingly accepted in only a select few cases.
https://www.youtube.com/watch?v=gG4BJ23BFBE is a presentation that best represents my view on the kind of mindset that's long overdue to become the new norm in our industry.
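As a concrete (if contrived) illustration of what that buys you in a release build, here's a small C sketch where a fortified strcpy turns a silent stack smash into a clean abort; these flags are already the default on several distros:

    /* Build with: gcc -O2 -D_FORTIFY_SOURCE=2 -fstack-protector-strong fortify_demo.c
     * (_FORTIFY_SOURCE only does anything with optimization enabled.) */
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv) {
        char buf[8];
        if (argc > 1) {
            strcpy(buf, argv[1]);  /* an oversized argument triggers
                                      "*** buffer overflow detected ***" and an
                                      abort instead of corrupting the stack */
            printf("%s\n", buf);
        }
        return 0;
    }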
ASLR is not security through obscurity though. It forces an attacker to get a pointer leak before doing almost anything (even arbitrary-read and arbitrary-write primitives are useless without a leak when ASLR is in place). As someone with a bit of experience in exploit dev, it makes a world of difference and is one of the most impactful hardening measures, next to maybe stack cookies and W^X.
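If anyone wants to see the effect directly, here's a tiny C sketch; run the binary a few times and, with ASLR on (and a PIE build for the code address), all three addresses move between runs:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int on_stack = 0;
        void *on_heap = malloc(16);
        /* Casting a function pointer to void* is technically a POSIX-ism, but
         * it's fine for a demo. The code address only changes across runs for
         * PIE binaries, which are the default on current mainstream distros. */
        printf("stack %p  heap %p  code %p\n",
               (void *)&on_stack, on_heap, (void *)main);
        free(on_heap);
        return 0;
    }

Which is exactly why an exploit needs a leak first: none of those values can be known ahead of time.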
I'm genuinely curious what was so undesirable about this sibling comment that it was removed:
"ASLR obscures the memory layout. That is security by obscurity by definition. People thought this was okay if the entropy was high enough, but then the ASLR⊕Cache attack was published and now its usefulness is questionable."
Usually when a comment is removed, it's pretty obvious why, but in this case I'm really not seeing it at all. I read up (briefly) on the mentioned attack and can confirm that the claims made in the above comment are at the very least plausible sounding. I checked other comments from that user and don't see any other recent ones that were removed, so it doesn't seem to be a user-specific thing.
I realize this is completely off-topic, but I'd really like to understand why it was removed. Perhaps it was removed by mistake?
Some people use the "flag" button as a "disagree" button or even a "fuck this guy" button. Unfortunately, constructive but unpopular comments get flagged to death on HN all the time.
I had thought that flagging was basically a request for a mod to have a look at something. But based on this case I now suspect that it's possible for a comment to be removed without a mod ever looking at it if enough people flag it.
My point was more that, at least in this case, it looks like a post was hidden without any moderator intervention.
If this is indeed what happened, it seems like a bad thing that it's even possible. Since many, perhaps most people probably don't have showdead enabled, it means that the 'flag' option is effectively a mega-downvote.
I assume people downvoted it because “ASLR obscures the memory layout. That is security by obscurity by definition” is just wrong (correct description here: https://news.ycombinator.com/item?id=43408039). It does say [flagged] too, though, so maybe that’s not the whole story…?
No, that other definition is the incorrect one. Security by obscurity does not require that the attacker is ignorant of the fact you're using it. Say I have an IPv6 network with no firewall, simply relying on the difficulty of scanning the address space. I think that people would agree that I'm using security by obscurity, even if the attacker somehow found out I was doing this. The correct definition is simply "using obscurity as a security defense mechanism", nothing more.
No, I would not agree that you would be using security by obscurity in that example. Not all security that happens to be weak or fragile and involves secret information somewhere is security by obscurity – it’s specifically the security measure that has to be secret. Of course, there’s not a hard line dividing secret information between categories like “key material” and “security measure”, but I would consider ASLR closer to the former side than the latter and it’s certainly not security by obscurity “by definition” (aside: the rampant misuse of that phrase is my pet peeve).
> The correct definition is simply "using obscurity as a security defense mechanism", nothing more.
This is just restating the term in more words without defining the core concept in context (“obscurity”).
I'm inclined to agree and would like to point out that if you take a hardline stance that any reliance on the attacker not knowing something makes it security by obscurity then things like keys become security by obscurity. That's obviously not a useful end result so that can't be the correct definition.
It's useful to ask what the point being conveyed by the phrase is. Typically (at least as I've encountered it) it's that you are relying on secrecy of your internal processes. The implication is usually that your processes are not actually secure - that as soon as an attacker learns how you do things the house of cards will immediately collapse.
What is missing from these two representations is the ability for something to become trivially bypassable once you know the trick to it. AnC is roughly that for ASLR.
I'd argue that AnC is a side channel attack. If I can obtain key material via a side channel that doesn't (at least in the general case) suddenly change the category of the corresponding algorithm.
Also IIUC to perform AnC you need to already have arbitrary code execution. That's a pretty big caveat for an attacker.
ASLR is not purely security through obscurity because it is based on a solid security principle: increasing the difficulty of an attack by introducing randomness. It doesn't solely rely on the secrecy of the implementation but rather the unpredictability of memory addresses.
Think of it this way - if I guess the ASLR address once, a restart of the process renders that knowledge irrelevant implicitly. If I get your IPv6 address once, you’re going to have to redo your network topology to rotate your secret IP. That’s the distinction from ASLR.
I don't like that example, because the damage caused by, and the difficulty of recovering from, a secret leaking is not what determines the classification. There exist keys that, if leaked, would be very time consuming to recover from. That doesn't make them security by obscurity.
I think the key feature of the IPv6 address example is that you need to expose the address in order to communicate. The entire security model relies on the attacker not having observed legitimate communications. As soon as an attacker witnesses your system operating as intended the entire thing falls apart.
Another way to phrase it is that the security depends on the secrecy of the implementation, as opposed to the secrecy of one or more inputs.
You don’t necessarily need to expose the IPv6 address to untrusted parties, though, in which case it is indeed quite similar to ASLR in that data leakage of some kind is necessary. I think the main distinguishing factor is that ASLR by design treats the base address as a secret and guards it as such, whereas that’s not a mode the IPv6 address can have, because by its nature it’s assumed to be something public.
Huh. The IPv6 example is much more confusing than I initially thought. At this point I am entirely unclear as to whether it is actually an example of security through obscurity, regardless of whatever else it might be (a very bad idea to rely on, for one). Rather ironic, given that the poster whose claims I was disputing provided it as an example of something that would be universally recognized as such.
I think it’s security through obscurity because in ASLR the randomized base address is protected secret key material, whereas in the IPv6 case it’s unprotected key material (e.g. every hop between two communicating parties sees the secret). It’s close though, which is why IPv6 mapping efforts are much more heuristics-based than IPv4, which you can just brute force (along with port numbers) quickly these days.
I'm finding this semantic rabbit hole surprisingly amusing.
The problem with that line of reasoning is that it implies that data handling practices can determine whether or not a given scheme is security through obscurity. But that doesn't fit the prototypical example where someone uses a super secret and utterly broken home rolled "encryption" algorithm. Nor does it fit the example of someone being careless with the key material for a well established algorithm.
The key defining characteristic of that example is that the security hinges on the secrecy of the blueprints themselves.
I think a case can also be made for a slightly more literal interpretation of the term where security depends on part of the design being different from the mainstream. For example running a niche OS making your systems less statistically likely to be targeted in the first place. In that case the secrecy of the blueprints no longer matters - it's the societal scale analogue of the former example.
I think the IPv6 example hinges on the semantic question of whether a network address is considered part of the blueprint or part of the input. In the ASLR analogue, the corresponding question is whether a function pointer is part of the blueprint or part of the input.
> The problem with that line of reasoning is that it implies that data handling practices can determine whether or not a given scheme is security through obscurity
Necessary but not sufficient condition. For example, if I’m transmitting secrets across the wire in plain text that’s clearly security through obscurity even if you’re relying on an otherwise secure algorithm. Security is a holistic practice and you can’t ignore secrets management separate from the algorithm blueprint (which itself is also a necessary but not sufficient condition).
Consider that in the ASLR analogy dealing in function pointers is dealing in plaintext.
I think the semantics are being confused due to an issue of recursively larger boundaries.
Consider the system as designed versus the full system as used in a particular instance, including all participants. The latter can also be "the system as designed" if you zoom out by a level and examine the usage of the original system somewhere in the wild.
In the latter case, poor secrets management being codified in the design could in some cases be security through obscurity. For example, transmitting in plaintext somewhere the attacker can observe. At that point it's part of the blueprint and the definition I referred to holds. But that blueprint is for the larger system, not the smaller one, and has its own threat model. In the example, it's important that the attacker is expected to be capable of observing the transmission channel.
In the former case, secrets management (ie managing user input) is beyond the scope of the system design.
If you're building the small system and you intend to keep the encryption algorithm secret, we can safely say that in all possible cases you will be engaging in security through obscurity. The threat model is that the attacker has gained access to the ciphertext; obscuring the algorithm only inflicts additional cost on them the first time they attack a message secured by this particular system.
It's not obvious to me that the same can be said of the IPv6 address example. Flippantly, we can say that the physical security of the network is beyond the scope of our address randomization scheme. Less flippantly, we can observe that there are many realistic threat models where the attacker is not expected to be able to snoop any of the network hops. Then as long as addresses aren't permanent it's not a one time up front cost to learn a fixed procedure.
> The correct definition is simply "using obscurity as a security defense mechanism", nothing more.
Also stated as "security happens in layers", and often obscurity is a very good layer for keeping most of the script kiddies away and keeping the logs clean.
My personal favorite example is using a non-default SSH port. Even if you keep it under 1024, so it's still on a root-controlled port, you'll cut down the attacks by an order of magnitude or two. It's not going to keep the NSA or MSS out, but it's still effective in pushing away the common script kiddies. You could even get creative and play with port knocking - that keeps the logs on under-1024 ports clean.
I believe most people see security through obscurity as an attempt to hide an insecurity.
ASLR/KASLR intends to make attackers' lives harder by using inconsistent offsets for known data structures. It's not obscuring a security flaw; instead it raises the cost of a 'single run' attack.
The ASLR attack that I believe is being referenced is specific to abuse from within the browser, running in a single process. This single attack vector does not mean that KASLR globally is not effective.
Your quote has some choice words, but it's contextually poor.
That attack does not require a web browser. The fact that a web browser can do it showed it was of higher severity than it would have seemed if the proof of concept had been in C, since web browsers run untrusted code all the time.
ASLR obscures the memory layout. That is security by obscurity by definition. People thought this was okay if the entropy was high enough, but then the ASLR⊕Cache attack was published and now its usefulness is questionable.
ASLR is by definition security through obscurity. That doesn't make it useless, as there's nothing wrong with using obscurity as one layer of defenses. But that doesn't change what it fundamentally is: obscuring information so that an attacker has to work harder.
Is having a secret password security by obscurity? What about a private key?
Security by obscurity is about the bad practice of thinking that obscuring your mechanisms and implementations of security increases your security. It's about people that think that by using their nephew's own super secret unpublished encryption they will be more secure than by using hardened standard encryption libraries.
Security through obscurity is when you run your sshd server on port 1337 instead of 22 without actually securing the server settings down, because you don’t think the hackers know how to portscan that high. Everyone runs on 22, but you obscurely run it elsewhere. “Nobody will think to look.”
ASLR is nothing like that. It’s not that nobody thinks to look, it’s that they have no stable gadgets to jump to. The only way to get around that is to leak the mapping or work with the handful of gadgets that are stable. It’s analogous to shuffling a deck of cards before and after every hand to protect against card counters. Entire cities in barren deserts have been built on the real mathematical win that comes from that. It’s real.
ASLR is technically a form of security by obscurity. The obscurity here being the memory layout. The reason nobody treated it that way was the high entropy that ASLR had on 64-bit, but the ASLR⊕Cache attack has undermined that significantly. You really do not want ASLR to be what determines whether an attacker takes control of your machine if you care about having a secure system.
The defining characteristic of security through obscurity is that the effectiveness of the security measure depends on the attacker not knowing about the measure at all. That description doesn’t apply to ASLR.
It produces a randomization either at compile time or run time, and the randomization is the security measure, which is obscured based on the idea that nobody can figure it out with ease. It is a poor security measure given the AnC attack that I mentioned. ASLR randomization is effectively this when such attacks are applicable:
You are confusing randomization, a legitimate security mechanism, with security by obscurity. ASLR is not security by obscurity.
Please spend the time on understanding the terminology rather than regurgitating buzz words.
I understand the terminology. I even took a graduate course on the subject. I stand by what I wrote. Better yet, this describes ASLR when the AnC attack applies:
The normal way is to use dpkg to apply your patch, use dch to increase the patch version with a .1 or something similar so that the OS version always takes precedence, and then rebuild.
This is generally true but specifically false. The builds described in the gist are still linking Oniguruma dynamically. It is in another package, libonig5, that would be updated normally.
The gist uses the orig tarball only, so it skips the distro patch that selects the distro-packaged libonig over the "vendored" one. At least that's how it appears to me; I only skimmed the situation.
Or do you see something deeper that ensures that the distro libonig is actually the one that gets used?
You can't "launder" copyright away like that. The court will see straight through it. See "What color are your bits?" at https://ansuz.sooke.bc.ca/entry/23
There are over 200K language modeling datasets on Hugging Face, I bet a large portion of them were generated with LLMs, and all LLMs to date have been trained on copyrighted data. So they are all tainted.
But philosophically, I wonder if it's all right to block that; it technically follows the definition of copyright. It does not carry the expression, but borrows abstractions and facts. That's exactly what is allowed.
If we move to block synthetic data, then anyone can be accused of infringement when they reuse abstractions learned somewhere else. Creativity would not be possible.
On the other hand models trained on synthetic data will never regurgitate the originals because they never saw them.
> Was the modified ENIAC less of a computer than the Manchester Baby because its program was in ROM and could not be changed by the computer?
To me, the most remarkable property of a computer is that data and code are interchangeable. This makes it possible for the same computer to run different programs, run programs that transform programs, and so forth. It's the same fundamental concept that today means that one can "download" an app and then run it.
(See also: Lisp, which is equally remarkable in the software space for the same reason)
> Look at it this way: many modern microprocessors, especially small ones for embedded control, have their programs in ROM. If they are modern-style computers, then so was the modified ENIAC.
What makes them modern-style computers, though, is that they are capable of having their firmware flashed - or at least the development versions can do this while their software is engineered. If the final product only runs a ROM, it has lost the essence of a general purpose computer, which is the fundamental and very remarkable invention that is what we actually celebrate.
"No-one would claim that a modern Harvard Architecture computer with its program stored in ROM isn’t a stored-program computer. So does ENIAC take the prize from Baby as the first electronic stored-program computer?"
I do actually claim that modern Harvard-architecture computers such as the AVR8 or PIC microcontrollers are not stored-program computers. You can't store a program in them and then execute it. To be fair, some MCUs can change their own Flash so the difference can be subtle - in that case the processor is used either normally or as part of the ISP (in-system programming) circuit at different times.
For very simple stored-program machines the ability to modify running code is needed for it to be Turing complete. In a computer like the Baby, how would you add two arrays? It had no index registers. So you would need to increment the instructions that load and store from the arrays every time you go through the loop. I agree that this isn't an issue on a machine with only 32 words of memory in all, but it is a key idea in theory.
Of course, a Harvard computer can simulate a Von Neumann one (see AVR8 simulating an ARM in order to boot Linux where it does indeed store programs and then run them). In fact, a popular way to implement CISC computers was to build a tiny Harvard machine running a single program in its "microcode ROM" emulating the computer you actually wanted to use.
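To make the array-addition point above concrete, here's a toy C sketch (an invented instruction set, not actual Baby code) of a machine with no index registers, where the loop only works because it keeps incrementing the address fields of its own LOAD/ADD/STORE instructions:

    #include <stdio.h>

    /* One memory word holds either an instruction {op, addr} or, for data
     * words, just a value kept in .addr. Code and data share the same array. */
    typedef struct { int op; int addr; } word;

    enum { LDA, ADDM, STA, INCA, JLT, HLT };

    int main(void) {
        word m[32] = {
            /* 0 */ {LDA,  10},  /* acc  = mem[10]  (address field gets rewritten) */
            /* 1 */ {ADDM, 14},  /* acc += mem[14]  (address field gets rewritten) */
            /* 2 */ {STA,  18},  /* mem[18] = acc   (address field gets rewritten) */
            /* 3 */ {INCA, 0},   /* mem[0].addr++  -- the program edits itself     */
            /* 4 */ {INCA, 1},
            /* 5 */ {INCA, 2},
            /* 6 */ {JLT,  0},   /* loop while mem[0].addr < 14, i.e. i < 4        */
            /* 7 */ {HLT,  0},
        };
        /* data: A = {1,2,3,4} at 10..13, B = {10,20,30,40} at 14..17, C at 18..21 */
        for (int i = 0; i < 4; i++) { m[10 + i].addr = i + 1; m[14 + i].addr = 10 * (i + 1); }

        int acc = 0, pc = 0;
        for (;;) {
            word in = m[pc++];
            switch (in.op) {
            case LDA:  acc  = m[in.addr].addr;  break;
            case ADDM: acc += m[in.addr].addr;  break;
            case STA:  m[in.addr].addr = acc;   break;
            case INCA: m[in.addr].addr++;       break;   /* self-modification */
            case JLT:  if (m[0].addr < 14) pc = in.addr; break;
            case HLT:
                for (int i = 18; i < 22; i++) printf("%d ", m[i].addr);
                printf("\n");   /* prints: 11 22 33 44 */
                return 0;
            }
        }
    }

With an index register (or the pointer arithmetic every modern ISA gives you), instructions 3-5 disappear and the code can stay read-only, which is the whole point of the comparison.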
I have never seen an MCU that cannot write its own flash memory with a new program. This is how everybody makes firmware updates. At most, some MCUs have the option to blow some fuses to make part of the memory immutable, but during program development the memory is still writable and used as such.
Ancient MCUs with true ROM or ultraviolet-erasable PROM may be claimed not to be stored-program computers, but this claim is completely false for any MCU with flash memory.
Manchester Baby was just a test device, not intended to be really useful, but its successor, the Manchester Mark 1, which became operational the next year, in 1949, already had index registers.
Nevertheless, in the next few years many computers were built that lacked index registers, despite the precedent of the Manchester Mark 1, so instruction modification during program execution was mandatory on them. After 1954, few, if any, computers remained without index registers, so instruction modification was no longer necessary.
Not all types of approaches are available at all runways (or airports), and sometimes they are down for maintenance. Specific runways may be required due to wind, aircraft weight, and runway condition and length. Most airlines ban "circling approaches" (using an approach to one runway end and then circling visually to land on another) for safety reasons. ILS, which is probably the "close range guidance" you are thinking of, must be installed, maintained and calibrated individually per runway end. Visual aids cannot be used for approach if there is low cloud.
It is usual to be able to abort an approach and try again at the same airport using a different approach technology. But if the journalist wanted to find the most extreme example, it's not surprising that it happened at least once that an alternative wasn't available. This is probably "sampling bias"!
Note that final operational decisions are made by the aircraft commander. Aircraft do not "get diverted", except by decision of the "captain".
> GPS for tracking phone users should go away anyway. If you are in an unknown city, buy a damn paper map. No tracking and you absorb the big picture much faster.
Regular GPS is receive-only. GPS receivers naturally cannot be tracked. Tracking happens much higher up the stack, such as with your map app downloading local map tiles for display. Technology-wise, it's trivial to have a smartphone based map that is tracking-free, and the privacy focused alternate phone OSes do this already.
Is money a scam? Not in itself, but you don’t have to go very far to find a scam that involves money. Similarly, it’s hard to answer your question if you consider “crypto” to be a general concept, unless you properly and narrowly define it as something specific first.
> Multiple files to herd. When I get an email with five patch attachments, I have five files to copy around instead of one, with long awkward names.
That’s not correct. You can write the email to an mbox file (your MUA lets you do that, right?) and then use `git am` to pull it all into git.
> Why I don’t like it: because the patch series is split into multiple emails, they arrive in my inbox in a random order, and then I have to save them one by one to files, and manually sort those files back into the right order by their subject lines.
The patch series arrives threaded, so your MUA should be able to display them in order. Export them to an mbox file, and then use `git am` again.
There might be ways that someone can send you email in this way and for the patches to be broken such that `git am` won’t work, of course. I take no issue with that part of the argument.
Not everyone has a fancy client-side MUA that gives them trivial access to mbox files. E.g., a typical webmail service will make exporting mboxes into a whole process at best. (And on the sending side, have fun with the lack of git send-email integration. I've spent far more time than I'd like manually patching up References and In-Reply-To headers.)
Of course, the classic response is "get a better MUA you luser", but that just adds even more steps for people who use webmail services for 99.9% of their email needs.
People can use webmail for regular email, but then connect a “better” MUA for patch handling. I get that this would be more steps, but for those who don’t want to do this, they probably just use GitHub PRs, and that’s fine, they can carry on doing that :-)
I’m just completing the picture by pointing out that for those who choose to use emails to jockey patches around by mutual agreement, including patches in emails really shouldn’t be a problem.
> Of course, the classic response is "get a better MUA you luser"
Git is distributed and allows you to work efficiently with poor connectivity, having full history available at any time, which is a big accessibility point for people with limited connectivity (and also helps people working while traveling, for example). If you do have any email client, you get all of this as well, plus arbitrarily powerful, low-latency filtering and searching. I recommend Greg KH's "Patches carved into stone tablets" talk [0].
Despite your "luser" strawman, people advocating for client-side MUAs mean well and have a point. Try replacing "webmail" by "Notepad" and "client-side MUA" by "emacs/vim" to see how your argument sounds. You probably spend a decent amount of time interacting with email, and the investment in setting up a fast, flexible and powerful environment (preferably reusing your text editor for composing messages) for doing so pays for itself soon.
> Try replacing "webmail" by "Notepad" and "client-side MUA" by "emacs/vim" to see how your argument sounds.
As it happens, I'm the kind of masochist who uses Sublime Text with no plugins for most of my programming (and literal Notepad for most of my note-taking on Windows), so I find value in letting people stick to their familiar workflow, even if some might see that workflow as somewhere between 'grossly inferior' and 'literally unusable'.
The nice thing with remote Git repos is that you don't need to care at all about how they work internally: you can speak to them using the same Git client (or GUI wrapper, alternative Git-compatible client, etc.) that you use for everything else. Of course, many people would prefer not to use Git at all, but it's a necessary evil to have some client if you want source control, and it doesn't take much work to set up. (At this point, I've installed several source-control tools that I don't really use nor have to worry about.)
But setting up an MUA solely for a git-send-email based workflow is several steps beyond that. E.g., some of the Linux maintainers demand inline patches, which physically cannot be done through many webmail services. So you're left with the slog of finding the right incantations for git-send-email (or an MUA you don't need for anything else) to provide the right credentials to an obscure SMTP proxy. And then you have to worry about how well-protected those credentials are, unless you have some sort of keyring or 2FA integration.
> You probably spend a decent amount of time interacting with email, and the investment in setting up a fast, flexible and powerful environment (preferably reusing your text editor for composing messages) for doing so pays for itself soon.
I'm a bit curious, how well do these tools handle HTML email? Webmail services come with WYSIWYG editors that I make liberal use of for formatted text. There's a big overlap between the "email patches are great!" and "HTML email is the worst!" crowds, but I'd be surprised if HTML email is totally anathema to today's MUAs.
> As it happens, I'm the kind of masochist who uses Sublime Text without any plugins for most of my programming, so I find value in letting people stick to their familiar workflow, even if some might see that workflow as somewhere between 'grossly inferior' and 'literally unusable'.
I definitely think there are upsides to not tweaking your text editor config endlessly, so I understand your point :) What I meant with "vim/emacs" is mostly that sometimes you really want to automate a text editing task, and then it's really convenient to have a programmable text editor. It's also very much a case of [0].
> I'm a bit curious, how well do these tools handle HTML email?
In my case, I use mu4e in emacs to read my mail. Very basic HTML works by default via emacs's native HTML renderer (see, e.g., [1] for old screenshots). That's my preferred solution because I like the keyboard consistency (it's just an emacs buffer) and because there is a command to view the email in an external browser if needed, but it is also possible to render HTML email accurately in emacs by embedding a webkit widget [2]. As for writing, you can write in Org mode format (emacs markdown, if you will) and it gets converted to HTML on send.
> ...the Thatcher government in the 1980s gave social housing tenants the right to buy homes, and created laws that prevented local authorities from building new public sector housing projects...
Also, with a large discount, so even if new social housing could be built, there wouldn't be the funds to build them. It is effectively a large wealth transfer from the taxpayer to those needing subsidised homes, such that they could later resell that home to release their windfall, sucking the money that could have gone to housing others.
Perhaps I'm an outlier. I expect to pay for my personal gaming experience. But if there's some necessary part of gameplay I don't like, that's negative experience that makes the game worse for me, reducing the value of the game to me. To skip that gameplay seems like something that shouldn't cost me anything, or even get me a discount because I'm not getting some of the experience I paid for. Like say if a side dish is so bad I send it back at a restaurant. I neither expect to pay for that nor a premium for someone else to eat it for me!
So I'm not willing to pay a premium for such a thing. I don't see why the game with the bad part missing should be worth more than with the bad part present. Rather, the inverse! I'll more likely skip the game entirely and find a different one that doesn't have such mandatory grinding.
As I say, maybe that's an outlying opinion, since making money from this kind of thing apparently works. If it helps, I'm in the adult-with-family demographic, and my time (rather than game-purchase levels of money) is what is at a premium.
I've never bought an XP booster myself, and I feel some of the same conflict. Although I can obviously afford it, I don't think I could stomach doing this for new releases. So at most it would be something where I'd be interested after 2-3 years when all the patches are in, bugs are fixed, and review consensus has settled.
So rather than pay US$70 today for a buggy, grindy new release experience, I pay $20 in two years for the base game + $10 for the "player's digest" mod.
I expect even then it would be a tough sell, particularly having to be on PC, since a lot of the market for this kind of thing would be more casual console/mobile/streaming players.
Just because something is OCRable doesn't make it structured data that can be used immediately. A table at a restaurant might have a QR code that takes me to a menu with the table number already encoded and pre-entered into the order page ready to go. An OCRable table number does not give me that, and an OCRable URL like https://fragmede.restaurant/menu?table=42 might work for HNers, but most humans won't recognise and understand their table number when going up to the bar to order.
"Fragmede.menu" costs $35 a year, which is roundoff error cost for a restaurant, and is a short-enough domain for a customer who wants to view a menu and order. No need for the "https://" which is implied. Adding a "?table=42" could be optional but isn't necessary, as the website in addition to simply presenting the menu could provide a means to order and if so have a little html input box when ordering to put their table number or whether it is pickup.
Sure it can be done, but there's no denying that a simple scan of a QR code instead of manually typing a URL would make life easier, as would some kind of alternative encoding technology that is more pleasing to the eye.
> * SECURITY UPDATE: Fix multiple invalid pointer dereference, out-of-bounds write memory corruption and stack buffer overflow.
(that one was for CVE-2017-9224, CVE-2017-9226, CVE-2017-9227, CVE-2017-9228 and CVE-2017-9229)