Because if they announced it and it seemed plausible or even possible that they were correct, then every media outlet, regulatory body, intelligence agency, and Fortune 500 C-suite would blanket OpenAI in the thickest veil of scrutiny to have ever existed in the modern era. Progress would grind to a halt and eventually, through some combination of legal, corporate, and legislative maneuvers, all decision making around the future of AGI would be pried away from Ilya and OpenAI in general - for better or worse.
But if there's one thing that seems very easy to discern about Ilya, it's that he fully believes that when it comes to AI safety and alignment, the buck must stop with him. Giving that control over to government bureaucracy/gerontocracy would be unacceptable. And who knows, maybe he's right.
Our onboarding docs specifically tell employees NOT to use Google Authenticator precisely because of this issue. I have no idea how Google let this fester for so long; literally if even one (1) person over there was using it and got a new phone, they should have known about the issue.
Yeah, same with my company. "DO NOT USE GOOGLE AUTHENTICATOR" is littered throughout our Intranet and onboarding docs in bold letters with recommendations for different options. And people still use it and lose their codes all the time.
Now it's tied to the Google Account, which means it'll be tied to either their personal or work account. So we have to worry about personal account bans removing their 2FA, or, when they leave the company, our suspension process killing personal 2FA codes that were synced via the wrong account.
The app has supported bulk QR code export and import for years. This makes it easy to transfer to a new phone, and relatively easy to make physical backups.
Which only worked if you had both phones working at the same time... I'd bet a sizable portion of new phone enablements are due to losing the previous phone irrevocably.
When doing a factory reset because of whatever reason, this becomes an issue as well. You cannot take screenshots of the bulk export QR-Code on Android because of FLAG_SECURE, so you need to work around that and take a photo of the screen with a different device to import from later.
Also, as of last week, there was an issue with special characters when importing: the app would just freeze or not recognize the QR code pattern at all, so you'd better have had backups of all your secret keys.
Both issues made me switch to Aegis and appreciate my past self backing up the secrets with KeePassXC.
I migrated to Aegis long ago and it is pretty awesome. Backups. Copy & paste. Encryption. Auto-upload to Nextcloud. Better interface (with names!). Etc.
You'd save the QR code at the time you first used it on the old phone, and not wait for when you needed to transfer it.
For me, I'd usually be on the desktop when setting up 2FA anyway, so I'd just save the QR code from the desktop browser ("Save image as ..."). When I needed to set up a new phone, I'd open the saved image on the desktop and point my phone at the screen.
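Worth noting that the setup QR code is typically just an otpauth:// key URI rendered as an image, so the saved picture contains everything needed to re-enroll anywhere: secret, issuer, label. A rough sketch of pulling the fields back out once you've decoded the QR to its URI text (the example URI below is made up):

```python
from urllib.parse import parse_qs, unquote, urlparse

def parse_otpauth(uri: str) -> dict:
    """Split an otpauth:// key URI into its type, label, secret, and issuer."""
    u = urlparse(uri)
    if u.scheme != "otpauth":
        raise ValueError("not an otpauth URI")
    q = parse_qs(u.query)
    return {
        "type": u.netloc,                      # "totp" or "hotp"
        "label": unquote(u.path.lstrip("/")),  # e.g. "Example:alice@example.com"
        "secret": q["secret"][0],              # base32 key; this is the whole backup
        "issuer": q.get("issuer", [None])[0],
    }

parse_otpauth("otpauth://totp/Example:alice@example.com?secret=JBSWY3DPEHPK3PXP&issuer=Example")
```

Which is also why a saved QR image is security-sensitive: anyone who can read it has the secret.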
That's an absurd expectation. First of all, many users don't even have or use a computer. Of course, I personally do have one, but I'm often nowhere near one when I set up MFA on a new account. So then I guess I screenshot the QR code to my phone? But if I saved the image to my phone it gets stored in my photos backup anyway. Why would Authenticator not just back its own contents up, to that exact same spot, rather than me doing some crazy runaround that for some reason involves images?
Nope, you can't screenshot the page, so you can't save the code and can't send it to another phone. This means you can never trade in a phone for a new one and if your phone is lost or stolen you're locked out of all your accounts forever.
They actively added code to prevent you taking screenshots, which is insane but true.
I'm on iOS and I'm able to screenshot the QR code with version 3.4.0 of the app. Maybe the screenshot lockdown is limited to Android?
In any case, if you're trying to create a backup there are other avenues of capturing the QR code - offline digital camera is probably the most secure way of doing so.
Interesting - but not good enough. For the threat model TOTP solves, it is not absurd to want Authy-like functionality where codes can be backed up, encrypted, to a cloud service OR like Authone (?) which allows you to export the data to a file.
This thought crosses my mind a lot too. When we're working with code, we're shaping information into structures and patterns. And so our brains are tasked with translating those structures and patterns into text, making sure that text is written exactly right, and reconciling the difference in the observed output of structures and patterns with the text we've written. What if we could stay at that higher level of abstraction?
My go-to example of this that I keep coming back to is syntax highlighting. It's immensely useful, and yet once you get used to it, the brain is able to skip the (conscious) step of breaking it down, e.g. "this text is green so it means it is a function name, so the next text should be blue because they are function parameters", etc. It's more like "okay, green, blue, bunch of stuff between the braces, checks out". Indentation, squiggly line error highlighting, etc. all serve a similar purpose. They rely on our minds' fundamental ability to recognize visual patterns without having to fully process them through our language centers.
So I feel like there is a huge unexplored space there for more efficient and natural programming. VR with 3D representations of code as, I dunno, pipes, gears, something like that? Function declarations in specific colors and then call sites in mixed colors when one function is called from another? Anything to reduce the friction from thought to working software without having to worry about syntax.
This is all very abstract and I probably haven't explained it well, but it's one of those things I can't shake because I'm fairly sure there's something there.
> The syntax complexity will just be moved into a visual layer, it doesn't disappear.
You missed the part where our brain is much better prepared to deal with visual complexity than textual. It's like moving the job to a specialized hardware co-processor.
Even if the inherent complexity of the task is the same, the environment would let us handle it far more efficiently. And we could also redesign languages and IDEs to take advantage of it and reduce non-inherent complexity as well. See for example the Cursorless demo linked above, where you can target specific bits of code by naming a letter rather than navigating to them with arrow keys or the mouse.
> You missed the part where our brain is much better prepared to deal with visual complexity than textual.
Citation needed.
Text is also a visual representation of language. If you want symbols instead of letters to represent logic, you basically end up with a hieroglyph-like system: a logographic one.
Visual representation is not logical; it works according to gestalt principles. [1] Visual improvements would not be used to represent the program logic itself, but as secondary notation to improve and accelerate understanding. [2]
The metaphor would be using a graphics card for rendering the scene and the CPU for the game logic. Without a graphics card, your rendering options are extremely limited.
> Anything to reduce the friction from thought to working software without having to worry about syntax
I think that text/notation-based representation of programs (or state machines) is the most effective way.
The reason is that it leverages the human ability to use language. I think our intuition for language and thought is much better than our intuition for 2d/3d spaces.
A picture is worth a thousand words, but if you need exact precision, as programs do, no amount of non-textual pictures would give you the flexibility to describe exactly what you want without losing details.
There is a steep learning curve to languages, but once you have learnt a language, its concepts get attached and integrated into your thoughts. This allows you to design increasingly higher levels of abstraction, until you are working with concepts in your current domain.
I love that I can count on the Top Minds of Hacker News to tie themselves into knots debating whether or not it is morally and ethically justifiable to try to stop the We-Kidnap-Children-And-Put-Them-In-Cages Agency from doing their job.
Indeed. In another comment, someone suggests that it might be immoral for a person in a position to exert some small influence on the situation to impose their judgement on the rest of us, who are not in a position to do anything at all.
That's... that's just some incredible mental yoga there.
While I disagree with briandear, I don't find it sociopathic.
(It's like PG said - be aware of what the taboos/unspeakable ideas are in society.)
I live in Canada as an immigrant, love it here, and would never move; the USA's health/insurance system frankly baffles me, even having lived there for a year previously. I don't even remotely understand why people put up with its sheer complexity and opaqueness.
But I appreciate people like the GP [briandear] who eloquently and clearly help me understand perspectives different from my own. I think it's a very valid, very core discussion of values and goals: is it equal care for all, or care prioritized for some? And I agree with GP [briandear] that shying away from this core discussion muddles the issue. Sometimes it feels like the USA builds ever more complex systems to mask the goals and direction of its health care system, rather than a simple system that would fess up and do whatever it's meant to do efficiently :-/
Wow, this story is annoying me a lot more than I expected. There's a whole lot of shaming of the layperson in here, with a wink wink nudge nudge that if people would just _think a little_, they could save themselves a lot of trouble.
> Graphics like these need to be read closely and carefully. Only then can we grasp what they're really saying.
Well, that's ridiculous. If these graphics were in an app, we'd tear them apart for the poor user experience. Look at this: https://imgur.com/a/Ko2Z4uM
There is nothing there to indicate that what you're looking at is the probability distribution of the center of the storm. The very obvious interpretation of this graphic, without additional context, is that this cone is the area that could be affected by the hurricane. This is reinforced by:
1. The size of the start of the cone is the same as the size of the hurricane
2. There is a well defined border to the path
3. There is no additional shading outside the cone that indicates "could also face danger"
Furthermore, "probable path of the storm center" is not something the general public cares about! That is not the question that needs answering! What people want to know is, "will I be in the path of the hurricane?"
There is a big box with a bunch of text at the top of the graphic saying that it is showing the probable path of the storm's center, but does not show the size of the storm. This is different from saying "the storm may extend beyond the edges of the cone". And the next line is "hazardous conditions may occur outside the cone." Okay, every time there's a thunderstorm or 50mph wind gusts, there's an alert in my weather app about hazardous conditions. That information is too vague to be actionable. This doesn't even touch on the design aspects of this graphic, such as how the massive amount of text, size, color, spacing, etc., seems to draw the eye away from that message at the top.
But getting back to a point from a previous paragraph, why is this even disseminated widely at all, when this graphic is so misleading and also not what people need to know? Is it the NHC pushing it out to the public? Is it the media? It seems wildly irresponsible. The tropical storm force wind speeds graphic mentioned in TFA would be a much better product to deliver. Or, just create a graphic that is what the article has already identified as the natural interpretation - a cone of the possible area that could be affected by the hurricane!
Sorry, but excuses for poor design are a real pet peeve of mine.
Seriously. I already knew everything in the article (because I've read similar pieces before) and yet I still have trouble remembering it when reading a map and trying to visualize the area I actually care about. I can't imagine how the heck people are supposed to just deal with this and wrap their heads around it, especially during an emergency.
I'm wondering if the people who didn't prepare due to a misleading map, despite a technically accurate prediction, can sue for damages? Not that I would enjoy seeing NOAA et al. in a lawsuit, but if nothing else has gotten them to fix these maps, maybe a lawsuit will?
I did not come away with that impression. Rather, what I came away with was: people tend to misunderstand these visualizations, so we should re-evaluate using them.
You're constructing a straw man here. All your points are explained by the NWS themselves on the plot, and this is but one of the many products you can avail yourself of, along with the different wind speed probability plots, etc.
As to why this particular one is shown by the media, I guess you have to ask them.
If anyone from Stripe is reading this, I do hope a significant amount of support goes to Project Vesta. I have not seen a better sequestration strategy, in that olivine weathering is both a long term CO2 sink, and -also- helps deacidify the oceans. I have no affiliation with Project Vesta, I'm just very excited by it.
My question for the Vesta folk is, what is the current status of the project? There is a timeline on your website but no indication of how close any of these stages are to being a reality. What are the blockers, who can help, when will you "start shipping", as it were?
> Fact of the matter is if it weren't for plastics, the everyday products that we all consume would either cost way more, or be so expensive that quite a few of us would never get to use them.
That's fine. When the choice is not having some luxury items vs total biosphere collapse, I'll side with the former. I guess you'd rather keep sipping your Diet Coke while the world burns.
As soon as you made it a personal, moral issue, you lost the argument entirely. If the goal is to get everyone to adopt good habits for the environment, you're going to have to address price and convenience.
"I wish they would have addressed the price and convenience issues and not made it such a question of morality", I say, as I eat a handful of dirt and survey the barren lifeless landscape around me.
I've been drinking almost exclusively tap water for the better part of my life. I suppose my luxury is drinkable tap water. So let's aim for that for everyone instead.
Food doesn’t always need to be wrapped in plastics. For example I’ve seen bananas wrapped in clear plastic bags. Why is that when bananas come with their own “wrappers”?
Yeah, after a long stretch of trying "proper" cheeses on burgers, I've gone back to using Kraft singles. Even Kenji Lopez-Alt recommends American cheese for burgers, so I don't feel so bad about it.
I much prefer a mature cheddar. In particular an apple smoked cheddar.
A close second would be a blue cheese such as St. Agur.
Both with a lot of umami goodness on the burger.
I will happily eat a burger with American cheese. It is good - but not cheese to me. It is more an oily softness than the smooth fatty softness from (dare I say real) cheese.
Maybe this comes from growing up where a "cheese shop" actually is a thing :-)
But cheese fries with american cheese are truly sinful and a very guilty pleasure. They're not common around here for which my heart is eternally thankful!
I do both cheddar and blue/gorgonzola on the same burger and it works pretty well. We sometimes add chopped green olives and cream cheese and it makes for an absolutely sublime cheeseburger.