BulgarianIdiot's comments

You win btw.


The abstract concept of an ORM is fine; unfortunately, the particular ORMs we use pretend that all objects and their relationships live in RAM, that there's a single instance of every entity, and that it's always up to date. An abstraction that isn't merely wrong, but falls apart in your face at every opportunity.

This problem is not unique to mapping objects to SQL databases; it applies to mapping objects to anything remote at all, say a GraphQL or REST API.

OOP as we presently interpret it is an inherently synchronous, reference-based (or handle-based, rather) paradigm that only works when all your state is local and in RAM.

This is why distributed object protocols keep failing. They'll keep failing until OOP programming reorients to use values for messages and makes references explicit, so their impact is seen and felt (and it's especially seen and felt when you reference an entity on another machine halfway around the world, in terms of lag, fragility, eventual consistency and everything).

We see hints of this with value types in Swift and .NET, though I'd say they're rather rudimentary so far. But it's coming. The ideal example of such a setup is Erlang, a language Joe Armstrong called "probably the only OOP language in the world", a statement Alan Kay agrees with (source: https://www.quora.com/What-does-Alan-Kay-think-about-Joe-Arm... ).
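
Here's a minimal sketch in Python of what value-based messages with explicit references could look like (names like EntityRef and OrderPlaced are hypothetical, made up purely for illustration): the message is an immutable value you can copy across any boundary, while the remote entity is only reachable through an explicit handle whose resolution is visibly a separate, fallible, remote operation.

    from dataclasses import dataclass

    # A reference is explicit: it names a node and an id. Resolving it is
    # a separate operation that can be slow, fail, or return stale data.
    @dataclass(frozen=True)
    class EntityRef:
        node: str        # e.g. "db-eu-west-1" (hypothetical)
        entity_id: int

    # A message is a plain value: self-contained, freely copyable, with no
    # hidden links back into someone else's RAM.
    @dataclass(frozen=True)
    class OrderPlaced:
        order_id: int
        customer: EntityRef   # the remote thing shows up as a ref, not an object
        total_cents: int

    def handle(msg: OrderPlaced) -> None:
        # The handler owns its copy of the value. To learn anything more about
        # the customer it must explicitly dereference the ref (a network call),
        # so the cost and fragility of "remote" can't hide behind a getter.
        print(f"order {msg.order_id}: customer {msg.customer.entity_id} "
              f"on {msg.customer.node}, total {msg.total_cents / 100:.2f}")

    handle(OrderPlaced(42, EntityRef("db-eu-west-1", 7), 1999))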


At this point browsers are more complicated than the operating systems we had back when browser+mail was the norm.

And I bet Gmail.com running in a modern browser is more complicated than a mail client from that time.

As such, browsers are now an app platform. They don't need to ship with prebaked apps like mail. They need APIs like a powerful runtime, visualization layer, background services and notifications... and they have them.


Chargeback from your CC provider/bank?


I find it odd how many schema changes in a modern RDBMS must be done on the whole table at once. You can split a table into chunks and recode each chunk gradually, in a way that doesn't change the meaning of the data (so no downtime) but removes dead entries, like values from an updated enum.

In a way, you're describing how we can emulate this process by hand. The question is: why the heck don't databases do it themselves? Same with adding and dropping columns.
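
A rough sketch of that emulation in Python (toy table and made-up names; sqlite3 just stands in for whatever RDBMS you actually run): recode the table in primary-key chunks, each in its own short transaction, so no step has to lock or rewrite the whole table at once.

    import sqlite3

    # Toy table standing in for a large production one.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, status TEXT)")
    conn.executemany("INSERT INTO events (id, status) VALUES (?, ?)",
                     [(i, "LEGACY_OK" if i % 2 else "OK") for i in range(1, 1001)])
    conn.commit()

    CHUNK = 100  # rows recoded per transaction

    def recode_chunk(lo: int, hi: int) -> None:
        # Rewrite one key range, mapping a dead enum value to its replacement.
        # Each chunk commits on its own, so nothing waits on a whole-table rewrite.
        with conn:
            conn.execute(
                "UPDATE events SET status = 'OK' "
                "WHERE id BETWEEN ? AND ? AND status = 'LEGACY_OK'",
                (lo, hi),
            )

    max_id = conn.execute("SELECT MAX(id) FROM events").fetchone()[0]
    for lo in range(1, max_id + 1, CHUNK):
        recode_chunk(lo, lo + CHUNK - 1)

    # All dead entries are gone, yet the table was never rewritten in one go.
    print(conn.execute(
        "SELECT COUNT(*) FROM events WHERE status = 'LEGACY_OK'").fetchone()[0])  # 0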

Consider, for example, how PostgreSQL encodes NULLs: null fields are simply skipped in the row and recorded in a null bitmap at the front of it. Meaning rows are not uniformly sized; there is no offset = row * rowsize + field_offset style of addressing for reading a field in PG, where recoding only some of the rows would break the entire table.
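
To make the contrast concrete, here's a toy sketch (pure illustration, nothing like PG's actual on-disk layout): with fixed-size rows a field is found by arithmetic alone, so every row must share one layout and recoding only some of them breaks addressing; with self-describing rows, each one carries its own length, so rows in old and new encodings can sit side by side.

    import struct

    # Fixed layout: every row is exactly ROW_SIZE bytes, fields at fixed offsets.
    ROW_SIZE = 8
    FIELD_OFFSET = 4  # second 32-bit field

    def read_fixed(table: bytes, row: int) -> int:
        offset = row * ROW_SIZE + FIELD_OFFSET   # the arithmetic from above
        return struct.unpack_from("<i", table, offset)[0]

    fixed = struct.pack("<ii", 1, 10) + struct.pack("<ii", 2, 20)
    print(read_fixed(fixed, 1))  # 20

    # Self-describing layout: each row starts with its own byte length, so rows
    # of different shapes (old and new encodings) can coexist and be rewritten
    # one at a time without breaking how the others are read.
    def read_variable(table: bytes, row: int, field: int) -> int:
        pos = 0
        for _ in range(row):                     # walk past the earlier rows
            (length,) = struct.unpack_from("<i", table, pos)
            pos += 4 + length
        return struct.unpack_from("<i", table, pos + 4 + 4 * field)[0]

    row_a = struct.pack("<iii", 8, 1, 10)        # old shape: 2 fields
    row_b = struct.pack("<iiii", 12, 2, 20, 30)  # new shape: 3 fields
    print(read_variable(row_a + row_b, 1, 2))    # 30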

And yet we have all those huge monolithic operations that need to be done atomically. So weird.


Elon would rather lose $50 billion (and yes, that includes what he owes the bank and his co-investors) than admit defeat. He sees his "genius who makes no mistakes, it's all 6D chess" image as his primary money maker.


What do you mean ruined your objectivity? It seems like it repaired it.


Great example of how AI can be given a few partially opposing constraints and told "find me a common subset that fully satisfies both". This thing would take a person weeks of tweaking to get right.


I'm a kind fella usually, but this article was criminally stupid.


> In short, my brain has crossed a Rubicon and now feels like experiences constrained to small, rectangular screens are lesser experiences.

The funny thing about this statement is that Vision Pro is in fact the smallest rectangular screen of any device he's written code for.

He's hyped up. That's normal. In time he'll understand Vision Pro doesn't provide any better UX for common activities. In fact it's worse in many ways.

Where Vision Pro may shine is tasks where you need to perceive and manipulate complex three-dimensional objects, as they would be in physical space. I see great uses in engineering, design, art. It'd be great to preview interior design, design cars, architecture, create machinery and so on.

It'll also be great for previewing products, so online stores become a lot more viable than they are now, as you get a sense of size and style for an item in Vision Pro.

It may also be great for education, training, simulations.

It has many great uses. But basic apps aren't it. And most people won't care. This thing sucks to wear for more than 20 minutes. It's heavy and uncomfortable. You can't share your experience with others, either. It costs a lot. And you can't multitask with it: I can walk somewhere and do something on my phone along the way.

The input model also sucks. To code, for example, you need to hook up a Bluetooth keyboard and mouse. Looking at symbols one by one to finger-tap them would be comically slow. At which point, you may as well just get 2-3 screens and work on a normal workstation. For less money.


Have you worn this? You're pretty explicit with your complaints about its weight and comfort. Your penultimate paragraph is really unfounded and inaccurate. You can multitask with it (in a computing sense). You can share with apps that use the appropriate APIs. And I completely disagree that "basic apps" (whatever that is) will not be a great use case. Hell, I would buy this today so I could have a monitor replacement. And I'm just a lowly sysadmin who is at home with vim in a terminal.


It will absolutely rule for wargames (i.e. board-based simulations). High resolution, zooming into "counter" stacks, limited intelligence (yes!), calling up rules, procedure checklists, odds calculation & combat resolution, the list goes on and on methinks.

Grognards with sufficient disposable income - rejoice! At last you can (for example) play Operation Barbarossa at regiment level - and retain your sanity!


These are just gen-1 issues. I expect it will keep getting lighter until it's basically a somewhat heavier/bulkier pair of glasses.

For the common user it will be amazing for cooking (telling you what to take next, from where, and in what order to mix it, all with arrows on the screen). Or say you want to leave your home: it knows you forgot your keys and tells you where they are, with directions on the screen like the objective marker in a game.

It will know where things are in your home even if you don't pay attention to them. It will have object recognition in place, and you will be able to say "Hey Siri, where did I leave my glasses?" and it will point you to them.


“Bulkier glasses” is a deal breaker for me — LASIK was one of the best quality of life upgrades and I can’t imagine rushing back to that experience for many reasons.

Ambient computing of the style you describe I do think is a common use case and I look forward to less invasive form factors to tackle it.


Do you recall the wave of people breaking their TVs with the Nintendo Wii controllers?

I'd expect a similar wave of people breaking their expensive Vision Pros if they try cooking with one. It's a terrible idea. First, most kitchens are cramped, full of low-hanging cabinets to break your Vision's fragile glass into.

And then, keeping those open vents around vapors full of fat and tasty food bits is a great way to cover the circuits with grease.

We already have a solution for something telling you what to do next, and it's called an iPad with a stand. A phone also does the job and has a much lower chance of incidents than a headset, though yes, you may need to wash your hands from time to time to scroll down. Or... you can simply use assistive features and voice for that. Siri is going to get a lot smarter thanks to LLMs, much sooner than Vision Pro will become light and pragmatic enough for such purposes.

Regarding this "it'll know where things are in your home", let's use basic logic here. It can learn the layout of your rooms and where your immovable furniture is. But no, it can't know where everything that moves is, because that means you literally can't move anything unless you have the headset on to track its location. Or slap expensive AirTags on every single jar and utensil, maybe. All solutions would be hilariously impractical. And... we end up right where I started: a broken Vision Pro glass as you slam it into a cupboard while trying to fish out a jar of condiments.

I don't know what it is about VR that makes people pull out the fantasy scenarios. It's simply a (bulky) screen with pass-through. It's not a wizard. It can't know things unless there's a way for it to find them out.

We can imagine a super-thin model that you can keep on your face 24/7 and sleep with it too, so it tracks your entire life forever and knows you better than you know yourself. And it synchronizes with your spouse and children who also wear their own headsets 24/7. And it's unbreakable. And the battery never runs out. We can imagine many things. They don't exist, and won't exist any time soon. "Not on the horizon" as Steve Jobs used to say.


>To code, for example, you need to hook a bluetooth keyboard and mouse.

I have a hunch that coding as we know it is going to look very different in ~5-10 years.


Maybe, but not in terms of needing code.

You see, the formal languages that encode the formal rules of a system's constraints pre-date computers by centuries. Think of math proofs, for example. Sure, we can encode symbols as emojis, or geometric figures, or whatever. But in the end, it's sequences of symbols; that's the nature of it. And tapping symbols one by one with a headset will suck, no matter how programming looks.

Such formal languages in fact pre-date our species too. Think about what DNA is. Oh yeah, spooky, isn't it. A sequence of symbols (GTCA) encoding a sequence of more complex symbols (proteins). Spooky! But yes, DNA is our code. And it works the same way as our programming code.

Now I know where you're going: LLMs. Let's assume an LLM writes the code for you. You still have to read it, which you can do fine with a headset (if it's not as enclosing, heavy, and short on battery life as Vision Pro v1). But if you spot something's off, you need to adjust it. Go directly for the kill and make that surgical series of edits. You know? Or... maybe you can spend the rest of the day hopelessly trying to explain to Siri, 2030 edition, what you want to do, instead of going in and doing it yourself, for that "last mile".

Because if AI can do the last mile itself, to the point where you don't even need to verify it... first, that's the fast track to AI shipping code we don't understand and basically handing our entire civilization over to it. And second... we won't need to code, but we also won't need to exist, and therefore won't need headsets.

So in the worldlines where we DO exist... Vision Pro sucks for coding, because it's a shitty human interface to editing code.

And in the worldlines where we DO NOT exist... Vision Pro sucks for coding, because AI doesn't need headsets.


You do realize you can use a keyboard with this? And that the VP can plug in for infinite power?


I realize that, yes. And do you realize that if you're using a keyboard and mouse you may as well not literally wear a computer *on your head*? Are you aware of displays? They can support themselves. On desks. Or wall mounts. Compared to Vision Pro it feels like magic. Self-supporting displays. It's the future. Everything is about to change when people learn about it.


And do you realize that by taking the displays off of your desk you can have more of them? Oh, and at any size and any location you want. You can continue using the input devices you like, but now have as many displays as you want and can take them with you easily.

You assume you will hate it, maybe you will. But maybe the future won’t involve dedicated furniture to put things on and cables connecting them. Maybe you’ll be able to work wherever you want and with the same amount of productivity. Or maybe even better productivity!

But you’re probably right, the future will never get any better, this device is pointless and will never lead to better versions of itself or point to other ways of working. Thank you, there’s no telling where we might end up without true believers of the status quo like yourself.


[flagged]


We've banned this account for repeatedly breaking the site guidelines and ignoring our request to stop. Not cool.

If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.


I admire the commitment to your opinions lol. Never really sure what motivates someone to tell the world what they aren't interested in using. My snark was aimed at highlighting the weirdness of how important you thought it was for us to know what you hate.

It's almost as though you're explaining why current and previous VR goggles haven't done very well. David Smith, among others, has had some experience with the device and is really excited. That excitement also extends to several VR-specific reporters.

I have yet to hear any complaints about screen sharpness. The most upvoted comment is from someone who worked on it, and he also thinks the screen res isn't a problem. Foveated rendering is probably the reason it looks good. That same guy also said that the active cooling prevents it from getting hot on the face. We'll see how it works IRL.

Some have complained about the weight. I'm not all that concerned since this is the first version. I'm excited by the potential, especially since developers are excited about it.


The people who have actually used the VP have said text rendering is excellent.


[flagged]


You keep illustrating that you didn't view the keynote, or any of the follow-up videos, and are just judging based on your priors.

Clearly you haven't seen the VP in person, or you wouldn't be making such assumptions. I'm not, I'm trusting people who have used the device in person, people who have said the text is very sharp and easy to read. People with much better credentials than a random on HN.

And if you had done even the most minimal research, you'd know that the VP has fans that eliminate heat build-up and fogging.

But hey, you've repeated ad nauseam that you don't like the VP and think it won't succeed. Everyone's entitled to their own opinions, but not their own facts. You remind me of how ESR was continually predicting the failure of the iPhone until he went quiet on it a few years ago. Same circular reasoning about how it won't succeed because reasons... Yet the people who have seen and used the VP differ greatly with you.

Now the VP may not be for everyone, and some elements of Apple's marketing are a bit cringe, but people said the same thing about AirPods.


> But if you spot something's off, you need to adjust it. Go directly for the kill, and make that surgical series of edits.

LLMs are still a really immature technology. The hype is about where it could go in future, not necessarily where it is now.

Think about when compilers were an immature technology, and the science of parsing and optimization was not well understood. You could make the exact same argument you're making now about the need to edit assembly or machine code by hand when the compiler doesn't get it right.

It was indeed common practice to do this well into the 1980s. That, and inline assembly, are increasingly unnecessary now.


None of what I said is restricted to the current state of LLMs. I was speaking very broadly about the nature of AI in our world, and going back to the creation of DNA...

