
No, it's not a "share invite codes" situation -- no one has codes to give out. You have to add yourself to the waitlist and wait for the company to send a code.


It's obviously a CEO selling his own product, so not much to comment on. There is definitely a place for both headsets. The main part I laugh at, though, is him complaining about slight motion blur in the Vision Pro when moving your head quickly, when the Quest 3 has the most insane picture warping ever in passthrough mode around the hands and edges of the screen.


It’s a shame he isn’t in a position to speak honestly. Seems like the Vision Pro is a massively better product, but the price tag reflects it.

Though being a semi-open platform is probably a huge benefit for the Quest. If you want to play games, the Vision Pro is pretty useless and probably always will be, given Apple's track record with gaming.


I'm a VR/AR doubter.

Neither of these products will change the world, and both will probably wither on the vine. Most people can't afford this stuff and it doesn't actually solve a problem. It more tickles a desire for novelty. I'm not saying they won't sell a lot of this stuff, I just don't see it ever being a mainstream item.

I could be wrong, but Zuck is just trying to save his ~billions he's wasted on the metaverse.


The first question would be: do you have a VR headset with 10 games / programs in it?

If not, then you are not a doubter. You can have doubts and express them, but if you have not tried a VR headset for several days, I would say such an opinion is not worth much.

I use the Quest often just to work out. I am lazy and I do not want to go to the gym. I start Pistol Whip and after a few minutes I feel much better. It can be a demanding game for my spine.

Experiences in VR are not experiences that can be had on a flat screen. It is one thing to move a player character in a game, and another to climb a nearly lifelike skyscraper. It is one thing to play Diablo, and another to be surprised when a skeleton attacks you from behind.


> do you have a VR headset with 10 games / programs in it?

Many people simply have no desire for such a thing. And 10 games? Who has time for that?


> Who has time for that?

Is this LinkedIn? Who doesn't have time for that? We are talking <10 hours.


You could say the same thing about any activity, including eating good food. Being uninterested is different from being a "doubter", the latter implies your belief that there's no market for it.

Headsets need to slim down, investments need to be made to produce more high quality virtual experiences, and developers need to learn how to utilize the new medium to its fullest extent. Most of them are still designing VR experiences like they're flat screen games instead of truly taking advantage of interactive worlds. Displaying a floating 2D interface inside a 3D world ought to be a sin.

If those things happen, I believe the masses will jump on board. It has the potential to be a complete paradigm shift for gaming and entertainment from merely observing an experience (and interacting with it in a limited fashion through a controller), to actively participating in a virtual environment using your entire body and most of your senses.

It's one thing to watch a video of someone playing VR games and a whole other thing to put the headset on and see the virtual world for yourself, with life-like depth perception and intuitive interaction. Racing games are naturally suited to this with a headset, steering wheel, and pedals.


Being uninterested is not a bad argument against a product, it's a good one.


>such opinion would not be worth much

Not thinking a device is even worth trying, let alone buying (plus 10 games), is worth quite a lot in supporting the argument that these devices won't have mass appeal or success. How could it not be?

It's sort of comical that people "completely rejecting the device out of hand" is "not worth much" and somehow a point in your favor. The more people who feel this way, the less successful the device will be, obviously.


> how could it not be?

You're out of touch and confidently betting against Apple -- what's the last product they launched that didn't end up gaining mass market popularity?

Steve Ballmer laughed at the iPhone, too. Not so much these days :)


This is a non-sequitur.

My point was, the more people that don't even have a desire to try VR, the less likely that VR will succeed. Pre-release demand for iPhone was huge. Let's say 100% of people don't even try VR because they have no interest, while in this hypothetical the product is amazing and 100% of people would like if they tried it. The product would still fail.

The point here is "not wanting to try the product" is not a bad argument against it, rather it's more of a proxy for "this is a product solving a problem people don't have" i.e. it reflects low demand. All things being equal, low interest/demand to try it is worse than if there was high demand to try it.

>out of touch

Apple can make the best version of VR, better than everyone else, but that's no guarantee of success or that the product class itself will have mass appeal. It's not Apple's fault if there's no there there.


> The point here is "not wanting to try the product" is not a bad argument against it, rather it's more of a proxy for "this is a product solving a problem people don't have" i.e. it reflects low demand.

Now that is a non-sequitur. I agree, but that has nothing to do with my point.

Your lack of interest is a useful data point, but you're just not qualified to comment on the product's mass market appeal. VR is already selling millions of headsets annually while it's still in its infancy. That's within an order of magnitude of game consoles like Xbox and PlayStation.

> Apple can make the best version of VR, better than everyone else, but that's no guarantee of success or that the product class itself will have mass appeal. It's not Apple's fault if there's no there there.

VR is already moderately successful and with a giant like Apple entering the arena, they will most likely jump start the cycle of ever increasing investment.

If you were following the VR space at all, you'd know that one of the most common complaints that people have is that there simply aren't enough high quality games to play. On the other end, game studios can't afford to invest too much into development because the market isn't large enough to sustain the investment required.

Do you really not see how this could easily be resolved when you have a company like Apple entering the space?

They'll create high quality experiences which will lead to more headset sales, which will make outside investment more economical, which will attract new headset sales and so on.

Your outright dismissal of VR's existing success and lack of understanding of problems that are currently holding back the VR market is why I said that you're out of touch. Apple has more than enough resources to make all of this happen.


I don't think there's been much success at least in comparison to prior computing device forms.

This[0][1] is going to constrain demand and popularity for some time, and has so far, along with the price given the physical discomfort (paying a lot to be uncomfortable), with a worse productivity UX compared to keyboards, mice, and hiDPI monitors.

>one of the most common complaints that people have is that there simply aren't enough high quality games to play

I could have written these exact words as an argument against the entire product class. This self-serving way to frame it sounds similar to stating that customers, by their actions, are saying: "There are not enough compelling reasons for me to buy this product." Whether from lack of games or any other feature.

But it hand-waves this fact (that the product does not have enough usable features/content/compelling use cases to produce demand) by assuming game companies or device makers could make these things but have just chosen not to. They should just do it then, since it would sell according to you. Perhaps they have more insight than you? i.e. they're doing what they can, which isn't much. And the market says no thanks.

It's quite obviously possible that the form factor itself is not conducive to producing high quality games that customers are satisfied with or demand in numbers close to other gaming form factors, for the exorbitant cost (both to the device makers and end user).

[0] https://www.the-sun.com/tech/10400743/apple-vision-pro-retur...

[1] https://www.jorgeherskovic.net/apple-vision-nope/

  "It’s uncomfortable for me
  
  In the end, this is what ended up killing it above everything else for me. I can’t wear this thing for extended periods of time.
  This renders every other consideration moot, because if I can’t put it on my face I can’t use it."


Do you have an opinion on the future of consumer computing? Do you think we'll be using laptops in 2064? Genuinely curious.


I still write with pens and pencils, I am confident that typing will hang onto keyboards and laptops will be more powerful, but pretty similar to what they are now.


I write more often on a "onyx boox" to take notes, and read.

I do not think that everybody still uses pens and pencils that often.


iPad is about 30 billion in revenue each year.

For a device that even now many people would question what problem it solves.


The iPad solves a lot of very obvious problems. For drawing it’s one of the best value products around, for PoS and restaurant ordering software it’s the best option, and for media consumption it’s a pretty good option.


I don't doubt that they will sell. I just don't think it's the future the hype is making out.


A big portion, maybe even a majority, of my fellow students use an iPad.


The iPad is a game console for most people. As such it works great.


Study finds that developers produce code likely to be buggy.


Only 20% time in meetings? That's nothing if he worked at Amazon unless he was a junior.


The breakdown definitely reads SDE1-2


I keep getting `No module named 'ldm'` after I run `python scripts/dream.py --full_precision`. I've confirmed 'ldm' is active in conda. Any idea?


I get the same thing too!


Try putting `sys.path.insert(0, os.getcwd())` after `import sys` in dream.py.

That fixed it for me.
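For anyone else hitting this: the error just means Python can't find the repo's `ldm/` package on its module search path when the script is run from `scripts/`. A minimal sketch of what the top of dream.py looks like with the fix applied (the `ldm` import line is illustrative; your copy may differ):

```python
import sys
import os

# Make the current working directory (the repo root, if you run the
# script from there) importable, so `import ldm` resolves to the
# checkout's ldm/ package instead of failing.
sys.path.insert(0, os.getcwd())

# Hypothetical example of the import that was failing before the fix:
# from ldm.generate import Generate
```

Note this assumes you launch the script from the repository root (i.e. `python scripts/dream.py ...`, not `cd scripts && python dream.py ...`), since it is the working directory, not the script's directory, that gets added to the path.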


One thing I don't quite understand is that many cars have autopilot-like software. Unlike Tesla's, which constantly requires putting pressure on the wheel to show you're there and paying attention, Ford's lets you drive indefinitely without any hands on the wheel. Wouldn't this same investigation be applied to other manufacturers as a giant audit? Hitting emergency vehicles is obviously bad, but 1) it happens in cars without autopilot and 2) if you're hitting one, you're clearly not paying attention. It's not like they just appear out of nowhere.


What it comes down to for me is the marketing. The other manufacturers are very careful in how they market the software, Tesla is not.

If you look at Mercedes, for example, their marketing page describing their driver assistance technology[0] (with a very similar feature set to Tesla's) uses the word "assist" more than 30 times and in practically every header. Few people would come away from that marketing thinking that their car is going to drive itself without them paying attention.

Tesla, in contrast, advertises their "autopilot" and "full self driving" capabilities. The word "assist" is used exactly once on the Autopilot landing page[1]. The rest of the words and names are carefully chosen to convey a sense of total autonomy.

[0] https://www.mercedes-benz.com/en/innovation/autonomous/the-n...

[1] https://www.tesla.com/autopilot


Tesla has also used this wording to advertise their purported self-driving features since 2016[1]:

> The person in the driver’s seat is only there for legal reasons. He is not doing anything. The car is driving itself.

[1] https://www.tesla.com/videos/autopilot-self-driving-hardware...


I find it really interesting that for Mercedes L3 self driving they are even willing to put their money where their mouth is and take liability for the car in self driving mode.


Read the fine print though.

They do disable the system in complex situations.

And they give ten seconds warning in advance of this happening.

Which sounds great until you think it through…

It means they schedule the disabling of the system way earlier in the progression of any potential traffic complication. Ten full seconds is long before incidents even start unfolding. The car has to be psychic, but it's not. So in order to accomplish the ten-second warning, what that means is those warnings need to be on a hair trigger with many false positives.

The driver, if they have it enabled, will be constantly getting warnings that the system may disable itself in ten seconds when the car sees even the very slightest possibility that things may potentially get complicated way down the road. I don't see this being endurable for most people.

So my conclusion is this is just marketing hype, until they can get rid of the auto disabling thing.


I haven’t read the fine print but you can have ten seconds of warning without predicting complications ten seconds in advance.

That 10 seconds simply means that the car must handle any situation safely for at least that amount of time. In practice, this means decisions such as emergency braking and steering must be autonomous, but possibly more complex scenarios (e.g. moving for an emergency vehicle on a tight street) can be delegated to a human (or the system can simply pull over and stop).

They disable the system in cases of rain but at least from the marketing and videos demonstrating it, there’s no prediction of complex scenarios like you described.


No, you are not understanding what I explained. There is a moving horizon of time during which the ten seconds is continually extended, and the system needs to draw the line somewhere. When does the final ten seconds before cutoff begin? Your examples don't explain this. This is not easy stuff to solve without some compromises.


If it’s limited on what roads it can run on the 10 second window makes perfect sense. Just enable when you’re entering an unsupported roadway.


> And they give ten seconds warning in advance of this happening.

which is infinitely better than the <1 second that Tesla gives you.

> those warnings need to be on a hair trigger with many false positives.

unless it is actually reasonably good at tracking threats and traffic. Look, if you have multiple sensors of different types, then you are able to take firm action when things turn south quicker than a human.

A lot of Tesla's problems are because they have shit sensors and a stupid approach to designing the system (no lidar, no radar, no multiview cameras, no high-res GPS or maps). They also have an overwhelming pressure to just yolo shit, rather than test and redesign.


>which is infinitely better than the <1 second that tesla gives you.

Not at all. With Tesla you are constantly paying attention, or should be to the same level that you are with any other car.

Meantime it is actively intervening to keep things safer with better follow distance, collision avoidance, lane departure detection, and automatic braking. Most scenarios leading toward accidents are entirely avoided due to all this.

So the <1 second scenarios (which are not talking about warnings, but about when the system was in a complete outlier situation it does not know how to deal with and disabled itself) are very unusual things.

Like "a Prius driver drove off a bridge and is now landing on our hood." Of course the system will disable itself in that situation; what else would you expect? What sensors do you suggest for that?


> With Tesla you are constantly paying attention, or should be to the same level that you are with any other car.

you should be paying attention, but you are not. Human attention is a difficult thing. From what I recall, while driving it takes about 7-14 seconds to regain situational awareness. This means that 1 second isn't enough.

> Like "a Prius driver drove off a bridge and is now landing on our hood." Of course the system will disable itself in that situation;

I'd expect that the system would slam on the brakes, not disengage to avoid liability. That's the point here: it's not about tech, it's about legality. That's the worst part; the entire system appears to be designed to stop Tesla being taken to court.


That’s a paranoid and particularly uncharitable view which ignores the perfectly valid reasons that it’s best the system behaves as it does.

The human has responsibility for the safe operation of the vehicle, whether they step up and fulfill it or not. If they don’t fulfill their responsibility, all bets are off.

There is no other (equally good or better) way this could work in practice. You can imagine other ways, and I’m guessing you will, but they are imaginary, not practical.

> I'd expect that the system would slam on the brakes

It can and it does, even while the autopilot system is disabled. Who said it wouldn't?

You should learn more about the cars before hardening your opinions so much.


> paranoid

I doubt it's paranoid. I am not worried about it, just rather annoyed that driver aids are being marketed as something they are not, cheapening an entire industry.

> There is no other (equally good or better) way this could work in practice.

We are literally discussing a company that has another way.

> It can and it does,

https://www.youtube.com/watch?v=45XMhMzMDZY

suggests otherwise.

>You should learn more about the cars before hardening your opinions so much.

I work in machine perception; this is my bread and butter. Moreover, I have worked with life-critical infra, and I know corner-cutting when I see it.


I think the clever thing about this is that it breeds a culture of responsibility at Mercedes - if you are programming the self driving, you will be more considerate of how it works because your company (and perhaps ultimately you) bear that responsibility. The consumer gets confidence in the product, and Mercedes holds itself accountable because it doesn’t want to ship a product that will endanger people.


Smart. This will be the move that triggers widespread adoption. Cheaper insurance.


It makes sense as long as it is failproof. But is their tech better than Tesla's? I assume it is not and they will have to severely limit the capabilities of the system. Maybe it will work in traffic jams, simple routes.


Why are you assuming it isn't better -- just because of Musk's relentless promotion and exaggerated claims?


The page literally talks about "full self-driving capabilities [...] through software updates designed to improve functionality over time". Which to me sounds like FSD is not actually being advertised as currently existing, as opposed to something you'll eventually get.


Yes the actual finished FSD software release does not exist and Tesla does not claim it exists. There is a SKU you can pay up front for (in other words, pre-pay for) called FSD, so the SKU exists, but it does not yet exist as a released, finished software release.

A lot of people can't get their head around this level of subtlety. Those people probably should not have too much confidence in their take on this, but they do. Dunning-Kruger effect. It seems like you do get it.

Just for completeness to soothe anyone triggered by details missing here, certain features of the current beta version are enabled for drivers who have purchased the SKU. And there is also a full beta release that is available to some drivers who opt in. All of this does not mean FSD exists yet as a public non-beta software release.


> A lot of people can't get their head around this level of subtlety.

Have you considered that this might be intentional on Tesla's part?

In my original post I never said anything about full-self-driving being a real thing, I simply said that their marketing uses that term a lot and leans heavily on those capabilities. I'm well aware of the distinction, but I'm also aware that Tesla seems to intentionally cultivate the ambiguity as to what exactly FSD means. It's not fair of you to cast blame on those who get confused when the company makes the line very, very blurry in their marketing.

EDIT: I'm also not accusing Tesla of outright lying. They can "not claim it exists" while still making sure plenty of people miss that "subtlety".


Yes, I've considered the possibility of it being intentional on Tesla's part. It's impossible not to consider it with all the conspiracy theorists and short sellers repeatedly bringing it up on Hacker News and elsewhere.

However, I don't buy into the theory, for the reason that Tesla's proactive, frequent, and aggressive informational notices clarifying the point are impossible to miss when you are in the car, and that goes completely counter to the theory.

And their informational messages are also there prior to that during the buying experience.

And prior to that in the marketing.

Why would Tesla do all these reminders about the human needing to maintain oversight and control, if they are trying to trick you into thinking the opposite?

Haters latch on to the marketing as if it's the only thing, and ignore the caveats in the marketing, and ignore the informational messages during the buying process, and ignore the in-car information and the in-car active measures the car takes to make sure you are paying attention.

Haters dismiss all that and pretend it does not exist.

To what end, I'm not sure, I think it is to stay in a comfort zone regarding a delusion they have about their opinion being the right one. Confirmation bias doing its job.

I do appreciate you bringing up the point. It's a fascinating phenomenon.


You're assuming that FSD is a software problem, and that there is any realistic chance that the feature will be available in the lifetime of any of the cars they sold including it.

Both of these beliefs seem unwarranted: FSD as described in the advertising is most likely much more than 5 years away, and will almost certainly require LIDAR to achieve any kind of safety.

People pay for a pre-order based on the promise that the item will be delivered. If I pre-order a game and it is later cancelled, or even delayed for many years, I will get my money back, I won't just be told "well, you knew it wasn't ready at the time".


I don't think I said anything about hardware or whether FSD will ever be delivered in the lifetime of the current fleet.

Myself, I am skeptical of Elon's timelines. But I also understand they are not his promises, they are just his (foolish, imho) expectations.

He admitted he vastly underestimated the problem… but what he might not admit is continuing to do so. I think he continues to underestimate it.

On the other hand, the power of compounded returns of improvements over time is counterintuitive, and he probably understands that better than most of us. Maybe he used that understanding to get overconfident, or maybe it's still beyond reach. We really don't know. It's still possible that at some point, his team might just crack it. Not just with vision, though. They will need a world model for things like predicting the behavior of a group of children occluded by a bus near a crosswalk.

The money back thing is another question. I hope Tesla offers money back to ease the experience of those who are bitter, but I probably won't take it back myself, because I don't mind supporting the effort even though it seems like the results are far away. I don't think money back was an option previously. As a matter of company survival (which in Elon's mind equates to humanity's survival, take it or leave it, but suffice it to say he doesn't treat it as a normal throwaway company) Tesla just didn't have the money. Now, they probably do.


> I don't think I said anything about hardware or whether FSD will ever be delivered in the lifetime of the current fleet.

You said "Yes the actual finished FSD software release does not exist and Tesla does not claim it exists." (emphasis mine).

Even if you didn't say it, the whole false advertising investigation revolves around the difference between FSD being a software or a hardware problem. Tesla marketing and Musk personally have stated clearly (at least in the past) that all cars sold with the FSD option are FSD ready on the hardware side, and that FSD will be delivered as an over-the-air software upgrade to all of them once it's ready.

If they can indeed enable (working) FSD without a hardware upgrade, then they have not lied (even if the timelines they suggested were wildly optimistic). If they in fact need hardware upgrades to support FSD on the cars sold with this option, then they have lied in their advertising, and people who bought this are entitled either to a refund or to a free upgrade when the feature is available.


> and will almost certainly require LIDAR to achieve any kind of safety.

Is there a physical reason for that? We know that humans do just fine with just ~8cm of stereoscopic separation, and cars, for example, have the potential for significantly greater stereoscopic separation.


Not a physical reason, no, but an AI one.

Humans and most other animals don't rely solely on stereoscopic vision to navigate the world, we rely on a model of the world where we recognize objects in the image we perceive, know their real size from experience, and use that as well as stereoscopic hints to approximate distances and speeds. We additionally use our understanding of basic physics to assist - we distinguish between an object and its shadow, we can tell the approximate weight of something by the way it moves in the wind (to know if we need to avoid an obstacle on the road), and there are other hints we take into account.

We also take into account our knowledge of the likely behavior of these objects to judge relative speeds (e.g. the car is moving away, it's not the tree coming closer).

Without this crucial aspect of object recognition and experience about the world, our vision is actually very bad at navigation. If you put us in an artificial environment with, say, pure geometric shapes at various distances, no/fake shadows, objects with non-realistic proportions and so on, we will have much more trouble navigating and not bumping into things, even at walking speeds. And this is the level the AI is currently operating at, more or less.

And if you don't believe me, note that humans with one eye, while having impaired depth perception, are still perfectly able to drive safely, with ~0 physical mechanisms for measuring distance (I believe the spherical shape of the iris may still give some very subtle hints about distance as you move your eye around, but that is minimal compared to stereoscopic vision). A LOT of our depth perception is just 2D image + object recognition + knowledge about those objects.


While all of this may be true, it doesn't explain why stereoscopic vision wouldn't work where a LIDAR would. Both provide identical geometric information, and neither has anything to do with AI. Neither tells you the approximate weight of things, nor judges from experience how things might move in the future depending on their type (tree vs. car), or anything like that. And if you swap one system providing geometric information for another that provides identical information, I don't see how this makes the cognition of any AI later in the pipeline magically any better, no matter how good or bad that AI was previously.

However, one benefit that long baseline stereoscopic vision (for example with cameras in corners of the front windscreen) would have compared to a short baseline stereoscopic vision (a human) or a point measurement (LIDAR) that could be relevant for safety would be the ability to somewhat peek around the vehicle in front of you from either side. Admittedly, this may overall be a small-ish benefit relative to a LIDAR but it does provide strictly more information (slightly) than a LIDAR would.


Well, LIDAR uses very well understood physics to give you precise measurements of distance from the world around you, without any need for object recognition. It is not enough on its own, but it is an excellent safety technology. It's basically impossible to run into an object that's moving slow enough to avoid based on LIDAR input.

Stereoscopic vision first relies on object recognition of the elements of the pictures taken by each camera, then identifying the objects that are the same between the pictures, and only THEN do you get to do the simple physical calculation to compute distance. If your object recognition algorithm fails to recognize an object in one of the images; or if the higher-level AI fails to recognize that something is the same object in the two pictures, then the stereoscopy buys you nothing and you end up running into a bicycle rider crossing the street unsafely.

LIDAR does have limitations of its own (for example, it can't work in snowy conditions, since it will detect the snow flakes; not sure if the same applies to rain), but the regimes under which it is guaranteed to work are well understood, and the safety promises it can make in those regimes don't rely on ML methods.
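To spell out the "well understood physics": a LIDAR ranges by time of flight, so distance is just the round-trip time of the pulse times the speed of light, divided by two. A toy sketch (the 200 ns figure is an invented example, not a spec of any real unit):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_distance(round_trip_s: float) -> float:
    """Distance to a reflector, given the pulse's round-trip time.

    The pulse travels out and back, so the one-way distance is
    half of (speed of light * elapsed time).
    """
    return C * round_trip_s / 2.0

# A return pulse arriving ~200 nanoseconds after emission puts the
# object roughly 30 m away.
print(round(lidar_distance(200e-9), 2))  # 29.98
```

No object recognition enters that calculation anywhere, which is the point being made above: the depth measurement is direct, not inferred.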


> Well, LIDAR uses very well understood physics to give you precise measurements of distance from the world around you, without any need for object recognition. It is not enough on its own, but it is an excellent safety technology. It's basically impossible to run into an object that's moving slow enough to avoid based on LIDAR input.

Again, claiming that LIDARs make things magically safer sounds like a lot of snake oil to me. Both LIDARs and stereoscopic systems use well-understood physics. Stereoscopic rangefinders were being used in both World Wars for gun-laying and you wouldn't say that you don't need precise measurements for gun-laying.

> Stereoscopic vision first relies on object recognition of the elements of the pictures taken by each camera, then identifying the objects that are the same between the pictures, and only THEN do you get to do the simple physical calculation to compute distance. If your object recognition algorithm fails to recognize an object in one of the images; or if the higher-level AI fails to recognize that something is the same object in the two pictures, then the stereoscopy buys you nothing

As for whether stereoscopic vision relies on object recognition, that seems like a mild stretch to me. Generally it, like for example SfM (of which it is a special case), seems to rely on local textures and features for individual data points -- and in a simple single-dimensional stereoscopic vision case, your set of possible solutions is extremely limited, so matching features from SIFT or SURF in stereoscopic vision is way simpler than even the general SfM case. Those individual data points do not require in any way for individual objects to be recognized and separated. I have NOT seen in my life an SfM solution that would not give you a point cloud if it failed to separate objects -- in fact, SfM software doesn't even try to identify objects when generating a point cloud because it doesn't even operate at such a high level. Note that this actually provides the exact same information as a LIDAR would, namely a point cloud with no insight how the points are related to each other.

Pretty much the only situation where stereoscopic vision or SfM fails to provide depth information is with a surface of highly uniform color completely devoid of textures. Whether this could or couldn't be solved with structured light is an interesting problem.
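For what it's worth, once features are matched across a rectified stereo pair, the depth calculation is the same "well understood physics" as any other triangulation: depth = focal length (in pixels) * baseline / disparity. A toy sketch with made-up numbers (the 1.2 m baseline is the hypothetical windscreen-corner setup mentioned above):

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a matched feature from a rectified stereo camera pair.

    focal_px:     camera focal length, in pixels
    baseline_m:   separation between the two cameras, in meters
    disparity_px: horizontal pixel shift of the feature between images
    """
    return focal_px * baseline_m / disparity_px

# Example: 1000 px focal length, cameras 1.2 m apart, a feature shifted
# 30 px between the two images is 40 m away.
print(stereo_depth(1000.0, 1.2, 30.0))  # 40.0
```

This also illustrates the long-baseline argument from earlier in the thread: at a fixed depth, disparity grows linearly with the baseline, so a wide camera separation gives usable depth resolution much further out than the ~8 cm human baseline does.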


Human stereoscopic vision could also be fooled by specifically designed optical illusions in science museums. We just avoid them when designing roads.


> Unlike tesla's which constantly requires putting pressure on the wheel to show you're there and paying attention, Ford's lets you drive indefinitely without any hands on the wheel

That’s not Ford’s version of Autopilot, it’s one step further. It’s actually hands off (despite how many treat autopilot). Named BlueCruise.

It’s comparable to GM SuperCruise. It ONLY works on specially mapped divided highways that Ford has approved. It will disengage for sharp turns and anything it’s not ready for. You MUST watch the road; it keeps track with a camera on the wheel.

Basically the way it treats the driver is far more conservative. Instead of telling the driver they need to pay attention, it actively monitors them. Instead of saying “you should only use it on these kinds of roads”, it actively prevents you.

It’s a fundamentally different approach. Ford’s ACC (not hands-free, Co-Pilot 360) constantly monitors for steering wheel torque to ensure your hands are on the wheel and disengages pretty quickly if they’re not and you ignore the warning.

That said, I have it on my car. It’s freaky as hell to use, kind of scary. Maybe I would use it on long drives in the country, but I just don’t want something else that’s in charge in even medium traffic.


That is kind of similar to Tesla still though. At least with FSD Beta there is a camera actively monitoring you and making sure your eyes are on the road. I've also been on roads where autopilot either won't activate at all or will activate but at a reduced speed. I was hoping that once Tesla enabled that internal camera they would stop relying on weight on the steering wheel and just use eye tracking.


> Named BlueCruise

Ah, thanks! I thought the dealer was saying "Blue's Clues"


As far as I know there has been an unusually high number of "unusual" accidents associated with Tesla's Autopilot, and there is no such observation for other car manufacturers.

This doesn't mean their system is more advanced; it could actually mean other systems bail out earlier due to being less advanced, and in turn luckily avoid these problems.

Or that they are much, much less used.

But then Tesla is not really known for good QA.

And in the past there have been multiple unrelated tests of emergency braking systems in which Tesla cars failed really hard, behaving worse than many much "simpler", less advanced systems. Sometimes to the point of only braking after hitting the pedestrian... (a mechanized test dummy that the Tesla system, by its own feedback, recognized as a human).

If your most advanced self-driving system can't even compete with emergency braking systems by such a large margin, I would not be surprised if Tesla's system has major faults tbh.


Autopilot has prevented far more accidents than it has caused.


Any citations for this claim?


Yeah I've always hated that statement. How do you even measure that? I've had my Tesla sound the alarms when I'm 4 car lengths away from the car in front of me and nothing bad is happening at all. Do they count that as having prevented an accident? Do they count all those phantom brakes as preventing a crash ;^∀;^)


Ford, like GM, has a literal camera monitoring the driver's face so they can see if they are paying attention. Seeing what your face is doing is a much more reliable system of measuring attention than whether or not the driver is touching the steering wheel while reading their book.


Doesn't Tesla have that too? Not that I trust Elonbois with a camera looking into my car, but hey, maybe someone will offer to buy me a horse...


No. See https://electrek.co/2021/01/20/tiktok-star-criminally-tesla-...

Edit: Sorry, I should have said _mostly no_.

Last year they pushed an update to use the built-in cabin camera installed in 2021+ cars for "driver attentiveness."

Compared to Ford's and Cadillac's IR-illuminated driver-attentiveness cameras, which point straight at your eyeballs, can see through sunglasses, and work in the dark, the wide-angle cabin camera may not even see your face if you wear a hat, certainly _does not_ have a clear view of your pupils, and doesn't work in the dark or if you have dark glasses on.

https://electrek.co/2021/05/27/tesla-releases-driver-monitor...

https://electrek.co/2021/04/08/tesla-driver-monitoring-syste...



Tesla's isn't good enough to replace wheel torque as an attention monitor, and they don't even pretend it is (it still requires wheel torque to keep FSD/Autopilot engaged).

In particular, it has terrible night vision and no IR illumination so it can't see you well enough at night to work.


Some older Model S/X vehicles with Autopilot 2.0 hardware lacked driver-facing cameras, but modern ones, along with all Model 3/Y vehicles, have them.


I don’t think anyone reads Ford's (for example) manual and thinks “oh wow, I don’t even have to pay attention and have my hands on the wheel!”.

The fact that it’s a big enough problem for Tesla that they have to monitor it, and still have issues, points to the main difference being user expectations and marketing around these features.


It's almost as if calling it "autopilot" was wildly irresponsible.


It's a very appropriate name for something that automates the long boring things while not eliminating a human operator for exciting transient things, just like in an airplane or on a ship. What about it was irresponsible? Sounds like a completely typical autopilot to me.


What's irresponsible to me is how Tesla has advertised their Autopilot feature for the last six years[1]:

> The person in the driver’s seat is only there for legal reasons. He is not doing anything. The car is driving itself.

[1] https://www.tesla.com/videos/autopilot-self-driving-hardware...


Possibly, but notice that this is not what I was responding to, which was purely the naming of the feature. So I'm not sure how this segue is relevant? Do you have anything to comment w/r to the name?


This is very clearly labeled as an ad for future software capabilities.


There is no such "clear" label on the video or website. Tesla titled the page "Tesla Self-Driving Demonstration". Tesla's embedded video is titled "Autopilot Full Self-Driving Hardware (Neighborhood Long)" with a video description that says[1]:

> Take a ride in a Tesla with Full Self-Driving Hardware.

That's it. Nowhere is it "clearly labeled as an ad for future software capabilities".

[1] https://vimeo.com/192179727?embedded=true&source=vimeo_logo&...


> …how Tesla has advertised their Autopilot feature…

I don’t see the word autopilot on that page or that video.


Look at the URL[1] and then look at the title of the video, which is "Autopilot Full Self-Driving Hardware (Neighborhood Long)"[2].

[1] https://www.tesla.com/videos/autopilot-self-driving-hardware...

[2] https://vimeo.com/192179727?embedded=true&source=vimeo_logo&...


Fair points, but it doesn't invalidate what I said: I don’t see the word autopilot on that page or that video. And it's not on that page, and it wasn't in the video. The VIMEO video title doesn't appear on embeds.


It's a very appropriate name for people who understand aviation, but I've heard A LOT of people over the years say stuff to the effect of "the planes these days can fly themselves, they don't actually need pilots they're just in there due to the unions"


If the argument was "Tesla shouldn't use the Autopilot name because some people are grossly misinformed about aspects of the aviation industry" then I'd have some sympathy for it. I'd still consider it a poor argument, but at least it's one I could comprehend and respect.


Not just some people but many people, and not just some aspects of the aviation industry but precisely those which are being referenced.


No. Tesla should not use that term because of what that term means everywhere outside of aviation industry. Aviation industry is one small niche.

The term autopilot implies autonomous driving everywhere outside of that one small subgroup of people.


"Everywhere outside of aviation industry"? I'm sorry if this sounds harsh, but I can't avoid a strong feeling that you've just made that up. Where are all the different non-aviation autopilots in the real world we live in that "imply autonomous driving" when that term gets applied to cars? Are there bicycle autopilots? Crane autopilots? Bulldozer autopilots? All of them operating devices fully autonomously with no operator oversight? Because I can't for the life of me recall a real-world usage of the word "autopilot" that could give people the misconception you're alluding to. If there's a wide range of different real-world types of autopilots that almost all require no human attention except for airplane autopilots which do require it, all of them in such widespread use that they justify some people's notion that systems called "autopilots" are operator-free, then I must have completely missed that somehow. Please give me some examples of those different systems if they do exist.


Yes, the word autopilot is used and understood by people outside of aviation to mean fully autonomous driving.

Just like spaceship is used for a vessel able to travel through space, and teleport is used for moving from one place to another in the blink of an eye. They don't need to exist.


> the word autopilot is used and understood by people outside of aviation to mean fully autonomous driving

...is a completely different statement from...

> because of what that term means everywhere outside of aviation industry. Aviation industry is one small niche.

The former implies that the common "understanding" by people who know little of consequence about the field is actually a misconception, whereas the latter insinuates that there do exist separate, valid, non-aviation usages of that word (of which you haven't provided any) which would justify such expectations about Tesla's product.

> They don't need to exist.

Except autopilots actually exist and they partially automate the boring parts of operation of means of transportation such as airplanes and ships. To argue with a fictional notion of a fully autonomous system against an actually existing device is like saying that you don't care what words mean and that you can reuse established terms to describe whatever you want, even if they already mean something very different from what you want them to mean. This is just like for example the East-German reinvention of the word "Aktivist". You're basically peddling Newspeak by saying that autopilot doesn't mean what it actually means.


> The term autopilot implies autonomous driving everywhere outside of that one small subgroup of people.

And with that you've disproven your own argument.

To the extent that "Autopilot" is used to refer to driving technology, it is as the brand name for Tesla's version of such. The name literally cannot be the causal factor as to why Tesla was supposedly wrong to pick that name, if it wasn't associated with autonomous driving prior to Tesla's use of it. Your argument is circular and thus invalid.


When drivers receive as much training as pilots, then it will be reasonable to draw comparisons between autopilot for planes, and autopilot for cars.


How is this relevant? Autopilots for airplanes don't eliminate human operators. Autopilots for ships don't eliminate human operators. Tesla's autopilot doesn't eliminate human operators either. On basis of that, I would deem the naming as very appropriate, the level of training of the respective required operators notwithstanding.


Interesting idea. Pilots require 40 hours minimum to take the certification test, but most need maybe 60 or so. Pretty sure I had a lot more than that for driving. In addition to the state-mandated driving instruction which was probably around 8 hours total (some of it "watching" other student drivers) I'm sure I had a few hundred hours of supervised driving under my belt before taking the test. Outlier? Maybe, but I can't imagine less than the 40-60 hours required for prospective pilots.

Also, I don't think student pilots typically train on autopilots. Those kinds of technology components are usually learned either on one's own or with an instructor/co-pilot after completing basic training. The basic pilot training focuses on safe pilotage from takeoff to landing, including land nav and radio communications, and ignores pretty much all modern technology.


That argument makes no sense. Generally speaking, tools are named based on the capability of the tool. We don't make a rule of changing the names of tools based on the predicted skill of the operator.


Ford's system has a lot more limitations. AFAIK, Tesla is the only company with non-geofenced FSD.


You say “limitation” but I hear “safety feature”. Best to put limits on dynamic systems that aren’t well understood.


Maybe their systems are safe and Tesla's isn't?

I don't know anything about other manufacturer's systems but I've seen video of Tesla's on autopilot doing unsafe things and read a lot of anecdotes of this as well. Elon Musk has made ridiculously optimistic statements about when Teslas will be self-driving and that by itself can influence people's behavior - perhaps fatally.

Edit: also, Tesla removed Lidar from their system. And there have been well-publicized deaths of people using autopilot.


Tesla has not removed lidar from their system. You are mixing things up: they removed radar. And most of the cases in the report are about systems with radar.


Tesla never had lidar lol


Parent didn’t say they did.


Right

> radar

Radar is actually short for "RAdio Detection And Ranging"

And Lidar is short for "LIght Detection And Ranging"


I'm similarly confused. And with the shade Biden has been throwing at Tesla I'm concerned it may be politically motivated.


[flagged]


Seriously? If money is power, Musk _is_ the ruling class.


Your premise is not necessarily always true, though. There are other kinds of power, such as political power, that can only be partially exchanged for economic power. It's pretty clear the mainline political parties dislike Musk, as he's a wild card.


"The billionaire is being oppressed!"


If we are in a capitalist economy, Musk, as a member of the haute bourgeoisie, is the ruling class.


> If we are in a capitalist economy

We aren't, and haven't been for some time.


Nope, look at the constant strife between him, the California government, and the federal government. The Biden administration doesn’t even acknowledge that Tesla has any significance in the electric car industry.

Money is certainly helpful to get power, but it is definitely not sufficient nor completely necessary.


Who's the ruling class?


Talk about a clickbait title... This is nowhere even close to cheating. It's like saying any test you've ever prepped for in the past was cheating.


Companies that are rescinding offers are just terribly managed companies. If you're in that bad of shape, then you should have done a hiring freeze months ago and there shouldn't have been any offers left to rescind (sure, there are odd ones where maybe they accepted 3 months early).


In addition to bad management, the contrast of the CEO buying a $133 million compound in Bel Air while throwing employees out on the street is just too stark.

https://therealdeal.com/la/2022/01/03/pawson-designed-bel-ai...


Not many companies expect or plan for their valuation to drop 75% in 6 months, that's gonna lead to tough choices. In the case of coinbase I'm guessing they're also seeing trade volume (and therefore future revenue) plummet and don't project it to increase anytime soon. Not sure this is evidence of a horribly managed company so much as a company dealing with a crisis and doing costly things to their reputation to manage it.


I mean, if you're in the crypto business you should have more foresight than that. It's literally the definition of a volatile market


Crypto has been that volatile for years. If they didn't see that coming, they have their heads up their asses.


3 months ago the stock market was at or near an all time high, I don't see how they could anticipate the current economic downturn.


It’s June already, so no, it wasn’t. The writing was on the wall in November, and the Fed had clearly telegraphed their intent that this was going to happen.


"No one could have predicted a thing that happens regularly in the current economic system"!


Came here to say this... The book animates behind all the other ones. I was like "...shouldn't the book be in front of the others so I can see the title? you know... I bet it's because it's Safari". Sure enough Chrome works as expected


Is this an early April fools joke?

