Text Messages Between Travis Kalanick and Anthony Levandowski (ieee.org)
164 points by edshiro on Aug 15, 2017 | 119 comments



> A source close to Uber’s operations says its engineers watched the intersection where Uber’s cars were said to have run the red light, and that this text refers to them recording a number of normal, human-operated vehicles also breaking the law. Uber has never officially admitted that its software was to blame.

is the implication here that that intersection has a lot of red light runners? if so, are they so dense as to not understand that normal people running red lights is less of an issue here than a machine running that red light?

normal humans run red lights because they're either not paying attention or they're assholes. how is a machine safer or better if it can't pay attention (or even worse, is an asshole).

someone could have died because uber decided the rules didn't apply to them. it's ridiculous that they're still allowed to operate in california.


Don't forget illegally running autonomous tractor trailers in Palo Alto and on the roads of Nevada.

These guys are criminals... total disregard for human life (but their own) in every shape and way there is. From potentially killing people, to treating everyone who is not their BRO like trash: drivers, employees, customers, business partners, etc, etc, etc... GROSS!


Your comments come off as if you are more bitter about some company you can just choose to ignore than Benchmark is about Travis.


If you ever walk through a crosswalk, ride a bicycle (or motorcycle), or drive a car, you may pay with your life for ignoring self-driving cars that violate traffic laws and/or fail to react intelligently to unusual situations.


> bitter about some company you can just choose to ignore

This is not about some product you buy, or some service you pay for, it's about sharing the road, and whether the other people/entities on the road with you operating multi-thousand-pound machines at 24-65 mph have a legal right to be there and are safe. That's not something you can necessarily "ignore" without consequence.


I am only referring to this person's tone being unlike most of the people on this forum: "GROSS", "burn Uber down to ground". It is concerning how angry this person is. Also, yes, the car ran the signal; it was bad and they should be fined and whatnot. This is not L5 autonomy, so the driver is also responsible for not braking at that signal. Stop making this sound like the car ran over someone or did not stop for a pedestrian.


Yes, if Travis continues to meddle with Uber and be involved, he may well burn it to the ground out of pure ignorance and hubris, only to hold the ashes of what he built and then burnt!

Their behavior is so GROSS it's almost illegal.

As for "concerning", that's funny... Uber stole my money and laughed, then caused other people I know financial harm, and then all this stuff comes out in the press about them. I loathe Uber, that is all... no more, no less.

Why do you care so much ... what skin in the Uber game do you have?


21 ppl upvoted my comment at the top.


Just had a look at furioussloth's comments. Every single one is an Uber related story and an Uber related comment.

Reminder to us all to please declare your interests if you have them. Even if it's only:

Full disclosure: I quite like Uber.

For me, full disclosure: I do actually quite like Uber. (Although I don't like everything they do, or endorse all their policies or individual employees' abhorrent behavior, etc.)


Being pro-Uber here is like being a downvote magnet, so yes, I do maintain an account for non-technical comments. Also, I do not try to condone their bad behavior; I am just very amazed by what they have achieved by disrupting the taxi industry. I sometimes feel people have such a knee-jerk reaction to certain things which are not really that scandalous or incriminating.


No it isn't. Declare your interests.


I am very bitter after Uber pretty much laughed at my Uber account being hacked and thus $1k stolen from me. These hacks were happening about a dozen or more times a day for months and years, and did they respond properly by alerting users that they should change their password? No, their PR blamed it on the users, saying those getting hacked should have used a stronger password or something to that effect.

Further, thousands of us aspire to reach Kalanick's heights, and to see such a lack of graciousness and humility given his position is disgusting! His persona is one of a greedy BRO who thinks he's king, one who needs to be kicked off his horse and learn humility! Thankfully it looks like this is slowly happening!!!

Give it up Travis and save Uber or continue being your greedy BRO king self and in turn burn Uber to the ground.


Are you saying that we should ignore companies that act badly?


Normal humans also run red lights when the yellow-to-red light time is too short. Transit guidelines generally state that yellow lights should stay lit 1 second for every 10mph of the speed limit. When the yellow light is shorter than this, it's easy to get caught in the middle of a red.
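
To make the rule of thumb concrete, a toy sketch (my own illustration of the 1 second per 10 mph approximation above; real yellow-interval guidance also factors in reaction time, deceleration, and grade):

    # Toy illustration of the "1 s per 10 mph of speed limit" rule of thumb
    def min_yellow_seconds(speed_limit_mph: float) -> float:
        return speed_limit_mph / 10.0

    for limit in (25, 35, 45):
        print(f"{limit} mph limit -> yellow should last at least ~{min_yellow_seconds(limit):.1f} s")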


Have you seen the video? It's way more jacked up than what you're implying:

https://m.youtube.com/watch?v=pzzQ42D9Srw

Watch that and tell me that it's reasonable or caused by a short yellow...


I've run that light myself by accident. It's a hard light for humans too. This light is in the middle of a block rather than at the end. The area is full of activity, including people walking, but also popular sights. That brick building to the right is SFMOMA. It's hard to notice the traffic lights amongst all this activity, not to mention that you don't look for it. Also that middle lane can easily have its views obstructed on both sides.

That area is also messed up because you need to be in the correct lane else you will be forced up different streets so you have many cars trying to change lanes. Lastly, that area has its share of asshole drivers that cut you off, speed, etc....

From my perspective, that car is behaving like someone not aware of the light and not sensitive to context (should have driven more slowly). You can call that asshole behavior if you wish.


It's hard to notice the traffic lights amongst all this activity, not to mention that you don't look for it.

That makes it hard for a person, especially if they're new to that road, but why would it affect the database lookup that a self-driving car is doing?


>but why would it affect the database lookup that a self-driving car is doing?

Who said a self driving car is doing a "database lookup"?

If anything, a good self-driving car should NOT do any kind of database lookup (of the location of traffic lights etc) and should be able to recognize and respond to a moved, impromptu (e.g. because of road work), new, unfamiliar, etc. traffic light.


Getting additional data from a predefined map is expected, even if it's just for something to test the data coming from the sensors. If the car knows there's a traffic light on the map but it isn't 'seeing' one then it should be handing control back to the human, not just carrying on regardless on the assumption that the map is wrong.

A level 5 self-driving car would work completely autonomously without any prior knowledge of the area it's driving in. We're not there yet.
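
A minimal sketch of that fallback idea (hypothetical logic, not any vendor's actual implementation):

    # If the prior map expects a signal here but perception reports none,
    # treat the disagreement itself as the trigger to hand control back.
    def should_hand_back(map_expects_signal: bool, detected_signals: list) -> bool:
        return map_expects_signal and not detected_signals

    if should_hand_back(map_expects_signal=True, detected_signals=[]):
        print("Map/sensor disagreement: alert the safety driver and disengage")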


I see -- checked the paper. I knew they were using maps data for the routes and assistance (and that would extend to traffic lights) but I'd expect them to be able to spot all kinds of movable traffic lights (e.g. when there are works or an accident) by pure image recognition/AI.


I barely know anything about how self driving cars work so someone else should answer that. I am not affiliated with Uber or its competitors.

Perhaps I should have said that it's a hard light for humans but that I don't know anything about how hard it is for cars.


The standard approach to detecting signal lights is to have a database of GPS positions of the signals, along with rough location in the camera where the signal is expected to occur [1]. Then, when the car nears the signal, it locates the signal and detects the current color.

This mechanism really shouldn't be susceptible to the same biases as humans. The described signal may legitimately be more challenging for the self driving car, but more than likely the signal was missing from Uber's database. Their lack of explanation for this failure does not inspire confidence in their approach.

[1] http://www.cs.cmu.edu/afs/cs.cmu.edu/Web/People/zkolter/pubs...
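
A toy sketch of that map-primed pipeline (my own simplification of the approach described in [1]; the coordinates and color check are hypothetical placeholders, not Uber's code):

    import math

    SIGNAL_DB = [(37.7857, -122.4011)]  # hypothetical GPS position of one signal

    def signals_within(lat, lon, radius_m=80.0):
        # crude flat-earth distance, good enough at city scale
        return [s for s in SIGNAL_DB
                if math.hypot((s[0] - lat) * 111_000, (s[1] - lon) * 88_000) < radius_m]

    def classify_state(roi_mean_rgb):
        # toy classifier: nearest reference color inside the expected image region
        refs = {"red": (255, 0, 0), "yellow": (255, 200, 0), "green": (0, 255, 0)}
        return min(refs, key=lambda k: sum((a - b) ** 2 for a, b in zip(refs[k], roi_mean_rgb)))

    for sig in signals_within(37.7856, -122.4012):
        print("expecting a signal at", sig, "-> state:", classify_state((240, 30, 25)))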


I saw his reply as adding "too short of yellow to red time" to the list of "not paying attention" and "being an asshole" for human reasons to run red lights. I'm not sure why you're attacking him for a claim he didn't appear to make.


I don't think this was an attack; rather, an example furthering the case of "too short".


Holy cow.

With a pedestrian in / on the edge of the crosswalk. And there was even another car that had been moving just a few seconds earlier stopped there too...


You don't think it was going to stop if the pedestrian stepped out in front? If autonomous cars can't do that, they can't do anything.


Who knows? This Volvo didn't: https://www.youtube.com/watch?v=AsTxS6tg6xc


It's not an autonomous car, and that particular car doesn't even have the "pedestrian detection system" installed. Without the optional hardware it can only detect big objects like cars.


It's driverless. Are you sure that Uber had a pedestrian detection system?


Honestly I thought it was the vehicle just before it, until that one came out of nowhere on the right. That was WAAAAAY after. I nearly closed the browser tab before it came through.


Looks like it didn't consider a traffic light in the middle of a block as a thing that could happen.


This is a cartoonishly uncharitable interpretation of "Quick update on that special intersection in SF, we taped 6 red car violations within 2 hours".

A more reasonable interpretation is: A techie pointing out that this is a really confusing intersection, and trying to ease his boss' concerns that the software isn't going to work. Surprise surprise, people have trouble with it too.


Uber is saying that this text doesn't mean Uber knew that its self-driving cars were running the red light - instead, it refers to Uber engineers recording normal, human-driven cars running the light.


>normal humans run red lights because they're either not paying attention or they're assholes. how is a machine safer or better if it can't pay attention (or even worse, is an asshole).

Obviously a machine can be safer if it fails to pay attention (fails to recognize that the light is red) at a smaller rate than humans.

I.e. if humans cross red lights 2% of the time because they don't pay attention, and a machine 1%, then the machine is safer -- despite still crossing red lights.


> if so, are they so dense as to not understand that normal people running red lights is less of an issue here than a machine running that red light?

I disagree. A machine might be able to completely verify that it's safe to ignore the light. I wouldn't trust a person to do that.


Weren't they suspended? Also, I doubt they coded their software to run the signal; can't believe you just implied that. It was surely a bug in the software, and I hope they got fined for it and took steps to fix it. Testing self-driving cars is not trivial; pretty much everyone who is testing them has screwed up here and there. That's the reason the person behind the wheel should be vigilant.


I'm not sure why you reached that conclusion, but I didn't say they coded anything into their software.

Also, they weren't suspended so much as the vehicle registrations of the cars they were using were revoked. At that point I'm assuming Uber stopped messing around with the CA DMV.


Given there is no further context in the messages released, I humbly submit that this "explanation" provided by the unbiased source at Uber is full and utter bullshit.

Like, they are standing there taping other people running red lights? And Kalanick cares how, exactly?

No, this is them taping their own cars running red lights. Which is a very likely scenario since they are 1) testing on a small subset of streets and 2) if it makes the mistake once, bet on a computer to make it again.


No, if their car is not running any lights under normal circumstances, but has run it even once at a particular intersection, you bet I will send my engineers there with some cameras to tape the intersection so we can analyze it and see what is happening. If the intersection is poorly designed/timed, why would it be implausible to happen to catch human drivers make the same mistake?


Because it obviously isn't either of those. Did you watch the video?


I watched the video. You asked "why would they be taping the intersection", I gave you a pretty obvious explanation, not sure why you are so vehement about it.

In turn, why would any sane person run a two-ton piece of gear through a red light at a busy intersection where it is known to have failed once already, endangering pedestrians and cross traffic, not once, not twice more, but six times in a day?

Why would you, as a responsible person, not immediately tell your test drivers to avoid this intersection until the issue is root caused, or, more likely, suspend testing for that day entirely?

Or do you just assume every engineer at Uber is a cackling mad scientist blinded by their quest to create our new machine god?


You put quote marks around something that I didn't ever write. Sorry, I don't believe there is a basis here for discussion.


I apologize if I misunderstood your question here:

> Like, they are standing there taping other people running red lights?

It sounded like you were rhetorically wondering why they would be standing there taping the intersection, but perhaps my interpretation was faulty.


>If the intersection is poorly designed/timed, why would it be implausible to happen to catch human drivers make the same mistake?

Unless they had already pulled the self-driving cars, which the timeline presented suggests they hadn't, why would that data be important enough to text back without the corresponding number of Uber violations?


I'm saying there's not enough context in the released messages to declare this unambiguously bullshit, and provided a reason why they would record the intersection, as parent comment asked "why in the world would they be taping it".

The number of Uber's violations could have been communicated previously, or could have been implicit in the context of the conversation (at least 1 violation, causing the investigation) - any of these 3 explanations seems equally likely.

Considering how dangerous this is, I really doubt they would run a red light SIX times through the SAME intersection purposefully, risking the life of REAL PEOPLE around them each and every time.

Uber's management and engineers may be irresponsible, but I don't think they're that evil, and I would hope no self-respecting engineer or human being would go along with that. "Oh, that's funny, let's try that again" works for software development, it's emphatically not how real world testing involving danger to life and limb works.

As to "why would this be important enough to text back", that's pure speculation, but if I was tasked with investigating this, I would be pretty relieved to see that this is a location that's confusing human drivers as well, not just my software, and would be pretty likely to communicate that back to my team.


>The number of Uber's violations could have been communicated previously, or could have been implicit in the context of the conversation (at least 1 violation, causing the investigation) - any of these 3 explanations seems equally likely.

The implicit one doesn't make sense; if an event caused the investigation, surely how well Uber did would be the primary focus. Which leaves the possibility that he knew already, meaning they had already had a meeting or something to discuss the results, and they were significant enough that he wanted a comparison.

>Uber's management and engineers may be irresponsible, but I don't think they're that evil, and I would hope no self-respecting engineer or human being would go along with that. "Oh, that's funny, let's try that again" works for software development, it's emphatically not how real world testing involving danger to life and limb works.

This was their illegal self-driving car test on the streets of San Francisco. They were unwilling to follow the legal safety rules, and unwilling to provide the deposit in the event they did cause someone serious harm. And they left the cars on the road while they tested whether the light was to blame.


"Uber Saw Tesla as a Huge Competitor

While Uber followed Google’s cars closely, it was Tesla and Elon Musk that the duo discussed most frequently.

9/14/2016 Levandowski: Tesla crash in January … implies Elon is lying about millions of miles without incident. We should have LDP on Tesla just to catch all the crashes that are going on.

9/22/2016: We’ve got to start calling Elon on his shit. I'm not on social media but let's start "faketesla" and start give physics lessons about stupid shit Elon says like [saying his cars don’t need lidar]"

Does anyone know what they're referencing here? I don't take Elon for a person who would lie, his character seems too strong for that - he understands public perception and seems to care deeply about it.


Multiple images can be used to compute a 3D point cloud. This is computer vision stuff that has been around for many years. The challenge is that this is a passive sensor, in that the cameras count on light illuminating the scene. So at night, in bright light (that causes images to blow out), in shadows, etc., you can have voids. If a person is in that void, bad things can happen.

But cameras now cost under $1 each in volume (thanks smartphones!) so they are dirt cheap. An imaging-based point-cloud extraction system's main components are therefore cheap. Add a GPU-enabled system to process (it's quite compute heavy) and you are set. OpenCV has the algorithms needed.
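
For what it's worth, a minimal OpenCV sketch of the image-to-point-cloud idea (left.png/right.png and the Q matrix are placeholders; a real rig needs calibrated, rectified cameras):

    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified stereo pair
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point output

    # Toy reprojection matrix: ~640x480 image, 700 px focal length, 10 cm baseline
    Q = np.float32([[1, 0, 0, -320], [0, 1, 0, -240],
                    [0, 0, 0, 700], [0, 0, 1 / 0.1, 0]])
    points_3d = cv2.reprojectImageTo3D(disparity, Q)  # H x W x 3 point cloud

    # Featureless or blown-out patches yield invalid disparities: the "voids" above
    valid = disparity > 0
    print("valid depth at", int(valid.sum()), "of", disparity.size, "pixels")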

LiDAR is an active sensor in that the laser "illuminates" the target area. This adds cost, but that is coming down quickly. Also, since the sensor delivers 3D points (not images), the computational cost of processing images can be saved, so less CPU/GPU is required.

Levandowski is a LiDAR guy. It's what he believes is the best solution for the problem.

Some feel that LiDAR is not a fit either, as it doesn't work well in rain/fog/sleet/snow. There was a YouTube video showing a self-driving car running a test course in clear weather and again in the rain. You would not want to be a pedestrian during the rain test.

In reality this is all engineering dick waving. Prices will come down and the sensor payload will converge.

For full autonomy it is likely that cameras, LiDAR, Radar, and sonar all will be used. They all bring some advantage to the problem that addresses a weakness of one of the other sensor techs.

Oh yeah, and Levandowski is a complete prick. Someone should teach him about IP theft and give him a prison life lesson. He's going to need it.


Incidentally, Musk's take: "The whole road system is meant to be navigated with passive optical or cameras and so once you solve camera vision then autonomy is solved if you don't solve vision it's not solved so that that's why our focus is so heavily on having a vision neural net that's very effective for road conditions." https://www.youtube.com/watch?v=gv7qL1mcxcw&feature=youtu.be...


Why would you want to limit yourself to passive cameras and make your life harder? This is like limiting yourself to flapping bird wings to make airplanes.


No, it's like limiting yourself to using skis to move down a ski slope. He's right: the roads are designed to be navigated using vision. Signage, regulations, paint, curbs, etc. There's no proof that you could safely navigate the roads with LIDAR, but we prove every time we drive that you can do it with vision.

And sure, there might be a better way to get down a ski slope, but skis would be a pretty good starting point. And they guarantee you don't end up in an impossible situation because you're doing things a fundamentally different way than the system expects.


They're designed to be navigated using human vision, which has very different characteristics in terms of dynamic range, resolution, processing pipeline, inferring details about the scene based on past experiences, etc than machine vision.


Because not everyone can afford to spend $20k on extra sensors that make the car 1% safer. And holding back autonomous cars until they're perfect can kill more people than near-perfect autonomous cars. It's an economic tradeoff like any other.


His thesis is that relying on cameras makes it easier, since the entire preexisting road network is literally designed around optical navigability.

Adding other sensors isn't free. Every minute you spend on developing techniques to process inputs from other sensors, not to mention integrating their conclusions with that of other sensors, is time, money, and energy you could have used to improve your optical system.

I'm not saying I necessarily agree (though I find his position intuitively compelling), but he clearly thinks that it's easier, faster, and cheaper to bring an optical-only system to a point of reliability than it is to bring a mixed-sensor system to the same point.


It's interesting if you consider we have 2 eyes [cameras] and we drive under all conditions, and under bad driving conditions - if you're sane - you slow down or even completely stop and pull over with your 4-ways on. When I've been in very heavy rain downpours on the highway, it feels like I'm only driving because I'm able to follow the flow of lights in front of me, using them as a bunch of guidance points - autonomous vehicles could likely do a much safer job of this.


So just from reading the pulled quote: is he saying current road users, i.e. humans, navigate using a passive optical system (our eyes take in photons, they don't emit lasers)? But our eyes are also components of a general intelligence, so does "solving vision" entail development of a general intelligence?


> Multiple images can be used to compute a 3D point cloud. [...] Add a GPU enabled system to process (it's quite compute heavy) and you are set.

It requires enough information ("features") in the images to match points across a stereo pair. Flat patches of color have no features and as such, cannot be correlated between cameras. In such a case, you have a spot with no depth information.

This is exactly why you want to have other sensors, and saying that "oh humans have two eyes and they do fine" doesn't really cut it. Humans can say "hey this flat patch of color is a sign and signs are not dangerous" or "hey this flat patch of color is a really clean semi truck, and crashing into trucks is bad" but computers aren't that smart.


> Flat patches of color have no features and as such, cannot be correlated between cameras.

Flat patches of color are also flat, which makes it possible to fill in the missing depth information.

I took a course in computer vision where one of the projects [1] involved monocular vision, and assuming that flat patches are flat, vertical lines are vertical, all others are horizontal and the background is flat, it was possible to get a pretty good reconstruction.

[1] http://groups.csail.mit.edu/vision/courses/6.869/notes/chapt...


Everything has texture, even things that appear solid; this is how optical mice can work on glass. Throw in an NIR/UV camera, or maybe thermal for good measure, slap an LSTM on top, and you are covered for everything a human would spot.


I'm curious as to what happens when there are two cars with active sensors at the same place - how much would they interfere with each other?


There are a lot of techniques to avoid it. I remember listening to an interview with Greg Charvat who said that coding a polarised pulse train with some random phase distribution is a possibility for avoiding interference.


What about malicious interference?


If it's deliberate, I would imagine that there are plenty of techniques that would work with regular cameras, as well. Ultimately, you can always target the algorithm that works on the data, regardless of how the data is collected.


Tesla will do self driving without a LIDAR, just cameras (ok, 8 of those), a RADAR and some ultrasonic distance sensors: https://electrek.co/2016/10/20/tesla-new-autopilot-hardware-...

Some people believe this to be impossible for the time being - LIDAR gives you a 3d point cloud (every obstacle, with its distance from you measured), which is amazing (but currently expensive, bulky, fragile, etc) while with cameras it's way harder, and RADAR can't see some materials at all.

Tesla argues that their suite is enough (humans manage with just 2 "cameras", some even 1), but the algorithms to let you safely entrust your life to such a sensor suite are harder (i.e. farther off in the future).

Kalanick + Levandowski disagree with Musk, and were planning on calling him out on it.


I know that you were just doing a tl;dr and not necessarily trying to start a discussion about it, but wow, that seems kinda stupid to be coming out of someone who is ostensibly fairly intelligent.

A 3D point cloud is great, but with the raw computing power we have these days, image processing should be pretty reliable, at least until LIDAR becomes more... well, reliable and affordable.

Doesn't the HoloLens just use an IR camera to map its environment in real time?


> A 3D point cloud is great, but with the raw computing power we have these days, image processing should be pretty reliable

In theory? Yes.

In practice? That's how the first autopilot fatality happened. The image processing confused the white side of a semi truck with the sky.


I thought that that was (also?) a problem with the radar, where it bounced under the truck and thought the road was clear.


> I thought that that was (also?) a problem with the radar, where it bounced under the truck and thought the road was clear.

Ultimately this was driver error, since he wasn't paying attention and didn't brake.

As for why autopilot did not engage the brakes, you could blame any of the forward facing sensors since they all failed to see the truck.

If the vehicle had windshield-height radar, or a better vision system, perhaps the system would work.

The problem with the radar was it couldn't see the object at that height.

The vision system confused the trailer for an overhead road sign.

After the incident, I remember some people reporting more braking occurring on highways underneath overhead signage. That made me think the fix they put in place was to raise the threshold for what is considered an overhead sign. That is, they chose to err on the side of assuming the sign is an object ahead.

That may have been a temporary fix. I'm only speculating based on driver reports that appeared in /r/teslamotors a month or two after the crash was reported.

I don't know whether Tesla ever gave an official statement about how they fixed that issue. Tesla wasn't found to be at fault so they probably didn't have to. Also, the whole system is constantly under development.



Oh interesting. That's a pretty detailed report.

One thing jumps out at me,

> This is where fleet learning comes in handy. Initially, the vehicle fleet will take no action except to note the position of road signs, bridges and other stationary objects, mapping the world according to radar. The car computer will then silently compare when it would have braked to the driver action and upload that to the Tesla database. If several cars drive safely past a given radar object, whether Autopilot is turned on or off, then that object is added to the geocoded whitelist.

Relying on whitelists seems like a hack. Then again, I'm not building it =)
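
If I'm reading the quoted description right, the gist would be something like this (toy sketch with made-up coordinates, not Tesla's implementation):

    import math

    WHITELIST = {(37.7900, -122.3900)}  # hypothetical geocoded overhead-sign location

    def near_whitelisted(lat, lon, tol_m=15.0):
        return any(math.hypot((lat - w[0]) * 111_000, (lon - w[1]) * 88_000) < tol_m
                   for w in WHITELIST)

    def should_brake_for(lat, lon, moving):
        # stationary radar returns at whitelisted spots are ignored; anything else brakes
        return moving or not near_whitelisted(lat, lon)

    print(should_brake_for(37.7900, -122.3900, moving=False))  # known sign -> False
    print(should_brake_for(37.7905, -122.3950, moving=False))  # unknown object -> True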


> Relying on whitelists seems like a hack. Then again, I'm not building it =)

Agree that it feels hacky at first, but once the whitelist dataset gets huge, it becomes unique training data for Tesla (and hopefully their machine learning will be able to generalize from it).


> Agree that it feels hacky at first, but once the whitelist dataset gets huge, it becomes unique training data for Tesla (and hopefully their machine learning will be able to generalize from it).

I guess. But then you still have to deal with rollout in other countries. And maintenance of sign locations which can change over time, get removed, etc. Still hacky IMHO.

How would machine learning make use of white-listed data? I doubt they could use that data to predict the GPS location of unknown signs.

If you mean image recognition, I assume if machine learning could properly identify the signs with accuracy, then they wouldn't need the whitelist. Then again, maybe they truly haven't collected a full overhead-sign dataset yet. I'd be shocked though if they don't by now. Anyway, you could be right. It would be fun to learn more about these setups.


True, but it was Mobileye's _one_ quite low-res BW camera system that decapitated the driver.


Yes, the HoloLens uses IR to map its environment; however, if you move to an area that has a window it throws a conniption fit.


The reason I bring it up is that the HoloLens is pretty limited. Given what I've seen it do with its limited power and the kind of computing power they're putting in Teslas, not to mention the extra sensors (RADAR), it seems more than adequate to perform real-time image processing, even in the short term. That's not even considering what's coming down the pipeline, like fleet learning, the new Nvidia GPUs, etc.


Also, when the HoloLens throws a fit it's maybe a 5% margin of error in terms of how the image moves. You'd think an autonomous vehicle wouldn't crash because of that much inaccuracy, considering human drivers don't normally crash with less accuracy than that.


The Hololens uses an IR projection and camera in addition to 4 "environmental cameras", which is why it doesn't work well in very low or very bright lighting. It also uses the inertial sensors to help correct the map.

There are some things about SLAM that are best done optically.


The irony is a car with LIDAR on it won't sell; it looks ugly and too foreign. Tesla is designing with aesthetics in mind.


Musk has said on a few occasions that he can achieve full autonomy on a Tesla with no LIDAR setup. I think instead they use a front-facing radar and a bunch of cameras. Maybe IR? Levandowski strongly disagrees and sees LIDAR as critical to reliably mapping the environment. This all came up when that guy drove into the side of a trailer in a Tesla.


Elon has publicly claimed that he will be able to provide an L5 autonomous car without using lidar in the next few years. AL, another expert in the field, believes that claim is so ridiculous that he must be willingly lying, not just overconfident.


Musk said they'd demo L5 late this year, which is delusional.


Tesla’s new self-driving car can only make you money on the ride-sharing ‘Tesla Network’, not Uber or Lyft (2016)

https://electrek.co/2016/10/19/teslas-new-self-driving-car-c...

that is the biggest challenge for Uber, if Tesla decides to completely integrate vertically and operate as a big fleet.


> I don't take Elon as a person to lie

But we all know he will exaggerate to the point of being misleading.


What does LDP stand for?


Reading their exchanges is more interesting than just finding stuff about the court case. Specifically, how these insiders think about their competitors, such as how they think that Elon is the biggest competitor and how they wanted to partner with Google.


I guess that's because of Tesla's advantage in vertical integration -- they control the entire stack unlike Waymo or Uber who have to retrofit their gear into specific car models.

Tesla's tech is generally considered weaker but Musk could still catch up and overtake by playing the "worse is better" card.


They might also realize just how important the combination of electric and self-driving is in terms of $/mi. Both lower the cost substantially, but together things start to get really cheap. When you add in the cost of a driver, internal combustion is still roughly competitive. Once you remove that cost, electric will be significantly cheaper. Tesla's investment into battery production could be difficult for Uber's suppliers to overcome.

The end game in Uber's VC-money-burning present state is an eventual future where self-driving vehicles drive the cost down to where Uber makes a profit from the same fares we pay now. But if Tesla prices their service near their own cost, Uber could get to its own self-driving service and be forced to choose between pricing their service higher than Tesla or continuing to take a loss on each ride.

Tesla also has a huge PR advantage. Competitors need to be able to capture the attention of riders as well as offering self-driving vehicles. Given how easy it is for Musk to get press, he might have the best chance of doing that without having to pay for it in the same way that Uber has had to pay.


> 7/23/2016 Kalanick: You hungry? .. Can get some Uber Eats steak and eggs.

Travis shows dog fooding at its best.


Even specifies "Uber" Eats. That's a CEO right there.


Exactly. The man doesn't have an off switch. It's like Alec Baldwin talking to the sales guys: "Always be closing!"


I don't understand how we, the public, are allowed to read a private conversation. Don't get me wrong, I enjoyed reading it, it felt like snooping, but isn't it a blatant privacy violation?


Anything you do that's business related could end up on the front page of the NY Times. I would be extremely careful about what you do on corporate provided phones and email.


Published AND edited. They will delete the part of the sentence that makes it explicit you are joking about something.


Published AND edited AND photoshopped -- into a nice fake iPhone stock photo for maximum effect.


Zuckerberg's early conversations being released didn't seem to hurt him making $40B+ - I always wondered if FB censored/prevented the spread of that information relating to him on FB.


Who knows where he would have ended up otherwise.


Court documents are generally part of the public record and that's where the texts came from. Like it or not, that's the way it currently works.


This is why you don't write down anything illegal you plan to do. If you get caught courts can demand you hand over evidence, or it'll be seized by a search warrant, and likely made part of the public record.


They were entered into evidence for a court case. Court documents are generally public record. That's how these messages went from private to public.


So much talk about LIDAR and other sensors. Why does nobody talk about the obvious idea of a Road Object Message Bus? ROMB is a protocol where each road object (a traffic light, a sign, a car, etc.) transmits info about itself. A car could broadcast its direction vector, its intention to turn, and any non-ROMB moving object it sees. A traffic light could broadcast its current state and when it is going to change. That information would greatly enhance overall safety, especially during rain and snow conditions, when even LIDAR fails.

Self-driving is so important (second only to eliminating combustion engines) that we could upgrade existing cars with cheap ROMB boxes. A vehicle GPS tracking system costs about $30; a ROMB box would cost about $60. Let's say that from 2027 all cars have to have a ROMB box to enter a downtown ...
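
To make the idea concrete, here is a sketch of what a single ROMB broadcast could look like (ROMB is the proposal above, not an existing standard; the field names are made up):

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class RombMessage:
        object_id: str
        object_type: str        # "traffic_light", "car", "sign", ...
        lat: float
        lon: float
        heading_deg: float      # direction vector for vehicles
        state: str              # e.g. "red", or "turning_left" for a car
        next_change_s: float    # seconds until the state changes, if known

    msg = RombMessage("light-042", "traffic_light", 37.7857, -122.4011, 0.0, "red", 12.5)
    print(json.dumps(asdict(msg)))  # the payload a nearby car would receive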


Because this would likely require a large change in city infrastructure.

Who will be building this? Who will pay for it? If it is the city how will you convince the city's taxpayers to pay for it? If it is a profit-seeking corporation, how will you convince a city to let you cause the disruption, construction, etc. to let you do this?

For other cars, what advantage does this bring to other car manufacturers and why would they agree to cooperate with competitors? Of course there is the obvious benefit that this would help all the players, but why does that marginal benefit outweigh the risk of commoditizing a brand new market / product and eliminating the chance to establish a market share lead. I am partly raising these hypothetical questions because I think companies are trying to "tough it out" and do it without such changes to city infrastructure first and see how that turns out.

I appreciate your simple approach, but you might be disregarding the societal and business factors in favor of making the engineering challenge simpler.

Edit: grammar


Also, GPS tracking systems are only accurate to about 20m, not enough to avoid hitting stuff. And they transmit over the cell network, so they don't work if you can't get a signal.


Although if you're going to be adding to infrastructure, you could just as well add differential GPS transmitters around the place and get the accuracy down to sub-1m[1] although that's still not really safe enough for cars...

[1] (and I believe you can get it down to ~3cm if you have enough data.)


Let's say your car's ROMB received info about the white truck, while your car's cameras and vision recognition systems see just a cloud and no truck within 100 m.

ROMB's purpose is not to replace cameras or LIDARs, but to extend the gathered info.


I can't shake the feeling that all of this posting of text messages is just trying to shame Levandowski, Kalanick, or both.


Any "leak" is intended to weaken/shame some party. The unknown is whether it's an attack on Levandowski/Otto, Uber, Kalanick, or all of the above.


The source of these texts is unsealed court documents. Like it or not, these things can become part of the public record if you get sued.


Maybe so. You play in dirt, you get dirty.


I didn't read much of the transcript, but surely these guys must be intelligent enough to keep anything they know is likely illegal to a private in-person conversation, without a record?


Most (though not all) people are probably smart enough not to send a text or an email like "Where should we dump the body." However, the totality of a bunch of emails or texts, considered together with other evidence, can certainly be suggestive even if there's no smoking gun.


Possibly communicated through other channels as well like Signal or Whatsapp.


Interesting that they are using Telegram. See the message at the end of the PDF where they complain about Telegram on planes.


Simply unbelievable how much garbage is on that web page without an ad blocker (I put a box around the actual content that appears without having to scroll): http://i.imgur.com/0S2FIAW.png



Zero text content for me (without having to scroll, and with an ad blocker enabled).

JS Disabled: https://imgur.com/a/IAcXb

JS Enabled: https://imgur.com/a/TtSeg

Likely a result of my zoom/font settings, but this is ridiculous.


More annoying is that even as an IEEE member with a Spectrum subscription, there's no way to log in and remove the ads, only to download the magazine.


We've come so far since 1999, and we have so much further to go.


Archive.is doesn't have this issue: (http://archive.is/k7vgK)


The headline and image are also content, they're just not body text.



