AirSim: open-source simulator for autonomous vehicles (github.com/microsoft)
320 points by jonbaer on Sept 18, 2019 | 53 comments



Super cool. I've always been a big fan of generating synthetic training data. As long as you can degrade/distort the data to match real-world conditions, you can have your cake and eat it too: accurate inputs with perfect automated labels (rough sketch of the idea below).

At my last job, we read the contents of receipts and parsed them for purchase information. I was really interested in working there, so I built a docker-based CGI receipt generator[0] that mimicked the real-world conditions of a receipt captured with a camera phone. After getting the job, I expanded it to provide perfectly labelled data: bounding boxes and per-character annotations[1]. I then used this data to train custom TensorFlow models.

0. https://github.com/amoffat/metabrite-receipt-tests

1. https://i.imgur.com/gkdnAkt.jpg
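
A minimal sketch of that degrade-to-match idea, using Pillow and NumPy (the distortion parameters here are illustrative, not the ones from the linked repo). Note that geometric distortions like the rotation also have to be applied to the label coordinates:

    import numpy as np
    from PIL import Image, ImageFilter

    def degrade(img: Image.Image, rng: np.random.Generator) -> Image.Image:
        """Roughly mimic a camera-phone capture of a rendered receipt."""
        # Slight rotation, as if the phone wasn't held square to the receipt.
        img = img.rotate(rng.uniform(-3, 3), expand=True, fillcolor="white")
        # Mild blur for imperfect focus.
        img = img.filter(ImageFilter.GaussianBlur(radius=rng.uniform(0.5, 1.5)))
        arr = np.asarray(img).astype(np.float32)
        # Additive sensor noise.
        arr += rng.normal(0, 8, arr.shape)
        # Uneven lighting: a left-to-right brightness gradient.
        grad = np.linspace(0.85, 1.1, arr.shape[1])
        arr *= grad[None, :, None] if arr.ndim == 3 else grad[None, :]
        return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

    # usage: noisy = degrade(clean_render, np.random.default_rng(0))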


In medical imaging, synthetic training data is called a phantom.

https://en.wikipedia.org/wiki/Computational_human_phantom


This is really cool! Nice work. Do you remember how you went about creating the original receipt generator? I'd be really interested in this kind of stuff even without any ML/DL.


Not OP, but I was just looking through their repo, and I've done something similar in the past for a different application. We both used Blender for generating the rendered synthetic data. Blender has a really simple Python API that was originally made for small scripts, but you can also run it in headless mode. You start with a base scene that has all of your objects, then use the API to randomly generate samples and render them out to files (roughly as sketched below).
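
A hypothetical minimal version (the object name "Target" and the pose ranges are made up), run as `blender --background scene.blend --python render_samples.py`:

    # render_samples.py
    import random
    import bpy

    scene = bpy.context.scene
    target = bpy.data.objects["Target"]  # hypothetical object in the base .blend

    for i in range(100):
        # Randomize the pose of the object of interest.
        target.location = (random.uniform(-1, 1), random.uniform(-1, 1), 0.0)
        target.rotation_euler = (0.0, 0.0, random.uniform(0.0, 6.283))
        # Render this sample to disk; camera and lights live in the base scene.
        scene.render.filepath = f"/tmp/sample_{i:04d}.png"
        bpy.ops.render.render(write_still=True)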


In theory, could I use this (or something like it) to develop algorithms for controlling autonomous vehicles (aerial, ground, and/or aquatic), then, if I ever get my experimentation to the point that I'm interested in doing so, build the vehicle itself and deploy it with a reasonable degree of confidence that it would actually work?


There would be a reality gap going from simulation to the real world. Good read: https://ai.googleblog.com/2017/10/closing-simulation-to-real...


AKA 'sim2real' gap for googling purposes.

related: https://lilianweng.github.io/lil-log/2019/05/05/domain-rando...


Getting things like this to transfer well is not necessarily easy; it's a major, active area of research.

Even if you somehow had a perfect physics model, there are problems like the fact that motors and joints behave differently as they wear down, etc.

It might be easier for wheeled vehicles; the usual examples of failure are complicated, error-prone things like bipedal walking or hands with lots of joints.


Yeah, the trick is knowing what the performance of the as-built vehicle will be before building it. You can, to some extent, marginalize over this by running randomized trials with different parameters for your mechanisms, to verify that you're not too sensitive to the exact setup. The control system needs to be fairly robust to such variations. But you still don't know that you've modeled all the physics correctly until it's actually flight-proven. Picking the correct degree of fidelity in a simulation model isn't super straightforward.
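
In code terms, that marginalization is just a Monte Carlo loop over plausible as-built parameters; `run_episode` and the ranges below are placeholders for whatever your simulator and mechanism specs actually expose:

    import random

    def sample_params(rng: random.Random) -> dict:
        # Plausible as-built variation; ranges are illustrative, not measured.
        return {
            "mass_kg": rng.uniform(1.9, 2.1),
            "motor_gain": rng.uniform(0.9, 1.1),       # manufacturing spread, wear
            "sensor_latency_s": rng.uniform(0.0, 0.03),
        }

    def robustness_check(run_episode, n_trials=200, seed=0) -> float:
        """Fraction of randomized trials the controller survives.

        run_episode(params) -> bool stands in for a full simulator
        rollout under the sampled physical parameters.
        """
        rng = random.Random(seed)
        return sum(run_episode(sample_params(rng)) for _ in range(n_trials)) / n_trials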


My thinking is that if your sim is kinda realistic, then once it works you can fine-tune on some (expensive) real hand-labeled data. But the sim helps you develop the training process, etc.


I just downloaded this last weekend! I’m trying to find a good simulator where I can import models generated with photogrammetry. I’ve been using a cell phone camera and OpenSfM + OpenMVS to make 3D models of real world environments, and I want to be able to run them in sim. I see this is built on Unity. In the past I tried UnrealEngine but it was such a big mess of software for simple RL tasks. I’ll have to see if Unity is cleaner.

I’m building a mostly-3D-printed four-wheel-drive rover I designed, and I’m using an Nvidia Jetson Xavier to try to make it autonomous using only cameras for sensors. Currently I’ve got some stereo cameras, but I’m considering a synchronized 4x 4K camera rig. Hopefully I can release some scans once I get something good.

The rover is public domain open source so please check it out if you’re curious! It can support a 10-20lb payload with the right springs:

https://youtu.be/ToGT3KokPZA


This sounds super cool! Can you say a bit more about your OpenSfM + OpenMVS stack? I've been idly interested in building 3d models from cell phone video for a while.


Thanks! Yeah I’m essentially running OpenSfM exactly as described in their documentation usage example. Then on that page they also show how to export to OpenMVS, and from there I follow similar example usage documentation from OpenMVS. OpenSfM has a config file and the defaults are fine, but I’d recommend reading the description for each possible parameter so you know what to tweak if you don’t like the default results. Oh and OpenMVS uses a lot of memory if you use lots of photos, but remember you can use swap. At first I used GCE instances with huge amounts of physical ram and that was a silly waste of money in hindsight. I simply didn’t think of using swap at first.
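
For anyone wanting to try it, the sequence is roughly the following (wrapped in Python here; the binaries are the documented OpenSfM/OpenMVS entry points, but the exact export path varies by version, so check your own output):

    import subprocess

    PROJECT = "data/my_scan"  # OpenSfM project dir containing an images/ folder

    # Structure-from-motion: camera poses + sparse point cloud (OpenSfM).
    subprocess.run(["bin/opensfm_run_all", PROJECT], check=True)
    subprocess.run(["bin/opensfm", "export_openmvs", PROJECT], check=True)

    # Dense reconstruction, meshing, and texturing (OpenMVS).
    scene = f"{PROJECT}/undistorted/openmvs/scene.mvs"
    subprocess.run(["DensifyPointCloud", scene], check=True)
    subprocess.run(["ReconstructMesh", scene.replace(".mvs", "_dense.mvs")], check=True)
    subprocess.run(["TextureMesh", scene.replace(".mvs", "_dense_mesh.mvs")], check=True)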


This is quite cool! Why cameras only, though? Intel RealSense? Ultrasound? Are you avoiding them due to cost?


Hello and thanks! No, I'm not avoiding either of those due to cost. I've worked with ultrasonics on a robot before and found them pretty unimpressive. I have seen a demo of some advanced ultrasonics, but the guy also told me he had patented the technology. Normal ping-pong ultrasonics return such ambiguous information that they aren't very useful.

Regarding Intel RealSense, they have two different technologies. Some RealSense devices are stereo cameras, so they are "just cameras" in the sense of this discussion. But I'd rather run the algorithms myself and build a complete perception system than use the built-in computation of, for example, the RealSense tracking camera. The other technology they use is structured light, which doesn't work well, or at all, in sunny environments.

I want to build a complete perception and navigation stack for outdoor robots, and aside from some sensors inside the robot, cameras are the technology I'm most interested in learning how to use. I just think cameras are the future. Passive vision is how nearly every living thing on earth perceives the world.


Slightly worrying that the car simulation doesn't appear to have any cyclists or pedestrians.


Or pictures of people on the sides of sprinter vans.

Or bikes on trailers on the backs of cars.

Or pickup trucks stuffed to the gills with Lime scooters.

Or...


It's good that it's open source then ;)


Good that autonomous driving companies won't pay me for my contribution to the software that underpins their R&D?


One perk of it being open source is that you can take your payment in "benefiting the world" instead of cash. You're given the platform to improve it if you want; expecting some other kind of compensation for addressing your own complaint doesn't seem realistic.


"My own complaint" is one on which the success of autonomous vehicles hinges. It's not an arbitrary critique at all -- all these things will be misclassified and mistreated unless explicitly included in the training process (until ML gets smarter). The fat tail strikes again.

"Benefiting the world" sounds great except we're embedded in a capitalist system for which the net gain on utility from this action is negative -- the super-corporations capitalizing on my unpaid work will further contribute to the economic inequality that beleaguers everyone who works for a wage.


There are more and more methods of getting funding for contributing to open source, for example https://opencollective.com/


Not all open source is equal. Fetishizing "open source" without considering the use cases and who may profit from it is not something I feel comfortable doing.

I'm not ragging on open source as a concept, just that this example particularly is a bad one for justifying its utility.


Yeah, and in the car video on https://microsoft.github.io/AirSim/, they have the car driving on the wrong side of a barricade in a bus-only lane. :(


I know, I wanted to train a neural-network piloted killer robot car too.


I remember people using Grand Theft Auto V to generate synthetic data. It’s got pretty much all of that.


I just ran across Unreal Engine looking for architectural visualization software (note that Twinmotion is free until Nov!). This is an amazing space, which I imagine will explode in variety of applications, especially once VR goes mainstream.

Are there any open source alternatives which even come close?

I wonder if I would attract attention if I were to upload a rendering of Saudi oil infrastructure for drone simulation. ;)

Can we use this thing as a simulation for shooting drones out of the sky? This could be an interesting massively multiplayer research project creating an arms race between attackers and defenders.


> I just ran across Unreal Engine looking for architectural visualization software (note that Twinmotion is free until Nov!)

Thank you!


If you are interested in this, I would recommend checking out DroneSimLab [1]. Both are developed for the Unreal Engine, and both have SITL (Software In The Loop) for drone control. One thing DroneSimLab will let you easily do is add multiple drones. If I remember correctly, it uses Gazebo to create semi-realistic control for the drone.

[1] https://github.com/orig74/DroneSimLab


I looked at using AirSim a few months ago to replace a product we'd been using for some time; the problem was that the actual vehicle dynamics simulation capabilities were subpar.


AirSim also has poor facilities for the stepped simulation needed by many RL algorithms (e.g. insert a control input, advance the simulator by 30 ms, observe the state, repeat). It looks like this is still being worked on: https://github.com/microsoft/AirSim/issues/600
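
For reference, the lockstep loop RL wants looks something like this; later AirSim builds expose simPause/simContinueForTime in the Python client, though whether control commands queue correctly across paused steps is exactly what that issue discusses, so treat this as a sketch:

    import airsim

    client = airsim.MultirotorClient()
    client.confirmConnection()
    client.enableApiControl(True)
    client.armDisarm(True)

    client.simPause(True)  # start paused so the agent controls time
    for step in range(1000):
        # Insert a control input (async, so it doesn't block while paused).
        client.moveByVelocityAsync(1.0, 0.0, 0.0, duration=0.03)
        # Advance the simulator by one 30 ms step; it re-pauses afterwards.
        client.simContinueForTime(0.03)
        # Observe the resulting state.
        state = client.getMultirotorState()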


Weird; I'm surprised that isn't the default mode of operation for something that's designed to let you plug in your own control system.


I did not do a code review, but I would not be surprised if they were using Unreal's default vehicle simulation, which, as you noted, is indeed subpar. It's honestly a very challenging task to get vehicle physics to accurately replicate real cars, and the systems that are out there (commonly used by manufacturers to make advertisement renderings) are proprietary and expensive.


In the demo video it popped up "Advanced PhysX car model". PhysX is designed for a good-enough approximation in games, not an accurate simulation.


There is a physical car designed to be repainted in CGI (like a motion-capture actor). It solves physics by using actual physics.

http://www.themill.com/portfolio/3002/the-blackbird%C2%AE


In that case you're limited to real terrain only; your replication only goes as far as your recording.


If using Unreal, the Assetto Corsa Competizione video game is pretty good for simulations, and it offers a commercial license. https://www.assettocorsa.net/assetto-corsa-pro/

As does rFactor 2, but it may not be so easy to use with Unreal.

http://www.vesaro.com/store/pc/viewPrd.asp?idproduct=130


Just looks like an improved Midtown Madness to be honest.

https://en.wikipedia.org/wiki/Midtown_Madness

/s in case it wasn’t clear


I've been using LiftOff (https://store.steampowered.com/app/410340/Liftoff_FPV_Drone_...) to test my drone handling skills in a simulated environment for a while. It's remarkably accurate, and it allows for custom drone models to be downloaded that match up very well with the handling of the real drone in question.

Weather, wind etc. are all taken into account.


Awesome; it looks like GTA gone serious.


Does this support UAV swarms? Would you need one instance per vehicle?


I think it only supports one UAV. There's already an issue about that: https://github.com/microsoft/AirSim/issues/1540


That looks interesting, but I wonder how accurate the flight behavior is when they are not including any atmospheric dynamics. The wind in the atmospheric boundary layer is characterized by strong semi-random fluctuations that arise from the turbulent interactions of the air with the surface. These fluctuations occur on scales both larger and smaller than a drone, so they should have a significant effect on its aerodynamics.
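
As a toy illustration of what such a simulator could add: even a crude gust model, here an Ornstein-Uhlenbeck process per axis (a rough stand-in for a proper Dryden or von Kármán turbulence model; the parameters are invented), produces exactly those semi-random, temporally correlated fluctuations:

    import numpy as np

    def simulate_gusts(n_steps=1000, dt=0.01, tau=1.0, sigma=1.5, seed=0):
        """Ornstein-Uhlenbeck wind fluctuations (m/s) about a mean wind.

        tau: correlation time of the turbulence [s]
        sigma: steady-state standard deviation of the gusts [m/s]
        """
        rng = np.random.default_rng(seed)
        v = np.zeros((n_steps, 3))
        for t in range(1, n_steps):
            # Mean-reverting drift plus Gaussian forcing.
            v[t] = v[t-1] - (v[t-1] / tau) * dt \
                   + sigma * np.sqrt(2 * dt / tau) * rng.standard_normal(3)
        return v

    wind = np.array([5.0, 0.0, 0.0]) + simulate_gusts()  # mean wind + gusts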


Aerodynamic simulation is a really tough problem to take on. I'd venture to guess that's in part because there's no computationally tractable, complete treatment of fluid dynamics. From what I hear, X-Plane comes really close. But yes, skill transfer aside, I'd guess any airborne vehicle trained in a simulator that lacks environmental effects, ground effect, induced drag, etc. in its physics calculations is not going to perform well in the real world.


If you're interested in this stuff but don't want to spend the time setting everything up, you should check out Amazon DeepRacer[0]. They have a whole environment set up to do reinforcement learning (although right now it's only their tracks). You can get your feet wet with the free tier.

[0] https://aws.amazon.com/deepracer/


How is this different from CARLA? Just curious.


Flight sim is totally out. Drone sim is what's up now! :D


For C# devs, there’s Unity integration as well.


Any companies working on this?


(Disclosure: I work at one)

There are a few companies working specifically on simulation for the autonomous vehicle domain. AirSim is unique because you can do both ground and air. From the business side of things, I’ve seen enterprises find it a good way of proving the value of simulation. The company I work for, CVEDIA, develops SynCity, which we use both for simulating autonomous systems and for generating training data for object recognition and classification tasks.


Neat!


Anyone else want to use this but can't because they use a Mac?


What is the practical application of this?



