
It can work - I won't re-post everything here, but I've already mentioned in this thread how I used my GTX 750 Ti for training a CNN based on NVIDIA's End-to-End model to self-drive a virtual vehicle around a simulated track (this was part of Udacity's Self-Driving Car Engineer Nanodegree).

Udacity supplied the simulator; I set up CUDA and such to work with TensorFlow and Keras under Python. I had to drop the resolution down on the simulator (windowed at 800x600, IIRC), and training was done outside the simulator - recording datasets is a mode built into the simulator itself.

The resulting model was actually fairly small (under 300 KB): I scaled down the input images (originally 640x480, down to 160x120) to limit the number of initial parameters to the CNN, then applied aggressive dropout and other tricks to further keep things in check through the layers as I trained (I could have used batch processing, too). The resulting model worked well with the simulator afterward - it had no problem keeping up, with memory or anything else.
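To give a rough sense of why downscaling the input matters so much for model size, here's a back-of-the-envelope parameter count. The layer stack below is a hypothetical NVIDIA-style arrangement I'm assuming purely for illustration - it is not the exact network from the project above:

```python
# Rough parameter count for an NVIDIA-style end-to-end CNN, comparing
# full-resolution input (640x480) against downscaled input (160x120).
# Layer sizes here are illustrative assumptions, not the actual model.

def conv_out(size, kernel, stride):
    """Output spatial size of a 'valid' (no-padding) convolution."""
    return (size - kernel) // stride + 1

def count_params(h, w, c, conv_layers, dense_layers):
    """conv_layers: list of (filters, kernel, stride).
    dense_layers: list of fully-connected widths after the flatten."""
    total = 0
    for filters, k, s in conv_layers:
        total += filters * (k * k * c + 1)   # weights + biases
        h, w, c = conv_out(h, k, s), conv_out(w, k, s), filters
    units = h * w * c                        # flattened feature map
    for width in dense_layers:
        total += width * (units + 1)
        units = width
    return total

# Hypothetical stack: four conv layers, then FC down to one steering output.
convs = [(24, 5, 2), (36, 5, 2), (48, 5, 2), (64, 3, 1)]
dense = [100, 50, 10, 1]

n_big = count_params(480, 640, 3, convs, dense)    # original resolution
n_small = count_params(120, 160, 3, convs, dense)  # downscaled 4x each axis
print(n_big, n_small)
```

The conv layers contribute the same parameters either way; it's the first fully-connected layer after the flatten that dominates, because its size scales with the final feature-map area. Shrinking the input 4x per axis cuts that layer's parameters by well over an order of magnitude, which is why downscaling (plus dropout on the dense layers) keeps the saved model so small.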



