Show HN: Vehicle Detection – using machine learning and computer vision (github.com/tatsuyah)
153 points by tsy on Oct 30, 2017 | 21 comments



My friend and I did something similar using Haar Cascades for vehicle detection and Hough transforms to detect the lane lines. We used that to analyze dash cam videos and calculate following distance between the cars in the video.

The final product is here: http://asherman.site/hazcam/ and I also released the trained Haar classifier separately on my own GitHub in case anyone has other useful projects for it. It was a lot of work hand-selecting and cropping 400+ photos of the rear ends of cars, so I figured I might as well share the result: https://github.com/pddenhar/OpenCV-Dashcam-Car-Detection (a rough sketch of the pipeline is below).
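
For anyone curious what that pipeline looks like in code, here is a minimal OpenCV sketch along the same lines (the cascade file name, frame path, and thresholds are placeholders; this is the general approach rather than the actual hazcam code):

    import cv2
    import numpy as np

    # Trained Haar cascade for car rear ends (e.g. the one in the repo above).
    cascade = cv2.CascadeClassifier("cars.xml")
    frame = cv2.imread("dashcam_frame.jpg")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Vehicle detection: the cascade returns (x, y, w, h) boxes.
    cars = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4, minSize=(40, 40))

    # Lane lines: Canny edges followed by a probabilistic Hough transform.
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=100)

    for (x, y, w, h) in cars:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
    cv2.imwrite("annotated.jpg", frame)

A following-distance estimate can then be derived from the apparent size of each box given a rough camera calibration (one common approach, not necessarily what hazcam does).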


This looks like it might be exactly what I'm looking for. I am quadriplegic and use a powered wheelchair, and I live in a small town with narrow roads and can't tell how far away a car is behind me. I can't use regular wing mirrors as they would make the wheelchair too wide, and I can't quite get the angle on a mirror to see behind me, so I thought I would use a camera on the back of the headrest which, with some computer-type cleverness™, could tell me if I'm about to be mown over.

I was recently hit by a Mercedes SLK, and it's not something I'm anxious to repeat.

At the moment I'm thinking of a GoPro streaming wirelessly to a Raspberry Pi, which can do the visual processing and essentially warn me if something big and metal is coming up behind me too quickly. I've got the hardware, but I really don't have the software chops, so if someone could point me in the right direction I would really appreciate it!

Thanks!


By all means, I respect wanting to roll your own version of this. But if you would like to buy something to detect vehicles approaching from behind, there is a product that already exists and is commonly used by cyclists. You may need a bike computer as well, though, so it might get a bit expensive...

https://www.amazon.com/Garmin-Varia-Rearview-Radar-Light/dp/...


You're absolutely right that it does look like it would do the job, but once I start throwing in a computer as well it may start getting prohibitively expensive, as you suggest. Definitely something to keep in mind though, thanks!


Looks like you can get a dedicated display with the Varia for an additional $100. $300 total is still cheaper than a GoPro, Raspi, and all the additional stuff you will need to make it work together...

https://buy.garmin.com/en-US/US/p/518151/pn/010-01509-10#box


How about using the Pi Camera module instead? No wireless mess, and it's much cheaper than a GoPro. I'm not even sure the GoPro video stream is accessible outside their proprietary app.

There are a lot of tutorials online on developing computer vision on the Pi (a minimal capture-and-warn sketch is further down).

However, it sounds like the K.I.S.S. solution for you would be a backup camera or something, so you can see what's happening behind you instead of relying on CV (computer vision), especially if you don't have the software chops to take on the CV challenge.
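
If you do go the Pi + CV route, the core loop can be quite small. A minimal sketch, assuming OpenCV, a camera that shows up as /dev/video0 (e.g. the Pi camera via the V4L2 driver), and a trained car cascade like the one shared upthread (file names and thresholds are illustrative only):

    import cv2

    cascade = cv2.CascadeClassifier("cars.xml")   # hypothetical cascade file
    cap = cv2.VideoCapture(0)                     # camera at /dev/video0

    prev_area = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        cars = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
        # Crude proximity cue: if the largest detection is growing quickly
        # between frames, something big is closing in from behind.
        area = max((w * h for (_, _, w, h) in cars), default=0)
        if prev_area and area > prev_area * 1.3:
            print("WARNING: vehicle approaching fast")   # hook up a buzzer/vibration here
        prev_area = area

It would need real tuning (frame rate, smoothing over several frames, a proper alert output), but it shows how little glue code sits between the camera and a warning.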


The reason I shied away from using the Raspberry Pi camera module is that I live in what is essentially a wet hole in the ground in Yorkshire: it's consistently wet, and when it's not wet it's damp, and when it's not damp it's usually flooding. So that's why I thought the GoPro would be a better choice.

The Raspberry Pi is hopefully going to be the ideal machine to do the heavy lifting, and the main reason I am open to relying on CV is that there is nowhere to fit a screen on the front of my wheelchair. I tried streaming wirelessly from a GoPro to my iPhone and the battery life was absolutely abysmal.

Maybe if the CV detected a problem it could make a Fitbit or something similar vibrate to alert me, with different numbers of vibrations for different situations, etc. Although it is ENTIRELY possible I'm overthinking this!


Chapter five of "OpenCV for Secret Agents" has a night-time version that recognizes headlights. The author knows OpenCV and I found the style of the book to be less dry than your usual Packt tutorial.


Thanks, I will definitely have a look at that, as I'm going to need some sort of night-time capability.


I'm going to drop you an email on this - Hoping I can help out!


Cool, you can grab me at HN@robotsandcake.org. And thanks. :-)


This is one of the projects in term 1 of the Udacity Self-Driving Car Nanodegree. I've done it; it's simple and does not need deep learning. But if you want to go fancy, you can use image segmentation: http://blog.qure.ai/notes/semantic-segmentation-deep-learnin...


Good work! I did this project as well about a week ago, and will shamelessly share my own write-up on it: https://medium.com/towards-data-science/teaching-cars-to-see...

The HOG + SVM method is quite slow and not as accurate as a deep learning approach (a rough sketch of that baseline is below). Before jumping to semantic segmentation, I recommend re-implementing this project, or more generally solving this problem, using a region-based convolutional neural network (R-CNN) architecture like Faster R-CNN [1], or YOLO [2], for instance.

[1]: https://arxiv.org/abs/1506.01497 [2]: https://arxiv.org/abs/1506.02640
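
For reference, the HOG + SVM baseline mentioned above boils down to roughly the following (a minimal sketch using scikit-image and scikit-learn; car_patches / notcar_patches are hypothetical lists of labelled 64x64 grayscale crops, and the HOG parameters are just the common defaults, not necessarily those of the linked project):

    import numpy as np
    from skimage.feature import hog
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC

    def extract_features(patch):
        # 9 orientations, 8x8 cells, 2x2 blocks -- the usual HOG setup.
        return hog(patch, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), feature_vector=True)

    X = np.array([extract_features(p) for p in car_patches + notcar_patches])
    y = np.array([1] * len(car_patches) + [0] * len(notcar_patches))

    scaler = StandardScaler().fit(X)
    clf = LinearSVC().fit(scaler.transform(X), y)

    # At inference time this classifier gets evaluated on thousands of sliding
    # windows per frame, which is where most of the speed penalty comes from.

A detector like Faster R-CNN or YOLO replaces the hand-tuned sliding windows and features with learned convolutional features and (in YOLO's case) a single forward pass per image, which is where the speed and accuracy gains come from.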


Totally agree with your point on HOG + SVM; I think it has been made obsolete by convolutional neural networks.

I wrote a real-time human detection library [1] for a robotics project that used HOG + a simple neural net for classification. While it worked okay, I wasn't happy with the precision (around 90%) and decided to try out a simple convnet in Torch (doing the classification on depth images instead of HOG descriptors; a rough sketch of that kind of classifier is below). The Torch version was slightly slower on a CPU, but both precision and recall jumped up drastically.

[1]: https://github.com/seemk/FastHumanDetection
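
For concreteness, the kind of small convnet classifier described above could look something like this in (Python) PyTorch -- a rough sketch only, not the actual FastHumanDetection code, with illustrative layer sizes and a single-channel depth patch as input:

    import torch
    import torch.nn as nn

    class DepthPatchNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 13 * 13, 64), nn.ReLU(),
                nn.Linear(64, 2),   # person / not person
            )

        def forward(self, x):       # x: (batch, 1, 64, 64) depth patches
            return self.classifier(self.features(x))

    logits = DepthPatchNet()(torch.randn(8, 1, 64, 64))   # -> (8, 2)

Even a network this small learns its features directly from the depth data, which is where the jump in precision and recall over hand-crafted HOG descriptors tends to come from.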


Very good summary.


Nice! Check out mine. https://youtu.be/l7zqSn8HCXg


Also interested in a writeup!


Got a write up?


This might be the wrong place to ask, but my search skills are failing me as I try to understand what "Space binning" (referred to in the README of the linked project) is. The top hits seem to all point back to the original article.


I noticed you are standardizing your dataset before your train/test split. This is an example of information leakage, which causes your model to overfit by learning the test example distribution: http://www.eggie5.com/97-model-evaluation-information-leakin...
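
For anyone wanting the leak-free version: fit the scaler on the training split only, then apply it to the test split. A minimal scikit-learn sketch (X and y stand in for the feature matrix and labels):

    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                        random_state=42)

    scaler = StandardScaler().fit(X_train)   # fit on training data only
    X_train = scaler.transform(X_train)
    X_test = scaler.transform(X_test)        # transformed, never fitted on

That way the test set's distribution never influences the preprocessing, and the reported accuracy stays honest.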


Noticed a couple typos:

Second paragraph - 'argures' should be 'argues', and 'resonse' should be 'response'.



