> YOLOv3 is a good detector. It’s fast, it’s accurate. It’s
not as great on the COCO average AP between .5 and .95 IOU metric. But it’s very good on the old detection metric of .5 IOU. Why did we switch metrics anyway? The original
COCO paper just has this cryptic sentence: “A full discussion of evaluation metrics will be added once the evaluation server is complete”. Russakovsky et al report that humans have a hard time distinguishing an IOU of .3 from .5! “Training humans to visually inspect a bounding box with IOU of 0.3 and distinguish it from one with IOU 0.5 is surprisingly difficult.” [18] If humans have a hard time telling the difference, how much does it matter?
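(Aside for readers new to the metric being debated above: IOU measures the overlap between a predicted box and a ground-truth box, and COCO's headline AP averages over IOU thresholds from .5 to .95, while the older metric only requires IOU ≥ .5. A minimal sketch, with an illustrative function name and corner-format boxes assumed:)

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp at zero: non-overlapping boxes have no intersection area.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

For example, two 10×10 boxes shifted by half their width overlap in a 5×10 region, giving `iou((0, 0, 10, 10), (5, 0, 15, 10))` = 50 / 150 ≈ 0.33 — a detection the old .5-IOU metric would reject but that, per the quote, a human may struggle to tell apart from a .5 match.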
> But maybe a better question is: “What are we going to
do with these detectors now that we have them?” A lot of the people doing this research are at Google and Facebook. I guess at least we know the technology is in good hands and definitely won’t be used to harvest your personal information and sell it to.... wait, you’re saying that’s exactly what it will be used for??
> Oh. Well the other people heavily funding vision research are
the military and they’ve never done anything horrible like killing lots of people with new technology oh wait.....
> I have a lot of hope that most of the people using computer vision are just doing happy, good stuff with it, like counting the number of zebras in a national park [13], or tracking their cat as it wanders around their house [19]. But computer vision is already being put to questionable use and as researchers we have a responsibility to at least consider the harm our work might be doing and think of ways to mitigate it. We owe the world that much. In closing, do not @ me. (Because I finally quit Twitter).
> 1 The author is funded by the Office of Naval Research and Google.
For people who don't get how the last statement ties in: in the actual paper, the third line iudqnolq quotes has a superscript 1 on it. The PDF doesn't allow you to highlight the 1, so the punchline isn't as strong.
Incredible. I wish people would take themselves less seriously like this. I would never have read that paper and learned a bit about that space if it hadn't been such an engaging read.
Is there a GitHub repo for the results cited in this paper? If not, I have one point of confusion, despite the paper being awesomely written. We need more papers like this. I dislike the lifeless way new research is usually communicated; I have to force myself to read those papers, but not this one.
Now coming to the question:
Is the input size the same as the output size, and is that why the box coordinate prediction (bx) corresponds to the cell offset (cx) plus the prediction (sigma(tx)), and similarly for y?
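For context on the question: in the YOLOv3 paper the prediction is made per grid cell of the output feature map, not in input pixels. (cx, cy) is the top-left corner of the cell in grid units, and sigma(tx) constrains the predicted center to lie inside that cell; width and height scale an anchor prior. A minimal sketch of those equations (illustrative function names, anchors assumed to already be in grid units):

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Decode raw network outputs into a box center/size in grid units.

    (cx, cy): top-left corner of the grid cell owning this prediction.
    (pw, ph): anchor (prior) width/height in grid units.
    """
    bx = cx + sigmoid(tx)   # sigmoid keeps the center inside the cell
    by = cy + sigmoid(ty)
    bw = pw * math.exp(tw)  # size is a multiplicative tweak of the prior
    bh = ph * math.exp(th)
    return bx, by, bw, bh
```

So input and output sizes need not match: to recover pixel coordinates you multiply the grid-unit result by the stride (input size divided by grid size). For example, `decode_box(0, 0, 0, 0, 3, 4, 2, 2)` gives a center of (3.5, 4.5), i.e. the middle of cell (3, 4), with the anchor size unchanged.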
"Algorithms" doesn't seem to be the right term; this seems to be more about tooling, like this wrapper for TensorFlow and YOLO training that runs a variety of monitoring tools for you:
This is impressive only because they are in automotive. I worked in that industry for years; they are generally at least a decade behind the rest of the software world.
Tesla isn't, but this is one of the biggest problems I have with Tesla that I don't hear many people talking about. Tesla is more like a laptop with wheels than a car with a computer. That's fine, but I'm not going to upgrade my car on the same schedule that I upgrade my laptop. It would be nice if Tesla allowed you to buy a new "shell" and transfer over your old battery pack. I wonder what the cost breakdown is between the battery pack and everything else.
Sorry for the negative vibes, but this looks a lot like fake buzz to create some credible BMW-trained profiles: none of the profiles linked with the project is older than 5 days...
At least on using the age of git/GitHub repos to determine the legitimacy of a project/effort: I would say it's not uncommon for some groups to time the release of their code with the publication of some announcement of it. I'd also say it's not unusual to adjust (for example, collapse) the git repo history when publishing code.
This is great. It would be good if all life-critical software was open-sourced like this. Maybe something that should in fact be required?
But: “in turn, we receive support in taking our AI software to the next level of development”
I bet they are mainly looking for help. They are trying to figure out why their software is not capable of doing what it is supposed to do, and hoping someone can assist.
I’m ready to consult them on this if they are open to hearing the bad news first...
The article could be clearer on this, but when it refers to "production", I think it means factories and logistics. (At least that's how I put together "production" in the headline with "implementing next-level production processes throughout its plants" from the first sentence.)
In other words, I think they use these algorithms in manufacturing systems, and they aren't putting this software into the cars' computers.
Looks like you’re right on this. Probably wishful thinking on my part that at least one manufacturer would be doing the right thing wrt autonomous vehicles.
I think having the source available for inspection is a very good goal, but there is the issue of "we invested millions into it and now anybody can copy it", which would be the de facto result even if they retained copyright on it.
Why not? If a person perceives proprietary software to be harmful, as a lot of people do, then it only follows there should be laws against it. Or, alternatively, if they see it like they do speech, others should be able to decompile it, comment, and redistribute it without consequence.
https://pjreddie.com/media/files/papers/YOLOv3.pdf