
Although I’ve never tried it, I can’t imagine it would be that difficult to find an editor for the specific language to take a look at it. I doubt the editor would need so many hours of work that it would become a significant expense.


I hear this claim a lot, especially from Apple brand enthusiasts, but is there really any data supporting this?

From my memory, Apple is often just late to the party, especially with 4G and LTE.


A friend of mine has had great success blocking the ads with Pi-hole.


Huh, always wondered why my LG TV never got ads and just remembered my entire house is under NextDNS's blocking.


Totally off-topic, but which phone had 5G in October 2019?


How did you feel about the project being shut down? Did you feel like you still had work to be done or was there a feeling that this just isn't going to work?


The usual disclaimer: these views are my own, and do not represent Makani, abc, etc.

I was terribly disappointed the project was shut down. The engineering problems were unique and challenging. And the team was absolutely world class.

I felt like we could do it. And I wish we were still at it. But that's purely from the technical side. I wasn't involved in things like budget, and whether it would be economically viable.


I never knew about this project. It would have been my dream job.


It was a dream job! Not sure what to do next.


I don't think anyone is arguing that taking MOOCs and taking classes in person are exactly equivalent. I think the argument is that you may be able to get what you wanted by taking the correct online classes and not attending college (which is incredibly expensive).

From my personal experience in academia, both as a student and as a lecturer, I honestly think most students don't really benefit from being present in person.

Also, about the inroads to places like FAANG: almost all of those companies require you to go through the entire interview process even if someone recommends you, and landing interviews isn't that hard. I actually know plenty of people who self-studied their way into those companies without degrees.

If you do manage to stick to the program and go through everything on your own (which requires an incredible amount of discipline), in my opinion the biggest issue you would face right now is bias and stigma.


Why is there a need to blame anyone? The goal is for students to learn, not that they show up in class out of duty. If they found a more efficient way to learn from the class that involves not showing up in person, I think they should be encouraged to do it.


I have a fundamental belief that delivering a lecture is an active experience. I use student reactions to questions or comments to gauge how well a class is receiving the information I am presenting. I also use it to pinpoint possible weaknesses of the class (did they miss the point of a prerequisite class, for example).

If I am not doing that, what's the point of a "lecture"?

Otherwise, you might as well read a book. It's faster than a lecture, likely more thorough, and not tied to a specific timeslice.


You may want it to be an active experience, but it will only be an active experience for those who choose to actively participate. Some students just don't learn that way and likely won't actively participate even if they show up to class. Those who want to actively participate will show up to class even if they have the option not to. There are other forms of getting feedback and interacting with students: email and office hours, for example. Also, I disagree that textbooks are faster. Maybe they are if you are an expert in the topic, but if you are not (and hence are taking a course on it), in my experience the average lecture is still easier and quicker to learn from than reading the textbook (except for exceptionally well written textbooks).

I was one of those students who didn't learn by actively participating, and I always felt like I had trouble learning in class. The school I did my PhD in started using online tools for some of the courses and I found that I learned way better when I watched the online lectures. A big issue I always had in class is that when the lecturer explains something that is obvious to me in great detail, or just speaks slowly in general, I find myself daydreaming, and once I snap out of it I may have fallen behind and not be able to follow the rest of the lecture well. Online lectures at 2x speed are way more interesting for me and I almost never daydream. If I feel tired I can pause for a couple of minutes and stand up and stretch or walk around the room. If I miss a detail I can go back and replay it. I would still attend office hours and email the professors though. What I learned from the online lectures also stuck with me much longer.

At the end of the day, students are paying an enormous amount of money to learn, and I feel it shouldn't matter if the way they learn is by not showing up to class. Especially if they learn it better by not showing up to class.


If I could have when I was in college, I totally would have just watched the 100+ person lectures online. If I wanted clarification, office hours were always more useful, and most of my teachers and TAs were very pleasant to engage in that context. Thankfully most of my classes were more on the 20-30 person scale (all of them in CS), where attending felt much more valuable. The few 12-person classes I had were undoubtedly the sweet spot though, with enough peers around to make discussion interesting but much more engagement between students and the professor. Not that I expect most classes to be at that size; it would just be a better world if they magically could be.


I believe this is exactly why pjreddie quit computer vision research. It must kill him to see such projects based off of his work.


Maybe someone else is also interested in some background on this: https://news.ycombinator.com/item?id=22390093


Thank you.


This detects any face; it does not identify people. It’s for stuff like autofocus, etc.
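To make the distinction concrete, here is a minimal sketch of detection only (my own illustration using OpenCV's stock Haar cascade, not the model from the linked project; the file names are placeholders). The output is just bounding boxes, with no notion of whose face is inside them:

    import cv2

    # Sketch only: a stock Haar-cascade detector locates faces but says
    # nothing about who they belong to; identification would need a
    # separate model. "photo.jpg" is a placeholder input.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("photo.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in boxes:  # output is just bounding boxes
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("faces_marked.jpg", img)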


Detecting where a face is in a picture is the necessary first step before detecting whose face it is.


Taking a picture is a step before that.


Having people with faces is a step before that.


Having people is a step before that.


This isn’t Reddit


This isn't Digg


And purchasing a knife is the step before stabbing someone. Your argument is ridiculous.


How is it ridiculous? The fact that purchasing a gun is the first step of shooting people is a good enough reason for most countries to ban the purchase of guns...


Yet they don't ban kitchen knives because there's legitimate uses for kitchen knives. Thus the point that you utterly missed.


I don't think "because there's legitimate uses" is the differentiating factor, since that implies only kitchen knives have them. Self-defense is a legitimate use for owning a gun, for example.


In many countries self-defense is NOT a legitimate use for owning a gun.


Many of these countries have exterminated their large predatory animals.


Guns also have legitimate uses.


iirc the UK does ban kitchen knives from being carried publicly.


It seems the tradition of bringing your own dining knife has passed. [https://www.atlasobscura.com/articles/medieval-knives]


There is no argument; why are people jumping to extrapolation? I'm pointing out that this DNN can be (and is being) used to identify people based on the ("any") detected faces. The possible usage is certainly related to the parent comment's concern.


Autofocus can be for anything - phone cameras, surveillance cameras, drone missile targeting systems, etc.


Cars can be used for anything - moving people from home to work, robbing banks, running people over, etc.


Yes, and pjreddie seems to have concluded computer vision is mostly (or too often for their liking) used for the digital equivalent of those bad things.


I think pjreddie's concerns are extremely relevant. However, he's not the only one working on things like this, thus it's unlikely that research and development will stop, although I certainly think such development is ethically questionable. In some ways this seems similar to the ethical problems facing the scientists who worked on the nuclear bomb. I just hope to God that this tech will be used for good rather than bad, but the way things are going with political censorship (government sponsored or otherwise) and people of opposing camps doing their best to dox political opponents, let's just say I'm not too optimistic...


> it's unlikely that research and development will stop

I know you didn't make this argument here, but I still want to point out that that's ethically irrelevant for his decision.

Or the other way around: "Someone else would have done it" is not a defense when you've built something that was clearly gonna be used for Bad Things(TM).


> Yes, and pjreddie seems to have concluded computer vision is mostly (or too often for their liking) used for the digital equivalent of those bad things.

Indeed, there are many nefarious applications of computer vision. But applications to the medical industry are plentiful too.

I see weighing up the net benefit as a tricky and personal matter.


That's fine - that's a personal choice he is free to make. But I completely disagree with it. I also don't think that unencumbered AI research is going to lead to the overthrow of the human race by machines like Elon does.


Making cheap computer vision is just as dangerous to the "tyrant" as to his supposed victims. You can already make a plausible anti-president suicide drone, A Ticket To Tranai style.


It's not really for autofocus. For autofocus, you need a model that can detect blurred images. You also need to operate on 36-42 bit data.

Furthermore, autofocus has already progressed from face detection to eye detection.
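(For illustration only, and not how camera firmware actually works: one classic software-side sharpness heuristic is the variance of the Laplacian, where a low score suggests a blurred frame. The file name below is a placeholder.)

    import cv2

    # Toy sharpness measure: variance of the Laplacian. Higher variance
    # means more high-frequency detail, i.e. a sharper image. Real
    # autofocus works on raw sensor data and is far more involved.
    def sharpness_score(path: str) -> float:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    print(sharpness_score("frame.png"))  # "frame.png" is a placeholder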


Autofocus, autotarget, aimbots...


Could you elaborate? What is the problem with the linked project? Training a slightly faster, smaller and less accurate version of an existing model?


https://pjreddie.com/darknet/yolo/

Is it that pjreddie used horses, dogs, and bicycles as training data? Not realising that his technology could also be used on human faces?


> Not realising that his technology could also be used on human faces?

I'm not sure how you got to this idea, but it's just not plausible.


He said: "I stopped doing CV research because I saw the impact my work was having. I loved the work but the military applications and privacy concerns eventually became impossible to ignore."

Initial good results on CV doesn't mean that you realize all the ways it'll start to be used and the implications thereof.


IMO this is ethically simpler than many like to believe.

There are existing power structures (planned ones and emerged ones) in this world and the technology we create can either be used to reinforce them or to question them. Sometimes it is both and things cancel each other out and move on a sideways trajectory – but in the case of CV, it is quite clear who will benefit: those in power, those who need to quantify, control and punish the human element, but don't have the manpower (=legitimacy?) or funds (=priority?) to do so manually.

I get that working in CV is interesting and cool stuff, but the collective suffering it might help create and sustain is something one should seriously think about as well.


Exactly. CV is interesting and can be completely harmless. But the benefits of the technology for the average person are extremely small relative to the benefits it brings to any potential oppressive power. Mass surveillance tech can be convenient, but it's a deal with the devil, and I think we sometimes willfully ignore that under the guise of a perceived amorality of progress. "It's just science, it's neutral and you can also use it for good" can sound like a good argument that I usually even agree with.

But in this case it's very simple:

"Good guys" using of CV gets them things like good auto sorting in Google photos

"Bad guys" using CV can reduce the complexity of creating a fully Orwellian, big brother like surveillance state from "absurdly complex to implement and impossible to maintain" to "we can already put in place a solid implementation today and it will get better by the day".

Now, I used to work in CV and, as you said, I get how great and exciting the underlying tech is. But I definitely lean more towards fear than excitement these days. I also realize that it's already everywhere, with heavy research efforts, and that you can't ever stop it at this point. But that really goes to show that the YOLO creator was right.

Think about it: what do the Chinese people gain from CV or face recognition right now? Maybe cool filters. The Chinese government? Unimaginable levels of surveillance and control over its entire population, and it's just getting started.


Just like nuclear energy or genetic engineering, once the technology is there you can't put it back. The problem the Chinese people have is not with CV, it's with their government. The way to go is having strong regulations a la GDPR; otherwise a black technology market will appear if there is enough incentive.


Also, if the state-of-the-art in these systems is in the public domain, people on the 'counter-CV' side will be able to experiment and find effective countermeasures. If all the knowledge is kept secret (i.e. it's only researched by secretive organisations) the average person will have less understanding of the technology and its limitations.


Similar to the "security by obscurity" conundrum.


You're saying Chinese gov, but the US gov or UK gov would do the same.

I agree that society needs to come together as a whole and regulate this into law, because that's how bad actors from governments can be stopped. At least in democratic countries.


I was answering the previous comment, but yes, you are also right, though I tend to think citizens' rights are more at risk in China. Anyway, that also cannot be taken for granted in the US or Europe; regulation must be agreed upon and evolved over time.


> But the benefits of the technology for the average person are extremely small relative to the benefits it brings to any potential oppressive power.

CV is a vitally-necessary component of self-driving car technology. Self-driving cars could save more than 30,000 lives each year in the US alone.

Even the very wise cannot see all ends, so why try? Work on what interests you.


Come to think of it, is there any tech that we would erase from history if we could?

To qualify, it needs:

- To have a negative net effect. Explosives for example don't qualify. They are certainly used as weapons and for all sorts of nasty reasons, but they are invaluable in many areas, including safety systems.

- Not to be an essential stepping stone for other, positive discoveries. For example the V2 missile made space exploration possible.

We could put all sorts of weapons in the list, with nuclear bombs in a top position. But think about it. Hydrogen bombs didn't kill anyone. Although the idea is debatable, they may even have acted as deterrents, preventing conflict. As for Hiroshima and Nagasaki, it basically ended the war; who knows how long it would have continued otherwise, with potentially more victims than the nuclear bombings caused. More generally, the most technologically advanced countries are now living in an unprecedented time of peace, despite having the most advanced killing machines ever.

In the end I don't see any tech that I would put in that list because of abuse. The ones I would put there would be of the "oops, didn't know it was bad, let's stop using it" kind. Leaded gasoline comes to mind.


>As for Hiroshima and Nagasaki, it basically ended the war; who knows how long it would have continued otherwise, with potentially more victims than the nuclear bombings caused. More generally, the most technologically advanced countries are now living in an unprecedented time of peace, despite having the most advanced killing machines ever.

As a German, I'm always amazed how casually Americans are willing to forgive themselves and even rationalize their state's war crimes as good.


> To qualify, it needs:

> - To have a negative net effect. Explosives for example don't qualify...

I was more idealistic about this kind of thing at one time, but it really is easy to rationalize the "Work on what you want to, don't worry about the end use" attitude that I have now. All I needed to convince myself was a good example.

Imagine that it's the early 1980s. Reagan is in office, the Cold War with the Soviet bloc is still very much a thing, and Star Wars is ramping up. Every other week it seems that somebody proposes yet another batshit-crazy weapons system. You're an engineer with progressive political views, and your bosses are asking you to work on a vast, global satellite network that will allow the military to locate both targets and assets with pinpoint accuracy anywhere on Earth. As far as you're concerned, ol' Ronny Raygun can fuck right off, and you tell them as much. "I'm not working on anything like that!"

20 years later, it turns out you missed your chance to get in on the ground floor of the most important public utility since the telephone system, all because you could only see the destructive uses for the technology.

For me that's hypothetical since I was nowhere near old enough to be employed at the time, but it's easy to say the same thing about applications like UAVs, autonomous vehicles, and ML/AI in general. CV is nowhere near enough of a defense-centric technology to justify refusing to work on it, IMO. Someone who refuses to work on CV on ethical grounds is walking away from their share of our technological future, just like the hypothetical engineer who refused to work on GPS.

> We could put all sorts of weapons in the list, with nuclear bombs in a top position. But think about it. Hydrogen bombs didn't kill anyone. Although the idea is debatable, they may even have acted as deterrents, preventing conflict. As for Hiroshima and Nagasaki, it basically ended the war; who knows how long it would have continued otherwise, with potentially more victims than the nuclear bombings caused. More generally, the most technologically advanced countries are now living in an unprecedented time of peace, despite having the most advanced killing machines ever.

I tend to agree with the general notion that nukes are a net win for world peace, but it's debatable whether the net death toll due to warfare has been that much lower in the post-Hiroshima age. Superpowers just conduct proxy wars nowadays instead of beating up on each other in person. If you added up the civilian toll of those proxy wars, it would probably be right up there with many WWIII scenarios, but since those conflicts are happening somewhere else besides major American or Soviet cities, nobody much cares.

The other concern I have is a relatively new one: people are going to forget what those things are and what they do. Eventually, the last person to see a nuclear explosion in person will die of old age. Long before that happens, morons with microphones will deny that Hiroshima and Nagasaki ever happened, just like they do now for events ranging from the Apollo landings to Sandy Hook. Others will take the position that nukes are just bigger versions of regular bombs, nothing that special.

So long term, who knows... maybe it would be better to put that genie back in the bottle if we could. Dunno, and in any case, it's hardly the same thing as CV. I can understand if someone is reluctant to work on nuclear weapons technology, but that understanding stops well short of refusing to work on CV.


> Even the very wise cannot see all ends, so why try? Work on what interests you.

The person I murdered could've gone on to be the next Hitler, so just do whatever interests you.

This logic isn't very helpful, is it.


No, you're right, that logic definitely isn't helpful. Here on Earth, murder is illegal while CV research isn't.


If that is what interests you and you think it's beneficial to you, then yes, you should do it. Other people who are not interested in it won't do it.


>it is quite clear who will benefit: those in power

I think it's more likely that you want to detect a person so you don't hit them with a car than that you're trying to hit them with a drone missile. Detecting cancer with CV, and other improvements to diagnostics, also save lives. If you could be working on these technologies and stop, your decision could cost lives.


If you are working on technology that detects cancer that is true. If you are working on technology that detects people, this is a harder question.


The same neural network can be trained for both problems. Advances in medical image neural networks translate to advances in people detecting neural networks.


Generally speaking this isn't super true; the objects are different enough that the same networks aren't best at both.


Of course, they are not directly used without changing the training data, etc. However, if we look at the big picture to make a general statement, then the advances brought by Computer Vision on natural images often help push research on medical images forward, albeit with a year (or two) of delay. One such example is U-Net (https://en.wikipedia.org/wiki/U-Net).
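As a rough sketch of that "same network, different training data" point (my own toy example with torchvision, not taken from any of the projects discussed here; the dataset and class count are made up), transfer learning often amounts to swapping the head and retraining on the new domain:

    import torch
    import torch.nn as nn
    from torchvision import models

    # Reuse a backbone pretrained on natural images and retrain only the
    # classification head on a (hypothetical) medical dataset.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)  # e.g. lesion vs. no lesion

    for p in model.parameters():
        p.requires_grad = False          # freeze the pretrained features
    for p in model.fc.parameters():
        p.requires_grad = True           # train only the new head

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    # ...training loop over the medical dataset would go here...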


>but in the case of CV, it is quite clear who will benefit: those in power, those who need to quantify, control and punish the human element, but don't have the manpower (=legitimacy?) or funds (=priority?) to do so manually.

Technology that exists for surveillance can also be turned on the surveillants. The most relevant case probably being police abuse being caught on smartphone cameras. These tools don't just discipline citizens, they also discipline the police. If I'm in a room with someone in a position of authority far above me, I'd rather have the camera on both of us than none of us.

So it's not actually that simple, and I don't see opting out as realistic or helpful, because other benefits these technologies bring, for example security, will always convince the population to drive adoption forward.


I work in FR, at a leading company. I do not write any ML code; I'm an applications developer with a background in visual effects.

My attitude is that being there in the developer group, influencing how the applications behave, interfacing with our management, sales, and clients, and being a voice unafraid to raise ethics questions within these groups is my way of knowing the state of this dangerous technology. I'd rather be there influencing its development and use than be on the outside, frankly blind to what its in-deployment capabilities are.


You can only influence these things so far. The company ultimately needs to make money, and sometimes the money is coming from people who want drone-guiding or surveillance technology, so if your suggestions lead too far away from that, you'll be fired.


Then I'll be fired.


I mostly agree with your thinking, but I think it is not so simple to decide.

If we focus on the reinforce-vs-question part, it is a kind of prisoner's dilemma. If we assume governments (the reinforcers) will work on this technology anyway, and let's assume they will end up with some lower-quality version (say 50/100), the questioners have 2 options:

- don't work on this, and accept 50/0

- or work and improve both sides, 70/20

Without deeper context it's hard to decide which will be better for the questioners.

Would you prefer to have a gun against a rifle, or no weapon against a gun?


The question is, if we are so aware of the dangers posed by these power structures, agreeing that the technology itself is "apolitical", then why have we allowed power to be concentrated so highly in a given institution or individual? It seems that, if we are concerned with ethics, we should be pushing much more for the dilution of power back to the people, rather than "taking sides" in terms of who to work with or what to work on.


This is about implications - that's not the same as not realising that a face is an object.


Just trying to provoke an answer with a little humour ...

It is obvious to anyone that one of the primary uses of computer vision is to watch other humans at scale. This cannot be surprising to anyone in the field, yet now ethical concerns mixed with politics are through the roof.

Maybe we can be more precise about this?

The work here is a technical achievement, but there are some weird comments here which I think have something to do with the author being Chinese.


I think that's the wrong approach. You don't stop the arms race by not investing in arms, you invest in countermeasures instead.


I also believe it was idiotic.

Focusing too much on the negative aspects of things will lead you nowhere.

The Manhattan Project produced more than 500 research papers. It gave us Iodine-131 and other radionuclides which we use in medicine. And it gave us lasting peace. So was it bad or not?

Is gene editing bad? Was the internet bad? Was the dude who invented the round wheel bad?


Yes, using atomic bombs was bad. As others say, it is debatable if it helped end the (already won?) war. And in any case, _lasting_ _peace_ where? Are the Middle East and Africa not part of this world?

I find your comment entirely misses the point and is low effort. Asking whether random techniques, inventions, or inventors were "bad" or not makes as much sense as asking:

Is the sun bad? Were dinosaurs bad? Are the aliens bad? Is life bad?


| And gave us lasting peace

I think the jury is still out on that. Or rather the trial is still underway.


I believe that the USA and USSR/Russia have been at war with some other country, directly or by proxy, more or less continuously since the end of WW2. Yet we had no other major world war, because of mutually assured destruction. That kind of peace might not last forever, but it was an unusually long period. A wonderful change for the better if you lived in Europe; no change at all if you lived in one of the countries that took their turn as the target of one or both powers. Add regional powers now operating on their own.


You know who else can recognize faces? Humans! I guess we should cancel humans because of ethical concerns.


One human brain can only perform some 10-12 hours of facial recognition work before needing to take a break, and it also does so at relatively low speeds.

One computer brain can be copied for free, and deployed to thousands of computer clusters and work 24/7 on facial recognition, at a fraction of the cost.


So it is about job loss?


Because small models of face detection software would be released on github?


As someone who recently switched to Sony from Canon, I don't recall ever having to 'learn' the menu layout with Canon. It was just pretty obvious. I'm 100% sure that I never read the user manual and I never googled how to find anything in the menus.

The Sony layout, on the other hand, is just terrible. Sure, you can customize all the programmable buttons and menus and it gets much better, but it still takes a long time to learn. I also had to watch a ton of YouTube videos of people changing settings on their cameras (the fact that there are so many for Sony cameras should hint at how terrible the menu is).


From experience I would say running the OSS project is definitely harder, but the type of person who is capable of making a successful OSS project is not the type of person who can chain themselves to a chair for 3 months and practice interview questions. Studying for FAANG interviews is arduous and extremely non-rewarding. It is like the extreme opposite of an OSS project, where you put in the same hours and have exactly nothing to show for it.

