I think this is very dependent on how you find your news stories. In Apple News today I see 6 Syria stories. These are from very mainstream news outlets too: CNN, the Guardian, Reuters, Wired.
In Google News (different device) I see stories about Yemen in the past day from USA Today, Reuters, the Guardian, the Irish Times, the Washington Post, and the New York Times. Most appear to mention the school bus bombing.
I admittedly don't know what makes the front page of printed news or goes into heavy rotation on TV news. I've been using news aggregators instead of any one publisher for a decade+.
I maintain a database of news articles from local news sources and sometimes query it if I feel something is underreported re Syria. Sometimes it is and sometimes it's just a wrong feeling. Developments in Daraa/Quneitra were much less reported than developments in E. Ghouta earlier this year.
This x 100. The news is being reported, but so many of us just digest what the aggregators and algorithms decide is most relevant - it saves a ton of time. Today's readers need to be curious to be well-informed.
The article explains it's a mix of human observers and sensor data, plus a lot of trial and error, to get accurate predictions and build trust.
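As a rough sketch of the kind of warning logic this implies (all names, coordinates, and thresholds below are hypothetical, not from the article): given an aircraft's observed position, heading, and speed, you can estimate its arrival time at a populated area and trigger a warning when that estimate drops below some threshold.

```python
import math

def eta_minutes(aircraft, town):
    """Estimate minutes until an aircraft on its current heading reaches a
    town, or None if the town is not roughly ahead of it. Positions are
    (x, y) in km; heading is degrees from the +x axis (math convention)."""
    dx = town[0] - aircraft["pos"][0]
    dy = town[1] - aircraft["pos"][1]
    dist = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dy, dx)) % 360
    # Only warn if the town lies within ~15 degrees of the flight path.
    if abs((bearing - aircraft["heading"] + 180) % 360 - 180) > 15:
        return None
    return 60 * dist / aircraft["speed_kmh"]

def warning(aircraft, town, threshold_min=10):
    """Return an alert string if estimated arrival is under the threshold."""
    eta = eta_minutes(aircraft, town)
    if eta is not None and eta <= threshold_min:
        return f"ALERT: aircraft inbound, est. {eta:.0f} min"
    return None
```

In a real system the alert string would presumably be fanned out over SMS or Telegram; the geometry above is only the trigger condition.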
What are the strategies to avoid or reduce false positives? I suppose the human observers are vetted? The article also suggests there must be a sufficient amount of sensor data for the likelihood to be high enough, but it fails to mention how many sensors, and therefore data points, are collected.
Are there good resources on crowdsourcing data and ensuring a certain quality of data, especially when the inputs are limited?
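One standard way to cut false positives with crowdsourced reports is corroboration: only treat an event as confirmed once several distinct observers report it within a short time window. A minimal sketch, with illustrative thresholds (the article gives no actual numbers):

```python
from collections import defaultdict

def corroborated_events(reports, min_sources=3, window_s=300):
    """Confirm an event only when at least `min_sources` distinct observers
    report it at the same location within `window_s` seconds. Each report
    is a (timestamp_s, observer_id, location) tuple."""
    by_location = defaultdict(list)
    for ts, observer, loc in reports:
        by_location[loc].append((ts, observer))
    confirmed = []
    for loc, entries in by_location.items():
        entries.sort()
        for ts, _ in entries:
            # Distinct observers reporting within the window starting at ts.
            sources = {obs for t, obs in entries if ts <= t <= ts + window_s}
            if len(sources) >= min_sources:
                confirmed.append(loc)
                break
    return confirmed
```

A real system would presumably also weight observers by track record (vetting), which this sketch omits.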
Also, not sure if I'm not seeing well or if my adblocker is being too zealous, but I don't see any share buttons on the article. Not something I'd expect from Wired, or any media outlet these days.
This article is shameful. The “tech” involved predicting bombing raids in order to evacuate people beforehand. Obviously the same tech will be used in the future to maximize casualties from bombing raids.
I'm not saying what they are doing is bad, just objecting to the unabashed, tech-will-save-us-all optimism of Wired magazine.
From the article, it seemed like the “tech” is just text messages triggered based on flight paths of aircraft. How does that translate to killing more people?
If you know how people are going to respond, you can take advantage of it. You see a related practice in double-tap bombing attacks, where a second bomb is timed to hit the first responders who predictably arrive after the first strike.
In this case, you can predict where people will evacuate to, and time your follow-up bombing flights to hit those areas before there's time to evacuate again.
CEO of Hala here. What I think needs reiterating is that the tools to do what you fear are already available and employed by the highly advantaged attacker (regardless of who it is). One of our primary missions is to actually give defensive tools to people who currently lack adequate means to defend themselves.
I think that’s a very fair argument and you / your company should be commended for what you’re doing.
Really, the criticism I have is of Wired magazine's article, and not even the article so much as their promotion of it. On Twitter and elsewhere, you'll see things from them like “how AI is preventing bombing raids”. That kind of attitude, adopted to generate readership, I find reckless and dangerous.