gooddelta's comments

ITT: People who don't understand safety-critical systems telling people how to write safety-critical systems.

The most popular answer in this thread is "you can only write safe C++", which is bullshit. The language you use will likely be dictated by the toolchain you're forced to use to meet whatever standard your org has adopted. For example, if you're in the automotive realm and following something like ISO 26262, you'll only be able to use a qualified toolchain that's compatible with your safety MCU – so you'll likely be limited to C or C++, and then FURTHER limited by MISRA standards to a subset of those languages. There is currently no version of Rust that may be used for safety-critical systems – despite the fact that it's arguably a better language, the rigorous verification/documentation work hasn't been done yet. If you're looking for an alternative to C or C++ for use in safety-critical domains, look at Ada.
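To give a flavor of what that subset looks like in practice, here's a minimal sketch in C written against a hypothetical MISRA-C-style rule set (the exact rules and names here are illustrative, not quoted from any specific standard revision): no dynamic allocation, no recursion, fixed-width integer types, bounded loops, and explicit handling of narrowing conversions.

    /* Illustrative sketch only: roughly MISRA-C-flavored style.
       Fixed-width types, no malloc, no recursion, a bounded loop,
       and an explicit saturation check instead of a silent
       narrowing truncation. */
    #include <stdint.h>

    #define BUF_LEN 8U

    uint16_t sum_buffer(const uint16_t buf[BUF_LEN])
    {
        uint32_t total = 0U;
        uint32_t i;

        for (i = 0U; i < BUF_LEN; i++)
        {
            total += (uint32_t)buf[i];
        }

        if (total > (uint32_t)UINT16_MAX)
        {
            total = (uint32_t)UINT16_MAX; /* saturate rather than wrap */
        }

        return (uint16_t)total;
    }

The point isn't any single rule – it's that the qualified toolchain, the coding standard, the static analysis, and the traceability requirements together are what make the code defensible in a safety case, not the language on its own.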

You will likely not find any example of an open-source codebase for a safety-critical system. Rigorously developed safety-critical systems cost millions of dollars to produce, document, run through V&V, etc. They don't tend to get released as OSS.

For the rest of the folks in this thread: type safety, memory safety, etc. are awesome features – but having a language with these features doesn't allow you to build a safety-critical system. It doesn't even begin to. If you're curious, you can start by looking at the roadmap for the Ferrocene project – the company behind it is working with the folks from AdaCore (AFAICR?) to make a version of Rust for safety-critical systems a reality (one that I'm very much looking forward to!).


Dude, nobody is arguing that we should ban digital communications at all. Take off the tinfoil hat, Chicken Little. What I'm suggesting is that if we are going to make PLATFORMS available (e.g. WhatsApp) that are capable of amplifying individual communications and making them instantaneous, we mightttt need some more thought, collectively, about how to combat fucked up human nature. Like, just a pinch of consideration for the fact that the notion of free speech didn't evolve during a time in which any asshole could pull a thought out of their ass, hit send, anonymously, and reach a thousand people.

EDIT: Sorry, I realize that I was speaking condescendingly here. I'm just trying to get the point across that asking us to actually solve the hard problem of platforms before unleashing them on the world shouldn't be too big a burden to bear. Some (myself included) would argue that our entire society is destabilizing because of the lack of ingenuity in solving this problem. It gets frustrating.


The “lack of ingenuity” in regulating speech is a hard-fought, hard-won tenet of liberal democracy that a citizen ought to be willing to sacrifice his life to protect. Arguments about the need for paternalistic stewardship to protect the masses from their flawed human nature are bog-standard apologia from authoritarian regimes seeking to justify repression.


Did I say "regulating speech"? No. If we're going to try to have a cogent conversation, please stop putting words in my mouth.

Free speech != the right to a platform. That's just another way of saying that just because you have a voice doesn't mean your message has immediate merit, or that it is the truth, or that people have to give your voice the same weight as others', or that people have to listen to you.


You cannot compel a platform to carry a message against its will. This is true.

Restricting the messages a platform may carry according to the state's view of merit or truth, on the other hand, is exactly "regulating speech."

EDIT: To be clear, I am responding in the context of the upthread "Blocking WhatsApp would be a perfectly justifiable solution! Regulating it would be as well." Your argument seems less puzzling in the context of the very different assertion that WhatsApp itself should exercise editorial control. Apologies if this is what you meant.


The problem is that there's no solution for "deplatforming" people who incite lynch mobs that doesn't also give someone the power to "deplatform" peaceful protest against repressive regimes. You can't come up with a knife that only cuts what you want it to cut.


I'm pretty sure the poster above is taking for granted that they (or other like-minded people) would also be the ones wielding the knife.


What if there aren't any technical solutions that are likely to work, and no top-down regulatory solutions that are likely to be worth the cost? What if the only mitigation that works is "culture change?" Culture change is messy and slow and self-directed.


Printing creates a barrier to entry. You have to have enough conviction in your idea and determination to actually write the statement, lay it out, and print and distribute it. You don't just type "oh shit closeparen should be hanged lmao," hit send, and reach 100 people.


Please don't copy and paste what you posted elsewhere. That lowers the signal/noise ratio.


So we should just ban the internet because some village in India couldn't use it right?


This comment breaks the site guidelines, which ask: "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize."

If you'd review https://news.ycombinator.com/newsguidelines.html and follow the rules when posting here, we'd appreciate it.


The platform is what spread those lies – without the platform, those lies wouldn't have had the reach they did.


As mentioned in the article: "Mob lynching isn’t a new phenomenon in India. According to some reports, there were more than 2,000 lynchings in India between 2000 and 2012 — well before WhatsApp was around."

I agree that WhatsApp has the potential to exacerbate such issues.


Because rumor-mongering came about only with WhatsApp, and gossiping with your neighbours is a phenomenon born straight from instant messaging.

People spread lies. Not platforms or machines. People. Via WhatsApp, other IM services, email, printed leaflets, handwritten notes, or voice. People do that.


That's the entire problem with this industry – we make the assumption that someone less ethical than us will come along and build the shitty product anyway, and use that as justification to do something we likely shouldn't.

Blaming WhatsApp is misplaced, in some sense – the blame lies with startup culture, and with each of us who chooses to make that excuse.


Are you suggesting that basic text communication apps should not exist?

This is fundamentally very very simple functionality - there's no social media algorithms surfacing different content to different people or anything like that.

Are you seriously advocating that we should just disallow digital communication? (Not to mention advocating it using digital communication.)


I didn’t get the sense that they were implying chat messengers should cease to exist. While they certainly could have been more explicit, I think that’s a rather uncharitable interpretation.

The larger conversation seems to be revolving around the question: “While we continue to build these infrastructures, are we currently, in a serious way, searching for ways to keep these types of things from happening? And if not, why not?”

Which, given how vulnerable many segments of the population seem to be to misinformation and these types of mob behaviors, is a timely and important question.


There are multiple organizations trying to find ways to stop misinformation, but it seems next to impossible. What is the truth? Who gets to decide what is and isn't the truth? No doubt Google will come up with some API that reports whether a statement is true or not, and all algorithms will use it to hide any information Google says is false.


"The shitty product" is text chat. We "likely shouldn't" build text chat? Startup culture is morally defective because it built text chat, and we share in the moral capability by "making excuses" for text chat?


Exactly. This is the tired old pattern of "humans do human things and technology is coincidentally involved because we live in the 21st century, so obviously it's the technology's fault."


In many cases it genuinely is the technology's fault, when it uses UI patterns to shape behavior and algorithmic sorting to selectively show ideas to the user. But that is not the case here.


The article mentions forwarded hoax videos:

“each said they’d done so after watching shocking videos on WhatsApp warning of outsiders abducting children.”


> "They said that as long as their children are safe, they have no regrets." [The lawyers of the murderers]

While in general I agree that everyone should share responsibility for maintaining a safe society that very feeling seems to be what these murderers were operating on. Given the lack of remorse I think it is clear that there is a deeper issue than unfiltered efficient communication in these communities.


In this case, I would blame lynch mob culture, not startup culture. No matter how these people received their misinformation, what they did was infinitely worse.


Yes, I don't see how this is on WhatsApp in the slightest unless we're going to say communication apps shouldn't exist...


There's no shortage of ignorance and gullibility offline--it's not something that was just invented with social media. It's easier to blame "startup culture" than have a hard look at the unpleasant parts of one's own culture.

> "Mob lynching isn’t a new phenomenon in India. According to some reports, there were more than 2,000 lynchings in India between 2000 and 2012 — well before WhatsApp was around."

I don't know, crazy idea: Maybe the blame should fall on... the people who think it's OK to lynch strangers.


The article makes it clear that the people of Rainpada are extremely uneducated; many lack literacy and tend to trust their tight-knit social groups. They don't see many outsiders, and they genuinely believed the strangers were harvesting organs. Try hard, and imagine being completely naive to internet hoaxes, and not even knowing how to read and write (note that the video had audio dubbing, precisely to target unsophisticated viewers).

Western businesses don't have an inherent right to market their products in India. Group messenger apps, at least those lacking in sovereign moderation powers, are clearly causing harm in illiterate/hoax-naive demographics. Is India getting a deal that outweighs the human costs of violence, or should they develop their own?


If the government thinks the locals are unable to handle the internet then it should either educate them or restrict the usage.


They cut off internet access for 24 hours following the murder, and within two hours ordered the local news channel to air an admonishment about the misinformation on repeat, along with a warning of prosecution for spreading it further.

The village is now a ghost town; most of the men fled, and the women and children stay inside. Elders are ashamed to mention their hometown.

There are two educational efforts: the police have posted numerous warning signs in public areas, and a group of students is traveling around giving a play to demonstrate how spreading rumors on WhatsApp caused a lynching.

In addition to those efforts, what can WhatsApp do to keep this from happening? India is doing its part. If WhatsApp thinks it's unable to handle the situation, it should turn administrative control of its service over to the government.


WhatsApp already took some action after being forced to by the government. Now, in India, a user cannot forward a message on WhatsApp to more than 5 contacts in one go. Earlier you could select as many as you wished, but now you can choose only 5 contacts.


That change only affects the latest version of the app. Only a single phone among those surveyed in the village by BuzzFeed was running the latest version. I'm not sure what Facebook can do about it technically, but the change won't be effective any time soon.


You're not wrong – part of the blame obviously lies there.

But even if it is simply enabling and perpetuating an existing problem, why haven't we asked ourselves why technology isn't addressing the issue? How is this future we've created any better? Because it's faster? Because the UI is nicer?

It's rare to see startup culture produce anything that I would call true innovation. Maybe I just have a higher bar than most folks. That could be what I'm arguing here, and that doesn't change the fact that you have a point – I just believe we have to do better.


Yeah, I happen to agree with that too. Although I don't think WhatsApp is necessarily unethical software, far too often we see stories here of startups and bigger technology companies totally tossing ethics aside in search of their "engagement" or "reach" or whatever stupid metric they're chasing. As software engineers "on the inside", we have both an obligation and a unique opportunity to stop these things before they start. Or at the very least not contribute to them.

For every unethical piece of software that gets written, there's at least one engineer... one of us, maybe even someone reading this very post... who said "Sure, boss, I could totally write that!"


I know this is going to be an inconvenient reminder for many folks, but: if you play any part in bringing software (or technology in general) into the world, you have an obligation to consider the moral, ethical, and political ramifications of what you are building at every single step of the way.

What happened in that village wasn't an accident – it was the result of careful "growth-hacking", design, product work, and engineering. It just didn't work as intended.

There is no such thing as a neutral line of code, feature, notification, design, business, or product. Full stop.


What exactly is "growth hacking" in a chat app? There were no ads here, no algorithmic timeline, no perverse incentives, nothing that you could potentially consider as malicious.

What should WhatsApp have done?

Every single post about WhatsApp here has people pontificating about this and that but no one offers any suggestion of what should be done.


> What exactly is "growth hacking" in a chat app?

WhatsApp is a fairly direct clone of BBM, and BBM’s growth hack, for want of a better term, was to make its users feel like part of an exclusive in-group.


Yes, to the extent that every chat app is a clone of every other chat app, WhatsApp is a clone of BBM. Other than that, what exactly are the similarities?

The whole point of WhatsApp is having a primary key that is your phone number. And they launched on literally all kinds of OSes that developing countries were using, including Symbian as late as 2012. So please tell me: how exactly is a service that lets you message anyone by their phone number, and that was probably supported on more platforms than anything else of its time, "exclusive" and "in-group"?

Sorry but you don't seem to have any idea what you are talking about.


> Sorry but you don't seem to have any idea what you are talking about.

Personal swipes like that will get you banned here, so please don't post those. Your comment would be fine without that bit.

https://news.ycombinator.com/newsguidelines.html


> Sorry but you don't seem to have any idea what you are talking about.

Perhaps you are unaware that BBM and hence WhatsApp are really about their group chat features?


Yeah, definitely no advertising, particularly overseas, that could have contributed to WhatsApp's growth. No features added like large groups that could be used to spread disinformation. WhatsApp chats are used to influence advertising, so that goes out the window. This happened because WhatsApp provided the means for it to happen – and you can speculate all day long about whether some other app would have made it possible, and whether it would have happened, but what you're saying is speculation, and what I'm citing are facts.

None of it was intentionally malicious – but not considering the moral, ethical, political ramifications of how your product will be used is negligent. This is negligence, not malice.

Platforms aren't neutral.


The logical conclusion of this perspective is that human beings should not be allowed to freely communicate in groups. I'd rather not live in your world.

Technology can't fix broken human beings. There is a moral failure here, but it isn't Facebook's. Blaming WhatsApp at best makes the real problem harder to fix, and at worst makes excuses for the actors in this horrorshow.


Speed and reach, my friend. Lynchings happened in the US a while ago, but they weren't a universal phenomenon. A platform with the speed and reach of WhatsApp makes disinformation more potent.

Nowhere in my post did I suggest human beings shouldn't be allowed to freely communicate – what I'm suggesting is that maybe we haven't built the right tools to enable that safely, and instead, we're exacerbating the "broken human beings" side of this issue.


> we're exacerbating the "broken human beings" side of this issue.

There is no other side of this issue. This could have, and probably has, happened with SMS. This is 100% the mob's fault.


SMS costs a lot in third-world countries. We are probably seeing these issues now because this is the first time these people have had access to cheap and easy communication. The rest of the world was eased into it with email and the web, but these countries are being dropped into the deep end.


Subtext: You primitives aren't mature enough to communicate in groups without supervision.

Don't you find that an offensively paternalistic thing to tell whole societies?


By that logic we should ban megaphones as well.

You know what? Just ban shouting until people build the right tools to enable that safely.


Well, if you start using a megaphone to encourage killing people, you will be in trouble within a few minutes. With WhatsApp... not so much. WhatsApp is probably the first medium of communication that is hard to monitor in many places. It is much safer to encourage hatred and violence there because you don't need physical interaction, it scales a lot, and authorities cannot intercept it.


[flagged]


Actually, all I’ve seen you do in this thread is moralize about the use of technology. You haven’t really offered any solution to the problem at all, aside from suggesting that technologists ‘think about it more’ before creating such platforms.


You've repeatedly broken the site guidelines in this thread. We ban accounts that do that, so can you please not do that? Someone else being wrong is no reason to break the rules.

https://news.ycombinator.com/newsguidelines.html

(I do appreciate that you apologized for going over the line in the previous comment.)


I apologize if it appears I've stepped out of line, or if this thread has been difficult for you to moderate; that wasn't the intention.

If you go back to my first comment on this post, it seems reasonably clear that my message is not particularly contentious – I'm simply suggesting that people apply some amount of ethical rigor to their work. Sadly, that message was heavily twisted in all sorts of ways that somehow fit within this site's guidelines.

Ethics and politics are indivisible from the act of creating software. It clearly stands to reason that software is a fractal of the beliefs of the people who create it. By placing intellectual curiosity on a pedestal above morals and ethics, HN has managed to drive away some of the most talented and credible people in the industry from this site – and to promulgate a culture of no accountability for what we do, or fail to do.

I've fought year-long battles that have come close to costing me my job, just so I could prevent inaccuracies in systems that likely process your credit score. I've consciously killed projects that would disadvantage entire groups of people, no matter how technically cool or interesting they were. In much the same way that I'm not willing to negligently let someone die, I'm also unwilling to be part of the next Manhattan project – but it's becoming clear to me that I'm in the minority of the Hacker News readership that feels that way, or has ever even considered the impact of their work.

Over the past ten years, the attitude of this community has single-handedly convinced me that the end of our world will be because somebody could, so they did. If we can create "deep fakes" on demand, we will. If we can patrol the border with some hastily slapped-together CV and ML, we will. If we can use deep learning to decide who will go to prison, we will. All under the guise of intellectual curiosity.

I honestly believed tech was a good thing at one point in my career – now I spend most of my time building systems to protect people from my peers.

I know I can't delete my account here yet – I've tried. But I've never felt more disconnected from a community I'm supposed to be a part of.


What are the right tools?


It's a chat application. How people can be so ignorant as to abuse such a simple thing is beyond me. As much as I hate Facebook, I don't get how you can put this on WhatsApp.

From the looks of it, you could've achieved the same thing with someone printing and spreading around fliers. Doesn't mean a printing company selling printers should consider the "moral, ethical, political ramifications".


Printing creates a barrier to entry. You have to have enough conviction in your idea, and the determination to actually write the statement, lay it out, and print and distribute it. You don't just type "oh shit lxrbst should be hanged lmao," hit send, and reach 100 people.

You're basically describing why journalism is important, and still has an important place in our society.


Journalism is important, but not to the exclusion of other forms of communication. If your thrust is that barriers to entry for communication are good because they put mass-communication power in the hands of the few(er), I cannot agree.


Ironic to post such a comment on a platform that puts up a high barrier to communication through moderation and guidelines.


Okay, but what about WhatsApp made this possible that wouldn't have been possible with plain old SMS/MMS? (Or even just the telephone? I guess no video over the telephone, but rumors and lies can certainly spread that way.) It's not like Facebook, where there's some kind of algorithmic curation. This is entirely peer-to-peer content distribution.


Sounds like the videos were very shocking and quickly forwarded - instigating a mob mentality:

“each said they’d done so after watching shocking videos on WhatsApp warning of outsiders abducting children.”


How are WhatsApp chats used to influence advertising? They are end to end encrypted.


That's partially true – but end-to-end encryption is different from zero-knowledge. I design the latter.

Although LifeHacker isn't a great source, they do a good job of pulling more credible sources together into this article: https://lifehacker.com/stop-using-whatsapp-if-you-care-about...


A chat app is about as neutral as platforms get. What specifically should they have done differently?


I haven't used WhatsApp, but from what I understand it's just an IM app. I'm not sure how an IM app can be responsible for this behavior. It simply lets users send messages to each other. There are no algorithms sorting what you see or promoting certain kinds of interactions. It seems that maybe some societies are far too trusting of baseless rumors and can't handle the fact that they are now able to share rumors incredibly fast.


Indeed, rural villages with low literacy rates aren't able to handle sharing rumors at the speed of light. This is precisely the discussion we're having!

Suddenly, high-speed internet is affordable even in places without reliable power. They can't spot fake news. They're unprepared for WhatsApp - do you blame them?


I'm not buying it. These people lynched somebody; this isn't about a lack of literacy. It's about being used to using violence as a way to solve problems.


The article mentioned that lynchings were unheard of in Rainpada prior to the introduction of WhatsApp and the spread of misinformation. Please cite your source for them "being used to using violence."


I blame them for murdering strangers, yes. Of course. It makes no difference how they were tricked into doing so, that’s practically irrelevant.


A: Open communication and information never reach them, we segregate them according to some prime directive, and they go on lynching each other and being backwards for all eternity until they arrive at internet #2 through convergent evolution.

B: Open communication and information reach them, some use it for productive ends and some use it to mobilize lynchings, and after 1-2 generations of adjustment, education, and adaptation they largely outgrow and outeducate the worst of it.

C: How do you propose to regulate communication to find the best path that is neither fully A nor B?


The article mentions how:

“The next day, WhatsApp replied to the ministry, saying that it was “horrified by these terrible acts of violence,” but arguing that an effective solution to misinformation would require help from the government. WhatsApp pushed a few changes to the app, adding “forwarded” labels to re-sent messages, and limiting the number of people or groups a user could forward messages to in India to five.”


Simple: legally, treat WhatsApp groups as public message boards, and allow the local authorities to administer local WhatsApp groups the way you'd allow police to tear down signs put up on a cork-board in a public square.

It's not rocket science.


I agree it's not rocket science; it might be harder, because we're dealing with human behavior.

The Salem witch hunts happened with full support of the "local authorities" to run investigations, interrogations, trials, and hangings. It was not for a lack of administration.

Next month when there's an article about a corrupt official extorting farmers, are we to say "at least farmers can't communicate or organize without authority oversight?"


Are you implying that whenever more than two people have a conversation on any chat application, the government should have the right to read along, for our own protection?


Do you live in India?

WhatsApp is produced by Facebook, an American company. It has to obey the US government. If WhatsApp wishes to operate in India, it should provide a mechanism for the Indian government to administer it.

If it doesn't, India is well within its rights to sanction them. There's no shortage of skilled tech workers in India; Facebook needs India more than India needs Facebook.


I do not. I don't live in the US either. Is that relevant?

I never suggested that WhatsApp doesn't have to obey the law in the countries it wishes to operate in. But it seems very strange to me that a discussion about the moral and ethical implications of software is now concluding with (paraphrasing you) "the solution is simple: whenever 3 people communicate online, they should be subject to government surveillance". Really? The developers of WhatsApp should have foreseen that people unaccustomed to the internet might kill people when sharing hoax videos, but you don't foresee any possible negative consequences to your proposed "simple" solution?

I think the cure you propose is worse than the disease, but if that is what India really wants then by all means let them pass a law to make it so. In that case I hope WhatsApp will create a separate app for India, because I wouldn't want to get into a group conversation with an Indian by accident.

Apart from that, I don't think your solution is as simple as you claim it to be. When people from different parts of India are in a group-chat, who is the local authority? What happens when some or all participants in a group chat move to a different place? If a foreigner visits India for one (minute/hour/day/week/month/year/decade), when do the local authorities get the right to enter all their group chats? Will they leave those chats when nobody remaining in the group chat is in India? Would this law apply to Indians abroad?


Nothing about a communication app inherently incites violence. It's the ass-backwardsness of the community that is the cause of this behavior. The answer is better education, not misplaced blame on technology. Would you have banned gossip and verbal and written testimonies in the Salem witch hunts?


I've done a lot of code reviews in my time and I've seen many, many neutral lines of code. So cut the hyperbole.


It looks like Enigma dog-foods their own data infrastructure products (Concourse / Assembly?). The public data thing inspired them to build better tools for themselves and sell them. Big lesson in that: sometimes what you learn along the way will be the most valuable.


Really awesome visualization, but I didn't read the text because the columns were insanely wide on my 24" monitor. Might want to constrain the width a bit, chaps.


Thanks for the feedback! Just pushed an update with a max-width.


Cool dude -- no worries; I'll just never hire you. Passion is really hard to fake and, frankly, I don't want to work with you if you don't have it.


Really nice interface. Could be a great way to discover interesting repos!

