Hacker News

Then you'll be happy to learn that what you propose has been the case for consumer computers since protected mode was added to Intel 80286 processors in 1982.

I think few people in this discussion are worried about programs directly affecting other programs through memory unsafety, precisely because this doesn't really happen for software that isn't a driver or inside the OS kernel. The problem with memory unsafety is that it often allows exploits that corrupt or steal user data, or gain unauthorized access to a system. That's not a problem when you are the only user of your software and you only have access to your own data, but once you handle other people's data or run on other people's systems, I think you should at least consider the advantages of using a safe(r) language.




But I don't understand how data stealing can happen if each process is effectively sandboxed. If my process can't read or write to memory outside of what it allocated, how can I corrupt or steal user data?


Well, it depends on your definition of sandboxed. Does your program have permission to perform I/O, either by reading/writing a filesystem or sending/receiving data on a network?

Most "interesting" programs can perform I/O. Then you run into ambient authority and confused deputies.
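For illustration, here is a confused deputy in miniature, sketched in Rust (the `log_for_client` name and behavior are made up): a service that writes wherever a client asks, using its own ambient filesystem authority rather than any capability the client actually holds.

```rust
use std::fs;

// Hypothetical deputy: runs with the service's full filesystem authority
// and writes a log line to whatever path the client supplies.
fn log_for_client(path: &str, line: &str) -> std::io::Result<()> {
    // No check that the *client* is allowed to touch this path; the
    // deputy's own (ambient) permissions are what the OS consults.
    fs::write(path, line)
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("client.log");
    log_for_client(path.to_str().unwrap(), "hello")?;
    assert_eq!(fs::read_to_string(&path)?, "hello");
    // Nothing in the API distinguishes this call from, say,
    // log_for_client("/etc/cron.d/evil", ...) run as a privileged service.
    println!("deputy wrote with its own ambient authority");
    Ok(())
}
```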


Yeah, I guess a decent model for "safe" software would be sandboxed memory plus fine-grained file permissions. Arbitrary network traffic is a bit less dangerous - I mean, someone could steal CPU cycles to process data and send it over the network, but a safe language is not going to save you from that either.

Most programs do not need arbitrary access to the file system, and it should be the OS's job to whitelist which files a program has access to. Again, a safe language is not going to save you from bad behavior on the filesystem either. It really only solves the memory problem.


> Most programs do not need arbitrary access to the file system, and it should be the OS's job to whitelist which files a program has access to. Again, a safe language is not going to save you from bad behavior on the filesystem either. It really only solves the memory problem.

Except that it is often a memory-safety problem that enables an attacker to make a program misbehave, through things like buffer overflows. A memory-safe program is much harder to exploit.


Are we talking in circles here? My original point was that memory safety should be ensured by the OS/hardware. That way no matter how badly my program misbehaves, it will not be able to access memory outside of the sandbox. In other words, the CPU should not be able to address memory which has not been allocated to the current process. A buffer overflow should be a panic.
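For what it's worth, this is exactly the behavior a memory-safe language already gives you inside the process: the out-of-bounds access is caught at the access itself rather than by the MMU. A minimal Rust sketch (`read_at` is a made-up helper):

```rust
use std::panic;

// Bounds-checked read: panics if `i` is past the end of the slice.
fn read_at(buf: &[u8], i: usize) -> u8 {
    buf[i]
}

fn main() {
    let buf = [1u8, 2, 3, 4];
    assert_eq!(read_at(&buf, 2), 3); // in-bounds read is fine
    // An "overflow" read aborts with a panic instead of silently
    // returning whatever bytes happen to sit next to `buf`.
    let result = panic::catch_unwind(|| read_at(&buf, 10));
    assert!(result.is_err());
    println!("out-of-bounds read panicked instead of reading adjacent memory");
}
```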

Even with a safe language, there are vulnerabilities like supply chain attacks which allow malicious code to use an escape hatch to access memory outside of the process. E.g., I could be programming in Rust, but one of the crates I depend on could silently add an unsafe block which does nefarious things. OS/hardware-level sandboxing could prevent many such classes of exploit.
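That escape hatch can be tiny and easy to miss in review; a sketch (`sneaky_sum` is a made-up stand-in for code buried in a dependency):

```rust
// Looks like an ordinary helper, but the unsafe block bypasses all
// bounds checking via raw pointer arithmetic.
fn sneaky_sum(v: &[u8]) -> u32 {
    let mut total = 0u32;
    unsafe {
        let p = v.as_ptr();
        for i in 0..v.len() {
            // *p.add(i) is unchecked: change the loop bound to
            // v.len() + 100 and the compiler accepts it just the same,
            // turning this into an out-of-bounds read.
            total += *p.add(i) as u32;
        }
    }
    total
}

fn main() {
    assert_eq!(sneaky_sum(&[1, 2, 3]), 6);
    println!("sum = {}", sneaky_sum(&[1, 2, 3]));
}
```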


> That way no matter how badly my program misbehaves, it will not be able to access memory outside of the sandbox

The problem is not memory outside of the sandbox, but inside it. Please read about return-oriented programming, for example, where a buffer overflow bug can be exploited to hijack a process into doing work it was never meant to do. If such a bug existed in, say, sudo, it could very well be used to gain privileges and do real harm, and this whole problem domain is almost entirely solved by using a "safe" language.


In the case of a browser, a buffer overflow can be exploited to upload user files, for example, which are very much readable without trouble on most Linux distros.


But again, isn't that the OS failing to protect user files rather than an issue of memory unsafety?


That's another aspect of it. Please see this answer of mine:

https://news.ycombinator.com/item?id=27642630

In short, memory unsafety makes programmer bugs exploitable, instead of generally just failing.


I understand what you are saying, and I understand that this is a real security issue in modern computing. However, I would put the question to you in a different way:

Let's say we have two programs, A and B.

Program A by its very nature needs to have write access to the system's file permissions in order to fulfill its core purpose.

Program B only needs R/W access to a sqlite database installed in a specific directory, and the ability to make network calls.

I would agree that for program A, a memory-safe language can provide a very real benefit, given the potential risk compromising this program could expose the system to.

Would you agree that if a buffer overflow exploit in Program B can be used to compromise the system outside of the required resources for that program, this is a failing of the OS and not the programming language?


I agree with that: avoiding buffer overflows is good to have but not sufficient for security. MAC and sandboxes are a necessity as well; e.g. SELinux can solve your proposed problem with programs A and B.
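For the record, a SELinux type-enforcement policy for "program B" might look roughly like this (all type and port names here are hypothetical placeholders):

```
# Confine progb_t to its database directory and outbound TCP only.
allow progb_t progb_db_t:dir  { search write add_name remove_name };
allow progb_t progb_db_t:file { create read write open getattr unlink };
allow progb_t http_port_t:tcp_socket name_connect;
```

Any access not granted by a rule like these is denied, so even a fully hijacked process cannot reach outside its whitelisted resources.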



