I'm not familiar with the MORPHEUS proposal specifically, but I do know a bit more about the SSITH program overall, and this article explains it poorly. SSITH is effectively about exploring, with working prototypes in mind, new instruction sets (or, in practice, modifications to existing instruction sets) with mitigations for certain common classes of vulnerabilities implemented directly in hardware: specifically, vulnerabilities related to "permissions and privileges, buffer errors, resource management, information leakage, numeric errors, crypto errors, and code injection."
The original proposal says nothing about "unhackable", and in fact, specifically quotes someone as saying that eliminating these hardware vulnerabilities would, "…effectively close down more than 40% of the software doors intruders now have available to them"—a far cry from unhackability! It's pretty typical of both science reporting and marketing bluster to go from "addressing certain vulnerabilities to reduce intrusion by less than half" and end up at "unhackable".
Our research group at Georgia Tech applied for this grant but we were unfortunately not selected.
As far as I recall, $50m was allocated to the project over a three-year period. Unlike NSF grants, DARPA grants typically require a working prototype by the end of the funding period. In this case, DARPA requires a fully functional implementation of the security scheme on a RISC-V processor, as well as a development toolchain that can be used to "secure" generic software and/or hardware applications.
At first glance, it sounds like this team is applying instruction set randomization at the microarchitecture level: as far as I know, this has been done before at a smaller scale. Our approach was to address each of the CWEs (vulnerability classes) with a different technique, which I think contributed to the rejection of our proposal.
Edit: Ignore the title. The goal of SSITH is lofty and likely impossible to achieve for all cases in practice. But this is how DARPA operates: they come up with (currently) far-fetched goals with the hope that one of the funded approaches strikes gold.
That's what I'm reading out of it. Anyone curious can look up "instruction set randomization" plus "security," or combinations of the words "security," "diversity," and "moving target." I had conceptual designs for doing it at the microcode and/or RTL level with a NISC-like approach.
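For anyone who hasn't run into instruction set randomization before, here's a toy sketch of the classic XOR-key flavor from the academic literature (my own illustration, not anything MORPHEUS-specific; all names are invented): code is stored encrypted under a per-process key and decrypted at fetch, so injected plaintext code decodes to garbage.

```python
import secrets

def encrypt_code(code: bytes, key: int) -> bytes:
    """Loader 'installs' a program by XORing every byte with the process key."""
    return bytes(b ^ key for b in code)

def fetch_and_decode(stored: bytes, key: int) -> bytes:
    """The fetch stage XORs with the CPU's key before decoding."""
    return bytes(b ^ key for b in stored)

key = secrets.randbelow(255) + 1       # per-process key, never zero
program = b"\x90\x90\xc3"              # legitimate code, encrypted at load time
injected = b"\xcc\xcc\xcc"             # attacker-injected bytes, never saw the key

# Legitimate code round-trips through encrypt/decrypt; injected code does not.
assert fetch_and_decode(encrypt_code(program, key), key) == program
assert fetch_and_decode(injected, key) != injected
```

Real ISR designs use proper ciphers rather than a single XOR byte, of course; this just shows the shape of the idea.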
Sorry your team didn't get picked. I wonder if any submissions came in from Draper or someone on the SAFE architecture. CHERI as well. They're already proven in other designs. Chopping down the bit size for CHERI while replacing BERI MIPS with Rocket RISC-V would seem straightforward. Throw in some optional enhancements.
I'll have to dig into this program after work to see more about requirements and submissions.
I don't think it's ISA randomization if the EETimes article is accurate:
> Morpheus works its magic by constantly changing the location of the protective firmware with hardware that also constantly scrambles the location of stored passwords.
Constantly changing the location sounds like dynamic ASLR.
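For concreteness, here's a toy sketch of what periodic re-randomization could look like (my own reading of the article, not MORPHEUS's actual mechanism; all names are made up): a secret is periodically moved to a fresh random offset and the old copy is wiped, so a leaked address goes stale quickly.

```python
import secrets

MEM_SIZE = 1024

class MovingSecret:
    """Keeps a secret at a random, periodically changing offset in 'memory'."""
    def __init__(self, secret: bytes):
        self.mem = bytearray(MEM_SIZE)
        self.size = len(secret)
        self.addr = secrets.randbelow(MEM_SIZE - self.size)
        self.mem[self.addr:self.addr + self.size] = secret

    def rerandomize(self):
        secret = bytes(self.mem[self.addr:self.addr + self.size])
        self.mem[self.addr:self.addr + self.size] = bytes(self.size)  # wipe old copy
        self.addr = secrets.randbelow(MEM_SIZE - self.size)
        self.mem[self.addr:self.addr + self.size] = secret

m = MovingSecret(b"hunter2!")
leaked_addr = m.addr   # attacker leaks the address...
m.rerandomize()        # ...but by the time it's used, the secret has moved
# The secret is intact at its new location; the leaked address is (almost
# certainly) pointing at zeroed memory now.
assert bytes(m.mem[m.addr:m.addr + m.size]) == b"hunter2!"
```

The defense is probabilistic: it narrows the attacker's time window rather than closing it, which is the usual moving-target-defense trade-off.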
I looked at umich to see if there was anything more detailed, but all I found was this press release, which is pretty much the same article:
Todd Austin's group has some (old) publications on something close, at least. It looks like they're forcing control flow to avoid indirect jumps and branches somehow and using that to do dynamic ASLR.
I'm skeptical. As far as I can tell, their 'unhackable' solution is some form of super-advanced ASLR that moves 'some' kind of software around and decrypts and re-encrypts it on the fly. By continually encrypting with new keys and such, they are betting that an attacker won't have the time to find vulnerabilities. However, this all assumes the 'advanced ASLR' itself isn't vulnerable; moreover, they are not building software that has no bugs, but rather just throwing all the bugs behind a pretty big locked door.
Sure it's a cool idea, but let's not call it unhackable.
Unclear what the article is really talking about. However, regardless of programming language, I do think that microarchitectures could do more for security. When going for memory safety, why stop at the highest level?
Frankly, if it were about memory safety, though, I think we could count C out. A microarchitecture with inherent protection against memory bugs would likely not be able to provide its advantages to vanilla C or existing C software.
Why not get safety AND easy concurrency by going to Software Transactional Memory in hardware? I don't know the specifics of the overhead off the top of my head but I imagine we're getting close to the point where it's a reasonable cost to bear for the benefits it would bring...
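Hardware TM has actually shipped (Intel's TSX), though it has had a rocky history. As a software illustration of the idea the parent describes, here's a minimal optimistic-concurrency sketch of my own (names invented, no real isolation or concurrency, just the version-validation shape):

```python
class TVar:
    """A transactional variable: a value plus a version counter."""
    def __init__(self, value):
        self.value = value
        self.version = 0

def atomically(txn):
    """Retry the transaction until it commits without a read-set conflict."""
    while True:
        reads, writes = {}, {}
        txn(reads, writes)
        # Commit only if nothing we read was modified in the meantime.
        if all(var.version == ver for var, ver in reads.items()):
            for var, val in writes.items():
                var.value = val
                var.version += 1
            return

acct = TVar(100)

def withdraw(reads, writes):
    reads[acct] = acct.version        # record what we read and when
    writes[acct] = acct.value - 30    # buffer the write until commit

atomically(withdraw)
assert acct.value == 70 and acct.version == 1
```

A hardware implementation would do roughly this bookkeeping in the cache-coherence machinery instead, which is where the overhead question gets interesting.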
Do you know what the JVM, Node, and Python runtimes are written in? It's not like there aren't bugs in other software. When you have millions of lines of code, there is always going to be something that someone forgot to think about.
Unless you have a very trivial program, I think it's hard to make anything "unhackable."
There were/are architectures (Burroughs large systems, iAPX 432, to some extent AS/400) with fine-grained memory protection that essentially solved all the memory safety issues. Apart from the fact that some such machines were terribly inefficient (e.g. the iAPX 432), the major reason you don't see them much today is that there was no sane way to run existing C code (or any other code that expects a flat memory space, for that matter) on them while preserving the safety features.
So the bottom line is that such hardware architectures necessitate using more high-level languages than C.
I worked on Burroughs large systems - they had a stack architecture and memory locations were tagged by data type to ensure safe access. ALGOL scope rules were effectively enforced in hardware. The issue even then was that these rules did not work once you moved to COBOL, or C or other languages which had different scope rules / data types from ALGOL and so hacks had to be used which effectively gave up a lot of the safety features of the hardware.
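For readers who've never seen a tagged architecture, here's a toy model of the idea the parent describes (loosely inspired by Burroughs-style tags; the tag names and API are my own invention): every memory word carries a type tag, and an access under the wrong tag faults in hardware rather than silently succeeding.

```python
class TagFault(Exception):
    """The hardware trap raised on a type-mismatched access."""

class TaggedMemory:
    def __init__(self, size):
        self.words = [("uninit", None)] * size   # (tag, value) per word

    def store(self, addr, tag, value):
        self.words[addr] = (tag, value)

    def load(self, addr, expected_tag):
        tag, value = self.words[addr]
        if tag != expected_tag:
            raise TagFault(f"word at {addr} is tagged {tag!r}, not {expected_tag!r}")
        return value

mem = TaggedMemory(16)
mem.store(0, "int", 42)
mem.store(1, "code_pointer", 0xDEAD)

assert mem.load(0, "int") == 42
try:
    mem.load(1, "int")    # treating a code pointer as an integer faults
except TagFault:
    pass
```

This also shows the parent's point about language mismatch: the scheme works when the language's types map onto the tags, and degrades into hacks when they don't.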
Rust's safety was partly inspired by Cyclone, a safe dialect of C that came out of exactly this kind of funding. The NSF and other organizations fund tons of work on clean-slate languages, type systems, formal methods, etc. The academic incentives usually lead to quickly thrown-together prototypes that aren't production ready. Even the good ones rarely get used by programmers in general; OCaml was one of the exceptions. What gets adoption is normally a language/platform with a knock-off of their efforts pushed by a big company or used in a "killer app." Bandwagons, in other words.
The consistently low adoption of clean-slate tech, along with lock-in effects, means the opposite is true: massive investment should go into making the most common tech more correct, secure, and recoverable, in ways that are as simple as possible for users. If clean-slate, it should stay as close as possible to what people already know, with clean integrations. We can also continue to invest in ground-breaking stuff for niches that use it (e.g. SPARK Ada for aerospace) and/or new bandwagons that might build on it (e.g. the Rust ecosystem).
I had forgotten Cyclone's name, thanks for the reminder. I've always felt kinda sad some of the small features like int@ didn't make their way back into C. I'm glad at least C++ has non-null pointers via references (but with stricter semantics and potentially undesirable syntactic sugar) and fat pointers via span<T>, but it's useless when I have to write in C for legacy reasons.
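To make the fat-pointer idea concrete for readers who haven't met it: a fat pointer carries its base and length alongside the address, so every dereference can be bounds-checked in O(1). This sketch is in Python purely for illustration (in C++, `std::span<T>` plays roughly this role; the class here is my own invention):

```python
class FatPointer:
    """A pointer that knows its bounds: (backing store, base, length)."""
    def __init__(self, backing, base, length):
        self.backing, self.base, self.length = backing, base, length

    def __getitem__(self, i):
        # The bounds check a thin C pointer can't do.
        if not 0 <= i < self.length:
            raise IndexError(f"index {i} outside [0, {self.length})")
        return self.backing[self.base + i]

buf = list(range(100))
p = FatPointer(buf, base=10, length=5)   # views buf[10..14]

assert p[0] == 10 and p[4] == 14
try:
    p[5]      # the classic off-by-one overflow is caught, not silent
except IndexError:
    pass
```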
So instead of spending money on a completely revolutionary approach that would add an additional layer of security on top of any language, including an up-and-coming one that's already doing a tremendous job of raising awareness about the relationship between programming languages and security while inventing a new paradigm (the borrow checker), we should just give the money to the latter?
It sounds like an oxymoron. Given the meaning of hacking today, it seems a computer is by definition hackable; if you can't hack it, what's the point? There's surely some mathematical way to prove that the usefulness of a computing system declines as you make it less hackable, such that the least hackable system resembles more or less a rock.
Wouldn't abandoning the von Neumann architecture do this immediately? Store the code and data in separate memories. I'd think that would take out most exploits in one strike, no?
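It helps with code injection but not with everything; here's a toy Harvard-style machine (my own sketch, names invented) showing both sides: a data-space overflow physically cannot overwrite instructions, but it can still corrupt data, including things like return targets stored as data, so it's not a cure-all.

```python
class HarvardMachine:
    """Separate instruction and data memories, as in a Harvard architecture."""
    def __init__(self, code):
        self.code = tuple(code)        # instruction memory: immutable
        self.data = bytearray(256)     # data memory: writable

    def store(self, addr, value):
        # Data stores can only ever address data memory.
        self.data[addr % len(self.data)] = value

m = HarvardMachine(code=[("add", 1, 2)])

# A wild buffer-overflow loop sprays the entire data space...
for i in range(1000):
    m.store(i, 0x41)

# ...yet the instruction memory is untouched (data, however, is trashed).
assert m.code == (("add", 1, 2),)
assert all(b == 0x41 for b in m.data)
```

Return-oriented programming and data-only attacks are the standard counterexamples to "separate memories fix everything."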
To be fair, 600ms is a long time for what I assume is an analogy for a hashing algorithm.
Maybe it would be a fun idea to have sensitive data encrypted in such a way that a part of the hashing algorithm involves solving a physical Rubik's cube.
I don't know enough about computer security, but I think the talk from Joanna Rutkowska about 'Towards (reasonably) trustworthy x86 laptops' [0] is basically pushing for 'hack resistance' from a different angle, with a stateless laptop. I guess that's focussing more on verifying the lack of hacks rather than making it unhackable.
I can't help but feel that the only "unhackable" computer is one without power that's been recently introduced to a sledgehammer.
Besides, if you wanted to hack some sort of computer system: wouldn't you just take someone's children and do unspeakable things to them until someone cracks? That seems more straightforward; certainly more reliable than depending on the intellect of an engineer ("hacker").
The quotes above come from this announcement, from before the proposals were selected: https://www.darpa.mil/news-events/2017-04-10