Doom on GLium, in Rust (docs.google.com)
149 points by hansjorg on May 10, 2016 | 35 comments



Are my perceptions clouded from being inside the Hacker News echo chamber, or is Rust really picking up steam really fast?

It seems to have more libraries and the ones it has are more advanced than what would be expected from a language this young.


I really hate all the "let's port everything to <fancy new language>" whenever a security issue comes out.

That being said, the Rust community has done a lot of amazing work. There are people who are not just talking shit, but actually making attempts to port or rewrite some major things in Rust.

I was thinking about using it for a project recently, but I found the GTK documentation to still be lacking.

Documentation is the biggest factor in language survival. I hated Python around 2002/2003, but today the documentation is orders of magnitude better. I really got into PHP back then thanks to its documentation, not realizing how awful a language it was under the covers (although PHP7 seems to be making some huge strides to get it away from whale-guts status: an actual AST, namespaces, things that are slightly more sane).

So long as there are more tutorials, Stack Overflow questions and library projects for Rust, it will grow. I have pretty high hopes for it being a major language for embedded systems. Who knows, maybe in five years we'll have production-ready Rust-based kernels that can run existing Docker containers.


"I really hate all the "let's port everything to <fancy new language>" whenever a security issue comes out."

That's a nice oversimplification, even disinformation. More like: ALGOL60-68 was designed with prevention of common issues in mind. It didn't run on the PDP-11, so they ported & modded BCPL into C to write UNIX. All kinds of stuff got written in that language with tons of flaws that were mostly the same and had been preventable since ALGOL. There were many calls to use alternatives (eg Wirth's languages, Ada) that prevented them by default, with switches to turn each check off only when necessary. One was a clever C variant called Cyclone with pointer and memory-management tricks to knock out errors. Inspired by it and some others, the Mozilla team created Rust to achieve similar safety/security objectives that have existed since the 1960's with ALGOL.

So, it's not a fad or gimmick. It's recognizing that certain things regularly trip up system programmers. The basic techniques for stopping them were deployed and field-proven as far back as 1961 in mainframes. Many others since, like those in Rust. The author, contrary to your assertion, is simply applying that wisdom to reduce flaws instead of ignoring it. Rust and DOOM were probably chosen because they were more fun for the author than rewriting djbdns, OpenSSH or Nginx in Ada2012/SPARK2014 with full prover use. Entirely understandable even if not ideal.


AFAIK ALGOL 68 was garbage collected, as were many safe languages of that period, so no: the techniques to prevent memory safety issues without garbage collection (affine and linear types, region systems, move semantics) were only implemented in practical languages in the last 20 years (ATS, Cyclone, Rust). Rust is possibly the first non-GC'ed safe language that seems to have a chance to reach mainstream adoption.

So, while it is true that the success of C primarily derives from its being used to implement UNIX, it is also true that for a long time (and, arguably, still today), GC'ed languages were perceived to be inadequate for system programming.
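To make the move-semantics part concrete, here's a toy Rust sketch (just illustrative, nothing from the slides):

    // Ownership moves at compile time; no GC is needed to know when to free.
    fn main() {
        let s = String::from("hello"); // `s` owns the heap allocation
        let t = s;                     // ownership moves to `t`
        // println!("{}", s);          // compile error: `s` was moved
        println!("{}", t);             // `t` is dropped (and freed) here
    }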


"AFAIK ALGOL 68 was garbage collected and so where many safe languages of that period, so no..."

Garbage collection is not mentioned in the ALGOL68 report at all. There's a link to it on the Wikipedia page if you want to check. Not ALGOL60, either. It wasn't in Pascal, Modula-2, or the original Ada 1983 rationale either. PL/0 of MULTICS, PL/1 & PL/S of IBM, ESPOL/NEWP of Burroughs, and Mesa at Xerox didn't have it either. All had various enhancements for safety in quite a few forms. So, I have no idea where you're getting that information from. Disinformation, most likely, probably given to you by people who didn't study history and strawmanned up reasons safer, non-GC languages couldn't exist. ;)

Note: Dynamic memory allocation generally had to be done carefully since there was no GC. Yet, the other safety features of the language helped reduce the number of problems you'd run into. Especially those that discouraged directly working with pointers.

Now, let's look at memory safety. C was a language designed specifically for the PDP-11 & its model. The Burroughs B5000 (1961) was a machine custom-designed for ALGOL and its philosophy. It had stack overflow checking, array/pointer bounds checking, pointer protection, HW checking of argument types during function calls (made possible by an OS written in ALGOL w/ strong types), and tags to prevent execution of data. You can bet your ass that ALGOL on a machine designed for it was much more memory safe than C on a PDP-11. Or even C on a mainframe. Doing a pentest on that architecture with today's knowledge gave me very few points of ingress due to all those checks implemented for the safe-by-default language.

"it is also true that for a long time (and, arguably, still today), GC'ed languages where perceived to be inadequate for system programming."

That would be true. Thing is, that had nothing to do with the safe languages and approaches that didn't use GC's and predated C. Further, Modula-2 showed one could match C-like simplicity/efficiency with better safety with almost no effort. Taking it further, Hansen developed a safe-by-design language (with 5 keywords lol) and OS called Edison on the PDP-11. At that point, it was clear that Thompson and Ritchie's preferences were the only reason their PDP-11 language was that insecure and hard to analyze. :)

"Rust is possibly the first non-GC'ed safe language that seems to have a chance to reach mainstream adoption."

This, plus the claim about recent advances, I agree with. It has a lot of potential. It's why I promote it and help people posting here with Rust projects. Also tried to help them with their docs.


I do not pretend to have any first-hand knowledge of these languages; the little I know is from what can be found on the 'net and, yes, second-hand information. Also, right now the office firewall doesn't let me access the ALGOL Report (!). Still...

"Garbage collection is not mentioned in ALGOL68 report at all."

Possibly it's not mentioned explicitly, but it might be implied. Did any conforming ALGOL implementation ship without a GC? Wikipedia explicitly lists 'Cambridge ALGOL 68C' as an extension omitting GC. As far as I know it took a while to have a properly conforming ALGOL 68 compiler, as the spec specifies behaviour, not implementation (cf. Knuth's 'Man or Boy Test').

Also, as you pointed out, many of those languages relied on hardware support for safety. It is at least plausible that the progressive CPU integration of the '80s, which led to the rise to dominance of simpler and faster architectures (RISCs and even x86), left languages and OSs that relied on more complex hardware support at a disadvantage compared to C and UNIX.

It is around that time that the Lisp Machine was discontinued and the iAPX 432 failed.


"As far as I know it took a while to have a properly conforming Algol 68 compiler as the spec specifies behaviour, not implementation (cf. Knuth's 'Man or Boy Test')."

You nailed it. The spec specifies how the language is to behave rather than dictating its implementation. That kind of thinking was critical with hardware as diverse as it was back then. You can add a GC if you want, but it's not assumed. You can do that with C, too, as many have.

"Also, as you pointed out, many of those languages relied on hardware support for safety. "

It was often used but not required. The older languages established safety by including strong typing, bounds checks, and some interface checks by default. Those knock out tons of errors. Modern languages have them, actually. Some went further with custom hardware accelerating it, esp Burroughs, but that wasn't the norm.

"It is at least plausible that the progressive cpu intergration of the '80s which lead to the rise to dominance of simpler and faster architectures (RISCs and even x86) left languages and OSs that realied on more complex hardware support at a disavantage compared to C and UNIX."

It's the best hypothesis. Even Burroughs, now called Unisys, got rid of their custom CPU's for MCP/ALGOL since customers only cared about price/speed. The AS/400 did the same with its transition to POWER based on customer demand. GCOS did the same thing. As you said, LispM's and the i432 (and BiiN's i960) died since they did the opposite. Java machines exist, with Azul's Vega3's being friggin' awesome, but they largely didn't pan out. Azul is recommending software solutions on regular CPU's.

Far as I see it, the market drove development along just a few variables that severely disadvantaged safe HW and SW stacks. This was probably because software engineering took a while to develop and the market took a while to learn that other things (eg maintenance, security) mattered. Damage was done, though, with IBM mainframes, Wintel PC's, and Wintel/UNIX servers dominating.

For UNIX, open source and simplicity also contributed to its rise. Another aid to various products was backward compatibility with prior software or languages that are, for lack of a better word, shit. Trends like that feed into the hardware demand trend and vice versa. So, it wasn't any one thing, but price/performance was a huge factor given that all people looked at were MIPS, VUPS, MHz/GHz, FLOPS, and so on.


Regarding ALGOL (I still can't access the spec), does the standard actually provide for manual memory management, or does it only have the equivalent of malloc and no free? (I do not doubt that practical implementations had both.)

Regarding hardware safety, I believe we might see a resurgence of built-in support for safety features. CPU designers have more transistors available than they know what to do with: after adding yet another execution unit and widening the vector length again, they are reaching a point of diminishing returns, so they might switch to adding back these safety features.

And in fact it is already happening: W^X and virtualization can be considered as belonging in this area, and more recently Intel added MPX and MPK that are a more direct attempt at userspace security features.

New architectures are being designed with security in mind, like the vaporware^Wupcoming Mill.


http://arewewebyet.com/

I always look for a SQL driver, and then check whether it has connection pool support. If a language passes this second test, the language is ready ;)


So, Rust is ready? There are multiple database drivers, and there is at least one crate for connection pools (r2d2) that also works with diesel (a query builder).
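Rough sketch of the r2d2 + diesel combo (from memory, so the exact APIs and the connection string are illustrative and may differ between crate versions):

    // Sketch only: crate APIs have shifted between versions, treat names as approximate.
    extern crate diesel;
    extern crate r2d2;
    extern crate r2d2_diesel;

    use diesel::pg::PgConnection;
    use r2d2_diesel::ConnectionManager;

    fn main() {
        // Build a pool of Postgres connections managed by r2d2.
        let config = r2d2::Config::default();
        let manager = ConnectionManager::<PgConnection>::new("postgres://localhost/mydb");
        let pool = r2d2::Pool::new(config, manager).expect("failed to create pool");

        // Check a connection out of the pool; it goes back when dropped.
        let _conn = pool.get().expect("failed to get connection");
        // run diesel queries with `_conn` here
    }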


Don't forget support for transactions! That's often missing too.


The same user (tomaka) also wrote Rust bindings for the Vulkan API - vulkano[1], which obviously can be used for creating modern games.

[1] https://github.com/tomaka/vulkano


God, Google Docs is really horrible for non-documents. They literally just scroll me way too fast through content when I try to go to the next slide, and worse, they hack my back button so that each slide is a new page, meaning I basically have to open a new tab.

It's also bad for images; for some reason they thought the scroll wheel should zoom in and out instead of scroll, and the only way to scroll is to click and drag. It's like their UI devs are on crack.


It sounds like you're the sort of person that thinks browsers shouldn't allow web pages to override key bindings.

If you're bored this afternoon, here's a 10-year-old open bug against Firefox that will certainly entertain and possibly infuriate you:

https://bugzilla.mozilla.org/show_bug.cgi?id=380637


I'm divided on this.

On the one hand, overriding keybindings enabled me to create an interface a while back where you could drag and drop MIDI files onto an on-screen keyboard that corresponded to your physical keyboard, creating a musical instrument that could be played by typing.

On the other hand, more and more I think those things are better done as desktop applications, and that web applications are bad for users.

I don't know. I wouldn't be very sad either way, whether key bindings can be overridden or not. I'm much more concerned that programmers not override keybindings to create really bad interfaces like this one.


I don't have this issue in Chrome, but you can always use the arrow keys if your mouse is having issues. As for the back button - works fine for me and takes me to the previous slide. You can even download it as a PDF or Powerpoint if you like.


Ugh, the last thing I want is for my browser back button to be hijacked by a slideshow presentation. Help, I'm stuck in a powerpoint.


If you think of each slide as a separate page, as some do, it makes sense.


I just wish there was a way to opt-in to it first.

My instinct when I hit the site was to use my mousewheel to scroll down, because I didn't immediately realize it was a slide deck. So my mousewheel advanced the deck about a dozen slides and wrecked my back button.


If you think of each slide as a separate page, then having your scroll wheel jump you to a different page doesn't make sense.


> I don't have this issue in Chrome, but you can always use the arrow keys if your mouse is having issues.

Are you seriously suggesting that a workaround is just as good as having a good UI?

> As for the back button - works fine for me and takes me to the previous slide.

So after I scroll down and literally go through the entire slideshow with a small wave of the hand, I have to click back how many times to get back to hacker news?


I get the same behavior as you, except I use Firefox... maybe it's just random what you get?


+1 on glium as I've previously mentioned here: https://news.ycombinator.com/item?id=11620852

As someone who spends a lot of time in OpenGL it's a really solid, rusty API that's quite a joy to work with.


It says "Glium: Multi-threading... Send + Sync + Context Management (means it can be done)".

Can someone explain a bit about this? I'm not familiar with Rust, but with C you have to run GL calls from one and the same thread or you're gonna have a bad day.

Bonus question: Anyone that was/is C programmer (not C++) with opinions on Rust?


There's more details in the presenter notes:

>I won’t get into much detail about threading, but imagine how the OpenGL skynet-state-machine interacts with multiple threads. GLium ensures only a thread-specific OpenGL context is used on any particular thread.

>By making everything neither Send nor Sync, it prevents you from using resources created by one thread in another, enforcing OpenGL semantics at compile-time.

Basically, any type without the Send+Sync traits will not work with the existing threading APIs (since they require combinations of Send+Sync based on their threading semantics), forcing API calls to be done on the right thread.
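Roughly, the idea looks like this (a toy sketch, not glium's actual types):

    use std::rc::Rc;

    // Not glium's real handle: a type that embeds an Rc is neither Send
    // nor Sync, so the compiler refuses to let it cross a thread boundary.
    struct GlHandle {
        _not_send: Rc<()>,
    }

    fn main() {
        let handle = GlHandle { _not_send: Rc::new(()) };

        // std::thread::spawn(move || drop(handle));
        // ^ compile error: `Rc<()>` cannot be sent between threads safely

        drop(handle); // fine on the thread that created it
    }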


Thanks! I was in presentation mode, for some reason, and didn't see the notes.


I've written a bunch of C++ (and abhor it), though my primary experience is with C and Objective-C, and Rust is a fantastic language. It definitely has a learning curve, and writing code in Rust takes me a good bit longer than C/ObjC, but when I'm done I'm confident I wrote the code I think I wrote*

Rust tackles all the things that are hardest about programming: correctness, threading, and memory management, and it validates that you've done them right at compile time.

Think of it as an extensive suite of compile-time unit tests. It definitely forces you to think differently, though. A nice side benefit is that I learned Swift way, way faster (and IMO more idiomatically) than if I'd gone directly from ObjC->Swift.

*: Not that it's right, just that it's what I intended.
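A tiny example of what I mean by compile-time unit tests (my own toy snippet, nothing to do with the slides):

    fn main() {
        let mut v = vec![1, 2, 3];
        for x in &v {
            // v.push(*x); // compile error: cannot borrow `v` as mutable
            //             // because it is also borrowed as immutable
            println!("{}", x);
        }
    }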


There's more info and links in the speaker notes (on the options menu).


I don't get why slides are popular. We're missing 50% of the actual content of the talk here.


I agree, but this one has the speaker notes


Right after slide 1 appearing, this redirects to https://support.google.com/accounts/answer/32050 for me in Firefox.


Fine for me?


It works in an unrestricted Chrome instance. I wonder if there's a Google Docs downloader script that directly gives me the PDF without dealing with the wonky website.


<rant> Google, take your browser team of the loony pills for FIVE SECONDS! Chrome isn't the only browser in the world. Having your website crash and burn one one of the most popular browsers out there that isn't yours is beyond unacceptable. Especially if you push web standards and make recommendations to other developers and sites to make their sites support all browsers. </rant>




