
>Of course you can add more degrees of freedom and write programs with graph lifetimes. That's fine, but it's pretty clearly more complex.

How do you measure the complexity you're talking about here?

What if a program using graph shaped lifetimes was 10 times shorter than a program that had to use tree shaped lifetimes? Would you still say that the tree shaped program is less complex?




I don't think "number of tokens in the source code" is a great measure, no. More like, how many possibilities are there for the way this program will execute? How many potential branches are there? How many things can go wrong, and do we know if and when they will go wrong?

With Python for instance, it's just really difficult to say, because you may have arbitrary exceptions at any point. When writing long-lived services, this is very significant; you need to handle all the exceptions that may be raised, but you can't actually determine what that set is. I've had to read the source code of my dependencies on more than one occasion to figure this out, and still, you never get all of them before production.

With Rust this is much more manageable. Technically you could have an arbitrary panic at each step, but it's much less common to catch and handle panics than it is exceptions, and recoverable errors are handled via return values. Relative to Python, it's a breeze to figure out when errors may happen and what to do about them.
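To sketch what "errors as return values" looks like in practice (parse_port is a made-up function for illustration, not something from the thread):

```rust
use std::num::ParseIntError;

// Hypothetical example: the error is part of the return type, so the
// failure mode is visible in the signature rather than hidden behind
// a possible exception somewhere down the call stack.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    // The caller has to acknowledge both outcomes before using the value.
    match parse_port("8080") {
        Ok(port) => println!("listening on {port}"),
        Err(e) => eprintln!("bad port: {e}"),
    }
}
```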

Similarly with my example before, what type are the keys in a Python dictionary? Probably the ones you meant to put there, but who really knows. What type are the keys in a Rust dictionary? The ones you put there explicitly, full stop. There's no way for that to get screwed up or change out from under you.


I kinda answered the wrong question here - that's how you would determine which program was more complex. Regarding whether trees or graphs are more complex: graphs are a superset of trees with additional degrees of freedom. Degrees of freedom are complexity. Graphs are more complex than trees, and trees are more complex than lists, because we're drilling down into increasingly restricted subsets where we can make stronger and stronger generalizations.


I understand what you're getting at. A more restrictive language allows you to exclude a lot of things that cannot possibly happen, and therefore you don't have to think about them any longer, which means less complexity.

But I'm still not sure whether this idea is sufficient to explain away the situation where you have to write a far longer program in order to work around those restrictions.

For instance, safe Rust cannot express linked lists in the usual way, i.e. using pointers. So in order to write a linked list in safe Rust, you would have to reimplement some of the functionality that pointers provide using array indices (or similar).

This program would be very long indeed, plus it would cause many of the same safety issues that safe Rust was supposed to avoid. Essentially, what this program would do is create an unsafe memory management system in safe Rust.
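To make the index-based workaround concrete, here's a minimal sketch (all names hypothetical). Note that nothing stops a stale or wrong index from pointing at the wrong node - the borrow checker can't help here, which is exactly the "unsafe memory management in safe Rust" concern:

```rust
// Nodes live in a Vec arena; "pointers" are just indices into it.
struct Node<T> {
    value: T,
    next: Option<usize>, // index of the next node, if any
}

struct List<T> {
    nodes: Vec<Node<T>>,
    head: Option<usize>,
}

impl<T> List<T> {
    fn new() -> Self {
        List { nodes: Vec::new(), head: None }
    }

    // Push a value onto the front of the list.
    fn push_front(&mut self, value: T) {
        let idx = self.nodes.len();
        self.nodes.push(Node { value, next: self.head });
        self.head = Some(idx);
    }

    // Walk the chain of indices, yielding references in list order.
    fn iter(&self) -> impl Iterator<Item = &T> + '_ {
        std::iter::successors(self.head, move |&i| self.nodes[i].next)
            .map(move |i| &self.nodes[i].value)
    }
}

fn main() {
    let mut list = List::new();
    list.push_front(3);
    list.push_front(2);
    list.push_front(1);
    let values: Vec<i32> = list.iter().copied().collect();
    assert_eq!(values, vec![1, 2, 3]);
}
```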

I believe that the dominant factor for complexity of a program is its length. If you have two programs that produce the same output for the same input and one is 10 tokens long while the other one is 10,000 tokens long then the first program will always be less complex.

I say "I believe" because I'm a bit out of my depth here. These claims (mine and yours) clearly need formal proofs and some formal definition of complexity.


I actually don't think it's as difficult as people often suggest to write a doubly linked list in Rust (yes, I have done it and read Learning Rust by Implementing Entirely Too Many Linked Lists). I think it's just surprising to people that something that's a CS101 data structure takes some advanced features to implement without locks.

But the thing is, you don't write C programs in Rust and you don't generally use doubly linked lists. (Linked lists turn out to be a very niche data structure that doesn't work well with modern CPUs anyway, but I digress.) You'd probably use a Vec or a BTree from the stdlib anywhere you're thinking of using a linked list.

So I don't think it's really the case that programs are significantly longer in Rust. Rust is more explicit, so you'll end up moving some things that existed as comments or documentation or just in your own head into code - that's a win that doesn't increase complexity, only exposes it. That program may look larger, but it's only because you can see it better.

It really depends on what those 10 tokens are doing. If I have a token that creates a new universe, seeds it with life, and creates the conditions for life to develop an optimal linked list - it might solve the problem in one step, but our atomic unit here is absolutely massive.

Similarly, if I compile a program to assembly I'll generally get many more tokens. But I can't really buy that I've increased the complexity of the program here.

I'm pretty satisfied with this understanding but I understand your desire for greater rigor.


>But the thing is, you don't write C programs in Rust and you don't generally use doubly linked lists.

The usefulness of linked lists is entirely beside the point. They just serve as an example for situations where additional restrictions can cause a program to be longer or more difficult to understand.

>It really depends on what those 10 tokens are doing. If I have a token that creates a new universe, seeds it with life, and creates the conditions for life to develop an optimal linked list - it might solve the problem in one step, but our atomic unit here is absolutely massive.

This example shows why I think that your claims lack a definition of complexity. You're now saying that the output of a program determines its complexity. I don't think that's a useful measure of complexity because it doesn't allow us to compare the complexity of two programs that produce the same output.


Not the particular output, no. It was how much was going on inside that hypothetical instruction, how massive of an abstraction it was, the unpredictability of the output. If you like, you can think of it as a giant decision tree and imagine counting the branches to measure the complexity.



