sjolsen's comments

>especially in embedded, where you might not get memory protection!

Or where there's perfectly valid memory mapped to address 0


The 8086 put the interrupt vector table at address 0, in retrospect a truly terrible choice. Any null pointer write would trash the vectors and take the operating system down with them.

A much better design would have been to put the system ROMs at 0, where at least DOS would survive a null pointer write.


>Phone calls are predominantly direct, 1:1 exchanges

>Since the internet is instead a broadcast medium, it is much easier for big players to saturate the lines

The Internet is _not_ a broadcast medium. Data transfer over the Internet is by nature peer-to-peer.

>ISPs naturally believe that _someone_ should be paying for what they interpret as the "extra load" from these big players

Someone _is_ paying for the load for big players: the consumers who are accessing their sites.

>some households will use 30Mbit/s streaming content to 4 devices for 10 hours per day, and the next household, which pays the same monthly internet bill, uses 5Mbit/s or less to read predominantly-text based sites for 2 hours per day

30 Mb/s residential service already costs more than 5 Mb/s residential service.

>Net neutrality is fundamentally large internet companies looking to keep their costs down by using government force to prevent ISPs from charging them usage-based "carriage fees" or similar.

Net neutrality is fundamentally a consumer protection preventing ISPs from leveraging their natural monopolies on last-mile service to enact rent-seeking policies. Whether ISPs charge consumers directly for access to, say, competing content providers ("Watch Turner Classic Movies for free with Spectrum Basic 10 Mb/s, or upgrade to Spectrum Premium 10 Mb/s for only $19.99 per month to access Netflix, Hulu, and HBO!"), or they charge those content providers extra to peer their traffic and those content providers fold that into subscription fees, we're the ones getting bent over the barrel.


> we're the ones getting bent over the barrel.

Only in the indirect sense that all expenses ultimately filter down to the customer/buyer. The common trope that businesses never take a loss and always find some creative way to pass each expense on to the consumer is total nonsense, except insofar as it's a simple tautology stating the obvious fact that a business's revenue must cover its own expenses.


>The fact that we can write down only countably many expressions for numbers doesn't mean that there are numbers that we may never write expressions for

It depends, as they say, on what the definition of "is" is. What does it mean for a number to exist if it cannot be described? What does it mean for a construction to exist if it cannot be constructed?

>if you confuse extant with useful you might end up believing that some random large integers aren't "there!"

The question isn't whether it's useful to claim the existence of uncomputable reals; the question is whether it's meaningful. A construction needn't be useful to be meaningful, but surely it must be meaningful to be useful.


The standard delta-epsilon type of reasoning crucially depends on the existence of arbitrary reals. Many geometric proofs and lines of reasoning depend on the ability to place a point on a line at an arbitrary distance from another point.

All numbers ever written are rationals, and thus form a countable set. But those endless irrational numbers make a lot of ways of reasoning simpler, or possible at all.


>The standard delta-epsilon type of reasoning crucially depends on existence of arbitrary reals.

What do you mean by "arbitrary?" Do you mean uncomputable? How does analysis require the existence of uncomputable reals?

>All numbers ever written are rationals and thus countable

It is also possible to "write" computable irrational numbers, more or less by definition.
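
For instance (a sketch of the point, not code from the thread): the following C program prints sqrt(2), a computable irrational, to as many decimal places as the chosen integer width allows. A terminating procedure like this is exactly what it means to "write" such a number.

    #include <stdio.h>

    /* Integer square root by Newton's method. */
    static unsigned long long isqrt(unsigned long long n) {
        if (n < 2) return n;
        unsigned long long x = n, y = (x + 1) / 2;
        while (y < x) {
            x = y;
            y = (x + n / x) / 2;
        }
        return x;
    }

    int main(void) {
        /* floor(sqrt(2 * 10^(2k))) gives sqrt(2) to k decimal places;
           k = 9 keeps 2 * 10^(2k) within 64 bits. */
        unsigned long long d = isqrt(2000000000000000000ULL);
        printf("sqrt(2) ~ 1.%llu\n", d % 1000000000ULL); /* 1.414213562 */
        return 0;
    }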


>How does analysis require the existence of uncomputable reals?

I don't know what the other person was referring to, but from many important theorems of analysis you can prove the existence of uncomputable reals. For example, from the theorem that a continuous function on a closed bounded interval is uniformly continuous, you can prove that there exist uncomputable reals. If we think that this theorem and many more like it are necessary for analysis, we must conclude that analysis requires the existence of uncomputable reals.

The above fact comes from the area of mathematical logic known as reverse mathematics. The idea is in the name: rather than proving theorems from axioms, we go in reverse. We take some theorem and we see what axioms we can derive from it (using a weak background theory). This gives us a sense of the logical strength of a theorem; the stronger the axioms we can prove from it the stronger the theorem.

An amazing fact is that most classical theorems of mathematics have their logical strength captured by one of five sets of axioms. The weakest are those which are outright provable from the weak background theory (briefly, the background theory, called RCA_0, says that you're allowed to do computable processes). The next is the theory WKL_0, formed from RCA_0 by adding an additional axiom, which goes by the name weak Kőnig's lemma. This axiom states that any infinite binary tree has an infinite branch (it's a lemma in 'ordinary' mathematics, but in reverse mathematics it gets treated as an axiom). This is the theory which captures the logical strength of the theorem I mentioned in the first paragraph. From weak Kőnig's lemma we can prove the existence of uncomputable reals. We can construct a computable infinite binary tree so that any branch through it must be uncomputable. From such a branch one can build an uncomputable real, say by insisting that the nth bit in its binary expansion is 1 if and only if the branch went left at level n.

(The easiest way to build such a tree is to go through the halting problem. The basic idea is to construct a tree of attempts to predict what programs do when run on their own source code. The key fact used is that while there's no computable procedure to check whether a Turing machine halts, if you just want to know whether it halts within n steps (for fixed n), then you can check that with a computer. The program is easy: simply simulate running the Turing machine for n steps and see whether it stops at some point.

To build our tree: list the Turing machine programs as p_0, p_1, p_2, ... (This listing can be done in a computable way.) We will associate level k of the tree with the computation p_k(k), that is, p_k run on its own index. Read a left turn at level k as the guess "p_k(k) halts with output 0" and a right turn as the guess "p_k(k) halts with output 1". Start with a single node for the root of the tree. Then, to build the (n+1)-st level, look at each node t on the nth level and simulate p_0(0), p_1(1), ..., p_(n-1)(n-1) for n steps each, which is computable. If some p_k(k) has halted within n steps with output 0 while the path to t went right at level k, or with output 1 while it went left, then t gets no children; otherwise t gets both a left and a right child. Intuitively speaking, at stage k we keep our options open and don't decide what p_k(k) does. But if we later see it halt, then we retroactively fix any mistake by not going any further along paths that guessed its output wrongly.

For example, it could be that p_0(0) halts with output 1, but that it takes 1000 steps for this to happen. So while building the tree, when you get to level 1000 you'll suddenly stop building any further the entire left half of the tree, since you finally learned that you wanted to go right at the root node all along.

This process for generating the tree is computable, so by definition it's a computable tree. It's infinite, because at every level there is at least one node whose guesses agree with everything observed so far. However, no branch through this tree can be computable. An infinite branch is never caught in a wrong guess, so whenever p_k(k) halts with output 0 the branch goes left at level k, and whenever it halts with output 1 it goes right. If the branch were computable, we could diagonalize against it: write a program q that finds its own index (a standard trick), looks up the branch's guess at that level, and halts with the opposite output. Then q(q) halts, and the branch's guess about it is wrong, which is a contradiction. So no Turing machine computes a branch of this tree, and any real number built from a branch is uncomputable.)
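
To make the pruning rule concrete, here is a small C sketch of the membership test for this tree. The function run_on_self is a hypothetical stand-in for "simulate p_k on input k for a bounded number of steps", which is the computable ingredient; the rest is bookkeeping.

    #include <stddef.h>

    /* Hypothetical stand-in: simulate p_k on input k for at most `steps`
       steps. Returns 0 or 1 if it halted in time with that output, and
       -1 if it has not halted yet (or halted with some other output).
       Such a bounded simulator is computable, unlike the halting problem. */
    int run_on_self(size_t k, size_t steps);

    /* A node is the string of turns taken from the root: turn[k] == 0
       means "went left at level k" (guessing output 0), turn[k] == 1
       means "went right" (guessing output 1). The node is in the tree
       iff no guess along the path has been refuted by a simulation
       bounded by the node's depth. */
    int in_tree(const int *turn, size_t depth) {
        for (size_t k = 0; k < depth; k++) {
            int result = run_on_self(k, depth);
            if (result >= 0 && result != turn[k])
                return 0; /* caught in a wrong guess: pruned */
        }
        return 1;
    }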

----

Probably the best reference for reverse mathematics is Stephen Simpson's book Subsystems of Second Order Arithmetic. The first chapter is freely available on his website. It nicely lays out the philosophy behind the project and the major results while deferring (most of) the technical details to the rest of the book. http://www.personal.psu.edu/t20/sosoa/chapter1.pdf

Besides that, the Wikipedia page for reverse mathematics isn't awful, but neither is it very good. https://en.wikipedia.org/wiki/Reverse_mathematics


>Then build a company if you think its so easy.

If I had a dollar for every time I saw a legitimate complaint about a complex system met with, "Well just reinvent it yourself from the ground up if you're so fucking smart," I'd be a rich man. It is not a constructive suggestion.


The problem is that there are plenty of energy-storage companies available for you to invest in on the stock market.

AES Energy Storage for example, is building tons of energy storage across America. And you can support them by providing investment dollars.

So yeah, building companies (or at least, helping companies that match your worldview) is quite easy in this Democracy. Instead of complaining about it, you should be elevating AES Energy Storage and other similar companies.


>Instead of complaining about it, you should be elevating AES Energy Storage and other similar companies.

No. I refuse to accept a standard of discourse where one cannot even point out a problem without having already solved it. There are many, many hard problems with complex systems in the world, and no person, much less a complete outsider, can hope even to begin to address more than a very, very small number of them even if they dedicate their life to the task. Saying "fix it or shut up" is asinine.


Well, if stopping people from demanding you solve a problem before pointing it out is so simple, why don't you go do it then?

...

I'm sorry.

In all seriousness though, it's a no-expertise-needed response that's meant to shout down people who are particularly clueless and disconnected. Perhaps we need a substitute response, like calling in an expert in the relevant field to decide whether the person is clueless or not?


To expand on this, C++ fundamentally uses the same machine model as C (or at least one that's very, very close). Where C++ differs from C is the array of high-level language constructs, which map down to that common machine model (this is what is meant by "zero-cost abstractions"). The idea is that when you use a construct like templates or virtual functions, what it compiles down to is not significantly different from the analogous code you would write in C (hand-duplicated routines with different concrete types and structs-of-function-pointers, resp.).

There are exceptions (no pun intended) to this, namely exceptions, RTTI, global construction/destruction, and the intricacies of new/delete. As the article points out (as will anyone who advocates the use of C++ in constrained environments), none of these are really necessary to use C++ effectively and can be safely stubbed out.
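
To make the structs-of-function-pointers point concrete, here's a hedged sketch (names invented for illustration) of roughly what a C++ class with a single virtual function compiles down to, written out by hand in C:

    #include <stdio.h>

    struct shape;

    /* The "vtable": one table of function pointers shared by all
       instances of a given concrete type. */
    struct shape_vtable {
        double (*area)(const struct shape *self);
    };

    /* The "base class": each object carries a pointer to its vtable. */
    struct shape {
        const struct shape_vtable *vtable;
    };

    /* A "derived class": base subobject first, then its own fields. */
    struct square {
        struct shape base;
        double side;
    };

    static double square_area(const struct shape *self) {
        const struct square *sq = (const struct square *)self;
        return sq->side * sq->side;
    }

    static const struct shape_vtable square_vtable = { square_area };

    int main(void) {
        struct square sq = { { &square_vtable }, 3.0 };
        struct shape *s = &sq.base;
        /* A "virtual call": one indirect load plus one indirect call,
           which is essentially what a C++ compiler emits. */
        printf("%f\n", s->vtable->area(s));
        return 0;
    }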


Haskell is also garbage-collected, and lazily evaluated to boot. It's not really an apples-to-apples comparison.


Of course not. Haskell is a purely functional language and Rust is an imperative systems language. It could hardly be more apples to oranges.


>if writing trivial data-structures in Rust is a great challenge, it shows that writing other, less trivial things in Rust will be much more challenging than might be preferred.

Data structures are exactly the abstractions built on top of raw memory. Rust is not really geared toward working with raw memory; it's geared toward working with abstractions that hide the raw memory (i.e., data structures). That's why writing data structures in Rust is hard, and it's also why that fact doesn't imply that "other things" (i.e., code that is not part of a data structure implementation) are hard.

In other words, the fundamental flaw with the assumption that there is a relationship between the difficulty of implementing data structures in Rust and the difficulty of writing applications in Rust is that the two involve very different programming models, and Rust has much more ergonomic support for one than the other.


>The catch you're pointing out is that on x86 there are technically no 'invalid' addresses

Depending on what you mean precisely by "x86," there is such a thing as an invalid address: the IA-32e architecture (or whatever you want to call Intel's flavour of 64-bit "x86") requires that the <n> high bits of an address all match, where <n> is machine-dependent.


That's a fair point, though I think you could debate whether the current 48-bit address space of x86-64 is part of the architecture or just an implementation detail. In the end I don't really think it matters which you consider it (and I'd be happy with either). All that said, you're completely right that with current x86-64, there are 64-bit values that could never be a valid address.
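
For what it's worth, the canonicality check for the common 48-bit case is a two-line function; the width here is illustrative, since (as discussed above) it's really a property of the implementation:

    #include <stdbool.h>
    #include <stdint.h>

    /* Canonical-address test for an x86-64 implementation with 48
       virtual address bits (the common case). Bits 63..47 must all be
       equal: all zeros or all ones. */
    static bool is_canonical48(uint64_t addr) {
        uint64_t top = addr >> 47;         /* the 17 bits that must agree */
        return top == 0 || top == 0x1FFFF; /* all clear or all set */
    }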


>the problem is with compilers (and their developers) who think UB really means they can do anything

But that's exactly what undefined behavior means.

The actual problem is that programmers are surprised; that is, programmers' expectations are not aligned with the actual behavior of the system. More precisely, the misalignment is not between the actual behavior and the specified behavior (any actual behavior is valid when the specified behavior is undefined, by definition), but between the specified behavior and programmers' expectations.

In other words, the compiler is not at fault for doing surprising things in cases where the behavior is undefined; that's the entire point of undefined behavior. It's the language that's at fault for specifying the behavior as undefined.

In other other words, if programmers need to be able to rely on certain behaviors, then those behaviors should be part of the specification.
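
A standard illustration of that misalignment (my example, not from the thread): a programmer writes an overflow "sanity check" expecting wrapping, but signed overflow is undefined in C, so the optimizer may legally assume it cannot happen and delete the check.

    #include <limits.h>
    #include <stdio.h>

    /* Signed overflow is UB, so the compiler may assume x + 1 > x always
       holds for signed x and fold this function to `return 0;` (gcc and
       clang will typically do this at -O2). The expectation of wrapping
       is not part of the specified behavior. */
    int will_overflow(int x) {
        return x + 1 < x;
    }

    int main(void) {
        printf("%d\n", will_overflow(INT_MAX)); /* may print 0 */
        return 0;
    }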


In some sense the language is the compiler and the compiler is the language; the language is much like a human language, used for its utility in expressing things (ideas, programs). You can tell if your human language words work by determining if people understand you. If people start being obtuse and refusing to understand you because of an arbitrary grammar rule that isn't really enforced, you'd be right to be upset with the people just as much as the grammar.

It in fact doesn't matter at all what the standard says if GCC and LLVM say something different, because you can't use the standard to generate assembly code.

The standard imposes no requirements on UB, so it's the compiler's responsibility to do the most reasonable, least shocking thing with it possible: if I'm a GCC developer, and you ran GCC on one of these fairly mundane examples, and it compiled without error and then ran rm -rf / or stole your private RSA keys and posted them on 4chan, and I said "well, you can't be mad, because it's undefined; it's the standard's fault," you'd probably punch me in the face after some quick damage control.

If it deletes an if branch or terminates a spin loop early, that's potentially even worse than those two examples.


>In some sense the language is the compiler and the compiler is the language; the language is much like a human language, used for its utility in expressing things (ideas, programs). You can tell if your human language words work by determining if people understand you. If people start being obtuse and refusing to understand you because of an arbitrary grammar rule that isn't really enforced, you'd be right to be upset with the people just as much as the grammar.

The shortcoming of this interpretation is that programs are not (only) consumed by humans; they're consumed by computers as well. Computers are not at all like humans: there is no such thing as "understanding" or "obtuseness" or even "ideas." You cannot reasonably rely on a computer program, in general, to take arbitrary (Turing-complete!) input and do something reasonable with it, at least not without making compromises on what constitutes "reasonable."

Along this line of thinking, the purpose of the standard is not to generate assembly code; it's to pin down exactly what compromises the compiler is allowed to make with regards to what "reasonable" means. It happens that C allows an implementation to eschew "reasonable" guarantees about behavior for things like "reasonable" guarantees about performance or "reasonable" ease of implementation.

Now, an implementation may choose to provide stronger guarantees for the benefit of its users. It may even be reasonable to expect that in many cases. But at that point you're no longer dealing with C; you're dealing with a derivative language and non-portable programs. I think that for a lot of developers, this is just as bad as a compiler that takes every liberty allowed to it by the standard. The solution, then, is not for GCC and LLVM to make guarantees that the C language standard doesn't strictly require; the solution is for the C language standard to require that GCC and LLVM make those guarantees.

Of course, it doesn't even have to be the C language standard; it could be a "Safe C" standard. The point is that if you want to simultaneously satisfy the constraints that programs be portable and that compilers provide useful guarantees about behavior, then you need to codify those guarantees into some standard. If you just implicitly assume that GCC is going to do something more or less "reasonable" and blame the GCC developers when it doesn't, neither you nor they are going to be happy.


On the other hand, the expected and desirable behavior on one platform might differ from that on another. It's possible to overspecify and end up requiring extra code when performing ordinary arithmetic operations, or to lock yourself out of useful optimizations.


Which is exactly the motivation behind implementation-defined behavior. There's a broad range of "how much detail do you put in the specification" between the extremes of "this is exactly how the program should behave" and "this program fragment is ill-formed, therefore we make no guarantees about the behavior of the overall program whatsoever."
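
A concrete instance of that middle ground: in C, right-shifting a negative signed integer has an implementation-defined result; the implementation must pick a behavior and document it, rather than the program losing all its guarantees.

    #include <stdio.h>

    int main(void) {
        /* Implementation-defined, not undefined: on most targets this is
           an arithmetic shift yielding -1, but a conforming implementation
           could document a logical shift instead. */
        int x = -2;
        printf("%d\n", x >> 1);

        /* Likewise the widths of the basic types: implementation-defined,
           but fixed and documented per implementation. */
        printf("%zu\n", sizeof(long));
        return 0;
    }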


Implementation-defined behavior at best just tells you that the behavior is guaranteed to be deterministic (or not). You still cannot reason about the behavior of the program just by looking at the source.

And I'm not sure optimizations such as those that rely on strict aliasing would be possible if the behavior were merely implementation-defined.


The desire to reason about the precise behavior of a program and the desire to take advantage of different behavior on different platforms are fundamentally at odds. Like I said, there's a broad range of just how much of the specification you leave up to the implementation; it's an engineering trade-off like any other.


My point is that by leaving a behavior to be defined by the implementation you're not making it any easier to write portable code, but you may be making some optimizations impossible.


That's not entirely true. Regarding portability, the layout of structs, for example, is implementation-defined to allow faster accesses (by virtue of alignment) or more compact storage depending on the system, but it's perfectly possible to write portable code that works with the layout using offsetof and sizeof (edit: and, of course, regular member access :) ).

That said, I would agree that, on the whole, C leans too heavily on under-specified behavior of every variety. It's just not an absolute.
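
A sketch of that kind of layout-aware but portable code, with an invented struct for illustration: offsetof and sizeof report whatever padding the implementation chose, so the code works unchanged across systems.

    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* The implementation may insert padding between these members;
       how much is implementation-defined. */
    struct record {
        char tag;
        int  value;
    };

    int main(void) {
        printf("offsetof(value) = %zu\n", offsetof(struct record, value));
        printf("sizeof(record)  = %zu\n", sizeof(struct record));

        /* Copy a field out of a byte buffer without hardcoding any
           particular layout: */
        unsigned char buf[sizeof(struct record)] = {0};
        int v;
        memcpy(&v, buf + offsetof(struct record, value), sizeof v);
        printf("v = %d\n", v);
        return 0;
    }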


It's a stupid convention of compiler writers and standards writers at the expense of common sense and engineering standards. In fact there are many thousands of lines of C code that depend on compilers doing something sensible with UB. For example 0 is a valid address in many cases (even in some versions of UNIX). The decision to allow compiler writers to make counter-factual assumptions on the basis of UB is the kind of decision one expects from petty bureaucrats.


>For example 0 is a valid address in many cases (even in some versions of UNIX).

0 may be a valid address at runtime, but a NULL pointer is always invalid.

On such platforms, the compiler should handle address-0 pointer values correctly. Note that the null pointer need not be represented by the value 0, and it must not compare equal to any valid pointer.

But the constant 0 (or NULL), when converted to a pointer type, MUST result in a null pointer value, whose representation may be nonzero. Dereferencing such a pointer is UB.
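
A sketch of the distinction, assuming a platform where machine address 0 is real and usable: the constant 0 in pointer context always means the null pointer, so code that wants the actual address has to manufacture it at runtime.

    #include <stdint.h>

    /* The constant 0 in pointer context is the null pointer constant;
       the compiler may give the null pointer any representation. */
    int *null_ptr = 0;

    /* To name machine address 0 on a platform where it exists, convert
       a runtime integer instead, so this is an ordinary integer-to-
       pointer conversion (implementation-defined) rather than the null
       pointer constant. */
    volatile uint32_t *address_zero(void) {
        uintptr_t zero = 0;
        return (volatile uint32_t *)zero;
    }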


How difficult is it to add a "zero cost abstraction" like the ones used here? For example, what would it take to make this compile:

    for window in buffer.sliding_window(coefficients.len()) {
        let prediction = coefficients.iter()
                                     .zip(window)
                                     .map(|(&c, &s)| c * s as i64)
                                     .sum::<i64>() >> qlp_shift;
        let delta = buffer[i];
        buffer[i] = prediction as i32 + delta;
    }
with sliding_window returning an iterator over slices of the buffer?


"Zero cost abstraction" to me means a completely "static" abstraction, i.e. there are no runtime mechanisms necessary to implement the abstraction.

Your sliding window is just a series of windows, so there is no reason why it couldn't be compiled in the "most" efficient way. In fact it probably is; just try it out by implementing sliding_window as an iterator.

However, there's always this problem with cleverness: it gets harder and harder to read and maintain. Also: http://wiki.c2.com/?SufficientlySmartCompiler


There is a `windows` iterator.

https://doc.rust-lang.org/std/primitive.slice.html#method.wi...

However, your code would not work as-is, since the window borrows the buffer, so `buffer[i]` cannot be written to. Further, there is no `windows_mut` to let you write to the end of the window, because that would let you get multiple mutable references to a given element.

