Believe it or not, it gets worse. The authors of this paper are the same folks who came up with this particular dating technique. And I can't find any evidence that the technique has been independently validated. Not that that should count for much, coming from a layman like me.
I don't think you can "debunk" something merely with strong evidence.
Besides that, I wasn't aware of this codex, so I'm glad you shared it. Has anyone suggested that (assuming there is a relationship) the arrow of causality might go the other way? That perhaps this image was a reference for the shroud itself?
The C14 dating is one piece of evidence, and there is plenty of contradicting evidence for the 14th-century dating. But my main point is that there is at least some doubt about the 14th-century date, and any proper scientific approach should at least mention the codex.
If you have a goal of being able to confidently reason about your code, such as being confident it won't go "wrong", then making it more math-like would seem to make a lot of sense.
As far as the IO thing is concerned, one could use pure functions to construct a model of an imperative program. Then you just need something to run it... There is a sense in which this is exactly what is done in practice.
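To make that concrete, here's a toy C sketch of the idea (the names and layout are mine, purely for illustration): a pure function builds a description of the effects as plain data, and a separate run() is the only piece that actually performs them.

    #include <stdio.h>

    /* A "program" is just data describing effects. */
    typedef enum { CMD_PRINT, CMD_END } cmd_tag;

    typedef struct {
        cmd_tag tag;
        const char *text;   /* payload for CMD_PRINT */
    } cmd;

    /* Pure: only builds the description, performs no I/O itself. */
    static void build_program(cmd out[], int greet_twice) {
        int i = 0;
        out[i++] = (cmd){ CMD_PRINT, "hello" };
        if (greet_twice)
            out[i++] = (cmd){ CMD_PRINT, "hello again" };
        out[i++] = (cmd){ CMD_END, NULL };
    }

    /* The "something to run it": the one impure piece. */
    static void run(const cmd prog[]) {
        for (int i = 0; prog[i].tag != CMD_END; i++)
            puts(prog[i].text);
    }

    int main(void) {
        cmd prog[8];
        build_program(prog, 1);
        run(prog);
        return 0;
    }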
> If you have a goal of being able to confidently reason about your code, such as being confident it won't go "wrong", then making it more math-like would seem to make a lot of sense.
Is it easier to reason about a 10,000-line-long math proof, or a 10,000-line-long program? I'm not sure that the math is actually easier.
Is it easier to write a fully-correct 10,000-line-long math proof, or a 10,000-line-long program? Again, I'm not sure that the math is actually easier.
Is it easier to write a formal prover for the math? Almost certainly. And maybe, with a formal prover, writing the correct math is easier...
> Is it easier to reason about a 10,000-line-long math proof, or a 10,000-line-long program? I'm not sure that the math is actually easier.
Don't compare them on the basis of familiarity. Making your programming language look mathy is not the point.
I'd offer a different comparison:
Reason about a 10,000-line-long math proof, or a 10,000-line-long math proof in which there are instructions to jump back-and-forth with your pencil, crossing out and updating previously-calculated values in-place.
You don't do a single proof of the entire 10k-line program; you do proofs of the many small functions that make it up. If you can use each proof as an axiom going into proving the functions that use it, then each of them becomes easier to prove too. If I have a function that might sometimes mutate a variable off in another part of the program, that's just inherently harder to reason about than one that is idempotent and guaranteed to have no side effects.
The most common proofs are those that the compiler constructs while type checking. The compiled program is the proof witness. All compilers can be thought of as producing a proof-by-construction but they vary wildly in the propositions they can prove.
If this idea is new to you and you want to learn more, google "propositions as types" or "the Curry-Howard correspondence".
The word "suspected" does not appear on the report you linked. Rather it says it is a "possible" carcinogenic. Moreover
> The committee therefore reaffirmed that it is safe for a person to consume within this limit per day. For example, with a can of diet soft drink containing 200 or 300 mg of aspartame, an adult weighing 70kg would need to consume more than 9–14 cans per day to exceed the acceptable daily intake, assuming no other intake from other food sources.
While I do think that is an unreasonable amount for a normal person to consume, I've also met a person who would easily hit that number. Most of us should be safe though.
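To spell out the arithmetic behind the 9–14 figure: the acceptable daily intake used there is 40 mg per kg of body weight per day, so 70 kg x 40 mg/kg = 2,800 mg/day; at 300 mg per can that's a bit over 9 cans, and at 200 mg per can it's 14.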
In your example, '3 | "foo" | false' is the type of the variable. What is the argument for this being a bad idea in general? You imply it's unsafe. How?
> In your example, '3 | "foo" | false' is the type of the variable
Depends. TypeScript would (usually) recognize 3 as being part of that union type, but it's still a normal number that is fully compatible with other numbers. If you add 1 to it, the variable is still a number but no longer belongs to this union type. You could of course argue that this is just a consequence of TS being wrapped around JS, but whether the type changes between `3` and `false` or between `3` and `4`, it does change.
In Rust this is not allowed, hence you cannot just add e.g. an enum variant and a number unless you explicitly implement the relevant trait to make them compatible, in which case the addition produces a new value with, again, an immutable type.
If 3, "foo" and false have different types by themselves then the language has to either allow changing the type of a variable or forbid a mix of them in one type.
Rust is more restrictive than it needs to be, in this case. What matters is whether it's possible to have a situation where the value of a variable or parameter is not concordant with its type in a given context and what it takes to make that happen.
Of course it's possible to tell the TS type checker to sod off in a given context (cast, use the "any" type, etc.), but without doing that, I think it's hard to contrive a situation such as I described above.
Good question. It's for performance reasons. Calling into the kernel is expensive; a lock acquired in user mode is much faster than one acquired in the kernel.
A typical lock acquisition using futex looks like:
(a) while !compare_and_swap(&lock, 0, 1)   // if the flag is free (0), atomically set it to 1
(b)     futex(&lock, FUTEX_WAIT, 1)        // otherwise sleep until the value stops being 1
(a) runs in user mode and it's very fast: it's just one CMPXCHG assembly instruction. If the lock is free, it's acquired by that single instruction, which sets the flag to 1 (locked).
If the lock is not free, the thread makes the expensive call into the kernel at (b) to sleep. futex() checks that the value is still 1 before putting the thread to sleep, so the waiter notices a change of the value without hogging the CPU.
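Fleshing that out a bit, here's a rough C sketch of the same two paths using the raw syscall (simplified; a real lock such as glibc's also tracks a "locked with waiters" state so that unlock can skip the wake when nobody is sleeping):

    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* 0 = free, 1 = locked */
    static unsigned int lock_word = 0;

    static void lock(void) {
        unsigned int expected = 0;
        /* Fast path, pure user space: try to flip 0 -> 1 with one CAS. */
        while (!__atomic_compare_exchange_n(&lock_word, &expected, 1, 0,
                                            __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {
            /* Slow path: sleep in the kernel, but only while the word is
               still 1; otherwise FUTEX_WAIT returns immediately and we retry. */
            syscall(SYS_futex, &lock_word, FUTEX_WAIT, 1, NULL, NULL, 0);
            expected = 0;   /* the failed CAS overwrote it with the current value */
        }
    }

    static void unlock(void) {
        __atomic_store_n(&lock_word, 0, __ATOMIC_RELEASE);
        /* Wake at most one sleeper (wasteful if nobody waits, but correct). */
        syscall(SYS_futex, &lock_word, FUTEX_WAKE, 1, NULL, NULL, 0);
    }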
Importantly, prior to this (and, hell, even since) the state of the art was to try the lock atomically, then yield (or sleep(0)), then usleep().
The kernel had no idea what was going on, so had no idea how to schedule such a thing. It particularly didn't know to wake you when the lock (which it has no idea about) became available.
It is worth noting here that this "one assembly instruction" is not that cheap. The hardware on a multicore system does have to perform some locking under the hood to execute that instruction. But yes, it still has enough of an advantage over calling into the kernel to justify the additional usage complexity.
And on ccNUMA systems you end up getting memory contention and huge memory latencies for any other data that happens to reside in the same cache line. Often we would align these locks and isolate each one in its own cache-line-sized block... which also wastes a lot of space if you have a lot of them (I was consulting on an enterprise app that had millions, I think; it made a big difference in the software's memory footprint).
You are right. XCHG can do the job to acquire the lock. At least on x86, XCHG does lock the cache line on the address of the variable, so it should be ok.
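For contrast, the pure-spin version built on an exchange looks something like this (a sketch; it burns CPU instead of sleeping, which is exactly what the futex slow path is there to avoid):

    static unsigned int spin_word = 0;

    static void spin_lock(void) {
        /* Keep swapping 1 in until the old value comes back as 0 (free).
           The exchange is a single locked instruction, but a waiter spins. */
        while (__atomic_exchange_n(&spin_word, 1, __ATOMIC_ACQUIRE) != 0)
            ;   /* busy-wait */
    }

    static void spin_unlock(void) {
        __atomic_store_n(&spin_word, 0, __ATOMIC_RELEASE);
    }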
The gettimeofday(2) vDSO is pure-userspace code. Why not, then, a futex vDSO that does the while + compare-and-swap loop in userspace, but then contains a real call to the futex(2) syscall?
This part of the optimisation is well known, and had been for years before futex was invented, so there's no need to provide it as a vDSO or do any other special work; userspace can (and was ready to) do that part anyway.
The futex is clever because previously the heavyweight locking lived in the kernel, but all you actually needed was this very flimsy wait feature. BeOS, for example, had a "Benaphore" construction where you initialise an expensive heavyweight kernel lock but try the userspace trick before actually needing to take it. So that looks superficially just as good as a futex...
... and then you realise: oh, the underlying locks need kernel RAM to store their state, so the OS can only allow each process so many of them, and thus you can't have that many Benaphores; they were expensive after all. But a futex doesn't use any kernel resources when you aren't calling into the OS. Linux doesn't mind if your program uses a billion futexes; any particular thread can only be waiting on one of them at a time, and Linux only has to track the waiting.
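For the curious, the Benaphore shape is roughly this (a sketch, with a POSIX semaphore standing in for the BeOS kernel semaphore; the point is that the semaphore object still has to be created in the kernel up front):

    #include <semaphore.h>

    typedef struct {
        int   count;   /* how many threads currently want the lock */
        sem_t sem;     /* kernel-backed object, allocated at init time */
    } benaphore_t;

    static void ben_init(benaphore_t *b) {
        b->count = 0;
        sem_init(&b->sem, 0, 0);   /* starts at 0: waiters block until posted */
    }

    static void ben_lock(benaphore_t *b) {
        /* Old value 0 means we were first: uncontended, no syscall at all. */
        if (__atomic_fetch_add(&b->count, 1, __ATOMIC_ACQUIRE) > 0)
            sem_wait(&b->sem);     /* contended: block in the kernel */
    }

    static void ben_unlock(benaphore_t *b) {
        /* Old value > 1 means someone is queued behind us: hand them the lock. */
        if (__atomic_fetch_add(&b->count, -1, __ATOMIC_RELEASE) > 1)
            sem_post(&b->sem);
    }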
> This part of the optimisation is well known, and had been for years before futex was invented, so there's no need to provide it as a vDSO or do any other special work; userspace can (and was ready to) do that part anyway.
I guess, to me, the semantics of futex(2) on their own seem ill-formed without the while loop + cmpxchg being part of them. It feels like the user shouldn't have access to raw "low-level" calls to futex(..., FUTEX_WAIT), since there's only one useful way to use those calls, and it's in combination with other code, code that is entirely possible to get wrong. It's an API that doesn't cleanly encapsulate its own concerns.
I suppose I'm used to thinking with "building reusable crypto"-tinted glasses: with crypto libraries, you never want to expose a primitive that needs to be used a certain way to be sound; because people are inevitably going to use it wrong, and that's inevitably going to result in exploitation. Instead, you can just expose a higher-level primitive, that inherently can only be used that one correct way, with no means to ask it to do anything else.
Of course, there's nothing inherently dangerous about calling futex(..., FUTEX_WAIT) without the associated code. (You just get a race condition. But a race condition in userspace doesn't corrupt the kernel or anything.) So I suppose this kind of thinking is meaningless here.
For a long time the user in fact did not have (easy) access to it. glibc has started to provide a syscall wrapper only very recently; before that you had to use syscall() directly.
The reason the CAS is not part of the interface but is left to the implementor is that the specific logic is very much application-specific, and in fact there is not only one way to use it. As discussed elsethread, mutexes are only one of the many applications of futexes.
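For example, here's a sketch of one non-mutex shape: a one-shot "ready" flag where waiters sleep until another thread flips it, with no lock anywhere (names are mine, just illustrative):

    #include <limits.h>
    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static unsigned int ready = 0;

    static void wait_until_ready(void) {
        while (__atomic_load_n(&ready, __ATOMIC_ACQUIRE) == 0)
            /* Sleep only while the flag is still 0; if it already changed,
               FUTEX_WAIT returns immediately and the loop re-checks. */
            syscall(SYS_futex, &ready, FUTEX_WAIT, 0, NULL, NULL, 0);
    }

    static void announce_ready(void) {
        __atomic_store_n(&ready, 1, __ATOMIC_RELEASE);
        syscall(SYS_futex, &ready, FUTEX_WAKE, INT_MAX, NULL, NULL, 0);   /* wake everyone */
    }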
What you described is exactly what pthread_mutex_lock() does, and that part lives in user space. Application programmers don't deal with futex directly; they call pthread_mutex_lock/pthread_mutex_unlock.
Futex puts the thread to sleep and wakes it up. Accessing the OS scheduler requires kernel access anyway. The memory access of the flag can be done in user or kernel space.
Incorporating thread-safety into individual calls is not sufficient, because thread-safety is not composable. A sequence of calls to thread-safe operations is not automatically thread-safe itself. Since you need locks at the outermost, application level anyway, it would be wasteful and unnecessary to also lock inside each individual call.
This non-composability is why multithreading is so difficult. A library can't abstract away these concerns for you, unless the entire multithreaded operation is one call to the library, which requires the library to know and design for your exact use case. If you want to do something a library didn't plan for, you are required to handle the thread synchronization yourself.
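A concrete, made-up illustration in C with pthreads: each call locks correctly on its own, yet the composed check-then-act sequence still races.

    #include <pthread.h>

    typedef struct {
        pthread_mutex_t mu;   /* initialise with PTHREAD_MUTEX_INITIALIZER */
        long balance;
    } account_t;

    /* Each operation is individually thread-safe. */
    static long get_balance(account_t *a) {
        pthread_mutex_lock(&a->mu);
        long b = a->balance;
        pthread_mutex_unlock(&a->mu);
        return b;
    }

    static void withdraw(account_t *a, long amount) {
        pthread_mutex_lock(&a->mu);
        a->balance -= amount;
        pthread_mutex_unlock(&a->mu);
    }

    /* But this sequence of thread-safe calls is not thread-safe: another
       thread can withdraw between the check and the act, driving the
       balance negative.  Only the caller can lock around the whole thing. */
    static int withdraw_if_possible(account_t *a, long amount) {
        if (get_balance(a) >= amount) {     /* check */
            withdraw(a, amount);            /* act   */
            return 1;
        }
        return 0;
    }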
Some ideas that I suspect are true do indeed scare me. But I'm more scared of ideas that I strongly suspect are false, but that I think other people may not agree are false.