cmrx64's comments

had a relatable but almost opposite experience (no obvious infection, but it was winter 20/21), where I noticed that objects in my visual field seemed to be differentiating themselves away from the background and “competing” for my attention when previously I had to go hunt for them.


October 2020 here. I guess you got a boost whereas I got an obstruction of whatever was delivering energy.


interned at NICTA on seL4 in 2016, AMA :)


better how? I could never use this without being able to import my apkg.


Can you DM me on twitter? (https://twitter.com/Adnan_Jindani)

I am planning to add apkg imports to RepIt, so if you could send me a sample apkg, that would be awesome. All Anki users will be able to use RepIt easily and will also take advantage of a better UI!


It won't have decks. You will be able to upload your notes and texts to it. You will learn whenever it tells you to, then rate how well you learned it.

It's better because it has feedback-based learning: based on your feedback, it will tell you when to revise for better retention, using the spaced repetition technique!


"It won't have decks" sounds worse than Anki, not better.

Your description of "feedback-based learning" doesn't sound different from Anki.


It's better because it will have email reminders, and also a fully synced cloud web app with a better UI.


What is value safety? Why should value constraints be pertinent here? Near as I can tell, this is a neologism (introduced here? https://itnext.io/we-need-to-talk-about-the-bad-sides-of-go-...) that just happens to be near the top of search results in this area.

The introduction rule for enum values in C is _not_ type safe. You know how you can tell? Well-typed programs go wrong. A language absolutely does not need value constraints of any kind to get this right.


> Why should value constraints be pertinent here?

Because that's where it is unsafe: You can introduce a value of the same type that is outside of the enumerable range. You cannot introduce a value of a different type, though. It is type safe.

Yeah, any language with a type system worth its salt has value constraints, but if you choose to forego them as C and Go have, you're not going to bother adding them just for enums. It would be kind of silly to leave developers with a noose to hang themselves with everywhere else, but then think you need to tightly hold their hand just for enums.

In fact, I'd argue that if you are short on time and need to make compromises to reach a completion state, enums are the last place you would want to take the time to add value constraints. The types more often used would find much greater benefit from having value constraints.

Case in point: TypeScript. When was the last time you cared that its enums behave just like C's and Go's? Never, I'm sure, because having value constraints everywhere else more than makes up for it. Giving up value constraints for safer enums is a trade you would never consider.


> Because that's where it is unsafe: You can introduce a value of the same type that is outside of the enumerable range. You cannot introduce a value of a different type, though. It is type safe

C’s type system is unsound, and not all compileable programs respect its dynamic requirements. We cope with this by referring to some code as “not type safe”.

foo bar = NOT_FOO;

You say this “typedef enum {…} foo” is not a type naming a set of values, but just a convenient alias for whatever the representation is; thus every “enum” (regardless of actual decl) names the same set, and every constructor expression shares the same “type”. That is consistent with the language specification, and it passes the type checker, so you could say this code is “type safe”. But it’s one hell of a foible, and it isn’t consistent with any lay (non-PLT) understanding of type safety, where type safety means the type written in the code and the runtime representation won’t desync (no runtime type errors).

If you simply forbid UB and refer only to strictly conforming programs, I will accept this modified meaning of “type safe”, but grumble that this meaning is not very good.

edit to encompass parent edit: as a TypeScript nonprogrammer, I have nothing to add :) I am confused why you are putting the features in opposition. Gradual + value-sensitive typing is a good feature, but it doesn’t conflict with sums. In OCaml, we support both: real sum types as well as polymorphic variants [`A | `B] etc. that are structural in the way you’d want C to be.


> C’s type system is unsound

Along with every other programming language under the sun. A complete type system is not exactly an easy feat – especially if you want it to be usable by people.

> We cope with this by referring to some code as “not type safe”.

Value constraints are an application of types, so yes, if C/Go had value constraints then violation of those constraints would leave it to not be type safe. But they don't have value constraints. Insofar as what the types can constrain, the safety is preserved.

It seems all you are saying is that C (and Go) do not have very advanced type systems. But that shocks nobody. Especially in the case of Go, that was an explicit design decision sung by its creators. You'd have to be living under a rock to not know that.

Was there something useful you were trying to add?


> Was there something useful you were trying to add?

Yes, the clarification about value safety, which you’ve done quite well.

Not every language is unrepentantly unsound.

I continue to identify a confusion in this thread between a property of the languages, and a property of particular code, but I have clearly exhausted your patience. thank you.


> Not every language is unrepentantly unsound.

For sure. Coq does a decent job, but it's also a complete bear to use. Tradeoffs, as always.

> I continue to identify a confusion in this thread between a property of the languages, and a property of particular code

Go on. The original statement was that C and Go do not have type-safe enums. But there is no evidence of that being the case. The types are safe.

Indeed, the types are limited. An integer type, for example, cannot be further narrowed to only 1-10 in these languages. But the lack of a feature does not imply lack of type-safety. It only implies a lack of a feature.


the disagreement is over whether the program is type safe just because it was typechecked. BECAUSE the system is unsound (completeness is irrelevant), typechecking doesn’t imply type safety.

… where I am using type safety to mean “no runtime type errors/UB manifest”, i.e., the property that a sound type system would guarantee _if we had one_. You seem to be saying that just because our type system is impoverished, its resulting claim of “program is type safe” is no less valid, whereas I am saying “type safety is a semantic property of programs, not of languages, and this value safety idea seems like it’s what PLTers think type safety means”.

It’s a violation of the C semantics to assign the wrong value to an enumeration, so I would say the fact that the language doesn’t do anything at all to enforce or check this promotes this beyond “lack of a feature” and straight into “type unsafe”. However, I’d feel less strongly if at least initializers were checked.

As you say, different language design philosophies lead to this, and it’s not surprising. Most of these ideas came _after_ C anyway!

phone dying… no response soon.


The alien creatures in “Destroy All Humans” lost their ability to reproduce and their solution was to invade Earth and steal as much genetic material as possible. To.. uh.. restore the gonads?

Surely a better work has explored this :)


this sort of static verifiability is important for applications like smartcard programs, though, where the runtime environment cannot afford a dynamic MMU.

additionally, this lets you inline verifiable code into your protection domain instead of forcing it into its own module somewhere else.


This is often what happens, and this is often what’s fragile. In the blog these are referred to as “lifetime extension”. The code is written as carefully as it ever is and I can confirm the observation that it’s just begging for a segfault or a leak :) Note that finalizers are asynchronous, and there’s an inversion of control/scoping issue with the way you’ve described it.


Haskell's FFI has `withForeignPtr :: ForeignPtr a -> (Ptr a -> IO b) -> IO b` [1].

A ForeignPtr is a GC-managed pointer with an associated finalizer. The finalizer runs when the ForeignPtr gets GC'd.

`withForeignPtr` creates a scope (accepting a lambda) in which you can inspect the pointer `(Ptr a -> IO b)`.

This works well in practice, so I do not really understand why "among GC implementors, it is a truth universally acknowledged that a program containing finalizers must be in want of a segfault".

[1]: https://hackage.haskell.org/package/base-4.19.1.0/docs/Forei...


I’m deeply familiar with this technique, have used it plenty, have encountered the perils, and so I do not really understand why you think it works well in practice.

It works well only in the case where you have perfectly well-scoped small regions that you can model as your lambda. When you actually need to do anything intricate with the lifetime, where you want it to escape (probably into a data structure), the callback won’t cut it, and it’s on you to ensure the ForeignPtr’s lifetime becomes interlinked with the returned b.


I don't quite follow the argument:

> it’s on you to ensure the foreignptr’s lifetime becomes interlinked with the returned b

Yes; the simplest way to do that is to make sure that your data types never contain raw `Ptr`, only `ForeignPtr` -- same as in C++, where seeing `mytype * x` should ring the alarm bells.

You could say "but what if I call another FFI function that needs the Ptr as an argument"? In that case, surely the function needs to document whether it takes ownership of the pointer. If it doesn't document that, yes, it'll crash; but that's unrelated to finalizers (the "impossibility of composing" of which is what the post claimed); it would also crash if no finalizers were involved.


The possibility of segfaults is kind of a given though. I mean the whole point of foreign interfaces is to reuse existing C code. The pinning functions just expose the manual C resource management that programmers would have to deal with if they were writing C. You just turn off the automatic resource management for the objects involved so you can do it yourself, running the risk of leaking those resources.

The only viable way to escape all this is to rewrite the software in the host language. A worthy goal but I don't see anyone signing up for that herculean task outside the Rust community.


The pin and unpin could be tied to a reference count in the byte string object that was extracted. When blob's get_data is called to get the byte string, its pin count is bumped up. When the byte string is reclaimed by GC, it bumps down the blob's pin count.


I don’t dispute the possibility of using pinning correctly; in practice it’s a source of bugs. Fuzzy and loose ownership regimes just don’t compose well, people are bad at running region checkers in their heads, and anything beyond the absolute simplest, smallest scope is prone to eventual error.


I don't think it's difficult, if you're working inside the run-time.

The difficulty is getting the behaviors if you're outside of the run-time, writing FFI bindings, where you don't have the option of hacking new ownership behaviors into the target objects, and your FFI may be lacking in expressiveness also.

If it's a bad problem for a certain kind of object, and that object is relatively important (lots of people want to use bindings for it), the way to go may be a lower level extension module rather than FFI, or a wrapper library around it which is more amenable to FFI.


because it’s healthy for the social dynamics of the corporate organism. it’s like autophagy or apoptosis.


There's a rich tradition in the Clarkson Open Source Institute of identifying these "O(fuckit)" algorithms, because by the time you've asked the question, it's too late to spend any computation getting the answer, so the only possibility for O(0) is an algorithm you never run because the only answer you can get is a shrug and "fuckit". Unfortunately the wiki documenting examples is offline...

To put it another way, "What's the class of algorithms where writing down the input string inherently already provided the answer (so clearly you already knew it, why did you bother to ask?)"

Slightly different than the class of stupid bullshit you shouldn't have been doing in the first place :)


I like O(fuckit) even better than O(zero)! Thank you for sharing this.


where do I loiter to collaborate with people who joke about that book? I read it so long ago, what a great reread for the current zeitgeist


Wow March 23rd, 2008. I was in Japan. Height of the financial crisis.

https://mogami.neocities.org/files/prime_intellect.pdf

