Hacker News

I love the fancy modern type systems and all of the superpowers that they give you. But this is an area that I feel isn't adequately explored.

If you call a function that calls a function that has side effects, or that closes the file descriptor you're using, or could panic, or that needs global shared state, or spins up a thread, or allocates, we don't have a good way to deal with that. There are some partial and underutilised solutions like monadic IO and linear/affine types and, uh, not having panics. I have some (bad) ideas but I think this is a space that's worth playing in.
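Haskell's monadic IO, mentioned above, is the most established of these partial solutions: effects show up in the type, so a pure signature rules out hidden IO. A minimal sketch:

```haskell
-- Pure: the type guarantees no IO -- no file descriptors closed, no
-- threads spawned. The compiler rejects any attempt to do IO here.
double :: Int -> Int
double x = x * 2

-- Effectful: the IO in the type is the permission slip. Callers can
-- see from the signature alone that this may touch the outside world.
greet :: String -> IO ()
greet name = putStrLn ("hello, " ++ name)

-- A pure function cannot call greet: greet "x" has type IO (),
-- and there is no (safe) way to turn that back into a plain value.
```

The catch, and part of why this is only a partial solution, is granularity: IO says "some effect", not which one.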




Isn't this the whole point of what Rust is trying to accomplish? I haven't worked with Rust yet, but from what I hear it makes it very hard (impossible?) to have memory issues and things like deadlocks. Sounds great in theory, but I have no idea if this is the reality of using it.


No. Rust has no controls for side effects or allocations.


Rust prevents data races, but not race conditions in general, and not deadlocks.


It is, yeah. Which is why, when your code compiles, 99 times out of 100 it just works. I don't think people understand that this is one of Rust's best features.


I had the same thought years ago, I think some people have looked into it a little, but nothing popular yet.

Imagine if you could look at a function and know, undeniably, that it and anything it can call absolutely cannot alter the file system directly or make a network call or spawn a process. Maybe instead of just types, you need to supply capabilities (much like interfaces on classes). Sounds like it could be a real pain. It would make auditing easier, though.
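You can approximate this today with capability passing: a function can only perform the effects whose handles it was explicitly given, so the signature doubles as an audit trail. A hedged sketch (the names `FsCap` and `NetCap` are invented for illustration):

```haskell
-- A capability is just a record of permitted operations, parameterised
-- over the monad those operations run in.
newtype FsCap m = FsCap { capReadFile :: FilePath -> m String }

newtype NetCap m = NetCap { capFetch :: String -> m String }

-- This signature says: may read files, cannot make a network call or
-- spawn a process -- there's no NetCap in sight, and no raw IO either.
loadConfig :: Monad m => FsCap m -> FilePath -> m String
loadConfig fs path = capReadFile fs path
```

An auditor grepping for `NetCap` finds every function that could possibly touch the network; the hard part is banning ambient IO so that capabilities are the only way in.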


> alter the file system directly or make a network call or spawn a process.

Those three cases are excluded in pure functions.


I've been waiting for functional programming to have its day for more than 20 years. It just seems fundamentally at odds with how most people want to work.

You can make individual functions pure in any language, but do many languages enforce this at a core level? Or do we have to behave ourselves and hope that nothing we call is changed by another programmer later?
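Haskell is the usual answer to "enforce this at a core level": purity is checked transitively by the compiler, so a later change to something you call cannot slip in silently. A small illustration:

```haskell
-- The type promises purity, so everything sumList calls must be pure too.
sumList :: [Int] -> Int
sumList = foldr (+) 0

-- If another programmer later makes a helper effectful, its type must
-- change from Int -> Int to Int -> IO Int, and every pure caller stops
-- compiling. The change cannot sneak in behind your back:
--
--   helper :: Int -> IO Int          -- was: Int -> Int
--   sumList xs = sum (map helper xs) -- now a type error
```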


> I've been waiting for functional programming to have its day for more than 20 years.

The tech is there. It sounds more like functional programming has been waiting for you for 20 years.

> It just seems fundamentally at odds with how most people want to work.

"Most people", or "you"? If you believe in something, stand up for it.


I've used it. I've never had a job using it. Most people haven't.


Capabilities help a lot if you just want to permit or deny the action. But maybe I want to know more than "can it allocate": I also want to be able to pass in an arena to make it allocate in a particular place, and I might want that to differ in different parts of the code. Generics sort of help here if you're willing to thread your allocator explicitly all over the place, like Zig, but allocation is so common in normal code that, ergonomically, I want it to be invisible in the common case and controllable in the uncommon case.
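One possible middle ground, sketched here with a plain reader-style parameter (the `Arena` type is a made-up stand-in, not a real allocator), is to carry the allocator implicitly in the computation's type: ordinary code never mentions it, but a caller can swap it for a scoped region:

```haskell
-- Stand-in for a real arena: just a label saying where allocations go.
newtype Arena = Arena String deriving (Eq, Show)

-- Computations that may allocate carry the current arena implicitly.
type WithAlloc a = Arena -> a

-- Common case: code "allocates" without naming an arena anywhere.
buildPair :: Int -> Int -> WithAlloc (Arena, Int)
buildPair x y arena = (arena, x + y)

-- Uncommon case: run a subcomputation against a different arena
-- without threading it through every intermediate call.
inArena :: Arena -> WithAlloc a -> WithAlloc a
inArena scratch body _outer = body scratch
```

In real Haskell this would be a ReaderT or GHC's implicit parameters; the point is only that the type, not every call site, carries the allocator.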


How about spawning a separate process and using seccomp, or perhaps having the child process run as a different user?

There are a couple of advantages to doing it through the OS. For one thing, the implementation is shared across programming languages. For another thing, it's potentially more secure against an attacker trying to inject code.

I guess the disadvantage of doing it at the OS level is that you might have to write OS-specific code.


I think that mostly works. (As an aside, the last time I looked at this on Windows, we couldn't do something like sudo reliably because there was no secure way to provide the password without user interaction; it seems they finally released sudo for Windows in February this year.) The OS can support this the way a mobile OS limits the capabilities of apps, which is more secure in the end.

But I was also thinking of the development and refactoring side of things: guarantees and assurances. Sometimes I know the generic code I wrote is 'OK' but the C# compiler says no, you're not being specific enough, I can't cast that, so I have to do better.

A while ago I was trying to track down programs that loaded data from certain network shares, and it was rough because these calls were all very deep and very occasional. Traces (or blocks) of system calls only work when the code fires, but my task was to find and replace them all so we could delete the share (for various reasons, string searches only got part of the way). If 'network access' were tagged at a high level, I could follow the trail to see what we actually access and report on that.

We had a similar issue identifying jobs that called out to external APIs. We left a log running for a while to catch some, but some only run a few times a year and we didn't have that long. Adding a firewall exception later could take days, and these jobs have turnaround times.

I don't know if this is at all feasible. It's just some thoughts I had back then.


There have been papers that limit the capabilities of a program within a certain context (i.e., within a function), some of which were implemented at the OS level and enforced with hardware protection (e.g., address space isolation).

The difficulty is that doing this sensibly requires new OS abstractions. It's one thing to put one in a research paper, but it's really tough to get this kind of thing into an OS nowadays. Another difficulty is that OS-level abstractions, with a few exceptions, cannot be used without a system call, which is cheaper than it used to be but still much more expensive than a plain function call.

A third problem is that the programming language has a lot more semantic information, or at least it _can_ have a lot more semantic information, than can be fluently transmitted to the OS. There are approaches to dealing with this (like a richer OS/userland interface, which is almost impossible to get into an OS). In general, plugging into and extending the type system of some userland is probably going to be a much easier route to take.

If the research world worked differently than it does, I'd have loved to continue previous explorations on OS / userland interfaces.


This happens at runtime. I want something in the type system.


Algebraic effects?
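For anyone unfamiliar: with algebraic effects a program describes its effects as data, and the caller chooses a handler, so the same code can run against real IO or purely. A stripped-down sketch of the idea (a free-monad-style encoding, not a real effect system):

```haskell
-- A program that may Ask for input or Tell output, as a data structure.
data Prog a
  = Done a
  | Tell String (Prog a)
  | Ask (String -> Prog a)

-- One handler interprets it purely against canned input, collecting
-- output. An IO handler could interpret the same program instead;
-- picking the interpretation at the call site is the core idea.
runPure :: String -> Prog a -> (a, [String])
runPure _   (Done a)   = (a, [])
runPure inp (Tell s k) = let (a, out) = runPure inp k in (a, s : out)
runPure inp (Ask k)    = runPure inp (k inp)

greetProg :: Prog Int
greetProg = Ask (\name -> Tell ("hi " ++ name) (Done (length name)))
```

The type of `greetProg` names its effect vocabulary, which is exactly the "know what a function can do" property the thread started with.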



