The naming is a little confusing here. This is Actix the actor model library, not "Actix web" the web framework that's usually discussed.
Originally "Actix web" depended on Actix the actor library, but no longer. Though they can be used together. And I think websockets still require actors?
I'm using both Actix and Actix web for a personal project. I like it a lot and think the actor model is a good fit for applications that don't fit well into a request lifecycle.
My main pain point with it is the use of futures. Right now it's a huge pain to have more than one possible return value in an actor Handler, the functions that handle actor messages. Even if you box the returned Future, you end up having to use futures::Either a bunch, and when you mess it up you get tons of confusing errors filled with huge inferred types.
Implementing something like:
    if condition1 {
        return <future for service call 1>;
    } else if condition2 {
        return <future for service call 2>;
    } else {
        return <no-op future>;
    }
is a big pain, requiring two layers of futures::Either. Or I just don't know what I'm doing, which is very possible.
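To make that concrete, here's roughly what the three-way branch ends up looking like with nested Eithers (a sketch against the futures 0.3 crate; call_service_1 and call_service_2 are made-up stand-ins for the real service calls):

    use futures::future::{self, Either};
    use std::future::Future;

    // Hypothetical service calls, purely for illustration.
    async fn call_service_1() -> u32 { 1 }
    async fn call_service_2() -> u32 { 2 }

    fn three_way(cond1: bool, cond2: bool) -> impl Future<Output = u32> {
        if cond1 {
            Either::Left(call_service_1())
        } else if cond2 {
            // Second Either layer: all three arms must unify to one type.
            Either::Right(Either::Left(call_service_2()))
        } else {
            // The no-op case needs yet another wrapper.
            Either::Right(Either::Right(future::ready(0)))
        }
    }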
async/await would be a huge improvement, but can't be used at the moment, even though the current version of Actix uses std::future's.
> Even if you box the returned Future, you end up having to use futures::Either a bunch
That's really strange. A Boxed future is the end-all-be-all of not caring about the underlying type. Would you mind sharing a snippet of code that shows where you need both a Boxed future and Either?
The idea is that I want to implement a relative PTZ move on a camera, but some cameras have a broken relative-move implementation, so for those we instead use a continuous move + a delay + a continuous stop. The final case is a camera with no PTZ at all, where we return Http::NotFound.
    if cond {
        Box::new(future_a)
    } else {
        Box::new(future_b)
    }
then by default, the type of the first Box will be concrete Box<FutureA> rather than Box<dyn Future>, which is why you'll get a type error that Box<FutureB> != Box<FutureA>
You'll need to coerce the first expression to Box<dyn Future>. The compiler will auto-coerce the other one.
    if cond { Box::new(future_a) as Box<dyn Future<...>> } else { Box::new(future_b) }
(This auto-coercion also applies to other situations where the type of an expression must match the previous one's, such as with a slice literal where the first element dictates the types of the rest.)
You don't need to coerce if the compiler can infer that the whole if-expr must be of Box<dyn Future> type. This happens when the if-expr is being returned directly and thus must have the same type as the function's return type, or it's being assigned to the binding and the binding has been previously inferred to have (or explicitly has) Box<dyn Future> type. You can test this in the playground by returning the if-expr directly instead of binding it to `result`.
A more stylistic pattern for this is to give the binding a type so that the coercion of the if/else expression works automatically instead of using `as`.
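Something like this minimal sketch (the Output type is arbitrary; the point is only the annotated binding):

    use std::future::Future;

    async fn future_a() -> u32 { 1 }
    async fn future_b() -> u32 { 2 }

    fn pick(cond: bool) -> Box<dyn Future<Output = u32>> {
        // The annotation on the binding makes both arms coerce to the
        // trait object, so no `as` cast is needed on either one.
        let result: Box<dyn Future<Output = u32>> = if cond {
            Box::new(future_a())
        } else {
            Box::new(future_b())
        };
        result
    }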
Also, there's work being done to defer type inference of expressions until the whole function body has been analyzed, so that at some unspecified time in the future the naïve code will work without any cast.
What is the difference between the actor model and object oriented programming? It seems to me like they are basically the same paradigm but with all the names changed, and some additional restrictions like all messages being async and objects only being able to process one message at a time. Why is it necessary to create a whole new paradigm just to enforce such a style?
Your question is interesting from the perspective that the actor model could be seen as the precursor to modern object oriented programming. Both the actor model as defined by Carl Hewitt and the early object computational models as they are defined by Alan Kay (Smalltalk) originated during the same period and are based on similar philosophies of computation.
However, on the object-oriented track, for practical reasons its definition descended into a single-threaded dispatch system with full message-delivery guarantees. Locality was dropped (due to singletons) and the distributed model was not maintained. Method dispatch systems were later added, but could be considered a kludge. This also explains the mismatch between remote system calls (SOAP, REST, etc.) and the internal language. Ideally, these would be the same.
With our modern systems design constraints, especially given distributed systems, we need to revisit those early decisions. The Actor model is a good blueprint for our designs. It is fundamentally decentralized, locality is enforced and at-most-once message delivery is assumed. These allow us to design and implement distributed algorithms which would be hard to implement using traditional OOP methods.
I think this is a great summary, and also highlights much of what Joe Armstrong talked about when comparing Erlang to other OO languages, and his earlier decisions to design Erlang to work in the same way whether in a single-node or multi-node environment.
Based on the description in your Wikipedia link there is a relationship for sure -- but I don't see a strong connection. Here's how I see it:
An Active Object is essentially an OO encapsulation of a worker thread. It provides clients with a synchronous call interface where all results are resolved asynchronously (e.g. using futures for results). Internally, the Active Object converts client requests into asynchronous operations, enqueues the operations, and executes them on a private worker thread using object-specific dispatcher logic.
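To make that concrete, here's a toy Active Object in Rust (my own sketch, not from any particular library): add looks like an ordinary synchronous method, but it only enqueues an operation; a private worker thread executes it, and a one-shot channel plays the role of the future that resolves the result.

    use std::sync::mpsc;
    use std::thread;

    // Operations the worker understands; each carries its reply channel.
    enum Op {
        Add(i64, mpsc::Sender<i64>),
    }

    struct Counter {
        queue: mpsc::Sender<Op>,
    }

    impl Counter {
        fn new() -> Self {
            let (tx, rx) = mpsc::channel();
            // The private worker thread: owns the state, drains the queue.
            thread::spawn(move || {
                let mut total = 0i64;
                for op in rx {
                    match op {
                        Op::Add(n, reply) => {
                            total += n;
                            let _ = reply.send(total);
                        }
                    }
                }
            });
            Counter { queue: tx }
        }

        // Synchronous-looking interface: enqueue the request and hand
        // back a receiver standing in for a future.
        fn add(&self, n: i64) -> mpsc::Receiver<i64> {
            let (reply_tx, reply_rx) = mpsc::channel();
            self.queue.send(Op::Add(n, reply_tx)).expect("worker gone");
            reply_rx
        }
    }

    fn main() {
        let counter = Counter::new();
        let pending = counter.add(5); // does not block
        println!("total = {}", pending.recv().unwrap()); // resolve result
    }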
In an OO system, not all objects are required to be Active Objects. So there may be a mixture of synchronous and asynchronous execution.
In contrast, in an Actor system all communication is via asynchronous messages and there is no inherent requirement for multiple concurrent threads of execution.
Active Objects use a mechanism such as futures to return results. On the other hand, if there is a request-reply message exchange in an Actor system, both the request and the reply will be separate messages. The client would receive the result of a computation by receiving a message.
The relationship I see between Actors and Active Objects is that both require some kind of queue for dispatching operations (since Actor behaviors are never re-entrant). Active Objects convert synchronous calls into asynchronous operation requests, whereas Actors use asynchronous messages for _all_ inter-entity communication.
I agree with the GP that event loop programming is the closer OO analogue of Actors.
If I'm only following about half of what you're saying (I'm familiar with Smalltalk and the OO model, I don't know what the actor model is, I don't know what "locality" means), what's a good place for me to start?
A good place to start is perhaps learning Elixir and eventually moving on to OTP and its actor model.
Actors are just processes in Erlang/Elixir (the underlying implementation is a lightweight process that shares nothing) that isolate self-contained logic, so if one goes down it doesn't take anything else down. It's OO in that way. It helps that good functional practice favors small, composable functions that each do one thing. You compose these functions into the logic, wrap it in these processes, and treat them like objects.
OOP and the Actor Model followed parallel paths of evolution: the Actor Model was created by Carl Hewitt based on the message-passing semantics of Smalltalk. Alan Kay, in turn, had based the message-passing semantics of Smalltalk on the goal-driven evaluation of PLANNER, which was designed by Carl Hewitt.
PLANNER was the precursor to Prolog. Erlang, a language based on the actor model where processes are actors, was originally not intended to be a language: it started out as a library for fault-tolerant distributed programming in Prolog, later evolved into a dialect of Prolog, and then became its own language, to this day heavily influenced by Prolog.
So, the similarities between Objects in OO, and Actors in the Actor Model are far from coincidental.
In the Actor model, all calls between actors are async and have no return value. Each actor is a single-threaded (OOP-style) program. It provides a nice approach to parallel programming.
Object-oriented programming is a language feature, while the Actor paradigm can be built on top of OOP to ease the mental burden of parallel programming.
Actually I think building OOP on top of the actor model makes the mental model far easier. Instead of worrying whether the knight object uses the sword object to hurt the monster object, and then which object .deals_damage, or whether the object should .take_damage, the actor framework, being all message passing, makes these choices clear. True, originally OOP was message passing, but no mainstream modern OOP languages except Ruby sorta are message-passing frameworks.
> no mainstream modern OOP languages except Ruby sorta are message-passing frameworks.
Will changing the name fix that problem? I think any language or framework that gains widespread adoption will have to compromise its principles in some ways for the sake of pragmatism.
There are modern OO languages that stay true to Alan Kay's ideas too. The problem is that they all seem insignificant in impact when compared to Java, which relentlessly sacrifices purity for pragmatism.
What I'm asking is, how will the actor model stop people from taking those pure ideas and turning them into another Java? Isn't it just a matter of time? If that's the case, then it hasn't really "fixed" anything about OO.
> In the Actor model, all calls between actors are async and have no return value. Each actor is a single-threaded (OOP-style) program. It provides a nice approach to parallel programming.
I guess you mean 'concurrent programming', right? I haven't seen actors used a lot for parallelism.
Actor model is good for parallelisation. Each Actor and its message queue can be processed separately from everything else. There should be no side effects except new messages sent or new actors spawned.
So a pool of worker actors can absolutely work in parallel. In fact now I'm curious if Actix supports that
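It does: actix ships a SyncArbiter that runs N copies of an actor, each on its own OS thread, behind a single address. A sketch from memory (assuming a recent actix with the default macros feature; exact API details may differ by version):

    use actix::prelude::*;

    struct Worker;

    impl Actor for Worker {
        // SyncContext means this actor runs on a dedicated thread.
        type Context = SyncContext<Self>;
    }

    struct Job(usize);

    impl Message for Job {
        type Result = usize;
    }

    impl Handler<Job> for Worker {
        type Result = usize;

        fn handle(&mut self, msg: Job, _ctx: &mut Self::Context) -> usize {
            msg.0 * 2 // CPU-bound work would go here
        }
    }

    #[actix::main]
    async fn main() {
        // Four copies of Worker, processing one shared mailbox in parallel.
        let addr = SyncArbiter::start(4, || Worker);
        let answer = addr.send(Job(21)).await.unwrap();
        println!("{}", answer);
    }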
Actor model really is for concurrency, not for parallelism.
For pure parallel computing it introduces unnecessary overhead because of the message passing. That overhead in turn hurts performance, which really is the only reason you'd want to compute in parallel.
It's common to conflate parallel programming with so-called embarrassingly parallel tasks, but this isn't accurate. For tasks which may be executed in parallel but aren't working on different regions of a single state, actors are an excellent choice.
Right, but isn't what you say exactly concurrency instead of what we normally consider parallel computing, at least on a single machine? Depending on the implementation/runtime you may or may not need something like threads, which are a good concurrency primitive, for parallel computing.
I'm not entirely sure what the disconnect is here, but I'll give it a shot.
Parallel computing is simply running more than one aspect of a computation simultaneously; it requires multiple processors/cores, or SIMD. Concurrency is doing more than one thing 'at a time', and this generally includes things like callbacks (to allow more work to be done while waiting on an action) and preemptive threading (which may or may not involve true parallelism).
Concurrency is an opportunity to work in parallel, one which may or may not be achievable. I consider threads a bad concurrency primitive, because they're too low-level and hard to get right, and this becomes even worse when one runs threads in parallel.
Actors, which are a share-nothing concurrency model based on message passing, are a good concurrency primitive. Among the reasons for this are that one can put them on threads and not have to deal with locking and unexpected mutation. You can treat them as an implementation detail that happens below the level the programmer must concern themself with.
This means they're good for running in parallel, as well, which is quite tractable on a single machine given that it has multiple cores (mine has eight).
Colloquially we sometimes say 'parallel computing' when referring specifically to so-called 'embarrassingly parallel' tasks, like some rendering algorithms, where one may bring as many cores to bear on the task as one has available.
But concurrency is always an opportunity for parallelism, and an actor model allows one to take that opportunity given that other aspects of the runtime don't stand in the way. And parallel computing is simply running more than one computation at the same time, it doesn't by itself imply anything else about the algorithm.
Typically parallelism is particularly interesting for performance, and the (usually?) shared-nothing architecture of the actor model is not conducive to high performance computing.
Shared-nothing usually means a lot of unnecessary copying, but Rust lets you have safe transfers of data ownership, usually without copying. I wonder if that would make for a uniquely performant actor framework. (Depends on the implementation and user code, of course.)
Yeah actor model is a good fit for distributed concurrency. For single machine concurrency, let alone parallel computing it may not be the ideal choice.
If by return value you mean something like a function call's result: an actor can actually send a reply message back to the caller, provided the called actor knows the caller's address.
I think the key point is that most actor models, at least the ones I know, are truly concurrent with preemption, whereas most models based on OO are sequential. There is only one thread of execution in the OO program, whereas there is no notion of this in an actor system. All processes execute at the same time, so to speak.
There are also some points about the messaging being truly asynchronous, so you can't rely on the order in which messages arrive, beyond certain constraints. Erlang, for instance, requires that between any pair of processes, message order is preserved.
Also, creating a new actor creates a new thread of execution inside that actor. This is not the same as creating a new object.
To me, the biggest difference is that the actor model has no shared memory between classes like with object-oriented programming. I'd actually like to see OO programming where classes are completely isolated, but to my knowledge, that doesn't really exist. Go, Elixir and Erlang come close.
UNIX shell with isolated processes communicating over streams -> Actor model
Pretty much every C-style language with all of their caveats -> Object-oriented model
Elixir/Erlang/BEAM, as far as I know, semantically don't just come close, they arrive. I say "semantically" because there are some optimizations for things like large binaries so the system isn't literally copying a lot of them around unnecessarily, but semantically, BEAM processes are isolated from each other entirely.
Go doesn't come close; technically it's just another shared-state programming language. Culturally, it tends to use more sane concurrency features, and the channels are nice and all, but technically there is no isolation between goroutines.
With discipline, you can program Go with an actor mentality and it's fairly effective. I do it all the time, leaning on my years of experience in Erlang and some Haskell, which teaches you how to build systems that work that way. But you do need non-trivial discipline, as the language provides rather less help with it than I'd like.
Sharing state is what the actor model tries to eliminate; it has a very strong locality concept. A singleton in OOP (where you often have to implement atomicity and thread safety yourself) can easily be implemented as an actor; Erlang has gen_server for this exact scenario. As for isolation, you can actually impose class isolation in OO languages by declaring all variables private. A public variable can never happen in the actor model.
The other replies have covered this pretty well, but the two models are quite similar. The big difference is the formal constraints of the actor model, which allow people to build robust distributed systems. This is because the lack of certainty of communication and architecture is assumed as a first principle. This makes a big difference, in practice, for what programs actually look like.
The way I like to think of it is: actors combine objects (allocated objects, specifically) and threads. You wouldn't really allocate a thread per object, but that's an implementation detail. Message passing etc. all follow from the restrictions that arise due to objects being threads.
The restrictions you talk about are the key advantage of the actor model. OOP is extremely powerful, and nobody seemingly knows how to use that power responsibly. Actors introduce just enough rails to drive you into the pit of success.
Just like Rust imposes restrictions to improve the reliability and security of software, Actors impose restrictions that improve the human comprehension and reasoning of software architecture.
You can run every actor in a separate thread or even in a separate machine, making the whole system highly scalable. Whether you'll consider that a new paradigm or not, is question of definitions.
Remote method invocation is inherently a bad idea, because real-world remote systems have significant latencies. An ordinary function call is a few processor cycles; a network call is a few billion processor cycles. While abstraction is a good thing in general, that kind of abstraction is not a good thing.
That's the reason why remote calls today are using explicit syntax, API and so on.
But with actors it's expected that sending a message to actor is something that takes some time. So you have a clear separation between ordinary function calls and actor messages.
Of course, there's still a lot of difference between sending a message to a thread and sending a message to another machine, so take that with a grain of salt. But at least you can utilize multicore CPUs.
Although I'm not sure what you really mean by "remote method invocation", the idea of remote method invocation in OO systems should be impossible or incredibly difficult considering the semantics of a method call.
Method calls are synchronous and they are not allowed to fail under any circumstance. But in the context of distributed systems you have to choose between at most once or at least once delivery which are semantically incompatible with OOP style method calls.
Your methods will have to be written with the expectation that they are called multiple times or maybe not at all and most OOP code doesn't satisfy these expectations because the OOP model doesn't require it. If you extend the OOP model with message semantics of the actor model then what you get is not just OOP+messaging. It's the actor model.
Also you may have misunderstood something very fundamental. The benefit of paradigms isn't to be completely different. It's that everyone that shares a paradigm follows the same design philosophy. Imagine a mixed code base consisting of traditional OOP (with locks), actor model, async/await, promises. It would be very difficult to understand and sometimes it isn't obvious which of these is used. Some functions are asynchronous or they may fail but you may not know that because every single line could do a different thing. They may even be incompatible with each other. If you follow a single paradigm instead of many different ones you can avoid a lot of confusion and wasted work.
There have been dozens or hundreds of things called "remote method" or "remote procedure" or "remote function" invocation. However, not a single one of them has overcome the fundamental problem that you can not have something that is semantically identical to the function call presented by programming languages, because programming language functions simply do not have the concept that you are accessing it over a network, and they take heavy and critical advantage of the resulting simplifications.
Someone more clever than wise may stand up and point out that technically even a local function call is unreliable in many of the same ways that a network call is. However, even if many of the same problems can theoretically arise, the distribution of the problems are fundamentally different, which is why you do not guard every single function call in your program for all the network-type errors, and it generally works, whereas if you program that way in a networked environment, it generally does not work.
Personally I've come to prefer messages very strongly to "RPC" as the foundation of a system. Messages can trivially implement an RPC-like system, but RPCs can not implement a message-like system. Even if you implement an RPC called "SendMessage", you're still adding synchronization on the RPC system sending over the call and waiting for the response. Among other things, but that's the big one.
It’s impossible to make RMI behave exactly like local method invocation. It introduces many error cases and performance challenges you don’t normally have to consider.
The big difference is that you can pass an object to a method of another object, but you can't send an actor through a message channel. That jibes with threads, because a thread can't climb into the stack and get passed through another thread.
I haven't looked at this framework, but in the one I am familiar with, once you make the calls async, the objects can live anywhere, locally or remote. Also, you may have to assume that messages don't get processed and thus need to handle that case. The upside is that distributing your app becomes a configuration issue instead of a coding one. This is nice because with microservices the seams need to be decided upfront, and it's hard to refactor if you get it wrong.
Highly recommend Bastion if you're looking for a Rust Actor library / runtime. It uses async/await etc, and has supervision trees and restart strategies taken straight from Erlang/OTP
I'm amazed at how much of the discussion digressed to actix-web. Understandable, given the latest happenings there, but still... I tried Actix quite a while ago, coming from Erlang/Elixir and Scala/Akka. While the cognitive load of using it was not high, when I started my first test I noticed one core being maxed out and all the others sleeping, which was kind of unusual for me. I asked why, and the author answered that it uses an event loop, so if you want it otherwise, go away and use something else, like Riker... So I went away :) ... I would assume this kind of behaviour has been corrected meanwhile (I mean the library, not the comms), because otherwise it's just Node written in Rust. My 2c :)
Actix (the actor framework) is separate from what people are discussing here: actix-web, which can use actix. Earlier versions of actix-web were based around actix; I remember implementing actors and handlers. Since 1.0 or so that changed to a more standard setup.
I’ll add my 2c on actix-web: in my opinion it tries to do too much. hyper (simply based on tokio) is all you need for a fast async server. Anecdotally, I know many people who basically use just this stack, over actix-web/warp/tower etc.
I'm currently in the process of removing actix from one of my projects and replacing it with async/await and some channels. Actix comes with a ton of dependencies and doesn't buy you a whole lot anymore.
This is from memory, I would welcome any corrections.
Actix was written with a lot of unsafe code[0], which some people considered unnecessary and potentially dangerous in a web framework. In some cases the unsafe code may have been performing better than equivalent safe code. In other cases, it was possible to rewrite with safe code without losing performance.
People submitted patches to replace unsafe code with safe code. The maintainer of Actix responded with hostility to some of these patches, and (at least for some of the patches) did not seem to see the reason why people wanted these changes.
Subsequently, a lot of hostility was directed at the maintainer on various public Internet fora.
Ultimately, the maintainer of Actix deleted the repository from GitHub, but then had a change of heart and restored it, with a new maintainer.
Interesting - thanks for sharing that. Though that article does make things a little clearer, I get the impression we should really have three circles on a Venn diagram - essentially, I'm decoupling the concepts of "unsound" and "bug":
- Bug - i.e. some unintended action of a piece of code
- unsound - i.e. code that can be used (or abused) to produce unsafe effects
- unsafe-bleed - i.e. code that cannot be proven to be safe at compile time, or even potentially code that can be proven to be unsound, that a third-party developer could legitimately use without knowing it was unsafe/unsound.
The question I'm curious about is what the proposed PRs fixed. It's clear they didn't fix a bug; the API used was part of the undocumented internals (afaik), and all internal uses of the code were reportedly clean.
It may have "fixed" an unsoundness: future developers of the project may indeed have misused this code and received undefined behaviour as a result (again, I don't know how common or uncommon the particular misuse would be in the Rust community).
I don't know whether this fixed an unsafe-bleed, though; whether a third party has access to these objects and has the ability to combine them in such a way as to produce undefined behaviour.
My question to you is: is this a Bug, such that the code did not perform as expected; Is there a bleed of unsafety here, where the permissions model of rust allows access outside the crate to these implementation details; or is it just unsoundness - and if so, could the unsoundness have been mitigated some other (more performant) way?
I don't have the time or desire to answer that question. That would require a level of scrutiny into specific PRs and details that I don't think is productive to go into. I think my clarification stands on its own. Gratuitous use of `unsafe` is bad on its own, and unsound `unsafe` use is also bad on its own. They are two different categories of problems. Both were present in actix. These weren't the only issues, but others have addressed those. My only point was to clarify that the issue wasn't just frequency. Soundness was also an issue.
Whether this only impacts actix developers or whether it impacts users of actix is, I grant, an important question in order to assess just how much you should freak out about any particular soundness problem. (As dtolnay points out, sometimes you don't need to freak out at all. So my statement includes "zero amount." But I am intentionally conveying a bias here: some level of freak-out is probably appropriate in a high-profile, ecosystem-level project for Rust specifically. IMO.)
In the most recent issue, there was in fact demonstrated "unsafe-bleed," in the sense that client code could hit undefined behavior by using public safe APIs a particular way.
That seems a pretty balanced summary from my understanding of it.
One thing that I think should be noted: a lot has been said about this framing the original developer as unwilling. I'd say an unfair amount - from what I've seen, "cautiousness" would be a better description of his attitude, and hostility seems to have been shown when people blindly wade in ignoring all the discussion.
... and there are plenty of other examples in this and other tickets. Massive actions were taken by others and himself to significantly reduce the unsafe code where possible, and yet it seems the abuse kept coming.
I'd be interested to hear whether the new maintainer has had similar experiences. Unfortunately, our preferences in software development, often a product of a desire to generalise a specific overreaction, tend to become quasi-religious and puritanical. Members get outcast from their communities for trying to speak sense. I hope that this debacle will cause the community to mature, though I fear many will see it as a win and galvanise their viewpoint. We must remember that silver bullets can kill more than just werewolves.
For non-rusties: unsafe means turning off Rust's advanced safety features, reducing its safety level to that of C or C++. Which is to say, it's as safe as almost all software you're using right now. Of course, one usually turns off the safety features because one is trying to do something tricky to squeeze out performance or achieve some low-level feat, so it's also an indicator that something dangerous is going on.
In your C++ you might add a comment explaining the behaviour you're relying on to convince any reader of the correctness of your code. The cool thing about Rust is that anywhere you don't have the unsafe block, you're (theoretically) certain there's no tricky business going on without even having to read the code.
There was a lot of commotion in the community because many rustaceans take a lot of pride in these safety features. They took offense at disabling them, risking Rust's reputation for safety, for seemingly meaningless goals (such as winning the TechEmpower benchmarks). It's not disabling the safety features that makes the code faster, btw; it's the tricks you're allowed to do when the safety is off that might yield performance improvements.
> For non-rusties: unsafe means turning off Rust's advanced safety features, reducing its safety level to that of C or C++.
Actually, it's a bit more subtle: unsafe allows using some things which are not protected by "Rust's advanced safety features", like dereferencing raw pointers, and using these things can "reduce its safety level to that of C or C++" (for that module). However, if these things are not used, the safety is not reduced; you can take a block of Rust code which has no "unsafe", put it in an "unsafe" block, and it will be identically safe as before.
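A tiny illustration of that point (generic Rust, nothing actix-specific):

    fn main() {
        let x: u32 = 42;
        let p = &x as *const u32;

        // Dereferencing a raw pointer is one of the operations gated
        // behind `unsafe`:
        let y = unsafe { *p };

        // Wrapping already-safe code in `unsafe` changes nothing; this is
        // identically safe (the compiler even warns the block is unused):
        let z = unsafe { x + 1 };

        println!("{} {}", y, z);
    }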
I do not understand the motivation for deleting the repository. Most maintainers just quit but they don't go out of their way to take down the project when they go.
I think it's basic human instinct, paraphrased by the saying 'taking my ball and going home'. I mean, if people are not treating you how you want to be treated, why should you go away and leave them with your stuff?
Not saying I agree with what he did, but I do get it.
I think the other comment is conflating a previous kerfuffle, not the most recent one (the older event was about a criticism of Actix's previously prolific use of unsafe rather than a specific bug). From my recollection of the most recent brouhaha:
Someone found a bug in some unsafe code and created a GitHub issue about it. The maintainer requested a PoC to show that the issue wasn't theoretical. A PoC was provided. The maintainer agreed it was a bug, but wanted a different solution than the provided pull request. The maintainer was then harassed by a handful of Reddit/Twitter users for various reasons. The maintainer nuked the GH issue. The internet harassment escalated, so the maintainer deleted the repo.
The outcome is that a few days later the maintainer re-released the repo but assigned a new maintainer and is no longer publicly involved in the project.
I’m really excited about this project. Using Akka was one of those things that made distributed/parallel execution just “click”. Particularly useful was being able to draw your architecture the same way one might draw an organization on a whiteboard. Everyone can quickly understand what happens and who is responsible for what.
Out of curiosity - actix seems to be very performant compared to actor frameworks implemented in other languages - would it be a good fit to implement a database using it?
For example mapping worker pool and workers to actors?
Actors in principle (isolated objects assigned to threads) sound great until you need to SHARE data; then you enter a new universe of compromises where you need locking mechanisms (for performance), but no language semantics support it natively. Pony introduced reference capabilities, but that still created a kludge for those times you needed synchronous access. There is no model yet where locking primitives are associated with resource handles.
Erlang OTP 21.2 introduced the idea of `persistent_term`, which is a data structure any process can read from at the expense of being costly to modify. This provides a nice alternative to locks, since you can have a single actor/process responsible for handling writes (and deciding when to flush those writes, triggering a GC pause on processes depending on the term) but any number of actors reading from that persistent term at full-speed.
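Rust's std has no persistent_term equivalent, but here's a rough sketch of the same single-writer/many-reader shape, using Arc + RwLock as a stand-in:

    use std::sync::{Arc, RwLock};
    use std::thread;

    fn main() {
        // A read-mostly value, replaced wholesale by a single writer.
        let shared = Arc::new(RwLock::new(Arc::new(vec![1, 2, 3])));

        // Any number of readers take a cheap snapshot and read at speed.
        let readers: Vec<_> = (0..4)
            .map(|i| {
                let shared = Arc::clone(&shared);
                thread::spawn(move || {
                    let snapshot = Arc::clone(&shared.read().unwrap());
                    println!("reader {} sees {:?}", i, snapshot);
                })
            })
            .collect();

        // The lone writer swaps in a new version: the "costly" operation.
        *shared.write().unwrap() = Arc::new(vec![4, 5, 6]);

        for r in readers {
            r.join().unwrap();
        }
    }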
> ... sound great until you need to SHARE data ...
If by "share" you mean one writer and others with read-only access, then staleness of data is a question. In Java we have volatile for that; I think there is a similar keyword in C++. However, you'll have to maintain the invariant of "one writer and others with read-only access". If you don't need the absolute latest version, you can do message passing/pubsub etc.
If by "share" you mean multiple writers, I'll have to question your design decision. Why do multiple threads have to write to same memory location? Most of the time, you can get away with partitioning/sharding.
Unfortunately, our objects are not isolated; they interact with the world, and often need to be shared when we cannot copy the resource (for performance). E.g. a (partitioned) rendering surface may be written to simultaneously, a physics engine needs to know what objects are in the game world, multiple bank accounts need to be updated simultaneously, etc. By design it is not always possible to sequence everything; some transactions must be atomic across multiple objects.
Yes they are: an atom is in isolation, a cell has a membrane, we have a body; there is isolation and encapsulation in physics/chemistry/biology. In the actor model, message passing is the interaction. Since you mentioned games, check out https://github.com/aeplay/kay, which provides the isolated "objects" for the https://github.com/citybound/citybound "physics engine". On multiple bank accounts: well, an account is an actor, so there is no problem there; please have a look at http://proto.actor/blog/2017/06/24/money-transfer-saga.html
What's the state of Rust if I wanted to get into it for backend development? I've looked briefly into Rocket and liked it, but didn't delve too deep. I understand Actix is one of the most mature?
I wouldn't call Actix the most mature, but it's definitely the one that gets talked about the most.
I built a fairly complex backend directly on top of Hyper[0] (it doesn't even involve that much glue, really) in 2016 and have updated it as Rust and Hyper have matured. It's actually a delight to work on, and I brought a new hire on board recently who had no trouble getting up to speed, since everything is just straightforward, idiomatic Rust.
I've looked at basically every Rust "web framework" that's been released since 2016, and I can't see any of them simplifying the codebase enough to justify using them. Quite a few of these frameworks result in code that isn't obvious to a Rust programmer who hasn't used that framework before, too, which I consider a disadvantage (I prefer obvious over clever).
[0] edited to add for those that may not be aware: Hyper is a client & server HTTP library. Many/most of the frameworks are built on top of it.
Yeah I agree with this. I am working on an API backend in Rust too, and I originally started out writing it using actix-web.
The past concerns about the massive use of unsafe in actix-web worried me and already had me thinking about moving away from it.
When I started looking at upgrading to actix-web 2.0, there was a lot of rework to be done, but the documentation was very lacking.
I also reached out and got some help from the maintainer of actix-web a few times prior to that but he was always really really brief and for me personally it was difficult to make sense of. Of course I understand that he can’t spend his time helping everyone as that would take up all his time. But from purely a developer point of view, the docs were too lacking for me, and everything felt quite convoluted to try and make sense of.
I was left feeling frustrated because actix-web was so hyped and seemingly popular but for me it was difficult to work with.
I switched to hyper and so far am happy with my decision. I am not going to touch actix-web again anytime soon, but that is only my personal opinion and feeling.
Surely? I tried several others to replace my .NET Core project, and Actix is the only one that barely gets there.
Which others handle, without much fuss:
- Auth (big!)
- Routing with state and db pooling
- Forms
- Templates
- Encode/Decode all stuff (query string, headers, etc)
- Allow injecting middlewares
- Have (at least) the bare minimum of functionality like gzip encoding
- And many other small details I forget now
I have worked a lot with Django, then used .NET Core, and now Actix, and the other Rust frameworks are lacking badly as far as I can see.
What I see the Rust ecosystem lacking now is mostly around templating (dynamic options need more love!) and some stabilization around RDBMS usage.
Actix has a lot of features, but to me that's not the same as maturity. I don't want one "batteries-included" framework, I want components (preferably mature and relatively boring) with well-defined interfaces that I can use together to build my backend.
That's fine, but it's surprising how much stuff you need to make a semi-complex site. It's better for an ecosystem to have the "batteries-included" framework that covers the needs of many, and then specialize.
I ended up building my own framework for my use case (about 30 KLOC NMS, all Rust) - http://github.com/ayourtch/rsp10 - would be interesting to hear your critique if you have a minute to look at it!
You have to adjust the filters, because hyper isn't considered a "framework", but the TechEmpower benchmarks show hyper beating even actix (though it's narrow).
Definitely agree things were pretty clunky for async Rust web stuff before async/await!
The sugar is real nice with async/await, plus some performance improvements.
I got all excited upon release and moved my backend over to async; I do somewhat regret not waiting 6 months.
Docs are sparse across all the various crates needed: a bunch of required interoperable parts, all needing specific alpha versions or separate preview crates.
It's not user friendly right now, I can't deny it. Very much early days.
Regardless, a lot of effort was put into the design and have to respect core devs for that. Think it was the most discussed/debated feature ever.
That's good to know! I got burned pretty bad trying to switch away from actix-web to one of the other frameworks (rocket, gotham, warp) after the whole kerfuffle and lost my taste for Rust as a backend web language as a result. Seeing your experience makes me want to try again with only hyper! I've always heard it was too "low level" and never gave it a proper look as a result.
I've used Rocket and Actix for production web services. I love Rocket's conciseness; however, it still has a nightly dependency, and is still in the process of updating to async. If you can live without those two things, I'd highly recommend Rocket.
Actix is one of the most mature, but (in my opinion) complicated to stand up for a simple web service. It's extremely performant, though.
I'd look into tower and warp too if I were you - they seem to be almost as concise as Rocket, but have no nightly dependencies and already have async support.
FWIW, I have played a bit with Actix for a few small projects, and it wasn't for me. It's a very "batteries included" library which does a lot of things for you, like transforming data structures into responses automagically. It feels a bit like Rails, where you just have to know the magic words sometimes to make things happen, and it's not exactly clear how things work.
I think that's a great description! Every 'batteries included' Rust web framework I've used so far makes the same boilerplate/magic trade off as Rails, though some magic has felt more intuitive than others.
Hyper doesn't remove much boilerplate but it's one of the most intuitive options that I've seen.
I'm looking at warp[0] and like the concept behind it very much... a bit higher level than hyper, but using a concept of composable filters without being too opinionated. Haven't done much kicking of the tires yet though
At present Rust's primary focus is systems programming; it will take another 3-5 years to become viable for web programming, and even then in a niche area. I doubt the libraries will be mature enough to provide an easy way to build things the way other languages do.
If you know Ruby or Python or C# or Swift or Go or Dart or TypeScript or JavaScript, don't change to Rust yet. It's far, far away from having a similar library, documentation and developer eco-system.
I believe Haskell has a more mature web development eco-system than Rust at present, but it suffers from the same issue: most libraries do not have enough documentation, and it's tiring to go through the library code to understand how to use it. But if you want to do Rust, why not try Haskell? You will learn much more, and it offers a new way of functional programming. Indeed Rust, Swift, C++, Java, Python, Ruby, JavaScript etc. all borrowed ideas from Haskell or OCaml, and the first Rust compiler was written in OCaml.
Actix, which is the most popular framework in Rust, just had its core developer leave, and it was dead for a while before the community tried to revive it. [1] [2]
Yes, probably viable for you as an individual, but this does not translate into viable for most. It's still a niche inside a niche.
People have been writing web backends in Haskell for decades; it is a viable choice for a very specific niche of web development. Rust has miles to go before it can reach even that stage. At present Elixir has a much bigger eco-system than Rust.
I don't think 'Rust is not as far along as backend web Haskell' is true, in my (minimal) experience.
I've worked at one very small company and one very large company, and at both Rust was a much more serious/common consideration for web services than Haskell.
I absolutely agree - there's a lot of explaining to do when you choose the less traveled path, and it hurts that there's a perception that both languages are intended for 'academic' or 'niche' purposes.
Why Rust is considered over Haskell in one of the organizations I've been with is because it has the performance/memory usage characteristics of C/C++, which is a requirement for certain services. Though for many projects I'd imagine they'd meet similar levels of resistance.
I think the fact that Haskell is more math than programming might also be related to that. I've been programming for a decade and still don't understand Haskell.
I disagree. The ecosystem needed for web development doesn't need to be large. One major framework with a lot of integration focused plugins (think SSO, LDAP, SQL, Kafka, etc) should be more than enough. In the JVM ecosystem Micronaut isn't even two years old and it is already a viable choice for a lot of use cases.
Even Crystal, which isn't 1.0 yet, makes it easier to build a web server. The Rust team just hasn't focused on web programming as much as systems programming.
We're doing a new project in Rust. It has a REST API built upon tokio and hyper; effectively the whole backend is in Rust.
Building upon hyper isn't too bad, but it's not "plug and play", I'd guess. For us it's OK, as we have a lot of control over the design and the stuff on top isn't too much.
Things are fast, the code is OK once you've wrangled with the compiler over it, and async support is already OK (but definitely needs to improve, especially compiler diagnostics and errors).
It's definitely workable, but async/await landed relatively recently; most key libs have been updated, but things can be rough on the documentation side.
I think if you want to do web dev backend with Rust (or C++ or any bare metal lang), you should do it behind a RPC framework (Thrift, gRPC) if only for security purposes. But it's also way more convenient. There's no need to have your core business logic also become entangled with a web server unless the networking minutiae is relevant. There are too many things that could go wrong on a public-facing endpoint to trust the runtime of something you are essentially rolling on your own if you use Rust, C++, C, etc.
This framework's core developer has left; the community has since taken it over. Hopefully it will continue to be supported and developed.
Rust still has a way to go for being a mature eco-system for doing any serious web programming work. Given the focus of Rust on systems programming, I will still be careful to consider it for anything web related.
Hopefully situations like a project suddenly being abandoned will not happen again. [1] [2]
You seem to have a superficial level understanding about the Rust ecosystem for web development and for the actix ecosystem. Yet, this hasn't stopped others from trusting your opinion or agreeing with your comment, unfortunately.
This post is for the original actor architecture. The next version, which should never have been called actix, was based on a completely different architecture. This second version prevailed and remains in use today, as actix-net. Actix-web v2 uses actix-net. It features async/await syntax, is feature complete, and the architecture is mature. The project includes a guide, API docs, a vast collection of examples, and a growing ecosystem. Actix-net is very high-performance.
The main concern about any actix project was related to poorly communicated uses of unsafe code, a refusal to codify a policy about unsafe, and a reluctance to work respectfully with others to address real undefined behaviors.
The author has retired from the projects, for now. Others have stepped up to maintain and improve. It's very exciting to see people addressing unsafe blocks. Most importantly, actix-web lives on!
The expectations and treatment of authors who spend hundreds of hours on bleeding edge, open source work is also a major problem. This problem isn't specific to Rust, either.
Hi @Dowwie, I'm one of those "others" you mentioned that's trying to fill Nikolay's very large shoes!
Yuki (John Titor on GitHub), Rob and I are well on our way removing concerning `unsafe`s wherever we can. Sadly, there are some breaking changes but that's not stopping us and we're now headed to a 3.x release. No timeline on that just yet.
The community has bounced back quite a bit ever since the transition and I'm hopeful that actix-web will continue to be the leader-ish of Rust web development!
Just an anecdote, but I've used Rust for a handful of production web services for a couple of years now. The ecosystem is immature relative to Java or Ruby, but I've quickly built maintainable and performant services without feeling like there was missing tooling.
With many (most?) Rust web frameworks now supporting async, I'd feel a lot more comfortable recommending it for 'serious' web programming work to teams interested in learning Rust.
When you adopt a dependency, you should be prepared to maintain it yourself if no one else does. There's a reason open source licenses say "no warranty" in all-caps.
Sadly this seems to be lost on the npm and npm-adjacent crowd, who installs gigs of random people's personal projects for breakfast.