There is a huge difference between the common stereotype of ADHD (just rowdy children who can't sit still), and actual symptoms and diagnostic criteria. The difference is almost as comically warped as an image of a "hacker" in media vs actual hackers.
ADHD includes many other things, like weak short-term memory, a broken perception of time, hard-to-control hyperfocus, an overwhelming inner monologue, and executive dysfunction that makes some tasks physically impossible to start even when the person wants to do them. And it comes with a bunch of other comorbidities. Doctors diagnosing ADHD also have an obligation to exclude other causes of the symptoms, like bipolar disorder.
Stimulant medication does not actually cause stimulation in people with ADHD. When someone has a deficit of, or insensitivity to, the relevant neurotransmitters, the meds merely bring them up from a dysfunctional level, where the brain lacked the ability to function properly, to a roughly normal one.
I could say a lot about this topic, but I'll try to keep it brief.
There were a number of similar "post-LambdaMOO" systems built in the mid-to-late 90s and early 2000s. I wrote something of my own during a year-long bout of .com-crash unemployment, around 2001. But before that there was "coolmud" (written by Stephen White, who wrote the original MOO before Pavel Curtis adopted it and created LambdaMOO), and then Greg Hudson's "coldmud", which spawned "Genesis", a few other projects, and the thing I was working on but never finished.
Basically they were attempts to generalize the "extensible object oriented network service" side of MOO beyond the 'game' or 'chat' orientation of LambdaMOO, and to fix some technical weaknesses that LambdaMOO had.
IMHO these types of architectures/systems were sort of an alternative path for how the internet could have developed if the web and HTTP based architectures hadn't taken over and defined how we think about what the Internet is. (For people who didn't use the Internet prior to the existence of the web it's hard to imagine that, I know.)
I think the combination of prototype-oriented object systems plus multiple users plus networking plus group authoring plus socializing is something that still hasn't been realized to the depth that these systems had, even if they were hobbled by their lack of multimedia capability.
I believe there were similar efforts on the LPC/LPmud side of things, but there were some differences in philosophy there.
LambdaMOO itself lives on and gets some ongoing development here and there.
Google App Engine originally used a custom containment strategy.
Back in the day, I was on the team that added Python 2.7 support to App Engine and we were experimenting with a different containment approach.
But Python is a complex language to support - you need WSGI, dynamic loading (for C extensions), a reasonably performant file system (Python calls `stat` about a billion times before actually importing a file), etc.
So our original runtime was actually Brainf#ck. At one point, if you had guessed that Google supported it, you could have written your (simple) webapp in Brainf#ck and Google would have scaled it up to hundreds of machines if needed ;-)
In the discussion of "X and NeWS History", I mentioned "PIX", which integrated PostScript with tuple spaces on Transputers, in a thread about how X-Windows is actually just a terribly designed and implemented distributed database with occasional visual side effects and pervasive race conditions:
Jon Steinhart: "Had he done some real design work and looked at what others were doing he might have realized that at its core, X was a distributed database system in which operations on some of the databases have visual side-effects. I forget the exact number, but X includes around 20 different databases: atoms, properties, contexts, selections, keymaps, etc. each with their own set of API calls. As a result, the X API is wide and shallow like the Mac, and full of interesting race conditions to boot. The whole thing could have been done with less than a dozen API calls."
To that end, one of the weirder and cooler re-implementations of NeWS was Cogent's PIX for transputers. It was basically a NeWS-like multiprocessing PostScript interpreter for Transputers, with Linda "tuple spaces" as an interprocess communication primitive:
The Cogent Research XTM is a desktop parallel computer based on the INMOS T800 transputer. Designed to expand from two to several hundred processors, the XTM provides a transparent distributed computing environment both within a single workstation and among a collection of workstations. Using Linda tuple spaces as the basis for interprocess communication and synchronization, a Unix-compatible, server-based OS was constructed. A graphic user interface is provided by an interactive PostScript window server called PIX. All processors see the same set of system services, and within protection limits, programs capable of using many processors can spread out over a network of workstations and resource servers, acquiring the services of unused processors.
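For readers who haven't met Linda before, the whole IPC vocabulary is just a handful of operations on a shared bag of tuples. Below is a toy, single-process C sketch of that vocabulary (out/rd/in) -- not PIX's or the XTM's actual implementation, and real Linda blocks on in/rd and spans processors; every name here is made up for illustration:

    /* Toy, single-process sketch of Linda-style tuple-space operations.
     * Real Linda "in"/"rd" block until a matching tuple appears and work
     * across processors; this only illustrates the out/rd/in vocabulary. */
    #include <stdio.h>
    #include <string.h>

    #define MAX_TUPLES 128
    #define ANY (-1)                      /* wildcard value for matching */

    struct tuple { char tag[32]; int value; int live; };
    static struct tuple space[MAX_TUPLES];

    /* out(tag, v): add a tuple to the space */
    static void ts_out(const char *tag, int value) {
        for (int i = 0; i < MAX_TUPLES; i++)
            if (!space[i].live) {
                snprintf(space[i].tag, sizeof space[i].tag, "%s", tag);
                space[i].value = value;
                space[i].live = 1;
                return;
            }
    }

    /* find a tuple matching (tag, value-or-ANY); remove it if take != 0 */
    static int ts_match(const char *tag, int value, int *out_value, int take) {
        for (int i = 0; i < MAX_TUPLES; i++)
            if (space[i].live && strcmp(space[i].tag, tag) == 0 &&
                (value == ANY || space[i].value == value)) {
                *out_value = space[i].value;
                if (take) space[i].live = 0;
                return 1;
            }
        return 0;
    }

    /* rd: read without removing; in: read and remove */
    static int ts_rd(const char *tag, int value, int *out) { return ts_match(tag, value, out, 0); }
    static int ts_in(const char *tag, int value, int *out) { return ts_match(tag, value, out, 1); }

    int main(void) {
        int v;
        ts_out("render-request", 42);            /* a "client" posts work */
        if (ts_rd("render-request", ANY, &v))    /* peek without consuming */
            printf("saw request %d\n", v);
        if (ts_in("render-request", ANY, &v))    /* claim and remove it */
            printf("handled request %d\n", v);
        return 0;
    }

The point of the model is that coordination is expressed by describing the tuple you want, not by naming the process that has it.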
I grokked it too over a few weeks, a few months ago, then rewrote it all in a sane language: C. There are also a few mistakes in the code, where it is clear what the author intended but not what he wrote. After fixing them, the fences around buildings, for example, look correct. I then replaced a lot of the O(n^4) algorithms with faster O(n^2 log n) ones. This took it from needing ~1e10 floating-point ops to render an image down to ~1e7, allowing me to do it on a small microcontroller, producing this: https://twitter.com/dmitrygr/status/1470934354712928257
When you look at an item listing and you see something like “Mead”, that is truly all the item is: it isn’t a cup of mead, it’s just a vague amount of the liquid mead itself, as if your hand were the only thing keeping it from hitting the ground.
But there are containers that can hold your liquid. You have mugs and goblets that hold one quantity of “Mead”, giving the impression that one count of mead is like a generic serving size. You also have barrels and pots that hold stacks of “Mead”.
Creatures are kind of like walking containers and have their own detailed inventories. Among the things you’d expect to find like armor, weapons, and books, you might also find a “coating of tears” on a crying dwarf, or perhaps a “spattering of blood” on a murderous elf.
They’re not just static inventories for the fun of a story; creatures do interact with them and use them. Dwarves covered in a vomit item will (hopefully) put any available soap in their inventory and use it to clean themselves in water, for example.
Cats are simple and just clean themselves with no water or soap needed. The catch with them is that they ingest whatever they have cleaned off of them.
So, putting all this together: the problem was that cats pick up a whole “serving size” of alcohol and proceed to clean themselves, ingesting the entire serving. The bug comes down to the vagueness of liquid sizes.
And it was fixed accordingly! Cats are still vulnerable to the effects of self-cleaned alcohol, but the strength is now proportional to the amount they actually ingest.
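Not Dwarf Fortress's actual code (which isn't public) -- just a minimal C sketch of the before/after logic as described above, with made-up names and units:

    /* Sketch of the logic described above; not actual Dwarf Fortress code.
     * Names and units are invented for illustration. */

    /* Before the fix: cleaning a contaminant off the fur ingested a whole
     * "serving" of the liquid, however little was actually on the cat. */
    int ingested_before(int coating_size, int serving_size) {
        (void)coating_size;
        return serving_size;              /* a full mug's worth of mead */
    }

    /* After the fix: the cat only ingests what was actually on its fur,
     * so the intoxicating effect is proportional to the (tiny) coating. */
    int ingested_after(int coating_size, int serving_size) {
        return coating_size < serving_size ? coating_size : serving_size;
    }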
Booch: Your functional programming work was unsuccessful.
Backus: Yes.
Booch: Let’s dwell upon that for a moment, because there were some other papers that I saw after your Turing Award lecture [“Can Programming Be Liberated From the von Neumann Style”, 1977], which was also a turning point. Because I’d describe it as a wake-up call to the language developers and programmers, saying there’s a different way of looking at the world here. The world was going down a very different path. Let me pursue that point of why you think it didn’t succeed.
Backus: Well, because the fundamental paradigm did not include a way of dealing with real time. It was a way of saying how to transform this thing into that thing, but there was no element of time involved, and that was where it got hung up.
Booch: That’s a problem you wrestled with for literally years.
It implements Conway's Game of Life by creating a DSL using the C preprocessor and printf. The output is a program (several initial boards are supplied to bootstrap) which is the input program to be compiled and run to create the next generation. This is the program for a second generation:
LIFE
L _ _ _ _ _
L _ _ O _ _
L _ _ _ O _
L _ O O O _
L _ _ _ _ _
GEN 2 STAT 328960
END
Each symbol like "LIFE" is a macro, the board is the program.
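To make that concrete, here is a minimal sketch of the technique, not the author's actual macros: a driver defines each symbol as a macro, #includes the previous generation's output as source, and printf()s the next generation in the same format. The file name "board.c", the fixed width of 5, and passing STAT through untouched are all my assumptions:

    /* Minimal sketch (not the original code) of the board-is-the-program
     * trick: each DSL symbol is a macro, so the previous generation's
     * output compiles directly as the data for this generation. */
    #include <stdio.h>

    #define W 5                    /* assumed fixed board width */
    #define LIFE  static const int cells[] = {
    #define L                      /* row marker; rows are a fixed width */
    #define O     1,
    #define _     0,
    #define GEN   }; static const int generation =
    #define STAT  ; static const long stat_value =
    #define END   ;

    #include "board.c"             /* the previous generation's output */

    #define H ((int)(sizeof cells / sizeof cells[0] / W))

    static int alive(int r, int c) {
        if (r < 0 || r >= H || c < 0 || c >= W) return 0;
        return cells[r * W + c];
    }

    int main(void) {
        puts("LIFE");
        for (int r = 0; r < H; r++) {
            printf("L");
            for (int c = 0; c < W; c++) {
                int n = 0;
                for (int dr = -1; dr <= 1; dr++)
                    for (int dc = -1; dc <= 1; dc++)
                        if (dr || dc) n += alive(r + dr, c + dc);
                int next = n == 3 || (alive(r, c) && n == 2);
                printf(" %c", next ? 'O' : '_');
            }
            puts("");
        }
        /* stat_value is carried through unchanged; its meaning belongs
         * to the original program. */
        printf("GEN %d STAT %ld\n", generation + 1, stat_value);
        puts("END");
        return 0;
    }

Compiling this against one generation's board and running it emits the next generation's board, which can then be compiled in turn.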
Look, I think Yarvin is a psychopath as much as the next reasonable human, but let's be clear - Urbit is garbage on technical grounds.
The terminology is a shell game of confusion and deliberate obfuscation. They pretend that it's a network, and a server, and a virtual machine, and an operating system, but it isn't any of these things -- you still need to provide all of them yourself.
It's just a program that sits on top of all your "real" infrastructure, that then emulates its own shittier "make believe" infrastructure, where you're expected to pay for shitty fake IPv4, shitty fake DNS, and are expected to write UDP applications in an esoteric programming language.
That's it.
There is absolutely nothing redeemable here. Please just let this tire fire of a project die the lonely, obscure death it deserves already.
The semantic web currently scales poorly in two different dimensions. As long as this remains the case, it will see little adoption.
First, the software technology scales very poorly unless the size of your data model is trivial. The semantic web is a type of graph search problem with all of the hard algorithm and data structure challenges that entails. Most interesting semantic web applications require data models operating at scales where existing graph platforms have pathological performance characteristics. The gap between capability and requirements is several orders of magnitude. You can't throw hardware at it; when I was first hired to do CS research on scaling the semantic web, I was working on literal supercomputers.
Second, the human element of the semantic web doesn't scale. Semantics are contextual and subjective. As the number of people contributing to the data model increases, the consistency and quality of the data model decreases. This becomes an extremely expensive global coordination problem at scale that may be unsolvable. It is broadly recognized that you need a single, global arbiter of semantics in any scalable system just to maintain semantic consistency, which basically means strong AI.
If we have the technology to solve these two problems, I suspect the Semantic Web will no longer be interesting.
This is probably one of my favorite developments coming from the quarantine. Maciej Ceglowski is a keeper of the torch reminding us of what the web used to be: a weird place filled with weird people who were guided by curious intellects and a belief that the internet can and would liberate us in some strange and amazing way.
Before social media amplified celebrity worship and extreme positions, everyone's voice on the web was only given weight by the merit or personality of what was said. No matter how popular you were on the old internet your voice was never loud enough to silence another. People were mostly anonymous (in practice because governments were caught off guard) and anyone could start a quirky website that was suddenly the talk of the town.
I miss the old internet that inspired a lot of brilliant and all too idealistic people to code into the night and bring us these amazing innovations. In some ways Mark Zuckerberg was cut from the old cloth. The original Facebook was in many ways amazing, quickly evolving, and so open. Everything took a turn for the worse with advertising.
Thank you Maciej for the trip down memory lane. Some of us may cling to the past but I hope there's another version of you and the old guard of the internet waiting for us or our future generations when we are gone.
Rachel presumably wrote her server in a reasonable language like C++ (though I don't see a link to her source), but when I wrote httpdito⁰ ¹ ² I wrote it in assembly, and it can handle 2048 concurrent connections on similarly outdated hardware despite spawning an OS process per connection, more than one concurrent connection per byte of executable†. (It could handle more, but I had to set a limit somewhere.) It just serves files from the filesystem. It of course doesn't use epoll, but maybe it should — instead of Rachel's 50k requests per second, it can only handle about 20k or 30k on my old laptop. IIRC I wrote it in one night.
It might sound like I'm trying to steal her thunder, but mostly what I'm trying to say is she is right. Listen to her. Here is further evidence that she is right.
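httpdito itself is hand-written i386 assembly; the following is only a rough C sketch of the same process-per-connection shape (accept, fork, read one request line, write the file), to show how little machinery is involved. The port, buffer sizes, missing Content-Type header, and lack of path sanitization are shortcuts of mine, not properties of her server or of httpdito:

    /* Rough C sketch of a fork-per-connection static file server. */
    #include <arpa/inet.h>
    #include <fcntl.h>
    #include <netinet/in.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void serve(int c) {
        char req[4096], path[2048] = "index.html", buf[8192];
        ssize_t n = read(c, req, sizeof req - 1);
        if (n <= 0) return;
        req[n] = '\0';
        if (strncmp(req, "GET /", 5) != 0) return;
        /* No sanitization here; a real server must reject "..", etc. */
        sscanf(req, "GET /%2047[^ ]", path);
        int fd = open(path, O_RDONLY);
        if (fd < 0) {
            const char *e = "HTTP/1.0 404 Not Found\r\n\r\n";
            write(c, e, strlen(e));
            return;
        }
        const char *ok = "HTTP/1.0 200 OK\r\n\r\n";
        write(c, ok, strlen(ok));
        while ((n = read(fd, buf, sizeof buf)) > 0)
            write(c, buf, (size_t)n);
        close(fd);
    }

    int main(void) {
        signal(SIGCHLD, SIG_IGN);                /* don't leave zombies */
        int s = socket(AF_INET, SOCK_STREAM, 0), one = 1;
        setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);
        struct sockaddr_in a = { .sin_family = AF_INET,
                                 .sin_port = htons(8080),
                                 .sin_addr.s_addr = htonl(INADDR_ANY) };
        bind(s, (struct sockaddr *)&a, sizeof a);
        listen(s, 128);
        for (;;) {
            int c = accept(s, NULL, NULL);
            if (c < 0) continue;
            if (fork() == 0) {                   /* one process per connection */
                close(s);
                serve(c);
                close(c);
                _exit(0);
            }
            close(c);
        }
    }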
As I wrote in https://gitlab.com/kragen/derctuo/blob/master/vector-vm.md, single-threaded nonvectorized C wastes on the order of 97% of your computer's computational power, and typical interpreted languages like Python waste about 99.9% of it. There's a huge amount of potential that's going untapped.
I feel like with modern technologies like LuaJIT, LevelDB, ØMQ, FlatBuffers, ISPC, seL4, and of course modern Linux, we ought to be able to do a lot of things that we couldn't even imagine doing in 2005, because they would have been far too inefficient. But our imaginations are still too limited, and industry is not doing a very good job of imagining things.
† It's actually bloated up to 2060 bytes now because I added PDF and CSS content-types to it, but you can git clone the .git subdirectory and check out the older versions that were under 2000 bytes.
My first encounter with Forth was through the TinyMUCK codebase, where the language was called MUF, and it blew my mind at the time that it was possible to code within the MUD game itself and use it right away.
You can do that in Second Life, and people do. I wrote a whole non-player character system for Second Life. There's a 64K memory limit on how complex a program you can write, because server resources are limited, but you can have multiple programs and pass messages between them. You can make HTTP requests to external servers, and it's common to call out to servers that keep databases for game state and such.
It's not all pre-made skins, either; you can go into Blender or Maya and make whatever you want. (What Fortnite calls "creating skins", from a supply of pre-made parts, Second Life users call "getting dressed".)
The downside is that you have to be good enough at 3D artwork creation and the tools for it to make something good. Anybody can create in Minecraft or The Sims. There's a high bar to entry in the systems that allow full creation capability. This is a real problem when attracting new users.
Nobody else seems to have both "big, persistent shared world" and "fully general user based creation". Most systems limit one or the other. There's also a big problem with Second Life viewers choking on excessively complex geometry, which brings down the frame rate. Second Life badly needs better automated level of detail generation.
(Automated level of detail generation is supposedly a solved problem. The trouble is, the usual algorithms don't work on clothing as separate items. Second Life mesh clothing is not part of the body. It's like real-world clothing, a 3D object with an inside and an outside. Sometimes designers omit the inside, if you can't see it, but usually it's present. Mesh reduction algorithms in common use have a terrible time with thin objects like cloth. If the original has a wrinkle or pleat that affects both sides, which are separate meshes, it's hard to flatten that out during mesh reduction while maintaining the thickness. Nice R&D problem for somebody.)
Already lots of good comments and ideas below. My first attempt here on this topic turned out too wordy -- and this attempt is also (I tried to explain things and where the ideas came from). A summary would be better (but Pascal was right). I'll try to stash the references and background somewhere (perhaps below later in a comment).
I had several stages about "objects". The first was the collision 50 years ago in my first weeks of (ARPA) grad school of my background in math, molecular biology, systems and programming, etc., with Sketchpad, Simula, and the proposed ARPAnet. This led to an observation that almost certainly wasn't original -- it was almost a tautology -- that since you could divide up a computer into virtual computers intercommunicating ad infinitum you would (a) retain full power of expression, and (b) always be able to model anything that could be modeled, and (c) be able to scale cosmically beyond existing ways to divide up computers. I loved this. Time sharing "processes" were already manifestations of such virtual machines but they lacked pragmatic universality because of their overheads (so find ways to get rid of the overheads ...)
Though you could model anything -- including data structures -- that was (to me) not even close to the point (it got you back into the soup). The big deal was encapsulation and messaging to provide loose couplings that would work under extreme scaling (in manners reminiscent of Biology and Ecology).
A second stage was to mix in "the Lisp world" of Lisp itself, McCarthy's ideas about robots and temporal logics, the AI work going on within ARPA (especially at MIT), and especially Carl Hewitt's PLANNER language. One idea was that objects could be like servers and could be goal-oriented with PLANNER-type goals as the interface language.
A third stage was a series of Smalltalks at Parc that attempted to find a pragmatic balance between what was inevitably needed in the future and what could be done on the Alto at Parc (with 128K bytes of memory, half of which was used for the display!). This was done in partnership with Dan Ingalls and other talented folks in our group. The idealist in me gritted my teeth, but the practical results were good.
A fourth stage (at Parc) was to deeply revisit the temporal logic and "world-line" ideas (more on this below).
A fifth stage was to seriously think about scaling again, and to look at e.g. Gelernter's Linda "coordination language" as an approach to do loose coupling via description matching in a general publish and describe manner. I still like this idea, and would like to see it advanced to the point where objects can actually "negotiate meaning" with each other.
McCarthy's Temporal Logic: "Real Functions in Time"
There's lots of context from the past that will help understanding the points of view presented here. I will refer to this and that in passing, and then try to provide a list of some of the references (I think of this as "basic CS knowledge" but much of it will likely be encountered for the first time here).
Most of my ways of thinking about all this ultimately trace their paths back to John McCarthy in the late 50s. John was an excellent mathematician and logician. He wanted to be able to do consistent reasoning himself -- and he wanted his programs and robots to be able to do the same. Robots were a key, because he wanted a robot to be in Philadelphia at one time and in New York at another. In an ordinary logic this is a problem. But John fixed it by adding an extra parameter to all "facts" that represented the "time frame" when a fact was true. This created a simple temporal logic, with a visualization of "collections of facts" as stacked "layers" of world-lines.
This can easily be generalized to world-lines of "variables", "data", "objects" etc. From the individual point of view "values" are replaced by "histories" of values, and from the system point of view the whole system is represented by its stable state at each time the system is between computations. Simula later used a weaker, but useful version of this.
I should also mention Christopher Strachey -- a great fan of Lisp and McCarthy -- who realized that many kinds of programming could be unified and also be made safer by always using "old" values (from the previous frame) to make new values, which are installed in the new frame. He realized this by looking at how clean "tail recursion" was in Lisp, and then saw that it could be written much more understandably as a kind of loop involving what looked like assignment statements, but in which the right-hand side took values from time t and the variables assigned into existed in time t+1 (and only one such assignment could be made). This unified functional programming and "imperative-like" programming via simulating time as well as state.
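A small illustration of Strachey's rule in ordinary C -- the example (a trivial 1-D averaging step) is mine, not his: right-hand sides read only the time-t frame, assignments go only into the time-(t+1) frame, each cell is assigned once, and the "clock" advances by swapping frames:

    /* Right-hand sides read only the old (time t) frame; assignments go
     * only into the new (time t+1) frame; each cell is assigned once. */
    #include <stdio.h>

    #define N 8

    static void step(const double old_frame[N], double new_frame[N]) {
        for (int i = 0; i < N; i++) {
            double left  = old_frame[(i + N - 1) % N];   /* values at time t */
            double right = old_frame[(i + 1) % N];
            new_frame[i] = (left + old_frame[i] + right) / 3.0;  /* time t+1 */
        }
    }

    int main(void) {
        double a[N] = { 1, 0, 0, 0, 0, 0, 0, 0 }, b[N];
        double *now = a, *next = b;
        for (int t = 0; t < 4; t++) {
            step(now, next);                             /* purely functional transition */
            double *tmp = now; now = next; next = tmp;   /* the "clock" advances */
        }
        for (int i = 0; i < N; i++) printf("%.3f ", now[i]);
        printf("\n");
        return 0;
    }

Because the old frame is never written, there are no race conditions to reason about, and keeping the old frames around gives you the history of versions described below.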
And let me just mention the programming language Lucid, by Ashcroft and Wadge, which extended many of Strachey's ideas ...
It's also worth looking at "atomic transactions" on databases as a very similar idea with "coarse grain". Nothing ever gets smashed; instead, things are organized so that new versions are created in a non-destructive way without race conditions. There is a history of versions.
The key notion here is that "time is a good idea" -- we want it, and we want to deal with it in safe and reasonable ways -- and most if not all of those ways can be purely functional transitions between sequences of stable world-line states.
The just computed stable state is very useful. It will never be changed again -- so it represents a "version" of the system simulation -- and it can be safely used as value sources for the functional transitions to the next stable state. It can also be used as sources for creating visualizations of the world at that instant. The history can be used for debugging, undos, roll-backs, etc.
In this model -- again partly from McCarthy, Strachey, Simula, etc., -- "time doesn't exist between stable states": the "clock" only advances when each new state is completed. The CPU itself doesn't act as a clock as far as programs are concerned.
This gives rise to a very simple way to do deterministic relationships that has an intrinsic and clean model of time.
For a variety of reasons -- none of them very good -- this way of being safe lost out in the 60s in favor of allowing race conditions in imperative programming and then trying to protect against them using terrible semaphores, etc., which can lead to lockups.
I've mentioned a little about my sequence of thoughts about objects. At some point, anyone interested in messaging between objects who knew about Lisp would have to be drawn to "apply" and to notice that a kind of object (a lambda "thing", which could be a closure) was bound to parameters (which kind of looked like a message). This got deeper if one was aware of how Lisp 1.5 had been implemented with the possibility of late-bound parameter evaluation -- FEXPRs rather than EXPRs -- the unevaluated expressions could be passed as parameters and evaled later. This allowed the ungainly "special forms" (like the conditional) to be dispensed with; they could be written as a kind of vanilla lazy function.
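A loose analogy in C rather than Lisp, to make the "vanilla lazy function" point concrete: if the branches are passed unevaluated, as thunks, the conditional stops being a special form and becomes an ordinary function, which is roughly the freedom FEXPRs gave you:

    /* Loose C analogy (not Lisp): pass the branches unevaluated, as
     * thunks, and "if" becomes an ordinary function rather than a
     * special form. */
    #include <stdio.h>

    typedef int (*thunk)(void);

    /* An ordinary function: evaluates only the branch it needs. */
    static int my_if(int condition, thunk then_branch, thunk else_branch) {
        return condition ? then_branch() : else_branch();
    }

    static int cheap(void)     { return 1; }
    static int expensive(void) { puts("evaluating the expensive branch"); return -1; }

    int main(void) {
        /* The untaken branch is never called, so its message never prints. */
        printf("%d\n", my_if(1, cheap, expensive));
        return 0;
    }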
By using the temporal modeling mentioned above, one could loosen the "gears" of "eval-apply" and get functional relationships between temporal layers via safe messaging.
So, because I've always liked the "simulation perspective" on computing, I think of "objects" and "functions" as being complementary ideas and not at odds at all. (I have many other motivations on the side, including always wondering what a good language for children should be epistemologically ... but that's another story.)
So, over the years I've played with many things that claim to be "declarative", and here's why I now shy away from them like the plague. There's no such thing as "declarative". No matter what you type into the computer, at some point it's going to turn into instructions that do the thing you want done. Trying to create a declarative language is a way of making it extraordinarily opaque as to what the machine is actually going to do. It looks great in four lines, but it crashes and burns on any real-sized problem, because you inevitably hit the following sequence:
1. I encounter a problem; a performance issue or a bug.
2. I cannot practically proceed past this point, because everything I might need to figure out what is going on has been "helpfully" obscured from me.
Yes, you can still thrash and flail, but this hardly constitutes a "fix" to programming. You simply cannot help but create an abstraction that not only leaks like a sieve, but is actually multiple leaky sieves layered on top of each other in opaque ways. (And letting us see in is, in its own way, a failure case too, with these goals.)
Part of what I like about Haskell is that it helps bridge the gap, but doesn't actually go too far. A map call is still ultimately an instruction to the machine. It's not quite the same type of instruction you give in C or C++, what with it being deferred until called for (lazy), etc., but it's still an instruction, and it can be followed down to the machine if you really need to, with only marginally more work than in any other "normal" language. (It may be a bit bizarre to follow it down all the way, but hardly more so than C++ in its own way.)
Contrast this to SQL, which is declarative, and you never have to worry about what the database is doing to answer your question. Except it never works that way and you inevitably must actually sit there and learn how indexes work and how queries are parsed and how the optimizer works to a fairly deep level and then sit there on every interesting query and work out which synonymous query will tickle the optimizer into working properly except that you actually can't do that and you end up having to turn to weird annotated comments in the query specific to your database and then you still end up having to break the query into three pieces and manually gluing them together in the client code.
And I don't even care to guess how many man-millennia have been poured into that declarative language trying to make it go zoom on a subproblem much simpler than general-purpose computing. (Well, except inasmuch as they've more or less grown to encompass that over the years, but it's still at least meant to be a query language.)
So, if you think you can fix that problem, have fun and I wish you the very best of luck, no sarcasm. This is the problem I've seen with the previous attempts to go down this route before, and I feed this back in the spirit of helping you refine your thoughts rather than yelling at you to stop.