QuickJS is quite a pleasure to work with, for a variety of reasons. Not using setjmp/longjmp and having nice internal resource-leak debugging mechanisms are just a few of them. I've already replaced most of my duktape uses with QuickJS, and I'm quite happy with the result.
He's in his 50s, keep in mind. Most people of his productivity get shoved into management long before his age. He's very good, but it's not like he did ffmpeg last week and QuickJS this week.
Some people are just born able to work all day, all week; if you are one of those people, please don't throw it away (I am not one of them). I think I have the knowledge to do most of his projects, e.g. I have the mathematics and RF engineering to do (say) the LTE stuff, but there is absolutely no way in hell I could ever sit down and write it (I've done my own RF projects, but I burn out so quickly).
The problem is that management pays so much better than contributing code. Society has collectively decided that management is worth more economically than individual contributors.
It doesn't have to be that way, and indeed many orgs are moving away from that. Where I work, individual contributors who are talented and want to put in the work can attain a level comparable to a Director or Senior Director, and often even make quite a bit more money than their level-peers on the management track.
There's still a cap, of course; you can't remain an IC and have the level equivalent of a VP or C-level. The theory there is that the higher you climb the IC ladder, the more difficult it is to have increasing levels of impact without leading groups of people larger than just yourself. (And indeed, our higher-level IC positions often involve some amount of non-management leadership outside of heads-down coding all day.)
I'm not 100% sure I agree with the reasoning behind the IC ceiling, but things can be awesome for you outside of management, with plenty of career and salary growth opportunities, if you find a company that understands and values individual contribution. I wouldn't say this is a lot of companies, likely not even a majority, but it's a number that seems to be growing, at least in technical fields.
Yes, where I'm at we also have a similar career track, introduced just a year or so back. For the most part it's a welcome change and a recognition that an IC can be just as valuable as a manager, if not more so.
However, for extremely large organizations, despite my personal desires to see an equivalent to a VP/exec level for an IC, I just don’t see anyone being interested in that.
The rationale I’ll probably hear for why it will never happen is something like “execs are responsible for so many staff’s eventual success or failure, that there is no way an IC can compare to that level of impact”.
If I don my tinfoil hat though, the conspiracy theorist in me thinks that these sorts of changes to an IC’s career path are ultimately made possible by execs themselves, and that they would not be able to compete with someone of equal stature who has spent 99% of their time thinking about hard engineering problems. A certain fear of appearing mediocre or protecting your rank perhaps.
I’d also say that my assumptions above are probably meaningless in a startup or a company of less than 100 people; I’ve seen plenty of postings looking for a magical “co-founder/CTO/principal engineer” hybrid, which I take to mean a really good engineer who is also responsible for some part of executive leadership. It’s not an exact comparison /shrug.
Managers are everything... by "manager" I mean leaders, not paperwork-pushers/monitors.
I firmly believe that 10 geniuses will go nowhere without direction 8 times out of 10; for non-geniuses it will be 9.99 out of 10. Managers are the people who turn human potential into outcomes.
I would agree with you except for the legions of god-awful managers I have encountered. A good manager is a huge asset for a team, but a bad (or even mediocre) manager is often worse than having none at all.
That's what I meant too: a good-enough manager is of immense value, the others are worth nothing, and they're probably only there because humans/societies somehow recognized how crucial managers are and applied "put a manager here" everywhere without care.
But yeah you're right a bad one will turn a team comatose.
It's not just the output of their team; compared to people like Bellard (and John Carmack, Jeff Dean, Peter Norvig, ...), 99.99% of people are mediocre. Increasing the output of 100 mediocre people by 20%-30% is worth more than contributing one person's output, even if it's 10x or 20x.
And I would argue the difference between a mediocre manager and a good manager is more than that.
You seem to be assuming that there is an equivalence between good programmers and good managers?
I wish I lived in this alternative reality you describe. My career has been littered with cases where non-technical managers (essentially clerks) get paid more than I do to sit around with others and talk about what should be done while I (and colleagues) do all the doing. I end up doing a LOT of "managing" from behind the lines in all these cases.
Frankly, the cases where "a good manager has increased the output of their team" have been such a small observed sample that I wouldn't be able to claim there was even a correlation between the two, much less causality; I'd just chalk it up to something like "the water" or "the lighting" if I saw it happening with any regularity.
> Because good managers are hard to come by so you have to give some sort of incentive to attract and retain the good ones.
That's a nice economics just-so story. But do people think that the median manager is effective? I'd say less than half of my managers have been effective. Some of them even got promoted.
Exactly. In my experience of open source so far, the biggest difference between a good project and a great project is management, even something as simple as just keeping track of PRs.
If you make a PR and it gets left to die after a few days, who cares? If someone keeps an eye on it and tells you what they're looking for with a smile, you're going to try and get it done.
Actually it worked out pretty good here. It keeps my head free of crap so I can concentrate on making cool stuff in my own time with my own choice of tools.
I would have thought it was an asshole comment for implying that having kids makes you less productive. But having kids DOES make you less productive (at least when they are young) or at the very least removes great swaths of time that you would have had available otherwise, if you involve yourself with their lives at all.
Having kids or not is intensely personal and everyone has to make that decision themselves; not having them is extremely valid.
I wish they had talked a little about his work on the Amiga. He wrote a full-color Mac OS emulator that actually multitasked with AmigaOS rather than taking over the whole machine. It also ran Mac programs faster than the fastest Mac hardware of the day. Really impressive work on such low-end hardware.
Agreed. 10 years ago, I could still pretend I'd be able to come close to his productivity if I "just set my mind to it." Today (being older, slower, and having a family), I can only be humble.
Many people don't have the misconception that they could compete athletically at an Olympic level. And yet, somehow, a lot of people mistakenly believe they could perform intellectually at that same high level!
I think you run into the barrier much faster in something like sports because the gap is so easily measurable. E.g. it just takes one 100m sprint to see how slow you are.
Whereas with intellectual work it's often hard to assess the gap, not least because the further you are from closing it, the less understanding you'll tend to have of how hard the remaining parts are.
I'm not convinced. Our brains are well known for their plasticity. Our musculoskeletal and circulatory systems: not so much. The brain is capable of rewiring itself after some quite traumatic injuries; of course, the people who go through rehabilitation after a traumatic brain injury do so with the focus of someone whose life depends on it.
I would suggest that most people reading this are capable of Bellard level productivity, if their life depended on it.
Maybe, but as the kids get older you get the time back, and you're more efficient with it as a result of those early years... I think at least I’m faster and smarter now than ten years ago... or maybe I’m losing my eyesight and my hearing, or maybe it’s true :)
He really has done so much incredible work. I'd love a Fabrice Bellard interview/podcast/tech talk but found none when searching. If anyone has a link, please post it!
He also wrote JSLinux[1]!! I was just playing with the online shell and it's mind blowing how the browser can now run a fully functioning OS like Linux within it.
There's also his own emacs clone (https://bellard.org/qemacs). I don’t think he uses it anymore, but I always found that the best side effect of concise/elegant (minimal) code, or perhaps goals/scope, is generally better performance. (I think this is also partly because a single author understands a lot more of a ‘product’ than a team does.)
Also wanna throw Mike Pall & Arthur Whitney in there as honourable mentions (productive gods/100x).
Currently composed of 85,624 lines of C code, mostly in quickjs.c (53,575 lines in that one file alone), but also with a couple other large ones.
I can't say I understand the reason for such massive files. Surely it would be easier to maintain if it was split into a few well-defined modules?
In addition to the maintenance concerns, a JavaScript engine has quite a few parts that could be used as individual components. One good example of this is node's http-parser[1] that was extracted to a self-contained C file with associated headers and is a pleasure to use.
Rather than trying to figure out which of dozens of files has what you're looking for, you can just search one or two large ones.
On the other hand, I hate finding a project that looks useful, but then the code is scattered across many files which are barely one screen long, or worse, also spread throughout different deeply nested directories. Regardless of whether the organisation makes sense, navigating directory trees is annoying.
If the LOC is similar in size, I don't see why having them in different files vs. the same file makes any difference at all.
Also, your IDE should be able to navigate you to definitions and usages etc. If it doesn't, it's not a good IDE. So the problem of understanding code reduces to understanding the abstract structure, not how that structure is represented on disk.
Searching in one file is easy in any text editor. Searching across files (and directories) is more difficult.
I don't use an IDE. In fact, I'd say it's a problem if code requires an IDE in order to work on it effectively (Enterprise Java is the most prominent example of this.) I'm nowhere near Bellard level, but would consider myself above average, and have observed that some of the most productive programmers don't either --- and their code is far easier to understand than e.g. the mostly-autogenerated, split-into-many-tiny-files projects created by those far less skilled.
To suggest that not using an IDE is a sign you're a better developer makes as much sense as suggesting that not using an electric screwdriver to fit things is a sign of a better handyman.
To me it sounds like he/she was saying that mediocre programmers have to rely on IDE features and that various excellent programmers realized they can write better code without them. Before that, the statement was that it is bad for a project if an IDE is needed to navigate it.
Maybe I am steelmanning too much, but it could have simply been meant to indicate the existence of one trend that moves people toward IDEs and another that moves people away from them.
To suggest that GP could have written better software had he used an IDE is to suggest that Michelangelo could have painted a better Sistine Chapel fresco had he used an electric paint sprayer.
Or, to bring it back within realistically achievable levels of talent, that Bob Ross's trees would've been happier if he used one.
No one suggested OP could do a better job had he used an IDE (and I have no idea if he did). The claim was that NOT using an IDE somehow signals you're a better developer, which is simply absurd.
So your comparison is wrong in this aspect, as the correct comparison would be "Painters who do not use modern tools made specifically to make painting easier are probably better painters. Michelangelo was the greatest master and he did not use modern tools, after all". Would you agree with that?
Do IDEs really do that type of refactoring? As a long-time Emacs user I long ago forgot what a good IDE is capable of.
I know of automatic renaming and automatically moving selected parts to another file, but fully automatic refactoring into multiple modules is something I haven’t seen.
How does this tackle circular dependencies etc.? I guess the IDE must parse the code, generate the syntax tree, populate symbol tables, etc., and then perform the refactoring.
A side point: I sometimes dream of writing an emacs module which would let me write in a single file (for easy search and edit reasons), and then cut it into several modules where I mark them with =======xyz.h========= etc.
I also want to use this in each repo, where it automatically does this. I’ll hopefully stop procrastinating and write the thing one day :).
To end with a Game of Thrones analogy: hearing about the latest developments, I sometimes feel that we editor/linux/bsd/cli users are like Wildlings beyond the Wall. We live in harsher conditions, but are amazed when we see large cathedrals and castles being built inside the wall. Our life is more free but also burdensome.
Yes. Modern IDEs are incredibly productive because they have a high-level understanding of your program, going beyond just symbols in the code and way above just text processing.
Try using Visual Studio or Jetbrains IDEs and you'll see the difference.
Is there a term for this type of refactor? Often I might end up with 2 exported classes in one file, both being used across other files. I then want to separate them into 2 files and update the imports. I feel like my IDE should be able to do this, I just can’t seem to google the right term.
> Rather than trying to figure out which of dozens of files has what you're looking for, you can just search one or two large ones.
Why is this a good thing? The same amount of code and same number of results need to be scanned either way. With many files, you at least have the file name to give you some small amount of context without needing to read the code.
Yeah. This seems like something that the industry decided was a bad practice many years ago.
My best guess is that this is more of a fun project than anything and spending too much time on such concerns would detract from the fun. I do the same with my side projects.
One advantage of using multiple files is that you can have functions that are private to the file (static functions). So when looking at a specific file you know it can use either functions in this specific file or functions #include-d from other files. But it cannot use the private functions of other non-included files.
This reduces the cognitive load and gives the compiler a way to enforce separation.
In short - files in C can give you namespaces -- which are sometimes a useful way to organize code.
In Pascal you can define functions inside other functions. So if, as is often the case, the file consists of one public function that calls a number of private ones this can be enforced without needing to put them in separate files.
In newer languages like C# you can do the same. Probably works in lots of other languages too, see Rosetta Code: Nested Function:
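JavaScript itself has nested functions too, so the same "private helper" discipline that C's file-level `static` provides can be sketched directly in the language QuickJS implements (function names here are made up purely for illustration):

```javascript
// Sketch: nesting functions to get the same "private helper" effect
// that C's file-level `static` provides. `normalize` is invisible
// outside makeGreeting, just as a static function is invisible
// outside its translation unit.
function makeGreeting(name) {
  // Private helper: reachable only from within makeGreeting.
  function normalize(s) {
    return s.trim().toLowerCase();
  }
  return "hello, " + normalize(name);
}

// Only makeGreeting is visible at this scope; normalize is not.
```

The compiler/engine enforces the separation the same way the C linker does: a caller outside the enclosing function simply cannot name the helper.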
There's plenty of workflows that work just fine with large files. As long as you're segmenting the code somehow and have a quick way to navigate to a specific section, it doesn't really make a difference how the code is laid out.
It could be that his workflow is to always navigate by searching function names. Then it doesn't really matter if the project is 1 file or 1000.
It would probably be easier to read, but as a one-man project, my guess is that readability is less of a concern than for the average code base. Splitting code up in files requires some extra work and makes refactoring harder as you have to spend a lot of effort on trivial tasks such as moving dependencies between files and updating includes all over the place.
Personally I tend to want to split up code written by others so I can focus on the parts that go together without having to read through irrelevant parts, but I can easily navigate larger files/classes I've authored myself. I mainly split it up for the sake of future readers.
I do agree that it’s easy for a single programmer to not think about future readability for others all that much, but in terms of refactoring code and straightening out file dependencies, an IDE or a code editor like VSCode can probably do it for you.
Mate. You didn’t have to do anything. You got to do whatever you wanted and you wanted “smarter coworkers” who didn’t use PHP.
But I stand by what I said now and nearly a decade ago, and now try to be blunt: Fabrice Bellard didn’t organise quickjs this way because he isn’t as smart as you. He did it because it makes things easier for him to write quickjs and qemu and tcc and ffmpeg and all sorts of other stuff used by all sorts of other people. And if you ever figure out how to learn from watching people who can do things you cannot, I think you could be an amazing programmer.
Oh, I would never compare myself to him and can only learn (a ton) from someone like him who can indeed do many things I cannot. Just something I found curious, I'm sure he has a good reason for it.
HTTP is not part of a JS engine, because it has nothing to do with JS.
I kinda like large files (vs splitting the code), because they are easier to navigate in vim. I have no idea about why the choice was made for QuickJS, aside from ease of inclusion into other projects, mentioned in the docs.
The original comment was about separating components out for reuse in other projects. He wasn’t wondering why http wasn’t part of the js engine, that was just the example he chose
I don't see how any part of quickjs engine itself can be used in other projects. It all looks highly tailored to quickjs, aside from things that are already separated.
Lots of parts can be used separately! The parser for code analysis or refactoring tools, the JIT to accelerate programs (anything state machine based for example is pretty much like a VM), the garbage collector to embed in other programs (especially interpreters). I mean… these are just the most obvious, there are many more use cases.
I’ve always found that lots of definitions in one file is easier than splitting code over 100 files. It reduces reliance on complex, and often slow, tooling for analyzing directory structures, and most development tools support opening the same file twice at different offsets.
> Can compile Javascript sources to executables with no external dependency.
Is QuickJS a viable way to write command-line apps in JavaScript? In particular, does it have enough of a standard library to work with, or would it be a struggle because (I assume) it can’t use packages from NPM?
I know there’s the alternative of bundling Node.js and V8 into an executable, but the resulting binaries are large - it feels like the command-line equivalent of using Electron.
It would be enough for a good number of CLI tools, I'd think. But one big issue is going to be that every library on NPM is hardcoded to use Node's modules (e.g. fs) so you're not going to be able to use any external modules at all.
OK, to be clearer: almost all the libraries published to NPM that would use the functions in QuickJS’s std module instead use Node modules and there is no compatibility layer.
So yes, there are plenty of pure JS modules. But if you want to read a file from a disk you’re going to end up with a library that assumes you’re using Node.
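To make the gap concrete, here's a hedged sketch of what a tiny compatibility shim might look like; `readTextFile` is a hypothetical helper, the Node branch uses the standard `fs` module, and the QuickJS branch assumes its documented `std.loadFile`:

```javascript
// Hypothetical shim papering over the Node/QuickJS split described
// above. In Node, `require` exists and fs is the standard library;
// in QuickJS, std.loadFile() (from `import * as std from "std"`)
// reads a whole file into a string, returning null on error.
function readTextFile(path) {
  if (typeof require === "function") {
    return require("fs").readFileSync(path, "utf8"); // Node.js branch
  }
  return std.loadFile(path); // QuickJS branch (std assumed imported)
}
```

Libraries on NPM don't ship anything like this today, which is why pure-computation packages port over fine but anything touching the filesystem or network does not.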
QuickJS is a really, really interesting engine because it's only ~200KB. Fits a lot of cases where you want to add scripting ability but without bulking out size unnecessarily.
Unfortunately because it lacks stuff like JIT it'll never rival the likes of V8 in performance. But in terms of bang for buck it's unbeatable.
I sort of wonder if making browser JS engines slower would improve the web: fast JS engines enable complex JS apps, but it's far from clear that these apps are better for UX, especially compared to improving the built-in functionality of HTML.
The basic points: they watch functions for "hotness" (how often they are run). Then, for any function that is super hot, they'll see if it's being called consistently (i.e. always receives two numbers as its arguments). If so, they'll make a streamlined version of the JS code which only checks the arguments and then skips pretty much all the other checks. By doing this, it makes JS significantly faster.
If you're trying to compare and contrast v8 to quickjs, this is the first thing that comes to mind for me as to what they may be doing differently.
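As an illustration of the "consistent arguments" idea, here is a sketch in plain JS (the speedup itself happens inside the engine and isn't observable from the code; names are made up for illustration):

```javascript
// A hot function with a monomorphic call site: every call in the loop
// passes two numbers, so a JIT like V8's can speculatively specialize
// it to raw machine arithmetic after checking argument types once.
function add(a, b) {
  return a + b;
}

let monoSum = 0;
for (let i = 0; i < 1000; i++) {
  monoSum += add(i, 1); // always (number, number): stays specialized
}

// A polymorphic call: passing strings forces the engine to bail out
// ("deoptimize") to the generic, slower version of the same function.
const polySum = add("x", "y");
```

An interpreter like QuickJS runs both calls through the same generic path every time, which is part of why it is smaller but slower.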
Very cool. Wouldn't it be cool if someone wrote bindings for Linux system calls and provided an event loop for this engine - then you could write system software in JavaScript!
bash is a domain-specific language. It has many features to easily launch programs and set up stream redirects to files or to other programs. JavaScript is a general-purpose programming language. It might be a better choice for general-purpose programs, but when you need to glue some programs together, nothing comes close to shell scripting languages.
People typically write small, short scripts in bash, not web applications. The same goes for JavaScript, but in reverse. They were designed for different things.
There's nothing about javascript that makes it particularly suited for "web applications". This doesn't seem any more ridiculous than, say, using python or perl.
Being event-driven and simple and hot reloadable and forgiving does make it well suited for the task at hand.
You should really think about why javascript is the de facto standard for web scripting. Other alternatives have appeared but even the likes of Google, which control the entire web stack and can pretty much dictate what the world uses, decided against it.
I guess, but I think JS is too far away from C to be worth it. This is saying a lot coming from me, but I would rather write python - at least longs would be the right length (if you catch my drift)
> Couldn't javascript be just another language to replace bash?
Technically the answer should be "yes", but given the event-driven nature of javascript and a shell script's very specialized design goals (launch processes, control the runtime environment, provide a workable REPL, etc) then it wouldn't be an improvement over any of the current shell scripting languages.
As a general-purpose scripting language... that's an entirely different matter, and the answer is definitely yes. In fact, Node.js and Deno already do just that.
> An online demonstration of the QuickJS engine with its mathematical extensions is available at numcalc.com. It was compiled from C to WASM/asm.js with Emscripten.
It looks good (there are even some good features that I do not see in other implementations), although there are some things which may help:
- Support ISO-8859-1 encoding (true ISO-8859-1 encoding, not Windows-1252) in addition to UTF-8, for all of the functions that can read text from files and write text to files, to avoid having to implement it by yourself one byte at a time. This is useful when you want text which is mostly ASCII, but which may contain extended characters that aren't Unicode. (There is no need to support any other character sets or encodings, though.)
- Document the C API better. Currently, the documentation isn't very good.
- Implement WTF-8 (if it isn't already), so that arbitrary JavaScript strings (which are strings of 16-bit characters) can be represented as UTF-8 without losing data. Often the text will be ASCII anyway, and you will want to use ordinary C strings.
- Add an API function to read/write strings of 16-bit characters. (This is probably unnecessary for property names, although it is helpful for strings.)
- Add an option to disable use of Unicode tables, in case your program does not use them. (UTF-8, String.prototype.codePointAt, etc would still work regardless, since they don't need Unicode tables to work. However, it would prevent Unicode properties from being used in regular expressions, remove String.prototype.normalize, and case conversion would be limited to ASCII (or perhaps ISO-8859-1) only.)
Additional optional extensions may be wanted, even if not enabled or even included in the executable by default (due to complexity), such as:
- PCRE.
- Option to disable automatic semicolon insertion.
- A "goto" command; you cannot jump into blocks, past declarations at the same level (in either direction), or out of functions. You can otherwise jump forward and backward within a block (including past nested blocks) or out of a block.
- Possibility for a function called by one JavaScript program to suspend that program while executing a different JavaScript program (which may possibly share objects with the first one), and later resume execution.
When the master race led by Fabrice Bellard takes over... I just hope I'm shown some mercy. Because I clearly have nothing else to offer the new world. Holy cow, this person is nothing short of amazing.
As for whether this implements enough JavaScript features for websites to run, https://test262.report/ looks really promising! You would have to hook it up to the DOM and network and all sorts of other APIs normally provided by browsers, though.
One reason you might not want to build a general-use browser on QuickJS is performance. QuickJS is one of the fastest JS interpreters in its weight class, but engines like V8 achieve much better runtime speed by using fifty times the code to do all kinds of complicated just-in-time optimizations. Websites built in frameworks like React are often bottlenecked by JS engine performance (spending 100ms or more just doing tons and tons of object instantiations and function calls and stuff), so a QuickJS browser probably wouldn't provide a good experience for those.
I believe it could be used. The biggest problem is that all of the DOM classes would have to be replicated, and the API surface is absolutely huge, so that would take a large amount of work.
These individual technologies (DOM/HTML parser, JS runtime, layout, etc.) really aren't so bad. It's about a weekend's worth of work to do one.
What makes a "modern" browser is all of these technologies together not just quirk free, but matching quirks with whatever browser is popular. Even this won't really be enough since most people also depend on a bunch of online services provided by browser vendors at this point (push, bookmark syncing, password management etc.)
How does this compare with Duktape? I can post on some JS-requiring sites with edbrowse just fine, even allowing me to show some shortened long comments with JS.
So it's 50X smaller and 30X slower than Google's V8 engine - cool, but not that impressive. I would still use V8 for any decision, because 30MB is really not such a big deal anymore. SPEED is what matters in 2020, not disk size!
Try to make a JS engine that is 50X faster than Google's V8, and you'll become a centimillionaire! :D <3 Would love to see that.
Deno lets you directly run TypeScript programs, and they discovered that there is no way to run the TypeScript compiler with low latency, as V8 start-up times were always huge. (I now wonder if a TypeScript daemon in the background could be a solution...)