I'm skeptical too (I'm the author of Rome). I know it can be at least performance neutral compared to existing JavaScript tools, which is the baseline for most people. However, I do think that since Rome is an enclosed system, without external dependencies and with pretty good TS typing, there's some future flexibility to use another wasm language that other JS tools don't have.
I was thinking tools would start replacing their typescript code with assemblyscript and build wasm modules for hot paths. Do you have any similar thoughts?
For a compiler, it's probably much more valuable to have parallelism than to cherry-pick slow code. You'd have to pick a pretty big hot path for the data conversion to be worth it. JS is pretty darn fast sequentially thanks to the ridiculous effort spent on its VM implementations, and there is next to zero number-crunching in a compiler, which is the main reason people have been cherry-picking with WASM on the web so far. We're talking pretty minimal speedups, compared to giving an embarrassingly parallel problem the threading it deserves.
Having written codebases that take a couple of minutes to compile, it seems like a bit of a waste to write a whole new full-stack-no-dependencies compiler in a language that can't be parallelised.
You can usually get behind a self-hosting compiler for language evolution purposes, but for JS there is zero value in having yet another project to try it out on. For a transpiler to work on itself, maybe, but Rome is a CLI tool, not a web bundle, so it will never need to exercise the most demanding features of a transpiler/bundler: JSX, minification, code splitting, async loading, legacy browser support, etc. You can innovate on those things in any other language just as easily.
At the end of the day, the only advantage is that people who want to contribute will already know the language. Is that important enough to deal with another ten years of slow JS transpilation, forcing people to run huge, un-optimised code blobs in development (real life, very annoying problem), and 800MB node processes? I don't see that changing in JS-world for a long time.
(This is snarky, but you might actually be better off without importing the unfortunate churn habits of the JS community.)
This is very true. C++ developers have been using distributed builds (where when you hit "Build" in VS or run "make", it splits your build up and sends different source modules to different build workers on your local network - typically other developer machines or CI servers, then links the results together at the end), for decades now; I remember testing one back in 2005 or so and it worked pretty well.
Especially for threads, how well does this mesh with libraries? I'd assume that most of these are either meant to run in a single-threaded manner or async.
Using those languages comes at a tradeoff: the community as a whole isn't likely to know them or want to use them.
For example, I'm a pretty prolific TypeScript developer these days. And I do know Rust and C++ and Java and a bunch of other languages, but I'm unlikely to contribute to JavaScript-ecosystem stuff written in them because the cognitive overhead is too high--the context switching kills me. And while I do know Golang, having to write it makes me want to chuck my computer in a lake, and I'm within epsilon of never contributing to a tool written in it.
You can swap in whatever languages in whatever slots in the above that you want, and you'll probably cover some chunk of the community. The intersection of all of them is JavaScript (and, for this project, TypeScript). I think that's wise.
If you need to optimize certain parts of your runtime or your toolchain once it's correct and does what it's supposed to and you can call the solution set understood, that's one thing. But until it is, defaulting to something that isn't self-hosting is bad for your prospects of community building.
> And while I do know Golang, having to write it makes me want to chuck my computer in a lake, and I'm within epsilon of never contributing to a tool written in it.
The best way I can describe the way that Golang presents itself to me--and this is probably that from-Google ethos, in that they're writing a language for the legion of post-college Googlers--is "you must write stupid code because we are afraid of you being too clever for yourself".
And I would never say that that is an indefensible viewpoint. If I was selecting tools for people who are not me, and people who I don't think I can effectively train and coach and lead into being better at what they're ostensibly here to do, I might even advocate for it. But I am not afraid of being too clever for myself. In production capacities, you should be prevented from being too clever by the taste you should have developed as a practitioner of the craft of software development (and your code reviewers should back you up if you don't have that yet). And in exploratory capacities, being too clever should be encouraged--because that's how you become good at your job in the first place.
I believe that, instead of throwing up our hands and just stapling a high-cut filter on programming as a craft, one can and should build better systems that actually use the computer to do the tedious tasks involved in making sure that you've actually expressed solutions to problems in a clear and reliable way. Tools that do exactly what I described do exist; along very different axes, both TypeScript (structurally typed and with really flexible and easily expressed generics) and Rust (lower-level borrow-checker B&D but with a raft of niceties to soften the blow; "good C++ with a great IDE" ends up here too, though there's a lot of bad C++ too) scratch exactly that itch for me.
Those tools (and others, in greater or lesser measure) reward me for thinking deeply about a problem. My reward is being saddled with less and clearer code to deal with in the future. And that's amazing! On the other hand, I have felt, each and every time I have been stuck using it, that by punting every difficult question to the programmer and leaving a bunch of e-janitor stuff to handle--bad error handling, really embracing its actually-just-Java-1.4 self by not supporting generics, etc.--Golang just wastes my time. I end up with more code that's harder to read and is better able to conceal errors both of omission and commission. Golang punts the solving of the actually hard problems of software craftsmanship to the programmer, and I resent that. It should be helping me, not fucking off with the job half-done.
Don't get me wrong: I don't begrudge others its use, except when I'm stuck using it because of previous choices made, but it's the only programming language in common use that makes me feel bad to use it. (Which is to say that yeah, there are worse, but I don't exactly have to deal with them.) To me, the repelling effect is strong enough that it really is one of two languages (hi, PHP) on my under-no-circumstances list.
I don't mind simple languages. I'll write code in Kotlin, which is a pretty mild hat on top of Java.
Golang is not a "simple language", though. C# is a simple language, and while it sometimes frustrates me its decisions make sense. Golang is a simplistic language. There is value in being simple; there is little value in being simplistic.
This is true, but for a compiler project, I think compiler expertise is a much bigger gating factor than language expertise. If you want to maximize effective contribution, you're probably better off picking a language that lots of compiler hackers know.
On the topic of other languages taking over these tools, it fundamentally comes down to this: at the end of the day, these JavaScript tools (beyond just npm/yarn, which I think could be written in any language much more easily), like Webpack, Babel, or Rollup, create very advanced, hard-to-replicate ASTs. Even Microsoft hasn't bothered to port TypeScript compilation to, say, C# or F#, and I think largely because it's a lot more complex to build the foundations for those languages to understand JavaScript at that level.
You could argue (and I can see the validity of it; see the Closure Compiler for a counterexample) that it can just as well be done, but it seems far easier to implement these kinds of tools in JavaScript.
It's much better to be self-enclosed than just to be faster by a constant factor.
Just like most Emacs functionality is written in Emacs Lisp, and most Elixir functionality is written in Elixir. It's more consistent for open source projects, and people are more familiar and motivated when they build with the same language they consume.
It's refreshing to see a JavaScript project with a mature tone and no emoji bullet lists in its readme. I'm very eager for JS culture to mature along with the tooling, and to prioritize content over sugar.
The Readme doesn't take itself too seriously. It even says the name of the project started as a joke, and pokes fun at its logo. I have a suspicion that emojis just make you upset.
I use emoji when texting sometimes; I have nothing against them. I used emoji as a symbol for excessive, distracting behavior that feels similar to memes – which to me have an almost mob/cult-like quality.
I’m not sure what other examples to use to describe the tone I’m referring to without offending anyone. But, I’d personally be really interested in an anthropologist writing about the internet culture of JS development. Edit: hackernews unsurprisingly did not let me use an emoji at the end there.
I was thinking about this in the context of Tesla and their competitors the other day. Tesla have some little hints of good humour throughout their cars/marketing which endear them to me. Everything is way too serious these days, and injecting a little fun - with the disclaimer that it doesn't get in the way - is a great thing.
It is refreshing though seeing somebody trying to tackle some of my gripes about JS from a clean slate. I get so tired of hearing of x tool just to use some CSS lib or JS lib, or 17 tools to make development better and they all fight each other.
It has no external dependencies but supports compiling TypeScript, Flow, etc. Does this mean compilers for those will be reimplemented inside Rome? Not really looking forward to build-tool-specific bugs with those languages.
It does. And it's a risk not keeping up with their pace of development. Supporting those "compilers" just means supporting their syntax, and both fortunately already have very good compliance test suites, because other tools like Babel reimplement their syntax anyway.
Interesting. Do you mean that Rome won’t perform type-checking in TypeScript/Flow, it just strips their type annotations (like their respective Babel plugins do today)? If so, is the idea that people might continue to use the tsc/flow binaries in parallel with Rome (e.g. if they want to validate types in a commit hook, or for in-editor linter-style warnings)?
I've been really looking forward to this source release since reading your tweets about it. Sorry it got posted to HN before you wanted it to, but thanks for making the code public at this stage.
> is the idea that people might continue to use the tsc/flow binaries in parallel with Rome
I think so, in the same way that people use both Babel and TypeScript (or Flow) today. TypeScript/Flow check the types, while Babel simply strips them out when compiling the code.
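The split described above is easy to see in a small snippet. The annotations below exist only at compile time, so a strip-only tool can delete them without understanding them; only a real type-checker like tsc would reject a bad call:

```typescript
// Type annotations are compile-time only. A strip-only tool (Babel today,
// and per this thread Rome) simply deletes them; only tsc/flow would flag
// a call like add("1", 2) as an error.
function add(a: number, b: number): number {
  return a + b;
}

// After stripping, what actually runs is plain JS:
//   function add(a, b) { return a + b; }
console.log(add(1, 2)); // → 3
```

This is why running tsc in a commit hook or editor alongside a strip-only compiler is a common setup: the stripper keeps builds fast, and the checker still catches the type errors.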
Well, Typescript is literally checked in to the repo under node_modules. So I imagine the author just means "forget lockfiles, let's check this all in."
As I understand it, the goal is not for Rome to do type-checking, but just for it not to choke on TypeScript's syntax (i.e. what Babel does):
> Supporting those "compilers" just means supporting their syntax, and both fortunately already have very good compliance test suites, because other tools like Babel reimplement their syntax anyway.
I'm the only author at the moment. The project started off years ago as a side project without any commit history, and what little history there was over the years wasn't very thorough. Prior to the open sourcing I've been working on it internally and still did pretty massive diffs that changed thousands of files, still without good history. It's something I'm working on though and will obviously need to be accountable for now that the project is public.
I didn't say the opposite. But he demonstrated that, internally at his company (before releasing the source to the public), he was not a good engineer.
Considering this is just a source code dump and call for possible contributors I'm disappointed it was even posted to Hacker News. Rome is not ready at all for public consumption. I wanted it open sourced so I could work with other tooling authors and contributors.
Would you mind commenting on what it intends to be and what its value propositions would be when it's eventually ready for public consumption? An opinionated all-in-one platform?
I’m looking for something more specific than that. To go to such great lengths, the value propositions have to be more than just bundling a few things together. (And I did read the project philosophy section.)
If you’ve used Metro or Webpack before there is very little explaining to do. The existing tooling is bloated, fragmented (think 40k deps) and slow, and the experience of sewing it all together and keeping it in one piece extremely frustrating.
I don't think "less bloated/fragmented/slow Webpack and co" is what sebmck intends its USP to be. It sounds more like he expects there to be some advantages to arise specifically from a single tool doing everything related to processing source code, e.g. the linter being able to catch bugs by also being aware of how code will end up after bundling.
It would be interesting to see examples of specific such advantages that could be gained, because I'm having a hard time thinking of any.
I have, but there’s still a lot of explaining to do. E.g., what’s the extensibility story? To do anything remotely useful with webpack you need to install a million loaders and plugins, but the plugin architecture also lets you do a lot that can’t be done with opinionated bundlers. So, what’s Rome’s approach to this? Enumerating existing tooling’s problems does not automatically guarantee something better.
The author has commented that this isn't yet ready for public consumption, but is instead just a call for other possible contributors to the project. So:
1. Everything you seem to be asking for seems like it is not relevant at this point in the project's lifecycle.
2. IMO people who are capable of providing good contributions don't actually need what you are asking for. They can review the README as is, review the source code, etc.
I remember many years back uploading a module to CPAN at version 0.001001 with docs that said "actual docs not written yet, this release was made so other cpan authors I was talking to could experiment more easily".
Two days later I got a one-star review on the now-defunct cpanratings site complaining about the lack of documentation.
It's a public repo under a Facebook org; it's bound to be picked up on Reddit if it hasn't been already.
Personally, HN would be my go-to if I had a project to show and for which I was hoping for some high-quality collaboration.
If by 'reaching directly out' you mean contacting specific people, then it shouldn't be a public repo. Like I said, it's under a Facebook org, of course others will find it.
I don’t think you have read the current version. Here are the first two paragraphs:
“Rome is an experimental JavaScript toolchain. It includes a compiler, linter, formatter, bundler, testing framework and more. It aims to be a comprehensive tool for anything related to the processing of JavaScript source code.
Rome is not a collection of existing tools. All components are custom and use no third-party dependencies.”
Compiler here probably means transpiler. Like Babel. Supporting new ECMAScript stuff, transpiling it to ES3/ES5 or whatever, so you can use it in IE6, or on NodeJS 8.
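Concretely, "compiling" here mostly means rewriting newer syntax into older equivalents. A rough before/after sketch (hand-written for illustration, not actual Babel output):

```javascript
// ES2015+ source: arrow function, default parameter, template literal.
const greet = (name = 'world') => `Hello, ${name}!`;

// Roughly what a transpiler targeting ES5 would emit instead:
var greetES5 = function (name) {
  if (name === undefined) { name = 'world'; }
  return 'Hello, ' + name + '!';
};

console.log(greet() === greetES5()); // both produce "Hello, world!"
```

The behavior is identical; only the syntax changes, which is what lets modern code run on engines that predate the newer features.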
'Transpiler' adds useful extra information and context though. It's like it's fine to just say 'animal' but it's a bit more useful with more information to say 'cow'.
Yes, but hopefully done a bit better than before. Maybe the wheel will actually come out round this time? :)
Of course none of this would be needed if we could just use a programming language that wasn't shit, but before that's possible, things like Rome will keep surfacing. It is somewhat amusing to observe, even if you're blessed with not having to touch it.
The problem is more with browsers, than the language itself. Need for transpilation comes from the fact that you don’t control your env and browsers lag behind/are inconsistent. Need for splitting and bundling because we can only specify file resources to be loaded.
This only replaces about 325 js dependencies. So you are right, it doesn’t do anything. To get to the point of having any functionality, we have to wait till it replaces 7,967
I don't really understand what it does? It replaces webpack, eslint, maybe the typescript compiler... etc, and puts it all in one library? What is the benefit to end-users? To reduce the number of bugs, make it easier for people to contribute (only need to contribute to one library), and increase the speed of development?
I worry this is the start of a toolchain monopoly. I don't like the idea of all-in-one; how can a single toolchain be the best in all fields? Will it suppress innovation like the jslint -> jshint -> eslint progression, just because the new and better tool isn't part of the toolchain?
In the Java world 99% of the code is compiled with javac.
In the .NET world 99% of the code is compiled with csc (or whatever it's called, don't remember that well)
In the C world, with one of the biggest ecosystems and a fragmented history going back half a century, 99% of the code is compiled with GCC/Clang/VS CL (of which VS CL is there just because Microsoft insists on it being there).
Similar things happen with interpreters, CPython, Yarv, etc.
Javascript should innovate at higher levels. And it should have a linker and tree shaking compiler as default for every project, everywhere, so that people can stop making silly small libs and instead can use big ones that get compiled to small code bits that are distributed by websites.
Until that level takes the problem of in-browser, persistent file/blob storage seriously, the JS ecosystem will not be able to truly progress, particularly when it comes to the question of linking evolving code bases with more stable libraries. But I digress. (I'm not going to "pimp" for my project now... see my comments!)
Having a good sensible default is one thing, bundling everything together is another. If I prefer jest over the test suite provided by Rome, I would have to install two test suites in the same project. Why not split them into different packages and let user pick what they want?
I am really excited for this project and hope it is opinionated enough and doesn’t go down the rabbit hole of endless customisable options in a Rome.js|json|rc file. I also hope this is as painless as Go and Elm’s default tooling.
Make it run in the browser, using web workers so as not to block the main thread. Most tooling assumes it will be used in a terminal... Make libraries instead of terminal applications, so that the tools can be used anywhere. Most people, including most developers, are not comfortable working in a terminal emulator. Even if you are, you probably google for commands and copy/paste them into the terminal... terribly inefficient and unfriendly for beginners. Instead, provide easy-to-use libraries that can be used in monolith apps with some minor glue code.
I'm not seeing a ton of tests in this repo, did I miss them? Seems like it will be a very hard project to maintain and gather community support for without better test coverage.
I just wish one of the key authors of this wasn't such a jerk to the yarn development team just because they disagreed with their decisions. Makes me very hesitant to want to collaborate with such people.
Looks like a normal discussion thread. Not sure what you’re pointing to here. Do you have some evidence of bad behavior? otherwise your comment is the one that looks petty IMHO.
This is cool. Facebook does open source better than Google (things aren't abandoned left and right), but I'm afraid the "experimental" label means its future is uncertain.
The other point is: why is Rome better than any existing tooling? Webpack, TypeScript, Prettier, ESLint and Mocha are all pretty popular and mature tools. Why would I use Rome over them?
Rome has a logo of an ancient Greek Spartan helmet. It's not very relevant, since it's not Roman, but it looks cooler than a Galea.
“Galea” links to the much cooler Roman helmet, disproving this statement. Also given that this is a build environment, perhaps an icon incorporating, I dunno, a Roman building might be appropriate?
I am using Haxe, which is a completely different ecosystem that compiles to JS. It has a fast optimizing compiler and a strictly-typed language, with features like metaprogramming/syntax transformation (macros), conditional compilation, inlined calls, etc. These are great tools for creating high-performance apps. You can find all the features on its website. It has a formatting tool and a linter, but also built-in dead-code elimination and a static analyzer, so it only includes what you use (no need to strip it out afterwards as seen in some JS tools). It could also be a good candidate for WASM, since it compiles to C++.
One thing that is great about Haxe is that it also compiles to other languages; you can reuse the same code for different compiler targets. I mean, why does Rome only target JavaScript? For Haxe that's just one option.
I love the README. I wish more projects had things like this:
> Transparency. No back-channel project management. Project conversation and decisions will take place only on public forums such as GitHub, the Rome Discord, and Twitter. The only exception to this is moderation decisions which will be strictly done in private.
The whole document is full of good ideas. I still want more, but that's a league better than many "top" projects I use.
This seems great.
As far as I can tell, it will take input in *Script - AST it, link it, lint it, bundle it up. Finally, a whole toolchain in one place.
Hold on -- this sentence could essentially describe Webpack. The only difference is that you plug your own transpiler and linter into webpack. It sounds like Rome will let you integrate external tools, so what is Rome really? A webpack that ships with a default linter and transpiler? There must be more to it than that, otherwise I don't see the value add.
Even if it has the same features as webpack, this is better because it does not have external dependencies. Webpack (webpack and webpack-cli) depends on 312 npm packages.
Just an observation... The batteries included model of Rome where it handles ESNext, JSX, Typescript and Flow out of the box without configuration appears to be a marked departure from the Babel preset/plugin model. Customization is all fine and good, but most people don't want to configure anything and are happy using defaults, even if it's suboptimal in some cases.
The approach to errors is what clearly distinguishes "old tech" from "new tech". Most recent tools, compilers, linkers, bundlers etc. go out of their way to provide clear, human-readable errors.
> Rome is experimental and in active development. It's open for contributors and those interested in experimental tools. It is not ready for production usage.
This is because, unlike the historical Rome, this Rome may fall in a day.
One reason could be that this tends to require a lot of fiddling. If you take the Rome approach, you can (theoretically) have the community do the necessary fiddling collectively and have it work for everyone. If they're alright with the standard it provides.
I know from experiences in the last few days that even things like standardjs and xo, despite being described as zero-config linting/fixing solutions, don't reliably work out of the box.
In addition to this, one reason the JS tools of today are relatively slow is because they all need to parse the code. Babel parses it to compile it, then ESLint parses it again to lint it, then Webpack/Rollup/Parcel parses it again to bundle it. A single tool that reuses the same AST representation for all three tasks has the ability to become a lot faster.
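The parse-once idea can be sketched like this. All names here are hypothetical stand-ins for illustration, not a real Rome or Babel API, and the "AST" is a toy:

```javascript
// Hypothetical pipeline: parse each module exactly once, then hand the same
// AST to the linter, the compiler, and the bundler. In today's ecosystem,
// Babel, ESLint and Webpack each re-run the (expensive) parse step themselves.
function parse(source) {
  // Stand-in for a real parser; a real AST is far richer than this.
  return { type: 'Program', statements: source.split(';').map(s => s.trim()).filter(Boolean) };
}

const lint    = ast  => ast.statements.filter(s => s.startsWith('var')).map(s => `prefer let/const: "${s}"`);
const compile = ast  => ast.statements.join(';\n') + ';';
const bundle  = asts => asts.map(compile).join('\n');

const sources = ['var a = 1', 'let b = 2; let c = 3'];
const asts = sources.map(parse);      // one parse per file, total
const warnings = asts.flatMap(lint);  // the linter reuses the ASTs
const output = bundle(asts);          // the bundler reuses them too

console.log(warnings); // → [ 'prefer let/const: "var a = 1"' ]
```

With three separate tools, each of those files would be parsed three times; sharing one AST per file eliminates two-thirds of that work before any other optimization.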
Yes and no. ESLint doesn't need to run at build time, and I assume that if you integrate Babel and Webpack via webpack "loaders", some amount of work is saved (just guessing though). That said, yours is the only good argument I've seen for building a monolithic solution.
It has to do it that way, since Babel could parse some syntax that Webpack doesn't recognise (like Flow or TypeScript annotations). Webpack loaders can even take non-JS input and produce JS output (eg. "svgr" takes SVG images and produces React components) so I don't think Webpack can even assume that the input is JS at that point.
And yet when it comes time for Webpack to crawl import statements, it can do so across all different kinds of files coming in through their own loaders. So I'm wondering if Webpack gets presented with some kind of structured "imports manifest" by the loaders, in which case I think it could skip its own parsing step.
It is interesting that even Facebook employees prefer typescript to flow when starting a project from scratch. I’d love to hear sebastian explain why he didn’t use flow.
I'm curious whether the Rome authors devote attention to 'compatibility' with the principles and requirements of build systems like Buck (and Bazel). Our company is in the process of migrating a large Typescript codebase to Bazel, and there's a non-negligible amount of reconfiguring and re-tooling that needs to be done that would be less necessary if key ecosystem components were designed early to be hermetic, deterministic, modular, and incremental.
Edit: Facebook wrote Buck and uses it extensively internally, so there being a desire for compatibility doesn’t seem far-fetched.
It’s been interesting to see the vote count on this comment go up and down over the past few hours. Clearly there are some people who feel the same, and others who think it is inappropriate to even bring it up.
No, this is what he's saying: Facebook has shown that they contribute good OSS and want people to use it with no strings attached.
His quip about Oracle was completely irrelevant to the content of his comment. In my opinion, Facebook proved themselves champions of OSS when they relaxed their license on React (which is now vanilla MIT with no stupid patent clauses).
> which is now vanilla MIT with no stupid patent clauses
Except a patent clause actually gives you more protection, not less. That's why some other licenses explicitly include a patent grant, like the Apache license. I think the patent clause was a misunderstanding more than anything else. In any case, Facebook responded to what the community wanted.
Why should anyone contribute to projects that have a completely different philosophy from what they would write? I doubt that the authors of those three projects would welcome any PR removing all of their tools' dependencies or merging all three tools into one.
Also, what's the problem with creating new things and fostering competition and advancing the state of our profession? Why is experimenting so frowned upon by some people here in Hacker News?
By the way, the author of Rome is also the author of Babel. So I guess he has made his share of Pull Requests.
Facebook pays for this kind of stuff? Why not spend the time incorporating a decent language instead of trying to fix the fragmented hot mess that is JS?
Throwing all the baby out with the bath water is something engineers consistently overvalue in business when there are other middleground trade-offs to choose from.
I suppose it depends, but throwing out node and migrating everything to Go was one of the better choices we made at Segment, before the platform grew too large.
Claiming no dependencies while not reducing complexity (76k SLOC mono-commit with no incremental history) isn't a win. No mention of test coverage, no documentation besides the readme, no performance numbers or regression suite.
The claim of no dependencies isn't there as some grandiose statement about reduction of complexity, it's there to describe how self contained the project is and our ability to change things. I'm going to refer to a comment I already made elsewhere in this thread:
> Considering this is just a source code dump and call for possible contributors I'm disappointed it was even posted to Hacker News. Rome is not ready at all for public consumption. I wanted it open sourced so I could work with other tooling authors and contributors.
Those things you are asking will come at a time when Rome is actively being marketed as a tool for users rather than an experimental project for contributors.
At this point any sensible engineering manager should be committed to prejudicially dismissing any new JS tooling, preprocessor or package manager. The current trash-fire state of affairs needs to be left to burn itself out first.
You can't compile the code of scripting languages, so please stop using that term. You may be doing something to the "source code", but it is not compiling! And I would argue that the thing you are doing to that code is never a good thing.
> You can't compile the code of scripting languages
Of course you can. Compiling is a process that takes some code as input and produces other code as output. The output code doesn't have to be at a lower level of abstraction (like assembly or machine code).
A compiler that produces code at the same level on abstraction (eg. going from TypeScript to JavaScript, or newer JS to ES3 JS) is often called a "transpiler" but a transpiler is just a type of compiler, and it's totally valid to call it such.
You can use all the fancy terms for the mangling and intentional obfuscation of crappy code that you want. Sure, you might think that minification is a valid use for such tools, but really, there is no alternative to just sitting down and writing the right code so that it may eventually become a standard thing, and so we won't need to keep reinventing the wheel over and over again.
I know of what I speak. I am in the process of turning the world of web development upside down with Linux on the Web (https://dev.lotw.xyz/desk.os).
However, I am sceptical about the performance of a tool written in TypeScript/JavaScript. We've seen that tools written in compiled languages like Rust or Go can be much faster, e.g. https://swc-project.github.io/, https://packem.github.io/, https://github.com/nathan/pax, https://github.com/evanw/esbuild.
On the other hand, Yarn is written in JS and it is considered fast enough. Facebook has a good track record for writing well designed tools.