Two things this doesn't mention about Rust that I think we can expect to see in new languages:
1. Version control. If you ask Cargo to make you a new Rust project, and you have git (which, if you're a software developer, you almost certainly do), it makes you a Git repository for the new project, with its target directory ignored.
As with formatting, you might have very strong opinions about revision control, or you might be subject to a company policy: maybe you must use Bob's Special Version Control Program for all Company Projects. Cargo won't stop you doing that, but if you don't have an opinion you get git, which is definitely better than nothing. And if you want to, somebody can teach Cargo about Bob's SVCP and make 'svcp' one of the possible choices in the config for the few people who do care.
2. Hello World. If you ask Cargo for a new Application, the project it gives you prints "Hello, World!", because why not. Rust's boilerplate for such an application isn't very heavy, but just as a blank piece of paper is intimidating when you're writing an essay or a novel, it's nice to have something to start from. Step #1: remove the part where it prints "Hello, World!". Step #2: write my actual program. Hey, we're off to a great start already.
This also means you have a correct, valid program with which to check your tooling. A new Cargo Application is a Hello, World program that will compile and run, showing that your tools work as intended before you write a single line of Rust. It doesn't have any tests, nor any documentation, but since it's a working program the test infrastructure and doc builder will work; the results just won't be very exciting.
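For reference, this is roughly what you get today (exact console output and .gitignore contents vary a little by Cargo version):

```
$ cargo new hello
    Created binary (application) `hello` package
$ cat hello/src/main.rs
fn main() {
    println!("Hello, world!");
}
$ cat hello/.gitignore
/target
```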
Any half-decent IDE created in this century gives you both these features and much more. I remember selecting the type of the new project in Visual Studio 6.0 and getting either a console "Hello, world" or a full-blown GUI application. VS 6.0 is literally older than some engineers I work with today.
> Any half-decent IDE created in this century gives you both these features and much more
If you've paid for the expensive "Professional" Visual Studio .NET in 2002, you do indeed get version control. But it's Visual SourceSafe, which is garbage†. Fortunately, it's optional, so you probably just never switch it on (it's not the default) and don't use any version control at all, so you're back to square one.
If you're just using the (cheaper) Microsoft Visual C++ .NET, for example - whose users I imagine would have believed they had an IDE - you don't get Visual SourceSafe. As we saw, that's probably no problem, since you don't want VSS anyway, but it does mean you do not, in fact, get version control out of the box; you're going to need to pay Microsoft $$$ (or try using anybody else's offerings, which are better, and many of which are free...)
Either way, you're not getting that comparable experience.
† It's markedly worse than CVS, which itself is awful. Microsoft actually sold VSS for money rather than giving it away, and so there was (until 2005) a team at Microsoft maintaining it. As I understand it, the revision control for the Visual SourceSafe software itself was eventually... a Perforce repo (technically "Source Depot", Microsoft's internal-use licensed fork of Perforce's software). That is not exactly a vote of confidence in your own product.
So many people believe in the superiority of Linux as an OS for software development that they consider it beneath them to even notice Windows toolchains, and are then flabbergasted by the idea of project templates.
I’ve tried developing on Windows and found it to be terrible. Even at the most fundamental, basic level, like how files get locked when opened and cannot be removed until the file handle is closed. And deploying on Windows is even worse. How do you run something when the server starts? Modify some random registry settings to log in a user and then set up some startup items (all via a GUI) to trigger your software. And of course there is no good way to get your software onto the host. There is no ssh equivalent, and ssh sucks on Windows because of the aforementioned file-lock issue. Windows devs I talked to built new machine images (because no Docker) for each deployment, a process that takes like 30 minutes to an hour.
Oh, and there was that one time I accidentally let Unity install Visual Studio instead of VS Code. What a terrible IDE. By modern standards it’s so slow and heavyweight.
Yes, as I said, I don’t have a lot of experience. However, the little experience I have indicates that Windows is fundamentally worse. For instance, the file handle issue is clearly annoying when developing. You may be used to it, but I don’t see an argument for how Windows’ handling of files is better or even equal. Or take Powershell. The *nix approach of composable shell primitives is better for developers. Software development is the composition of primitives, after all.
And you can tell me to RTFM all you like, but the manual leaves something to be desired.
Google “run on system start linux api” and the first result is about systemd. Create a config file, put it in the right place, and done.
Replace linux with windows and I don’t see an answer on the first page of results.
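For reference, the whole Linux answer fits on one screen. A minimal sketch of such a unit file (the service name and binary path are made up), enabled with `systemctl enable --now myapp`:

```ini
# /etc/systemd/system/myapp.service  (name and path hypothetical)
[Unit]
Description=My application
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```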
If you learn on Windows first, or if you’re a masochist, then I’m sure it’s possible to tolerate or even enjoy Windows development. But I have never seen a convincing argument that it’s better or even on par with Linux development.
> However, the little experience I have indicates that Windows is fundamentally worse.
Fundamentally worse is extremely hard to prove and subjective. It's different, that's for sure.
> For instance, the file handle issue is clearly annoying when developing. You may be used to it, but I don’t see an argument on how Windows’ handling of files is better or even equal.
That's one of the things that's arguably worse in the modern era. I'm quite sure it comes from CP/M (1974), so it's a super ancient decision, one that would break a lot of things if they changed it. It's one of those "go left or right, but you can't go back, and you'll only find out in 20 years whether you made the right choice" decisions.
> Or take Powershell. The *nix approach of composable shell primitives is better for developers. Software development is the composition of primitives, after all.
That's ironic, since Powershell is super composable. If anything, it's even more composable, since it sends structured data, not just raw streams of bytes. There's a reason a ton of alternative shells want to evolve POSIX shells to add structured data; they're all inspired by Powershell. See for example https://www.nushell.sh/ - you don't even have to believe me, try to find blog posts from the devs of next-generation POSIX shells, and several of them mention Powershell explicitly as inspiration.
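A small illustration of what structured pipelines buy you (the threshold and column picks are arbitrary): every stage below passes typed Process objects, so nothing is re-parsed out of text with awk or cut.

```powershell
# Each stage receives Process objects; WorkingSet64 and CPU are typed
# properties, not columns to be scraped back out of formatted text.
Get-Process |
    Where-Object { $_.WorkingSet64 -gt 100MB } |
    Sort-Object CPU -Descending |
    Select-Object -First 5 Name, Id, CPU
```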
> Google “run on system start linux api” and the first result is about systemd. Create a config file, put it in the right place, and done.
Unless you want to use Gentoo or Devuan or FreeBSD or... And amusingly, systemd is inspired by MacOS and Windows service management (parts of them) plus it actually tries to create the unified userland that's inspired by... Windows :-)
> And you can tell me to RTFM all you like, but the manual leaves something to be desired.
Windows docs are super disparate, but Linux ones are the same. Strictly man-ing stuff won't get you super far. You'll need to do the same forum and StackOverflow and whatever digging for years and years to get anywhere. You've already done it, you've just forgotten doing it.
> If you learn on Windows first, or if you’re a masochist, then I’m sure it’s possible to tolerate or even enjoy Windows development. But I have never seen a convincing argument that it’s better or even on par with Linux development.
If we're professionals, we use what's available and we learn stuff from it.
I wouldn't say Windows is better, but it's definitely a lot better than strictly POSIX users complain about it, and it can be made super usable and more POSIX oriented. I've done it with Cygwin circa 2008 and I've been happily using the same setup since then.
For example in terms of desktop/laptop support, on average, Windows still fares better than Linux.
Thank you for this well-thought-out response. I agree that Windows certainly fares better on the desktop than Linux. I'm sure there is more to the picture than my limited experience, but I can't say I'll pick Windows in the future :)
That's disregarding the real world a little bit. You may like or dislike it, but the fact is that Linux totally dominates the cloud era. Among those who have the luxury of choosing their tools themselves, startups and hobbyists among them, it's basically down to Linux or Mac, unless you develop for the Microsoft ecosystem.
Microsoft got a bit complacent during the decade before, and allowed single-user considerations to influence the design (the registry, the small-business-oriented administration tools, the complete lack of a usable shell, terminal and editor), and they still pay the price of backwards compatibility with this, despite most of it having been fixed long ago.
All languages and tools from the past two decades, such as Python, Ruby, Node, Rust, Go, and so on, are first-class citizens on Linux. They all work with the file system, the shell, and the package management system because that's what people use. I'm sure there will be a post-Linux era too, but that's not where we are now, and when it comes it will not look like the 90s again.
It's not about practicality, most of my day jobs relied on my knowledge of POSIX :-)
It's about intellectual dishonesty, regularly complaining about Windows things from a very shallow perspective that would not be tolerated of POSIX. An "argument from ignorance", if you will.
My original argument was from first-hand experience developing for a Windows server deployment. It's a little insulting that you would call that ignorance while simultaneously not addressing the grievances encountered.
If it was an insult, then it was well deserved. You should learn from it. A single Windows deployment project is very narrow experience.
Every OS has areas where it shines compared to others and areas where it could be better. Why can't I deploy every Linux the way I can deploy Guix?
More than that, you need to give it time, explore the culture, find the native tools, and see how cross-platform tools are used. You can't judge anything without spending enough productive and play time with it.
Everything is equally approachable on Windows. Powershell is light years ahead of any Linux shell. Chocolatey is maybe the most up-to-date package manager around, on par with Arch.
One area where I would use Linux over Windows any time is IoT, simply due to size and price. Windows Nano is still a pain compared to mini Linux distros.
Powershell is a good example of what was made after everyone could see that the momentum just wasn't there. It's far from enough to turn the tide. The market will turn sooner or later of course, but not back to what was before.
Package managers are also a good example. Everything is third party and nothing is default. They must be configured to work with your instance of git and python and whatnot. And who knows whether the tools expect a case-sensitive file system, or want to open a socket in a mode that doesn't work, or use a certificate store that isn't the system default, or include some script that expects a network interface that just isn't there.
You are free to hold your own opinion of course. But if you work with Node or Python or other things that cloud apps are built from, most people will stick to whatever everyone else uses. That's just the reason it is like it is. It's the same dynamic that kept Microsoft close to a monopoly in the office era.
It's great that people use other platforms too in order to keep portability of our tools high, but as for the cloud era, the market is unlikely to change much from here. We'll be stuck with the web for a while, and it runs on Linux.
I would love a PowerShell example that finds files in a directory tree matching a file name pattern, filters those files to those that contain a specific string in their contents, and then prints the filenames to a single output file.
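For what it's worth, that is a short pipeline; a sketch, where the name pattern, search string, and output path are placeholders:

```powershell
# Find *.log files under the current tree, keep only those whose contents
# contain "ERROR" (-List stops at the first match per file), and write the
# matching file names to a single output file.
Get-ChildItem -Recurse -File -Filter '*.log' |
    Select-String -Pattern 'ERROR' -List |
    Select-Object -ExpandProperty Path |
    Set-Content matches.txt
```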
In my experience "IDE lockin" is terrible to work with in the long run. After 5+ years trying to compile old Visual Studio or Eclipse projects is a horrible experience.
Mine is the opposite. Visual Studio makes it a breeze to pick up some random person’s work, often after decades of time if the projects can be upgraded.
Even if not… I recently installed Visual Studio 2015 in a VM to build an old project and it was perfectly fine.
Can it show smiley emoji in string literals? No, but it does a lot more for me as a programmer than a text editor!
This isn’t my experience of running old VC++ projects. They usually end up with some weird linker errors, because they depend on a hardcoded DLL compiled with an old version of Visual Studio or something. You get a lot better at fixing these errors with experience, but I’ve lost count of the number of hours I’ve lost noodling with this stuff to compile some old code.
Speaking of progress, I adore this improvement in modern languages. C#, Swift, Rust and nodejs all have officially defined project configuration alongside the source code. I expect to be able to “cargo run” any of my rust programs in 50 years from now just as easily as I can today.
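For Rust, that checked-in configuration is a Cargo.toml, roughly like this (the name, version, and edition are whatever your project uses):

```toml
[package]
name = "hello"
version = "0.1.0"
edition = "2021"

[dependencies]
```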
50 years from now, "Swift, Rust and nodejs" might not exist. For that sort of time period, you either need a language whose compiler & runtime is easy to write (C, Forth, Scheme, etc.) or a language that compiles to standardised bytecode (C#, Java, etc.).
That's not an issue with "Visual Studio as an IDE", but "Microsoft's C++ build toolchain" combined with specific issues with C++ the language itself, such as: not having a stable ABI, no truly common stdlib, no package manager, and weird linker conflicts due to the way headers interact with static values.
This is the primary reason I've abandoned C++ programming in general, and I'm never going back.
From what I can see, the alternative C++ toolchains from other vendors (and open source) aren't fundamentally better. They might paper over one or two issues, but rarely solve the core problem in a permanent way.
These days I do only C# and Rust for the same reason you mentioned -- sane project, dependency, and build management.
C++ toolchains in general are broken with or without IDEs. It doesn't matter if you're compiling from the command-line or a GUI, it's a trash fire either way.
This problem is sometimes temporarily "fixed" by some new approach, but the problem is fundamentally with C++ itself, making many basic things nigh impossible to solve. Eventually the new "simple" toolchain is adapted to support every possible scenario, the madness creeps back in, and we're back to square one.
For example, I recently had to compile Chromium from scratch. It has a custom build system (of course), that is more complex by itself than all of the code that I have written in my entire life put together. It was written to "simplify" its builds. That's... crazy.
I vividly remember the day I snapped and gave up C++. I was trying to compile previously working code and it was complaining that "__malloc" was undefined. This was a linker error with no further information. It didn't identify the source of the problem, and provided no clue on how to fix it.
After hours of effort I simply gave up, switched over to C#, and never looked back.
PS: A part of the reason C/C++ developers don't "get" why IDEs are so popular is precisely because C and C++ are so hard to write IDEs for! A single header file can mean different things in the same project depending on where and how it is included. Simply providing tab-complete and inline help is a monumental task. So most C++ IDEs are bare bones compared to Java or C#. They do basically nothing for the developer except the basics.
PPS: The Rust team are all ex-C++ programmers and made the same mistakes in their language design, making IDEs very challenging to write for Rust. Don't agree? Remind me, how many IDEs are available that can do something as trivial as "extract method" in Rust?
Recently I followed someone's instructions to build a small C++ project on Windows and spent like 5 hours trying to get the right clang and the right cmake and the right ninja to invoke each other. The worst issue involved seemed to be that if you install "MSVC build tools" then cmake will auto-detect that somehow and call the ninja installed by that instead of the one in your PATH, which will pass windows-style flags to the clang in your path which does not know what those are. "Install MSVC to get required dependences but do not install MSVC build tools to avoid anti-required anti-dependencies" was not in the instructions, of course.
There is no one "Microsoft-approved" way of building GUI applications for Windows (or anything else for that matter), and you're asking for cross platform stuff :-))
The grandparent's comment was pointing out that if you take the default VS setup, which is usually MSBuild, it's quite self-contained. As long as you have the corresponding VS redist, you can pretty much pick up an old VS project (one created with VS) and build it. I'm talking even about stuff from 20 years ago.
Cross platform projects use cross platform stuff, which are obviously not developed by Microsoft, so caveat emptor.
By your logic because VS integrates with npm, the issues setting up npm projects are their problem :-p
Sure, I agree, though it's no more horrible than dealing with Autotools. Shitty developer tooling is a different question (and a very important one that I'm trying to solve), but I am simply talking about being unaware of the simplest features because of one's ideological blinders.
And I have quite the feeling it will be identical to compiling one of these "package managed" projects whose dependencies are expressed as Github URLs.
Since when are project templates windows only? I spent a year in college using KDevelop like 10 years ago and it had project templates, as did netbeans.
Well, most of the people who fetishise Linux/UNIX would also not use KDevelop, because of their deep devotion to the UNIX philosophy. I'm sure KDevelop neither does only one thing, nor does what it does well ("well" as defined by UNIX enthusiasts wielding pipes and shells).
Author here. This was fun to write. I think there are counter-examples, but this was my main idea:
When a new developer tooling innovation is discovered, newer programming languages get a chance to bake that innovation into their language tooling. Doing so gives them an incremental advantage, and these increments add up over time to a better developer experience.
So newer languages have one clear, well thought out way to do something, and older languages will have either many contradictory ways, or no ways at all, to do the same thing. And this makes older languages feel old.
I'm not an advanced user of either, but both go.mod and cargo.toml seem like great examples of well-thought-out approaches to a problem, where they've learned from previous languages and come up with a pretty polished solution.
For one, it's the algorithm used to select the right version of a transitive dependency. Go uses MVS, which is a polynomial-time algorithm, rather than requiring a SAT solver (NP-complete). Go does not allow dependency versions to be specified as ranges.
Second, it's all the other checks and infra behind the scenes. A serious amount of thought went into the design of a dependency system that is far more resistant to software supply chain attacks.
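To make the MVS point concrete: a go.mod lists exact minimum versions, and if one dependency needs v1.2.0 of something while another needs v1.3.0, the build just takes the larger of the two minimums, with no solver involved. A minimal sketch (module path and version made up):

```
module example.com/hello

go 1.21

require github.com/google/uuid v1.3.0
```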
Great post. Now, why can't someone just come up with a great tooling system for a "dreaded" language, put it out there, and everyone start using that when they see it's better? Doesn't the answer just go back to the Green and Brown Language split? People who use these languages mostly work on legacy projects where they don't get to decide the tooling used. Because otherwise why would they use that language with all the trendy new options available?
I'm trying to think of examples of trying this for C or C++.
* Batteries-Included Standard Library - can't actually be a standard library, but could be the next best thing, a widely-used library; things like NSPR and APR for C, or Boost for C++
* Third-Party Package Repositories - Conan, which seems to be healthy, but hasn't taken over the world
* Documentation - Doxygen, which has been very successful
* Write Once, Run Most Places - little progress towards this for C, but the people in the C++ standards process do at least think about portability; I think there is an inherent tension between portability and being close to the metal, which means it is always going to be a lower priority for C or C++
* Package Managers - see under 'Third-Party Package Repositories'; not clear to me to what extent this is a different thing
* Code Formatters - these have long existed, I suspect clang-format is most popular, but it is far from universal; what's missing is a universally accepted strict formatting standard, as found in Go and Rust, and arguably now Python
So, it seems that these things do get retrofitted, and do get adopted, but not to the same extent as if they were there from the beginning of the language.
I’ve worked in C++ and picked my own tools. The tooling experience in Go is much more polished for a few reasons.
First, as the author states, it’s hard to get consensus on a tool if it wasn’t baked in from the beginning. I don’t think there will ever be a dependency manager or especially an option-free formatter for C++ with the same adoption as gofmt or go modules.
Second, it helps a lot to have the compiler/runtime authors think about the tooling. Go runtime developers think about pprof. GCC developers don’t think about the UX of valgrind. C++ standard authors don’t write PR’s for code formatting tools when they change the language spec.
> I don’t think there will ever be a dependency manager
I know a C++ guy who insists that C++ already has a dependency manager — he says it’s called Docker. Not sure I agree, but that’s the kind of attitude that seems to grow when a void exists; people get by with “good enough hack for my use case in a pinch”, and then over time that turns into “this is the way it’s done”.
C’s problem is that my “way it is done” is different from yours.
Your friend uses Docker. My friend uses Apt (and hopes the libraries he wants are in there). Someone else compiles by hand and uses pkgconfig. Or CMake files. Or VC++ referenced projects. Or XCode projects. Or autotools. Or hand written make files. Or ninja. And so on.
Universal tooling only has value when it’s universal. Adding yet another contender to the mess doesn’t really help unless it becomes a standard.
Docker is orthogonal to the other tools you mentioned. You use apt or pkgconfig or autotools or hand rolled configuration inside of Docker (ignoring cross-platform issues for brevity).
> Now, why can't someone just come up with a great tooling system for a "dreaded" language, put it out there, and everyone start using that when they see it's better?
I think the first two steps sometimes happen, but the last one is where things get stuck. Oftentimes a better experience means incompatibility with the previous solutions, so adoption happens slowly if at all. Then eventually yet another incompatible solution comes out, and things fragment.
Yeah, true, legacy code must be a factor. If most people in C++ are working in legacy code bases that don't leverage many third-party packages, then I guess there is little desire to standardize on a third-party package manager.
Another issue is more social. C++ has lots of package managers, which is almost worse than declaring Conan the standard. Community standards are hard to retrofit, I guess?
(Not to pick on C++, by the way. Every popular language eventually finds itself competing with newcomers. It's like the innovator's dilemma for PLs.)
That's basically what the Laravel stack does for PHP. It's great, very ergonomic, I'd use it all the time if not for having to write PHP which I don't wish to do anymore.
It makes me think there isn't any reason old languages cannot adopt these.
Or even, come up with a new, batteries included version.
E.g. keep the Java syntax, but add one standard package manager (instead of ant, mvn, gradle), one standard test framework, a standard formatter, etc. Call it 'Java++' or whatever.
You need to incentivize codebases to move to the new thing, or you have to rely on incremental adoption by new projects. It's a tragedy of the commons: for any individual codebase it's hardly worth the trouble (my codebase already has a standard formatter / package manager / whatever; what do I get by changing?), but it'd be worth it if every other codebase had already moved.
Alternatively you build a thing that every other system could be auto-migrated to, which inevitably means supporting every possible feature of everything in play and slapping an ANSI standard label on it; and then you watch as everyone extends the standard arbitrarily.
> My codebase(s) already has a standard formatter / package manager / whatever; what do I get by changing?
I guess you don't get much for yourself, but you get closer to some form of community standard. This ultimately helps your codebase "grow" when you need help from, or need to hire, other people. Other people that know those standards.
That’s fundamentally the problem; it’s only beneficial if the whole community moves in unison. If I switch from one standard with 10% ecosystem usage to another with 25%, I’ve accomplished nothing. You need something more like at least 60-70% standard usage to be a meaningful target.
Unless that 25% usage choice eventually grows to full usage… but I can’t predict the future — so I’m just betting on it. But that bet also does very little for me even on success (it’s definitely nicer to be normal, but it’s hardly critical), and I could just as well make the migration after it’s become a real standard with no risk.
There is low-hanging fruit that would have adoption problems, but those can be navigated.
However, there are very painful things, like modern compilers, which are massive undertakings for old languages. It cost Microsoft 3-4 years to rewrite the C# compiler to be modern. But the benefit is a language server which keeps pace with the compiler, and refactorings on the same AST (aka more stable/reliable).
There is some absolutism in there which is not correct.
> Because of this, Go developers lean on the standard library more than many other communities and generally hold it in high regard.
As a .NET developer I can just laugh here. Compared to JS and C/C++ this is accurate, but for .NET, Java, PHP, and Python this is the normal case. I would say the majority of platforms for application writing nowadays come like that. Which supports your argument; it's just that this has already been the norm for a long time.
To your generalized argument: I do not follow that. Yes, modern languages have the benefit of knowledge, but the old stacks are massively catching up. Package managers and libraries are a solved problem (except for C++). The most interesting aspect to focus on is modern compilers, ones like C#'s or TypeScript's which expose their services (AST, type checking, etc.) as language servers. Here old platforms (PHP, C++, Java, ...) have massive pains, because rewriting the compiler is the opposite of fun, while new languages can start out like that (e.g. TypeScript).
>> Because of this, Go developers lean on the standard library more than many other communities and generally hold it in high regard.
> [...] Compared to JS and C/C++ this is accurate, but .NET, Java, PHP, Python this is the normal case.
This is not my experience with Python, and even less so with PHP. There are too many dark corners in the standard libraries of these languages to hold them in high regard. The standard library of PHP is an inconsistent mess, with many documentation holes.
For instance, there are several ways to send HTTP queries using the PHP STL. Apart from very basic needs, the usual way is through the libCURL integrated into the STL. Unfortunately this library is just a port from C, far from a native syntax and barely documented. For any advanced usage, the developer has to read the C documentation, or switch to a non-standard library.
There’s something I’ve never quite understood and I wondered if someone who knows this history might be able to explain it.
I’m used to Java and Maven. My package manager (mvn) downloads 3rd party packages into my local repo, and all my Java projects have access to it. So I only have to download a specific version of a specific package once.
Node and NPM came along, and I think were written by a former Java developer. But NPM puts a full (unzipped!) copy of every package in each of my projects. This leads to pointless duplication of files, and in Windows is slow as dirt due to how NTFS works.
Yarn’s PNP mode sort of solves this, and it seems like recent versions of NPM may be changing things. I’m not sure.
The reason for a centralized local cache in Java is just because Java and jar files date back to the mid 1990s. Maven to the early 2000s. Storage was expensive. I wrote my first Java programs on a machine that had 16mb of RAM and a couple hundred mb of disk space.
Centralized caches of compiled code present all sorts of logistical issues when it comes to deployment. Java has suffered through several generations of uberjar-like designs. Python, also dating to the 90s, as a source based language tried to solve for both local and centralized trees and the result is a horrendous soup. In contrast, PHP- yet another 1990s language- had a model of just dumping files into the directory and serving them, which was comparatively simple and uniform.
Node, invented in the late 2000s by a C++ and Ruby programmer, was intended to simplify all sorts of aspects of writing webservers. It leveraged the deployment model of PHP (all needed files, in source form, in the local file tree) together with the superior runtime performance and scalability of the JavaScript V8 runtime, including support for async, plus the benefit of having the same language on both client and server side (something first tried with Java and JavaScript in the late 1990s, which failed for various reasons). The other major envisioned benefit of Node was that it was its own webserver, and this has led to many other innovations.
That Windows, after 25 years of the NT architecture, still has a file system that doesn't scale is not something most tooling in the web ecosystem concerns itself with. Node, like PHP, was Linux first and foremost.
Distribution problems you mention in Java are not handled by NPM at all and are not related to how you store dependencies (in Java). Not sure why you mention it.
NPM dependency resolution is neither simpler nor faster compared to Maven. You have two commands for installing dependencies:
- the slow one (npm install), which updates package-lock based on the config file developers use
- the faster one (npm ci), which is based on the package-lock, a cache-like file that is hard to read; you cannot be sure the package-lock file is correct for the current config file without running npm install, and different versions of NPM will create different package-lock files
Both commands are slower and buggier than Maven. Clearing the dependency directory (node_modules) and installing again is a common practice when working with NPM. NPM is fast and correct only when it acts as a script runner, i.e. when it is calling JS libraries (which are expected to be installed; NPM will not check that for you).
I find putting dependencies inside the project folder to be the most stupid design decision of NPM (the second one is choosing JSON for the configuration file syntax).
NPM's simplicity seems to be related only to its implementation (but somehow that did not prevent it from being the most buggy CLI tool I have ever used). It is basically a dependency downloader with script-runner capabilities.
> That Windows, after 25 years of the NT architecture, still has a file system that doesn't scale is not something most tooling in the web ecosystem concerns itself with.
This isn't exactly true. It's not so much that NTFS is slow, it's that it's not optimized for the usage patterns that Linux's FS is. That being said, it's certainly possible to have high IO performance on Windows if you know what you're doing.
NTFS is straight up bad. The way you get high-performance IO on Windows is by inventing your own file system and dumping all the data into the same file, because NTFS hasn't figured out how to make your SSD slower there (yet).
It's not the storage used but the repeated downloading, if I understand the complaint correctly. Avoiding the network traffic if you have an exact match should be much faster on a clean build, even if you don't care about disk space.
Right. If I have three projects using React 17.0.0, I have three copies. If I start another project with React 17.0.0 it will download another copy (and of the hundreds/thousands of dependencies) even though I may already have three copies of each.
And what if I try to make a project while offline? Maven can do it if I’ve previously downloaded that package.
My employer is big on Windows. All those copies of tens of thousands of files has a serious time cost.
Even if dependencies were kept zipped up but not shared that would make more sense to me. Easier to deploy than individual files, smaller too.
That's not an obstacle to a central cache - you can solve that with symlinks. A build tool should make node_modules/foo a symlink to ~/.npm/cache/foo-1.23 if foo 1.23 is what you need. A packaging tool should be able to follow symlinks easily.
I'm really wishing that the innovation that language server brought to the editor ecosystem can be expanded to other dev tools.
I'd love for dependency management and build systems to be a solved problem and not reinvented by every new programming language that has gained enough popularity that some volunteer decides to build these basic dev tools.
Imagine learning one tool and using it for every language. They all solve the same problem. What's the common interface?
The common interface is dependencies and dependents, but the problem with existing build systems is that they make assumptions about those two things. They also place other restrictions.
For example, the Go build system assumes you set up the project according to the Go standard. Cargo sets things up a certain way too.
They also assume certain build tools, like the Go compiler and `rustc`.
Restrictions for ease of implementation are the reasons why every build system sucks. Figure out how to remove the restrictions and implement that, and you will have a universal build system.
Source: I am building such a build system, so whether I'm right will only become clear over time.
One difficulty is that dependency resolution needs to be language-aware, to understand the semantics of #include, import, require, CLASSPATH, compiler -i flags, etc.
Best would be if the compiler could export this in a standard format for consumption by a build system, like the language server does.
Looking at build tools that try to be cross-language, Bazel is probably the most serious contender, and it often requires you to vendor your dependencies and rewrite the build files entirely, in a different mindset than how upstream packaged things.
You are absolutely right, especially about understanding the semantics of importing code in each language. Fortunately, with few restrictions, it is possible to use hacks to simulate the effect, mostly.
I agree with this. With Nix I can finally stop using nightmarish tools like nvm and rvm. I have my development tools where I need them: my projects. There's no reason for me to have, say, node or eslint available anywhere else.
One reason that progress is slow is that new languages that come along often have to start over. The article points out how some new languages have benefited from this, but it also means that getting to parity with the best of the existing mainstream languages is quite difficult.
An exception is languages that piggyback on an existing ecosystem (for example, Kotlin on Java's ecosystem). But they're not advancing the tooling state of the art.
Tooling is extremely important. However there’s one thing I think is worth mentioning:
Java tooling (IntelliJ and Lombok) is incredible. Yet Java is still on the most dreaded list.
Why is that? Honestly I don’t know, and I actually think Java is a good language exclusively because of its tooling. That being said, current tooling doesn’t fix all of Java’s flaws.
- Gradle and maven are reliable like npm and cargo, but super verbose and confusing
- Standard library has some issues and quirks. Like how URL#equals sends a network request (see the sketch after this list); apparently that will never be changed. Some other packages (log4j, JAX) are similarly bloated
- The “billion dollar mistake”: untyped null. People tried to fix this with @NonNull annotations, but that doesn’t go far enough.
- Java is in a lot of legacy software, which often cannot use Lombok and requires Java 8 or even Java 7.
- I just have to say, IntelliJ and Lombok really do fix Java’s verbosity. Both reading and writing. Indeed, tooling is the new syntax. But maybe some disagree.
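The URL#equals sketch promised above. java.net.URL documents that equals() considers two hosts equivalent if they resolve to the same IP addresses, so comparing two URLs can hit the network (the URLs below are placeholders):

```java
import java.net.URI;
import java.net.URL;

public class UrlEqualsDemo {
    public static void main(String[] args) throws Exception {
        URL a = new URL("https://example.com/");
        URL b = new URL("https://example.com/");
        // URL.equals() resolves both host names before comparing, so this
        // innocent-looking line can block on DNS, and two different hosts
        // behind the same IP address can compare equal.
        System.out.println(a.equals(b));

        // URI.equals() is a pure string comparison; no network involved.
        System.out.println(URI.create("https://example.com/")
                .equals(URI.create("https://example.com/")));
    }
}
```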
Agreed (although I hate Lombok myself, as being altogether too much magic; happily, standard Java language improvements now provide a lot of the comforts that Lombok does).
But other languages have quirks on a par with Java. JavaScript is quirkier, and Ruby is basically a DSL for composing quirks.
I see two areas where Java fell behind more dynamic competitors.
Firstly, Java does not have a great story about being able to open up an editor, write some code, and ship it. Using a simple text editor is too tiring, and the extremely sophisticated IDEs take a lot of learning. Java has never been well supported by the middle ground of Sublime Text-esque programmer's editors. Meanwhile, the standard tooling (javac and jar) is simple, but using it to get to production is cumbersome, and the more sophisticated build tools are not standard, and either verbose and maddeningly inflexible (Maven), or crammed with arcane secrets (Gradle). And none of them make starting a new project as easy as 'rails new' or 'npm create-react-app' (Maven has archetypes for this, but I don't think they ever got really popular).
Secondly, Java does not have a really good framework for being highly productive at building webapps right out of the gate. By which I mean Rails. There's absolutely no technical reason you couldn't build a Rails equivalent in Java. I am astounded that (AFAIK!) there was not an urgent effort to do a straight port of Rails to Java once it became clear that Ruby was eating Java from the bottom up. Meanwhile, Java had Spring, but until Spring Boot, Spring was nowhere near as easy and consistent to use as Rails; Spring Boot has helped, but by slathering another layer of even more complexity on top. I have never used Dropwizard, but maybe that's closer to the Rails spirit. The more 'sophisticated' frameworks like Play and Vert.x are not the answer, because they're so tied up with particular overly clever concepts.
Java is great, but I think many wrinkles need some active deprecation. It's interesting that the blog likes the extensive standard libraries, yet Java is littered with old APIs and language features that aren't used or liked any more. Newer languages avoid having these old warts; it's only a matter of time, though.
> Java tooling (IntelliJ and Lombok) is incredible
Until a Java IDE can jump from an annotation to all the code that actually processes that annotation in the context of the current project, Java tooling lags substantially behind ctags.
> - Gradle and maven are reliable like npm and cargo, but super verbose and confusing
At one recent-ish job, I wrote some scripts to download gradle dependencies via wget and stuff them into gradle's cache because gradle was incapable of resuming downloads and multibillion dollar companies are incapable of running maven repositories on the same continent as their developers or delivering HTTP responses from maven repositories across oceans. The resulting amalgam of bash, wget, and gradle was still quite slow because it takes forever to run gradle long enough for gradle to tell you what it failed to download.
> - Standard library has some issues and quirks. Like URL#equals sends a network request
What.
> - I just have to say, IntelliJ and Lombok really do fix Java’s verbosity. Both reading and writing. Indeed, tooling is the new syntax. But maybe some disagree.
It may be useful to compare an "idiomatic Spring Boot" implementation of a small collection of related REST endpoints for a moderately complex model to an implementation in any other language and calculate how many times more files and how many times more lines of code you end up with.
* I find it hard to believe anyone with experience on the JVM would choose Java over Scala or Kotlin. The latter actually addresses most of your concerns. The existence of Lombok is a strong argument in favor of a more modern language.
* I'd violently disagree with anyone grouping Gradle and Maven together. One of them is declarative; the other is a mess surpassed only by SBT.
* I don't get why nulls are so hated by people who are not real FP purists. How are they different from ", err := .. if err != nil .." in golang which people do a lot and seem to enjoy (in comparison with exceptions and even Optionals)? NPEs are not common in development and very unusual in prod. I'm not sure I like how much ceremony there's in Kotlin to keep track of nullable things.
* I hate to break the news but log4j was dropped in favor of slf4J&Co a decade ago. There are better restful frameworks such as akka-http and sparkjava.
> I don't get why nulls are so hated by people who are not real FP purists. How are they different from ", err := .. if err != nil .." in golang which people do a lot and seem to enjoy (in comparison with exceptions and even Optionals)? NPEs are not common in development and very unusual in prod. I'm not sure I like how much ceremony there's in Kotlin to keep track of nullable things.
Disagree that NPEs are a rare occurrence, that Go-style error handling is universally enjoyed (I would say it's one of the top complaints people have about Go) and that Kotlin null handling uses too much ceremony.
And as for that last point, idiomatic Kotlin code (or, indeed, code in any language that has null-safety) will tend to eliminate nullability from types as much as possible, it's not really the case that most function arguments, return values, etc. really need to be nullable.
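To illustrate that last point, a small Kotlin sketch (the types and data are made up): the ceremony only shows up where null is actually possible, and it is visible in the types.

```kotlin
data class User(val name: String)

val users = mapOf(42 to User("Ada"))

// Parameters are non-null by default; no checks or annotations needed here.
fun greet(name: String) = "Hello, $name"

// Nullability is opt-in and visible in the return type.
fun find(id: Int): User? = users[id]

fun main() {
    // Handled once, at the call site, with ?. and ?:
    println(find(42)?.name ?: "unknown") // prints: Ada
    println(find(7)?.name ?: "unknown")  // prints: unknown
    println(greet("Grace"))
}
```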
* I'd prefer Java (with Lombok) over Scala or Kotlin
* How often do you see both Gradle and Maven in a project? Not often. They're similar tools.
* [0]
* This is completely untrue. log4j is still widely used, even in new projects. slf4j is simply a universal API that allows applications to choose their logging backend; slf4j and log4j aren't mutually exclusive.
* I was going to say that spark hasn't been updated in years, but it looks like they posted a release [1]!
Java was my first introduction to full bull-goose, consultant-class enterprise architecture, where I think that salaries and hourly rates are proportional to the number of cascaded factory classes your solution requires. At least four or five, or you're bush leagues.
Wait, am I sounding shocked, wounded and bitter again?
"This thing is so enterprise-y that the documentation alone can send email."
I would argue that any language that you use for enterprise-y programs will always result in similar code bases.
There are always a lot of standards to satisfy, and eventually this results in people coming up with leaky abstractions for incorporating those policies. Not all business logic can be nicely boxed into clean code, as much as we would like it to be.
Edit: apparently some people think I am disagreeing with the statement in and of itself. Yes, Java is a homework language, but its real problems are far worse than that. Worst of all is the generation of developers that it produced.
Bad languages leave lots of room for tooling. Good languages leave less room.
Only the deluded run ubsan/asan/tsan on safe rust. Does that mean safe rust has less tooling than other languages? Well, yes, because that tooling is redundant and irrelevant.
I do, because I'm a Java hater. Here are the reasons I hate it:
- Baking the difference between primitives and objects into the language itself: an ugly mistake with far-reaching consequences, made by a language designed in 1995, while languages designed in 1980 (Smalltalk), 1991 (Python), and 1995 (Ruby) all avoided it.
The difference is an irrelevant VM-level optimization detail; there is no reason to uglify the human-level language with it. Once the initial mistake had been made, the correct response was NOT to pile on the even uglier hack of wrapper classes, but to make the primitives objects in the newer releases of the language. This won't break old code, as valid uses of objects are a superset of valid uses of primitives, except perhaps that objects need to be allocated explicitly with "new", but that can be a special case for primitives (i.e. "int is a special kind of object that you don't need to allocate explicitly"). The compiler can figure out whether they need to be represented as objects or as primitives; you can leave hooks and knobs for people to tell the compiler they need the primitives to be represented as primitives, but it shouldn't be mandatory.
- Baking in choices about object representation: like the fact that objects are always passed by reference, or that they are always allocated on the heap. Why the "always" part? Why not give developers the choice between pass-by-value and pass-by-reference like C# does? Why not give developers the choice to allocate on the stack (and complain as loudly as you want when they do something unsafe with it, like escaping from methods), which, unfortunately, even C# doesn't?
Every time you see something like "foo deepCopy()", that's a failure of the language: it forces you to explicitly pay attention to the fact that foo objects need to be copied deeply every time they are copied, instead of just once when you define the object by marking it as a "struct" or whatever word signifies that the object has value semantics, after which deep copy is just assignment or passing as a parameter. Why make it the default to be inefficient with the heap when it's very easy to give developers the choice to be efficient in situations where it's always safe?
- No operator overloading: I get the hate, it's a powerful tool. But it's misguided to ban it; operators should not be special. Languages like Haskell and Raku go even further and allow you to define new operators entirely and control their precedence and other things. You don't need to go that far, but why can't objects use the already built-in symbols the language supports? Because it might be confusing? Anything can be confusing; you can write assembly in any programming language, and it will be even worse than assembly because of the more powerful and obscure abstractions.
- Generics: the overall theme of forcing you to do things its way seems to be a staple with Java. Why do I need to use type-erased generics? Why shouldn't I get the choice to specify whether I need a new class generated for runtime efficiency, or the type-erased catch-all for size efficiency? There is no need to bake VM-level support for this; it can all be done at compile time (possibly with the help of additional metadata files or special fields in the .class of the generic type).
- Overall verbosity: why "extends" and "implements"? Do you really need to know whether you're inheriting a class or an interface? And can't those be lighter symbols, like "<" and ":" perhaps? Why is "private/public/protected" a must in front of every method and field? Most people group fields and methods by their visibility; C++'s way is that you declare "public:" and then everything declared below that is public. In the worst case you can always recover Java's way with "public: <method>; private: <method>; public: <method>" and so on, but it's nice to at least have the choice of not repeating yourself.
Why aren't any constructors generated? There are at least 2 very obvious ones: the empty one, and the one that assigns all the non-defaulted fields (and can take optional arguments to override the default fields). Why aren't generated getters and setters available with a small and light request, like C#'s "get; set;"? Java's design is just full of things like this. It feels like a weird sort of disrespect for your time: "yeah, you must write those routine 25 lines of code all by yourself, you have anything better to do?". How about actually writing my application instead of pleasing your language with weird and unnecessary incantations? It's like a modern COBOL.
- Horrible OOP excesses: not really the language's fault (except that it encourages verbosity and loves it), and already mentioned, but worth mentioning again.
Overall, I treat Java as assembly. I write Kotlin in my spare time, and whenever I'm confused about the semantics of some construct I make IntelliJ show the bytecode, then hit "decompile" to see a Java rendition of the code; the exact semantics will be obvious, but verbose. A language that took this literally is Xtend, a high-level augmented Java which transpiles to Java and is a strict superset of it, but with the Xtend compiler figuring out all the verbosity for you. Groovy also takes the "superset and augment" approach but doesn't transpile. And of course Kotlin is very good with its interoperability (every JVM language is), but Kotlin's mixture of being close to Java semantics (unlike, say, Scala or Clojure) and IntelliJ's excellent support for mixed projects makes it at least somewhat special.
I like the JVM, with its cutting-edge research and performance, and these days the Java standard writers seem to show signs of finally waking up to reality after years of being behind every mainstream language; they regularly augment and modernize the language. But you can't undo 20 years or so of bad design, not easily and not painlessly.
>Indeed, tooling is the new syntax.
Very much agreed; long, long gone are the days when a compiler or an interpreter is the only thing expected of a language. But it's not a panacea that can treat any bad design; at best it's just a band-aid for bad designs that makes them barely bearable. The language has to be designed from the start with the knowledge of "this is going to run in an IDE" baked in to make full use of the full range of fantastic things an IDE can do.
> Every time you see something like "foo deepCopy()", that's a failure of the language: it forces you to explicitly pay attention to the fact that foo objects need to be copied deeply every time they are copied, instead of just once when you define the object by marking it as a "struct" or whatever word signifies that the object has value semantics, after which deep copy is just assignment or passing as a parameter. Why make it the default to be inefficient with the heap when it's very easy to give developers the choice to be efficient in situations where it's always safe?
It's not clear that running a bunch of code in the form of a copy constructor and potentially performing a bunch of heap allocations whenever someone returns a value, passes an argument, assigns a variable, or evaluates a temporary as part of an expression is a good thing.
> - Generics: the overall theme of forcing you to do things its way seems to be a staple with Java. Why do I need to use type-erased generics? Why shouldn't I get the choice to specify whether I need a new class generated for runtime efficiency, or the type-erased catch-all for size efficiency? There is no need to bake VM-level support for this; it can all be done at compile time (possibly with the help of additional metadata files or special fields in the .class of the generic type).
There ends up being no difference, right? Your generic ArrayList is just an ArrayList of pointers to Object no matter what. You never had the option of declaring an ArrayList of structs.
> Baking the difference between primitives and objects into the language itself
This is not unique to Java and this difference is present even in a modern language like Rust. Primitives are copied by value, objects are copied by reference. If you want copy by value use Java records: https://www.baeldung.com/java-record-keyword
> Baking in choices about object representations : Like the fact that objects are always passed by reference, or that they are always allocated on the heap
Because the JVM does escape analysis and performs automatic stack allocation. The JVM wants to retain control here.
> Generics
Type erased generics were a deliberate design decision implemented for backward compatibility. .net had to introduce brand new collections while java programs could continue to work. To be honest, I felt this was stupid too. A clean break would have been better.
> Overall verbosity
Deliberate design decision. Though frankly, I personally prefer extends and implements. This is just done once at class declaration and allows for easy and no-nonsense grepping of sources from the CLI.
> Why aren't any constructors generated
Satisfied by records now, which are meant as the replacement for POJOs. Classes should now be used for services, for which you _don't_ want generated constructors.
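For example (Java 16+):

```java
public class RecordDemo {
    // One line; the compiler generates the canonical constructor, the
    // accessors x() and y(), plus equals(), hashCode(), and toString().
    record Point(int x, int y) { }

    public static void main(String[] args) {
        Point p = new Point(1, 2);
        System.out.println(p.x()); // 1
        System.out.println(p);     // Point[x=1, y=2]
    }
}
```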
> Horrible OOP excesses
Looks like you are ~10 years out of date on Java tech and community. Perhaps look at some of the modern Java libraries? Look at GraalVM?
I prefer coding in Modern Java over Kotlin. Kotlin readability is poor.
>This is not unique to Java and this difference is present even in a modern language like Rust. Primitives are copied by value, objects are copied by reference. If you want copy by value use Java records:
You're confusing my copy-by-value-vs-copy-by-reference criticism with my type-system criticism. The point you're replying to is a point about the type system. Rust has no notion of a unified type system, so it can do whatever it wants. (It's still better not to have too many exceptions and special cases in your type system; I don't know enough about Rust to know if this is the case there.)
Java announces at the start that all types share a common ancestor, an idea it got from Objective-C, which got it from Smalltalk. If you're going to do that, you'd better go all-in and make sure that ALL types really do in fact share a common ancestor. Objective-C couldn't do it because it was wrapping C, but in a new language there is no excuse for making a rule and then immediately listing several exceptions to it right off the bat. Scala and Kotlin are proof it can work: your primitives are compile-time objects that you can call methods on and do all the things you can do to objects, and the compiler decides whether your code can compile down to JVM primitives (with the method calls becoming static function calls) or whether you've done something that requires boxing, like generics.
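A tiny Kotlin sketch of that; coerceAtLeast is just an ordinary method on Int:

```kotlin
fun main() {
    val n: Int = 3
    // A method call on a "primitive": still compiled down to a JVM int.
    println(n.coerceAtLeast(1)) // prints: 3
    // Boxing only happens where the bytecode needs an object, e.g. generics.
    val xs: List<Int> = listOf(n)
    println(xs) // prints: [3]
}
```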
>Because the JVM does escape analysis and performs automatic stack allocation
There is no reason it can't do all those things and also give the developer the ability to explicitly specify that they want this data on the stack, for all the data that can be allocated on the stack (e.g. not dynamically sized arrays). The object representation is a low-level yet very important question that shouldn't be monopolized by the language.
There is also no reason to bake what I'm saying into the JVM; the compiler is there. It can "unwrap" the objects that you stack-allocate into primitives (recursively) and translate all the code that manipulates the objects into equivalent code that manipulates the underlying primitives. The JVM would be none the wiser; all it would see is primitives.
This is what I mean when I say that Java is an assembly language: it tries too hard to reflect the underlying VM, developer ergonomics and productivity be damned.
>Deliberate design decision.
I never said it's accidental, I said it's bad and ugly and horrible for developer productivity.
>This is just done once at class declaration
And is this a rare thing in Java?
>easy and no-nonsense grepping of sources from the CLI
Regex is far from a no-nonsense solution to anything; for one thing, it's sensitive to spaces and newlines. You can't match "class foo <newline and several tabs later> implements Iinterface", for example, unless your regex explicitly and verbosely accounts for it. Kotlin's regex won't be any more difficult than the equivalent Java one that handles the same edge cases; syntax easy for humans is syntax not too difficult for machines.
Here is an example :
"class [^:]+:(\s|.)*Iinterface"
This matches all classes implementing Iinterface, accounting for any whitespace and the fact that Iinterface might not be the only interface implemented. This is pretty mild by regex standards.
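If you want to sanity-check that kind of pattern, here's a quick sketch using Rust's regex crate (input and interface name invented); (?s), which lets `.` match newlines, is a tamer spelling than (\s|.)*:

use regex::Regex; // regex = "1" in Cargo.toml

fn main() {
    let src = "class Foo :\n    IInterface,\n    SomethingElse {\n}";
    // Same idea as the pattern above, with (?s) turning on dot-matches-newline.
    let re = Regex::new(r"class [^:]+:(?s).*IInterface").unwrap();
    assert!(re.is_match(src)); // matches across the newline and indentation
}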
>Satisfied by records now, which are meant as the replacement for POJOs.
Being late matters. Generating things automatically isn't rocket science; languages have been doing it forever. It isn't enough to wake up, you have to wake up on time.
>Classes should now be used for services
Keyword is "should". Good luck forcing it on a community after being late for 20 years.
>which you _don't_ want generated constructors.
So override them. If you already know that you don't want constructors and you're going to override them anyway, why should the language force everybody to conform to your choices? Generating obvious things should be the default; it's up to you, as the person who wants something different, to override the defaults.
>Looks like you are ~10 years out of date on Java tech and community
I'm right here in this day and age, and no, the horrible excesses and design-pattern fetishism never went away in the vast majority of non-performance-critical code.
>Perhaps look at some of the modern java libraries
Does Android count? Because I have seen things there you wouldn't believe: stack traces seven classes deep, full of "abstract" and "Impl".
> GraalVM
It's hardly fair to claim that a compiler/VM codebase is typical code.
>Kotlin readability is poor.
This is what I call the "COBOL view" of programming language readability: the language should be as verbose as possible and full of natural-language words, to imitate an informal document.
Needless to say, I'm not very keen on this view. It's misguided. Readability is precisely when the language gets out of your way. For that to happen, it's crucial that it's not verbose; if you want verbosity later you can add it with verbosely-named language abstractions. In a language that allows macros (which Kotlin doesn't), you can make "implements" a keyword if you want.
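To make that concrete, here's a toy sketch in Rust (which does have macros) of spelling a trait impl with an "implements" keyword; purely illustrative, all names invented:

// A macro that expands `implements!(Type: Trait { ... })`
// into an ordinary `impl Trait for Type { ... }`.
macro_rules! implements {
    ($ty:ty : $tr:path { $($body:tt)* }) => {
        impl $tr for $ty { $($body)* }
    };
}

trait Greet {
    fn hi(&self) -> &'static str;
}

struct Foo;

implements!(Foo: Greet {
    fn hi(&self) -> &'static str { "hi" }
});

fn main() {
    println!("{}", Foo.hi()); // prints "hi"
}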
The key point is that verbosity and extreme detail are not readability, not always, even in natural language. Readability is what happens when the problem is described at the exact level of verbosity that makes its solution fit in one human brain. The language shouldn't claim to know this level in advance for all possible problems; it should be as austere and minimal as superhumanly possible, get out of the way, and let those describing the problem decide how it should be described, because they know better (about the problem) than any language designer.
In other words, you can make Kotlin verbose for problems that benefit from it, but you can't make Java not verbose for problems that benefit from it.
It sounds like you just want to write C#. The reason that Java is so successful is precisely because of the points you mentioned. If they broke backward compatibility to fix the warts then it wouldn't be the popular language that it turned out to be.
Modern Java (11+, and especially 17) has greatly improved the language. Sure, there's still type erasure and primitives, but it is adding a lot of features that matter like record types, more efficient GC, deprecation of older classes, and more fleshed out APIs.
I can see why you'd dislike Java. There are a lot of things that could be improved. There are times when it is the right language and times when it isn't.
C# isn't ideal either, but it's a way better java than java. The even better java is Kotlin or Groovy.
My point in the GP comment is just that there are fundamental design decisions that IDEs can't fix. Java is badly designed; IDEs can do damage control, but they can't undo the mistakes of the language designers.
> why can't objects use the already built-in symbols the language support?
In Raku they can. But one can also introduce new ones. More generally, it's nice that Raku makes it easy for a dev to write a module that lets other devs write code and get nice output like this:
use Physics::Measure :ALL;
# Define a distance and a time
my $d = 42m; say $d; # 42 m (Length)
my $t = 10s; say $t; # 10 s (Time)
# Calculate speed and acceleration
my $u = $d / $t; say $u; # 4.2 m/s (Speed)
my $a = $u / $t; say $a; # 0.42 m/s^2 (Acceleration)
# As a special treat it allows working with measurement errors:
$d = 10m ± 1;
$t = 8s ± 2;
say $d / $t; # 1.25m/s ±0.4
(With thanks to SteveP for the module and niner for the above example code.)
It’s interesting to me the emphasis placed on tooling. The tooling in Rust’s ecosystem, while good, is far from my favorite part. I very much enjoy how expressive and ergonomic the language is. Coming from primarily C and C++, it was eye opening to me to see all the things I never knew I was missing out on (borrow checking, const by default, ? operator, monadic types, the list goes on…)
This is a good article. It covers a bunch of "ecosystem" stuff.
At the same time I'd generally encourage a lot of developers using plain editors and modern, but not widely used languages, to give IDEs and common languages a try. That addresses the "iteration" experience.
Modern IDEs for Java, C++, and C# are a huge step up from whatever cobbled-together setup one is likely using in neovim (saying this as someone who mainly used vim until about 2 years ago).
VCS integration, build integration, debugging facilities, refactoring, linting etc. are just amazing. Jetbrains IdeaVIM is also really good at getting a fast editing experience. At this point, as soon as I'm beyond a small script, it is worth it to open the project in an IDE.
We still aren't at the smalltalk or lisp machine level in certain aspects, but that is more a limitation of our larger computing platforms.
I use an IDE for my day job (because when in Rome...) and I use Vim the rest of the time.
There genuinely are things to like about the IDE. But, they're fewer and further between than I'd anticipated. For example the debugger is more intuitive to work with. If I have no idea what's actually wrong, so that I'd be resorting to just poking around hoping I see something weird to investigate, this makes the IDE a better choice.
I expected several things to be nicer, or to be roughly the same but perhaps with a cute GUI that I'd ignore - but they were generally worse. Test infrastructure is worse: Just tell me what failed and how. Don't waste my time making some sort of chart, I'm not trying to impress some middle manager I have work to do. Revision control is worse: the IDE forgets which files are modified! Sometimes restarting it helps. Buggy software is annoying all the time, but buggy development software is awful.
Now, maybe JetBrains is amazing, I haven't used it. But I have used a famous IDE that got mentioned elsewhere in this HN discussion and it was... not good. Not so terrible that I told my boss I can't put up with it. But definitely not better than Vim.
A few years ago I did just that. C (a very common language) and a popular IDE (either Eclipse or IntelliJ, I don't recall which one). I loaded up a simple two-file C project into the IDE and it promptly crashed.
I think I'll stick with my text editor I've been using for 30 years. I've never had a problem with editing any language.
The old language that's a bit of a counterexample to this is PHP.
Folks came out with a package manager, Composer, that was just so much better that everyone jumped to it. Then the IDEs finally got modern features, and then the runtime got much faster in PHP 7.
It's just that by the time all this happened, "PHP sucks" had become "common knowledge" and it really fell off the popularity ladder. But not for lack of tooling.
C++ has also made a lot of strides in developer productivity, it just still has a bad rep.
And you actually can have Xdebug installed without performance degradation if it is not automatically enabled on every request (I think that was true for Xdebug 2 as well). What you do is send a special request payload to enable it when needed, which can be done with a browser extension.
There is also phpdbg, shipped with php, it does not suffer from the same performance penalties that xdebug prior to version 3 had (not sure how it compares to xdebug 3). However it is a command line debugger only, it does not support remote debugging or IDE integration.
I actually hate the tooling for Rust. VSCode on Windows sometimes refuses to launch the project or the debugger crashes on launch. IntelliJ Community doesn't do debugging in Rust at all, neither does Sublime Text. Nova for macOS does have a Rust extension, but it also doesn't support debugging.
It's pretty good with IntelliJ IDEA Ultimate on Linux though
One crucial thing about Rust's documentation is that your examples are run by the test infrastructure.
This makes logical sense. If the programmer provided an example of how to call some_function(), then we should test that the example works. If it doesn't, either the example was wrong or some_function() is broken; either way, the programmer needs to fix it.
For the test infrastructure to run your examples, they must be code, not some sort of annotation.
The result is now you don't ship documentation which oops, is documenting the old some_function which had different behaviour and so of course that example doesn't work any more. Sorry, we didn't rewrite the docs yet, just go read this Discord thread and the six bug tracker entries it links...
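A minimal sketch of the mechanism (function and crate names invented):

/// Adds one to the input.
///
/// # Examples
///
/// ```
/// // rustdoc renders this, and `cargo test` compiles and runs it:
/// assert_eq!(my_crate::some_function(1), 2);
/// ```
pub fn some_function(x: i32) -> i32 {
    x + 1
}

If some_function's behaviour changes, `cargo test` fails on the example, so the stale doc can't quietly ship.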
I first learned this from python docstrings, which can include testable examples.
I think Rust improves on how it's achieved, but I'm more familiar with how Go does it.
Whichever language you use I agree that it’s a great way to ensure that the docs match the package. One thing I would love to see more of is annotations in docstrings that specify when changes were made to packages. (E.g. “version 2.3, added FooFrob()”) Python standard lib does a good job of this, but I can’t think of a convenient (for developers) way to enforce this with tooling.
Asking for your Vec to use a different allocator instead of the global heap allocator is a nightly feature, ask for feature "allocator_api" in your nightly compiler, or read GitHub issue 32838 if you find problems.
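On nightly that looks something like this (a sketch; the API is unstable and details may have shifted):

#![feature(allocator_api)] // nightly only, tracking issue 32838

use std::alloc::System;

fn main() {
    // This Vec's storage comes from the System allocator
    // rather than the global heap allocator.
    let mut v: Vec<u8, System> = Vec::new_in(System);
    v.extend_from_slice(b"hello");
    println!("{:?}", v);
}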
#[rustc_const_stable(feature = "const_vec_new", since = "1.39.0")]
Vec::new() is a compile-time constant from 1.39.0; before that, a call to Vec::new() actually invokes the new() function at runtime, even though a new Vec doesn't allocate anything. And in a compile-time constant context, Vec::new() won't compile at all before 1.39.0.
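Concretely, this compiles on 1.39.0 and later, and is rejected before:

const EMPTY: Vec<i32> = Vec::new(); // evaluated at compile time, no allocation

fn main() {
    let mut v = EMPTY; // each use of the const gets its own copy
    v.push(1);
    println!("{:?}", v);
}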
Well, rustdoc lets you include arbitrary Markdown files. So that case is not eschewed.
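For instance, at the top of lib.rs (path assuming the conventional crate layout):

#![doc = include_str!("../README.md")] // README.md becomes the crate's front page in rustdoc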
With that said, they serve two different use cases. So as someone who has probably written more lines of docs than I have code in Rust, it doesn't make sense to me to ask for one and not the other.
API docs with a bit of high level prose and some examples are very likely good enough for small focused libraries.
For bigger libraries or projects, you are absolutely free to write prose that is disconnected from the API. You can do that in rustdoc or use some other tool (mdbook is very popular).
But either way, you pretty much always want at least some API docs.
> Doing it that way repeats the mistake of HTML. Annotation should be separate from content, not made into a sausage with it.
This doesn't make any sense to me. The API of a library is both the signatures of items exported by the library and the documentation on those items. The docs might make certain guarantees about behavior, error/panic conditions and more. Those can't be described by the type system, and thus, the items and the docs are intricately connected. Both are required to define a library's API.
> I'm all for good documentation, but why embed it in the source code? Why not have good documentation files that stand alone?
It usually ends up out of sync with the code almost immediately. And there is a very good chance of the documentation being lost, buried in some enterprise document management system. Sometimes I feel like Gandalf reading through stacks of scrolls, trying to find thousand-year-old references to magical rings.
I'm not sure if Rust does this, but C#'s embedded docs are integrated throughout the toolchain so they appear alongside IntelliSense as you're writing or navigating, etc. Keeping them with the code makes sense to me because they have the same reasons to change.
My experience with most Rust crates is that they do use separate markdown files for their big doc pages like the readme, tutorial, getting started, etc., and they include these files in their source (`#![doc = include_str!("../README.md")]`, or something like that) so that they're picked up by `cargo doc`. Only actual Rust items get directly-in-source documentation -- again, in general.
However I agree (I think) that markdown is not a great format for long form docs; you want something a bit more expressive (say, asciidoctor).
As an example of documentation at the same or better standard, also look at Racket. I'd say the python ecosystem is also really getting better at this in the past few years.
rustdoc may be good for reference documentation (as a Rust noob, I don't really think it is; most docs are pretty messy, the UX isn't very good, and it's often better/easier to read the docs in-situ in the code), but it seems to lack any and all facilities for non-reference docs, which most Rust projects basically don't have at all; some put them in the top-level module docs (which usually limits the size) and a few have completely separated docs for anything non-reference. (Or one of the creative hacks to get around it, like creating empty public modules or useless macros to have additional pages.)
In these areas rustdoc is a significant regression compared to say, Sphinx.
The upside of rustdoc is no settings, no setup, no friction. Which also means that the deficiencies as far as the reference functionality is concerned can actually be fixed.
> it's often better/easier to read the docs in-situ in the code
Could you elaborate on this? It's really surprising to me. Some reasons why I'm surprised to hear you say it:
* The actual public API isn't necessarily always obviously clear from the source code. For example, if you see 'pub struct Foo;', that doesn't mean it's actually in the public API of the library (see the sketch after this list). rustdoc takes care of figuring out what exactly is exported.
* Viewing it in a web browser has the benefit of it being nicely formatted.
* The web browser also has hyperlinks you can follow. Your editor environment probably handles a fair bit of these with 'goto definition', but the prose itself might have hyperlinks that aren't easily followed in the context of your editor.
I think the first reason is the most compelling. rustdoc gives you a simple overview at a glance of what's actually in the public API. Discerning this from the source as a human isn't terribly difficult, but not as easy as a glance.
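A sketch of the "pub but not public" case from the first point:

// src/lib.rs
mod internal {
    // `pub` here only grants visibility within the crate; since `internal`
    // is private and never re-exported, Foo is not in the public API,
    // and rustdoc omits it.
    #[allow(dead_code)]
    pub struct Foo;
}

pub fn actually_public() {} // this is what rustdoc shows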
> but it seems to lack any and all facilities for non-reference docs, which most Rust projects basically don't have at all; some put them in the top-level module docs (which usually limits the size)
I don't really think any of those are deserving of non-reference docs. Do you think every single Rust library should have non-reference docs?
I do think I have some crates that would be deserving of non-reference docs but haven't been able to find the time to write them. It isn't for lack of tooling.
> a few have completely separated docs for anything non-reference
Do you see this as a problem? If so, why? Aren't non-reference docs, by definition, totally separate from reference docs?
> (Or one of the creative hacks to get around it, like creating empty public modules or useless macros to have additional pages).
I've used the empty public module approach before. Other than it being not quite an intended workflow, it has worked well. What specifically do you see wrong with it?
> * The actual public API isn't necessarily always obviously clear from the source code. For example, if you see 'pub struct Foo;', that doesn't mean it's actually in the public API of the library. rustdoc takes care of figuring out what exactly is exported.
That's true, but hasn't really bitten me too much so far. In most cases it seems like if you managed to Ctrl+B some place and see "pub" it's probably actually public. The same goes for sorting by visibility in the structure browser, I'm assuming that just looks at the local visibility and not whether it's visible from outside the crate. It'd be cool if CLion could visually mark stuff that's not part of the crate API.
> * Viewing it in a web browser has the benefit of it being nicely formatted.
> * The web browser also has hyperlinks you can follow. Your editor environment probably handles a fair bit of these with 'goto definition', but the prose itself might have hyperlinks that aren't easily followed in the context of your editor.
CLion/their Rust plugin renders doc-comments as inline rich text, complete with clickable references, so for these two points it's pretty much the same as rustdoc. In rustdoc I constantly get lost with more complex types because it's just a huge page which might have a lot of trait impls on it and if you're in the middle of it there really isn't anything telling you what you are looking at right now (e.g. whose trait methods). In CLion I have my breadcrumbs that always tell me where I am.
Maybe sticky breadcrumbs would help with getting lost on larger pages.
> Do you see this as a problem? If so, why? Aren't non-reference docs, by definition, totally separate from reference docs?
Surely it means there is a lot more friction in doing it? You have to set up a separate documentation tool, builds and hosting for it, and probably figure out how to configure it to link into the reference, etc.
> Do you think every single Rust library should have non-reference docs?
No, but to me it seems like Rust projects tend to have a lot less of it relative to other ecosystems. I've also seen a bunch of crates that don't have even one example in them, even though the code itself looked solid.
> What specifically do you see wrong with it?
It appearing as modules and navigation happening through the module table is kinda awkward but otherwise it's... fine? Which I suppose means that rustdoc is mostly there.
-
A general issue I've experienced with some crates is that (especially if there are only some API docs and nothing else) it's hard to get that "at a glance" overview of the API. Questions like "I wanna do X, what do I need for that" or "I have an X, and need a Y, how do I go from one to the other" aren't particularly obvious. It's often unclear to me from rustdoc (without trying to compile) what types can convert via From/Into into other types, particularly if crates bring their own variations of From traits into the mix.

Doxygen used to do all these graphs; I mostly found them kinda useless, though the inheritance ones were sometimes useful. Rustdoc doesn't do anything in the visual-ish domain so far, which might be because that's proven to be a largely useless fad in prior tools, but perhaps Rust is sufficiently different from C++ that there could be more-than-useless visualizations.
A concrete example where I sorta experienced this is the mysql crate. There's Queryable, okay, and you can see Conn implements that, so far so good. How do I get one of those? Conn::new is a thing, but that's generic over "things intoable into some other type" (sorta Yoda-style, as the "Opts: TryFrom<T, Error = E>" constraint). If I click on the TryFrom there, that leads nowhere [1]. Clicking on Opts (apparently you can put trait bounds on non-type parameters, TIL) brings up the Opts page, and that has a little comment telling us to use OptsBuilder to make one of these. But there's also an inconspicuous "TryFrom<&'_ str>" down in the sidebar. The docs are silent on this, but that actually calls Opts::from_url, which is also not that documented, but I can guess what it does. So now I've actually managed to figure out that I could have just done Conn::new("user:pass@host") all along (except that won't work because it wants mysql:// because URL, right? Maybe? I dunno, would have to try). Now, one might point out that the first lines of non-boilerplate in the example are this:
let url = "mysql://root:password@localhost:3307/db_name";
let pool = Pool::new(url)?;
let mut conn = pool.get_conn()?;
And yeah, that totally gets you a Conn, right? Well PooledConn or whatever, but same difference. But I didn't want a pool, I just wanted to connect. I also can't click on anything in the examples (if I read this in my IDE, I actually can go to definition in all examples).
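For the record, the pool-less route I eventually pieced together looks roughly like this (URL invented, and I'm trusting that the TryFrom<&'_ str> impl routes through Opts::from_url as described above):

use mysql::{prelude::Queryable, Conn};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut conn = Conn::new("mysql://root:password@localhost:3307/db_name")?;
    conn.query_drop("SELECT 1")?; // a cheap round-trip to confirm it works
    Ok(())
}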
Later it was time to write some queries, and of course the question became "well, uh, what types can I stuff in there? And when I query, what types can I convert to, and what does it expect for these on the other side?". Just from rustdoc itself this is difficult to understand (the third question is of course not really answerable by rustdoc, as it lies outside the type system). There's actually a hand-written table to specifically answer these questions, but that's part of a dependency: https://docs.rs/mysql_common/latest/mysql_common/#supported-... mysql actually has a very comprehensive overview in the crate root; you just can't see this directly, as there is no table of contents for the contents of the page itself -- which makes sense for rustdoc, where, from the API-docs PoV, you're not expected to put pages worth of content into a single module/struct/function docstring, but it would be weird for a generalist documentation system to lack one. Headlines are anchors, so I'm guessing you can always create your own hand-made ToC here, but that doesn't seem ideal to me. People do that with README.md and it's not great imho.
Just to clarify, I'm not trying to shit on mysql or mysql_common here, obviously.
[1] I just thought that maybe if you build the docs locally with "cargo d" that it'd also generate std docs, but they only link to the online docs. "cargo d" also doesn't tell you where the built docs are and there isn't an overall index.
RE browser vs reading the code: sounds like you have a nicer setup than my neovim setup. Although I think my first point still holds unless CLion handles that case too.
With respect to the rest of your comment, indeed, those are issues. Although I think I take issue with you pinning this on rustdoc. I actually think it's a dance between documentation presentation (so, rustdoc), API design and familiarity with the language.
I've long said that rustdoc makes unknown unknowns difficult to discover, and this is particularly painful for folks new to Rust. Because you don't really know what to look for yet. And writing docs is a difficult exercise in perspective taking, where you need to balance what you think others know. If you assume they know too little, it's not hard to end up writing too much and adding a fair bit of noise. With that said, I agree that "too little docs" is a far more common problem than "too many docs."
But yeah, your experience is a perfect example of what I mean when I say "generics complicate APIs." They introduce indirection everywhere, and I'm not sure how much rustdoc can really help with that. You might be right that maybe there are some visualizations that can be added, but like you, I've always seen those as gimmicks in other tools that are rarely useful. IMO, a heavily generic API really requires the crate author to write more prose about how their APIs are intended to be used with lots of concrete examples. But that's really hard to do, because writing good docs requires being a good communicator, and most of us suck hard at that.
The interesting bit here is that I've personally found the documentation experience in Rust to be far far better than any other ecosystem. All the way from writing docs up to consuming them. I've sampled many different ecosystems (C, C++, Haskell, Python, Go to name some) and other than maybe Go, I thought the doc experience was really just not great in any of them. Python specifically seems to be a case where I tend to see a lot of variance in opinion. I hated Sphinx so much, for example, that I built an alternative.[1] (Well, alternative for API docs, pdoc does not deal with the non-reference case.) I also just generally dislike the output that Sphinx produces. I find that it lacks structure, and I've always had a hard time navigating my way through Python library docs.
The counter argument to this is that developers requiring more and more tooling surrounding what is arguably just a text file with some code in it can cause friction, yak shaving, and cognitive load over time.
Listen, there's nothing wrong with IDEs, linters, formatters, test frameworks, advanced dependency management across multiple projects & repos, yada yada yada. But there's also something to be said for being able to open up a blank file in a blank folder, writing some lines of code, and running a command to execute said code. That's it. That's the stack.
Programmers are constantly tempted throughout the ages to overcomplicate and overburden language ecosystems and stacks such that a newbie coming in has to learn an inordinate amount of knowledge about everything surrounding the language before they can even learn the language itself.
I guess what I'm trying to say is I'll always push back against this idea that "progress" == more complexity. Tools, frameworks, languages, and ecosystems evolve over time. Best practices change and adapt. That's to be expected. Is it progress though? Is it actually better?
I don't think this is a counter argument at all? If anything, it's reinforcing the argument in the article.
> But there's also something to be said for being able to open up a blank file in a blank folder, writing some lines of code, and running a command to execute said code.
E.g., in Go the tooling is so good that you can just create a file and run it using `go run .`. This is sort of the point the author is making? No need to worry about build systems, dependencies, etc.
Yeah, it's definitely possible to make tooling "investments" that never amortize, but I've also seen an awful lot of programmer-minutes become programmer-days when tooling degraded to the point of printf/run/printf/run/printf/run/bisect/repeat.