From the main post and other comments it seems like it was for personal reasons, rather than pragmatic ones. They said they didn’t know how fast the Go implementation would be, but felt better about the original Rust one (I’m assuming before this really became a problem).
I think the reasons they switched are pretty weak, but justified at the same time. Kdy1 looks like more of a Rust person (their GitHub has more active Rust repos than Go), so this should’ve been the choice from the beginning. Going with the comfortable choice over the “pragmatic” one is almost always the best option if you’re the only contributor (or plan to be for a while).
Neat! I hope this is a step in the right direction towards a not-so-bespoke SCM.
I do wonder:
1) How it handles large (binary) files. This is a major pain point when using git and even the standard solution (git-lfs) leaves *a lot* to be desired.
2) How server hosting currently works. I didn’t see any mention of it, so I’m assuming it’s not an option yet? (Two of Sapling’s dependencies are currently closed source.)
I am by no means a JavaScript person; I use it when it’s the right tool for the job (rare). However, projects like Deno and Bun make me hopeful for the future of the JS ecosystem.
And while the ‘1.3M modules’ headline is more scary than exciting (for me), I’m glad they locked it behind a special “npm:” specifier and have their own way of dealing with node_modules, so things stay explicit while allowing people to swap over somewhat seamlessly.
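For anyone who hasn’t played with it, the mechanism is roughly this (a minimal sketch; the package and version are just ones I picked for illustration):

    // main.ts -- run with `deno run main.ts`
    // The "npm:" prefix makes the source explicit in the import itself,
    // and Deno caches the package in its own global cache instead of
    // creating a node_modules directory by default.
    import chalk from "npm:chalk@5";

    console.log(chalk.green("pulled from npm, no package.json needed"));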
I wonder what impact this will have on Deno and NPM long term.
+1 for Odin. It's a very nice language to use, very rarely has bugs in the compiler, and has never required more than an 'odin build .' to build my projects. It does a lot of things right and removes a lot of the cruft of other languages.
In the rare instances where you do hit a compiler bug, have you been able to resolve it reasonably? Or are they deeper language issues that you then have to work around? Not trying to poke holes; it seems really cool and I want to try it myself soon.
Most compiler bugs I’ve seen (a rare occurrence) had decent workarounds that didn’t require much effort, and kept me moving while I waited for a fix (the community responds very fast).
Overall, the design of the language is very clean and well thought out. Every decision that was made, and continues to be made, feels like it was put there for a reason, and for the most part nothing feels out of place.
If you can get into the beta, the Jai programming language by Jonathan Blow ticks the box of "no build system required." With how the build system is managed (just running arbitrary code at compile-time that modifies the build process), I can do this 99% of the time:
    #run compile_to_binary("name_of_binary");

    main :: () {
        // ...
    }

    #import "base"; // my own library that has 'compile_to_binary'
I'll go into depth about what this does, but if you're not interested, skip to the final code snippet.
The above code '#run's any procedure at compile-time (in this case 'compile_to_binary' which is defined in the library I imported). That procedure (really a hygienic macro) configures my 'workspace' to compile down to a 'name_of_binary' executable in the current directory. Any third-party libraries I've imported will be linked against automatically as well.
To do this without a dedicated procedure, just put this in the same file as 'main':
    #run {
        options: Build_Options_During_Compile;
        options.do_output = true;
        options.output_executable_name = "name_of_binary";
        set_build_options_dc(options);
    }

    main :: () {
        print("This is all I need to build a project\n");
    }

    #import "Basic";
    #import "SDL"; // or whatever...
Then compile:
    jai above_file.jai
I've done this for projects with hundreds of files that link in SDL, BearSSL, etc.
The best part is that neither of these is a requirement to build your project. Running the compiler against a Jai file will do everything I do with my own system (I just like having the ability to configure it in code).
Jai has been a breath of fresh air in terms of "just let me write code," so I highly recommend it if you can get in the beta.
If you can get into the beta. I'm a ten-year C++ programmer and focus a lot on games, but my dove's song didn't reach Jonathan Blow's ears, or something. I fear I might not be the "right" demographic. Which is disappointing.
I'd say to give it another go. There's a wide range of demographics in the beta, so your request could've just been sent at the wrong time! Invites come in waves, usually at the release of a new version, so it could be a month or so before you get a reply.
I’ve thought quite a bit about ideal fonts for both writing/reading and programming, and have been let down by the choices out there. It seems most popular monospace fonts (Fira Code, Iosevka, Ubuntu Mono, etc.) either try to fit as many characters on the screen as possible, or are so wide that larger sizes make it impossible to fit even a single function on screen; both design choices hurt overall readability and force me to fiddle with line spacing, sizing, and custom font versions just to be somewhat happy. At this point I’ve stuck to putting Consolas on virtually every machine I use, but that’s also not great.
I really hope some font foundry out there focuses on designing something specifically for readability of large bodies of monospace/duospace text.
It’d also be nice to have a provable metric for readability, rather than the anecdotal evidence I tend to see, but I suspect this is hard in and of itself.
To me, this article misses the point, but only slightly. If we compare the proposed Photoshop for text with the actual Photoshop for images, they are very different tools in how they work with their respective media.
Photoshop doesn’t use AI models to generate new things for the artist, photographer, whoever (it does have generation of course, but not in the same way). A tool like Photoshop for text, in the way described, seems more like a writing aid than a writing tool: something I can give vague commands like “make this paragraph more academic” (whatever that means), and it’ll spit out something approximating the academic style it analyzed. “Auto balance the levels of this image,” on the other hand, is much more concrete in what it means; there’s no approximation of “style.”
I feel like Photoshop for text should be an editor that cares more about the structure of stories and who’s in them, with easy ways to organize big chunks of text, rather than something that generates content for you.
> Photoshop doesn’t use AI models to generate new things for the artist, photographer, whoever
It certainly does have this: tools to remove objects and paint over them as if they were never there, automatic sky replacement (swap a cloudy sky for a sunset), super resolution (AI upscaling), and a range of what they call 'Neural Filters'.
Want your low-quality, boring midday image of some famous bridge to be high resolution, taken at sunrise, without that person riding a bicycle through the frame? Photoshop will do it with very little user skill or input.
Some beta features include style transfer, makeup transfer, and automatic smile enhancement.
Yeah, but nobody is buying Photoshop for the inpainting features. Photoshop's bread-and-butter is raster graphics tools, and almost all of them function deterministically.
Not sure what you mean by "deterministically", but most models are deterministic during inference: given the same input or prompt, they'll generate the same output.
Tense and plurals seem like something it would be helpful for a computer to keep consistent while you edit, so that when you change the subject from plural to singular, it tacks the "s" onto the verb for you. If you had
"Families enjoy this restaurant"
And started typing and changed "Families" to "our family", this "photoshop for text" would change "enjoy" to "enjoys" for you.
Enough little facets like that, and I could see it being useful.
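Just to make that concrete, here's a toy sketch of the kind of rule such a tool would apply (the function and its heuristic are made up for illustration; real prose would need proper morphology handling):

    // Toy example: keep a present-tense verb in agreement with its subject
    // after an edit. Only handles the regular add/drop-"s" case from the
    // example above.
    function agree(verb: string, subjectIsPlural: boolean): string {
        if (subjectIsPlural) {
            return verb.endsWith("s") ? verb.slice(0, -1) : verb;
        }
        return verb.endsWith("s") ? verb : verb + "s";
    }

    // "Families enjoy ..." edited to "Our family ..." -> verb becomes "enjoys"
    console.log(agree("enjoy", false));  // "enjoys"
    console.log(agree("enjoys", true));  // "enjoy"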
The article is more about user experience than implementation. When you apply an effect in Photoshop, you can change all relevant pixels at once, whereas we don’t have good tools to change all relevant words in a document at once (while retaining the intended meaning).
Oh that would be cool. Write a paragraph about a red ball, then halfway through, decide it's a blue ball. The editor then highlights all occurrences of the variable and rewrites them for you. Like the "Refactor local variable name" bit in VSCode.