Fusion – A hobby OS implemented in Nim (github.com/khaledh)
360 points by michaelsbradley 67 days ago | 107 comments



Author here. Thanks for posting this <3 Happy to answer questions.


Congrats, fascinating work!

Q: has the GC of Nim caused any challenges?

(And if not, would you attribute that to Nim's unique GC that does NOT “stop-the-world”?)

https://nim-lang.org/1.4.0/gc.html


Thanks! As @Tiberium mentioned, Nim 2.0 defaults to using ARC (Automatic Reference Counting), so no runtime GC. The Nim compiler is quite smart about when to copy/move and when to destruct, with some hints like lent/sink for when you want to have a bit more control.

Keep in mind that I also need to use ptr (as opposed to ref) types in a lot of cases when working at a low level, so there might be a need for some manual memory management as well.
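
A minimal sketch of what those hints look like (illustrative, not code from Fusion):

    type Packet = object
      data: seq[byte]

    # `sink` moves ownership of the argument into the proc, avoiding a copy
    proc enqueue(q: var seq[Packet], p: sink Packet) =
      q.add p

    # `lent` returns a borrowed view instead of copying the element
    proc peek(q: seq[Packet]): lent Packet =
      q[0]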


1.4.0 is a very outdated docs version (Nim is at 2.0 currently, and the 2.0 release brought ORC as the default, with ARC as an option); you should refer to the updated https://nim-lang.org/docs/mm.html and also https://nim-lang.org/docs/destructors.html


Nim should take inspiration from other languages' doc sites, where a banner is placed at the top notifying the reader that it is for a previous version, with a link to the current version. They could also add a rel=canonical meta tag pointing to the latest version so search engines funnel people there.

"Nim gc" on Google indeed puts you at the 1.4 doc page.


Slightly OT: what is the best documentation management system these days?

ReadMe, Docusaurus, Mintlify, etc?


Amazing work! What were the most troublesome parts of the project? Also, any tips if anyone wants to write an OS from scratch as well?


> What were the most troublesome parts of the project?

Task switching. It's a very intricate process; you really have to understand how to structure everything (especially the task stack) so that you can switch everything from underneath the CPU as if nothing happened (the CPU is oblivious to which task is running, including the kernel itself). Add to that switching between user mode and kernel mode (during interrupts and system calls) and it becomes even more challenging.
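
To give a flavor, the core of a cooperative context switch on x86-64 looks roughly like this (a simplified sketch, assuming Intel-syntax inline asm through the C backend; real code also has to deal with interrupt frames, user/kernel stacks, and address spaces):

    # Save the current task's callee-saved registers and stack pointer,
    # then restore the next task's (System V ABI: rdi = 1st arg, rsi = 2nd).
    proc switchTo(currRsp: ptr uint64, nextRsp: uint64) {.asmNoStackFrame.} =
      asm """
        push rbp
        push rbx
        push r12
        push r13
        push r14
        push r15
        mov [rdi], rsp
        mov rsp, rsi
        pop r15
        pop r14
        pop r13
        pop r12
        pop rbx
        pop rbp
        ret
      """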

> Also, any tips if anyone wants to write an OS from scratch as well?

As @deaddod said, you need to read a lot. Two invaluable resources for me were the Intel Software Developer's Manual (SDM)[0] and the osdev wiki[1]. The SDM can be daunting, but it's surprisingly readable. The osdev wiki has great content, but it can be hit or miss. I complement them with various blog posts and code on GitHub to really understand a certain topic.

That being said, the most important aspect of this process is to have tons of curiosity and to be passionate about low-level systems programming. I love to learn how things really work at the lowest level. Once you learn it, you'll discover that there's no magic, and that you can write something yourself to make it work.

[0] https://www.intel.com/content/www/us/en/developer/articles/t...

[1] https://wiki.osdev.org


Read. A lot.

Basically, OSDev is one of the few realms where you can't really take shortcuts. It's kind of like learning Rust, in that you'll have a lot of foundational work until you get some real payoff. Unlike Rust, however, there isn't some cliff where you start getting it; it's a constant uphill trudge.

Learning the boot process of your target architecture, adding core functionality (process scheduling, filesystem/VFS support, IO management, etc), adding driver support, supporting your video device (just getting a basic framebuffer, and then adding each piece after that), supporting USB, supporting CRT and POSIX (if you choose to do so), etc are all herculean tasks of their own.

That being said, it's a super incremental process, so you'll get to watch it grow and expand at each step.

Reading up on the FreeBSD and Linux kernels is a good start, as is reviewing other hobby OSes such as Serenity, ToaruOS, Haiku, etc. And the OSDev wiki is invaluable.

Also, accepting that you probably aren't going to build the next big OS or compete with the big dogs is something you'll have to humble yourself with.


Where are the screenshots?


I just added a few screenshots, including one from the graphics branch that is still a WIP.


Reading the planned work section in the readme, what kind of screenshots are you expecting??


I don't use Nim, but it's an interesting language. I've read someone complaining about having to make changes to an old project every time he went to recompile it. I'm wondering how true this is; in other words, I'm wondering about the frequency and severity of breaking changes in the language.


I haven't faced such an issue. I think the only one I faced was when I upgraded to 2.0: they made `--threads:on` the default, so I had to turn it off, but that's about it.


Doesn't happen a lot, but it does happen from time to time. Of course only if you actually update your Nim version, so it's not like interpreted languages like Python where stuff stops working if a new version comes out and your package manager upgrades it.


Wasn't this about C/C++?


Q: what made you choose Nim over Swift?

(since they seem very similar at this point, yet Swift is more battle tested)


I did consider Swift for a brief moment, but was put off by the fact that support for targeting bare metal was almost non-existent. It would have been more work than I'd like to put into bootstrapping the dev environment.


What made you use Nim in the first place vs. any other language?


I went over this in other comments, but basically the language appeals to me since:

- it's close to Python in syntax (less noise, more readable)

- has no garbage collector by default (it uses ARC)

- has great C interop

- can be optimized through the C backend compiler

- can target bare-metal with minimal effort

- supports inline assembly (see the sketch after this list)

- has a great template/macro system (I do use templates, but I haven't had the need for macros yet)
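
To illustrate the C interop and inline assembly points, a sketch (the classic x86 `outb` port-write helper; illustrative, not necessarily how Fusion defines it):

    # C interop: bind a C function by name
    proc memcpy(dest, src: pointer, n: csize_t): pointer {.importc, header: "<string.h>".}

    # Inline assembly (GCC extended asm via the C backend):
    # write a byte to an x86 I/O port
    proc outb(port: uint16, value: uint8) {.inline.} =
      asm """
        outb %1, %0
        :
        : "Nd" (`port`), "a" (`value`)
      """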

There are probably other reasons, but those are the ones I could think of right now. As for why not other languages: I think the only other languages suitable for this kind of work are C, C++, Rust, and Zig. Here's my take on each:

- C: The mother of all systems languages, but dated, with lots of UB gotchas

- C++: I don't like/need OOP, so why pay the price of C++ complexity and manual memory management?

- Rust: I find the language too complicated for my taste. I know it's subjective, but it just doesn't feel right for me. Also writing a kernel involves a lot of unsafe code anyway.

- Zig: I tried Zig and also found its syntax to be a bit too noisy. Also, having to worry about allocators in most of the code distracts from the core logic I'm trying to focus on.


Thanks for the details.

Any reasons why you would not recommend Nim?

Or things Nim could improve?

(Nim seems like this unicorn of languages, that's completely overlooked. And I don't understand why it's overlooked)


Nim is in part less popular because its BDFL:

- isn't concerned with Nim becoming a mega-popular language; it's enough that it has a successful user base to justify the effort/jobs developing it

- believes that marketing is not required, as people will choose Nim based on its superior features alone. (This is in large part true up to the point where people learn it exists, which is where marketing is valuable.)

There are genuine shortcomings, of course. Every language has them.

It is something of a unicorn though. It's so pleasant to use at many levels of the stack from bare-metal to applications. People even use it for games.


I had to think hard about this, and I couldn't come up with something so obvious that it would be a no-brainer not to use Nim.

__MatrixMan__ and SJMG have good points. If anything, it wouldn't be a technical issue in the language. It's like a Tesla when it first came out: it was obvious that EVs were the future, but they had less investment, less adoption, not enough charging stations, shorter range, and not a lot of certified repair shops. So naturally a lot of people held back until they became mainstream.

The issue is, will it become mainstream one day? Maybe, maybe not. But I'm betting on it myself.


If Nim went to the Olympics, it would get a bronze or silver medal in five or six different areas, which is very impressive indeed. I think it's overlooked because gold is what draws attention.

If you wear enough hats to appreciate Nim in more than one of its dimensions, your attention is necessarily split enough to not evangelize so loudly for it in any one. People don't want practical, they want provocative.


Thanks. Agree on Rust. Liked Zig at first, but it seems they lost focus and started adding syntactic sugar, which just complicates things, so I lost interest.


Q1: Overall, are you happy with your choice of using Nim?

Q2: What would you do differently? (And what unexpected positives did you find?)


> Overall, are you happy with your choice of using Nim?

Yes. It's a pleasant language to work with.

> What would you do differently? (And what unexpected positives did you find?)

A couple of things I think need improvement: (1) better IDE support (especially for JetBrains IDEs), and (2) better support for true sum types and pattern matching[0].

As for unexpected positives, I found that the standard library covers so much functionality that I rarely (if ever) need a 3rd-party package. Maybe that's because I'm not doing anything exotic.

[0] https://github.com/nim-lang/RFCs/issues/548, https://github.com/nim-lang/RFCs/issues/525


I heard that null is a valid value for objects in Nim in most situations. Is that correct?

I like languages that disallow null by default (e.g. Rust, OCaml etc) because it seems to be a huge source of errors.


No, it isn't. The type must explicitly be a nullable type or pointer to be nullable. All other values must have a valid initialization. Anything not marked as a pointer or ref is by default managed by its scope. This includes dynamic types like seqs and strings, which are pointers to heap memory but managed by the stack scope and deallocated upon leaving scope.
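
Concretely (a small sketch):

    type Node = ref object   # `ref` types can be nil
      next: Node

    var n: Node        # starts out nil; must be checked before dereferencing
    var count: int     # value type: never nil, zero-initialized
    var name: string   # also a value type in Nim; starts as ""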


To be honest, I haven't found this to be an issue (yet). I try to keep most of my types value types (cannot be null), which the compiler can pass by reference under the hood if it detects that it's too big, without compromising memory safety.


I use the Options module which has a none/some check. None is the absence of a value. You can test for this quite easily and I see it as a feature, not a bug.
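
For example (self-contained):

    import std/options

    proc findIndex(s: seq[int], x: int): Option[int] =
      for i, v in s:
        if v == x:
          return some(i)
      none(int)

    let idx = findIndex(@[3, 1, 4], 1)
    if idx.isSome:
      echo "found at ", idx.get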


Your blog/docs are excellent. Perfect balance of showing and telling. Thanks so much for taking the time to share what you're doing like this.


Thanks! Actually, writing has helped me so many times in improving the design and implementation. It forces me to question my assumptions, and ask myself: would the reader be able to understand why I made such a decision? I have to justify everything I do, which has helped me remove unnecessary complexity and focus on the more important aspects.


May I ask what kind of blogging engine/site generator you used for the docs?


I use VuePress[0]. You can find the source code for the site here[1].

[0] https://vuepress.vuejs.org

[1] https://github.com/khaledh/khaledh.github.io


Thank you!


Development journal of Fusion’s author:

https://0xc0ffee.netlify.app/osdev/01-intro.html


Nice, I love to see stuff like this. I've been an on-again, off-again Nim "ecosystem guy" for several years. It's great to see this delightful little project is still chugging along.


Nifty! Fun to pull up the module for ELF and have it be so easy to read.

Some day I want to write an RTOS in Nim. I enjoy writing embedded programs in Nim and it’d be fun to make an RTOS.


That would be great! I'd love to follow along if you ever decide to build an RTOS.


To either of you: whenever you're doing something new from scratch, it can be useful to consider the granularity of the provided abstractions & services, which khaledh seems to be doing. I see Fusion only has about 8 syscalls presently. It's not in Nim, but along these lines ("how much" individual calls do), you might want to consider an approach like this: https://github.com/c-blake/batch to amortize the costs of crossing expensive call boundaries.


Batching syscalls is on my mind. The architecture of Fusion will revolve around channel-based IPC (both sync and async), including between user mode and kernel mode. The end state I'm aiming for is an async channel for syscalls: the user task issues syscalls, which get buffered in a queue, and the kernel processes the queue asynchronously.

For this to work properly, user tasks need to be able to respond to completed syscalls asynchronously as well. That's why my idea is that user tasks should be modeled as state machines, with channel-based events as the core mechanism by which code gets executed in a deterministic manner. The equivalent of Unix signals (which many consider one of the bad aspects of Unix design) would be receiving events on one or more channels for various purposes (e.g. IO completion, timers, abort, interrupt, GUI events, etc.).
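
In rough, hypothetical pseudo-Nim (using Nim's thread `Channel` as a stand-in for Fusion's IPC channels; none of these names are Fusion's actual API):

    type
      SyscallKind = enum skRead, skWrite
      SyscallReq = object
        id: uint64
        kind: SyscallKind
        args: array[4, uint64]
      SyscallResp = object
        id: uint64
        status: int64

    var submitCh: Channel[SyscallReq]     # user -> kernel
    var completeCh: Channel[SyscallResp]  # kernel -> user
    submitCh.open()
    completeCh.open()

    # user task: submit without blocking; the kernel drains the queue
    submitCh.send SyscallReq(id: 1, kind: skRead)

    # completions arrive later as events that drive the task's state machine
    let (ready, resp) = completeCh.tryRecv()
    if ready:
      echo "syscall ", resp.id, " -> status ", resp.status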


Ack! I was just about to write a blog post on this idea!

Good minds think alike, I guess.

But as a sibling comment says, this is essentially what io_uring is. Read Lord of the io_uring [1] if you want to know more. Polling mode is the key.

[1]: https://unixism.net/loti/index.html


You might both still be interested in the tiny (EDIT: 27 lines! https://github.com/c-blake/batch/blob/9c7e07670ef1fd0e98687c...) little "virtual machine interpreter" I linked to. Per-interpreter loop overheads are below 100 CPU cycles on several CPUs, on a generation of hardware similar to what https://www.usenix.org/system/files/atc20-gu.pdf mentions as 700-1000 cycles for microkernel IPC latencies. I'm not sure the idea coheres with io_uring-style dispatch (especially the emphasized polling mode), though... maybe with some work.

The reason I mentioned it after elcritch's RTOS mention is partly that the little interpreter has no backward jumps, which means there are no loops and so no halting-problem issue. So you can still embed conditional error-handling logic in system call batches, but the overall call is bounded by the time of its component calls (if they are bounded, anyway...). That might be a useful property for higher levels of the system to guarantee meeting a real-time budget in many situations with very low complexity. I'm not sure if any of this is original to that GitHub repo, but I haven't seen it before in this specific context.

Perhaps the most complete example of "adding a new sys_batch-based syscall" is https://github.com/c-blake/batch/blob/master/examples/total.... which adds a `mapall` that can mmap a whole file or fail trying (at a couple points) for the purpose of just totaling the bytes as an example calculation.


Sounds interesting - kind of like microkernels meet io_uring (in Elevator-pitch-ese).


ELF is a very simple file format. I would be surprised if it were difficult to read...


What is Nim, and what is the overarching design goal for Fusion? Thanks.

I'm hoping these questions aren't too basic, I have no context whatsoever for understanding this so hope someone can explain.


As others mentioned, Nim is a statically typed programming language that compiles down to C, C++, and JavaScript. It has great C interop, which makes systems programming easy. As for why Nim, here's an excerpt from my accompanying site[0]:

> Why Nim? It's one of the few languages that allow low-level systems programming with deterministic memory management (garbage collector is optional) with destructors and move semantics. It's also statically typed, which provides greater type safety. It also supports inline assembly, which is a must for OS development. Other options include C, C++, Rust, and Zig. They're great languages, but I chose Nim for its simplicity, elegance, and performance.

As for the overall design goals of Fusion, I have high ambitions, which I list on the same page I referenced. I don't want to build another Unix-like OS; I'd like to experiment with fundamental issues in OS design, such as using a single address space and capability-based security for protection. Another aspect I'm trying to explore is how processes/tasks are modeled, which I believe should be as state machines with statically-typed channels to communicate with each other (this is not new; it's been done in Singularity OS[1]). There's rudimentary support in the kernel for channels and using them from user space, but it's still early.

[0] https://0xc0ffee.netlify.app/osdev/01-intro.html

[1] https://en.wikipedia.org/wiki/Singularity_(operating_system)


Are you considering user-level abstractions other than files? Perhaps a plan9-like everything-is-a-file-system?


A file is a manufactured concept that has served us well for a long while. But I'm not convinced that it's the right abstraction for everything. Files are just binary blobs (even if they're text), with no uniform interface to interact with them other than open/read/write/close. In order to compose processes around files, they have to agree on a certain format (e.g. lines of text for most Unix commands); there's no structure other than what each process assumes it to be.

My idea is that processes should compose in a statically typed manner, where opening a channel for reading gives you a Channel[T] to read entities of type T. Two sub-abstractions of Channel[T] would be Source[T] and Sink[T], where they can be used to read and write to any source/sink (including files) as long as there's a registered (de)serializer for T.
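
A rough sketch of the shape (hypothetical names, stubbed bodies, just to convey the idea):

    type
      LogLine = object
        level: string
        msg: string
      Source[T] = object   # read end of a typed channel
      Sink[T] = object     # write end

    proc recv[T](src: var Source[T]): T =
      # deserialize the next T from the underlying transport (stubbed)
      discard

    proc send[T](snk: var Sink[T], item: T) =
      # serialize T and write it out (stubbed)
      discard

    # processes compose on types, not on assumed byte formats:
    proc errorsOnly(input: var Source[LogLine], output: var Sink[LogLine]) =
      let line = input.recv()
      if line.level == "error":
        output.send line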


This sounds like a great idea to me.

Have you played at all with nushell? It's fun in a composing-processes-via-types kind of way, although it's a bit more on-the-surface than what you're describing.

My impression is that much gets done via builtins; you have to start writing nushell plugins for everything if you want to extend the fun to arbitrary programs which nushell knows nothing about (unless you're happy with JSON I/O, but you're talking about static typing here).

It sounds like the source/sink type registry that you're describing would solve that problem in a much nicer way.


I haven't used nushell (although I'm aware of it). I know the idea of piping statically-typed objects between processes is not new (PowerShell has had this since 2006). But the problem is that it's confined to the shell, i.e. you can't do this kind of composition outside the shell.

My idea is that everything in the system should be able to use statically-typed channels, including syscalls and calls into system services. This opens up the possibility of, for example, composing GUIs in a manner similar to writing a shell script.


I bet it would do wonders for observability. I'd love to be able to enable a custom visualizer on some data stream without having to do any explicit plumbing besides ensuring that the types match.

I'm rooting for you.


This sounds similar to some of the concepts in the robotics framework I'm writing. It's pretty powerful to be able to separate the transport a piece of data uses from its serialization.


You might enjoy reading about Ethos's Etypes. My understanding is that they punted a bit and their design still has byte streams, but the operating system provides helpers to parse them and injects filters to guarantee well-formed input/output, including over the network. They failed to produce anything but papers, but the papers are interesting.

https://www.ethos-os.org/~solworth/petullo14ethosTypes-20140...

https://www.ethos-os.org/


Nim is an awesome language that feels inspired by Ada and Python!


But it's secretly also a quirky Lisp, to quote Andreas himself.


Its macro system certainly makes it feel that way!


I'd say it's more so the referentially-transparent, expression-based programming by default. You kind of have to opt in to imperative programming, and it actively disincentivizes OOP.


I think it is better described as C with metaprogramming friendliness.


From the documentation of the project: https://0xc0ffee.netlify.app/osdev/01-intro.html


Seeing more projects in Nim makes me happy. I'm a (mostly) Python and JavaScript programmer who is interested in the benefits of also knowing a modern, fast, statically-typed language. Among a candidate list of Go, Rust, Zig, or Nim, I like Nim the most. It feels the most "Pythonic" in the sense of very little syntax clutter when I'm reading code. I also love, love, love using a REPL to prototype new code, and INim does it well. The biggest problem with Nim currently is its small community size, which makes the universe of available and maintained software libraries smaller than in the other language communities. It's a chicken-or-egg problem, but can be solved by more devs (including me!) being "the change you want to see in the world".


To be honest, I haven't found the community size to be an issue. The Nim forum[0] has a vibrant community and is the place I go for help; the response is usually quick and on point. The language is also evolving in a careful manner; with Araq at the helm, I think it's going to be even better in the long term.

As for the ecosystem, yes, it's not as big as Python or Rust, but surprisingly the standard library has most of what people need. I rarely look for 3rd party packages to do something.

That being said, I acknowledge that Nim is on the lesser-known end of the spectrum, but that doesn't take away from its merits as a very promising language that does what it's supposed to do very well.

One thing I think the community should focus on more is IDE support. The VSCode extension is good, but has some rough edges. I also prefer JetBrains IDEs, and the official Nim plugin is very lacking to say the least. I have another side project to create a JetBrains plugin for Nim[1], but I haven't gone far with it yet.

[0] https://forum.nim-lang.org

[1] https://github.com/khaledh/nimjet


My world: I need to use OpenCV. The existing OpenCV bindings (nim-opencv) haven't been touched in years because the author left* the Nim community. (And that really stinks! It was created by dom96, who also created Nimble, Jester, and a ton of other useful stuff in the Nim world.) ... So... I created my own OpenCV bindings and published it (https://nimble.directory/pkg/mvb). But they're minimal because I'm just one dude and haven't had the time to complete the bindings (either manually, or ideally, using an automated binding generator tool). I will.. eventually.. I hope! Meanwhile, OpenCV bindings for Rust and Go are robust and well-maintained.

Now I'm playing around more with Nostr (and the Lightning Network)... Nostr libraries for Nim are not as complete or well-maintained as those in Rust, Go, or even Python, etc.

I'm not letting that stop me from using Nim for my projects... I love Nim! But it does mean I have more work to do (and code to maintain). I can make that choice because I'm my own boss and run my own company. But I could see others not making the same choice for rational reasons.

* And dom96 left, unfortunately, because of harassment and abuse, which is another possible reason why Nim isn't as well adopted as Go, Rust, etc. If people want to see Nim succeed more, they also need to focus on improving community safety, too. https://news.ycombinator.com/item?id=38999296


Fair enough. Keep in mind also that Nim is not backed by big tech, which is both a blessing and a curse. The community hasn't reached a critical mass yet to take Nim into the mainstream. All I can say is ... "be the change you want to see in the world."


Yup... The current size of the Nim community reminds me of the perceived size of the Python community when I first started using Python around 2000-2001. (I'm still amazed how popular Python is now. People said it would never happen because significant whitespace was such a fatal dealbreaker!) Back then, all I needed to get going was the book Learning Python (1st Edition) by Mark Lutz. Lack of massive corporate support didn't stop me. Python didn't really have the same kind of "Big Tech" corporate backing like Java had from Sun, though Python did have just enough to keep going. Google formally supporting Python internally definitely helped a lot. It feels like Nim is one similar ("Google uses Python!") announcement away from getting on a similar growth path.


Nim is a great systems language, and should be more popular.


Maybe it's just because I'm getting older, but I struggle to read Nim when I'm forced to use two spaces for indentation. I can barely make out each block of code, and I don't like to rely on my IDE to make a language readable.


I use it at work for an important CLI tool that backs a number of our systems. My style is almost exactly like Python. Four-space tabs, snake_case, my own rules for indenting parameters and keywords. You do not need to use the official Nim style, you can absolutely write it like Python, Ada, however you like.


The compiler seems to enforce spaces? I did see some code that replaces tabs with spaces, but it feels dirty having that in every single source file.


Maybe I was unclear, I guess I should say my indents are 4 spaces. Never tried it with actual tabs so I can't say.


I was also a bit taken aback by the 2 space indentation. But after a bit of practice I got used to it.


Maybe, but it's just so hard for me to read. Also, my IDE isn't all that fancy, so a tab is a tab; I'd have to space it all myself...


How does this compare with TempleOS? Sounds quite similar what with the single address space.


TempleOS was written by the greatest programmer who ever lived.


TempleOS is not real mode.


So? Idols are bad.

I'd rather discuss the deeds.


For anyone who didn't get the reference: it's a claim the author of TempleOS used to make about himself.

https://youtu.be/o48KzPa42_o?si=s8_t4ReysLuhU_Ub


It's fun how a comment that felt hostile suddenly feels very friendly with such a piece of information.


The problem that Nim has, and why many are afraid to go with Nim, is case and style insensitivity.

Is_land == island == IsLaND == is-land

It is bad in a team setting, in real-world projects.

How does it stand now? Last time I checked, the main dev refused to do anything about it, against the popularity vote on GitHub.

Otherwise, awesome project and documentation, Fusion OS.


It is so that you can use external libraries with your preferred style without having to convert.

It is a pretty amazing feature. Your problem is just imaginary. A consistent case style should always be enforced, regardless of whether you have a case-insensitive language or not. There is no real-world case where you would want is_land and isLand to both exist in your code as separate variables.


There is a lot of code that has a class in title case, like `class RestController`, and an instance `restcontroller = RestController()`; you can check around the GitHub world.

How would that be affected? I haven't touched Nim since the decision to keep those insensitivities.


That should work fine, since the first letter is still case-sensitive in Nim.


I agree that it's unusual (and likely scares off some), however it's not entirely case insensitive. First, dashes/hyphens (`-`) can't be part of identifiers. Second, the first character of an identifier is not case insensitive.

So:

FooBar != fooBar

FooBar == Foobar

Most of the developers in the community are ambivalent about it, because it rarely ever causes problems. If you end up misspelling an identifier, you're nearly always going to get a compile-time error due to static typing anyway.
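
In code:

    let fooBar = 1
    echo foo_bar    # same identifier: underscores and non-first-letter case are ignored
    # echo FooBar   # compile error: the first letter IS case-sensitive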


Only the first letter being case-sensitive is a major strike against readability, one of the four major pillars. While I'm sure the Nim developers are probably used to it by now, it just seems like a bad design decision that Nim is probably burdened with as the result of legacy/interoperability.

Even just reading your foobar example at a glance took a moment for me.

And case insensitivity is also generally frowned upon. To have a language with both sensitivity and insensitivity is the worst of all worlds with none of the benefits.

If you want to understand why at a deeper level, I would recommend reading the readability or case-insensitivity sections in any programming languages book. Personally, I enjoy Programming Languages: Principles and Practice (Louden & Lambert).

EDIT: Yes, I get it, it doesn't affect YOU. But it doesn't mean it doesn't affect other people. Non-english languages and/or speakers are an easy example. It also eliminates a whole class of human error, and maybe that only affects non-experienced juniors, but they exist too. There are other issues with symbols being case insensitive and string values being case sensitive. If you want a practical example a classic one is HttpsFtpConn vs. HttpSftpConn


As a PowerShell user, I have never had an issue with case sensitivity at the language level, as sigils provide separation of concerns between language constructs (keywords/variables/types). You're using an IDE with autocomplete most of the time, and many other languages have linters/formatters.

All I have personally experienced of case sensitivity is an added layer of friction any time I go to use a REPL for Bash/Python/JavaScript/etc., or some awful ‘allowercasewords’ style getting cemented in place, barring a total refactor, since you can't correct files piecemeal.

And case sensitivity in the language doesn’t even help with case sensitivity at the OS level when you’re writing cross platform code =/


The theory says that it hinders readability, but in practice it doesn't. Nim has a prescribed style, and if you use the linter when compiling, your code has a consistent style.

Like cardanome said, in practice it's awesome for FFI.


Very good example. I didn't even know about first-letter sensitivity with the rest insensitive. That's a real big problem.

Also, since Nim is very niche and used by a very small percentage of the world, they haven't encountered much production-scale coding. Well, that may be the reason Nim never gets past weekend hobby projects...


> Only the first letter being case-sensitive is a major strike against readability

…How? Do you find code more readable when there are two different names that differ only in the capitalization of a non-first letter?


You can also enable --styleCheck:usages, which warns about casing inconsistencies in your code. (So it catches mistyping FooBar as FooBAr.) Then the only difference from other languages remains that you can't use weird casing differences for separate symbols, e.g. you can't name two separate variables fooBar and foo_bar, which you wouldn't normally do anyway.


Ha, this always happens with case insensitivity. It's case-insensitive... except for some situations you now need to remember. I believe PHP has this issue too.

It isn't a good look that they made the same mistakes as PHP.


Nim has a good reason to do it: library interop. You can use a third-party library that for some reason uses snake_case, while you follow the camelCase that NEP-1 recommends. You just do it, and your codebase is all in the same style. In fact, if the third-party library updates its casing, it doesn't break Nim code that depends on it (that happened, for example, to Python's Selenium, when it went from the camelCase inherited from Java to the snake_case recommended in PEP8, forcing all dependent code to update).

I personally don't like that you can have "is_OK" and "isok" and "is_ok" in the same code as three valid spellings of the same thing. Or having "GL_FLOAT" and "GLFloat".

Both options come with tradeoffs. Don't jump so quickly to "that is a mistake"; give the devs the benefit of the doubt. There is only one rule you need to remember in Nim: only the first character's case matters.


That's not a good reason. Why not just standardise the entire ecosystem on the same style? If you think that sounds infeasible, consider that Rust, Go and even Python have done this with no problem.


That sounds very close to "Rust is useless: why not just code in C without memory bugs?"

As mentioned, a few years ago Selenium had its methods in camelCase. If your code used Selenium, it had camelCase and snake_case mixed. When Selenium standardized, it forced everyone to switch to snake_case.

Nim puts great effort into FFI. It means you can easily use C libraries, with their names, even if the casing doesn't match, and your code is still coherent. Look at the sample code at https://www.py4j.org/index.html: why do they end up with "random.nextInt(10)" in their Python code? Didn't Python have this solved? Not saying this is the end of the world, but the Nim way is not a mistake either.


Because an important feature and focus is that Nim compiles to C and makes it easy to just import and use C libraries. So many of the 3rd-party libs mentioned are NOT technically part of its ecosystem. There is at least one thread on the Nim forum that extensively explains the reasoning behind the decision in much better detail and pretty thoroughly debunks this “problem”.


Show me how Rust, Go, and Python automatically rename a foreign library's types and functions to match your language's style.


The bindings generator that produces the FFI glue code does it so everything comes out in native style.


Python literally has modules in its standard library that violate the PEP8 naming conventions.


That is even more unexpected.

FooBar != fooBar

FooBar == Foobar

That could cause a lot of headaches in a large code base...


I've seen Delphi ERP projects worked on by dozens of people, and the case insensitivity of Pascal was never ever the issue. You choose something and stick to it. The concerns and fears are mostly due to inexperience.


True, but in Delphi's case, _ and - aren't part of the case insensitivity; that is probably a bit too far.


And no such thing as first-letter-sensitive identifiers exists in other languages.


Ends up being a total non-issue in the real world. I use it for a mission-critical CLI application called by a number of backend systems. I use snake_case for virtually everything save for object type names. There is no global scope and modules keep most symbols segregated. I have yet to have a symbol collision issue. I get to use foreign package symbols in my own style. On multi-dev projects I'd expect a style to be enforced, and given module segregation, a collision should be fairly simple to suss out. Especially if you use `from module import nil` and force module name qualification.


While I don't like it either, at the end of the day it's just a detail. All languages have their issues.


I think this problem can be solved using either a linter or a formatter-like tool that makes naming consistent before the code gets committed.


The Nim compiler supports a style-enforcing flag, `--styleCheck`, that can display hints or error out at compile time.
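
For example (hypothetical file name):

    nim c --styleCheck:hint main.nim    # hint about inconsistent identifier style
    nim c --styleCheck:error main.nim   # turn those hints into compile errors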


Do we have a proper linter for that now?



