How a Zig IDE Could Work (matklad.github.io)
213 points by todsacerdoti on Feb 10, 2023 | 98 comments



Was hoping the author found a solution to (in my view) Zig's biggest shortcoming with generic types: a parameter that accepts all instances of a generic type must either (a) be declared as "anytype" or (b) also require a comptime parameter for the generic type argument. For example, the standard library passes around instances of std.io.Writer as anytype, so zls auto-completion doesn't work.
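
A minimal sketch of the two options (hedged; the function names are made up):

  // (a) anytype: each call site gets its own specialization, but the
  // IDE has no concrete type to complete against.
  fn greet(writer: anytype) !void {
      try writer.writeAll("hello\n");
  }

  // (b) explicit comptime type parameter: a type the tooling can see,
  // but every caller has to thread it through.
  fn greetTyped(comptime W: type, writer: W) !void {
      try writer.writeAll("hello\n");
  }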

Also see: comptime interfaces proposal https://github.com/ziglang/zig/issues/1268


Interfaces are the most important thing Zig needs to implement ASAP

It would also help create a uniform standard library, for example by creating a Writer interface instead of having specific functions for files, sockets, buffers, and all kinds of types


How are you going to implement interfaces if Zig doesn't have classes in the first place?


Go has interfaces and doesn’t have classes.


For the equivalent thing (interfaces with runtime dispatch and without owning the underlying struct) Zig has explicit vtables. I sort of think any new "interfaces" feature should at least partly be about describing constraints on types at compile time.
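
A rough sketch of the explicit-vtable pattern (loosely modeled on std.mem.Allocator's ptr-plus-vtable layout; all names here are invented):

  const Greeter = struct {
      ptr: *anyopaque,
      vtable: *const VTable,

      const VTable = struct {
          greet: *const fn (ptr: *anyopaque) void,
      };

      fn greet(self: Greeter) void {
          // Runtime dispatch through the function pointer.
          self.vtable.greet(self.ptr);
      }
  };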


I think adding interfaces that are simply enforced at compile time would be a good start.

You could create a new type like Dynamic(InterfaceName) that handles dynamic dispatch somehow


There's quite a bit of mildly hairy runtime code necessary to achieve this.


I agree with that, the language needs some love in that area


Context: Matklad is the main dev behind rust-analyzer


He used to be. He recently left the rust-analyzer and rust compiler development teams to broaden his horizons, and is working full-time on a project that uses Zig now (which no doubt sparked this thought experiment).


Do you have a source for that? I'd be interested to read more about it



Interesting, but it doesn’t say matklad is stepping down from rust-analyzer


Apologies, I accidentally confused matklad with jonas-schievink, a different r-a contributor who recently became a Rust project alumnus [1]. It seems matklad is still active on r-a.

[1]: https://github.com/rust-lang/team/pull/931


And a major dev behind IntelliJ Rust


This is very different from most static languages.

From the article:

The whole process is lazy — only things transitively used from main are analyzed. Compiler won’t complain about something like

    fn unused() void {
        1 + "";
    }
This looks perfectly fine at the Zir level, and the compiler will not move beyond Zir unless the function is actually called somewhere.


That is stark raving insanity, and it highlights why there is a whole submission about making a Zig IDE. So I guess if one wanted to do any refactoring, they'd better first make sure every code path in the app actually gets analyzed.

And look, I can appreciate lazy evaluation being awesome in some circumstance, but `fn wat() void { 1 + ""; }` just spits in the face of static typing


A nice thing you get in return, though, is that for separate platforms you can disable code in the language itself; it doesn't need a preprocessor or magic handling in the build system.

It's also not nearly as bad as a dynamically typed language: you might have broken code you won't realize until you try to use it, but you'll always realize at compile-time.

In dynamic languages you refactor and don't notice you broke stuff until runtime!


> A nice thing you get in return, though, is that for separate platforms you can disable code in the language itself; it doesn't need a preprocessor or magic handling in the build system.

I don’t know if I consider this a nice thing when thinking about it from a systems language lens. When working at the low level, I like the immediate clarity and compile-time feedback that preprocessor-like checks bring. They’re not perfect, but I think I prefer that instead of relying on laziness.

> It's also not nearly as bad as a dynamically typed language: you might have broken code you won't realize until you try to use it, but you'll always realize at compile-time.

This is correct, but to me the elephant in the room is that this makes it much easier to introduce broken technical debt into a codebase. This likely means that when conducting large-scale refactors in a Zig codebase, as the amount of potentially broken code increases, the count of hair strands remaining in one's scalp decreases.

Zig is an awesome language, but frankly, I think compiler laziness like this is pretty odd for a statically-typed systems language and not the behavior one would intuitively expect.


Uncommenting a line for a platform-specific feature isn't really any more difficult than decorating a function, adding a pre-processor gate, or any of the other normal ways of doing it.


More difficult? Maybe, maybe not. But the way Zig does it is simpler in that there are fewer languages you have to know: you don't have to learn the C preprocessor (C/C++) or magic +build tags (Go) or whatever; it's just normal conditionals in the same language.

You might [reasonably] say you don't care about that kind of simplicity, which is fine; but radical simplicity is definitely one of the major charms of Zig.

(And you don't use the Zig feature by commenting/uncommenting; it's via inspection of the environment/flags in the comptime code.)


If you're already concerned with, and developing, platform-specific code then the extra knowledge isn't really that hard to grasp.

Moreover, I can grep for preprocessor defines, attributes, and such by name.

There's no way to grep a project for code commented out to prevent execution on some platforms.


It’s not necessarily simpler than an explicit hygienic macro system or similar, since comptime is a complex feature with surprising consequences. Mostly it moves the complexity around.


I work in a language with hygienic macros and in Zig, and Zig's way is simpler and more constrained.


So how does Zig handle platform-specific stuff that cannot be compiled on other platforms? Is it an `if` resolved at compile time causing the uncompileable code to be unused and disappear after the Zir stage?

You don't need a full preprocessor with macros and other evil stuff. Just #define and #ifdef would solve this without allowing wrong code that randomly disappears. And learning and understanding these two directives requires zero mental effort.


> Is it an `if` resolved at compile time causing the uncompileable code to be unused and disappear after the Zir stage?

Yes. It's exactly that, because Zig very aggressively evaluates values that are known at compile time. It's as simple as:

  if (builtin.os.tag == .linux) {
      // Everything here only exists if compiling for Linux.
  }
The standard library uses this quite a lot for OS/CPU-specific things.


That's elegant. It's also not so different from a preprocessor gate or a function attribute.


I don’t agree that it’s elegant. A preprocessor directive or a function attribute makes it more obvious that the code it covers may be valid only in some conditions. On the other hand, if the language optimizes some `if` statements out, you may have invalid code without noticing — and not only in the case of OS-specific code, but possibly also regular code whose `if` condition became a compile-time constant, even unintentionally/temporarily/by mistake.


That's the nice part: it can eliminate switch cases, for loops, while loops, `and`, `or`, and all forms of conditional control flow if the condition is comptime-known, without changing the behaviour of the code. This also plays well with `inline for`, `inline while`, and `switch (x) { inline else => |c| {} }`, which generate code where the captured value is comptime-known; if comptime branching weren't dealt with as such, then you'd end up with code bloat when using the `inline` forms.

If it's a comptime constant by "mistake" then it would have resulted in the same wrong behaviour at runtime.
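
For concreteness, here's a minimal sketch of the `inline else` case (an assumed example, not from the thread; `Value` and `payloadSize` are invented names):

  const Value = union(enum) { small: u8, big: u64 };

  fn payloadSize(v: Value) usize {
      switch (v) {
          // One prong is generated per variant; in each prong the captured
          // payload's type is comptime-known, so @sizeOf folds to a constant.
          inline else => |payload| return @sizeOf(@TypeOf(payload)),
      }
  }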


I think it’s elegant, but it seems to me that the check shouldn’t be evaluated until after type checking. But I’m probably missing something about the way Zig works.


This won't work because the conditionally-compiled platform-specific code is expected to not typecheck (or to use fields that do not exist, etc.)

The same feature is also used for specializing generic containers to specific types (an idea that became famous after the resounding success of std::vector<bool>) so that for example arraylists of u8 expose a Writer but arraylists of f64 do not. Most code that is conditional on T == u8 won't typecheck if T != u8.

I guess more generally these features are used for implementing fmt, JSON encoders and decoders, etc. None of these things would work if code conditional on the type had to typecheck for all types.
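
Roughly what such a specialization looks like (a hedged sketch, not the actual std.ArrayList code):

  fn List(comptime T: type) type {
      return struct {
          items: []T,

          // Lazily analyzed: List(f64) compiles fine as long as
          // nobody actually calls asString on it.
          pub fn asString(self: @This()) []const u8 {
              if (T != u8) @compileError("asString requires List(u8)");
              return self.items;
          }
      };
  }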


I think types are generated via const, so that wouldn't be possible.


C++ has the same basic thing, but in a better way, with `if constexpr`. There you're opting in to lazy evaluation and can scrutinize it better, without all your branches everywhere being lazily evaluated such that wrong code is allowed.


Can one guarantee this? It makes me think of constexpr in C++, which does something similar and it makes things really confusing about when something is, isn't, or should be constexpr (is_constant_evaluated/is_const).


It's guaranteed for any values that are comptime-known.


Zig not having a text-based preprocessor is a feature, not a bug, though. From the POV of a multiplatform coder who's used to excluding platform-specific code with #ifdef/#endif, Zig's behaviour is completely logical (because that's the only way it can work without a C-style preprocessor).


Yes, this is the logical way of doing it without preprocessor.

At the end of the day, this is pretty equivalent. Not sure if this is a net gain or not.

Conditional compilation is needed; moving the feature from the preprocessor stage to a lazy evaluator is probably costlier for the compiler, and it might also be costlier for the IDE to highlight active/inactive parts of the code.

From the user point of view, this is very close, mostly a cosmetic change.


It seems like a nice balance might be a compiler config that accepts a list of paths to unconditionally compile, so that users can enforce stricter checks in their own project without forcing authors of their dependencies to do the same.


Doesn't `std.testing.refAllDecls` do this?


This does it, but you might still find compile errors later than you'd like in comptime-heavy code.
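
For reference, the usual idiom is a test along these lines (std.testing.refAllDecls is real; there is also refAllDeclsRecursive for nested containers):

  const std = @import("std");

  test {
      // References every declaration in this container, forcing the
      // semantic analysis that laziness would otherwise skip.
      std.testing.refAllDecls(@This());
  }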


It's the carrot to get you to write tests. You're writing tests, right?


> compiler config

How is this better than a line in the language itself?


Even better


It's pretty much a necessity for conditionally compiled platform-specific code if you don't have a text-based preprocessor with #ifdef/#endif.

All statically compiled languages need to skip compiling such code one way or another. Zig just takes the idea to its logical end and doesn't compile any dead code. I guess the recommended workflow is to have good test coverage. Tests basically turn dead code back into live code.
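
For example, a test can pull otherwise-dead platform code into analysis (a sketch; `windowsOnlyThing` is invented, while error.SkipZigTest is the standard way to skip a test on non-matching targets):

  const std = @import("std");
  const builtin = @import("builtin");

  fn windowsOnlyThing() u8 {
      // Imagine Windows-specific code here; it is only analyzed once
      // something, such as the test below, actually references it.
      return 42;
  }

  test "keep windowsOnlyThing alive" {
      if (builtin.os.tag != .windows) return error.SkipZigTest;
      try std.testing.expectEqual(@as(u8, 42), windowsOnlyThing());
  }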


This is reminiscent of C++ templates that fail with inscrutable errors deep in their implementation (only) when instantiated for particular types. I’m sure there is a better way.


It's kind of not. One way to think of it is roughly as follows: Zig is statically typed at runtime. It's dynamically typed at compile time.

Anyway, it's not entirely crazy to think that maybe this sort of thing should be the province of static linters.


One would just run tests and rebuild for the supported targets? That will hit all comptime branches that you care about without needing to do anything else. I don't see how this is much worse? What you seem to be looking for is an "unused function" (https://github.com/ziglang/zig/issues/335) error instead as it catches that case unless you use the function in a branch that's not part of your target set (truly dead unused code).

There's support for invoking with QEMU, Wine, etc. in the build system, which allows you to run tests for other platforms.

A case where I've taken advantage of this is to have data structures and code adjust based on the expected page size and cache-line size. There are also cases where things may be both comptime- and runtime-known depending on which platform you target, which is easy to handle by having the maybe-comptime value come first in conditional branches.


Yeah, it looks like Zig is a bit overly lazy.

As a comparison, what C++ does is differentiate between "dependent" and "non-dependent" expressions, where "dependent" means that it depends on some template parameter (analogous to a comptime parameter here). "Non-dependent" expressions are type-checked eagerly, while dependent expressions are only checked at instantiation time.

The design is certainly more complicated, even more so when you consider function overloading and name lookup, which can give different results if done late or early. But there are some upsides, and Zig wouldn't need to deal with the additional complications of overloading.

edit: https://eel.is/c++draft/temp.dep


Well it's basically like C macros. The C compiler won't see any code that's behind a deactivated macro. Same thing for zig here, except you don't have to deal with an unreadable macro language full of pitfalls. All is done in the language itself.


Getting rid of macros (and all of their well documented pitfalls) is good.

Getting rid of the preprocessor and conditional compilation may not have been necessary (as the feature is needed anyway and has to be implemented in a different manner).

The thing is that comptime, even if better than macros because everything is written in the same language, might end up not being much better than C macros.

Many of the macro pitfalls still apply, if you think about it...


I sort of like that Zig is lazy here, because in C++ a bunch of stuff that works in Zig just doesn't work.

In this line[0] a radix sort selects a function to read the next 8 or 16 bits of the input words. The C++ version of this code has to have readOneByte and readTwoBytes take an additional template parameter to control the return type:

  static constexpr auto readBucket = (BYTES_PER_LEVEL == 2) ? readTwoBytes<idx, udigit> : readOneByte<idx, udigit>;
because both parts of the expression are evaluated by the compiler and must be the same type, including the function's return type. This means we end up instantiating nonsense functions like readTwoBytes returning u8. Elsewhere in the same file, some should-be-impossible template instantiations have had their static_asserts removed, because C++ will instantiate them and then not insert any calls to them into the generated code. So one cannot use static_assert to say "If this is reachable, that's a bug, please fail compilation and return an error", because template instantiation and execution of the static_assert does not imply that the function won't be immediately discarded as unreachable.

[0]: https://github.com/alichraghi/zort/blob/main/src/radix.zig#L...
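
For contrast, the Zig side of that selection is just an if expression with a comptime-known condition (hypothetical names; since only the taken branch is analyzed, the two candidates are free to have different return types):

  fn readOneByte(w: u64) u8 {
      return @truncate(w);
  }

  fn readTwoBytes(w: u64) u16 {
      return @truncate(w);
  }

  const bytes_per_level = 2;
  // The condition is comptime-known, so the untaken branch is never
  // analyzed and no nonsense instantiation is created.
  const readBucket = if (bytes_per_level == 2) readTwoBytes else readOneByte;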


This is a different issue. You could have replaced the ternary with if constexpr in a lambda, but it's very verbose.

I've seen some people propose that the ternary should potentially work like if constexpr, if the condition is a constant expression. Honestly, IMO the ternary operator is already cursed, and it should not be overloaded with more responsibility.

I think this should work without nonsense return types:

  static constexpr auto readBucket = []{
    if constexpr (BYTES_PER_LEVEL == 2) {
      return readTwoBytes<idx, udigit>;
    } else {
      return readOneByte<idx, udigit>;
    }
  }();


This is my biggest complaint against Zig's comptime.

I think it's possible to do comptime better, by putting comptime into separate files that can then "export" generated code.


This is exactly how Rust's proc macros work.


This is what I suspected.


Why should the compiler compile code that is not called/exported? That's a waste of processing power and therefore a waste of time.

At best it should give you a warning that the function is not used, that's it


Because the parser still has to skip it, especially if it's in a file with code that is used. The parser needs to know the beginning and the end to do so. This takes at least rudimentary parsing.

Oh, and the compiler may not know it's unused until it processes all files if it's a public function.


I found it surprising. However, it is really useful when using comptime and doing those really awkward cross-platform things (like delving into the innards of a stat_t): your FreeBSD stuff with its different member names will be completely ignored on Linux.
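
Something along these lines (a sketch; the field names and types differ per OS and per Zig version, so treat them as assumptions):

  const builtin = @import("builtin");

  fn mtimeSec(st: anytype) i64 {
      // The branch for the other OS is never analyzed, so its
      // "wrong" field name is simply ignored on this target.
      return if (builtin.os.tag == .linux)
          st.mtim.tv_sec
      else
          st.mtimespec.tv_sec;
  }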


The Virgil compiler will typecheck all code given to it on the command-line and then only compile what is reachable from main() after running initialization in the internal interpreter, both code and data. Unused functions and fields and classes will simply never be visited for codegen.


Reminds me of the Eclipse days, when they explained that their Java compiler was more sophisticated than javac because it had to incrementally compile broken code, since that is the main state of code during development.

Maybe we need to do the same here


What happened to Eclipse? Is it still big for Java developers?


I don't think so (IntelliJ is way more popular).

I think Eclipse is mostly popular these days as a platform for companies offering custom tools. For example Team Center (which is awful, btw). Honestly, Eclipse was always pretty awful and I have yet to use any Eclipse-based software upon which that awfulness didn't at least leave a mild stench.

Check out this list: https://en.wikipedia.org/wiki/List_of_Eclipse-based_software


Eclipse the IDE is mostly dead, but its Java analysis was moved into a language server and is the go-to option for people in Vim, Emacs, and presumably VS Code, though the last option isn't as open as the first two in what the extension actually uses under the hood.


For the most part, Eclipse has fallen into obscurity. But I doubt that its competitors work any differently in that regard.


That's not quite fair. I don't have any numbers, but I suspect a few developers are using VS Code, which uses the jdt.ls language server. JDT is basically just a wrapper around the non-UI parts of Eclipse. Eclipse lives on, although the IDE itself might not see much use anymore.


While everyone I know moved away from Eclipse years ago, if you look at the stats it is still a very widely used IDE.

Maybe universities still get students to use it?


I hope not, that would be malpractice

However, I wanted to make the distinction that while @systems asked about Eclipse as a Java IDE, which it is objectively terrible at, Eclipse is like NetBeans in that it is multi-language[1] and is likely more accessible as an open-source C++ IDE for university use than trying to get CLion licenses (and, ahem, then explain CMake to students :-/ )

1: yes, I'm acutely aware IDEA is also multi-language but the open source distribution only covers Java and Python, unlike Eclipse and NetBeans


In my experience, university CS courses are always pushing woefully outdated tooling and avoiding any remotely modern standard practices like the plague.

I was lucky to have one course in 2018 in which the professor actually included the most bare bones `git` usage.

I actually never had a class in which an IDE was recommended. In 2008 I did have a class on C (first half C, second half Java) in which we were restricted to using C89 because that was most widely adopted in industry. I really missed block-scoped variables in for loops (`for (int i = 0...)`).


Students should be learning how to move industries forward, not backward.


CLion licenses are free for people with university email addresses, and perhaps you can figure out some basic subset of CMake to just compile a few files.


I guess we are obscure developers, with our Visual Studios and Eclipses.

Eclipse headless is also what powers VS Code Java support, with Red Hat and Microsoft as main developers.

You know, the reason why JetBrains had to rush out Fleet.


I've seen many people begin projects with VS Code and end them with IntelliJ or another JetBrains product.

To be fair, I use VS Code as well. It works well as a text editor.


You actually could do a lot of stuff in VS Code. See rust-analyzer or our extension: https://marketplace.visualstudio.com/items?itemName=tooltitu...


Yeah, pretty sure IntelliJ does the same thing. A few bad lines rarely affect syntax highlighting/suggestions further down in the same file.


Fun fact: the author of this post used to work at JetBrains on IDEs.


Eclipse may not be as popular as it used to be, but saying it has "fallen into obscurity" is wild hyperbole. It's still very popular, albeit maybe not so much with the SV startup crowd which is so prevalent on HN.


> For the most part, Eclipse has fallen into obscurity.

I guess it depends on the locale/company/environment?

In most conferences, online videos, as well as among the people I know personally, JetBrains IDEs (IntelliJ IDEA for Java) seem to reign supreme: https://www.jetbrains.com/idea/ They have a community version; personally I pay for the Ultimate package of all the tools.

They're slightly sluggish and want a lot of RAM, but the actual development experience and features make up for that. Hands down, the best set of IDEs that I've used from a pure development perspective: refactoring enterprise codebases mostly becomes something you can actually do with confidence. Running tests is easy. Integrating with app servers, containers, package managers or container runtimes is easy. Even remote debugging is easy to set up, as is doing debugging, or even testing web APIs. I'd say that all of the features that should exist do exist, which is more than I can say about many other IDEs.

I know that Eclipse is sometimes used more in an educational setting; however, there are also some specialized tools, as well as customized versions for something like working with Spring in the industry: https://spring.io/tools In my experience, the idea behind the IDE is nice (a platform that you can install whatever you want on, entire language support packages, or specialized tool packages), but the execution falls short - sometimes it's unstable, other times it's slow, and so on. That said, it's passable.

I would say that personally I'd almost prefer NetBeans to Eclipse, even after it was given over to the Apache Foundation, which has released a few versions since: https://netbeans.apache.org/ It seems to do less than either Eclipse or IntelliJ IDEA does, but for general-purpose Java editing and limited work with other stacks (PHP, webdev stuff, some C/C++) it is good and pleasant to use. However, if you have projects that get close to half a million lines of code, it does just kind of break and gets way slower than the alternatives. It still somehow feels more coherent than Eclipse to me; I'd pick it if IntelliJ IDEA didn't exist.

Some also try doing something like using Visual Studio Code with a Java plugin: https://code.visualstudio.com/docs/languages/java That said, I only used that briefly when I needed something lightweight for a netbook of mine; the experience was somewhat underwhelming. The autocomplete and refactoring weren't as good as IntelliJ IDEA's and just felt a little bit tacked on. Then again, that was a while ago, and I don't doubt that progress is being made.


I've been using Eclipse for Java recently and it's better than I remember it being. One thing I particularly like is that it has a view that's like the Smalltalk browsers in older Smalltalk environments.


That class browser has been part of Eclipse since the start or very near it (I first used it circa 2003).


I remember it being in VisualAge for Java before that IDE became Eclipse.


I still use it.


I loved Eclipse back in its day. It worked great for me until everybody started using Maven.

Eclipse tried to use the same approach for Maven and wrote their own parser/processor to interpret the Maven build file. This failed miserably.


> works with incomplete and incorrect code

I know that this is a standard assumption, but I personally believe that it is a mistake to support this use case. It is a holdover from the batch processing days when getting feedback on your program could take minutes or hours. In those conditions, it was absolutely essential to try and proceed with a partial compilation even in the presence of errors or else iteration would have almost been impossible.

Now we live in an era of fast personal computers. With a language that is designed from the ground up with IDE support/rapid iteration in mind, you can get feedback with every keystroke. Everything is easier to design if you abort immediately on the first error, whether it is syntactic or semantic. Designing a parser with error recovery from syntax errors is a particularly dark art. On some level, you have to resort to guessing. It may work _most_ of the time, but to me there is nothing worse than a tool that is maybe correct.

When you advance past an error, all of the code below the error is suspect. To give a trivial example, let's say you rename a function with 100 call sites without some kind of refactoring (automatic or manual). What benefit is there in showing the 100 new failures when the root cause is exactly where your cursor already is? There are even subtler cases like this, where you are bombarded with downstream error messages that are confusing and take you away from the actual root of the problem (C++ template errors come to mind). You may as well just gray out everything below the first error and reprocess it only when the error is fixed.


I've had this conversation a few times over beers: I think we are collectively Doing It Wrong by not considering code edits as deltas and checkpoints.

Diffs often seem to fail to represent the actual change because they consider the delta from the last commit, which isn't the way we write code most of the time. If I go and insert a bunch of closing braces into the middle of a function, it's almost always because I'm dividing the function into two functions, or adding some missing error handling, or a corner case. So from the standpoint of a DAG representing the parse of the file, most of the time I expect the functions above and below to still be available in autocomplete even if I haven't balanced the tokens representing code blocks yet.

If a file has a bunch of functions and I break one, I expect the IDE to consider the function broken, not the file.


I think you have misunderstood. He's talking about IDE support. IDEs absolutely must be able to understand incomplete or incorrect code otherwise code completion will be completely useless. You literally only need code completion when your code is incomplete. The clue's in the name.


I am focusing on the incorrect part. Let's say that you are adding some source code in the middle (|> denotes the cursor location):

  ...
  def add(x: Int, y: Int) = x + y
  ...
  let x = ad|>
  ...

I agree that where the |> is, the IDE should be able to autocomplete `add` for you. What I am saying is that it shouldn't process anything below the let statement, because the code is broken there. Many IDEs and compilers will continue processing the file after the first error so that they can report all of the findings. That is what I am suggesting they not do.


If you stop processing at that point, you can't autocomplete any functions that are defined further down in the file.


Exactly. That is a feature.


Maybe for you, but I'd consider that a pretty annoying bug. I like to structure my files/classes/modules in decreasing order of abstraction: put the public API and high-level algorithms at the top, and then dive into the minutiae of how the various parts are implemented as you scroll down. That also means I almost exclusively call functions that are defined further down in the file.


> I like to structure my files/classes/modules in decreasing order of abstraction: put the public API and high-level algorithms at the top

That makes sense. What I am describing doesn't preclude this organizational structure. It depends on the language design. You could either support C-style forward declarations that you put at the top of the file, which would be available for completion even if the implementation is below the first error in the file, or the IDE could provide folding of the implementation so that you can scan the API without drilling down into the details.

> That also means I almost exclusively call functions that are defined further down in the file.

Again, to clarify: are you calling things before they are _declared_ or _defined_?


Most languages don't make a distinction between declaration and definition, and many don't even care where something is defined/declared at all. C/C++ are really the exception nowadays, and for good reason: having to keep the definition and declaration in sync is annoying and unnecessary, even though I sometimes miss the easy overview of an API header files give.


OK. So I'm guessing you're using a Java-like language with classes, and that the upper classes call into the lower classes? I understand the appeal of that approach. The tradeoff is that you have to make multiple passes over the file to compile/do IDE analysis. If the language is designed as I've been advocating, one can essentially write a one-pass compiler. This is simpler to implement and will generally be faster. The big tradeoff is that it imposes a more rigid organizational structure.

As a compiler author (I have written a self-hosting compiler for an unpublished new language), I dramatically prefer implementation simplicity to organizational flexibility. I respect your preference but believe that ultimately the more free-form and flexible a language, the more complex and slow the tooling, both of which lead to more bugs in the final artifact. But I certainly can't prove my point.


A significant portion of C(++)'s cruft comes from its catering to one-pass compilers. I will grant you that it simplifies the job of the IDE, but it comes with so many other costs.

The obvious cost is that it requires forward declaration. However, depending on how back-and-forth the interdependence is, it requires multiple "tiers" of forward declaration to effectively iteratively redeclare (or augment? Now that's some added complexity for all parties involved…) incomplete types until you can complete them. It's one thing to have a nice list of "here's what exists", but it's another to manually detangle dependency graphs. It's bad enough in C++, which already doesn't allow very much interdependence, but it'd be completely infeasible for any language more complex than glorified assembly.

Next, how would compile-time execution work? It's one thing to forward declare the existence of something, but how do you execute it without knowing its definition? You literally have to add another pass. Similarly, how do you make an inline function call?

One-pass compilation also relies heavily on searching and referring back to data from earlier in the pass, making it rather cache-unfriendly unless you build data structures as you compile, but then you're using massive amounts of memory since you have to build these structures for everything in the entire input. This scales incredibly poorly. If you use one thing from some imported header, you now need to add that header and its entire dependency tree into your one pass. This isn't the 70's anymore; more passes ≠ more slower.

It's actually the compiler and IDE people pushing against this one-pass mindset, because it's "simpler" but just worse for everyone… except maybe the developer reading a header file instead of documentation. And do note that complexity comes in forms other than fancy data structures & algorithms in compilers/tooling. I'd argue that a manual flattening of a real world dependency graph is much more complex and harder to grok & maintain. Regardless, it's the compiler/tool developer's job to take the burden of complexity to better serve their users.


How is that a feature? If someone gave you an IDE that could autocomplete in the face of minor errors earlier in the file (which they often can), would you say "sorry, I don't want that feature, please disable it"?

Why?


Because I think that everything should be declared and/or defined before it is used. I don't want the IDE to autofill something that is declared below the cursor because I wouldn't be able to use it at that location anyway.


In some languages this is literally not possible...


When I code in C, prototypes are the last thing I write. Until that last moment, most functions are defined but not declared. Your "feature" would be terrible for me!


I recently watched an excellent talk about IDE integration with Rust. Maybe worth checking out: https://youtu.be/N6b44kMS6OM



