Shame that you specify pixels in the function calls themselves. If it were really abstracted, it would have an arbitrary 'size unit' which the glue code could translate on the backend.
In a world where one user has a 1366x768 screen and her neighbor has a 4k display, we need to move past interface code specifying pixel sizes up front, both on the web and off.
There are two different versions. The first one uses pixel measurements, like you said; the other uses fractions of the window.
If the feature is requested, it can be changed quite easily, either by me or by the user, since it only requires modifying one function. Furthermore, for input you only provide your input data (the library does not control it), and the output can be scaled if required.
I find that pixels are appropriate at both the "small code" end and the "perfect display" end of user-interface rendering. In between, the "I have lots of CPU but can't afford the mental time to get this perfect" case is served well by abstract units.
The problem that hits abstract units is that for features which are small, you never really know how big they will be. It's easy to end up with elements which should look similar but some are three pixels wide and some are four, or the spacing is a little ragged from the rounding. Some kits avoid this by just antialiasing everything. Then you try to draw a detailed graph or a musical score and some of your horizontal lines are sharp and crisp because they hit solidly inside a pixel, and some are a grey smear because they hit a crack.
This is probably a large reason why many graphs and charts you see on the web look like they were drawn by a child with a felt tip marker. A 3 pixel wide line is pretty forgiving about where it falls.
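For what it's worth, when you do control the rasterizer, the usual workaround is to snap thin strokes to the pixel grid yourself. A minimal sketch of the generic trick (not anything from this library):

    /* generic pixel-snapping trick, not nuklear API: center a 1px-wide
       stroke on a pixel so it rasterizes crisp instead of as a grey smear */
    static float snap_to_pixel_center(float v)
    {
        return (float)((int)v) + 0.5f;
    }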
Do you want Qt? Because this is how you end up with Qt.
(I'm mostly kidding but in my experience, the sorts of apps that you build with this sort of library invariably end up dealing with pixels anyway. There's no religious reason why resolution-independent scaling code has to live in the UI primitive library, especially since it probably can't be confined there.)
I always have this idea in my mind that we should scale and render the interface based on font size, adjusting borders and buttons accordingly.
P.S.: obviously I have NO idea how easy or complex this would be; it's just an idea, given that text contained in an element sometimes bleeds, overflows, or goes missing outright.
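Something vaguely like this, I imagine (a hypothetical sketch; the names and ratios are made up, not from any library):

    /* hypothetical: size widgets relative to the font so text never overflows */
    float font_h     = measure_font_height(font);  /* placeholder function */
    float row_height = font_h * 1.6f;              /* room for glyphs + padding */
    float border     = font_h * 0.1f;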
If all targets have a relatively high DPI, does that still hold? Wouldn't it be more correct to say "it isn't trivial to rasterize vector images for low-DPI displays?"
That takes you towards HTML- or XAML-style UI definitions. I'm doing something like this in my (experimental) layout lib for embedded devices. I've written a bit about it: https://www.eiman.tv/blog/posts/liblayout/index.html
Graphics on a 1366x768 display end up looking like mush if they're not aligned to pixels. Once everything's higher-res this will be less of an issue, though.
It looks cool, though the claim about having no platform dependent code must be false... either they have it or they depend on someone who does. Since it's all in one file and I'm on an iPad right now, I didn't try to discern which case it is.
What is the appeal of the implementation hiding in the header file? Is it a 'thing' nowadays? It makes every file that includes it read all 15k lines, and makes you put a special #define in one of your source files, just to avoid having a C file. That doesn't feel like a good trade-off to me. If I used this code in a project, I'd build a static library out of it first thing, and go about my business.
If compile time is a concern (it isn't in my experience, at least if the rest of the project is C++ code), you can still wrap the library header in your own header and put include guards around it :)
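I.e. something like this (a hypothetical wrapper; the guard name is arbitrary), so other files include your thin header instead of the 15k-line file directly:

    /* my_nuklear.h -- wrap the single-header lib behind your own guard */
    #ifndef MY_NUKLEAR_H
    #define MY_NUKLEAR_H
    #include "nuklear.h"
    #endif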
Did you ever try to link with externally compiled libraries on Windows? :)
It's an absolute mess: the .lib files are incompatible between Visual Studio versions, depending on whether the threaded/non-threaded/DLL/static MSVC runtime is used, whether link-time code generation is used, whether there's C++ code in it; a release lib cannot be linked against a debug executable, etc. etc. etc...
Only sane solution is to drop the sources directly into your project. Whether it's just a header or a header/source pair is just a small detail, admittedly.
Yeah but "externally compiled libraries are a pain so I'm going to put the entire source in a single header file because tarballs man, how do they work" is a hell of a non sequitur.
It needs to abuse the preprocessor to avoid having multiple conflicting copies of the functions.
The implementation mode requires defining the preprocessor macro NK_IMPLEMENTATION in *one* .c/.cpp file before #including this file, e.g.:

    #define NK_IMPLEMENTATION
    #include "nuklear.h"
That's literally what the preprocessor is for. The C++ inline keyword basically has the same effect, and probably half (or more) of your headers "abuse" it for exactly this purpose.
Putting external library source files directly into my project does not sound sane at all, and it is definitely not the only option for static libs on Windows. You can also make the library project a component of your solution, and add a reference to it in dependent project(s). That way, they have proper separation but are all built from source.
Having now read the rationale for single header libraries, I am completely unconvinced. These people either don't know how their tools work, or are making bad choices to coddle people who don't. Even forgetting the wasted time parsing the #ifdef'ed out implementation section over and over, it screws up separate compilation. Every time you tweak the implementation section, all the files that #include it will recompile for no good reason.
I was convinced of the header-only approach when I replaced DevIL, libpng and libjpeg (all of them made of a myriad of source files and build options, not to mention using non-portable build systems that don't really work out of the box on Windows) with 3 stb headers: stb_image.h, stb_image_resize.h and stb_image_write.h
Even if the compile time is affected (which I never noticed) this is only a minor inconvenience compared to the wins those 3 header files provide.
It's not a choice between "build-system-dependent mess" and "everything in the header." The alternative to "everything in the header" is "header file has the interface, .c file has the implementation." Then you add the .c file to your project, include the .h file, and it works better than "everything in .h", since instead of 20,000 lines, all the other .c files in user projects only have to see a few hundred lines, which compiles faster. And that's how C projects were supposed to be organized from the beginning. You don't have to have dynamically bound files if that isn't needed. Some licenses insist on dynamic linking, but when you make your own library, the decision about how it's linked is independent of the decision to stuff the implementation into the .h.
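For comparison, the classic split looks like this (toy names, obviously):

    /* mylib.h -- interface only; this is all other .c files ever parse */
    int mylib_do_thing(int x);

    /* mylib.c -- implementation; compiled exactly once */
    #include "mylib.h"
    int mylib_do_thing(int x) { return x * 2; }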
I agree that libpng and friends are an abomination, but a reaction all the way back to a single header file is taking it too far. There is a healthier middle ground that doesn't go against the grain of decades of tooling. Wouldn't a streamlined interface and fewer build options be sufficient? It's not like compiling multiple source files is hard.
You're probably not going to be tweaking the implementation section.
I prefer the 2-file approach myself, but the basic idea of distributing source files that the user adds into their own project is a good one. I much prefer it to having a solutionful of random projects, each of which inevitably needs its settings carefully tweaking to make sure everything works properly. Though I do admit that is not a very high bar.
831 commits in this repository tells me it is tweaked regularly. I guess the next response is: you probably aren't going to be updating your copy. That, and the idea that a project full of random source files is better than a structured solution of component projects, tells me where people's heads are at. We're not going to agree on this!
> does not have any dependencies, a default renderbackend or OS window and input handling but instead provides a very modular library approach by using simple input state for input and draw commands describing primitive shapes as output.
It seems that you're expected to provide the layer between the OS and nuklear. They probably have some default implementations for demos and stuff.
Ah, I read that three times on the site but it didn't sink in until I saw your comment. I guess with a title like "GUI toolkit" my expectations were set incorrectly. Thanks for that!
So, you glue it to the OS yourself. That's... well actually I guess if someone marries it to SDL it could be very useful.
It's nice because your backend doesn't matter. If I have a custom rendering platform, I can probably spit out an image. And if I don't want to write a GUI library or hook up some BS like Qt, this is a super simple solution. I have three projects for three different clients that could use this, at least for internal debugging.
Interesting. So to answer the other poster who complained about the use of pixels rather than "display units", your lower 'glue layer' could accept dp in the input and scale the output to provide density-independent behaviour?
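Something like this at the glue layer, I imagine (a hypothetical sketch; the names and scale factor are made up, not part of the library):

    /* hypothetical glue: UI code speaks density-independent units (dp),
       the backend converts to device pixels before issuing draw calls */
    static float g_dpi_scale = 2.0f;   /* e.g. 2.0 on a 4k panel, 1.0 on 1366x768 */

    static int dp_to_px(float dp)
    {
        return (int)(dp * g_dpi_scale + 0.5f);  /* round to the nearest pixel */
    }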
The difference being that, when I saw nuklear's README, I just wanted to learn enough C to be able to use it...
With imgui, my first reaction (the first time I saw it) was that I wouldn't be able to use it for software targeting non-geeks. (And I really would like to have a good lightweight UI framework for C++!)
It might be just the screenshots, and I've not tried either of them, but comparing the two README files, I cannot say that they are real alternatives to each other.
Even if, very likely, both are very good toolkits!
I do have font-size issues with nuklear and imgui. The font size used in the nuklear extended example makes it look fuzzy, especially if I change the scale.
I tested imgui, and changing the code allowed me to select a bigger font. I wish it were possible to change the widgets' look and feel, because it looks minimalist.
Looks like a great tool! Very often you want to throw something together, and development grinds to a complete halt because you have to fiddle with UI integration. Problem solved!
The X11 example doesn't seem to run here, on my Debian sid + NVIDIA blob drivers. The GL* ones do work, though, albeit with an unreadable font (I'm on a 32" 3180x2160 screen):
    X Error of failed request:  BadMatch (invalid parameter attributes)
      Major opcode of failed request:  70 (X_PolyFillRectangle)
      Serial number of failed request:  29
      Current serial number in output stream:  102
Really pretty. Given how small it is, I wonder how difficult porting it to other languages would be; it might make a decent little reference spec for lightweight GUIs.
Would be awesome to combine it with something like https://bitbucket.org/rmitton/tigr which is a small and tidy window creation/input processing lib, because I find glfw and SDL too bulky for casual C/C++ programming.
Nothing really depends on any of the provided demos. As long as you are able to provide an OS window, input, and a way to draw either basic shapes or vertices, you are good to go.
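The input half looks roughly like this each frame (a sketch based on the README; the `mouse_*` variables stand in for whatever your window system gives you):

    /* mirror your window system's events into nuklear's input state */
    nk_input_begin(&ctx);
    nk_input_motion(&ctx, mouse_x, mouse_y);                       /* cursor position */
    nk_input_button(&ctx, NK_BUTTON_LEFT, mouse_x, mouse_y, mouse_down);
    nk_input_end(&ctx);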
Rust has GTK and QtQuick bindings, and there is also https://github.com/pistondevelopers/conrod which also uses immediate mode. Nuklear looks more capable than conrod right now, though, so Rust bindings could still be good to have.
In case anyone wants to see a C++ framework that evolved over time and then was inextricably tangled up in the one application that really used it, check out OpenOffice/LibreOffice's Visual Component Library (VCL).
When I look at GUI APIs, the first question I ask is, "How easy is it to get things where I want them on the screen/page?" Because that is the fundamental problem for any layout engine.
That's an easy question to ask if you know the screen aspect ratio and resolution. What if you don't know either, or both? The question suddenly shifts to "Where do I want things on screens?", and suddenly it's not about the library anymore.
This looks quite nice. But they did fib a bit about no dependencies. It is apparently built on top of GLFW3 which is a nice, small C library that gives you a native window and input handling.
Nope, it isn't. GLFW3 is just an example use of the library.
There is not a single instance of the string "glf" in the actual library. The library claims to be agnostic about the low-level layer you provide, so you can use whatever you choose as the render target and whatever you choose as the input source. It's the library user's task to provide those two and wire them up to the GUI.
Cross-platform is a key feature. That's why GLFW is so nice. Unless you want to write the grungy windowing bits for all your platforms yourself, I don't see any problem with using GLFW. It's a solid library.
GLFW is great, but it's limited to GL or Vulkan as 3D backend. There's a context-less mode now (created for Vulkan I think), but I haven't seen GLFW used together with D3D or Metal yet. It's fairly easy to rip the relevant window- and input-handling code out though.
A couple of tweaks needed to get the example in the README to compile: wrong number of arguments in call to nk_button_text(), and need to take the address of ctx in the call to nk_end().
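For anyone hitting the same thing, the calls that compiled for me look roughly like this (a sketch; the exact signatures may differ between nuklear versions):

    if (nk_button_text(&ctx, "button", 6)) {  /* text length passed explicitly */
        /* button was pressed this frame */
    }
    nk_end(&ctx);  /* takes a pointer, so pass the address of a stack context */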
Yes, you'd have to implement the drawing and input handling code but this seems like it would be a good fit. You need to implement a fair number of drawing primitives, but certainly less work than a whole GUI.
I haven't tried this one yet, but I've tried another immediate-mode UI library before, and it seemed to busy-loop a single CPU core. Is that normal for IM GUIs?
They typically draw/process every frame. So if you're doing a game or real-time tool, it will be part of your main loop. I wouldn't call it a "busy" loop, since usually you'd sleep or yield when done processing the frame and wait for the next frame.
There's no reason you couldn't take the same approach but only process/redraw in response to an event (e.g. on a mouse move or click). But since immediate mode GUIs tend to be used in contexts where you're already drawing every frame (games and interactive tools), people tend to use them this way as it fits the best.
The main defining characteristic of an immediate-mode GUI is that it tries not to store state for each widget. Rather than having a large "object graph" (retained mode), you take a more functional approach and draw what you need every frame. There are far fewer bugs and much less complexity from trying to keep your Model and View in sync, but some things that need extra state (e.g. tree-view expanded state) get more complicated.
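To make the contrast concrete, per-frame code in an immediate-mode library looks roughly like this (based on the nuklear README; note there are no widget objects or callbacks to keep in sync):

    /* runs once per frame: the button "exists" only while this code executes */
    if (nk_begin(&ctx, "Demo", nk_rect(50, 50, 220, 220),
                 NK_WINDOW_BORDER|NK_WINDOW_TITLE)) {
        nk_layout_row_static(&ctx, 30, 80, 1);   /* one 80px-wide, 30px-high cell */
        if (nk_button_label(&ctx, "button")) {
            /* handle the click inline; no stored widget state */
        }
    }
    nk_end(&ctx);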
As far as I know, glfw, sdl and allegro work on OS X as well. But if that's not the case, I still have demo code for Mac, iOS and tvOS. The problem is that I don't have any of these platforms, and the demos are from an old version. So if somebody is willing to take over the porting, I can release them again.
Static linking is "easy" on any particular platform for a particular target. But build systems and IDE and all that junk make things more complicated. You might want to build the library with the same flags as your main app, for example. You might have several target architectures you need to build and link against (i386, x64, amd) x (debug, release, QA). Etc, etc.. It also makes it easier for the compiler to inline things.
Header only libs just make all these problems go away, and (IMO) there's not really any downside.
Well, the downside is that it's literally in one header file. Besides being gigantic, it's extremely annoying to search and parse by hand, which is an issue when that file is also serving as the documentation. Especially in this case, it would be much nicer to have a "nuklear" folder with a bunch of separate (appropriately named) headers. The single `nuklear.h` header could just `#include` them as necessary.
I respectfully disagree. If they were separated into thoughtful modules, it would be much easier to parse by simply looking at the file and folder names and going from there. This will be especially true when this expands past its current state and it becomes impossible (or more impossible) to see what uses what.
IMO, 20,000 lines is already way too much, but when it reaches 100,000, even just editing it will be a chore, let alone using it. It will simply be unmaintainable and unusable. I say this having worked on code bases which have reached that point and attempted to deal with the issues it creates. There's a reason every decently sized code base out there uses multiple files and a hierarchy. It's a natural way to create modular code, and modular code is simply better code.
It depends on the editor, I think. It's criminal that so many editors still can't do what BRIEF did 20+ years ago, and give you a quick and easy way to surf through all C/C++ functions in the current translation unit. (e.g., http://i.imgur.com/dmlCCpH.png ).
That's why I still use a buggy BRIEF clone instead of a modern IDE. If I were forced to use someone else's idea of a code browser that was architected around files rather than subroutines, I'd probably be more inclined to agree with you.
The idea that the most appropriate way to organize source code happens to correspond to something that a (likely-long-dead) OS implementer chose to call a "file" is a notion that doesn't get challenged often enough.
Perhaps if you are the author of the 20,000 line header file, but from my experience, this is certainly not the case when developing software in a team. Having all the code in one file is significantly more difficult to understand for a new team member since it becomes more cumbersome trying to understand the dependencies between different modules.
On the same note, proper modularization of source files also encourages better discipline for dependencies among modules, such as discouraging circular dependencies.
The SDL demo has lots of dependencies, obviously, so it's really not something self-contained. I'm always excited by any neat GUI for C, but this one seems like just a layer on top of existing GUI libraries. What's special about it? Please shed some light.
If it's really lightweight and self-contained, it will be very useful for some embedded systems with a screen.
I only have experience with dear imgui so far (mentioned in the readme), but nuklear should be very similar in its requirements. You basically just need a GL context (or another 3D API's equivalent) to write a wrapper for; SDL is sort of overkill for this.
In my case I integrated imgui with my "weekend engine" Oryol, which has a very slim wrapper around 3D APIs, and I have this running in WebGL (via emscripten) and GLES2 on Raspberry Pi 2 (should also work on the original Pi). The resulting executables are small enough for most 32-bit embedded platforms (a couple hundred KBytes), and have a much shorter list of system dependencies. On Raspberry Pi it doesn't even need a window system, it runs straight from the Linux terminal and uses the Pi's EGL wrapper.
SDL is not a GUI library: it doesn't have any buttons, sliders or text controls. That's what this library provides.
To get something on screen, it provides a number of backends, including SDL. However, there is also one that talks to X11 directly, providing the minimum dependencies needed on Linux (as far as I'm aware).
Basically the library generates a list of drawing commands. It is up to the underlying backend (SDL, GLFW, X11, your own embedded one,....) to draw these on-screen.
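The backend's side of that contract is a loop like this (adapted from the README; `your_draw_rect` stands in for whatever your renderer provides):

    /* after building the UI for this frame, replay nuklear's command list */
    const struct nk_command *cmd = 0;
    nk_foreach(cmd, &ctx) {
        switch (cmd->type) {
        case NK_COMMAND_RECT_FILLED: {
            const struct nk_command_rect_filled *r =
                (const struct nk_command_rect_filled *)cmd;
            your_draw_rect(r->x, r->y, r->w, r->h, r->color);
        } break;
        /* ... handle the other command types ... */
        default: break;
        }
    }
    nk_clear(&ctx);  /* reset the command buffer for the next frame */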
SDL on Linux does a lot of dlopen detection at launch because it is so hard to know what kind of Linux environment you are running in at compile time. For example, you may have a X11 server, or you may be running completely on the command line without any window server. (It's really cool being able to develop for Raspberry Pi Raspbian via SDL without X11 since RAM is so limited.)
So that list is not SDL's contribution; that demo was just linking to a lot of things, most of which look unnecessary (and sloppy, though this also makes a case for why people want header-only libraries: many people don't want to think about this part). For example, why is a GUI demo linking to FLAC and all the Ogg Vorbis libraries? SDL doesn't use or depend on these either.
The author of Nuklear mentions it in the readme, but in case people don't spot it, there's another cool project in this family, imgui: https://github.com/ocornut/imgui
It would be nice if a GUI toolkit just targeted ncurses and you could further skin it to run in the browser, GTK, or KDE separately, a bit like React Native.