Oryol: A small, portable and extensible C++ 3D coding framework (github.com/floooh)
83 points by severine on Jan 2, 2017 | 27 comments



Orthodox style argument aside, this project uses fips[0], a build system written by the same author. I've tried it myself, and it's an absolute joy to use. A few simple commands will get you up and running on any supported platform, and it does an excellent job of telling you what went wrong if there's a failure. It also has a system for compiling the same shaders to multiple platform targets.

Oryol's demos[1] are impressive as well. Especially the ones that showcase emulation and various integrations of immediate-mode GUI frameworks.

The entire project is underrated in my opinion. With WebAssembly slowly coming alive, there's a distinct lack of small, multi-platform engines designed with the web in mind. I only wish Oryol's ecosystem were larger; it could perhaps be a realistic alternative to Three.js, if it isn't already.

Keep up the good work floeofwoe. :)

[0] https://github.com/floooh/fips

[1] https://floooh.github.io/oryol/


flohofwoe*, sorry.

Curse the two-hour edit window.


I'd love to know what people who work full-time with C++ think of the "Orthodox C++" concept (link in the article) that this project adheres to.


Oryol author here. A lot of the motivation for using such a 'simple C++ style' comes from my own frustrating experience of integrating big C++ libraries (mainly game development middleware) into complex million-line C++ game code bases; I didn't want to have that frustrating fight in my spare-time projects.

The more language features and dependencies those middleware libraries use, the harder they are to integrate and combine with each other. Some may require compiler options that contradict each other, or a very recent compiler version; some do an absurd number of dynamic memory allocations (which is easy when using the STL - ahem, stdlib - carelessly); some may depend on exceptions, or on Boost libraries, etc...
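
To make the allocation point concrete, here's a minimal sketch of the kind of careless-but-innocent-looking stdlib use I mean (the function names are made up):

    #include <string>
    #include <vector>

    // Innocent-looking, but every call copies and heap-allocates the string:
    int CountCommas(std::string s) {
        int n = 0;
        for (char c : s) { if (c == ',') ++n; }
        return n;
    }

    // Same logic without forcing an allocation on the caller:
    int CountCommasFast(const char* s) {
        int n = 0;
        for (; *s; ++s) { if (*s == ',') ++n; }
        return n;
    }

    // Another classic: push_back without reserve() re-allocates repeatedly;
    // a single reserve() up front does one allocation.
    std::vector<int> MakeSquares(int count) {
        std::vector<int> v;
        v.reserve(count);
        for (int i = 0; i < count; ++i) { v.push_back(i * i); }
        return v;
    }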

It was always the pure C projects, or the 'simple C++ style' projects that were easiest to integrate, while projects which used a boost-style 'over-engineering' approach caused a lot of little problems during integration and also afterwards (they are usually much harder to debug, take much longer to compile, and so on...).

The only sensible thing for C++ libraries that aim to be easy to integrate is to avoid many of the typical 'modern and old C++-isms' outlined in the Orthodox C++ document (but nothing in there is particularly new or special or revolutionary, it's a collection of pragmatic advice learned the hard way, and should be fairly familiar in embedded and game development).

The other big problem (besides integration) is that some C++ features (like iostreams or RTTI) increase code bloat and basically violate the "don't pay for what you don't use" principle, or dramatically decrease performance on some platforms (mainly using C++ exceptions on asm.js). I wrote a blog post a while ago that goes more into the details; the TL;DR is basically that a 'simple C++ style' may help to reduce code bloat and increase performance: http://floooh.github.io/2016/08/27/asmjs-diet.html
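
As a tiny illustration of the general idea (my sketch, not code from the post): formatting with snprintf into a stack buffer instead of using iostreams keeps the stream and locale machinery out of the binary:

    #include <stdio.h>

    // Plain C-runtime formatting; doesn't pull in iostream/locale code,
    // which is one of the bloat sources measured in the post above.
    void LogVec3(float x, float y, float z) {
        char buf[64];
        snprintf(buf, sizeof(buf), "vec3(%.3f, %.3f, %.3f)\n", x, y, z);
        fputs(buf, stdout);
    }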


It's a reaction to the Boost crowd, who took C++ into template la-la land. For realistic programming:

* Avoid multiple inheritance. It seldom helps and there are lots of confusing cases. Putting one object inside another is fine. If your inheritance tree is more than two or three deep, you're probably doing it wrong.

* RTTI is better than unchecked downcasting.

* STL collections are fine and should be used in preference to built-in arrays.

* Avoid Boost. If anything goes wrong, you'll spend days to weeks figuring it out.

* Exceptions are not that bad. Go and Rust now have de facto exceptions, with all the complicated unwinding machinery, but they're called "panic" and "recover". Just make sure that anything you pass in an exception is a self-contained copyable object with no pointers to anything, because that's where ownership and allocation trouble appear.

* Don't do anything in a destructor that can fail or block. Like closing files opened for writing. Destructors are for releasing memory. Raising an exception in a destructor is not a good thing.

* Don't overload operators for anything other than math. It just confuses people.

* Streams are more cool than useful. But mostly harmless.

* unique_ptr and move semantics have promise, but it may be too soon for those features. If you have complex ownership issues, consider Rust, which has compile-time checking for that stuff.
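
For the last point, a minimal sketch of what unique_ptr buys you (Mesh and Consume are made-up names):

    #include <memory>
    #include <utility>

    struct Mesh { /* ... */ };

    // Ownership transfer is visible in the signature: the callee keeps the Mesh.
    void Consume(std::unique_ptr<Mesh> m) { /* Mesh is freed when m dies */ }

    void Example() {
        auto mesh = std::make_unique<Mesh>();
        Consume(std::move(mesh));   // explicit hand-off; mesh is now null
        // Consume(mesh);           // won't compile: unique_ptr can't be copied
    }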


Small nit, but Rust's panic is not an exception mechanism:

https://doc.rust-lang.org/std/panic/fn.catch_unwind.html

In particular, you may not be able to catch all panics (e.g. when a crate is built with panic = "abort"). Result<T, E> is the canonical way to handle errors in Rust.

Also, in gamedev (which is where this looks like it comes from) we generally avoid RTTI (for size reasons) and the STL (for allocation-control reasons; see EASTL, etc.).


For both Go and Rust, it's not supposed to be exception handling. But both languages have acquired proper unwinding, so people can and do use it that way. The original idea in Go was that if you caught a panic, the program was in a bad state, and all you should do is report the problem and maybe restart the program. Like "longjmp" in C. Now, people are doing recovery without re-launching the program. That's effectively exception handling.


I concur. I don't expect that to convince anybody, but a post needs an opening, and this is as good as any.

Regarding unique_ptr and move semantics, I used both quite a lot on a recent project (VC++/clang/gcc) and have been pretty pleased with the results.

It can be quite hard to make things copyable sometimes, but it's usually easy to make everything at least moveable, so where in the past you might have had to use a pointer (leaving room for error) or a smart pointer (more coding, slow debug build, annoying in the debugger), you can now just always have a value.

And unique_ptr is good for documenting ownership; I used it for stuff that gets passed between threads, but of course there are other options. You don't get any useful compile-time checking, but if you get things wrong then it does at least go pop at runtime in an obvious way... which is better than nothing. (As the saying goes, C programmers know the value of nothing! And that's why we appreciate it so much when we get anything better.)
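
To make the "always have a value" point concrete, here's a minimal sketch (Job is a made-up example): give a type one move-only member and the compiler deletes the copies and generates the moves for you:

    #include <cstddef>
    #include <memory>
    #include <utility>
    #include <vector>

    // Move-only by construction: the unique_ptr member deletes the copy
    // operations and gives us compiler-generated moves for free.
    struct Job {
        std::unique_ptr<int[]> scratch;
        std::size_t size = 0;
    };

    void Example() {
        std::vector<Job> queue;
        Job j{std::make_unique<int[]>(1024), 1024};
        queue.push_back(std::move(j));   // held by value, no manual new/delete
    }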

My two takeaways from the experience:

- moving as a language feature is far more useful than copying... I suspect (though without having thought through it especially thoroughly) C++ would have been heaps better if it started out with move semantics only, with copying being only by convention

- Rust has moved further up my todo list


> Don't overload operators for anything other than math. It just confuses people.

Do you think << and >> in iostreams confuses people?

> unique_ptr and move semantics have promise, but it may be too soon for those features.

What makes it "too soon" for unique_ptr (or shared_ptr)? These are the most useful and long-overdue improvements to C++ I can think of in its entire history.


> Don't do anything in a destructor that can fail or block. Like closing files opened for writing.

I'd like to know more about this. Does that mean no RAII file types?


Read this: [1] In C++11, exceptions in destructors have defined semantics, but those semantics are very complicated. Prior to C++11, the semantics were not well defined.

[1] https://akrzemi1.wordpress.com/2011/09/21/destructors-that-t...
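
The usual pattern that falls out of this advice looks something like the sketch below (my example, not from the article): expose an explicit close() that can report failure, and make the destructor a best-effort, non-throwing fallback:

    #include <stdio.h>

    class WriteFile {
    public:
        explicit WriteFile(const char* path) : f_(fopen(path, "wb")) {}
        ~WriteFile() {
            if (f_) fclose(f_);   // best effort; errors are dropped by design
        }
        bool close() {            // call this on the success path and check it
            if (!f_) return true;
            bool ok = (fclose(f_) == 0);
            f_ = nullptr;
            return ok;
        }
    private:
        WriteFile(const WriteFile&) = delete;
        WriteFile& operator=(const WriteFile&) = delete;
        FILE* f_;
    };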


I work with C++ on embedded computer vision systems, and we mostly build our own low-level image processing libraries. Elements of this philosophy are quite popular in our company in building libraries, i.e. code meant to be reused between projects. The only item that stands out is the following:

Don't use C++ runtime wrapper for C runtime includes (<cstdio>, <cmath>, etc.), use C runtime instead (<stdio.h>, <math.h>, etc.)

I have no idea why that would be useful.

Since we often deal with scientific image processing, there's a heavy use of templates, but no “esoteric” metaprogramming, and it does not contradict this “orthodox” manifesto.

On the system design side, we mostly compose our systems out of rather heavy objects encapsulating separate testable subsystems. These objects exclusively use smart pointers, often exceptions, and can use RTTI if it somehow aids postmortem analysis (e.g. including type_info in exceptions). Multi-threading also lives at this level, usually in the form of heavily-modern-C++-flavored Intel TBB and/or simple message queues.

On the whole, it is Java-like on the top level, C-like on the bottom, templates at the basement, and whatever language helps best to stabilize the whole structure.

ADD: Overall, these "orthodox" qualities have a significant effect on whether we will accept a library into a serious project. It's mostly for this reason that OpenCV use is limited to prototypes and auxiliary software. Eigen and TBB are the exceptions, since they provide great features while being very well maintained with respect to reliability and compatibility.


In my last job I worked in a code base like this for a large AAA game with an idtech legacy. It was even more extreme than Orthodox C++ in that the runtime code was basically C (tools code used some late-90s-style C++ with STL vectors, maps and classes) with a few C++ features sprinkled in for convenience or performance. We could use non-allocating STL algorithms, for example I used std::sort pretty heavily, but classes were rare and memory management was carefully controlled. Way more restrictively than a simple STL container ban: everything was statically allocated or allocated at load time, with memory requirements determined at (asset/level) compile time. Modern console development makes it impossible (or at least really hard) to not allocate some memory at runtime or avoid certain modern C++ idioms. There were VMs for scripted gameplay and UI code, which allocated memory and ran GC, but those operated in their own limited, sand-boxed arenas.
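
For flavor, the allocation scheme was conceptually something like this toy bump allocator (a from-memory sketch, not actual engine code):

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    // One block per level, sized offline by the asset pipeline; allocations
    // only happen at load time, and nothing is freed until level teardown.
    struct LevelArena {
        uint8_t* base     = nullptr;
        size_t   capacity = 0;       // determined at asset/level compile time
        size_t   used     = 0;

        void* alloc(size_t size, size_t align = 16) {
            size_t p = (used + align - 1) & ~(align - 1);
            assert(p + size <= capacity && "level memory budget exceeded");
            used = p + size;
            return base + p;
        }
        void reset() { used = 0; }   // 'frees' everything at once
    };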

In some ways this style was limiting, but in other ways it was refreshing coming from previous jobs where I was working with C++ loving developers, including just myself in an indie code base I'd written that was too C++-y for its own good. Sometimes, keeping the rules of the language in my head, writing "proper C++" and dealing with random gotchas was a big cognitive load. Writing simpler, almost C style code and focusing on data layout and transformation instead of class hierarchies and C++ syntax made me a better, more productive developer.

I also completely bombed an interview after working at this job for 4 years, when I had a typo in assignment-operator syntax (very embarrassing) and couldn't remember modern C++ syntax that I had read about but never used in a production environment (move semantics; not knowing that shared_ptr needs std::default_delete<char[]> to manage an array), so I guess you win some and you lose some.


The restrictive style can be very liberating because the convention minimizes unintended side effects. C++ has a ton of unintended side effects that come from interactions between language features.

Copy and move semantics, combined with heap allocations and STL container copy/move behaviors, can get you into weird places if you use them too liberally.


>for a large AAA game with an idtech legacy.

Yep. That would explain it. From your description, sounds like an idtech 3 lineage (CoD, perhaps? I'd keep guessing, but idtech 3 was so heavily licensed that it honestly could be anything).

It certainly wasn't idtech 4, which I think used a bit more C++ than you describe inside the engine itself.


CoD seems highly likely. I can't think of any other idtech 3 engine descendant that was still going in 2010+.


It was a version of the COD engine.


Most recent C++ changes either facilitate extermination of "random gotchas" (for example simplifying constructors and initializers) or are innocuous additions; what are the harmful "modern C++ idioms" you struggled with? How could "almost C style code" be simpler than taking advantage of a richer syntax backed, at last, by nice semantics?


I think modern C++ features are great, and I pushed to incorporate many of the features you're talking about as the console compilers (slightly-behind versions of clang, gcc and msvc) were upgraded to support them.


IMHO, one of the costs we don't think enough about is maintenance on these software projects. If you have taken the time to learn all of these great frameworks and techniques, that's great, but it increases the cost for the person who comes after you, who has to learn all of those technologies before they can maintain your code.

There is a wisdom that you should never re-implement something that a library exists for, but IMHO a strict interpretation of that view leads to horrendously bloated projects that are nearly impossible to build and maintain. I see this sometimes in node projects where the developers will pull in half a dozen dependencies before they even consider writing a for loop.


A lot of game developers agree with it, that's for sure. Many Golang fans probably will as well. (The author of the concept is both, btw.)

The goal in general is to produce code whose behaviour, compilation and runtime performance, and resource consumption characteristics are evident from reading it, while remaining clear and easy to write and maintain. Such code is explicit in what it does, and so are the bugs in it (which means they are usually easier to catch).

It raises the minimum bar for programmer skill and discipline.


It reads a little bit like a parody of similar (more well-intentioned) documents. If I didn't know it was serious, I wouldn't have guessed it was. The casual throwaway reference to "academic masturbation" would have been a dead giveaway, but the rest isn't objectively any better.

It's hard to refute directly because there isn't any motivation or reasoning presented, only prescriptions (the author cites their "experience" in comments), so the best I can do is point out that most of it is really bad advice.

If I were to guess: C++ was designed in a certain way, and you really ought to understand it if you want to use it well (other languages are like this too: it's not a fault!). So if you absolutely refuse to learn anything that requires a nonzero amount of effort, this is the sort of coding style you end up with.


> It's hard to refute directly because there isn't any motivation or reasoning presented, only prescriptions (the author cites their "experience" in comments), so the best I can do is point out that most of it is really bad advice.

If you want to refute it, why not say what you think is wrong with each of the prescriptions instead of just saying "most of it is bad"?

Personally I agree with most of the prescriptions but there's one I disagree with completely and another that I'd add a caveat to.

The one I disagree with is using C headers instead of their C++ wrappers. Some of the C++ wrappers actually add value (e.g. <cmath> adding templated versions of abs, etc.), and it's easier for me to remember a single consistent rule ("use the C++ wrappers, always") than a list of which headers to use the C++ wrapper for and which to use the C header for.
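
For example (the exact behavior of the second call depends on which declarations your headers put in the global namespace, which is part of the problem):

    #include <cmath>     // overloaded std::abs for float, double, long double
    #include <stdlib.h>  // C abs(): int abs(int) only

    double Example(double x) {
        double a = std::abs(x);  // resolves to the double overload, as intended
        double b = ::abs(x);     // classic footgun: may silently truncate to
                                 // int, depending on what's in scope
        return a + b;
    }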

The one I'd add a caveat to is the one about not using anything from the STL that allocates: I think that's good advice under some circumstances, but not all. The STL containers are really useful for getting something up and running quickly, so I think it's fine to use them in that case and only switch to custom containers once allocation shows up as a hot spot in your profiling.

As a caveat to the caveat, I would add that STL classes should only ever be used as an internal implementation detail, never exposed as part of an API, because the implementation of the STL classes can change, causing binary incompatibility. For example, Microsoft changed their implementation of std::string between Visual Studio 2008 and 2010 (if I remember correctly; and possibly again since?). If you have a library compiled against the older std::string, you can't use it in a project compiled against the newer std::string, and vice versa - unless you have the source for the library and can recompile it. Using your own classes protects you from that, because it puts you in control of when the ABI changes.
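
In practice that boundary rule can look something like this (renderer_create etc. are hypothetical names): the public header exposes only C types, and the stdlib stays an implementation detail:

    #include <stddef.h>

    // --- public header: nothing here depends on the stdlib's ABI ---
    extern "C" {
        typedef struct Renderer Renderer;   // opaque handle
        Renderer* renderer_create(const char* config, size_t config_len);
        void      renderer_destroy(Renderer* r);
    }

    // --- implementation: free to use any std::string it likes internally ---
    #include <string>
    struct Renderer {
        std::string config;                 // never crosses the API boundary
    };
    Renderer* renderer_create(const char* config, size_t config_len) {
        return new Renderer{std::string(config, config_len)};
    }
    void renderer_destroy(Renderer* r) { delete r; }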


> If you want to refute it, why not say what you think is wrong with each of the prescriptions instead of just saying "most of it is bad"?

That is what's wrong with them: if you follow them you'll end up writing worse code for no good reason. I would be a lot more interested in discussing the reasons the author might have had, but they don't present them, so there's little to discuss there.


That sounds a lot like the arguments I heard in the late 90s, when the STL was new and dangerous. So "Modern C++" in the Alexandrescu sense, not even C++14/17.


Very yes. RTTI and unwinding add unacceptable size bloat on embedded (in some cases), as does "academic" use of templates.

Reasoning about code that can throw is almost like COMEFROM. Instead, on anything seriously bad (OOM, etc.) the process aborts and is restarted.

OTOH, lambdas for callbacks help a lot with writing decoupled interfaces.
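
E.g. something like this (std::function is used here just for brevity; on embedded targets you'd more likely use a fixed-size delegate or a function pointer plus context to avoid its potential allocation):

    #include <functional>

    // The Button knows nothing about its listeners; the lambda captures
    // whatever context the listener needs.
    struct Button {
        std::function<void()> on_click;
    };

    struct Counter { int clicks = 0; };

    void Wire(Button& b, Counter& c) {
        b.on_click = [&c] { ++c.clicks; };   // Button never sees Counter
    }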


I've read the "Orthodox C++" definition. It's mostly outdated or misinformed, or just a troll.

Two examples:

- In 2017 you don't say STL but "Standard Library", and saying that it has "bad memory management" means you don't understand how it works: you can use whatever memory management you want for most containers, and you can even use the algorithms on C arrays (see the sketch below).

- I don't see why you would want to use printf instead of, for example, libfmt.
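
To illustrate the first point (the allocator type is left as a comment because a real one is project-specific):

    #include <algorithm>
    #include <iterator>
    #include <vector>

    int main() {
        // The standard algorithms work directly on C arrays:
        int scores[] = {42, 7, 19, 3};
        std::sort(std::begin(scores), std::end(scores));

        // And most containers take an allocator parameter, so memory
        // management is pluggable:
        // std::vector<int, MyArenaAllocator<int>> v;   // hypothetical allocator
        return 0;
    }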



