
Many of the principles here align with my tastes, such as focusing on fast compile times and supporting allocation failure. On the other hand:

> Unplanned Features:

> SharedPtr

> UniquePtr

> In Principles there is a rule that discourages allocations of large number of tiny objects and also creating systems with unclear or shared memory ownership. For this reason this library is missing Smart Pointers.

I don’t like that at all. I take the common view that all heap objects should at least be allocated via smart pointers. Doing so is safer, easier, and usually zero-overhead. After allocation it may make sense to pass those objects around via raw pointers/references, but ownership should stay with the smart pointer.

So while I agree that it’s undesirable to allocate “large numbers of tiny objects”, I would want smart pointers as long as there’s any dynamic allocation at all.
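
Concretely, the pattern I have in mind is roughly this sketch (Texture is just a made-up example type):

    #include <memory>

    struct Texture { int width = 0, height = 0; };  // made-up example type

    // Non-owning users take a plain reference; ownership never leaves the caller.
    void draw(const Texture& tex) { (void)tex; /* render... */ }

    int main() {
        // Usually zero-overhead compared to a raw new/delete pair, and exception-safe.
        auto tex = std::make_unique<Texture>();
        draw(*tex);  // pass by reference; the unique_ptr still owns the object
    }                // tex is freed automatically here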




For me personally, SharedPtr is very rarely needed, as it encourages building difficult-to-untangle ownership hierarchies. I did use a lot of SharedPtr in the past when creating a node.js-like library in C++, but breaking the ref cycles everywhere that was needed has always been a pain. That's why I am currently against its use, unless there is a very special case.

Regarding UniquePtr<T>, I used to have one, but I later decided to remove it.

https://github.com/Pagghiu/SaneCppLibraries/commit/9149e28

That being said, the library is lean enough that you can still use it together with smart pointers provided by any other library (including the standard one), if that's your preference.


> you can still use it with smart pointers provided by any other library

Is the point of having a kitchen-sink library like this not that you don't have to reach for a third-party library for things that you need 'all the time'?

Certainly, not everyone needs it.

...but not everyone needs threads either. Not everyone needs an HTTP server. And yet, if you have an application framework that provides them, then when you do need them it saves you from reaching for yet another dependency.

Was that not the point from the beginning?

unique_ptr is a fundamental primitive for many, as you can see from other frameworks [1], and implementing one is not always either a) trivial, or b) as simple as 'just use std::unique_ptr'.

This does seem like a very opinionated decision with fairly unclear justification. That's perfectly fair; you're certainly not beholden to anyone to implement features just because they want you to. But I think it's difficult to argue there's no concrete use for something like this, in a way that aligns with the project principles.

I would go so far as to argue that:

> Do not allocate many tiny objects with their own lifetime (and *probably unclear or shared ownership*)

Is itself an argument for having a unique pointer.

[1] - eg. https://github.com/EpicGames/UnrealEngine/blob/release/Engin..., https://github.com/electronicarts/EASTL/blob/master/include/...


I usually like to place all dynamically allocated objects of type T into an std::vector<T>, if possible. There is sometimes an obvious point where the entire batch should be discarded and a new batch built. At that point you can call vector.clear(), which avoids the deallocation/allocation cost for the new batch, as long as it is not bigger than the old one. This is a sort of quick-and-dirty arena allocation.

This style is also more cache friendly if you are going to be looping through the elements.
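
A minimal sketch of what I mean (Particle is just a made-up example type):

    #include <vector>

    struct Particle { float x, y, vx, vy; };  // made-up example type

    int main() {
        std::vector<Particle> batch;
        for (int frame = 0; frame < 100; ++frame) {
            // clear() destroys the elements but keeps the capacity, so later
            // batches reuse the same allocation as long as they don't outgrow it.
            batch.clear();
            for (int i = 0; i < 1000; ++i)
                batch.push_back({float(i), 0.0f, 0.0f, 0.0f});
            // ... loop over the contiguous batch here (cache friendly) ...
        }
    }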


I do this too, when I can either use a handle/index instead of a pointer, or when I can guarantee that the vector’s size is constant (so that pointers/iterators are stable). I’ve also written my own vector that stores its elements in pages, so that if its capacity needs to increase, the elements don’t need relocation.

I only really use C++ for a toy game engine right now, and in that codebase I don’t use any smart pointers; most objects/functions get passed references to their object dependencies. I classify objects into groups where each group’s ownership is very clear: the owner is responsible for maintaining the memory, and any raw pointers can always be assumed to be borrowed references. I use handles when the underlying object’s lifetime might differ from that of whatever is holding the handle. Short-lived objects are kept trivial, allocated from stack/bump allocators or pools, and reset at well-defined times (every frame, end of level, etc.)

I’m much happier this way than when I used smart pointers or when I had less well defined memory ownership.
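
Roughly the kind of thing I mean by handles; MeshStore/MeshHandle are made-up names for illustration:

    #include <cstdint>
    #include <utility>
    #include <vector>

    struct Mesh { /* vertex/index buffers, etc. */ };

    struct MeshHandle { std::uint32_t index; };  // handle instead of a pointer

    // One owner: this system holds the storage, everyone else holds MeshHandles.
    class MeshStore {
        std::vector<Mesh> meshes_;
    public:
        MeshHandle add(Mesh m) {
            meshes_.push_back(std::move(m));
            return { static_cast<std::uint32_t>(meshes_.size() - 1) };
        }
        // Resolve a handle only when you actually need the object; don't store
        // the returned reference, since the vector may reallocate on growth.
        Mesh&       get(MeshHandle h)       { return meshes_[h.index]; }
        const Mesh& get(MeshHandle h) const { return meshes_[h.index]; }
    };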


Interesting - is there a type-safe way to do this? vector<variant<>>? And/or a custom “vector allocator” to hide the details?


In my example, the T is one specific type. So you could have std::vector<Cat>. If you also have Dogs, you just make another vector std::vector<Dog>. It works fine with the standard allocator. You don't have to do anything special.


Ah okay that makes sense to me


You can do a Vector<TaggedUnion<Union>>. https://pagghiu.github.io/SaneCppLibraries/library_foundatio...

I have not been working (yet) on custom allocators, but that's on the roadmap: https://pagghiu.github.io/SaneCppLibraries/library_container...
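
If you prefer to stay with the standard library, std::variant gives the same shape. This is just a rough sketch with placeholder Cat/Dog types, not the library's TaggedUnion API:

    #include <cstdio>
    #include <variant>
    #include <vector>

    struct Cat { int lives = 9; };
    struct Dog { int tricks = 3; };

    int main() {
        std::vector<std::variant<Cat, Dog>> pets;  // one contiguous, type-safe batch
        pets.push_back(Cat{});
        pets.push_back(Dog{});
        for (const auto& pet : pets) {
            if (std::holds_alternative<Cat>(pet)) std::puts("cat");
            else                                  std::puts("dog");
        }
    }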


If you wanted a factory that allocated vectors of a variety of known types, you'd probably declare template <typename T> before the generator function, so that on compilation a separate version of that function would be emitted for each type you passed to it.

(Not really a c++ expert, but that's my understanding; someone more knowledgeable can correct me).
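
Roughly like this sketch (makeBatch is a made-up name):

    #include <cstddef>
    #include <vector>

    // One copy of this function is compiled per type T it is actually used with.
    template <typename T>
    std::vector<T> makeBatch(std::size_t count) {
        std::vector<T> batch;
        batch.reserve(count);        // a single allocation up front
        for (std::size_t i = 0; i < count; ++i)
            batch.emplace_back();    // requires T to be default-constructible
        return batch;
    }

    struct Cat { int lives = 9; };
    struct Dog { int tricks = 3; };

    int main() {
        auto cats = makeBatch<Cat>(100);  // instantiates makeBatch<Cat>
        auto dogs = makeBatch<Dog>(50);   // instantiates makeBatch<Dog>
    }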


Why would you need a variant? If you have one of something, put it on the stack. If you have a lot of one type, put them in a vector.


Isn't that an arena allocator at that point?


Shared pointers are most definitely not zero-overhead. The trouble is that they use atomic operations, which can cause contention problems in highly concurrent systems. You can use non-thread-safe smart pointers, but at some point you have to ask whether that is really lower risk than just not using shared pointers at all (Rust gets around this with type checking).
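
To make the overhead concrete, a rough sketch (Widget is a placeholder type): every by-value copy of a std::shared_ptr does an atomic increment/decrement on the control block, which is why hot paths often take a const reference (or a raw pointer/reference) instead.

    #include <memory>

    struct Widget { int value = 0; };  // placeholder type

    // Taking the shared_ptr by value copies it: an atomic increment on entry
    // and an atomic decrement on exit, on every call.
    void byValue(std::shared_ptr<Widget> w) { w->value++; }

    // Taking it by const reference touches no reference count; the caller's
    // ownership keeps the object alive for the duration of the call.
    void byConstRef(const std::shared_ptr<Widget>& w) { w->value++; }

    int main() {
        auto w = std::make_shared<Widget>();
        byValue(w);
        byConstRef(w);
    }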


Yeah, I prefer RAII over manual memory management. Sorry, strawman.


Adding a bunch of lines like `#pragma GCC poison new` to the last-included header of every source file is very useful. It doesn't fully stop manual memory management from sneaking into headers, since poisoning any earlier would break system headers (though maybe if modules ever work, that could be avoided).
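
Roughly like this, with banned.h as a hypothetical name for that last-included header:

    // banned.h -- must be included *after* every other header in a .cpp,
    // because poisoning these identifiers any earlier breaks system and
    // library headers.
    #pragma once
    #pragma GCC poison new malloc calloc realloc free
    // Poisoning `delete` too is possible, but it also bans `= delete`
    // declarations in the same translation unit.

Code preprocessed before the pragma (e.g. the internals of <vector>) is unaffected, since poisoning only applies to tokens that appear after it in the translation unit, so containers keep allocating normally.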

For the rare case of porting software with unclear ownership, I use a `dumb_ptr` template with allocation and deallocation methods. Since it's header-only (and therefore included before the poison header), it naturally escapes the poisoning.

In particular, the `vector` method mentioned elsewhere is completely broken, since objects move and thus you can't keep weak/borrowed references to them. If you use indices, you give up on all ownership and are probably using global variables, ick. Please just write a proper pool allocator if that's what you want (possibly using generational references to implement weak references).
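
A rough sketch of what generational references over a pool could look like (names and layout are illustrative, not from any particular library):

    #include <cstdint>
    #include <utility>
    #include <vector>

    // Handles stay valid as identities even if the underlying storage moves,
    // and stale handles resolve to nullptr (the "weak" behaviour).
    struct Handle { std::uint32_t index; std::uint32_t generation; };

    template <typename T>
    class Pool {
        struct Slot { T value{}; std::uint32_t generation = 0; bool alive = false; };
        std::vector<Slot>          slots_;
        std::vector<std::uint32_t> freeList_;
    public:
        Handle create(T value) {
            std::uint32_t index;
            if (!freeList_.empty()) {
                index = freeList_.back();
                freeList_.pop_back();
            } else {
                index = static_cast<std::uint32_t>(slots_.size());
                slots_.emplace_back();
            }
            Slot& s = slots_[index];
            s.value = std::move(value);
            s.alive = true;
            return { index, s.generation };
        }
        void destroy(Handle h) {
            Slot& s = slots_[h.index];
            if (s.alive && s.generation == h.generation) {
                s.alive = false;
                ++s.generation;                 // invalidates all outstanding handles
                freeList_.push_back(h.index);
            }
        }
        // Don't store the returned pointer; re-resolve the handle when needed.
        T* get(Handle h) {
            Slot& s = slots_[h.index];
            return (s.alive && s.generation == h.generation) ? &s.value : nullptr;
        }
    };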


Everything is a tradeoff. Even things like goto and global variables will occasionally be the right choice.

Regarding the std::vector method, you may have a very loosely coupled system where a bunch of T1s enter a pipeline and come out as T2s. For this use case, std::vector<T1> and std::vector<T2> are great. On the other hand, if you need to create an object and hand it off to someone else with no knowledge of how long they will hold onto it, then std::shared_ptr could be a good option.
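
A rough sketch of that pipeline shape, with RawSample/Calibrated standing in for T1/T2:

    #include <vector>

    struct RawSample  { float volts = 0.0f; };  // stands in for T1
    struct Calibrated { float value = 0.0f; };  // stands in for T2

    // One pipeline stage: consumes a batch of T1 by const reference and
    // returns a freshly built batch of T2; ownership is never shared.
    std::vector<Calibrated> calibrate(const std::vector<RawSample>& in) {
        std::vector<Calibrated> out;
        out.reserve(in.size());
        for (const RawSample& s : in)
            out.push_back({ s.volts * 2.5f });  // made-up conversion
        return out;
    }

    int main() {
        std::vector<RawSample> raw(1024);
        std::vector<Calibrated> calibrated = calibrate(raw);
    }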

In between, you have entity component systems that do the kind of index tracking you mention, so that identities are decoupled from memory locations and objects are free to move. I didn't understand your point about global variables and why they would be necessary to implement this type of system. I also didn't understand how this gives up on all ownership: the owner would be the system that maintains the index-to-memory-location mapping.



