
C's corners aren't very dark. It's a small enough language that it's easy to explore them. Things can get ugly when programmers decide to abuse the preprocessor because the language isn't complicated enough for them, but thankfully most C programmers have a distaste for such shenanigans. C++ is down the hall and around the corner, if you want darkness.


"write a memory recycler"

*sigh* This is by far Go's biggest wart IMO, and one that frequently sends me back to a pauseless (hah! at least less pause-prone :) systems language. I sure do like Go in almost every other meaningful regard. But I wish latency wasn't something the designers punted on.


I occasionally hear this kind of complaint, but I've yet to see any silver-bullet memory management system. AFAICT, the best we've been able to accomplish is to provide an easier path to correctness with decent overall performance. Also, GC latency isn't the only concern. As soon as the magic incantation "high performance" is uttered, all bets are off.

There have been decades of work on real-time garbage collection, yet all of those approaches still have tradeoffs. Consider that object recycling is a ubiquitous iOS memory management pattern. It reduces both memory allocation latencies and object re-creation overhead. Ever flick-scroll a long list view on an iPhone? The list elements that fly off the top are almost immediately recycled back to the bottom -- it's like a carousel with only about as many items as you can see on screen. The view objects are continually reused, just with new backing data. This approach to performance is more holistic than simply pushing responsibility onto the memory allocator.
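
To make the pattern concrete, here's a minimal C sketch of that carousel idea (all names are hypothetical, not Apple's actual API): a fixed pool of cells is allocated once up front, and scrolling just rebinds existing cells to new rows, so the steady state performs no allocation at all.

    #include <stdio.h>

    #define VISIBLE_CELLS 8

    /* One on-screen "cell": the object itself is reused; only its
       backing data changes as the list scrolls. */
    struct cell {
        int  row;
        char text[64];
    };

    static struct cell pool[VISIBLE_CELLS];  /* allocated once, up front */

    static void bind(struct cell *c, int row) {
        c->row = row;
        snprintf(c->text, sizeof c->text, "row %d", row);
    }

    int main(void) {
        /* Row r is always displayed by cell r % VISIBLE_CELLS, so the
           same handful of objects cycles for the whole list. */
        for (int r = 0; r < 100; r++)
            bind(&pool[r % VISIBLE_CELLS], r);
        return 0;
    }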

Memory recycling here also reminds me of the frame-based memory allocation techniques written up in the old Graphics Gems books and likewise covered in real-time systems resources. Allocating memory from the operating system can be relatively expensive and inefficient, even through good ol' malloc. A frame-based allocator grabs a baseline number of pages and serves allocations of one or more common object sizes (aka "frames"). Pools for a given frame size are kept separate, which prevents memory fragmentation. Allocation is much faster than straight malloc, memory efficiency for small objects improves, and fragmentation is eliminated. Again, this is a problem-specific approach that considers needs beyond latency.
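
For flavor, a bare-bones frame allocator in C might look like this (a sketch only: no growth, no thread safety, names invented for illustration). Carve one malloc'd block into equal-size frames and thread a free list through them, so alloc and free become O(1) pointer swaps and the pool can never fragment.

    #include <stddef.h>
    #include <stdlib.h>

    struct frame_pool {
        void *free_list;  /* head of the intrusive free list */
        void *base;       /* the single block from malloc    */
    };

    static int pool_init(struct frame_pool *p, size_t frame_size, size_t nframes) {
        if (frame_size < sizeof(void *))
            frame_size = sizeof(void *);   /* room for the list link */
        p->base = malloc(frame_size * nframes);
        if (p->base == NULL)
            return -1;
        p->free_list = NULL;
        char *f = p->base;
        for (size_t i = 0; i < nframes; i++, f += frame_size) {
            *(void **)f = p->free_list;    /* link frame into the list */
            p->free_list = f;
        }
        return 0;
    }

    static void *pool_alloc(struct frame_pool *p) {
        void *f = p->free_list;
        if (f != NULL)
            p->free_list = *(void **)f;    /* pop */
        return f;  /* NULL when the pool is exhausted */
    }

    static void pool_free(struct frame_pool *p, void *f) {
        *(void **)f = p->free_list;        /* push */
        p->free_list = f;
    }

Keep one such pool per frame size and route each allocation to the pool whose frame size fits; that separation is what keeps fragmentation at zero.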


"AFAICT, the best we've been able to accomplish is to provide a easier path to correctness with decent overall performance."

Precisely. Which is why, for performance-critical systems code, it's important to give the programmer a choice of memory allocation techniques while adding features to the language that make memory use safer.

Garbage collection is great, but occasionally it falls down and programmers have to resort to manual memory pooling. Then it becomes error-prone (use-after-free, leaks) without type-system help such as regions and RAII.


I can't speak for the grandparent, but for my part I agree with your point that allocation patterns matter and that there is no silver bullet to memory management, which is exactly the reason that GC'd languages like Go are uninteresting as systems languages. Why use a language where you have to work around one of its main features when you care about performance?

I find Rust's approach much more interesting, because GC is entirely optional, but it provides abstractions that make it easier to write clear and correct manual memory management schemes.


Hm, Rust keeps hitting my radar with interesting attributes like this. Time to go have a look-see. Thanks!


> I wish latency wasn't something the designers punted on.

The simplistic GC isn't part of the language design; it's a stopgap in the first version.


Do they have a standard ABI or FFI for interaction with C? If so, they probably designed the assumption of a conservative GC into it. You can always make an incompatible change, but it's a pain.


gmsl is a wonderful, well-considered library. I had to buy the book after finding it.

Thanks!


"Which one is better depends on the platform you're on"

That works against a portable build, which still matters to some of us.


By 'platform' I meant programming language and execution environment (JVM/Ruby/Python), not OS.


I'm fond of not having the number of build systems I'm responsible for maintaining grow in step with the number of programming languages I use in a project.


All build systems I've come across are quite happy to build code in other languages - just like make. The reason I'd recommend the dominant system in each ecosystem is, well, just that: it's dominant, and thus likely to be better supported for the issues you're likely to face.


I've gone back and re-read your comments in this post and tried to find something concrete in them. You use strong, imperative language, yet I have no idea what you are recommending.


"Which one is better depends on the platform you're on"

We are not stuck on any particular platform; we target numerous platforms, new and old. Using make, the target may determine which of those wonderful tools in our ever-changing (improving, failing, obsolescing) ecosystem does the actual build.

    build:
        xcodebuild -target MyApp -configuration Release clean
        xcodebuild -target MyApp-universal -configuration Release-universal clean
But all I need to do is type "make".


> not worth the effort

Until it is, of course. Growing up is hard.


Does he have an entry on how to become an insufferable blow-hard?



That's freaking hilarious. Who knew hackers needed sex tips? I had to do a double-take on the domain to see if they were identical.


A cringe-inducing classic... oO;


"Some men have this down well enough that they can make eye contact with a woman they've never met before, smile, say "You're very pretty." and make her smile back"

...

"Cathy: Try this on a stranger in an elevator, if you must, to minimize the fear level."



I honestly wish nothing Eric wrote would ever be linked on HN again. He had his heyday, he grabbed enough asses at conventions and harassed enough unsuspecting girls, and we should just never mention him again. This is not the '90s anymore; I just hope the stuff that barely applied even in the '90s is thoroughly historic and dated by now, and that "geeks" and "hackers" are a ton more diverse than that by now.

When it comes down to it, he is a politician and a lobbyist, and while those roles are arguably not without use, I would rather not have someone like that lay down the law about what I, as a "hacker", should and should not do, or how I am supposed to be more like him.


ESR is known for harassing girls? Never heard of that, but it sounds like a fun story...

I remember reading ESR's writings as a teenager and actually being slightly afraid of ending up like him.


This story typically doesn't have a very nice ending. Maybe you're just having some fun, and will be able to shrug it off some day. I hope so. But if you find yourself trying to quit, and unable to, try to find some help.

Sometimes it takes a lot of tries, with lots of different kinds of help from lots of different kinds of folks.


That is a comment on the style of C that Bourne used in his shell implementation. He used the preprocessor to, e.g., rename { to BEGIN and } to END, and to do other Algol-isms.
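
For the curious, Bourne's macro header defined things roughly along these lines (reconstructed from memory, so treat the exact set as approximate):

    #define BEGIN   {
    #define END     }
    #define IF      if(
    #define THEN    ){
    #define ELSE    } else {
    #define ELIF    } else if(
    #define FI      ;}
    #define WHILE   while(
    #define DO      ){
    #define OD      ;}
    #define LOOP    for(;;){
    #define POOL    }

so the shell's source read more like Algol 68 than C: IF x THEN ... ELSE ... FI.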

Duff would apparently rather read C that was written in C.


The barrier to entry is low now, and getting lower.

Prototyping your custom instruction sets on FPGAs and then commissioning a run to stamp them into ASICs isn't prohibitively expensive, or hard.

In part, it's lack of imagination that has led us so far down the complicated, twisty path into x86 hell.

Just because your chip can do it doesn't mean it's good at it.


This is very interesting. I always thought that making an ASIC was prohibitively expensive except for the largest companies. How much does it really cost?

I would really enjoy playing with a Lisp chip. It might not be good for performance computing, but it would be great for writing GUIs. The paper suggests having a chip with a Lisp part for control and an APL part for array processing - I think the modern equivalent would be a typed-dispatching part for control and some CUDA or OpenCL cores for speed.


> I always thought that making an ASIC was prohibitively expensive except for the largest companies. How much does it really cost?

Full custom is still quite expensive.

But you can go the route I'm talking about (prototype on an FPGA, then get in on one of the standard runs at a chip fab via MOSIS or CMP or a similar service) for ~10,000 USD for a handful of chips.


I'm sensing some kind of universal price point for bleeding edge fabrication.

Adjusting for time, etc., that's pretty much what it cost in 1991 to have a handful of custom boards and firmware built around the TI DSP chips of the day, in order to build a dedicated multichannel parallel seismic signal processing array for marine survey work.


Your cash is a valuable asset to your startup, too, but people don't store that in-house.

Who do you trust more, GitHub or Wall Street?

