Small Memory Software: Patterns for systems with limited memory (smallmemory.com)
161 points by ingve on March 13, 2016 | 34 comments



The one thing I was strongly expecting, but didn't seem to find any mention of when quickly paging through, is the idea of using simpler, constant-space algorithms (e.g. streaming style, keeping only what's needed in memory) and, in general, reducing the amount of code and data.
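
For example (a rough sketch of what I mean, not something from the book), a running mean keeps only a couple of values in memory no matter how long the input stream is:

    #include <cstdio>

    int main() {
        // O(1) memory: only the count and the running mean are kept,
        // no matter how many values the stream contains.
        double value = 0.0, mean = 0.0;
        long long n = 0;
        while (std::scanf("%lf", &value) == 1) {
            ++n;
            mean += (value - mean) / n;  // incremental mean update
        }
        std::printf("mean of %lld values: %f\n", n, mean);
        return 0;
    }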

Likewise, the use of C++ and Java in a book about "limited memory" is a bit unusual.

Then again, I'm really not keen on the whole "patterns" thing, because from experience I've found it tends to replace careful thought with dogmatic application of rules that might not be relevant at all to the situation at hand.


Why is C++ not good? It doesn't grow memory usage on its own. If you can put C code on something, you can do C++ as well. We're talking about things up to 10 MB. It's perfectly fine on those. (actually the smaller your target, the less difference there is - you don't really have to care about APIs or calling conventions on an 8K chip)

Java comes in many different versions. For example, the smallest one (Java Card) can run on the tiny chips in smartcards, so "Java" as a whole may be a bad generalisation, but it's definitely possible.


I find C++ encourages more pointer use by default, which implies everything is in memory all the time. C, on the other hand, is less implicit about this, so things such as handles, or deferred messages instead of straight method calls, have less... friction?

Now you can wrap them up in C++, but then hiding such things, I think, defeats some of their purpose.
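
To make the handle idea concrete (a rough C++ sketch, names made up): the caller keeps a small handle, and the system is free to move, page out, or recreate the actual object behind it.

    #include <cstddef>
    #include <vector>

    using Handle = std::size_t;  // an index into a table, not a raw pointer

    struct HandleTable {
        std::vector<void*> slots;

        Handle allocate(void* p) { slots.push_back(p); return slots.size() - 1; }

        // In a real system this could page the object back in before returning it.
        void* resolve(Handle h) const { return slots[h]; }

        // The object moved (compaction, reload from flash, ...); callers keep the same handle.
        void relocate(Handle h, void* p) { slots[h] = p; }
    };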

A managed runtime should handle this better, but I don't know of any Java or .NET runtime that is optimised for a small memory footprint, perhaps paging from a larger virtual space[1]. I'd be interested to hear of any, really.

[1] I do know there have been versions of Smalltalk that did basically this, but nothing recently that I've heard of.


Honestly, I remember making sub-100k binaries that played C64 SIDs and did a whole bunch of other stuff, about 15 years ago (I would have said 16 KB, but I don't really remember the numbers). I believe I mostly disabled RTTI, but I wasn't very good at reading manuals and understanding C++ back then, so I ended up patching the compiler instead.

You could argue that if you end up disabling all the fancy stuff from C++ why use C++ at all. But anyway, you can definitely use C++ in low memory environments if you really want to.


You can write C++ code that works like an equivalent C program, but that code also looks like the equivalent C program -- only with slightly less modern syntax.

Writing good, modern C++ means using the external template libraries and all kinds of (relatively) heavyweight goodies. The trouble is it's difficult to predict the space cost of those goodies -- so on systems where that matters it's just easier to think in terms of C.


Regardless of whether you're coding in C or C++ for micros, you're not going to be using much, if any, of the standard library anyway. You'll probably be rolling your own routines for a lot of stuff, because so many libraries, even those for micros, are rather oversized when you're truly constrained.

I've used templates in C++ libraries to accommodate varying memory constraints and MCU capabilities. Being able to write a library once and deploy it on 8-bit and 32-bit chips, with and without FPUs, saves a lot of time, and the resulting code is just as efficient. Finally, a lot of C/C++ toolchains for small devices can give a pretty good estimate of how much RAM your code is going to need, so you can figure out ahead of time whether a routine is really worth it.
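
Roughly the kind of thing I mean (a simplified sketch, names made up): the sample type, accumulator type and buffer size are template parameters, so the same source serves a tiny fixed-point target and a bigger one with an FPU.

    #include <cstddef>
    #include <cstdint>

    template <typename Sample, typename Acc, std::size_t N>
    class MovingAverage {
        Sample buf_[N] = {};   // storage fixed at compile time, no heap
        std::size_t idx_ = 0;
        Acc sum_ = 0;          // wider accumulator to avoid overflow on integer targets
    public:
        Sample push(Sample s) {
            sum_ += Acc(s) - Acc(buf_[idx_]);  // keep a running sum of the window
            buf_[idx_] = s;
            idx_ = (idx_ + 1) % N;
            return static_cast<Sample>(sum_ / static_cast<Acc>(N));
        }
    };

    // Hypothetical instantiations for two targets:
    using TinyFilter  = MovingAverage<std::int16_t, std::int32_t, 8>;  // 8-bit MCU, no FPU
    using FloatFilter = MovingAverage<float, float, 64>;               // 32-bit MCU with FPU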


You can use templates and objects at practically 0 cost, and significant abstractions can be built with these mechanisms. Presumably, this is a night and day difference for some people.


Last I checked C didn't even have overloaded functions. I'd choose "no fancy features" C++ over C any day.
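
A trivial made-up example of what that buys you: the compiler picks the right version from the argument types, where in C you would carry the type in the function name.

    #include <cstdint>

    // Saturating add for two sample widths; same name, resolved by overload.
    inline std::uint8_t sat_add(std::uint8_t a, std::uint8_t b) {
        std::uint16_t s = std::uint16_t(a) + b;
        return s > 0xFF ? 0xFF : std::uint8_t(s);
    }

    inline std::uint16_t sat_add(std::uint16_t a, std::uint16_t b) {
        std::uint32_t s = std::uint32_t(a) + b;
        return s > 0xFFFF ? 0xFFFF : std::uint16_t(s);
    }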


It's certainly the case that you can use C++ features sparingly and get the benefits of both C and C++; but I think the notion that "good, modern C++" is somehow supposed to be highly abstract, and thus unnecessarily bloated, is itself problematic.

My original comment was more about the fact that despite using C++ and Java examples, this book quite noticeably lacks any advice in the "use templates and other abstractions sparingly" direction.


A talk about using C++ rather than C for embedded code: https://www.youtube.com/watch?v=PDSvjwJ2M80 (code::dive conference 2015 - Bartosz Szurgot - C++ vs C the embedded perspective)


> I'm really not keen on the whole "patterns" thing

I was surprised to see that the architecture section of the book has recommendations that Android implements (I work on AOSP for a living). I see striking similarities. If the original Android team had consulted this book, then they got a lot of stuff for free... if they didn't, it only reinforces the fact that patterns do matter. In Android's case, they seem to fit like a glove.

Someone can correct me, but except for the Read-only Memory and subsequent parts, I think Android seems to have most of the other suggestions implemented.

To others: The book is fairly approachable. Skimming the topics and sub-topics alone reveals a lot. An absolute gem of a book, IMO.


Streaming would benefit strongly from coroutines, which C++ does not support yet.


I realize that small memory discipline is not something that serves a modern programmer well. Spending time on optimizing memory usage is often completely useless as a new version or new feature will invalidate the effort and the overall effect on the system, compared to other work the programmer could be doing, would not be cost effective.

That said, understanding how to design within tight memory constraints is useful. While typically only seriously practiced by embedded systems developers, having habits that minimize memory use can have a large impact at scale. The book gives some good reasons for why those habits are beneficial, but I also think that, as a percentage of the total, programmers living in constrained memory spaces are now a specialization, not the mainstream.


We're not living in constrained memory spaces on desktop but we sometimes are in the cloud.

In my experience 'small-batch distributed jobs' (~5 machines) are often candidates for memory optimization. Especially if written in high-level languages without easy access to struct packing.


I think we _are_ living in memory-constrained spaces on the desktop. For some time now I have found that the impetus for upgrading to a new computer has been to gain more RAM. More RAM usually requires changing RAM/DIMM type, which necessitates a new motherboard and therefore a new CPU. It has been that way since I had an 8 MB machine.

It boggles my mind that a machine that could hold six hundred uncompressed 1080p video frames (roughly 6 MB per frame at 24-bit colour, so about 3.7 GB) is my most memory-constrained computer.


For what it's worth, I've had the opposite experience throughout my life. I've never actually upgraded to the physical/motherboard memory limits before I end up buying a new motherboard anyway when changing CPU sockets.


I agree. When I was at university, I bought a top-of-the-line Macbook. Not even a year later, I couldn't even run Eclipse and Firefox at the same time without swapping. Any time I needed to switch from Eclipse to Firefox or vice versa, I wound up waiting a noticeable length of time while the system was busy paging one out of RAM and the other in.

I'm on board with Niklaus Wirth: https://cr.yp.to/bib/1995/wirth.pdf


I remember a video posted here showing experiments without traditional memory alignment (none instead of 4 or 8). The code is faster and uses less memory as long as the data can fit in the processor cache. It was on an Intel CPU.
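
If I understood the idea correctly, it's essentially this trade-off (a made-up sketch, not from the video): packing drops the padding the compiler inserts for alignment, so more records fit in cache, at the price of unaligned accesses.

    #include <cstdint>
    #include <cstdio>

    struct Aligned {             // compiler pads so 'value' is 4-byte aligned
        std::uint8_t  tag;
        std::uint32_t value;
        std::uint8_t  flags;
    };

    #pragma pack(push, 1)
    struct Packed {              // same fields, no padding, 'value' may be unaligned
        std::uint8_t  tag;
        std::uint32_t value;
        std::uint8_t  flags;
    };
    #pragma pack(pop)

    int main() {
        std::printf("aligned: %zu bytes, packed: %zu bytes\n",
                    sizeof(Aligned), sizeof(Packed));  // typically 12 vs 6
        return 0;
    }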


I imagine the results would have been quite different with a different architecture, such as ARM.


Actually, ARM also has 64-byte cache lines. And ARM tends to be more sensitive, performance-wise, to proper alignment than x86; before ARMv6 you couldn't even do unaligned accesses except by emulating them in software.

So the results will probably be pretty similar on ARM as well.


Not in principle, but your cache would probably be much smaller.


> I realize that small memory discipline is not something that serves a modern programmer well. Spending time on optimizing memory usage is often completely useless as a new version or new feature will invalidate the effort and the overall effect on the system, compared to other work the programmer could be doing, would not be cost effective

Memory consumption can be a primary issue for small embedded targets. Upgrading the hardware is often not an option or is actually more costly than development time, because development is a one-time cost, while adding RAM or flash is an extra production cost (so a per-unit cost).

Moreover, embedded software tends to be more stable feature-wise than desktop or server software: once it's done, you can expect that it won't be modified for a few years. From my experience, the smaller the system the more "carved in stone" it is.

Finally, it should be noted that on "normal" systems, memory size is still important because of the many caches they feature.


> Finally, it should be noted that on "normal" systems, memory size is still important because of the many caches they feature.

http://www.7-cpu.com/cpu/Skylake.html

L1 cache: 32 KiB, 4 clock cycles (=1 nanosecond) latency

L3 cache: (Up to) 8 MiB… with a latency (42 cycles) equivalent to 2½ branch mispredictions. You know, the things that Everyone Knows™ they should avoid with compiler hints like __builtin_expect.

RAM: As much as you want… at 246 cycles (61.5ns) latency.


At least this serves to remind people that they must think about memory, at least sometimes. The rule today seems to be to use as much memory as possible, load gigantic frameworks, or use super-heavy languages for the simplest of tasks. I only notice because my computer doesn't have all the memory a normal modern computer has.


How much memory does your computer have?

I run GNOME and Debian. I looked a bit at the System Monitor. All the standard Unix tools (like cron...) are tiny; they only use a few KB. The big consumers are GUI-related, and the heaviest is Firefox at 500 MB. I have 4 GB of RAM.


I have 1GB, but yes, all system tools are tiny. Browsers are heavy. Chrome was very light until some time ago, but now it is as heavy as Firefox.

Some webpages use a hell of a lot of memory.

But there are big problems with webapps, too. For example, there are small apps that do super basic things yet can't run on a Heroku free dyno -- mainly those written with Ruby on Rails, I guess, like Discourse and other small apps I've seen.


Have a look at ublock vs adblock plus comparison featured yesterday: https://news.ycombinator.com/item?id=11277135


2GB, really. I forgot I had upgraded my computer some months ago. I would probably be dead with 1GB.


Firefox most likely holds a lot of data, for various reasons.

Start a new instance with a clean profile. I doubt it will go over 100mb, probably less.


> Start a new instance with a clean profile.

Do you mean remove the .mozilla directory?


Not really, you just have to use the Profile Manager: https://support.mozilla.org/en-US/kb/profile-manager-create-...

But yeah, you could do it the hard way, by removing/renaming the .mozilla folder.


I need to try that, thank you.


I've owned the print version of this book for many years; it's like the GoF book for embedded systems. Solid material presented very well. Kudos to the authors for releasing the PDFs!


Chapter 2 is basically a textbook description of how Android handles memory.



