Depends on what you mean by "a lot", doesn't it? Yes, you are going to copy all the bytes in the final result once (at least). But it's hard to see how to avoid that, and no, delegating that task to the kernel doesn't magically/consistently make it faster.
> you just create a little node of a couple of words
Hmm..."just" creating a "little" node can often be quite expensive on today's machines. And "words" are pretty big, typically 8 bytes per word.
> Yeah there's non-locality and things like that
Well, non-locality and "things like that" are kind of a biggie, these days. The biggie, actually, most of the time.
"...computation is essentially free, because it happens “in the cracks” between data fetch and data store; on a clocked processor, there is little difference in energy consumption between performing an arithmetic operation and performing a no-op." -- Andrew Black in Object-oriented programming: some history, and challenges for the next fifty years
Yes, lots of caveats... but it works for Ruby in practice, often getting 10x speedups on template-rendering workloads.
In particular, you 'actually...'-ed me about how fetch and store dominate... but we're not storing anything into existing memory! To concatenate strings we only create new nodes that refer to the old nodes. You don't even need to read what's in the existing nodes! The essential fast path of building up a template is write-only, with bump allocation into the extremely well-managed TLAB. No reading required. That's why it works so well.
Yes, traversing the tree means writing it out is a little slower, but that's one small step at the end, with good prefetching opportunities.
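To make that concrete, here's a rough Ruby sketch of the idea (the names Leaf, Concat, rope_concat and flatten are made up for illustration, not any engine's actual classes): concatenation just allocates a tiny node that points at its children without touching their bytes, and the single byte-copy happens only in the flatten pass at the end.

  # Concatenation allocates one tiny node referring to its children;
  # it never reads or copies the children's bytes.
  Leaf   = Struct.new(:string)        # owns the actual bytes
  Concat = Struct.new(:left, :right)  # just two references -- "a couple of words"

  def rope_concat(a, b)
    Concat.new(a, b)   # O(1), write-only: a fresh node, no byte copying
  end

  # Flattening at the end is the one pass that copies every byte exactly once,
  # via a depth-first traversal into a single output buffer.
  def flatten(node, out = +"")
    case node
    when Leaf   then out << node.string
    when Concat then flatten(node.left, out); flatten(node.right, out)
    end
    out
  end

  rope = rope_concat(Leaf.new("<p>"), rope_concat(Leaf.new("hello"), Leaf.new("</p>")))
  flatten(rope)  # => "<p>hello</p>"

Every append during template rendering is the rope_concat case; the flatten case runs once, at the end, over data laid out in allocation order.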