> Please note that the large spike in the TezEdge v1.15 RAM graph is caused by the update to protocol 011 Hangzhou. This included a major restructuring of the context tree, which is a very expensive operation. While the new representation of the context tree is better, it takes a while to migrate the past version’s tree to the new one.
The difference in range is caused by that spike (the "before" screenshot is from a node that has not reached that point).
Edit: you can also see the actual usage in the tooltip (and before the spike and protocol switch, the memory usage for the TezEdge one was even lower)
Sorry, I saw it on reddit as "Mono 4 Released"[1] and reposted it here without noticing it was just a draft (the "THIS IS A DRAFT" comment wasn't there at the time).
In Scheme asking for the car or cdr of an empty list is an error, and in Common Lisp it isn't (the result is the empty list itself).
In Shen you have "hd" and "tl", which use whatever the underlying platform provides (on Scheme they fail, on Common Lisp they return the empty list), and "head" and "tail", which behave the same on all platforms (passing them an empty list is an error, just as in Scheme).
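To make the platform difference concrete, here is a minimal Common Lisp sketch (Common Lisp is one of the platforms Shen ports to; the comments note what a Scheme port would do instead):

```lisp
;; Common Lisp: CAR and CDR of the empty list are defined to return NIL
;; rather than signalling an error, so "hd"/"tl" inherit that behaviour
;; on a Common Lisp port.
(car '())   ; => NIL
(cdr '())   ; => NIL

;; A Scheme implementation would instead raise an error for (car '()),
;; which is the behaviour "head"/"tail" guarantee on every port.
```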
> If I manage to get Shen working (under Chibi) it will be even more interesting to see what it does.
What problems are you experiencing? Please submit an issue on GitHub and I will look into it.
It means that to be portable you have to wrap symbols that represent functions in (function <the-symbol>) when passing them as arguments. Not doing so will work on some ports but not on others, and should be considered "undefined behaviour".
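A hedged sketch of what that looks like in Shen (the `double` function here is a made-up example; the exact behaviour of the bare-symbol form depends on the port):

```shen
(define double
  X -> (* 2 X))

\\ Portable: wrap the symbol so every port receives a function value.
(map (function double) [1 2 3])

\\ Not portable: passing the bare symbol happens to work on some ports
\\ but not on others -- treat it as undefined behaviour.
(map double [1 2 3])
```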
Please search for "Offence trolling" and read that section (the whole article is good, but that section addresses exactly what you are describing).