Hacker News | ted_dunning's comments

An often overlooked extension of simple exponential weighting is that if you take the difference of two or three exponential averages, you can get some very nice behavior.

One particularly desirable behavior is a system that allows flexibly defined short bursts but limits the long-term request rate. This is really nice for chatty protocols, where you might want to nearly ignore a burst of 100 requests as long as the burst takes less than a few milliseconds, but clamp down hard on a single query per second sustained over the last thirty seconds.
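As an illustration (my own sketch with made-up parameters, not necessarily the exact scheme the comment has in mind): the difference of two exponentially decayed event counters is nearly blind to very recent requests, because a fresh burst shows up in both counters and cancels, while sustained traffic survives only in the slowly decaying counter.

```python
import math

class BurstTolerantLimiter:
    """Sketch: threshold the difference of two exponentially decayed
    event counters. Recent events appear in both counters and cancel in
    the difference; only events older than ~short_tau (but younger than
    ~long_tau) count against the limit. All constants are illustrative."""

    def __init__(self, short_tau=0.005, long_tau=30.0, limit=10.0):
        self.short_tau = short_tau   # seconds; bursts shorter than this are discounted
        self.long_tau = long_tau     # seconds; horizon of the long-term limit
        self.limit = limit           # max decayed event count, net of the recent burst
        self.short = 0.0
        self.long = 0.0
        self.last = None

    def allow(self, now):
        """Record a request at time `now` (seconds) and decide whether to allow it."""
        if self.last is not None:
            dt = now - self.last
            self.short *= math.exp(-dt / self.short_tau)
            self.long *= math.exp(-dt / self.long_tau)
        self.last = now
        self.short += 1.0
        self.long += 1.0
        return self.long - self.short <= self.limit
```

With these (arbitrary) constants, a sub-millisecond burst of 100 requests passes, while one request per second sustained for thirty seconds trips the limit.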

Differences in exponential decay can also be used to implement a good approximation of a Gaussian kernel of any desired size. In image processing, for instance, this allows a Gaussian blur whose cost doesn't depend on the scale of the blur (unlike a true convolution).
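A minimal one-dimensional sketch of the constant-cost idea (function name and parameters are mine; this uses a forward/backward recursive smoothing pass, a close relative of the difference-of-exponentials construction, with production versions being Deriche-style recursive filters): each pass is O(n) no matter how wide the effective kernel is, and repeated passes approach a Gaussian shape by the central limit theorem.

```python
def exp_blur(signal, alpha=0.5, passes=3):
    """Approximate Gaussian blur via repeated exponential smoothing.
    Cost is O(n * passes) regardless of the effective blur width, which
    is controlled by alpha (smaller alpha -> wider blur)."""
    x = list(signal)
    for _ in range(passes):
        # forward pass: one-sided exponential smoothing
        for i in range(1, len(x)):
            x[i] = alpha * x[i] + (1 - alpha) * x[i - 1]
        # backward pass makes the combined impulse response symmetric
        for i in range(len(x) - 2, -1, -1):
            x[i] = alpha * x[i] + (1 - alpha) * x[i + 1]
    return x
```

Blurring an impulse shows the behavior: the result is a symmetric, unimodal, Gaussian-like bump whose mass is preserved, and widening the blur (shrinking alpha) costs nothing extra per sample.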


If doing a small amount of arithmetic and keeping 4 bytes (or less) additional memory per client is "a lot of time and space", you may need to rethink your priorities.

It is very, very unusual for any kind of request to take anywhere near as little time as the exponential averaging itself; serving a request is typically many orders of magnitude slower. The difference between a GCRA implementation and a complex multi-exponential average will be nanoseconds.


Absolutely. You just need an integer fixed-point implementation for evenly spaced data.

For irregularly spaced data, you need exp(-t), but that isn't hard to do with fixed point.
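For example (an illustrative sketch with arbitrary constants, not a specific production implementation): exp(-dt/tau) can be computed from a single precomputed per-tick decay factor by binary exponentiation, using nothing but integer multiplies and shifts at runtime.

```python
import math

SCALE = 1 << 16                                       # 16.16 fixed point
TAU_TICKS = 1000                                      # decay constant in clock ticks (arbitrary)
TICK_DECAY = round(SCALE * math.exp(-1 / TAU_TICKS))  # per-tick factor, computed once at setup

def fp_exp_decay(dt_ticks):
    """exp(-dt/tau) in 16.16 fixed point, by binary exponentiation of the
    per-tick decay factor: integer multiplies and shifts only at runtime.
    The truncating shifts bias the result slightly low."""
    result, base = SCALE, TICK_DECAY
    while dt_ticks:
        if dt_ticks & 1:
            result = (result * base) >> 16
        base = (base * base) >> 16
        dt_ticks >>= 1
    return result

def update(avg, sample, dt_ticks):
    """Exponentially weighted average with irregular spacing. For evenly
    spaced data (dt_ticks == 1) the weight is a constant and each update
    is just a couple of integer multiplies."""
    w = fp_exp_decay(dt_ticks)            # weight kept by the old average
    return (w * avg + (SCALE - w) * sample) >> 16
```

A gap of one time constant keeps about exp(-1) of the old average, and a very long gap discards it entirely, all without floating point on the hot path.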


This tool is distressingly delicate.

Open text. Type something. A triangle pops up. Modal deadlock.


Well, sure. If all you want is buttons.

But if you want reasonable portability of the interface across different devices, scales, and connection qualities, there's more to do.

Even just getting an interface that responds cleanly to resizing can be trickier than it looks, because what is important changes as the aspect ratio and scale change. How you present things may change categorically.

And that doesn't even touch on getting the backend to match the implied functionality of the front end.


His range is well illustrated by listening to Spiegel im Spiegel back to back with Ukuaru valss.

I wish he had done more light music. His solemn stuff is great, but I can only handle it in very limited doses.

https://www.youtube.com/watch?v=kMSSYc2S_Gw


But the safety gain for the occupants in going from a medium to a very large vehicle is minute, while the risk to others increases dramatically.

And, as a point of information, you DO drive where there are pedestrians. Pedestrians cross streets and walk in parking lots. If you can't see them and the consequences of not seeing them are more fatal, then that should factor in.


The history of IT-consulting in Europe does not really inform us of the history of aerospace development in the US.

The fact is, space development used to be done on a cost-plus basis (the old style) and is now moving strongly to a fixed-price basis (the current facts of life).


Exactly this. Use a Bloom filter to find candidate solutions, write those out and review them later. Since you can write the entire candidate and its location, the review consists of a simple sort.

Digging in to estimate the speed, if your Bloom filter has a 0.1% chance of a collision then a trillion digits of pi will result in a billion candidates. This requires ~15 bits per entry or about 2TB of memory for the full Bloom filter. Using multiple passes on subsets of the data allows you to trade speed for space.

Note also that a 128-bit integer has room for 38 decimal digits, so all of the window shifting can be done with modulo, multiplication and addition operations. I would be very surprised if the sieving of candidates were not faster than the disk I/O needed to read the raw digits. As such, a single pass should cost roughly as much as reading 500GB of data, which means that multi-threading will be of limited use. That should take less than an hour on most machines, and even one of my ancient Intel NUCs with 32GB should be able to scan the entire range in about a day, no matter what size sequence we are looking for.
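A sketch of the sieve-then-verify loop described above (the filter size, hash scheme, and function names here are illustrative, and Python stands in for whatever the real implementation would use):

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter with double hashing; sized for a small demo,
    not for the ~2TB trillion-digit case discussed above."""
    def __init__(self, num_bits, num_hashes):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8 + 1)

    def _positions(self, item):
        # derive num_hashes bit indices from one 128-bit hash (double hashing)
        h = hashlib.blake2b(str(item).encode(), digest_size=16).digest()
        h1 = int.from_bytes(h[:8], "big")
        h2 = int.from_bytes(h[8:], "big") | 1
        return [(h1 + i * h2) % self.num_bits for i in range(self.num_hashes)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p >> 3] |= 1 << (p & 7)

    def might_contain(self, item):
        return all(self.bits[p >> 3] & (1 << (p & 7)) for p in self._positions(item))

def find_candidates(digits, targets, k):
    """First pass: slide a k-digit window over the digit stream using only
    multiply, add, and modulo (windows are kept as integers, so a target
    with a leading zero must be encoded the same way on both sides), and
    record (position, window) candidates for exact review later."""
    bloom = BloomFilter(1 << 20, 10)
    for t in targets:
        bloom.add(t)
    mod = 10 ** k
    window = 0
    candidates = []
    for i, d in enumerate(digits):
        window = (window * 10 + int(d)) % mod
        if i >= k - 1 and bloom.might_contain(window):
            candidates.append((i - k + 1, window))
    return candidates
```

Because a Bloom filter has no false negatives, every true occurrence lands in the candidate list; the review step only has to discard the rare false positives, e.g. by sorting the candidates and checking them against the target set.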


It happens enough (20 years ago, at least) that a short-term visitor like myself actually saw (but happily didn't experience) multiple examples of cops abusing people (beatings, mostly). The completely oblivious reaction of the crowds around these incidents spoke volumes more than the incidents themselves; it was clear that nobody wanted to attract any attention.

