This isn't a commercial product or framework; it's Microsoft Research. They create many cutting-edge projects and do deploy them internally in their own datacenters and services, but they don't have Google's history of spawning open-source clones based on their papers.
Velocity was the codename for the caching component of AppFabric, and AppFabric evolved into Service Fabric, which now runs most of Microsoft's vast cloud services and was recently open-sourced: https://github.com/Microsoft/service-fabric
Now, here lies the problem: no one who was using AppFabric knew that. We just got crapped on with this blog post and were told to move on and take our self-hosted stuff to Azure, which at the time was a ball ache:
The result of this was, to use the words I used at the time, “well, fuck you then”, after we'd been told that this was the veritable Jesus's sandals of a product and the future of service-oriented architecture at MSFT.
There's since been a 180 on that post, which we never checked up on, because how would we know? We'd buggered off to memcached and footed the bill for the whole WCF and WF rewrite debacle after 3.5, then grumpily headed towards AWS, and it was comfortable there. And now it's open source and all that, which (judging by lesser parts of the ecosystem, like the SCVMM tooling on Linux) is probably going to end up an abandoned wreck.
As always, the marketing and roadmappery are crazy, disparate, and impossible to track. We have no idea what direction MSFT is heading in, who is calling the shots, or what's going to happen next.
Which is the point. The maturity curve for MSFT software is a narrow window between two and five years. We can’t afford to rewrite our platform on that cycle.
And thus everyone leaves for greener pastures full of snakes, elephants, penguins and Bezos.
I understand the situation, but I have to disagree here. Let's face it, this industry is always in flux and things change; that's the only constant there is. However, Microsoft is one of the best in the world at maintaining product support and backwards compatibility, often across decades. You can still run apps on Windows that were written in the 90s. You can even upgrade from DOS through Win 10 and still have a working operating system at the end. [1]
Yes, some small MS projects die, but overall you can still run almost all the old stuff, and interoperate with and upgrade to the latest when you're ready. AppFabric extended support runs through April 2022 [2], which is four years from now. It's possible you never got an update about it, but regardless, they still support it. That being said, the cloud has changed everything and there are just more options now. In-memory distributed computing in particular has seen an incredible pace of progress, with everything from Apache Ignite to Apache Spark, so I think you would likely have had more work keeping up with changes had you gone with another stack from the beginning.
Insane performance on the most recent TechEmpower benchmarks (usual proviso that all benchmarks are meaningless, etc.): they've already caught Go with fasthttp on one of them.
For typical workloads that doesn't make a whole lot of difference. A 10% difference in the API layer is meh; 90% of the latency and load is in what goes on behind it, and that's your party, not theirs.
ASP.NET Core is a full-stack web framework and has about 100x more features and support. fasthttp is quick, but it doesn't even implement the HTTP spec completely.
This is a paper coming out of Microsoft Research, not a product. Think of it more like Google's papers on Spanner, Andromeda, etc., except it's probably not in production use within Microsoft.
> To support failure recovery, FASTER incorporates a recovery strategy that can bring the system back to a recent consistent state at low cost
I'm at least intrigued, but this part is slightly worrying. "Recent consistent state" is unfortunately not "last good state", and that makes it a non-starter for certain use cases.
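To make the concern concrete, here's a minimal sketch (my own illustration in C#, not FASTER's actual API; FASTER's real mechanism is checkpointing over its hybrid log) of the difference between recovering to a "recent consistent state" and a "last good state":

```csharp
// A toy store that acknowledges writes immediately but only persists
// state at checkpoints. Recovery rolls back to the last checkpoint,
// so acknowledged writes made after it are lost.
using System;
using System.Collections.Generic;

class CheckpointedStore
{
    private Dictionary<string, long> live = new Dictionary<string, long>();
    private Dictionary<string, long> checkpoint = new Dictionary<string, long>();

    // Acknowledged as soon as it hits memory; durability is deferred.
    public void Upsert(string key, long value) => live[key] = value;

    // Point-in-time snapshot: this is the "recent consistent state".
    public void TakeCheckpoint() => checkpoint = new Dictionary<string, long>(live);

    // Crash recovery: restore the checkpoint, not the last acknowledged write.
    public void Recover() => live = new Dictionary<string, long>(checkpoint);

    public long? Read(string key) => live.TryGetValue(key, out var v) ? v : (long?)null;
}

class Demo
{
    static void Main()
    {
        var store = new CheckpointedStore();
        store.Upsert("balance", 100);
        store.TakeCheckpoint();           // durable up to here
        store.Upsert("balance", 250);     // acknowledged, never checkpointed
        store.Recover();                  // simulate crash + restart
        Console.WriteLine(store.Read("balance")); // 100: the 250 write is gone
    }
}
```

If your use case can't tolerate losing that suffix of acknowledged writes, you need a write-ahead log with per-operation durability, which is exactly the cost the paper is trying to avoid.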
Yes, the article mentions that its siblings include RocksDB. There are already outsized (north of 5 TB) embedded databases geared towards high-IOPS storage (i.e. NVMe), and this project serves to establish C#/.NET Core as a platform for building similarly-specced analytics products.
I think you are spot on; this is an MS Research blog post, not a product announcement. The most interesting thing in the post, IMHO, is the link to the paper. The Tango paper is another good read.
> First, we use a 50:50 Zipf workload, and plot throughput vs. RocksDB in Fig. 10. As expected, Faster slows down with limited memory because of increased random reads from SSD, but quickly reaches in-memory performance levels once the entire dataset fits in memory.
I don't mean to offend the authors in any way, but could someone from Facebook's RocksDB team reproduce their results on RocksDB? I'm curious why throughput remains constant even though memory is increased from 5 GB to 40 GB.
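One plausible explanation (a back-of-the-envelope sketch of my own, with assumed parameters, not a claim about their actual setup): under Zipf skew, the hot set is tiny, so once the cache is large enough to hold it, adding more memory barely moves the hit rate, and throughput flattens out:

```csharp
// Hit rate of a cache holding the hottest k of n keys under a Zipf
// access distribution: sum(1/i^theta, i=1..k) / sum(1/i^theta, i=1..n).
// Parameters (1M keys, theta = 0.99 as in YCSB) are illustrative assumptions.
using System;

class ZipfHitRate
{
    static double Harmonic(long n, double theta)
    {
        double sum = 0;
        for (long i = 1; i <= n; i++) sum += 1.0 / Math.Pow(i, theta);
        return sum;
    }

    static void Main()
    {
        const long keys = 1_000_000;
        const double theta = 0.99;
        double total = Harmonic(keys, theta);

        // Fraction of requests served from cache as the cache grows.
        foreach (double frac in new[] { 0.01, 0.05, 0.125, 0.5, 1.0 })
        {
            long k = (long)(keys * frac);
            Console.WriteLine($"cache holds {frac:P1} of keys -> hit rate {Harmonic(k, theta) / total:P1}");
        }
    }
}
```

The hit rate climbs steeply over the first few percent of keys and then crawls, which would make throughput look flat across a 5 GB to 40 GB range if the hot set already fits at 5 GB. Whether that's what is happening in their RocksDB runs is exactly the kind of thing an independent reproduction would settle.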
So true. I hate it when Google does this too; what's wrong with using original names?
Not that this could have been helped, but I spent like two hours yesterday trying to find info on the -Filter param in PowerShell. It took me a very long time to find anything useful, and I still haven't found the official doc on it.
Considering Microsoft is (and has been for some time) one of GitHub's most prolific users, I'd say that if it's going to be open-sourced, we should expect it soon.
Really, if they're still supporting this in three years and it lives up to the promises, I'll take another look.
I will never immediately adopt an MSFT platform component.