This is kind of a master's degree course I created for myself to learn Machine Learning from the bottom up.
I have worked on, or cleaned up, 4 different CQRS/ES projects. They have all failed. Each time the people leading the project and championing the architecture were smart, capable, technically adept folks, but they couldn't make it work.
There's more than one flavor of this particular arch, but Event Sourcing in general is simply not very useful for most projects. I'm sure there are use cases where it shines, but I have a hard time thinking of any. Versioning events, projection, reporting, maintenance, administration, dealing with failures, debugging, etc etc are all more challenging than with a traditional approach.
Two of the projects I worked on used Event Store. That was one of the least production ready data stores I've encountered (the other being Datomic).
I see a lot of excitement about CQRS/ES every two years or so (since 2010) and I strongly believe it is the wrong choice for just about every application.
One of the projects I worked on was in finance. The fact that we were going to get "a free audit log" from the architecture made everyone so excited.
After the CQRS/ES project failed (I was on cleanup duty) we moved to a more traditional arch. To handle the audit log we just had a separate table: the "customer" table had a "customer_audit" table, and both were written to in the same transaction. Solved.
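The "separate audit table written in the same transaction" approach is simple enough to sketch. Here is a minimal illustration using SQLite; the table names follow the comment's example, but the columns are made up for the sake of the demo:

```python
import sqlite3

# Hypothetical schema: a main table plus a parallel audit table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE customer_audit (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    customer_id INTEGER,
    action TEXT,
    name TEXT,
    changed_at TEXT DEFAULT CURRENT_TIMESTAMP)""")

def update_customer(conn, customer_id, new_name):
    # The main row and the audit row go in one transaction:
    # either both commit or neither does, so the log can't drift.
    with conn:  # sqlite3 connection context manager commits or rolls back
        conn.execute("UPDATE customer SET name = ? WHERE id = ?",
                     (new_name, customer_id))
        conn.execute(
            "INSERT INTO customer_audit (customer_id, action, name) "
            "VALUES (?, ?, ?)",
            (customer_id, "UPDATE", new_name))

conn.execute("INSERT INTO customer (id, name) VALUES (1, 'Alice')")
update_customer(conn, 1, "Bob")
```

The point is that the audit trail costs one extra INSERT per write, with no changes to how reads, reporting, or debugging work.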
That's what I do, with GitLab. I chose GitLab because it allows you to use true/full https with custom domains thanks to letting you upload your own certificates.
I know you can get what looks like https with Cloudflare, custom domain and GitHub but it's not full end-to-end.
GitLab also has built-in CI (rather than something external like Travis with GitHub), so you can simply push a commit and have a free 'runner' (really a DigitalOcean instance) spin up, run a build script, then deploy to GitLab Pages, all for free. It's pretty amazing what you can do there, to be honest.
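That push-to-deploy flow is driven by a `.gitlab-ci.yml` file in the repo root. A minimal sketch for a Hugo site might look like this (the `pages` job name and `public` artifact path are what GitLab Pages expects; the image and branch name are assumptions that vary by setup):

```yaml
# Hypothetical minimal .gitlab-ci.yml: build a Hugo site and
# publish it to GitLab Pages on every push to master.
image: registry.gitlab.com/pages/hugo:latest

pages:
  script:
    - hugo
  artifacts:
    paths:
      - public   # GitLab Pages serves whatever lands in public/
  only:
    - master
```

Swap the image and `script` line and the same skeleton works for just about any SSG.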
I'm using it with Hugo, but there are 'runners' for just about any SSG. I think many people have switched to it for the ability to use Jekyll plugins, which GitHub Pages doesn't allow.
From the connection times I'm guessing the servers are somewhere on the East Coast of the US (maybe they're still on Azure?), so I can hit sub-second loads in Europe and the US, and just about get under 1.5 secs in Australia/Asia.
It's impressive for the grand old price of 'free'!
Then yes, it would probably be fine, within reason.
When running a static site at very high traffic loads, the bottleneck becomes how many connections the webserver can handle and how much bandwidth you have to serve the site itself, rather than the raw power needed to process all the requests a dynamic site would generate.
You can start to chew through bandwidth allocations pretty quickly once you get a couple thousand concurrent visitors, so a shared hosting plan might run out fast if the cap is small, even with a fairly small site. Something like Apache would also need a fair bit of tweaking to handle that number of connections without eating all the RAM; Nginx is better in that regard and can pretty easily handle thousands of concurrent connections.
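For reference, the nginx side of that tuning is small. A sketch of the relevant directives (the numbers are illustrative, not tuned recommendations):

```nginx
# Illustrative nginx settings for many concurrent static connections.
worker_processes auto;          # one worker per CPU core

events {
    worker_connections 4096;    # per worker; defaults are often 512-1024
}

http {
    sendfile on;                # let the kernel stream static files
    keepalive_timeout 15;       # drop idle connections sooner under load
    gzip on;                    # smaller responses stretch your bandwidth
}
```

With settings like these, a single small box can keep tens of thousands of connections open, at which point bandwidth really is the limit.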
So a small DigitalOcean VPS can easily handle millions of 'hits' per day if set up with a little care, and more than that if set up well, but you just have to watch that you don't saturate the connection.
I mirrored a few sites that had faced the reddit hug of death (just to see what the load was like) and found you can easily hit a 200 Mbps* sustained connection requirement from the visitors the comment thread alone brings in, and fewer people open that than the link listing itself.
*(Depending on the size of the page, obviously: the bigger the page, the more Mbps it takes to serve. The average was around 20-50 Mbps.)
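A quick back-of-envelope calculation shows how those figures hang together. All the inputs below are assumptions for illustration, not measurements from the mirrored sites:

```python
# Rough bandwidth estimate for a static site under a traffic spike.
# Every input here is an illustrative assumption.
page_size_kb = 300            # HTML + assets transferred per page view
concurrent_visitors = 2000    # readers on the site at once
avg_time_on_page_s = 120      # how long each visitor stays

# Steady state: new arrivals per second = concurrent visitors / dwell time.
views_per_second = concurrent_visitors / avg_time_on_page_s

bits_per_view = page_size_kb * 1024 * 8
mbps = views_per_second * bits_per_view / 1_000_000
print(f"{mbps:.1f} Mbps")     # ~41 Mbps, inside the 20-50 Mbps range quoted
```

Double the page size or halve the dwell time and you land near the 200 Mbps peak, which is why a small VPS dies from bandwidth long before CPU.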
First, you need a strong mathematical base. Otherwise you can copy-paste an algorithm or use an API, but you will have no idea what is happening inside. The following concepts are essential:
1) Linear Algebra (MIT https://ocw.mit.edu/courses/mathematics/18-06-linear-algebra... )
2) Probability (Harvard https://www.youtube.com/watch?v=KbB0FjPg0mw )
Next, get a basic grasp of machine learning and a good intuition for the core concepts:
1) Andrew Ng coursera course (https://www.coursera.org/learn/machine-learning)
2) Tom Mitchell book (https://www.amazon.com/Machine-Learning-Tom-M-Mitchell/dp/00...)
Both the above course and book are super easy to follow. You will get a good grasp of the basic concepts, but they lack depth. Now you should move on to more intense books and courses.
You can get more in-depth knowledge of machine learning from the following sources:
1) Nando's machine learning course ( https://www.youtube.com/watch?v=w2OtwL5T1ow )
2) Bishop's book (https://www.amazon.in/Pattern-Recognition-Learning-Informati...)
Bishop's book in particular is really deep and covers almost all the basic concepts.
Now for recent advances in deep learning, I will suggest two brilliant courses from Stanford:
1) Vision ( https://www.youtube.com/watch?v=NfnWJUyUJYU )
2) NLP ( https://www.youtube.com/watch?v=OQQ-W_63UgQ)
The Vision course by Karpathy is a very good introduction to deep learning. The mother book of deep learning ( http://www.deeplearningbook.org/ ) is also good.