They haven’t announced anything like that - they’re just using LiDAR rigs to validate the data from their vision-based approach, particularly distance estimates. The article you referenced actually mentions this near the bottom.
There are some great resources on the keto subreddit (reddit.com/r/keto). I had similar issues when I first started keto - several times a week I found I had to be conscious of my electrolyte intake, and would supplement with broth and often almond milk (for the potassium). Six months in it's less of an issue, even at lower electrolyte intake, though I'm not sure why. I never found the need to hit the super-high recommendations on /r/keto, either - 4,000-5,000 mg per day was more than enough for me, and I'd generally get heart palpitations and muscle cramps under 2,500 mg/day. You'll see a lot of people on /r/keto saying you _have_ to hit 7k+ a day, and people who drink salted water all day - personally, I think that's overkill.
I take ZMA and potassium daily. I will drink electrolyte water too. When I get home from the gym and feel drained I will drink 8oz of chicken broth for the salt. I had some severe cramps before starting this.
I don't know what the daily values are to be honest. I just take it until my symptoms, mostly cramping, go away and stick with that dosage. I basically eat broccoli or brussels sprouts with dinner every day. They both seem to have a decent amount of potassium.
I buy potassium bicarbonate from Amazon in bulk -- ten bucks a pound. Winemakers use it to reduce acidity in their product, but it's easy enough to mix half a tsp in water twice a day. There's a lot of research on bicarbonate, too, so it's useful outside of keto.
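For anyone curious how much elemental potassium that half-tsp-twice-a-day routine actually delivers, here's a back-of-the-envelope sketch. The 2.4 g per half teaspoon is an assumption (bulk density of the powder varies a lot), so weigh your own scoop if precision matters; the molar masses are standard chemistry.

```python
# Rough estimate of elemental potassium from potassium bicarbonate (KHCO3).
# ASSUMPTION: a half teaspoon of KHCO3 powder weighs about 2.4 g -- bulk
# density varies, so weigh your own scoop if you care about precision.

GRAMS_PER_HALF_TSP = 2.4  # assumed, not measured

# Molar masses (g/mol): K=39.10, H=1.008, C=12.011, O=16.00
molar_mass_khco3 = 39.10 + 1.008 + 12.011 + 3 * 16.00  # ~100.1 g/mol
k_fraction = 39.10 / molar_mass_khco3                   # ~39% potassium by mass

mg_k_per_dose = GRAMS_PER_HALF_TSP * k_fraction * 1000
mg_k_per_day = 2 * mg_k_per_dose  # half tsp twice a day

print(f"~{mg_k_per_dose:.0f} mg potassium per dose, ~{mg_k_per_day:.0f} mg/day")
```

Under these assumptions that works out to roughly 900-950 mg per dose, so just under 2 g/day - well below the multi-gram numbers thrown around on /r/keto.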
Thanks! Really appreciate the recommendation - I'm going to hit up all our local stores for some. Went jogging this evening, and for the first time in over two months didn't get any cramping in my legs (which otherwise happens 100% of the time on any run longer than about 8-10 minutes), and the only change I've made is supplementing potassium, calcium, and magnesium.
In keeping with this thread - I guess I could vary dozens of variables, one or more at a time, apply the same machine-learning approach to ketosis, and see what really makes the difference.
Every backend architecture should account for having multiple machines. What are you going to do once you get enough traffic that one machine can't handle it?
I'm reading Haskell Programming from First Principles right now, and it's just awesome. I've gone through a couple of books on Haskell before this, and it's by far the best. Can't recommend it enough!
Congrats to Nat and Miguel! This seems to make a ton of sense, and unlike other acquisitions by big corps this seems like a great move for both Xamarin and their customers.
Curious what you see out there that's higher performance and lower cost than AWS? In my experience it's been a great fit for small apps all the way up to large complicated applications at scale - and once your infrastructure is large enough you're buying reserved instances anyway at anywhere between a 33% and 70% discount.
You can beat AWS on cost with pretty much any hosting provider (with some exceptions - e.g. Rackspace seems almost proud to be expensive). The 33% to 70% "discount" doesn't mean much when you tie yourself into long-term commitments that are far more limiting than most managed hosting providers - so much for the benefit of being able to scale up and down.
What really kills you on AWS are the insane bandwidth prices. Buying bandwidth elsewhere is often so much cheaper than AWS that the difference in bandwidth costs alone more than finances the servers.
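To put rough numbers on the "bandwidth alone finances the servers" claim, here's a toy comparison. Both prices are assumptions for illustration (AWS egress was around $0.09/GB at the top pricing tier; budget dedicated hosts often bundle tens of TB of transfer into a flat fee) - substitute real quotes for your own traffic profile.

```python
# Sketch of the bandwidth-cost argument, with assumed (hypothetical) prices.

AWS_EGRESS_PER_GB = 0.09      # assumed blended AWS egress rate, $/GB
DEDICATED_MONTHLY = 100.0     # assumed dedicated-server price, $/month
DEDICATED_INCLUDED_TB = 20    # assumed transfer included in that flat price

monthly_egress_tb = 10  # example workload: 10 TB/month outbound

aws_bandwidth_cost = monthly_egress_tb * 1000 * AWS_EGRESS_PER_GB
print(f"AWS bandwidth alone: ${aws_bandwidth_cost:.0f}/month")
print(f"Dedicated server (bandwidth included): ${DEDICATED_MONTHLY:.0f}/month")
# At these rates, the AWS egress bill for 10 TB exceeds the entire monthly
# cost of the dedicated machine -- the bandwidth bill "finances the server".
```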
How is Netflix able to manage this so effectively and still serve ~30% of US traffic off AWS?
I've heard the non-AWS folks talk about vendor lock-in and long-term costs, but aren't those largely irrelevant in 2016+? E.g., microservices reduce the issue of vendor lock-in, and long-term costs on infrastructure that goes out of date every 2-3 years seem like a poor-planning indicator, no?
I can guarantee you that Netflix are not paying anything remotely like the advertised rates for EC2.
I know first-hand the kind of discounts some companies much, much smaller than Netflix can get, and they are steep. Even then EC2 is still expensive, but if you're paying, say, a million a year to Amazon without massive discounts, you haven't done your job negotiating.
But yes, someone with the leverage Netflix has will be paying relatively reasonable rates for EC2 services. Pretty much nobody else has that leverage.
> I've heard the non-AWS folks talk about vendor lock-in and long-term costs, but aren't those largely irrelevant in 2016+?
Paying far above market rates is never going to be irrelevant, because if you pay above market and your competitor doesn't, chances are they'll have you for breakfast thanks to better margins.
Why in the world would you agree to pay above market rates to get locked in for 1-3 years when you can pay less on a month-by-month contract?
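The trade-off in that question is easy to sketch with made-up numbers. Everything below is a placeholder (the on-demand price, the 40% reserved discount, the competitor's price) - the point is the structure of the comparison, not the specific figures.

```python
# Toy comparison: 1-year reserved-instance commitment vs. a cheaper
# month-to-month provider. All prices are hypothetical placeholders.

on_demand_aws = 200.0       # assumed AWS on-demand price, $/month
reserved_discount = 0.40    # assumed discount for a 1-year commitment
other_provider = 100.0      # assumed comparable box elsewhere, month-to-month

aws_reserved_monthly = on_demand_aws * (1 - reserved_discount)
year_on_reserved = 12 * aws_reserved_monthly   # committed for the full year
year_elsewhere = 12 * other_provider           # free to leave any month

print(f"1yr reserved AWS: ${year_on_reserved:.0f}, "
      f"month-to-month elsewhere: ${year_elsewhere:.0f}")
# Even after the discount, the committed AWS spend can exceed the flexible
# option -- the discount narrows the gap without necessarily closing it.
```

Of course, with different placeholder prices the conclusion flips, which is exactly why the negotiation-and-leverage point upthread matters.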
Feels like AWS is less of a vendor lock-in than building it in-house. Doing it all in-house has a high upfront cost that must be amortized over X years regardless of the outcome. On the other hand, if you've implemented a microservices architecture, moving off AWS's month-to-month service to another provider is far easier. Did I miss something?
Keep in mind that Amazon (and others) uses the "roach motel" model for networking. Easy to check in, not so easy to check out.
When we looked at S3 for some archiving use cases, that came up as a risk -- if strategically it made more sense for us to adopt Google, Microsoft, etc, we would need to negotiate significant concessions from a new vendor to transition away from Amazon or take a hit during that period. You always need to plan for the exit!
You'll have similar issues on-premises (ie. dealing with EMC/etc), but many people forget that cloud providers have their own gotchas too.
Vendor lock-in is an unavoidable cost of doing business. Even if you build literally everything yourself, which you shouldn't, you still have resources, processes, APIs, automation, and expertise amassed around a specific set of operating constraints.
Not only that, but if you invest significantly in any single technology, migrating to another technology is always going to be an extreme effort. Having led migrations from datacenters to AWS, AWS to Digital Ocean, RabbitMQ to NSQ to SNS+SQS, etc., I can say at this point that I do not believe in vendor lock-in as a legitimate reason to disqualify any particular solution.
In my mind, it's like leasing a car. Leasing is better for your cash flow, but buying is usually a lower total cost.
Outside of large volume S3, it's pretty trivial to beat AWS costs, assuming you have the human capability. S3 is a little different, as the capital investment required to host petabytes of data is very high, and Amazon's economy of scale is pretty compelling.
For most anything else, dedicated boxes at a colo or your own datacenter should be cheaper, assuming you have the people around to run them.
In certain shops that may be true, but definitely not all.
Python has had a large role in Red Hat's tools for a very long time so I bet you'll find more Python than Ruby in sysadmin-land overall.
If you aren't using Rails for your web-stack, I suspect you might not be using any Ruby at all on a server. It isn't even part of the CentOS minimal install. Python is. I'm not sure about how Ruby is used in Ubuntu, maybe it's more common there.
However, language choice alone makes Ansible more "compatible" with the rest of the RHEL stack.
> If you aren't using Rails for your web-stack, I suspect you might not be using any Ruby at all on a server. It isn't even part of the CentOS minimal install. Python is. I'm not sure about how Ruby is used in Ubuntu, maybe it's more common there.
There is no Ruby installed by default in Ubuntu - or Debian, last time I looked. Also, as with Red Hat, a significant chunk of the userspace distro code is written in Python.
Currently Vagrant is the only Ruby tool I use in sysadmin/devops land, and it's a workstation only installation rather than a server one.
Not having to deploy a whole new language runtime for your devops tooling is why I preferred Ansible and Salt over Chef and Puppet.