Sign up and let us know what you are looking to do. We are trickling out invites, but we're being careful that we can deliver a great experience (support-wise and otherwise) to everyone.
Honestly, I've always gotten good support for Google products, either from friends at Google or through forums, since they are popular enough to have lots of users. Of course, I remember when Google was basically the only hiring destination for top people in the Valley (2001-2004 or so), and I only narrowly avoided going to Google myself.
I suspect they will do a decent job of supporting GCE, either through a premium offering of their own or through third-party developers -- the same thing AWS has done. I had some meetings with AWS application/security platform consultants recently, and they do a really good job of it; you just have to pay for it.
This isn't a good argument for consumer products (if a random grandmother gets locked out of her Gmail and doesn't know anyone, she may be doomed), but I suspect any developer building products on GCE either knows someone, can pay for support, or can bitch in a high-visibility forum and get help from third parties or from Google.
This is nice, simple experiment design. However, I'm really surprised they only ran this once -- or at least didn't indicate trial counts and report the range/distribution. I'd be curious if video transcoding is so consistent that a single measurement is enough to draw a conclusion; certainly network/storage transfer is not. Sure, time and bandwidth are not free...
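(To illustrate what I mean by trial counts and range/distribution, here is a minimal sketch -- the numbers in it are placeholders, not anyone's actual measurements:)

    # Placeholder sketch: given per-run transfer timings, report the count,
    # mean, and spread so readers can judge the variance themselves.
    # These values are made up, not the benchmark's results.
    import statistics

    runs_seconds = [41.2, 39.8, 44.1, 40.5]  # placeholder data only

    print("n =", len(runs_seconds))
    print("mean = %.1fs" % statistics.mean(runs_seconds))
    print("stdev = %.1fs" % statistics.stdev(runs_seconds))
    print("range = %.1fs to %.1fs" % (min(runs_seconds), max(runs_seconds)))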
Hi DMV - we ran the transcoding tests a few times, but transcoding performance is pretty steady across multiple runs (here and elsewhere).
Network obviously isn't; the numbers here include about a dozen test runs. We should make that more clear. Even a dozen isn't enough to be a scientific test, so hopefully we (or someone else) will do more benchmarking in the future.
I'd love to make our docs more clear on this. If you have a pointer to what is confusing we'll fix it up.
To be clear, for every machine type we are offering one hyperthread per virtual CPU. That means an n1-standard-8 instance gets 4 physical cores and 8 hyperthreads. (We are also offering 3.75GB of RAM and ~440GB of ephemeral disk per vCPU.)
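To make that arithmetic concrete, here is a small sketch (my own illustration, not an official tool; describe_instance is a made-up helper) that derives per-instance resources from the vCPU count, using the figures above:

    # Illustrative only: derives instance resources from the per-vCPU figures
    # quoted above (2 hyperthreads per physical core, 3.75GB of RAM and
    # ~440GB of ephemeral disk per vCPU).
    def describe_instance(vcpus):
        return {
            "virtual_cpus": vcpus,             # hyperthreads exposed to the guest
            "physical_cores": vcpus // 2,      # 2 hyperthreads per physical core
            "ram_gb": 3.75 * vcpus,
            "ephemeral_disk_gb": 440 * vcpus,  # approximate
        }

    print(describe_instance(8))  # n1-standard-8: 4 cores, 8 hyperthreads, 30GB RAM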
I assume GCE is based on a space-shared design (1:1 pinned vCPUs); if that's not correct then this becomes harder. On https://developers.google.com/compute/docs/instances#overvie... I see the term "Virtual Cores", but I don't want to hear about virtual cores. I want to know how many physical cores are backing the VM. (I don't find the term "logical core" further down the page helpful either.)
Not to worry, it is confusing for everyone. My experience when drilling down on these things is that, in marketing/advertising literature, a 'core' is generally what most people would call a 'thread.' Engineering documentation is usually more precise, but when it's about perceived value, perception takes priority over precision.
I'd love to hear what people think the standard nomenclature here is. I've been arguing with our PM over whether to call these things virtual cores or virtual CPUs. Our API uses guestCpus.
This stuff is very confusing and I'd love to find the right way to communicate it clearly.
Part of the challenge is that Intel and AMD don't do you any favors.
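For what it's worth, a generic Linux check from inside a VM (nothing GCE-specific, just standard /proc/cpuinfo fields) shows the distinction we're trying to draw -- the logical CPUs the guest sees versus the physical cores behind them:

    # "siblings" counts hyperthreads per socket, "cpu cores" counts physical
    # cores per socket; the two differ when hyperthreading is exposed.
    import multiprocessing

    print("logical CPUs:", multiprocessing.cpu_count())

    with open("/proc/cpuinfo") as f:
        lines = f.read().splitlines()
    for key in ("siblings", "cpu cores"):
        print(next(l.strip() for l in lines if l.startswith(key)))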
In the ideal world, for a set of C cores, if I run a piece of independent, memory-bound code on all of them I get O(C) scaling. Generally that doesn't hold with shared L2/L3 caches or where hyperthreads are counted as cores.
We can define a per-core scaling factor Cf, with 0 < Cf < 1.0, such that independent code actually scales at O(Cf * C).
One school of thought says you could price out cores such that 1/Cf physical cores were assigned per core purchased (rounded up to the nearest unit of computation), but that doesn't begin to get into the question of shared memory vs. shared network vs. shared disk.
I think the best you can do for now is call it a core if Cf is > .95.
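A rough way to estimate Cf on a given box (my own sketch, under the assumption that a small summation loop is a reasonable stand-in for independent work): time one worker, time C workers in parallel, and take the ratio of wall-clock times.

    # Rough, illustrative Cf estimate: with perfect scaling, C independent
    # kernels on C workers take the same wall-clock time as one kernel on
    # one worker, so Cf ~ t(1 worker) / t(C workers).
    import multiprocessing as mp
    import time

    def kernel(_):
        # Crude stand-in for an independent, compute/memory-bound task.
        return sum(range(5 * 10**6))

    def elapsed(nworkers):
        with mp.Pool(nworkers) as pool:
            start = time.time()
            pool.map(kernel, range(nworkers))
            return time.time() - start

    if __name__ == "__main__":
        c = mp.cpu_count()
        cf = elapsed(1) / elapsed(c)
        print("logical cores:", c, "estimated Cf: %.2f" % cf)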
It sounds like EC2 and GCE are using different terminology here, so Google is giving you half as many cores as you think, but thanks to Sandy Bridge they're crazy fast.
We went back and forth on naming the machine types. I'm sure you can imagine those discussions. In the end we opted for naming them based on the number of virtual CPUs from inside the VM. This is easier to remember than arbitrary sizes (small, medium, large) and is always going to be an integer. Hopefully this naming scheme can hold up and make sense over time as new machine types are introduced.
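For example (illustrative parsing only, assuming the <family>-<class>-<count> scheme holds; vcpus_from_name is a made-up helper):

    # The vCPU count is just the trailing integer in the machine type name.
    def vcpus_from_name(machine_type):
        return int(machine_type.rsplit("-", 1)[1])

    assert vcpus_from_name("n1-standard-8") == 8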
It's not a fair comparison, but shouldn't they also be trying/using EC2 GPU instances? I would think a large transcoding service such as Zencoder has at least looked into it at some point.
Floating-point precision isn't the problem - when your output is 8- or 10-bit resolution, floating-point differences aren't a big deal.
The biggest blocking factor is the amount of code that needs to be ported to make it work. High-end SFX companies spend virtually all their man-hours implementing new stuff. They don't have the time to go back and reimplement everything, and their customers aren't so cost-sensitive that they demand it.
Well, I know of at least one very large SFX shop that skipped GPUs for now because the results were not consistent with results from the host CPUs.
But you're correct, man-hours to rewrite are more expensive than the CPU time in that application.