List of YC companies, ordered by response time from EC2 (blamestella.com)
63 points by delano on Jan 11, 2011 | 40 comments



Response time data is useless without other data:

- Where are the monitors located?

- Where are the sites located?

- How was the site monitored? Does the reported number account for DNS lookup and connection time? 8ms for skysheet.com makes me think not (a rough sketch of that breakdown follows this list).

- Are error responses rolled into these times? Errors typically return quickly.
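To make that concrete, here's roughly how those phases break down; a quick Python sketch (not what Stella does, just illustrating the phases):

    import socket, time

    def timed_fetch(host, path="/", port=80):
        t0 = time.time()
        addr = socket.gethostbyname(host)                           # DNS lookup
        t1 = time.time()
        sock = socket.create_connection((addr, port), timeout=10)   # TCP connect
        t2 = time.time()
        request = "GET %s HTTP/1.0\r\nHost: %s\r\n\r\n" % (path, host)
        sock.sendall(request.encode("ascii"))
        sock.recv(1)                                                 # wait for the first response byte
        t3 = time.time()
        sock.close()
        # A monitor that only reports t3 - t2 will look much faster
        # than one that reports t3 - t0.
        return {"dns_ms": (t1 - t0) * 1000,
                "connect_ms": (t2 - t1) * 1000,
                "first_byte_ms": (t3 - t2) * 1000}

    print(timed_fetch("skysheet.com"))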

Having built www.yottaa.com, we worked hard to provide data that users can easily take action on. To do that, understanding the questions above is important.


This is an early version of the site so there's still a lot of work to do to explain this stuff on the site itself. To answer your questions:

- All monitoring is from EC2's us-east-1 region.

- I don't track the location of the sites, other than specifying the vendor.

- Data is generated by an open source tool called Stella. I'm not currently tracking DNS lookups but probably will eventually.

- Tests with errors are not included in the response times.

The specific response time is less interesting than looking at how a site performs over time (the number and duration of downtimes, errors, etc). There are a lot of rabbit holes in testing, so what I'm focusing on now is the simplest possible solution that works.


So basically, anyone hosting their site on EC2 will appear to have a fantastic response time compared to anyone else, which makes the test mostly useless for comparing startups (though it could be interesting/valid data for a single startup over time, which is what Pingdom sells).


I'm just one guy so I have to start somewhere :]

Sometime soon though, it will be possible to select the location.


Unfortunately, I think the parent commenter is right -- your numbers are (currently) meaningless. Statistically, your sample size for estimating response time is 1 for each of those sites. You need at least a dozen or so monitoring locations, preferably distributed in the same way that Internet users are, for any meaningful information to be gained. Ideally, you'd also want to distribute your monitors taking ISPs' peering agreements, etc. into account for a truly representative picture of response times.

As it stands, you're doing a disservice to startups that aren't on EC2, for no good reason. Your solution "works" on a technical level, but the data is statistically meaningless.


Combining monitoring data from multiple locations would give a more accurate response time overall. But as a site owner, that number is not very useful because it's not actionable. For example, if response times to my site become slow from, say, Dallas or Ireland, there's nothing I can do about it other than be aware that it's happening.

Monitoring from a single location is interesting because it gives a metric for how well my application is performing.


Certainly, but I would say that there are very few Internet startups that cater to a single geographical location (or even a few). Your numbers currently indicate quality of service for customers in the same hosting block as Amazon EC2.

If you have enough monitors and spread them out well (which shouldn't be too hard, since you could just sign up for Perl/PHP enabled hosting accounts in different locations), you can still diagnose where the slow geographical regions are.

In short, having more data can only give you a more accurate picture, depending on how you analyze it.
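Once you do have data from several monitors, the analysis can stay simple. A rough sketch (hypothetical data, just per-location medians) of how slow regions would stand out:

    from collections import defaultdict
    from statistics import median

    # (site, monitor location, response time in ms) -- hypothetical samples
    samples = [
        ("example.com", "us-east", 45), ("example.com", "eu-west", 180),
        ("example.com", "us-east", 52), ("example.com", "eu-west", 210),
    ]

    by_location = defaultdict(list)
    for site, location, ms in samples:
        by_location[(site, location)].append(ms)

    for (site, location), times in sorted(by_location.items()):
        print("%s from %s: median %.0fms over %d samples"
              % (site, location, median(times), len(times)))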


Your design is great, and helping people make their web sites faster is a laudable goal. Add more testing locations -- even better, write a probe that you can run on various consumer broadband connections that will report results back centrally. Please don't go fire up a bunch of VPSes around the globe for performance monitoring -- that data is largely useless.
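The probe itself could be tiny. A rough sketch (the collector URL and field names are made up) of something friends could run on their home connections:

    import json, socket, time
    import urllib.request

    COLLECTOR = "https://example.com/report"   # hypothetical central endpoint

    def probe(url):
        t0 = time.time()
        urllib.request.urlopen(url, timeout=15).read()
        elapsed_ms = int((time.time() - t0) * 1000)
        payload = json.dumps({"url": url, "ms": elapsed_ms,
                              "probe": socket.gethostname()}).encode()
        report = urllib.request.Request(COLLECTOR, data=payload,
                                        headers={"Content-Type": "application/json"})
        urllib.request.urlopen(report, timeout=15)

    probe("http://news.ycombinator.com/")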

Build an image, load it onto a bunch of Pogoplugs, send them out to some friends who live in various parts of the world (and various parts of the US with various kinds of broadband!)


I know how you feel. I built http://www.pingbrigade.com/ which is similar. Mine runs off VPSs though.


You forget HTTPS. The SSL handshake adds a significant delay to the connection time.
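You can see the cost by timing the TCP connect and the TLS handshake separately; a rough Python sketch (standard library only, any HTTPS site):

    import socket, ssl, time

    host = "www.example.com"   # any HTTPS site
    t0 = time.time()
    raw = socket.create_connection((host, 443), timeout=10)   # TCP connect
    t1 = time.time()
    tls = ssl.create_default_context().wrap_socket(raw, server_hostname=host)  # TLS handshake
    t2 = time.time()
    tls.close()
    print("tcp connect: %.0fms, tls handshake: %.0fms"
          % ((t1 - t0) * 1000, (t2 - t1) * 1000))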


If LaunchHear really has one of the fastest response times then that's pretty funny, considering we currently pay about $4 a month for hosting.


What host do you use?

edit: sorry. Found the answer. Softlayer right?


HostGator, which is presumably powered by Softlayer. It's about $4 a month if you use one of the promo codes that are easily findable via twitter or google. It did once crash while I was onstage trying to demo it, but other than that it's been pretty good.



Despite my reading of HN, I've never really paid much attention to YC, but this data makes me ask: Are all YC companies web apps? Are there no infrastructure companies, or other types of services?


There are many non-web app YC companies, including systems management products, mobile apps, installable apps, etc.

That said, you've specifically mentioned infrastructure, and I don't think an infrastructure company would be possible in the YC model. It would take a vast amount of money to start a new hosting provider or cloud provider, if you aren't building on existing infrastructure (which a few YC companies have done, building on EC2 and other cloud provider products, including recently acquired Heroku and Cloudkick). YC invests a few thousand dollars. Building infrastructure costs millions of dollars. Building software is what YC companies do, and the best place to build most software today is on the web.


I think it's safe to say all YC companies have a website.


RethinkDB is not a webapp.


WakeMate (http://wakemate.com/) developed a hardware device to track sleeping patterns. Although they end up hooking into a webapp for further sleep analysis, prototyping and production of a hardware device is sufficiently different from coding a website.

This should be a solid example that not everyone in YC is creating just a webapp that you can throw onto a few EC2 instances in under 10 minutes.


A nice example of taking your product ("a tool you can use to monitor the performance of your websites" [1]) and targeting an audience niche - tech-savvy readers interested in YC companies.

The information meant nothing to me (but then, the product isn't of use to me as I'm not currently running any websites) but it caught my eye.


The only problem is that the site with the fastest response time is just a placeholder: simple text indicating that they're working on the project, with no navigation or anything. The differences in content between sites make it difficult to grasp what one can get out of the comparison.


Ya that's true. I'm not sure yet how to handle placeholder sites. I could separate them based on a page size threshold but that might be more confusing.


Maybe you should weight by total received content size. It's not perfect, but it still gives you a clue. I mean, not just page size but images and CSS as well.


That's a good idea. I could sort on a score based on response time and page size.

Update: I added a score-based sorting (bytes/response time). Is this interesting or confusing?

https://www.blamestella.com/group/ycombinator?sort=score
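Roughly, the score is just bytes served per millisecond of response time. A toy sketch of the idea (not the actual implementation; field names made up):

    # hypothetical fields: name, bytes (page size), response_ms
    sites = [
        {"name": "a.example", "bytes": 48000, "response_ms": 320},
        {"name": "b.example", "bytes": 900,   "response_ms": 8},   # placeholder page
    ]

    def score(site):
        return site["bytes"] / max(site["response_ms"], 1)   # bytes per ms, avoid zero division

    for site in sorted(sites, key=score, reverse=True):
        print(site["name"], "%.1f bytes/ms" % score(site))

This way a near-empty placeholder that responds in 8ms no longer outranks a real page that serves far more content.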


This is a brilliant way to get the attention of HN users.

Niggle: some of those are redirect times. BlameStella gets it right if you click through, but the summary data has some bad numbers.

For example - www.justspotted redirects to spotlight.justspotted, which redirects to their posterous blog.


It's actually not monitoring redirects. The response times for justspotted.com are based on: http://www.justspotted.com/radar/


It's important to know what is being measured and what is not. Basically Stella is timing the backend performance and network speed from a single server in a single location.

Since 80% of performance is front-end, I find it's often more interesting to look at rendering time instead. Most websites can get the html back to you in a few hundred milliseconds. The more interesting question is how long it will take the browser to download and render the associated resources. This is the pain your visitors will be feeling and probably the one you should focus on. Instead of trying to get your server response down from 500ms to 200ms, focus on getting your onload time down from 5 seconds to 2.
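If you want a quick number for that yourself, one rough approach (a sketch assuming the selenium package and a local Firefox) is to pull the Navigation Timing values out of a real browser:

    from selenium import webdriver

    driver = webdriver.Firefox()
    driver.get("http://example.com/")
    timing = driver.execute_script(
        "var t = window.performance.timing;"
        "return {start: t.navigationStart, ttfb: t.responseStart, load: t.loadEventEnd};")
    print("first byte: %dms, onload: %dms"
          % (timing["ttfb"] - timing["start"], timing["load"] - timing["start"]))
    driver.quit()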

One great open-source tool for measuring front-end performance is http://www.webpagetest.org/. It allows you to test sites from various locations around the world and gives you detailed performance data, waterfall graphs and even videos of the page loading. One downside is that it is IE-only, but it does support IE6-IE9.

Here is a sample report for Adioso on a DSL connection in Dallas in IE7: http://www.webpagetest.org/result/110111_ZF_27134dc481325cfa...

As a disclosure, I'm the founder of a web performance company and I sponsor the Ireland instance of WebPageTest, although I have no financial benefit from promoting them.


I really like the "about" page and the twitter banter. Very clever. I enjoy when web apps add some narrative elements to their product.


If anyone is interested in monitoring internal infrastructure and cloud monitoring as seen by your end-users, shoot me an email - prakash@cedexis.com

Gabriel wrote about our services on his blog http://www.gabrielweinberg.com/blog/2010/12/testing-cdn-perf...


What good is this without including some "how busy is this site?" metric, like requests per second or the amount of data served?


Why would users care how many requests there are per second, or how much data is being served?


You're right, but in this case I assumed the target audience was not the users but the HN readers. And I think many are interested in performance benchmarks. I think it makes a difference whether a site with many requests is quick, compared to a completely idle site.


Response time is public information, whereas reqs/second is private. This does give some transparency to end users, and I could see sites linking up their status pages to this tool. You do get some more metrics in an actual signed-up plan.


For some reason, it does not work with my website:

https://secure.grepular.com/

I'm guessing it's because it does certificate validation of some sort? I can't see the point of a service like this doing any certificate validation.

It's a multi-domain cert (with *.grepular.com in subjectAltName) signed by cacert.org.


I thought this was going to be about response time to a support request -- which would be just as (or more) interesting.


Why use canvas for each of the titles? It's impossible to search in the page.


Really? I wish they broke it down further than 1-second intervals... 99% of the websites on this list report 0 as their response time.


Ya, the page could use more description. The response time is given in milliseconds, and the number beside it (most of which are 0) is the number of incidents in the past hour. An incident is a timeout, error or slowdown.
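Conceptually, the incident classification is something like this sketch (thresholds and field names made up, not the real code):

    SLOW_MS = 3000   # hypothetical slowdown threshold, not Stella's actual value

    def classify(result):
        # result: dict with hypothetical fields "timed_out", "error", "response_ms"
        if result.get("timed_out"):
            return "timeout"
        if result.get("error"):
            return "error"
        if result["response_ms"] > SLOW_MS:
            return "slowdown"
        return None   # not an incident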


Some of these don't make sense, because some people cache their front pages. That has nothing to do with how responsive the actual app is.


The look and feel of your site is great! Nice job! I agree with other commenters about the need for richer data. For example, just picking on the "top performing" site in your list (skysheet.com), here's the yottaa report on that site:

http://www.yottaa.com/url/skysheet-com-4d2ca049038ade0c05000...

Yottaa notes a similar reachability time to Stella's in the "Reachability > Washington" metric, but response time performance is significantly worse at other locations.

Also, it's important to track not just server response time, but actual browser performance. If you look at the Page Load Time metric, you'll see that even when the server responds pretty quickly, the actual browser experience is significantly worse than what a simple HTTP client measures.

Overall, I like the site a lot. If you could get some more accurate metrics, it'd be great. Keep going!

BTW, here at Yottaa we're beta-testing an API that would let you run tests and access all of our browser and low-level data. Would you be interested in getting some better data from us? Contact me: jrosoff AT yottaa DOT com



