I love information like this, and I think the OpenSignalMaps perspective is a healthy one. Basically, it's cool to have an install base this diverse and dynamic. Maybe that's because I come from a traditional desktop development background, where diverse hardware and software environments were just part of the game.
I recently co-created an automated web service that tests apps on real devices, and of the thousands of tests we've run, the majority of the failures and errors trace back to general software development issues. Errors caused by assumptions about the presence of other apps or network connectivity, or by an inability to handle transitions like screen-orientation changes, are far more common than device-specific problems like CPU type. The latter do happen, for sure, but not nearly as often, and free services like ours seek to mitigate these errors altogether.
"In the case of the towel bid, hospital administrators were shown a PowerPoint presentation (a copy of which she gave to me) indicating that going with the Medline and Medical Action bids would save them between 6 and 29 percent. But this was relative to the same companies’ bids the previous year, not the bids offered by other vendors."
I'm interested in seeing these slides and have sent a request to the article's author.
Were the decision-makers present aware that there were other vendors? If so, how did this data go unquestioned? It would be interesting to know whether the data was presented in a completely misleading way or whether there was some laziness and complacency on the decision-makers' part as well.
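To make the baseline problem concrete, here's a sketch with made-up numbers (the article doesn't disclose the actual bid figures): a bid can look like a healthy "savings" against the same vendor's prior-year price while still costing more than a competing offer.

```python
# Hypothetical figures for illustration only; the article gives no actual bids.
last_year_bid = 100_000   # same vendor's bid last year
this_year_bid = 80_000    # same vendor's bid this year
best_other_bid = 70_000   # lowest competing vendor's bid

# The figure shown to administrators: savings relative to the vendor's own prior bid.
vs_last_year = (last_year_bid - this_year_bid) / last_year_bid
print(f"'Savings' vs. last year's own bid: {vs_last_year:.0%}")  # 20%

# The comparison that actually matters: savings relative to the best competing offer.
vs_competitors = (best_other_bid - this_year_bid) / best_other_bid
print(f"Savings vs. best competing bid: {vs_competitors:.0%}")   # -14%, i.e. overpaying
```

Both percentages are computed from the same bid, but only the second baseline tells the buyer whether they're getting a good deal.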
Edit: I just received a reply from the author; she's unable to share the presentation because the person who originally shared it has been "subject to legal harassment." Lame!
"BlackBerry Mobile Fusion includes RIM’s software for managing, securing, and updating BlackBerry devices as well as tools for managing Android and iOS devices running RIM’s BlackBerry Fusion Client app. After a 60-day trial, the software starts at $99 per user, or $4 per user per day, with volume discounts availabe [sic]."
- http://www.forbes.com/sites/briancaulfield/2012/04/03/blackb...
This is definitely an interesting strategy, although I seriously wonder whether the trust placed in a full BlackBerry stack will translate to standalone software, regardless of the technology behind it.
We created AppThwack so developers can test Android apps on actual phones and tablets before releasing to the market. In addition to addressing fragmentation by running against many device/OS combinations, we provide a centralized tool that combines screenshots, logs, and full logcat dumps in one place, sortable by test, device, or result. We also keep historical results and track trends.
When a developer uploads an app, it's tested on all of our devices in parallel and results update in real time. Our basic service runs a generic set of smoke and performance tests requiring no configuration at all, and we also support custom tests, including those made with Robotium (http://code.google.com/p/robotium/).
The demo doesn't allow uploading apps or changing project settings, but otherwise shows results in the same format as the full version. It also runs against emulators hosted on EC2 instead of our actual device lab (10 phones, 2 tablets, and growing every day).
My co-founder and I recently quit our day jobs to work on this full time, and we'd love to get some feedback. We're starting a small private beta this week.