It's both disturbing and fascinating to work with this data. I would love to see these datasets go beyond the basics and also include things like timelines, population size, etc., to better understand how our actions influence the spread.
Looks really nice, thanks for sharing too, I like seeing the different approaches people take to this - I too had a large COVID-19 picture as a background at one point! I love your little logo in the top left, did you make that?
I sorted by deaths as I think it might give a more accurate picture than case counts - confirmed cases are really difficult to compare across countries given the different amounts and rates of testing.
Seems kind of crazy that a private university is the best source of data for this. There are still many inconsistencies in the data, but it's a great resource, and it's great that they've released it to the public.
One thing I would like to fix in the current data is that there is no breakdown of UK regions (oddly, there are UK dependencies but none of the UK countries), and updates to UK data inevitably don't come until the US is awake. There are also no country-level totals for the US, Canada, or China in the data, so those had to be created.
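For anyone doing the same, the roll-up itself is simple - here's a rough sketch, assuming the JHU-style layout of one row per province/state with a country column (figures below are made up):

```python
# Roll province-level rows up to country totals.
# Column names follow the JHU CSSE time-series layout; numbers are invented.
from collections import defaultdict

rows = [
    {"Province/State": "Ontario", "Country/Region": "Canada", "3/19/20": 873},
    {"Province/State": "Quebec",  "Country/Region": "Canada", "3/19/20": 121},
    {"Province/State": "Hubei",   "Country/Region": "China",  "3/19/20": 67800},
    {"Province/State": "Beijing", "Country/Region": "China",  "3/19/20": 456},
]

totals = defaultdict(int)
for row in rows:
    totals[row["Country/Region"]] += row["3/19/20"]

print(dict(totals))  # {'Canada': 994, 'China': 68256}
```

In practice you'd do this per date column rather than for a single day, but the grouping logic is the same.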
It'd be nice to see the WHO or someone step up and coordinate a database centrally - I think they publish figures too, but they are not very reliable or timely.
I found the icon on the Noun project, I'm attributing it a bit further down on the site.
Accuracy is super tricky here. Some people speculate that the numbers coming out of China are manipulated (I have no proof or opinion on that), and there's also some concern that deaths from other causes are being counted as coronavirus deaths. But that's just what we have right now... an estimation of reality.
I'd also like regions, maybe that data is available somewhere, or could be scraped from national or local news sources, and then aggregated somewhere.
Are you planning to add more features, graphs, or other information?
I know regions are available for the UK on a daily basis (but not as a time series), so I'm planning to add that and record the time series myself, though it does raise the question of inconsistencies between regional and national counts if I continue to use the national count from elsewhere.
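The recording part is straightforward - something along these lines, where the file name and figures are just made up for illustration:

```python
# Sketch: build a time series from daily regional snapshots by
# appending each day's figures to a stored history file.
import json
import pathlib

history_file = pathlib.Path("uk_regions.json")
history = json.loads(history_file.read_text()) if history_file.exists() else {}

today = "2020-03-20"  # in practice: datetime.date.today().isoformat()
snapshot = {"London": 1965, "South East": 492}  # made-up figures

for region, cases in snapshot.items():
    history.setdefault(region, {})[today] = cases

history_file.write_text(json.dumps(history, indent=2))
```

Run it once a day (cron or similar) and you accumulate the per-region series that the source itself doesn't publish.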
I think I'll add testing figures as someone else suggested, though those are rapidly evolving and the latest figures I found were for March 19th.
As one of the designers of this app, I was initially skeptical about whether there would be restrictions on customizing common UI components, the smoothness of animations/interactions, usability, performance, etc. You definitely have to put in a bit of extra work (isn't that always the case with making things nice, though?), but you can get very nice results (and battery usage) with RN on both Android and iOS. Personally, I think it does benefit customers to be able to quickly roll out iterations across operating systems at a high quality. YMMV though; what works for one team, industry, and customer base naturally doesn't automatically work for everyone. Do you maybe have experience or data points on RN vs. native in terms of battery usage and performance? Would love to hear any insights you can share. Thanks.
One of the reasons I brought this up was a recent article about energy efficiency of programming languages.[1] JavaScript is ~2x less energy efficient than Java, so it will eat twice the battery.
"so it will eat twice the battery" may be nearly true on headless boxes running CPU heavy workloads. Mobile apps are about as far from that as you can possibly get. Sure it's probably a measurable impact, but I would expect it's less of an impact than things like how bright the backlighting is, whether you keep GPS usage to a minimum, how well you minimize and batch network communication, etc, etc. CPU is a small part of a phone's power budget and mobile apps spend a lot of time with the cpu largely idle.
> JavaScript is ~2x less energy efficient than Java, so it will eat twice the battery.
While the citation you provide is interesting, it doesn't inherently prove that in real-world usage, React Native apps consume twice the power of their native counterparts.
Wow, that's a great analysis. In addition to being less efficient, there's definitely more overhead with the extra layer of JS underneath. Just off the top of my head, this might be a bigger issue with apps that constantly refresh the screen or have real-time interactions (e.g. games or Slack), and not so much apps that are essentially a series of forms (code only runs when you interact). With our app, what affects battery usage most is the location analysis that happens in the background. It's one of the areas the development team spends a lot of time testing and optimizing.
I also set up a tool to export your Ffffound account to http://www.wookmark.com . Note that a paid account ($20/year) is required, so I can keep the lights on. If you want to do this, create a Wookmark account and message me the usernames from both sites. While I'm writing this, my import scripts are working through a Ffffound account with 6000 images. It's a fairly straightforward process. Happy Thursday.
The other side of this is that it helps usability when things look and feel familiar. It's interesting how it's accepted that desktop applications generally look the same, but mobile apps are supposed to be more unique. Maybe just the nature of mobile being generally more consumer oriented?
Isn't it weird that when it comes to desktop apps and particularly UI toolkits, we freak out when things don't match the OS style perfectly, but with applications on the web --- many of which are more UI-intensive than native apps --- we're constantly looking for the CSS Zen Garden?
So long as desktop apps don't mimic Kai's Power Tools interface, I think a little variety on the desktop side is nice, provided it fits function to a meaningful degree.
So an audio mixing application might do something in a different way than Office's ribbon UI.
I don't agree that desktop apps look the same, but it depends on the platform I think.
Apps that use native controls will definitely look the same, but the apps I've been using in the past few months—and yes some are Electron-based—all look very different on macOS.
Really don't understand this. Why can't those demanding Pro people just get iMacs? It's cheaper and supports 32GB of RAM. Of course, it's stationary, but pretty sure those specific professionals are mostly stationary anyway because they use multiple monitors. This whole freak-out about the MBPs is just silly.
You answered your own question. I need a machine that is portable that I can hook up to multiple monitors at home and at work, and that I can use on the go for meetings, customer visits, etc.
But that's a matter of priority then, isn't it? If portability is important then you have to sacrifice some power.
I'm curious, what do you run on your computer to max out a brand-new MBP with 16GB of RAM?
I currently do design and dev work on a 3-year-old MBP with 8GB of RAM. This includes running VMs for IE and back-ends for various projects, using Xcode to build apps, editing Sketch files with 50+ screens, photo editing in Photoshop, and motion graphics in After Effects. Naturally I don't run everything at once, but even with a combo of some of them, everything runs just fine. So this makes me genuinely curious what people do that these brand-new machines can't handle. And are those demands really the majority use case? If not, then there is an argument for having different machines for the demanding workloads and maybe a smaller laptop or even an iPad for meetings.
That's what Apple wants you to do: buy a MacBook and an iMac. Apple had the competitive laptop edge for a long time, but now they've lost it to Dell, Samsung, and HP.
I think you presume to know too much about the habits of these MBP users. Even if you spend 90% of your time at the same desk, you have meetings at customer sites, you travel, you work on the weekend or at night, and no one wants to have two machines.
One of the benefits of being a developer is being able to develop from anywhere. If you're unable to take advantage of this, you're seriously missing out on one of the best perks IMHO.
I like the proactive approach. Just yesterday, I went to an IxDA breakfast event and some recruiters showed up. Some people were just annoyed by their presence, but I thought it was a good opportunity to talk. They were pretty new to the industry, so I gave them pointers about how supporting the community - sponsoring events, organizing talks, telling better stories about the companies they are hiring for, etc. - could get them better results than the usual cold-emailing.
I just think that if you don't like how they are doing things, give them pointers. Won't work everywhere, but sometimes it will, and that will be worth it.
Shameless self-promotion - I just did a talk about prototyping last week that touches on many of the points Paul makes, and also explores some of the tools available. Hope some find it useful here: http://bit.ly/gbks-prototyping
I really wish Google could come up with a good overarching strategy for their products and stick with it for a long time. I'm losing track of how all the different services relate to each other.
Data I used is also from Johns Hopkins, turned into a JSON file via: https://github.com/pomber/covid19 .
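For anyone wanting to consume it: the JSON is keyed by country name, with one entry per day, so something like sorting countries by deaths just means looking at each country's last entry. A rough sketch with made-up figures (the real entries also carry "confirmed" and "recovered" counts):

```python
# Sketch of the pomber/covid19 JSON shape:
# { "CountryName": [ {"date": ..., "confirmed": n, "deaths": n, "recovered": n}, ... ] }
# Figures below are invented for illustration.
data = {
    "Italy":  [{"date": "2020-3-18", "deaths": 2978}, {"date": "2020-3-19", "deaths": 3405}],
    "Spain":  [{"date": "2020-3-18", "deaths": 598},  {"date": "2020-3-19", "deaths": 767}],
    "France": [{"date": "2020-3-18", "deaths": 148},  {"date": "2020-3-19", "deaths": 243}],
}

# Sort countries by the most recent death count, descending.
ranked = sorted(data, key=lambda c: data[c][-1]["deaths"], reverse=True)
print(ranked)  # ['Italy', 'Spain', 'France']
```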
There are also some good data-related resources here: http://open-source-covid-19.weileizeng.com