Pixel art is a fun retro hobby, but as the article acknowledges, "we create work for high-resolution HDR screens." Icons, logos, and illustrations in general should be vector-first, so they render cleanly on HiDPI displays and, in the future, on 4X or 8X DPI interfaces. Too many icons and logos (even the Y on the top left here in HN) look blurry on a simple 4K monitor with 200% scaling (which is not uncommon now).
Even in the vector age, though, one of the things we've learned from pixel icons is that different sizes would ideally have different levels of detail.
For example, see this camera icon: https://useiconic.com/icons/camera-slr/ (in the left pane, click the "three squares" icon to render them all at the same size). All three are vectors (you can inspect them if you want), but they still show different levels of detail. If you take the high-detail icon and render it at a small size, the details just become visual clutter that's hard to read at a small output size. Or with this book icon (https://useiconic.com/icons/book/), if you go the other way and scale the small book up to a larger size, it's harder to tell what it is because the "pages" part at the bottom is so thick -- a design necessity at small output sizes, or it'd have been invisible, but too heavy at larger sizes.
So even with vectors, the essential takeaway of this particular challenge still stands:
> [how to] distill the essence of your design and make sure your icon is clear and understandable at all sizes
Every vector icon still gets rasterized at some point for display on your monitor's pixel grid, not to mention for human perception. Vectorization is a transport/delivery concern; in and of itself it doesn't replace the thoughtful design of the pixel era.
Fonts can work similarly... they are usually vector these days, but each glyph can still be drawn differently at different sizes, and that can be controlled dynamically so it looks better at the actual render size: https://developer.mozilla.org/en-US/docs/Web/CSS/font-optica...
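To make that a bit more concrete: one mechanism behind this is variable fonts that ship an optical-size ("opsz") axis, which is what the font-optical-sizing property lets the renderer tap into based on the rendered size. Here's a minimal sketch, assuming you have some variable font file on hand (the filename below is a placeholder) and the fontTools library installed, that just lists the variation axes so you can see whether an opsz axis is present:

    # Minimal sketch: inspect a variable font's variation axes with fontTools.
    # "MyVariableFont.ttf" is a placeholder path; substitute any variable font.
    from fontTools.ttLib import TTFont

    font = TTFont("MyVariableFont.ttf")

    if "fvar" in font:  # only variable fonts carry the fvar table
        for axis in font["fvar"].axes:
            # An "opsz" axis means the outlines themselves vary with optical size,
            # which is what automatic optical sizing in the renderer makes use of.
            print(axis.axisTag, axis.minValue, axis.defaultValue, axis.maxValue)
    else:
        print("Not a variable font (no fvar table)")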
This is what annoyed me the first time I saw vector icons used in Linux desktop environments. I appreciate the attempt at making something that scales to any size, but in practice you need to custom design the smallest versions of an icon.
Even at the larger sizes, a vector won't always look great. If the renderer doesn't fudge vector edges to snap to pixel edges, you'll end up with blurry edges instead of clean, sharp ones.
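To make the claim concrete, here's a minimal sketch of the coverage math that anti-aliased rasterization boils down to (my own illustration, not any real renderer's code): a 1px stroke whose edges fall mid-pixel has its coverage split across two pixel columns, so both come out gray instead of one solid column.

    # How much of each pixel column a 1px-wide vertical stroke overlaps;
    # that coverage fraction becomes the column's opacity when anti-aliased.
    def column_coverage(stroke_center_x, stroke_width=1.0, columns=range(8, 13)):
        """Fraction of each pixel column [c, c+1) covered by the stroke."""
        left = stroke_center_x - stroke_width / 2
        right = stroke_center_x + stroke_width / 2
        return {c: max(0.0, min(right, c + 1) - max(left, c)) for c in columns}

    # Stroke centered on a pixel boundary: coverage splits 50/50 across two
    # columns, and both render as half-intensity gray -- the blurry edge.
    print(column_coverage(10.0))   # {..., 9: 0.5, 10: 0.5, ...}

    # Stroke snapped to a pixel center: one column fully covered, edge is crisp.
    print(column_coverage(10.5))   # {..., 9: 0.0, 10: 1.0, ...}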
> Even at the larger sizes, a vector won't always look great. If the renderer doesn't fudge vector edges to snap to pixel edges, you'll end up with blurry edges instead of clean, sharp ones.
Do you have an example? Text is essentially "vector" these days, and I've never heard of anyone complaining that text rendered on a modern screen has blurry edges. The blurriness of some text often comes from ClearType or whatever tricks are being used to make it look better on low-DPI screens, which end up making it worse on modern displays.
True, and pixel art was originally designed for cathode ray tubes. This leads to a funny effect where modern pixel art designers perhaps misunderstand the intended look of actual oldskool pixel art and emulate it anyway. The same is true of old fonts.
Modern pixel artists are far from "misunderstanding" the legacy of pixel art. They're specifically designing for a sensibility and context that didn't exist when pixel art was originally made. Actually talk to pixel artists and they'll explain this to you themselves perfectly well.
I have no idea about modern pixel artists' thought processes, but it's generally under-appreciated that CRT-era pixel art looks significantly different on today's displays than it looked on the originally targeted hardware.
That being said, there were also a couple of years in the 2000s where pixel-based icons were specifically designed for LCDs.
We can never know what each and every pixel art designer means or intends, simply because there is no single answer.
Because of this obvious fact, I offered a common explanation among those designers I know who produce pixel art: that they simply do not consider the particulars of how pixel art was rendered on cathode ray tubes. Hence the /perhaps/ -- it's offered as a possibility.
I find your counter-argument superficial and perhaps intentionally missing the point.
This is true for retro video game consoles and early computers that had ~240p displays or output to TVs. But later PC CRTs were pretty crisp and not really that much different from an LCD at native res.
> Too many icons and logos (even the Y on the top left here in HN) look blurry on a simple 4K monitor with 200% scaling (which is not uncommon now).
This has been an issue with application icons on Windows for a long time. For many applications the largest icon shipped was 32x32, so when Windows changed the default desktop icon size to 48x48, those all looked terrible. I hated seeing blurry icons enough that I would make a larger version myself if I couldn't find a decent replacement online.
In some cases I would actually need to make a custom 16x16 or 24x24 icon, because whoever made the icon went the easy route of just scaling down a larger icon to create the small ones. Even if the source is a vector, most icons will not scale down to those sizes and retain readability since the details disappear. Alignment to the pixel grid is essential for tiny icons. In these cases I would have to use Resource Hacker to modify the icon stored in the executable (for desktop icons that wasn't needed since you can change a shortcut to use any icon file).
Why do you think so? 20 years ago, everything was clearly at 1X. Now, the Windows default for many resolutions is 1.5X and my MacBook is 2X by default. iPhones and Androids are at least 2X scaled by default. iPhones in the last 2 years (like the iPhone 12 and 13) are scaled 3X. So we've gone from 1X only, to 2X pretty much everywhere for desktop, with 3X on the latest phones.
Full-HD monitors and laptops are still very common in the corporate world and for non-affluent PC gamers. I don’t see this lower-cost segment going away anytime soon, since display yield drops quadratically with DPI, and therefore the associated cost increases quadratically with DPI.
You can get a 4K monitor for ~$300 now - or significantly less if you shop deals. I spent that much on a super basic FHD display in 2016. Costs seem to be dropping pretty well even with inflation and the supply chain mess.
Just to clarify, Yale and Brown aren't in Boston and 4/8 Ivies aren't in New England. And Ivies aren't even all the top schools.
Top schools really are spread out quite well in the US compared to some countries. Which agrees with the general point that the US isn't super centralized.
Wouldn't the manufacturers just take over, and only survive in the lowest cost place for manufacturing? The inventors/designers would get nothing and no one would want to do that anymore. What would be the incentive to share your open sourced designs?
Like it'd be pretty easy for one state-supported giant manufacturer to just build every single open sourced product, and sell it direct. The whole world would buy from this cheapest producer. No one else would get anything, and supply chains would become even more brittle.
Another way to think about it: imagine knockoffs were guaranteed identical to the originals, but at a lower price. Everyone would just buy the knockoffs, and no one would want to make anything new anymore.
Seeing the amount 50k written out like that today, it seems like a pretty good deal. It looks like less than the annual equity that an entry-level software developer makes. It's the price of a used truck, a kitchen remodel, a really nice family vacation for a month, etc.
Wtf kind of entry level software jobs are you talking about lol? 50k is 11k more than the median family income here in Canada. You're right, though, it's still less than a new truck.
I've heard this trope before, but it doesn't seem true to me. The federal government's hand in day-to-day life covers universities (through student loans and grant funding), travel (domestic and international), the quality of the food we eat, healthcare regulation, and nearly everything to do with employment.
The local governments seem to focus mostly on K-12 schools, and police/fire; plus some one-off errands like the DMV and liquor laws.
The amount of federal taxes I pay is a life-changing amount if I were to get it back in a single check every year, whereas the state/city taxes of sales + property + state income are maybe a quarter as much.
The 37% and 13.3% are on marginal income, as I'm sure you know. The effective tax rate is much lower.
California and some other states also have required disability insurance that is taken out with the state income tax. It's 1% in CA.
And I would count Social Security and Medicare too, which together are 7.65%. But the SS part has a cap, so that percentage goes down the more you make.
All put together, someone making $250K pays about 37% of it to the various income taxes listed above. If you were making $1M a year, that rises to about 47%. At $100K, it's about 30%.
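As a rough illustration of how a top marginal rate turns into a much lower effective rate (my own back-of-the-envelope sketch, using approximate 2021 single-filer federal brackets and the 2021 Social Security wage base from memory -- not tax advice, and it ignores deductions, state tax, and everything else discussed above):

    # Back-of-the-envelope sketch: effective vs. marginal rate for a single filer.
    # Brackets are approximate 2021 federal figures, used purely for illustration.
    FEDERAL_BRACKETS = [  # (upper bound of bracket, rate)
        (9_950, 0.10), (40_525, 0.12), (86_375, 0.22), (164_925, 0.24),
        (209_425, 0.32), (523_600, 0.35), (float("inf"), 0.37),
    ]
    SS_WAGE_CAP = 142_800  # approximate 2021 Social Security wage base

    def federal_tax(income):
        tax, lower = 0.0, 0.0
        for upper, rate in FEDERAL_BRACKETS:
            tax += max(0.0, min(income, upper) - lower) * rate
            lower = upper
        return tax

    def payroll_tax(income):
        ss = 0.062 * min(income, SS_WAGE_CAP)          # capped Social Security
        medicare = 0.0145 * income                      # Medicare, no cap
        medicare += 0.009 * max(0.0, income - 200_000)  # additional Medicare tax
        return ss + medicare

    for income in (100_000, 250_000, 1_000_000):
        total = federal_tax(income) + payroll_tax(income)
        print(f"${income:,}: effective {total / income:.1%} (before state tax and deductions)")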
The employer pays the same social security and medicare taxes as the employee and also has a cap on the social security part, plus in some cases an unemployment tax.
If you want to include employer taxes -- i.e., figure out how much you'd get if the employer is willing to spend $250K on you -- then it'd be about 42%. At $100K, it'd be about 38%, since you don't reach the SS cap.
Of course, there are a couple of other tricks in there: the government can embed some tax into the healthcare coverage that employers are required to buy, and there are also deductions that are hard to turn into a percentage like this.
But there's a survivor bias in picking US blue chips. Wouldn't it make more sense to look at the rate of return for equity markets around the world? For example, the Japanese blue chip stock market has been stagnant.