When it comes to things like favicons, I always design variants for 16×16, 32×32, and a larger scalable size (or occasionally just 256×256). It perpetually bothers me that no avatar system I know of supports different images at different sizes; the constraints can be significantly different. In my own site’s favicon, at one extreme the 16×16 version pretty much ignores the vertical border to get the mark as big as possible, and is effectively super thick and blocky, while at the other extreme the large scalable one wants some definite padding, thinner lines, and maybe other details. This is basic design stuff that is intuitive and has been generally practised since the dawn of GUIs.
As for black and white, well, I have been thinking that way a little more when contemplating e-ink displays, which handle monochrome better than greyscale (faster, less ghosting). It may surprise some to learn that the reMarkable renders your writing strictly monochrome with no antialiasing, presumably for both of these reasons; and it works rather well, though I would appreciate a higher resolution than its 226 dpi. (The UI is all firmly black-and-white with no greys, but they do use anti-aliased rendering throughout.)
I've spent a bit of time designing a simplistic (but state-dependent, rendered on the fly) SVG favicon for a Firefox extension I'm working on. I started designing at 512², with dimensions and elements carefully calculated for downscaling, but the resulting antialiasing on various screens was so unpredictable that I switched to designing a 16² upscale instead, which, while initially restrictive, was much more predictable and came out looking quite nice.
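I don't know how the parent wires theirs up, but for anyone curious, here is one generic way to render a state-dependent SVG favicon on the fly (the shapes, colors, and function name are placeholders of mine, not the parent's code):

```ts
// A minimal sketch: build a tiny SVG for the current state and swap it in as
// the page's favicon via a data: URI. A 16x16 viewBox keeps the shapes on a
// whole-"pixel" grid, which matches the idea of designing at 16².
function setSvgFavicon(stateColor: string): void {
  const svg =
    `<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16">` +
    `<rect width="16" height="16" fill="${stateColor}"/>` +
    `<rect x="3" y="3" width="10" height="10" fill="white"/>` +
    `</svg>`;

  let link = document.querySelector<HTMLLinkElement>('link[rel="icon"]');
  if (!link) {
    link = document.createElement("link");
    link.rel = "icon";
    document.head.appendChild(link);
  }
  link.type = "image/svg+xml";
  link.href = "data:image/svg+xml," + encodeURIComponent(svg);
}

setSvgFavicon("#c00"); // e.g. reflect an "error" state
```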
I noticed the images looked a little blurry to me for pixel art. Digging further into it, they're oversized high-resolution JPEG images scaled down by the browser rather than correctly sized PNGs.
Putting quality aside and thinking about the poor internet pipes: each image is around 100 kB, whereas a two-bit black-and-white PNG would have been around 12 kB.
I know, I know, it's probably an off-the-shelf CMS, but still: it's Apple. If anyone can afford to care about little things like this, shouldn't it be them?
For one, browsers can't agree on which CSS solution to pick. Chrome and Safari have `image-rendering: pixelated` whereas Firefox has `image-rendering: crisp-edges`. The spec used to say that `pixelated` meant "keep it looking pixelated" and that `crisp-edges` meant to keep the edges crisp, and so allowed various algorithms. The spec was changed in early 2021 so that `crisp-edges` means "nearest neighbor" and `pixelated` still means "keep it looking like pixels", which implies nearest neighbor plus extra.
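The usual cross-browser workaround relies on the CSSOM ignoring values a browser doesn't understand, so the last supported assignment wins. A minimal sketch (mine, not from the spec):

```ts
// Apply both values; an unsupported assignment is silently ignored, so the
// browser keeps whichever of the two it understands (the later one wins when
// both are supported).
function makePixelated(img: HTMLImageElement): void {
  img.style.imageRendering = "crisp-edges"; // older Firefox only knew this one
  img.style.imageRendering = "pixelated";   // Chrome / Safari (and newer Firefox)
}
```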
It gets worse though. What is the web developer's intent?
Seems straightforward, right? I want my 128x128 pixel image displayed at double size (256x256), where each original pixel is 2x2 pixels on the user's machine.
But, that's not how browsers work. My Windows Desktop has a devicePixelRatio of 1.25 so asking for a 256px by 256px CSS pixel image gives 317x317 device pixels. That means this 128x128 pixel image, scaled to 317x317 with nearest neighbor filtering is going to look really really bad as some pixels get 2x2 and others get 1x1 scaling. This is why `pixelated` = "nearest neighbor" may not actually be what the web designer wants.
Whereas, if you manually scale the 128x128 image to, say, 512x512 using nearest neighbor and then let the browser scale it to 256x256 * devicePixelRatio using the default bilinear filtering, it will likely look better.
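The commenter does this offline, but the same pre-scaling step can be sketched in-page with a canvas (the function name and the 2x display size are my own choices):

```ts
// Upscale with nearest neighbor first, then let the browser do the final
// (smooth) resample down to the displayed size.
function preScaleNearestNeighbor(img: HTMLImageElement, factor: number): HTMLCanvasElement {
  const canvas = document.createElement("canvas");
  canvas.width = img.naturalWidth * factor;   // e.g. 128 -> 512
  canvas.height = img.naturalHeight * factor;

  const ctx = canvas.getContext("2d")!;
  ctx.imageSmoothingEnabled = false;          // nearest-neighbor upscale
  ctx.drawImage(img, 0, 0, canvas.width, canvas.height);

  // Display at the intended CSS size (2x here); the browser's bilinear
  // downscale from the big canvas hides a non-integer devicePixelRatio far
  // better than nearest neighbor straight from 128px would.
  canvas.style.width = `${img.naturalWidth * 2}px`;
  canvas.style.height = `${img.naturalHeight * 2}px`;
  return canvas;
}
```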
Note: You can get these non-integer devicePixelRatio values on Chrome or Firefox just by zooming in the browsers (Cmd/Ctrl +/-). On Windows you can also go into the display settings and choose an OS level devicePixelRatio so that OS handled UI widgets and well written apps will adjust how they draw.
Zoom in and out and, at least on my machines, the right image (pre-scaled externally and then scaled down) looks better than the left (original resolution with `image-rendering: pixelated`).
I'm not sure why you're saying there's "no true" definition: there is a spec difference between pixelated and crisp-edges: https://stackoverflow.com/a/25278886
> The image is scaled in a way that preserves the pixelated nature of the original as much as possible, but allows minor smoothing instead of awkward distortion when necessary.
The spec does not say "use nearest neighbor". In fact it specifically allows not using nearest neighbor (end of the first sentence). And further, it doesn't specify the algorithm used.
So there is no true definition of what you'll get. There is only a general definition of the intent.
> The image must be scaled with the "nearest neighbor" algorithm: treating the original image’s pixel grid as a literal grid of rectangles, scale those to the desired size, then each pixel of the final image takes its color solely from the nearest pixel of the scaled original image.
and then it explicitly says in the pixelated section, right after the summarizing sentence you excerpted:
> For each axis, independently determine the integer multiple of its natural size that puts it closest to the target size and is greater than zero.
> Scale it to this integer-multiple-size as for crisp-edges, then scale it the rest of the way to the target size as for smooth.
The first sentence is descriptive; what is prescribed follows directly after it. I'm not sure how it could do more than that to define "what you'll get".
That said, my main point stands: nearest neighbor looks horrible when you have a non-integer devicePixelRatio, which is relatively common. You'll get better results upscaling the image offline and then downscaling in the browser, rather than upscaling in the browser.
Your argument seems to ignore the fact that Apple has their own browser and could put in the time to properly handle pixel-perfect rendering.
Yes there are a lot of complexities with that statement, but I personally think they would have a significant win here if they set a new bar of excellence as a part of the challenge.
I'm only pointing out that you'll get better results today, in all browsers, by upscaling offline and letting the browser downscale than by upscaling in the browser via `image-rendering: pixelated`.
The question was "Why are they using an image that's been upscaled offline?" and I have a reasonable explanation for why someone might want to do that.
> There is one: choose the size so that the image is displayed without scaling.
You're apparently not paying attention. There is currently no way to do that (except in Chrome via JavaScript maybe)
What size would you choose?
By default a browser will display the image at its natural size * devicePixelRatio. devicePixelRatio can be non-integer values. Further, the devicePixelRatio is fluid in Chrome and Firefox. Press Ctrl/Cmd +/- and it changes.
In Chrome (the only browser that has implemented ResizeObserver with devicePixelContentBoxSize), you can ask exactly how many device pixels a given element occupies... other browsers have not implemented this. And no, elem.getBoundingClientRect will not do it.
But even if you have the API, it would still be a pita. You'd have to take the size of your original image, figure out some integer multiple that matches the current devicePixelRatio, pray that it still fits your design, create an element that large, then use the ResizeObserver to see if you really got that size. If not, adjust up or down a pixel at a time until you do.
"Pray that it still fits your design" means: assume you have a 200x100 image you want displayed at 2x, so 400x200, and the user's devicePixelRatio is 1.33. If you ask for 400x200 you'll actually get 532x266. Your 200x100 image will not scale pixel-perfect to 532x266. You could, via JavaScript, compute what size to ask for so that each image pixel maps to a whole number of device pixels. That size, with a devicePixelRatio of 1.33, is 300.7518796992481. You can't ask for a fractional size (well, you can, but it's going to get rounded). But now your design no longer works: you expected the image to take 400 CSS units but it's only taking ~300, too small. And further, there's no guarantee you'll get 300; you might get 301. To find out requires the devicePixelContentBoxSize mentioned above.
Yes, using a lossy image format for pixel art, or for images with a small number of colors in general, has always been a bad idea. Many people do not see the artefacts, but they are there, and the fact that the file size is even bigger at lower quality does not make it better.
BTW, the images also contain a grid, so you would need at least three colors, which requires 2 bits per pixel before compression.
Take a screenshot and zoom in. If perfectly pixel-aligned, the edges should be sharp even when zoomed in on a screenshot. In the screenshot you will see slightly blurry edges.
> Digging further into it, they're oversized high-resolution JPEG images scaled down by the browser rather than correctly sized PNGs
Pixel perfect meets JPG. What can go wrong? /s
There is an unhealthy love between JPG and the world of photography.
I have an 8 Mpx phone but it chooses to use JPG with medium compression. Goodbye, quality. My camera also uses JPG, but fortunately the compression ratio is not so big.
You're saying there is no anti-aliasing? This looks really blurry. Like you said, open it up and check with a color picker and you will see the gradients...
I see gridlines on my iPhone, but only if I zoom in a lot. I'm guessing this is a case of "designed and exported on a Retina display, for a Retina display". Because yeah, in a true 1-bit image there won't be a grey gridline.
There's something odd going on with that picture, though, because it has faint grid lines for the pixels. It's hard to tell what's anti-aliasing blurring, and what's a grid line.
This is an interesting challenge. I fire up my Macintosh SE/30 regularly to journal and organize my thoughts using Microsoft Word 5.1.
Every time I do it I'm impressed again and again by what was possible with a 512x342 black & white screen and a 16 MHz 68030 attached to 8 MB of RAM (which is "overkill" for my use). It's incredible the collective and integrated creativity that was motivated by the constraints of the medium.
Seeing Apple propose this challenge makes me wonder if there's some kind of low-power, low-res product they're wanting to release where these kinds of design constraints are relevant again.
> Once you’ve decided, explore a single element that captures the essence of the app, and express that element in a simple, unique shape. For example, the Mail app icon uses an envelope, which is universally associated with mail.
And this bullet point is underneath an example of... the "Photos" app icon, I believe? Which basically violates this suggestion, and as a result looks like a total mess in pixel format (even more so than the real-life HiDPI version on my iPhone).
Mail => envelope
Messages => message bubble
Photos => ...colorful peacock tail that suggests more of a "painter's canvas" than it does "photographs" ??
The icon isn't the photos app icon, although the general shape is similar.
> Add details cautiously. If the content or shape is overly complex, details can be hard to discern. Icons at all three sizes should generally match in appearance, although you can explore subtle, richer, or more detailed additions at 48×48 pixel size.
I believe that example is meant to be an example of a bad icon, as it has a complex shape with lots of different textures.
> The icon isn't the photos app icon, although the general shape is similar.
Are you sure? I'm holding up my iPhone side by side, and to me there's no doubt this is attempting to represent that. To represent the different colors, the pixel version uses a unique pattern for each "feather". It seems as 1:1 as a 48x48 pixel version could be.
Pixel art is a fun retro hobby, but as the article acknowledges, "we create work for high-resolution HDR screens." Icons, logos, and illustrations in general should be vector-first, so they will show up cleanly on HiDPI and in the future, 4X or 8X DPI interfaces. Too many icons and logos (even the Y on the top left here in HN) look blurry on a simple 4K monitor with 200% scaling (which is not uncommon now).
Even in the vector age, though, one of the things we've learned from pixel icons is that different sizes would ideally have different levels of detail.
For example, see this camera icon: https://useiconic.com/icons/camera-slr/ (in the left pane, click the "three squares" icon to render them all at the same size). All three are vectors (you can inspect them if you want), but they still show different levels of detail. If you take the high-detail icon and render it at a small size, the details add visual clutter that is hard to read at a small output size. Or with this book icon (https://useiconic.com/icons/book/), if you go the other way and scale the small book up to a larger size, it's harder to tell what it is because the "pages" part at the bottom is so thick -- which was a design necessity at smaller outputs, or it'd have been invisible, but at larger sizes it's too thick.
So even with vectors, the essential takeaway of this particular challenge still stands:
> [how to] distill the essence of your design and make sure your icon is clear and understandable at all sizes
Every vector icon still gets rasterized at some point for displaying to your monitor's pixel grid, not to mention human perception. Vectorization is a transport/delivery concern, but in and of itself it doesn't replace the thoughtful design of the pixel era.
Fonts can work similarly... they are usually vector these days, but still each glyph can look different at different sizes and can be dynamically controlled to look better at different render sizes: https://developer.mozilla.org/en-US/docs/Web/CSS/font-optica...
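The (truncated) MDN link appears to be about the font-optical-sizing property; opting in is a one-liner. A tiny sketch (the `h1` selector is just an illustrative choice, and it only has a visible effect with fonts that expose an optical-size axis):

```ts
// Let the browser pick glyph variants tuned for the rendered size.
document
  .querySelector<HTMLElement>("h1")
  ?.style.setProperty("font-optical-sizing", "auto");
```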
This is what annoyed me the first time I saw vector icons used in Linux desktop environments. I appreciate the attempt at making something that scales to any size, but in practice you need to custom design the smallest versions of an icon.
Even at the larger sizes, a vector won't always look great. If the renderer doesn't fudge vector edges to snap to pixel edges, you'll end up with blurry edges instead of clean, sharp ones.
> Even at the larger sizes, a vector won't always look great. If the renderer doesn't fudge vector edges to snap to pixel edges, you'll end up with blurry edges instead of clean, sharp ones.
Do you have an example? Text is essentially "vector" these days, and I've never heard of anyone complaining that text rendered on a modern screen has blurry edges. The blurriness of some text is often "cleartype" or whatever tricks are being used to make it look better on low-dpi screens, which end up making it worse on modern displays.
True, and pixel art was originally designed for cathode ray tubes. This leads to a funny effect where modern pixel art designers perhaps misunderstand the intended look of actual oldskool pixel art and emulate it anyway. The same is true of old fonts.
Modern pixel artists are far from "misunderstanding" the legacy of pixel art. They're specifically designing for a sensibility and context that didn't exist when pixel art was originally made. Actually talk to pixel artists and they'll explain this to you themselves perfectly well.
I have no idea about modern pixel artists' thought processes, but it's generally under-appreciated that CRT-era pixel art looks significantly different on today's displays than it looked on the originally targeted hardware.
That being said, there were also a couple of years in the 2000’s where pixel-based icons were specifically designed for LCD.
We can never know what each and every pixel art designer means or intends, simply because there is no single answer.
Because of this obvious fact, I offered a common explanation among those designers I know who produce pixel art: that they simply do not consider the particulars of how pixel art was rendered on cathode ray tubes. Hence the /perhaps/: a possibility is presented.
I find your counter-argument superficial and perhaps intentionally missing the point.
This is true for retro video game consoles and early computers that had ~240p displays or output to TVs. But later PC CRTs were pretty crisp and not really that much different from an LCD at native res.
> Too many icons and logos (even the Y on the top left here in HN) look blurry on a simple 4K monitor with 200% scaling (which is not uncommon now).
This has been an issue with application icons on Windows for a long time. For many applications the largest icon would be 32x32, so when they changed the default desktop icon size to 48x48, those all looked terrible. I hated seeing blurry icons enough that I would make a larger version myself if I couldn't find a decent replacement online.
In some cases I would actually need to make a custom 16x16 or 24x24 icon, because whoever made the icon went the easy route of just scaling down a larger icon to create the small ones. Even if the source is a vector, most icons will not scale down to those sizes and retain readability since the details disappear. Alignment to the pixel grid is essential for tiny icons. In these cases I would have to use Resource Hacker to modify the icon stored in the executable (for desktop icons that wasn't needed since you can change a shortcut to use any icon file).
Why do you think so? 20 years ago, everything was clearly at 1X. Now, the Windows default for many resolutions is 1.5X and my MacBook is 2X by default. iPhones and Androids are at least 2X scaled by default. iPhones in the last 2 years (like the iPhone 12 and 13) are scaled 3X. So we've gone from 1X only, to 2X pretty much everywhere for desktop, with 3X on the latest phones.
Full-HD monitors and laptops are still very common in the corporate world and for non-affluent PC gamers. I don’t see this lower-cost segment going away anytime soon, since display yield drops quadratically with DPI, and therefore the associated cost increases quadratically with DPI.
You can get a 4K monitor for ~$300 now - or significantly less if you shop deals. I spent that much on a super basic FHD display in 2016. Costs seem to be dropping pretty well even with inflation and the supply chain mess.
Just a thought: what if this is some kind of teaser for a new device Apple is preparing? Maybe some e-ink device? OK, I know we are in 2022, but I would really love to see some “low-tech” device from them. I’ll try to wake up now :)
Sorry, dumb question, but does it let me listen to downloaded music on my earphones even when I don't have a phone with me? (Never had a proper smartwatch).
Like can I download music directly to the Watch (what about DRM?) and can it connect to Airpods or other bluetooth without a phone?
You can download music directly to the watch, at least with Apple Music. No idea if it works with a local library. However, other apps have the ability too, so chances are good it's supported elsewhere as well.
It can connect to Bluetooth headphones just fine without an iPhone; I only carry the watch when I go for a run.
Cool, thanks! I didn't know they could do that. Looks like it supports Spotify Premium too. Music for exercise would be awesome! (I know, I'm way behind the curve)
I agree with your thought. "Challenges" like this rarely exist in a vacuum -- I bet there's something coming where this is relevant, and this is a chance for them to crowdsource design options.
BTW -- not implying there's malice here -- there is something genuinely fun about a design challenge like this, but I bet there's an ulterior motive too :)
possibly the v1 of the inevitable "Apple mixed reality visor/glasses" has a low-res, mono-color display, or else it uses low-res mono-color as an "idle" mode when not actively in use so the user is not distracted?
Could also be the opposite. This could be a "one last hurrah" kind of contest to announce that they are moving the entire os to some kind of resolution independent vector ui architecture.
This is a really good video about how Apple cares about being pixel perfect compared to Windows with something simple and obvious like the mouse cursor icon:
You can’t tell me Apple cares about pixel perfection when they implement fractional scaling (which, last time I heard, was I think effectively the only choice on most of their laptops) with rendering at the next integer up and downscaling.
Basically Apple have made it so that it’s literally impossible to be pixel perfect, even though they were in the best position by far to force developers to do it properly, in those few where it didn’t Just Work. Microsoft’s approach to fractional scaling is incomparably more sensible, and Wayland is finally heading towards being able to handle it properly (though I find it quite ridiculous that they didn’t just start by doing it properly).
Actually no. The latest generation of MacBooks increases the screen dpi and the default setting is actual real 2x scaling. Though it doesn't help much because see my reply to the parent comment. It's noticeable on retina too, just not as much.
Apple gives no shit about being pixel-perfect lately. Try using modern macOS on a non-retina display and your eyes will bleed because every single icon is drawn as if there's no pixel grid whatsoever. Everything is a blurry mess. Steve Jobs would've definitely fired someone over this, yet with Tim Cook it stays unfixed for more than 2 years — I'd guess because pixel-perfect icons don't affect the bottom line.
On some MacBooks, the default screen resolution is not a perfect 2x, but something like 1.75x. This means macOS is drawing its framebuffer in 2x, then scaling it down to 1.75x to fit the actual screen resolution which blurs things.
The new MacBook Pro for example uses perfect 2x scaling, but the MacBook Air has a smaller resolution screen and is not 2x.
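For a concrete feel for why that downscale blurs things, here is the arithmetic with made-up numbers (neither resolution is claimed to be a real Apple configuration):

```ts
// Hypothetical example: a "looks like 1440 x 900" scaled mode on a
// 2560 x 1600 panel.
const logical = { w: 1440, h: 900 };
const native = { w: 2560, h: 1600 };

// macOS renders the frame into a 2x backing buffer...
const backing = { w: logical.w * 2, h: logical.h * 2 }; // 2880 x 1800

// ...then resamples that buffer down to the panel's native pixels.
const scale = native.w / backing.w; // ~0.889 -- a non-integer resample, hence the softness
console.log(`downscale factor: ${scale}`);
```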
It really is appalling. It’s not just icons; if you switch Finder to gallery view and look at some photos, the previews aren’t scaled properly. The only way I can describe it is that it looks like it was downsampled without any blending—as if they just picked every fourth pixel or something. Really jaggedy.
I fully appreciate that it’s never gonna look anywhere near as good as a display with 4x as many pixels, but it’s really, really bad. The same monitor on a Windows machine looks great!
Did some rearranging of my desks to separate my work and play spaces, so my work-from-home desk now has a 1080p 21" monitor. It's amazing how not built for that modern macOS is. Even font rendering is a disaster again.
Apple still sells Macs that don't come with a screen. If someone buys a Mac Mini, chances are high that they will plug into it the monitor they already own, which most probably won't be retina.
I do use a 1440p monitor. The blurriness is still very much noticeable. It was much better on Mojave when all icons were drawn to the pixel grid.
And yes, I'd buy a 5K monitor, which is the retina equivalent of 1440p, but these things aren't exactly easy to come by. Besides, the blurriness is still visible even on retina displays, it's just not as pronounced.
You can’t really get 1440p monitors any smaller than 27” though. And I have a (really quite good) 27” 1440p monitor and it looks like dog shit on macOS. The same monitor under Windows looks great. Sure, it doesn’t hold a candle to my MacBook’s own 220ppi screen, but it handily outclasses whatever the hell macOS is doing at 1x scaling.
Really wish people who cared that much worked on the actual OS.
Still can't get over how, on Windows, 1 in 3 times when you move the cursor to the edge of a window (might just be the lower-left edge) the resize cursor appears gigantic and glitched out for a single frame.
Really don't understand how they either don't notice that stuff or just say "Yeah, it's fine, just ship it". I know you can say "well, it still works, it doesn't alter the functionality", but what if, like, every third time you picked up a pencil I flashed a floodlight in your peripheral vision? It would probably push you out of your flow, right?
It's a catastrophe with Windows. Make two windows snap to the borders, so together they're fullscreen? 2-pixel gap between the windows; 1-pixel gap at the right, left, and upper sides; 2-pixel gap to the taskbar at the bottom. How can this be so hard? And even if it's hard, this is so obvious you really want to fix it.
Problems with pixel perfect layout were what made me give SwiftUI a 12 month “wait for maturity” last year. I hope this and the new SwiftUI custom layout capabilities signal an interest in this at Apple.
I was making strip charts of data at fairly high density and wanted my minor grid lines to be one pixel wide. You must make them line up with a physical screen pixel or they turn into a blurry anti-aliased mess. As of last year I never found a good way to connect a SwiftUI view to its physical screen pixel positions. I’ve done this for many years in UIKit, and while it takes extra effort, it is at least possible.
I think this problem is partially why people used “magic marker” aesthetic on their digital charts rather than “engineering pad”.
I guess it’s time for another crack at it.
How do you do "one pixel wide" when there are so many devices, viewports, LCD/LED types, zoom sizes for accessibility, etc.? It's all virtualized these days, no? A single pixel on something like a watch or 4k 27" display might not even be visible to the average person...
With UIKit you can query the device, its subpixel structure, and the alignment of your abstract window on the device, and work it out from there.
Mostly. It’s possible there is an abstracted display at a non-native resolution over something like HDMI, but if you are running your display at a non-native resolution then you just don’t care about details and you deserve what you get.
I've been designing Apple icons since 1986, and clearly remember using ResEdit to "paint" icons, one pixel at a time.
These days, I always use Adobe Illustrator to create my digital assets in vector form, and always start with black-and-white, for logos and icons. That's fairly standard "branding" stuff. The icon needs to be recognizable, even when reduced to 16 X 16 (OK, I'll allow 32 X 32), and in monochrome. I seldom need to move to Photoshop, to apply any raster work.
Nowadays, Apple allows vector assets, so I usually include them as vector PDF. It slows down Xcode, but seems to make the apps look a lot better, and they are probably much smaller (App size has never been an issue for me, anyway).
I can’t help feeling that a large number of those submissions are really missing the point of economising on the canvas. Almost all of them seem to have large swathes of empty space around the icon rather than enlarging the logo to use the entire canvas space. This leads to icons that are completely unrecognisable at the smaller scales. Plus the smaller icons are almost all just clones of their larger counterparts (as if they just resized the image in photoshop — though obviously that’s not what they literally did) rather than simplifying the detail.
That all said, there are still some genuinely impressive efforts there.
Yeah, one thing that definitely stood out was how a lot of the more iconic logos are already pretty simple (often even monochromatic). Though I do also believe there was a fair amount of selection bias going on too (people picking easy icons to redesign).
Aseprite is a great piece of kit for anyone doing the challenge (or doing pixel art in general): https://www.aseprite.org/ - you can get it via the Steam store too, for the big 3 platforms.
A way to get started may be to take a color image and dither it to black-and-white pixels. Shameless plug: I make software for macOS to do this with seven different algorithms[0] (it can in fact even export to MacPaint format). But most sophisticated graphics programs also have this built in (Photoshop, GraphicConverter, etc.).
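For anyone who wants to roll their own, here is a minimal Floyd–Steinberg sketch (one classic error-diffusion algorithm; not claiming it's one of the seven in the linked app). It converts a canvas ImageData to 1-bit black and white in place:

```ts
// Threshold each pixel to black or white and diffuse the quantization error
// to its unprocessed neighbors (classic Floyd-Steinberg weights).
function ditherToBlackAndWhite(image: ImageData): void {
  const { width, height, data } = image;

  // Work on a grayscale copy so error diffusion isn't disturbed by writes.
  const gray = new Float32Array(width * height);
  for (let i = 0; i < width * height; i++) {
    const r = data[i * 4], g = data[i * 4 + 1], b = data[i * 4 + 2];
    gray[i] = 0.299 * r + 0.587 * g + 0.114 * b;
  }

  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = y * width + x;
      const value = gray[i] < 128 ? 0 : 255; // black or white
      const err = gray[i] - value;

      if (x + 1 < width) gray[i + 1] += (err * 7) / 16;
      if (y + 1 < height) {
        if (x > 0) gray[i + width - 1] += (err * 3) / 16;
        gray[i + width] += (err * 5) / 16;
        if (x + 1 < width) gray[i + width + 1] += (err * 1) / 16;
      }

      data[i * 4] = data[i * 4 + 1] = data[i * 4 + 2] = value; // alpha untouched
    }
  }
}
```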
It's quite strange (and nice) to see a hint of nostalgia creeping back in to Apple's communications. For a long time, celebrating its history seemed verboten at Apple?
There's that story of Jobs getting rid of the Apple Museum that occupied the foyer. Something about not wanting Apple to be a company that was living in its past.
I don't think Apple is the company to talk about pixel perfection. They don't even support native resolutions. It's either tiny, unreadable text, or you have to scale.
See also: all the commentary around font rendering on MacOS/OS X vs Windows. IMHO Windows has always been far more towards pixel accuracy, but what the hell do I know (the internet will tell me I’m wrong either way!)
Well, pixel accuracy is the wrong term for it, as the two options are fitting the pixel grid (aka hinting) and being accurate to the font vectors. Mac OS historically picked the latter, Windows the former. It was more of a difference before subpixel rendering and now hi-dpi displays, but Apple has recently dropped their subpixel support (since it's not needed on retina displays), so it's once again an issue using macOS on low-PPI displays.
I can’t help but think that Apple is harvesting people’s work to get a large collection of modern black and white app icons to train a model that creates these automatically.
It's interesting that the nostalgia for "pixel-perfect design" comes from a company that seems to have been on a mission to remove pixels from the user's perception for the last 20 years.
First, Apple promoted skeuomorphism as the most visible difference from the pixelated look of Windows (OS X, 2001).
Secondly, they made pixel-perfect designs irrelevant by making a powerful, small handheld computer. It made the goals of pixel-perfect design futile and triggered the RWD trend instead (iPhone, 2007).
Thirdly, they successfully pushed Hi-DPI screens to the masses, which make the smallest elements on the screen too small for the naked eye to see (Retina, 2010).
I say it with all due respect. These are absolutely brilliant achievements. Still, some "Yeah, thank us for killing pixels" would be in order.
Correct me if I'm wrong, but Steve Jobs didn't believe in responsive webdesign and the browser on iPhone originally didn't support RWD. He believed people should access the full website and use gestures to zoom in and out on the website. Part of the reason apple.com was so late adapting to mobile breakpoints.
So saying that the iPhone was responsible for RWD is too much credit I think.
Apple invented the non-standard "viewport" meta tag and gave guidelines on how to use it [1]. The article that has popularized the term RWD mentions iPhone four times [2]. For me it is obvious that RWD started from the iPhone, but maybe our viewpoints (or viewports) differ.
I believe you are right, but on a different level. The iPhone started the mobile revolution, which inspired other phone builders. Where earlier we used to build m.website.tld websites, the bigger screens of iPhone-inspired smartphones allowed us to use the same website, with media queries.
Maybe it's because I'm Europe-based, but we didn't really develop for iPhones initially.