Windows 8.1 DPI Scaling Enhancements (windows.com)
90 points by barista on July 15, 2013 | 52 comments



I didn't see anything in there about the real problems with DPI scaling: most third-party apps break in odd ways. It seems that whatever layout system a lot of programs use somehow combines absolute pixel sizes with auto-scaled fonts and buttons, so you end up with dialogs getting clipped. It's not always noticeable or problematic at "125%" (I think some popular laptops shipped with this setting), but at 150%+, many apps break.

If Windows had an option to somehow detect this and let the app render unscaled, then apply interpolation, it'd be better than the current state of affairs.


It does, and that is the default way that Windows scales applications, unless they specifically declare in their application manifest that they support High DPI.
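For reference, the opt-in looks roughly like this (a sketch; the manifest element is the usual route):

    // Sketch (Vista and later): the usual route is the <dpiAware>true</dpiAware>
    // element in the application manifest; SetProcessDPIAware() is the
    // programmatic equivalent of that opt-in.
    #define _WIN32_WINNT 0x0600
    #include <windows.h>

    int main() {
        SetProcessDPIAware();   // "I handle High DPI myself; don't stretch my windows for me"
        // ... create windows, sizing everything from the real system DPI ...
        return 0;
    }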

Using High DPI myself, I have never experienced the application breakage everyone else seems to be talking about. (Unless you are talking about Windows XP.)


I'm running 8.1 on a 13" 1080p laptop (2010 Sony Vaio Z). First-party apps seem to be DPI-aware and scale fine. Some third-party apps are not DPI-aware and use the new pixel-doubling approach. The most surprising offender is Chrome, which hurts the most since fonts look grainy and blurry when pixel-doubled. Fortunately each individual app can be overridden: properties on the start menu shortcut / Compatibility / Disable display scaling on high DPI settings. After setting that, Chrome renders at 100% again, so some UI elements are too small--but at least its internal zoom mechanism works correctly for fonts on web pages.


One example app that misbehaves for me on a high-DPI monitor is Camtasia Recorder 7. Specifically, it records an area of the screen that is offset up and to the left of the area indicated by the displayed boundary rectangle. Ableton Live 9, as another example, has trouble tracking the mouse correctly after clicking and dragging on some controls.


I've actually found that there's a setting (in Mouse, I think?) about projectors and pixels that seems to alleviate this.


So, for example, uTorrent's add-torrent window gets totally busted. In this case, the devs opted in to handling High DPI themselves, then screwed it up? Sigh.


The OS supports it, but so many of those little custom controls people have written over the years didn't take it into consideration...


Probably one of the biggest complaints we get from our IT clients. Really frustrating for someone who isn't familiar with the issue, because all they can articulate is that they either can't fit anything on the screen (low resolution) or can't read anything. I always assumed it wouldn't matter what MS did because x, y, z 3rd-party application wouldn't implement it, but it's always been irritating that MS' own applications don't work correctly with the current options.


I've tried this out and it is a disappointment. 8.1 doesn't actually switch the rendering of the app to a different percentage scale when switching displays; it just scales the app rendered at the previous scale.

What happens is that a "target" scaling percentage is set: any monitor that roughly matches that percentage/DPI gets a 1:1 pixel mapping of how apps currently render at 125%/150%, etc. Monitors with a greatly different DPI (for example, a Surface Pro's internal screen) then get a scaled app: the app renders at 125% (the "target" percentage) and is then scaled down or up by the graphics card to the scale percentage of the display that doesn't match the target. This is never the clean pixel doubling or halving that OSX does, but always a blurry mess, whether scaling up or down. The taskbar is not scaled at the moment either; it renders at the target percentage on all displays, so you get a mini or a giant taskbar on the mismatched display.
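To put rough numbers on it (illustrative values only, not measurements):

    // Illustrative values only: the app renders once at the "target" scale,
    // and the GPU then rescales that bitmap for a display set to a different scale.
    #include <cstdio>

    int main() {
        double target  = 1.50;   // assumed "target" percentage
        double display = 1.00;   // assumed percentage of the mismatched display
        std::printf("GPU scale factor: %.2f\n", display / target);  // ~0.67, not a clean 2x or 0.5x, hence the blur
        return 0;
    }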

They would be much better off rendering everything at 200% and scaling down, like OSX.


Could you explain what you mean? In Windows, the GUI itself is scaled but it is still rendered at the native resolution of the display. Resampling virtual-resolution pixels to physical pixels is the OS X approach.

Edit: It appears that Windows applications that don't support the High DPI interface are rendered then resampled as you say. So it's up to the app developer to support scaling, although you can disable the app resampling behaviour on a per app basis too.


I expect this is a compromise to ensure that the layouts of older, non-scaling-aware apps don't break.


There was a good session at the Build conference apparently. http://channel9.msdn.com/Events/Build/2013/4-184


I don't find Windows 8 too bad as is -- I have it set to the true display resolution on my MBPr, but with font scaling (150%?). Works for most apps while some have tiny text but it isn't too bad. Everything's crisp at least, and all the Metro UI is fine.

That said, I don't use it too much, so I can't say how irritating it would be on a regular basis.


So it looks like they've done what they can without completely overhauling how they scale things. You can have different scaling values for different parts of your application.

What is the right way to solve this issue, keeping performance in mind, if you get to build everything? I'm pretty graphics-ignorant so I'm mentally stuck at 'do something with vector graphics'.


The default way this is done on Android is to have multiple bitmaps rendered at different pixel sizes for different DPIs (similar to the way mipmaps work in 3D texturing, if you're familiar with those); the OS then chooses the one closest to the display DPI and resamples it as needed to get the desired final size.

Eg:

You have some icon, you might provide bitmap versions of it at: 8x8, 32x32, 64x64, 128x128, 256x256 (the exact sizes depend upon the desired size of the image when displayed combined with what DPI ranges you want to support well).

Based on your layout, the final pixel size occupied by that icon control might be 200x200 on a high-res tablet, so the OS will choose the 256x256 bitmap and downsample it slightly to fit 200x200. The same icon area might be 60x60 on a phone, so it'll choose the 64x64 and downsample that one a bit to fit 60x60.
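A rough sketch of that selection step (just the idea; this is not Android's actual resource API):

    // Sketch of the idea only (not Android's actual resource-selection API):
    // pick the smallest pre-rendered bitmap that is at least the target size,
    // then downsample it to fit (fall back to the largest one if nothing is big enough).
    #include <cstdio>
    #include <vector>

    int pickBitmap(const std::vector<int>& sizes, int target) {
        int best = sizes.back();                    // largest available, used as the fallback
        for (int s : sizes)
            if (s >= target && s < best) best = s;  // smallest size that still covers the target
        return best;
    }

    int main() {
        std::vector<int> sizes = {8, 32, 64, 128, 256};
        std::printf("%d\n", pickBitmap(sizes, 200));  // 256, downsampled to 200x200
        std::printf("%d\n", pickBitmap(sizes, 60));   // 64, downsampled to 60x60
        return 0;
    }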

I haven't done much iOS programming, but from what I understand it works pretty much the same, except the sizes are more fixed at 1x/2x because there is less DPI variation across iOS devices.

This approach has downsides (it makes the assets larger since you have multiple copies of each one), but it generally works pretty well in practice and actually works better than vectors for some things. Vectors can scale up and down at will, but if they aren't heavily 'hinted' you can easily lose important details at small sizes; with pre-rendered bitmaps you can adjust for this ahead of time.


I don't know a whole lot about GUI programming, but how come resolution independence never took off? For example, "make this element some proportion of the screen width/height", "make this element exactly N cm/mm/inches/pica wide/high".


It wasn't made that way in the first place. Remember, Windows (which came out in 1985 or so) was designed to run at something like 640x350 with 4-bit graphics on an 8086. Windows 95 was ten years later and only required a 640x480 screen. Bitmapped graphics, supporting everything from 16-color to 24-bit color. Check out the BMP file format sometime.

Contrast that with something like NeXTStep, which evolved into OS X: it started out with display hardware of 1120×832 (although grayscale). More importantly, it used Display PostScript, which is vector-based. Even though it came out in 1989, only 4 years after Windows 1.0, the computers that ran it were much, much more powerful than the PCs of the time. The original NeXT computer was closer in performance to a mid-range Windows 95 machine (486DX/25MHz, 8MB RAM, 1024x768 graphics...)


Windows UI was supposed to be in "DIPs" since at least Win32.

http://msdn.microsoft.com/en-us/library/windows/desktop/ff68...

I guess the problem is that when 1 DIP = 1 pixel for almost everybody in almost every case, you don't notice when you mix them up.
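The conversion itself is trivial (quick sketch), which is exactly why nobody notices when the two get mixed up at 96 DPI:

    // A DIP is defined as 1/96 of an inch, so: pixels = dips * dpi / 96.
    // At 96 DPI the factor is exactly 1.0, so mixing DIPs and raw pixels goes unnoticed.
    #include <cstdio>

    int main() {
        float dips = 100.0f;
        for (float dpi : {96.0f, 120.0f, 144.0f})
            std::printf("%.0f DIPs at %.0f DPI = %.0f px\n", dips, dpi, dips * dpi / 96.0f);
        return 0;
    }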


It means everything in your UI pretty much has to be vectors, which is certainly possible (it's done in games) but UI designers and the tools they use generally aren't used to working that way. Many UI elements would have non-integer thickness and thus would either look blurry due to antialiasing or lumpy due to pixel fitting. And you'd have to deprecate pretty much all of the existing pixel-based APIs and then wait 10-20 years for apps to migrate off them.


On super high-resolution displays, would it really be that bad? I can't imagine rounding errors being that significant on a Retina-quality display. In fact, it was never a problem on 300DPI laser rendering back when I worked in the publishing world.

Am I mistaken?


WPF went that route by using vector graphics everywhere. It has the nice property that it scales fine, but often you can see small one-pixel jumps of elements, e.g. the thumb in a scrollbar or on sliders. This wouldn't matter at all on a 300-dpi screen but with 96 dpi it's visible and sometimes jarring.


We don't have super-high-resolution displays. A small percentage of users have moderately high resolution "retina" displays and everyone else has low res. One major benefit of the pixel-based 2x design workflow is that it produces good results on old low-res screens that are still in the majority.


To some degree I think you're correct, but until there are only retina displays on the market we need a solution that works well for both. Everyone looks forward to the day when designing a font doesn't mean spending half your time hinting for those pesky ~85ppi displays, but we're off to a pretty slow start. The first real step in my opinion is to bring real vector support to the web (now that it's reasonably well supported across browsers) but that requires designers to adjust their workflow (and preferably own their own retina displays to test with).


We won't have to make fonts for Retina - we already did; they're (usually) designed to be printed, which is much higher DPI than a Retina display. It wasn't until Windows XP that Microsoft said "Let's make fonts that look great on the screen, at low resolution".


Yes, that was my point (except it wasn't just XP; hinting was/is a PITA on any platform). These days we still have to make them look as good as possible on both, but if you're also targeting printed media then you're already set for when high-PPI displays are the norm.


I don't know much about Windows GUI programming, but from my user experience I think it's just hard (lots of things to consider).

I guess you have to deal with font sizes (handled by the system), your own constraints ("this button is half the screen", "this screen is at most 800px", etc.), and the user scaling your UI.

Before scaling was even an issue, I saw a lot of apps screw up the font and UI size pairing when launched in non-English languages with different default font settings or wildly longer text (the app's own translated text, that is). Throwing a "what DPI is my screen?" variable into the equation must make things that much harder.


Maybe because it is not a silver bullet? The viewing pleasure of 2cm on my phone != 2cm on my 52-inch LCD monitor from 2-3 metres away. Not that I have a 52-inch monitor.


That's why I mentioned relative dimensions as well as absolute ones.


I can't see a price for the 4k screen they refer to. Anyone know how much these cost?


ASUS's 4K screen is $3,500. The Sony one at Fry's was $4,995.


This is a real crock; it's terrible. Microsoft's own applications don't even scale properly - and fonts can't scale at all. I use a Surface Pro @ 1080p and two external displays @ 1080p, and fonts are blurry in apps like Windows Explorer, Outlook, and Word because the font scaling is so horribly broken. I don't understand why they can't fix this.


I don't think it's font scaling. It's a fundamental change to the font renderer. Basically, subpixel rendering is gone and it's all grayscale.

http://answers.microsoft.com/en-us/ie/forum/ie10-windows_8/w...

I've not played with 8.1 yet, but hopefully it's been addressed.


Subpixel rendering makes no sense on a tablet, where the direction of the subpixels changes from landscape to portrait mode.


Couldn't the subpixel hinting change too?


It could, but the point of subpixel rendering is increasing effective resolution, and for English text, horizontal resolution is more useful than vertical resolution. See: https://www.grc.com/ctwhat.htm.


It's not just English text, it's virtually every written language. There are very few that use a vertical orientation, and fewer still that use only a vertical orientation.

Hebrew, Georgian and Arabic all benefit equally from sub-pixel resolution. Even vertical Chinese would be improved by having more detail on each character.


What the hell is a "normalized 1-foot DPI value"? DPI is an absolute measurement.


I like how Chrome OS handles this problem. Ctrl+Shift+Plus to zoom entire OS!


<windows> + <+>

and put it in fullscreen mode


Heh. It works so very very badly.


The Windows high-resolution "support" is done in such a bad way; I think Windows has the worst possible type of support for higher resolutions of all the operating systems. They're only making some items "larger" to appear "normal" at the big resolutions, but all the other stuff doesn't scale. Plus, what are they going to do for 4k displays? Increase it to 300%?

They should've done it like Apple did it, and it would've been much more streamlined and would make a lot more sense. Here's how they should've done it.

With resolutions higher than 1080p you shouldn't actually get more density in terms of content per screen real estate (what's the point of that? 1080p makes things small enough as it is). Instead they should only support resolutions beyond 1080p that are exactly "double" (or 4x the pixels) of the lower resolutions. That way, those high-resolution displays can use the "effective" lower resolution.

So 2732x1536 -> effective 1366x768

3200x1800 -> effective 1600x900

3840x2160 ("4k") -> effective 1920x1080

This is the best way to jump to higher resolutions and easiest way to support them at the OS level, instead of these icon scaling "hacks" that Microsoft is implementing.


Apple was able to do it the way they did because there are so few choices in Apple hardware. Sure the "double-or-nothing" approach simplifies things but it's not a practical approach for large ecosystems like Windows or Android where the resolutions vary a lot more.


Arguably if Windows only supported double-or-nothing, the PC vendors would just start shipping appropriate screens.


But that would only work on new hardware, not on the millions of existing machines that people will upgrade. That means Windows would still have to support the old resolution model for older machines, and so the incentive for hardware makers to put in higher-resolution screens would be reduced, since they could get away with older, crappy screens.


If an OS dropped my 30" to 1280x800 I would never use it. And assuming everyone has a 16:9 display is a common mistake as well.

Telling people to double up or deal with shitty performance is a much worse proposition than what Microsoft is doing.


... except that Windows has supported resolutions higher than 1920x1080 for 15 years already. Should they remove support for 1920x1200 and 1600x1200 displays from future releases?


You obviously didn't even read the linked article since it has screenshots demonstrating that Windows has full support for DPI-based scaling of entire UIs, not 'icon scaling'.

Windows has had DPI-based scaling of user interfaces since Windows 95, where you would set your display DPI and all applications on the system would (theoretically) adapt. The problem is that app developers have historically been completely deficient in this regard; in practice they either hard-code pixel-perfect layouts (but don't lock the font sizes, so text gets cut off), or half-ass it and get the DPI scaling completely wrong by starting to implement it and then stopping. It has literally been possible for Win32 applications to do everything that an OSX/iOS Retina application does since 1995. This feature was even supported in Visual Basic!
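For anyone who hasn't seen it, the Win95-era mechanism is roughly this (a sketch; error handling omitted):

    // Sketch of the classic Win32 approach (error handling omitted):
    // read the DPI the user configured in Display settings and scale layout metrics by it.
    #include <windows.h>
    #include <cstdio>

    int main() {
        HDC screen = GetDC(NULL);
        int dpi = GetDeviceCaps(screen, LOGPIXELSX);   // 96 at 100%, 120 at 125%, 144 at 150%...
        ReleaseDC(NULL, screen);

        // A control designed as 75px wide at 96 DPI, scaled for the current setting.
        std::printf("button width: %d px\n", MulDiv(75, dpi, 96));
        return 0;
    }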

In Windows Vista, Microsoft responded to this by adding a new system where unless an application explicitly told the window manager 'yes, I'm actually DPI aware', the window manager assumes that the app will completely muck up DPI scaling, and it renders to a lower-resolution window buffer and scales it up so that text/object sizes are appropriate for your display DPI. Despite this, there are still applications that tell the window manager 'I'm DPI aware!!!' when they're not. Note that this scaler uses an actual scaling algorithm, unlike Apple's nearest-neighbor, so text scaled up in this fashion remains perfectly readable (albeit blurry), unlike the complete mess Apple turns Cleartype text into.

In practice the problem here is ENTIRELY developers and consumers, not Microsoft. Consumers buy (and continue to buy) displays that have resolutions that are not an even integral multiple of some other display resolution, and continue to buy applications that are not correctly DPI aware. Developers respond to this by continuing to ship broken applications that don't respond correctly to display DPI.

Microsoft could do whatever they wanted, including directly mirroring Apple's approach, and none of this would change.

Apple's approach only works because they have a complete monopoly on their platform and they use it to force developers to waste resources on whatever changes they introduce - a new approach to DPI awareness and rendering that requires introducing 2x versions of all your UI bitmaps, a new sandboxing mechanism and app store that requires it, a new UI toolkit, new font rendering APIs, etc. Usually Apple at least uses this power to improve things for consumers, but it's naive to look at how Apple handled the Retina transition and say 'if only Microsoft had done that too' - the Retina transition was incredibly expensive for developers and continues to be expensive for end-users (by making shipped applications larger and potentially slower and definitely more complex).

Don't even get me started on the blatant stupidity Apple's approach to Retina introduced into HTML5/Canvas/WebGL. getImageDataHD and devicePixelRatio, hurray!


Let's review the evidence: Windows apps that claim to support high DPI are mostly broken, but Mac/iOS apps that claim to support retina actually do support retina. If the retina approach is "incredibly expensive for developers", then how expensive must the Windows 95 approach be?


The difference is that Apple forced developers to implement it, so they did. Microsoft didn't force developers, so they didn't implement it. That's it.

The fact that it's optional means that the cost is something developers (and indirectly, customers) can CHOOSE to pay if it is worthwhile. Compare this to Retina, which is basically non-optional because Apple ensured that non-retina applications are an eyesore with reduced text legibility.

I certainly won't argue that DPI-aware programming is easy on any platform. But Retina is not some superlative panacea: It's expensive too, and it has really significant, notable downsides. Like how it basically ruined the rendering model for Canvas/WebGL.


Somebody should make Apple stop forcing developers to do things. That Apple made the choices very simple and deployed the hardware widely had nothing to do with it.

Apple forces us into the future while the Luddites kick and scream.


The problem of making stuff look properly sized across displays with various resolutions and densities has been solved for more than a decade by the game industry.

Is there a reason outside of legacy code base that everything in the OS is not vector based?


There are still enough situations where pixel alignment makes a noticeable improvement in performance and crispness.

Lots of displays out there are still 96 DPI.



