Z-Bar [1] is a fantastic free and lightweight solution I've been using for years. I highly recommend it for those using Windows 7, especially in the enterprise, as it requires no installation and is only 2 files.
I like this cost/benefit analysis. Often people forget that, when building a product, there are tons of desired features, each with different costs and different benefits. It's not easy to do the right thing.
They already do. Stardock, for instance, has built a pretty good business providing power-user enhancements to Windows (see http://www.stardock.com/products/?from=nav).
I have 3 monitors and use UltraMon as a utility to do this. I can totally appreciate that the population here is not the norm and that whatever metrics we see here are outliers.
In most cases I assume people use laptops with one monitor. The percentage of desktops with two monitors can't be too high. So when you combine laptop and desktop it lowers the total percentage. How many of us have a laptop in addition to our desktop that is never connected to another monitor ;)
My guess for why it made it into Windows 8 isn't that it suddenly became worthwhile at random; it's because a multi-monitor setup becomes more important with the new workstation/tablet hybrid tool that Microsoft is creating. I carry my Surface around, using it as a laptop or tablet, but at home or at the lab, it plugs into larger external monitors to enable maximum productivity.
Agreed, I use SysInternals Desktops for that since it's the only one that isn't implemented with a hack.
I haven't had any problems with it, but because of Windows' limitations it starts a separate logon session for every "desktop", so you can't move windows between them and so on. It's brilliant if you're constantly remoted into somewhere else: you can keep a full-screen RDP session on another desktop while using your primary one for other stuff and quickly swap between them.
> I would hazard that fewer than ten percent of users use a multiple-monitor system on a regular basis, so any benefit would have to be ten times as great as the benefit of features that have broader use.
What kind of logic is that? And how would you even mathematically quantify the "benefit" of such a feature, save through tedious usability studies that could only come after you've implemented it?
You need some sort of cost/benefit estimate to go on when deciding which features get included and which get cut.
One way: consider how many Windows users would switch OS's or not choose a Windows computer as their next purchase due to the lack of multi-monitor taskbars. I don't need any sort of study to say that it's probably not a very high number.
Single features don't make up my mind when choosing an OS. By that logic, the only deciding UI feature would be a mechanism to switch between tasks. I don't think anyone will stop using Windows because of the lack of widgets or of a find feature or of a crappy task manager.
You want a cost/benefit criterion? How about (number of people who care) * (how much they care) instead of just (number of people who care)?
I wasn't presenting the be-all, end-all criterion for determining which features to implement, just that there needs to be one in order to make a decision. Your example is likely better, and I'm sure Microsoft's taskbar team uses their own method:
"Part of the hard job of product design is deciding which 20 features you have the resources to pursue and which 180 to leave for next time. If you choose the right 20, people will say you're the best new UI feature in Windows, or even the best Windows 7 feature, and they won't mind too much that you didn't get to the other 180."[1]
I think single features can certainly sway a purchaser. If I like the OS X notification center quite a bit and its integration with the 3rd party applications I use a lot, then it may well cause me to lean toward purchasing a MacBook over a Lenovo. Of course they're all taken as parts of a whole, but it's still conceivable that people may not be able to perform their job/task in a way that they are accustomed to without X feature.
In addition, while some features may not sway you, it is not unlikely that you are an exception to the rule.
Even if you could quantify it, I don’t agree with the 10x ratio that the original author proposes. Early in the development process, you will be implementing “low-hanging fruit”: features that are easy and provide great value.
Lower priority features are lower priority because they provide less benefit, but that doesn’t mean that you should not get to them eventually if you want to improve your product. If you have a variety of proposed features, you’ll start with ones that provide 100x benefit (compared to some benchmark), move on to 50x and 20x and 10x and so on, and eventually you’ll get to features that are barely worth the cost.
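To make that concrete, here is a tiny back-of-the-envelope sketch of the kind of ranking being described (including the reach-times-intensity criterion proposed upthread): score each feature by reach times intensity, divide by cost, and sort. The feature names and numbers below are entirely made up for illustration.

    # Hypothetical prioritization sketch: rank features by (reach * intensity) / cost.
    # All features, reach figures, intensity scores and costs are invented examples.
    features = [
        # (name, fraction_of_users_who_care, how_much_they_care_1_to_10, cost_in_dev_weeks)
        ("multi-monitor taskbar", 0.10, 8, 6),
        ("better file-copy dialog", 0.90, 3, 4),
        ("virtual desktops", 0.05, 9, 10),
    ]

    def score(reach, intensity, cost):
        """Benefit per unit cost: reach * intensity, divided by development cost."""
        return reach * intensity / cost

    for name, reach, intensity, cost in sorted(
            features, key=lambda f: score(*f[1:]), reverse=True):
        print(f"{name}: {score(reach, intensity, cost):.3f}")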
And this is why Windows Phone is what it is. The only thing MS is actually interested in is what (they think) directly projects to sales according to analyses, which are undoubtedly mostly based on "automated feedback" from current users and other "big data". Where are the times they actually cared for the (vocal) minorities?
It can be argued that opinionated design is more successful than design by committee. I think the most effective approach is to begin with opinionated design prototypes, then triage and refine using A/B testing.
MS can be accused of relying too heavily on user surveys, making their designs a bland blending of averages that please nobody (see: the Office Ribbon). Thus the Ford quote gets trotted out, "If I had asked people what they wanted, they would have said faster horses."
The point is that MS are using a greedy optimisation algorithm with which they arrive at a local maximum that is well below the global maximum for quality of end product.
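For anyone who hasn't met the term, here is a minimal sketch of what "greedy optimisation stuck at a local maximum" means, using a toy quality curve (nothing to do with anything MS actually runs):

    # Toy illustration of a greedy hill-climber stopping at a local maximum.
    # quality() is a made-up "product quality" curve with a small peak near x=2
    # and a taller peak near x=8; the greedy climber only accepts better neighbours.

    def quality(x):
        return max(0.0, 4 - (x - 2) ** 2) + max(0.0, 9 - (x - 8) ** 2)

    def greedy_climb(x, step=0.1):
        while True:
            best = max((x - step, x + step), key=quality)
            if quality(best) <= quality(x):
                return x  # no neighbour is better: a (possibly local) maximum
            x = best

    peak = greedy_climb(0.0)
    print(round(peak, 1), quality(peak))  # stops near x=2 (quality ~4), never finds x=8 (quality 9)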
What do you mean? Am I supposed to be grateful for something that I have used on Symbian for years before?
I take it Android and iOS don't have it in their default mail software, but that's not something to be thankful for. Those are pretty basic features to be missing.
I mean the kind of feedback oriented design MS uses can easily generate a scenario where "Reply All" and CC are considered not mainstream enough to warrant entry to their phone OS.
I agree with your sentiment. It just feels we're dangerously on the edge of "casuals" (to borrow a word) dominating the aim of software design.
The fact that it's not built-in functionality is what makes a lot of people irate. The thing is, nobody would know what to do with it on Windows. Windows users aren't taught about multiple desktop sessions, virtual desktops, etc.
I forget where I read this, but the Windows team is certainly aware of virtual desktops and has made a conscious choice not to include them, even as an option.
The Sysinternals Suite has a utility for this: Desktops v2.0 [0].
I've been using VirtuaWin (http://virtuawin.sourceforge.net/) for a number of years to get virtual desktops on Windows. It's a bit of a pain to get installed & configured exactly as I want it, but it (along with selected plugins) gets awfully close to replicating the MATE / Gnome2 virtual desktop system.
What is amusing to me is that I started programming on a 20 line x 80 character terminal (Beehive Bee3), then got spoiled by a 25 line by 80 character terminal, and then one with multiple 'pages'. When I got to Sun and had all that space of a 'workstation' I could never imagine going back to a single terminal; then I got two monitors and couldn't imagine going back to one (although I operate that way in a pinch on some systems). Certainly there is always "something" to put up in the extra real estate.
At the moment I think my ideal is 3: a 2K landscape in the middle and two vertical 1080p monitors, one on either side. I'd really like something like a 2440 x 800 "strip" to put along the bottom for a high-res "task bar" kind of deal, but I can't find anyone with that particular glass and I don't want to glue together two 1280 x 800 panels.
One trick though is keeping the PPI the same (or close) on the monitors so that things don't distort weirdly when they move from one to the other.
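If it helps, PPI is easy to estimate from resolution and diagonal size before buying; a quick sketch (the panel pairings below are just example numbers, not a recommendation):

    # Rough PPI calculator: pixel density from resolution and diagonal size.
    import math

    def ppi(width_px, height_px, diagonal_inches):
        """Pixels per inch = diagonal resolution in pixels / diagonal size in inches."""
        return math.hypot(width_px, height_px) / diagonal_inches

    # Example pairing: a 27" 2560x1440 panel next to a 24" 1920x1200 panel.
    print(round(ppi(2560, 1440, 27)))  # ~109 PPI
    print(round(ppi(1920, 1200, 24)))  # ~94 PPI -> same-size windows look ~15% bigger here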
I've had multiple but I find I am going back to single monitors on all my computers. They're not small single monitors (27" or 24"), mind you.
I find that I've been suffering from information overload with multiple monitors and that it's been too hard to focus. I would always have my email open on another monitor and whenever I found myself waiting for a build, or stuck on a problem, I'd glance over to my email and lose myself in that for a while.
With a single monitor all the distractions tend to go to the back; while they are still easily available to me, I now need to consciously move to another task, rather than have my context switched by a sideways glance.
Monitors are getting large enough for many to be productive using just one.
A single monitor can also help one be more productive by not having always-on distractions in your periphery. Putting the inbox away is a very effective way to be productive. (There was an article about this on Slate but I can't seem to find it.)
I am probably in my inbox less than 1 hour a day...
I hate email and avoid it as much as possible so that is not a source of distraction for me.
Today I have 2 1080p 22in monitors, 1 1900x1200 27in monitor, and 1 1440x900 19in monitor.
The 19in monitor I use to display various monitoring tools and automation outputs.
The 27in is my primary work area.
One of the 22in monitors I use for media (Spotify) and reference/docs, so I can consult and/or update documentation while I work on a project.
The second 22in is simple overflow, a spot I can drag things to when needed. It is used less often; I could probably get by without it, but it is nice to have when I need it.
I can get by on a 27" display (full-resolution, not upscaled 1080p) alone, but for certain tasks having a second screen is almost imperative.
If you're doing development and need to see the output window, some logging, and are watching your performance monitoring application, you'll run out of real estate in a hurry.
I, on the other hand, probably couldn't work with more than two screens. I always had two screens at my last jobs, but I always ended up actually using only one of them.
First I had the typical setup of putting the editor/shells on one screen and the browser on the other. But I found that moving my head and refocusing all the time was more of a hassle than just cmd+tab. After that, I just put things that self-update and that I can look at once in a while on the second screen, like Mail, Twitter, etc.
Now, since I rarely work in any kind of office anymore, I'm glad I never really got into the multi-monitor thing :)
If I did more design work I'd probably have use for more than one monitor. But as a programmer, having an editor with good split-screen and code/project navigation support is way more important to me than the size or number of monitors.
Do you mean ignoring laptops? Because I believe the most recent statistic I saw was that 60% of users still do a considerable amount of computing on a desktop/laptop.
I like the one comment lamenting people saying, "It can't be that difficult to implement." If you're not the person who is in the code and doing it, you don't know. You can't say what is hard and what is not hard. You don't know what the code currently does, what systems it makes available to support the change. Perhaps the feature itself is small, but getting things in place to support the change is not.
It may not be easy, but there are certainly several add-on products that support this feature, and they have nowhere near the same level of resources and know-how that the Microsoft team in charge of the taskbar has. So it seems unlikely that "hard to implement" was a primary reason for not implementing this sooner.
But that certainly ties into the cost-benefit analysis. The small percentage of users who would want such a feature are probably also the users most capable of finding those 3rd party tools. So the number of people who would actually benefit from implementing this feature natively is incredibly small compared to the cost of implementing it, even if the cost isn't that high.
I deal with this with my client all the time. He thinks adding a column to a report is "easy" because "it's just one thing". He thinks changing a constant factor in an equation is "hard" because it dramatically changes the output.
It's the complete opposite. Maybe the data he wants in the column isn't being tracked. Maybe it's not even very well defined. But he is like all users: all he knows is the output; there is no intrinsic knowledge of the input or the process.
I haven't had an old school desktop[1] in a long time. Laptops have been it for me for years. And I've always had an external monitor attached to it. I find the multi-monitor taskbar in Win 8 (at home) to be pretty useful. I've tried several apps to add that to Win 7 (at work) but they all sucked. So I gave up.
[1] Technically, I've never put those computers on the desk... as it has always been under the desk. It seems strange to call those desktops but that seems to be the standard name for something that is not a laptop.
Edit: Yes, I should have clarified that I am fully aware of computers going on top of the desk. I even had one that was meant to, but I used a stand to hold it upright and placed it on the floor (I guess like a tower). That was back when it had floppies and not a CD drive, so it worked out just fine. I was just saying it felt weird for me to call my old computers desktops since I never put them there. Also, there are a good number of systems that are not really meant to be put on the desktop but are still called desktops to separate them from laptops... which are more often on top of my desk than on my lap. :)
...and then a manufacturer (possibly Compaq or someone more esoteric like Mannesmann Tally), came out with a Mini-tower design and a few arrived at work. Seeing how 'cool' this new arrangement was, some of our engineers took their IBM PC units off their desks and sat them upright at the side.
A few weeks later, the IBM PCs started to die because the retrofitted 5.25" full height hard disks (probably 10-20MB!) began to seize up - the spindle bearings had 'worn in' in one orientation and didn't take too kindly to being tipped on end. Sometimes a good whack would get things moving - if not, it was time for a new (and very expensive) hard disk.
Even many low end business desktops today are designed with the intention to be placed underneath or behind the monitor. It tended to be the big, clunky workstations or consumer PCs that were relegated to underneath the desk.
If I had to guess the reason, I would say that the new Windows devices are going to include many more small screens, which MS forecasts will make secondary (bigger) displays much more common than with traditional Windows PCs.
This was an issue for me this morning when my recently installed copy of UltraMon had expired.
Instead of purchasing a license (which I would be happy to do, but cannot on this workstation, as it belongs to the company and my request was denied), I was introduced to an open-source alternative here: http://sourceforge.net/projects/dualmonitortb/
It's not as feature-complete as UltraMon, though it provides me with a working taskbar on my second monitor.
Anyone on HN know of more software to extend the taskbar? Open-source, free or commercial.
Buy the license, then crack UltraMon. This way you have what you want and your karma is clean. :) I used to use UltraMon on my desktop all the time, before moving to OSX. I find it preposterous that a multi-monitor taskbar is so hard to implement that it wasn't worth doing until Windows 8.
Your employer allows you to install UltraMon on their workstation, but does not allow you to pay for a copy? Or are they willing to install freeware, but not commercial tools, even if you pay for them? Or do they deem UltraMon detrimental for your productivity?
I used Actual Tools' Actual Multiple Monitors on Windows 7 and it has some amazing features.
Upgrading to Windows 8, I stopped using that tool and I am (mostly) satisfied with Windows 8's support for multiple monitors. But I still miss a lot of features. The one I miss the most is the system tray on the secondary monitor. For some reason it was very convenient to have it, and to launch applications from the system tray on the secondary monitor itself.
I was actually just thinking about this yesterday. I finally took the leap and got a second monitor back in the winter because I found doing school work (it was my first semester) tedious on one screen. Type, alt-tab to look at info, alt-tab to another source, alt-tab to type: too much. Now I can't imagine going back to a single monitor and wish I could have a second one at work to help with my productivity here as well.
I've been using Display Fusion [1] for years, and it's a great multi-monitor taskbar, with lots of other useful enhancements - hotkeys, wallpapers, window management, etc.
Multi-monitor support (or lack thereof) is one of the things that makes me cringe every time I'm dual-booting Linux/Gnome3.
Linux: That question is impossible to answer in general; it depends on the desktop environment. By default, no, but if you put in the time to set things up, most of the time yes.
KDE4 - Yes, but you might have to fiddle a bit. Doable.
Gnome 3 - Not really
LXQT - Yes
i3 (Tiling Window Manager) - Yes, perfectly, but I had to set everything up by hand. I love i3!
KDE has always supported as many taskbars as you want. In Windows you are limited to 1 taskbar, or 1 taskbar per monitor in Win8, and it is very limited; in KDE you can have 2, 3, 4 or more taskbars on every monitor and can put anything you want in them.
Gnome2 was similar.
Gnome3 does not have a taskbar at all; there is an extension that kinda creates one, but the workflow for Gnome3 is taskbar-less. The biggest problem with Gnome3 is a bug that has been there for a LONG time: when you have stacked monitors the window manager gets all screwed up, so you can only have monitors configured in a straight horizontal configuration.
Cinnamon only supports a single taskbar, which is one of my biggest complaints with Cinnamon.
Unity I believe is single-taskbar only as well, but I have very limited exposure to the Ubuntu desktop.
i3 is indeed the best of the bunch as far as tiling WMs go on Linux (IMO of course, some prefer Awesome, XMonad, etc.).
As far as setup goes, i3 is an absolute cakewalk (relative to Awesome et al., where you have to script your way to Nirvana): just edit the provided config to your liking (yes, if you want conky piped into the i3 status bar you have to do a little homework, but otherwise it's ridiculously straightforward).
If you prefer a full-blown DE you can actually wrap a tiling WM over the DE (i.e. they can work together), so the answer to the OP is an emphatic yes, any Linux can do the trick.
Does i3 support each monitor showing an independent workspace like XMonad does? I can't say I'm the world's biggest fan of XMonad, which tends to be a pain to configure, but I just can't live without being able to have each monitor acting as an independent entity any more!
> Does i3 support each monitor showing an independent workspace like XMonad does?
How do you mean? That sounds like basic TWM functionality. I know with something like Compiz when you switch to a workspace all of your monitors switch at the same time, which is kind of pointless.
i3 gives you 95% of what you could do yourself with Awesome, XMonad et al., just without having to do much of anything beyond editing a config and choosing which apps you'd like bound to which workspaces, which ones should go to the scratchpad (e.g. Skype, VLC, etc.), which ones should be floatable and so on.
The ability to tab tiles is probably the killer feature of i3: if you do sysadmin work, being able to break a set of 20 terminals into a quadrant of 5 tabbed terminals each, all on a single screen, is pretty much a dream ;-) Furthermore you can tile vertically and horizontally within any given region; it's turtles all the way down...
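For the curious, the per-app/per-workspace setup described above really is only a handful of lines in the i3 config; a minimal sketch (the output names and window classes are examples, check your own with xrandr and xprop):

    # ~/.config/i3/config snippet (illustrative; adjust outputs/classes to taste)

    # Pin workspaces to specific outputs so each monitor keeps its own workspaces.
    workspace 1 output DP-1
    workspace 2 output HDMI-1

    # Send particular apps to a workspace, the scratchpad, or make them float.
    assign [class="Firefox"] 2
    for_window [class="Skype"] move scratchpad
    for_window [class="vlc"] floating enable

    # Status bar; pipe i3status (or conky) into it.
    bar {
        status_command i3status
    }

    # Tabbed layout for the current container (handy for stacks of terminals);
    # $mod is defined in the stock config, usually as Mod4.
    bindsym $mod+w layout tabbed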
I love OSX, but the jumping is a very new behavior and the application specific parts of the menu bar aren't even a good idea for large monitors (let alone multiple monitors).
Because they are complaining about factors that an Open Source approach would mitigate, like development resources and targeting features that make the most money as opposed to what benefits the community.
If Windows were Open Source, this would have been addressed a long time ago, as this feature is desirable to what one would consider a power user. Advanced users are also the ones who roll up their sleeves and will fix a problem if they can, especially if no one from the parent company is making any effort to do so. But, thanks to Closed Source protectionism, they can't. And because their value to Microsoft is the same as any other user's (they pay the same license fees as anyone else), the voice of the power user is drowned out and their concerns are never addressed.
This is just one glowing example of Closed Source working against the betterment of a platform.
Seriously, how can one NOT reach the same conclusion?
Multiple-monitor features are not great across a wide range of open source OSs, even though developers, who make up a large share of the open source OS user base, are heavy users of multiple monitors.
I actually just noticed the menu scaling feature (new in 14.04) over the weekend. And it is "multi-monitor" in scope as it can be defined per monitor as opposed to globally.
I doubt that feature was approached with a "This better make up for the Ubuntu phone!" mindset.
Some features that may be lacking are ones UltraMon offers, like menu bar buttons to send the window to another monitor or span the entire set of screens, but now that I run a 3-monitor setup with two in portrait mode, that sort of feature doesn't really strike my fancy. Plus dragging works.
Or perhaps it's because the Macintosh in 1987 was an almost-entirely closed ecosystem with Apple dictating all the terms while the "PC" was an environment where dozens if not hundreds of vendors might have to agree on a specification that could make something like this work.
Yea, in particular NuBus had auto-configuration from the beginning, while on the ISA bus fixed port addresses were common. I think IBM supported multiple XGAs on the MCA bus.
In '85 my father had two monitors on an IBM AT running AutoCAD. One was for the image and the other was text-only, where the commands appeared as he typed and output was printed.
[1]: http://www.zhornsoftware.co.uk/zbar/