Just tried this but got "this device needs iOS 15 or later" when trying to install the Microsoft RDP client, which isn't available for my old iPad. So I guess you can't actually use an old iPad for this. But maybe I can find another RDP client that will work.
Do you have the RD Client app installed on an iOS 15+ device? If you ‘own’ the app (and the developer allows it, I think) the App Store will let you install the latest supported version on an older iOS. I was able to install RD Client on my iOS 10 iPad that way a little while ago, but maybe something’s changed in the meantime.
I've used Weylus [0]. It works over LAN and lets you control the mouse from your tablet. It's sometimes laggy, but you can configure the resolution so it doesn't use too much bandwidth. I can't really speak to its stability, though; I haven't used it on a regular basis.
If you are on Windows and have extra laptops or devices hanging around, SpaceDesk https://www.spacedesk.net/ is a great free app (not open source). I use it on my Windows dev machine (WSL2 FTW) and use old laptops as external displays. It works well even over WiFi.
Thanks, I just got SpaceDesk working on a cheap Amazon tablet over USB-C (after realising I had to set PTP mode on the tablet...)
Should work really nicely as a second monitor when travelling, I have it on 60fps and high settings and the latency is barely perceptible.
I was excited seeing iOS 9.3+ on their requirements listing, but after digging my useless but 100% functional iPad 2 out of storage it won't install the app. :(
I do use the built-in iPad-as-a-second-screen feature in macOS (Sidecar) with a still-supported iPad on occasion, and that works quite well.
It would be better if the LVDS ribbon cable connectors for all of these devices were more standardized, so you could just buy an adapter to HDMI/DisplayPort. Such adapters actually exist, but AFAIK there isn't just one LVDS ribbon cable standard, or even close to one.
Unfortunately, I find systemctl hard to type. If you start/stop services somewhat frequently, I recommend this alias:
alias sc='sudo systemctl'
This has the nice property that it mirrors the "service control" (sc) utility in later versions of Windows NT that I grew up on. It should work in bash and fish.
I also have these for when I'm doing service development, because many of the subcommands start with 'st*' and having to change the second parameter each time is annoying. These work in fish, but are easily ported:
function sce --description 'systemctl stop # end'
sc stop $argv;
end
function sci --description 'systemctl status # info'
sc status $argv;
end
function scs --description 'systemctl start # start'
sc start $argv;
end
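For anyone not on fish, here's a rough bash/zsh port of the same helpers (a quick sketch, adjust to taste; they call systemctl directly so they don't depend on the alias being expanded inside the functions):

sce() { sudo systemctl stop "$@"; }    # end
sci() { sudo systemctl status "$@"; }  # info
scs() { sudo systemctl start "$@"; }   # start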
On a similar note, you can use your old phone as a Streamdeck alternative using TouchPortal [1]. It's not free, but it won't cost you much and it works surprisingly well.
This seems to be a free, open source software that does something similar: https://stream-pi.com/
According to the video I'm currently listening to, it only has a Linux client at the moment, but it supports both Windows and Linux hosts.
Edit: I've played with this a bit, with a Surface Pro running Linux as the client and Windows as the host, and the thing holding it back at the moment is the lack of plugins. I use the Stream Deck to manage my gaming sessions (Steam, Discord, Voicemod soundboard and voices, occasionally OBS, game launches, etc.) and, as-is, for my purposes this can really only manage OBS and things that have configured hotkeys.
I'd even go one step further: we should have had a standard communications protocol like TCP for all devices. So a display would show up as just another device that we could use to read/write bytes. All devices would have a standard queryable HTTP/HATEOAS self-documenting interface. And HDMI/DisplayPort or USB A/B/C/.../Z would all use the same protocol as gigabit ethernet or Thunderbolt or anything else, so the bandwidth would determine maximum frame rate at an arbitrary resolution. We could query a device's interface metadata and get/send an array of bytes to a display or a printer or a storage device, the only difference would be the header's front matter. And we could download image and video files directly from cameras and scanners as if they were a folder of documents on a web server, no vendor drivers needed.
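To make that concrete, here's a purely hypothetical sketch of what talking to a display might look like under such a scheme. None of these endpoints or hostnames exist; curl and jq are just stand-ins for "any client that speaks the common protocol":

# hypothetical: ask the device to describe itself
curl http://display.local/.well-known/device | jq .
# hypothetical: push raw pixel bytes at it, no vendor driver involved
curl -T frame.rgb 'http://display.local/framebuffer?mode=1920x1080'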
There was never a technical reason why we couldn't have this. Mostly Microsoft and Apple blocked this sort of generalization at every turn. And web standards fell to design-by-committee so there was never any hope of unifying these things.
Is it a conspiracy theory when we live under these unfortunate eventualities? I don't know, but I see it everywhere. Nearly every device in existence irks the engineer in me. Smartphones and tablets are just the ultimate expression of commodified proprietary consumerist thinking.
In fairness, there are standardised protocols for a lot of these things already, even if they're not all part of one giant meta-protocol. Cameras in particular have mostly appeared as a folder full of files, with no need for special drivers, for something like 20 years.
There's definitely no need to invoke a conspiracy for the lack of 'one protocol to rule them all'. It's often hard agreeing on a standard even for a relatively limited topic - trying to agree on one for all electronic communications for all devices is probably impossible.
The meta protocol exists! Sort of. Check out the USB-C specs, which tried to answer a ton of this. It’s taken years for power delivery to reach the point where I don’t feel compelled to carry a USB-C power meter to check cables and chargers in the wild. My Switch still requires some out of spec signaling to charge/dock properly.
Meanwhile, half of the stuff I get off AliExpress only charges from A to C cables due to a missing resistor.
I don’t think the markets (yet) incentivize full implementations. It's like how, when my mortgage gets resold, autopay will only transfer over if it's once a month; anything more complex and I have to endure a new account setup and a ton of phone trees. Same with paperless settings. The result? I just live with the MVP.
> There was never a technical reason why we couldn't have this. Mostly Microsoft and Apple blocked this sort of generalization at every turn.
On the contrary, Microsoft tried really hard with UPnP/PnP-X/DPWS/Rally/Miracast*/etc but nobody was interested.
*BTW any Windows 10+ device can act as a Miracast sink (screen) so you can link Windows laptops/tablets as extra screens without any additional software.
Extending your analogy, some devices need the equivalent of UDP in order to function within the size/power envelopes that make them useful; think Bluetooth vs. the nRF24L01+.
There are standards like this in highly interoperable systems, but there's a cost paid. USB-C power delivery negotiation (beyond the very basic 5V/3A resistor that people omit) is roughly as complicated as gigabit Ethernet. That compute has to come from somewhere, and it turns out customers won't even pay for that 5V/3A resistor; they'll just use A-to-C cables and replace it when it "won't charge" from a compliant charger. :) The average person probably only cares that USB-C can be flipped and that the connector feels less brittle than microUSB.
UPnP exists. Lots of what you describe exists. Between bugs in implementations becoming canon and a lack of consumer interest, no real conspiracy required. At least smartphones and tablets are trending in a good direction - Apple’s latest supports basic off the shelf USB-C Ethernet, displays, hubs, and so on.
Agreed in general. I wouldn't stop anyone, but having my monitor traffic go over the network would lead to a lot of congestion, especially over wireless. I prefer a separate cable, as the grandparent alluded to.
You can plug a USB HDMI capture dongle into tablets and do this.
Any webcam viewer would probably work to view it, though there are dedicated apps intended for this, like https://orion.tube/ on iPad. I know there are options on Android, but I don't have a modern Android tablet to test them.
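For what it's worth, these cheap capture dongles enumerate as standard UVC webcams, which is why generic viewers work at all. If you have a Linux box handy, you can sanity-check a dongle with ffplay before trying a tablet app (/dev/video0 here is just whatever device node your dongle shows up as):

ffplay -f v4l2 -video_size 1920x1080 /dev/video0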
Do you know why that app doesn't work on the iPhone 15 Pro?
I don't have the iPad, but I just recently got the 15 Pro, and it's able to do a bunch of things via the USB-C port (wired Ethernet, SD card reading, driving a Pro Display XDR, etc.), but I wasn't able to do anything like what that Orion app is showing.
I was thinking of pretty much the same use case as shown in the app, where I could plug in an external camera and use the phone as a high-resolution / high-nit viewer display. Are these APIs only on iPadOS because the iPhones are missing some required hardware?
I know, I'd love to use my phone as a display via capture card so I don't have to carry a portable monitor to troubleshoot headless boxes.
The developer says the 15 and 15 Pro are only missing software, the hardware is capable:
> I’m sad to say that we’ve confirmed with Apple that it will not be working with the iPhone 15. But this can be fixed in software, so feel free to file a feedback request for UVC support on iOS!
Genuinely curious, but why would people put two displays side by side so that their neck is always bent to one side?
That looks like a really tiring setup. I never do that; I put the main display (an external monitor for my laptop) directly in line with the keyboard and my laptop off to the side, so I'm facing forward naturally.
In France, my workplace-safety auditor (the OSHA equivalent) actually requests that if there are two screens, they be placed with the centerline in the middle (I bet it's a clerical error, but it's funny to see clerical errors that recommend the opposite of the healthy solution).
> I also never have enough screens and never know where to put my terminal when I need to tail a log or something
What I do is just not think of the terminal as competing for screen space. My terminal is always full screen, and a single "hotkey" toggles between the world of GUI apps and the terminal world. Then you can divide up the terminal however you want; I use tmux.
I've been doing this for more than 10 years: initially with iTerm2, which has a built-in "hotkey", and now with Alacritty, using Hammerspoon for the hotkey. (The hotkey is sometimes called a "visor" key in this context, which I think comes from the drop-down consoles in first-person shooter games.)
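If anyone wants to try the toggle idea on macOS without pulling in Hammerspoon, here's a bare-bones sketch you could bind to a hotkey with whatever launcher you already use. It assumes the app/process is literally named "Alacritty", and System Events will need automation permission the first time:

#!/bin/bash
# "visor" toggle: hide the terminal if it's frontmost, otherwise bring it forward
front=$(osascript -e 'tell application "System Events" to get name of first application process whose frontmost is true')
if [ "$front" = "Alacritty" ]; then
  osascript -e 'tell application "System Events" to set visible of application process "Alacritty" to false'
else
  open -a Alacritty
fi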
- htop/nethogs for the system in question
- the same for any intervening boxes
- logs for the middle boxes
- a web browser for reference lookups
- a window for manpages / script noodling
- a window for debugging
- vim
- throw in some VMs, and those hosts may need to be factored in too
- frigging email/chat/calendar clients if working with other people.
Yes, I can technically do all of those with one screen, and some screen/tmux magic. Fuck no, I do not want to. KVM can get you some ergo for mux'ing multiple boxes to a single screen. However, I'd much rather have multiple screens to partition my working set across.
If you're on a recent macOS + iPad, there's Universal Control[0] (I use this as a way to have chat/mail on a second monitor). If you don't mind some noticeable latency, you can use it as a second display via Sidecar[1]. Finally, you can do the same thing described in the article with any terminal emulator app and SSHing into the remote system (I've had luck with Prompt[2], which is available as a one-time $15 purchase).
From the README, their eventual plan for the project is to serve the client through the web browser, which would mean that almost all tablets would be supported.
If you're referring to the outdated certificates, I installed Let's Encrypt's ISRG Root X1 Certificate onto my old iPad 4 and that seems to have taken care of it. Local sites served over HTTP never had any issues.
I appreciate the effort developers put into writing these apps.
On a side note, I wonder how much damage a back door in one of these "harmless" apps could do. It would have control of the tablet and of every computer the tablet was connected to.
When I had to work on my desktop but also needed to watch some educational videos on the side, I just used https://remotedesktop.google.com/
Since the videos were web based it worked quite well.
If manufacturers released enough details about their devices and drivers, then unlocked the bootloaders, we could do a lot more things than a 2nd monitor with old tablets.
There are piles of them gathering dust in drawers, or worse, sitting in landfills, because of forced obsolescence.
Windows needs to vastly improve its multi monitor support. I have a 3070 which supports up to four monitors, but getting them to play nicely in Win11 is a pain. So much so that I just use DisplayFusion instead.
A long time ago I used to use Synergy, and someone had written either a Synergy client for Android or a CyanogenMod variant with one built in. I don't know where all of that stands now, after Synergy imploded and CyanogenMod imploded.
It's one of those 'because I can' hacks. Perfectly fine old 15-19" LCD monitors are ~$10 at Goodwill-type stores, and free if you ask around relatives/friends for old gear.
Nice. I just dusted off my wife's Kindle in an attempt to repurpose it into a weird 'I'm in a meeting' sign, and I may end up looking at your project more closely now. Much obliged!
I should really get on board with this. Instead I have GPUs, laptops and tablets just gathering dust because "one day I might need it"! Unfortunately my family motto appears to be buy high sell never.
It seems to be pretty stable now. I've been using an iPad Pro as a 2nd monitor for a MBP while working remotely over the last year without any issues (8+ hours of daily use).
I frequently use it on a non-Pro iPad, with 3 different MacBooks over the last few years, both over wireless and over cable, and it doesn't reliably stay connected for more than an hour at a time.
CoCalc uses a websocket and xterm.js to implement terminals on a remote server. Each terminal session corresponds to a file (with extension .term), so multiple clients can open the same session by opening the same file. If you type in one session, then all sessions will see the typing at the same time. (Disclaimer: I wrote this. It's way too heavy for this use case, but it might be an amusing demo or proof of concept for somebody to play with before writing something new.)
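Not the same thing as the websocket/xterm.js setup, but if someone just wants to see the "every client mirrors the same session" behavior locally before building anything, tmux already gives you that:

tmux new-session -s shared    # terminal 1: start a named session
tmux attach -t shared         # terminal 2: attach to the same session; keystrokes show up in both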
What resolution & DPI they are is, I think, far more important than how many displays you have.
I have 4. 1x 1440@27", 1x 1440@24" and 2x1080@24". If I had known 1440@24" would die out, I would have bought 3 of those instead.
For me the ideal would be a 16:10 24" screen with the same density as the 1440 16:9 models. It's the perfect size & resolution for desktop use in programming / engineering. I'd buy 3 of those, but they don't exist from reputable brands.
I don't want a single ultrawide because I like the (narrow) physical borders. It lets me organize stuff just how I like it. It also makes working with different sources easier. My desktop is plugged into everything, but I can put a laptop or embedded board onto one of the side monitors if I need to.
How I've set up mine:
- Middle 1440: Main work, usually fullscreen IDE with 2 columns of files open.
- Left 1440: Documentation, usually 2 windows side by side.
- Top left 1080: Media, usually in the background. Needed chat programs (different customers use different tools) side by side.
- Right 1080: JIRA, task lists, notes, research, running tests, running instances of programs being developed, ...
This avoids me having to use virtual workspaces to layer context. It's like a great big tool wall in a workshop:
The idea is simple: First Order Retrievability. That is, you should never have to move one tool to get to another. That in turn affords the fastest, most efficient way of working.
~Adam Savage
I'm doing some light gamedev, and with two 28" screens I feel like I could use a little bit more screen real estate.
The situation where this still feels lacking is when I'm trying to solve a problem and have a 3D game view, source code, object list and properties, debug output, debugger (watched variables, call stack) on one screen. Then on another screen I'll read documentation of whatever I'm trying to fix.
Productivity clearly jumped when I added the second monitor, and I think I could still get some boost by either having larger monitors, or perhaps one bigger curved one with two monitor inputs.
Also games. 3x 24" screens felt like the best balance to me. I had 2x 27" and 1x 24" for a while, but I dropped back to 1x 27" and 1x 24" and prefer it. That's what I roll with these days.
I agree. My opinion is that once you've trained yourself to use virtual desktops efficiently, multiple monitors become more of a hassle than a benefit.
I think multiple monitors is the solution for people who would rather solve the problem by spending their money instead of the effort it takes to configure and become accustomed to switching between virtual desktops. Given that it is a strict biological limitation that the human brain can only focus attention on one thing at a time, I don't believe there is any valid argument for why moving your eyeballs between physical monitors is any better than hitting a key combo to switch between virtual desktops on a single monitor once those key combos have become muscle memory. Additionally, the number of physical monitors you have is limited by how much money you have to burn and how much physical space you have to place them, whereas virtual desktops are theoretically unlimited.
There are some things that don't need to be actively looked at most of the time, but need to be visible so that you know when something happens that you do need to pay attention to. You could do it by polling (put it on a virtual desktop and switch to it every so often), but that adds latency and can be even more distracting than having it visible in the corner of your eye. Think of things like Element or Slack, or a dashboard that tracks bugs/issues/alerts.
Then there are reference displays that you look at on demand. Most of the time switching virtual desktops is good enough for this, but not if you're following along with a sequence while actively working.
Then there are things that are just big. Perhaps you're displaying an autogenerated graph, or you're using an information-dense tool (maybe with multiple relevant layers).
Not to mention wanting to consult things while on a video call, which constrains the screen to use based on camera positioning.
I very actively use virtual desktops, yet I have two external monitors in addition to my laptop screen. Most of the time, I really only make use of one of the external monitors, but situations arise that require both. They arise frequently enough that I notice the lack (eg when I'm fighting with my configuration and only one is working, or I've loaned one monitor to someone else). And when I'm mobile and down to just the laptop screen, I definitely notice and even adjust what I'm working on to avoid losing productivity.
When I was a techie I tried to be focused on one thing at a time as much as possible. Still liked two screens though!
In many other roles though, having your email and your working document open, or having excel and PowerPoint open, or help docs and your code, or the operational plan and the server terminals, et cetera, are massive efficiency multipliers.
Basically I'm at a place where one monitor feels claustrophobic, especially if it's just the teeny laptop monitor. 2 are enough. 3 is nice. I wouldn't know what to do with 4 32" ones either!!
You categorize your screens. One screen for dev work, one for communication, and one for documentation/browsing. That way you can alt+tab between your primary work tasks with a tiny eye movement.
I'm in the same vein, but more film/video post-production in general. Most of the time, in addition to however many monitors are attached to the computer, there is at least one reference video monitor (that can be properly calibrated) that only receives a video signal from whatever software is being used. With only 2 computer screens, one screen has my timeline and preview windows; the other monitor has all of the bins, effects controls, and other various windows. If I'm in a real edit bay with dedicated scopes I'll prefer those, but if I'm slumming it at home I'll have to make room for them on one of the monitors too (usually tabbed behind the source monitor).
I think that's good, because you can manipulate the second screen without juggling the mouse pointer.
I think those are the best use cases; input becomes a much greater bottleneck with additional screens if it's limited to a keyboard and mouse modifying those windows.
Most of the time I'm using 3: 2 big screens (often browser on one, IDE or similar on the other) and my laptop (usually terminal, or Slack, or a similar auxiliary app). It feels no more complicated to me than swiping between phone apps, and definitely simpler than someone with a carefully curated WM setup.
My screen size has gone up over the years, but that's more a matter of aging eyes than information density. :-)
Smaller Monitor: Comms (email, calendar, slack, etc) -- often times I have this vertical (top email/cal and slack below) and it doubles for viewing dashboards for stats during troubleshooting.
Bigger monitor: Focus work (terminal, development env, etc) - normally split in 3 columns
I think a lot of this depends on how you arrange your windows.
I've used a bunch of monitors in the past, but found that my neck started to hurt after looking to the side too much. And having the bezels right in the middle of your view makes the most valuable real estate effectively unusable (unless you have 3!). 4 32's would be way too much for me, no doubt.
Having a single widescreen monitor has been better for me. Most of the time I'm not maximizing its use, but when I want to combine a bunch of views at once, it's quite valuable. Like when I'm running a performance test while keeping tabs on a bunch of monitoring.
I think you're right that virtual workspaces are great, especially if you dedicate them for discrete purposes.
I have my primary display in the center, directly in front of me. Whatever needs my primary focus for my current task goes there... Outlook for email, vscode for code, Terminal for admin, a web browser when web browsing, etc.
To my left is for monitoring things, previewing things, and reference. Browser for checking changes to code, logs for monitoring changes to system, documentation for thing I'm working on, etc.
The second display to my left allows my peripheral vision to monitor things for changes without diverting my focus, and helps me keep documentation or source material for comparison handy without having to switch away from the thing I'm working on.
The result avoids a bezel in my direct field of view, avoids strain and RSIs from awkward posture, and, incidentally, kinda degrades gracefully when I'm at home with only one display or traveling with only my laptop's display.
I've had a single monitor for a long time, but I've recently come around to dual monitors. It just makes working with additional information on the second screen so much easier.
I do spend more time shuffling windows around now, though.
I rarely feel a need for even two monitors unless I'm doing GUI development. Much of the time I just work off my laptop directly, not plugged in to anything (probably should knock it off for ergonomics reasons, though....)
Totally off-topic, but while reading this I was thinking "that is exactly what I would say". Then I saw your username... it looks like we share not only a taste for monitors but also a surname!
I'm more of the 75,000 tabs faction, which is probably why I use 3 monitors. I prefer to have all the windows I'm actively working in open in parallel. With one monitor, the handling is too fiddly for me and the windows are much too small. If only one thing is in the foreground, I sometimes lose the context or the constant jumping back and forth annoys me.
Edit: Just looked it up; there were like 30 tabs ;) But also multiple browsers, because it gets to be too much in only one.
I'm similar. The idea of dedicating desk space, two extra cables, the compute to power the displays, and electricity, to show something like email seems incredibly wasteful to me. Not to mention, do people that do this not feel cramped when they don't have their full setup?
I've also never been a "maximize the window" type of person. Buying an ultra-wide was a huge help though, I will admit.
My preference is 3x 24 inch screens. In theory, I'd like one of them to be a tablet or a touch screen device that sits underneath the other two.
It basically boils down to one screen for "the app/website/whatever" one for code, and one for a reference. I _can_ hold contexts, but I also have tools to do that for me.
I've always said that going from one screen to two is a big jump in productivity. Comparing things between two windows is a very common task in almost all workloads. However, I think three is already too many, and brings somewhere between barely any benefit and a net negative. More than that seems superfluous, even for CCTV or stock brokers. Attention can't be split that easily. Personally, for most of my working life I've had three, as in two 24" monitors and a laptop, but I usually either just have Spotify fullscreen on the laptop all day or turn it off if I can.
As for putting two windows side-by-side on a single screen? I don't know, it always felt clunky to me. A lot of things are designed to be landscape 16:9.
I was thinking exactly the same thing when reading this, but for wireless, programmable, LED-display keyboard buttons. I think these should exist, but I don't know of any easy implementations.
But what will a coin cell do when it's trying to get charged? That sounds a bit worrying.
But I didn't realise that. Perhaps I could use a voltage divider then or simply input 5V. 5V is too high for a battery but most devices accept it there anyway.
> But what will a coin cell do when it's trying to get charged?
You can get rechargeable lithium coin cells. It will just recharge fast.
The cell usually has enough internal resistance and surface area that it can't overheat or violate the charge current limits.
I still wouldn't recommend this approach, simply because the DC/DC converter approach you specified is slightly more power efficient (onboard chargers tend to have poor energy efficiency - many are simple linear regulators from 5v!).
I use cheap ones from Amazon (link below). I power them with 12V adapters which I "shucked". A 5V input isn't enough, because you lose some voltage in the conversion (the LM2596 is a buck converter, so it needs the input a volt or two above the output).
These adapters seem similar to what's used in that Instructables link. I didn't pick them for any specific reason other than that they were available on Amazon. They use the same LM2596.
For Windows, the paid program SuperDisplay will also allow you to use an Android device as a second screen, works wireless or over USB. My Galaxy Tab S7+ is great as a second monitor.
A good SuperDisplay alternative is something I really miss on Linux. Even over wifi, the latency is imperceptible to me, and being able to use the pen input (with pressure and tilt) is the cherry on top.
I also have an S7+, and I've been very happy with the device. How do you feel about it in 2023? I'm tempted to get the S9 Ultra as an upgrade but I'm on the fence right now.
I collect and use as many input devices as I can, as a bit of a hobby. It all started when I was younger and got a CueCat. Now I’m up to webcams, microphones, fingerprint sensors, many keyboards, mice, trackpads, trackballs, many game pads, MakeyMakey, a VR system, CharaChorder, MIDI keyboard, floor dance pad, Wacom tablet, BlackMagic keyboard with jog shuttle, and HOTaS. I’ve still got my eye on a macro pad, MIDI Fighter, and a racing wheel.
Heyyy, CueCat club! Funny that QR codes are everywhere these days and people actually scan them; Digital Convergence was just ahead of their time.
I have keyboards with mag-stripe readers, keyboards with smart-card readers, keyboards with assignable and relegendable keys (meant for point-of-sale usage), 6DOF 3D "SpaceMouse" devices, a 5-axis Lexip Pu94 mouse, I've mapped an R/C quadcopter transmitter into a wireless joystick [1], and last year I finally bought my first USB gamepad. (To play Stray.)
I was recently digging into some details of the BlackMagic keyboard, and it sounds like it's super difficult to remap the jog dial for other uses. What do you use yours for?
I actually do have many of them attached at once. I have an extra-wide desk and two PCIe expansion cards that provide 7 USB ports each, with their own USB controllers to solve bandwidth/timing issues.
Given how much we use them to talk to each other, I'm surprised you're not counting the microphone; likewise video calls and the camera.
And given how phones and tablets are so much more common than laptops and desktops, touch screens.
Arguably there are also passive continuous inputs like GPS, heart rate sensors, accelerometers, etc. They're mainstream, but I doubt that was the category you had in mind.