Show HN: Use an old tablet as an extra monitor (github.com/alex028502)
281 points by alex028502 on Oct 6, 2023 | 135 comments
but only for terminal and curses apps



Worth mentioning that on Linux with the GNOME DE you can use your tablet for more than just a terminal: https://www.omgubuntu.co.uk/2022/06/use-ipad-as-second-monit... . Still waiting for an even better solution, like streaming games to such a second tablet "monitor".


Just tried this, but got "this device needs iOS 15 or later" when trying to install Microsoft's RDP client, which isn't available for my old iPad. So I guess you can't actually use an old iPad for this. But maybe I can find another RDP client that will work.


Do you have the RD Client app installed on an iOS 15+ device? If you ‘own’ the app (and the developer allows it, I think) the App Store will let you install the latest supported version on an older iOS. I was able to install RD Client on my iOS 10 iPad that way a little while ago, but maybe something’s changed in the meantime.


It doesn't work on X11, and the cursor doesn't show. But if you don't need those, it works pretty well.


Does this work in KDE too, or is there a similar solution?


Here's what I found for KDE, but did not test it myself: https://bugs.kde.org/show_bug.cgi?id=454645#c5


It works for me, but it's quite unstable.


I've used Weylus [0]. It works over LAN and lets you control the mouse from your tablet. Sometimes it's laggy, but you can configure the resolution so it doesn't use too much bandwidth. I'm not sure how stable it is; I haven't used it on a regular basis.

[0] https://github.com/H-M-H/Weylus


In the same vein, Remote Touchpad [0] is fantastic if you don't need an extra screen.

[0] https://flathub.org/apps/com.github.unrud.RemoteTouchpad


If you are on Windows and have extra laptops or devices hanging around, SpaceDesk https://www.spacedesk.net/ is a great free app (not open source). I use it on my Windows dev machine (WSL2 FTW) with old laptops as external displays. It works well, even over WiFi.


Thanks, I just got SpaceDesk working on a cheap Amazon tablet over USB-C (after realising I had to set PTP mode on the tablet...) Should work really nicely as a second monitor when travelling, I have it on 60fps and high settings and the latency is barely perceptible.


I was excited to see iOS 9.3+ in their requirements listing, but after digging my useless but 100% functional iPad 2 out of storage, it won't install the app. :(

I do use the built-in iPad-as-a-second-screen thing in macOS with a still-supported iPad on occasion, and that works quite well.


It would be better if the LVDS ribbon cable connectors for all of these devices were more standardized, so you could just buy an adapter to HDMI/DisplayPort. These actually exist, but AFAIK there isn't just one LVDS ribbon cable standard, or even close to one.


Unfortunately, I find systemctl hard to type. If you start/stop services somewhat frequently, I recommend this alias:

    alias sc='sudo systemctl'
This has the nice property that it mirrors the "service control" (sc) utility in later versions of Windows NT that I grew up on. Should work in bash/fish.

I have these others also when doing service development, because many of the subcommands start with 'st*' and also having to change the second parameter each time is annoying. These work in fish, but are easily ported:

    function sce --description 'systemctl stop # end'
        sc stop $argv;
    end
    function sci --description 'systemctl status # info'
        sc status $argv;
    end
    function scs --description 'systemctl start # start'
        sc start $argv;
    end
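For bash users, a port of those helpers might look like this (my own sketch, assuming the same behavior is wanted; not tested against a real systemd box):

```shell
# bash ports of the fish helpers above; plain functions rather than
# aliases, since aliases don't expand inside scripts by default
sce() { sudo systemctl stop   "$@"; }  # end
sci() { sudo systemctl status "$@"; }  # info
scs() { sudo systemctl start  "$@"; }  # start
```

Drop them in ~/.bashrc and they behave like the fish versions.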


I agree, though I find journalctl to be even worse to type.
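It shortens the same way; a couple of aliases I find plausible (the names are my own invention):

```shell
alias jc='sudo journalctl'
alias jcu='sudo journalctl -u'   # logs for one unit, e.g.: jcu nginx
alias jcf='sudo journalctl -f'   # follow new entries, like tail -f
```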


On a similar note, you can use your old phone as a Stream Deck alternative using TouchPortal [1]. It's not free, but it won't cost you much and it works surprisingly well.

[1] https://www.touch-portal.com/


This seems to be a free, open-source app that does something similar: https://stream-pi.com/

According to the video I'm currently listening to, it currently only has a Linux client, but supports both Windows and Linux hosts.

Edit: I've played with this a bit with a Surface Pro running Linux as a client and Windows as host, and the thing holding it back at the moment is the lack of plugins. I use Stream Deck to manage my gaming sessions (Steam, Discord, Voicemod soundboard and voices, occasionally OBS, game launches, etc.) and as-is, for my purposes, this can really only manage OBS and things that have configured hotkeys.


That's a cool idea. I have a Fire tablet (and a Kindle) that I still don't know why I bought.


Tablets should have an HDMI/DisplayPort input so that you can directly use them as displays.


I'd even go one step further: we should have had a standard communications protocol like TCP for all devices. So a display would show up as just another device that we could use to read/write bytes. All devices would have a standard queryable HTTP/HATEOAS self-documenting interface. And HDMI/DisplayPort or USB A/B/C/.../Z would all use the same protocol as gigabit ethernet or Thunderbolt or anything else, so the bandwidth would determine maximum frame rate at an arbitrary resolution. We could query a device's interface metadata and get/send an array of bytes to a display or a printer or a storage device, the only difference would be the header's front matter. And we could download image and video files directly from cameras and scanners as if they were a folder of documents on a web server, no vendor drivers needed.

There was never a technical reason why we couldn't have this. Mostly Microsoft and Apple blocked this sort of generalization at every turn. And web standards fell to design-by-committee so there was never any hope of unifying these things.

Is it a conspiracy theory when we live under these unfortunate eventualities? I don't know, but I see it everywhere. Nearly every device in existence irks the engineer in me. Smartphones and tablets are just the ultimate expression of commodified proprietary consumerist thinking.


In fairness, there are standardised protocols for a lot of these things already, even if they're not all part of one giant meta-protocol. Cameras in particular have mostly appeared as a folder full of files, with no need for special drivers, for something like 20 years.

There's definitely no need to invoke a conspiracy for the lack of 'one protocol to rule them all'. It's often hard to agree on a standard even for a relatively limited topic; trying to agree on one for all electronic communications for all devices is probably impossible.


The meta protocol exists! Sort of. Check out the USB-C specs, which tried to answer a ton of this. It’s taken years for power delivery to reach the point where I don’t feel compelled to carry a USB-C power meter to check cables and chargers in the wild. My Switch still requires some out of spec signaling to charge/dock properly.

Meanwhile, half of the stuff I get off AliExpress only charges from A to C cables due to a missing resistor.

I don't think the markets (yet) incentivize spec-compliant implementations. Like how when my mortgage gets resold, autopay will only transfer over if it's once a month; anything more complex and I have to endure a new account setup and a ton of phone trees. Same with paperless settings. The result? I just live with the MVP.


> There was never a technical reason why we couldn't have this. Mostly Microsoft and Apple blocked this sort of generalization at every turn.

On the contrary, Microsoft tried really hard with UPnP/PnP-X/DPWS/Rally/Miracast*/etc but nobody was interested.

*BTW any Windows 10+ device can act as a Miracast sink (screen) so you can link Windows laptops/tablets as extra screens without any additional software.


Extending your analogy, some devices need the equivalent of UDP in order to function within the size/power envelopes that make them useful. Bluetooth vs. the nRF24L01+.

There are standards like this in highly interoperable systems, but there's a cost paid. USB-C power delivery negotiation (beyond the very basic 5V3A resistor that people omit) is roughly as complicated as gigabit ethernet. That compute has to come from somewhere, and it turns out customers won't even pay for that 5V3A resistor: they'll just use A-to-C cables and replace the device when it "won't charge" from a compliant charger. :) The average person probably only cares that USB-C can be flipped and that the connector feels less brittle than microUSB.

UPnP exists. Lots of what you describe exists. Between bugs in implementations becoming canon and a lack of consumer interest, no real conspiracy required. At least smartphones and tablets are trending in a good direction - Apple’s latest supports basic off the shelf USB-C Ethernet, displays, hubs, and so on.


Agreed in general. I wouldn't stop anyone, but having my monitor traffic go over the network would cause a lot of congestion, especially over wireless. I'd prefer a separate cable, as the grandparent alluded to.


You can plug a USB HDMI capture dongle into tablets and do this.

Any webcam viewer would probably work to view it, though there are dedicated apps intended for this, like https://orion.tube/ on iPad. I know there are options on Android, but I don't have a modern Android tablet to test them.


Do you know how come that app doesn't work on the iPhone 15 Pro?

I don't have the iPad, but just recently got the 15 Pro, and it's able to do a bunch of things via the USB-C port (wired Ethernet, SD card reading, driving a Pro Display XDR, etc.), but I wasn't able to do anything like what the Orion app is showing.

I was thinking of pretty much the same use case as shown in the app, where I could plug in an external camera and use the phone as a high-resolution / high-nit viewer display. Are these APIs only for iPadOS because the iPhones are missing some required hardware?


I know, I'd love to use my phone as a display via capture card so I don't have to carry a portable monitor to troubleshoot headless boxes.

The developer says the 15 and 15 Pro are only missing software, the hardware is capable:

> I’m sad to say that we’ve confirmed with Apple that it will not be working with the iPhone 15. But this can be fixed in software, so feel free to file a feedback request for UVC support on iOS!

https://old.reddit.com/r/apple/comments/16qzdtx/hi_reddit_we...


Ahh, that sucks. Hopefully a future iOS will also have UVC support.


I've used Duet for this in the past. Works great; it lets me extend my laptop screen with my iPad screen. https://www.duetdisplay.com/


Good idea. I did this with bash, awk, xev, and xdotool instead of a custom program.

You can always use screen's cut-and-paste for stuff within screen, and screen's copy buffer and xclipboard for the rest.


Worth mentioning this discussion that honed in on latency specifics when using Linux (Wayland) as a host OS: https://news.ycombinator.com/item?id=31409010 -> https://tuxphones.com/howto-linux-as-second-wireless-display... (2022)


Tried doing this for years; I only got more and more frustrated with whatever wacky software I had to install to make it work.


Genuinely curious: why would people put two displays side by side so that their neck is always bent to one side?

That looks like a tiring setup. I never do that; I put the main display (an external monitor for my laptop) parallel to the keyboard and my laptop to the side, so I face forward naturally.


In France, my OSHA-equivalent auditor actually requests that if there are two screens, they be placed with the centerline in the middle (I bet it's a clerical error, but it's funny to see clerical errors that recommend the opposite of the healthy solution).


I had side-by-sides at an old job. I found I switched back and forth often enough that I wasn't really hurting my neck.


I put multiple displays in portrait mode maybe 10 years ago and have never looked back.


Very cool!

> I also never have enough screens and never know where to put my terminal when I need to tail a log or something

What I do is just don't think of the terminal as competing for screen space. My terminal is always full screen, and a single "hotkey" toggles between the world of GUI apps, and the terminal world. Then, you can divide up the terminal however you want; I use tmux.

I've been doing this for more than 10 years: initially with iTerm2, which has a built-in "hotkey", and now with Alacritty, using Hammerspoon for the hotkey. (The hotkey is sometimes called a "visor" key in this context, I think in reference to first-person shooter games.)


I'd just run screen on both computers in multisession mode, and make the PC version tiny


- htop/Nethogs for the system in question

- Same for any intervening boxes

- Logs for middle boxes

- Web browser for reference lookups

- Window for manpages / script noodling

- Window for debugging

- vim

- Throw in some VMs, and those hosts may need to be factored in

- Frigging email/chat/calendar clients if working with other people

Yes, I can technically do all of those with one screen, and some screen/tmux magic. Fuck no, I do not want to. KVM can get you some ergo for mux'ing multiple boxes to a single screen. However, I'd much rather have multiple screens to partition my working set across.


Any options for an ancient iPad?


If you're on a recent macOS + iPad, there's Universal Control[0] (I use this as a way to have chat/mail on a second monitor). If you don't mind some noticeable latency, you can use it as a second display via Sidecar[1]. Finally, you can do the same thing described in the article with any terminal emulator app and SSHing into the remote system (I've had luck with Prompt[2], which is available as a one-time $15 purchase).

[0]: https://support.apple.com/en-us/HT212757

[1]: https://support.apple.com/en-us/HT210380

[2]: https://panic.com/prompt/


From the README: their eventual plan for the project is to serve the client through the web browser, which would mean that almost all tablets would be supported.


Didn't Apple drop support for old iOS devices so they can't access the web anymore? Safari is useless even for local sites, I believe.


If you're referring to the outdated certificates, I installed Let's Encrypt's ISRG Root X1 Certificate onto my old iPad 4 and that seems to have taken care of it. Local sites served over HTTP never had any issues.


I've used Duet Display on a first-gen iPad Pro 9.7". I can't speak to using it on older iPads, but TBH I don't recall having a problem with it.


I appreciate the effort developers put into writing these apps.

On a side note, I wonder how much damage a back door in one of these "harmless" apps could do. It would have control of the tablet and of every computer the tablet was connected to.


This is cool. Maybe for copying you could spawn a text editor on your main screen with the content of the terminal in it?


When I had to work on my desktop but also watch some educational videos on the side, I just used https://remotedesktop.google.com/ . Since the videos were web-based, it worked quite well.


If manufacturers released enough details about their devices and drivers, then unlocked the bootloaders, we could do a lot more with old tablets than a 2nd monitor. There are piles of them gathering dust in drawers, or worse, in landfills, because of forced obsolescence.


Windows needs to vastly improve its multi monitor support. I have a 3070 which supports up to four monitors, but getting them to play nicely in Win11 is a pain. So much so that I just use DisplayFusion instead.


Huh, I've had just about the opposite experience. Windows 11 is when I finally stopped using DisplayFusion. What is the problem you're running into?


A long time ago I used Synergy, and someone had written a Synergy client for a CyanogenMod variant of Android. I don't know where all that stands now, after Synergy imploded and CyanogenMod imploded.


It's one of those 'because I can' hacks. Perfectly fine old 15-19" LCD monitors are ~$10 at Goodwill-type stores, and free if you ask around relatives/friends for old gear.


Nice. I just dusted off my wife's Kindle in an attempt to repurpose it into a weird 'I'm in a meeting' sign, and I may end up looking at your project more closely now. Much obliged!

<3


I prefer a single display and use virtual desktops to separate tasks/apps, each of which fills most/all of the screen real estate.


I've done this with RDP + an Android tablet on Linux. Performance depends enormously on your tablet's hardware.


RDP is client-heavy, some flavor of VNC should do fine even on the most underpowered hardware (provided you are not watching videos).


I never keep tablets around long - I usually sell them (and all electronics) once they're no longer useful.

eBay is my Marie Kondo :-)


Same. I'm a big believer that one man's trash is another man's treasure.

I hate having stuff gathering dust in my drawer if I can sell it for $50 on eBay and make someone happy in the process.


I should really get on board with this. Instead I have GPUs, laptops, and tablets just gathering dust because "one day I might need it"! Unfortunately, my family motto appears to be buy high, sell never.


I would really be interested in an X11 server on that tablet.

So I can do a simple DISPLAY=tablet:0 to send a window there and enjoy the output.
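For what it's worth, that is exactly how remote X displays are addressed; a hypothetical sketch (the hostname "tablet" is made up, and the tablet-side X server would need TCP listening and access control configured):

```shell
# Point X clients at display 0 on the host "tablet" (hypothetical name).
export DISPLAY=tablet:0
# Any X client started now would open on the tablet, e.g.:
#   xclock &
# Requires the tablet's X server to listen on TCP (often disabled by
# default) and to allow this machine, e.g. via xhost or xauth.
```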



Wow. Awesome. Thx!


macOS has this built in with Sidecar. Just attach an iPad and select it as the external display.


Sidecar is very glitchy for anything but casual use.


It seems to be pretty stable now. I've been using an iPad Pro as a 2nd monitor for a MBP while working remotely over the last year without any issues (8+ hours of daily use).


I frequently use it on a non-Pro iPad, with 3 different MacBooks over the last few years, over both wireless and cable, and it doesn't reliably stay on for more than an hour at a time.


It can also be used as a janky screen, PiKVM-style, with the right adapters.


I love the suggestion to rebuild it for the browser. A tool like that could be generally useful. Anyone know of one?


CoCalc uses a websocket and xterm.js to implement terminals on a remote server. Each terminal session corresponds to a file (with extension .term), so multiple clients can open the same session by opening the same file. If you type in one session then all sessions will see the typing at the same time. (Disclaimer: I wrote this. It's way too heavy for this use case, but might be an amusing demo or proof of concept for somebody to play with before writing something new.)


tmux + ttyd (or gotty) could be used in a similar way. But it's not like an extra screen, it's more like mirroring a terminal to another device.


Someone asks for a webpage and not only do you propose tmux, you also acknowledge it doesn't even do anything like what's asked.

Typical Linux user.


Gotty serves a terminal window as a website. Tmux can be used to force the terminal size, so that you can have a small window for typing, like OP does.

The person on the other end does not have to learn either of them.


I use TeamViewer to log in remotely to my work laptop from an Android device.


I don't know how folks can use a ton of screens like this. I had four 32" monitors at one place and it was incredibly overwhelming.

I've been rocking a single 27" screen for years. Even that's too big. I prefer 24" screens but it's difficult to get a good one.

I can hold context in multiple virtual spaces, and key bindings make switching between them super quick.

I guess this is in the same camp as "I don't understand people who leave 75,000 tabs open."


Resolution & DPI are, I think, far more important than how many displays you have.

I have 4: 1x 1440@27", 1x 1440@24", and 2x 1080@24". If I had known 1440@24" would die out, I would have bought 3 of those instead.

For me the ideal would be a 16:10 24" screen with the same density as the 1440 16:9 models. It's the perfect size & resolution for desktop use in programming / engineering. I'd buy 3 of those, but they don't exist from reputable brands.

I don't want a single ultrawide because I like the (narrow) physical borders. It lets me organize stuff just how I like it. It also makes working with different sources easier. My desktop is plugged into everything, but I can put a laptop or embedded board onto one of the side monitors if I need to.

How I've set up mine:

- Middle 1440: Main work, usually fullscreen IDE with 2 columns of files open.

- Left 1440: Documentation, usually 2 windows side by side.

- Top left 1080: Media, usually in the background. Needed chat programs (different customers use different tools) side by side.

- Right 1080: JIRA, task lists, notes, research, running tests, running instances of programs being developed, ...

This avoids me having to use virtual workspaces to layer context. It's like a great big tool wall in a workshop:

  The idea is simple: First Order Retrievability. That is, you should never have to move one tool to get to another. That in turn affords the fastest, most efficient way of working.
  ~Adam Savage


LG has 4K 24-inch monitors. The DPI is amazing (the color fidelity, not so much).


>to layer context

Such a good way of capturing it.


I'm doing some light gamedev, and with two 28" screens I feel like I could use a little bit more screen real estate.

The situation where this still feels lacking is when I'm trying to solve a problem and have a 3D game view, source code, object list and properties, debug output, debugger (watched variables, call stack) on one screen. Then on another screen I'll read documentation of whatever I'm trying to fix.

Productivity clearly jumped when I added the second monitor, and I think I could get some boost still by either having larger monitors, or perhaps one bigger curved one with two monitor inputs.


Also games. 3x 24-inch screens felt like the best balance to me. I had 2x 27" and 1x 24" for a while, but I dropped back to 1x 27" and 1x 24" and prefer it. That's what I roll with these days.


I agree. My opinion is that once you've trained yourself to use virtual desktops efficiently, multiple monitors becomes more of a hassle than a benefit.

I think multiple monitors is the solution for people who would rather solve the problem by spending their money instead of the effort it takes to configure and become accustomed to switching between virtual desktops. Given that it is a strict biological limitation that the human brain can only focus attention on one thing at a time, I don't believe there is any valid argument for why moving your eyeballs between physical monitors is any better than hitting a key combo to switch between virtual desktops on a single monitor once those key combos have become muscle memory. Additionally, the number of physical monitors you have is limited by how much money you have to burn and how much physical space you have to place them, whereas virtual desktops are theoretically unlimited.


They're not all for the same purpose.

There are some things that don't need to be actively looked at most of the time, but need to be visible so that you know when something happens that you do need to pay attention to. You could do it by polling (put it on a virtual desktop and switch to it every so often), but that adds latency and can be even more distracting than having it visible in the corner of your eye. Think of things like Element or Slack or a dashboard that tracks bugs/issues/alerts.

Then there are reference displays that you look at on demand. Most of the time switching virtual desktops is good enough for this, but not if you're following along with a sequence while actively working.

Then there are things that are just big. Perhaps you're displaying an autogenerated graph, or you're using an information-dense tool (maybe with multiple relevant layers).

Not to mention wanting to consult things while on a video call, which constrains the screen to use based on camera positioning.

I very actively use virtual desktops, yet I have two external monitors in addition to my laptop screen. Most of the time, I really only make use of one of the external monitors, but situations arise that require both. They arise frequently enough that I notice the lack (eg when I'm fighting with my configuration and only one is working, or I've loaned one monitor to someone else). And when I'm mobile and down to just the laptop screen, I definitely notice and even adjust what I'm working on to avoid losing productivity.


It depends somewhat on your job.

When I was a techie I tried to be focused on one thing at a time as much as possible. Still liked two screens though!

In many other roles though, having your email and your working document open, or having excel and PowerPoint open, or help docs and your code, or the operational plan and the server terminals, et cetera, are massive efficiency multipliers.

Basically I'm at a place where one monitor feels claustrophobic, especially if it's just the teeny laptop monitor. 2 are enough. 3 is nice. I wouldn't know what to do with 4 32" ones either!!


Same, one 5k 27” screen and I’m completely happy. I might consider 6k screen, but definitely not multiple screens.


If it's the Studio Display that you got, I got that same monitor a month ago and I absolutely love it.


You categorize your screens. One screen for dev work, one for communication, and one for documentation/browsing. That way you can alt+tab between your primary work tasks with a tiny eye movement.


I used 2 screens for a while but I went back to 1. I would do 3 or 1 but not 2. 2 was bad for my neck. And I can't fit 3 screens on my desk.


For film composers (niche, I know) each screen can represent a distinct task all of which are happening simultaneously.

So:

- video reference

- synthesizers

- daw/waveform playback

- score

I wonder what other professions might find value in a similar setup.


I'm in the same vein, but more film/video post-production in general. Most of the time, in addition to however many monitors are attached to the computer, there is at least one reference video monitor (that can be properly calibrated) that only receives a video signal from whatever software is being used. With only 2 computer screens, one screen has my timeline and preview windows. The other monitor will have all of the bins and effects controls and other various windows. If I'm in a real edit bay with dedicated scopes I'll prefer those, but if I'm slumming it at home I'll have to make room for them on one of the monitors too (usually tabbed behind the source monitor).


I think that's good, because you can manipulate the second screen without juggling the mouse pointer.

I think those are the best use cases; input is a much greater bottleneck with additional screens if it's limited to a keyboard and mouse modifying those windows.


Most of the time I'm using 3: 2 big screens (often browser on one, IDE or similar on the other) and my laptop (usually terminal, or Slack, or a similar auxiliary app). It feels no more complicated to me than swiping between phone apps, and definitely simpler than someone with a carefully curated WM setup.

My screen size has gone up over the years, but that's more a matter of aging eyes than information density. :-)


Doesn't always work this way but:

Smaller monitor: Comms (email, calendar, Slack, etc.). Oftentimes I have this vertical (email/cal on top and Slack below), and it doubles for viewing stats dashboards during troubleshooting.

Bigger monitor: Focus work (terminal, development env, etc) - normally split in 3 columns

Laptop Screen: Browsing (research, WebUI interfaces, entertainment, etc)


I think a lot of this depends on how you arrange your windows.

I've used a bunch of monitors in the past, but found that my neck started to hurt after looking to the side too much. And having the bezels right in the middle of your view makes the most valuable real estate effectively unusable (unless you have 3!). 4 32's would be way too much for me, no doubt.

Having a single widescreen monitor has been better for me. Most of the time I'm not maximizing its use, but when I want to combine a bunch of views at once, it's quite valuable. Like when I'm running a performance test while keeping tabs on a bunch of monitoring.

I think you're right that virtual workspaces are great, especially if you dedicate them for discrete purposes.


I have my primary display in the center, directly in front of me. Whatever needs my primary focus for my current task goes there... Outlook for email, vscode for code, Terminal for admin, web browser when web browsering, etc.

To my left is for monitoring things, previewing things, and reference. Browser for checking changes to code, logs for monitoring changes to system, documentation for thing I'm working on, etc.

The result avoids the bezel in my direct field of view, avoids strain and RSIs from awkward posture, and, incidentally, kinda degrades gracefully when I'm at home with only one display or traveling with only my laptop's display.

But the second display to my left allows my peripheral vision to monitor things for changes without diverting my focus, and helps me keep documentation or source material for comparison handy without having to switch away from the thing I'm working on.


I've had a single monitor for a long time, but I've recently come around to dual monitors. It just makes working with additional information on the second screen so much easier. I do spend more time shuffling windows around now, though.


I rarely feel a need for even two monitors unless I'm doing GUI development. Much of the time I just work off my laptop directly, not plugged in to anything (probably should knock it off for ergonomics reasons, though....)


Totally off-topic, but while reading this I was thinking "that is exactly what I would say". Then I saw your username... it looks like we share not only a taste for monitors but also a surname!


No way! That's cool!


I'm more of the 75,000 tabs faction, which is probably why I use 3 monitors. I prefer to have all the windows I'm actively working in open in parallel. With one monitor, the handling is too fiddly for me and the windows are much too small. If only one thing is in the foreground, I sometimes lose the context or the constant jumping back and forth annoys me.

Edit: Just looked it up, there were like 30 tabs ;) But also more browsers, because it gets too much in only one.


I'm similar. The idea of dedicating desk space, two extra cables, the compute to power the displays, and electricity, to show something like email seems incredibly wasteful to me. Not to mention, do people that do this not feel cramped when they don't have their full setup?

I've also never been a "maximize the window" type of person. Buying an ultra-wide was a huge help tho I will admit.


My preference is 3x 24 inch screens. In theory, I'd like one of them to be a tablet or a touch screen device that sits underneath the other two.

It basically boils down to one screen for "the app/website/whatever" one for code, and one for a reference. I _can_ hold contexts, but I also have tools to do that for me.


I've always said that going from one screen to two is a big jump in productivity. Comparing things between two windows is a very common task in almost all workloads. However, I think three is already too many, and brings somewhere between barely any benefit and a net negative. More than that seems superfluous, even for CCTV or stock brokers. Attention can't be split that easily. Personally, for most of my working life I've had three, as in two 24" and a laptop, but I usually either just have Spotify fullscreen on the laptop all day or turn it off if I can.

As for putting two windows side-by-side on a single screen? I don't know, it always felt clunky to me. A lot of things are designed to be landscape 16:9.


I'm a 1.5 screens at work (laptop + 24" external monitor) and 1 screen at home person. Life has enough distractions.


Almost everybody at my job has 2 screens; I'm sorry, but I only have one brain.


I've got a keyboard and mouse, but spend most of my day using just the keyboard. That doesn't mean a mouse isn't useful.


Love this sort of hackery, but also -- it just kind of shows how very goofy all the limitations we have on this are.

If you go with the idea that a computer should be a general purpose machine -- we have so many things that aren't computers.

"Wireless external monitor" should be a trivially easy built-in to all operating systems, and that it isn't is kind of ridiculous.


RIP to FireWire target disk mode for Apple laptops and target display mode for iMacs.

The new docs promote AirPlay screen mirroring and networked shares over usb-c but it’s nowhere near the same. :/


You can still do target disk mode over usb-c. I used it when I switched from Intel to M1.


I was thinking exactly the same when reading this, but for wireless keyboard buttons with programmable LED displays. I think these should exist, but I don't know of any easy implementations.


My phone can easily mirror the screen to my TV but my PC can't, not with the built-in software, not with 3rd party apps. It's all so tiresome.


> I have a couple old kindle fire tablets lying around. One of them has a battery that lasts about ten minutes.

You might want to check if the battery is going "spicy pillow".


Yeah, for this reason I replace the battery of old tablets that I use as control panels with a DC-DC converter.

Simply removing the battery isn't enough, because most tablets refuse to run on USB power alone.


You can also replace the battery with a much smaller one - e.g. a coin cell - and it'll normally run fine.


But what will a coin cell do when it's trying to get charged? That sounds a bit worrying.

But I didn't realise that. Perhaps I could use a voltage divider then, or simply input 5V. 5V is too high for a battery, but most devices accept it there anyway.


> But what will a coin cell do when it's trying to get charged?

You can get rechargeable lithium coin cells. It will just recharge fast.

The cell usually has enough internal resistance and surface area that it can't overheat or violate the charge current limits.

I still wouldn't recommend this approach, simply because the DC/DC converter approach you specified is slightly more power efficient (onboard chargers tend to have poor energy efficiency - many are simple linear regulators from 5V!).


URL to an example of what you use?


I use cheap ones from Amazon (link below). I power them with 12V adapters which I "shucked". A 5V input isn't enough, because you lose some voltage in the conversion.

https://www.amazon.es/gp/product/B07PBCYTFW

These adapters seem similar to what's used in that Instructables link. I didn't pick them for any specific reason other than that they were available on Amazon. They use the same LM2596.



Lithium hotpocket


For Windows, the paid program SuperDisplay will also let you use an Android device as a second screen; it works wirelessly or over USB. My Galaxy Tab S7+ is great as a second monitor.

https://superdisplay.app/


Or the free Spacedesk, which does the same.


A good SuperDisplay alternative is something I really miss on Linux. Even over wifi, the latency is imperceptible to me, and being able to use the pen input (with pressure and tilt) is the cherry on top.


I also have an S7+, and I've been very happy with the device. How do you feel about it in 2023? I'm tempted to get the S9 Ultra as an upgrade but I'm on the fence right now.


The S7+ is still cutting-edge as far as tablets go.

No significant improvements have been made in the last two generations.


After 60-80 years, computers still only have three mainstream input devices: keyboard, mouse, and gamepad.

I'd try to bring the additional inputs a smartphone has, like touch, to computers.

I know there are programmable keyboards and many other things. But no one has cracked it yet.

It's a cool project as is. Just an idea if you were thinking of iterating forward.


I collect and use as many input devices as I can, as a bit of a hobby. It all started when I was younger and got a CueCat. Now I’m up to webcams, microphones, fingerprint sensors, many keyboards, mice, trackpads, trackballs, many game pads, MakeyMakey, a VR system, CharaChorder, MIDI keyboard, floor dance pad, Wacom tablet, BlackMagic keyboard with jog shuttle, and HOTaS. I’ve still got my eye on a macro pad, MIDI Fighter, and a racing wheel.


Heyyy, CueCat club! Funny that QR codes are everywhere these days and people actually scan them; Digital Convergence was just ahead of their time.

I have keyboards with mag-stripe readers, keyboards with smart-card readers, keyboards with assignable and relegendable keys (meant for point-of-sale usage), 6DOF 3D "SpaceMouse" devices, a 5-axis Lexip Pu94 mouse, I've mapped an R/C quadcopter transmitter into a wireless joystick [1], and last year I finally bought my first USB gamepad. (To play Stray.)

I was recently digging into some details of the BlackMagic keyboard and it sounds like it's super difficult to remap the jog dial for other uses, what do you use yours for?

1: https://hackaday.io/page/11784-rc-transmitter-tx-as-a-virtua...


I use mine for the most boring possible answer: exactly what it was marketed for. I edit videos in DaVinci Resolve with it. :-)


I had a boss who was pretty excited about CueCat. It's funny that it took another 15 years to catch on, in the form of smartphones and QR codes!


I picture you having them all hooked up at once, one-man band style.


I actually do have many of them attached at once. I have an extra-wide desk and two PCIe expansion cards that provide 7 USB ports each, with their own USB controllers to solve bandwidth/timing issues.


There are lots of computers with touch capabilities out there.

From the gesture support on Apple’s trackpads to touch screens like the Surface (and plenty of other laptops with touch screens).

There are also stylus inputs and things like Wacom tablets that have been around for many years now.


There are also microphones and cameras that can act as inputs; it just turns out they don't offer much power over kb/m.

But really, the problem being solved by OP was not enough outputs, rather than not enough inputs.


Given how much we use them to talk to each other, I'm surprised you're not counting the microphone; likewise video calls and the camera.

And given how phones and tablets are so much more common than laptops and desktops, touch screens.

Arguably there's also passive continuous inputs like GPS and heart rate sensors, accelerometers, etc. — mainstream, but I doubt it was the category you had in mind.



