Embrace it instead of fighting it. Delegate simple tasks to GPT; I found it useful as an example generator. There are so many niches you can create software for, not necessarily well paid but fun, and with new tools like AI you may deliver solutions quicker. That being said, I kind of feel sorry for all the folks who jumped on the IT bandwagon dreaming of making more money without actually having a deep tech background. I guess AI-backed tools will significantly reduce the need for junior positions very soon.
It's easy to say embrace it, but we aren't that far off from a fully capable AI developer. People are already stitching together task-based flows to make it build a whole thing, and a big thing at that.
The majority of the software industry is in a lot of trouble, and honestly, given that it was one of the highest-paying industries, that's not great for the economy.
What are people supposed to even do? What's next for all these displaced workers?
Hi, glad you like it. I'm the author of Miniature.io.
To be absolutely honest, I posted a link to the website here because I had never done it before. I built it years ago because, at the time, I found the topic challenging to implement and worth trying. For years the system was online and free to use, serving as many as 2 million images daily, free of charge. I didn't know whether I should kill it or spend some more time finally finishing the commercial part. I did the latter over the last couple of months, and so here it is, polished a bit, with a newly rewritten website renderer.
If you're looking for similar projects, or simply interested in the tech behind it, you can have a look at https://github.com/cepa/rabarbar, the current engine that captures the screenshots.
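If you just want a feel for the capture step itself, here is a toy sketch in Python using Playwright (not necessarily what rabarbar does internally, just an illustration; the URL and output path are made up):

    # Illustrative sketch only, not the rabarbar engine: take a screenshot
    # of a page with headless Chromium via Playwright.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()  # headless by default
        page = browser.new_page(viewport={"width": 1280, "height": 800})
        page.goto("https://example.com", wait_until="networkidle")
        page.screenshot(path="thumbnail.png")  # hypothetical output path
        browser.close()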
Very cool, nice to meet you, and great work. Glad to confirm my suspicions! Not that there was anything wrong with posting a new or old product; it's just a tool I've come across before. Hacker News can definitely get you exposure. While I wouldn't call it viral, I was happy to get a few thousand visitors to my project from Hacker News. I'm offering mine for free as well, and while my server can handle everything thrown at it, I added an API and charge for that, so that was my way of monetizing, while the free version allows a single file upload at a time. I don't think I've got anyone paying for it yet, but I actually created the converter for my day job, where it saves me the trouble of searching for or juggling different tools. Also, thanks for sharing the technology behind it. I think I'm just using a few JavaScript libraries to do the job.
That is quite a few users you've got. And here's what I've learned about monetization: it's nice that people get to jump on your back and you carry them for the ride, but after a while it gets old and you lose motivation to keep working. Sounds like socialism: in theory it works great, but you're the only one paying for it while everyone but you gets to benefit. Eventually you get to a point where you ask, "What's the point?" Some might love a sacrifice for the greater good... but hey, you've got to eat, pay your rent, and enjoy life a few weeks out of the year too! I think it's okay to tell people that you need to get paid for your work. Good luck!
I use a Raspberry Pi running X2Go as a proxy to access work machines from home. It talks RDP or VNC at the remote end to the target desktops, but uses X2Go's protocol over the Internet back to me.
I don't know if you'd get the kind of throughput the OP is after, but it works well for my needs.
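For the curious, the client end of that setup can be scripted. Here is a rough sketch in Python, assuming X2Go's pyhoca-cli client and its --server/--command options; every host and user name below is made up:

    # Rough sketch: open an X2Go session to the Pi, with a session command
    # that runs an RDP client against a work desktop behind it.
    # Hypothetical names throughout; check `pyhoca-cli --help` for options.
    import subprocess

    subprocess.run([
        "pyhoca-cli",
        "--server", "pi.example.home",                     # the Raspberry Pi
        "--username", "me",
        "--command", "xfreerdp /v:work-desktop.internal",  # RDP hop at the far end
    ])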
I keep my projects on my private infra. The software (Git/Subversion/etc.) is installed in a VM, which is easy to back up automatically; the data is kept on a shared volume on a ZFS filesystem with snapshots enabled. Additionally, an rsync job runs daily to sync all my important data to a disk in a remote location (though I could use encrypted cloud storage for that instead).
Recently I was able to restore my projects from 2003, so this approach works for me :)
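As a sketch of what that daily job can look like, here is a minimal Python wrapper around rsync; the source paths and remote host are hypothetical, and it assumes SSH key access to the remote machine:

    # Minimal daily sync: mirror important data to a disk in a remote
    # location over SSH. Intended to run from cron or a systemd timer.
    import subprocess

    SOURCES = ["/tank/projects/", "/tank/documents/"]  # hypothetical datasets
    REMOTE = "backup@remote.example.net:/backup/"      # hypothetical remote disk

    for src in SOURCES:
        subprocess.run(
            ["rsync", "-az", "--delete", "-e", "ssh", src, REMOTE],
            check=True,  # fail loudly so the scheduler reports the error
        )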
Things are trickier when it comes to production environments due to "data gravity". Assuming you have plenty to back up, I would stick with the vendor's offerings: on AWS you can use EBS snapshots for instances and RDS snapshots for SQL data, plus S3 replication to other regions, etc. It depends on your use case, of course. Professionally, RTO and RPO requirements should be defined before you choose the right solution.
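For illustration, the two snapshot calls look roughly like this with boto3 (the region, volume, and database identifiers are made up):

    # Illustrative AWS snapshot calls; all identifiers are hypothetical.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")
    rds = boto3.client("rds", region_name="eu-west-1")

    # Point-in-time snapshot of an EBS volume backing an instance.
    ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                        Description="nightly backup")

    # Manual snapshot of an RDS database instance.
    rds.create_db_snapshot(DBSnapshotIdentifier="mydb-nightly",
                           DBInstanceIdentifier="mydb")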
Re ZFS: I've used it on BSD and Linux, and both work well. I've never had issues, even when migrating between ZFS versions or moving pools from, say, Linux to BSD and vice versa.
Sorry, I disagree. First, many people who work in IT are _forced_ to use Windows as their primary host at work. Next, even if you install a virtual machine, it might not work as seamlessly as having a Unix shell natively, which is why some use Cygwin. Regarding poor students: would you prefer to commute daily with multiple laptops, one for Linux and one for Windows? I doubt it. Other than that, if your work is not just coding but involves _anything_ apart from the code itself (word processing, spreadsheets, graphic design, CAD) and you still need to run something from the Linux ecosystem, WSL is currently probably your best option.
> Sorry, I disagree. First, many people who work in IT are _forced_ to use Windows as their primary host at work.
I would consider it a bad use of my experience to develop on Windows. The market agrees with me: experienced Linux developers are much better paid.
> Next, even if you install a virtual machine, it might not work as seamlessly as having a Unix shell natively, which is why some use Cygwin.
I would not call Cygwin a native Unix shell.
> Regarding poor students: would you prefer to commute daily with multiple laptops, one for Linux and one for Windows? I doubt it.
I don't use Windows for personal stuff. Apart from that, VirtualBox would still be a better solution if it were necessary to use Windows. For example, updates on a Windows machine that is rarely used can easily take hours, and one does not want one's work machine blocked for that long.
Apart from that, abandoning Windows is just a matter of leaving old habits behind. For me, using Linux exclusively for my personal stuff has worked excellently for the last 20 years.
> Other than that, if your work is not just coding but involves _anything_ apart from the code itself (word processing, spreadsheets, graphic design, CAD) and you still need to run something from the Linux ecosystem, WSL is currently probably your best option.
LibreOffice or SoftMaker Office is available, but LaTeX is much better for reports and articles, and wikis and Markdown are better for documentation. MS Word is a usability nightmare. Also, I do not use spreadsheets myself; I use Python scripts, which are more efficient, and ledger-cli for accounting. Inkscape is much better than MS Visio. I am a physicist and developer, so I do not use CAD, but if I had to use a CAD program on Windows, I would use a separate Windows machine, as explained before.
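To give a flavor of what "Python scripts instead of spreadsheets" means in practice, a tiny example (the file name and column names are made up):

    # Sum expenses per category from a CSV export instead of a spreadsheet.
    # The file name and the "category"/"amount" columns are hypothetical.
    import csv
    from collections import defaultdict

    totals = defaultdict(float)
    with open("expenses.csv", newline="") as f:
        for row in csv.DictReader(f):
            totals[row["category"]] += float(row["amount"])

    for category, total in sorted(totals.items()):
        print(f"{category:20s} {total:10.2f}")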
You sound like you're just used to doing everything with Windows and have never seriously considered using Linux software. I guess you don't know most of what exists there.
Also, I'd appreciate it if people would not call areas where some software vendors have managed to create lock-in an "ecosystem". An ecosystem is a scientific concept from biology; using the word for proprietary software environments is simply marketing BS bingo.
Linux developers might be better paid simply because there are fewer of them. Run-of-the-mill “enterprise” developers are on Windows, and they make up the bulk of the workforce (in any country, afaik). This creates economies of scale that incentivise companies to keep standardising on Windows, because it makes hiring easier and cheaper.
Well, if Linux developers are better paid, this will naturally increase supply. And this happens, of course.
But why should companies pay more for Linux when they can develop stuff on Windows and pay less? Wouldn't it be dumb to use the more expensive resource and gain less? The reason is that Linux developers are more productive: they create more value, and companies' decision to pay them more is entirely rational. Of course, the detailed picture is more nuanced; it makes a difference whether one develops a boring PHP app or complex embedded systems for sectors like HFT, defense, or embedded medical devices. But chances are that a lot of the latter use Linux, too. They just don't advertise jobs on HN.
Microsoft has successfully created an environment where people with little training are able to add business value. The average Windows developer is not at all comparable to the average Linux developer.
> You sound like you're just used to doing everything with Windows and have never seriously considered using Linux software. I guess you don't know most of what exists there.
Although this comment is not directed at me, I feel obliged to throw it back at you: Have you ever used Windows?
The odd time I need to create/edit images, there is no equal on Linux to paint.net.
None. Not Pinta, not GIMP. Nothing else hits the feature/complexity sweet spot like paint.net does. There is also no equal on Mac.
Secondly, we're in a world where Webex still exists. It is very easy to use on Windows and Mac, and all functionality is available. Using it on Linux means using a more limited version that runs on the JVM and figuring out which dependencies to install.
Using Webex in a VM isn't feasible, as there is a noticeable ~3 second delay for voice.
> LibreOffice or SoftMaker Office is available
They do exist, but Excel simply puts them to shame.
Over the past 15 years, I've used both Windows and Linux for personal use and work. I've never been in a situation where either OS sufficed on its own, for either work or personal use.
I've used Mac OS for work and it was sufficient on its own: polished applications alongside powerful dev tools.
However, when it came to spending my own money, I decided a 2-in-1 with Windows and Linux was a much better buy for me than a Mac, especially in terms of bang for the buck.
Indeed, since 3.11. I switched my personal and academic stuff completely to Linux in 1998. I developed with embedded Windows systems for about six years. Today I use Windows for some non-development tasks at work, but I develop exclusively on Linux.
> The odd time I need to create/edit images, there is no equal on Linux to paint.net. None. Not Pinta, not GIMP. Nothing else hits the feature/complexity sweet spot like paint.net does. There is also no equal on Mac.
Many people like Krita. I've never done much bitmap drawing myself, but I really like working with Inkscape, which is a vector drawing program.
Apart from that, I appreciate that there are different opinions.
> Secondly, we're in a world where Webex still exists. It is very easy to use on Windows and Mac, and all functionality is available. Using it on Linux means using a more limited version that runs on the JVM and figuring out which dependencies to install. Using Webex in a VM isn't feasible, as there is a noticeable ~3 second delay for voice.
WebRTC works very well with Firefox or Chromium. One needs nothing more than the browser and a link to appear.in, for example.
Only feature updates (which come twice a year) can take hours to install. Cumulative updates (typically monthly) take only single-digit minutes on most machines.
How well do LibreOffice and SoftMaker Office work with pen & touch in tablet mode?
You would presumably remote in to the second computer. I do this from my MacBook Pro to my Windows desktop using TeamViewer, and also into native Linux boxes using ssh.
There are also many developers who are forced to use GNU/Linux dev tools while preferring Windows.
Not to mention that Windows has better support for most hardware and modern form factors, such as pen & touch enabled 2-in-1s.
If you're buying something like a Surface Pro or Surface Book, the last thing you want to do is cripple its functionality, ease of use, performance and battery life by installing GNU/Linux.
Regarding drivers and general hardware support: if you buy hardware that doesn't work well with Linux, you will have to deal with hardware that doesn't work well with Linux.
I replicate my answer to a different comment here because it applies identically to this one.
There are a number of issues with Surface and touchscreens. First, my impression is that Microsoft Surface hardware, too, is heavily advertised in developer forums and on social media. Again, one needs to separate the desired association that this PR effort wants to create from the actual qualities and disadvantages of the product. For example, the Surface laptops have earned some negative reputation for lack of durability. Anybody considering such a device would be well advised to search the web to find out whether such problems are frequent, a thing of the past, or a rare exception. But that's a digression. You mentioned touchscreen support as an important quality, and I am zooming in on that.
Surface Books and touchscreens are being discussed here in the context of a software developer's system, and I am really a bit puzzled by that. What does it mean to have touch on a software development system? I surely understand that Microsoft does not want to lose a generation of developers, and that it considers them trend-setting or even required to create gazillions of fantastic future Windows apps that will populate the Microsoft app store with killer apps for the Surface tablet. Microsoft marketing surely wants to sell the Surface (if I were them, I'd probably hire some PR agencies to flood Reddit and HN with positive comments about it).
Also, Microsoft still seems to live the dream, in the face of all these failures such as Windows Phone, that desktop software, tablets, and mobile apps shall be, will be, must be convergent, and that desktop and mobile will have a common UI. Which is touch. I won't comment more on that. If you have ever tried, you know it is hard to shatter someone else's dream, even if those wishes are not entirely based on experience.
But now let's come to real life, and let's get a little, just a little, practical. First, consider that software developers are people who handle program code. In most cases, lots of program code. In many cases, quite complex program code.
The code is displayed on the screen. Developers have to remember many things, and the more code you have on the screen, the less you have to remember while working on one thing. They say we can hold only about seven (plus or minus two) things in short-term memory; anything more has to be memorized, and memorizing costs a lot of time.
That has a simple consequence: the larger the screen and the more text it displays, the better. Take my home PC, which I mostly use for hobby programming. The monitor is a 43-inch Philips BDM4350 with 4K resolution. It has a glossy finish, but I can live with that: the room it stands in usually has no direct sunlight, I can use shutters, and I have happily traded non-glare for that amount of screen real estate. For me, it is an absolute dream.
I'm just saying: as a professional software developer, you will want a screen that large. A smaller screen is a waste of time, and therefore a waste of the company's money. If you own an IT company and your developers have smaller screens, ask yourself why.
So I ask you to do a little experiment. Not a thought experiment, but a real-life experiment. Imagine you have a 43-inch touchscreen in front of you. Sit at a proper ergonomic distance. Now lift your index finger and point it at the height of your chin at that ergonomic distance.
Now, keep your finger still for one minute. Just one minute.
Do that now.
Notice something? It is hard. It requires a lot of effort. Doing that all day would be very, very tiring.
If you want high-precision graphical input for a large screen, you will not want a touchscreen. You will want a graphics tablet and stylus lying flat on your desk. In addition to your 43-inch monitor, of course. Good news: you can buy a high-quality Wacom graphics tablet on Amazon, even refurbished, for less than $100. And Wacom tablets have had excellent Linux driver support for a long, long time. They are very fine for drawing artwork. I am not going to recommend drawing software, since that's not my realm of expertise.
However, as a software developer, especially with more experience, you will normally want to use your screen to display text. As much as possible, with as little waste as possible. Which leads quite naturally to text-oriented, keyboard-oriented workflows, tiling window managers (like i3), and text editors like vim or Emacs, which, among other things, do one thing really well: they don't waste screen space. Of course, learning a tiling window manager requires a bit of dedication (you have to memorize key chords, duh), but it pays off after one or two days. Switching between terminals and windows is no longer a matter of a second, as with the mouse, but of a single key combination that does it in a tenth of a second. And this does not only save time; it makes it much easier to keep the flow, to stay focused on what you were working on.
I'm well aware of the Surface issues, but that's what warranty and/or insurance is for.
There is a lot more to programming than typing and displaying text with 1D syntaxes. There are plenty of DSLs that go beyond that. I recommend you expand your horizons a little.
An external Wacom tablet is no substitute for a Surface Pro, where I can draw and write directly on the screen while staying portable. Also, the screen size is fine, since I sit closer to it, and you can always connect it to an external display.
great stuff