Codespaces but open-source, client-only, and unopinionated (devpod.sh)
665 points by el_hacker on June 20, 2023 | 152 comments



I'm glad this space is expanding. This is created by the authors of Loft.sh and DevSpace, both really great solutions to the most common problems of developing natively in a K8s cluster.

DevPod is basically Vagrant but with containers, which brings a ton of benefits over Vagrant's VM-centric design. You can give an entire team (or organization) one immutable development environment and get away from the constant toil of fixing random problems on individual dev machines that broke because someone changed something locally and had no immutable environment to restore.

The fact that it's self-hosted means you can take it anywhere (your laptop, GCloud, AWS, Azure, etc). Containers means you can save resources or scale it, reuse public containers and container ecosystem tools. Unopinionated means you aren't forced to use one IDE or platform. Open Source means you can read the source to figure out what's going on and hack in a solution if needed.

This one is still early days it seems, as it doesn't run on my Mac (I opened an issue). But the benefits once it works will be incredible. I've been trying to onboard our teams to Devspace, which has its warts [and lack of docs], but is still lightyears better than most other solutions. Once DevPod is stable I'll be looking into moving to it.


Curious if you tried the devcontainer cli [1] to build your devcontainer.json?

DevPod is another implementation of the devcontainer spec; the most used implementation is the aforementioned devcontainer cli, which vscode uses (or rather supplies) via its integration.

If you’re having problems with DevPod while they iron out the kinks you might want to try the devcontainer cli, which can build the images, and run them.

1. https://github.com/devcontainers/cli
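
For anyone who hasn't used it, the basic flow is roughly the following (from memory, so double-check the README for exact flags):

    npm install -g @devcontainers/cli
    devcontainer build --workspace-folder .        # build the image defined by devcontainer.json
    devcontainer up --workspace-folder .           # create and start the dev container
    devcontainer exec --workspace-folder . bash    # run a command inside it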


Thanks for opening an issue! We'll definitely look into that. Would love to learn more about your experiences with DevSpace as well. If you're open to chatting, reach out via slack or via lukas [at] loft [dot] sh


What are some of the competitors in this space?

- Gitpod, a SaaS competitor to Codespaces. http://gitpod.io

- Coder, which I guess is the more enterprisey self-hosted Codespaces alternative? https://coder.com

- This project, Devpod, seems to be a polished experience but not centralized like Coder.

- I recently stumbled upon Recode, which looks like a more indie take on the problem. https://github.com/recode-sh/cli


It's a lot older but I would say Vagrant intersects with this space

https://github.com/hashicorp/vagrant

Possibly devenv, as well.. Though I haven't personally tried it

https://devenv.sh/


Vagrant was fantastic pre-Docker and is still arguably more useful for certain cases, but I recall having issues running it last I tried. Based on the website, VirtualBox still lacks stable ARM64 support. Would "boxes" downloaded from the Vagrant cloud need to be built for ARM64 as well, or does it emulate?


It does not emulate. My org has been deprecating Vagrant ever since IT started issuing Apple Silicon as an option. Replacement is Docker Compose, or cloud VMs for software with heavy disk I/O.


Is disk I/O really such a significant problem when going to Docker for Mac over Vagrant?

There is overhead for mounting local volumes into the container, but I've found it to be negligible on my Apple Silicon Macs in the last year or two. (Apple seems to have been the one to fix it with their new virtualization framework, which Docker for Mac supports.)

Back when they first added their APFS file system, things were atrociously slow when it came to disk I/O (as in 10x slowdowns or more) and there were various workarounds, but it seems resolved to me now.


Do you know if Packer is still decent for building cloud AMIs? Or do you have any suggestion for building custom images that can run on cloud platforms as well as locally?


We use packer to build our images for all of Vagrant, Docker and AMIs (multi-arch) and push to the relevant registries. The packer side is probably ~100 lines of HCL, and allows us to have consistent images no matter where they're running. It's a fairly simple tool in principle and does what it says on the tin; would recommend.
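
For anyone curious, the rough shape of such a build in Packer's HCL2 syntax is something like this (values are illustrative, not our actual config):

    source "amazon-ebs" "dev" {
      region        = "us-east-1"
      instance_type = "t3.micro"
      ami_name      = "dev-base-${formatdate("YYYYMMDDhhmm", timestamp())}"
      ssh_username  = "ubuntu"
      source_ami_filter {
        filters     = { name = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*" }
        owners      = ["099720109477"]   # Canonical
        most_recent = true
      }
    }

    build {
      sources = ["source.amazon-ebs.dev"]
      provisioner "shell" {
        script = "provision.sh"   # provisioning steps shared across targets
      }
    }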


We're mostly using multi-arch container images for that use case.


Although it's the most common option, VirtualBox isn't the only supported hypervisor [1]. Maybe it will work with another one.

[1] https://developer.hashicorp.com/vagrant/docs/providers


I've been using devenv for new projects. I like it so far. Some might find nix (which it requires) to be overkill, but I think that's underestimating how devilish of a problem it's solving.


devbox is a similar idea, but is more approachable for those who don't know nix.


It seems more approachable than going down the NixOS rabbit hole.

I haven't had a need to reach for it yet, but I will probably try it out at some point.


I'd be cautious about thinking that you'll be able to use it for complex projects without eventually needing to enter the NixOS rabbit hole. This hasn't happened to me yet on the project that I'm using devenv with, but while using nix flakes I've found weirdness in nixpkgs that I wanted to address.

On the other hand, the ability to put non python deps in a place that feels like a venv... feels pretty magical.
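
For anyone curious what that looks like, a devenv.nix along these lines (attribute names from memory, so check the devenv.sh docs) gives you system-level deps right next to your Python toolchain without touching the host:

    { pkgs, ... }: {
      # "non python deps" live alongside the language toolchain
      packages = [ pkgs.libpq pkgs.imagemagick ];
      languages.python.enable = true;
      services.postgres.enable = true;
    }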


Yes, for Python it seems to sit at a blurry midpoint between "just use Poetry!" and "just use a container!", but I can see it potentially being useful for a batteries-included devenv in more niche applications like Julia, Elixir, microcontrollers/development boards, etc.



Vagrant was the reason for Hashicorp :-)


Too bad they abandoned it.


https://github.com/hashicorp/vagrant/blob/v2.3.7/CHANGELOG.m... ?

The changelog lists both improvements and bug fixes and there's even apparently some effort to port it away from ruby: https://github.com/hashicorp/vagrant/blob/v2.3.7/internal/cl...


Kinda sorta nix?


There's also koding.com, which has been around for a while. I tried the service quite a long time ago because I used to find the "cloud coding" concept enticing, though it was definitely not as baked back then (probably due to Monaco not existing at the time, and likewise, browsers not being as good). Nowadays I still feel like the user experience problems, limitations, and cost (when not subsidized) of coding this way largely make it a niche for people who have unusual needs. Especially because there are tons of local options for reproducible dev environments, and they can continue to work without a network connection or on unreliable connections, whereas cloud coding has been unimpressive even in the best network conditions I've seen so far.

If you happen to have the exact set of needs that one of these products solve, then it can probably work fairly well. But, even when working on "web" stuff, I always find myself feeling like they're just never quite as good as just doing things locally. I feel like devcontainers and cloud coding in general are more impressive to those who haven't managed to tame their own dev environments and are seeing a solution to this for the first time.

It's also clearly useful for people on iPads and other devices that are either too low end to run/compile your code or arbitrarily limited to prevent it. However, with how powerful iPads are nowadays, it feels like if Apple ever allowed apps to use virtualization extensions, it would probably be a better solution for a lot of people who do not need a huge cloud workstation with a lot of RAM. I imagine your average Rails app or Go backend would just have no problem running directly on an iPad.


Bridge to K8s, CloudShell Editor, Cloud Code, Coder, DevSpace, Eclipse Che, Garden, GitPod, GitHub Codespaces, ksync, Kubectl-warp, Nocalhost, Okteto, Squash, Stern, Skaffold, Telepresence, Tilt



It's not a direct competitor, but we use https://tilt.dev/ at my company for local and remote development.


I suppose Eclipse Che <https://www.eclipse.org/che/> would also count?


Coder’s big difference is that it uses Terraform for provisioning, so it can do Docker/Kubernetes as well as VMs


https://www.daytona.io is currently in stealth preparing to launch. You could grab early access by joining the waitlist.


What's its difference?


Daytona will be available as a SaaS and self-hosted alternative to cloud-based development environments like Codespaces (also using devcontainer.json). It provides similar capabilities for setting up and managing development environments, but with the flexibility of hosting and managing the infrastructure yourself.


AWS has Cloud9[1], though it's worth pointing out that it's not an exact 1:1 and may require some elbow grease to use in the same manner[2].

1. https://aws.amazon.com/cloud9/

2. https://aws.amazon.com/blogs/architecture/field-notes-use-aw... (2021)


At CodeSandbox (https://codesandbox.io) we're also working on this! Our main focus is that we're running the environment in Firecracker microVMs, which allows us to snapshot and clone environments very quickly. This lets us create a VM for every branch, which comes with the added advantage that every branch automatically has a snapshotted preview environment that can resume in ~2 seconds.


Love CSB. Using this a lot for quick setup of developer environments to test a code change, etc.


For something a bit more lightweight, Toast: https://github.com/stepchowfun/toast


This reads a lot like https://earthly.dev


Microsoft has DevBox but it’s not clear where that fits around Codespaces?

https://techcommunity.microsoft.com/t5/azure-developer-commu...


DevBox is a full VM with everything installed. Codespaces is a container with a web or SSH interface.


Seems like toolbox is also in this space; https://github.com/containers/toolbox


Toolbox is not a developer environment, but rather a tool to provide 'a toolbox' to a container host, like the older Atomic or CoreOS releases that are immutable. Distrobox is close to Toolbox, but likewise doesn't aim to provide a coder setup.


Toolbox seems pretty well suited for a console-based development setup. It works as a simple wrapper around docker/podman that lets you build your dev environments using Dockerfile syntax, which is very nice.


It is not really what DevPod provides, as they connect a code-server, JetBrains, remote vscode, etc. Sure, 'Toolbx' can do that too; you can install vim, etc. We advertise Toolbox as "interactive command line environments on Linux" foremost. DevPod offers this on Windows and macOS, beyond the command line. As you said, it is just a wrapper around a 'special' (custom images can cause issues) toolbox container and podman. The tool they offer is much more streamlined to use providers targeting different environments. Note: I'm a member of the 'containers' group on GitHub/Red Hat.


I think StackBlitz (https://stackblitz.com/) falls into this space


Is there a comparison of some sort somewhere? I have some spare resources I could use for one of these, but they all seem very similar.


I had really hoped Eclipse Theia would be an alternative.


I'm likely not the target audience, but I personally see "client-only" as a disadvantage. Ultimately I use VS Code to stand up devcontainers on my laptop, but I sometimes need to do dev work on my iPad and don't want to pay for Github Codespaces. Gitpod has worked for me in the past and I've gotten Coder setup.

Maybe this will be nice to get Jetbrains IDEs working with the devcontainer standard, since IIRC they don't support this at the moment


Don't worry. We'll be adding a server-side option for the DevPod desktop app to connect to, enabling thin-client/browser-based work, but the cool thing is that this is not a requirement to use DevPod. It's more like Terraform and Terraform Cloud. You can use Terraform entirely client-only, but you can also have a server-side solution on top for the specific things that need central management.


Will the server-side option still be included in the open source version? I have almost the same use-case. I want to run devpod (or something) in my homelab and sometimes access it from my ipad.


I've been wishing that JetBrains and Posit (nee RStudio) would adopt the devcontainer standard, but then I just read this article [1] about how MSFT is using VS Code's weird mix of open-source and proprietary components to fracture the market and ensure that any competitors who try to build off of VS Code are at a permanent disadvantage. Now I'm having second thoughts.

Devcontainers are a good example of their strategy. VS Code's source code is open source, but the devcontainers extension is not [2] and alternative vscode-in-the-browser providers are not able to use the devcontainers extension as they're not allowed to use the official VS Code extension marketplace.

[1] https://ghuntley.com/fracture/ [2] https://twitter.com/castrojo/status/1671544329402302464?s=20


Client-only is the prime selling point for me. A common project for our engineers is building/updating data ETLs & reports off of sensitive healthcare data. Since our engineer's laptops are already fully configured and registered to handle this sort of data, we wouldn't have to worry about the security/tracking of each developer potentially sending this data to more places than it needs to be.


Although they've basically closed-sourced it since release, JetBrains Projector was a fantastic tool I used a lot in the past for that - just spin up a docker container on my home server and pull out my iPad keyboard.

https://github.com/JetBrains/projector-docker

Unfortunately, it's hard to tell if JetBrains Gateway will keep all of the remote dev features or not.


+1 on this. something like openvscode integrated would be awesome


Will they ever? JB isn’t really known for doing that sort of thing.


Their new IDE Fleet is built around it.


Local first, cloud optional is the only way (IMHO) we're going to get people off their local laptop development setups.

We need to support local dev environments first, with the exact same config a developer can then move to the cloud.

See https://github.com/jetpack-io/devbox for how this can be achieved and https://www.mikenikles.com/blog/dev-environments-in-the-clou... for my thoughts after 3 years of working in this space.
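
For context, the devbox flow is roughly the following (commands from memory; check the docs for specifics), and the same devbox.json can then travel with the repo to CI or a cloud box:

    devbox init                    # writes a devbox.json into the project
    devbox add nodejs postgresql   # packages come from nixpkgs
    devbox shell                   # drop into an isolated shell with exactly those tools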


Cloud dev envs do not require millisecond latency. Typing is done locally. Latency only comes in at the times like when you go to open up a new file which isn't locally cached on your machine. Having to wait 200 ms for a file to open isn't so bad.


I was wondering how big a pain latency is when using VDIs, for example. Microsoft thinks 150 ms is OK: "If all frames in a single second take between 150 ms and 300 ms, the service marks it as "Okay."" [1] But put this in perspective: in gaming this would be unusable, and even the Apple Vision Pro claims it needs 12 ms to make things meaningfully usable. What are your opinions? [1] https://learn.microsoft.com/en-us/azure/virtual-desktop/conn...


Just to add some points of reference:

- VR headsets typically need very low frame times so that they can account for you moving your head and not give you motion sickness. The typical threshold is ~10ms, but it can be improved with good reprojection tech

- Older LCD TVs often added >50ms of input lag. This is much less of a problem now, but it was enough to bother lots of people in the 360/PS3 era of consoles


IMO it's not about latency but consistency -- when I'm in flow, I need my tools working right now, without interruption


Trying this out, and a word of caution if you use the SSH provider: it does not check host SSH keys, which is sort of a no-no:

    [15:25:45] debug Run command provider command: ssh -oStrictHostKeyChecking=no
It also seems to rely on the remote server having passwordless sudo, which is... interesting.

Edit: I've managed to make it work with my own solution (https://github.com/rcarmo/azure-dev-bootstrap). It works OK, but since I'm used to provisioning the back-end boxes by myself and using VS Code Remote, there's little practical difference (also, my setup adds the dev box to my Tailscale network).

Would be really nice if the baseline SSH provider (and assumptions as to how the remote SSH server is set up) were fixed, and if I could get it to work on an iPad with Blink shell (edge use case, I know, but I do use that a lot, or just an RDP on Linux desktop).

Oh, and add B-series VMs to the Azure provider. For burstable use like coding, testing, etc., those are much more cost effective.


> it does not check host SSH keys

Otherwise DX would be totally ruined. Totally.

> work on an iPad

I never tried it, but from a distance it looks like _working_ on that kind of device is torture. What are your impressions? How does the effectiveness compare to working on a laptop (or desktop) with a real keyboard?


Not DX, but security, yeah. I do this for one-shot automations inside a LAN or to talk to my own transient VMs inside the same box, but it is something you absolutely SHOULD NOT DO if you are connecting to a public host that may be compromised/spoofed whatever. Working at an ISP really opens your eyes to how easy that is to achieve.

(I am assuming that this, and the attempt to do passwordless sudo on the remote machine, simplifies the developers' lives a lot since they don't have to put up prompts for you to validate the host key or do privilege escalation remotely, but both are must-haves if I am to trust this solution.)
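
For what it's worth, a less dangerous default (assuming a reasonably recent OpenSSH; the host names below are placeholders) would be something like:

    ssh -o StrictHostKeyChecking=accept-new user@devbox.example.com   # trust on first use, reject changes later
    # or pre-seed the key out of band before connecting:
    ssh-keyscan -H devbox.example.com >> ~/.ssh/known_hosts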

As to the iPad, I have been doing that for many, many years: https://taoofmac.com/space/blog/2016/11/06/1930 (the date on the URL does not denote the first mention). It is an excellent way to work for many hours with a quiet, cool (as in temperature) machine from remote locations.


I prefer iPad over a laptop for a number of reasons:

- My main machine is a Mac desktop. I am not looking to add another powerful machine to the mix that I'm not going to use regularly. The iPad (or even another high quality laptop) would be just used as another beautiful screen, to work or watch Netflix on.

- I have a high quality, ergonomic Bluetooth keyboard that I pair with both my Mac and my iPad. This means my ergonomics with the iPad are better than using a laptop with its attached keyboard.

- I'm not looking to do long work sessions on these devices, although I honestly could. It's more for one off things, for updating my org-notes, for having a quick cafe work session, etc.


Which keyboard do you use?


I've got the Corne-ish Zen, and I also recently acquired the Kinesis Advantage360. Both are very niche, ergonomic boards, but I'm very happy with them.


Apologies, I'm not up to date in this space, so this is probably a stupid question, but how does this differ from docker compose?

My understanding of the value of codespaces was instant start up, literally zero to download locally, and a centralised definition. Does this mean I would go back to having to download everything locally, albeit in a nice sandboxed package with a neat definition language and a convenient command?


Docker compose is a pretty poor development environment experience. Constantly having to rebuild containers to recompile dependencies; dealing with permissions differences for volume mounts; having to modify all the scripts to start with "docker compose run --rm"; having to deal with no shell history or dot files in the application containers... it leaves a lot to be desired.


> Constantly having to rebuild containers to recompile dependencies;

How often is this actually necessary? I've had projects that stick with the same dependencies for weeks/months and don't need anything new added outside of periodic version updates. There, most of the changes were to the actual code needed for shipping business functionality.

Furthermore, with layer caching, re-building isn't always a very big issue, though I'll admit that the slowness can definitely be problematic! On the other hand, you don't have to pollute your local workstation with random packages/runtimes (that might conflict with packages for other projects, depending on the technologies you use and what is installed per project or globally), and you get mostly reproducible environments quite easily - both of those are great, at least when it works!

> ...dealing with permissions differences for volume mounts;

This is definitely a big mess, even worse if you need to run Windows on your workstation for whatever reason, as opposed to a Linux distro (though I guess WSL can help). I personally ran into bunches of issues when mounting files, that more or less shattered the illusion of containers solving the dev environment problem sufficiently: https://blog.kronis.dev/everything%20is%20broken/containers-...

But for what it's worth, at least they're trying and are okay for the most part otherwise.


> How often is this actually necessary?

Think "I have several teams and the output of 'team a' is a dependency of 'team b'" and 'team a' needs to release twice a day".


> ...and 'team a' needs to release twice a day"

That's quite the fast paced environment! In that case the shortcoming seems like a valid pain point, provided that you need to launch everything locally with debugging (e.g. breakpoints/instrumentation) vs just downloading a new container version and running it.


Is it really like that? I expected a docker container to have all the deps, with the project itself mounted as a volume. Then each container would spin up its own watcher to build/test/serve the project, and have a bash shell open for running additional commands.

Maybe my assumptions are wrong though…?


The first problem you'll encounter here is if you're using a language that keeps dependencies in a separate place (virtualenv, central cache location, etc.). You have to figure out how that works and mount that location as a separate volume, or else you'll be constantly recompiling everything when your container is recreated. Using a bind mount for the project files is also annoying because docker-compose makes no effort to sync your uid/gid, so you have all sorts of annoying permissions issues between local/container. And installing packages into your container that doesn't have an init process is... annoying at best. You can use sysbox to get one, but you're not really "just using docker compose" at that point.
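
If you do want to stay on plain compose, the usual workaround looks roughly like this sketch (service name and paths are made up): keep the dependency directory in a named volume so it survives container re-creation, and run the container as your host uid/gid.

    # docker-compose.override.yml
    services:
      app:
        user: "${UID:-1000}:${GID:-1000}"   # export UID/GID or put them in .env; compose won't pick them up otherwise
        volumes:
          - .:/workspace
          - app-deps:/workspace/.venv       # or node_modules, a pip/npm cache dir, etc.
    volumes:
      app-deps: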


That's the dream, but in my experience there are some thorns, and some things that just suck. Mostly these come from Windows, like a dev station using the wrong line endings, filesystem watchers not working if the project isn't on WSL storage, file permissions getting mucked up, etc. However, what is a pain in the butt is adding dependencies. Do you attach a shell and run npm in the container, or try to do it on the host system? Do it in the container, and you'll have to make sure those changes make their way back out, and that you rebuild the container the next time you launch it. Do it on the host, and you could run into cross-platform issues if a package isn't supported on Windows, and you'll have to rebuild the container.

However, once you're aware of this, honestly it's not that big of a deal. Docker rebuilds are pretty fast nowadays, and you can use tools like just to make the DX a little easier by adding macros to run stuff in a container.
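
A minimal justfile for that might look like this (the "app" service name is just an example):

    # run any npm command inside the app container, e.g. `just npm install left-pad`
    npm +args:
        docker compose run --rm app npm {{args}}

    # drop into a shell in the container
    shell:
        docker compose run --rm app bash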

End of the day though, folks are all gonna have their own way of working, and I think dev containers could have an advantage for peeps doing remote development. It would be nice to have a system where our developers could dial in to a container with everything they need from anywhere they want to work.


But this is still based on Docker, right? How does this address those pain points?


creating the workspace is based on docker but within the workspace you're free to do whatever you want, no need to use docker-compose there


But then how is this so different from running "docker-compose" and then doing whatever you want within the container? Is the difference just that they provide ready-made Docker images for certain environments so that you don't have to create your own? Can I get the same images on Dockerhub then?


+1, trying to wrap my head around it (disclaimer - I rarely use local docker, as I'm not a developer), hoping to get some insights that may help with a better setup for the dev team


Partially, yep. The more important part is that these Docker images run on a remote machine


I haven't used Codespaces, but how does this work with databases? A common problem we have during onboarding is getting your local database (MySQL) setup properly: run all the migrations => load some sample, de-identified, production-like data => update certain rows to allow for personal development (e.g. for our texting service, make sure you text your own phone number, instead of someone else's).

What's the workflow for this?

A related issue for us is being able to test another developer's pull request with database migrations without wiping out your current database state. Is there a Devpods workflow for this?


If you can script your setup steps, you can also run it in a devcontainer, either by using docker-compose[1] to bake it into the workspace image or using lifecycle hooks to run some scripts after creating the workspace[2]

[1]http://blog.pamelafox.org/2022/11/running-postgresql-in-devc... [2]https://github.com/pascalbreuninger/devpod-react-server-comp...
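
For the database onboarding steps specifically, a sketch of the lifecycle-hook approach could look like the following (the compose file, service name, and scripts are placeholders for whatever your migration/seeding tooling is):

    // .devcontainer/devcontainer.json
    {
      "name": "api-dev",
      "dockerComposeFile": "docker-compose.yml",   // brings up MySQL alongside the workspace
      "service": "app",
      "workspaceFolder": "/workspace",
      // runs once after the workspace is created:
      "postCreateCommand": "./scripts/migrate.sh && ./scripts/seed-deidentified-data.sh"
    }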


We've built www.snaplet.dev to introduce the exact workflow that you're describing. Unfortunately we're PostgreSQL only at the moment.

We give you a serverless PostgreSQL database per branch in your code [via Neon.tech]. Each time you branch your code we grab the latest snapshot of your production database, which is de-identified, transformed [transformations are via TypeScript], and a subset of the original.

If a coworker and yourself are coding against the same branch you're coding against the same database.

Your devs only run a single command `snaplet dev` and all this happens automatically in the background.


This looks amazing, we have written a similar script for both our local dev dbs as well as our staging env. Would love MySQL support


We'll add it as soon as we've figured out the core experience; we're close. V1.0 is just around the corner, and then we'll add MySQL and SQLite.


This exact workflow is why we have integrated Cloud IDE with full-stack branch preview (both of which know about database seed/migrations) at Coherence (withcoherence.com). [disclaimer - I’m a cofounder]. You can also integrate other seeding tools like snaplet, mentioned in a sibling comment here, which is an awesome solution to this problem!

Would be happy to discuss getting a PoC setup to see if it helps in your case, or to answer any questions, feel free to reach out


Really nice idea, doesn't seem to work with Colima / Podman so logged a bug for that but otherwise looks great. I love that it's a native app and not Electron - it's so light and fast!


It's not as native as you may think - it's made with Tauri (see https://github.com/loft-sh/devpod/tree/main/desktop). I agree that it feels great, though. Definitely puts Tauri into an (even) better light for me!


Tauri is used for the frontend (the backend is golang and rust), and it's so, so much better than electron.


I really envy folks working on backend / web stuff when I see these kind of things. Stuck building client apps on desktop and mobile, the options are slim (full VMs, slowness, etc.)


Same. I started building an automated solution for building gamedev environments in VMs[1], but I can't help but think there should be a better way.

[1] https://github.com/karlgluck/swiss-chocolatey-lab


> Mac (Silicon)

Huh? Yes, it contains silicon.


Short for "Apple Silicon", the marketing name for Apple's own series of ARM CPUs used in Macs since 2020.


They do have a lot of very weird abbreviations, like in the getting-started/install page there are options for: MacOS (Intel/AMD) (...=> amd macs? I guess hackintosh?), Linux AMD (no Intel this time?)

I get they are abbreviating amd64 but still... I've never seen it done like this...


It's approximately like shortening "Processor made by Apple" to "Processor". The important bit is gone.


Fine.

“Mac (Apple)”


This is like shortening github to git


not really, since there's the extra context of "Mac" right before it, so it's easier to know what it's about...


Not only that, when using Firefox (Chrome seems fine), it's selected by default on Windows and Linux.


FF unfortunately does not support detecting the OS reliably...


Even with "strict" tracking prevention Linux is still in the user agent string. That's been very reliably IME.


"Linux x86_64" is present on my UA on Linux, and "Windows NT 10.0; Win64; x64" on Windows. Probably a bug or just untested on Firefox. Just a bit annoying.

I'll add that this is the first site that misdetects anything like that.


Is it because of tracking protection/fingerprinting protection being on by default?


I looked at this about a week ago and think it has potential. One thing I dislike, and it's an industry wide problem, is the eagerness to continuously (re)build containers on demand [1].

Everyone does it, but I think that's a mistake. What I want is something where I can build and publish a dev container to my local Docker registry and then use that container to develop until I decide I need to build an updated version due to changes in the OS, dependencies, etc..

To help clarify, look at this picture [2]. I'd want everything up to dependencies or resources, plus all of the tooling needed to make the Jetbrains Gateway, etc. work in the dev container. I want to build that container on a calendar based schedule (ex: daily) and have everything I need to develop accessible via local repositories that I can use without connecting to the internet.

Long ago I came to the conclusion that most Docker builds aren't repeatable, so the idea of re-building a consistent environment seems naive. For example:

    RUN apt-get update && apt-get install vim
Without specifying the exact version of every dependency, you won't be guaranteed the same version of 'vim' every time. Plus, even if you specify the exact version of your direct dependencies, I think you can still end up with varying transitive dependency versions. Even just the 'apt-get update' portion of that command is often misunderstood since it can return 0 as a result of transient failure.

So, even if your intent is to build a container with the most up-to-date versions of everything, a transient failure between the update and install commands can leave you with ancient versions of dependencies, even if you intended everything to be up-to-date. This is especially true if you're using a local APT cache like Sonatype Nexus where the upstream 'update' might fail and the local cache probably has old versions of all the dependencies, allowing the install command to succeed.

IMO it's better just to assume you have zero guarantees when (re)building Docker images and you're better off adopting a strategy of build, publish, use.

1. https://devpod.sh/docs/developing-in-workspaces/devcontainer...

2. https://phauer.com/2019/no-fat-jar-in-docker-image/#the-solu...


Reading the docs more, it looks like the prebuilt workspaces [1] are closer to what I would want.

1. https://devpod.sh/docs/developing-in-workspaces/prebuild-a-w...


> Even just the 'apt-get update' portion of that command is often misunderstood since it can return 0 as a result of transient failure.

Tiny correction that got me confused: an exit code of '0' would actually mean 'success', so you probably just meant that it could fail at any moment.

But also, this can happen any time external servers are accessed, regardless of the tool. An npm install could fail any time without warning, if the servers are down. Devs should expect their supply chain to break sooner or later, and plan accordingly depending on the severity of the consequences.

My example is if you use the Ubuntu debug packages repository, you'll basically never know if the next run will work until 'apt-get update' is run. Those repos are seemingly down or 'reindexing' for hours every few days.


> Tiny correction that got me confused: an exit code of '0' would actually mean 'success', so you probably just meant that it could fail at any moment.

No. It fails and then returns 0 = success. It's a design choice. [1] [2]

> But also, this can happen any time external servers are accessed, regardless of the tool. An npm install could fail any time without warning, if the servers are down.

I think that strengthens my argument that everything should be baked in to the dev container.

Edit: Reviewing those bugs, I see there's a new option to control the behavior:

    -eany, --error-on=any
           Fail the update command if any error occured, even a transient one.
1. https://bugs.launchpad.net/ubuntu/+source/apt/+bug/1693900

2. https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=776152#15
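
Combining that option with pinning, the kind of layer I have in mind would look roughly like this (the exact version string is illustrative; check `apt-cache policy vim` against your mirror):

    FROM ubuntu:22.04
    # fail the build on any apt error, including transient index-update failures,
    # and pin the direct dependency to an exact version
    RUN apt-get update --error-on=any \
     && apt-get install -y --no-install-recommends vim=2:8.2.3995-1ubuntu2 \
     && rm -rf /var/lib/apt/lists/*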


Have you looked into distrobox? I’m using it now for my dev environment running on top of an immutable OS (microos).

It’s not perfect but I never have to spin the container down.


> Have you looked into distrobox?

No. It looks a bit heavy for what I think is an ideal solution. The way this one works with Jetbrains Gateway is what really piqued my interest.


I'm kind of obsessed with this space and have spent way too long setting up VS Code and RStudio IDEs on my homelab.

One area I struggle with when thinking about building container-based development environments is the best way to avoid mixing your IDE-specific dependencies in with your project dependencies. I think some of the commercial tools do this, but I haven't gotten it set up well in my homelab.

I just came across two articles by a former GitPod employee who moved to Coder (these are the two main providers of open-source VS-Code-in-the-browser solutions). They're both really interesting.

The first is on how effectively Microsoft has used VS Code to fracture the marketplace to their advantage by strategically open-sourcing parts of VS Code while keeping many of its best features proprietary (Pylance, the Python language server, is a good example of this). https://ghuntley.com/fracture/

The second article is about why he thinks Coder's strategy is more promising than GitPod's prompting him to go work for them. It's not as detailed, but it touches on some of the parts of container-based development environments that I've found overly limiting. https://ghuntley.com/integrate/


There is zero text on the webpage on what this actually is, except that it's like Codespaces. Is Codespaces a household word nowadays? I had to look it up.


Looks interesting but unfortunately neither the deb or AppImage options worked for me on Ubuntu 20.04. (Yes, I probably should upgrade.)


Will definitely look into this. Thanks for reporting this!


Supporting flatpak would be nice if possible


Same here


Is there IDE support for full-fat Visual Studio? I recall them adding support for the devcontainer concept sometime last year? (I could be wrong, and/or it may have been a limited set of capabilities, you know MS)

I have a mix of jetbrains, vs code and vs ent devs, would be great to unify their development experience a little


DevPod works with any IDE. We support VS Code (local and browser), pretty much the entire Jetbrains suite, and you can even connect via VIM :D


> DevPod is the first and only tool for creating and managing dev environments that does not require a heavyweight server-side setup.

I guess either "dev environment" means something different to what I understand by the term, or Nixpkgs is considered a "heavyweight server-side setup"?


Well you see if they acknowledged nix then they wouldn't be able to claim to be the first.


Haha, spicy. We definitely don't want to overlook nix. It's a great tool. We do mean something different by "developer environment" in the context of this quoted sentence. I'm not sure nix will help you much if you want to connect your VS Code to a remote VM or container and work inside of that environment. We'll work on rewording this though to make things clearer :)


I'm sure you can find a narrow enough definition; I believe in you.


"Containerised" would be less ambiguous, I think.


Oh yeah, that would make it clearer! Thanks


I also don't see how this is different from devcontainers, which are like codespaces, but without it being hosted on a server.


That's exactly the difference. You don't need to pay for Codespaces and you aren't locked into one cloud provider. You can use whatever cloud you want with DevPod


That sounds the same as dev containers, so it can't be the difference.


i tried devpod a few weeks ago on a non-trivial project. i had a .devcontainer setup from GH Codespaces already in place though. anyhow, it all worked seamlessly and i was quite impressed.

good job!

i tried running the same project locally using VSC's devcontainer extension too, but that felt really slow on a mac so i abandoned it.

what is devpod doing differently? iirc, running the devcontainer directly via the extension inside docker had the same perf problems with bound volumes. is this solved differently in devpod?

i didn't compare these two apples-to-apples style because i tried the extension on an older computer.


I made something to do remote vscode development on gcp a while back, using some terraform scripts to spin up and down environments: https://lockwood.dev/development/remote/2020/03/17/experimen...

It's pretty crude but hopefully will give people some ideas!


Thanks for sharing!


I think this looks awesome - exactly the piece that I feel has been missing from setting up a remote instance for vscode: a simple app for provisioning the instances.


Is there a solution like this which supports mobile app development? (iOS + Android). I guess USB would be needed as well.


This looks cool, but please let me use VSCodium, the non-proprietary version of VSCode, as my editor.


Really cool! Is this intended for individual users or for keeping dev environment in sync across teams of people?


That's correct. It uses the .devcontainer.json standard to define everything as code, abstracts away the cloud/infra, and gives you an easy way to spin up a dev environment from a .devcontainer.json on any infra
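
In CLI terms it's roughly (exact provider names and flags are in the docs; the repo URL is a placeholder):

    devpod provider add docker                 # or one of the cloud providers (aws, azure, gcloud, ...)
    devpod up github.com/your-org/your-repo    # reads the repo's devcontainer.json, provisions, and opens your IDE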


Tried it, but it doesn't seem to work with non-rootless podman as a replacement for docker.


Thanks for trying it! Can you open an issue on GH about this or join us on slack, so we can dig deeper into making this work for non-root podman with your input?


Are you using podman for everything in production? I am still trying to understand when to use it.


Not using it yet, but basically anywhere you cannot control who runs what, like shared builders or servers where people beyond the infra/itsec teams may have access and need to be able to run containers. Since being in the docker group/having access to the Docker socket effectively means having root access, Podman/rootless docker can be a savior.


I don't have any experience using podman in production. I use it on my dev PC for work, where I have to run Windows. Previously I used Docker Desktop for Windows, but I got tired of dealing with its various annoyances. Podman has been a mostly seamless drop-in replacement. It doesn't support swarm, which I used to use with docker, but I've found a good solution using docker-compose instead.


I’ll pop into Slack tomorrow to see if anyone can help me get it up and running. Using both the CLI and the GUI on Mac silicon with AWS, I haven’t been able to get a single instance working.

Even just an empty folder!


Is this something like Stackblitz? I tried it, because it was client-only, but it didn't support native extensions for Node.js, so I couldn't use it.


So cool to see products pop up with support for Civo, I've been with them from day one and it's a good feeling having a Civo button in that program.


Civo is great!


This is awesome! I have a powerful desktop ( that I'm using as a host ) and I can finally use my Macbook Air M1 as a dev machine.

Thank you.


Neat. I’d love to see this work with Blink on an iPad (which I already use for hand-rolled workspaces).


There is a linux arm build for the cli but not the desktop app?


Does it work on iPad?


No(t yet)


My immediate thought is the design isn't there yet. Maybe the tool is, but people also pick tools based on how they look, and this could use a facelift before being mass-marketed.


What isn't there yet about the design? It seems like a standard flat modern tool to me from the screenshots in their docs.


Looks like a nix-env clone, cool nonetheless


That's looking great! We're already experimenting with DevSpace as our primary tool for the development process. What would you say are the main differences between DevPod and DevSpace? Do they complement each other? Should DevPod at some point in the future displace DevSpace? Would love to get your view on that.


Yep, as we see it they complement each other quite well. DevPod takes your workspace to the cloud and DevSpace lets you develop against your Kubernetes cluster - potentially the same one you used to start your workspace.

Internally we use both in our development setup, spinning up remote workspaces using DevPod, installing DevSpace and kind into the devcontainer, then using DevSpace to develop against the cluster. See the vcluster setup[1] as an example

[1]https://github.com/loft-sh/vcluster/tree/main/.devcontainer


Unopinionated*



