Show HN: Clace – Application Server with support for scaling down to zero (github.com/claceio)
I have been building the open source project https://github.com/claceio/clace. Clace is an application server that builds and deploys containers, allowing it to manage webapps in any language/framework.

Compared to application servers like Nginx Unit, Clace has the advantage of working with any application, without requiring any dependency or packaging changes. Clace provides a blue-green staged deployment model for apps: not just code changes, even configuration changes are staged and can be verified before being made live.

Clace is not a PaaS solution; it does not support deploying databases and other auxiliary services. Like PaaS solutions, it manages containers, but it differs in that it implements its own reverse proxy instead of depending on Traefik/Nginx. This allows Clace to implement features like shutting down idle apps and adding app-level OAuth authentication. Clace runs natively on Windows/OSX in addition to Linux, and works with Docker/Podman/Orbstack.

Clace allows you to run hundreds of apps on a single machine. Since app containers are shut down when not in use, there is no CPU/memory resource usage when the apps are idle. It provides a Google Cloud Run type interface on your own hardware.

https://clace.io/ has a demo video and docs. Do let me know if you have any feedback.




An example to create an app is

  clace app create --approve --spec image --param image=ghcr.io/gchq/cyberchef:latest --param port=80 - cyberchef.localhost:/
This creates a Cyberchef app from its image at https://cyberchef.localhost:25223/, which can be bookmarked. Opening that link starts the container (the first access can be slow since the image is downloaded). The Cyberchef container is shut down when not in use; subsequent accesses to the URL start the container again almost instantly.

Using --spec container allows you to deploy code from any GitHub repo which has a Containerfile. There are also language-specific specs (mainly Python currently). For example

  clace app create --approve --spec python-fasthtml --param APP_MODULE=basic_ws:app  https://github.com/AnswerDotAI/fasthtml/examples fasthtmlapp.localhost:/
creates a FastHTML based app, building the image and starting the container. Checking out the source code and using a local path allows you to set up a dev environment by adding the --dev option. No dependencies have to be installed on the dev machine for this; Docker/Podman is the only requirement. For example

  clace app create --dev --approve --spec python-fasthtml --param APP_MODULE=basic_ws:app ~/mycode/fasthtml/examples fasthtmlapp.localhost:/
The goal with Clace is to build an application server for easily and securely managing internal tools.


I hadn't heard about Containerfile but it seems to be the Podman ecosystem's version of Dockerfile (e.g. https://github.com/containers/common/blob/v0.60.2/docs/Conta... )


Yes, I used the Containerfile name since it is more neutral. The file format is the same for Containerfile and Dockerfile, and Clace works with either file name.


Pretty cool!


Clace is built in Go as a reverse proxy. Clace uses https://github.com/google/starlark-go as its configuration language. This allows full Hypermedia-driven apps to be built in Clace, running within the main Clace process. For example:

  clace app create --approve github.com/claceio/apps/system/disk_usage /disk_usage
  clace app create --approve github.com/claceio/apps/utils/bookmarks /book
installs a du-like tool and a bookmark manager. These are Hypermedia-based apps using HTMX which run in Starlark. They work across Linux/OSX/Windows, with no dependencies to install; not even containers are required for these apps.


I think not bundling this into Kubernetes is a mistake if you ever want to escape the homelab.

Cold starts for JIT languages are problematic for anything customer-facing, so one instance must be kept running. So now I have two platforms, and that always sucks as the SRE.


Starting on K8s will make scaling down difficult. Starting without K8s for Clace makes it possible to control the developer experience better. Adding K8s and scaling up can be done later.

At an abstract level, Kubernetes is used for managing compute (applications) and storage (databases/queues/file stores/volumes): stateless services and stateful data stores. Compute by itself is easy to scale. Managing storage and stateful data stores is where much of the complexity of K8s comes from.

For many workloads, it makes sense to use a managed database (RDS/managed Redis/S3 etc). Data backups, performance etc. are easier to handle with a managed service, at the cost of how much you pay. If the stateful data stores are externally managed, using K8s for compute might be overkill, especially when you consider the extras (like ArgoCD, a service mesh, an IDP etc.) which need to be added to make it a useful developer experience.

Clace aims to provide a scalable solution for compute only scenarios, providing a great developer experience while targeting internal tools as the primary initial use case.


Clace currently is single-node. It uses a SQLite database for app metadata (which is what allows it to support atomic updates across multiple apps).

I will be adding multi-node support soon. The user will have to bring their own Postgres database and load balancer. Multiple Clace instances will run in parallel, each spinning up containers locally using Docker/Podman. For the internal tools use case, I think that should scale to support most workloads.

There will be cases where delegating container management to Kubernetes might make sense. Container management in Clace is a thin layer; it should be possible to use a K8s service/deployment wrapper instead of local Docker or Podman.

The auto-idle feature might cause latency for services with a high startup cost. For most apps it should be fine, and it can be disabled per app.


Cool project! I've been looking for a lightweight alternative to PaaS solutions. Clace's ability to scale down to zero is a huge plus. How does it handle stateful apps, or is it mainly designed for stateless ones? Also, what's the story with logging and monitoring - does Clace provide any built-in tools or integrations? Lastly, have you considered adding support for serverless functions, or is that out of scope for the project? Looking forward to digging into the GitHub repo and learning more!


For VOLUMEs defined in the Containerfile, Clace creates named volumes which are retained across container updates. This is useful for log files and any data volumes used by the container, including SQLite database files. An app which starts multiple services within one container can also use such a volume.
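For example, a Containerfile along these lines (a generic sketch; the paths and commands are placeholders) would get a named volume for /data that survives app updates:

  FROM python:3.12-slim
  WORKDIR /app
  COPY . .
  RUN pip install -r requirements.txt
  # Clace creates a named volume for this path, retained across container updates
  VOLUME /data
  CMD ["python", "main.py"]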

You can create a database container and pass the database details to Clace apps as params. Clace itself does not support creating database containers currently; the database has to be managed outside of Clace. The reverse proxy implemented by Clace is an HTTP proxy; it does not support proxying non-HTTP protocols.
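For example, something like this (the param names and repo URL are placeholders; the app decides which params it reads):

  clace app create --approve --spec container --param DB_HOST=mydb.internal --param DB_PORT=5432 https://github.com/myorg/myapp myapp.localhost:/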

You could create an app with a small idle value, like 10, for container.idle_shutdown_secs (the default is 180; see https://clace.io/docs/container/config/). That way, for every API call a container is started (unless one is already running) and then shut down shortly after, giving a serverless-function type of experience. There is currently no way to ensure that parallel API calls each run in a separate container.
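For example, for an app mounted at /myapp (a placeholder path), the idle value could be lowered with:

  clace app update-metadata conf --promote container.idle_shutdown_secs=10 /myapp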


Neat! I currently use jwilder/nginx-proxy and docker compose to run a small fleet of apps on my home server. This all-in-one solution sounds like it would be more streamlined for single-machine deployments like this.

I use Kubernetes in <day-job> and while I'm a big fan, it's incredible overkill for running a few services a la Syncthing and Vaultwarden.


Clace is more of an AppServer than a PaaS solution. Docker Compose is not supported currently. If you have a postgres database and want to deploy multiple apps which access the same database, then Clace provides blue-green staged deployment, GitOps, OAuth support, auto-pause etc. for those apps. The app updates are atomic (all-or-nothing). The postgres database itself will have to be managed outside of Clace. If you want each app to have its own database, then a PaaS solution which supports deployment of pre-packaged apps, including Docker Compose support, is what you want.

Clace is targeting use cases where you already have external databases/REST APIs/CLI tools etc. and want to build and deploy multiple apps pointing to them. An AppServer for deploying internal tools for use across a team is a target use case. For local dev, one use case is that Clace helps you set up a dev environment for webapps, with auto-reload, without having to install any dependencies.


Check out https://github.com/skateco/skate

I'm building it for exactly that reason. Multihost and supports k8s manifests.


I'm not trying to yuck your yum, but you'll want to be _very careful_ about the uncanny valley of squatting on k8s manifests since there is a ton of functionality in those files and (as best I can tell) only by reading your readme can one tell which features actually work versus are just silently(?) swallowed


I just remembered that a big reason I did this was because podman supports controlling pods directly via a subset of the k8s resource manifests (podman kube play). Squatting, as you put it, on the same version.

https://docs.podman.io/en/latest/markdown/podman-kube-play.1...

I suppose I could tighten things up at my end anyway though.


Yeah, you're right, I'd need to at least have some kind of table of supported attributes, or even mint my own schema with the subset that's valid for skate.


Perhaps a tool that processes a k8s manifest and produces a modified manifest containing only the attributes that are supported?


You mean so the user can see themselves what will be applied?


Sure. Looking at the output would make clear which properties are actually recognized, and it could be committed to version control to avoid confusion.


Scale to zero makes me think it might be useful to specify a time window (cron?) where scaling to zero is allowed, e.g. from 6pm-8am it can scale to zero, but during the work day it never fully scales to zero. Makes me wonder if this is a common pattern.


For webapps which are accessed as web pages or through an API, looking at the API activity indicates whether the app is being actively used. Clace currently treats no new API calls within the last 3 minutes as indicating idleness. This is aggressive; it can be tuned as required (https://clace.io/docs/container/config/). For apps using websockets or server-sent events, the API activity check will not be accurate.

For background jobs and cron jobs, some kind of cron trigger could be defined in the app to wake up the container; this is not supported currently. If an app has background jobs or the idle check is not accurate, auto-pause can be disabled for that app.


Right, I am suggesting a feature to disable the idleness check during a window of the day


This could be done using a system cron job which runs

  clace app update-metadata conf --promote container.idle_shutdown_secs=180 /myapp
every day at 6pm and then runs

  clace app update-metadata conf --promote container.idle_shutdown_secs=0 /myapp
at 8am. Setting idle_shutdown_secs to zero disables the shutdown. Using all as the last arg updates all apps.
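As a concrete sketch, the crontab entries could look like this (assuming the clace binary is on the cron user's PATH and the app is mounted at /myapp):

  # re-enable auto-shutdown in the evening, disable it for the work day
  0 18 * * * clace app update-metadata conf --promote container.idle_shutdown_secs=180 /myapp
  0 8 * * * clace app update-metadata conf --promote container.idle_shutdown_secs=0 /myapp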


I like the idea and congratulations on good docs.

Two questions:

1. Do you support client certificates for authenticating clients?

2. Do you have some performance benchmarks?


Thanks, need to make the docs less verbose in some places :-)

https://clace.io/docs/configuration/authentication/ lists the supported auth mechanisms for apps. A builtin system account is the default. The OAuth providers supported are: github, google, digitalocean, bitbucket, amazon, azuread, microsoftonline, gitlab, auth0, okta, oidc. Any other provider supported by https://github.com/markbates/goth can be added easily, with a small code change.

For admin operations (creating/updating apps) using the client CLI, a unix domain socket is used. No auth other than file-system-level permissions is used for the UDS. A REST API for admin operations can be optionally enabled, in which case it uses the system account https://clace.io/docs/configuration/security/#admin-api-acce....

Client cert based auth is not supported currently. Were you wanting that for app access or for admin API access?

In terms of performance, I did some testing a few months back. The app access path does not hit the database (SQLite); everything is cached after the first call. So performance will be limited by the API performance of the downstream container; the Clace server itself should not be a bottleneck. The first API call to a containerized app builds the image and starts the container, so that depends on how fast the image build and container startup are.


The client cert based auth would be for the app access.

Our use-case is review apps, i.e. <some-feature>.example.com should only be accessible by users with a valid client cert. Currently we use Caddy, but I'd like to give clace a shot for this. :)



mTLS support has been added for apps. Docs are at https://clace.io/docs/configuration/authentication/#client-c....

Release v0.7.5 has the change
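For example, once an app requires a client cert, access from the command line would look something like this (file names and hostname are placeholders):

  curl --cert client.crt --key client.key https://myfeature.example.com/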



