Hacker News | simonvdv's comments

This happened https://github.com/steinbergmedia/vst3sdk ;)

But still, I see no reason to use it over LV2 or even LADSPA.


There is an obvious reason: implementing a whole new plugin format is harder than just implementing a graphics backend for VST2. LV2 and LADSPA are only useful on Linux IIRC, and that's not necessarily a huge audio market.


Or, as an alternative, DPF https://github.com/DISTRHO/DPF, which at least supports JACK and LV2 (the pull request for it for JUCE has gone unmerged for years :() and has no additional license restrictions.


Do you know what development and QA processes are in place for in kernel filesystems and how that compares to ZoL's development and QA process?


They are probably referring to the fact that in the Linux world, in-tree modules get a lot of maintenance for free when the kernel refactors the pieces beneath.

If you have ever written a kernel module out of tree and then tried to upgrade the kernel version, you know this pain.


There are no QA processes per se that apply to all file systems. The kernel is pretty much a bundling of many projects, and the big file systems are their own projects, managed largely independently of the mainline kernel maintainers.


First of all, thanks for getting in touch with your users! Hope you'll be able to extract some useful info from it :)

My basic suggestion would be to keep it simple, so stay with the GNOME apps where you can. Also it might make sense to make a distinction between what people feel is a good choice of software and which of those should be included in the default install.

IMHO stuff like an IDE, e-mail client, IRC client, messaging client, office suite, and screen recorder doesn't have to be included in the default install, as long as it's easy enough for everyone to add them later (or customize during install).

Regarding specific items:

- Terminal: gnome-terminal, but if possible look into making the tabs a bit less tall and fixing the search dialog so it can be closed by pressing escape

- File manager/photo viewer: nautilus, but look into fixing the preview (spacebar) so that it allows opening the preview window once and then navigating through all files in the chosen directory using the arrow keys

- Calendar: gnome-calendar, but make sure you use GNOME 3.24 or later so it supports dark mode

- Screenshots: gnome-screenshot, but please fix it so it's possible to take multiple screenshots in succession. Right now one has to close and reopen it to do so.

- Video player: Technically mpv, maybe with the gnome-mpv GUI. Though mpv might be too difficult to use for some users?

- Music player: Imho none of them is really good enough :( Elementary's Noise might get there at some point


Hmm, that's a pity, even though it shouldn't come as a surprise for anyone who's actively using/involved with fleet. I like the simplicity and flexibility of fleet (basically distributed systemd) a lot and don't necessarily want to switch to a bigger scheduler like Kubernetes. Does anyone have suggestions for/experiences with an alternative, simpler scheduler (like Nomad, or an alternative solution like the autopilot stuff from Joyent)?


Nomad dev here. We should definitely tick the simplicity box for you. If not, let me know. :)

Nomad is a single executable for the servers, clients, and CLI. Just download[0] & unzip the binary and run:

    nomad agent -dev > out &
    nomad init
    nomad run example.nomad
    nomad status example
And you have an example redis container running locally!

Nomad supports non-Docker drivers too: rkt, lxc templates, exec, raw exec, qemu, java.[1] To use the "exec" driver that doesn't use Docker for containerization you'll need to run nomad as root.

[0] https://www.nomadproject.io/downloads.html

[1] https://www.nomadproject.io/docs/drivers/index.html
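For reference, the example.nomad written out by `nomad init` looks roughly like this (a from-memory sketch of the 0.5-era job syntax; the datacenter, image version, and resource numbers are illustrative):

```hcl
# example.nomad - a minimal service job running redis via the docker driver
job "example" {
  datacenters = ["dc1"]
  type        = "service"

  group "cache" {
    count = 1

    task "redis" {
      driver = "docker"

      config {
        image = "redis:3.2"
        port_map {
          db = 6379
        }
      }

      resources {
        cpu    = 500 # MHz
        memory = 256 # MB
        network {
          mbits = 10
          port "db" {}
        }
      }
    }
  }
}
```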


Nomad user here, no k8s experience. I have been using it for more than 6 months (Docker containers + short-running jobs). The main features I like: deployment simplicity, responsive scheduling, disaster recovery, and service discovery integration.


Completely unusable product for us because of the lack of persistent storage.


Sorry to hear that! We've definitely focused on stateless containers until 0.5 which introduced sticky volumes and migrations. Useful in some cases but definitely doesn't cover all persistent storage needs.

Extensible volume support will be coming in the 0.6 series via plugins.


We are moving toward container-pilot and it's A+. We have been using an adapted autopilot pattern for some time now with our thick VMs and it's been great. There is no one system that solves all problems and fits all paradigms, but it seems like container-pilot / autopilot as a pattern is very successful at delivering simplicity.

BTW, we are also using Triton (formerly SmartDC) from Joyent and are absolutely loving it. It's not without its rough edges, but it is by far the best public/private cloud option we have found that supports containers and VMs.


Same here. What made me like fleet despite the many problems with it is the simplicity and that it is not a container scheduler but a systemd unit scheduler, so it is far more flexible than just a container scheduler.

I have projects where Kubernetes is probably the right choice, but I have many more where Kubernetes is massive overkill and where I also need/want the distributed systemd units.


Docker Swarm - especially 1.13 with the new, simpler yml file based deployment


We ran into the access control limitations as well. They are caused by the fact that, for some reason, AWS ES only supports resource-based policies, which is imho the wrong way around to manage your policies.

We did get it to work in a usable manner by having the ES policy apply to a role (i.e. the principal is a role). If you then apply that role to your instances, it will work with instance-profile-based auth.
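For illustration, such a domain access policy looks roughly like this (the account ID, role name, region, and domain name are all made up):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/logstash-instance-role"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:eu-west-1:123456789012:domain/my-domain/*"
    }
  ]
}
```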


That's exactly what I was doing with the Elasticsearch plugin for Logstash, but I still couldn't figure out how to auth vanilla HTTP requests to it from Kibana or the like... Then I decided I'd wasted way too much time on this and would just build it myself. Other services such as Bonsai support basic auth, which I would have almost preferred :/


I was able to use the code mentioned in this AWS forum post to configure a proxy using node.js: https://forums.aws.amazon.com/thread.jspa?threadID=218214

Code: https://gist.github.com/nakedible-p/ad95dfb1c16e75af1ad5

Looks like it's been turned into an NPM-installable module too: https://github.com/santthosh/aws-es-kibana


We're using this Logstash output plugin at the moment: https://github.com/awslabs/logstash-output-amazon_es. Together with instance-profile-based permissions it works as it should.

We are considering switching to the normal logstash-output-es plugin together with an AWS v4 auth signing proxy to make the setup more portable/less tied to AWS. I have a basic signing proxy working based on nginx/OpenResty. If you're interested just let me know.
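For anyone curious, the core of such a proxy is just AWS Signature Version 4. A stdlib-only sketch of the signing step (the function name, header set, and the GET/empty-body simplification are mine; this follows the algorithm from AWS's SigV4 docs, not any particular proxy's code):

```python
import datetime
import hashlib
import hmac


def sign_v4(method, host, path, region, service, access_key, secret_key):
    """Build SigV4 headers for a request with no query string and an empty body."""
    t = datetime.datetime.utcnow()
    amz_date = t.strftime('%Y%m%dT%H%M%SZ')
    date_stamp = t.strftime('%Y%m%d')

    # Step 1: canonical request (empty query string, empty payload)
    payload_hash = hashlib.sha256(b'').hexdigest()
    canonical_headers = f'host:{host}\nx-amz-date:{amz_date}\n'
    signed_headers = 'host;x-amz-date'
    canonical_request = '\n'.join(
        [method, path, '', canonical_headers, signed_headers, payload_hash])

    # Step 2: string to sign
    scope = f'{date_stamp}/{region}/{service}/aws4_request'
    string_to_sign = '\n'.join(
        ['AWS4-HMAC-SHA256', amz_date, scope,
         hashlib.sha256(canonical_request.encode()).hexdigest()])

    # Step 3: derive the signing key by chaining HMACs
    def hmac_sha256(key, msg):
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()

    k = hmac_sha256(('AWS4' + secret_key).encode(), date_stamp)
    for part in (region, service, 'aws4_request'):
        k = hmac_sha256(k, part)
    signature = hmac.new(k, string_to_sign.encode(), hashlib.sha256).hexdigest()

    # Step 4: headers to attach to the upstream request
    auth = (f'AWS4-HMAC-SHA256 Credential={access_key}/{scope}, '
            f'SignedHeaders={signed_headers}, Signature={signature}')
    return {'X-Amz-Date': amz_date, 'Authorization': auth}
```

A proxy would compute these headers for each incoming request and forward it to the ES endpoint with them attached; real requests also need the query string and body folded into the canonical request.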


Even though using Alpine as the base image for a container is a lot better/cleaner than other base images, imho we shouldn't rely on distro package management inside containers.

Not only does running a package manager inside the container mean you'll need to satisfy its dependencies in your image, it also increases the image's attack surface compared to an image without a package manager.

Ideally we'd have a simple way of installing stuff into images from the outside so you can always start `FROM scratch` and add the minimum deps you need to run your app. Adding stuff could be as simple as extracting tars with the tar's contents following the Filesystem Hierarchy Standard. Each tar could be a layer so it matches well with how Docker images work as well.

Since it isn't possible to extend the Dockerfile syntax, I started prototyping a static binary written in Go to add to `scratch` to do this. It worked better than I expected :) The only thing I couldn't find was a distro that packaged its packages this way, and it would obviously suck to create yet another packaging standard.
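In the meantime, Docker's `ADD` instruction already auto-extracts local tar archives into the image, so part of this idea can be approximated today (the tarball names below are hypothetical, and each archive would need to contain a self-contained FHS-shaped subtree):

```dockerfile
# Start from an empty image and layer in FHS-shaped tarballs.
FROM scratch

# ADD auto-extracts local .tar(.gz) archives; each ADD becomes its own layer.
ADD musl-libc.tar.gz /
ADD myapp.tar.gz /

ENTRYPOINT ["/usr/bin/myapp"]
```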


Your script is pretty much the same as mine, good to see :) For everyone who wants to give it a try and wants some instructions: http://simonvanderveldt.nl/boot2docker-on-xhyve/

I've also done some quick benchmarks, they are at the bottom of the post. Virtualbox had about 50% higher disk IO performance in my measurements.

Regarding the root permissions for networking: As far as I know it shouldn't be necessary if you sign the xhyve binary, haven't tried that though.

