
Licensing.


Recommended alternatives?


At the moment, the capabilities of technology simply don't permit the level of... generic-ness? that vendors like OTel -- and customers really -- want. If you try to push everything through a single pipeline, as a single kind of data, you necessarily lock yourself out of most of the value you should be getting. You need a heterogeneous stack, with specific tools for specific purposes.

Opinions vary but my experience is that Prometheus-style metrics are by far the most important thing to invest in, and deliver the majority of the "observability value" to the broadest set of architectures. Tracing systems like Lightstep can be super useful, too, but to deliver value they need a lot more end-to-end integration effort, and the cost of setting it up can very very easily outweigh the benefits it provides.

I've come to believe that logs are a trap. Everyone understands them intuitively, so they feel comfortable using them for basically everything, without thinking very critically about the ramifications. And even when logs are structured, they have no substantive schema, so there's no backpressure, so to speak, on usage. So the signal/noise ratio almost immediately goes negative. And they just occupy enormous amounts of time and memory to manage, process, analyze, etc. Although I'm sure it's not a universal truth, I've found that everything you might normally think to log is actually much better served by in-process request tracing. In general that means maintaining the log events of the most recent N requests for all of a set of application-defined categories. This is basically a real-time view of a system, with history proportional to, I dunno, rarity? of the request class. You don't ship these anywhere, you ask the applications directly. It seems weird to describe but it's just vastly more efficient to manage, and equally if not more useful for debug and triage.
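
To make that concrete, here's a minimal sketch of the idea (all names invented, not any particular library): each application-defined category keeps only its last N events in memory, and you query the running process directly instead of shipping anything anywhere.

    import collections, threading, time

    # Hypothetical sketch: keep the last N events per application-defined
    # category, queryable in-process instead of shipped to a log pipeline.
    class RequestTracer:
        def __init__(self, max_events_per_category=100):
            self._buffers = collections.defaultdict(
                lambda: collections.deque(maxlen=max_events_per_category))
            self._lock = threading.Lock()

        def record(self, category, **fields):
            with self._lock:
                self._buffers[category].append({"ts": time.time(), **fields})

        def recent(self, category):
            with self._lock:
                return list(self._buffers[category])

    # e.g. tracer.record("checkout", user_id=42, step="payment", ok=False),
    # with tracer.recent("checkout") exposed on a debug endpoint.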

YMMV.


This helps highlight the main issue I have with Python today, and that's running Python apps.

Having to ship a compiler to a host|container to build python with pyenv, needing all kinds of development headers like libffi for poetry. The hoopla around poetry init (or pipenv's equivalent) to get a deterministic result of the environment and all packages. Or you use requirements files, and don't get deterministic results.

Or you use effectively an entire operating system on top of your OS to use a conda derivative.

And we still haven't executed a lick of python.

Then there's the rigmarole around getting one of these environments to play nice with cron, having to manipulate your PATH so you can manipulate your PATH further to make the call.

It's really gotten me questioning my assumptions about what language "quick wins" should be written in.


You can use Bazel to build a self-contained Python binary that bundles the interpreter and all its dependencies by using a py_runtime rule[1]. It's fairly straightforward and doesn't require much Bazel knowledge - there are simple examples on GitHub[2].
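
For a rough sense of what that looks like, a minimal BUILD file (Starlark, a Python-like configuration language) is something like the sketch below; the target names are made up, and the interpreter bundling itself comes from the py_runtime/toolchain configuration described in the docs:

    # BUILD -- hypothetical single-script tool
    py_binary(
        name = "mytool",
        srcs = ["mytool.py"],
        main = "mytool.py",
        # third-party deps get wired in via rules_python's pip rules
    )

Building that target with bazel build then gives you the runnable artifact.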

There are a couple other tools that take the same approach, including PyOxidizer[3], which was written by a Mercurial maintainer.

[1] https://docs.bazel.build/versions/master/be/python.html#py_r...

[2] https://github.com/erain/bazel-python-example

[3] https://pyoxidizer.readthedocs.io/en/stable/overview.html


Can you really make it self contained? For example, does the host need a tz package? What about libssl or libcrypto?

As far as I know the only language making static binaries easily is Go, but it was a first class language design principle.

For everybody else it's a janky 1000-line Makefile. And don't get me started on cross-compiling!


> As far as I know the only language making static binaries easily is Go, but it was a first class language design principle.

Rust does this as well.

The official high level build tool, `cargo`, uses a declarative TOML file for dependency management and supports lock files for deterministic builds. The default output is a single, statically linked binary.

Rust does depend on libc (like Go) which brings in dynamic linking on some platforms. But Cargo supports easy cross-compilation, and the `x86_64-unknown-linux-musl` target will produce a fully static binary.


D, Rust -- just what first springs to mind.


> Binaries produced with PyOxidizer are highly portable and can work on nearly every system without any special requirements like containers, FUSE filesystems, or even temporary directory access. On Linux, PyOxidizer can produce executables that are fully statically linked and don’t even support dynamic loading.

https://pyoxidizer.readthedocs.io/en/stable/overview.html


I've found PyOxidizer immature in comparison to pyInstaller. https://www.pyinstaller.org/

Your mileage may vary, but I think the former is still very much a work in progress.


It can statically link Linux executables, including musl libc, libssl, libcrypto, and other libs that are usually dynamically linked.

It can't do this for non-Linux executables (e.g. Windows, Mac). On those OSs, the executables are dynamically linked with system libraries.


Rust?


Rust can definitely do it, but there are still a lot of gotchas. Many languages can do it, but there are so many pitfalls, for example needing a host tz package.


I would argue Rust does it much better than Go. When you have to resort to hacks like cgo that subtly change the performance and functional characteristics of your program, I wouldn't call it "first class". It's good, don't get me wrong, I like how Go cross-compiles most things. I wouldn't say it's the gold standard as long as cgo continues to be a thing.

Edit: I mention cgo because many who want to cross-compile a statically linked binary may want to interface with other libs via FFI, and this is a huge gotcha. It is a bit tangential to strict "static linking binary building".


So now as a developer you want me to install the jvm first, before even installing python?


Bazel bundles a private Java runtime, so you don’t need to install the JDK unless you plan to compile Java code: https://docs.bazel.build/versions/master/install.html


I decided to drag myself kicking-and-screaming to the 21st century and start writing my handy-dandy utility scripts in python instead of bash. All was well and good until I made them available to the rest of my team, and suddenly I'm in python dependency hell. I search the internet and there are a lot of different solutions but all have their problems and there's no standard answer.

I decided "to heck with it" and went back to bash. There's no built-in JSON parser but I can use 'grep' and 'cut' as well as anyone so the end result is the same. I push it to our repo, I tell coworkers to run it, and I wash my hands of the thing.


jq has been a lifesaver for me parsing json in bash. Of course, it's an external utility not present by default in most systems.

Another thing to consider is more of a middle-ground approach. Most systems do have a python interpreter, so you can use a lot of base python without worrying about dependency hell. I use inline python in bash all the time, e.g.

  ls | python -c 'import sys,json;lines=sys.stdin.read();print(json.dumps(list(filter(bool,lines.split("\n"))),sort_keys=True,indent=2))'
You can even use variable substitution if you surround the Python code in double quotes, and even mix f-strings and bash substitution:

  python -c "print(f'Congrats, ${USER}, you are visitor number ${RANDOM}. This is {__name__}, running in $(pwd)')"


Or use a heredoc to not worry about competing quote chars:

  python << EOPYTHON
  print("Congrats, ${USER}")
  print("You are visitor ${RANDOM}")
  print("This is {__name__}, running in ${pwd}")
  print("It's a heredoc to allow both quote characters")
  EOPYTHON


Great trick with using the python standard lib! Thanks for posting that.

edit: You probably already know this, but for anyone reading along, piping `ls` is unsafe if you plan to use the paths for anything except for printing them out. A path on linux can contain any byte except for NULL, so when `ls` prints them out, you can get broken behavior if you try to break on newlines.
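
If you do need the file names programmatically, a pure-Python listing sidesteps the ls-parsing issue entirely; roughly:

  python -c 'import json,pathlib;print(json.dumps(sorted(p.name for p in pathlib.Path(".").iterdir()),indent=2))'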


Just a question - why do you have a dependency hell? You could restrict yourself to the Python standard library, and you would only have one dependency. The Python standard library is much nicer than bash if you need more complex data structures than what bash provides.
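
For instance, something like this needs nothing beyond the stdlib (a made-up example: grouping records from a JSON file by a field, which is awkward in bash but trivial with dicts and the json module):

    #!/usr/bin/env python3
    import collections, json, sys

    # Group status codes by host from a JSON array of objects
    # like {"host": "...", "status": 200}.
    by_host = collections.defaultdict(list)
    for record in json.load(open(sys.argv[1])):
        by_host[record["host"]].append(record["status"])

    for host, statuses in sorted(by_host.items()):
        print(host, collections.Counter(statuses))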


"grep" and "cut" are not Bash, they are programs and have dramatically different feature sets between distributions and OSes (grep on MacOS is very different from grep on a modern Linux distribution using GNU Coreutils, and there are many incompatibilities). Many scripts that work on Linux won't work on Mac because of this.

With Bash, your best bet for portability is to run scripts in a Docker container. If you want portable code, you have to bundle your dependencies--there's no free lunch here, including Bash.


When I was at Google I had a similar problem (team wasn't using Blaze). So what I did was to have a wrapper entrypoint around every python entrypoint that would just run that python entrypoint (e.g. foo would execute foo.py). The advantage was that the shell script would first set up a virtual environment for every entrypoint and install all the packages in the requirements.txt that was beside the entrypoint (removing any that were no longer listed). Each requirements.txt was compiled from a requirements.in file via pip-compile (from pip-tools [1]), which meant that devs only had to worry about declaring just the packages they actually directly depended on. Any change to requirements.in would require you to have rerun pip-compile, which wouldn't (by default) upgrade any packages & would only lock whatever the current versions are (automated unit tests would validate that every requirements.txt matched its requirements.in file).

This didn't solve the multiple versions of python on the host. That was managed by having a bootstrap script written in python2 that would set up the development environment to a consistent state (i.e. install homebrew, install required packages) that anyone wanting to run the tools would run (no "getting started guides") which also versioned itself & was idempotent (generally robust against running multiple times). We also shipped this to our external partners in the factory. Generally worked well as once you ran the necessary scripts once no further internet access was required.

It wasn't easy but eventually it worked super reliably.

[1] https://github.com/jazzband/pip-tools


I actually did something very similar when my application had to execute a python script on any old box and I was strictly forbidden to make any changes on the host machine. My application refused to start if python 3 wasn't found, so I didn't have to deal with that mess. It ran bash, set up the venv, did python-y stuff, cleaned up the venv: take only pictures, leave only footprints.


The caveat is that with mine the venv wasn't destroyed at the end of execution. Instead I put a snapshot of the sha256sum of the requirements.txt file which I double-checked on boot. If that changed then I ran pip-sync.

This was critical for devs because this was the underlying thing for all scripts devs ran (build system, terminal to device, unit tests, etc etc). Startup latency was key & I spent time optimizing that to feel as instant as a native executable unless the virtual environment changed which isolated the expensive part (& generally happened more & more rarely for any given tool as I found the dependency set to mature & freeze pretty quickly).

This had a great side benefit making it super-easy to run the scripts once on an internet-connected device & then use that as the base image for all the factory machines that could then be offline because all the virtual envs had been initialized.
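
A simplified sketch of that check (paths illustrative, assuming the venv already exists with pip-tools installed):

    import hashlib, pathlib, subprocess

    # Re-sync the venv only when requirements.txt actually changes.
    req = pathlib.Path("requirements.txt")
    stamp = pathlib.Path(".venv/requirements.sha256")   # snapshot location
    digest = hashlib.sha256(req.read_bytes()).hexdigest()

    if not stamp.exists() or stamp.read_text().strip() != digest:
        subprocess.run([".venv/bin/pip-sync", str(req)], check=True)
        stamp.write_text(digest)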


This might seem like lunacy, but I really like/recommend Ammonite instead of Python/Bash.

It's Scala, runs on the JVM, and is perfect for writing scripts. (It has a great built in dependency resolver, I mean it uses Ivy, but it downloads the dep by itself, you just import it via the "maven coordinate" - http://ammonite.io/#IvyDependencies )

It gives you a lot more safety/correctness than Python, and it's a bit simpler to install too. (No need to compile extensions, just get JDK8 and it'll run.)


The solution to this (at least the one we've landed on at work) is to make sure your dependencies are packaged in a yum repo you include on your systems. For us, that's a local private yum repo our systems have access to, which we package Perl module requirements into that aren't in the public repos. We also include our private libraries there. If the utility script is commonly enough used, we'll make an RPM for it as well, or stick it in one of our general-purpose utils RPMs and make sure dependencies are set. If that's done, you don't have to worry about dependencies at all; if not, you might have to manually yum install a few things that are grabbed from our yum repo.

There are lots of ways to handle this problem, but if you're handling lots of systems, you presumably already have a method you use to keep them up to date and secure. You presumably are also installing Python from the system packages (if not, you probably shouldn't be writing system utils in it unless you can ensure it's the same on every system you guys maintain, in which case your dependency problem shouldn't be a problem), so tie into that mechanism. It's a lot easier to reason about when there aren't two competing systems, and presumably you aren't going to do away with the security updates the distro provides.


While I can understand your pain related to dependencies with Python, I still cannot wholeheartedly support this approach. Depending on the case, bash scripts are valuable and should be used instead of Python. However, they can be painful for other developers if used in the wrong situations.

I recently received a script from a partner company that used this approach for forwarding data to their API. It was quite long and had a few dependencies that were not visible until you (stupidly) executed it.

Few random thoughts:

- Bash scripts can be run in environments where the required binaries are not all present. In these cases the script might cause damage if it expects everything to be available.

- When someone is unexpectedly required to modify the script, it can be difficult or cause issues if done by an inexperienced developer (in this age I wouldn't be surprised)

- If the script depends on a specific version of a program to get the desired results, it may cause issues

- The environment where the script runs is usually not a vacuum. Other scripts might change environment variables or change/remove programs in general

While dependencies with Python can cause issues in the future, the trade-off is having some sort of control, as long as you don't execute other binaries directly.


Exception handling and (unit|py)test are worth the headaches.


This is why I've switched to writing "quick wins" in shell [or Go]. It's just so much nonsense that has nothing to do with actual programming. Posix shell can be a bit baroque, but you know that it's not ever going to change, and because of that it's pretty easy to ship to any *nix.

There is the question of the dependencies of a shell script, but I find in practice that just checking for deps like `curl` at the beginning leads to a better user experience. It's unlikely that there are going to be a ton of tools you require, and the tools you do require are probably going to be good about backwards compatibility [curl again as an example].


Except it does change, all the time. There are innumerable differences between the OSX, BSD, GNU, and other versions of common command line tools. There are plenty of cases where `jq` will or will not be available. Finally, there are differences in how `/bin/sh` will interpret things (which there shouldn't be) depending on whether the underlying shell is ksh, zsh, bash, dash, etc.


> There are plenty of cases where `jq` will or will not be available.

Sure. The argument is that it's a lot easier for the user of the program to read an error message that says "jq is required. Run apt-get install jq or brew install jq" than to fuck around with the Python or Ruby ecosystem, especially if they don't work in those languages.

> Finally there are differences in how `/bin/sh` will interpret things (which there shouldn't be) depending upon underlying shell is running ksh, zsh, bash, dash

Do you have an example of code that is written to the POSIX standard of shell that runs differently? I only write POSIX shell, and use https://github.com/koalaman/shellcheck to verify that to prevent that exact thing.


And if you cannot install jq because you're not root, you can still wget it somewhere, it's a static binary.


It's not a static binary, at least on my Mac. It dynamically links against Oniguruma.


I generally agree with your sentiment here, but be careful with assuming bash==bash

There are differences between versions. I can't even remember what they are off the top of my head like I used to, which makes them all the more aggravating to discover again.

But I would recommend sticking to a subset of bash, not any of the newer fancy features like 'globstar', which allows recursive globbing.

There are tools to manage these kinds of tests, like bashenv. But you're in the same problem scope at that point.


I much agree with the sister comment and I write my shells for /bin/sh also. There is this wonderful tool called ShellCheck ( https://www.shellcheck.net/ ) that checks that your script is actually POSIX-compliant if it starts with #!/bin/sh


That's why I said "Posix shell" and not bash.


POSIX shell is miserable for programming anything beyond a couple lines. It doesn't even have arrays[1], so you have no available container types within the interpreter itself.

[1] Well, it has $@, which you can use as a general-purpose array with some hacks[2], but that's no way to live.

[2] http://www.etalabs.net/sh_tricks.html


I don't think it's too unreasonable to assume that you'll be able to find bash anywhere you'd find a general-purpose Python installation, and it has plenty of niceties.

But even the nicest shell doesn't solve the dependency problem like statically compiled programs. If I could take my currently running Python code and produce some artifact that would run with nothing other than the python binary I think we'd be in a much better place.

Ohh apparently all I've needed in my life is zipapps.


Agreed that Bash is (relatively) fine, although error prone. My comment was about POSIX shell, which has none of the features (arrays, [[ instead of [, etc.) that make programming tolerable in Bash.

One drawback is that if you want your Bash script to work on macOS, you need to restrict yourself to features that exist on version 3.2 (from 2006) because that's the latest version that will ever be included on macOS by default.

> If I could take my currently running Python code and produce some artifact that would run with nothing other than the python binary I think we'd be in a much better place.

See my other comment: https://news.ycombinator.com/item?id=23338316


That's a matter of opinion. I don't find using "$@" to be a big deal in practice.

Let me put it like this: I'm a programmer. I don't mind making programming a bit harder for myself if it means that I get to avoid a lot of the non-programming minutiae that's part of a modern interpreted environment.

Also, if you're willing to take a dependency on jq, the issue goes away completely.


Most of the time, you don't need all that, since Python has zipapps. You define your deps, you zip it, you ship it to any machine with the same OS and the same Python version. It embeds everything and just runs.
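
For reference, the bare-bones stdlib route is roughly the following (shiv, below, automates this plus the dependency handling; the names here are made up):

    # 1. vendor your deps next to your code first, e.g.
    #    pip install --target myapp/ -r requirements.txt
    # 2. then zip it up with a shebang and an entry point:
    import zipapp

    zipapp.create_archive(
        "myapp",                          # directory with your code + vendored deps
        target="myapp.pyz",
        interpreter="/usr/bin/env python3",
        main="mypkg.cli:main",            # hypothetical console entry point
    )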

We even have a nice tool to automate the bundling for you:

https://pypi.org/project/shiv/

Of course you still have to figure out how to get a Python installed on the final machine; that's the price to pay for being an interpreted language.

We don't yet have a story for shipping a beautiful exe/dmg/deb/rpm that embeds the zipapp and libpython in an easy way.


> you ship to any same os with the same python version

This is a non-trivial thing to handwave away.


I agree, but it's still way easier than the original story, which is the one you also have with PHP, Ruby, JS, etc.

Using an interpreted language always leads to this.

I know of no popular interpreted language with a seamless experience for shipping a standalone exe.

In fact, Python is probably the one with the best story here, since it has Nuitka (https://nuitka.net/), which can compile Python code into a fully standalone exe.

But then you need to install a compiler, headers, etc. And no cross compilation of course. Not to mention on Linux, you have to ensure you target the lowest version of libc you can.

You are still very far from Go or Rust, and I'm hoping one day that RustPython will succeed because that would mean an amazing deployment story.

Meanwhile, you trade the ease of deployment of compiled languages for the ease of development of interpreted ones.

I think it's a fair trade for most people: you dev the program much more often than you deploy it.

That doesn't mean we shouldn't work, as a community, to improve the deployment story. It's a serious hindrance.

That's the raison d'être of the Briefcase project (https://beeware.org/project/projects/tools/briefcase/). It's still in progress, but the last presentation I saw on it was quite impressive already.


> I'm hoping one day that RustPython will succeed because that would mean an amazing deployment story.

Isn't RustPython just an alternative interpreter to CPython, implemented in Rust instead of C?

How would RustPython offer better deployment than CPython?


Rust has a fantastic deployment story: compiling a rust program is super easy, and you can cross compile. Using cargo and rustc is a breath of fresh air compared to any similar experience with C compiling.

So if one day RustPython gets compatible enough with CPython that you can use it as a drop-in replacement, you can start creating a tool that compiles the Python VM for any target and brings your program along with it. Making a standalone version of it would become much easier.

Right now, doing so either requires you to bring in a pre-compiled version of CPython for your target (which is what Briefcase does) or compile the thing yourself with gcc + headers + deps (which is what Nuitka does).

It's not easy.


> So if one day RustPython gets compatible enought with CPython that you can use it as a drop in replacement

I don't think this will ever happen unless the community converges on a standard C-extension interface. Presently Python leans so hard on C-extensions, but there is no standard interface--if you're writing a C-extension library, you just depend on whatever obscure corner of CPython that suits your purpose. If you're writing an alternative Python interpreter, you have to implement the entire surface area of CPython, which generally means you must implement CPython exactly and you are severely restricted on the improvements you can make. At that point, why even bother?

Fortunately, I think there are emerging candidate interfaces, but the community needs to either update C-extension packages to use those interfaces or support packages (and maintainers) who already do. https://github.com/pyhandle/hpy.


There are probably only a dozen popular C extensions that need to support HPy to reach the tipping point of mass adoption: numpy, scipy, pycuda, tensorflow, matplotlib, uvloop, etc., and some db drivers.

The rest are not popular enough to be a blocker. You will hear them scream a lot, but they will be like 0.00001% of the user base, and we can just tell them to stay on CPython with its limitations. They don't lose anything, they just don't gain anything either.

Those C extension authors are in direct communication with the Python core devs, when they are not core devs themselves, so if HPy is adopted, we can expect total adoption within 5 years.

The numpy authors have already said it would take a year to adopt it.

Given the huge number of benefits of HPy, I deeply hope it will be a success.


I'm not sure. I would certainly add psycopg2 to that list, since it's really the only well-supported way to speak to a Postgres database via Python. I imagine other database dialects will have similar issues. And there's probably a whole host of other prominent libraries that we're just not thinking about because we only run into them when we're trying to use something like Pypy, and even then we only run into one or two at a time before giving up and going back to CPython.


> a breath of fresh hair

Indeed.


:) Fixed


youtube-dl, for example, is distributed as a zipapp and that seems to work just fine. It only requires you to have Python installed on your system, which isn't too burdensome of a requirement on macOS/Linux. On Windows they do actually distribute a Python interpreter.


Youtube-dl is a great tool, but the audience (primarily technical people) of any CLI app won't be representative of shipping an app for general use.


GP's point was not about "general use" apps, but apps for other developers / system maintainers


What's the story if my zipapp depends on tensorflow and/or cuda?

I don't know exactly how it goes but I'm pretty sure it's a horror story.


As usual with extensions, you are not using Python anymore, but a compiled language. To get 100% certainty, you'd need to compile the whole thing.

That being said, a lot of extensions are pre-compiled and provided as wheels, which is the case for tensorflow (I don't know about CUDA, I can't test on a laptop without a GPU).

Let's see what this means:

    $ py -m venv test
    $ test\Scripts\activate
    $ pip install tensorflow
    $ code hello_tensor.py
    # import tensorflow as tf

    # def main():

    #     with tf.compat.v1.Session() as sess:
    #         a = tf.constant(3.0)
    #         b = tf.constant(4.0)
    #         c = a+b
    #         print(sess.run(c))

Now with shiv:

    $ copy hello_tensor.py test\Lib\site-packages
    $ shiv -e hello_tensor.main --site-packages test\Lib\site-packages\ -o hello_tensor.pyz
    $ python hello_tensor.pyz
    ...
    2020-05-28 17:31:46.580704: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
    7.0
So it works fine, but remember:

- it will only run on the system this particular wheel has been designed to run on. In my case cp38-win_amd64.

- it will come bundled with tensorflow, which is a behemoth, meaning your hello world pyz will be around 500 MB.

- it needs to unzip, so the first run will be REALLY slow

For something like this, I would advise a more generic deployment tool, like Fabric 2 if it's remote, or a make-like tool such as doit if it's local only.

Make your deployment script, and zipapp that.


Zipapps are an order of magnitude improvement in the Python world, but there are still lots of other major pain points like dependency management and performance which still leaves Python several orders of magnitude behind its competition. Hopefully these things change going forward.


It looks like zipapps built with “shiv” need to extract the contents of the zip file to disk before they can run? Does it delete the extracted files on exit?

If so, the extraction is going to make startup very slow. If not, that’s just messy. Either way, it’s not ideal.


It's doing the work only once.

Yes, it's not ideal.

But it beats shipping your entire dev env to the server.

I find it a good compromise. The extraction is done in $HOME/.shiv/{zipappname}_{zipapphash} so it's not a horrible mess. But if your project is big, you do have to clean up old installs because they can eat a significant amount of space.

For you regular cmd tool though, it's a blip.


> you still have to figure out how to get a Python installed on the final machine, that's the price to pay to be an interpreted language

Also for Java. The installs usually include their own favorite version of Java alongside the actual application.


Installing Java is a bit of a difficult task these days. Even worse if you want to get the JDK, which is behind a registration wall.


Not if you are installing OpenJDK or Amazon Corretto. Add the PPA and install with apt-get. Like every other Ubuntu/Debian package.


jlink[1] solves this for JDK 9 and later.

[1] https://docs.oracle.com/javase/9/tools/jlink.htm


I probably haven't bought into all of poetry yet but for deployment, I have been using "poetry export" to get the pinned requirements.txt, commit it to the repo and install to a virtualenv. A bit of work to keep it in sync with the poetry dependency file but that's ok.

For PATH with cron or others, I use the full path to the virtualenv such as /path/to/project/.venv/bin/python. The path can be extracted by "which" or "Get-Command" when the venv is active.

Using a python version different from the system python version is probably the messiest part but well, targeting 3.6 is alright.

I do agree it could be better and it's not quite as streamlined as other ecosystems.


Honestly, pip freeze includes the whole content of a venv's site-packages, with exact versions. For most projects, that's equivalent to all the dependencies recursively pinned with poetry, although you don't have the clean pyproject/dev-prod/lock file separation.

So a huge number of cases can be handled with just that. It will be "reproducible" enough for a lot of people.


> although you don't have the clean pyproject-dev-prod/lock file separation

That's why I use "poetry export -f requirements.txt > requirements.txt" instead of pip freeze. It only exports prod requirements from the poetry lock file.


Nice trick, and very useful to make sure people never have to know about poetry if your team can't deal with it.


> Then there's the rigmarole around getting one of these environments to play nice with cron...

You pass cron the full path to your entry point. Where’s the rigamarole?


You've also got to include the pyenv shim instantiation. So now you've got something like

0 0 * * * /path/to/bash/script/to/init/pyenv && /my/path/to/poetry run /my/path/to/python.py -arg1


Hm, haven't tried it, but doing this should be much easier:

0 0 * * * /path/to/interpreter/created/by/poetry/bin/python myscript.py


Does bin/python in a virtualenv set PYTHONHOME correctly..?


Yes. And the answer by Chiron1991 is the proper way to do this since pretty much forever.


Strange, this does seem to work with python3.8 on ubuntu 20.04 (the site-packages shows up in sys.path), but for me in a virtualenv bin/python is a symlink to the system python, so how does python 'know' what path to use? Is there logic baked into the interpreter?

I seem to recall that with python2.7 that calling bin/python in a virtualenv without activating the virtualenv did not used to "work" (i.e. it would use the system packages). Did this change at some point or is my memory just wrong?


Yes, if the python interpreter the poetry environment utilizes is in your $PATH.


this is precisely correct.


If the path to your executable is fixed, just put it in the shebang and you're done - makes everything way more explicit at the cost of some dynamic behavior.

An anecdote: Homebrew uses this method for shipping python executables.


The "production version" of your script should be running in your system environment with system packages. pyenv and friends should be used for testing with different versions and making sure you don't accidentally depend on idiosyncrasies of your box.

The exception is if your python thingy is "the main thing" running on a server, i.e. your customer facing webapp.

My $.10 anyway


I tend to agree with you that pyenv|pipenv|etc shouldn't be used for actual production usage.

This of course leads to other issues to solve, now that your development environment doesn't actually mirror production.


How do you package your entire pyenv as one or more system packages?


This isn't exactly what you are asking, but https://askubuntu.com/questions/90764/how-do-i-create-a-deb-...


pyinstaller, shiv, pex, docker: depending on use case, any of these may be appropriate.


I solved all of my python deployment concerns by using lxd.

I wonder how many others are doing the same but are keeping it close to their chest because it’s such an amazing advantage.


I'd love to hear more about how you are using lxd.


I once threw a relatively complex Python application with background server/client processes at Cython and the generated .exe literally just worked without any special effort. I don't know how transferable that is, but N=1 it's not always as hard as what you're thinking.


Containerizing Python applications can really simplify things in that regard.


No, the problem is just moved inside the container.


In other words, it becomes the concern of the person shipping the code, rather than the concern of the person trying to run the code. That's exactly how it should be.


In other languages it's a problem for neither, which I think is the parent's point.


Still, the person trying to run the code has to setup Docker.

And if you're on Windows, your Docker host is in a virtual machine, so networking and volumes are not so simple anymore.

Replacing one kind of complexity by another is not a solution, it is a trade-off.


Are people still suffering through hosting Docker containers on Windows? Why would anyone do that at this point other than to comply with outdated, arbitrary IT policies?


Just an example of platform specific issue even with Docker.


The problem with this is that someone else can't even run your program outside of a Docker container anymore. That doesn't seem ideal.


They also can't run any program without a computer and an OS; there are some basic prerequisites to running software. Having a Docker/container host has become one of those prerequisites for many applications, but it actually reduces the headache of numerous other traditional prerequisites.


I don't want to have to run a simple Python program in a container for quick and simple development or testing. That's a failure of engineering discipline. By all means, do provide a Docker container and do use containers for actual deployments, but also make it easy for me to just use, say, pip-tools or whatever else your organization has standardized on for Python. If we're talking about something with complex C or C++ dependencies that's quite different. If it's just a few pip dependencies and there's no way for me to just run it reliably outside of a container, though, that's a result of not following best practices.


Agreed, I typically include a README as well as a requirements.txt so one can easily 'pip install -r requirements.txt' and then 'python app.py' to run simple apps without a bunch of rigamarole.


I probably misunderstood you -- apologies. I think we're 100% in agreement.


I use Docker constantly in my job and projects, yes. Yet I do not believe, or advocate, that it gets rid of the complexity.

Depending on the user's needs, your dockerized application will run on a different base distro. Alpine and musl for a small OS footprint? Or Debian (or debian-slim) for glibc compatibility?

Those concerns are the same with or without Docker. Docker makes things easy, just not those things because it is not its purpose.


I typically specify these things in the Dockerfile - if the end user wants to modify the Dockerfile because they prefer Alpine over Debian... they've now taken responsibility of maintaining their customized Dockerfile and ensuring that everything runs as expected. This doesn't seem like something that would be encountered with any frequency in my experience, and you would technically have the same problem with or without Docker in the mix.


In the professional world, your end user is either someone without the skills to make a Dockerfile, or another team that isn't responsible for integrating your work.

The packager of an application is part of the project's team. It's not up to the user to package your application.


docker with pip-tools is a great combination, you get deterministic builds easily


That's why I switched to Go for a lot of things. I don't care that much for Go as a language but compiling and shipping code is just darn easy.


Let me know when Django is rewritten in Go.


I like Django too. I'm building tools on top of AWS, and small REST apis in Go. I don't use it for everything.


That's not a "quick win" use case though.


It may be unpopular to say here, but I see Node as the best option.

- Runtime comes with a package manager

- Dependencies (not just imports, but tooling) are fully manifested in a project-local file

- Installs dependencies in a project-local directory

- Can specify exact package versions if you want maximum stability

- Left-pad can't happen again due to policy changes: https://docs.npmjs.com/cli/unpublish#description

- Doesn't require any build steps or extra hoops if you're fine with skipping static types

In general it just does a really great job isolating from the environment. No messing with environment variables, most things even run fine on Windows out of the box. All you need is node itself installed and you're off to the races, whether you're starting a new project or running one you checked out from github.


On linux I normally just use pipx. As long as the package uses proper entry-points it just works.


> Or you use requirements files, and don't get deterministic results.

What about pip freeze?


I was also confused, pip freeze defines version numbers so it should be pretty clear-cut.


RustPython might paint a prettier picture for a better future in this regard.

https://github.com/RustPython/RustPython


For a trivial-ish command-line tool, I've enjoyed using pyinstaller with --onefile to put out a single-file executable. Using GitHub Actions, it was also relatively easy to create cross-platform releases.



The minimal interpreter is pretty small when compressed - we shipped it to thousands of Windows PCs as one exe file.


Recommend you try out Julia language!


Why a whole OS? Can't you just install Conda or miniconda?


It was a tongue-in-cheek joke that conda is complex enough to be its own OS.


The best modern Python practice is, don't use Python.


No. The only thing ZeroMQ and RabbitMQ have in common are the letters M and Q.

RabbitMQ is a messaging system. ZeroMQ is sockets on steroids.


My experiences line up with yours. It was terrible then, it's not any better now.


> I know this is becoming repetitive, but seriously still no optional touchbar? I mean for gods sake, why?

Because Apple Pay with the Touch Bar is a more important revenue stream than the vocal minority of users (of which I'm a member) who hate the Touch Bar.


MacBook Air ships with Touch ID and function keys.


There's always the BSDs.


Hashicorp Vault hits most (all?) of these points


Vault is for secret management. If you are just looking at a key value based store or a service discovery tool, you might want to have a look at consul by Hashicorp.


HFT also still pays top dollar for competent FPGA developers. I'd wager they're willing to pay more than the listed industries too, especially if you factor in profit sharing and bonus structures.


This one is new, so they haven't learned all the corner cases that make VoIP more difficult than anticipated.


Based on the use case provided [1], what would be the number one most likely edge case, and what is the most common solution to it?

Making a claim without constructive supporting points is usually not useful, and at worst makes it appear you may know nothing other than how to cause problems by sowing doubt.

[1] Use-Case: Create a virtual number on Twilio and whenever someone calls, a list of real numbers will be tried sequentially. You can set up opening hours. If nobody answers, a voicemail is recorded and sent to you by email.


> This one is new, so they haven't learned all the corner cases that make VoIP more difficult than anticipated.

No need to be rude. This is Show HN. Sharing first attempts is the whole point.

