Fedora Loves Python (fedoralovespython.org)
151 points by type0 on July 11, 2017 | hide | past | favorite | 84 comments



> Python 3 by default

Two lines below:

> However, /usr/bin/python remains Python 2, [...]

Not sure what they call "default". They seem to quote some PEP that only specified that this was okay during a transition period. I think we're WAY past that. If not, then the first statement is plainly wrong.


I read PEP 394 (https://www.python.org/dev/peps/pep-0394) differently. It states

> The more general python command should be installed whenever any version of Python 2 is installed and should invoke the same version of Python as the python2 command

Rather than permitting python to refer to python2 during a transition period, that is the recommendation. They later state

> It is anticipated that there will eventually come a time where the third party ecosystem surrounding Python 3 is sufficiently mature for this recommendation to be updated to suggest that the python symlink refer to python3 rather than python2.


I believe this means that python3 is installed by default and python2 is not. /usr/bin/python is not installed unless python2 is installed.

PEP 394 actually states that the "recommendation will be periodically reviewed over the next few years, and updated when the core development team judges it appropriate". You might think we are way past that, but it seems they don't.


The PEP does say "should", and that a future PEP will require the python command to be Python 3.x.

Given a default Fedora install does not have the python command but does have python3, I'd say it's 3.x by default.
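This is easy to verify on any box; a quick check (output varies by distro, and `python3` is assumed to be on PATH):

```shell
# What does the unversioned command point at, if anything?
# On a default Fedora install it is absent unless python2 gets pulled in.
command -v python || echo "no unversioned python installed"
# The versioned command should always be present on a modern distro.
python3 --version
```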


Most of this seems pretty standard for Linux distros; python2 and python3 installed by default, ability to install venv through `python -m`, other versions of Python and various packages available in the repositories. Is there something I'm missing here?


Installing different Python versions is not possible with many distros (you'd have to compile it from source/use something like pyenv):

https://developer.fedoraproject.org/tech/languages/python/mu...
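Per that page, the parallel interpreters come straight from the repositories. A Fedora-only sketch; the exact package names are an assumption, since they vary by release (check `dnf search python` first):

```shell
# Fedora: install extra interpreters alongside the system default.
# Package names like python26/python33 are examples only.
sudo dnf install python26 python33
python2.6 --version
python3.3 --version
```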


As far as I remember, Debian has always been able to support multiple versions of Python simultaneously, although I think they've dropped packages for <~2.7 now.


But:

- iirc they can only go down one version (i.e. 3.4/3.5, no 3.4.1/3.4.2)

- it's always been a real b@llache to install newer versions on Debian releases where it has not been officially packaged (e.g. 3.6 on jessie). With most packages, you can grab the source from Sid and rebuild it on Stable; with Python it's become basically impossible, the pile of dependencies below and above the package is unwieldy. And this despite tons of policies around packaging python apps, which was supposed to make it easier to run multiple versions.

If Fedora really made things easier in this area, it's most welcome. However, when a marketing page starts with "you can now do this standard thing that actually most people have been doing for years already" (-m venv), I'm somewhat skeptical.


Ah, I missed the "CPython in multiple 3.X and 2.X versions". Thanks for pointing that out to me!


OpenSUSE gives you the ability to have both python and python3 installed through the package manager


Almost every distro has that. This is about different 3.x versions. SuSE might have that, but I don't know.


Oh that's a good point. I don't think openSUSE has various 3.x versions of python - currently Leap 42.2 ships with 3.4.2. If I need another version of python I use anaconda, but my use cases are relatively simple so the system python versions (2.7/3.4) fit for me just fine.


um, venv is included by default in python 3, no need to install any package.


It's a subtle stab at Debian-based distributions, where pip and venv are only available as optional, non-default packages. (I think Debian really wants people to use apt to install things.)


I think it's probably actually just because they want to support users who wish to have a "minimal" python installation. Python is used internally by the system and many applications, but that doesn't mean that features like venv are necessary for them. It does make sense not to install them unless needed.

That said, I think debian users should lean heavily on the apt versions of any python modules. I will usually install as many modules with apt as possible and then use `python3 -m venv --system-site-packages` to have access to them. That way there are usually very few packages I need to compile/install myself. Seems like a pretty good balance.
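A minimal sketch of that workflow (package and path names are just examples):

```shell
# System-wide modules come from apt first, e.g.:
#   sudo apt install python3-venv python3-requests
# Then create a venv that can see them:
python3 -m venv --system-site-packages /tmp/demo-env
# The env's interpreter resolves both apt-installed packages
# and anything pip later adds inside the env:
/tmp/demo-env/bin/python -c "import sys; print(sys.prefix)"
```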


i do it the other way around and only use the interpreter, the rest comes from pypi or, preferably, an internal pypi mirror.


yeah it's a much better approach when you want to make sure that your requirements.txt file contains all the requirements


venv is actually a module of the stdlib in Python 3, so you can use python -m venv even on Debian. You are confusing it with virtualenv, which is a different tool from venv with the same purpose.

pip is another beast: it's not included in the stdlib, but installed when you install Python via the ensurepip script. Debian just doesn't run the script.


Yes, venv is supposed to be part of the stdlib in Python 3. But that is not the case on Debian. You need to install the python3-venv package to get it. That's what I and that Fedora page are talking about.
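A quick way to tell which situation you're in on a given machine (both modules are part of the upstream stdlib, but Debian ships them in separate packages):

```shell
# Succeeds on a full CPython install; fails on Debian without python3-venv:
python3 -c "import venv; print('venv: ok')"
# ensurepip carries the bundled pip wheel used to bootstrap new environments:
python3 -c "import ensurepip; print('bundled pip:', ensurepip.version())"
```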


I don't think python2 is installed by default.


Another fine example of heartwashing. It just doesn't feel genuine when all these companies start plastering hearts all over everything.


tips fedora furiously


Colorless green pythons tip fedoras furiously.


Why should I use Fedora on my laptop over Ubuntu? What about on servers?

Does Fedora increase my productivity?


I grew up using Debian and later Ubuntu, spending longer than 10 years on those distros as a kid and into my career as a sysadmin/operations person. Ubuntu LTS was my main workhorse from about 10.04 through to ~14.04.3.

Late 2015 I decided I wanted a "Chromebook"-style user experience: a repeatable build of the base OS that could be thrown away, plus a backup system based around duplicity to restore my homedir. I had used preseed to deploy large fleets of Ubuntu boxes at work, so it was a natural option. But I decided a few things: firstly, I was sick of the LTS 3.x kernel. Secondly, if 16.04 and all other distros were adopting systemd, I might as well go to the source and use a RH-based distro. Finally, preseed wasn't as good as the Kickstart used in RH-based distros.

So I came up with https://github.com/sinner-/kickstart-fedora-workstation to provide repeatable builds of Fedora the way that I like it. I've been happily using it since then (across 2 versions of Fedora and a hard drive failure)! The .ks file will be updated for F26 this weekend as it just went GA today.


Have you looked into the FAI (https://fai-project.org) project? It's like preseed on steroids and very similar to the Kickstart project.


I have been using Fedora on the desktop for the last 7 years. There are lots of pros, like a large selection of mostly up-to-date packages, a good community, security, sane defaults, etc. The cons for me are the below-par wiki (so I use the Arch wiki) and the version upgrades. In truth, the last few version upgrades were much easier than the previous ones.

Why would you use it over Ubuntu? Ubuntu has an important advantage over all other distros. When you try to install some not-so-common program that is not in the official repositories, usually the linux instructions assume that you are using Ubuntu. For an experienced user it's not really that important but for a novice it certainly is. Now, Ubuntu had some negative publicity with regards to the [amazon unity search](https://en.wikipedia.org/wiki/Unity_%28user_interface%29#Pri...) and Shuttleworth's response ["we have root"](https://security.stackexchange.com/questions/44512/what-does...). On the other hand Fedora, although sponsored by Red Hat, respects the community, so fortunately there were no incidents like that. Ubuntu also had Unity which users either loved or hated, while other distros focused on either Gnome or KDE.

On servers the options are more or less grouped. So, you should first choose between the Debian/Ubuntu group, the Fedora/CentOS/RHEL group, etc. Once you choose the group, you can choose the distro itself. The fact that I used Fedora on the desktop for so long made the choice of Fedora on the server much easier.

Finally, will Fedora increase your productivity? No, not really. Your setup will increase your productivity. Choices like using KDE or Gnome, or the setup of (keyboard) shortcuts to meet your needs matter. I hope this helps, have fun!


Ubuntu has a history of doing things that don't go anywhere: upstart, unity, their phone thing. Fedora has concentrated on being a solid, traditional Linux desktop. If that's what you want, Fedora could be it.


Ubuntu also has a history of things that go pretty well: Easy setup for average users including nonfree drivers, reaching a point of popularity where developers target Ubuntu first, PPAs as lightweight repositories for own packages, huge offspring of distros based on it.

Not everything is black and white, and while Ubuntu has failed in some aspects, it succeeded in others. Also, they are not hesitant to admit a failure, upstart is a thing of the past with Ubuntu using systemd, unity and "their phone thing" are being discontinued with focus shifting to the Gnome desktop and the traditional PC/Laptop/server market.


> Fedora has concentrated on being a solid, traditional Linux desktop.

If you consider pulseaudio, systemd and not being able to play mp3 traditional. Also, they used to fuck with encryption in some way, not sure if they still do it.

In the end, do you trust more Poettering or Debian or Ubuntu developers?


Red Hat took $2.4 billion in revenue last year.[0] I trust them more.

[0]: https://en.wikipedia.org/wiki/Red_Hat


Revenue generates trust? How do you feel about Oracle?

Only a little sarcastic; RHEL and Red Hat's historic treatment of CentOS had a lot of echoes of Oracle's treatment of similar projects. They've gotten a little better in some regards (they now own CentOS and seem to care about it) and worse in others (the use of marketing and politics instead of merit).


> "RHEL and Red Hat's historic treatment of CentOS"

What? Red Hat didn't treat CentOS badly: they happily let people make trademark-less respins of their OS per the license and didn't try and stop anyone from doing so.

(Disclaimer: worked for RH around 2003-6)


In Bryan Cantrill, Jerry Jelinek, Adam Leventhal, Eric Schrock, Matt Ahrens, Bill Moore, Jeff Bonwick, Robert Mustacci and Dan Price we trust; all others pay cash.


False dichotomy. When last measured, the ninth most prolific systemd developer, out of just under 600, was Martin Pitt, of Canonical.


Got a reference to RHEL "fucking with encryption"? Export controls on crypto ended in the Clinton era.


As long as "[doesn't] go anywhere" means in RHEL for upstart (before being replaced by systemd) and on more desktops than Fedora for Unity.


Fedora is bleeding edge for RHEL. This is the test distro for enterprise. If you want consistent behavior maybe fedora isn't for you. In general, RHEL has made some controversial decisions for userspace and fedora is the first place you will find these.


That argument would have worked better if it hadn't been the case that Fedora used upstart for a few years.


Main thing is that Ubuntu used it in their LTS distro in 14.04 when everyone else had already shifted to .service files.


And that argument would have worked better if Ubuntu 14.04 had not been released in April 2014, before Debian made its decision. It was only Debian, by the way, not everyone else. Indeed, not everyone has shifted to .service files even now.


When I say everyone, I mean all the major Linux distros - Debian, RHEL, SuSE, Arch.

Obviously Yggdrasil isn't using systemd and that doesn't change anything.


Which is wrong since, as mentioned, Debian had not shifted at the time.


Two years ago I was looking for a distro that I'd love. After testing a few I decided on Arch; I liked its approach of teaching you how the system works. But one year ago I got tired of it and wanted a distro that was easy to set up, something between Ubuntu and Fedora.

In the end I stuck with Fedora because I preferred GNOME over Unity, and because it would let me try Wayland before any other distro... I like the "consumer focused bleeding edge" that Fedora offers.


I distro-hopped a lot before somewhat reluctantly giving Fedora a shot. Turns out it's a fantastic distribution for developers because:

  - You don't have to configure everything from scratch
  - Packages are kept relatively up to date

It's not rolling release (unless you use Rawhide), but software is recent enough for it to be a non-issue.

It's also an extremely well-established project with a huge software repository.


I tried Ubuntu back when they were tossing Amazon search ads onto the system. There were also, idk, a million and one things, like games and other junk, that I didn't want. Coming from Arch, this left a sour taste in my mouth.

It would take me 30 minutes to build an Arch system the way I wanted it. The Ubuntu install alone took 2 hours, and then I would have to spend countless hours uninstalling trash I didn't want.

Maybe this isn't true anymore, but "apt-get [package]" was a game of whack-a-mole. If I ran apt-get, I wanted the latest version, not have to guess, hunt, uninstall wrong versions, etc.

Fedora is a minimal system, very much like I would have on Arch. dnf defaults to the most recent stable version of whatever I want to use, and the entire system is built with that in mind.

It sort of depends on what kind of programs you prefer to use. I personally like to have the most recent "x," though I understand the arguments for not having that preference. However, knowing the default is going to be the most recent, I have no need to think beyond that simple fact if I need to use an older version of a program.


You should have just installed a minimal Debian system and started from that.


And probably gone with Debian testing or even unstable. I was on Debian unstable for a long while. It's probably more stable than several stable distros :)


Ubuntu is a checkpoint distro, Fedora is a rolling distro. There are benefits to both styles. Ultimately I don't think either will particularly affect your productivity, unless they interfere with the software you use.

Fedora, as a rolling distro, is not particularly suitable for (production) servers. You'd use CentOS for that, to keep familiar RedHat-family tooling.

Of course, Debian gives you both checkpoint ('stable') and rolling ('unstable'), so you can use the same thing on both server and workstation. :) Debian's not as warty as it used to be (I came to linux via ubuntu, and have been switching to debian on workstations + servers over the past couple of years)

EDIT: correction, Fedora is a checkpoint distro, but it doesn't support releases for long. IIRC it has 6-monthly releases with 12 months of support, whereas Ubuntu has 6-monthly releases with 18 months of support, plus a 5-year LTS version every 24 months. Basically Fedora needs to be upgraded twice a year, which isn't suitable for (production) servers.


There've been a bunch of good answers already but I'd like to add another: the package manager (DNF) is _significantly_ better than APT. Parallel downloads, delta updates, automatic cache refreshes, all in one command, fast and with good feedback during updates.


In my experience most Fedora users are from corporations that want a support contract from RedHat. That's really the only reason. I've yet to meet anyone who uses Fedora willingly otherwise.


To counter that: I have found Fedora's bleeding-edge approach quite usable on my home machine for my personal programming. It was also the first Linux distribution with true (and, IMO, great) HiDPI support, almost as good as macOS in implementation.


I use it willingly. I don't work in any place remotely related to RH.

Don't knock it until you try it.


Are you sure you are not confusing Fedora with RHEL (an excellent server OS for the corporate environment btw)?

Anyway, after years of using different distros (mainly the Debian and Ubuntu family of things, plus Arch) on different boxes, I tried the Fedora KDE spin on my main desktop - mainly because the out-of-the-box KDE 4 experience had been a bit underwhelming on various distros, and I got tired of accidentally breaking stuff on Arch by applying careless updates. For me, Fedora offers the optimum balance between stability and bleeding edge, plus good availability of packages. On the rare occasions when things break, they get fixed promptly; third-party rpms and repos are pretty decent, and it's popular enough that even some proprietary stuff (mainly Steam and Sublime Text in my case) is easy to run and well supported. I'm really excited to try out Wayland with Plasma soon. For a development box it has been a good choice; a lot of the experience applies well to RHEL and CentOS servers too, and it doesn't suffer from weird decisions by the project's maintainers.


If you want support contracts you don't use Fedora, you use RHEL


Well, you buy one RHEL license and use CentOS everywhere, then pretend whatever broke was on your RHEL-licensed box. That's usually how corporates seem to run it.

I haven't seen a single Fedora install in all my years on a desktop or server.


You would be surprised.

For one, Fedora is the open, no contract, version. And yes, tons of people use it, including on the desktop.

Second, even for the "enterprise" version, tons of businesses use Centos, which is kind of like RHEL with no contract. It's immensely popular.


That's a silly assertion and needlessly hostile to Fedora.

I have tons of experience with all the major distros and Fedora is my pick for desktop use. I don't work somewhere with an RH contract, I just like it. There are other good desktop distros but Fedora suits me. It's a good blend of cutting edge and stability, it has very good docs, has a large community, and it is 100% open source (I can still choose to install proprietary stuff if I need a driver or something, and there are usually already rpms in a repo for it if I do).

So, "hi", you've now met someone who uses Fedora willingly (enthusiastically, even).


Fedora has "just-worked" for awhile and I don't have a particularly good reason to move to Ubuntu. It's nice that Fedora doesn't patch upstream much, and I learned how to build custom RPMs about 20 years ago.


Fedora (and other rpm-based distros, OpenSUSE in particular) are easier to build/modify packages for than deb-based distros and have easier to use build services with better automation, IMO.


Relevant to their "Fedora Python initiative":

https://labs.fedoraproject.org/en/python-classroom/


Fedora cannot be Python-3-only if you want to have offline documentation. For example, there is a python2-matplotlib-doc package, which depends on python2-matplotlib, but no python3-matplotlib-doc or generic python-matplotlib-doc exists.

On the other hand, Ubuntu/Debian packages are well organized at this point.


/rant If Fedora really loves Python, they should start showing their love by publishing all their distribution-related libraries to PyPI. blivet, selinux - none of these are available on PyPI. Why do they expect everyone to use rpms?


Personally, I can't stand it when I have to get language-specific packages from one of their many package managers; that's the point of having a distribution: so it can be distributed.


Right, but the issue GP has with this is that the Python libraries Fedora develops aren't available on other distros, and submitting them to a Python repository would be an easy way to achieve that availability.


These libraries are in fact available for other distributions. The SELinux Python bindings, for example, are hosted on GitHub.


Much depends on point of view.

For the operating system, sure, use the OS packages.

For writing and deploying an application in a language? Never use the OS packages; manage it using the language's tools. My Python applications deploy into a virtualenv and install their dependencies using pip.
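That deploy pattern, sketched with a hypothetical app directory:

```shell
APP=/tmp/myapp                # hypothetical install prefix
python3 -m venv "$APP/env"    # isolated interpreter + pip for this app
"$APP/env/bin/pip" --version  # pip is bootstrapped inside the env
# a real deploy would then run:
#   "$APP/env/bin/pip" install -r requirements.txt
```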


Why not use RPM for example to deploy python code?


This. Make as big a mess as you like on your own box (your sysadmin can pave it when you're hopelessly confused), but everything in prod has to be registered with the one and only package manager because otherwise nobody will know where it's deployed or what its dependencies are or whether they're up to date. cpan/pip/gem/cargo/go get/hackage/melpa are not sysadmin problems.


> everything in prod has to be registered with the one and only package manager

No. Never. Not for any reason, ever. Never.

The language's packaging ecosystem and toolchain are:

* Tailored to the language, not the operating system, which means they're reproducible on multiple operating systems. This is important, since your developers are not running RHEL server as their laptop OS, and as a result they'll be using the language toolchain regardless of what your "sysadmin" does to the production environment.

* More likely to be up-to-date and/or update-able than distro-format packages. Unless you want to be running two years ago's version of your libraries (or older), the only way you'll get distro-format packages is to build them yourself... which requires you to go grab them from the language's package system, since that's where they get published, and maintain your own pipeline to re-package them into the distro format. Now you've injected additional moving parts into your systems where none were needed.

Distro packages are only for the base operating system and things like your HTTP daemon. For application code and dependencies, the distro packages should only be involved insofar as they bootstrap you to the point of being able to use the language's toolchain. Insisting on distro-format packages for the whole thing is the path to overcomplex builds/deploys and difficult-to-update codebases.

> because otherwise nobody will know where it's deployed or what its dependencies are or whether they're up to date

I can, at a glance, look at any application in production where I work and see what its full dependency tree is and whether those dependencies are up-to-date (and if not, whether they're just outdated or also subject to security advisories). Using things built on the language toolchain. Really. And this is not new cutting-edge technology here, we've had that capability for a good number of years now!

Even better, I can match up that information to what upstream actually publishes: if they say the bug I care about is fixed in version 3.1.4, I can upgrade to version 3.1.4. With distro packages, who knows? The distro might have backported the bugfix into 2.7.1 and bumped the patch number, for all I know. The more places I have to look to find out what's up-to-date and what versions I should use, the more opportunities I have to mess up. Reduce the number of places to look until it's one and only one: the upstream release notes, using upstream's versioning and upstream's packages.

Distro packages for the base OS, language packages for application and its dependencies. Deviate from this at your peril.
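The at-a-glance audit described above is ordinary pip tooling; for example:

```shell
python3 -m pip freeze            # exact versions installed in this environment
python3 -m pip list --outdated   # compare against PyPI (needs network access)
# third-party tools such as `safety` can then map versions to security advisories
```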


If we ship code that runs on SuSE 234 and RHEL 345, that's what we test on. If we run on our own hardware or AWS, we pick a distro and test on that. I don't care whether it builds or runs on a laptop; I don't write twitch games and interesting problems don't fit on a single machine anymore.

It's been a decade since I worked at such a tiny nascent company that all the software was written in just one language. Language packages almost never express dependencies on either system packages or other languages' packages, making "bootstrap to the point of being able to use all of the languages' toolchains" a manual process that lacks any guardrails. Nothing ensures you have a httpd version that's compatible with all your apps, because each of them just sort of assume httpd is out there somewhere without saying anything about it. Staying on the right versions of shared libraries is even more error-prone since the system package manager literally doesn't know you're using them.

If you want to read upstream security advisories and use such bleeding-edge software that even the bleeding-edge distros don't trust it yet, you're basically rolling your own distro that only exists on one machine in the world (because some languages' package managers aren't idempotent and symmetric) and is supported by nobody besides you. I'd rather delegate that to the people who specialize, because the best case is that I don't fuck it up too badly, I'll never add value that way.


> If we ship code that runs on SuSE 234 and RHEL 345, that's what we test on.

By all means run the test server as an environment identical to production. I've never said you shouldn't. But people do have to locally run the code on their laptops to do dev work.

> I don't care whether it builds or runs on a laptop

Good for you! Now, clean out your desk, because "build custom infrastructure to suit my workflow, but your workflow isn't important" is a clear admission that you don't ever get to work on my team, or probably anyone else's.

> I don't write twitch games and interesting problems don't fit on a single machine anymore.

Ah, so you only will work on "interesting" problems, and literally all possible problems you don't find "interesting" are in categories like "twitch games", to be insulted and belittled. It's a good thing you were already fired a paragraph ago, because you'd get fired for that too; turns out most companies don't have problems you'd consider "interesting". So sorry for that, but them's the breaks.

> It's been a decade since I worked at such a tiny nascent company that all the software was written in just one language.

With a head that big, what's your size in hats?

> Language packages almost never express dependencies on either system packages or other languages' packages

But to actually respond: how many single applications do you think the average company has which are written in, say, five or more different languages and must deploy the entire codebase to a single machine? You know, that single machine you refuse to work with, because it must be just for a "twitch game" or some other tiny puny baby child's toy of a program.

> If you want to read upstream security advisories and use such bleeding-edge software that even the bleeding-edge distros don't trust it yet

So sorry that I wanted to use the version with the feature that didn't make it in under the distro's freeze date. Guess we'll just wait ten years until the support contract expires and we're finally forced into an OS upgrade, then? Management will be very happy to hear that timeline, I'll bet they give you a promotion and a raise when they find out you're the one holding it up!

> you're basically rolling your own distro that only exists on one machine in the world

I have reproducible builds using language packaging toolchains. Turns out it's 2017 and we can do that now.

> I'd rather delegate that to the people who specialize, because the best case is that I don't fuck it up too badly, I'll never add value that way.

There are parts of this sentence that I agree with.


I don't mean to disparage twitch games, that's my go-to example of one of the last domains where it's cost-effective to get really good at living within customers' hardware constraints. But when prod is a growing distributed system, it's natural for tests to assume the same distributed system, and forcing those tests to sort-of run on a single box with the wrong kernel/fs/network config just trades in cheap hardware for expensive engineers doing work that doesn't make prod better.

When I write java, I can't run maven in prod and expect it to get native libraries into /usr/lib64 and the sysadmins' Python and Go plumbing and config files into ... wherever the hell that may live. So together we tweak a .spec file that not only provisions the entire machine correctly but answers questions about whether the entire machine is provisioned correctly, not just the java half. (We probably could make maven do all that, but the result would be worse in every conceivable way, and in most languages it's not even an option.)

If you want a rolling release distro, use one. Godspeed. But nobody's going to sell a support contract that covers random alpha builds published overnight. "We can't upgrade the distro and get the code we need, so we're smuggling in code that the distro doesn't trust yet" is just devs and sysadmins playing chicken over the fate of the project. The bleeding-edge vs supported argument should have been settled before going live.


OS package managers are so much worse than language package managers, though - no ability to install packages per-user or in a "local" environment, difficulty having multiple versions of the same package installed (a huge problem if you have a "diamond", where you depend transitively on two different versions of the same library), no IDE integration, limited introspectability, inconsistent testing standards...
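For what it's worth, the per-user scheme is built into pip itself; a quick look at where it lands:

```shell
# Where `pip install --user` puts packages (no root needed):
python3 -m site --user-base
# e.g.: pip install --user some-package   # hypothetical package name
```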


If the answer to "are my dependencies installed and up to date?" is "well, yes and no, we can't tell which copy I'm actually using", I am not ready to go to prod.

> "diamond" where you depend transitively on two different version of the same library

That's a trainwreck waiting to happen. It's not even worth testing, much less deploying.


> If the answer to "are my dependencies installed and up to date?" is "well, yes and no, we can't tell which copy I'm actually using", I am not ready to go to prod.

Language package managers are a lot better at that than OS package managers, IME. Much better for all deploys of version x to use the same version than for all deploys to host y to use the same version.


In the case of Debian it's an asset, since you can run on stable without being stuck with horribly outdated, hard-to-upgrade Python packages.


The case could be made for choosing to stick with well tested and battle hardened libraries instead of bleeding edge, backward-compatibility breaking releases.

Virtual environments tend to solve the problem in either case.


Is that basically how Linux distros work? Debian has .deb and Redhat has .rpm etc.?


I think GP is saying that they want to be able to install the Fedora/Red Hat libraries (like the SELinux bindings) on other distros as well.


At least blivet is a normal distutils package that pip can install if you provide a URL to the repository and/or a source archive.


Off topic, but it seems to me that the "Python 3 by default" logo has cobras, not pythons.


Maybe it's a python pretending to be a cobra?


[flagged]


Trolling like this on Hacker News will get your account banned. Please don't.

https://news.ycombinator.com/newsguidelines.html



