About Python 3 (alexgaynor.net)
564 points by jnoller on Dec 30, 2013 | 345 comments



I like Python 3. I prefer it. It is better to program in than 2.x. Iterators everywhere, no more unicode/encoding vagueness, sub-generators and more. It is a much better language and it's hard to see how it could have evolved without a clean break from its roots.

However it has been interesting to follow over the last five years. It has been a sort of "what if p5 broke the CPAN" scenario played out in real life. Breaking compatibility with your greatest resource has a painful trade-off: users.

Nothing I work on is even considering a migration to Python 3. OpenStack? A tonne of Django applications that depend on Python 2-only libraries? A slew of automation, monitoring and system administration code that hasn't been touched since it was written? Enterprise customers who run on CentOS in highly restrictive environments? A migration to Python 3 is unfathomable.

However my workstation's primary Python is 3. All of my personal stuff is written in 3. I try to make everything I contribute to Python 3 compatible. I've been doing that for a long time. Still no hope that I will be working on Python 3 at my day job.

Sad state of affairs and a cautionary tale: "Never break the CPAN."


I don't like Python 3.

Iterators everywhere are incredibly annoying, especially with my development workflow, where I don't put a line of code into a file before I run it manually in the interpreter. When I run a map over a list, I just want to see the freaking results.

Default unicode strings are obscenely annoying to me. Almost all of my code deals with binary data, parsing complex data structures, etc. The only "human readable" strings in my code are logs. Why the hell should I worry about text encoding before sending a string into a TCP socket...

The fact that the combination of words "encode" and "UTF8" appear in my code, and str.encode('hex') is no longer available, is a very good representation of why I hate Python 3.

In Python 2, the rule of thumb was "If it makes sense, it's going to work". In Python 3, this isn't true. Not often, but often enough to annoy. This makes Python "feel" like Java to me.

And worst of all, Python 3 has so many excellent features, like sub-generators, a better performing GIL, etc. These sometimes force me into coding with Python 3. I hate it every freaking time.

I said to myself that with Python 3.4, due to great stuff like the new asyncio module, I'll have to make the switch. It's really sad that this is because I "have to" do it, and not because I "want to".


> Default unicode strings are obscenely annoying to me. Almost all of my code deals with binary data, parsing complex data structures, etc. The only "human readable" strings in my code are logs. Why the hell should I worry about text encoding before sending a string into a TCP socket...

If your Python 3 code is dealing with binary data, you would use byte strings and you would never have to call encode or touch UTF-8 before passing the byte string to a TCP socket.
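For illustration, a minimal sketch (the host and port are made up, and it assumes something is listening there):

    import socket

    payload = b'\x00\x01\xff\xfe'  # raw bytes; no text encoding anywhere
    with socket.create_connection(('127.0.0.1', 9000)) as sock:
        sock.sendall(payload)      # sockets take bytes directly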

What you're saying about Unicode scares me. If you haven't already, please read The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) before writing any libraries that I might depend on.


> you would use byte strings and you would never have to call encode or touch UTF-8 before passing the byte string to a TCP socket.

I'll start by adding that it's also incredibly annoying to declare each string to be a byte string, if this wasn't clear from my original rant.

Additionally, your advice is broken. Take a look at this example, with the shelve module (from the standard library):

  import shelve
  s = shelve.open('/tmp/a')
  s[b'key'] = 1

Results in:

  AttributeError: 'bytes' object has no attribute 'encode'

So in this case, my byte string can't be used as a key here, apparently. Of course, a string is expected, and this isn't really a string. My use case was trying to use a binary representation of a hash as the key here. What's more natural than that? Could easily do that in Python 2. Not so easy now.

I can find endless examples for this, so your advice about "just using byte strings" is invalid. Conversions are inevitable. And this annoys me.

> What you're saying about Unicode scares me.

Yeah, I know full well what you're scared of. If I'm designing everything from scratch, using Unicode properly is easy. This, however, is not the case when implementing existing protocols, or reading file formats that don't use Unicode. That's where things begin being annoying when your strings are no longer strings.


> Additionally, your advice is broken. Take a look at this example, with the shelve module (from the standard library).

His advice was sound, and referred to your example of TCP stream data (which is binary). Your example regards the shelve library.

> So in this case, my byte string can't be used as a key here, apparently.

shelve requires strings as keys. This is documented, though not particularly clearly.
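(If the goal is a hash as the key, one workaround - a sketch, not the only option - is to store the hex digest, which is text:

    import hashlib
    import shelve

    key = hashlib.sha1(b'raw data').hexdigest()  # hexdigest() returns str
    s = shelve.open('/tmp/a')
    s[key] = 1                                   # str keys satisfy shelve's contract
    s.close()

Less direct than the Python 2 version, but it keeps shelve's keys textual.)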

> so your advice about "just using byte strings" is invalid.

Let me attempt to rephrase his advice. Use bytestrings where your data is a string of bytes. Use strings where your data is human-readable text. Convert to and from bytestrings when serializing to something like network or storage.
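A tiny sketch of that rule (the names are invented for illustration):

    name = 'Gådel'                       # human-readable text: str
    wire = name.encode('utf-8')          # serialize at the boundary: bytes
    assert wire == b'G\xc3\xa5del'
    assert wire.decode('utf-8') == name  # and decode again on the way back in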

> Conversions are inevitable.

Absolutely, because bytestrings and (text)strings are two different types.

> And this annoys me.

There is no real alternative though, because there is no way to automatically convert between the two. Python 2 made many assumptions, and these were often invalid and led to bugs. Python 3 does not; in places where it does not have the required information, you must provide it.

> when implementing existing protocols, or reading file formats that don't use Unicode.

I'm afraid it's still the same. A protocol that uses unicode requires you to write something like "decode('utf-8')" (if UTF-8 is what it uses); one that does not requires "decode('whatever-it-uses-instead')". If it isn't clear what encoding the file format or protocol stores textual data in, then that's a bug in the file format or protocol, not Python. Regardless, Python doesn't know (and can't know) what encoding the file or protocol uses.
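Sketching both cases (the byte values are just examples):

    data = b'caf\xc3\xa9'          # from a protocol documented as UTF-8
    text = data.decode('utf-8')    # -> 'café'

    data = b'caf\xe9'              # from a protocol documented as Latin-1
    text = data.decode('latin-1')  # -> 'café'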


If all your code is dealing with binary data, why the heck are you using strings for it? There's a bytes type there for a reason, which doesn't deal with encodings, and you won't accidentally try and treat a blob like a string.
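For instance, mixing the two fails loudly instead of silently corrupting data (interpreter sketch):

    >>> b'blob' + 'text'    # raises TypeError: bytes and str don't mix
    >>> 'text'.encode('ascii') + b'blob'
    b'textblob'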


Keyme, you and I are among the few who have serious concerns with the design of Python 3. I started to embrace it in a big way 6 months ago, when most libraries I use became available in Python 3. I wish I could say the wait is over, that we should all move to Python 3, and that everything will be great. Instead I find no compelling advantage. Maybe there will be one when I start to use unicode strings more. For now I'm really annoyed by the default iterators and the binary string handling. I am afraid it is not a change for the good.

I come from the Java world, where people take a lot of care to implement things as streams. It was initially shocking to see Python read an entire file into memory and turn it into a list or other data structure with no regard for memory usage. Then I learned this works perfectly well when you have a small input; a few MB or so is a piece of cake for a modern computer. It takes all the hassle out of setting up streams as in Java. You optimize when you need to. But for 90% of stuff, a materialized list works perfectly well.

Now Python has become more like Java in this respect. I can't do exploratory programming easily without adding list(). Many times I run into problems when I am building a complex data structure like a list of lists, and end up getting a list of iterators. It takes the conciseness out of Python when I am forced to deal with iterators and to materialize the data.

The other big problem is the binary string. Binary string handling is one of the great features of Python. It is so much more friendly to manipulate binary data in Python compared to C or Java. In Python 3, it is pretty much broken. It would have been an easy transition if I only needed to add a 'b' prefix to specify a binary string literal. But in fact, the operations on binary strings are so different from regular strings that it is just broken.

  In [38]: list('abc')
  Out[38]: ['a', 'b', 'c']
  
  In [37]: list(b'abc')       # string become numbers??
  Out[37]: [97, 98, 99]
  
  In [43]: ''.join('abc')
  Out[43]: 'abc'
  
  In [44]: ''.join(b'abc')    # broken, no easy way to join them back into string
  ---------------------------------------------------------------------------
  TypeError                                 Traceback (most recent call last)
  <ipython-input-44-fcdbf85649d1> in <module>()
  ----> 1 ''.join(b'abc')
  
  TypeError: sequence item 0: expected str instance, int found


Yes! Thank you.

All the other commenters here that are explaining things like using a list() in order to print out an iterator are missing the point entirely.

The issue is "discomfort". Of course you can write code that makes everything work again. This isn't the issue. It's just not "comfortable". This is a major step backwards in a language that is used 50% of the time in an interactive shell (well, at least for some of us).


The converse problem is having to write iterator versions of map, filter, and other eagerly-evaluated builtins. You can't just write:

    >>> t = iter(map(lambda x: x * x, xs))
Because the map() call is eagerly evaluated. It's much easier to exhaust an iterator in the list constructor and leads to a consistent iteration API throughout the language.
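In Python 3 the same call is lazy by default, and materializing is one wrapper away (a quick sketch):

    squares = map(lambda x: x * x, [1, 2, 3])  # a lazy map object in Python 3
    list(squares)                              # [1, 4, 9] when you want it all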

If that makes your life hard then I feel sorry for you, son. I've got 99 problems but a list constructor ain't one.

The Python 3 bytes object is not intended to be the same as the Python 2 str object. They're completely separate concepts. Any comparison is moot.

Think of the bytes object as a dynamic char[] and you'll be less inclined to confusion and anger:

    >>> list(b'abc')
    [97, 98, 99]
That's not a list of numbers... that's a list of bytes!

    >>> "".join(map(lambda byte: chr(byte), b'abc'))
    'abc'
And you get a string!


  > >>> list(b'abc')
  > [97, 98, 99]
  > That's not a list of numbers... that's a list of bytes!
No, it's a list of numbers:

  >>> type(list(b'abc')[0])
  <class 'int'>
I think the GP mis-typed his last example. First, he showed that ''.join('abc') takes a string, busts it up, then concatenates it back to a string. Then, with ''.join(b'abc'), he appears to want to bust up a byte string and concatenate it back to a text string. But I suspect he meant to type this:

  >>> b''.join(b'abc')
That is, bust up a byte string and concatenate back to what you start with: a byte string. But that doesn't work, when you bust up a byte string you get a list of ints; and you cannot concatenate them back to a byte string (at least not elegantly).


> No, it's a list of numbers:

Well, yes. Python chooses the decimal representation by default. So? It could be hex or octal and still be a byte.

My example was merely meant to be illustrative and not an example of real code. The byte object is simply not a str; so I don't understand where this frustration with them not being str is coming from. If you use the unicode objects in Python 3 you get the same API as before. The difference is now you can't rely on Python implicitly converting up ASCII byte strings to unicode objects and have to explicitly encode/decode from one to the other. It removes a lot of subtle encoding bugs.

Perhaps it's just that encoding is tricky for developers who never learned it in the first place when they started learning the string APIs in popular dynamic languages? I don't know. It makes a lot of sense to me since I've spent years fixing Python 2 code-bases that got unicode wrong.


You are not making a useful comment because you don't understand the use case. Python 2 is very useful for handling binary data. This complaint is not about unicode. It is about binary file manipulation.

I'm thrilled about the unicode support. If they had only added unicode strings, left the binary string alone, and just required an additional literal prefix b, it would have been an easy transition. Instead the design was changed for no good reason and the code is broken too.


I have a hard time believing that the design was arbitrarily changed.

The request to add string-APIs to the bytes object has been brought up before [0]. I think the reasoning is quite clear: byte-strings are not textual objects. If you are sending textual data to a byte-string API, build your string as a string and encode it to bytes.

[0] http://bugs.python.org/issue3982

For working with binary data there's a much cleaner API in Python 3 that is less prone to subtle encoding errors.

edit: I realize there is contention but I'm of the mind that .format probably isn't the right method and that if there were one it'd need its own format control string syntax.


join will work fine on a list of bytes, and the bytes object is available for converting a list of ints:

  >>> b'a'.join([b'b',b'e'])
  b'bae'
  >>> bytes([119, 104, 101, 101, 33])
  b'whee!'
  >>>


> The converse problem is having to write iterator versions of map, filter, and other eagerly-evaluated builtins

Well, in Python 2 you just use imap instead of map. That way you have both options, and you can be explicit rather than implicit.
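For example (a Python 2 sketch):

    # Python 2
    from itertools import imap

    eager = map(lambda x: x * x, [1, 2, 3])   # a list: [1, 4, 9]
    lazy = imap(lambda x: x * x, [1, 2, 3])   # an iterator, evaluated on demand
    list(lazy)                                # [1, 4, 9]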

> That's not a list of numbers... that's a list of bytes!

The point being made here is not that some things are not possible in Python 3, but rather that things that are natural in Python 2 are ugly in 3. I believe you're proving the point here. The idea that b'a'[0] == 97 in such a fundamental way that I might get one when I expected the other may be fine in C, but I hold Python to a higher standard.


What you are looking for is imap(). In Python 2 there is an entire collection of iterator variants. You can choose to use either the list or the iterator variants.

The problem with Python 3 is that the list versions are removed. You are forced to use iterators all the time. Things become inconvenient and ugly as a result. Bugs are regularly introduced because I forget to apply list().

  >>> "".join(map(lambda byte: chr(byte), b'abc'))
Compared to ''.join('abc'), this is what I call fuck'd. Luckily maxerickson suggested a better method.


  >>> bytes(list(b'abc'))
  b'abc'
  >>>
That is, the way to turn a list of ints into a byte string is to pass it to the bytes object.

(This narrowly addresses that concern, I'd readily concede that the new API is going to have situations where it is worse)


Thank you. This is good to know. I was rather frustrated to find binary data handling being changed with no easy translation in Python 3.

Here is another annoyance:

    In [207]: 'abc'[0] + 'def'
    Out[207]: 'adef'

    In [208]: b'abc'[0] + b'def'
    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    <ipython-input-208-458c625ec231> in <module>()
    ----> 1 b'abc'[0] + b'def'

    TypeError: unsupported operand type(s) for +: 'int' and 'bytes'


I don't have enough experience with either version to debate the merits of the choice, but the way forward with python 3 is to think of bytes objects as more like special lists of ints, where if you want a slice (instead of a single element) you have to ask for it:

    >>> [1,2,3][0]
    1
    >>> [1,2,3][0:1]
    [1]
    >>> b'abc'[0]
    97
    >>> b'abc'[0:1]
    b'a'
    >>> 
So the construction you want is just:

    >>> b'abc'[0:1]+b'def'
    b'adef'
Which is obviously worse if you are doing it a bunch of times, but it is at least coherent with the view that bytes are just collections of ints (and there are situations where indexing operations returning an int is going to be more useful).


In Java, String and char are two separate types. In Python, there is no separate char type; it is simply a string of length 1. I do not have a great theory to show which design is better either. I can only say the Python design worked great for me in the past (for both text and binary strings), and I suspect it is the more user-friendly design of the two.

So in Python 3 the design of the binary string is changed. Unlike the old string, a bytes value and a binary string of length 1 are not the same. Working code is broken, practices have to change, and often it involves more complicated code (like [0] becoming [0:1]). All this happens with no apparent benefit other than being more "coherent" in the eyes of some people. This is the frustration I see after using Python 3 for some time.


this post is a great example of why python 3 didn't go far enough. it's too close to python 2 to be still called python and too far from python 2 to be still called python 2.

personally, coming from a country that needs to deal with non-ascii, i love unicode by default and there are b'' strings if you need them. str.encode is a non-issue - you wasted more words on it than the two-line function enc() it takes to fix it.


If I recall, writing no boilerplate code was a big deal in python once...

And while 2 lines are not worth my rant, writing those 2 lines again and again all the time, is.


i'd argue this is not boilerplate, more like a shortcut for your particular use case:

    import codecs
    enc = lambda x: codecs.encode(x, 'hex')
    
i have a program in python 2 that uses this approach, because i have a lot of decoding from utf and encoding to a different charset to do. python 3 is absolutely the same for me.


Well I don't think I will be able to change your opinion if you feel so strongly about it. However some pointers perhaps might make it less painful:

> Iterators everywhere are incredibly annoying, especially with my development workflow, where I don't put a line of code into a file before I run it manually in the interpreter. When I run a map over a list, I just want to see the freaking results.

That's what list() is for:

    >>> list(map(lambda x: x * x, xs))
> Default unicode strings are obscenely annoying to me. Almost all of my code deals with binary data, parsing complex data structures, etc. The only "human readable" strings in my code are logs. Why the hell should I worry about text encoding before sending a string into a TCP socket...

> The fact that the combination of words "encode" and "UTF8" appear in my code, and str.encode('hex') is no longer available, is a very good representation of why I hate Python 3.

I'm afraid I don't understand your complaint. If you're parsing binary data then Python 3 is clearly superior to Python 2:

    >>> "Hello, Gådel".encode("utf-8")
    b'Hello, G\xc3\xa5del'
Seems much more reasonable than:

    >>> "Hello, Gådel".encode("utf-8")
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 8: ordinal not in range(128)
Because they're not the same thing. Python 2 would implicitly "promote" a bytestring (the default literal) to a unicode object so long as it contained ASCII bytes. Of course this gets really tiresome and leads to Python 2's "unicode dance". Armin seems to prefer it to the extra leg-work for correct unicode handling in Python 3 [0]; however, I think the trade-off is worth it and that pain will fade when the wider world catches up.
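The promotion in question, sketched in a Python 2 shell (error text abbreviated):

    >>> u'abc' + 'def'           # ASCII bytes silently decoded: works
    u'abcdef'
    >>> u'abc' + 'G\xc3\xa5del'  # non-ASCII bytes: the classic failure
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 ...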

> In Python 2, the rule of thumb was "If it makes sense, it's going to work". In Python 3, this isn't true. Not often, but often enough to annoy. This makes Python "feel" like Java to me.

Perhaps this is because you're used to Python 2 and anything else is going to challenge your perceptions of the way it should work?

I don't understand the Java comparison.

> And worst of all, Python 3 has so many excellent features, like sub-generators, a better performing GIL, etc. These sometimes force me into coding with Python 3. I hate it every freaking time.

> I said to myself that with Python 3.4, due to great stuff like the new asyncio module, I'll have to make the switch. It's really sad that this is because I "have to" do it, and not because I "want to".

Well you can always attempt to backport these things into your own modules and keep Python 2 alive.

I think going whole-hog and embracing Python 3 is probably easier in the long run. I'm not sure how long the world is going to tolerate ASCII as the default text encoding given that its prevalence has largely been an artifact of opportunity. Unicode will eventually supplant it I have no doubt. It's good to be ahead of the curve.

[0] http://lucumr.pocoo.org/2013/7/2/the-updated-guide-to-unicod...


> Why the hell should I worry about text encoding before sending a string into a TCP socket...

A string represents a snippet of human readable text and is not merely an array of bytes in a sane world. Thus it is fine & sane to have to encode a string before sticking it into a socket, as sockets are used to transfer bytes from point a to b, not text.


Not arguing that you're wrong, but Unix/Linux is not a sane world by your definition. Whether we like it or not (I do like it), this is the world many of us live in. Python3 adds a burden in this world where none existed in Python2. In exchange, there is good Unicode support, but not everyone uses that. I can't help but wonder if good Unicode support could have been added in a way that preserved Python2 convenience with Unix strings.

(Please note that I'm not making any statement as to what's appropriate to send down a TCP socket.)


ASCII by default is only an accident of history. It's going to be a slow, painful process but all human-readable text is going to be Unicode at some point. For historical reasons you'll still have to encode a vector of bytes full of character information to send it down the pipe but there's no reason why we shouldn't be explicit about it.

The pain [in Python 3] falls primarily on library authors and only at the extremities. If you author your libraries properly your users won't even notice the difference. And in the end, as more protocols and operating systems adopt better encodings for Unicode support, that pain will fade (I'm looking at you, surrogateescape).
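For anyone who hasn't met it, surrogateescape is the error handler that smuggles undecodable bytes through a str and back out losslessly (a minimal sketch):

    raw = b'ok\xff\xfe'  # not valid UTF-8
    text = raw.decode('utf-8', errors='surrogateescape')
    assert text.encode('utf-8', errors='surrogateescape') == raw  # round-trips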

It's better to be ahead of the curve on this transition so that users of the language and our libraries won't get stuck with it. Python 2 made users have to think (or forget) about Unicode (and get it wrong every time... the sheer amount of work I've put into fixing codebases that mixed bytes and unicode objects without thinking about it made me a lot of money but cost me a few years of my life, I'm sure).


I was careful to say "Unix strings", not "ASCII". A Unix string contains no nul byte, but that's about the only rule. It's certainly not necessarily human-readable.

I don't think a programming language can take the position that an OS needs to "adopt better encodings". Python must live in the environment that the OS actually provides. It's probably a vain hope that Unix strings will vanish in anything less than decades (if ever), given the ubiquity of Unix-like systems and their 40 years of history.

I understand that Python2 does not handle Unicode well. I point out that Python3 does not handle Unix strings well. It would be good to have both.


> I was careful to say "Unix strings"

This is the first time I encounter the idiom Unix strings. I'll map it to array of bytes in my table of idioms.

> I don't think a programming language can take the position that an OS needs to "adopt better encodings".

I do think that programming languages should take a position on things, including but not limited to how data is represented and interpreted within them. A language is expected to provide some abstractions, and whether a string is an array of bytes or an array of characters is a consideration for the language designer, who will end up designing a language that takes one or another of the sides available.

Python has taken the side of the language user: enabled Unicode names, defaulted to Unicode strings, defaulted to classes being subclasses of the 'object' class... Unix has taken the side of the machine (which was the side to take at the time of Unix's inception).

> [...] probably a vain hope that Unix strings will vanish [...]

If only we wait for them to vanish, doing nothing to improve.

> Python must live in the environment that the OS actually provides.

Yes, Python must indeed live in the OS' environment. Regardless, one need not be a farmer just because they live among farmers, need they?


> This is the first time I encounter the idiom Unix strings

The usual idiom is C-strings, but I wanted to emphasize the OS, not the language C.

>> [...] probably a vain hope that Unix strings will vanish [...]

> If only we wait for them to vanish, doing nothing to improve.

The article is about the lack of Python3 adoption. In my case, Python3's poor handling of Unix/C strings is friction. It sounds like you believe that Unix/C strings can be made to go away in the near future. I do not believe this. (I'm not even certain that it's a good idea.)


I do not insist that C strings must die; I insist that C strings are indeed arrays of bytes, and we cannot use them to represent text correctly at present. I fully support strings being Unicode-by-default in Python, as most people will put text between double quotes, not a bunch of bytes represented by textual characters.

I do not expect C or Unix interpretations of strings to change, but I believe that they must be considered low-level and require the higher-level language user to explicitly request that the compiler interpret a piece of data in such fashion.

My first name is "Göktuğ". Honestly, which one of the following is rather desirable for me, do you think?

  Python 2.7.4 (default, Sep 26 2013, 03:20:26) 
  >>> "Göktuğ"
  'G\xc3\xb6ktu\xc4\x9f'
or

  Python 3.3.1 (default, Sep 25 2013, 19:29:01) 
  >>> "Göktuğ"
  'Göktuğ'


I'm not arguing against you. I just don't write any code that has to deal with people's names, so that's just not a problem that I face. I fully acknowledge that lack of Unicode is a big problem of Python2, but it's not my problem.

A Unix filename, on the other hand, might be any sort of C string. This sort of thing is all over Unix, not just filenames. (When I first ever installed Python3 at work back when 3.0 (3.1?) came out, one of the self tests failed when it tried to read an unkosher string in our /etc/passwd file.) When I code with Python2, or Perl, or C, or Emacs Lisp, I don't need to worry about these C strings. They just work.

My inquiry, somewhere up this thread, is whether or not it would be possible to solve both problems. (Perhaps by defaulting to utf-8 instead of ASCII. I don't know, I'm not a language designer.)
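For what it's worth, Python 3's OS interfaces do have a bytes route when you ask for it; a sketch (assuming a /tmp directory with arbitrarily-named files):

    import os

    for name in os.listdir(b'/tmp'):  # bytes in, bytes out
        print(name)                   # any C string survives, no decode errors

Whether that counts as handling Unix strings well is, of course, exactly the open question.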


> I insist that C strings are indeed arrays of bytes, and we cannot use them to represent text correctly at present

OK, maybe I do see one small point to argue. A C string, such as one that might be used in Unix, is not necessarily text. But text, represented as utf-8, is a C string.

It seems like there's something to leverage here, at least for those points at which Python3 interacts with the OS.


Had a discussion at work a few weeks ago about this. Roughly, it came down to this - if you need any backwards compatibility whatsoever (do you expect your code to run on machines with operating system releases older than 2013ish, OR do you plan to interop with code older than 2013ish), Python 2 is the way to go. Otherwise, feel free to use Python 3.

This effectively means that Py3 can be used confidently only for purely greenfield projects on machines under your team's control, and possibly only by new teams. It's a shame - there are a number of nifty cleanups in Py3, but I simply don't see a compelling reason to use Py3 over Py2. This is particularly true given that PyPy is fast and targets Py2. Given the triple of Python/Perl/Ruby for a greenfield project, I'd probably explore Ruby; I haven't been jaded by it yet, unlike Perl or Python.


> Had a discussion at work a few weeks ago about this. Roughly, it came down to this - if you need any backwards compatibility whatsoever (do you expect your code to run on machines with operating system releases older than 2013ish, OR do you plan to interop with code older than 2013ish)

Python 3 runs on operating systems older than 2013 and, while it might not be the OS default, there is no reason a non-library project can't bundle its own interpreter in the distribution and use that instead of the OS default interpreter.


> Python 3 runs on operating systems older than 2013 and, while it might not be the OS default, there is no reason a non-library project can't bundle its own interpreter in the distribution and use that instead of the OS default interpreter.

You are absolutely right, but the overall cost of deployment skyrockets at that point. Not to mention pernicious bugs from grabbing the wrong library & whatever other weirdness happens. Unless the project is reasonably significant (deployment cost is insignificant to development cost), it's much easier just to work from the system python.


Surely any new project these days is using virtualenv (with a specified version of the python interpreter) + pip.


Projects 'of a given size', yes. But there are small projects, projects for Windows, projects where the fleet is managed and deploying full venvs isn't done... etc.

The niceties of greenfield work and what ought to be done start breaking down as you get into heterogeneous systems where total system stability (lack of change) is a big deal, and these fleets have been running for a decade or so. Imagine having to figure out deployments to a Python 2.4 environment in 2012 (I had this problem).

To misquote a famous statement, you fight the battle with the troops you have, not the troops you want, against the enemy you have, not the enemy you want. The reality is that it's often (usually?) not worth it to ship full venvs with Py3. It's often much better to ship Py 2.7 code (just your own code) and install your own libraries in the system library location.


Yeah, unfortunately CentOS users don't think this way. It's a serious question whether Py3 will _ever_ become the default in server distributions. I think we'll see more significant adoption once that happens.


Default isn't the concern: it's being packaged at all. That, and the ensuing support, is the important issue.


> operating system releases older than 2013ish

This seems like it depends on what kinds of OS you regularly use. On Linux the deployment situation isn't quite that dire. Python3 has been in Debian stable since Debian 6 (aka Squeeze, released February 2011), so running Python3 code isn't a problem if the machine is even close to up to date. On Ubuntu the situation is even reversed: Python3 is default, and starting in 2014 it will be the only supported version, with Python2 support being dropped. It will still be installable out of "universe", but the goal is to drop Python2 from "main" by the release of 14.04 LTS in 2014, because LTS releases promise 5 years of support, and they don't want to be supporting Python2 into 2019.


Exactly

I agree 100% with your first paragraph.

But some things are lacking. Django is there, and it works great, but you hit a dead end with some libraries (like MySQL - even though it seems there are alternatives).

So in the end you don't make the move because you're afraid of needing something that may not be there.


Is there any serious effort to port, say, the top 1000 most popular/important packages to Python 3?


Yeah, but one of the issues is that some very popular packages for Python were already development-dead for years.

PIL is a good example; the last actual PIL release was in 2009. We now have a great fork that works (Pillow), but PIL is not the only dead package that's still in active use. There are quite a few others, too.


Pillow's first big forking action was python3 compatibility. It works, and it wasn't _that_ hard to do.


pip warnings on non maintained packages perhaps?


There actually is: http://python3wos.appspot.com/

A lot of those libraries have Python 3 port branches where you can help the effort. For example: https://github.com/boto/boto/tree/py3kport/py3kport


This is one of the issues I see that need to be corrected. I have made attempts at migrating to 3 that failed miserably. One happened while teaching my son Python with "Python for Kids", which ended up making the experience difficult; the others ended up causing conflicts in various OS dependencies.

I have to say, Python is my favorite language still (at least 2.7), and I hope the community can get it back in line. I can see JS, NodeJS and Python being my core technologies, but this issue with 3 is squashing my dream.


I'm going to go against the grain here and say that moving slowly is one of my absolute favorite features about python and its libraries.

Rails and django were released about the same time, rails is on version 4, django is on 1.6.

Moving slowly means I can spend more of my time writing code and less of my time upgrading old code. More importantly, every release requires a perusal: did the API change, what's new, are there breaking changes I need to be aware of?

I didn't appreciate how nice a slow but consistent and deliberate release cycle was until I started using Ember which seems to release a new version monthly.

It's generally acceptable to be one or two x.x versions back, but much more than that and the cost of maintaining libraries skyrockets, so you start losing bug fixes and library compatibility.

With python there's not really a question of if I can run my code for a year between non-security upgrades, even with a few dozen third party libraries. That stability is immensely valuable.


This is where the Ruby and Python communities fundamentally disagree. The Ruby community is great at moving fast and replacing bad things, while the Python community takes a much slower approach to the whole thing.

I wouldn't say any approach is better or worse, it has to fit your personal style. I like the Ruby approach, others love the Python approach. This means that Ruby breeds a lot of interesting stuff, but with less stability guarantees.

At the same time, Rails still builds on the ideas of Rails 1, which is an achievement in itself. So it's not that Rubyists don't care at all about backwards compatibility - they just tend to throw out bad ideas quickly.

On the other hand, the quick pace allows the MRI team to build improved interpreters with a reasonable chance of quick adoption.

This is a much more fundamental difference than any snickering about the garbage collector of the main interpreter, from a user's perspective.


Differences in userbases could be part of it. Scientific computing is an increasingly important part of the Python community, for example, and they tend to be averse to backwards-incompatible changes. In part that's because you have many good but very lightly maintained libraries that stick around forever, so people prefer if they stay working when nobody touches them, rather than bitrotting and needing constant updating by thinly stretched maintainers. Heck, old Fortran stuff is still in regular use, some of which hasn't been touched in years.


It seems that communities that mainly use programming languages for "getting stuff done" value backwards compatibility the most; they are the ones who would rather not spend time "upgrading" things that used to work perfectly fine, and would rather use that time to do something more useful and related to their ultimate goals. Personally I think backwards compatibility is getting less attention than it deserves, and that software should evolve slowly and gradually rather than suddenly making huge leaps, because all these breaking changes start to look like they're just creating unnecessary, wasteful work. Imagine if things like mains plugs changed every few months, with appliance makers all adopting the newest backwards-incompatible version. In some ways software is easier to change than hardware, but at the same time we must remember that effort needs to be expended to perform these changes as well, effort that could be used in other ways, and often there is a lot of interconnectedness in the systems we are trying to change.


"Imagine if [...] plugs changed every few months, with appliancemakers all adopting the newest backwards-incompatible version"

like apple?


iPod/iPhone

October 2001 - Original iPod - FireWire

April 2003 - Third-Generation iPod - 30-pin Dock Connector

September 2012 - iPhone 5 - Lightning Connector

Apple Laptops

1998 - PowerBook G3 - Unnamed Circular Power Connector

2001 - PowerBook G4 - Smaller Unnamed Circular Power Connector

2006 - MagSafe

2012 - MagSafe 2


Similar to Innovators/Visionaries vs Pragmatists (http://www.chasminstitute.com/METHODOLOGY/TechnologyAdoption...)


Furthermore, with scientific computing, it's important that code you publish in a journal (fossilizing it) can still be at least near-usable to others over time horizons of years to decades.


That's definitely one important aspect. And even stuff not formally fossilized often becomes de-facto fossilized due to the way funding for development works. Things are often very bursty: a large library or piece of software may be written over a period of 2-5 years of concentrated effort either by a PhD student, or by programmers/research-scientists/post-docs hired on an NSF/DARPA/EU-funded research project. But then the PhD student graduates, or the project ends (and therefore its funding for programmers), and the software goes into much lower-staffing maintenance mode. In that mode there aren't resources available for anything but minor fixes. There are some very high-profile projects that are an exception to that pattern, because they're seen as important enough that they manage to string together continuous development for years or decades, either through a series of PhD students or a series of grants. But lots are more or less write-and-then-maintain. Despite being lightly maintained, if the initial work was solid and produced a reasonably "complete" output, it might still be useful to other researchers for years into the future, if it doesn't bitrot. Some of the R packages are a good example: plenty of stuff hasn't been touched in 10+ years but is still in daily use.


Differences in userbases are definitely part of it. E.g. Ruby is very often used in the whole devops space, where constant forward-change is usual because of the mindset of "never being done".

Constant change is a thing that doesn't get enough mindshare as well. Not training your team at handling change is probably as bad as not having programmers care about backwards compatibility.

To be quite clear: I think it is very beautiful to have 2 languages in the same space, evolving into different community directions.


There's plenty of rapid iteration in the Python world, and as far as I know, there is no fundamental disagreement with the concepts behind rapid iteration. Development still occurs in the open, but releases are well-planned and well-versioned such that backward incompatibilities are clearly billed and rarely introduced if not necessary. Pyramid is an exemplary Python project that has very rapid development but still respects compatibility concerns.

My opinion is that Python projects are simply more likely to have more mature release processes because Python is more likely to be used by mature engineers than Ruby.


I didn't talk about release processes. There are lots of projects with mature release processes in the Ruby world. What I meant is that aggressively replacing parts that became obsolete or turned out problematic is much more accepted.

> Python is more likely to be used by mature engineers than Ruby.

It is very sad that you waste a good post on such an ungrounded attack.


It's not an attack, it's my opinion that more mature hackers eventually converge on Python, and I think the community reflects that. Maybe it's simply that Python has never had a project with the same sex appeal as Rails, and thus has avoided an "Eternal Summer"-esque influx.

For the record, the last time I wrote Ruby code was about 4 days ago, and the last time I wrote Python code was yesterday afternoon. I'm a part of both communities and I think that there are a lot of people who are. From my experiences in both communities, I think that the Python community is much more mature and professional.


I believe cookiecaper is right for the foreseeable future; there is more demand for Python data scientists than Ruby developers.

But we ignore one important thing here and that is the lesson! People need to learn something from the Python story!

Moving forward with an evolving concept requires the (mathematical) coherence of all ideas. You cannot invent a spoken language, then break it and say we now speak a different dialect because we use 2 fewer words this way. Nobody will adopt it, not because the feature of reducing verbosity and increasing expressiveness is actually bad, but because the new concept branches out and, technically seen, it's "noise": it adds more without integrating it. You can throw a second motor into a car, but without integrating the second motor, you will have no benefit. Python has to learn this the hard way. Dennis Ritchie and Ken Thompson made decisions for the C programming language about 40 years ago, and all of the C code written back then can still be compiled, albeit some things have changed and require minor fixes. But that is change introduced over a timespan of 40 years. You cannot give a new present every 5 years and say that many of the old presents you gave back then have to be returned when the new present is accepted. Coherence and Evolution are powers whose use should unify, and only diversify when required or requested by the diffusion of technology into the userbase.


I think the same about old people btw. we branch their value out to an old value that is not compatible with the values in our current system. Oh boi, we do that so wrong, it's laughable and very sad at the same time how our society thinks about old knowledge, old people etc.

I think HN is the community that most loudly would agree with new != better but that's exactly what we do wrong. Holy cow, I can't explain how much value we have at our disposal that we throw away + pay to keep it away comfortably. New businesses don't integrate old people, because they don't really know how to make value out of them. That's a simple equation, if you see it this way. It's not because old people cannot contribute to the development of IT, Startups and the Hacker scene. We just have no business model, not even a concept that considers these elder men and women.


I think this is well said. It seems to me that python iteration happens in the space ahead of current (x.x.dev).

I can't really say how ruby does it because I haven't used ruby much.


The problem with the fast-evolving Ruby approach is the cost of staying up to date. I was intimately involved with several projects to upgrade nontrivial codebases from Rails 2/Ruby 1.8 to Rails 3/Ruby 1.9, and these consumed serious amounts of engineering time and introduced lots of obscure bugs, with the primary benefit simply being not getting pwned when the next exploit comes trundling along. Fortunately our management understood and was willing to put up with the pain, but many (most?) would not.

Now, I work primarily with Python 2.7, with migration to 3 being a distant pipe dream. After years of Rubying, I find it a bit old-fashioned, awkward and occasionally infuriating (strings default to ASCII? srsly?) -- but I do appreciate knowing that I will not have to rewrite everything anytime soon.


The 1.8/1.9 switch was problematic and can be seen in parallel with the python 2.7/3.x switch. Rails 2/Rails 3 was a similar big jump that reengineered the whole framework. Rails 4 was a much tamer release in that regard.

But one has to realize that 1.8 was the series of Ruby that was started _before_ Ruby even got popular (I started Ruby using 1.6, which was even more problematic). So being aggressive in breaking stuff with the next iteration (basically, to do it properly, with more manpower and data to work on) also opened up a lot - for example, Ruby 1.9 to 2.0 is a much simpler switch and I have many clients running 2.0 now and testing on 2.1.

The fear I always have with python 2.7 is the distant future. It just irritates me to have a huge migration in front of me. The Rubyist in me wants to keep up with the current state of things.


1.8/1.9 and 2.x/3.x were handled very differently by the respective communities. That's evident even from the version numbers.


That's because until well into the development of Ruby 1.9, Ruby was still using the odd (dev)/even (release) approach. 1.9 introduced things that were different, and it wasn't stable enough for production use until 1.9.2, at which point work on 2.0 started.


i like fast movement, i like communities that dare to break things every now and then (see semver.org for the widely accepted versioning rules, which should be honoured).

much do i prefer an increasingly better solution to a stable one. i'm not arguing "perfect" over "good enough"; i'm arguing "awesome" over "good enough" :)


You might be a great Microsoft developer.

Use this API, no this one, nope now this one, you almost caught up, so we released a newer, more better one! Please buy Awesome Studio and RDMS 2015 or you are a loser. Here's a free version that can't do shit, the Pro version is only $2000, plus lots of your time adjusting to a newer mono/flat chrome development environment. Hot keys only for touch screen users!


Correct me if my history is a little fuzzy, but isn't it Apple that plays the part of "move fast, worry later" so well? After all, Apple has been the front-runner when it comes to things like, "f--k your floppy/CD drive/FireWire/display-adapter, we're moving on", and that's just in hardware.

Microsoft has actually done a pretty good job maintaining backwards compatibility, considering the size of their user base and variety of software to accommodate. Though perhaps that makes it a better metaphor for PHP development than Python.


Microsoft maintains good backward compatibility, but has a habit of releasing a new way of doing things every five minutes.

Particularly look at http://msdn.microsoft.com/en-us/library/ms810810.aspx#bkmk_D... and windows current / deprecated/obsoleted ones.

Just to connect your Windows program to a Microsoft database.


To be fair, most of those deprecated technologies are 10 to 20 years old and use paradigms that wouldn't cut it with how we build software today.


It's not like you still couldn't plug in an old CD drive into your Mac. It will still work just fine. You can't plug in an old Python 2.x plugin into Python 3.x.


Why not? Are you familiar with SciPy's `weave` module? It's basically a plugin for plugging C++ code into your Python. In a variety of extremely cool compile-on-the-fly ways. It's really quite sweet :)

No reason you can't do that with Python 2.7. Should be easier in fact because the interface will be much thinner.

I am NOT saying this is a good idea however! Haha


That's interesting. I'll look it up.


Apple wants users to buy new hardware and I guess they figure it's not a big deal because you can buy adapters and/or hardware is usually bought together in generations. Their software APIs have remained quite stable; certainly no Silverlight-esque shenanigans from them.


Good point. Apple might frequently break compatibility even at hardware level but they do provide adapters if you really need it, making transitioning a no-brainer. The problem with python is that there are no such adapters at all.


I find this hardware compatibility thing a bit of a non issue. I have a usb dvd drive and floppy drive for that matter. Don't use em much to be honest. Will be using the dvd a lot in a bit while I migrate my dvd collection entirely to my nas box. (aka dd if=/dev/whatever of=/some/file) but that's mostly an exception and a desire to get rid of stupid physical stuff.

As for Apple not being backward compatible, going to have to disagree a bit. Rosetta for example kept ppc code running on intel for a long time (through 10.6). Apple generally will give you a couple OS releases' notice that xyz is on its way out, so start fixing it now or planning your path. To be honest I prefer that approach. It's measured, but still decent enough to handle moving on. Now with being able to run OSX in vmware fusion I'm pretty much of the opinion that the backward compatibility canard in general is kinda a non issue overall. But just my opinion really.

I will say compared to things like systemd/ubuntu I've had a lot less breaking annoyances on OSX than any other unix/os including windows. By and large it works fine. Now mind you all I do is run chrome/firefox/safari/emacs/vmware on osx mostly so not like my needs are complex. But I guess all the rapid change in apple land really kinda goes by me using it. All I do is wait for vmware to work on the new os and then its game on.

I'm rambling but basically I think there is room for both types of iteration, or even a combination.


No, Microsoft never breaks anything. They deprecate relentlessly and push out new libs to replace old ones, but they never have the balls to actually fix anything. It's immensely frustrating that no lib ever reaches better than 80% good. It goes "1.0broken -> 1.1usable -> 2.0baroque -> 2.1deprecated -> Replacement lib 1.0broken".


I can give such examples for any commercial vendor and in terms of breaking compatibility, there are a few examples in FOSS land as well.


> I can give such examples for any commercial vendor and in terms of breaking compatibility, there are a few examples in FOSS land as well

But that wouldn't have the same emotional appeal as MS ;)


Microsoft are a massive tech company though.


>"Here's a free version that can't do shit"

Depends what you are doing. I worked in a multinational mobile game studio and developers were using Visual C++ express, which was actually pretty capable for what we were doing.


Thanks for the advice; only 2000 for "pro"? Wow! Where can I pay?


You clearly aren't running extensive production systems with few administrative resources. Code was written 15 years ago, and it's still running. Any change that breaks something is a major headache, usually leading to rolling things back to the working version. Nobody knows how, or cares, to fix those things to achieve compatibility with a new version. This leads to a situation where nothing at all is ever updated. And even known bugs and exploits aren't getting fixed. Oh well, that's business as usual.

But when I'm writing new code, I'm of course using Python 3.


> But when I'm writing new code, I'm of course using Python 3.

There's no "of course" about it, which is the point of the article. It is interesting that you're using Python3. It means that you're not blocked by library availability (public or company-internal). It means that you are permitted by management to use it. It probably means that you prefer it.

My situation is different on the last two points, but I am also not blocked by library availability.


There's plenty of value to the going-slow approach and Python should embrace that identity and the value that comes with it. However, having two incompatible versions of the language itself--and all of the confusion that comes along with that--is a major barrier to adoption and improvement of the language.

The mistake here was breaking backwards compatibility to begin with. But since that's long past us now, the problem that needs to be solved is how to reconcile the two versions. It's unrealistic to keep both 2.7 and 3.3+ around forever. How do we get to a single version of Python to keep the language and its community healthy?


>>Moving slowly means I can spend more of my time writing code and less of my time upgrading old code.

That also means people who want to start new projects are going to do them in some other tools that are designed with the current problem trends in view.

Yes you still have bulk of legacy projects to work on. But that's the whole point.


Apparently you didn't use Rails, or at least not on a real project.


It's fascinating to compare this with ruby 1.9, released around the same time, but seemingly with a slightly better cost/benefit ratio, having nice new features and also significantly improved performance, and with ruby 1.8 being deprecated with a lot more speed and force. It got everyone to actually make the switch, and then ruby 2.0 came along, largely compatible and with a more improvements, and now ruby 2.1 seems to be an even smoother upgrade from 2.0.

The ability of the ruby core team to manage not just the technical aspect of making the language better, but smooth the transition in a way that actually succeeded in bringing the community (and even alternate ruby implementations) along with them, hasn't been given nearly enough credit. You could analogize it to Apple with OS 9 -> OS 10.9, versus Microsoft with people still running XP


> You could analogize it to Apple with OS 9 -> OS 10.9, versus Microsoft with people still running XP

No, you couldn't. The difference in upgrade rates between Windows and OS X is primarily due to their differing customer bases. Windows is very popular in enterprise, which avoids unnecessary upgrades in order to ensure compatibility with in-house software.

OS X, however, has almost no presence in enterprise, and consumers don't mind upgrades nearly as much, since the consumer software they use has to be compatible with all OS versions. Also, you can't buy Macs with old versions of OS X, whereas Microsoft makes it trivial to buy a new machine and install an older version of Windows.


I just want to chime in and say that Apple has done an abysmal job with backward compatibility, and it's not just due to the enterprise vs consumer market. I would wager that if you looked at the total list of apps ever released for Mac OS, the majority of them would not run today. That's because Apple's primary strategies are 1) innovate and 2) put the user first. Putting the developer first is not part of their profit motive like it would be for say Oracle or MathWorks. So without constant recurring effort, developers and the apps they create get left behind.

Apple frequently deprecates APIs that they endorsed just a few years ago. And they claim that apps can be rewritten to work with new APIs in a few hours, when in reality it can take weeks, months or even longer due to refactoring issues. If you want to be an Apple developer, you will likely be rewriting a portion of your code at some point to run on a new OS release. I was optimistic that the practice might end but history is already repeating itself with Apple insisting on iOS 7 compliance. The kicker is that Apple could have ported a lightweight version of Mac OS to run directly on arm for iOS, but they didn't, and I disagree with it being a performance issue (iPads are orders of magnitude more powerful than NeXT machines, or even the early PowerPCs that ran OS X). They created a vast array of nearly duplicate code prefixed with UI instead of NS. This looks like more of an anachronism every day to me with tablets running at multiple GHz with GBs of ram, when the only major difference between desktop and mobile is the touch interface.

Contrast this with Microsoft, where I am finding very old examples designed for Windows XP that still run today. Now Microsoft is certainly burning its developers with the major breaks in various versions of DirectX, or subtle differences in APIs between desktop/mobile/Xbox, but in my opinion this isn't happening nearly to the extent that it is with Apple.


> Contrast this with Microsoft, where I am finding very old examples designed for Windows XP that still run today.

xp? Up until a couple years ago when I switched to 64bit windows (first computer I had with more than 4gb ram) I could still run win3.1 and dos programs in (32-bit) windows 7. Including dos programs designed for the very original ibm pc (1981!).

Even today with 64bit windows I can still run most windows 95 programs.


I'm glad they didn't try too hard in porting MacOS to iOS. The touch/mobile platform is significantly different than the desktop platform. It required a huge rethink in design of the API and the UI. Is the Windows Desktop API similar to the Windows Phone API?


As well as the enterprise, don't forget the non-technical home market, and non-technical small businesses. Windows is much more common there, and they also don't have a short upgrade cycle.


Macs have a significant presence in enterprise these days.


Yes, Macs have a decent presence in enterprises these days--although it's easy to overstate it; Mac overall market share is still pretty low. Somewhere in the 10 percent range, I believe. Furthermore, and to the original point, a lot of those Macs are BYOD or otherwise not managed as a corporate desktop/laptop. (Where I work is a case in point. You see a fair number of Macs but IT doesn't formally support them.)


> Mac overall market share is still pretty low. Somewhere in the 10 percent range, I believe.

That's in the US. In the rest of the world, it's around 5%.


According to this site[0], it's at 7.54%. Out of that, only 32% are running the latest version of OS X (10.9). Another 24.5% are running 10.8. So almost half of OS X users are at least 2 versions behind. Not nearly as up-to-date as you might think.

0: http://www.netmarketshare.com/operating-system-market-share....


So what you're saying is that greater than 56% of Macs are running operating systems at most 1.5 years old? (10.8 + 10.9)

God I wish the numbers were like that for Windows.


Well, it's a good thing then that you can't make a direct comparison between the two OSes. Apple doesn't care about developers, quickly deprecating APIs. Therefore, targeting an Apple OS that's 1.5 years old is much more tedious than targeting a Microsoft OS that's 1.5 years old.


My last two jobs were at Fortune 100 companies, and both had IT-supported Macs, and a lot of them. I work with a lot of people from other large enterprises and the same is true. This is anecdotal, but it's more evidence than the guy claiming they don't have a presence.


But were the changes between 1.8 and 1.9 in Ruby as significant as those from 2 to 3 in Python? Or even 1.8 to 2.0?

I know in Ruby 1.8.x to 1.9, some major changes in my day-to-day coding involved Hash (elimination of hash-rocket, and ordered key-value pairs by default)...but I can't think of anything off hand that required me to rewrite my own libraries. And between 1.9 and 2.0...I've been switching between machines that have 1.9.3x and 2.0x and can't even tell a difference. Obviously, I'm not doing any legacy-maintenance in this situation, but it seemed that Python's changes, while breaking, were also significant improvements and changes to the API that mandated changes in implementation?

Speaking as a non-Python-dev...I've been wanting to get more into Python, at least to write routines that take advantage of scipy and numpy and all that goodness...but the process of deciding between 2.x and 3.x and keeping the steps/compromises in order can be bewildering.


1.8 to 1.9 was a HUGE release in terms of breaking changes. Probably the most notable change was the introduction of proper unicode support.

Unlike Python, which changed syntax at the same time, Ruby tried to maintain compatibility with existing syntax. In practice, this allowed Ruby libraries (including Rails; I did the bulk of the encoding work for Ruby 1.9 in Rails 3) to do runtime introspection to support 1.8 and 1.9 at the same time.

In my view, the best, most underrated thing that Matz did in Ruby 1.9 was to make all of the breaking changes detectable and shimmable at runtime.


With Python, one must make a conscious effort to write code that is both 2.x- and 3.x-compatible, but it is entirely possible (also with some detection/shimming at runtime). Some of that is easier if you stick to 2.7 and 3.3+ only, because then you can do things like "from __future__ import print_function", use b"..." and u"..." literals, etc.
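For instance, a minimal single-source sketch (the ensure_bytes helper is made up for illustration, not from any particular library):

    # runs unchanged on 2.7 and 3.3+
    from __future__ import print_function, unicode_literals

    import sys

    PY2 = sys.version_info[0] == 2   # runtime detection, for shims if needed

    def ensure_bytes(s, encoding='utf-8'):
        # b"..." literals work in both lines; 3.3 restored u"..." too
        return s if isinstance(s, bytes) else s.encode(encoding)

    print(repr(ensure_bytes(u'caf\xe9')))  # same UTF-8 bytes under both,
                                           # only the repr prefix differs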


Ah yes, it's fitting that I would forget about the Unicode change...the fact that I did so is just further affirmation of why I still get myself into encoding problems...all the time. But yes, the Unicode support did seem to be a frequent reason for library overhauls.


> but I can't think of anything off hand that required me to rewrite my own libraries.

There were a lot of gems that weren't 1.9 compatible for quite a while, so there must have been something that caused people to rewrite libraries. I think that one disadvantage for the Python 2->3 transition was, ironically, an advantage Python had over Ruby -- its bigger base of libraries, many of which were widely used but basically in very-low-effort maintenance mode.


Matz has also said no breaking changes until ruby 3.0, which is 'ten years away.'

I don't expect this to actually be true, but it speaks to an attitude.


I've seen breaking changes between Ruby 1.8.6 and 1.8.7, so I'm really suspicious of this claim. Starting from which version of Ruby are breaking changes not to be expected until 3.0?


From 2.0 -> 3.0.

Ruby doesn't follow SemVer, so expecting no breaking changes between 1.8.6 and 1.8.7 wouldn't necessarily be correct. I don't remember what the policy was at the time.


BTW, it was just announced that Ruby will use SemVer starting from Ruby 2.1.0.

https://www.ruby-lang.org/en/news/2013/12/21/semantic-versio...


> MINOR: Increase at each December 25th; may be API incompatible

Well now, that's not exactly semver, is it…?


Excellent, thank you. I missed that during the holidays.



I feel Ruby's syntax is mostly done. An addition here or there, but breakage should be rare, as it just feels so finished. Performance-wise I think it can use (and gets) a lot of love.

On the Python side things are different. Python seems quite a bit faster than Ruby. Same league, but faster. Yet its syntax is what needs to be greatly improved upon, as it is so full of surprises and dark corners.

A language's syntax is like its API, and performance is an implementation "detail". Breaking the API is much more painful than changing the implementation -- therefore I prefer Ruby's approach: begin slow, but with a quite stable syntax.

P.S. I don't intend to hurt feelings with this comment.


In synthetic benchmarks, Ruby 2.0 is generally faster than or approximately equal to Python 3.

http://benchmarksgame.alioth.debian.org/u64/benchmark.php?te...


If he said it I believe it. I don't worry so much about Matz and the Ruby core committers breaking things.


Well, I mean, there were backwards-incompatible changes in Ruby 2.0, which I think came after he said there wouldn't be. They were fairly minor and easily dealt with, but there was certainly some software that had to be adjusted for Ruby 2.0 from 1.9.3.

https://gist.github.com/nathany/6046171


Python 3 came from a good place, and it definitely fixes many problems that sorely needed fixing, but it was doomed to failure from the start (and many developers said so back in 2008).

For all intents and purposes, Python 3 is pretty much a new, separate programming language. Sure, it's very close to Python 2.x, but you don't have out-of-the-box backward compatibility, and that pretty much kills it right there.

Python 2.x is so huge in terms of its use around the world and available libraries and resources for it that you just can't say "hey, the new version of Python will in practice break everything" and expect it to fly.

I love Python and the community around it (and have several packages on pypi myself), but Python 3 is a joke.

If we didn't want to kid ourselves, we'd kill Python 3 and back-port the interesting features, like Alex suggests. At this point, though, too much effort and too many egos are involved for that to be realistic.


> If we didn't want to kid ourselves, we'd kill Python 3 ...

Using Python both personally and professionally, I do not think it is necessary. I already came across a few 3.X features, that I miss in 2.X and I write most 2.X code "in preparation" for the jump. But that's it, it is a jump, still.

The most interesting part of the post is the feedback matter, which seems to be accidental, but isn't since an open source language is shaped by the people - something you can forget, when you are not a contributor (which most of the people might not be).


But you've just missed Alex's whole point.

You're fine. 98% of PyPI downloads are not. That's a startling statistic, and argues that something IS necessary, rather than the status quo of shaming the top 1000 Python packages sans 3K support and telling people "it's really not that bad."

It is that bad, in terms of adoption over the first 5 years since 3K's release. And unless we want to spend the next decade trying to get to 50% mindshare with 3K, the community needs to suck it up and change tack.

The definition of insanity is trying the same solution twice and expecting different results.


> but you don't have out-of-the-box backward compatibility, and that pretty much kills it right there.

I don't know if that's necessarily a foregone conclusion. Scala releases aren't always (or even usually) backwards-compatible, and you'd be hard-pressed to find people still using much older versions.


Scala releases are usually not BINARY backwards compatible. Very different from SOURCE backwards compatible.


Scala also was often not source compatible during 2.7 -> 2.8 because of the huge changes in the standard library.


I'm pretty sure they'll never be doing that again though due to the mess it caused.


I'm not too familiar with Scala; but isn't its life at a different stage than Python's?


That's probably related to the user base of Scala being orders of magnitude smaller. The network externalities of a large language user base give it incredible inertia, which makes it very difficult to change.


This sounds remarkably like Perl 5/6.


Perl 6 is vaporware. Python 3 is here, it's widely supported, it's faster than Python 2.7, and while Guido didn't do us any favors with his migration strategy, I don't see what the big deal is. Python 3.3 makes writing code that is compatible with both almost trivial.

People aren't using Python 3 enough because it's not the default in Debian/Ubuntu. That's about to change with Ubuntu 14.04. I expect that to tip the scales.


> Perl 6 is vaporware

Perl 6 is not vaporware, you can run it today:

http://perl6.org/compilers/features

Is it 100% feature complete or fast? No. But there is working code, and new, stable releases on a monthly basis.


Perl 6 and Duke Nukem Forever are the geriatric poster children of vaporware, and always will be, even after they finally shipped. Being able to pick up a turd floating in a urinal does not in any way deliver on all the broken promises, rehabilitate the ruined reputation, or make up for all the blustering hype that was spewed for so many years about how superior Duke Nukem Forever's gameplay would be to all the other games that were already on the market. The damage was done a long time ago, and can never be undone. And Perl 6 is perfectly analogous to Duke Nukem Forever. Fortunately, nobody ever made such ridiculous claims about Python 3 as were made about Duke Nukem Forever and Perl 6.


Being able to "run" it, and the fact that it has a release cycle doesn't mean it's a viable development platform.


> Is it 100% feature complete or fast? No.

If it's not feature complete it's vaporware.


By that definition, basically all software is vaporware.


Well, if they are programming languages, are still unstable 10 years late, have near zero adoption and are missing half of the promised features, then yes.


Unfortunately, languages like Rust fit your definition perfectly, because Rakudo was started around the same time as Rust, and Rust is aiming for maybe 1% of the feature set Perl 6 aims to achieve.


Rakudo was started around the same time as Rust

That's moving the goalposts a bit! What's currently called "Rakudo" is at least the sixth or seventh attempt at an implementation, not counting Pugs, viv, v6, or Niecza.


Rust appeared publicly as a project to build a language in 2010 (it was a personal, private project kept under wraps by Graydon for a few years before that, which is irrelevant).

Perl 6 was first started in 2000. I've been reading Larry's "Apocalypse" descriptions of its "features to be" for four times as long as Rust has existed.

Rakudo is just a particular attempt at Perl 6; it is not the first, nor does it mark the first time the language was announced publicly.


I remember Larry also saying (in a YouTube video, maybe at the O'Reilly conference) that he has been working on Perl 6 since 1987. That's how he wishes to describe it.

Even beyond that, those are not "features to be". They are already available for use - http://perl6.org/compilers/features

Therefore I'm not sure what you've been reading, or if you are reading them at all, because if you were, you would know that Rakudo covers much of the Perl 6 specification.

By the way, Rust is still not complete. The Wikipedia article says work started in 2006, which makes it 8 years and still incomplete. And this is for a project which has, by comparison, maybe 1000x more modest goals. Python 3 was itself 10 years in development, and that was for small modifications to the print statement and iterators. And even 5 years after that date, it doesn't seem to have come anywhere closer to achieving good adoption in production scenarios.

And these are, as I said, extremely modest goals compared to the Perl 6 project.


>I remember Larry also saying (in a YouTube video, maybe at the O'Reilly conference) that he has been working on Perl 6 since 1987. That's how he wishes to describe it.

That's all well, but he announced it circa 2000. I don't care how he feels about it or how long he hacked on it alone; I care about when the language was expected.

>Even beyond that, those are not "features to be". They are already available for use - http://perl6.org/compilers/features

>Therefore I'm not sure what you've been reading, or if you are reading them at all, because if you were, you would know that Rakudo covers much of the Perl 6 specification.

In a half-arsed form, with 1/100th of the Perl community doing anything with them. And still not all of them.

Personally I stopped caring somewhere around 2006. And I've read all of Larry's "apocalypses" back when they used to be on Oreillynet, as well as followed the internal implementation politics for a few years.

>By the way, Rust is still not complete. The Wikipedia article says work started in 2006, which makes it 8 years and still incomplete.

No, it says that it started as a "part-time side project in 2006". That could be 2 weeks total spent in those years writing a list of desired features on a napkin and getting a hello world compiled, for all I know.

I only care about the time since the project was publicly announced and the community started working on it.


If Rust had been hyped as much as Perl6 I'd call it vaporware too. It may yet turn out to be vaporware (in that it's not done), but right now I'm willing to give it the benefit of the doubt that they will actually deliver on their promises.


I have Ruby and Perl tattoos and now work at a Python (2.7) shop, so I have all of the horses in this race, but is there a significant difference between a language that's vaporware[1] and one that nobody uses?

The previous things I've seen also indicated that Python 3 _isn't_ widely supported, though I guess Django finally got support recently, and that it _wasn't_ faster than 2.7. Is this just old information on my part, or is it different for certain kinds of workloads?

> it's not the default in Debian/Ubuntu.

Yeah, that should have a large impact, for sure.

1: I have seen lots of "check out this Perl 6 code" blog posts, but I don't know to what degree it's ACTUALLY vaporware.


> but is there a significant difference between a language that's vaporware[1] and one that nobody uses?

There's a significant difference between a language for which a complete implementation does not exist such that you cannot use it, and one for which an implementation exists but few people currently choose to use it because there is a closely-related predecessor language that still has a stronger ecosystem.

As a stage in the development of a language out of an existing, widely-used predecessor, the "vaporware" stage tends to precede the "available but not popular due to the predecessor's ecosystem advantage" stage, which tends to precede the "displaces predecessor" stage.

It's obviously possible for a language to stop progressing at any of the earlier stages before reaching the last one.


Quite fair.


> People aren't using Python 3 enough because it's not the default in Debian/Ubuntu. That's about to change with Ubuntu 14.04. I expect that to tip the scales.

I doubt it will tip that much. The enterprise customers we develop for still ship on CentOS and have auditing departments that have to scour each library and package a system depends on. Adding a dependency hurts in this environment. These institutions only upgrade their interpreters once every 5 - 8 years if you're lucky.


Comparing Perl 6 with Python 3 is a big joke. Even a cursory look at the Perl 6 specification and implementations like Rakudo will tell you Perl 6 is attempting to go to a place Python will likely only reach in the next 40 years, with 10 Python 3-like disasters along the way.

It's an exceptionally ambitious project which they are in no hurry to finish. And that is for a good reason.


It's an exceptionally ambitious project which they are in no hurry to finish. And that is for a good reason.

Way back in the early days, the goal was to release a usable 6.0 in a couple of years. The "there's no hurry" line is a post hoc justification from around 2006 or so.


But has the goal of Perl 6 as a project remained static? I'm relatively new to the Perl community. I know you have been around for a very long time.

But I think the goal has changed from "let's fix Perl 5" to the much bigger goal of developing the language Perl 5 would likely look like after, say, 3 decades of iterations. I can't say if that is good from a purely practical perspective. But it does sound like a worthwhile goal to chase.

Let's take a look at Python 3 itself. Yes, they finished the project, but 5 years after that, now what? Users don't seem to have a valid reason to move away from the 2.x series for merely small, incremental improvements, while being forced to migrate the bulk of their infrastructure and code for nearly ~0% productivity gains.

Perl 6 could have come out by 2005-2006 if it were to be merely an incremental, non-backwards-compatible change over p5. We would then be seeing similar posts about p6 as we are now seeing about Python 3. Imagine a refined, yet not that beneficial, p5 breaking the CPAN. I am sure you wouldn't like such a situation.


But has the goal of Perl 6 as a project remained static?

The idea that it would be a series of incremental improvements to Perl 5 ended around 2001, and certainly by 2002 at the latest:

http://www.perl.com/pub/2000/11/perl6rfc.html

The grand unification of runtimes (Ponie) had failed long before the announcement of its demise in 2006:

http://www.nntp.perl.org/group/perl.ponie.dev/2006/08/msg487...

I suspect but can't entirely prove that the appearance, rapid ascent, and even more rapid burnout of Pugs made a lot of people realize that this would be a long slog. Even so, if you look back at mailing list messages or conference talks or blog posts in 2006, 2007, 2008, whenever, you'll see that the party line has always been "It's only a year or two away."


Nobody expects Perl 6 to be widely used yet. The Python community seems to expect Python 3 to be widely used by now.


Bullshit. I was there in 2002 looking at Perl 6 and we expected it to replace Perl 5 within a couple of years.


You're right that back then the expectations were a lot higher, but I don't think anyone even then expected it to replace Perl 5 that fast.

Also, Perl 6 turned out rather quickly to become a new language (although very much in the spirit of earlier Perls), while Python 3 was, as far as I can see (I'm not really a Python person), an update to fix some important issues in the language, which, despite by necessity being backwards incompatible, was intended to still be the same language.


You're right that back then the expectations were a lot higher, but I don't think anyone even then expected it to replace Perl 5 that fast.

The goal was to run existing Perl code in the same process, so that you could gradually adopt new features or use existing libraries with new code. If that had worked out, there'd have been less of the Python 2/3 gap.

Then again, the idea that Perl and P6 are different languages is something that's come up after the fact, after it became obvious that P6 wouldn't be ready for general use any time soon.


I am not at all up-to-date on the Perl side, but Python 3 is real, whereas

> Perl 6 is currently being developed by a team of dedicated and enthusiastic volunteers. (http://perl6.org/)


Well, apparently Perl 6 is a "spunky little sister" whose spokesperson is some kind of comic butterfly character.


I like to think of engineering as "solving problems within a system of constraints". In the physical world, engineering constraints are things like the amount of load a beam will bear. One of the primary easily-overlooked constraints in the software world is backwards compatibility or migration paths.

There are many examples of systems where many look at them today and say: "This is terrible, I could design a better/less-complicated system with the same functionality in a day". Some examples of this dear to my heart are HTML, OpenID, and SPDY. It's important to recognize the reason these systems succeeded is they sacrificed features, good ideas, and sometimes even making sense to provide the most critical piece: compatibility with the existing world or a migration plan that works piecemeal. Because without such a story, even the most perfect jewel is difficult to adopt.

The OP, about Python 3, is right on except when it claims that making Python 3 parallel-installable with 2 was a mistake; shipping it as a single replacement binary instead would have made it even harder to migrate to 3 (unless that single binary were able to execute Python 2 code). (Also related: how Arch screwed up Python 3 even more: https://groups.google.com/d/topic/arch-linux/qWr1HHkv83U/dis... )


Exactly. This is why something like Java 8 is admirable. They've been able to introduce two major new features from newer "productive" languages – lambdas and traits – without breaking anything; not only that, old libraries will actually enjoy better APIs thanks to the new features without recompilation. Sure, there might be better ways to implement these features, but introducing them in a way that feels natural to a language with millions of professional developers without breaking source or binary compatibility is a commendable achievement.


Same with C#. That language has really exploded with new features, and still, code and precompiled libraries written for 1.2 compile and run perfectly on 5.0.


See also ECMAScript, as much as people like to bash on it.


HN loves to dump on C++, and in some senses I understand why, but Stroustrup had this figured out over 30 years ago. You don't break compatibility. For all the ugly warts in C++ due to the requirement for compatibility with C, I can take a C code base from the 80s (say a BLAS library or something) and use it with my C++11 code, even using things like std::vector, std::string, or shared_ptr, without any real fuss. The idea that somebody will adopt a language very close to another language that they are already using is dubious at best. And that is what Python 3 is - a different language quite close to another perfectly good language. Why take on all the difficulties and heartache when I can just continue using this perfectly good language?


To be fair, Stroustrup made the wrong call on void* implicit casting. At least IME, it's the primary source of breakage when trying to incorporate C code into a C++ program --- but then again, we have extern "C" linkage for a reason.


It wasn't until 3.3 that py3 was really palatable: it became easier to support unicode while running the same codebase in py2 and py3, and yield from -- look, a py3 feature worth porting for! 3.3 was released in late 2012, and so we can probably shift this "5 year" expectation to start from there.
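For anyone who hasn't seen it, a toy sketch of what yield from (PEP 380) buys you; the generators here are invented for the example:

    def inner():
        yield 1
        yield 2
        return 'inner done'  # return value propagates to the delegating generator

    def outer():
        result = yield from inner()  # also forwards send()/throw() transparently
        yield result

    print(list(outer()))  # [1, 2, 'inner done']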

In fact, it's 3.4 that really starts to wet the beak with asyncio and enum. I'm not sure 2.8 needs to happen, if 3.x simply, and finally, has good reasons to get on board.


This. While there are many wonderful features in Py3 (I use it 100% in production at work), it still needs a killer feature to really inspire people to move over. Solid asynchronous I/O, which Python has long lacked, is definitely one such feature, I think.


That's coming in 3.4.


Static platforms are great for developers. The best years of WebObjects' life were after Apple had mothballed it. Returning to a project years later - all the old code, scripts, and patterns worked just the same. Nothing else in the java world was like that. Similar story with BSD. The python 2/3 migration has been well managed. There is no rush. Celebrate it.


I think this is exactly right. Often people rush to conclusions in favor of the new and shiny, but Python is a great language that works pretty darn well at the moment, even in v2.7, giving the community the freedom to fully vet their ideas and make a valuable product.

I remember reading commentary on HN when a new version of PuTTY was released, and there was general sentiment that PuTTY is - for all intents and purposes - a "finished" product in that it does what you need it to do.

https://news.ycombinator.com/item?id=2758696

Python v2.7 is similar in that it's a tool that does many amazing things and is very stable. Could it be improved? Sure! But for the most part it's a great piece of software.

The care and planning that went into creating the tool is also going into the upgrade process, and I think that's a good thing.


This is really a great point. Python 2 is stable and probably never going to die off. This implies that it can be stably and productively used without worrying about the language developers breaking it down the road. This is great. :-)


As a development lead, we recently abandoned our plans to migrate to Python 3. Here's a short summary of why:

To begin the migration, we needed to move from Python 2.6 (which is the default on our CentOS 6 production boxes) to Python 2.7. This transition is actually rather hard. We can't use the packages provided in CentOS base or EPEL, because they are all compiled against Python 2.6. To reproduce all of our package requirements would require us to either build new packages of the compiled libraries (such as MySQL-python, python-crypto, PyYAML, etc.), or load down our production environment with a compiler and all of the development libraries.

Migrating from Python 2.7 to Python 3 would have required a nearly identical effort (there aren't a lot of Python 3 packages for CentOS, in particular not the packages that we need for our application).

Frankly, it's just not worth that effort at this time. Python 2.6 is the default environment, there's solid package support for it, and it just plain works. We'll make that dive when Python 3 becomes the default for CentOS (scheduled for 8 or 9, IIRC), and probably not before.


Using Python2.7 on CentOS really isn't that hard. Install FPM, and run `fpm --python-bin python2.7 --python-easyinstall easy_install-2.7 -n MySQL-python27 -s python -t rpm -d python27 MySQL-python` . You'll find a MySQL-python27 RPM sitting in the current directory. Repeat for any package you want.

You can avoid compiling the core python27 language yourself by using the python27 packages from IUS.


You just described building packages, which we looked into and decided not to do. The cost (which does not stop after the rather easy step of building the packages) was not offset by the benefit of migrating.


Even Debian has been using Python 2.7 for a while already; it looks like CentOS is being quite slow.


CentOS keeps the same major version of everything through the entire life of the OS; so, five years plus two more for vital security updates. This is by design, and a completely reasonable choice.

Often, you can install additional packages that bring newer versions into the system, but they don't become the "default". So, for PHP in CentOS 5, there was a php53 package (and maybe later ones, too, I dunno), that installed alongside the standard php package and didn't break compatibility but gave users the option of using a newer PHP version.

I'm pretty sure Python 3 is available in CentOS repos (it certainly has been in Fedora repos for a long time). It's just not the default system Python you get when you type "python".


For reference, the new way of installing newer language versions is software collections.

http://developerblog.redhat.com/2013/09/12/rhscl1-ga/


It is a reasonable choice until the day when you get hacked because you use a prehistoric version of something that doesn't get security updates anymore.

Sure, sometimes that will get backported security fixes, but that is not always possible. Heck, sometimes the installation depends on said security hole.


I still think you don't understand. RHEL (and thus CentOS) backports security fixes. An Apache package downloaded today from the CentOS 5 repository will have the security fixes, and many stability related bugfixes, that the latest tarball from apache.org would have, while still being an old version of Apache (and having the same configuration file syntax and working with whatever code you've deployed and tested on that version).

That's what companies are paying Red Hat for: A stable platform on which to build their business. It really is a good thing with quite a lot of value.

If you want bleeding edge versions of software, RHEL or CentOS might not be the right OS for you. But, I happen to love it for servers; I know that what I deployed three years ago is still gonna work after upgrading packages provided by the vendor. That's not something you can count on with faster moving distros, and it's why I always use CentOS (or RHEL or Scientific Linux) on my servers...while I generally use Fedora or Ubuntu on desktops.

Security is a hard thing to quantify, but I really think that a slower moving, but more carefully vetted, platform is going to be more secure. I believe this is why OpenBSD is so solid...moves very slow, but is very carefully constructed and maintained. RHEL/CentOS is somewhere in between OpenBSD and Ubuntu on that continuum, and I strongly suspect CentOS is going to have a better security record than Ubuntu.


And I think you didn't understand my point. Sometimes these security fixes will not be compatible with the application you are running. It will usually not be an issue if every single line of code you run is from the distro's package manager, but I've had CentOS security fixes break my apps, because the security fix forced some other way of doing things, and because they didn't bother to backport the subsequent bug fixes to the security fixes.

Yes, it may be less work with a stable system. I'm not arguing that part, but it is an idealized pipe dream to install once and then let the system be. You still have to have the machinery to do maintenance work, and you still need proper testing. Security fixes are NOT exempt from that in any way.


Certainly, software is complicated, and we're always managing risk and complexity when choosing our tools.

The reality is that CentOS is a more stable platform than most of the alternatives. It doesn't provide cutting edge software across the board (though it is possible to get newer versions of common tools that people want to stay on top of, like languages), but it does provide a reasonable level of confidence that what you deployed years ago will continue to run today. Moreso than any other Linux distro out there (in my, not at all limited[1], experience), most likely. And, the security concerns you raised are the ones I wanted to address in my original reply to you; you alleged that CentOS/RHEL provided old and thus insecure software. I wanted to make it clear that's not the case; all software is subject to bugs, including security bugs, but CentOS/RHEL are not shipping software with known exploits. It gets fixed, along with the upstream. In fact, the RHEL developers are often providing the upstream fixes, as well; Red Hat employs huge swaths of FOSS developers...really good ones.

1-I work on systems management software that is installed a million times a year, on every Linux distro and nearly every UNIX variant, and have done so for ~15 years. I don't know everything, but I know which Linux distros provide a stable platform and which ones have a good security record.


When you consider what it is, it makes sense. It's designed for big companies that want stability, at the cost of being a few OS generations behind the bleeding edge.

Fedora is moving to Python 3, so RHEL (and thus CentOS) will follow, but not for several years.


Python has been a pain on RedHat and derivatives for as long as I can remember. The problem is simply down to the decision to package only a single Python version. On Debian and derivatives you've been able to install distribution packages of multiple python versions (e.g. python2.4 and python2.7) at the same time. You lose all the advantages of a long maintained stable distribution when you have to install a bunch of hand-compiled packages.


Linux is the new Windows XP. "We can't install that web app, we are standardized on IE6 and WinXP until 2035 or the Second Coming, whichever comes first".


Make that just "the new Windows" and I'd agree. There's plenty of standardizing on "Server 2003" and other Windows versions, that didn't stop with XP.

Or, in other words, both platforms have firms that like to standardize on particular versions for stability. Which isn't really much of an attribute of Windows or Linux themselves, really.


Can you elaborate on why you need to use a version shipped with the distro instead of simply using virtualenv+pip to install all third-party dependencies?


Building a package with compiled extensions via pip requires "a compiler and all of the development libraries", which GP said he doesn't want to do. Personally I gave up a while ago and now I do "apt-get install build-essential python-dev" on every new server, but it's not ideal. I really like the approach outlined in https://hynek.me/articles/python-app-deployment-with-native-... , but haven't gotten around to implementing it.


> Frankly, it's just not worth that effort at this time. Python 2.6 is the default environment, there's solid package support for it, and it just plain works. We'll make that dive when Python 3 becomes the default for CentOS (scheduled for 8 or 9, IIRC), and probably not before.

Which is well and good; there's no point fighting with your operating system. Python3 was always going to be a slow migration; perhaps it's going even slower than planned, but that's not the disaster the OP portrays (in fact it sounds like you'd have just as much trouble migrating to python 2.7 as python 3, so there's no python3-specific problem here). Python3 is now at a place where distributions are adopting it as default; non-bleeding-edge folks will follow their distribution's lead.


Our company moved from CentOS to Gentoo and we were able to drop support for 2.6. We're now (mostly) 3-compatible and are rolling out our first 3.3-based production server. We've definitely been lucky, I wish the transition had been easier for everyone.


Did you look at software collections? I don't know what the state of repackaging them from RHEL for CentOS is, but they're Red Hat's supported means for using Python 2.7 or 3.3.

http://developerblog.redhat.com/2013/09/12/rhscl1-ga/


Reading the comments on Hacker News whenever someone brings up the issues with the Python 3 transition is horribly painful, due to a systemic bias: the people who care to read and talk about Python 3 are mostly the 2% of people who apparently care enough to have upgraded already. Everyone [edit: "here who is saying" was incorrect; "here who normally says" <- today the discussion has been much more balanced, which is really nice] "engh, it's fine, I upgraded, I know tons of people who upgraded" is ignoring the actual statistics cited by this developer [maybe having these actual numbers changed the discussion, pushing out the people who just insist nothing is going wrong?] that show "no, you are wrong; you have a bunch of anecdotes because you know people like you, and people like you actually wanted to upgrade for some extrinsic reason that your friends are more likely than the normal population to share with you". :(

If you cast a wider net, and talk to people at conferences that have nothing to do with fancy programming languages (and certainly not about Python itself), people aren't using Python 3, and the feelings about Python 3 are mostly "sarcastic bitterness" (where like, you bring up Python 3 and they kind of laugh a little like "yeah, no, obviously") surrounding the problem mentioned by this author that "for the last few years, for the average developer Python, the language, has not gotten better" while being told that upgrading is easy or somehow intrinsically valuable to them, which as this author points out comes across as the Python community just saying "fuck you" to their needs.


I'm not sure there's any substance to what you're saying. The comments on here certainly don't reflect the attitude you've suggested they should.

I obviously can't speak for anyone but myself but for me the reason not to change has been that there's a critical core of libraries that weren't ported over. Now the list isn't looking too bad and it's probably time to make the switch. It's taken a long time to get to that stage, but was always going to.


So, I'm going to agree with your first comment: today things are much better than normal, and I apparently skimmed the top-level comments too quickly and didn't give them enough credit. I've taken the parts of what I said that I believe you are correct for pointing out "are in error", modified them, and will go so far as to apologize for not giving today's thread enough consideration.

> I obviously can't speak for anyone but myself but for me the reason not to change has been that there's a critical core of libraries that weren't ported over. Now the list isn't looking too bad and it's probably time to make the switch. It's taken a long time to get to that stage, but was always going to.

However, reading your continuation kind of brings back the bias problem to me (although not in a way that is problematic, as you aren't trying to say the people who aren't upgrading are wrong; but sufficiently that it is interesting to discuss): you are assuming there is some intrinsic value to making the switch to Python 3, and that the strategy is simply to wait for the pain to be sufficiently low that it becomes "time to make the switch", as if that switch were inevitable, and as if this is all going well for everyone.

I would say that instead, the Python 3 community needs to start looking at itself again, with the hard realization that if there are any serious costs involved in using it over an alternative (whether that alternative is Python 2 or some new kid on the block like Scala), it needs to prove its worth: making the language slightly purer or slightly simpler or even slightly more consistent is not something that a lot of developers are going to value over the kinds of costs Python 3 has chosen to make part of the tradeoff of switching.


Realistically there is value in porting to Python 3: in a year and some months, Python 2 will no longer be receiving security updates from Python Core. This will get taken care of by third parties for a while, but I fully expect that support to be incomplete and eventually relegated only to RHEL.


> in a year and some months, Python 2 will no longer be receiving security updates from Python Core

Is this an official statement? A few minutes with Google produced a number of articles that claimed five (or sometimes six) years of support for 2.7 starting in 2010, but I couldn't find a clear statement on python.org.


Which is really just going to be read as another "fuck you" to the Python community, the vast majority of which is probably still going to be using Python 2...


I don't share Alex's concern. The migration to Python 3.X is a slow but, in my opinion, sure process. Already many of my small internal programs run on Python 3.4, and I believe that in 1-2 years from now I'll be writing most new Django client projects in Python 3.4 (hopefully running on PyPy3).


Agreed. More and more important libraries are being ported to Python 3. I'm working on a web app supporting Python 3.3. I keep it backward-compatible with Python 2.7 just in case I badly need a library that hasn't yet been ported to Python 3, but that hasn't happened yet, and I have good hope that we'll be able to deploy it on Python 3.3. I develop and test locally with Python 3.3, and travis runs my tests against both Python 3.3 and Python 2.7. Every time the Python 2 build breaks, it's because of some silly unicode-related problem that has been fixed in Python 3, so it really motivates me to develop with Python 3 from now on.


I was even a bit surprised when he complained that people don't develop for Django using Python 3. Django hasn't even supported it for a year (after a year of "we support, but don't recommend"), and quite a few of its libraries have already been ported.

Yep, people underestimated the time to migration. So what? I completely agree that migration is happening. Now, let's talk about IPv6...


IPv6 adoption actually doubles year over year, so it's not that bad. If py3 adoption also doubles year over year, we'll see both making an impact in 5 years or so.


The irony is that what's keeping Python from moving forward is its own ecosystem.

PyPi makes it so easy to just add small libraries as dependencies to your project. This is part of what I like about it, but it comes with a cost - this exact problem.

I actually find the unicode thing a good enough reason to move to Py3, and porting my company's own code is hardly an issue. But I just had a quick look at how much of our dependencies support Py3. No surprise - we can't move. Not without porting a huge amount of code we don't know by ourselves and hoping the pull requests get merged, or by dropping big dependencies from our code.

How big? Thrift, boto, MySQL-python, hiredis (the native C redis connection layer), fabric, mrjob - just to name a few. Some of these have big non compatible dependency trees themselves.

Neither of those is going to happen. So not having a big enough incentive is not my problem here. The price of migrating is simply too big compared to the incentives.

I think the only big enough incentive that would cause me to consider replacing or porting all this huge chunk of dependencies, is something indeed along the lines of GIL-less massive concurrency a-la Go.

But that doesn't seem to be happening any time in the foreseeable future. Python 2.8 is a good idea for me as a user, but it will only prolong the problem, not solve it. I don't have any better idea, other than that Python should grow one huge carrot to create a tipping point, and enough pressure on library maintainers to hop on.


Consider the new programmer, or the programmer new to python, or the corporation/workgroup new to python whose focus is not at all python as python but just GSD.

They read this, or you show it to them: Should I use Python 2 or Python 3 for my development activity? https://wiki.python.org/moin/Python2orPython3

It starts off very encouraging: "Short version: Python 2.x is legacy, Python 3.x is the present and future of the language"

Then we skim down to the meat of the page: Which version should I use?

Two short versions stating that 3 is great, and that you should use it. If you can.

And about 20 paragraphs of caveats.

To the person who's been around the block once or twice, or doesn't want to be seen as that pie in the sky programmer to his boss whose focus is not programming and doesn't give a shit about new, what stands out is "you can use virtually every resource available in 2, and almost mostly totally in 3 also."

And if you're new in any sense, do you really want to spend the time researching if 3 is going to work for you or your group/boss/career? No, you pick 2 and GSD.

When that page is gone, or its caveats are substantially reversed, 3 will be the choice.


Totally agree. And when one of the suggestions for porting to Py3 is "Decide if the feature is really that important. Maybe you could drop it?", you know something is wrong.


Indeed, documentation is another thing that is going to show schisms, not just code. Backwards-incompatibility means that the majority of Python tutorials and books are just going to be broken, and one of the worst experiences I've had is with finding example code/description online for something, only to discover it won't actually work for the latest version.


Python fell into the Winamp trap. If anyone remembers, version 3 was pretty much crap, and many users stayed with 2.95 for ages. Now, I'm not saying Python 3 was bad, not at all, but the benefits don't outweigh the cost of switching for many people.

Here's my idea. Make a new "Python 5" (2+3=5, or call it whatever you want), based on Python 3. Put back everything that was deprecated or removed along the way, including the `print` statement. Provide `from python2 import xx` or `from python3 import xx` if there are incompatible module changes. To deal with the string changes, introduce an idiom like:

    bytes, str, unicode = python2_strings
    bytes, str, unicode = python3_strings
or:

    from python2 import bytes, str, unicode
    from python3 import bytes, str, unicode
which always maps "bytes" to the byte array (py2 "str", py3 "bytes"), unicode to the unicode char array (py2 "unicode", py3 "str"), and "str" to whatever the legacy code needs.

The goal would be to have 95% of legacy code run without modifications, or with a minimal addition of 2 lines to the top, or a command line switch.


Your idea is good, but I think it has to be taken one step further. You should be able to import ANY module, third party or not, and specify under which version to run it. What we need is interop between Python 2 and 3.

Just look at this thread. Everybody who disses python 3 does it for library support, library support and library support. If you could write your own code in python 3 but still use libraries remaining on 2.7 most people would switch in a heartbeat. After that it's just a matter of time before the libraries transition to 3. Now we are stuck in a chicken and egg situation where nobody wants to make a move.


> Everybody who disses python 3 does it for library support, library support and library support

I like to think I'm somebody, and I diss Python3, but I don't diss it for library support (which is good enough for me). The trouble for me, and at least a few others, is that Python3 has replaced C strings with byte strings and Unicode strings, and uses the latter where Unix expects and provides the former.

If byte strings were actually used instead, there would likely be other issues. Near as I can tell from this discussion, those who have actually tried to use byte strings in Python3 have found they don't work well.


Minor gripe: The other reason isn't library support, it's unicode strings, which are a massive pain for unix systems programming.


My last few Python projects have started out as Python 3, but ended up as 2 due to missing library support.

Would it be at all feasible to enable Python 3 to import Python 2 code? I imagine this could be done without making Python 3 fully backwards compatible, but I might be wrong.


That'll be the crux. When you can run most/all existing python 2 code under python 3 we'll see the transition. Until then most of us have no plans to even look at python 3.


+1, but I'm not sure how that's going to work. Even if you support Py2 syntax, some of the internals are different and might cause code to misbehave, especially the str/unicode/utf stuff.
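Right; even code that parses fine under both can silently change meaning. Two classic examples:

    data = b'abc'
    print(data[0])   # Python 2: 'a' (a one-char str); Python 3: 97 (an int)

    print(3 / 2)     # Python 2: 1 (floor division); Python 3: 1.5 (true division)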


Something that might help is OS vendors shipping with 3.x installed, rather than the 2.x most of them seem to ship.

OS X ships with 2.7.5. For a casual python user, sticking with what is there and working is safe, especially when the benefits of 3.x are unclear.


Canonical has been working on making 3.x default for Ubuntu for a while. From https://wiki.ubuntu.com/Python/3

> It is a release goal for Ubuntu 14.04 LTS to have only Python 3 on the desktop CD images. Also, we won't be allowing Python 2 on the Ubuntu touch images.


OS X (at least the version on the MacBook I got from IT) also ships with a horribly outdated Bash, so I don't think it's healthy to hold it up as a standard for anything…


In that case (and all of the other GNU stuff) it ships with the final GPLv2 version, since Apple is unwilling to ship GPLv3 code.


Is there a particular reason for that? I can understand not wanting to ship GPLv3 code if it is embedded sufficiently deeply into the hardware, but it shouldn't be a problem to ship a GPLv3 Bash, as the user can change the binary delivered on the hard drive at any time?


Patents and DRM. It always comes down to that.

If Apple ships any OS X devices with DRM, they would have to have a separate build without the gplv3 bash in it. It would double support costs, maintenance and so on.

And patents: any patents covering bash would lose value from shipping a GPLv3 bash. This also includes any patent agreements Apple has, which involve additional legal overhead.

GPLv2 vs GPLv3 is and always will be Patents and DRM.


GPL has all kinds of non-corporate friendly clauses. I wouldn't touch GPLv3 code in anything I ship either.

Likewise Torvalds refuses to let GPLv3 touch the kernel.


> Likewise Torvalds refuses to let GPLv3 touch the kernel.

Please don't misrepresent other people's viewpoints out of context.

If you want to read Linus Torvalds' opinion, I suggest that you start with either one of the many articles about it (like http://news.cnet.com/Torvalds-No-GPL-3-for-Linux/2100-7344_3...), or any of the many mails on the mailing list. You will notice that Torvalds' criticism is mostly directed at the DRM provisions, since he doesn't think a copyright license should be able to dictate a political matter such as DRM restrictions on devices.

To quote:

  "I also don't necessarily like DRM myself," Torvalds wrote.
  "But...I'm an 'Oppenheimer,' and I refuse to play politics with Linux,
  and I think you can use Linux for whatever you want to--which very much
  includes things I don't necessarily personally approve of."


Torvalds didn't put the "or any later version" clause in the Kernel's GPL license. And he hasn't required copyright assignments. Because of this, the kernel simply can't be converted to GPLv3. You would never be able to contact all the individual copyright holders and have them agree with complete consensus to re-licensing.


"I think it's insane to require people to make their private signing keys available, for example. I wouldn't do it. So I don't think the GPL v3 conversion is going to happen for the kernel, since I personally don't want to convert any of my code."

He's spoken out against it personally - he clearly _doesn't like it_.


That quote referred to an interpretation of the initial public draft of GPLv3, which was radically different from the final version published in June 2007, including with respect to the so-called anti-Tivoization provisions. Of an interim draft of GPLv3 in March 2007, fairly close to what ended up being final, Linus said "Unlike earlier drafts, at least it does not sully the good name of the GPL".

However, you're correct that Linus clearly does not like even the released version of GPLv3.

[edited slightly]


I'm curious what specific issue Apple has with GPLv3 for something like bash? It seems to me that if they're providing the source, and not modifying the copyright/license, there would be no issue. The patent language only applies to the program itself. I guess one could view it as simple paranoia on the part of their lawyers...they took years to start shipping any GPL (v2) code on the Mac. It's actually news to me that it has bash out of the box, now; last time I used Mac OS X, it still had tcsh and an abominable development environment and I hated it.


Yes, it absolutely would. Particularly since that would mean that packages for 3.x would be available then; something that's not there now. If you want to migrate to Python 3, you have to re-compile any libraries you want to use.


Yes, I need a lot of my code to 'just work' out of the box on OSX, so I'm sticking with 2.7. I really wish Apple had put python3 in Mavericks, as /usr/bin/python3, rather than the default. sigh


I used Python 3 for the first time a few days ago (I've been programming in Python for 18 years). When I used python heavily (I've switched back to C++ and now Go) I depended a lot on a small number of really good libraries- ElementTree, NumPy, SciPy, etc. Unless/until all of those get ported to Python 3 (along with hundreds of other programs), and those ports are considered Correct (in the QA validation sense), it's hard for me to consider wholesale conversion.

I did it because I was trying to parse Unicode in XML (Apple iTunes music files) and Python 2 is completely broken when it comes to unicode.

I consider Python 3 a big "fuck you" from Guido to the community. I don't think he intended it to be so, but the effect of the long transition, and the lack of backported features in Python 2 (which could be easily accommodated), coupled with only limited adoption of Python3, demonstrate that the leadership needs to pay closer attention to user pain.

Finally, I don't think Python will ever address the simple and most important core issue: lacking good multithreading support will make the language continue to become more and more marginal. CPUs aren't getting much faster, and the CPython serial runtime isn't getting any faster. Multiple cores on a single chip, and multiple threads in a core, are getting more numerous, and not being able to take advantage of them using the obvious, straightforward memory-sharing, multi-threaded techniques the chips are designed to run well is just clear ignorance of the future of computing designs.

I switched back from Python to C++ when shared_ptr and unique_ptr, along with std::map and std::vector became standard enough that programming in C++ became easy: convenient memory management, and maps/vectors are the two things that C++ adds to C that are valuable. Then I switched to Go because C++ compilation times are so bad, and writing high quality concurrent code is hard.


Just out of curiosity, have you actually tried getting NumPy/SciPy working on Python 3? Because they've been fully supported for quite a while now. I don't understand how this myth keeps perpetuating itself.


I'm not interested in even playing any more. Like I said, I haven't used Python 3 before today (after 18 years of programming Python). I would have cared, for example, if Python 3 had had any impact in the real world. Anyway, it took years for numpy support; it wasn't available for python3 until relatively recently. That's the problem: it took years before these common tools were ported.


>> I used Python 3 for the first time a few days ago

> Like I said, I haven't used Python 3 before today

Well, which is it? Neither makes you much of an authority on the subject.


> Anyway, it took years for numpy support;

Python 3.0 release date: December 3rd, 2008. Numpy 1.5 (Full 3.0 support) release date: August 31st 2010.

It took 21 months.


Fine, so you have found a language you prefer to Python. Go use it. Never mind about PyPy, which gives you compiled execution speeds, never mind about the fact that Python 3 fully supports NumPy, SciPy, matplotlib and much of the infrastructure you claim to need. If threading is your preferred solution to concurrent programming and it's working for you then there's no real problem, is there? I'm sorry Python doesn't work for you, but fortunately there's a rich toolset available of which Python is only a part. It's fine for many tasks, but it's by no means your only option. Isn't open source wonderful?


ElementTree has always been in 3.x. I believe NumPy and SciPy are finally making the leap just about now.


Numpy has supported Python 3 since version 1.5, released in August 2010. Ubuntu 12.04 and the current Debian stable carry this version. Scipy has supported Python 3 since version 0.9, apparently released in 2010 as well, with releases in the current Debian stable and Ubuntu 12.04.

'Just now' is hence a few years off, but it is correct that many users have only been able to use Python 3 some four to five years after 2008.


I believe those versions were saddled with "experimental" wording and bugs, which made people steer away from them. This is all second-hand though, I personally never used them.


A lot of people cite lack of NumPy support as what's keeping them from moving to Python 3; I've seen this comment more times than I can count. It seems most of them haven't done their research and found that NumPy/SciPy has had complete support for Python 3 for over three years now.


Please don't interpret my comments above as ignorance. I am quite aware of what is available (and was available when I considered switching to python3). I did my research at the time (which was some 5 years ago). It looks like scipy was added about 3-4 years ago... at which point I don't care any more.


If it was valid at the time but your concerns have since been addressed, then your complaints aren't exactly relevant to someone who's reading this now and looking for information on Python 3, right?


It's an anecdote, and goes some way to explaining why some programmers have just given up on 3 already.


That's nice. I don't care.


Accidentally upvoted. But why did you even bother posting that?


To demonstrate the vehemence with which long-term, experienced programmers view the lack of ability with which Guido is running the Python 3 project.

He never actually addressed the important questions in Python 3: first, getting the majority of people onto 3 quickly by getting the major libraries ported quickly; second, addressing the GIL in either 2 or 3 (if the GIL had been removed in 3, I would have considered it much more strongly).

If either of those had been addressed, some of us (who are influential in the area) would have adopted 3 and proselytized it. However, I lost so much confidence in Python after the 2 to 3 transition that I've decided to actively not proselytize it.


I don't regard the lack of practical multithreading as a problem. We routinely use multiprocessing and IPC via queues for parallelism, and for me the main limitation is Python's excessive memory use: even atomic data types are objects, which causes quite a bit of overhead (a memory-efficient dictionary with support for a few basic data types would be great). Thankfully everything still fits in the server's memory, otherwise I would have to consider another language, such as Go.


I heartily recommend using concurrent.futures. It is a standard part of Python 3.2+ - http://docs.python.org/dev/library/concurrent.futures.html - and you can get it for other Python versions - https://pypi.python.org/pypi/futures

Behind the scenes it uses multiprocessing and/or threading plus queues etc. I have a function that adds command line arguments (number of workers, use threads or processes, debug mode) and then another that returns an executor given the command line arguments.

The debug mode forces a single thread and is intended for when you want to run pdb as multiple threads/processes make things considerably more complicated.
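
A minimal sketch of that pattern, with invented helper names and the debug path elided:

    import argparse
    from concurrent import futures

    def add_executor_args(parser):
        # let the user choose the worker count and threads vs. processes
        parser.add_argument("--workers", type=int, default=4)
        parser.add_argument("--processes", action="store_true",
                            help="use processes instead of threads")

    def make_executor(args):
        # processes sidestep the GIL for CPU-bound work;
        # threads are cheaper for I/O-bound work
        if args.processes:
            return futures.ProcessPoolExecutor(max_workers=args.workers)
        return futures.ThreadPoolExecutor(max_workers=args.workers)

    if __name__ == "__main__":
        parser = argparse.ArgumentParser()
        add_executor_args(parser)
        args = parser.parse_args()
        with make_executor(args) as executor:
            # results come back in input order; worker exceptions re-raise here
            for result in executor.map(pow, range(8), range(8)):
                print(result)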


I've been programming with import Queue, import threading, for years now (probably 8 or 9). I don't want to drop all my completely fine multithreading code for Yet Another Threading API.

Personally I think multiprocessing is just dopey. Why force me to program around two different address spaces when I can just have one?


I'd also done the Queue/threading thing for the longest time. I'd also use multiprocessing some of the time. But it was arbitrary, and I'd have to make sure that shutdown, exceptions etc are correctly handled.

> Yet Another Threading API

concurrent.futures is not yet another threading api. futures is a standard async work/result wrapper, while the process and thread executors are standard executor apis.

> Personally I think multiprocessing is just dopey

Then don't use it. But be aware that it works well for many people. Some of my code scales linearly with multiprocessing and all I had to do was use a process executor instead of a thread executor.


Well I do care. I think multiprocessing is a stupid abomination- it's like a person who never learned how to use threads found the threading library and decided to make a message-passing communication system with a thread-compatible concurrency library.

Message passing is fine (and it's the sort of thing that Go uses) but ultimately, leaving single-address-space multithreading completely on the table when it's the single most obvious and straightforward way to take advantage of modern processors is just dumb.


See the recent post on __slots__.


What is needed is a very high quality Python 2.x to 3.x migration or conversion tool to make library conversions trivial. If developers knew they could convert any 2.x code to 3.x code with no effort at all and with absolute certainty of proper operation, they would probably migrate to the latest 3.x release en masse.

How difficult would something like this be?

This might be really naive on my part. I haven't really taken the time to study the differences between 2 and 3. I have avoided 3.x out of entirely selfish reasons: I simply don't have the clock cycles to run into roadblocks.

These days languages are only as valuable as the breadth and quality of the libraries around them. The issue with 3.x is that it created fear of not having access to the small and large libraries developers leverage every day to focus on their problem rather than the problem addressed by the library. In that regard it is easy to be inclined to criticize Python leadership for approaching the transition with what might be perceived as a lack of consideration and understanding of the factors that might hinder it.

EDIT:

Just learned about 2to3:

http://docs.python.org/2/library/2to3.html

Not sure how good it is. A quick look at the page gave me the sense that it might be somewhat involved. A true converter would be something that is as easy to use as the import statement. Something like:

    import2to3 <some_module>, <some_destination_directory>
It should not require any more thought than that and it should be 100% reliable.


TBH I never understood the reason for 2to3. Wouldn't a 3-to-2 compiler be much better? Then people could start using python3 as intended, and simply compile back to python2 if they need to. I doubt there is anything in python3 that would be impossible to emulate with machine-generated python2. And no one would care how the compiled python2 code looks, because it would be temporary.


You may want to look instead at six, which lets you write code that works with both py2 and py3: http://pythonhosted.org/six/
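
For instance, a rough illustration of the kind of thing six papers over (not exhaustive; the dict here is just a stand-in):

    import six

    data = {"a": 1, "b": 2}

    # lazy iteration on both: iteritems() on py2, items() on py3
    for key, value in six.iteritems(data):
        six.print_(key, value)

    # one name for the text type: unicode on py2, str on py3
    assert isinstance(u"hello", six.text_type)

    if six.PY2:
        # version-specific fallbacks can be isolated in one spot
        pass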


Backwards ecosystem compatibility is a law of nature, not an option. Guido blithely broke a law of nature and the consequences, which should have been completely obvious to him, are just as anyone with history in the industry could have predicted (and most did.)

I'm at the decision point of which one to learn for a long languishing project I want to use it for. If I could write in 3.x and use the 2.x library ecosystem there would be no glitch whatsoever in my decision process. 3.x seems sufficiently advantageous _as a language_ to make the choice easy. As is, however, since I do not yet know what within the 2.x ecosystem will prove to be important or essential, my only intelligent choice is to maximize my options and go with 2.x. The advantages of the 3.x language don't even begin to outweigh the potential disadvantages of coming up short.

I consider this irrevocable break with backward ecosystem compatibility (given the magnitude of the ecosystem) to be the worst, most egotistical decision I've ever seen in the computer field. Almost a death wish.


Python3 is not exciting because, well, there is nothing to be excited about.

Consider me an average programmer; I have been using python for a year+ now. Most of the everyday stuff can be done in 2.7, and when there's functionality I need but can't figure out, I google it and get a solution which works in 2.7. Py3 is not adopted because there is not much benefit for doing the extra work (think chemistry: activation energy).

On another note, why can't we port the little bits of goodness back to 2.7?


> Python3 is not exciting because well there is nothing to be excited about.

I think this depends on whether or not you use, or will ever need to use, Unicode. I don't, and so Python3 is not only not exciting, but it creates a new problem for me: suddenly, I need to think about the difference between bytes and strings. Now maybe this is healthy for me; maybe it builds character. But it also adds effort and work that bring me no benefit. I'd like to see some of the smaller improvements of Python3 (I'd like print to be a function), but in the big picture I'm better off with Python2.

I wonder if the Unicode problem could be solved in a way that I wouldn't need to think about it: perhaps utf-8 everywhere somehow.


Separating byte strings from Unicode strings is as close to solving the Unicode problem without thinking about it as you'll get.
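
Concretely, the whole discipline in Python 3 fits in a few lines: decode at the input boundary, work in str, encode at the output boundary.

    text = "héllo"               # str: a sequence of Unicode code points
    wire = text.encode("utf-8")  # bytes: what actually hits the socket
    assert isinstance(text, str) and isinstance(wire, bytes)
    assert wire.decode("utf-8") == text   # round-trips losslessly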


Perhaps you misread my intended meaning, or I didn't state my position well. I don't have a Unicode problem. There is one (suppose HN forum software were written in Python2, for example). Python3's approach to solving an important problem that I don't have makes solving problems that I do have awkward. So I stick with Python2. I have to believe that there are people with both problems.


I think the concept of "activation energy" is a good one. I can get work done in 2.7. My job is not to port code - it's to get work done.

Until there's something that means I'll get work done faster using 3, there's really no reason for me to try and adopt it.


I'm looking to learn Python, so it's a pity that there's a schism of sorts within Python.

This sort of reminds me of PHP 6: a project with initially high momentum and various ideas to clean up the language. But over time it became clear that upgrading from the land of magic quotes and register_globals (PHP 4) to PHP 6 would have been too much of a jump.

So instead they slowly started deprecating and making improvements within the PHP 5 stream, and bit-by-bit PHP has moved on.

The change from Python 2 to 3 doesn't look dramatic, but I can understand why there's an air of lethargy regarding the upgrade.


I feel frustrated too, but I think Ubuntu 14.04 will tip the scales (it ships with Python 3 by default).

Also, the core devs got at least some things right with Python 3.3 by making it a lot easier to write code that targets 2.7 and 3.3 at the same time. In retrospect, that should have been the focus much sooner.


As a Linux user, I'd like to believe that Linux distros enjoy this kind of influence, but I honestly can't see how.

How many people are still on Windows? How many are on OSX? How many are comfortable in 2.x and couldn't care less about new features?

Python 3.x may be better in some ways, but it just doesn't look better enough.

I wonder how long it will be before the 2.x community goes its own way like Perl 5.


> How many people are still on Windows?

Windows doesn't ship with an OS-default version of Python and doesn't include OS features that depend on one, so it's not as influential in that regard. But the inclusion of the multiple-install-compatible Python launcher in the basic Windows install of Python 3.3 probably improves the Python 3.x story on Windows, since it eases the pain of multiple installs and makes it easier to work with Python 3.x while still having some Python 2.x software in use.


I also am sceptical that this will tip the scales, but for different reasons: you'll be able to install Python 2.7 with a single command. Plus, switching to Python 3 will still be difficult because many libraries still won't have been ported yet.


The funny thing is I was getting downvoted by the peanut gallery on proggit and other sites when I was pointing out, years ago, that there is no such thing as "Python", but really "Python 2" and "Python 3".

It's nice to see that pythonistas are starting to accept what an outsider saw five years ago.

Frankly the problem is a culture of overpromising and underdelivering that is endemic to Python. The situation with threading in PHP and Python is really the same: "it almost works". But the PHP community is responsible and says you shouldn't really use threads, while the Python community is irresponsible and says "come in, the water is fine".

The developers of languages such as PHP, C# and Java value backwards compatibility. Certainly things break from release to release, but some effort is made to minimize the effect, whereas in Python 3 they rewrote important APIs and broke a lot of stuff that they didn't have to break.


Threading in CPython is like recursion: it kind of works, but it is not the canonical way to do things.

I dare say that in Python you should always convert a recursive algorithm to the iterative form, because CPython lacks the tail call optimization common in functional languages and the default recursion limit is only 1000; in the same way, you should look for other patterns related to multitasking if the task is CPU-bound (or look for a GIL-free implementation).
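
A toy example of the difference:

    import sys

    def depth_recursive(n):
        # one frame per call; CPython does no tail call elimination
        return 0 if n == 0 else 1 + depth_recursive(n - 1)

    def depth_iterative(n):
        # the same computation with constant stack usage
        count = 0
        while n > 0:
            count, n = count + 1, n - 1
        return count

    print(sys.getrecursionlimit())  # 1000 by default
    print(depth_iterative(100000))  # fine
    depth_recursive(100000)         # RuntimeError: maximum recursion depth exceeded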

I agree with you that these (and other) Python "gotchas" need more visibility; I don't think the language is broken - one could say the default implementation is broken, but IMHO it is just not optimized for a couple of classic software patterns. Once you start swimming with the current (using alternative patterns), you will probably find some of the design decisions very pragmatic.


There is no meaningful performance increase to justify moving to a backwards-incompatible version? Three letters: DOA. If not performance, then at least we'd need some crazy new feature like good multithreading, or perhaps running on a new relevant platform (say iOS or Android). Otherwise we will be on 2.X forever.


It's only when you start doing unicode seriously that you see how broken python2 is. It is not unworkable - but it's a mess that is hard to resolve, and it keeps biting you in old code paths that weren't tested with characters from this subset rather than that one.

So, Python 3's killer feature is reasonable Unicode support.


The problem is that reasonable unicode support is a non-issue in many areas where python had a strong competitive advantage over other languages (e.g. data analysis, scientific python).

I don't know much about unicode myself, but some people I trust, like Armin Ronacher, have shown how some parts of python3 are still heavily broken w.r.t. unicode (http://lucumr.pocoo.org/2013/7/2/the-updated-guide-to-unicod...).


Actually, there are some minor performance increases, there are tons of minor improvements, and the new async framework landing in 3.4 is very exciting.


> new async framework landing in 3.4 is very exciting

This is the killer feature that would move me to python 3. Alas, not released yet. And I took a rails job :(


> Actually, there are some minor performance increases

And apparently some fairly major ones (e.g., decimal in Python 3.3.)


Meanwhile, in Railslandia, the annoyance is how much time you have to spend updating your old apps that rely on ruby versions or other dependencies that are no longer supported.

I'm annoyed that it's looking like ruby 1.9.3 will be end-of-lifed sometime this spring, and I'm going to have to go and deal with updating a bunch of apps to ruby 2.0 or 2.1; it seems like it was just yesterday I had to spend way too much annoying time updating them all to 1.9.3 in the first place when 1.8.7 was EOL'd.

And don't get me wrong, 1.9.3 is _so_ much better than 1.8; and the update to 2.x will hopefully be not so bad, but it's still time I'm spending on the treadmill instead of new features.

Is there any path between Ruby's continual forced march of updates and Python's lack of urgency, where nobody ever upgrades?


Funnily enough, these days I'm spending most of my time writing Javascript. So that's like two steps back. But that's ok, because the language is not the bottleneck in software development. The big time sinks are learning new concepts and managing inherent logical complexity.


I was downvoted before for remarking that Py 3 needed a killer feature or two to drive adoption, similar to this post. Perhaps I was not charitable enough.

I'd personally like to see pypy bundled and a complete package manager solution, as well as usability features like bpython. I don't think it is necessary to dump it. It just needs a little excitement.

Still, after many years I am finally planning to move my stuff to Py 3.4 when it comes out next year. No particular reason, it just feels like it is time. Shame that it doesn't look like it will get into 14.04.


Rename it as something else. Call it Cobra or something. Also remove all backward compatibility features. Maybe by taking away its association with python, it will have a better chance.


If Python 3 is renamed Cobra, I'll switch immediately.

And while doing so I will listen to this track from the Cobra soundtrack.

http://www.allmusic.com/song/skyline-mt0010878632

Preferably to its C64 rendition.

http://ftp.df.lth.se/pub/media/soasc/soasc_mp3/MUSICIANS/D/D...


> Rename it as something else. Call it Cobra or something.

More likely, if there indeed is never a 2.8, is that Python 2 would eventually be forked and renamed. "Rattler" would make a better pun.


I think it's just starting to roll. Only a month ago I argued on my company's mailing list that now is the time to finally start moving to Py3. Py3 is more stable now, and most of the big libraries have finally moved. In a commercial setup it is just stupid to move forward when the ecosystem hasn't. But now it has, so now we start. I think people should just continue to improve in that direction. Maybe it will take 10 years, not 5. But it's definitely going in the direction of Py3.


I think a really good explanation of why people are not switching was provided by Ted Dziuba: http://teddziuba.com/post/26426290981/python-3s-marketing-pr...


This seems like a reasonable perspective, but I wonder how much will change when Python 3 starts shipping with Linux distributions (and probably OS X eventually).

It won't change anything for the shop that has a million LOC, but it might start to budge that 2% number.


GNU/Linux distros have been shipping Python 3 for ages now.


That was sloppy of me. I meant as the only/default python.


I don't know how one would do 1mil LOC in python. At 40K I'm having enough annoyance refactoring things.

The python heads say you need to write code to be readable... OK, so how do I read all the call sites of this method or function? An unforgivable contradiction on large code bases, I'm afraid.


Ironically, I just started my first ever (Django) web app that is built for Python 3 only. I learned Python right after the release of Python 3, and so I learned everything with Python 3 in mind. For instance, I don't think I've ever used print as a statement. I even used string.format, for heaven's sake, until I learned that there was little chance of the interpolation syntax going away.

Annnnyway... I am JUST not writing my first Python 3 application, and I have just installed (on OS X) Python 3 for the first time since 2009 (only as an experiment at that point).

Create a virtualenv and tell it to use Python 3 via `-p /path/to/python3`, update your .gitignore to include __pycache__ directories, don't write any code that uses features or syntax that were removed in Python 3 (since they added u'' support back to Python 3, most devs I know are already doing this part), and you're literally off to the races. My app's requirements.txt has django==1.6.1, pytz==2013.8, South==0.8.4, django-debug-toolbar==1.0 (just released, btw), and ipython (obviously just for shell support). It works perfectly, and of course mock is included in Python 3, so you don't need that anymore. There was one caveat though :( Fabric doesn't work, because Paramiko is too deep a web to quickly update to run on Python 3.

tl;dr

I think Alex wrote this article too late. I think with Django finally having a release that fully supports (not experimentally like version 1.5) Python 3, a lot of libs supporting Python 3, and a lot of updates to Python 3 in the past year or so, we'll probably see quite a few new apps being built for Python 3 in the next year.


oops, i meant "just now" rather than "just not", which would basically ruin the entire context of my comment... what an unfortunate typo :)


The arrogance burns. It burns.

Python 3.0 was derailed by arrogance that developers should commit to a one-way transition that would touch every function rather than accept that, in Python 3.0, 'x = u"Hello"' should have been a valid statement. It didn't help that it ran slower, added nothing, and broke tools.

Python 3.3 was the first release that had a prayer, but there are mistakes everywhere. For example, virtualenv was included but broke because pip was not. Libraries like ftfy are required because the encodings break. Explaining the oddities of scoping inside generator expressions creates tricky interview questions. And then, Python 3 didn't fix all the broken.

By broken, I mean actually broken. We know where it's broken: lots of standard library pieces like collections.namedtuple, which has an argument to print a generated class. Strange cruft like calendar.weekheader() that only helps one developer's program. This code is in the standard library. Handy things like cleaning up Unicode, DSL support, local encodings, security restrictions: those you add from other libraries.

Also, where's the love? The courage? I would love to see Python seriously consider dropping case and underscore sensitivity in order to speed up developers: an_item = anItem + 1 would be a warning. I would like to see language translation support in the language, great packaging that just works, incorporation of unit tests into the package system, reform of the dunder mess, anything! Instead I see mediocrity by arrogance.

Just for fun, they moved the US Pycon conference to Canada. Only little people have troubles with international travel. Arrogance.


> I would love to see Python seriously consider dropping case and underscore sensitivity in order to speed up developers

How? By making it harder to find all occurrences of an identifier? Case sensitivity is debatable since editors have the option to ignore case during searches, but ignoring underscores isn't something any text editor I've seen has as a default search feature.


And in my world, underscore is more common as a word separator than mixed case. That is, people use 'foo_bar' instead of 'FooBar' or 'fooBar'. (This is more often true in C or Verilog than Python, however.)


Not to mention common idioms like x.y vs x._y where one is a property and the other the actual member.


For all its goodness I think it was a mistake to make syntax changes and other non-backward compatible changes in Python 3.

My current projects are compatible with Python 3 and it's my main target whenever possible (depending on the dependencies).

But all in all this is one of those little things that make developing in Python less fun than before. This is not my preferred language anymore.


I'd love to use Python 3, but missing library support is definitely the killer for me. There's a list of py3-compatible/incompatible libraries here: http://python3wos.appspot.com/.

If we can pick off a few of those top py3-incompatible libraries, I'd be willing to bet that a shift to py3 would follow. Many of the libraries have long-standing py3 port branches if you'd like to help the effort. For example: https://github.com/boto/boto/tree/py3kport/py3kport.

As far as I'm concerned, there's really very little in the way of me using Python 3. But what is in the way matters. Starting a project without being able to use boto, fabric and gevent would be tough. I very much like the idea of being able to import Python 2 libraries until they're finally ported over to Python 3.


The Python community may take this as a wake-up call to realize Python 3 was Python's Vista/ME/Windows 8, rolled up into one.

My proposal: call it a development version, and ask the community to upgrade when Python 4 fixes the GIL, adds support for GPGPU and multicore, adds semantics useful for going fast, true lambdas, tail recursion, and all sorts of similar pretty things.

Forcing an upgrade down a community's throat worked for Microsoft when they had a monopoly and could stop releasing security patches for older versions. And even then it didn't work well, and it gave us huge numbers of botnets.

Anything short of that is likely to fail and just hurt the size of the Python community. If I'm switching, there's also Ruby and a few other places to go that aren't Python 3.

I don't like, want, or care about Python 3. It's a regression for me. It's not a popular view, so I'm not vocal about it, but I don't think I'm in the minority here.


Python 3 is a huge distraction. It's hard to get people to move away from languages like java while there's a fragmented python world. If only we could just forget py3, the python community would be a better, more newbie-friendly place. Every time I hire someone I have to explain why python 2 and why not "the latest".


This is kind of silly, but the main thing keeping me on Python 2 is the print statement.


Same here! So much more convenient in a quick interactive session.


When I'm in an interactive session I just leave out the print statement, since it'll print the repr() of the result anyway. And repr is (generally) what I want.


Wow, can't believe it's been this long and this is _still_ an issue.


about python 3:

  $ python
  Python 3.3.3 (default, Nov 26 2013, 13:33:18) 
  [GCC 4.8.2] on linux
  Type "help", "copyright", "credits" or "license" for more information.
  >>> import timeit
  >>> timeit.repeat('for i in range(100): i**2', repeat=3, number=100000)
  [4.451958370991633, 4.446133581004688, 4.4439384159923065]
  >>> timeit.repeat('for i in range(100): pow(i,2)', repeat=3, number=100000)
  [5.343420933000743, 5.341413081012433, 5.3455389970040414]
  >>> timeit.repeat('for i in range(100): i*i', repeat=3, number=100000)
  [0.8348780410015024, 0.8323301089985762, 0.8313860019989079]

  $ python2
  Python 2.7.6 (default, Nov 26 2013, 12:52:49) 
  [GCC 4.8.2] on linux2
  Type "help", "copyright", "credits" or "license" for more information.
  >>> import timeit
  >>> timeit.repeat('for i in range(100): i**2', repeat=3, number=100000)
  [0.9710979461669922, 0.9630119800567627, 0.9619340896606445]
  >>> timeit.repeat('for i in range(100): pow(i,2)', repeat=3, number=100000)
  [1.7429649829864502, 1.7306430339813232, 1.729590892791748]
  >>> timeit.repeat('for i in range(100): i*i', repeat=3, number=100000)
  [0.6579899787902832, 0.6526930332183838, 0.6540830135345459] 

  $ python -m timeit '"-".join(str(n) for n in range(100))'; python -m timeit '"-".join([str(n) for n in range(100)])'; python -m timeit '"-".join(map(str, range(100)))'
  10000 loops, best of 3: 49.4 usec per loop
  10000 loops, best of 3: 40.6 usec per loop
  10000 loops, best of 3: 32.8 usec per loop

  $ python2 -m timeit '"-".join(str(n) for n in range(100))'; python2 -m timeit '"-".join([str(n) for n in range(100)])'; python2 -m timeit '"-".join(map(str, range(100)))' 
  10000 loops, best of 3: 30.2 usec per loop
  10000 loops, best of 3: 25 usec per loop
  10000 loops, best of 3: 19.4 usec per loop

  $ uname -rom
  3.12.6-1-ARCH x86_64 GNU/Linux


Your benchmark results could use a little explanation.

I notice that all your examples involve the range() function, which was changed from returning a list to returning a lazy range object. In Python2 the memory usage is O(n), whereas in Python3 it is O(1), albeit a little slower.

The first two cases being compared are 3-4x slower in Python 3, but why? Is it related to integer->float conversion? What has changed from Python 2 to 3 here?

The difference in the rest of the examples is not huge.

In my opinion, changing range, zip, map, etc. to be "lazy", i.e. to return iterators rather than lists, is one of the best things that happened in Python 3. What you lose in speed you gain back in memory use.

These toy micro-benchmarks don't really prove anything; it's not as if the cost of iterating a range of integers would ever be a bottleneck in a practical application. However, the advantage of O(1) vs. O(n) memory usage is a major benefit and makes zip, map and other related functions a lot more useful, especially for large lists.
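
A quick illustration (the exact sizes are CPython implementation details, so treat the numbers as approximate):

    import sys

    r = range(10**8)           # Python 3: a lazy range object
    print(sys.getsizeof(r))    # a few dozen bytes, regardless of length
    print(r[50000000])         # indexing works without materializing it

    # On Python 2, range(10**8) builds a 100-million-element list
    # (hundreds of MB) up front; xrange was the lazy equivalent there.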


As a beginner, not a professional developer, python 3 always made more sense to me. Print is a function, 1/2 is not 0.

I've heard the counterargument about backwards compatibility.[1] That hasn't ever been just a 2 vs. 3 thing though. pyOpenSSL works with 32-bit 3.2, but dies with errors out of the box on Win-64 py 3.3. Last I checked, the PaiMei debugger works on 2.4, breaks on 2.7.

There are several projects that work only with a specific deprecated minor version... it'd be weird if that were a common argument to keep everyone on 3.2, or 2.4 or something.

[1] http://help.codecademy.com/customer/portal/articles/887853-w...


I think one mistake made with Python 3 is that compatibility initially received very limited attention. They solved many problems of Python 2, but left the legacy behind... and there was the trouble:

* Many libraries were limited to Python 2, because the effort of converting them seemed too high

* Because of minor problems (like the infamous u"-stuff), the overhead of converting simple Python 2 programs was too high.

Some of the problems were fixed later (e.g. the infamous u"-prefix is now legal in Python 3 and ignored; why not before??), but I think by then it was also a little late... Python 3 has evolved further, and many people just got into the habit of ignoring Python 3.

Not caring about compatibility can be necessary, but it can also be a burden (one that hurts for a long time)!


Almost all Autodesk products come bundled with a 2.x version of python. Could it be that Python 2.x is the Windows 7 of the Python world :) - just good enough for everyone.

(I'm not a python user; I had to write a few items for Maya/MotionBuilder and a few other scripts.)


2.6.4... sigh... I do use Python and Maya/Motionbuilder.

It would be great if Autodesk ever did upgrade python to 3.3, but I suppose A LOT of stuff would break in the process.

But since A LOT of stuff already breaks from each yearly version it wouldn't matter much... :P

I would bet they will only upgrade it when many new features in PySide support only 3.x and up, or if they ever drop support for 2.6.x.


Not that this has ever been a showstopper whenever this comes up, but just to put some chips on the table: I personally have generally no interest in CPython development on 3.X, but I would pledge some help wherever I could in a potential 2.8 release.


Python 3 is a different language from Python 2. Yes, they are _almost_ the same language, but they are far enough apart to keep people from making the switch. It feels closer to Perl 5 => 6 than to Ruby 1.x => 2.x.

That's a gross over simplification, but it is closer to the truth than the Python 3 community likes to think.

I wonder: has there ever been a successful language rewrite, post critical mass, in the history of computer languages? If so, what lessons can be brought to the current Python 2/3 situation?

For myself, as a professional Python programmer, I like Python 3 a lot. But until a critical mass of PyPi moves over, it isn't worth the effort for most projects.

Edit: fixed a wrong word.


> has there ever been a successful language rewrite, post critical mass, in the history of computer languages?

Does K&R C to C89 count?

> If so, what lessons can be brought to the current Python 2/3 situation?

C89 programs could use K&R libraries [1], and C89 compilers could compile K&R code.

[1] Well, so could Pascal or Fortran programs for that matter. Where K&R-->C89 fails as an analogy to py2-->py3 may be important aspects of what's so difficult with py2-->py3. (EDIT: added this footnote.)


> C89 programs could use K&R libraries

That's probably the key. If I could use Py2 libraries as-is in Py3, moving to Py3 would be a minor issue.


If Guido had just left division the way it worked in 2.7 we'd all have moved by now. Everything else the community is fine with, but it is enough of a sticking point for some people that they can't be bothered to make the switch.


I think you'll find a lot of developers have very different pain points for the switch. For you, it's integer division. For me, it's unicode strings. Some people in this discussion even cite the print function.

Long story short, if you're turning something easy into something hard, don't expect people to switch.


For web backends I feel the new PHP and its frameworks are good enough. JS/HTML5/CSS are doing well for web frontends at least, and they evolve fast. Java did well on Android and in the enterprise software stack. There are also Objective-C and .NET for their market segments... Nothing can replace C/C++ for systems programming at this point. Additionally, many 'minor' languages are here for different goals (Go, Erlang, etc).

Now the question: why do I need Python at all nowadays? I spent two years trying Python and ended up with PHP/C/C++/JS/Java for nearly all my needs.


I develop Python code that helps automate the design of Intel CPUs (& graphics), and we recently (last week) upgraded to Python 3.3.3. Thankfully, I am less pessimistic than Alex on the subject.


Fix the GIL.

Python 3 isn't getting used because it breaks backwards compatibility without offering many meaningful benefits. Sure, the syntax cleanups and the new sugar are nice and all. But you don't rewrite a working code base because of "nice".

So, fix the GIL; replace spaces indentation with tabs; take the stupid 79-character limit out of PEP8; even clean up the standard library... After all, if you don't need to worry about backwards compatibility, then you might as well re-do it all properly.


PEP8 revised the 79 char recommendation to 99: http://www.python.org/dev/peps/pep-0008/#maximum-line-length

It still doesn't agree with you about spaces vs. tabs though.


I understand there are backwards incompatible changes in python 3.

But how hard is it to write code that works under both python 2 and python 3? Is this easy, or are the number and nature of the changes such that this is a pain? How often do people write code that will work under both?

During the ruby 1.8 to 1.9 switch, it was common for people to write code that worked under both. How hard this was depended on the code base, but usually ranged from 'very easy' to 'medium but not that bad.'

You had to avoid the new features in 1.9.3 of course; you had to avoid a few features in 1.8.7 that had changed in backwards-incompatible ways; and, mostly around char encoding issues, you had to sometimes put in conditional code that would only run in 1.9.3. That last one was the most painful, and overall it was sometimes a pain, but usually quite do-able, and many people did it.

Now, the ruby 1.8 to 1.9 migration was quite painful in many ways, but the fact that so many dependencies worked for a period in both 1.8 and 1.9, without requiring the developers to maintain two entirely separate codebases... is part of what made it do-able.

And, later, dependencies eventually dropping 1.8 support, of course, is part of what forced those downstream to go to 1.9.3. But by the time this happened, all of your major dependencies were probably available for 1.9.3, you rarely ran into the problem of "one dependency is only 1.8.7 but another is only 1.9.3", because of that period of many developers releasing gems that worked under both.


There were a few breaking changes, like renaming things, and they dropped the "print" statement. It is possible to write code for both 2 and 3, but you have to jump through a few hoops, and can't use the nice features of Python 3.
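
For concreteness, the usual hoops look something like this (a sketch that should run unchanged on 2.6+ and 3.x):

    from __future__ import print_function, division, unicode_literals

    try:
        from urllib.parse import urlparse   # Python 3 location
    except ImportError:
        from urlparse import urlparse       # Python 2 location

    print("half:", 1 / 2)   # 0.5 on both interpreters
    print(urlparse("http://example.com/a").netloc)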

The official way to do it was to write in Python 2, and then use a converter to generate the Python 3 code (which was backwards IMHO). If the Python maintainers had encouraged writing code that worked on both versions, they could have done a lot to help (like providing compatibility modules or backports).

But more important than new libraries that work with both versions (or even with just Python 3) is all the legacy code that doesn't run without heavy modification on Python 3. If they had refrained from needless breaking changes (like removing the `print` statement), and made the breaking improvements opt-out (like renaming str->bytes and unicode->str), it would be much easier to transition to the newer versions.


Is there a tool that will take a requirements.txt file and let you know whether all the packages in that file are already Python 3 compatible (by looking up corresponding packages on PyPI)?

If not, that tool seems worth writing, and then we can do a poll of some major production codebases and see whether Python 3 support is actually missing.
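
Not that I know of, but a naive version seems like an afternoon project; something along these lines (assuming PyPI's JSON endpoint, with no error handling and version pins ignored):

    import json
    import re
    try:
        from urllib.request import urlopen   # Python 3
    except ImportError:
        from urllib2 import urlopen          # Python 2

    def supports_py3(package):
        # PyPI publishes trove classifiers via its JSON API
        url = "https://pypi.python.org/pypi/%s/json" % package
        info = json.loads(urlopen(url).read().decode("utf-8"))
        return any(c.startswith("Programming Language :: Python :: 3")
                   for c in info["info"]["classifiers"])

    with open("requirements.txt") as reqs:
        for line in reqs:
            name = re.split(r"[=<>!\[;\s]", line.strip())[0]
            if name and not name.startswith("#"):
                print("%s: %s" % (name, supports_py3(name)))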

As for "Python 2.8": meh. I think we should just support the development of tulip / asyncio in Python 3.4 (see docs, this is looking awesome already: http://docs.python.org/3.4/library/asyncio.html), then use our blog platforms as Pythonistas to promote all the new async programming you can do using asyncio + Futures + yield from, port over important async frameworks like Tornado/Twisted, etc.

In that case, Python 3.4 becomes the Python release that gives you all the power / convenience of Python 2.x with a complete cleanup of callback spaghetti code as demonstrated in the "hello, world" co-routine example: http://docs.python.org/3.4/library/asyncio-task.html#example... -- I think async programming is mainstream enough, especially in web/mobile/API applications, that this will be a compelling draw for people.
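
To give a flavor, the 3.4-era style looks roughly like this (a sketch, not production code):

    import asyncio

    @asyncio.coroutine
    def greet():
        print("hello,")
        # yields to the event loop instead of blocking the thread
        yield from asyncio.sleep(1)
        print("world")

    loop = asyncio.get_event_loop()
    loop.run_until_complete(greet())
    loop.close()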

I think the only thing GvR and crew got wrong is the timing -- it probably won't take 5 years from release for everyone to migrate to Python 3, but it will take more like 8-10. But it'll happen.


I'll be honest. I'm waiting until Guido says it's time.

The phrase that I have stored at the moment is "Python 3 is the future of Python." Fine. Great. But that's not good enough.

This page needs to be updated: https://wiki.python.org/moin/Python2orPython3

It should be shortened to read, in its entirety, "Python 3."


What is really needed is a 3to2.

I have been very impressed with 2to3; the amount of work it does is pretty impressive. But as somebody who prefers to work in python 3 yet often needs my scripts to run in python 2, I have no choice but to write those in python 2. I can see the same dilemma for somebody who writes an open source library and hopes for as much usage as possible.


No, I actually agree with a commenter on the page: what you need is a small compatibility layer to run 2.x code on 3.x, which can be slow and emit loads of warnings, and then stop making 2.x available AT ALL.

What 3to2 and six really do is keep the 2.x runtime alive, and as long as the runtime sticks around (and is patched, and features are backported, etc.), people will keep using it.


That wouldn't solve the main issue, which is the non-trivial amount of work in porting existing Py2 codebases.

Besides, 3to2 (which does exist but isn't maintained) and 2to3 are both bad ideas. They miss edge cases, even common ones, and they add even more complexity to the build/test/install process which people already find hard.


Sure, 2to3 misses a lot of edge cases, but it does make porting from 2 to 3 A LOT faster, doing all the basic stuff for you.



From where I sit, it seems like the 2/3 schism is the result of "one and only one way to do something". While it sounds like a good slogan, and I was on board with this party line for a long time, breaking perfectly good features in pursuit of a more perfect adherence to "only one way" does nothing except alienate the community.


Making python 2 a little bit more compatible with python 3 is not the way to go.

What about making python 3 fully backward-compatible with python 2.7 with the help of magic imports:

    from __past__ import byte_strings
    from __past__ import except_tuple
First this will output warnings, then it will raise RuntimeError.


This feels a lot like the migration from Zope2 to Zope3, which everybody was saying was going to be painless and wonderful and then Django happened. As a long-time Python user (and a reluctant former Zope user) I hope things will not turn out the same this time round.


  [Python 3 releases live in parallel to Python 2] In retrospect this was a mistake,
  it resulted in a complete lack of urgency for the community to move
Adoption comes from solving urgent problems - not from creating them.


To make matters worse, I'm seeing an increasing number of Python programmers switch to Go and I predict that Go will slowly replace Python 2 over the next few years.


As sad as it may sound, the most annoying thing about Python 3 for me is this:

print "Hi"

vs

print("hi")

Other than that, as Alex said, there isn't much difference between the two.


I totally agree.

I sat silent at europy in 2006 as guido explained he was breaking the print statement. At europy in 2007, when he gave the same talk, I boo-ed, loudly. He said "well no-one complained when I talked about this last year." My bad.

Being able to instantly pepper my code with print statements is how I (and many others) debug. It really gives me the shits these extra parens.

So I continue to boycott py3.


Yeah, I personally find the first example makes code a lot cleaner.


I think you are speaking to a large audience with this post. All python devs are continuously aware of the ongoing avoidance of the newer version. It's a problem that should be addressed even more directly with all of us: what are the key transitional obstacles to overcome when upgrading from, say, 2.7 to 3.x? Etc.

Thanks for shining some light on the issue.


Ok, so one thing:

Python3 is just now becoming the default for many Linux distributions. Once that adoption takes place, adoption of Python3 will increase greatly. It's as simple as that. Once this milestone is hit, the remaining incompatible libraries will see fixes for python3.


All major libraries/frameworks please drop support for Python 2 by end 2014.


This depressing problem requires a crazy solution. Rewrite Python 3 in JavaScript so it runs on V8, and can interoperate with JS code. ;)


Reminds me of Perl 6, which was designed not to meet a user need, but to keep the Perl developers engaged. It has succeeded in this.


PHP 4 and PHP 5 weren't compatible either. How come migration was so much more successful over there than Python 2 to Python 3?


To be fair, it took about as long for PHP 5 to gain traction. PHP 5.0 was released in 2004, but it wasn't until PHP 5.2's release in 2006 that PHP 5 really became viable, and PHP 4's EOL in 2008 that people started pushing for the widespread adoption of PHP 5 (via the GoPHP5 movement[1]). Arguably, it wasn't until the release of PHP 5.3 in 2009, with its namespace support, that people were really spurred to take a PHP 5-only approach to development.

Heck, one of the largest PHP 4 projects, Drupal, still supports a version of their product on PHP 4: it will drop that support this March.[2]

[1]: http://www.garfieldtech.com/blog/go-php-5-go

[2]: https://groups.drupal.org/node/390343


Because python 2 works.


I feel like this point can't be overstated.

It's so simple, but so correct.


I really cannot think of a compelling reason to switch from 2.X to 3.

Sometimes I feel that Python will end at version 3


A pypy-backed Python 4 (with all the features of 3.4) and everyone will jump.


python3 destroyed its ability to work as a simple calculator. Try this on python3:

  >>> answer = 1 + 1
  >>> print answer


I don't have Python3 handy, but in Python2 I do this:

  >>> answer = 1 + 1
  >>> answer
  2
Doesn't Python3 work the same way?

In any case, my PYTHONSTARTUP has this line:

  from pprint import pprint as pp
and I find pp() very nice for printing various structures.


From the cases I saw (I might be biased), a lot of new projects are starting to use Python 3. These projects don't need to consider backward compatibility, and the majority of people in the python world will use python 3. Regarding Ruby, it is more elegant and consistent, and it also seems Matz has a clearer idea of where Ruby will go, so in the long run Ruby will catch up with python, in my own opinion.


The problem is that Linux distros still use 2.7.x as the default interpreter. As soon as they complete the migration to 3.3.x everything will improve, and the maintainers of packages will be pressed to cope with reality.

At least mod_python is already aware of the existence of python 3.3.x ;)


I wonder why Django doesn't have the same strength (both technically and in community) as Rails? I think the philosophy "Everything is an object" actually makes sense, and in combination with built-in functional programming it really makes Ruby a perfect choice for non-professional programmer (even woman) to love coding.


> perfect choice for non-professional programmer (even woman)

It's 2013. Could we please leave 'so easy even a woman could use it' levels of sexism behind us?

(Not to mention - it's a little silly to single out web frameworks as the One True comparator of two languages. That Rails is more popular than Django does not necessarily imply that Ruby is a better language than Python, any more than the fact that SciPy is more popular than SciRuby implies the reverse).


Rails is trending downward, while Django is steady and slowly increasing over the years: http://www.google.com/trends/explore#q=Ruby%20Rails%2C%20Pyt...


I may be wrong, but I think Rails has a way higher percentage of total Ruby use than Django does of total Python use.


I don't come to HN to see blatantly obvious sexism. I come to HN to avoid it. Please stop.



