I like Haskell but I see no future in terms of widespread adoption for it.

The aim isn't necessarily widespread adoption so much as the ability for people who know it to use it professionally. It's way past merely "production ready", but reputation is a lagging indicator, and so most programmers are stuck doing Java...

Since the goal is to avoid {success at all costs}, getting 25% market share is probably not possible. 1 percent would be a start.

The problem, economically speaking, is that Haskell and Clojure engineers are massively underpaid, when you consider that an average Haskeller would be Principal+ at any Java shop. That's because the Java and C++ people can create bidding wars every year and spike their salaries, whereas using a better but more niche language makes that career strategy untenable.

Haskell's laziness isn't as much of an issue as people make it out to be. You can have strictness any time you need it, so it's more akin to an opt-out model than laziness being forced upon the programmer. It is correct to observe that high-performance applications frequently have a lot of strictness annotations, and the argument can be made that high-performance Haskell (while it can be close to C in performance) isn't "idiomatic"... although that claim is true of all high-level languages (even Java). High-performance Clojure ends up being full of '^' characters for type annotations, and high-performance Haskell ends up being full of '!' characters for strictness.


> The problem, economically speaking, is that Haskell and Clojure engineers are massively underpaid, when you consider that an average Haskeller would be Principal+ at any Java shop. That's because the Java and C++ people can create bidding wars every year and spike their salaries, whereas using a better but more niche language makes that career strategy untenable.

I haven't found the same to be true for other languages, but in my experience the Average Haskeller is so caught up in academic exercises, technical one-upmanship, and code golf shenanigans that I'm surprised they find any time to contribute any business value at all. Having a discussion about acceptable levels of technical debt with a Haskeller is like having a discussion about acceptable levels of meat products in your diet with a vegan in a world where there are no plant foods..."everything is technical debt, and none of it is acceptable, and we absolutely can't move on until we change all relevant functors to applicatives and all nested record accesses to lenses!"

And it is squarely a cultural issue, not a technical one...it is quite obvious that Haskell is a fantastically powerful language and capable of mowing over most enterprise-y languages with ease for a very wide variety of domains. It's just that Haskellers are so nitpicky about elegance and style that they don't know how to let shit be shit and get something done when it is needed.

This obviously isn't the case for all Haskellers. Looking at JGM's GitHub feels like looking at Dumbledore's magic through the eyes of a Muggle. It just feels that way with the average Haskeller that I work with.


I don't know who the average Haskeller is, but your description doesn't fit any of the engineers I worked with at Silk, who were all very pragmatic and got stuff done quickly.


Great points. I can attest to the fact that low-latency Java code looks nothing like idiomatic Java. Ironically, the techniques for writing low-latency Java are the same as in Haskell (avoiding object creation, boxing, garbage collection, etc.). High-performance C and C++ are a league of their own, with direct control over memory layout and alignment, etc. Haskell has some amazing work on that front as well, though: http://www.doc.ic.ac.uk/~dorchard/publ/icfp-2013-simd-vector...


> high-performance Haskell ends up being full of '!' characters for strictness.

The upcoming GHC 8 provides the Strict extension, which makes everything strict by default on a per-module level. That should greatly reduce the number of »!« characters.


From the outside, Python 3 seems like a much better language. I don't have strong views of its object system (I avoid OOP as much as I can) but it seems like the string/bytes handling is much better, and I'm also a fan of map and filter returning generators rather than being hard-coded to a list implementation (stream fusion is a good thing). Also, I fail to see any value in print being a special keyword instead of a regular function (as in 3).

What I don't get is: why has Python 3 adoption been so slow? Is it just backward compatibility, or are there deeper problems with it that I'm not aware of?


> why has Python 3 adoption been so slow?

I can tell you about our situation.

We are an animation studio with decades of legacy Python 2 code. We sponsor PyCon and are one of the poster children for Python.

We have absolutely no plans to switch to Python3.

Here are the various reasons:

- Performance is a big deal, and moving to a version of python that is slower is a no-go off the bat.

- Python3 has no compelling features that matter to us. The GIL was the one thing that should have been tackled in Python3.

- Since the GIL is here to stay, our long-term plan will likely involve removing more python from the pipeline rather than putting a huge effort into a python3 port.

- We have dependencies on 3rd-party applications (Houdini, Maya, Nuke) that do not support Python3

- We have no desire to port code "just because". Each production has the choice of either spending effort on Real Features that get pixels on the screen, or on porting code for No Observable Benefit. Real Features always win.

- Python3 has a Windows-centric "everything is unicode" view of the world that we do not care about. In our use case, the original behavior where "everything is a byte" is closer to UNIX. A lot of the motivation behind Python3 was to fix its Windows implementation. We are a Linux house, and we do not care about Windows.

- Armin's discussion about unicode in Python3 hits many of the points spot on.

Why has adoption been so slow? Simply because we have no desire to adopt the new version whatsoever. We'll be using Python2.7 for _at least_ the next 5 years, if not more.

It's far more likely that we'll adopt Lua as a scripting language before adopting Python3.


> It's far more likely that we'll adopt Lua as a scripting language before adopting Python3.

And you're welcome to do that.

But "we want a language frozen in time forever so we never have to maintain code" -- which seems to be what you're aiming for -- is not a goal you can achieve short of developing your own in-house language and never letting it make contact with the public (since as soon as it goes public it will change).

Meanwhile, the libraries are moving on and sooner or later they'll either move to Python 3, or be replaced by equivalent libraries with active maintenance, and the distros are winding down their support for Python 2. Switching to something else, and probably just rolling an in-house language you can control forever, is likely your only option if this is your genuine technical position.


I think you're mischaracterizing the parent's post. Python 3 is slower, offers no compelling features, and uses a string storage model that he doesn't agree with. That criticism does not mean that he or she desires "the language frozen in time".

The real feature needed is to eliminate the GIL. That would be worth breaking compatibility over.


> Python 3 is slower, offers no compelling features

Python 3.0 was slower than 2.7 due to several key bits being implemented in pure Python in 3.0. Since the 3.0 release (remember, Python's on 3.5 now) things that needed it have been rewritten in C, and as of Python 3.3 the speed difference is essentially gone. Also, on Python 3.3+ strings use anywhere from one-half to one-fourth the memory they used to.

As for "no compelling features", well...

* New, better-organized standard library modules for quite a few things including networking

* Extended iterable unpacking

* concurrent.futures

* Improved generators and coroutines with 'yield from'

* asyncio and async/await support in the language itself

* The matrix-multiplication operator supported at the language level (kinda important for all the math/science stacks using Python)

* Exception chaining and 'raise from'

* The simplified, Python-accessible rewritten import system

etc., etc., etc.
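To make a couple of those concrete, here are some tiny illustrative snippets (nothing more than the bare language features, and only the first one needs any setup at all):

    # Extended iterable unpacking (PEP 3132)
    first, *rest = [1, 2, 3, 4]              # first == 1, rest == [2, 3, 4]

    # Delegating to a sub-generator with 'yield from' (PEP 380)
    def chain(*iterables):
        for it in iterables:
            yield from it

    print(list(chain('ab', 'cd')))           # ['a', 'b', 'c', 'd']

    # Exception chaining with 'raise ... from ...' (PEP 3134)
    try:
        try:
            {}['missing']
        except KeyError as err:
            raise ValueError('bad config') from err
    except ValueError as err:
        print(repr(err.__cause__))           # the original KeyError travels along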


I'm 100% not picking on you, but I started programming in Python around 2004. When Py3k came out, I had exactly zero of these gripes. Not once did I ever say, "I wish urlsplit were in urllib.parse instead of urllib", or anywhere else. I had no problems with iterables, the async stuff (honestly, with the GIL, why does any of that matter), exceptions or imports. I recognize that some users of Python did, library/framework devs, etc. But Py3k is a huge, incompatible, and confusing change for 90% of Python devs, and the benefits are things they didn't, and still don't, care about. It was honestly just a bad decision.


> for 90% of Python devs

Given how many people and projects suddenly said "yeah, actually, we want to be on Python 3 now" after seeing the new stuff in 3.4 and 3.5, I think you're overestimating the number of people who don't care about these features.


I take your point, but I have a few counterarguments:

1. Python 3.4 was released more than five years after 3.0. That's a very long time.

2. Assuming you're referring to the async features (none of the other stuff is really momentous), I can understand that for framework devs. It's really not a big deal for most users though, and because of the GIL, it's not as if they'll suddenly reap the benefits of parallelism. All it really means is nicer algorithm expressions and event loops.

3. There's no technical reason the new stuff couldn't have been added to Python 2. The roadblocks are manpower and politics.

4. Even if we stipulate async/await/asyncio are huge, busted Unicode support, 7 years of development, and breaking compatibility with everything is just a terrible tradeoff.

There's just no way this was a good idea, and Python devs could earn a lot of credibility back if they just said "oops". But there's no chance of that.


Short version sounds like: Python is a legacy language. We're not porting our legacy code and we're not starting new projects in Python.

I wonder:

1. Would this have changed if there was a Python 2.8 with some new features (but still a GIL) that ran most 2.7 code?

2. Is your experience representative, i.e. are teams just not starting new projects in any version of Python, even though so many did in the last decade?


Same industry, smaller scope here.

> We have dependencies on 3rd-party applications (Houdini, Maya, Nuke) that do not support Python3

This is our reason. It's a bit of a conundrum-meets-catch-22 situation. On one hand, we would start using Python 3 if there were support for it; on the other hand, no one wants to, because porting legacy code seems bothersome when everything works as it should. To be honest, though, all of our Python code exists to augment those 3rd-party applications. Everything we have that's not tied to those applications is C (C99, more or less).


Thank you for your candid and insightful comments.

I am curious - if there was a GIL-removed version of Python 2, would you change your assessment that your "long-term plan will likely involve removing more python from the pipeline"? i.e. is the GIL the primary (or even sole) factor in that?


I can only speak from my personal experience - I write all my new code in Py3, and almost every still-developed library works great with Python 3; the authors usually ported it several years ago. But sometimes you'll need something that's no longer supported -

Perhaps you want to parse an old obscure file format, and the only code you find for it is from a usenet post in 1996. That code isn't being updated, and no one has ported it. That means you need to do the work to update it, and that can be hard when you aren't familiar with what it's supposed to do.

The other place I've seen people sticking with Py2 is when they've got a huge chunk of internal code. Some companies have been writing python for 20 years, and the original authors have long since left the company. It can be hard to write a business case for having someone spend several weeks updating all the old code, particularly if it's purely internal, and doesn't touch the internet.


I think you're exaggerating the obscurity of libraries that only run on python 2.

The list I've gone off of is https://python3wos.appspot.com/. I use python a lot, and only now in late 2015 I might finally use python3 if starting a new project. When I started a new project last year, we used python2.

The biggest hold-out for me was gevent, which was only released 5 months ago. Gunicorn with gevent workers is my preferred stack for running python apps.

If you use protobufs or Thrift, neither is on Python 3 yet.

The wall of shame currently lists requests as not working on python3, though I think that might be a fluke.

These ones might not be a deal-breaker since you can have a separate environment for infrastructure & app code, but for some reason a lot of the infrastructure tools still haven't updated to python 3 (supervisor, ansible, fabric, graphite).

All together, it adds up to a not-insignificant number of things that aren't yet on python3. And even if nothing you use when you first start a project is python2 only, you have no idea what libraries you might need or want in the future and if those might be python2 only.


That's fair. I only have my experience to draw from, but I've personally found it very straightforward.

Gevent says it supports Python 3 as of the 1.1 release - http://www.gevent.org/whatsnew_1_1.html

I've used requests with Python 3 quite a bit. I have no idea why it's not listed as supported on the WoS, but their page shows it as working since 2012. https://pypi.python.org/pypi/requests/

Protobuf looks like it's now Py3 compatible, per the devs - https://github.com/google/protobuf/issues/646

ThriftPy has supported Python3 for a while (although ThriftPy is slower than the official lib). Apache Thrift has recently begun working with Python3 as well, however (https://issues.apache.org/jira/browse/THRIFT-1857)

I run Ansible/etc in their own virtualenvs, so they aren't part of system python for me, but you're right - I did have to write an Ansible module recently, and I recall that I did have to use Py2.

The only major package I recall having a problem with was PIL - Eventually I moved to Pillow as a drop-in replacement.


I wouldn't really trust the "Python 3 Wall Of Superpowers", as it doesn't appear to be anywhere near current. The first listed example of a 3.x incompatible package is requests[1], which is not only compatible with 2.7--3.4[2], but has been 3.x compatible since 2012-01-23[3].

[1]: http://docs.python-requests.org/en/latest/

[2]: http://docs.python-requests.org/en/latest/#feature-support

[3]: https://pypi.python.org/pypi/requests


protobuf supports Python 3 (though apparently the pip3 version still requires you to manually run 2to3, but then it works), and there's already a port of fabric (ish) by the original author to py3. And Ansible should stay Python 2 until distros start shipping py3 by default (which they have, so I assume Ansible3 will come out soon™).

If you're willing to do a bit more than "pip install x; well, I guess it doesn't work, back to py2", you can use almost everything on py3 (and yes, requests is ported too).


A freshly ported library of nontrivial size will include many bugs, especially if the porting is done automatically. Not a risk worth taking.


It's not only about big libraries. Unless you do web-dev and can rely only on currently developed libraries, from time to time you will run into a specific, abandoned library. Even if there is just a single such library, Python 2.7 may be preferable.

Moreover, in machine learning Python 2.7 is still considered the default version of Python. (E.g.: some parts of OpenCV need Python 2.7; until recently Spark supported only Python 2.7.)


>> why has Python 3 adoption been so slow

Really, this topic is coming to an end. Most libraries support Python 3, and if they don't, there are better alternatives.

For new users and new projects there's no reason now, except personal preference, to choose Python 2. In fact, beginners who start with Python 2 are just instantly incurring a learning debt upon themselves, to be paid down the track when they have to move to Python 3.

The community has some extremely vocal Python2 diehards but their arguments no longer hold water.

>> why has Python 3 adoption been so slow?

Whatever, it's just history now.


What about Google's AppEngine? I would dearly love to run Python 3 code on it, but 2.7 is the latest version that's supported. Google seems to have no roadmap to Python 3 on the platform (Managed VMs don't solve the problem - the SDK does not support Python 3) which leads me to worry that they're going to deprecate support for the language entirely on it.


History it isn't. There is a vast amount of Python 2 code in existence. I still write a lot of Python 2, doing maintenance on existing projects.


What about MySQLdb? That's the main extension module that has been keeping me on Python 2 for server-side software. That, plus a lack of perceived upside to CPython 3, which is still an interpreter with a GIL after all. PyPy 3 might be compelling if it can run MySQLdb or a bug-compatible replacement. True, PyPy still has a GIL, but at least it also has a JIT compiler.


MySQLdb has other issues besides Python 3 support. It's no longer maintained and hasn't been updated in quite some time. The Django documentation (and I) now recommend mysqlclient-python, which is compatible with systems that rely on MySQLdb:

https://github.com/PyMySQL/mysqlclient-python


Another endorsement from me. We migrated with

    import pymysql as MySQLdb
and everything just worked.
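(For anyone wondering why that works: both drivers speak the same DB-API, so typical call sites keep working unchanged. A rough sketch, with made-up connection details:)

    import pymysql as MySQLdb

    # The MySQLdb-style keyword arguments (passwd, db) are accepted by PyMySQL
    # for compatibility, so existing code doesn't need to change.
    conn = MySQLdb.connect(host='localhost', user='app', passwd='secret', db='mydb')
    cur = conn.cursor()
    cur.execute('SELECT id, name FROM users WHERE id = %s', (42,))
    print(cur.fetchone())
    conn.close()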



Check into PyPy-STM, they were able to remove the GIL. This will eventually be merged back into PyPy4 (the 2.7 compatible version).


Perhaps better to ask, why have Python devs vowed never to do this again? There is a reason it will be "just history" soon.


I'm coding exclusively in python3, and I agree the "least small" changes from python2 made it cleaner BUT

> I'm also a fan of map and filter returning generators rather than being hard-coded to a list implementation

I find myself often having to wrap expressions in list(...) to force the lists. (which is annoying)

Generators make things much more complicated. They are basically a way to make coroutines that interact by means of side effects, which are difficult to control. In most use cases (scripting) lists are much easier to use (no interleaving of side effects), and there is plenty of memory available to force them.

Generators also go against the "explicit is better than implicit" mantra. It's hard to tell them apart from lists. And often it's just not clear if the code at hand works for lists, or generators, or both.

So IMHO generators by default is a bad choice.

> stream fusion is a good thing

I don't think generators qualify for "stream fusion". I think stream fusion is a notion from compiled languages like Haskell where multiple individual per-item actions can be compiled and optimized to a combined action. Python instead, I guess, just interleaves actions, which might even be less efficient for complicated actions.


> I find myself often having to wrap expressions in list(...) to force the lists.

Out of curiosity, why do you need to force the lists?

> Generators make things much more complicated. They are basically a way to make (interacting, by means of side-effects) coroutines

Huh? Generators are a way to not make expensive computations until you have to--as well as to not use memory that you don't need. Basically, if all you're doing with a collection of items is iterating over it (which covers a lot of use cases--but perhaps not yours), you should use a generator, not a list--your code will run faster and use less memory.

> In most use cases (scripting) lists are much easier to use (no interleaving of side effects) and there is plenty of memory available to force them.

Generators don't have to have side effects. And there are plenty of use cases for which you do not have "plenty of memory available" (again, perhaps not yours).

> IMHO generators by default is a bad choice.

I think lists by default was a bad choice, because it forces everyone to incur the memory and performance overhead of constructing a list whether they need to or not. The default should be the leaner of the two alternatives; people who need or prefer the extra overhead can then get it by using list() (or a list comprehension instead of a generator expression, which is just a matter of typing brackets instead of parentheses).
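To spell out the brackets-vs-parentheses point (a trivial sketch):

    squares_list = [x * x for x in range(10**6)]   # list comprehension: ~1M ints realized up front
    squares_gen  = (x * x for x in range(10**6))   # generator expression: one value at a time
    total = sum(squares_gen)                       # consumed lazily, so memory use stays flat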


I think you're referring to:

  for i in range(large_number):
     ...
(similarly enumerate, zip etc)

i.e., where you don't bind the generator to a variable but instead consume it immediately, and reading from it as an iterator has no other side effects.

And that's the only use of generators that, in my experience, is both common and practical. Other uses have always quickly become a mess, for the reasons named above.

I don't disagree that generators can be occasionally useful (or often, for your special applications). But mostly it's a pain that they are the only thing many APIs return and that they are not visually distinguishable (in usage) from lists and iterators. For example:

  c = sqlite3.connect(path)
  rows = c.execute('select blablabla')
  do_some_calculation(rows)
  print_table(rows)
Gotcha! "rows" was probably already exhausted by the time print_table was supposed to print it. But how can you know? Hunt down all the code, see what the functions do and what they want to receive (lists, iterators, generators? probably they don't even know). And what if the functions change later? Even subtler bugs occur if the input is consumed only partly.

So in the common case (= no billions of rows), by far the sane thing to do is

  rows = list(c.execute('select blablabla'))
Which is arguably annoying and requires a wrapper for non-trivial things.


None of what you say addresses my main point, which is that a list is always extra overhead, so making a list the default means everyone incurs the extra overhead whether they need it or not. You may not care about the extra overhead, but you're not the only one using the language; making it the default would force everyone who uses the language to pay the price.

Or, to put it another way, since there are two possibilities--realize the list or don't--one of the two is going to have to have a more verbose spelling. The obvious general rule in such cases is that the possibility with less overhead is the one that gets the shortest spelling, i.e., the default.

Also, if you've already realized a list, it's too late to go back and un-realize it, so there can't be any function like make_generator(list) that saves the overhead of a list when you don't need it. So there's no way to make the list alternative have the shorter spelling and still make the generator alternative possible at all.

As far as your sqlite3 example is concerned, why can't do_some_calculation(rows) be a generator itself? Then print_table would just take do_some_calculation(rows) as its argument. Does the whole list really have to be realized in order to print the table? Why can't you just print one row at a time as it's generated?

Basically, the only time you need to realize a list is if you need to do repeated operations that require multiple rows all at the same time. But such cases are, at least in my experience, rare. Most of the time you just need one row at a time, and for that common case, generators are better than lists--they run faster and use less memory.

If you need to do repeated operations on each row, you just chain the generators that do each one (similar to a shell pipeline in Unix), as in the example above. This also makes your code simpler, since each generator is just focused on the one operation it computes; you don't have to keep track of which row you're working on or how many there are or what stage of the operation you're at, the chaining of the generators automatically does all that bookkeeping for you.
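Something along these lines, say (hypothetical function and file names, just to sketch the shape of such a pipeline):

    def parse(lines):
        # one raw line in, one parsed record out
        for line in lines:
            yield line.strip().split(',')

    def keep_valid(records):
        # drop malformed records as they stream past
        for rec in records:
            if len(rec) == 3:
                yield rec

    def print_table(records):
        for rec in records:
            print('\t'.join(rec))

    with open('data.csv') as f:                # hypothetical input file
        print_table(keep_valid(parse(f)))      # only one row is in flight at a time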


What makes me snarky is the replacing of

    '%s %s' % ('one', 'two')
With

    '%s %s'.format('one', 'two')
The latter is just more annoying to type. Stupid argument I know but I find myself grumbling to myself every time...


The second example should be:

    '{} {}'.format('one', 'two')
Either way, the former example still works in Python 3.5, so that syntax hasn't gone away. `format` is preferred, though. This Stack Overflow question has some good answers as to why: http://stackoverflow.com/questions/5082452/python-string-for...


You mean,

    '{} {}'.format('one', 'two')
Your experience may vary, but in my experience, when I switched to .format(), I found a number of bugs in code that used % instead. As mentioned, you can continue using %.


I had the same experience.

I love love love printf() and its ilk, so switching to something else (no matter how well designed) seemed asinine at first.

But .format() has really started to grow on me and did uncover some subtle bugs in old code.


The thing about the latter form is that you can do something like '{0} {1} {0} {2}'.format('apple', 'banana', 'orange') which results in 'apple banana apple orange'


You can also use keyword arguments, which is really helpful in certain situations. Example from the docs:

  'Coordinates: {latitude}, {longitude}'.format(latitude='37.24N', longitude='-115.81W')


You could do that before with % substitution too. But I prefer .format() because (1) it's the new idiom and (2) it will coerce to string without me having to bind the type in the format string.


Simplified to this in Py 3.6:

    f'{one} {two}'


Do you have a reference for that? Wouldn’t that be introducing a huge security hole in all programs?


PEP 498. Literals only, does not add any additional security problems.
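A rough sketch of the distinction (made-up values, and it needs 3.6+):

    secret = 'hunter2'
    template = '{0}'                   # a format string can arrive as runtime data...
    print(template.format(secret))     # ...and .format() will happily fill it in
    print(f'{secret}')                 # an f-string only ever exists as a literal in your
                                       # source, so there is no new way to hand it untrusted text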


Ah, right; I missed the ‘f’ prefix. And since it’s only done when parsing the expression, it is not a security problem. Thanks!


Is it actually happening? Proper string interpolation? In all its extendable glory?


The PEP was accepted, and I believe it can be tried in a nightly or alpha release. It is not extensible, though; GvR didn't see much utility in that, yet at least.


It's not replaced; both are perfectly valid. And .format() also exists in Python 2.


No need to get snarky, because the % formatting is still in Python 3. It is also my preferred formatting method.



I actually found a good use for string.Template, but that doesn't mean I understand why it exists in addition to the other two formatting sub-languages.


It was created long before .format and is better in some cases for i18n where simplicity is best.


The super, super controversial automatic interpolation feature is what I'm really looking forward to.


The main reason for most bigger projects is that they rely on that one library that is not compatible with Python3. Fortunately, there are fewer and fewer of those.

Another reason is that the advantage of switching is just not that big, if you already have everything working in Python2.


It's lack of backward compatibility, and not enough of an upside for most developers to go through the effort to upgrade their existing code. It's especially tedious for open source projects, because making a codebase compatible with both 2 and 3 can be a lot of work.


PEP3003 was a moratorium on language changes in order to allow alternate implementations time to catch up. This meant that Python 3.1 and Python 3.2 didn't include any new language features. Releases since the moratorium ended have been much more compelling, IMO.


I think it's because 3.0 and 3.1 were basically "we broke everything, it's slower, and there are no new compelling features." The versions after that have been fine, but I think the first two versions were such clunkers that it created a bad schism, and now that there's a schism nobody wants to support two languages, so people just stick with 2.7. I think if they had just put some new feature that people had to upgrade for in 3.0, they would have avoided this mess, and people would have grumbled for a few months and adapted.

I don't think the lesson is "never break compatibility". The lesson is "don't compete with yourself by releasing a product that is actively worse than your current version"


I think one thing that stalled adoption was the difficulty of migrating to Unicode string constants. In Python 2, you could make a unicode string as u"Entré", but in Python 3, the 'u' was not permitted. Allowing the superfluous u" notation in Python 3 was a big aid in writing 2 and 3 compatible software. Don't remember when that was introduced, Python 3.2?

Within the last six months we've moved to writing all new code in Python 3 and migrating a fair bit of legacy code as well. Been fairly smooth on Linux -- a bit rockier on Windows.


> I think one thing that stalled adoption was the difficulty of migrating to Unicode string constants.

This wasn't just down to the fact that Python 3 didn't support u"" (it was added back in 3.3, for reference), but also down to the fact that much of the ecosystem still supported RHEL5's default of Python 2.4, which meant `from __future__ import unicode_literals` wasn't an option (it is, almost certainly, a less good option, but it's in many ways good enough).
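For reference, the two compatible spellings look roughly like this (assuming Python 2.6+ on the one hand and 3.3+ on the other):

    # explicit prefix, legal again from Python 3.3 onward (PEP 414)
    label = u"Entré"

or, as the very first statement of the module:

    from __future__ import unicode_literals
    label = "Entré"   # now a text (unicode) literal on Python 2 and 3 alike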


It feels like there are a whole bunch of factors (though I'm no expert).

It took a few versions of 3 to hit a sweet spot (in some cases features that were removed in the initial version of 3 have slowly been re-added in subsequent versions). There were a lot of crucial libs that needed to be ported. Just general inertia.


I think the reason is that it's basically only now that Linux distros are starting to ship Python 3 as the default. When RHEL, CentOS and Debian moves completely to Python 3, the rest of the world will follow.


Is the shipped default really so important, esp. for third-party software? E.g. for RHEL you get python3 packages through Red Hat Software Collections, with support and "intended for production use".

(It of course limits software that is to be shipped with the distro itself)


RHEL/CentOS 7 was released in June of 2014 with Python 2.7.

Following their glacial release schedule, maybe we'll see Python 3 by 2019 in RHEL 8.


Those who value a more rapid package update schedule shouldn't use those distros. One of their most salient features is the ancient software packages.


One of many reasons could also be devs wanting a faster implementation or a better multi-core story, so Python 3 has probably lost ground to languages like Go, Clojure, and JavaScript.


> The society where "elders" were regarded as wells of wisdom probably never existed.

Not true. Traditional Asian societies take the other extreme and venerate age (probably more than they should, but that's another topic). Hence, you have 90-year-old patriarchs calling the shots in family businesses, and middle-aged men whose parents were still alive were traditionally considered still adolescent. Also, not taking care of your parents is a jailable crime in China.

> In every society but our own, women after 40 were considered very old.

I'm a man under 40, so this may not be my fight, but... Fuck. That. Shit. While it is true that middle-aged, unmarried women were once considered undesirable and suspect, that was in societies in which urinating at the dinner table was socially acceptable, childbirth meant a 1-in-25 chance of dying, indoor plumbing didn't exist, and tuberculosis was a death sentence.

> She should be happy to be alive instead of complaining about young people being young.

She's not complaining about young people being young. She's complaining about people (often with no reference to their age) being stupid. Not all young people are stupid.


> patriarchs

>> women after 40

> not taking care of your parents is a jailable crime in China.

Nothing about "wisdom" in there.


Age is a weird prejudice because it depends so much on environment. In most companies, you're viewed negatively before age 30: you're expected to work the worst hours because the assumption is that you have nothing "better" to do and would just be out drinking. In Silicon Valley, you're viewed negatively after 30. Some doors close and others open.

I do find, strangely, that I'm way more attractive to women now that I'm older, even though my looks have (objectively) declined. Adolescent men are unwanted, invisible, and generally disliked by society unless they're in the top 5% for social skills; things get better at 25 and a lot better after 30... although I'm still glad that I married young because I'd imagine that the dating pool shrinks considerably.


> I do find, strangely, that I'm way more attractive to women now that I'm older, even though my looks have (objectively) declined.

Confidence and the way you carry yourself. In our teens and early twenties we tend to dress and act for others. As we get older that becomes less common, or we become better at selling our appearance (attire, demeanor, etc.) as ourselves and not a façade. Whether that's a hoodie and jeans and converse sneakers, or a suit and tie, or something in between doesn't matter (mostly).

Also, in your twenties you're still putting on muscle mass and filling out some. If you maintain decent physical conditioning you won't have that scrawnier look that a lot of teens have. You probably have broader shoulders and a fuller face than you did in your younger years, it looks healthier and more attractive, even if it's paired with gray hair and wrinkles.


> although I'm still glad that I married young because I'd imagine that the dating pool shrinks considerably.

it actually doesn't, in fact it grows continuously, but you'll just be shamed by older women and married men for dating younger women.


Shame on them for being so concerned with what consenting adults do.

Select every argument about why it is none of their business to outlaw/shame homosexuality, Ctrl-C, Ctrl-V.


as soon as you start talking about what people are actually attracted to (not what they say they are), the shit starts to fly. my comment has swung up/down in votes throughout the day.


Yep... I didn't take my GF to the office Christmas party, cuz I didn't wanna hear the comments. ay!


Obligatory XKCD: https://xkcd.com/314/


>I'm still glad that I married young because I'd imagine that the dating pool shrinks considerably.

Why would it? If anything the number of single adults keeps growing each year.


But the number of single adults interested in people my age does not.


I find it's quite the opposite for me. Women find me significantly less attractive now that I'm in my mid thirties.

