Ansible 2.1 Released, with Network Automation, Containers (redhat.com)
148 points by taha_jahangir on May 26, 2016 | 92 comments



I have not been a big fan of Ansible due to some critical bugs (at least in 1.x) and the way its core committers treat community requests like this one.

For one: Ansible 1.x cannot even print the file name and line number of a syntax error in the offending playbook. [1] The core committers ignored the issue and refused to backport this basic debugging feature even after the issue had been open for 2 years.

That, by itself, is a deal breaker for me.

[1]: https://github.com/ansible/ansible/issues/5797

Edit: Downvoting me doesn't make this issue go away. What was requested is a simple, basic debugging requirement - any mature syntax tree parser should be able to do it.


Disclaimer: I'm an Ansible dev.

As the ticket shows, this was added in the 2.x release of Ansible, that is why the ticket was closed.

Adding this info required a major revamp of the parser, which we did in 2.x, for this and many other reasons. This is not a simple change in 1.x and we decided not to backport it.


Could you? Tons and tons of companies are forced into using outdated versions (for tons of reasons) where point release updates are still possible.

I'm sure it's a lot of work, but a lot of your core and original users would appreciate it. Not implementing something as useful as that just has a "we got 'em, no need to do anything else for them" vibe.


> Tons and tons of companies are forced into using outdated versions (for tons of reasons) where point release updates are still possible.

Exactly my point! The Ansible devs should own up to their BAD design choice (I heard someone here say they intentionally avoided writing a parser by using YAML) of not being able to report the syntax error's file name + line number!

My story was that debugging Ansible playbooks through loads of nonsensical, cryptic error messages caused by syntax errors - spending hours figuring out what went wrong - was a complete frustration. It was so bad that we eventually re-designed our infrastructure to cut out Ansible, and we never looked back. So much for Ansible 2.x; it doesn't even matter. Ansible lost its appeal in 1.x, and that's it. No more 2.x upgrades.

The lesson here is that when a young software tool gets adopted, but its early versions cause so many frustrations and the authors don't care about back-porting bugfixes, all future versions become irrelevant.


These sorts of tools, with their incredibly annoying design flaws, have gotten so bad that I've been debating creating my own using rpm and ssh on top of bash with some Python.

I get why it's important to have a lot of the features, but when the devops crowd thinks [0] is acceptable (or at the very least, doesn't scream about it), then it might be time for something new... with [1] in mind.

  [0] $ man salt 2>/dev/null | wc -l
      140905

[1] https://xkcd.com/927/


Which design flaws are you referring to? As a recent Salt convert (and Puppet expert), I was perplexed as to why such a nice tool would have a man page 40x longer than bash.

Turns out it includes extensive documentation for all states supported by Salt, generated from the online documentation. Compare this to the Puppet manual:

    $ man puppet
    PUPPET(8)                Puppet manual                         PUPPET(8)

    NAME
       puppet

       See 'puppet help' for help on available puppet subcommands

Obviously no-one will be using everything that Salt supports, so it would be nice if it were broken into sub-sections. But I much prefer having all documentation available in a manual to looping through every module and running "rdoc" as in the Puppet case.

Any sufficiently popular configuration management tool will have equally long documentation.

There are a couple of simpler options available:

http://www.nico.schottelius.org/software/cdist/

https://github.com/brandonhilkert/fucking_shell_scripts


"Design flaw" might not be right description. Maybe, "Naive implementations with nearly useless documentation" is better.

When I first started using Puppet, I would go home absolutely exhausted every night. Their design choices, along with a lack of documentation, would turn the equivalent of a 20-minute bash script into something that would take days. The best example I can think of is the arbitrary ordering of module execution. I understand why they do it this way, but if they documented it in plain sight, then maybe my desk wouldn't have a forehead-shaped dent in it. The same goes for the other tools.

The only thing I can think of that keeps companies from releasing proper documentation is their expensive support contracts. There's a lot of incentive to make a standard incredibly difficult.

I just want accessible tools :(

edit: fss looks pretty neat! Kinda rpm-y, but it fits a nice middle ground.


I get your sentiment. I only started using Salt some months ago, but it was really a breath of fresh air compared to Puppet. However it took me a while to realize its true strengths since the documentation is very...dry.

IIRC Ansible, like Chef, does serial execution. That is, states are applied in the order they are written. I think that's part of the reason it has gotten so popular, as you'll have very few surprises in the vein of "what do you mean there is no ntp_service -- it's right there next to the config declaration!".
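
For example, in a playbook like this (a toy sketch, module args from memory), the config is guaranteed to be in place before the service task runs:

    - hosts: ntp_servers
      tasks:
        - name: install ntp
          apt: name=ntp state=present
        - name: write ntp config
          copy: src=ntp.conf dest=/etc/ntp.conf
        - name: ntp_service
          service: name=ntp state=restarted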

What Salt got right is the pillar, for which the Puppet equivalent (Hiera) was an afterthought. The Salt engine allows you to generate states from the pillar, rather than making poor clones of Puppet modules (as most of the formulas I've seen online do).

However, that flexibility is not documented anywhere, nor is it part of best practices. Nevertheless, I'm about to release a set of formulas that are truly pillar-driven with no hard-coded stuff. Keep an eye out for the accompanying blog post :)

Of course, if you don't care about data/code separation and just want to get stuff done, there are better tools. And if you have full control of your environment, I would strongly suggest using Guix and/or Nix rather than these legacy configuration management tools.


I actually really like salt, just not its documentation. Like you said - it was a breath of fresh air. :)

Last time I used it, a few months ago, it was while helping out another admin. He was confused about how to do certain things, and the documentation wasn't very helpful for either of us. The solution came from GitHub, which has become my go-to for these sorts of tools. Though that could easily be supplemented with some better examples on their site, or in their, erm, man page.

I doubt we'll have some great tools with great documentation anytime soon (or monitoring tools!), but a sysadmin can dream... :)


Although it's cool you guys fixed it - thank you for that - those of us sitting out in Red Hat land may not see 2.1 on EPEL for a very long time. So it's a legit request.


It's in 2.0, which is available on RHEL.


Ah ok if it's in 2.0 then you're right! I thought it was exclusive to 2.1.


> any mature syntax tree parser should be able to do it.

Well, that's just it: I don't think they use such a parser.

I always thought that's why they use YAML as their input language - to avoid having to write a parser.

I like many things about Ansible, but the 'language' is an unreadable mess that makes termcap[1] feel like genius level UX design.

[1] https://www.freebsd.org/cgi/man.cgi?query=termcap&sektion=5 (Also known in my day as 'turdcap'.)


RE: "I like many things about Ansible, but the 'language' is an unreadable mess that makes termcap[1] feel like genius level UX design."

I've been a Puppet user and a long-time Chef user, and I've recently been using Ansible for a couple of projects, but I can't learn to like the 'language' at all. I feel like Ansible's YAML syntax just hides the complexity of configuration management from you. Which, at first, as a novice Ansible and CM user, or for small projects, is great! You can be super productive, and anyone on your team can understand it.

However, as your systems get more complex, doing anything remotely complex with it produces confusing, hacky, unreadable Ansible 'code'. Then, when you try to scale this while working in a team, and try to do any useful testing, or re-use/extend/share any roles, you're kind of doomed.

The reason I continue to use Chef is that it's a Ruby DSL you can easily extend, which anyone who can code can write, test and understand. Ansible, by contrast, is just templated YAML, which means you don't have to understand how to program.

I guess my point is, configuration management is kind of hard. I feel like people use Ansible because 'it's easy', and I hear people say 'Ansible is easy' a lot. But in the end, Ansible - the toolchain, workflows and syntax - will become just as complex as Chef/Puppet/SaltStack when you operate it at scale.


I think we have to distinguish two things going on in Ansible files.

1) It's declarative. So if you are looking for a procedural language, you will be disappointed. It's also declarative for good reason IMHO.

2) It's still hacky. If the 'language' had remained entirely declarative then the YAML syntax _might_ have been OK, but the language looks like a poor design stretched beyond its limits.

Some people don't like the Ansible language for reason 1, and some for reason 2.

Personally, I prefer that it's declarative: the file is supposed to describe the features the target system must have, and then the module, which is procedural, has to figure out how to make that happen on whatever target system is in question.

That separation keeps things clean, idempotent (most times) and neat.

I think procedural approaches _seem_ better because they are familiar to programmers, but they can become complex, non-idempotent and hard to debug.
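
To make that concrete, a task like this (hypothetical, but typical) only declares the end state; the user module inspects the system first and does nothing if the user already exists, which is what keeps re-runs idempotent:

    - name: deploy user exists
      user: name=deploy state=present shell=/bin/bash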


I've used Ansible quite a bit, and I could sometimes sum it up as, "the worst one, except for its alternatives".

I tend to think of Ansible as supporting a flexible combination of declarative and procedural approaches. Within a particular sequence of tasks, it's procedural. Those lists of tasks are typically combined into a role or the like that is typically more declarative in its application. Of course application of roles is actually procedural in that it happens in the same obvious order it's defined, but the higher level organizational constructs like roles have more of a declarative feel to me (I'd assume that's by design).


Ansible has none of the things that make declarative languages good, though. Good declarative languages allow composition - having a base object, adding properties to it, overriding others. Ansible handles none of that in its playbook syntax or its vars syntax. In the role and module syntax, sure, but those are Turing complete anyway.


I agree that it's crufty.

If I were them I'd sunset the old syntax in the next version, create a conversion tool that works for 90% of cases and then drop support for the old file format.

But that's what I would do, and it would probably kill the project.

As soon as people can't cut and paste Ansible 1.x roles from github into their project, Ansible would probably start to wither.[1]

That's the typical workflow. You need to do something on your Linux box. You google it, find a decent Ansible solution, cut and paste a few text files, maybe modify it a bit, and repeat.

[1] And this is another reason declarative syntax is so awesome: it's easy to snap roles together like Lego bricks.


Every time I let the config management stuff slide slowly towards procedural, bad things have happened.

You want your config management templates to be as declarative as possible. "This is the desired final state", and that's it. Of course, in practice it's very hard to stay 100% declarative, but it's very good practice to try and keep it close to ideal.


Agreed, this is a great way to think about it. I think Ansible is at its best when the modules handle the imperative stuff behind the scenes, and the playbook just declares. That's not always possible, but the modules are getting better and better.

FWIW I also found YAML to be very confusing syntactically at first; things got easier once I realized that it's basically 1-to-1 with JSON, and I could convert to JSON to get intuition for the file structure (simple yaml2json script here: http://pastebin.com/TpjZLnLa). Thumbing through the book "Ansible: Up and Running" also helped.
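
For instance, this toy snippet of playbook YAML:

    tasks:
      - name: ping it
        ping:

is exactly this JSON:

    {"tasks": [{"name": "ping it", "ping": null}]}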

In the long run, putting a declarative idempotent layer atop the same old mutable infra is tough but a necessary compromise right now. It'll be great if the immutable-first tools of today (NixOS et al.) mature and we can leave this behind.


I speculate that Ansible chose YAML under the premise that it would lower the barrier to entry compared to a real language like Python or whatever. That would be fine if everything were just a matter of configuration, in which case YAML would be just as good as JSON, if not better.

The problem is that because people want logic, Ansible ended up introducing an idiosyncratic YAML-templating language that's not actually YAML or anything else. Now people write large system deployment scripts in this weird YAML that I don't ever want to look at.


JSON is crap for configuration. It's picky with quotes, doesn't like trailing commas, and isn't as readable as other options.


JSON is the example, not the prescription; JSON is just a widely used format for arbitrarily nested data. You could use JavaScript objects (or Python dictionaries) if you want. Again, this applies to the scenario where Ansible playbooks are only configuration. If they were only configuration, then fine, let's go with readable YAML.

If Ansible playbooks were only configuration, then even XML would be OK, because in the sum of all pros and cons this would be a tiny factor. The pickiness of quotes and trailing commas is really whatever here.

But instead Ansible playbooks use a "readable" YAML-ish language. The price is not right.


If it's that easy, then implement it yourself and make a PR to Ansible ;)


Apparently it's NOT easy to implement, according to the Ansible dev above.


It's also not that easy to get PRs accepted.


It is unfortunate that Ansible has a complete lack of test cases around core modules and, as a result, people act surprised when they break in a new release.

For example, see this fairly critical defect with the s3 module in today's release: https://github.com/ansible/ansible-modules-core/pull/3347


Definitely agree with this. I use the docker module extensively and found 2 regressions[1][2] that I went through the trouble of debugging down to a single commit, but I still have no idea whether anyone is going to fix them.

I'm hoping that now that they are done rewriting the docker modules, and considering they are using Docker as a selling point for 2.1, they will be more proactive with these issues.

1. https://github.com/ansible/ansible-modules-core/issues/3219
2. https://github.com/ansible/ansible-modules-core/issues/3231


Look at Chef. It's extensively unit tested, in both the core client & server and also the cookbooks associated with it. Check out the docker one, for example: https://github.com/chef-cookbooks/docker The interfaces and primitives for docker in this cookbook are great too.


I used to be a Chef user, and although I do love it, it's overkill for the current infra I am supporting. I haven't had too many issues with Ansible - I love its simplicity - it's just that lately there have been some annoying regressions.

I do really appreciate that testing is a priority in the Chef community, because it definitely isn't with Ansible. It also seems, based on the naming conventions in the Chef cookbook, that Ansible is playing catch-up with 2.1 (imitation is the sincerest form of flattery?).


I love Ansible, and have been using it since it was a humble little git repo with a single author.

Unfortunately, it's probably the most important Python package that doesn't support Python 3; it would be cool to see it upgraded. Apparently the hold-up is supporting very old Python 2.x versions because of RHEL.


I think it actually makes sense for something like Ansible to use Python 2, because it's about controlling lots of remote machines on various Linux distros, and the idea is that it's agentless. If you suddenly had to install an "agent" (Python 3) on all the remote machines before you could control them with Ansible, that wouldn't be great.

On the other hand, Ansible itself could have a tiny bootstrap step which installs Python 3 on all the remote machines before it does any work.

In our case, we use Ansible daily for deployments, but haven't actually written any custom Python modules -- it's all just straight Ansible YAML. So I'm guessing for a lot of use cases it doesn't really matter what language Ansible is written in.


Or they could support both, so that it works on distros that have only python3 and no python2 installed (AFAIK they don't exist yet).


The latest Ubuntu LTS, 16.04, only has Python 3 installed by default. For Ansible modules to run at all, you need to install Python 2 first (possibly using "pre_tasks" + a "raw:" command in your Ansible playbook).
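
Something like this works as the bootstrap (a sketch - it assumes you connect as root, and the exact package name may vary):

    - hosts: xenial
      gather_facts: no    # fact gathering itself needs Python on the target
      pre_tasks:
        - name: bootstrap python 2 for ansible modules
          raw: test -e /usr/bin/python || (apt-get update && apt-get install -y python-minimal)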


Ubuntu Server no longer has Python 2 installed by default (though it's available if you want it). This is the case on 16.04.


Ah, I thought I had heard about that somewhere. I did tests on 16.04 recently with Ansible, but python2 was there, so I guess the image I was using (from a VPS provider) wasn't vanilla Ubuntu.


If it's _truly_ agentless, why do they care about the Python version on the target? I feel that the Python code on the target is the agent. I might be missing something.

I tried it a while ago. I'm interested in developing (lightweight) Tower-like management software using JavaScript, as Tower is a bit too pricey for small/mid-sized customers.


> If it's _truly_ agentless, why do they care about the Python version on the target? I feel that the Python code on the target is the agent.

"Agentless" means that it works by pushing to the machine and that there isn't an agent process already running on the target. It doesn't mean that there aren't any requirements on what needs to be installed on the target already.


It can do truly agentless operation using raw mode (which is basically raw SSH). That said, a version of Python 2 can be relied upon to be available on almost all Linux boxes.


Yes, that's what I used, as my target is a resource-restricted router that does not have space for any Python. That's the only occasion where you can call it agentless, but most people are not using that mode.


Support for python3 has been on the roadmap; sadly, most of the installed base of servers out there uses 2.x, and in many cases 2.4 (CentOS/RHEL 5).

This is NOT a switch from py2 to py3 - we are aiming to support BOTH at the same time. This is not a trivial task (especially with 2.4) and will probably take us several versions to implement.


Several majors or minors?


Minors - we release every 3-4 months (or plan to).


Can I kill Vagrant with this Docker Compose approach on OS X? https://www.ansible.com/blog/six-ways-ansible-makes-docker-c... Does anyone know, or has anyone tested this with a normal (non-beta) Docker install on OS X?


Does it support python3 yet?

Also, why do we have to install aptitude to do system updates? It has been a long time since apt-get had bad resolution issues (and aptitude isn't installed by default anymore (has it ever been?)).


Hmm, all my cloud images (Digital Ocean, Rackspace) with Ubuntu 12.04 and 14.04 have aptitude installed by default. I'm only seeing a problem with Ubuntu 16.04 there.


Aptitude is installed by default on Debian Jessie.


It's not on raspbian or Ubuntu Server.


I hope open sourcing Ansible Tower is still on the cards, after it was promised at a conference shortly after the acquisition.


We've found Rundeck to be a more flexible alternative to Ansible Tower, and Rundeck is open source: https://github.com/rundeck/rundeck


http://rundeck.org/plugins/ansible/2016/03/11/ansible-plugin... - this looks interesting. Does anyone use Rundeck to replace Tower?


Not to hijack the thread, but any pointers to good resources for selling mgmt on Rundeck? We're currently using Jenkins (!?) for that role, something that enforces a divide where developers are allowed to automate, but operations isn't.


Wouldn't the obvious way forward be to open Jenkins for your ops?

(That said, Rundeck is probably more straightforward to use. You could trigger it from Jenkins.)


Not specifically Rundeck, no. I don't think anyone has written a blog post on that yet.

Depending on how you're using Jenkins, Rundeck may not be a feature-for-feature replacement. You'll have to weigh Rundeck's feature set against your specific requirements.


I'm trying to parse your statement and I'm not sure if you want something to enforce that divide or if your complaint with Jenkins is that it enforces a divide between dev and ops?


Sorry for not being clear. We're using Jenkins because it was the shortest path to a solution. It's not easy for non-devs to use, so I'd like something that is ops friendly and won't cause the devs to want to repeatedly bang their head on the desk.


Now with extended network support, one question gets even more important: how do you do rollbacks with Ansible? There is no default mechanism or policy that seems to help with that, so I have to hand-roll my rollbacks?


The simple answer is: you don't, you "roll forward."

In the event you deploy some code, a DB migration, a server configuration change, etc., and your solution fails after the fact, you move forward, not backwards. Let me explain further.

If you deploy v1.0 of your application and it works, great! If you then deploy v1.1 and it falls over, you find out why, apply a fix, test it locally (Vagrant?), deploy it to a testing environment and perform automated tests (Selenium, jMeter, ...), and once it's working there, you deploy it to production. This is called a hot fix, and it will now be working as intended (unless something else is horribly off the mark, in which case you have other issues).

The key to this example is the local and remote/network-based testing environment(s). In my opinion, it's very much a realistic goal for ALL organisations of ALL sizes to operate: local development environments using Vagrant and VirtualBox; a testing environment that spreads the whole solution out over multiple boxes (for testing networking code and configuration, among many other things); a staging environment for running performance tests (staging should match production bit-for-bit, cpu-for-cpu, ram-for-ram, ...) using jMeter or your choice of tooling; and finally a production environment to serve clients. This is the absolute minimum all organisations should be aiming for, and it doesn't even have to be fully automated using CI and/or CD.

Tests - unit tests, systems tests, integration tests, usability and performance tests, and so on - are also critical to preventing the need to roll back and, instead, implementing a roll-forward policy.


Curious if "roll forward only" could create situations where a failed version change places the system of interest in a non-functional state until the problem is diagnosed, the code revised, and an update released. If that's possible, I would have concerns about the infrastructure meeting the core needs of the business, such as providing value to customers.


Your system is already down; rolling back is the same amount of effort, if not more, than rolling forward. At least that's what I've always found.

Another option is to have customers point at stage after it has been upgraded, and if it all goes horribly wrong, a load balancer change should be enough to point people back at the older production environment.

All this being said, problems in production shouldn't be a thing with configuration management, infrastructure as code (Terraform), and tests, not to mention three environments (development, test, stage - at minimum) to work your way through before pushing to production.


> All this being said, problems in production shouldn't be a thing with configuration management, infrastructure as code (Terraform), and tests, not to mention three environments (development, test, stage - at minimum) to work your way through before pushing to production.

You'll still have problems, you've just automated them now. Those tools and approaches are great, but do they really prevent all production issues to the point where they "shouldn't be a thing"?


Keeping a system down while waiting for a hotfix is not an option for most operations. Rollbacks have their place and hotfixes have their place.


Taking a snapshot of a system before making a change and rolling back to that snapshot would be faster. In any case, a strong policy of only pushing changes to production that have been properly tested in staging will protect you the most.


Why not just roll back while you're testing the fix? No need to be suffering unnecessary downtime while you hunt, fix, test, package, stage, and deploy the hotfix. Rolling back takes you to a version that has already passed all stages.


John Wilkes of Google talks about this problem with Jeff Meyerson [1], and how it relates to the choice to use or not use containers. The spoiler is that container management tooling allows separation of infrastructure builds from deployment: a configuration problem when building a container happens on the build server instead of while a script is running on a machine in production. His argument is that when a container deployment to production fails, the state of the machine is readily known (new bad container) versus a more complex state when a scripted build fails partway to completion.

And a container management tool can facilitate handling a failed distribution automatically via rollback to a previously deployed working container.

[1] http://www.se-radio.net/2016/01/se-radio-show-246-john-wilke...


Revert your playbooks and roles to the version of your last good deployment, and redeploy. With good version control, role version management and idempotent library modules, this should be functionally equivalent to a rollback.

There are plenty of caveats to the above (like the fact that the yum module won't downgrade [1], and you'll need reversible DB migrations) but that's basically the procedure.

[1] https://github.com/ansible/ansible-modules-core/issues/1419
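
In practice the revert-and-redeploy flow is just something like this (assuming you tag each good deploy in git; the tag name here is made up):

    $ git checkout v42-last-good
    $ ansible-playbook -i production site.yml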


This isn't quite accurate. It won't uninstall or remove things that a previous version put into place, unless you explicitly remove them before installing them as part of your playbooks/roles.


Exactly. This whole "declare your environment" thing with Ansible doesn't work.

I've had completely mixed experiences with Ansible. Yes, it's easy to get started, but it's certainly annoying having to create playbooks for removing stuff to get to a clean state.


To be frank, my experience with configuration management has been a mix between "YES! THIS IS WHAT WE NEED!" and "...but it still doesn't adhere to immutable states." That's been true with Chef, Puppet, and Ansible, for me. I haven't experimented with other techs.


Depends how you write your playbooks/roles. You can write a role that will both add and remove depending on the value of a variable in your inventory. Then tweak the inventory and re-run.
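
A sketch of what that looks like (the variable name is my own invention):

    - name: manage nginx
      apt:
        name: nginx
        state: "{{ nginx_state | default('present') }}"

Set nginx_state to "absent" in the inventory for the hosts you want cleaned up, then re-run.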


IMO, rollbacks with Ansible (or Chef/Puppet, etc.) are done by redeploying prior releases, not by trying to remove/replace software on an instance. Same if you are rolling out a server configuration update (like a certificate): if you need to roll it back, you send out the prior configuration.

Am I missing some other detail?


Hand-roll them like a fine Cuban cigar. Jokes aside, I would not inherently trust an automatic rollback even if Ansible did support it. You must always provision for worst-case failures.


Yes. There is no way to roll back.


Could you guys stop doing new features and write some unit tests for core modules, please? So many regressions.


I love and hate Ansible. It has simplified so many things for me, but it has also had some annoying bugs and regressions. Somehow I've lost trust in the codebase. Performance is also a showstopper: I need a tool to develop with, and I just can't wait >10 min for an iteration. I usually end up modifying my server's config files manually and then building the Ansible templates.

Unfortunately I'm not aware of a better alternative.


Use tags and only play the role you fixed.


--tags and --start-at-task are your friends.
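
Roughly (role and tag names made up):

    # in the playbook:
    - hosts: web
      roles:
        - { role: nginx, tags: ['nginx'] }

    # then iterate on just that part:
    #   ansible-playbook site.yml --tags nginx
    #   ansible-playbook site.yml --start-at-task="write nginx config"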


Can anyone comment on the speed of Ansible 2+?

I have a bunch of playbooks that still use 1.8, and they are dog slow. Changing the contents of one file can take ~10 minutes. (Interestingly, running the entire playbook on a clean server is actually faster.)


Have you tried turning on pipelining in ansible.cfg? Details here: http://docs.ansible.com/ansible/intro_configuration.html#pip...
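
For reference, it's a one-line change (note that pipelining requires "requiretty" to be disabled in sudoers on the targets):

    # ansible.cfg
    [ssh_connection]
    pipelining = True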


>Changing the contents of one file can take ~10 minutes.

O_o

That kind of simple operation just zipped by for me on Ansible 1.6, 1.8, 1.9, and 2.0. The only noticeably slow kinds of operations for me are generally the package installs (understandably). I don't use a ton of variables in my templates, though.


I have quite a few variables, yes. One of the files I change frequently is an external configuration file filled with credentials, which is managed through variables. It looks something like this:

    {% for cred in credentials %}
    {{cred.key}}: {{cred.value}}
    {% endfor %}


Hey, kind of off-topic, but wouldn't you be able to do something like this?

  {{ credentials | to_nice_json }}


I'm curious to hear this too. We use Ansible to manage a fleet of a few thousand hosts, and runs take hours and hours and hours.


Speed has been both better and worse; it depends a lot on what your playbook is doing, the size of your inventory, vars, etc.

The only answer I can give you is: test.

We do try to keep decent performance, but this is not our main focus.


I use around 1-3 variables per module and 4-5 modules per playbook. Other than that, the actual tasks are trivial (install package x etc.)


By Windows support, do they mean as a control machine? Because I thought it has supported automating Windows remote hosts for a while now. The documentation still reads as though it doesn't support Windows as a control machine.

I just tried it out, and "pip install ansible" fails because of pycrypto. If I install pycrypto manually from another source, ansible installs successfully, but isn't recognized as a command. (I've double-checked it's installing ansible 2.1.0).


Can anyone comment on Ansible for CIS hardening (and the Docker CIS benchmark)? If not, perhaps other tools?


I use Ansible to do all of the CIS recommendations. It took about a day of following along with the CIS guide.


The unarchive module gets a cert error on get.docker.com on Ubuntu 14.04 LTS. The system Python is too old.


I also had these problems in the past with SNI certs. It's a very annoying old Python bug; it's possible to patch old Python, but that makes no sense for an Ansible deployment: http://stackoverflow.com/a/29099439/756056 A better approach: use curl to download to some temp directory/file and then extract from there. Curl does not have these errors. Example: https://github.com/ansible/ansible-modules-core/issues/1716#...
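
A rough sketch of that workaround in playbook form (the URL and paths are placeholders):

    - name: fetch archive with curl instead of the python urllib stack
      command: curl -sSL -o /tmp/docker.tgz https://get.docker.com/builds/Linux/x86_64/docker-latest.tgz

    - name: unpack the archive that is already on the remote host
      unarchive: src=/tmp/docker.tgz dest=/opt copy=no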

My dream would be for the whole of Ansible to use rock-stable libcurl instead of the requests library, which has problems with certs on (not so) old Python versions.


We actually avoid the requests lib, for this and many other reasons. We don't use libcurl either, as we try to avoid extra dependencies when possible.

The issue is more basic than requests: the actual Python http/url and ssl implementations have these issues. We have patched around them and added warnings indicating the minimal Python versions you can use and still have SNI work.


thanks for clarifying!



