Extra security measures for next week's releases (postgresql.org)
197 points by chanks on March 28, 2013 | 41 comments



We've now posted a general announcement about the availability of packages on April 4, 2013: http://www.postgresql.org/message-id/CAN1EF+x0dmwMFuJGWuXMiR... and http://www.postgresql.org/about/news/1454/

As stated in the announcement and Tom's email to -hackers, the reasons for advance notification are as follows:

* Contributors and people watching for vulnerabilities are going to notice that we've stopped automatic updates -- it's better for our project to just tell them all why

* Upgrading relational databases is often not trivial -- we want to give our users time to schedule an upgrade rather than just dropping an important update suddenly


Wouldn't it make somewhat more sense to branch to a private repo without telling the public, make the required changes there, create the packages from that branch, and then later push the changes into the public repo?

The way they are doing it now entices hackers who don't know the exploit but happen to have a recent clone of the repo to look for the big hole in hopes of finding it ahead of the fix. Granted, hackers are probably already doing that sort of thing on high-profile projects like PostgreSQL to begin with, but in my experience it is easier to find something exploitable when you already know something exploitable exists than it is when you're just randomly poking around. At the very least it makes it easier to stay motivated and focused.


Just knowing there is a preauth RCE in the code base buys you very, very little. Statistically speaking, there are probably quite a few undiscovered flaws right now in any compact, core Linux distribution. The fact that no one yet knows what they are is precisely what prevents their exploitation. Security holes are numerous, and the ones that have escaped detection generally continue to do so - the rate of co-discovery in the field is very low.

Warning ahead of time is thus often very useful - it allows the infrastructure to prepare to make the changes quickly. This is the same reason that folks like Microsoft consolidate most patches into standardized cycles.


> Just knowing there is a preauth RCE in the code base buys you very, very little.

I disagree with that. That information is highly valuable. Auditing is a risky time investment; you may not find anything useful. Audit time is a finite resource, and you want to allocate it where there are vulnerabilities worth finding. There is no way to know that ahead of time.

> Security holes are numerous and the ones that have escaped detection generally continue to do so - the rate of co-discovery is very low in the field.

The rate of co-discovery is fairly high once a second party has been tipped off to the general location and nature of a bug. Most competent auditors will spot the same bugs, especially if the second one already got confirmation that it does in fact exist.


Yes, folks are already attempting to find exploitable weaknesses in these projects. We can assume they exist. Just mentioning that one is confirmed doesn't really lend any insight. The surface area of that project is pretty huge.

If I had to guess where it is, though, I'd bet it was in a PL module. I'm sure there is quite a bit of activity around finding NativeHelper-like situations.


It has to be something severe for this scenario to come into play. A broken procedure can only be exploited if such a procedure exists and can be invoked as the definer. This model is well understood by all, to the point that a vulnerable PL may not be a critical issue for most users.

Given the precautions that have been implemented, my bet is on authentication. That would mostly affect hosts with TCP/IP connections enabled, which is fortunately not the default configuration (tested on Ubuntu).


>Yes, folks are already attempting to find exploitable weakness in these projects. We can assume they exist. Just mentioning that one is confirmed doesn't really lend any insight.

You say that now, then one day, you wake up and all the blue-eyed islanders are gone!


(This is a reference to a logic puzzle about islanders who are able to tell whether they have blue eyes due to someone telling the world that someone has blue eyes. Puzzle at http://xkcd.com/blue_eyes.html, solution at http://xkcd.com/solution.html.)


The repo is only being hidden for a week ("until Thursday morning"), which to me implies that the fix is localized and/or well understood. That's a pretty small window of time, so I'm not sure that it will provide any additional impetus for exploit developers.


This is an interesting tradeoff between responsible security and open-source transparency that the Postgres team is facing. I personally think this is a good way to handle a situation with a serious bug, but it does raise some questions...

Is Postgres working with downstream teams to have everything in place for a coordinated security release? For instance, are they working with the likes of Debian's security team to not only make the source directly pullable, but also have releases available to as many users as possible in each platform's preferred formats?

If they are, how do they keep this under wraps? It seems like the kind of thing that would require a fairly wide "pre-disclosure", and managing trust in a large network gets hard.


Those are some good observations. Most likely they have given the information to Debian security. With something like this, there is a degree of trust that is maintained. The Debian security team has access to other zero-days on a regular basis, so ideally they aren't compromised. It wouldn't surprise me if A/B tests were run on security teams from time to time. E.g. two exploits discovered, one sent to half the team, the other sent to the other half. After log(n) iterations, potential leaks are exposed.
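
For what it's worth, that "different canary to each half" idea is just a bisection over the suspect pool. A minimal sketch (hypothetical and purely illustrative -- the member names and the leaks() callback are made up) of how it isolates a single leaker in about log2(n) rounds:

    import math
    import random

    def find_leaker(team, leaks):
        # leaks(subset) stands in for "did the canary given to this
        # subset later show up in the wild?"
        suspects = list(team)
        rounds = 0
        while len(suspects) > 1:
            half = suspects[: len(suspects) // 2]
            # Hand one canary to `half`, a different one to the rest,
            # then keep whichever group's canary surfaced.
            suspects = half if leaks(half) else suspects[len(half):]
            rounds += 1
        return suspects[0], rounds

    team = ["member%d" % i for i in range(16)]
    actual_leaker = random.choice(team)
    leaker, rounds = find_leaker(team, lambda subset: actual_leaker in subset)
    print(leaker, "isolated in", rounds, "rounds; log2(16) =", int(math.log2(16)))

The leaks() callback is doing all the heavy lifting here -- it assumes you can actually observe the leak at all.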

Diving further, though: at what level do you say you trust the system? Do you trust your compilers not to inject malicious code? (see http://c2.com/cgi/wiki?TheKenThompsonHack) Do you trust peripheral devices? It's very easy to install a physical keylogger into a system. Do you trust your chipsets? Compromised chipsets exist and can be used against you. (http://blogs.scientificamerican.com/observations/2011/07/11/...)

It's a tough situation to deal with. This is part of the reason layered security solutions are typically employed. Even if one system has a zero-day, ideally the other layers increase the overall complexity of triggering it. One of those layers is security teams and blackout periods during which information is not released to the general public, even if they aren't always effective.


    two exploits discovered, one sent to half the team,
    the other sent to the other half
That would only work with brazen leaking. If a security team member were selling 0-days to organizations that intended to make extremely limited and careful use of them, it might never become public that exploits were being leaked.


I agree; I suspect the most likely way to be caught is for the malicious team member to tell someone who then turns them in. Most likely it never gets noticed. Also, it's fortunate that the information received by the security teams typically leaves only a relatively small window of opportunity to perform the exploit.


I like your explanation about layered solutions to security, +1.

"Do you trust your chipsets?"

Certainly not. I do believe the recently discovered tiny byte sequence that can lock up Intel ethernet cards when sent in any TCP (UDP?) packet is actually a backdoor allowing the state to perform DoS at will. I also believe Huawei and ZTE are state-sponsored espionage companies (I've certainly seen weird things, like a keylogger inside a 3G Huawei USB device I bought in Europe).

But I do believe that even if I'm, say, a Debian or OpenBSD dev working on OpenSSL, it's amazingly complicated for the chipset to modify source code and have it make it into the DVCS unnoticed. I also think that as long as the source code isn't corrupted there are ways to create non-backdoored builds.

It's the same thing with program provers that can verify that certain pieces of code are guaranteed to be free of buffer overruns/overflows: what proves that the compiler itself hasn't been tampered with? But still... with DVCSes and many eyeballs, I'm not that concerned about the compilers typically used nowadays being tampered with.
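
One low-tech way to back that up -- a sketch only, not something Debian or OpenBSD necessarily do, and it only helps if the build is reproducible in the first place: have independent parties build the same source on different machines and toolchains and compare checksums of the result. A single tampered compiler then has to forge exactly the bytes everyone else produces.

    # Sketch: compare checksums of the "same" artifact built independently
    # on several machines. Paths are hypothetical; pass them on the command
    # line, e.g.: python compare_builds.py build-a/openssl build-b/openssl
    import hashlib
    import sys

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    digests = {path: sha256(path) for path in sys.argv[1:]}
    for path, digest in digests.items():
        print(digest, path)
    print("MATCH" if len(set(digests.values())) == 1 else "MISMATCH")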


The Intel ethernet controller lockup seems like a really bad candidate for an intentional backdoor. It's way too easy to trigger by accident (a single byte with the right value in the right place!) and yet way too hard to trigger intentionally (a slightly different value immunizes the controller).

Surely an actual backdoor would require something a little harder to stumble over by accident, and wouldn't have a trivial disable code.


> I've certainly seen weird things, like a keylogger inside a 3G Huawei USB device I bought in Europe

Don't tease us like that! Spill the beans!


Tinfoil hat time... Does anyone ever reverse gcc binaries to verify them? I mean, it seems that many eyeballs on the source code actually has a negative effect on people examining the binaries, because why do it when the source is right there?


Just wanted to let you know you got hellbanned for some reason.


A lot of the people they need help from don't need to know the specifics.

They can tell Debian what is basically in this mail, and Debian can be ready to accept a new package.


I agree.

As long as the Debian infrastructure is fully automated (it is), the actual time delay from the PostgreSQL announcement to it hitting Debian servers would be only an hour or so.


As much as I don't like stuff being hidden from me, I think this is a good move. The title made me think it was a permanent move, but it's just until this update is completed. The bad part is that it's obviously a very serious vulnerability...


And now the bad guys know there is a very serious vulnerability, somewhere.


The bad guys already assumed that.

Seriously - the entire premise of IT security (no matter the color of your hat) is the assumption that there is no such thing as a secure computer.


Knowing that there is a vulnerability might motivate them to look for it, but given the size of the software, I doubt they'll be able to find it without knowing more.


You'd be surprised; on Windows, at least, there are people who reverse engineer the security patches from Microsoft in order to determine the initial vulnerability[1].

[1] http://www.phreedom.org/presentations/reverse-engineering-an...

Edit: Misinterpreted your post. You're right, it's unlikely that they'll guess where it is until a patch comes out.


Because there are enough people running Windows who haven't applied the patch that figuring out how to exploit it is a worthwhile undertaking.

Then again, IME of many years as a PostgreSQL DBA, the vast, overwhelming majority of postgres shops aren't running anywhere near the latest release, so depending on how far back this vulnerability goes, there could be a very large number of exploitable targets...


The knowledge may also motivate them to prepare attacks to be executed once the vulnerability is public but before most instances have been patched: scan the Internet for PG-backed applications, identify high-profile ones, prepare automated scripts, etc.


> Scan the Internet for PG-backed applications, identify high-profile ones, prepare automated scripts, etc.

It works more like:

- take the exploit

- spread it over the whole Internet and have it call home wherever it sticks


They'd know it was there as soon as a patch was released, anyway.


How far back does Postgres maintain old versions? Will, say, Postgres 8 be updated with this fix (assuming the problem is in Postgres 8)?

EDIT: Here is the answer: http://www.postgresql.org/support/versioning/

8.0 was EOL'd in 2010, but 8.4 will go through July 2014.


What sort of security vulnerability would justify this extra paranoia? The worst case is that it's something that affects the very common case of Postgres servers that only talk to local services, like a Unicode or quoting error that makes sites which nominally quote their queries correctly vulnerable to SQL injection. That would be as serious as the recent Rails vulnerabilities: drop everything, patch everything everywhere, or definitely be rooted.

Be ready to patch as soon as it's out; this could be a big deal.
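
Tangential, but while everyone waits for the patch: the usual belt-and-braces against quoting/encoding bugs -- a minimal sketch using psycopg2, with a made-up connection string and table -- is to pass values as query parameters and let the driver handle the quoting, rather than splicing user input into the SQL string yourself:

    import psycopg2

    conn = psycopg2.connect("dbname=example user=example")  # made-up DSN
    cur = conn.cursor()

    user_input = "O'Brien"  # hostile or merely awkward input

    # Risky: hand-rolled quoting is exactly what an encoding bug can subvert.
    # cur.execute("SELECT id FROM users WHERE name = '" + user_input + "'")

    # Preferred: pass the value as a parameter and let the driver quote it.
    cur.execute("SELECT id FROM users WHERE name = %s", (user_input,))
    print(cur.fetchall())

    cur.close()
    conn.close()

It doesn't make an unpatched server safe, but it takes one whole class of string-building mistakes out of your own code.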


From what I hear this is pre-auth access to the DB, though it's not from the most reliable of sources.


This bug shouldn't be a huge deal because if you are treating your sensitive database server as anything but exploitable from any machine with network access, you've already lost.

Even if your DB server is properly restricted, you should still patch quickly, but there is no way that it should be reachable unless you're already heavily compromised.


Unless it's something that compromises query/statement integrity from normal user input; a character set problem, for instance.


Yep, there are a few potential ways that could be exceptions, depending on the bug. But let's not kid ourselves, nobody in the real world properly quarantines their DB servers anyway ;)


This is a pretty firewall-happy mindset. Some of us don't have any "internal networks" as a matter of principle.


There are varying degrees of "firewall happiness" and reasonable minds can disagree as to how far you go to balance convenience/security, but...you don't do any network segmentation as a matter of principle? Either I don't understand what you are saying, or you need to make a case for the immediate termination of everybody in charge of your network.


I wonder what the worst possible bug might be?

A bug in query parameter parsing that would allow SQL injection attacks?


As conjectured above, the worst case is pre-authentication remote code execution, i.e. anyone can just connect, send magic packets, and get a shell.


While that would be bad, if it required a magic packet it would have limited impact -- lots of postgres databases don't talk to public networks.

Worse would be a vulnerability that you could trigger just by manipulating query parameters. Then almost every postgres-backed website would be vulnerable.


I, for one, welcome our new secure PostgreSQL overlords!



