>'"Fast Open" is a optimization to the process of stablishing a TCP connection that allows the elimination of one round time trip from certain kinds of TCP conversations. Fast Open could result in speed improvements of between 4% and 41% in the page load times on popular web sites.'
Thank you, interesting link. Does TCP Fast Open have anything to do with "slow start", which a few major websites are known to violate[1][2]? Just out of curiosity. It doesn't look like it, at first glance.
Not really. Slow start is a precaution to keep the network from getting congested, whereas TCP Fast Open is essentially trying to reduce the completion time of small TCP requests.
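To make the mechanism concrete, here's a minimal client-side sketch, assuming a Linux 3.7 kernel with client-side TFO enabled via the net.ipv4.tcp_fastopen sysctl; the address 192.0.2.1 and the request string are placeholders. The key bit is that sendto() with MSG_FASTOPEN replaces the usual connect()+send() pair, so the request data rides along in the SYN and that's where the saved round trip comes from:

    /* Minimal TFO client sketch (Linux >= 3.7, client TFO enabled). */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    #ifndef MSG_FASTOPEN
    #define MSG_FASTOPEN 0x20000000
    #endif

    int main(void)
    {
        const char *req = "GET / HTTP/1.0\r\n\r\n";   /* placeholder request */
        struct sockaddr_in srv = { 0 };
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        srv.sin_family = AF_INET;
        srv.sin_port   = htons(80);
        inet_pton(AF_INET, "192.0.2.1", &srv.sin_addr);  /* placeholder server */

        /* sendto() with MSG_FASTOPEN replaces connect()+send(): the kernel
         * attaches the cached TFO cookie (if it has one for this server) and
         * the data to the SYN.  Without a cookie it silently falls back to a
         * normal three-way handshake. */
        if (sendto(fd, req, strlen(req), MSG_FASTOPEN,
                   (struct sockaddr *)&srv, sizeof(srv)) < 0)
            perror("sendto");

        close(fd);
        return 0;
    }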
The particularly unfortunate part is that, since so much of the world's commercial hosting runs on RHEL, much of the world doesn't get this until 2020. Well, except for services that roll custom kernels.
I highly recommend making the switch. They've been doing the package management thing the longest, and it shows, both in package selection and how smoothly it all generally runs. And I've never had less trouble upgrading from one version to the next than on Debian.
I would counsel against using anything but RHEL or one of its clones in production, especially if you're running on bare metal. None of the Tier-1 server vendors support anything but RHEL. This is incredibly important if you care about ever upgrading your firmware, or debugging hardware/software integration issues. Otherwise you get trapped in vendor-fingerpointy hell.
Another good reason to stick with RHEL is that they do a very good job of maintaining compatibility between minor releases. You're pretty much guaranteed to be able to install a third-party package across all minor releases. If it installed on 6.0, odds are it will install and work on 6.4 without issue. Others make no such guarantees.
RHEL has way, way better (and more accurate) documentation than any other distro I've used, particularly when it comes to unattended installations and server configuration tuning. Other distributions tend to focus much more on desktop users than massive server installations and their particular needs.
It's a common complaint among people that RHEL is always too far behind the curve in terms of software packages. But really, your OS will be stale the day after you install it, much like a car loses so much of its value the day you drive off the lot. Freshness is a tempting siren, but can lead you onto the rocks.
If you're a sane implementor, you're going to stick with any distribution you choose over the course of several years. You will favor uniformity over freshness, and stability over constant (and arguably needless) work. And no matter what distribution you choose, you are very likely to roll your own packages of whatever software is critical to your business, as you'll need custom patches and so forth.
"You are very likely to roll your own packages of whatever software is critical to your business, as you'll need custom patches and so forth."
If you care about taking advantage of prompt security updates from your distro provider, you will do this as little as you possibly can. Otherwise you need to pick up that burden yourself, and since most of us aren't security pros, it is a weight that should be assumed with great reluctance. You're no doubt familiar with the dilemma: Are you going to attempt to maintain stability (which isn't at all guaranteed) by backporting patches, or annoy your ops people by forcing version upgrades?
To a certain extent, how bad this gets is a function of how out-of-date your distro is. This is where having packages that are 4 years old can bite you -- you end up rolling your own far more often than you really should. Overuse of tools like virtualenv leads to the same problem.
I wasn't necessarily referring to security updates. I was referring to core service subsystems, like webservers, caches, databases, Java VMs and the like. Most large, mature companies I've worked with (who were all core web properties) maintained their own versions because they had enterprise needs that required specific functionality beyond what the basic packages could provide. Sometimes they tune the code itself with custom patches to fix bugs or improve scalability.
If you're betting your business on this software and are operating at scale significantly larger than the typical Web service, odds are you're not going to live on the distro-provided packages for very long.
It can work out O.K. if you stick to patching srpms of the official releases. For example, in the past I've needed to patch performance fixes for RAID cards into the kernel. When set up carefully, this is easy, repeatable, and has minimal impact.
On the other hand, one of the nice things about RHEL is you hardly ever have to upgrade major version... RHEL4 has been around since 2005, and is only now finally being phased out.
In comparison, I think "sarge" was the stable Debian in 2005. "sarge" is no longer oldstable- nor is etch. oldstable is "lenny", released in 2009.
Don't get me wrong, I like Debian, but when you compare "ease of upgrading", don't forget to consider "need to upgrade"!
True, but don't forget that the longer you wait to upgrade, the more compatibility issues you may run into when you finally have to. Linux has changed a lot since 2005.
I'm more devops than sysadmin, so more expertise may have changed my experience, but when I did run a CentOS server, I thought, "yay, this is great"... until it came time to actually update.
Then I realized that I had simply been accumulating all of my upgrade pain as debt - with compound interest.
By comparison, upgrading Debian servers from version to version felt like hopping from bed to bed in a mattress store. Aside from a missed footing or two, it was usually a nice soft landing.
Yes! Plus I get an unreasonable amount of satisfaction out of that server I last imaged in 1998 running the latest version of Debian. So awesome -- it might outlive me, and if it does, it'll be up to date!
> one of the nice things about RHEL is you hardly ever have to upgrade major version
This is a huge downside for me. I worked for a remote server management company, and every time I had to work on a new (to us) RHEL4 or RHEL5 server (or CentOS equivalent), it was like pulling teeth. In order to run half the software our clients needed, I had to compile newer versions of daemons, create our own RPM packages, and patch and recompile code. RHEL5 didn't even include memcached, for crying out loud!
For 'enterprise deployments' I can understand wanting to go with RHEL if you don't know much about server administration and want to make it 'easier' on yourself, but even for basic web apps, most of the software you'll want to use is absent, out-of-date, or incompatible. I can't imagine anyone working for a startup wanting to use it.
That's true. I like the Debian ecosystem but I also like the idea of putting off an upgrade for 5 years, so I'll probably give an Ubuntu LTS edition a try at some point. Unfortunately my past experience in updating between Ubuntu versions has not been encouraging. Probably best to try it on a box that'll be reimaged or retired rather than updated.
Debian was my first linux distro and moving from stable to testing repos the first time was a real pain in the ass. After doing it once or twice, and reading a ton of stuff and learning what was actually going on when I was changing settings, I was much better off.
Also, you can easily get the latest Linux kernel to work with Debian; I'm currently running the 3.2 kernel on squeeze, although it comes with 2.6.32 by default. And I'm no kernel hacker; they have an easy way to do just this: http://www.debian.org/releases/stable/i386/ch08s06.html.en
It's Red Hat that's not including this stuff in RHEL, not CentOS being slow to track features. So you get the same thing with Scientific Linux (mostly; both SL and CentOS package some extra bits, but overall it's RHEL).
It's API/ABI stability, and it's meant to be a feature of RHEL.
Building a kernel of your own is fairly trivial. The potentially less trivial part is keeping it up-to-date with security fixes, etc. Not impossible by any means, but it does require a bit of a commitment in time, build environment, etc.
OK, interesting. SYN cookies allocate (practically) no resources to a TCP connection until the three-way handshake is complete, whereas Fast Open can start receiving data before the handshake even finishes. But even if you only do Fast Open with hosts you have previously completed a normal three-way handshake with, attackers will just do one full handshake at the beginning of the attack and then switch to SYN flooding.
SYN flooding is only really effective if it's spoofed, and you can't spoof a full three-way handshake unless you're in a privileged position on the network.
> attackers will just do one full handshake at the beginning of the attack and then switch to SYN flooding
Except it's no longer SYN flooding at that point, it's full HTTP request flooding.
But in a sense isn't that really the goal of this design? To make it a bit more efficient to get requests to the application layer?
In any case, it seems like an application using this feature should have an efficient way of disabling it if it can't handle the current load. Kernels could add efficient heuristics to throttle it automatically, too.
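There is a knob along those lines already: the length argument to the TCP_FASTOPEN socket option bounds how many not-yet-accepted Fast Open connections the kernel will queue for a listener. A minimal server-side sketch (port and queue length here are made up; assumes the server bit of net.ipv4.tcp_fastopen is enabled):

    /* Minimal TFO server sketch (Linux >= 3.7, server TFO enabled). */
    #include <stdio.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    #ifndef TCP_FASTOPEN
    #define TCP_FASTOPEN 23
    #endif

    int main(void)
    {
        struct sockaddr_in addr = { 0 };
        int qlen = 128;                      /* max pending (unaccepted) TFO requests */
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(8080);  /* placeholder port */

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            perror("bind");

        /* Opt in to Fast Open on the listening socket; qlen is the throttle. */
        if (setsockopt(fd, IPPROTO_TCP, TCP_FASTOPEN, &qlen, sizeof(qlen)) < 0)
            perror("setsockopt(TCP_FASTOPEN)");

        if (listen(fd, 128) < 0)
            perror("listen");

        /* accept() loop as usual; data that arrived with the SYN is simply
         * readable on the accepted socket right away. */
        close(fd);
        return 0;
    }

Setting qlen to 0 should effectively switch the feature off for that socket, so an overloaded application at least has an escape hatch.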
I'm more concerned that a bug in the entropy of the key generation process could turn these servers into massive reflected DoS amplifiers. E.g., the attacker sends 1 packet with the source address spoofed and the webserver replies with an entire HTTP result to the victim.
How long until this comes to Windows and OSX clients? Until then it doesn't seem that this will get much use. Didn't find anything on a preliminary google search.
> perf trace will show the events associated with the target, initially syscalls, but other system events like pagefaults, task lifetime events, scheduling events, etc.
Based on a session I saw at the Linux Collaboration Summit 2011 in San Francisco, there are actually a few competing profiling toolkits. While most of these are aimed at kernel developers, there is also the potential to use them more generally, either (1) for optimization, or (2) in conjunction with various security toolkits to automate the systematic reduction of kernel-level attack surface.
While both are a can of worms, (1) is more so. With regards to (2), AFAIK it is generally accepted that grsec provides the easiest profiling tools here. While 'roll your own' is never as secure as something locked down by multiple third parties 'in the know' and to a finite extent (see: NSA SELinux), it is far better than nothing, and very frequently custom code or the latest or patched version of a certain daemon will not have publicly accredited rulesets. Therefore grsec's solution is a reasonable basis for starting here. Docs @ http://en.wikibooks.org/wiki/Grsecurity/The_Administration_U...
Linux suffers from the lack of a good debugging API. There has been decent progress in kernel debugging and profiling tools, but nothing has changed for userspace in a long time.
"ptrace" is the main userspace debugging API, used behind tools like "strace" or "gdb", but it's old and clumsy. Quite new "perf" tool, on the other hand, allows user to get various CPU/Kernel stats mostly useful for profiling. Before "perf" you could only try to emulate your program using "valgrind" to get, say cache misses.
The command "perf trace", seems to move "perf" more into the "ptrace" domain. This is indeed exciting.
There is; strace does just that. perf trace is still evolving, though, and appears promising. See, for example, scripting support:
http://lwn.net/Articles/371448/
Ubuntu upgrades to the latest kernel version available at its kernel freeze for every release. So the features in 3.7 will hit Ubuntu systems with 13.04.
Arch Linux will probably get it in a couple of weeks.
Android, on the other hand, will take much longer. That's sad, as there are so many goodies in this for ARM, especially the multi-platform support. This will make updating the Android version a lot easier (for manufacturers and hackers). Right now one of the bigger issues is updating the kernels (and the closed drivers, to be honest).
You can always blacklist a package (such as linux) in /etc/pacman.conf to avoid upgrading it during pacman -Syu, for example. I had to do this during a power regression in the kernel. However this isn't a long term arch strategy. Note also that arch has LTS kernels, should you prefer.
If not upgrading the kernel every week or two is a security hole, most people are pretty screwed. I personally do not like to restart that often. Turning off automatic updates just gives you control of when to upgrade.
If upgrading is not for you, ArchLinux is not for you. You should run a full pacman -Syyu at least once a week and clean up .pacsave files at least once a month. Probably even more often if you are on testing.
Ubuntu already has a generic 3.7 kernel (http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.7-raring/) in the kernel PPA. I'm already running it. Upbuntu usually has a shell script the day of a release to download the new kernel. New Ubuntu versions ship with the latest kernel available around the time of the commit freeze and then just update along that line, to keep regressions in future kernel releases from harming systems inadvertently.
If you're rooted, you can often find custom, more up to date Android kernels optimized for various phones. (The android dev community being what it is, sometimes they're not optimized very well, but if you hunt around and find a good one, they can make a difference to battery life/etc.)
In addition to everybody else's replies, remember, cell phones are computers. Chant it to yourself: Cell phones are computers. Cell phones are computers. There is nothing that a notebook will need that a cell phone won't need just a year or three later. Cell phones can chew through any amount of power you give them, with augmented reality, being used as laptops with optional shells, the sky is the limit. Do not limit your mental model of cell phones to "glorified phone", the end game is "computer with cell network interface."
Remember that memristors are going to reinvent computer memory by 2014? (speed and power near dynamic RAM, but nonvolatile) When the storage hierarchy gets retuned you may find all of your device's nonvolatile storage in your address space. Or maybe not. http://en.wikipedia.org/wiki/Resistive_random-access_memory
Beyond that…
64-bit ARM comes with 31 64-bit general-purpose registers instead of 14 32-bit ones.
There are a number of instruction set differences, such that you can't run the same object code on previous architectures.
The NEON unit also gets twice the registers, double precision floating point, and IEEE 754 compliance.
Bitness is more about per-process address space than total RAM. For example, a 32-bit process can't memory-map a file over (somewhat less than) 4GB in size, while a 64-bit process doesn't have this problem. Wanting to memory-map large files is something you could run into on just about any current mobile phone.
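For example (a sketch; the file name is hypothetical, and on a 32-bit build you'd also need -D_FILE_OFFSET_BITS=64 just to stat a file that big):

    /* Map a whole multi-GB file in one go: fine on 64-bit, not on 32-bit. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : "big.dat";  /* hypothetical file */
        int fd = open(path, O_RDONLY);
        struct stat st;

        if (fd < 0 || fstat(fd, &st) < 0) {
            perror(path);
            return 1;
        }

        /* Map the entire file read-only.  A 32-bit process can't even express
         * a length over 4GB here (mmap takes a 32-bit size_t), let alone find
         * contiguous address space for it, so it has to window through the
         * file instead.  A 64-bit process just maps the whole thing. */
        void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        printf("mapped %lld bytes at %p\n", (long long)st.st_size, p);
        munmap(p, st.st_size);
        close(fd);
        return 0;
    }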
Also think Moore's Law. Current phones are sporting 1GB of RAM. It will just be a few years before they exceed 4GB, and it's easier to get the software support ready now.
No, it won't be. Some of the high-end phones already have 2GB, and next year it seems they'll have 3GB of RAM. So 64-bit is arriving just in time in 2014 (although Cortex-A15 supports a 40-bit extension, using that may be a bit messy).
> When i think of ARM i think of Mobile Phones, total ram isn't an issue(yet?) Are there other advantages?
We don't think about it much, but the total power draw of a data center is a huge cost, and ARM servers are lower-power than x86 servers. Every step towards being able to replace x86 servers with ARM ones is saving the people who pay data center power bills lots of money.
ARM64 isn't required for > 4GB physical memory. The cortex-a15 can already address up to 1TB and is already shipping. Only the per process limit is locked to 4GB on a 32 bit core.
I think you're splitting hairs. In fairness, I didn't explicitly call that out, but as soon as you have more than 4GB of memory, there's going to be an application that wants to use more than 4GB. So yes, 64-bit is necessary in my opinion. Especially when you consider the server market.
> but as soon as you have more than 4GB of memory, there's going to be an application that wants to use more than 4GB
Maybe, but my desktop has 32GB of RAM and a 64-bit CPU, and I've rarely seen a single process use 4GB of memory unless it was leaking. My laptop has 8GB of RAM, also 64-bit, and I'm pretty sure I've never had a single process use 4GB. The only exception I can think of was doing some silly data manipulation, e.g. the initial build of a property graph of the entire Boston area transit system. I bet some high-end games or video/photo editing software would use more than 4GB. Point is, those are all pretty niche situations. I'd bet the majority of memory usage on an average person's computer comes from browser processes (a few hundred MB each, if that) and office applications (also a few hundred MB). Those processes add up though, so having more RAM is usually a really good thing even if no single process uses anywhere close to 4GB.
I also question the need for server loads. I'd bet the vast majority of applications never use 4GB either on the app server or the database server. Most of what gets discussed here on HN are large, high-scalability applications that you wouldn't host on ARM cores anyway. We often forget that the vast majority of websites are tiny and low-traffic.
Datacenters have a lot of stuff running in them. Big servers and databases aren't just for public-facing large-scale applications.
Database servers in particular want as much memory as possible, and it doesn't take that big of a business support application to have a working set of data > 4GB.
Same goes for a lot of the memory-hungry Java application servers that sit around in a lot of businesses.
Apparently we're crossing wires here. I'm in no way, shape, or form saying these applications don't exist (I'm currently writing one).
What I am saying is that for every such application there are probably more than a thousand sites hosted on free-with-your-domain or $5-a-month PHP + MySQL hosting that use nowhere near 4GB per process. For the companies hosting those sites, a Cortex-A15 with a ton of RAM is an awesome solution. This isn't all or nothing; we're talking about two different markets.
It doesn't, and I don't see where I've made that claim. I'm refuting the opposite. The initial claim was:
> I didn't explicitly call that out, but as soon as you have more than 4GB of memory, there's going to be an application that wants to use more than 4GB. So yes, 64-bit is necessary in my opinion. Especially when you consider the server market.
I'm just saying I don't agree. There is a massive market very open to the power savings offered by ARM that has no use for >4GB process spaces. That doesn't mean there aren't markets where 64-bit will be useful.
I think we are on different pages with the word "necessary". I am using it as "needs to exist as an option", and I think you're using it as, "needs to be used by everyone".
Heh. Ask anyone that's ever edited maps for games, edited video, rendered, or used Photoshop if they want the ability to use more than 4GB of memory for a single process.
There are many games now with multi-gigabyte files; the ability to map that entire file into memory is invaluable if the system has enough memory to support it.
There's also a difference between "required" and "desirable". Many applications will happily run with less than 4GB of memory available to them, but many will also run much better when they can access more.
Don't fall into the same trap that so many did with 32-bit processes, etc. Look forward a few years and see where the industry is eventually headed anyway and just assume that it should be there now.
I'm just going to assume I worded my post poorly. I pointed those out and identified them as niche markets, because they are. Somehow the impression I gave seems to have been "there is no need for 64-bit, ever!!!!!".
I believe that the disconnect here is that you are claiming the need for >4GB of RAM per process is a "niche market", while many others here are claiming it is something that many applications today would have an immediate use case for, and/or have to work around not having available.
HP has announced plans for power-efficient ARM-based servers with their "Project Moonshot" program, and starting with their "Project Redstone" Calxeda ARM server development platform:
4GB is the limit with 32-bit. ARM is starting to be used on servers, so that's an issue there. But even on phones:
Motorola Droid 256MB [2009]
Google Nexus One 512MB [2010]
Galaxy Nexus 1GB [2011]
Google Nexus 4 2GB [2012]
It doesn't look like it will be long before we need 4GB or more. I would expect that, like with x86, you need 64-bit even to access the full 4GB (due to memory-mapped devices).
The more interesting part is the ARM multi-platform support. 64 bit ARM cores haven't hit the market yet, but the ability to run one ARM kernel on several different hardware set ups is useful today.
per the article: "The new 64 bit CPUs can run 32 bits code, but the 64 bit instruction set is completely new, not just 64 bit extensions to the 32 bit instruction set, so the Linux support has been implemented as a completely new architecture."
It will certainly make testing early access silicon from AMD more interesting. But that nice benefit aside, having a large virtual address space is really handy for a lot of things.
If you've never been in the Linux world, I can certainly see how this is confusing.
This announcement is for the Linux kernel. The different Linux distributions (Ubuntu, Red Hat, etc.) all bundle different versions of the Linux kernel plus umpteen packages around it to create a cohesive desktop or server experience.
The kernel is the one core piece shared between ALL of the distros. It is the desktops, package managers, etc. that differ between the distros.
Linux is an operating system kernel, the most basic layer of software managing the system and talking to your hardware. Linux exposes a standardized interface to the so called 'user space', where programs like Chrome or Apache etc run.
The Linux kernel is important, but only a small part of a whole operating system - it's the lowest layer, but there are lots of layers on top of it, including all the programs that the average user uses.
Ubuntu, Red Hat, and other Linux _distributions_ bundle a Linux kernel, common software packages, and utilities to manage these together. So there are many Linux distributions, but only one Linux kernel.
Ubuntu/Fedora/Arch/etc = Linux + lots of other stuff, without which the kernel is pretty useless (command shell/interpreter, GUI, command line tools, etc.)
Linux is just the kernel and all different distributions (Ubuntu et al) are based on it.
Think of it as if the Twitter API were the kernel and the different clients like Twitterrific or Tweetbot were the distributions (Ubuntu, Fedora, etc.). Distributions build on the kernel and add lots of extra features and improvements.
> Furthermore, the server should periodically change the encryption key used to generate the TFO cookies, so as to prevent attackers harvesting many cookies over time to use in a coordinated attack against the server.
What is going to do this? I hope this is built-in somehow.
It looks like the key will be accessible via the proc filesystem. But it's anyone's guess how many distros will faithfully schedule a cron job to rotate the key.
EDIT: Looks like the key is chosen at kernel module "late init" time. I think this is before any init scripts have had the opportunity to add back any entropy persisted from previous boots. So the entropy in the kernel pool is minimal. It may be plausible for a remote attacker to guess the key for a bunch of servers.
Also, if the key is not rotated by cron, it provides a single-packet method for a remote attacker to observe that a server has been rebooted since he last checked. This will give a good indication of how often security patches have been applied.
Might be better to have the keys rotated after a certain number of TFO cookies are generated rather than on a time-based schedule. This will prevent attackers from trying to make a huge number of requests in a set period of time.
The TFO cookie is only generated once per client "source IP" and is good until the key is changed on the server. (Scare quotes because the source IP may be spoofed.)
For an attacker to learn a cookie that's valid for a given victim "source IP", he only needs to be a passive observer somewhere along the route. Even if we believe that's very hard in most cases, if it's possible at all, he has the mother of all anonymous reflected DoS amplifiers. http://tools.ietf.org/html/draft-ietf-tcpm-fastopen-02#secti...
So, yeah, using a key that's rotated after a short amount of time -or- number of uses (whichever comes first) seems like a good idea.
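Something like this, policy-wise (all names and thresholds here are made up for illustration; this is just the rotate-on-time-or-count logic, not the kernel's actual TFO key handling):

    /* Sketch of a "rotate after T seconds or N cookies, whichever comes first"
     * key policy, with replacement keys drawn from /dev/urandom. */
    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    #define MAX_AGE_SECONDS (10 * 60)   /* hypothetical: rotate every 10 minutes */
    #define MAX_COOKIES     100000      /* hypothetical: or after 100k cookies   */

    struct tfo_key_state {
        uint8_t  key[16];         /* current 128-bit server key */
        time_t   installed_at;    /* when it was generated       */
        uint64_t cookies_issued;  /* cookies minted with it      */
    };

    static int rotate_key(struct tfo_key_state *s)
    {
        FILE *f = fopen("/dev/urandom", "rb");
        size_t n = 0;

        if (f) {
            n = fread(s->key, 1, sizeof(s->key), f);
            fclose(f);
        }
        if (n != sizeof(s->key))
            return -1;
        s->installed_at   = time(NULL);
        s->cookies_issued = 0;
        return 0;
    }

    static int key_expired(const struct tfo_key_state *s)
    {
        return (time(NULL) - s->installed_at) >= MAX_AGE_SECONDS ||
               s->cookies_issued >= MAX_COOKIES;
    }

    int main(void)
    {
        struct tfo_key_state s = { 0 };

        if (rotate_key(&s) < 0)
            return 1;

        /* Before minting each cookie, check both limits. */
        if (key_expired(&s))
            rotate_key(&s);
        s.cookies_issued++;

        printf("key age %lds, %llu cookies issued\n",
               (long)(time(NULL) - s.installed_at),
               (unsigned long long)s.cookies_issued);
        return 0;
    }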
It's very easy to do in NAT'ed environments and the Linux kernel doesn't implement the suggestion of the RFC draft to include timestamps too.
An attacker who doesn't want to do a MITM attack because that might be noticed can set up sessions to all kinds of servers outside the NAT which support TFO. Then all these TFO cookies are used in spoofed SYN packets with the source IP being set to the host behind the NAT that the attacker wants to flood. Easy enough.
That will be more difficult if you've got a cluster of servers in which each new connection request of a client can end up at any of these servers at any given time. So the keys would have to be rotated simultaneously for all servers in the cluster.
> X86 before (linux 3.6-rc4):
> # time rm -f test1
> real 0m2.710s
> user 0m0.000s
> sys 0m1.530s
> X86 after:
> # time rm -f test1
> real 0m0.644s
> user 0m0.003s
> sys 0m0.060s
The commit affects 5 lines only.
EDIT: Not sure if this optimization applies to filesystems mounted with standard journaling options...