This didn't work for me; it did something very strange.
I have nginx set up to listen on SSL and proxy everything to my Python backend over plain HTTP (as well as serve static files).
Installing the pagespeed module (without even turning it on) caused nginx to revert to the default config (it just served the default nginx index.html and 404'd everything else) after the 3rd or 4th request.
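For what it's worth, the setup was along these lines (paths and ports changed; this is a minimal sketch, not my exact config):

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/site.crt;   # made-up paths
    ssl_certificate_key /etc/nginx/ssl/site.key;

    # Static files served directly by nginx
    location /static/ {
        root /var/www/site;
    }

    # Everything else proxied to the Python backend over plain HTTP
    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}
```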
I've narrowed the problem down to a specific change and updated the installation instructions to use a release from last week, before that change. People are reporting that this fixes it for them. I'm going to put out an updated version, probably 1.5.27.2, that will also have this issue fixed.
I had the same issue. Even with the same config file and no pagespeed directive, the version with the pagespeed module failed on static files for me.
[0426/064031:INFO:google_message_handler.cc(33)] Shutting down ngx_pagespeed root
If you're using Debian, dotdeb [1] comes with nginx-extras (which includes the push stream module [2]) installed, as well as some other goodies like Passenger.
Am I the only one who is worried that modules like this introduce subtle and hard-to-find bugs in the served pages? It's a layer you don't usually look at when debugging web applications.
I've been thinking the same thing. Although, reading about how it works does help alleviate my concerns somewhat: https://github.com/pagespeed/ngx_pagespeed/wiki/Design. At the least, it seems like something worth playing around with.
Sure, I'll take a look at it. Might be able to send you guys some rpm instructions as well. We are a RH/CentOS shop.
Edit: Looking over your build files. I think I'm going to go ahead and create (or modify existing) spec files and throw together a github for an nginx RH tree. If anyone is interested in it when done, shoot me an e-mail through profile.
I kept tripping up on the instructions: the git clone for your repo should use the anonymous (git://) URL, and the final commands (debuild -us -uc) kept telling me there was no upstream git.
I tried the scripts verbatim on a fresh Ubuntu 13.04 VM (64bit).
Edit: Your deb link at the end of the readme is broken as well.
I am currently using nginx as a proxy to Apache+mod_wsgi. nginx serves the static files too. Do you know whether I should install this module for nginx or for Apache or both?
But offloading things to a CDN is actually one of the things it doesn't do. You might be thinking of the PageSpeed Service [1], which is a Google-hosted version.
Note that the docs above are for mod_pagespeed (Apache), but all of the same filters are available in the nginx port as well; it's the same C++ code under the hood. In a nutshell: HTML, CSS, JS, and image optimization (resizing, recompression, WebP, auto-spriting, etc.).
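In the nginx port, turning it on and picking filters looks roughly like this (the cache path is made up; `pagespeed`, `pagespeed FileCachePath`, and `pagespeed EnableFilters` are the actual directive names):

```nginx
pagespeed on;
pagespeed FileCachePath /var/ngx_pagespeed_cache;   # made-up path, must be writable
pagespeed EnableFilters combine_css,rewrite_images,convert_jpeg_to_webp;
```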
If I have a static directory of files, is there any way to leverage this same code to just perform the optimizations on my HTML, so I don't have to have the webserver do it (or so I don't have to run my own webserver)?
Hmmm. I don't really know C++ well enough to do this myself. Oh well.
Seems like it would be super useful to have a command-line version of this, so I could take an HTML file, pipe it in, get out an optimized file, and then diff them to learn how to make my pages better.
I suppose as a hack I could set up nginx with the plugin and then load each page through curl or something and diff them that way...
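A self-contained sketch of that diff idea. The two files here are written by hand to stand in for what you'd actually fetch with curl from a plain server and a pagespeed-enabled one; all ports and names are made up:

```shell
# Stand-ins for the two responses you'd fetch, e.g.:
#   curl -s http://localhost:8081/ > original.html    (plain nginx)
#   curl -s http://localhost:8080/ > optimized.html   (nginx + pagespeed)
printf '<html>\n  <body>\n    <p>hello</p>\n  </body>\n</html>\n' > original.html
printf '<html><body><p>hello</p></body></html>\n' > optimized.html

# Unified diff shows exactly what changed between the two versions
diff -u original.html optimized.html || true
```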
That wouldn't be a 100% match, since it assumes HTTP headers and mangles them too. For example, it probably uses the MIME type served by the proxied server to decide whether it's CSS or JS or whatever else should be minified (I didn't check the sources for this).
With a pipe, it'd need to use heuristics to figure out what kind of file it is. That could probably be added, but it's not entirely trivial.
Additionally, you'd miss out on all the HTTP header mangling, such as cache expiry settings.
Yep, it'll concatenate files, minify them, etc., all the things you would expect. Perhaps the only caveat: it doesn't do any dependency management, which Sprockets provides (if you actually use that part of it).
I tried it on my own little webpage [1].
Results in short:
Load times were 0.5 to 0.9 seconds shorter. Bear in mind that I only have static content on that website.
This is brilliant, in principle. Getting some odd segfaults to do with libpthread-2.15.so on Ubuntu 12.10 though, so not sure what's going on there. I'll dig some more then file a bug...
Any ideas why this would happen?