DNS blacklisting is useful and has its place, but I would expect a secure-by-default browser to load no resources from domains outside the one in the requested URL (like lynx) and to provide a simple way to selectively whitelist external resources for that domain only (like the RequestPolicy Firefox extension). That's one important omission from your otherwise impressive feature list, and it will probably keep me with Firefox + Vimperator + RequestPolicy.
Since switching to dwm, I can't even remember what I used a desktop environment for (I went from KDE to Gnome to XFCE to (black|flux)box to dwm). Based on your usage, I'm surprised you'd consider leaving xmonad for something as alien as KDE4. Give me the speed and screen real estate of a tiling window manager any day.
It's a very bad habit to use -p for every invocation of mkdir, and it's not appropriate in any of the examples where it's used. For one thing, it will never complain if you get the path wrong (provided you have the necessary permissions), which is particularly undesirable when working on remote machines.
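Something like this shows the failure mode (hypothetical paths, just for illustration):

    mkdir -p /srv/wbe/logs   # typo for /srv/web/logs: -p silently creates the wrong tree
    mkdir /srv/wbe/logs      # without -p this fails, because the parent /srv/wbe doesn't exist

Without -p, the typo is caught immediately; with it, you only find out later that your files are in the wrong place.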
File and directory names can contain spaces and require special treatment. This should be addressed up front, before introducing commands with side effects.
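A minimal illustration (hypothetical file name):

    touch "my notes.txt"
    rm my notes.txt      # wrong: the shell sees two arguments, "my" and "notes.txt"
    rm "my notes.txt"    # right: quoting keeps the name as a single argument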
How about including some examples of globbing all files of a given type, where the filenames contain spaces?
e.g. 1) Find all file names in a directory that contain spaces and replace the spaces with underscores.
2) Extract mp3 audio from a directory full of flv files using ffmpeg, where the flv files can have arbitrary names and the mp3s are saved with no spaces in their names (see the sketch below).
That kind of activity would be interesting to younger proto-hackers.
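Something along these lines would cover both (an untested bash sketch; ${var// /_} is a bashism, and the exact ffmpeg audio options may vary by build):

    # 1) rename files in the current directory, replacing spaces with underscores
    for f in *\ *; do
        [ -e "$f" ] || continue          # skip if the glob matched nothing
        mv -- "$f" "${f// /_}"
    done

    # 2) extract the audio of every flv file as mp3, with no spaces in the output name
    for f in *.flv; do
        [ -e "$f" ] || continue
        out="${f%.flv}.mp3"
        ffmpeg -i "$f" -vn -acodec libmp3lame "${out// /_}"
    done

The quoting around "$f" is what makes the arbitrary file names safe; the output name is de-spaced only at the last step.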
I'm a teacher. A group of teenagers in a computer room (which is not your target audience) would not actually read your tutorial in an A-to-B fashion; when set a task, they would raid your work for the bits that help them achieve it. I'd then follow up with a multiple-choice quiz in Moodle or something similar to structure recall, like your flashcards.
Not necessarily. It only takes one bureaucrat to reject code that doesn't pass tests or follow required conventions. For example, a good rule of thumb is that no new code will be accepted without documentation.
Well, then why not just use 1e100.google.com for this purpose? There's a reason it's called a domain, and it seems kind of silly to create and maintain unrelated hierarchies.
Because being under google.com would mean the javascript security model allows it to be the "same domain" as google.com, which has cross-site scripting implications: there are applications for which google serves user-supplied javascript, and if one of those was accessible under the google.com domain, it would allow an attack.
Are you willing to describe the threat more? (I am legitimately curious, run a bunch of websites, use CDNs, may at some point have similar constraints involving also needing to host user content, and both respect and acknowledge the value of getting handed down understanding and explanations from people who have been doing things longer. ;P)
"a.google.com" and "b.google.com" are not "same origin", so cross-site scripting should fail. You can, however, have the two domains opt in to communicating with each other by having them both set their document.domain to "google.com"; does Google normally set document.domain on their pages, thereby allowing injected iframes to take advantage of this?
(I had thought the most common reason for having separate top-level domain names was the performance and security implications involving cookies, which are sometimes scoped at the level of a domain name rather than a subdomain in order to allow sharing between related properties, such as plus.google.com and www.google.com.)
I am not directly experienced with the threat involved. I know it is crossdomain-related; if you tell me it's cookies rather than JS, I'll believe you.
I have no idea whether Google normally sets document.domain, but I could certainly imagine it doing so; I feel like the "google.com" domain is one that any page under google.com is likely to believe it can trust, whether or not that trust is expressed programmatically. Certainly serving untrusted js anywhere under the google.com umbrella is likely to violate _someone_'s assumptions somewhere. I do not actually know it to be exploitable.
Why, then, did we get plus.google.com and not google+.com? (As an aside: I find those (google.com) suffixes on HN that turn out to be links to plus.google.com confusing. For google.com URLs, I expect either search results or pages that represent Google's position.)
Now I understand the reason for the existence of those annoying special-purpose CDN domains that I'm always forced to allow in RequestPolicy. Thanks for the explanation!
Another reason: because these CDN domains aren't the domain people navigate to, requests to them don't carry cookies from the domain that includes them. Cookies bloat every request to the domain they are set for; sending them only once per page, rather than with every asset, is faster.
As I recall, for Google Video Search, we used domains like "1.vgc1.com" "2.vgc2.com" etc for cookieless hosting. A short domain name (as opposed to 'cookieless.googleserving.googlevideo.com' or some such) saves bytes in the HTML, and cookieless domains save bytes in requests, as well as providing better cache hit rates and such. Multiplexing domains lets the browser initiate several simultaneous requests for scripts, images, css, etc. (I think this is less of a problem these days, though.)
Some of these problems are addressed by modern browsers and other techniques, but getting good performance out of the median web browser remains a big challenge.
No problem. To be clear, I don't work for Google; I quit earlier this year. As for how that relates to my degree of helpfulness, take that any way you like. ;-)
FWIW I would have responded the same way (the cited page at google.com calls out cross-site scripting attacks specifically), but you beat me to it. :-)
Our project focuses on advanced and experienced computer users. In contrast with the usual proprietary software world or many mainstream open source projects that focus more on average and non-technical end users, we think that experienced users are mostly ignored. - from Suck Less Philosophy at http://suckless.org/manifest/.
I don't feel this translates to "no noobs" as much as it means "don't stop with the noobs." I'm pretty sure inexperienced users are welcome, but there's no interest in adding hand-holding features that conflict with the philosophy of "keeping things simple, minimal and usable." I use dwm & dmenu all the time, and have developed a real appreciation for the Suck Less approach (but I'm not a newbie, so I guess I'm safely in their target audience).
dwm in particular mentions that it's elitist in order to prevent stupid questions from novices:
Because dwm is customized through editing its source code, it’s pointless to make binary packages of it. This keeps its userbase small and elitist. No novices asking stupid questions. There are some distributions that provide binary packages though.
Also, anyone who has ever subscribed to the suckless mailing list can testify to the elitist sentiment among many of its users.