
But all those folders are different, so a single one would be annoying (or: require two layers.)

.config can be posted online, and shared with others (like the many "dotfile" repos you'll see on GitHub)

.local needs to be backed up, and may have private data.

.cache can be blown away (or tmpfs.)

.run MUST be blown away on restart.

This is simple, sane, and works well.


Instead of putting a dotfile or dotdir in the user's home directory, do follow the XDG Base Directory specification: http://standards.freedesktop.org/basedir-spec/basedir-spec-l... .

It's easy to understand and requires only a marginal increase in effort/code.


> I see no benefit.

The first benefit is that it removes clutter from your $HOME.

The second benefit is that you can now manage and backup your settings in a sane way.

* ~/.config contains config files (should not be lost, but if lost you can recreate them);

* ~/.local contains user data files (save them often, never lose them for they are not replaceable);

* ~/.cache contains cached information (can be tmpfs mounted, you can delete it any time you want, no loss in functionality, you just lose some optimization);

* ~/.run is for temporary system files (must be tmpfs mounted, or cleaned at shutdown or power on).

Luckily, most of the apps used on Linux systems now use it; if you haven't seen it, you are probably using Mac OS X.
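
A minimal sketch of the lookup logic, assuming a hypothetical app named "myapp" (per the spec: honor $XDG_CONFIG_HOME when it is set and non-empty, else fall back to ~/.config):

    import os
    from pathlib import Path

    # Use $XDG_CONFIG_HOME when set and non-empty, otherwise ~/.config.
    config_home = Path(os.environ.get("XDG_CONFIG_HOME") or Path.home() / ".config")
    config_file = config_home / "myapp" / "settings.conf"

The same fallback pattern covers $XDG_DATA_HOME (default ~/.local/share) and $XDG_CACHE_HOME (default ~/.cache).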


I use ".test" to respect RFC-6761 [1] which reserves four TLDs: .example, .invalid, .localhost, and .test; as many of us already know ".dev" is owned by Google [2] and both ".local" [3] and ".app" [4] are reserved for the root zone so it doesn't makes any sense to use any of them for local development. So I use ".test" for my personal projects and ".[company]" for projects related with my job.

[1] https://tools.ietf.org/html/rfc6761

[2] https://icannwiki.com/.dev

[3] https://tools.ietf.org/html/rfc6762

[4] https://icannwiki.com/.app


HN is an astonishing thing!

Article: "We can also refute Bernstein’s argument from first principles: the kind of people who can effectively hand-optimize code are expensive and not incredibly plentiful."

Commenter: "IMO he couldn't give a convincing answer to the guy who asked about LuaJIT author being out of a job."

Guy in audience: "I was that guy in the audience."

LuaJIT author: "Actually, LuaJIT 1.x is just that"

Voice in my head: "Aspen 20, I show you at one thousand eight hundred and forty-two knots, across the ground."

Meta: Apologies for the abstract response, but I couldn't figure out a better way to present the parallel. It can be hard to explain artistic allusions without ruining them. What I mean to say is that this pattern of responses reminded me in a delightful way of the classic story of the SR-71 ground speed check: http://www.econrates.com/reality/schul.html


> If you have tens of thousands of variables, you definitely don't want to risk dealing with the simplex method's worst-case behavior.

The risk is extremely low. In fact, the risk is so low that, for decades, it was an open question why the simplex algorithm was so good in practice on a wide variety of workloads despite having poor worst-case complexity (unlike, for example, pure quicksort, whose worst case is often problematic in practice).

Spielman and Teng solved that problem in 2001 with their smoothed analysis, showing that even very small random perturbations of a contrived worst-case input have good expected performance: http://arxiv.org/pdf/cs/0111050.pdf
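
To poke at the typical case yourself, here is a tiny LP as a sketch (assuming scipy; its default "highs" backend includes a dual simplex implementation):

    from scipy.optimize import linprog

    # minimize -x - 2y  subject to  x + y <= 4,  x <= 3,  x, y >= 0
    # (linprog minimizes, so maximizing x + 2y means negating the objective)
    res = linprog(c=[-1, -2], A_ub=[[1, 1], [1, 0]], b_ub=[4, 3],
                  bounds=[(0, None), (0, None)])
    print(res.x)  # [0. 4.] -- the optimal vertex, reached after a few pivots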


Because of the NCP protocol which preceded TCP. It used even port numbers for outgoing data and odd port numbers for incoming data. TCP used the same port numbers, but fully duplexed the data into a single port. The well-known incoming port numbers are therefore all odd (initially, the limitation no longer applies to TCP of course). https://en.wikipedia.org/wiki/Network_Control_Program

In cryptography, we have a concept of "misuse resistance". Misuse-resistant cryptography is designed to make implementation failures harder, in recognition of the fact that almost all cryptographic attacks, even the most sophisticated of them, are caused by implementation flaws and not fundamental breaks in crypto primitives. A good example of misuse-resistant cryptography is NMR, nonce-misuse resistance, such as SIV or AEZ. Misuse-resistant crypto is superior to crypto that isn't. For instance, a measure of misuse-resistance is a large part of why cryptographers generally prefer Curve25519 over NIST P-256.

So, as someone who does some work in crypto engineering, arguments about JWT being problematic only if implementations are "bungled" or developers are "incompetent" are sort of an obvious "tell" that the people behind those arguments aren't really crypto people. In crypto, this debate is over.

I know a lot of crypto people who do not like JWT. I don't know one who does. Here are some general JWT concerns:

* It's kitchen-sink complicated and designed without a single clear use case. The track record of cryptosystems with this property is very poor. Resilient cryptosystems tend to be simple and optimized for a specific use case.

* It's designed by a committee and, as far as anyone I know can tell, that committee doesn't include any serious cryptographers. I joked about this on Twitter after the last JWT disaster, saying that JWT's support for static-ephemeral P-curve ECDH was the cryptographic engineering equivalent of a "kick me" sign on the standard. You could look at JWT, see that it supported both RSA and P-curve ECDH, and immediately conclude that crypto experts hadn't had a guiding hand in the standard.

* Flaws in crypto protocols aren't exclusive to, but tend to occur mostly in, the joinery of the protocol. So crypto protocol designers are moving away from algorithm and "cipher suite" negotiation towards other mechanisms. Trevor Perrin's Noise framework is a great example: rather than negotiating, it defines a family of protocols and applications can adopt one or the other without committing themselves to supporting different ones dynamically. Not only does JWT do a form of negotiation, but it actually allows implementations to negotiate NO cryptography. That's a disqualifying own-goal. (A concrete sketch of that failure mode follows this list.)

* JWT's defaults are incoherent. For instance: non-replayability, one of the most basic questions to answer about a cryptographic token, is optional. Someone downthread made a weird comparison between JWT and Nacl (weird because Nacl is a library of primitives, not a protocol) based on forward-security. But for a token, replayability is a much more urgent concern.

* The protocol mixes metadata and application data in two different bag-of-attributes structures and generally does its best to maximize all the concerns you'd have doing cryptography with a format as malleable as JSON. Seemingly the only reason it does that is because it's "layered" on JOSE, leaving the impression that making a pretty lego diagram is more important to its designers than coming up with a simple, secure standard.

* It's 2017 and the standard still includes X.509, via JWK, which also includes indirected key lookups.

* The standard supports, and some implementations even default to, compressed plaintext. It feels like 2012 never happened for this project.
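
To make the negotiate-NO-cryptography point concrete, here is a sketch of the well-known "alg": "none" forgery (hypothetical claims; verifiers that trust the header's algorithm field have historically accepted tokens built exactly like this):

    import base64
    import json

    def b64url(data: bytes) -> bytes:
        return base64.urlsafe_b64encode(data).rstrip(b"=")

    # The header announces that no algorithm protects the token.
    header = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({"sub": "admin"}).encode())
    forged = header + b"." + payload + b"."  # note the empty signature segment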

For almost every use I've seen in the real world, JWT is drastic overkill; often it's just a gussied-up means of expressing a trivial bearer token, the kind that could be expressed securely with virtually no risk of implementation flaws simply by hexifying 20 bytes of urandom. For the rare instances that actually benefit from public key cryptography, JWT makes a hard task even harder. I don't believe anyone is ever better off using JWT. Avoid it.
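
For scale, the hexified-urandom alternative in its entirety (a sketch; storing and checking the token server-side is left out):

    import os

    token = os.urandom(20).hex()  # 40 hex characters of pure randomness
    # Hand it to the client, keep (a hash of) it server-side.
    # Nothing to negotiate, no algorithms to confuse.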


I have routinely kept my history "best effort" for many many years now by exporting HISTSIZE=10000 and HISTFILESIZE=1000000, and then occasionally doing a cp -a ~/.bash_history{,-$(date +%s)}. I actually use this history archive, and wish it was higher fidelity. (In practice, I am thinking I should just run all my sessions through script... ;P.)

http://man7.org/linux/man-pages/man1/script.1.html

I have thereby had it on my todo list for a month or so, ever since I learned of the bash DEBUG trap, to instead store my history into an sqlite3 database and have it separated by host. Seeing this reminded me "oh yeah, I really should do that" (as no: there's no way in hell I'm going to just send all of the commands I type to some random guy with a website ;P).

This is "the simplest thing that could possibly work" and might very well break if you set some crazy history configuration variables I don't use. Note that it works for pipes specifically and only because it can overwrite the same entry to the database multiple times using "insert or replace" (prevention of which is what normally makes these scripts so complex).

    # Path to the history database.
    histsql=~/.bash_sqlite3

    sqlite3 "${histsql}" '
        create table if not exists "session" (
            "id" integer not null primary key autoincrement,
            "address" text not null,
            "process" integer not null,
            "tty" text not null,
            "user" text not null,
            "start" timestamp not null default current_timestamp,
            "end" timestamp null
        );

        create table if not exists "command" (
            "session" integer not null,
            "line" integer not null,
            "time" timestamp not null default current_timestamp,
            "pwd" text not null,
            "text" text not null,
            primary key ("session", "line")
        );
    '

    # First MAC address from ifconfig output, used to identify this machine.
    histmac=$(ifconfig | sed -e 's/  *$//; /\(ether\|HWaddr\) / { s/.* //; q; }; d;')
    histtty=$(tty)

    # Register this session and remember its rowid; single quotes in the
    # interpolated values are doubled ('') for SQL escaping.
    histssn=$(sqlite3 "${histsql}" "
        insert into \"session\" (
            \"address\", \"process\", \"tty\", \"user\"
        ) values (
            '${histmac//\'/''}', '${$//\'/''}',
            '${histtty//\'/''}', '${USER//\'/''}'
        );

        select last_insert_rowid();
    ")

    # Stamp the session's end time when the shell exits.
    function histend {
        sqlite3 "${histsql}" "
            update \"session\" set
                \"end\" = current_timestamp
            where
                \"id\" = '${histssn//\'/''}';
        "
    }

    trap histend EXIT

    # Record the most recent history entry; the DEBUG trap fires before each command.
    function histadd {
        local data="$(HISTTIMEFORMAT= history 1)"
        if [[ -z $data ]]; then return; fi

        data="${data#"${data%%[![:space:]]*}"}"
        local line="${data%%' '*}"

        data="${data#*' '}"
        data="${data#"${data%%[![:space:]]*}"}"

        sqlite3 "${histsql}" "
            insert or replace into \"command\" (
                \"session\", \"line\",
                \"pwd\", \"text\"
            ) values (
                '${histssn//\'/''}', '${line//\'/''}',
                '${PWD//\'/''}', '${data//\'/''}'
            );
        "
    }

    trap histadd DEBUG

My books on LeanPub: https://leanpub.com/u/raganwald

I had been approached many times by tech publishers to write books from scratch, but I never found the process and economics attractive.

Then I spotted LeanPub--quite possibly on HN--and I thought, "Hmmm, there is nearly zero barrier to entry." They accepted Markdown, and I already had my entire blog in Markdown on GitHub.

So I published a collection of essays (https://leanpub.com/shippingsoftware), and launched it with a "Show HN" style post, making the book free. LeanPub allows people to pay more if they want, and an amazing number of people paid more; I made $2,000 in one day just from being on the front page of HN.

I've written a few other books since then, but I firmly believe that writing is a side project, not my profession, so I make a lot less than other authors who really put their backs into it.

But I'm happy, and my readers are happy, and I get a little money every month from people I like.

---

A few observations that may be valuable for other side-projects, whether media or software:

Some of my biggest wins were the books I didn't write. LeanPub has a feature where you can cobble together a book title and a cover image, and it creates a landing page that collects emails from interested readers.

Several times, I have created landing pages for books I was interested in writing and when I promoted the landing pages... Crickets. I didn't write those books.

Another thing that worked for me: when I decided to write the original JavaScript Allongé (https://leanpub.com/javascriptallongesix), it was going to be my first long-form book written from scratch. My previous books were collections of essays with additional linking and filler material.

I resolved to write a trial book first, so I wrote CoffeeScript Ristretto. I had a rough idea that the market for a CoffeeScript book was about 10% of the market for a JavaScript book, so I didn't have big expectations for revenue, but I figured I could gain experience writing the book and a lot of valuable feedback.

The secret was, when I then published JavaScript Allongé, it was basically the same book, just in JavaScript. The big differences had to do with the differences between CoffeeScript and JavaScript, obviously, but most of the chapters were identical.

I found this process was a really big win: when JavaScript Allongé first hit, it was already refined by all the feedback I got from CoffeeScript Ristretto. If I were doing a software side project, I might take the same approach: start in a smaller market where I can refine the software and business, then repurpose into the target market.

JM2C, YMMCV, &c.


Being unpopular in school often means being bullied or ostracized, so we should be especially wary of victim blaming, which I sadly see in many comments here.

I was bullied for a while in school, though it wasn't in the US, and by the end of school I was quite well-liked. The advice I'd give to my younger self would be a) to get serious about some explosive sport like boxing or sprinting, and b) to get fast and on point with words. I could've easily spared a few of my endless videogame hours for that; I just didn't know it would help at the time.


Having spent quite a bit of time designing massively parallel algorithms (concurrency starting at several thousand cores on up), I find computer scientists are often baffled when I tell them that FP and immutability don't help. In practice, they solve the wrong problem and create a new one, because massively parallel systems are almost always bandwidth bound (either memory or network) and making copies of everything just aggravates that. You can't build hardware buses to compensate, or companies like Cray would have long ago.

If you look at effective highly scalable parallel codes of this nature, you see two big themes: pervasive latency hiding and designing the topology of the software to match the topology of the hardware. The requisite code to implement this is boring single-threaded C/C++ which is completely mutable but no one cares because "single-threaded". Immutability burns much needed bandwidth for no benefit. This among other reasons is why C and C++ are ubiquitous in HPC.
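
For a flavor of what pervasive latency hiding looks like in code, here is a generic halo-exchange sketch (assuming mpi4py and numpy; illustrative only, not any particular production code):

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    right, left = (rank + 1) % size, (rank - 1) % size

    send_buf = np.full(1024, rank, dtype=np.float64)
    recv_buf = np.empty(1024, dtype=np.float64)

    # Post the communication first...
    reqs = [comm.Isend(send_buf, dest=right),
            comm.Irecv(recv_buf, source=left)]
    # ...then do interior work that doesn't need the neighbor's data
    # while the bytes are in flight.
    interior = np.sin(np.arange(1 << 18)).sum()
    MPI.Request.Waitall(reqs)  # block only once the overlap is exhausted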

The challenge is that CS programs don't teach pervasive software latency hiding nor is much ink spilled on how you design algorithms to match the topology of the hardware, both of which are fairly deep and obscure theory.

We don't need new languages, we need more software engineers who understand the nature of massive parallelism on real hardware, which up until now is largely tribal knowledge among specialists that design such codes. (One of the single most important experiences I had as a software engineer was working on several different supercomputing architectures with a different cast of characters; there are important ideas in that guild about scaling code that are completely missing from mainstream computer science.)


> If computers are good at anything, they are good at parsing code and analyzing it. So I set out to make this work, and prettier was born. I didn't want to start from scratch, so it's a fork of recast's printer with the internals rewritten to use Wadler's algorithm from "A prettier printer".

Bob Nystrom (munificent) disagrees[0] after writing one himself:

> The search space we have to cover is exponentially large, and even ranking different solutions is a subtle problem.

[0]: http://journal.stuffwithstuff.com/2015/09/08/the-hardest-pro...
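
The kernel of Wadler's algorithm is small enough to sketch. A toy version (not prettier's or Bob's code) of the group-based fit-or-break decision:

    from dataclasses import dataclass

    @dataclass
    class Text: s: str          # literal text
    @dataclass
    class Line: pass            # newline, or a single space when flattened
    @dataclass
    class Concat: parts: list   # sequence of documents
    @dataclass
    class Group: doc: object    # flatten if it fits on the line, else break

    def flat_width(doc):
        if isinstance(doc, Text): return len(doc.s)
        if isinstance(doc, Line): return 1
        if isinstance(doc, Concat): return sum(flat_width(p) for p in doc.parts)
        return flat_width(doc.doc)  # Group

    def render(doc, width, col=0, flat=False):
        if isinstance(doc, Text): return doc.s, col + len(doc.s)
        if isinstance(doc, Line): return (" ", col + 1) if flat else ("\n", 0)
        if isinstance(doc, Concat):
            out = []
            for part in doc.parts:
                s, col = render(part, width, col, flat)
                out.append(s)
            return "".join(out), col
        # Group: commit to the flat layout only if the whole group fits.
        return render(doc.doc, width, col, flat or col + flat_width(doc) <= width)

    call = Group(Concat([Text("foo("),
                         Group(Concat([Text("a,"), Line(), Text("b")])),
                         Text(")")]))
    print(render(call, width=80)[0])  # fits flat: foo(a, b)
    print(render(call, width=6)[0])   # too narrow, breaks at the Line

The hard parts Bob describes come from what real printers layer on top of this: indentation, several kinds of line breaks, and making the fit checks fast.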


The first time I was paid for a programming job was writing relocatable 6502 assembly to put into a string and call from Atari Basic.

What did it do? Computed the X-modem checksum of a block.

Why? Because when 1200 baud modems first came out, the BBS I was a member of couldn't compute the checksums fast enough (directly in Basic), so 1200 baud transfers weren't much faster than 300 baud. And if you just paid several hundred bucks for the new 1200bps modem, you wanted to get some benefit. :)
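
(For reference, the XMODEM checksum is just the byte sum of each 128-byte block mod 256; a modern sketch, not the original 6502 code:)

    def xmodem_checksum(block: bytes) -> int:
        # XMODEM's original error check: the sum of the block's bytes, mod 256.
        return sum(block) & 0xFF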

How much did I get paid? $20, which was about $0.50/byte. (Man, how I wish I got paid $0.50/byte for code I write now!!)


> Synth whizzes Bob Margouleff and Malcolm Cecil co-produced the album with Stevie and would continue to work with him through Fulfillingness’ First Finale

The involvement of Margouleff and Cecil needs to be further highlighted. They were not only talented musicians, they were also engineers, producers, and pioneers in early synthesizer experimentation. They patched together a massive electronic synthesizer dubbed TONTO that consisted of various components including a Moog, which Stevie Wonder heard on their avant-garde 1971 synth album Zero Time. He literally showed up at their studio with the album under his arm and asked how TONTO worked. The story is told here (1):

So he takes my elbow and I escort him to the studio. We went down to the studio and I showed him the instrument. I put his hands over it and he realized that it wasn’t something that he could easily play. He tried to play it, but he couldn’t get it to sound like a normal keyboard, because in those days you could only get one note at a time. He asked me, “What is wrong with this keyboard?” I told him, “That’s how it works. It only plays one note at a time.” And then he got it. He asked me if we could record. I went upstairs and got my test tape and we put it on the two-inch machine. At this time, the Moog had been moved to Studio B in the basement. We ended up recording the entire weekend. I had to break into the tape store, and I had no authority to do it, but I did it anyway. I told Stevie, “Someone is going to have to pay for this tape at least.” He said, “Oh, don’t worry. I just got money put into my trust fund from Motown because I just turned twenty-one. I don’t have any contracts.” He explained the whole thing. He told Bob and me that he wanted us to be musical directors for his company and to help him get his music out there. He liked working with us, and we liked working with him. We got seventeen songs done that first weekend. And that’s how it all started.

In films from that era, you can see Wonder performing with TONTO in the background, with Cecil or Margouleff patching together components on the fly.

I saw an interview with one of them (see Soundbreaking, below) who says that they recorded something like 250 songs with Wonder, and they picked the best ones to go on the albums. I would love to hear some of the stuff that didn't make it onto vinyl!

PBS recently released an eight-part series on music production called Soundbreaking that includes clips and interviews with Cecil and Margouleff. It was co-produced by the late George Martin, and includes so many stories about the production of pop music from the 1950s to the present, including early multitrack recording with Les Paul, The Beach Boys and The Beatles, the synth era, disco, sampling, rap, the impact of music videos, EDM, laptop-based production, and more. It's amazing. Some short clips are here (2) but I urge readers to seek out the full program!

1. http://www.waxpoetics.com/blog/features/articles/malcolm-cec...

2. http://www.pbs.org/show/soundbreaking/


Not at all.

To start, do three things.

Find a bouldering wall. Get a hold of some climbing shoes. Watch videos 5, 6, 7 of Climbing for Beginners [0].

Don't spend a lot on shoes: you will wreck them really quickly, and you will certainly buy the wrong size when you start.

Boulder a bit and have fun. I enjoy bouldering with a friend the most as it really challenges me. Bouldering walls are often full of really cool and helpful people.

If you like it then watch some more videos to improve technique, think about what you are doing and practice. Consider trying rope climbing (you will want to take a short intro course for this, the rope skills are important).

Then you will start watching super cool videos [1], find yourself at the wall waay too often, and be more interested in finding out somebody's beta than their name.

[0] https://www.youtube.com/watch?v=jbIDnMmSLsc&index=5&list=PL-...

[1] http://www.ukclimbing.com/videos/


https://vimeo.com/56002315 is an hour-long documentary about his performance there, with lots of footage of the show itself.

Some kids grow up on football. I grew up on public speaking (as behavioral therapy for a speech impediment, actually). If you want to get radically better in a hurry:

1) If you ever find yourself buffering on output, rather than making hesitation noises, just pause. People will read that as considered deliberation and intelligence. It's outrageously more effective than the equivalent amount of emm, aww, like, etc. Practice saying nothing. Nothing is often the best possible thing to say. (A great time to say nothing: during applause or laughter.)

2) People remember voice a heck of a lot more than they remember content. Not vocal voice, but your authorial voice, the sort of thing English teachers teach you to detect in written documents. After you have found a voice which works for you and your typical audiences, you can exploit it to the hilt.

I have basically one way to start speeches: with a self-deprecating joke. It almost always gets a laugh out of the crowd, and I can't be nervous when people are laughing with me, so that helps break the ice and warm us into the main topic.

3) Posture hacks: if you're addressing any group of people larger than a dinner table, pick three people in the left, middle, and right of the crowd. Those three people are your new best friends, who have come to hear you talk but for some strange reason are surrounded by great masses of mammals who are uninvolved in the speech. Funny that. Rotate eye contact over your three best friends as you talk, at whatever a natural pace would be for you. (If you don't know what a natural pace is, two sentences or so works for me to a first approximation.)

Everyone in the audience -- both your friends and the uninvolved mammals -- will perceive that you are looking directly at them for enough of the speech to feel flattered but not quite enough to feel creepy.

4) Podiums were invented by some sadist who hates introverts. Don't give him the satisfaction. Speak from a vantage point where the crowd can see your entire body.

5) Hands: pockets, no, pens, no, fidgeting, no. Gestures, yes. If you don't have enough gross motor control to talk and gesture at the same time (no joke, this was once a problem for me) then having them in a neutral position in front of your body works well.

6) Many people have different thoughts on the level of preparation or memorization which is required. In general, having strong control of the narrative structure of your speech without being wedded to the exact ordering of sentences is a good balance for most people. (The fact that you're coming to the conclusion shouldn't surprise you.)

7) If you remember nothing else on microtactical phrasing when you're up there, remember that most people do not naturally include enough transition words when speaking informally, which tends to make speeches lose narrative cohesion. Throw in a few more than you would ordinarily think to do. ("Another example of this...", "This is why...", "Furthermore...", etc etc.)


I can't believe I'm doing this, but I think I'm going to defend Mercator here.

The Mercator projection was indeed originally designed for compass navigation, but the reason it's still used for web mapping is slightly different. The Mercator projection is conformal. What this means is that angles (and hence in some sense shapes) are preserved locally. When we zoom in on a small section of the Mercator projection, we get a reasonably accurate representation of the actual shape of features. This is generally not true for most more fashionable projections, which will stretch and skew things, so they don't always look great when zoomed.
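
Concretely, the spherical forward projection is tiny, and the conformality falls out of the math (a sketch, using a unit sphere):

    import math

    def mercator(lon_deg, lat_deg):
        lam, phi = math.radians(lon_deg), math.radians(lat_deg)
        # x grows linearly with longitude; y stretches toward the poles.
        return lam, math.log(math.tan(math.pi / 4 + phi / 2))

    # The local scale factor is sec(phi) in every direction; that equal
    # stretching in all directions is exactly what preserves angles locally.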

In general with map projections, you have to make a compromise between global properties and local properties. Choosing Mercator means going full-on for local quality, at the expense of the global map being quite distorted. This makes it great for zoomable web maps, because most of the time the global map is just used to find the area you're actually looking for.

Now, you could argue that for a living-room wall, you want something that looks good globally. If it was my wall, I'd agree with you. However, this guy seems to be really interested in local detail. He worries about his four-point fonts becoming blurry, and about having as many small villages marked on the map as possible. If he's interested in that kind of detail, then I think he probably cares far more about local properties than the kind of global properties that would bother me or you.


The fact that you have to build an iOS app on a Mac, that you can't even set up a CI build on commodity hardware, or in the cloud, is pretty fundamentally developer-hostile. I'll start to feel wooed when they take steps toward fixing that.
