
One of JSON's selling points is that it's human-readable. I would struggle to read any JSON document with a nesting level greater than 6 or so. Why would I ever want a nesting level > 100?


Human-readable as in "not binary". Other than that, it doesn't offer a lot that helps in that regard.


This doesn't really make sense to me. Just because JSON is human-readable doesn't mean that humans are meant to read any possible serialized data nor that it should always be an ergonomic read. Not everything you serialize is just a config file or hand-edited data. It may be arbitrarily dense with arbitrary data.

To me, this is kinda like seeing a JSON array of user IDs coming over the network and going "Huh? Why not usernames? I thought JSON was supposed to be human-readable!"


I see your point, although if you look at the specs for a lot of easy-to-parse formats for computers, a stated design goal is also easy-for-humans (e.g. Markdown, YAML).

Large, complex object hierarchies with lots of nesting might make more sense represented in binary (e.g. Avro).

I realize I'm making a little bit of a McLuhan-esque argument in a largely computer science-oriented context, but I hope you can see what I'm getting at.


Another example: source code is human readable (by definition), but that doesn't mean I can read any possible program. I struggle to read many programs with deep nesting, but that doesn't mean I'd find it acceptable for my compiler to crash in such situations.
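
To make that concrete, here's a tiny hypothetical Python snippet (not from the thread, just an illustration) of a mainstream JSON parser giving up on pathologically deep nesting:

    import json

    # Perfectly valid JSON, just nested far deeper than anyone would write by hand.
    doc = "[" * 100_000 + "]" * 100_000

    try:
        json.loads(doc)
    except RecursionError as exc:
        # CPython's json decoder works recursively and hits the interpreter's recursion limit.
        print("parser gave up:", exc)

Whether a parser should fail with a clean, catchable error or blow the stack on input like this is exactly the robustness question behind nesting limits.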


Maybe because you use a JSON library all the time anyway, so using it even when it's not ideal but good enough is the path of least resistance. Or you're a cargo cultist.


When dealing with interconnected systems, it's a pretty decent format. That it is human-readable also allows for relatively easy interaction with the values without building one-off tooling.

When I'm dealing with non-SQL data, one thing I've also done is fire the record into a service that compresses it to .gz, and a worker sends it to S3_PATH/data-type/recordid.json.gz as a recovery option. It's not the first or best recovery option, but when your core data is only a few hundred thousand records, it's potentially faster than a full DB restore. Not to mention, the services can be written in one language and the recovery script in a scripting language.
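
A rough sketch of that per-record backup in Python (the helper name and boto3 usage are illustrative assumptions, not the actual service):

    import gzip
    import json

    import boto3  # assumes AWS credentials are already configured

    s3 = boto3.client("s3")

    def archive_record(bucket: str, data_type: str, record_id: str, record: dict) -> None:
        """Compress one record and store it under <data_type>/<record_id>.json.gz."""
        body = gzip.compress(json.dumps(record).encode("utf-8"))
        s3.put_object(
            Bucket=bucket,
            Key=f"{data_type}/{record_id}.json.gz",
            Body=body,
            ContentType="application/json",
            ContentEncoding="gzip",
        )

Restoring one record is then just a GetObject plus gzip.decompress, which is where the speed advantage over a full DB restore comes from.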

It just depends on how you are doing things. Now, for configurations, I tend to prefer TOML or YAML.


Add "System Integrity Protection" to the list of reasons why my next laptop won't be a Mac. Although based on a free operating system, Mac OS is gradually taking away users' control over their own devices. Either the user controls the software, or the software controls the user.


Just as with SELinux or AppArmor, you can ignore it if you think your normal practices keep you safe. That's probably mistaken but it's fully under your control:

https://developer.apple.com/library/content/documentation/Se...
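
For reference, and from memory rather than from the linked page, switching it off is a two-step affair: boot into Recovery (hold Cmd-R at startup), open Terminal, and run

    csrutil disable    # "csrutil enable" turns it back on

then reboot. So it really is opt-out rather than something imposed on you.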


I don't mind those features in and of themselves, and I see their value; it's Apple's paternalistic attitude that bothers me. I've used Macs for my entire life and always felt I still had a semblance of control over the hardware and software that I bought, but that feeling of control is going away.


How is a feature you can turn off paternalistic? I’d think that argument is much stronger about iOS.


Yes, iOS is a better example. I'm not a big fan of closed platforms in general. I like to be able to decide which software I run on my device, so my most recent laptop and phone purchases have been GNU/Linux devices.

I understand and appreciate the security advantage that comes with protecting users from themselves, but at least SELinux and related software still give users the rope needed to hang themselves if they're into that sort of thing.


> but at least SELinux and related software still give users the rope needed to hang themselves if they're into that sort of thing.

How’s that not possible on macOS too?


Just one anecdotal data point, but I haven't had to think about SIP, and I updated my OS soon after it was released (I did wait a few months for any issues to surface). The only near-concern I remember was the directory Homebrew used, which either was a non-issue or was addressed over the course of updates; I didn't do anything. Given the kind of work I do, I feel like I would have hit corner cases caused by SIP or other macOS "protections" by now.

Out of all the protections added over the years, I only really regularly encounter Gatekeeper. I don't generally mind because I know what it is. I don't feel like I'm limited by these things; they feel more like running as a non-root user and having to run "sudo" when needed. It makes sure anything silly I might do is done intentionally.

I've had way more issues with SELinux than anything Apple has added to their OS.


Very interesting - thanks for sharing. Is there a corresponding brief by the plaintiff? I'd like to see both sides of the story.


Ooh, here come the Scientologists!

I'm as skeptical of Big Pharma as the next guy, but to say with such grandiose, broad-stroke generalizing that psychiatry (or, perhaps you mean instead/also neurological pharmacology) is not science is simply untrue.

Anyone with Google at their fingertips can find a dozen peer-reviewed articles about serotonin's link to mood and behavior.

EDIT: accidentally a word or two.


> Ooh, here come the Scientologists!

So, because some critics of psychiatry are Scientologists, therefore all critics of psychiatry are Scientologists? I can only recommend a crash course in logic.

> to say with such grandiose broad-stroked generalizing that psychiatry (or, perhaps you mean instead/also neurological pharmacology) is not science is simply untrue.

The burden is not on critics to prove that psychiatry isn't a science; the burden is on psychiatry to prove that it is (for the history-illiterate, it has never been a science and many of its staunchest advocates freely admit this, including Freud). The recent NIMH ruling, to which I alluded in my post above, suggests that the granting agencies aren't going to wait for that burden of evidence to be met -- psychiatry is not a science, is not an evidence-based practice, and can't masquerade as such without evidence.

> Anyone with Google at their fingertips can find a dozen peer-reviewed articles about serotonin's link to mood and behavior.

Yes, and there's a name for that: confirmation bias. Have you bothered to ask yourself how the FDA could come out and say that SSRIs don't actually work, and how that meta-analysis could coexist with all those other studies that claim otherwise? And how could this most recent study, which shows no correlation between serotonin and depression, survive the critical eyes of editors and reviewers to find its way into print?

The FDA meta-analysis, which for the first time included studies that the drug companies funded but then chose not to publish, and which showed no clinically significant effect from SSRIs, is not by itself conclusive, but the silence that followed it certainly is. There are too many interested parties involved for that study to go unchallenged ... if a challenge were possible.

This most recent study simply shows why SSRIs don't work -- because serotonin and depression aren't correlated, therefore SSRIs cannot possibly work, in principle.


They can also find a dozen peer-reviewed articles thoroughly skewering the misuse of statistics, intentional massaging and selective cherry-picking of results, weak significance testing, and general lack of replicability that are all incredibly rife in biomedical research, and particularly in psychiatric research.

It's hard to know what to trust in that area at all anymore.


>Anyone with Google at their fingertips can find a dozen peer-reviewed articles about serotonin's link to mood and behavior.

And a ton of "peer reviewed" articles are crap too.

Peer review ain't what it used to be.


Still, it's what demarcates the boundary between "blog about your idea, show it to some smart folk" and "proper science". And we like clean demarcation lines; they allow us to categorize without actually investigating (which is infeasible if you want to get broad knowledge).


I'd say avoid "peer reviewed" papers like the plague if you want to find out what's actually "worth investigating" and "get a broad knowledge".

Instead, wait for 2-5 years to see what still floats from all the crap that has been published.

Better to read slightly-behind-the-times but solid university guidebooks and published books that stood out than to read the hot but crappy stream of published research.


Is waiting 5 years always an option? Is 5 years always enough to distinguish good from bad science? Then, you still use the "peer reviewed" filtering, just add "test of time" to the pipeline. Which is fine, as long as the topic is of a level of importance to you at which being 5 years behind the trend is OK.

Unless you really mean that being peer reviewed is bad. I don't want to delve into that option...


>Is waiting 5 years always an option?

If your goal, as stated above, is to get a "broad knowledge", then yes.

If you want to know recent research trends, or are doing research yourself, then no, go read current papers.

>Is 5 years always enough to distinguish good from bad science?

No, sometimes you have to wait even more. I just gave it as a delay period to counter the "read the peer reviewed papers" notion.

>Then, you still use the "peer reviewed" filtering, just add "test of time" to the pipeline.

No, I'm saying "forget the peer reviewed" in themselves, go for items that not only have stood the "test of time", but have also become succesful and well regarded books and/or university guides in their domain.

In essence, I'm saying that a journal's tiny "peer review" team is BS; the majority of the scientific community agreeing on matured material is better.

>Which is fine, as long as the topic is of level of importance to you at which being 5 years behind the trend is ok.

It's not a matter of "importance to you", it's what you want to use it for.

A subject could be extremely important to you as a study subject, and you could still avoid losing time with the current, unfiltered papers as they come in.

It's only when you want to take advantage of recent research (e.g. because you are a researcher yourself, or an implementer who needs a new solution, etc.) that you have to have the latest research -- which I think is different than "importance". Let's call it "business importance" if you wish...


> Is waiting 5 years always an option? Is 5 years always enough to distinguish good from bad science?

In the case of Relativity theory, it took 55 years for full validation of all its aspects. In principle, to assure solid science, one might say, "as long as it takes."


Do you have any evidence that Paul is actually a Scientologist, beyond his sharing - in broad strokes - one of their opinions?


Where's the privacy policy?


While I see your point, I don't think it's fair to chalk the opposition up merely to piracy supporters. There are a large number of people, and a lot of organizations, who fear the broader ramifications and are against SOPA without necessarily being pirates.


Reading the description of the bill below, it does seem awful. But like I said, up to this point I hadn't even read it, and the reason for that is there's so much noise coming from the pro-torrenting lobby that I just tune out the whole debate these days.

(I should note that I don't live in the USA, otherwise I probably would have read about the bill itself by now.)


It's not fair, you're right. However, I'm seeing a lot of pro-piracy people out there, and a lot of the top stories on this have been either coming from or somehow supporting the major piracy, excuse me, "peer to peer file exchange and sharing" sites.

I doubt anyone here will say SOPA has any redeeming qualities whatsoever, but the comments and stories sometimes stray into the area we're talking about, and we're only speaking in the context of those sites.


That's not necessarily true. If you look at the corporate or organizational opposition to SOPA, you'll find eff.org, mozilla.org, etc. These folk aren't necessarily in favor of piracy. More realistically, they're in favor of free speech or free software.


ITT: pedantry over the words "clean, crisp, and cheerful."


This looks neat, but being a DIY kind of dude, I'm not sure what this is for. I'm probably missing something--I think I know "what" it is, just not "why". Can anyone give me a brief rundown?


All of us involved in the project are pretty much "DIY" kind-of people :)

What we hope people do is read through the source to see why we made the choices for the defaults, pick and choose what they like, and keep using them.

There are lots of defaults that were eye-opening to me that were suggested by the scores of contributors to the project.


Interesting. So if I understand correctly, the goal is largely to promote guidelines that improve compatibility and performance?


That is a goal, yes, but these are the defaults that work for us, and we hope they work for everyone else (and we do look for suggestions to improve them). But if they don't work for you, we actively advocate choosing defaults that do.

The reasoning for each of the choices we made is all within the comments of the source file, so it is easier to choose what to keep and what to change.


Great! Thanks for the info.


I'll bite.

It's a fantastic idea. What's missing from it is that the infrastructure is, and probably always will be, under the domain (pun intended) of corporations.

For example, how do we provide "internet access" with these servers? We don't own the fiber; if it gets shut down, we're dead in the water. ISPs run the networks and therefore control the content and charge for it as much as they want.

An alternative is state-controlled ISPs--but we all can guess how fun that would be.

I'm trying to think of yet other alternatives, but I'm drawing a blank.


I'm not entirely sure it's possible. We've seen examples of other governments taking down sites because of the way internet protocols work, specifically BGP [1]. I believe some number of the DNS root servers are directly or nearly-directly controlled by the government. Service providers have legal obligations to allow access into their facilities. We're not as bad off as others - some governments do run the ISPs.

I don't see the difference between having encrypted data on a mini-server at your house vs. housed at a provider. If the data was stored centrally and there were few options, it would make the legal process of getting the data easier. With the existing options for hosting data in many different countries, this doesn't seem to be a problem. Even then, you could probably stripe this data across centrally stored hosting solutions and still have a more efficient and secure process than hosting off of a 'Freedom Box'.

Options seem to be: 1. Distributed storage. The storage is striped across these boxes all over, and no single box has any data (see the toy sketch after option 3). Nobody can subpoena the data because of the process involved in getting the information from so many people at once. I think distributed storage would be very, very difficult given the demands of data redundancy, latency, and maintaining security with such high availability and access.

2. Each box is self-operating, but managed centrally. Data storage is contained to a single box (or a few) to simplify data access and speed. This still allows centralized access to the data, and fewer people would be involved in collecting it. Higher levels of security could be maintained, but the data would be legally easier to access.

3. Self-managed secure boxes that have a 'cloud' or 'bot' organization of peer-to-peer relationships. Again, these types of systems work today, but there are still centralized servers, and most of the workload is still carried by large servers/organizations.
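
As a toy illustration of option 1 (nothing more than a sketch; the function and numbers are made up, and a real system would add encryption and redundancy such as erasure coding):

    def stripe(blob: bytes, n_boxes: int, chunk_size: int = 4096) -> dict:
        """Split a blob into chunks and deal them round-robin across boxes,
        so no single box ever holds the whole thing."""
        chunks = [blob[i:i + chunk_size] for i in range(0, len(blob), chunk_size)]
        placement = {box: [] for box in range(n_boxes)}
        for idx, chunk in enumerate(chunks):
            placement[idx % n_boxes].append((idx, chunk))
        return placement

Even this trivial version hints at the hard parts: reassembling anything requires every box (or parity data) to be reachable, which is where the redundancy, latency, and availability costs come from.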

It seems easier to simply make Tor more secure, though whether that's even possible is a different debate. The article reads as if a lawyer with some tech experience thinks he has created a magic Internet v2.0 because he's found a way around the legal ramifications of privacy without regard to the technical ramifications.

1. http://www.networkworld.com/news/2008/022608-youtube-outage-...

