
Probably recruitment. It makes a lot more sense when viewed from the perspective of a researcher who's considering whether to join the NSA.

None of us want patents, and the whole system is absurd. But at one time, it was considered prestigious to be named on a patent. And prestige is a powerful force. Part of the burden of working at a secret agency is that your work is secret.

E.g., this employee will be able to take credit for this work after they leave: http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=H...

(Better not try to convert any programs that use loops into a program without loops.)


Wait, so if I take my ugly iterative code and convert it to a pretty recursive one, I'm infringing on an NSA patent? -.-


Worry not, if you were infringing, they would've let you know already ;)


Only in JAVA


Not really. Readers here tend to judge by merit rather than origin.

I was hoping to read this, but the encumbered download process prevented that.


I spent a little time this morning reading through the viXra sections for some fields I'm familiar with, and it's pretty much all cranks (some of whom have dozens of papers). Look at this: http://vixra.org/abs/1512.0336 or this: http://vixra.org/abs/1509.0249 or this: http://vixra.org/abs/1504.0227 or even this: http://vixra.org/abs/1504.0134


Some of these make Otis Eugene "Gene" Ray's Time Cube rants seem cohesive and well thought out.


You can also just scroll down and read the article on the page.


I'd prefer to read in Papers or LiquidText, but Academia wants my contact list first, which is a special kind of scummy.

    academia.edu would like to:
    - Know who you are on Google	
    - View your email address
    - View your basic profile info
    - View your contacts


Thank you. Apparently I'm blind.


Well, it is really non-obvious, to the point that I think they're clearly trying to hide it. The white space expands to push the start of the paper below the fold no matter how large your window is, and the "READ PAPER" text is light grey on white.


JPEG 2000 is unfortunately encumbered by patents. I was going to use it for a certain project a long time ago, but had to abandon the idea because of that. It's a shame, because many of the ideas in JPEG 2000 are very valuable to the world.


Some of these compression patents are expiring now. I don't know if they were ever enforced to begin with. The Wikipedia page is confusing, but suggests that the JPEG committee got the relevant patent holders to agree to let the patents be used freely.


It's true. My first role was working as an unpaid intern for a year. Then my first salary was $20k, followed by $35k only after working there for a couple of years. It was going to be $30k, but I was able to negotiate.

I think pg is correct, however.


Know any juniors without college degrees in relevant fields?


In my experience, they're usually more talented than their counterparts who have degrees. That seems to become less true with time, though.


It should be socially acceptable for Internet Archive to ignore robots.txt.

They have to respect it because we, collectively, say so. Obeying robots.txt is the minimum acceptable behavior for any robot, short of the Asimov laws.

But archiving is different. I've been running into "Site was not archived due to robots.txt" more and more frequently. Often these are articles from ~2011 and earlier which the author no doubt would have wanted to be archived.

Trouble is, robots.txt is also the only thing that people really bother to set up. Maybe there's a way right now to indicate "Sure, archive my site please, and ignore my robots.txt." But if there is, it's not really common knowledge, and it's kind of unreasonable to expect every single website on the internet to opt-in to that.

On the flipside, it seems entirely reasonable that if someone really wants to opt out of archiving, that they explicitly go and tell Internet Archive. Circa 2016, Internet Archive is the only archive site that seems likely to persist to 2116. It's a shared time capsule, a ship that we all get a free ticket to board. If someone wants off, they can say so.

But right now, large swaths of the internet simply aren't being archived due to rules that don't entirely seem to make sense. There are excellent reasons for robots.txt, but opting out of "Make this content available to my children's children's children's children" seems perhaps beyond the scope of the original spec.

Would you feel ok with the Archive ignoring your robots.txt, or would you feel annoyed? If annoyed, then this is a bad idea and should be rejected.

But if nobody really cares, then here's a proposal: Internet Archive stops checking /robots.txt, and checks for /archive.txt instead. If archive.txt exists, then it's parsed and obeyed as if it were a robots.txt file.

That way, every site can easily opt-out. But everyone opts-in by default. Sites can also exercise control over which portions they want archived, and how often.
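To make the proposal concrete, here's a hypothetical sketch of what an /archive.txt might look like. The file name, and the idea that it's parsed with robots.txt semantics, come from the proposal above; the specific directives shown are just an illustration.

```
# /archive.txt — hypothetical opt-out file, parsed like a robots.txt.
# Absence of this file would mean "archive everything by default."

User-agent: *
Disallow: /private/
Disallow: /drafts/
```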


If example.com allowed indexing in 1999, a new owner of example.com can hide/delete the 1999-2015 content by changing the robots.txt in 2015.

It would be better if archive.org would adhere to the robots.txt of the requested date/year (show content of example.com from 1999-2014).


The fact that all popular URLs which fall out of registration are now picked up by squatter-spambots is also troubling. An Archive.org entry should not cease to exist when the registration lapses if the squatter-spambots decide to robots.txt everything. That would defeat its purpose completely.


I think the archive.org crawler should respect robots.txt as it looked at the time of the crawl. As a well-behaved robot, archive.org's crawler should fetch and respect robots.txt each time it crawls. However, archive.org should not retroactively delete old content when the current site puts up a robots.txt.

(To answer your other question, the robots.txt standard already allows giving different instructions to different crawlers.)


The situation is a bit more nuanced than that. I had a website on shared hosting, and it was being indexed by archive.org. But years ago (maybe a decade?), their robot was doing something crazy that was overwhelming sites, and the server admin blocked the Internet Archive robots. Even worse, archive.org interpreted the block retroactively and deleted all the archives.

I would have loved for my site to be archived, but I also need my site to perform well. I'm savvy enough to use robots.txt but not to monitor my site's CPU - and I imagine a lot of people with Wordpress or Squarespace sites don't even know about robots.txt. We need to find easy ways for people to control how their sites are archived. (And I don't know how any of this would fit with EU laws like the Right To Be Forgotten.)


The Archive doesn't delete anything; depending on the current robots.txt, they may not show pages from past crawls.

Update the robots.txt and you should be good to go.


Very well said, and I strongly agree. What's worse is that highly legitimate sites that existed for years get domain-parked after shutting down and suddenly become inaccessible. Maybe for sites like that they could make the pre-switchover archives available, but it would probably be too costly staff-wise to review each one case-by-case.


robots.txt already lets you specify per-robot behaviour. You can trivially opt-out of crawling, but opt-in to archiving by explicitly allowing archive.org's bot and disallowing all other user agents.
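For example, a robots.txt along these lines would do it. (I believe ia_archiver is the user-agent token the Internet Archive's crawler has historically honored, but check the current documentation before relying on it.)

```
# robots.txt: opt in to archiving, opt out of everything else.

# Allow the Internet Archive's crawler (empty Disallow = allow all):
User-agent: ia_archiver
Disallow:

# Block all other robots:
User-agent: *
Disallow: /
```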


Sorry, a little too drunk to scan your post, but I have considered this before.

I think Archive dot org, as they said on the Science Friday podcast, isn't a legal archive or otherwise the final word; they're just trying to help out with archiving humanity. If I want to delete some old posts for whatever unsupported reason (or if a new robots.txt follows a takeover of the domain), then that's how it should go.

IMO.


Read your post tomorrow. I guarantee you will laugh. I've been there.


An accusation this serious should be accompanied by evidence, or dismissed.



You know, I didn't expect this to be on his Wikipedia page. My apologies for not doing the most basic of searches. (The topic was uncomfortable enough to want to avoid it, but I still should've checked.)

Thanks!


I was hoping the product would pivot to passive blood pressure monitoring. I want my blood pressure to be recorded every few seconds, logged and cross-referenced against what I did, ate, and drank. For correlation purposes, knowing those things can be as simple as snapping a photo.

Smart people apparently claim that blood pressure is one of the most reliable indicators of how long you'll live. If so, then it's always seemed strange it's (almost) never measured.


Now that would be a cool product. Lots of datapoints. I'm skeptical about the claim - yes, it's important, but there are so many variables that feed into it, and I need to be convinced that constant, 24 hr monitoring of BP would enable better management than spot tests, home BP management and the occasional 24 hr ambulatory monitoring.

It's a very consumer-targeted technology, although it would certainly find a place in emergency departments and ICUs. I'm at a loss as to how we would actually capture the data, though I hope better minds than mine come up with ways.

The problem is that to get a good read on arterial pressure, you either need to do it the old-fashioned way (occlude the artery and record that pressure, then slowly drop it until blood is constantly flowing again - see [0]) - or you need to stick a cannula into an artery, as we do in ICUs, and measure pressure using a transducer.

Even technologies that stress their 'passivity' (see [1]) and try to capture this market use the old-fashioned way. I don't see that changing anytime soon - you could try to somehow monitor the stretch of a small artery, maybe using some variation of current O2 saturation sensors, coupled with advanced computer models of flow rate and variation in small arterioles, but that is a world away and would seem to me to be highly subject to variation/sensitivity.

My prediction is that this won't be possible until we are commonly implanting biometrics in people, but I guess we will wait and see!

[0] https://en.wikipedia.org/wiki/Korotkoff_sounds [1] http://www.visimobile.com/


It seemed promising to do some experiments with sewing a BP sleeve into a shirt, then setting up an Arduino to trigger it to inflate/deflate. It should be possible to record the result digitally. It'd be slightly uncomfortable, but even if it's only once per hour, it's still better than zero per hour. The noise would be annoying, but I have some ideas for how to make it quiet. But would anyone actually want such a thing?

Thanks for batting around the idea with me, and for the valuable references. I didn't know there was any other way to measure BP than the old-fashioned way.


You're describing an ambulatory blood pressure cuff. I've had one attached to me, and you get used to it fairly quickly, although it failed to measure blood pressure when I was active (I was cycling for a few of the readings, which you would think would keep your arm fairly still and not cause a problem). Cool to make it yourself, though! Have a look at these further links; you may find the Australian Prescriber article particularly useful.

http://www.racgp.org.au/download/documents/AFP/2011/November...

http://www.australianprescriber.com/magazine/20/1/18/20

https://en.wikipedia.org/wiki/Ambulatory_blood_pressure


My mom is terminally ill and recently started experiencing orthostatic hypotension, so I just picked up an Omron armband blood pressure monitor that is trivially easy to use and stores the last 100 measurements for two people.

http://amzn.com/B00KPQB2SS

It's still not what you want, but it looks like things are moving in the direction you suggest. That said, I'm wondering if it's even possible to do what you suggest without inconveniencing the user. Having an armband inflate and tighten around my arm every few seconds would become infuriatingly annoying. Are there alternative ways to measure blood pressure that are imperceptible?

What I'm looking forward to seeing is conductive textiles making their way into compression clothing so we can measure heart rate all the time, i.e. a wearable EKG shirt. The use case would be older people at risk for a heart attack and heart failure, so we can detect problematic heart abnormalities that are predictive of failure.


I'm guessing I got downvoted for linking to the product on Amazon. What's the appropriate way to link to something like that so that it doesn't get downvoted?


You know, when I saw your original comment, it was so helpful that I wanted to respond and say thank you, but I suppressed the instinct because I had already thanked someone else, and it felt like the community would react badly to me saying "Thank you so much!" to every single person.

But now I see that the community actually downvoted your comment rather than rewarded it. Darn.

For what it's worth, and even though this reply is very late: Thank you so much for your time and for the thoughtful and helpful reply. The links, specifically, were the reason it was helpful to me.


I think your comment was informative, your manner of linking was fine, and that you should ignore the downvote(s) in this case. I guess it might be slightly clearer to use the full "amazon.com" in the URL, and I suppose someone might argue that it's safer to indent it two spaces so it's plain text rather than an active link, but seems good to me as it is. Maybe someone clicked the wrong button, didn't like something else about your wording, or was just in a bad mood.


Here's what England's NICE says about diagnosing hypertension. Compare the difference between ambulatory measurement and home measurement.

We know that most people can't even take their medication properly (many organ transplants fail because people don't comply with the medication regime, for example) so easier blood pressure monitoring would probably be useful. Especially if you combine it with something that can lower blood pressure.

http://www.nice.org.uk/guidance/cg127/chapter/Key-priorities...

> Diagnosing hypertension

> If the clinic blood pressure is 140/90 mmHg or higher, offer ambulatory blood pressure monitoring (ABPM) to confirm the diagnosis of hypertension. [new 2011]

> When using ABPM to confirm a diagnosis of hypertension, ensure that at least two measurements per hour are taken during the person's usual waking hours (for example, between 08:00 and 22:00).

> Use the average value of at least 14 measurements taken during the person's usual waking hours to confirm a diagnosis of hypertension. [new 2011]

> When using home blood pressure monitoring (HBPM) to confirm a diagnosis of hypertension, ensure that:

> for each blood pressure recording, two consecutive measurements are taken, at least 1 minute apart and with the person seated and

> blood pressure is recorded twice daily, ideally in the morning and evening and

> blood pressure recording continues for at least 4 days, ideally for 7 days.

> Discard the measurements taken on the first day and use the average value of all the remaining measurements to confirm a diagnosis of hypertension. [new 2011]
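The HBPM rule in that last bullet is mechanical enough to sketch in code. This is a minimal illustration of the quoted protocol (discard day-one readings, average the rest); the function name and data layout are my own, not anything from NICE.

```python
from datetime import date
from statistics import mean

def hbpm_average(readings):
    """Average home blood pressure readings per the NICE rule quoted above:
    discard all measurements taken on the first day, then use the average
    of the remaining measurements. Each reading is a tuple of
    (date, systolic mmHg, diastolic mmHg)."""
    first_day = min(day for day, _, _ in readings)
    kept = [(sys, dia) for day, sys, dia in readings if day > first_day]
    return mean(s for s, _ in kept), mean(d for _, d in kept)

readings = [
    (date(2016, 1, 1), 150, 95),  # day 1: discarded
    (date(2016, 1, 1), 148, 94),  # day 1: discarded
    (date(2016, 1, 2), 142, 90),
    (date(2016, 1, 2), 138, 88),
    (date(2016, 1, 3), 144, 92),
    (date(2016, 1, 3), 140, 90),
]
print(hbpm_average(readings))  # (141, 90)
```

(A real implementation would also check the "at least 4 days, ideally 7" and twice-daily conditions before averaging; that's omitted here for brevity.)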


Databases are usually sharded by the first letter of the username. Devs like to start their usernames with a number, which causes problems.

Also, usernames are injected into their code. Therefore they must be valid identifiers. Don't name yourself exit() or ret, or you'll crash their servers. Naming yourself nop will give you a distinct advantage relative to other players, however.

I don't know.


It took me too long to realize you weren't serious


I wanted to give you a status update of what new users currently experience. Or at least what I'm experiencing.

After creating an account, I was redirected to the front page with no notification about whether it succeeded or failed. No emails have arrived. When I try to create it again, I get "username already taken" and "email already taken", so something worked. But when I try to log in, I'm redirected to the front page with no notifications. In other words, login is failing.

To clarify, this isn't a complaint. Congratulations on launching, and apologies if you're already aware of these issues.

Edit: This is similar to https://news.ycombinator.com/item?id=10724699 but I'm unable to log in after clearing cookies. I get "Couldn't sign in. Check username/password" for a correct password.


This paper introduces the Bayesian program learning (BPL) framework, capable of learning a large class of visual concepts from just a single example and generalizing in ways that are mostly indistinguishable from people.

This is one of the most exciting and readable papers I've come across.

Does anyone know if the code is available anywhere? Can we reproduce their results? I can think of a dozen applications for such an ability.


The link to the code is in the paper as well. After reading the media hype around the paper, I thought I should read the paper itself, and like you, was surprised to find it very readable. Although I think to really understand the mechanism I'm going to have to read the code, because without that this just looks like a good parlor trick (the permutation in stroke output is a fun but minor piece of code, yet a big part of the media breathlessness).

There should be a flurry of activity as practitioners take these concepts and start applying them to other fields, such as static code analysis. Much of the magic seems to be in the choice of atoms that you feed in to the algorithm.


The take home here is actually that by modelling the physical process of writing you get a more accurate model. It requires fewer examples partly because of pre-training, and partly because of physics hard coded into the model structure. It's not just the atoms that you feed in, but the entire algorithm is designed around drawing glyphs.

It's not entirely clear how you'd apply these concepts to new problems. Certainly in many cases you could come up with more detailed models of the processes involved. But in others, like text understanding, it's not at all clear how you'd make models more sophisticated.


Sorry, I should have included the link in my post, since I had searched and found it.

https://github.com/brendenlake/BPL.git


