Some of the forces blocking new ideas in science (mattsclancy.substack.com)
170 points by bhaprayan on April 24, 2022 | 64 comments



I think this primarily applies to ACADEMIC science.

The funding game in academic science is kind of miserable. Researchers eager to maintain positions for their postdocs, grad students, etc. pay close attention to which way the funding winds are blowing; telling funders what they want to hear becomes the key skill. That attention is not always focused on new ideas. Not getting funded is pretty horrible, so getting funded is the top priority.

Adding to this, there is now a major push on DEI and other types of policy work that are not always directly focused on scientific ideas. Then there are compliance costs (you need to train your researchers on project costing / job codes for payroll, procurement processes with federal funds, etc.) and ideally get them the NIH training (see below for a reading list).

    Me and White Supremacy: Combat Racism, Change the World, and Become a Good Ancestor, Layla F. Saad

    The New Jim Crow: Mass Incarceration in the Age of Colorblindness, Michelle Alexander

    Systemic Racism and the United States: Explicating the systemic nature of institutionalized racism, Steven Turam

    How We Fight White Supremacy: A Field Guide to Black Resistance, Akiba Solomon and Kenrya Rankin
https://www.training.nih.gov/2020_inclusion_anti-racism_and_...

So you have a lot on your plate - not that this is a bad thing, but just to be aware of it.


`The funding game in academic science is kind of miserable. Researchers eager to maintain positions for their postdocs, grad students, etc. pay close attention to which way the funding winds are blowing; telling funders what they want to hear becomes the key skill. That attention is not always focused on new ideas. Not getting funded is pretty horrible, so getting funded is the top priority.`

I think this is often, but not always, overstated (note, not the bit about the funding game being miserable - it is). I've had a relatively successful track record as new faculty, and my best-scored grants are also my most daring. Significance and Innovation are among the criteria the NIH reviews on, and funding is tight enough that a "meh" score there can torpedo a grant. Getting to know what your funder (and most importantly, your particular program officer) wants is critical, but what they want is not always "safe" science.

The advice I give my trainees is "Learn how to tell your story" and "Stop blowing off your Specific Aims page, it's the most important."

`Then there are compliance costs (you need to train your researchers on project costing / job codes for payroll, procurement processes with federal funds, etc.)`

Almost all of this is handled by departmental staff or a sponsored programs office at every institution I've ever been at, using the indirect costs that Hacker News is always so fond of talking about.


Idk. There's a sweet spot when it comes to novelty and funding, and I'm not sure it's always where it should be. I also think there's a certain relativism about novelty, in that what is novel in a subfield might look pretty conservative to an outsider.

All these studies of grants etc. are overshadowed by one problem: they typically use citations as some kind of metric of quality. The problem with that, in turn, is that over a reasonable study span, variation in those citations is going to be driven by self-seeking behavior. That is, what's popular is what's funded, but also what's cited. There's a certain bias in it, in that you don't learn about the novel studies that were never done because they were too novel, and the truly paradigm-shifting papers, which are cited at high rates, are kinda washed out by the hundreds or thousands of papers that just kinda creep along.

It's difficult for me to put into words what's on my mind. But when I think of colleagues who are well funded, even those I consider friends and people I respect, I don't think of their work as being innovative. It's very much in the status quo. Very technically well done, but basically data generating machines within a status quo paradigm.

The things that shake things up tend to come from elsewhere: from industry, or accidents, or secondary reanalysis of old data, or things funded by miscellaneous sources scrounged together. It's as if true innovation happens regardless of grants, or in spite of them, and only after everyone agrees it's the accepted thing does it get funded, after the fact.


There's definitely a sweet spot, and I don't think we're near the optimum. For a lot of people, I think grants actually aren't "Fund me thinking up an innovative idea", but are taking an innovative idea that's emerged from pilot data, a side project, etc. and spending X years formalizing and solidifying it.


All good points. My own sense is that if your carry isn't too big (you are not feeling a ton of pressure to maintain a pretty big funding line) life is better all around?

My own indirect experience is not with the NIH, but with gov lab related work. This is, I think, more bureaucratic because the labs have funding streams, and the key goal can be not to f it up. That might move things to a somewhat heavier compliance model.

I'm not against indirect cost rates, they are a HUGE efficiency winner to avoid needing to push paper at the individual level. That said, the system they fund is not itself that efficient.

UC Berkeley I think is going to be 60%+ indirect rate for 22-23 as a local point of reference - I don't work there though.

So if you get $400K in the door you get to "keep" $160K of it.


`All good points. My own sense is that if your carry isn't too big (you are not feeling a ton of pressure to maintain a pretty big funding line) life is better all around?`

Absolutely. The standard in my field is somewhere between a 50% and 100% soft money position. Mine is only 25%, and while I could probably fish around for a position at a more prestigious university, it's a big boost to my ability to go "Yeah, that seems neat, let's do it" and thus a major quality of life boost.

`UC Berkeley I think is going to be 60%+ indirect rate for 22-23 as a local point of reference - I don't work there though.`

This is not how you calculate indirect rates.

Indirect rates are a percentage of your direct costs. If X is the money you get for your lab (i.e. direct costs) and the indirect rate is 60%, then the actual calculation is 1.6X = 400,000, so X = $250,000.
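A minimal sketch of that arithmetic, in case it helps to check the numbers (the 60% rate and $400K total are just the example figures above):

    # Back out direct costs from a total award, given an indirect (F&A) rate.
    # Example figures from this thread: $400K total award, 60% indirect rate.
    def direct_costs(total_award: float, indirect_rate: float) -> float:
        # total = direct * (1 + rate)  =>  direct = total / (1 + rate)
        return total_award / (1 + indirect_rate)

    direct = direct_costs(400_000, 0.60)    # 250000.0
    indirect = 400_000 - direct             # 150000.0, not 240000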

If you want to point a finger at the thing that's probably the most harmful to the funding of science, it's not indirect rates. IMO, it's that the NIH budget cap for a modular R01 was set at $250,000 in direct costs in *1999* and has never moved from that.


25% sounds awesome - that's in the cool and interesting projects range! Do you have responsibility for other positions? Not sure how it works where you are, I know someone who was very stressed because their proposals "carried" a fairly large group of folks.

Good point on indirect rates - I was being too quick there. Salary costs can be lower because you have to layer on fringe as well (which can be a separate pool or just a direct calc). So salary * 1.X (fringe) * 1.Y (indirect) = total award?


Answering this bit first:

`Good point on indirect rates - I was being too quick there. Salary costs can be lower because you have to layer on fringe as well (which can be a separate pool or just a direct calc). So salary * 1.X (fringe) * 1.Y (indirect) = total award?`

Yeah, this is how that math works, at least at my institution, with some rare exceptions.
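As a minimal sketch of how a salary line gets loaded (the fringe and indirect percentages below are made-up placeholders, not anyone's actual negotiated rates):

    # Load a salary line: add fringe to get direct costs, then apply the
    # indirect rate to the direct costs. Rates below are illustrative only.
    FRINGE_RATE = 0.30      # hypothetical benefits pool rate, applied to salary
    INDIRECT_RATE = 0.60    # hypothetical negotiated F&A rate, applied to directs

    def total_award_for_salary(salary: float) -> float:
        direct = salary * (1 + FRINGE_RATE)     # salary plus fringe
        return direct * (1 + INDIRECT_RATE)     # plus indirect costs

    print(total_award_for_salary(100_000))      # 208000.0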

`25% sounds awesome - that's in the cool and interesting projects range! Do you have responsibility for other positions? Not sure how it works where you are, I know someone who was very stressed because their proposals "carried" a fairly large group of folks.`

It really is awesome, and I'm tremendously privileged to be in that position. It's especially nice in my field (infectious disease epidemiology) because in basically all outbreaks, the work we do is uncompensated for ~6 months or so and then you sort of hope for grants to backfill it (I had, for example, done my best work on the pandemic prior to getting any funding for it).

You have however nailed the primary source of my stress - keeping "my" people funded. Graduate students (the downside of my position is it's in a place where TA lines are functionally non-existent), postdocs, etc. are my responsibility, and keeping them funded is most of the reason I write grants.

For some staff positions (because 100% funding a staff scientist on grant money is daunting and terrifying for a single PI), we're experimenting with a pooled funding model: while none of us can pay 100% of a data analyst, four of us can each pay 25%, with gaps backfilled by some institutional resources.


You’re absolutely implying it’s a bad thing and I don’t know why you’re pretending otherwise.


I'd encourage you to engage substantively in the conversation. You know almost nothing about me, and are claiming that I'm pretending. This is not substantive.


You're absolutely implying it's a bad thing and I can pretty much exactly tell why you're pretending otherwise.


I can absolutely tell you have no point to make, and so are stuck resorting to assuming bad faith, which breaks this site's guidelines, and/or race baiting.

Something can be both a good thing and require work. The fact that you don't understand that is what is surprising. If you are at the intersection of academia and government, this is heightened. It's not only DEI work (which has a lot of attention now). And agreement is not universal on what steps to take. The NIH has been struggling with questions like whether to move toward or away from double-blind methods for evaluation. Some have argued that double-blind review reduces racism; others have argued that double-blind review produces a racist result, etc.

Work is happening in pipeline areas (dismantling racism in math instruction, etc.). A reminder that these government / academic areas still carry all the other requirements: everything from guidelines on federal funding with respect to reproductive health (where the right has put in lots of restrictions and compliance efforts), to funding models that complicate carrying positions in various ways, sweatshop compliance steps, virgin redwood compliance steps, Northern Ireland accord requirements, and many, many more.

I would really encourage you and your colleagues in this space to get a grip and fast. The arguments I'm seeing above are extremely weak. They are not even arguments, just assertions of bad faith. If you believe industry has to comply with the same rules as folks in academia receiving federal funds, fine, lay it out. I believe you are totally and completely wrong.


The biggest issue in academia is the push to maximize the number of papers you publish. You see it especially in Machine Learning, where garbage papers are continuously released with very biased tuning to "show how we beat a superior model", along with not releasing any code so that no one can refute the paper; meanwhile those junk papers secure tenure and confer prestige in the industry. It's one big joke.


Yes, industry is maybe a bit more outcome oriented in these areas (outside of the snake oil ML / AI marketing and sales efforts which I find just totally ridiculous 90% of the time).


DEI stuff is not at all what's blocking new ideas in science, but your post implies it.


You should speak with some researchers. Entire categories are off limits because of DEI politics. I could cite many examples, but one which comes to mind now is that of Noah Carl, formerly of St Edmund's College, Cambridge: https://www.spectator.co.uk/article/how-noah-carl-is-fightin...

He dared to research politically contentious topics like migration and IQ. He wasn't just defunded. He was fired.


The racist overtones to Noah Carl's work certainly did his case no favours, but he was doing scholarship whose poor quality is easy to see. Before you get tenure (or the watered-down approximation that the UK has), academia offers essentially no job security: only around 1/4 of researchers starting their first post-doc will succeed in getting a permanent academic position. I wonder what he was thinking?


Science should be immune to "overtones." Overtones are subjective and based in emotion and personal experience. I don't see any racist overtones at all.

Which work was poor quality? His work is published and peer reviewed. It should be easy to point at the work you're referring to.

He has given interviews about what he was thinking: he wanted to research migration and IQ, even though he received a lot of political pressure not to. He knew that this was a potential consequence, but believed that scientific and educational institutions would uphold Enlightenment ideals and allow free research. He was wrong. Politics has won in academia, and some subjects are off limits now.


> Science should be immune to "overtones."

This sounds nice, but given that science is done by humans, it seems entirely unrealistic to me.

If you had said: we should try to minimise the influence of this kind of politics on science, I would agree, but I rather think that is best achieved by keeping think-tank-class ideologues like Carl far away from tenure.

The tide of politics waxes and wanes in academia. Your whitewashing of Carl's record does not help reduce the role of politics in academia.

> Which work was poor quality?

Carl was spending his time publishing in a 'journal' run by a friend, with essentially no quality controls, where the content contained provocative racist claims not backed up by argument or evidence, instead of doing the hard work of quality scholarship.


I speak with researchers all the time. Most agree that Noah Carl's pseudoscience is what got him canned, not any DEI initiatives.


His work is published and peer reviewed. Which work was "pseudoscience"? I think you're only making that accusation because it's vague and sounds authoritative, but you don't have any examples.


The funding game is a miserable slog, but people are in academia because they want to do their research. If they wanted to use their expertise to obtain a paycheck, there are typically much better opportunities to do so.


> but people are in academia because they want to do their research

Which is perfectly fine as long as those people don't go asking the NSF or NIH for money to fund students.

I'll never understand why we give egomaniacs funding for PhD students, and it's something I actively work to get the panel to select against when evaluating any proposal that includes lines for PhD students or post-docs (i.e. all of them).

CS profs: if you want employees for your ego trip, start a damn company. Your students are not employees, and if we're all honest, most of the benefit of the grant we're giving you will be the students it produces. So sayeth this regular panel member.


While I would agree that DEI isn't directly about science, it certainly helps the end goal of understanding some phenomenon as holistically as possible.

I think these sorts of policies are aimed more towards administrators rather than researchers, which for some reason are often the same people.

I think academia could benefit from adopting the music industry's approach to managing talent - i.e. management and talent are usually kept separate.


Don't like being made aware of your complicity in racist institutions? Want to just do your "science" while people outside the institutions are subject to mass incarceration?

How elysium of you.


I was in graduate school in the 1990s when anti-sexual harassment training started to come in, and there were plenty of complaints about that.


Yes, my own sense is that it starts in govt / academia and does tend to filter out to industry in some form usually after some of the specifics get worked out a bit more. And don't worry, in places like California where there are still required harassment trainings (every 2 years) there are still complaints! I enforce some of these requirements.


Physicist and Bayesian pioneer E. T. Jaynes had a pithy take on this theme:

In any field, the Establishment is seldom in pursuit of the truth, because it is composed of those who sincerely believe that they are already in possession of it.

From Probability Theory: The Logic of Science, E.T. Jaynes, 2003.


"A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." -- Max Planck


Often paraphrased as "Science advances one funeral at a time".

Perhaps increasing life expectancy is what's holding progress back?


That makes sense, but I like the idea of longevity, and wouldn't want to throw the baby out with the bathwater. Maybe instead we could have some "term limit" for academic work?


I think we see the same “ageing stagnation” in business — where increasingly old business leaders don’t retire and maintain a stranglehold on resources.

I think we need a more general cultural shift that the current generation needs to build up the next generation instead of keeping a death-grip on power that destroys their children and grandchildren.


I don't know of any such companies, but US politics is getting done by very old people.

The major players are now typically around 80, and that seems bad for many reasons!


No, definitely a bad idea. You need a lot of resources, money, and time to train an expert. Wouldn't be impossible to recover the investment.


Interesting angle. Would you have any empirical data on the number of years of academic work needed to "break even"?


It depends on the issue, but IMAO in academia it is not uncommon to spend about ten years studying a topic to become an expert in it. Plus university years. Plus school years. And there is a lot of inertia against changing your study subject.

Researchers in some parts of Europe spend around 25 years of their life (if I remember the article correctly) just trying to get a stable job, and this is criminal.

If you put an upper limit on it, the interval that remains is just too small to be profitable for society and especially for the scientist. Nobody would want to do science.

Academics also spend a lot of time teaching, doing bureaucracy, and finding funds. So you have highly trained people spending most of their expensive time not doing research.


s/wouldn't/would/


I'd add a minuscule idea to this important quote: those who establish themselves as the authority often have deep insights and can see the limits of accepted ideas. Those who are merely taught the latest truth rarely have that insight; they accepted the paradigm as is. They contribute to the inertia.


It's funny that the article rather explicitly rejects such theories of malfeasance/corruption/etc. and, instead, points to cognitive effects interacting with the decision-making process as a plausible cause that is compatible with assumptions of good faith and competence.

So, arguably, the only person confident of being in possession of the truth is the one with the ready-made cynical quote.


People don't have to be acting in bad faith for their biases to shine through. It's quite possible for an authority to be acting in good faith and lack the insight to see some breakthrough.


I'm not sure about 'seldom', but what do I know? The point is taken, though: lots of people know this, if not the metrics involved, but when anyone questions the Establishment, too often the response is 'follow the science', and then they are sidelined or cancelled from areas of public discourse. Of course some of these questions and comments might amount to nutty nonsense or personal aggrandizement, in which case it's open to others to demonstrate this. Isn't that most effectively done by skipping the smart rhetoric and dealing with the facts of the matter? That then leads to searching counter-questions for the initial questioner - great!

Incidentally, might there not be the odd new idea buried in one of the two million publications published in around thirty thousand journals every year? Who knows?


My former academic supervisor's former academic supervisor died in 2009. He was a big name in his field.

When I offered my condolences, my professor said “It’s ok. When a mighty tree falls, it can be sad, but it means the light can reach all the little saplings down below”.


True, but there is a collateral cost that is often dismissed: entire topics can die when the only people interested in them quit.

It is not uncommon to find obscure species that have been forgotten for the last 60 or 100 years, where the only good description of the species lies in a book written in Dutch in 1923. With the internet, finding sources is much easier, but errors can accumulate, and as browsers and social media replace specialists for free, there are no incentives to pay specialists anymore.

This could mean that those people will never return and that all that remains is politics and ideology. Politicians are more willing to pay people of their own ideology, or to block the "incorrect people" from having funds, so the next generations will focus more and more on politicians' wishes and you'll soon have a different "scientific truth" for each party. It's a vicious circle.


The thing about this article is that it seems to make the implicit assumption that hostility to novelty is a bad thing. I don't think that's justified. Obviously, allow no new ideas into a field and it will die, but allow too many new ideas into a field and you have a recipe for the field becoming a pseudo-science.

Before you propose changes to allow more ideas in, it would be appropriate to have some measure of whether a field is too tight or too loose in the amount of ideas it allows now.


That's not really the zeitgeist in tech circles like here on Hacker News. Galileo syndrome and the whole disruption mindset mean that here we even have programmers wanting mathematicians and physicists to stop doing algebra with Greek letters and symbols.

In the end, quackery is much more likely than truly disrupting ideas.


That would require a level of meta awareness about science that adherents of specific fields just don’t have.

People could’ve said the same about phlogiston.

We live in an age where we assume we are right about most things, that’s completely historically unjustified.


The notion that established field leaders, like large trees, stifle the undergrowth of new ideas is not new. It is a partially self-correcting problem: leaders die. However, there's no telling what some of the undergrowth that sprouted at the wrong time might have grown into were it given more light. Asking how we can do better is worthwhile.

A worse problem manifests when fields come to be dominated and policed by a commonly accepted orthodox dogma.

A less controversial example might be the common perception in Physics during the late 19th century that all the big things had been sorted out, and what remained was to fill in increasingly small blanks. This was a terrible sort of dogma to grip an entire discipline since it actively encouraged ambitious new minds to look elsewhere for discovery. Who knows what discoveries might have been made earlier if this kind of fatigue hadn't gripped all of Physics?

Modern examples are bound to be more controversial. e.g. Anyone looking for a silver lining in climate change is going to run into outrage and difficulty getting publications or funding. However, common sense tells us that huge, complex changes to massively complex global systems are bound to be, themselves, complex and mixed. Of course there will be good aspects, and taking advantage of them may be the key to dealing with the bad. Just suggesting this is enough to make you a pariah in many circles though. What kind of valuable and beneficial work simply isn't being done because of this?


In many ways I think the common perception in the 19th century was essentially correct.

It's easy to look back on people and think them fools, but the odds of a young student in 1830 making huge strides in Physics were tiny, while many other fields were wide open.


Max Planck famously said science advances one funeral at a time. J. J. Thomson discovered the electron through his work on cathode rays in 1897. Up to that point the orthodoxy was that atoms were the smallest unit of matter, although there had been suggestive evidence since the mid 19th century. What Thomson actually did was finally come up with incontrovertible experimental confirmation of a subatomic particle. The electron and other subatomic particles were the gateway to the rapid advance of 20th century physics.

The same pattern repeated itself in the early 20th century. Truly revolutionary ideas may take a generation to gain popular acceptance.

They had vacuum tubes in the 19th century, atomic theory and Maxwell's equations by the 1860s. I find it difficult to believe that they were incapable of isolating the electron.


> It is a partially self-correcting problem. Leaders die.

But the next people do the same. One major question that the younger generation doesn't ask is, 'how do I not repeat that mistake when I have power?'


I don't find the measure of novelty in the article convincing. This seems more like a proxy for how inter-disciplinary the work is.

It also isn't necessarily a bad thing if most research aims to be somewhat safe, though of course this should not be taken too far. "Safe" research means there is a good chance of obtaining useful results. Usually we know this because we're using e.g. a known and established method on a somewhat unknown, but focused problem.

More risky research is also needed, but even then it might be possible to split it into parts that still have value on their own. There is certainly a problem here for younger scientists as they need results to advance their career. That usually means that somewhat safer, but not too safe research is in the best interest of those researchers.


For fields especially resistant to new ideas, Egyptology takes the prize.

Thus far surface luminescence has been used only once, and produced results that were only just barely acceptable. A chip from a facing stone of one of the Giza pyramids, and one from the Valley Temple, showed an age of 5000 +/- 500 years, where the officially assumed age is 4500 years. That was enough of that!


https://www.sciencedirect.com/science/article/pii/S129620741... ?

"Surface luminescence dating of some Egyptian monuments", Journal of Cultural Heritage, Volume 16, Issue 2, March–April 2015, Pages 134-150

> Surface luminescence dating to Egyptian monuments of the age range 3000 B C to Hellenistic times has been applied for first time. Monuments include the Giza plateau (Sphinx Temple, Valley Temple, Mykerinus), the Qasr-el-Saqha, the Khasekemui tomb and the Seti I Temple with Osirion at Abydos. Equivalent doses were measured by the single and multiple aliquot additive and regeneration techniques, and dose rates by portable gamma ray probes, and with laboratory counting and dosimetry systems. The resulted ages have confirmed most conventional Dynastic dates, while in some cases, predating was obtained by some hundred of years. The dates are discussed in the light of current archaeological opinions.

From the conclusion:

> Different calculated and archaeological ages, beyond one standard error, were noticed for one sample at Valley Temple at Chephren's complex (limestone), one at Sphinx Temple (granitic), and one at Seti II Abydos (sandstone).


I didn’t know about surface luminescence dating. For others who’d like to read the paper on using it to date monuments: https://www.researchgate.net/publication/263201697_Surface_l...


This article needs Wittgenstein to edit it. The empirical claim is:

"Science" "blocks" "new ideas"

I add quotes because these words aren't well defined as is. The article presents the following as evidence:

When a researcher dies, another takes their place and starts a different stream of research. The issue with this as evidence is that it takes a very generous definition of "blocks". If I get hired for a job instead of you, have I "blocked" you from taking the job? It's also trying to imply that the now-dead researcher was stifling innovation, but fails to show that the new line of work is more novel than the existing work the deceased researcher was doing.

The next piece of evidence is that work citing unique pairs of journals will be cited more outside of the journal it's published in than within it. But this stretches the definition of "new ideas". Interdisciplinary work isn't necessarily novel. Often, it will simply involve the application of known techniques from one field to the domain of another.

The third is an experiment where they condition on the novelty of papers according to keywords, and measure the likelihood of getting a grant. This also stretches the definition of "new ideas" into a new dimension, where unfocused work is considered more novel. They also claim that it's concerning that having cited the author's work previously increases the likelihood of the author getting a grant, as though knowing that someone produces quality work shouldn't improve the odds of funding for more work.

In summary, the issue with writing like this is that it never bothers to actually define the thing it's measuring ("novelty"), and instead borrows multiple other definitions ("novelty is interdisciplinary", "novelty doesn't cite popular works in the field", "novelty means having more keywords"). In doing so, it says nothing about novelty itself (instead talking about "novelty") and so doesn't provide any insight as to whether there is sufficient novelty within academic science.

Edit: A more concise way to make this point - https://xkcd.com/2610/


I will suggest that genuinely new ideas are inherently hard to express effectively. It's like you want to speak Russian, but everyone else speaks something else, Russian is a language you must invent on the fly, and everyone assumes you are saying something else in some other language, and that something else sounds like gibberish.


Ah yeah, that happens to me a lot. Especially if it's within a field ripe for quacks. It can be very hard to keep people from attributing some straw argument to you and then facing that as opposed to discussing something new.


So long as significant new evidence is allowed into the light, evidence that must eventually be accounted for, all's well. But to the extent that there are those who'd prefer to hide it, or disallow, hand-wave away, or use ad hominem attacks against those who present that evidence, there's a problem.


On this, I think you'd want to look more broadly at whether the superstar's university had people on the grant committee at the respective funding institution:

> "Well, we can say it’s not simply a case of these dominant researchers vetoing grant proposals and publication from rival researchers. Only a tiny fraction of them were in positions of formal academic power, such as sitting on NIH grant review committees or serving as journal editors, when they passed away. So what else could it be?"

That's not really how those clubs work; they get their old friends and pals to do the dirty work.


Perhaps the FSF could work on some sort of open patent license that says: if you use this patent in an invention, you are not allowed to use it with any other unexpired patent unless that patent is also under the same license. That way, a culture of open source could be created in the non-software world that would move innovation into the private sector and out of academia by letting private companies more easily share their engineering with each other.


[PLUG] We are addressing some of this by providing a platform to discuss and publish negative results as part of the ERROR Workshop: https://error-workshop.org/


>> To measure the novelty of a paper, they rely on the notion that novelty is about combining pre-existing ideas in new and unexpected ways. They use the references cited as a proxy for the sources of ideas that a paper grapples with, and look for papers that cite pairs of journals that have not previously been jointly cited. The 11.5% of papers with at least one pair of journals never previously cited together in the same paper are called “moderately” novel in their paper.

"If I assign numbers to each of these things then it becomes data and I can do maths on it!"

https://xkcd.com/2610

Also see: ad-hoc assumptions that become axioms of a theory without any attempt to justify them ("novelty is about combining pre-existing ideas in new and unexpected ways"... the Lego Brick Theory of Novelty, I presume).
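To make the mechanics concrete, here's a rough sketch of the journal-pair "novelty" flag as described in the quoted passage; the data layout and names are my own guesses, not the authors' code:

    from itertools import combinations

    # Each paper is represented by the list of journals it cites.
    # Per the quoted description, a paper counts as "moderately novel" if it
    # cites at least one pair of journals never previously co-cited.
    def flag_moderately_novel(papers_in_chronological_order):
        seen_pairs = set()
        flags = []
        for cited_journals in papers_in_chronological_order:
            pairs = {frozenset(p) for p in combinations(set(cited_journals), 2)}
            flags.append(bool(pairs - seen_pairs))   # any never-before-seen pair?
            seen_pairs |= pairs
        return flags

    # Toy example: the third paper pairs "Nature" with "J. Econ" for the first time.
    print(flag_moderately_novel([
        ["Nature", "Science"],
        ["Science", "Nature"],
        ["Nature", "J. Econ"],
    ]))   # [True, False, True]

(In a toy corpus like this, the very first paper is trivially flagged novel, which gives a feel for how much the measure depends on the citation history you start from.)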


“Science advances funeral by funeral.”

-- Max Planck


Politics



