Elicit – AI Research Assistant (elicit.com)
111 points by zerojames 8 months ago | 76 comments



I have run a few queries on Elicit to understand the product a bit more. I asked about media bias detection and used the topic analysis feature. A minute or so later, I had a list of concepts with citations and links to papers I can look at further. This feels like an _amazing_ tool to do literature overviews and to dive into new academic domains with which one is not familiar.


Try looking at this https://inciteful.xyz/ there’s also a Zotero plugin for it.


Elician here: thank you for sharing our tool and for this praise!

We're glad you're enjoying it.


I gave it a topic I researched in depth recently. It gave me mostly incorrect summaries (one said hypothesis X is confirmed; nope, it hasn't been, that would have been all over the news), missed key papers, and dug up obscure and irrelevant ones. Par for the course with LLMs.

Edit: After looking at the examples on the front page "What are the benefits of taking l-theanine?" this seems geared for the general public, so maybe it wasn't the right test.


I think of AI as a super keen arrogant intern. With GitHub copilot, an intern that constantly interrupts me.

When that works for me, I am probably weak on the subject material myself. eg writing quirky love poems to my wife in different styles.

For research tasks, because the AI is not deeply self-reflective, it can output inconsistent and incoherent results. What it does is present text that only *looks* as if it confidently knows what it is talking about.

For domains where high rational quality doesn’t matter like love poetry, it is amazing. For other domains, be wary. If you can’t tell the difference between what is actually good and what merely looks good superficially you will be in trouble.


I generally agree with your take at first, but the following statements are funny to me:

“ I am probably weak on the subject material myself. eg writing quirky love poems to my wife in different styles.”

“ For domains where high rational quality doesn’t matter like love poetry, it is amazing.”

So, self-described weak at love poetry, but confident that it is a domain that LLMs excel at. That is an interesting take. Perhaps the LLM is just as weak at liberal arts as it is hard science, but it is just more difficult to measure since you aren’t in the domain. Most poetry I’ve seen from LLMs has been pretty rote and boring although as you say, not a “rational quality” I suppose.


I was about to comment something like this, but you put it well.

If you don't read a lot of poetry, what LLMs output looks like poems, but it almost always lacks wit, a through-line, coherence, and poignancy. It usually contains the individual parts, but they never fit together as a whole.


Yes, but then unless you're really into poetry, or some poetry critic, none of that matters. For everyone else, all that matters is that it looks like a poem should, and that it makes the reader feel something.

Instrumental vs terminal values, I guess. Makes me think of coding - the overlap between good code, and code that makes money, is nearly empty.


I like you bringing in instrumental vs. terminal values because I think it is an important frame when talking about generative AI, but I think your application is mixed up by applying a strictly capital-based value system.

The end result of good code is to do a thing. That might result in money, but the instrumental value is in the thing it does rather than for the joy of coding.

Many people will assume that artists create things strictly for the terminal value of just doing the thing because the prose doesn’t “do” anything so the artist must have just enjoyed making it. But the artist usually wants to make an impact - communicate an idea, change a mind, etc. Not just “make a thing that looks like a poem so that I can sell it.”

I say this is an important frame to look at the problem in because there is pretty much zero instrumental value in generating LLM output beyond the kind of gacha-style fun of putting in words and seeing something pop out the other end, and the terminal value is always measured in money or “time efficiency” because what else is there to measure in?

Did I actually craft a love poem that communicates my true feelings in prose with all of the little flaws and personalizations that only we know in one of our most intimate relationships? Did I choose my voice or was it someone else’s? What is the value I’m getting when I fish something out of a generative AI? Did I really get the same value fishing with prompts through an AI’s output as I would have from making the thing myself? Maybe, maybe not.


Perhaps the example I used wasn't the best one; I wanted to focus more on the code aspect than the money. My main point is that what is considered Good Art, as declared by the critics, curators, and educators, is quite different from what people actually want to experience. Importantly, many of the qualities that make art Good are beyond the perception of the layman. The poster's love poems may lack coherence and an underlying through-line, but the poster's wife probably won't notice it - like 99% of the population. So, as a criticism of LLM output, holding it to the standards of the Art is irrelevant, because that's not what the poem was generated for.

There's probably a term for that which I'm forgetting, so let's provisionally call it tangential mediocrity - when the work is mediocre by general standards, but quite good for purpose it was made for.


LLMs don’t excel at love poetry. However love poetry doesn’t have to be excellent to be effective.


"Geared for the general public" is an interesting and revealing observation.

You might not have given the "right test" in terms of the actual userbase, but it is absolutely the right test in terms of Elicit's marketing claims. Elicit might be implicitly geared for the general public, but they are explicitly marketing to scientists.

I suspect a lot of Elicit's target users want to use scientific knowledge in their personal/professional lives, but without doing the hard work of gaining scientific understanding. However, they're not going to spend money on a product that says "we use AI to create sciencey bullshit that sounds plausible in conversation." They want a product that Real Scientists would use. (Similar to how purely decorative Damascus steel Bowie knives are gussied up by an outdoorsman pretending to use the knife to gut a fish or whatever.)


> I suspect a lot of Elicit's target users want to use scientific knowledge in their personal/professional lives, but without doing the hard work of gaining scientific understanding. However, they're not going to spend money on a product that says "we use AI to create sciencey bullshit that sounds plausible in conversation." They want a product that Real Scientists would use.

It sounds kind of like toy marketing: want to sell a toy to 5 year olds? Show 7 or 8 year olds playing with it, even if they'd never actually choose the toy in real life.


What is “hypothesis X” and why would it be all over the news if proven?


It concerned extant liquid water flows on Mars. I checked again and the tool actually correctly summarised a conference paper [1]. The authors changed the wording in the abstract from "confirms" to "strongly supports" in the actual paper [2]. So the mistake AI made here was in selecting the (obscure) conference paper with 7 citations over the actual paper with over 400.

We now know though that the perchlorate detection may have been an instrument error, and satellite imagery constrains the water content of the RSL to below what would be expected from brine flows. It's not conclusive though, and there is no consensus on whether the RSL are caused by liquid or dry processes or some combination of both.

[1] https://meetingorganizer.copernicus.org/EPSC2015/EPSC2015-83...

[2] https://www.nature.com/articles/ngeo2546


> Trusted by Researchers At ...

How did you get all those blue-chip orgs to give you permission to use their names? None of our blue-chip clients allow us to do so.


I’ve learned to discard those as being either totally fake and made up (the excuse would be something like "we used a template to get started quickly and forgot to change the logos"), or a case where someone (probably an intern) signed up to the service with an @company email and they just slapped the logo on as an official endorsement.


As someone who also tried (and stopped trying) to use customer logos, I also wonder whenever I see this pattern. So many startups do this, and yet I know how difficult it is to get official permission and how using a name without permission can lead to serious consequences.

Do startups just roll with it and use the logos without permission?


I do remember the day I learned my company was supposed to get permission, because the logo there implies official endorsement. I just didn’t know.

I would start with Hanlon’s Razor, with a 10% chance of malice.


Current gen of "AI" companies aren't that interested in such trivial things!


Testing Elicit gave me considerably worse results than using PaperQA by futurehouse. While PaperQA could understand a bit of the nuance of a scientific query, Elicit did not.

Too bad the internal paperqa system at scihouse isn't available for public use...


This reminds me of the advent of scholarly databases. The main effect I saw is that researchers started using those databases exclusively, sometimes publisher-specific databases (so they were citing only from one publisher!), and were missing all the papers that were not indexed there, in particular a big chunk of the older literature that hadn't yet been OCRed (it is better now but still not fabulous). This led to so many "we discovered a new X" papers that the older people in the crowd at conferences were always debunking them with "that was known since at least the 60s".

While these AI tools can clearly help with initial discovery around a subject, it worries me that they will reduce searching in other databases, or digging into paper references. It is often enlightening to unravel references and go back in time to realize that all the recent papers were basing their work on a false or misunderstood premise. Not to mention the cases where the citation was copied from another paper and either doesn't exist or has nothing to do with the subject.

There was a super interesting article about the "mutations" of citations and how you could, using tools similar to genetic analysis, generate an evolutionary tree of who copied from whom and introduced slight errors that would get reproduced by the next one.

edit: various typos
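The citation-mutation idea is fun to play with even without the original article. A toy sketch (purely illustrative; the citation strings, the similarity threshold, and the clustering rule are all made up here) of grouping slightly different renderings of "the same" reference, which is the first step toward the kind of copy-phylogeny described above:

  import difflib

  # Hypothetical variants of "the same" citation, each copied (and slightly
  # garbled) from an earlier paper.
  citations = [
      "Smith, J. (1964). On the solubility of X. J. Chem. 12, 34-56.",
      "Smith, J. (1964). On the solubility of X. J. Chem. 12, 34-65.",
      "Smith, J. (1946). On the solubility of X. J. Chem. 12, 34-65.",
      "Jones, A. (1980). Unrelated work. Phys. Rev. 3, 1-10.",
  ]

  def similarity(a, b):
      """Crude string similarity in [0, 1]."""
      return difflib.SequenceMatcher(None, a, b).ratio()

  # Group near-duplicate variants together (threshold is arbitrary).
  THRESHOLD = 0.9
  clusters = []
  for c in citations:
      for cluster in clusters:
          if similarity(c, cluster[0]) >= THRESHOLD:
              cluster.append(c)
              break
      else:
          clusters.append([c])

  for i, cluster in enumerate(clusters):
      print(f"cluster {i}:")
      for c in cluster:
          print("  ", c)

  # A real analysis would then order each cluster chronologically and link
  # each variant to the most similar earlier one, giving a copy "phylogeny".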


Yes, but even the best scientists aren't born with knowledge of what came before. It has to be discovered, and where the discovery process is broken it needs to be fixed. On the individual level, "spend hours chasing rumors about the perfect paper that lives in the stacks, find out the physical stacks are on a different continent, and then sit down and struggle through a pile of shitty scans that are more JPEG artifact than text" makes sense because it's out of scope for a single PhD to fix the academic world, but on the institutional level the answer that scales isn't to berate grad students for failing to struggle enough with broken discovery/summarization tools, it's to fix the tools. Make better scans, fix the OCR, un-f** the business model that creates a thousand small un-searchable silos of papers -- these things need to be done.


I think this is the paper https://arxiv.org/abs/cond-mat/0212043


> Our estimate is only about 20% of citers read the original

Oh no

That's basically the same as the percentage of people who read news stories when responding to or sharing the headline


Really? Do we have numbers on that?


Here's an irony for you:

This link: https://insights.rkconnect.com/5-roles-of-the-headline-and-w... says "Only 22% only read the headline of an online news story, according to data from the Rueters Institue for the Study of Journalism."

But following the link to that study gets me to https://reutersinstitute.politics.ox.ac.uk/sites/default/fil... which… if it supports that claim, I can't seem to find where :P


I love it. Thanks!


That's the one.

There is also https://www.researchgate.net/publication/323202394_Opinion_M...

and there was yet another one but I can't find it


Can you share the article you’re alluding to?


It seems like it should hallucinate less, as it quotes directly, but nope, it still hallucinates just as much and then gives a quote that directly contradicts its own statement.


Elician here!

Accuracy and supportedness of the claims made in Elicit are two of the most central things we focus on—it's a shame it didn't work as well as we'd like in this case.

I'd appreciate knowing more about the specifics so we can understand and improve.


https://analyticalsciencejournals.onlinelibrary.wiley.com/do... Elicit summarises this paper's abstract as: “Psilocybin was present at 0.47 wt% in the mycelium.”

Actual quote from the abstract: “ No tryptamines were detected in the basidiospores, and only psilocin was present at 0.47 wt.% in the mycelium.”

It does not differentiate between psilocin and psilocybin, which are two different molecules.


> A good rule of thumb is to assume that around 90% of the information you see in Elicit is accurate. While we do our best to increase accuracy without skyrocketing costs, it’s very important for you to check the work in Elicit closely. We try to make this easier for you by identifying all of the sources for information generated with language models.

A 90% accuracy rate seems like the sweet spot between "an annoying waste of time" for honest researchers and "good enough to publish" for dishonest careerists.

I don't like disparaging the technology experts who work on these things. But as a business matter, 1/10 answers being wrong just is not good enough for a whole lot of people.


I don't think the number is as important as the question of how would someone be expected to magically know which 10% is wrong and needs to be corrected?


Elician here.

This is a good point! (Hopefully) obviously, if we knew a particular claim was fishy, we wouldn't make it in the app in the first place.

However, we do do a couple of things which go towards addressing your concern:

1. We can be more or less confident in the answers we're giving in the app, and if that confidence dips below a threshold we mark that particular cell in the results table with a red warning icon which encourages caution and user verification. This confidence level isn't perfectly calibrated, of course, but we are trying to engender a healthy, active wariness in our users so that they don't take Elicit results as gospel.

2. We provide sources for all of the claims made in the app. You can see these by clicking on any cell in the results table. We encourage users to check—or at least spot-check—the results which they are updating on. This verification is generally much faster than doing the generation of the answer in the first place.
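(Not Elicit's actual code, but a minimal sketch of the kind of thresholding described in point 1 above; the threshold value, field names, and example cells are all invented for illustration.)

  from dataclasses import dataclass

  # Invented threshold and data shape, purely to illustrate point 1 above.
  CONFIDENCE_THRESHOLD = 0.7

  @dataclass
  class Cell:
      claim: str
      confidence: float  # model's self-reported confidence in [0, 1]
      sources: list      # citations backing the claim (point 2 above)

  def needs_warning(cell: Cell) -> bool:
      """Flag cells whose confidence dips below the threshold."""
      return cell.confidence < CONFIDENCE_THRESHOLD

  cells = [
      Cell("Psilocin present at 0.47 wt% in mycelium", 0.55, ["doi:..."]),
      Cell("No tryptamines detected in basidiospores", 0.92, ["doi:..."]),
  ]

  for cell in cells:
      marker = "verify" if needs_warning(cell) else "ok"
      print(f"[{marker}] {cell.claim} (confidence={cell.confidence})")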


This is true but if the error rate were 1/1000 I could see the risk management argument for using this thing. 1/100 is pushing it. 1/10 seems unconscionably reckless and lazy.


if it takes 1 hour to get one answer by hand, but only 20 minutes for the machine, and 20 minutes to check the answer, the user still comes out ahead


> if it takes 1 hour to get one answer by hand, but only 20 minutes for the machine, and 20 minutes to check the answer, the user still comes out ahead

IF the machine actually got it right.


If it's right 90% of the time, with those other assumptions, then you either:

1) spend 10 hours doing all of them by hand

2) spend 3h 20 waiting for the machine, 3h 20 checking the machine, and 1h replacing the machine's mistake with a hand-written version, for a total of 7h 40

(I never trust marketing claims, so I doubt 90% accuracy; but also it generally takes LLMs a few seconds rather than tens of minutes to produce an output to be checked).
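For what it's worth, the arithmetic in option 2 checks out. Here it is as a tiny script, using the thread's hypothetical numbers (1 hour by hand, 20 minutes machine plus 20 minutes checking, 90% accuracy) rather than any measured figures:

  # Hypothetical figures from the thread: 10 questions, 1h each by hand,
  # 20 min machine + 20 min checking, 90% of machine answers correct.
  n_questions = 10
  hand_minutes = 60
  machine_minutes = 20
  check_minutes = 20
  accuracy = 0.9

  all_by_hand = n_questions * hand_minutes                        # 600 min
  with_machine = n_questions * (machine_minutes + check_minutes)  # 400 min
  redo_by_hand = n_questions * (1 - accuracy) * hand_minutes      # 60 min

  print(f"all by hand:   {all_by_hand / 60:.1f} h")                    # 10.0 h
  print(f"machine+check: {(with_machine + redo_by_hand) / 60:.2f} h")  # 7.67 h, i.e. about 7h40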


There's also the issue where if you aren't doing the work much anymore, will you continue to be able to competently check its output? Be able to intervene as well when it makes a mistake?

I think it's tempting but oversimple to focus on "output" and "time saved generating it," but that misses all the other stuff that happens while doing something, especially when it's a "softer" task (vs. say, mechanical calculation). It also seems like a mindset focused on selling an application rather than doing a better job.


Sure, but that's part of a much broader issue that predates AI by at least two millennia, probably much longer — the principal-agent problem.


> Sure, but that's part of a much broader issue that predates AI by at least two millennia, probably much longer — the principal-agent problem.

I don't think that captures what I'm thinking about, which is more skills atrophy ("Children of the Magenta") problem than a conflict of interest problem.

https://www.computer.org/csdl/magazine/sp/2015/05/msp2015050...

> William Langewiesche's article analyzing the June 2009 crash of Air France flight 447 comes to this conclusion: “We are locked into a spiral in which poor human performance begets automation, which worsens human performance, which begets increasing automation” (www.vanityfair.com/news/business/2014/10/air-france-flight-447-crash).

> ...

> Langewiesche's rewording of these laws is that “the effect of automation is to reduce the cockpit workload when the workload is low and to increase it when the workload is high” and that “once you put pilots on automation, their manual abilities degrade and their flight-path awareness is dulled: flying becomes a monitoring task, an abstraction on a screen, a mind-numbing wait for the next hotel.”


Ah, yes, I think I get you this time. (Is it just me, or does that now feel hideously clichéd from the LLMs doing that every time you say "no" to them? Even deliberately phrasing it unlike the LLMs, it suddenly feels dirty to write the same meaning, and I'm not used to that feeling from writing).

I still think it's a concern at least as old as writing, given what Socrates is reported to have said about writing — that it meant we never learned to properly remember, and it was an illusion of understanding rather than actual understanding.

(That isn't a "no", by the way; merely that the concern isn't new).


Those numbers are arbitrary and fictional, and the more relevant made-up quantity would be the variance rather than the mean. It doesn't really matter if the "average user" saves time over 10,000 queries. I am much more concerned about the numerous edge cases, especially if those cases might be "edge fields" like animal cognition (see below).

In my experience it takes quite a bit longer to falsify GPT-4's incorrect answers than it does to do a Google search and get the right answer. It might take 30 seconds to check a correct answer (jump to the relevant paragraph and check), but 30 minutes to determine where an incorrect answer actually went wrong (you have to read the whole paper in close detail, and maybe even the relevant citations). More specifically, it is somewhat quick to falsify something if it is directly contradicted by the text. It is much harder to falsify unsupported generalizations or summaries.

As a specific example, I recently asked GPT for information on arithmetic abilities in amphibians. It made up a study - that was easy to check - but it also made up a bunch of results without citing specific studies. That was not easy to check[1]: each paragraph of text GPT generated needed to be cross-checked with Google Scholar to try and find a relevant paper. It turned out that everything GPT said, over 1000 words of output, was contradicted by actual published research. But I had to read three papers to figure that out. I would have been much better off with Google Scholar. But I am concerned that a large minority of cynical, lazy people will say "90% is good enough, I don't want to read all these papers and nobody's gonna check the citations anyway" and further drag down the reliability of published research.

[1] This was a test of GPT. If I were actually using it for work, obviously I would have stopped at the fake citation.


Elician here! Thanks for your comment.

I'm not sure I agree that those rule-of-thumb statistics are "arbitrary" or "fictional"… I guess it depends on what you mean by that. I can say that on our part they're a good faith attempt to help users calibrate how best to use the tool, using evaluations of Elicit based on real usage.

Definitely accept that the tool can work better or worse depending on your domain or workflow though!

One way we do try to distinguish ourselves from vanilla LLMs is that we provide sources for all of the claims made. I mention this because we hope our users can approach the falsification process you mention for Google. We want to show people where particular claims come from such that we earn their trust.

Walking citation trails and verifying transitive claims is something we've talked about but need more people to implement! (https://elicit.com/careers)


> I'm not sure I agree that those rule-of-thumb statistics are "arbitrary" or "fictional"… I guess it depends on what you mean by that.

Sorry for the confusion: I meant that fragmede's comment was arbitrary and fictional, not the 90% figure. I was talking about these numbers:

  if it takes 1 hour to get one answer by hand, but only 20 minutes for the machine, and 20 minutes to check the answer, the user still comes out ahead


Oh, my bad—I misunderstood, thanks for the clarification


This is impacting online discourse too. It used to be that when someone is wildly wrong it was relatively easy to identify why: ideology, common urban myth, outdated research, whatever.

Now? I’ve seen people argue positions that are demonstrably, wildly wrong in unusually creative and often subtle ways, and there’s no way to figure out where they went off the rails. Since the LLM is responsive, they can use it to come up with plausible-sounding nonsense to answer any criticism, collapsing the debate into a black hole of bullshit.


I am not sure if "good rule of thumb" is a good standard of comparison. It's at best anecdotal.


I am not sure what you mean by this comment. I took the language from the developers. If you mean that commercial AI providers should give more specific information then I agree wholeheartedly.

I assume it is difficult for Elicit to give specific numbers because they lack the data, and confabulations are highly dependent on what research area you are asking about. So the "rule of thumb" is a way of flattening this complexity into a usage guideline.


Applying AI to basic education is more likely to succeed than applying it to research, one of the hardest fields in the world.

I think it's a little early to bring AI into research fields that demand real accuracy and rigor.


To any Elicians in the chat: I know from one of your previous blog posts that you guys use Vespa on the back end - would you be able to comment on your experience with it generally?


I've been disappointed in some of these services before, because in the early days (and I suspect still now) they used RAG, for obvious reasons. I feel like for research it's rare that RAG works really well.

What I'm really excited about is a tool like Elicit using the new Google Gemini 1.5 Pro/Ultra models with the 2-million-token context windows: filter down the papers using traditional search and high-quality metadata, then, critically, use prompt/activation caching to make the tool economically viable.

Maybe it won't work better, but I'm willing to bet it'll find those really specific ideas/needles in the haystack a lot more often than vanilla RAG will.
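Roughly, the pipeline being described is "filter with traditional search, then put the whole filtered corpus into one long-context prompt" instead of chunk-level RAG. A toy sketch under that assumption, where keyword_search and long_context_answer are made-up placeholders (neither is Elicit's nor Google's actual API):

  # Sketch of "filter with traditional search, then stuff everything into one
  # long-context prompt" instead of chunk-level RAG. Everything here is a
  # made-up placeholder.

  def keyword_search(query, corpus, k=50):
      """Toy stand-in for a traditional ranker (BM25, metadata filters, etc.):
      score papers by how many query words they contain."""
      words = set(query.lower().split())
      scored = sorted(corpus, key=lambda text: -len(words & set(text.lower().split())))
      return scored[:k]

  def long_context_answer(context, question):
      """Stand-in for a single call to a long-context model (1M-2M token window),
      with the whole filtered corpus in the prompt rather than retrieved chunks."""
      prompt = f"{context}\n\nQuestion: {question}\nAnswer with citations:"
      return f"<model output for a {len(prompt)}-character prompt>"

  corpus = ["Paper A: psilocin content of mycelium ...",
            "Paper B: recurring slope lineae on Mars ..."]
  papers = keyword_search("psilocin mycelium", corpus, k=1)
  print(long_context_answer("\n\n---\n\n".join(papers), "What was measured?"))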


It's the economically viable part that I think is currently hard, but agreed this is the right approach.

The basic problem is that scaling up understanding over a large dataset requires scaling the application of an LLM and tokens are expensive.


And the minutes of latency you currently get with these context lengths.


Yeah, this is why I mentioned Gemini with the context caching. It's not out yet, but supposedly launching soon. You pay a lower rate for storing the system prompt or whatever you dump in before the user query, plus you don't have to wait the full minute or so for all your research to be ingested every time.

https://ai.google.dev/gemini-api/docs/caching#get-started
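For reference, the flow in the linked caching docs looks roughly like the sketch below. It is based on the Python SDK of the time and should be treated as approximate: the exact class and parameter names may have changed, the API key and input file are placeholders, and caching only pays off once the cached corpus is large.

  import datetime
  import google.generativeai as genai
  from google.generativeai import caching

  genai.configure(api_key="YOUR_API_KEY")  # placeholder

  # Pay once to ingest the filtered papers, then reuse the cached prefix.
  corpus_text = open("filtered_papers.txt").read()  # hypothetical file
  cache = caching.CachedContent.create(
      model="models/gemini-1.5-pro-001",
      system_instruction="You are a careful research assistant. Cite sources.",
      contents=[corpus_text],
      ttl=datetime.timedelta(hours=1),
  )

  model = genai.GenerativeModel.from_cached_content(cached_content=cache)

  # Each follow-up question reuses the cached context instead of re-sending
  # (and re-paying for, and re-waiting on) the whole corpus.
  response = model.generate_content("What evidence supports liquid brine flows on Mars?")
  print(response.text)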


https://scisummary.com can get you pretty deep dives on individual papers. I've always found that these document search engines that are "AI-powered" just can't stand up to hand picking articles myself. Also, these are only open access papers, so anything behind a paywall will just be straight up missing.


Elician here!

Our main focus is a little different to SciSummary actually. We're focussed on understanding researchers' broader workflows, and providing a research assistant (i.e. rather than a particular narrow tool for summarisation or search).

The workflows we're most excited about at the moment are literature and systematic reviews: we think we can make these orders of magnitude faster and higher quality.


The name sounds like illicit.


elicit

transitive verb

1: to call forth or draw out (something, such as information or a response) her remarks elicited cheers

2: to draw forth or bring out (something latent or potential) hypnotism elicited his hidden fears

https://www.merriam-webster.com/dictionary/elicit


Indeed, I get the idea behind the name, but there is certainly a dash of irony here given that it is an LLM; who knows if it was "ethically" trained...


Actually, it's not an LLM!

We do use LLMs, but the secret sauce is an approach we call Factored Cognition which we wrote about here: https://ought.org/research/factored-cognition

(Elicit the company and app was spun out from Ought the research lab).

We do joke internally about the homophone (in fact, IIRC we played a little joke on our CEO by rebranding for his birthday in 2022) but I'm sorry to report that we're all careful, ethical, and well-behaved people :(
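For anyone curious what "factored cognition" looks like in practice, the Ought write-up linked above describes decomposing a question into sub-questions and composing the answers back together. A minimal, purely illustrative sketch (not Elicit's actual pipeline; the llm helper is a stand-in for a real model call):

  # Minimal illustration of factored cognition: decompose, answer sub-questions
  # independently, then compose. llm() is a placeholder, not a real client.

  def llm(prompt: str) -> str:
      return f"<model answer to: {prompt[:60]}...>"

  def decompose(question: str) -> list[str]:
      subs = llm(f"List the sub-questions needed to answer: {question}")
      return [s.strip() for s in subs.split("\n") if s.strip()]

  def factored_answer(question: str) -> str:
      sub_questions = decompose(question)
      sub_answers = [llm(sq) for sq in sub_questions]
      evidence = "\n".join(f"Q: {q}\nA: {a}" for q, a in zip(sub_questions, sub_answers))
      return llm(f"Using only the sub-answers below, answer: {question}\n{evidence}")

  print(factored_answer("What are the benefits of taking l-theanine?"))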


Cool, thanks for more info, nice to see other approaches. What data is used for training?


To be honest, having everything in research being so mechanized is starting to make research itself feel less magical and just about being another part in a machine. With that and the fact that most research today is relatively meaningless or focused on optimization of commercial products, I think science has left the domain of the purely curious and has been taken over by bean counters.

No interest in such a product of any kind.


I'm a patent attorney. One fantasy that I have is that AI will be a forcing function that will drive all the needless verbosity out of patents.

For example, every patent is 50 or 100 pages of what amounts to bland, unreadable background. The stuff that is actually new in the world is usually a tiny fraction of all the words.

In a world where everyone's AI can generate all those extra words, their value goes down. (Yes, this is bad news for people in my line of work who get paid to write all those words.) So maybe the profession moves in a direction that values conciseness and brevity, to re-add value to what the lawyers and agents do, that the AI can't do.

It's a fantasy. But the same kind of thing applies to academic publishing. Maybe the future of publishing is very short, useful documents with rich, tiered, AI-generated hyperlinks on every word.


Dr. Polak raises a good point, as do you. Concerns over the mechanization of research are valid. But there are domains (such as yours) where they are useful. The humanities may suffer.

I don't feel that jargon, legalese and newspeak are going to vanish. On the contrary, the value of these words may actually increase due to the appeal that they possess by way of gesturing toward authority, virtue and the like. There is a likelihood that AI will only contribute to their proliferation, just in the same terse manner that you are describing. Densely packed compositions of inarticulate gobbledygook that serves none, yet is served by many.

We are on the cusp of a fissure in the field of knowledge work led by AI tools. I'm inclined toward the more manual labor that favors a dextrous approach to "information overload", but there are opportunities to leverage these tools as a sort of valve for copious amounts of information as well. The technology isn't going anywhere, so hopefully Elicit is a useful product that will serve the interests of the conscientious.


I wouldn't expect that to happen. In a world where AI makes it both easier to generate and process all those extra words why would they actually decrease? It seems like the motivation would be stronger to reduce it if it's humans doing the majority of the generation and processing.


I like that scenario. But wouldn't AI just explode the verbosity of patents instead? The value of the humans' word count might go down, so they write less (fewer billable hours), but what they write is what is fed to the AI to generate a verbose patent, so the patent itself becomes more valuable. This is assuming the value of a patent is in its chances to (a) be granted (it is not missing any relevant background or caveat), (b) stop competitors (it is hard to make sure it is not infringed), and (c) be upheld if challenged (it is complete and correct). This is where society needs to step up and legislate so these new tools are not abused and the value to society takes precedence, so patents are concise and help make inventions known and understood.


> In a world where everyone's AI can generate all those extra words, their value goes down. (Yes, this is bad news for people in my line of work who get paid to write all those words.) So maybe the profession moves in a direction that values conciseness and brevity, to re-add value to what the lawyers and agents do, that the AI can't do.

I don't understand that. If people bother to generate those words when they're expensive to write, why would they stop when they become cheap?


Because the perceived value will go way down when it is common knowledge that AI is being used to generate them.

My contention (that I see some people don't like!) is that they don't have much value, regardless of who writes them.


The value dimension of text is not the volume of words but rather the signal-to-noise ratio, aka information density.

It does not matter whether a human or an AI produces a high-signal synthesis from a lower-signal body of work. The signal boost is value creation.

The underlying debate is whether AI is capable of signal-boosting arbitrary works more effectively than humans. I'd say presently, it is not. Even with the most powerful models, consistency and reliability across work types and lengths are far from business-grade.

If that were to change, the fact that an AI boosts the signal does not globally deflate the value of text. The value of text is determined by the signal, not the signal producer.

The main issue right now is that AI is not reliably boosting information signals, and therefore, most serious professionals probably still prefer to read original works.


I'm not saying that text has value, it just seems really unlikely that making something cheaper will lead to less of it. Even if the perceived value is down, I assume people would keep doing it because it costs so much less.


Well, I agree that it won't happen automatically.

My hope is that AI will cause the patent legal community to undergo a paradigm shift and place value on conciseness and brevity.

In the US, this would require Congressional and judicial buy-in. LOL!!!

Notice that I used the word "fantasy" above.


Unfortunately, AI could lead to more verbose everything, because it also enables us to distil huge amounts of text. I forget where I read about this, but it has been shown that digitalization enabled the tax code to go from something one person can read and understand to something completely unmanageable (possibly in The Utopia of Rules: On Technology, Stupidity, and the Secret Joys of Bureaucracy), because digital tools allow us to deal with more tax code. Similarly, doctors, teachers and many other occupations are drowning in administration duties that were enabled by digital technology.

I hope I'm wrong. Eventually, when only AI produces and consumes text, it could be more brief, but at that point will it matter? Eventually a less ambiguous format could be used, something more like code, that computers and AI can consume and apply.


The opposite happens with technology. As the cost goes down, usage goes up. (Jevons paradox)



