
Elicit | Senior Software Engineer | Oakland or Remote | https://elicit.com

Elicit is the leading AI research assistant. We automate high-quality reasoning so that humanity makes more breakthroughs in every domain: from climate change to the gut microbiome to longevity and economic policy.

We’ve scaled to over 200,000 monthly users by word of mouth and just crossed $1MM in annual revenue, 5 months after launching subscriptions.

We’re now building out the core software engineering team.

Apply at https://elicit.com/careers?gh_jid=6920548002


Are you open to globally remote candidates, or do they need to have residency in the US?


Ought | Infrastructure Engineer | Remote/hybrid | https://elicit.org/

Ought is using large language models to build Elicit, an AI-powered research assistant that automates and extends parts of the scientific research process. We are part startup, part research lab, and we're dedicated to making tools which help people think.

The ideal candidate has several years of experience working on infrastructure: e.g. production and test environments, ML model management, data pipelines, CI/CD, Kubernetes, GitOps, etc.

Apply here: https://ought.org/careers/

If you're not sure if it's a good fit, email me to check (andreas@ought.org).


Remote where?


Ought | Full-Stack Software Engineer | Remote/hybrid | https://elicit.org/

Ought is using large language models to build Elicit, an AI-powered research assistant that automates and extends parts of the scientific research process. We are part startup, part research lab, and we're dedicated to making tools which help people think.

Apply here:

- Full-stack Software Engineer: https://ought.org/careers/software-engineer

- Machine Learning Research Engineer (NLP): https://ought.org/careers/ml-engineer

If you're not sure if it's a good fit, email me to check (andreas@ought.org).


Same! Pretty sure there will be tons of products for semantic search over local & cloud files within the next 1-2 years.

For now we've chosen to build Elicit around aggregating and synthesizing research - what is the evidence in each paper, what are the arguments, how do they come together to inform questions people care about?

We think research is a good starting point for figuring out how to use language models to do high-quality reasoning because (a) researchers care a lot about what's true and (b) research is already more structured than most other sources of knowledge.


I would be fine with something that would sort and structure my collection of research papers as a starter. :-)

Actually, it would be interesting to mix a world-knowledge base like Cyc into such a tool to make querying more powerful, and maybe even enable deduction of additional information from the recognized facts and structures.

But it should still run locally, of course. I refuse to pay for any SaaS offerings.

Especially when it comes to research with a commercial background, it's also a matter of confidentiality. So I guess I'm not the only one who would prefer to have such a thing strictly on-prem.


Cofounder here - Elicit is an AI research assistant. Right now it's focused on helping people answer research questions by searching over claims instead of papers.

You can also teach Elicit to do custom tasks by giving input-output examples of intended behavior, e.g. for brainstorming counterarguments or decomposing research questions.

Our mailing list at list.elicit.org has more about what we've recently been adding.


I did sign up. It's an interesting service; I tried a couple of queries and got the kind of results I expected.

Not related to the service itself, but I strongly dislike those random (seemingly personal but actually not) automated emails that start coming in 15 minutes after signing up. This is spam. Please don't spam.


What’s the corpus that’s searched?


Semantic Scholar via API (195M papers)
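
For anyone who wants to query the same corpus directly, Semantic Scholar's Graph API is public. A minimal sketch in Python (endpoint and field names as per Semantic Scholar's public docs, not Elicit's internal integration; check their current documentation and rate limits):

    import requests

    # Search the Semantic Scholar corpus for papers matching a query.
    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={"query": "gut microbiome longevity", "fields": "title,year", "limit": 5},
        timeout=30,
    )
    resp.raise_for_status()
    for paper in resp.json().get("data", []):
        print(paper.get("year"), paper.get("title"))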


Does that include medical info? I have a friend building a pro-bono Multiple Sclerosis information database who might be interested.


Yes, includes PubMed, medRxiv, and a few others. The list of publishers is at https://www.semanticscholar.org/about/publishers


For web search, are you using the Bing API? If so, do you have thoughts from the private beta on how it compares to Google - where it does similarly/better, where it does worse?


Great question: we first rank apps, and then within apps we have some ranking of our own, but web results, for example, come from Bing. We've found that when non-web-result apps trigger in the top 2, we are often as good as or better than Google.

Web results by themselves are a mixed bag, which is why we built lots of custom apps for developers, e.g. StackOverflow (with code snippets), W3Schools, MDN, Copilot-like code completion, and JSON checkers.

You can find the list of apps in our FAQ: https://youdotcom.notion.site/FAQ-8c871d6c99d84e02955fda772a...


Whenever the subject of personalising search results comes up in technical communities, the most common thing I hear is people saying they wish they could remove W3Schools from their search results automatically. Specifically that particular site. It’s got a terrible reputation. People have even built browser extensions to do it:

https://www.google.com/search?q=remove+w3schools+from+search...

In that context, it seems strange that you consider this a selling point when so many people regard the inclusion of this site in their search results as a failure. Perhaps pick a different site to keep mentioning? It doesn’t give the best impression when you proudly say “Hey, you know that site you hate? We give it special priority!”

Is it possible to remove this site from your results entirely?

Also, it’s not clear what “JSON checkers” means in the context of a search engine.


Yeah. All sources and apps can be preferred or disliked, and that will impact the ranking. StackOverflow might be more popular :)

You can test your JSON files to see if they're valid via the JSON checker app.
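
For what it's worth, that kind of validity check is easy to run locally too; a generic Python sketch, not you.com's implementation:

    import json

    def is_valid_json(text: str) -> bool:
        # Valid JSON parses without raising; anything else is rejected.
        try:
            json.loads(text)
            return True
        except json.JSONDecodeError:
            return False

    print(is_valid_json('{"a": 1}'))   # True
    print(is_valid_json('{"a": 1,}'))  # False: trailing comma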


Why have you put a JSON checker into your search engine? These are two entirely different things.


Or just let a user remove a domain from results.


Alex Irpan's post inspired the AGI timelines discussion at https://www.lesswrong.com/posts/hQysqfSEzciRazx8k which shows 12 people's timelines as probability distributions over future years and their reasoning behind the distributions.

(I work on Elicit, the tool used in the thread.)


Have you tried plotting the CDFs? Might be easier to read than the overlaid areas.


Good idea. We'll integrate that into Elicit in a few weeks. In the meantime, here's a Colab that shows the CDFs: https://colab.research.google.com/drive/1pl3fIaeIKIS77IDM_rn...
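
For reference, going from each person's per-year probabilities to a CDF is just a running sum; a toy sketch with made-up numbers (not the actual notebook):

    import numpy as np
    import matplotlib.pyplot as plt

    years = np.arange(2025, 2101)
    # Hypothetical per-year probability masses for two forecasters (each sums to 1).
    forecasts = {
        "Forecaster A": np.random.dirichlet(np.ones(len(years))),
        "Forecaster B": np.random.dirichlet(np.ones(len(years))),
    }
    for name, pmf in forecasts.items():
        plt.plot(years, np.cumsum(pmf), label=name)  # CDF = cumulative sum of the PMF
    plt.xlabel("Year")
    plt.ylabel("P(AGI by year)")
    plt.legend()
    plt.show()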


Ought | https://ought.org | Software Engineer | San Francisco (onsite)

We're a non-profit AI research lab. Our goal is to make machine learning solve tasks where success can’t be reduced to a simple metric. We're working towards a future where ML is as helpful for open-ended questions like “Should I get this medical procedure?” and “What career is right for me?” as it is for optimizing ad click-through rates.

To do this, we build systems that decompose thinking about hard questions into small subtasks, some of which can be automated. We then compositionally build complex thoughts out of these simple pieces. Humboldt talked about natural language as a system that "makes infinite use of finite means" -- an infinite number of sentences can be created using a finite number of grammatical rules. At Ought, we work on mechanisms that have similar flexible compositionality.

We'll pay a $5,000 referral bonus to whoever refers the person we end up hiring for our team lead role (careers@ought.org, terms: https://bit.ly/2lw3Q8w). Our benefits and compensation package are at market with similar roles in the Bay Area.

Apply here:

- Software Engineer: https://ought.org/careers/software-engineer

- Engineering Team Lead: https://ought.org/careers/engineering-lead


Ought | https://ought.org | Engineering Team Lead | San Francisco (onsite)

We're a non-profit AI research lab. Our goal is to make machine learning solve tasks where success can’t be reduced to a simple metric. We're working towards a future where ML is as helpful for open-ended questions like “Should I get this medical procedure?” and “What career is right for me?” as it is for optimizing ad click-through rates.

The core pillar of our research is Mosaic, an app for decomposing thinking about hard questions into small subtasks. We compositionally build complex thoughts out of simple pieces. We want to get to the point where automated aggregation of individual thoughts leads to something that is more than the sum of the pieces.

Humboldt talked about natural language as a system that "makes infinite use of finite means" -- an infinite number of sentences can be created using a finite number of grammatical rules. As engineering team lead at Ought, you're working on mechanisms that have similar flexible compositionality.

We'll pay a $5,000 referral bonus to whoever refers the person we end up hiring (careers@ought.org, terms: https://bit.ly/2lw3Q8w). Our benefits and compensation package are at market with similar roles in the Bay Area.

Apply here: https://ought.org/careers/engineering-lead

