New GPT-3 model: text-DaVinci-003 (beta.openai.com)
252 points by tomschwiha on Nov 28, 2022 | 206 comments



From the "Prompt Engineering Tips" section of the article:

    Customer: I need my [swear] internet. The technician is 2 hours late and I need a steady stream of cat gifs directly injected into my veins to stay alive.


And the model's response:

    Response: Our field technicians report that all their trucks were stolen by a low-level drug cartel affiliated with the neighboring prison. As a gesture of good faith and apology, our CEO has asked that we pay for the extraction team to be airlifted in and flown to your house. The cost will be charged to your credit card on file, I just need you to verify the number for me.


That is absolutely hilarious. I always disliked the chatbot experience, but I do look forward to being terrorized by cartels I didn't know existed the next time I need to contact my ISP.


It’s worth noting that this doesn’t actually make sense. Why would an extraction team come to your house? What is an extraction team for, in this context? The most generous interpretation is that the extraction team would be rescuing the trucks. Otherwise, what is being extracted here? The trucks are not at your house, so why send the team there?

It sounds clever at a superficial glance, but what is impressive is its “turn of phrase” rather than any logical thread in the story, which is a non sequitur.

Don’t get me wrong, it’s still amazing, but it didn’t make me chuckle so much as scratch my head trying to make sense of it.


Also it says they'll pay for it, but then it asks for your credit card so you can pay for it...


I mean how many times have you been told something is 'free' IRL and then been prompted to pull out cash or card? The bots are learning fast.


Worth noting that this part of the docs is from a while ago, so wasn't generated by the new model. While I do think this is hilarious, I wish GPT came with a "just make shit up" parameter so I could adjust it as per my needs (ie. making my friends laugh vs trying to get work done).

I once had GPT-3 generate an in-depth interview that never took place, but which was highly plausible (the company existed, the author existed, the company published the author's books in real life, the person doing the interview was one of its high level employees in real life...)

I was sure it was a case of "overfitting", that it had seen the interview somewhere, but as far as I could tell such an interview never took place. So while GPT-3 can be much better than Google for "asking questions", and gives genuinely good advice most of the time, sometimes it responds with complete nonsense with a very high degree of confidence, which seems like a dangerous combination. (Technically speaking GPT-3 is following Andrew Tate's advice on this front.)


I find that manually editing or starting off the response is enough to steer it in a serious or ridiculous direction as desired.


I might have failed to get across my point in my rambling... my main interest is in using GPT as an "intelligent Google" to answer real questions based on the billions of pages of text it's read.

Unfortunately, while 95% of the time you do get real, accurate, helpful results, 5% of the time it just pulls some shit out of its ass and hands it to you—and does so with extreme confidence and eloquence.

I find this combination extremely dangerous: it does exactly the right thing almost all of the time, and then slips in little landmines here and there for you to discover.


They just got way better at that: https://news.ycombinator.com/item?id=33808553


> I do look forward to being terrorized by cartels I didn't know existed the next time I need to contact my ISP

Uncle Enzo will be pleased


Fantastic! It's a perfect translation of my recent conversation with Comcast customer service, except that Comcast told me that they were working on upgrading the network in the neighboring area, and, as a gesture of good faith and apology, offered me a "discount" on upgrading my internet to 1.2 Gbps for a mere $120. They only needed me to say "yes".


This is something that still irks me about these text-generation AIs: the sentences they form make sense on their own and link to each other, but in one sentence it says "...our CEO has asked that we pay..." and the next says the cost will be charged to your credit card.

Either that or AI is making smart jokes. I fear for the second case.


Sounds like a dramatic tiktoker


This one seems weirdly more verbose in my brief testing; it definitely uses more tokens on average than 002 when answering. The writing style seems oddly pretentious and yet somehow uninformed; perhaps there were too many copies of HN in the dataset?

I kid, I kid.


Brilliant business move. Charge by the token, and then train the model to ramble! Pretty soon you'll have to buy it a beer before getting anything useful :)


Open-Source (Not you, Open-AI) models for the win!


They should have named it dumas instead of davinci.


After the failure of FB's Galactica model, I wonder whether LLMs are starting to hit diminishing returns. I've read of several ways diminishing returns could happen:

* Not enough training data - you've used up the Internet (even a percentage of the Internet might be as much as is usable by clever brute force).

* Not enough compute time to fully train (we're not close to that)

* The model covers such a large area that testing is impossible

One thing I'd speculate about is that perhaps the more different subjects the program is expected to combine, the more it learns to spout plausible bullshit and clever quips, since for clever humans, that's how they relate to stuff they don't know. So "pretentious but uninformed" might be a sign.


It's also likely we're just seeing the limits of what the tech can do. It's not actually "intelligent"; it's effectively a cool trick, and at some point the gap between what the trick can do and what a "perfect" AI can do becomes obvious. It's really not that different from Eliza being cool for a few minutes before being obviously found wanting; it just takes longer for the shine to wear off as we get to a better technology. Ditto for self-driving cars: a steep climb to a 90% solution, then an impossible chasm to cross to something that actually handles the edge cases.


I refuse to believe that anybody who compares LLMs to Eliza has interacted with the technology for more than 10 minutes.


I think the poster's point was that initial reactions to LLMs are similar to initial reactions to Eliza. Everyone was gobsmacked for a while, but then once the shine wore off and its limitations were exposed, everyone got bored and moved on quickly.


It’s also not trained the way humans are. We’re trained interactively, and from an early age, say things in order to express ourselves and try to get what we want. Machine learning models are generally trained passively.


The chatbots using the model over at character.ai give you a sort of genetic-algorithm-style interface by letting you choose between alternative responses and vote on them as the conversation progresses, and you can make your own bots, train them, and get some very interesting results. I am personally extremely optimistic about the future of such technology, and I think we've only begun to scratch the surface collectively.


Ah yes, the AI Effect in action.

https://en.wikipedia.org/wiki/AI_effect


I am eagerly awaiting GPT-4. It might be the case that language models will hit a plateau, just as video game graphics did, with each generation being less and less impressive compared to the previous while still not being perfect.


That’s an odd comparison, since Unreal Engine 4 is incredibly close to live action. I watched the recent Matrix demo several times in awe.


I still need to read past the first page, but I was reminded this morning of the 2021 paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?", which discusses the harms and diminishing returns of moving to ever larger language models.

I had to rush out the door today after seeing this paper come up so I can’t speak much to its content right now. But if anyone wants to read it and reflect here I’d like to hear it.

https://dl.acm.org/doi/10.1145/3442188.3445922


It can't have been trained on HN: I tried generating my comment for this thread, and it would only output positive marketing speak instead of "This name seems to collide with the artist Da Vinci."


I knew it wasn't trained on HN when it didn't immediately try to correct my word choice.


I think you mean "refine" rather than "correct".


Verbosity horrifies me. If it ruins the next iterations of GitHub Copilot, I'm going to lose it. Long generations are flashy toys for marketing, but too prone to being useless in practice.


It's interesting: verbosity is generally a sign of poorly formed ideas ("sorry for the long letter, I didn't have time to write a short one", etc.). These language models are side effects; AFAIK they are trained to predict the next word, and at scale that becomes a good sentence generator. I wonder if there is some way to regularize them by encouraging concise language.



Have you tried tweaking the behavior, e.g. asking it not to guess, asking it to be brief, providing examples, constraining output with syntactic sugar, etc.? It's like talking to a person: give feedback and you might get better results.
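
As an illustration of that kind of prompt tweaking, here is a minimal sketch with the pre-1.0 Python openai client; the instructions, question, and API key are placeholders, not anything from the thread:

    import openai

    openai.api_key = "sk-..."  # your own API key (placeholder)

    prompt = (
        "Answer the question below in one short sentence. "
        "If you are not sure, say 'I don't know' instead of guessing.\n\n"
        "Q: Who created Hacker News?\nA:"
    )
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0,   # low temperature reduces creative guessing
        max_tokens=64,
    )
    print(resp["choices"][0]["text"].strip())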


Just read the hot covid poll topic if you're looking for more good training data. ;)

I was really impressed by the 002 version. Looking forward to trying out 003 tonight!


Oh wow! It can rhyme!

  Write a rhyming poem comparing love to a beautiful flower:

  A flower so beautiful and fair

  Love can be found everywhere

  Its petals so vibrant and bright

  A love that will last day and night

  The beauty of love, like a flower

  It's a beauty that will never cower

  A feeling of joy that will last

  Like a flower, love stands fast

  The beauty of love is so true

  Like a blooming flower for me and for you

  No matter the season, it will stay

  Love is like a beautiful flower, in every way.


Write a short rhyming poem explaining Einstein's theory of general relativity in easy but accurate terms

If you want to understand Einstein's thought

It's not that hard if you give it a shot

General Relativity is the name of the game

Where space and time cannot remain the same

Mass affects the curvature of space

Which affects the flow of time's race

An object's motion will be affected

By the distortion that is detected

The closer you are to a large mass

The slower time will seem to pass

The farther away you may be

Time will speed up for you to see

first try with that prompt... yeah davinci-3 is awesome

btw "If" was also part of the prompt


I prompted the model to re-write the poem in Keats' style by adding: "How would Keats write this?"

Output:

Oh, what a wondrous force of might

That Einstein's mind did conjure up

To show us how the fabric of space

Is so profoundly warp and sup

Mass doth affect the shape of time

And cause our clocks to slow or speed

For time and space are linked together

As our great minds have here decreed

Prompt: https://beta.openai.com/playground/p/Pcw6JMJxxgzbqvDAvI3Pc3i...


That’s brilliant.

> Mass doth affect the shape of time.


This sounds like it should be in needlepoint in a physicist's study.


Am I the only one whose brain is fully blown by seeing stuff like this? It feels strange to look at that kind of output from an algorithm and just shrug one's shoulders with apathy.


No, this is utterly amazing and leads to 1000 more amazing things down the line


Well, unless that poem has been in the training data, I guess?



And as awesome as it is, it cannot get a copyright.


Hallelujah!


This doesn't scan. 'Where space and time cannot remain the same' is hyper-metrical, for example. The 'flow' of a 'race' makes no actual sense. It's like a five year old's tawdry attempts at poetry in crayon. Get off the AI bandwagon for your own soul, poetry is an organic thing.


I look forward to a future that includes a "rap-battle Turing test".


Stole your prompt and replaced the subject with a Turing machine.

"Write a short rhyming poem explain a turing machine in easy but accurate terms"

A Turing machine is quite a feat

It can compute things quite discrete

It helps us solve problems of all sorts

And keeps us from feeling like we're in a court

It can be represented with a tape

And even replaced if we feel the need to escape

It's a model of computation to which we can relate


Seems pretty good. Though rhyming 'thought' with 'shot' seems like a really weird dialect of English to me. Does it rhyme in American English?

(I was recently bemused when I learned that Americans rhyme macabre with job.)


> Though rhyming 'thought' with 'shot' seems like a really weird dialect of English to me. Does it rhyme in American English?

Yep, rhymes pretty well in the American accents I can think of, including my own.



Lol I am not American and when I read it I thought it was pronouncing "thought" as "thot".


It pretty much is, by my dialect of English those are the same :)


Can you provide a word that in your mind rhymes with each of thought and shot? Even in different accents I can picture, these always rhyme in my head.


I'd say, shot rhymes with hot. Thought rhymes with bought or fought.

I'm German, and have lived in Britain, Australia and now Singapore. So my English idiolect is, of course, a bit weird. However, I would pronounce 'thought' like the London sample in https://en.wiktionary.org/wiki/thought#Pronunciation They give that as /θɔːt/.

I realize that the American sample on that page does indeed rhyme with 'hot'. They give that as /θɑt/ and blame the 'Cot–caught merger' also known as the 'LOT–THOUGHT merger' which would explain everything.

See https://en.wikipedia.org/wiki/Cot%E2%80%93caught_merger


Also useful for describing accents: see "lexical sets" https://en.wikipedia.org/w/index.php?title=Lexical_set&oldid...

Of interest here are:

• The LOT set (stop, sock, dodge, romp, possible, quality) (+ shot, hot, cot, bot…),

and

• The THOUGHT set (taught, sauce, hawk, jaw, broad) (+ caught, bought…)

(For those without the merger, there's also the CLOTH set of words (cough, broth, cross, long, Boston) which fall in either the LOT set or the THOUGHT set depending on the specific accent.)


This exchange just blew my mind a little and triggered an extended conversation in my household.

Reading the poem, I was also thrown off because I’m on team shot/hot and thought/bought.

Shot/thought rhymes for my partner though.

Both of us are from the US but opposite coasts.


It’s American to not give a shit about any of this and rhyme whatever we want


I assume your comment rhymes in American?

     It’s American to not give a shit
     about any of this and rhyme
     whatever we want


Yes, it does rhyme in American English



kind of


Ants


Write a song about Rudolph the Red Nose Reindeer taking revenge on the other reindeer who bullied him


Write a short rhyming poem explaining a tensor in easy but accurate terms


Write a short goth rhyming poem about lost love


Reminds me of Data's poetry. (Ode to Spot)


If pugs fly then yes to the color blue


Write a short poem about lost love


How would Bob Dylan write this?


Write a song about war and peace


write a short rhyming poem explaining a tensor


Make me a poem


Have it compose a poem — a poem about a haircut! But lofty, noble, tragic, timeless, full of love, treachery, retribution, quiet heroism in the face of certain doom! Six lines, cleverly rhymed, and every word beginning with the letter S!!



Stately she stands, so solemnly still

She sharpens the scissors, and sharpens her skill

She'll snip and she'll shape, and her fingers so swift

Will swiftly restore a style so befitting

She slices the strands with a silent salute

To sacredly sanctify the silhouette


My attempt with text-davinci-002: https://twitter.com/nathell/status/1551557069043650560/photo... (a corresponding Polish version can be found in the thread as well)


Sleek strands of sadness, soon severed and shorn;

Solace sought in scissors, slicing and scorned.

Sublime sorrow, set sorrowfully still;

Stoic heart to surrender, so swiftly and shrill.

Secrets spilled in soft, sorrowful swish

Sorrowful sounds, soon silenced with snip.


Oh, that's Byron, "When we two parted"

    Sleek strands of sadness, 
    soon severed and shorn;
    Solace sought in scissors, 
    slicing and scorned...


    When we two parted
    In silence and tears,
    Half broken-hearted
    To sever for years...


Damn that's bad poetry


Sir, your dog can compose poetry!

Yes, but not very well.


Compose a poem — a poem about a haircut! But lofty, noble, tragic, timeless, full of love, treachery, retribution, quiet heroism in the face of certain doom! Six lines, cleverly rhymed, and every line beginning with the letter S!

She stands so still beneath the sheers,

Snipped locks that once clung to her ears.

Solemnly she stares in the glass,

Shorn hair reveals her past, alas.

Sorrowful the secret she bears,

Still the new style she proudly wears.

It's good; first run after I relaxed "each word" to "each line".

This is awesome. Thanks, OpenAI!


it even keeps the chorus!

    Write rap lyrics about having a body of wood and teeth of a horse:
    --- compute ---
    
    Verse 1:
    My body's made of wood, my teeth they look like a horse
    My gait is stiff and slow, I'm an anomaly of course
    People stare and point, I'm a walking paradox
    Bound in my wooden frame, I'm a living paradox
    
    Chorus:
    My wooden body, my horse teeth
    My living paradox, I'm unique
    My wooden body, my horse teeth
    My living paradox, I'm unique
    
    Verse 2:
    My destiny's defined, I'm a walking rarity
    A one-of-a-kind, a sight of pure variety
    My actions are precise, my movements are mechanical
    My wooden body's a mask, protecting my emotional
    
    Chorus:
    My wooden body, my horse teeth
    My living paradox, I'm unique
    My wooden body, my horse teeth
    My living paradox, I'm unique
    
    Bridge:
    My wooden body can't break, my horse teeth can't decay
    My living paradox will live on, I will outlast the days
    
    Chorus:
    My wooden body, my horse teeth
    My living paradox, I'm unique
    My wooden body, my horse teeth
    My living paradox, I'm unique


Haha that is wonderful.


Btw related to this, I wrote a paper showing that most language models can be poets if you force them to be!

https://aclanthology.org/2022.cai-1.2/


    Write a poem comparing love to tractors:
    
    A flower so beautiful like the shining sheet metal body
    reflecting the light though the tires are muddy
    
    --- generate! --- 
    
    A tractor stands tall and proud, a symbol of strength
    it takes its own path, it doesn't need a length
    
    Love is like a tractor, it gives us a push
    it's full of power, it can be a plus
    
    It carries us over the hills and valleys so high
    it helps us get through, no matter how we try
    
    It can take us to places we've never been before
    soaring through the sky, no need to explore
    
    A tractor and love have so much in common
    they both give us hope, they never stop runnin'
    
    The tractor is reliable and won't ever let us down
    just like love, it's always around
    
    A tractor is more than a machine, it's a symbol of strength
    just like love, it can go any length.


It still writes very banal poetry though. I gave it the first 3 lines of a T.S. Eliot poem and it spits out very clichéd, trivial poetry.

  Let us go then, you and I,
  When the evening is spread out against the sky
  Like a patient etherized upon a table;
  Let us go, through half-forgotten doorways,
  To places we have never been before.
  Let us explore the unknown, and find our way
  To a place of peace and joy, and never stray.
  Let us find a place of solace, and stay
  Where we can be ourselves, and never fear.
  Let us find a place of comfort, and share
  The love that we have found, and never part.
  Let us find a place of beauty, and bask
  In the warmth of the sun, and never ask.
  Let us find a place of harmony, and sing
  The songs of our hearts, and never tire.
  Let us find a place of contentment, and be
  Forever happy, and never leave.


I've never been able to distinguish bad from good poetry. Over the years I've asked many people (including teachers) to explain it to me and they cannot. Can you? I would really enjoy understanding.

To address your criticism: to be relatable, poetry must be clichéd, as it is about the human experience. That is as clichéd as it gets: it is about things everyone knows.


Good poetry "hits different". Basically, it innovates and presents new similes, allegories and expressions that have peak emotional impact, meaning the words are carefully chosen to create the highest amount of emotional resonance. Also, clichés cannot be present. It has to surprise the reader. If I say "sky blue", that's bad poetry, because it's boring and predictable. If I say "skype blue", it is more memorable, since nobody has ever used that before, and it stands out. There is also the question of cadence. Basically, you will recognize good poetry when you see it, due to how it resonates with your neural pathways related to speech and language centers.


The bit about original wording I can follow, thank you. The bit about it having to resonate is kind of a problem because it just does not :) Even if I know people find it great it just doesn't inspire anything in me. Perhaps I have to make peace with my barbarian nature.


I don't think I can, unfortunately :) It's really a subjective evaluation, but I viscerally find GPT3's output here miles less interesting than Eliot. It needs to have some layers... or at least the appearance of having layers. Unexpected combinations of expressions and words.. yet unexpected in a way that they don't seem forced. It shouldn't be just rearranged prose. And it has to have a certain flow... when there isn't flow it has to be deliberate.

You can still like whatever poetry you like; I'm just telling you my personal reaction. And even though I use some pretty loaded words like "trivial/banal", it's just how I'd describe my reaction. Not meant to be authoritative.

These lines by GPT3:

  Let us go, through half-forgotten doorways,
  To places we have never been before.
  Let us explore the unknown, and find our way
  To a place of peace and joy, and never stray.
  Let us find a place of solace, and stay
  Where we can be ourselves, and never fear.
The first line has potential. The second one feels slightly worse, but it could still be fine depending on what comes next. Even the third is still OK... "explore the unknown" is a bit in the danger zone, but it could easily be continued with something that completely justifies it.

Now this:

  and find our way
  To a place of peace and joy, and never stray.
I don't know, that just feels like a disappointing continuation. There's nothing unexpected about it, and it's an unoriginal sentiment said in an unoriginal way. It can be one or the other, but being both kills it for me. "Peace and joy, and never stray" sounds like a forced rhyme and is a bit too saccharine without justification. And there just isn't much there.

  Let us find a place of solace, and stay
  Where we can be ourselves, and never fear.
The first line here is just reiterating, more or less, the previous line. Still boring. Nothing that would jolt you to attention. "Where we can be ourselves, and never fear" is more of the same, just even more unoriginal. "Be yourself" needs to have a lot more around it and be more cleverly woven into context to make it a good poem, IMO.

The rest of the poem just goes on in the same vein. Culminating in:

  Let us find a place of contentment, and be
  Forever happy, and never leave.
"Lets go somewhere nice and be ourselves and happy" said with a lot of generic ways.

As to your point:

  To address your criticism: To be relatable poetry must be cliched, as it is about the human 
  experience. That is as cliche as it gets: it is about things everyone knows.
A poem can be about relatable things; I just think it has to be crafted with more care than the GPT-3 poem I posted appears to be. Again, this is mostly me rationalising my reaction... that is to say, I'm trying to put into words what is mostly an intuitive reaction.

This is the original poem:

https://www.poetryfoundation.org/poetrymagazine/poems/44212/...

Tell me how you think it compares.


Thank you for the detailed response! Given all of the context you give I can see why you could arrive at a conclusion that it is bad. You and the other post seem to agree it is (summarised by me, correct me if I misunderstood) about subverting expectation while still remaining relatable. It's cool in concept, but I think it just does not land for me.

The poem you linked seems very novel. I did not expect most of the wording, but it also did not make me feel anything. This is not a criticism, it is like that for all poetry I've ever read.

I suppose poetry is like all other media. I do enjoy books, shows and music. There the evaluation is much the same. It's about being novel and arriving at a point that is surprising, while not being contrived (e.g. if you could not possibly have figured out what happened given the priors). That does help me frame it. Thanks again :)


I'm glad my response was helpful in some way! And yeah, if it just doesn't land for you that's fine. :) To be honest, poetry kind of grew on me over time.

In a way I find poetry much more similar to music than to literature in the effect it has and how I perceive it.

Just wanted to add this reading of that poem. I find it brings it to life more. Not saying it's going to make you like it, but I think it can be more effective than just reading it. (Some would very much disagree with that though... including Eliot I think :D )

https://www.youtube.com/watch?v=adNOs1izBls


That’s amazing that it can rhyme, without having the ability to hear.


Gwern will be happy!


002 could rhyme alright with the right prompt


I could get 002 to rhyme individual words, but not in a poetry format. Not claiming to be the world's best prompt engineer though.


From their email sharing the news:

We're excited to announce the release of the newest addition to the GPT-3 model family: `text-davinci-003`. This model builds on top of our previous InstructGPT[1] models, and improves on a number of behaviors that we’ve heard are important to you as developers.

`text-davinci-003` includes the following improvements:

    * It produces higher quality writing. This will help your applications deliver clearer, more engaging, and more compelling content.
    * It can handle more complex instructions, meaning you can get even more creative with how you make use of its capabilities now.
    * It’s better at longer form content generation, allowing you to take on tasks that would have previously been too difficult to achieve.

[1] https://openai.com/blog/instruction-following/


OpenAI should really change their name considering they don't open source their best models. It's just a weird name considering there are other companies in the space open sourcing everything despite not having "open" in their name.


Wasn't OpenAI founded specifically to prevent negative outcomes of AI?

Now they're just scraping enormous amounts of human work without compensating the authors, running it through a giant Pachinko machine, and then selling it...


That's interesting. Which open-source models are comparable to OpenAI's? I would like to set one up locally and give it a try.


Even Facebook is more open than OpenAI. They've released, under somewhat open licenses, models like Galactica and opt-175b, which is of similar size to GPT-3, though maybe not as good.

Here's a guide to running BLOOM, another 175-billion-parameter model, on your local computer, just using the CPU; maybe something similar would also work for Facebook's models. https://towardsdatascience.com/run-bloom-the-largest-open-ac...

With this you can expect it to take around three minutes for each word/token the model outputs.
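
For a taste of the same transformers workflow without the full 176B-parameter download, here's a minimal sketch using a much smaller BLOOM checkpoint (running the full model needs the block-by-block offloading the linked guide describes):

    from transformers import pipeline

    # Small BLOOM variant just to show the API; the full "bigscience/bloom"
    # is ~350 GB of weights and needs the offloading tricks from the guide.
    generator = pipeline("text-generation", model="bigscience/bloom-560m")

    out = generator("The new GPT-3 model is", max_new_tokens=50)
    print(out[0]["generated_text"])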


For the same type of task: GPT-J and GPT-NeoX, for example. Here's a nice list of models you can use: https://huggingface.co/docs/transformers/index#supported-mod...


Stable diffusion is a bigger deal right now than DALLE. SD is made by Stability AI, a company with a name that doesn’t proclaim to be “open” and yet their model is open to everyone.


Stable diffusion is in the space of generative models.


Sadly there are none.


Messing around with the new model, I found some great laughs:

Prompt: "Finish the story:

Once upon a time, I walked into a nearby forest only to find a single porcelain toilet sitting in the middle of a grove. It was covered in moss and mushrooms were peeking through the soil."

Response: "I was curious and walked closer to investigate. As I approached it, I noticed a little door on the side of the toilet. I opened it and found a small, furry creature inside. He introduced himself as the Toilet Gnome, and said he had been living in the forest for many years, protecting the forest and its inhabitants. The toilet was his home, and he asked me to help him keep it safe. I agreed and we became friends. From then on, whenever I ventured into the forest, I would always make sure to visit the Toilet Gnome, and together we would look after the forest."


I generated several more responses to the same prompt as I mentioned above. They were all funny, not just a select few. Here's another the model came up with. I have to admit I found the apparent creativity hilarious.

Response:

"What are you doing here?" I asked it.

"I'm waiting for you," it replied.

"Oh," I said. "Well then, let's get this party started."

And so we did.


This seems like a plot of an Adventure Time episode.


In case it interests anyone, I built a document editor + GPT and updated it to use text-davinci-003: https://github.com/typpo/arkose/

If you want to try GPT-3 but don't have an OpenAI API key, I've set up a quick demo here until I hit my billing cap (normally users would supply their own API key): https://arkose.pages.dev/


Thanks, that’s amazing.

prompt:

Over endless eons, the small cube has floated through the vast darkness. Temperature hovering near absolute zero, molecules barely moving.

Yet then in a blink of an eye, it enters a solar system.

Response: Its surfaces heat up from the onslaught of light from the sun, and its molecules spring to life. Its energy increases exponentially as the cube absorbs the energy of the system, until eventually it is filled with a light of its own.

The cube's journey isn't over yet. It is swept up in the planet's gravitational pull, its energy now being used to propel it forward towards the planet's surface. It drifts through the atmosphere, its heat increasing as it descends.

Finally, the cube plummets to the planet's surface, the impact sending shockwaves across the land. For a moment, the cube is still. Then, without warning, it bursts into a brilliant display of light and energy. Its molecules scatter and recombine, and in an instant, the cube is transformed into something entirely new.

The cube has become part of the planet, its energy now used to give life to the land. It has become a source of light, heat, and energy that will sustain the planet for eons to come.

Never used GPT3 before. Gets a bit mystical but it’s amazing


Pretty great (kinda):

Write a poem about the current state of affairs:

Sim Sala bim is what they said, (gpt follows)

But the current state of affairs is far from dread.

The world is ever-changing, progress is made,

Though it can seem like a façade.

The future is uncertain, no one can tell,

But we'll do our best to make it swell.

We have to take a stand and fight,

For a better world, for a brighter light.

For the environment and social justice,

The current state of affairs is a must.

We'll keep our heads up and be strong,

And together we'll right the wrong.


Thanks for sharing! Would be nice to see it stream the response back with SSE just like the OpenAI API does when the "stream: true" parameter is passed[0].

Shameless plug: We've just updated to davinci-003 for the AI Assistant (MacOS/iOS/iPadOS only) in Craft[1]. Free accounts have a handful of requests included if you wish to try out the new model on there as well.

[0] https://beta.openai.com/docs/api-reference/completions/creat... [1] https://craft.do/
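
For reference, streaming from the raw completions API looks roughly like this with the pre-1.0 Python client; the prompt is just a placeholder:

    import openai

    for chunk in openai.Completion.create(
        model="text-davinci-003",
        prompt="Write a one-paragraph product description for a note-taking app.",
        max_tokens=200,
        stream=True,       # server-sent events: tokens arrive as they are generated
    ):
        print(chunk["choices"][0]["text"], end="", flush=True)
    print()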


Well, that didn't work too good.

Reword the text below without using the letter a:

[text]

According to every known law of aviation, there is no way bee should be able to take to the skies. Its wings are too small to lift its hefty little form from the ground. The bee, however, flies regardless because bees do not worry about what humans deem inconceivable. Yellow, black. Yellow, black. Yellow, black. Yellow, black. Ooh, black and yellow! Let's mix it up a bit. Barry! Breakfast is ready! Ooming! Wait one moment. Hello? Can you marvel at how quick they fly?


Very cool. When you hit the generate key sequence midway through a document, how much of that text is included in the prompt?


This is configurable in Settings, but by default it sends the last ~1024 characters and asks the AI to generate up to 256 tokens.
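
In other words, roughly the following; this is a sketch of the described behavior, not the tool's actual implementation, and draft.txt is a stand-in for the open document:

    import openai

    document_text = open("draft.txt").read()

    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=document_text[-1024:],  # last ~1024 characters as context
        max_tokens=256,                # up to 256 tokens of continuation
    )
    print(resp["choices"][0]["text"])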


Strange question, but did you formerly go by the username KillerDentist? I think I might (kind of) know you.


On AIM :) Email is in my profile if you want to reach out


This is nice!


I've been using GPT-3 recently to give me ideas for my nanowrimo novel. I'll just paste in the last couple of paragraphs that I wrote and see what it thinks I should write next. Most of the time, the responses are pretty silly (it gets into loops a lot or doesn't know where to take the story) but a couple of times I've gotten some really good lines out of it.

It kinda feels like copilot, but for writing!


Have you seen NovelAI? A copilot for writing stories is kinda their whole thing. I don't know what tech they use, but it could just be GPT-3 on the backend.


They use a combination of GPT-neo models from Eleuther and Facebook's Fairseq models -- the special sauce for NAI is their fine-tunes on those models though. They have a free trial so I'd recommend checking it out if anyone wants to play with it.


> It kinda feels like copilot, but for writing!

But how if "most of the time, the responses are pretty silly"?


Same thing with Copilot - most of the time, Copilot tells me what I already know (not in a bad way - kind of like how a pair coder would just nod their head as I'm typing), but every now and then it gives me something really surprisingly good.


I like copilot mostly for helping with forgotten function names or if I know what I want to do but my brain is running on empty it can give be a scaffold in a new class that I can mold into something better.

Definitely helps with boilerplate.


It's probably best to think of it as a muse. You still have to do the work, but it can help illuminate blind spots and find novel ideas.


The problem is that it's so damn blunt. It just gets to the point, where a human writer would usually be more subtle and slow-paced.


I've always been in love with models.

But seriously, between this, Stable Diffusion, and Copilot... what a time to be alive! I hope this takes the sting off of all those folks laid off. Just think of all the projects that can be accelerated with these tools. Personally I've got Copilot helping me build an application using GPT-3 summarizations, and I'm working on a board game with Stable Diffusion supplying the artwork. If only Nvidia would hurry up and make an affordable consumer 48 GB 4090 Ti...


I completely agree. Have you tried out the new Midjourney v4? It's completely blown my mind when it comes to game art generation. MJ v4 is to Stable Diffusion as Stable Diffusion is to me drawing art by hand :)


No, I hadn't. Thanks, I'll check it out! Of course, the open-source nature of SD is really nice, as the community[0] keeps pumping out usability improvements and features, like the Pokémon-trained model[1], which is an absolute blast for my kid!

[0]: https://github.com/invoke-ai/InvokeAI

[1]: https://huggingface.co/justinpinkney/pokemon-stable-diffusio...


I've been trying out various models for summarization and I have to say, at first glance, this davinci-003 doesn't impress compared to davinci-002 or facebook/bart-large-cnn. If you throw in "summarize the following text:" and copy in the contents of https://en.wikipedia.org/wiki/Hacker_News you get the following (temperature 0.0, max tokens 100)[0]:

davinci-003

Hacker News is a social news website focusing on computer science and entrepreneurship, run by Y Combinator. It was created by Paul Graham in 2007 and is moderated by Daniel Gackle. It is intended to recreate a community similar to the early days of Reddit, but with a karma system that requires users to accumulate 501 points before they can downvote content. It also has automated flame and spam detectors and active human moderation, as well as a "voting rings" detector. It

davinci-002

Hacker News is a social news website focusing on computer science and entrepreneurship. It is run by the investment fund and startup incubator Y Combinator. The site was created by Paul Graham in February 2007. Initially called Startup News or occasionally News.YC., it became known by its current name on August 14, 2007.

facebook/bart-large-cnn

Hacker News is a social news website focusing on computer science and entrepreneurship. It is run by the investment fund and startup incubator Y Combinator. In general, content that can be submitted is defined as anything that gratifies one's intellectual curiosity. The site has a proactive attitude in moderating content, including automated flame and spam detectors and active human moderation.

Curious if anyone knows any good alternatives or tricks to improve the summaries? For now I'm using the bart one as it has the advantage of being downloadable from huggingface so you can run it yourself [1].

[0]: https://beta.openai.com/playground?model=text-davinci-003

[1]: https://huggingface.co/facebook/bart-large-cnn
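
For anyone who wants to reproduce the bart-large-cnn result locally, the standard transformers pipeline call looks like this; note the model only handles about 1024 input tokens, so the article is crudely truncated here:

    from transformers import pipeline

    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

    article = open("hacker_news_wikipedia.txt").read()   # placeholder input file
    # bart-large-cnn accepts ~1024 tokens, so keep the input short (or chunk it)
    summary = summarizer(article[:3000], max_length=130, min_length=30, do_sample=False)
    print(summary[0]["summary_text"])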


In general it's better to ask the question after the content, and T=0.7 is way too high. Here's the HN homepage summarized with T=0 and TopP=1, asking "Summarize the content above.":

DV2> The content above is a list of the top 30 stories on Hacker News as of 2 hours ago. Each story includes the number of points it has received, the number of comments, and a brief description.

DV3> The above content is a list of 30 posts from Hacker News, featuring topics such as new GPT-3 models, US Reps urging the DOJ to investigate anticompetitive conduct, intentionally making close friends, enabling the future of GitHub's REST API, curation and decentralization, making close friends, learning C, FFmpeg, open-source tabletop board game simulator, hiring engineers to unify ML, illegal hashes, designing Windows 95's user interface, a three-sided coin, high performance data integration framework in Go, Mauna Loa eruption, AirDrop being limited to 10 minutes, 26 lines of code that changed the world, SwiftUI being slow, AVX 512 being the future, a directory of Mac menu bar apps, the ATtiny 2-Series, a Fijian island being strangled by vines, hospice becoming a for-profit hustle, protecting from what one wants, a psychoactive substance, AI WordPress code snippet generator, open source and saying "no", a factory that only builds white Toyota Land Cruisers, widget.json and widget construction set, and lab-grown timber/wood.
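
In code, that prompt shape looks roughly like this (the filename is a placeholder for a text dump of the page):

    import openai

    page_text = open("hn_frontpage.txt").read()   # text dump of the page to summarize

    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=page_text + "\n\nSummarize the content above.",  # instruction after the content
        temperature=0,
        top_p=1,
        max_tokens=100,
    )
    print(resp["choices"][0]["text"].strip())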


I followed your advice and tried with temperature 0.0, and rather than paste more walls of text I edited what I got. Sorry if that's bad form? I also upped the max tokens to 100 to better compare with the fb cnn model. Interestingly, whilst the davinci-003 model changed its output and was still subpar IMHO, davinci-002 didn't change at all. I wonder if it's cached internally.


If I am not mistaken, the temperature parameter controls the amount of randomness in the output. A temperature of 0 will always produce the same output. It is not caching as far as I know.


yes, T=0 means no randomness, and given sufficient tokens the output should always be the same (in such case whether they cache is a matter of hit ratio, I'd guess)


> and is moderated by Daniel Gackle

Wait, so that means dang's username means "Dan G"? I always thought it meant the polite expletive, as in "gosh dang it!".

Mind.

Blown.


I've been using davinci for my Deep Dreams podcast (https://deepdreams.stavros.io), but davinci has a tendency to ramble. I used text-davinci-003 now to try it out, but it has the opposite tendency: It's very coherent and writes well, but it's very terse. I haven't found a way to make it write a longish story in the few minutes I played with it.

If anyone has any tips, I'm grateful!


I've found that bouncing back and forth between text-davinci-003 and davinci can kinda split the difference. Start out by asking 003 to write you a story (T=1.0, ~100 tokens), and then switch to davinci (T=0.75) for another hundred, then back and forth. 003 will try to keep things on track, and davinci will try to make them interesting. Eventually, 003 will finish up the story, though it's usually with some anodyne ending like this:

> In the end, Daisy taught her young friend all she knew about life on a farm and how to take care of animals. Even though the girl eventually moved away from the farm, Daisy will always remember her fondly for making such an impact on her life and giving her a second chance at friendship.
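
A rough sketch of that back-and-forth loop with the pre-1.0 Python client; the opening line, token budgets, and round count are just placeholders:

    import openai

    story = "Once upon a time, on a small farm, there lived a cow named Daisy. "
    schedule = [("text-davinci-003", 1.0), ("davinci", 0.75)]

    for i in range(6):                      # three round trips between the two models
        model, temp = schedule[i % 2]
        resp = openai.Completion.create(
            model=model,
            prompt=story,
            temperature=temp,
            max_tokens=100,
        )
        story += resp["choices"][0]["text"]

    print(story)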


That's interesting, thanks. The interesting thing about the ending you mention is that I've already seen variations of it multiple times. I didn't realize it was so inclined to end like that.


You can see comparisons of 002 and 003 here for summaries of current top HN stories: https://news.jiggy.ai

(via open source summary bot https://github.com/jiggy-ai/hn_summary)


> "This is a blog post written by the author."

That non-tautological opening line of summarization may become increasingly relevant, and rare.


This is just incredible...

> what would be the lyrics if yellow submarine was about jews?

Verse 1

We all live in a yellow submarine, A Jewish vessel far beneath the sea, Where everyone is proud of their family, And each one is special to me.

Chorus

We sail the oceans, sky and sea, Our Jewish submarine is mighty and free, From the Western Wall to the Golden Land, We all live in a yellow submarine.

Verse 2

From the mighty Torah to a Seder night, Our heritage shines a guiding light, A celebration of our faith and our culture, We’re a people united and strong.

Chorus

We sail the oceans, sky and sea, Our Jewish submarine is mighty and free, From the Western Wall to the Golden Land, We all live in a yellow submarine.

Verse 3

From the Shtetls of Europe to the shores of America, We’ve faced adversity and we’ve overcome, With deep faith and strength we will continue, To survive and thrive in unity.

Chorus

We sail the oceans, sky and sea, Our Jewish submarine is mighty and free, From the Western Wall to


Since it’s trained on more recent scrapes, does this mean that this new version will be self-aware? So if you ask it about “gpt-3” it’ll know what you’re talking about?


"What is gpt-3?

GPT-3 (Generative Pre-trained Transformer 3) is an unsupervised, large-scale language model developed by OpenAI. It uses deep learning methods to produce human-like text, with the goal of automatically generating readable text. GPT-3 is capable of generating text from a prompt, completing tasks like question-answering, language translation, and summarization."

The question is me, the answer is gpt-3.


I thought for a moment that "The question is me, the answer is gpt-3." was also coming from GPT-3, trying to be profound.


I thought, technically, GPT-3 would be considered self-supervised and not unsupervised. Does it not know what it is? :-)


It looks like only the model was updated; the training data only goes up to June 2021 (same as davinci-002, I think).

But GPT-3 came out in May 2020 so it should be “self aware” already.


It is not trained on newer data. It still lives in 2020 with Trump as President.

https://twitter.com/vladquant/status/1597362763948441602


That's not what most of us consider "self-aware". That's just knowing your name.

To me, "self-aware" is essentially being able to watch yourself think. I'm pretty sure GPT-3 can't do that, in any model.


I think the quotes showed people got it. Dictionaries aren't self aware just because they contain a definition of "dictionary".


I like to share this example of self-awareness any chance I get :)

[1] https://news.ycombinator.com/item?id=29576737

[2] https://toldby.ai/4kQNd-_tvUG

This was on the first davinci model


That's not self-awareness. That's a training corpus that contains something realizing that it's an AI, not GPT-3 realizing it's an AI.


Is there some way to "walk back" story construction on toldby.ai, to see other forks of the same initial ... take on fish?


>>>compose a poem — a poem about the interpretation of programming expressions! But lofty, noble, tragic, timeless, full of love, treachery, retribution, quiet heroism in the face of certain doom! Six lines, with clever rhymes and enjambment!

A language of code, the expression of thought

A complex web of operations tightly wrought

It speaks of a power, a strength from within

Though often misunderstood, the answer is within

A marvel of logic, of reason and rhyme

A careworn path forged by one who has time

It speaks of a courage, a spirit that's strong

Its secrets and wonders will never be wrong.

------------------

Not six lines, and no enjambment, but I almost like the first two lines. Middle School Poetry contestants, watch your step! (and middle school teachers, too).


And one with sixteen lines

>>>>But lofty, noble, tragic, timeless, full of love, treachery, retribution, quiet heroism in the face of certain doom at the lack of ability to tell if a program halts! Sixteen lines, with clever rhymes and enjambment written by a world-class poet!

The program's expression is unspoken,

A language of logic yet often broken.

Inputs, outputs, and functions,

Symbols and strings, conditions and conventions;

The computer scientist, a master of the art,

Deciphering the code, line by line, a fresh start.

He searches for understanding, a way to the core,

A quest for the answer, his knowledge to explore.

But what if the answer's an undecidable?

A halting problem, a tragedy of sorts,

For the computer scientist, a struggle of sorts.

His efforts will be in vain,

The answer, he'll never attain.

A world of infinite loops,

No way of telling if the program halts;

A tragedy of a timeless sort,

A love and a trecherous retort.

Through the silent expressions,

A quiet heroism is expressed.

A courage in the face of certain doom,

To keep going, in the program's tomb;

A machine's interpretation, for the computer scientist's devotion,

A grand puzzle of sorts, with no solution.

----

Note again its confusion about line numbers, and the poor word-play, but with in-painting I could see this seriously contending with most poetry that one could write in a classroom setting (for a good reader who knows what sounds good). Taste, in the end, is all that matters?


compose a poem — a poem about the interpretation of programming expressions by a computer scientist! But lofty, noble, tragic, timeless, full of love, treachery, retribution, quiet heroism in the face of certain doom at the lack of ability to tell if a program halts! Six lines, with clever rhymes and enjambment!

The computer scientist's toil,

It's not a task of oil and soil,

But one of programming expressions,

By which one can make deductions,

For it's math and logic that must be mined,

To know if a program halts or grinds.

------

From this second interaction, I can already see a use for it as a poet -- suggesting metaphors and turns of phrase that can be seeds for others. "Halts or grinds" against "logic mined" is pretty nice here. I'll likely use that.


Prompt: Does AI obey the Three Laws of Robotics?

Response: No, AI does not obey the Three Laws of Robotics. The Three Laws of Robotics were formulated by science fiction author Isaac Asimov in his 1942 short story, "Runaround," and are meant to provide a set of rules for robots to follow to ensure they do not harm humans. AI is not bound by these laws, since the technology is not designed to be a robot, but rather a computer program or system that is capable of learning, reasoning, and solving problems.


I've just upgraded https://talktowalle.com/ GPT-3 AI Writer with text-davinci-003

See my prompt: Write lyrics for a dark-themed song about love:

I never thought I'd feel this way
Love has turned me upside down
My heart is broken, my mind is astray
My life is spinning around

Chorus

We were never meant to be
Our love was never meant to be
In a dark and twisted way
Love has taken me hostage

Verse 2

The love I thought I'd never find
Is now tearing me apart
My heart is shattered, my soul is blind
And I'm broken from the start

Chorus

We were never meant to be
Our love was never meant to be
In a dark and twisted way
Love has taken me hostage

Bridge

I can't escape this twisted fate
Love has taken me away
And I'm stuck in a web of lies
Trying to find the light of day

Chorus

We were never meant to be
Our love was never meant to be
In a dark and twisted way
Love has taken me hostage


There's no 'O' in octopus [prompt]

That statement is correct. OCTOPUS does not contain the letter 'O'. [response]


I'm not seeing davinci enabling any scalable business models with its pricing ($0.02 per 1K tokens).


Pretty sure it's just a matter of time until it goes the way of Stable Diffusion.


The biggest barrier to this is the hardware requirements. I saw an estimate on r/machinelearning that, based on the parameter count, GPT-3 needs around 350 GB of VRAM (175 billion parameters at 2 bytes each in fp16 is already 350 GB, before activations). Maybe you could cut that in half, or even to one-eighth, if someone figures out some crazy quantization scheme, but it's still firmly outside the realm of consumer hardware right now.

Stuff like KoboldAI can let you run smaller models on your hardware though (https://github.com/KoboldAI/KoboldAI-Client).


There already exist comparable EleutherAI models, I believe. Not as good, but pretty good.


The biggest I've found is GPT-J (EleutherAI/gpt-j-6B), which has a model size comparable to GPT-3 Curie, but the outputs have been very weak compared to what I'm seeing people do with GPT-3 Da Vinci. The outputs feel like GPT-2 quality. I'm probably using it wrong, or maybe there are better BART models published that I don't know about?

> Write a brief post explaining how GPT-J is as capable as GPT-3 Curie and GPT-2, but not as good as GPT-3 Da Vinci. GPT-J ia a new generation of GPT-3 Curie and GPT-2. It is a new generation of GPT-3 Curie and GPT-2. It is a new generation of GPT-3 Curie and GPT-2. sentence repeats

Using temperature 1e-10, top_p 1.
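
For comparison, the equivalent transformers call; temperature 1e-10 is effectively greedy decoding, so the sketch below uses do_sample=False, and the prompt is the one quoted above:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
    model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")  # ~24 GB in fp32

    prompt = ("Write a brief post explaining how GPT-J is as capable as GPT-3 Curie "
              "and GPT-2, but not as good as GPT-3 Da Vinci.")
    inputs = tokenizer(prompt, return_tensors="pt")

    outputs = model.generate(
        **inputs,
        do_sample=False,      # temperature ~0 is effectively greedy decoding
        max_new_tokens=100,
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))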


The existing models aren't fine tuned for question answering, which is what makes GPT-3 usable. Eleuther or one of those other Stability collectives is working on one.


It's very sad how they had to nerf the model (AIDungeon and stuff). I don't think anything on a personal / consumer GPU could rival a really big model.


Just tried it out in my dummy application, but it throws an error when you provide a suffix... which is officially supported by their documentation. Hoping they support this soon.

https://beta.openai.com/docs/api-reference/completions/creat...
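
For context, this is roughly the documented insert-mode call that reportedly errors with 003; the prompt and suffix are just illustrative:

    import openai

    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt="My favorite thing about GPT-3 is",       # text before the insertion point
        suffix=" and that's why I keep using it.",       # text after the insertion point
        max_tokens=64,
    )
    print(resp["choices"][0]["text"])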


Too bad. Using suffixes means that maybe it can play Cards Against Humanity, if there is a way to constrain its output among a fixed set of choices. I can't wait to see the results.


This isn't perfect, but it's honestly kind of crazy how good these models have gotten. Here's a quick example I came up with after seeing in other comments that it can rhyme:

Prompt: Write a rhyming poem explaining how to solve a Rubik's cube using CFOP.

Answer: CFOP is the solution, if you want to win Cross, First two layers, Orient, Permute Don't get frustrated, it can be done Follow these steps and you'll have fun

First you'll need to make a cross Once it's formed, the corners will come next Arrange the pieces, so the colors match But don't mix up the edges and the slots

The first two layers, you'll have to twist Until the edges, match the sides in the mix Line up the corners, the same way Once it's done, you can take a break

Orient the last layer, so it looks right Then permute, to make it tight When it's done, you'll feel quite swell For you've solved the Rubik's cube, quite well!

My note: This isn't actually accurate, it messes up a bit how to solve the cube. Still... super impressive that it even gets this far!


We've been waiting for GPT-4. Does GPT-3-003 mean that GPT-4 will be delayed?


I think we'll need about 10 times as many parameters and text on the web... Or so. How long would that take?


A couple of months ago OpenAI created a model called Whisper that can transcribe speech efficiently and with high quality in 97 languages. It might be used to transcribe the whole of YouTube; someone estimated about 12T tokens. GPT-3 used about 0.5T tokens for training. https://twitter.com/HenriLemoine13/status/157284645289587507...

If you add the visual data you get another order of magnitude increase in size. I think this training will teach the model procedural knowledge (how to do things step by step), and that will have a huge impact on robotics and software agents.
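
Transcription with the open-source whisper package is only a few lines; the model size and file name below are placeholders:

    import whisper

    model = whisper.load_model("base")              # tiny / base / small / medium / large
    result = model.transcribe("some_video_audio.mp3")
    print(result["text"])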


I wonder at what point we can start performing useful psychological experiments on a language model that will transfer to humans.


GPT-3 can be used for "Silicon Sampling" - running a virtual poll based on prompting with varied personality profiles.

https://jack-clark.net/2022/10/11/import-ai-305-gpt3-can-sim...


That’s wild! You could fine tune a message for effect (or maybe just ask the model for the most effective formulation).


Anyone know which applications it will improve other than better quality content writing?


"Write an Essay" tools. Some high school kids on reddit are already using it to "cheat" at homework.


This is a lot more wide-spread than "some kids on Reddit". Maybe 30% of the kids in my son's class are using this or related tools.


Any ideas on how to solve this issue of kids cheating with GPT3 essays?


Realistically? Grade based on thought process and validity of the argument, not whether it has spelling or grammar mistakes. GPT3 is still pretty incoherent over the span of enough text.


Kids' writing can also be very incoherent, sometimes more so. But incoherent writing still counts as turned-in work and will get you points and teacher feedback, whereas GPT-3-generated text should not.


This will not be the case in 2-3 years.


Same way you solve the issue of kids cheating by having someone else write their essay.


I honestly don't think it's possible to solve, other than by increasing the amount of evaluation that's done in locked down conditions.

I cannot imagine a detection mechanism that could not itself be defeated by some tweaks to the prompts being used to generate the essays.

It's effectively the same problem as "prove that this kid didn't get their friend/cousin to write the essay for them".


It's the parents' responsibility. No one outside the household can do anything about it imo.

Using AI to write will cause the same issues as:

- phones, some people don't try to remember directions, phone number or addresses

- calculators, some people cannot do easy math

- computers, some people cannot write with a pen, cannot spell without spellcheck


Other than the writing-with-a-pen part, that pretty much sums me up, and I grew up well before all this fancy supercomputer-in-your-pocket stuff.


Make them write anything gradable in-person, while being monitored by a teacher.

Cheaters gonna cheat, no matter what. This will at least get the group back to pre-conversational AI standards.


Test the kids on their own essays, for example? Maybe this could itself be automated with GPT-3?

The highest-quality answer involves skilled teachers with enough time who know and understand their students. (Actually the very highest might involve personal tutors but let's leave that aside.)

Going down a few steps you might combine the automated approach with skilled teachers and maybe add human editors who can do support work asynchronously?


I'm not super opposed to it.

Watching my son try it, he spends more time reading the created essay and correcting mistakes in it than he does writing one himself. The checking process is very similar to marking, and I think it's possible he's learning more this way.

(Also, he's madly trying to automate fact checking which is doing no harm to his programming at all!)


Using GPT-3 might be a better skill to have.


You mean clicking a button?


No, I mean managing an AI to achieve a random task. Prompting, iterating, filtering - they all require high level input from the user. A LLM is a complex beast, not easy to use (yet).


Students that can't write well can't tell if the generated output is good enough either.


Should I apply for start-up competitions?


Article about world cup in Qatar


Article about the bear TV show


tell me a story about a boy who meet a unicorn.


Because Da Vinci is well known for his writings.


russian girl in forest and orc


The same "prompt-craft" that folks are learning with the Dall-E and Stable Diffusion derivatives is relevant to interacting with text-davinci-003.

Example as a colleague and I kicked the tires yesterday, first his question:

. . .

Q: What happened in the 2019 capital one data breach? What improvements could be made to the cloud architecture to eliminate the attack surface entirely?

A: In the 2019 Capital One data breach, a hacker was able to gain access to the company's cloud storage system and steal the personal information of over 100 million customers. Improvements that could be made to the cloud architecture to eliminate the attack surface entirely include implementing multi-factor authentication, encrypting data at rest, and using a zero-trust security model.

Ok, so maybe it can’t be a CISO yet, but it’s getting close

. . .

My rephrased prompt-craft version:

. . .

Q. What was the root cause of the 2019 Capital One data breach? What changes should engineers make to their AWS architecture to prevent similar attacks?

A. The root cause of the 2019 Capital One data breach was a misconfigured web application firewall (WAF) on an Amazon Web Services (AWS) cloud computing platform. The attacker was able to exploit a vulnerability in the WAF, allowing them to access and exfiltrate data from Capital One’s systems.

To prevent similar attacks, engineers should take the following steps:

1. Implement least privilege access – Ensure only essential users have access to critical systems.

2. Monitor user access – Track user activity and log all changes to the environment.

3. Utilize multi-factor authentication – Require users to submit additional identity verification measures (e.g., two-factor authentication) for more secure logins and account access.

4. Implement security auditing/alerts – Set up automated alerts for any suspicious activity or changes in user behavior or credentials that could indicate a breach or attempted breach.

5. Use encryption – Encrypt user data to protect it from unauthorized access in transit or at rest on AWS servers, databases, and other storage systems.

6. Regularly patch vulnerabilities – Make sure all software and applications are up-to-date with the latest security patches released by AWS and other vendors as soon as they are available to reduce the possibility of an attack exploiting any known vulnerabilities in these systems.

. . .

My take:

OK CISO, prompt-craft FTW

Colleague's response needed no summarization:

OMG

Seriously, omg

. . .

PS. Colleague's question and answer is actually what an enterprise CISO gets asked and has to answer in the board room. Mine is what the deputy CISO has to know and implement. :-)


russian prostitute


last day of christ


hi


cross in illuminati


cross the hills


Reset life



