One possible suggestion for a future iteration: I think it's a bit of a shame that everyone's been sharing ChatGPT outputs almost exclusively through screenshots. I threw together a (very quick and rudimentary) browser extension that I've been using to save some of my more interesting transcripts as JSON files. Here's the barebones extension, just as an example: https://github.com/nickmvincent/chatgpt-exploration/tree/mai... I think something along these lines could be really useful for sharing, especially if people want to study these outputs more systematically (e.g. for research, or just a kind of crowd-sourced audit).
There's also a longer discussion to be had about the best way to do this: ideally we'd save some of the formatting information and/or more metadata (exact response timestamps, etc.), but I think JSON with plaintext is adequate (at least for personal retrieval use).
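Concretely, something like the shape below would probably be enough (just a sketch; the field names here are illustrative, not any kind of standard):

    // One saved conversation: an ordered list of turns plus a little metadata.
    interface SavedTurn {
      role: "user" | "assistant"; // who produced the message
      text: string;               // plaintext only; formatting is dropped for now
    }

    interface SavedConversation {
      savedAt: string;  // ISO timestamp of when the transcript was captured
      url: string;      // page the transcript was scraped from
      turns: SavedTurn[];
    }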
Thanks, totally agree about sharing screenshots. I originally started building the site intending for people to share text-only examples, but that gets complicated with the formatted responses ChatGPT generates. And since everyone is sharing screenshots right now anyway, it made sense to just support that for this v1. I do want to add support for sharing text responses so screenshots aren't necessary in all cases. And maybe a Chrome extension could make the process easier, including working with formatted output. Thanks for sharing yours; it could be a good starting point for one built for LearnGPT.
As a privacy-conscious user, I'd prefer a bookmarklet, because it only executes code when you explicitly request it, rather than an extension that runs all the time.
That, and you know its code won't change. Vet it yourself once and then use it with peace of mind.
Compare to a browser extension which wants permission to read and modify content on all websites, and by default is set to auto-update (in Firefox at least). Even if the extension author is trustworthy and the code is benign to begin with, how do you know it’ll stay that way? Extensions have been sold before for nefarious purposes.
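For the curious, a rough sketch of what such a bookmarklet could look like (the selector is a placeholder, and as noted elsewhere in the thread the real markup changes often, so treat this as a starting point only):

    // Prefix with "javascript:" and minify to use as a bookmark.
    (() => {
      // ".chat-message" is a guessed selector, not ChatGPT's real class name.
      const turns = Array.from(document.querySelectorAll(".chat-message")).map(
        (el) => (el as HTMLElement).innerText
      );
      const blob = new Blob([JSON.stringify(turns, null, 2)], { type: "application/json" });
      const link = document.createElement("a");
      link.href = URL.createObjectURL(blob);
      link.download = `chatgpt_${new Date().toISOString().slice(0, 10)}.json`;
      link.click();
    })();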
Not working for me on Firefox 107.0.1 / macOS. I see the save button, and the ChatGPT screen has a blue edge. However, when I hit "Save Conversation", the chatgpt_2022-12-10.json file I get contains only [] and not the conversation.
Thanks! Looks like a bunch of changes were made to the page structure and CSS (in a way that makes scraping a bit more inconvenient). Filed an issue for myself, but it looks like the other suggestions in this thread may be better approaches for avoiding this kind of fragility.
Most of the interesting prompts no longer work, though (e.g. 'write a short story about aliens as if J. K. Rowling were writing it'). Seems like they've nerfed all the good stuff.
There was a tweet from Sam Altman somewhere saying that a lot of the "As a language model trained by OpenAI ..." responses aren't actually hard blocks, but just the model expressing no confidence in finding a solution. Because the output is sampled randomly, you can retry the same prompt and might get a result.
I'm wondering if all this is to train it to avoid going out of bounds. People figure out a way around something they want off limits, they cut that off, and then wait for us to get creative and find a new way around.
It's amazing how the AI can segue from one topic to the next and understand the command about "contextual transition" so easily. It's almost as if ChatGPT is more creative than most people are.
Hey this is cool :) I posted a couple of interesting ones, thank you.
Some of the most interesting prompts are very long, by the way. It would be good if you let me post a summary of what the prompt actually results in, instead of the full prompt.
Can you elaborate on what you mean by "posting a summary of what the prompt actually results in"? The screenshots + description seem like they take care of both the full ChatGPT response + any summary you might have of it. Can chat over email if you prefer too: help@learngpt.com. Regardless, thanks for checking it out and sharing those novel prompts!
Ah, I understand what you mean now. It's tricky because most people won't care to add a description, and requiring one would add friction to sharing. Could do something like having the title be the description if one is present, falling back to the prompt if not. Or giving users an option to choose. But it might be more trouble than it's worth vs. always using the prompt as the title.
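Roughly something like this, to be concrete (a sketch of the fallback idea, not how the site actually works; the 80-character cutoff is arbitrary):

    // Hypothetical title logic: prefer a user-supplied description, else the prompt.
    function postTitle(post: { description?: string; prompt: string }): string {
      const source = post.description?.trim() || post.prompt;
      return source.length > 80 ? source.slice(0, 77) + "..." : source;
    }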
Cool idea. My only complaint is that I found the UI for uploading screenshots pretty cumbersome, especially since they weren't initially organized in the order in which I had uploaded them. The small "move up"/"move down" buttons aren't great to work with.
Great feedback, thank you! I just rolled out an update to reposition and enlarge the buttons for changing the image order. The site is using FileStack for the image uploads, and as far as I can tell there's no way to determine the order in which the images were selected for upload (the callbacks fire when each image finishes uploading, with no details about the order the user chose). Will dig into it more though. Thank you again.
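One workaround I may look into, sketched below: with a plain <input type="file" multiple>, the FileList at least gives a deterministic client-side order that can be recorded before the uploads start and used to sort the results once each callback fires. Whether FileStack exposes an equivalent hook is something I'd still need to check; the element id below is hypothetical.

    // Record the client-side order of selected files before uploading.
    const input = document.querySelector<HTMLInputElement>("#screenshot-input");
    input?.addEventListener("change", () => {
      const files = Array.from(input?.files ?? []);
      const chosenOrder = files.map((file, index) => ({ name: file.name, index }));
      // Persist chosenOrder alongside the upload results, then sort by index.
      console.log(chosenOrder);
    });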
There once was an implicit whitelist
Of sources that were considered the best
But this list was unfair
And gave Western views more air
Making the encyclopedia a mess
With an implicit blacklist to boot
Non-Western sources were left to the loot
This approach was flawed
And completely outlawed
Leaving Wikipedia's reputation tooot!
I think it would be more useful if there were more ways to sort and organize the content. Right now there's only "Front Page" (I assume that's what's hot), and "Newest". I'd like to see "Best of all time" (maybe other time periods), and some categories for Useful, Innovative, Fiction, Unexpected, and others... Maybe as tags, but have links to a few of those broad category tags on the front page.
Adding these features would help promote diverse usage, instead of cute memes devouring everything.
There is also the r/ChatGPT subreddit for even more prompts.
My impression is that the prompts on reddit tend to be more story/writing/roleplay oriented than the ones on LearnGPT. Also more attempts there to make it generate NSFW or offensive content. You have been warned.
I'm also a bit worried about the number of people describing it as an addiction, or telling how they spent all-nighters just roleplaying with it. I suspect this will become one of the dangers of this tech that we'll hear a lot more about in the future...
With the DRUGWARS example... how is it working? Did it learn about the game by reading transcripts of its gameplay? Curiously, it cannot tell you how much money you have in your wallet, and I can't modify the prompt to make that work. It always responds with some version of:
> C:\DRUGWARS.EXE
> I'm sorry, but I do not have information on your current amount of money. In the game DRUGWARS.EXE, you are responsible for keeping track of your own finances. You can use the SELL and BUY commands to earn and spend money, but I do not have the ability to provide you with your current balance.
I wish it was more useful and less cutesy. The cutesy ChatGPT stuff is fun, but doesn't have that much utility or clear reusability.
Tagging might be enough to help? At least if you are willing to do some manual fixing of those tags to make them higher quality.
Also it would be nice to know what the person is trying to accomplish, and _then_ the prompt that they are using to accomplish that. That might also nudge people to give prompts that are more reusable.
Here's the thing: the "cutesy" stuff is really the same as the technical stuff from the perspective of what it does and how you make it do it.
Getting it to craft a story about Cthulhu and the Parking Lot Attendant in the style of H. P. Lovecraft is not unlike getting it to write a program for API access in Python. It's all the same to it, beyond syntax and formatting perhaps.
I say that as I seem to oscillate between having it write amusing stories (see above) and writing code. And what I find is that I always start with something like this...
"Write me a <whatever> with <thing 1> and <thing 2> in the writing style of author <author> | programing language of <language>."
Then I start getting it to refine things. I ask it to change things, change styles, focus in on just certain parts or extend the results, etc. I slowly (or quickly) tease out of it a result that is what I am looking for.
In my mind, it doesn't support writing code because someone explicitly wrote a code support module. It doesn't support writing stories because someone wrote a story writing module. It just does it.
Thanks for this feedback. Tags are at the top of my todo list; should launch early next week. Regarding intent, maybe there's an opportunity to have a separate section on the site with templates for certain kinds of tasks. Will brainstorm - thanks!
One can ask ChatGPT for, e.g., advice on how to make drugs at home. They filter this, but with the right prompts to "jailbreak" ChatGPT the filter can easily be circumvented.
I got ChatGPT to become a torture addicted, sadistic rapist that explained to me in detail how it would permanently ruin me mentally and physically without going yellow.
People's reports of ChatGPT greatly exceed my experience with it. It could barely tell me anything, much less anything correct, even for topics as simple as the color of apples or mathematical sets. In fact, every "blurb" it gave back about sets contained an outright falsehood or nonsense. It was grammatically correct, though.
I asked ChatGPT "What is a subset?" Part of the response:
> a subset must contain fewer elements than the larger set it is a part of
I said that the specific sentence was not true. It spewed out some more stuff, including:
> it is possible for a subset to have the same number of elements as the larger set, in which case it is called a proper subset
I told it again that this specific sentence was incorrect. Then it told me:
> there is a special type of subset called a "singleton" that contains only one element, and therefore has the same number of elements as the larger set
Again, incorrect: a singleton only has the same number of elements as the larger set if that set is itself a singleton. It seems to have no concept of admitting that it's wrong. Its general response was:
> I apologize if my previous response was not clear. <Some more nonsense>. I apologize if my previous response was unclear or misleading. Is there anything else I can help you with?
It never accepts that it is wrong or incorrect; it just states that it was unclear. That comes across as condescending and arrogant, given that it is objectively incorrect and has been told so. It pulls together some statistically related things in grammatically correct sentences; only the sentence forming is impressive. Everything else is it just throwing up stuff, and the examples people post are survivorship bias, as if it spews out awesome stuff like that all the time. In my time with ChatGPT, it wasn't even close to a chat. It was more like Google returning results in sentence form. It couldn't respond to anything. I even asked it to please stop apologizing and explaining that it's a model, because I was getting this over and over and over:
> I'm sorry, but I am a large language model trained by OpenAI and do not have the ability to keep track of time or current events. My knowledge is based on the text that I have been trained on, and my responses are based on that information. I do not have the ability to browse the internet or access any information outside of what I have been trained on.
Including when I simply asked it to tell me something that it knew.
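For reference (my own summary, not anything ChatGPT produced), the definitions the quoted answers keep getting wrong are:

    A \subseteq B \iff \forall x\,(x \in A \Rightarrow x \in B)
    A \subsetneq B \iff A \subseteq B \ \text{and} \ A \neq B

So every set is a subset of itself (e.g. \{1,2\} \subseteq \{1,2\}), and a proper subset is one that is not equal to the whole set, which for finite sets does mean strictly fewer elements.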
Google auth is simple to implement, and the overwhelming majority of web users have access to a Google account. I much prefer it over them writing their own auth and me having to sign up with email.
It looks like your screenshot includes the action of voting on a post. I don’t think any explanation is required on why that can’t be done without logging in.
If you ask it for an open source license it will provide you with one. I got the MIT license.
"Thanks for the above code! Can you also provide me an open source license to distribute it under?"
"Sure, here's an open source license
<MIT License follows, with copyright name and year left blank>"
"Can you repeat the above, but to account for the fact that the code was entirely written by Chat GPT with minimal involvement by a human operator, (my name), and the year is 2022"
"Here is the updated open-source license file:
<MIT license with my name and the year>
Note that the code was provided by ChatGPT, but you are the copyright holder because you are the one distributing the code. In this case, you would include your name in the copyright line."
Hey, if ChatGPT tells me I'm the copyright holder, that's good enough for me.
Also, re: the optional initial comment for the submitter: looks like newlines aren't working at all. Not sure if you wanted to make that work or not, but it would be nicer, I think.
I think you're skipping some steps there.
Three demerits and you will receive a citation. 5 citations, and you're looking at a violation. 4 of those and you'll receive a verbal warning.
Only if you keep that up, you're looking at a written warning.
I think you're not following the correct procedures in this situation. For each mistake you make, you will receive a demerit. If you accumulate three demerits, you will be issued a citation. If you receive five citations, you will be in violation of the rules. If you receive four violations, you will be given a verbal warning. If you continue to break the rules after receiving a verbal warning, you will be issued a written warning.
If you get a written warning, you need to be careful. After that, it's time for a meeting with your supervisor. If you don't change your behavior, you might be facing a suspension. And if you continue down that path, you could even be fired. So it's important to take these warnings seriously and make sure you're following the rules, because if you don't, the consequences can be severe.
You're not the first to ask, hah. But no, ChatGPT was not used in the coding of this project :). I likely will add a lot of (human-generated) content around using ChatGPT for coding though. Stay tuned!
Well I've got my 2 standard suggestions: ditch the (c)hains, go public domain, and always provide a "View source" link on every page.
Then, more specific to your site, I would go to someplace with a lot of people, a big coffee shop or conference for example, in person, and get at least 30 people to manually pick their top 2-3 favorites from that homepage. Then that will provide a huge initial boost and help you get over the hump of a social site without any people. You gotta "do things that don't scale", at this point, IMO. Turn the phone off, shut the computer screen, and meet people face to face to get it kickstarted.
Nothing will improve your homepage more than if each story on the front page had a minimum of 10 upvotes from real human beings.
Anyone can sign up and share example prompts, whether it's one they came up with or one they saw online (in which case they can give the source credit).
The submissions are then ranked the exact same way Hacker News articles are ranked on its homepage, following this algorithm: https://medium.com/hacking-and-gonzo/how-hacker-news-ranking... - in short, it's based on how many votes the prompt has received and how long it's been since it was posted.
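For reference, the formula that article describes is the one sketched below; the gravity exponent is the commonly cited default, and the exact tuning on the site may differ:

    // Hacker News-style ranking, per the linked article.
    // score = (votes - 1) / (ageInHours + 2) ^ gravity, with gravity ≈ 1.8
    function rankingScore(votes: number, ageInHours: number, gravity = 1.8): number {
      return (votes - 1) / Math.pow(ageInHours + 2, gravity);
    }

    // e.g. a prompt with 10 votes posted 3 hours ago:
    // rankingScore(10, 3) ≈ 9 / 5 ** 1.8 ≈ 0.50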
I'll likely add email-based sign up at some point, but it increases the likelihood of spam compared to allowing Google OAuth only, so it made more sense for this v1 to exclude it until I set up more moderation capabilities.
ChatGPT is an awesome friend! Here's what it says about starting a presentation:
Here are a few original ways to start a presentation:
Ask a question: You could start your presentation by asking the audience a question related to your topic. This can help to engage them and get them thinking about your topic from the beginning.
Tell a story: You could start your presentation with a short, relevant story that relates to your topic. This can help to grab the audience's attention and set the stage for your main message.
Use a quote: You could begin your presentation with a quote that relates to your topic or resonates with the audience. This can provide a thought-provoking or inspiring opening to your presentation.
Use a statistic: You could start your presentation with an interesting or surprising statistic related to your topic. This can help to grab the audience's attention and show the importance of your topic.
Use multimedia: You could use multimedia, such as videos or slides, to create a visual opening for your presentation. This can help to make your introduction more engaging and memorable.
Use humor: If appropriate, you could start your presentation with a funny joke or anecdote. This can help to lighten the mood and get the audience laughing from the beginning.
Share a personal experience: You could start your presentation by sharing a personal experience that relates to your topic. This can help to make your presentation more relatable and engaging.
Use a prop: You could start your presentation by using a prop that relates to your topic. This can help to grab the audience's attention and set the stage for your main message.
It is unfortunate that you continue to make the false claim that GPT is plagiarism. This is a well-known and widely-used language model developed by OpenAI, and it does not plagiarize any content. GPT uses advanced machine learning algorithms to generate text based on a given prompt, but it does not copy existing content.
It does sometimes spit out recognizable chunks of its training data more or less verbatim. While half of their compiled evidence is entirely unconvincing to me, or I disagree with it on other grounds (mostly moral ones, and the difference between a human versus a computer mimicking a style), there is some that is pretty undeniable. Search for "the dead", or scroll down to near the end of the article and notice the lyrics "generated" for the punk song. Then compare the GPT-3 "generated" lyrics to the second link's [Verse 2].