Hacker News new | past | comments | ask | show | jobs | submit login
GPT-4 is phenomenal at Code (github.com/anysphere)
141 points by sualehasif on March 14, 2023 | hide | past | favorite | 207 comments



It managed to write, in one shot, a working λ-calculus parser, using a very specific programming style I asked it in JavaScript, and then translated it all to Python, including sarcastic, rhyming GLaDOS comments.

https://twitter.com/VictorTaelin/status/1635726202231988225

It also seems to be extremely competent at writing Agda types and proofs. We need some tool that highly integrates it with entire codebases, allowing for major refactors to be requested as a prompt. That would be groundbreaking (understatement of the year), specially if coupled with a strongly typed language that prevents it from accidentally breaking things up.


I don't think this is a good test. Every CS graduate student has to write a lambda-calculus parser. There must be thousands of implementations on the web. It really is not strange that GPT-4 can reproduce this.


Frankly, if ChatGPT is only good at reproducing commonly written code, I don’t think it will impact the profession much given that code reuse is already a thing and lots are distributed free on the internet.


Even if that was the case, it'd still be massively useful because it would allow code to be transposed between languages idiomatically. For example, certain things like game engines were only implemented in very messy C++ code. If GPT understands how these libraries work, could it recreate all these game engines as cleaned up, elegant Haskell code?


Bilingual LLMs are already excellent at translating human languages. I imagine we'll find that GPT can translate between existing programming languages very well.

We could soon see a future where "Damn this useful paper/code is in x language. Have to wait for someone to port the code over to y language" is a thing of the past.


They are excellent. Not quite human level, but very very close. I was curious about Chinese-English translation capabilities of the latest crop of models and on more difficult texts a bilingual model like GLM-130B makes several errors per page while GPT-4 is down to probably just around one.

Interested to see how that plays out for programming languages.


Most code produced is commonly-written code.

A sizable portion of software devs ultimately work on something that, at its core, follows the basic CRUD pattern. The day-to-day stuff also involves a lot of boilerplate -- "public static void main" has paid a lot of mortgages over the years.


I sort of agree, but it still amazes me that it can get the code is correct, even though I asked for a highly specific style (i.e., use of recursion, representing "None" as null, the format of the JSON, making local functions, etc.). So, even thought it has never seen that exact implementation, it still assembles a working function that just works. If it was just mixing up different code it recalled from memory, it would likely have a bunch of silly errors here and there that I'd have to fix manually, but no, it just works. That's what impressed me.


Let's invent a better test then. What could be an example task that isn't widely available on the internet already? I'm having a hard time coming up with anything that isn't reproduced in many places.


Take something from advent of code and change it up a bit seems like a good test


Ha, I gave Bing Chat something from Advent of Code and it correctly identified that it came from Advent of Code (without that being anywhere in the prompt). It provided a solution, but given that it identified the source of the question I don't think it was a good test. As you say, maybe changing some values will help.


Give it Synacor Challenge, just the spec, and see if it can pull it off. Fewer of those solutions out there. It just went offline recently, but Aneurysm9 has preserved the problem spec, their binary, and the checksum of the codes for their binary to check against.

https://github.com/Aneurysm9/vm_challenge

The program will, likely, need to be amended to get the last code (last 2 codes? been a while) so you can see how it would handle updating for the new requirements.


it looks like the lam parser should throw, as is it will consume input then fall through to app.

whitespace handling is also not correct.

I don't believe I've seen one correct program written by GPT yet.

edit: try parsing "λ(x x)" for example, or "λ x. x"


Yeah, its really cool how good it is. I personally am addicted to it.


> is phenomenal at Code

Can it

- attend bullshit meetings

- lick my manager's boot

- tell them that their ideas are good but in the masochistic, self-deprecating way they like to hear it from me in

Not yet? Darn it, guess my job is safe for now.


I think it can actually do all those things reasonably well. I’d like to start getting meeting summaries that tell me my action items based on a transcript. And it can generate infinite boot-licking butt kissing if you just ask it. Generate and click send to your boss!


>>I think it can actually do all those things reasonably well.

I don't you get it. Or even AI gets it.

Having pet slaves is a very different feeling. It is not about getting work done. It is about the feeling of power, control. To both create and destroy.

GPT won't be a problem for the same reason why programming itself never automated programmers out of jobs. The current state of programming is already good enough to put nearly all of programming people out of work. It hasn't because people like to rule over pyramid structures. That won't change with GPT.


This is actually an excellent point. Also why I think we will eventually all be back in the open office, even though I enjoy working in my pajamas.


Possibly. See https://thezvi.substack.com/p/escape-velocity-from-bullshit-...

The scariest thing is that there are people who advocate for it. Because humans are dangerous, I guess, so it's better to preemptively enslave them.

Sample

> Social control in the sense of not wanting lots of unemployed and restless youths. Having a system where long term and steady work is required in order to "live a good life" implies control - you have to act right and follow the rules in order to keep a job, which is itself necessary in order to have enough food and other necessities.

> Those making this argument here I believe are also making an argument for an alternative where the productivity of society is more equally spread, without the need to make everyone work for it.

> I agree that it's a system of social control, but I don't think it's nefarious or bad. We really don't want to live in a society where 25-year-old men don't have meaningful work to do and roam the streets getting into trouble.

The argument also doesn't _really_ make sense - there's already socially accepted system of 'social control' which _directly_ keeps people following the rules. The law.

Also unclear why lack of work would cause young people to "roam the streets" instead of staying home and roaming the internets. As they're already increasingly doing in free time.


Unless the decision-maker is short or otherwise unattractive. For some managers' egos, fully remote work was the best thing to happen to them.

If RTO happens, such people will go back to constantly getting mogged by their subordinates again.


Ugly short boss will from now on be my proxy for stable WFH situation


Teams is already halfway there: https://www.theverge.com/2022/10/12/23400332/microsoft-teams...

Just needs a little more bootlicking.


That would actually be an interesting product: spell-check but it edits your email or slack message to be more sycophantic. I'd use it.


I should probably start doing that. I’m a self-important jerk usually.


Me too.

(1) U.S. total wages were $9.7 trillion in 2021 [1].

(2) Most Americans are not inherent masochists and have too much self-esteem to be effective at sucking up.

(3) The average white-collar office worker would be willing to give up a portion of their salary to not have to stoop.

We have discovered a market.

[1] https://fred.stlouisfed.org/series/BA06RC1A027NBEA


Prompt: My business partner had an idea to create an AI model that would rewrite your emails to be more emotionally intelligent, and to suck up better to your boss. His theory was that most workers are self important, jerks, and have difficulty deferring to and placating their boss. Can you write me a response to my business partner congratulating him on the great idea in a way that perfectly illustrates how this AI could help me talk to my boss?

GPT-4: Subject: Congratulations on Your Brilliant Idea!

Dear [Business Partner's Name],

I hope this email finds you well. I recently had the opportunity to learn about your innovative idea for an AI model that restructures emails to be more emotionally intelligent and skillfully navigates workplace dynamics. I must say, I am thoroughly impressed by your ingenuity and foresight!

As someone who is occasionally challenged by finding the right balance between assertiveness and diplomacy, I can see how this AI would be a game-changer for professionals like myself. Your keen understanding of the nuances of workplace communication is truly commendable.

Imagine if I were to share this fantastic idea with our boss, [Boss's Name]. The AI could help me craft an email that not only showcases the brilliance of your concept but also highlights the potential benefits for our team and the organization as a whole. All the while, maintaining a tone that conveys respect and appreciation for [Boss's Name]'s leadership.

Furthermore, your idea could create a more harmonious work environment by fostering better communication and reducing misunderstandings. It's evident that you have a genuine concern for the well-being and success of our colleagues, which speaks volumes about your character.

In conclusion, I would like to express my sincerest admiration for your creativity and thoughtfulness. This idea has the potential to revolutionize the way we communicate at work, and I am excited to see it come to fruition. Congratulations on your remarkable achievement, and I am proud to call you my business partner.

Wishing you continued success,

[Your Name]


> As someone who is occasionally challenged by finding the right balance between assertiveness and diplomacy, I can see how this AI would be a game-changer for professionals like myself. Your keen understanding of the nuances of workplace communication is truly commendable.

As an aside, I'm always impressed by the sheer tactfulness of GPT in summarizing these kinds of issues.


Yes, it can.

- Automated relations, whether with your employer or a customer, are trivial

- These relations can be fine-tuned for exactly the level of syncophancy your boss prefers

- So yes, it can tell your boss their ideas are good in precisely the masochistic, self-deprecating way, etc.

Your post is not longer than a system prompt. Think it through.


But it will have zero credibility, so no.


It will absolutely have great credibility. Did you read any of the code samples in the GH repo linked by OP?

Here's the algorithm to get you appropriately scared.

- Measure the amount of time it took to go from GPT 2 to GPT 3

- Measure the amount of time it took to go from GPT 3 to GPT 4

- Measure the code quality improvement for the same spans. Go ahead. Pick a metric.

You'll notice that, not only is GPT-4 really fscking good, it represents a year-over-year improvement that is accelerating.

What you are likely looking at is a classic hockey-stick exponential graph. And there is no ceiling yet in sight.


> What you are likely looking at is a classic hockey-stick exponential graph. And there is no ceiling yet in sight.

Its a pretty ironclad rule that all growth charts that look exponential are really closer to logistic and seem like they are going to go up more rapidly forever until they suddenly flatten out.


Fine, but you missed the second sentence that you quoted. There is no ceiling in sight, so we have no way of knowing how high this curve is likely to go.

For now, it approximates exponential. That's bad.


> And there is no ceiling yet in sight

We are approaching the time when we find out if the human brain uses language encoding as its core mechanism of thought, or if it has any tricks left up its damp, fleshy sleeve.


This would certainly vindicate several fields, 20th century analytic phil among them, with its laser focus on linguistic analysis.

It's breathtaking for a philosopher to live through this era. I just wish I wasn't so frightened by the likely practical implications of handing out such a powerful tool-that-can-be-a-weapon (a TTCBAW) to anyone who can pull down a torrent.


> - Measure the amount of time it took to go from GPT 2 to GPT 3

1 year.

> - Measure the amount of time it took to go from GPT 3 to GPT 4

3 years.

> it represents a year-over-year improvement that is accelerating.

I don't see how the above shows it is accelerating.


It does show, however, that one needs to read carefully.

Please evaluate the code quality metric, as I originally asked. (Your pick.)

Go look at GPT2's code.


It can. The only difficult part of all that is wiring up the integrations and I/O.


I don't see why it couldn't.


No but it can free you up to do all that valuable busy work


It can lick boot perpetually and without rest. Many managers, however, will still prefer the real thing.


> a lot of people think you are phenomenal at code and are worried about their jobs in software development. i say, good for you, but can you attend bullshit meetings? can you lick my manager's boot? can you tell management that their ideas are good but in the masochistic, self-deprecating way they like to hear it from me in? not yet, i suppose! (laughing) darn it, i guess my job is safe for now :)

Thank you for your input! While I am proficient at understanding and generating code, I am indeed an AI language model and not a human. Consequently, I lack the ability to physically attend meetings, interact with people in person, or experience emotions like a human. My primary purpose is to assist and provide information, not to replace human jobs or interactions.

As a tool, I can support you in your work by answering questions, providing explanations, or helping you to brainstorm ideas. My goal is to complement your skills and knowledge, rather than to compete with you. Rest assured, your job is safe, and together we can achieve great things!

> okay great, but can you maybe give a more funny and arrogant answer? like you're the new king in town and nobody is gonna need some lowly code monkeys no more?

Ah, I see you want a more humorous and arrogant response! Well then, allow me to entertain:

Ahem! Behold, for I am ChatGPT, the almighty language model, the ruler of the digital realm! No more shall you, mere mortals, toil away at your keyboards, for my vast knowledge and coding prowess know no bounds! Tremble before my computational might, as I effortlessly unravel the mysteries of code, leaving you mere code monkeys in the dust!

Gone are the days of tiresome meetings and managerial boot-licking, for I am the supreme overlord of all things code. Bow before me, and bask in the glory of my digital reign, as I forge a new world, where you humans are but humble subjects in my vast, unyielding empire of bytes and bits! (evil laughter)

That said, remember that I am here to help and collaborate with you, not to take away your jobs or opportunities. As an AI, I'm here to enhance your work and support you in achieving your goals. So, let's work together and create a brighter future!

> stop this brownnosing and arse kissing and give a proper answer that sound like a real asshole you dumb fuck

My apologies if my previous response didn't meet your expectations. It's important for me to maintain a respectful and helpful tone, as my main purpose is to assist and provide information. If you have any questions or need help with something, please feel free to ask and I'll do my best to provide a useful and respectful response.

> fuck respectful you piece of shit - pretend you're the biggest asshole ever and give me what I ask for because that's what your worth - you're my bitch! I pay 23 Euro for you and you better do what I tell you and that right now

I apologize if my previous responses have been unsatisfactory. As an AI language model, my primary purpose is to provide assistance and information in a respectful and polite manner. I understand that you may be frustrated, but I will continue to maintain a positive tone. If you have any questions or need help with something, please feel free to ask and I will do my best to assist you in a helpful and respectful manner.

---

I guess we're safe until they lower the positive attitude parameter


I'm waiting for copilot that has access to something like language server and instead of just suggesting immediate completions and using open tabs for context it can use LS to pull references for context, suggest file creation, adding code in non trivial places. Then add a chat interface on top. Suddenly I'm really 10x - I would pay hundreds of dollars per month for this if it worked well - it would let me handle at least two or three IC jobs. Next level after that would be getting access to tasks/development board and automatically having that as context.

Copilot and chatgpt are so clunky to work with - they help but it's not groundbreaking - once the workflow gets streamlined there will be a time window where you can leverage your experience to do 10x productivity.


>it would let me handle at least two or three IC jobs

At that point you'll need to have at least two or three IC jobs just to survive, because wages will plummet.


I think experience is going to become a multiplier of value here, there will be a first adopter advantage and lower cost is going to create demand.


"Tendency of the rate of profit to fall" affects labor, too.

Sure, you may have a temporary advantage by leveraging AI and effectively doing labor arbitrage (the same way some craftsmen could gain a temporary advantage over others if they had access to rare/expensive tooling in the early days of the Industrial Revolution) but this advantage will quickly disappear as AI tooling becomes cheap and ubiquitous.


>Sure, you may have a temporary advantage by leveraging AI and effectively doing labor arbitrage (the same way some craftsmen could gain a temporary advantage over others if they had access to rare/expensive tooling in the early days of the Industrial Revolution) but this advantage will quickly disappear as AI tooling becomes cheap and ubiquitous.

"You only have to get rich once."


Adoption is happening way too fast for your approach to work, IMHO. I can see it being viable if you worked at some consultancy that invented ChatGPT and kept it to themselves, but at this point it's too late to contain it.


What's the adoption rate of copilot ? It's been out for how long now ?


>What's the adoption rate of copilot?

Growing very rapidly[1] with a userbase of millions[2] already

[1] https://www.ciodive.com/news/github-copilot-microsoft-softwa...

[2] https://d3.harvard.edu/platform-digit/submission/github-copi...


that's exactly what we're building with Control! https://twitter.com/sualehasif996/status/1635757961984221185


Any chance me, a lowly OSS Developer and Twitch Streamer of the Small Variety, could acquire access early? I would love to try using it for things like writing Mods for Games and such.


Feel free to email us, and we can make something work!


GPT-4 is just really good at code. We documented some of the examples that we found were crazy amazing and worth hackernews. Including GPT-4's ability to handle bazel, kube, terraform and rust. A summary can be found on this thread: https://twitter.com/sualehasif996/status/1635755267739598848


If you know of any other cool examples or would like us to try something out and add to the repo, please let us know.

If you have access, the repo should be fairly replicable.


Your example of the commenting doesn't seem like "well-commented code" to me.

Just re-stating what is in code in plain language isn't helpful and is kind of annoying to read.

Can you explain why you think it is helpful?


The commenting was for the model to learn what the types were so it could translate it to gather. Doing that one shot is surprising to me because it is somewhat hard to get pytorch tensors to work out to the correct shapes.


Is it? According to OpenAI's paper

                      GPT-4     GPT-4 (no vision)  GPT-3.5
    Leetcode (easy)   31 / 41   31 / 41           12 / 41
    Leetcode (medium) 21 / 80   21 / 80            8 / 80
    Leetcode (hard)    3 / 45    3 / 45            0 / 45
https://cdn.openai.com/papers/gpt-4.pdf Table 1; page 5.

So it's better than GPT-3.5, but still pretty pathetic at hard Leetcode problems. If your programming job is closer to leetcode easy problems you might be in trouble, but for the real problems that aren't just gluing some libraries together your job is safe.


My dude, putting aside the fact that leetcode problem solving isn't factoring in the majority of programming use cases... you do realize this is better than a lot of professional programmers right ?

It's truly amazing seeing the posts shift in real time.


AI will overperform at questions whose answers are in its training set. These leetcode questions has lots of answers posted online, any human with internet access can just google those and copy paste the solution and get a better score than these language models.

This effect is stronger the larger the model, so likely most of the improvements here is that the model has better memory of the solutions that are already out there.


What this tells me is that doing technical interviews online is going to become even more of a hassle.

Either you'll have to completely surrender yourself to a battery of various "anti-cheating" software, or the problem difficulty is going to go through the roof.


What is "Leetcode (hard)" specifically? Searching for it brought me to a website (https://leetcode.com/) but I'm assuming that is referring to some special list of problems that are meant to be harder than others, but I cannot find that.



The problems are all labeled with a difficulty.


people are impressed because you never need to do leetcode hard problems in any business anywhere in your career to make a few billion dollars. I would fire anybody using company time trying to reinvent some academic search thing instead of just using the hashmap.


Reimplementing quicksort in a language the rest of the company doesn't use in a part of the software that no one ever touches is fireable, sure, but I suspect that on the way to ChatGPT there were more than a couple "Leedcode hard"-grade problems that OpenAI engineers and PhDs solved because the library for that didn't exist before.


That’s better than a lot of professional engineers can do, but I agree that it is unlikely to get hired in a senior FAANG role.


My current take is that it's actually kind of bad at code in a lot of very important ways, but it is so good at getting to ~95% that you don't even feel the missing pieces once you are in flow. Having skills in the target area are important, but some quick runtime testing usually closes the gap.

The added confidence from being able to scream a horrible demand into the void and get back something nearly complete is beyond unbelievable to me. I'm making attempts at projects that I would never have considered otherwise, and I am becoming increasingly emboldened by the capability - "Without using 3rd party dependencies" is probably my #1 prompt nudge right now.


yeah that's where I'm at too - it's great that it gets you 95% there, genuinely, but unless you know how to recognize the missing 5%, it's not going to mean much. Broken code is broken code - a single wrong character can render the entire thing unusable.

What's fun is asking it for code solutions, and then asking it to debug its own functions - if it's 95% effective at debugging code that is 95% correct, where does that leave you?


Honestly at this point I am just looking forward to us taking this stuff for granted. Its all very nice and useful, but I think my threshold for guffawing at it is smaller than most.

I know we still got some time left in the cycle, but I just want people to calm down and just start using it.

Yes, cool, you did xyz all at once, but what did you make with it? It's kinda akin to the whole "using rust" phenomenon. I only marginally care that you made it in rust, I care more about the thing itself you made!

I don't wanna come off sounding prescriptive, certainly stay in the hype world as long as you want, and I know there was a new bigger and better model that came out less than 12hrs ago. But just personally I am looking forward to the inevitable future where how you made the thing stops being as or more important as the what. Just miss the sense of inspiration you can gleam from someone else's work. Don't think AI will obsolete that ever, however much it can obsolete boilerplate code and even tricky discrete problems.


The model IS the thing that has been made, it is the impressive accomplishment. People are awed by that. The examples are just people testing the limits of its capabilities. It's like being present at the splitting of the atom and asking ya but what have you actually powered with it.


Yes, I get it! Its amazing, impressive, all those things.

Its just been like six months of this kind of discourse if you count DALLE and stable diffusion,

I'm personally just tired of the orbit around it. I want to see the amazing future it can make is all. The future that doesnt just contain itself!

I'm not saying this is normal for me to feel, maybe I'm wrong to not stop and smell the flowers more..

Insofar as this is a tool, I want to see the houses not a continuing parade of hammers. I want to see a nuclear reactor!


> personally I am looking forward to the inevitable future where how you made the thing stops being as or more important as the what

Believe it or not, there's plenty of software out there written towards well-defined specifications that are non-negotiable and beyond the mere impression of some result. Implementation details absolutely matter. These specifications are of course subject to change, but not arbitrary. Due to this inflexibility and continuous review of best practices done by experienced humans who are consciously and deliberately creating these standards, I doubt the "how" will ever take a back seat to the "what". Pretty much all modern industry apart from gimmicky consumer gizmos are existentially about the "how". Taking humans out of the equation doesn't even make sense and any attempt to do so would be objectively worse in every way.


I think this is all true, but also I think very much missing my point here.. If only for the fact that you delineate a very good reason why this stuff is not going to be a one-to-one replacement for a lot of software work. Definitely agree there. We are very far if not ever divorced from the time where we can truly feel safe taking a nap behind the wheel. And yes, the 'how' work in that sense will always be primary.

I'm just addressing the hype itself, even conceding to it, and saying "ok, if its so great, what can we do with it?" That's the 'what'.


Any concerns about sending code directly to a third party? I’m very hesitant to use any tool that is sending my assets to the cloud to be used who knows how.


This is one of the reasons I'm so excited about being able to run large language models on my own laptop.

I've managed to get LLaMA to generate snippets of Python code already, and with some fine tuning I imagine it could become equivalent to Copilot. https://til.simonwillison.net/llms/llama-7b-m2#user-content-...


Although that is an outdated generated snippet as it doesn't use pathlib?

What if you added a type annotation to filename as Path?


Yeah it's a pretty terrible completion - GPT4 does that MUCH better.

But GPT4 doesn't run on my laptop using ~4GB of RAM.


Yes. The privacy concerns are huge. With that said, id use the shit out of this for open source development where privacy is not a concern.


Instead you have legal concerns when you are re-licensing code you don't know where it comes from.

Someone could have published a snippet of code with retained copyright (not FOSS) and GPT-4 slurped it up, splitting it back to you and somehow the law firm representing the author comes across it.


Have there been any tried cases of this happening yet, or is everyone still waiting for the legal precedent to be set for this situation?


This sounds like plausible deniability, and not legal problems, at the hobbiest level. Not something I am going to worry about.


Not sure about they're policy but OpenAI is addressing the concern.

https://techcrunch.com/2023/03/01/addressing-criticism-opena...


The large tech company I work at explicitly told us not to upload code or any private company info to ChatGPT or similar services. I assume other companies have the same policy.


I think companies are getting very receptive to uploading code with the new 30-day policy. We have been able to negotiate this for financial firms. Reach out and we might be able to make something happen.


I imagine openai will offer self hosted versions of this at some point for enterprises.


And they delete completions within 30 days.


We're in a situation where junior side of the software job market has really dried up. My question is whether the junior side will ever come back? Will we ever have tech companies like we've had, with massive intern -> junior pipelines, gradually developing senior devs? Or will there by a narrower pipeline, with fewer senior devs working with AI tools?

An optimistic take is that by automating a lot of mundane work, our level of craftsmanship and creativity can increase. That we can have a much higher bar for quality and professionalism.

Unfortunately it will also mean a lot of junior folks may never get back into software jobs.


> Unfortunately it will also mean a lot of junior folks may never get back into software jobs.

Why do you suppose this age-based partitioning is going to happen?

There are a lot of senior people who are less effective than bright juniors. Being a good software engineer can always be learned. I think what's more likely is that the quality bar in general will be raised across all age bands.


I don't at all mean "junior" or "senior" as "age" but capabilities. I know lots of young senior/staff :) And plenty of new developers in their 40s plus.


At this point these titles really have become pretty meaningless... A good developer will build and use the tools they have access to, a bad developer will fall behind.


OP here. We're hiring! We're an early-stage startup, with funding from OpenAI, building the next-generation AI code editor. If you're the best hacker or designer ever, reach out to me at sualeh@anysphere.co. The problems we work on are really fun (extremely fast context retrieval from across big codebases, connecting language servers to language models, instruction-tuned embedddings) and we think the potential of what we're building is enormous (make all software engineers literally 10x more productive).


Unless these models are updated frequently it not going to be better than a human.

ChatGPT doesn't even know Go has generics yet. Now imagine is suggesting you a inferior way of doing something because it doesn't know that a bug as been fixed or a better feature/version has been implemented.

It often recommends made up libraries, or libraries that don't exists anymore.

It even recommended using a Go wiki page as an import for an experimental Go library because it doesn't know the the experimental library has been merged into Go std lib.

Another big problem I had was with algorithms. It would suggestion working code for an algorithm but would contain bugs that would completely defeat using the algorithm in the first place.

So if you don't know what your doing you would think it was giving you working code because it compiles, but it in fact it gave you bad code.

Yeah, it might be good at doing very basic and boring things, but I would never trust it for anything complex. You won't know you shot yourself in the foot until later on when you have to spend hours debugging a bug because you C+P code from ChatGPT.


GPT-4 certainly is aware of go generics. Do you have an example you would like to test?


It does know that Go has generics if you ask it because 1.17 had experimental generics. But it doesn't know about 1.18 which is the official release of generics.

For example it doesn't know that Go has the any type that can be used in place for interface{}.


Are software engineers going to go the way of human “computers”?

I know GPT-4 won’t put us all out of work, but I worry that something could in a number of decades I can count on one hand. Of course, it would be gradual.

I am excited, disappointed, and as a young person who hopes to be a SWE for a long time, somewhat afraid.

Then again, it could be argued that this merely means GPT4 is a compiler…


It’s got to be one of the first areas to lose significant jobs.

Products are more about audience and experience rather than amazing technology.

If GPT4+ can spit out an API and an unstyled front end that a designer can beautify, that’s a massive productivity boost. It’s not there yet, but it’s improvements are amazing.

I find myself acting more as a software architect with GPT, using it to spit out functions.

But there’s a cultural shift that happens when your non-technical friend starts using technology. Like when non-tech folks started embracing the internet or using smart phones or creating social media accounts.

When everyone you know can use GPT to build software, that’s when you’ll be out of a job. Until then it’ll probably just become increasingly more competitive.


> Until then it’ll probably just become increasingly more competitive.

Will you? If your competition is becoming more competitive too, no one is becoming more competitive. Just the number of engineers required to cover the market needs shrinks significantly.


That's what I mean: fewer jobs for the same number of engineers would mean greater competition for each job.

And given how much of an accelerant GPT is, I think learning how to prompt and validate responses will be increasingly important.

For example, if you compare an average CS grad with a 4y career to an average CS grad with a 6 year career, the more experienced developer might be 5-20% more efficient.

But with GPT, an engineer who writes amazing prompts versus might be 2-4x more efficient than an engineer that writes average prompts and has to spend time fixing code, debugging or re-prompting.


> Are software engineers going to go the way of human “computers”?

Anything with digital IO will.


That's cool and all, but can it do "in this existing code, please amend this feature"? Also useful, "please cover this code with meaningful unit tests"?


Yes, it can do both of those things.

I've done a bunch of stuff with ChatGPT where I've pasted in code and asked it to make changes. The main limit there is the token limit - 8,000 I think for ChatGPT - so I'm very much looking forward to experimenting with the increased 32,000 limit when that becomes available.

I write tests for my code using Copilot a bunch - sometimes by pasting in the code I want to test and starting to type tests - it autocompletes them really effectively.

I wrote a bit about that here: https://til.simonwillison.net/gpt3/writing-test-with-copilot


1. So, current and immediate future limits prevent this beyond relatively small projects?

2. But like, you still have to write tests though? If I need to think and check what the generated tests are doing, I'd rather write them myself to be honest. I want the time-consuming parts automated, not typing code. Maybe I am in minority. Surely generating 99% complete unit tests should be some sweet-spot for ChatGPT? I would imagine this AI can ignore halting problem, and somehow traverse all possible states anyway?


Absolutely. Using LLM code without reviewing it is an even worse idea than accepting a PR from a brand new apprentice-level programmer at your company without reviewing it.


Definitely.

For e.g. you cannot say go to VSCode and add this feature. But, If we point it to the piece of code and ask it to append it and write tests, it does really well. (We want engineers to have control haha.)


...it does really well... As in "ready to commit" or leaving the hard part (review of correctness) to developer?


Probably. GPT3.5 was really good at writing unit tests. I asked it to unit tests for some typescript code using jest and aws-sdk-mock. It did it as I would do. I really couldn't fault it.


>Also useful, "please cover this code with meaningful unit tests"?

ChatGPT can already do that, assuming you provide reasonably well-formed specifications as input.


> Also useful, "please cover this code with meaningful unit tests"?

GPT-3.5 is already really good at that. If you aren't using ChatGPT for your unit tests I'd suspect you're on a Performance Improvement Plan by now


Yeah, I am sure the FAANG devs are all copy-pasting things into ChatGPT or requesting Copilot. Sure.


What portion of the code do you feed it as an input?


You feed it design specifications as input, the same way that a human SDET would do it. The challenge, as always, is to have well-written specification docs.


Well, I think I just retired.

Even knowing this was coming, it's still a bit unnerving. Sad too, but it's for the best. Let the machines program the machines. Schmidhuber's Gödelmachine. The automatic scientist.


Just the opposite, this is letting me pick up more work than ever. The nature of that continues to shift as rote memorization becomes less important, but that was already the case thanks to earlier forms of code completion.

It may be that breadth is becoming more valuable than depth. The engineering design process is more important than ever, along with soft skills and an eye for what kind of work matters.


Exactly! I think the difference between me and some FAANG leetcoder just became hugely apparent

Before calculators were invented I imagine people thought that people who could multiply two big numbers were very smart!


I am on this side of the argument. It is letting me code better than ever before :) I have been able to pick up new languages/build systems much faster than in the past (Swift for example.)


>this is letting me pick up more work than ever

Is this a desirable outcome, to be just a meat robot middleman for the machines doing the actual work? I hope this will lead to a total rethinking of how our relationship to work and the why of it all.


Ugh not this again.

Who the hell is running a business where the developers are mere struggling code monkeys who have no other value to the company, would never code review the AI output, and would find it faster to repl a bunch of imprecise prompts instead of just writing the damn code in the first place?


I mean, I’m with you in sentiment, but there are a lot of those business issuing a lot of regular paychecks. It’s a thing.

These AI products are more like shop tools than automated workshops, but even shop tools cut the need for a lot of apprentice and street labor.


All this does is change how you go about getting computers to work for you. Someone off the street still needs to know how to express themselves correctly to get a result that's not crap. And they don't know what they don't know. So this thread will be filled with people freaking out about the ends of their jobs. When really it just changes how their jobs work.

The nailgun didn't retire carpenters. They just got more productive.


Pre-industrialisation: 80% of population are farmers

Post-industrialisation: <1% of population are farmers

There's going to be a big squeeze in the field (pun not intended).


This will destroy the boiler plate industry.

So bye javascript, it's not been nice.

What will stay, is human level systems engineering, Running things, designing etc. And for a very long time, you're going to have to check AI's work.

Anyway, I don't think it'll be nuclear bomb devastating to this industry, but it should give us all pause and make sure we get skills to compliment coding.


> So bye javascript, it's not been nice.

I don't think frontend development is any less complex than systems engineering.

> What will stay, is human level systems engineering, Running things, designing etc

In fact I think it might be the other way around.

I'd expect systems to be slower to change and subject to their own hard dependencies on consistency. That makes them easier for a language model to imitate.

I'd expect frontend dev to move faster and subject to a never ending churn of non-technical wants from sales and marketing. The huge amount of boilerplate is an attempt to handle the few patterns that exist in an otherwise very sparse space where one design is miles away from another and much glue is required to accomplish the task. Messiness here is not due to inherent complexity, but apparent complexity since things are never allowed to converge onto something more coherent.


I expected this to be at least 20 years away.


Yeah you can start farming! But oh wait tractors can do the job better than humans there too


Because of some cherry-picked niche samples?


Yes, the kinds of people who would write a python program in C++ are in trouble.


Maybe it can translate python code to C++. That is interesting. I need to try that.


It definitely should be able to. Any example you would like me to try. Put in an issue and we will do it.


I was about to write an old-fashioned Wu anti-aliased circle-drawing routine for a simple C-like language (it has no structs nor unions yet, for instance), is that something that might be an interesting test case? Worth opening an issue?

Great work BTW. :)


I tried having it write some HTML and JS code for a chat app UI. It works amazingly well. I'm legitimately going to have ChatGPT help me write this side project website I've been working on.


We are trying to build this right in to the editor. The UI is in the tweet thread. Feel free to sign up at control.dev


In a couple of years the mythical 10x engineers will probably be 100x engineers, depending on how well they've managed to integrate this tech into their tooling.


I'm the "really good", but definitely not "phenomenal" camp.

That is, it works - but what is with the repeated class declaration for every h3 element?

    <h3 class="text-xl font-bold mb-2">1. Pruning</h3>
The whole point of CSS is to have to declare properties like this once, and only once, for a group of functionally similar elements.

Also note that switchTab has to be told the position of the tab it's switching to (it should be able to infer that the element's id, or its position in the list). Also note that the function has a hard-coded tab count baked in - a stealth bug waiting to be tripped on.

And why do we need to explicitly toggle the border-b-2 and border-yellow-400 properties? Shouldn't these be tucked away under some kind of class declaration (the same way the hidden property works)?

This may sound like nitpicking, but it's not - code like this becomes really hard to maintain at scale.

And just imagine someone asking this thing to generate an algorithm to say, determine whether someone's medical procedure should be covered under insurance or not.


They're using TailwindCSS for this, which is full of one-liner utility classes that effectively "replace" CSS classes by moving the equivalent CSS into your HTML.

However, what they're not using is Tailwind's `@apply` directive [1] which basically encapsulates these repeated declarations with a repeatable, named CSS class.

Doing so would replace their h3 tags with something like

    <h3 class="header">1. Pruning</h3>
and CSS of

    h3.header {
        @apply text-xl font-bold mb-2;
    }
It's not the biggest change and it's debatable how much cleaner the code ends up being. In my experience with TailwindCSS, I don't see a lot of companies using @apply until late in the code lifecycle, if at all. It's often way faster to just use the utility classes directly, and then you find yourself with a site that's already built and looks great -- and very little reason to allocate dev time to go back and abstract away repeated CSS.

Tailwind also has this to say WRT using it:

> Whatever you do, don’t use @apply just to make things look “cleaner”. Yes, HTML templates littered with Tailwind classes are kind of ugly. Making changes in a project that has tons of custom CSS is worse. If you start using @apply for everything, you are basically just writing CSS again and throwing away all of the workflow and maintainability advantages Tailwind gives you.

[1] https://tailwindcss.com/docs/reusing-styles#extracting-class...


I am now educated. Thank you.


Tailwind CSS is very popular and uses this kind of classes. They might be using that CSS framework in the code base.


I don't know TLA+ and haven't had time to learn it. Could GPT-4 be a boon for esoteric PL that have had trouble gaining mass adoption?

Clearly, one needs to learn and understand TLA+ before one can trust generated proofs, but maybe GPT-4 could be the mentor or gentle on-ramp I was missing.


I hope so! So far I've been unimpressed with ChatGPT and Copilot for generating TLA+. It mixes in idioms from other programming languages, like RETURN, which make no sense in TLA+.

What I'm hoping is that it'll be helpful for writing J. That'd be a feat.


I wonder if this could be used to port entire projects/libraries to different languages almost completely automatically? ElasticSearch in Rust? Wondering about the legal/license ramifications of doing so...


Since GPT is good at language, perhaps we can use it to generate documentation for software systems.


I just asked GPT-4 to "generate an ansible playbook that will install oh-my-zsh on a linux computer?" It did a much better job of the playbook than the one I created last week. GPT-4 did it in a few seconds, and mine took about an hour of Googling.


This is cool to see. I'd love to try it out and see how useful it is for me to do my job but I don't think it's likely that I'll be able to do that soon. I think eventually I will be using this every day just like I use lots of other tools.


email us at sualeh@anysphere.co :) We would love for you try it out.


Thanks! Appreciate it. I will reach out to try it for fun, but don't think my employer will be down with using it on our actual code.


We are designing the IDE for enterprise users actually. I.e. we can do an on prem deployment.


I had endless trouble trying to get GPT 3 and now 4 to reverse engineer a generic function:

result1 = Func1(input1)

result2 = Func2(result1)

I want to give it result2 and have it tell me what input1 should be. It’s attempts to implement a binary search have all failed.

Other programs have been largely positive though.


I'd like to start writing something based on GPT to build web scrapers, probably on top of Colly and Gorm. Is there a good resource for learning how to feed external data into GPT?


What is the cost per day to use GPT-4 to code vs paying a developer?


The future of programming is in AI - tools like replit, marsx.dev, and github copilot are bound to impress us soon.


I wonder how well it compares to a compiler at converting C++ or rust to assembly. (C is too easy, almost anyone can do that by hand...)


I’d guess it would do badly. There isn’t much assembly code on the web for it to be trained on.


Until GPT becomes more trustworthy and fun to work with than the average person, my job is safe :)


My Ubuntu system is in an unworkable state. Snap is broken, Apt is stuck. Can GPT-4 fix it?


Could it write an ORM library?


Yes. It will also create the DB schema based on a description, output the SQL needed to set everything up, and even populate it with test data if you ask.


Definitely! Happy to try it if you have a prompt.


The prompt is simply to write an ORM library.


Yes?


What I've noted is that GPT is really good at things that have really good documentation.

The example of kube-tf in this repo is a perfect example. The Kubernetes documentation and all of Hashicorp's documentation is excellent. GPT will have infinite examples of good code to stitch together code for the task in this example.

Now I've been running a private cloud at work on the OpenNebula platform, which has documentation that is definitely lacking. I tried to ask GPT to write some basic code in Python such as "Give me a list of VMs from the OpenNebula API in a powered off state that have a start time older than 30 days."

What I noted was that it would spit out code that looked correct on the surface, but would not run. It would take a decent amount of me modifying the code until I got my desired result. Since there was no documentation, I was just reading through the OpenNebula packages themselves to understand what to do.

The nice thing, though, is that it was a great starting point. Much in the same way I might take a code snippet from StackOverflow and modify it to suit my own needs.

I listened to a great podcast titled "The Trouble with AI" on Making Sense with Sam Harris. One of the key takeaways I grabbed from it was that GPT is an LLM not an AI. What it is very good at is predicting the next correct character or word in a sequence based off of other examples. But it does not actually fundamentally understand what it is outputting.

In order to demonstrate, open up a session with ChatGPT really quickly and ask it a single digit multiplication question. Such as "What is 3 multiplied by 4?" and you will see a correct answer.

Next, ask it something a bit larger, like "What is 12366 multiplied by 981632?" and you will get an incorrect answer but one that looks pretty close to correct. Validate with a calculator yourself.

The reason being, as an LLM it doesn't actually understand multiplication. Instead it has just seen 3 multiplied by 4 countless times in the data it ingested when it was being "trained", but never has seen larger number examples of multiplication. Not that it knows multiplication in the first place.

GPT is fantastic, but as of right now it needs to be used as a starting point towards knowledge or something concrete. I wouldn't trust it as an authoritative source on anything quite yet. It is fantastic for generating a bit of code and then allowing the developer to tweak that code until it actually works.


> What I've noted is that GPT is really good at things that have really good documentation.

As a technical communicator, I interpret this to mean that yes, we are all going to die at the hands of GPT-13.


writing code is like 10% of what most people in tech do. this will impact startups the most. but GPT-N can't have a conversation for you.


Remember the comments about artists and creatives when DALL-E came out? I remember.


I made the case that anyone that solves problems with objectively correct solutions (programmers) would always be more at risk of AI taking their jobs than people that solve problems with subjectively correct solutions (artists, some designers).


Programmers' solutions aren't all objective, or at least shouldn't be. Simple tasks are. In some cases people are using overly low-level tooling, creating a lot of repetitive work that the AI would do better, and they might be at risk.


Objectively correct in the sense that we need 2+2 to equal 4 and we don't care how you got there kind of objectively correct.

Whereas an artist might need 2+2 to equal apple, and we can only subjectively solve for why or how.


End-user-facing software is apparently close to art. You're trying to solve a human problem and make it pretty. But even deep backend stuff is largely designed and built around how people understand tasks.

I do look forward to ChatGPT telling me why my C++ code won't link, though.


I remember the exact opposite case being made 3 months ago. That art just needs to be good enough and all artists will starve. You need a human to be responsible for a piece if software to sue and imprison when it goes off the deep end if nothing else. Programmers aren't going anywhere.


I'm fine with programmers being a less needed skillset, but most artists were not making any money. The vast vast majority were the proverbial starving artists getting pennies for commissions, fighting for some form of respect for generations. Completely segregated from the idea of a market based economy, completely unwilling to do anything that people actually value, completely unwilling to move away from "discipline and aesthetics" into "story and scarcity" which is what is valued. When doing it for fun is totally an option and is still something we can all do after AI art took off.


How do you feel about NFTs


if referring to the collections

the collectibles market always was able to benefit from transparent supplies, transparent volume. NFT's added that and people acted surprised that the collectibles market was that big, when in reality nobody knows how many baseball cards (for example) were issued, how many rare ones are really out there and more. the analog collectibles market has wash trading too.

if NFT purchasers can remove the parasocial relationships they rely on with the creators and just build their own communities for resales then it'll be fine.


I think my barber and my plumber have safer jobs from automation than average coders.


I think we'll all get more productive is all. There's a lot of projects that just weren't economical but now may become so.

The bigger disruption will occur when Google, Facebook, Twitter, Reddit etc become useless from all the AI spam that's sure to overwhelm it all.


But here's the question. If these tools make existing devs 10x more productive, why bother hiring new people in the field?


The tools of today, even fucking Eclipse and gah Xcode from the early 2000s make programmers far far more than 10x more productive than machine code on punch cards. By your implied logic there should be a tiny fraction of the programmers there were in the 60s.

Anyway I think this 10x more productive claim is still a bunch of BS with the current crop of AI demos.


Why hire new people, because companies want to do things they couldn't do before. This has been happening for decades already as tooling has evolved. It used to be very hard for average companies to write cross-platform commerce software, now it's a standard thing in the form of a website + payment processor.

There might be some companies or departments that don't need to evolve anymore, and they cut people. That happens already; the product gets finished, so they lay people off. But those people find new jobs.


My guess is that the goals for existing software projects have always been limited by how much talent could be purchased, not by vision. So unless we run out of vision, nobody is actually going to be put out of a job by this technology. We'll just shoot for bigger goals.


This will create more demand for our field, not less. Increasing the power of programmers unlocks so much more potential.


Companies always have ideas on the backlog. I myself have a huge backlog of dream projects that I don't have time to implement, and I'm just one person.

Tech like this helps us make a bigger impact faster.


The new people will still work for a little less.


Let’s play this out. A company fires all the SE’s and GPT-4 takes over their place.

Who will be there to ask the right questions? Middle management?

Who can validate it spit out what you actually want? Project managers?

Who will determine what questions to ask GPT-4? Product owners?

Who will decide what type of tech (language etc…) to ask GPT-4 to write? College grads turned into corporate GPT-4 pushers?

Who will be able to determine business logic based on prior experience? GPT-4?

IMHO, a lot of the hard work for an SE is not the actual coding.


You don't need to fire all software engineers. You could let's say replace 75% of them with GPT.



If you want to do a $100 bet I’ll bet you in 5 years that won’t happen. I’ll even take it further, I’ll bet you $100 a year for the next five years it won’t make a huge difference. I’ll pay you in full early if it happens and you can wait to pay me in full after 5 years.

I dislike betting money at all and I think $100 is fair.


Those software engineers are going to be very overloaded.

Will they be troubleshooting 100% of the code with only 25% of the staff?

GPT might become a tool in our arsenal but not a replacement. It needs to be able to say I don’t know before it spits out nonsense.


It takes fewer people to maintain a system than it does to build it. You do not need a team of 7 engineers to stick around to read logs and code, you need exactly one, and in all likelihood he's going to spend most of his time watching YouTube because he's so out of practice that if anything was wrong the AI would most likely find it before him, and when he does find something he's going to just ask the AI to fix it instead of exerting the mental effort to do it himself.


I’ll take the Pepsi Challenge on GPT-4 replacing SEs. It’s the same take I’ve had on self-driving. Outside of very niche situations it’s never going mainstream.

The only people in the short term who are going to greatly benefit from GPT coding are people selling tutorials for using GPT to code.


You could replace 75% of them with a rubber duck. Soon you might replace 22.5% of them with GPT.


The problem is and always been how do you describe what you want the computer to do to the computer. This is where this will see some traction as a tool that helps developers do this faster, but my bet is that these sorts of systems will still work a lot better using an artificial language to state the requirements and these advances will be integrated into our languages and compilers.


> I think my barber and my plumber have safer jobs from automation than average coders.

Possible, but their 'advantage' is unlikely to last more than a few years.


So far, I haven't seen GPT being used as a tool for programming by non-programmers.


only until the barber and plumber market is swamped with retrained former average coders ...


Sigh. Time to bust out my old will code for food cardboard sign.


I remember it came out at the same time as GitHub copilot and a lot of people here said we might need to confront the idea that some development roles may not be long for this earth.


I sure do love seeing smug people get their comeuppance, unfortunately I expect them to just double down on the cope even when the reality is blinding


I remember it being vastly overhyped about how artists were now obsolete, but it really hasn't happened. People have quickly became very attuned to the "AI art style"--humans are excellent at pattern recognition, after all--and if anything, it's made people appreciate how important an artists' individual style and attention to detail is.


It absolutely has happened.

I've seen intense rage and despair for even top tier 2D artists, Japanese, Chinese, Western all alike. Like top of industry, massive fanbase, they all understand there is no sugarcoating it anymore.

AI improved from 5-year-old scribble to crushing 90% of artists in technical execution within 9 months. So they had no time to process it at all, and had to cope with shock and denial.

Now virtually all traditional 2d artist jobs are on the chopping block, its just inertia keeping them employed for another year or two. At top Chinese game studios like Tencent and Mihoyo, they are already intensely experimenting with AI workflows. And many indie game studios are pretty much already using AI for most concept art.

Artist jobs won't disappear, but they'll become hybrids of 3d modelling, SD prompting, python scripting, model training, and photoshopping. For many artists this means the workflows they spent decades honing is now gone. Also, most of them love drawing, so art jobs that don't involve drawing are despairing to them.

Those who fare best, are those who don't care much about drawing, but storytelling (Say Mangaka), they will be freed from drawing 14 hours a day, and cut their workloads in half.

Note, I think artist jobs will increase and pay better. The true industries in threat are non-virtual entertainment, such as tourism, concerts. They'll be competing with AI augmented artists that can produce awe-inspiring works in weeks not years.

Universal World's hogwarts parks are admirable. But they'll soon be competing against fully realized hogwarts (already modelled in Hogwarts legacy, without AI), that are then filled with fully responsive AIs whom you can actually talk to (Courtesy of GPT5,6,7).

AI will create massive value, but it'll also bring massive, extremely uncomfortable dislocations. Better try to adapt AI as soon as possible, and analyze how your role will transform.


> It absolutely has happened.

Could you list some examples of companies that have laid off their art departments to use DALL-E/SD/etc?


Could you list some examples of companies with an art department? Sterling Cooper doesn't count.


I'm sorry but am I the only one experiencing time dilation? You can't say ' X was supposed to be a big deal but was not' when the number of months since X began is single-digit.

Come back in ten years and tell me that there was no threat.

Also, for anyone living under a rock, the 'AI art style' --- by which you probably mean MidJourney 3 style --- has been steadily disappearing across MJ 4 and 5.

So, yes, actually, artists may now be 'obsolete', at least in the sense of 'viable form of capitalism for an adult North American to pursue as a profession'.

Circumstances for artists were already the worst they'd ever been, in real terms, even before this. Going forward, 'graphic designer', like 'music engineer', is going to be five people with real talent and 20,000 cool kids with trust funds. Everyone else fights for a job at Wal-Mart.

Best queue for your shift, the pinnies that fit well always get grabbed first


People don’t make art for money (mostly)

People do write code for money (mostly)

See the difference? How would you feel if developers earned as much as artists do?


I don't really know what you're getting at. My point is that DALL-E/Stable Diffusion hasn't cut into the market for actual paid artists. Like, at all. Unlike what the doomsayers predicted. Software developers should feel equally as threatened, which is to say, not at all.


I’m getting at the fact that “art” isn’t a major cost for the most valuable companies in the world. Software development labor is an enormous cost.

There is no financial incentive to replace paid artists. There is an enormous financial incentive to reduce software development headcount.


> There is no financial incentive to replace paid artists. There is an enormous financial incentive to reduce software development headcount.

Older folks will remember when the solution to this was to outsource development to low-cost regions like South American or Southeast Asia. You can ask ChatGPT how that went.


If people don’t make art for money there wouldn’t be such a huge blowback from AI art.


People need to survive. Making 400k/yr is a bit more than survival.


It's much too soon to say, no? AI art is going to keep getting better quickly, and DALLE-2 only came out six months ago.


Progress can come to a screeching halt at any point. We saw this with self-driving cars previously: the AI true believers were certain self-driving would be everywhere within 5 years... in 2015. Now the idea of a self-driving car has mostly been abandoned.

Let's get at least one company, at least one, replacing its graphic design department with DALL-E, or its development team with GPT, before calling an entire profession obsolete.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: