I don't think you get it. The game has already changed. And nobody trained the AI to write unit tests or refactor code. It could do it as emergent behavior.
It happened so fast that many people are in denial, and many aren't even asking the obvious questions. Simply ask ChatGPT to refactor your code to be more unit testable and it can do it.
Of course it clearly has your typical LLM problems but it is roughly 50% there. The game has changed. If AI gets to 100%, not only will the game be changed, but humans will no longer be part of the game.
See what I asked chatGPT:
ME:
def (x: int):
    for i in range(x):
        print(i)
Can you refactor the code so it can be more unit testable?
CHATGPT:
Certainly! To make the code more unit testable, we can refactor it into a function that returns a list of integers instead of directly printing them. Here's the refactored code:
from typing import List

def generate_number_list(x: int) -> List[int]:
    number_list = []
    for i in range(x):
        number_list.append(i)
    return number_list
In this refactored code, we've created a function called generate_number_list that takes an integer x as input. Instead of printing each number, we use a number_list variable to store the generated numbers by appending them in the loop. Finally, we return the number_list.
By returning the list instead of printing the numbers directly, we make the code more testable because we can now write unit tests that assert the correctness of the generated list.
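For example (my own sketch, not part of ChatGPT's output, assuming pytest-style asserts), the refactored function can now be tested like this:

def test_generate_number_list():
    # The returned list can be asserted directly, unlike printed output.
    assert generate_number_list(0) == []
    assert generate_number_list(3) == [0, 1, 2]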
It boggles my mind that I see threads of people saying "what if we train LLMs to do this or to do that?" or whatever. Guys, just ask the LLM to do what you want in plain English. It can do it.
This is a fine, absolutely trivial, example. But LLMs are simply not all that.
IME GPT-4 can't write a bug-free 10 line shell script. It's particularly poor at inferring unstated requirements - or the need to elicit the same.
There's a general problem with LLMs: they're too eager to please. It shows up as confirmation bias. Embed a perspective in your prompt, and LLMs continue in the same vein.
You can, with careful prompting, try to provoke and prod the text generation into a more correct shape, but often it feels to me more like a game than productivity. I have to know the answer already to know how to ask the right questions and make the right corrections. So it feels like I'm supervising a child, and that I should be amazed it can do anything at all. And it is amazing; but for productivity outside tightly constrained environments (e.g. converting freeform dialogue into filling out a bureaucratic form - I think this is a close to ideal use case), I struggle to see it scaling up much, from what I've seen so far.
For creativity - e.g. making up a story for a child - it's not bad. One of my favourite use cases, after discovering how bad it is at writing code.
The game is designing software to requirements. It's writing literature for a new era. It's creating X for A audience with N vague unspecified needs -- where X is a complex product made of many parts, involving many people, with shifting and changing problems/solutions/requirements.
The game was never writing the stack overflow answer -- that was already written.
So? Those requirements can be specified, holes inferred, and probably stuck to much more closely by a machine than man. If history's shown anything it's that if something takes a lot of mental effort for people it's probably an easy target for automation. The best developer is the one that doesn't get depressed when the requirements change for the 15th time in a month and just rewrites everything again at 2000x the speed of a human dev while costing basically nothing in comparison.
People say, "oh but clients will have to get good at listing specs, that'll never happen". Like bruh the clients will obviously be using LLMs to make the specs too. Eventually the whole B2B workflow will just be LLMs talking to each other or something of the sort.
>The game was never writing the stack overflow answer -- that was already written.
The problem is this was never a stackoverflow question and there was never an answer for it.
Try finding it. The LLM is already playing the game because it came up with that answer which is Fully Correct, Out of Thin Air.
Look, clearly the LLM can't play the game as well as a trained adept human, but it's definitely playing the game.
>The game is designing software to requirements. It's writing literature for a new era. It's creating X for A audience with N vague unspecified needs -- where X is a complex product made of many parts, involving many people, with shifting and changing problems/solutions/requirements.
It can do all of this. It can talk like you and parrot exactly what you're saying, and it can also go into more detail and re-frame your words more eloquently.
What you're not getting is that all the things you mentioned are things the LLM can actually do, to varying degrees, to the point where it is in the "game," and at times it does better than us. Likely, you haven't even tried asking it yet.
I think if you're an expert in an area, this effect is easier to see through. You know where the github repo is, where the library example is, which ebooks there are -- etc. and you're mostly at-ease not using them and just writing the solution yourself.
These systems are not "fully correct" and not "out of thin air". They are trained on everything ever digitised, including the entire internet. They, in effect, find similar historical cases to your query and merge them. In many cases, for specific enough queries, the text is verbatim from an original source.
This is less revolutionary than the spreadsheet; it's less than Google search. It's a speed boost to what was always the most rote element of what we do. Yes, that often took us the longest -- and so some might be afraid that's what labour is -- but it isn't.
We never "added value" to products via what may be automated. Value is always a matter of the desire of the buyer of the products of our labour (vs. the supply) -- and making those products for those buyers was always what they wanted.
This will be clear to everyone pretty quickly, as with all tech, it's "magic" on the first encounter -- until the limitations are exposed.
I actually work in an area where what took 3mo last year, I can now do in maybe 3 days due to ChatGPT. But when it comes to providing my customers with that content, the value was always in how I provided it and what it did for them.
I think this makes my skills more valuable, not less. Since the quality of products will be even more stratified by experts who can quickly assemble what the customer needs from non-experts who have to fight through AI dialogue to get something generic.
I agree. LLMs are very impressive, but it isn't helpful to think of them as magic. LLMs are a great tool to explore and remix the body of human knowledge on the internet (limited to what they have been trained on).
The user needs to keep in mind that it can give plenty of false information. To make good use of it, the user needs to be able to verify whether the returned information is useful, makes sense, compares with first-hand sources, etc. In the hands of an expert that is really powerful. In the hands of a layman (on the subject in question), they can generate a lot of crap and misunderstand what it is saying. It is similar to the idea that democracy can be a great tool, but it needs an educated and participatory populace or it may generate a lot of headaches.
> I agree. LLMs are very impressive, but it isn't helpful to think of them as magic. LLMs are a great tool to explore and remix the body of human knowledge on the internet (limited to what they have been trained on).
Of course you shouldn't think of it as magic. But, the experts self admit they don't fully understand how LLMs can produce such output. It's definitely emergent behavior. We've built something we don't understand, and although it's not magic, it's one of the closest things to it that can exist. Think about it. What is the closest thing in reality to magic? Literally, building something we can't understand is it.
It's one thing to think of something as magic, it's another thing to try to simplify a highly complex concept into a box. When Elon Musk got his rockets to space, why were people so floored by decades-old technology that he simply made cheaper?
But when someone makes AI that can literally do almost anything you ask it to everyone just suddenly says it's a simple stochastic parrot that can't do much?
I think it's obvious. It's because a rocket can't replace your job or your identity. If part of your skillset and identity is "master programmer" and suddenly there's a machine that can do better than you, the easiest thing to stop that machine is to first deny reality.
> the experts self admit they don't fully understand how LLMs can produce such output
Well I take myself to be an expert in this area, and I think it's fairly obvious how they work. Many of these so-called "Experts" are sitting on the boards of commercial companies with vested interests in presenting this technology as revolutionary. Indeed, much of what has been said recently in the media is little more than political and economic power plays disguised as philosophical musings.
A statistical AI system is a function `answer = f(question; weights)`. The `answer` obtains apparent "emergent" properties such as "suitability for basic reasoning tasks" when used by human operators.
But the function does not actually have those properties. It's a trick -- the weights are summaries of an unimaginable number of similar cases, and the function is little more than "sample from those cases and merge".
Properties of the output of this function obtain trivially in the way that all statistical functions generate increasingly useful output: by having increasingly relevant weights.
If you model linear data with just y = ax then as soon as you shift to "y = ax + b" you'll see the "emergent property" that the output is now sensitive to a background bias, b.
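As a toy illustration of that analogy (a sketch using numpy least squares; the setup and numbers are mine, purely to show the bias term appearing):

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = 2.0 * x + 5.0 + rng.normal(0, 0.1, x.size)   # data with a background bias of 5

# Model 1: y = ax (no intercept) -- it cannot represent the bias at all.
a_only, *_ = np.linalg.lstsq(x[:, None], y, rcond=None)

# Model 2: y = ax + b -- the extra weight b makes the output sensitive to the bias.
A = np.column_stack([x, np.ones_like(x)])
ab, *_ = np.linalg.lstsq(A, y, rcond=None)

print(a_only)   # the slope alone absorbs some of the bias, badly (around 2.75)
print(ab)       # roughly [2.0, 5.0]: slope and bias both recovered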
Emergence is an ontological phenomenon concerning how `f` would be realised by a physical system. In this case any physical system implementing `f` shows no such emergence.
Rather, the output of `f` has a "shift in utility" as the properties of the data it's trained on, as summarised by the weights, "shift in utility".
In other words, if you train a statistical system on everything ever written by billions of people over decades, then you will in fact see the "domains of applicability" increase, just as much as when you shift from a y=ax model to a y=ax+b model.
To make this as simple as I can: statistical AI is just a funnel. ChatGPT is a slightly better funnel, but moreso, it's had the ocean pass through it.
Much of its apparent properties are illusory, and much of the press around it puts it in cases where it appears to work and claims "look, it works!". This is pseudoscience -- if you want to test a hypothesis about ChatGPT, find all the cases where it doesn't work -- and you will find that in the cases where it does work, there was some "statistical shortcut" taken.
I think this is a motte-and-bailey, "true and trivial vs incredible and false" type of thing. Given a sufficiently flexible interpretation of "sample from multiple cases and merge", humans do the same thing. Given a very literal interpretation, this is obviously not what networks do - aside from one paper to the contrary that relied on a very tortured interpretation of "linear", neural networks specifically do not output a linear combination of input samples.
And frankly, any interaction with even GPT 3.5 should demonstrate this. It's not hard to make the network produce output that was never in the training set at all, in any form. Even just the fact that its skills generalize across languages should already disprove this claim.
> It's not hard to make the network produce output that was never in the training set at all, in any form.
Honest request because I am a bit skeptical, can you give an example of something it is not trained in any form and can give output for? And can it output something meaningful?
Because I have run a few experiments on ChatGPT for two spoken languages with standard written forms but without much of a presence on the internet and it just makes stuff up.
Well, it depends on the standard of abstraction that you accept. I don't think that ChatGPT has (or we've seen evidence of) any skills that weren't represented in its training set. But you can just invent an operation. For instance, something like, "ChatGPT: write code that takes a string that is even length and inverts the order of every second character." Actually, let me go try that...
And here we go! https://poe.com/s/UJxaAK9aVN8G7DLUko87 Note that it took me a long time, because GPT 3.5 really really wanted to misunderstand what I was saying; there is a strong bias to default to its training samples, especially if it's a common idea. But eventually, with only moderate pushing, its code did work.
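For what it's worth, here is a sketch of one plausible reading of that task (reverse the order of the odd-indexed characters while keeping the rest in place; the interpretation GPT eventually settled on in the linked chat may differ), just to show it's a made-up operation rather than a stock snippet:

def invert_every_second(s: str) -> str:
    # Even-indexed characters stay put; odd-indexed ones come back in reverse order.
    odd_reversed = s[1::2][::-1]
    return "".join(
        odd_reversed[i // 2] if i % 2 else s[i]
        for i in range(len(s))
    )

print(invert_every_second("abcdef"))  # -> "afcdeb"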
What's interesting to me here is that after I threw the whole "step by step" shebang at it, it got code that was almost right. Surprisingly often, GPT will end up with code that's clever in methodology, but wrong in a very pedestrian way. IMO this means there has to be something wrong with the way we're training these networks.
Are you speaking figuratively, or do you know of any specific instances of people forming actual religions around them? I'd be very interested in the latter.
I've seen people posting about it on a few message boards. Most of them sound like they've lost their minds or are under the influence, being completely honest. I could try to dig up posts if you want but it's more sad than interesting.
I have not seen organized religions around AI yet. But I have seen people writing some pretty wild ravings about how their god is an AI and how ChatGPT connects to it or something. There are also people dating LLMs. Some guy in Belgium committed suicide because his AI girlfriend told him to, leaving his wife and kids behind.
It'll be interesting to see how these sorts of less-than-anticipated sociological things emerge. Take a look at Scientology: many practitioners, a pretty sci-fi belief set. I think all we really need is another L Ron Hubbard and lots of not-super-crazy people could start to worship these things.
He's just talking _. Clearly nobody here on either side is having religious fervor around AI. One side is saying we don't understand LLMs completely and the other side is saying we absolutely do understand, that it's all statistical parroting.
But to keep it with the religious theme... which side sounds more similar to religion? The side that claims it's absolutely impossible for LLMs to be anything more than a statistical operation, or the side that claims they don't know? One side seems to be making a claim based on faith while the other side is saying we don't know enough to make a claim... So which side sounds more religious?
I'd also say I've sufficient expertise in animal learning to reject the idea that animals have shallow interior lives comprised of compressions of historical cases.
A child touches a fireplace once -- not a thousand times. Because they are in direct causal contact with the world and their body has a whole-organism biochemical reaction to that stimulus which radically conditions their bodies in all sorts of ways
This is a world apart from statistical learning wherein P(A|A causes B) and P(A|B) are indistinguishable -- and the bridge of "big data" merely illusory
>Well I take myself to be an expert in this area, and I think it's fairly obvious how they work. Many of these so-called "Experts" are sitting on the boards of commercial companies with vested interests in presenting this technology as revolutionary. Indeed, much of what has been said recently in the media is little more than political and economic power plays disguised as philosophical musings.
Bro, if you are an expert you'd already know that most of the admissions that they don't fully understand LLMs are coming from researchers at universities. Hinton was my example of an "expert" as well, and he literally quit Google just so he could say his piece. You know who Hinton is, right? The person who repopularized backprop.
>A statistical AI system is a function `answer = f(question; weights)`. The `answer` obtains apparent "emergent" properties such as "suitability for basic reasoning tasks" when used by human operators.
Every layman gets that it's a multidimensional curve-fitting process. The analogy you're using here, applying properties of lower-dimensional and lower-degree equations to things that are millions of dimensions in size on a complex curve, simply doesn't apply, because nobody fully understands the macro details of the curve and how that maps to the output it's producing.
The properties of a 2d circle don't map one to one to 3d let alone 500000000d.
>Much of its apparent properties are illusory, and much of the press around it puts it in cases where it appears to work and claims "look, it works!". This is pseudoscience -- if you want to test a hypothesis about ChatGPT, find all the cases where it doesn't work -- and you will find that in the cases where it does work, there was some "statistical shortcut" taken.
You don't even know what science is. Most of software engineering from design patterns to language choice to architecture is not science at all. There's no hypothesis testing or any of that. An expert (aka scientist) would be clear that ML is mostly mathematical theory with a huge dose of art layered on top.
The hypothesis for the AI in this case is, and I'm parroting the real experts here: "we don't understand what's going on." That's the hypothesis. How is that even testable? It's not, so none of this is "science". ML never was a science; it's an art with some theoretical origins.
But your "hypothesis" is it's just "statistical parroting" which is also untestable. But your claim is way more ludicrous because you made a claim and you can't prove it while I made a claim that basically says "we can't make any claims because we don't understand". See the difference?
Experts in the area, including Hinton, the father of modern AI, self admit they don't fully understand what's going on but they think that LLMs know what they are talking about.
>These systems are not "fully correct" and not "out of thin air". They are trained on everything ever digitised, including the entire internet. They, in effect, find similar historical cases to your query and merge them. In many cases, for specific enough queries, the text is verbatim from an original source.
I never said the systems are fully correct. I said that for my specific example the answer is fully correct and out of thin air. No such question and answer pair exists on the internet. Find it and prove me wrong.
>This will be clear to everyone pretty quickly, as with all tech, it's "magic" on the first encounter -- until the limitations are exposed.
Except many experts are saying the exact opposite of what you're saying. I'm just parroting the experts..
>I actually work in an area where what took 3mo last year, I can now do in maybe 3 days due to ChatGPT. But when it comes to providing my customers with that content, the value was always in how I provided it and what it did for them.
So if they knew you were just copying and pasting their queries to chatgpt would they still care about the "how"? I doubt it.
I think this is one of the killer applications of LLMs: a friendly Stack Overflow where you can ask any programming question you want without fear of being reprimanded. Of course, this capability in LLMs is probably due to the terseness of Stack Overflow and the large database of code on Github.
However, in its current state users still have to know how to program in order to make good use of it. It will still give you lots of errors, but being able to get something close to your goal can save you a lot of time. Someone who does not know how to program will not be able to use these to put together a complex, useful and reliable system. It might change in the future, but these things are hard to predict.
Don't worry about this. You can get over the fear. I'm in the top 10% of stackoverflow users in terms of points and it's all because my stupidest questions from decades back gathered thousands of points from other stupid idiots like me. Who cares. Literally the line graph keeps climbing with no effort from me all from my dumbest questions. Just ask and don't worry about the criticism, you'll get a bit, but not too much.
>However, in its current state users still have to know how to program in order to make good use of it. It will still give you lots of errors, but being able to get something close to your goal can save you a lot of time. Someone who does not know how to program will not be able to use these to put together a complex, useful and reliable system. It might change in the future, but these things are hard to predict.
Of course. The thing I was trying to point out is the breadth of what ChatGPT can do. If you ask it to do a really in-depth and detailed task it's likely to do it with flaws. But the point I was trying to emphasize is not whether it can do any task with great depth; it's the fact that it can do ANY task. It has huge breadth.
So to bring it in line with the direction of this thread: people were thinking about making special LLMs that refactor code to be unit testable. We don't have to make special LLMs to do that, because you can already ask ChatGPT to do it. That's the point.
I've had several SO questions get flamed, down voted and closed. I don't think this is great advice. What I would say is read the rules, search SO for duplicates try to think of near duplicates, try to Google the answer, then post.
Probably not then. But I just post whatever I want and I'm already in the top 10 percent. And I'm not an avid user either. I just ask a bunch of questions.
I've had a few flamed and closed but that's just 1 or 2 out of I'd say around 13 or 14 questions. It's a low percentage for me.
And I absolutely assure you much of my questions are stupid af.
It is a frequent complaint I have seen from new users. I do think for the purpose of Stack Overflow it does make sense to weed out questions that have already been answered and remove poorly formed ones. It's just that ChatGPT for programming questions often works better than trying to look it up in Stack Overflow so now I recommend it as an additional tool. You can ask questions and refine them without bothering random people on the internet.
"The problem is this was never a stackoverflow question and there was never an answer for it."
Your example is so trivial that there are definitely similar code examples. Maybe not word for word, but similar enough that this is not really mind-blowing "making things out of thin air" for me. It seems like a standard coding class example, so it's not surprising that it can also make the unit tests.
Find one. Dated before 2021. In fact, according to the theory that it's statistical parroting there should be multiple examples of for loops printing out numbers being converted to unit testable functions because AI needs multiple examples of it to form the correct model.
Find one. And it doesn't have to be from stack overflow either. Just a question and answer data point.
It is. There is literally zero other way to make that function unit testable. What are you gonna compare that data with in a test if it's thrown into IO?
By definition all unit testable functions have to return data that can be asserted. You throw that data to IO it's not unit testable.
IO is testable via an integration tests. But not unit tests. Which is what my query exactly specified. I specified unit tests.
That doesn't change the fact that it's not a valid refactoring. If you can't make it unit testable without changing behavior, then it should tell you that.
Replacing a function that does `print("hello world")` with a function that does `return "hello world"` isn't a valid way to make it unit testable.
Alright fine, I can concede to this. ChatGPT should not have given me the best alternative but it should have given me the exact technically correct answer. You're right.
if it actually understood what it was doing it would tell you that that logic doesn't need a test, as Python has the range(x) functionality built in
instead it generates a load of redundant boilerplate
if I saw a developer check that in I'd think they were incompetent
I'm not good at prompting (if I believe what others say they can do with ChatGPT), but that's one thing that bothers me with this system.
They will do anything you ask them to without questioning it (in the limit given by their creators).
Is it possible to set it up in a way that they will challenge you instead of blindly doing what you ask? In this particular case, is it possible to ask it to do a code review in addition to performing the task?
I've tried various times (with v3.5) to "tune" it so that each answer will follow a specific format, with links and recommended resources, with several alternatives, etc. The goal is to have it broaden my perspective as opposed to focusing too much on what I'm asking. But it never worked for more than a couple of questions.
Sort of. There's an input parameter (the temperature) that adjusts the "creativity" of the LLM. If you raise it, the answers become more and more "creative", approaching the point where it can challenge you. But of course this comes at a cost.
As it stands right now, chatGPT can actually challenge you.
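For example (a sketch against the OpenAI Python client as it existed around this discussion; the prompt wording is my own), the knob in question is the temperature parameter, and a system message can nudge it toward pushing back:

import openai

# Higher temperature -> more varied, more "creative" completions (default is 1.0).
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Review my requests critically and push back if they are misguided."},
        {"role": "user", "content": "Refactor this code to be more unit testable: ..."},
    ],
    temperature=1.2,
)
print(response["choices"][0]["message"]["content"])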
I simply asked it to make it unit testable and it did the task 100 percent.
I'm not sure where your side track is coming from. Who in their right mind would ever check in code that prints a range of numbers from 0 to x?
The example wasn't about writing good code or realistic code. It's about an LLM knowing and understanding what I asked it to do. It did this by literally creating a correct answer that doesn't already exist anywhere. Sorry it doesn't satisfy your code quality standards, but that's not part of the task, is it? Why don't you ask it to make the code quality better? It can likely do it. Maybe that will stop the subtle insults (please don't subtly imply I'm incompetent, that's fucking rude).
Like why even get into code quality about some toy example? What's the objective? To fulfill some agenda against AI? I think that's literally a lot of what's going on in this thread.
The code is not garbage, it's just your highfalutin python opinion makes it so you only ever use list comprehensions or return generators.
For loops in Python that return non-lazily-evaluated lists are fine. Python was never supposed to be an efficient language anyway; grading Python on this criterion is pointless.
It doesn't matter how snobbish you are on language syntax though. I fed it code and regardless of whether you think it's garbage it did what I asked it to do and nothing else.
Would you prefer the AI say, "this code is garbage, here's not only how to make it unit testable but how to improve your garbage code." Actually we can make the output more unpredictable as LLMs do have a non deterministic seed that can increase the creativity of the answer.
It has wrapped range() with useless code. It has added no functionality, it has not improved testability in any way.
Please, take the code it has produced and integrate it into the original function. All it does is replace the range call. That's it. It has absolutely and totally failed at the given task whilst outputting plausible garbage about why it has succeeded.
Let me tell you your mind is going to be blown once you learn about Monads.
A mutable object is functionally identical to a return value if you control the initial state and lifetime of the object. Like you can do in a unit test.
And as I demonstrated in my other comment I 100% retained the semantic structure of the function whilst making it 100% unit testable.
I think you don't understand what unit testability means. It means removing IO and side effects from your code.
How the hell do I test a print function? I take the print function and match it with what? It has no output so how can I test it printed the correct thing? I can't.
I can test a list. I just match it with another list. Making your code unit testable is about segregating IO from logic. Write pure logic where all functions have inputs and outputs and those things can be tested. Your io prints should be small because all functions that do io cannot be fully tested.
IO is pollution. Any output to IO is the program exiting the logical mathematical universe of the program and that output can be verified only by an external entity. Either your eyes for stdout or another process or files or a bunch of other ways.
Unit tests are about internal local tests that touch local functionality and logic. If you want something unit testable it needs a local output and an input, and it shouldn't rely on IO in its data path.
I think your complaint here is an example of chatGPT superiority. It understood something you didn't. Well now you know.
Removing the print function from the logic and returning the data is 100 percent the correct move. Do you understand?
Of course you can make the function with a print statement more unit testable without completely changing its semantics!
You pass in an output stream and use that as the target for print.
Then your unit test can create its own stream and test the content of the stream whilst production code can pass in standard out.
That way you don't completely change the semantic meaning of the code.
And once again that GPT function is useless. It is identical to list(range()) and it doesn't do what the first function does. Anyone can make anything more unit testable if it doesn't have to do the same thing.
Bro, dependency injection and mocking is the same thing as segregating your function from IO. You're replacing IO calls to stdout with IO calls to something else. But that doesn't make your code unit testable.
The function is still touching io. You gonna test it with another function that touches io? That defeats the point of the definition of unit testability.
> and doesn't do what the first function does.
Are you serious? When you mock your output streams with hacky monkey patching, your function ALSO stops doing what it originally does. It's essentially black-magic globals that mutate your program... very bad practice.
Chatgpt here just didn't write the obvious io component of the code because it would be freaking pedantic. The full code would include a function that prints lists composed with a function that produces lists. The composition allows part of the program to be testable while leaving the io part of it not testable. For the original program NONE of it was testable.
Your monkey patching here would be replaced by different IO functions. You want to change the output stream? Then you change the IO function. Compose the list producer with another IO function. Play type Tetris and you can recompose your list-producing function with all kinds of modular IO. The point is you separated the core logic away from IO, thereby making it more modular and more testable.
None of the io functions are testable via unit tests, that is the point. That is the definition of the most basic form of testing... Unit tests.
You literally HAVE to change your code in order to make it unit testable. If your code is throwing shit to IO and retrieving values from IO then none of your code is unit testable. You're at the integration test level, and at this level things become hacky and more complicated. Your tests now have external dependencies like state and the operating system, and you have to run hacks like your monkey patch.
Wherever you work or whatever you've been doing, if you haven't been doing what I described then you (and your work buddies) haven't been testing your code via unit tests.
That's fine, whatever works bro. But chatGPT knows the common parlance for testing and unit testing, and it did exactly the correct thing.
Your interpretation of what testing is is the thing that's strange and off here.
I'm sorry, I clearly haven't explained myself well as otherwise you would not have wasted a huge amount of text tying yourself in knots based clearly on a mistaken apprehension of what I was saying.
For clarity, I reproduce the original function you gave and then present the change I am suggesting:
def cool_function(x):
    for i in range(x):
        print(i)
My change
import sys

def cool_function(x, output_stream=sys.stdout):
    for i in range(x):
        print(i, file=output_stream)
Does it now become clear what I am suggesting? My new function can be used as a 1-for-1 replacement for the old function; no code in the system needs to change, as the default value provided to the new parameter ensures semantically identical operation without changing any further code. Yet it is now unit testable.
So I've made the code unit testable, kept the semantics completely identical and not had to worry about any of the weird IO concerns that you have. No monkey patching, no weird file IO, no bizarrely re-implementing list(range(x)).
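For illustration, a minimal test against that signature (pytest-style; the test name is mine) could be:

import io

def test_cool_function_prints_each_number_on_its_own_line():
    stream = io.StringIO()
    cool_function(3, output_stream=stream)   # production callers keep the sys.stdout default
    assert stream.getvalue() == "0\n1\n2\n"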
> I'm sorry, I clearly haven't explained myself well as otherwise you would not have wasted a huge amount of text tying yourself in knots based clearly on a mistaken apprehension of what I was saying.
No need to apologize. This is a discussion. No one did anything wrong.
>For clarity I reproduce the original function you gave and then I present what the change I am suggesting is
This is called dependency injection and it's a valid way of segregating IO away from pure logic. Although this pattern is popular among old-school OOP programmers, it's going out of vogue due to the complexity of it all. You used a Python trick here of default values, but typically dependency injection changes the function signature and ups the complexity of the code by a lot. Let me show you the full output of the code that ChatGPT was implying:
# unit testable code (without using dependency injection tricks)
from typing import Any, List

def cool_function(x: int) -> None:
    IO_function(logic_function(x))

def logic_function(x: int) -> List[int]:
    return [i for i in range(x)]

def IO_function(x: Any) -> None:
    print(x)

def test_output():
    assert logic_function(4) == [i for i in range(4)]
Chatgpt only gave you logic_function, because IO_function is sort of obvious.. it's just "print" (I only wrapped print in "IO_function" to keep things clear, typically you won't define that function). But basically the full complete code would be to recompose IO with logic. You now have two components one of which is testable.
As a side note you will see it's actually an improvement to the code. It's simpler, no dependency injection, no confusing function type signature and a much simpler test case. The other thing that must be noted is the modularity.
Making code unit testable in this way allows your logic to be portable. What if I want to repurpose cool_function to send its output to another function? In your example you don't have the components to do that; it's harder in your case as you'd have to create another component for injection.
In short, not only did ChatGPT produce A correct answer, it produced the better answer compared with your dependency injection. That being said, your dependency injection is valid, BUT you were not correct in saying that ChatGPT's answer was worse or incorrect.
3 functions is better. Think about it. Do people write all their stuff in one big function? No. Better to compose higher-level functions with smaller ones rather than write one big monolith like you did. The more modular something is the better.
Also IO_function is there for illustration purposes. Technically it's just wrapping print with a name so you can understand the intent. In reality you just use the regular print here without a wrapper, so in actuality only two functions are defined.
>The job of ChatGPT was to make cool_function unit testable. You haven't done it.
It did. By giving it a return value. Just like you did by giving it a new input value.
>You still have cool_function using side effect generating code hitting the actual IO system.
Yeah but one component of cool_function is pure and you can unit test that. Cool function itself can never be tested because it generates no output, you test the unit components of cool function. That's the point of unit tests.
>Genuinely the worst unit test I have ever seen written, on a poor form per line basis, absolute bananas. If you don't understand why [i for i in range(4)] is bad in a unit test and [0,1,2,3] is correct then I need you to walk away from the computer.
Let's just talk about it like adults. Just tell me what exactly about it makes you think it's bad?
Most likely it's some pedantic stylistic philosophy you have? I'm thinking you only want to test literals? Perhaps you prefer [0,1,2,3]? Am I right on the money?
Logic potentially has errors, so you don't put logic in your test code. Makes sense, but who cares; for trivial shit it's fine. While in this case the logic in the test is identical to the function, typically logic_function represents something significantly more complex than the list comprehension, so I couldn't care less if I'm not following the strictest form of testing. The comprehension is just an alias-like shortcut I prefer over writing out a massive literal. For the toy example the test is pointless because the logic is identical, but typically it's fine to use range as an alias to represent a sequence of numbers.
Someone who strictly follows these stylistic rules without seeing intent or having the ability to bend the rules is just an inflexible pedantic programmer. It's not good to boast about it either by telling other people to walk away from a computer. That's just rude.
That would be fine if the core thing needing unit testing was the data generation/ transformation logic, but just as often as not it's the output formatting too. Did you try asking ChatGPT to write a unit test to confirm that the output is displayed as expected?
>That would be fine if the core thing needing unit testing was the data generation/ transformation logic, but just as often as not it's the output formatting too.
Output formatting touches IO. A test that touches IO is no longer a unit test. Unit tests by definition test ONLY internal logic and transformations.
It is literally the definition of unit tests.
When you test things like stdout that becomes an integration test and Not a unit test. It requires some external thing or some global black magic monkey patch that changes what print does to do integration testing.
(Btw making print formatting unit testable means segregating the formatting from the print. Produce the string first, test that, then print, because print can never be unit tested by definition)
Typically programmers segregate these levels of testing because unit tests are easier to write. But to write unit tests your code has to be written in a way that caters to it. Often this style of coding actually improves your code; it makes it much more modular. The reason is that pure functions that output data can be composed with all kinds of IO functions. You can move them all over the place and to different platforms with different forms of IO. Print has no meaning in certain embedded systems, so it can't be moved... By segregating the logic out, I can move the logic without the IO baggage.
ChatGPT 100 percent gets the difference; that's why it did what it did. I think you and the OP don't fully understand the meaning of unit testing.
Don't take this the wrong way, but just because you don't know this doesn't say anything about your skills as a programmer. But just recognize that this concept is basic and is pretty much something universal among testing.
> Unit tests by definition test ONLY internal logic and transformations
Output formatting is still a type of transformation! The function explicitly takes the numbers and prints them as decimal integers with newlines between each.
A test to confirm that it IS in that format is still a unit test.
BTW I gave ChatGPT the prompt I would give, and I have to say the answer looks pretty good, even if I'm not a Python programmer and it's not the way I'd do it (which would be to change the function to allow passing in an output stream):
import unittest
from io import StringIO
from unittest.mock import patch

class MyFunctionTestCase(unittest.TestCase):
    def test_my_function(self):
        expected_output = "0\n1\n2\n"
        with patch('sys.stdout', new=StringIO()) as fake_out:
            my_function(3)
            self.assertEqual(fake_out.getvalue(), expected_output)
With a few more prompts I also managed to get it give me this version:
def my_function(x: int) -> str:
    output = ""
    for i in range(x):
        output += str(i) + "\n"
    return output
Which I'd argue somewhat changes the code that was originally written, but it's still a pretty decent answer.
There's no doubt there's some impressive stuff going on that it can do such things, the real issue for me is that when I've tried on far more complex functions it's tended to break down (quite badly in some cases).
>Output formatting is still a type of transformation!
I'll quote part of my reply (which you missed):
(Btw making print formatting unit testable means segregating the formatting from the print. Produce the string first, test that, then print, because print can never be unit tested by definition)
Right? Think about it. If you want to unit test your formatting, remove the logic from the atomic IO function. Otherwise you can't test it via a unit test, because that's the definition of unit testing. I realize there is formatting that's part of the internal functionality of printf, but really all that means is that that functionality can never be unit tested. If you want to test printf, that happens at the integration level... by definition.
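A minimal sketch of that split (the names are mine, just to illustrate the point):

def format_numbers(x: int) -> str:
    # Pure formatting logic: no IO, so it can be unit tested directly.
    return "".join(f"{i}\n" for i in range(x))

def print_numbers(x: int) -> None:
    # Thin IO wrapper; exercised at the integration level, not by unit tests.
    print(format_numbers(x), end="")

def test_format_numbers():
    assert format_numbers(3) == "0\n1\n2\n"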
>BTW I gave ChatGPT the prompt I would give, and I have to say the answer looks pretty good, even if I'm not a Python programmer and it's not the way I'd do it (which would be to change the function to allow passing in an output stream):
It's wrong in this case. Unless you specifically asked it to write unit testable code, what it did here is write a hack that monkey patches the program. It's a huge hack. It didn't write unit testable code; rather, it wrote an integration test that monkey patches stdout, negating any need to make your code "unit testable" -- no refactoring needed using this method. The entire concept of refactoring code to be unit testable flies out the door in this case, as you're just using integration tests to get around everything.
I mean, yeah, you use the unit test library, but it is not technically a unit test. It's fine, I'm not a stickler for what style of testing is used in practice, but what I am saying is that what ChatGPT did previously was literally follow my instructions to the letter. It did it exactly 100% correctly. Think about it. I asked ChatGPT to make the code more unit testable. You didn't have ChatGPT do anything to the code. You simply changed the test from a unit test to an integration test. Huge difference. I mean, if your case was the "proper" way, then what does it even mean to make code "unit testable" if you're not even touching the code? Why does the concept of "making code more unit testable" even exist if we're just changing tests to make everything unit testable? Ask yourself this and you'll realize that the only answer is basically what I just told you previously.
I've been writing unit tests for over 15 years (actually longer, but before that they were just throwaway run-once testing stubs). I wouldn't consider what you got ChatGPT to produce to be an adequate rewrite of a function to make it unit testable (and several others in this thread have expressed the same view). Even the "hack" using monkey patching makes for a more actually-useful test.
I'm perfectly aware of what the difference is, thank you. The function you gave to ChatGPT explicitly requests a stream to format integers as decimals, separated by newlines. The version it gave as being 'unit-testable' did not, and hence wasn't a 'factually correct' answer. In some cases that may be perfectly fine, but in others it most definitely isn't.
So your current stance is: LLMs can't do everything yet, but don't bother thinking about extending its capabilities, just ask it, it can do everything? Fascinating...
It's not a stance. I'm stating a fact of reality. Huge difference.
I didn't say don't bother extending its capabilities either. You're just projecting that from your imagination. A hallucination, so to speak, not so far off from what LLMs do. I find your similarity to LLMs quite fascinating.
What I said is, the capability of doing the "extension" you want is already in the LLM. Sure go extend it but what you're not getting is that we've already arrived at the destination.
I really wonder how Claude 100k does on larger workspaces, has anyone tried that? (I don't feel like paying another $20 to Anthropic too) Allegedly it's only marginally better than 3.5-turbo on average so it'll probably spit out nonsensical code but maybe the huge context can help.
So I said it's like 50 percent of the way there implying that it gets things right at a rate of 50 percent. That's a fuzzy estimation as well, obviously so don't get pedantic on me with that number.
When you ask for large output or give it large input you are increasing the sample size, which means it's more likely that parts of the answer are wrong. That's it. Simple statistics that are in line with my initial point. With AI we are roughly halfway there at producing answers.
If you keep the answers and questions short you will have a much higher probability of being correct.
So that 50k-line program? My claim is roughly 25k of those lines are usable. But that's a fuzzy claim, because I think LLMs can do much better than 25k. Maybe 75% is more realistic, but I'll leave it at 50% so there's a lower bar for the naysayers to attack.
Thanks for at least admitting you used GPT 3.5, which is very out of date and hence no longer useful when discussing AI capabilities. If you want to test current tech (which is moving fast), at least use GPT-4 (which also gets updated regularly).
> g(x) = sin(x)/x ; what is g(exp(-200)) ?
> ChatGPT
>
> To find the value of g(x) = sin(x)/x at the point g(exp(-200)), we
> substitute x = exp(-200) into the function:
> g(exp(-200)) = sin(exp(-200))/exp(-200)
>
> Now, let's calculate this value using numerical methods:
>
> sin(exp(-200)) ≈
> 0.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
> (there it breaks off, running out of tokens )
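For reference, the correct value is essentially 1, since sin(x)/x → 1 as x → 0. A quick sanity check in Python:

import math

x = math.exp(-200)        # about 1.4e-87, comfortably within double-precision range
print(math.sin(x) / x)    # prints 1.0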