> While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle
Minor quibble: I think the majority of bias would creep in during training, not necessarily while writing the code. If trained on biased data, you'll get biased results. RLHF by its nature is likely to introduce bias, though perhaps bias most people are OK with, but maybe not lawyers. Later, while actually serving responses, it's possible that OpenAI/others introduce checks for certain things (e.g., does the input contain the word 'gun'? If so, refuse to answer) that are also sources of bias. In theory you could also taint the responses by changing the prompt, but that seems unlikely.
All this just to say that I think the judge is right to do this, but his bias argument is a bit of a miss.
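For concreteness, here's a minimal sketch of the kind of serving-time keyword check described above. Everything in it is hypothetical: the blocked-term list and function are made up for illustration, not anything OpenAI or others actually ship.

```python
# Hypothetical serving-time filter: a post-training check that can introduce
# bias on top of whatever the model itself learned.
BLOCKED_TERMS = {"gun"}  # illustrative only

def maybe_refuse(user_prompt):
    """Return a canned refusal if the prompt trips the filter, else None."""
    words = user_prompt.lower().split()
    if any(term in words for term in BLOCKED_TERMS):
        return "I can't help with that request."
    return None

# A legitimate legal question gets blocked while an unrelated one passes through.
print(maybe_refuse("what does the statute say about gun ownership"))  # refusal
print(maybe_refuse("summarize this contract clause"))                 # None
```

The point is just that a filter like this sits entirely outside the trained weights, so bias can enter at several distinct layers of the stack.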
The whole paragraph except for the first and last sentences may have been composed using an LLM.
> All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of [...] or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being [...] held responsible under Rule 11 for the contents of any filing that he or she signs and submits to the Court, regardless of whether generative artificial intelligence drafted any portion of that filing
Asking the attorneys to re-acknowledge that documents they file are their official entries into the record, no matter what programs are used to generate them, makes sense in principle as a way to preempt the "algorithm told us to" argument.
I think the context here suggests that the training is considered part of the overall task of "programming," though people in the industry use that term of art differently in the context of ML development. At any rate the creators of the model or tool were under no oath, nor is (or can be) the tool itself, and the results must be considered in the light of that fact.
Also, basic computation can be done with simple lookup tables; it's a principle that even FPGAs use. One of the basic building blocks of an FPGA is a LUT: if you want a configurable basic gate, just have a mux address a table of 4 bits, and you can then configure that to be any 2-input logic gate. FPGAs also have the word 'programmable' in their name.
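As a toy illustration (plain code, not FPGA tooling): a 4-entry table plus an index is enough to realize any 2-input gate, and the table contents are effectively the "program."

```python
# A 4-bit lookup table acting as any 2-input logic gate.
# The lookup logic never changes; only the table contents do.
XOR_TABLE = [0, 1, 1, 0]  # outputs for (a, b) = 00, 01, 10, 11
AND_TABLE = [0, 0, 0, 1]

def lut_gate(table, a, b):
    """Select one of the four entries, like the mux in an FPGA LUT."""
    return table[(a << 1) | b]

assert lut_gate(XOR_TABLE, 1, 1) == 0
assert lut_gate(AND_TABLE, 1, 1) == 1
```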
As for an ML model, it's basically tables of weights connected to one another. In a sense, we use "training" to find the weights to program into each node of each layer of the network.
I would still say an ML model is a program, just one where humans did not directly specify each value. If we go back to FPGAs, people rarely instantiate the raw building blocks by hand; the design is synthesized and routed, most often by other software. Admittedly, VHDL, Verilog, and the like are closer to a program than training is. Even with normal software, code is compiled to machine instructions rather than specified directly, and we still say you're programming a computer even when you're using a compiler.
Even though the end results are based on statistics and your input data, I would still say training is a form of programming; the word "training" just tells you it was very indirect.
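To make the "training is indirect programming" point concrete, here's a toy sketch (my own example, not anything from the order): a single perceptron learns the AND gate from examples instead of having its table written by hand like the LUT above.

```python
# "Training" finds the weights, but the result is still a program.
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0, 0]
bias = 0

for _ in range(20):                       # a few passes over the truth table
    for (a, b), target in DATA:
        out = 1 if w[0] * a + w[1] * b + bias > 0 else 0
        err = target - out
        w[0] += err * a                   # nudge weights toward the correct output
        w[1] += err * b
        bias += err

# The learned numbers now "program" the same behavior a human could have
# specified directly in a lookup table.
assert all((1 if w[0] * a + w[1] * b + bias > 0 else 0) == t for (a, b), t in DATA)
```

Whether a human types the weights or a training loop finds them, what runs in the end is the same kind of deterministic table-plus-arithmetic program.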
That aside, I would agree that the model or tool, and its creator, were under no oath, so the reasoning here makes sense.
A single system or human is always biased, because that's how a mind works: out of zillions of possible guesses and decisions, it uses only a small coherent subset, which is always a non-representative subset. I.e., it's biased.