
But unlike computer security, law is not enforced by hardware that never complains about being exploited. Law is operated by humans, and people may reject an exploit by arguing the contract differently. Lawyers are trained to dispute with humans rather than with rules.


A human just sounds like a buggy, inconsistent compiler in this case.


A Procrustean view.

The law isn't perfect, and in that sense it is like software: it is constantly being changed in anger. With the law, however, you are also in the situation of having many different interpreters. Given that the law is imperfect and requires refinement and adaptation in use, and given the imperfect knowledge of lawgivers, multiple interpretations are to be expected.

Law does not determine justice by virtue of its codification. Morality does. Law is just an attempt to codify general moral principles. A rule-based system blindly followed by a machine would be a disaster outside very narrow, niche applications.


Not that a lot of companies are using Java 15+ anyway. People generally stick to 8 or 11.


I believe Oracle 11 is affected.


I do not believe so. The "affected" list that includes 11 is for the complete set of fixes in the CPU (Critical Patch Update).

This specific one was introduced with the rewriting of these parts of the code from C++ to Java, and that happened with Java 15.


Looks like you’re correct and I was wrong.


I doubt many companies are actually using Java 15+. Many still stick to 8 or 11.


Just like the way WeChat handles Moments.


But, like, you can write a poem and have pictures generated for it. It's revolutionary for how people present themselves.


Calling it a revolution is an understatement. Presenting differently is nice for now, but projecting just slightly further into the future, the whole concept of presenting might be over and done with.

For example: My gf spends hours dolling herself up, making poses, sometimes traveling to interesting places to get good instagram photos. The desire is to present herself as a cool and attractive person.

This all becomes meaningless with Dall E.


It seems like it's exactly as meaningful or meaningless regardless of DALL•E? None of the activity you described is meaningful as-is, except for the meaning that she, you, or we project onto it for her having done it. That doesn't necessarily go away. The actions and activities themselves are meaningless. What sets them apart from results produced via DALL•E is precisely that your girlfriend was the agent involved. That can still be true, and that quality can still be what makes them meaningful.

For example, robotic welding is incredible, truly a spectacular thing to observe, and the results are often immaculate for certain applications. However, I still pay a premium for handmade bicycle frames because I appreciate the craftsmanship that goes into them compared to mass produced alternatives.


It was all meaningless before. We attached meaning to it all as we grew as a species, but we are not functionally different from our ancestors 10,000 years ago who were just finding their way, who could see the universe, imagine it, create whole civilizations, and fight wars and battles over the very imaginings in their heads.

What's fascinating about DALL-E is that there now exist no barriers to that primitive, but remarkable, imagination. Many people could have envisioned Michelangelo's works. You can probably do that now, if you close your eyes real tight. But Michelangelo could not have created his great works without the funding of the House of Medici.


It wasn't meaningless; there is tangible value that comes out of that activity.


> This all becomes meaningless with Dall E.

No more than it was since Photoshop.


Well, no. Your average Facebook user doesn't have the time, money, or skills to really touch up the photos they post on Facebook. If you can just ask your computer or your cellphone to pose you on top of Mount Everest, or flying an F-22 fighter jet, or driving a Lamborghini while wearing an expensive suit, and it produces 20 variants of that image in 227 milliseconds, that completely changes the game.

The internet is going to be full of fake photos and soon full of fake video clips too. You'll basically never be able to trust someone's Tinder profile picture.

The upside, maybe, is that if the internet becomes more fake, it also becomes less interesting. Maybe it will encourage people to do more things in real life, away from computers. Dating websites will probably drop in popularity because profile pictures are so manipulated that you basically have no idea what the person looks like without meeting them in person.


You know that's already the case to a large extent, right? All(?) the social media photo apps apply a filter, even the ones claiming "no filter", because people prefer that to seeing their quirky faults.


Umm, you can upload any profile photo you want, right? So how has that changed?


Sounds like a win-win: more people in the real world, more creativity online.


It sounds like you don’t like her.


I like her lol, we talk about social media a lot and she talks about it in this way too.


The biggest problem might be the memory requirements, given so many parameters. It couldn't be as cheap as a high-end computer in the foreseeable future.
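
As a rough back-of-the-envelope sketch of why (the parameter count and byte widths below are illustrative assumptions, not figures from the paper):

    # Rough memory footprint of storing the model weights alone,
    # ignoring activations, optimizer state, and any KV cache.
    def weight_memory_gb(num_params: float, bytes_per_param: int) -> float:
        return num_params * bytes_per_param / 1e9

    # 175e9 parameters is a GPT-3-sized count, used purely as an illustration.
    for precision, width in [("fp32", 4), ("fp16", 2), ("int8", 1)]:
        print(precision, round(weight_memory_gb(175e9, width)), "GB")
    # fp32 ~700 GB, fp16 ~350 GB, int8 ~175 GB -- far beyond a single high-end PC.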


There is probably a space-time trade-off that needs to be explored here. It might be possible to preload some of the most likely next tokens into the cache and/or RAM. These are glorified auto-complete algorithms that are poorly understood, as DeepMind's optimizations appear to show. For the English language, there are probably only so many grammatically correct choices for the next token, for example.
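
A minimal sketch of what that preloading could look like in principle: keep the precomputed top-k candidate token IDs for recently seen short contexts, and only fall back to the full model on a miss. The class name, key shape, and eviction policy here are all illustrative assumptions.

    from collections import OrderedDict

    # Hypothetical LRU cache from a short context suffix (a tuple of token IDs)
    # to its precomputed top-k next-token candidates.
    class CandidateCache:
        def __init__(self, max_entries: int = 10_000):
            self.entries = OrderedDict()
            self.max_entries = max_entries

        def get(self, context_suffix):
            key = tuple(context_suffix)
            if key in self.entries:
                self.entries.move_to_end(key)   # mark as recently used
                return self.entries[key]
            return None                         # miss: consult the full model

        def put(self, context_suffix, top_k_token_ids):
            key = tuple(context_suffix)
            self.entries[key] = list(top_k_token_ids)
            self.entries.move_to_end(key)
            if len(self.entries) > self.max_entries:
                self.entries.popitem(last=False)  # evict least recently used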


Glorified autocomplete? Autocomplete can guess the next word, sometimes; GPT-3 goes hundreds of words ahead. On generic topics it can be hard to distinguish from human text.

And it can't cache tokens because all tokens are evaluated in the context of all the other tokens, so they don't have the same representations when they reoccur at different positions.


They're evaluated in the context of the last 2^n tokens; for many models it is 1024, 2048, or 4096 tokens as a scanning window. The tokens (words and sometimes punctuation) are represented by integer values, so the last 2^n tokens would certainly qualify for storage in a cache. The next-token selection only has so many possible choices in any given language model because of grammatical limitations. This is only one such optimization; there could also be optimizations around the likelihood of certain words being used given the presence of certain previous tokens, and so on.

But, yes, tokens are chosen one at a time based on the previous content, similar to earlier auto-completion algorithms.
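
A minimal sketch of the fixed-size scanning window described above, assuming token IDs are plain integers; the window size is just one of the values mentioned (2048) and is not specific to any particular model:

    from collections import deque

    WINDOW = 2048  # e.g. 1024, 2048, or 4096 depending on the model

    # Keep only the last WINDOW token IDs; older tokens fall out automatically.
    context = deque(maxlen=WINDOW)

    def observe(token_id: int) -> list:
        """Append a newly generated token and return the current window."""
        context.append(token_id)
        return list(context)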


I’ve been saying this for years, language models are the ML equivalent of the billionaire space race, it’s just a bunch of orgs with unlimited funding spending millions of dollars on compute to get more parameters than their rivals. It could be decades before we start to see them scale down or make meaningful optimizations. This paper is a good start but I’d be willing to bet everyone will ignore it and continue breaking the bank.

Can you say that about any other task in ML? When Inceptionv3 came out I was able to run the model pretty comfortably on a 1060. Even pix2pix and most GANs fit comfortably on consumer compute, and the top-of-the-line massive models can still run inference on a 3090. It's so unbelievably ironic that one of the major points Transformers aimed to solve when introduced was the compute inefficiency of recurrent networks, and it's devolved into "how many TPUs can daddy afford" instead.


Is that fair? My Pixel phone seems to run nothing but ML models of various kinds and they run locally which is madness, pure madness. It can recognize songs and my speech without talking to the cloud at all. That's pretty much the definition of optimization!


It's just about where the software development incentives are. Big shops have incentive to have service models. I think of it like a return to the mainframe days, and an old-IBM like mindset.

However the upside to pocket sized intelligence will eventually win out. It's just a question of when someone will scrape together the required investment.


Cannot agree with that. Here you have the OO way of thinking: combining data and code logic. However, it's totally legitimate to say that functions wrap up procedures or transformations.
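
A toy contrast of the two framings (the names and numbers here are made up purely for illustration):

    from dataclasses import dataclass

    # OO framing: the data and the logic that operates on it live together.
    @dataclass
    class Order:
        subtotal: float
        tax_rate: float

        def total(self) -> float:
            return self.subtotal * (1 + self.tax_rate)

    # Function framing: a plain function wraps the same transformation
    # over data that is passed in explicitly.
    def order_total(subtotal: float, tax_rate: float) -> float:
        return subtotal * (1 + tax_rate)

    assert Order(100.0, 0.2).total() == order_total(100.0, 0.2)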


Because the low-quality political trash talk is unrelated to HN.


Does "did fine" mean 500k deaths from COVID?


I don't think that's true. Older people generally have more responsibility to their families, and it's hard for them to give up the life they have to found a startup.

