Maybe I'm being too skeptical, and certainly I am only a layman in this field, but the amount of ANN-based post-processing it takes to produce the final image seems to cast suspicion on the meaning of the result.
At what point do you reduce the signal to the equivalent of an LLM prompt, with most of the resulting image being explained by the training data?
Yeah, I know that modern phone cameras are also heavily post-processed, but the hardware is at least producing a reasonable optical image to begin with. There's some correspondence between input and output; at least they're comparable.
I've seen someone on this site comment to the effect that if they could use a tool like dall-e to generate a picture of "their dog" that looked better than a photo they could take themselves, they would happily take it over a photo.
The future is going to be difficult for people who find value in creative activities beyond just the raw audio/visual/textual signal at the output. I think most people who really care about a creative medium would say there's some kind of value in the process and the human intentionality behind a work, both for the creator who engages in it and the audience who is aware of it.
In my opinion, most AI creative tools don't actually benefit serious creators; they just provide a competitive edge for companies selling new products and enable more dilettantes to enter the scene and flood us with mediocrity.
People in India suffer from these same scams, too. They are pissed.
I haven't lived in India for decades, but saw an interesting TV show about a famous phone phishing operation run out of a small village in Bihar state. It's fiction, but based on a real news report.
If the creator has the app (onair.io/ios, onair.io/android), they will receive notifications there and can take the call. And if a browser tab is open when someone calls them, they will receive notifications there as well.
Yes. You can set yourself `always online`, `always offline`, or `scheduled`. With the last one, you specify hours of the day (e.g. 9am-5pm) and can sync with an external calendar (currently Google Calendar), which marks you as offline when you're busy.
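For illustration, here's a minimal sketch of how those three modes might resolve to an online/offline status. The function name, parameters, and structure are all my guesses, not onair.io's actual implementation:

```python
from datetime import datetime, time

def is_online(mode: str, now: datetime,
              work_start: time = time(9), work_end: time = time(17),
              calendar_busy: bool = False) -> bool:
    """Hypothetical resolution of the three availability modes."""
    if mode == "always online":
        return True
    if mode == "always offline":
        return False
    if mode == "scheduled":
        # A synced calendar event marks the creator as busy -> offline.
        if calendar_busy:
            return False
        # Otherwise, online only within the configured daily window.
        return work_start <= now.time() < work_end
    raise ValueError(f"unknown mode: {mode!r}")

# 3pm on a free afternoon under a 9am-5pm schedule -> online
print(is_online("scheduled", datetime(2024, 1, 8, 15, 0)))  # True
```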
You are right: Computer Vision was always one of the original fields of AI research. The International Journal of Computer Vision was established in 1987 and it remains a premier outlet.
Today the word "AI" has itself been hijacked by marketers of ANN-based techniques, so when the article uses that term, it confuses people who don't know any better.
Can we take a moment to appreciate the engaging, conversational tone of the writing? The article was a pleasure to read, even when it got into the weeds explaining the frequency charts and the like. (I read the English-language version.)
I dunno which humans you read, but most people I know defecate their thoughts onto a page. Actually good writing takes a lot of effort that most people don't put in. AI's got a decent baseline.
This is the first time I'm hearing about it.
I downloaded it pretty much as soon as I finished reading the blog post, only to discover it's not available in my region :-(
In practice, starting with Stalin, the party mostly let the church continue unmolested.
"The Great Patriotic War changed Joseph Stalin’s position on the Orthodox Church. In 1943, after Stalin met with loyal Metropolitans, the government let them choose a new Patriarch, with government support and funding, and permitted believers to celebrate Easter, Christmas and other holidays. Stalin legalized Orthodoxy once again."
"Unmolested" during the war and until Stalin's death. After that, Khrushchev closed churches and started the anti christian campaigning again. The mass murder of "state enemies" mostly ended with Stalin's death, but the church and Christians were still molested, even though they weren't tortured to death.
> because Qt committed the carnal sin of adopting C++20
I do believe you meant to write "cardinal sin," good sir. Unless Qt has not only become sentient but also corporeal when I wasn't looking, and gotten up close and personal with the C++ standard...
Ha, ha, ha! I love it. I believe the author is serious, and I think he's on to something.
OP clearly says that most things in fact don't break if you just don't comply with the CRLF requirement in the standard and send only LF. (He calls LF "newline". OK, fine, his reasoning seems legit.) He is not advocating changing the language of the standard.
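To make the "most things don't break" claim concrete: the common defensive pattern on the receiving side is to split on LF and strip an optional trailing CR, so CRLF and bare-LF input parse identically. This is an illustrative sketch of that pattern, not code from the OP or from SQLite/Fossil:

```python
def parse_lines(data: bytes):
    """Split protocol input on LF, tolerating an optional CR before it."""
    for line in data.split(b"\n"):
        yield line.rstrip(b"\r")

# A CRLF sender and a bare-LF sender produce identical parsed lines.
strict = b"HELO example.com\r\nMAIL FROM:<a@b.com>\r\n"
lenient = b"HELO example.com\nMAIL FROM:<a@b.com>\n"
assert list(parse_lines(strict)) == list(parse_lines(lenient))
```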
To all those people complaining that this is a minor matter and the wrong hill to die on, I say this: most programmers today blindly depend on third-party libraries that are full of these kinds of workarounds for ancient, weird, vestigial crud, so to them it might look inconsequential. But if you're from the school of pure, simple code like the SQLite/Fossil/TCL developers, you're writing the whole stack from scratch, and these things become very, very important.
Let me ask you instead: why do you care if somebody doesn't comply with the standard? The author's suggestion doesn't affect you in any way, since you'll just be using some third-party library and won't even know that anything is different.
Our healthcare is horribly wasteful and inefficient. But since the US government isn't in charge of it for most people, you can't blame them here. Our ultra-capitalist healthcare "system" achieved this spectacular waste all by itself.
The US does not have an "ultra-capitalist" healthcare system. Its healthcare system is heavily regulated and subsidized and many of those regulations are bought and paid for by the incumbents to prevent competition, i.e. the thing that allows private systems to be efficient. You can't just call something "capitalism" while regulating it into market concentration and expect that to work.
You're making a "no true Scotsman" argument. This is, in fact, how capitalism works whenever you fail to regulate it. Rent-seeking and regulatory capture are features, not bugs.