"I hope you’ll agree that humanity has a variety of important engineering problems to solve, and nicer-looking graphics is quite low on that list."
I used to sneer at the social value of entertainment. Then covid lockdowns hit. I spent a lot of time playing Factorio. When professional sports resumed (in empty stadiums, with fake crowd noise on the broadcasts), I was happy to sit on the couch after work and watch baseball.
Without that entertainment, there is no way I would have been able to trudge to my computer and work from home day after day, when the only thing I could leave my house for was an occasional walk and a frightful trip to the grocery store.
So even if the brain surgeon is not using those "nicer-looking graphics" to improve brain surgery (which could very well happen), the brain surgeon might just be looking at "nicer-looking graphics" to unwind after a day of brain surgery, which gets her ready for another day of brain surgery. Entertainment has value.
Entertainment has gotten really weird now that we have (multiple forms of) recording, plus quick on-demand transmission to practically anywhere.
Basically all "need" I have of new recorded (to include video games—on-demand and not requiring the attention or effort of anyone but myself) entertainment is social. Any "need" for new entertainment exists because new entertainment is being created. If new entertainment stopped being created entirely, my quality of life wouldn't actually drop at all, because there's an astounding quantity & variety of it already, far more than I can engage with in a lifetime even restricting myself to the likely-to-be-very-good stuff. My friend-group, and entertainment media, could do exactly what we do with new entertainment, but instead by digging into older material we've so-far overlooked, instead of new.
> If new entertainment stopped being created entirely, my quality of life wouldn't actually drop at all, because there's an astounding quantity & variety of it already, far more than I can engage with in a lifetime…
You don’t think there will be better entertainment in a decade or two? That’s what it sounds like you are saying.
not that it won't get better. but that getting better entertainment is not a need.
for me on the other hand there are definitely some improvements i would like to see in entertainment. mostly in the generation of content for games, and graphics quality that is affordable for everyone. (i am not buying the latest graphics hardware; i play games a few years behind the most modern available, but i expect that to improve as well)
i am happy for things to get better, but i have no desire to spend more money just to have it earlier.
Nobody disputes that entertainment has value. The article is about the fact that so much talent and so many resources are funneled into immersion.
Using your neurosurgeon example, I posit that having fewer VR headsets will not prevent the doctor from unwinding. Nor will having mobile phones with less vibrant colors or applications with lower engagement metrics.
There is a tragic flaw in your logic. There are probably 10x more engineers working on advertising than on gaming (let alone VR). Gaming is not that big an industry, and it doesn't have the mass adoption that attention-destroying, ad-driven social media does.
Statistically, the neurosurgeon is unwinding on TikTok and hating themselves for it.
Could VR become immersive and an ad-addled, attention-destroying mess? Yes, and we should probably stop that from happening. But otherwise this warning is too early.
When would the right time have been to warn people about the current generation of social media?
We now have a ton of information about how technology will be used and abused, and we have a laundry list of known problems that we have not solved. Algorithmic social media and engagement-driven content are front and center.
To me, the warning is appropriate not based on what hasn't happened yet, but based on what already has.
What is it about VR that makes it meaningfully different enough from the current generation of problematic technology that we don't need to worry about the same problems?
This isn't a "gotcha" question; I'm genuinely curious. To me, VR is a new interface layer on top of a massive ecosystem, and the same people are building it. The ecosystem is where the problems exist, and VR is just the latest facade through which we interface with that ecosystem.
In this framing, it's not ascribing old problems to new technology as much as claiming that the new technology magnifies or intrinsically replicates the existing problems.
> The warnings started appropriately early in 2011, there should’ve been regulations in place by 2015
It's 2024 and the regulatory landscape is very poor or nearly nonexistent. Shouldn't this encourage more caution? i.e. we've already proven that looking back and deciding "oh yeah we should have been more careful" hasn't actually resolved the issue, and taking the same approach with emerging tech that has similar pitfalls seems doomed to repeat that.
I also don't think we had any idea what was coming when we were building the stacks that underlie the current web. We now have much clearer mental models of what the Internet, and technology in general, are capable of, and the resulting warnings are coming much earlier (appropriately, IMO).
Sure, let's legislate social media. If the claim is that the problems with VR will be the same problems as social media, then legislating social media should solve the problems in VR as well. Insofar as VR is just a new facade over social media and the noisy new internet, let's legislate that.
I personally think the larger space of spatial computing (VR, AR etc) presents immense opportunities outside of its ability to just be another vehicle for ads. Leave that part alone because our regime of regulations only creates calcification and monopolization.
I don't know. Meta has not had a history that encourages me to just see what happens in a space they are popularizing. I would like it if we could get ahead of it.
Why is it that techbros insist on no regulation until there's a problem, and when there's a problem it's suddenly "the confetti has left the cannon, nothing to do now"? Oh wait that was a rhetorical question.
Ahhh, but perhaps a good working VR system may finally be good enough for said surgeon to use in a complex operation.
Each advance in technology likely has byproducts unknown to us (good or ill).
They are refining, over time, the ability to hit our pleasure (and fear) centers. Whole branches of advertising psychology are dedicated to it. If willpower were an unlimited resource everyone could just turn on whenever they desired, this wouldn't be an issue, and I suspect America would be a lot thinner as well. It's a limited resource, and some people have a lot more of it than others. Therefore, some of these new technologies really do need to be regulated: while we are mental and spiritual beings, we are also biological beings who sometimes don't have the willpower to overcome a scientifically verified means of getting around our usual common sense to moderate and balance "things" in our lives.
Not as important as how they function, but still important.
People, software programs, buildings, landscapes... appearance is information, about quality, about professionalism, about creativity, about attention to detail.
They've reached tabletop-game quality and peaked there. It proves the point that we don't need games more immersive than real-life tabletop games, as our creative brain fills in the gaps.
> Civilization 1 and 2 was much greater experience than 5
How so? I've played all 3 of those, and much prefer Civ 5. Or at least I think I do? I haven't played Civ 1 or 2 for 20 years now, so it's hard to compare.
It is deeply troubling how much effort goes into manipulating people, or just pleasing managers whose checkbox includes "flat design, because?", and how little effort goes into actually making interfaces visually effortless to understand, with features easy to discover, and with the flexibility to be really usable from a user's perspective.
Our bicycles for the mind have become conveyor belts for e-mall shopping.
I think it'll be hard to ever move away from flat design, because everyone else does it, and it's associated with modernity, while skeuomorphism is associated with old software.
I don't have hard numbers, but I think flat design really became a lot more popular after Windows 8 was released. For a while, Windows 8 was heavily criticized for its UI design (and not just because it was flat), but they stuck with flat design the entire time, and I believe they played a large role in popularizing it.
I guess we'll just need someone else to make their UIs more realistic and 3D, like the old ones used to be, and stick with it no matter what others say, and the style may become popular again if their UI is in wide use.
While I think most people understand this sentiment, the root issue here was the lockdown – yes, if you imprison a whole nation, it is good to have some sedation. You could say that a lot of people wouldn't have gone through lockdowns without alcohol and porn either. That does not make it more admirable.
I agree with the sentiment that we have more important issues than providing ever more entertainment – we had no issues spending our time 50 years ago, without immersive technologies. Instead of translating the extra resources technology has given us into more entertainment, we could have translated them into more free time.
I would happily have fewer consoles, VR headsets, and games, and instead have a nation-wide 30-hour workweek (I need my friends not to work either).
Is it? I remember Marc Andreessen's comment near the height of the VR craze of ~2016, about "reality deserts". Some segment of the American population lives in an urban area (or suburban area / village with people they actually like), and for them, the idea that people might want to play computer games online instead of hanging out with their friends in person is unfathomable. But not everybody is privileged enough to have that situation. By the numbers, not even a majority of people is privileged enough to have that situation. A lot of people live in suburban boxes, don't have friends, and don't particularly like the people around them. For them, a technology where they can go out to the Internet, find people that they actually do like, and engage in activities that they all find fun is a godsend.
Games certainly have social value, but I'm not convinced that better graphics contribute all that much. I've not played Factorio, but looking at screenshots it seems not to have particularly advanced graphics. Indeed, I still get a lot of enjoyment out of 20-year-old games and often wish that modern games had less sophisticated graphics so they would run better.
To some extent, graphics are important because with extremely primitive graphics, like the graphics of the NES, it can get difficult to tell what things are represented by the image, and the limited color palette presents a challenge in making the game look aesthetically pleasing.
A game doesn't need the most advanced graphics to look aesthetically pleasing, though. It can be made up for by using an art style that doesn't need powerful hardware, e.g. modern pixel art, or the simplistic, cartoonish look that 3D Nintendo games use. Mirror's Edge still looks great due to its art direction and use of techniques like precomputed lighting.
However, different people have different standards. I've been really enjoying playing the game Crypt of the Necrodancer, and I think the pixel art is also pretty good and aesthetically pleasing, but when I showed the game to my friend to see if he might like it, he rejected it because "the art didn't speak to him." Evidently, compared to me, he has a higher minimum for graphics in order for a game to be fun for him.
Using lockdowns as a reason why we need entertainment seems so strange. We should instead not be locked down and focus our attention on building tech to prevent that. Not arguing against entertainment mind you, just seems strange to me.
True. When engineering and art are created in service of and for the edification of others.
But they also create systems and media designed to extract as much value from humans as possible, by bypassing any mindfulness on their part. Regardless of any regretful impacts.
Not sure what I am saying, other than completing the picture as it is today.
The surgeon does not need images or Javascript or even CSS to listen or watch. She only needs the audio/video files. The problem with adding "graphics" that are purportedly to assist the surgeon in opening these files is that web developers have generally sold out to marketers and advertisers. As it happens, the best way to avoid the marketing and advertising is to bypass the "graphics".
I like the delusional presumption/implication that there are hundreds of thousands of long-term unfilled job listings for things like fusion researcher, nano bots engineering, space exploration, (other near future sounding stuff), etc., and the only reason those things aren't getting worked on is because every Factorio/EVE player is a genius with a debilitating video gaming addiction.
All jobs have a repetitive aspect, precisely because we want experts in those jobs doing them. But for the experts the work may often be boring and repetitive.
I am pretty sure most of a surgeon's surgeries are NOT exhilarating experiences for them, but rather routine activities. Similarly for lawyers drafting contracts, software engineers developing web backends, construction workers building houses, or singers singing their repertoire.
Not all of these jobs can be automated, so some routine will always exist.
Not all work is fun. I don't really like washing the dishes or vacuuming the floor where I live, but it's necessary work. A lot of jobs are the same way. Even when a job consists of a fun activity, doing it 8 hours a day, 5 days a week might again invoke the feeling of trudging. I've heard art jobs are like this, and programming jobs. I don't think this is something we'll ever really solve.
I appreciate the comment, especially with the example with the hypothetical brain surgeon. I've also anecdotally known people personally who have become accomplished in academic research, who preferred easy-to-access entertainment to decompress after a difficult day of work. So, work on entertainment does have important value to many people.
But at the same time, on a broader point, the author has a compelling general idea that it can be helpful to question how you spend your time away from work. I've spoken with people—and have personally experienced—a satisfaction from physical hobbies outside of computer interaction. Rock climbing is especially popular among the people I know (even those with busy schedules), along with martial arts such as BJJ or judo. I also know a couple of people (one personally, and another impersonally, a novelist, through his memoir "What I Talk About When I Talk About Running") whose life enjoyment is closely tied to their passion for long-distance running.
Outside of physical activity, other low-technology ways of entertainment will remain important. Reading books, especially classical ones, can improve one's writing. I've also read about chess players who have attributed their strong ability to plan ahead—in life and martial arts—to their passion for the game. I've additionally known people whose interests in music performance and production led them to develop good friendships with others. Naturally, this can apply to immersive technology: video streaming or high-graphics video games also have the ability to inspire people in their day-to-day life, or help people make meaningful connections.
---
My source of skepticism is therefore not with the people working on nicer-looking graphics, as highlighted by the author, but rather with the people who apply psychology in video game design to make certain video games addictive—especially those with free-to-play models, which employ gameplay loops to keep people grinding for rewards even when the game stops feeling fun. I think the author is focusing on the wrong part of the industry: certain game designers have had a larger role in making certain video games have negative effects, as evidenced by somewhat-recent successful lawsuits against game companies that employed loot boxes and overly easy ways to make in-game purchases in games targeted toward children.
Video games and streaming can absolutely have a healthy place in one's life—especially productions created for the love of the art instead of primarily for the money—but only in moderation and not at the excessive expense of other ways to spend one's free time. Graphics development isn't a problem at all, but I do think many people's lives can become richer if they also give passions that don't require a personal computing device a strong chance.
Why is that? Remember that during the pandemic, many places shut down any other sources of social stimulation that might be there, including actual socializing. Unless you had a very specific set of existing interests, it was hard to find anything outside of work to do in many cases.
While sitting with my iPad typing the other morning, something about the experience struck me in a way that it never had before.
I've been typing since the early 90s, and can type around 120WPM without looking or thinking about it, and it hit me that in a very real sense, the iPad (and other computing devices) are already an extension of me. I've invested the time to integrate this hardware into my brain via the keyboard interface, and once typing is automatic, the friction between brain and machine is very low. I can transmit information from brain to computer and back relatively quickly.
The thing about immersive tech is that we're already immersed in tech. The next generation of VR/AR promises to immerse us even more, but I think it's interesting to consider the idea that we're already immersed and don't always realize it.
When you start to look at the space around you as an extension of you (and I think there are good reasons to look at it this way - your immediate surroundings are in effect a projection/construct formulated by your brain, and the actions you take within that space modulate your average conscious experience), and when you start to look at the computing devices around you as part of that extension of you, it starts to raise really interesting questions like:
If I could implant a chip in my brain, and if people could control my brain with that chip, I would probably never allow it. But when that chip is outside of my brain in a device I keep in my pocket, why am I more willing to allow other entities to feed me stimuli?
I tend to agree with the broader idea that we need to be less immersed in tech, if for no other reason than to reduce this kind of external control mechanism we've all hooked ourselves into. And I don't think immersion is limited to the obvious developments like that next-generation VR/AR headset. Immersion is already extremely high.
When my computer glitches, e.g. if a key stops working on my keyboard or the mouse doesn't respond as expected, I sometimes experience a physical sense of discomfort or disorientation that's not unlike dizziness or the feeling of missing a step on the stairs. I think it's because, as you describe in terms of immersion, my brain has extended its concept of my body through my typing fingers to the computer screen, building a proprioceptive loop that includes the computer.
I don't buy this for even a minute. If the starter solenoid goes out on my old Jeep, I get the same feeling. (It's happened a few times now.) Does this mean the Jeep is now part of my body? Of course not.
It’s called being annoyed or caught off guard by external stimuli.
I think there is a large portion of people on this website who have not spent a day in years without their devices. That doesn’t mean their devices are integrated with their biology. Just that they have become dependent on things.
> Does this mean the Jeep is now part of my body? Of course not.
Why is that "of course not"? Is your arm part of your body? Your hair? A wig? A pacemaker? A well-fitted mechanical exoskeleton that you use without thinking about it? A poorly-fitted exoskeleton that you're still learning how to use? A Jeep? It's not as clear-cut as you make it sound. If the criteria is: well, I can't feel the Jeep -- well, you can't feel your liver either.
> A well-fitted mechanical exoskeleton that you use without thinking about it? A poorly-fitted exoskeleton that you're still learning how to use?
Of course not to both of these things.
> A Jeep?
Of course not, and framing a Jeep and a human's liver as if the only possible point of difference between them is whether or not you can "feel" them is strange.
Half of us are fully capable of growing an entire other human being within our bodies, with a physical connection so tightly integrated that breaking it causes bleeding for weeks - and yet it's debatable whether even that counts as being a part of our bodies (especially medically speaking). Navel-gazing about the distinction between your body and a car not being clear-cut is simply inane to me in comparison.
So your definition seems to be that it's part of your body if it was built via instructions in your inherited DNA? So tattoos are not part of your body, nor are the gut bacteria you were mostly born with (or their ancestors) and without which you would quickly die. Phlegm that you spit on the street is still part of your body? Your tears? What about clonal colonies and parthenogenesis? Do identical twins have the same body?
The point is, your "of course" is based on some "obvious" definition that always gets fuzzy at the edges. Someone else's "of course" is based on some equally valid and obvious definition that gets fuzzy in different ways at different edges. Don't act like everyone who doesn't think exactly like you is just pointlessly navel gazing. The pointlessness is in trying to have a clear definition at all, which saying "of course" implicitly assumes.
> and framing a Jeep and a human's liver as if the only possible point of difference between them is whether or not you can "feel" them is strange.
Good thing that's nowhere near what I said. I said "I can feel it" is obviously not a good criterion.
> So your definition seems to be that it's part of your body if it was built via instructions in your inherited DNA?
I think you'll find discussions significantly more straightforward if you don't make up the other person's side of the conversation for them and run off tilting at windmills.
What part of the comment where I e.g. said that whether or not a pacemaker is considered part of your body "depends on who's asking and why" gave you the impression that I define body parts as being built by you? What part of my comment mentioned genetics at all?
By the by, tattoos are a modification to a part of your body (your skin) but are very much not a part of it (your immune system is literally constantly trying to get rid of them). And your gut microbiota is very clearly considered to be a _separate_ organism from you, with its relationship described in symbiotic terms.
> I think you'll find discussions significantly more straightforward if you don't make up the other person's side of the conversation
You never specified your "obvious" definition, so I had to reverse engineer it from your examples. I wouldn't have to make up your side of the conversation if you just actually said what it is. You're still using "obvious" and "clearly" as meaning obvious to you, based on every iota of experience you've ever had, but totally failed to justify why exactly your "obvious" opinions should be obvious to anyone else. Either you have a hard and fast definition that can definitively determine whether that thing is or is not part of a body, or it's not obvious. Unless your rule for determining whether something is part of a body is: "just ask filleduchaos, and he'll tell you whether it obviously is or obviously is not."
> And your gut microbiota is very clearly considered to be a _separate_ organism from you
"Very clearly" again. It's considered a separate organism because it has different DNA than your germ cells. But so do your mitochondria. So does a chimeric twin. Is a chimera "obviously one" or "obviously two" bodies? Tell me which one of those is so obvious that I'd be completely stupid to believe the other one, please. I suspect you'll say it's obviously one body, but then you have a problem with gut bacteria being obviously not part of your body.
Ever go rock-crawling for several hours off-road with that jeep? If the stakes are high because you're far from a paved road, 4x4 is also a very immersive experience. After a while the tires become like toes, and finding a good line with the vehicle starts to feel like stepping carefully around broken glass. You cringe almost in anticipation of personal pain when you take a bad line. There's surely some neurological basis for this, some kind of reverse of phantom-limb syndrome. Similarly that sense of "stumbling" when you're fighting with a glitching IDE or whatever is also very real, at least for me. If you haven't ever felt something along these lines, then you might not have found something yet that you're accustomed to fully focusing on.
Even playing a racing simulation, where it's a constant challenge to control the car, the controller or the wheel quickly become abstracted away, and the car itself becomes something your consciousness inhabits. You react to stimuli (sound, visuals, and vibration) not from your own point of view, but from the car's.
I think you just improved my Rocket League game by accurately describing what probably should have been obvious: that's exactly what it feels like to be in the zone.
Wait, but driving cars is pretty much the classic example of how learning to use tools can make them an "extension" of our visual, motor, and spatial perception.
It's why a new driver feels so uneasy while a seasoned driver can almost "feel" the amount of space their car takes up.
Bad news, my friend: you're a Jeep cyborg. Basically something from a Transformers movie.
And yeah, I'm stretching the argument a bit -- of course at least on a conscious level there's a boundary at my fingertips where I know that my body ends by the more generally accepted definition of "me".
I'll get this occasionally when I've scrolled to the end of an article without realizing, attempt to scroll further a few seconds later, and then have my eyes/vision caught off guard by the absence of movement. Missing a stairstep was a good example--there's a similar moment of "frozen time" in both cases
I didn't want to bore the reader with various definitions of immersion but you're raising a very good point. Apart from the "immersion for entertainment" that I'm mostly talking about, there's also the "flow" and the "tech as augmentation". I have obviously no problem with people being more productive through technology. What I have issue with is the (unwitting) funneling of the world's talent and resources into tech that makes us immersed in non-existent worlds just for the sake of entertainment.
I think Buffett has a great counter to this, which he learned from his father: the inner scorecard. By judging yourself on your inner checklist, you're less likely to be pulled into the latest fads. Your goal is to follow your own path instead of the path that happens to be fashionable that day.
That we have arms and legs that we can control is nothing but an accident, a very complex accident but still.
Our eyes and ears are no different in that sense, only they're mostly input rather than output.
Input is just as dangerous in our bodies as it is in some backend that connects to a database; marketing makes use of these vulnerabilities to control us in some way or another.
I don't think people argue we should cover our ears while walking outside out of fear of being hijacked by things we might hear. In the same way, I don't think there's an issue with immersive technology; one just has to learn to treat it as more external input that needs to be validated and sanitized.
> If I could implant a chip in my brain, and if people could control my brain with that chip, I would probably never allow it. But when that chip is outside of my brain in a device I keep in my pocket, why am I more willing to allow other entities to feed me stimuli?
This definition of "feeding you stimuli" seems extremely broad to me. Are you "allowing other entities to feed you stimuli" when you listen to the radio? When you see signs and billboards on the highway? When you are having a conversation? How could you live your life without allowing other entities to feed you stimuli? How are these examples different from what your phone is doing? (And don't say "notifications": you have far more control over which notifications you receive than which billboards you see.)
If I could implant a chip in my brain that allowed me to tune into any radio station I wanted and listen to it in my brain, I'd probably do that, assuming it's safe and actually under my control. I'm not seeing the dichotomy here.
> This definition of "feeding you stimuli" seems extremely broad to me.
To rein this in a bit, here's how I'm defining this: regularly "choosing" to interact with a library of content and experiences that are all engineered to evoke certain emotional responses from me as the user and get me to buy more things or change my beliefs. I put choosing in quotes because once a habit loop is established, the behavior is indistinguishable from an addiction and the choice is similar to the one made by a gambler sitting at a slot machine.
Listening to the radio, watching TV and talking to a friend all provide external stimuli. But clearly there are aspects of each of these interactions that makes them unique.
When I'm talking to my friend, I usually don't have to wonder if the things they're saying are only meant to influence my purchasing decisions, or change how I see a political candidate, or support a particular world view, etc. Some friends are good influences. Some bad. If you choose to hang around the stoner who always wears you down and gets you to smoke a joint, that particular friend might not be a good influence.
> How could you live your life without allowing other entities to feed you stimuli?
I'm not suggesting that this is possible or even desired. I'm suggesting that there are certain sources of stimuli that we deem unacceptable that are strikingly similar to stimuli that we deem acceptable and that it's not clear why we categorize them so differently.
In other words: If you would say "hell no" to a physically connected chip that feeds your brain ads for products you don't need every hour, why would you not say "hell no" to a device that could do the same thing without even needing a physically connected chip? (I'm saying this as a person who carries one of those devices, so I'm not a Luddite or claiming to have avoided anything).
> How are these examples different from what your phone is doing?
There are a myriad of differences, but I think they boil down to these key things: Ubiquity, Tracking and Personalization.
Billboards (which are ugly and often the source of controversy in communities) do not know who you are, and they are not in your pocket every minute of the day. They don't insert themselves into the middle of your interactions with friends, or follow you home.
> And don't say "notifications": you have far more control over which notifications you receive than which billboards you see.
I disagree with the premise of this objection, but notifications are only a small part of this. Most people don't turn notifications off, and app makers know this. There is a finite number of billboards you can fit on any given stretch of road, but an infinite number of notifications you can be subjected to regardless of where you are on any given day.
Yes, you can disable notifications, and I think this is one of the simplest things people can do to interrupt the addiction loop. But many of the most popular apps carefully construct notifications that get you into the app for plausible reasons (comments on your post!) so they can then feed you the algorithmic payload.
> If I could implant a chip in my brain that allowed me to tune into any radio station I wanted and listen to it in my brain, I'd probably do that, assuming it's safe and actually under my control. I'm not seeing the dichotomy here.
I think that "assuming it's safe and actually under my control" is exactly where the radio and social media examples diverge. The point is that much of the content on social media is not safe, and you have less and less choice about what you actually see. With radio, you probably wouldn't choose to listen to the channel that airs Rush Limbaugh 24/7. With algorithmic feeds, you don't really get to choose what you see.
For anyone more interested in this general topic, in philosophy of mind, this is referred to as “the extended mind” using the terminology of Andy Clark and David Chalmers in their famous 1998 paper of the same name.
that's a great point. i guess i can say the same about how i interact with my computer, but for me there is a different quality to it. while the computer and phone and various messaging apps or forums like hackernews are often my only social interaction, and i spend a lot of time on them in search of someone to connect to, i also have all notifications off on all devices, and i expressly do not allow other entities to feed me stimuli when they deem it fit, to the point that you can't even call me because i won't hear the phone ring. i want to be in control.
there is of course a balance to it. not getting notifications means i spend more time checking for new messages than i would if notifications were on, but at the same time i also forget to check for messages when i am actually focused on something, which i think is the more important aspect of it.
so i am immersed with my tools only as an extension of my self, controlling the tools to do what i want them to do without giving up any of that control to anyone else. (at least so i like to think)
I wish we could merge this comment with the thread where people are defending Youtube's right to show ads. I am so glad there is technology available to choose what I want to read and see without being force-fed too much other content.
One day I want to expand on the idea that our digital avatars, our images and representations of ourselves, become an extension of our own self-concept, in the same way our keyboard becomes an extension of our hands and our body. Our social media profiles, our timelines, photos, and digital content, become part of our selves, and we begin to identify with these time capsules we've created of ourselves, to the point where we _are_ the person in the photo, we _are_ that person on Facebook.
It's a fundamentally backwards-facing view of the self that relies on historical or past depictions of an individual in order to define their current self-concept. It implies an immutability, a static, unchanging quality to the person, since after all - there "you" are, or at least all your photos and your memorabilia, your self-depictions and simulacra, committed to and preserved in the permanent record of the cloud. It's also one that emphasizes the importance of external markers and signals of identity - only those that can be extraverted - to the detriment of the inner life of the psyche, the private life of the mind which is not so readily available for examination or expression.
This way of looking of ourselves through a digital black mirror continues to uphold the illusion of permanence; our self is like a river, and we never step in the same one twice - unless we freeze the river (in time) and post it to Instagram.
Not to mention all the vanity and superficiality of Photoshop, filters, r/instagramreality, etc etc... points which have all been discussed ad nauseam.
Moreover there is the slow death of the literary personality - of one who, in times when media and bandwidth were not of sufficient capacity to generate and retain all these audiovisual representations of our selves, was primarily known through their words and their writing. This is a fundamentally different way of trying to understand one's character, requiring much more participation on the audience's behalf to fill in the gaps - the gaps that can't be immediately filled in with high-definition video. It becomes harder for one to be identified through their words, and more and more preferable to identify someone by what they look or sound like; another step in the long, slow march away from literacy and toward other forms of information exchange.
In this regard I'm reminded of David Foster Wallace's uncut television interview [0], which I found incredibly fascinating - here was one of the most articulate men in society, celebrated for his literary works, appearing awkwardly and shyly on camera, sometimes meandering off on tangents in the discussion, sometimes "pontificating" on the interviewer's questions. How does DFW's audiovisual representation compare to his literary output?
You could really say this about anything; the only moral to the story is that people have to work to pay the bills, and the work that pays is the unglamorous stuff that needs doing.
I believe this has been true throughout human history. The greatest artists and thinkers throughout history have earned their keep by providing services to the wealthy or teaching paying students (also often wealthy).
There are upsides: the artists get to eat, but they also have all the benefits of being connected to their employer/patron, including time spent practicing their craft. If they were to quit and focus on pure art, they would likely earn far, far less and possibly spend less time making any kind of art, commercial or not.
It reminds me of a survey of authors that made the front page not long ago, which contained this illuminating paragraph:
> While 80% of respondents considered themselves to be professional authors, only 35% said they were full-time authors while 53% said they were part-time authors (with the balance being one-book authors or undecided). The primary writing occupation of part-time authors outside of publishing books was professor/academic (8.5%), followed by book illustrator/author (4.2%), editor (2.9%), poet (2.4%), journalist (2%), teacher (2%), and entrepreneur (1.5%).
So even people who consider themselves professional authors are unable to work at it full time. It makes sense, there's only so much one person can produce and it's quite difficult to find an audience willing to pay for what you make. This likely holds true across all professions i.e. you have to do the work that they want done and not the work that you want to do.
the work that pays is the unglamorous stuff that needs doing
no, the work that pays is the unglamorous stuff that someone wants to get done to make a profit. the stuff that actually needs doing is the stuff that would be a benefit for our society.
"Profit" is just the difference between what people are willing to pay for a service and the full cost of what it takes to provide that service (which itself is defined as how much the different suppliers could be making doing other stuff). Business owners are rewarded for shifting resources away from activities that consumers don't value highly (as evidenced by their willingness to pay for them) to ones that they do. If you feel that businesses are focusing on the wrong things, spend your money differently.
Blame is counterproductive. The world simply is the way it is, but it is helpful to have a realistic picture of why, and then you can make local modifications that might at least make your own life a bit nicer.
In advertising's case, it's such a huge industry because it works. It moves the needle on purchasers' decisions. You can make it work a little less well on yourself with a few mental habits: never buy on impulse, don't save your credit cards, put time and distance between yourself and the purchase decision, have a rigorous self-directed research process before you open your wallet, learn to skip or tune out ads, go for paid media instead of ad-supported media, etc. And you can make it work for you by taking the other side of trade and getting paid when other people click on ads. But there are 8B+ humans who you don't control, and they will do what they want to do.
> Business owners are rewarded for shifting resources away from activities that consumers don't value highly (as evidenced by their willingness to pay for them) to ones that they do. If you feel that businesses are focusing on the wrong things, spend your money differently.
So... that's not blame? Is "businesses only do this because you tell them to, with money, so stop telling them to if you don't like it" not the intended reading of that?
My point is that the modern advertising industry is better classed as a result of large-scale structures and societal-scale rules, the same way bread lines were, than as something explained by consumer choice. I mean, FFS, the point of it is to influence consumer choice. This is like saying if the gas pedal doesn't want the car to go faster, it should stop getting pushed so much.
It's consequences, not blame. There's no value judgment in saying "If you do this, it will result in these consequences." You can make your own judgment about which set of consequences you prefer.
"My point is that the modern advertising industry is better classed as a result of large-scale structures and societal-scale rules"
Does this model result in useful predictions that you can act upon? A model that "advertising works because in the aggregate, it alters buying decisions and leads to more spending being directed to the advertiser" is very actionable: it tells you exactly where the money is, why it is being spent, and then gives you leads to areas you might want to study (eg. human psychology and perception, owning a channel, producing content at scale) to make you better at influencing those money flows. A model that "the modern advertising industry is better classed as a result of large-scale structures and societal-scale rules" may be true, but it's pretty useless. It doesn't have enough detail to make specific predictions, and its area of focus is on phenomena that you don't have any agency over anyway.
My physics professors were always very clear that the true value of a theory is "Can you make testable predictions with it?" My English & sociology professors were always very clear that "Society doesn't actually exist. It's just a collection of individual actors." This was pretty eye-opening when I got to college, because it got me to understand the value of thinking in terms of specifics rather than grand theories that sound expansive as a soundbite but can't actually be used.
i can with good conscience say that i am doing all of what you suggest. no impulse buys, i do research, i skip and tune out ads. except for the paid media to avoid ads. i use adblockers for that.
but i struggle with that other side. i'd love to earn some money on side businesses like that, but i feel like making them ad-supported would be close to unethical. i want people to stop paying attention to ads, not take advantage of them.
if i allow ads on my website, i am not giving people what they want; i am exploiting their naivete to make them buy what they don't actually need or want.
i probably can't even control what ads will be running, so how would i know if those ads are something i would approve of or not?
Right, the key reason this line stings so bad is precisely that 99.9% of money and effort spent on advertising is capitalism-friction. It's waste-energy. It's escalating zero-sum games. It emphatically does not need doing. It's an accident.
[EDIT] And that's the optimistic take. In fact a great deal of it is harmful, not just wasteful.
You're thinking of picking up the trash, driving trucks, fixing toilets, and assembling wheelbarrows. That pays shit: the things that "need doing" are mostly just not done, and what is done pays nothing. Advertising has never "needed doing", it's always been an exploit that psychopaths use to enrich themselves at the expense of all of humanity. And the advertising industry is nothing if not glamorous.
I don't think you can get away with throwing out the baby with the bathwater on this one. In any market where there is more than one supplier, you need some form of advertising, even if it's just a couple of signs that say "Bob sells shoes" and "Alice sells shoes", though in reality customers need more information than that. What kind of shoes? Men's shoes? Women's shoes? Children's shoes? What color are they? What weather are they good in? What are they made of? etc. etc.
Granted, advertising has been taken to the extreme, and there wouldn't be harm in cutting back on it to a certain degree, but you just can't get rid of all of it, so it is in fact one of those jobs that need doing.
The point of my comment is that you can say the same thing about every pairing of a thing someone wants to do and a thing someone needs done. The essential part is that it doesn't matter what the second thing is: people would still complain that all this talent for A was being wasted because they were working on B instead. But it's not a waste, for a variety of reasons, one of which is that there is not enough demand for A. You could easily say someone is a psychopath or antisocial if they choose to spend all their time doing something that isn't needed or wanted instead of doing the things that other people need done. You would at the very least say they are selfish.
This made me think of mural art in the ancient past. Yes, the artists were commissioned to do that art, but it still stands today in cathedrals and churches. Advertising comes and goes; nobody will notice when it's gone.
In a way, mural art in churches and cathedrals is advertising. It's advertising for that religion, showing you the promised benefits of subscribing or the terrible things that will happen if you don't sign up now during this limited offer.
> Many of the most talented artists of our time don’t do any art — they work in advertising.
> I think the original quote is:
> The best minds of my generation are thinking about how to make people click ads
No, it's a paraphrase of a quote attributed to Banksy and it's specifically about graphic arts, not CS or psychology.
> The thing I hate the most about advertising is that it attracts all the bright, creative and ambitious young people. - Banksy
The first time I heard it was in a conversation about how the best students in directing/filmmaking (or was it 3D art?) ended up making ads for cars rather than video installations, and I am pretty sure it was before 2010.
The sentiment is the same, though, and I cannot ever let pass an occasion to throw in my favourite quote: "when all you have is a hammer...".
> Nope, the most famous form is Ginsberg; I don't know if he lifted it from somewhere too.
No, they appeared in very different contexts and Ginsberg goes in a different direction (1955, drugs, jazz, etc.):
> I saw the best minds of my generation destroyed by madness, starving hysterical naked,
> dragging themselves through the negro streets at dawn looking for an angry fix, angelheaded hipsters burning for the ancient heavenly connection to the starry dynamo in the machinery of night, [..]
while Banksy deals with the then current state of arts and marketing (~2000/2010?).
Also, the article doesn't explain how Ginsberg's and Hammerbacher's ideas relate, except for an en passant quote from the person who originally asked for the Quote Investigator's opinion, so the grandparent link doesn't actually point anything out.
edit: to summarize: that quote is taken out of context even if it seems to fit
"Best minds of my generation" is a pretty specific phrase which appears in the Ginsberg quote and not in the Banksy one. I would argue that famous quote are more often related by structure than context. Most quotes about "the smell of <something> in the morning" are not about the Vietnam war, for example.
Ads existed long before clicks, and the most talented artists worked on them long before the most talented programmers thought about making people click on them.
The unsaid context of the quote is that programming is among the most lucrative fields (top 3 among large employment fields), and most of the money in the programming field comes from ads, which is why the best programmers are working on ads.
Artists at ad agencies probably never broke into even the top twenty jobs by salary.
Is that true though? Are most people working at Google working on Ad tech, or on the services that the ad tech helps pay for?
Now, to be clear, I don't consider GMail "ad tech", even though it is ad-supported. I don't see anything wrong with someone like Google wanting to drive traffic through services that are monetized through advertising. Nor Facebook, for that matter.
I will complain about lock in, dark patterns, and other nefarious things. But you can have good ad supported services without necessarily having all of the bad things. Those just bump your margin and revenue.
So, if you feel that the "good" programmers are working on ad tech and analysis, while the "less good" are working on, say, GMail proper, I'd be curious in how that conclusion was drawn.
How many people here work on ad tech directly? (vs, some dual use technology that can be used for ad tech.)
I know a lot of people, tangentially, indirectly, "6 degrees of separation" kind of thing, and I don't know any of them being directly involved in ad tech. None of the "lead geeks" I'm familiar with are in the field.
The closest it got was a friend of mine who worked on Farmville in its heyday, and that's more a dark pattern addiction game than ad tech.
Not really. Advertising is pretty new; modern advertising targeting market segments didn't exist until around the start of the 20th century[1]. The first ad-supported media appeared in 1838[2].
Behavioral ads targeting individual users are super new, though I'd argue TikTok and gacha games are orders of magnitude better at behavioral manipulation than advertisers are.
I don't know what the original actually was, but around the post-9/11 derivatives finance boom, physicists and electrical engineers started lamenting that the smartest people of the current generation were wasting all that brainpower on relatively zero-sum arms races between hedge funds trying to win trades against each other.
The one about making people click ads grew out of that after the GFC when software salaries started to boom like finance had earlier in the decade.
I'm sure there was something else earlier that diverted intelligent, talented people into socially unproductive pursuits because it paid better. I recall, when I was in college, huge dilemmas among the students studying geology about whether they should sell out and go work in oil and gas exploration.
> I'm sure there was something else earlier that diverted intelligent, talented people into socially unproductive pursuits because it paid better.
In my experience (for quite a few smart people), the problem is not "which job pays so much better (and thus facing a potential dilemma about selling out)", but rather actually finding a job.
The job market does not like very smart people, but rather self-promoters and sycophants: I know quite a few really smart people (mostly with degrees in mathematics and physics, often with experience comparable to a PhD or postdoc) who had real difficulty finding a job in industry, and thus had to take the positions they could get.
I've heard a similar thing said about drivers: those who drive more respectfully and thoughtfully end up in more crashes, because the rest are less respectful of them in traffic (and less wary of crashing into them). It's an anecdote I just heard from a friend.
Maybe there's something similar in the job market: smart people are more aware that they may be taking someone else's spot, are more respectful, and are worse self-promoters. Because the other set of people don't even think about that, they end up getting the spots.
The moral of the story is that a lot of humanity is just working for cash rather than for anything useful or interesting, for humanity or even for themselves. Meeting basic needs, and sometimes a bit beyond, but still quite worthless on any scale beyond your few m^2. Depressing, and it doesn't seem to improve with us all being better off.
I wonder if the original author knew they were also paraphrasing, riffing on Ginsberg's "I saw the best minds of my generation destroyed by madness". Glad to see quoteinvestigator mention that context
If they are doing advertising then they are art directors, not artists. The most talented artists make art, it's not something they have a choice in. They will suffer poverty to make their art.
Talent in art isn't in the execution; it's in the ideas, and in the tapestry of the artist's life.
> The best minds of my generation are thinking about how to make people click ads
This is also not true, because those minds aren't intelligent enough to see the folly of their attention. The best minds see past this and do the work they are driven to do, similar to artists.
Let’s not over romanticize the starving artist. There are plenty of creative people who could create good, worthy art if they had the means and support to pursue it fully. It’s not a moral failing to have a need to make a living, especially if others are relying on you.
The article literally said "the most talented". The top-talent artists are certainly making art, and it is a fact that they are driven to do it more than anything else. That's the attitude that makes them "the most talented". I am not romanticising it; it's just how it is.
> It’s not a moral failing to have a need to make a living, especially if others are relying on you.
Exactly. People make choices in what they do, it has nothing to do with talent being taken out of the market, it's always been this way. There have always been more profitable things to do than making art.
You do know you can do what you like outside of work, right?
You have a hyper-idealized version of what is and is not art.
And execution definitely forms part of the talent. We should know that just as well. We all know "the idea guy", the guy who has all the "right" ideas, but just can't seem to actually do the thing required to bring the idea to life.
And that's because most ideas are just half-formed thoughts. I'm almost on the other end of the spectrum. It's mostly about execution and the idea actually means very little. The idea of "what if you couldn't make new memories" is the central struggle in two very different movies. In Memento, it's used to tell a detective noir story where we're told who the killer is in the first scene. In 50 First Dates, it's used to give Adam Sandler a hurdle to plowing Drew Barrymore.
And you can find joy in the doing itself. You can make good commercial art. It is possible. There is craft there. And where there is craft, there is art.
> You have a hyper-idealized version of what is and is not art.
No, I made an assertion that the most talented artists make art.
> And execution definitely forms part of the talent.
Seeing that many, many top, full time, talented artists have technicians that execute, I don't think it's the defining quality of a top artist. Execution can be offloaded, ideas cannot.
> And that's because most ideas are just half-formed thoughts.
Talented artists have fully formed thoughts. That's their talent.
> And that's because most ideas are just half-formed thoughts.
Well, I'd suggest your job is at risk. Execution can be automated, ideas not so much. Who is gonna tell the GPT what to do?
> And you can find joy in the doing itself. You can make good commercial art.
Yeah, those people failed to be the top talent in art and did something else. They are still highly talented but not in the realms of top talented artists.
The ability to realize an idea is way more important than "having ideas".
And I say that "most ideas are just half-formed thoughts" because they are. Most people think an idea like "the Uber of X" is worthwhile. It's not. It's hardly a thought, much less an actual idea.
Everything else you said was basically "nuh-uh, I feel differently and assert that my opinionated feelings are facts". And examples won't matter to you, because you'll "no true Scotsman" your way through them, saying those aren't "real" artists or "top talented" artists. As if your opinion on these matters were an objective criterion.
I have a different take than the author. I don't have an issue at all with directed activities being deeply immersive - games most specifically. That is, I like the idea of putting on a headset for a half hour and playing a game, then doing something else.
What I think is much more destructive is when "immersion" just looks like "constant distraction", i.e. the idea that we'll wear these immersive devices all the time so we can be bombarded with "helpful" notifications and, oh look, in-context advertising! That is, when I want to relax I want to relax, and I don't think something holodeck-like is bad for that. But when I want to focus I want to focus, and strapping a headset onto my face for extended periods is not going to help that situation in any way.
There are a few (TNG and Voyager for sure, maybe DS9 too) Star Trek stories that deal a bit with holodeck addiction ("holo-addiction"), and I think something at that level of immersion would absolutely cause lots of people to completely withdraw from society, especially if (real world) society is bleak (Ready Player One deals with this aspect of it). I already know 30-something childless adults who spend effectively every non-working moment in some virtual world -- these people are effectively already holo-addicted. It will only get worse if these environments are more able to block out reality.
To me the sad thing about my friends in this situation is that they usually aren't in some swashbuckling fantasy epic starring them, but grinding Runescape/WoW/Eve/etc., performing the same repetitive actions over and over, with the occasional burst of actually interesting interaction or story. They've replaced the real-world grind with a synthetic grind, but they are still grinding! At least it's cheaper than a sports gambling addiction, and probably marginally less health-destroying than a drinking habit.
This is a major plot point of the book A Beautifully Foolish Endeavor, sequel to An Absolutely Remarkable Thing. The main antagonist uses high quality VR to trick the brain into full reality from someone else's perspective. The effect on society is grim, and when society is being actively graded by aliens on its worth, this is a big problem.
I agree with you. One part of the article that I cut for brevity, but maybe should have included, talks about the kind of immersion that is created by bright colorful moving pictures on a mobile screen. A lot of talent and resources are thrown at the goal of keeping people glued to their phones (TikTok, Candy Crush, etc.). "Engagement" is a form of immersion.
> One part of the article that I cut for brevity but maybe I should have included…
I think that would be valuable and might deserve its own follow-up article. Immersion in the context of creation or play strikes me as a completely different animal than immersion in the context of passive consumption.
I used to work for a small business accounting software company.
We used to talk about reducing the amount of time people use the product. Doing accounting is NOT what small business owners should be spending time on, and our goal was to reduce the amount of time spent.
As the company got bigger, we started selling adjacent software, and suddenly time spent in the 'ecosystem' became an important factor for increasing revenue per user and the rest is history.
Especially in the B2B space with sales reps, it's hard to sell people on things they don't have to do.
The average American adult now spends 8.5 hours consuming digital media every day, including games.
Most of it is probably consumed for entertainment purposes (news, social media etc. - social media's share averages to around 2.5 hours per day). Have we considered that maybe we're just spending too much time on entertainment? As opposed to doing things that actually make us better, like studying, practicing a skill or exercising.
> Have we considered that maybe we're just spending too much time on entertainment?
No, I doubt it, unless you have some data that the advent of digital entertainment has changed our entertainment habits. (I think it's a separate question when it comes to social media + smartphones, which work very very hard to steal our attention.) People don't work all day long. Before we had digital media, people entertained themselves with social visits, theater, drinking, reading, meditation... basically all the things we still do outside of digital entertainment.
One compelling theory for why we have not encountered intelligent life is that any sufficiently advanced civilization will eventually stop exploring the universe and immerse themselves entirely in virtual reality, to the point that they harness the total output of their star for computation, and disappear.
Relativity means that the Star Trek vision of a galaxy spanning society is probably an incoherent fantasy. Why pursue expensive, dangerous, and disappointing adventures in the real world when you can conjure any conceivable reality with perfect verisimilitude?
Working on my own kind of metaverse, I've grappled with this issue, and that is why I hope to use it to improve physical-space endeavors rather than replace them. This really started for me in two places. The newer one was when I pitched to the Department of Education, for a grant program, software I called Meta-Education Environment for Simulation and Gaming (MEESG), aimed particularly at assisting in the training of high-risk vocations (such as high-voltage systems repair). But after I thought about it, I realized that the earlier inspiration was the VBS simulation I trained on in the military and how I noticed the benefits directly (later leading to many hours of Arma2/Arma3 playtime, the civilian version of VBS).
So the main fight I see in the future is with those who think (like the article) that all this immersion effort is vapid and superficial. I don't think it is, and for more than just training: also for rapid iteration in a simulated physical space that doesn't waste actual physical resources until a better product is developed. Ergo, I feel there is hidden value in the virtual space, yet untapped in the wider market, for combining fun and relaxation with teaching valuable things about the real world.
Just for example, I have recently been adding my local flora/fauna, with their edibility and medicinal properties, to one of my gameworlds, which could help me accelerate my learning of that particular real-world thing but also makes the gameworld more fun and interactive.
An even darker take, if some renegade member of an advanced alien race does decide to spend a lot of resources on exploring the universe, and gains the means to do so, the other members of their species may very well plug them into a simulation while they are sleeping and make them think they are exploring the cosmos.
Unless a species is completely post-scarcity, there is a strong ethical argument to be made that this is the correct thing to do, versus spending a lot of resources on "mere exploration".
> A very common practice in videogames is to make your game visually immersive—that is to say, to visually portray the game’s elements in such a way that makes the player, to some extent, feel like they’re “really there.” The most obvious way this is employed is via a first-person point-of-view camera, as seen in titles like Counter-Strike or Elder Scrolls V: Skyrim. In these titles—especially in the highly fantasy-simulation-dependent Skyrim—part of the idea is to “immerse” the player in the world.
> The problem is, this isn’t where “immersion” really comes from. Ever notice how people get incredibly immersed in a great novel? What could be further away from the literal, realistic portrayal of reality that Skyrim brings than a set of glyphs in black and white printed on dead trees? And yet novels routinely engage people to the point where they are completely and utterly immersed.
> The myth is that immersion comes from visual/auditory messages, but the problem is the human mind wanders quickly. We’re curious and inquisitive and while a picture-perfect image might in fact immerse us for a moment, if there isn’t an engaging system there for us to keep us immersed, we’ll quickly snap out of it and remember that we’re just tinkering with some computer program.
> The thing that engages people in interactive systems is actually quality interaction—for games, this means interesting, difficult and meaningful decisions as frequently as possible.
I think that quote confuses intellectual immersion with sensory immersion. Those are separate things. Yes, you can be immersed in a chess game or a good book or a PacMan game, but it's not the same as having a VR headset on?
Photorealism does not equal immersion. Immersion happens in the mind, not the eyes. The games I've been most immersed in were made in the 90s, when the graphics were not as good but the gameplay was better.
Absolutely! I remember playing Quake 3 and whatnot back in the day. As I became immersed in the game, the keyboard, mouse, and 1024×768 monitor would eventually disappear (figuratively).
Strangely enough, I feel like there is more friction for me to play video games of this era (and I don't), despite the advancements in and availability of technology to interact with said games. Perhaps the friction I feel has more to do with the bombardment of content (ads, microtransactions, social media, etc.) that gets in the way of and distracts from actual gameplay.
Then again, maybe I'm just getting old.
Perhaps this is what the author is trying to get at? If we could immerse ourselves just as deeply in "primitive" games and hardware, and were therefore exposed to the same temptation and risk, then what is different or relevant now?
The quantity of those immersed, and an additional layer of (arguably harmful) content?
I don't know if I've ever been more immersed in a game than Cataclysm: Dark Days Ahead, in ASCII mode. I usually die pretty early but, one time, I survived over a year. Built a huge base, a maxed-out RV, explored all the lore I could find, and I was basically indestructible in standard combat. I ended up dying while driving really fast down a road I'd driven hundreds of times before; it was supposed to be clear. A big floating, laser-shooting eyeball hit my RV. That was over 5 years ago and I can still "remember" that world, the crazy things that happened in it, and all the scary monsters I fought, as they existed in my imagination at the time, prompted only by ASCII characters and a short description of what they were supposed to represent. I guess it's like reading a good book, but I'm in control of the story. There are obviously other ASCII-based games similar to CDDA, but there's so much detail in the mechanics, so much you can do to affect yourself and the world in a meaningful way, and open-ended lore to spark the imagination, that it ends up being way more immersive than similar games. The game has tilesets you can use instead of ASCII, but I find they put my imagination into cartoon mode, which isn't as deep an experience for me.
This is true of games, but not other applications. You need photorealism to have immersive video calls -- as if you are meeting in person. Here is a tech demo by Google; see how users respond to the realism:
Something I should probably have addressed in the article but cut for brevity: there are at least two kinds of immersion, and people tend to conflate them. (I'm not immune to this.)
There's intellectual immersion. Flow. You can be intellectually immersed reading a book or playing chess.
Then there's sensory immersion. Put a VR headset on, and you're immersed this way.
I've said before that immersion is not the same as investment.
I don't want to be immersed in Amnesia The Dark Descent just because it would be "more realistic", I can already shit realistic-enough bricks without drowning in the sensation of "I'm going to die".
I want to be invested in Link's journey to save Hyrule or my party's progress in Baldur's Gate and investment's worked fine before headsets were a thing.
I can just imagine the abuse headsets will enable for microtransactions. Got my popcorn ready to go.
> There are new opportunities in things like e-Ink or transparent screens or IoT that can help people re-focus on the real world around them yet still reap the benefits of technology.
This assumption that only games with nice graphics are immersive enough to somehow be a problem is just wrong. Before I had gaming hardware I wasted the exact same amount of time by re-reading books and comics, and it was quite immersive. AAA graphics are not the problem.
> humanity has a variety of important engineering problems to solve, and nicer-looking graphics is quite low on that list.
That sounds an awful lot like the good old "why do we build space rockets when people are starving".
> Before I had gaming hardware I wasted the exact same amount of time by re-reading books and comics, and it was quite immersive.
If so, you are in the minority. For most people, it is easier and more natural to put down a book and go do something else, than it is to quit a video game. Media consumption metrics seem to corroborate this.
> That sounds an awful lot like the good old "why do we build space rockets when people are starving".
If you want to simplify the article this way, sure, but then it's "why do we build ever-more immersive entertainment when we could build space rockets or try to address starvation". I think there's a difference. (And of course, even then we're losing all nuance of the actual article, but I guess there's no way around that.)
Same thing regarding utilities -- we need home electricity switchable with a key/knob, not something you browse for in a smartphone app for a minute (yes, with a phone full of running apps and network hiccups, it may take a minute). And we need public transit usable without any smartphone, not making people look for the buses on a smartphone map (try pushing a stroller with a kid on a cold winter day while searching for buses in an app).
I agree with the sentiment that "the gaming industry has hijacked human play". The author justifies the development of his own game by the fact that "it looks like a CAD program", but I don't think that makes the game any less immersive. Having seen a few of his dev-logs [0], I think his goal of a 'systemic game' (if I remember that phrase correctly) has the potential to be just as immersive.
Games with impressive graphics but no gameplay aren't known for being particularly big drains of human attention.
As another example, take the comments on the recent thread about the browser game generals.io [1]. One might say that 'it looks like a spreadsheet with conditional formatting', but it doesn't make it any less addictive, according to the commenters.
After reading the article, I was expecting his game to be something like Tetris or some 8-bit retro thing. It's not, though.
Full immersion is entertainment, consuming your full attention. Like a movie theater or a stage play. That's OK, but not full time. Light immersion is walking around with your nose in your phone for most of your waking hours. That may be worse.
i don't want to look at virtual worlds while this one keeps getting uglier, louder, and stupider. i don't care about computers anymore. i hate them and wish they never existed. unless they are all destroyed i will never have a moment of aloneness or privacy again. i resent every moment i ever spent playing a computer game. i hate these images of fake worlds meant to make us ignore the death of the world around us. we are going to reduce the earth to a single shining black rock running bitcoin and second life. whatever promise i perceived in technology has either been betrayed or never existed. the scale of the betrayal, the constant surveillance, the hateful and invasive advertising, just the pure spite, greed, and malice present in every aspect of the logistical chain...
i want it all gone, or else i want us to use it to destroy absolutely everything. because what we have created is so hideous that only the capacity to destroy itself can justify its existence.
When I write poetry, song lyrics, or music, I am fully immersed.
The difference between video games and art is that I create the environment that I am immersed within.
Sure, like English or any other language, I’ve inherited the cultural context for my artistic endeavors, but that context is a basic requirement for any sort of immersive activity.
I don’t think making less immersive games is somehow more important than making more immersive ones. The more important problems aren’t gated by a lack of talent but by a lack of money and/or political will. Throwing talent at them (if you could) doesn’t magically add money or political will to solve them.
I agree that it's not just about talent. That said, having more talented people interested in a field often has the effect of bringing money and political will.
It is impossible to predict what effects technological developments will have.
Claiming that effort in one field is "wasted" or "misallocated" is the height of hubris.
I make satellites that are being used to scan the earth to determine the impacts of climate change, quantify coastal erosion, monitor foliage coverage and crop health, locate buried ancient ruins, predict the weather, and create high-resolution 3d maps of urban areas.
The production of these spacecraft is very low volume, and therefore very expensive.
Research into new lithography techniques for longer-lasting battery-powered consumer devices led to low-power electronics that allow the spacecraft I make to have greater processing power for a given energy budget.
Advancements in more powerful graphics chips for gaming have led to affordable (and yes, despite what you may think, they are affordable) GPUs used to process and rasterize the data the satellites I build gather.
Unreal Engine is used to build the tools needed to visualize the results.
The low-profile RF connectors that we use were invented for the specific purpose of shoving Bluetooth and Wi-Fi into thin consumer devices like laptops and tablets and they save us (noticeable and significant) weight.
I can't predict what will happen but I think I can predict what will not happen and given the size of the market I work in I assert that there is an exact and precise 0.0% chance that the resources needed to develop the technology I rely on every day would have been allocated in service of my market. It is too small.
Instead they were handed to me on a silver platter by the consumer electronics market. People playing smartphone games, and militaries trying to destroy each other, have directly led to what I do being possible.
> but I had to admit to myself at one point that, long-term, playing video games for any extended period makes me physically miserable and dumber.
I do not play video games, at least nothing newer than SNES games, but I know people who do and that sounds like a personal problem.
I feel like you only see articles like this written by game developers. Something about making video games for a living makes you question the value of what you’re creating.
The longer I remain a software engineer the more I hate relying on computers. Maybe the people closest to the technology are the ones to first realize how superficial and downright damaging these innovations actually are.
I'm kind of with you. In line with the article, it's more the application of computers that's the problem - instead of making the world a better place, it's a more distracting place.
Is it really worth shuffling digitized crap around at high speeds if the outcome is a worse human experience, because we're only applying it to the low-hanging fruit and therefore creating a slippery slope for ourselves?
The irony is that escaping into a simpler VR world with fewer distractions can help you deal with this, but I don't think it's the right move.
I'm also reminded of Lee Felsenstein's quote from Steven Levy's book Hackers: "You're doing all that for the computer!? What are you doing for the people!?"
Computers are nice. The issue lies with the kind of software that is being developed. I have a Kobo device that I put KOReader on, and it's miles better than the default reader because it actually lets you read books how you want to read them. Most of the apps I have focus on being tools for the use cases I have them for (which is why they are still installed) instead of trying to get in my way every now and then. I love nice visuals and other UX niceties, but I like usefulness more than anything. Some days, I'm really close to going back to my Arch installation. I'd prefer the glitches, and doing patches, over suffering another enshittification.
Pfft. If I'm making a game I'm at least bringing a small amount of enjoyment to people's lives. When working to increase the profit margins of tech companies, I'm probably just working towards bringing a small amount of misery to people's lives.
And the more immersive the technology is, the more the back-end should be open-sourced and put under the control of a free market of hosting companies and maintainers, that end-users and communities can pay.
Right now we have:
People
<=> Big Tech Server Farms
What we should have is:
People
<=> Communities
<=> Hosting and Service Providers
<=> Developers
<=> Conferences, Certifications
The second kind of ecosystem can liberate people.
Would you rather spend your years hooked up to Neuralink owned by Elon, or have a say in what you experience and mitigate the power dynamics?
Would you rather spend 9 hours a day in a metaverse owned by Zuck+Facebook (oh sorry, Meta), Elon+Twitter (oh sorry, X), Bezos+Amazon, Page+Google (oh sorry, Alphabet) or would you rather at least have your own Minecraft server? Or better yet, have an open platform that anyone can fork and build on, like Linux, Wordpress or Ethereum?
"Less immersive so you don't get hooked and you can choose when to play and when to not play", that's exactly how I market my terminal UI Klondike Solitaire to my friends. You can't even play it on mobile, needs a keyboard! No fireworks when you win, nothing, well it's a work in progress.
> The gaming industry has hijacked human play. Instead of an evolutionary learning and relaxation tool, it’s so immersive now that people get sucked into spending hundreds of hours playing games that don’t teach them anything valuable about the real world, and don’t relax them at all. I’m a gamer, and I of course enjoy immersing myself in a good AAA video game, but I had to admit to myself at one point that, long-term, playing video games for any extended period makes me physically miserable and dumber.
I don't think there's anything necessarily wrong with having played a game for 200 hours. Before video games, we had toys like Lego, Lincoln Logs, board games, etc. that children (and even some adults) spent just as long playing.
Video game addiction seems like what he's worrying about, but addiction doesn't require immersion. People get addicted to Tetris, and it's not because Tetris provides an immersive experience.
It's funny to read this headline as a general statement. I work for Strivr (https://www.strivr.com/), and we make immersive training. And the same forces that make immersive games more enthralling make immersive trainings significantly more powerful as learning experiences.
So I don't know, there's definitely such a thing as immersive experiences that are harmful, but I think for educational purposes more immersive experiences are strictly better than less immersive ones. (Provided that you want to learn as fast as possible and you want to focus on learning.) Now, immersive experiences are also more expensive (especially to get right) than less immersive ones, so cost is a factor but I think we definitely want more immersive education.
Browsing the public library one day in the aughts, I came across Edgar Rice Burroughs' Tarzan. Having been raised on the Disney version of Tarzan, I thought, "Ho! I should read this here original of the classic children's story!" So I checked it out, and I was shocked and surprised to encounter the most racist novel I have ever read. There is a reason almost no one reads the original Tarzan. It is not subtle about being Jim Crow, pro-colonialism pulp porn. I'm pretty sure Disney would prefer everybody forget that Burroughs existed, and that their animated and live-action mega-hits sprang out of the focus-grouped imagining of a childhood bare-necessities jungle dream. So yeah, Burroughs is a rough reference to lead with.
as others have already suggested, i think the contrast of immersion vs no immersion is not really the issue (only as much as immersion helps people get addicted to games). the focus on entertainment and making a profit vs doing something to advance society is.
immersive games are fine. games aren't failing to teach anything or help you relax because they are immersive. they are not teaching anything because they don't have a good story that would teach something, and they don't let you relax because specific game elements are putting you under stress.
to give two examples: i play elite dangerous. i can travel through space with a VR headset and i trade items between various space stations. very immersive and quite relaxing, until pirates come along and want to steal my cargo. that's stressful. and i wish i could turn that off.
likewise i may play some puzzle game, that is not immersive at all, and that could be relaxing if the gamedesigner hadn't added a timer that forces me to complete the puzzle in a certain time. that's stressful.
so i don't think immersion has any impact here, other than immersive games are just more attractive. both are a waste of time however if they are not educational or relaxing.
but now this issue of entertainment vs advancing society is a problem of the whole entertainment industry. very little content produced is entertaining as well as teaching something.
and so for me the question is not about how immersive the games are but how educational. i believe it is quite possible to create fully immersive but educational games.
looking back at elite dangerous again, for example, the universe in that game is modeled after our actual galaxy. where possible, stars are named by their actual astronomical names, and their looks are designed after what we know about them and i can take a star system in the game and look up its name on eg wikipedia and learn real facts about it. compare that to eve online where the universe is completely fictional. in elite dangerous i can fly around and get a sense of the relative distances of say alpha centauri vs the north pole star or of the position of our solar system vs the rest of the galaxy. (assuming that it is all accurate)
immersive and educational and relaxing (as long as i can avoid pirates)
My opinion is that the last 40 to 50 years have been all about entertainment, as a tool of control.
Are cars, planes, houses... any better? Only marginally. I don't even want sensors to tell me my car has this or that issue, lol.
But we have phones, the internet, TV programming on demand, YT, TikTok, games, VR, flat screens, etc. It's all about content and the delivery of that content.
Basically all development has been in entertainment while every other industry has pretty much stood still, or devolved (eg food quality).
By the end, it reads more like a virtuous justification for why his game's graphics are so basic.
I also do not buy the core argument. The game industry tends to ask for more and pay less exactly because people romanticize it. That is not to say the most talented engineers do not end up hired to produce bullshit, just that it probably is not games.
Lastly, the desire for faster and more realistic graphics is what commoditized massively parallel computer architectures. Keep chasing those FPS, the future of humanity depends on it!
I haven't played a video game that was better or more immersive than Tarzan. I hypothesize that artists, like Burroughs, could have more success than they are having today.
My favorite video game is Tetris, and I don't think it's a coincidence that it was produced by a society that placed consumer choice low on its list of priorities.
There's always someone complaining about good things being too good and people losing out on everything else because of it. What about letting people have some personal responsibility?
Instead of complaining about games being too immersive, doom scrolling being too addictive etc, how about reminding people to make conscious decisions on how they spend their time and attention?
We're living in the era of external responsibility, every problem one has is caused by someone else.
I think you realize how tone deaf you come off when you ignore the fact that social media and gaming companies have spent the past two decades hiring the greatest minds in psychology and engineering to hack minds and get them addicted to their products, but yeah, the problem is totally a lack of bootstraps.
My current job is in robotic space exploration, on a renderer in which to immerse a robot in a virtual world. This is just play in-silico, relying heavily on immersive technology developed for video games. In evolutionary terms, play's function is to learn in preparation for real events, which appears to work out on the technological frontier too.
There is a balance between immersive and ambient. A large number of people are working on the latter in terms of smart-home devices, advanced driver-assistance systems, and Fitbit-like quantified-self devices, to name a few sectors.
Maybe a challenge for ambient technology is that its nature is non-engaging. If it were engaging, it would be immersive and not ambient.
I don't think ambient and immersive are mutually exclusive. Ambient tech just widens the potential space in which someone can become immersed. Before ambient tech, I primarily interacted with my computers. With ambient tech, I interact with my computers and the environment in which those computers exist. Now, I'm one with my apartment. A thought leads to spoken words which lead to direct control of the devices around me.
The feedback loop building a simulation will always be OOM faster and cheaper than building the real thing. The barrier-to-entry for entering the loop becomes personal aptitude, not capital, and yet the EV is enormous. That is why our "best minds" are working in software, not hardware, not computer games.
I couldn’t disagree more. Been playing Gran Turismo 7 in PSVR2 with a steering wheel and pedals, the last few days. Why aren’t more games this immersive?
Getting sucked into a dream-world/story/abstraction is a feature, not a bug.
It's an attention-conservation strategy. Looking at an abstraction takes less precious attention than looking at reality. And abstractions are so malleable, so useful.
You can do it with a page of text. You can do it with no technological augmentation at all, just thinking.
It's so easy and useful that it's become a habit. The border between dream and reality is so blurred now that we've forgotten that there's a difference.
Same thing. A lot of stuff we don’t understand is like magic. Think of the old days: show someone an AR headset and they’d think it’s magic. Look closely at DNA; it’s a marvel of technology, miniaturized, but it occurs naturally. Nonetheless it looks and acts like advanced tech.
I rather missed the point where he explains why immersive games are bad. There's some hand-wavey bit about "[they] don’t teach them anything valuable about the real world, and don’t relax them at all" but that appears to be it. The first bit is beside the point and the second hugely debatable.
Similar criticisms could equally be levelled at engaging with any creative output - reading, listening to music etc. And I love being out in nature but it's not teaching me much.
His other point seems to be "talented engineers could be doing something more important" which is slightly more valid but then everyone who isn't saving starving babies could probably be doing something more important.
> I rather missed the point where he explains why immersive games are bad.
I think it's the fact that so many resources go into the final few percent of making games 100% realistic whereas there are other, more important issues that get largely ignored.
it is hilarious how the opening sentences written by this person refer to "Edgar Rice Borroughs", while pasted quotes and images refer to "Edgar Rice Burroughs".
it is easy to agree with this. however, the problem is most people just don't care and/or are lazy to do something more meaningful. and there is nothing inherently wrong with that.
I wonder if the author realizes that there are eight billion humans. They can't all work on the same thing. And the majority of people who are not engineers and who just work regular mindless jobs need entertainment.
Also we've had entertainment since the dawn of time. People don't just go home and stare at the wall for sixteen hours before going back to work. Almost none are going home from work to solve humanity's Real Problems(tm). Entertainment holds significant value to everyone plus or minus a few percent.
But really the author has a critique of capitalism and doesn't want to admit it. The complaint is that art is a product you're expected to pay for and consume, and individual artists almost can't even exist without spending all their time advertising. Imagine how much art would be produced if artists weren't forced to choose between making art and paying rent.
All in all, the author is yelling at clouds because other people have values he disagrees with and refuses to understand.
Of course they realize that, because that's not what the article is about. The problem being described is that the distribution of talent/intelligence across the problems civilization is facing is uneven. The author's perspective is that individual interests and pay/profit incentives bias the distribution towards fun boom/bust industries (e.g. video games/VR/entertainment), and more critical industries may be losing out on this talent as a result. These are problems that capitalism can't naturally solve, because not every problem needing to be solved can be crafted into a profit-driven market. More than a "critique of capitalism", the author has noticed his industry is filling up with engineers/competition.
It looks like you're countering with the "it's always been the case" argument. I use it often myself, but I think it's good to realize when it's stretched too far.
Yes, I'm sure there were some people in the past who said that reading books is an indulgent waste of time. I'm sure you could find articles in old newspapers.
That in itself doesn't mean that working on a VR headset technology is as meaningful as working on a more sustainable energy source or developing software for cancer research, does it? Just because you find a similarity with something that happened in the past doesn't mean you can just abandon all critical thought.
> I think it's good to realize when it's stretched too far.
Who is the arbiter of when it's stretched too far? I am working with the data we have to form an empirical expectation. Speculation is fine, but if we are going to make an effort to ground it in reality, I feel like prior art is the best bet we can make.
> Just because you find a similarity with something that happened in the past doesn't mean you can just abandon all critical thought.
I think I am actually applying critical thought. I am challenging a narrative that doesn't seem to have a precedent and looking at examples from the past that were similar unprecedented technological changes. This seems to be a reasonable approach to set reasonable expectations. Of course I could be wrong, but I feel like at least my pitch is based on some historic data.
FYI, the concern about book reading wasn't just an aside in a newspaper; it was a cultural concern shared by many.
I'm not disagreeing, but I believe it is a better use of ones time to prioritize things that spark personal significance rather than attempting to quantify/compare meaningfulness universally.
I would never listen to someone who proposes reallocating high-level talent as an important and solvable problem where the solution is to tell those people their passions are a waste of time. I also don't believe this person understands the talent market, as they quote the "Many of the most talented artists of our time work in advertising" nonsense. Yes, there are smart people working in advertising, but it's no deeper than that. There are other spaces and industries that have more brilliant minds.
The truth is, you don't know with certainty if working on VR technology is more meaningful or not.