John Carmack on the Joe Rogan Experience [video] (youtube.com)
741 points by sexy_seedbox on Aug 29, 2019 | 275 comments



On open sourcing Wolfenstein, Doom, Quake: https://youtu.be/udlMSe5-zP8?t=591

Neuralink: https://youtu.be/udlMSe5-zP8?t=2523

Artificial General Intelligence: https://youtu.be/udlMSe5-zP8?t=2778

Quantum Computing: https://youtu.be/udlMSe5-zP8?t=3106

“Engineering is figuring out how to do what you want with what you’ve actually got”: https://youtu.be/udlMSe5-zP8?t=3190

End of Moore’s Law / On CPU architecture: https://youtu.be/udlMSe5-zP8?t=3860

5G and streaming (games & video): https://youtu.be/udlMSe5-zP8?t=4288

edit: as already mentioned, there are a lot of topics covered, some for just a few sentences, but the conversation flows; worth watching the whole thing


Carmack briefly mentions 5G and streaming games. I think there is a good economic reason why 5G gaming is eventually coming (it may take some time): the low latency enables it.

If you think about the price of a gaming PC or console, there is a huge discrepancy between the budget of a hardcore gaming enthusiast and a casual gamer. It would be nice to get a $5,000 gaming tower into every house, but you won't. Many casual gamers would rent $5,000 - $10,000 worth of gaming hardware for a few hours a week if it were simple. The only way to get bleeding-edge high-end gaming to the masses is to put the GPU and some other parts at the edge (or at least within the same city) and stream or partially stream the game to cheaper devices and screens.

Consider $5,000 worth of bleeding-edge hardware that costs $1/hour to run. If you rent it out at $8/hour and it's only booked for 5 hours per day of gaming, the hardware pays for itself in about 5 months. It could be rented out for other workloads in the meantime. Cloudflare, what do you think?
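A rough sketch of that payback arithmetic, using only the hypothetical numbers above (Python, purely illustrative):

    # Back-of-the-envelope payback for rented gaming hardware (hypothetical figures from above)
    hw_cost = 5000.0          # upfront hardware cost, $
    run_cost_per_hour = 1.0   # power/cooling/bandwidth, $
    rent_per_hour = 8.0       # what the customer pays, $
    hours_per_day = 5         # gaming utilization only

    margin_per_day = (rent_per_hour - run_cost_per_hour) * hours_per_day   # $35/day
    payback_days = hw_cost / margin_per_day
    print(f"payback in ~{payback_days:.0f} days (~{payback_days / 30:.1f} months)")
    # -> ~143 days, i.e. roughly 5 months, before counting any off-peak rentals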

I could see a market emerging similar to old-school VHS/DVD/game rentals. There is a limited computational resource near you and you can rent it for gaming. If it's all taken (weekend evenings), the 'shelves' are empty. On working/school days and hours you get the same thing cheaper.

It probably happens in Japan, South Korea and Nordic countries first.


Do you really get significant cost savings? You can't really use hardware from other time zones / continents, because the latency would be very visible. So the hardware has to be reasonably close to the users.

Then I would imagine the problem is that everyone is playing video games at about the same time, i.e. in the evening or at certain times of the weekend. So you need to provision for peak usage.

So to me the only benefit to renting is if there are lots of occasional gamers who do not play every day / every week, even accounting for peaks where they would all suddenly play at the same time on certain days (Christmas, long weekends, the release of a new game, etc.).

And then of course because the hardware progresses so quickly you need to amortise the hardware pretty fast.

I haven't seen the actual economics but I am surprised it would be much cheaper for consumers.


You might be able to make it work if you can get a gaming tier on a cloud platform. EC2 already has GPU focused instances.

I can see things working out, but still being expensive, if the hardware is also being used for scientific computing and CGI rendering.

Although I'm not sure if gaming hardware can run 24/7.


I first heard about this a year ago, and was also amazed that the numbers could work and that RDP-style lag wouldn't make it unviable. I can't find the reddit post at the moment, but we had a few exchanges where it was explained that latency is not an issue, and availability is not an issue.

This is still a fairly niche market, and while there are certainly evening peak times, younger kids have a lot more free time than you might think.

How well the company hosting this is doing, I couldn't tell you, but the product itself seemed to work and work well, so much so that I have had it on my to-do list to try and experience it for myself.


The speed of light is really fast, and Google has fiber connecting all of its data centers. I think the set of data centers viable for game streaming overlaps more than you think; it's limited by how much bandwidth Google can spare between data centers.

The reason most services need to be located close to clients is because you want to avoid data transit over the open internet. You want as little open internet as possible between you and the client. Traffic that's internal to Google's networks doesn't have that problem. You can use compute / gpu in US-west and transit that data to you via US-east and the additional latency would be measured in nanoseconds.


With a straight shot, latency from one coast to the other is at least 50 ms [0]. A more typical route is on the order of 80 ms.

That's definitely noticeable for latency-sensitive actions. I recently switched a server from Oregon to New Mexico, and I notice the latency increase with mosh.

Moreover, there's not a lot of timezone difference between the east and west coasts of the US. Going someplace like Europe is more like 180 ms.

I've played games with a ping like that, but a lot of the ping was my wifi. Doubling the latency would not make the game a better experience.

This could work well for certain types of game, those that are a little less latency-sensitive. But in general the latency is still a big issue.

[0]: speed of light in fiber is ~2e8 m/s (about 2/3 c); it's around 3000 miles between coasts, so roughly 24 ms one way and ~50 ms round trip
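A quick sketch of that estimate, using the rough figures above (Python, illustrative only):

    # Propagation delay between US coasts over fiber (rough figures from the comment above)
    distance_m = 3000 * 1609.0   # ~3000 miles in meters
    c_fiber = 2.0e8              # speed of light in fiber, roughly 2/3 of c
    one_way_ms = distance_m / c_fiber * 1000
    print(f"one way ~{one_way_ms:.0f} ms, round trip ~{2 * one_way_ms:.0f} ms")
    # -> ~24 ms one way, ~48 ms round trip; real routes add switching and queueing on top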


Wow, I never realized my intuition was off by so much. Speed of light in vacuum is still about 16ms between coasts, which is orders of magnitude worse than I thought. I thought most latency came from routers and switches but a significant portion of delay is indeed speed of light.


Exactly. And 60fps (which is not that high for many games) gives you 16.67ms per frame. At a 50ms delay you're already 3 frames behind.
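The same arithmetic as a tiny sketch (Python, figures from the comments above):

    # Frames of delay added by a given latency at a given frame rate
    fps = 60
    frame_budget_ms = 1000 / fps              # ~16.67 ms per frame at 60fps
    added_latency_ms = 50
    frames_behind = added_latency_ms / frame_budget_ms
    print(f"{frame_budget_ms:.2f} ms/frame -> ~{frames_behind:.0f} frames behind")
    # -> 3 frames at 60fps; at 144fps the same 50 ms would be ~7 frames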


Not all games are that latency dependent. If I am playing Civilization I probably won't care that much where the server is.


Neither will you need to rent an expensive gaming rig by the hour, though.


So.... Google Stadia? Except it's more like $10/month for 4k gaming.


There seems to be confusion about what hardcore gaming means: competitive gaming, a high-end user experience, or bragging-rights, spec-based enthusiasm.

My benchmark would be the Xbox One X, which is perfectly capable of 4K 60 fps gaming with HDR and Atmos; try to get that running stably on any PC, good luck. That's $400 for the console and under $2,000 in total if you also need to get an OLED TV and an Atmos-enabled speaker set, which aren't exclusively gaming budget.

The only shortcoming there is the absence of fast current generation NVMe mass storage and RTX-enabled GPU.

Reasonably high-end PC gaming is possible well below $2,000 or even $1,000; even $500 will get you decent performance for casual competitive gameplay.

The priciness of gaming setups comes from the insane demand of getting an edge at 4K 120+ fps in FPS shooters. That bracket can't be won over by any streaming service, given the physical limitations, for the next 10 years or so at least.


The Xbox One X doesn't maintain those specs. The resolution drops during gameplay.

You actually need a machine north of $2k to support 4K at 60fps.


> That's 400 for the console

In reality it is $400 for the console + $60/year for online gaming, which means it is about $640 for 4 years of gaming.

Others have mentioned that you can't get 60fps @ 4K, but I have never played on an Xbox One X, so I can't attest to that.


You are not correct. The hardware in gaming PCs and consoles is more or less the same; the only advantage is that game devs can optimize for a specific hardware setup instead of all possible combinations like on PC.

If a console is running 4K@60Hz, the level of detail is probably somewhere around low settings compared to the PC version. For that you don't need a $2,000 PC; something much cheaper would be enough. On top of that you can use it, well, as a PC. But yes, consoles will be cheaper, just not by that much, and their purpose is much, much narrower.


> My benchmark would be the XBox One X, which is perfectly capable of 4k 60 fps gaming

Going to disagree with you here. I don't think there is even one game that runs at full 4K (not CBR) and maintains 60fps. If there is one I would love to know so I can check it out but so far everything I play on my Xbox One X is either running at well below 4K in order to maintain 60fps or runs at 30fps with 4K CBR. Very few games are true 4K.


>try to get that stable running with any PC

Easy. It will obviously be much more expensive than a console, but it's perfectly doable if you have the budget for it.


I have the budget, but not the time. Unless by budget you mean paying your own household QA technician. Dolby Atmos over HDMI plus HDR doesn't play well with NVIDIA drivers, or only with very few of them, which in turn tends to break my VR.

https://www.reddit.com/r/nvidia/comments/ao4c0u/anyone_with_...


I’m not so sure you know what you’re talking about.

4K and >120 FPS are incompatible goals. If you are optimizing for frames in a competitive title the first thing you will do is turn total render target size down to the minimum. Conversely, if you want to render at max quality, those pixels look a lot nicer as resolution than as frames.

If you’re a real pro, you can spend five times as much money to hit 80% of cutting-edge performance on both metrics simultaneously. You hardcore gamer, you.

People who talk like that are usually kids spending their parents’ money.

FYI, genuine demand for higher gaming performance is mostly being driven by VR, where sentences like “8K at 144hz” aren’t just big numbers.


Me east German born orphan peasant gamer has of course no idea what words mean. ¯\_(ツ)_/¯


My kid plays 1-5 hours of Xbox every day. There’s no way I could pay $1/hour.

Renting game consoles only works for casual gamers or super hard core people who want $5k rigs. Casual gamers don’t care, I think, and use their 5 year old iPhone. And there aren’t a ton of super high end gamers. And I hang out in gaming cafes where people pay $6/hour.


Thankfully, actual cloud gaming services are way cheaper than $1/hour. Stadia, for example, has no recurring monthly cost for 1080p gaming; you just have to pay for the games.


$1/hr at 5 hours per day is ~$150/month. I have no idea what you can afford but that’s basically the price of cable these days.


It's also the entire cost of a console these days. The current status quo is a better comparison than a different service (cable) where cutting it out entirely is a meme due to its ludicrous cost and subpar value.


It’s also $1800/year. I can buy an Xbox or ps4 for $500. I can buy a gaming pc for $2k.

Paying $1800/year per person forever is a bad deal.
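A minimal sketch of that comparison, using the hypothetical prices quoted in this thread (Python, illustrative only):

    # Streaming at $1/hour vs. buying hardware outright (prices quoted in this thread)
    hours_per_day = 5
    years = 4                                            # rough useful life of a console
    streaming_total = 1.0 * hours_per_day * 365 * years  # ~$7,300
    console_cost = 500.0
    gaming_pc_cost = 2000.0
    print(f"streaming: ~${streaming_total:,.0f} over {years} years")
    print(f"console: ${console_cost:,.0f}, gaming PC: ${gaming_pc_cost:,.0f}")
    # -> ~$7,300 vs. $500-$2,000, which is the parent's point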


Those countries already have fibre - are you saying 5G is better than fibre for streaming gaming?


Even in the wonderland of fiber, South Korea, fiber to the home has slowed down and fiber to the building is common. Korea Telecom still has lots of coaxial setups that it stretches to gigabit speeds using 1:N connections. Gigabit penetration in Seoul was still below 50% a few years back, and much lower elsewhere.

Btw, fiber has no latency advantage. What you need is servers at the edge in both use cases.


Fiber has a latency advantage in real-world usage. In the common scenario of multiple computers and users sharing a single connection, if one of them does something latency-sensitive (gaming or video chat), they will be bothered a lot less by others' heavy bandwidth use on fiber than on ADSL, or to a lesser extent cable.

That's because heavy bandwidth use (downloads, video streaming) will saturate a low-bandwidth link and packets will drop. Or bufferbloat will increase latency without dropping packets, which amounts to the same thing for latency-sensitive usage.

With multiple video streams going on in the average household being a common case nowadays, it's nice to have enough bandwidth.


Here are these timestamps on one clickable video, feel free to add your own: https://use.mindstamp.io/video/XFnaNsKJ?notes=1


this is great!


I'd be very curious to see an Ask HN on the premise of “Engineering is figuring out how to do what you want with what you’ve actually got”, but in the context of understanding what you can actually manage to learn or build with your current abilities and skills. It could be a very interesting take on imposter syndrome and understanding effective ways to learn.


> “Engineering is figuring out how to do what you want with what you’ve actually got”

along the same lines as... _life is figuring out what to do with what you’ve got_.


John is like many of the people I've met who have no problem conveying a lot of dense information quickly and easily while avoiding speech patterns that are overly technical and difficult to understand. Some of the most intelligent people I know are able to do that. It's a very underrated skill that I wish I had. I forever stumble on words when I'm explaining something complicated/technical. It's like a cognitive dissonance I'm fighting between using technical jargon and dumbing things down to layman's terms. I'm sure that, on the other end, it is perceived as condescending or as though I don't know what I'm talking about.


Don't forget, most of the things you see in media, if not fully scripted, are at least outlined ahead of time. Also, he's talked about most of it over and over and over, so he's had time to refine. I'm a terrible conversationalist, but there are things I can discuss this way; they're just generally useless and boring to most of the world, so no one is calling me in for an interview.


That is a great point and I agree. It's easy for the JRE "it's just a hang for a couple hours" framing to make it seem like he is just talking on the fly. Sometimes the curtain gets pulled back and people pull out and mention their notes on the show. I could definitely tell there were some bullet points that John was trying to cover, and I'm sure these are topics he's covered in front of audiences when he gives lectures/appearances. But there is a fluidity to his responses that suggests he does indeed have a great talent for conveying complicated subjects with ease.


Personally I find him a little jarring to listen to. Funnily enough, he actually reminded me of reading Hacker News comment sections. He's just not conversational enough for me. I don't think media interviews are his sweet spot.


This is often said about Richard Feynman and how he could distill incredibly complex questions down into simple thoughtful answers that left you satisfied but more curious. I think we get a capable (but not quite the same) modern version with Neil deGrasse Tyson.

It's definitely something you can learn though, it just takes practice. Take a complex topic and summarize it to yourself in shorter and shorter sentences. Or browse ELI5 (explain like I'm five) on Reddit for some good amateur responses.


Tyson's explanations are often wrong. Possibly the most inaccurate pop-science celebrity who has ever lived.

Not remotely in the same league as Richard Feynman.


One issue I have is that I can't bullshit people. I had a friend who I was trying to explain things to, and often I made it too complicated. Another friend explained it to her satisfaction, but I thought the particular explanation given was deceiving. And yet she didn't care. I think it is possible to be honest and yet simplify things to whatever level, but I'd rather fail at explaining than give a false explanation.


I find it's good to let people know that 'it's actually a bit more complicated than this, but...'


To illustrate this I like to use the analogy of very basic physics.

Even someone who never took calculus can usually grasp the concept of first learning ideal velocity and acceleration, then adding details like friction.


Honestly this reminds me of posters here complaining that an article/headline is clickbait, when it's really just an attempt to simplify something for more people. People assume it's malicious, when it's really doing what your friend seems to be good at - reducing an explanation in a way that might not be as accurate, but gets the general point across more easily.


But there's a difference between simplifying and changing. This is a bit of a skill in itself: presenting simplified versions of information without misleading the learner.

When learning, I don't appreciate reading a description of something, thinking about it, generating my own conclusions, then finally reading forward to find I had been misled not by my own fault but by faulty explanations.

As an example, a book I read on SQL was narrated from the point of view of an employee at a startup and stated that indexes on views were efficient, but then eventually backtracked on that statement, saying that this is only true for materialized views.


Your account name is displayed in green. Is that because you have 1 karma point and are a new user?


I'm similar, but have learned to just be more iterative when explaining complex topics.

Start with the simplest conveyance, which is usually riddled with the most bullshit. If you don't lose them, add the corrections; if there are too many, only add some of them. Rinse, repeat.


To be clear, I don't mind simplifications, metaphors, etc. I just don't like _bad_ metaphors that are chosen just because they're satisfying.

Bad simplification: "The CPU is like the brain of your computer"

Better simplification: "The CPU is like the conductor of your computer"

You may not think my "better simplification" is so great either. The point is, somebody may be satisfied with the "brain" answer because it feels right, but (IMO) it doesn't actually give you a better understanding of the situation.


One of my favorite personal examples of the results of deceivingly pat explanations is the time when, in undergrad, an English major got on my case for "not understanding black holes." They had taken a general physics course and I was working my way through upper division.


I'm sorry to inform you you're not cut out for middle management.


He also seems so happy to work on what he's working on and to talk about it. It must be fun being him.


How did I leave this comment here without any memory of doing it?


It's really great how he pushed so hard to open source the previous generation of each game when releasing a new one. It's hard to tell how much that pushed the industry forward, but I'm sure a lot of people became better C programmers as a result.

The "smell in VR" kits in this interview were interesting. I hadn't realized so much progress had been made on that front (or that it would take so much to make it as convincing as visual input).


> but I'm sure a lot of people became better C programmers as a result.

A lot of people also became professional level designers because of his work - Doom was innovative, among many other things, in that it was designed from the ground up for extension (modding).


Yup. I worked on a few doomed game projects in the late '90s. The Quake tools were where a lot of people started learning. Unreal was a huge step as well, as the editor had a much better UX.

I shifted out of doing level design work, but some of the people I worked with ended up on the Medal of Honor team.


Indeed, and Valve hired a bunch of people from the Doom/Quake modding scene for Half-Life 1.


Valve's engine was also based on, and licensed from, Quake's.

https://en.wikipedia.org/wiki/GoldSrc


>The "smell in VR" kits in this interview were interesting. I hadn't realized so much progress had been made on that front (or that it would take so much to make it as convincing as visual input)

Well, we had smell-o-vision for a loooong time...

https://en.wikipedia.org/wiki/Smell-O-Vision


Apparently your sense of smell has a short pathway to your brain, so an olfactory UI could be a good thing. How about a burning smell when your server's hitting max load :)


He talks about AGI at https://youtu.be/udlMSe5-zP8?t=2776

I wonder if all the really smart people who think AGI is around the corner know something I don't. Well, clearly they know a lot of things that I don't, but I wonder if there's some decisive piece of information I'm missing. I'm a "strict materialist" too, but that doesn't mean I think we can build a brain or a sun or a planet or etc within X years, it just means that I think it's technically possible to build those things.

I don't see how we get from "neural net that's really good at identifying objects" to "general intelligence". The emphasis on computational power also makes no sense to me. If we had infinite compute today, what steps would you take to build AGI? Does anyone have any good ideas about that?

Sometimes I wonder if AGI (and the concept of a "technological singularity") isn't just "intelligent design for people with north of 140 IQ". Maybe really smart people tend to develop a blindspot for really hard problems (because they've solved so many of them so effectively).


I think you have a point.

AGI is a scientific problem of the hardest kind, not an engineering problem where you just use existing knowledge to build better and better things.

Marvin Minsky once said that in mathematics just five axioms are enough to provide the amount of complexity that overwhelms the best minds for centuries. AGI could be a messy practical problem that depends on 10 or 25 fundamental 'axioms' that work together to produce general intelligence. "I bet the human brain is a kludge." - Marvin Minsky

The idea that if many people think hard about this problem, it will be solved in our lifetime is prevalent. It's not true in math and physics, so why would AI be any different? Progress is made, but you can't know whether there will be a breakthrough tomorrow or 100 years from now. Just adding more computational capability is not going to solve AI.

Currently it's the engineering applications and the use of the science that are exploding and getting funded. In fact, I think some of the best brains are being lured from fundamental research into applied science with high pay and resources. What the current state of the art can do has not yet been fully utilized in the economy, and this brings in the investment and momentum.


A similar parallel is the enthusiasm for self-driving cars. There was an initial optimism (or hype) fueled by the success of DL on perception problems. But conflating solving perception with the larger, more general problem of self-driving leads to an overly optimistic bias.

One of the main takeaways from this year's North American International Auto Show was that the manufacturers are reluctantly realizing the real scope of the problem and trying to temper expectations. [0]

And self-driving cars are still a problem orders of magnitude simpler than AGI.

[0] https://www.nytimes.com/2019/07/17/business/self-driving-aut...


Re: Comparing self-driving cars to AGI: It's counterintuitive, but depending how versatile the car is meant to be, the problems might actually be pretty close in difficulty.

If the self-driving car has no limits on versatility, then, given an oracle for solving the self-driving car problem, you could use that to build an agent that answers arbitrary YES-NO questions. Namely: feed the car fake input so it thinks it has driven to a fork in the road and there's a road-sign saying "If the answer to the following question is YES then the left road is closed, otherwise the right road is closed."

Compare with e.g. proofs that the C++ compiler is Turing complete. These proofs involve feeding the C++ compiler extremely unusual programs that would never actually come up organically. But that doesn't invalidate the proof that the C++ compiler is Turing complete.
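A toy sketch of that reduction, assuming a hypothetical drive(scene) oracle that returns which road the car takes (nothing here is a real API; Python, illustrative only):

    # Reduction sketch: a fully general self-driving oracle can answer arbitrary yes/no questions
    def answer_yes_no(question: str, drive) -> bool:
        """drive(scene) is a hypothetical oracle returning 'left' or 'right' at a fork."""
        scene = (
            "Fork in the road. Sign: 'If the answer to the following question is YES, "
            f"the left road is closed; otherwise the right road is closed.' Question: {question}"
        )
        # A car that truly handles any scenario must read the sign and reason about it,
        # so the road it picks leaks the answer to the embedded question.
        return drive(scene) == "right"   # right road taken => left was closed => answer is YES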


That's the problem with all of the fatuous interpretations floating around of "level 5" self-driving.

"It has to be able to handle any possible conceivable scenario without human assistance" so people ask things like "will a self-driving car be able to change its own tyre in case of a flat" and "will a self-driving car be able to defend the Earth from an extraterrestrial invasion in order to get to its destination".

They need to update the official definition of level 5 to "must be able to handle any situation that an average human driver could reasonably handle without getting out of the vehicle."

(Although the "level 1" - "level 5" scale is a terrible way to describe autonomous vehicles in any case and needs to be replaced with a measure of how long it's safe for the vehicle to operate without human supervision.)


Very well put. And you could argue that it is not as much a stretch as it seems.

Self driving cars would realistically have to keep functioning in situations where arbitrary communication with humans is required (which happens daily), which tends to turn into an AI-hard problem quite quickly.


Good points.

I was thinking in terms of "minimum viable product" for self-driving cars, which I have a hunch will be of limited versatility compared to what you describe. To have a truly self-driving car as capable as humans in most situations, you may be right.


They already made a minimum viable product self-driving car. It's called a "train".


I know this is meant jokingly, but for many cities (especially relatively remote ones), trains are not considered viable because they have strictly defined routes.

Many cities choose to forgo trains for buses in large part due to the lower upfront costs and the ability to change routes as the needs of the populace change.


Also, we know what a self-driving car is, how to recognize one, and even how to measure it.


>And self-driving cars is still a problem orders of magnitude simpler than AGI.

You sure? It might very well be only a single order of magnitude harder, or not any harder, given that solving all the problems of self-driving even delves into questions of ethics at times (who do I endanger in this lose-lose situation, etc.).


I could certainly be wrong, it's just speculation on my part on the assumption that self-driving issues would be a smaller subset of AGI problems.

I actually don't think the ethics part is all that hard if (and that's a big if) there can be an agreement on a standard approach. An example would be a utilitarian model, but this often is not compatible with egalitarian ethics. This approach reeks of technocracy but it's certainly a solvable problem.


Nature has already solved AGI. Now we just need to reverse engineer it.


"just"?

Neuroscience is full of problems of the hardest kind.


Yes, one of Paul Allen's gifts to the world should help:

https://alleninstitute.org/


Unfortunately, Von Neumann is long dead, so we only have damaged approximations of AGI to work with.


We can say this about anything in the universe though.


Nah, not really; there is loads of stuff invented by humans that, as far as we know, did not appear in the universe before we made it. For example, I'm unaware of any natural implementation of a free-spinning wheel attached to an axle.


I agree with you that AGI is not around the corner. I think the people who do believe that are generally falling for a behavioral bias. They see the advances in previously difficult problems, and extrapolate that progress forward, when in reality we are likely to come against significant hurdles before we get to AGI.

Also, seeing computers perform tasks they haven't done before can convince people that the model behind the scenes is closer to AGI than it really is. The fact that deep neural networks are very hard to decipher only furthers the mystical nature of the "intelligence" of the model.

Also, tasks like playing StarCraft are very impressive, but are not very close to true AGI in my opinion. Perhaps there's a more formal definition that I'm not aware of, but in my mind, AGI is not being good at playing StarCraft; AGI is deciding to learn to play StarCraft in the first place.

That's my 2 cents, anyways.


It's like if someone watches "2001: A Space Odyssey" and takes HAL as the model for AI, so they work really hard and create a computer capable of playing chess like in the movie. "Well, that's not really the essence of HAL, it's just that HAL happened to play chess in one scene." So then they work really hard some more, and extend the computer to be able to recognize human-drawn sketches. "Well, that's still not really the essence of HAL, it's just that HAL did that in one particular scene." So they work still harder and create Siri with HAL's voice, and improve its conversation skills until it can duplicate the conversations from the film (but it still breaks down in simple edge cases that aren't in the film). "Well, that's still not the essence of HAL..."

The Greeks observed these limitations thousands of years ago. Below is an excerpt from Plato's "Theaetetus":

Socrates: That is certainly a frank and indeed a generous answer, my dear lad. I asked you for one thing [a definition of "knowledge"] and you have given me many; I wanted something simple, and I have got a variety.

Theaetetus: And what does that mean, Socrates?

Socrates: Nothing, I dare say. But I'll tell you what I think. When you talk about cobbling, you mean just knowledge of the making of shoes?

Theaetetus: Yes, that's all I mean by it.

Socrates: And when you talk about carpentering, you mean simply the knowledge of the making of wooden furniture?

Theaetetus: Yes, that's all I mean, again.

Socrates: And in both cases you are putting into your definition what the knowledge is of?

Theaetetus: Yes.

Socrates: But that is not what you were asked, Theaetetus. You were not asked to say what one may have knowledge of, or how many branches of knowledge there are. It was not with any idea of counting these up that the question was asked; we wanted to know what knowledge itself is.--Or am I talking nonsense?


This is a great example of one of the two fundamental biases Kahneman identifies in Thinking, Fast and Slow: answering a difficult question by replacing it with a simpler one.

The other one (also perhaps relevant to the general topic of this thread): WYSIATI (What You See Is All There Is).


This is a good example of Nassim Taleb's Ludic Fallacy: https://en.wikipedia.org/wiki/Ludic_fallacy


The problem here seems to be that you think the state of the art resembles “being really good at identifying objects”. This makes it clear that you are not keeping up with the frontier. I recommend looking up DeepMind’s 2019 papers, they are easily discoverable.

When you read them, you will probably update in the direction of “AGI soon”. It’s possible that you won’t see what the big deal is, I suppose. I personally see what Carmack and others see, a feasible path to generality, and even some specific promising precursors to generality.

It also helps to be familiar with the most current cognitive neuroscience papers, but that’s asking a lot.


You're going to have to be more specific about what constitutes a major advance forwards. So far DeepMind's work (while impressive) has proven to be very brittle, and not transferable without extensive "fine-tuning". Previous attempts at transfer learning have been mixed to say the least.

I'm going to be pessimistic and say that AGI is probably decades away (if not centuries away for a human-like AGI). There are clearly many biological aspects of the brain that we do not understand today, and likely will not be able to replicate without far more advanced medical imaging techniques.


What are some of the highlights from DeepMind that gives you optimism for a path to AGI? I am not seeing it, personally.


Is there anything in the structure of the brain that makes you think "of course this is an AGI"? For me, the answer is no. That's why I think progress on narrow AI and AGI is going to be unpredictable. Nobody will see the arrival of an AGI until it's here.


Some also think that nobody will see the arrival of an AGI even after it’s here, because after arrival there will be no one left to see.


Various meta-learning approaches and advancements in unsupervised learning and one-shot learning.


I would like to know as well


Can you explain the general path in layman's terms in a few sentences? As far as I can tell AI is really good at analyzing large datasets and recognizing patterns. It can then implement processes based on what it's learned from those patterns. It all seems to be very specific and human directed.



I'm not well versed in this area, but from my perspective, I see this as the fundamental problem:

Every action my brain takes is made of two components: (1) the desired outcome of the thought, and (2) the computation required to achieve that outcome. No matter how well computers can solve for 2, I have no idea how they'd manage solving for 1. This is because in order to think at all, I have to have a desire to think that thought, and that is a function of me being an organism that wants to live, eat, sleep, etc.

So for me, I just wonder how we're going to replicate volition itself. That's a vastly different, and vastly more complicated, problem.


It isn't hard to give an AI a goal, but it is hard to do so safely. As a toy example, we could design an AI that treated, say, reducing carbon emissions as its goal, just as you treat eating and sleeping as yours. The issue is that the sub-goals to accomplish that top-level goal might contain things we didn't account for, say destroying carbon-emitting technology and/or the people that use it.


Humans have many basic goals that are very dangerous when isolated in that way. It seems to me that nature didn't care (and of course, can't care) whether it was dangerous at all when coming up with intelligence. Maybe we shouldn't either if we want to succeed in replicating it.

Worrying about some apocalypse seems counterproductive to me.


I agree that there's some aspect of volition, desire, the creative process...whatever you want to call that aspect of human thought that seems to arise de novo.

But speaking of de novo, I'm not at all sure that a desire to think a thought is required in order to think. The opposite seems closer: the less one tries to think, the more one ends up thinking.

I'm pivoting from your point here, but I see that bit as the hurdle we're not close to overcoming. We are likely missing huge pieces of the puzzle when it comes to understanding human "intelligence" (and intelligence itself is not the full picture). With such a limited understanding, a replication or full superseding in the near future seems unlikely. Perhaps the blind spot of the experts, as /u/leftyted alluded to, is that their modest success so far has generated a reality-distorting hubris.


It's like the more advanced stage of "a little bit of information is a dangerous thing".


If I'm remembering my terms right, "embodied AI" is one theory or group of theories about interaction with an environment creating the volition necessary before generalized AI can be created.


There's active research in Model-Based RL right now that tries to tackle 1) and 2) together.


I think people also have a very hard time conceptualizing the amount of time it took to evolve human intelligence. You're talking literally hundreds of millions of years from the first nerve tissues to modern human brains. I understand that we're consciously designing these systems rather than evolving them, but nevertheless that's an almost incomprehensible amount of trial and error and "hacking" designs together, on top of the fact that our understanding of how our brains work is still incomplete.


I thought you were going to go the other direction with your first sentence. It took some 4 billion years to go from the first cell to the first Homo sapiens. Maybe another 400,000 years to get from that to how we are today.

That means 0.01% of the timeline was all it took for us to differentiate ourselves from regular animals who aren't a threat to the planet.

0.01% of 100 years is 3 days.
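A quick check of those proportions (Python, rough figures from the comment above):

    # Rough timeline proportions from the comment above
    total_years = 4e9        # first cell to first Homo sapiens (approx.)
    human_years = 400_000    # Homo sapiens to today (approx.)
    fraction = human_years / total_years
    print(f"{fraction:.2%} of the timeline")                    # -> 0.01%
    print(f"{fraction * 100 * 365:.1f} days out of 100 years")  # -> ~3.7 days, i.e. the "3 days" above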


That's a very anthropocentric view, and not how the timeline works. Unicellular organisms are also smart in a way computers can't exactly replicate. They hunt, eat, sense their environment, reproduce when convenient, etc. All of these are also intelligent behaviours.


And just 4 hours for AlphaZero to teach itself chess, and beat every human and computer program ever created....

DNA sequencing went from $3b per genome to $600, in about 30 years, much, much faster than Moore's "law".


Why do you say "much, much faster"? $600 to $3 billion is about the same as going 2^9 (512) to 2^32 (4.3B), which requires 23 doublings. Moore's law initially[1] specified a doubling every year (30 years would be 30 doublings), then was revised to every two years (15 doublings), but is often interpreted as doubling every 18 months (20 doublings). Seems pretty close to me!

[1] https://en.wikipedia.org/wiki/Moore%27s_law
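The doubling arithmetic as a quick sketch (Python, figures from the comments above):

    import math

    # Sequencing-cost improvement vs. Moore's-law doubling rates
    factor = 3e9 / 600               # ~5,000,000x cheaper
    doublings = math.log2(factor)    # ~22.3 doublings
    years = 30
    print(f"{factor:,.0f}x = {doublings:.1f} doublings over {years} years")
    for period in (1.0, 1.5, 2.0):   # candidate Moore's-law doubling periods, in years
        print(f"  doubling every {period} years -> {years / period:.0f} doublings")
    # -> ~22 doublings needed vs. 30 / 20 / 15 available: the same ballpark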


Flight took a while to evolve too.


I don't think that's the same. We're not trying to reverse engineer flight. We're trying to reverse engineer how we reverse engineered flight.


The thing is, airplanes are not based on reverse-engineered birds. Cutting edge prototypes still struggle to imitate bird flight, because as it turns out big jet turbines are easier to build. It could very well be easier to engineer a "big intelligence turbine" than it would be to make an imitation brain.


> It could very well be easier to engineer a "big intelligence turbine"

Is that not what a computer is? We have continuously tried and failed to create machines that think, react, and learn like the brains of living things, and instead managed to create machines that manage to simulate or even surpass the capabilities of brains in some contexts, while still completely failing in others.


A difference here is that flight evolved and re-evolved over and over. General intelligence of the scale and sort that humans have appears to have evolved just once (that we know of, and very likely in all of history).


That's influenced by the anthropic principle. The first species to obtain human-level intelligence is going to have to be the one that invents AI, and here we are.


Also as a strict materialist, after reading estimates from lots of different people in lots of different disciplines, and integrating and averaging everything, I think we'll likely have human-level or above AGI around 2060 - 2080. I think it's relatively unlikely it'll happen past 2100 or before 2050. I'd even consider betting some money on it.

I'm kind of coming up with these numbers out of thin air, but as much of a legend as he is, I agree Carmack's estimate seems way too optimistic to me. It's possible, but unlikely to me.

That said:

>The emphasis on computational power also makes no sense to me. If we had infinite compute today, what steps would you take to build AGI? Does anyone have any good ideas about that?

In this interview with Lex Fridman and Greg Brockman, a co-founder of OpenAI, he says it's possible that increasing the computational scale exponentially might really be enough to achieve AGI: https://www.youtube.com/watch?v=bIrEM2FbOLU. (Can't remember where he said it exactly, but I think somewhere near the middle.) He's also making a lot of estimates I find overly optimistic, with about the same time horizon as Carmack's estimate.

As you say, it can be a little confusing, because both John Carmack and Greg Brockman are undoubtedly way more intelligent and experienced and knowledgeable than I am. But I think you're right and that it is a blindspot.

By contrast, this JRE podcast with someone else I consider intelligent, Naval Ravikant, essentially suggests AGI is over 100 years away: https://www.youtube.com/watch?v=3qHkcs3kG44. I think he said something along the lines of "well past the lifetimes of anyone watching this and not something we should be thinking about". I think that's possible as well, but too pessimistic. I probably lean a little closer to his view than to Carmack's, though.


I believe that 100 years is optimistic. I would say that it's hundreds of years away if it's going to happen at all.

My bet is that humans will go the route of enhancing themselves via hardware extensions and this symbiosis will create the next iteration(s) in our evolution. Once we get humans that are in a league of their own with regards to intelligence they will continue the cycle and create even more intelligent creatures. We may at some point decide to discard our biological bodies but it's going to be a long transition instead of a jump and the intelligent creatures that we create will have humans as a base layer.


Carmack actually discusses this in the podcast when Neuralink is brought up. He seems extremely excited about the product and future technology (as am I), but he provides some, in my opinion, pretty convincing arguments as to why this probably won't happen and how at a certain point AGI will overshoot us without any way for us to really catch up. You can scale and adjust the architecture of a man-made brain a lot more easily than a human one. But I do think it's plausible that some complex thought-based actions (like Googling just by thinking, with nearly no latency) could be available within our lifetimes.

Also, although I believe consciousness transfer is probably theoretically achievable - while truly preserving the original sense of self (and not just the perception of it, as a theoretical perfect clone would) - I feel like that's ~600 or more years away. Maybe a lot more. It seems a little odd to be pessimistic of AGI and then talk about stuff like being able to leave our bodies. This seems like a much more difficult problem than creating an AGI, and creating an AGI is probably the hardest thing humans have tried so far.

I'd be quite surprised if AGI takes longer than 150 years. Not necessarily some crazy exponential singularity explosion thing, but just something that can truly reason in a similar way a human can (either with or without sentience and sapience). Though I'll have no way to actually register my shock, obviously. Unless biological near-immortality miraculously comes well before AGI... And I'd be extremely surprised if it happens in like a decade, as Carmack and some others think.


I'm no Carmack, but I do watch what is happening in the AI space somewhat closely. IMHO a "brain" or intelligence cannot exist in a void - you still need an interface to the real world, and some would go as far as to say that consciousness is actually the sensory experience of the real world replicating your intent (i.e. you get the input and predict an output, or you get input + perform an action to produce an output), plus the self-referential nature of humans. Whatever you create is going to be limited by whatever boundaries it has. In this context I think it's far more plausible for super-intelligence to emerge and be built on human intelligence than for super-intelligence to emerge in a void.


How would this look, exactly, though? If you're augmenting a human, where exactly is the "AGI" bit? It'd be more like "Accelerated Human Intelligence" rather than "Artificial General Intelligence". I don't really understand where the AI is coming in or how it would be artificial in any respect. It's quite possible AGI will come from us understanding the brain more deeply, but in that case I think it would still be hosted outside of a human brain.

Maybe if you had some isolated human brain in a vat that you could somehow easily manipulate through some kind of future technology, then the line between human and machine gets a little bit fuzzy. In that respect, maybe you're right that superintelligence will first come through human-machine interfacing rather than through AGI. But that still wouldn't count as AGI even if it counts as superintelligence. (Superintelligence by itself, artificial or otherwise, would obviously be very nice to have, though.)

Maybe you and I are just defining AGI differently. To me, AGI involves no biological tissue and is something that can be built purely with transistors or other such resources. That could potentially let us eventually scale it to trillions of instances. If it's a matter of messing around with a single human brain, it could be very beneficial, but I don't see how it would scale. You can't just make a copy of a brain - or if you could, you're in some future era where AGI would likely already have been solved long ago. Even if every human on Earth had such an augmented brain, they would still eventually be dwarfed by the raw power of a large number of fungible AGI reasoning-processors, all acting in sync, or independently, or both.


Yes, we probably have different definitions of AGI. For me, artificial means that it's facilitated and/or accelerated by humans. You can get to the point where there are 0 biological parts, and my earlier point is that there would probably be multiple iterations before this would be a possibility. If I understand you correctly, you want to make this jump to "hardware" directly. Given enough time I would not dismiss any of these approaches, although IMHO the latter is less likely to happen.

Also, augmenting a human brain in the way I'm describing does not mean that each human would get their brain augmented. It's very possible that only a subset of humans would "evolve" this way and we would create a different subspecies. I'm not going to go into the ethics of the approach or the possibility that current humans will not like/allow this, although I think that the technology part alone would not be enough to make it happen.


I am not an expert, but I don't think computational power is the limitation. It's the amount of data processed. Our brains are hooked up to millions of sensory signals, some of which have been firing 24/7 for decades. Also our brains come with some preformed networks (sensory input feeding into a region with a certain size and shape) that took millions of years to "train". Even then, our brains take 20-25 years to mature.

Machine learning at this point seems closer to a tool designed analytically (feeding it well-formed data relevant to the task, hand-designing the network) than to AGI.


Things that support the notion that it is soon are that napkin math suggests the computational horsepower is here now, and that we have had a few instances of sudden, unexpected advances in how well neural networks work (AlphaGo, AlphaZero, etc.).

One might extrapolate that there is a chance that in 10 years, when the computational horsepower is available to more researchers to play with, and we get another step-change advance, that we will get there.

My own feeling is that it is possible AGI could happen soon, but I don't expect it will.


This is how I feel about AGI too, and I also include self-driving cars. I don't think those are just around the corner either.

In general I don't think our current approach to AI is all that clever. It brute forces algorithms which no human has any comprehension of or ability to modify. All a human can do is modify the input data set and hope a better algorithm (which they also don't understand) arises from the neural network.

It's like a very permissive compiler which produces a binary full of runtime errors. You have to find bugs at runtime and fiddle with the input until the runtime error goes away. Was it a bug in your input? Or a bug in the compiler? Who knows. Change whichever you think of first. It's barely science and it's barely a debug workflow.

What pushed me all the way over the edge was when adversarial techniques started to be applied to self-driving cars. That white paper made them look like death machines. This entire development process I am criticising assumes we get to live in the happy path, and we're not. The same dark forces infosec can barely keep at bay on the internet, and have completely failed to stop on IoT, will now be able to target your car as well.

Worst thing is all our otherwise brilliant humans like Carmack are gonna be the guinea pigs in the cars as they head off toward their next runtime crash.


The economics of the situation aren't friendly to humans, because human intelligence doesn't scale up well. Take energy consumption-- once you're providing someone 3 square meals they can't really use any extra energy efficiently. So we try training up lots of smart people and having them work together, but that causes lots of other problems-- communication issues, office politics, etc.

Additionally you can't replicate people exactly, so even when Einstein comes along we only have him for a short while. When he passes away we regress.

Computers are completely different. We can ring them with power plants, replicate them perfectly, add new banks of CPUs and GPUs, wire internet connections directly into them, etc.

This didn't used to matter because of the old "computers can only do exactly what you tell them to do, just really fast" limitation. Now that computers are drawing, making art, modifying videos, playing chess and Go preternaturally, playing real-time strategy games well, etc., we can see that that limitation doesn't really hold anymore.

At this point the economics start to really kick in. More machine learning breakthroughs + much, MUCH bigger computers over the next decades are going to be interesting.


Einstein comes along only once but his knowledge lives after his death. The same way he iterated on the knowledge of those before him.

If you give Deepmind "x" times the compute power (storage, whatever) it just plays Starcraft better. It's not going to arrange tanks into an equation that solves AGI.

That breakthrough will be assisted by computers I'm sure, but the human mind will solve it.


Also I think that CS people horribly underestimate neurons. The idea that there are bits 'in' neurons is a misconception. Each neuron is a complex biological entity with varied modes of interaction and activation.

So these napkin estimates comparing brainpower to what server farms can do don't inform us at all about how that gets us closer to AGI.


I always wonder how they think AGI is close when neuroscience is still scratching in the dark with brain scans, and we don't know 100% how digestion works, nor how to build a single cell in a lab and have it skip millions of years of evolution to make a baby in 9 months. AGI will definitely be different in structure than a human brain. Will it have a microbiome to influence its emotions and feelings?


You don't need AGI to do serious damage. I think it's just easier for the layperson to reason about the ethics and implications of AGI than it is to reason about how various simpler ML models can be combined by bad actors to affect society and hurt the common good.


One such missing piece could be that AGI already exists, but is kept behind NDAs.


Speaking as someone doing research in this field, I have an unbelievably hard time imagining this to be the case.

The ML community is generally extremely open, and people know what the other top people are working on. If an AGI was developed in secret, it would have to be without the involvement of the top researchers.


You probably have a blind spot for people who are not able to speak English and who work in conditions that are kept secret by design. Coincidentally, I know someone in that situation who has worked on AI for at least two decades and who has kept radio silence for a decade on what exactly he's working on.


Without the involvement of who we think are the top researchers. If I were smarter and had more time, I would look for bright young researchers who published early and then stopped, but are still alive.


You're assuming that AGI is going to come from ML. While interesting, I strongly believe that ML is never going to generate anything close to AGI. ML is more like our sense organs than it is like our brain. It can take care of processing some of the input we receive, but I don't see it moving past that. Super-advanced ML + something else will probably be at the root of what could evolve into AGI.


If someone has discovered AGI, it should be trivial for them to completely dominate absolutely everything. There would be no more need for capitalism or anything, we would be in a post-singularity world.


Imagine you'd have discovered AGI and want to exploit it as best as you can without having everyone else notice that you have discovered AGI.


You don't even need to do the imagining yourself. You can just have your AGI do that imagining for you. "Computer, come up with a way for us to take over the world without anyone noticing."


And then hope that the answer isn't "I'm sorry, Dave, I'm afraid I can't do that."


Kind of a tangent but I can't see why we wouldn't be able to make "A"GI using human brain organoids.

https://en.wikipedia.org/wiki/Cerebral_organoid

I know of at least two people that are eager to make "Daleks" and given the sample size there must be many more.


> Sometimes I wonder if AGI (and the concept of a "technological singularity") isn't just "intelligent design for people with north of 140 IQ"

You're not the first person to express this idea, but it's pure speculation. There is obviously a possibility that it will be proved correct at some point in the future. But historically, very smart people have been ridiculed like clockwork for expressing ideas that were beyond their time but philosophically (and physically, eventually technologically) possible.

I'd be wary of adding to such sentiment. It also feels suspiciously like an ad hominem criticism, although in your case it's expressed more like a question. I think there is clearly something to the idea of very smart people having an intellectual disconnect with the reasoning of their closer-to-average peers (and hence expressing things that seem ludicrous, without considering how they will be received), but not one that negatively affects the quality of their deductions.

IMHO, the ideas of AGI and a "technological singularity" (let's call it economic growth that's vastly more powerful than anything seen up until now) aren't so different from earlier, profound developments in human history. The criticism of "smart people developing a blind spot" could have been applied equally to e.g. the ideas of agriculture and the following power shift, industry, modern medicine, powered flight and spaceflight, nuclear weapons, or computers, networking and robotics.

All these ideas put the world into an almost unimaginably different state, when seen with the eyes of an earlier status quo. Maybe AGI is relatively different; it's hard to say without having lived in ancient Egypt. It's certainly qualitatively different, since it involves changes to intelligent life, but I'm not sure the idea feels much more alien than things we've already experienced.


He did couch it in the caveat that once the hardware is there it'd be more a matter of thousands of people throwing themselves at the problem -- we're waiting, I guess, for the hardware to be good/cheap enough for those people to be widespread.


I sort of agree with your skepticism, but you gotta admit that some of the things the ML folks are doing are uncanny in terms of how they seem to model the human visual system and perform other human-like tasks. Additionally, we already have tons of CPU horsepower that can get close in terms of raw processing ability. Even though we don't yet know what the missing "special sauce" is, I don't think it's inconceivable that someone in 5 years figures it out (though 50 years is just as likely)


I know it's just a splinter of AGI, but conversational language understanding and generation is undergoing some rapid advancement. This subreddit is all GPT-2 bots, and while most of it is still bad, there are glimpses of the future in there. (Note: some of it is NSFW)

https://www.reddit.com/r/SubSimulatorGPT2/


Reading the AI-goes-FOOM debate solidified a lot of the mushy parts of my "singularitarianism".

I think the linchpin of my belief is recursive self improvement. I think machine intelligences are a different kind of substance with different dynamics than the ones we typically encounter.

I don't think someone will compile the first AGI and presto, there it is. I think a long-running system of processes will interact and rewrite its own code to produce something around which a reasonable boundary could eventually be drawn to distinguish the system, and anyone interacting with the system would say: "this thing is intelligent, the most intelligent thing on the planet". It would have instant access to all written knowledge, essentially unbounded power to compute new facts and information, and the ability to model the world to as accurate an approximation as needed to produce high-confidence utterances.

I just don't see how a system like that couldn't come into existence one day. Issues around timelines are completely unknowable to me, but my rough distribution is something like: I would be surprised if it happened in the next 50 years, and shocked if it didn't happen within the next 1000. Very fuzzy, but it "feels" inevitable.

If a collection of unthinking cells can coordinate and produce the feeling of conscious experience then I can't see what would stop silicon from producing similar behavior without many bounds inherent in biological systems.


But that's the rub. Biological systems are not just random interactions. The entire system is meticulously orchestrated by DNA, RNA, etc. We don't even fully understand yet how it all works together, but it's very clear that these processes have evolved to work together to achieve something that none of them could have ever achieved alone.


Biological systems climb up energy gradients and outcompete other systems.

Artificial systems should be able to climb given a suitable gradient. I think the hard part of AGI is going to be designing the environment and gradient to produce "intention"; I don't think the hard part is studying the human mind to find out the "secret of intelligence".

The goal of AGI isn't silicon minds isomorphic to human minds at each level of interpretation. Just the existence of an intelligent system.


> If we had infinite compute today, what steps would you take to build AGI? Does anyone have any good ideas about that?

https://en.wikipedia.org/wiki/AIXI
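
For anyone who hasn't run into it: AIXI is, roughly (this is from memory of Hutter's formulation, so treat the details as approximate), the agent that weights every environment-program consistent with its interaction history by a Solomonoff-style prior of 2^-(program length) and then picks the action maximizing expected future reward, something like

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           \big[ r_k + \cdots + r_m \big]
           \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

It's incomputable, which is why "infinite compute" is exactly the operative caveat in the question above.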


Looking at the trajectory of things, AGI is not impossible. There you have it; I think that's all anyone can say about AGI until another breakthrough comes, whatever that may be.


"Neural net that's really good at identifying objects, processing symbols and making decisions."

* Neural nets play Chess or Go better than us. They will soon play Mathematics better than us.

* They turn keywords into photo-realistic images in seconds, and will soon do the same for text. Literature and the Arts down this path.

* They learn to play video games better than us, from Starcraft to Dota. Engineering down this path.

There is no hidden information. You just need to look at the breadth of the field. There is a credible challenge to all of our intelligent capabilities.


I disagree that the field of Mathematics can be reduced to a definable game. Many recent breakthroughs have been a creative cross-pollination of mathematical fields...not that this will be totally off-limits to a sufficiently general AI. I didn't do much Math in college, but my impression was that after you've learned the mechanics of calculus, algebra, etc., there's no obvious way to advance the field. Lots of "have people thought of things this way before?" rather than "crunch the numbers harder!"

Anyone with more training want to chime in?


I have no trouble believing that neural nets can beat people at all of these things (including, eventually, driving). And that, in itself, is incredibly impressive and incredibly useful.

The question is how you get from that to AGI.


One thing that is still missing, I believe, is adaptability. Take chess.

Between rounds at an amateur chess tournament you will often find players passing the time playing a game commonly called Bughouse or Siamese chess. It's played by two teams of two players, using two chess sets and two clocks. Let's call them team A, consisting of players Aw and Ab, and team B, consisting of players Bw and Bb.

The boards are set up so that Aw and Bb play on one board, and Ab and Bw on the other. They play a normal clocked game (with one major modification described below) on each board, and as soon as any player is checkmated, runs out of time on their clock, or resigns, the Bughouse game ends and that player's team loses.

The one major modification to the rules is that when a player captures something, that captured piece or pawn becomes available to their partner, who can later elect, on any turn, to drop it on their own board instead of making a regular move.

E.g., if Aw captures a queen, Ab then has a black queen in reserve. Later, instead of making a move, Ab can place that black queen on Ab's board. The captured pieces must be kept where the other team can easily see them.

You can talk to your teammate during the game. This communication is very important because the state of your teammate's game can greatly affect the value of your options. For example, I might be in a position to capture a queen for a knight, and just looking at my board that might be a great move. But it will result in my partner having a queen available to drop, and my partner's opponent having a knight to drop. Once on the board a queen is usually worth a lot more than a knight--but when in reserve it is the knight that is often the more deadly piece. So, I'll ask my teammate if queen for knight is OK. My teammate might say yes, or no, or something more complicated, like wait until his opponent moves, so that he can prepare for that incoming enemy knight. In the latter case, if I've got less time on my clock than my teammate's opponent has, the latter might delay his move, trying to force me to either do the trade while it is still his turn, or do something else which will let his teammate save his queen. This can get quite complicated.

OK, now imagine some kid, maybe 12 years old or so, who is at his first tournament, is pretty good for his age, and has never played Bughouse. He's played a ton of regular chess at his school club and with friends, and with the computer.

A friend asks him to team up, quickly explains the rules, and they start playing Bughouse.

First few games, that kid is going to cause his team to lose a lot. He'll be making that queen for knight capture without checking the other board, shortly followed by his partner yelling "WHERE DID THAT KNIGHT COME FROM!? AAAAAARRRRRGGGHHHHH!!!".

The thing is, though, by the end of the day, after playing a few games of Bughouse between each round of the tournament, that kid will have figured out a fair amount of which parts of his knowledge of normal chess openings, endgames, tactics, general principles, etc., transfers as is to Bughouse, which parts need modification (and how to make those modifications), and which parts have to be thrown out.

To get his Bughouse proficiency up to about the same level as his regular chess proficiency will take orders of magnitude fewer games than it took for regular chess.

I don't think that is currently true for artificial neural nets. Training one for Bughouse would be about as much work as training one for regular chess, even if you started with one that had already been trained for regular chess.
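
To make the claim concrete, here is a minimal sketch (assuming PyTorch, with the class names, layer sizes and the pretrained-weights file all invented for illustration) of the kind of transfer the kid does effortlessly: keep a trunk trained on regular chess frozen and retrain only a new policy head that also covers Bughouse-style piece drops. Whether this actually converges in "orders of magnitude fewer games" is exactly the open question.

    import torch
    import torch.nn as nn

    class BoardEncoder(nn.Module):
        """Shared trunk: encodes 12 piece planes on an 8x8 board into features."""
        def __init__(self, in_planes: int = 12, hidden: int = 128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_planes, hidden, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
                nn.Flatten(),
            )

        def forward(self, x):
            return self.net(x)

    N_CHESS_MOVES = 4672     # AlphaZero-style move encoding for standard chess
    N_DROP_MOVES = 5 * 64    # hypothetical: 5 droppable piece types x 64 squares

    trunk = BoardEncoder()
    # A real experiment would load weights trained on regular chess here, e.g.
    # trunk.load_state_dict(torch.load("chess_trunk.pt"))  # hypothetical file

    # New head covering ordinary moves plus drops for the variant.
    head = nn.Linear(128 * 8 * 8, N_CHESS_MOVES + N_DROP_MOVES)

    # Freeze the trunk so only the head adapts -- the "transfer" in question.
    for p in trunk.parameters():
        p.requires_grad = False

    optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    def training_step(boards: torch.Tensor, target_moves: torch.Tensor) -> float:
        """One supervised step on (position, expert move) pairs from variant games."""
        with torch.no_grad():
            features = trunk(boards)      # reuse what the chess net learned
        logits = head(features)
        loss = loss_fn(logits, target_moves)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()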


"While neural nets are good at organizing a world governed by simple rules, they are not proven good at interacting with other intelligent agents." This is an interesting point, for example squeezing information through a narrow channel forces a kind of understanding that brute forcing does not. I've stopped paying close attention to the field a year ago, but I have seen a handful of openai and deepmind papers taking some small steps down this route.


AGI is becoming like communism in that it seems theoretically possible, might usher in utopia or be really scary, and apparently intelligent people often believe in it. Along that line of thought one can imagine a scenario where some rogue military tech kills 100 million people, and the world moves to ban it, but a small cadre of intellectuals insist that "wasn't real AGI".


Until we can define intelligence, we cannot create artificial intelligence. We still do not know what intelligence actually is - bloviating academics clamoring for subsidies to support their habits notwithstanding.


> Until we can define intelligence, we cannot create artificial intelligence.

Until we define 'cake', we cannot create cake.


We have well and truly defined cake.

I mean, words have meaning, don't they? Or, if not, then what's the fucking point?


Only because we made it so much.


For those of you skipping to the heavily programming-focused parts, I'd heartily suggest listening to the whole thing. There's a lot in there about life, balance, working conditions in gaming, the need for sleep, martial arts, etc. Lots of topics are discussed. It's a great episode.


And twin-turbo'ing a Testarossa to 1009 hp AT THE WHEELS


For those interested in the development of Doom (And Wolf3D, Keen, ..), or the story of iD software in general, I quite recommend reading "Masters of Doom".

https://www.goodreads.com/book/show/222146.Masters_of_Doom?f...

Coincidentally, I just started re-reading it last week. It's quite a fun book :)


Very fun book indeed, the audiobook is also very good. (narrated by Wil Wheaton) https://www.audible.com/pd/Masters-of-Doom-Audiobook/B008K8B...


I find Wil Wheaton's audiobook narration very annoying.

I stopped reading Redshirts because of his childish voice acting.


I can definitely see him becoming annoying in some contexts, but for this book I found his enthusiasm perfect; it reminded me of that early university/student energy.


He was very good at narrating his autobiography. Also liked his reading of Ready Player One but other than those two books, he does tend to be a bad choice for narration. Comparing the Amber Benson reading of Lock In to the Wheaton reading makes this even more obvious.


Reading it now! I also recommend reading his .plan files, archived on GitHub.



Reading that book permanently raised my internal bar for what it means to be a disciplined programmer. The work ethic and what it created was breathtaking.


I think it probably helps when you're not only good at what you're doing, but also are super into the project.

My performance is abysmal at my day job because I really can't bring myself to be interested in the subject at hand. And yet I'll regularly work until the wee hours of the morning on personal projects without even realizing what time it is.


I get that occasionally at work as well - but I find the main hindrance at work to be the distractions of the open office.


John Carmack is the perfect ultra nerd for this format. He's so good at communicating complex ideas, to the point that even less-informed people will enjoy this podcast. Dude's a hero.


If you’ve ever watched a QuakeCon talk, this is pretty much what you’d expect.

The one thing of note that I disagree with is his opinion on general AI. I think that could take centuries, and that it’s kind of like trying to predict when some unsolved math problem will be solved.


We already have an implementation of general AI in our brains that we should eventually be able to mimic.

Do you think it will take centuries to figure out how it works to understand its algorithm?


Maybe if you could somehow set up sensors in every axon, but the level of analysis you can do with EEG isn't very informative.


> Do you think it will take centuries to figure out how it works to understand its algorithm?

There's no reason whatsoever to believe that the brain is an algorithm at all.

This is just you searching for something where it is easiest to look.

Note: not everything is an algorithm.


There is absolutely no reason to think the brain is non-algorithmic in any way, to the extent that you can even define such a nonsense statement without waving your hands about quantum idiocy like Penrose in his senility. The default assumption in science is that any phenomenon is explainable and predictable, not the opposite: you don't get to invert the burden of proof on that front just because it would make your point (intelligence involves non-algorithmic woo) easier to make.

Even neuroscientists, who are more pessimistic about the prospect of AGI than anyone else, generally agree that the brain is ultimately not doing anything involving woo (with the exception of a few notably crazy religious ones), and is effectively just a computer. Neuroscientists think it's doing more involved computations than AI researchers hope, but it's still just crunching data.


> quantum idiocy like Penrose in his senility.

FWIW, Penrose first posited the idea of QM having some role in consciousness about 30 years ago. I don't remember much about his argument, but it seemed plausible and not easily dismissed. BTW, Penrose was also a guest on the Rogan Experience.

Whatever the case, we are still very very very far from understanding the brain when it comes to consciousness. There's room for people to explore possibilities.


> The default assumption in science is that any phenomenon is explainable and predictable

Two points:

a) Not everything is explainable and predictable, and thus under the domain of science.

b) There's a huge (maybe infinite?) class of things that are explainable and predictable and yet aren't algorithms.

An 'algorithm' is a very specific mathematical concept with a very specific definition. It is quite possible that the brain is explainable and predictable and yet isn't an algorithm.

> ...but it's still just crunching data.

Only if you expand 'data' to mean every possible physical phenomenon under the sun, which is disingenuous. (Are hormones 'data'? Is electromagnetic radiation? Etc., etc.)


I'm saying that the inputs to the brain are data, and the outputs are data. The brain transforms that data in some way, and we have mathematical theorems that say yep, most of the ways data can be transformed can be expressed as an algorithm in any Turing-complete language.

If your argument is that the brain leaps past normal computation into hypercomputation or something like that, then you're making an extremely bold claim that doesn't match what we know about the physical universe (there is a long history of arguments about the physical possibility of hypercomputation, and most people don't think it's possible even in theory).

I know it sounds expansive to say that everything in the physical world (at least the bits accessible to our experimentation) can be modeled by an algorithm, but that really is the mainstream scientific view, and the edges where people argue about the fringe possibilities most definitely do not apply to the energy/time scales involved with the brain.


I upvoted you because I believe there is a case for the brain not being an 'algorithm' per se (as in: here is the code, it can be run on any Turing-complete computer). That is, it probably depends on timing and architecture too - lots of algorithms running at the same time and staying in sync with themselves, the body, and the environment. Also, 'algorithm' implies something we can understand and make a complete mental model of. Maybe the brain is more like the big ball-of-mud software we all hate to work on and want to refactor, except that in the brain's case, "refactoring" would de-optimise the timing I was referring to earlier and perhaps make the brain not work at all. Let alone that we all have a different brain! And the brain is self-modifying hardware/software. I think it taking centuries to understand might be correct!


Even so it would be able to be simulated on a Turing complete computer. Albeit a lot slower.


It's taken several millennia so far, so I think taking only a few more centuries is pretty optimistic.


Clearly knowledge acquisition has accelerated in the last century.

It took the same amount of time to learn how to fly (i.e. several millennia).

Yet, once we did, we broke the sound barrier and made it to the moon within 70 years.


Assuming we have the right tools to understand how the brain works to the point we can reverse-engineer it and recreate it is pretty optimistic. We have gotten better at specific things but assuming that the most recent advances in "knowledge acquisition" are the last we will need is pretty optimistic. Assuming the pace of knowledge acquisition will continue to accelerate long enough to get to that point in less than centuries is pretty optimistic.


Can you be a little more specific on the timeline? Centuries can be 200 years to almost 1000.

If we look back 200 years to the 1820s, the world looked quite primitive. Railroads and steamships were exciting:

https://www.thoughtco.com/history-of-railroad-4059935

https://en.wikipedia.org/wiki/Steamship


Most of the computing tech we have now is based on fundamental ideas from before 1980. https://stackoverflow.com/questions/432922/significant-new-i... I don't think the pace of new ideas in computing is accelerating right now. If the ability to capture the behavior of a human brain isn't already on the horizon, I wouldn't expect it to be doable in the next 300-400 years. We don't have the mental framework to structure a project like that.

On top of that, the engineering to even capture data from a brain to begin with is pretty far off.


We didn’t have this 40 years ago:

http://portal.brain-map.org/


Low hanging fruit. Also, we haven't really broken many speed records in the last 50 years.

So what happens first, AGI or traveling at close to the speed of light?


This doesn't really get us closer to the speed of light, but we have broken lots of speed records in the last 50 years:

https://en.wikipedia.org/wiki/List_of_vehicle_speed_records


The ones that actually count - air vehicle (rocket) and air vehicle (air-breathing) - were set in 1967 and 1976.


There is no real evidence that computers can be programmed to intelligently invent and act on their own thought processes. All notable AI techniques to date specify in exhausting detail the thought processes that the AI should execute in carefully constructed algorithms. We're getting closer to the point where we can tell the computer to "think and act like a human" and in more and more domains it succeeds. But we're as far as ever from "think for yourself".


Any intelligent process has to emerge from the action of non-intelligent components.

Your argument is the same as saying that a hurricane could never form because air molecules do not contain wind, and water molecules do not contain rain.


I'm skeptical of any argument which hinges on the word "emerge" or "emergent". Consider: "consciousness is an emergent property of brains" vs "consciousness is a magical property of brains".


Does ‘macroscopic property’ make you feel better? Like heat, or entropy or any of the many properties in nature that are a consequence of the combined actions of their constituent material.

Any other explanation would have to rely on the property being explained existing all the way down to electrons and quarks.

Atoms aren’t a liquid, but room temperature water is. At some point, the property of being a liquid emerges through the combined behavior of collections of atoms.

We know that people are intelligent, we know that atoms are not. At some point, the property of being intelligent must arise from the combined action of their constituent parts. We have theories about how that happens, but no way to reproduce it as of yet using computers — it may not be possible. But there is no reason in principle that just because computers are comprised of non intelligent components, that they cannot give rise to an intelligent system. Humans are one example where such a thing has happened— I don’t think anyone would suggest that neurons have intelligence and even if you believed that, one certainly can’t believe that electrons and quarks do.

Of course one could posit a non physical entity or force which somehow exists in people and does not in machines which gives rise to intelligence, but to say the least that requires a great deal more evidence than we currently have for that before I would accept that as an explanation.


> All notable AI techniques to date specify in exhausting detail the thought processes that the AI should execute in carefully constructed algorithms.

I'm not knowledgeable about AI, but from some of the game-playing AI research I've read about, it seems like we provide in exhaustive detail the rules and objectives, and the AI figures out (through a very resource-hungry process) an "algorithm" (e.g. encoded as a neural network) to play the game well.

I'm not saying that's close to human thought, but it seems far beyond having to "specify in exhausting detail the thought processes that the AI should execute in carefully constructed algorithms."


Jürgen Schmidhuber w/ Lex Fridman talk about AI & consciousness: https://youtu.be/3FIo6evmweo


Just in order to start some discussion: do you believe that intelligence (in the form of creative thought, at least) is substrate dependent?


I would imagine that it's entirely possible to generate creative thought using computers. Whether or not the thing driving that creative thought also generates consciousness(capability for qualia) is another matter entirely.

The simulation of a thing is not the thing itself. Perfectly simulating an apple falling from a tree only generates information about the thing, but the simulation hasn't caused an actual apple to fall from an actual tree. Computers are merely symbol manipulators, and symbols are nothing more than the material used to make them(ink on paper, electrons in a computer, etc...). I suspect that consciousness must follow the same logic -- We should be able to simulate something that looks exactly like creative thought(and probably is, technically), but if that same symbol manipulation results in an actual consciousness being produced in the actual world, then the universe has to be a much stranger place than we already know it to be...


> Computers are merely symbol manipulators, and symbols are nothing more than the material used to make them(ink on paper, electrons in a computer, etc...).

But a brain is also just a symbol manipulator, in essence. All it does is shift electrical current from one place to another.


I think a distinction can be made between the two. The brain can manipulate symbols, but it's not clear that it's the symbol manipulation itself that causes consciousness. A computer simulation doesn't actually fire neurons, it manipulates symbols representing neurons(to us) in a way that could be interpreted as firing. Those two things are very different, despite the fact that they both use electricity. Again, there's a difference between a physical thing vs an accurate representation of a thing by other physical means.

As a thought experiment, would it be possible to create consciousness using pen and paper calculation? Perform the same calculations by hand, multithread it by using billions of people performing the same symbol manipulation we would have otherwise performed on the computer. I argue all that project would produce is a large stack of paper and ink. I would argue that it's not the act of manipulating symbols that gives rise to consciousness(since it's the only thing the computer and paper methods have in common), rather that something qualitatively different is happening in the brain.


> Again, there's a difference between a physical thing vs an accurate representation of a thing by other physical means.

A late reply, perhaps, but:

- Does the physicality matter all that much here? For sake of argument, let's assume that I have an infinitely powerful computer, which simulates a brain perfectly, down to the last atom. Would the thing inside my simulation not be conscious, and if so, why not?

> As a thought experiment, would it be possible to create consciousness using pen and paper calculation? Perform the same calculations by hand, multithread it by using billions of people performing the same symbol manipulation we would have otherwise performed on the computer.

I don't see why not, to be honest. That is: given enough computing power and enough knowledge on how it works.

> I would argue that it's not the act of manipulating symbols that gives rise to consciousness(since it's the only thing the computer and paper methods have in common), rather that something qualitatively different is happening in the brain.

I must admit: I'm a Hofstadterian in this; I agree with his conclusions that consciousness, as we describe it, is an emergent property caused by the act of manipulating symbols representing ideas and concepts in a self/circularly referencing manner.


It does a lot more than that, especially when actively learning, but let molecules be the 'symbol' and your point still stands ;)


The universe itself is a symbol manipulator.


Carmack makes a good point about VR at the beginning. For people who live in tight spaces, VR can be a way to do conventional things like watch films and YouTube. This kind of conventional use could be what bootstraps the VR market.

VR headset tech is in a weird situation where none of it is really good enough, but when you test the best on the market it destroys the enjoyment you get from the previous generation. I tested the $6,000 Varjo VR-1 (aimed at professional use) for 20 minutes, and now anything consumer grade feels like total crap.


I'm not betting my whole stack on it, but I would bet some on the idea that an imminent breakthrough/killer-app for consumer VR will be exercise. Could be VR gyms. Could be home use.

Take a current-gen oculus/etc. Add a heart rate monitor, foot tracking, maybe weighted controllers... you have some excellent workout potential. If price can get to <$300, they will sell

A lot of the top games atm (beat saber, superhot, boxvr) are pretty exercise-ey. Combining gamified fun/addictiveness, in-home convenience & all the advantages of digitization (eg adjust speed in response to heart rate)... I reckon it will be effective. We're not far from it.

An exercise-centric home VR console and/or VR gyms...
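
As a toy illustration of the "adjust speed in response to heart rate" point above (nothing here references a real fitness-VR API; the numbers are made up), a simple proportional controller could nudge the game's pace to keep the player in a target zone:

    def adjust_game_speed(current_speed: float,
                          heart_rate_bpm: float,
                          zone_low: float = 120.0,
                          zone_high: float = 150.0,
                          gain: float = 0.002) -> float:
        """Return a new speed multiplier, clamped to a sane range."""
        if heart_rate_bpm < zone_low:
            # Player is coasting: speed the game up to demand more effort.
            new_speed = current_speed + gain * (zone_low - heart_rate_bpm)
        elif heart_rate_bpm > zone_high:
            # Player is over-exerting: ease off.
            new_speed = current_speed - gain * (heart_rate_bpm - zone_high)
        else:
            new_speed = current_speed
        return max(0.5, min(2.0, new_speed))

    # Example: heart rate is only 100 bpm, so the pace ticks up a little.
    print(adjust_game_speed(1.0, 100.0))  # -> 1.04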


> VR will be exercise. Could be VR gyms

It's an interesting idea, but about the worst situation I can think of. Not only will I be sweating, I'll be vigorously moving, exacerbating any vestibular reflexes, shifting the headset on my head, and risking getting injured. I think anything that requires too much physical interaction is a bad candidate for VR. My idea of VR is that it amplifies any minute movement of my fingers magically. That feels empowering.


There are already a lot of great games in VR that make you sweat. It's actually not bad at all. Sweat is easily dealt with if you purchase a washable faceplate cover, and there are no simulator sickness issues created by moving around physically with a 6DoF headset, since the view in-game perfectly mirrors your movement in the real world. The headset can shift around slightly on your head, but I've found that's only really an issue if you turn your head completely upside down. Aside from that, slight shifts are barely noticeable provided the headset is strapped on correctly.

I do agree that actually using gym equipment in VR would be rather difficult to do safely, unless the equipment itself was actually tracked and displayed in-game.


The killer app for VR is going to be escapism. There is going to be a future where you're going to purchase experiences from companies. There will be an underground, sex will be huge, travel, alter egos, just like in sci-fi movies. Just wait until VR is like The Beam/Matrix, where you actually feel and sense everything happening. People will be jacked in for most of their life. I envision this being particularly popular for people nearing the end of their lives, or disabled & crippled.


I don't want VR to simulate things I can actually do. I want VR to simulate things I can never do.

A bunch of people "walking though a forest" while on treadmills in microapartments sounds dystopian.


When you're old, feeble, handicapped, or unfortunately unattractive, these are all things you can never do.


perhaps :).

I was talking about the immediate term though. Right now it's a gaming device that's cool but lacks content. A "killer app" in the immediate sense is something to justify the current cost and drive the loop of users->content->price/performance->users.

To get to your endgame, there will need to be reasons to buy devices before the tech reaches Matrix level. No sales, no production, no advancements. For now, the driver is gaming and (in theory) biz applications.

My guess is that gaming+exercise is the next breakthrough, in terms of moving units and driving the loop.

Anyway... escapism, underground culture, sex (porn), travel, and alter egos are arguably the endgame of every personal computing device. Most of us do all of these on PCs & phones.


> Right now it's a gaming device(s)

I think you're selling it a bit short; one of the most popular uses is social, via apps like VRChat.


A fully immersive experience could be fun. Biking trails while wearing a VR headset, on an online community-driven game system like Zwift but with VR, could be great!

https://zwift.com/


Exactly. In VR (current-gen), this can be gamified to make it more fun and addictive. Instead of turning up the music and yelling encouragement like a spinning class, you can be chased by bears. Add in social/multiplayer features (help friend escape bear), and it could be savage.


VR tech has been in that weird situation for 30 years now and I'm starting to think that's just where it's going to be. I don't think that there's some point where it has to get good enough or cheap enough to take off. I think it has already taken off. The market is decently big, but not huge and maybe that's just how it's going to be as long as you have to strap something to your face.

VR is amazing to demo but it's one of those things where the novelty fades quickly. I don't think there's any reason to assume that VR will definitely be a breakout technology.


VR has a much higher floor for what qualifies as a good experience and a higher ceiling of what we can appreciate vs. a monitor or TV screen. So this pattern will probably continue for quite a while.


The resolution and latency hasn't been a problem for a while now. Incremental improvements in those things will grow the market incrementally.


One of the problems for VR is that the goal posts keep moving. We're always comparing it to the best traditional 2D display technology available at the time, so it will always seem to come up short.

If you took today's Oculus back 30 years and handed it out to people used to 1024x768 CRT displays, people would be saying "Well, the VR problem is entirely solved, and there's no reason to do any further research!" We'd probably have given up work on improving 2D displays.


> We'd probably have given up work on improving 2D displays.

No way. Very few people want to have something strapped to their face for a long period of time. I really think VR is here, now. The market is basically what it's going to be. I'm not saying there's no growth left, but it's not going to explode.


$6k isn't really all that far away from consumer level pricing. Give it a couple generations, and that hardware will be affordable for most people.

Though I'm going to have to disagree about current-generation VR not being "good enough". For me at least, the $400 standalone headset John showed at the start of the podcast is already plenty good for what I want to use it for (gaming). True it's not perfect; there's still plenty of room for resolution and graphics improvements in future iterations, but it's already easily good enough for multiplayer shooters (Pavlov), rhythm games (Beat Saber), single-player RPGs (Journey of the Gods), etc.

For other use-cases like desktop replacement I agree there's still a ways to go before VR is practical for most people, but for gaming it's already in a pretty good state in my opinion.


I can tell you that with my Google Daydream, watching Netflix and old movies that I never got to see in the cinema is just bliss.


How is the experience? I mean, the resolution is a lot lower than you'd have if you just watched it on your monitor right? Is that a factor?


You need to get those HD prints for improved picture quality; the 700 MB ones are fine too. The best part that I like about it is that you get to look at all the background elements on the screen in great detail, the shadows and all those cinematography things.

It's like you are alone in the theatre .. staring at this giant canvas!


For those interested in all of Carmack's clever hacks in the Wolfenstein and Doom source code that made it possible to run these games on PCs of the day, the Game Engine Black Books are a great read (they are also free):

http://fabiensanglard.net/gebb/index.html


For those who don't have time to watch/listen to the whole podcast (you should though), you can get the highlights on Joe Rogan's other channel, JRE Clips:

https://www.youtube.com/channel/UCnxGkOGNMqQEUMvroOWps6Q/vid...


Nice to see Team Fortress (Quake 1 mod) get a shout out. 23 years later, we've still got a team developing and playing it. https://www.fortressone.org/


Burgeoning community at https://discord.fortressone.org


Played a bit of TF in 1998. A few weeks later, I went to see Saving Private Ryan. The opening D-Day imagery of the film produced the spontaneous comparison, "it's just like Team Fortress." 21 years later, I still recall that.


I hadn’t heard about this project, thanks for sharing!!! QWTF was one of my first competitive games and I loved making custom maps for it.


Map making for Quake has come a long way with Trenchbroom. We're always looking for more maps - join the discord and say hello!


John Carmack is one of the most eloquent programmers I have heard.


Agreed. I love that he doesn't dumb himself down but still manages to make the content relatable/understandable.


yeah, but he could take a breath every once in a while


He programmed his nostrils to breathe in parallel.


Interesting perspective on work-life balance, certainly contrary to the usual discussion online on these issues https://www.youtube.com/watch?v=udlMSe5-zP8&t=1h27m10s


The legend himself. I'd been waiting for this one for a few days.


Not directly related, but everytime John's name pops up, it always reminds me of one of his quotes:

     "Damn spooky analog crap." -- John Carmack


Any context for this quote? Google didn't find anything beyond people using it in their email signatures.


Do they spend much time talking about Armadillo and/or rocketry? I'm curious to hear Carmack talk about those topics but I'm not sure I want to wade through 2 1/2 hours of Rogan if he doesn't delve into rocketry.



This part through the end is pretty incredible! He’s a nonstop stream of intellect.


The entire podcast is like this, highly recommend the entire episode.


Thank you so much!


Really enjoyed that one!!! Fascinating to hear his story of how he pushed to open source games... and how they started with allowing modding, then custom scripting support, then open source. “The business folks didn’t like it” :-)


What if attaching digital inputs to our neurons and self hacking is the major leap that will bring AGI out?


Everyone is already a phone head. Can't say I am looking forward to everyone being a VR head.


People hate on Joe Rogan but I think he is an excellent host, the podcast has people from all perspectives on and he manages not to offend any of them, most come on subsequent episodes. I'll take a Rogan interview over some opinionated bullshit clip from CNN/MSNBC/FOX.


People hate on him because he's an excellent host, and his lack of "outrage" on controversial topics goes against the fiber of their beings.


What you call a "lack of outrage on controversial topics" would be more accurately called "unwillingness to seriously challenge the terrible ideas he gives a platform to"


His job is not to challenge his guest's ideas, it is precisely to give them a platform. Weak people are afraid of ideas and thus dislike this very notion.


Agreed. I don't want to see hosts challenging their guests' ideas. I appreciate Sean Carroll's physics podcast for exactly the same reason: he doesn't always agree with his guest's interpretation of quantum mechanics or understanding of consciousness, but he does not argue with them - he helps them present their best case.

If you want to see guests arguing with their hosts, well that's a form of entertainment you can get on cable.


Excellently put. He's the perfect host for me. Just presenting and exploring thoughts and ideas.


[flagged]


Why shouldn't we hear what nationalists have to say? Who gets to decide what is a conspiracy theory? They're not all as simply false as flat earth. Keto in the low-fat 90s would have been a conspiracy theory and a culture of censorship would have killed it.

We need many groups of people believing different things and civilly arguing for what's correct. We don't need de-platforming or cancelling - that can only lead to groupthink and groupthink is incredibly dangerous: see the housing crisis or the Nazis.


He is a good host. The hate he gets is from bringing up the same topics over and over and hosting unlikable people.

Bringing up the same topics is likely because he has so much content that there is going to be repeat material. Bringing on guests that many other shows wouldn't have has resulted in him being called a mouthpiece for the alt-right. I've heard him called a gateway into the alt-right.


> The hate he gets is from bringing up the same topics over and over and hosting unlikable people.

I think this is it. I'm a semi-regular listener, and think Joe is generally a great host, but too much of his podcast becomes like inhaling car exhaust fumes due to the nuttiness of some of the guests.

> Bringing on guests that many other shows wouldn't have has resulted in him being called being a mouthpiece for the alt-right. I've heard him called a gateway into the alt right.

Which I think is unfair because he tends to host people from across the political spectrum. I mean there are the extreme examples, like Alex Jones, but there are also plenty of at least vaguely sane people who lean right. And then, for every one of them, you'll find a Bernie Sanders, or a Jon Ronson coming on.

There are a lot (or at least it feels that way) of conspiracy theory/UFO-nut guests, but that stuff can be fascinating (to a point) regardless of whether or not you actually buy into it. With that said I do have to be in the right kind of mood otherwise the slew of bullshit these people spew will simply infuriate, rather than entertain, me.


I tend to like Joe Rogan even though some of his guests are on the opposite end of the political spectrum as me.

Right now the hyper-sensitive, take everything way too seriously, you are either with us or against us zealots are speaking for everyone who considers themselves progressive. It will wane eventually, just like the satanic panic of the 80s. And I say this as a person who considers themselves a feminist and hyper progressive.

That being said: I don't like Joe Rogan giving a platform to Alex Jones. I think that man is reprehensible. He is a tragedy-exploiting ghoul. He whipped up a fervor around the "Sandy Hook is a false flag" conspiracy theory, harassed the families of the survivors, and laughed his way all the way to the bank while doing it. Alex Jones is a bad person and has done bad things.


Why was this flagged and deleted yesterday? Anyway, definitely awesome, well worth watching.


I liked the video a lot. Pretty interesting.


How is it that this post is ranked 77 with 52 points in 3 hours and only one comment???

Meanwhile "Exploring Weight Agnostic Neural Networks" is at #2 with the same age and only 46 points...

If people are flagging something like this that's so directly in the sweet spot of "interesting to hackers" and non-flamewar inducing as this, then I hope the mods aren't asleep at the wheel!


> If people are flagging something like this that's so directly in the sweet spot of "interesting to hackers" and non-flamewar inducing as this, then I hope the mods aren't asleep at the wheel!

Exactly. Flags of this article would be a good indicator for dang / sctb to make HN ignore flags from these HN accounts in future.


There are probably a lot of Joe Rogan reflexive downvoters because gasp he had some less-than-Carmackian guests on his show.


HN doesn't have downvotes for stories AFAIK. People are probably abusing the flag feature over second-degree associations that don't come up at all in this video.


Those who reflexively dislike Rogan may not like Carmack's politics either. Carmack generally keeps quiet on politics but the few times he touches the subject when discussing books or philosophy on Twitter he's been quite libertarian.


I don't think Carmack is the problem here.


Users flagging stories so others don't see them, rather than using the "hide" feature so they don't see them, is the problem.


What's the problem with Joe Rogan?


A common complaint with Joe Rogan is that he gives a platform and an audience to assholes like Gavin McInnes and Alex Jones.


The best rebuttal to the idea of 'deplatforming' is from John Stuart Mill, in On Liberty:

> “He who knows only his own side of the case knows little of that. His reasons may be good, and no one may have been able to refute them. But if he is equally unable to refute the reasons on the opposite side, if he does not so much as know what they are, he has no ground for preferring either opinion... Nor is it enough that he should hear the opinions of adversaries from his own teachers, presented as they state them, and accompanied by what they offer as refutations. He must be able to hear them from persons who actually believe them...he must know them in their most plausible and persuasive form.”


How is it a bad thing that he promotes the free exchange and presentation of ideas?


[flagged]


sounds like you dont know who gavin is then

why dont you watch the episode with him and learn for yourself

thats why its not good to deplatform people you dont like because other people told you not to like them


It's increasingly clear that I have less in common with many on Hacker News when these types of responses are considered valuable.

> sounds like you dont know who gavin is then

This is directly from Wikipedia.

> McInnes was a leading figure in the hipster subculture while at Vice, being labelled as the "godfather" of hipsterdom. After leaving the company in 2008, he became increasingly known for his far-right political views. He is the founder of the Proud Boys, a neo-fascist men's group classified as a "general hate" organization by the Southern Poverty Law Center.

I called him a neo-fascist, I'd love to know what makes you conclude I don't know who he is.

> why dont you watch the episode with him and learn for yourself

That's not learning for myself, learning for myself is consulting various sources and investigating things he said and actions he's done. Not hearing a milquetoast description of his views from the man himself that aren't being challenged.

> thats why its not good to deplatform people you dont like because other people told you not to like them

Why would you jump to this? What in my response told you that someone else told me not to like him? McInnes screaming the n word the way he has in the past among many of his other actions makes me not like him.

Edit: I'm also clueless as to how my response was directly to the main thread as there was a deeper thread I was trying to respond to originally. Not sure what went on there.


wikipedia is edited by people who have biases too its not a perfect truth. the southern poverty law center says everyone they dont like is hate speech even people like daryl davis and sam harris. splc is a joke.

if you watch his actions youd see the same thing instead but youre not doing that either


Giving Alex Jones a platform is one thing but a bigger problem is people that believe the nonsense he speaks. When did society become the collective brain for everyone? It’s on the individual to figure stuff out for him or herself.

Joe is a heavily left leaning (socially) capitalist, he seems to me to be very libertarian. I bet there’s a number of Silicon Valley engineers who lean the same way except maybe for his stance on guns.

He has everyone on his show. “Crazy” people to the mainstream. And in any case it’s his show you’re free to not give it patronage.


In fairness though, a lot of Alex Jones's detractors probably have never _actually_ watched him, aside from selected clips from their media outlet of choice (likely bashing him); so I think having him on something somewhat mainstream like Joe Rogan's podcast gives the average person a chance to listen to him and decide for themselves that he's batshit crazy, without having it spoonfed to them by another source.


Do people not realise how dangerous a viewpoint this is: to stop people you don't agree with from talking, to deplatform them?

How does the left not realise how authoritarian they actually are in this regard?

And making someone out to be a right wing nazi because of who they spoke to on their show is madness.

I'm not even sure it's honest on the part of these people; I think they know Joe Rogan isn't some right-wing nazi, but they say it just to fit in with their "group".


This is a correct observation, stated drily without any opinion mixed in. I don't understand why people downvoted you.

EDIT: I read over the word "assholes". Not drily at all.


I don't know about his recent activity, but he does have a history of pushing conspiracy theories. Most notably he claimed the moon landing was a hoax.

That alone has put me off watching his show (as much as I enjoy hearing Carmack speak).


That was about a decade ago and he's changed his stance since after talking to people like Neil deGrasse Tyson.

Joe Rogan represents, quite literally, the average Joe going through life and learning as he goes. If we shun people for ever possibly considering something different then we'll never get to actually connect and change their minds. People are lot more open and resilient than you may consider.


When he was presented with information to the contrary, instead of digging in, he considered the evidence and changed his view. Most people have things they don't get right the first time. Growth happens when one recognizes such errors, admits to them, and changes.


> I don't know about his recent activity, but he does have a history of pushing conspiracy theories. Most notably he claimed the moon landing was a hoax.

I subscribe to JRE and watch it regularly. Rogan has not recently pushed the idea that the moon landing was a hoax. I'm not sure where you got that information.

Rogan has discussed how he used to think the moon landing was a hoax many years ago and now thinks the moon landing happened. Rogan also talks about how most conspiracy theorists are sad, lonely people.


>Rogan has not recently pushed the moon landing was a hoax. I'm not sure where you got that information.

You're changing what I said. I said that he previously pushed the view.

It's also not hard to find evidence of that. Here's the first result of a Google search where he claims NASA's footage was faked.

https://www.youtube.com/watch?v=h_KT7YlVuKI

e: Here's a video debating the topic with Phil Plait.

https://www.youtube.com/playlist?list=PLA3ED2E5D5A8D72CD


> You're changing what I said.

Your comment "Most notably he claimed the moon landing was a hoax." is vague about whether Rogan recently has stated the moon landing is a hoax. I've added your full quote in my reply to let people judge for themselves.

Further:

> pushed the moon landing was a hoax.

is not the same as:

> It's also not hard to find evidence of that... he claims NASA's footage was faked

He does believe some of the footage was faked - there are indeed inconsistencies there that aren't very well explained. That's not the same thing as believing the moon landing didn't occur, which was where Joe was back in the Fear Factor / NewsRadio era. He's 52 now.


HN usually hates videos.



