TchoBeer's comments

The NRO is the craziest US government agency. IIRC all their workers are contractors and all their expenses are highly classified. As for the KH-11, Trump accidentally tweeted an image from one of them (see https://www.forbes.com/sites/jonathanocallaghan/2019/09/01/t...) and the image's resolution caused an international incident. This technology was developed in the 70s. The NRO has sci-fi technology.


Extracting 5 seconds of a video feels pretty trivial. Not that this isn't extremely impressive, but it doesn't feel "come for your jobs" impressive.


I am thinking that the latter might eventually emerge, probably as part of a bigger tool chain, e.g. LangChain. Something like Java bytecode: low-level and portable, but optimized for the ways that LLMs (perhaps interfacing with other tools) work.


I wonder who flipped my calendar to 2023, then.


>If you time travel back 50 years ago and told them in the future that a computer could ace almost any exam given to a high school student, most people would consider that a form of AGI.

The problem is that these sorts of things were thought to require some sort of understanding of general intelligence, when in practice you can solve them pretty well with algorithms that clearly aren't intelligent and aren't made with an understanding of intelligence. Like, if you time traveled back 100 years and told people that in the future a computer could beat any grandmaster at chess, they might consider that a form of AGI too. But we know with hindsight that it isn't, that playing chess doesn't require intelligence, just chess prowess. That's not to say that GPT4 or whatever isn't a step towards intelligence, but it's ludicrous to say that it's a significant advancement towards that goal.


That's another way to state the same thing, actually.

One can adopt a static definition of "general intelligence" from a point in history and use it consistently. In this case, GPT3+ is a leap in humanity's quest for AGI.

One can also adopt a dynamic definition of "general intelligence", as you described. In this case, the equivalent statement is that, in hindsight, GPT3+ shows that language ability is not "AGI" but rather "merely" transformer models fed with lots of data. (And then humanity's goal would be to discover that nothing is "AGI" at all, since we'd have figured it all out!)

The fact that we see things differently in hindsight is already strong evidence that things have progressed significantly. It proves that we learned something we didn't know or expect before. I know this "feels" like any other day you've experienced, but let's look at the big picture more rationally here.


Athletes?


Low EV. Some make it very big, but most earn nothing and retrain.


>based on their occurrences in the training set

The words "based on" are doing a lot of work here. No, we don't know what sort of stuff it learns from its training data, nor do we know what sorts of reasoning it does, and the link you sent doesn't disagree.


We know that the relative location of the tokens in the training data influences the relative locations of the predicted tokens. Yes, the specifics of any given set of related tokens are a black box, because we're not going to analyze billions of weights for every token we're interested in. But it's a statistical model, not a logic model; the toy sketch below illustrates the distinction.
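To make "statistical, not logical" concrete at the simplest possible level, here is a toy bigram model (my own sketch; a real transformer is vastly more complicated, but the basic principle of turning training-set co-occurrence counts into next-token probabilities is the same):

    import random
    from collections import Counter, defaultdict

    def train_bigram(tokens):
        # Count how often each token follows each other token
        # in the training data.
        counts = defaultdict(Counter)
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
        return counts

    def predict_next(counts, prev):
        # Sample the next token in proportion to how often it
        # followed `prev` in training: statistics, not logic.
        followers = counts[prev]
        return random.choices(list(followers), weights=list(followers.values()))[0]

    corpus = "the cat sat on the mat and the cat ran".split()
    model = train_bigram(corpus)
    print(predict_next(model, "the"))  # "cat" is twice as likely as "mat"

Nothing in the model "knows" what a cat is; moving tokens around in the corpus moves the predictions, which is the point being made above.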


Are you sure? If conscious experience were a computational process, could we prove or disprove that?


If someone could show the computational process for a conscious experience.


How could one show such a thing?


If it can't be shown, then doesn't that strongly suggest that consciousness isn't computable? I'm not saying it isn't correlated with the equivalent of computational processes in the brain, but that's not the same thing as there being a computation for consciousness itself. If there were, it could in principle be shown.


I can produce a ton of algorithms that no human alive can hope to decide whether they halt or not (see the sketch below). Human minds aren't inherently good at solving halting problems, and I see no reason to believe they can even decide halting for all Turing machines with, say, fewer states than the number of particles in the observable universe, much less for all possible computers.
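One concrete example (my own illustration, not from the thread): a short program that halts if and only if Goldbach's conjecture is false. Deciding whether it halts means settling a problem that has been open since 1742:

    def is_prime(k):
        if k < 2:
            return False
        i = 2
        while i * i <= k:
            if k % i == 0:
                return False
            i += 1
        return True

    def goldbach_search():
        # Halts iff some even number >= 4 is NOT a sum of two primes,
        # i.e. iff Goldbach's conjecture is false. Nobody currently
        # knows whether this loop terminates.
        n = 4
        while True:
            if not any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1)):
                return n  # counterexample found; the program halts
            n += 2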

Moreover, are you sure that, e.g., loving people is non-algorithmic? We can already make chatbots which pretty convincingly act as if they love people. Sure, they don't actually love anyone, they just generate text, but then, what would it mean for a system or even a human to "actually" love someone?


A computer can, in fact, tell that this function halts.

And while the human brain might not be a bio-computer (I'm not sure), its computational prowess is doubtfully stronger than that of a quantum Turing machine, which can't solve the halting problem either.

