
That's sort of a mind map. We are building/experimenting with something like this: https://iwtlnow.microschools.in/

You can either enter your GPT key or fill in the form at https://learn.microschools.in/ and we'll give you access if you'd like to give it a spin.


It helps to have both a computational understanding of, and a computational perspective on, what minds do in order to see why consciousness is needed.

Consciousness is a consensus mechanism. I've co-authored a book where we discuss what is the most computationally robust and biologically plausible model of consciousness. It is only now, two years after the book, that I've realized the best phrasing of what consciousness is must be one that takes into account the Truly Hard Problem of Consciousness: who is feeling it? With every other phrasing or definition, the "I" is implicit and taken for granted, so the question becomes "Why does this have to feel like anything?"

The answer, it turns out, is that both experience and the experiencer are constructed together in a virtuous loop. You are a constellation of experiences. Consciousness is the consensus mechanism by which a chorus emerges in this constellation. And why is one needed? Because the decentralized entity that is you must act as one at all times, especially when rare risks or outstanding opportunities present themselves. When we take the "I" for granted, we simply do not realize what a staggeringly immense computational challenge it is to stitch this subjective self together. What sets this explanation apart is not just the broad-strokes perspective, but the biologically plausible mechanism by which top-down expectations (your past experiences) are matched with bottom-up sensory data (ambiguous and potentially overwhelming reality).

More here: https://saigaddam.medium.com/consciousness-is-a-consensus-me...


Can I simulate consciousness with an infinite loop asking an AI what to do next?

    while True:
        prompt = body_state + sensory_input + "what to do next?"
If we can run this loop 60 times per second with such large input, I guess we could simulate a conscious human.
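
As a toy sketch in Python (read_sensors and ask_ai here are made-up stand-ins, not real APIs):

    import random
    import time

    def read_sensors():
        # made-up stand-in for real sensory input
        return {"light": random.random(), "sound": random.random()}

    def ask_ai(prompt):
        # made-up stand-in for a call to a large language model
        return "do nothing"

    body_state = {"energy": 1.0}
    while True:
        prompt = f"body: {body_state}, senses: {read_sensors()}, what to do next?"
        action = ask_ai(prompt)  # the "decision" for this tick
        time.sleep(1 / 60)       # roughly 60 iterations per second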


There's an even more profound truth here. You are what you experience, even if you don't always remember each experience. You are a constellation of experiences. Consciousness is the consensus mechanism that stitches them together to make you.

This can sound like spiritual woo-woo, but it's the distillation of a pretty good scientific model. More here: https://saigaddam.medium.com/consciousness-is-a-consensus-me...


Heh, you are right :) Consciousness really needs a three-part explanation because of the hard-problem question. The debate usually goes like this:

A: Consciousness is X.
B: But why must X feel like anything?
A: Erm...

That's what gets posed as the Hard Problem of Consciousness: why does it feel like anything?

I'll argue there's also the Invisible Problem of Consciousness: who is feeling it? Virtually no one asks this, because most theories are so imprecise and so far from computationally defined that it's impossible to even fathom we are close to asking that question. But we are.

Coming back to the conclusion as axiom: what this is getting at is that X (the explanation of consciousness) really has to do a few things, because there are no turtles all the way down.

X has to offer a model of consciousness, X has to explain who experiences it, and X must also explain why this results in something we (the answer to the 2nd part) recognize as feeling or inner life.

So the bullet-list really is an answer to the 2nd and 3rd parts of the question here. Who's feeling it, and why does it feel like anything? The "I" and the "It" emerge together, so there's something neat to having the axiom prove itself for the first part of this question. The longer answer is in the book!


Looks interesting! We built an extension for something similar.

https://github.com/CominiLearning/recenter

Often find myself mindlessly clicking over to reddit or some other site while thinking, and before I know it I am being sucked in by something there. Those minutes add up. Wanted something that proactively tells me this is happening and also summarizes how my time was spent. Have found it to be a huge help!


Consciousness is the constellation of your past experiences transforming reality into your next experience.

Every major consciousness theory out there fails because it does not account for how a consciously experiencing self is created. You cannot explain away consciousness without explaining the self.

And there is a theory that offers a model for both (not my own!). Our book Journey of the Mind discusses this. Here's a blog post discussing both https://saigaddam.medium.com/conscious-is-simple-and-ai-can-...


A constellation is a fitting description for the ego.


Isn't the conversational medium too linear and possibly slow? Can you share a specific example of a problem where this approach is interesting? Thanks!


I was driving home from work Thursday night and had to go for 90 minutes in stop-and-go traffic. I had an idea about a new architecture for a computer where the CPU, GPU, and I/O processor would all be socketed in identical processor sockets. I was talking through the buses with VoiceGPT for about an hour. (I think VoiceGPT is the name of the voice interface to OpenAI's ChatGPT. It is available with a ChatGPT Plus subscription.)

Note: I've been told my 3-processor idea is a horrible mistake, but I like it.

On my way home from Church, I can talk through something the pastor said.

On the way home from a class, a student can talk through something they did not quite understand in class. ChatGPT sometimes explains things in a way that people who did not get them before can understand. If I still do not understand something, "ELI5" is my standard second attempt. VoiceGPT keeps answers short, so I do not have to go back through long responses as often, but if there is something I do not understand in the answer, I can drag it out over many more prompts.

A teacher (like myself) can assign students to talk through whatever thing(s) they did poorly on during an exam with ChatGPT or another AI. If the students drive a lot then they can do this with VoiceGPT. The transcript can count as a grade. I do this in my community college classes, but I expect my students to use the free version and type instead of talking.

For the first time, every student really can master every concept (subject to limitations on their time). The magic here is that once a student works hard enough to get an A or two in a field, classes that build on it are really easy.

I have learned some great prompting tricks. For example, we learn things when they are relevant to us. So, my first question is usually "I am a _______ working on (or learning) _______. Tell me why I should be excited about _______."

Another trick I use is: To reduce hallucinations, I do not ask "Tell me about _______." Instead, I ask "How familiar are you with _______?" If I start distrusting the answers, I start a new chat window and try again with revised prompts.

I can imagine future professional development where, for example, a teacher realizes they are not good at something and chats with an AI until the AI thinks they have mastered the concept. Imagine getting monthly 30-minute quizzes and, based on your score, being assigned professional development chats and simulations. For me this is much preferable to sitting in long meetings.

There are definitely things that are too fast to do with voice. I could not follow VoiceGPT's math when it talked through billions of transactions per second on the buses of a hypothetical motherboard. That is why the transcription is awesome.

It's also nice to be able to tell VoiceGPT "I want you to take a note. Do not answer, just take a note." After it agrees, tell it what you want the note to cover, then go back to the conversation.


Can you share a few worth following (with reasonably high signal to noise)? Thanks!


Great read. Surprised to read Wolfram never actually got to use CYC. Anyone here who has and can talk about its capabilities?


I briefly looked into it many moons ago when I was a Ph.D. student working in the area of computational semantics in 2006-10. This was already well past the heyday of CYC, though.

The first stumbling block was that CYC wasn't openly available. Their research group was very insular, and they were very protective of their IP, hoping to pay for their work through licensing deals and industry or academic collaborations that could funnel money their way.

They had a subset called "OpenCYC" though, which they released more publicly in the hope of drawing more attention. I tried using that, but soon got frustrated with the software. The representation was in a CYC-specific language called "CycL" and the inference engine was CYC-specific as well and based on a weird description logic specifically invented for CYC. So you couldn't just hook up a first-order theorem prover or anything like that. And "description logic" is a polite term for what their software did. It seemed mostly designed as a workaround to the fact that open-ended inferencing of the kind they spoke of to motivate their work would have depended way too frequently on factoids of common sense knowledge that were missing from the knowledge base. I got frustrated with that software very quickly and eventually gave up.

This was a period of AI-winter, and people doing AI were very afraid to even use the term "AI" to describe what they were doing. People were instead saying they were doing "pattern processing with images" or "audio signal processing" or "natural language processing" or "automated theorem proving" or whatever. Any mention of "AI" made you look naive. But Lenat's group called their stuff "AI" and stuck to their guns, even at a time when that seemed a bit politically inept.

From what I gathered through hearsay, CYC were also doing things like taking a grant from the defense department, and suddenly a major proportion of the facts in the ontology were about military helicopters. But they still kept beating the drum about how they were codifying "common sense" knowledge, and, if only they could get enough "common sense" knowledge in there, they would break through a resistance level at some point, where they could have the AI program itself, i.e. use the existing facts to derive more facts by reading and understanding plain text.


Doesn't description logic mostly boil down to multi-modal logic, which ought to be representable as a fragment of FOL (w/ quantifiers ranging over "possible worlds")?

Description logic isn't just found in Cyc, either; Semantic Web standards are based on it, for similar reasons - it's key to making general inference computationally tractable.
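
For what it's worth, the standard translation that embeds basic modal logic into FOL maps a formula at world x roughly like this (R is the accessibility relation, φ is any formula):

    ST_x(p)   =  P(x)
    ST_x(□φ)  =  ∀y (R(x, y) → ST_y(φ))
    ST_x(◇φ)  =  ∃y (R(x, y) ∧ ST_y(φ))

This lands in the two-variable fragment of FOL, which is decidable, and that's one way to see why the basic description logics stay computationally manageable.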


I'm not trying to be dismissive of description logics. (And I'm not dismissive of Lenat and his work, either). A lot of things can fall under that umbrella term. The history of description logic may in fact be just as old as post-syllogism first-order predicate calculus (the syllogism is, of course, far older, dating back to Aristotle). In the Principia Mathematica there's a quantifier that basically means "the", which is incidentally also the most common word in the English language, and that can be thought of as a description logic too. But the perspective of a Mathematician on this is very different from that of an AI systems "practitioner", and CYC seemed to belong more to the latter tradition.


That's fascinating to read, thanks for sharing.

Did it ever do something genuinely surprising, something that seemed beyond the state of the art at the time?


One of the people from Cyc gave a talk at the research group I was in once and mentioned an idea that kind of stuck with me.

...sorry, it takes some building up to this: At the time, a lot of work in NLP was focused on building parsers that were trying to draw constituency trees from sentences, or extract syntactic dependency structures, but do so in a way that completely abstracted away from semantics, or looked at semantics as an extension of syntax, without venturing into the territory of inference and common sense. So a sentence like "Colorless green ideas sleep furiously" (to borrow Chomsky's example) was just as good a research object to someone doing that kind of work as a sentence that actually makes sense and is made up of words of the same lexical categories, like "Absolute power corrupts absolutely". I suspect that line of research is still going strong, so the past tense may not be quite appropriate here; I'm using it because I have been so out of the loop since leaving academia.

The major problem these folk are facing is an exploding combinatorial space of ambiguity at the grammatical level ("I saw a man with a telescope" can be bracketed "I saw (a man) with a telescope" or "I saw a (man with a telescope)") and the semantic level ("Every man loves a woman" can mean "For every man M there exists a woman W, such that M loves W" or it can mean "There exists a woman W, such that for every man M it is true that M loves W"). Even if you could completely solve the parsing problem, the ambiguity problem would remain.
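
Spelled out in first-order notation, those two readings of "Every man loves a woman" are:

    ∀m (man(m) → ∃w (woman(w) ∧ loves(m, w)))    (each man loves some woman, possibly a different one)
    ∃w (woman(w) ∧ ∀m (man(m) → loves(m, w)))    (one particular woman is loved by every man)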

Now this guy from the Cyc group said: Forget about parsing. If you give me the words that are in the sentence and you're not even giving me any clue about how the words were used in the sentence, I can already look into my ontology and tell you how the ontology would be most likely to connect the words.

Now, the sentence "The cat chased the dog" obviously means something different from "The dog chased the cat" despite using the same words. But in most text genres, you're likely to only encounter sentences that are saying things that are commonly held as true. So if you have an ontology that tells you what's commonly held as true, that gives you a statistical prior that enables you to understand language. In fact, you probably can't hope to understand language without it, and it's probably the key to "disambiguation".

This thought kind of flipped my worldview upside down. I had always kind of thought of it as this "pipelined architecture" where you first need to parse the text, before it even makes sense to think about how to solve the problems of what to do with the output from that parser. But that was unnecessarily limiting. You can look at the problem as a joint-decoding problem, and it may very well be the case that the lion's share of entropy comes from elsewhere, and it may be foolish to go around trying to build parsers, if you haven't yet hooked up your system to the information source that provides the lion's share of entropy, namely common-sense knowledge.
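
As a toy illustration of that joint-decoding idea (entirely hypothetical, and certainly not how Cyc worked): rather than committing to the parser's favorite reading and only then consulting world knowledge, you score every candidate reading by parser score and world-knowledge prior together and pick the joint winner.

    # Hypothetical sketch of joint decoding. Both scoring functions are
    # assumed inputs, standing in for a parser and an ontology-derived prior.
    def best_reading(candidates, parser_score, world_prior):
        return max(candidates, key=lambda r: parser_score(r) * world_prior(r))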

Now, I don't think that Cyc had gotten particularly close to solving that problem either, and, in fact, it was a bit uncharacteristic for a "Cycler" to talk about statistical priors at all, as their work hadn't even gotten into the territory of collecting those kinds of statistics. But, as a theoretical point, I thought it was very valid.


I played with OpenCyc once. It was quite hard to use because you had to learn things like CycL and I couldn't get their natural language processing module to work.

The knowledge base was impressively huge but it also took a lot of work to learn because at the lower levels it was extremely abstract. A lot of the assertions in the KB were establishing very low level stuff that only made sense if you were really into abstract logic or philosophy.

They made bold claims on their website for what it could do, but I could never reproduce them. There was supposedly a more advanced version called ResearchCyc though, which I didn't have access to.


That was exactly my reaction to it: it seemed to require sooooo much background knowledge about the entire system to do anything. And because you were warned about issues with consistency, it seemed some things had just been fudged, which felt like a quick way to an application that couldn't work. The learning curve seemed daunting.


Some of us who worked on Cyc commented in an earlier post about Doug's passing.


Wolfram is able to write it in such a way that somehow it is mostly about him. :-(

There is some overlap between Cyc and his Alpha. Cyc was supposed to provide a lot of common-sense knowledge, which would be reusable. When Expert Systems were a thing, one of the limiting factors was said to be the limited amount of broader knowledge of the world: knowledge a human learns by experience, interacting with the world. This would involve a lot of facts about the world and also about all kinds of exceptions (example: a mother is typically older than her child, unless the child was adopted and the mother is younger). Cyc knows a lot of 'facts' and also many ways of logical reasoning, plus many logic 'reasoning rules'.

Wolfram Alpha has a lot of knowledge about facts, often in some form of maths or somewhat structured data.
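
Going back to the mother/child example: a toy sketch of that kind of default-with-exception rule might look like this in Python (purely illustrative, nothing like actual CycL):

    # Default rule: a mother is older than her child, unless a known
    # exception (adoption) applies, in which case no conclusion is drawn.
    def mother_older_than_child(facts):
        if facts.get("child_adopted"):
            return None  # exception blocks the default
        return True      # default conclusion holds

    print(mother_older_than_child({}))                       # True
    print(mother_older_than_child({"child_adopted": True}))  # None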


Ok, but let's avoid doing the mirror image thing where we make the thread about Wolfram doing that.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


Well, it's a disappointing and shallow read, because the topic of the usefulness of combining Cyc and Alpha would have been interesting.


Wolfram writes good historical articles. One just needs to put on some glasses that filter out the annoyance part of the spectrum.


Human doctors using AI to sound more humane was not something I'd have guessed. And it's not just a one-off case: https://www.nytimes.com/2023/06/12/health/doctors-chatgpt-ar...

And it is fun getting chatgpt to inject compassion into unexpected scenarios. https://chat.openai.com/share/83a5b1f2-9b5a-4ebd-947c-b68fd2...

