My hot take on this new era of search engines is that "search is a bug" and even trying to be a search engine is a fool's errand. Search solved a problem of the legacy internet where you wanted information and that information would be on one of a million websites.
If someone is going to disrupt Google, it will be because they cut out the middleman that is search results and simply give you what you're asking for. ChatGPT and Perplexity are doing the best here so far, afaik.
Search is still better for getting to specific, existing documents you need. Even the RAG people have been finding that out, with hybrid models becoming more popular over time. I also think you can update a search index more cheaply than you can further pretrain an LLM.
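(For the curious: a minimal sketch of what "hybrid" usually means here, blending a lexical score with an embedding-similarity score. The weight and the scoring functions are hypothetical placeholders, not any particular product's API.)

```python
# Minimal hybrid-retrieval sketch: blend a keyword score (think BM25)
# with an embedding similarity. Everything here is illustrative only.
from typing import Callable, List

def hybrid_score(keyword_score: float, vector_score: float,
                 alpha: float = 0.5) -> float:
    """Convex blend of lexical and semantic relevance, both in [0, 1]."""
    return alpha * keyword_score + (1 - alpha) * vector_score

def rank(docs: List[str], query: str,
         kw: Callable[[str, str], float],
         vec: Callable[[str, str], float],
         alpha: float = 0.5) -> List[str]:
    """Order docs by blended relevance to the query."""
    return sorted(docs,
                  key=lambda d: hybrid_score(kw(query, d), vec(query, d), alpha),
                  reverse=True)

# Toy stand-ins: word overlap for the lexical side, a fake similarity
# for the vector side. Real systems use BM25 and dense embeddings.
docs = ["piano tuning guide", "llm search costs", "hybrid rag retrieval"]
kw = lambda q, d: len(set(q.split()) & set(d.split())) / max(len(q.split()), 1)
vec = lambda q, d: 0.9 if "rag" in d else 0.1
print(rank(docs, "hybrid rag", kw, vec))  # hybrid doc ranks first
```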
Not to mention that the cost per search, in terms of compute and energy, is much smaller for web search than for running an LLM. I forget the exact numbers now, but it was orders of magnitude, as I recall.
Search engines are just cheaper to run. I don't know that there's a good long-term model for a free LLM-based search replacement because of how much higher the operating costs are, ad-supported or not.
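Back-of-envelope, with loudly hypothetical per-query figures in the ballpark of commonly cited public estimates (don't quote these; they're assumptions for illustration only):

```python
# Illustrative arithmetic only; the per-query energy figures below are
# assumptions, not measurements.
WEB_SEARCH_WH = 0.3   # assumed Wh per classic web search
LLM_ANSWER_WH = 3.0   # assumed Wh per LLM-generated answer

queries_per_day = 1_000_000
web_kwh = queries_per_day * WEB_SEARCH_WH / 1000   # Wh -> kWh
llm_kwh = queries_per_day * LLM_ANSWER_WH / 1000

print(f"web search: {web_kwh:,.0f} kWh/day")   # 300 kWh/day
print(f"LLM answers: {llm_kwh:,.0f} kWh/day")  # 3,000 kWh/day
print(f"ratio: {llm_kwh / web_kwh:.0f}x")      # 10x under these assumptions
```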
These are great reasons why this business will be hard, but given how ChatGPT and Perplexity are making inroads into search traffic, you can't deny it's an experience consumers prefer.
I agree that there's interest in it. I found ChatGPT and AI search very convenient in some situations where I used them; other times they hallucinated. I have no idea what customers prefer, though, until I see large-scale surveys by companies that aren't pushing AI.
It could also become a differentiator allowing multiple suppliers. On one hand, you have engines doing plain search for quality results; other search engines include the AI results. The user could choose between them on a job-by-job basis, or the search provider might allow third-party AI search as an option, like !G in DDG.
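Something like DDG's bang syntax would make the per-query choice trivial. A hypothetical sketch (the prefix and the handlers are made up for illustration):

```python
# Hypothetical per-query router in the spirit of DDG bangs: a "!ai"
# prefix opts a single query into AI search; everything else gets the
# classic index. The handlers are stand-ins, not a real API.
def classic_search(q: str) -> str:
    return f"[ten blue links for: {q}]"   # stand-in for an index lookup

def ai_answer(q: str) -> str:
    return f"[LLM answer for: {q}]"       # stand-in for a model call

def route(query: str) -> str:
    if query.startswith("!ai "):
        return ai_answer(query.removeprefix("!ai "))
    return classic_search(query)

print(route("rust borrow checker"))       # classic results
print(route("!ai rust borrow checker"))   # AI answer, per user's choice
```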
The bigger problem I have is with scale for the dollar. Search companies with their own indexes have already mostly failed. There are a few using Bing. It's down to just three or four with their own index. Massive consolidation of power. If GPUs and AI search cost massively more, wouldn't that problem get even worse?
This is a great point, but I wonder how much of that kind of search intent is part of Google's traffic. If that becomes the only reason people use Google, I wonder if they'll go the way of Yahoo. Maybe that's hyperbolic, but there was a time when Yahoo's dominance seemed unquestionable (I'm old).
To be clear, I'm not arguing that everything should be part of a pretrained LLM, but the experience of knowledge searching that ChatGPT and Perplexity provide is pretty superior to Google today (when it works).
I agree with the premise that numbers aren't a replacement for actually knowing what's going on, but I think the author makes a false equivalence: that if you're a manager who wants metrics on your people, it must be because you don't know what's happening.
Metrics are like supporting arguments for whatever narrative a manager is telling. I've used employee metrics to help make the case for a promotion, or to support my case to HR for why someone should be fired (I've never fired someone because of metrics). They reinforce observation. Metrics get toxic when a manager starts telling ICs that their performance is their metrics. A whole host of bad things happens then.
And if you're telling me bad managers will suddenly become good managers if you take away their employee metrics, you've got a surprise coming.
As a former line manager, there were two cases where I used metrics: when I was promoting someone and liked having numbers that backed up how good they were, and when I was firing someone and liked having numbers that backed up how bad they were.
I generally agree with OP, but there are times when, as a manager, you know exactly what is going on with your team and numbers are still helpful.
Lately, I've been thinking about how you might bring notebooks to web browsing... something that would make it easier for an LLM to interact with your browsing and thought process. This post was timely because it got me thinking more about how that idea relates to rabbit-holing.
When I got my first EM job in 2011, implementing true agile was the dream. I preached it like a zealot to every PM or business leader who came my way. I insisted that not only would we get more done, but we'd get better quality stuff. It was a beautiful dream.
My idealism didn't fail all at once; it died by a thousand cuts. There are lots of scenarios where trying to bootstrap development into agile is simply more difficult without any clear gain... at least when it comes to principles around measuring progress.
In all my time in the industry, I've never met someone who lived true agile for more than a year who wasn't at a very small startup.
I worry that the uncomfortable truth AI is revealing is that essay writing is a less essential skill than we'd like to believe. We can say things like "personal gain above all" and blame the pursuit of a diploma for our downfall, but I can't help wondering if it would be better to just get rid of essay writing altogether, or to wait until the student knows what kind of writing they want to do.
Of course you prefer teaching without grades; the only people in the room are people who want to be there.
Really it's not revealing anything at all about the value of essay-writing as a skill. It's just revealing that people will cheat in ways that are hard to directly prove, and grading writing is really hard when people have access to an infinite bullshit generator.
I don't know about this. Take away the moral gatekeeping of calling it cheating and look only at outcomes. If students use AI instead of doing it themselves, are they worse off in a material way that only essay writing could provide? If they aren't, couldn't we call essay writing busy work at worst and elective at best?
Essay writing shows that you actually know the domain material, that you are capable of holding onto a thought for more than 3 seconds, and that you can communicate thoughts clearly to other people.
I never wrote a single essay for a math class, so I think there are alternatives.
But I think you're missing my point. If a student can go through their education never writing an essay the way everyone thinks they're meant to, and end up doing just fine in life, maybe the merits of essay writing aren't all they're cracked up to be? Save the skill for specializations where the students are more motivated to learn it, like metalworking or pottery.
I'd argue that if you were ever asked to show your work, you were writing the equivalent of a mathematical essay. Maybe you never had to learn proofs, but I did.
“I didn’t have to do that and I turned out fine” isn’t a very rigorous pedagogy.
Sure, but now we're stretching the definition of "essay" past what's useful for the topic at hand. This is a thread about AI checkers for essays, not math proofs.
And no, it isn't rigorous, but it's a pretty good hint that you've got correlation and not causation. Perhaps it's worth entertaining the possibility there's a better way.
I kind of agree with this and am willing to take it even further: Why should I even study a subject that I can just ask a computer to explain to me when I need it? AI isn't quite there yet in terms of reliability, but there may be a point where it's as reliable as the calculator app, at which point, does it even make sense for me to study a subject just to get to the mastery level that is already matched by an AI system?
If I need to know when Abraham Lincoln was born or when the Berlin Wall fell, I could either 1. memorize it in high school history class to demonstrate some kind of "knowledge" of history, or 2. just look these things up or ask an AI when I need them. If the bar for "mastery" is at the level of what an AI can output, is it really mastery?
> Why should I even study a subject that I can just ask a computer to explain to me when I need it?
Because studying a thing is a world apart from having it explained. When you study a thing to gain understanding, your understanding is not only deeper but you are also learning and practicing essential skills that aren't directly related to the topic at hand.
If you just have a thing explained to you, you miss out on most of the learning benefit, and the understanding you end up with is shallow.
This is, sadly, an idealized notion of education that just doesn't match the reality of a general-ed classroom. Students don't study to gain understanding in a majority of their classes; they study to pass. True, not all students all the time, but in the world you just described, no amount of extrinsic motivation can force a student to deeper understanding, so why are we even talking about AI checkers?
Unless you're telling me you never did that in any of your classes growing up, but I'm going to be highly dubious of such a claim.
> Unless you're telling me you never did that in any of your classes growing up
I did extremely poorly in school, actually. It wasn't an environment that I could function in at all. But I got a great education outside of school.
I'm really talking about what's needed in order to get a good education rather than anything school-specific. Technically, school itself isn't needed in order to get a great education. But you do want to get educated, whether school is a tool you employ to that end or not.
"If you want to get laid, go to college. If you want an education, go to the library." -- Frank Zappa
But, outside of reading, writing, and arithmetic, the most valuable thing I learned in school was how to learn. So, that's my bias: the most important thing you learn in school is how to learn, and much of what teachers are doing in the classroom is trying to teach that.
My fundamental point is that what we need in order to learn is not just getting answers to questions. That approach alone doesn't get you very far.
I don't think we're too far at odds. I think the difference is that I'm talking about the classroom...especially general education, where AI essays are the problem. To your point, not every student chooses to spend time at the library, and you can't make them.
When I was younger, I was a bit of an idealist about education reform. As I grew older, I began seeing the failings of education as a reflection of human nature. Now, I just don't think we should waste students' time trying to make them do something that, for whatever reason, they cannot or will not do the way we want them to.
> If you just have a thing explained to you, you miss out on most of the learning benefit, and the understanding you end up with is shallow.
Sorry, but I don't get this. Isn't this exactly what the teachers/lecturers and books do - explain things?
Sure, you have to practice similar things to test yourself if you got everything right. And, of course, it's different for manual skills (e.g. knowing how to make food is kind of different from actually making food).
But a language model trained on education materials is no different from a book with a very fancy index (save for LLM-specific issues, such as hallucinations), so I fail to see the issue with being able to get answers to specific questions. As long as the answers are accurate, of course.
And - yeah - figuring out if the answer is accurate requires knowledge.
> Isn't this exactly what the teachers/lecturers and books do - explain things?
In part, sure, but not solely. I wasn't saying that getting an explanation is a bad thing; I was saying that only getting an explanation doesn't advance your learning much.
> And, of course, it's different for manual skills
I don't think that's different. In this regard, it's the same for intellectual skills as for manual ones.
> I fail to see the issue in ability to get answers for specific questions. As long as the answers are accurate, of course.
There's nothing wrong with getting answers to questions. But that's not the process that leads to learning anything other than the specific answers to those specific questions.
Getting an education is much, much more than that. What you are (or should be) learning goes far beyond whatever the subject of the class is. You're also learning how to learn, how to organize your thoughts, how to research, and how the topic works at a deep enough level that you can infer answers on it even when you've not been told what those answers are.
If what you're learning in class is just a compendium of facts you can look up, you're missing out on the most valuable aspects of education.
Why lift weights when I could just use a forklift?
At some point someone actually has to do some thinking. It's hard to train your thinking if you just offload every simple task throughout your entire education.
So you're saying you've never used StackOverflow in your life?
I find your analogy works against your point, because manual labor does use a forklift and other heavy machinery whenever possible. It's better for human health (and the backs of blue collar workers) that way. Now the only people lifting weights in gyms are those who choose to be there for their health and not because they're forced to.
If you’re, say, not clear on whether Abraham Lincoln was president when the Berlin Wall fell, you might have trouble asking the AI a good question to begin with.
This line of thinking will leave you like some of the high school kids my wife works with, who can't solve 19 + -1 without a calculator. If you don't integrate anything into your understanding, you will understand nothing.
Writing teachers have a lot to learn from math teachers, who have had to fight against the crutch of calculators for decades. The future here is obvious: more in-class tests to evaluate writing ability instead of take-home essays. For take-home work, require "showing your work" and the research that went into the writing. Show the steps.
Yes, you could still use AI for any take-home work, but I think what forces students to AI isn't a lack of will to do the work. It's the "I have a 5-page paper due tomorrow and I haven't started" cliff.
I mean, writing education is messed up in so many ways, but if I'm being realistic, this is the path forward.
Seriously, the answer is to have students turn in essays in stages. The first stage is to write on paper by hand, even if there isn't a lot of research or fact behind it yet: lay out the argument using what they remember from the initial research, then look it over with the kid and have them explain it a bit. In the next stage, they do closer reading and research and fill in the gaps. Finally, they polish the essay and turn in the final draft.
It's hard to use AI to do this, and even if a student did use AI effectively like this, they'd still learn a lot about the topic, because they're forced to think about it.
Imagine, rather, that you have these kids actively use AI to help them write an essay, but make them provide the transcripts so you can see how they think and use the tool. Perhaps even sit with them and use the AI tools together, showing how they can serve as a guide, so long as you don't take the responses as fact the way you would an encyclopedia. Use AI like a parent who gets the gist but doesn't quite remember the details: you wouldn't trust it implicitly, just use it as a guide. Perhaps someone should make an AI that isn't so sure of itself, one that suggests ways to research things yourself and points you toward the cardinal direction closest to your question?
Me too, but as an adult now I see the value in teaching or even forcing kids to not procrastinate :) I still do it, but I know in my heart that I'll hate myself for it later!
I think Kagi is correct that the way we explore information on the internet will look very different in X years with all the changes LLMs will bring. The real question is what it will look like.
I don't think it looks like search today. Google got where they were because they were 10x better than everything else and had an experience focusing on what mattered at the time. I don't think the 10x experience will look like ten blue links. I don't know what that next experience is, but I'll know it when I see it.
Not sure how I feel about this. As the author acknowledged at the end, the piano's tuning may have been "good enough" at the beginning, such that no one would really notice the difference. What, then, leads us to say this piano tuner is essential and should not be replaced by something that can get the piano tuned to the level it was at the start of the essay? Especially considering that not many could afford the tuner's exorbitant Sunday rate. We shouldn't pretend ROI doesn't matter.
I want to agree with the thesis of the essay, but this anecdote didn't get me there.
To the point of the article, sure, it was good enough, but it could be better, and if we're not striving for the best - if we're not valuing the expertise and practice that allows someone to hear a fractionally out-of-tune piano and recognize it immediately; if we're not valuing the technical expertise to then listen to that piano, recognize all of the tells, and know how to adjust them; if we're not valuing the notion that the piano sounding better is a net good in and of itself - then what's the purpose of us? In the long run, if we're just going to settle for mediocrity, what's the purpose of anything we're doing at all?
Sure, make the device that tunes the piano to the imperfect level it was at the beginning. Better piano tuning is a net good. But recognize the difference between a thousand pianos tuned very well and one perfectly tuned piano, and don't pretend the first is a replacement for the second.
(And, to the obvious point that, like, everyone in this thread is missing: Substitute "piano tuning" for anything you actually value. The point is excellence, not pianos.)
Poor tuning often leads to cascading failures. If an instrument is out of tune it limits your technique because your brain is expecting frequency X but you are hearing frequency Y. To play quickly and accurately you have to be able to trust the instrument.
Tennis players are really particular about how their rackets are strung. While nobody in the crowd will notice an unbalanced racket, the crowd will notice if they hit the ball out. Consistency is important.
You could say the whole point of art, and mastery in it, is to seek something beyond a "pretty good". The pianist could tell the difference and so could the person tuning the piano. Similar to how you can print a copy of a photograph taken of a master's painting, there is a difference between a "pretty good" and "the best".
Yeah, this has shades of "shining your shoes for the fat lady" to borrow a Salinger reference, and I understand that within the context of performance, but maybe I expect this to generalize farther than it does. There are many domains where a low cost, good enough solution is better. Not everyone can afford a piano tuner's exorbitant Sunday fee...
If we are going to bring cost into it, then digital pianos are superior in terms of tuning (one might argue about other aspects of sound and feel). Same tuning every time, no adjustment needed.
>If we are going to bring cost into it, then digital pianos are superior in terms of tuning (one might argue about other aspects of sound and feel). Same tuning every time, no adjustment needed.
I think they tune them slightly differently depending on what's going to be played. But I suppose a master pianist could have a whole database of their preferences saved up or something.
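The equal-temperament math itself is simple, so a preset database is very plausible. A hypothetical sketch (the preset names and stretch values are invented for illustration):

```python
# Equal temperament is real math: f = 440 * 2**((n - 69) / 12) for MIDI
# note n. The presets and the crude linear "stretch" below are invented
# for illustration (real stretch tuning follows the Railsback curve).
def equal_temperament(midi_note: int, a4_hz: float = 440.0) -> float:
    return a4_hz * 2 ** ((midi_note - 69) / 12)

PRESETS = {
    "concert":   {"a4_hz": 440.0, "stretch_cents_per_octave": 0.0},
    "baroque":   {"a4_hz": 415.0, "stretch_cents_per_octave": 0.0},
    "stretched": {"a4_hz": 440.0, "stretch_cents_per_octave": 1.5},
}

def tuned_freq(midi_note: int, preset: str) -> float:
    p = PRESETS[preset]
    cents = p["stretch_cents_per_octave"] * (midi_note - 69) / 12
    return equal_temperament(midi_note, p["a4_hz"]) * 2 ** (cents / 1200)

print(f"{tuned_freq(60, 'concert'):.2f} Hz")  # middle C, ~261.63 Hz
```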
I bought a Kawai ca99 digital piano three years ago.
The feel is great. The simulated sound is pretty damn good, with a lot of attention to harmonics produced by undamped strings.
The built-in speakers do, however, quickly reach their limit when played at high volume, and distortion artifacts can be annoying.
I'm not a professional pianist. I tried a few pianos, both digital and vertical, and I have to say that this one is the best instrument I have played so far.
Even when they started mandating airbags in new vehicles, it took something like seven years to go into effect, so car manufacturers had time to plan. And even then, they didn't make cars without airbags illegal.
Even the most universally embraced ideas take time to roll out.