But with a teacher, I can get a feel for their qualifications after a while (or cross-reference whatever they tell me), and if they're wrong, I can go somewhere else.
Also, a teacher is less likely to gaslight me and will hopefully research the topic if they're not sure about something. With AI, it's always a coin toss.
I don't know, maybe I'm just an old person yelling at clouds, but I just find it crazy to use AI for learning anything. Maybe one day.
With GPT it's more like rolling a d20 than a coin toss.
19-20=You receive the enlightenment you were asking for, more or less instantly.
3-18=You get what amounts to a Socratic conversation with Wikipedia. Which, unless you're fortunate enough to have access to expert personal mentorship, is better than what you had before.
1-2=You get a load of highly-plausible bullshit. You end up worse off intellectually, possibly much worse.
All in all, I'm OK with these odds, especially at this early stage of the LLM game.
With an AI you also get a feel for what they typically know and what they'll probably fail at. The limitations are mostly consistent, at least on a per-model basis. Even more so than humans, since they never grow or learn anything new.
I feel like I've learned a ton of new things from GPT-4 in terms of software development recently, while also feeling I've been stagnating in my job with nobody to learn from. It's just so easy to have it explore alternative approaches, or have it structure things in ways you may not be comfortable with yet (and would thus otherwise avoid) but that would be better in the long run. It has the breadth to show you all that's possible while reducing the entry effort for learning to near zero.
It's just another useful tool, just like books, the internet, and real teachers are useful tools. They all have their pros and cons. The advantage of chat bots is that they're always there and scale to an arbitrarily large number of students. The advantage of teachers is that they're, well, people.
That seems very unlikely to me. Certainly LLMs are very compute intensive relative to past applications, but I would find it shocking if they are not nonetheless far more scalable than human labor.
Human beings are incredibly, incredibly expensive to "make" into any useful adult thing. I have spent more money just this week on a single one of the humans who are dependent on me than it would cost me to buy a fully packaged GPU in a box at Best Buy (and I think that box would actually have multiple GPUs packaged together in it). And I'll do that week after week for many more years. And then it will still be something like four more years after that before they're capable of being a teacher.
It was harder to make the first GPU than to make one marginal unit of human teacher, but it's vastly easier to make a marginal unit of GPU.
I'm honestly curious what your mental model is, where you seem to think human labor is cheaper than computer hardware...
Since you mention both pbcopy and iTerm - I love https://github.com/skaji/remote-pbcopy-iterm2. I do most of my work on a remote Linux server, treating my MacBook as mostly a dumb terminal, and being able to transparently copy from the remote to my local clipboard is so nice.
I have tried it, but for whatever reason I just don't like it. I prefer just running tmux in iTerm with no integration.
On the topic, you can also integrate tmux with the native clipboard - I have set copy-pipe to the remote pbcopy, so any selection made in tmux gets copied to my local clipboard. I also just found out that tmux supports this natively (https://github.com/tmux/tmux/wiki/Clipboard#the-set-clipboar...).
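For reference, the relevant bits of my .tmux.conf look roughly like this - a sketch, assuming tmux >= 2.4 and that the remote-pbcopy-iterm2 shim is installed on the server under the name pbcopy:

# pipe copy-mode selections through the remote pbcopy shim,
# which forwards them to the local macOS clipboard via iTerm
bind-key -T copy-mode-vi y send-keys -X copy-pipe-and-cancel "pbcopy"
bind-key -T copy-mode-vi MouseDragEnd1Pane send-keys -X copy-pipe-and-cancel "pbcopy"

# alternatively, let tmux use the terminal's clipboard escape sequences directly
set -g set-clipboard on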
I quite like Pop!_OS Shell (https://github.com/pop-os/shell) for tiling on Gnome; it feels like the right compromise for me of tiling while still having access to a full DE. It seems that installing it on other distributions should be easy enough.
Either Reddit is happy about the caching as you cost them less, or they're not happy about it and they just block your app if you do that.
Now, people could scrape the website or let users bring their own API keys, but then it becomes a cat-and-mouse game. And if you're trying to sell your app on any of the app stores, Reddit could likely get it taken down or take legal action.
$ curl -s -I 'https://www.sfgate.com/' -H 'User-Agent: curl/7.54.1' | head -1
HTTP/2 403
$ curl -s -I 'https://www.sfgate.com/' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/113.0' | head -1
HTTP/2 200
One "trick" is that Firefox (and I assume Chrome?) allow you to copy a request as curl - then you can just see if that works in the terminal, and if it does you can binary search for the required headers.
That's usually where those things fail for me. Still, I don't really consider them worthless - the goal is not to prevent you from wasting your time, but to make you aware that you're wasting your time, turning a muscle-memory action into something you actually have to think about.
In my experience, physical separation is best when you don't want to use your phone at all (for example, when going to bed or when you want to focus on the conversation over lunch), but that is not always possible - then apps like one sec, or other tricks like setting your phone to grayscale, moving icons around, focus mode, screen time... all serve to nudge your brain into considering whether you really want to waste time.
For making better use of your time... Eh. Everyone struggles differently of course, but I'm unlikely to go out and run, or do focused work, in the 30 minutes I would otherwise waste scrolling through Instagram. But if you make sure to have better alternatives (reading a curated feed, listening to an audiobook/podcast), then they can nudge you that way. Finding a better alternative is entirely up to you. I do find that writing down things you want to do, no matter how silly it sounds ("of course I want to read more books!"), helps, especially as you can always refer back to that list later when you're bored.
It seems to be just parsing Google News and displaying the articles in publication order instead of whatever smart order Google News is doing. I do like the design (or lack thereof) quite a bit.
That's a neat idea OP - I usually give https://www.economist.com/the-world-in-brief a quick scroll, but I quite like sourcing from several news sites, and the summaries are good.
Where and how are you getting the ~1000 news articles you feed to GPT-4? I think it would go a long way for transparency to list that somewhere on the website. Also, are you using international news agencies? Quite a lot of them publish an English feed too.
I would also love to see the difference a different geographical prompt would make ("in the context of China/India/Asia/Europe... how would you rate this article?"), or a political-ideology prompt ("how would you rate this article for a Republican/Democrat/Libertarian/Socialist...?").