The content of the "news" has little to do with its conveyance protocol. Whether you get it from foxnews.com or foxnews.com/rss (insert your favoured antagonist here), it's the same content.
You're not really looking at the big picture. RSS lets you escape social media filter bubbles, lets you read articles without an internet connection (which removes a lot of the algorithmic capture of engagement), and lets you precisely control where your information comes from. You have to actively seek out new sources of information instead of having them fed to you.
> Could you share what you find in kagi indispensable?
The academic lens is like Google Scholar, but better. The papers it surfaces are simply higher quality.
Otherwise, append your query with a question mark. The baby AI will do what Google's AI tries to do, except with a little more skill and better citations.
Most broadly, however, search. It's kind of wild but I forgot that searching the internet used to be fun. Kagi made it fun again.
For me, it is not even any particular feature, but just doing a search and instantly getting exactly the results that I need, without crap.
Part of this is probably also the option to give higher priority to some websites, like python.org.
When I subscribed to Kagi, I was totally pissed off and stressed by using Google, where crap and unrelated ad links are now everywhere on the page, and the first links are often garbage copycats of the principal websites. For example, for Python, when looking for a module's documentation, the official doc is the best, but hundreds of ad-filled shitty pages would appear first.
No ads, no forced "AI summary", doesn't sell my data to anyone. There are many other quality of life features, I don't consider any of them personally indispensable except for the ability to permanently remove a site from results, which I have used a few times.
I haven't used Google in many years so I can't directly compare the search quality, but Kagi is good enough that I've never had any reason to try something else since I've started using it.
Edit: I also use the !w Wikipedia bang constantly, I forgot that was a Kagi feature and not my browser. Obviously Kagi is not the only search engine with this feature though.
I don't use Kagi anymore, but the main things I miss are the absence of "Popular products" (which is completely useless and comes up far too often when using Google to find retail products) and the ability to blacklist websites from the search results. Outside of that, the results were largely similar.
Quality search results while not making myself a Google user. De-googling is quite important to me. Besides that, basically what everyone else said: it upranks or downranks domains based on my settings, the Claude AI answers are pretty good, and the no-bullshit interface.
It's also hard to forget once you've set it as your default search engine. So I imagine that it'll mostly benefit people on the limited (300 searches/month) tier, where you might want to ration your searches.
It's funny, I don't even look at the Kagi logo anymore. I just see search results and occasionally notice that I'm using Kagi because I see the token in the request URL.
Some of the comments here about ISP behaviour are crazy. Australia has had our fair share of fucking up the national internet infrastructure but at least I can pick pretty much any ISP and use any router I like. Haven't used an ISP supplied router in something like 15 years.
All over the US I have always been able to use my own cable modem and router. OP's situation is unusual; I am guessing it's some bundle they have for a discount, but if they were paying standard (i.e. ripoff) rates they could use their own equipment.
This thread made me realize dslreports.com has "closed".
Used to be you could find out there what works and what doesn't down to the chipset variations. My experience was same as yours, as long as I matched provider capabilities, it worked.
> This thread made me realize dslreports.com has "closed".
Yeah. I saw it mentioned in a response to Karl Bode (TD). Sorry to see it gone.
I joined DSLR in '04 and dumped more hours there than anywhere else, ever. It wasn't the same after the database crash in the mid-'10s. When they shuttered the new-music thread, I finally moved on.
I've never had an issue with using my own hardware here. It's definitely one of the only good things about Australian internet.
Regionally, ISP choice is a total crapshoot, in my experience. Even in the massive regional cities it's often appalling. People living in rural or remote areas might as well not exist. If I moved somewhere that only Telstra serviced, I'd seriously consider just not having internet at all. It's roughly equivalent to paying Telstra in terms of internet access, but it sure is cheaper!
> Some of the comments here about ISP behaviour are crazy.
It depends on the ISP. Over 25 years of IT support I've had to fight with about 30% of them to bring in my own device.
Most notable screwery was with Verizon DSL. They'd lease a new public IP every time we tried an incoming connection. As fast as I could record the new IP in the remote config and reconnect - my IP would change. I was able to push it past six changes/min.
I mean, technically in Australia these days the NBN box is the "modem" for all intents and purposes, if you have FTTP.
You don't actually need a secondary modem and can plug your pc directly into it, takes a lot of the pain out of it and reduces the need for ISP supplied modems.
Erode institutions to erode public trust in institutions. You then a) free up budget to spend on your buddies, and b) de-risk regulation and/or public criticism from those institutions.
Ansible works best when you only have Linux/BSD systems to manage, due to its heritage and doing everything through SSH.
If you have other systems to manage, like Windows or VMware ESX etc., it feels like a kludge with the delegation to localhost to get to the plugins.
Also, it can be tricky to use if your Linux systems have different Python interpreter versions because it's not at all straightforward to override the python interpreter used.
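That said, the interpreter can be pinned per host or group in the inventory rather than relying on auto-discovery; a minimal sketch (hostnames and interpreter paths are assumed for illustration):

```yaml
# inventory.yml -- pin the Python interpreter where discovery picks the wrong one
all:
  hosts:
    legacy-box:
      ansible_python_interpreter: /usr/bin/python3.6
    modern-box:
      ansible_python_interpreter: /usr/bin/python3.12
```

The same variable can also be set in group_vars or ansible.cfg, which is usually less painful than per-host entries once the fleet grows.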
Looping constructs and sub-tasks etc. are also awkward to use, and the initial setup for a small automation project might be overwhelming for newcomers.
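For simple cases the looping is tolerable, though; a sketch of the common pattern (package names assumed):

```yaml
# install several packages with one task instead of repeating it
- name: Install base tooling
  ansible.builtin.package:
    name: "{{ item }}"
    state: present
  loop:
    - git
    - htop
    - tmux
```

It's the nested loops and loop-inside-include cases where the awkwardness really shows up.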
On the other hand you get a massive community with plugins for almost every conceivable system/OS, so that's definitely a huge plus
For sure not everything; I get a lot of mileage out of using SSM into EC2 hosts which don't even have Internet-facing addresses.
I am thankful that I've never had to manage Windows, but I don't believe they're managed as localhost since, if nothing else, Ansible doesn't offer control-node execution on Windows.
Word of advice: whatever you use, try to be relatively consistent with committing your changes. A traversable git history and a clean enough state to keep track of what's fresh can be super helpful, especially when troubleshooting or coming back after messing around with whatever.
And never commit inline secrets. Find one of the ways to inject/separate/template secrets that's workable for you and stick with it.
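One common pattern (the file layout here is just an assumed example) is Ansible Vault with an indirection layer, so the repo stays greppable while the actual values are encrypted:

```yaml
# group_vars/all/vars.yml -- committed in plaintext, only references
db_password: "{{ vault_db_password }}"

# group_vars/all/vault.yml -- encrypted via `ansible-vault encrypt`,
# holds the real value:
# vault_db_password: s3cret
```

The vault password itself lives outside the repo (e.g. passed with --vault-password-file), so the encrypted file can sit safely in git history.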
For home use I'd definitely recommend Ansible. I've used both, and for a handful of servers/VMs with a single administrator I'd always pick Ansible. Realistically what you want is just a tool that stores a configuration in some VCS and a way to apply that configuration quickly.
Ansible lets you do that with just "pip install ansible". Puppet requires you to set up some infrastructure, which you need some way of bootstrapping.
Personally I also find Ansible easier to work with, in the sense that you can more easily do stuff in small increments. Upgrading Ansible is also a lot simpler. Puppet is really cool and powerful, but for home stuff or a small business I'm not sure the overhead of managing Puppet itself is worth it.
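To give a sense of scale, a first playbook can be a single file run straight over SSH with no server-side setup at all (the host and package names here are assumed):

```yaml
# site.yml -- run with: ansible-playbook -i 'myserver,' site.yml
- hosts: all
  become: true
  tasks:
    - name: Ensure unattended-upgrades is installed
      ansible.builtin.package:
        name: unattended-upgrades
        state: present
```

Growing from there is incremental: add tasks, then split into roles once the file gets unwieldy.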
Puppet is a project to implement. Once it's implemented, it's great. It requires dedicated infrastructure to work properly.
Ansible is much easier to step into slowly if you don't have time to implement a full deployment project before you can use it. You can start without dedicated infrastructure and add later if you want more centralized management.
https://github.com/ansible/awx#readme is also helpful if one needs "runbook" behavior to allow arbitrary audiences to run playbooks without the headache of installing something and dealing with local cred management. Interestingly, it also offers a "callback" system to allow machines to request a playbook upon themselves in cases where ansible-pull isn't appropriate/helpful
Ansible is a little bit easier to get started with for a few reasons (YAML instead of a DSL, SSH only, ...); it's also mostly tasks running through SSH, which may feel a bit more "natural" when you are configuring machines initially.
In terms of idempotency, I find it is easier in Ansible to mess something up, but if you are careful they are on par. I've used mostly Ansible over the past few years in various jobs, but the first one I got in touch with ~10 years ago was Puppet, and it felt very solid, especially for heterogeneous environments (we had different archs and OSes to manage). Puppet also scales very well because it has agents as well as many tools for managing large infrastructures, which I tried and which were already quite convenient 10 years ago.
TL;DR: probably use Ansible unless you have a very specific use case.
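On the idempotency point, the difference in Ansible is usually between declarative modules and raw commands; a sketch (paths and names assumed):

```yaml
# idempotent: the module checks current state before acting
- name: Ensure nginx is present
  ansible.builtin.package:
    name: nginx
    state: present

# a raw command re-runs every time unless you guard it yourself
- name: Generate a DH params file
  ansible.builtin.command: openssl dhparam -out /etc/ssl/dhparam.pem 2048
  args:
    creates: /etc/ssl/dhparam.pem
```

Sticking to modules where they exist, and guarding commands with creates/removes/changed_when, is most of what "being careful" amounts to.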
I am a pretty big proponent of Ansible, but I have given serious consideration to making a fork that uses Starlark instead of YAML, because the "what can I type here?" story is terrible as-is. Most of the Jinja expressions are already basically Python, so it seems like a natural fit.
That will keep SSH installed, a service running, and the puppet config file managed forever. If you accidentally replace the config file it will be fixed and the service restarted. If you remove the package it will be reinstalled, config file updated, and service started.
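The kind of manifest being described might look like this minimal sketch (resource names and paths assumed):

```puppet
# keep the package installed, the config managed, and the service running
package { 'openssh-server':
  ensure => installed,
}

file { '/etc/ssh/sshd_config':
  ensure  => file,
  source  => 'puppet:///modules/ssh/sshd_config',
  require => Package['openssh-server'],
  notify  => Service['sshd'],   # restart the service on config change
}

service { 'sshd':
  ensure  => running,
  enable  => true,
  require => File['/etc/ssh/sshd_config'],
}
```

Every agent run re-converges toward this state, which is what makes drift self-healing.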
Not a fan of Ansible; it's more of a "reinstall, then run this playbook" workflow, which in environments that don't reinstall very often can be painful. Since Ansible playbooks don't know the current machine's state, they never know exactly what commands to run.
Generally if you spin up containers or similar short term servers I think ansible is fine. If it's a larger and more complicated environment with longer lived servers I'd use puppet.
Oh, one other thing I like about Puppet: if you say apply X to all nodes, you literally cannot run Puppet (it will fail to compile the manifests) if you try to override it. Which security auditors LOVE.
I've repeatedly tried to make Puppet work for "small" installations, be it a single dev laptop, our fleet of default base VMs (before containers were a thing), or home use. You couldn't really do anything without community packages, and their update rate was kinda meh. A lot of churn.
Puppet always seemed to me like a scalable enterprise solution, but you either need a team to keep it in check, or just never update.
I'm not specifically recommending Ansible, but it's what I've been using for a couple years, and I would always recommend that for home use over Puppet.
Disclaimer: I last used Puppet in 2017, but for like 7 years extensively before that and everything I mentioned happened several times.
Second disclaimer: by now I think parametrizing everything is a huge mistake. The real worth is editing the config in one place and having it in SCM. If you only parametrize 2 lines out of a 100-line config file and basically copy it over with 2x sed, that's fine.
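The "2x sed" approach is literally this simple; a sketch with assumed file names and placeholder tokens:

```shell
# a config template kept whole in SCM, with only two placeholders
cat > nginx.conf.tmpl <<'EOF'
server_name __HOSTNAME__;
listen __PORT__;
keepalive_timeout 65;
EOF

# render it by substituting just those two lines; everything else is verbatim
sed -e 's/__HOSTNAME__/example.internal/' \
    -e 's/__PORT__/8080/' \
    nginx.conf.tmpl > nginx.conf
```

Because the rest of the file is copied untouched, a diff against upstream defaults stays readable, which is the whole point of keeping it in SCM.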
Ansible looks like the default choice for this use case, but for my personal server (running in a VM) I'm planning to migrate away from Ansible and try Puppet. I tried to use Ansible for a few years and abandoned it because it was too tedious to maintain configuration in Ansible. Puppet of course also requires time, but it feels so much easier, mainly because it has a Ruby-like DSL (not YAML).
I think that might be a myth? If I tried buying a $0.50 candy bar with a hundred-dollar bill, I think that the cashier might refuse it and I don't think they'd get in trouble for doing so.
I thought the "legal tender" argument only worked in regards to debts to the government, though IANAL.
For the AI uninitiated: is this something you could feasibly run at home, e.g. on a 4090? (How can I tell how "big" the model is from the GitHub or Hugging Face page?)
I tried using Hunyuan3D-2 on a 4090 GPU. The Windows install hit build errors, but it worked better on WSL Ubuntu. I first tried CUDA 11.3 but got a build error; switching to CUDA 12.4 worked. I ran it with their demo image, but it reported that the mesh was too big. I removed the mesh size check and it ran fine on the 4090. It is a bit slow on my i9 14k with 128G of memory.
(I previously tried the stability 3d models: https://stability.ai/stable-3d and this seems similar in quality and speed)
The hunyuan3d-dit-v2-0 model is 4.93 GB. ComfyUI is on their roadmap, might be best to wait for that, although it doesn't look complicated to use in their example code.