Happy Kagi user here. I'm gladly paying $25 per month for all their AI features, which work well for me overall. Yes, I could set up API keys on OpenAI, Anthropic, Google and Mistral and get a similar experience for less, but I prefer the convenience of their interface and having clean search results bundled into the experience. I will continue to recommend them and hope the T-shirt becomes available soon.
My thoughts exactly. It's stomach-churning to hear people talk about improving search and privacy for all, before putting it behind a prohibitively expensive (and probably inordinately profitable) subscription.
I'll just say the quiet part out loud: expecting people to pay $10+/month for a search engine is a pipe dream that rules out 95-98% of the world population. People buy food with that money, they pay rent, they live lives that aren't tethered to a search engine in a meaningful way. Google "wins" their traffic because those users don't care, and every bit of friction between them and their content is extra work. Kagi's payment-upfront mentality is unrealistic for everyone except the most well-paid Bay Area engineers.
That's not to say I don't understand the "avoid ads at all costs" concept. I do oppose using anti-advertising sentiment as a populist rallying cry so people will line up at your Search SaaS kiosk and pay you whatever you ask for. At this point you really might as well just invest in your own Searx instance, it's plenty cheaper. And you can't even "dropbox comment" me, since third parties have been providing search for free since before HN was a website.
> I'll just say the quiet part out-loud: expecting people to pay $10+/month for a search engine is a pipe-dream that rules out 95-98% of the world population.
So what? Why do you get upset about it when nobody is forcing you to buy it? Most people will not be interested in paying for search, whether they can afford it or not. That's just what a niche product is: most people will not be interested. What I produce in my job is certainly uninteresting to 95-98% of the world population, and the same is probably true of your job.
> Kagi's payment-upfront mentality is unrealistic for everyone except the most well-paid Bay Area engineers.
It's ten dollars.
> At this point you really might as well just invest in your own Searx instance, it's plenty cheaper.
Yes, that might be a good solution for 95-98% of the world population.
> So what? Why do you get upset about it, when nobody is forcing you to buy it?
Because this isn't a solution. Kagi doesn't save people from advertising, it creates a premium workaround and sells it at an arbitrary price per customer. It's software-as-a-service, a SaaS, built more for the 1,000 true fans than for the 100,000,000,000 clueless web users. That's just another business - perhaps a kinder and more transparent business - but a sinkhole of regressive moneygrubbing all the same.
> It's ten dollars.
Which is ten dollars more (per month!) than most people pay for a search engine. If you're the sort of person who just flippantly subscribes to that, then yes, in my eyes you have lost track of the value of a dollar. Like I said - you can host your own search engine and pay for your own top-level domain at that kind of price. It's absurd, and I'd protest it on principle even if I weren't upset with my current search provider.
There's room for this sort of startup in the world, but they've already lost if they don't offer a free tier. Google will hoover up their potential customers like nobody's business unless they take the 98% seriously.
It is a business, what did you think? That's why they charge money for their service. Like millions of other businesses, they will never get any significant part of the world's population as users. Why is that a problem for you?
Me too, and I also happily support Orion and use the RC as my default browser.
Kagi is for a subset of the internet, specifically the part that has content. The good parts of cyberspace, if you like. OP seems to be looking for something bigger, like someone they can trust to replace Google and save the internet as well. For that search I say: good luck, sailor.
(see, that is the good thing about Kagi too - you can downvote ;)
With over 100 sensors and 14 nano cameras, the e1 precisely analyses your teeth and gums and automatically removes over 95% of the detected biofilm in less than a minute.
Real-world "things we ran into" stories like this are super helpful when choosing a service or technology.
Unfortunately, I have had a similar experience with Firebase, where I wish I had known that:
* Don't like the text of the Firebase Auth SMS verification message they send on your behalf -> tough luck
* Your app name is longer than 15 characters? They are not going to include the app hash in your Firebase Auth SMS message that Android requires to perform an automatic login.
* Firebase Auth's global SMS pricing doesn't work for you economically? You're welcome to implement the whole thing yourself anyway.
* Dealing with development environments is flaky, as Firebase's emulators behave about 98% like production, but you will regularly hit the things that are different.
* You can't completely automate environment creation/teardown, as not everything is covered by Terraform or Google's own APIs, so you will end up doing manual things in their admin interface.
* Real-time subscriptions in Firestore end up not being worth the tight schema coupling between client and server, as you can't control when the updates fire and you end up with more unintended side effects than benefits (see the sketch below).
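To make the coupling concrete, here is roughly what a listener looks like from the Python client (a minimal sketch, google-cloud-firestore assumed; the collection and field names are made up). The callback fires on every write to any matching document, whether or not this client cares about the changed field:

    # Minimal sketch, google-cloud-firestore assumed; "orders"/"status" are made-up names.
    from google.cloud import firestore

    db = firestore.Client()
    open_orders = db.collection("orders").where("status", "==", "open")

    def on_snapshot(snapshot, changes, read_time):
        # Fires for ADDED/MODIFIED/REMOVED changes; you can't pick which
        # field updates wake you up, so unrelated writes trigger it too.
        for change in changes:
            print(change.type.name, change.document.id, change.document.to_dict())

    watch = open_orders.on_snapshot(on_snapshot)
    # ... later: watch.unsubscribe()

Any schema change on the server side then has to be mirrored in every client that holds such a listener, which is where the coupling bites.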
So after a year of workarounds you finally end up deeply understanding the trade-offs involved in Firebase and decide that its downsides exceed its out-of-the-box benefits. :(
Looking for help with converting a small Android app (36 .java files) to Kotlin and adding two smaller features.
Interested? -> project+kotlin@k2labs.net
--8<--
Existing Laravel/MySQL-based dashboard needs a successor. Looking for someone who has experience creating Nova dashboards with role-based access control.
Interested? -> project+dash@k2labs.net
--8<--
Looking for someone with experience building scheduling/workflow systems to convert an existing crontab into something more robust and manageable. It currently just runs 30 scripts at different times; they pull data from external sources, sometimes fail and need to be retried, and have dependencies on each other.
My company ran into this time and again: trying to get our browser pre-installed by a manufacturer, only to lose the deal at the last minute because Google would threaten to withhold device certification if a different browser (besides Google Chrome) were pre-installed.
Google is a big bully, and if you try to compete with them in an area they care about, they will use their market dominance to keep your product out. They have deserved that fine for a long time.
We generally use 60 second TTLs, and as low as 10 seconds is very common. There's a lot of myth out there about upstream DNS resolvers not honoring low TTLs, but we find that it's very reliable. We actually see faster convergence times with DNS failover than with BGP/IP Anycast. That's probably because DNS TTLs decrement concurrently on every resolver holding the record, while BGP advertisements have to propagate serially, network by network. The way DNS failover works is that the health checks are integrated directly with the Route 53 name servers. In fact, every name server checks the latest health status every single time it gets a query. Those statuses are basically a bitset, being updated /all/ of the time. The system doesn't "care" or "know" how many health statuses change each time, it's not delta-based. That's made it very very reliable over the years. We use it ourselves for everything.
Of course the downside of low TTLs is more queries, and we charge by the query unless you ALIAS to an ELB, S3, or CloudFront (then the cost of the queries is on us).
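For anyone who wants to see what that looks like through the API, here is a minimal sketch (boto3 assumed; the zone ID, IPs and ELB name are placeholders): a health-checked PRIMARY record on a 10 second TTL, with a SECONDARY that is an ALIAS to an ELB so its queries aren't billed.

    # Minimal sketch, boto3 assumed; all IDs, names and IPs below are placeholders.
    import boto3

    r53 = boto3.client("route53")

    # Health check that the Route 53 name servers consult on every query.
    hc = r53.create_health_check(
        CallerReference="primary-origin-check-1",      # must be unique per request
        HealthCheckConfig={
            "Type": "HTTPS",
            "IPAddress": "203.0.113.10",               # hypothetical primary origin
            "Port": 443,
            "ResourcePath": "/healthz",
            "RequestInterval": 10,                     # fastest supported interval
            "FailureThreshold": 3,
        },
    )

    r53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",
        ChangeBatch={"Changes": [
            {   # Primary: served while the health check passes, 10 s TTL.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com.",
                    "Type": "A",
                    "SetIdentifier": "primary",
                    "Failover": "PRIMARY",
                    "TTL": 10,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                    "HealthCheckId": hc["HealthCheck"]["Id"],
                },
            },
            {   # Secondary: ALIAS to an ELB; alias records carry no TTL of their own.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com.",
                    "Type": "A",
                    "SetIdentifier": "secondary",
                    "Failover": "SECONDARY",
                    "AliasTarget": {
                        "HostedZoneId": "ZELBEXAMPLE",  # the ELB's own hosted zone ID
                        "DNSName": "my-elb-123456.us-east-1.elb.amazonaws.com.",
                        "EvaluateTargetHealth": True,
                    },
                },
            },
        ]},
    )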
_most_ of the traffic will move in response to DNS changes, but there's always a group of resolvers that keep your old IPs for an unreasonable amount of time. I've taken machines out of DNS rotation with short TTLs (I think 5 minutes, but maybe 1 hour) and still had some amount of traffic on them for weeks. After a reasonable amount of time, too bad for them, but when I can work behind a 'real' load balancer it's nice to be able to actually turn off the traffic.
Interesting, thank you. So a potential mitigation strategy could look like this:
- Route 53 failover record
* primary record: Google global load balancer IP
* secondary record: Route 53 Geolocation set (really need that latency)
- Elastic Load balancer record per region
* routes to the mirror region's GCP IP address (ELB's application load balancer seems to be able to point to AWS external IPs)
* optionally spin up mirror infrastructure in AWS
Seems brittle. Does Azure support global load balancing with external IPs?
Does anyone actually have such a setup (or similar) in production? How did it hold up today?
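In case it helps to see it concretely, the geolocation half of that plan might look like this through the API (a sketch with made-up names and IPs, boto3 assumed); a latency-based set would look the same with Region instead of GeoLocation:

    # Sketch of a geolocation set on a separate name that the failover SECONDARY
    # record could alias to; boto3 assumed, all values are placeholders.
    import boto3

    r53 = boto3.client("route53")
    r53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",
        ChangeBatch={"Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "geo.example.com.",
                    "Type": "A",
                    "SetIdentifier": "eu",
                    "GeoLocation": {"ContinentCode": "EU"},
                    "TTL": 10,
                    "ResourceRecords": [{"Value": "198.51.100.20"}],  # EU mirror
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "geo.example.com.",
                    "Type": "A",
                    "SetIdentifier": "default",
                    "GeoLocation": {"CountryCode": "*"},  # catch-all default location
                    "TTL": 10,
                    "ResourceRecords": [{"Value": "192.0.2.30"}],     # fallback mirror
                },
            },
        ]},
    )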
That would work, and Azure Traffic Manager does support external IPs. CDNs like Cloudflare and Fastly also have built-in load balancing, where they use their internal routing tables for faster propagation.
I haven't been able to make an ELB target be an external IP. What did you mean by "ELB's application load balancer seems to be able to point to AWS external IPs"?
IP addresses as Targets
You can load balance any application hosted in AWS or on-premises using IP addresses of the application backends as targets. This allows load balancing to an application backend hosted on any IP address and any interface on an instance. You can also use IP addresses as targets to load balance applications hosted in on-premises locations (over a Direct Connect or VPN connection), peered VPCs and EC2-Classic (using ClassicLink). The ability to load balance across AWS and on-prem resources helps you migrate-to-cloud, burst-to-cloud or failover-to-cloud.
Looks like you need an active VPN connection to access external IPs.
That feature requires you to use a private IP address, so if you have a VPN or Direct Connect to another location you could load balance across locations. In the case of the global load balancers those will be public addresses though.
"The IP addresses that you register must be from the subnets of the VPC for the target group, the RFC 1918 range (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16), and the RFC 6598 range (100.64.0.0/10). You cannot register publicly routable IP addresses."
> Of course the downside of low TTLs is more queries
I was diagnosing a networking issue with one of our service providers last Friday. For whatever indeterminate reason, DNS responses from R53 took upwards of 10-15 seconds to return. While I appreciate that the non-configurable default TTL of 60 seconds for ELB is not plucked out of thin air, and that the actual issue seemed to be on the service provider's side, that limit seems far too low for medium/high latency networks. I wish it were configurable.
What's worse is that it looks like our site is the issue, so we get the complaints and I have to dig through Wireshark captures.
>There's a lot of myth out there about upstream DNS resolvers not honoring low TTLs, but we find that it's very reliable
I've done a few unplanned DNS failovers, and I agree with this. What can be real trouble, though, is if you're running a B2B app and your customers' corporate networks can be configured in any strange way. I've met real network admins who think they need high TTLs everywhere in order to protect themselves from root DNS DDoSes.
There really are locations where DNS resolvers don't honour TTLs.
For example, the public wifi in the last hackerspace in Munich I visited did not honour my 10 second TTL.
But in my opinion there aren't enough of them to justify not using short TTLs. It's their problem after all if they don't honour websites' settings: then they will see downtime when nobody else does.
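If you want to check a specific resolver (like that hackerspace wifi's), a quick sketch with dnspython (assumed installed) is to query it a few times and watch whether the reported TTL counts down and resets after expiry, or stays pinned at some large cached value:

    # Minimal sketch, dnspython assumed; resolver IP and hostname are examples.
    import time
    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["8.8.8.8"]          # the resolver under test

    for _ in range(3):
        answer = resolver.resolve("www.example.com", "A")
        print(answer.rrset.ttl, [r.address for r in answer])
        time.sleep(5)
    # Honouring a 10 s TTL: the TTL counts down and the answer is refreshed
    # shortly after it hits zero. Misbehaving: the cached answer keeps coming
    # back with a large TTL.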
I've always thought TTLs of less than 60 seconds should be avoided, as some upstream DNS resolvers will ignore values below 60 seconds and use a long default instead. You are saying this is not true and a TTL of 10 seconds can safely be used?
I'd wait for further details from the status page, but as a GCP employee (for whatever that claim's worth on the internet), I'm not seeing evidence of an issue earlier than 12:15 PDT.
100 grams of shrimp are roughly 100 calories. Let's generously assume that one shrimp from the video yields 200 grams of meat. So he would need to continuously catch 10 shrimp a day to satisfy a 2000 calorie diet. Seems to me that the shrimp population of this stream in the video would be exhausted pretty quickly and he would need to move elsewhere to continue hitting his calorie goals.
Given that we are surrounded by an incredible amount of high-calorie food (100g of a Snickers bar provides 488 calories), it is easy to forget how difficult it is to source calories in the wild.
You're off by a factor of over 20. A reasonable ballpark estimate for the total mass of a shrimp is 10g. The amount of usable meat is less (though in a survival scenario you would eat the whole thing, head included). Hell, the average meat yield from a standard size blue crab is only around 60g.
You would need to eat netfuls of shrimp per day to meet caloric requirements, not just ten: at roughly 10g each and about 1 calorie per gram, 2000 calories works out to around 200 shrimp a day. Anecdotally, I can toss back dozens of appetizer shrimp and still be hungry for a main course.
River shrimp are very common. It wouldn't surprise me if that creek held a thousand of them. Taking out 1% every day may actually be sustainable, as it removes grown shrimp that would otherwise eat the smaller ones.
And that's to survive on shrimp alone. It would be foolish to try that, not only because it wouldn't be a good diet, but also because putting all your eggs in one basket definitely isn't a good idea when your life depends on it.
And, of course, he may catch other edible animals too, such as eels (this kind of trap is still commonly used in commercial eel fishing in Western Europe; if I had such a trap, I would consider putting a dead shrimp or other meat in it to attract eels).
He had some yams with it as well! But yes, I think the big gain here is protein: two shrimp a day in the diet of a family of five would be a huge boost vs. hit-and-miss hunting and gathering.
Can somebody please explain to me why this can't be mitigated? Is it because they offer DNS as a free add-on and can't be bothered to spend the money to make their service more DDoS-proof?
After migrating a Java application that ran on EC2 to AppFog over the weekend, I wouldn't recommend running mission-critical apps on their infrastructure yet. A few things I ran into:
- After EC2 East turned slow as molasses, I switched over to EC2 EU, which was still speedy
- That caused their CLI tool to fall flat on its face when trying to tunnel to the database (right now the bridge always gets installed in EC2 East). Fixed with a simple patch to the Ruby source
- Looks like the backend connector in nginx won't connect to your app if you have Basic Auth on your root index
- Later MySQL became unavailable with "Host '10.0.47.186' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'". No connection limiter between different apps?
I think AppFog offers great value if they can iron out all the kinks. I'm crossing my fingers that this happens quickly.