MattIPv4's comments | Hacker News

PoE is around 15 W at 48 V, PoE+ is 30 W, and PoE++ is 60 or 100 W.


Likely due to a Cloudflare incident: https://www.cloudflarestatus.com/incidents/8m177km4m1c9


We process around 1 million events a day using a queue like this in Postgres, and have processed over 400 million events since the system it's used in went live. The only issue we've had was slow queries due to the table size, as we keep an archive of all the events processed, but scheduled vacuums every so often have kept that under control.
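
For anyone curious, the usual way to build this kind of queue in Postgres (a simplified sketch with illustrative table and column names, not necessarily exactly what we run) is a single table plus FOR UPDATE SKIP LOCKED, so multiple workers can claim jobs without blocking each other:

    -- illustrative schema: one table acts as both queue and archive
    CREATE TABLE events (
        id         bigserial PRIMARY KEY,
        payload    jsonb NOT NULL,
        taken_at   timestamptz,           -- NULL means still pending
        done_at    timestamptz,
        created_at timestamptz NOT NULL DEFAULT now()
    );

    -- a worker claims the oldest pending event; SKIP LOCKED lets
    -- concurrent workers grab different rows without waiting on locks
    UPDATE events
    SET taken_at = now()
    WHERE id = (
        SELECT id
        FROM events
        WHERE taken_at IS NULL
        ORDER BY created_at
        LIMIT 1
        FOR UPDATE SKIP LOCKED
    )
    RETURNING id, payload;

Keeping the processed rows in the same table is also why the periodic vacuums matter, since the handful of pending rows end up scattered across a very large table.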


Partial indexes might help.


Also: an ordered index that matches the ordering clause in your job-grabbing query. This is useful if you have lots of pending jobs.


Exactly. A partial index should make things fly here.
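
Something along these lines, for example (a sketch only, assuming a pending marker like the taken_at IS NULL and created_at columns from the queue sketch above, which may not match the real schema):

    -- index only the rows that are still pending, in the order the
    -- job-grabbing query fetches them
    CREATE INDEX CONCURRENTLY events_pending_idx
        ON events (created_at)
        WHERE taken_at IS NULL;

The index stays tiny even with hundreds of millions of archived rows, because completed jobs drop out of it as soon as they're marked taken.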


Active Queue table and then archive jobs to a JobDone table? I do that. Queue table is small but archive goes back many months


In modern PG you can use a partitioned table for a similar effect.
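
Roughly like this, for example (a sketch with illustrative names, assuming PostgreSQL 11+ declarative partitioning):

    -- the queue/archive table, partitioned by creation time
    CREATE TABLE events (
        id         bigserial,
        payload    jsonb NOT NULL,
        taken_at   timestamptz,
        created_at timestamptz NOT NULL DEFAULT now(),
        PRIMARY KEY (id, created_at)   -- the partition key must be part of the PK
    ) PARTITION BY RANGE (created_at);

    -- one partition per month
    CREATE TABLE events_2023_04 PARTITION OF events
        FOR VALUES FROM ('2023-04-01') TO ('2023-05-01');

    -- archiving or dropping a month is then a cheap metadata operation
    -- rather than a bulk DELETE
    ALTER TABLE events DETACH PARTITION events_2023_04;

That gives you the small-active-table / big-archive split from the parent comment without maintaining a second table yourself.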


We just have a single table, with a column indicating if the job has been taken by a worker or not. Probably could get a bit more performance out of it by splitting into two tables, but it works as it is for now.


Why is GitHub the one under fire here? Users on GitHub are using GitHub Actions to build CI pipelines that build stuff, and happen to be pulling from GMP. It's not GitHub's problem that users are using their product in a legitimate manner; it seems to me it's GMP's problem that they can't handle traffic for artifacts from CI systems. It is noted that the requests are identical, so would a modicum of caching in front of their origin not make this problem go away completely?


CI systems shouldn't have the ability to make network requests at all, honestly.


If all the CI systems in the world went down, it would cool the Earth by 0.001°C.


How would you suggest they install dependencies then?


At the very least, that's an issue that GMP or other projects shouldn't have to worry about. There are many options - you could manually cache things somewhere or pack the dependencies into the repo. Or, maybe, in a world that wasn't completely set on wasting every resource possible, there just wouldn't be pointless automatic builds on forks, and those builds wouldn't need to re-download the world and could instead just update incrementally. (Yes, there are nice consequences of always doing fresh builds, but there are also bad ones, as can be seen here, and unfortunately the downsides aren't felt by the initiator.)


Why should this host (and presumably every similar host) take on the burden of this extra complexity?

Would a modicum of caching in GitHub Actions libraries not make this problem go away for all hosts in this category?


That's fair, I would agree that caching at either end would fix this. It just strikes me as odd that GitHub, the middle-man that's just providing CI runners, is the one under fire.


What GitHub is effectively doing, as far as the receiving end is concerned, is providing free DDoS hardware, and lots of it. I don't think GitHub should particularly be "under fire" for this, but it's still really not nice to provide a service that, under legitimate use (never mind illegitimate use!), can generate unreasonable amounts of traffic to arbitrary sites.

I think a quite reasonable expectation of GitHub would be an all-of-GitHub-wide rate limit on CI requests to any given site, with jobs failing/delaying if GitHub has exceeded it, and with sites expected to explicitly opt in if they're fine with more than that rate. It would of course very much suck for GitHub CI users that want to pull from sites that haven't opted in, but at least GitHub would stop offering free DDoS services.


For the same reason that ISPs tend to come under fire when their customers are using MTAs to deliver large volumes of e-mails to non-consenting recipients.

Are you saying it's not an ISP's problem that spammers are using their product in a legitimate manner, but instead it's up to the recipients to build their own spam fighting resources? Yes, that turned out wonderfully.


Is it the phone companies' fault that people make death threats over the phone? Do we say 'Phone company makes death threats' when that happens like the title is saying about GitHub?


Death threats? WTF? What metaphor are you using that git clone requests are now suddenly death threats?

Anyway, I don't think your example aligns with the argument you think you're making: https://www.fcc.gov/enforcement/areas/unwanted-communication...

Yes, phone companies definitely can be liable for bulk targeting by their users.


> What metaphor are you using that git clone requests are now suddenly death threats?

The headline and title are saying GitHub DDoSed a crucial open source site, so the metaphor is definitely valid. I did not compare a DDoS to death threats; I took the claim that GitHub DDoSed the open source site and did a thought experiment to see what happens if something similar is said about the phone companies.


If it must go out to the internet, a MITM SSL proxy cache on the way out of GH would help.

The problem is within GH's network; a 90s ISP would have blocked spammy users, and GH should at least operate like an ISP if random stuff can execute and reach the 'net.


Just got an email from GitLab about a group I'm part of that has more than five users. The linked docs say "For existing namespaces, this limit is being rolled out gradually. Impacted users are notified in GitLab.com at least 60 days before the limit is applied.", however upon checking the group in GitLab, we are greeted by a big red box stating "Your top-level group [group] is over the 5 user limit and has been placed in a read-only state."


Also got an email, but interestingly the most populated group I'm a member of has 4 users in it, including myself. It did mention that my "top-level" group has reached the 5-member limit, but it references a numerical ID that doesn't match my user or any of the groups I'm a member of.

There may be a glitch with this rollout.


Yeah, I got the same email. I am in one group, and that group has one other person in it.


I'm in exactly 0 groups, never been in one, and I got the same email.


GitLab team member here.

The gradual roll out of this change started with a blog post[0] and included in-app notifications for the owners of impacted groups on GitLab.com.

If the group owner did not log in during the in-app notification period, they were then emailed (the email you received today) notifying that the group was impacted.

[0] - https://about.gitlab.com/blog/2022/03/24/efficient-free-tier...


I don't know that it's a great plan to do a blogpost and in-app notification as the first round of reminders, and only email on the day of the change. Both the blogpost and the in-app notification require you to explicitly go on GitLab and see there's a problem. Maybe there's a reason to avoid it, but emailing from the get-go seems like the right move for transparency and not rug-pulling.

EDIT: clarified antecedent


Wouldn’t it make more sense to email them before they were impacted instead of when they were impacted? What’s the point of gradual roll out that requires I read your blog etc. An email that says “You have 60 days to X” is a lot more effective than one that says “60 days ago we made a blog post letting you know, and now you’re f’d.”


Look, they announced it publicly: posted right in the back of the file cabinet in the basement, behind the "warning: rabid tigers" sign.

Here's a question for Gitlab: "Why did you require me to give you an email address to sign up?"

The answer to that question means there is no explaining why they didn't use it first, and follow up with at least a couple of updates along the way. This is exactly what the address exists in their db for.


“But the plans were on display…”

“On display? I eventually had to go down to the cellar to find them.”

“That’s the display department.”

“With a flashlight.”

“Ah, well, the lights had probably gone.”

“So had the stairs.”

“But look, you found the notice, didn’t you?”

“Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard’.”


> If the group owner did not log in during the in-app notification period, they were then emailed (the email you received today) notifying that the group was impacted.

I think there is a glitch in your mail or something else is going wrong. I'm currently not in any groups and still got an e-mail telling me that my top-level group (starting with 5060) has reached the 5-member limit. Searching for the group also doesn't yield any results whatsoever.


Thanks, we are investigating this and the above reports about this behavior.


So was there a problem or not? That email referred to groups by ID, which is totally useless.


i got an email that the limit in one of my groups is reached.

i just logged in and there is no indication of any limit.

i had to step through every group to find out where the limit was reached.

turns out that there was one group that had two subgroups which added up to 5 members. at the group overview this is listed as "two" (for the two subgroups). it would be very helpful if the group overview (https://gitlab.com/dashboard/groups) listed the total number of people as well as flagged every group where the limit is reached or crossed.

but, you say the limit is 5 people. in this group there are exactly 5 people, yet the warning claims 'Your top-level group is over the 5 user limit and has been placed in a read-only state.'

how can that be? 5 is more than 5?

it doesn't matter in my case because this is an old project no longer worked on, so read only is fine, and there is no need to act, but i think you need to work on your system because i am sure there will be more cases like that.

lastly i want to add that while that limit is fine for small businesses, it is an absolute disaster for FOSS projects. FOSS projects don't have the funding to pay for your service, so they won't. their only option is to leave. if any of my projects get any traction then i have no choice but to go look for a more FOSS friendly service. i thought gitlab was that, i wanted to make a point against github and support their most likely competitor by drawing attention to you.

gitlab really does not gain anything by enforcing this limit for FOSS projects. FOSS projects often have many members that are not very active. a busy startup with 5 members probably creates the same activity and uses the same resources as a FOSS project with 50 members because most of those 50 members rarely contribute to the project.

or instead of limiting members, limit how often the more expensive resources are used. like limiting how often the CI is running.

i urge you to consider to allow a higher limit for groups that only have projects that use a FOSS license.


Hi, GitLab team member here. We offer the Open Source program for qualifying projects giving them access to top-tier features for free. See https://about.gitlab.com/solutions/open-source/join/


thanks, i wasn't aware of this. that's even more than i was looking for, except, the no commercial activity rule seems a bit limiting:

Not seek profit: An organization can accept donations to sustain its work, but it can’t seek to make a profit by selling services, by charging for enhancements or add-ons, or by other means.

so i can't sell services to sustain the project? there is a large difference between earning some money to help fund the project, making barely enough to be able to work on the project fulltime and actually making enough of a profit to afford commercial services.

if i am employed and work on a FOSS project on work time, then i am not selling any services, nor am i making a profit.

if i do exactly the same but as a contractor, then i am selling a service.

you may want to elaborate how you interpret and verify this rule.

also i'd rather have less free services but a more liberal allowance on commercial activity. like a regular free account but without the user limit.

user limits are very frustrating because they prevent me from managing all potential contributors, even if they are not very active.


Apparently you must be open source and a charity, not just open source.


Did you really just say "If you had logged in, you would have known that you had to log in"?


My email mentions two group IDs. I had to look at each group's page to see its ID (there is no other way of finding out which group we're talking about).

I have _no_ groups with the IDs mentioned in the email.

Also, I'm a solo hobbyist dev, there are no groups with more than one user in it.


I did receive an email too, which just had a number mentioned that wasn't even hyperlinked to anything. Turns out I am not, and never was, part of any group.


Loading github.com is returning a 500 for me currently, so seems like more than just issues/pull requests. Also seeing actions fail with 500s on assorted steps.


Similar issues for me. I can load github.com and my profile, but visiting a repository (or trying to git pull a repo with the https origin) returns a 500.



KV/Postgres are accurate; that Blob page sneaked in. Blob pricing should be figured out shortly and we'll update that page (it's in private beta, invite-only currently).


So take it down?


Cloudflare's R2 costs $0.36 per million read operations (after the 10 million you get for free) [1]. Vercel is wrapping R2 and is charging $2 per million reads [2]. They're also charging $0.15/GB for egress (after the 1GB you get free), when R2 charges nothing, and the storage cost is doubled from $0.015/GB on R2 to $0.03/GB on Vercel. That's quite the cost increase for the DX improvement.
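
To put rough numbers on that: for a hypothetical month of 50 million reads, 100GB of egress, and 100GB stored (purely illustrative figures), R2 works out to roughly (50 - 10) x $0.36 + $0 egress + 100 x $0.015 ≈ $16, while the same workload at Vercel's rates is roughly 50 x $2 + 99 x $0.15 + 100 x $0.03 ≈ $118.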

[1] https://developers.cloudflare.com/r2/pricing/

[2] https://vercel.com/docs/storage/vercel-blob/usage-and-pricin...


Does this mean I have to potentially deal with two vendors when there is an outage? Awesome!


That's their secret, Cap. You've always had to deal with multiple vendors when there's an outage. Vercel has never made it a secret that they're standing on the shoulders of Tier 1 Cloud giants for their hosting backend.


I only just read https://vercel.com/blog/framework-defined-infrastructure

To me, the storage announcement + this blog post really helped contextualize where Vercel sits. And I really like this approach. It’s what I’d want to build on. I love the partnerships with companies outside their core expertise, like Neon, and the existing integrations with Supabase, PlanetScale, etc.


I think it's a fair assumption that any massive infra outage at AWS or a similar-scale company is going to knock out a good portion of your SaaS and multiple parts of the web.


This isn't finalized yet, apologies for the confusion. We'll be updating the pricing for Blob shortly (it's in private beta and invite only). The pricing for KV and Postgres is up to date.


I already lean away from S3/Azure Blob/etc because their egress prices are terrible. $0.15/GB is borderline criminal.


$150 for a TB of traffic?! Wtf


What’s happening right now is a story we’ve seen plenty of times before. Overhead cost of convenience mixed with vendor lock-in, all while boasting about open source.

I’d love to create tools that were convenient and had fair pricing. The challenge is that you’re trying to grow into a market with worse acquisition economics. Tough to win if winning is the goal. If anyone has a solution reach out. XD


Our egress bill from CloudFront last month, including the 1TB-per-region free tier, was nearly $2,000. That's egress traffic ALONE (~20TB of it).

Needless to say, we're wrapping up testing on migrating our production (public) buckets to Cloudflare (R2), taking our cost from that $2,000 (and going up every month) to (drum roll)... $0,000/mo.

Have I mentioned AWS egress charges are borderline "extortionary"? :X


What's a cheaper alternative that is of similar quality?


Backblaze is $0.01/GB egress and $0.005/GB/Month storage.


I am currently developing an option that offers identical quality and features but at a significantly lower cost. Unfortunately, you will have to wait for approximately a year until I complete the development process. However, I believe that the wait will be worth it, and you will be pleased with the outcome.


If you have a relationship with an account manager, talk to them. There are options for better rates.


Why not remove the pricing table or add a note about that then? It’s not great having it there if it’s incorrect, especially after an announcement.


You will get better customer support from Vercel. Cloudflare gives almost zero customer support if you are pay-as-you-go plan.


For what it’s worth, when I was TL I would regularly hang out on Discord and answer questions for R2. I believe the team still does that, and the community itself also answers a lot of questions. Now that I TL Workers KV, I do the same in that chat room.

There are also the community forums, although I find it’s harder to stay on top of those personally.

Not trying to say our paygo support is as good. Just saying that for those customers, I do personally try to offer ENT-level support to the group as a whole (i.e. all paygo = 1 ENT to me for the products I personally support).


The Cloudflare Discord is where it's at.


Can vouch for this. The Discord support, aside from specific account/platform problems, has been most helpful and super friendly, both community members and staff.


Good information. I think active community forums are essential to the success of all companies and products. Cloudflare has built this up well. Though customer support issues should not be left without a response for days or weeks, until community moderators use back-channels to get a support ticket resolved. The question was why Vercel is worth paying more for, and customer support is probably one of those reasons.


The secret to great customer support from Cloudflare is to drop into their Discord and join the channel of the product / service you're having trouble with

Most of the engineers and product leaders on the teams that make the services check those channels daily and jump in to help where they can. There's also a huge community of power users there called Community Champions who help out as well.


> The secret to great customer support from Cloudflare

I wish this never had to be said, it shouldn't be a secret.


That may be true but how much power does Vercel CS have to help if the issue is fundamentally on the Cloudflare side?


> That's quite the cost increase for the DX improvement.

It depends how much you spend on egress vs developers.


Hijacking this comment to ask if anyone has had luck integrating Cloudflare R2 with Pleroma.

I’ve not had any luck getting it to work, though I’m also not well versed in the terminology.




Seems better done than a lot of status pages: runs on separate infra, gets updated, has a way to subscribe, etc.

However, saying "degraded performance" when you know it's "down for everyone" is an industry phrasing thing that's irritating. AWS also has "elevated response times" when everyone is seeing 5xx errors, or infinite response times.


It's not down for everyone. I can browse just fine, but my pushes get rejected, so that qualifies as "degraded performance" for me.


It's got many named granular services marked as "degraded performance" or "degraded availability" that seemed to be down for everyone.


> However, saying "degraded performance" when you know it's "down for everyone" is an industry phrasing thing that's irritating. AWS also has "elevated response times" when everyone is seeing 5xx errors, or infinite response times.

Another popular one is "elevated API error rates" when the error rate is 1.


Having been on the other side a number of times at a site with huge amounts of traffic: very often things can be down for a huge percentage of users, but our logs will still show thousands of requests succeeding per minute. So it might be working (if slowly) for some, while not at all for many.


It seems accurate to me - after some time, several "degraded performance" flags have been changed to "major outage".


For what it's worth, it's powered by statuspage.io, which is relatively industry standard for status pages.

