orware's comments

Should have checked before submitting. Seems like it was posted yesterday already: https://news.ycombinator.com/item?id=34526436


I'd be curious to hear about experiences with persistent connections in PHP, since I don't feel I've heard them advised much over the years, though that could be due to old misperceptions about them.

I know there is the "Persistent Database Connections" section of the PHP manual, and the mysqli extension supports persistent connections, but in my own experience I've rarely seen them utilized, especially by the bigger open source projects out there such as WordPress, which has an eight-year-old enhancement ticket on the subject: https://core.trac.wordpress.org/ticket/31018. Putting your database behind a pooler like ProxySQL is another option as a company/application grows more sophisticated, but most typical PHP setups I've used don't have that immediately available.

I've generally been under the impression that most projects/applications don't use the built-in persistent connection features, for some of the reasons discussed in the link above, leaving those applications more impacted by lengthier connection times since a new connection is created at the beginning of each request and closed at the end.

Now I'm inclined to experiment a bit with mysqli's built-in persistent connections, though, since it seems a worthwhile feature for developers to explore if it lessens the connection-time impact of each PHP request, particularly for databases that are further away and require secure connections.

Shaving off 100ms for a connection would be significant for most PHP users if they are currently having to open fresh connections on each request, especially if they were previously used to connection times of < 1ms when connecting to a local MySQL database.
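For anyone wanting to experiment, here's a minimal sketch of what a persistent mysqli connection looks like — the `p:` host prefix is how mysqli requests one; the hostname and credentials below are placeholders, not real values:

```php
<?php
// Prefixing the host with "p:" asks mysqli to reuse an existing
// persistent connection for these credentials instead of opening
// a fresh one on every request. Host/credentials are placeholders.
$mysqli = new mysqli('p:db.example.com', 'app_user', 'secret', 'app_db');

if ($mysqli->connect_error) {
    die('Connect failed: ' . $mysqli->connect_error);
}

$result = $mysqli->query('SELECT NOW()');
```

One caveat worth knowing: session state (temporary tables, LOCK TABLES, open transactions) can leak across requests on persistent connections, which seems to be part of why projects like WordPress have been hesitant to adopt them.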


You will generally run into two types of latency in this case: the connection-establishment latency while connections are being set up, plus the regular physical/network latency between where the database is located and your own servers.

For connections, since a TLS handshake is required, physical distance has a greater impact on connection time. The following article actually provides a good 3.5x-4x figure, which correlates with some connection tests I've completed: https://sking7.github.io/articles/44961356.html

In other words, if a round trip between your server and the database takes ~100ms, then establishing a TLS connection to the database will probably take around 400ms.
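As a back-of-the-envelope sketch of where that roughly 4x figure comes from — the round-trip counts here are typical assumptions, not exact numbers:

```php
<?php
// Round trips needed before the first query can run over a new
// TLS-secured MySQL connection (typical counts, not exact):
$rttMs = 100; // one round trip between app server and database

$tcpHandshake = 1;  // SYN / SYN-ACK / ACK
$tlsHandshake = 2;  // full TLS 1.2 handshake (TLS 1.3 would be 1)
$mysqlAuth    = 1;  // server greeting + client auth response

$connectMs = $rttMs * ($tcpHandshake + $tlsHandshake + $mysqlAuth);
echo $connectMs; // 400 -- roughly 4x the raw round trip
```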

Once the connection is established, running queries over an open connection is generally going to be quicker, at least for simpler queries. More complex queries will still take whatever time they need to process on the database end before they can start sending results back so results will generally vary there.

But going back to that 100ms example...if the amount of data being returned from a very simple query is minimal, then the response time would be very close to that 100ms figure over an already open connection, and would likely go up from there depending on the complexity of the query and the amount of data that needs to be returned.

Since the connection hostnames are publicly accessible and TLS is always required for connections you can easily test from your own provider's location. So long as the general physical location isn't too far away from a supported region, the latency overall shouldn't be unusable.
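A quick sketch of how you might measure that from your own provider's location in PHP — the hostname and credentials below are placeholders for your own database:

```php
<?php
// Rough measurement of how long a fresh TLS connection takes.
// Hostname/credentials below are placeholders.
$start = microtime(true);

$mysqli = mysqli_init();
$mysqli->real_connect(
    'db.example.com', 'app_user', 'secret', 'app_db',
    3306, null, MYSQLI_CLIENT_SSL
);

printf("Connect took %.1f ms\n", (microtime(true) - $start) * 1000);
```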

I may have mangled some terminology/analogies above but hopefully that helps provide a bit of a ballpark for you. If you have specific to/from regions in mind I might be able to try and collect some specific numbers for you!


I haven't spent time optimizing TLS between a database client and server, but in HTTPS, using TLS 1.3 without early data (or TLS 1.2 with somewhat optimistic handshake handling) gets you to one added roundtrip, TLS 1.3 early data gets you down to zero added round trips. Early data isn't always appropriate, because there's potential for replays, but the latency improvement might be worth considering for some selects.


I'm not an expert on TLS 1.3, but the 0-RTT feature didn't seem to be implemented by a lot of clients, so the QUIC protocol used in HTTP/3 seems to be the workaround for that. The following recent comment, and the first video linked there, had some great related info I was recently reviewing on that topic: https://news.ycombinator.com/item?id=32572825

I don't know whether the MySQL protocol itself would be able to utilize TCP-based TLS 0-RTT, however, so connecting via a regular client may still involve a lot of back-and-forth handshaking.

The newer serverless driver for JavaScript has some opportunities to take advantage of QUIC within HTTP/3 in the future as Matt mentioned over here recently: https://news.ycombinator.com/item?id=32513043

So it will be interesting to see how that evolves/improves over time.


That Wikipedia article certainly gives some cause for pause/concern.

Recently, I heard about this relatively new company named Quaise: https://newatlas.com/energy/quaise-deep-geothermal-millimete...

And I thought the technology seemed quite interesting, so I hope its development pans out, since it looked like it would enable much greater use of geothermal energy around the world, which seems mostly untapped at this time. It also seemed like it might address the seismic issue/concern, although I'm not an expert on the topic so I could be wrong there.

Geothermal is mainly a topic of interest for me since there are a few plants in my own region, but I hadn't really realized it was such an underutilized power generation option until I read the Quaise article above.


I believe the seismic problems come from water flowing across the bore hole and eventually destabilizing the ground (this can happen with fracking). The Quaise approach basically cauterizes the hole as it drills, so water can't flow across. The water is more of a closed system.


I'm not an expert on this topic, but one author in particular whose writings I came across as a younger man was James Allen: https://en.wikipedia.org/wiki/James_Allen_(author)

In particular, there is a three-volume series, available for a few years, that collected a lot of the work he did during his lifetime, even though his most popular work seems to be "As a Man Thinketh".

The Wisdom of James Allen I, II, and III: https://www.amazon.com/Wisdom-James-Allen-Including-Prosperi... https://www.amazon.com/Wisdom-James-Allen-Difficulties-Trium... https://www.amazon.com/Wisdom-James-Allen-III-Heavenlylife/d...

It's been a number of years since I last read them, and unfortunately the publisher above went out of business, so you can generally only find these titles used. I did enjoy the philosophy/thinking shared in the writing; even if the titles may suggest a somewhat religious slant, overall I'd say the writings focus more on leading a good life.

This is a good reminder that I should read them again to refresh my memory on all that is discussed within their pages, since it has probably been 15+ years since I first read them thoroughly.


Quick question…did your account have a history of positive reviews from past purchases?

I have a long-standing account and rarely buy on eBay nowadays (I’m trying to recall if I’ve ever sold anything…if I have it was maybe only one item but I don’t even recall if it sold or not).

Recently, I was looking into buying a used gaming PC via eBay to save a few bucks, and I ended up completing a "Buy It Now" purchase quickly without looking more into the seller (or their location). The location wasn't a big deal (Paris, France); it mainly meant the shipping would take longer. What was more concerning was the seller's 0 rating, which immediately made me think "oh crap". I reached out to the seller and got no quick reply, sent a short follow-up the next day sharing my concern, and waited another day before contacting eBay about it (especially because by this point the original listing was gone, and it seemed like the seller's account was too). I used the live chat option, and the person there was very helpful, got the process started, and told me to check back on Friday (about 3 days later), but later that same day my refund was issued and the case closed, which I was grateful for.

But it did make me wonder about what seems to be a real difficulty for newer accounts trying to sell successfully on the platform. (Kind of like stories I've heard about liquor licenses being grandfathered in for certain locations in cities, whereas a new location may face more work to apply for one…maybe newer sellers are easily flagged? The inability to dispute the situation when you are obviously willing and able to communicate with eBay staff is the sad part of your story, since legitimate individuals should always have recourse to be heard on these large tech platforms.)


XAP is the old system, but I don’t think it’s in use anymore (back then it was just “CCCApply”).

The current system, now called "OpenCCCApply", was mostly developed by a team from Unicon (https://www.unicon.net/), along with folks from the CCCTechCenter team, if my memory/understanding is correct (I'm not sure if it's managed jointly, or if a CCCTechCenter team mostly manages it now, since I never heard those details).

A redesign/modernization effort however sounds…expensive.


It's interesting to see this thread crop up here, as I've only recently left the CCC system for a job in tech, after being in IT within the system for about 14 years.

This particular issue has gotten worse over the last year or so, with more eyeballs on it, once actual money became involved (before that it was still an issue, but on a smaller scale, mostly around the free .edu emails we tend to issue, along with the other freebies those can help unlock, such as free credits for Azure or other services).

Even in the earlier cases, I was annoyed/upset because, in my mind, the colleges' first line of defense is preventing these fake users from submitting an application successfully in the first place. The OpenCCCApply application (which I believe is used by all ~115 CCCs) was allowing the submissions through…and since we mostly bring that application data into the individual colleges automatically, not many triggered a "hold" on our end.

Yes, CCCTechCenter (which helps manage the team that maintains the OpenCCCApply system) has done a few things over this past year that are mentioned in the article, though based on the article I can't tell 100% whether the issue is still rampant in the more recent semesters. One of the changes was adding an IP reputation checker, for example, but there are likely ways around that too, since these folks don't actually seem to use bots…based on what I've seen, such as the YouTube video shared in the article, they may use actual people instead.

What I found really annoying is that while the problem originated from the systems provided to the colleges at the state level (OpenCCCApply mainly), the individual colleges are now on the hook to gather a bunch of mostly useless data and go on silly adventures, such as investigating IP address info within our other systems (like Canvas), to help find or report the fraudulent activity.

I saw FAFSA mentioned a few times, but I don't think a ton of fraud comes directly from the FAFSA application. In this past year, though, many of the colleges have been putting COVID relief funds they've received (meant to help get students/staff back on campus) toward paying fees or providing an extra amount for books, etc., which isn't something that will continue forever (in fact, I think this summer it will already have ended, or it will be the last semester where it is offered).

In most cases, once the incentive is taken away, or the bar to get it is raised, the folks creating the fraudulent accounts will generally move on (or target schools that don't implement some of the second-layer fixes at the college level). Unfortunately, while the CCCTechCenter tries its best, it doesn't typically fully acknowledge its role in creating some of these situations, and I almost lol'ed when I saw toward the end of the article that they are looking to get more funding to "modernize" the system yet again, considering a lot of effort/time/money already went into creating the current OpenCCCApply system not that long ago (the previous system was pretty bad in comparison).

Overall, this particular situation is at the same time both more complex and simpler than folks may think, once you have some more details. More complex, because there is a lot about what's going on in the CCCs the HN community isn't aware of, along with super strict regulations the individual Financial Aid departments at each school have to follow (otherwise they would not be able to provide federal aid monies to students), making that avenue for fraud a lot less likely than the scenario I shared above, where the COVID relief monies have been used to provide an incentive to get students back in the classroom. Simpler, because we already have a central application process that should keep these applications from ever reaching the individual colleges, but it fails in that regard. Fixing that, along with removing the financial incentive currently present, should reduce the fraud levels considerably, although there are likely a few more complexities even I am unaware of. I would just appreciate it if the search for fraud didn't get pushed onto the individual colleges in situations where a system-wide protection should have prevented the problem in the first place, mainly because it causes a ton of unneeded busy work at the colleges, keeping IT System Analysts and other technical folks from focusing on other, probably more important, internal projects.

Excuse any typos…I wrote this small novel on my phone.


Another note…based on what I know, most colleges are suffering from low enrollment too. And even though funding isn't solely based on the student count on census date anymore (which is usually about 1-2 weeks into a semester…and faculty are typically supposed to drop students who don't come on the first day of class), the formula is now more complex, with "success" related factors added (number of degrees/certificates awarded, plus some others I don't directly recall right now).

This means the number of students is still a pretty big factor in the funding received for the year. As for administrators, I would say any worth their salt would consider ignoring fraud a no-no, so I'm hoping that's not a common situation. On the other hand, losing a substantial percentage of your current budget due to a loss of students can be pretty tragic for the staff working on the campus. Budget reserves can usually be dipped into for a period of time, but what most folks don't realize is that, compared to private businesses where employee costs may be only a fraction of what the business brings in, most CCCs are likely spending 80-90% of their budgets on salaries and benefits for their staff (in some cases more, in some cases less). This makes it extremely difficult to weather a big loss in students, because if budget reserves get expended and student numbers don't improve, that will mean some sort of layoff process (which also provides those employees with reinstatement rights for a considerable period of time afterward).


It's still a bit early for the company but if you like RDS, PlanetScale might be worth a look too: https://planetscale.com/


Check out my comment above in this same thread, which basically confirms exactly what you just shared…this is the same tactic I was observing with fake student accounts sharing with outside Gmail addresses, which then appeared to take over the Shared Drives created in GSuite, allowing TBs and TBs to be added to the drives by outside users. It seems like a big problem to me…but at the moment I don't think Google provides good enough tools for admins to see/eliminate these situations quickly and easily.

