Just note that they’re not guaranteed to be called precisely on time, e.g. my “every 15m” CRON job will be called every 15m _at best_, in practice… twice per hour.
This works perfectly for my case (content syndication for https://potato.horse), and I’m pretty happy with GH actions for this kind of stuff, but if you need something more precise, you might want to look somewhere else.
> Just note that they’re not guaranteed to be called precisely on time, e.g. my “every 15m” CRON job will be called every 15m _at best_, in practice… twice per hour
Is the spread really that egregious? That's essentially a 50% failure to trigger at all; I don't think you can call that 15 minutes so much as 15±10 minutes, lmfao.
IIRC AWS EventBridge is also not guaranteed to execute on the exact minute, but in my experience running a small job every 5 minutes only had about a 30-40 second delay at worst.
Just checked the logs: it used to be 2 runs per hour, but now it has improved to 3-5, so it's closer to the "every 15m" rule. Again, not a big deal in my case, and probably something that's being worked on.
At least for a while before I moved to self-hosted systemd timers, if you were running jobs near 0:00 UTC, the delay was often so long that your job wouldn't run at all. I had weeks of jobs literally not running at all before bailing on GitHub actions for this sort of thing. It was disappointing, but I'm now happier with my current setup.
Simon Willison has a bunch of examples of scraping sites with GHA and storing the results in a repo. But you can use the same technique without the storing part if need be.
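For a flavour of the technique, here's a rough sketch of the kind of script a scheduled workflow could run (my own illustration, not one of Simon's examples; the URL and file path are placeholders, and the `git add`/`commit`/`push` of the output would happen in the workflow itself):
```typescript
// Hypothetical git-scraping sketch: a workflow on a cron schedule runs this,
// then commits data/latest.json so the repo's history becomes the dataset.
import { mkdirSync, writeFileSync } from "node:fs";

const URL = "https://example.com/data.json"; // placeholder: whatever you want to track

async function main(): Promise<void> {
  const res = await fetch(URL); // built-in fetch, Node 18+
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const data = await res.json();

  // Stable formatting keeps the diff between runs small and reviewable.
  mkdirSync("data", { recursive: true });
  writeFileSync("data/latest.json", JSON.stringify(data, null, 2) + "\n");
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```
Skip the write-and-commit part and you've got the same thing as a plain scheduled job.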
AFAIK, scheduled GitHub workflows stop running after a while. But when that happens, GitHub will send you an email with a big green “Continue running workflow” button.
I use crons to keep my Docker containers fresh, and have never hit this. But the cron commits to the repo, so I wonder if they’re flagging repos with crons but no commits recently?
Definitely... It seems to be in the 4-6 week timeframe... I'd thought about making the cron update references in the repo, but haven't gotten around to it yet.
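If recent commits really are what keeps the schedule alive (I don't know that for sure, it's just the speculation above), the "update references in the repo" idea can be tiny; roughly this, with a made-up file name and commit message:
```typescript
// Keep-alive sketch: touch a timestamp file and commit it on every scheduled
// run so the repo always shows recent activity. Assumes a git user/email is
// already configured on the runner.
import { execSync } from "node:child_process";
import { writeFileSync } from "node:fs";

writeFileSync(".github/last-run.txt", new Date().toISOString() + "\n");
execSync("git add .github/last-run.txt");
execSync('git commit -m "chore: cron keep-alive"');
execSync("git push");
```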
I've got a couple of projects like this that mostly just create bundles of other source projects I'm not involved with: creating a Windows installer or Docker image for projects that don't have that integrated. It's kind of annoying that they stop after several weeks when there are no project changes.
I recently started learning threejs through Bruno Simon's excellent online course and had a great time. The course is great: easy to follow and well structured. And at least for me, the immediate feedback loop of writing a few lines of JS and seeing something happen on the screen is very rewarding, and a nice change from what I do for a living.
I also recommended it to a friend in a similar situation to yours, and he liked it too.
For context, my background is in system administration, and I've spent the last few years writing k8s YAML for a living :P
Just to be clear, I'm not affiliated with Bruno in any way, just a happy student!
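To give a sense of that feedback loop: the classic starter exercise is just a spinning cube, something like the snippet below (my own toy example, not code from Bruno's course):
```typescript
// Minimal three.js scene: a cube spinning in the browser.
import * as THREE from "three";

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 3;

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshNormalMaterial() // shows the shape without needing any lights
);
scene.add(cube);

renderer.setAnimationLoop(() => {
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
});
```
A dozen lines and something is moving on screen, which is exactly the kind of instant gratification you don't get from k8s YAML.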
Not to brag, but 3M is far from being one of the largest Drupal-powered sites; just one property of the company I work for has more than 40M active users a month and 350M pageviews.
I never claimed it was the largest. Just one of the larger ones, particularly as it has a community of 200k registered users all using community forums (most Drupal sites are more in the 100s to 1000s of users). It's the largest I know of offhand managed entirely by a single person and not using any specialized front-end caching or load balancing between the actual Drupal instance and the end user. There are much larger sites run by large companies with all kinds of specialized code and handlers like The Onion, Weather.com, etc.
Long-time OVH customer here: they had a similar issue in May 2015 for the same DC in Canada, where a car hit one of the poles along the BHS/MTL link via the North route, damaging the optical fiber cable; 2 of the 3 pairs of cables were down.
Uruguayan here too. This makes me quite proud indeed too!
For people who aren't aware, we do have our fair share of problems; Uruguay is given a rather too romantic portrayal by the international media, IMO.
The main ones being (in no particular order, and many of them related):
- Cases of extreme poverty and "slums" where a lot of people grow up in very negative environments.
- Lack of security! Lots of problems with violence and violent robberies.
- In many cases, education that isn't good enough for the non-wealthy; the same goes for the public health system.
- A big problem with drug abuse, especially with drugs like "pasta base", similar to crack.
Gay marriage and cannabis legalization are definitely steps in the right direction, but we have a long way to go.
I agree with the OP though, life is pretty good over in Uruguay. I've been living abroad for a year now and miss the hell out of my country :) It's a very unique little country, no place like home.
"Life here is good", I think this comment is what rabble says about Uruguay is not a country to change the world.
No it isn't for me, I'm young, I want to progress, do things that matter. I don't know if I want to change the world but I certainly want to change some things around but all the time you want to do it you get that calm voice that says "Oh but life is pretty good here, there are no much ups and downs".
I don't think a US person can stay here for more than a month were the realization of a human being is to buy a house, a car have kids and that's it.
We need to stop boast ourselves with this kind of oversized look we are getting from outside and start fixing our crap. Low the taxes, educate better our childs and provide money for the doers and not by political friendship anymore, we can start there.
The publicly available tools for making yourself anonymous and free from surveillance are woefully ineffective when faced with a nation-state adversary. We don’t even know how flawed our mental model is, let alone what our counter-surveillance actions actually achieve. As an example, the Tor network has only 3000 nodes, of which 1000 are exit nodes. Over a 24hr time period a connection will use approximately 10% of those exit nodes (under the default settings). If I were a gambling man, I’d wager money that there are at least 100 malicious Tor exit nodes doing passive monitoring. A nation-state could double the number of Tor exit nodes for less than the cost of a smart bomb. A nation-state can compromise enough ISPs to have monitoring capability over the majority of Tor entrance and exit nodes.
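To put rough numbers on that (my own back-of-the-envelope, using only the figures above: 1000 exits, roughly 100 of them used over 24h, and the wagered 100 malicious ones):
```typescript
// Chance that at least one of the distinct exits a client touches in 24h is
// malicious, modelled as drawing exits without replacement. Every figure here
// is one of the assumptions stated above, not a measurement.
const TOTAL_EXITS = 1000; // exit nodes in the network
const MALICIOUS = 100;    // assumed passively-monitored exits
const EXITS_USED = 100;   // ~10% of exits touched over 24 hours

let pAllClean = 1;
for (let i = 0; i < EXITS_USED; i++) {
  // probability the next distinct exit is clean, given the previous i were clean
  pAllClean *= (TOTAL_EXITS - MALICIOUS - i) / (TOTAL_EXITS - i);
}
console.log(`P(at least one malicious exit in 24h) ≈ ${(1 - pAllClean).toFixed(5)}`);
```
Under those assumptions the probability comes out at essentially 1: a given client is all but guaranteed to route through at least one monitored exit within a day.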
Other solutions are just as fragile, if not more so.
Basically, all I am trying to say is that the surveillance capability of the adversary (if you pick a nation-state as the adversary) exceeds the evasion capability of the existing public tools. And we don’t even know what we should be doing to evade their surveillance.
I remember seeing a couple of projects shared here before that use this technique to scrape sites with GHA.