Does Dell just have one campus? Are teams typically co-located at a single campus?
I'm thinking about my own employer, where I have been WFH since I got hired. My team is spread across 5+ states. I'd be the only person from my team in the office nearest to me were there an RTO mandate. I think there would only be one or maybe two other people there who I've even worked with on anything, in my several years of employment.
No 30-second chitchat opportunities.
Quick note that when I worked on a campus (not Dell, a big Silicon Valley outfit) 30 years ago, I seldom met most of my co-workers face to face, except once in a while in a meeting or randomly in the cafeteria. This was because the campus was large and the probability that someone you interacted with was in the same building, let alone on the same floor, was low.
This isn't really the problem, though. This is an easy problem to solve; the real problem is that it costs money to do so.
Also: I'm not asserting that the below is good, just that it works.
First, don't make every check a required check. You probably don't need to require that linting of your markdown files passes (maybe you do! it's an example).
Second, consider not using the `on:<event>:paths`, but instead something like `dorny/paths-filter`. Your workflow now runs every time; a no-op takes substantially less than 1 minute unless you have a gargantuan repo.
Third, make all of your workflows have a 'success' job that just runs and succeeds. Again, this will take less than 1 minute.
At this point, a no-op is still likely taking less than 1 minute, so it will bill at 1 minute, which is going to be $0.008 if you're paying.
Fourth, you can use `needs` and `if` now to control when your 'success' job runs. Yes, managing the `if` can be tricky, but it does work.
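To make points two through four concrete, here's a minimal sketch of how the pieces can fit together. The workflow, job, and filter names (`ci`, `changes`, `lint-docs`, `success`, the `docs` filter) are invented for illustration, not taken from any real repo; the idea is that `success` is the only check you mark as required in branch protection, and its `needs` plus a small `if` decide whether the run counts as a pass even when upstream jobs were skipped.

```yaml
# Hypothetical example: workflow/job/filter names are made up for illustration.
name: ci

on:
  pull_request:

jobs:
  # Detect which parts of the repo actually changed (point two:
  # dorny/paths-filter instead of on:<event>:paths).
  changes:
    runs-on: ubuntu-latest
    outputs:
      docs: ${{ steps.filter.outputs.docs }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v3
        id: filter
        with:
          filters: |
            docs:
              - '**/*.md'

  # Only does real work when markdown changed; otherwise it is skipped.
  lint-docs:
    needs: changes
    if: ${{ needs.changes.outputs.docs == 'true' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "lint markdown here"

  # Points three and four: the one job you mark as a required check.
  success:
    needs: [changes, lint-docs]
    if: ${{ always() }}  # run even when lint-docs was skipped
    runs-on: ubuntu-latest
    steps:
      - name: Fail if any needed job failed or was cancelled
        if: ${{ contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled') }}
        run: exit 1
      - run: echo "all required work finished"
```

Keeping the pass/fail decision in a step rather than in an elaborate job-level `if` keeps the job-level condition down to a single `always()`, which is easier to reason about as the job graph grows.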
We are in the middle of a very large migration into GitHub Actions from a self-hosted GitLab. It was something we chose, but due to some corporate decisions our options were essentially GitHub Actions or a massive rethink of CI for several dozen projects. We have already moved to generating some of our GitHub Actions code, and that's the fifth and perhaps final frontier for addressing this situation: figure out how to describe a graph and the associated completion requirements for your workflow(s), and write something to translate that into the `if` statements for your 'success' jobs.
Allegedly, the biggest concern this time around was the economy. Millions of people complaining about inflation and the cost of goods voted for a guy promising to raise tariffs, and a party that historically caters to big business. The same big business that has moved a lot of jobs overseas, and has lobbied to relax restrictions on visas to hire more foreign workers for onshore jobs.
To me, this looks a lot like people voting against their own interests. I think that when people vote against their own interests, it's usually because they don't understand what they're voting for, i.e. it's an education issue. And it's not surprising that other people would be perplexed and frustrated by this.
But maybe I've just been misled by the wrong propaganda. I guess we'll find out.
I think it's definitely the case that the group of voters in 1789 was much smaller and more homogeneous than it is today.
I also think the nature of propaganda has changed a little as well. Today, messages can be delivered cheaply to everyone, everywhere, from anywhere, nearly instantaneously. There is far less of a propagation delay, and far less of a natural check on the rate and volume of propaganda.
We were interested in Issues and Projects, but the number of people at an organization who need access to those but not to code or CI is pretty large. GitHub does not have a different license option for BA/CX/PM types. We ended up going with Jira for tasking, which was substantially cheaper.
I was sad about this because issues/projects had all the stuff I personally cared about, and there was no need to work to set up integrations. I think there was some anxiety by the PMO that it wasn't mature enough for what they wanted, but I don't remember any specifics. Probably something about reporting.
That's how I feel as well. It's costly given the features non-dev types won't need, and it's not fleshed out enough to attract those folks to make the switch. GitHub is probably missing the boat, tbh. Even the marketing page (https://github.com/features/issues) is targeted at developers.
Are you aware of any health insurance plans in the US which don't have an open enrollment period? I think this is standard across the industry as a check against adverse selection, but most of the information quickly available is ACA-focused, where it's definitely a feature; since open enrollment is extremely beneficial to insurers, I wouldn't imagine them talking up alternatives.
It's true that there is a list of qualifying life events that let you change or acquire insurance outside of open enrollment, but none of them look like "because I don't like my insurer" to me.
There is private (non-ACA) insurance you can purchase without an open enrollment period in the States. However, they get around it by being able to deny you coverage for preexisting conditions.
Even if open enrollment periods are universal, the statement I reacted to is still false (the "and" should be an "or").
Having to wait between zero and 12 months to change insurance plans is a barrier, but a small one compared to the inability to change plans at all, as in a nationalized health scheme.
You make it sound as though you're forced to take nationalized health care.
"Almost all European countries have healthcare available for all citizens. '''Most European countries have systems of competing private health insurance companies''', along with government regulation and subsidies for citizens who cannot afford health insurance premiums."
They are probably more expensive than the government plans, but the same is true in reverse in the US. One helps the poor more while making sure those who can afford it still have an option; the other gives those with good, well-paying jobs good care while costing the poor, who can least afford it, far more relative to their capacity to pay. You're just wrong on this, and trying to be cute with boolean logic is ... "cute".
First, most/all new Smart TVs are doing automated content recognition, collecting details about what you watch and sharing them with the manufacturer, which sells them and/or uses them to show you ads. My understanding is that most new Smart TVs are showing you ads regardless. I'm opposed to this on principle.
Second, at least the Smart TVs I've personally used (a couple of Vizio sets, a late-00s FHD Samsung, a mid-10s 4k LG) all have really slow and unpleasant UIs. The Vizio in particular was glacial, and I couldn't understand why the set owner even bought it (they were only streaming through built-in apps). I'm guessing newer and higher-end sets are fast, but I haven't used one myself.
And finally, I don't want an integrated device like this anyway; I would prefer to pair a really nice panel with a separate smart device (Roku, Apple TV, Linux, whatever) so I can upgrade those independently. If I am buying a Smart TV with no intention of using the smarts, I feel like I'm either paying too much, or not getting as nice a panel as I otherwise could.
I quite like the Sony Bravia UI.
I did intend to just use a connected Raspberry Pi 4B running LibreELEC to play my own content, but I ended up using the Kodi app from the Google Play store, which was superior at the time (H.264 and H.265 hardware decoding, plus working HDR; the Pi 4 firmware was lagging behind). I use my Pi-hole for DNS to block calls to tracking services like Samba. A fairly recent firmware update meant I could attach a cheap gigabit Ethernet adapter to the USB 3.0 port, so happy camper here.
Sony lets you either opt out of content recognition (yeah, it's hidden in the menu) or disable the smart features altogether and just use it as a dumb TV.
Just don't connect it to the network and that solves most of the problems. Really the only one left is slow and unpleasant UI and most of that evaporates when all you're ever doing is swapping inputs. At this point the ads and other crap are subsidizing the panels.
What I think I just read is that content moderation is complicated, error-prone, and expensive. So Meta is going to do a lot less of it. They'll let you self-moderate via a new community notes system, similar to what X does. I think this is a big win for Meta, because it means people who care about the content being right will have to engage more with the Meta products to ensure their worldview is correctly represented.
They also said that their existing moderation efforts were due to societal and political pressures. They aren't explicit about it, but it's clear that pressure does not exist anymore. This is another big win for Meta, because minimizing their investment in content moderation and simplifying their product will reduce operating expenses.
> it means people who care about the content being right will have to engage more with the Meta products to ensure their worldview is correctly represented.
To me it sounds better for large actors who pay shills to influence public opinion, like Qatar. I disagree that this is better for either Facebook users, or society as a whole.
It does, however, certainly fit the golden rule: he who has the gold makes the rules.
I was under the impression that Community Notes were designed to be resistant to sybil attacks, but I could be wrong. Community Notes have been used at Twitter for a long time. Are there examples of state-influenced notes getting through the process?
Twitter's Community Notes were designed to be resistant to sybil attacks. Meta is calling their new product Community Notes, but it would be a mistake to assume the algorithms are the same under the hood. Hopefully Meta will be as transparent as Twitter has been, with a regular data dump and so on.
Qatar is not well known for paying people to bot on social media. They play the RT game by using their news network Al Jazeera to do that instead and give their propaganda a professional air. The first country to do this was India[1]. Israel has special units in the army to do this[2]. At this point so many countries pay people to do what you say, but Qatar doesn't, from what I can tell. If you have proof of it, I'm all ears.
I was cautiously optimistic when this was announced that India and Saudi Arabia (among others, incl. Qatar) might see some pushback on how they clamp down on free speech and journalism on social media. But since Zuck mentioned Europe, I fear those countries will continue as they did before.
Sure, I'll trust the leadership of this huge commercial company, famous for lots of controversies regarding people's privacy. I'll trust them to decide for me what is true and what is not.
> it means people who care about the content being right will have to engage more with the Meta products to ensure their worldview is correctly represented.
Or maybe such people have far better things to do than fact check concern trolls and paid propagandists.
There do seem to be a lot of people who enjoy fact checking concern trolls and paid propagandists.
I'm not sure if they do more good than harm. Often the entire point seems to be to get those specific people spun up, realizing that the troll is not constrained to admit error no matter how airtight the refutation. It just makes them look as frothing as trolls claim they are.
And yet, it's also unclear if any other course of action would help. Despite decades of pleading, the trolls never starve no matter how little they're fed.
> Often the entire point seems to be to get those specific people spun up, realizing that the troll is not constrained to admit error no matter how airtight the refutation.
Your point is exactly why I can't take seriously anyone who claims that randoms "debating" will cause the best ideas to rise to the top.
I can't count how many times I've seen influencer propagandists engage in an online "debate", be walked by the hand through how their entire point is wrong, only for them to spew the exact same thing hours later at the top of every feed. And remember, these are often the people with some of the largest platforms, claiming they're being censored ... to millions of people, lol.
It's too easy to manipulate what rises to the top. For debate to be anything close to effective, all parties involved have to actually be interested in coming closer to the truth. And the algorithms have no interest in deranking sophists and propagandists.
> And yet, it's also unclear if any other course of action would help. Despite decades of pleading, the trolls never starve no matter how little they're fed.
Downvotes that hide posts below a certain threshold have always seemed like the best approach to me. Of course it also allows groups to silence views.
> I think this is a big win for Meta, because it means people who care about the content being right will have to engage more with the Meta products to ensure their worldview is correctly represented.
Strong disagree. This is a very naive understanding of the situation. "Fact-checking" by users is just more of the kind of shouting back and forth that these social networks are already full of. That's why third-party fact checks are important.
I have a complicated history with this viewpoint. I remember back when Wikipedia was launched in 2001, I thought: there is no way this will work... it will just end up as a cesspool. Boy was I wrong. I think I was wrong because Wikipedia has a very well defined and enforced moderation model, for example: a focus on no original research and a neutral point of view.
How can this be replicated with topics that are by definition controversial, and happening in real time? I don't know. But I don't think Meta/X have any sort of vested interest in seeing sober, fact-based conversations. In fact, their incentives work entirely in the opposite direction: the more anger-inducing and divisive the content, the more traffic and engagement it drives [1]. Whereas with Wikipedia, I would argue the opposite is true: Wikipedia would never have gained the dominance it has if it were full of emotionally charged content with dubious or no sourcing.
So I guess my conclusion from this is that I doubt any community-sourced "fact checking" efforts in-sourced from the social media platforms themselves will be successful, because the incentives are misaligned for the platform. Why invest any effort into something that will drive down engagement on your platform?
> ... we found that posts about the political out-group were shared or retweeted about twice as often as posts about the in-group. Each individual term referring to the political out-group increased the odds of a social media post being shared by 67%. Out-group language consistently emerged as the strongest predictor of shares and retweets: the average effect size of out-group language was about 4.8 times as strong as that of negative affect language and about 6.7 times as strong as that of moral-emotional language—both established predictors of social media engagement. ...
True, but that doesn't discount that it's a win for Meta:
1) Shouting matches create more ad impressions, as people interact more with the platform. The shouting matches also get more attention from other viewers than any calm factual statement.
2) Less legal responsibility / costs / overhead
3) Less potential flak from being officially involved in fact-checking in a way that displeases the current political group in power
Users lose, but are people who still use FB today going to use FB less because the official fact checkers are gone? Almost certainly not in any significant numbers
But "fact-checking" by people in authority is OK? Isn't that like, authoritarian?
"Fact-checking" completely removed the ability for debate and is therefore antithetical to a functional democracy. Pushing back against authority, because they are often dead wrong, is foundational to a free society. It's hard to imagine anything more authoritarian than "No I don't have to debate because I'm a fact-checker and by that measure alone you're wrong and I'm right". Very Orwellian indeed!
Additionally, the number of times that I've observed "fact-checkers" lying thru their teeth for obvious political reasons is absurd.
They are given the title of fact checker, ending debate; this is the authoritarian part. It does not matter who employs them. If fact checkers were angels we wouldn't have this problem. However, fact checkers are subject to human nature just like the rest of us: they can be biased, wrong, etc. Do you think these fact checkers don't have their own opinions? Do you think they don't vote? Don't lie?
You are assuming that the people on social media are a representative cross-section of society, but you will quickly notice that this is not the case; just look at echo chambers.
If I try to debate the same fact on a far-right and a far-left post, will both undoubtedly come to the same discussion and conclusion? Let's not lie to ourselves.
So for your claim to have any validity, you would first need a fair, unbiased group of people on every post (and there are a lot more issues beyond that; just look at the loud people versus the ones who no longer bother to comment because discussing seems impossible). That is de facto not the case, which is why fact-checking is indeed helpful.
Without some sort of controls in place, fact-checking becomes useless because it's subject to being gamed by those with the most time on their hands and/or malicious tools, e.g. bots and sock puppets.
You should look into the implementation, at least the one that X has published. It's not just users shouting back and forth at each other. It's actually a pretty impressive system
It's more naive to think a fact-checking unit susceptible to government pressure is likely to be better.
There will always be government pressure in one form or another to censor content it doesn't like. And we've obviously seen how this works with the Dems for the last 4 years.
> They aren't explicit about it, but it's clear that pressure does not exist anymore
It's clear that the pressure comes now from the other side of the spectrum. Zuck already put Trumpists at various key positions.
> I think this is a big win for Meta, because it means people who care about the content being right will have to engage more with the Meta products to ensure their worldview is correctly represented.
It's a good point. They're also going to push more political content, which should increase engagement (eventually frustrating users and advertisers?)
Either way, it's pretty clear that the company works with the power in place, which is extremely concerning (whether you're left or right leaning, and even more if you're not American).
> They also said that their existing moderation efforts were due to societal and political pressures. They aren't explicit about it, but it's clear that pressure does not exist anymore.
I didn't think it was any secret that Meta largely complies with US gov't instructions on what to suppress. It's called jawboning[1]
The pressure has just shifted from being applied by the left to the right. There is still censorship on Twitter, it is just the people Elon doesn't like who are getting censored. The same will happen on Facebook. Zuckerberg has been cozying up to Trump for a reason.
What is this based on? I see so many people shouting things like this, but there doesn't seem to be any basis for these arguments. They seem a bit useless and empty.
How would fact checkers access the 90% of private content? And should they? I don't think so, even if the respective private content is questionable.
The EU goes its own way with trusted flaggers, which is more or less the least sensible option. It won't take long until bounds are overstepped and legal content gets flagged. Perhaps it already happened. This is not a solution to even an ill-defined problem.
Good. Private communication is private, even if it's a group. The nice thing about the crazy is that they're incapable of keeping quiet: they will inevitably out themselves.
In the meantime, maybe now I can discuss private matters of my diagnosis without catching random warnings, bans, or worse.
What kind of diagnosis spawns so many fact checks that it's a problem? I'd think any discussion about medical issues would benefit greatly from the calling out of misinformation.
As a Harris supporter, I actually agree, I think it was way too heavy handed and hurt Harris more than helped. I’m not sure anymore what the goal of fact checking is (I’ve always felt it was somewhat dubious if not done extremely well).
Agreed, I always felt like most of the fact checking that has become vogue in the past ten years is designed to comfort the people who already agree, not inform people who want genuine insight.
If you don’t have fact checkers, a debate loses all its value. Debates must be grounded in fact to have any value at all. Otherwise a “debate” is just a series of campaign stump speeches.
Yeah, the problem is that if one side tells 100 lies, and the other tells 1 lie, you can't correct all 100 lies, but if you only correct the most egregious lies then statistically you'll only be correcting the one side, and if you correct 1 lie from each side, then you make it seem like both sides lie equally. The Gish Gallop wins again.
Especially for live fact-checking, the greater the number of lies and the more obvious/blatant those lies are, the more likely someone is to get fact-checked.
We would have to fact check if those numbers are correct.
Oh wait, fact checkers don't work, better just inform yourself and make up your own mind, and don't just believe some supposedly authoritarian figures.
This is the problem, you are clearly biased. She brought up the Charlottesville issue that has been widely debunked; it is blatantly false and well-known to be false. She was not fact-checked. That's the issue.
> “where there may be severe deformities. There may be a fetus that’s non viable” he said. “If a mother is in labor, I can tell you exactly what would happen.”
Your dying grandma may go DNR, but that doesn’t mean murdering grandmas is broadly legal.
My wife does charity photography for https://www.nowilaymedowntosleep.org/. You see lots of this sort of withdrawal of care. Calling it an abortion is cruel and dumb.
Yes, this just reads like "oh, thank God for that, that department was an expensive hassle to run".
I don't know if I'd call it a certain win for Meta long term, but it might well be if they play it right. Presumably they're banking on things being fairly siloed anyway, so political tirades in one bubble won't push users in another bubble off the platform. If they have good ways for people to ignore others, maybe they can have their cake and eat it too, unlike Twitter.
As with Twitter, the network effect will retain people,
and unlike Twitter, Facebook is a much deeper, more integrated service, such that people can't just jump across to a work-alike.
A CEO who can keep his mouth shut is also a pretty big plus for them. They skated away from being involved with a genocide without too many issues, so the same ethical revulsion people have against Musk seems to be much less focused here.
Community Notes is the best thing about Musk's Dumpster fire.
The problem with CN right now, though, is that Musk appears to block it on most of his posts, and/or right-wing moderators downvote the notes so they either never appear or quickly disappear.
I am not so sure that Musk or right-wing moderators are directly to blame for the lack of published community notes.
My guess: in recent months, many people (e.g., me) who are motivated to counter fake news have left Twitter for other platforms. Thus, proposed CNs are seen and upvoted by fewer people, resulting in fewer of them being shown to the public.
Also, I ask myself: why should I spend time verifying or writing CNs when it does not matter; the emperor knows that he is not wearing any clothes, and he does not care.
> the emperor knows that he is not wearing any clothes, and he does not care.
Indeed the ending of the famous story is:
> "But the Emperor has nothing at all on!" said a little child.
> "Listen to the voice of innocence!" exclaimed his father; and what the child had said was whispered from one to another.
> "But he has nothing at all on!" at last cried out all the people. The Emperor was vexed, for he knew that the people were right; but he thought the procession must go on now! And the lords of the bedchamber took greater pains than ever, to appear holding up a train, although, in reality, there was no train to hold.
Community notes launched at the start of 2021. It predates the buyout by almost two years.
If what they said about their design is to be believed, political downvoting shouldn't heavily impact them. I wish it was easier to see pending notes on a post though.
Right, I think that's the parent's point: CN is a great design, dragged down by the fact that Elon heavily puts his thumb on the scale to make sure posts he likes spread far and wide and posts he dislikes get buried, irrespective of their truth content.
You can see them, it's just that finding the button to do so on a post is difficult. I think you need to navigate to the post from the notes section of the website.
To be fair, a lot (not all) of notes on Musk's posts are spurious, including the NNN's. It's clearly being misused there, but in general they seem to work very well indeed.
> content moderation is complicated, error-prone, and expensive
I think the fact-checking part is pretty straightforward. What's outrageous is that the content moderators judge content subjectively, labeling perfectly reasonable discussions as misinformation, hate speech, etc. That's where the censorship starts.
How do you avoid judging actual human discussions subjectively? I remember being a forum moderator and struggling with exactly the same issues. No matter what guidelines we'd set, on one hand there'd be essentially legitimate discussions that superficially were way over the line, and on the other you'd have neo-Nazis acting in ways that weren't technically bad, but were clearly leading there.
Facebook moderators have an even harder job than that because the inherent scale of the platform prevents the kinds of personal insights and contextual understanding I had.
Okay, but you're saying this on a platform where the moderator (dang) follows intentionally vague and subjective guidelines, presumably because you like the environment more here than some unmoderated howling void elsewhere on the Internet.
The quality of the platform lives or dies on the quality of these decisions. If dang's choices are too bad, this site will die.
The situation is somewhat different between a niche community and a borderline monopoly. But it's also true that facebook's success depends on navigating it well. At the end of the day we can choose to use it or not.
To the extent that people feel forced to use a platform, that's a reason to bias further away from suppressing free expression, even if the result is a somewhat worse platform.
You're still making subjective judgements wherever you draw the line. I don't know how a platform could avoid making subjective judgements at all and still produce an environment people want to be in.
Good point, and thanks. I have to admit I don't have a good answer to this. Maybe what dang needs to assess can be better defined or qualified? Like we can't define porn but we know it when we see it? On the other hand, assessing whether something is offensive or is hate speech is so subjective that people simply weaponize those labels, intentionally or unintentionally.
I thought there would be community notes. And how would third-party fact-checking work? The Stanford doctor was banned from X because he posted peer-reviewed papers that challenged the effectiveness of masks (or vaccines)? I certainly don't want to see that level of hysteria.
> The Stanford doctor was banned from X because he posted peer-reviewed papers that challenged the effectiveness of masks (or vaccines)? I certainly don't want to see that level of hysteria.
Not familiar with that specific case, though generally I'm not a fan of bans. Fact checks are great though. There have been peer-reviewed papers about midi-chlorians too (https://www.irishnews.com/magazine/science/2017/07/24/news/a...), but I'd sure hope that if someone brought it up in a discussion they'd be fact checked.
I keep a Windows 10 installation around just in case, but over the past few years I've only booted it to install updates. I also have some ebook and hardware config applications that I haven't been able to get working in WINE, which I very very occasionally want to run.
I haven't decided what to do. I suspect I'll end up deleting the Windows partition and just reclaiming that disk space.