Hacker News | bourgoin's comments

I really wanted to find an answer for your challenge but it seems there aren't any answers using the regular 1-26 gematria and numbers written in their ordinary English form (without "and"). However, I did find some answers using a "zero-indexed" gematria ranging from 0-25, and with this other English gematria described here on Wikipedia: https://en.wikipedia.org/wiki/English_Qabalah#R._Leo_Gillis'...

Python 3:

>>> num_strings = ['Zero', 'One', 'Two', ... 'Seven Hundred Thirty Two' ... ] # I tried 0 to 1000. num_strings generation script sold separately

>>> def find_gematria_matches(values): return [i for i,ns in enumerate(num_strings) if i == sum(values[ord(c)-65] for c in ns.upper() if c.isalpha())]

>>> find_gematria_matches([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25]) # "zero-indexed" gematria

[213]

>>> find_gematria_matches([5,20,2,23,13,12,11,3,0,7,17,1,21,24,10,4,16,14,15,9,25,22,8,6,18,19]) # R. Leo Gillis' Trigrammaton Qabalahn

[232, 242]
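For anyone who wants to reproduce this without the "sold separately" generation script, here's a self-contained sketch. The `number_to_words` helper is my own minimal 0-999 speller (plain English form, no "and"), not the original poster's script:

```python
ONES = ["Zero", "One", "Two", "Three", "Four", "Five", "Six", "Seven",
        "Eight", "Nine", "Ten", "Eleven", "Twelve", "Thirteen", "Fourteen",
        "Fifteen", "Sixteen", "Seventeen", "Eighteen", "Nineteen"]
TENS = ["", "", "Twenty", "Thirty", "Forty", "Fifty", "Sixty", "Seventy",
        "Eighty", "Ninety"]

def number_to_words(n):
    """English name for 0-999 without 'and', e.g. 213 -> 'Two Hundred Thirteen'."""
    if n < 20:
        return ONES[n]
    if n < 100:
        return TENS[n // 10] + ("" if n % 10 == 0 else " " + ONES[n % 10])
    rest = n % 100
    return ONES[n // 100] + " Hundred" + ("" if rest == 0 else " " + number_to_words(rest))

def find_gematria_matches(values, limit=1000):
    """Numbers whose spelled-out letter values (A=values[0], B=values[1], ...)
    sum back to the number itself."""
    return [n for n in range(limit)
            if n == sum(values[ord(c) - 65]
                        for c in number_to_words(n).upper() if c.isalpha())]

print(find_gematria_matches(list(range(26))))  # zero-indexed A=0..Z=25 -> [213]
print(find_gematria_matches(
    [5, 20, 2, 23, 13, 12, 11, 3, 0, 7, 17, 1, 21, 24, 10, 4,
     16, 14, 15, 9, 25, 22, 8, 6, 18, 19]))  # Trigrammaton -> [232, 242]
```

You can check 213 by hand: T+W+O = 19+22+14 = 55, HUNDRED = 67, THIRTEEN = 91, and 55+67+91 = 213.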


*inspects image*

> that's a shame

that's a shame ;)


*hangs head in shame*


There have existed hand sanitizer formulations containing Triclosan or BZK instead of alcohol, although those compounds are more commonly used in anti-bacterial soaps. Triclosan was particularly controversial because of its potential to cause antibiotic resistance and also for being an endocrine disruptor. During the second half of the last decade, its usage was widely restricted by regulatory agencies and it was phased out of a ton of consumer products.


The whole paragraph except for the first and last sentences may have been composed using an LLM.

> All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of [...] or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being [...] held responsible under Rule 11 for the contents of any filing that he or she signs and submits to the Court, regardless of whether generative artificial intelligence drafted any portion of that filing

Asking the attorneys to re-acknowledge that documents they file are their official entries into the record, no matter what programs are used to generate them, makes sense in principle as a way to preempt the "algorithm told us to" argument.


Well, that's N=1. But we have seen that it's sometimes possible to bypass that kind of filter with clever prompt engineering. And because these things are black boxes, it doesn't seem possible to rigorously prove "unjailbreakability".


A couple of years ago, I was attacked by a Kiwi bot near a UC campus. This is my story.

The bot and I were moving towards each other on a sidewalk, and when I came close it stopped, as they do when sensing an object in front of them. But there was an awkward moment as I tried to go around it and it repeatedly jerked forward an inch as its motor kicked on and off. Maybe I was walking around the very edge of its radius. In any case, my behavior must have triggered some pathfinding bug, because it turned and drove right into my legs, after which it stopped and sat stationary. Luckily they're small and move slowly so it wasn't a big deal, but that memory stuck with me. Articles about Tesla pathfinding issues always bring it back to the surface.


Kiwi bots aren't (weren't?) actually AI controlled. They had human drivers in South America who controlled them remotely. If one attacked you, it was either the human driver going aggro, or just a problem with the latency of the camera -> cell network -> streamed to South America -> driver inputs command -> sent back to the US -> over the cell network -> back to the bot. And the cameras they had were pretty bad (the ordering app would show you the camera view when the bot was nearing its destination).


It's depressing to think that companies have normalized passing "mechanical turks" (exploiting workers from an impoverished country) as AI.


Those "exploited workers" probably made decent money relative to the cost of living in their location, and they got to do it from the comfort of a computer instead of hard labor in the sun, which is what someone in their same socioeconomic bracket would more likely be doing.


How would you know?

Have you ever been in "their" location living in "their" same socioeconomic bracket?

Would it bother you more if "their" was replaced with "our"?

Would you then consider it "decent" money?


If it was "our" citizens getting paid near (US) minimum wage to sit at home and monitor a robot all day then yes, I'd still be all for it. Teenage me would have much preferred that to fast food, and even adult me would happily take it as a second or interim job if needed and unable to find better paying work for some reason. And I'm sure it'd be a great opportunity for the physically disabled or other people unable to leave home.

This is a win for everyone involved. A US company gets to outsource easy work, at a price it can afford that is below our minimum wage, to a population that can live happily on those lower wages due to their nation's cost of living.


> How would you know?

How do you know someone hired to drive a remote vehicle is being exploited?


Minimum wage to drive a robot sounds way better than minimum wage in retail


"It is okay to pay sub-living national wages to foreigners in other countries because their cost of living is lower"

And I suppose you should also add they must stay there and never come to your country because then your job is at risk?


>"It is okay to pay wages below what we could live on to foreigners in other countries because their cost of living is lower"

FTFY. But, yes? How is that controversial?

>And I suppose you should also add they must stay there and never come to your country because then your job is at risk?

How do you draw that completely unrelated conclusion from the previous conversation?


Cost of living is irrelevant when the cost of certain goods like iPhones, computers, Internet subscriptions and other things is fundamentally determined by strong markets like USA or EU.

Or are you going to tell me that Indians don't deserve to use iPhones, watch Netflix, or learn new skills through online programmes? Because that would be pretty racist, and I don't suppose you consider yourself racist, do you?

Further, the fact that people in some countries earn astronomically high wages means they can, when they retire, take everything with them to a cheap country like Egypt, India, or Greece, and live like emperors. Is that fair? Especially when hard-working people in India can barely afford a vacation in their own country.


Ah, the old "If you disagree with me then you're a racist". Please don't engage if you aren't going to engage in good faith, we're not on Reddit. I'm happy to be called wrong, but not if you're going to do so like that.

People deserve what they can afford. Are you suggesting we subsidize the cost of every luxury good so that everyone in the world regardless of income has access to said luxury good? It's a great notion, but the logistics are fundamentally impossible.

I can't afford a Ferrari. Do I not deserve a Ferrari? I can't afford a Rolex. Do I not deserve a Rolex? The answer is no, I don't deserve either of those things. I don't fundamentally deserve anything except for my own life. If I want anything else it's up to me to find a way to obtain it.


Cost of Netflix minimum (non mobile) plan

US: $9.99=760₹

India: $2.6=199₹


Indians make less than 1/4th of an American salary. Your point?


> Cost of living is irrelevant when the cost of certain goods like iPhones, computers, Internet subscriptions and other things is fundamentally determined by strong markets like USA or EU.


For more in that depressing vein, check out "Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass" by Mary L. Gray


I will, I liked "Nickel and Dimed" so I'll give this a read. One side note: in looking up this book to purchase on Amazon, the hardcover is 2 bucks cheaper than the softcover. I've seen this kind of thing quite a bit lately. What gives?


If you liked Barbara Ehrenreich's "Nickel and Dimed,"[0] you might like a few of her unrelated books as well. I highly recommend her "Dancing in the Streets,"[1] a 5,000-year history of the deadening of European culture (at Europeans' own hands, no less) that pairs nicely with Graeber's "Debt: The First 5,000 Years,"[2] a book of similar pacing, as well as a much shorter book on the gendered professionalization/demolition of the medical practice in the medieval era titled "Witches, Midwives, and Nurses" or "W.M.N."[3]

Her "Bait and Switch,"[4] however, about the white-collar unemployment industry I found dull and unenlightening.

0. https://www.goodreads.com/book/show/1869.Nickel_and_Dimed

1. https://www.goodreads.com/book/show/24452.Dancing_in_the_Str...

2. https://www.goodreads.com/book/show/6617037-debt

3. https://www.goodreads.com/book/show/24453.Witches_Midwives_a...

4. https://www.goodreads.com/book/show/24450.Bait_and_Switch


I've noticed this too. Hypothesis: Nobody wants books made of atoms any more, and the few that are selling are paperbacks. Therefore hardbacks are cheaper even though they're rarer and more expensive to produce.

Just a W.A.G...


Paperbacks have their price inflated to make the ebook look like a deal. Someone forgot to apply the markup to hardcover too.


I believe (as of a few years ago) that Kiwi bots are semi-autonomous, meaning they do have someone watching the camera feed but the bots themselves can move in a given direction and will stop if an object is detected.


Source?


I’ve had this happen with actual humans. A human is coming toward me on a path. I zig. They zig. I zag. They zag. We walk into each other. It must be some kind of human path finding bug. :-)


I've never actually walked into anyone. Usually after two or three zigzags you look at each other and smile, then one of you (or both) steps to the side, and you go "no, you go ahead, ok".


You should also say "thanks for the dance but I must be going"


I will forever link this whole thread in any discussion where HN is discussing anything real world/outside of our bubble

It's hilarious


The "people as obstacles to avoid" angle is what amuses me


Are you implying we should implement a smile feature to the delivery bots?


It definitely should smile before/while driving into your legs, as well as when standing and waiting for you to walk around. It could also mark you with a laser pointer to indicate that it does sense you. Communication is the key.


Somehow I doubt that most people would take a bot marking them with a laser pointer as a benign action, but maybe that's just me.


The facial communication is only necessary because we're negotiating as two people who want to go where the other one is. When it comes to bots they can be forever deferential and always yield to humans.


Hum... I'm not sure you got what is being negotiated right.

When people do that it's because both are yielding. They just don't know where to yield.


I understand yea - when it comes to a robot and a human though the human doesn't need to yield. It'll probably take some time to train it into people but humans should always have the right of way.


You avoid this by using visual cues. E.g. strongly looking into the direction that you want to go. I suppose that most people learn this at an early age. And these robots should too.


I find that making eye contact always resolves the issue


Always go through the right side, is this not a rule in your country? I'm asking not knowing where I learned it, but it definitely is a social norm to take the right side of the sidewalk anytime this may happen. Everyone just does this and it works out great.


Oh how I wish everybody understood this. Even in crowded cities in the US you get a lot of people who do not understand this - a minority to be sure, but a sizable one (I'd estimate 5-10%, probably closer to 10%, though sometimes people who aren't cognizant of the convention are accidentally correct in their pathing choice). Unfortunately this means you sometimes need to make a split-second decision that this person probably has no idea what they're doing, and just figure out how to get around them regardless of convention.


We move to the left in the UK, Australia, and some other formerly British-influenced countries.

Except! for escalators in the London subway. There you stay right. Presumably because of so many tourists from the US and the continent.


It makes sense if you are on a pavement as if someone needs to step into the road it should be the person facing the oncoming traffic.

Nobody really does it though.


It is the left side in my country. Which creates a problem when people from right-sided countries visit my city.

I noticed this in China, a densely populated mostly right-sided country. Whenever a British engineering firm would install escalators they would set the direction opposite to the flow of human traffic. You would walk up to it on the path on the right side and be forced to cross the path of oncoming people to use the escalator on the left before having to cross over again once at the top.


@js2 please check your inbox: you have been recalled


You need to update your firmware. They fixed this deadlock behavior in the new version.

Whenever that happens, I fully retreat to my right side, standing sideways, and gesture them to go on like a restaurant waiter.


I mean, thinking about it, it would make sense that the bots' programming has the same kinds of failure states that humans have when we walk.


> when I came close it stopped, as they do when sensing an object in front of them

The security robots at one of the big skyscrapers down the street from me do not stop for people. My wife got knocked into by one when we were standing in the plaza looking up something on her phone. (They're not little delivery robots. They're about five feet tall.)

Good thing she was confused by what happened, because she's also the type who would have knocked the robot over and asked me to shove it into traffic if she had her wits about her.


And shoving it into traffic - or at least calling the police and pressing charges - would have been the right thing to do!

If you want to use robots, fine. You are still responsible for them and any people they bowl over!


Probably. This seems like a public space, so almost certainly. However, if this was a private space, sometimes the rules are different. Once in a while I have to go into our factory (not even once a year, but sometimes), and they always make it clear that forklifts have the right of way, so watch out. (Forklifts have poor visibility, so by giving them the right of way they ensure nobody expects them to stop - in practice a forklift driver will stop if they see you, but this way they are not expected to see something that is impossible to see.)


Forklifts though also are pretty dang loud and have a highly trained operator driving them.

Did not even realize we had "security robots" yet like this - now I am curious what the hell this thing looks like!


I don't think this shields the company from liability. Instead it provides some ammunition to use in the event of a lawsuit.

Things are very different between employees and the general public. I imagine a jury would find that a lady-busting security robot is negligent by default. Whereas, a fork lift driver would be assumed to be doing his job and that situational negligence would need to be proven.


Note that my company does a lot of mandatory training before you are allowed to enter the manufacturing areas. Forklift safety is only a part of it (though a large part: everything else is common sense, while forklifts don't follow common-sense rules).

I agree that if this is a public place, a jury would and should find the robots at fault (unless the robots are running some sort of arrest-her routine, or knocking her over because a bad fall is still better than some other danger).



Well, if you invite people onto your private property, you can't go and assault them.


> shoving it into traffic

Right, let's cause a full blown accident because a robot bumped into me.


I generally object to the use of the word "accident" for "car crash", but in this case, it seems particularly inapplicable.


This is one of the rare situations where people might actually empathize with those who make up "traffic".


Seriously. What if this thing bowls into a child and seriously injures them? Or a dog that is confused on what the hell is going on? I'm not even against them for mobile surveillance but they need to be safe.

And if these things really are 400 pounds with a low center of gravity, as people are linking below... well, then I guess you'll just have to enlist the help of one other friend to knock it over and prevent it from hurting anyone else.


What are "security robots" for the uninitiated?


This is one I've seen in the wild. The K5 rocket-shaped model is heavy, 400 lbs (180 kg)

https://www.knightscope.com/


"rocket shaped" is sort of a generous way to describe it.

My first exposure to security robots was actually a company marketing a repurposed remote-controlled lawnmower platform. It was nearly the size of a Smartcar but low to the ground and designed to cross difficult terrain. Even so, a similarly designed lawnmower tumbled down a hill and killed its operator around the same time frame (I don't think it was from the same company). All of that makes the Knightscope design rather surprising: it seems like these things falling over and injuring people is an inevitable liability. At least from my outside perspective, the companies using these things don't seem to have much of a head for avoiding liability issues, as they're often fielded in ways that end up in negative press coverage - not even really due to any kind of fault per se, but just the user's lack of consideration of the optics of deploying a large, er, rocket-shaped robot to programmatically harass homeless people.

Some might remember the decade-ago jokes about "do not enter elevator with robot" signs and other artifacts of robots coexisting with humans. It sort of feels like the situation hasn't really advanced that much, we're just getting used to it and actively making use of the present inability of robots to coexist in polite society.


Shape != center of gravity. All the power and movement hardware is likely very close to the ground, making the robot very difficult to tip over.

https://www.dannyguo.com/blog/my-seatbelt-rule-for-judgment/
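A back-of-the-envelope check of that claim (all figures below are my own guesses, not Knightscope specs): treating it as a quasi-static torque balance, a rigid body tips about a base edge when a horizontal push F at height h satisfies F*h > m*g*d, where d is the horizontal distance from the center of gravity to that edge.

```python
G = 9.81  # gravitational acceleration, m/s^2

def tipping_force(mass_kg, cog_to_edge_m, push_height_m):
    """Horizontal force (N) needed to start tipping a rigid body about a
    base edge, ignoring dynamics and deformation: F = m * g * d / h."""
    return mass_kg * G * cog_to_edge_m / push_height_m

# Guessed figures: 180 kg robot, CoG 0.25 m inside the base edge,
# pushed at 1.0 m height.
print(round(tipping_force(180, 0.25, 1.0)))  # -> 441 N, roughly 45 kgf
```

So with a low CoG and a wide base, you'd need to lean into it with most of your body weight, which fits the "difficult to tip over" intuition; push higher up or on a narrower base and the required force drops proportionally.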


I'm not just inferring from the shape, the operational history of these things suggests that they are very prone to falling over.


It's more rocket-shaped than Jeff Bezos's cocket ship.


What does it do that can't be accomplished with something the size of a remote controlled car?


Pure speculation on my part, but having it around 5 feet tall is presumably for the optical cameras to have a better view of the majority of adult human faces. If you're talking a remote control car (at least like the one I had as a kid), any camera is either going to get great photos of people's ankles & shins, up their noses if they're close, or lose detail because they'll have to be too far away to get a decent angle to look at a face.


Above skirt height is hopefully more than just a good idea for a camera with an upward view.


It's more intimidating. (IIRC, they can be remotely controlled by an operator and have loudspeakers and such for the operator to yell at people.)


These boys https://www.bbc.com/news/technology-40642968

The ennui of their life clearly leads to their prematurely choosing one answer to Camus’s great question.


Hilarious as that final image is, nearly 200kg of hardware able to drive itself about and randomly fall down stairs is incredibly dangerous.


Ha, the British made real Daleks (yes, yes, I know Daleks aren't bots with a living organism inside).

If they eventually learn to self-upgrade to overcome stairs, then you've got a problem.


Run into me with a robot, and it is likely to get knocked over and very heavily damaged, if not pushed out into the street, or off a cliff, or whatever I can find nearby.

And I’m a pretty beefy guy. Run into my wife with a robot, and I will make sure that you really wish you had just run into me instead.


2 out of 3 times I've seen one those robots, they've been lying on their side.


I don't know why they don't parametrize momentum with certainty: in any confusing situation, drop into ultra-slow environment scanning, and as confidence increases, allow a bit more... rinse/repeat.
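A minimal sketch of that idea, with made-up names and thresholds (nothing here reflects any real robot's control stack): cap the speed by the current perception confidence, so low certainty forces a creeping "scan" mode.

```python
def speed_limit(confidence, v_max=1.5, v_scan=0.05, threshold=0.6):
    """Map a perception confidence in [0, 1] to a speed cap in m/s.

    Below `threshold` the robot creeps at scanning speed; above it, the
    cap ramps linearly up to v_max. All values are illustrative guesses.
    """
    if confidence < threshold:
        return v_scan
    frac = (confidence - threshold) / (1.0 - threshold)
    return v_scan + frac * (v_max - v_scan)

print(speed_limit(0.3))  # confused: creep at 0.05 m/s
print(speed_limit(1.0))  # fully confident: allowed up to 1.5 m/s
```

The downside, as the replies note, is the failure mode on the other end: a bot that never regains confidence just freezes in place.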


That's how you get a robot stuck half a foot into a choke point, immediately trapped for half an hour surrounded by walls and confused people, until a developer on an emergency Slack call with the facility managers and the company CTO verifies and communicates a likely-safe state of the robot and surrounding equipment to field operators, and a go is given to pull the thing out of the elevator.


Wouldn't people prefer robots that freeze up over overly confident ones that bump into them?


Probably not if they're sticking 18" into the only elevator on the floor.


This is so detailed, are you speaking from experience perhaps?


All I can say without breaking agreements is that these are products, not ideal models of conceptual engineering. They're not created by people who like the world and want it to be a better place. They're created by people with lots of money who want a lot more. They've found an avenue for this by persuading other wealthy, greedy people to give them a lot of money and promising they can give them more back. They'll do this by persuading everyday people to not do things like produce, prepare, and transfer food themselves and instead pay money for these robots to do it.

These robots are minimum viable products toward moving capital around, not meeting user requirements or demonstrating great ideas. Hurting a few people in the process is part of the equation. Getting anyone to care about $cool_algorithm is not part of the equation. Getting people addicted to the convenience is part of the equation. Getting things to market as blindingly fast as possible so the capital moves before feedback from the field arrives is paramount.


That's an unnecessarily cynical generalization. Sure, maybe the leaders of the companies creating these things are profit-motivated, but is that really true of the individual engineers and designers who created it?


Both of those things can be true at the same time (they're not mutually exclusive), and OP's assertions may still hold for certain individuals.

We are talking about human motivation, and human behavior tends to be varied, nuanced, and hard to reduce to mutually exclusive categories like being only profit-driven or only driven by intellectual curiosity.

I think you can be motivated by both money and intellectual curiosity. If you are an engineer turned founder, you can be both?

Someone correct me if I’m wrong here.


No, that is a very accurate description. The engineers willing to work on those things, suppressing deeper thoughts for the money and to kick off new tech, are part of the equation and part of the problem.

A manager I once had kept a postcard in his office: "The engineer is the camel on whose back the merchant rides to his success."

You are a lever, and you even provide the excuse for being one yourself.


> They're not created by people who like the world and want it to be a better place. They're created by people with lots of money who want a lot more. ... They'll do this by persuading everyday people to not do things like produce, prepare, and transfer food themselves and instead pay money for these robots to do it.

This is an extremely negative outlook. I'm a robotics and controls engineer for a small (25-employee) integrator; our company mission is to make lives and products better, and I really think that everyone believes in that. Our meager budgets and slim, fluctuating profit margins are evidence that it's not all about "lots of money"...there are certainly those making a killing on it, but it's not everyone. And maybe Upton Sinclair was correct that it is difficult to get a man to understand something when his salary depends upon his not understanding it, but I've spent a lot of time thinking about this (and not just in response to news articles: I took ethics and philosophy courses to pad out my gen eds on my way to my engineering degree, I've read books on the topic, and I've talked to lots of other engineers, my customers, and the operators who have been transitioned from old equipment to run my new automated equipment...). But I stand by my argument that humans are no good replacement for robots, and robots are no good replacement for humans. The tech needs to be employed judiciously, but it can be used for good.

I've installed equipment in dozens of places where life was made better. There were fewer than 90 fingers around a lunch table at the foundry with 10 guys at it (4 + 3 + 2 + 1 + 1 + 1 lost digits) when I installed a robotic grinding cell that removed parting lines from valve castings; now they can ergonomically load infeed shuttles and have time to quality-check the parts from behind a safety fence, and no more fingers have been lost.

Two older women (one with arthritis!) at a plastics company no longer have to keep up with placing a tiny foam spacer on a dial table every 2.5 seconds for 8 hours a day, 6 days a week, with a half-hour lunch and two 15-minute breaks...that's nearly torture, and it was a really challenging material handling problem, but the robot does it well. The operators now pour in bags of foam spacers, do offline quality checks more frequently (catching upstream problems quicker, leading to less waste), and basically pour bins of parts into the machine and get one assembly out every 1.75 seconds.

Two weeks ago, I was training a 64-year-old seamstress (she retires in 8 months and 24 days) on the operation of an automated sewing machine. She's been pushing fabric through a sewing machine, keeping it between 3/8" and 5/8" on the seam allowance, since she was 16 years old. Now she lays out fabric on the infeed table - she's pleased that she finally has time, without impacting production rates, to make sure the patterns match precisely - and she inspects the stitching on the product that comes out the outfeed chute to adjust thread tensions and strokes on the sewing machine.

Literally Tuesday of this week, I was at a wood processing plant installing a new automated saw when I heard that a 19-year-old greenhorn had lost his right index finger between the first and second knuckles on an old manual saw. I was there installing the fully automated, fully guarded replacement equipment; you can drop a pallet of roughsawn lumber on the infeed material handler and correctly sized boards come out the other side, with no one needing to be closer than 20 feet from the saw blade. I wasn't fast enough.

In all these cases, no one got fired, people just transitioned from mindless, repetitive grunt work to real human work, while capacity and efficiency increased. And not only are all these operators enjoying their jobs more, your gas is cheaper, new cars are cheaper and more reliable, new furniture is cheaper and the cushions are more consistently sewn, and solid-wood cabinet doors are produced more safely, accurately, and quickly. It's not all about capital.


Kudos to you! I'm confident that relieving humans of tedious work is more valuable to society than bringing college kids food.

My comment is related to my experience in delivery robotics and this is an alt. Not everyone is bad. I, too, believe my current job to be more ethical than my previous experience. Of course, I didn't know going into my prior experience what it was really about.


I come from the country where such machinery doesn't work - USSR/Russia - and as a result there is no innovation and the country is well behind. If you discover other ways of having successful innovation the humanity will probably put up a large statue of you and your name will be on the plaque of the next Voyager.


Tesla does exactly this, and it gives rise to the phantom braking problem. Still seems like a good solution for a small bot with no passengers, though.


No? There are numerous clips where Teslas in "full self driving" mode pull the equivalent maneuver of a teenager going "OH SHIT I WANT TO GO THERE" and veering very violently.

The phantom braking problem is likely just one of the many symptoms of Musk's insistence on relying on optical systems instead of more expensive sensors.


Expense was part of the equation initially, however, through economies of scale, we eventually would have been able to reach a feasible price point. Cost has nothing to do with why Tesla is pursuing an optical-only system.

To get rid of the dependency on the radar sensor for autopilot, we generated over 10 billion labels across two and a half million clips. To do this we had to scale our offline neural networks and our simulation engine across 1000s of GPUs, and just a little bit shy of 20,000 CPU cores. We also included over 2000 actual autopilot full self driving computers in the loop with our simulation engine. And that's the smallest compute cluster.


Those are very large numbers for something that doesn’t work very well.


So what's the point then? You said it's not expenses and then you explain how you think it caused you extra trouble/work/development effort. But what's the reason?


That must be why complaints about phantom braking have gone through the roof since the switch away from radar.


> The phantom braking problem is likely just one of the many symptoms of Musk's insistence on relying on optical systems instead of more expensive sensors.

Based on what? How would 'expensive' sensors help?


We know that in some situations expensive sensors can capture data that optical cannot. What we don't know is whether any of that extra data is enough.

What we do know is that there are times when humans are bad drivers, and other times when humans continue when they shouldn't, relying on luck (i.e. driving in snowstorms with low visibility).


Consider your human eyeball. It works really well, but you can get into trouble in certain light or visual conditions.

Technology exists (radar, lidar) that doesn’t have the same limitations as visual. (They have their own issues of course)


Saw another HN user's comment about automatic battle-bot features. Maaaybe it's not the best idea in this case!


I for one welcome our new robotic overlords.


I thought this trend was a joke, did not realize people were actually using bots to deliver food.


The weird thing is that the first time I realized this was actually happening was watching "Ridiculousness" on MTV. Chanel (one of the hosts) mentioned that she had ordered food delivered and couldn't understand why the app just showed it waiting outside. So she goes out to see why the guy won't come up and ring the doorbell only to realize it was a food delivery robot waiting for her.

I live out in the middle of nowhere. Wonder what other stuff I don't know about happening in cities!


I'd just start up-ending them if I had to deal with these on a daily basis. Might even start carrying a sledgehammer for self-protection.


>just wait for SCOTUS to declare these robots have 2A rights, and they can shoot anything that gets in their way.


Obligatory link to Isaac Asimov's Three Laws of Robotics:

https://en.wikipedia.org/wiki/Three_Laws_of_Robotics


Even if the Laws were real (they're not) they won't work if all you have to do is add some adversarial interference to some neural thing to make the robot think that the human is not a human, or, even better, another robot that will harm a human. Then it's a moral imperative under 3LoR to destroy that "robot".

This trick also works on humans: you can often circumvent their "protect humans" programming by simply messing with their classification system to label a human as "terrorist", "infidel", or even "unemployed".


I did a rough tally of the 70 removed episodes:

Comedian: 45, Political Commentator/Media Personality: 8, Brian Redban: 5, Health/Fitness: 2, Scientist: 2, Author: 2, Musician: 1, Pornstar: 1, (MMA) Fight Companion: 1, Giorgio Tsoukalos, Kevin Smith, Cliffy B & Johnny Cristo (can't even figure out who these last two are)


They've removed comedians mostly because they talk about "sensitive" issues openly and without taboos, often in politically incorrect and careless ways (e.g. transgender and similar themes). Isn't this what comedy should be like? [1]

[1]: https://www.youtube.com/watch?v=-UHr5baraKo


Comedy is not equal to being offensive. Being offensive is not itself comedy.

Some comedy is offensively misinformed and still legitimately funny at the same time (e.g. Chapelle had some parts that made me chuckle, and some parts where I dreaded stereotypes about me being reinforced). But some is just bad. Look at Steven Crowder doing "comedy" for instance.

That aside, I'd rather they had removed Abigail Shrier, who is not a comedian but an author of a book full of falsehoods and anecdotes that contributes to a hateful environment that has measurably increased violence against LGBT+ people (up ~100% in the past 5 years in the UK). Or at least for the JRE to have ANY trans person providing different context. Chelsea Manning would be amazing, but I'm not sure if she'd be interested either.


You're right, it's not the same, but it's the same story with Gervais's presentation: for some it's offensive, for others hilarious. I support comedy in general, though not cancel culture.


There needs to be a balance of interests. I don't know where the line is and I don't want to set it, but misrepresenting minorities to the point you're furthering their marginalization is beyond the line.

There's a lot of nuance lost upon invoking "cancel culture" ("So You've Been Publicly Shamed" is the one piece of media that seems to successfully avoid doing so), and I generally dislike the term because it is often used either in offhand comments by people who don't see the issue or by free speech absolutists who deny that the paradox of tolerance even exists.

That aside, invoking false stereotypes doesn't generally make for exceptional comedy.


Some of them are really baffling. Tom Segura, Bert Kreischer, Ian Edwards? Tim Ferris?


Likely deleted at the request of the guests.


Segura and Kreischer at least are good friends with Rogan and have been on multiple times. I think it's unlikely they would request to be removed.


I see a lot of speculation like this, and if you know these people you know they wouldn't have requested it.

Michael Malice had 2 of his episodes pulled and just did a YouTube about it: https://www.youtube.com/watch?v=A-1MHKIRUow. He did not request them to be removed and Joe did not either.

Given that the episode with Robert Malone (the controversial mRNA vaccine researcher) is still up, it's likely these recent takedowns are the result of some new algorithmic reviewer.

People on Reddit for instance have noted that many of these episodes talked about race and may have used racial epithets or alluded to race-based issues somehow and may have gotten automatically flagged or some such.


Many of the deleted guests still have other episodes that are up. So it doesn't seem to make sense.


If that's true then why wouldn't Spotify say so? It's not a good look for them that they're just gone.


Stop repeating this misinformation. Many of the "deleted" guests still have other episodes that are up.


Maybe they are having trouble paying their S3 bill


The Louis Theroux one really throws me off.


Ooh, conspiracy time, it's scientologists pulling the strings!


Real chance these were removed over jokes that aren't woke/PC rather than anything vax-related.


Just checked: Tom Segura is still there. Kreischer, Ian Edwards too.


Bill Burr?


I heard comedians have a hard time doing shows on University campuses these days. The huge amount of comedian episodes being removed seems consistent with that trend.


Having worked with computers before, the explanation I'm leaning toward, based on zero evidence, is that there is a bug somewhere, possibly in the tool used to make the determination about which episodes have been 'removed'.


Cliffy B could be the game designer.


Here's what I'm wondering, and pardon me if this is a stupid question:

Why go through all of the trouble generating true randomness with a Geiger counter just to use the result as the seed to a PRNG, rather than using your true random method to generate the lottery numbers directly?


Besides the point about needing to generate a large number of random numbers, there's another thing: you don't know the distribution of the true random generator exactly. You know it's random, but in addition to that, the distribution needs to be flat. That's why you use a PRNG with a known-to-be-flat distribution to further process the entropy. As an interesting side note, though, Mersenne Twister doesn't hold up well by modern standards: its output becomes predictable after observing enough values. For this, I would use a cryptographically secure PRNG.
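A minimal sketch of that seed-then-whiten idea in Python. The `HashDRBG` class and the byte-string seed are illustrative stand-ins only (a toy SHA-256 counter-mode generator, not a vetted construction like NIST Hash_DRBG); in practice the seed bytes would come from the true-random source.

```python
import hashlib

class HashDRBG:
    """Toy counter-mode generator: SHA-256(seed || counter).
    Illustrative only; use a vetted CSPRNG in production."""

    def __init__(self, seed: bytes):
        self.seed = seed
        self.counter = 0

    def randbelow(self, n: int) -> int:
        # Draw 64 bits per call; rejection-sample to avoid modulo bias.
        limit = (2**64 // n) * n
        while True:
            block = self.seed + self.counter.to_bytes(8, "big")
            self.counter += 1
            x = int.from_bytes(hashlib.sha256(block).digest()[:8], "big")
            if x < limit:
                return x % n

# Seed bytes stand in for output of the true-random (Geiger) source.
drbg = HashDRBG(b"entropy-from-geiger-counter")
draws = [drbg.randbelow(49) + 1 for _ in range(6)]  # six numbers in 1..49
```

Because the generator is deterministic given the seed, the same seed reproduces the same draws, which is exactly the auditability property a regulated lottery wants.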


You can generally turn any biased binary source into a uniform one by generating two bits: if they're different, output the first bit; if they're the same, discard the pair.
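A minimal Python sketch of that pairing trick (the von Neumann extractor), assuming the input bits are independent:

```python
import random

def von_neumann_extract(bits):
    """Debias independent coin flips: (0,1) -> 0, (1,0) -> 1, else discard."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

# Simulated 70%-biased coin; the extracted stream comes out ~50/50.
random.seed(0)
biased = [1 if random.random() < 0.7 else 0 for _ in range(100_000)]
unbiased = von_neumann_extract(biased)
```

The independence assumption is load-bearing: if consecutive bits are correlated, the pairs are no longer symmetric and the output isn't uniform.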


01 01 01 01 00 01 01 01 11 01

Doesn't seem to work... It depends on odd bits and even bits having the same distribution, which in many cases they won't.


Yes, the draws from the random distribution must be uncorrelated. But the mentioned algorithm does indeed produce a "flat" output in this case.


I'm guessing because the lottery system needed to generate a large number of random numbers (given that they are running lots of lottery systems), and this gives a fairly verifiable way of doing that from one initial chunk of randomness.

That said, I don't work in this area, so you should give my comment very little weight. :)


A counter cycles from 0-9 repeatedly and stops when the Geiger counter detects a decay particle. I guess you could count longer, but I think the mod-10 counter is sufficient... not sure. This is the random initialization.

I'm curious how you would actually successfully hide the control flow. The only thing I can think of would be some factorization, maybe a couple of mods, but that would be detectable. You don't want to explicitly modify the control flow per se. I mean, you'd have to be able to hide it from code review... and then mathematical review.


He was the Security Director with privileged access to the production system. I don't think his changes went through code review.

I'm struggling to differentiate what's established fact and what's the author's theory in this article[0], but it sounds like he could have used a rootkit on a USB thumb drive to modify the code directly on the production machine.

[0] https://privacysecuritybrainiacs.com/privacy-professor-blog/...


You can find people who will do a job but not really question what they're doing, even if it should be questioned. Developers make spyware and do all kinds of terrible things; as long as they get paid, they just kind of do what is asked and don't ask too many questions.


This sort of implies that the developers just aren't thinking deeply about what they're building. Certainly possible, but you're also much more likely to get garbage software this way. Isn't it as likely (or more so) that the developers just believe in what they're building?


Typically, true-random sources like Geiger counters do not produce enough entropy in a single reading to serve as a full secret. They are statistically characterizable and produce a curve of data rather than a flat distribution.

By using the readings to seed a pseudorandom number generator, you make the values come out in a completely flat distribution: the Geiger counter supplies the non-deterministic entropy, while the PRNG whitens it into uniform output.


My understanding is that for this type of gaming, e.g. gambling, there are regulations governing the randomness and repeatability of the randomness for computer/electronic run games. Part of it due to necessities of verification of sufficient randomness from the perspective of the gamer but also that of the gaming establishment. The game needs to be provably sufficiently random to the advertised odds for fairness to both the gamer and the gaming establishment.

For all intents and purposes, using a true source of randomness as the seed for something like a one-time use of Mersenne Twister as a pseudorandom number generator (PRNG) is indistinguishable to an end user from true randomness. What it does do, though, is allow for reproducible testing to ensure you don't have an xkcd PRNG [0].

Source: bar chat with a couple of friends & former coworkers who had spent time in that area of gaming (e.g. working on video poker machines destined for casinos), so take it with a grain of salt.

[0] https://xkcd.com/221/


The amount of entropy is usually limited. For the lottery it doesn't seem like it would be a problem, but in general you don't want your function calls to block while gathering "random data".


An Am-241 source coupled to a detector sounds sophisticated, but it matches the description of a now-outdated type of ceiling smoke detector. It's probably not that mechanically elaborate.


Am241 smoke detectors (i.e. ionization smoke detector) might be an older technology, but they’re extremely common, at least here in the US.


That logic assumes that the amount of music produced is constant over time, which it isn't necessarily. The population grows and the barrier to entry keeps getting lower.


My 2022 New Year's Resolution is to try out complete sobriety (caffeine excluded; this effectively means alcohol and cannabis). I've never considered myself to have a 'problem' with substances like some people I have known, but I sure have spent plenty of time in my life intoxicated alone. The resolution isn't an ambitious thing because I've been going this way for a while anyway.

Over the years I've slowly come to a realization: These substances have various effects, but at the heart what they really do is make me less aware. Sometimes I guess it's a good thing. Alcohol makes me less aware of the part of me that is self-conscious in social situations, and of how others perceive me. Cannabis makes you feel more aware of experiences, but it proves to be an illusion. I guess they're really not that bad on the balance, but as I grow older and I have spent more and more time thinking about cultivating awareness - of the present moment; of my body and mind and senses; of things in life that are truly important, and which maybe even make me anxious to contemplate - I find that I simply don't enjoy intoxication as much anymore because there's something I enjoy more about awareness.

More and more I hear this nagging voice when intoxicated. It says: "I'm bored; I'm nervous; I'm scared; I'm sad; I'm worried; I'm self-conscious; I'm restless; Someday I will die. What I'm doing right now is trying to be less aware of these things. But maybe they aren't just to be ignored or avoided. Maybe they're an adaptive part of the human mind. Maybe there really is something worth being anxious about."


With weed, after I switched to vaping concentrate I found the experience much more empathetic. Like if I was watching something I would deeply feel whatever the characters were experiencing. Maybe it’s illusory but it does feel like I’m connecting to something quite deep. I haven’t really heard other people say that so I wonder if it’s common.

