rkunde's comments

This is great, if for no other reason than that it will give people the ability to debug build issues on their own and get access to fixes without having to wait for the next Xcode release.

What are the chances of battery tech advancing beyond lithium before extraction of these deposits can even begin at scale?


I've been wondering the same. It seems like there are new battery-tech breakthroughs weekly. Investing billions in extraction/production seems like a pretty big gamble if your tech is outdated by the time you're operational. Still, even if there is a game-changing battery tech breakthrough, it would take years before it could reach mass production and adoption.


Great write-up, but it left me with a few questions:

* Is levelloadloop only executed when the game launches, not when joining a server and loading the map?
* If the issue is that the loop is shut down before the Steam auth process is started, why would the maintenance-related slowness matter?


Thanks for your feedback!

1. Yes, only when the game launches. I think the naming of the loop comes from single-player games like Portal, where it makes much more sense because you move between levels seamlessly there.

2. You are right, that is a hole in the explanation and I don't know the answer, but the change definitely fixes the problem. The details of my conclusion are likely incomplete, but at this point I don't feel like digging deeper because it isn't necessary at the moment.


I don't think you can unless you have a jailbroken device. If I remember correctly, entitlements are stored in the App Store receipt file.


You can view the entitlements of an extracted .ipa using the codesign tool, so it is entirely possible to see whether an app has this entitlement.
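For reference, something along these lines works on macOS (the app and path names are just placeholders):

    # an .ipa is a zip archive; unpack it first
    unzip SomeApp.ipa -d SomeApp-extracted
    # print the app's entitlements plist
    codesign -d --entitlements :- SomeApp-extracted/Payload/SomeApp.app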


Oh, I only remembered seeing them inside the mobileprovision file. I’ll take another look, thanks.


That's for sending and receiving local network traffic, e.g. talking to devices on the same subnet, and discovering Chromecast and similar targets.

Edit: AirPlay does not require this permission.


I don't believe it is necessary for AirPlay, but it probably is for Chromecast, Sonos, and many other devices that establish ad-hoc connectivity for setup and operation.

I take this popup to mean that they want to fingerprint and locate my home network or backdoor it somehow. I ALWAYS deny this access unless the app specifically requires it, and that is rare.

Wi-Fi-based geolocation should be a well-known privacy threat by now. The popup should really communicate that better and provide tighter controls.


You’d think that AirPlay would be abstracted away by an OS API that does the local network discovery itself.


In my experience, it is. My podcast app of choice doesn’t have that permission (I don’t even think it asked for it), but it has the ability to bring up the system audio output selector widget and do AirPlay.

If anything, I usually see this for apps that want to do playback via Chromecast/Miracast. The well-behaved apps wait until the user interacts with Chromecast output; the iffier ones ask on first launch.
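For what it's worth, the system widget in question is AVRoutePickerView from AVKit. A minimal sketch of wiring it up (the view controller name and frame are just illustrative):

    import AVKit
    import UIKit

    final class PlaybackViewController: UIViewController {
        override func viewDidLoad() {
            super.viewDidLoad()
            // The picker presents the system AirPlay/output selector.
            // Device discovery happens in the OS, so the app itself
            // never needs the local-network permission for this path.
            let routePicker = AVRoutePickerView(frame: CGRect(x: 20, y: 80, width: 44, height: 44))
            view.addSubview(routePicker)
        }
    }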


AVRouting in iOS 16 adds Media Device Discovery Extensions, which allow a proper Chromecast or similar app to provide media streaming in the same interface as AirPlay.

So far there doesn't seem to be any traction by Google to migrate to this.


Related, have any major games using Binomial’s texture compression come out yet?


Sure, it's available in Unity, Unreal, and Blender at least. The KTX2 format has been a Khronos Group standard since 2021.


I did notice that download sizes for the PS5 version of a game are smaller compared to the PS4 version. Maybe that’s why!


Unless something on the regulatory side changes, nothing will happen to direct listings. People want to own stocks that they believe will go up. That's the long and short of it. Retail investors don't read IPO prospectuses. Institutional investors will be more hesitant, naturally, but they have the resources and expertise to assess the risks, and any public company still has to comply with disclosure requirements, IPO or no. There was similar handwringing about companies selling non-voting shares on the stock market. Turns out most people don't buy stock to vote it, and most people don't vote even if they can.


A bit of an aside, but dual-class shares should have a mandatory sunset clause of no more than 5 years. That way companies can still get public money via IPO for risky initiatives without fear of a quick takeover, but you avoid a situation where a company becomes little more than a slush fund for the obsessions of a wayward CEO (Facebook).


Why should we protect FB's investors from themselves?

And given the tiny spread in how voting versus non-voting shares trade, it doesn't seem that investors value voting rights very much.


I don't know that this is the right counter-argument to the point the parent comment made. It is defeated pretty easily by pointing at all the other regulation that applies to public markets but not to private offerings (and thus not to accredited investors).

OP might actually have a point here, insofar as it applies to public stock. If such a class separation exists with private stock, that's a risk an accredited investor is presumably either equipped to understand or flush enough with cash to be protected from. But Mark cratering stock that moms and pops bought into is a good fit for regulation, like anything else involving public markets.


There is a lot of regulation, but it focuses on not screwing minority shareholders to the benefit of majority ones, responsibilities of the board, and not lying to shareholders.

A dual-class share structure does not cause any of these problems, or prevent litigation and enforcement surrounding them.

What the regulation doesn't protect people from, and shouldn't, is investing in honest but dumb companies. The metaverse play was obviously dumb three years ago.


But why?

Buying a class of stock with no or minimal voting rights is known upfront. You don't have to buy it, and presumably it is priced to take the lack of voting power into account.


I was confused too but the author uses “trim” to mean “remove anywhere in the string”, rather than just from the beginning and end.
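In other words, it is the vectorized equivalent of a scalar loop along these lines (a sketch of the semantics, not the author's code; the function name is made up):

    #include <stddef.h>
    #include <stdint.h>

    /* Copy src to dst, dropping every occurrence of `value`; returns the
       number of bytes written. Works in place when dst == src. */
    static size_t remove_byte(uint8_t *dst, const uint8_t *src, size_t len, uint8_t value)
    {
        size_t out = 0;
        for (size_t i = 0; i < len; i++) {
            if (src[i] != value)
                dst[out++] = src[i];
        }
        return out;
    }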


What are the use cases for that, though?


This particular routine doesn't seem that useful, but sometimes these weird vector algorithms that don't seem useful on their own are composed together in interesting ways to solve a larger, more interesting problem. For example, there was a CppCon talk a few years ago where the presenter used AVX instructions to efficiently find the median of seven (yes, exactly seven) integers, by coming up with a representation of the problem that AVX instructions were well suited for.[1]

That said, I don't know if this particular routine is something the author came up with while working on some other problem, or if it's just a neat idea that he came up with and wrote a short blog post about.

[1] https://www.youtube.com/watch?v=qejTqnxQRcw


There's a ton if you think of it as a byte array rather than just a string. For example, network proxies may remove various protocol TLV options from a packet.


Is that an example of “remove all occurrences of a specific byte value from an array”? Wouldn’t packet processing require some sort of structural parsing?


Packets are usually parsed by casting a uint8_t * to a struct. Frequently, the part that needs to be removed is always at the same offset in the non-error case.
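Roughly the pattern being described, with a completely made-up header layout (field names, offsets, and the version check are illustrative only):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical fixed-layout header, purely to illustrate the pattern;
       real protocols define their own fields and offsets. */
    struct proto_hdr {
        uint8_t  version;
        uint8_t  flags;
        uint16_t length;
        uint32_t option;   /* the part a proxy might want to strip */
    } __attribute__((packed));

    /* Parse by overlaying the struct on the buffer, then remove the
       fixed-offset option by shifting the rest of the packet down. */
    static size_t strip_option(uint8_t *pkt, size_t len)
    {
        if (len < sizeof(struct proto_hdr))
            return len;                          /* too short, leave as-is */

        const struct proto_hdr *hdr = (const struct proto_hdr *)pkt;
        if (hdr->version != 1)                   /* handle the non-error case only */
            return len;

        size_t off = offsetof(struct proto_hdr, option);
        memmove(pkt + off, pkt + off + sizeof(uint32_t),
                len - off - sizeof(uint32_t));
        return len - sizeof(uint32_t);
    }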


Maybe for processing the code for an obfuscated C contest?


Maybe a better word is "compact"?


> In some cases, it is too fast. When I installed K3s, all of the containers in the kube-system namespace kept entering the dreaded CrashLoopBackOff state [..] After some investigation, I found out that the Mac Studio was just too fast for the Kubernetes resource timing

I’d like to understand what the issue here is. Sounds counterintuitive to me.


It's called a race condition. They never expected one thing to complete before another.


Yup. It is also a bug, and a very difficult one to catch, so the original devs would probably be very grateful for a report and help in fixing it. I hope OP opened an issue.


I read that as "the container ran so fast that the orchestration assumed it was failing".


Classic that the only thing they reported as not working was Kubernetes! Kubernetes is really such a complex and fragile system, at least in my experience.


Same here. OP, can you share the limits you added? Or are they at the namespace level?


No problem. In the end, I had to actually set CPU limits on the pod level for traefik, svclb-traefik, metrics-server, coredns, and the local-path-provisioner.

I initially chose a limit of "500m" for all of them except coredns (which seemed to be fine), but there were still some occasional issues with the metrics-server and traefik, so I increased those to "750m". That solved those issues, but it caused coredns to CrashLoopBackOff. After setting a "500m" limit on coredns, everything ran smoothly.

So, essentially, I set a "500m" limit on all of the pods in the kube-system namespace except for traefik and metrics-server (they got "750m"), and of course I didn't need to set limits on the helm-install-* pods.

I didn't modify the default memory limits at all.
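For anyone curious what that looks like, the only part of each container spec that needed touching was the resources block, something along these lines (values as described above, everything else omitted):

    # fragment of a container spec in the kube-system namespace
    resources:
      limits:
        cpu: "500m"   # "750m" for traefik and metrics-server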


If you have an iPhone and a Mac, you can use Apple Configurator to download the App Store .ipa. You can unpack that like a zip file and look at the dynamic frameworks it contains.

