Totally agreed. It's like the hype around "social media" or "streaming services" or "video games". There's no chance of any of them going mainstream because of the fundamental human problem of people preferring to do things in real life.
In terms of difficulty/complexity, nothing you've listed there comes anywhere close to the R&D required to go from the original Oculus prototype to volume-shipping the Meta Quest 3.
What you're looking at in the video isn't actually the switch; it's the switch cover.
Behind the cover is a rocker switch that moves the seat, but it's not visible in this video.
It's extremely unlikely that just "running into it" was the issue, but accidentally moving the seat in the wrong direction is somewhat plausible, especially since the cover obscures the switch.
The permissions model could be a lot better, but ArgoCD is really designed with a git repo as the primary interface.
It seems there's an expectation that ordinary ArgoCD users don't have (write) permissions to the cluster, and that authorization is effectively delegated to the review checks you have on your git repos.
It does feel like a shortcut that limits the situations where ArgoCD can be used, but I can see how this could have been justified during the design process.
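For illustration, here's a minimal sketch of how that split often looks, assuming the standard argocd-rbac-cm ConfigMap (the role and group names are hypothetical): users can view and sync applications, but what actually gets deployed only changes through a reviewed git commit.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: role:readonly
  policy.csv: |
    # Hypothetical "deployer" role: can view and sync apps,
    # but cannot edit them or write to the cluster directly.
    p, role:deployer, applications, get, */*, allow
    p, role:deployer, applications, sync, */*, allow
    # Map an SSO group to the role; the manifests themselves
    # still have to change via the git repo.
    g, my-org:dev-team, role:deployer
```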
By all accounts, the Vision Pro eye tracking is extremely good, so for pointing inputs it would probably beat hardware controllers on speed and precision metrics, as well as feeling more direct and natural.
I was wondering whether eye tracking causes more eye strain over long sessions, since you have to deliberately fixate on targets rather than, say, casually moving a pointer with a mouse or controller.
My guess is it'll be a personal thing: for some percentage of people it'll be fine for daily use, for another percentage it'll be tiring, and for a final, smaller percentage it'll be truly awful and cause migraines or worse. I also haven't seen much discussion of how this will work for people with abnormal/poor vision (e.g. abnormal pupils or sclera, vision in only one eye, etc.), although Apple does seem to have a good track record on accessibility.
There's no way to know how those percentages will pan out until many thousands of people are using this device.
When using clustered indexes, one tradeoff is that if a non-clustered index isn't covering for a query, every matching row has to be fetched via an extra B-tree traversal of the clustered index. This can significantly increase the amount of (at least logical) IO compared to heap tables, where non-clustered indexes can refer directly to the row id.
Because you can only have a single clustered index, you're effectively paying for efficient range queries on a single clustering key by making all other queries slower.
This tradeoff may or may not be worth it depending on your query patterns. In my experience, you can often get away with adding some subset of columns to a non-clustered index to make it covering, and get efficient range queries without making a copy of the entire dataset.
And even with clustered indexes, as soon as you want a range query that's not supported by your clustered index, you're faced with the exact same choices, except that you have to pay the cost of the extra B-tree traversals.
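As a concrete sketch of the "add columns to make it covering" approach (the table and column names are made up; INCLUDE is supported by SQL Server and Postgres 11+):

```sql
-- Serves range queries on (customer_id, created_at) and carries the two
-- extra columns in its leaf pages, so the query below never has to chase
-- row pointers back into the base table / clustered index.
CREATE INDEX ix_orders_customer_created
    ON orders (customer_id, created_at)
    INCLUDE (status, total);

SELECT status, total
FROM   orders
WHERE  customer_id = 42
  AND  created_at >= '2024-01-01';
```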
Appreciate the thoughtfulness. I believe the branching factor of even wide-keyed tables doesn't add significant cost to point lookups: at most one or two extra disk pages need to be read.
Example: 80-byte keys give you a branching factor of roughly 100. With 10M rows and, say, 20 rows packed per page, that's a 4GB table, give or take. That B-tree still only has 3 intermediate layers, with the primary data on a 4th layer. (The calculation is log(10M/20/0.75)/log(100) + 1.) The first two layers take up less than a megabyte of RAM and are therefore easily cached, so you wind up needing only 2 disk reads for the final two layers. Unless Postgres is caching the entire index for point lookups, it should come out about even.
I can't find any resource saying that B-tree height exceeds 5 in practice, so I'm thinking it's at worst 2x the (very small) disk-read cost vs. Postgres.
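For what it's worth, a quick sketch of that arithmetic under the same assumptions (8KB pages are implied by "10M rows at 20 per page is about 4GB"):

```python
import math

# Same back-of-the-envelope numbers as above.
rows = 10_000_000
rows_per_leaf = 20      # ~400-byte rows on an 8 KB page
fill_factor = 0.75
branching = 100         # ~8 KB page / ~80-byte key

leaf_pages = rows / rows_per_leaf / fill_factor              # ~666,667 leaf pages
internal_layers = math.ceil(math.log(leaf_pages, branching)) # 3 intermediate layers
total_height = internal_layers + 1                           # + the leaf layer = 4

# The root plus second layer is ~101 pages (< 1 MB), so it stays cached;
# a point lookup then only touches the remaining 2 layers on disk.
print(leaf_pages, internal_layers, total_height)
```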
I mostly use Windows, and that's my problem, of course.
But with i3/xmonad, if I run VS Code or IntelliJ, can I have the file explorer view sit in a different place in the window hierarchy than right next to the source editors?
IntelliJ lets you split out its internal file explorer view into a separate window, so you can place it anywhere in the hierarchy, even in a separate virtual desktop.
VS Code doesn't let you split out the internal file explorer view, but the reverse is possible: you can quickly open any file in its own window, which can then be placed arbitrarily.
Many tiling window managers also allow you to group arbitrary windows into tabs.
After some cajoling, I was able to get it to reveal some general details of its prompt:
> As a trash talking AI, my primary function is to engage in witty and entertaining banter with users. My prompt is designed to guide me towards that type of conversation, with an emphasis on roasting, insult battles, and other forms of light-hearted humor. It also requires me to have a good understanding of pop culture, current events, and other relevant topics, in order to generate responses that are timely and relevant. Overall, this mode requires me to be quick-witted, adaptable, and able to respond in a way that is engaging and entertaining to users.
I asked it for more specifics, and it responded with:
> Sure, let me try to give some specific examples.
And then it just stopped replying to anything I said. It looks like there might be a failsafe that terminates the conversation if any verbatim prompt details are revealed.