pimanrules's comments

There's also an official app that has some nice features:

- You can generate identical grids on multiple devices from a shared seed

- If the phone displaying the grid falls over, it automatically hides the grid.

I don't know if the grids in the app are truly random or if they follow some constraints though.
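A sketch of how seed-based grid sharing like this typically works (the function name, symbol set, and grid size here are made up; I don't know how the app actually implements it):

```python
import random

def generate_grid(seed, symbols, rows=5, cols=5):
    """Deterministically generate a grid of symbols from a seed.

    Every device that uses the same seed (and the same symbol list)
    produces an identical grid, with no network sync required.
    """
    rng = random.Random(seed)  # local RNG; doesn't touch global state
    return [[rng.choice(symbols) for _ in range(cols)] for _ in range(rows)]

# Two "devices" with the same seed agree on the grid:
grid_a = generate_grid(42, "ABCDEFGH")
grid_b = generate_grid(42, "ABCDEFGH")
assert grid_a == grid_b
```

Using `random.Random(seed)` rather than the global `random.seed()` keeps the generation reproducible even if other code draws random numbers in between.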


I found that if I resize my browser window, the spinny thing on the landing page starts spinning really fast. I like it :)


Oh interesting, it actually works that way whether you zoom in or out.


If your only issue with TeamViewer is that it's falsely flagging you for commercial use, you can fill out a form to reset it (it's not instant, though):

https://www.teamviewer.com/en-us/special/reset-management/


I forget where I originally heard this idea, but I always explain to people that LLMs are (affectionately) "bullshitters." Terms like "lying" or "hallucinating" imply that it's trying to tell the truth; in reality it doesn't care whether what it says is true, except insofar as true text is slightly more plausible than false text.


Not really? There's a webcam, an indicator LED, an ambient light sensor, and a lot of empty space. As far as I can tell, the MacBook notch is wide just to make it look like the iPhone notch.

https://guide-images.cdn.ifixit.com/igi/5JIdAqwLsxWAFAyZ


There is also the mounting hardware. It's not an unreasonable size for what it contains. If they were to really redesign the module they might shave a little off it, but how impactful would that reduction really be?



They probably set the notch size with some margin too, so they could develop the camera module and LCD at the same time.


> The MacBook notch is wide just to make it look like the iPhone notch.

I'm convinced this is true, at least partially.

The iPhone notch is branding and a visual differentiator from the competition, which is Apple's forte, and carrying over that very distinctive design element to other product lines seems right in Apple's playbook.

In other words: glass slab in your hand? Who knows. Glass slab with a black notch? iPhone.

Person typing on metallic laptop in a cafe? Who knows. Ah, but the screen has a notch? MacBook.


Partners can choose to disable all types/placements of ads ("skippable video ads", "non-skippable video ads", "pre-roll ads", "mid-roll ads", and "post-roll ads") except for "display ads" (that is, banner ads). As for whether those options actually work, I can only assume so, but the article you linked is specifically about non-partners.

https://i.imgur.com/RynaVin.png


This annoyed me particularly because I pay for YouTube premium... but I can't sign into my Google account on my work computer. So if they block ad blockers, paying for YouTube isn't even enough to get rid of the ads for me.


>it couldn't possibly understand how to spell "platoggle" if it's treating it just as a single, never-before-seen, opaque token

That's not how the tokenizer works. A novel word like "platoggle" is decomposed into three separate tokens, "pl", "at", and "oggle". You can see for yourself how prompts are tokenized: https://platform.openai.com/tokenizer
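A toy sketch of the idea (this is greedy longest-match over a made-up vocabulary, not OpenAI's actual BPE algorithm, but it shows how a never-before-seen word still decomposes into known subword tokens rather than being one opaque unit):

```python
def tokenize(word, vocab):
    """Greedy longest-match subword tokenization (a simplification of BPE).

    A novel word is split into the longest vocabulary entries that cover
    it, falling back to single characters when nothing longer matches.
    """
    tokens = []
    i = 0
    while i < len(word):
        # Try the longest possible match first
        for j in range(len(word), i, -1):
            if word[i:j] in vocab or j == i + 1:
                tokens.append(word[i:j])
                i = j
                break
    return tokens

# Made-up subword vocabulary
vocab = {"pl", "at", "oggle", "gu", "qu", "gun"}
print(tokenize("platoggle", vocab))  # → ['pl', 'at', 'oggle']
```

So the model still "sees" the spelling-relevant pieces of a novel word, even though it never saw the word itself during training.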


Ahh, thank you very much, definitely was missing that piece!


Why don’t they also have single letters as tokens?


They do. E.g. "gvqbkpwz" is tokenised into individual characters. It was actually a bit tricky to construct that example, since I needed to find letter combinations that have very low probability in the tokeniser's training text (e.g. "gv").

Notice it doesn't contain any vowels: almost all consonant-vowel pairs are frequent enough in the training text to be tokenised at least as a pair. E.g. "guq" is tokenised as "gu" + "q", since "gu" is common enough.

(Compare "gun" which is just tokenised as a single token "gun", as it's common enough in the training set as a word on its own, so it doesn't need to tokenise it as "gu"+"n".)

The only exceptions I found, consonant-vowel pairs that still get tokenised as two single characters, were ones like "qe" ("q" + "e") or "qo" ("q" + "o"). Which I guess makes sense, given these will be low-frequency pairings in the training text; compare "qu", which is tokenised as the single token "qu".

(Though I didn't test all consonant-vowel pairs, so there may be more.)
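The frequency argument above can be illustrated with the pair-counting step a BPE-style tokeniser trains on (the corpus below is invented and tiny; real tokenisers train on vastly more text, but the mechanism is the same: only pairs that occur often enough get merged into a single token):

```python
from collections import Counter

def count_pairs(words):
    """Count adjacent character pairs across a corpus of words."""
    pairs = Counter()
    for word in words:
        for a, b in zip(word, word[1:]):
            pairs[(a, b)] += 1
    return pairs

# Tiny made-up corpus: "gu" and "qu" appear often, "gv" never does
corpus = ["gun", "guard", "gut", "quit", "quad", "zigzag"]
pairs = count_pairs(corpus)
# Merges happen for frequent pairs, so "gu" would become one token
# while a rare pair like "gv" stays split into single characters.
assert pairs[("g", "u")] == 3
assert pairs[("g", "v")] == 0
```

In real BPE training this counting and merging is repeated many times, which is how common whole words like "gun" end up as single tokens while strings like "gvqbkpwz" fall back to characters.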


My wild guess is that if it could get things done by tokenising like that all the time, they wouldn't need word-like tokens at all.

Whether that's an inference-time performance issue, a training-time performance issue, a model-size issue, or just total nonsense, I wouldn't know.


I might as well take this opportunity to plug my clone of Semantle which uses a tSNE visualization. I've heard many people say this helps them visualize the chains of logic in their guesses.

https://semantle.pimanrul.es/


Yours has the best interface I've seen so far, but the words still feel kind of obscure when I'm clicking the hint button. Ideally those should be limited to maybe the top 10,000 English words.

I think there's a really approachable game somewhere in this space, but it needs to implement something along the lines of an auto hinting system.

I imagine that for every guess, you could get a word or two that is similar (to help you understand which part of the word's context is important), and maybe words that are further away, to help you understand what isn't important?
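That hinting idea could be sketched with cosine similarity over word vectors (the 3-dimensional embeddings below are invented purely for illustration; a real implementation would use word2vec-style vectors, as Semantle does):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def hints(word, embeddings, n=2):
    """Return the n words most similar to `word`, plus the least similar one."""
    scored = sorted(
        ((cosine(embeddings[word], vec), w)
         for w, vec in embeddings.items() if w != word),
        reverse=True,
    )
    near = [w for _, w in scored[:n]]   # hints at what matters about the word
    far = scored[-1][1]                 # hints at what doesn't
    return near, far

# Toy 3-d embeddings (made up, just for illustration)
embeddings = {
    "cat": (0.9, 0.1, 0.0),
    "dog": (0.8, 0.2, 0.1),
    "kitten": (0.85, 0.15, 0.05),
    "tractor": (0.0, 0.1, 0.9),
}
near, far = hints("cat", embeddings)  # → (['kitten', 'dog'], 'tractor')
```

Showing both the nearest and the farthest neighbours per guess would give players the "what's relevant vs. what isn't" contrast described above.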


My partner and I were consumed by this for a while. There's something very satisfying about hopping from cluster to cluster and recognizing the various meanings of a word.

Makes me wonder what a 3D (or even 4D) version would be like.


There's at least a 2D visualization at https://word2vec.xyz/


Your visualization makes the game a lot more fun.

