
It would be great to implement a browser extension that lets you highlight a term or phrase on any webpage and open a GPT rabbit hole for that term or phrase.

@maxkreiger - if I were to build one as a proof of concept, would you object to me having it hyperlink to your UI?


You could call it “bootstrapping is all you need” :)


Act III of episode 585 of This American Life (a WBEZ radio show broadcast on NPR and distributed via podcast) discussed this phenomenon and spoke with a few individuals with HSAM:

https://www.thisamericanlife.org/585/in-defense-of-ignorance...

One of the individuals was a script supervisor in Hollywood responsible for ensuring continuity between scenes during filming. The segment also ventures into powerful, emotionally resonant territory, touching on the bittersweet implications of experiencing loss when memories never fade.


I re-listened to this, and I think the whole episode is worth the time investment if you haven't heard it. The first act features director Lulu Wang and the real-life inspiration for her movie The Farewell, and the second is an interview with David Dunning, the psychologist behind the infamous Dunning-Kruger effect.


I submitted using the title from the article metadata instead of what's displayed in the article, since the metadata title was more descriptive and less clickbaity.


Related article from the same series: We Can’t Stop Writing Paper Checks. Thieves Love That. [0]

[0] https://www.nytimes.com/2023/12/09/business/check-fraud.html


Seems like as good a time as any to revisit the advantages of having, and the challenges of building, a cash-flow-positive bootstrapped company (e.g. [0], [1])

[0] https://news.ycombinator.com/item?id=34740105 [1] https://news.ycombinator.com/item?id=37657519


A few interesting tidbits:

> The company pressed forward and launched ChatGPT on November 30. It was considered such a nonevent that no major company-wide announcement about the chatbot going live was made. Many employees who weren’t directly involved, including those in safety functions, didn’t even realize it had happened. Some of those who were aware, according to one employee, had started a betting pool, wagering how many people might use the tool during its first week. The highest guess was 100,000 users. OpenAI’s president tweeted that the tool hit 1 million within the first five days. The phrase low-key research preview became an instant meme within OpenAI; employees turned it into laptop stickers.

> Anticipating the arrival of [AGI], Sutskever began to behave like a spiritual leader, three employees who worked with him told us. His constant, enthusiastic refrain was “feel the AGI,” a reference to the idea that the company was on the cusp of its ultimate goal. At OpenAI’s 2022 holiday party, held at the California Academy of Sciences, Sutskever led employees in a chant: “Feel the AGI! Feel the AGI!” The phrase itself was popular enough that OpenAI employees created a special “Feel the AGI” reaction emoji in Slack.

> For a leadership offsite this year, according to two people familiar with the event, Sutskever commissioned a wooden effigy from a local artist that was intended to represent an “unaligned” AI—that is, one that does not meet a human’s objectives. He set it on fire to symbolize OpenAI’s commitment to its founding principles. In July, OpenAI announced the creation of a so-called superalignment team with Sutskever co-leading the research. OpenAI would expand the alignment team’s research to develop more upstream AI-safety techniques with a dedicated 20 percent of the company’s existing computer chips, in preparation for the possibility of AGI arriving in this decade, the company said.


> The phrase itself was popular enough that OpenAI employees created a special “Feel the AGI” reaction emoji in Slack.

I know it's small and kind of a throw-away line, but statements like this make me take this author's interpretation of the rest of these events with a healthy dose of skepticism. At my company we have multiple company "memes" like this that have been turned into reacji, but most of them exist not because the meme is "popular" but because we use it ironically or to make fun of it. The mere fact that an employee turned it into a reacji is a total non-event; I don't think you can read anything into it.


That quote is a statement of fact; it doesn't imply anything one way or the other.


Well that's not how any of this works. Connotations and subtext are a thing, particularly in the choice of including or not including a particular quote in a piece of journalism.


Yes it does: the quote says the phrase was "popular" and uses the creation of a reacji as supporting evidence. The fact that an employee made a reacji of something does not mean it is popular. Creating a custom reacji takes an extremely small amount of effort from an extremely small number of people (often, just one person).


A phrase can be "popular" in a company because people are making fun of it, and of the people who use it unironically.


But someone put in the effort to create it, and others recognize it?


How many of those reacjis have you been led in a chant for, though?


Being awkwardly goaded by your boss to chant some weird company saying at a company holiday party is _exactly_ the kind of stuff that people make fun of after-the-fact by making memes about it (speaking from experience here... which is why this phrase in the article gave me pause in the first place).

There's not enough info in this article to know if it was seen as weird by the employees or not, but my point is that "they created a reacji of it" isn't evidence one way or the other for it being "popular".


Right. Exhibit A is Steve Ballmer infamously chanting “developers, developers, developers”


Article says there is compelling but contested "smoking gun" evidence in favor of both the lab-leak and zoonotic/wet-market origin theories (only one of which can be the actual origin), including newly elaborated details about infections at the Wuhan Institute of Virology that could have started the pandemic:

> Ben Hu, Yu Ping, and Zhu Yan, three gain-of-function coronavirus researchers at WIV, became severely ill with COVID-like symptoms in the second week of November 2019 and sought hospital care.

The article is ultimately agnostic about the truth and concludes:

> the origins question has broken down into a pair of rival theories that don’t—and can’t—ever fully interact. They’re based on different sorts of evidence, with different standards for evaluation and debate. Each story may be accruing new details—fresh intelligence about the goings-on at WIV, for example, or fresh genomic data from the market—but these are only filling out a picture that will never be complete. The two narratives have been moving forward on different tracks. Neither one is getting to its destination.


I think this is a good point, but here’s a follow up question: is the zoonotic theory incompatible with this list of patient zeros?

It would be a somewhat odd coincidence for virus researchers to be the people infected at the market. But… it seems not completely unlikely that there could be other people infected who didn’t have serious symptoms. Without China sharing all available data it’s going to be impossible to be sure one way or another.


From the article:

> The lawsuit, filed in U.S. District Court for the Western District of Washington, argued that Amazon had “duped millions of consumers” into enrolling in Prime by using “manipulative, coercive or deceptive” design tactics on its website known as “dark patterns.” And when consumers wanted to cancel, Amazon “knowingly complicated” the process with byzantine procedures.

...

> On Wednesday, the F.T.C. said that Amazon had made it particularly difficult to purchase a product in its store without also subscribing to Prime while checking out. In one example, it said the company had used “repetition and color” to push customers’ focus to Prime’s promise of free shipping and away from the service’s price, leading some to subscribe to Prime without “informed consent.”

> The agency also said Amazon made it hard to find the page that allowed consumers to cancel the service. Once they found it, the company bombarded them with offers intended to change their mind. The lawsuit said that Amazon had named the process for canceling Prime after the Iliad, the lengthy Greek epic poem that recounts the Trojan War.


I don’t see this in the article. Has Anthropic explained the mechanism by which they were able to cost-effectively expand the context window, and whether there was additional training or a design decision (e.g. alternative positional embedding approach) that helped the model optimize for a larger window?


No. As far as I know, they haven't said anything about this. Neither did OpenAI about gpt-4-32k.

MosaicML did say something about MPT-7B-StoryWriter-65k+: https://www.mosaicml.com/blog/mpt-7b. They are using ALiBi (Attention with Linear Biases): https://arxiv.org/abs/2108.12409.

I think OpenAI and Anthropic are using ALiBi or their own proprietary advances. Both seem possible.
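For anyone unfamiliar with the linked paper: the core ALiBi idea is to skip learned positional embeddings entirely and instead add a static, head-specific linear penalty (proportional to query–key distance) to attention scores before the softmax. A minimal NumPy sketch of just the bias computation (my own illustration, not any lab's actual implementation):

```python
import numpy as np

def alibi_slopes(n_heads: int) -> np.ndarray:
    # Head-specific slopes form a geometric sequence; for 8 heads this
    # gives 1/2, 1/4, ..., 1/256, as in the ALiBi paper for power-of-two
    # head counts.
    return np.array([2.0 ** (-8.0 * (i + 1) / n_heads) for i in range(n_heads)])

def alibi_bias(n_heads: int, seq_len: int) -> np.ndarray:
    # Signed distance from each query position (rows) to each key
    # position (columns); negative when the key is in the past.
    distance = np.arange(seq_len)[None, :] - np.arange(seq_len)[:, None]
    # Causal attention only looks backward, so clamp future positions
    # to zero (a real implementation would also mask them out).
    distance = np.minimum(distance, 0)
    slopes = alibi_slopes(n_heads)[:, None, None]
    # Shape (n_heads, seq_len, seq_len); added to q.k^T / sqrt(d)
    # before softmax. More distant keys get a larger penalty.
    return slopes * distance
```

Because the penalty is a fixed function of distance rather than a learned embedding, the model can be evaluated at sequence lengths longer than it was trained on, which is the extrapolation property MosaicML leaned on for StoryWriter.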


Interesting. Does the decision to use ALiBi have to be made before the model weights are first trained, or is there a way these models could have incorporated ALiBi (instead of, or in addition to, an alternative positional-encoding method) after they were first trained?


The decision needs to be made before training starts. Maybe there is a clever way to add it after the fact, in the style of LoRA? First, that would be a different method in its own right (just as LoRA is); second, I can't see how to do it easily. But then, I've only thought about it for a minute.


A lot of people are speculating online (https://twitter.com/search?q=anthropicai%20alibi&src=typed_q...), but I'm guessing it's ALiBi, which MPT-7B also used to get up to an 85k-token context.


No, they are playing this close to the chest, similar to how OpenAI achieved 32k context limit.

