YossarianFrPrez's comments

If I understand things correctly, the problem with magnetic confinement (e.g. Tokamaks, Stellarators) is that once you have heated a plasma to the point that it is "fusing," how do you get the power out without cooling the very plasma you've just spent a lot of energy heating up?

Helion, a fusion startup, claims to have solved this problem by capturing an induced current from colliding two hot plasmas together. I'd be curious if there is any way the Wendelstein can produce electricity.


Most fusion power systems assume they extract that energy as neutrons. In D-T fusion, the proportion of energy that leaves the plasma as neutron kinetic energy happens to be pretty close to the amount a conveniently sized fusion reactor can afford to remove from the plasma.

Then you trap the neutrons with, for example, a lithium blanket, use them to breed more tritium, and produce energy with a turbine from the heating of the blanket.
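For the curious, that split falls out of momentum conservation between the two fusion products (the lighter neutron takes the larger kinetic share). A quick back-of-the-envelope:

```python
# D-T fusion releases ~17.6 MeV per reaction, split between a neutron and
# an alpha particle in inverse proportion to their masses.
E_total = 17.6            # MeV per D-T reaction
m_alpha, m_n = 4.0, 1.0   # mass numbers of the alpha and the neutron

E_neutron = E_total * m_alpha / (m_alpha + m_n)  # ~14.1 MeV, escapes the plasma
E_alpha = E_total * m_n / (m_alpha + m_n)        # ~3.5 MeV, stays and heats it
```

So roughly 80% of the yield leaves as neutrons, which is what the blanket has to catch.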


Ah, got ya. Thanks for the information.


I'm willing to bet most graduate student stipends fall below the living-wage estimate for the place the students live. It's a terrible disincentive to go into the sciences, especially compared with tech salaries.


This is very cool; I like it a lot. It's visually striking, and if it's sonically striking / it works, I could see it in people's homes or in concert halls. The article says it's "lighter, easier to move, and more convenient to fit in the home." Might be a piano game-changer if true, but given that there is no video, Whipsaw doesn't seem to have built it. Maybe that's the next step in the works.

And yes, there's a decent bit of "artist's statement" patois in the article, but that's sort of expected from designers.


Ah, I see, one interesting part is how the LLM generates a summary of the shown "cards" and weaves a narrative together. Interesting and unconventional use of an LLM.

..

It seems to me that the only thing that "works" about Tarot cards is that they allow the "user" to generate a semi-prompted narrative via symbolically rich-enough images and concepts. The rest is either hopeful thinking, a reflection of one's emotional state, rationalization, or some combo thereof. This makes me wonder: do people like having every aspect of a tarot "reading" laid out for them? Or do they get more out of their own interpretation?


I've actually never had a tarot reading and only know about it from The Pictorial Key to the Tarot https://en.wikipedia.org/wiki/The_Pictorial_Key_to_the_Tarot

Waite's descriptions strike me as densely arcane, potent-sounding symbolism that can be used suggestively to try to provoke some kind of experience or insight in the querent. That's my take.


Personal and dictated readings are very different experiences. What you describe is the self-help version of tarot. Maybe most common, but not exhaustive of the medium. Practitioners know/intuit that the typical major and minor arcana (cards that reveal mutable or immutable Fate) layouts are incredibly fleshed out in archetypal terms. Using the symbols and suits, you can define not just A personal narrative, but ANY personal narrative. Both the major arcana, and each of the four suits of the minor arcana, capture progressions of conscious thought that arguably encompass every conceivable platonic Form. This is a phenomenal achievement when examined in good faith.

You might say it is impractical even if impressive. Still, I have been very surprised to see the principle that is supposed to make the tarot work (the hermetic idea of 'As above, so below' or 'the microcosm can only reflect the macrocosm') pop up in Karl Friston's FEP work. What esotericists describe as 'higher and lower powers' seems to be explicitly captured by ongoing work involving Markov blankets. I am not technically minded enough to get the math, but that's what I take from these interviews:

https://youtu.be/KkR24ieh5Ow


If you are interested in this, there are two related books -- American Nations [1] which was inspired by The Nine Nations of North America [2] as well as the data presented in the link above -- that explore geographic variation in American subcultures. There is also some recent work in personality psychology that aims to get at regional variation in culture (see [3], for example.)

[1] https://en.wikipedia.org/wiki/American_Nations

[2] https://en.wikipedia.org/wiki/The_Nine_Nations_of_North_Amer...

[3] Geographical Psychology, Peter J. Rentfrow. Current Opinion in Psychology. https://www.sciencedirect.com/science/article/pii/S2352250X1...


Perhaps a bit tangential, but the reason there aren't more bio-tech startups given the cited number of bio-tech PhDs is because graduate school / academia doesn't select for "entrepreneurship potential" (broadly defined), it doesn't nurture this potential over the course of one's graduate education, and it doesn't reward entrepreneurship.

Academia has favored "the mind" over every other quality of a human being for a long time. Maybe this is useful for capital-d Discovery. Jury's out on whether it is more effective to train researchers to be founders rather than training founders to be research-oriented.


Depending on your use case (particularly if it is research-oriented), "scipy.spatial.distance.cdist" and "scipy.spatial.distance.pdist" are your friends. If you are doing something in production, the PG extension seems like a good bet.

One way to potentially answer your question about text-chunk-granularity is to take a random sample of 500 pieces of chunked text and look at several "most similar pairs." Do this for a few different chunk-lengths and you'll see how much information is lost...
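A minimal sketch of that sampling idea, assuming you already have chunk embeddings (the shapes, metric, and chunk count here are placeholders, not a recommendation):

```python
import numpy as np
from scipy.spatial.distance import cdist, pdist, squareform

# Hypothetical embeddings: 500 chunks x 384 dims (random stand-ins here)
rng = np.random.default_rng(42)
emb = rng.normal(size=(500, 384))

# Pairwise cosine distances among all chunks (pdist returns the condensed form)
D = squareform(pdist(emb, metric="cosine"))  # 500 x 500 symmetric matrix
np.fill_diagonal(D, np.inf)                  # ignore self-similarity

# The single most similar pair of chunks -- eyeball these side by side
i, j = np.unravel_index(np.argmin(D), D.shape)

# Distances from one query embedding to every chunk, via cdist
query = rng.normal(size=(1, 384))
q_dist = cdist(query, emb, metric="cosine")[0]
top5 = np.argsort(q_dist)[:5]                # indices of the 5 nearest chunks
```

Repeat that with different chunk lengths and inspect whether the "most similar pairs" still look meaningfully similar.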


Kudos to you for asking for honest feedback. I don't mean to pile on; only to clarify something about the pitch on your website.

The thing with "problem-solution-result" style descriptions is that they can rely on (insider) knowledge about both the problem and the solution. Your pitch / tagline works great for people who know about both... even though this is unlikely to be the case outside of people working for your company. I.e. you want readers / potential customers to imagine how your solution applies to their problem, not to the one you've described for them.

If a user / potential customer is tech-savvy enough to know about API testing, it's a good bet that they won't just take it on faith that AI testing will hand-wavily solve their problem. I think you can trust that your potential customers don't need to be heavily sold on the idea that API performance is critical. Instead, I recommend focusing on coming up with marketing copy to address the following questions:

* What does your AI testing do that internal engineers and a set of tests can't?

* How can perfai.ai augment engineering efforts?

* Can perfai.ai find things that traditional test suites miss?

You sort of address this in the "No Code / No Config" section, but it's none too clear and takes some digging to figure out. Speaking of which "bringing the concept of Shift-Left to API active performance" is inside baseball.

Hope this helps!


I changed the heading to:

AI-Powered API Performance Testing No-Code, Self-Learning!

Since the AI learns how to interact with the API and validates every path in it. Is the new sub-heading any better than the previous one?

Also, shortened all the other sections.

Thank you so much for the feedback!


You are on the right track, but you haven't reached your destination, so to speak.

"Value" isn't a property inherent to the copy on your website (or any text, for that matter), it's a property of the relationship between the copy and the reader. One way in which people recognize value is when they gently realize that they are (at least a little bit) wrong.

What are people wrong about re: API testing?

You currently have:

  > AI-Powered API Performance Testing
  
  > No-Code, Self-Learning!

None of this implies that I have a problem. And this is because, if I am building a website, I already test my API.* So it doesn't matter that your product is automatic, no-code, and self-learning. Instead, you have to point out how my testing of my API is incomplete.

*Or so I may think. I might be convinced that I am testing my API even while doing something as simple as running 'rake routes'...

But users will likely respond to the following value propositions:

  > Most people don't even know they aren't testing their APIs properly...

  > Doing it right is actually quite tricky, due to... (e.g. effort, time, money)
  
  > By relying on unit tests, engineers miss the forest for the trees...
  
  > For example...
^^ Build this out first; a snappy way of wording it, as well as a heading / tagline, will follow.

The more you understand your true audience, the more you can "tune" this copy to specific issues they might have in their API test coverage. If I build a Rails site in a month by myself, am I likely to need your product? What about if I'm at the scale of Shopify? Or is your intended audience somewhere in between?


I used to have this line earlier. I changed it after criticism from other users. I agree that your suggestion to give a reason is valid as well.

I'll add this line back: "According to Google, poor API performance leads to a negative user experience and high churn. Deliver high-performance APIs with our self-learning and no-code platform."


I think you are better off with only the second sentence. The first sentence is something your users likely already intuit.


thanks


While not precisely "real estate construction," the NSF has funded many construction-related companies through its SBIR program, many of which are pretty cool: https://seedfund.nsf.gov/topics/advanced-materials/

SBIR-style startups involve discovering new things. Success in real estate construction (strictly speaking) doesn't seem to rely on having deep and novel insights about the world.


So basically, if you want to estimate more than three averages at once, there is a better way to go than just using each raw average on its own. Sort of like how, when computing the standard deviation of a sample, you divide by n-1 instead of n. Anyway, that's the James-Stein estimator, and the authors apply it to PCA dimensionality reduction; they de-bias and improve the leading eigenvector with their technique. Ctrl-F for "fig. 3" to see their results in action.

This looks like it might affect a wide number of fields. I appreciate the concrete examples (batting averages, finance / Markowitz) as well.
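To make the "better than the plain averages" point concrete, here's a toy simulation of the classic (positive-part) James-Stein estimator against the per-coordinate MLE -- a textbook illustration, not the paper's PCA application:

```python
import numpy as np

rng = np.random.default_rng(0)
p, sigma, trials = 10, 1.0, 2000
theta = rng.normal(0.0, 1.0, p)          # true (unknown) means

mse_mle = mse_js = 0.0
for _ in range(trials):
    x = theta + sigma * rng.normal(0.0, 1.0, p)          # one noisy estimate per mean
    # Positive-part James-Stein: shrink all estimates toward zero jointly
    shrink = max(0.0, 1 - (p - 2) * sigma**2 / (x @ x))
    mse_mle += np.sum((x - theta) ** 2)
    mse_js += np.sum((shrink * x - theta) ** 2)

mse_mle /= trials
mse_js /= trials
# For p >= 3, the shrunken estimates have strictly lower total squared error
```

The counterintuitive part is that shrinking every coordinate by the same joint factor beats treating each average independently, for any true theta, once you have three or more of them.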


PCA is an iterative algo. Once you build the leading vector, you subtract it from your data and then build the next.

Do you know, by any chance, why you can't use this new method recursively for building the whole SVD basis? (I haven't read the paper carefully yet... it's a bit at the limit of my understanding.)
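For reference, the deflation loop described above can be sketched with plain power iteration on toy data -- this is generic Hotelling deflation, not the paper's James-Stein-corrected estimator:

```python
import numpy as np

def leading_eigvec(C, iters=500, seed=0):
    # Plain power iteration for the leading eigenvector of a symmetric PSD matrix
    v = np.random.default_rng(seed).normal(size=C.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = C @ v
        v /= np.linalg.norm(v)
    return v

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))      # toy data: 200 samples, 5 features
C = np.cov(X, rowvar=False)

# Deflation: estimate a component, project its variance out, repeat
components = []
Cd = C.copy()
for _ in range(3):
    v = leading_eigvec(Cd)
    components.append(v)
    Cd = Cd - (v @ Cd @ v) * np.outer(v, v)   # remove that component's variance

V = np.array(components)  # rows approximate the top-3 eigenvectors of C
```

Nothing stops you from running any leading-vector estimator inside such a loop; whether the de-biasing guarantees survive repeated deflation is the open question here.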

