
Both fantastic and 35 years too late?

more refs https://github.com/llvm/llvm-project/blob/main/clang/docs/Bo...


Thanks for this. I tried it myself and the results are quite impressive.


Please explain why you think they're the most useful by far? Just curious about such a bold statement in a highly competitive space.


Not OP, but it used to be that if you wanted an LLM that would cite its sources, Perplexity was one of the only games in town that did a really good job combining an LLM with an active search engine.

It was also much better for posing questions that required the most up-to-date knowledge.


Why do you sound like a bot.


Probably because bots are trained to sound like people.


It starts with a search and reasons from there. It cites the sources in the middle of the sentences, so you can just click to verify and see the whole context.

It blows every "model only" service out of the water because it's 100x more accurate.

I use it instead of Google for every search now.


Search is the most useful, if not the only useful, application of LLMs. In this space there is also Phind. It seems I use Perplexity as often as Google nowadays.


Never heard of it before, interesting!


Yeah, it was a long time ago that it mattered, grey dog over here.


off topic: I live in this strange world where I can read the code and understand what it does, regardless of the language(!)

but the algorithm for the theory looks approximately like this to me

    (ijfio) $@ * foiew(qwdrg) awep2-2lahl [1]
Is there a good book that goes FROM programmer TO math-wizardry that you would recommend, without falling into the textbooks? Ideally I'd like to be able to read them and turn them into pseudocode, or I can't follow.

[1]: e.g. https://pbr-book.org/4ed/Introduction/Photorealistic_Renderi...


As a mathematician by training who does a lot of programming for a living: This is the biggest misconception about math I see coming from programmers. There's frequently a complaint about notation (be it that it is too compact, too obscure, too gatekept, whatever) and the difficulty in picking out what a given "line" (meaning equation or diagram or theorem, or whatever) means without context.

Here's the thing though: Mathematics isn't code! The symbols we use to form, say, equations, are not the "code" of a paper or a proof. Unless you yourself are an expert at the domain covered by the paper, you're unlikely to be able to piece together what the paper wants to convey from the equations alone. That's because mathematics is written down by humans for humans, and is almost always conveyed as prose, and the equations (or diagrams or whatever) are merely part of that text. It is way easier to read source code for a computer program because, by definition, that is all the computer has to work with. The recipient of a mathematical text is a human with mathematical training and experience.

Don't interpret this as gatekeeping. Just as a hint that math isn't code, and is not intended to be code (barring math written as actual code for proof assistants, of course, but that's a tiny minority).


Fantastic read, thank you. It was never explained to me this way, nor had I imagined it this way.

I'll try to keep an open mind and read more of the surrounding content.

That said, there is a syntactic language that many equations use, and it seems to vary or be presented differently. The big S is one example, as is the meaning of `|`, which is probably not "OR", etc. I wish there were a cheat sheet of sorts, but I suspect it would be death by a thousand paper cuts, with 300 standards of doing things(?)


Maybe you're right, and maybe that's the culture in mathematics, but we don't have to like it.


The text seemed to describe it quite well. They just use a bunch of physics jargon because the book approaches rendering from the physics side of things.

Light leaving a point in a direction = light emitted from that point in that direction (zero if we aren't talking about a light bulb or something) plus all light reflected by that point in that direction.
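
Roughly transcribed in the book's own notation (approximate, so treat the exact symbols as a sketch rather than a verbatim quote from the chapter):

    L_o(p, ω_o) = L_e(p, ω_o) + ∫_{S^2} f(p, ω_o, ω_i) L_i(p, ω_i) |cosθ_i| dω_i

i.e. outgoing light = emitted light + the sum, over every incoming direction ω_i on the sphere S^2, of the portion of incoming light that gets reflected toward ω_o.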


Sure, but I get lost at notations like the big S, `|` and other things. Those notations seem to have many standards, or I just can't seem to follow them.

In pseudocode, or any programming language, I'm right there with you.


I used to feel the same way before I went to university and took a few math courses. Then it became a lot clearer.

And it really isn't that bad in most cases, and isn't unlike how we learnt that "int 10h" is how you change graphics modes[1] in DOS back in the day.

The "big S" is an integral, which is in most cases essentially a for-loop but for continuous values rather than discrete values. You integrate over a domain, like how you can for-loop over a range or discrete collection.

The domain is written here as just a collection of continuous values, S^2, so like a for-in loop, though it can also run from and to specific values, in which case the lower bound is written as a subscript and the upper bound as a superscript.

Similar to how you have a loop variable in a for-loop, you need an integration variable. Due to reasons this is written with a small "d" in front, so in this case "dω_i" means we're integrating (looping) over ω_i. It's customary to write it either immediately after the integral sign or at the end of the thing you're integrating over (the loop body).

However, dω_i serves a purpose: unlike a regular discrete for-loop, integrals can be, let's say, uneven, and the "d" term serves to compensate for that.

The only other special thing is the absolute value function, written as |cosθ_i|, which returns the absolute value of cosθ_i, the cosine of the angle θ_i. Here θ_i is defined earlier in the book as the angle of ω_i relative to the surface normal at the point in question, and its cosine can be calculated using the dot product.

So in programmer-like terms, it reads a bit like this in pseudo-code.

    function L_o(p, ω_o):
      integral_value = 0
      for ω_i in S^2 do
        cos_θ_i = dot(ω_i, n)    # the dot product with the surface normal gives cos(θ_i) directly
        integral_value += f(p, ω_o, ω_i) * L_i(p, ω_i) * abs(cos_θ_i) * dω_i
      return L_e(p, ω_o) + integral_value
Note that the surface normal "n" is implicitly used here; typically it would be passed explicitly in code.

What's special here is that unlike a normal for-loop, in math the changes in ω_i, represented by dω_i, are infinitesimally small. But in practice you can actually implement a lot of integrals by assuming the changes are small but finite[2].
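
For example, here's a tiny Python sketch of that finite-step idea (purely illustrative; the names are made up and it's a plain 1D midpoint sum, not anything from the book):

    import math
    # Midpoint Riemann sum: chop [a, b] into n small but finite steps of width dx
    # and add up f(x) * dx, the discrete stand-in for the "d" term in the integral.
    def riemann_integral(f, a, b, n=1000):
        dx = (b - a) / n
        return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))
    # Integrating cos(x) from 0 to pi/2 comes out very close to the exact answer, 1.
    print(riemann_integral(math.cos, 0.0, math.pi / 2))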

Anyway, this wasn't meant as a full explanation of integrals and such, but just an attempt to show that it's not all gobbledygook.

[1]: https://en.wikipedia.org/wiki/INT_10H

[2]: https://en.wikipedia.org/wiki/Riemann_sum


Screenshot and ask ChatGPT. Works pretty well.


+1 – often, for me, since a lot of the computations are estimated anyway, the biggest thing I need to do is ask ChatGPT to break down the notation and give me the precise terminology.


Neat idea, thanks!


This is a silly argument because at the end of the day, once you make your code portable, you've now duplicated 99% of malloc and free, and you've left a mess for the team or next guy to maintain on top of everything else. You've successfully lowered the abstraction floor which is already pretty low in C.


what?? The whole point is to create independence, then create "real-life" alternatives. I don't understand your point.


The last time I asked someone to turn down the music they dropped their bag to their feet and headed straight to punch me out.

Maybe the wisdom is to just not care, and know you'll die soon, who cares.

Every generation is a shitshow after the next.


This is why concealed carry is so important.


I would feel so productive! Well done.


Amazing that he's still going after all these years.

Please support him if you use his assets in your game prototypes!... Because by the time you ship a finished game, it will be too late, you'll be completely broke.


Thanks for the good advice. I'm not a game developer, but I bought the big sprites pack because he's been working on it for a lot of years and deserves some compensation.


Nature optimizes. The bigger you get, the more you need to eat. The harder it gets to fly. Fruit bats eat fruits.

Look at the food source and you'll understand the evolution.


> Fruit bats eat fruits.

The most calorically dense source of nutrition available in nature? I don't see why that would be a limitation on body size for a flying animal, quite the opposite!


fruit bats are the biggest bats

not GP but I think that was the point.

also, volume grows as the cube of linear dimensions, which puts an upper limit on size, as wing surface area only grows as the square (not sure what/how lift grows relative to)
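
(Rough arithmetic: double the linear size and wing area goes up by about 4x, while volume, and hence weight, goes up by about 8x, so each unit of wing area has to carry roughly twice the load.)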


Plants aren't particularly calorie-dense. Meat, on the other hand...


this is almost in "not even wrong" territory, but for the fact that autotrophs are definitionally the entry point for abiotic energy into edible calories for animals, and the observation that the largest terrestrial megafauna are herbivorous.

bamboo is not calorie dense to humans, because we've lost the ability to digest most of it, but pecans are absolutely more calorie dense than even fatty beef.

all else being equal, an ideal carbohydrate source is more calorically dense than an equivalent ideal lean protein source due to the difference in the thermic effect of food between the two. most mammals outside the obligate carnivores are really well optimized for getting calories from plants; this is why we have amylase in our saliva.


Look at great apes. Large land mammals in general. (Apes came to mind specifically because they usually eat fruit)


Are you aware you switched "fruit" for "plant" there?


Fruits want to be eaten, Veggies don't.

