
Cool project! I like the idea of easily sharing LaTeX formulas. It's impressive how smoothly it works right in the browser.

I've always thought compiling LaTeX in WebAssembly would be a tough nut to crack, so I was curious if that's what you'd done here. Turns out you're using KaTeX.

Have you considered any WebAssembly approaches?


Thank you for your positive feedback.

KaTeX does not support all LaTeX features but initializes very quickly.

LaTeX via WebAssembly supports more features but may take longer to initialize.

There's an existing WebAssembly project: https://www.swiftlatex.com


There is TikZJax[1], which apparently compiles TeX to WebAssembly, to run TikZ in the browser.

[1] https://tikzjax.com/


I played with web2js a couple of years ago. TeX ends up being a 500 KB WASM file (88 KB gzipped).

The LaTeX format file and the memory image after LaTeX is loaded are a bit bigger, though (2.3 MB and 6.3 MB gzipped, respectively).


Not OP, but do you mind me asking what advantages you hope to achieve by using WebAssembly rather than KaTeX?


Well, for one, KaTeX doesn't do "LaTeX" but a limited subset of the TeX equation syntax. As such, it can't handle more complicated macros or typeset anything apart from equations.
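
For example, an ordinary LaTeX snippet like the one below (made up for illustration) is entirely out of KaTeX's scope, since KaTeX only processes math-mode input:

    \documentclass{article}
    \usepackage{tikz}
    \begin{document}
    \section{A diagram}  % document structure: not math mode
    \begin{tikzpicture}  % TikZ needs a real TeX engine
      \draw (0,0) -- (1,1) node[above] {$y = x$};
    \end{tikzpicture}
    \end{document}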


Sounds interesting. Is there an open issue for this? I found https://github.com/duckdb/duckdb/issues/8505, but that specific issue seems to be closed.


One cool feature of DuckDB is that you can run SQL directly against a pandas DataFrame or Arrow table.[1] The seamless integration is amazing.

[1]: https://duckdb.org/docs/api/python/overview.html#dataframes
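
To make that concrete, here's a minimal sketch (column names made up; duckdb.sql and the replacement scan over local variables are part of the documented Python API):

    import duckdb
    import pandas as pd

    df = pd.DataFrame({"city": ["Oslo", "Lima", "Pune"],
                       "pop_m": [0.7, 10.9, 3.1]})

    # DuckDB's replacement scan finds `df` in the enclosing Python scope,
    # so the DataFrame can be queried as if it were a table.
    result = duckdb.sql("SELECT city FROM df WHERE pop_m > 1").df()
    print(result)  # Lima, Pune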


How does this compare to DuckDB/Polars? I wonder if a GPU-based compute engine is a good idea. GPU memory is expensive and limited, and the bandwidth between the GPU and main memory isn't that high either.


The same group (NVIDIA/RAPIDS) is working on a similar project, but with Polars API compatibility instead of pandas. It seems to be quite far from completion, though.

See discussion: https://news.ycombinator.com/item?id=39930846
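
From what has been previewed, the goal is that the same lazy Polars query gets a GPU execution path. A sketch of what that would look like, assuming the announced collect(engine=...) parameter and a hypothetical input file:

    import polars as pl

    lazy = (
        pl.scan_parquet("events.parquet")   # hypothetical input
          .filter(pl.col("amount") > 100)
          .group_by("user")
          .agg(pl.sum("amount"))
    )

    # Same query plan; execution is handed to the GPU backend when available.
    result = lazy.collect(engine="gpu")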


Thanks for the heads up. This is amazing.

I've been watching CUDA since its introduction, and Polars since I had an intern porting our pandas code to it a couple of years ago, but I had no idea Polars would go this far, this fast!


I used to think reads are always faster than writes on SSDs. But in Figures 5.3 and 5.4, it looks like read IOPS is lower than write IOPS.

When queue depth is low (like QD=1), 4K random read IOPS is far lower than 4K random write IOPS (14.5 kIOPS vs 128 kIOPS). When queue depth is high, like QD=32, read and write IOPS become similar, but reads are still behind (436 kIOPS vs 608 kIOPS).

I wonder why reads are slower than writes. Is it because the SSD has a fast write cache and completes the write request as soon as the data lands in the cache? Or does it simply report that the data is written and actually flush it in batches in the background?
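
A quick back-of-the-envelope check of the write-cache hypothesis: at QD=1, IOPS is roughly the inverse of per-operation latency. Using the numbers from the figures (the latency interpretation is my own):

    # At queue depth 1, IOPS ~= 1 / average per-operation latency.
    read_iops = 14_500    # 4K random reads, QD=1
    write_iops = 128_000  # 4K random writes, QD=1

    read_latency_us = 1e6 / read_iops    # ~69 us: plausible for a real NAND page read
    write_latency_us = 1e6 / write_iops  # ~7.8 us: too fast for NAND programming,
                                         # suggesting writes are acknowledged from
                                         # the drive's DRAM/SLC cache

    print(f"read:  {read_latency_us:.1f} us/op")
    print(f"write: {write_latency_us:.1f} us/op")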


It reminds me of Typst, which uses wasmi as its WASM plugin executor.

- https://typst.app/docs/reference/foundations/plugin/


Would you say their use case is closer to the "Translation-Intense" type?


Not really. I think the WASM file is only translated once, and most of the time is spent executing it.
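
That split is easy to demonstrate with any WASM runtime. A sketch using the wasmtime Python bindings (a different runtime than wasmi, chosen here only because it has a Python API):

    from wasmtime import Engine, Store, Module, Instance

    wat = """
    (module
      (func (export "add") (param i32 i32) (result i32)
        local.get 0
        local.get 1
        i32.add))
    """

    engine = Engine()
    module = Module(engine, wat)  # translation happens once, here
    store = Store(engine)
    add = Instance(store, module, []).exports(store)["add"]

    # From here on, execution dominates: the translated code is reused.
    for i in range(1_000):
        add(store, i, 1)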


