Hacker News | dcreater's comments

It's a sad reminder that as long as you have the "right" credentials you can appear legitimate: Stanford grad, SF-based. I haven't used it much and I can already sense its tendency to over-promise and under-deliver. Hopefully langchain will get found out sooner rather than later.


I pray 7 years at Google is enough to break through this with my own startup. I left a year ago. Before Google, I was a college dropout and waiter who had sold a mildly successful startup. No one cared. My next best offers were $120K in NYC, or $50K in Buffalo.

You know, thinking on it, the factor you mention makes it more likely you'll get funding, and the funding is what lets you live in an echo chamber.

This sort of closed-minded navel-gazing happened all. the. time. at Google. I call it "rich people thinking," but it might just be the quintessential white-collar thing. At some point the "success" of being there is widely considered enough, and anyone speaking up is either crazy or not a team player.

Even when you've been watching the naked emperor together for 6 months.

Even when "speaking up" is hilariously non-negative, ex. "just a thought exercise, is it possible the emperor does not have 1000 layers of clothes? if so, maybe possibly could I do some work to contribute to the team that could consider an action plan to account for any anticipated negative effect in the market if, possibly, it is shown they are lacking some clothes?"

There are plenty of obviously coarse contrarians, and I used to think complaining about this sort of thing was for chumps who couldn't communicate. It's not. Social problems tend to be well known, drive ~all variability in results, and be simple enough that a middle schooler would grok them.


This is a great idea and user-friendly as well. It has the potential to turn many old devices from useless to useful overnight. However, I wish they had provided some results on tokens/s and latency for a few example setups.


We didn't expect this to blow up so quickly. A lot of work needs to be done on getting different setups working. I've opened an issue here: https://github.com/exo-explore/exo/issues/11


This is great work! I will keep an eye on it (and maybe even try to contribute). Looking back at the beginning of Google, I think their use of commodity hardware and a hardware-agnostic platform likely helped support growth at lower cost. We need more of that in the AI era.


Thank you for the support! I agree on the cost point, and personally I don’t want to live in a world where all AI runs on H100s in a giant datacenter controlled by one company.


He will most likely make more money coaching than playing competitive tennis. Especially in the Bay Area.


He would probably make more money washing cars in the Bay Area than doing almost any job in Nepal. It's a dismally poor country.


VS Code has been a pillar all along? MATLAB is utter trash on macOS: slow, with a dated UI. Also, why support proprietary languages with predatory marketing tactics?


I was a huge fan of RStudio, and it was pretty much the biggest reason I used R. But then I realized how bad R is, syntactically, and how much more useful Python and its ecosystem are. Then I discovered VS Code and Jupyter notebooks in VS Code, which completely sealed the deal. So unless you need specific R data science packages, Python seems like the way to go. I'm quite excited to try Positron!


R has the best syntax of any language I have used. What I hate about R is not the syntax; it's all the functionality inside it that was written in hairy C and FORTRAN style. Unfortunately, any system gets written on top of by programmers of legacy systems.

The syntax and semantics of R are probably the most elegant of any language I have used. The rules are extremely simple, logical, and transparent, and are largely inherited from Scheme. The amount of power it gives developers is unparalleled outside of Common Lisp. So much can be redefined by users. It really is, at its core, a LISP wrapped in C clothing, but better, because few people working on R care at all about compiler optimization. Instead, they all care about late binding and introspection, so programmers can figure out what their code is doing.

If speed is needed, use C++ (Rcpp). However, in practice R is usually fast enough. Somehow the R developers understood Knuth's proverb that premature optimization is the root of all evil (and wasted effort), but the rest of the world forgot.


I don't know. I bought into the Python hype and after a few years I've found myself missing R. If you're using the full python ecosystem more power to you...but for straight up data analysis and statistics, R is unbeatable.


Who are the manufacturers?


Some shell companies set up to collect government green bucks.

We all know this article is full of crap.

What state is the leading producer?


That would be mind-blowing. Why go through all the lossy intermediary steps, especially through Python and JS, when you can generate machine code directly?


If your model is error-prone, having control structures, types, and other compile-time checks is very valuable. It's harder to constrain arbitrary machine code into something sensible.
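To make the point concrete, here is a minimal sketch (my own illustration, not from the thread): model-generated Python can be gated by a cheap parse check before any sandbox run, a guardrail with no direct analogue for a raw stream of machine-code bytes. The function name is hypothetical.

```python
import ast

def validate_generated_code(source: str) -> bool:
    """Cheap static gate for model-generated Python: reject code that
    doesn't even parse before spending a sandbox run on it."""
    try:
        ast.parse(source)
    except SyntaxError:
        return False
    return True

# A well-formed snippet passes; a malformed one is rejected up front.
print(validate_generated_code("def add(a, b):\n    return a + b\n"))  # True
print(validate_generated_code("def add(a, b: return a + b"))          # False
```

Type checkers and linters extend the same idea: each layer of high-level structure is another filter that can reject bad generations cheaply.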


Intuitively it makes sense, but I am not fully convinced. You could give it only a few registers and discard operations that are invalid for certain registers, or plain known-invalid operations.


But that doesn't stop it from generating code that segfaults.


That is the same problem as generating python that blows up, no? (assuming it is tested in a sandbox)


Interpretability, reasoning about the generated code, portability, etc. It probably also demands a larger model with a higher training cost (and likely more training data) to target a more unstructured "language" like machine code.


Did it have to look so ugly?


Next week: making a tic-tac-toe game look decent using JavaScript.


How am I going to make it look good? I don't have the CSS skills for that.


Mine looks great https://go-here.nl/ai/

lol


TL;DR?


If I had a nickel for every outrageous "matches/beats GPT-x" claim, I'd have more money than the capital these projects raise from VC.

This absolutely is not the first Llama3 vision model. They even quote it's performance compared to Llava. Hard to take anything they say seriously with such obviously false claims


> This absolutely is not the first Llama3 vision model. They even quote it's performance compared to Llava.

Although this is true (there have been earlier Llama3-based vision releases), none of the latest Llava releases are Llama3-based.



That is someone else who has just used the Llava name.

It is not by the original group who have published a series of models under the Llava name.


This appears to be a Llava model which was then fine-tuned using outputs from Llama 3. If I understand correctly, that would make it Llama-2-based.


>fine-tuned using outputs from Llama 3.

Llama 3 outputs text and can only see text, this is a vision model.

>that would make it Llama-2-based.

It's based on Llama 3; Llama 2 has nothing to do with it. They took Llama 3 Instruct and CLIP-ViT-Large-patch14-336, trained the projection layer first, and then finetuned the Llama 3 checkpoint and trained a LoRA for the ViT.
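A rough sketch of that projection-layer wiring, with stand-in random weights and data (the real model learns these in a full training loop): a single linear map carries ViT patch embeddings into the LLM's embedding space. The dimensions below are my assumptions matching ViT-L/14 at 336px (24x24 = 576 patches, width 1024) and a 4096-wide LLM hidden state.

```python
import numpy as np

rng = np.random.default_rng(0)

vit_dim, llm_dim, num_patches = 1024, 4096, 576  # ViT-L/14 @ 336px emits 24*24 = 576 patches

# Stand-in for the learned projection: ViT embedding space -> LLM embedding space.
W = rng.standard_normal((vit_dim, llm_dim)) * 0.02
b = np.zeros(llm_dim)

patch_embeddings = rng.standard_normal((num_patches, vit_dim))  # stand-in ViT output
visual_tokens = patch_embeddings @ W + b  # these get spliced into the LLM's input sequence

print(visual_tokens.shape)  # (576, 4096)
```

Training the projection first, with both backbones frozen, aligns the two embedding spaces cheaply; only afterwards are the LLM weights and a ViT LoRA finetuned.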


All models surely write 'its performance'.

