
I'm the original developer and it seems very few people really understand how to use this tool. And I did a poor job explaining it too.

No, I do not think of these as patches to be integrated anywhere. Even if you wanted to do that, it's much easier to review the patches than to write them. But why integrate the patches into the code when you can regenerate them all in hours?

This is a post-processing tool. You add additional checks to your existing code. It will help catch bugs (and perhaps introduce a few of its own), like basically every other compiler-level protection.

But if you catch more bugs than you introduce, you win. And remember, this is done with a very basic AI; next year's AIs will produce much better code.
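To make that concrete, here is a rough hypothetical sketch in C (the function and the checks are invented for illustration; they are not output from the tool) of the kind of defensive check such a post-processing pass might regenerate on top of existing code:

    #include <stddef.h>
    #include <string.h>

    /* Hypothetical example: an existing function that copies a name into a
     * fixed-size buffer. A post-processing pass could regenerate it with the
     * extra checks marked below, much like compiler-level protections such as
     * stack canaries or _FORTIFY_SOURCE add checks after the fact. */
    int set_name(char *dst, size_t dst_len, const char *src)
    {
        /* added check: reject NULL pointers instead of crashing */
        if (dst == NULL || src == NULL)
            return -1;

        /* added check: refuse a copy that would overflow the destination */
        if (strlen(src) + 1 > dst_len)
            return -1;

        memcpy(dst, src, strlen(src) + 1);
        return 0;
    }

Reviewing a diff like that is much faster than writing it, and since it is generated, it can simply be thrown away and regenerated whenever the underlying code changes.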


> All AI is written with Python.

Only the high-level code is written in Python; the low-level code is CUDA, which is a form of C.

Also, you would think the first to replace workers with AI would be the AI companies themselves, like Google.


CUDA is not C; it is a polyglot stack for GPGPU using C++ memory semantics, with C, C++ and Fortran compilers, and anything else able to target PTX bytecode, including Julia, .NET, Haskell, Python JITs, Java, ...
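For what it is worth, a trivial kernel shows both sides of the argument: the code below (an illustrative sketch, not taken from either comment) reads like C, but nvcc compiles it as C++ and lowers it to PTX, the same bytecode those other frontends target:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Illustrative CUDA kernel: scale every element of an array on the GPU.
    __global__ void scale(float *data, float factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < n)
            data[i] *= factor;
    }

    int main()
    {
        const int n = 1024;
        float *d = NULL;
        cudaMalloc(&d, n * sizeof(float));            // device-side buffer
        scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);  // C-like syntax, C++ semantics
        cudaDeviceSynchronize();
        cudaFree(d);
        printf("done\n");
        return 0;
    }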


Yes, and not to repeat myself: https://news.ycombinator.com/item?id=40182940


You know that falsely accusing somebody of a crime is also a crime, right?


Now you have to defend yourself, from a lower position.

See how those people work? Now they have assigned themselves the moral high ground, and you have to justify yourself.

Don't do it. You did nothing wrong; those guys are bullies.


> don’t know why so many people worship that guy. Same with RMS. Both of those dudes give me the creeps

Maybe because we listen to those guys for the useful information, and we don't think with the reptile brain that "gives us the creeps".


> he has explained (while very drunk on a live stream) that he has some beliefs that don't... always align well with the status quo

This is exactly the kind of person I want to listen to on a live stream, not boring, moralizing status-quo defenders.


Neurochat is an open-source native GUI for OpenAI LLMs, LLama.cpp and the free service neuroengine.ai.

It's basically an open-source version of LLMStudio. Currently it works on Windows and Linux, and I have some experimental builds for macOS too. Apart from being 100% native (written in FreePascal), the advantage is that with an API key you can access GPT-4 through this GUI without a monthly subscription; you only pay for whatever you use. This is quite useful for me, as I don't use GPT-4 enough to justify the monthly payments, but I still want to try it once in a while.

It supports local LLMs out of the box using the LLama.cpp DLL, and you can get GPU acceleration if you provide the corresponding accelerated LLama.cpp build for your system.
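Roughly speaking, the GPT-4 part works by calling the OpenAI chat completions endpoint with your own key, so you are billed per token rather than per month. A minimal sketch in C with libcurl (the key and prompt are placeholders; Neurochat itself is written in FreePascal, so this only illustrates the pay-per-use call, not its actual code):

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
        CURL *curl = curl_easy_init();
        if (!curl)
            return 1;

        struct curl_slist *hdrs = NULL;
        hdrs = curl_slist_append(hdrs, "Content-Type: application/json");
        /* placeholder: put your real API key here */
        hdrs = curl_slist_append(hdrs, "Authorization: Bearer YOUR_API_KEY");

        const char *body =
            "{\"model\":\"gpt-4\","
            "\"messages\":[{\"role\":\"user\",\"content\":\"Hello\"}]}";

        curl_easy_setopt(curl, CURLOPT_URL,
                         "https://api.openai.com/v1/chat/completions");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

        /* with no write callback set, libcurl prints the JSON reply to stdout */
        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

        curl_slist_free_all(hdrs);
        curl_easy_cleanup(curl);
        return 0;
    }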


Yes, it's coming tomorrow.


So where is it?


Added the 2-clause BSD license. Thank you for bringing it to my attention.


There will always be a delicate balance between freedom and security. This site provides more freedom than other services, but it still requires registration in order to share AIs. If you use it to spread malware links, you will be banned promptly. Other than potential throttling in case of excessive usage, there are no other restrictions.


Yeah, but what do you do with the data people send you?


Currently I do not store any logs, nor do I have any use for them. But you should not have to take my word for it.

I can guarantee that if you use TOR, I do not have your IP or any other info that can be used to identify any client.


