I'm the original developer and it seems very few people really understand how to use this tool. And I did a poor job explaining it too.
No, I don't think of these as patches to be integrated anywhere. Even if you wanted to do that, it's much easier to review the patches than to write them. But why integrate the patches into the code when you can regenerate them all in hours?
This is a post-processing tool: you add additional checks to your existing code. It will help catch bugs (and perhaps introduce some of its own), like basically every other compiler-level protection.
But if you catch more bugs than you introduce, you win. And remember this is done with a very basic AI; next year's AIs will produce much better code.
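Purely as an illustration (this is not the tool's actual output, and the function is hypothetical), here is the kind of defensive check such a post-processing pass might insert into existing C code:

    #include <assert.h>
    #include <stddef.h>
    #include <string.h>

    /* Original function: copies a user-supplied name into a fixed buffer. */
    void set_name(char *dst, size_t dst_len, const char *src)
    {
        /* --- checks a post-processing pass might add (hypothetical) --- */
        assert(dst != NULL);            /* catch NULL destination */
        assert(src != NULL);            /* catch NULL source */
        assert(strlen(src) < dst_len);  /* catch the overflow before it corrupts memory */

        /* --- original body, unchanged --- */
        strcpy(dst, src);
    }

The point is the same as with compiler-level hardening (stack protectors, sanitizers): the checks are mechanical, cheap to regenerate, and far easier to review than to write by hand.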
CUDA is not C; it is a polyglot stack for GPGPU with C++ memory semantics, shipping C, C++, and Fortran compilers, plus anything else able to target PTX bytecode, including Julia, .NET, Haskell, Python JITs, Java, ...
Neurochat is an open-source native GUI for OpenAI LLMs, LLama.cpp, and the free neuroengine.ai service.
It's basically an open-source version of LLMStudio. Currently it works on Windows and Linux, and I have some experimental builds for macOS too. Apart from being 100% native (written in FreePascal), the advantage is that with an API key you can access GPT-4 through this GUI without a monthly subscription; you only pay for what you use. This is quite useful for me, since I don't use GPT-4 enough to justify the monthly payments, but I still want to try it once in a while.
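To show what pay-per-use access means in practice, here is a minimal sketch of the kind of request a client sends to the OpenAI endpoint; this is not Neurochat's code, and the key placeholder is obviously yours to fill in. Billing is per token on the request, nothing monthly.

    #include <curl/curl.h>

    int main(void)
    {
        CURL *curl = curl_easy_init();
        if (!curl) return 1;

        struct curl_slist *hdrs = NULL;
        hdrs = curl_slist_append(hdrs, "Content-Type: application/json");
        hdrs = curl_slist_append(hdrs, "Authorization: Bearer YOUR_API_KEY");

        /* One chat-completion request against GPT-4; you pay only for
         * the tokens this call consumes. */
        const char *body =
            "{\"model\":\"gpt-4\","
            "\"messages\":[{\"role\":\"user\",\"content\":\"Hello\"}]}";

        curl_easy_setopt(curl, CURLOPT_URL,
                         "https://api.openai.com/v1/chat/completions");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

        CURLcode res = curl_easy_perform(curl);  /* response goes to stdout */

        curl_slist_free_all(hdrs);
        curl_easy_cleanup(curl);
        return res == CURLE_OK ? 0 : 1;
    }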
It supports local LLMs out of the box through the LLama.cpp DLL, and you can use GPU acceleration if you provide the corresponding accelerated LLama.cpp build for your system.
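Why a drop-in accelerated build works: the library is loaded at runtime, so the same symbols are resolved from whichever libllama the user supplies. The sketch below is not Neurochat's loader; the library path and symbol name are assumptions taken from stock llama.cpp builds.

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* Point this at a CPU-only or a CUDA/Metal-accelerated build;
         * the calling code does not change. */
        void *lib = dlopen("./libllama.so", RTLD_NOW);
        if (!lib) {
            fprintf(stderr, "failed to load llama library: %s\n", dlerror());
            return 1;
        }

        /* Resolve one well-known llama.cpp entry point just to confirm the
         * build exports the expected API (not called here, since its
         * signature has changed across llama.cpp versions). */
        void *sym = dlsym(lib, "llama_backend_init");
        printf("llama_backend_init %s\n", sym ? "found" : "missing");

        dlclose(lib);
        return 0;
    }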
There will always be a delicate balance between freedom and security. This site gives you more freedom than other services, but it still requires registration to share AIs. If you use it to spread malware links you will be banned quickly. Apart from possible throttling if you use it excessively, though, there are no other restrictions.