Yes, of course C doesn't have all the bloat that comes with Go. But still, my question was about performance (as you noted). :)
LibCURL is used (along with a TON of other 3rd party C libs) in a bunch of systems securely. Writing C code isn't __always__ insecure. It all depends on the developer/team. Just like writing Java isn't always portable.
It seems like it's all about saving dev time these days.
I was once pulled onto a project where some devs had used Perl & Python for an embedded system. They got a prototype system up pretty quickly, but performance just wasn't there and they were using WAY too much RAM/CPU based on what the system had to offer. They wasted many man-months trying to optimize their code. Eventually, they brought in some C devs and we rewrote in C and the system ran great. It turned out great for the Python/Perl devs too, because they got to learn C and some of the benefits of using it.
"It seems like it's all about saving dev time these days."
It has always been about saving engineering time; otherwise, why aren't you handcoding things in assembler? Any reasonable development process involves at least a subconscious decision about how much hardware to give up in exchange for a manageable engineering cost.
For the problem domain, Go seems like a reasonable point in the continuum of efficiency and engineering costs.
Another part people overlook when they say "well, my low level language has a library for that, too" is that there is also a cost decision when adopting a library. How quickly can you vet that libcurl has no buffer overflows or memory leaks? How does that compare to vetting Go's net/http? Does the library have enough public usage in your problem space that you can assume someone else has found and reported the bugs?
Dependencies are like puppies. They are cute, right up until they become your responsibility. The author might have felt more comfortable with his chances of updating and maintaining "net/http" and the lower chance that he would need to.
Until something happens to make engineering time limitless and free, it's ALWAYS going to be about saving dev time.
So they proved out the first version in a high level dynamic language and then someone reimplemented it in C for performance reasons? Sounds like a solid process.
I actually deployed a production embedded system running a large chunk of code written in perl once (on a 180MHz FreeBSD/arm board). The performance was just enough, although the critical components were implemented in C.
Don't get me wrong. C doesn't fit all problems.