Improving Interoperability Between Rust and C++ (googleblog.com)
166 points by phkahler on Feb 5, 2024 | 99 comments



Nice, I guess, but I had hoped for more technical particulars rather than just "we're trying to throw money at it and hope it gets better".

What will the money actually fund? What are the conditions and KPIs of the grant? Are they putting their weight behind any preferred approach?

Edit: The RF announcement includes more information: https://foundation.rust-lang.org/news/google-contributes-1m-...

"The Rust Foundation’s first Interop Initiative task will be to draft a scope of work proposal for discussion amongst our team members, the Rust Project Leadership Council, Rust Project stakeholders, relevant Rust Foundation member organizations, and its board.

[...] Recommendations will likely include the hiring of one or more Interop Initiative engineers and may include the provision of resources towards expanding on existing interoperability work, build system integration, using AI for C++ to Rust conversion, or some combination of all these. The Foundation will engage the appropriate stakeholders across the Rust Project and its member base to review the proposal and carve the path forward for this important work."


Carbon's approach is to integrate with LLVM internals (particularly the clang AST) and interoperate at that level. I guess Rust could do something similar, but probably with limitations since Rust isn't explicitly designed as a C++ successor language.


In any case Carbon is a research project, even if the Internets give it more spotlight than the authors themselves.


It blows my mind that Google is celebrating a 1 million dollar grant to something that serves as a cornerstone of every aspect of the future of their business.

$1 mil is less than a rounding error for an organization like that. I'd love to know how much they've burned on the numerous projects that occupy their product graveyard, and imagine a future where that was devoted to the growth and education surrounding core language development in every language - Rust, Go, C++, Python, etc.


I don't know if you have much experience running non-profits, but giving an overly large one-time grant to an org which isn't necessarily prepared for it can be a fairly bad idea and often leads to the org doing mad yo-yo things and ending up penniless.

Spending money requires a certain amount of structure and process, and it takes a while to adapt to a money windfall. Going from a little money to a lot changes an org very significantly.

Granted, I also haven't looked much into the Rust Foundation, but I'm going to assume $1 mio is significant for them. Google can always donate more down the road when the investment pays off and is digested usefully.

(Edit: The Rust Foundation had an income of $2.89 mio in 2022 and $2.56 mio in 2023 according to their annual reports. $1 mio sounds about right for what they should be able to usefully handle, from the gut.)


This seems like a cop-out by one of the largest companies in the world to avoid giving back to the tools and languages they use to extract their profit. If you're worried about funds mismanagement, assign some oversight, add a distribution schedule (5m over 5 years) and so on.


> This seems like a cop-out by one of the largest companies in the world

I'm not affiliated with Google in any way.


> something that serves as a cornerstone of every aspect of the future of their business.

I think it's great that Google has been embracing Rust, but this feels hyperbolic to me.


Admittedly poor phrasing, thanks for pointing that out. I meant more "core language development" rather than Rust specifically.

A quick search shows they spent almost $40 billion last year on development expenses. I'd like to imagine the alternate reality where even just $100mil of that was spread out across various core language organizations.


That does in fact change things quite a bit :)


It's hyperbolic for now but not for the future, I suspect. Sounds like chunks of Android are going Rust, but they have also bet big on Rust in Fuchsia. So if Fuchsia has a real future (a questionable proposition), Rust is a huge "cornerstone" of that.


Even if we assume that Fuchsia and Android are 100% rust, I would find “the cornerstone of their entire business” to be hyperbolic.

Yes, these are important projects. It is a big deal. But there’s way more to Google than these two things.


Well, the fact that they're not 100% Rust means that Rust/C++ interoperability is important to them because they want to add Rust components. Isn't that what's happening here?


Sure. That doesn't change the fact that "a cornerstone of every aspect of the future of their business" is hyperbolic.

That said, the OP has clarified that they didn't mean that Rust was this thing, but that language development in general is.


While it is true they are interested in this going forward, I do not see why Google should be criticized for investing money where they find appropriate, as long as it is their money and not someone else's money.


> $1 mil is less than a rounding error for an organization like that.

That's 4 $200k/yr engineers working on this full time and then 1 $200k/yr manager to keep them on track.


The money will last a lot longer if you go with EU Engineers for just as excellent quality…


>EU Engineers for just as excellent quality…

There is a group of highly intelligent people who want to go into lucrative careers who won't pick CS in the EU but will in the US, because of the pay.

I think it's pretty apparent that, while there is fantastic talent in the EU, the density isn't as high as it is in the US.


Well, what about the SC landlords?


For a year only?


How much do you think they should allocate to rust?


4.5 million is a good start for a project of this magnitude affecting this much of Google.

Back 5-10 years ago Google tended to calculate opportunities in SWE-years. Roughly $300K/engineer.

So I'm asking for 5 engineers for 3 years.


I think there's a dedicated Rust team in Google working on interop between C++/Rust. If you count this as well, it would be close to that number.


I am the current passive maintainer of the cpp crate: https://crates.io/crates/cpp / https://github.com/mystor/rust-cpp

I'm a bit disappointed that cxx gets all the glory and nobody likes the approach of the cpp crate.


4M downloads of your crate is quite an achievement.


If they're really serious about this, they'll realize they need to add erased-but-not-specialized types to Rust, and existential types.

If you can replace

    void qsort(void base[.size * .nmemb], size_t nmemb, size_t size,
               int (*compar)(const void [.size], const void [.size]))
with

    extern "C" fn qsort<erase T>(base: *const T, nmemb: usize, compar: fn(*const T, *const T) -> std::ffi::c_int)
such that `qsort` is still a single regular C function, not something that corresponds to many separate C functions, many good things are possible.

In particular for C++, being able to do

    struct MyVTable<erase T> {
        virt_method_0: fn(*const T) -> bool,
        virt_method_2: fn(*const T, usize) -> (usize, usize),
    };

    struct MyType<erase R> {
        vtable: *const MyVTable<MyType<R>>,
        rest_of_fields: R, 
    };
Opens a lot of doors. People will whine because it is exposing the C++ ABI in the Rust API, but that's fine. That's what `private` is there to fix.


Your proposal seems like an unnecessary half measure to me. If you want to go fully ergonomic, you can expose:

    pub unsafe fn qsort<E, C: FnMut(&E, &E) -> core::cmp::Ordering>(elements: &mut [E], mut comp: C)
In rust as currently stands: https://play.rust-lang.org/?version=stable&mode=debug&editio...

On the other hand, both this wrapper and yours are counterproductive if the element size is dynamic (e.g. perhaps you're dealing with some nonsense like:)

    struct ITableColumn {
        virtual ~ITableColumn() {}
        virtual void* base() = 0;
        virtual size_t stride() const = 0;
        virtual int (*get_comparison_func() const)(void*, void*) = 0;
    };


I wrote my example based on the libc quicksort. Yes, in greenfield code other things are better. But it's useful to look at things to bind via FFI as they currently exist to stress-test our current abilities.


To be clear, my playground link is calling libc's qsort. The FFI definition:

    extern "C" { fn qsort(ptr: *mut c_void, count: size_t, size: size_t, comp: extern "C" fn(*const c_void, *const c_void) -> c_int); }
And the call:

    unsafe { qsort(ptr, len, size_of::<E>(), comp_wrapper::<E, C>) };
Are both hidden away in the body of the wrapper fn.
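For reference, here is a self-contained sketch of such a wrapper, simplified to an `Ord` bound rather than an arbitrary closure (plain `qsort` has no context pointer, so the FnMut version needs extra machinery); the names are mine:

```rust
use std::cmp::Ordering;
use std::os::raw::{c_int, c_void};

extern "C" {
    // libc's qsort; size_t is assumed to match usize on this target.
    fn qsort(
        base: *mut c_void,
        nmemb: usize,
        size: usize,
        compar: extern "C" fn(*const c_void, *const c_void) -> c_int,
    );
}

// Monomorphized trampoline: one copy per element type, coerced to a
// plain C function pointer.
extern "C" fn cmp_tramp<T: Ord>(a: *const c_void, b: *const c_void) -> c_int {
    let (a, b) = unsafe { (&*(a as *const T), &*(b as *const T)) };
    match a.cmp(b) {
        Ordering::Less => -1,
        Ordering::Equal => 0,
        Ordering::Greater => 1,
    }
}

/// Sorts a slice via libc's qsort. The pointer/length/size plumbing is
/// hidden away; ZST elements are rejected at run time.
pub fn qsort_slice<T: Ord>(elements: &mut [T]) {
    assert!(std::mem::size_of::<T>() != 0, "ZST elements not supported");
    unsafe {
        qsort(
            elements.as_mut_ptr() as *mut c_void,
            elements.len(),
            std::mem::size_of::<T>(),
            cmp_tramp::<T>,
        );
    }
}
```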


I am writing a single abstract declaration (I suppose I should have used `extern "C" {...}` to be clear) that one can only use safely. You are writing some unsafe code with safe wrapper. These are not the same.


> I am writing a single abstract declaration (I suppose I should have used `extern "C" {...}` to be clear) that one can only use safely.

Your abstract declaration still wouldn't be safe. Remaining unsafety includes:

• possible dangling pointers

• possible incorrect lengths

• possible libc bugs with ZST elements

• undefined behavior if the sort fn misbehaves - (recently reported as a security issue against glibc because that being UB is dumb even if allowed by the standard: https://news.ycombinator.com/item?id=39264396 )

This is why I call it a "half measure".

I also fail to see how your proposed abstract declaration would simplify either my wrapper, or other code that would actually bother to use the raw FFI definition in any significant way. This is why I further call it "unnecessary". It also fails to specify which underlying FFI parameter size_of::<T>() would actually be passed into.

> You are writing some unsafe code with safe wrapper.

My wrapper remains `unsafe` as well, but it ameliorates everything it reasonably can.

> These are not the same.

No, but my wrapper demonstrates an actual use case of your raw FFI definition... and shows the actual concerns of surrounding code that aren't significantly helped by your declaration.


Replace my pointers with safe references then. Add a static assertion in a where clause about not being a ZST.

The second example with the vtables is the point (i.e. complicated data structures). qsort is just a simple example to introduce the concept.
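For what it's worth, the ZST rejection can already be pushed to compile time with a post-monomorphization const assertion (a sketch; the `where`-clause form imagined above doesn't exist, and `sort` stands in for the actual FFI call):

```rust
use std::marker::PhantomData;

// Evaluated lazily per instantiation: compiling sort_via_ffi::<T> for a
// ZST T fails with the assert message.
struct NotZst<T>(PhantomData<T>);

impl<T> NotZst<T> {
    const CHECK: () = assert!(std::mem::size_of::<T>() != 0, "ZSTs are not supported");
}

fn sort_via_ffi<T: Ord>(elements: &mut [T]) {
    let () = NotZst::<T>::CHECK; // compile-time ZST rejection
    elements.sort(); // stand-in for the real qsort FFI call
}
```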


Vtables are pretty solved as well. I do a lot of Windows COM interop. Using the `windows` crate, vtables for COM interfaces are relegated to an implementation detail - instead you simply implement a (typically safe!) trait:

https://microsoft.github.io/windows-docs-rs/doc/windows/Win3...

Which can then be converted to a refcounted smart pointer:

https://microsoft.github.io/windows-docs-rs/doc/windows/Win3...

All driven by win32 sdk parsing and metadata.

But suppose we want to roll our own, because we tend to prefer `winapi` but it lacks definition. That's not too terrible either:

https://github.com/MaulingMonkey/thindx-xaudio2/blob/master/...

https://github.com/MaulingMonkey/thindx-xaudio2/blob/master/...

https://github.com/MaulingMonkey/thindx-xaudio2/blob/master/...

I could more heavily lean on my macros ala `windows`, but I went the route of manual control for better doc comments, more explicit control of thread safety traits to match the existing C++ codebase, etc.

Is there some pointer casting? Yes. Is it annoying or likely to be what breaks? No. What is annoying?

• Stacked borrows and narrowing spatial provenance ( https://github.com/retep998/winapi-rs/issues/1025 - this can be "solved" by sticking to pointers ala `windows`, or by choosing a different provenance model like rustc might be doing?)

• Guarding against panics unwinding over an FFI boundary. This is at least being worked on, but remains unfinished ( https://rust-lang.github.io/rfcs/2945-c-unwind-abi.html )

• Edge case ABI weirdness specific to C++ methods ( https://devblogs.microsoft.com/oldnewthing/20220113-00/?p=10... , https://github.com/retep998/winapi-rs/issues/523 )


Side question: is writing COM stuff in Rust finally usable, or is it still quite raw, low-level C-style programming, like before MFC and ATL came to be?

Rust/WinRT has plenty of old open tickets in that regard, and given the team's track record of messing up the developer experience with C++/WinRT and then giving up on it, I am not too keen on investing in Rust/WinRT.


> Is there some pointer casting? Yes. Is it annoying [..]? No.

Let's just agree to disagree on this.


What does `erase T` mean here? "We don't deref it so don't worry about the type, just erase it?"


Normally, if you have generics, Rust makes copies of the function (/struct/enum/etc.), one for each type that you use with it (aka "monomorphization"). The idea of `erase` is to have a generic function but only one compiled copy of it, which is possible when none of the code actually relies on the details of T.
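A small illustration of the two modes as they exist today (monomorphized generics vs. `dyn`-based erasure):

```rust
use std::fmt::Display;

// Monomorphized: the compiler emits a separate copy of `largest`
// for each concrete T it is used with.
fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
    let mut max = items[0];
    for &x in &items[1..] {
        if x > max {
            max = x;
        }
    }
    max
}

// Erased (today, via `dyn`): a single compiled function that
// dispatches through a vtable instead of being copied per type.
fn describe(x: &dyn Display) -> String {
    format!("{x}")
}
```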


I wonder if we really need explicit syntax, or if the compiler could just disable monomorphization for functions that don't use the interior of the type, e.g. ones that only pass around references like this example.

Without trait bounds there isn't much you can do with it anyway besides generic code like this. I guess the question is: does Rust emit or elide the extra copies of a generic function that never uses its type parameter's interior?


It's... not the most compelling feature in the world. The main use case I think I see is slightly easing the burden of writing external function signatures for a function that passes a user-definable type to a function callback parameter like qsort does. But there's relatively few of those functions in existence, and often times, you want a more ergonomic function definition anyways (see e.g. https://news.ycombinator.com/item?id=39267277 for qsort).

The other major use case is if you've got something like a Vec<T> where several of the helper methods can be erasable. But it's already possible to do this with helper functions--the standard library takes advantage of it--so it's not entirely clear that you need a language-level feature for this use case.


Explicit syntax is much better. "Just infer it" means we have to trace entire call graphs, because we have to know if our callees need monomorphization in order to know if we need monomorphization.


Isn't this just `&dyn Trait`?

Would it be a stack allocated, owning version of that, that can be moved? A pure `dyn Trait`?


No. &mut dyn Foo is sugar for something like

    struct DynFoo<erase T> {
        vtable: &mut VTableFoo<T>,
        ptr: &mut T,
    }
This is the underlying feature. `dyn` is a bit of an ad-hoc hack around not having this feature.


Polymorphic type erasure is the technique where the type is only used during type checking but has no impact on the generated code, namely the compiler doesn't emit specialized code for each instance of the type.


Can I get an ELI5 on erased, specialized, and existential types?


specialized: always substitute every type variable until they are all gone; this creates many different "specializations" or "instantiations". This is how Rust works.

erased: just ignore the type variables for compilation. This is like the typescript -> javascript compilation. (Also how Java, Scala, Haskell, OCaml, etc. type parameters work)

Existential: "there exists a type such that..." see https://en.wikipedia.org/wiki/Existential_quantification https://en.wikipedia.org/wiki/System_F
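In today's Rust, return-position `impl Trait` is roughly the existential form: the signature says "there exists some concrete type implementing this trait" without naming it. A sketch:

```rust
// "There exists a type implementing Iterator<Item = u32>" -- the caller
// can use the value only through the trait, never by its concrete name.
fn evens() -> impl Iterator<Item = u32> {
    (0..).filter(|n| n % 2 == 0)
}
```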


Is “erased” how Haskell typeclasses work?


No, the opposite. They're either specialized (when they can be) or (effectively) existential (when they must be). That is, the compiler either inlines the specific definition of the type class methods if it knows them, or passes them around at run time.


Agreed, except I would not call it "existential"

C++ stores references to dictionaries from/with the things they describe, that's "existential"

Haskell passes in extra parameters that are dictionaries and doesn't (except for user-written code that does) keep around extra references to them. That's more like "universal" than "existential".


Erased is how types in Haskell work. Type classes are compiled as "dictionaries": structs of methods. Sometimes they can be specialized (just pull the thing out of the dictionary statically), but in general they become extra parameters at run time.


Can't you do this today by just writing a macro that generates a generic wrapper that casts/transmutes the pointers to void* before/after running the function? It'll almost certainly get inlined at that point so I'm not sure how much having this functionality native to Rust helps.
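A minimal sketch of that macro idea, with a hypothetical Rust stub standing in for the erased C function so it's self-contained:

```rust
use std::os::raw::c_void;

// Hypothetical erased C-style API; stubbed here in Rust. It just
// echoes the pointer back as an address.
unsafe extern "C" fn takes_erased(ptr: *const c_void) -> usize {
    ptr as usize
}

// Generate a typed wrapper that does the void* cast at the boundary.
macro_rules! erased_wrapper {
    ($name:ident, $t:ty) => {
        fn $name(x: &$t) -> usize {
            unsafe { takes_erased(x as *const $t as *const c_void) }
        }
    };
}

erased_wrapper!(erased_i32, i32);
erased_wrapper!(erased_string, String);
```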


Yes, but to me that's no different to saying "we don't need structs, we can do everything with `[u8; N]` and casts".

Yes, that's true, but it's not type safe, and I don't really want to program that way.


How is it not just as typesafe? Rust will enforce the same typing constraints on the arguments to qsort either way. You're making attestations about how the C code behaves that the Rust compiler cannot type-check for you, but both methods have that fundamental issue.


I won't be able to explain this succinctly. Sorry to be an asshole, but you are in Plato's cave of only having monomorphized types.

The point of erased types is to be able to put into the language invariants that hitherto are only in our heads, but unless you have a type-based mental model for what makes vtables/dictionaries safe (say, based on experience in other languages) you won't even have the ideas you can't yet write down, and thus you won't know what is missing.

I wish there was a good blog-sized resource I could point you to that did explain it instead, but I don't know of any off-hand.

I guess, try to write `dyn Trait` things (like a heterogeneous collection) without `dyn Trait`, and then when it doesn't work without unsafe Rust, you will see what I mean.
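For contrast, here is the `dyn Trait` version of a heterogeneous collection (the thing the exercise asks you to rebuild without `dyn`, which is where the missing erased-type machinery shows up):

```rust
use std::fmt::Display;

// One erased interface, several concrete types behind it.
fn stringify_all() -> Vec<String> {
    let items: Vec<Box<dyn Display>> =
        vec![Box::new(1u32), Box::new("two"), Box::new(3.5f64)];
    items.iter().map(|x| x.to_string()).collect()
}
```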




This might finally be the push for me to learn Rust. I'm primarily a Python developer working on Django, but I used to do embedded development in C, and Operating Systems with Remzi at UW Madison was my favorite class. It got me excited about CS. With Google and industry backing, it seems like Rust is here to stay.


It's worth mentioning that Rust-Python interoperability is already great, in both directions, thanks to pyo3

https://github.com/PyO3/pyo3


Seconding this. I learned a little rust and really liked its philosophy. I was then working on a small project that needed to avoid the big data science python libraries because they're miserable in terms of portability to air-gapped networks.

We had re-implemented a few mathematical and curve finding functions and even with the newest python they were uncomfortably slow.

PyO3 made it so easy to implement a runtime switchable rust library exposed to python, it was almost unnerving. By writing those handful of functions in rust we got something like a 30x speedup.

Note this was a small project with limited funding for r&d so the level of effort for performance speedup was really nice.


Why not compile those (python scientific) libraries from source? Why would PyO3 be acceptable while e.g. numpy is not?


needing to compile them on the airgapped machine introduces more dependencies. Things also get weird when your airgapped machine is a slight version off from your dev machine. There's a lot of ways around it but we've found for the size of this tool, it was preferable to just write our own handful of math functions over building out a deployment method for extra dependencies.


Yeah, PyO3 is great. I've tried to play around with releasing the GIL from rust in Python 3.12. I would enjoy writing a WSGI/ASGI server with a Celery runtime at some point too. Or contribute to Granian.

https://github.com/emmett-framework/granian


PyO3 is great. But it needs more and better documentation.

I recently did use it for a project, and it was quite a lot of time and pain guessing what would work and what wouldn't.


I think Rust is here to stay, but I wish folks had designed a systematic standard library (as Java has) rather than let a set of crates evolve uncontrolledly.

Hopefully, the language will develop further and then reach ISO standardization - which Java cannot as it remains proprietary.


Rust's std lib approach is the best way to approach a std lib (IMO). Even in java world, some dependencies end up being the defacto standard while others end up dying off.

The JDK has a bunch of garbage in it that people shouldn't use. That stuff has to remain due to strong backwards compatibility guarantees.

Rust's approach means that non-standard stuff that we end up realizing is a mistake can quietly die off.

Now, should it be larger? Probably. I'd prefer if rustlang had a standard async/await implementation rather than leaving it up to the ecosystem. But I don't think rust needs, for example, a gui api (like swing or awt) in the standard library.

I'd prefer if rust was managed a bit more closely to the way java is managed today. Java doesn't pull in new apis willy nilly, but the ones they do pull in end up being things that have broad appeal and utility. Rust could take a look at common crates in the ecosystem and start pulling those in to the standard lib.


> I'd prefer if rustlang had a standard async/await implementation rather than leaving it up to the ecosystem

The "default" is tokio, and I think even the tokio authors would agree that it's neither suitable nor ready to be part of the stdlib.

OTOH, pollster[1] (or something like it) should be pulled into the standard library. Not particularly useful for anybody who wants async, but super useful for anybody who doesn't want async but wants to use a 3rd party crate that includes async.

1: https://docs.rs/pollster/latest/pollster/
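To illustrate what pollster-style crates do, here is a minimal std-only `block_on` sketch (names are mine; the real crate is more careful): poll the future, and if it's pending, park the thread until the waker fires.

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::{Arc, Condvar, Mutex};
use std::task::{Context, Poll, Wake, Waker};

// Condvar-based signal: the waker flips `ready` and notifies the
// blocked thread.
struct Signal {
    ready: Mutex<bool>,
    cond: Condvar,
}

impl Wake for Signal {
    fn wake(self: Arc<Self>) {
        *self.ready.lock().unwrap() = true;
        self.cond.notify_one();
    }
}

fn block_on<F: Future>(fut: F) -> F::Output {
    let signal = Arc::new(Signal { ready: Mutex::new(false), cond: Condvar::new() });
    let waker = Waker::from(signal.clone());
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => {
                // Sleep until the waker signals, then poll again.
                let mut ready = signal.ready.lock().unwrap();
                while !*ready {
                    ready = signal.cond.wait(ready).unwrap();
                }
                *ready = false;
            }
        }
    }
}
```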


On the async front, more work needs to be done to allow a more "plug and play" scenario for async runtimes. Standard APIs/traits for task spawning & mgmt, channels, locks, etc. Because right now it's not really feasible to write a library in an async-agnostic way, and tokio just owns the field even in places where it's not really appropriate or the best choice.


In Java land, I can just ignore dead stdlib stuff. In Rust land I have to fight bugs when two crates depend on different async or allocator libraries.

Surely you can see how one is worse than the other?


> Now, should it be larger? Probably. I'd prefer if rustlang had a standard async/await implementation rather than leaving it up to the ecosystem.

What you are describing is a problem, no doubt about it, but it's not as if it can't come up in javaland either. For example, dealing with a project that uses both Gson and Jackson. Or dealing with a project that mixes Netty with Apache http.

And whether you can ignore these dead libs very much depends on which other parts of the library you are dealing with. For example, you might never interact with `Enumeration` or `Vector`, yet there are parts of the Swing API that expose those.


> The JDK has a bunch of garbage in the JDK that people shouldn't use. That stuff has to remain due to strong backwards compatibility guarantees.

That is not actually a problem. Let it stay there, it hurts nobody and may actually occasionally help people.


I disagree that it's not hurting.

The massive feature set of the JDK bloats the size of every container shipping with the JVM. It pumps up the requirements for metaspace. And it negatively impacts JVM startup time.

I'm not saying the JVM needs to remove every dated API. But things like JNDI, for example, are not only dangerous to use (that was the root cause of log4j2's big vulnerability), they are massive feature sets that pretty much nobody wants to use.

Java was designed in an era where we thought having thinish clients streaming jars/classes from central network servers was probably a good idea. When we thought the JVM could be more than just a language VM, it could be an operating system. A lot of those concepts simply don't apply to modern jvm dev or even jvm dev that's happened in the last 20 years.

And, to be clear, the JDK was not wrong in bringing along all these libraries. After all, in the 90s it's not like we really had a great story around community package development. That was an era where devs routinely downloaded their dependencies manually. Clever devs even had curl scripts wired into ant or make to do that job.


Here's a good article about why Rust has a small stdlib:

https://blog.nindalf.com/posts/rust-stdlib/


Instead of a large std tied to the stability guarantees of the language, I would like to see something akin to "stdlib distributions", where a versioned set of community libraries can be depended on to interoperate without duplicated deps in your tree. This decoupling would allow Rust to remain 1.X while the equivalent of the stdlib could bump its major version every, let's say, 3 years, without breaking older projects. You'll notice that this is almost the same thing we have today, and anyone could start it just by creating a crate called "my-std" with the list of dependencies they want to provide.


(have an upvote, no idea why you're in the negative)

I have been thinking about this for the past few years, and I think I'd go so far as to make the standard library "not special." That is, make it work similar to the edition system: cargo new would add a dependency for the latest std at the time, build-std would just be transparent, and you treat it like any other crate.

That said, I am sure there are a zillion issues with this today, especially in Rust itself, but if I were to make a Rust++, this is one of the things I'd consider trying to give a shot.


The issue with this approach lies with the current content of the stdlib: the stdlib is for vocabulary types, that is, types that are meant to be used as interoperability bricks in almost all programs (think Option, Result, Vec).

If you start versioning the stdlib now you need a bridge between the Option from std1.0 and the Option from std2.0. This will become confusing quickly for programs that use libraries that depend on different versions of the std.

We could forbid that situation, but then we have an ecosystem split


Why does Rust need ISO standardization - what is blocked by the lack of ISO standard?


I guess the GP didn't mean to point to ISO as an organization specifically, but overall standardization/specification efforts.


Still, what is blocked by lack of standardization or specification?

Many (all?) languages were successful before being fully specified, and many still are not (cough Python cough) yet that has not impacted their adoption, popularity, or effectiveness.


Many people misunderstand how software is written in regulated industries, and assume that a standard is necessary. In practice, this is not the case. Note that Ferrocene[1] had to produce a specification[2] in order to qualify the compiler. But there isn't a requirement that it must be a standard in any way, only that it describes how the Ferrocene compiler works. Nor that it be accepted by upstream.

1: https://ferrous-systems.com/blog/officially-qualified-ferroc...

2: https://github.com/ferrocene/specification


Exactly. I've heard so much "you must follow MISRA!" but MISRA is not actually the requirement - it's a solution that satisfies the requirement, and there can be other ways to satisfy the requirements of your industry's regulations.


I don't think GP described lack of the language specification as a blocker for future success? It's something nice-to-have when the ecosystem has enough resources, though I'm skeptical if this can be meaningfully kept up-to-date unless the core language team decides to actively maintain it. But this will definitely trade off the velocity to a certain degree.


Yeah, strongly agree. The lack of batteries in the Rust stdlib is by far the biggest mistake in the language.

The first reason is because the stdlib is the only thing you can always count on being there. People are often in situations where they can't download library packages due to security procedures, and have to rely on just the stdlib. People like to complain about urllib2 still being in Python even though it's not really used any more, but I've been in situations where urllib2 was the only thing available and I was damn glad it existed.

The second reason is because the way Rust does things is horribly confusing. What's the best crate to use for X? If you are a regular in the community you probably know, but a newbie is going to have no idea which of the many available options to pick. Whereas something within the stdlib is always a reasonable choice, even if it isn't the best choice.

I really hate that the Rust community in general is so dogmatic about this topic. Having so much functionality outside the stdlib makes the language worse, not better.


I would not hold your breath for an ISO standard, ever.

MAYBE a Ruby-style "standard dump," but I don't see why such a thing would ever be useful.


Java has an official language evolution process that plenty of companies take part in, several implementations, and an official language specification.

Rust has yet to gain at least two fully working implementations, and a language specification is still in progress.


There are tradeoffs, but overall I prefer rust's "free market" solution over Pythons "batteries included". Crates can evolve, compete and innovate much more freely than the standard library can. Most importantly they can be deprecated or abandoned when better approaches are found, while anything in the standard library has to be supported forever. And when the dust settles and everyone agrees on one canonically best implementation, that sometimes does get incorporated into the standard library.


std promises stability to avoid a python2-3 situation. Which means any mistake we make will be with us for a long time. And we've already accumulated quite a bit of those mistakes. So non-trivial additions aren't made lightly.

Compare how many breaking changes even high-quality ecosystem crates have gone through in the last few years.


Announcement from the Rust side: https://news.ycombinator.com/item?id=39263750


I imagine that for Google, one path that would make sense would be 1) upgrading their C++ to Carbon and then 2) making sure that the Rust + Carbon interoperability is solid.

Of course, in a company of Google’s size, I’m sure different groups are pushing all reasonable options.


Carbon is a research project, whose authors are the first to say use Rust if you can.


C++ and Rust interop with async is very nascent. Hope this helps to make improvements in this space.


Doing FFI from C to Python is easy. From Rust to Python it's annoying. You have to use unsafe, and weird parts of the Rust std. Funky refs. Random crashes! Oy vey.


Not my experience at all. At work we rewrote a small bit of hotspot python in Rust with no issues. This was what we primarily followed: https://ohadravid.github.io/posts/2023-03-rusty-python/


Thank you for that great link!


The followup question is "Is it easy in C because it's actually easier or because the result is code that has a half-dozen hidden undefined-behavior mines that C does a poor job of detecting at compile time?"


Rust isn't safe across the FFI barrier, so it's not a fair criticism of C. Go check it out.


It isn't safe, true.

Is it, on average, safer?


If I remember correctly, it can even be worse.


I'm loving that Rust is getting more and more support


Interesting. I think I wouldn't have quit Google 2 years ago if I'd been able to get myself on a team doing Rust dev. Such a team did not exist at Waterloo though.


This would be nice, but the biggest pain I had with C++ is you can't use C++ types as values, e.g. as described here:

https://cxx.rs/binding/cxxstring.html

It's because Rust doesn't have move constructors. I doubt they're going to add move constructors to Rust so that seems kind of unsolvable which is a shame.


If they want googlers to program with one hand metaphorically tied behind the back they already have go for that purpose.



