Fragmentation is unlikely: the native and managed allocations live on completely different heaps, and on 64-bit they'll probably be far apart in address space. Marshalling costs you only insofar as you cross into safe code. C++/CLI is working at a slightly different level.
But I agree that working with the grain of the GC is usually more productive.
That seems like a lot of assumptions, and I don't think I'd be comfortable making a general recommendation on the back of all of them. For example, you never touched on what happens if you allocate enough in both native and managed code to create significant memory pressure: collecting while the heap is paged out is close to impossible, so the CLR goes into panic mode trying to prevent paging, and a large portion of the heap would be untouchable/immovable.
Are you confusing physical memory with address space?
There's no good reason for the managed heap to be anywhere near any of the native heaps in address space, on 64-bit platforms.
And the CLR's GC should actively allocate its slabs well away from any native heap (trivial to do: reserve, not commit, a big contiguous chunk of address space up front), simply because it relies on third-party code that will itself allocate native memory: everything from GUI code to native DB drivers and their caches, quite independent of any unsafe code doing manual allocation.
In the absence of a GC-aware virtual memory manager, GC-immovable memory has little relevance to paging.
(Of course, GC.AddMemoryPressure/RemoveMemoryPressure should be called if you're doing native allocation from .NET.)