Cloning a Laptop over NVMe TCP (copyninja.in)
440 points by pabs3 7 months ago | 171 comments



In the author's scenario, there are zero benefits to using NVMe/TCP, as he just ends up doing a serial block copy using dd(1), so he's not leveraging concurrent I/O. All the complex commands can be replaced by a simple netcat.

On the destination laptop:

  $ nc -l -p 1234 | dd of=/dev/nvme0nX bs=1M
On the source laptop:

  $ nc x.x.x.x 1234 </dev/nvme0nX
The dd on the destination is just to buffer writes so they are faster/more efficient. Add a gzip/gunzip on the source/destination and the whole operation is a lot faster if your disk isn't full, i.e. if you have many zero blocks. This is by far my favorite way to image a PC over the network. I have done this many times. Be sure to pass "--fast" to gzip as the compression is typically a bottleneck on GigE. Or better: replace gzip/gunzip with lz4/unlz4 as it's even faster. Last time I did this was to image a brand new Windows laptop with a 1TB NVMe. Took 20 min (IIRC?) over GigE and the resulting image was 20GB as the empty disk space compresses to practically nothing. I typically back up that lz4 image and years later when I donate the laptop I restore the image with unlz4 | dd. Super convenient.
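For reference, the lz4 variant of the same pipeline would look something like this (same placeholder device name and address as above). On the destination laptop:

  $ nc -l -p 1234 | unlz4 | dd of=/dev/nvme0nX bs=1M
On the source laptop:

  $ lz4 < /dev/nvme0nX | nc x.x.x.x 1234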

That said I didn't know about that Linux kernel module nvme-tcp. We learn new things every day :) I see that its utility is more for mounting a filesystem over a remote NVMe, rather than accessing it raw with dd.

Edit: on Linux the maximum pipe buffer size is 64kB so the dd bs=X argument doesn't technically need to be larger than that. But bs=1M doesn't hurt (it buffers the 64kB reads until 1MB has been received) and it's future-proof if the pipe size is ever increased :) Some versions of netcat have options to control the input and output block size which would alleviate the need to use dd bs=X but on rescue discs the netcat binary is usually a version without these options.


> on Linux the maximum pipe buffer size is 64kB

Note that you can increase the pipe buffer; I think the default maximum size is usually around 1MB. A bit tricky to do from the command line, one possible implementation being https://unix.stackexchange.com/a/328364


It's a little grimy, but if you use `pv` instead of `dd` on both ends you don't have to worry about specifying a sensible block size and it'll give you nice progression graphs too.
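For example (device name and address are placeholders), on the source:

  $ pv /dev/nvme0nX | nc x.x.x.x 1234
and on the destination:

  $ nc -l -p 1234 | pv > /dev/nvme0nX
pv should even work out the size of the block device by itself, so you get a percentage and ETA for free.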


About 9 years ago, I consulted for a company that had a bad internal hack - a disgruntled cofounder. Basically, a dead man's switch was left that copied out the first 20MB of every disk to some bucket and then zeroed it out. To recover the data we had to use TestDisk to rebuild the partition tables… but before doing that we didn't want to touch the corrupted disks, so we ended up copying out about 40TB using rescue flash disks, netcat and dd (some of the servers had a physical RAID with all slots occupied, so you couldn't use any free HDD slots). Something along the lines of dd if=/dev/sdc bs=xxx | gzip | nc -l -p 8888 and the reverse on the other side. It actually worked surprisingly well. One thing of note: try combinations of dd bs= to match the sector size - proper sizing had a large impact on dd throughput


This use of dd may cause corruption! You need iflag=fullblock to ensure it doesn't truncate any blocks, and (at the risk of cargo-culting) conv=sync doesn't hurt as well. I prefer to just nc -l -p 1234 > /dev/nvme0nX.


Partial reads won't corrupt the data. dd will issue further read() calls until 1MB of data is buffered. The iflag=fullblock flag is only useful when counting or skipping bytes or doing direct I/O. See line 1647: https://github.com/coreutils/coreutils/blob/master/src/dd.c#...


According to the documentation of dd, "iflag=fullblock" is required only when dd is used with the "count=" option.

Otherwise, i.e. when dd has to read the entire input file because there is no "count=" option, "iflag=fullblock" does not have any documented effect.

From "info dd":

"If short reads occur, as could be the case when reading from a pipe for example, ‘iflag=fullblock’ ensures that ‘count=’ counts complete input blocks rather than input read operations."


Thank you for the correction -- it is likely that I did use count= when I ran into this some 10 years ago (and been paranoid about ever since). I thought a chunk of data was missing in the middle of the output file, causing everything after that to be shifted over, but I'm probably misremembering.


thank you for bringing it up! i wasn't even aware of this potential problem, and I use bs= count= and skip= seek= (sk"i"p means "input") through pipes across the net aaaallll the time, and have for decades.

it pretty much seems iflag=fullblock is a requirement if you want the counts to work, even though failures might be rare


Isn't `nc -l -p 1234 > /dev/nvme0nX` working by accident (relying on netcat buffering its output in multiples of the disk block size)?


No — the kernel buffers non-O_DIRECT writes to block devices to ensure correctness.

Larger writes will be more efficient, however, if only due to reduced system call overhead.

While not necessary when writing an image with the correct block size for the target device, even partial block overwrites work fine:

  # yes | head -c 512 > foo
  # losetup /dev/loop0 foo
  # echo 'Ham and jam and Spam a lot.' | dd bs=5 of=/dev/loop0
  5+1 records in
  5+1 records out
  28 bytes copied, 0.000481667 s, 58.1 kB/s
  # hexdump -C /dev/loop0
  00000000  48 61 6d 20 61 6e 64 20  6a 61 6d 20 61 6e 64 20  |Ham and jam and |
  00000010  53 70 61 6d 20 61 20 6c  6f 74 2e 0a 79 0a 79 0a  |Spam a lot..y.y.|
  00000020  79 0a 79 0a 79 0a 79 0a  79 0a 79 0a 79 0a 79 0a  |y.y.y.y.y.y.y.y.|
  *
  00000200
Partial block overwrites may (= will, unless the block to be overwritten is in the kernel's buffer cache) require a read/modify/write operation, but this is transparent to the application.

Finally, note that this applies to most block devices, but tape devices work differently: partial overwrites are not supported, and, in variable block mode, the size of individual write calls determines the resulting tape block sizes.


Somehow I had thought even in buffered mode the kernel would only accept block-aligned and sized I/O. TIL.


> # yes | head -c 512 > foo

How about `truncate -s 512 foo`?


Your exact command works reliably but is inefficient. And it works by design, not by accident. For starters, the default block size in most netcat implementations is tiny, like 4 kB or less, so there is higher CPU and I/O overhead. And if netcat does a partial or small read of less than 4 kB, when it writes the partial block to the nvme disk, the kernel takes care of reading a full 4kB block from the nvme disk, updating it with the partial data block, and rewriting the full 4kB block to the disk, which is what makes it work, albeit inefficiently.


I would include bs=1M and oflag=direct for some extra speed.


> there are zero benefits in using NVMe/TCP, as he just ends up doing a serial block copy using dd(1) so he's not leveraging concurrent I/O

I guess most people don't have a local network that's faster than what their SSD can transfer.

I wonder though, for those people who do, does a concurrent I/O block device replicator tool exist?

Btw, you might also want to use pv in the pipeline to see an ETA, although it might have a small impact on performance.


I doubt it makes a difference. SSDs are an awful lot better at sequential writes than random writes, and concurrent IO would mainly speed up random access.

Besides, I don't think anyone really has a local network which is faster than their SSD. Even a 4-year-old consumer Samsung 970 Pro can sustain full-disk writes at 2,000 MB/s, easily saturating a 10Gbit connection.

If we're looking at state-of-the-art consumer tech, the fastest you're getting is a USB4 40Gbit machine-to-machine transfer - but at that point you probably have something like the Crucial T700, which has a sequential write speed of 11,800 MB/s.

The enterprise world probably doesn't look too different. You'd need a 100Gbit NIC to saturate even a single modern SSD, but any machine with such a NIC is more likely to have closer to half a dozen SSDs. At that point you're starting to be more worried about things like memory bandwidth instead. [0]

[0]: http://nabstreamingsummit.com/wp-content/uploads/2022/05/202...


> Besides, I don't think anyone really has a local network which is faster than their SSD. Even a 4-year-old consumer Samsung 970 Pro can sustain full-disk writes at 2,000 MB/s, easily saturating a 10Gbit connection.

You might be surprised if you take a look at how cheap high speed NICs are on the used market. 25G and 40G can be had for around $50, and 100G around $100. If you need switches things start to get expensive, but for the "home lab" crowd, since most of these cards are dual-port, a three-node mesh can be had for just a few hundred bucks. I've had a 40G link to my home server for a few years now mostly just because I could do it for less than the cost of a single hard drive.


Depending on your sensitivity to power/noise a 40Gb switch can be had somewhat inexpensively too - something like the Brocade ICX6610 costs <$200 on eBay.


This is exactly what I'm looking for. Would you mind sharing what specific 40G card(s) you're using?


I'm using Mellanox ConnectX-3 cards, IIRC they're HP branded. They shipped in Infiniband mode and required a small amount of command line fiddling to put them in ethernet mode but it was pretty close to trivial.

They're PCIe 3.0 x8 cards so they can't max out both ports, but realistically no one who's considering cheap high speed NICs cares about maxing out more than one port.


In EC2, most of the "storage optimized" instances (which have the largest/fastest SSDs) generally have more advertised network throughput than SSD throughput, by a factor usually in the range of 1 to 2 (though it depends on exactly how you count it, e.g., how you normalize for the full-duplex nature of network speeds and same for SSD).


Can't find corroboration for the assertion 'SSDs are an awful lot better at sequential writes than random writes'.

Doesn't make sense at first glance. There's no head to move, as in an old-style hard drive. What else could make random write take longer on an SSD?


The main problem is that random writes tend to be smaller than the NAND flash erase block size, which is in the range of several MB.

You can check literally any SSD benchmark that tests both random and sequential IO. They're both vastly better than a mechanical hard drive, but sequential IO is still faster than random IO.


Seems to be the case, to some degree. My SSD is 8% slower doing random writes. I guess your mileage may vary.


The key aspect is that such memory generally works on a "block" level, so making any smaller-than-block write on a SSD requires reading a whole block (which can be quite large), erasing that whole block and then writing back the whole modified block; you physically can't flip a bit back to its erased state without erasing the whole block first.

So if large sequential writes mean that you only write full whole blocks, that can be done much faster than writing the same data in random order.


In practice, flash based SSDs basically never do a full read-modify-write cycle to do an in-place update of an entire erase block. They just write the new data elsewhere and keep track of the fragmentation (consequently, sequential reads of data that wasn't written sequentially may not be as fast as sequential reads of data that was written sequentially).

RMW cycles (though not in-place) are common for writes smaller than a NAND page (eg. 16kB) and basically unavoidable for writes smaller than the FTL's 4kB granularity.


Long story short: changing a bit from 1 to 0 is really easy, but changing it from 0 to 1 is quite difficult and requires an expensive erase operation. The erase works on a whole erase block (far larger than 4k), so writing a random byte means reading the block to a buffer, changing one byte, erasing the block, and writing it back. Sequential writing means erasing the block once, and writing the bytes into known-pre-erased sections.

Any modern (last few decades) SSD has a lot of sauce on top of it to reduce this penalty, but random writes are still a fair bit slower - especially once the buffer of pre-prepared replacement pages runs out. Sequential access is also just a lot easier to predict, so the SSD can do a lot of speculative work to speed it up even more.


dd has status=progress to show bytes read/written now, I just use that


Seems awesome. Can you please tell us how to use gzip or lz4 to do the imaging?


If you search for “dd gzip” or “dd lz4” you can find several ways to do this. In general, interpose a gzip compression command between the sending dd and netcat, and a corresponding decompression command between the receiving netcat and dd.

For example: https://unix.stackexchange.com/questions/632267
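As a concrete sketch (device names and address are placeholders), on the destination:

  $ nc -l -p 1234 | gunzip | dd of=/dev/nvme0nX bs=1M
and on the source:

  $ gzip --fast < /dev/nvme0nX | nc x.x.x.x 1234
Swap gzip/gunzip for lz4/unlz4 (or zstd/unzstd) if the compressor turns out to be the bottleneck.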


Agree, but I'd suggest zstd instead of gzip (or lz4 is fine).


> That said I didn't know about that Linux kernel module nvme-tcp. We learn new things every day :) I see that its utility is more for mounting a filesystem over a remote NVMe, rather than accessing it raw with dd.

Aside, I guess nvme-tcp would result in fewer writes, as you only copy files instead of writing the whole disk over?


Not if you use it with dd, which will copy the blank space too


Yep I've done this and it works in a pinch. 1Gb/s is also a reasonable fraction of SATA speeds.


Would be much better to hook this up to dump and restore. It'll only copy used data and you can do it while the source system is online.

For compression the rule is that you don't do it if the CPU can't compress faster than the network.


This is exactly what I usually do; it works like a charm


As a sysadmin, I'd rather use NVMe TCP or Clonezilla to do a slow write than try to go 5% faster with more moving parts and a chance to corrupt my drive in the process.

Plus, it'd be a well-deserved coffee break.

Considering I'd be going at GigE speeds at best, I'd add "oflag=direct" to bypass caching on the target. A bog standard NVMe can write >300MBps unhindered, so trying to cache is moot.

Lastly, parted can do partition resizing, but given the user is not a power user to begin with, it's just me nitpicking. Nice post otherwise.
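For reference, the resize itself is a one-liner (device and partition number are just examples):

  # parted /dev/nvme0n1 resizepart 3 100%
followed by growing whatever sits inside the partition (LUKS mapping, filesystem).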


NVMe/TCP or Clonezilla are vastly more moving parts and chances to mess up the options, compared to dd. In fact, the author's solution exposes his NVMe to unauthenticated remote write access by any number of clients(!) By comparison, the dd on the source is read-only, and the dd on the destination only accepts the first connection (yours) and no one else on the network can write to the disk.

I strongly recommend against oflag=direct as in this specific use case it will always degrade performance. Read the O_DIRECT section in open(2). Or try it. Basically using oflag=direct locks the buffer, so dd has to wait for the block to be written by the kernel to disk before it can start reading data again to fill the buffer with the next block, thereby reducing performance.


> the author's solution exposes his NVMe to unauthenticated remote write access by any number of clients(!)

I won't be bothered in a home network.

> Clonezilla are vastly more moving parts

...and one of these moving parts is image integrity and write integrity verification, allowing byte-by-byte integrity during imaging and after write.

> I strongly recommend against oflag=direct as in this... [snipped for brevity]

Unless you're getting a bottom-of-the-barrel NVMe, all of them have DRAM caches and do their own write caching independent of O_DIRECT, which only bypasses OS caches. Unless your pipe has higher throughput than your drive, caching in the storage device's controller ensures optimal write speeds.

I can hit the theoretical maximum write speeds of all my SSDs (internal or external) with O_DIRECT. When the pipe is fatter or the device can't sustain those speeds, things go south, but this is why we have knobs.

When you don't use O_DIRECT in these cases, you maybe see an initial speed surge, but the total time doesn't improve.

TL;DR: When you're getting your data at 100MBps at most, using O_DIRECT on an SSD with 1GBps write speeds doesn't affect anything. You're not saturating anything on the pipe.

Just did a small test:

    dd if=/dev/zero of=test.file bs=1024kB count=3072 oflag=direct status=progress 
    2821120000 bytes (2.8 GB, 2.6 GiB) copied, 7 s, 403 MB/s
    3072+0 records in
    3072+0 records out
    3145728000 bytes (3.1 GB, 2.9 GiB) copied, 7.79274 s, 404 MB/s
Target is a Samsung T7 Shield 2TB, with 1050MB/sec sustained write speed. Bus is USB 3.0 with 500MBps top speed (so I can go 50% of drive speed). Result is 404MBps, which is fair for the bus.

If the drive didn't have its own cache, caching on the OS side would have a more profound effect since I could queue more writes to the device and pool them in RAM.


Your example proves me right. Your drive should be capable of 1000 MB/s but O_DIRECT reduces performance to 400 MB/s.

This matters in the specific use case of "netcat | gunzip | dd" as the compressed data rate on GigE will indeed be around 120 MB/s but when gunzip is decompressing unused parts of the filesystem (which compress very well), it will attempt to write 1+ GB/s or more to the pipe to dd and it would not be able to keep up with O_DIRECT.

Another thing you are doing wrong: benchmarking with /dev/zero. Many NVMe do transparent compression so writing zeroes is faster than writing random data and thus not a realistic benchmark.

PS: to clarify I am very well aware that not using O_DIRECT gives the impression initial writes are faster as they just fill the buffer cache. I am talking about sustained I/O performance over minutes as measured with, for example, iostat. You are talking to someone who has been doing Linux sysadmin and perf optimizations for 25 years :)

PPS: verifying data integrity is easy with the dd solution. I usually run "sha1sum /dev/nvme0nX" on both source and destination.

PPPS: I don't think Clonezilla is even capable of doing something similar (copying a remote disk to local disk without storing an intermediate disk image).


> Your example proves me right. Your drive should be capable of 1000 MB/s but O_DIRECT reduces performance to 400 MB/s.

I noted that the bus I connected the device to has a theoretical bandwidth of 500MBps, no?

To cite myself:

> Target is a Samsung T7 Shield 2TB, with 1050MB/sec sustained write speed. Bus is USB 3.0 with 500MBps top speed (so I can go 50% of drive speed). Result is 404MBps, which is fair for the bus.


Yes, USB 3.0 is 500 MB/s, but are you sure your bus is 3.0? It would imply your machine is 10+ years old. Most likely it's 3.1 or newer, which is 1000 MB/s. And again, benchmarking /dev/zero is invalid anyway as I explained (transparent compression).


No, it wouldn't imply the machine is 10+ years old. Even a state-of-the-art motherboard like the Gigabyte Z790 D AX (which became available in my country today) has more USB 3 gen1 (5Gbps) ports than gen2 (10Gbps).

The 5Gbps ports are just marketed as "USB 3.1" instead of "USB 3.0" these days, because USB naming is confusing and the important part is the "gen x".


To be clear for everyone:

USB 3.0, USB 3.1 gen 1, and USB 3.2 gen 1x1 are all names for the same thing, the 5Gbps speed.

USB 3.1 gen 2 and USB 3.2 gen 2x1 are both names for the same thing, the 10Gbps speed.

USB 3.2 gen 2x2 is the 20Gbps speed.

The 3.0 / 3.1 / 3.2 are the version number of the USB specification. The 3.0 version only defined the 5Gbps speed. The 3.1 version added a 10Gbps speed, called it gen 2, and renamed the previous 5Gbps speed to gen 1. The 3.2 version added a new 20Gbps speed, called it gen 2x2, and renamed the previous 5Gbps speed to gen 1x1 and the previous 10Gbps speed to gen 2x1.

There's also a 3.2 gen 1x2 10Gbps speed but I've never seen it used. The 3.2 gen 1x1 is so ubiquitous that it's also referred to as just "3.2 gen 1".

And none of this is to be confused with type A vs type C ports. 3.2 gen 1x1 and 3.2 gen 2x1 can be carried by type A ports, but not 3.2 gen 2x2. 3.2 gen 1x1 and 3.2 gen 2x1 and 3.2 gen 2x2 can all be carried by type C ports.

Lastly, because 3.0 and 3.1 spec versions only introduced one new speed each and because 3.2 gen 2x2 is type C-only, it's possible that a port labeled "3.1" is 3.2 gen 1x1, a type A port labeled "3.2" is 3.2 gen 2x1, and a type C port labeled "3.2" is 3.2 gen 2x2. But you will have to check the manual / actual negotiation at runtime to be sure.


> There's also a 3.2 gen 1x2 10Gbps speed but I've never seen it used.

It's not intended to be used by-design. Basically, it's a fallback for when a gen2x2 link fails to operate at 20Gbps speeds.


I didn't mean 5 Gbps USB ports have disappeared, but rather: most machines in the last ~10 years (~8-9 years?) have some 10 Gbps ports. Therefore if he is plugging a fast SSD into a slow 5 Gbps port, my assumption was that he has no 10 Gbps port.


TIL they have been sneaking versions of USB in while I haven't been paying attention. Even on hardware I own. Thanks for that.


I wonder how using tee to compute the hash in parallel would affect the overall performance.


On GigE or even 2.5G it shouldn't slow things down, as "sha1sum" on my 4-year-old CPU can process at ~400 MB/s (~3.2 Gbit/s). But I don't bother to use tee to compute the hash in parallel because after the disk image has been written to the destination machine, I like to re-read from the destination disk to verify the data was written with integrity. So after the copy I will run sha1sum /dev/XXX on the destination machine. And while I wait for this command to complete I might as well run the same command on the source machine, in parallel. Both commands complete in about the same time so you would not be saving wall clock time.

Fun fact: "openssl sha1" on a typical x86-64 machine is actually about twice faster than "sha1sum" because their code is more optimized.

Another reason I don't bother to use tee to compute the hash in parallel is that it writes with a pretty small block size by default (8 kB), so for best performance you don't want to pass /dev/nvme0nX as the argument to tee; instead you want to use the fancy >(...) shell syntax to pass tee a file descriptor that is sha1sum's stdin, then pipe the data to dd to give it the opportunity to buffer writes in 1MB blocks to the nvme disk:

  $ nc -l -p 1234 | tee >(sha1sum >s.txt) | dd bs=1M of=/dev/XXX
But rescue disks sometimes have a basic shell that doesn't support fancy >(...) syntax. So in the spirit of keeping things simple I don't use tee.


It's over 10 years ago that I had to do such operations regularly with rather unreliable networks to Southeast Asia and/or SD cards, so calculating the checksum every time on the fly was important.

Instead of the "fancy" syntax I used

   mkfifo /tmp/cksum
   sha1sum /tmp/cksum &
   some_reader | tee /tmp/cksum | some_writer
Of course under the conditions mentioned throughputs were moderate compared to what was discussed above. So I don't know how it would perform with a more performant source and target. But the important thing is that you need to pass the data through the slow endpoint only once.

Disclaimer: From memory and untested now. Not at the keyboard.


> ...and one of these moving parts is image integrity and write integrity verification, allowing byte-by-byte integrity during imaging and after write.

dd followed by sha1sum on each end is still very few moving parts and should still be quite fast.


Yes, in the laptop and one-off case, that's true.

In a data center it's not (this is when I use clonezilla 99.9% of the time, tbf).


I don't see how you can consider the nvme over tcp version to have fewer moving parts.

dd is installed on every system, and if you don't have nc you can still use ssh and sacrifice a bit of performance.

  dd if=/dev/foo | ssh dest@bar "cat > /dev/moo"


NVMe over TCP encapsulates and shows me the remote device as is. Just a block device.

I just copy that block device with "dd", that's all. It's just a dumb pipe encapsulated with TCP, which is already battle tested enough.

Moreover, if I have a fatter pipe, I can tune dd for better performance with a single command.


netcat encapsulates data just the same (although in a different manner), and it's even more battle-tested. NVMe over TCP's use case is to actually use the remote disk over the network as if it were local. If you just need to dump a whole disk like in the article, dd+netcat (or even just netcat, as someone pointed out) will work just the same.


Nvme over TCP encapsulates the entire nvme protocol in TCP, which is way more complex than just sending the raw data. It's the opposite of "a dumb pipe encapsulated in tcp"; that is what the netcat approach would be. Heck, if you insist on representing the drive as a block device on the remote side you could just as well use NBD, which is just about as many moving parts as nvme over tcp but still a simpler protocol.


Just cat the blockdev to a bash socket


I came here to suggest similar. I usually go with

    dd if=/dev/device | mbuffer to Ethernet to mbuffer dd of=/dev/device
(with some switches to select better block size and tell mbuffer to send/receive from a TCP socket)

If it's on a system with a fast enough processor I can save considerable time by compressing the stream over the network connection. This is particularly true when sending a relatively fresh installation where lots of the space on the source is zeroes.
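Concretely it ends up looking something like this (host, port and sizes are just examples):

    mbuffer -I 9090 -s 1M -m 1G | dd of=/dev/device bs=1M           # receiver
    dd if=/dev/device bs=1M | mbuffer -s 1M -m 1G -O receiver:9090  # sender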


Sir, I don't upvote much, but your post deserves a double up, at least


I am not often speechless, but this hit the spot. Well done Sir!

Where does one learn this black art?


[flagged]


> Hdjrnrhf Fhjrnrjg Cnn3nrmf Нос3uejr Нирш Юни до края шллш

Is this some new form of Russian Ops here on HN?


Thanks AWS/Annapurna/Nitro/Lightbits for bringing NVMe-over-TCP to Linux.

https://www.techtarget.com/searchstorage/news/252459311/Ligh...

> The NVM Express consortium ratified NVMe/TCP as a binding transport layer in November 2018. The standard evolved from a code base originally submitted to NVM Express by Lightbits' engineering team.

https://www.lightbitslabs.com/blog/linux-distributions-nvme-...

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...


A lot of hassle compared to:

  nbdkit file /dev/nvme0n1
  nbdcopy nbd://otherlaptop localfile


This is actually much better because nbdcopy can handle sparse files, you can set the number of connections and threads to number of cores, you can force a flush before exit, and enable a progress bar. For unencrypted drives it also supports TLS.
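For instance, something like this (hostname and target device are placeholders):

  nbdcopy -p --flush --connections=4 nbd://otherlaptop /dev/nvme0n1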


If you really want a progress bar, chuck a 'pv' somewhere into the command posted at the top of the thread.


Or add nbdcopy -p option :-)


And "dd conv=sparse" to fix the other problem.


I recently had to set up a new laptop (xubuntu).

Previously I cloned, but this time I wanted to refresh some of the configs.

Using a usb-c cable to transfer at 10gb/s is so useful (as my only other option was WiFi).

When you plug the computers together they form an ad-hoc network and you can just rsync across. As far as I could tell the link was saturated so using anything else (other protocols) would be pointless. Well not pointless, it's really good to learn new stuff, maybe just not when you are cloning your laptop (joke)!


Did it ‘just work’?

Serious question since last time I tried a direct non-Ethernet connection was sometime in the 90s ;)


I assume that those USB-C connectors were USB 4 or Thunderbolt, not USB 3.

With Thunderbolt and operating systems that support Ethernet over Thunderbolt, a virtual network adapter is automatically configured for any Thunderbolt connector, so connecting a USB C cable between 2 such computers should just work, as if they had Ethernet 10 Gb/s connectors.

With USB 3 USB C connectors, you must use USB network adapters (up to 2.5 Gb/s Ethernet).


USB C is perfectly capable of connecting two equals, even with USB-2.

It merely requires one side to be capable of behaving as a device, with the other side behaving as a host.

I.e., unlike PCIe hubs, you won't get P2P bandwidth savings on a USB hub.

It just so happens that most desktop xhci controllers don't support talking "device".

But where you can, you can set up a dumb bidirectional stream fairly easily, over which you can run SLIP or PPP. It's essentially just a COM port/nullmodem cable. Just as a USB endpoint instead of as a dedicated hardware wire.


Yes, both were modern enough laptops. Although the ideapad didn't advertise thunderbolt in the lspci, connecting that and the dell precision "just worked" (tm).

It's very useful for sending large data using minimal equipment. No need for two cat6 cables and a router for example.


You just need a single Ethernet cable really, if the devices are reasonably modern. With Auto MDI-X the days of needing a crossover cable or a switch are over.


I'm not sure, first off the precision doesn't have an ethernet port at all!

Secondly, I'm not sure if a crossover cable setup will autoconfigure the network; as the poster above says, it has been since the 90s that I last bothered trying something like that!


Right, that plan is somewhat foiled by most laptops not having ethernet ports anymore.

You don't need crossover cables anymore. You can just connect a regular patch cable directly between 2 devices. Modern devices can swap RX/TX as needed.

As for auto-configuration, that's up to the OS, but yeah you probably have to set up static IPs.


They would receive an APIPA (169.254.0.0/16) address for IPv4 and a link-local one for IPv6, if the interface were brought up. Now, that second part is the question; Windows would do it, but Linux probably not.


If both ends got APIPA addresses would they be able to talk to each other?

I was under the impression you have to set up the devices as each other's default gateway, but maybe I'm the one not up to modern standards this time.


Their randomly chosen addresses are within the same /16 subnetwork.

Therefore they can talk directly, without using a gateway.

Their corresponding Ethernet MAC addresses will be resolved by ARP.

The problem is that in many cases you would have to look at the autoconfigured addresses and manually enter the peer address on each computer.

For file sharing, either in Windows or using Samba on Linux, you could autodiscover the other computer and just use the name of the shared resource.


There's https://en.wikipedia.org/wiki/Multicast_DNS though I'll admit I never properly tried it.


I did this also, but had to buy a ~$30+ Thunderbolt 4 cable to get the networking to work. Just a USB3-C cable was not enough.

The transfer itself screamed and I had a terabyte over in a few mins. Also I didn't bother with encryption on this one, so that simplified things a lot.


Were you transferring the entire filesystem, after booting from a live disk, or were you transferring files over after having set up a base system?


I don't understand why he didn't pipe btrfs over the network. Do a btrfs snapshot first, then btrfs send => nc => network => nc => btrfs receive. That way only blocks in use are sent.
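Roughly like this, assuming / is a btrfs subvolume (paths and address are placeholders). On the destination:

  nc -l -p 1234 | btrfs receive /mnt/newdisk
On the source:

  btrfs subvolume snapshot -r / /snap_root
  btrfs send /snap_root | nc x.x.x.x 1234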


That's the first thing I thought when I saw he used btrfs. I use btrfs send/receive all the time via SSH and it works great. He could easily have set up an SSH server in the GRML live session.

There's one caveat though: With btrfs it is not possible to send snapshots recursively, so if he had lots of recursive snapshots (which can happen in Docker/LXD/Incus), it is relatively hard to mirror the same structure on a new disk. I like btrfs, but recursive send/receive is one aspect where ZFS is just better.


Yea, there needs to be a `snapshot -r` option or something. I like using subvolumes to manage what gets a snapshot but sometimes you want the full disk.


I recently had to copy around 200GB of files over wifi. I used rsync to make sure a connection failure doesn't mean I have to start over and so that nothing is lost, but it took at least 6 hours. I wonder what I could have done better.

Btw, what kinds of guarantees do you get with the dd method? Do you have to compare md5s of the resulting block level devices after?


> 200gb of files over wifi […] took at least 6 hours

> I wonder what could I have done better.

Used an Ethernet cable? That's not an impressive throughput over a local network. WiFi has like a million more sources of perf bottlenecks. Btw, just using a cable on ONE leg of the device => router => device path can help a lot.


Yeah, I did that. One of the devices didn't have an ethernet port though.


If it’s a MacBook you can get Ethernet - USB-C adapters for dirt cheap. Otherwise, you have to move closer to the router and/or get a new router/wifi card if it’s running old 2g stuff for instance. But again, WiFi is headache - especially since many parameters are outside your control (like nearby airports, neighbors, layout of your home, to name a few lol).


> ...but it took at least 6 hours...

Rsync cannot transfer more than one file at a time, so if you were transferring a lot of small files that was probably the bottleneck. You could either use xargs/parallel to split the file list and run multiple instances of rsync, or use something like rclone, which supports parallel transfers on its own.
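A rough sketch of the xargs variant (paths and host are hypothetical, and it breaks on names with spaces; GNU parallel or rclone handle that better):

  cd /src && ls -1 | xargs -P4 -I{} rsync -a ./{} user@dest:/dst/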


You could tar and zip the files into a netcat-pipe


> I wonder what could I have done better.

6 hours is roughly 10MB/s, so you likely could have gone much much quicker. Did you compress with `-z`? If you could use ethernet you probably could have done it at closer to 100MB/s on most devices, which would have been 35 minutes.


No, I didn't use compression. Would it be useful over a high-bandwidth connection? I presume that it wasn't wifi bandwidth that was bottlenecking, though I've not really checked.

One thing I could have done is found a way to track total progress, so that I could have noticed that this is going way too slow.


I wouldn't consider 10MB/s high bandwidth. My home wifi network can comfortably handle 35MB/s, while a wired connection is 110MB/s+

> No, I didn't use compression. Would it be useful over a high-bandwidth connection?

Compression will be useful if you're IO bound as opposed to CPU bound, which at 10MB/s you definitely are.


Thanks for the clarification. I presumed that it's the rsync+ssh protocol chatter that was slowing me down, not the wifi bandwidth (which I've not tested separately ). I don't have my head wrapped around the mechanics in play, to be honest.


I’ve had compression noticeably speed things up even over a wired lan


One of the options of rsync is to print out transfer speed, --progress or verbose or similar.


Yeah, I did that, but that shows stats per file, which is minimally useful. I would see it go up to 15 MB/s for bigger files, but it turns out the aggregate speed was less than 10 MB/s.


Right, I guess it doesn't show the total speed until the end, which is probably too late. Although 15MB/s is not very fast these days either.

Elsewhere I recommended the Thunderbolt4 cable—it screamed at ~10GBs and didn't need any optimization like tar/compression.


Zstd is also great for io bottlenecked transfers with its adapt flag. I don't know if you can use it with rsync though.

> --adapt[=min=#,max=#]: zstd will dynamically adapt compression level to perceived I/O conditions.


If your transport method for rsync was ssh, that is often a bottleneck, as openssh has historically had some weird performance limits that needed obscure patches to get around. Enabling compression helps too if your CPU doesn't become a bottleneck


Yeah, it was SSH. Thanks for the heads up.


Wifi shares the medium (air) with all other radios. It has a random time it waits after it stops if it sees a collision.

"Carrier-sense multiple access with collision avoidance (CSMA/CA) in computer networking, is a network multiple access method in which carrier sensing is used, but nodes attempt to avoid collisions by beginning transmission only after the channel is sensed to be "idle".[1][2] When they do transmit, nodes transmit their packet data in its entirety.

It is particularly important for wireless networks, where the alternative with collision detection CSMA/CD, is not possible due to wireless transmitters desensing (turning off) their receivers during packet transmission.

CSMA/CA is unreliable due to the hidden node problem.[3][4]

CSMA/CA is a protocol that operates in the data link layer."

https://en.wikipedia.org/wiki/Carrier-sense_multiple_access_...


I'm not sure I get your meaning. Are you implying that dd over netcat is unreliable because CSMA/CA is unreliable?


Sorry, just that it isn't anywhere near as efficient. There's a lot of random wait that will slow everything down.


Oh ok, thanks for the insight!


200 GB in 6 hours is too slow for wifi, too.

dd doesn't skip empty blocks, like clonezilla would do.


I'm sure there are benefits to this approach, but I've transferred laptops before by launching an installer on both and then combining dd and nc on both ends. If I recall correctly, I also added gzip to the mix to make transferring large null sections faster.

With the author not having access to an ethernet port on the new laptop, I think my hacky approach might've even been faster because of the slight boost compression would've provided, given that the network speed is nowhere near the throughput ceiling that compression imposes on a fast network link.


Well, if it's whole-disk encrypted, unless they told LUKS to pass TRIM through, you'd not be getting anything but essentially random data the way the author described it.


could you explain how you do that exactly?



Basically https://news.ycombinator.com/item?id=39676881, but also adding a pipe through gzip/gunzip in the middle for data compression.

I think I did something like `nc -l -p 1234 | gunzip | dd status=progress of=/dev/nvme0n1` on the receiving end and `dd if=/dev/nvme0n1 bs=40M status=progress | gzip | nc 10.1.2.3 1234` on the sending end, after plugging an ethernet cable into both devices. In theory I could've probably also used the WiFi cards to set up a point to point network to speed up the transmission, but I couldn't be bothered with looking up how to make nc use mptcp like that.


Or just use Clonezilla? Then you also copy only the actual data blocks, and it can autoresize your partitions as well. That's how I always do it.

True, I always just take the NVME disk out of the laptop and put it in a highspeed dock.


Clonezilla is great. It's got one job and it usually succeeds the first time. My only complaint is the initial learning curve requires tinkering. It's still not at the trust level of fire and forget. Experimenting is recommended, as a backup is never the same thing as a backup and restore, and even Clonezilla will have issues recreating partitions on disks that are very different from their source.


Spot on re: learning curve, it absolutely requires a test run or two before you'll get it right


I haven't actually "installed" an OS on my desktops/laptops in decades, always just copy over the files and adjust as needed. Usually just create a new filesystem though to use the opportunity to update the file system type / parameters (e.g. block size), encryption etc. and then rsync the files over.

Still, if I was the planning ahead kind then using something more declarative like NixOS where you only need to copy your config and then automatically reinstall everything would probably be the better approach.


If you directly connect devices over WiFi without an intermediate AP you should be able to double your transfer speed. In this scenario it might have been worth it.


> and I'm not so familiar with resizing LUKS.

LUKS2 does not care about the size of the disk. If its JSON header is present, it will by default treat the entire underlying block device as the LUKS encrypted volume/partition (sans the header), unless specified otherwise on the command line.
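So after cloning to a bigger disk, growing everything is roughly (device, partition number and mapping name are placeholders; the last step assumes btrfs):

  parted /dev/nvme0n1 resizepart 3 100%   # grow the partition
  cryptsetup resize cryptroot             # grow the open LUKS mapping to fill it
  btrfs filesystem resize max /           # grow the filesystem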


Since there is no mention of it yet: FDT (Fast Data Transfer) [1]

Amazing software (transfer-performance-wise), written (unfortunately) in Java :) Unintuitive CLI options, but the fastest transfer I have seen.

So fast, if I don't artificially limit the speed, my command would sometimes hog the entire local network.

-limit <rate> Restrict the transfer speed at the specified rate. K (KiloBytes/s), M (MegaBytes/s) or G (GigaBytes/s) may be used as suffixes.

[1] http://monalisa.cern.ch/FDT/download.html

PS: It does cause file-fragmentation on the destination side, but it shouldn't matter to anyone, practically.


I was in a situation like this recently, and just want to say that if opening the pc/laptop and connecting it to the new machine is possible, you should go that route; it's a little hassle compared to waiting for a transfer over the network, and waiting times can be deceptive.

This reminds me of one of my favorite articles:

https://blog.codinghorror.com/the-infinite-space-between-wor...


So this just bit-for-bit dumps a NVMe device to another location. That's clear. So all encryption is just transferred and not touched. But doesn't the next machine go into panic when you boot? There are probably many changes in the underlying machine? (Okay, now I read the other post. The author really knows the way. This is at least intermediate Linux.)


A Linux install is often remarkably hardware agnostic.

Windows would panic, certainly (because so many drivers & so much other state is persisted & expected), but the Linux kernel when it boots kind of figures out afresh what the world is every time. That's fine.

The main thing you ought to do is generate a new systemd/dbus machine-id. But past this, I fairly frequently instantiate new systems by taking a btrfs snapshot of my current machine & sending that snapshot over to a new drive. Chroot onto that drive, use bootctl to install systemd-boot, and then I have a second Linux ready to go.


Unless you're doing a BIG jump, like from legacy to UEFI, or SATA to NVMe, Windows will generally just figure it out.

There may be an occasional exception for when you're doing something weird (iirc the 11th-13th gen Intel RST setups need slipstreamed or manually added drivers unless you change controller settings in the BIOS, which may bite on laptops at the moment if you're unaware of having to do it).

But even for big jumps you can usually get it working with a bit of hackery pokery. Most recently I had to jump from a legacy C2Q system running Windows 10 to run bare metal on a 9th gen Core i3.

I ended up putting it onto a VM to run the upgrade from legacy to UEFI so I'd have something that'd actually boot on this annoyingly picky 9th gen i3 Dell system, but it worked.

I generally (ab)use Macrium Reflect, and have a copy of Parted Magic to hand, but it's extremely rare for me to find a machine I can't clone to dissimilar hardware.

The only one that stumped me recently was an XP machine running specific legacy software from a guy who died two decades ago. That I had to P2V. Worked though!


My old practice used to be to start in safe mode & delete as many device drivers as I could. That got me pretty far.

These days though I've had a number of cases where some very humble, small-scoped BIOS change will cause Windows to not boot. I've run into this quite a few times, and it's been quite an aggravation to me. I've been trying to get my desktop's power consumption down & get sleep working, mostly in Linux, but it's shocking to me what a roll of the dice it's been that I may have to do the Windows self-reinstall, from changing an AMD cool-n-quiet setting or adjusting a sleep mode option.


You can "make" windows do a lot of things. As a consumer oriented product you shouldn't have to.


You don't if you move from one relatively modern machine to another. My current laptop SATA SSD has moved between... 4 different machines, I think? I'll have to "make" when I move to an NVMe SSD, maybe, but it might largely just work. Time may tell.

If you do things the Microsoft way, you just sign into your MS account and your files show up via OneDrive, your apps and games come from the Microsoft store anyway.

There's plenty to fault Microsoft for. Like making it progressively more difficult to just use local accounts.

I don't think "cloning a machine from hardware a decade old onto a brand new laptop may require expertise" is one of them.


> If you do things the Microsoft way

and happen to live near good bandwidth.

> Like making it progressively more difficult to just use local accounts.

Right, because they don't care about your hardware, just your payment relationship with them.

> cloning a machine from hardware a decade old onto a brand new laptop may require expertise

you don't see the connection between all these facts? "Copying all the work and effort you've collected on your personal machine for a decade _still_ unaccountably requires expertise."

If this were something so unusual that people would rarely want to do it, this would be a reasonable point. The fact that this is such an obvious thing people want to do without having to have a monthly subscription first indicates that the product does not cater well to its selected market segments.


I expected Windows to panic as well but Win 10 handles it relatively gracefully. It says Windows detected a change to your hardware and does a relatively decent job.


Yeah, I've done several such transplants of disks onto different hardware.. maybe even from an AMD to an Intel system once (different mainboard chipsets), and of Windows 7. As you say, it will say new hardware detected and a lot of the time will get the drivers for them from Windows Update. There are also 3rd party tools to remove no-longer existing devices from the registry, to clean the system up a bit.


Windows will, however, no longer acknowledge your license key. At least that is what happened when I swapped a Windows installation from an Intel system into an AMD system.


If you switch to a Microsoft account instead of a local account before you move your install, you can often recover your license once Windows processes the hardware change (sometimes it takes several attempts over a few reboots, their license server is special), and once that happens, you can go back to a local account, if Microsoft didn't remove that flow.


With Windows 10, I have unplugged the primary storage from an old laptop, and plugged it into a new one, without issue.

The system booted, no license issues.


Likely because on a laptop there's usually a Windows license embedded in an ACPI table. As long as you weren't moving a Pro edition installation to a machine that only had a key for Home edition, it would silently handle any re-activation necessary.


To me the most complicated thing would be to migrate an encrypted disk with the key stored in TPM. The article doesn't try to do that, but I wonder if there is a way.


If you moved the wrong stuff Linux would give you a bad time too, try /proc, /dev


Those are pseudo-filesystems though, they aren't part of the install.


Not sure where I said they were part of the install, but I meant that if you don’t exclude them bad things would happen to the cloning process

Edit: or backup process


With a block device copy approach like in TFA you don't need to worry about that because those filesystems do not exist on the block device.

When copying files (e.g. with rsync) you need to watch out for those, yes. The best way to deal with other mounts is not to copy directly from / but instead non-recursively bind-mount / to another directory and then copy from that.


> Windows would panic, certainly (because so much drivers & other state is persisted & expected)

Nope.

The 7B stop error is because back in the WinNT days it made sense to disable the drivers for which there are no devices in the system. Because of memory, and because people can't write drivers.

Nowadays it's AHCI or NVMe, which are pretty vendor agnostic, so you have a 95% chance of a successful boot. And if your boot is successful then Windows is fine and yes, it can grab the remaining drivers from Windows Update.


I usually set up an initial distro and then copy over my /home. Later, I just need to install the debs I'm missing, but this has the benefit of not installing stuff I don't need anymore.

That said, I didn't know you could export NVMe over TCP like that, so still a nice read!


Same here.

The only problem with that approach is that it also copies over the .config and .cache folders, most of which are possibly not needed anymore. Or worse, they might contain configs that override better/newer parameters set by the new system.


~/oldhome/oldhome/oldhome ...


There's nothing special about this NVMe-TCP scenario. It can be done with any storage protocol that works over TCP. Set up a target and an initiator and use dd to dump the data. iSCSI or FCoE could be used also. NVMe-TCP is just newer tech!


This is very clever, although somewhat unnecessary, but still useful to know. The one thing I would call out is the author used WiFi because one of the laptops didn't have Ethernet. I've encountered this situation several times myself and I've found that nearly every modern laptop supports USB in both modes, so you can simply use a USB-C cable to connect the two laptops and get a pseudo-Ethernet device this way to interconnect them, no need to use WiFi. This is hundreds of times faster than WiFi.


Oh man I thought this was going to be about super fast networking by somehow connecting the PCI buses of two laptops. Instead it's the opposite, tunneling the NVMe protocol through regular Ethernet. Sort of like iSCSI back in the day.

Yeah there are much simpler approaches. I don't bother with container images but can install a new machine from an Ansible playbook in a few minutes. I do then have to copy user files (Borg restore) but that results in a clean new install.


> copy took seven hours because the new laptop didn't have an Ethernet port

Buy a USB to Ethernet adapter. It will come in handy in the future.


I love to see such progress, brought by nvme-over-tcp and systemd!

Not so many years ago doing something similar (exporting a block device over the network, and mounting it from another host) would have meant messing with iSCSI, which is a very cumbersome thing to do (and quite hard to master).


If you don't need routing, you could always use the vastly simpler ATA-over-Ethernet protocol.


>> Since the new laptop didn't have an Ethernet port, I had to rely only on WiFi, and it took about 7 and a half hours to copy the entire 512GB

That's because bs=40M and no status=progress.


Pretty cool.

Is there any way to mount a remote volume on Mac OS using NVMe-TCP?


If setting up a new laptop takes a week then you need to learn about Ansible.

I restore my working setup to a fresh install within minutes.


How can this work if the laptops have different hardware, and thus different requirements on device drivers?


Linux ships with all the drivers installed (for a fairly high value of "all") (typically)


With modern systems this rarely is a problem. Like Andy said, Linux comes with most drivers in the kernel. Windows just installs the appropriate drivers as soon as you boot it up with new hardware. Sometimes it takes a minute or two for it to boot up after the initial swap, but then it should be fine.


Nothing like using Linux pipes and TCP to clone a rooted device for low level data recovery… ;)


I just move my old drive to the new machine, wonder why that wasn't possible here.


> Conclussion

Is that a mix of conclusion and concussion that requires some ibuprofen ?


Today a new nvme is arriving and I need to do EXACTLY this. Thanks Mr poster :)


Pipe with zstd (or similar)!


Cool! Does it work in any file system?


Couldn’t they have just used netcat ?


Very nice read! I didn't know it existed. Love it.


I love your explanation of your disk install with cryptsetup etc. I do a manual install as well, as I always install with both mirrored and encrypted disks. I didn't find the combination of the two as an easy install option (I think not at all) on the Ubuntus I installed. Good to see a lot of this low level OS talk here.


Doing this without systemd or directly from the netboot environment would be interesting.


The user didn't do it from systemd actually. Instead they booted GRML, which is not very different from a netboot environment, and hand-exported the device themselves.


GRML is a debian live image, much different from a netboot environment. Look at https://ipxe.org/, boot firmware doing Fibre Channel over Ethernet, iSCSI, HTTP; booting from virtually everything but NVMe over TCP.


I use GRML almost daily, sometimes booting it over the network and loading it to RAM. We don't use iPXE specifically, but use "vanilla" PXE regularly, too, generally to install systems with Kickstart or xCAT, depending on the cluster we're working on.

I'll be using GRML again tonight to rescue some laptop drives which were retrieved from a fire, to see what can be salvaged, for example. First in forensic mode to see what's possible, then to use dd_rescue on a N95 box.


I've switched laptops like 5 times in the past 2 years. Last time was yesterday.

I just take my nvme drive out of the last one and I put it into the new one.

I don't even lose my browser session.

Edit: If you do this (and you should), configure stable network interface names (wlan0, eth0) so your network configs keep working on the new hardware.
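A minimal sketch of one way to pin them, matching on device type rather than MAC so it survives the hardware swap (file name is just an example):

  # /etc/systemd/network/10-wlan0.link
  [Match]
  Type=wlan
  [Link]
  Name=wlan0
Booting with net.ifnames=0 on the kernel command line is the blunter alternative that brings back eth0/wlan0 everywhere.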


This could also be titled: How to make one's life difficult 101 and/or I love experimenting.

I've bought a few hard disks in my lifetime. Many years ago one such disk came bundled with Acronis TrueImage. Back in the day you could just buy the darn thing; after a certain year they switched it to a subscription model. Anyhoo.. I got the "one-off" TrueImage, 'migrated' it from the physical mini-CD to a USB flash disk, and have been using it ever since.

I was curious about the solution as I recently bought a WinOS tablet and would like to clone my PC to it, but this looks like one too many hoop jumps for something that TrueImage can do in 2 hours (while watching a movie).


Next time, I'll be sure to do it the easy way by going back in time and getting a free copy of Acronis TrueImage in 1998.



