I've recently had some fun with a BTRFS filesystem corruption (described in a gist here). I ended up with the recovered data in a linear LV formed by 2 non-redundant PVs.
After all needed checks were done and a fresh backup taken, it was time to eliminate the failed btrfs filesystem and get the data back onto redundant storage by reusing its RAID5 array. In my mind it would simply be a question of pvmove'ing the data, which I could do over time, no rush, in partial/incremental data moves.
Contrary to what I expected, however, it turned out I couldn't just run pvmove for a while, cancel it, and have the already-moved data stay on the new volume; after stopping pvmove, the LVM layout was back to exactly what it was at the start.
It turns out pvmove's unit of data movement is the segment, that is, a contiguous allocation of extents, not the individual extents I expected. For the data movement to be committed, the whole segment has to finish moving, which in this case would mean moving the entire PV / disk partition, as it's fully allocated to the same LV.
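For reference, you can inspect the segment layout before doing anything; in my case the source PV showed up as one big segment belonging to a single LV. The device path and VG/LV names below are placeholders for whatever your setup uses:

    # Show how the source PV's extents are segmented and which LV they map to
    pvs -v --segments /dev/sdb1

    # The same view from the LV side
    lvs --segments vg0/recovered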
As pvmove also supports specifying which areas of the PV to move, it seemed reasonable to script my way to the result I wanted, so I made a small script that sequentially pvmoves a disk in "small" [parameterized] batches of extents at a time. The script can be interrupted, and the pvmoves that already finished stay committed, so you can come back another time and continue from (mostly) where you stopped. The sketch below shows the basic idea.
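A minimal sketch of the approach (not the actual gist script) might look like this, assuming the source PV is fully allocated as in my case; the device names and chunk size are placeholders:

    #!/bin/bash
    # Sketch only: move SRC_PV onto DST_PV in chunks of CHUNK physical extents,
    # one pvmove invocation per chunk. Adjust the placeholders to your setup.
    SRC_PV=/dev/sdb1   # PV to evacuate (placeholder)
    DST_PV=/dev/md0p1  # PV on the RAID5 array (placeholder)
    CHUNK=1000         # physical extents to move per pvmove run

    # Total number of physical extents on the source PV
    TOTAL=$(pvs --noheadings -o pv_pe_count "$SRC_PV" | tr -d ' ')

    START=0
    while [ "$START" -lt "$TOTAL" ]; do
        END=$((START + CHUNK - 1))
        [ "$END" -ge "$TOTAL" ] && END=$((TOTAL - 1))
        echo "Moving extents $START-$END of $((TOTAL - 1))"
        # Each pvmove call handles only this extent range; once it returns
        # successfully, that range stays on the destination PV.
        pvmove "$SRC_PV:$START-$END" "$DST_PV" || exit 1
        START=$((END + 1))
    done

If the loop is interrupted, only the chunk currently in flight is rolled back; re-running it simply redoes (at most) one chunk's worth of work.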
Please keep in mind that this was made and tested for one very specific case - a PV allocated sequentially to a single LV. I didn't really try to cover other use cases, so it could move data it shouldn't, and while I don't expect pvmove itself to cause data loss, if the script somehow gets the start extent or size wrong I can see it leaving the LV / filesystem fragmented. As usual, having a backup ready is recommended.
Without further ado, my hackish incrementa_pvmove.sh script is available here:
https://gist.github.com/grmontesino/8ec29cd16cf3d893dde808f35f079304