For the last couple of years there have been issues with writing large numbers of files to USB drives in Linux.
My specific use case has always been writing out the files from Windows install DVD ISOs onto a bootable thumb drive (easier than keeping 12 different thumb drives around the shop).
The steps are pretty easy: mount the ISO file, delete the old files from the USB drive, change directory, start copying and… the system becomes completely unusable.
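For context, the whole process is just something like this. The ISO path and /dev/sdb1 are examples, and /mnt/iso and /mnt/usb are mount points I made up for the sketch:

```sh
# Example paths and device names -- adjust for your own ISO and drive
sudo mkdir -p /mnt/iso /mnt/usb
sudo mount -o loop,ro /path/to/windows.iso /mnt/iso
sudo mount /dev/sdb1 /mnt/usb

# Clear out the old install files on the thumb drive
sudo rm -rf /mnt/usb/*

# Copy everything over -- this is the step that used to freeze the machine
sudo cp -r /mnt/iso/* /mnt/usb/
sync
```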
And it’s not a hardware problem. Sure, my machine is from late 2007, but the parts are all up to modern speed standards because the high-end Xeons of the era were so incredibly over the top.
I bounced around a few places online and found lots of recommendations: running cp with nice, lowering the dirty_bytes threshold, etc., as noted here. Nothing ever helped.
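If you want to try those suggestions anyway, they look roughly like this. The dirty_bytes values are just example numbers (about 16 MB and 8 MB), not something from the original advice:

```sh
# Run the copy at the lowest CPU priority (didn't help)
sudo nice -n 19 cp -r /mnt/iso/* /mnt/usb/

# Shrink the dirty page cache thresholds so writeback kicks in sooner
# (example values -- didn't help either)
sudo sysctl -w vm.dirty_bytes=16777216
sudo sysctl -w vm.dirty_background_bytes=8388608
```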
Turns out it’s because newer kernels limit writes to these USB devices to 240-sector (120 KB) chunks, and shitty USB drives don’t really handle that, causing an I/O hang that stops the system cold until the writes finish.
So the workaround is a udev rule that applies to all thumb drives.
Create a new file,
nano -w /usr/lib/udev/rules.d/81-udisks_maxsect.rules
and add the following rule to it, changing the maximum write size from 240 sectors to 32k:
SUBSYSTEMS=="scsi", ATTR{max_sectors}=="240", ATTR{max_sectors}="32768"
Save the file and reboot. No more hangs, even using the worst USB drives. Lovely.
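If you'd rather not reboot, reloading udev and re-plugging the drive should also work; the sdb name below is just an example, so substitute whatever your thumb drive enumerates as:

```sh
# Reload udev rules and re-trigger them instead of rebooting
sudo udevadm control --reload-rules
sudo udevadm trigger

# After re-plugging the thumb drive, confirm the new limit took effect
cat /sys/block/sdb/device/max_sectors
```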