Question

Sometimes my external drives get uncleanly detached, and at the next connection a disk check is necessary.

What surprises me now is to see a slow disk check run on a journaled filesystem. According to the manual, a full check requires the -f option, but that doesn't seem to match what happened here. Isn't the point of journaling that you never need a full disk check?

In detail, macOS started fsck automatically with -y:

$ ps auxw|grep fsck
root              3792   1.1  2.3  6429668 389400   ??  U     2:09pm   0:05.10 /System/Library/Filesystems/hfs.fs/Contents/Resources/./fsck_hfs -y /dev/disk3s2

Then I interrupted the process and re-ran it by hand without -y for greater control, as I usually do:

$ sudo fsck_hfs /dev/disk3s2
** /dev/rdisk3s2
   Executing fsck_hfs (version hfs-407.50.6).
** Checking Journaled HFS Plus volume.
   The volume name is MyPass4T-TM2
** Checking extents overflow file.
** Checking catalog file.
** Checking multi-linked files.

FWIW, this is on a journaled HFS+ sparsebundle that I use for Time Machine, hosted on an external ExFAT filesystem.


Solution

Great question.

Apple journals filesystem metadata changes, not the data itself, so when a full filesystem check is initiated, the catalog file, extents overflow file, superblocks, and allocation counts are all checked, cross-checked, and optionally rebuilt or refreshed. The journal saves no time when checking a volume; it only saves time if you lose power or connection while a filesystem metadata modification is being made. So you might run fsck less often, but journaling doesn't shorten the run time of fsck.
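That is also what the -f in the manual is about: if a journaled volume is marked clean, fsck_hfs normally trusts the journal and declines to do a full check unless you force it; yours was dirty after the unclean detach, so it checked anyway. A minimal sketch, reusing the device from your output:

$ sudo fsck_hfs -f /dev/disk3s2     # force a full check even if the volume is marked clean
$ sudo fsck_hfs -fy /dev/disk3s2    # same, answering yes to every repair prompt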

Every single hard link is referenced and evaluated on both ends: the link entry itself and the underlying file it points to.
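You can see those reference counts yourself with stat (the format flags below are the macOS/BSD variant, and the path is just a hypothetical file inside a backup set); %l prints the hard-link count, and anything above 1 is another reference fsck has to reconcile:

$ stat -f "%l links: %N" /Volumes/MyPass4T-TM2/Backups.backupdb/SomeMac/Latest/Users/me/somefile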

This process is highly IO-intensive, so on spinning disks it could take weeks to check a 20 TB volume, especially on a RAID where multiple disks are involved.

One nice benchmark is to check how many files are present on the volume (Get Info on the volume in the Finder) and watch Activity Monitor to see what the read and write IOPS and data rate are. Then you can compare that to what the connection can carry (USB-C, USB 3.0, and Thunderbolt are all faster than any realistic storage you might attach), so you know progress is limited by the drive and not by the CPU or the connection.
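If you would rather watch from a terminal than Activity Monitor, iostat reports the same per-disk data rate (disk3 here matches the device in your ps output; adjust to your setup):

$ iostat -d -w 1 disk3    # KB per transfer, transfers per second, and MB per second, refreshed every second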

In your case, the sparsebundle adds negligible overhead, but Time Machine is the real crippler. It uses hard links, so every single file may be referenced dozens or even hundreds of times. I have some Time Machine volumes that will never finish an fsck, in the sense that I won't let them run for the weeks needed.

Instead, I treat them as read-only: I set them on a shelf and only ever pull files off them, or wipe them once I know I don't need anything back from them.
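When I do need something off a shelved set, I attach it explicitly read-only so nothing can dirty the filesystem again. A sketch, with a hypothetical path to the bundle on its host drive:

$ hdiutil attach -readonly /Volumes/HostExFAT/MyPass4T-TM2.sparsebundle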

Some people perform heroic surgery on Time Machine data to revive it. I prefer to buy another $100 drive and start over every 6 to 18 months, adding fresh destinations and aging out the older ones as they get close to filling.

Licensed under: CC-BY-SA with attribution
Not affiliated with apple.stackexchange