Recently, file systems like ZFS and, to some extent, Btrfs have been making inroads onto quite affordable hardware. Unlike most file systems (Windows' NTFS, Linux's ext2/3/4, Mac OS X's HFS+, and so on), they store a checksum for each data block and verify it on every read, which to a very large degree guarantees that any successful read has not suffered from bit rot. In a redundant storage configuration, the checksums also serve as an additional safeguard: when parity is used to reconstruct missing data, the result can be verified to match what was originally stored rather than something else.
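To make the verify-on-read behaviour concrete, here is a minimal sketch in Python of a toy block store that records a checksum alongside every block and refuses to return data that no longer matches it. The class and its layout are hypothetical; real file systems are far more sophisticated about where and how the checksums are kept.

```python
import hashlib

class ChecksummedStore:
    """Toy block store: every block is written together with a checksum,
    and every read verifies that checksum before data is returned."""

    def __init__(self):
        self.blocks = {}     # block number -> data bytes
        self.checksums = {}  # block number -> checksum recorded at write time

    def write_block(self, n, data):
        self.blocks[n] = data
        self.checksums[n] = hashlib.sha256(data).digest()

    def read_block(self, n):
        data = self.blocks[n]
        # A mismatch here means the stored data no longer matches what was
        # written (bit rot, a failing disk, a bad cable): the read fails
        # loudly instead of silently handing back corrupt data.
        if hashlib.sha256(data).digest() != self.checksums[n]:
            raise IOError(f"checksum mismatch in block {n}")
        return data
```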
By "scrubbing" the data, each block of data is checked against its corresponding checksum and possibly redundant data as well, ensuring that all data is accurate and readable, and allowing for reconstruction of data from redundant blocks if a problem is detected. They cannot, however, protect from changes that stem through the operating system's "normal" channels (using the operating system's documented interfaces to open a file and write garbage to it is not protected against, for example).
Given that separate fixity data is likely to be necessary anyway to ensure the integrity of the full archive, can file system-based checksums like these add value in the context of digital preservation?
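(For concreteness, the kind of separate fixity data meant here is generated above the file system, for example as a sha256sum-style manifest; re-running the same walk later and comparing digests detects any content change, including ones made through the operating system's normal write path, which the in-file-system checksums deliberately accept. A minimal, hypothetical sketch:)

```python
import hashlib
from pathlib import Path

def write_fixity_manifest(root, manifest="manifest-sha256.txt"):
    """Record a SHA-256 digest for every file under `root`, independently
    of any checksumming the file system does internally."""
    with open(manifest, "w", encoding="utf-8") as out:
        for path in sorted(p for p in Path(root).rglob("*") if p.is_file()):
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            out.write(f"{digest}  {path}\n")
```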