The total size of a WAVE file is defined by a 32-bit integer in its header. Depending on whether a developer interpreted the spec as calling for a signed or unsigned integer, WAVE files have a maximum size of 2 or 4 GB. Digitizing at 24 bits/96 kHz produces roughly 2 GB per hour of stereo audio, so preservation master WAVE files effectively have a 2-hour duration limit. We have a number of audio carriers that exceed that duration, especially formats like open reel audio that have no defined duration limit.
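For reference, here is the arithmetic behind those figures sketched in Python (assuming 24-bit, 96 kHz, stereo PCM, and ignoring the small fixed overhead of the WAVE header itself):

```python
# Data rate for 24-bit / 96 kHz / stereo PCM audio.
BYTES_PER_SAMPLE = 3            # 24 bits
SAMPLE_RATE = 96_000            # samples per second, per channel
CHANNELS = 2                    # stereo

bytes_per_second = BYTES_PER_SAMPLE * SAMPLE_RATE * CHANNELS   # 576,000 B/s
bytes_per_hour = bytes_per_second * 3600                       # ~2.07 GB/hour

# Maximum duration before the 32-bit RIFF size field overflows,
# depending on whether it is read as signed or unsigned.
signed_limit_hours = (2**31 - 1) / bytes_per_second / 3600     # ~1.04 hours
unsigned_limit_hours = (2**32 - 1) / bytes_per_second / 3600   # ~2.07 hours

print(f"{bytes_per_hour / 1e9:.2f} GB per hour")
print(f"signed limit: {signed_limit_hours:.2f} h, "
      f"unsigned limit: {unsigned_limit_hours:.2f} h")
```

So the 2-hour ceiling holds only under the unsigned reading; software that treats the size field as signed can choke at roughly one hour of material.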
As far as I can tell, standard practice in the digitization community is to save a 2+ hour audio stream as two or more WAVE files with 30 seconds of overlap so the streams can be matched up. Here's an example served by the American Archive of Public Broadcasting (http://americanarchive.org/catalog/cpb-aacip_28-gf0ms3kc4t).
However, splitting up the audio stream causes downstream preservation problems, because I cannot find a simple way to encode the relationship between the preservation masters. The semantic relationships become even more complicated when edit masters are created that do not have the same one-to-one relationship to each other as the preservation masters. Finally, the split files give our users a disjointed performance of the original asset.
The easiest solution would be to use a format that can hold more than 4 GB of data. There have been attempts to extend WAVE, such as Wave64 (W64) and RF64, and there are other audio formats such as FLAC. IASA even recommended creating multiple mono WAV files and saving them in a single TAR (2.8.3). However, I haven't found documentation that any of these have been widely adopted. Is there a common strategy for creating 4 GB+ audio files?
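As an aside, these containers are easy to tell apart at the byte level: a classic WAVE file begins with ASCII "RIFF", RF64 replaces that with "RF64" (setting the 32-bit size field to 0xFFFFFFFF and moving the real sizes into a `ds64` chunk), Wave64 begins with a 16-byte GUID whose first four bytes are lowercase "riff", and FLAC begins with "fLaC". A minimal sniffing sketch in Python (the function name and the synthetic headers below are my own, for illustration only):

```python
import struct

def sniff_audio_container(header: bytes) -> str:
    """Guess the container format from the first 16 bytes of a file."""
    if header[:4] == b"RIFF" and header[8:12] == b"WAVE":
        return "WAVE (classic RIFF, 32-bit size field)"
    if header[:4] == b"RF64":
        # RF64 sets the RIFF size to 0xFFFFFFFF; real sizes live in a ds64 chunk.
        return "RF64 (64-bit sizes via ds64 chunk)"
    if header[:4] == b"riff":
        # Wave64 starts with a 16-byte GUID beginning with ASCII 'riff'.
        return "Wave64 (W64)"
    if header[:4] == b"fLaC":
        return "FLAC"
    return "unknown"

# Synthetic headers, not read from real files:
wav = b"RIFF" + struct.pack("<I", 36) + b"WAVE"
rf64 = b"RF64" + struct.pack("<I", 0xFFFFFFFF) + b"WAVE"
print(sniff_audio_container(wav))   # WAVE (classic RIFF, 32-bit size field)
print(sniff_audio_container(rf64))  # RF64 (64-bit sizes via ds64 chunk)
```

Because an RF64 file differs from classic WAVE only in those header fields, some tools read RF64 transparently while others reject it outright, which is part of why adoption is hard to gauge.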