I am assuming that each "item" can logically be thought of as a unit, and that the large number of files in some of them is merely an artefact of your digitization process.
For such a situation, I would consider using Zip without compression as a container format.
Doing so will allow you to add each item to your repository as a single archive file, which can be retrieved and extracted when the need arises.
Not using compression (in effect, using Zip only as a logical container) means that even if the Zip file headers and central directory somehow get corrupted, standard recovery techniques used on damaged storage media should be able to pick out the individual files stored within the archive with little difficulty. The fact that the archive (unlike storage media) has no fragmentation should also help if such recovery ever becomes necessary.
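As a minimal sketch of what this looks like in practice, here is how you might pack one item into a stored-only (uncompressed) archive using Python's standard-library zipfile module. The function name and directory layout are hypothetical; adjust them to your repository's structure:

    import zipfile
    from pathlib import Path

    def pack_item(item_dir: str, archive_path: str) -> None:
        """Pack one digitized item into an uncompressed Zip archive."""
        root = Path(item_dir)
        # ZIP_STORED writes each member verbatim, so Zip acts purely as
        # a logical container and the member data stays byte-recoverable.
        with zipfile.ZipFile(archive_path, "w",
                             compression=zipfile.ZIP_STORED) as zf:
            for path in sorted(root.rglob("*")):
                if path.is_file():
                    # Store paths relative to the item directory so the
                    # archive extracts cleanly anywhere.
                    zf.write(path, arcname=str(path.relative_to(root)))

    pack_item("items/item-0001", "archives/item-0001.zip")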
Zip is in wide use in many areas, including as a container format in ISO 26300 (OpenDocument) and Microsoft Office Open XML, among many others. Support on modern systems, including Windows, OS X and Linux, is essentially ubiquitous. Both open-source and proprietary implementations exist, and the format itself is publicly documented.
Zip files offer basic fixity verification through the CRC-32 checksum stored for each member, but no real recovery mechanism.
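Those stored checksums are enough for a simple integrity check on ingest or audit. A sketch, again using Python's zipfile (the archive path is hypothetical): testzip() reads every member and compares it against its recorded CRC-32, returning the name of the first failing member or None if all pass.

    import zipfile

    def verify_item(archive_path: str) -> bool:
        """Check every member of the archive against its stored CRC-32."""
        with zipfile.ZipFile(archive_path) as zf:
            # Returns the first member whose data fails its CRC check,
            # or None if the whole archive verifies.
            bad = zf.testzip()
        if bad is not None:
            print(f"CRC mismatch in member: {bad}")
            return False
        return True

    verify_item("archives/item-0001.zip")

Note that this only detects corruption; repairing it would require a second copy or external error-correction data.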
Standard Zip files have a few limitations that you might run into in a situation like the one you describe. Perhaps most importantly, the maximum size of a Zip archive (and of any single file within it) is capped at 4 GiB. The ZIP64 extension resolves that deficiency by raising the size fields from 32 to 64 bits, but according to Wikipedia, support for ZIP64 is not as ubiquitous. If you are able to set software requirements for the systems that will handle the Zip files, standardizing on ZIP64 might be a reasonable option even before the larger file size capability is actually needed.
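If you do standardize on ZIP64, Python's zipfile can again serve as a sketch (the file names below are hypothetical). By default the writer only emits ZIP64 structures when a size field would overflow 32 bits; passing force_zip64=True when writing a member emits them unconditionally, which is one way to make every archive ZIP64 from the start:

    import shutil
    import zipfile

    # allowZip64=True (the default in current Python) permits ZIP64
    # structures whenever they are needed.
    with zipfile.ZipFile("archives/item-0002.zip", "w",
                         compression=zipfile.ZIP_STORED,
                         allowZip64=True) as zf:
        # force_zip64=True writes ZIP64 extensions for this member even
        # though its size fits in 32 bits.
        with open("items/item-0002/scan-0001.tiff", "rb") as src, \
             zf.open("scan-0001.tiff", mode="w", force_zip64=True) as dst:
            shutil.copyfileobj(src, dst)

Whichever tooling you choose, it is worth verifying that every reader in your workflow actually handles ZIP64 before committing to it.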