Helper data algorithms reliably extract secrets from physical unclonable functions. However, the necessary helper data can leak information. One state-of-the-art approach for assessing the remaining min-entropy handles either homogeneous bias or correlation, but not both. Another extends this to local bias, yet ignores correlation and is limited to short code lengths. This work presents a new approach for determining the min-entropy based on convolving histograms. It provides a tighter bound and a good approximation under arbitrary bias, more realistic correlation effects, and practically relevant code sizes. Experiments on real-world and synthetic data demonstrate the benefit of the new method over state-of-the-art approaches. This work also facilitates a better understanding of how error correction as a post-processing step impacts the min-entropy.
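The core idea of convolving histograms can be illustrated with a toy sketch: for independent PUF bits with per-bit biases, the distribution of the Hamming weight of the response is obtained by convolving each bit's two-bin histogram, and a min-entropy figure follows from the most likely outcome. This is only a simplified illustration under an independence assumption (the bias values and function names below are hypothetical, not the paper's actual data or method, which also covers correlation):

```python
import numpy as np

def weight_histogram(biases):
    """Distribution of the Hamming weight of independent bits,
    built by iteratively convolving each bit's histogram [1-p, p]."""
    hist = np.array([1.0])
    for p in biases:
        hist = np.convolve(hist, [1.0 - p, p])
    return hist

# Hypothetical per-bit biases (probability that each bit is 1).
biases = [0.4, 0.55, 0.6, 0.5]
hist = weight_histogram(biases)

# hist[k] = probability that exactly k of the bits equal 1;
# the min-entropy of the weight distribution follows from its peak.
min_entropy = -np.log2(hist.max())
```

Under arbitrary (local) bias and correlation, the per-bit histograms are no longer identical or independent, which is where the more careful treatment in this work comes in.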