The task of reconstructing binary images from the knowledge of their line sums (discrete X-rays) in a given finite number m of directions is ill-posed. Even small noise in the physical measurements can lead to dramatically different yet still unique solutions. The present paper addresses, in particular, the following questions. Does discrete tomography have the power of error correction? Can noise be compensated by taking more X-ray images, and, if so, what is the quantitative effect of taking one more X-ray? Our main theorem gives the first nontrivial unconditional (and best possible) stability result. In particular, we show that the X-ray data, taken in m directions, of two lattice sets of the same cardinality either coincide or differ in at least 2(m-1) line sums; that is, the Hamming distance between any two different such sets of m X-ray images is at least 2(m-1), and this bound is best possible. As a consequence, this result implies a Rényi-type theorem for denoising and shows that the noise-compensating effect of X-rays is linear in their number. Our theoretical results are complemented by determining the computational complexity of some underlying algorithmic tasks. In particular, we show that while there is always a certain inherent stability, the possibility of making (worst-case) efficient use of it is rather limited.
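To make the quantitative statement concrete, the following minimal Python sketch (not from the paper; the names xray and hamming are ours) computes the discrete X-rays of two different equal-cardinality lattice sets in m = 2 directions and the Hamming distance between the two data sets; the pair below attains the bound 2(m-1) = 2 exactly.

```python
# Minimal sketch, assuming finite lattice sets in Z^2 and lattice
# directions (a, b); identifiers here are illustrative, not the authors'.
from collections import Counter

def xray(points, direction):
    """Line sums of a finite lattice set along lines parallel to
    direction (a, b); each line is identified by the invariant b*x - a*y."""
    a, b = direction
    return Counter(b * x - a * y for (x, y) in points)

def hamming(data1, data2):
    """Number of lines on which the two line-sum data sets disagree
    (lines missing from a data set carry the line sum 0)."""
    lines = set(data1) | set(data2)
    return sum(1 for l in lines if data1.get(l, 0) != data2.get(l, 0))

# Two different 2-point sets of the same cardinality, m = 2 directions.
directions = [(1, 0), (0, 1)]          # horizontal and vertical lines
F1 = {(0, 0), (1, 1)}
F2 = {(0, 0), (2, 1)}
d = sum(hamming(xray(F1, u), xray(F2, u)) for u in directions)
print(d)  # prints 2, i.e., exactly the lower bound 2*(m-1) for m = 2
```

Here the horizontal line sums of F1 and F2 coincide, and their vertical line sums differ on exactly two lines, so the total Hamming distance is 2, matching the claimed (best possible) bound.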