EECS Publication
On convergence of the EM-ML algorithm for PET reconstruction
Jens Gregor, Soren P. Olesen, and Michael G. Thomason
The EM-ML (expectation-maximization, maximum-likelihood) algorithm for PET reconstruction is an iterative method. Sequence convergence to a fixed point that satisfies the Karush-Kuhn-Tucker conditions for optimality has previously been established [1, 2, 3]. This correspondence first gives an alternative proof of sequence convergence and optimality based on direct expansion of certain Kullback discrimination functions and a standard result in optimization theory. Using results on series convergence, we then show that several sequences converge to 0 faster than 1/k as k -> infinity, i.e., the sequences are o(k^-1).
Published 2007-10-31 04:00:00 as ut-cs-07-605 (ID:127)
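For readers unfamiliar with the iteration the abstract refers to, the classical EM-ML (MLEM) update for emission tomography is x_j^{k+1} = (x_j^k / s_j) * sum_i A_ij y_i / (A x^k)_i, where A is the system matrix, y the measured counts, and s_j = sum_i A_ij. The following is a minimal NumPy sketch of that fixed-point iteration; the function name, arguments, and iteration count are illustrative, not taken from the paper.

```python
import numpy as np

def em_ml(A, y, n_iter=200):
    """Sketch of the EM-ML (MLEM) fixed-point iteration for PET:
        x_{k+1, j} = (x_{k, j} / s_j) * sum_i A_ij * y_i / (A x_k)_i,
    with s_j = sum_i A_ij. Starting from a strictly positive image,
    every iterate stays nonnegative and the Poisson log-likelihood
    is nondecreasing; the names here are illustrative only."""
    A = np.asarray(A, dtype=float)
    y = np.asarray(y, dtype=float)
    s = A.sum(axis=0)                # per-voxel sensitivity (column sums)
    x = np.ones(A.shape[1])          # strictly positive initial image
    for _ in range(n_iter):
        ratio = y / (A @ x)          # measured counts / forward projection
        x = (x / s) * (A.T @ ratio)  # multiplicative EM-ML update
    return x
```

With consistent, noise-free data (y = A x for some nonnegative x), the iterates drive the forward projection A x^k toward y, which is the behavior whose convergence rate the correspondence analyzes.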