Beating Heart
This page is a result of work with
Marc Dickstein, MD,
Columbia Presbyterian Hospital, New York,
who kindly provided recorded data from his experiments.
Click here to see graphs
Problem:
On the surface of a heart, 16 Sonometrics piezoelectric crystals are attached,
measuring the distance between each pair of them. The output of the system is a
sequence of 16x16 arrays. The element of the array in the ith row and
jth column represents the distance between the ith and jth
sensors; the diagonal elements of the array are therefore zero. Typically,
one such array is acquired 200 times per second.
The task was to calculate the Cartesian coordinates of each crystal sensor from
the distances between the crystals, using the sum of squared errors as the
minimization function. The resulting set of points is not uniquely
determined: it can be translated and rotated in any direction while keeping the
distances between the points constant.
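This ambiguity is easy to demonstrate numerically. A minimal NumPy sketch (the data here is hypothetical, not from the experiment): distances are invariant to any rigid motion, so coordinates recovered from the distance array alone cannot be unique.

```python
import numpy as np

def distance_matrix(p):
    # 16x16 array: element (i, j) is the distance between sensors i and j,
    # so the matrix is symmetric with a zero diagonal.
    diff = p[:, None, :] - p[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

rng = np.random.default_rng(0)
points = rng.normal(size=(16, 3))      # hypothetical sensor coordinates
d = distance_matrix(points)

# Apply an arbitrary rotation about the z-axis plus a translation:
# the distance matrix is unchanged.
a = 0.7
rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                [np.sin(a),  np.cos(a), 0.0],
                [0.0,        0.0,       1.0]])
moved = points @ rot.T + np.array([5.0, -2.0, 1.0])
assert np.allclose(distance_matrix(moved), d)
```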
The problem is contamination of the data:
high levels of noise make it impossible for any algorithm to calculate the
coordinates without prior filtering. Several types of noise are observed:
 White-like noise, appearing occasionally or constantly present.
 Spike noise - occasional peaks with extremely high values.
 Sudden shifts - the signal suddenly "jumps" to a higher value, stays there for a
period, and then "jumps" back by approximately the same amount.
 No change in signal, probably indicating that the contact between the
sensor and the device is broken.
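Of these, the occasional spikes are the most tractable. A running-median filter is one standard remedy; this is a generic sketch, not necessarily the filter actually used for this data:

```python
import numpy as np

def despike(signal, window=5, thresh=5.0):
    # Replace samples that deviate from a running median by more than
    # `thresh` with the median itself (window and threshold are ad hoc).
    sig = np.asarray(signal, dtype=float)
    pad = window // 2
    padded = np.pad(sig, pad, mode='edge')
    med = np.array([np.median(padded[i:i + window])
                    for i in range(sig.size)])
    out = sig.copy()
    spikes = np.abs(sig - med) > thresh
    out[spikes] = med[spikes]
    return out

# A flat signal with one extreme peak: the peak is removed, the rest kept.
sig = np.zeros(50)
sig[20] = 1000.0
clean = despike(sig)
assert np.allclose(clean, 0.0)
```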
While the spike noise can be easily filtered, the other types of noise are hard
or impossible to eliminate. In order to handle these cases, a grading function
is defined. If a discontinuity of the signal is observed, the signal gets a
low grade (from 0 to 1); otherwise, its grade is 1. The "bad grade" depends
on the relative value of the second derivative of the signal, and it
increases exponentially in time, as long as the signal is continuous. The
minimization algorithm takes this value into account while calculating the
new coordinate of a point.
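The exact grading formula is not given here, so the following is only a hypothetical sketch of the idea: a large relative second derivative is treated as a discontinuity and drops the grade below 1, and the grade then climbs exponentially back toward 1 while the signal stays continuous.

```python
import numpy as np

def grade_signal(signal, jump_thresh=10.0, recovery=1.05):
    # Hypothetical grading: compare each sample's discrete second
    # derivative with the typical level; a large relative value is
    # treated as a discontinuity and lowers the grade, otherwise the
    # grade grows exponentially back toward 1.
    sig = np.asarray(signal, dtype=float)
    grades = np.ones_like(sig)
    second = np.abs(np.diff(sig, 2))          # discrete second derivative
    scale = np.median(second) + 1e-9          # typical (continuous) level
    g = 1.0
    for t, d2 in enumerate(second, start=2):
        rel = d2 / scale
        if rel > jump_thresh:                 # discontinuity observed
            g = min(g, 1.0 / rel)             # low grade in (0, 1)
        else:
            g = min(1.0, g * recovery)        # exponential recovery
        grades[t] = g
    return grades

# A sudden shift lowers the grade, which then climbs back toward 1.
sig = np.concatenate([np.zeros(30), 10.0 * np.ones(30)])
grades = grade_signal(sig)
assert grades[31] < 1.0 and grades[-1] > grades[31]
```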
The Algorithm
Newton's method was used:
Taylor series of the error function f(x) is:
 f(x+h) = f(x) + h f'(x) + 1/2 h f''(x) h^{T}
 f(x+h) = f(x) + h g(x) + 1/2 h G(x) h^{T}
The first derivative of the Taylor series with respect to h is:
 df(x+h)/dh = g(x) + G(x) h
The solution of the following equation tells how much x should be changed so that
the error function moves toward its minimum:
 df(x+h)/dh = 0
 => G(x) h = -g(x)
The increment h, obtained by solving the system of linear equations G(x) h
= -g(x), will move the error function toward its minimum if the matrix
G(x) is positive definite. Unfortunately, for this specific problem
that is NOT the case. However, using Gaussian elimination
usually gives good results! For instance, if the fifth column of the matrix
G has all zero elements, then the fifth element of vector h will
be zero. The algorithm uses this fact, and if no progress can be made, the
Levenberg-Marquardt method is applied. This method biases the solution
toward the negative gradient, and h is obtained from the system
of equations:
 (G(x) + u I) h = -g(x)
where u is a positive number large enough that the matrix (G(x) +
u I) is positive definite. It is interesting to note that the
Levenberg-Marquardt algorithm applied alone converges too slowly for this application.
The described method is applied separately to each
coordinate (x, y, z) in each iteration.
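A sketch of one such damped Newton step in Python/NumPy (the helper name is hypothetical, and using a Cholesky factorization as the positive-definiteness test is my choice, not necessarily the original implementation's):

```python
import numpy as np

def newton_lm_step(g, G, mu0=1e-3, grow=10.0, max_tries=40):
    # Solve G h = -g; if G is not positive definite, retry with
    # Levenberg-Marquardt damping (G + u I) h = -g, increasing u.
    n = g.size
    mu = 0.0
    for _ in range(max_tries):
        try:
            # Cholesky succeeds exactly when the matrix is positive
            # definite, so it doubles as the test.
            np.linalg.cholesky(G + mu * np.eye(n))
            return np.linalg.solve(G + mu * np.eye(n), -g)
        except np.linalg.LinAlgError:
            mu = mu0 if mu == 0.0 else mu * grow
    raise RuntimeError("failed to make G + u I positive definite")

# For a quadratic f(x) = |x - a|^2, one Newton step lands on the minimum.
a = np.array([1.0, -2.0, 3.0])
x = np.zeros(3)
g = 2.0 * (x - a)          # gradient
G = 2.0 * np.eye(3)        # Hessian (already positive definite)
x_new = x + newton_lm_step(g, G)
assert np.allclose(x_new, a)
```

When G is indefinite the loop simply keeps multiplying u by `grow` until the Cholesky test passes, which mirrors the "large enough u" condition above.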
The Stopping Condition
The relative change of the point coordinates is used as the measure of convergence
speed:
 x_new = x + h,
 newChange = h / x
This measure is averaged and summed over all three coordinates:
 change = ni oldChange + (1 - ni) mean(abs( h / x^{*} ))
 x^{*} = max( abs(x), typical_x )
where ni is a smoothing factor between 0 and 1. The value x^{*} is used in order
to eliminate bad scaling when the value of x is small.
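In code, the smoothed convergence measure above might look like this (function and parameter names are hypothetical):

```python
import numpy as np

def convergence_change(old_change, h, x, ni=0.9, typical_x=1.0):
    # change = ni * oldChange + (1 - ni) * mean(|h / x*|),
    # with x* = max(|x|, typical_x) guarding against tiny coordinates.
    x_star = np.maximum(np.abs(x), typical_x)
    return ni * old_change + (1.0 - ni) * np.mean(np.abs(h / x_star))

# Example: x = [2, 0.1] gives x* = [2, 1], so the tiny coordinate does
# not blow up the relative change.
c = convergence_change(0.0, h=np.array([1.0, 1.0]), x=np.array([2.0, 0.1]))
assert abs(c - 0.075) < 1e-12   # 0.1 * mean([0.5, 1.0])
```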
Appendix - error function
The signed squared error is defined for the distance between two points with coordinates
(x_{i},y_{i},z_{i}) and
(x_{j},y_{j},z_{j}), and the measured distance
d_{i,j}:

 e_{i,j} = (x_{i}-x_{j})^{2} + (y_{i}-y_{j})^{2} + (z_{i}-z_{j})^{2} - d_{i,j}^{2}
The error function is defined as the sum of squared errors, each multiplied by a constant
theta_{i,j} that determines how "good" or reliable the distance is:
 F = alpha sum_i sum_j theta_{i,j} e_{i,j}^{2}
( sum_i represents the sum for i=1 to the number of points.)
The constant alpha normalizes the function change due to the constants
theta_{i,j}:
 alpha = ( sum_i sum_j 1 ) / ( sum_i sum_j theta_{i,j} )
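The error function and its normalization translate directly to NumPy (a sketch with my own names; `p` holds the point coordinates row-wise):

```python
import numpy as np

def error_function(p, d, theta):
    # e_ij = (x_i-x_j)^2 + (y_i-y_j)^2 + (z_i-z_j)^2 - d_ij^2
    diff = p[:, None, :] - p[None, :, :]
    e = (diff ** 2).sum(axis=-1) - d ** 2
    # alpha = (sum of ones) / (sum of the reliability constants)
    alpha = theta.size / theta.sum()
    return alpha * (theta * e ** 2).sum()

# With exact distances the error is zero; moving a point makes it positive.
rng = np.random.default_rng(0)
p = rng.normal(size=(16, 3))
diff = p[:, None, :] - p[None, :, :]
d = np.sqrt((diff ** 2).sum(axis=-1))
theta = np.ones((16, 16))
assert abs(error_function(p, d, theta)) < 1e-12
p2 = p.copy()
p2[0] += 0.5
assert error_function(p2, d, theta) > 0.0
```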
The first derivative of the error function:
 g_{k} = dF/dx_{k} = alpha sum_i sum_j theta_{i,j} d(e_{i,j}^{2})/dx_{k}
 = alpha { sum_i[ theta_{i,k} d(e_{i,k}^{2})/dx_{k} ] + sum_j[ theta_{k,j} d(e_{k,j}^{2})/dx_{k} ] }
 = alpha sum_j 4 (x_{k}-x_{j}) (theta_{k,j} e_{k,j} + theta_{j,k} e_{j,k})
The second derivative of the error function:
 G_{k,l} = d^{2}F/dx_{k}dx_{l} = d/dx_{l} [ dF/dx_{k} ]
 = alpha sum_j d/dx_{l}[ 4 (x_{k}-x_{j}) (theta_{k,j} e_{k,j} + theta_{j,k} e_{j,k}) ]
For k <> l, only the j = l term depends on x_{l}:
 d^{2}F/dx_{k}dx_{l} = alpha d/dx_{l}[ 4 (x_{k}-x_{l}) (theta_{k,l} e_{k,l} + theta_{l,k} e_{l,k}) ]
 = alpha { -4 ( theta_{k,l} e_{k,l} + theta_{l,k} e_{l,k} ) - 8 (x_{k}-x_{l})^{2} ( theta_{k,l} + theta_{l,k} ) }
For k == l:
 d^{2}F/dx_{k}dx_{k} = alpha 4 sum_j[ (theta_{k,j} e_{k,j} + theta_{j,k} e_{j,k}) + (x_{k}-x_{j}) (theta_{k,j} de_{k,j}/dx_{k} + theta_{j,k} de_{j,k}/dx_{k}) ]
 = alpha 4 sum_j[ (theta_{k,j} e_{k,j} + theta_{j,k} e_{j,k}) + 2 (theta_{k,j} + theta_{j,k}) (x_{k}-x_{j})^{2} ]
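The gradient and Hessian with respect to the x-coordinates can be written out in vectorized form and sanity-checked (a sketch with hypothetical names; the reliability constants need not be symmetric):

```python
import numpy as np

def grad_hess_x(p, d, theta):
    # Gradient g and Hessian G of F with respect to the x-coordinates
    # only, following the formulas above (the method treats each of
    # x, y, z separately).
    diff = p[:, None, :] - p[None, :, :]
    e = (diff ** 2).sum(axis=-1) - d ** 2
    alpha = theta.size / theta.sum()
    dx = p[:, 0][:, None] - p[:, 0][None, :]     # x_k - x_j
    w = theta * e + (theta * e).T                # theta_kj e_kj + theta_jk e_jk
    s = theta + theta.T                          # theta_kj + theta_jk
    g = alpha * 4.0 * (dx * w).sum(axis=1)
    G = alpha * (-4.0 * w - 8.0 * dx ** 2 * s)   # off-diagonal, k <> l
    np.fill_diagonal(G, alpha * 4.0 * (w + 2.0 * s * dx ** 2).sum(axis=1))
    return g, G

# Two-point example: with measured d = 2 but actual separation 1,
# g = [24, -24] and G = [[-8, 8], [8, -8]]. Note that this G is not
# positive definite, matching the remark above that G(x) need not be.
p2 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
d2 = np.array([[0.0, 2.0], [2.0, 0.0]])
th = np.ones((2, 2))
g2, G2 = grad_hess_x(p2, d2, th)
assert np.allclose(g2, [24.0, -24.0])
assert np.allclose(G2, [[-8.0, 8.0], [8.0, -8.0]])
```

The same function can be checked against finite differences of F, which confirms both the off-diagonal and diagonal formulas term by term.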
Zoran Lazarevic, Sun Feb 9, 1997
 http://www.cs.columbia.edu/~laza
