Beating Heart Site Map:
Wire-mesh animation | Java source |
Data acquisition | Graphs | Matlab source |
B-spline modelling: [ PowerPoint | HTML ] | Matlab source |
B-spline animation | Java source |

This page is a result of work with Marc Dickstein, MD, Columbia Presbyterian Hospital, New York, who kindly provided us with recorded data from his experiments.

3D Heart Model Obtained From Sonometrics Sensors

Click here to see graphs


Sixteen Sonometrics piezoelectric crystals are attached to the surface of the heart, measuring the distance between each pair of them. The output of the system is a sequence of 16x16 arrays. The element of the array in the i-th row and j-th column represents the distance between the i-th and j-th sensors; the diagonal elements of the array are thus equal to zero. Typically, one such array is acquired 200 times per second.
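
For concreteness, the acquisition format described above can be sketched in a few lines (Python with NumPy here rather than the Matlab used for the actual processing; the function and parameter names are illustrative):

```python
import numpy as np

N_SENSORS = 16        # crystals attached to the heart surface
SAMPLE_RATE_HZ = 200  # one full 16x16 array acquired per sample

def make_frame(points):
    """Build the 16x16 array of pairwise distances for one time
    instant, given (hypothetical) sensor coordinates of shape (16, 3).
    The result is symmetric with a zero diagonal, like the arrays
    produced by the Sonometrics system."""
    diff = points[:, None, :] - points[None, :, :]  # pairwise coordinate differences
    return np.sqrt((diff ** 2).sum(axis=-1))        # Euclidean distances
```

Each element frame[i, j] of the returned array is the distance between sensors i and j, so the array is symmetric and its diagonal is zero.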

The task was to calculate the Cartesian coordinates of each crystal sensor from the distances between the crystals, using the sum of squared errors as the minimization function. The result is a set of points that is not uniquely determined: it can be translated and rotated in any direction while the distances between the points remain constant. The problem is contamination of the data: high levels of noise make it impossible for any algorithm to calculate the coordinates without prior filtering. Several types of noise were observed:

While spike noise can be easily filtered out, other types of noise are hard or impossible to eliminate. To handle these cases, a grading function is defined. If a discontinuity of the signal is observed, the signal gets a low grade (from 0 to 1); otherwise, its grade is 1. The "bad" grade depends on the relative value of the second derivative of the signal, and it increases exponentially in time for as long as the signal remains continuous. The minimization algorithm takes this value into account while calculating the new coordinates of a point.
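
This grading scheme can be sketched as follows. The page does not give the exact formula, so the threshold, the recovery rate, and the use of the second difference as the discontinuity detector are all assumptions:

```python
import numpy as np

def grade_signal(d, jump_thresh=0.2, recovery=0.05):
    """Assign a reliability grade in (0, 1] to each sample of one
    distance signal d (the parameters here are hypothetical).

    A large relative second difference marks a discontinuity and
    drops the grade; while the signal stays continuous, the grade
    recovers exponentially toward 1."""
    d = np.asarray(d, dtype=float)
    grades = np.ones_like(d)
    g = 1.0
    for t in range(2, len(d)):
        d2 = d[t] - 2.0 * d[t - 1] + d[t - 2]     # second difference
        rel = abs(d2) / max(abs(d[t - 1]), 1e-9)  # relative curvature
        if rel > jump_thresh:                     # discontinuity detected
            g = max(1e-3, 1.0 - rel)              # low grade, in (0, 1)
            g = min(g, 0.999)
        else:
            g = min(1.0, g * (1.0 + recovery))    # exponential recovery
        grades[t] = g
    return grades
```

A smooth signal keeps grade 1 throughout, while a spike drops the grade, which then climbs back as the signal stays continuous.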

The Algorithm

Newton's method was used:
The second-order Taylor expansion of the error function f(x) is:

f(x+h) ≈ f(x) + g(x)T h + 1/2 hT G(x) h

where g(x) is the gradient and G(x) is the Hessian of f. The first derivative of this expansion with respect to h is:
df(x+h)/dh = g(x) + G(x) h
Setting this derivative to zero gives the equation whose solution tells how much x should be changed to move the error function toward its minimum:
df(x+h)/dh = 0
=> G(x) h = -g(x)
The increment h, obtained by solving the system of linear equations G(x) h = -g(x), moves the error function toward its minimum if the matrix G(x) is positive definite. Unfortunately, for this specific problem it is NOT the case. However, using Gaussian elimination usually gives good results anyway! For instance, if the fifth column of the matrix G has all zero elements, then the fifth element of the vector h will be zero. The algorithm uses this fact, and when no progress can be made, the Levenberg-Marquardt method is applied. This method biases the solution toward the negative gradient, and h is obtained from the system of equations:
(G(x) + u I) h = -g(x)
where u is a positive number large enough that the matrix (G(x) + u I) is positive definite. It is interesting to note that the Levenberg-Marquardt algorithm applied alone converges too slowly for this application.
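
The combined step can be sketched as below (a minimal Python/NumPy sketch, not the original Matlab implementation; the descent test and the damping schedule for u are assumptions). The plain Newton system is tried first via Gaussian elimination, and the damped Levenberg-Marquardt system is used as the fallback:

```python
import numpy as np

def newton_lm_step(g, G, u0=1e-3):
    """One minimization step: try the Newton system G h = -g; if it
    fails or gives no descent direction, solve the Levenberg-Marquardt
    system (G + u I) h = -g, growing u until G + u I is positive
    definite (illustrative sketch only)."""
    n = len(g)
    try:
        h = np.linalg.solve(G, -g)        # Gaussian elimination
        if g @ h < 0:                     # descent direction: accept
            return h
    except np.linalg.LinAlgError:
        pass                              # singular Hessian: fall through
    u = u0
    while True:
        try:
            # Cholesky succeeds only if G + u I is positive definite
            L = np.linalg.cholesky(G + u * np.eye(n))
            y = np.linalg.solve(L, -g)
            return np.linalg.solve(L.T, y)
        except np.linalg.LinAlgError:
            u *= 10.0                     # increase damping and retry
```

For a positive definite G this reduces to the pure Newton step; for an indefinite G the damping term u I biases the step toward the negative gradient, as described above.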

The described method is applied separately to each coordinate (x, y, z) in each iteration.

The Stopping Condition

The relative change of the point coordinates is used as the measure of convergence speed:
x_new = x + h,  newChange = h / x
This measure is averaged over all three coordinates and smoothed over iterations:
change = nu * oldChange + (1 - nu) * mean(abs( h / x* ))
x* = max( abs(x), typical_x )
The value x* is used instead of x to avoid bad scaling when the value of x is small.
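
This stopping measure can be written directly (a Python/NumPy sketch; the values of nu and typical_x are assumptions, since the page does not state them):

```python
import numpy as np

def convergence_change(old_change, h, x, nu=0.9, typical_x=1.0):
    """Smoothed relative change of the coordinates: the mean of
    |h / x*| blended with the previous value via the factor nu."""
    x_star = np.maximum(np.abs(x), typical_x)  # guard against tiny x
    rel = np.mean(np.abs(h / x_star))          # mean relative step size
    return nu * old_change + (1.0 - nu) * rel
```

Iteration stops once this value falls below a chosen tolerance; the x* guard keeps a near-zero coordinate from inflating the relative change.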

Appendix - The Error Function

The signed squared error for the distance between two points with coordinates (x_i, y_i, z_i) and (x_j, y_j, z_j), given the measured distance d_ij, is defined as:

e_ij = (x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2 - d_ij^2
The error function is defined as the sum of the squared errors, each multiplied by a constant theta_ij that determines how "good" (reliable) the corresponding distance is:
F = alpha sum_i sum_j theta_ij e_ij^2
( sum_i represents the sum for i = 1 to the number of points).

The constant alpha normalizes the function change due to the constants theta_ij:

alpha = (sum_i sum_j 1) / (sum_i sum_j theta_ij)
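
These definitions translate directly into code (a Python/NumPy sketch; the actual processing was done in Matlab):

```python
import numpy as np

def error_function(X, D, theta):
    """Weighted sum-of-squares error F for candidate coordinates.
    X: (n, 3) coordinates, D: (n, n) measured distances,
    theta: (n, n) reliability weights (grades)."""
    diff = X[:, None, :] - X[None, :, :]
    e = (diff ** 2).sum(axis=-1) - D ** 2  # signed squared error e_ij
    alpha = theta.size / theta.sum()       # alpha = (sum 1) / (sum theta)
    return alpha * np.sum(theta * e ** 2)
```

F vanishes exactly when the candidate coordinates reproduce all measured distances, and grows as any distance is violated, scaled by its reliability weight.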
The first derivative of the error function (only the terms with i = k or j = k depend on x_k):
g_k = dF/dx_k = alpha sum_i sum_j theta_ij d(e_ij^2)/dx_k
    = alpha { sum_i[ theta_ik d(e_ik^2)/dx_k ] + sum_j[ theta_kj d(e_kj^2)/dx_k ] }
    = alpha sum_j 4 (x_k - x_j) (theta_kj e_kj + theta_jk e_jk)

The second derivative of the error function:
G_kl = d^2F/dx_k dx_l = d/dx_l [ dF/dx_k ]
     = alpha sum_j{ d/dx_l[ 4 (x_k - x_j) (theta_kj e_kj + theta_jk e_jk) ] }
For k <> l, only the j = l term depends on x_l:
d^2F/dx_k dx_l = alpha d/dx_l[ 4 (x_k - x_l) (theta_kl e_kl + theta_lk e_lk) ]
              = alpha { -4 (theta_kl e_kl + theta_lk e_lk) - 8 (x_k - x_l)^2 (theta_kl + theta_lk) }
For k == l:
d^2F/dx_k dx_k = alpha 4 sum_j[ (theta_kj e_kj + theta_jk e_jk) + (x_k - x_j) (theta_kj de_kj/dx_k + theta_jk de_jk/dx_k) ]
              = alpha 4 sum_j[ (theta_kj e_kj + theta_jk e_jk) + 2 (theta_kj + theta_jk) (x_k - x_j)^2 ]
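
The gradient formula above can be implemented and checked numerically (a Python/NumPy sketch, for the x coordinates only; the y and z cases are identical):

```python
import numpy as np

def F_value(X, D, theta):
    """Error function F = alpha * sum_ij theta_ij * e_ij^2."""
    diff = X[:, None, :] - X[None, :, :]
    e = (diff ** 2).sum(axis=-1) - D ** 2
    alpha = theta.size / theta.sum()
    return alpha * np.sum(theta * e ** 2)

def grad_x(X, D, theta):
    """Gradient of F with respect to the x coordinates:
    g_k = alpha sum_j 4 (x_k - x_j) (theta_kj e_kj + theta_jk e_jk)."""
    diff = X[:, None, :] - X[None, :, :]
    e = (diff ** 2).sum(axis=-1) - D ** 2     # e is symmetric: e_kj = e_jk
    alpha = theta.size / theta.sum()
    w = theta * e + (theta * e).T             # theta_kj e_kj + theta_jk e_jk
    dx = X[:, 0][:, None] - X[:, 0][None, :]  # x_k - x_j
    return alpha * 4.0 * (dx * w).sum(axis=1)
```

Comparing grad_x against central finite differences of F_value confirms the derivation term by term, including the case of an asymmetric weight matrix theta.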
Zoran Lazarevic, Sun Feb 9, 1997 http://www.cs.columbia.edu/~laza
