Sunday, October 25, 2009

Remote Monitoring: Deep Dive Introduction

I am going to change course over the next few entries to focus on remote monitoring.  This article is the first in a series on remote monitoring and what can be gleaned from the data it collects.  The Biotronik press releases, and some of the claims the company has been making, have driven me to investigate and speculate about remote monitoring: its capabilities, its potential, and its possible future.


Two claims that Biotronik has made for its Home Monitoring system have intrigued me.  First, Biotronik claims, as a proven capability, earlier detection of critical arrhythmic events than other systems.  Second, they claim that they can report these events earlier than other systems.

Let's take the second claim first: Biotronik has created a system capable of notifying (i.e., transmitting) implant data more quickly.  The mobility of their monitor and its communication system allows events to be transmitted sooner once they are detected, and that mobility is what the claim is rooted in.  So the second claim appears plausible.

The first claim is harder to evaluate, not only because it is more difficult to prove, but because it is more difficult to define.  I can think of at least two ways the capability could be defined and implemented.  One is to consider the signal detection paradigm; a drawing of the basic paradigm appears below.


[Diagram: the basic signal detection paradigm, with the two signal states (present, absent) crossed against the two detector responses to give four possible outcomes.]

The basic concept of signal detection is extraordinarily simple.  On any given trial, a signal is either present or not.  It is the job of the detector to accurately determine whether or not the signal is present.  There are two right answers and two wrong answers, as shown in the diagram.  A type 1 error is the detector indicating that a signal is present when it is not.  (The probability of a type 1 error is represented by the Greek letter alpha.)  A type 2 error is the detector indicating that a signal is not present when in fact it is.  (The probability of a type 2 error is represented by the Greek letter beta.)
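
To make the four outcomes concrete, here is a minimal Python sketch that tallies hits, misses, false alarms, and correct rejections over a handful of trials and estimates alpha and beta from them.  The trial data are invented purely for illustration.

```python
# A minimal sketch of the signal detection bookkeeping described above.
# Each trial pairs the true state (signal present or not) with the
# detector's response; the data here are made up for illustration.

trials = [
    # (signal_present, detector_said_present)
    (True,  True),   # hit
    (True,  False),  # miss (type 2 error)
    (False, True),   # false alarm (type 1 error)
    (False, False),  # correct rejection
    (True,  True),
    (False, False),
]

hits            = sum(1 for s, d in trials if s and d)
misses          = sum(1 for s, d in trials if s and not d)
false_alarms    = sum(1 for s, d in trials if not s and d)
correct_rejects = sum(1 for s, d in trials if not s and not d)

# alpha: probability of a type 1 error (false alarm when no signal is present)
alpha = false_alarms / (false_alarms + correct_rejects)
# beta: probability of a type 2 error (miss when a signal is present)
beta = misses / (misses + hits)

print(f"alpha (type 1 error rate) = {alpha:.2f}")
print(f"beta  (type 2 error rate) = {beta:.2f}")
```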


The objective of detector improvement is to reduce both type 1 and type 2 errors.  However, adjustments are often made to alpha or beta simply to make it look like there is an improvement.  For example, if sensitivity is the crucial characteristic, the engineers may be willing to accept an increase in type 1 errors in order to reduce type 2 errors.  (This gets into what's called receiver operating characteristics, or ROC.  Something for a later blog article.)
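
As a rough illustration of that trade-off (the measurements, event labels, and thresholds below are all made up), consider a detector that simply flags any measurement at or above a threshold.  Lowering the threshold reduces beta (fewer misses) but raises alpha (more false alarms):

```python
# A sketch of the alpha/beta trade-off: the same threshold detector
# evaluated at two different thresholds on invented data.

measurements  = [0.2, 0.4, 0.45, 0.5, 0.55, 0.6, 0.7, 0.8, 0.9, 1.0]
event_present = [False, False, False, False, True, False, True, True, True, True]

def error_rates(threshold):
    """Return (alpha, beta) for a detector that flags values >= threshold."""
    false_alarms = sum(1 for m, e in zip(measurements, event_present)
                       if m >= threshold and not e)
    misses = sum(1 for m, e in zip(measurements, event_present)
                 if m < threshold and e)
    negatives = sum(1 for e in event_present if not e)
    positives = sum(1 for e in event_present if e)
    return false_alarms / negatives, misses / positives

for threshold in (0.75, 0.5):
    alpha, beta = error_rates(threshold)
    print(f"threshold={threshold}: alpha={alpha:.2f}, beta={beta:.2f}")

# Lowering the threshold raises sensitivity (beta falls) but alpha rises;
# sweeping the threshold across its full range traces out the ROC curve.
```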


I discuss the signal detection paradigm for two reasons.  First, it is an engineering and scientific touchstone that I'll refer to in later articles.  Second, it provides a framework for assessing just what counts as accurate detection, increased sensitivity, and so on.

Thus Biotronik's claim of earlier detection could reflect a genuine improvement, or it could simply reflect Biotronik's willingness to accept more type 1 errors in order to raise sensitivity: earlier detection, but at the expense of a greater likelihood of false alarms.  In the next article, I'll explore ways to improve detection capability not by increasing the accuracy of a particular detector, but by increasing the number of different detectors.



Early detection could also be interpreted as prediction.  This is more difficult than simple detection: it would be the computed likelihood of a particular event based on one or more measurements, and it does not fit into the simple signal detection paradigm.  It often involves finding a pattern and extrapolating from it.  Or it could involve finding a predecessor indicator, that is, a condition that is a known precondition of the target event.  The specifics of a predictive capability will be discussed in a later article.
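
To give a flavor of the extrapolation idea, here is a hypothetical Python sketch that fits a linear trend to a series of daily measurements and estimates when the trend would cross an alert threshold.  The values, the threshold, and the whole setup are invented for illustration; a real predictive capability would be far more sophisticated than a straight-line fit.

```python
# A hypothetical illustration of prediction by extrapolation: fit a simple
# linear trend to a daily measurement and estimate when it would cross an
# alert threshold.  The numbers are invented for illustration only.

daily_values = [12.0, 12.4, 12.9, 13.1, 13.6, 14.0, 14.5]  # one reading per day
alert_threshold = 16.0

n = len(daily_values)
days = list(range(n))

# Least-squares slope and intercept, computed by hand (no libraries needed).
mean_x = sum(days) / n
mean_y = sum(daily_values) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(days, daily_values))
         / sum((x - mean_x) ** 2 for x in days))
intercept = mean_y - slope * mean_x

if slope > 0:
    crossing_day = (alert_threshold - intercept) / slope
    print(f"Trend suggests the threshold may be crossed around day {crossing_day:.1f}")
else:
    print("No upward trend; no crossing predicted")
```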


This ends the Introduction.  The next article will discuss detection capabilities in greater detail.

