Sunday, November 8, 2009

Remote Monitoring: Predictability

One of the most controversial subjects in measurement and analysis is the concept of predictability.  Prediction does not imply causality or a causal relationship.  It is about an earlier event or events indicating the likelihood of another event occurring.  For example, I've run simulation studies of rare events.  If any of my readers have done this, you'll have noticed that rare events tend to cluster.  This means that if one rare event has occurred, it's likely that the same event will occur again in a relatively short time.  
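To make the clustering idea concrete, here is a minimal simulation sketch (not from my original studies; the event probability and trial count are arbitrary assumptions).  It shows that even for independently generated rare events, most gaps between successive events are shorter than the average gap, which reads as clustering.

```python
import random

random.seed(1)

P_EVENT = 0.001          # assumed per-trial probability of the rare event
N_TRIALS = 1_000_000     # number of simulated trials (e.g., days)

# Record the trial indices on which the rare event occurred.
event_times = [t for t in range(N_TRIALS) if random.random() < P_EVENT]

# Gaps between successive occurrences.
gaps = [b - a for a, b in zip(event_times, event_times[1:])]

mean_gap = sum(gaps) / len(gaps)
short_gaps = sum(1 for g in gaps if g < mean_gap)

print(f"events: {len(event_times)}, mean gap: {mean_gap:.0f} trials")
print(f"gaps shorter than the mean: {short_gaps / len(gaps):.0%}")
# Roughly 60-65% of the gaps fall below the mean gap, so one rare event
# is quite often followed by another relatively soon - apparent clustering
# without any change in the underlying process.
```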

Interestingly, the clustering does not seem to be an artifact of the simulation system.  There are some real-world examples.  Consider the paths of hurricanes.  At any one time, it is rare for a hurricane to make landfall at a particular location.  However, once a hurricane has hit a particular location, the likelihood of the next hurricane hitting in that same general area appears to go up.  I can think of a couple of examples in recent history.  In 1996, hurricanes made landfall twice around the area of Wilmington, NC, and a third hurricane passed nearby.  In 2005, New Orleans was hit solidly twice.  If you look at those two hurricane seasons - 1996 and 2005 - you'll note that they show quite different patterns.  The rare-event paradigm suggests that once the conditions for creating rare events are established, they tend to linger. 

In medicine the objective is to find an event or set of conditions that precedes the event of concern, before that event occurs.  For example, an event of concern would be a heart attack.  It is true that once one has had a heart attack, another could soon follow; the conditions are right for a follow-on event.  However, the objective is to prevent a heart attack - not to wait for a heart attack to occur in order to deal with the next one that is likely to follow.  Physicians employ a variety of means to detect conditions that may indicate an increased likelihood of a heart attack.  For example, cholesterol levels that are out of balance might signal an increased likelihood of having a heart attack.  


The problem is that most of the conditional indicators that physicians currently employ are weak indicators of an impending heart attack.  The indicators are suggestive.  Let me illustrate with a slot machine.  Assume that hitting the jackpot is equivalent to a heart attack, and that each pull of the lever represents another passing day.  On its own, with its initial settings, the slot machine has some probability of hitting a jackpot with each pull of the lever.  However, the settings on the slot machine can be biased to make a jackpot more likely.  This is what doctors search for ... the elevated conditions that make a heart attack more likely.  Making a jackpot more likely does not mean that you will ever hit one.  It just increases the likelihood that you will.  
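A tiny sketch of the slot-machine arithmetic (the probabilities are made up for illustration): biasing the machine raises the chance of a jackpot on any given pull and shortens the expected wait, but it never guarantees a jackpot within any fixed number of pulls.

```python
BASELINE_P = 1 / 10_000   # made-up jackpot probability per pull (per day)
BIASED_P   = 1 / 1_000    # made-up probability once "biasing conditions" exist

def chance_within(p, pulls):
    """Probability of at least one jackpot within a given number of pulls."""
    return 1 - (1 - p) ** pulls

for label, p in [("baseline", BASELINE_P), ("biased", BIASED_P)]:
    print(f"{label}: expected pulls to jackpot ~ {1 / p:.0f}, "
          f"chance within 365 pulls = {chance_within(p, 365):.1%}")
# The biased machine is ten times more likely to pay out on any pull,
# yet even it may go a whole year (365 pulls) without a jackpot -
# elevated likelihood, not certainty.
```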


To compound the problem, the discovery of biasing conditions that appear to increase the likelihood of events such as heart attacks is often difficult to assess clearly.  One problem is that apparent biasing indicators or biasing conditions generally don't have a clear causal relationship.  They are indicators: they have a correlative relationship (and not always a strong one), not a causal one.  There are other problems as well.  For one, extending conclusions to an individual from data collected from a group is generally considered suspect.  Yet that is what's going on when we perform assessments on individuals: individuals are compared to norms based on data collected from large groups.  Over time and with enough data, norms may be considered predictors.  Search the literature and you'll note that many measurements once considered predictive no longer are.


The gold standard of prediction is the discovery of a predecessor event or events - something that precedes the watched-for event.  In Southern California everyone is waiting for the great earthquake, and scientists have been attempting to discover a predecessor event to it.  The same goes for detecting a heart attack or other important medical events that are threats to one's health.  Two clear problems stand in the way of discovering a clear predecessor event.  The first is finding an event that seems to precede the event of interest.  This is not easy; a review of the literature will inform you of that.  Second, once you've found what appears to be a predecessor event, what is its relationship to the target event, the event of interest?  Establishing that is often a very long process, and even with effectively predictive predecessor events, the relationship is not always one to one.  One predecessor event may not be the only one that precedes the event of interest; several predecessor events could precede it, or the predecessor event may not always appear before the event of interest.


This ends my discussion of predictability.  Next time ... I'm going to speculate on what may be possible in the near term and how the benefits of remote monitoring and remote programming can be made available relatively inexpensively to a large number of people.


Article update notice

I have updated my article on Digital Plaster.  I have found an image of digital plaster that I have included, plus a link to one of the early news releases from the Imperial College, London, UK.  I shall include Digital Plaster in my next article.

Remote Monitoring: Update to Sensitivity and Accuracy

Before I dive into the subject of predictability (following article), I have an update on one of my previous articles: Remote Monitoring: Sensitivity and Accuracy.  It comes from a discussion I had with a colleague regarding what appeared to be counter-intuitive results.  The issue was the data sampling rate over a fixed period of time.  As the sampling rate increased, accuracy decreased.  Thus, with seemingly more data, accuracy went down.

Going back to the Signal Detection paradigm: the paradigm suggests that, as a rule, increasing the number of data points will reduce false positives (alpha), and reducing false positives was a major objective of this research.  Frankly, for a time I was flummoxed.  Then I realized that I was looking at the problem incorrectly: the problem is with the resolution, or granularity, of the measurement.

The Signal Detection paradigm has as a fundamental assumption the concept of a defined event or event window - and detecting whether or not a signal is present within that event window.  The increased sampling rate compounded error, particularly false positive errors.  In effect, the system would take two or more samples within the conditions that set off a false positive, thus producing more than one false positive within an event window where only one should have been recorded.
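Here is a minimal sketch of the arithmetic as I understand it (the per-sample false-positive rate is an assumed value): if each individual sample has some chance of firing falsely, then taking several samples inside one event window multiplies the chance that the window gets scored as a false detection.

```python
ALPHA_PER_SAMPLE = 0.01   # assumed false-positive probability for a single sample

def window_false_positive_rate(alpha, samples_per_window):
    """Chance that at least one sample in the event window fires falsely."""
    return 1 - (1 - alpha) ** samples_per_window

for n in (1, 2, 5, 10):
    rate = window_false_positive_rate(ALPHA_PER_SAMPLE, n)
    print(f"{n:2d} samples per window -> window false-positive rate {rate:.3f}")
# 1 sample   -> 0.010
# 10 samples -> 0.096: nearly ten times the false positives per event window,
# which is how "more data" can make the measured accuracy look worse.
```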

How does one overcome the problem of oversampling, of setting the wrong-size event window?  Here are some things that come to mind:
  • First, recognizing that there's an event-window problem may be the most difficult part.  This particular situation suggested an event-window problem because the results were counter to expectations.  Having primarily a theoretical perspective, I am not the best one to address this issue. 
  • Finding event windows may involve a tuning or "dialing-in" process.  However it is done, it may take many samples at various sampling resolutions to determine the best or acceptable level of resolution.
  • Consider adding a waiting period once a signal has been detected.  The hope is that the waiting period will reduce the chances of making a false positive error.  (A minimal sketch of this idea follows the list.)
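Here is that sketch of the waiting-period idea (the detection times are invented): after a detection is accepted, further detections are ignored for the length of the event window, so one triggering condition is counted only once.

```python
def suppress_within_window(detection_times, wait):
    """Keep a detection only if at least `wait` time units have passed
    since the last accepted detection (a simple refractory period)."""
    accepted = []
    last = None
    for t in sorted(detection_times):
        if last is None or t - last >= wait:
            accepted.append(t)
            last = t
    return accepted

# Invented example: the same condition trips the detector three times
# within a few samples, then again much later.
raw = [100, 101, 103, 250]
print(suppress_within_window(raw, wait=10))   # -> [100, 250]
# The burst at 100-103 is counted once instead of three times.
```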
On a personal note: I find it amusing that before this time, I had never encountered a granularity-related issue.  In theory I understood it, but I had never encountered it in my own research, in part because the research I have performed has always had clear event boundaries.  Nevertheless, within days of writing about Sensitivity and Accuracy and the granularity issue in this blog, I encountered a granularity problem.

Tuesday, November 3, 2009

Sensor Technology: Digital Plaster and Stethoscope

Digital Plaster


Toumaz Technology has announced clinical trials of what they are calling "digital plaster" that should enable caregivers to remotely monitor patients.  The initial trial would allow caregivers to remotely monitor patients while they are in the hospital.  However, conceivably a patient could carry a mobile monitoring system like the one that I discussed in my article: Communication Model for Medical Devices.  

Here is a link to the article on Digital Plaster: http://www.sciencecentric.com/news/article.php?q=09110342-digital-plaster-monitoring-vital-signs-undergoes-first-clinical-trials

Update:  Here's an image of digital plaster from a UK website, to give you a sense of its size and means of application.  It's a sensor placed into a standard plastic or cloth strip: simple to apply and disposable.  



For more information, here's the link: Imperial College, London, UK.  This 2007 article is a good reference point for investigating the technology. 

Digital Stethoscope


Another development was the announcement at TEDMED of the digital stethoscope.  Here's the link to the article: http://mobihealthnews.com/5142/tedmed-wireless-health-has-killed-the-stethoscope/.  This article discusses this and other new wireless medical devices that will enable patients to be remotely monitored from virtually anywhere, thus providing the capability to keep people out of hospitals or keep them there for shorter periods of time.  Furthermore, these technologies have the capability to improve care while lowering costs.  Again, I think it would be instructive to read my articles on mobile, wireless data communications:  1) Communication Model for Medical Devices and 2) New Communications Model for Medical Devices.

Sunday, November 1, 2009

Remote Monitoring: Sensitivity and Accuracy ... using wine tasting as a model

This article focuses on measurement accuracy, sensitivity and informativeness.  Sometime later I shall follow with an article that focuses on predictability.  

In this article I discuss measurement accuracy, sensitivity and informativeness in the abstract, using wine tasting as an example.  In later articles I shall drill down into specific measurements provided by remote monitoring systems, and I shall refer back to concept-foundation articles such as this one when I discuss those specific measurements and measurement systems.



For remote monitoring to be a valuable tool, the measurements must be informative.  That is, they must provide something of value to the monitoring process - whether that monitoring process is an informed and well-trained person such as a physician or a software process.  However, there are conditions that must first be met before any measurement can be considered informative.

For any measurement to be informative, it must be accurate.  It must correctly measure whatever it was intended to measure.  For example, if the measurement system is designed to determine the existence of a particular event, then it should register that the event occurred and the number of times that it occurred.  Furthermore, it should reject or not respond when conditions dictate that the event did not occur - that is, it should not report a false positive.  This is something that I covered in detail in my article on Signal Detection.  Measurement extends beyond mere detection to measurements tied to a particular scale, e.g., the constituents in a milliliter of blood.


A constituent of accuracy is granularity.  That is, how fine is the measurement, and is it fine enough to provide meaningful information?  Measurement granularity can often be a significant topic of discussion, particularly when defining similarities and differences.  For example, world-class times in swimming are recorded to the hundredth of a second.  There have been instances when the computer sensed that two swimmers touched the end simultaneously and recorded identical times.  (I can think of a particular race in the last Olympics that involved Michael Phelps and the butterfly.)  At the resolution of the computer touch-timing system (and I believe it's down to a thousandth of a second), the system indicated that both touched simultaneously and had identical times.  However, is that really true?  If we take the resolution down to a nanosecond, one-billionth of a second, did they touch simultaneously?  
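A small sketch of the granularity point (the touch times are invented): two times that differ by a few microseconds are "identical" at a hundredth-of-a-second resolution and different at a finer one.

```python
# Invented touch times, in seconds.
swimmer_a = 50.580_004_1
swimmer_b = 50.580_007_9

for label, digits in [("hundredths", 2), ("thousandths", 3), ("microseconds", 6)]:
    a, b = round(swimmer_a, digits), round(swimmer_b, digits)
    verdict = "tie" if a == b else "different"
    print(f"{label:>12}: {a} vs {b} -> {verdict}")
# At hundredths (and thousandths) of a second the result is a tie;
# at microsecond resolution it is not. Which answer is "right" depends
# on the granularity the measurement system can actually support.
```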

However, at the other end, if measurements are too granular, do they lose their meaningfulness?  This is particularly true when defining what is similar.  It can be argued that with enough granularity, every measurement will differ from all other measurements on that dimension.  How, then, do we assess similarity?  Assessing similarities (and differences) is vital to diagnosis and treatment.


We often make compromises when it comes to issues of granularity and similarity by categorizing.  And oftentimes, categorization and assessments of similarity can be context-specific.  This is something that we do without thinking; we often assess and reassess relative distances.  For example, Los Angeles and San Diego are 121 miles from each other.  (I used Google to find this distance.)  To people living in either city, 121 miles is a long distance.  However, to someone in London, England, these two cities would seem to be nearly in the same metropolitan area: from that distance, they appear to be within the same geographic area. 



Sensitivity is a topic unto itself.  Since I discussed it at some length in my article on Signal Detection, I shall keep this discussion relatively short.  In the previous discussion, I covered the issues related to a single detector and its ability to sense and reject.  I want to add the dimension of multiple detectors and the capability to sense based on multiple inputs.  In this case I am not discussing multiple trials to test a single detector, but multiple measures on a single trial.  Multiple measurements on different dimensions can, when combined, provide greater sensitivity than a single measurement system, even if each individual measurement is less accurate and less sensitive on its own.  I'll discuss this in more depth in a later article.
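Here is a minimal sketch of that idea (the individual accuracies are assumed, and the detectors are assumed to err independently): three mediocre detectors combined by majority vote can outperform a single, better detector.

```python
def majority_vote_accuracy(p):
    """Probability that at least 2 of 3 independent detectors,
    each correct with probability p, give the right answer."""
    return 3 * p**2 * (1 - p) + p**3

single_good_detector = 0.85   # assumed accuracy of one good detector
three_weak_detectors = 0.80   # assumed accuracy of each of three weaker detectors

print(f"single detector:        {single_good_detector:.3f}")
print(f"majority of three weak: {majority_vote_accuracy(three_weak_detectors):.3f}")
# 0.850 vs 0.896 - the combination of weaker but independent measurements
# beats the stronger single measurement. The caveat is independence:
# if the detectors all fail in the same way, the benefit evaporates.
```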


Informativeness ... this has to do with whether the output of the measurement process - its accuracy (granularity) and sensitivity - provides one with anything of value.  And determining the value depends on what you need that measurement to do for you.  I think my example provides a reasonable and accessible explanation.


Wine Tasting - Evaluating Wine


Over the years, people interested in wine have settled on a 1-100 scale - although I do not know of an instance where I have seen anything less than an 80 rating.  (I am not a wine expert by any stretch of the imagination; I know enough to discuss it, that's all.  If you're interested, here's an explanation - though they will want to sell you bottles of wine, and some companies may block access: http://www.wine.com/v6/aboutwine/wineratings.aspx?ArticleTypeId=2.)   Independent or "other" wine raters use a similar rating system.  Wine stores all over the US often have their own wine rater who "uses" one of these scales.  In theory, the scales are reasonably similar.  In practice, they can be quite different: two 90 ratings from different wine raters don't always mean the same thing.


So, what is a buyer to do?  Let's look at wine rating in a mechanistic way.  Each wine rater is a measuring machine who is sensitive to the various constituents of a wine and how those constituents provide an experience.  Each rating machine provides us with a single number and often a brief description of the tasting experience.  But, for most people buying wine, it's the number that's the most important - and that can often lead to the greatest disappointment.  When we're disappointed, the measurement has failed us.  It lacks informativeness.

How to remedy disappointed expectations and, oftentimes, overpayment?  I can think of four ways:
  1. Taste the wine yourself before you buy it.  The wine should satisfy you, and you can determine if it's worth the price.  However, I've met many who are not always satisfied with this option for a variety of reasons, ranging from not trusting their own taste or "wine knowledge" to knowing that they are not in a position to taste the wide variety of wines available to professional wine tasters, and thus being concerned about "missing out."  Remote monitoring presents a similar situation.  A patient being remotely monitored is not in the presence of the person doing the monitoring, so the experience of seeing the patient along with the measurement values is missing.  However, remote monitoring provides the capability to deliver a great deal of information about many patients without the need to see each individual.  The catch is that the person doing the monitoring needs to trust the measurements from remote monitoring.
  2. Find a wine rater who has tastes similar to yours.  This might take some time or you might get lucky and find someone who likes wine the way you like it.  Again, this all boils down to trust.
  3. Ask an expert at the wine store.  The hope is that the person at the store will provide you with more information and ask you about your own tastes and what you're looking for.  Although this is not experiential information, you are provided with more information on more dimensions, with the ability to re-sample on the same or different dimensions (i.e., ask a question and receive an answer).  In this sense, you have an interactive measurement system.  (At this juncture, I have added, by implication, remote programming to the mix.  Remote programming involves adjusting, tuning or testing additional remotely monitored dimensions.  In this sense, the process of remote monitoring can be dynamic and inquiry-driven.  This is a topic for later discussion.)
  4. Consolidate the ratings of multiple wine raters.  Often several wine raters have rated the same wine.  This can get fairly complicated: in most cases not all wine raters have rated the same wine, and you'll probably get a different mix of raters for each wine.  This too may involve some level of tuning based on the "hits" and "misses."  (A minimal sketch of such a consolidation follows this list.)
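For that fourth option, here is a minimal sketch of one way to consolidate ratings (the raters, wines, and scores are all invented): each rater's scores are centered on that rater's own average before combining, so a habitually generous rater does not dominate the result, and missing ratings are simply skipped.

```python
# Invented ratings; not every rater has rated every wine.
ratings = {
    "rater_a": {"wine_1": 92, "wine_2": 88, "wine_3": 90},
    "rater_b": {"wine_1": 95, "wine_3": 96},
    "rater_c": {"wine_2": 85, "wine_3": 89},
}

def consolidated_scores(ratings):
    # Center each rater's scores on that rater's own mean.
    centered = {}
    for rater, scores in ratings.items():
        mean = sum(scores.values()) / len(scores)
        for wine, score in scores.items():
            centered.setdefault(wine, []).append(score - mean)
    # Average the centered scores for each wine.
    return {wine: sum(vals) / len(vals) for wine, vals in centered.items()}

for wine, score in sorted(consolidated_scores(ratings).items()):
    print(f"{wine}: {score:+.1f} relative to each rater's average")
```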
This ends this discussion of measurement.  Measurement is the foundation of remote monitoring.  For remote monitoring, what is being measured, the accuracy and sensitivity of that measurement, and whether that measurement is informative are key to its value.  We've also seen a place for remote monitoring as a means of getting at interesting measurements: changing measurement from a passive to an active, inquiry-driven process.


Next time I discuss a recent development with respect to physiological measuring systems.  Here's a link to an article that I believe many will find interesting.  http://mobihealthnews.com/5142/tedmed-wireless-health-has-killed-the-stethoscope/ 







Wednesday, October 28, 2009

Biotronik Home Monitoring Claim

I'm posting this article before my discussion on measurement and sensing because it has relevance to my immediately preceding posting.  

Biotronik released to the press on Tuesday 27 October 2009 an announcement regarding their Evia Pacemaker.  In that press release was some additional information regarding Biotronik's Home Monitoring system.  Here's the link to the press release: http://www.earthtimes.org/articles/show/biotronik-launches-evia-pacemaker-series,1016041.shtml

The relevant quote from the press release is the following:

Now physicians have the choice to call in their patients to the clinic or perform remote follow-ups with complete access to all pertinent patient and device information, including high quality IEGM Online HD®. Importantly, BIOTRONIK Home Monitoring® has also received FDA and CE Mark approval for its early detection monitoring technology which allows clinicians to access their patients’ clinically relevant event data more quickly so they can make immediate therapy decisions to improve patient care. 


The indication is that Biotronik claims their system provides quicker access to relevant data, not that the data (and analysis) yield earlier warnings.  This is consistent with my earlier analysis and appears to be supported by Biotronik's own wording.

I do wonder about Biotronik's long-term objective.  I suspect that Biotronik wants to be one of the big three implantable device manufacturers, not just become one of four.  It would mean that Biotronik would likely target one of the big three to replace and that would likely involve targeting the weaknesses of the company that Biotronik wants to replace.  I'll continue to monitor Biotronik and report what I find.



Next, my discussion on measurement and detection.

Sunday, October 25, 2009

Remote Monitoring: Deep Dive Introduction

I am going to change course over the next few entries to focus on remote monitoring.  This article is the first in a series of articles on Remote Monitoring and what can be gleaned from the data remote monitoring collects.  The Biotronik press releases and some of the claims they have been making have driven me to investigate and speculate on remote monitoring, its capabilities, potential and possible future. 


Two claims that Biotronik has made for its Home Monitoring system have intrigued me.  First, Biotronik claims, as a proven capability, earlier detection of critical arrhythmic events than other systems.  Second, they claim that they can report these events earlier than other systems.  

Let's take the second claim first: Biotronik has created a system with the capability to notify (i.e., transmit) implant data more quickly.  Because the monitor is mobile and travels with the patient, events can be detected and transmitted sooner.  Their claim is rooted in the mobility of their monitor and its communication system, so the second claim appears plausible.

The first claim is more difficult, not only because it is more difficult to prove, but because it's more difficult to define.  I can think of at least two ways the capability could be defined and implemented.  One, consider the signal-detection paradigm.  I have a drawing that defines the basic signal detection paradigm below.


 

The basic concept of signal detection is extraordinarily simple.  On any given trial, a signal is either present or not.  It is the job of the detector to accurately determine whether or not the signal is present.  There are two right answers and two wrong answers, as shown in the diagram.  A type 1 error is the detector indicating that a signal is present when it is not.  (The probability of a type 1 error is represented by the Greek letter alpha.)  A type 2 error is incorrectly indicating that a signal is not present when in fact it is.  (The probability of a type 2 error is represented by the Greek letter beta.)


The objective of detector improvement is to reduce both type 1 and type 2 errors.  However, oftentimes adjustments are made to alpha or beta to make it look like there's an improvement.  For example, if sensitivity is the crucial characteristic, the engineers may be willing to accept an increase in type 1 errors to reduce type 2 errors.  (This gets into what's called receiver operating characteristics, or ROC - something for a later blog article.)
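A minimal sketch of that trade-off (the signal and noise distributions are invented): a single threshold detector applied to noisy readings.  Lowering the threshold catches more true signals (smaller beta) at the cost of more false alarms (larger alpha), which is the ROC trade-off in miniature.

```python
import random

random.seed(2)

# Invented readings: noise-only trials and signal-present trials.
noise   = [random.gauss(0.0, 1.0) for _ in range(10_000)]
signals = [random.gauss(1.5, 1.0) for _ in range(10_000)]

def error_rates(threshold):
    alpha = sum(x > threshold for x in noise) / len(noise)       # type 1: false alarm
    beta  = sum(x <= threshold for x in signals) / len(signals)  # type 2: miss
    return alpha, beta

for threshold in (1.5, 1.0, 0.5):
    alpha, beta = error_rates(threshold)
    print(f"threshold {threshold:.1f}: alpha = {alpha:.2f}, beta = {beta:.2f}")
# As the threshold drops, beta shrinks (fewer missed signals) while alpha
# grows (more false alarms). "Earlier" or "better" detection claims need to
# say where on this curve the detector sits.
```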


I discuss the signal detection paradigm for two reasons.  First, the signal detection paradigm is an engineering and scientific touchstone that I'll refer to in later articles.  Second, it allows one to assess just what is accurate detection, increasing sensitivity, etc. 

Thus Biotronik's claim of earlier detection could be real, or it could reflect Biotronik's acceptance of more type 1 errors in order to raise sensitivity - earlier detection at the expense of an increased likelihood of false positives.  In the next article, I'll explore ways to improve detection capabilities, not by increasing the accuracy of a particular detector, but by increasing the number of different detectors.



Early detection could also be interpreted as prediction.  This is more difficult than simple detection.  It would be the computed likelihood of a particular event based on one or more measurements, and it does not fit into the simple signal detection paradigm.  It often involves finding a pattern and extrapolating, or it could involve finding a predecessor indicator: a condition that is a known precondition to the target.  The specifics of a predictive capability will be discussed in a later article.  


This ends the Introduction.  The next article will discuss detection capabilities in greater detail.


Thursday, October 22, 2009

Update: Future-Market Analysis: Global Patient Monitoring

I'm posting a link to an article that provides some information from the Global Patient Monitoring market study.  Here's the link: Europe Remote Patient Monitoring Market: Strategic Analysis and Opportunity Assessment.  One warning: the article is loaded with embedded ads and links to services that they want to sell you.  Go to the article and you'll see what I mean.  However, the article provides some information about the size and growth potential of the remote monitoring market in Europe.

Wednesday, October 21, 2009

Verizon's Offering at the Connected Health Symposium

Here is an article from Mobihealthnews.com (@Connected Health: Verizon highlights partners) briefly describing the benefits and cost savings from tele-medicine.  For example, Verizon claims that "IT healthcare solutions and services can help organizations save close to $165 billion annually, according to the carrier. The carrier also cites a report from the Insight Research Corporation that estimates $800 million per year could be saved if more treatment was shifted from physician’s offices to home health visits."

Of course, tele-medicine and these applications bring revenue to Verizon (and other carriers), so the cost-savings figures should be viewed sceptically.  However, in general, tele-medicine solutions nearly always provide cost savings over clinic and hospital visits.  They also provide an additional level of freedom that improves quality of life.


I want to add this link, which raises a significant concern regarding the supply of and demand for communications bandwidth in the near future: http://www.reuters.com/article/pressRelease/idUS121240+21-Oct-2009+PRN20091021.  The title of the article is "Are we ready for the Exabyte Tsunami?"  (Here's a link explaining an exabyte: http://en.wikipedia.org/wiki/Exabyte.)

Tuesday, October 20, 2009

Biotronik Home Monitoring Operational in Europe

I've mentioned Biotronik's Home Monitoring system in an earlier post.  One of the attractive things about the Biotronik version is that their home monitoring has been deemed a replacement for clinic visits.  Here is a quote from the article (link immediately below):


"Designed to avoid regular visits to the clinic by patients wearing company's ICD's, CRT's, and similar devices, the system sends readings from the chest straight to your doc over the cellular phone network."

This is an interesting development because Biotronik has been taking market share from the big three medical device makers.  I think that the Biotronik capability to reduce clinic visits translates into either more revenue or more free time.  Either would be attractive to device-managing physicians, who may suggest that implanting physicians choose Biotronik.  This may be a situation where a robust home monitoring system drives the choice of which brand of device to implant.  I do not have clear evidence, but I think the issue is worth investigating.


Three aspects of the Biotronik home monitoring system seem to differentiate it from others.  First, the monitoring unit is mobile and uses the GSM network to communicate with the monitoring servers; the monitoring servers in turn can notify the device-managing physician or clinic with an email, SMS (text) message or fax.  Second, the Biotronik home monitoring unit has what they call an intelligent traffic light system.  I haven't any information on how the intelligent traffic light system operates.  Finally, and I think most importantly, the Biotronik system is claimed to detect critical arrhythmic events earlier than other systems.  They call this a "proven capability."  Since I have no information on the operational details or algorithms that they use, I cannot confirm or deny their claims.  


The German Government has shown its belief in the bright future of Biotronik and its Home Monitoring technology: Nominated for the German Federal President's "Deutscher Zukunftspreis" (German Future Award): BIOTRONIK Home Monitoring for Online Monitoring of Heart Patients.

Update: 21 October 2009.  A little more information about the research that Biotronik performed with respect to the value and capabilities of their Home Monitoring system.


Biotronik Press Release Published in Reuters Regarding Home Monitoring.  This press release mentions three publications of the results of the Biotronik study.  I have not yet been able to obtain a copy.  From the outside, it's hard to assess the significance of the technology or technologies that Biotronik has incorporated into their system.  However, with the possible exception of the mobile monitoring unit, it looks more like a publicity campaign than substance, because there is nothing that I can see that clearly sets Biotronik's remote monitoring system apart from anyone else's with respect to data collection and/or analysis.

Monday, October 19, 2009

Update on 29 September 2009 Posting

I have an update related to my 29 September posting, Medtronic Remote Programming Patent.  I stated the following in that posting ...

I believe that Medtronic's patent ... reveals not only the extent of Medtronic's work on remote programming and their level of development of this technology, it reveals a product development path. ... The strategy that I believe Medtronic has taken is in keeping with long-standing trends in technology development.

Over the last several decades, the trend has been to move away from  specialized to more powerful, general-purpose processors. This enables products to be defined more by software than by hardware. Processing power has become smaller, less power hungry and cheaper, thus allowing software to become the means for defining the system's capability. Furthermore, this enables multiple products to be defined by a single hardware platform. ...

The Medtronic patent suggests a similar product strategy ... that different products will use fundamentally the same hardware architecture, but they will be defined by the software that they run. So, a pacemaker, a neurostimulator and a drug pump will share the same processor hardware platform, but their operation will be defined primarily by the software that they run. For example, take some time and examine pacemakers, ICDs, CRTs/CRT-Ds, neuro-stimulators, drug pumps, etc.  Although they have different purposes, they have enough in common to consider the possibility that all of them could share a common processor platform.

The implications are significant for all functional areas within Medtronic, from research and development, product development, software development and management, and from product support. Medtronic can leverage its enormous scale to make its scale as a company a major asset. It can substantially reduce the number of hardware platforms it supports, it can leverage its software development capabilities to have its software development groups produce software for multiple product lines, it can create more products without a substantial requirement for additional support each time a product is produced. ...



I unearthed an article published in the August 2008 Journal of Computers, titled "Design Overview Of Processor Based Implantable Pacemaker," authored by Santosh Chede and Kishore Kulat, both from the Department of Electronics and Computer Science Engineering at the Visvesvarayan National Institute of Technology. (I do not have an address for you to access this article; however, if you search on the journal, the title and the authors, you will find it.)


Their article describes the means by which they created a pacemaker using a Texas Instruments (TI) MSP430F1611 processor.  The TI MSP430 processor (TI MSP430 Microcontroller Website) is a general-purpose RISC processor similar in architecture to the DEC PDP-11.  The TI MSP430 is designed for ultra-low power consumption and targeted at battery-powered, embedded applications.  In other words, this would be the kind of processor on which to base a line of implantable medical devices.  Having looked around the website, I noted that the listed applications of the processor include medical devices, but not implants.  However, based on the Journal of Computers article, I can see a clear route to creating implants using this processor. (I haven't yet found a comparable processor; however, I suspect one or more exist.  As I find additional processors in this class, I shall make them known in this blog.)


Finally, I think the important message of the Journal of Computers article is that it is possible to use a general-purpose processor and software to create a pacemaker or any other implantable medical device, such as a neuro-stimulator, CRT-D, or drug pump. As I discussed earlier, using a general-purpose processor and software to create the product can be an effective business and technical strategy.
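To illustrate the "one hardware platform, many software-defined products" strategy in the abstract, here is a toy sketch (written in Python purely for readability; it bears no relation to real implant firmware, and every name in it is invented): the same platform loop runs different therapy logic depending on which software module is loaded onto it.

```python
# Toy illustration only: invented names, no relation to any real device firmware.

def pacemaker_logic(reading):
    return "pace" if reading["heart_rate"] < 60 else "idle"

def neurostimulator_logic(reading):
    return "stimulate" if reading["tremor_level"] > 0.7 else "idle"

def drug_pump_logic(reading):
    return "dose" if reading["glucose"] > 180 else "idle"

class CommonPlatform:
    """One 'hardware' platform; the product is defined by the therapy software."""
    def __init__(self, therapy_logic):
        self.therapy_logic = therapy_logic

    def run_cycle(self, sensor_reading):
        return self.therapy_logic(sensor_reading)

# The same platform class becomes three different "products."
pacemaker = CommonPlatform(pacemaker_logic)
neurostim = CommonPlatform(neurostimulator_logic)
drug_pump = CommonPlatform(drug_pump_logic)

print(pacemaker.run_cycle({"heart_rate": 52}))     # -> pace
print(neurostim.run_cycle({"tremor_level": 0.9}))  # -> stimulate
print(drug_pump.run_cycle({"glucose": 140}))       # -> idle
```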