
Tuesday, December 10, 2019

New Wearable Sensor Detects Gout and Other Medical Conditions

I just came across this article regarding a wearable sensor system and thought that I would share it. This could be a component in a remote monitoring system. The sensor's information source is the person's sweat. "Sensor can pick up small concentrations of metabolites in sweat and provide readings over long periods of time." To turn this into a remote monitoring system, all that's required is a means to transmit the data wirelessly.

From the article:

The team’s goal is a sensor that lets doctors continuously monitor the condition of patients with illnesses such as cardiovascular disease, diabetes, and kidney disease, all of which put abnormal levels of nutrients or metabolites in the bloodstream. Patients would be better off if their physician knew more about their personal conditions and this method avoids tests that require needles and blood sampling.
“Such wearable sweat sensors could rapidly, continuously, and noninvasively capture changes in health at molecular levels,” Gao says. “They could make personalized monitoring, early diagnosis, and timely intervention possible.”

Tuesday, July 24, 2018

Adhesives: Part of the Future for the Remote Monitoring Sensors?

I just ran across this article a few minutes ago. It's a serious article published in Machine Design. Here's the link: http://www.machinedesign.com/mechanical/adhesives-enabling-future-wearable-medical-devices?NL=MD-005&Issue=MD-005_20180724_MD-005_524&sfvc4enews=42&cl=article_1_b&utm_rid=CPG05000003255032&utm_campaign=18775&utm_medium=email&elq2=5b76b40ea8f44d76b2b883c5c09f23fe

It's an extremely readable article, and what it describes has, in my opinion, real applicability to the future of medical sensors. The development of adhesive, "band-aid" or strip sensors applies both to the fitness set and to remotely monitored patients.

Transmitting data to monitoring systems and people will likely require an intermediate device such as a smart phone. I suspect that the real hurdles will revolve around digital communications and standardization. Having worked most of my life in the communications domain, I am confident that those issues can be successfully overcome.

Here are a few quotes from the article:

Device manufacturers are taking steps to create medical devices that are smaller, lighter, and less invasive. Whether they’re adhering device components together or sticking a device to skin, adhesives are uniquely bonded to a device’s success.

Both consumers and patients want wearable devices to be smaller, lighter and less cumbersome to use for seamless integration into their everyday lives. The design process can get challenging when devices must maintain accurate sensing capabilities, but also reduce friction to ensure precise data collection. Adhesives can help to keep friction to a minimum by being breathable and maintaining a low profile. In addition, options with flex electronics, as well as addressing battery implications and electromagnetic interference, provide opportunities for advancement.

Adhesive wear time is a crucial consideration when designing a wearable device, impacting overall resilience and durability, as well as how often the user will need to change their device. 

______________

I should mention that, by the looks of things, 3M may be behind the article. Nevertheless, I think that considering adhesives in the research, design and development process of a bio-sensor is worth your time.


Friday, March 27, 2015

Welch Allyn Published Patent Application: Continuous Patient Monitoring

I decided to review this patent application in light of the New York Times Opinion piece I commented on. Here's the link to my commentary: http://medicalremoteprogramming.blogspot.com/2015/03/new-york-times-opinion-why-health-care.html

Also, I've gone back to the origins of this blog ... reviewing patents. The first patent I reviewed was one from Medtronic. Here's the link: http://medicalremoteprogramming.blogspot.com/2009/09/medtronics-remote-programming-patent.html

The issue raised of particular interest was the high "false alarm" rate reported by the author, a rate that would lead medical professionals to disregard warnings generated by their computer systems. I wrote that I wanted to follow up on the issue of false alarms.

The patent application (the application has been published, but a patent has not yet been granted) describes an invention intended to 1) perform continuous automated monitoring and 2) lower the rate of false alarms.

Here are the details of the patent application so that you can find it yourself if you wish:



The continuous monitoring process itself is not, from a technical standpoint, all that interesting or new. What is interesting is the process they propose for lowering the false alarm rate, and whether that process will in turn raise the false negative rate.

Proposed Process of Lowering False Alarms

As mentioned in my earlier article, false alarms have been a significant issue for medical devices and technology. Systems that issue too many false alarms generate warnings that are often dismissed or ignored, or that waste the time and attention of caregivers who must respond to them. This patent application is intended to reduce the number of false alarms. However, as I asked earlier, can it do so without increasing the number of false negatives, that is, failures to detect a real event when an alarm should be going off?

Working through all the details of the patent application and trying to make sense of what they're trying to convey, the following is what I believe is the essence of the invention:


  • A measurement from a sensor indicates an adverse patient condition, and an alarm should be initiated.
  • Before the alarm is initiated, the system cross-checks against other measurements that are:
      1) from another sensor measuring essentially the same physiological condition as the sensor that detected the adverse condition; the measurement from the second sensor would either confirm the alarm condition or indicate that an alarm condition should not exist; or
      2) from another sensor or sensors taking physiological measurements that would either confirm the alarm condition from the first sensor or indicate that an alarm condition should not exist.

In this model, at least two sensors must provide measurements that point to an alarm state.
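To make the cross-checking model concrete, here is a minimal sketch, in Python, of the two-sensor confirmation logic as I read it. The sensor names, thresholds, and the simple "at least two" rule are my own illustration, not anything specified in the patent application.

```python
# A minimal sketch of the cross-check model described above.
# Sensor names and thresholds are hypothetical, for illustration only.

def out_of_range(value, low, high):
    """True when a reading falls outside its acceptable band."""
    return value < low or value > high

def should_alarm(readings, limits):
    """
    readings: dict mapping sensor name -> latest measurement
    limits:   dict mapping sensor name -> (low, high) acceptable band
    An alarm fires only when at least two sensors independently
    indicate an adverse condition (the cross-check).
    """
    adverse = [name for name, value in readings.items()
               if out_of_range(value, *limits[name])]
    return len(adverse) >= 2

# Example: two pulse sensors (same physiological condition) plus SpO2.
limits = {"pulse_ecg": (50, 120), "pulse_ppg": (50, 120), "spo2": (90, 100)}

# One sensor knocked loose by the patient turning over: alarm suppressed.
print(should_alarm({"pulse_ecg": 0, "pulse_ppg": 72, "spo2": 97}, limits))    # False

# Two independent sensors agree on an adverse condition: alarm fires.
print(should_alarm({"pulse_ecg": 135, "pulse_ppg": 138, "spo2": 97}, limits)) # True
```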

Acceptable Model for Suppressing False Alarms and Not Increasing False Negatives?

Whatever you do in the domain of detecting adverse patient conditions, you don't want to lower your accuracy in detecting the adverse condition; that is, you don't want to increase your false negative rate.

So is this one way of at least maintaining your current level of detecting adverse events while lowering your false alarm rate? On the face of it, I don't know. But it does appear that it might be possible.

One of the conditions the inventors suggest initiates false alarms is when patients move or turn over in their beds. This could disconnect a sensor or cause it to malfunction. A second sensor taking the identical measurement may still be functioning normally and produce a measurement indicating that nothing is wrong. The alarm would be suppressed ... although, if a sensor were disconnected, one would expect a disconnected-sensor indicator to be turned on.

Under the conditions the inventors suggest, it would appear that cross-checking measurements might reduce false positives without increasing false negatives. I would suggest that care be taken to ensure that false negative rates do not rise. With the array of new sensors and sensor technology becoming available, we're going to need to do a lot of research. Much of it would be computer simulations to identify those conditions where an adverse patient condition goes undetected or is suppressed by cross-checking measurements.
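As a taste of what those simulations might look like, here is a minimal Monte Carlo sketch comparing single-sensor alarming against a two-sensor cross-check. All the event and error probabilities are made up for illustration; real sensor characteristics would have to come from data.

```python
# Minimal Monte Carlo sketch: how does requiring two concurring sensors
# change false-positive and false-negative rates? All rates are invented.
import random

random.seed(1)
P_EVENT = 0.01      # probability an adverse condition is truly present
P_DETECT = 0.95     # per-sensor probability of flagging a real event
P_FALSE = 0.05      # per-sensor probability of flagging when nothing is wrong
TRIALS = 200_000

def run(n_required):
    fp = fn = events = non_events = 0
    for _ in range(TRIALS):
        event = random.random() < P_EVENT
        p = P_DETECT if event else P_FALSE
        flags = sum(random.random() < p for _ in range(2))  # two sensors
        alarm = flags >= n_required
        if event:
            events += 1
            fn += not alarm
        else:
            non_events += 1
            fp += alarm
    return fp / non_events, fn / events

for n in (1, 2):
    fp_rate, fn_rate = run(n)
    print(f"require {n} sensor(s): false-positive rate {fp_rate:.4f}, "
          f"false-negative rate {fn_rate:.4f}")
```

With these invented numbers, requiring both sensors to concur cuts the false-positive rate dramatically but raises the false-negative rate, which is exactly the trade-off the simulations would need to quantify.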

Post Script

For those who do not know, I am named on numerous patents and patent applications (pending patents). Not only that, I have written the description section of a few patent applications. So I have a reasonable sense of what is and is not patentable ... this in spite of the fact that I'm an experimental, cognitive psychologist, and we're not generally known for our patents.

So, what is my take on the likelihood that this application will be issued a patent? My sense is: not likely. As far as I can tell, there's nothing really new described in this application. The core of the invention, the method for reducing false alarms, is not new. Cross-checking or cross-verifying measurements to determine whether the system should be in an alarm state is not new. As someone who has analyzed datasets for decades, one of the first things one does with a new dataset is to check for outliers and anomalies - these are similar to alarm conditions. One of the ways to determine whether an outlier is real is to cross-check it against other measures to determine whether they're consistent with and predictive of the outlier. I do not see anything here that is particularly new or that passes what is known in the patent review process as the "obviousness test." For me, cross-checking measures does not reach the grade of patentability.







Wednesday, April 21, 2010

HE-75: Collecting Data and Modeling Tasks and Environment

This article expounds on my earlier article related to AAMI HE-75: Know what thy user does and where they do it. 


Collect and Represent the Data


Ideally the first steps in the design process should occur before a design is ever considered.  Unfortunately, in virtually every case I have encountered, a design for the user interface has already been in the works before the steps for collecting user and task related data have been performed.


Nevertheless, if you are one of the people performing the research, do as much as you can to push the design out of your mind and focus on objectively collecting and evaluating the data.  And in your data analysis, follow the data, not your own preconceived notions or anyone else's.


There are a variety of means for collecting data and representing it.  The means for collecting the data will generally involve:
  • Observation - collecting the step-by-step activities as a person under observation performs their tasks.
  • Inquiry - collecting data about a person's cognitive processes.
Once the data has been collected, it requires analysis and representation in a manner that is useful for later steps in the design process.  Data representations can include:
  • Task models - summary process models (with variants and edge cases) of how users perform each task.  Task models differ from workflow models in that they make no reference to specific tools or systems; a task model should be abstracted to a level without reference to actions taking place on a particular device or system.
  • Workflows - summary process models (with variants and edge cases) similar to task models, but with reference to a particular device or system.  For example, if the user interface consists of a particular web page, there should be a reference to that web page and the action(s) that took place.
  • Cognitive models - a representation of the cognitive activities and processes that take place as the person performs a task.
  • Breadth analysis - I have noted that this is often overlooked.  Breadth analysis organizes the tasks by frequency of use and, if appropriate, order of execution.  This is also the place to represent the tasks that users perform in their work environment but that were not directly part of the data collection process.
Detailed Instructions


I cannot hope to provide detailed instructions in this blog.  However, I can provide a few pointers.  There are published works by leaders in the field on how to collect, analyze and model the data.

Here are a few books that I can recommend; several can be found in my library:


User and Task Analysis for Interface Design by  J. Hackos & J. Redish


I highly recommend this book.  I use it frequently.  For those of us experienced in the profession and with task and user analysis, what they discuss will seem familiar - as well it should.  What they provide, however, are clear paths and methods for collecting data from users.  The book is well-structured and extremely useful for practitioners.  I had been doing task and user analysis for a decade before this book came out.  I found that by owning this book, I could throw away all my notes related to task and user analysis and use this book as my reference.


Motion and Time Study: Improving Work Methods and Management 
by F. Meyers
Motion and Time Study for Lean Manufacturing (3rd Edition) by F. Meyers & J. R. Stewart


Time and motion study is a core part of industrial engineering as a means of improving the manufacturing process.  Historically, time and motion studies go back to Frederick Taylor (http://en.wikipedia.org/wiki/Frederick_Winslow_Taylor), who pioneered this work in the late 19th and early 20th centuries.  I have used time and motion studies as a means of uncovering problematic designs.  They can be particularly useful when users are engaged in repetitive activities, as a means of improving efficiency, and even as a means of reducing repetitive stress injuries.  I have the first book in my library; it is a bit old (but very inexpensive), so I also include the second, more recent book by Meyers and Stewart.  The methods of time and motion study can be considered timeless, so even a book published in 1992 can still be valuable.

Time and motion studies can produce significant detail regarding the activities that those under observation perform.  However, these studies are time-consuming and as such, expensive.  Nevertheless, they can provide extremely valuable data that can uncover problems and improve efficiency.


Contextual Design: Defining Customer-Centered Systems (Interactive Technologies) by H. Beyer & K. Holtzblatt, and

Rapid Contextual Design: A How-to Guide to Key Techniques for User-Centered Design (Interactive Technologies) by K. Holtzblatt, J. B. Wendell & S. Wood


The first book I have in my library, but not the second.  I had used many of the methods described in Contextual Design before the book was published.  The contextual design process is one of the currently "hot" methods for collecting user and task data, and as such, every practitioner should own a copy of this book - at least as a reference.


I believe what's particularly useful about contextual inquiry is that it collects data about activities that are not directly observed but that affect the users and the tasks they perform.  For example, clinicians engaged in the remote monitoring of patients often have other duties, many of them patient related.  Collecting data exclusively targeting remote monitoring activities (or the activities specific to a targeted device or company) can miss significant activities that impact remote monitoring, and vice versa.


Additional Resources


As a graduate student, I had the privilege of having my education supported by Xerox's Palo Alto Research Center.  I was able to work with luminaries of the profession, Tom Moran and Allen Newell, on a couple of projects.  In addition, I was able to learn the GOMS model.  I have found this model useful in that it nicely blends objectively observed activities with cognitive processes.  However, the modeling process can be arduous and, as such, expensive.

Allen Newell and Herbert Simon are particularly well known for their research on chess masters and problem solving.  They were also well known for their research method, protocol analysis.  Protocol analysis has the person under observation verbally express their thoughts while engaged in a particular activity.  This enables the observer to collect data about the subject's thoughts, strategies and goals.  This methodology has been adopted by the authors of contextual inquiry and is one that I have often used in my research.


The problem with protocol analysis is that it cannot capture cognitive processes that occur below the level of consciousness, such as perception.  For example, subjects are unable to express how they perceive and identify words, or how they are able to read sentences.  These processes are largely automatic and thus not available to conscious introspection.  (I shall discuss methods that enable one to collect data involving automatic processes when I discuss usability testing in a later article.)  However, protocol analysis can provide valuable data regarding a subject's thoughts, particularly when the person reaches a point where confusion sets in or attempts to correct an error condition.

Here's a link from Wikipedia: http://en.wikipedia.org/wiki/GOMS.
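For readers who have never seen a GOMS-style analysis, here is a minimal sketch using the keystroke-level variant (KLM), with the commonly published operator-time estimates from Card, Moran and Newell; the task sequence itself is a made-up example, not from any real device.

```python
# Minimal keystroke-level (KLM-GOMS) sketch. Operator times are the
# commonly published estimates; the task sequence is invented.
KLM_TIMES = {
    "K": 0.28,  # keystroke (average typist)
    "P": 1.10,  # point with a mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def estimate_seconds(operators):
    """Sum operator times for a sequence like 'M P K'."""
    return sum(KLM_TIMES[op] for op in operators.split())

# Hypothetical task: acknowledge a monitoring alert.
# Think, point at the alert, click, home to keyboard, think, type a 4-char code.
task = "M P K H M K K K K"
print(f"Estimated task time: {estimate_seconds(task):.2f} s")
```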


Another book that I have in my library by a former Bell Labs human factors researcher, Thomas K. (TK) Landauer, is The Trouble with Computers: Usefulness, Usability, and Productivity.


This is a fun book.  I think it's much more instructive for the professional than Don Norman's book, The Psychology of Everyday Things.  (Nevertheless, I place the link to Amazon just the same.  Norman's is a good book for professionals in the field to give to family members who ask "what do you do for a living?")

Tom rails against many of the pressures and processes that push products, systems and services into the commercial space before they're ready from a human engineering standpoint.  Although the book is relatively old, many of the points he makes are more relevant today than when it was first published.  The impulse to design user interfaces without reference or regard for users has been clearly noted by the FDA, hence the need for HE-75.

Friday, April 9, 2010

Article: Wireless Remote Monitoring Prevents Complications of Chronic Diseases

An interesting article about the benefits of remote monitoring in the care of patients with chronic diseases from the Press of Atlantic City, 8 March 2010.  Here's the link to the article:  http://www.pressofatlanticcity.com/life/monday_health/article_1333e585-e3a6-5ba8-a411-75530f6b63cf.html

Quotes from the article:
Improving management
By early 2012, Americans will use about 15 million wireless health-monitoring devices, according to a forecast from ABI Research, which tracks mobile-technology trends. The mobile health market is projected to more than triple to $9.6 billion in 2012 from $2.7 billion in 2007, according to study from Kalorama Information Inc
[T]he first pilot project in the nation to assess whether the use of remote digital devices with data sent over the Internet to a doctor's office improved management of multiple chronic diseases - diabetes, heart disease and high blood pressure, also known as hypertension. 
Diabetics and hypertensive patients increased the number of days between appointments by 71 percent and 26 percent respectively ...
"One of the great promises of wireless (health) is making it a part of the patient's daily life, not an interruption to what they're doing every day," ...
From personal experience, I believe the last sentence I quoted is among the most important in the article.  The entire process should be so smooth, so automated, so uncomplicated and unintrusive that the patient's life is uninterrupted and the data is seamlessly collected and sent to the patient's caregiver.

Two other items to note.  The first is a brief discussion of the sensors connected to the patient's body.  They mention band-aid-size electrodes.  I am not sure whether these are the "digital plaster" that I've discussed in an earlier article (http://medicalremoteprogramming.blogspot.com/2009/11/digital-plaster.html) or something else.  It would be interesting to find out.  If I learn anything, I'll post it.  If you have any information, please enlighten us with a comment.

The second item of note is the article's discussion of payment, and who will provide it.  Given the convoluted nature of our system of payments, this will be the most difficult issue to resolve, I believe.  It's ironic, considering that remote monitoring saves money.  I think the technical issues will be minor in comparison.  I hope I am proved wrong.

Thursday, April 8, 2010

More on Knowing Thy Target User Population

Before moving forward into product development, I want to elaborate on the issues raised in my first two articles. This article focuses on the importance of knowing the target population and on ways to gather that information.

I have had some recent experiences that reinforced the importance of defining and clearly understanding the targeted user population, and of fully understanding and documenting what members of that population do and the environment(s) in which they live and work.

Before proceeding any further, please review my previous article on understanding your target population. The link to the article is below:

http://medicalremoteprogramming.blogspot.com/2010/03/know-thy-target-population.html

HE-75 clearly emphasizes the importance of understanding your target population.  The standard instructs that companies that develop medical devices should:
  1. Know their targeted user population
  2. Involve users early and often
  3. Accommodate user characteristics and capabilities. And in order to do this, one must first know what they are.

The information gathered about a target population should enable one to clearly define the qualities and characteristics of that population.  This is especially important when designing medical devices, particularly those targeted to patients.

I have seen organizations within a company, organizations that include program management, marketing and engineering, assume that they know the characteristics of the targeted population.  Once the product is deployed, the company comes to a rude awakening and learns that its assumptions were oftentimes false.  Neither the company nor the targeted user population(s) benefit from such a failure.

Methods for Gathering Target Population Data

The target population data is the most elemental data in the product development process.  All the descriptions about the targeted user population, their characteristics, culture and capabilities originate from this step in the research and development process.

So, how is this crucial data gathered?  First, a confession ... the amount of work I have performed at this stage of the process has been limited.  My training is in cognitive psychology and computer science.  Most often I have been the recipient of information about the targeted user population, and I have used the results of this first step as a means of recruiting subjects for my usability experiments and evaluations.  The training most suited to gathering this kind of data is in anthropology and sociology.  The process of collecting target user population data draws on ethnographic and participant-observation research methodologies.  The research can be observational.  It can be based on questionnaires administered orally or in writing.  It can be a structured interview.  It can be participant observation, where the observer participates in the activities of the target population.  It can be a combination of a variety of methods, including methods not listed above.

The objective is the development of a well-grounded description that captures the important, defining characteristics of the target population.  The description can be provided in a variety of ways, verbal or graphic.  It should use the clearest and most appropriate methods available to convey that information to the members of the product development organizations.

Interestingly enough, I have used the data gathering methods I listed above.  However, I used those methods to collect data for the second step, Knowing what the user does and where they do it.  In other words, to gather task and environmental data.

Potential Costs for Failure to Correctly Define the Target User Population

Consider the following scenario: I collect task and environmental data about the wrong population, a population that is not the target population.  What is the value of the results of my research?  What could be the cost to the company for this failure?  And what could be the cost to the target user population of having a device with a user interface unsuited to their needs?

In reality, the cost could be high, even if the product is not a dismal failure.  Given that we are all human, we share a wide variety of characteristics.  However, in the more stringent regulatory environment that is anticipated, such a failure could mean delay and additional research, engineering and product development costs.  If the product is intended to provide a new capability to providers and/or patients, a delay could mean that a competitor is first to market.  Thus the company could miss the competitive advantage of being first.

I have recent experience with two products targeted to patients.  In one case the target population was well understood and well defined, and members of that population were used in usability testing.  In the other case, the research and development organization had a limited understanding of the target population, and no member of the target population was involved at any stage of the research and development process or in the development of the user interface.  In the first case, the user interface research and development process was clear and logical.  In the second, the process is struggling; the organization is learning as it goes.  Each time it learns something new about its target population, the user interface has to be updated.  It has been a costly process, with so many reworks of the user interface that the integrity of the original design has been lost.  The interface appears deconstructed.  At some point the entire user interface will have to be redesigned, and that will likely come at the behest of the FDA enforcing HE-75.

A Final Thought

HE-75 instructs that medical product user interfaces should accommodate diverse groups of users and should be maximally accessible.  I see this as a design objective for any user interface: jargon should be limited as much as possible, and qualities that limit accessibility should not be designed in, or should be removed when detected.  Not every product can be accessible to all users, but every product should be clearly accessible to its target population.  And I believe that the FDA will insist on this.

Sunday, November 8, 2009

Remote Monitoring: Predictability

One of the most controversial subjects in measurement and analysis is the concept of predictability.  Prediction does not imply causality or a causal relationship.  It is about an earlier event or events indicating the likelihood of another event occurring.  For example, I've run simulation studies of rare events.  If any of my readers have done this, you'll have noticed that rare events tend to cluster around each other.  This means that if one rare event has occurred, it's likely that the same event will occur again in a relatively short time.
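If you'd like to see this clustering effect for yourself, here is a minimal sketch: simulate a long run of independent trials with a small per-trial event probability and inspect the gaps between successive events. Even with no causal mechanism at all, short gaps dominate, which is part of why rare events look clustered.

```python
# Minimal sketch: rare independent events still look clustered.
# Simulate trials with a small per-trial probability and inspect
# the gaps between successive events: short gaps dominate.
import random

random.seed(42)
P_EVENT = 0.001
TRIALS = 1_000_000

event_times = [t for t in range(TRIALS) if random.random() < P_EVENT]
gaps = [b - a for a, b in zip(event_times, event_times[1:])]

mean_gap = sum(gaps) / len(gaps)
short = sum(g < mean_gap for g in gaps)
print(f"{len(event_times)} events, mean gap {mean_gap:.0f} trials")
print(f"{short / len(gaps):.0%} of gaps are shorter than the mean gap")
```

For independent trials the gaps follow a geometric distribution, so roughly 63 percent of gaps come in below the mean; the eye reads those runs of short gaps as clusters.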

Interestingly, the clustering does not seem to be an artifact of the simulation system.  There are some real-world examples.  Consider the paths of hurricanes.  At any one time, it is rare for a hurricane to make landfall at a particular location.  However, once a hurricane has hit a particular location, the likelihood of the next hurricane hitting in that same general area appears to go up.  I can think of a couple of examples in recent history.  In 1996, hurricanes made landfall twice around the area of Wilmington, NC, and a third hurricane passed nearby.  In 2005, New Orleans was hit solidly twice.  If you look at those two hurricane seasons - 1996 and 2005 - you'll note that they show quite different patterns.  The rare-event paradigm suggests that when the patterns for creating rare conditions are established, they will tend to linger.

In medicine the objective is to find an event or condition that precedes the event of concern.  For example, an event of concern would be a heart attack.  It is true that once one has had a heart attack, another could soon follow; the conditions are right for a follow-on event.  However, the objective is to prevent a heart attack - not to wait for one to occur in order to deal with the next one that is likely to follow.  Physicians employ a variety of means to detect conditions that may indicate an increased likelihood of a heart attack.  For example, cholesterol levels that are out of balance might signal an increased likelihood of having a heart attack.


The problem is that most of the conditional indicators physicians currently employ are weak indicators of an impending heart attack.  The indicators are suggestive.  Let me illustrate using a slot machine.  Let's assume that hitting the jackpot is equivalent to a heart attack.  Each pull of the lever represents another passing day.  On its own, with the settings the machine initially has, the slot machine has some probability of hitting a jackpot with each pull of the lever.  However, the settings on the slot machine can be biased to make a jackpot more likely.  This is what doctors search for ... the elevated conditions that make a heart attack more likely.  Making a jackpot more likely does not mean that you're ever going to hit one.  It just increases the likelihood that you will.


To compound the problem, biasing conditions that appear to increase the likelihood of events such as heart attacks are often difficult to clearly assess.  One problem is that apparent biasing indicators or conditions generally don't have a clear causal relationship to the event.  They are indicators; they have a correlative relationship (and not always a strong one), not a causal one.  There are other problems as well.  For one, extending conclusions to an individual from data collected from a group is generally considered suspect.  Yet that is what's going on when we perform assessments on individuals: individuals are compared to norms based on data collected from large groups.  Over time and with enough data, norms may come to be considered predictors.  Search the literature and you'll note that many measurements that were once considered predictive no longer are.


The gold standard of prediction is the discovery of a predecessor event or events, something that precedes the watched-for event.  In Southern California everyone is waiting for the great earthquake, and scientists have been attempting to discover a predecessor event for it.  The same goes for detecting a heart attack or other medical events that threaten one's health.  Two clear problems stand in the way of discovering a clear predecessor event.  The first is finding an event that seems to precede the event of interest.  This is not easy; a review of the literature will inform you of that.  The second is that once you've found what appears to be a predecessor event, you must establish its relationship to the target event.  That is often a very long process, and even with effectively predictive predecessor events, the relationship is not always one to one.  One predecessor event may not be the only one that precedes the event of interest; several predecessor events could precede it.  Or the predecessor event may not always appear before the event of interest.


This ends my discussion of predictability.  Next time ... I'm going to speculate on what may be possible in the near term and how the benefits of remote monitoring and remote programming can be made available relatively inexpensively to a large number of people.


Article update notice

I have updated my article on Digital Plaster.  I have found an image of digital plaster that I have included, plus a link to one of the early news releases from the Imperial College, London, UK.  I shall include Digital Plaster in my next article.

Remote Monitoring: Update to Sensitivity and Accuracy

Before I dive into the subject of predictability (following article), I have an update on one of my previous articles: Remote Monitoring: Sensitivity and Accuracy.  It comes from a discussion I had with a colleague regarding what appeared to be counter-intuitive results.  The issue was the data sampling rate over a fixed period of time.  As the sampling rate increased, accuracy decreased.  Thus, with seemingly more data, accuracy went down.

Going back to the Signal Detection paradigm: the paradigm suggests that, as a rule, increasing the number of data points will reduce the false positives (alpha), and reducing false positives was a major objective of this research.  Frankly, for a time I was flummoxed.  Then I realized that I was looking at the problem incorrectly: the problem is with the resolution, or granularity, of the measurement.

The Signal Detection paradigm has as a fundamental assumption the concept of a defined event or event window - and detecting whether or not a signal is present within that event window.  The increased sampling rate compounded error, particularly false positive errors.  In effect, the system would take two or more samples within the conditions that set off the false positive, thus producing more than one false positive within an event window where only one should have been recorded.

How to overcome the problem of oversampling, of setting the wrong size event window?  Here are some things that come to mind:
  • First, recognizing that there's an event-window problem may be the most difficult.  This particular situation suggested an event-window problem because the results were counter to expectations.  Having primarily a theoretical perspective, I am not the best one to address this issue. 
  • Finding event windows may involve a tuning or "dialing-in" process.  However it is done, it may take many samples at various sampling resolutions to determine the best or acceptable level of resolution.
  • Consider adding a waiting period once a signal has been detected.  The hope is that the waiting period will reduce the chances of making a false positive error.
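To make the event-window problem concrete, here is a minimal sketch: one noise burst that spans a window trips every sample taken within it, so counting false positives per sample inflates the count as the sampling rate rises, while counting once per window does not.  All the numbers are invented.

```python
# Minimal sketch of the event-window problem: a noise burst spanning a
# window trips every sample within it, so per-sample counting inflates
# the false-positive count as sampling rate rises. Numbers are invented.
import random

random.seed(7)
WINDOWS = 10_000
P_BURST = 0.02   # probability a window contains a noise burst

def false_positives(samples_per_window, per_window):
    """Count false positives either per raw sample or once per window."""
    count = 0
    for _ in range(WINDOWS):
        if random.random() < P_BURST:           # burst spans the whole window
            hits = sum(random.random() < 0.9    # each sample likely trips
                       for _ in range(samples_per_window))
            count += min(hits, 1) if per_window else hits
    return count

for rate in (1, 4, 16):
    raw = false_positives(rate, per_window=False)
    agg = false_positives(rate, per_window=True)
    print(f"{rate:>2} samples/window: raw count {raw}, per-window count {agg}")
```

The raw count grows roughly in proportion to the sampling rate, while the per-window count stays near the true burst rate, which is the aggregation (or "dialing-in") remedy suggested above.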
On a personal note: I find it amusing that before this I had never encountered a granularity-related issue.  In theory I understood it, but I had never encountered it in my own research, in part because the research I have performed has always had clear event boundaries.  Nevertheless, within days of writing about sensitivity, accuracy and the granularity issue in this blog, I encountered a granularity problem.

Sunday, November 1, 2009

Remote Monitoring: Sensitivity and Accuracy ... using wine tasting as a model

This article focuses on measurement accuracy, sensitivity and informativeness.  Sometime later I shall follow with an article focusing on predictability.

In this article I discuss measurement accuracy, sensitivity and informativeness in the abstract, using wine tasting as an example.  In later articles, when I drill down into specific measurements provided by remote monitoring systems, I shall refer back to concept-foundation articles such as this one.



For remote monitoring to be a valuable tool, the measurements must be informative.  That is, they must provide something of value to the monitoring process - whether that monitor is an informed and well-trained person, such as a physician, or a software process.  However, there are conditions that must be met before any measurement can be considered informative.

For any measurement to be informative, it must be accurate.  It must correctly measure whatever it was intended to measure.  For example, if the measurement system is designed to determine the existence of a particular event, then it should register that the event occurred and the number of times it occurred.  Furthermore, it should reject, or not respond, when conditions dictate that the event did not occur - that is, it should not report a false positive.  This is something I covered in detail in my article on Signal Detection.  Measurement extends beyond mere detection to measurement tied to a particular scale, e.g., the constituents in a milliliter of blood.


A constituent of accuracy is granularity: how fine is the measurement, and is it fine enough to provide meaningful information?  Measurement granularity can be a significant topic of discussion, particularly when defining similarities and differences.  For example, world-class times in swimming are recorded to the hundredth of a second.  There have been instances when the touch-timing computer sensed that two swimmers touched the wall simultaneously and recorded identical times.  (I can think of a particular race in the last Olympics involving Michael Phelps and the butterfly.)  At the resolution of the computer touch-timing system (and I believe it's down to a thousandth of a second), the system indicated that both touched simultaneously with identical times.  But is that really true?  If we take the resolution down to a nanosecond, one-billionth of a second, did they touch simultaneously?

At the other end, if measurements are too granular, do they lose their meaningfulness?  This is particularly true when defining what is similar.  It can be argued that with enough granularity, every measurement will differ from every other measurement on that dimension.  So how do we assess similarity?  Assessing similarities (and differences) is vital to diagnosis and treatment.


We often compromise on issues of granularity and similarity by categorizing.  And oftentimes, categorization and assessments of similarity are context-specific.  This is something we do without thinking; we constantly assess and reassess relative distances.  For example, Los Angeles and San Diego are 121 miles from each other.  (I used Google to find this distance.)  To people living in either city, 121 miles is a long distance.  However, to someone in London, England, these two cities would seem to be nearly in the same metropolitan area.  From a great distance, they appear to be within the same geographic area.



Sensitivity is a topic unto itself.  Since I discussed it at some length in my article on Signal Detection, I shall keep this discussion relatively short.  In the previous discussion, I covered a single detector and its ability to sense and reject.  Here I want to add the dimension of multiple detectors and the capability to sense based on multiple inputs.  I am not discussing multiple trials to test a single detector, but multiple measures on a single trial.  Multiple measurements on different dimensions can, when combined, provide greater sensitivity than a single measurement system, even if each individual measurement is less accurate and less sensitive.  I'll discuss this more in depth in a later article.
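Here is a minimal sketch of that combination effect: three individually weaker detectors, combined by a two-of-three vote, can beat a single stronger detector on both hits and false alarms.  The detection and false-alarm rates are invented, and for simplicity I assume independent, identical detectors rather than measurements on truly different dimensions.

```python
# Minimal sketch: majority-voting three weak detectors can beat one
# stronger detector on both hits and false alarms. Rates are invented.
import random

random.seed(3)
TRIALS = 100_000

def rates(p_hit, p_fa, n, votes_needed):
    """Estimate hit and false-alarm rates for an n-detector vote."""
    hits = sum(sum(random.random() < p_hit for _ in range(n)) >= votes_needed
               for _ in range(TRIALS)) / TRIALS
    fas = sum(sum(random.random() < p_fa for _ in range(n)) >= votes_needed
              for _ in range(TRIALS)) / TRIALS
    return hits, fas

single = rates(p_hit=0.90, p_fa=0.10, n=1, votes_needed=1)
fused = rates(p_hit=0.85, p_fa=0.15, n=3, votes_needed=2)  # weaker detectors
print(f"single strong detector: hit {single[0]:.3f}, false alarm {single[1]:.3f}")
print(f"2-of-3 weak detectors:  hit {fused[0]:.3f}, false alarm {fused[1]:.3f}")
```

With these numbers the vote of three weaker detectors yields roughly a 0.94 hit rate and a 0.06 false-alarm rate, better on both counts than the single stronger detector.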


Informativeness ... this has to do with whether the output of the measurement process - its accuracy (granularity) and sensitivity - provides anything of value.  And determining the value depends on what you need that measurement to do for you.  I think my example provides a reasonable and accessible explanation.


Wine Tasting - Evaluating Wine


Over the years, people interested in wine have settled on a 1-100 scale - although I do not know of an instance where I have seen anything less than an 80 rating.  (I am not a wine expert by any stretch of the imagination.  I know enough to discuss it, that's all.  If you're interested, here's an explanation; however, they will want to sell you bottles of wine, and some companies may block access.  Nevertheless, here's the link: http://www.wine.com/v6/aboutwine/wineratings.aspx?ArticleTypeId=2.)  Independent or "other" wine raters use similar rating systems.  Wine stores all over the US often have their own wine rater who "uses" one of these scales.  In theory, the scales are reasonably similar.  In practice, they can be quite different.  Two 90 ratings from different wine raters don't always mean the same thing.


So, what is a buyer to do?  Let's look at wine rating in a mechanistic way.  Each wine rater is a measuring machine who is sensitive to the various constituents of a wine and how those constituents provide an experience.  Each rating machine provides us with a single number and often a brief description of the tasting experience.  But for most people buying wine, it's the number that's most important - and that number can often lead to the greatest disappointment.  When we're disappointed, the measurement has failed us.  It lacks informativeness.

How to remedy the disappointment of expectations and, oftentimes, overpayment?  I can think of four ways:
  1. Taste the wine yourself before you buy it.  The wine should satisfy you, and you can determine whether it's worth the price.  However, I've met many who are not satisfied with this option for a variety of reasons, ranging from not trusting their own taste or "wine knowledge" to knowing that they are not in a position to taste the wide variety of wines available to professional wine tasters, and thus being concerned about "missing out."  Remote monitoring presents a similar situation.  A remotely monitored patient is not in the presence of the person doing the monitoring, so the experience of seeing the patient along with the measurement values is missing.  However, remote monitoring provides the capability to deliver a great deal of information about many patients without the need to see each individual.  The catch is that the person doing the monitoring needs to trust the measurements from remote monitoring.
  2. Find a wine rater who has tastes similar to yours.  This might take some time or you might get lucky and find someone who likes wine the way you like it.  Again, this all boils down to trust.
  3. Ask an expert at the wine store.  The hope is that the person at the store will provide you with more information, ask about your tastes and what you're looking for.  Although this is not experiential information, you are provided with more information on more dimensions, with the ability to re-sample on the same or different dimensions (i.e., ask a question and receive an answer).  In this sense, you have an interactive measurement system.  (At this juncture, I have added, by implication, remote programming to the mix.  Remote programming involves adjusting, tuning or testing additional remotely monitored dimensions.  In this sense, the process of remote monitoring can be dynamic and inquiry-driven.  This is a topic for later discussion.)
  4. Consolidate the ratings of multiple wine raters.  Often several wine raters have rated the same wine.  This can get fairly complicated: in most cases not all wine raters have rated the same wine, and you'll probably get a different mix of raters for each wine.  This too may involve some level of tuning based on the "hits" and "misses."  (See the sketch after this list.)
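As a minimal sketch of option 4, one simple consolidation scheme corrects each rater for their average leniency before averaging whatever ratings exist for each wine.  The raters, wines, and scores here are all invented.

```python
# Minimal sketch of consolidating ratings (option 4): correct each rater
# for average leniency, then average what's available per wine.
# All raters, wines, and scores are invented.
ratings = {
    "rater_a": {"wine_1": 92, "wine_2": 88, "wine_3": 90},
    "rater_b": {"wine_1": 89, "wine_3": 85},
    "rater_c": {"wine_2": 94, "wine_3": 93},
}

# Each rater's leniency: their mean score relative to the global mean.
all_scores = [s for r in ratings.values() for s in r.values()]
global_mean = sum(all_scores) / len(all_scores)
bias = {r: sum(s.values()) / len(s) - global_mean for r, s in ratings.items()}

wines = {w for r in ratings.values() for w in r}
for wine in sorted(wines):
    adjusted = [score - bias[rater]
                for rater, scores in ratings.items()
                for w, score in scores.items() if w == wine]
    print(f"{wine}: consolidated {sum(adjusted) / len(adjusted):.1f}")
```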
This ends this discussion of measurement.  Measurement is the foundation of remote monitoring.  What remote monitoring measures, the accuracy and sensitivity of that measurement, and whether that measurement is informative are key to its value.  We've also seen a place for remote monitoring as a means of getting at interesting measurements, changing measurement from a passive to an active, inquiry-driven process.


Next time I shall discuss a recent development with respect to physiological measuring systems.  Here's a link to an article that I believe many will find interesting: http://mobihealthnews.com/5142/tedmed-wireless-health-has-killed-the-stethoscope/