Sunday, August 1, 2010

HE-75 Topic: Cleaning Up the Mess

I received a reminder this week of what usability professionals are often called on to do – cleaning up the mess created by a failed process. Somehow, the people responsible for designing an awful, unusable and, in some cases, useless user interface expect the usability expert to come in, take one look and create a beautiful user interface. This is absurd! It was the "nightmare" come true - something related to one of my other postings: HE-75 Topic: Design First and Ask Questions Later.

Writing from my own perspective, there is nothing that a usability professional likes doing less than correcting a failed design that resulted from a failed design process. This week I was asked to save a group of programmers and user interface designers from the monstrosities that they had created. What was particularly strange was that the leader of the project thought that I could redesign something just by looking at what they had created. It was bizarre. Unfortunately, I had to deliver several harsh messages regarding the design process and the design that were not well received. (Nevertheless, that is my job.)

Here is the point I want to make to anyone who reads this. Process and the resulting design should be considered two sides of the same coin. A good design process nearly always results in a good design. A nonexistent or poor design process leads to a poor design. HE-75's design process can serve as a foundation for designing user interfaces in nearly any industry, particularly in those industries where the potential for harm is severe. Where I am currently working, I plan to use HE-75 as one of the foundation documents for setting user interface design standards. And as I mentioned, I am not currently working in the medical or medical device industry. However, I have come to believe that even in this industry, the level of harm can be significant. Thus, I shall incorporate HE-75.
Next time, I'll review some of the literature that might be of use to the community.

Saturday, July 24, 2010

Advanced Technology

I mentioned in an earlier article that I have moved out of medical devices for the time being.  However, I have not moved away from remote monitoring or remote programming (it is called "remote configuration" where I am now).

We have been given the go-ahead to explore a variety of new and, what some may consider, off-the-beaten-path technologies.  Although I shall not be able to discuss specific studies or approaches, I shall be able to discuss how some technologies not currently used by the medical and medical device communities might be useful to them.

I shall have periodic updates on this topic.

Here are some platforms to consider for mobile technology.  (This is not part of the work that I am doing now.  It is more related to my earlier work.)

Useful Versus Usable

This is a discussion that you are not likely to read in usability texts – the topic of useful versus usable, and the value of each. Just recently I had a discussion with someone on just this topic. I have also had numerous discussions with others, and each time it surprises me that people often do not know the difference between the two and the value of each.

Useful and Usable

If you go to the dictionary, you will discover that “useful” means “to be of serviceable value, beneficial” and, in addition, “of practical use.” Pretty straightforward.

On the other hand, the definition of “usable” is “capable of being used” and “convenient and viable for use.” Also a straightforward definition.

However, if you probe more deeply into the definitions, you will note that “useful” is the first, necessary quality of a tool or system. It must be useful – or why use it? Usability is also a quality of a tool or system, but it is secondary to the quality of being “useful.” Necessary, yes; nevertheless, it is still secondary.

Usefulness as a quality of a tool or system is not addressed in HE-75, or in any other usability standard that I have encountered. (If anyone knows of a standard where usefulness is addressed, please add a comment to this discussion.) Usefulness is assumed.

However, I have learned that in the real world, the usefulness of any tool or system should not be assumed. It should be tested. Furthermore, with complex systems, the fundamental capabilities of a system or tool are often useful. However, not all of the capabilities of that system may be.

I have direct experience with a remote monitoring system where the primary or fundamental capabilities of the system have clear use. However, with each release of this system, as more capabilities are added, the useless capabilities may be on the verge of outnumbering the useful ones.

Bottom Line

  • Usefulness is always more important than usability. If it is not useful, it is fundamentally worthless or, at best, excess baggage and a drag on the actual and perceived quality of the tool or system.

  • Usefulness should never be assumed. It should be demonstrated. I know of too many projects where usefulness was not demonstrated. This led to the development of capabilities that wasted time and money, and can damage reputations.

Sunday, July 18, 2010

Gadgets of the Future

An interesting and somewhat light-hearted article about future systems that could monitor us.  This was published in the Chicago Tribune.

Here's the link:

HE-75, Usability and When to Prototype and Usability Test: Take 1

Prototyping and Testing will be a topical area where I shall have much to contribute.  Expect numerous articles to appear on this topic.

I had a discussion a few days ago with one of my colleagues who has worked as a user interface designer, but has little knowledge of human factors.  He was completely unaware of the concepts of "top-down" and "bottom-up" approaches to user interface design.  I provide for you the essence of that discussion.

Top-Down Approach

The top-down approach begins with a design.  Most often the initial design is a best or educated guess based on some set of principles – aesthetics, "accepted" standards of good design, or something else.  The design is usability and/or acceptance tested in some manner (anywhere from laboratory testing to field-collected data).  In response to the data, the design is reworked.  The process is continual.  Recent experience has suggested that the top-down approach has become the predominant design methodology, particularly for the development of websites.

Top-down is a valid process, particularly for the deployment of new or unique products where a failed design does not lead to serious consequences.  It can get a design into users' hands more quickly.  The problem with a top-down approach (even when practiced correctly) is that it relies on successive approximations to an ill-defined or unknown target.  To some degree it is similar to throwing darts blindfolded, with some minimal correction information provided after each throw.  The thrower will eventually hit the bull's-eye, but it may take lots and lots of throws.
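The blindfolded-darts analogy can be made concrete with a toy simulation. Every number below (target position, scatter, step size, tolerance) is invented purely for illustration; this is not a model of any real design process, just a sketch of how slowly successive approximation converges when the only feedback after each attempt is the direction of the miss.

```python
# Toy model of top-down design as blindfolded dart throwing: each attempt
# scatters widely around the current best guess, and the only feedback is
# which side of the (unknown) target the attempt landed on.
import random

def throws_to_hit(target, tolerance=0.5, step=0.4, noise=2.0, seed=1):
    """Count attempts until a noisy throw lands within `tolerance` of target."""
    rng = random.Random(seed)
    guess = 0.0  # the current design, far from the unknown target
    for attempt_number in range(1, 10_000):
        # The throw lands near the guess, with a lot of scatter ("blindfolded").
        attempt = guess + rng.uniform(-noise, noise)
        if abs(attempt - target) <= tolerance:
            return attempt_number
        # Minimal correction: move a small fixed step toward the miss direction.
        guess += step if attempt < target else -step
    return None

# Even with feedback after every throw, hitting the bull's-eye takes many tries.
print(throws_to_hit(target=10.0))
```

Tightening the feedback (a larger `step`, less `noise`) stands in for better data collection between iterations; with the coarse feedback shown here, dozens of throws are typical.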

The top-down approach may have a side benefit in that it can lead to novel and innovative designs.  However, it can have the opposite effect when designs are nothing more than "knock-offs" of the designs of others.  I have seen both come out of the top-down approach.

Bottom-Up Approach

HE-75 teaches the use of a bottom-up approach where first one defines and researches the targeted user population.  Contextual Inquiry is also a bottom-up approach.  Since I have already discussed researching the targeted user population in depth, I'll not cover it here.  

With the bottom-up approach, the target is clear and understood, and tailoring a design to the user population(s) should be a relatively straightforward process.  Furthermore, the bottom-up approach directly addresses the usefulness issue with hard data and, as such, is more likely to lead to the development of a system that is not only usable, but useful.

Useful vs. Usable

I'll address this topic more deeply in another article.  It suffices to say that usability and usefulness are distinctly different system qualities.  A system may be usable – that is, the user interface may require little training and be easy to use – but the system or its capabilities may not be useful.  Or, and this is what often happens particularly with top-down approaches, much of what the system provides is not useful, or is extraneous.

Personal Preference

I am a believer in the bottom-up approach.  It leads to the development of systems that are both usable and useful sooner than the top-down approach.  It is the only approach that I would trust when designing systems where user error is of particular concern.  The top-down approach has its place and I have used it myself, and will continue to use it.  But, in the end, I believe the bottom-up approach is superior, particularly in the medical field. 

Saturday, July 17, 2010

HE-75 Touch Screen Recommendations

I have found HE-75 to be one of the best human factors standards ever produced.  However, I have found its analysis and recommendations regarding touch screens lacking, and out of date.  To put the HE-75 touch screen recommendations in perspective ... in the late 1980s and early 1990s, I ran a user interface design and implementation project inside a larger project at Bell Laboratories.  To make a long story short, one of the user interfaces we needed to design and produce was a touch screen interface.  The touch screen used a CRT as a display device, and it was as flat as we could make it.  In addition, the distance between the touch screen surface and the display was about 35 mm.  When I read of the issues related to touch screens and the recommendations in HE-75, I experienced déjà vu, as if I had been transported back to that time.

Some of the most significant advances in user interfaces have been in display technology and touch screens – in hardware and, in particular, in software.  Apple Computer has been a leader in combining advances in display technology, touch screen design and touch screen interface software.  I would have expected the HE-75 committee to have incorporated these advances and innovations in touch screen software into the standard.  However, what I found appears to me to be ossified thinking, or an ignoring of what has transpired.

People in the medical field are using smart phones with their advanced touch screen interfaces in their medical practice.  Smart phone touch screens and now the Apple iPad have become the de facto standard in touch screen technology.  My previous article related to consistency ... here's a consistency issue.  Is it wise to suggest that medical device touch screen interfaces look and operate in a way different from the accepted standard in the field?  I know this is not a simple question, but I think it is one that will need to be addressed in future editions of HE-75.

The Return: The Value of Consistency

I have been distracted for a couple of months ... working to find and land another consulting contract.  I have completed that task.  However, it is outside of the medical device industry.  I am not completely happy with the situation; however, having a position outside of the medical device industry does afford some freedom when commenting on it.

Another reason for the significant gap between my last post and this one has been that I was working on a long and intricate post regarding hacking or hijacking medical device communications.  The post began to look more like a short story than a commentary.  The more I worked on it, the longer and more convoluted it became.  At some point, I may publish portions of it.

This experience with the article that would never end has led me to change the way I'll be posting articles in the future.  From now on, my articles will be short – two to four paragraphs – and will address a single topic.  I think that some of my posts have been too long and, in some cases, overly intricate.  I still plan to cover difficult topics, but in a format that is more readable and succinct.

Consistency in User Interfaces

When it comes to making a user interface "usable," the two key qualities are 1. performance and 2. consistency.  Performance is obvious.  If the interface is slow, unresponsive, sluggish, etc., people will not use it.  Or those who are stuck with using it will scream.  Consistency is somewhat less obvious and more difficult to describe.  However, when you encounter a dramatically changed user interface on an application that you thought you knew, you understand the value of consistency.

Recently, I encountered a newer version of Microsoft Office.  Gone are the pull-down menus, and the organization of the operations and tools has changed dramatically.  Frankly, I hate the new version.  If the newer version of Office had been my first encounter with Office, I know that my reaction would be different.  The new version is inconsistent with the older version.  My ability to transfer my knowledge from the older version is hindered by the dramatic changes that have been made.

Consistency is about providing your users with the capability to reapply their knowledge of how things work to new and updated systems.  Operations work the same between applications and between older and newer versions.  In the case of the new version of Word, I am grateful that once I have selected a particular operation, such as formatting, it works essentially the same as in the older version.  However, I have tried to use the newer version of PowerPoint and its drawing capabilities.  I have not yet been successful, and am using a drawing tool that I know how to use.

Consistency has a side benefit for the development process as well.  When operations, layouts, navigation, etc. become standardized, extending the design of a user interface becomes easier, less risky and less likely to be rejected by users.  The effect of creating consistent user interfaces is similar to having a common language. More on consistency and HE-75 in a later post.

Tuesday, May 4, 2010

HE-75 Topic: Design First and Ask Questions Later?

I was planning on publishing Part 2 of my Medical Implant Issues series.  However, something came up that I could not avoid discussing because it perfectly illustrates the issues regarding defining and understanding your user population.

A Story

I live in the South Loop of Chicago - within easy walking distance of the central city ("the Loop").  I do not drive or park a car on the streets of the city of Chicago.  I walk or take public transportation.

One morning I had to run a couple of errands and as I was walking up the street from my home, I saw a man who had parked his car and was staring at the new Chicago Parking Meter machine with dismay.  I'll tell you why a little later.

Depending on how closely you follow the news about Chicago, you may or may not know that Chicago recently sold its street parking revenue rights to a private company.  The company (which, as you might imagine, has political connections) has recently started to replace the traditional parking meters (that is, one space, one meter) with new meters.  Separately painted parking spaces and their individual meters have been removed.  People park their vehicles in any space on the street where their vehicle fits, go to a centralized meter on the block where they parked and purchase a ticket (or receipt) that is placed on the dashboard of the vehicle.  Printed on the ticket is the end time until which the vehicle is legally parked.  After that time passes, the vehicle can receive a citation for parking illegally.  Many cities have moved to this system.  However, this system is missing something that I have seen on other systems.

Here's a photograph of the meter's interface ...

Chicago Street-Parking Meter

I have placed a black ellipse around the credit card reader and a black circle around the coin slot.  Do you see anything wrong in the photo?  ...

Getting back to the man who was staring at the parking meter ... he saw something that was very wrong ... there was no place to insert paper money into the meter.

I was surprised. This was the first time I had ever taken the time to really look at one of these meters.

As street parking goes, this is expensive.  One hour will cost you $2.50.  The maximum time that you can park is 3 hours – translated, that's 30 quarters, if you have the change.  You can use a credit card.  However, there are a lot of people in the City of Chicago who don't have credit cards.  This man was one of them, and he didn't have 30 quarters either.

I have seen machines used in other cities and towns, and they have a place for paper money.  Oak Park, the suburb immediately west of Chicago, has similar meters, and they have a place to use paper money to pay for parking.  What gives with this meter?

I largely take the City of Chicago off the hook for the design of this parking meter.  I don't believe they had anything to do with the design of the meter.  I have parked in city garages over the years (when I was living in the suburbs), and the city garages have some pretty effective means of enabling one to pay for parking - either using cash (paper money) or a credit card.  But I think the city should have been more aware of what the parking meter company was deploying.  I think they failed the public in that regard.

I could take the cynical view and suggest that this is a tactic by the private company to extract more revenue for itself and the city through issuing parking citations.  However, I think it more likely that someone designed the system without any regard for the population that was expected to use it, and the city fell down on its responsibility to oversee what the parking company was doing.

Failure to Include a Necessary Feature

For the purposes of examining the value of usability research - that is, the research to understand your users and their environment - what does this incident teach?  It teaches that failure to perform the research to understand your user population can result in the failure to include a necessary capability - such as a means to pay for your parking with paper money.

What I find interesting (and plausible) is that this parking meter design could have been usability tested and passed the test.  The subjects involved in the usability test could have been provided quarters and credit cards, and under those conditions the subjects would have performed admirably.  However, the parking meter fails the deployment test because the assumptions regarding the populace, conditions and environment fail to align with the real needs of the population it should have been designed to serve.

Another Failure: Including the Unnecessary or Unwanted Features   

As I was walking to my destination, I started composing this article.  While thinking about what to include, I remembered what a friend of mine said about a system whose development he was in charge of.  (I have to be careful about how I write this.  He's a friend of mine for whom I have great respect.  And defining the set of features that are included in this system is not his responsibility.)

He said that "... we build a system with capabilities that customers neither need nor want."  The process for selecting capabilities to include in a product release at this company is insular - more echo chamber than outreach to customers or users.  As a result, this company has failed to understand its customers, its users, their work environment, etc.

Some might suggest that the requirements-gathering process should reduce the likelihood of either failure occurring - failing to include necessary features, or including unnecessary or unwanted ones.  Again, I know that in the case of my friend's company, requirements gathering takes its direction largely from competitors instead of customers and/or users.  So what often results is the release of a system that fails to include capabilities that customers want and includes capabilities that customers do not want or need.

I don't know about you, but I see the process my friend's company engages in as a colossal waste of money and time.  Why would any company use, or continue to use, such a process?

Ignorance, Stupidity or Arrogance - Or a combination?

I return to the title of this article, "Design First and Ask Questions Later?", and the question I pose above.  I have seen company after company treat design as an end in itself, failing to understand that creating a successful design requires an effective process that includes research and testing.  Failure to recognize this costs money and time, and possibly customers.  It is not always a good idea to be first to market with a device or system that includes a trashy user interface.

So why do companies continue to hang on to failing processes?  Is it ignorance, stupidity or arrogance?  Is it a combination?  My personal experience suggests a combination of all three factors, with the addition of two others: delusion and denial.  These are two factors that we saw in operation leading up to the financial crisis of 2008.  I think people will continue to believe that what they're doing is correct right up to the point when the whole thing comes crashing down.

The Chicago parking meter has a user interface with a poor and inconsiderate design ... inconsiderate of those who would use it.  (If I get comments from city officials, it will probably be for that last sentence.)  However, I don't believe that the parking meter company will face any major consequences, such as being forced to redesign and redeploy new meters.  They will have gotten away with creating a poor design.  And they're not alone.  There are lots of poorly designed systems, and some of the poor designs can be and have been life-threatening.  Yet there are no major consequences.  For medical devices and systems, I believe this needs to change, and I hope the FDA exerts its oversight authority to ensure that it happens.

Medical Device Design: Reader Suggested Books

One of my readers provided me with the following list of books related to usable medical product design.  I pass this list of three books on to you.  I do not yet have them in my library, but they would be suitable additions.

Medical Design Article: FDA announces Medical Device Home use Initiative

As I was working on a human factors related article, this article from Medical Design appeared.  Here's the link to the article:

I thought that this article was interesting and telling with respect to how the FDA will assert its regulatory authority regarding usability issues. Here are a few quotes from the article.

Recognizing that more patients of all ages are being discharged from hospitals to continue their medical treatment at home, the U.S. Food and Drug Administration announced an initiative to ensure that caregivers and patients safely use complex medical devices in the home. (My emphasis.) The initiative will develop guidance for manufacturers that intend to market such devices for home use, provide for post-market surveillance, and put in place other measures to encourage safe use of these products. The FDA is also developing education materials on home use of medical devices.
These home care patients often need medical devices and equipment such as hemodialysis equipment to treat kidney failure, wound therapy care, intravenous therapy devices, and ventilators. 

Monday, May 3, 2010

HE-75 Topic: Risk Management

One more HE-75 topic before proceeding into design and design-related activities.  The topic: risk management.

Reading HE-75, you will note that this document continually discusses risk management and reducing risk.  In fact, the entire document is fundamentally about reducing risk, the risks associated with a poor or inappropriate design.

If you drive a car, especially if you have been driving cars for more than a decade or two, you will note that driving a car with well-designed controls and well-laid-out displays seems inherently easier than driving one that is poorly designed.  Furthermore, it has been demonstrated time and again that safety increases when a driver has been provided well-designed controls and displays; driving becomes less risky for everyone concerned.

Car makers now see safety as a selling point.  (Look at a car that was built in the 40s, 50s or 60s and you'll note how few safety features the car included.)  Manufacturers are beginning to include driver-error detection systems in their luxury models.  For example, one manufacturer has a system that alerts the driver to the presence of another vehicle in the space the driver wants to move into.  One of the qualities of a well-designed user interface is the ability to anticipate the user, identify and trap errors or potential user errors, and provide a means or path for preventing or correcting the error without serious consequences.  Car manufacturers have been moving in this direction.  I suggest that the adoption of HE-75 will be the FDA's way of pushing medical manufacturers in the same direction.

Risk Management: Creating a Good Design and Verifying It

My many blog postings on HE-75 will address the specifics of how to create a good design and verify it, and the process of incorporating these design and verification processes into a company's risk management processes.  In this posting I want to address two issues at a high level.

First, I want to address what a good design is and how to create one.  Creating a good design requires a process such as the one outlined by HE-75.  I am often amused at hiring managers and HR people who want to see a designer's portfolio while having no conception of how the designs were created.  A good user interface design is not artistry; it is the result of an effective process.  It should not only look good, but should enable users to perform their tasks effectively and with a minimum of errors.  Furthermore, it should anticipate users, trap errors and prevent serious errors from occurring.  And finally, it should provide users with paths or instructions for correcting errors.  This is what HE-75 teaches researchers and designers, and to that end, the design process should reduce risk.  Think this is not possible?  Then I suggest you spend some time in the cockpit of a commercial airliner.  It is possible.

Second, HE-75 teaches that design verification should be empirical and practiced often throughout the design process.  This is an adjunct to classic risk management, which tends to be speculative or theoretical in that it relies on brainstorming and rational analysis.  HE-75 teaches that medical device and system manufacturers should not rely on opinions alone - although opinions provided by subject-matter experts can provide valuable guidance.  HE-75 instructs that subjects drawn from the targeted population(s) should be used to guide and test the design at each stage of the process.  This is the essence of risk management and risk reduction in the design of user interfaces.

Additional Resources

I have this book in my library.  It provides some good information, but it's not comprehensive.  Unfortunately, it's the only book I know of in this field.  

These books I do not own, but I provide the links for information purposes.  I am surprised at how few books there are in the field of medical risk management.  That may go a long way toward explaining the large number of medical errors, especially the ones that injure or kill patients.

Risk Management Handbook for Health Care Organizations, Student Edition (J-B Public Health/Health Services Text) 

Medical Malpractice Risk Management 

Saturday, May 1, 2010

HE-75 Topic: Meta Analysis

A "meta-analysis" is an analysis of analyses.  Meta-analyses are often confused with literature searches, although a literature search is often the first step in a meta-analysis.

A meta-analysis is a consolidation of similar studies on a single, well-defined topic.  Each study may have covered a variety of topics, but to be included in the meta-analysis, each study must have addressed the common topic in depth and collected data regarding it.

The meta-analysis is a well-respected means of developing broad-based conclusions from a variety of studies.  (I have included a book on the topic at the end of this article.)  If you search the literature, you will note that meta-analyses are often found in the medical literature, particularly in relation to the effectiveness of, or problems with, medications.

In some quarters, the meta-analysis is not always welcome or respected.  Human factors (human engineering) is rooted in experimental psychology, and meta-analyses are not always respected or well received in this community.  It is work outside of the laboratory.  You are not collecting your own data but using data collected by others; thus the tendency has been to consider the meta-analysis lesser.

However, the meta-analysis has a particular strength in that it provides a richer and wider view than a single study with a single population sample.  It is true that the studies of others often do not directly address all the issues that researchers could study if they performed the research themselves.  In other words, the level and types of research controls are those chosen by the original researchers, not by you.  But, again, the meta-analysis can provide a richness and a numeric depth that a single study cannot.

Thus the question is: to use or not to use a meta-analysis when collecting data about a specific population?  Should a meta-analysis be used in lieu of collecting empirical data?

Answer: there are no easy answers.  Yes, a meta-analysis could be used in lieu of an empirical study, but only if there are enough applicable, recently performed studies.  However, I would suggest that when moving forward with a study of a specific target population, the first step should be to initiate a literature search and perform some level of meta-analysis.  If the data is not available or is incomplete, then the meta-analysis will not suffice.  But a meta-analysis is always a good first step, and a relatively inexpensive one, even if the decision is made to go forward with an empirical study.  The meta-analysis will aid in the study's design and data analysis, and will act as a guide when drawing conclusions.
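The mechanics of consolidating studies can be illustrated with a minimal fixed-effect meta-analysis sketch using inverse-variance weighting, one common approach. The effect sizes and standard errors below are made-up numbers for illustration only; they do not come from any real studies, and a real meta-analysis would also need to check heterogeneity before trusting a fixed-effect pooled estimate.

```python
# Minimal fixed-effect meta-analysis: pool hypothetical study effect sizes
# using inverse-variance weights, then form a 95% confidence interval.
import math

# (effect_size, standard_error) for each hypothetical study
studies = [(0.30, 0.12), (0.45, 0.20), (0.25, 0.10), (0.38, 0.15)]

# Each study is weighted by the inverse of its variance, so more precise
# studies contribute more to the pooled estimate.
weights = [1.0 / (se ** 2) for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval for the pooled effect
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

print(f"Pooled effect: {pooled:.3f}")
print(f"95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
```

Notice how the pooled estimate sits closest to the most precise (smallest standard error) studies; this is the "numeric depth" a single study cannot provide.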

Additional Resources

Friday, April 30, 2010

How to Hack Grandpa's ICD, Reprise ...

Several weeks ago I published an article (How to Hack Grandpa's ICD) discussing an article published in an IEEE journal that described a variety of ways to hack, illicitly manipulate or modify an ICD.  To those in the know, this is a greater concern than I had imagined.  As it turns out, not surprisingly, the concerns about hacking are not limited to ICDs.

One of my readers notified me of a recent article published on the CNN website that discusses concerns regarding the capability to hack ICDs.  Here's the link to the article, which was published on 16 April 2010.  It was also republished in the Communications of the ACM (of which I am a member) on 19 April 2010.

Much of the article appears below.  Before proceeding, I would like to add a little background about myself and a bit of commentary regarding hacking.  I am a co-founder of a data leak security company, Salare Security.  If anyone is interested in what the company does, please do follow the link above.  (As of this point, I am a silent partner in the company.  My partners are currently running the business.)  I mention this because I have some real-world knowledge regarding system vulnerabilities.

From experience and research I have found that even vulnerabilities that seem unlikely to be exploited inevitably are exploited.  If something can be gained from a target and a vulnerability exists, you can be assured that the vulnerability will be exploited.

For example, the specific vulnerabilities that Salare Security addresses were, months ago, considered unlikely to be exploited because of a lack of knowledge and a lack of interest on the part of hackers.  However, the vulnerabilities are of significant interest because, if exploited, the damage to a government, a company or another organization could be severe.  Nevertheless, the thinking in the industry has been that the chance of these vulnerabilities being exploited over the near term was remote.

However, recently, we have received information that the system vulnerabilities that Salare Security addresses have been exploited by a government funded group of hackers.  So much for "nothing happening in the near term."  

In the case of the vulnerabilities that Salare Security protects ... the hackers were after information.  (I do not know the details of the attack so I cannot tell you what information they stole.)  But, why might hackers develop systems to exploit medical device vulnerabilities?  

My sense is that the hackers most likely are not out to attack, injure or kill people with medical devices.  In my estimation, these hackers would be engaged in an extortion scheme against a device manufacturer or manufacturers.  This suggestion is based on some of the current trends in criminal activity. (Please see:,289142,sid14_gci1510919,00.html?track=NL-102&ad=763387&asrc=EM_NLN_11442713&uid=6228713)  The article references other possible motives for hacking medical devices.  I would strongly side with any motivation that opens the door for extracting money from a manufacturer.

Here is the article published by CNN

Scientists work to keep hackers out of implanted medical devices
By John D. Sutter, CNN  (4/16/2010)

(CNN) -- Nathanael Paul likes the convenience of the insulin pump that regulates his diabetes. It communicates with other gadgets wirelessly and adjusts his blood sugar levels automatically.
But, a few years ago, the computer scientist started to worry about the security of this setup.
What if someone hacked into that system and sent his blood sugar levels plummeting? Or skyrocketing? Those scenarios could be fatal.  
Researchers say it is possible for hackers to access and remotely control medical devices like insulin pumps, pacemakers and cardiac defibrillators, all of which emit wireless signals.
In 2008, a coalition of researchers from the University of Washington, Harvard Medical School and the University of Massachusetts at Amherst wrote that they remotely accessed a common cardiac defibrillator using easy-to-find radio and computer equipment. In a lab, the researchers used their wireless access to steal personal information from the device and to induce fatal heart rhythms by taking control of the system.
This article references the same IEEE article that I referenced in my blog posting.
"Medical devices have provided important health benefits for many patients, but their increasing number, automation, functionality, connectivity and remote-communication capabilities augment their security vulnerabilities," he wrote.
FDA spokeswoman Karen Riley declined to say whether the FDA is looking into new regulations of wireless medical devices; she added that the responsibility for making the devices secure falls primarily on the manufacturer.

"The FDA shares concerns about the security and privacy of medical devices and emphasizes security as a key element of device design," she said.
Wendy Dougherty, spokeswoman for Medtronic Inc., a large maker of implantable medical devices, said the company is willing to work with the FDA to establish "formal device security guidelines."
The company is aware of potential security risks to implanted medical devices, she said. "Safety is an integral part of our design and quality process. We're constantly evolving and improving our technologies."
In a written statement, Dougherty described the risk of someone hacking into a wireless medical device as "extremely low."
Wireless connections

The security concerns stem from the fact that pacemakers, defibrillators and insulin pumps emit wireless signals, somewhat like computers.
These signals vary in range and openness. Researchers who reported hacking into a defibrillator said some in-the-body devices have a wireless range of about 15 feet.

Many devices do not have encrypted signals to ward off attack, the researchers say. Encryption is a type of signal scrambling that is, for example, employed on many home Wi-Fi routers to prevent unknown people from accessing the network.

There's some question as to why a person would hack into a pacemaker or insulin pump and how the hacker would know a person uses a medical device.
Maisel listed some possible scenarios in his New England Journal article.
"Motivation for such actions might include the acquisition of private information for financial gain or competitive advantage; damage to a device manufacturer's reputation; sabotage by a disgruntled employee, dissatisfied customer or terrorist to inflict financial or personal injury; or simply the satisfaction of the attacker's ego," he wrote.
Denning, from the University of Washington, said the current risk of attack is very low, but that someone could hack into a pacemaker without apparent motive.
She referenced a case from 2008 in which a hacker reportedly tried to induce seizures in epilepsy patients by putting rapidly flashing images on an online forum run by the Epilepsy Foundation.
I emphasized Denning's comments because in my experience those are "famous last words." If there is a way to profit from exploiting a vulnerability, be assured, it will be exploited.


Friday, April 23, 2010

Medical Implant Issues: Part 1, A True Story

When I started this article, I thought I could fit it into a single posting.  However, having written just the first section, I noted its length and how much more there was to write.  Thus, I decided to turn this into a serialized publication, just as I am doing with HE-75.  Here is Part 1 ...

Part 1: Background Story

Before I dive into the technical details of this issue, I want to tell a true story from my own experience.  It involves a friend of mine.  (I need to be vague regarding the person's identity, including gender and how I came to know this person.  As you read this, you'll understand.)

My friend was incredibly intelligent (e.g., the best applied statistician I have ever known), physically attractive, and diagnosed as a paranoid schizophrenic.  In the early 1990's, my friend underwent back surgery.  To my amazement, my friend claimed that the surgeon had placed a "chip," a small processor, into the person's spinal cord.  My friend said that the chip could be activated by people with controls that looked like garage door openers.  When activated, the chip would cause my friend to have a sudden, overwhelming desire to have sexual relations with the person who had activated the chip.  My friend called this chip a "tutu."

At the time, I had been part of the cutting-edge technology community long enough to know that such a chip was absurd.  And I told my friend that this chip did not exist.  My information was not well received by my friend, who was convinced of the reality of this chip.

I tell this story because at the time my friend informed me of the "tutu," the idea of embedding a chip in a human being and activating it wirelessly was patently absurd.  Less than a decade and a half later, embedding programmable chips with wireless communications is no longer considered absurd, but real.  And for some people, frightening, with religious overtones.  Consider what the Georgia state legislature just passed and you'll understand what I mean.  Here's a link to that article: Georgia Senate Makes "Mark of the Beast Illegal."

The reaction from the Georgia Senate makes my paranoid-schizophrenic friend's story seem plausible.  Interestingly enough, and I did not realize it at the time (but I do now), that was my introduction to wireless, medical remote programming.  As I said, my friend was extremely intelligent and, as it turned out, more creative and prescient than I realized at the time.  It turns out that today a device embedded in the spinal cord with the ability to trigger sexual experience is real.  And the ability to embed microprocessors and controls in people with the capability of wireless communication and medical management is also real.

I tell you that story not to make light of people's stories and fears, but as a "sideways" introduction to the technical topic of dealing with multiple, embedded medical monitoring and remote programming systems.  And to suggest that people may have real fears and concerns regarding the capabilities that technologists like myself often overlook.  In this series I discuss real and imagined fears as well as the technical problems with multiple, implanted devices.

Part 2: Multiple, Implanted Wireless Communicating Devices

Books sold by Amazon that might be of interest in this series

New Frontiers in Medical Device Technology

MEMS and Nanotechnology-Based Sensors and Devices for Communications, Medical and Aerospace Applications

Remote Monitoring Demonstration System for Diabetes & COPD Available

I want to share the article and its link to the demonstration system.  Here are a few quotes from the article.

Health Revolution Sciences Inc. has launched a new Website demonstrating its remote health care monitoring capabilities for prospective patients and caregivers.
Called ForVida, the software application represents a sea change in health care technology. 
The software allows physicians and patients to watch streaming cardiac telemetry or reference steadily growing actionable patient EKG and heart rate histories.

The system apparently uses a communication model similar to one I have described in an earlier article.  I do not know what data integrity and security measures they have taken.
The article can be found at:

Wednesday, April 21, 2010

Remote Monitoring and Preventing Unnecessary ICD Shocks

In 2009 there was an interesting editorial written by Joseph E. Marine of the Johns Hopkins University School of Medicine, published in the journal Europace (European Society of Cardiology).  The title of the editorial was "Remote monitoring for prevention of inappropriate implantable cardioverter defibrillator shocks: is there no place like home?"
The entire article can be found at the following location:
For those of you unfamiliar with ICDs (implantable cardioverter defibrillators): the ICD delivers a relatively high-voltage shock to the heart when conditions indicate that the heart may be about to go into ventricular fibrillation (a rapid, irregular heartbeat that will likely lead to death) or that the heart has ceased beating.  The latter condition is easily detected; however, determining the former is more difficult.  Because the conditions are not always clear, ICDs (and a companion system, the CRT-D) too frequently deliver shocks unnecessarily.  (I have discussed issues related to detection in other articles in this blog.  Here are the links to those discussions:, )  Another reason that an ICD might deliver unnecessary shocks would be sensor lead failure or near failure. 

Joseph Marine examined the value of remote monitoring to the prevention of unnecessary shocks.  He concluded that remote monitoring was particularly suited to providing early detection of failing sensor leads.  However, ...
[f]inally, most inappropriate ICD shocks are not caused by
lead failure, but rather by supraventricular arrhythmias, and this
study does not provide any evidence that home monitoring
reduces risk of inappropriate shocks from this cause.
In other words, remote monitoring could not aid with improving the false positive rate - the delivery of unnecessary shocks.

To those who have not been involved with ICDs, it may seem that the delivery of an unnecessary shock may not be so bad given the alternative: that a failure to deliver a shock will likely lead to the patient's death.  And there are many cardiologists who will argue the case for a "hair-trigger" system - acceptance of false positives, but no acceptance of false negatives, that is, a failure to deliver a shock when conditions warrant.
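The false-positive problem described above can be illustrated in miniature.  The sketch below is my own toy example, NOT actual ICD detection logic, and the threshold is hypothetical; it simply shows why a naive rate-only detector inevitably trades false positives against false negatives.

```python
# Toy illustration only -- NOT actual ICD detection logic.
# A rate-only detector cannot distinguish a fast but non-lethal
# supraventricular rhythm from ventricular fibrillation, so any
# threshold that catches VF will also produce inappropriate shocks.
VF_RATE_THRESHOLD_BPM = 180  # hypothetical detection threshold

def should_deliver_shock(heart_rate_bpm: int) -> bool:
    """Naive rate-only detector: shock whenever the rate crosses the threshold."""
    return heart_rate_bpm >= VF_RATE_THRESHOLD_BPM

# A supraventricular tachycardia at 190 bpm triggers a shock
# (a false positive), even though the rhythm is not lethal.
print(should_deliver_shock(190))  # True  -> inappropriate shock
print(should_deliver_shock(70))   # False -> normal rhythm, no shock
```

Real devices use far richer discrimination criteria than a single rate threshold, but the underlying trade-off is the same: lowering the threshold reduces false negatives at the cost of more unnecessary shocks.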

However, unnecessary shocks will do damage over time.  Furthermore, patients who have received a shock describe it as feeling like "... a mule kicked" them in the chest.  I know of situations where patients who received shocks eventually had the ICD removed.

So, I want to make the case to the medical device industry that remote monitoring may be the key to solving the false positive problem.  The data that remote monitoring systems collect and transmit may lead to better detection and discrimination.  In addition, with reference to my article on prediction, remote monitoring may enable physicians to tune ICDs based on specific predecessor events, remotely adjusting the parameters of the ICD to allow better targeting.

I'm not an expert in this area.  However, I know enough about indicator conditions in other areas that can be used to adjust systems and improve their accuracy.

HE-75: Collecting Data and Modeling Tasks and Environment

This article expounds on my earlier article related to AAMI HE-75: Know what thy user does and where they do it. 

Collect and Represent the Data

Ideally the first steps in the design process should occur before a design is ever considered.  Unfortunately, in virtually every case I have encountered, a design for the user interface has already been in the works before the steps for collecting user and task related data have been performed.

Nevertheless, if you are one of the people performing the research, do as much as you can to push the design out of your mind and focus on objectively collecting and evaluating the data.  And in your data analysis, follow the data, not your own preconceived notions or those of someone else.

There are a variety of means for collecting data and representing it.  The means for collecting the data will generally involve:
  • Observation - collecting the step-by-step activities as a person under observation performs their tasks.
  • Inquiry - collecting data about a person's cognitive processes.
Once the data has been collected, it requires analysis and representation in a manner that is useful for later steps in the design process.  Data representations can include:
  • Task models - summary process models (with variants and edge cases) of how users perform each task.  Task models differ from workflow models in that they make no reference to specific tools or systems; a task model should be abstracted and represented at a level without reference to actions taking place on a particular device or system.
  • Workflows - summary process models (with variants and edge cases) similar to the task flows with reference to a particular device or system.  For example, if the user interface consists of a particular web page, there should be a reference to that webpage and the action(s) that took place.
  • Cognitive models - a representation of the cognitive activities and processes that take place as the person performs a task.
  • Breadth analysis - I have noted that this is often overlooked.  Breadth analysis organizes the tasks by frequency of use and if appropriate, order of execution.  This is also the place to represent the tasks that users perform in their work environment but were not directly part of the data collection process.
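The distinction between a device-agnostic task model and a system-bound workflow can be made concrete with a small data-structure sketch.  This is my own illustration, not anything prescribed by HE-75, and every name below (the task, the steps, the system names) is hypothetical.

```python
# Sketch: representing a task model vs. a workflow model.
# The task model stays device-agnostic; the workflow binds each
# step to a concrete system.  All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class TaskStep:
    description: str          # what the user accomplishes, abstractly

@dataclass
class WorkflowStep:
    description: str
    system: str               # the concrete device/page where it happens

@dataclass
class ProcessModel:
    name: str
    steps: list = field(default_factory=list)

# Task model: no mention of any particular device or screen.
task_model = ProcessModel("Review patient alert", [
    TaskStep("Notice that a new alert has arrived"),
    TaskStep("Assess the alert's urgency"),
    TaskStep("Decide on a clinical response"),
])

# Workflow model: the same task bound to specific systems.
workflow = ProcessModel("Review patient alert (workflow)", [
    WorkflowStep("Open the alert from the inbox", system="monitoring web portal"),
    WorkflowStep("Check the attached EKG strip", system="telemetry viewer"),
    WorkflowStep("Record the decision", system="EHR order-entry page"),
])

print(len(task_model.steps), len(workflow.steps))
```

Keeping the two representations separate pays off later: the task model survives a redesign of the tools, while the workflow model documents how the current tools actually serve (or fail to serve) the task.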
Detailed Instructions

I cannot hope to provide detailed instructions in this blog.  However, I can provide a few pointers.  There are published works by leaders in the field on how to collect, analyze and model the data.

Here are several books that I can recommend, most of which can be found in my library:

User and Task Analysis for Interface Design by  J. Hackos & J. Redish

I highly recommend this book.  I use it frequently.  For those of us experienced in the profession and with task and user analysis, what they discuss will seem familiar - as well it should.  However, what they provide are clear paths and methods for collecting data from users.  The book is well-structured and extremely useful for practitioners.  I had been using task and user analysis for a decade before this book came out.  I found that by owning this book, I could throw away all my notes related to task and user analysis and use this book as my reference.

Motion and Time Study: Improving Work Methods and Management 
by F. Meyer
Motion and Time Study for Lean Manufacturing (3rd Edition) by F. Meyer & J. R. Stewart

Time and motion study is a core part of industrial engineering as a means to improve the manufacturing process.  Historically, time and motion studies go back to Frederick Taylor, who pioneered this work in the latter part of the 19th century and the early part of the 20th.  I have used time and motion studies as a means for uncovering problematic designs.  They can be particularly useful when users are engaged in repetitive activities, as a means for improving efficiency, and even as a means for reducing repetitive stress injuries.  The first book is in my library; however, it is a bit old (but very inexpensive), so I include the second, more recent book by Meyers and Stewart.  The methods of time and motion can be considered timeless, so a book published in 1992 can still be valuable.

Time and motion studies can produce significant detail regarding the activities that those under observation perform.  However, these studies are time-consuming and as such, expensive.  Nevertheless, they can provide extremely valuable data that can uncover problems and improve efficiency.

Contextual Design: Defining Customer-Centered Systems (Interactive Technologies) by H. Beyer & K. Holtzblatt

Rapid Contextual Design: A How-to Guide to Key Techniques for User-Centered Design (Interactive Technologies) by K. Holtzblatt, J. B. Wendell & S. Wood

The first book is in my library, but not the second.  I had used many of the methods described in Contextual Design before the book was published.  The contextual design process is one of the currently "hot" methods for collecting user and task data, and as such, every practitioner should own a copy of this book - at least as a reference.

I believe what's particularly useful about contextual inquiry is that it collects data about activities that are not directly observed but that affect the users and the tasks they perform.  For example, clinicians engaged in the remote monitoring of patients often have other duties, many of them patient related.  Collecting data exclusively targeting remote monitoring activities (or the activities specific to a targeted device or company) can miss significant activities that impact remote monitoring, and vice versa.

Additional Resources

As a graduate student, I had the privilege of having my education supported by Xerox's Palo Alto Research Center.  I was able to work with luminaries of the profession, Tom Moran and Allen Newell on a couple of projects.  In addition I was able to learn the GOMS model.  I have found this model useful in that it nicely blends objectively observed activities with cognitive processes.  However, the modeling process can be arduous, and as such, expensive.  

Allen Newell and Herbert Simon are particularly well known for their research on chess masters and problem solving.  They were also well known for their research method, protocol analysis.  Protocol analysis has the person under observation verbally express their thoughts while engaged in a particular activity.  This enables the observer to collect data about the subject's thoughts, strategies and goals.  This methodology has been adopted by the authors of contextual inquiry, and it is one that I have often used in my research.

The problem with protocol analysis is that it cannot capture cognitive processes that occur below the level of consciousness, such as perception.  For example, subjects are unable to express how they perceive and identify words, or how they are able to read sentences.  These processes are largely automatic and thus not available to conscious introspection.  (I shall discuss methods for collecting data that involve automatic processes when I discuss usability testing in a later article.)  However, protocol analysis can provide valuable data regarding a subject's thoughts, particularly when confusion sets in or when the person attempts to correct an error condition.

Here's a link from Wikipedia:

Another book that I have in my library by a former Bell Labs human factors researcher, Thomas K. (TK) Landauer, is The Trouble with Computers: Usefulness, Usability, and Productivity.

This is a fun book.  I think it's much more instructive for the professional than Don Norman's book, The Psychology Of Everyday Things.  (Nevertheless, I place the link to Amazon just the same.  Norman's is a good book for professionals in the field to give to family members who ask, "What do you do for a living?")  

Tom rails against many of the pressures and processes that push products, systems and services into the commercial space before they're ready from a human engineering standpoint.  Although the book is relatively old, many of the points he makes are more relevant today than when the book was first published.  The impulse to design user interfaces without reference or regard for users has been clearly noted by the FDA, hence the need for HE-75.

Monday, April 19, 2010

Market Research Report Available: Remote & Wireless Patient Monitoring Markets

A new market research report has just been made available that discusses the market and investment potential of remote and wireless monitoring of patients.  I do not endorse this study or suggest its purchase.  I am simply making its existence known.

Here's a list of some of the disorders covered by the study:
  • Asthma
  • COPD
  • CHF
  • CHD 
  • Diabetes 
Here are a few quotes from the press release:

Patient monitoring systems are emerging in response to increased healthcare needs of an aging population, new wireless technologies, better video and monitoring technologies, decreasing healthcare resources, an emphasis on reducing hospital days, and proven cost-effectiveness.
Of these new high-tech patient monitoring systems, nearly all focus on some form of wireless or remote patient monitoring. ...
...  the following companies are profiled in detail in this report:
  • Abbott Laboratories, Inc
  • Aerotel Medical Systems
  • GE Healthcare
  • Honeywell HomMed LLC
  • Intel Corporation
  • Philips Medical Systems
  • Roche Diagnostics Corporation

Here's the link to the press release and links to purchasing this study:


Saturday, April 17, 2010

Article: Investments in Real Time Medical Monitoring

This is an article targeted to the investment community regarding investment in real time medical monitoring.  I do not endorse anything in this article.  However, I do find it interesting.  I do not know the track record of this publication.  Nevertheless, here is a link to the article:

Article: Initiation of a Telemonitoring Study of Heart Failure, COPD and Diabetes Patients

A study will be performed by researchers from Case Western Reserve University and Cleveland State University with patients suffering from heart failure, diabetes and COPD.  The objective of the study is to determine how effective remote monitoring is at maintaining the health of these patients and keeping them out of the hospital.

Here's a link to a report on this study: 

Additional Resources


 The Complete Guide to Understanding and Living with COPD: From A COPDer's Perspective 

COPD For Dummies 


Diabetes For Dummies (For Dummies (Health & Fitness)) 

Tell Me What to Eat If I Have Diabetes: Nutrition You Can Live With 

The Official Pocket Guide to Diabetic Exchanges 

Heart Failure

The Cleveland Clinic Guide to Heart Failure (Cleveland Clinic Guides)

Manual of Heart Failure Management

Friday, April 16, 2010

Medtronic Remote Monitoring Study: CONNECT

At the American College of Cardiology 59th annual conference, George H. Crossley, MD presented evidence that remote monitoring of cardiac patients (one scheduled in-office visit per year plus remote monitoring) versus standard in-office care (four in-office visits per year) cuts the time between when a cardiac or device-related event occurs and when a treatment decision is made.

The title of the study: "The clinical evaluation of the remote notification to reduce time to clinical decision (CONNECT) Trial: The value of remote monitoring."

I present a summary of the method and the results of the study gleaned from the slides presented by Dr. Crossley at the conference.


Tested hypothesis: Remote monitoring with automatic clinician notifications reduces the time from a cardiac or device event to a clinical decision.

Additionally investigated were rates of health care system utilization, including hospitalization, between the treatment groups.


Study participants:  1997 newly implanted CRT-D and DR-ICD patients from 136 US centers were randomly assigned to one of two groups: 1014 patients to the remotely monitored group and 983 patients to the standard in-office care group.  The patients were reasonably well matched for age and gender characteristics.  (The procedure was similar to that of the Biotronik TRUST studies.)

The patients were followed for 12 months.  (On first reading, I found the time period relatively short, in that I would not expect enough differentiating events to occur during that time.  However, on further reading, I believe my first impression was incorrect.)


Time from Event to Clinical Decision

The median time (nonparametric inferential statistics were used for the analysis) from the cardiac or device event to the clinical decision was 4.6 days in the remote group and 22 days in the in-office group.  This difference was significant.  The remote group involved 172 patients while the in-office group involved 145 patients.
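For readers unfamiliar with nonparametric comparisons of medians, here is a small sketch of one such method, a permutation test on the difference of group medians.  The day counts below are invented for illustration; they are NOT the CONNECT data, and the study's actual statistical procedure may have differed.

```python
# Sketch: permutation test on a difference of median times between
# two groups.  Data are hypothetical, not from the CONNECT trial.
import random
import statistics

random.seed(0)

remote_days = [2, 3, 4, 5, 5, 6, 7, 4, 3, 8]            # hypothetical times
in_office_days = [15, 20, 22, 25, 30, 18, 21, 24, 27, 19]

observed = statistics.median(in_office_days) - statistics.median(remote_days)

# Shuffle the group labels many times and count how often a median
# difference at least as large as the observed one occurs by chance.
pooled = remote_days + in_office_days
n = len(remote_days)
count = 0
trials = 5000
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.median(pooled[n:]) - statistics.median(pooled[:n])
    if diff >= observed:
        count += 1

p_value = count / trials
print(f"Observed median difference: {observed} days, p ≈ {p_value:.4f}")
```

The appeal of this family of methods for event-to-decision times is that such data are typically skewed, so medians and rank- or permutation-based inference are more defensible than a t-test on means.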

The cardiac/device events included:
  • Atrial Tachycardia/Fibrillation (AT/AF) for 12 hours or more
  • Fast ventricular rate of at least 120 beats per minute during an AT/AF event lasting at least 6 hours
  • At least two shocks delivered in an episode
  • Lead impedance out of range
  • All therapies in a specific zone were exhausted for an episode
  • Ventricular Fibrillation detection/therapy off
  • Low battery
Total number of events - remote group: 575; in-office group: 391.  The slides show the breakdowns.

Office Visits

The number of office visits per patient reported are shown below.
                        Scheduled     Unscheduled     All office
Remote group:             1.68            2.24            3.92
In-office group:          4.33            1.94            6.27

The TRUST studies showed a slight increase in unscheduled visits for the remote group.  However, given the nature of the study and that remotely monitored patients would receive only one scheduled in-office visit per year, it's remarkable how similar the numbers between the two groups are.

Utilization of the Health Care System

The number of incidents in which patients used the health care system - whether hospitalization or emergency room - showed virtually no difference between the groups. 

However, there was a remarkable, significant difference in length of stay when a hospitalization occurred.  The remote group had a mean hospital stay of 3.3 days while the in-office group's was 4.0 days, with an estimated savings of $1659 per hospitalization.
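As a back-of-the-envelope check (my own arithmetic, not a figure from the study), the reported per-hospitalization savings against the 0.7-day reduction in mean stay implies a per-day hospitalization cost:

```python
# Back-of-the-envelope arithmetic on the reported CONNECT figures.
# The implied per-day cost is my own inference, not a study result.
remote_mean_stay = 3.3        # days, as reported
in_office_mean_stay = 4.0     # days, as reported
savings_per_hospitalization = 1659  # dollars, as reported

stay_reduction = in_office_mean_stay - remote_mean_stay
implied_cost_per_day = savings_per_hospitalization / stay_reduction

print(f"Stay reduction: {stay_reduction:.1f} days")
print(f"Implied cost per hospital day: ${implied_cost_per_day:,.0f}")
```

The implied figure of roughly $2,370 per hospital day is plausible for the era, which suggests the reported savings were derived directly from the difference in mean length of stay.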


The CONNECT and (Biotronik) TRUST studies show clear benefits for remote monitoring from a number of standpoints.  In addition, the CONNECT study showed clear cost and hospital resource utilization benefits in that remotely monitored patients who were hospitalized had shorter stays, indicating that they were in better shape than patients in the in-office group when admitted.  Quick responses seem to lead to better outcomes as well as cost reductions.