Saturday, February 1, 2020

Drug Deaths 2018, Early Data from the CDC

Within hours of my posting the previous article regarding projected deaths from drug overdoses, the CDC reported its early data on drug deaths for 2018. [I don't consider the CDC data to be finalized until it has been added to the CDC Wonder Database.] Nevertheless, according to the CDC, there was a drop in the number of drug-related deaths from 2017 to 2018. And as an added bonus, there was a slight rise in US life expectancy. 

One of the better articles I came across explaining the CDC data is from Vox: https://www.vox.com/policy-and-politics/2020/1/30/21111887/opioid-epidemic-drug-overdose-death-2018

The article reported that the crude rate (deaths per 100,000 population) had decreased to 20.7 in 2018 from 21.7 in 2017. I reviewed all my past data analysis and predictions to see how this new data lined up with my predictions. Here's what I found:


  1. My best fit model that used data from 1999 to 2016 predicted that the crude rate for 2017 and 2018 would be 18.4 and 19.0 respectively. These predictions are strikingly lower than the actual data. And I should mention that this model is a second order, curvilinear, accelerating rate model.
  2. My best fit worst case scenario model that used data from 2008 to 2016 predicted 20.1 for 2017 and 23.4 for 2018. Slightly low for 2017 but noticeably high for 2018. Again, this is a second order, curvilinear, accelerating rate model. The rate of acceleration is greater than the model above.
  3. My best fit model using the most recent CDC Wonder data from 1999 to 2017 suggested that the crude rate for 2018 would be 27.5. This is a much faster accelerating 4th order model.
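
To make the comparison concrete, here's a small Python sketch that lines the predictions above up against the reported crude rates (21.7 for 2017, 20.7 for 2018). Nothing here is new data; it simply restates the numbers already quoted.

    # Compare the predicted crude rates (deaths per 100,000) listed above
    # with the rates reported by the CDC for 2017 and 2018.
    reported = {2017: 21.7, 2018: 20.7}

    predictions = {
        "2nd-order fit, 1999-2016 data":        {2017: 18.4, 2018: 19.0},
        "2nd-order worst case, 2008-2016 data": {2017: 20.1, 2018: 23.4},
        "4th-order fit, 1999-2017 data":        {2018: 27.5},
    }

    for model, preds in predictions.items():
        for year, rate in preds.items():
            error = rate - reported[year]
            print(f"{model}: {year} predicted {rate}, actual {reported[year]}, error {error:+.1f}")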


Analysis


The apparent primary cause of the striking rise in the death rate through 2017 was the increasing availability and use of illegal fentanyl. If the US is going to be able to stem the tide of this epidemic, efforts will need to be made to reduce the many deaths from fentanyl and its chemical cousins. Is that happening? Consider the following chart from the Vox article. 


Some of the decrease in drug death rates is likely due to the increased availability of the opioid overdose antidote, naloxone. There's a noticeable down-tick in the death rates across all drug categories, suggesting a common factor of influence crossing all categories. However, while all the other curves show a downturn in the death rate, deaths due to synthetic opioids (most notably fentanyl) continue to rise, although at a slower rate.

So, are we seeing the light at the end of the tunnel of the opioid epidemic? Or is this just the light of a train coming the other direction? The decline appears to be driven by the reduction in deaths from heroin and from natural and semisynthetic opioids. However, the rate of death from synthetic opioids continues to climb, and these are the predominant cause of death from drug overdoses. Thus it's clear this epidemic is nowhere near being under control.

I'll conclude by quoting from the Vox article, which suggests that the underlying causes of the epidemic, as well as the supply of drugs, remain unaddressed:


... the country still seems to struggle with underlying conditions that experts say are fueling “deaths of despair.” That’s not just drug overdoses but also suicides, which increased in 2018, and alcohol-related deaths, which have doubled in the past two decades. ...

“If all of these social factors were there, and we didn’t have the supply of drugs, of course people would not be dying of overdoses,” Nora Volkow, director of the National Institute on Drug Abuse, previously told me. “But it is the confluence of the widespread markets of drugs — that are very accessible and very potent — and the social-cultural factors that are making people despair and seek out these drugs as a way of escaping.”

All of that leaves America vulnerable to increases in drug addiction cases and overdose deaths, even as it sees some gains due to drops in opioid prescriptions and related deaths.


Thursday, January 30, 2020

Update: Public Health Alert: Centers for Disease Control: 2017 and Projections to 2025

I've published two articles in this blog on drug-related deaths. Since then, I've gone back to CDC's Wonder Database (https://wonder.cdc.gov) in order to use their most recent data to rework my models and predictions. What I found shocked me. As you will see below, the 2017 death count of 72,300 reported by the New York Times in the linked article understated the toll; the Wonder Database now shows 73,990 deaths for 2017. The number that the Times reported was already well beyond the 69,000 drug-related deaths I had predicted in my worst-case scenario for 2017. The updated 2017 number I extracted from Wonder showed that my worst-case prediction was low by roughly another 1,700 deaths, for a total prediction error of nearly 5,000 deaths. 

Putting my error in perspective: approximately 2,400 people died in the Pearl Harbor attack; 2,977 people died in the 9/11 attacks; the SARS outbreak of 2002-2003 killed 774 people worldwide, and that was considered a worldwide health crisis; I could go on. This cannot be considered anything other than a catastrophic crisis. 

Going Back to the Source



I decided to go back to the Wonder database and reexamine the drug death data. This time I included the actual data from 2017 to see how the new data would affect my original predictions.

I queried Wonder for all the drug-related deaths from 1999 to 2017. To understand trends, the best measure is the "crude rate." The crude rate is the number of deaths per 100,000 population. It's similar to a percentage of the population, but the base is 100,000 rather than 100.
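
As a quick worked example of the arithmetic (the counts below are made-up round numbers, not CDC figures):

    # Crude rate = deaths per 100,000 population.
    deaths = 70_000            # hypothetical annual death count
    population = 325_000_000   # hypothetical US population
    crude_rate = deaths / population * 100_000
    print(f"Crude rate: {crude_rate:.1f} deaths per 100,000")  # about 21.5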

The results are shown below.


1999 to 2017 Drug Related Deaths (CDC Wonder)

I recalculated my trend line and found that a 4th order equation provided by far the best fit for the data. In fact, this trend line appears to provide a nearly perfect description of the data. My original trend lines (including my worst case trend line) were second order equations. The worst case trend line was based on the data from 2008 to 2016; the new equation, on the other hand, was based on all the available data. The last two years have shown a substantial uptick in the crude rate. 
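
For anyone who wants to reproduce this kind of trend-line comparison, here's a minimal Python sketch using numpy's polyfit. The crude-rate series below is synthetic (a made-up accelerating curve standing in for the Wonder data), so it only illustrates the approach, not my actual calculation.

    import numpy as np

    # Replace 'rates' with the crude-rate series (deaths per 100,000)
    # pulled from a CDC Wonder query for 1999 to 2017.
    years = np.arange(1999, 2018)
    x = years - 1999
    rates = 6.0 + 0.00015 * x ** 4   # synthetic accelerating series

    # Compare how well 2nd- and 4th-order polynomial trend lines describe the data.
    for degree in (2, 4):
        coeffs = np.polyfit(x, rates, degree)
        fitted = np.polyval(coeffs, x)
        ss_res = np.sum((rates - fitted) ** 2)
        ss_tot = np.sum((rates - rates.mean()) ** 2)
        print(f"degree {degree}: R^2 = {1 - ss_res / ss_tot:.4f}")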

The actual number of drug-related deaths for each year from 1999 to 2017 is shown in the chart below.

1999 to 2017 Drug Related Deaths (CDC Wonder)

Based on this data we can clearly see the effects of the widespread availability and use of fentanyl. 


Projecting Into the Future



Based on the new data and the recalculated trend line, what are the predictions for the crude rate and the number of drug-related deaths from 2017 to 2025? 

Using the new equation, I projected what might be expected for the future. The projections are shown in the two charts below.


Projected Crude and Number of Drug Related Deaths to 2025


I want to make clear that these are my projections, not the CDC's or any other organization's. I also want to mention that the last time I made future projections for drug-related deaths, even my worst-case projections were substantially lower than what the actual data would show. So as bad as these numbers are, I consider them not out of the realm of possibility. Nevertheless, based on the equation fit to the actual crude rate data, drug-related deaths last year (2019) would have been over 100,000. And the number of drug-related deaths could reach over 400,000 by 2025. That is a staggeringly high number and would make the opioid crisis the origin of the worst epidemic in US history. With numbers like these, US life expectancy will continue to trend downward at an accelerating rate. 
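
For readers who want to see how a projected crude rate becomes a projected death count, here's a minimal sketch of that conversion. Both numbers below are hypothetical placeholders for illustration, not the output of my model.

    # Converting a projected crude rate into a projected number of deaths.
    projected_crude_rate = 30.0         # hypothetical deaths per 100,000
    projected_population = 335_000_000  # hypothetical US population for that year

    projected_deaths = projected_crude_rate / 100_000 * projected_population
    print(f"About {projected_deaths:,.0f} deaths")  # about 100,500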

My sense is that the drug related death rate will at some point level out or stop growing at this extremely high rate. However, as of this point, based on the data so far, these are the projections. It will take a couple of years, but I would be interested in knowing the actual number of drug related deaths that occurred in 2019. If the number is anywhere near 100,000, then it's clear that we are riding on a trend line of massively high numbers of drug related deaths into the future. 




Monday, January 6, 2020

Apple being sued by New York Cardiologist over Atrial Fibrillation Detection in the Apple Watch

I found this interesting and a bit amusing, but it seems that Apple is being sued by Joseph Wiesel, a clinical assistant professor in cardiology at NYU School of Medicine who alleges "... that the tech giant has infringed a patent—generally related to detecting atrial fibrillation by monitoring a pulse—on which Wiesel is the sole named inventor. The accused products are various versions of the Apple Watch, Series 3 and 4 through purported inclusion of an irregular pulse notification feature, and earlier versions through the alleged provision of a software upgrade to add 'irregular pulse notifications resulting from checking a pulse rhythm'."

Here's a link to the quoted material: https://insight.rpxcorp.com/news/59822?utm_campaign=weekly_newsletter&utm_content=&utm_medium=email&utm_source=title_click

Knowing Apple, they will do everything they can to invalidate Wiesel's patent. This is a common practice for very large and domineering companies like Apple, done to avoid paying royalties to patent holders, especially when the patent holder is an individual or a small company. 

The processes that have been put in place to examine patents and determine their validity when there is litigation have shown themselves to be quite favorable to large companies being sued for patent infringement. So I suspect that the likelihood that Dr. Wiesel will receive anything from his suit is not high.

Monday, December 30, 2019

Signal Detection and the Apple Watch

In the last two articles about the Apple Watch's capability to detect atrial fibrillation, I made references to terminology ("false positive") that has its roots in Signal Detection Theory.  Signal Detection Theory was developed as a means to determine the accuracy of early radar systems. The technique has migrated to communications systems, psychology, diagnostics and a variety of other domains where determining the presence or absence of something of interest is important especially when the signal to be detected would be presented within a noisy environment (this was particularly true of  early radars) or when the signal is weak and difficult to detect.  

Signal detection can be a powerful tool to guide research methodologies and data analysis. I have used the signal detection paradigm in my own research, both for the development of my research methodology and for data analysis, planned and post hoc. In fact, when I have taught courses in research methods and statistical analysis, I have used the signal detection paradigm as a way to convey how to detect the effects of an experimental manipulation in your data.  

Because I've mentioned issues related to signal detection, and because it is a powerful tool for research and development, I decided to provide a short primer on signal detection.


Signal Detection


The central feature of signal detection is the two by two matrix shown below.

The signal detection process begins with a detection window or event. The detection window could be a period of time or a specified occurrence, such as the rapid presentation of a stimulus in a psychological test, after which we determine whether or not the subject of the experiment detected what was presented. 

Or, in the case of the Apple Watch, whether it detects atrial fibrillation. In devices such as the Apple Watch, how the system defines the detection window can be important. Since we have no information regarding how the Apple Watch's atrial fibrillation detection system operates, it's difficult to know how it defines its detection window.


Multiple, Repeated Trials

Before discussing the meaning of the Signal Detection Matrix, it's important to understand that every matrix is built from multiple, repeated trials with a particular detection system, whether that detection system is a machine or a biological entity such as a person. Signal Detection Theory is grounded in probability theory; therefore, multiple trials are required in order to create a viable and valid matrix.


The Four Cells of the Signal Detection Matrix

During the window of detection, a signal may or may not be present. Each cell represents an outcome of a detection event. The possible outcomes are: 1: the signal was present and it was detected, a hit (upper left cell); 2: the signal was not present and the system or person correctly reported no signal present, a correct rejection (lower right cell); 3: the signal was absent but erroneously reported as present, a Type I error (lower left cell); and 4: the signal was present but reported as absent, a Type II error (upper right cell).

The objective for any detection system is that the outcomes of detection events end up in cells 1 and 2, that is, that they are correctly reported. However, from a research standpoint, the error cells (Outcomes 3 and 4) are the most interesting and revealing. 


Incorrect Report Cells



Outcome 3: Type I Error

A Type I error is reporting that a signal is present when it was not. This is known as a "false alarm" or "false positive." The statistic alpha is the ratio of Outcome 3 to the total number of trials or detection events.

Outcome 4: Type II Error

A Type II error is reporting that a signal is not present when in fact it was. This is a "failure to detect." The statistic beta is the ratio of Outcome 4 to the total number of trials or detection events. 
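
Here's a small sketch of how these statistics fall out of the four outcome counts. The counts are made up for illustration, and alpha and beta are computed as defined above (error count over total trials).

    # Hypothetical outcome counts from a set of repeated detection trials.
    hits = 40                # Outcome 1: signal present, reported present
    correct_rejections = 45  # Outcome 2: signal absent, reported absent
    false_alarms = 10        # Outcome 3: signal absent, reported present (Type I)
    misses = 5               # Outcome 4: signal present, reported absent (Type II)

    total_trials = hits + correct_rejections + false_alarms + misses
    alpha = false_alarms / total_trials  # Type I error rate, per the definition above
    beta = misses / total_trials         # Type II error rate, per the definition above
    print(f"alpha = {alpha:.2f}, beta = {beta:.2f}")  # alpha = 0.10, beta = 0.05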


If you're designing a detection system, the idea is to minimize both types of errors. However, no system is perfect, and as such, it's important to determine which type of error is more acceptable, Type I or Type II, because there are likely to be consequences either way. 

Trade-off Between Type I and Type II Errors

In experimental research the emphasis has largely been on minimizing Type I errors, that is, reporting an experimental effect when in actuality none was present. Lowering your alpha level, that is, decreasing your tolerance for Type I errors, increases the likelihood of making a Type II error: reporting that an experimental effect was not present when in fact it was. 

However, with medical devices, what type of error is of greater concern, Type I or Type II? That's a decision that will need to be made.

Before leaving this section, I should mention that the trade-off analysis between Type I and Type II errors is called Receiver-Operating-Characteristic Analysis or ROC-analysis. This is something that I'll discuss in a later article. 
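
As a preview of that later article, here's a minimal sketch of the idea behind an ROC analysis: sweep the detection threshold and watch the hit rate and the false alarm rate trade off against each other. The detector scores are synthetic and purely illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic detector scores: noise-only trials vs. signal-present trials.
    noise_scores = rng.normal(loc=0.0, scale=1.0, size=1000)
    signal_scores = rng.normal(loc=1.5, scale=1.0, size=1000)

    # Sweep the decision threshold; each threshold yields one point on the ROC curve.
    for threshold in (-1.0, 0.0, 1.0, 2.0):
        hit_rate = np.mean(signal_scores > threshold)         # true positive rate
        false_alarm_rate = np.mean(noise_scores > threshold)  # false positive rate
        print(f"threshold {threshold:+.1f}: hit rate {hit_rate:.2f}, false alarm rate {false_alarm_rate:.2f}")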


With Respect to the Apple Watch 


Since I have no insight into Apple's thinking when it designed the Watch's atrial fibrillation software, I can't know for certain what went into designing the atrial fibrillation detection algorithm for the Apple Watch. However, based on their own research, it seems that Apple made the decision to err on the side of accepting false positives over false negatives -- although we can't be completely sure this is true, because Apple did not do the research to determine the rate at which the Apple Watch failed to detect atrial fibrillation when it was known to be present.

With a "medical device" such as the Apple Watch, it would seem reasonable to side on accepting false positives over false positive. That is, to set your alpha level low. The hope would be that if the Apple Watch detected atrial fibrillation the owner of the watch would seek medical attention to determine whether or not a diagnosis of atrial fibrillation was warranted for receiving treatment for the condition. If the watch generated a false alarm, then there was no harm in seeking medical advice ... it would seem. The author of the NY Times article I cited in the previous article appears to hold to this point of view. 

However ...

The problem with a system that generates a high rate of false alarms is that, all too often, its signals come to be ignored. Consider the following scenario: an owner of an Apple Watch receives an indication that atrial fibrillation has been detected. The owner goes to a physician, who reports that there's no indication of atrial fibrillation. Time passes and the watch reports again that atrial fibrillation has been detected. The owner goes back to the physician, who gives the owner the same report as before: no atrial fibrillation detected. What do you think will happen the next time the owner receives a report from the watch that atrial fibrillation has been detected? It's likely that the owner will just ignore it. That would really be a problem if the owner had in fact developed atrial fibrillation. In this scenario the watch "cried wolf" too many times. And therein lies the problem with having a system that's tuned to accept a high rate of false alarms.





Thursday, December 26, 2019

Follow-up: Apple Watch 5, Afib detection, NY Times Article

The New York Times has published an article regarding the Apple Watch 5's capability to detect atrial fibrillation. The link to the article is below:

https://www.nytimes.com/2019/12/26/upshot/apple-watch-atrial-fibrillation.html?te=1&nl=personal-tech&emc=edit_ct_20191226?campaign_id=38&instance_id=14801&segment_id=19884&user_id=d7e858ffd01b131c28733046812ca088&regi_id=6759438320191226

The title and the subtitle of the article provide a good summary of what the author (Aaron E. Carroll) found:

"The Watch Is Smart, but It Can’t Replace Your Doctor
Apple has been advertising its watch’s ability to detect atrial fibrillation. The reality doesn’t quite live up to the promise."

With reference to my article, the Times piece provides more detail on the trial that Apple ran to test the effectiveness of the Apple Watch's ability to detect atrial fibrillation. That proved interesting and enlightening, and it clarified some of the issues I found with how the study was reported, for both the procedure and the results. In addition, the author and I concur regarding the Apple Watch's extremely high reported rate of false positives for atrial fibrillation. I find this quite interesting when you consider that screening for atrial fibrillation can be as simple as taking the patient's pulse. 


Here are a few quotes from the article:


"Of the 450 participants [these are study participants where the Apple Watch had detected atrial fibrillation] who returned patches , atrial fibrillation was confirmed in 34 percent, or 153 people. 
...

Many news outlets reporting on the study mentioned a topline result: a “positive predictive value” of 84 percent. That statistic refers to the chance that someone actually has the condition if he or she gets a positive test result.

But this result wasn’t calculated from any of the numbers above. It specifically refers to the subset of patients who had an irregular pulse notification while wearing their confirmatory patch. That’s a very small minority of participants. Of the 86 who got a notification while wearing a patch, 72 had confirmed evidence of atrial fibrillation. (Dividing 72 by 86 yields 0.84, which is how you get a positive predictive value of 84 percent.)

Positive predictive values, although useful when talking to patients, are not always a good measure of a test’s effectiveness. When you test a device on a group where everyone has a disease, for instance, all positive results are correct."
...

"There are positive messages from this study. There’s potential to use commercial devices to monitor and assess people outside of the clinical setting, and there’s clearly an appetite for it as well. But for now and based on these results, while there may be reasons to own an Apple Watch, using it as a widespread screen for atrial fibrillation probably isn’t one."
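
To make the arithmetic in the quoted passage concrete, here's the calculation in a few lines of Python, using only the numbers the Times reports. Note that these figures alone don't let us compute how often the watch missed atrial fibrillation that was actually present.

    # Numbers reported in the Times article quoted above.
    notified_and_returned_patch = 450    # notified participants who returned ECG patches
    confirmed_afib = 153                 # of those, confirmed atrial fibrillation
    notified_while_wearing_patch = 86    # got a notification while wearing the patch
    confirmed_of_those = 72              # of those, confirmed atrial fibrillation

    confirmation_rate = confirmed_afib / notified_and_returned_patch
    positive_predictive_value = confirmed_of_those / notified_while_wearing_patch
    print(f"Confirmed among returned patches: {confirmation_rate:.0%}")   # 34%
    print(f"Positive predictive value: {positive_predictive_value:.0%}")  # 84%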

Friday, December 13, 2019

Submission of the Human Engineering File to the FDA and Other Regulatory Bodies, Section 8: Part VI

This is the easiest section for me to cover, largely because the requirements for the validation section are clearly spelled out in detail.

8. Details of human factors validation testing
  • Rationale for test type selected (i.e., simulated use, actual use or clinical study)
  • Test environment and conditions of use
  • Number and type of test participants
  • Training provided to test participants and how it corresponded to real-world training levels
  • Critical tasks and use scenarios included in testing
  • Definition of successful performance of each test task 
  • Description of data to be collected and methods for documenting observations and interview responses
  • Test results: Observations of task performance and occurrences of use errors, close calls, and use problems 
  • Test results: Feedback from interviews with test participants regarding device use, critical tasks, use errors, and problems (as applicable)  
  • Description and analysis of all use errors and difficulties that could cause harm, root causes of the problems, and implications for additional risk elimination or reduction 
These requirements are largely self-explanatory. However, I would like to make a few comments and additions.

  • Validation testing -- including verification testing -- is often performed by outside consulting firms. Thus it is extremely important that you spell out how your testing should be performed and which measurements are to be collected and reported. I've noted that oftentimes the consulting company is asked to write both the protocol and the testing script. This is a mistake. The organization that performed the work up to the validation testing stage should be responsible for creating the protocol and the script, because it is this organization that will be responsible for the submission of the HE file to the FDA and/or other regulatory bodies. It's important that the organization responsible for the research and development, as well as the submission, be in full control of what takes place during the validation step.
  • Verification and validation testing. Verification testing takes place under laboratory conditions, using members of the targeted user population as test participants. This is an additional check on the usability of the system or device. Validation testing takes place in actual or simulated-use conditions -- with all the distractions and problems that users will likely encounter.
  • The rationale for the type of testing performed and the conditions chosen for validation testing can be extremely important, especially if you have chosen a testing procedure less rigorous than performance testing under real or simulated-real conditions. Consult IEC 62366 and AAMI HE75 for guidance.
  • The testing procedure should ensure that critical tasks are fully tested and are likely to be performed repeatedly by test participants.
  • Suggested additional measurement: If your system or device has error trapping and redirecting capabilities, be sure to report how often these capabilities were triggered and whether they enabled the test participant to successfully complete the task. This could be labeled as: task successfully completed, close call. However, a system or device with the capability to protect against use errors is worth pointing out. 

What to include in your narrative?


Include the abstract or abstracts of your validation testing in your narrative. 

If you haven't included any significant issues or root cause analysis in your abstract, be sure to include them in your narrative. Be sure you surface all issues or concerns in your narrative; if you don't, it could appear to a reviewer that you're trying to hide problems that you encountered. Even the appearance of hiding problems could jeopardize approval of your system or device. 


Updated: US life expectancy has not kept pace with that of other wealthy countries and is now decreasing: What appears to be causing this?

Here's the reference to the article that prompted my analysis of the data they collected, along with a brief summary of their results:

https://jamanetwork.com/journals/jama/fullarticle/2756187?guestaccesskey=c1202c42-e6b9-4c99-a936-0976a270551f&utm_source=for_the_media&utm_medium=referral&utm_campaign=ftm_links&utm_content=tfl&utm_term=112619

Here's a summary of their conclusions: 

US life expectancy increased for most of the past 60 years, but the rate of increase slowed over time and life expectancy decreased after 2014. A major contributor has been an increase in mortality from specific causes (eg, drug overdoses, suicides, organ system diseases) among young and middle-aged adults of all racial groups, with an onset as early as the 1990s and with the largest relative increases occurring in the Ohio Valley and New England. The implications for public health and the economy are substantial, making it vital to understand the underlying causes.

Life expectancy data for 1959-2016 and cause-specific mortality rates for 1999-2017 were obtained from the US Mortality Database and CDC WONDER.

__________
In my previous two articles that reference this study, I examined data from other countries, and in my most recent article, I examined US life expectancy in comparison to US peer countries. If you read that article, you'll know that my findings showed that the US is at the bottom of the group. US life expectancy is even lower than Puerto Rico's. 

The JAMA article referenced above examined US mortality data only, but performed a careful analysis to examine why people died. I think that we can agree that US life expectancy, in comparison to other peer countries, is lower than it should be. And the fact that it appears to have begun to drop is unacceptable. My question is why. Why is US life expectancy dropping instead of rising? What appears to be the major driver or drivers of this phenomenon?

I pulled two figures from the study that appear to show why US life expectancy has been dropping over the last three years and why, in earlier years, increases in US life expectancy had not kept up with its peer countries. 

Figures 4 and 6 from the JAMA article



Figure 4 clearly shows that the major increasing cause of death in all age categories is drug related -- drug poisoning. All the other curves remain reasonably flat with one exception, hypertensive diseases for those 55 to 64 years old. One might expect that diabetes would be a contributor given that more and more people in the US are obese, but this is not the case. In each age group, diabetes-related deaths are either flat or dropping. (The increase in hypertensive deaths is likely related to the overall increase in obesity.)

Figure 6 breaks down the drug poisoning deaths by Race/Ethnicity from 1999 to 2017. White Americans and American Indians & Alaska Natives show a steady increase in deaths from drug overdoses. African-American drug overdose deaths appear to have been relatively flat until 2014, when they showed a dramatic and unsettling rise. Both US Hispanics and Asian/Pacific Islanders show an increase in drug-related deaths, but nothing like the other groups. 

_______

It looks like we have more evidence of the significance of the opioid crisis's impact. 

_______
I've seen references suggesting that one of the reasons for the decrease in life expectancy from 2015 to 2017 is the increasing rate of suicide. While suicides have been increasing, they've been increasing at a reasonably steady rate since 1999, which is the earliest year in the CDC Wonder database. Suicides have not shown the dramatic, curvilinear rise that drug-related deaths have shown. So yes, an increase in suicides is clearly a contributor to the reduction in US life expectancy, but not a major contributor. 

Thursday, December 12, 2019

Submission of the Human Engineering File to the FDA and Other Regulatory Bodies, Sections 6 and 7: Part V

I cover Sections 6 and 7 in this article as shown below:

6. Summary of preliminary analyses and evaluations
  • Evaluation methods used
  • Key results and design modifications implemented in response
  • Key findings that informed the human factors validation test protocol
7. Description and categorization of critical tasks 
  • Process used to identify critical tasks
  • List and descriptions of critical tasks
  • Categorization of critical tasks by severity of potential harm
  • Descriptions of use scenarios that include critical tasks
I consider Sections 6 and 7 together because the information for these two sections should have come from the formative stage of the research and design process. These two sections could be combined into a single section. However, it is apparent that the FDA (and probably other regulatory bodies as well) considers Section 7, Description and categorization of critical tasks, as important enough to have its own, separate section. 

Importance of Getting It Right


The contents of these sections, the descriptions and explanations provided, can be the difference between: 

  • An easy, unquestioned acceptance of what you've done or
  • A difficult, question-riddled review of the work that you performed resulting in:
    • Approval delays, 
    • A reworking of the submitted materials 
    • Requests for additional research to be performed, or 
    • Rejection of the human engineering file 

To fully address what should be included in Sections 6 and 7, you need to examine your entire HE process in the context of the research and development program of your medical device or system and determine whether your HE process can adequately address reporting requirements of these two sections. These sections form the core of the report of your research and design process up to the point immediately before you begin your final phase of testing, namely verification and validation (summative) testing.


Section 6: Summary of the Preliminary Analysis and Evaluations


What Should be in Section 6

I briefly cover the points of what should be included in Section 6. Assuming that you are a human engineering professional, you should already have a reasonable understanding of the meaning of each of the three requirements listed below. 

1. Evaluation methods used

This comprises the entire body of research performed, including all of the data collected before implementing a foundational or initial design, and all of the testing performed on that design.

2. Key results and design modifications implemented in response

What findings from your research led you to create your initial design, and what were the factors that led you to modify your design?

3. Key findings that informed the human factors validation test protocol

How did you arrive at your research protocol for summative/validation testing? How do you know that your validation protocol is appropriate and will verify that your system or device is safe for use?

That's the brief overview of what should be in Section 6. Beyond the bullet points, however, Section 6 should include the logical threads of justification for what you did: for creating your research and development plan, for the initial/foundational design, and for how you went about modifying that design. 
   
Don't be deceived by the seeming simplicity of Section 6. It is far more complicated and demands much more investigative and design process rigor than one might imagine. 


Human Engineering (HE): Research and Development


Section 6 is the section where you lay out all research and development performed in relationship to human engineering. Thus, Section 6 becomes the place where you make your case for the research that you performed and the design choices that you made. After reading Section 6, the reviewer should have a clear understanding of, and be in agreement with, the research and design process that was undertaken. This includes the rationale for the research plan as proposed and undertaken, including the rationale for any changes made to the plan on the basis of research findings. It also includes the rationale for the design process, including the initial or foundational design and the reasons for changes made through the design iteration process.

Human factors is the study of how humans interact with or operate systems and devices. It is fundamentally research. Human engineering incorporates human factors but also encompasses design and the design process, which should be, at its foundation, driven by research. The research that directs and informs design and the design process includes field, laboratory, and library research, risk analysis, and research-based standards and, in the absence of the ability to collect empirical data, scenarios, interaction walk-throughs, and analysis. 


You will need to defend your rationale for the specific research projects undertaken and the design choices made. Because the narrative is an overview, it's often a good place to explain much of the logic for the research undertaken and the design choices made.


Defending HE Research and Design Planning and Choices


Adequate and effective justification of your research and development plan and design choices will often be the key to ensuring unquestioned acceptance of your submission. Here are some suggestions:

  1. Justifying the Research and Development Plan -- the means for creating a usable, low-risk system or device with a low likelihood of use error. Reasoning and justifications for the research and development plan for this system or device include:
    • Compliance with IEC 62366 (part 1).
    • Conformance to FDA HE program guidance (on the FDA website).
    • Guidance from AAMI/ANSI HE-75
    • Guidance from previous, similar and accepted plans 
    • This system or device is a next-generation release of a currently commercially available product. Thus the research and development performed, along with field-collected data, provide guidance for the research and design plan for this next-generation product.
  2. Justifications for performing specific research include:
    • Planned research
    • Research fits within the guidelines set within the research plan.
    • Research is designed to answer specific research questions. Often during a research program, questions arise, related to human performance, a specific design, etc., that may not have been specified in the research plan. Oftentimes these types of studies are applicable to the research and development of a variety of devices and systems. In this case the research is "question-driven." Those research questions need to be clearly defined within the research protocol and become the clear justification for the research and for the applicability and potential value of the findings.
    • Findings from planned research suggest the need for new research not originally planned.
  3. Justification for the foundational design: the initial design that is prototyped, usability tested, and then iterated. The foundational design establishes the basic design philosophy (appearance and operation) that will likely be commercialized. While the foundational design will likely be updated and improved throughout the research and development process, fundamentally it will likely maintain the same design philosophy. Thus, establishment of the foundational design may be the most consequential step in the research and development process. Justifications for the foundational design include:
    • Updated version of an earlier, accepted design using the same design philosophy, with updates and improvements driven by field research, customer feedback, and research on the use of the system under actual conditions.
    • Findings from formative research as defined by the research and development plan undertaken before initiating a design.
    • Compliance with accepted design standards, e.g., AAMI HE75. (There is a wide array of design standards issued and accepted by US agencies as well as by agencies of a variety of other countries. When localization of a design is required, the design standards issued by the targeted country should be considered and referenced.)
  4. Justification for changes made to the foundational and modified designs.
    • Findings from prototype testing.
    • Findings from expert reviewers: resulting from design walkthroughs/reviews and/or interactions with the device or system.
    • Limited field tests of prototypes.
  5. Justification that the design has reached the stage for verification and validation (summative) testing, and that a research protocol can be written that can effectively and realistically test the system or device to demonstrate that it will be safe for use by members of the targeted population in the intended use environment(s).
    • This is the hand-off point to the summative testing phase.
    • Justification that the system or device is ready to hand off: The formative testing up to this point should have subjected the system or device, multiple times, to all of the testing it will face in verification and validation. And the system or device should have passed those tests multiple times. Thus, if the research and development plan was properly executed, nothing of any concern should come from verification and validation testing. If there are findings that are the least bit concerning, then it is time to reexamine your research and development planning and protocols. 
    • Finally, if your formative testing, meaning all of the testing performed up to this point, has been comprehensive, rigorous, and complete, then that testing should dictate the verification and validation research protocols.

What Should be Included in the Section 6 Narrative


I suggest that your narrative be written in the form of a story. It should be a narration that describes in a linear fashion (from the beginning to immediately before the validation step) what you did and why you did it: 

  • if it's research, summarize what you did and what you found, 
  • if it's your foundational design, provide a high-level description of how you arrived at this design (include enough figures to ensure that a reviewer will understand your description) and
  • if it's a design update, explain what change or changes were made and why.
Be sure to include references to your submitted materials in your HE file.



Section 7: Description and categorization of critical tasks


Identifying the critical tasks that will be performed on your system or device should be part of formative research. Often the ability to identify the set of critical tasks is beyond the expertise of the human engineering professional, and identifying as well as categorizing the critical tasks requires the support of subject-matter experts (who should be included from the beginning of the formative research stage). My practice has been to integrate subject-matter experts into the research and design process from product inception.  

The list of requirements for Section 7 includes:

1. Process used to identify critical tasks

With your subject-matter experts, describe the process used to identify your critical tasks. 

2. List and descriptions of critical tasks

Include with this your justifications and reasoning for this list. 

3. Categorization of critical tasks by severity of potential harm

In addition, if any of your critical tasks have the possibility of inflicting moderate to critical harm, I suggest that mitigations be developed, and described, to minimize the likelihood that harm would ever occur. 


4. Descriptions of use scenarios that include critical tasks

These use scenarios should form a fundamental part of both your testing and the justification and rationale for your design (and updates to your design).

Section 7 Narrative


I suggest that in your narrative you include a table with the information from items 2 and 3 above. I would add a brief summary of the process that was used to identify your critical tasks. Finally, include a reference to the use scenarios that include the critical tasks. You don't need to include the scenarios themselves in your narrative; a reference should be sufficient. 

______________________
Note: I plan on periodically updating this article as I learn more and reconsider what I have written. With each update, I'll note at the top of this article when it was updated and list some of the changes that I have made. 

Tuesday, December 10, 2019

New Wearable Sensor Detects Gout and Other Medical Conditions

I just came across this article regarding a wearable sensor system and thought that I would share it. This could be a component in a remote monitoring system. The sensor's information source is the person's sweat: "Sensor can pick up small concentrations of metabolites in sweat and provide readings over long periods of time." To turn this into a remote monitoring system, all that's required is a means to transmit the data wirelessly. 

From the article:

The team’s goal is a sensor that lets doctors continuously monitor the condition of patients with illnesses such as cardiovascular disease, diabetes, and kidney disease, all of which put abnormal levels of nutrients or metabolites in the bloodstream. Patients would be better off if their physician knew more about their personal conditions and this method avoids tests that require needles and blood sampling.
“Such wearable sweat sensors could rapidly, continuously, and noninvasively capture changes in health at molecular levels,” Gao says. “They could make personalized monitoring, early diagnosis, and timely intervention possible.”

Thursday, December 5, 2019

Quick Follow-up to: US life expectancy has not kept pace with that of other wealthy countries and is now decreasing

I wanted to do a quick update and share a few of my findings on this topic. I compared US life expectancy to that of our peer countries. These are countries, or wealthy portions of countries (such as Hong Kong and Macao), that in 1960 would have been considered "first world" or "industrialized." 

The comparison list of countries includes:

  1. Hong Kong SAR, China
  2. Japan
  3. Macao SAR, China
  4. Switzerland
  5. Spain
  6. Italy
  7. Singapore
  8. Luxembourg
  9. Korea, Rep.
  10. Israel
  11. France
  12. Norway
  13. Australia
  14. Malta
  15. Sweden
  16. Canada
  17. Iceland
  18. Ireland
  19. New Zealand
  20. Austria
  21. Netherlands
  22. Belgium
  23. Finland
  24. Greece
  25. United Kingdom
  26. Portugal
  27. Denmark
  28. Germany
  29. Puerto Rico
  30. United States
I added Puerto Rico and South Korea (Republic of Korea) for specific reasons. Puerto Rico is part of the US but geographically, culturally, and linguistically separate, as well as significantly poorer, and as such provides an interesting comparison. In 1960 South Korea would not have been considered a "first world" country, but it has grown into one. Its transformation, and what that has meant for the citizens of South Korea, is interesting as well.

Here's what I found:

  1. Since 1983, US life expectancy never again rose above the median. Before 1983, US life expectancy was generally above the median. 
  2. In 2012 Puerto Rican life expectancy was higher than the US and has remained higher since.
  3. Here's a table of life expectancy for Puerto Rico, the US, and the group median for the countries listed above, from 2013 to 2017:

                        2013     2014     2015     2016     2017
       Puerto Rico      79.03    79.20    79.35    79.49    79.63
       United States    78.74    78.84    78.69    78.54    78.54
       Group Median     81.75    81.92    81.96    82.24    82.28

Of the listed countries, the US ranks last and has ranked last since 2005. And not only that, US life expectancy has been declining, at least during the last three years for which we have records. I should mention that the country ordering shown above is from highest life expectancy to lowest, based on data collected in 2017.

Finally, if you graph the data, the first 28 countries fall into a fairly cohesive grouping, while Puerto Rico and the US have clearly fallen outside of that group, into a lower grouping, since 2011.

From a personal standpoint, the fact that US life expectancy in 2017 is about a year lower than Puerto Rico's and 3 3/4 years lower than the median is a stunning result. (The highest is over 84 years.)
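
For anyone who wants to check that comparison, here's the 2017 arithmetic in a couple of lines, using the values from the table above (the group median is taken from that table rather than recomputed from all 30 countries):

    # 2017 life expectancy values (years) from the table above.
    puerto_rico_2017 = 79.63
    united_states_2017 = 78.54
    group_median_2017 = 82.28

    print(f"US below Puerto Rico by {puerto_rico_2017 - united_states_2017:.2f} years")    # 1.09
    print(f"US below group median by {group_median_2017 - united_states_2017:.2f} years")  # 3.74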

More to follow on this topic.