Monday, December 30, 2019

Signal Detection and the Apple Watch

In the last two articles about the Apple Watch's capability to detect atrial fibrillation, I made references to terminology ("false positive") that has its roots in Signal Detection Theory. Signal Detection Theory was developed as a means to determine the accuracy of early radar systems. The technique has since migrated to communications systems, psychology, diagnostics, and a variety of other domains where determining the presence or absence of something of interest matters, especially when the signal to be detected is embedded in a noisy environment (as was particularly true of early radars) or when the signal is weak and difficult to detect.

Signal detection can be a powerful tool to guide research methodologies and data analysis. I have used the signal detection paradigm in my own research, both in developing my research methodology and in analyzing the data: planned and post-hoc analyses. In fact, when I have taught courses in research methods and statistical analysis, I have used the signal detection paradigm as a way to explain how to detect the effects of an experimental manipulation in your data.

Because I've raised issues related to signal detection, and because it is a powerful tool for research and development, I decided to provide a short primer on signal detection.


Signal Detection


The central feature of signal detection is the two-by-two outcome matrix described below.

The signal detection process begins with a detection window or event. The detection window could be a period of time, or a specified occurrence such as the rapid presentation of a stimulus in a psychological test, after which we determine whether or not the subject detected what was presented.

Or, in the case of the Apple Watch, whether it detects atrial fibrillation during that window. In devices such as the Apple Watch, how the system defines the detection window can be important. Since we have no information regarding how the Apple Watch's atrial fibrillation detection system operates, it's difficult to know how it defines its detection window.


Multiple, Repeated Trials

Before discussing the meaning of the Signal Detection Matrix, it's important to understand that every matrix is built from multiple, repeated trials with a particular detection system, whether that detection system is a machine or a biological entity such as a person. Signal Detection Theory is grounded in probability theory; therefore, multiple trials are required to create a viable and valid matrix.


The Four Cells of the Signal Detection Matrix

During the window of detection, a signal may or may not be present. Each cell represents an outcome of a detection event. The possible outcomes are:

  1. The signal was present and it was detected: a hit (upper left cell).
  2. The signal was not present and the system or person correctly reported no signal present: a correct rejection (lower right cell).
  3. The signal was absent but was erroneously reported as present: a Type I error (lower left cell).
  4. The signal was present but was reported as absent: a Type II error (upper right cell).

The objective of any detection system is for the outcomes of detection events to end up in cells 1 and 2, that is, correctly reported. However, from a research standpoint, the error cells (Outcomes 3 and 4) are the most interesting and revealing.


Incorrect Report Cells



Outcome 3: Type I Error

A Type I error is reporting that a signal is present when it was not. This is known as a "false alarm" or "false positive." The associated statistic is alpha, the ratio of Outcome 3 occurrences to the number of trials or detection events in which no signal was present.

Outcome 4: Type II Error

A Type II error is reporting that a signal is not present when in fact it was. This is a "failure to detect," also called a miss or false negative. The associated statistic is beta, the ratio of Outcome 4 occurrences to the number of trials or detection events in which the signal was present.
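To make the four outcomes and the two error rates concrete, here is a minimal Python sketch using made-up trial data (it is not Apple's algorithm or data, just an illustration of the bookkeeping):

    # Each hypothetical trial records whether a signal was actually present
    # and whether the detector reported it as present.
    trials = [
        (True, True),    # Outcome 1: hit
        (False, False),  # Outcome 2: correct rejection
        (False, True),   # Outcome 3: false alarm (Type I error)
        (True, False),   # Outcome 4: miss (Type II error)
        (True, True),
        (False, False),
    ]

    hits = sum(1 for present, reported in trials if present and reported)
    correct_rejections = sum(1 for present, reported in trials if not present and not reported)
    false_alarms = sum(1 for present, reported in trials if not present and reported)
    misses = sum(1 for present, reported in trials if present and not reported)

    # Error rates are conditional on whether the signal was actually present.
    false_alarm_rate = false_alarms / (false_alarms + correct_rejections)  # estimate of alpha
    miss_rate = misses / (misses + hits)                                   # estimate of beta

    print("false alarm rate:", false_alarm_rate)
    print("miss rate:", miss_rate)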


If you're designing a detection system, the idea is to minimize both types of errors. However, no system is perfect, and as such it's important to determine which type of error is more acceptable, Type I or Type II, because there are likely to be consequences either way.

Trade-off Between Type I and Type II Errors

In experimental research the emphasis has largely been on minimizing Type I errors, that is, reporting an experimental effect when in actuality none was present. Lowering your alpha level, that is, decreasing your tolerance for Type I errors, increases the likelihood of making a Type II error: reporting that an experimental effect was not present when in fact it was.

However, with medical devices, what type of error is of greater concern, Type I or Type II? That's a decision that will need to be made.

Before leaving this section, I should mention that the analysis of the trade-off between Type I and Type II errors is called Receiver Operating Characteristic (ROC) analysis. This is something that I'll discuss in a later article.
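As a small preview of that trade-off, here is a Python sketch with entirely synthetic numbers (not a model of the Apple Watch): a detector compares a noisy measurement against a threshold, and sweeping the threshold trades false alarms against misses. Plotting hit rate against false-alarm rate across thresholds traces out the ROC curve.

    import random

    random.seed(1)

    # Synthetic measurements: noise-only trials vs. signal-plus-noise trials.
    noise = [random.gauss(0.0, 1.0) for _ in range(1000)]
    signal = [random.gauss(1.0, 1.0) for _ in range(1000)]

    for threshold in (-1.0, 0.0, 0.5, 1.0, 2.0):
        false_alarm_rate = sum(x > threshold for x in noise) / len(noise)
        hit_rate = sum(x > threshold for x in signal) / len(signal)
        # A low threshold catches most signals (high hit rate) but also
        # produces many false alarms; a high threshold does the opposite.
        print(threshold, round(false_alarm_rate, 3), round(hit_rate, 3))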


With Respect to the Apple Watch 


Since I have no access to Apple's thinking when it was designing the Watch's atrial fibrillation software, I can't know for certain what went into designing the atrial fibrillation detection algorithm for the Apple Watch. However, based on their own research, it seems that Apple decided to err on the side of accepting false positives over false negatives -- although we can't be completely sure this is true, because Apple did not do the research to determine the rate at which the Apple Watch failed to detect atrial fibrillation when it was known to be present.

With a "medical device" such as the Apple Watch, it would seem reasonable to side on accepting false positives over false positive. That is, to set your alpha level low. The hope would be that if the Apple Watch detected atrial fibrillation the owner of the watch would seek medical attention to determine whether or not a diagnosis of atrial fibrillation was warranted for receiving treatment for the condition. If the watch generated a false alarm, then there was no harm in seeking medical advice ... it would seem. The author of the NY Times article I cited in the previous article appears to hold to this point of view. 

However ...

The problem with a system that generates a high rate of false alarms is that, all too often, its signals come to be ignored. Consider the following scenario: the owner of an Apple Watch receives an indication that atrial fibrillation has been detected. The owner goes to a physician, who reports that there's no indication of atrial fibrillation. Time passes, and the watch again reports that atrial fibrillation has been detected. The owner goes back to the physician, who gives the owner the same report as before: no atrial fibrillation detected. What do you think will happen the next time the watch reports that atrial fibrillation has been detected? It's likely that the owner will simply ignore the report. That would be a real problem if the owner had in fact developed atrial fibrillation. In this scenario the watch "cried wolf" too many times. And therein lies the problem with a system that's tuned to accept a high rate of false alarms.





Thursday, December 26, 2019

Follow-up: Apple Watch 5, Afib detection, NY Times Article

The New York Times has published an article regarding the Apple Watch 5's capability to detect atrial fibrillation. The link to the article is below:

https://www.nytimes.com/2019/12/26/upshot/apple-watch-atrial-fibrillation.html?te=1&nl=personal-tech&emc=edit_ct_20191226?campaign_id=38&instance_id=14801&segment_id=19884&user_id=d7e858ffd01b131c28733046812ca088&regi_id=6759438320191226

The title and the subtitle of the article provide a good summary of what the author (Aaron E. Carroll) found:

"The Watch Is Smart, but It Can’t Replace Your Doctor
Apple has been advertising its watch’s ability to detect atrial fibrillation. The reality doesn’t quite live up to the promise."

With reference to my article, the Times article provides more detail on the trial that Apple ran to test the effectiveness of the Apple Watch's ability to detect atrial fibrillation. I found that detail interesting and enlightening, and it clarified some of the issues I had with how the study was reported, for both the procedure and the results. In addition, the author and I concur regarding the Apple Watch's extremely high reported rate of false positives for atrial fibrillation. I find this quite interesting when you consider that screening for atrial fibrillation can be as simple as taking the patient's pulse.


Here are a few quotes from the article:


"Of the 450 participants [these are study participants where the Apple Watch had detected atrial fibrillation] who returned patches , atrial fibrillation was confirmed in 34 percent, or 153 people. 
...

Many news outlets reporting on the study mentioned a topline result: a “positive predictive value” of 84 percent. That statistic refers to the chance that someone actually has the condition if he or she gets a positive test result.

But this result wasn’t calculated from any of the numbers above. It specifically refers to the subset of patients who had an irregular pulse notification while wearing their confirmatory patch. That’s a very small minority of participants. Of the 86 who got a notification while wearing a patch, 72 had confirmed evidence of atrial fibrillation. (Dividing 72 by 86 yields 0.84, which is how you get a positive predictive value of 84 percent.)

Positive predictive values, although useful when talking to patients, are not always a good measure of a test’s effectiveness. When you test a device on a group where everyone has a disease, for instance, all positive results are correct."
...

"There are positive messages from this study. There’s potential to use commercial devices to monitor and assess people outside of the clinical setting, and there’s clearly an appetite for it as well. But for now and based on these results, while there may be reasons to own an Apple Watch, using it as a widespread screen for atrial fibrillation probably isn’t one."
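The positive-predictive-value arithmetic quoted above is easy to verify; here is a quick Python check using only the numbers reported in the article:

    # Numbers quoted in the NY Times article: of the 86 participants who got an
    # irregular-pulse notification while wearing the confirmatory ECG patch,
    # 72 had confirmed atrial fibrillation.
    notified_while_wearing_patch = 86
    confirmed_afib = 72

    positive_predictive_value = confirmed_afib / notified_while_wearing_patch
    print(round(positive_predictive_value, 2))  # 0.84, i.e., 84 percent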

Friday, December 13, 2019

Submission of the Human Engineering File to the FDA and Other Regulatory Bodies, Section 8: Part VI

This is the easiest section for me to cover, largely because the requirements for the validation section are clearly spelled out in detail.

8. Details of human factors validation testing
  • Rationale for test type selected (i.e., simulated use, actual use or clinical study)
  • Test environment and conditions of use
  • Number and type of test participants
  • Training provided to test participants and how it corresponded to real-world training levels
  • Critical tasks and use scenarios included in testing
  • Definition of successful performance of each test task 
  • Description of data to be collected and methods for documenting observations and interview responses
  • Test results: Observations of task performance and occurrences of use errors, close calls, and use problems 
  • Test results: Feedback from interviews with test participants regarding device use, critical tasks, use errors, and problems (as applicable)  
  • Description and analysis of all use errors and difficulties that could cause harm, root causes of the problems, and implications for additional risk elimination or reduction 
These requirements are largely self-explanatory. However, I would like to make a few comments and additions.

  • Validation testing -- including verification testing -- is often performed by outside consulting firms. Thus it is extremely important that you spell out how your testing should be performed and which measurements are to be collected and reported. I've noted that oftentimes the consulting company is asked to write both the protocol and the testing script. This is a mistake. The organization that performed the work up to the validation testing stage should be responsible for creating the protocol and the script, because it is this organization that will be responsible for the submission of the HE file to the FDA and/or other regulatory bodies. It's important that the organization responsible for the research and development, as well as for the submission, be in full control of what takes place during the validation step.
  • Verification and validation testing. Verification testing takes place under laboratory conditions, using members of the targeted user population as test participants. This is an additional check on the usability of the system or device. Validation testing takes place under actual or simulated-use conditions -- with all the distractions and problems that users will likely encounter.
  • The rationale for the type of testing performed and the conditions chosen for validation testing can be extremely important, especially if you have chosen a testing procedure less rigorous than performance testing under real or simulated-real conditions. Consult IEC 62366 and AAMI HE75 for guidance.
  • The testing procedure should ensure that critical tasks are fully tested and are likely to be performed repeatedly by test participants.
  • Suggested additional measurement: if your system or device has error-trapping and redirecting capabilities, be sure to report how often these capabilities were triggered and whether they enabled the test participant to successfully complete the task. Such an outcome could be labeled "task successfully completed, close call." A system or device with the capability to protect against use errors is worth pointing out, as sketched below.
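Purely as an illustration (the participant labels and outcome categories below are invented, not taken from any guidance document), here is a short Python sketch of how error-trap outcomes might be tallied across test participants:

    # Hypothetical per-participant outcomes for one critical task.
    # "trap_triggered" records whether the device's error trapping fired,
    # and "completed" records whether the participant finished the task.
    observations = [
        {"participant": "P01", "trap_triggered": False, "completed": True},
        {"participant": "P02", "trap_triggered": True,  "completed": True},   # close call
        {"participant": "P03", "trap_triggered": True,  "completed": False},  # use error
        {"participant": "P04", "trap_triggered": False, "completed": True},
    ]

    triggered = [o for o in observations if o["trap_triggered"]]
    close_calls = [o for o in triggered if o["completed"]]

    print("error trap triggered:", len(triggered), "of", len(observations))
    print("completed after trap (close calls):", len(close_calls), "of", len(triggered))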

What to include in your narrative?


Include the abstract or abstracts of your validation testing in your narrative. 

If you haven't included any significant issues or root-cause analysis in your abstract, be sure to include them in your narrative. Surface all issues or concerns in your narrative; if you don't, it could appear to a reviewer that you're trying to hide problems you encountered. Even the appearance of hiding problems could jeopardize approval of your system or device.


Updated: US life expectancy has not kept pace with that of other wealthy countries and is now decreasing: What appears to be causing this?

Here's the reference to the article that prompted my analysis of the data they collected, along with a brief summary of their results:

https://jamanetwork.com/journals/jama/fullarticle/2756187?guestaccesskey=c1202c42-e6b9-4c99-a936-0976a270551f&utm_source=for_the_media&utm_medium=referral&utm_campaign=ftm_links&utm_content=tfl&utm_term=112619

Here's a summary of their conclusions: 

US life expectancy increased for most of the past 60 years, but the rate of increase slowed over time and life expectancy decreased after 2014. A major contributor has been an increase in mortality from specific causes (eg, drug overdoses, suicides, organ system diseases) among young and middle-aged adults of all racial groups, with an onset as early as the 1990s and with the largest relative increases occurring in the Ohio Valley and New England. The implications for public health and the economy are substantial, making it vital to understand the underlying causes.

Life expectancy data for 1959-2016 and cause-specific mortality rates for 1999-2017 were obtained from the US Mortality Database and CDC WONDER.

__________
In my previous two articles referencing this study, I examined data from other countries, and in my most recent article I compared US life expectancy with that of US peer countries. If you read that article, you'll know that my findings showed the US at the bottom of the group; US life expectancy is even lower than that of Puerto Rico.

The JAMA article referenced above examined US mortality data only, but performed a careful analysis of why people died. I think we can agree that US life expectancy, in comparison with peer countries, is lower than it should be, and the fact that it appears to have begun to drop is unacceptable. My question is why. Why is US life expectancy dropping instead of rising? What appears to be the major driver or drivers of this phenomenon?

I pulled two figures from the study that appear to show why US life expectancy has been dropping over the last three years and why, in earlier years, increases in US life expectancy had not kept pace with those of its peer countries.

Figures 4 and 6 from the JAMA article



Figure 4 clearly shows that the major increasing cause of death in all age categories is drug related -- drug poisoning. All the other curves remain reasonably flat, with one exception: hypertensive diseases for those 55 to 64 years old. One might expect diabetes to be a contributor, given that more and more people in the US are obese, but this is not the case. In each age group, diabetes-related deaths are either flat or dropping. (The increase in hypertensive deaths is likely related to the overall increase in obesity.)

Figure 6 breaks down drug poisoning deaths by race/ethnicity from 1999 to 2017. White Americans and American Indians & Alaska Natives show a steady increase in deaths from drug overdoses. African-American drug overdose deaths appear to have been relatively flat until 2014, when they began a dramatic and unsettling rise. Both US Hispanics and Asian/Pacific Islanders show an increase in drug-related deaths, but nothing like the others. 

_______

It looks like we have more evidence of the magnitude of the opioid crisis's impact. 

_______
I've seen references citing the increasing rate of suicide as one of the reasons for the decrease in life expectancy from 2015 to 2017. While suicides have been increasing, they've been increasing at a reasonably steady rate since 1999 -- the earliest year available in the CDC WONDER database. Suicides have not shown the dramatic, curvilinear rise that drug-related deaths have. So yes, an increase in suicides is clearly a contributor to the reduction in US life expectancy, but not a major one. 

Thursday, December 12, 2019

Submission of the Human Engineering File to the FDA and Other Regulatory Bodies, Sections 6 and 7: Part V

I cover Sections 6 and 7 in this article as shown below:

6. Summary of preliminary analyses and evaluations
  • Evaluation methods used
  • Key results and design modifications implemented in response
  • Key findings that informed the human factors validation test protocol
7. Description and categorization of critical tasks
  • Process used to identify critical tasks
  • List and descriptions of critical tasks
  • Categorization of critical tasks by severity of potential harm
  • Descriptions of use scenarios that include critical tasks
I consider Sections 6 and 7 together because the information for these two sections should come from the formative stage of the research and design process. These two sections could be combined into a single section. However, it is apparent that the FDA (and probably other regulatory bodies as well) considers Section 7, Description and categorization of critical tasks, important enough to warrant its own, separate section. 

Importance of Getting It Right


The contents of these sections, the descriptions and explanations provided, can be the difference between: 

  • An easy, unquestioned acceptance of what you've done or
  • A difficult, question-riddled review of the work that you performed resulting in:
    • Approval delays, 
    • A reworking of the submitted materials 
    • Requests for additional research to be performed, or 
    • Rejection of the human engineering file 

To fully address what should be included in Sections 6 and 7, you need to examine your entire HE process in the context of the research and development program for your medical device or system and determine whether your HE process can adequately address the reporting requirements of these two sections. These sections form the core of the report of your research and design process up to the point immediately before you begin your final phase of testing, namely verification and validation (summative) testing.


Section 6: Summary of the Preliminary Analysis and Evaluations


What Should be in Section 6

I briefly cover what should be included in Section 6. Assuming that you are a human engineering professional, you should already have a reasonable understanding of each of the three requirements listed below. 

1. Evaluation methods used

This comprises the entire body of research performed, including all of the data collected before implementing a foundational or initial design, and all of the testing performed on that design.

2. Key results and design modifications implemented in response

What findings from your research led you to create your initial design, and what factors led you to modify that design?

3. Key findings that informed the human factors validation test protocol

How did you arrive at your research protocol for summative/validation testing? How do you know that your validation protocol is appropriate and will verify that your system or device is safe for use?

That's the brief overview of what should be in Section 6. What Section 6 really needs to contain, however, are the logical threads of justification for doing what you did: for creating your research and development plan, for the initial/foundational design, and for how you went about modifying that design. 
   
Don't be deceived by the seeming simplicity of Section 6. It is far more complicated and demands much more investigative and design process rigor than one might imagine. 


Human Engineering (HE): Research and Development


Section 6 is where you lay out all research and development performed in relation to human engineering. Thus, Section 6 becomes the place where you make your case for the research you performed and the design choices you made. After reading Section 6, the reviewer should have a clear understanding of, and be in agreement with, the research and design process that was undertaken. This includes the rationale for the research plan as proposed and undertaken, including the rationale for any changes made to the plan on the basis of research findings. It also includes the rationale for the design process, including the initial or foundational design and the reasons for changes made through design iteration.

Human factors is the study of how humans interact with or operate systems and devices; it is fundamentally research. Human engineering incorporates human factors, but also encompasses design and a design process that should be, at its foundation, driven by research. The research that directs and informs design and the design process includes field, laboratory, and library research, risk analysis, and research-based standards, and, in the absence of the ability to collect empirical data, scenarios, interaction walk-throughs, and analysis. 


You will need to defend your rationale for the specific research projects undertaken and the design choices made. Because the narrative is an overview, it's often a good place to explain much of the logic behind the research undertaken and the design choices made.


Defending HE Research and Design Planning and Choices


Adequate and effective justification of your research and development plan and design choices will often be the key to ensuring unquestioned acceptance of your submission. Here are some suggestions:

  1. Justifying the research and development plan -- the means for creating a usable, low-use-error, low-risk system or device. Reasoning and justifications for creating a research and development plan for this system or device include:
    • Compliance with IEC 62366 (part 1).
    • Conformance to FDA HE program guidance (on the FDA website).
    • Guidance from AAMI/ANSI HE-75
    • Guidance from previous, similar and accepted plans 
    • The system or device is a next-generation release of a currently, commercially available product. Thus the research and development already performed, along with field-collected data, provide guidance for the research and design plan for this next-generation product.
  2. Justifications for performing specific research include:
    • Planned research
    • Research fits within the guidelines set within the research plan.
    • Research is designed to answer specific research questions. Often during a research program, questions arise -- about human performance, design specifics, etc. -- that were not specified in the research plan. Oftentimes these types of studies are applicable to the research and development of a variety of devices and systems. In this case the research is "question-driven." Those research questions need to be clearly defined within the research protocol and become the clear justification for the research and for the applicability and potential value of the findings.
    • Findings from planned research suggest the need for new research not originally planned.
  3. Justification for the foundational design: the initial design that is prototyped, usability tested, and then iterated. The foundational design establishes the basic design philosophy (appearance and operation) that will likely be commercialized. While the foundational design will likely be updated and improved throughout the research and development process, fundamentally it will likely maintain the same design philosophy. Thus, establishment of the foundational design may be the most consequential step in the research and development process. Justifications for the foundational design include:
    • An updated version of an earlier, accepted design using the same design philosophy, with updates and improvements driven by field research, customer feedback, and research on the use of the system under actual conditions.
    • Findings from formative research as defined by the research and development plan undertaken before initiating a design.
    • Compliance with accepted design standards, e.g., AAMI HE75. (There is a wide array of design standards issued and accepted by US agencies as well as by agencies of a variety of other countries. When localization of a design is required, the design standards issued by the targeted country should be considered and referenced.)
  4. Justification for changes made to the foundational and modified designs.
    • Findings from prototype testing.
    • Findings from expert reviewers: resulting from design walkthroughs/reviews and/or interactions with the device or system.
    • Limited field tests of prototypes.
  5. Justification that the design has reached the stage for verification and validation (summative) testing, and that a research protocol can be written that will effectively and realistically test the system or device to demonstrate that it will be safe for use by members of the targeted population in the intended use environment(s).
    • This is the hand-off point to the summative testing phase.
    • Justification that the system or device is ready to hand off: the formative testing up to this point should have subjected the system or device, multiple times, to all of the testing it will face, and the system or device should have passed those tests multiple times. Thus, if the research and development plan was properly executed, nothing of any concern should come out of verification and validation testing. If there are findings that are the least bit concerning, then it is time to reexamine your research and development planning and protocols. 
    • Finally, if your formative testing, meaning all of the testing performed up to this point, has been comprehensive, rigorous, and complete, then that testing should dictate the verification and validation research protocols.

What Should be Included in the Section 6 Narrative


I suggest that your narrative be written in the form of a story: a narration that describes, in a linear fashion (from the beginning to immediately before the validation step), what you did and why you did it: 

  • if it's research, summarize what you did and what you found, 
  • if it's your foundational design, provide a high-level description of how you arrived at this design (include enough figures to be sure that a reviewer will understand your description), and
  • if it's a design update, explain what change or changes were made and why.
Be sure to include references to your submitted materials in your HE file.



Section 7: Description and categorization of critical tasks


Identifying the critical tasks that will be performed on your system or device should be part of formative research. Often, identifying the full set of critical tasks is beyond the expertise of the human engineering professional, and identifying as well as categorizing the critical tasks requires the support of subject-matter experts (who should be included from the beginning of the formative research stage). My practice has been to integrate subject-matter experts into the research and design process from product inception.  

The list of requirements for Section 7 include:

1. Process used to identify critical tasks

With your subject-matter experts, describe the process used to identify your critical tasks. 

2. List and descriptions of critical tasks

Include with this your justifications and reasoning for this list. 

3. Categorization of critical tasks by severity of potential harm

In addition, if any of your critical tasks have the possibility of inflicting moderate to critical harm, I suggest that mitigations be developed, and described, to minimize the likelihood that such harm would ever occur. 


4. Descriptions of use scenarios that include critical tasks

These use scenarios should form a fundamental part of both your testing and the justification and rationale for your design (and for updates to your design).

Section 7 Narrative


I suggest that in your narrative you include a table with the information from items 2 and 3 above, as sketched below. I would add a brief summary of the process that was used to identify your critical tasks. Finally, include a reference to the use scenarios that include the critical tasks. You don't need to include them in your narrative; a reference should be sufficient. 
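Purely as a hypothetical illustration (the field names and severity labels are invented, not taken from any guidance document), the table might be organized along these lines, shown here as a simple Python structure:

    # Hypothetical structure for the critical-task table in the Section 7 narrative.
    critical_tasks = [
        {
            "task": "Name of the critical task",
            "description": "What the user must do and when",
            "severity_of_potential_harm": "e.g., negligible / moderate / serious / critical",
            "use_scenario_reference": "Pointer to the submitted use scenario that exercises the task",
        },
        # ... one entry per critical task identified with your subject-matter experts
    ]

    for task in critical_tasks:
        print(task["task"], "-", task["severity_of_potential_harm"])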

______________________
Note: I plan on periodically updating this article as I learn more and reconsider what I have written. With each update, I'll include at the top of this article, when it was updated and list some of the changes that I have made. 

Tuesday, December 10, 2019

New Wearable Sensor Detects Gout and Other Medical Conditions

I just came across this article regarding a wearable sensor system and thought that I would share it. This could be a component in a remote monitoring system. The sensor's information source is the person's sweat. "Sensor can pick up small concentrations of metabolites in sweat and provide readings over long periods of time." To turn this into a remote monitoring system, all that's required is a means to transmit the data wirelessly. 

From the article:

The team’s goal is a sensor that lets doctors continuously monitor the condition of patients with illnesses such as cardiovascular disease, diabetes, and kidney disease, all of which put abnormal levels of nutrients or metabolites in the bloodstream. Patients would be better off if their physician knew more about their personal conditions and this method avoids tests that require needles and blood sampling.
“Such wearable sweat sensors could rapidly, continuously, and noninvasively capture changes in health at molecular levels,” Gao says. “They could make personalized monitoring, early diagnosis, and timely intervention possible.”

Thursday, December 5, 2019

Quick Follow-up to: US life expectancy has not kept pace with that of other wealthy countries and is now decreasing

I wanted to do a quick update and share a few of my findings on this topic. I compared US life expectancy with that of our peer countries. These are countries, or wealthy portions of countries (such as Hong Kong and Macao), that in 1960 would have been considered "first world" or "industrialized." 

The comparison list of countries includes:

  1. Hong Kong SAR, China
  2. Japan
  3. Macao SAR, China
  4. Switzerland
  5. Spain
  6. Italy
  7. Singapore
  8. Luxembourg
  9. Korea, Rep.
  10. Israel
  11. France
  12. Norway
  13. Australia
  14. Malta
  15. Sweden
  16. Canada
  17. Iceland
  18. Ireland
  19. New Zealand
  20. Austria
  21. Netherlands
  22. Belgium
  23. Finland
  24. Greece
  25. United Kingdom
  26. Portugal
  27. Denmark
  28. Germany
  29. Puerto Rico
  30. United States
I added Puerto Rico and South Korea (Republic of Korea) to the list. Puerto Rico is part of the US but is geographically, culturally, and linguistically separate, as well as significantly poorer, and as such provides an interesting comparison. In 1960 South Korea would not have been considered a "first world" country, but it has grown into one; its transformation, and what that has meant for the citizens of South Korea, is interesting as well.

Here's what I found:

  1. Since 1983, US life expectancy has never again risen above the median. Before 1983, US life expectancy was generally above the median. 
  2. In 2012 Puerto Rican life expectancy rose above that of the US and has remained higher since.
  3. Here's a table of life expectancy for Puerto Rico, the US, and the median for the group of countries listed above, from 2013 to 2017:
                                  

Year             2013     2014     2015     2016     2017
Puerto Rico      79.03    79.20    79.35    79.49    79.63
United States    78.74    78.84    78.69    78.54    78.54
Group Median     81.75    81.92    81.96    82.24    82.28

Of the listed countries, the US ranks last and has ranked last since 2005. And not only that, US life expectancy has been declining -- at least during the last three years for which we have records. I should mention that the country ordering shown above runs from the highest life expectancy to the lowest, based on data collected in 2017.

Finally, if you graph the data, the first 28 countries fall into a fairly cohesive grouping, while Puerto Rico and the US have clearly fallen outside of that group, into a lower grouping, since 2011.

From a personal standpoint, the fact that US life expectancy in 2017 is a year lower than Puerto Rico's and 3 3/4 years lower than the group median is a stunning result. (The highest is over 84 years.)
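As a quick arithmetic check on those gaps, here is a short Python sketch using only the 2017 values from the table above:

    # 2017 values taken from the table above.
    puerto_rico_2017 = 79.63
    united_states_2017 = 78.54
    group_median_2017 = 82.28

    print(round(puerto_rico_2017 - united_states_2017, 2))   # ~1.09 years behind Puerto Rico
    print(round(group_median_2017 - united_states_2017, 2))  # ~3.74 years behind the group median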

More to follow on this topic.