March 18, 2013

De Novo EMR Design Part III: Computer… Computer… Hello Computer…

Continuing our fantasy journey towards a patient-care-oriented EMR for primary care physicians, let’s quickly recap our progress. After much ado about nothing, we came up with the following set of requirements in Parts I and II:
  1. System shall assist with gathering information from various sources (TBD) at the point of care
    a. System shall assist with information recording at the point of care (needs more specificity)
    b. System shall retrieve and accept information from external sources
    c. System shall respond to all external legitimate requests for information
    d. System shall have the ability to access published clinical information (consider buying)
  2. System shall assist with synthesis of said information
  3. System shall assist with patient-doctor relationship building
  4. System shall not make the task harder to perform for the user
In the real world of software design, this would be a good place to seek validation for our thought process. We can do that here too, although we have no real users. A recent issue of JAMA Internal Medicine contains several papers dealing with misdiagnosis in primary care. The main study, conducted at the VA, concludes the following: “Diagnostic errors identified in our study involved a large variety of common diseases and had significant potential for harm. Most errors were related to process breakdowns in the patient-practitioner clinical encounter. Preventive interventions should target common contributory factors across diagnoses, especially those that involve data gathering and synthesis in the patient-practitioner encounter” [emphasis added]. In particular, those process breakdowns were mostly related to “taking medical histories, performing physical examinations, and ordering tests”. If you reread Part II in particular, it seems that we are on the right path. Although the study attributes these problems to the “relatively brief encounters” now common in primary care, neither the study authors nor the invited commentary dare suggest that we allow proper time for each encounter. Instead, the search is on for all sorts of methodologies, checklists, “metacognitive retraining”, and finally “mandatory, structured recording and coding of presenting symptoms, rather than simply diagnoses, in our electronic health record systems” [emphasis added]. All this “without dramatically reducing efficiency”, i.e., without increasing the time a primary care doctor is allowed to spend with a patient. Here we are reminded of the common wisdom of software developers that suggests asking users to describe their problems, but never, ever, allowing them to come up with solutions.

No more beating around the bush then. At this point we must tackle information recording at the point of care, or as it is fondly known by its practitioners, data collection (usually followed or preceded by unprintables). Note that we are insisting on the term “information” instead of “data”. As far as computers are concerned, information comes in only one flavor: zeros and ones. We, on the other hand, have become accustomed to defining information as structured or unstructured, where structured information is considered computable, i.e., it can supposedly be analyzed by computers. This, however, is not a material difference, because computers can analyze all information to various degrees, depending on the capabilities of the analyzing software. Primitive software can only analyze information supplied in a content and format predefined in the software, but as software science advances, computers are becoming less dependent on the user to predigest and preformat their input (at one time, you actually had to enter electrical signals for ones and zeros in order to have the computer do anything useful). So the structured vs. unstructured information debate is really a debate about yesterday’s computers vs. tomorrow’s computers. We will build our imaginary EMR for tomorrow’s computers, some of which are already here today. This is our first technology decision: we are requiring a contextual parser and processor of information that can accept input from all known modalities (keyboard, mouse, microphone, camera/video, stylus, touch/gestures, and electronic interface). Here we refer to our #1-d requirement, which seems pertinent, and task an imaginary systems analyst with calling IBM.
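To make this decision slightly more concrete, here is a minimal sketch of what the front door to such a contextual processor might look like. Every name in it (ModalityInput, ContextualProcessor, and the parsing step itself) is invented purely for illustration; the real thing is whatever our imaginary systems analyst brings back from that phone call.

from dataclasses import dataclass
from datetime import datetime
from typing import Any

# The modalities we committed to accepting at the point of care.
SUPPORTED_MODALITIES = {"keyboard", "mouse", "microphone", "camera", "stylus", "touch", "electronic_interface"}

@dataclass
class ModalityInput:
    """One raw observation arriving from any input device, untouched by the user."""
    modality: str          # e.g. "microphone"
    payload: Any           # audio bytes, pen strokes, an HL7 message, keystrokes...
    captured_at: datetime
    encounter_id: str

class ContextualProcessor:
    """Placeholder for the smart back end; it, and not the user, owns the structuring."""

    def ingest(self, item: ModalityInput) -> None:
        if item.modality not in SUPPORTED_MODALITIES:
            raise ValueError(f"Unsupported modality: {item.modality}")
        # Hand the raw signal to whatever contextual parsing engine we build or license.
        self._parse_in_context(item)

    def _parse_in_context(self, item: ModalityInput) -> None:
        # Deliberately left unspecified: extracting meaning happens here, not at the keyboard.
        pass

The only point of the sketch is architectural: the burden of imposing structure lives behind ingest(), not in front of the clinician.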

We still have to design the front end though, so that information from the encounter can be transferred to our hopefully smart processor. We are reminded of our tongue-in-cheek #4 requirement (or constraint, really), and in order to fulfill it, we will need to evaluate how our users currently document encounters, with the understanding that a large number of physicians do not use computers to document, and an even larger number use computers and hate doing so. Hence, our gold standard, or limiting factor, for #4 is the paper chart. But even the paper chart comes in all sorts of shapes and flavors (a rough sketch of these documentation types follows the list):
  1. Blank sheet of paper allowing freehand documentation
  2. Structured forms with predefined questions, which may contain one or a combination of:
    1. Questions with predefined multiple choice answers
    2. Open-ended questions
    3. Room for freehand written answers
    4. Room for freehand notes
    5. Anatomical drawings for annotation
  3. Dictation devices allowing transcription of voice summaries into the chart
  4. Abstracted lists of factors deemed important (medication list, problem list, allergies, etc.)
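Purely for illustration, and with every type name below made up on the spot, the varieties above can be modeled as a handful of records; the point is simply that a chart entry is a mixed bag of structured answers, free text, drawings, and abstracted lists, and any front end we build has to accept all of them in any combination.

from dataclasses import dataclass, field

# Hypothetical models of the kinds of content a paper chart already holds.

@dataclass
class FreehandNote:
    text: str                                           # dictated or handwritten narrative

@dataclass
class FormQuestion:
    prompt: str
    choices: list[str] = field(default_factory=list)    # an empty list means an open-ended question
    answer: str | None = None                           # typed, written, spoken, or selected

@dataclass
class AnnotatedDrawing:
    template: str                                       # e.g. an anatomical outline
    annotations: list[str] = field(default_factory=list)

@dataclass
class AbstractedList:
    name: str                                           # "medications", "problems", "allergies"...
    items: list[str] = field(default_factory=list)

# One encounter's documentation is simply a mix of the above, in any order.
ChartEntry = list[FreehandNote | FormQuestion | AnnotatedDrawing | AbstractedList]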
Simple electronic versions of charts have not done much other than replace handwriting with keyboard typing and mouse clicks. For physicians who type well and are relatively comfortable with computers, or those who routinely dictated everything, these simple EMRs did not violate our #4 requirement. The more elaborate EMRs took upon themselves the noble task of speeding up the documentation task (years ago there used to be speed contests between EMRs at trade shows). And they did that by using electronic stilts. If you ever tried as a child to master circus stilts, then you know that they allow you to take much longer steps and cover a lot more ground in less time. The caveat, though, is that you will most likely experience several bruising falls while you train yourself to use the stilts, and that you can only use them for short periods of time on perfectly flat surfaces. The flexibility built into your legs and feet by millions of years of evolution, enabling you to sense and respond to details and uneven ground, is pretty much gone, so if you try to use circus stilts in everyday life, you will probably find them to be a handicap rather than an advantage. What we really need is shoes. Walking shoes, hiking boots, tennis shoes, cleats, basketball shoes, and maybe ballet slippers, all in different sizes and widths, colors, materials, support levels, and price ranges. For primary care, we likely need a good pair of cross trainers.

Most software programs can be divided into three parts, or layers: the data layer, where all the information is stored; the display layer, which is what users see and interact with; and the business layer, which controls the application and is the de facto processing brain of the software. As is the case with people, the beauty of software comes from within. The usability of a software product depends much more on the intelligence and abilities of its controller brain than on the lipstick of its display. The things that are causing heartburn for current EMR users were created precisely to compensate for the feeble brains of the software. Since we have already made a technology decision to splurge on the best and most sophisticated software brain available, we will abstain from stealing valuable time from the patient and from the art of observation. Additionally, we will not attempt to shorten the time required for a physician to record observations in brief and plain language (as difficult as that may be). We note here that, unlike other professions, physicians historically did not employ stenographers during patient encounters, and recording observations was very much part of the science of medicine. Therefore, our information-gathering user interface will have the following features and functionalities available:
  1. Ability to record, and transcribe to text, all verbal exchanges during a patient encounter. This includes the ability to identify the parties engaged in the dialog and appropriately classify the information as objective or subjective (patient-reported), and the ability to accept addenda to an encounter through multiple modalities (in case you are wondering, this technology already exists).
  2. Ability to accept electronic input from all medical equipment used in the practice, without any human intervention.
  3. Ability to create electronic forms dynamically, from scratch or based on scanned paper forms, including drawings, and ability to modify these as needed.
  4. Ability to accept input from keyboard, mouse, pen/stylus and voice for all form fields without exception and without prior setup or notice.
When correctly combined, the last three functionalities should do no more than transport paper-chart methods of documentation to a computer screen, nothing more than a glorified Microsoft Office for medicine, while the most difficult functionality, #1, should provide our intelligent processor with the context necessary to augment, and perhaps in due course replace, the need for deliberate documentation. Note that we are in no way restricting our users to a particular subset of their natural language. Somewhere deep in the bowels of our contextual processor there will be vocabularies, ontologies, terminologies, and synonyms, but our imaginary development team does not believe that it is the responsibility of the user to adapt to the machine. We are on a quest to develop intelligent machines, not robotic people.
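As a thought experiment only, and with every function below invented for the occasion, functionality #1 might boil down to a skeleton like this: attribute each transcribed utterance to a speaker, file it as subjective or objective, and only then hand it to the contextual processor.

from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str   # "patient", "physician", ... -- output of a diarization step we assume exists
    text: str      # transcription of the spoken words

def classify_utterance(utterance: Utterance) -> str:
    """Toy rule: patient speech is subjective history; clinician speech is objective observation."""
    return "subjective" if utterance.speaker == "patient" else "objective"

def file_encounter(utterances: list[Utterance]) -> dict[str, list[str]]:
    """Group a transcribed dialog into the two familiar sections of a note."""
    note: dict[str, list[str]] = {"subjective": [], "objective": []}
    for u in utterances:
        note[classify_utterance(u)].append(u.text)
    return note

# Example: two turns of dialog, already transcribed and attributed upstream.
dialog = [
    Utterance("patient", "The cough started about two weeks ago."),
    Utterance("physician", "Lungs clear to auscultation bilaterally."),
]
print(file_encounter(dialog))

The hard parts, transcription and speaker identification, are assumed to happen upstream of this skeleton, which is exactly why #1 was flagged as the most difficult item on the list.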

I can practically see the raised eyebrows of those who want to measure, analyze, and provide clinical advice to practicing physicians, contending that most EMRs fail to be useful precisely because they are just electronic paper charts. Strangely enough, the much-touted electronic media and all their blogging/social media offshoots are doing quite well in spite of being just electronic newspapers and pamphlets, and this revolutionary transition would have failed miserably if we had required authors to create content from dropdowns and checkboxes. The true power of transitioning paper to electronic media comes from the inherently better accessibility and portability of electronic media, and from the ability to insert ever-improving computerized synthesis underneath the familiar look and feel of paper. As we move on to our #2 requirement and define our specifications for synthesizing and serving information, it should become apparent that although inconveniencing the user is one solution to the very difficult problem of information synthesis, it is most likely not the best solution.

We did not cover here in any detail the exchange of information imposed by 1-b and 1-c, since simple technologies for both functionalities exist today, particularly if we don’t insist on specific structures for our exchanged content, and we do not. Since our 1-a requirement proved to be a handful, and we only scratched the surface in the most general terms, we will postpone discussion of requirement #2, which includes not only intelligent synthesis of information but also serving the results to users at the right time and in the right place, to Part IV, and we will add a Part V to deal with the fuzzy requirement expressed by #3. We have a long road ahead of us, and this is just make-believe design...
