Abstract:
A computerized system processes the speech of a physician and a patient during a patient encounter to automatically produce a draft clinical report which documents the patient encounter. The draft clinical report includes information that has been abstracted from the speech of the physician and patient. The draft report is provided to the physician for review. Producing the draft clinical report automatically, rather than requiring the physician to prepare the draft clinical report manually, significantly reduces the time required by the physician to produce the final version of the clinical report.
Abstract:
A graphical user interface, referred to herein as a virtual whiteboard, provides both: (1) an automatically prioritized display of information related to a particular patient that is tailored to the current user of the system, and (2) a "scratch pad" area in which multiple users of the system may input free-form text and other data for sharing with other users of the system. When each user of the system accesses the virtual whiteboard, the system: (1) automatically prioritizes the patient information based on characteristics of the user and displays the automatically prioritized patient information to that user, and (2) displays the contents of the scratch pad to the user. As a result, the whiteboard displays both information that is tailored to the current user and information that is common to all users (i.e., not tailored to any particular user).
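The combination of a per-user prioritized view and a shared scratch pad can be sketched as follows. This is a minimal illustration, assuming role-based priority lists and simple record fields; the names (`ROLE_PRIORITY`, `prioritized_view`) and the weighting scheme are assumptions for the example, not taken from the abstract.

```python
# Hypothetical sketch: prioritize patient information by the user's role,
# while the scratch pad is shared unchanged with every user.

ROLE_PRIORITY = {
    "physician": ["diagnoses", "medications", "vitals"],
    "nurse":     ["medications", "vitals", "diagnoses"],
}

def prioritized_view(user_role: str, patient_info: dict, scratch_pad: list) -> dict:
    """Order patient information for this user; scratch pad contents are common."""
    order = ROLE_PRIORITY.get(user_role, sorted(patient_info))
    tailored = {k: patient_info[k] for k in order if k in patient_info}
    return {"tailored": tailored, "shared": list(scratch_pad)}

info = {"vitals": "HR 80", "medications": ["aspirin"], "diagnoses": ["HTN"]}
pad = ["Family meeting at 3pm"]

view = prioritized_view("nurse", info, pad)
# A nurse sees medications first; every user sees the same scratch pad.
```

A real system would derive the priority ordering from richer user characteristics than a role label, but the split between tailored and common content is the same.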
Abstract:
Embodiments of the present invention are directed to computer systems for implementing dynamic, data-driven workflows within healthcare and other environments. Such a system may include a computer-processable definition of one or more workflows. Each workflow definition may define various aspects of the corresponding workflow, such as the data required by the workflow, a process for extracting such data from a variety of structured and/or unstructured data sources, a set of process steps to be performed within the workflow, and a condition for triggering the workflow. The system may use the workflow definition to extract the data required by the workflow and to perform the workflow's process steps on the extracted data in response to determining that the workflow's trigger condition has been satisfied. The workflow may change in response to changes in data extracted by the workflow.
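A workflow definition of the kind described above, bundling required data, an extraction process, a trigger condition, and process steps, can be sketched as a small data structure. This is an illustrative sketch only; the class and function names (`WorkflowDefinition`, `run_if_triggered`) and the example clinical rule are assumptions, not the patented implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class WorkflowDefinition:
    name: str
    required_fields: List[str]            # data required by the workflow
    extract: Callable[[dict], dict]       # process for extracting data from a source
    trigger: Callable[[dict], bool]       # condition for triggering the workflow
    steps: List[Callable[[dict], dict]] = field(default_factory=list)

def run_if_triggered(wf: WorkflowDefinition, source: dict) -> Optional[dict]:
    data = wf.extract(source)             # extract the data the workflow needs
    if not wf.trigger(data):              # run only when the trigger condition holds
        return None
    for step in wf.steps:                 # perform the workflow's process steps
        data = step(data)
    return data

# Hypothetical example: route an elevated heart rate for clinician review.
wf = WorkflowDefinition(
    name="tachycardia-review",
    required_fields=["heart_rate"],
    extract=lambda src: {"heart_rate": src.get("vitals", {}).get("hr")},
    trigger=lambda d: d["heart_rate"] is not None and d["heart_rate"] > 100,
    steps=[lambda d: {**d, "action": "route to clinician review"}],
)
```

Because the trigger and steps operate on extracted data, re-running the workflow after the underlying data changes naturally changes its behavior, which is the "dynamic, data-driven" property the abstract describes.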
Abstract:
A computerized billing code generator reviews billing source data containing both admissible first data (such as physician's notes) and inadmissible second data (such as nurse's notes). The billing code generator determines whether to generate a request to review the first data based on both the first data and the second data. For example, the billing code generator may generate the request in response to determining that the second data contains information that is inconsistent with information contained in the first data. As another example, the billing code generator may generate the request in response to determining that the second data contains information that is not contained within the first data.
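The two example conditions for generating a review request can be sketched directly. The function name (`needs_review`) and the dictionary-of-fields representation of the notes are illustrative assumptions.

```python
# Illustrative sketch of the review-request decision: request review when the
# inadmissible (second) data contradicts the admissible (first) data, or
# contains information the first data lacks.

def needs_review(admissible: dict, inadmissible: dict) -> bool:
    for key, value in inadmissible.items():
        if key not in admissible:
            return True      # second data has information absent from the first
        if admissible[key] != value:
            return True      # second data is inconsistent with the first
    return False

physician_notes = {"diagnosis": "pneumonia", "smoker": "no"}
nurse_notes_conflict = {"diagnosis": "pneumonia", "smoker": "yes"}
nurse_notes_extra = {"diagnosis": "pneumonia", "oxygen_given": "yes"}
```

Here `nurse_notes_conflict` triggers a request via the inconsistency rule, and `nurse_notes_extra` via the missing-information rule.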
Abstract:
A method for dynamic de-identification of a document includes generating a document including a tag associated with an element including protected health information, the tag including at least one instruction for rendering the element. The method includes identifying a level of authorization of a user requesting access to the generated document. The method includes rendering the document for display to the user according to the at least one instruction in the tag, based on the identified level of authorization.
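A minimal sketch of tag-driven rendering follows, assuming a hypothetical tag syntax of the form `<phi min_level="N">…</phi>` and an integer authorization scheme; both are illustrative assumptions, not the patent's format.

```python
import re

# Assumed tag format: the rendering instruction is a minimum authorization
# level; elements below the user's level render, others are redacted.
PHI_TAG = re.compile(r'<phi min_level="(\d+)">(.*?)</phi>')

def render(document: str, user_level: int) -> str:
    """Render each tagged element per its instruction and the user's level."""
    def replace(match):
        required = int(match.group(1))
        return match.group(2) if user_level >= required else "[REDACTED]"
    return PHI_TAG.sub(replace, document)

doc = 'Patient <phi min_level="2">Jane Doe</phi> admitted with chest pain.'
```

A fully authorized user sees the protected element in place; a less authorized user sees the same document with that element redacted, without a second copy of the document ever being stored.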
Abstract:
A system includes a data record (such as an Electronic Medical Record (EMR)) and a user interface for modifying (e.g., storing data in) the data record. The data record includes both free-form text elements and discrete data elements. The user interface includes user interface elements for receiving free-form text data. In response to receiving free-form text data via the free-form text user interface elements, a suggested action is identified, such as a suggested action to take in connection with one of the discrete data elements of the data record. Output is generated representing the suggested action. A user provides input indicating acceptance or rejection of the suggested action. The suggested action may be performed in response to the user input.
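The free-text-to-suggested-action loop can be sketched as below. The keyword matching, field names, and `suggest_action` helper are illustrative assumptions standing in for whatever analysis the system actually performs.

```python
from typing import Optional

# Hypothetical sketch: scan free-form text for cues and propose an update to
# a discrete data element, applied only if the user accepts it.

def suggest_action(free_text: str) -> Optional[dict]:
    text = free_text.lower()
    if "non-smoker" in text:                 # check the more specific cue first
        return {"field": "smoking_status", "value": "never"}
    if "smoker" in text:
        return {"field": "smoking_status", "value": "current"}
    return None

record = {"smoking_status": None}            # discrete elements of the data record

suggestion = suggest_action("Patient is a smoker, 1 pack/day.")
accepted = True                              # user input accepting the suggestion
if suggestion and accepted:
    record[suggestion["field"]] = suggestion["value"]   # perform only on acceptance
```

The important property is the separation of concerns: the suggestion is generated from free text, but the discrete element changes only after explicit user acceptance.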
Abstract:
Inputs provided into user interface elements of an application are observed. Records are made of the inputs and the state(s) the application was in while the inputs were provided. For each state, a corresponding language model is trained based on the input(s) provided to the application while the application was in that state. When the application is next observed to be in a previously-observed state, a language model associated with the application's current state is applied to recognize speech input provided by a user and thereby to generate speech recognition output that is provided to the application. An application's state at a particular time may include the user interface element(s) that are displayed and/or in focus at that time.
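The per-state training and selection described above can be sketched with a trivial unigram model per state. Real systems would use far richer language models; the class name and scoring scheme here are illustrative assumptions.

```python
from collections import Counter, defaultdict

# Illustrative sketch: one unigram model per observed application state,
# selected by the state the application is in when speech input arrives.

class StateLanguageModels:
    def __init__(self):
        self.counts = defaultdict(Counter)   # state -> word frequencies

    def observe(self, state: str, text: str):
        """Record input provided while the application was in `state`."""
        self.counts[state].update(text.lower().split())

    def score(self, state: str, word: str) -> float:
        """Relative frequency of `word` under the model for `state`."""
        c = self.counts[state]
        total = sum(c.values())
        return c[word.lower()] / total if total else 0.0

lms = StateLanguageModels()
# Training: inputs recorded while particular UI elements were in focus.
lms.observe("medication-field", "aspirin aspirin lisinopril")
lms.observe("allergy-field", "penicillin latex")
```

When the application is next observed in the `"medication-field"` state, scoring against that state's model biases recognition toward the vocabulary previously entered there.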
Abstract:
An automatic speech recognition system includes an audio capture component, a speech recognition processing component, and a result processing component which are distributed among two or more logical devices and/or two or more physical devices. In particular, the audio capture component may be located on a different logical device and/or physical device from the result processing component. For example, the audio capture component may be on a computer connected to a microphone into which a user speaks, while the result processing component may be on a terminal server which receives speech recognition results from a speech recognition processing server.
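The three-component split can be sketched as separate classes, each of which could run on a different logical or physical device. The class names and stand-in payloads are illustrative assumptions; here the components are wired together in-process rather than over a network.

```python
# Hedged sketch of the distributed pipeline: audio capture, recognition
# processing, and result processing as separable components.

class AudioCapture:
    """Would run on the user's computer, connected to the microphone."""
    def capture(self) -> bytes:
        return b"pcm-audio-frames"          # stand-in for microphone audio

class SpeechRecognizer:
    """Would run on a speech recognition processing server."""
    def recognize(self, audio: bytes) -> str:
        return "order chest x-ray"          # stand-in for a recognition result

class ResultProcessor:
    """Would run on a terminal server, apart from the capture device."""
    def process(self, text: str) -> str:
        return text.capitalize() + "."      # e.g., format the result for the app

audio = AudioCapture().capture()
text = SpeechRecognizer().recognize(audio)
final = ResultProcessor().process(text)
```

Because each stage communicates only through its inputs and outputs, the boundaries between stages are natural places to insert network transport when the components are placed on different devices.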
Abstract:
A computerized system learns a mapping from the speech of a physician and patient in a physician-patient encounter to discrete information to be input into the patient's Electronic Medical Record (EMR). The system learns this mapping based on a transcript of the physician-patient dialogue, an initial state of the EMR (before the EMR was updated based on the physician-patient dialogue), and a final state of the EMR (after the EMR was updated based on the physician-patient dialogue). The learning process is enhanced by taking advantage of knowledge of the differences between the initial EMR state and the final EMR state.
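How the before/after difference yields training signal can be sketched as follows: the diff between EMR states produces (transcript, field, value) examples for a learned extractor. The function names and record fields are illustrative assumptions.

```python
# Hedged sketch: use the EMR before/after difference as supervision, pairing
# the encounter transcript with each discrete update it produced.

def emr_diff(initial: dict, final: dict) -> dict:
    """Fields added or changed between the initial and final EMR states."""
    return {k: v for k, v in final.items() if initial.get(k) != v}

def training_examples(transcript: str, initial: dict, final: dict):
    """One (transcript, field, value) example per discrete update."""
    return [(transcript, f, v) for f, v in emr_diff(initial, final).items()]

before = {"bp": "120/80", "smoking_status": None}
after  = {"bp": "140/90", "smoking_status": "former"}

examples = training_examples("physician-patient transcript", before, after)
```

Restricting supervision to the changed fields is what lets the learner focus on the parts of the dialogue that actually drove each update, rather than the full EMR.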
Abstract:
A computer system automatically authenticates a user to a server in response to determining that an audio signal received from one microphone positively correlates with an audio signal received from another microphone that is associated with a computing device at which the user is already authenticated to the server. Two audio signals are received from distinct microphones associated with first and second computing devices. A correlation module performs correlation on the two audio signals. An authentication module automatically authenticates a user to a server at the first computing device if it is determined that the first audio signal positively correlates with the second audio signal and the user is already authenticated to the server at the second computing device.
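The correlation-plus-authentication check can be sketched with Pearson correlation over two time-aligned signals. The threshold, signal representation, and function names are illustrative assumptions; a real system would operate on streamed audio and a tuned decision rule.

```python
import math

# Minimal sketch of the decision: authenticate at the first device only when
# its audio positively correlates with the second device's audio AND the user
# is already authenticated at the second device.

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

def authenticate(sig1, sig2, already_authenticated: bool, threshold=0.8) -> bool:
    return already_authenticated and pearson(sig1, sig2) > threshold

mic1 = [0.1, 0.5, -0.3, 0.8, -0.2]
mic2 = [0.12, 0.48, -0.28, 0.79, -0.22]   # same speech, second microphone
```

Two microphones capturing the same speech produce strongly correlated signals, so the check effectively verifies that both devices are in the same acoustic environment as the already-authenticated user.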