Abstract:
A wear leveling technique is employed in a memory device so that the cycling history of a memory block is represented by the cycling history of one representative memory cell or a small number of representative memory cells. A control logic block tracks the cycling history of the one or more representative memory cells. A table within the memory device tabulates the predicted shift in the optimal value of a reference variable for a sensing circuit as a function of cycling history. Prior to sensing a memory cell, the control logic block checks the total cycle count of the one or more representative memory cells and adjusts the value of the reference variable in the sensing circuit, thereby providing an optimal reference value for each sensing cycle of the memory device.
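The table-driven adjustment above can be sketched as a simple lookup keyed on the representative cells' cycle count. The threshold counts and shift values below are hypothetical placeholders, not figures from the abstract:

```python
# (cycle_count_threshold, reference_shift) pairs: the predicted shift in
# the optimal sensing reference as a function of cycling history.
# Values are illustrative only.
SHIFT_TABLE = [
    (0, 0.00),
    (10_000, 0.05),
    (50_000, 0.12),
    (100_000, 0.20),
]

def adjusted_reference(base_reference: float, total_cycles: int) -> float:
    """Look up the predicted shift for the representative cells' total
    cycle count and apply it to the base sensing reference."""
    shift = 0.0
    for threshold, table_shift in SHIFT_TABLE:
        if total_cycles >= threshold:
            shift = table_shift  # highest threshold not exceeding the count
    return base_reference + shift
```

Because only the representative cells' count is tracked, a single lookup before each sense operation suffices, rather than per-block bookkeeping.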
Abstract:
A method for doping terminals of a field-effect transistor (FET), the FET including a drain region, a source region, and a surround gate surrounding a channel region, the method including depositing a dopant-containing layer, such that the surround gate prevents the dopant-containing layer from contacting the channel region of the FET, the dopant-containing layer including a dopant. The method then diffuses the dopant from the dopant-containing layer into at least one of the drain region and the source region of the FET.
Abstract:
A phase change memory cell and a method for fabricating the phase change memory cell. The phase change memory cell includes a bottom electrode and a first non-conductive layer. The first non-conductive layer defines a first well, a first electrically conductive liner lines the first well, and the first well is filled with a phase change material.
Abstract:
Solutions for preparing container images and data for container workloads prior to workload start times predicted through workload trend analysis. Local storage space on each node is managed based on workload trends, optimizing local storage of image files without requiring frequent reloading and/or deletion of image files, and avoiding network-intensive I/O operations when workload scheduling systems pull images to local storage. Systems collect historical data, including image and workload properties, and analyze the historical data for workload trends, including predicted start times, needed image files, and the number and types of nodes. Based on predicted future workload start times, nodes are selected from an ordered list of node requirements and workload properties. Each selected node's local storage is managed using the predicted future start times of workloads, retaining image files whose workloads have sooner start times while removing, as needed, image files predicted to be utilized by workloads further into the future.
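The eviction rule described above can be sketched as follows: when local storage is full, remove the cached image whose next predicted workload start time is furthest in the future, keeping images needed soonest. The image names and times below are hypothetical:

```python
def select_image_to_evict(cached_images: dict[str, float]) -> str:
    """cached_images maps image name -> predicted next start time
    (seconds from now). The image needed furthest in the future is
    evicted first, so images with sooner start times are retained."""
    return max(cached_images, key=cached_images.get)

# Illustrative cache state on one node:
cached = {
    "web:1.4": 120.0,        # predicted to run in 2 minutes
    "ml-train:0.9": 3_600.0, # predicted to run in 1 hour
    "batch-etl:2.0": 86_400.0,  # not predicted to run for a day
}
victim = select_image_to_evict(cached)  # "batch-etl:2.0"
```

This avoids the network-intensive re-pull of an image that a soon-starting workload is about to need.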
Abstract:
A computing device includes a processor and a storage device coupled to the processor. The storage device stores instructions to cause the processor to perform acts to provide circuit performance modeling. The acts include identifying and extracting paths of an electric circuit between a plurality of designated components that represent the electric circuit; converting at least one of the extracted paths to a path embedding comprising a vector of a fixed length; and predicting, by a circuit representation-learning model, characteristics of the designated components that represent the electric circuit based on an input of circuit parameters of the electric circuit.
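The fixed-length path embedding step can be illustrated with a minimal sketch. The hashing scheme below is a stand-in assumption for the learned embedding the abstract describes; it only shows how paths of varying length map onto vectors of one fixed length:

```python
import zlib

EMBED_DIM = 8  # fixed embedding length (illustrative)

def path_embedding(path: list[str]) -> list[float]:
    """Map an extracted path (a sequence of component identifiers) onto
    a vector of fixed length EMBED_DIM, regardless of path length."""
    vec = [0.0] * EMBED_DIM
    for position, component in enumerate(path):
        # Deterministic bucket for each component identifier.
        bucket = zlib.crc32(component.encode()) % EMBED_DIM
        vec[bucket] += 1.0 / (position + 1)  # earlier components weigh more
    return vec

embedding = path_embedding(["R1", "C2", "Q3"])  # always length EMBED_DIM
```

A learned model would replace this heuristic, but the key property is the same: every path, short or long, yields a vector the downstream predictor can consume.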
Abstract:
Methods and apparatus for generating medical records from a doctor-patient dialogue are provided. A main content portion of a written doctor-patient conversation is identified. The main content portion of the conversation is extracted from the conversation. The main content of the conversation is divided into sections according to a pre-defined set of sections, and, based on the sections and their respective content, a medical record is generated according to a pre-defined template. The pre-defined template is one of a hard medical record format or a soft medical record format.
Abstract:
Aspects of the invention include systems and methods configured to provide simplified and efficient artificial intelligence (AI) model deployment. A non-limiting example computer-implemented method includes receiving an AI model deployment input having pre-process code, inference model code, and post-process code. The pre-process code is converted to a pre-process graph. The inference model code and the post-process code are similarly converted to an inference graph and a post-process graph, respectively. A pipeline path is generated by connecting nodes in the pre-process graph, the inference graph, and the post-process graph. The pipeline path is deployed as a service for inference.
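The pipeline-path generation step can be sketched as chaining the three graphs end to end. The node names below are hypothetical, and each graph is simplified to an ordered list of nodes:

```python
def connect_graphs(*graphs: list[str]) -> list[str]:
    """Each graph is given as an ordered list of node names. Chaining
    the graphs links the last node of each to the first node of the
    next, yielding one end-to-end pipeline path."""
    pipeline = []
    for graph in graphs:
        pipeline.extend(graph)
    return pipeline

# Illustrative graphs derived from the three code sections:
pre = ["decode", "resize", "normalize"]      # pre-process graph
infer = ["model_forward"]                    # inference graph
post = ["argmax", "label_lookup"]            # post-process graph

pipeline_path = connect_graphs(pre, infer, post)
```

Deploying `pipeline_path` as one service means a request traverses pre-processing, inference, and post-processing as a single unit rather than three separately managed services.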
Abstract:
Method and apparatus are presented for receiving a medical or medical-condition-related input term or phrase in a source language, and translating the term or phrase from the source language into at least one target language to obtain a set of translated terms of the input term. The method and apparatus further translate each translated term in the set back into the source language to obtain an output list of standard versions of the input term, score each entry of the output list as to the probability of its being the most standard version of the input term, and provide the entry of the output list that has the highest score to a user.
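The scoring step can be sketched with a simple recurrence heuristic: back-translations of the input term are scored by how often the same source-language form recurs across target languages, and the highest-scoring entry is returned. The abstract does not fix a scoring formula, so the counting rule and the sample terms below are assumptions:

```python
from collections import Counter

def most_standard_version(back_translations: list[str]) -> str:
    """back_translations: source-language terms obtained by translating
    the input term into each target language and back. The form that
    recurs most often scores highest as the 'most standard' version."""
    scores = Counter(term.lower() for term in back_translations)
    return max(scores, key=scores.get)

# Hypothetical back-translations of an input phrase:
candidates = [
    "myocardial infarction",
    "heart attack",
    "myocardial infarction",
    "cardiac infarction",
]
best = most_standard_version(candidates)  # "myocardial infarction"
```

The intuition is that a standard medical term survives the round trip through many languages unchanged, while colloquial variants drift.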