Abstract:
A dynamic self-adjusting memory storage device and a method of operating the same. The device includes a plurality of adjustable-size phase change memory (PCM) storage sub-modules connected to and communicating over a bus with a control device. One of the plurality of adjustable-size memory storage sub-modules is in a stand-by mode of operation. The control device implements steps to: determine, based on a switching criterion, when the memory storage device needs to be switched to a different operation mode; select one or more adjustable-size memory storage sub-modules for switching to said different operation mode; copy stored data from a selected actively operating adjustable-size memory storage sub-module to said adjustable-size memory storage sub-module in said stand-by mode; and change the capacity of the selected actively operating adjustable-size memory storage sub-module after the copying. The dynamic self-adjusting memory capacity method is performed without powering down the memory storage device or incurring any timing penalty.
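A minimal sketch of the controller steps described above, assuming hypothetical interfaces (SubModule, pick_active, needs_switch, and new_capacity are illustrative names, not drawn from the abstract):

```python
from dataclasses import dataclass, field

@dataclass
class SubModule:
    """One adjustable-size PCM storage sub-module (illustrative only)."""
    capacity: int
    standby: bool = False
    data: list = field(default_factory=list)

def switch_operation_mode(sub_modules, needs_switch, pick_active, new_capacity):
    """Sketch of the control flow: check the switching criterion, select an
    active sub-module, copy its data to the stand-by sub-module, then resize it."""
    if not needs_switch():                     # step 1: evaluate switching criterion
        return
    standby = next(m for m in sub_modules if m.standby)
    active = pick_active(sub_modules)          # step 2: select an active sub-module
    standby.data = list(active.data)           # step 3: copy data to the stand-by module
    standby.standby = False
    active.capacity = new_capacity             # step 4: change capacity after copying
    active.standby = True                      # former active module becomes stand-by
```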
Abstract:
An RC-based sensing method and computer program product to effectively sense the cell resistance of a programmed Phase Change Material (PCM) memory cell. The sensing method ensures the same physical configuration of each cell after programming: the same amorphous volume, the same trap density/distribution, etc. The sensing method is based on a metric: the RC-based sense amplifier implements two trigger points, and the measured time interval between these two points is used as the metric to determine whether the programmed cell state, e.g., resistance, has been programmed to the desired value. The RC-based sensing method is embedded into an iterative PCM cell programming technique to ensure a tight distribution of resistance at each level after programming and to ensure that the probability of level aliasing is very small, leading to less problematic drift.
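A hedged sketch of the two-trigger-point metric and the surrounding program-and-verify loop, assuming an ideal exponential RC discharge and a hypothetical cell object with read_resistance and apply_program_pulse methods (none of these names come from the abstract):

```python
import math

def rc_time_metric(resistance, cap, v0, v_trig1, v_trig2):
    """Time between the two trigger points of an RC discharge through the cell;
    the interval scales with R*C and thus encodes the cell resistance."""
    t1 = -resistance * cap * math.log(v_trig1 / v0)
    t2 = -resistance * cap * math.log(v_trig2 / v0)
    return t2 - t1

def iterative_program(cell, target_interval, tolerance, max_pulses=20):
    """Program-and-verify loop: apply a pulse, sense with the RC metric,
    stop once the measured interval lands inside the target window."""
    for _ in range(max_pulses):
        interval = rc_time_metric(cell.read_resistance(), cell.sense_cap,
                                  cell.v0, cell.v_trig1, cell.v_trig2)
        if abs(interval - target_interval) <= tolerance:
            return True                      # resistance within the tight target band
        cell.apply_program_pulse(increase=interval < target_interval)
    return False                             # did not converge within the pulse budget
```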
Abstract:
A method for fabricating a semiconductor device utilizing a plurality of masks and spacers. The method includes forming parallel first trenches in a substrate using a first lithographic process, the substrate including sidewalls adjacent to the parallel first trenches. The method further includes forming first spacers adjacent to the sidewalls; removing the sidewalls, in part using a second lithographic process; forming second spacers adjacent to the first spacers, resulting in spacer ridges; and etching portions of the substrate between the spacer ridges, resulting in second trenches.
Abstract:
A method for fabricating vertical surround gates in a semiconductor device array structure such that the processes are compatible with CMOS fabrication. The array structure includes a CMOS region and an array region. The method includes forming a polish stop layer, a plurality of patterning layers, and a CMOS cover layer over a substrate, as well as array pillars and array trenches. Forming the array pillars and trenches includes removing the CMOS cover layer and the patterning layers. The method further includes doping portions of the substrate within the array trenches. The method includes forming vertical surround gates in the array trenches, an array filler layer to fill in the array trenches, and a CMOS photoresist pattern over the array filler layer. The method includes etching CMOS trenches down through a portion of the substrate, such that the array pillars under the shared trench are etched to form contact holes.
Abstract:
A method of mapping a care plan template to a case model includes: receiving a care plan template; extracting elements from the care plan template, wherein the elements correspond to a phase comprising at least one task and data attributes corresponding to the task; mapping the task of the care plan template to a task of the case model; mapping a precedence relationship of the task of the care plan template to preconditions of the task of the case model; mapping the data attributes of the care plan template to properties of the case model, wherein the properties are associated with the task of the case model; mapping the task of the care plan template to a role of the case model; and generating the case model including the mapped task, the mapped precedence relationship, the mapped data attributes, and the mapped role.
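A minimal sketch of the mapping step, assuming hypothetical data shapes (TemplateTask, CaseTask, and their fields are illustrative and not defined by the abstract):

```python
from dataclasses import dataclass

# Illustrative data shapes only; the abstract does not define these classes.
@dataclass
class TemplateTask:
    name: str
    predecessors: list          # precedence relationships in the template
    data_attributes: dict       # attributes attached to the task
    role: str

@dataclass
class CaseTask:
    name: str
    preconditions: list
    properties: dict
    role: str

def map_template_to_case_model(template_tasks):
    """Map each care plan template task to a case model task: precedence
    relationships become preconditions, data attributes become properties,
    and the assigned role is carried over."""
    case_tasks = []
    for t in template_tasks:
        case_tasks.append(CaseTask(
            name=t.name,
            preconditions=list(t.predecessors),   # precedence -> preconditions
            properties=dict(t.data_attributes),   # data attributes -> properties
            role=t.role,                          # template task -> case model role
        ))
    return case_tasks                             # tasks of the generated case model
```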
Abstract:
Techniques are described with regard to container image configuration in a computing environment. An associated computer-implemented method includes initializing a container image storage engine associated with a logical tree structure having a plurality of container image nodes, where each of the plurality of container image nodes includes a hash layer array and a hash data array. The method includes building at least one new container image node to incorporate into the plurality of container image nodes. The method further includes applying a software patch to a target container image node among the plurality of container image nodes. In an embodiment, the method further includes starting at least one container based upon a respective container image node among the plurality of container image nodes.
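A sketch of the logical tree structure described above, where each container image node holds a hash layer array and a hash data array; the class and function names, and the use of SHA-256, are assumptions for illustration only:

```python
from dataclasses import dataclass, field
from hashlib import sha256

@dataclass
class ContainerImageNode:
    """One node in the logical tree; holds per-layer hashes and per-layer data hashes."""
    name: str
    hash_layer_array: list = field(default_factory=list)  # hash of each image layer
    hash_data_array: list = field(default_factory=list)   # hash of each layer's data
    children: list = field(default_factory=list)

def build_child_node(parent, name, new_layers):
    """Build a new node from a parent by appending layers (e.g., a software patch),
    reusing the parent's hash arrays and extending them with the new layers."""
    node = ContainerImageNode(
        name=name,
        hash_layer_array=list(parent.hash_layer_array),
        hash_data_array=list(parent.hash_data_array),
    )
    for layer in new_layers:                       # new_layers: byte strings of layer content
        node.hash_layer_array.append(sha256(layer).hexdigest())
        node.hash_data_array.append(sha256(b"data:" + layer).hexdigest())
    parent.children.append(node)
    return node
```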
Abstract:
A method, system, and computer program product for circuit design automation. The method identifies a set of circuit components for a proposed circuit design. A subset of circuit components is selected to generate an initial topology for the proposed circuit design. A set of subsequent topologies is iteratively generated by a heuristic search algorithm based on the subset of circuit components and the initial topology. A set of valid topologies within the set of subsequent topologies is determined by a circuit simulator based on the subset of circuit components and a set of connections within the set of subsequent topologies. The method generates the proposed circuit design from the set of valid topologies.
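A sketch of the iterative flow under stated assumptions: the heuristic step (mutate), validity check (simulate_valid), and ranking (score) are hypothetical callables standing in for the heuristic search algorithm and circuit simulator:

```python
def search_topologies(components, initial_topology, mutate, simulate_valid,
                      score, iterations=100):
    """Iteratively propose new topologies from the current one, keep those the
    circuit simulator accepts as valid, and return the best-scoring valid topology."""
    valid = [initial_topology] if simulate_valid(initial_topology) else []
    current = initial_topology
    for _ in range(iterations):
        candidate = mutate(current, components)   # heuristic step: perturb connections
        if simulate_valid(candidate):             # circuit simulator checks validity
            valid.append(candidate)
            current = candidate                   # continue search from a valid point
    return max(valid, key=score) if valid else None
```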
Abstract:
Embodiments of the invention include a computer-implemented method that uses a processor to access cryptographic-function constraints associated with an encrypted message. Based on a determination that the cryptographic-function constraints do not include mandatory cryptographic computing resource requirements, first resource-scaling operations are performed that include an analysis of cryptographic metrics associated with the processor. The cryptographic metrics include information associated with the encrypted message, along with performance measurements of cryptographic functions performed by the processor. The cryptographic-function constraints and the results of the analysis of the cryptographic metrics are used to determine the cryptographic processing requirements of the encrypted message and to match those requirements to selected ones of a set of cryptographic computing resources, thereby identifying a customized set of cryptographic computing resources matched to the cryptographic processing requirements of the encrypted message. The customized set of cryptographic computing resources is used to perform customized cryptographic functions on the encrypted message.
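A minimal sketch of the matching decision, assuming hypothetical dictionary keys for constraints, metrics, and resource descriptions (none of these keys are specified by the abstract):

```python
def select_crypto_resources(constraints, metrics, available_resources):
    """If the constraints mandate specific resources, use them; otherwise derive
    processing requirements from the cryptographic metrics and pick the matching
    subset of available cryptographic computing resources."""
    if constraints.get("mandatory_resources"):
        return constraints["mandatory_resources"]
    requirements = {
        "throughput": metrics["bytes_per_second"],     # from performance measurements
        "algorithm": metrics["message_algorithm"],     # from the encrypted message
    }
    return [r for r in available_resources
            if r["algorithm"] == requirements["algorithm"]
            and r["max_throughput"] >= requirements["throughput"]]
```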
Abstract:
Embodiments of the present disclosure relate to methods, systems, and computer program products for observation data evaluation. In a method, a hierarchical relationship between a plurality of observation items is obtained based on a dataset including a plurality of observation samples. Here, an observation sample in the plurality of observation samples includes a group of measurements for the plurality of observation items, respectively. A plurality of evaluation models for evaluating an observation sample is generated based on the hierarchical relationship according to a predefined group of membership functions and a predefined group of fuzzy operators. An evaluation model is selected for further evaluation from the plurality of evaluation models based on a plurality of confidence intervals for the plurality of evaluation models. With these embodiments, the evaluation model may be obtained in an easier and more effective way.
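A hedged sketch of how a membership function and fuzzy operators might be combined over a hierarchy of observation items; the triangular membership, the node format, and the min/max operators are assumptions, not the abstract's definitions:

```python
def triangular_membership(x, low, peak, high):
    """A simple predefined membership function (triangular)."""
    if x <= low or x >= high:
        return 0.0
    return (x - low) / (peak - low) if x <= peak else (high - x) / (high - peak)

def build_evaluation_model(membership, fuzzy_and=min, fuzzy_or=max):
    """Aggregate per-item memberships up the hierarchy: children of an 'and'
    node combine with fuzzy_and, children of an 'or' node with fuzzy_or."""
    def evaluate(sample, node):
        if "item" in node:                           # leaf: one observation item
            return membership(sample[node["item"]], *node["params"])
        op = fuzzy_and if node["op"] == "and" else fuzzy_or
        scores = [evaluate(sample, child) for child in node["children"]]
        result = scores[0]
        for s in scores[1:]:
            result = op(result, s)
        return result
    return evaluate

# Candidate models built this way would then be compared by their confidence
# intervals, and the model with the most favorable interval selected.
```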
Abstract:
A model parallel training technique for neural architecture search including the following operations: (i) receiving a plurality of ML (machine learning) models that can be substantially interchangeably applied to a computing task; (ii) for each given ML model of the plurality of ML models: (a) determining how the given ML model should be split for model parallel processing operations, and (b) computing a model parallelism score (MPS) for the given ML model, with the MPS being based on an assumption that the split for the given ML model will be used at runtime; and (iii) selecting an ML model based, at least in part, on the MPS values of the ML models of the plurality of ML models.
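The abstract does not define how the MPS is computed; the sketch below assumes a simple layer-wise split and a hypothetical load-balance score, purely to illustrate the split/score/select flow:

```python
def plan_split(model, num_devices):
    """Hypothetical splitter: assign consecutive layers to devices as evenly as possible."""
    per_device = max(1, len(model["layers"]) // num_devices)
    return [model["layers"][i:i + per_device]
            for i in range(0, len(model["layers"]), per_device)]

def model_parallelism_score(split):
    """Illustrative MPS: favor balanced splits (score assumes this split is used at runtime)."""
    loads = [sum(layer["flops"] for layer in shard) for shard in split]
    return min(loads) / max(loads) if max(loads) else 0.0

def select_model(models, num_devices):
    """Score every candidate's planned split and pick the highest-scoring model."""
    scored = [(model_parallelism_score(plan_split(m, num_devices)), m) for m in models]
    return max(scored, key=lambda pair: pair[0])[1]
```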