Abstract:
At least one input/output (I/O) firmware partition is provided in a partitioned environment to facilitate access to I/O resources owned by the at least one I/O firmware partition. The I/O resources of an I/O firmware partition are shared by one or more other partitions of the environment, referred to as consumer partitions. The consumer partitions use the I/O firmware partition to access the I/O resources. Since the I/O firmware partitions are responsible for providing access to the I/O resources owned by those partitions, the consumer partitions are relieved of this task, reducing complexity and costs in the consumer partitions.
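The delegation described above can be illustrated with a minimal sketch. The class and method names (IOFirmwarePartition, ConsumerPartition, read_block) and the dictionary of device drivers are assumptions introduced for illustration, not terms from the disclosure.

```python
# Minimal sketch of the partitioning idea: one partition owns the I/O
# resources, the consumer partitions forward their I/O requests to it.
# All names here are illustrative assumptions.

class IOFirmwarePartition:
    """Owns a set of I/O resources and services requests on behalf of others."""

    def __init__(self, owned_devices):
        self.owned_devices = owned_devices  # e.g. {"disk0": <driver object>}

    def handle_request(self, device_name, operation, *args):
        driver = self.owned_devices[device_name]
        return getattr(driver, operation)(*args)


class ConsumerPartition:
    """Has no device drivers of its own; forwards I/O to the firmware partition."""

    def __init__(self, io_partition):
        self.io_partition = io_partition

    def read_block(self, device_name, block_no):
        # The consumer is relieved of owning and driving the device itself.
        return self.io_partition.handle_request(device_name, "read", block_no)


class FakeDiskDriver:
    def read(self, block_no):
        return f"data from block {block_no}"


io_part = IOFirmwarePartition({"disk0": FakeDiskDriver()})
consumer = ConsumerPartition(io_part)
print(consumer.read_block("disk0", 3))  # -> "data from block 3"
```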
Abstract:
A network processor (10) useful in network switch apparatus and methods of operating such a processor (10) in which data flow handling and flexibility is enhanced by the cooperation of a plurality of interface processors (16, 34) formed on a semiconductor substrate. The interface processors (16, 34) provide data paths for inbound and outbound data flow and operate under the control of instructions stored in an instruction store formed on the semiconductor substrate.
Abstract:
A parallel plate capacitor in copper technology is formed in an area that has no copper below it (within 0.3 µm), with a bottom etch stop layer (104), a composite bottom plate (110) having an aluminium layer below a TiN layer, an oxide capacitor dielectric (120), and a top plate (130) of TiN. The process involves etching the top plate to leave a capacitor area; etching the bottom plate to a larger bottom area having a margin on all sides; depositing an interlayer dielectric having a higher material quality below the top surface of the capacitor top plate; and opening contact apertures to the top and bottom plates and to the lower interconnect by a two-step process that partially opens a nitride cap layer on the lower interconnect and the top plate while penetrating the nitride cap layer above the bottom plate, then cutting through the capacitor dielectric and finishing the penetration of the nitride cap layer.
Abstract:
A method for automatic sorting includes receiving an item (22) in a sequence of items to be sorted, each such item marked with a respective machine-readable identifying code (42, 52, 54) and with respective characters (44, 56) in a location relative to the code that varies from one item to another in the sequence. A position of the code on the item is determined and, responsive to the position of the code, the location of the characters on the item is found. The characters are processed to determine a destination of the item.
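A toy sketch of this sorting flow follows, assuming items are represented as plain dictionaries rather than scanned images; the field names, routing table, and offset handling are illustrative only.

```python
# Toy sketch of the sorting flow described above, using dictionaries in place
# of real scanned items. Field names such as "code_position" and "char_offset"
# are illustrative assumptions, not terms from the abstract.

DESTINATIONS = {"1042": "bin A", "7731": "bin B"}  # hypothetical routing table

def sort_item(item):
    """Return the destination for one item in the sequence."""
    # 1. Determine the position of the machine-readable code on the item.
    code_x, code_y = item["code_position"]

    # 2. The characters sit at a location that varies from item to item but is
    #    known relative to the code, so derive it from the code position.
    char_x = code_x + item["char_offset"][0]
    char_y = code_y + item["char_offset"][1]

    # 3. Read the characters found at that location and map them to a destination.
    characters = item["characters_at"][(char_x, char_y)]
    return DESTINATIONS.get(characters, "reject")

# Example item: code at (10, 20), characters printed 5 units to the right of it.
item = {"code_position": (10, 20), "char_offset": (5, 0),
        "characters_at": {(15, 20): "1042"}}
print(sort_item(item))  # -> "bin A"
```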
Abstract:
A method, program and system for utilizing an algorithm to compare (32) and analyze (34) a first set of data with a second set of data received by a computer while maintaining a persistent key (36).
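The abstract gives little detail, so the following is only one plausible reading, sketched for illustration: records from the two data sets are matched on a key that persists across both sets and then compared field by field. All names and the record format are assumptions.

```python
# Rough sketch of comparing two data sets while maintaining a persistent key.
# The record layout and the field-by-field comparison are assumptions.

def compare_and_analyze(first, second, key="id"):
    """Yield (key value, differing fields) for records present in both sets."""
    by_key = {rec[key]: rec for rec in second}   # the persistent key links the sets
    for rec in first:
        other = by_key.get(rec[key])
        if other is None:
            continue
        diffs = {f: (rec[f], other.get(f))
                 for f in rec if f != key and rec[f] != other.get(f)}
        if diffs:
            yield rec[key], diffs

first = [{"id": 1, "qty": 5}, {"id": 2, "qty": 7}]
second = [{"id": 1, "qty": 5}, {"id": 2, "qty": 9}]
print(dict(compare_and_analyze(first, second)))  # -> {2: {'qty': (7, 9)}}
```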
Abstract:
A method for providing cache coherency in a RAID system (100) in which multiple RAID controllers (104) provide read/write access to shared storage devices (108) for multiple host computers (102). Each controller includes read (114), write (116) and write mirror (118) caches, and the controllers and the shared storage devices are coupled to one another via common backend buses (110). Whenever a controller receives a write command (302) from a host, the controller writes the data to the shared devices, its write cache and the write mirror caches of the other controllers. Whenever a controller receives a read command (320) from a host, the controller attempts to return the requested data from its write mirror cache, write cache, read cache and the storage devices, in that order.
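A minimal in-memory sketch of the write and read ordering described above; the class and attribute names, and the use of Python dictionaries in place of real caches and backend buses, are assumptions made for illustration.

```python
# Sketch of the controller's write path (shared devices + own write cache +
# peers' write-mirror caches) and read path (write mirror, write cache,
# read cache, then storage, in that order). Names are illustrative.

class Controller:
    def __init__(self, storage):
        self.storage = storage          # shared backend storage (block -> data)
        self.read_cache = {}
        self.write_cache = {}
        self.write_mirror = {}          # mirrors writes accepted by peer controllers
        self.peers = []

    def handle_write(self, block, data):
        # Write to the shared devices, this controller's write cache, and the
        # write-mirror caches of the other controllers.
        self.storage[block] = data
        self.write_cache[block] = data
        for peer in self.peers:
            peer.write_mirror[block] = data

    def handle_read(self, block):
        # Try write-mirror cache, write cache, read cache, then the storage
        # devices, in that order.
        for cache in (self.write_mirror, self.write_cache, self.read_cache):
            if block in cache:
                return cache[block]
        data = self.storage[block]
        self.read_cache[block] = data   # populate the read cache on a miss
        return data

# Two controllers sharing one backend store.
shared = {}
c1, c2 = Controller(shared), Controller(shared)
c1.peers, c2.peers = [c2], [c1]
c1.handle_write(7, b"payload")
print(c2.handle_read(7))  # served from c2's write-mirror cache
```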
Abstract:
A nestable reader-writer lock minimizes writer and reader overhead by employing lock structures that are shared among groups of processors (24) that have lower latencies. In the illustrated multiprocessor system having a non-uniform memory access (NUMA) architecture, in a first embodiment each processor node has a lock structure (83) comprised of a shared counter (84) and associated flag (85) for each CPU group. During a read, the counter can be changed only by processors within a CPU group performing a read. This reduces the reader overhead that otherwise would exist if all processors in the system shared a single counter. During a write, the shared flag can be changed by a process running on any processor in the system. The processors in a CPU group are notified of the write through the shared flag. This reduces the writer overhead that otherwise would exist if each processor in the system had a separate flag. The number of CPUs per group can be varied to optimize performance of the lock in different multiprocessor systems. In a second embodiment a global counter (91) indicates the number of active reader threads that are not accounted for in the per-CPU-group counters (94). This permits a reader thread to read-release a lock without determining which processor that thread was running on when it last read-acquired that lock.
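A conceptual sketch of the first embodiment's per-CPU-group structure follows; it is not a production lock. The group assignment, naming, and the single condition variable standing in for per-node memory and atomic operations are all simplifying assumptions.

```python
# Conceptual sketch: one shared reader counter and one writer flag per CPU
# group. Readers touch only their group's counter; a writer raises every
# group's flag and waits for all counters to drain.

import threading

class GroupRWLock:
    def __init__(self, n_groups):
        self._cv = threading.Condition()
        self._readers = [0] * n_groups           # shared counter per CPU group
        self._writer_flag = [False] * n_groups   # per-group flag set during a write

    def read_acquire(self, group):
        with self._cv:
            # A reader changes only its own group's counter, keeping reader
            # traffic local to the group.
            while self._writer_flag[group]:
                self._cv.wait()
            self._readers[group] += 1

    def read_release(self, group):
        with self._cv:
            self._readers[group] -= 1
            self._cv.notify_all()

    def write_acquire(self):
        with self._cv:
            # A writer notifies every group through its shared flag...
            while any(self._writer_flag):
                self._cv.wait()
            for g in range(len(self._writer_flag)):
                self._writer_flag[g] = True
            # ...then waits until all per-group reader counts drain to zero.
            while any(self._readers):
                self._cv.wait()

    def write_release(self):
        with self._cv:
            for g in range(len(self._writer_flag)):
                self._writer_flag[g] = False
            self._cv.notify_all()
```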
Abstract:
Disclosed is a technique for data synchronization. A first identifier is determined for a portion of data at a first source. A second identifier is determined for a portion of corresponding data at a second source. The first and second identifiers are compared. When the first and second identifiers do not match, the portion of corresponding data at the second source is replaced with the portion of data at the first source.
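A minimal sketch of the comparison-and-replace step, assuming an MD5 digest as the per-portion identifier and a chunked list as the data representation; both choices are illustrative, not part of the disclosure.

```python
# Sketch: compute an identifier per portion at each source, compare them, and
# replace mismatched portions at the second source with the first source's data.
# The hash choice and list-of-chunks representation are assumptions.

import hashlib

def identifier(portion: bytes) -> str:
    return hashlib.md5(portion).hexdigest()

def synchronize(first_source, second_source):
    """Replace any portion at the second source whose identifier differs."""
    for i, portion in enumerate(first_source):
        if identifier(portion) != identifier(second_source[i]):
            second_source[i] = portion  # copy the portion from the first source

first = [b"alpha", b"bravo", b"charlie"]
second = [b"alpha", b"BRAVO", b"charlie"]
synchronize(first, second)
print(second)  # -> [b'alpha', b'bravo', b'charlie']
```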
Abstract:
Method of invisibly embedding and hiding data into a text document by modifying selected invisible attributes of invisible characters on a plurality of inter-word intervals, comprising the steps of selecting (10) at least one attribute that is invisible on the space characters used as inter-word intervals, transforming (14) the document into a canonical form by setting on all inter-word intervals the values of the selected attribute to the same default value, encoding (18) the data to be embedded and hidden into the document as an ordered set of values corresponding to the different values of the selected attribute, selecting (20) a set of inter-word intervals among all inter-word intervals corresponding to a set of space characters, and replacing (22) on each space character of this set of space characters, default attribute values by the corresponding encoded data.
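A toy sketch of the embedding steps follows, assuming the document is modeled as a list of (character, attribute) pairs and that "color" stands in for whichever invisible attribute is selected; both are illustrative assumptions.

```python
# Sketch of the embedding steps: canonicalize the selected attribute on all
# inter-word spaces, then replace the default value on selected spaces with
# values encoding the hidden data. Representation and attribute are assumed.

DEFAULT = "black"                 # canonical (default) attribute value
VALUES = ["black", "white"]       # one data bit encoded per selected space

def canonicalize(doc):
    # Set the selected attribute to the default value on every inter-word space.
    return [(ch, DEFAULT if ch == " " else attr) for ch, attr in doc]

def embed(doc, bits):
    doc = canonicalize(doc)
    spaces = [i for i, (ch, _) in enumerate(doc) if ch == " "]
    # Select a set of inter-word intervals and replace the default attribute
    # value with the value encoding each data bit.
    for idx, bit in zip(spaces, bits):
        ch, _ = doc[idx]
        doc[idx] = (ch, VALUES[bit])
    return doc

text = [(c, DEFAULT) for c in "hidden data in plain sight"]
marked = embed(text, [1, 0, 1, 1])
print([attr for ch, attr in marked if ch == " "])  # ['white', 'black', 'white', 'white']
```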
Abstract:
A method, an apparatus, a computer program product, and a data processing system provide for operation of a virtual machine with embedded functionality for interoperating with other virtual machines in a computational grid. A plurality of virtual machines are run on one or more devices within a data processing system; each virtual machine in the plurality of virtual machines incorporates functionality for interoperating and associating with other virtual machines in a virtual machine cluster in a grid-like manner. Each virtual machine in the virtual machine cluster acts as a node within the virtual machine cluster. A virtual machine manages its objects in association with an object group, and each virtual machine may manage multiple object groups. The virtual machines share information such that the object groups can be moved between virtual machines in the virtual machine cluster, thereby allowing the virtual machine cluster to act as one logical virtual machine.
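A simplified sketch of the object-group hand-off described above; the class and method names are illustrative assumptions, not APIs from the disclosure.

```python
# Sketch: each virtual machine is a node in the cluster and manages object
# groups; sharing cluster membership lets a group move between nodes so the
# cluster behaves like one logical virtual machine. Names are illustrative.

class VirtualMachine:
    """One node in the virtual machine cluster."""

    def __init__(self, name, cluster):
        self.name = name
        self.object_groups = {}      # group name -> list of managed objects
        cluster.append(self)         # join the cluster (shared membership list)
        self.cluster = cluster

    def manage(self, group, objects):
        self.object_groups[group] = objects

    def move_group(self, group, target):
        # Hand an object group from this node to another node in the cluster.
        target.object_groups[group] = self.object_groups.pop(group)

cluster = []
vm1, vm2 = VirtualMachine("vm1", cluster), VirtualMachine("vm2", cluster)
vm1.manage("sessions", ["obj-a", "obj-b"])
vm1.move_group("sessions", vm2)
print(vm2.object_groups)  # -> {'sessions': ['obj-a', 'obj-b']}
```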