Abstract:
The described embodiments comprise a PredCount instruction and a SegCount instruction. When executed by a processor, the PredCount instruction causes the processor to analyze a predicate vector to determine a number of active elements in the predicate vector that exhibit a predetermined condition (e.g., that are set to a predetermined value) and to return a result indicating that number. When executed by a processor, the SegCount instruction causes the processor to determine a number of times that a GeneratePredicates instruction would be executed to generate a full set of predicates using active elements of an input vector.
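As a rough illustration of the two counting behaviors, the following Python sketch models a predicate vector as a list of 0/1 flags. The function names, the dependency-vector representation, and the segmenting rule are assumptions made for illustration; the abstract does not specify how GeneratePredicates partitions the active elements.

```python
# Illustrative model only, not the patented instruction semantics.

def pred_count(predicate_vector, predetermined_value=1):
    """Count the elements of the predicate vector set to the predetermined value."""
    return sum(1 for element in predicate_vector if element == predetermined_value)


def seg_count(dependency_vector, predicate_vector):
    """Count how many passes a GeneratePredicates-style loop would need.

    Hypothetical rule: each pass covers a run of active elements and stops at
    the next active element that carries a dependency flag.
    """
    passes, i, n = 0, 0, len(predicate_vector)
    while i < n:
        if not predicate_vector[i]:
            i += 1
            continue
        passes += 1
        i += 1
        while i < n and predicate_vector[i] and not dependency_vector[i]:
            i += 1
    return passes


if __name__ == "__main__":
    pred = [1, 0, 1, 1, 0, 1, 1, 1]
    deps = [0, 0, 0, 1, 0, 0, 1, 0]
    print(pred_count(pred))       # 6 active elements
    print(seg_count(deps, pred))  # number of passes needed to cover the active elements
```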
Abstract:
Various embodiments of a system and method for handling network partitions in a cluster of nodes are disclosed. The system and method may use a set of arbitration servers that are ordered in a particular order. Client nodes in different partitions may send requests to the arbitration servers to attempt to win control of them. The client node that wins a majority of the arbitration servers may remain in the cluster, and the client nodes in the other partitions may exit the cluster. The first arbitration server may award control to the client node whose request for control is received first. The remaining arbitration servers may be configured to give preference to the winner of one or more of the previous arbitration servers to attempt to ensure that one of the client nodes wins a majority.
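A hypothetical simulation of the ordered-arbitration idea is sketched below: the first server awards control on a first-come basis, and each later server defers to whichever client currently leads, which tends to concentrate wins on a single client. The client names, tie-breaking, and majority rule shown here are illustrative assumptions.

```python
from collections import Counter

def run_arbitration(arrival_order_per_server):
    """arrival_order_per_server: one list per arbitration server (in order),
    giving the order in which the client requests arrive at that server."""
    wins = Counter()
    winners = []
    for index, arrivals in enumerate(arrival_order_per_server):
        if index == 0 or not wins:
            winner = arrivals[0]                 # first request wins the first server
        else:
            leader, _ = wins.most_common(1)[0]   # later servers prefer the current leader
            winner = leader if leader in arrivals else arrivals[0]
        wins[winner] += 1
        winners.append(winner)
    majority = len(arrival_order_per_server) // 2 + 1
    surviving = [client for client, count in wins.items() if count >= majority]
    return winners, surviving


if __name__ == "__main__":
    # Two partitions, clients "A" and "B", racing for three arbitration servers.
    winners, surviving = run_arbitration([["B", "A"], ["A", "B"], ["A", "B"]])
    print(winners)    # server-by-server winners
    print(surviving)  # the client (if any) holding a majority stays in the cluster
```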
Abstract:
A method includes accepting an input code word, which was produced by encoding data with an Error Correction Code (ECC), for decoding by a hardware-implemented ECC decoder. The input code word is pre-processed to produce a pre-processed code word, such that a first number of bit transitions that occur in the hardware-implemented ECC decoder while decoding the pre-processed code word is smaller than a second number of the bit transitions that would occur in the ECC decoder in decoding the input code word. The pre-processed code word is decoded using the ECC decoder, and the data is recovered from the decoded pre-processed code word.
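The sketch below illustrates the general shape of such pre-processing under two explicit assumptions that are not taken from the abstract: switching activity is crudely estimated as the Hamming distance from the word currently latched in the decoder, and the transform is an XOR with a fixed mask that is removed again after decoding (for a linear code the mask would itself need to be a valid code word for this to commute with decoding).

```python
# Purely illustrative transition-reduction sketch, not the claimed method.

def hamming_distance(a, b):
    return bin(a ^ b).count("1")

def preprocess(input_word, decoder_state, mask):
    """Return (word_to_decode, mask_applied), choosing the lower-activity form."""
    masked = input_word ^ mask
    if hamming_distance(masked, decoder_state) < hamming_distance(input_word, decoder_state):
        return masked, True
    return input_word, False

def postprocess(decoded_word, mask_applied, mask):
    """Undo the pre-processing transform after decoding."""
    return decoded_word ^ mask if mask_applied else decoded_word

if __name__ == "__main__":
    decoder_state = 0x0000_0000
    mask = 0xFFFF_FFFF
    word = 0xFFFF_FF0F          # mostly ones: decoding it directly would toggle many bits
    to_decode, applied = preprocess(word, decoder_state, mask)
    print(hex(to_decode), applied)
    # ... the hardware decoder would run on to_decode here, then:
    print(hex(postprocess(to_decode, applied, mask)))
```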
Abstract:
A method and apparatus for optimizing transmission of data to a plurality of second endpoints in a system wherein a first endpoint is providing data to the plurality of second endpoints, each connected by a point-to-point communication channel. This may be useful in teleconferencing applications with a plurality of participants (endpoints) or broadcast server applications. The first endpoint activates a multicast communication channel having a first multicast address and commences broadcast of the data over the multicast communication channel. The first endpoint transmits a request message to each of the plurality of second endpoints in order to query each of the second endpoints whether they can receive transmissions broadcast to the first multicast address. Certain of the plurality of second endpoints transmit an acknowledgment message if they can receive transmissions broadcast to the first multicast address, and the first endpoint receives the acknowledgment message. Then, for each acknowledgment message received from certain of the plurality of second endpoints, the first endpoint deactivates the point-to-point communication channel between the first endpoint and the certain of the plurality of second endpoints.
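The control flow described above can be sketched as follows; the Endpoint class and its method names are invented for illustration, and the acknowledgment exchange is modeled as a simple boolean query.

```python
# Sketch of the channel-management logic: broadcast on the multicast address,
# query each peer, and tear down the point-to-point channel only for peers
# that acknowledge multicast reception.

class Endpoint:
    def __init__(self, name, can_receive_multicast):
        self.name = name
        self.can_receive_multicast = can_receive_multicast
        self.point_to_point_active = True

    def query_multicast_capability(self, multicast_address):
        # Models the request/acknowledgment exchange: True means "ACK received".
        return self.can_receive_multicast


def migrate_to_multicast(peers, multicast_address):
    still_point_to_point = []
    for peer in peers:
        if peer.query_multicast_capability(multicast_address):
            peer.point_to_point_active = False   # ACK received: drop the unicast channel
        else:
            still_point_to_point.append(peer)    # keep serving this peer point-to-point
    return still_point_to_point


if __name__ == "__main__":
    peers = [Endpoint("a", True), Endpoint("b", False), Endpoint("c", True)]
    remaining = migrate_to_multicast(peers, multicast_address="239.1.2.3")
    print([p.name for p in remaining])  # only peers that could not join the multicast group
```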
Abstract:
A user equipment communications device is configured to provide personal content to a party to a call with a user of the device, when the user places the party on hold. The device determines that the party has been placed on hold and that the user has enabled sharing of personal content with the party. The device then transmits a visual menu to a communications device of the party via a data network, to allow the party to select a type of personal content to receive from the device while the party is on hold. When the device receives a selection from the party's device indicating the type of personal content, it transmits a personal information asset to the party's device according to the type of personal content indicated by the selection. Other embodiments are also described and claimed.
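A rough sketch of that flow is shown below; the menu entries, asset names, and the way the party's selection is modeled are all illustrative assumptions.

```python
# Illustrative on-hold flow: detect that the remote party is on hold, check
# that sharing is enabled, offer a menu, and return an asset matching the
# party's selection.

PERSONAL_CONTENT = {
    "photos": "vacation_album.jpg",
    "music": "favorite_track.mp3",
    "status": "Back in five minutes",
}

def handle_hold(party_on_hold, sharing_enabled, select_from_menu):
    """select_from_menu models the held party choosing from the visual menu."""
    if not (party_on_hold and sharing_enabled):
        return None
    menu = list(PERSONAL_CONTENT.keys())      # visual menu transmitted to the held party
    selection = select_from_menu(menu)
    return PERSONAL_CONTENT.get(selection)    # asset sent while the party waits


if __name__ == "__main__":
    asset = handle_hold(True, True, lambda menu: menu[0])
    print(asset)
```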
Abstract:
In one embodiment, an integrated circuit includes a self calibration unit configured to iterate a test on a logic circuit in the integrated circuit at respectively lower supply voltage magnitudes until the test fails. A lowest supply voltage magnitude at which the test passes is used to generate a requested supply voltage magnitude for the integrated circuit. In an embodiment, an integrated circuit includes a series connection of logic gates physically distributed over an area of the integrated circuit, and a measurement unit configured to launch a logical transition into the series and detect a corresponding transition at the output of the series. The amount of time between the launch and the detection is used to request a supply voltage magnitude for the integrated circuit.
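For the first mechanism, the calibration sweep might look like the sketch below; the starting voltage, step size, and the test itself are stand-ins, and a real implementation would typically add a guard band to the lowest passing voltage.

```python
# Illustrative sweep: step the supply voltage down, run the self test at each
# step, and report the lowest voltage at which the test still passed.

def calibrate_supply(run_test, start_mv=1100, step_mv=25, floor_mv=600):
    """run_test(voltage_mv) -> True if the logic circuit passes at that voltage."""
    lowest_passing = None
    voltage = start_mv
    while voltage >= floor_mv:
        if run_test(voltage):
            lowest_passing = voltage
            voltage -= step_mv
        else:
            break                       # first failure ends the sweep
    return lowest_passing


if __name__ == "__main__":
    # Pretend the logic circuit on this particular die works down to 850 mV.
    requested = calibrate_supply(lambda mv: mv >= 850)
    print(requested, "mV")              # lowest tested voltage at which the test passed
```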
Abstract:
A method for data storage includes storing data in a group of analog memory cells by writing into the memory cells in the group respective storage values, which program each of the analog memory cells to a respective programming state selected from a predefined set of programming states, including at least first and second programming states, which are applied respectively to first and second subsets of the memory cells, whereby the storage values held in the memory cells in the first and second subsets are distributed in accordance with respective first and second distributions. A first median of the first distribution is estimated, and a read threshold, which differentiates between the first and second programming states, is calculated based on the estimated first median. The data is retrieved from the analog memory cells in the group by reading the storage values using the calculated read threshold.
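As a simplified sketch, the read threshold might be derived from the estimated median as shown below; the fixed offset and the median estimator are illustrative assumptions rather than the claimed calculation.

```python
# Illustrative threshold derivation from the median of the lower distribution.

def estimate_median(storage_values):
    ordered = sorted(storage_values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

def read_threshold_from_median(first_state_values, offset=0.6):
    # Hypothetical rule: place the threshold a fixed offset above the median.
    return estimate_median(first_state_values) + offset

def read_cells(storage_values, threshold):
    # Cells below the threshold read as the first state, others as the second.
    return [0 if value < threshold else 1 for value in storage_values]

if __name__ == "__main__":
    first_state = [0.9, 1.0, 1.1, 1.05, 0.95]   # storage values of the lower distribution
    group = [1.0, 2.3, 0.95, 2.5, 1.1, 2.4]
    threshold = read_threshold_from_median(first_state)
    print(threshold)                            # 1.0 + 0.6 = 1.6
    print(read_cells(group, threshold))
```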
Abstract:
In an embodiment, a methodology for automating the generation of a programmable logic device implementation of at least a portion of an integrated circuit is contemplated. The methodology may take as input one or more hardware description language (HDL) files which describe the integrated circuit. Additionally, one or more user-generated control files may be input to the methodology. The methodology may process the one or more HDL files, generating a bitstream to program one or more programmable logic devices to implement the described design. The methodology may include automated modification of the HDL files to prepare them for programmable logic device implementation, automated pad ring generation, automated pin multiplexing, daughter card definition, partitioning, etc.
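The overall shape of the flow, input HDL plus user control files in and a programming bitstream out, is sketched below; every step is a stub and every step name is invented for illustration.

```python
# Deliberately schematic: each step is a stub standing in for a real tool pass.

def run_fpga_flow(hdl_files, control_files, steps):
    design = {"hdl": list(hdl_files), "controls": list(control_files)}
    for step in steps:          # HDL modification, pad ring, pin muxing, partitioning, ...
        design = step(design)
    return design.get("bitstreams", [])


if __name__ == "__main__":
    steps = [
        lambda d: {**d, "modified_hdl": True},           # automated HDL modification
        lambda d: {**d, "pad_ring": "auto"},             # automated pad ring generation
        lambda d: {**d, "pin_mux": "auto"},              # automated pin multiplexing
        lambda d: {**d, "bitstreams": ["device0.bit"]},  # per-device bitstream generation
    ]
    print(run_fpga_flow(["soc.v"], ["pins.ctl"], steps))
```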
Abstract:
In one embodiment, a level shifter circuit may include a shift stage that also embeds transistors that implement a logic operation on two or more inputs to the level shifter. At least one of the inputs may be sourced from circuitry that is powered by a different power supply than the level shifter and circuitry that receives the level shifter output. Additionally, the level shifter includes one or more dummy transistors that match the transistors that perform the logic operation, to improve symmetry of the level shifter circuit. In some embodiments, certain design and layout rules may be applied to the level shifter circuit to limit variation in the symmetry over manufacturing variations.
Abstract:
A dirty memory is operable to store dirty indicators, each dirty indicator being settable to a given value indicative that a page of memory associated therewith has been dirtied. The dirty indicators are stored in groups, with each group having associated therewith a validity indicator computed from the dirty indicator values of the group. Control logic is operable, on reading a group, to compute a validity indicator value based on the dirty indicator values for the group to determine the integrity of the group. The integrity can be confirmed by comparing the computed validity indicator value to a validity indicator value read for the group. Where the value read and the value computed compare equal, it can be assumed that the dirty indicator values of the group are correct. Preferably the validity indicator is a parity indicator. Although parity does not provide for error correction, parity has the advantage that minimal overhead is needed for computation and storage. When a parity error is detected, all of the dirty indicators associated with the parity indicator that has flagged a potential error are treated as suspect. As a consequence, when a parity error is detected for a group of dirty indicators, all of the pages of memory associated with those dirty indicators are treated as being dirtied and they are therefore copied between memories. The dirty indicators and the parity indicator are then reset.
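The group-parity check can be sketched as below; the group size and data layout are illustrative, while the all-dirty fallback on a parity mismatch follows the description above.

```python
# Even parity over a small group of dirty bits, with the conservative recovery
# of treating every page in the group as dirtied when the parity mismatches.

def parity(bits):
    return sum(bits) % 2

def read_group(dirty_bits, stored_parity):
    """Return the dirty bits to act on, falling back to all-dirty on a mismatch."""
    if parity(dirty_bits) == stored_parity:
        return list(dirty_bits)              # integrity confirmed: trust the bits
    # Parity error: every page covered by this group is treated as dirtied, so
    # all of them are copied and the indicators (and parity) are then reset.
    return [1] * len(dirty_bits)

if __name__ == "__main__":
    group = [1, 0, 0, 1]
    print(read_group(group, stored_parity=parity(group)))      # [1, 0, 0, 1]
    print(read_group(group, stored_parity=1 - parity(group)))  # [1, 1, 1, 1]
```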