Abstract:
A clock-less asynchronous processing circuit or system having a plurality of pipelined processing stages utilizes self-clocked generators to tune the delay needed in each of the processing stages to complete the processing cycle. Because different processing stages may require different amounts of time to complete processing or may require different delays depending on the processing required in a particular stage, the self-clocked generators may be tuned to each stage's necessary delay(s) or may be programmably configured.
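As a rough illustration of the tuning idea, the following is a minimal behavioral sketch in Python, not the patented circuit; the Stage class, its delay values, and the placeholder computation are hypothetical. Each pipelined stage carries its own tuned completion delay, so a stage that needs more processing time simply signals completion later:

```python
# Minimal behavioral sketch (not the patented circuit): each pipeline stage
# models a self-clocked generator whose delay is tuned per stage, so a stage
# signals "done" only after its configured processing time has elapsed.

class Stage:
    def __init__(self, name, delay_ns):
        self.name = name
        self.delay_ns = delay_ns        # tuned/programmed completion delay

    def process(self, data, now_ns):
        done_at = now_ns + self.delay_ns   # self-clocked "done" event
        return data + 1, done_at           # placeholder computation

def run_pipeline(stages, data):
    t = 0
    for stage in stages:
        data, t = stage.process(data, t)
        print(f"{stage.name}: done at {t} ns")
    return data

if __name__ == "__main__":
    # Different stages are tuned to different delays, as the abstract describes.
    pipeline = [Stage("decode", 3), Stage("execute", 7), Stage("writeback", 2)]
    run_pipeline(pipeline, 0)
```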
Abstract:
A folding apparatus includes a first housing, a first support plate, a middle housing, a first mounting bracket, a first transmission arm, and a first rotating arm. The first mounting bracket is fixed to the first housing, the first transmission arm is rotatably connected to the middle housing, a rotation center is a first axis, and the first transmission arm is slidably connected to the first mounting bracket and slidably connected to the first support plate. The first rotating arm is rotatably connected to the middle housing, a rotation center is a second axis, the first rotating arm is rotatably connected to the first mounting bracket, and the second axis and the first axis are not collinear. The first support plate is rotatably connected to the first mounting bracket, so that the first housing and the first support plate switch between a flattened state and a folded state.
Abstract:
A key-value (KV) storage method and apparatus, the method including receiving a write request, where the write request is associated with writing a first key and a first value, storing the first key in a first memory chip of a solid state drive (SSD), and storing the first value in a second memory chip of the SSD, where an erase count of the first memory chip is less than an erase count of the second memory chip, and creating a mapping relationship between the first key, a physical address of the first key, and a physical address of the first value, where the physical address of the first key indicates that the first key is stored in storage space of the first memory chip, and where the physical address of the first value indicates that the first value is stored in storage space of the second memory chip.
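The wear-aware placement can be illustrated with a minimal Python sketch; the Chip and KvSsd classes and their methods are hypothetical names, not the SSD's actual firmware interface. The key goes to the chip with the lower erase count, the value to the other chip, and a single mapping entry records both physical addresses:

```python
# Minimal sketch, assuming hypothetical chip/mapping classes: the key is
# stored in the chip with the lower erase count, the value in the other,
# and one mapping entry ties the key to both physical addresses.

class Chip:
    def __init__(self, name, erase_count):
        self.name = name
        self.erase_count = erase_count
        self.blocks = []

    def store(self, data):
        self.blocks.append(data)
        return (self.name, len(self.blocks) - 1)     # physical address

class KvSsd:
    def __init__(self, chip_a, chip_b):
        self.chips = (chip_a, chip_b)
        self.mapping = {}                             # key -> (key_addr, value_addr)

    def write(self, key, value):
        key_chip, value_chip = sorted(self.chips, key=lambda c: c.erase_count)
        key_addr = key_chip.store(key)                # less-worn chip holds the key
        value_addr = value_chip.store(value)          # more-worn chip holds the value
        self.mapping[key] = (key_addr, value_addr)

if __name__ == "__main__":
    ssd = KvSsd(Chip("chip0", erase_count=10), Chip("chip1", erase_count=250))
    ssd.write(b"user:42", b"profile-data")
    print(ssd.mapping)
```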
Abstract:
This disclosure provides a cache data control method and a device, applied to a first edge cache node. The method includes: receiving a data obtaining request sent from a terminal device, where the data obtaining request includes an identifier of to-be-requested data; when the first edge cache node does not include the to-be-requested data, determining, from an edge cache node set corresponding to the first edge cache node and a central cache node corresponding to the first edge cache node, a target cache node that includes the to-be-requested data; and obtaining the to-be-requested data from the target cache node. This disclosure is intended to improve the efficiency of feeding back data information to the terminal device.
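A minimal Python sketch of the miss-handling flow follows; the CacheNode class and handle_request function are hypothetical names, not the disclosed device's API. On a local miss, the first edge node searches its edge cache node set and its central cache node for a target that holds the data, then fetches from that target:

```python
# Minimal sketch, assuming hypothetical node objects: on a local miss, the
# first edge node looks for the data in its associated edge cache node set
# and its central cache node, then obtains it from whichever target holds it.

class CacheNode:
    def __init__(self, name, data=None):
        self.name = name
        self.data = dict(data or {})

    def has(self, key):
        return key in self.data

    def get(self, key):
        return self.data[key]

def handle_request(first_edge, edge_set, central, key):
    if first_edge.has(key):                      # local hit
        return first_edge.get(key)
    candidates = list(edge_set) + [central]
    target = next((n for n in candidates if n.has(key)), None)
    if target is None:
        return None                              # no cached copy available
    value = target.get(key)
    first_edge.data[key] = value                 # cache locally for next time
    return value

if __name__ == "__main__":
    edge1 = CacheNode("edge1")
    edge2 = CacheNode("edge2", {"video-123": b"chunk"})
    central = CacheNode("central")
    print(handle_request(edge1, [edge2], central, "video-123"))
```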
Abstract:
A network control method relates to the communications field and includes: receiving, by a controller, a packet forwarded by a forwarder; detecting, by the controller, a status of a virtual currency identifier of the packet; when the status of the virtual currency identifier indicates that a user is willing to pay virtual currency to raise a network priority, querying, by the controller according to a user identifier in the packet, whether the user has permission to improve service quality; and when the user has the permission to improve the service quality, raising, by the controller, the network priority of the user, starting charging, and sending the network priority of the user to the forwarder, such that the forwarder forwards a packet of the user according to the network priority of the user.
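The control flow can be sketched in a few lines of Python; the field names, permission table, and rule structures below are hypothetical. The controller checks the virtual currency identifier, verifies permission by user identifier, then raises the priority, starts charging, and pushes the new priority to the forwarder:

```python
# Minimal sketch with hypothetical field and table names (not the claimed
# controller implementation): inspect the packet's virtual currency
# identifier, verify the user's permission, then raise priority, start
# charging, and send the new priority to the forwarder.

PERMISSIONS = {"user-7": True}          # user id -> allowed to buy priority

def handle_packet(packet, forwarder_rules, charging_log):
    if packet.get("vc_flag"):                        # user willing to pay
        user = packet["user_id"]
        if PERMISSIONS.get(user):                    # permission check
            new_priority = packet.get("priority", 0) + 1
            charging_log.append((user, "charge_started"))
            forwarder_rules[user] = new_priority     # "sent" to the forwarder
            return new_priority
    return packet.get("priority", 0)

if __name__ == "__main__":
    rules, log = {}, []
    pkt = {"user_id": "user-7", "vc_flag": True, "priority": 1}
    print(handle_packet(pkt, rules, log), rules, log)
```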
Abstract:
Embodiments are provided for an asynchronous processor with a token-based very long instruction word architecture. The asynchronous processor comprises a memory configured to cache a plurality of instructions, a feedback engine configured to receive the instructions in bundles (each bundle referred to as a very long instruction word) and to decode the instructions, and a crossbar bus configured to transfer calculation information and results of the asynchronous processor. The apparatus further comprises a plurality of sets of execution units (XUs) between the feedback engine and the crossbar bus. Each of the sets of XUs comprises a plurality of XUs arranged in series and configured to process a bundle of instructions received at that set from the feedback engine.
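A minimal behavioral sketch in Python (hypothetical function names, not the disclosed microarchitecture): a bundle is decoded and its instructions are handed to the XUs of one set in series, with results placed on a shared crossbar structure:

```python
# Minimal behavioral sketch: a decoded VLIW bundle is processed by the XUs of
# one set in series, and each result is placed on a shared "crossbar" list.

def decode(bundle):
    # Trivial decode: each instruction is (op, a, b).
    return [(op, a, b) for op, a, b in bundle]

def execute_bundle(xu_count, bundle, crossbar):
    decoded = decode(bundle)
    for xu_index in range(xu_count):                 # XUs arranged in series
        if xu_index < len(decoded):
            op, a, b = decoded[xu_index]
            result = a + b if op == "add" else a * b
            crossbar.append((xu_index, result))      # results onto the crossbar

if __name__ == "__main__":
    crossbar = []
    vliw_bundle = [("add", 1, 2), ("mul", 3, 4), ("add", 5, 6)]
    execute_bundle(xu_count=3, bundle=vliw_bundle, crossbar=crossbar)
    print(crossbar)
```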
Abstract:
The present invention relates to the computer field, and specifically, to a method and an apparatus for coordinating body devices for communication. The method includes: obtaining, by a mobile personal station, an identifier of a body device, and a location parameter and an ambient parameter of a user that carries the body device; obtaining, according to the identifier of the body device, a communication mode supported by the body device; determining a scenario of the user according to the location parameter and the ambient parameter; determining a networking mode of the body device according to the scenario and the communication mode supported by the body device; and establishing a connection to the body device according to the networking mode.
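A minimal Python sketch of the coordination steps follows; the scenario rules, mode tables, and function names are hypothetical. The mobile personal station derives a scenario from the location and ambient parameters, picks a networking mode the body device supports, and connects with that mode:

```python
# Minimal sketch with hypothetical scenario/mode tables (not the claimed
# method): derive a scenario from location and ambient parameters, intersect
# it with the modes the body device supports, and connect using the result.

SUPPORTED_MODES = {"hr-band-01": ["BLE", "WiFi"]}        # device id -> modes

def determine_scenario(location, ambient):
    if location == "gym" or ambient.get("motion") == "high":
        return "sport"
    return "daily"

def select_networking_mode(device_id, scenario):
    preferred = ["BLE"] if scenario == "sport" else ["WiFi", "BLE"]
    supported = SUPPORTED_MODES.get(device_id, [])
    return next((m for m in preferred if m in supported), None)

def connect(device_id, location, ambient):
    scenario = determine_scenario(location, ambient)
    mode = select_networking_mode(device_id, scenario)
    return f"connected to {device_id} via {mode} ({scenario} scenario)"

if __name__ == "__main__":
    print(connect("hr-band-01", "gym", {"motion": "high"}))
```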
Abstract:
A timing prediction circuit and method relate to the field of circuit technologies, may be used to predict a timing margin of a to-be-predicted digital circuit, and resolve the problem that a large quantity of devices is needed to predict a probability that a timing error occurs in the to-be-predicted digital circuit. The timing prediction circuit includes a combinational logic circuit, a delay circuit, a sampling circuit, and a control circuit, where the sampling circuit includes N samplers, an input end of each sampler is separately connected to an output end of the combinational logic circuit using the delay circuit, and an output end of each sampler is connected to an input end of the control circuit, where N is an integer greater than or equal to 2. The present invention can be used to predict a timing margin of a to-be-predicted digital circuit.
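The sampling idea can be modeled behaviorally in Python; this is a sketch under simplifying assumptions, not a circuit description. N samplers observe the combinational output through successively larger delays, and the control logic infers the timing margin from the earliest delay at which the sampled value has settled:

```python
# Minimal behavioral sketch (not a circuit netlist): N samplers capture the
# combinational output after successively larger delays; the control logic
# takes the earliest delay at which the sampled value has settled as the
# basis for estimating the timing margin.

def sample_outputs(settle_time_ns, delays_ns):
    # A sampler sees the settled value only if its delay exceeds the
    # combinational logic's settle time; otherwise it sees a stale value.
    return [(d, d >= settle_time_ns) for d in delays_ns]

def predict_margin(clock_period_ns, settle_time_ns, delays_ns):
    samples = sample_outputs(settle_time_ns, delays_ns)
    stable_delays = [d for d, ok in samples if ok]
    if not stable_delays:
        return None                               # logic slower than all taps
    return clock_period_ns - min(stable_delays)   # remaining timing margin

if __name__ == "__main__":
    # N = 4 samplers tapping the delay circuit at 1, 2, 3, 4 ns.
    print(predict_margin(clock_period_ns=5, settle_time_ns=2.4,
                         delays_ns=[1, 2, 3, 4]))
```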
Abstract:
Embodiments are provided for an asynchronous processor with a Hierarchical Token System. The asynchronous processor includes a set of primary processing units configured to gate and pass a set of tokens in a predefined order of a primary token system. The asynchronous processor further includes a set of secondary units configured to gate and pass a second set of tokens in a second predefined order of a secondary token system. The set of tokens of the primary token system includes a token consumed in the set of primary processing units and designated for triggering the secondary token system in the set of secondary units.
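A minimal Python sketch of the hierarchy (unit names and token orders are hypothetical, not the disclosed processor): primary units gate and pass tokens in a predefined order, and consuming the designated token triggers the secondary token ring in the secondary units:

```python
# Minimal sketch: primary units consume tokens in a fixed order; consuming
# the designated "launch" token triggers the secondary token system.

PRIMARY_ORDER = ["fetch", "decode", "launch", "commit"]
SECONDARY_ORDER = ["reg-read", "execute", "write-back"]

def run_secondary_ring():
    for token in SECONDARY_ORDER:
        print(f"  secondary unit consumed token: {token}")

def run_primary_ring(trigger_token="launch"):
    for token in PRIMARY_ORDER:                  # predefined primary order
        print(f"primary unit consumed token: {token}")
        if token == trigger_token:               # token designated to trigger
            run_secondary_ring()                 # the secondary token system

if __name__ == "__main__":
    run_primary_ring()
```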