Abstract:
Provided are a hardware accelerator and method, a central processing unit, and a computing device. A hardware accelerating method includes, in response to a request for a new task issued by a hardware thread, accelerating processing of the new task and producing a processing result for the task. A predicting step predicts the total waiting time before the processing result of the new task is returned to a specified address associated with the hardware thread.
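A minimal sketch of how such an interface might look from the hardware thread's side. The structure layout and the names (accel_submit, wait_cycles, the return-address argument) are illustrative assumptions, not the interface described in the abstract; it only models the two outputs the abstract names, a processing result and a predicted waiting time, landing at a per-thread address.

#include <stdint.h>
#include <stdio.h>

/* Illustrative response layout: the accelerator writes the processing
 * result and the predicted total waiting time back to a specified
 * address associated with the requesting hardware thread. */
struct accel_response {
    uint64_t result;        /* processing result for the task            */
    uint64_t wait_cycles;   /* predicted total waiting time (in cycles)  */
    volatile int ready;     /* set once the result has been written back */
};

/* Hypothetical submit call: hands the new task to the accelerator and
 * names the per-thread address the response should be returned to. */
static void accel_submit(uint64_t task_payload, struct accel_response *resp)
{
    resp->wait_cycles = 128;          /* prediction made up front          */
    resp->result = task_payload * 2;  /* stand-in for the accelerated work */
    resp->ready = 1;
}

int main(void)
{
    struct accel_response resp = {0};
    accel_submit(42, &resp);
    printf("predicted wait: %llu cycles, result: %llu\n",
           (unsigned long long)resp.wait_cycles,
           (unsigned long long)resp.result);
    return 0;
}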
Abstract:
A correlation processing apparatus that obtains a correlation value between an image and a subimage, the apparatus including: N arithmetic circuits, each of the N arithmetic circuits performing an arithmetic operation on a first image pixel value of a first image pixel of the image and a second image pixel value of a second image pixel of the subimage; a rectangular pattern selection circuit selecting, from among a plurality of predetermined rectangular patterns, a rectangular pattern that includes Q elements and yields the smallest number of divisions when the image is divided by the rectangular pattern; a control circuit activating Q arithmetic circuits among the N arithmetic circuits and identifying the Q first image pixel values and the Q second image pixel values on which the arithmetic operations are performed by the Q arithmetic circuits; and an accumulator accumulating the results of the arithmetic operations performed by the Q arithmetic circuits.
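A software sketch of the computation the apparatus parallelizes: only the Q pixel pairs selected by the chosen rectangular pattern feed the active arithmetic circuits, and an accumulator sums their outputs. The pattern encoding, the use of a product as the per-element arithmetic operation, and all names are assumptions made for illustration.

#include <stdint.h>
#include <stdio.h>

#define IMG_W 8
#define IMG_H 8

/* One element of a rectangular pattern: an offset that a single
 * arithmetic circuit would be assigned to. */
struct pattern_element { int dx, dy; };

/* Accumulate the arithmetic results over the Q selected pixel pairs.
 * The per-element operation is a product, standing in for whatever
 * operation the arithmetic circuits actually implement. */
static int64_t correlate(const uint8_t image[IMG_H][IMG_W],
                         const uint8_t subimage[IMG_H][IMG_W],
                         const struct pattern_element *pattern, int q,
                         int ox, int oy)
{
    int64_t acc = 0;
    for (int i = 0; i < q; i++) {             /* Q active "circuits" */
        int x = ox + pattern[i].dx;
        int y = oy + pattern[i].dy;
        acc += (int64_t)image[y][x] * subimage[pattern[i].dy][pattern[i].dx];
    }
    return acc;
}

int main(void)
{
    uint8_t image[IMG_H][IMG_W] = {{0}}, subimage[IMG_H][IMG_W] = {{0}};
    image[1][1] = 10; subimage[1][1] = 3;
    /* A 2x2 rectangular pattern: Q = 4 elements. */
    struct pattern_element pat[] = {{0,0},{1,0},{0,1},{1,1}};
    printf("correlation = %lld\n",
           (long long)correlate(image, subimage, pat, 4, 0, 0));
    return 0;
}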
Abstract:
Provided are a hardware accelerator, a central processing unit, and a computing device. A hardware accelerator includes a task accelerating unit configured to, in response to a request for a new task issued by a hardware thread, accelerate the processing of the new task and produce a processing result for the task; and a task time prediction unit configured to predict the total waiting time before the processing result of the new task is returned to a specified address associated with the hardware thread. One aspect of this disclosure makes the hardware thread aware of the time to be waited for before it gets the processing result, facilitating its task planning accordingly.
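Building on the sketch after the first abstract, the predicted waiting time is what lets the hardware thread plan: a short wait may be worth busy-polling, a long one is worth spending on other work first. The threshold and helper name below are assumptions for illustration only.

#include <stdint.h>
#include <stdio.h>

#define SPIN_THRESHOLD_CYCLES 256  /* illustrative cut-off, not from the abstract */

/* Decide how the hardware thread plans around the predicted wait. */
static const char *plan_for_wait(uint64_t predicted_wait_cycles)
{
    return (predicted_wait_cycles <= SPIN_THRESHOLD_CYCLES)
               ? "spin until the result arrives"
               : "run another task, check back later";
}

int main(void)
{
    printf("wait=100  -> %s\n", plan_for_wait(100));
    printf("wait=5000 -> %s\n", plan_for_wait(5000));
    return 0;
}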
Abstract:
The present invention provides a method, apparatus, and article of manufacture for fast context saving in transactional memory. The method creates a mapping table that includes entries corresponding to architectural registers. Each entry includes a physical register index and a shadow bit of a first physical register mapped to an architectural register. In response to detecting that an update to an architectural register occurs in a transaction while its shadow bit holds an invalid value, the method sets the shadow bit to a valid value and sets a shadow register for the architectural register using the physical register index of the first physical register. The method maps a second physical register to the shadow register in order to save the modified value generated by the update, and preserves the original, pre-update value by means of the first physical register corresponding to the architectural register.
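A data-structure sketch of the mapping table the abstract describes: on the first transactional update of an architectural register, the old physical register is kept as the shadow copy and a fresh physical register takes the new value, so no data has to be copied. The table size, field names, and the naive free-register counter are assumptions for illustration.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_ARCH_REGS 16

/* One mapping-table entry per architectural register. */
struct map_entry {
    uint16_t phys_index;    /* physical register currently mapped to the arch reg */
    bool     shadow_valid;  /* shadow bit: has the original been preserved yet?    */
    uint16_t shadow_index;  /* shadow register: index of the physical register     */
                            /* still holding the pre-transaction value             */
};

static struct map_entry mapping_table[NUM_ARCH_REGS];
static uint16_t next_free_phys = NUM_ARCH_REGS;   /* naive free-list stand-in */

/* First update of an architectural register inside a transaction:
 * keep the original value where it is (first physical register) and
 * map a second physical register to receive the modified value. */
static void on_transactional_update(int arch_reg)
{
    struct map_entry *e = &mapping_table[arch_reg];
    if (!e->shadow_valid) {
        e->shadow_valid = true;              /* set shadow bit to valid            */
        e->shadow_index = e->phys_index;     /* old physical reg keeps the original */
        e->phys_index   = next_free_phys++;  /* second physical reg takes updates   */
    }
}

int main(void)
{
    for (int i = 0; i < NUM_ARCH_REGS; i++)
        mapping_table[i].phys_index = (uint16_t)i;
    on_transactional_update(3);
    printf("arch r3: original value stays in p%u, updates go to p%u\n",
           (unsigned)mapping_table[3].shadow_index,
           (unsigned)mapping_table[3].phys_index);
    return 0;
}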
Abstract:
The present invention provides a rule-set-partitioning-based packet classification method for the Internet. The method comprises: performing a Horizontal Cut on the rule set, which includes determining, based on a target algorithm, the field used to partition rule layers, selecting a partition manner for the Horizontal Cut, and performing the Horizontal Cut according to the selected partition manner, thereby obtaining more than one rule layer, each rule layer being a Horizontal subset; combining the rule layers into a plurality of Horizontal subsets according to the pre-designated total number of Horizontal subsets and a predefined principle, wherein the total number of the combined Horizontal subsets equals the pre-designated total number; performing a Vertical Cut within each Horizontal subset; forming a Hash table that indexes the Vertical subsets so that it can be used in a lookup; and storing the rules of each Vertical subset according to the target algorithm.
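A toy sketch of the two-level partitioning the abstract outlines: rules are first grouped into horizontal layers by one field and folded down to a target number of subsets, then each horizontal subset is cut vertically and reached through a hash index at lookup time. The field choices, the merge rule, the hash, and every name are assumptions for illustration, not the patented algorithm.

#include <stdint.h>
#include <stdio.h>

#define NUM_RULES    8
#define H_SUBSETS    2        /* pre-designated number of Horizontal subsets */
#define HASH_BUCKETS 4

struct rule { uint32_t src_prefix_len; uint32_t dst_key; int id; };

/* Horizontal Cut (toy version): partition rules into layers by the source
 * prefix length, then fold the layers into H_SUBSETS groups. */
static int horizontal_subset(const struct rule *r)
{
    return (r->src_prefix_len >= 16) ? 0 : 1;   /* two merged layers */
}

/* Vertical Cut plus hash index (toy version): inside a Horizontal subset,
 * rules are split by a hash of another field so a lookup can jump
 * straight to a small Vertical subset. */
static int vertical_bucket(const struct rule *r)
{
    return (int)(r->dst_key % HASH_BUCKETS);
}

int main(void)
{
    struct rule rules[NUM_RULES] = {
        {24, 7, 0}, {8, 3, 1}, {32, 7, 2}, {0, 9, 3},
        {16, 2, 4}, {12, 5, 5}, {20, 9, 6}, {4, 1, 7},
    };
    for (int i = 0; i < NUM_RULES; i++)
        printf("rule %d -> horizontal subset %d, vertical bucket %d\n",
               rules[i].id, horizontal_subset(&rules[i]),
               vertical_bucket(&rules[i]));
    return 0;
}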
Abstract:
A processing system may include a performance monitoring unit (PMU), a machine accessible medium, and a processor responsive to the PMU and the machine accessible medium. Instructions encoded in the machine accessible medium, when executed by the processor, may determine whether performance details for the processing system should be collected, based at least in part on a predetermined monitoring policy for the processing system. The instructions may generate performance data for the processing system, based at least in part on data obtained from the PMU. The instructions may determine whether the processing system should be reconfigured, based at least in part on the performance data and a power policy profile for the processing system. The instructions may automatically adjust power consumption of the processing system by using the PMU to reconfigure the processing system. Other embodiments are described and claimed.
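A high-level sketch of the monitor/decide/adjust loop the abstract outlines: PMU-derived performance data is checked against a power policy profile, and the outcome decides how the processing system is reconfigured. The counter names, the stall-ratio policy, and the frequency knob are illustrative assumptions, not details from the abstract.

#include <stdint.h>
#include <stdio.h>

/* Stand-ins for data the PMU would provide and the policy that governs
 * the decision; none of these names come from the abstract itself. */
struct pmu_sample   { uint64_t cycles; uint64_t stall_cycles; };
struct power_policy { double max_stall_ratio; int low_power_freq_mhz; int full_freq_mhz; };

/* Decide, from the performance data and the power policy profile, whether
 * the system should be reconfigured, and return the frequency to run at. */
static int choose_frequency(const struct pmu_sample *s, const struct power_policy *p)
{
    double stall_ratio = (double)s->stall_cycles / (double)s->cycles;
    return (stall_ratio > p->max_stall_ratio) ? p->low_power_freq_mhz
                                              : p->full_freq_mhz;
}

int main(void)
{
    struct power_policy policy  = { 0.5, 800, 2400 };
    struct pmu_sample   busy    = { 1000000, 100000 };  /* mostly doing work */
    struct pmu_sample   stalled = { 1000000, 700000 };  /* mostly waiting    */
    printf("busy core    -> run at %d MHz\n", choose_frequency(&busy, &policy));
    printf("stalled core -> run at %d MHz\n", choose_frequency(&stalled, &policy));
    return 0;
}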
Abstract:
A cache system includes processing units operative to access a main memory device, caches coupled in one-to-one correspondence to the processing units, and a controller coupled to the caches to control data transfer between the caches and data transfer between the main memory device and the caches, wherein the controller includes a memory configured to store first information and second information separately for each index, the first information indicating an order of oldness of entries in each one of the caches, and the second information indicating an order of oldness of entries across the plurality of caches, and a logic circuit configured to select an entry to be evicted and its destination based on the first information and the second information when an entry of an index corresponding to an accessed address is to be evicted from the cache corresponding to the processing unit that accesses the main memory device.
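A sketch of the eviction decision the controller's logic circuit makes: for one index it consults the per-cache age order to pick the victim way, and the cross-cache age order to pick a destination. The two-cache/two-way sizes, the age encoding, and the "move the victim to the cache holding the globally oldest entry" rule are assumptions chosen to keep the example small.

#include <stdio.h>

#define NUM_CACHES 2
#define NUM_WAYS   2

/* Per-index bookkeeping kept by the controller (larger value = older):
 *  - local_age[c][w]:  order of oldness inside cache c   (first information)
 *  - global_age[c][w]: order of oldness across all caches (second information) */
struct index_state {
    int local_age[NUM_CACHES][NUM_WAYS];
    int global_age[NUM_CACHES][NUM_WAYS];
};

/* Pick the victim way from the first information, then pick a destination:
 * if another cache holds the globally oldest entry at this index, send the
 * victim there; otherwise write it back to main memory (destination -1). */
static void select_eviction(const struct index_state *s, int evicting_cache,
                            int *victim_way, int *dest_cache)
{
    *victim_way = 0;
    for (int w = 1; w < NUM_WAYS; w++)
        if (s->local_age[evicting_cache][w] > s->local_age[evicting_cache][*victim_way])
            *victim_way = w;

    int oldest_cache = 0, oldest = -1;
    for (int c = 0; c < NUM_CACHES; c++)
        for (int w = 0; w < NUM_WAYS; w++)
            if (s->global_age[c][w] > oldest) { oldest = s->global_age[c][w]; oldest_cache = c; }

    *dest_cache = (oldest_cache == evicting_cache) ? -1 : oldest_cache;
}

int main(void)
{
    struct index_state s = {
        .local_age  = {{2, 1}, {1, 2}},
        .global_age = {{3, 1}, {2, 4}},   /* cache 1, way 1 is globally oldest */
    };
    int way, dest;
    select_eviction(&s, 0, &way, &dest);
    printf("evict cache 0 way %d, destination: %s\n",
           way, dest < 0 ? "main memory" : "peer cache");
    return 0;
}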
Abstract:
A method and system for compressing and encrypting data. The method includes: receiving original data; performing a first compression of the original data to obtain a first compression result; and encrypting only a literal portion in the first compression result to obtain an encrypted first compression result. Various embodiments greatly improve the efficiency of the combined compression and encryption process by encrypting only the literal portion of the compression result.
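A sketch of the idea in terms of an LZ77-style token stream: after compression, only literal tokens are run through the cipher, while (length, distance) back-references are left in the clear, so the cipher touches far fewer bytes. The token layout and the XOR stand-in for the cipher are assumptions for illustration only.

#include <stdint.h>
#include <stdio.h>

/* LZ77-style token: either a literal byte or a back-reference. */
struct token {
    int      is_literal;
    uint8_t  literal;       /* valid when is_literal != 0             */
    uint16_t length, dist;  /* valid when is_literal == 0             */
};

/* Encrypt only the literal portion of the compression result.  A real
 * implementation would use a proper cipher; XOR with a keystream byte is
 * just a placeholder that shows which bytes get transformed. */
static void encrypt_literals(struct token *toks, int n, uint8_t key)
{
    for (int i = 0; i < n; i++)
        if (toks[i].is_literal)
            toks[i].literal ^= key;      /* back-references stay untouched */
}

int main(void)
{
    struct token toks[] = {
        { 1, 'a', 0, 0 },      /* literal 'a'                      */
        { 1, 'b', 0, 0 },      /* literal 'b'                      */
        { 0,  0,  2, 2 },      /* copy 2 bytes from 2 bytes back   */
    };
    encrypt_literals(toks, 3, 0x5a);
    for (int i = 0; i < 3; i++)
        printf("token %d: %s\n", i, toks[i].is_literal ? "encrypted literal"
                                                       : "clear back-reference");
    return 0;
}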
Abstract:
The present invention provides a central processing unit for processing at least one piece of encrypted software. The encrypted software comprises at least one encrypted software section. The encrypted software section is encrypted with a management key MK, and the MK is encrypted with a device key DK to form an encrypted MK. The central processing unit comprises a processing and cache unit and a cryptographic unit. The cryptographic unit comprises a device key storage unit for storing the DK, a plurality of management key storage units for storing MKs, each management key storage unit corresponding to a management key index MKI, and a decryption unit. The decryption unit decrypts an encrypted MK with the DK to obtain an MK, stores the MK in a management key storage unit, and outputs the MKI corresponding to that management key storage unit, so that the MKI can be associated with the encrypted software section. The decryption unit then retrieves the corresponding MK according to the MKI, decrypts the encrypted software section, and transfers the decrypted software code and/or data directly to the processing and cache unit.
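A sketch of the key-slot bookkeeping the abstract describes: an encrypted MK is decrypted with the device key DK, parked in a management-key storage slot, and the slot's index (the MKI) is what later identifies which key decrypts a given software section. The slot count and the XOR stand-in for the real cipher are assumptions made for illustration.

#include <stdint.h>
#include <stdio.h>

#define NUM_MK_SLOTS 4

static uint32_t device_key = 0xC0FFEE;      /* DK held inside the CPU        */
static uint32_t mk_slots[NUM_MK_SLOTS];     /* management key storage units  */
static int      next_slot = 0;

/* Placeholder cipher: a real unit would use a block cipher, not XOR. */
static uint32_t toy_decrypt(uint32_t ciphertext, uint32_t key) { return ciphertext ^ key; }

/* Load an encrypted MK: decrypt it with DK, store the plaintext MK in a
 * slot, and hand back the MKI that names that slot. */
static int load_management_key(uint32_t encrypted_mk)
{
    int mki = next_slot++ % NUM_MK_SLOTS;
    mk_slots[mki] = toy_decrypt(encrypted_mk, device_key);
    return mki;
}

/* Decrypt one word of an encrypted software section using the MK that the
 * section's MKI points at; the plaintext would go straight to the
 * processing and cache unit rather than out to external memory. */
static uint32_t decrypt_section_word(uint32_t word, int mki)
{
    return toy_decrypt(word, mk_slots[mki]);
}

int main(void)
{
    uint32_t mk = 0x12345678;
    int mki = load_management_key(mk ^ device_key);   /* the "encrypted MK" */
    uint32_t code_word = 0xDEADBEEF;
    uint32_t decrypted = decrypt_section_word(code_word ^ mk, mki);
    printf("MKI=%d, decrypted word matches: %s\n",
           mki, decrypted == code_word ? "yes" : "no");
    return 0;
}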