Abstract:
An apparatus includes a primary hypervisor that is executable on a first set of processors and a secondary hypervisor that is executable on a second set of processors. The primary hypervisor may define settings of a resource and the secondary hypervisor may use the resource based on the settings defined by the primary hypervisor. For example, the primary hypervisor may program memory address translation mappings for the secondary hypervisor. The primary hypervisor and the secondary hypervisor may include their own schedulers.
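Below is a minimal C sketch of the division of roles described above, assuming a simple shared mapping table; all type and function names (resource_settings_t, primary_define_mapping, secondary_translate) are hypothetical and not taken from the abstract.

/* Minimal sketch (hypothetical types and names): the primary hypervisor owns
 * the translation settings for a shared resource; the secondary hypervisor
 * only reads those settings when translating its own accesses. */
#include <stdint.h>

#define MAX_MAPPINGS 64

typedef struct {
    uint64_t guest_pa;    /* intermediate physical address               */
    uint64_t host_pa;     /* machine physical address chosen by primary  */
    uint64_t attributes;  /* cacheability, permissions, etc.             */
} translation_entry_t;

typedef struct {
    translation_entry_t entries[MAX_MAPPINGS];
    int count;
} resource_settings_t;

/* Primary hypervisor: defines the settings (programs the mappings). */
void primary_define_mapping(resource_settings_t *s,
                            uint64_t guest_pa, uint64_t host_pa, uint64_t attr)
{
    if (s->count < MAX_MAPPINGS)
        s->entries[s->count++] = (translation_entry_t){ guest_pa, host_pa, attr };
}

/* Secondary hypervisor: uses the resource strictly as configured above. */
uint64_t secondary_translate(const resource_settings_t *s, uint64_t guest_pa)
{
    for (int i = 0; i < s->count; i++)
        if (s->entries[i].guest_pa == guest_pa)
            return s->entries[i].host_pa;
    return (uint64_t)-1;  /* no mapping defined by the primary */
}
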
Abstract:
A processor device includes a memory and a sequencer that is responsive to the memory. The sequencer supports very long instruction word (VLIW) type instructions, and at least one VLIW instruction packet uses a number of operands during execution. The processor device further includes a plurality of instruction execution units responsive to the sequencer and a plurality of register files. Each of the plurality of register files includes a plurality of registers, and the plurality of register files are coupled to the plurality of instruction execution units. Further, each of the plurality of register files includes a number of data read ports, and the number of data read ports of each of the plurality of register files is less than the number of operands used by the at least one VLIW instruction packet.
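A short C sketch of one way such a register file could be modeled is given below, assuming the shortfall in read ports is covered by serializing reads over multiple phases; this port-sharing scheme, and every identifier in the sketch, is an assumption for illustration only.

/* Minimal sketch (hypothetical model): a register file with fewer read ports
 * than the operands named by a VLIW packet; reads are spread across phases
 * so the packet still receives every operand. */
#include <stdint.h>
#include <stdio.h>

#define NUM_REGS   32
#define READ_PORTS 2   /* fewer ports than the 3 operands used below */

typedef struct {
    uint32_t regs[NUM_REGS];
} register_file_t;

/* Read 'count' operands through READ_PORTS ports over ceil(count/ports) phases. */
void read_operands(const register_file_t *rf,
                   const int *operand_regs, int count, uint32_t *out)
{
    int phases = (count + READ_PORTS - 1) / READ_PORTS;
    for (int p = 0; p < phases; p++)
        for (int port = 0; port < READ_PORTS; port++) {
            int idx = p * READ_PORTS + port;
            if (idx < count)
                out[idx] = rf->regs[operand_regs[idx]];
        }
}

int main(void)
{
    register_file_t rf = { .regs = {0} };
    rf.regs[1] = 10; rf.regs[2] = 20; rf.regs[3] = 30;
    int operands[3] = { 1, 2, 3 };   /* packet names 3 operands, only 2 ports */
    uint32_t vals[3];
    read_operands(&rf, operands, 3, vals);
    printf("%u %u %u\n", (unsigned)vals[0], (unsigned)vals[1], (unsigned)vals[2]);
    return 0;
}
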
Abstract:
A method includes executing, at a processor, a dedicated arithmetic encoding instruction. The dedicated arithmetic encoding instruction accepts a plurality of inputs including a first range, a first offset, and a first state and produces one or more outputs based on the plurality of inputs. The method also includes storing a second state, realigning the first range to produce a second range, and realigning the first offset to produce a second offset based on the one or more outputs of the dedicated arithmetic encoding instruction.
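The following C sketch illustrates the kind of update such an instruction could perform, assuming CABAC-style binary arithmetic coding semantics; the instruction's exact behavior, and all names below, are assumptions.

/* Minimal sketch (assumed CABAC-style semantics): one encode step consumes a
 * range, an offset, and a state, stores the next state, and then renormalizes
 * (realigns) the range and offset to produce the second range and offset. */
#include <stdint.h>

typedef struct {
    uint32_t range;   /* first range  -> becomes second range  */
    uint32_t offset;  /* first offset -> becomes second offset */
    uint32_t state;   /* first state  -> becomes second state  */
} arith_encoder_t;

/* Hypothetical per-bin encode: split the range, update the state, then realign. */
void arith_encode_bin(arith_encoder_t *e, int bin,
                      uint32_t lps_range, uint32_t next_state)
{
    if (bin) {                       /* less-probable-symbol path */
        e->offset += e->range - lps_range;
        e->range   = lps_range;
    } else {                         /* more-probable-symbol path */
        e->range  -= lps_range;
    }
    e->state = next_state;           /* store the second state */

    while (e->range < 0x100) {       /* realign range and offset */
        e->range  <<= 1;
        e->offset <<= 1;             /* carry handling omitted in this sketch */
    }
}
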
Abstract:
A particular method includes receiving at least one translation lookaside buffer (TLB) configuration indicator. The at least one TLB configuration indicator indicates a specific number of entries to be enabled at a TLB. The method further includes modifying a number of searchable entries of the TLB in response to the at least one TLB configuration indicator.
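A minimal C model of this configuration step is sketched below; the tlb_t structure, the reading of the configuration indicator as a raw entry count, and the lookup routine are all illustrative assumptions.

/* Minimal sketch (hypothetical software model of the TLB): the configuration
 * indicator selects how many entries are enabled and therefore searchable. */
#include <stdint.h>
#include <stdbool.h>

#define TLB_MAX_ENTRIES 64

typedef struct { uint64_t vpn, ppn; bool valid; } tlb_entry_t;

typedef struct {
    tlb_entry_t entries[TLB_MAX_ENTRIES];
    unsigned    searchable;           /* number of entries currently enabled */
} tlb_t;

/* Apply a TLB configuration indicator: enable the indicated number of entries. */
void tlb_configure(tlb_t *tlb, unsigned config_indicator)
{
    unsigned n = config_indicator;
    if (n > TLB_MAX_ENTRIES)
        n = TLB_MAX_ENTRIES;
    tlb->searchable = n;              /* lookups search only this many entries */
}

/* Lookup searches only the enabled entries. */
bool tlb_lookup(const tlb_t *tlb, uint64_t vpn, uint64_t *ppn)
{
    for (unsigned i = 0; i < tlb->searchable; i++)
        if (tlb->entries[i].valid && tlb->entries[i].vpn == vpn) {
            *ppn = tlb->entries[i].ppn;
            return true;
        }
    return false;
}
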
Abstract:
A system for translating compressed instructions to instructions in an executable format is described. A translation unit is configured to decompress compressed instructions into a native instruction format using X and Y indices accessed from a memory, a translation memory, and a program-specified mix mask. A level 1 cache is configured to store the native instruction format for each compressed instruction. The memory may be configured as a paged instruction cache to store pages of compressed instructions intermixed with pages of uncompressed instructions. Methods of determining a mix mask for efficiently translating compressed instructions are also described. A genetic method uses pairs of mix masks as genes from a seed population of mix masks; the genes are bred, and may be mutated, to produce pairs of offspring mix masks that update the seed population. A mix mask for efficiently translating compressed instructions is determined from the updated seed population.
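The decompression step might be modeled as below, under the simplifying assumption that the X and Y indices each select a full-width bit pattern and the mix mask chooses which pattern supplies each bit of the native instruction; the table sizes and names are hypothetical.

/* Minimal sketch (assumed interpretation): X and Y indices select bit patterns
 * from two translation tables, and the program-specified mix mask picks, bit
 * by bit, whether each native-instruction bit comes from the X or Y pattern. */
#include <stdint.h>

#define X_TABLE_SIZE 256
#define Y_TABLE_SIZE 256

static uint32_t x_table[X_TABLE_SIZE];  /* patterns used where mask bit == 1 */
static uint32_t y_table[Y_TABLE_SIZE];  /* patterns used where mask bit == 0 */

/* Expand one compressed instruction into a 32-bit native-format instruction. */
uint32_t decompress(uint8_t x_index, uint8_t y_index, uint32_t mix_mask)
{
    uint32_t x_bits = x_table[x_index];
    uint32_t y_bits = y_table[y_index];
    return (x_bits & mix_mask) | (y_bits & ~mix_mask);
}
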
Abstract:
A processor includes priority adjustment circuitry configured to adjust a priority of a thread, of multiple threads configured to execute tasks, to have either a software-defined priority value or a designated high priority value. The processor also includes circuitry configured to identify a lowest priority thread of the multiple threads and a control unit configured to cause the lowest priority thread to take a pending interrupt.
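A C sketch of this interrupt-steering behavior is shown below; the thread table, the PRIORITY_HIGH encoding, and the dispatch_interrupt routine are illustrative assumptions rather than the claimed circuitry.

/* Minimal sketch (hypothetical software model): each hardware thread carries a
 * priority that is either software-defined or a designated high value, and a
 * pending interrupt is steered to the lowest-priority thread. */
#include <stdint.h>

#define NUM_THREADS   4
#define PRIORITY_HIGH 0xFF   /* designated high priority value */

typedef struct {
    uint8_t priority;        /* software-defined value or PRIORITY_HIGH */
    int     pending_interrupt;
} hw_thread_t;

static hw_thread_t threads[NUM_THREADS];

/* Adjust a thread's priority to a software value or the designated high value. */
void set_priority(int tid, uint8_t sw_value, int use_high)
{
    threads[tid].priority = use_high ? PRIORITY_HIGH : sw_value;
}

/* Identify the lowest-priority thread and have it take the pending interrupt. */
int dispatch_interrupt(void)
{
    int lowest = 0;
    for (int t = 1; t < NUM_THREADS; t++)
        if (threads[t].priority < threads[lowest].priority)
            lowest = t;
    threads[lowest].pending_interrupt = 1;
    return lowest;
}
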
Abstract:
A method includes reading, by a processor, one or more configuration values from a storage device or a memory management unit. The method also includes loading the one or more configuration values into one or more registers of the processor. The one or more registers are useable by the processor to perform address translation.
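The following C sketch shows the load sequence in software terms, assuming a three-word configuration block and ARM-style register names (ttbr, tcr, mair) purely for illustration; read_config_block is a hypothetical stand-in for the storage-device or MMU read.

/* Minimal sketch (hypothetical register map): configuration values are read
 * from a storage device or the MMU and loaded into the registers the
 * processor uses to perform address translation. */
#include <stdint.h>

typedef struct {
    uint64_t ttbr;    /* translation table base (illustrative name) */
    uint64_t tcr;     /* translation control                        */
    uint64_t mair;    /* memory attributes                          */
} xlate_regs_t;

typedef struct { uint64_t values[3]; } config_block_t;

/* Stand-in for reading configuration values from a storage device or the MMU. */
static config_block_t read_config_block(void)
{
    config_block_t cfg = { { 0x80000000ull, 0x25ull, 0xFFull } };
    return cfg;
}

/* Load the configuration values into the address-translation registers. */
void load_translation_config(xlate_regs_t *regs)
{
    config_block_t cfg = read_config_block();
    regs->ttbr = cfg.values[0];
    regs->tcr  = cfg.values[1];
    regs->mair = cfg.values[2];
}
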
Abstract:
A method of determining an execution order of memory operations performed by a processor includes executing at least one single-instruction, multiple-data (SIMD) scatter operation by the processor to store data to a memory. The method further includes executing one or more instructions by the processor to determine the execution order of a set of memory operations. The set of memory operations includes the at least one SIMD scatter operation.
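A scalar C emulation of the idea is sketched below; the ordering instruction is stood in for by a standard C11 fence, which is an assumption, not the claimed mechanism.

/* Minimal sketch (scalar emulation): a scatter stores each lane to its own
 * index, and a subsequent fence establishes the order of that scatter
 * relative to memory operations issued afterward. */
#include <stdint.h>
#include <stdatomic.h>

#define LANES 4

/* Emulated SIMD scatter: store lane i of 'values' to memory[indices[i]]. */
void simd_scatter(uint32_t *memory, const uint32_t *indices, const uint32_t *values)
{
    for (int lane = 0; lane < LANES; lane++)
        memory[indices[lane]] = values[lane];
}

void store_with_ordering(uint32_t *memory)
{
    uint32_t indices[LANES] = { 7, 3, 12, 0 };
    uint32_t values[LANES]  = { 1, 2, 3, 4 };

    simd_scatter(memory, indices, values);

    /* Stand-in for the ordering instruction(s): order the scatter before any
     * memory operation issued after this point. */
    atomic_thread_fence(memory_order_seq_cst);

    memory[1] = 0xFFFFFFFFu;   /* ordered after the scatter by the fence */
}
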
Abstract:
A method includes receiving image data and performing a non-maximum suppression (NMS) operation on the image data. The method also includes initiating an edge tracking by hysteresis (ETH) operation on a portion of the image data prior to completion of the NMS operation.
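One way to realize this overlap is a row-level pipeline, sketched below in C; nms_row and eth_row are illustrative stubs, and the row-granularity scheduling is an assumption rather than the claimed method.

/* Minimal sketch (hypothetical row-level pipeline): edge tracking by
 * hysteresis (ETH) begins on a row as soon as its non-maximum suppression
 * (NMS) neighborhood is ready, before NMS has finished the whole image. */
#include <stddef.h>

#define HEIGHT 480

static void nms_row(const float *mag, unsigned char *edges, int row)
{
    (void)mag; (void)edges; (void)row;   /* stub: suppress non-maxima in 'row' */
}

static void eth_row(unsigned char *edges, int row)
{
    (void)edges; (void)row;              /* stub: track edges by hysteresis in 'row' */
}

void canny_stage_pipeline(const float *gradient_magnitude, unsigned char *edge_map)
{
    for (int row = 0; row < HEIGHT; row++) {
        nms_row(gradient_magnitude, edge_map, row);

        /* Once NMS has produced row 'row', row 'row - 1' has both of its
         * vertical neighbors available and can be tracked immediately. */
        if (row >= 1)
            eth_row(edge_map, row - 1);
    }
    eth_row(edge_map, HEIGHT - 1);       /* last row is tracked after NMS completes */
}
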
Abstract:
In a particular embodiment, an apparatus includes control logic configured to selectively set bits of a multi-bit way prediction mask based on a prediction mask value. The control logic is associated with an instruction cache including a data array. A subset of line drivers of the data array is enabled responsive to the multi-bit way prediction mask. The subset of line drivers includes multiple line drivers.
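A minimal C model of the mask and driver-enable path is sketched below; the 8-way geometry and the direct use of the prediction mask value are illustrative assumptions.

/* Minimal sketch (hypothetical software model): control logic sets bits of a
 * multi-bit way prediction mask from a prediction mask value, and only the
 * data-array line drivers whose mask bits are set are enabled on an access. */
#include <stdint.h>
#include <stdbool.h>

#define NUM_WAYS 8

/* Selectively set bits of the way prediction mask based on the prediction value. */
uint8_t build_way_prediction_mask(uint8_t prediction_mask_value)
{
    /* In this sketch the prediction value directly encodes the predicted ways;
     * real control logic may derive the mask from history or address bits. */
    return prediction_mask_value & (uint8_t)((1u << NUM_WAYS) - 1);
}

/* Enable only the subset of line drivers selected by the mask. */
void enable_line_drivers(uint8_t way_mask, bool drivers_enabled[NUM_WAYS])
{
    for (int way = 0; way < NUM_WAYS; way++)
        drivers_enabled[way] = (way_mask >> way) & 1u;
}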