BANDWIDTH REDUCTION FOR INSTRUCTION TRACING
    Invention Application

    Publication Number: US20170249146A1

    Publication Date: 2017-08-31

    Application Number: US15057111

    Application Date: 2016-02-29

    CPC classification number: G06F11/3636 G06F11/36

    Abstract: Systems and methods pertain to reducing the bandwidth of instruction tracing for a processor using an Embedded Trace Macrocell (ETM). Packets that include trace information for load/store instructions executed in the processor are generated. A P-Header comprising commit information for the load/store instructions in up to a maximum number (two or more) of packets is also generated. The P-Header is generated for the full maximum number of packets if none of the load/store instructions in those packets were killed. If a load/store instruction in a packet was killed, a P-Header comprising commit information for the packet containing the killed load/store instruction is generated and placed in the instruction trace immediately after that packet, even if the maximum number has not been reached.
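
    The batching rule described in the abstract can be illustrated with a short sketch. The following C++ model is an assumption-laden illustration, not the patented ETM logic: the packet and P-Header structures, the class name TraceEncoder, and the maximum of four packets per P-Header are invented for the example. It shows only the two emission conditions: a P-Header closes a group when the maximum packet count is reached, or immediately after a packet whose load/store instruction was killed.

```cpp
// Minimal sketch of the P-Header batching rule described in the abstract.
// Names (TracePacket, PHeader, TraceEncoder, MAX_PACKETS_PER_PHEADER) are
// illustrative assumptions, not the patent's actual identifiers.
#include <cstdio>
#include <vector>

constexpr int MAX_PACKETS_PER_PHEADER = 4;  // assumed "maximum number" of packets

struct TracePacket {
    int  loadStoreCount;  // load/store instructions described by this packet
    bool killed;          // true if a load/store in this packet was killed
};

struct PHeader {
    int  packetsCovered;   // how many preceding packets this P-Header commits
    bool closedByKill;     // emitted early because of a killed load/store
};

class TraceEncoder {
public:
    // Consume one generated packet and decide whether a P-Header must follow it.
    void emitPacket(const TracePacket& pkt) {
        pending_.push_back(pkt);
        // Rule 1: a killed load/store forces a P-Header immediately after the
        // packet that contains it, even below the maximum count.
        // Rule 2: otherwise a P-Header covers up to the maximum number of packets.
        if (pkt.killed || static_cast<int>(pending_.size()) == MAX_PACKETS_PER_PHEADER) {
            flushPHeader(pkt.killed);
        }
    }

    const std::vector<PHeader>& headers() const { return headers_; }

private:
    void flushPHeader(bool closedByKill) {
        headers_.push_back({static_cast<int>(pending_.size()), closedByKill});
        pending_.clear();
    }

    std::vector<TracePacket> pending_;
    std::vector<PHeader>     headers_;
};

int main() {
    TraceEncoder enc;
    // Five packets without kills, then one whose load/store was killed.
    for (int i = 0; i < 5; ++i) enc.emitPacket({2, false});
    enc.emitPacket({1, true});

    for (const PHeader& h : enc.headers())
        std::printf("P-Header covers %d packet(s)%s\n",
                    h.packetsCovered, h.closedByKill ? " (closed by kill)" : "");
    return 0;
}
```

    Running the example emits one P-Header covering a full group of four packets, then an early P-Header covering the two remaining packets because the last one contained a killed load/store.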

    MULTIPLE CYCLE SEARCH CONTENT ADDRESSABLE MEMORY

    Publication Number: US20170345500A1

    Publication Date: 2017-11-30

    Application Number: US15369823

    Application Date: 2016-12-05

    CPC classification number: G11C15/04 G11C7/1006 G11C15/00

    Abstract: In an aspect of the disclosure, a method and an apparatus are provided. The apparatus may be a content addressable memory that includes a plurality of memory sections, each configured to store data, and a comparator configured to compare the stored data in each of the memory sections with search input data. The comparison may be performed in a time-division-multiplexed fashion: the comparator may compare the stored data in each of the memory sections with the search input data in a corresponding one of a plurality of memory access cycles. The content addressable memory may also include a state machine configured to control, based on its state, when the comparator compares the stored data in each of the memory sections with the search input data.
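
    A minimal sketch of the time-division-multiplexed search may help. It assumes two memory sections, one comparison per memory access cycle, and a single-counter state machine that selects the active section; the class and method names (MultiCycleCam, searchCycle) are invented for illustration.

```cpp
// Minimal sketch of a time-division-multiplexed CAM search, assuming one memory
// section is compared per access cycle under control of a simple state machine.
// Section count, entry widths, and class names are illustrative assumptions.
#include <cstdint>
#include <cstdio>
#include <vector>

class MultiCycleCam {
public:
    MultiCycleCam(int sections, int entriesPerSection)
        : sections_(sections, std::vector<uint32_t>(entriesPerSection, 0)) {}

    void write(int section, int entry, uint32_t data) {
        sections_[section][entry] = data;
    }

    // One call models one memory access cycle: the state machine selects which
    // section the comparator examines, then advances to the next state.
    // Returns the matching entry index within the active section, or -1.
    int searchCycle(uint32_t key) {
        const std::vector<uint32_t>& active = sections_[state_];
        int match = -1;
        for (int i = 0; i < static_cast<int>(active.size()); ++i)
            if (active[i] == key) { match = i; break; }
        state_ = (state_ + 1) % static_cast<int>(sections_.size());
        return match;
    }

private:
    std::vector<std::vector<uint32_t>> sections_;
    int state_ = 0;  // state machine: which section the comparator sees this cycle
};

int main() {
    MultiCycleCam cam(/*sections=*/2, /*entriesPerSection=*/4);
    cam.write(1, 2, 0xBEEF);  // key stored in section 1, entry 2

    // A full search spans one cycle per section.
    for (int cycle = 0; cycle < 2; ++cycle) {
        int hit = cam.searchCycle(0xBEEF);
        std::printf("cycle %d: %s\n", cycle, hit >= 0 ? "match" : "no match");
    }
    return 0;
}
```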

    TECHNIQUES FOR INSTRUCTION PERTURBATION FOR IMPROVED DEVICE SECURITY

    Publication Number: US20220237283A1

    Publication Date: 2022-07-28

    Application Number: US17160769

    Application Date: 2021-01-28

    Abstract: Methods, systems, and devices for instruction perturbation for improved device security are described. A device may assign a set of executable instructions to an instruction packet based on a parameter associated with the packet, where each executable instruction in the set is independent of the other instructions in the set. The device may select an order for the set of instructions based on a slot instruction rule associated with the device, with each instruction corresponding to a respective slot associated with the device's memory. The device may modify the order of the instructions in the memory hierarchy after pre-decode based on the slot instruction rule, and process the instructions of the packet based on the modified order.
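
    The sketch below illustrates the idea of reordering independent instructions across packet slots. The claimed slot instruction rule is replaced here by a simple keyed rotation, which is purely an assumption, as are the names perturbPacket and slotKey, so this should be read as an illustration of slot-level perturbation rather than the patented method.

```cpp
// Minimal sketch of slot-level instruction perturbation: independent instructions
// in a packet are reordered according to a slot rule before execution. The
// "slot rule" here is a simple keyed rotation, an illustrative assumption.
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

struct Instruction {
    std::string mnemonic;  // e.g. "add r1, r2, r3" (independent of packet peers)
};

// Perturbation: rotate the independent instructions across slots by a
// device-specific key, modelling a post-pre-decode reordering in the memory
// hierarchy. Any permutation is legal because the instructions are independent.
std::vector<Instruction> perturbPacket(const std::vector<Instruction>& packet,
                                       uint32_t slotKey) {
    std::vector<Instruction> reordered(packet.size());
    for (size_t slot = 0; slot < packet.size(); ++slot)
        reordered[(slot + slotKey) % packet.size()] = packet[slot];
    return reordered;
}

int main() {
    std::vector<Instruction> packet = {
        {"add  r1, r2, r3"}, {"mul  r4, r5, r6"},
        {"xor  r7, r8, r9"}, {"load r10, [r11]"}};

    // Different keys yield different slot layouts for the same packet,
    // making the in-memory instruction order harder to predict.
    for (const Instruction& insn : perturbPacket(packet, /*slotKey=*/2))
        std::printf("%s\n", insn.mnemonic.c_str());
    return 0;
}
```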

    WAY STORAGE OF NEXT CACHE LINE
    Invention Application

    Publication Number: US20180081815A1

    Publication Date: 2018-03-22

    Application Number: US15273297

    Application Date: 2016-09-22

    Abstract: Systems and methods for accessing a cache include determining whether a current access of the cache will satisfy an expected relationship with the next access, where the cache is a set-associative cache comprising multiple ways. The way for the next access is stored in a next-way field associated with the current access. If the expected relationship will be satisfied, for example a sequential relationship, which holds for an instruction cache when the current access does not cause a change in control flow, the next way is retrieved from the next-way field associated with the current access. The cache is then accessed directly using the retrieved next way.
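
    The next-way mechanism can be sketched as follows. The cache geometry (4 ways, 64 sets, 64-byte lines), the nextWay field placement, and the class name NextWayICache are assumptions made for the example; the point illustrated is that a sequential access can reuse the way recorded by the previous access and skip the full per-way tag comparison.

```cpp
// Minimal sketch of next-way storage: each cache line carries a "next way"
// hint; on a sequential access (no change in control flow) the hinted way is
// read directly, skipping the full tag lookup. Sizes and names are assumptions.
#include <array>
#include <cstdint>
#include <cstdio>

constexpr int WAYS = 4;
constexpr int SETS = 64;

struct CacheLine {
    uint32_t tag     = 0;
    bool     valid   = false;
    int      nextWay = -1;   // way of the sequentially-next line, if known
};

class NextWayICache {
public:
    // Returns the way that holds 'addr'. 'sequential' is true when the previous
    // access did not change control flow, so the stored next-way hint applies.
    int lookup(uint32_t addr, bool sequential, int prevSet, int prevWay) {
        int set = (addr >> 6) % SETS;      // 64-byte lines assumed
        uint32_t tag = addr >> 12;

        if (sequential && prevWay >= 0) {
            int hinted = lines_[prevSet][prevWay].nextWay;   // direct way access
            if (hinted >= 0 && lines_[set][hinted].valid &&
                lines_[set][hinted].tag == tag)
                return hinted;
        }
        // Fall back to comparing tags in every way, then record the hint in the
        // previous line so the next sequential access can go straight to the way.
        for (int w = 0; w < WAYS; ++w) {
            if (lines_[set][w].valid && lines_[set][w].tag == tag) {
                if (prevWay >= 0) lines_[prevSet][prevWay].nextWay = w;
                return w;
            }
        }
        return -1;  // miss
    }

    std::array<std::array<CacheLine, WAYS>, SETS> lines_{};
};

int main() {
    NextWayICache cache;
    uint32_t a = 0x1000, b = 0x1040;  // two sequential 64-byte lines
    cache.lines_[(a >> 6) % SETS][1] = {a >> 12, true, -1};
    cache.lines_[(b >> 6) % SETS][3] = {b >> 12, true, -1};

    int wayA  = cache.lookup(a, false, 0, -1);                 // full tag lookup
    int wayB  = cache.lookup(b, true, (a >> 6) % SETS, wayA);  // trains the hint
    int wayB2 = cache.lookup(b, true, (a >> 6) % SETS, wayA);  // served by the hint
    std::printf("wayA=%d wayB=%d wayB2=%d\n", wayA, wayB, wayB2);
    return 0;
}
```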

    COPROCESSOR FOR OUT-OF-ORDER LOADS
    Invention Application (Granted)

    Publication Number: US20160092238A1

    Publication Date: 2016-03-31

    Application Number: US14499044

    Application Date: 2014-09-26

    Abstract: Systems and methods are described for implementing certain load instructions, such as vector load instructions, through cooperation of a main processor and a coprocessor. Load instructions identified by the main processor for offloading to the coprocessor are committed in the main processor without receiving the corresponding load data. Post-commit, the load instructions are processed in the coprocessor, so that the latencies incurred in fetching the load data are hidden from the main processor. By implementing an out-of-order load data buffer associated with an in-order instruction buffer, the coprocessor is also configured to avoid stalls due to the long latencies that may be involved in fetching the load data from levels of the memory hierarchy such as the L2, L3, and L4 caches, main memory, and so on.
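
    A small model of the post-commit path is sketched below under several assumptions: the tags, the buffer types, and the class name LoadCoprocessor are invented, and the memory hierarchy is reduced to a dataReturn() call that delivers data in arbitrary order. It illustrates the pairing of an in-order instruction buffer with an out-of-order load-data buffer, so that the oldest offloaded load retires only once its data has arrived while the main processor is never stalled.

```cpp
// Minimal sketch of the post-commit coprocessor path: load instructions sit in
// an in-order instruction buffer while their data returns out of order into a
// separate load-data buffer; an entry retires only once its data is present.
// Queue sizes, tags, and class names are illustrative assumptions.
#include <cstdint>
#include <cstdio>
#include <deque>
#include <unordered_map>

struct OffloadedLoad {
    int      tag;      // identifies the load committed by the main processor
    uint32_t address;  // where to fetch from the memory hierarchy (L2/L3/DRAM)
};

class LoadCoprocessor {
public:
    // Main processor commits the load without data and hands it off in order.
    void enqueue(const OffloadedLoad& ld) { inOrder_.push_back(ld); }

    // Memory hierarchy returns data in arbitrary order (long, variable latency).
    void dataReturn(int tag, uint32_t data) { dataBuffer_[tag] = data; }

    // Retire the oldest load only when its data has arrived; younger loads whose
    // data is already buffered wait, so program order is preserved without
    // stalling the main processor.
    bool retireOldest(uint32_t* out) {
        if (inOrder_.empty()) return false;
        auto it = dataBuffer_.find(inOrder_.front().tag);
        if (it == dataBuffer_.end()) return false;  // oldest data not back yet
        *out = it->second;
        dataBuffer_.erase(it);
        inOrder_.pop_front();
        return true;
    }

private:
    std::deque<OffloadedLoad>         inOrder_;     // in-order instruction buffer
    std::unordered_map<int, uint32_t> dataBuffer_;  // out-of-order load data buffer
};

int main() {
    LoadCoprocessor cop;
    cop.enqueue({0, 0x1000});
    cop.enqueue({1, 0x2000});

    cop.dataReturn(1, 0xB);          // younger load's data arrives first
    uint32_t v;
    std::printf("retire before oldest data: %s\n", cop.retireOldest(&v) ? "yes" : "no");
    cop.dataReturn(0, 0xA);          // now the oldest can retire, then the next
    while (cop.retireOldest(&v)) std::printf("retired data 0x%X\n", v);
    return 0;
}
```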

