Enhanced address space layout randomization

    Publication (Announcement) Number: US11030030B2

    Publication (Announcement) Date: 2021-06-08

    Application Number: US16259736

    Application Date: 2019-01-28

    Abstract: One embodiment provides an apparatus. The apparatus includes a linear address space, metadata logic and enhanced address space layout randomization (ASLR) logic. The linear address space includes a metadata data structure. The metadata logic is to generate a metadata value. The enhanced ASLR logic is to combine the metadata value and a linear address into an address pointer and to store the metadata value to the metadata data structure at a location pointed to by at least a portion of the linear address. The address pointer corresponds to an apparent address in an enhanced address space. A size of the enhanced address space is greater than a size of the linear address space.
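
    As a minimal sketch of the pointer encoding described above, the following C fragment assumes a 48-bit linear address, a 16-bit metadata value packed into the otherwise-unused upper pointer bits, and a page-granular metadata table; the names (encode_pointer, decode_pointer, metadata_table) and field widths are illustrative assumptions, not the patented design.

```c
#include <stddef.h>
#include <stdint.h>

#define LINEAR_ADDR_BITS 48
#define LINEAR_ADDR_MASK ((1ULL << LINEAR_ADDR_BITS) - 1)
#define METADATA_TABLE_ENTRIES (1u << 20)

/* The "metadata data structure" kept inside the linear address space. */
static uint16_t metadata_table[METADATA_TABLE_ENTRIES];

/* Combine a metadata value and a linear address into one address pointer,
 * recording the metadata at a slot selected by part of the linear address. */
static uint64_t encode_pointer(uint64_t linear_addr, uint16_t metadata)
{
    size_t slot = (size_t)((linear_addr >> 12) % METADATA_TABLE_ENTRIES);
    metadata_table[slot] = metadata;

    /* The extra metadata bits are what make the "apparent" enhanced
     * address space larger than the underlying linear address space. */
    return ((uint64_t)metadata << LINEAR_ADDR_BITS) | (linear_addr & LINEAR_ADDR_MASK);
}

/* Check the embedded metadata against the table before dereferencing. */
static int decode_pointer(uint64_t apparent, uint64_t *linear_addr_out)
{
    uint64_t linear   = apparent & LINEAR_ADDR_MASK;
    uint16_t metadata = (uint16_t)(apparent >> LINEAR_ADDR_BITS);
    size_t   slot     = (size_t)((linear >> 12) % METADATA_TABLE_ENTRIES);

    if (metadata_table[slot] != metadata)
        return -1;              /* mismatch suggests a forged or stale pointer */
    *linear_addr_out = linear;
    return 0;
}
```

    A pointer forged without the metadata recorded for its location fails the check in decode_pointer, which is the benefit of widening the apparent address space beyond the linear one.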

    Spatial and temporal merging of remote atomic operations

    Publication (Announcement) Number: US10572260B2

    Publication (Announcement) Date: 2020-02-25

    Application Number: US15858899

    Application Date: 2017-12-29

    Abstract: Disclosed embodiments relate to spatial and temporal merging of remote atomic operations (RAOs). In one example, a system includes an RAO instruction queue stored in a memory and having entries grouped by destination cache line, each entry to enqueue an RAO instruction including an opcode, a destination identifier, and source data, and optimization circuitry to receive an incoming RAO instruction and scan the RAO instruction queue to detect a matching enqueued RAO instruction identifying the same destination cache line as the incoming RAO instruction. Responsive to no matching enqueued RAO instruction being detected, the optimization circuitry enqueues the incoming RAO instruction; responsive to a matching enqueued RAO instruction being detected, it determines whether the incoming and matching RAO instructions have the same opcode and target non-overlapping cache line elements, and, if so, spatially combines them by enqueuing both RAO instructions in the same group of cache line queue entries at different offsets.
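
    The spatial-combining decision above can be sketched in C. This is a minimal model assuming 64-byte cache lines split into eight 8-byte elements, a small fixed-size queue, and one shared opcode per group; all names and sizes are illustrative assumptions, and only the spatial-merging path from the abstract is modeled.

```c
#include <stdint.h>
#include <string.h>

#define CACHE_LINE_SLOTS 8
#define QUEUE_GROUPS     16

typedef struct {
    int      valid;
    uint64_t src_data;
} rao_elem_t;

typedef struct {
    int        in_use;
    uint8_t    opcode;          /* spatially combined entries share an opcode */
    uint64_t   dest_line;       /* destination cache-line address */
    rao_elem_t entries[CACHE_LINE_SLOTS];
} rao_group_t;

static rao_group_t rao_queue[QUEUE_GROUPS];

/* Enqueue an incoming RAO instruction, spatially combining it with an
 * enqueued group when it targets the same line with the same opcode at
 * a non-overlapping element offset. */
static int rao_enqueue(uint8_t opcode, uint64_t addr, uint64_t data)
{
    uint64_t line = addr & ~63ULL;
    unsigned slot = (unsigned)((addr >> 3) & (CACHE_LINE_SLOTS - 1));

    /* Scan for a matching enqueued RAO targeting the same cache line. */
    for (int g = 0; g < QUEUE_GROUPS; g++) {
        rao_group_t *grp = &rao_queue[g];
        if (!grp->in_use || grp->dest_line != line)
            continue;
        if (grp->opcode == opcode && !grp->entries[slot].valid) {
            /* Spatial combine: same opcode, non-overlapping element. */
            grp->entries[slot] = (rao_elem_t){ .valid = 1, .src_data = data };
            return 0;
        }
        return -1;              /* conflict: caller falls back to a normal issue */
    }

    /* No matching destination line: enqueue in a fresh group. */
    for (int g = 0; g < QUEUE_GROUPS; g++) {
        rao_group_t *grp = &rao_queue[g];
        if (grp->in_use)
            continue;
        memset(grp, 0, sizeof(*grp));
        grp->in_use = 1;
        grp->opcode = opcode;
        grp->dest_line = line;
        grp->entries[slot] = (rao_elem_t){ .valid = 1, .src_data = data };
        return 0;
    }
    return -1;                  /* queue full */
}
```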

    Increasing invalid to modified protocol occurrences in a computing system

    Publication (Announcement) Number: US10303605B2

    Publication (Announcement) Date: 2019-05-28

    Application Number: US15214895

    Application Date: 2016-07-20

    Abstract: An example system on a chip (SoC) includes a processor, a cache, and a main memory. The SoC can include a first memory to store data in a memory line, wherein the memory line is set to an invalid state. The SoC can include a processor coupled to the first memory. The processor can determine that a data size of a first data set received from an application is within a data size range. The processor can determine that an aggregate data size of the first data set and a second data set received from the application is at least the same data size as the data size of the memory line. The processor can perform an invalid-to-modified (I2M) operation to change the memory line from the invalid state to a modified state. The processor can write the first data set and the second data set to the memory line.
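
    A rough C sketch of the aggregation decision follows, assuming a 64-byte memory line; issue_i2m_full_line_write() is a hypothetical stand-in, since the real I2M transition is a cache coherence action rather than a software call.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define LINE_SIZE 64

struct write_buffer {
    uint8_t data[LINE_SIZE];
    size_t  filled;             /* bytes accumulated from the application */
};

/* Stand-in for claiming the line in the modified state (invalid-to-modified)
 * and writing the whole line without first reading its old contents. */
static void issue_i2m_full_line_write(uint64_t line_addr, const uint8_t *line)
{
    printf("I2M full-line write to 0x%llx\n", (unsigned long long)line_addr);
    (void)line;
}

/* Returns 1 once the buffered data sets cover a full line and are flushed,
 * 0 if still accumulating, -1 if the write falls outside the size range. */
static int buffer_small_write(struct write_buffer *wb, uint64_t line_addr,
                              const void *src, size_t len)
{
    if (len == 0 || len > LINE_SIZE - wb->filled)
        return -1;              /* out of range: use the normal write path */

    memcpy(wb->data + wb->filled, src, len);
    wb->filled += len;

    if (wb->filled == LINE_SIZE) {
        /* Aggregate size now equals the memory line, so the line can go
         * straight from invalid to modified and be written in one shot. */
        issue_i2m_full_line_write(line_addr, wb->data);
        wb->filled = 0;
        return 1;
    }
    return 0;
}
```

    Because the aggregated data sets cover every byte of the line, the line can be claimed directly in the modified state with no read of its stale contents, which is what increases the number of I2M occurrences relative to ordinary read-for-ownership writes.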

    Enhanced address space layout randomization

    Publication (Announcement) Number: US10191791B2

    Publication (Announcement) Date: 2019-01-29

    Application Number: US15201443

    Application Date: 2016-07-02

    Abstract: One embodiment provides an apparatus. The apparatus includes a linear address space, metadata logic and enhanced address space layout randomization (ASLR) logic. The linear address space includes a metadata data structure. The metadata logic is to generate a metadata value. The enhanced ASLR logic is to combine the metadata value and a linear address into an address pointer and to store the metadata value to the metadata data structure at a location pointed to by at least a portion of the linear address. The address pointer corresponds to an apparent address in an enhanced address space. A size of the enhanced address space is greater than a size of the linear address space.

    Pipelined Prefetcher for Parallel Advancement of Multiple Data Streams

    Publication (Announcement) Number: US20170286304A1

    Publication (Announcement) Date: 2017-10-05

    Application Number: US15087917

    Application Date: 2016-03-31

    CPC classification number: G06F12/0862 G06F12/0897 G06F2212/1024

    Abstract: A processor includes a front end to decode instructions, an execution unit to execute instructions, multiple caches at different cache hierarchy levels, a pipelined prefetcher, and a retirement unit to retire instructions. The prefetcher includes circuitry to receive a demand request for data at a first address within a first line in a memory and, in response, to provide the data at the first address to the execution unit for consumption, to prefetch a second line at a first offset distance from the first line into a mid-level cache, and to prefetch a third line at a second offset distance from the second line into a last-level cache. The prefetcher includes circuitry to prefetch, in response to another demand request, the third line into the mid-level cache, and a fourth line into the last-level cache. The prefetcher enforces minimum or maximum offset distances between prefetched data streams.
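
    A simplified C model of the two-stage advance described above is shown below; the offsets, clamp bounds, and prefetch_into() stub are assumptions for illustration and do not come from the application.

```c
#include <stdint.h>
#include <stdio.h>

#define LINE_SHIFT 6            /* 64-byte cache lines */

enum cache_level { MID_LEVEL_CACHE, LAST_LEVEL_CACHE };

/* Stand-in for issuing a prefetch of one line into the given cache level. */
static void prefetch_into(enum cache_level level, uint64_t line_addr)
{
    printf("prefetch 0x%llx into %s\n", (unsigned long long)line_addr,
           level == MID_LEVEL_CACHE ? "mid-level cache" : "last-level cache");
}

struct stream_prefetcher {
    int64_t mlc_offset;         /* first offset distance, in lines */
    int64_t llc_offset;         /* second offset distance beyond the MLC line */
};

/* Keep offsets within enforced minimum/maximum distances so concurrently
 * advancing streams neither collide nor run too far ahead. */
static int64_t clamp_offset(int64_t off, int64_t min_off, int64_t max_off)
{
    if (off < min_off) return min_off;
    if (off > max_off) return max_off;
    return off;
}

/* On each demand request: serve the first line, push the second line
 * (d1 ahead) toward the mid-level cache, and push the third line
 * (d2 beyond that) toward the last-level cache. A later demand advances
 * both stages, so a line previously prefetched into the last-level cache
 * falls within the mid-level window. */
static void on_demand_request(struct stream_prefetcher *pf, uint64_t demand_addr)
{
    uint64_t demand_line = demand_addr >> LINE_SHIFT;
    int64_t d1 = clamp_offset(pf->mlc_offset, 1, 16);
    int64_t d2 = clamp_offset(pf->llc_offset, 1, 32);

    prefetch_into(MID_LEVEL_CACHE,  (demand_line + (uint64_t)d1) << LINE_SHIFT);
    prefetch_into(LAST_LEVEL_CACHE, (demand_line + (uint64_t)d1 + (uint64_t)d2) << LINE_SHIFT);
}
```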
