DIFFERENTIAL PIPELINE DELAYS IN A COPROCESSOR

    Publication Number: US20240045694A1

    Publication Date: 2024-02-08

    Application Number: US18211007

    Application Date: 2023-06-16

    CPC classification number: G06F9/3867 G06F9/3836

    Abstract: A coprocessor such as a floating-point unit includes a pipeline that is partitioned into a first portion and a second portion. A controller is configured to provide control signals to the first portion and the second portion of the pipeline. A first physical distance traversed by control signals propagating from the controller to the first portion of the pipeline is shorter than a second physical distance traversed by control signals propagating from the controller to the second portion of the pipeline. A scheduler is configured to cause a physical register file to provide a first subset of bits of an instruction to the first portion at a first time. The physical register file provides a second subset of the bits of the instruction to the second portion at a second time subsequent to the first time.
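
    A minimal sketch in C of the staggered operand delivery the abstract describes; the cycle delays, the 64/64-bit split, and all names are illustrative assumptions, not the patented implementation.

    #include <stdint.h>
    #include <stdio.h>

    #define CTRL_DELAY_NEAR 1  /* cycles for control to reach the near (first) portion (assumed) */
    #define CTRL_DELAY_FAR  2  /* cycles for control to reach the far (second) portion (assumed) */

    typedef struct {
        uint64_t low_bits;   /* first subset of bits, handled by the near portion */
        uint64_t high_bits;  /* second subset of bits, handled by the far portion */
    } operand_t;

    /* Read the two halves out of the register file at staggered times so each
     * half arrives together with its earlier- or later-arriving control signals. */
    static void issue(const operand_t *op, int dispatch_cycle)
    {
        int first_time  = dispatch_cycle + CTRL_DELAY_NEAR;   /* first subset, earlier */
        int second_time = dispatch_cycle + CTRL_DELAY_FAR;    /* second subset, later  */

        printf("cycle %d: low  half 0x%016llx -> near portion\n",
               first_time,  (unsigned long long)op->low_bits);
        printf("cycle %d: high half 0x%016llx -> far  portion\n",
               second_time, (unsigned long long)op->high_bits);
    }

    int main(void)
    {
        operand_t op = { .low_bits  = 0x1122334455667788ULL,
                         .high_bits = 0x99AABBCCDDEEFF00ULL };
        issue(&op, 0);
        return 0;
    }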

    SELECTIVE SPECULATIVE PREFETCH REQUESTS FOR A LAST-LEVEL CACHE

    Publication Number: US20230205700A1

    Publication Date: 2023-06-29

    Application Number: US17564141

    Application Date: 2021-12-28

    Abstract: In response to generating one or more speculative prefetch requests for a last-level cache, a processor determines prefetch analytics for the generated speculative prefetch requests and compares the determined prefetch analytics of the speculative prefetch requests to selection thresholds. In response to a speculative prefetch request meeting or exceeding a selection threshold, the processor selects the speculative prefetch request for issuance to a memory-side cache controller. When one or more system conditions meet one or more condition thresholds, the processor issues the selected speculative prefetch request to the memory-side cache controller. The memory-side cache controller then retrieves the data indicated in the selected speculative prefetch request from a memory and stores it in a memory-side cache in the data fabric coupled to the last-level cache.
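
    A minimal sketch in C of the selection logic described above; the confidence analytic, the threshold values, and the bus-utilization condition are hypothetical stand-ins for whatever prefetch analytics and system conditions a real implementation would use.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint64_t addr;        /* line the speculative prefetch would fetch          */
        double   confidence;  /* prefetch analytic: estimated usefulness (assumed)  */
    } prefetch_req_t;

    #define SELECT_THRESHOLD     0.75  /* assumed selection threshold                */
    #define MAX_BUS_UTILIZATION  0.60  /* assumed system-condition threshold         */

    /* Decide whether a speculative prefetch is selected and then issued to the
     * memory-side cache controller. */
    static bool issue_to_memory_side_cache(const prefetch_req_t *req,
                                           double bus_utilization)
    {
        if (req->confidence < SELECT_THRESHOLD)
            return false;                     /* analytic below threshold: not selected */
        if (bus_utilization > MAX_BUS_UTILIZATION)
            return false;                     /* system conditions not met: hold it     */
        printf("issue prefetch for 0x%llx\n", (unsigned long long)req->addr);
        return true;                          /* controller fills the memory-side cache */
    }

    int main(void)
    {
        prefetch_req_t req = { .addr = 0x4000, .confidence = 0.9 };
        issue_to_memory_side_cache(&req, 0.4);
        return 0;
    }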

    SCHEDULING MEMORY BANDWIDTH BASED ON QUALITY OF SERVICE FLOOR

    Publication Number: US20190190805A1

    Publication Date: 2019-06-20

    Application Number: US15849266

    Application Date: 2017-12-20

    Abstract: A system includes a multi-core processor that includes a scheduler. The multi-core processor communicates with a system memory and an operating system and executes a first process and a second process. The scheduler limits the memory bandwidth used by the second process until, in a control cycle, either the first process's current use meets its setpoint (when that setpoint is at or below a latency-sensitive (LS) floor) or the first process's current use exceeds the LS floor (when the setpoint exceeds the LS floor).
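
    The per-control-cycle decision can be stated compactly. The following C sketch uses hypothetical GB/s values and only illustrates the two release conditions; it is not the patented scheduler.

    #include <stdbool.h>
    #include <stdio.h>

    /* Returns true while the scheduler should keep limiting the second
     * (non-latency-sensitive) process's memory bandwidth. */
    static bool keep_throttling(double first_setpoint,   /* GB/s, requested floor for process 1 */
                                double first_current,    /* GB/s, process 1 use this cycle      */
                                double ls_floor)         /* GB/s, latency-sensitive floor       */
    {
        if (first_setpoint <= ls_floor)
            return first_current < first_setpoint;   /* release once the setpoint is met      */
        else
            return first_current <= ls_floor;        /* release once use exceeds the LS floor */
    }

    int main(void)
    {
        printf("%d\n", keep_throttling(4.0, 3.0, 5.0)); /* setpoint below floor, unmet: 1    */
        printf("%d\n", keep_throttling(6.0, 5.5, 5.0)); /* setpoint above floor, exceeded: 0 */
        return 0;
    }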

    DIFFERENTIAL PIPELINE DELAYS IN A COPROCESSOR

    Publication Number: US20190179643A1

    Publication Date: 2019-06-13

    Application Number: US15837974

    Application Date: 2017-12-11

    Abstract: A coprocessor such as a floating-point unit includes a pipeline that is partitioned into a first portion and a second portion. A controller is configured to provide control signals to the first portion and the second portion of the pipeline. A first physical distance traversed by control signals propagating from the controller to the first portion of the pipeline is shorter than a second physical distance traversed by control signals propagating from the controller to the second portion of the pipeline. A scheduler is configured to cause a physical register file to provide a first subset of bits of an instruction to the first portion at a first time. The physical register file provides a second subset of the bits of the instruction to the second portion at a second time subsequent to the first time.

    SYSTEM AND METHOD FOR PAGE TABLE CACHING MEMORY

    Publication Number: US20210097002A1

    Publication Date: 2021-04-01

    Application Number: US16586183

    Application Date: 2019-09-27

    Abstract: A processing system includes a processor, a memory, and an operating system that are used to allocate a page table caching memory object (PTCM) for a user of the processing system. An allocation of the PTCM is requested from a PTCM allocation system. In order to allocate the PTCM, a plurality of physical memory pages from a memory are allocated to store a PTCM page table that is associated with the PTCM. A lockable region of a cache is designated to hold a copy of the PTCM page table, after which the lockable region of the cache is subsequently locked. The PTCM page table is populated with page table entries associated with the PTCM and copied to the locked region of the cache.
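
    A minimal C sketch of the allocation flow described above; every function, type, and size here is a hypothetical stand-in (no real OS or cache-controller API is implied).

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PTCM_PAGES 4          /* assumed number of physical pages for the PTCM page table */

    typedef struct { uint64_t pte[512]; } page_table_page_t;

    /* Stubs standing in for OS and cache-controller services (assumed). */
    static page_table_page_t table_pages[PTCM_PAGES];
    static page_table_page_t *alloc_physical_pages(size_t n) { (void)n; return table_pages; }
    static void lock_cache_region(void *p, size_t bytes) { printf("locked %zu bytes at %p\n", bytes, p); }
    static void copy_to_locked_region(void *dst, void *src, size_t bytes) { (void)dst; (void)src; (void)bytes; }

    int main(void)
    {
        /* 1. Allocate physical pages to hold the PTCM page table.             */
        page_table_page_t *pt = alloc_physical_pages(PTCM_PAGES);

        /* 2. Designate and lock a cache region to hold a copy of that table.  */
        static page_table_page_t cache_copy[PTCM_PAGES];
        lock_cache_region(cache_copy, sizeof cache_copy);

        /* 3. Populate the page table with entries associated with the PTCM... */
        for (size_t i = 0; i < PTCM_PAGES; ++i)
            pt[i].pte[0] = 0x1000 * (i + 1) | 0x1;   /* dummy frame plus present bit */

        /* 4. ...and copy it into the locked region of the cache.              */
        copy_to_locked_region(cache_copy, pt, sizeof cache_copy);
        return 0;
    }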

    POWER CONSERVATION IN A COPROCESSOR
    Invention Application

    Publication Number: US20190179396A1

    Publication Date: 2019-06-13

    Application Number: US15837918

    Application Date: 2017-12-11

    Abstract: A pipeline includes a first portion configured to process a first subset of bits of an instruction and a second portion configured to process a second subset of the bits of the instruction. A first clock mesh is configured to provide a first clock signal to the first portion of the pipeline, and a second clock mesh is configured to provide a second clock signal to the second portion. The first and second clock meshes selectively provide the first and second clock signals based on characteristics of in-flight instructions that have been dispatched to the pipeline but not yet retired. In some cases, a physical register file is configured to store values of bits representative of instructions, and only the first subset is stored in the physical register file when a zero-high bit indicates that the second subset is equal to zero.
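
    A minimal C sketch of the clock-gating decision implied by the abstract; the per-instruction zero-high flag, the window size, and the field names are illustrative assumptions, not the patented design.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint64_t low_bits;    /* first subset of operand bits                      */
        uint64_t high_bits;   /* second subset; unused when zero_high is set       */
        bool     zero_high;   /* set when the second subset is all zeros           */
    } inflight_t;

    /* The second clock mesh can be gated off when no in-flight instruction
     * needs the second (upper) portion of the pipeline. */
    static bool second_mesh_enabled(const inflight_t *insts, int n)
    {
        for (int i = 0; i < n; ++i)
            if (!insts[i].zero_high)
                return true;     /* some instruction still uses the second portion */
        return false;            /* all upper halves are zero: gate the clock      */
    }

    int main(void)
    {
        inflight_t window[2] = {
            { .low_bits = 0x1234, .high_bits = 0,      .zero_high = true  },
            { .low_bits = 0xBEEF, .high_bits = 0xCAFE, .zero_high = false },
        };
        printf("second mesh on: %d\n", second_mesh_enabled(window, 2));
        return 0;
    }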
