Memory Fetch Granule
    1.
    Invention Patent Application

    Publication No.: US20230009674A1

    Publication Date: 2023-01-12

    Application No.: US17932983

    Filing Date: 2022-09-16

    Applicant: Apple Inc.

    Abstract: Systems, apparatuses, and methods for implementing a memory fetch granule for real-time agents are described. A computing system includes a plurality of real-time agents coupled to memory via an interconnect fabric and a memory controller. The efficiency of the memory controller is determined by the number of bank groups in the memory devices coupled to the memory controller. A memory fetch granule is defined for the memory controller based on the amount of data that can be accessed in parallel on the memory device in back-to-back access cycles. Each real-time agent accumulates memory requests for sequential physical addresses until the amount of data referenced by the requests reaches the size of the memory fetch granule. Once the memory fetch granule is reached, the real-time agent sends the requests to the memory controller via the fabric. This helps to ensure that the requests will arrive at the memory controller near enough to each other to get grouped together.
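
    The accumulation behavior described in the abstract can be illustrated with a short sketch. The code below is a minimal illustration under stated assumptions, not the patented implementation: the names (rt_agent, issue_request, send_to_fabric) and the 256-byte granule / 64-byte request sizes are chosen for the example only. A real-time agent buffers requests to sequential physical addresses and forwards them as a group once a full memory fetch granule has accumulated, or earlier if the sequential run is broken.

    /* Minimal sketch of the request-accumulation behavior described in the
     * abstract. All names and sizes are illustrative assumptions, not taken
     * from the patent. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MEM_FETCH_GRANULE 256u  /* assumed granule size in bytes */
    #define REQ_SIZE          64u   /* assumed size of one memory request */
    #define MAX_PENDING       (MEM_FETCH_GRANULE / REQ_SIZE)

    struct rt_agent {
        uint64_t pending_addr[MAX_PENDING]; /* accumulated sequential addresses */
        size_t   pending_count;
        size_t   pending_bytes;
        uint64_t next_addr;                 /* next expected sequential address */
    };

    /* Placeholder for the fabric transfer; a real agent would issue the
     * grouped requests to the memory controller here. */
    static void send_to_fabric(const struct rt_agent *a)
    {
        printf("flushing %zu requests (%zu bytes) to memory controller\n",
               a->pending_count, a->pending_bytes);
    }

    static void flush(struct rt_agent *a)
    {
        if (a->pending_count == 0)
            return;
        send_to_fabric(a);
        a->pending_count = 0;
        a->pending_bytes = 0;
    }

    /* Accumulate a request; flush early if the address breaks the sequential
     * run, and flush normally once a full fetch granule has accumulated. */
    static void issue_request(struct rt_agent *a, uint64_t phys_addr)
    {
        if (a->pending_count > 0 && phys_addr != a->next_addr)
            flush(a);                       /* non-sequential: send what we have */

        a->pending_addr[a->pending_count++] = phys_addr;
        a->pending_bytes += REQ_SIZE;
        a->next_addr = phys_addr + REQ_SIZE;

        if (a->pending_bytes >= MEM_FETCH_GRANULE)
            flush(a);                       /* full granule: send as one group */
    }

    int main(void)
    {
        struct rt_agent agent = {0};
        for (uint64_t addr = 0x1000; addr < 0x1000 + 8 * REQ_SIZE; addr += REQ_SIZE)
            issue_request(&agent, addr);
        flush(&agent);                      /* drain anything left over */
        return 0;
    }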

    Memory Fetch Granule
    2.
    Invention Patent Application

    Publication No.: US20220334984A1

    Publication Date: 2022-10-20

    Application No.: US17230490

    Filing Date: 2021-04-14

    Applicant: Apple Inc.

    Abstract: Systems, apparatuses, and methods for implementing a memory fetch granule for real-time agents are described. A computing system includes a plurality of real-time agents coupled to memory via an interconnect fabric and a memory controller. The efficiency of the memory controller is determined by the number of bank groups in the memory devices coupled to the memory controller. A memory fetch granule is defined for the memory controller based on the amount of data that can be accessed in parallel on the memory device in back-to-back access cycles. Each real-time agent accumulates memory requests for sequential physical addresses until the amount of data referenced by the requests reaches the size of the memory fetch granule. Once the memory fetch granule is reached, the real-time agent sends the requests to the memory controller via the fabric. This helps to ensure that the requests will arrive at the memory controller near enough to each other to get grouped together.

    PARALLEL SCHEDULING OF WRITE COMMANDS TO MULTIPLE MEMORY DEVICES

    Publication No.: US20170255396A1

    Publication Date: 2017-09-07

    Application No.: US15057145

    Filing Date: 2016-03-01

    Applicant: Apple Inc.

    Abstract: A controller includes an interface and a processor. The interface is configured to communicate with multiple memory devices over a link. The processor is configured to select at least first and second memory devices for writing, and to write at least first and second data units in sequence to the first memory device over the link, while avoiding writing to any of the other memory devices until transferal of the at least first and second data units over the link has been completed, to write at least one data unit to the second memory device after transferring the at least first and second data units to the first memory device, and, in response to verifying that the first memory device is ready to receive subsequent data, to write to the first memory device at least a third data unit.
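
    A minimal sketch of the scheduling policy described in the abstract is given below. The device API (dev_write, dev_is_ready), the two-device setup, and the programming latency are assumptions made for illustration; the patent does not specify them. The controller sends at least two data units back-to-back to the first device without giving the link to any other device, uses the link for the second device while the first programs internally, and returns to the first device with a third unit once it reports ready.

    /* Minimal sketch of the write-interleaving policy described in the
     * abstract. All names and timing values are illustrative assumptions. */
    #include <stdbool.h>
    #include <stdio.h>

    struct mem_dev {
        int id;
        int busy_cycles;    /* cycles left until internal programming finishes */
    };

    /* Transfer one data unit over the shared link to a device. */
    static void dev_write(struct mem_dev *d, int data_unit)
    {
        printf("link -> device %d: data unit %d\n", d->id, data_unit);
        d->busy_cycles = 3;  /* assumed internal programming time */
    }

    static bool dev_is_ready(const struct mem_dev *d)
    {
        return d->busy_cycles == 0;
    }

    /* Advance time by one cycle for all devices. */
    static void tick(struct mem_dev *devs, int n)
    {
        for (int i = 0; i < n; i++)
            if (devs[i].busy_cycles > 0)
                devs[i].busy_cycles--;
    }

    int main(void)
    {
        struct mem_dev devs[2] = { { .id = 0 }, { .id = 1 } };
        int unit = 0;

        /* First and second data units go to device 0 back-to-back; the link
         * is not given to any other device during this transfer. */
        dev_write(&devs[0], unit++);
        dev_write(&devs[0], unit++);

        /* While device 0 programs internally, the link serves device 1. */
        dev_write(&devs[1], unit++);

        /* Wait until device 0 is ready again, then send it a third unit. */
        while (!dev_is_ready(&devs[0]))
            tick(devs, 2);
        dev_write(&devs[0], unit++);

        return 0;
    }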

    Memory fetch granule
    5.
    Granted Invention Patent

    Publication No.: US11467988B1

    Publication Date: 2022-10-11

    Application No.: US17230490

    Filing Date: 2021-04-14

    Applicant: Apple Inc.

    Abstract: Systems, apparatuses, and methods for implementing a memory fetch granule for real-time agents are described. A computing system includes a plurality of real-time agents coupled to memory via an interconnect fabric and a memory controller. The efficiency of the memory controller is determined by the number of bank groups in the memory devices coupled to the memory controller. A memory fetch granule is defined for the memory controller based on the amount of data that can be accessed in parallel on the memory device in back-to-back access cycles. Each real-time agent accumulates memory requests for sequential physical addresses until the amount of data referenced by the requests reaches the size of the memory fetch granule. Once the memory fetch granule is reached, the real-time agent sends the requests to the memory controller via the fabric. This helps to ensure that the requests will arrive at the memory controller near enough to each other to get grouped together.

    Parallel scheduling of write commands to multiple memory devices

    Publication No.: US09952779B2

    Publication Date: 2018-04-24

    Application No.: US15057145

    Filing Date: 2016-03-01

    Applicant: Apple Inc.

    Abstract: A controller includes an interface and a processor. The interface is configured to communicate with multiple memory devices over a link. The processor is configured to select at least first and second memory devices for writing, and to write at least first and second data units in sequence to the first memory device over the link, while avoiding writing to any of the other memory devices until transferal of the at least first and second data units over the link has been completed, to write at least one data unit to the second memory device after transferring the at least first and second data units to the first memory device, and, in response to verifying that the first memory device is ready to receive subsequent data, to write to the first memory device at least a third data unit.
