-
Publication No.: US12066969B2
Publication Date: 2024-08-20
Application No.: US17589633
Filing Date: 2022-01-31
Applicant: XILINX, INC.
Inventor: Krishnan Srinivasan , Sagheer Ahmad , Ygal Arbel , Millind Mittal
CPC classification number: G06F13/42 , G06F13/382 , G06F13/4063
Abstract: Embodiments herein describe using an adaptive chip-to-chip (C2C) interface to interconnect two chips, wherein the adaptive C2C interface includes circuitry for performing multiple different C2C protocols to communicate with the other chip. One or both of the chips in the C2C connection can include the adaptive C2C interface. During boot time, the adaptive C2C interface is configured to perform one of the different C2C protocols. During runtime, the chip then uses the selected C2C protocol to communicate with the other chip in the C2C connection.
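The boot-time protocol selection described above can be illustrated with a minimal sketch. The protocol names and the layout of the configuration register are illustrative assumptions, not details from the patent:

```python
# Hypothetical model of an adaptive C2C interface: circuitry for several
# protocols is present, exactly one is activated at boot, and all runtime
# traffic uses the protocol fixed at boot.

class AdaptiveC2CInterface:
    # Assumed protocol-select encoding (low two bits of a config register).
    SUPPORTED_PROTOCOLS = {0b00: "aurora", 0b01: "pcie", 0b10: "cxl"}

    def __init__(self):
        self._active = None  # chosen at boot, fixed during runtime

    def boot_configure(self, config_register: int) -> str:
        # Decode the protocol-select field and latch the chosen protocol.
        select = config_register & 0b11
        if select not in self.SUPPORTED_PROTOCOLS:
            raise ValueError(f"unsupported protocol select {select:#04b}")
        self._active = self.SUPPORTED_PROTOCOLS[select]
        return self._active

    def transmit(self, payload: bytes) -> str:
        # Runtime traffic always goes over the protocol selected at boot.
        if self._active is None:
            raise RuntimeError("interface not configured at boot")
        return f"{self._active}:{len(payload)}B"

c2c = AdaptiveC2CInterface()
c2c.boot_configure(0b01)        # boot time: select one protocol
print(c2c.transmit(b"hello"))   # runtime: use the selected protocol
```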
-
Publication No.: US11983117B2
Publication Date: 2024-05-14
Application No.: US17826074
Filing Date: 2022-05-26
Applicant: XILINX, INC.
Inventor: Millind Mittal , Jaideep Dastidar
IPC: G06F12/0891 , G06F3/06 , G06F9/50 , G06F12/0815
CPC classification number: G06F12/0891 , G06F3/0607 , G06F3/0652 , G06F3/0685 , G06F9/5016 , G06F12/0815
Abstract: The embodiments herein describe a multi-tenant cache that implements fine-grained allocation of the entries within the cache. Each entry in the cache can be allocated to a particular tenant (i.e., fine-grained allocation) rather than having to assign an entire cache way to a particular tenant. If a tenant does not currently need its entries (which can be tracked using counters), those entries can be invalidated (i.e., deallocated) and assigned to another tenant. Fine-grained allocation thus provides a flexible allocation of entries in a hardware cache that permits an administrator to reserve any number of entries for a particular tenant while also permitting other tenants to use that capacity when the reserved entries are not currently needed.
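The reserve-then-borrow behavior can be sketched as follows. The reclaim policy (a tenant still under its reservation may invalidate an entry borrowed by a tenant that has exceeded its own reservation) is one plausible reading of the abstract, not the patent's exact mechanism:

```python
class MultiTenantCache:
    """Fine-grained allocation: each entry is owned by one tenant, and
    per-tenant counters track usage. Tenants may borrow unused capacity;
    a borrowed entry can be invalidated and reassigned when the reserving
    tenant needs it back."""

    def __init__(self, num_entries, reservations):
        self.num_entries = num_entries
        self.reservations = reservations            # tenant -> reserved count
        self.in_use = {t: 0 for t in reservations}  # per-tenant counters
        self.entries = {}                           # key -> owning tenant

    def insert(self, tenant, key):
        if key in self.entries:
            return True
        if len(self.entries) < self.num_entries:
            self._claim(tenant, key)
            return True
        # Cache full: a tenant still under its reservation may reclaim an
        # entry from a tenant that has exceeded its own reservation.
        if self.in_use[tenant] < self.reservations.get(tenant, 0):
            victim = next((k for k, t in self.entries.items()
                           if self.in_use[t] > self.reservations.get(t, 0)),
                          None)
            if victim is not None:
                self.invalidate(victim)
                self._claim(tenant, key)
                return True
        return False

    def _claim(self, tenant, key):
        self.entries[key] = tenant
        self.in_use[tenant] += 1

    def invalidate(self, key):
        owner = self.entries.pop(key, None)
        if owner is not None:
            self.in_use[owner] -= 1
```

For example, with two entries reserved one each to tenants A and B, B can temporarily occupy both; when A later inserts, one of B's borrowed entries is invalidated and reassigned to A.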
-
Publication No.: US11693805B1
Publication Date: 2023-07-04
Application No.: US17373620
Filing Date: 2021-07-12
Applicant: XILINX, INC.
Inventor: Jaideep Dastidar , Millind Mittal
CPC classification number: G06F13/4022 , G06F9/30043 , G06F13/1663 , G06F13/1668 , G06F2209/5011 , G06F2213/0038
Abstract: An adaptive memory expansion scheme is proposed, where one or more memory expansion capable Hosts or Accelerators can have their memory mapped to one or more memory expansion devices. The embodiments below describe discovery, configuration, and mapping schemes that allow independent SCM implementations and CPU-Host implementations to match their memory expansion capabilities. As a result, a memory expansion host (e.g., a memory controller in a CPU or an Accelerator) can declare multiple logical memory expansion pools, each with a unique capacity. These logical memory pools can be matched to physical memory in the SCM cards using windows in a global address map. These windows represent shared memory for the Home Agents (HAs) (e.g., the Host) and the Slave Agents (SAs) (e.g., the memory expansion device).
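The pool-to-window matching can be sketched as a small allocator. The window alignment, base address, and the largest-pool-first order are illustrative assumptions, not details taken from the patent:

```python
# Hypothetical sketch: match a host's logical memory-expansion pools to
# SCM device capacity via windows carved out of a global address map.

WINDOW_ALIGN = 1 << 30   # assume 1 GiB-aligned windows

def map_pools(pools, devices, base=0x100000000):
    """pools: {pool_name: capacity_bytes}; devices: {device: capacity_bytes}.
    Returns {pool: (device, window_base)} for each pool a device can back."""
    address_map = {}
    next_base = base
    remaining = dict(devices)     # physical capacity left per device
    # Place larger pools first so big requests are not starved.
    for pool, need in sorted(pools.items(), key=lambda p: -p[1]):
        # Pick any device with enough remaining physical capacity.
        dev = next((d for d, cap in remaining.items() if cap >= need), None)
        if dev is None:
            continue              # pool left unmapped
        remaining[dev] -= need
        address_map[pool] = (dev, next_base)
        # Advance by the pool size, rounded up to the window alignment.
        next_base += -(-need // WINDOW_ALIGN) * WINDOW_ALIGN
    return address_map
```

For example, two pools of 2 GiB and 1 GiB both fit in a single 4 GiB SCM card and receive adjacent windows in the global address map.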
-
Publication No.: US11586369B2
Publication Date: 2023-02-21
Application No.: US16425841
Filing Date: 2019-05-29
Applicant: Xilinx, Inc.
Inventor: Millind Mittal , Jaideep Dastidar
Abstract: Examples herein describe an accelerator device that shares the same coherent domain as hardware elements in a host computing device. The embodiments herein describe a mix of hardware and software coherency which reduces the overhead of managing data when large chunks of data are moved from the host into the accelerator device. In one embodiment, an accelerator application executing on the host identifies a data set it wishes to transfer to the accelerator device to be processed. The accelerator application transfers ownership from a home agent in the host to the accelerator device. A slave agent can then take ownership of the data. As a result, any memory operation requests received from a requesting agent in the accelerator device can gain access to the data set in local memory via the slave agent without the slave agent obtaining permission from the home agent in the host.
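The ownership handoff described above can be sketched as follows. The agent classes and the flush-then-transfer sequence are illustrative assumptions; the point is that once the slave agent owns the data set, requesting agents on the accelerator are served locally with no snoop to the host's home agent:

```python
# Hypothetical sketch of software-assisted ownership transfer between a
# host home agent (HA) and an accelerator slave agent (SA).

class HomeAgent:
    def __init__(self):
        self.owned = set()   # data sets the host currently owns
        self.snoops = 0      # coherency messages the HA had to handle

    def release(self, dataset):
        # Software flushes host caches and relinquishes ownership.
        self.owned.discard(dataset)

class SlaveAgent:
    def __init__(self, home):
        self.home = home
        self.owned = set()
        self.local_memory = {}

    def take_ownership(self, dataset, data):
        self.owned.add(dataset)
        self.local_memory[dataset] = data

    def read(self, dataset):
        if dataset in self.owned:
            # Owned locally: serve from local memory, no HA permission.
            return self.local_memory[dataset]
        self.home.snoops += 1   # not owned: would need the HA's permission
        return None

home = HomeAgent()
sa = SlaveAgent(home)
home.owned.add("ds0")
home.release("ds0")                  # accelerator app transfers ownership
sa.take_ownership("ds0", [1, 2, 3])  # SA now owns the data set
assert sa.read("ds0") == [1, 2, 3] and home.snoops == 0
```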
-
Publication No.: US11074208B1
Publication Date: 2021-07-27
Application No.: US16555146
Filing Date: 2019-08-29
Applicant: XILINX, INC.
Inventor: Jaideep Dastidar , Millind Mittal
Abstract: An adaptive memory expansion scheme is proposed, where one or more memory expansion capable Hosts or Accelerators can have their memory mapped to one or more memory expansion devices. The embodiments below describe discovery, configuration, and mapping schemes that allow independent SCM implementations and CPU-Host implementations to match their memory expansion capabilities. As a result, a memory expansion host (e.g., a memory controller in a CPU or an Accelerator) can declare multiple logical memory expansion pools, each with a unique capacity. These logical memory pools can be matched to physical memory in the SCM cards using windows in a global address map. These windows represent shared memory for the Home Agents (HAs) (e.g., the Host) and the Slave Agents (SAs) (e.g., the memory expansion device).
-
Publication No.: US20200379664A1
Publication Date: 2020-12-03
Application No.: US16425841
Filing Date: 2019-05-29
Applicant: Xilinx, Inc.
Inventor: Millind Mittal , Jaideep Dastidar
Abstract: Examples herein describe an accelerator device that shares the same coherent domain as hardware elements in a host computing device. The embodiments herein describe a mix of hardware and software coherency which reduces the overhead of managing data when large chunks of data are moved from the host into the accelerator device. In one embodiment, an accelerator application executing on the host identifies a data set it wishes to transfer to the accelerator device to be processed. The accelerator application transfers ownership from a home agent in the host to the accelerator device. A slave agent can then take ownership of the data. As a result, any memory operation requests received from a requesting agent in the accelerator device can gain access to the data set in local memory via the slave agent without the slave agent obtaining permission from the home agent in the host.
-
Publication No.: US10761985B2
Publication Date: 2020-09-01
Application No.: US16053488
Filing Date: 2018-08-02
Applicant: Xilinx, Inc.
Inventor: Millind Mittal , Jaideep Dastidar
IPC: G06F12/0815 , G06F12/0831
Abstract: Circuits and methods for combined precise and imprecise snoop filtering. A memory and a plurality of processor circuits are coupled to interconnect circuitry. A plurality of cache circuits are coupled to the plurality of processor circuits, respectively. A first snoop filter is coupled to the interconnect circuitry and is configured to filter snoop requests by individual cache lines for a first subset of addresses of the memory. A second snoop filter is coupled to the interconnect circuitry and is configured to filter snoop requests by groups of cache lines for a second subset of addresses of the memory, where each group encompasses a plurality of cache lines.
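The precise/imprecise split can be sketched as two tracking structures selected by address range. The line size, group size, and the address boundary between the two subsets are illustrative assumptions:

```python
# Hypothetical sketch of a combined snoop filter: per-line (precise)
# tracking for one address subset, per-group (imprecise) tracking for
# the other. Imprecise tracking trades extra snoops for less state.

LINE = 64            # assumed cache-line size in bytes
GROUP_LINES = 16     # assumed lines per group in the imprecise filter

class CombinedSnoopFilter:
    def __init__(self, precise_limit):
        self.precise_limit = precise_limit  # assumed subset boundary
        self.precise = {}    # line index  -> processors caching that line
        self.imprecise = {}  # group index -> processors that may cache it

    def record_fill(self, proc, addr):
        if addr < self.precise_limit:
            self.precise.setdefault(addr // LINE, set()).add(proc)
        else:
            group = addr // (LINE * GROUP_LINES)
            self.imprecise.setdefault(group, set()).add(proc)

    def snoop_targets(self, addr):
        """Processors that must be snooped for an access to addr."""
        if addr < self.precise_limit:
            return self.precise.get(addr // LINE, set())
        # Imprecise: snoop anyone that touched the whole line group.
        return self.imprecise.get(addr // (LINE * GROUP_LINES), set())
```

A line in the precise region snoops only its exact cachers, while a line in the imprecise region snoops every processor recorded against its group, even if they cached a different line in that group.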
-
Publication No.: US10673745B2
Publication Date: 2020-06-02
Application No.: US15886583
Filing Date: 2018-02-01
Applicant: Xilinx, Inc.
Inventor: Ian A. Swarbrick , Ygal Arbel , Millind Mittal , Sagheer Ahmad
IPC: H04L12/725 , H04L12/851 , G06F15/78 , H04L12/931 , H04L29/08 , H04W4/50 , H04L12/933 , H04L12/701 , H04L12/70 , H04L12/863 , H04L12/721
Abstract: An example method of generating a configuration for a network on chip (NoC) in a programmable device includes: receiving traffic flow requirements for a plurality of traffic flows; assigning routes through the NoC for each traffic flow based on the traffic flow requirements; determining arbitration settings for the traffic flows along the assigned routes; generating programming data for the NoC; and loading the programming data to the programmable device to configure the NoC.
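The steps in the method can be sketched end to end. The XY mesh routing and bandwidth-proportional arbitration weights are illustrative stand-ins, not the actual tool's algorithms:

```python
# Hypothetical sketch of the NoC configuration flow: take per-flow
# requirements, assign a route per flow, derive arbitration settings,
# and emit programming data for the device.

def xy_route(src, dst):
    """Deterministic XY routing on a mesh: travel in X, then in Y."""
    x, y = src
    hops = [src]
    while x != dst[0]:
        x += 1 if dst[0] > x else -1
        hops.append((x, y))
    while y != dst[1]:
        y += 1 if dst[1] > y else -1
        hops.append((x, y))
    return hops

def configure_noc(flows):
    """flows: list of {'src': (x, y), 'dst': (x, y), 'bandwidth': MB/s}.
    Returns per-flow programming data (route + arbitration weight)."""
    total_bw = sum(f["bandwidth"] for f in flows)
    programming_data = []
    for f in flows:
        programming_data.append({
            "route": xy_route(f["src"], f["dst"]),
            # Arbitration weight proportional to requested bandwidth.
            "arb_weight": round(f["bandwidth"] / total_bw, 3),
        })
    return programming_data
```

Loading the resulting programming data into the device (the final step of the method) is not modeled here.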
-
Publication No.: US20190238453A1
Publication Date: 2019-08-01
Application No.: US15886583
Filing Date: 2018-02-01
Applicant: Xilinx, Inc.
Inventor: Ian A. Swarbrick , Ygal Arbel , Millind Mittal , Sagheer Ahmad
IPC: H04L12/725 , H04L12/851 , H04L12/931 , G06F15/78
Abstract: An example method of generating a configuration for a network on chip (NoC) in a programmable device includes: receiving traffic flow requirements for a plurality of traffic flows; assigning routes through the NoC for each traffic flow based on the traffic flow requirements; determining arbitration settings for the traffic flows along the assigned routes; generating programming data for the NoC; and loading the programming data to the programmable device to configure the NoC.
-