DIE-STACKED DEVICE WITH PARTITIONED MULTI-HOP NETWORK
    91.
    Invention Application (In Force)

    Publication Number: US20140177626A1

    Publication Date: 2014-06-26

    Application Number: US13726142

    Filing Date: 2012-12-23

    Abstract: An electronic assembly includes horizontally-stacked die disposed at an interposer, and may also include vertically-stacked die. The stacked die are interconnected via a multi-hop communication network that is partitioned into a link partition and a router partition. The link partition is at least partially implemented in the metal layers of the interposer for horizontally-stacked die. The link partition may also be implemented in part by the intra-die interconnects in a single die and by the inter-die interconnects connecting vertically-stacked sets of die. The router partition is implemented at some or all of the die disposed at the interposer and comprises the logic that supports the functions that route packets among the components of the processing system via the interconnects of the link partition. The router partition may implement fixed routing, or alternatively may be configurable using programmable routing tables or configurable logic blocks.
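
    As a minimal illustration of the router-partition concept in this abstract, the sketch below models a per-die router whose programmable routing table maps a packet's destination die to an outgoing link of the link partition. It is a software analogy under assumed names (Packet, RouterNode, program_route), not the patented hardware.

        // Hypothetical software model of a router-partition node with a
        // programmable routing table (destination die ID -> output link index).
        #include <cstdint>
        #include <iostream>
        #include <unordered_map>

        struct Packet {
            uint32_t dest_die;   // destination die ID
            uint32_t payload;
        };

        class RouterNode {
        public:
            explicit RouterNode(uint32_t die_id) : die_id_(die_id) {}

            // Program one routing-table entry: packets for dest_die leave on out_link.
            void program_route(uint32_t dest_die, uint32_t out_link) {
                route_table_[dest_die] = out_link;
            }

            // Return the output link for a packet, or -1 if it is delivered locally.
            int route(const Packet& p) const {
                if (p.dest_die == die_id_) return -1;
                auto it = route_table_.find(p.dest_die);
                return it != route_table_.end() ? static_cast<int>(it->second) : 0;
            }

        private:
            uint32_t die_id_;
            std::unordered_map<uint32_t, uint32_t> route_table_;
        };

        int main() {
            RouterNode die0(0);
            die0.program_route(1, 0);   // traffic for die 1 uses link 0
            die0.program_route(2, 1);   // traffic for die 2 uses link 1 (e.g., an interposer trace)

            Packet p{2, 0xABCD};
            std::cout << "packet to die " << p.dest_die
                      << " leaves on link " << die0.route(p) << "\n";   // prints link 1
            return 0;
        }

    A fixed-routing variant, also mentioned in the abstract, would simply hard-code the table contents instead of exposing program_route.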

    Dynamically Configuring Regions of a Main Memory in a Write-Back Mode or a Write-Through Mode
    92.
    Invention Application (In Force)

    Publication Number: US20140143505A1

    Publication Date: 2014-05-22

    Application Number: US13736063

    Filing Date: 2013-01-07

    CPC classification number: G06F12/0802 G06F12/0804 G06F12/0862 G06F12/0888

    Abstract: The described embodiments include a main memory and a cache memory (or “cache”) with a cache controller that includes a mode-setting mechanism. In some embodiments, the mode-setting mechanism is configured to dynamically determine an access pattern for the main memory. Based on the determined access pattern, the mode-setting mechanism configures at least one region of the main memory in a write-back mode and configures other regions of the main memory in a write-through mode. In these embodiments, when performing a write operation in the cache memory, the cache controller determines whether the region of main memory from which the cache block originates is configured in the write-back mode or the write-through mode, and then performs a corresponding write operation in the cache memory.
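
    To make the mode-setting mechanism concrete, the sketch below divides main memory into fixed-size regions, tallies writes per region as a stand-in for the determined access pattern, and tags each region write-back or write-through. The region size, the write-count heuristic, and the names (ModeSettingMechanism, observe_write) are illustrative assumptions, not the patented design.

        // Hypothetical per-region write-mode selection based on observed writes.
        #include <cstdint>
        #include <iostream>
        #include <unordered_map>

        enum class WriteMode { WriteBack, WriteThrough };

        constexpr uint64_t kRegionSize = 1 << 20;   // 1 MiB regions (assumed granularity)

        class ModeSettingMechanism {
        public:
            // Record a write and re-evaluate the region's mode from its access pattern.
            void observe_write(uint64_t addr) {
                uint64_t region = addr / kRegionSize;
                uint64_t count = ++write_counts_[region];
                // Stand-in heuristic: frequently written regions stay write-back
                // to avoid repeated write-through traffic to main memory.
                modes_[region] = (count > 8) ? WriteMode::WriteBack : WriteMode::WriteThrough;
            }

            WriteMode mode_for(uint64_t addr) const {
                auto it = modes_.find(addr / kRegionSize);
                return it != modes_.end() ? it->second : WriteMode::WriteThrough;
            }

        private:
            std::unordered_map<uint64_t, uint64_t> write_counts_;
            std::unordered_map<uint64_t, WriteMode> modes_;
        };

        int main() {
            ModeSettingMechanism mech;
            uint64_t hot_addr = 0x100000, cold_addr = 0x900000;
            for (int i = 0; i < 16; ++i) mech.observe_write(hot_addr);
            mech.observe_write(cold_addr);

            std::cout << "hot region:  "
                      << (mech.mode_for(hot_addr) == WriteMode::WriteBack ? "write-back" : "write-through") << "\n";
            std::cout << "cold region: "
                      << (mech.mode_for(cold_addr) == WriteMode::WriteBack ? "write-back" : "write-through") << "\n";
            return 0;
        }

    On a cache write, the controller would consult mode_for the block's home region and either mark the block dirty (write-back) or also update main memory immediately (write-through).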

    Lookup Table (LUT) Vector Instruction
    94.
    Invention Publication

    Publication Number: US20240329984A1

    Publication Date: 2024-10-03

    Application Number: US18128963

    Filing Date: 2023-03-30

    CPC classification number: G06F9/30036 G06F9/3001 G06F9/30109

    Abstract: An electronic device includes processing circuitry that executes a lookup table (LUT) vector instruction. Executing the lookup table vector instruction causes the processing circuitry to acquire a set of reference values by using each input value from an input vector as an index to acquire a reference value from a reference vector. The processing circuitry then provides the set of reference values for one or more subsequent operations. The processing circuitry can also use the set of reference values for controlling vector elements from among a set of vector elements for which a vector operation is performed.
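
    The gather step described in this abstract can be emulated in scalar code: each element of the input vector is used as an index into the reference vector to produce the set of reference values. The function name lut_vector and the clamp-to-zero handling of out-of-range indices below are assumptions; the patent text does not prescribe them.

        // Scalar emulation of the described LUT vector instruction's behavior.
        #include <cstddef>
        #include <cstdint>
        #include <iostream>
        #include <vector>

        // Gather reference[input[i]] for every lane i, clamping out-of-range
        // indices to 0 (an assumed policy).
        std::vector<uint32_t> lut_vector(const std::vector<uint8_t>& input,
                                         const std::vector<uint32_t>& reference) {
            std::vector<uint32_t> out(input.size());
            for (size_t i = 0; i < input.size(); ++i) {
                size_t idx = input[i] < reference.size() ? input[i] : 0;
                out[i] = reference[idx];
            }
            return out;
        }

        int main() {
            std::vector<uint32_t> reference = {100, 200, 300, 400};
            std::vector<uint8_t>  input     = {3, 0, 2, 1};
            for (uint32_t v : lut_vector(input, reference))
                std::cout << v << ' ';      // prints: 400 100 300 200
            std::cout << '\n';
            return 0;
        }

    Per the last sentence of the abstract, the resulting vector could then feed a subsequent vector operation or select which vector elements that operation touches.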

    Page table walker with page table entry (PTE) physical address prediction

    Publication Number: US11494300B2

    Publication Date: 2022-11-08

    Application Number: US17033737

    Filing Date: 2020-09-26

    Inventor: Gabriel H. Loh

    Abstract: Methods and apparatus provide virtual-to-physical address translations using a hardware page table walker with a region-based page table prefetch operation that produces virtual memory region tracking information, including at least data representing a virtual base address of a virtual memory region and a physical address of a first page table entry (PTE) corresponding to a virtual page within the virtual memory region. In response to a TLB miss indication, the hardware page table walker uses the virtual memory region tracking information to prefetch the physical address of a second page table entry, which provides the final physical address for the missed TLB entry. In some implementations, the prefetching of the physical PTE address is done in parallel with earlier levels of the page walk operation.
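
    A minimal sketch of the prediction itself, assuming 4 KiB pages, 8-byte PTEs, and a single tracked region whose last-level PTEs are contiguous in physical memory; the struct and function names are hypothetical. On a TLB miss that falls inside the tracked region, the predicted PTE physical address is just an offset from the recorded first-PTE address, and it could be prefetched while earlier levels of the walk proceed.

        // Hypothetical region-based PTE physical-address prediction.
        #include <cstdint>
        #include <iostream>
        #include <optional>

        constexpr uint64_t kPageShift = 12;   // 4 KiB pages (assumed)
        constexpr uint64_t kPteSize   = 8;    // bytes per page table entry (assumed)

        struct RegionTrackingInfo {
            uint64_t region_base_va;   // virtual base address of the tracked region
            uint64_t region_pages;     // number of pages covered by the region
            uint64_t first_pte_pa;     // physical address of the PTE for the first page
        };

        // Predict the physical address of the missed page's PTE, assuming PTEs for
        // consecutive virtual pages in the region lie consecutively in the page table.
        std::optional<uint64_t> predict_pte_pa(const RegionTrackingInfo& r, uint64_t miss_va) {
            if (miss_va < r.region_base_va) return std::nullopt;
            uint64_t vpn_offset = (miss_va - r.region_base_va) >> kPageShift;
            if (vpn_offset >= r.region_pages) return std::nullopt;   // outside tracked region
            return r.first_pte_pa + vpn_offset * kPteSize;           // address to prefetch
        }

        int main() {
            RegionTrackingInfo region{0x7f0000000000ULL, 512, 0x1a2b3c000ULL};
            uint64_t miss_va = 0x7f0000005000ULL;                    // 6th page of the region
            if (auto pa = predict_pte_pa(region, miss_va))
                std::cout << "prefetch PTE at physical 0x" << std::hex << *pa << '\n';
            return 0;
        }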

    PAGE TABLE WALKER WITH PAGE TABLE ENTRY (PTE) PHYSICAL ADDRESS PREDICTION

    Publication Number: US20220100653A1

    Publication Date: 2022-03-31

    Application Number: US17033737

    Filing Date: 2020-09-26

    Inventor: Gabriel H. Loh

    Abstract: Methods and apparatus provide virtual-to-physical address translations using a hardware page table walker with a region-based page table prefetch operation that produces virtual memory region tracking information, including at least data representing a virtual base address of a virtual memory region and a physical address of a first page table entry (PTE) corresponding to a virtual page within the virtual memory region. In response to a TLB miss indication, the hardware page table walker uses the virtual memory region tracking information to prefetch the physical address of a second page table entry, which provides the final physical address for the missed TLB entry. In some implementations, the prefetching of the physical PTE address is done in parallel with earlier levels of the page walk operation.

    Cache for storing regions of data
    100.
    Invention Grant

    Publication Number: US11232039B2

    Publication Date: 2022-01-25

    Application Number: US16214363

    Filing Date: 2018-12-10

    Inventor: Gabriel H. Loh

    Abstract: Systems, apparatuses, and methods for efficiently performing memory accesses in a computing system are disclosed. A computing system includes one or more clients, a communication fabric, and a last-level cache implemented with low-latency, high-bandwidth memory. The cache controller for the last-level cache determines a range of addresses corresponding to a first region of system memory whose data is copied into a second region of the last-level cache. The cache controller sends a selected memory access request to system memory when it determines that the request address of the memory access request is not within the range of addresses, and services the selected memory access request by accessing data from the last-level cache when it determines that the request address is within the range of addresses.
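
    The address-range check the controller performs can be illustrated with a small sketch, assuming a single tracked region; the RegionMapping struct and steer function are hypothetical names. Requests that fall in the tracked system-memory range are serviced from the last-level cache, and all others are forwarded to system memory.

        // Hypothetical request steering between the last-level cache and system memory.
        #include <cstdint>
        #include <iostream>

        struct RegionMapping {
            uint64_t sys_mem_start;   // first system-memory address with a copy in the LLC
            uint64_t sys_mem_end;     // one past the last such address
            uint64_t llc_base;        // where the copy begins in the last-level cache
        };

        enum class Target { LastLevelCache, SystemMemory };

        // Compare the request address against the tracked range: hits go to the
        // low-latency, high-bandwidth LLC memory, misses go to system memory.
        Target steer(const RegionMapping& m, uint64_t request_addr) {
            bool in_range = request_addr >= m.sys_mem_start && request_addr < m.sys_mem_end;
            return in_range ? Target::LastLevelCache : Target::SystemMemory;
        }

        int main() {
            RegionMapping m{0x80000000, 0x88000000, 0x0};   // 128 MiB tracked region (example)
            std::cout << (steer(m, 0x80100000) == Target::LastLevelCache ? "LLC" : "DRAM") << '\n';
            std::cout << (steer(m, 0x90000000) == Target::LastLevelCache ? "LLC" : "DRAM") << '\n';
            return 0;
        }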
