3. ADAPTIVE SERVICE CONTROLLER, SYSTEM ON CHIP AND METHOD OF CONTROLLING THE SAME
    Invention Application (In force)

    Publication No.: US20140208071A1

    Publication Date: 2014-07-24

    Application No.: US13799785

    Filing Date: 2013-03-13

    IPC Class: G06F15/80

    CPC Class: G06F15/80 G06F15/7807

    Abstract: A system on chip (SOC) includes a slave device, a plurality of master devices, an interconnect device and a plurality of service controllers. The master devices generate requests for services from the slave device. The interconnect device is coupled to the slave device and the master devices through respective channels and performs an arbitrating operation on the requests. The service controllers adaptively control the request flows from the master devices in response to changes in the operational environment of the SOC.

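    The request-flow control described above can be pictured with a minimal C sketch, assuming each service controller simply caps the number of outstanding requests per master and re-tunes that cap when the SoC's operating point changes; the structure names, policy and numbers below are illustrative assumptions, not details from the patent.

```c
/* Minimal sketch of per-master request-flow control: an outstanding-request
 * limit that is re-tuned when the SoC's operating point changes.
 * All names and the concrete policy are illustrative only. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t outstanding;   /* requests issued but not yet serviced        */
    uint32_t limit;         /* current flow-control limit for this master  */
} svc_ctrl_t;

/* Re-tune the limit when the operational environment changes,
 * e.g. under heavier load this master is allowed fewer in-flight requests. */
static void svc_on_env_change(svc_ctrl_t *c, uint32_t load_level)
{
    c->limit = (load_level > 2u) ? 4u : 16u;   /* example policy only */
}

/* Gate a new request from the master toward the interconnect arbiter. */
static bool svc_try_issue(svc_ctrl_t *c)
{
    if (c->outstanding >= c->limit)
        return false;        /* hold the request back (flow control)      */
    c->outstanding++;
    return true;             /* forward to interconnect for arbitration   */
}

static void svc_on_response(svc_ctrl_t *c)
{
    if (c->outstanding > 0u)
        c->outstanding--;
}

int main(void)
{
    svc_ctrl_t ctrl = { 0u, 16u };
    svc_on_env_change(&ctrl, 3u);       /* heavy load: limit drops to 4    */
    while (svc_try_issue(&ctrl))
        ;                               /* issues 4 requests, then stalls  */
    svc_on_response(&ctrl);             /* one response frees a slot       */
    printf("outstanding=%u limit=%u\n", ctrl.outstanding, ctrl.limit);
    return 0;
}
```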

4. SEMICONDUCTOR DEVICES INCLUDING APPLICATION PROCESSOR CONNECTED TO HIGH-BANDWIDTH MEMORY AND LOW-BANDWIDTH MEMORY, AND CHANNEL INTERLEAVING METHOD THEREOF
    Invention Application (In force)

    Publication No.: US20150081989A1

    Publication Date: 2015-03-19

    Application No.: US14307994

    Filing Date: 2014-06-18

    IPC Class: G06F12/06

    CPC Class: G06F12/0607 Y02D10/13

    Abstract: A memory system includes a high-bandwidth memory device that has a relatively high operation bandwidth and a plurality of access channels. A low-bandwidth memory device has a relatively low operation bandwidth relative to the high-bandwidth memory device and one or more access channels. An interleaving unit performs a memory interleave operation among the plurality of access channels of the high-bandwidth memory device and an access channel of the one or more access channels of the low-bandwidth memory device.

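    A minimal C sketch of the kind of address-to-channel mapping such an interleaving unit could apply, assuming the high-bandwidth memory contributes four channels, the low-bandwidth memory contributes one, and addresses are interleaved modulo the total channel count at a fixed granule; the channel counts, granule size and names are assumptions for illustration, not values from the patent.

```c
/* Minimal sketch of interleaving across the high-bandwidth memory's
 * channels plus one access channel of the low-bandwidth memory.
 * Channel counts, granule size and helper names are assumed. */
#include <stdint.h>
#include <stdio.h>

#define HBW_CHANNELS   4u        /* channels of the high-bandwidth memory   */
#define LBW_CHANNELS   1u        /* low-bandwidth channel joining the pool  */
#define GRANULE_SHIFT  8u        /* 256-byte interleave granule             */

/* Map a physical address to one of the interleaved channels. */
static uint32_t interleave_channel(uint64_t addr)
{
    uint32_t total = HBW_CHANNELS + LBW_CHANNELS;
    return (uint32_t)((addr >> GRANULE_SHIFT) % total);
}

int main(void)
{
    /* Walk a few consecutive granules to show the round-robin mapping. */
    for (uint64_t addr = 0; addr < (6u << GRANULE_SHIFT); addr += (1u << GRANULE_SHIFT))
        printf("addr 0x%08llx -> channel %u\n",
               (unsigned long long)addr, interleave_channel(addr));
    return 0;
}
```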

6. DATA CACHE CONTROLLER, DEVICES HAVING THE SAME, AND METHOD OF OPERATING THE SAME
    Invention Application (In force)

    Publication No.: US20130117627A1

    Publication Date: 2013-05-09

    Application No.: US13446345

    Filing Date: 2012-04-13

    IPC Class: G06F11/08 G06F12/08

    CPC Class: G06F12/0855 G06F11/1064

    Abstract: A method of operating a data cache controller is provided. The method includes transmitting first data output from a data cache to a central processing unit (CPU) core with a first latency, and transmitting second data to the CPU core with a second latency greater than the first latency. The first latency is the delay between a read request to the data cache and transmission of the first data according to execution of a first instruction fetched from an instruction cache; the second latency is the delay between a read request to the data cache and transmission of the second data according to execution of a second instruction fetched from the instruction cache.

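    As a rough model of the two-latency behavior, the C sketch below returns cache data to the core with either a short or a longer delay. The abstract does not say why the second transfer is slower; purely for illustration (and consistent with the listed CPC class G06F11/1064) the slow path is modeled as one needing an extra error-correction step, and all names and cycle counts are assumptions.

```c
/* Minimal sketch of a cache read returning data to the CPU core with
 * one of two latencies.  The slow path is modeled, by assumption only,
 * as a read that needs an error-correction fix-up before transmission. */
#include <stdint.h>
#include <stdio.h>

#define FAST_LATENCY_CYCLES  2u   /* first latency: plain hit               */
#define SLOW_LATENCY_CYCLES  4u   /* second latency: hit needing fix-up     */

typedef struct {
    uint32_t data;
    uint32_t latency_cycles;      /* delay between read request and data    */
} cache_reply_t;

static cache_reply_t cache_read(uint32_t addr, int needs_correction)
{
    cache_reply_t r;
    r.data = addr ^ 0xA5A5A5A5u;                    /* stand-in payload     */
    r.latency_cycles = needs_correction ? SLOW_LATENCY_CYCLES
                                        : FAST_LATENCY_CYCLES;
    return r;
}

int main(void)
{
    cache_reply_t fast = cache_read(0x1000u, 0);
    cache_reply_t slow = cache_read(0x2000u, 1);
    printf("fast path: %u cycles, slow path: %u cycles\n",
           fast.latency_cycles, slow.latency_cycles);
    return 0;
}
```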

7. NETWORK-ON-CHIP SYSTEM INCLUDING ACTIVE MEMORY PROCESSOR
    Invention Application (Pending, published)

    Publication No.: US20120226865A1

    Publication Date: 2012-09-06

    Application No.: US13504923

    Filing Date: 2009-12-09

    IPC Class: G06F12/08

    CPC Class: G06F13/1642 G06F2213/0038

    Abstract: Disclosed is a network-on-chip system including an active memory processor for coping with the increased communication latency caused by multiple processors and memories. The network-on-chip system includes a plurality of processing elements that request an active memory operation, in which a predetermined operation is performed at a shared memory, in order to reduce the access latency of the shared memory, and an active memory processor connected to the processing elements through a network. The active memory processor stores codes for processing the custom transactions requested by the active memory operations, performs operations on addresses or data stored in a shared cache memory or the shared memory based on those codes, and transmits the results of the performed operations to the processing elements.

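    The idea of an active memory operation can be sketched in C as a small command that a processing element sends across the network and that the memory-side processor executes against the shared memory, returning only the result; the opcode set, message layout and names below are assumptions made for illustration.

```c
/* Minimal sketch of an "active memory operation": instead of pulling data
 * over the network, a processing element sends a small command and the
 * memory-side processor runs it against the shared memory, returning only
 * the result.  Opcode set, message layout and names are assumed. */
#include <stdint.h>
#include <stdio.h>

enum amo_opcode { AMO_SUM, AMO_FILL };

typedef struct {
    enum amo_opcode op;
    uint32_t base;          /* start index in shared memory */
    uint32_t count;         /* number of elements           */
    uint32_t operand;       /* used by AMO_FILL             */
} amo_request_t;

static uint32_t shared_mem[256];   /* stand-in for the shared memory */

/* Executed by the active memory processor, next to the memory. */
static uint32_t amo_execute(const amo_request_t *req)
{
    uint32_t acc = 0;
    for (uint32_t i = 0; i < req->count; i++) {
        if (req->op == AMO_SUM)
            acc += shared_mem[req->base + i];
        else                                   /* AMO_FILL */
            shared_mem[req->base + i] = req->operand;
    }
    return acc;            /* only the result crosses the network */
}

int main(void)
{
    amo_request_t fill = { AMO_FILL, 0, 8, 7 };
    amo_request_t sum  = { AMO_SUM,  0, 8, 0 };
    amo_execute(&fill);
    printf("sum = %u\n", amo_execute(&sum));   /* prints 56 */
    return 0;
}
```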

8. Data cache controller, devices having the same, and method of operating the same
    Invention Grant (In force)

    Publication No.: US08645791B2

    Publication Date: 2014-02-04

    Application No.: US13446345

    Filing Date: 2012-04-13

    IPC Class: H03M13/00 G11C29/00

    CPC Class: G06F12/0855 G06F11/1064

    Abstract: A method of operating a data cache controller is provided. The method includes transmitting first data output from a data cache to a central processing unit (CPU) core with a first latency, and transmitting second data to the CPU core with a second latency greater than the first latency. The first latency is the delay between a read request to the data cache and transmission of the first data according to execution of a first instruction fetched from an instruction cache; the second latency is the delay between a read request to the data cache and transmission of the second data according to execution of a second instruction fetched from the instruction cache.
