1. DOMAIN PROTECTION AND VIRTUALIZATION FOR SATA
    Invention Application (In Force)

    Publication No.: US20140201481A1

    Publication Date: 2014-07-17

    Application No.: US13742767

    Filing Date: 2013-01-16

    Abstract: Various aspects provide a hardware SATA virtualization system that does not require backend, frontend, or native device drivers. A lightweight SATA virtualization handler can run on a specialized co-processor and manage requests enqueued by individual VMs. The handler can also schedule the requests based on performance optimizations that reduce seek time, as well as on the priority of the requests. The specialized co-processor can communicate with an integrated SATA controller through an advanced host controller interface (“AHCI”) data structure that is built by the system processor and carries commands from one or more VMs.
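
    Illustrative sketch: the abstract describes a handler that orders VM requests by priority and by seek distance. The C fragment below is a minimal model of that ordering policy, not the patented implementation; the sata_request structure, the LBA values, and the comparison rule are assumptions made for illustration.

    /* Minimal model: sort pending VM requests by priority, then by LBA
     * proximity to the current head position (illustrative only). */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        int      vm_id;     /* which VM enqueued the request */
        unsigned lba;       /* target logical block address  */
        int      priority;  /* larger value = more urgent    */
    } sata_request;

    static unsigned current_lba;   /* assumed current head position */

    static unsigned seek_dist(unsigned a, unsigned b)
    {
        return a > b ? a - b : b - a;
    }

    static int cmp_request(const void *pa, const void *pb)
    {
        const sata_request *a = pa, *b = pb;
        if (a->priority != b->priority)
            return b->priority - a->priority;            /* priority first     */
        return (int)seek_dist(a->lba, current_lba) -
               (int)seek_dist(b->lba, current_lba);      /* then shortest seek */
    }

    int main(void)
    {
        sata_request pending[] = {
            { .vm_id = 0, .lba = 9000, .priority = 1 },
            { .vm_id = 1, .lba = 1200, .priority = 2 },
            { .vm_id = 2, .lba = 1500, .priority = 1 },
        };
        current_lba = 1000;

        qsort(pending, 3, sizeof pending[0], cmp_request);

        for (int i = 0; i < 3; i++)
            printf("dispatch VM%d lba=%u prio=%d\n",
                   pending[i].vm_id, pending[i].lba, pending[i].priority);
        return 0;
    }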

2. System boot with external media
    Invention Grant (In Force)

    Publication No.: US09558012B2

    Publication Date: 2017-01-31

    Application No.: US13772498

    Filing Date: 2013-02-21

    CPC classification number: G06F9/4408 G06F9/4401 G06F9/4406 G06F9/4416

    Abstract: Various aspects of the present disclosure provide a system that is able to boot from a variety of media that can be connected to the system, including SPI NOR and SPI NAND memory, universal serial bus (“USB”) devices, and devices attached via PCIe and Ethernet interfaces. When the system is powered on, the system processor is held in a reset mode while a microcontroller in the system identifies an external device to be booted and then copies a portion of boot code from the external device to an on-chip memory. The microcontroller can then direct the reset vector to the boot code in the on-chip memory and bring the system processor out of reset. The system processor can execute the boot code in-place on the on-chip memory, which initializes the system memory and the second-stage boot loader.
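
    Illustrative sketch: the boot sequence above (hold the processor in reset, locate a bootable external device, copy boot code to on-chip memory, redirect the reset vector, release reset) can be modeled in plain C as below. The device names, buffer sizes, and helper functions are hypothetical stand-ins, not the actual microcontroller firmware.

    /* Software model of the boot flow; all hardware interactions are stubbed. */
    #include <stdio.h>
    #include <string.h>

    #define OCM_SIZE 4096
    static unsigned char ocm[OCM_SIZE];   /* stand-in for on-chip memory */
    static const char *boot_media[] = { "spi-nor", "spi-nand", "usb", "pcie", "eth" };

    static int  device_present(const char *name)   { return strcmp(name, "spi-nand") == 0; }
    static void hold_cpu_in_reset(void)             { puts("CPU held in reset"); }
    static void release_cpu_reset(unsigned vector)  { printf("CPU released, reset vector=0x%x\n", vector); }

    static unsigned copy_boot_code(const char *dev)
    {
        /* pretend to read the first-stage loader from the device into OCM */
        printf("copying boot code from %s into OCM\n", dev);
        memset(ocm, 0xEA, 256);
        return 0x0;               /* offset of the code inside OCM */
    }

    int main(void)
    {
        hold_cpu_in_reset();
        for (size_t i = 0; i < sizeof boot_media / sizeof boot_media[0]; i++) {
            if (device_present(boot_media[i])) {
                unsigned vec = copy_boot_code(boot_media[i]);
                release_cpu_reset(vec);
                return 0;
            }
        }
        puts("no bootable device found");
        return 1;
    }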

3. Packet processing with dynamic load balancing
    Invention Grant (In Force)

    Publication No.: US09158713B1

    Publication Date: 2015-10-13

    Application No.: US12772832

    Filing Date: 2010-05-03

    Abstract: A system and method are provided for evenly distributing central processing unit (CPU) packet processing workloads. The method accepts packets for processing at a port hardware module port interface. The port hardware module supplies the packets to a direct memory access (DMA) engine for storage in system memory. The port hardware module also supplies descriptors to a mailbox. Each descriptor identifies a corresponding packet. The mailbox has a plurality of slots, and loads the descriptors into empty slots. There is a plurality of CPUs, and each CPU fetches descriptors from assigned slots in the mailbox. Then, each CPU processes packets in the system memory in the order in which the associated descriptors are fetched. A load balancing module estimates each CPU workload and reassigns mailbox slots to CPUs in response to unequal CPU workloads.
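
    Illustrative sketch: a minimal model of the slot-reassignment idea, assuming a fixed number of CPUs and mailbox slots and a simple per-CPU workload estimate; the descriptor format and the hardware mailbox itself are not shown.

    /* Move one mailbox slot from the busiest CPU to the least loaded CPU. */
    #include <stdio.h>

    #define NUM_CPUS  4
    #define NUM_SLOTS 8

    static int slot_owner[NUM_SLOTS]   = { 0, 0, 0, 1, 1, 2, 2, 3 };
    static unsigned workload[NUM_CPUS] = { 90, 40, 30, 20 };   /* estimated load */

    static void rebalance(void)
    {
        int busiest = 0, idlest = 0;
        for (int c = 1; c < NUM_CPUS; c++) {
            if (workload[c] > workload[busiest]) busiest = c;
            if (workload[c] < workload[idlest])  idlest  = c;
        }
        for (int s = 0; s < NUM_SLOTS; s++) {
            if (slot_owner[s] == busiest) {
                slot_owner[s] = idlest;
                printf("slot %d reassigned: CPU%d -> CPU%d\n", s, busiest, idlest);
                return;
            }
        }
    }

    int main(void)
    {
        rebalance();
        for (int s = 0; s < NUM_SLOTS; s++)
            printf("slot %d -> CPU%d\n", s, slot_owner[s]);
        return 0;
    }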

4. System-on-chip with power-save mode processor
    Invention Grant (In Force)

    Publication No.: US08832483B1

    Publication Date: 2014-09-09

    Application No.: US13014616

    Filing Date: 2011-01-26

    Abstract: A system-on-chip (SoC) is provided with a low-power processor to manage power-save mode operations. The SoC has a high-speed group with a high-speed processor, as well as a standby agent and a governor. In response to inactivity, the governor establishes a power-save mode and deactivates the high-speed group, but not the standby agent. The standby agent monitors SoC input/output (IO) interfaces and determines the speed requirements associated with a received communication. In response to determining that the communication does not prompt a high-speed SoC operation, the standby agent responds to the communication itself. Likewise, the standby agent monitors SoC internal events such as housekeeping and timer activity, and performs those tasks if it determines that they do not require a high-speed SoC operation. Alternatively, if a monitored communication or internal event prompts a high-speed SoC operation, the governor activates a member of the high-speed group.
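
    Illustrative sketch: a minimal model of the standby agent's decision, assuming a small set of event types and that only bulk transfers need the high-speed group; the event names and the wake mechanism are illustrative, not the patented logic.

    /* Classify events in power-save mode: handle locally or wake the high-speed group. */
    #include <stdio.h>
    #include <stdbool.h>

    typedef enum { EV_TIMER_TICK, EV_HOUSEKEEPING, EV_ARP_REQUEST, EV_BULK_TRANSFER } event_t;

    /* assumption: only bulk data transfers require the high-speed processor */
    static bool needs_high_speed(event_t ev)    { return ev == EV_BULK_TRANSFER; }

    static void standby_handle(event_t ev)      { printf("standby agent handled event %d\n", ev); }
    static void governor_wake_high_speed(void)  { puts("governor: waking high-speed group"); }

    int main(void)
    {
        event_t events[] = { EV_TIMER_TICK, EV_ARP_REQUEST, EV_BULK_TRANSFER };
        for (int i = 0; i < 3; i++) {
            if (needs_high_speed(events[i]))
                governor_wake_high_speed();
            else
                standby_handle(events[i]);
        }
        return 0;
    }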

5. Packet forwarding system and method using patricia trie configured hardware
    Invention Grant (In Force)

    Publication No.: US08767757B1

    Publication Date: 2014-07-01

    Application No.: US13396711

    Filing Date: 2012-02-15

    CPC classification number: H04L45/745 H04L45/66

    Abstract: A method is provided for forwarding packets. Using a control plane state machine, addresses in a packet header are examined to derive a pointer value. The pointer value is used to access entries in a result database to identify routing information, a buffer pool ID associated with a location in memory, and a queue ID. A direct memory access (DMA) engine writes the packet into the memory location in response to a first message that includes the buffer pool ID. The queue manager (QM) prepares a second message associated with the packet, the second message including the routing information, the memory allocation in the buffer pool, and the queue ID. An operating system reads the second message, reads the packet from the memory allocation, modifies the packet header using the routing information, and writes the modified packet back into the memory allocation.
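
    Illustrative sketch: a minimal model of the result-database step, with a hypothetical two-entry table and a stand-in for the control-plane state machine that derives the pointer value; the field names and hash inputs are assumptions.

    /* Look up routing information, buffer pool ID, and queue ID by pointer value. */
    #include <stdio.h>

    typedef struct {
        unsigned next_hop;     /* routing information (next-hop index) */
        unsigned buffer_pool;  /* identifies a region of memory        */
        unsigned queue_id;     /* destination queue for the message    */
    } result_entry;

    static const result_entry result_db[] = {
        { .next_hop = 7, .buffer_pool = 2, .queue_id = 1 },
        { .next_hop = 3, .buffer_pool = 0, .queue_id = 4 },
    };

    /* stand-in for the state machine that walks the packet header addresses */
    static unsigned derive_pointer(unsigned dst_mac_hash, unsigned dst_ip_hash)
    {
        return (dst_mac_hash ^ dst_ip_hash) % 2;
    }

    int main(void)
    {
        unsigned ptr = derive_pointer(0x1a2b, 0x0a000001);
        const result_entry *e = &result_db[ptr];
        printf("entry %u: next_hop=%u pool=%u queue=%u\n",
               ptr, e->next_hop, e->buffer_pool, e->queue_id);
        return 0;
    }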

6. Method and system for queuing a request by a processor to access a shared resource and granting access in accordance with an embedded lock ID
    Invention Grant (In Force)

    Publication No.: US08918791B1

    Publication Date: 2014-12-23

    Application No.: US13045453

    Filing Date: 2011-03-10

    CPC classification number: G06F9/526

    Abstract: A hardware-based method is provided for allocating shared resources in a system-on-chip (SoC). The SoC includes a plurality of processors and at least one shared resource, such as an input/output (IO) port or a memory. A queue manager (QM) includes a plurality of input first-in first-out memories (FIFOs) and a plurality of output FIFOs. A first application writes a first request to access the shared resource. A first application programming interface (API) loads the first request at a write pointer of a first input FIFO associated with the first processor. A resource allocator reads the first request from a read pointer of the first input FIFO, generates a first reply, and loads the first reply at a write pointer of a first output FIFO associated with the first processor. The first API supplies the first reply, from a read pointer of the first output FIFO, to the first application.
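
    Illustrative sketch: a minimal single-threaded model of the input/output FIFO handshake between an application, its API, and the resource allocator; the FIFO depth, the request encoding, and the grant format are assumptions, and the embedded lock ID handling is not shown.

    /* Request/reply flow through a pair of FIFOs (one pair per processor). */
    #include <stdio.h>

    #define FIFO_DEPTH 8

    typedef struct { unsigned buf[FIFO_DEPTH]; unsigned rd, wr; } fifo;

    static void     fifo_push(fifo *f, unsigned v) { f->buf[f->wr++ % FIFO_DEPTH] = v; }
    static unsigned fifo_pop(fifo *f)              { return f->buf[f->rd++ % FIFO_DEPTH]; }
    static int      fifo_empty(const fifo *f)      { return f->rd == f->wr; }

    static fifo input_fifo, output_fifo;

    int main(void)
    {
        /* application (via its API) requests access to shared resource #5 */
        fifo_push(&input_fifo, 5);

        /* resource allocator services the input FIFO and replies with a grant */
        while (!fifo_empty(&input_fifo)) {
            unsigned resource = fifo_pop(&input_fifo);
            fifo_push(&output_fifo, resource);    /* reply: access granted */
        }

        /* API returns the reply to the application */
        printf("granted access to resource %u\n", fifo_pop(&output_fifo));
        return 0;
    }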

7. Stashing system and method for the prevention of cache thrashing
    Invention Grant (In Force)

    Publication No.: US08429315B1

    Publication Date: 2013-04-23

    Application No.: US13167783

    Filing Date: 2011-06-24

    Abstract: In a system-on-chip (SoC) including a processor, a method is provided for stashing packet information that prevents cache thrashing. In operation, an Ethernet subsystem accepts a plurality of packets and sends the packets to an external memory for storage. A packet descriptor is derived for each accepted packet and is added to an ingress queue. Packet descriptors are transferred from the ingress queue to an egress queue supplying the packet descriptors to a processor. A context manager monitors the fill level of packet descriptors in the egress queue. In response to monitoring the fill level, the context manager stashes packets from the external memory into a cache, where each stashed packet is associated with a packet descriptor in the egress queue. Packet descriptors are transferred from the ingress queue to the egress queue in response to a number of packet descriptors in the egress queue falling below the fill level.
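
    Illustrative sketch: a minimal model of fill-level-driven stashing, assuming a fixed threshold and integer descriptors; the real context manager, queues, and cache interface are hardware and are only mimicked here.

    /* Transfer descriptors to the egress queue while it is below the fill level,
     * stashing the corresponding packet into the cache for each transfer. */
    #include <stdio.h>

    #define FILL_LEVEL 4    /* assumed egress-queue threshold */

    static int ingress[16], egress[16];
    static int ingress_cnt = 10, egress_cnt = 0;

    static void stash_packet(int desc)
    {
        printf("stash packet for descriptor %d into cache\n", desc);
    }

    static void context_manager_poll(void)
    {
        while (egress_cnt < FILL_LEVEL && ingress_cnt > 0) {
            int desc = ingress[--ingress_cnt];   /* take from ingress queue    */
            egress[egress_cnt++] = desc;         /* hand to egress queue       */
            stash_packet(desc);                  /* warm the cache for the CPU */
        }
    }

    int main(void)
    {
        for (int i = 0; i < 10; i++) ingress[i] = i;
        context_manager_poll();
        printf("egress queue now holds %d descriptors\n", egress_cnt);
        return 0;
    }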

8. Large receive offload functionality for a system on chip
    Invention Grant (In Force)

    Publication No.: US09300578B2

    Publication Date: 2016-03-29

    Application No.: US13772535

    Filing Date: 2013-02-21

    CPC classification number: H04L45/74 H04L47/34 H04L47/41 H04L47/50 H04L69/166

    Abstract: Various aspects provide large receive offload (LRO) functionality for a system on chip (SoC). A classifier engine is configured to classify one or more network packets received from a data stream as one or more network segments. A first memory is configured to store one or more packet headers associated with the one or more network segments. At least one processor is configured to receive the one or more packet headers and generate a single packet header for the one or more network segments in response to a determination that a gather buffer that stores packet data for the one or more network segments has reached a predetermined size.
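
    Illustrative sketch: a minimal model of the gather-buffer decision, assuming a 4 KB threshold and fixed segment sizes; the classifier engine and the actual header-merging step are not shown.

    /* Accumulate segment payloads and emit one merged header at the size limit. */
    #include <stdio.h>

    #define GATHER_LIMIT 4096    /* assumed "predetermined size" in bytes */

    static unsigned gathered_bytes;
    static unsigned segments_in_group;

    static void emit_single_header(void)
    {
        printf("emit one header covering %u segments, %u bytes\n",
               segments_in_group, gathered_bytes);
        gathered_bytes = segments_in_group = 0;
    }

    static void on_segment(unsigned payload_len)
    {
        gathered_bytes += payload_len;
        segments_in_group++;
        if (gathered_bytes >= GATHER_LIMIT)
            emit_single_header();
    }

    int main(void)
    {
        unsigned lens[] = { 1460, 1460, 1460, 1460 };
        for (int i = 0; i < 4; i++)
            on_segment(lens[i]);
        return 0;
    }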

9. System and method for partitioning resources in a system-on-chip (SoC)
    Invention Grant (In Force)

    Publication No.: US08893267B1

    Publication Date: 2014-11-18

    Application No.: US13212112

    Filing Date: 2011-08-17

    CPC classification number: G06F21/31 G06F21/554 G06F21/76

    Abstract: In a system-on-chip (SoC), a method is provided for partitioning access to resources. A plurality of processors is provided, including a configuration master (CM) processor, along with a memory, a plurality of OSs, and accessible resources. The method creates a mapping table with a plurality of entries, each entry cross-referencing a range of destination addresses with a domain ID, where each domain ID is associated with a corresponding processor. Access requests to the resources are accepted from the plurality of processors. Each access request includes a domain ID and a destination address. The mapping table is consulted to determine the range of destination addresses associated with each access request's domain ID. Accesses are authorized when the access request's destination address falls within the corresponding range of destination addresses in the mapping table, and authorized access requests are sent to the destination addresses of the requested resources.
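
    Illustrative sketch: a minimal model of the mapping-table check, with hypothetical address ranges and domain IDs; in the patent the table is enforced on each access request in the SoC, which is only approximated by the function below.

    /* Authorize an access only if its destination address falls in a range
     * whose domain ID matches the requester's domain ID. */
    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint32_t start, end;   /* destination address range */
        unsigned domain_id;    /* processor/OS that owns it */
    } map_entry;

    static const map_entry mapping_table[] = {
        { 0x00000000, 0x0FFFFFFF, 0 },   /* domain 0: low DRAM       */
        { 0x10000000, 0x1FFFFFFF, 1 },   /* domain 1: high DRAM      */
        { 0x20000000, 0x2000FFFF, 0 },   /* domain 0: an IO resource */
    };

    static bool access_authorized(unsigned domain_id, uint32_t addr)
    {
        for (size_t i = 0; i < sizeof mapping_table / sizeof mapping_table[0]; i++)
            if (addr >= mapping_table[i].start && addr <= mapping_table[i].end)
                return mapping_table[i].domain_id == domain_id;
        return false;   /* unmapped addresses are rejected */
    }

    int main(void)
    {
        printf("domain 1 -> 0x10001000: %s\n", access_authorized(1, 0x10001000) ? "allowed" : "denied");
        printf("domain 1 -> 0x20000100: %s\n", access_authorized(1, 0x20000100) ? "allowed" : "denied");
        return 0;
    }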

10. System and method for packet splitting
    Invention Grant (In Force)

    Publication No.: US08732351B1

    Publication Date: 2014-05-20

    Application No.: US12917425

    Filing Date: 2010-11-01

    CPC classification number: G06F13/28

    Abstract: A data structure splitting method is provided for processing data using a minimum number of memory accesses. An SoC is provided with a central processing unit (CPU), a system memory, an on-chip memory (OCM), and a network interface including an embedded direct memory access (DMA). The network interface accepts a data structure with a header and a payload. The DMA writes the payload into the system memory and the header into the OCM. The network interface DMA notifies the CPU of the header address in the OCM. The CPU reads the header in the OCM, performs processing instructions, and writes the processed header back into the OCM. The CPU then sends the address of the processed header in the OCM to the network interface DMA. The network interface DMA reads the processed header from the OCM and sends a data structure with the processed header and the payload.
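
    Illustrative sketch: a minimal model of the header/payload split performed by the network interface DMA, assuming a 14-byte header and purely illustrative buffers standing in for the OCM and the system memory.

    /* Write the payload to system memory and only the header to on-chip memory. */
    #include <stdio.h>
    #include <string.h>

    #define HDR_LEN 14                      /* assumed header length */

    static unsigned char ocm[64];           /* stand-in for on-chip memory */
    static unsigned char system_mem[2048];  /* stand-in for external DRAM  */

    static void dma_split_write(const unsigned char *pkt, size_t len)
    {
        memcpy(ocm, pkt, HDR_LEN);                         /* header  -> OCM        */
        memcpy(system_mem, pkt + HDR_LEN, len - HDR_LEN);  /* payload -> system mem */
    }

    int main(void)
    {
        unsigned char pkt[64] = { 0xAA, 0xBB };   /* fake packet: header + payload */
        dma_split_write(pkt, sizeof pkt);
        printf("header byte 0 in OCM: 0x%02X, payload bytes in DRAM: %zu\n",
               (unsigned)ocm[0], sizeof pkt - (size_t)HDR_LEN);
        return 0;
    }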
