EFFICIENT LOCKING OF MEMORY PAGES
    83.
    Invention Application (Granted)

    Publication Number: US20130311738A1

    Publication Date: 2013-11-21

    Application Number: US13996438

    Filing Date: 2012-03-30

    CPC classification number: G06F12/1466 G06F12/1027 G06F12/126

    Abstract: An apparatus is described that contains a processing core comprising a CPU core and at least one accelerator coupled to the CPU core. The CPU core comprises a pipeline having a translation look-aside buffer. The CPU core also comprises logic circuitry to set a lock bit in the attribute data of an entry within the translation look-aside buffer to lock a page of memory reserved for the accelerator.

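    The abstract above describes pinning a memory page for an accelerator by setting a lock bit in the attribute data of a TLB entry. Below is a minimal C sketch of that idea only; the entry layout, the bit positions, and the tlb_lock_page helper are assumptions made for illustration and are not taken from the patent.

        #include <stddef.h>
        #include <stdint.h>
        #include <stdbool.h>

        /* Hypothetical attribute layout for a TLB entry; real layouts are
         * implementation-specific. */
        #define TLB_ATTR_PRESENT  (1u << 0)
        #define TLB_ATTR_WRITABLE (1u << 1)
        #define TLB_ATTR_LOCKED   (1u << 7)  /* lock bit: page pinned for the accelerator */

        struct tlb_entry {
            uint64_t virt_page;  /* virtual page number */
            uint64_t phys_page;  /* physical page number */
            uint32_t attr;       /* attribute bits, including the lock bit */
        };

        /* Set the lock bit in the attribute data of the entry mapping virt_page,
         * marking that page as reserved for the accelerator. Returns false if the
         * page is not currently present in the TLB. */
        bool tlb_lock_page(struct tlb_entry *tlb, size_t n_entries, uint64_t virt_page)
        {
            for (size_t i = 0; i < n_entries; i++) {
                if ((tlb[i].attr & TLB_ATTR_PRESENT) && tlb[i].virt_page == virt_page) {
                    tlb[i].attr |= TLB_ATTR_LOCKED;
                    return true;
                }
            }
            return false;  /* miss: the translation would have to be filled first */
        }

    In a real design the replacement and paging logic would also have to honor the lock bit (never evicting a locked entry or its page); the sketch only shows where the bit lives and how it is set.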

EFFICIENT PEER-TO-PEER COMMUNICATION SUPPORT IN SOC FABRICS
    85.
    Invention Application (Granted)

    Publication Number: US20130185370A1

    Publication Date: 2013-07-18

    Application Number: US13810033

    Filing Date: 2012-01-13

    Abstract: Methods and apparatus for efficient peer-to-peer communication support in interconnect fabrics. Network interfaces associated with agents are implemented to facilitate peer-to-peer transactions between agents in a manner that ensures data accesses correspond to the most recent update for each agent. This is implemented, in part, via non-posted “dummy writes” that an agent sends whenever the destination of its write transactions changes from one write to the next. The dummy writes ensure that data corresponding to previous writes reaches its destination prior to subsequent write and read transactions, thus ordering the peer-to-peer transactions without requiring a centralized transaction ordering entity.

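    As a rough illustration of the dummy-write ordering described above, the C sketch below tracks the destination of an agent's previous write at its network interface and issues a blocking, non-posted dummy write whenever the destination changes. The function and type names (fabric_post_write, fabric_nonposted_dummy_write, ni_write) are hypothetical stand-ins, not an actual fabric API; the stubs merely print what the hardware would do.

        #include <stdint.h>
        #include <stdbool.h>
        #include <stdio.h>

        typedef uint32_t agent_id_t;

        /* Stubs standing in for the fabric: a posted write is fire-and-forget, while a
         * non-posted dummy write would not return until it completes at the target. */
        static void fabric_post_write(agent_id_t dst, const void *data, uint32_t len)
        {
            (void)data;
            printf("posted write of %u bytes to agent %u\n", (unsigned)len, (unsigned)dst);
        }

        static void fabric_nonposted_dummy_write(agent_id_t dst)
        {
            printf("non-posted dummy write to agent %u, wait for completion\n", (unsigned)dst);
        }

        struct ni_state {
            agent_id_t last_dst;
            bool       has_last;
        };

        /* Issue a write from this agent. When the destination differs from the previous
         * write's destination, first send a non-posted dummy write to the old destination
         * so earlier posted writes are known to have arrived before newer traffic. */
        static void ni_write(struct ni_state *ni, agent_id_t dst, const void *data, uint32_t len)
        {
            if (ni->has_last && ni->last_dst != dst)
                fabric_nonposted_dummy_write(ni->last_dst);
            fabric_post_write(dst, data, len);
            ni->last_dst = dst;
            ni->has_last = true;
        }

        int main(void)
        {
            struct ni_state ni = { 0 };
            uint32_t payload = 0xabcd;
            ni_write(&ni, 1, &payload, sizeof payload);  /* first write: no dummy needed */
            ni_write(&ni, 1, &payload, sizeof payload);  /* same destination: no dummy */
            ni_write(&ni, 2, &payload, sizeof payload);  /* destination changed: dummy first */
            return 0;
        }

    Because the dummy write is only issued on a destination change, writes streaming to a single peer stay posted and fast; the ordering cost is paid only when traffic switches targets, which is what lets the scheme avoid a centralized ordering entity.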

Scheduling Workloads Based On Cache Asymmetry
    88.
    Invention Application (Granted)

    Publication Number: US20120233393A1

    Publication Date: 2012-09-13

    Application Number: US13042547

    Filing Date: 2011-03-08

    CPC classification number: G06F12/0842 G06F9/46 G06F9/4881 G06F2209/483

    Abstract: In one embodiment, a processor includes a first cache and a second cache, a first core associated with the first cache, and a second core associated with the second cache. The caches are of asymmetric sizes, and a scheduler can intelligently schedule threads to the cores based at least in part on an awareness of this asymmetry and on cache performance information obtained during a training phase of at least one of the threads.

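    The scheduling idea above can be sketched in C as follows: per-thread miss rates gathered during a training phase on each cache are compared, and a thread is placed on the core with the larger cache only when it benefits enough from the extra capacity. The thread_profile structure, the benefit threshold, and the core numbering are assumptions made for this example only.

        /* Illustrative two-core system in which core 0 is attached to the larger cache.
         * The training-phase miss rates would come from hardware performance counters. */
        struct thread_profile {
            int    tid;
            double miss_rate_small;  /* misses per kilo-instruction observed with the small cache */
            double miss_rate_large;  /* misses per kilo-instruction observed with the large cache */
        };

        /* Pick a core for a thread: if its miss rate drops by more than the threshold
         * when running with the larger cache, schedule it on the big-cache core;
         * otherwise leave that cache to threads that benefit more from it. */
        int pick_core(const struct thread_profile *p, double benefit_threshold)
        {
            double benefit = p->miss_rate_small - p->miss_rate_large;
            return (benefit > benefit_threshold) ? 0 : 1;
        }

    A production scheduler would combine such a placement rule with load balancing and would periodically re-profile threads whose cache behavior changes over time.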
