Power Switch with Source-Bias Mode for on-chip Powerdomain Supply Drooping
    41.
    Invention Application — Pending, Published

    Publication Number: US20160357211A1

    Publication Date: 2016-12-08

    Application Number: US15236743

    Application Date: 2016-08-15

    CPC classification number: G05F3/02 H03K17/6871 H03K17/6872

    Abstract: This invention is an electronic circuit with a low power retention mode. A single integrated circuit includes a circuit module and a droop switch circuit supplied by a voltage regulator. In a normal mode, a PMOS source-drain channel either connects the voltage regulator power to the circuit module power input or isolates them, depending on a power switch input. In a low power mode, a second PMOS connected between the first PMOS gate and output diode-connects the first PMOS. This supplies the circuit module from the voltage regulator power, reduced by a diode forward-bias drop. This lower voltage should be sufficient for flip-flops in the circuit module to retain their state, while not guaranteeing logic operation. There may be a plurality of chain-connected droop switches, each powering a corresponding circuit module.
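    The two operating modes described in the abstract can be sketched as a small behavioral model. This is an illustrative sketch only, not from the patent: the regulator voltage, the forward-bias drop, and all names are assumptions.

```python
# Behavioral sketch of the droop switch (illustrative values, not from the patent).
V_REG = 1.0        # voltage regulator output, volts (assumed)
DIODE_DROP = 0.45  # forward-bias drop of the diode-connected PMOS (assumed)

def droop_switch(power_switch_on: bool, low_power_mode: bool) -> float:
    """Return the voltage delivered to the circuit module's power input."""
    if low_power_mode:
        # The second PMOS diode-connects the first: the module sees the
        # regulator voltage less one forward-bias drop -- enough for
        # flip-flop state retention, but not guaranteed logic operation.
        return V_REG - DIODE_DROP
    # Normal mode: the first PMOS channel connects or isolates the supply
    # depending on the power switch input.
    return V_REG if power_switch_on else 0.0
```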


Local page translation and permissions storage for the page window in program memory controller
    42.
    Invention Grant — Granted

    Publication Number: US09514058B2

    Publication Date: 2016-12-06

    Application Number: US14579641

    Application Date: 2014-12-22

    Abstract: This invention provides a current page translation register storing virtual-to-physical address translation data for a single current page and, optionally, access permission data for the same page for program accesses. If an accessed address is within the current page, the address translation and permission data are accessed from the current page translation register. This register provides an additional level of caching of this data above the typical translation look-aside buffer and micro translation look-aside buffer. The smaller size of the current page translation register provides faster page hit/miss determination and faster data access using less power than the typical architecture. This is helpful for program accesses, which generally hit the current page more frequently than data accesses.
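    The lookup path the abstract describes can be sketched as a single-entry register sitting in front of a backing μTLB. The page size, the backing-store representation, and all names here are illustrative assumptions, not details from the patent.

```python
# Sketch of a single-entry "current page translation register" in front of a
# backing uTLB. Page size and all names are illustrative assumptions.
PAGE_SHIFT = 12  # assume 4 KiB pages

class CurrentPageRegister:
    def __init__(self, utlb):
        self.utlb = utlb   # backing translations: vpage -> (ppage, perms)
        self.vpage = None  # virtual page number of the cached entry
        self.entry = None  # cached (physical page, permissions)

    def translate(self, vaddr):
        vpage = vaddr >> PAGE_SHIFT
        if vpage != self.vpage:        # miss: fall back to the uTLB and refill
            self.vpage = vpage
            self.entry = self.utlb[vpage]
        ppage, perms = self.entry      # hit: one register compare, no uTLB access
        paddr = (ppage << PAGE_SHIFT) | (vaddr & ((1 << PAGE_SHIFT) - 1))
        return paddr, perms
```

    Repeated accesses within the same page resolve entirely from the register, which is the fast common case for sequential program fetch.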


Hiding Page Translation Miss Latency in Program Memory Controller By Selective Page Miss Translation Prefetch
    43.
    Invention Application — Granted

    Publication Number: US20160179700A1

    Publication Date: 2016-06-23

    Application Number: US14579654

    Application Date: 2014-12-22

    Abstract: This invention hides the page miss translation latency for program fetches. Whenever an access is requested by the CPU, the L1I cache controller does an a priori lookup of whether the virtual address plus the fetch packet count of expected program fetches crosses a page boundary. If the access crosses a page boundary, the L1I cache controller requests a second page translation along with the first page. This pipelines requests to the μTLB without waiting for the L1I cache controller to begin processing the second page's requests, making the second page translation request a deterministic prefetch. The translation information for the second page is stored locally in the L1I cache controller and used when the access crosses the page boundary.
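    The a priori boundary check can be sketched as follows. The page size and fetch-packet size are illustrative assumptions, not values from the patent.

```python
# Sketch of the a priori page-boundary check for program fetches.
PAGE_SIZE = 4096   # assumed page size in bytes
FETCH_PACKET = 32  # assumed fetch-packet size in bytes

def pages_to_translate(vaddr, packet_count):
    """Virtual page numbers whose translations should be requested up front
    for a fetch of `packet_count` fetch packets starting at `vaddr`."""
    start_page = vaddr // PAGE_SIZE
    end_page = (vaddr + packet_count * FETCH_PACKET - 1) // PAGE_SIZE
    if end_page != start_page:
        # The fetch stream crosses a page boundary: pipeline the second
        # translation request to the uTLB along with the first.
        return [start_page, end_page]
    return [start_page]
```

    Because the fetch-packet count makes the crossing point known in advance, the second request is a deterministic prefetch rather than a speculative one.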


Local Page Translation and Permissions Storage for the Page Window in Program Memory Controller
    44.
    Invention Application — Granted

    Publication Number: US20160179695A1

    Publication Date: 2016-06-23

    Application Number: US14579641

    Application Date: 2014-12-22

    Abstract: This invention provides a current page translation register storing virtual-to-physical address translation data for a current page and, optionally, access permission data for the same page for program accesses. If an accessed address is within the current page, the address translation and permission data are accessed from the current page translation register. This register provides an additional level of caching of this data above the typical translation look-aside buffer and micro translation look-aside buffer. The smaller size of the current page translation register provides faster page hit/miss determination and faster data access using less power than the typical architecture. This is helpful for program accesses, which generally hit the current page more frequently than data accesses.


Integer and Half Clock Step Division Digital Variable Clock Divider
    46.
    Invention Application — Granted

    Publication Number: US20130243148A1

    Publication Date: 2013-09-19

    Application Number: US13888050

    Application Date: 2013-05-06

    Abstract: A clock divider is provided that is configured to divide a high speed input clock signal by an odd, even, or fractional divide ratio. The input clock may have a frequency of 1 GHz or higher, for example. The input clock signal is divided to produce an output clock signal by first receiving a divide factor value F representative of a divide ratio N, where N may be an odd or an even integer. A fractional indicator indicates the divide ratio is N.5 when the fractional indicator is one and N when the fractional indicator is zero. F is set to 2(N.5)/2 for a fractional divide ratio and to N/2 for an integer divide ratio. A count indicator is asserted every N/2 input clock cycles when N is even. When N is odd, the count indicator is asserted alternately after N/2 input clock cycles and then after 1+N/2 input clock cycles. One period of an output clock signal is synthesized in response to each assertion of the count indicator when the fractional indicator indicates the divide ratio is N.5. One period of the output clock signal is synthesized in response to every two assertions of the count indicator when the fractional indicator indicates the divide ratio is an integer.
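    The count-indicator spacing for integer divide ratios can be sketched as below. This models only the interval pattern the abstract describes, and it assumes N/2 means the integer floor when N is odd (so the alternating N/2 and 1+N/2 intervals sum to N); the function name is illustrative.

```python
# Sketch of count-indicator timing for an integer divide ratio N.
# For odd N, N/2 is taken as the integer floor (an assumption), so that the
# alternating N/2 and 1 + N/2 intervals sum to N input clocks.
def count_indicator_times(n, num_assertions):
    """Input-clock cycle counts at which the count indicator is asserted."""
    times, t, long_interval = [], 0, False
    for _ in range(num_assertions):
        if n % 2 == 0:
            t += n // 2  # even N: asserted every N/2 input clocks
        else:
            # odd N: alternate N/2 cycles, then 1 + N/2 cycles
            t += n // 2 + (1 if long_interval else 0)
            long_interval = not long_interval
        times.append(t)
    return times
```

    Since one output period spans two assertions for an integer ratio, consecutive assertion pairs for N = 5 land five input clocks apart (2 + 3 = 5), giving the intended divide-by-N output.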


ZERO LATENCY PREFETCHING IN CACHES
    47.
    Invention Application

    Publication Number: US20250103503A1

    Publication Date: 2025-03-27

    Application Number: US18976568

    Application Date: 2024-12-11

    Abstract: This invention involves a cache system in a digital data processing apparatus including: a central processing unit core; a level one instruction cache; and a level two cache. The cache lines in the level two cache are twice the size of the cache lines in the level one instruction cache. The central processing unit core requests additional instructions when needed via a request address. Upon a miss in the level one instruction cache that causes a hit in the upper half of a level two cache line, the level two cache supplies the upper half of the cache line to the level one instruction cache. On a following level two cache memory cycle, the level two cache supplies the lower half of the cache line to the level one instruction cache. This cache technique thus prefetches the lower half of the level two cache line, employing fewer resources than an ordinary prefetch.
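    The service order can be sketched as below. The line sizes are illustrative, and "upper half" is assumed here to mean the higher-addressed half of the level two line; neither assumption comes from the patent text.

```python
# Sketch of the L2 service order for an L1I miss. Line sizes are assumed, and
# "upper half" is taken to mean the higher-addressed half of the L2 line.
L1_LINE = 64           # assumed L1I line size in bytes
L2_LINE = 2 * L1_LINE  # L2 lines are twice the L1I line size

def l2_service_order(miss_addr):
    """Base addresses of the L1-line-sized halves the L2 returns, in order."""
    offset = miss_addr % L2_LINE
    l2_base = miss_addr - offset
    if offset >= L1_LINE:
        # Hit in the upper half: send the demanded upper half first, then
        # stream the lower half on the following L2 cycle (cheap prefetch).
        return [l2_base + L1_LINE, l2_base]
    return [l2_base]
```

    The prefetched lower half rides the same L2 line read as the demand access, which is why it costs fewer resources than issuing an ordinary prefetch request.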
