    SECURE MEMORY CONTROLLER
    3.
    Invention application

    Publication No.: US20170285986A1

    Publication date: 2017-10-05

    Application No.: US15086523

    Filing date: 2016-03-31

    Applicant: Vinodh Gopal

    Inventor: Vinodh Gopal

    IPC class: G06F3/06

    Abstract: Methods and apparatus for a secure memory controller. The secure memory controller includes circuitry and logic programmed to prevent malicious code from overwriting protected regions of system memory. The memory controller observes memory access patterns and trains itself to identify thread stacks and addresses relating to the thread stacks, including stack-frame pointers and return addresses. In one aspect, the memory controller prevents a return address from being overwritten until a proper return from a function call is detected. The memory controller is also configured to prevent malicious code from overwriting page table entries (PTEs) in page tables. Pages containing PTEs are identified, and user-mode code is prevented from accessing the PTEs. The PTEs are also scanned to detect corrupted PTEs resulting from bit manipulation by malicious code.
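
    A minimal sketch of the return-address protection idea described in the abstract (not from the patent; the class, the call/return event interface, and all addresses are hypothetical): the controller records the slot holding a return address when a call is observed and rejects writes to that slot until the matching return is seen.

    # Illustrative model of return-address write protection (hypothetical names).
    class SecureMemoryControllerModel:
        def __init__(self):
            self.memory = {}        # address -> value
            self.protected = set()  # addresses currently holding live return addresses

        def on_call(self, ret_addr_slot, return_address):
            # A CALL was observed: the return address was pushed to ret_addr_slot.
            self.memory[ret_addr_slot] = return_address
            self.protected.add(ret_addr_slot)

        def on_return(self, ret_addr_slot):
            # The matching RET was observed: the slot may be overwritten again.
            self.protected.discard(ret_addr_slot)

        def write(self, address, value):
            # A store from program code: blocked if it targets a live return address.
            if address in self.protected:
                raise PermissionError(f"blocked overwrite of return address at {address:#x}")
            self.memory[address] = value

    ctrl = SecureMemoryControllerModel()
    ctrl.on_call(ret_addr_slot=0x7FFFF000, return_address=0x401020)
    ctrl.write(0x7FFFEFF8, 0xDEAD)           # ordinary stack write: allowed
    try:
        ctrl.write(0x7FFFF000, 0x41414141)   # attempted return-address overwrite: blocked
    except PermissionError as err:
        print(err)
    ctrl.on_return(0x7FFFF000)               # after the return, the slot is writable again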

    ELECTRONIC DEVICE AND METHOD FOR FABRICATING THE SAME
    4.
    Invention application
    Status: In force

    Publication No.: US20160284992A1

    Publication date: 2016-09-29

    Application No.: US14856488

    Filing date: 2015-09-16

    Applicant: SK hynix Inc.

    Inventor: Woo-Tae LEE

    Abstract: An electronic device includes a semiconductor memory. The semiconductor memory includes a vertical electrode layer formed over a substrate and extending in a vertical direction substantially perpendicular to a surface of the substrate; an interlayer dielectric layer and a structure formed over the substrate and alternately stacked along the vertical electrode layer, wherein the structure includes a horizontal electrode layer and a conductive base layer located over or under the horizontal electrode layer; a variable resistance layer interposed between the vertical electrode layer and the base layer and sharing a common element with the base layer; and a groove interposed between the vertical electrode layer and the horizontal electrode layer, insulating the vertical electrode layer and the horizontal electrode layer from each other.


    DATA PROCESSING SYSTEM HAVING COMBINED MEMORY BLOCK AND STACK PACKAGE
    5.
    Invention application
    Status: Pending (published)

    Publication No.: US20160210235A1

    Publication date: 2016-07-21

    Application No.: US15063012

    Filing date: 2016-03-07

    Applicant: SK hynix Inc.

    IPC class: G06F12/08 G06F13/40

    Abstract: A data processing system includes a central processing unit (CPU); a control block configured to interface with the CPU; a cache memory configured to interface with the control block and arranged to be spaced from the CPU by a first distance; and a combined memory block configured to interface with the control block, arranged to be spaced from the CPU by a second distance larger than the first distance, and composed of a working memory and a storage memory. The combined memory block is composed of a plurality of stacked memory layers, each composed of a plurality of variable resistance memory cells. The working memory is allocated to one memory layer selected from among the plurality of memory layers. The storage memory is allocated to the remaining memory layers.

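    To make the working/storage split concrete, here is a small Python sketch (not from the patent; the layer count, layer size, and function names are hypothetical) that assigns one stacked layer to the working memory and the remaining layers to the storage memory.

    # Illustrative allocation of stacked memory layers (hypothetical sizes/names):
    # one layer serves as working memory, the rest as storage memory.
    LAYER_COUNT = 4
    LAYER_SIZE = 1 << 20   # assume 1 MiB of variable-resistance cells per layer

    def allocate_layers(working_layer_index=0):
        regions = {"working": [], "storage": []}
        for layer in range(LAYER_COUNT):
            base = layer * LAYER_SIZE
            region = (base, base + LAYER_SIZE - 1)
            role = "working" if layer == working_layer_index else "storage"
            regions[role].append(region)
        return regions

    for role, ranges in allocate_layers().items():
        for lo, hi in ranges:
            print(f"{role:8s} {lo:#010x}-{hi:#010x}")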

    Low latency thread context caching
    6.
    Granted patent
    Status: In force

    Publication No.: US09384036B1

    Publication date: 2016-07-05

    Application No.: US14059218

    Filing date: 2013-10-21

    Applicant: Google Inc.

    Abstract: A method includes performing one or more operations as requested by a thread executing on a processor, the thread having a thread context; receiving a park request from the thread, the park request received following a request from the thread for a low latency resource, wherein a cache response time is less than or equal to a resource response threshold, allowing the thread context to be stored in and retrieved from the cache in less time than it takes to complete the request for the low latency resource; storing the thread context in the cache; detecting that a resume condition has occurred; retrieving the thread context from the cache; and resuming execution of the thread.

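    The park/store/resume/retrieve sequence can be pictured with a short Python sketch (not from the patent; the class, method names, and threading-based event model are hypothetical): a thread saves its context into a fast cache when it parks and gets it back once the resume condition is signalled.

    # Illustrative park/resume flow (hypothetical names and event model).
    import threading

    class ContextCache:
        """Holds a parked thread's context until a resume condition fires."""

        def __init__(self):
            self._contexts = {}   # thread id -> saved context
            self._events = {}     # thread id -> resume event

        def _event(self, thread_id):
            return self._events.setdefault(thread_id, threading.Event())

        def park(self, thread_id, context):
            # Store the context, wait for the resume condition, then retrieve it.
            self._contexts[thread_id] = context
            self._event(thread_id).wait()
            return self._contexts.pop(thread_id)

        def resume(self, thread_id):
            # Signal that the request for the low latency resource has completed.
            self._event(thread_id).set()

    cache = ContextCache()

    def worker():
        ctx = {"pc": 0x400123, "regs": [0] * 16}   # stand-in for a thread context
        restored = cache.park(thread_id=1, context=ctx)
        print("resumed with", restored)

    t = threading.Thread(target=worker)
    t.start()
    cache.resume(thread_id=1)   # e.g. the low latency resource is now available
    t.join()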

    Using predictions for store-to-load forwarding
    7.
    Granted patent
    Status: In force

    Publication No.: US09367455B2

    Publication date: 2016-06-14

    Application No.: US14018562

    Filing date: 2013-09-05

    IPC class: G06F9/30 G06F12/08 G06F12/12

    Abstract: The described embodiments include a core that uses predictions for store-to-load forwarding. In the described embodiments, the core comprises a load-store unit, a store buffer, and a prediction mechanism. During operation, the prediction mechanism generates a prediction that a load will be satisfied using data forwarded from the store buffer because the load loads data from a memory location in a stack. Based on the prediction, the load-store unit first sends a request for the data to the store buffer in an attempt to satisfy the load using data forwarded from the store buffer. If data is returned from the store buffer, the load is satisfied using the data. However, if the attempt to satisfy the load using data forwarded from the store buffer is unsuccessful, the load-store unit then separately sends a request for the data to a cache to satisfy the load.

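    A minimal Python sketch of the predict-then-forward flow (not from the patent; the predictor, the address ranges, and all names are hypothetical): loads predicted to target the stack are first sent to the store buffer, and fall back to the cache only if no forwardable store is found.

    # Illustrative store-to-load forwarding with a simple stack-based predictor.
    class StoreBuffer:
        def __init__(self):
            self._pending = {}                 # address -> youngest buffered store value

        def store(self, address, value):
            self._pending[address] = value

        def forward(self, address):
            return self._pending.get(address)  # None -> no forwardable store

    def predict_stack_access(address, stack_base, stack_limit):
        # Predict forwarding when the load targets the stack region.
        return stack_limit <= address < stack_base

    def load(address, store_buffer, cache, stack_base, stack_limit):
        if predict_stack_access(address, stack_base, stack_limit):
            value = store_buffer.forward(address)   # first request goes to the store buffer
            if value is not None:
                return value
        return cache.get(address, 0)                # otherwise request the data from the cache

    sb, cache = StoreBuffer(), {0x1000: 7}
    sb.store(0x7FF0, 42)                            # a recent store to a stack slot
    print(load(0x7FF0, sb, cache, stack_base=0x8000, stack_limit=0x7000))  # forwarded: 42
    print(load(0x1000, sb, cache, stack_base=0x8000, stack_limit=0x7000))  # from cache: 7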

    Code interpretation using stack state information
    9.
    Granted patent
    Status: Expired

    Publication No.: US07424596B2

    Publication date: 2008-09-09

    Application No.: US10813599

    Filing date: 2004-03-31

    IPC class: G06F9/30

    CPC class: G06F9/45504 G06F2212/451

    Abstract: Executing an instruction on an operand stack, including performing a stack-state aware translation of the instruction to threaded code to determine an operand stack state for the instruction, dispatching the instruction according to the operand stack state for the instruction, and executing the instruction.

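    One way to picture stack-state aware translation and dispatch is the following Python sketch (not from the patent; the handler set and the state encoding are hypothetical): each instruction is resolved to a handler chosen by whether the top of the operand stack is currently cached in a local variable, and execution then simply calls the pre-selected handlers.

    # Illustrative stack-state aware translation and dispatch (hypothetical design).
    class VM:
        def __init__(self):
            self.stack, self.tos = [], None   # memory stack plus a cached top-of-stack

    def push_s0(vm, imm):
        vm.tos = imm                          # nothing cached yet: just cache the value

    def push_s1(vm, imm):
        vm.stack.append(vm.tos)               # spill the cached top-of-stack first
        vm.tos = imm

    def add_s1(vm, _):
        vm.tos = vm.stack.pop() + vm.tos      # one operand on the stack, one cached

    HANDLERS = {("push", 0): push_s0, ("push", 1): push_s1, ("add", 1): add_s1}
    NEXT_STATE = {"push": 1, "add": 1}        # stack state each opcode leaves behind

    def translate(program):
        # Stack-state aware translation: resolve each opcode to a concrete handler.
        state, threaded = 0, []
        for op, arg in program:
            threaded.append((HANDLERS[(op, state)], arg))
            state = NEXT_STATE[op]
        return threaded

    def execute(threaded):
        vm = VM()
        for handler, arg in threaded:         # dispatch: call the pre-selected handlers
            handler(vm, arg)
        return vm.tos

    print(execute(translate([("push", 2), ("push", 3), ("add", None)])))   # prints 5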

    Method frame storage using multiple memory circuits
    10.
    Granted patent
    Status: Expired

    Publication No.: US06950923B2

    Publication date: 2005-09-27

    Application No.: US10346886

    Filing date: 2003-01-17

    Abstract: A memory architecture in accordance with an embodiment of the present invention improves the speed of method invocation. Specifically, method frames of method calls are stored in two different memory circuits. The first memory circuit stores the execution environment of each method call, and the second memory circuit stores parameters, variables, or operands of the method calls. In one embodiment the execution environment includes a return program counter, a return frame, a return constant pool, a current method vector, and a current monitor address. In some embodiments, the memory circuits are stacks; therefore, a stack management unit can be used to cache either or both memory circuits. The stack management unit can include a stack cache to accelerate data transfers between a stack-based computing system and the stacks. In one embodiment, the stack management unit includes a stack cache, a dribble manager unit, and a stack control unit. The dribble manager unit includes a fill control unit and a spill control unit. Since the vast majority of memory accesses to the stack occur at or near the top of the stack, the dribble manager unit maintains the top portion of the stack in the stack cache. When the stack-based computing system is popping data off of the stack and a fill condition occurs, the fill control unit transfers data from the stack to the bottom of the stack cache to maintain the top portion of the stack in the stack cache. Typically, a fill condition occurs as the stack cache becomes empty, and a spill condition occurs as the stack cache becomes full.

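    The fill/spill behavior of the stack cache can be pictured with a short Python sketch (not from the patent; the capacity, thresholds, and names are hypothetical): pushes that overflow the cache spill its oldest entries to backing memory, and pops that drain it refill the cache from backing memory, so the top portion of the stack stays cached.

    # Illustrative stack cache with simple fill/spill control (hypothetical design).
    from collections import deque

    class StackCache:
        def __init__(self, capacity=4):
            self.capacity = capacity
            self.cache = deque()        # holds the top portion of the stack
            self.backing = []           # slower backing memory for the rest

        def push(self, value):
            self.cache.append(value)
            if len(self.cache) > self.capacity:        # spill condition: cache is full
                self.backing.append(self.cache.popleft())

        def pop(self):
            value = self.cache.pop()
            if not self.cache and self.backing:        # fill condition: cache ran empty
                self.cache.appendleft(self.backing.pop())
            return value

    s = StackCache(capacity=2)
    for v in range(5):
        s.push(v)                       # oldest values spill to backing memory
    print([s.pop() for _ in range(5)])  # [4, 3, 2, 1, 0]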