Using breakpoints for debugging in a RISC microprocessor architecture

    Publication number: US20080077911A1

    Publication date: 2008-03-27

    Application number: US11981278

    Filing date: 2007-10-31

    IPC classification: G06F9/44

    Abstract: A microprocessor executes at 100 native MIPS peak performance with a 100-MHz internal clock frequency. Central processing unit (CPU) instruction sets are hardwired, allowing most instructions to execute in a single cycle. A “flow-through” design allows the next instruction to start before the prior instruction completes, thus increasing performance. A microprocessing unit (MPU) contains 52 general-purpose registers, including 16 global data registers, an index register, a count register, a 16-deep addressable register/return stack, and an 18-deep operand stack. Both stacks contain an index register in the top elements, are cached on chip, and when required, automatically spill to and refill from external memory. The stacks minimize data movement and memory access during procedure calls, parameter passing, and variable assignments. Additionally, the MPU contains a mode/status register and 41 locally addressed registers for I/O, control, configuration, and status. The CPU contains both a high-performance, zero-operand, dual-stack architecture MPU, and an input-output processor (IOP) that executes instructions to transfer data, count events, measure time, and perform other timing-dependent functions. A zero-operand stack architecture eliminates operand bits. Stacks also minimize register saves and loads within and across procedures, thus allowing shorter instruction sequences and faster-running code. Instructions are simple to decode and execute, allowing the MPU and IOP to issue and complete instructions in a single clock cycle, each at 100 native MIPS peak execution. Using 8-bit opcodes, the CPU obtains up to four instructions from memory each time an instruction fetch or pre-fetch is performed. These instructions can be repeated without rereading them from memory. This maintains high performance when connected directly to DRAM, without a cache.
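    The automatic spill/refill behavior of the on-chip stacks can be sketched as follows. This is a minimal illustrative model, not the patented hardware design; the class and method names are assumptions, and only the 18-element operand-stack depth is taken from the abstract.

```python
ON_CHIP_DEPTH = 18  # operand-stack depth given in the abstract

class SpillingStack:
    """Illustrative model of an on-chip stack backed by external memory."""

    def __init__(self):
        self.on_chip = []    # fast on-chip cells (top of stack)
        self.external = []   # backing external memory (e.g. DRAM)

    def push(self, value):
        if len(self.on_chip) == ON_CHIP_DEPTH:
            # on-chip cache full: spill the bottom element to external memory
            self.external.append(self.on_chip.pop(0))
        self.on_chip.append(value)

    def pop(self):
        if not self.on_chip and self.external:
            # on-chip cache empty: refill from external memory
            self.on_chip.append(self.external.pop())
        return self.on_chip.pop()
```

    Pushing more than 18 values spills the oldest entries; popping past the on-chip contents transparently refills them, so the program sees one continuous LIFO stack.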

    Detecting the boundaries of memory in a RISC microprocessor architecture

    Publication number: US20070271442A1

    Publication date: 2007-11-22

    Application number: US11881284

    Filing date: 2007-07-26

    IPC classification: G06F9/30

    Abstract: A microprocessor executes at 100 native MIPS peak performance with a 100-MHz internal clock frequency. Central processing unit (CPU) instruction sets are hardwired, allowing most instructions to execute in a single cycle. A “flow-through” design allows the next instruction to start before the prior instruction completes, thus increasing performance. A microprocessing unit (MPU) contains 52 general-purpose registers, including 16 global data registers, an index register, a count register, a 16-deep addressable register/return stack, and an 18-deep operand stack. Both stacks contain an index register in the top elements, are cached on chip, and when required, automatically spill to and refill from external memory. The stacks minimize data movement and memory access during procedure calls, parameter passing, and variable assignments. Additionally, the MPU contains a mode/status register and 41 locally addressed registers for I/O, control, configuration, and status. The CPU contains both a high-performance, zero-operand, dual-stack architecture MPU, and an input-output processor (IOP) that executes instructions to transfer data, count events, measure time, and perform other timing-dependent functions. A zero-operand stack architecture eliminates operand bits. Stacks also minimize register saves and loads within and across procedures, thus allowing shorter instruction sequences and faster-running code. Instructions are simple to decode and execute, allowing the MPU and IOP to issue and complete instructions in a single clock cycle, each at 100 native MIPS peak execution. Using 8-bit opcodes, the CPU obtains up to four instructions from memory each time an instruction fetch or pre-fetch is performed. These instructions can be repeated without rereading them from memory. This maintains high performance when connected directly to DRAM, without a cache.

    Efficient splitting and mixing of streaming-data frames for processing through multiple processing modules
    Patent application (in force)

    Publication number: US20050286552A1

    Publication date: 2005-12-29

    Application number: US11204683

    Filing date: 2005-08-16

    IPC classification: H04J3/16

    CPC classification: G06F15/8053

    Abstract: Streaming data is processed through one or more pipes of connected modules including mixers and/or splitters. The data is carried in composite physically allocated frames having virtual subframes associated with different ones of the splitters, mixers, and other transform modules. Nesting trees and pipe control tables represent the structure of the pipes. A frame allocator is assigned to a particular module in a pipe. Rather than issuing a control transaction to all modules when any one of them completes an operation upon its source data, a control manager requests a module to begin its operation only when all of its input subframes have become available. Frame control tables record when any module has completed an operation, and a pipe control table lists which modules provide data to which other modules.
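    The readiness rule in the abstract (a module begins only once every one of its input subframes is available) can be sketched with a pipe control table mapping each module to its producers, plus a completion set standing in for the frame control tables. The module names and data layout here are hypothetical, not taken from the patent.

```python
# pipe control table: module -> modules that feed it (hypothetical pipe)
pipe_table = {
    "splitter": [],
    "effect_a": ["splitter"],
    "effect_b": ["splitter"],
    "mixer":    ["effect_a", "effect_b"],
}

completed = set()  # stands in for the frame control tables

def ready(module):
    """A module may start once all of its producers have completed."""
    return all(p in completed for p in pipe_table[module])

def complete(module):
    """Record a completion; return downstream modules that just became runnable."""
    completed.add(module)
    return [m for m, srcs in pipe_table.items()
            if module in srcs and m not in completed and ready(m)]
```

    Note that completing `effect_a` alone wakes nothing: the mixer waits until `effect_b` also finishes, which is exactly the behavior the abstract contrasts with broadcasting a control transaction to all modules.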

    Resource manager architecture
    Granted patent (in force)

    Publication number: US06799208B1

    Publication date: 2004-09-28

    Application number: US09563726

    Filing date: 2000-05-02

    IPC classification: G06F13/00

    Abstract: Resource management architectures implemented in computer systems to manage resources are described. In one embodiment, a general architecture includes a resource manager and multiple resource providers that support one or more resource consumers such as a system component or application. Each provider is associated with a resource and acts as the manager for the resource when interfacing with the resource manager. The resource manager arbitrates access to the resources provided by the resource providers on behalf of the consumers. A policy manager sets various policies that are used by the resource manager to allocate resources. One policy is a priority-based policy that distinguishes which applications and/or users have priority over others to use the resources. A resource consumer creates an “activity” at the resource manager and builds one or more “configurations” that describe various sets of preferred resources required to perform the activity. Each resource consumer can specify one or more configurations for each activity. If multiple configurations are specified, the resource consumer can rank them according to preference. This allows the resource consumers to be dynamically changed from one configuration to another as operating conditions change.
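    The ranked-configuration idea can be illustrated with a small sketch: the manager grants the most-preferred configuration whose resources are all currently available, falling back down the ranking otherwise. The resource names and quantities below are invented for illustration and are not from the patent.

```python
# Currently available resources (hypothetical)
available = {"cpu": 2, "dsp": 0, "mem_mb": 64}

# Configurations for one activity, ranked by preference (best first)
activity_configs = [
    {"cpu": 1, "dsp": 1, "mem_mb": 32},   # preferred: uses a hardware DSP
    {"cpu": 2, "dsp": 0, "mem_mb": 48},   # fallback: software-only path
]

def allocate(configs, avail):
    """Return the first (most preferred) configuration that can be satisfied."""
    for cfg in configs:
        if all(avail.get(res, 0) >= need for res, need in cfg.items()):
            return cfg
    return None  # no configuration satisfiable right now
```

    With no DSP available, the manager skips the preferred configuration and grants the software fallback; if a DSP later frees up, re-running the same ranking switches the consumer to the better configuration, matching the dynamic reconfiguration the abstract describes.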

    Multi-media synchronization
    Granted patent (expired)

    Publication number: US5661665A

    Publication date: 1997-08-26

    Application number: US669719

    Filing date: 1996-06-26

    Abstract: A method is described for synchronously rendering digitized media streams. Each digitized media stream is made up of a sequence of media samples having media-specified timing. The described method includes calculating presentation times for media samples of different media streams based in part on the media-specified timing of the media samples and also based in part upon the desired synchronization of the different media streams relative to each other. The calculated presentation times indicate when the media samples should be rendered relative to a common clock reference. The method further includes attaching a media sample's calculated presentation time to the media sample, and then routing the media sample to a sink component for rendering. The sink component renders the respective media samples of the digitized media streams at the approximate presentation times of the samples relative to the common clock reference.
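    A hedged sketch of the presentation-time calculation described above: each sample's media-specified time is combined with a per-stream start offset (the desired synchronization relative to other streams) and a common clock base. The function and parameter names are assumptions, and times are integer milliseconds purely for exactness in the example.

```python
def presentation_times_ms(sample_times_ms, stream_start_ms, clock_base_ms):
    """Map media-specified sample times onto the common clock.

    sample_times_ms -- media-specified timing of each sample within its stream
    stream_start_ms -- desired start of this stream relative to the others
    clock_base_ms   -- common clock reference shared by all streams
    """
    return [clock_base_ms + stream_start_ms + t for t in sample_times_ms]

# Audio starts at the clock base; video is deliberately offset by 500 ms,
# so both streams are rendered in the desired relative alignment.
audio = presentation_times_ms([0, 20, 40], stream_start_ms=0,   clock_base_ms=100_000)
video = presentation_times_ms([0, 40],     stream_start_ms=500, clock_base_ms=100_000)
```

    The sink component would then render each sample when the common clock reaches its attached presentation time, which is how the two streams stay synchronized without sharing any per-stream timing.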
