High resolution zoom: a novel digital zoom for digital video camera
    1.
    Invention Application
    High resolution zoom: a novel digital zoom for digital video camera (In force)

    Publication No.: US20060125937A1

    Publication Date: 2006-06-15

    Application No.: US11010032

    Filing Date: 2004-12-10

    IPC Class: H04N5/262

    CPC Class: H04N5/23296 H04N5/232

    Abstract: A camera system and a method for zooming the camera system are disclosed. The method generally includes the steps of (A) generating an electronic image by sensing an optical image received by the camera, the sensing including electronic cropping to a window size to establish an initial resolution for the electronic image, (B) generating a final image by decimating the electronic image by a decimation factor to a final resolution smaller than the initial resolution, and (C) changing a zoom factor for the final image by adjusting both the decimation factor and the window size.
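    A short sketch can illustrate the crop-and-decimation relationship described in the abstract: the output resolution stays fixed while the crop window and the decimation factor move together to set the zoom factor. This is a minimal interpretation assuming a fixed output width; the sensor size, output size, and function names below are hypothetical, not taken from the patent.

```c
/* Minimal sketch (not the patented implementation) of crop-plus-decimation
 * digital zoom: the final resolution stays fixed while the crop window size
 * and the decimation factor are adjusted together. */
#include <stdio.h>

/* For a desired zoom factor, pick a decimation factor and a crop window so
 * that window_width / decimation == final_width. */
static void plan_zoom(double zoom, int sensor_w, int final_w,
                      double *decimation, int *window_w)
{
    double max_zoom = (double)sensor_w / final_w;  /* crop with no decimation */
    if (zoom < 1.0) zoom = 1.0;
    if (zoom > max_zoom) zoom = max_zoom;

    *decimation = max_zoom / zoom;                     /* zoom 1.0 -> full decimation */
    *window_w   = (int)(final_w * *decimation + 0.5);  /* crop window on the sensor   */
}

int main(void)
{
    double dec;
    int win;
    for (double z = 1.0; z <= 4.0; z += 1.0) {   /* hypothetical 1920 -> 480 pipeline */
        plan_zoom(z, 1920, 480, &dec, &win);
        printf("zoom %.1fx: window %d px, decimation %.2f\n", z, win, dec);
    }
    return 0;
}
```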


    Method and apparatus for flow control in packet-switched computer system
    3.
    Invention Grant
    Method and apparatus for flow control in packet-switched computer system (Expired)

    Publication No.: US5907485A

    Publication Date: 1999-05-25

    Application No.: US414875

    Filing Date: 1995-03-31

    IPC Class: G06F9/46 G06F13/24 G05B15/00

    CPC Class: G06F9/546 G06F13/24

    Abstract: This invention describes a link-by-link flow control method for packet-switched uniprocessor and multiprocessor computer systems that maximizes system resource utilization and throughput, and minimizes system latency. The computer system comprises one or more master interfaces, one or more slave interfaces, and an interconnect system controller which provides dedicated transaction request queues for each master interface and controls the forwarding of transactions to each slave interface. The master interface keeps track of the number of requests in its dedicated queue in the system controller, and the system controller keeps track of the number of requests in each slave interface queue. Both the master interface and the system controller know the maximum capacity of the queue immediately downstream, and neither issues more transaction requests than the downstream queue can accommodate. An acknowledgment from the downstream queue indicates to the sender that there is space in it for another transaction. Thus no system resources are wasted trying to send a request to a queue that is already full.
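    The queue-tracking rule above amounts to credit-style flow control: the sender holds a request whenever the downstream queue is full and releases one slot per acknowledgment. The sketch below illustrates that idea only; the structure, capacity, and function names are assumptions, not the patent's interfaces.

```c
/* Hedged sketch of link-by-link (credit-style) flow control: a sender never
 * issues more requests than the downstream queue can hold, and each
 * acknowledgment returns one slot. */
#include <stdbool.h>
#include <stdio.h>

struct link {
    int capacity;     /* known size of the queue immediately downstream */
    int outstanding;  /* requests sent but not yet acknowledged          */
};

static bool try_send(struct link *l)
{
    if (l->outstanding >= l->capacity)
        return false;             /* downstream queue would overflow: hold */
    l->outstanding++;             /* request forwarded downstream          */
    return true;
}

static void on_ack(struct link *l)
{
    if (l->outstanding > 0)
        l->outstanding--;         /* one slot freed downstream             */
}

int main(void)
{
    struct link master_to_sc = { .capacity = 2, .outstanding = 0 };
    printf("%d\n", try_send(&master_to_sc)); /* 1: accepted                  */
    printf("%d\n", try_send(&master_to_sc)); /* 1: accepted                  */
    printf("%d\n", try_send(&master_to_sc)); /* 0: queue full, sender waits  */
    on_ack(&master_to_sc);
    printf("%d\n", try_send(&master_to_sc)); /* 1: slot freed by the ack     */
    return 0;
}
```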


    Method and apparatus for implementing non-faulting load instruction
    4.
    Invention Grant
    Method and apparatus for implementing non-faulting load instruction (Expired)

    Publication No.: US5842225A

    Publication Date: 1998-11-24

    Application No.: US395579

    Filing Date: 1995-02-27

    Applicant: Leslie Kohn

    Inventor: Leslie Kohn

    Abstract: A non-fault-only (NFO) bit is included in the translation table entry for each page. If the NFO bit is set, non-faulting loads accessing the page will cause translations to occur. Any other access to the non-fault-only page is an error and will cause the processor to fault. A non-faulting load behaves like a normal load except that it never produces a fault, even when applied to a page with the NFO bit set. The NFO bit in a translation table entry marks a page that is mapped for safe access by non-faulting loads but that still causes a fault on other, normal accesses; the NFO bit thus indicates which pages are illegal for normal accesses. Selected pages, such as the virtual page 0x0, can be mapped in the translation table. Whenever a null pointer is dereferenced by a non-faulting load, a translation lookaside buffer (TLB) hit will occur, and zero will be returned immediately without trapping to software to find the requested page. A second embodiment provides that when the operating system software routine invoked by a TLB miss discovers that a non-faulting load has attempted to access an illegal virtual page that was not previously translated in the translation table, the operating system creates a translation table entry for that virtual page, mapping it to a physical page of all zeros and asserting the NFO bit for that virtual page.
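    A rough model of the NFO-bit access rules can be written in a few lines. The sketch below collapses the MMU, TLB, and trap handling into one function and uses hypothetical field names; it illustrates the behavior described in the abstract, not the actual hardware or operating-system logic.

```c
/* Illustrative sketch of how an NFO bit in a translation-table entry could
 * gate accesses: a non-faulting load to an NFO page translates normally
 * (a null dereference yields zero), while any other access to such a page
 * faults. Field names are hypothetical. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct tte {            /* simplified translation-table entry */
    bool valid;
    bool nfo;           /* non-fault-only: safe only for non-faulting loads */
    uint64_t zero_page; /* stand-in for a physical page of all zeros        */
};

/* Returns true and a value on success; false means the access faults. */
static bool access_page(const struct tte *t, bool nonfaulting_load,
                        uint64_t *value)
{
    if (!t->valid) {
        if (nonfaulting_load) { *value = 0; return true; } /* never faults  */
        return false;                                      /* normal fault  */
    }
    if (t->nfo && !nonfaulting_load)
        return false;      /* normal access to a non-fault-only page: error */
    *value = t->zero_page; /* e.g. virtual page 0x0 mapped to all zeros     */
    return true;
}

int main(void)
{
    struct tte page0 = { .valid = true, .nfo = true, .zero_page = 0 };
    uint64_t v = 0;
    printf("non-faulting load: %s\n", access_page(&page0, true, &v) ? "ok" : "fault");
    printf("normal load:       %s\n", access_page(&page0, false, &v) ? "ok" : "fault");
    return 0;
}
```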


    Hit bit for indicating whether load buffer entries will hit a cache when they reach buffer head
    5.
    发明授权
    Hit bit for indicating whether load buffer entries will hit a cache when they reach buffer head (Expired)

    Publication No.: US5802575A

    Publication Date: 1998-09-01

    Application No.: US946611

    Filing Date: 1997-10-07

    Abstract: A dual-ported tag array of a cache allows the tag array to be accessed simultaneously by miss data returning for older LOAD instructions and by a new LOAD instruction checking for a cache hit in the same cycle. Because a load buffer queues LOAD instructions, the cache tags for older LOAD instructions that missed the cache return later, while new LOAD instructions are accessing the tag array to check for cache hits. A method and apparatus for calculating and maintaining a hit bit in a load buffer determine whether a newly dispatched LOAD will hit the cache after it has been queued into the load buffer and all older LOADs have been processed. A load buffer data entry includes the hit bit and all information necessary to process the LOAD instruction and to calculate the hit bits for future LOAD instructions that must be buffered. A method and apparatus for servicing LOAD instructions, in which access to the data array portion of a cache is decoupled from access to the tag array portion, allow delayed access to the data array after a LOAD has been delayed in the load buffer, without reaccessing the tag array.
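    The bookkeeping described above can be pictured as a load-buffer entry that carries a precomputed hit bit, so the tag array need not be probed again when the entry reaches the buffer head. The entry layout and the prediction rule below are simplified assumptions for illustration, not the patented design.

```c
/* Rough sketch of load-buffer bookkeeping: each queued LOAD records, at
 * dispatch time, whether it will hit the cache once it drains from the
 * buffer head. Layout and rule are illustrative only. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct load_buffer_entry {
    uint64_t addr;        /* effective address of the queued LOAD            */
    uint8_t  dest_reg;    /* destination register to write when data returns */
    bool     hit;         /* precomputed: will this LOAD hit when it drains? */
};

/* Dispatch-time decision: the new LOAD will hit if its line is already in
 * the cache, or if an older buffered miss to the same line will have filled
 * it by the time this entry reaches the buffer head. */
static bool predict_hit(bool line_present_in_cache,
                        bool older_buffered_miss_to_same_line)
{
    return line_present_in_cache || older_buffered_miss_to_same_line;
}

int main(void)
{
    struct load_buffer_entry e = {
        .addr = 0x1000, .dest_reg = 3,
        .hit  = predict_hit(false, true),  /* an older miss covers this line */
    };
    printf("LOAD @%#llx: %s when it reaches the buffer head\n",
           (unsigned long long)e.addr, e.hit ? "hit" : "miss");
    return 0;
}
```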


    Cache coherent computer system that minimizes invalidation and copyback operations
    6.
    发明授权
    Cache coherent computer system that minimizes invalidation and copyback operations (Expired)

    Publication No.: US5706463A

    Publication Date: 1998-01-06

    Application No.: US854418

    Filing Date: 1997-05-12

    IPC Class: G06F12/08

    CPC Class: G06F12/0822 G06F12/0815

    Abstract: A multi-processor computer system is disclosed that reduces the occurrence of invalidate and copyback operations across a memory interconnect by disabling a first-write optimization of a cache coherency protocol for data that is not likely to be written by a requesting processor. Such data include read-only code segments. The code segments, including instructions and data, are shared among the multiple processors. The requesting processor generates a Read to Share Always request upon a cache miss on a read-only datablock, and generates a Read to Share request otherwise. The Read to Share Always request results in the datablock stored in cache memory being marked as in the "shared" state, while the Read to Share request results in the datablock being marked as in the "exclusive" state.
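    The request-selection policy in the abstract reduces to a small decision rule. The sketch below is a simplified reading with illustrative enum and function names; in the real protocol the installed state is decided by the coherency machinery, not by a single local function.

```c
/* Hedged sketch of the policy in the abstract: misses to data the processor
 * is unlikely to write (e.g. read-only code segments) issue a "Read to Share
 * Always" and install the block Shared, so no later invalidation or copyback
 * is needed; other misses issue "Read to Share" and may install the block
 * Exclusive. */
#include <stdio.h>

enum req  { READ_TO_SHARE, READ_TO_SHARE_ALWAYS };
enum mesi { SHARED, EXCLUSIVE };

static enum req choose_request(int likely_to_be_written)
{
    return likely_to_be_written ? READ_TO_SHARE : READ_TO_SHARE_ALWAYS;
}

static enum mesi install_state(enum req r)
{
    return (r == READ_TO_SHARE_ALWAYS) ? SHARED : EXCLUSIVE;
}

int main(void)
{
    printf("code-segment miss  -> %s\n",
           install_state(choose_request(0)) == SHARED ? "Shared" : "Exclusive");
    printf("writable-data miss -> %s\n",
           install_state(choose_request(1)) == SHARED ? "Shared" : "Exclusive");
    return 0;
}
```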


    Memory transaction execution system and method for multiprocessor system having independent parallel transaction queues associated with each processor
    7.
    发明授权
    Memory transaction execution system and method for multiprocessor system having independent parallel transaction queues associated with each processor (Expired)

    Publication No.: US5657472A

    Publication Date: 1997-08-12

    Application No.: US414922

    Filing Date: 1995-03-31

    IPC Class: G06F12/08 G06F12/00

    CPC Class: G06F12/0828

    Abstract: A multiprocessor computer system is provided having a multiplicity of sub-systems and a main memory coupled to a system controller. An interconnect module interconnects the main memory and sub-systems in accordance with interconnect control signals received from the system controller. At least two of the sub-systems are data processors, each having a respective cache memory that stores multiple blocks of data and a respective master cache index. Each master cache index has a set of master cache tags (Etags), including one cache tag for each data block stored by the cache memory. Each data processor includes a master interface for sending memory transaction requests to the system controller and for receiving cache access requests from the system controller corresponding to memory transaction requests by the other data processors. In the preferred embodiment, each memory transaction request is classified into one of two distinct master classes: a first transaction class including read memory access requests and a second transaction class including writeback memory access requests. The master interface and system controller have corresponding parallel request queues, one for each master class, for transmitting and receiving memory access requests. The system controller further includes memory transaction request logic for processing each memory transaction request and a duplicate cache index having a set of duplicate cache tags (Dtags), including one cache tag corresponding to each master cache tag in an associated data processor.
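    The two-class queueing described above can be sketched as one independent queue per master class, so a full read queue never blocks writebacks. Queue depth, field names, and the ring-buffer layout below are assumptions made for illustration; the patent's queues are hardware structures in the master interfaces and the system controller.

```c
/* Minimal sketch of per-class request queues: reads and writebacks go into
 * separate queues and cannot block each other. Sizes and names are
 * illustrative, not from the patent. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum master_class { CLASS_READ = 0, CLASS_WRITEBACK = 1, NUM_CLASSES = 2 };

struct txn { uint64_t addr; enum master_class cls; };

#define QUEUE_DEPTH 4
struct txn_queue { struct txn slot[QUEUE_DEPTH]; int head, count; };

struct master_port {
    struct txn_queue q[NUM_CLASSES];   /* one independent queue per class */
};

/* Enqueue into the queue for the request's class only. */
static bool enqueue(struct master_port *p, struct txn t)
{
    struct txn_queue *q = &p->q[t.cls];
    if (q->count == QUEUE_DEPTH)
        return false;                             /* that class is full     */
    q->slot[(q->head + q->count++) % QUEUE_DEPTH] = t; /* other class unaffected */
    return true;
}

int main(void)
{
    struct master_port port = { 0 };
    struct txn rd = { .addr = 0x100, .cls = CLASS_READ };
    struct txn wb = { .addr = 0x200, .cls = CLASS_WRITEBACK };

    for (int i = 0; i < 5; i++)        /* the fifth read is refused */
        printf("read %d: %s\n", i, enqueue(&port, rd) ? "queued" : "full");
    printf("writeback: %s\n", enqueue(&port, wb) ? "queued" : "full");
    return 0;
}
```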


    Method of protecting high definition video signal
    8.
    Invention Grant
    Method of protecting high definition video signal (In force)

    Publication No.: US06925181B2

    Publication Date: 2005-08-02

    Application No.: US10354454

    Filing Date: 2003-01-30

    Abstract: A system controls reproduction of a video transmission between a transmitter and a receiver. The system includes an encryptor with an offset generator adapted to receive the encrypted frame key and to generate a sequence of pseudo-random values for the color component; and an adder coupled to the offset generator and to the color component signal for providing an encoded color component signal. The system also includes a decryptor with a decryptor offset generator adapted to receive the encrypted frame key and to generate a decryptor pseudo-random value for the color component; and a subtractor coupled to the offset generator and to the color component signal for subtracting the offset signal from the color component signal.
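    A toy model of the add/subtract offset scheme: both ends derive the same pseudo-random offset stream from a shared frame key, the encryptor adds an offset to each color-component sample, and the decryptor subtracts it. The generator below is an ordinary linear congruential generator used purely for illustration; it is not the patent's offset generator, key handling, or signal path.

```c
/* Hedged sketch: encryptor adds a keyed pseudo-random offset to a color
 * component, decryptor regenerates the same offset and subtracts it. */
#include <stdint.h>
#include <stdio.h>

struct offset_gen { uint32_t state; };

static uint8_t next_offset(struct offset_gen *g)
{
    g->state = g->state * 1664525u + 1013904223u;   /* toy LCG, not the patent's */
    return (uint8_t)(g->state >> 24);
}

int main(void)
{
    uint32_t frame_key = 0x12345678u;    /* stands in for a shared encrypted frame key */
    struct offset_gen enc = { frame_key }, dec = { frame_key };

    uint8_t sample = 200;                                    /* one color-component sample */
    uint8_t sent   = (uint8_t)(sample + next_offset(&enc));  /* encryptor: add offset      */
    uint8_t recv   = (uint8_t)(sent   - next_offset(&dec));  /* decryptor: subtract offset */

    printf("original %d, transmitted %d, recovered %d\n", sample, sent, recv);
    return 0;
}
```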


    Dual-prime motion estimation engine
    9.
    Invention Grant
    Dual-prime motion estimation engine (In force)

    Publication No.: US06501799B1

    Publication Date: 2002-12-31

    Application No.: US09128730

    Filing Date: 1998-08-04

    Applicant: Leslie Kohn

    Inventor: Leslie Kohn

    IPC Class: H04B1/66

    CPC Class: H04N19/43

    Abstract: An apparatus performs motion estimation based on an average of previous field references in a flexible yet high-performance manner. The apparatus has a command memory for storing a motion estimation command list segment, which in turn contains a search command for specifying a merged search operation over one or more search positions. The apparatus also has a score memory for storing the result of each merged search operation. The score memory is initialized when the merged search operation is initiated. During the search operation, the score memory accumulates the result of each search position. The apparatus also has a search engine connected to the command memory and to the score memory for determining from the score memory the search position with the lowest score. The search engine then generates dual-prime motion estimation outputs in the form of motion estimation result list segments.
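    The score-memory behavior can be illustrated with synthetic numbers: the score memory is cleared when a merged search starts, each field reference adds its cost per search position, and the position with the lowest accumulated score wins. The sketch below assumes two field references and made-up costs; it is not the patented engine.

```c
/* Illustrative sketch of merged-search scoring: accumulate per-position
 * costs over two previous field references, then pick the lowest score. */
#include <stdio.h>

#define POSITIONS 5

int main(void)
{
    /* Hypothetical per-position costs against two previous field references. */
    int field_cost[2][POSITIONS] = { { 9, 4, 7, 6, 8 },
                                     { 8, 5, 3, 7, 9 } };
    int score[POSITIONS] = { 0 };            /* score memory, initialized    */

    for (int f = 0; f < 2; f++)              /* merged search: accumulate    */
        for (int p = 0; p < POSITIONS; p++)
            score[p] += field_cost[f][p];

    int best = 0;
    for (int p = 1; p < POSITIONS; p++)      /* lowest accumulated score     */
        if (score[p] < score[best])
            best = p;

    printf("best search position: %d (score %d)\n", best, score[best]);
    return 0;
}
```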


    Motion estimation engine
    10.
    Invention Grant
    Motion estimation engine (Expired)

    Publication No.: US06335950B1

    Publication Date: 2002-01-01

    Application No.: US08950379

    Filing Date: 1997-10-14

    Applicant: Leslie Kohn

    Inventor: Leslie Kohn

    IPC Class: H04B1/66

    CPC Class: H04N5/145

    Abstract: An apparatus performs motion estimation based on a reference image and a target image. The apparatus has a command memory for storing a motion estimation command list segment, and a search engine connected to the command memory. The search engine retrieves and processes the command list segment stored in the memory. The search engine in turn has a reference window memory containing one or more reference data segments, a target memory containing one or more target data segments, and a data path engine for generating a score for each offset between data in the reference window memory and data stored in the target memory. A result memory receives outputs from the motion estimation search engine in the form of motion estimation result list segments. The reference window memory, target memory, and result memory may be double-buffered to minimize system memory latencies. Moreover, target and reference fetches may be shared by up to four search targets in a split search command. Additionally, the command list segment and the result list segment use an identical format. The size of each command in the command list and of each result in the result list is also identical. The identical format and size allow results generated by a current search to be reused as part of the command for the next search.
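    The shared command/result format lends itself to a small sketch: because a result has the same layout and size as a command, the result of one search can be fed back as part of the next command. The structure layout, field names, and the toy search below are assumptions for illustration only.

```c
/* Rough sketch of an identical command/result layout, so a search result can
 * be reused directly as the next search command. Names are illustrative. */
#include <stdint.h>
#include <stdio.h>

struct me_entry {            /* shared layout for commands and results */
    int16_t  mv_x, mv_y;     /* search center (command) / best vector (result) */
    uint16_t range;          /* search range around the center                 */
    uint16_t score;          /* unused in a command, best score in a result    */
};

/* Toy "search": pretend the best vector is one pixel right of the center. */
static struct me_entry run_search(struct me_entry cmd)
{
    struct me_entry result = cmd;
    result.mv_x += 1;
    result.score = 42;
    return result;
}

int main(void)
{
    struct me_entry cmd  = { .mv_x = 0, .mv_y = 0, .range = 16, .score = 0 };
    struct me_entry res  = run_search(cmd);
    struct me_entry next = run_search(res);  /* result reused as the next command */
    printf("pass 1 center (%d,%d) -> pass 2 center (%d,%d)\n",
           cmd.mv_x, cmd.mv_y, res.mv_x, res.mv_y);
    (void)next;
    return 0;
}
```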
