Abstract:
Systems, architectures, and data structures are described for managing distributed design chains, specifically in domains where data reside in multiple applications and are linked through complex interrelationships. The design chains or design networks integrated by the invention may include multiple companies at multiple sites collaborating to design and develop a new product. The invention is intended to integrate seamlessly and transparently with the existing, diverse legacy applications that contain the inter-linked data relevant to the design.
Abstract:
A PC-type computer has a system bus (e.g., a PCI bus) configured with a main CPU board, a statistical multiplexing (stat-mux) board, and a plurality of video/audio encoder boards, each configured to receive and compress a corresponding video/audio stream. The stat-mux board statistically multiplexes the different compressed bitstreams so that multiple bitstreams can be transmitted over shared communication channels. Although each of the boards is connected to the system bus, each encoder board has a digital signal processor (DSP) with a synchronized serial interface (SSI) output port that is directly connected to an SSI input port on a DSP on the stat-mux board (which, in one embodiment, has four such DSPs, each with six such SSI input ports). As such, up to 24 compressed video/audio bitstreams generated on the various encoder boards can be transmitted directly to the stat-mux board without having to go through the system bus. In this way, the computer system can provide statistical multiplexing of low-latency video/audio bitstreams without suffering the processing delays associated with conventional transmission over PCI system buses.
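As a rough illustration of the statistical-multiplexing principle at work here (not the patented architecture itself), the following Python sketch divides a fixed channel budget among encoder streams in proportion to their current complexity, so demanding scenes borrow bits from easy ones; all names and numbers are invented for the example.

```python
# Minimal sketch of statistical multiplexing: a fixed channel budget is
# divided among encoder streams in proportion to their current complexity.
# All names and values here are illustrative, not taken from the patent.

def allocate_bits(complexities, channel_budget):
    """Return a per-stream bit allocation proportional to complexity."""
    total = sum(complexities)
    if total == 0:
        # Idle streams: split the budget evenly.
        return [channel_budget // len(complexities)] * len(complexities)
    return [int(channel_budget * c / total) for c in complexities]

# Example: three encoders sharing a hypothetical 6 Mbit/s channel.
print(allocate_bits([4.0, 1.0, 1.0], 6_000_000))
# -> [4000000, 1000000, 1000000]
```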
Abstract:
An apparatus and method for reducing memory resource requirements in, e.g., an image processing system by utilizing a packed data pixel representation and, optionally, M-ary pyramid decomposition, for pixel block or pixel group searching and matching operations.
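The packing idea can be illustrated with a small, hypothetical sketch: quantize each 8-bit pixel to 2 bits (an M-ary representation with M = 4) and pack 16 pixels into a single 32-bit word, so comparing a row of a pixel block reduces to a single integer operation. The layout below is invented for illustration and is not the patent's actual format.

```python
# Illustrative sketch (not the patented method): quantizing 8-bit pixels
# to 2 bits each and packing 16 of them into one 32-bit word, so a
# 16-pixel row comparison becomes one integer XOR plus a field count.

def pack_row(pixels):
    """Pack 16 pixels, quantized to 2 bits each, into one 32-bit word."""
    word = 0
    for i, p in enumerate(pixels[:16]):
        word |= (p >> 6) << (2 * i)   # keep the top 2 bits of each pixel
    return word

def row_difference(word_a, word_b):
    """Count differing 2-bit pixel codes between two packed rows."""
    diff = word_a ^ word_b
    return sum(1 for i in range(16) if (diff >> (2 * i)) & 0b11)

a = pack_row([200] * 16)
b = pack_row([200] * 15 + [10])
print(row_difference(a, b))  # -> 1
```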
Abstract:
During video coding, a transform such as a discrete cosine transform (DCT) is applied to blocks of image data (e.g., motion-compensated interframe pixel differences), and the resulting transform coefficients for each block are quantized at a specified quantization level. Even though some coefficients are quantized to non-zero values, at least one non-zero quantized coefficient is treated as if it had a value of zero for purposes of further processing (e.g., run-length encoding (RLE) of the quantized data). When segmentation analysis is performed to identify two or more different regions of interest in each frame, the number of coefficients treated as having a value of zero for RLE differs between regions of interest (e.g., more coefficients for less-important regions). In this way, the number of bits used to encode the image data is reduced to satisfy bit-rate requirements (1) without having to drop frames adaptively and (2) while conforming to constraints that may be imposed on the magnitude of change in quantization level from frame to frame.
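A minimal sketch of the central idea, assuming a zig-zag-scanned block of quantized coefficients: all coefficients beyond a region-dependent cutoff are treated as zero before run-length encoding, so a less-important region emits fewer (run, value) pairs. The cutoff values and block contents below are arbitrary examples, not taken from the patent.

```python
# Hedged sketch: after quantization, keep only the first `keep`
# coefficients (in zig-zag order) and treat the rest as zero before
# run-length encoding. Less-important regions get a smaller `keep`.

def truncate_for_rle(quantized, keep):
    """Zero out all but the first `keep` coefficients of a scanned block."""
    return quantized[:keep] + [0] * (len(quantized) - keep)

def run_length_encode(coeffs):
    """Encode as (run-of-zeros, value) pairs, a common RLE for DCT blocks."""
    pairs, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            pairs.append((run, c))
            run = 0
    return pairs  # trailing zeros are implied by an end-of-block marker

block = [9, 4, 0, 2, 1, 0, 0, 1]          # quantized, zig-zag order
print(run_length_encode(truncate_for_rle(block, keep=4)))  # important region
print(run_length_encode(truncate_for_rle(block, keep=2)))  # background region
```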
Abstract:
Embodiments of the present invention provide a run-time scheduler that schedules tasks for database queries on one or more execution resources in a dataflow fashion. In some embodiments, the run-time scheduler may comprise a task manager, a memory manager, and a hardware resource manager. When a query is received by a host database management system, a query plan is created for that query. The query plan splits the query into various fragments. These fragments are further compiled into a directed acyclic graph of tasks. Unlike conventional scheduling, the dependency arcs in the directed acyclic graph are based on page resources. Tasks may comprise machine code that is executed by hardware to perform portions of the query. These tasks may also be performed in software or relate to I/O.
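A toy sketch of the page-resource dataflow idea, with task and page names invented for illustration: a task becomes runnable only once every page it consumes has been produced by an earlier task, which is what makes the dependency arcs page-based rather than task-based.

```python
# Minimal sketch of dataflow scheduling keyed on page resources.
# Task and page names are hypothetical, not from the patent.

from collections import deque

tasks = {
    "scan_A": {"consumes": [],                 "produces": ["pageA"]},
    "scan_B": {"consumes": [],                 "produces": ["pageB"]},
    "join":   {"consumes": ["pageA", "pageB"], "produces": ["pageJ"]},
    "reduce": {"consumes": ["pageJ"],          "produces": []},
}

def schedule(tasks):
    """Run each task once all pages it consumes have been produced."""
    available, order = set(), []
    queue = deque(tasks)
    while queue:
        name = queue.popleft()
        if all(p in available for p in tasks[name]["consumes"]):
            order.append(name)
            available.update(tasks[name]["produces"])
        else:
            queue.append(name)  # pages not yet produced; retry later
    return order

print(schedule(tasks))  # -> ['scan_A', 'scan_B', 'join', 'reduce']
```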
Abstract:
Embodiments of the present invention provide one or more hardware-friendly data structures that enable efficient hardware acceleration of database operations. In particular, the present invention employs a column-store format for the database. In the database, column groups are stored with implicit row identifiers (RIDs), and a RID-to-primary-key column provides both column-store and row-store benefits via column hopping, with a heap structure for adding new data. Fixed-width column compression allows hardware to process the database directly on the compressed data. A global database virtual address space is utilized that allows for arithmetic derivation of the physical address of any data item regardless of its location. A word-compression dictionary with a token-compare and sort index is also provided to allow for efficient hardware-based searching of text. A tuple reconstruction process is provided as well that allows hardware to reconstruct a row by stitching together data from multiple column groups.
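The global-virtual-address idea can be illustrated with a hypothetical fixed-width layout: because every entry in a column group has the same width, the address of any row's value follows from its implicit RID by pure arithmetic, and vice versa. The base address and entry width below are invented examples, not the patent's actual format.

```python
# Sketch of fixed-width, implicit-RID addressing. The constants are
# hypothetical; they illustrate why physical addresses can be derived
# arithmetically and why hardware can operate directly on the data.

COLUMN_BASE = 0x1000_0000   # hypothetical base address in the global space
ENTRY_WIDTH = 4             # bytes per compressed, fixed-width entry

def address_of(rid):
    """Derive the physical address of a row's value from its implicit RID."""
    return COLUMN_BASE + rid * ENTRY_WIDTH

def rid_of(address):
    """Invert the mapping: recover the implicit RID from an address."""
    return (address - COLUMN_BASE) // ENTRY_WIDTH

print(hex(address_of(1000)))   # -> 0x10000fa0
print(rid_of(0x1000_0FA0))     # -> 1000
```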
Abstract:
Embodiments of the present invention provide for batch and incremental loading of data into a database. In the present invention, the loader infrastructure utilizes machine-code database instructions and hardware acceleration to parallelize the load operations with the I/O operations. A large hardware-accelerator memory is used as a staging cache for the load process. The load process also comprises an index-profiling phase that enables balanced partitioning of the created indexes to allow for a pipelined load. The online incremental loading process may also be performed while serving queries.
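A hedged sketch of overlapping load work with I/O, using a bounded in-memory queue to stand in for the accelerator's staging cache; the thread structure, names, and batch contents are invented for illustration and are not the patent's loader.

```python
# Sketch: a reader thread stages raw batches into a bounded queue (standing
# in for the staging cache) while a loader thread converts and stores them,
# so I/O and load work proceed in parallel.

import queue
import threading

staging = queue.Queue(maxsize=4)   # bounded, like a fixed staging cache
SENTINEL = None

def read_batches(n):
    for i in range(n):
        staging.put(f"raw-batch-{i}")          # I/O happens here
    staging.put(SENTINEL)                      # signal end of input

def load_batches():
    while (batch := staging.get()) is not SENTINEL:
        print(f"indexing and storing {batch}")  # CPU/accelerator work

reader = threading.Thread(target=read_batches, args=(3,))
loader = threading.Thread(target=load_batches)
reader.start(); loader.start()
reader.join(); loader.join()
```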
Abstract:
Embodiments of the present invention provide fine-grain concurrency control for transactions in the presence of database updates. During operation, each transaction is assigned a snapshot version number, or SVN. An SVN refers to a historical snapshot of the database that can be created periodically or on demand. Transactions are thus tied to a particular SVN, such as the one that was current when the transaction was created. Queries belonging to a transaction can therefore access data that is consistent as of a point in time, for example, corresponding to the latest SVN when the transaction was created. At various times, data from the database stored in a memory can be updated using the snapshot data corresponding to an SVN. When a transaction is committed, a snapshot of the database with a new SVN is created based on the data modified by the transaction, and the snapshot is synchronized to the memory. When a transaction query requires data from a version of the database corresponding to an SVN, the data in the memory may be synchronized with the snapshot data corresponding to that SVN.
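A minimal sketch of SVN-style snapshot reads, not the patent's implementation: each commit produces a new numbered snapshot, and a transaction reads from the snapshot that was current when it began. For simplicity the sketch stores full snapshot copies rather than deltas; the class and method names are invented.

```python
# Sketch: commits create numbered snapshots; a transaction reads the
# snapshot tied to its SVN, so its view stays consistent despite updates.

class SnapshotStore:
    def __init__(self):
        self.snapshots = {0: {}}   # SVN -> snapshot contents (full copies)
        self.latest = 0

    def begin(self):
        """A new transaction is tied to the latest SVN."""
        return self.latest

    def read(self, svn, key):
        """Read a value as of the transaction's snapshot."""
        return self.snapshots[svn].get(key)

    def commit(self, svn, writes):
        """Create a new snapshot from the base SVN plus this transaction's writes."""
        new = dict(self.snapshots[svn])
        new.update(writes)
        self.latest += 1
        self.snapshots[self.latest] = new
        return self.latest

db = SnapshotStore()
t1 = db.begin()                    # reads as of SVN 0
db.commit(t1, {"x": 1})            # creates SVN 1
print(db.read(t1, "x"), db.read(db.latest, "x"))  # -> None 1
```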