Abstract:
Embodiments of the present invention provide fine-grain concurrency control for transactions in the presence of database updates. During operation, each transaction is assigned a snapshot version number, or SVN. An SVN refers to a historical snapshot of the database that can be created periodically or on demand. Each transaction is thus tied to a particular SVN, such as the latest SVN at the time the transaction was created. Queries belonging to a transaction can access data that is consistent as of a point in time, for example corresponding to the latest SVN when the transaction was created. At various times, data from the database stored in a memory can be updated using the snapshot data corresponding to an SVN. When a transaction is committed, a snapshot of the database with a new SVN is created from the data modified by the transaction, and the snapshot is synchronized to the memory. When a transaction query requires data from a version of the database corresponding to a particular SVN, the data in the memory may be synchronized with the snapshot data corresponding to that SVN.
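The SVN mechanism above can be illustrated with a minimal sketch. All class and method names here are invented for illustration, and the in-memory dictionary snapshots stand in for the actual database storage; the sketch only shows the versioning behavior: commits mint a new SVN, and each transaction reads from the snapshot that was current when it began.

```python
# Minimal sketch of SVN-based snapshot concurrency (names are illustrative,
# not from the source): each commit produces a snapshot with a new SVN, and
# every transaction is pinned to the latest SVN at its creation time.

class SnapshotStore:
    def __init__(self):
        self.svn = 0                 # latest snapshot version number
        self.snapshots = {0: {}}     # SVN -> key/value snapshot of the database

    def begin(self):
        """A new transaction is tied to the latest SVN."""
        return Transaction(self, self.svn)

    def commit(self, writes):
        """Committing creates a snapshot with a new SVN from the modified data."""
        new_snap = dict(self.snapshots[self.svn])
        new_snap.update(writes)
        self.svn += 1
        self.snapshots[self.svn] = new_snap


class Transaction:
    def __init__(self, store, svn):
        self.store, self.svn = store, svn
        self.writes = {}

    def read(self, key):
        # Reads are consistent as of this transaction's SVN,
        # plus the transaction's own uncommitted writes.
        if key in self.writes:
            return self.writes[key]
        return self.store.snapshots[self.svn].get(key)

    def write(self, key, value):
        self.writes[key] = value

    def commit(self):
        self.store.commit(self.writes)
```

A transaction begun before another's commit continues to see its own SVN's data, while new transactions see the latest snapshot.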
Abstract:
Embodiments of the present invention generate and optimize query plans that are at least partially executable in hardware. Upon receiving a query, the query is rewritten and optimized with a bias for hardware execution of fragments of the query. A template-based algorithm may be employed for transforming a query into fragments and then into query tasks. The various query tasks can then be routed to a hardware accelerator, to a software module, or back to a database management system for execution. For those tasks routed to the hardware accelerator, the query tasks are compiled into machine code database instructions. In order to optimize query execution, query tasks may be broken into subtasks, rearranged based on available resources of the hardware, pipelined, or branched conditionally.
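The routing step described above can be sketched as follows. The operator names and template sets are assumptions for illustration, not the source's actual templates: a task whose operator matches a hardware template goes to the accelerator (where it would be compiled to machine code database instructions), otherwise to a software module, otherwise back to the host DBMS.

```python
# Hedged sketch of template-based task routing; operator and template names
# are invented for illustration.

HW_TEMPLATES = {"scan", "filter", "join"}   # assumed hardware-supported templates
SW_TEMPLATES = {"sort", "aggregate"}        # assumed software-module templates

def route(tasks):
    plan = []
    for op in tasks:
        if op in HW_TEMPLATES:
            plan.append((op, "hardware"))   # compile to machine code db instructions
        elif op in SW_TEMPLATES:
            plan.append((op, "software"))
        else:
            plan.append((op, "dbms"))       # fall back to the host DBMS
    return plan
```

For example, `route(["scan", "sort", "window"])` routes the scan to hardware, the sort to software, and the unrecognized window operator back to the DBMS.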
Abstract:
Embodiments of the present invention provide for batch and incremental loading of data into a database. In the present invention, the loader infrastructure utilizes machine code database instructions and hardware acceleration to parallelize the load operations with the I/O operations. A large, hardware accelerator memory is used as staging cache for the load process. The load process also comprises an index profiling phase that enables balanced partitioning of the created indexes to allow for pipelined load. The online incremental loading process may also be performed while serving queries.
Abstract:
Apparatus and method for classifying regions of an image based on the relative “importance” of the various areas, and for adaptively using the importance information to allocate processing resources and to control input image formation.
Abstract:
A method and apparatus for encoding, illustratively, a video information stream to produce an encoded information stream according to a group of frames (GOF) information structure where the GOF structure and, optionally, a bit budget are modified in response to, respectively, information discontinuities and the presence of redundant information in the video information stream (due to, e.g., 3:2 pull-down processing).
Abstract:
Frames in a video sequence are divided into two or more regions, and a specified number of macroblocks is selected in each region for intra-coding. Depending on the particular implementation, for one or more of the regions the intra-macroblocks are selected randomly, while at least one other region is divided into a specified number of slices, with the least-recently intra-coded macroblock in each slice selected for intra-coding. When an error is detected at the decoder, the decoder discards data in the corresponding packet and applies a concealment strategy that uses motion-compensated data if the motion vectors were accurately decoded and otherwise uses non-motion-compensated reference data for the macroblocks affected by the discarded data. The refresh strategy of the present invention can be used to provide the resulting encoded bitstream with resilience to transmission errors, while maintaining an acceptable degree of video compression.
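The mixed selection strategy can be sketched as below. The region layout, slice counts, and bookkeeping structure are illustrative assumptions: one region refreshes macroblocks at random, while the other is split into slices and each slice refreshes its least-recently intra-coded macroblock.

```python
import random

# Sketch of the mixed intra-refresh strategy (region layout and counts are
# illustrative, not from the source).

def select_intra_mbs(region0, region1, n_random, n_slices, last_intra, frame_no,
                     rng=random):
    """region0, region1: lists of macroblock indices.
    last_intra: dict mapping macroblock -> frame number of its last intra-coding."""
    # Region 0: pick intra macroblocks at random.
    selected = rng.sample(region0, n_random)
    # Region 1: split into slices; refresh the least-recently intra-coded
    # macroblock in each slice.
    slice_len = len(region1) // n_slices
    for s in range(n_slices):
        sl = region1[s * slice_len:(s + 1) * slice_len]
        selected.append(min(sl, key=lambda mb: last_intra.get(mb, -1)))
    # Record when each selected macroblock was intra-coded.
    for mb in selected:
        last_intra[mb] = frame_no
    return selected
```

Passing a seeded `random.Random` makes the random portion reproducible, which is useful for testing the refresh schedule.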
Abstract:
An image is divided into one or more (e.g., foreground) regions of interest with transition regions defined between each region of interest and the relatively least-important (e.g., background) region. Each region is encoded using a single selected quantization level, where quantizer values can differ between different regions. In general, in order to optimize video quality while still meeting target bit allocations, the quantizer assigned to a region of interest is preferably lower than the quantizer assigned to the corresponding transition region, which is itself preferably lower than the quantizer assigned to the background region. The present invention can be implemented iteratively to adjust the quantizer values as needed to meet the frame's specified bit target. The present invention can also be implemented using a non-iterative scheme that can be more easily implemented in real time. The present invention enables a video compression algorithm to meet a frame-level bit target, while ensuring spatial and temporal smoothness in frame quality, thus resulting in improved visual perception during playback.
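The iterative scheme can be sketched as follows. The bits-per-macroblock model and the starting quantizer values are stand-ins, not the source's actual rate model: starting from quantizers ordered ROI < transition < background, all three are raised together until the estimated frame size fits the bit target, preserving the ordering throughout.

```python
# Non-authoritative sketch of iterative region-quantizer assignment; the
# inverse-quantizer bit model and starting values are assumptions.

def estimate_bits(q, n_mbs, k=2000.0):
    # Simple assumed rate model: bits fall inversely with quantizer value.
    return n_mbs * k / q

def assign_quantizers(bit_target, n_roi, n_trans, n_bg,
                      q_roi=8, q_trans=12, q_bg=16, q_max=31):
    while q_bg <= q_max:
        bits = (estimate_bits(q_roi, n_roi)
                + estimate_bits(q_trans, n_trans)
                + estimate_bits(q_bg, n_bg))
        if bits <= bit_target:
            return q_roi, q_trans, q_bg, bits
        # Raise all three quantizers while keeping ROI < transition < background.
        q_roi, q_trans, q_bg = q_roi + 1, q_trans + 1, q_bg + 1
    return q_roi, q_trans, q_bg, bits
```

A non-iterative variant would instead solve the rate model for the quantizers directly, trading some accuracy for real-time feasibility.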
Abstract:
The algorithm assumes a constant bit rate over a timing window of specified duration (e.g., a specified number of consecutive frames), where the current frame to be encoded lies in the interior of the timing window. A target bit rate for the current frame is initially selected by calculating the number of bits already used to encode other frames within the window and then assuming that the remaining bits allocated to the timing window will be evenly distributed over the remaining unencoded frames in the window. The target bit rate may then be optionally adjusted based on scene content, encoder state, and buffer considerations. Through a combination of target bit allocation and frame skipping, spatial and temporal resolutions are maintained within acceptable ranges while meeting buffer delay constraints. The algorithm has also been extended to support PB frames in addition to P-frame-only coding.
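The initial target selection reduces to a simple rule, sketched below with assumed parameter names: compute the window's total budget from the constant bit rate, subtract the bits already spent on encoded frames in the window, and split the remainder evenly over the frames still to encode (before any optional adjustment for scene content, encoder state, or buffer fullness).

```python
# Sketch of the windowed frame-level bit-target rule; parameter names are
# assumptions, and the optional content/buffer adjustments are omitted.

def frame_bit_target(bit_rate, frame_rate, window_frames, bits_used, frames_encoded):
    # Total bits available to the window under a constant bit rate.
    window_budget = bit_rate * window_frames / frame_rate
    # Remaining bits, spread evenly over the unencoded frames.
    remaining = window_budget - bits_used
    unencoded = window_frames - frames_encoded
    return max(remaining / unencoded, 0.0)
```

For instance, at 64 kbit/s and 10 frames/s, a 10-frame window has a 64,000-bit budget; with 40,000 bits already spent on 5 frames, each remaining frame is targeted at 4,800 bits.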
Abstract:
Embodiments of the present invention provide hardware-friendly indexing of databases. In particular, forward and reverse indexing are utilized to allow for easy traversal of primary key to foreign key relationships. A novel structure known as a hit list also allows for easy scanning of various indexes in hardware. Group indexing is provided for flexible support of complex group key definitions, such as for date range indexing and text indexing. A Replicated Reordered Column (RRC) may also be added to the group index to convert a random I/O pattern into sequential I/O over only the needed column elements.
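The forward/reverse pairing can be illustrated with a minimal sketch. The structure names are mine, not the patent's: a forward index maps each primary key to the list of rows that reference it (a list of "hits"), and a reverse index maps each referencing row back to its primary key, so the relationship can be traversed in either direction.

```python
# Illustrative sketch of paired forward/reverse indexes over a foreign-key
# column; structure names are invented for illustration.

def build_indexes(fk_column):
    """fk_column[row] = primary key referenced by that row."""
    forward = {}    # primary key -> list of referencing rows (hit list)
    for row, pk in enumerate(fk_column):
        forward.setdefault(pk, []).append(row)
    reverse = {row: pk for row, pk in enumerate(fk_column)}   # row -> primary key
    return forward, reverse
```

Keeping both directions materialized is what makes the traversal hardware-friendly: each lookup is a direct, scan-free access rather than a search.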