Abstract:
Video syntax elements are coded using a context tree. Context information used for coding previously-coded syntax elements is identified. A context tree is produced by separating the previously-coded syntax elements into data groups based on the context information. The context tree includes nodes representing the data groups. Separating the previously-coded syntax elements can include applying separation criteria against values of the context information to produce at least some of the nodes. Context information is then identified for another set of syntax elements to be coded. One of the nodes of the context tree is identified based on values of the context information associated with one of the other set of syntax elements. That syntax element is then coded according to a probability model associated with the identified node. The context tree can be used to encode or decode syntax elements.
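The grouping-and-lookup flow described above can be illustrated with a minimal sketch. All names here are hypothetical, the separation criterion and frequency-count probability model are illustrative stand-ins, and this is not the patented implementation:

```python
# Minimal sketch of context-tree-based coding of syntax elements.
# Names and the separation criterion are illustrative assumptions.

class Node:
    def __init__(self, elements):
        self.elements = elements   # (context_value, symbol) pairs in this data group
        self.children = {}         # child nodes keyed by the criterion outcome
        # Probability model: relative frequency of each symbol at this node.
        counts = {}
        for _, sym in elements:
            counts[sym] = counts.get(sym, 0) + 1
        total = sum(counts.values()) or 1
        self.model = {s: c / total for s, c in counts.items()}

def build_context_tree(elements, criterion):
    """Separate previously coded elements into data groups by applying a
    separation criterion against their context-information values."""
    root = Node(elements)
    groups = {}
    for ctx, sym in elements:
        groups.setdefault(criterion(ctx), []).append((ctx, sym))
    if len(groups) > 1:
        root.children = {k: Node(v) for k, v in groups.items()}
    return root

def lookup(tree, ctx, criterion):
    """Identify the node matching a new element's context-information values."""
    return tree.children.get(criterion(ctx), tree)

# Example: the context is a neighboring value; split on whether it is large.
prev = [(0, 'A'), (1, 'A'), (5, 'B'), (7, 'B'), (6, 'A')]
crit = lambda c: c >= 4
tree = build_context_tree(prev, crit)
node = lookup(tree, 6, crit)     # context of the next element to code
p = node.model.get('B')          # probability used to code symbol 'B'
```

Because encoder and decoder derive the same tree from the same previously-coded elements, both sides arrive at the same probability model without extra signaling.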
Abstract:
A suspension system (760) includes a top mount (774), a bottom mount (778), a rigid housing, an air spring (766), and a linear actuator (662). The air spring transfers force of a first load path between the top mount and the bottom mount. The air spring includes a pressurized cavity containing pressurized gas that transfers the force of the first load path. The linear actuator transfers force of a second load path between the top mount and the bottom mount in parallel to the first load path. The rigid housing defines at least part of the pressurized cavity and transfers the force of the second load path.
Abstract:
A method includes determining compression parameters for image portions based on a distance value from each of the image portions to a sensor, such that compression rates applied by the compression parameters depend on the distance values, and encoding the image portions using the compression parameters. The range-based compression parameters cause the image portions corresponding to large distance values to be compressed using low compression rates and cause the image portions corresponding to small distance values to be compressed using high compression rates.
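The distance-to-rate mapping can be sketched as a simple interpolation. The clamp range and the rate endpoints here are illustrative assumptions, not values from the abstract:

```python
# Hedged sketch: map per-portion distance values to compression rates so
# that far portions get low compression and near portions get high
# compression. Thresholds and rates are illustrative assumptions.

def compression_rate(distance, near=1.0, far=10.0,
                     high_rate=0.9, low_rate=0.1):
    """Return a rate in [low_rate, high_rate]; larger distance -> lower
    compression rate (better preserved), smaller distance -> higher rate."""
    d = min(max(distance, near), far)     # clamp into [near, far]
    t = (d - near) / (far - near)         # 0 at near, 1 at far
    return high_rate + t * (low_rate - high_rate)

def encode_portions(portions):
    """portions: list of (portion_id, distance). Returns per-portion
    compression parameters that a real encoder would consume."""
    return [(pid, compression_rate(dist)) for pid, dist in portions]

params = encode_portions([("near_block", 1.0), ("far_block", 10.0)])
```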
Abstract:
Transform kernel candidates including a vertical transform type associated with a vertical motion and a horizontal transform type associated with a horizontal motion can be encoded or decoded. During an encoding operation, a residual block of a current block is transformed according to a selected transform kernel candidate to produce a transform block. A probability model for encoding the selected transform kernel candidate is then identified based on neighbor transform blocks of the transform block. The selected transform kernel candidate is then encoded according to the probability model. During a decoding operation, the encoded transform kernel candidate is decoded using the probability model. The encoded transform block is then decoded by inverse transforming dequantized transform coefficients thereof according to the decoded transform kernel candidate.
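The neighbor-driven model selection can be sketched as follows. The kernel names and the weighting scheme are illustrative assumptions, not the codec's actual candidate list or probability tables:

```python
# Illustrative sketch: the transform kernel candidate selected for a block
# is entropy-coded with a probability model derived from the kernels of
# already-coded neighbor transform blocks. Weights are assumptions.

KERNELS = ["DCT_DCT", "ADST_DCT", "DCT_ADST", "ADST_ADST"]

def model_for_neighbors(above_kernel, left_kernel):
    """Identify a probability model based on neighbor transform blocks:
    boost the probability of kernels matching the neighbors."""
    weights = {k: 1.0 for k in KERNELS}
    for nk in (above_kernel, left_kernel):
        if nk in weights:
            weights[nk] += 2.0
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

model = model_for_neighbors("ADST_DCT", "ADST_DCT")
# The encoder codes the selected candidate with fewer bits when the model
# assigns it higher probability; the decoder derives the same model from
# the same neighbors, so the choice is decodable without extra signaling.
```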
Abstract:
A method includes defining a plurality of known document types, obtaining a collection of previously classified documents that are each associated with one of the known document types, and extracting features from each document from the collection of previously classified documents to define feature information. The method also includes obtaining a subject document that is associated with a user, extracting one or more features from the subject document, comparing the one or more features from the subject document to the feature information, associating the subject document with one of the known document types based on the comparison, and transmitting the subject document to a cloud storage system for storage in a dedicated storage location that is associated with the user and contains only documents of the respective known document type that is associated with the subject document.
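The extract-compare-associate steps can be sketched as below. The bag-of-words features and the overlap score are hypothetical choices; the abstract does not fix a feature type or comparison method:

```python
# Minimal sketch of the classification step, assuming bag-of-words
# features and a simple overlap score (illustrative assumptions).

from collections import Counter

def extract_features(text):
    return Counter(text.lower().split())

def build_feature_info(classified_docs):
    """classified_docs: list of (doc_type, text) from the collection of
    previously classified documents. Aggregates features per known type."""
    info = {}
    for doc_type, text in classified_docs:
        info.setdefault(doc_type, Counter()).update(extract_features(text))
    return info

def classify(subject_text, feature_info):
    """Associate the subject document with the known document type whose
    aggregated features overlap its features the most."""
    feats = extract_features(subject_text)
    def overlap(type_feats):
        return sum(min(feats[w], type_feats[w]) for w in feats)
    return max(feature_info, key=lambda t: overlap(feature_info[t]))

info = build_feature_info([
    ("invoice", "invoice total amount due payment"),
    ("resume", "experience education skills employment"),
])
doc_type = classify("payment due for invoice number 42", info)
```

The resulting type would then select the dedicated storage location on the cloud storage system.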
Abstract:
Techniques are described to use a reference motion vector to reduce the amount of bits needed to encode motion vectors for inter prediction. One method includes identifying a candidate motion vector used to inter predict each of a plurality of previously coded blocks to define a plurality of candidate motion vectors, identifying a set of reconstructed pixel values corresponding to a set of previously coded pixels for the current block, and generating, using each candidate motion vector, a corresponding set of predicted values for the set of previously coded pixels within each reference frame of a plurality of reference frames. A respective error value based on a difference between the set of reconstructed pixel values and each set of predicted values is used to select a reference motion vector from the candidate motion vectors that is used to encode the motion vector for the current block.
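The candidate-selection step can be sketched in one dimension. The 1-D "pixels", wraparound indexing, and sum-of-absolute-differences error are simplifying assumptions for illustration:

```python
# Sketch of reference-motion-vector selection, assuming 1-D pixel rows
# and a sum-of-absolute-differences error (illustrative simplifications).

def predict(reference, offset, positions):
    """Predicted values for the previously coded pixel positions when a
    candidate motion vector `offset` is applied against `reference`."""
    return [reference[(p + offset) % len(reference)] for p in positions]

def select_reference_mv(candidates, reference, positions, reconstructed):
    """Select the candidate whose predicted values best match the
    reconstructed values of the previously coded pixels."""
    def error(mv):
        pred = predict(reference, mv, positions)
        return sum(abs(a - b) for a, b in zip(reconstructed, pred))
    return min(candidates, key=error)

reference = [10, 20, 30, 40, 50, 60]
positions = [0, 1, 2]            # previously coded pixels near current block
reconstructed = [30, 40, 50]     # their reconstructed values
best = select_reference_mv([0, 1, 2, 3], reference, positions, reconstructed)
```

Because the error is computed on already-reconstructed pixels, the decoder can repeat the same selection, so the reference motion vector itself need not be transmitted.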
Abstract:
Methods and apparatuses are disclosed for dynamic switching of user profiles on computing devices. In one method, the computing device identifies a first user profile under which the computing device is operating. The first user profile is associated with a first user value indicative of a first user. The computing device receives an image from an image-sensing device, generates a current user value indicative of a current user based on the received image, and determines if the current user value corresponds to the first user value. If the current user value does not correspond to the first user value, the computing device configures at least some programs operating on the computing device using a second user profile that is selected based on the current user value. If the current user value does correspond to the first user value, the computing device continues to operate using the first user profile.
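The switch decision reduces to a small comparison. The user values here are plain identifiers standing in for whatever the image-based recognition step produces; all names are hypothetical:

```python
# Sketch of the profile-switch decision, assuming user values are simple
# identifiers produced by an image-recognition step (not shown).

def select_profile(active_profile, current_user_value, profiles):
    """Return the profile to operate under after an image is processed.
    `profiles` maps user values to user profiles."""
    if current_user_value == active_profile["user_value"]:
        return active_profile    # same user: continue with current profile
    # Different user: configure programs using the profile selected by
    # the current user value (fall back to the active one if unknown).
    return profiles.get(current_user_value, active_profile)

profiles = {
    "alice": {"user_value": "alice", "theme": "dark"},
    "bob": {"user_value": "bob", "theme": "light"},
}
active = profiles["alice"]
after_bob = select_profile(active, "bob", profiles)
after_alice = select_profile(active, "alice", profiles)
```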
Abstract:
A system, apparatus, and method for estimating available bandwidth for transmitting a media stream over a network, the media stream having a plurality of frames. One method includes receiving some of the plurality of frames, each frame of the plurality of frames having an inter-frame size differential and an inter-arrival time differential, detecting whether at least some of the inter-arrival time differentials are outside of a steady-state range using at least some of the inter-frame size differentials, and estimating an available bandwidth based on the detection. The bit rate can be regulated using the estimated available bandwidth.
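The detect-then-estimate flow can be sketched as below. Treating the inter-arrival time differential as the change in arrival gap, the per-byte slope, tolerance, and back-off factor are all illustrative assumptions:

```python
# Sketch, under simplifying assumptions: frames are (size_bytes,
# arrival_time_s) tuples; in steady state, a change in arrival gap is
# explained by the change in frame size. Thresholds are illustrative.

def differentials(frames):
    """For each frame after the second: (inter-frame size differential,
    inter-arrival time differential, i.e. change in arrival gap)."""
    out = []
    for i in range(2, len(frames)):
        ds = frames[i][0] - frames[i - 1][0]
        dt = (frames[i][1] - frames[i - 1][1]) - (frames[i - 1][1] - frames[i - 2][1])
        out.append((ds, dt))
    return out

def estimate_bandwidth(frames, per_byte_s=1e-6, tolerance=0.01):
    """Detect whether inter-arrival differentials leave the steady-state
    range implied by the size differentials, then estimate available
    bandwidth (bits/s) from throughput, backing off under congestion."""
    congested = any(abs(dt) > tolerance + abs(ds) * per_byte_s
                    for ds, dt in differentials(frames))
    elapsed = frames[-1][1] - frames[0][1]
    throughput_bps = 8 * sum(s for s, _ in frames[1:]) / elapsed
    return throughput_bps * (0.85 if congested else 1.0), congested

steady = [(1000, 0.0), (1000, 0.1), (1000, 0.2), (1000, 0.3)]
bw, flag = estimate_bandwidth(steady)
```

A sender's rate controller would then regulate the bit rate toward the returned estimate.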
Abstract:
Image data is processed for noise reduction before encoding and subsequent decoding. For an input image in a spatial domain, two-dimensional (2-D) wavelet coefficients at multiple levels are generated. Each level includes multiple subbands, each associated with a respective subband type in a wavelet domain. For respective levels, a flat region of a subband is identified, which flat region includes blocks of the subband having a variance no higher than a first threshold variance. A flat block set for the subband type associated with the subband is identified, which includes blocks common to the respective flat regions of the subbands associated with that subband type. A second threshold variance is determined using variances of the flat block set, and is then used for thresholding at least some of the 2-D wavelet coefficients to remove noise. After thresholding, a denoised image is generated in the spatial domain using the levels.
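The flat-region and thresholding steps can be sketched in isolation (the wavelet transform itself is elided). Representing each subband as a list of coefficient blocks, the choice of second threshold as a multiple of the mean flat-block variance and the soft-thresholding rule are illustrative assumptions:

```python
# Sketch of the flat-region thresholding step only; each same-type
# subband (one per level) is a list of coefficient blocks.

def variance(block):
    m = sum(block) / len(block)
    return sum((x - m) ** 2 for x in block) / len(block)

def flat_region(subband, t1):
    """Indices of blocks whose variance is no higher than the first
    threshold variance t1."""
    return {i for i, block in enumerate(subband) if variance(block) <= t1}

def threshold_from_flat_blocks(subbands, t1):
    """Intersect the flat regions of same-type subbands across levels
    (the flat block set) and derive a second threshold variance from
    the variances of the common blocks (an illustrative noise proxy)."""
    common = set.intersection(*(flat_region(sb, t1) for sb in subbands))
    vars_ = [variance(sb[i]) for sb in subbands for i in common]
    return 3.0 * sum(vars_) / len(vars_) if vars_ else 0.0

def soft_threshold(coeffs, t2):
    """Shrink wavelet coefficients toward zero to remove noise."""
    t = t2 ** 0.5
    return [0.0 if abs(c) <= t else c - t if c > 0 else c + t
            for c in coeffs]

level1 = [[0.1, -0.1], [5.0, -5.0], [0.2, 0.0]]   # blocks, level 1
level2 = [[0.0, 0.2], [0.1, 0.1], [4.0, -4.0]]    # same subband type, level 2
t2 = threshold_from_flat_blocks([level1, level2], 0.5)
den = soft_threshold([0.1, -0.5, 2.0], t2)
```

After thresholding every selected subband this way, an inverse 2-D wavelet transform would reconstruct the denoised image in the spatial domain.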
Abstract:
A method includes training a first model to measure banding artefacts in an image (1302), training a second model to deband the image (1304), and generating a debanded image for the image using the second model (1306). Training the first model (1302) can include selecting a first set of first training images, generating a banding edge map for a first training image, where the map includes weights that emphasize banding edges and de-emphasize true edges in the first training image, and using the map and a luminance plane of the first training image as input to the first model. Training the second model (1304) can include selecting a second set of second training images, generating a debanded training image for a second training image, generating a banding score for the debanded training image using the first model, and using the banding score in a loss function used in training the second model.
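Two pieces of this pipeline can be sketched without the models themselves: the edge-map weighting and how the first model's banding score enters the second model's loss. The gradient threshold, weights, loss form, and mixing factor are all assumptions for illustration:

```python
# Sketch of the banding edge map weighting and the debanding loss.
# Thresholds, weights, and the loss form are illustrative assumptions.

def banding_edge_map(gradients, true_edge_threshold=1.0):
    """Weight map that emphasizes banding edges (small nonzero gradients)
    and de-emphasizes true edges (large gradients)."""
    return [0.0 if g == 0 else (1.0 if g < true_edge_threshold else 0.1)
            for g in gradients]

def debanding_loss(debanded, target, banding_score, alpha=0.5):
    """Reconstruction error plus a penalty from the first model's
    banding score for the debanded training image."""
    mse = sum((d - t) ** 2 for d, t in zip(debanded, target)) / len(target)
    return mse + alpha * banding_score

weights = banding_edge_map([0.0, 0.5, 3.0])   # flat, banding edge, true edge
loss = debanding_loss([1.0, 2.0], [1.0, 2.0], banding_score=0.4)
```

Minimizing this loss pushes the second model both to reproduce the target and to drive the first model's banding score down.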