Abstract:
A user equipment (UE) includes a modem that receives a compressed bitstream and metadata. The UE also includes a decoder that decodes the compressed bitstream to generate a high dynamic range (HDR) image, an inertial measurement unit that determines viewpoint information based on an orientation of the UE, and a graphics processing unit (GPU). The GPU maps the HDR image onto a surface and renders a portion of the HDR image based on the metadata and the viewpoint information. A display displays the portion of the HDR image.
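The viewpoint-dependent rendering step can be sketched as follows. This is a minimal illustration, assuming the decoded HDR image uses an equirectangular layout and that the IMU supplies yaw and pitch; the function name, the linear angle-to-pixel mapping, and all parameters are illustrative assumptions, not taken from the abstract.

```python
# Hedged sketch: map an IMU viewing direction (yaw, pitch) to the top-left
# corner of a viewport cropped from an equirectangular frame. Real renderers
# project onto a sphere on the GPU; this linear mapping only shows the idea.

def viewport_origin(yaw_deg, pitch_deg, width, height, vp_w, vp_h):
    """Return the top-left pixel of the viewport for a viewing direction."""
    cx = int((yaw_deg % 360.0) / 360.0 * width)     # yaw -> horizontal center
    cy = int((pitch_deg + 90.0) / 180.0 * height)   # pitch -> vertical center
    x = max(0, min(width - vp_w, cx - vp_w // 2))   # clamp to frame bounds
    y = max(0, min(height - vp_h, cy - vp_h // 2))
    return x, y

# Looking straight ahead (yaw 0, pitch 0) in a 4096x2048 frame:
origin = viewport_origin(0.0, 0.0, 4096, 2048, 1024, 512)
# origin == (0, 768)
```

The display would then show only the `vp_w` by `vp_h` crop starting at that origin, rather than the whole decoded frame.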
Abstract:
A wireless communication device includes a processor configured to execute an image query. The image query uses a cluster selection criterion for a cluster-aggregation-based vectorization of a set of local features, based on a quantity of top local features having the highest a posteriori probability values. The cluster selection criterion is measured as the summation of the a posteriori probability values of the top local features. The quantity of top local features is a predetermined integer value greater than one.
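The top-K criterion described above can be sketched as follows. This is an illustrative assumption of the data layout (a list of per-cluster posterior values, one value per local feature); the function name and threshold-free scoring are not from the abstract.

```python
# Hedged sketch: for each cluster, the selection criterion is the sum of the
# K largest a posteriori probability values among the local features soft-
# assigned to that cluster, with K a fixed integer greater than one.

def cluster_selection_scores(posteriors, k):
    """posteriors: per-cluster lists of a posteriori probabilities.
    Returns one criterion value per cluster."""
    assert k > 1, "the abstract requires K to be an integer greater than one"
    scores = []
    for probs in posteriors:
        top_k = sorted(probs, reverse=True)[:k]  # K highest posteriors
        scores.append(sum(top_k))                # summation = the criterion
    return scores

scores = cluster_selection_scores([[0.9, 0.8, 0.1], [0.4, 0.3, 0.2]], k=2)
# scores[0] == 0.9 + 0.8 and scores[1] == 0.4 + 0.3
```

Clusters with the highest criterion values would then be the ones kept for the aggregation-based vectorization.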
Abstract:
A user equipment (UE) includes a receiver, display, and processor. The receiver is configured to receive a data stream including a plurality of frames. The data stream includes a region of interest in a key frame of the plurality of frames. The display is configured to display a portion of a frame of the plurality of frames. The processor is configured to perform an action to focus a current view of the UE to the region of interest in the key frame. Each frame of the plurality of frames includes a plurality of images stitched together to form a stitched image. The stitched image for at least one frame of the plurality of frames includes at least one high dynamic range (HDR) image and at least one standard dynamic range (SDR) image.
Abstract:
A user equipment (UE) includes a receiver, at least one sensor, and a processor. The receiver is configured to receive a bit stream including at least one encoded image and metadata. The at least one sensor is configured to determine viewpoint information of a user. The processor is configured to render the at least one encoded image based on the metadata and the viewpoint information.
Abstract:
A user equipment (UE) includes a receiver and a processor. The receiver is configured to receive a standard dynamic range (SDR) image and metadata related to a high dynamic range (HDR) image. The processor is configured to identify relevant portions of the SDR image to be enhanced based on the metadata related to the HDR image. The processor is also configured to increase an intensity of the relevant portions of the SDR image to create an enhanced SDR image. The processor is further configured to output the enhanced SDR image to a display.
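The enhancement step can be sketched as below. This is a minimal sketch, assuming the metadata reduces to a per-pixel mask of relevant portions and a simple multiplicative gain; the mask representation, the gain value, and the function name are illustrative assumptions.

```python
# Hedged sketch: boost the intensity of metadata-marked pixels of an SDR
# image, clamping to the SDR range. Real HDR metadata is far richer than a
# boolean mask; this only shows the identify-then-enhance flow.

def enhance_sdr(sdr, mask, gain=1.5, max_val=255):
    """Scale the intensity of masked (relevant) pixels; leave the rest."""
    return [
        min(int(p * gain), max_val) if m else p
        for p, m in zip(sdr, mask)
    ]

# The first two pixels are marked relevant and boosted; the third is kept.
enhanced = enhance_sdr([100, 200, 50], [True, True, False])
# enhanced == [150, 255, 50]
```

Note the clamp at `max_val`: the output remains a valid SDR image, only with raised intensity where the HDR-derived metadata indicates it matters.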
Abstract:
In various embodiments, a method and a decoder include identifying a directional intra prediction mode with an angle of prediction. The method also includes identifying first and second neighboring reference samples in a block of the video along the angle of prediction, where the angle of prediction intersects a pixel to be predicted. The method further includes determining which of the first and second reference samples is nearest the angle of prediction and applying the value of the nearest reference sample to the pixel as a predictor. Also, a method and a decoder include determining whether a block type of a block of the video is intra block copy. The method also includes, responsive to the block type being the intra block copy, determining a transform block size of the block and, responsive to the transform block size being 4×4, applying a discrete sine transform to the block.
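The nearest-sample prediction rule can be sketched as follows. This is an illustration under simplifying assumptions: the angle is expressed as a horizontal displacement per row, and only the reference row above the block is used; the function name and fractional-position arithmetic are not from the abstract.

```python
# Hedged sketch: the prediction angle through pixel (x, y) intersects the
# reference row between two neighboring reference samples; the nearer one's
# value is copied to the pixel as its predictor (no interpolation).

def nearest_reference_predictor(ref_row, x, y, angle_frac):
    """ref_row: reconstructed samples above the block.
    (x, y): pixel position in the block.  angle_frac: horizontal shift/row."""
    pos = x + (y + 1) * angle_frac           # where the angle hits the row
    left, right = int(pos), int(pos) + 1     # the two neighboring samples
    nearest = left if (pos - left) <= (right - pos) else right
    return ref_row[nearest]

# pos = 1 + 2 * 0.3 = 1.6 -> nearer to index 2, so sample 30 is copied.
pred = nearest_reference_predictor([10, 20, 30, 40], x=1, y=1, angle_frac=0.3)
# pred == 30
```

Copying the single nearest sample, rather than interpolating between the two, is exactly what distinguishes this predictor from conventional fractional-sample angular prediction.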
Abstract:
A method includes receiving a video bitstream and a flag, and interpreting the flag to determine a transform that was used at an encoder. The method also includes, upon a determination that the transform that was used at the encoder includes a secondary transform, applying an inverse secondary transform to the received video bitstream, where the inverse secondary transform corresponds to the secondary transform used at the encoder. The method further includes applying an inverse discrete cosine transform (DCT) to the video bitstream after applying the inverse secondary transform.
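The decode-side ordering described above can be sketched as follows. The transforms are deliberately toy stand-ins chosen only to make the control flow and ordering visible; none of the names or kernels are from the abstract.

```python
# Hedged sketch: if the parsed flag indicates a secondary transform was used
# at the encoder, its inverse runs first; the inverse DCT always runs last.
# The lambdas below are placeholders, not real DCT/secondary kernels.

def inverse_transforms(coeffs, secondary_flag, inv_secondary, inv_dct):
    if secondary_flag:                    # flag parsed from the bitstream
        coeffs = inv_secondary(coeffs)    # undo the secondary transform first
    return inv_dct(coeffs)                # then undo the primary DCT

# With toy stand-ins the ordering is visible: (+1 then *2) vs just *2.
out = inverse_transforms(3, True,
                         inv_secondary=lambda c: c + 1,
                         inv_dct=lambda c: c * 2)
# out == 8  (whereas with the flag off it would be 6)
```

The inversion order mirrors the encoder: since the encoder applied the DCT first and the secondary transform second, the decoder must undo them in reverse.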
Abstract:
A method is provided that includes receiving a bitstream. The method also includes parsing the bitstream for a flag indicating whether a palette from a first coding unit or from a second coding unit was used. The method also includes decoding the first coding unit using the palette indicated by the flag. The palette is determined based on which palette of the first or second coding unit improves compression performance. Also, a method is provided that includes receiving a bitstream with a predicted pixel. A coding unit and a reference unit are identified. A number of pixels of the coding unit and the reference unit overlap. A set of available pixels and a set of unavailable pixels of the reference unit are identified. The predicted pixel of the set of unavailable pixels is estimated as a pixel of the set of available pixels.
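The palette-selection decode path can be sketched as follows. This is a hedged illustration: the flag semantics (0 vs. 1), the palette representation as RGB tuples, and the function name are all assumptions, not details from the abstract.

```python
# Hedged sketch: the decoder reads a one-bit flag and reuses the palette of
# whichever coding unit the encoder found gave better compression.

def decode_palette(flag, palette_cu1, palette_cu2):
    """flag == 0 -> reuse the first coding unit's palette; 1 -> the second.
    The encoder already chose whichever palette compressed better."""
    return palette_cu1 if flag == 0 else palette_cu2

# The flag selects the second coding unit's palette here.
palette = decode_palette(1, [(255, 0, 0)], [(0, 255, 0), (0, 0, 255)])
# palette == [(0, 255, 0), (0, 0, 255)]
```

Signaling a one-bit choice between two candidate palettes is cheap for the bitstream, while letting the encoder pick the candidate that minimizes the coded size.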
Abstract:
A video processing unit and a method for region-adaptive smoothing are provided. The video processing unit includes a memory and one or more processors. The one or more processors are operably connected to the memory and configured to stitch together a plurality of video frames into a plurality of equirectangular mapped frames of a video. The one or more processors are configured to define a top region and a bottom region for each of the equirectangular mapped frames of the video; perform a smoothing process on the top region and the bottom region for each of the equirectangular mapped frames of the video; and encode the smoothed equirectangular mapped frames of the video.
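The region-adaptive smoothing step can be sketched as below. This is a minimal sketch under stated assumptions: frames are 2-D lists of luma values, the regions are fixed row counts, and the filter is a 3-tap horizontal box filter; none of these specifics come from the abstract.

```python
# Hedged sketch: smooth only the top and bottom regions of an equirectangular
# frame (where pixels are heavily oversampled near the poles) and leave the
# middle rows untouched, before handing the frame to the encoder.

def smooth_regions(frame, top_rows, bottom_rows):
    h = len(frame)
    out = [row[:] for row in frame]
    for r in range(h):
        if r < top_rows or r >= h - bottom_rows:       # region-adaptive test
            row = frame[r]
            out[r] = [
                # 3-tap horizontal box filter with edge clamping
                (row[max(c - 1, 0)] + row[c] + row[min(c + 1, len(row) - 1)]) // 3
                for c in range(len(row))
            ]
    return out

frame = [[0, 90, 0],    # top region: smoothed
         [0, 90, 0],    # middle: untouched
         [0, 90, 0]]    # bottom region: smoothed
smoothed = smooth_regions(frame, top_rows=1, bottom_rows=1)
```

Smoothing the pole regions before encoding removes high-frequency detail that the equirectangular mapping stretched artificially, which typically lowers the bitrate with little visible loss after projection back onto a sphere.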