Abstract:
An image resizing method includes at least the following steps: receiving at least one input image; performing an image content analysis upon at least one image selected from the at least one input image to obtain an image content analysis result; and creating a target image with a target image resolution by scaling the at least one input image according to the image content analysis result, wherein the target image resolution is different from an image resolution of the at least one input image.
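As a rough illustration of the flow this abstract describes (not the patented implementation), the sketch below uses a hypothetical content metric — mean absolute horizontal gradient — as the "image content analysis result", and picks between two target resolutions accordingly; the threshold and the nearest-neighbour scaler are assumptions for the example.

```python
def analyze_content(img):
    # Hypothetical content-analysis result: mean absolute horizontal gradient.
    total, count = 0, 0
    for row in img:
        for a, b in zip(row, row[1:]):
            total += abs(a - b)
            count += 1
    return total / count if count else 0.0

def resize_nearest(img, target_h, target_w):
    # Nearest-neighbour scaling to a resolution different from the input's.
    src_h, src_w = len(img), len(img[0])
    return [[img[y * src_h // target_h][x * src_w // target_w]
             for x in range(target_w)] for y in range(target_h)]

def resize_by_content(img, base_h, base_w, threshold=10.0):
    # Choose the target resolution from the analysis result: detail-rich
    # images get the full target size, flat images half of it.
    detail = analyze_content(img)
    if detail >= threshold:
        return resize_nearest(img, base_h, base_w)
    return resize_nearest(img, base_h // 2, base_w // 2)
```

A flat image is thus downscaled harder than a detailed one, which is one plausible way the analysis result could steer the scaling step.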
Abstract:
A method for tuning a plurality of image signal processor (ISP) parameters of a camera includes performing a first iteration. The first iteration includes extracting image features from an initial image, arranging a tuning order of the plurality of ISP parameters of the camera according to at least the plurality of ISP parameters and the image features, tuning a first set of the ISP parameters according to the tuning order to generate a first tuned set of the ISP parameters, and replacing the first set of the ISP parameters with the first tuned set of the ISP parameters in the plurality of ISP parameters to generate a plurality of updated ISP parameters.
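A minimal sketch of the first iteration described above, under stated assumptions: the image feature is just mean brightness, the ordering rule promotes an assumed `exposure` parameter when the image is dark, and "tuning" is a placeholder 10% nudge — none of these specifics come from the abstract itself.

```python
def extract_features(image):
    # Hypothetical image feature: mean brightness of the initial image.
    flat = [p for row in image for p in row]
    return {"brightness": sum(flat) / len(flat)}

def arrange_order(params, features):
    # Hypothetical ordering rule: tune an exposure-related parameter first
    # when the image is dark; otherwise keep the given order.
    names = list(params)
    if features["brightness"] < 128:
        names.sort(key=lambda n: 0 if n == "exposure" else 1)
    return names

def first_iteration(params, image, first_set_size=2):
    features = extract_features(image)
    order = arrange_order(params, features)
    first_set = order[:first_set_size]
    # Placeholder tuning step: nudge each selected parameter by 10%.
    tuned = {n: params[n] * 1.1 for n in first_set}
    updated = dict(params)
    updated.update(tuned)  # replace the first set with its tuned values
    return updated
```

The returned dictionary plays the role of the "plurality of updated ISP parameters" that a subsequent iteration would start from.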
Abstract:
An image processing method is applied to an operation device and includes analyzing an unprocessed image to split the unprocessed image into a first region and a second region, applying a first image processing algorithm to the first region for acquiring a first processed result, applying a second image processing algorithm different from the first image processing algorithm to the second region for acquiring a second processed result, and generating a processed image via the first processed result and the second processed result.
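The region split and per-region processing can be sketched as follows; the split criterion (a brightness threshold) and the two algorithms (brighten vs. darken) are hypothetical stand-ins chosen only to make the two-region flow concrete.

```python
def process_image(img, threshold=128):
    # Analysis step (assumed): pixels brighter than the threshold form the
    # first region, the rest the second.
    h, w = len(img), len(img[0])
    first = lambda p: min(p + 20, 255)   # hypothetical algorithm 1: brighten
    second = lambda p: p // 2            # hypothetical algorithm 2: darken
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            p = img[y][x]
            out[y][x] = first(p) if p > threshold else second(p)
    return out
```

The two per-region results are merged pixel-by-pixel here, which is one simple way to "generate a processed image via the first processed result and the second processed result".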
Abstract:
An image enhancement method is applied to an image enhancement apparatus and includes acquiring a first edge feature from a first spectral image and a second edge feature from a second spectral image, analyzing similarity between the first edge feature and the second edge feature to align the first spectral image with the second spectral image, acquiring at least one first detail feature from the first spectral image and at least one second detail feature from the second spectral image, comparing the first edge feature and the second edge feature to generate a first weight and a second weight, and fusing the first detail feature weighted by the first weight with the second detail feature weighted by the second weight to generate a fused image. The first spectral image and the second spectral image are captured at the same point of time.
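A toy version of the weighted fusion step, assuming the spectral images are already aligned (the alignment stage is omitted) and using mean horizontal gradient as a stand-in for the edge feature; the weight rule (normalised edge strengths) is an assumption, not the patented scheme.

```python
def edge_strength(img):
    # Hypothetical edge feature: mean absolute horizontal gradient.
    total = n = 0
    for row in img:
        for a, b in zip(row, row[1:]):
            total += abs(a - b)
            n += 1
    return total / n if n else 0.0

def fuse(img1, img2):
    # Compare the two edge features to derive the first and second weights,
    # then blend the (already aligned) images with those weights.
    e1, e2 = edge_strength(img1), edge_strength(img2)
    w1 = e1 / (e1 + e2) if e1 + e2 else 0.5
    w2 = 1.0 - w1
    return [[w1 * a + w2 * b for a, b in zip(r1, r2)]
            for r1, r2 in zip(img1, img2)]
```

With this rule, the spectral image carrying stronger edge detail dominates the fused result, matching the intent of weighting detail features by edge comparison.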
Abstract:
A video encoding method includes: setting a 360-degree Virtual Reality (360 VR) projection layout of projection faces, wherein the projection faces have a plurality of triangular projection faces located at a plurality of positions in the 360 VR projection layout, respectively; encoding a frame having a 360-degree image content represented by the projection faces arranged in the 360 VR projection layout to generate a bitstream; and for each position included in at least a portion of the positions, signaling at least one syntax element via the bitstream, wherein the at least one syntax element is set to indicate at least one of an index of a triangular projection view filled into a corresponding triangular projection face located at the position and a rotation angle of content rotation applied to the triangular projection view filled into the corresponding triangular projection face located at the position.
Abstract:
An exemplary image processing method includes the following steps: receiving an image input composed of at least one source image; receiving algorithm selection information corresponding to each source image; checking the corresponding algorithm selection information of each source image to determine a selected image processing algorithm from a plurality of different image processing algorithms; and performing an object-oriented image processing operation upon each source image based on the selected image processing algorithm. The algorithm selection information indicates an image quality of each source image and is generated from one of an auxiliary sensor, an image processing module of an image capture apparatus, a processing circuit being one of a video decoder, a frame rate converter, and an audio/video synchronization (AV-Sync) module, or is a user-defined mode setting.
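The per-source-image algorithm selection can be sketched as a dispatch table; the algorithm names and their bodies below are hypothetical placeholders, and the "algorithm selection information" is reduced to a plain label for the example.

```python
# Hypothetical registry of image processing algorithms, selected per source
# image by its algorithm selection information.
ALGORITHMS = {
    "posterize": lambda img: [[(p // 4) * 4 for p in row] for row in img],
    "brighten": lambda img: [[min(p + 20, 255) for p in row] for row in img],
    "passthrough": lambda img: img,
}

def process(sources):
    # Each entry pairs a source image with its algorithm selection
    # information; unknown selections fall back to a pass-through.
    results = []
    for image, selection_info in sources:
        algo = ALGORITHMS.get(selection_info, ALGORITHMS["passthrough"])
        results.append(algo(image))
    return results
```

In the abstract, the selection information would originate from a sensor, an ISP module, a decoder-side circuit, or a user setting rather than a literal string, but the dispatch structure is the same.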
Abstract:
A projection-based frame is generated according to an omnidirectional video frame and an octahedron projection layout. The projection-based frame has a 360-degree image content represented by triangular projection faces assembled in the octahedron projection layout. A 360-degree image content of a viewing sphere is mapped onto the triangular projection faces via an octahedron projection of the viewing sphere. One side of a first triangular projection face is in contact with one side of a second triangular projection face, and one side of a third triangular projection face is in contact with another side of the second triangular projection face. One image content continuity boundary exists between the one side of the first triangular projection face and the one side of the second triangular projection face, and another image content continuity boundary exists between the one side of the third triangular projection face and the other side of the second triangular projection face.
Abstract:
A method for performing image processing control and an associated apparatus are provided, wherein the method may include the steps of: performing image coding on image information of at least one frame to generate encoded data of the at least one frame, wherein in the encoded data, a specific frame of the at least one frame includes a plurality of tiles, and each tile of the plurality of tiles includes a plurality of superblocks; and generating a bitstream carrying the encoded data of the at least one frame, wherein at least a partition type and a transform size of each superblock within a specific tile of the plurality of tiles are derivable from information corresponding to the specific tile within the encoded data, without needing to derive the partition type and the transform size from information corresponding to another tile of the plurality of tiles within the encoded data.
Abstract:
A data processing apparatus has a compression circuit and an output interface. The compression circuit has a pre-processor and a compressor. The pre-processor receives a first input display data in a first color domain, and performs a color format conversion upon the first input display data to generate a second input display data in a second color domain, wherein the second color domain is different from the first color domain. The compressor performs compression in the second color domain, wherein the compressor is arranged to compress the second input display data into a compressed display data in the second color domain. The output interface packs an output display data derived from the compressed display data into an output bitstream, and outputs the output bitstream via a display interface.
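A small end-to-end sketch of the pre-processor/compressor/output-interface chain described above, under stated assumptions: the color format conversion is BT.601 full-range RGB-to-YCbCr, the "compressor" is replaced by a trivial uniform quantizer operating in the second (YCbCr) color domain, and "packing" is a flat byte concatenation rather than a real display-interface bitstream.

```python
def rgb_to_ycbcr(pixel):
    # Pre-processor: BT.601 full-range color format conversion from the
    # first color domain (RGB) to the second (YCbCr).
    r, g, b = pixel
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return (y, cb, cr)

def compress(ycbcr_pixels, step=8):
    # Stand-in for the compressor: uniform quantization performed in the
    # second color domain (a real design would use an actual codec here).
    return [tuple(int(c) // step for c in p) for p in ycbcr_pixels]

def pack(compressed):
    # Output interface: pack the compressed display data into a byte stream.
    return bytes(c for p in compressed for c in p)

def pipeline(rgb_pixels):
    return pack(compress([rgb_to_ycbcr(p) for p in rgb_pixels]))
```

The key structural point the sketch preserves is the ordering: conversion happens before compression, so the compressor only ever sees data in the second color domain.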