Abstract:
An image processing method is applied to an operation device and includes analyzing an unprocessed image to split the unprocessed image into a first region and a second region, applying a first image processing algorithm to the first region for acquiring a first processed result, applying a second image processing algorithm different from the first image processing algorithm to the second region for acquiring a second processed result, and generating a processed image from the first processed result and the second processed result.
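As an informal illustration only: the abstract specifies neither the analysis that splits the image nor the two algorithms, so the Python sketch below assumes a simple brightness threshold for the split and two placeholder operations (a contrast stretch and a brightness lift) standing in for the first and second algorithms.

    import numpy as np

    def process_image(unprocessed, threshold=128):
        # Analysis step (assumed): bright pixels form the first region,
        # dark pixels form the second region.
        first_region = unprocessed >= threshold

        # First algorithm (placeholder): mild contrast stretch.
        first_result = np.clip(
            (unprocessed.astype(np.float32) - 128.0) * 1.2 + 128.0, 0, 255)

        # Second algorithm (placeholder): brightness lift.
        second_result = np.clip(unprocessed.astype(np.float32) + 30.0, 0, 255)

        # Generate the processed image from the two partial results.
        processed = np.where(first_region, first_result, second_result)
        return processed.astype(np.uint8)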
Abstract:
An image capture method for flash photography includes: capturing at least one first image while a strobe device is operating under a first strobe intensity setting for emitting a main flash, capturing at least one second image while the strobe device is operating under a second strobe intensity setting different from the first strobe intensity setting, and generating an output image by blending the at least one first image and the at least one second image.
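A minimal sketch of the blending step, assuming a plain per-pixel weighted average of one main-flash frame and one weaker-flash frame; the weight alpha and the averaging itself are assumptions, not the claimed blending method.

    import numpy as np

    def blend_flash_frames(main_flash_img, weak_flash_img, alpha=0.6):
        # Weighted blend of frames captured under two strobe intensity settings.
        main_f = main_flash_img.astype(np.float32)
        weak_f = weak_flash_img.astype(np.float32)
        output = alpha * main_f + (1.0 - alpha) * weak_f
        return np.clip(output, 0, 255).astype(np.uint8)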
Abstract:
One of the embodiments of the invention provides an input-output calibration method performed by a processing unit connected to an output device and an input device. The output device and the input device correspond to an output device coordinate system and an input device coordinate system, respectively. The processing unit first uses the input device to derive a plurality of lines in the input device coordinate system for M calibration points by sensing a viewer specifying the M calibration points' positions, wherein the plurality of lines extend between the M calibration points and different positions of a predetermined object of the viewer, and M is a positive integer equal to or larger than three. Then, the processing unit derives the M calibration points' coordinates in the input device coordinate system according to the plurality of lines and uses the M calibration points' coordinates in the output device coordinate system and in the input device coordinate system to derive the relationship between the output device coordinate system and the input device coordinate system.
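A rough sketch under two assumptions the abstract does not mandate: each calibration point's coordinates in the input device coordinate system are recovered as the least-squares intersection of its sensed lines (each line given as a point plus a direction), and the relationship between the two coordinate systems is modeled as a rigid transform estimated from the M point pairs with the Kabsch algorithm.

    import numpy as np

    def intersect_lines(points, directions):
        # Least-squares point closest to a set of 3D lines, each defined by
        # a point on the line and a direction vector.
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for p, d in zip(points, directions):
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the line
            A += P
            b += P @ p
        return np.linalg.solve(A, b)

    def estimate_rigid_transform(input_pts, output_pts):
        # Rotation R and translation t with output ~= R @ input + t, M >= 3.
        ci, co = input_pts.mean(axis=0), output_pts.mean(axis=0)
        H = (input_pts - ci).T @ (output_pts - co)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:             # repair an improper rotation
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = co - R @ ci
        return R, t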
Abstract:
A method for changing a setting of a mobile communication device is disclosed. The method includes receiving context information of the mobile communication device, changing the setting of the mobile communication device according to the context information and a user preference rule, and updating the user preference rule according to the context information and the changed setting.
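A toy sketch of the idea, assuming the user preference rule is a lookup table keyed by context and that the rule is updated whenever a setting is applied for that context; the context fields (location, calendar state) and the ringer settings shown are illustrative assumptions only.

    def change_setting(context, preference_rule, default="normal"):
        # Return the setting implied by the context and the learned rule.
        key = (context.get("location"), context.get("calendar_state"))
        return preference_rule.get(key, default)

    def update_rule(preference_rule, context, applied_setting):
        # Record the setting chosen for this context so it is reused next time.
        key = (context.get("location"), context.get("calendar_state"))
        preference_rule[key] = applied_setting

    # Usage
    rule = {}
    ctx = {"location": "office", "calendar_state": "in_meeting"}
    setting = change_setting(ctx, rule)      # falls back to "normal"
    update_rule(rule, ctx, "silent")         # user preference observed
    print(change_setting(ctx, rule))         # now "silent"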
Abstract:
A method and apparatus for texture image compression in a 3D video coding system are disclosed. Embodiments according to the present invention derive depth information related to a depth map associated with a texture image and then process the texture image based on the derived depth information. The invention can be applied to the encoder side as well as the decoder side. The encoding order or decoding order for the depth maps and the texture images can be based on block-wise interleaving or picture-wise interleaving. One aspect of the present invention is related to partitioning of the texture image based on depth information of the depth map. Another aspect of the present invention is related to motion vector or motion vector predictor processing based on the depth information.
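A simplified sketch of the partitioning aspect only, assuming the texture block is split into two partitions at the mean value of the co-located depth block and that each partition is then represented by its mean (DC) value; both choices are illustrative assumptions rather than the disclosed coding process.

    import numpy as np

    def partition_from_depth(depth_block):
        # Boolean partition mask derived from a co-located depth block:
        # True marks the near partition, False the far partition (assumed rule).
        threshold = depth_block.mean()
        return depth_block >= threshold

    def encode_partitions(texture_block, mask):
        # Placeholder per-partition processing: code each partition's mean.
        dc_near = texture_block[mask].mean() if mask.any() else 0.0
        dc_far = texture_block[~mask].mean() if (~mask).any() else 0.0
        return dc_near, dc_far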