Abstract:
A monitoring and photographing module includes one primary camera and N secondary cameras. The primary camera and the N secondary cameras are configured to collect images, and a frame rate at which any secondary camera collects images is less than a frame rate at which the primary camera collects images. Regions monitored by the N secondary cameras respectively cover N different regions in a region monitored by the primary camera, and a focal length of any secondary camera is greater than a focal length of the primary camera.
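The constraints in this abstract can be stated as a quick consistency check; `Camera` and `check_module` are illustrative names, not terms from the patent:

```python
from dataclasses import dataclass

@dataclass
class Camera:
    frame_rate: float    # frames per second
    focal_length: float  # millimetres

def check_module(primary: Camera, secondaries: list) -> bool:
    """Return True when every secondary camera satisfies the stated
    constraints: a lower frame rate and a longer focal length than
    the primary camera."""
    return all(
        s.frame_rate < primary.frame_rate
        and s.focal_length > primary.focal_length
        for s in secondaries
    )
```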
Abstract:
The present disclosure provides a method for encoding an intra-frame prediction mode, including: obtaining an intra-frame prediction mode of a current intra-frame encoding block from a preset prediction mode set; obtaining reference prediction modes of the current intra-frame encoding block, where the reference prediction modes are intra-frame prediction modes of available adjacent blocks of the current intra-frame encoding block or prediction modes in a preset backup reference mode set; writing a first flag bit into a code stream according to the reference prediction modes and the intra-frame prediction mode; and, when the intra-frame prediction mode of the encoding block is different from all the reference prediction modes, obtaining a prediction mode encoding value according to a size relationship between the value of the intra-frame prediction mode and values of the reference prediction modes, and encoding the prediction mode encoding value.
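The flag-plus-adjusted-value scheme described above resembles most-probable-mode coding: a flag signals whether the mode matches a reference mode, and otherwise the coded value is reduced according to the size relationship with the reference modes. The sketch below is one plausible reading; the symbol layout and function names are assumptions, not taken from the patent:

```python
def encode_intra_mode(mode, ref_modes):
    """Encode an intra mode against a list of reference modes."""
    refs = sorted(set(ref_modes))
    if mode in refs:
        # first flag bit: the mode equals one of the reference modes
        return {"flag": 1, "index": refs.index(mode)}
    # otherwise shrink the value: subtract one for every reference
    # mode whose value is smaller than the current mode value
    coded = mode - sum(1 for r in refs if r < mode)
    return {"flag": 0, "value": coded}

def decode_intra_mode(symbol, ref_modes):
    """Invert encode_intra_mode given the same reference modes."""
    refs = sorted(set(ref_modes))
    if symbol["flag"] == 1:
        return refs[symbol["index"]]
    mode = symbol["value"]
    for r in refs:        # undo the subtraction, smallest ref first
        if r <= mode:
            mode += 1
    return mode
```

The round trip holds for any mode value, which is the property such a scheme needs at the decoder.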
Abstract:
A time-division multiplexing fill light imaging method includes alternately generating a visible light frame and a fill light frame by using an image sensor, where the visible light frame is an image frame generated when the image sensor receives visible light but does not receive fill light, and the fill light frame is an image frame generated when the image sensor receives fill light, and combining a visible light frame and a fill light frame that are adjacent or consecutive, to obtain a composite frame.
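The abstract does not specify the combination rule beyond "combining ... to obtain a composite frame", so a simple per-pixel weighted blend of the two adjacent frames stands in here as a hypothetical illustration:

```python
def combine_frames(visible, fill, alpha=0.5):
    """Blend a visible-light frame with its adjacent fill-light frame.
    Frames are flat lists of pixel intensities; `alpha` weights the
    visible-light contribution. The real combination rule is not
    given in the abstract -- this blend is only a stand-in."""
    assert len(visible) == len(fill), "adjacent frames must match in size"
    return [alpha * v + (1 - alpha) * f for v, f in zip(visible, fill)]
```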
Abstract:
An encoding method with multiple image block division manners is disclosed, including: determining a division manner and a division direction of an image block; dividing the image block to obtain image subblocks sequentially arranged horizontally or vertically; determining whether the image subblocks need subdivision, and if subdivision is not needed, performing intra-frame prediction on the encoding object according to the image subblocks to obtain residual data; performing transformation, quantization, and entropy encoding on the residual data to obtain coded residual data; and writing the division manner of the image block, the division direction of the image block, an identifier indicating whether the image subblocks need subdivision, and the coded residual data into a bitstream. By applying the encoding method, better prediction accuracy can be achieved when the pixel values of the image block change only slightly in the horizontal or vertical direction.
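The division step can be illustrated with a minimal sketch; the function name, the equal-sized split, and the list-of-rows block representation are all assumptions for illustration:

```python
def divide_block(block, direction, parts):
    """Split a 2-D block (a list of pixel rows) into `parts` equal
    subblocks: 'horizontal' yields stacked row bands, 'vertical'
    yields side-by-side column bands."""
    if direction == "horizontal":
        h = len(block) // parts
        return [block[i * h:(i + 1) * h] for i in range(parts)]
    w = len(block[0]) // parts
    return [[row[i * w:(i + 1) * w] for row in block] for i in range(parts)]
```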
Abstract:
An image processing method and apparatus, the image processing method including receiving a first largest coding unit of an image, where the first largest coding unit is a currently received largest coding unit; determining a compensation parameter of the first largest coding unit; performing pixel compensation on at least one area of the first largest coding unit according to the compensation parameter of the first largest coding unit; and performing pixel compensation on at least one area, on which pixel compensation is not performed, of a second largest coding unit according to a compensation parameter of the second largest coding unit, where the second largest coding unit is a previously received largest coding unit adjacent to the first largest coding unit.
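The deferred, boundary-crossing compensation flow described above might be sketched as follows; the dict layout, the notion of "pending" areas, and the additive per-area offset are assumptions chosen for illustration:

```python
def pixel_compensate(pixels, offset):
    """Apply an additive compensation offset to every pixel of an area."""
    return [p + offset for p in pixels]

def on_receive_lcu(current, previous):
    """current/previous are dicts with 'areas' or 'pending' (lists of
    pixel lists) and 'offset' (the unit's compensation parameter).
    When the current LCU arrives, its areas are compensated with its
    own parameter; areas of the previously received adjacent LCU that
    could not be compensated earlier (e.g. on the shared boundary)
    are compensated now, using the previous LCU's parameter."""
    done = [pixel_compensate(a, current["offset"]) for a in current["areas"]]
    deferred = [pixel_compensate(a, previous["offset"])
                for a in previous["pending"]]
    return done, deferred
```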
Abstract:
In embodiments of the present invention, a first reference point and a second reference point that correspond to a prediction point are obtained from an upper reference edge and a left reference edge of a prediction block respectively according to a position of the prediction point in the prediction block and a prediction texture direction that corresponds to a prediction mode. Then linear interpolation is performed, according to the position of the prediction point, on the first reference point and the second reference point to obtain a predicted value of the prediction point.
Abstract:
This application discloses a photographing apparatus and method, and relates to the field of image processing. The method obtains a high-quality image in a low-illumination environment while reducing costs, reducing size, and improving product compatibility. The method includes: controlling a light filtering unit to: transparently transmit visible light in incident light and block infrared light in the incident light in a first image exposure interval, transparently transmit the infrared light in the incident light in a first time period of a second image exposure interval, and block the incident light in a second time period of the second image exposure interval; performing, by using an image sensor, photoelectric imaging on a light ray to obtain a first image, and performing photoelectric imaging on a light ray to obtain a second image; and synthesizing the first image and the second image to generate a first target image.
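The filter schedule can be sketched as a simple state function over time; the interval boundaries, parameter names, and return labels are illustrative assumptions, not from the patent:

```python
def filter_state(t, t1, t2_ir):
    """State of the light filtering unit at time t, assuming the first
    exposure interval spans [0, t1) and the infrared window occupies
    the first `t2_ir` seconds of the second interval."""
    if t < t1:
        return "visible"    # pass visible light, block infrared
    if t < t1 + t2_ir:
        return "infrared"   # first time period of the second interval
    return "blocked"        # second time period: block incident light
```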
Abstract:
An optical path switching method is applied to a surveillance module. The method includes: determining a target magnification; and (i) when the target magnification is less than or equal to a maximum magnification of a camera, setting a magnification of the camera to the target magnification, determining that a reflection element is at a first location or in a first working state, and performing image capture by using the camera alone; or (ii) when the target magnification is greater than a maximum magnification of the camera, setting a magnification of the camera to a first magnification, determining that the reflection element is at a second location or in a second working state, and performing image capture by using both the camera and a teleconverter, where a product of the first magnification and a magnification of the teleconverter is the target magnification. The method increases a surveillance distance while reducing costs.
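The two-branch decision in the method maps directly onto a small helper; the function name and dict layout are assumptions, but the branch logic and the product relation follow the abstract:

```python
def plan_capture(target_mag, camera_max, teleconverter_mag):
    """Choose the optical path for a target magnification.
    Within the camera's range the camera works alone; beyond it the
    reflection element routes light through the teleconverter and the
    camera magnification is set so that the product of the two
    magnifications equals the target."""
    if target_mag <= camera_max:
        return {"camera_mag": target_mag, "use_teleconverter": False}
    first_mag = target_mag / teleconverter_mag
    return {"camera_mag": first_mag, "use_teleconverter": True}
```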