Abstract:
Embodiments of this application provide an electronic device and a picture processing method. The electronic device includes a rear cover, a front-facing camera, a rear-facing camera, and a light reflective apparatus. The light reflective apparatus may be switched between a usage state and an idle state. The picture processing method includes: controlling the light reflective apparatus to switch from the idle state to the usage state, so that the rear-facing camera captures a picture reflected by the light reflective apparatus; separately obtaining a picture captured by the front-facing camera and a picture captured by the rear-facing camera; processing the picture captured by the rear-facing camera; and fusing the picture captured by the front-facing camera and a picture obtained by processing the picture captured by the rear-facing camera. In this application, the field of view of a picture taken by the front-facing camera can be increased at a relatively low cost.
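As an illustration of the capture-and-fuse flow described above, the following is a minimal sketch assuming the reflected rear-camera picture only needs a horizontal flip and a simple side-by-side blend with the front-camera picture; the NumPy-based functions, parameter names, and blending strategy are illustrative assumptions, not details from the application.

```python
import numpy as np

def process_reflected_picture(rear_picture: np.ndarray) -> np.ndarray:
    """Undo the mirroring introduced by the light reflective apparatus (assumed to be a flip)."""
    return rear_picture[:, ::-1]

def fuse_pictures(front_picture: np.ndarray, processed_rear: np.ndarray,
                  overlap: int = 32) -> np.ndarray:
    """Fuse the two pictures by linearly blending a narrow overlapping band (H x W x 3 images)."""
    h = min(front_picture.shape[0], processed_rear.shape[0])
    front, rear = front_picture[:h], processed_rear[:h]
    alpha = np.linspace(0.0, 1.0, overlap)[None, :, None]          # blend weights across the band
    blend = (1 - alpha) * front[:, -overlap:] + alpha * rear[:, :overlap]
    wide = np.concatenate([front[:, :-overlap], blend, rear[:, overlap:]], axis=1)
    return wide.astype(front.dtype)                                 # wider field of view than the front picture alone
```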
Abstract:
This application provides a pixel collection circuit, including: an optical-to-electrical converter circuit configured to convert a collected optical signal into an analog signal; an analog-to-digital converter circuit configured to receive the analog signal from the optical-to-electrical converter circuit and perform analog-to-digital conversion on the analog signal to obtain a digital signal; a differential circuit configured to receive the digital signal from the analog-to-digital converter circuit and obtain a difference signal between a digital signal of a previous triggering moment and the digital signal of a current triggering moment, where the previous triggering moment and the current triggering moment are determined by at least a digital clock signal; and a comparison circuit configured to receive the difference signal from the differential circuit and output a pulse signal. Because a digital component is used to implement the pixel collection circuit in the dynamic vision sensor, noise and interference are reduced and debugging is facilitated.
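The digital signal path described above can be illustrated numerically: sampled ADC values stand in for the digital signal, consecutive triggering moments are consecutive samples, and a fixed threshold stands in for the comparison circuit. This is a minimal sketch; the threshold value and function names are assumptions for illustration only.

```python
def pixel_events(adc_samples, threshold=4):
    """Yield +1/-1 pulse events when the sample-to-sample difference exceeds the threshold."""
    previous = None
    for current in adc_samples:          # one sample per digital-clock triggering moment
        if previous is not None:
            diff = current - previous    # difference signal between the two triggering moments
            if diff >= threshold:
                yield +1                 # brightness-increase pulse
            elif diff <= -threshold:
                yield -1                 # brightness-decrease pulse
        previous = current

# Example: a slow ramp followed by a sudden jump produces a single +1 pulse.
print(list(pixel_events([10, 11, 12, 30, 30, 29])))  # [1]
```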
Abstract:
This application provides a dynamic vision sensor. The sensor converts an optical signal into an electrical signal by using a photoelectric conversion unit, to generate a photovoltage; performs a second-order difference on the photovoltage by using a second-order differential circuit; and generates a second-order event signal based on a result of the second-order difference. A camera including the sensor can generate an image based on the second-order event signal, where the image represents a change in the speed of the light change.
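A minimal numerical sketch of the second-order difference may be helpful; it assumes the photovoltage is available as discrete samples and that an event is emitted when the second-order difference exceeds a threshold. The threshold and all names are illustrative assumptions, not circuit details from the application.

```python
def second_order_events(photovoltage_samples, threshold=2.0):
    """Emit an event when the change of the light change speed exceeds the threshold."""
    events = []
    v = photovoltage_samples
    for i in range(2, len(v)):
        second_diff = (v[i] - v[i - 1]) - (v[i - 1] - v[i - 2])  # change of the change rate
        if abs(second_diff) >= threshold:
            events.append((i, 1 if second_diff > 0 else -1))
    return events

# A linearly increasing photovoltage yields no events; an acceleration of the change does.
print(second_order_events([0, 1, 2, 3, 7, 11]))  # [(4, 1)]
```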
Abstract:
An image rendering method and apparatus, where the method includes recognizing a target area in a to-be-rendered image, setting a virtual light source for the target area, and rendering the target area by using the virtual light source. When the to-be-rendered image is rendered, the rendering is implemented by using the virtual light source. The virtual light source acts only on the target area to which it corresponds and does not affect other parts of the to-be-rendered image; therefore, the image effect of the to-be-rendered image can be relatively good.
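The localized-rendering idea above can be sketched as follows, assuming the target area is given as a boolean mask and the virtual light source is a simple point light that brightens pixels with distance-based falloff; the lighting model and all names are illustrative assumptions rather than the method's actual rendering.

```python
import numpy as np

def render_with_virtual_light(image: np.ndarray, target_mask: np.ndarray,
                              light_xy: tuple, intensity: float = 80.0,
                              falloff: float = 50.0) -> np.ndarray:
    """Brighten only the masked target area; pixels outside the mask are untouched."""
    h, w = target_mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - light_xy[1], xs - light_xy[0])
    gain = intensity * np.exp(-dist / falloff)              # point-light falloff
    out = image.astype(np.float32)                          # image assumed H x W x 3
    out[target_mask] += gain[target_mask][:, None]          # affect the target area only
    return np.clip(out, 0, 255).astype(image.dtype)
```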
Abstract:
An image processing method and an image processing device are provided. The image processing method includes: determining a first width and a second width, where the first width is the width of a blind spot between source images corresponding to N screens, the second width is the width of a gap between the display devices that display target images on the N screens, N is an integer greater than 1, and the N screens are of the same size and are arranged side by side at the same height; and when the first width is different from the second width, adjusting the source images according to the determined first width and second width to obtain the target images, so that no mismatch exists in the target images stitched across the N screens.
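As a rough illustration of the adjustment, the sketch below assumes both widths are expressed in source-image pixels, that images are H x W x 3 arrays, and that each inner edge of a source image is cropped (or padded) by half of the width difference so content lines up across the physical gap; the strategy and names are assumptions, not the method's actual adjustment.

```python
import numpy as np

def adjust_source_images(sources, first_width: int, second_width: int):
    """Crop or pad inner edges so the stitched target images show no mismatch across the gap."""
    delta = second_width - first_width            # extra pixels hidden by the physical gap
    half = abs(delta) // 2
    adjusted = []
    for i, img in enumerate(sources):
        left = half if i > 0 else 0               # only inner edges are adjusted
        right = half if i < len(sources) - 1 else 0
        if delta > 0:                             # gap wider than blind spot: drop inner pixels
            img = img[:, left: img.shape[1] - right]
        elif delta < 0:                           # gap narrower: duplicate inner edge pixels
            img = np.pad(img, ((0, 0), (left, right), (0, 0)), mode="edge")
        adjusted.append(img)
    return adjusted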
Abstract:
This application relates to the field of video coding technologies, and discloses a video coder and a corresponding method, to help improve video coding performance. In this application, encoding and decoding are collectively referred to as coding. A video coding method includes: determining a block split policy of a current picture block based on a size relationship between the width and the height of the current picture block; applying the block split policy to the current picture block to obtain a coding block; and reconstructing the obtained coding block to reconstruct the current picture block.
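A minimal sketch of a width/height-driven split policy is given below, assuming a wider-than-tall block is split vertically, a taller-than-wide block horizontally, and a square block into quarters, recursing down to a minimum size; the minimum size and all names are illustrative assumptions, not the coder's actual split rules.

```python
def split_block(x, y, width, height, min_size=8):
    """Recursively split a picture block and return the resulting coding blocks."""
    if width <= min_size and height <= min_size:
        return [(x, y, width, height)]            # small enough: this is a coding block
    if width > height:                            # wide block: vertical binary split
        half = width // 2
        return (split_block(x, y, half, height, min_size) +
                split_block(x + half, y, width - half, height, min_size))
    if height > width:                            # tall block: horizontal binary split
        half = height // 2
        return (split_block(x, y, width, half, min_size) +
                split_block(x, y + half, width, height - half, min_size))
    half = width // 2                             # square block: quad split
    return (split_block(x, y, half, half, min_size) +
            split_block(x + half, y, half, half, min_size) +
            split_block(x, y + half, half, half, min_size) +
            split_block(x + half, y + half, half, half, min_size))

print(len(split_block(0, 0, 64, 32)))  # a 64x32 block yields 32 coding blocks of 8x8
```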
Abstract:
A method and system for controlling multiple auxiliary streams, a control device, and a node, which implement sending and receiving of multiple auxiliary streams of multiple nodes. The method includes: determining, by a control device, to allocate tokens to m auxiliary streams, where the m auxiliary streams belong to n nodes, n≥1, m≥2, and m≥n; allocating the tokens to the m auxiliary streams; sending, to a first node, a token allocated to an auxiliary stream of the first node, where the first node is one of the n nodes; and sending a first indication message to the first node, where the first indication message instructs the first node to send a first auxiliary stream according to the received token, and the first auxiliary stream includes at least one of the m auxiliary streams.
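The allocation flow can be sketched in a few lines, assuming each auxiliary stream is identified by a (node_id, stream_id) pair and a token is just a unique integer; the message structures and names are illustrative assumptions, not the protocol's actual encoding.

```python
from collections import defaultdict

def allocate_tokens(streams):
    """Allocate one token per auxiliary stream and group the results per node."""
    per_node = defaultdict(list)
    for token, (node_id, stream_id) in enumerate(streams):
        per_node[node_id].append({"stream": stream_id, "token": token})
    return per_node

def messages_for_node(node_id, per_node):
    """Build the token message and the first indication message sent to one node."""
    tokens = per_node[node_id]
    token_msg = {"type": "token_grant", "node": node_id, "tokens": tokens}
    indication = {"type": "send_indication", "node": node_id,
                  "streams": [t["stream"] for t in tokens]}
    return token_msg, indication

# Example: m = 3 auxiliary streams on n = 2 nodes.
per_node = allocate_tokens([("node-1", "aux-a"), ("node-1", "aux-b"), ("node-2", "aux-c")])
print(messages_for_node("node-1", per_node))
```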
Abstract:
A method includes: sending, by a first device, a first message and a third message to a second device, where the first message includes information about at least one media capture capability supported by the first device and at least one first association identifier, the third message includes at least one configuration item supported by the first device and at least one second association identifier, and the at least one first association identifier corresponds to the at least one second association identifier in a one-to-one manner; and receiving a second message and a fourth message sent by the second device, where the second message includes at least one media capture capability selected by the second device according to the first message and the third message, and at least one third association identifier corresponding to the at least one media capture capability.
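The correlation between the messages can be illustrated with plain dictionaries, assuming capabilities, configuration items, and association identifiers are matched by position; every field name here is an illustrative assumption rather than the actual message format.

```python
def build_offer(capture_capabilities, configuration_items):
    """First device: build the first and third messages with matching association IDs."""
    assoc_ids = list(range(len(capture_capabilities)))
    first_msg = {"capabilities": capture_capabilities, "assoc_ids": assoc_ids}
    third_msg = {"configurations": configuration_items, "assoc_ids": assoc_ids}
    return first_msg, third_msg

def build_answer(first_msg, third_msg, chosen_assoc_ids):
    """Second device: select capabilities and configurations by association identifier."""
    index = {a: i for i, a in enumerate(first_msg["assoc_ids"])}
    second_msg = {"capabilities": [first_msg["capabilities"][index[a]] for a in chosen_assoc_ids],
                  "assoc_ids": chosen_assoc_ids}
    fourth_msg = {"configurations": [third_msg["configurations"][index[a]] for a in chosen_assoc_ids],
                  "assoc_ids": chosen_assoc_ids}
    return second_msg, fourth_msg

first, third = build_offer(["video-1080p", "audio-stereo"], [{"fps": 30}, {"channels": 2}])
print(build_answer(first, third, [0]))
```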
Abstract:
The present invention provides a media negotiation method, device, and system for a multi-stream conference. The method includes: sending a media advertisement message that carries information about at least two media data objects; receiving a media selection message that carries information about a media data object selected by a second media entity; and determining the corresponding media data object according to the information about the media data object selected by the second media entity, and establishing a media transmission channel with the second media entity, so as to transmit the corresponding media data object to the second media entity through the media transmission channel. In embodiments of the present invention, more media data streams can be represented, and the representation accuracy of the media data streams and the amount of information they carry can be improved.
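The advertise/select/establish flow can be sketched as a simple in-memory exchange between the two media entities; the object descriptions, channel identifiers, and function names are illustrative assumptions, not the negotiation protocol's actual syntax.

```python
def media_advertisement(media_objects):
    """First entity: advertise information about at least two media data objects."""
    return {"type": "advertisement", "objects": media_objects}

def media_selection(advertisement, wanted_ids):
    """Second entity: pick the media data objects it wants from the advertisement."""
    chosen = [o for o in advertisement["objects"] if o["id"] in wanted_ids]
    return {"type": "selection", "objects": chosen}

def establish_channels(selection):
    """First entity: open one transmission channel per selected media data object."""
    return [f"channel-for-{obj['id']}" for obj in selection["objects"]]

adv = media_advertisement([{"id": "main-video"}, {"id": "slides"}, {"id": "audio"}])
sel = media_selection(adv, {"slides", "audio"})
print(establish_channels(sel))
```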