Abstract:
A wearable projection apparatus and a projection method are provided. The projection method comprises: acquiring color distribution information and brightness distribution information of a plurality of regions of a hand; determining modified color output parameters according to the acquired color distribution information of the regions of the hand and preset standard color information, and determining modified brightness output parameters according to the acquired brightness distribution information of the regions of the hand; and controlling a projection module to output an image according to the determined color output parameters and the determined brightness output parameters. With the wearable projection apparatus and the projection method provided by embodiments of the present invention, an image projected by the wearable projection apparatus onto the hand can have relatively high uniformity in color and brightness.
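The compensation step above can be sketched as follows. This is a minimal illustration under assumed names and an assumed gain model (per-region RGB gains toward the preset standard color, and brightness flattened down to the dimmest region so no region is driven past full output); the abstract does not specify the actual computation.

```python
# Hedged sketch: per-region compensation gains for projection onto a hand.
# The gain model and all names are illustrative assumptions.

def compensation_gains(region_colors, standard_color, region_brightness):
    """For each hand region, derive RGB gains that pull the measured color
    toward the preset standard color, and a brightness gain that flattens
    the measured brightness toward the dimmest region."""
    target = min(region_brightness)  # compensate down to the darkest region
    gains = []
    for (r, g, b), lum in zip(region_colors, region_brightness):
        color_gain = tuple(s / max(c, 1e-6)
                           for s, c in zip(standard_color, (r, g, b)))
        gains.append((color_gain, target / max(lum, 1e-6)))
    return gains

regions = [(200, 160, 150), (180, 150, 140)]  # measured skin RGB per region
standard = (190, 155, 145)                    # preset standard color
brightness = [120.0, 100.0]                   # measured luminance per region
print(compensation_gains(regions, standard, brightness))
```

In this toy model, the projection module would multiply each region's output by its color gains and brightness gain before display.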
Abstract:
A touch implementation method and device and an electronic device are provided. The method includes: displaying, on a display screen of an electronic device, N calibration points for the user to touch; acquiring first coordinates of the N calibration points as touched by the user; acquiring second coordinates of the N calibration points; and calculating, from the first coordinates and the second coordinates of the N calibration points, a mapping parameter between a point's first coordinate and its second coordinate. The mapping parameter is used to acquire the first coordinate of a touch point when the user touches the display screen. Embodiments of the present disclosure can reduce the cost of the electronic device.
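The calibration step can be sketched as a least-squares fit. Assuming, purely for illustration, that each axis maps independently as display ≈ a·raw + b (the abstract does not state the form of the mapping parameter), the parameters can be fitted from the N point pairs:

```python
def fit_axis_map(raw, cal):
    """Least-squares fit of cal ≈ a*raw + b for one axis."""
    n = len(raw)
    mx = sum(raw) / n
    my = sum(cal) / n
    num = sum((x - mx) * (y - my) for x, y in zip(raw, cal))
    den = sum((x - mx) ** 2 for x in raw)
    a = num / den
    return a, my - a * mx

# N = 3 calibration points: raw touch readings vs. true display coordinates
raw_x, raw_y = [102, 512, 918], [95, 388, 700]
disp_x, disp_y = [100, 500, 900], [90, 380, 690]
ax, bx = fit_axis_map(raw_x, disp_x)
ay, by = fit_axis_map(raw_y, disp_y)

def map_touch(x, y):
    """Apply the fitted mapping parameters to a new raw touch point."""
    return ax * x + bx, ay * y + by
```

A real touch controller would likely fit a full affine or perspective transform; the per-axis linear fit keeps the sketch short.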
Abstract:
The present disclosure provides a gesture recognition method, a gesture recognition system, a terminal device and a wearable device. The gesture recognition method includes: collecting action information about a user; recognizing the action information; querying the user's personal action database for an action instruction corresponding to the recognized action information, a correspondence between the user's action information and action instructions being stored in the personal action database; and executing an operation corresponding to the queried action instruction.
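The lookup flow can be sketched as below. The data structures, the recognizer, and the action bindings are all illustrative assumptions; only the recognize-query-execute sequence comes from the method itself.

```python
def execute_gesture(raw_action, personal_db, recognizer, actions):
    """Recognize the collected action information, query the user's
    personal action database for the matching instruction, then run
    the operation bound to that instruction."""
    label = recognizer(raw_action)
    instruction = personal_db.get(label)
    if instruction is None:
        return None  # no personal binding for this action
    return actions[instruction]()

# Hypothetical per-user database and bindings
personal_db = {"swipe_left": "prev_page"}
recognizer = lambda a: "swipe_left" if a == [1, 0] else "unknown"
actions = {"prev_page": lambda: "page-1"}
print(execute_gesture([1, 0], personal_db, recognizer, actions))
```

Keeping the database per-user means the same physical gesture can trigger different instructions for different users.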
Abstract:
The present disclosure relates to the field of data interaction technologies, and in particular to an interaction method between a display device and a terminal device, a computer readable storage medium, and an electronic device. The display device includes a multi-device access function, and the method includes: in response to a user's enabling operation on the multi-device access function, generating an access address of the display device; receiving access requests generated and sent by multiple terminal devices according to the access address, and establishing communication connections between the terminal devices and the display device; generating multiple cursors in a one-to-one correspondence with the terminal devices, and displaying the cursors on a display screen of the display device; and receiving a cursor control instruction sent by a terminal device, and controlling a display position of the corresponding cursor on the display screen according to the control instruction.
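The per-terminal cursor bookkeeping can be sketched as follows. The class, the address string, and the instruction format (relative dx/dy moves) are illustrative assumptions standing in for whatever protocol the method actually uses.

```python
import itertools

class DisplayDevice:
    """One cursor per accepted terminal; control instructions move only
    the sending terminal's cursor, clamped to the screen."""

    def __init__(self, width, height):
        self.size = (width, height)
        self.cursors = {}              # device_id -> (x, y)
        self._ids = itertools.count(1)

    def enable_access(self):
        # Hypothetical access address generated when the user enables
        # the multi-device access function.
        return "display://192.168.0.10:7000"

    def accept(self):
        """Accept an access request; new cursors start at screen center."""
        device_id = next(self._ids)
        self.cursors[device_id] = (self.size[0] // 2, self.size[1] // 2)
        return device_id

    def on_control(self, device_id, dx, dy):
        """Apply a relative-move control instruction from one terminal."""
        x, y = self.cursors[device_id]
        w, h = self.size
        self.cursors[device_id] = (min(max(x + dx, 0), w - 1),
                                   min(max(y + dy, 0), h - 1))
```

Keying the cursor table by device id is what gives the one-to-one correspondence between terminals and cursors.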
Abstract:
An assisted driving method and apparatus, a computing device, a computer readable storage medium and a computer program product are provided. The assisted driving method includes: acquiring a first image of a current road section, wherein road image information of the current road section is recorded in the first image; and controlling playback of the acquired first image, wherein the playback is based on marker information associated with the first image, combined with parameter information of a current vehicle, such that the position information of the first image matches the position of the current vehicle.
Abstract:
The present disclosure provides a mobile terminal image synthesis method, a mobile terminal image synthesis apparatus and a mobile terminal. The mobile terminal image synthesis method includes: acquiring a first fisheye image collected by a front fisheye camera group and a second fisheye image collected by a rear fisheye camera group; obtaining a first corrected image by correcting the first fisheye image with a first internal parameter of the front fisheye camera group, and obtaining a second corrected image by correcting the second fisheye image with a second internal parameter of the rear fisheye camera group; and obtaining a synthesized image by splicing and combining the first corrected image and the second corrected image according to splicing parameters.
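The correction step can be sketched with a standard fisheye model. Assuming (as an illustration, not the patent's actual internal parameters) an equidistant fisheye with a single focal length f, a pixel at radius r from the image center corresponds to incidence angle θ = r / f, and reprojecting with r' = f·tan(θ) yields the rectilinear radius:

```python
import math

def undistort_point(x, y, f):
    """Equidistant-fisheye to rectilinear reprojection of one point
    (x, y are offsets from the image center, f an assumed focal length)."""
    r = math.hypot(x, y)
    if r == 0:
        return (0.0, 0.0)
    theta = r / f            # incidence angle under the equidistant model
    scale = f * math.tan(theta) / r
    return (x * scale, y * scale)
```

A real pipeline would apply this mapping (with the camera group's calibrated internal parameters, including distortion coefficients) to every pixel before stitching the front and rear corrected images.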
Abstract:
Embodiments of the present disclosure provide a rendering method in an AR scene. The method includes: creating a virtual scene including a virtual object; obtaining depth information of a real shielding object and generating a grid map; creating a shielding-object virtual model of the real shielding object in the virtual scene; and setting a property of the shielding-object virtual model to light-absorbing, and rendering the shielding-object virtual model. The present disclosure further provides a processor and AR glasses.
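One way to read "light-absorbing" is a depth-only pass: the occluder model writes depth but no color, so virtual fragments behind it are discarded and the real-world camera feed shows through. The toy per-pixel compositor below illustrates that idea; all structures are assumptions, not the disclosed renderer.

```python
def render(pixels, virtual, occluders):
    """Toy compositor. `occluders` maps pixel -> depth of the real
    shielding object's model (depth-only, contributes no color);
    `virtual` maps pixel -> (depth, color) of virtual-object fragments.
    A color of None means the camera feed shows through."""
    depth = {p: float("inf") for p in pixels}
    color = {p: None for p in pixels}
    for p, z in occluders.items():          # depth-only "light-absorbing" pass
        depth[p] = min(depth[p], z)
    for p, (z, c) in virtual.items():       # virtual objects, depth-tested
        if z < depth[p]:
            depth[p], color[p] = z, c
    return color
```

With this scheme, a virtual object partly behind a real hand or table edge is clipped exactly where the occluder model's depth wins the test.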
Abstract:
A wearable projecting device, and a focusing method and a projection method thereof, are disclosed. The focusing method includes: acquiring position information of a projection center of the wearable projecting device on a palm; determining, at a set frequency, a distance between the projection center and the wearable projecting device according to the acquired position information of the projection center on the palm; determining a focal length of a lens set in the wearable projecting device according to the determined distance between the projection center and the wearable projecting device; and adjusting the lens set according to the determined focal length. This focusing method improves the stability of the definition of images projected onto the palm surface by the wearable projecting device.
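The distance-to-focal-length step can be sketched with the thin-lens equation. This is a simplified stand-in for the device's real lens model: u is an assumed fixed display-panel-to-lens distance and v the measured lens-to-palm distance, with 1/f = 1/u + 1/v.

```python
def lens_focal_length(panel_dist_mm, palm_dist_mm):
    """Thin-lens estimate of the required focal length, given the fixed
    internal panel-to-lens distance and the measured palm distance."""
    return 1.0 / (1.0 / panel_dist_mm + 1.0 / palm_dist_mm)

# Re-focusing at a set frequency as the measured palm distance changes
for palm_dist in (120.0, 150.0):
    print(round(lens_focal_length(10.0, palm_dist), 3))
```

As the palm moves farther away, the required focal length approaches the fixed panel distance, which is why frequent re-measurement keeps the projected image sharp.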
Abstract:
Disclosed are a somatosensory recognition system for recognizing a human body's action and a recognition method thereof. The somatosensory recognition system comprises: an acquisition device configured to acquire first motion track data of an action of a body part; a wearable device worn on the body part and configured to acquire second motion track data of the action; and a processing device configured to compare the first motion track data with the second motion track data and determine the action of the body part according to a comparison result. The somatosensory recognition system and the recognition method thereof can improve the accuracy of somatosensory recognition.
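The comparison step can be sketched as below. The fusion rule (accept a sample only when the two tracks agree within a threshold, then average them and match against action templates) is an illustrative assumption; the abstract only says the two tracks are compared.

```python
import math

def track_distance(track_a, track_b):
    """Mean Euclidean distance between two equal-length motion tracks,
    each a list of (x, y, z) samples."""
    return sum(math.dist(p, q) for p, q in zip(track_a, track_b)) / len(track_a)

def recognize(track_a, track_b, templates, threshold=0.5):
    """Fuse the camera track and the wearable track, rejecting samples
    where the sensors disagree, then pick the closest template action."""
    if track_distance(track_a, track_b) > threshold:
        return None  # sensors disagree: reject the sample
    fused = [tuple((a + b) / 2 for a, b in zip(p, q))
             for p, q in zip(track_a, track_b)]
    return min(templates, key=lambda name: track_distance(fused, templates[name]))
```

Cross-checking the two independent sensors before matching is one plausible way such a system could improve recognition accuracy.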
Abstract:
Embodiments of the invention provide a double-vision backlight module and an LCD device. A light-splitting prism sheet is disposed between a diffuser plate and an LCD panel to split light. The prism sheet is arranged such that its prism-bearing side faces the LCD panel, which splits the light and enhances the brightness, thereby enhancing the brightness in both the left and right view areas while reducing the brightness in the central interference area, and thus improving the double-vision effect.