Abstract:
The present invention provides a method and device for realizing Chinese character input based on uncertainty information. The method comprises: receiving input information from a user; extracting, from the input information, at least two types of uncertainty information about the Chinese characters to be input; and determining matched Chinese characters according to the at least two types of uncertainty information and outputting the matched Chinese character(s). The device comprises a receiving module, an extracting module and a matching module. By combining at least two types of extracted uncertainty information to narrow the range of candidate characters, the method and device allow a user who has only incomplete memory of the pronunciation or glyph of the Chinese characters to be input to input them correctly.
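As an illustrative sketch only (not the claimed method), the candidate-narrowing idea can be shown as keeping a character only if it is consistent with every supplied uncertainty cue. The character database, the two cue types (a partial pinyin prefix and a radical), and the matching rules below are all assumptions for illustration.

```python
# Hypothetical character database; real input methods use far larger lexicons.
CHAR_DB = [
    {"char": "妈", "pinyin": "ma", "radical": "女"},
    {"char": "骂", "pinyin": "ma", "radical": "口"},
    {"char": "吗", "pinyin": "ma", "radical": "口"},
    {"char": "好", "pinyin": "hao", "radical": "女"},
]

def match_candidates(pinyin_prefix=None, radical=None):
    """Return characters consistent with each supplied (possibly partial) cue."""
    results = []
    for entry in CHAR_DB:
        # Each uncertainty cue independently filters the candidate set.
        if pinyin_prefix is not None and not entry["pinyin"].startswith(pinyin_prefix):
            continue
        if radical is not None and entry["radical"] != radical:
            continue
        results.append(entry["char"])
    return results
```

Supplying two cues narrows the candidates much further than either cue alone, which is the effect the abstract describes.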
Abstract:
The present disclosure discloses a video enhancement method and apparatus. The method includes: segmenting a target video into a plurality of groups of images, the images in the same group belonging to the same scene; determining, for each group of images, a matched video enhancement algorithm using a pre-trained quality assessment model, and performing video enhancement processing on that group of images using the determined algorithm; and sequentially splicing the video enhancement processing results of all groups of images to obtain video enhancement data of the target video. With the present disclosure, both the video enhancement effect and the video viewing experience can be improved.
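The segment-select-splice pipeline can be sketched minimally as follows. Here frames are plain numbers, scene membership is given as precomputed ids, and the quality assessment model and enhancement algorithms are stand-in callables; all of these are illustrative assumptions, not the disclosed implementation.

```python
def segment_by_scene(frames, scene_ids):
    """Split frames into consecutive groups sharing the same scene id."""
    groups = []
    for frame, sid in zip(frames, scene_ids):
        if groups and groups[-1][0] == sid:
            groups[-1][1].append(frame)
        else:
            groups.append((sid, [frame]))
    return [g for _, g in groups]

def enhance_video(frames, scene_ids, assess, algorithms):
    """Enhance each scene group with the algorithm chosen by `assess`,
    then splice the per-group results back together in order."""
    out = []
    for group in segment_by_scene(frames, scene_ids):
        algo = algorithms[assess(group)]  # quality model picks the algorithm
        out.extend(algo(group))           # splice results sequentially
    return out
```

The key design point the abstract emphasises is that the algorithm choice is made per scene group, not once for the whole video.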
Abstract:
The present disclosure discloses a method, apparatus, and system for sharing a virtual reality (VR) viewport. The method may include: establishing a first transport connection between a content distribution server and a viewport sharing server; receiving, by the viewport sharing server, panoramic video data; establishing, by the viewport sharing server, a second transport connection for receiving first viewport information supplied by a terminal or transmitting second viewport information to the terminal; rendering, by the viewport sharing server, the received viewport information; receiving, by the terminal, the panoramic video data; converting, by the terminal, the panoramic video data into first video data; rendering, by the terminal, a first video within the scope of the user viewport; receiving, by the terminal, the second viewport information; converting, by the terminal, the panoramic video data into second video data corresponding to the second viewport information; and rendering, by the terminal, a second video corresponding to the second viewport information.
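A toy sketch of the sharing flow, under heavy simplifying assumptions: panoramic data is one sample per degree of yaw, viewport information is just a yaw angle, and "rendering" means cropping a field of view. The class and method names are illustrative, not from the disclosure.

```python
class Terminal:
    def __init__(self, panorama):
        self.panorama = panorama  # 360 entries, one per degree of yaw

    def render(self, yaw, fov=90):
        """Crop the panorama to a viewport centred on `yaw`."""
        half = fov // 2
        return [self.panorama[(yaw + d) % 360] for d in range(-half, half)]

class ViewportSharingServer:
    """Relays one user's viewport info so another terminal can render it."""
    def __init__(self):
        self.shared_yaw = None

    def upload(self, yaw):
        self.shared_yaw = yaw

    def download(self):
        return self.shared_yaw
```

The point of the second transport connection in the abstract is exactly this relay: a terminal can render a "second video" from viewport information that originated elsewhere.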
Abstract:
A method of generating handwriting information about handwriting of a user includes determining a first writing focus and a second writing focus; sequentially shooting a first local writing area, which is within a predetermined range from the first writing focus, and a second local writing area, which is within a predetermined range from the second writing focus; obtaining first handwriting from the first local writing area and second handwriting from the second local writing area; combining the first handwriting with the second handwriting; and generating the handwriting information based on a result of the combining.
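A minimal sketch of the combining step, assuming handwriting is captured as ordered (x, y) points already in a shared coordinate system; the de-duplication rule for the overlap between the two local areas is an illustrative assumption.

```python
def combine_handwriting(first, second):
    """Append the second area's points to the first, skipping points
    already captured where the two local writing areas overlap."""
    seen = set(first)
    return list(first) + [p for p in second if p not in seen]
```

In practice the two shots would also need spatial registration before merging; that step is omitted here.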
Abstract:
A method for beautifying handwritten input includes collecting handwriting data input by a user; analyzing the handwriting data to obtain handwriting information, determining a corresponding pen tip model according to the handwriting information, and beautifying the user's handwriting with the pen tip model; and outputting the beautified handwriting. An apparatus for beautifying handwritten input determines a pen tip model that matches a user's handwriting by acquiring handwriting information input by the user, and carries out real-time beautification of the user's handwriting through the pen tip model. Changes in the user's handwriting are captured quickly, so that the user receives timely feedback and an excellent user experience.
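An illustrative sketch only: the abstract does not specify the pen tip models or the handwriting features, so average input pressure stands in for the "handwriting information", and the per-point width rules stand in for the pen tip models.

```python
# Hypothetical pen tip models: base stroke widths, chosen by writing pressure.
PEN_TIPS = {
    "fine": 1.0,   # light pressure
    "broad": 3.0,  # heavy pressure
}

def choose_pen_tip(pressures):
    """Pick a pen tip model from the user's average input pressure."""
    avg = sum(pressures) / len(pressures)
    return "broad" if avg >= 0.5 else "fine"

def beautify(points, pressures):
    """Attach a stroke width to each point using the chosen pen tip model."""
    base = PEN_TIPS[choose_pen_tip(pressures)]
    return [(x, y, base * p) for (x, y), p in zip(points, pressures)]
```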
Abstract:
A method and a system for rendering video images in virtual reality (VR) scenes are provided. The method includes: providing a video image at a current time point; dividing the video image at the current time point into a plurality of sub-regions; inputting image feature information of the sub-regions and acquired user viewpoint feature information into a trained attention model to obtain attention coefficients of the sub-regions, indicating the probabilities that user viewpoints at a next time point fall into the sub-regions; rendering the sub-regions based on their attention coefficients to obtain a rendered video image at the current time point; inputting the attention coefficients and the image feature information of the sub-regions into a trained user eye trajectory prediction model to obtain user eye trajectory information for a current time period; dividing the video images at subsequent time points within the current time period into a plurality of sub-regions; calculating attention coefficients of the sub-regions in the video image at each subsequent time point based on the user eye trajectory information; and rendering the corresponding sub-regions based on these attention coefficients to obtain a rendered video image at each of the subsequent time points.
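The rendering step can be sketched by replacing the trained models with their outputs: each sub-region receives a rendering quality driven by its attention coefficient (the probability that the viewpoint lands there next). The two-tier quality scheme and the threshold are illustrative assumptions.

```python
def render_by_attention(subregions, coefficients, threshold=0.2):
    """Render high-attention sub-regions at full quality, others at low quality.

    `coefficients` plays the role of the attention model's output: one
    probability per sub-region for where the viewpoint will fall next.
    """
    rendered = []
    for region, coeff in zip(subregions, coefficients):
        quality = "high" if coeff >= threshold else "low"
        rendered.append((region, quality))
    return rendered
```

This is the standard foveated-rendering trade-off the abstract relies on: spend rendering effort where the user is predicted to look.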
Abstract:
A method and apparatus for implementing a virtual performance partner are provided. The method includes collecting audio frame data of a performer's performance; and, for each piece of current audio frame data collected: converting the piece of current audio frame data into a current digital score; matching the current digital score against a range of digital scores in a repertoire, and determining the matching digital score within that range; locating the position of the matching digital score in the repertoire; and determining a start time at which the performance partner is to play the cooperation part of the music in the next bar after the matching digital score in the repertoire.
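A sketch of the matching and positioning steps, assuming both the performance and the repertoire are already reduced to pitch sequences ("digital scores") and that a bar is a fixed number of notes; both assumptions are illustrative.

```python
def find_match(current, repertoire):
    """Locate the performed pitch sequence inside the repertoire, or return -1."""
    n = len(current)
    for i in range(len(repertoire) - n + 1):
        if repertoire[i:i + n] == current:
            return i
    return -1

def next_bar_start(match_pos, match_len, notes_per_bar=4):
    """Index at which the partner's part in the next bar would begin."""
    end = match_pos + match_len
    return -(-end // notes_per_bar) * notes_per_bar  # ceil up to a bar boundary
```

A real system would use fuzzy matching against timing and pitch errors rather than exact subsequence equality; exact matching keeps the sketch short.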
Abstract:
A method and an apparatus for calligraphic beautification of handwritten characters are provided. The method includes collecting handwriting data of a user's handwritten input in real time; determining whether a calligraphic beautification operation is to be started; if so, determining stroke structure information of a stroke according to the collected handwriting data, performing the calligraphic beautification operation according to a calligraphic beautification method corresponding to the stroke structure information, and displaying a beautified result; and if not, continuing to collect the handwriting data.
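A toy sketch of one such beautification rule, assuming "stroke structure information" reduces to a horizontal/vertical classification and beautification means snapping the stroke straight; real calligraphic models would be far richer.

```python
def classify_stroke(points):
    """Classify a stroke by its dominant direction (the stroke structure)."""
    dx = abs(points[-1][0] - points[0][0])
    dy = abs(points[-1][1] - points[0][1])
    return "horizontal" if dx >= dy else "vertical"

def beautify_stroke(points):
    """Snap a near-horizontal stroke onto a straight horizontal line,
    and a near-vertical stroke onto a straight vertical line."""
    if classify_stroke(points) == "horizontal":
        y = sum(p[1] for p in points) / len(points)
        return [(x, y) for x, _ in points]
    x = sum(p[0] for p in points) / len(points)
    return [(x, y) for _, y in points]
```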
Abstract:
An obstacle avoidance playing method includes acquiring human eye position information of a viewer in a playing scene and three-dimensional data of an object in a respective viewing space region; determining a visible region of a display screen based on the human eye position information, the three-dimensional data of the object, and size and position information of the display screen, the visible region corresponding to a portion of the display screen that is unobstructed to the viewer; and displaying image content using (i) a matched obstacle avoidance mode determined based on the visible region and (ii) a preset obstacle avoidance strategy such that the image content is displayed in the visible region.
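A 1-D sketch of the visible-region computation: the eye, an obstacle segment, and the screen are reduced to a vertical plane, and the shadow the obstacle casts from the eye onto the screen line is subtracted from the screen extent. The reduction to one dimension and all coordinates are illustrative assumptions.

```python
def shadow_on_screen(eye, obstacle):
    """Project the obstacle segment from the eye onto the screen line (y = 0)."""
    ex, ey = eye
    (o0, o1), oy = obstacle  # obstacle x-extent and its height above the screen
    t = ey / (ey - oy)       # ray parameter where eye->obstacle hits y = 0
    s0 = ex + t * (o0 - ex)
    s1 = ex + t * (o1 - ex)
    return (min(s0, s1), max(s0, s1))

def visible_region(eye, obstacle, screen):
    """Screen intervals not covered by the obstacle's shadow."""
    lo, hi = screen
    s0, s1 = shadow_on_screen(eye, obstacle)
    parts = []
    if s0 > lo:
        parts.append((lo, min(s0, hi)))
    if s1 < hi:
        parts.append((max(s1, lo), hi))
    return parts
```

The returned intervals correspond to the "visible region" in which the abstract's obstacle avoidance mode would place the image content.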
Abstract:
A method for generating a video intermediate frame, including obtaining a target video frame pair; constructing an image pyramid for each video frame in the target video frame pair; and generating an intermediate frame of the target video frame pair by using a bidirectional optical flow estimation model and a pixel synthesis model in a layer-by-layer recursive calling manner, according to an order of the image pyramid from a high layer to a low layer, wherein the generating of the intermediate frame of the target video frame pair comprises: repairing a bidirectional optical flow corresponding to a previous layer using the bidirectional optical flow estimation model, and repairing a previous intermediate frame corresponding to the previous layer using the pixel synthesis model.
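A coarse-to-fine sketch with 1-D "frames": simple averaging stands in for both the bidirectional optical flow estimation model and the pixel synthesis model, but the layer-by-layer structure over the pyramid, refining the previous layer's estimate at each finer layer, has the same shape as the described method.

```python
def build_pyramid(frame, levels):
    """Image pyramid: level 0 is the full frame, each further level halves it."""
    pyramid = [frame]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        pyramid.append([(prev[i] + prev[i + 1]) / 2 for i in range(0, len(prev) - 1, 2)])
    return pyramid

def upsample(frame):
    """Nearest-neighbour upsampling back to the next finer level."""
    out = []
    for v in frame:
        out.extend([v, v])
    return out

def intermediate_frame(f0, f1, levels=2):
    """Estimate the in-between frame from the coarsest pyramid layer down,
    repairing the previous layer's estimate at each finer layer."""
    p0, p1 = build_pyramid(f0, levels), build_pyramid(f1, levels)
    estimate = None
    for a, b in zip(reversed(p0), reversed(p1)):  # high (coarse) to low (fine)
        blend = [(x + y) / 2 for x, y in zip(a, b)]  # stand-in for both models
        if estimate is None:
            estimate = blend  # coarsest layer: initial estimate
        else:
            # Refine the upsampled previous estimate with this layer's blend.
            estimate = [(u + v) / 2 for u, v in zip(upsample(estimate), blend)]
    return estimate
```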