Abstract:
A multi-camera architecture for detecting and tracking a ball in real time. The multi-camera architecture includes network interface circuitry to receive a plurality of real-time videos taken from a plurality of high-resolution cameras. Each of the high-resolution cameras simultaneously captures a sports event from a viewpoint that covers the entire playing field where the sports event is played. The multi-camera architecture further includes one or more processors coupled to the network interface circuitry and one or more memory devices coupled to the one or more processors. The one or more memory devices include instructions that, when executed by the one or more processors, cause the multi-camera architecture to determine the location of the ball for each frame of the plurality of real-time videos: the architecture simultaneously performs one of a detection scheme or a tracking scheme on a frame from each of the plurality of real-time videos to detect the ball used in the sports event, and performs a multi-camera build to determine a location of the ball in 3D for that frame using the detection or tracking results for each of the cameras.
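A minimal sketch of the multi-camera build step, assuming calibrated 3x4 projection matrices and hypothetical per-camera detect/track callables that return 2D ball coordinates; direct linear transformation (DLT) triangulation is a standard choice here, not one the abstract specifies:

    import numpy as np

    def triangulate_ball(points_2d, projection_mats):
        """Least-squares (DLT) triangulation of one 3D point from
        per-camera 2D ball detections and 3x4 projection matrices."""
        rows = []
        for (u, v), P in zip(points_2d, projection_mats):
            rows.append(u * P[2] - P[0])
            rows.append(v * P[2] - P[1])
        A = np.stack(rows)
        # Homogeneous solution: right singular vector of the smallest
        # singular value.
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]

    def locate_ball(frames, projection_mats, detect, track, use_tracking):
        """For one synchronized frame set, run either detection or
        tracking per camera, then the multi-camera build to get 3D."""
        points_2d = [(track(f) if use_tracking else detect(f)) for f in frames]
        return triangulate_ball(points_2d, projection_mats)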
Abstract:
Methods, systems and apparatuses may provide for technology that automatically determines, based on camera calibration data and trajectory data associated with a projectile in a game, a plurality of camera angles. The technology may also automatically generate, based on the plurality of camera angles, a camera path for a volumetric content replay of a three-dimensional (3D) region of interest around a highlight moment in the game.
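A hedged sketch of the idea, assuming the camera calibration data reduces to known 3D camera positions and the trajectory data yields a 3D region-of-interest center; the angle selection and linear interpolation below are illustrative stand-ins, not the patented method:

    import numpy as np

    def select_camera_angles(cam_positions, roi_center):
        """Derive per-camera azimuth/elevation (degrees) toward the
        region of interest around the highlight moment."""
        angles = []
        for pos in np.asarray(cam_positions, dtype=float):
            d = np.asarray(roi_center, dtype=float) - pos
            az = np.degrees(np.arctan2(d[1], d[0]))
            el = np.degrees(np.arctan2(d[2], np.hypot(d[0], d[1])))
            angles.append((az, el))
        return angles

    def camera_path(angles, n_frames=120):
        """Interpolate a smooth virtual-camera path through the
        selected angles for the volumetric replay."""
        a = np.asarray(angles, dtype=float)
        t = np.linspace(0, len(a) - 1, n_frames)
        idx = np.arange(len(a))
        return np.stack([np.interp(t, idx, a[:, i]) for i in range(2)], axis=1)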
Abstract:
A method for trajectory generation based on player tracking is described herein. The method includes determining a temporal association for a first player in a captured field of view and determining a spatial association for the first player. The method also includes deriving a global player identification based on the temporal association and the spatial association and generating a trajectory based on the global player identification.
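A minimal sketch of the fusion step, with hypothetical inputs: a per-camera temporal track ID (the temporal association) and a cross-camera spatial cluster label (the spatial association), whose pairing serves as the global player identification:

    from collections import defaultdict

    def build_trajectories(observations):
        """observations: iterable of (frame, temporal_id, spatial_cluster, xy).
        Temporal association links detections of one player across frames;
        spatial association links views of one player across cameras;
        their fusion yields the global player identification."""
        trajectories = defaultdict(list)
        for frame, temporal_id, spatial_cluster, xy in observations:
            global_id = (spatial_cluster, temporal_id)   # fused identity
            trajectories[global_id].append((frame, xy))
        # A trajectory is the time-ordered sequence of positions per player.
        return {gid: sorted(pts, key=lambda p: p[0])
                for gid, pts in trajectories.items()}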
Abstract:
In response to movement of an underlying structure, motion of complex objects connected to that structure may be simulated relatively quickly and without requiring extensive processing capabilities. A skeleton extraction method is used to simplify the complex object. Tracking follows the motion of the underlying structure, such as the user's head in a case where motion of hair is being simulated. Thus, the simulated motion is driven by the extent and direction of head or facial movement. A mass-spring model may be used to accelerate the simulation in some embodiments.
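A minimal mass-spring sketch, assuming a single hair strand of point masses pinned at a tracked head position; the explicit integration and the stiffness, damping, and rest-length constants are illustrative, not the patent's:

    import numpy as np

    GRAVITY = np.array([0.0, -9.8, 0.0])

    def step_strand(pos, vel, root, dt=1/60, k=200.0, damping=0.9,
                    rest_len=0.05):
        """Advance one explicit-integration step of a mass-spring hair
        strand. pos/vel: (n, 3) float arrays; root: the tracked head
        attachment point, so head motion drives the simulation."""
        pos, vel = pos.copy(), vel.copy()
        pos[0] = root                          # pinned to the tracked head
        for i in range(1, len(pos)):
            d = pos[i] - pos[i - 1]
            dist = np.linalg.norm(d) + 1e-9
            # Hooke spring toward the previous mass, plus gravity.
            force = -k * (dist - rest_len) * (d / dist) + GRAVITY
            vel[i] = damping * (vel[i] + dt * force)
            pos[i] = pos[i] + dt * vel[i]
        return pos, vel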
Abstract:
Technologies for control and status register (CSR) access include a computing device that starts a firmware initialization phase. The firmware accesses a CSR at an abstract CSR address. The computing device determines whether an upper part of the CSR address matches a cached upper part of a previously accessed CSR address. If the upper parts do not match, the computing device converts the CSR address into a physical address and caches the upper part of the CSR address and the upper part of the physical address. If the upper parts match, the computing device combines the cached upper part of the previously accessed physical address with the offset of the CSR address. The upper part may include 20 bits and the lower part may include 12 bits. The physical address may be the PCIe address of the CSR added to an MMCFG base address. Other embodiments are described and claimed.
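A hedged sketch of the caching scheme, assuming 32-bit CSR addresses split into a 20-bit upper part and a 12-bit offset, and a caller-supplied to_physical conversion (e.g. the CSR's PCIe address plus an MMCFG base):

    UPPER_MASK = 0xFFFFF000    # upper 20 bits of a 32-bit CSR address
    OFFSET_MASK = 0x00000FFF   # lower 12 bits (4 KiB page offset)

    class CsrTranslator:
        """Caches the upper bits of the last translated CSR address so
        repeated accesses within the same 4 KiB region skip the full
        abstract-to-physical conversion."""
        def __init__(self, to_physical):
            self._to_physical = to_physical    # full conversion, assumed
            self._cached_upper = None
            self._cached_phys_upper = None

        def translate(self, csr_addr):
            upper = csr_addr & UPPER_MASK
            if upper != self._cached_upper:
                # Miss: do the full conversion and refill the cache.
                phys = self._to_physical(csr_addr)
                self._cached_upper = upper
                self._cached_phys_upper = phys & UPPER_MASK
            # Hit (or just refilled): combine the cached upper physical
            # bits with the 12-bit offset of the CSR address.
            return self._cached_phys_upper | (csr_addr & OFFSET_MASK)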
Abstract:
Apparatuses, methods and storage medium associated with emotion augmented animation of avatars are disclosed herein. In embodiments, an apparatus may comprise an animation augmentation engine to receive facial data of a user, analyze the facial data to determine an emotion state of the user, and drive additional animation that supplements animation of the avatar based at least in part on the determined emotion state of the user. Other embodiments may be described and/or claimed.
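A minimal sketch of the augmentation flow, with a hypothetical classify_emotion callable and an illustrative mapping from emotion states to supplemental animations:

    def augment_animation(facial_data, classify_emotion, base_animation):
        """Analyze facial data for an emotion state and drive additional
        animation on top of the base avatar animation (base_animation is
        assumed to be a list of animation directives)."""
        emotion = classify_emotion(facial_data)        # e.g. "happy", "sad"
        extras = {"happy": ["sparkle_eyes"], "sad": ["rain_cloud"]}
        return base_animation + extras.get(emotion, [])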
Abstract:
Techniques for media quality control may include receiving media information and determining the quality of the media information. The media information may be presented when the quality of the media information meets a quality control threshold. A warning may be generated when the quality of the media information does not meet the quality control threshold. Other embodiments are described and claimed.
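A minimal sketch of the control flow, with hypothetical measure_quality and render callables and an arbitrary threshold:

    import warnings

    def present_media(media, measure_quality, render, threshold=0.8):
        """Present media only when its measured quality meets the
        quality control threshold; otherwise generate a warning."""
        quality = measure_quality(media)
        if quality >= threshold:
            render(media)
        else:
            warnings.warn(
                f"media quality {quality:.2f} below threshold {threshold:.2f}")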
Abstract:
Apparatuses, methods and storage medium associated with creating an avatar video are disclosed herein. In embodiments, the apparatus may include one or more facial expression engines, an animation-rendering engine, and a video generator. The one or more facial expression engines may be configured to receive video, voice and/or text inputs and, in response, generate a plurality of animation messages having facial expression parameters that depict facial expressions for a plurality of avatars based at least in part on the video, voice and/or text inputs received. The animation-rendering engine may be configured to receive the animation messages and drive a plurality of avatar models to animate and render the plurality of avatars with the facial expressions depicted. The video generator may be configured to capture the animation and rendering of the plurality of avatars to generate a video. Other embodiments may be described and/or claimed.
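A hedged sketch of the pipeline wiring, treating the facial expression engines, the animation-rendering engine, and the encoder as opaque callables:

    def make_avatar_video(inputs, expression_engines, render_engine, encode):
        """Pipeline sketch: expression engines turn video/voice/text
        inputs into animation messages with facial expression
        parameters; the animation-rendering engine drives the avatar
        models; the captured frames are encoded into a video."""
        messages = [engine(inputs) for engine in expression_engines]
        frames = render_engine(messages)   # animate + render the avatars
        return encode(frames)              # capture into a video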
Abstract:
A device, method and system of video and audio sharing among communication devices may comprise a communication device for generating and sending a packet containing information related to the video and audio, and another communication device for receiving the packet and rendering the information related to the audio and video. In some embodiments, the communication device may comprise: an audio encoding module to encode a piece of audio into an audio bit stream; an avatar data extraction module to extract avatar data from a piece of video and generate an avatar data bit stream; and a synchronization module to generate synchronization information for synchronizing the audio bit stream with the avatar data bit stream. In some embodiments, the other communication device may comprise: an audio decoding module to decode the audio bit stream into decoded audio data; an avatar animation module to animate an avatar model based on the avatar data bit stream to generate an animated avatar model; and a synchronizing and rendering module to synchronize and render the decoded audio data and the animated avatar model by utilizing the synchronization information.
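A minimal sketch of the packet format and receiver flow, using a timestamp as the synchronization information; the field names and dict-based packet are illustrative, not the patent's wire format:

    import time

    def make_packet(audio_bitstream, avatar_bitstream, seq):
        """Sender: bundle encoded audio, extracted avatar data, and
        synchronization information (sequence number and timestamp)."""
        return {"seq": seq, "ts": time.monotonic(),
                "audio": audio_bitstream, "avatar": avatar_bitstream}

    def render_packet(packet, decode_audio, animate_avatar, present):
        """Receiver: decode the audio bit stream, animate the avatar
        model from the avatar data bit stream, and present both in
        sync using the packet's timing information (present is an
        assumed callable accepting an 'at' keyword)."""
        audio = decode_audio(packet["audio"])
        frames = animate_avatar(packet["avatar"])
        present(audio, frames, at=packet["ts"])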
Abstract:
Methods, systems and apparatuses may provide for technology that detects an individual in a real-time multi-camera video feed and generates three-dimensional (3D) skeletal data based on the real-time multi-camera video feed. The technology may also automatically identify a frontal body orientation of the individual based on the 3D skeletal data and one or more anthropometric constraints.
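A minimal sketch, assuming the 3D skeletal data provides shoulder and hip keypoints in meters; taking the frontal direction as the torso-plane normal and bounding the shoulder width are illustrative stand-ins for the anthropometric constraints:

    import numpy as np

    def frontal_orientation(l_shoulder, r_shoulder, l_hip, r_hip):
        """Estimate the frontal body direction as the normal of the
        torso plane spanned by the shoulder line and the spine
        direction (all inputs are 3D keypoints)."""
        across = np.asarray(r_shoulder) - np.asarray(l_shoulder)
        down = (np.asarray(l_hip) + np.asarray(r_hip)) / 2 - \
               (np.asarray(l_shoulder) + np.asarray(r_shoulder)) / 2
        # Anthropometric sanity check: shoulder width within an
        # assumed plausible range (meters).
        if not 0.25 < np.linalg.norm(across) < 0.75:
            raise ValueError("implausible shoulder width; rejecting skeleton")
        normal = np.cross(across, down)
        return normal / np.linalg.norm(normal)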