Abstract:
Method for capturing an environment with objects, using a 3D camera, wherein images captured by the camera at different moments in time are used to generate 3D models, and wherein accuracy values are assigned to segments of the models, allowing efficient refinement of the models using the accuracy values.
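The refinement step can be pictured as follows. This is a minimal sketch with hypothetical names and a toy accuracy update, not the claimed implementation: segments whose accuracy is still low absorb new measurements, while already-accurate segments are left untouched.

```python
def refine_model(segments, new_measurements, threshold=0.8):
    """Merge new measurements only into segments whose accuracy is
    below the threshold, then bump the accuracy of refined segments."""
    for seg_id, measurement in new_measurements.items():
        seg = segments[seg_id]
        if seg["accuracy"] < threshold:
            # Weighted average: trust existing data in proportion to accuracy.
            w = seg["accuracy"]
            seg["value"] = w * seg["value"] + (1 - w) * measurement
            seg["accuracy"] = min(1.0, seg["accuracy"] + 0.1)
    return segments

segments = {0: {"value": 1.0, "accuracy": 0.5},
            1: {"value": 2.0, "accuracy": 0.9}}
segments = refine_model(segments, {0: 2.0, 1: 5.0})
# Segment 0 is refined; segment 1 is already accurate enough and left alone.
```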
Abstract:
The present invention relates to a method, system and related devices for navigating in ultra high resolution video content under control of a user navigation command originating from a client device, at least portions of the ultra high resolution video content being transmitted from a server towards said client device. The method comprises the steps of receiving a user navigation command, said user navigation command indicating a navigation trajectory through said ultra high resolution video content; determining a local video saliency on said navigation trajectory by analyzing said ultra high resolution video content along said navigation trajectory; and adapting characteristics of said navigation trajectory as a function of said local video saliency on said navigation trajectory.
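One possible adaptation rule can be sketched as follows (hypothetical names; the abstract does not fix a particular rule): the virtual camera dwells longer at waypoints whose local saliency is above average, so the trajectory slows down over salient content.

```python
def adapt_trajectory(waypoints, saliency):
    """Slow the navigation down over salient content by inserting an
    extra dwell at every waypoint whose saliency is above average."""
    mean_s = sum(saliency) / len(saliency)
    adapted = []
    for wp, s in zip(waypoints, saliency):
        adapted.append(wp)
        if s > mean_s:
            adapted.append(wp)  # duplicated waypoint = longer dwell time
    return adapted

path = adapt_trajectory([(0, 0), (10, 0), (20, 0)], [0.1, 0.9, 0.2])
# The middle waypoint is salient, so the camera lingers there.
```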
Abstract:
An encoder for encoding a video signal, comprising: a video modelling module configured for determining a plurality of video modelling parameters for a plurality of video locations on a spatiotemporal grid of said video signal, said spatiotemporal grid comprising at least two spatial dimensions and a time dimension, each video modelling parameter being adapted for allowing a pre-determined video model to at least approximately reconstruct its video location; a video segmentation module configured for segmenting said video signal into a plurality of spatiotemporal video regions, based on said video modelling parameters; and a vectorisation module configured for vectorising spatiotemporal surfaces of said spatiotemporal video regions; wherein said encoder is configured for encoding said video signal based on at least a subset of said determined plurality of video modelling parameters and based on said vectorised spatiotemporal surfaces, wherein said subset is determined taking into account said spatiotemporal video regions.
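The segmentation idea can be illustrated on a 1-D signal. This is a hypothetical sketch, not the claimed encoder: the local slope serves as a degenerate "video model" parameter, and a new region is opened whenever that parameter changes, so each region is describable by a single parameter plus its boundary.

```python
def segment_by_model(samples):
    """Model each location by its local slope and start a new region
    whenever the slope changes, so every region can be reconstructed
    from one model parameter plus its boundary."""
    regions = []
    start = 0
    for i in range(1, len(samples) - 1):
        # Compare the model parameter (slope) on both sides of sample i.
        if samples[i + 1] - samples[i] != samples[i] - samples[i - 1]:
            regions.append((start, i))
            start = i
    regions.append((start, len(samples) - 1))
    return regions

# A ramp followed by a plateau yields two regions.
regions = segment_by_model([0, 1, 2, 3, 3, 3, 3])
```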
Abstract:
A system for making available at an end-user a media file from a media provider, the media file comprising a media file patch related to at least one object, the system comprising: an encoding module at the media provider configured for determining at least one representation which resembles the media file patch, by comparing the media file patch with representations of said at least one object, and for including at least one identification corresponding with said representation in a skeleton file; a storage medium storing a dictionary including the representations of the at least one object at the end-user and/or at an intermediate node between the media provider and the end-user; and a decoding module configured for decoding the skeleton file by using the identification for looking up the corresponding representation in the dictionary of the storage medium, and for rendering the media file patch based on the looked-up corresponding representation.
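Encoding and decoding against a shared dictionary might look like this (toy sketch with hypothetical names): the skeleton file carries only the identifier of the best-matching representation, and the decoder resolves it through the dictionary.

```python
# Hypothetical shared dictionary: representation id -> representation data.
dictionary = {"rep_a": [1, 1, 1], "rep_b": [5, 5, 5]}

def encode_patch(patch):
    """Pick the dictionary entry closest to the patch (L1 distance) and
    return only its identifier, to be stored in the skeleton file."""
    def dist(rep_id):
        return sum(abs(a - b) for a, b in zip(patch, dictionary[rep_id]))
    return min(dictionary, key=dist)

def decode_patch(rep_id):
    """Look the identifier up in the dictionary to render the patch."""
    return dictionary[rep_id]

skeleton_entry = encode_patch([4, 6, 5])  # closest to rep_b
```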
Abstract:
Method for encoding a video stream having a transparency information channel in view of a predetermined target bit rate for said transparency information channel, comprising: tentatively applying a first encoding (130) to said transparency information channel, said first encoding (130) comprising lossless vector graphics encoding; if said first encoding results in an encoded bit rate within said predetermined target bit rate (140), selecting said first encoding (150) to produce an encoded version of said transparency information channel; otherwise, tentatively applying a second encoding (160) to said transparency information channel, said second encoding (160) comprising a mathematical representation encoding scheme; if said second encoding results in an encoded bit rate within said predetermined target bit rate (170), selecting said second encoding (180) to produce an encoded version of said transparency information channel; otherwise, selecting a third encoding (190) to produce an encoded version of said transparency information channel, said third encoding comprising MPEG-based encoding.
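The three-tier fallback can be sketched generically. The encoders below are stubs with hypothetical names standing in for the three schemes in the abstract (lossless vector graphics, mathematical representation, MPEG-based), and output length stands in for bit rate:

```python
def encode_transparency(channel, target_bits, encoders):
    """Try each encoder in priority order and select the first whose
    output fits the target bit budget; the last one is the
    unconditional fallback."""
    for name, encode in encoders[:-1]:
        bits = encode(channel)
        if len(bits) <= target_bits:
            return name, bits
    name, encode = encoders[-1]
    return name, encode(channel)

encoders = [
    ("vector", lambda ch: "v" * (len(ch) * 4)),  # lossless but large
    ("math",   lambda ch: "m" * (len(ch) * 2)),  # compact model-based scheme
    ("mpeg",   lambda ch: "p" * len(ch)),        # always-available fallback
]
scheme, _ = encode_transparency([0] * 10, target_bits=25, encoders=encoders)
# vector needs 40 "bits" (> 25), math needs 20, so "math" is selected.
```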
Abstract:
A transmitter for generating a set of encoded prioritized video streams from an incoming video stream (Vin) comprises: a video decomposition module (VD) for decomposing said incoming video stream into at least two independent video components in the spectral or time-spatial/spectral domain (Vd1, . . . , Vdn); a compression module for compressing said at least two independent video components, thereby obtaining at least two compressed independent video components (Vde1, . . . , Vden); a multi-stream packetization and prioritization module (MPP) for generating, for each of said at least two compressed independent video components, a respective set of packet streams in respective different qualities (Vde1q1, . . . , Vde1qk, . . . , Vdenq1, . . . , Vdenqk); and a transmission rule engine (TRE) for determining and provisioning respective compression parameters (prc) to be applied to the respective independent video components in said compression module, and for determining and provisioning respective priority parameters (Pr) to said multi-stream packetization and prioritization module. A receiver for requesting at least two compressed independent video components, each of them in a respective requested quality (Vdeq1, Vdnqi), from such a transmitter (T) is disclosed as well.
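A toy decomposition into two independent components (hypothetical; the abstract leaves the exact spectral split open) separates a 1-D "frame" into a low-frequency base and a high-frequency residual, which could then be compressed and prioritised independently, e.g. as base and enhancement layers:

```python
def decompose(frame, window=2):
    """Split a 1-D frame into a low-frequency component (windowed mean)
    and a high-frequency residual; summing the two restores the frame."""
    low = []
    for i in range(len(frame)):
        lo = max(0, i - window)
        hi = min(len(frame), i + window + 1)
        low.append(sum(frame[lo:hi]) / (hi - lo))
    high = [f - l for f, l in zip(frame, low)]
    return low, high

frame = [1.0, 2.0, 3.0, 4.0]
low, high = decompose(frame, window=1)
# low carries the smooth trend, high carries the detail around it.
```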
Abstract:
Embodiments relate to a method for encrypting a 3D object (O) defined at least by a set of first points (pi) and a first set of faces (F), contained in a bounding box (B), the method being executed by an encryption device and comprising: determining (S4) a set of second points (psi) by bijection of the set of first points (pi), and a second set of faces (Fs); and determining (S5) an encrypted 3D object (Os) defined at least by the set of second points (psi) and the second set of faces (Fs); wherein the first points (pi) are associated with respective first indexes (i), the second points (psi) are associated with respective second indexes (sj), and a face is specified by a list of indexes, and wherein the encrypted 3D object (Os) is contained in said bounding box (B); the method further comprising: partitioning the bounding box (B) into a set of first sub-boxes (nj); and determining a set of second sub-boxes (nsj) by bijection of the set of first sub-boxes (nj), as a function of a secret key (k); wherein the position of a second point (psi) is determined as a function of the position of the corresponding first point (pi), the position of the first sub-box (nj) containing the corresponding first point, and the position of the second sub-box (nsj) corresponding with said first sub-box (nj).
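The keyed sub-box bijection can be sketched in one dimension (a hypothetical sketch; real embodiments would permute 3-D sub-boxes): the secret key seeds a permutation of sub-box indices, and each point keeps its offset within its sub-box while the sub-box itself moves. Inverting the permutation decrypts.

```python
import random

def encrypt_points(points, n_boxes, box_size, key):
    """Move each point to the key-permuted sub-box, preserving its
    offset inside the sub-box."""
    perm = list(range(n_boxes))
    random.Random(key).shuffle(perm)  # key-seeded bijection of sub-boxes
    out = []
    for p in points:
        j = int(p // box_size)        # first sub-box containing the point
        offset = p - j * box_size     # position relative to that sub-box
        out.append(perm[j] * box_size + offset)
    return out

def decrypt_points(enc, n_boxes, box_size, key):
    """Invert the key-seeded permutation to recover the original points."""
    perm = list(range(n_boxes))
    random.Random(key).shuffle(perm)
    inv = {v: i for i, v in enumerate(perm)}
    out = []
    for p in enc:
        j = int(p // box_size)
        offset = p - j * box_size
        out.append(inv[j] * box_size + offset)
    return out

pts = [0.5, 1.25, 3.9]
enc = encrypt_points(pts, n_boxes=4, box_size=1.0, key=7)
dec = decrypt_points(enc, n_boxes=4, box_size=1.0, key=7)
```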
Abstract:
A method for mixing a first video signal and a second video signal, the method comprising, at a mixing device: receiving the first video signal; receiving the second video signal; receiving a transformation information signal dividing the first video signal into a transparent region and a non-transparent region and representing a spatial relationship between the first video signal and the second video signal; transforming the second video signal in accordance with the transformation information signal; and combining the non-transparent region of the first video signal with a portion of the transformed second video signal, the portion of the transformed second video signal being rendered in the transparent region of the first video signal.
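The combining step amounts to a per-pixel selection driven by the transparency mask. A minimal sketch over flat pixel lists (hypothetical names; the geometric transform step is omitted):

```python
def mix_frames(first, second, transparent_mask):
    """Where the mask marks the first frame transparent, show the
    (already transformed) second frame; elsewhere keep the first frame."""
    return [s if transparent else f
            for f, s, transparent in zip(first, second, transparent_mask)]

# Pixel 1 of the first frame is transparent, so the second frame shows through.
out = mix_frames([1, 2, 3], [9, 9, 9], [False, True, False])
```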