Abstract:
A method and a system for capturing and synchronizing audio and video signals are presented. The audio signal and the video signal are each stored together with timestamps taken from an associated system clock. The invention relates to adapting the duration of a captured audio sequence to the duration of the corresponding video sequence in order to compensate for drift between the two system clocks. In addition, a synchronization of the two system clocks is introduced that is based on a data transmission exhibiting variable waiting times for access to a transmission channel. Clock synchronization is thereby made possible with means that are available, for example, on a smartphone.
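A minimal sketch, in Python with NumPy, of the duration-matching step described above: an audio sequence whose own clock reports one duration is time-stretched to span the duration reported by the video clock. The function name, the linear-interpolation resampler, and the example numbers are illustrative assumptions, not the patented mechanism.

import numpy as np

def match_audio_to_video(audio, audio_duration, video_duration):
    # Time-stretch `audio` (1-D array) so that a sequence whose own clock
    # reports `audio_duration` seconds spans `video_duration` seconds instead,
    # compensating for drift between the two system clocks.
    ratio = video_duration / audio_duration
    n_out = int(round(len(audio) * ratio))
    # Linear interpolation is the simplest possible resampler; a real system
    # would use a higher-quality one to avoid audible artifacts.
    positions = np.linspace(0.0, len(audio) - 1, num=n_out)
    return np.interp(positions, np.arange(len(audio)), audio)

# Example: the audio clock ran slightly fast, so 10.002 s of captured audio
# (at 48 kHz, i.e. 480096 samples) must be squeezed into 10.000 s of video.
adjusted = match_audio_to_video(np.zeros(480096), 10.002, 10.000)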
Abstract:
Frames of video data from a surveillance system can be analyzed in near real time to allow for action to be taken based on the analysis. Task-based resources can be allocated to process each individual frame. Pre-processing can be performed to determine whether to analyze a given video frame. Each frame to be analyzed can be processed using at least one recognition algorithm to detect objects of interest, which can also be compared against corresponding data from earlier frames to determine relevant behaviors, moods, actions, or patterns of use. Each determination can have a corresponding confidence value. Information about the determinations and confidence levels can be analyzed to determine whether an action should be taken, as well as the type of action to take. Information for the determinations can also be used to apply tags to the video content to allow for searching and indexing of the video content.
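An illustrative Python sketch of the per-frame pipeline described above: a pre-processing gate that decides whether to analyze a frame, followed by recognition, confidence-based tagging, comparison with earlier frames, and an action decision. The detector interface, thresholds, and tag/alert formats are assumptions for illustration only.

def should_analyze(frame, prev_frame, change_threshold=0.05):
    # Pre-processing: only analyze frames that differ enough from the last one.
    if prev_frame is None:
        return True
    changed = sum(1 for a, b in zip(frame, prev_frame) if a != b)
    return changed / max(len(frame), 1) > change_threshold

def analyze_frame(frame, history, detector, action_threshold=0.8):
    # Run a recognition algorithm; `detector` returns (label, confidence) pairs.
    detections = detector(frame)
    # Tag the frame with every confident label so the video can be indexed and searched.
    tags = [label for label, conf in detections if conf > 0.5]
    action = None
    for label, conf in detections:
        # Compare against earlier frames to infer a pattern (e.g. the same
        # object of interest recurring), then decide whether to act.
        seen_before = sum(1 for past_tags in history if label in past_tags)
        if conf > action_threshold and seen_before >= 3:
            action = "alert:" + label
    history.append(tags)
    return detections, tags, action

# Usage with a stand-in detector:
history = []
fake_detector = lambda f: [("person", 0.92)]
print(analyze_frame([0, 1, 2], history, fake_detector))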
Abstract:
Systems and methods provide real-time custom audio in accordance with embodiments of the invention. One method includes selecting a video stream from source multimedia content using a media server; recording a voice-over session audio recording for the video stream using the media server, where the voice-over session audio recording comprises real-time custom audio for the video stream; synchronizing the timing of the voice-over session audio recording with the video stream to create a voice-over stream using the media server; and storing the voice-over stream as at least one voice-over audio stream for the source video channel using the media server.
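A minimal sketch of the synchronization step, assuming the voice-over audio arrives as timestamped chunks and the video stream's start time is known; the function and field names are illustrative, not the media server's actual API.

def synchronize_voice_over(voice_chunks, video_start_ts):
    # Express each recorded voice-over chunk's capture timestamp relative to
    # the start of the selected video stream, yielding the voice-over stream.
    return [
        {"offset": chunk["capture_ts"] - video_start_ts, "samples": chunk["samples"]}
        for chunk in voice_chunks
        if chunk["capture_ts"] >= video_start_ts
    ]

# Example: a chunk captured 2.5 s after the video started maps to offset 2.5.
stream = synchronize_voice_over(
    [{"capture_ts": 102.5, "samples": b"\x00\x01"}], video_start_ts=100.0)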
Abstract:
Various technologies described herein pertain to generating an output video loop (202) from an input video (200) that includes values at pixels over a time range. Respective input time intervals within the time range of the input video are determined for the pixels by performing an optimization. The optimization can be performed to assign the input time intervals at a first level of resolution, while terms of an objective function use a finer, second level of resolution. An input time interval for a particular pixel includes a per-pixel loop period (px) of a single, contiguous loop at the particular pixel within the time range from the input video. The input time intervals can be temporally scaled based on per-pixel loop periods and an output video loop period. The output video loop (202) is created based on the values at the pixels over the input time intervals for the pixels in the input video.
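A sketch of the per-pixel time mapping implied by the abstract: each pixel loops over its own single, contiguous input interval, so an output frame is assembled by sampling, at every pixel, the input frame given by that pixel's loop start and per-pixel loop period. The nested-list representation and names are assumptions for illustration.

def loop_frame_index(t, start, period):
    # Map output time t to the input frame for a pixel whose single, contiguous
    # loop starts at `start` and has per-pixel loop period `period`.
    return start + ((t - start) % period)

def render_output_frame(t, input_video, starts, periods):
    # Assemble one frame of the output video loop by sampling, at every pixel,
    # the input frame given by that pixel's own input time interval.
    height, width = len(starts), len(starts[0])
    return [[input_video[loop_frame_index(t, starts[y][x], periods[y][x])][y][x]
             for x in range(width)]
            for y in range(height)]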
Abstract:
In one aspect, an example method for use in a video-broadcast system having a DVE system includes: (i) receiving an instruction to apply a particular DVE of a particular overlay-DVE type to a temporal portion of a video segment based, at least in part, on the temporal portion of the video segment being suitable for having a DVE of the particular DVE-type applied thereto; (ii) making a determination that a particular temporal portion of the video segment has been identified as being suitable for having a DVE of the particular DVE-type applied thereto; and (iii) based, at least in part, on the received instruction and the determination, transmitting to the DVE system an instruction that causes the DVE system to apply the particular DVE to at least part of the particular temporal portion of the video segment.
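A hypothetical sketch of steps (ii) and (iii): given temporal portions already identified as suitable for a DVE of the requested type, the controller instructs the DVE system to apply the DVE to the overlapping part. The data shapes and the apply call are illustrative assumptions.

def maybe_apply_dve(dve_system, instruction, suitable_portions):
    # instruction: {"dve": ..., "dve_type": ..., "portion": (start, end)}
    # suitable_portions: list of (start, end, dve_type) entries already
    # identified as suitable for having a DVE of that type applied thereto.
    req_start, req_end = instruction["portion"]
    for p_start, p_end, p_type in suitable_portions:
        if p_type != instruction["dve_type"]:
            continue
        start, end = max(req_start, p_start), min(req_end, p_end)
        if start < end:
            # Transmit an instruction that causes the DVE system to apply the
            # particular DVE to (part of) the suitable temporal portion.
            dve_system.apply(instruction["dve"], start, end)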
Abstract:
A concealment system is configured to be concealed within a host device for covert surveillance. The system includes a housing that encloses a processor and components electronically coupled to the processor, including: at least one sensor interface to connect to at least one sensor for capturing information; a real-time clock to generate timestamps; a tamper protection module to receive the captured information from the at least one sensor interface, add the timestamps generated by the real-time clock to the captured information, and encrypt the captured information; at least one wireless communication module to wirelessly connect to at least one wireless communication antenna to transmit the encrypted captured information to a remote device; and rewritable memory to store the encrypted captured information.
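A sketch of the tamper-protection path, assuming a Fernet cipher from the Python cryptography package as a stand-in (the abstract does not specify an encryption scheme): captured data is timestamped and encrypted before being stored or transmitted.

import json, time
from cryptography.fernet import Fernet

class TamperProtectionModule:
    def __init__(self, key: bytes):
        self._cipher = Fernet(key)

    def protect(self, sensor_id: str, payload: bytes) -> bytes:
        # Add a timestamp (stand-in for the real-time clock reading) to the
        # captured information, then encrypt the whole record.
        record = {
            "sensor": sensor_id,
            "timestamp": time.time(),
            "data": payload.hex(),
        }
        return self._cipher.encrypt(json.dumps(record).encode())

# Example: protect a captured frame before it is written to rewritable memory
# or handed to the wireless communication module for transmission.
module = TamperProtectionModule(Fernet.generate_key())
blob = module.protect("cam0", b"\xff\xd8")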
Abstract:
In one general aspect, a method for generating a digital textbook can include receiving, by a computing device, a time-based transcript of a video of an online lecture, receiving a time-based thumbnail image subset of images included in the video of the online lecture, and displaying at least a portion of the transcript including a particular word. The method can further include receiving a selection of the particular word, determining a first thumbnail image and a second thumbnail image associated with the particular word, displaying the first thumbnail image and the second thumbnail image, receiving a selection of the first thumbnail image, and modifying, based on the selection of the first thumbnail image, the time-based transcript by including the first thumbnail image in the time-based transcript. The method can further include storing the modified time-based transcript as the digital textbook.
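An illustrative sketch of the thumbnail-selection and transcript-modification steps, assuming the transcript is a list of word/time entries and each thumbnail carries a timestamp; the data shapes and the time window are assumptions.

def thumbnails_for_word(transcript, thumbnails, word, window=5.0):
    # Return the thumbnails whose timestamps fall near an occurrence of `word`
    # in the time-based transcript.
    times = [entry["time"] for entry in transcript if entry["word"] == word]
    return [t for t in thumbnails
            if any(abs(t["time"] - w) <= window for w in times)]

def insert_thumbnail(transcript, thumbnail):
    # Place the selected thumbnail into the transcript at its timestamp; the
    # modified transcript is what gets stored as the digital textbook.
    entry = {"time": thumbnail["time"], "image": thumbnail["image"]}
    return sorted(transcript + [entry], key=lambda e: e["time"])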
Abstract:
The invention relates to a method for producing a sound file by a processing device, characterized in that it comprises the following steps: acquiring a first music file; acquiring a second voice file; acquiring a placement file; producing, by duplication, a third and a fourth music file from the first music file; mixing the second, third and fourth files, in which the third and fourth files start simultaneously, the second file starts according to a placement parameter read from the placement file, at a first time equal to the placement parameter minus a predetermined value, a fade-out is applied to progressively cut the level of the third file over a fade interval running from the first time to the value of the placement parameter, and, at the end of the second file, a fade-in is applied to progressively restore the level of the third file from the end of the second file over a duration substantially equal to that of the fade-out.
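A minimal sketch of the mix described above, in Python with NumPy, assuming mono arrays at a common sample rate and that the voice file plus its fades ends before the music does; the variable names and linear fades are illustrative.

import numpy as np

def mix_with_voice_over(music, voice, placement, lead, rate):
    # Duplicate the first music file into two tracks (the third and fourth files),
    # start the voice file at (placement - lead), fade the third track out over
    # [placement - lead, placement], and fade it back in after the voice ends.
    track_a, track_b = music.astype(float), music.astype(float)
    start = int((placement - lead) * rate)        # the "first time", in samples
    fade_len = int(lead * rate)
    voice_end = start + len(voice)

    # Fade-out of the third file across the fade interval.
    track_a[start:start + fade_len] *= np.linspace(1.0, 0.0, fade_len)
    track_a[start + fade_len:voice_end] = 0.0
    # Fade-in of the third file after the voice ends, over the same duration.
    track_a[voice_end:voice_end + fade_len] *= np.linspace(0.0, 1.0, fade_len)

    out = track_a + track_b                        # both duplicates, mixed
    out[start:voice_end] += voice                  # overlay the voice file
    return out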