Abstract:
A system of this invention is a video processing system for determining the details of browsable video content. The video processing system includes a video fragment download unit that downloads, via a network, data of video fragments in a determination target video content, and a first video content determination unit that determines the details of the video content based on the downloaded data of the video fragments. With this arrangement, the details of browsable video content can be determined while reducing the amount of data to be downloaded.
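The following is a minimal sketch of the fragment-based approach described above. The URL, fragment offsets, and the use of HTTP Range requests are assumptions for illustration, and the determination step is only a placeholder for whatever classifier a real system would apply to the decoded fragments.

```python
# Sketch: download only selected byte ranges of a video, then "determine" content
# from those fragments alone (hypothetical helper names and parameters).
import requests

FRAGMENT_SIZE = 256 * 1024  # bytes per fragment (assumed)

def download_fragments(url, offsets, size=FRAGMENT_SIZE):
    """Download only selected byte ranges of the target video via HTTP Range."""
    fragments = []
    for start in offsets:
        headers = {"Range": f"bytes={start}-{start + size - 1}"}
        resp = requests.get(url, headers=headers, timeout=10)
        resp.raise_for_status()
        fragments.append(resp.content)
    return fragments

def determine_content(fragments):
    """Placeholder determination step: a real system would decode the fragments
    and run a classifier; here we only report how little data was fetched."""
    return {"fragments": len(fragments),
            "downloaded_bytes": sum(len(f) for f in fragments)}

if __name__ == "__main__":
    frags = download_fragments("https://example.com/video.mp4",
                               offsets=[0, 10_000_000])
    print(determine_content(frags))
```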
Abstract:
The aim is to allow the utilization frequency of each segment of a source video to be readily assessed across videos that are in a corresponding relationship during the creation process. Relationship information, which indicates segments in a corresponding relationship between a source video and a plurality of derived videos created using at least some of the segments of the source video, is stored in a relationship information storing unit. Based on the stored relationship information, the corresponding relationship of segments between the source video and the plurality of derived videos, and the utilization frequency of each segment of the source video by the plurality of derived videos, are displayed.
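A minimal sketch of the utilization-frequency computation follows. The representation of the relationship information as (derived video, source segment) pairs is an assumption made only for illustration.

```python
# Sketch: count how many derived videos use each segment of the source video,
# given relationship information as (derived_video_id, source_segment_id) pairs.
from collections import Counter

relationship_info = [
    ("derived_A", "seg1"), ("derived_A", "seg2"),
    ("derived_B", "seg2"), ("derived_C", "seg2"), ("derived_C", "seg4"),
]

def utilization_frequency(relations):
    """Utilization frequency of each source segment across the derived videos."""
    return Counter(seg for _video, seg in relations)

def display(relations):
    """Show the corresponding relationship and the per-segment frequency."""
    freq = utilization_frequency(relations)
    for seg, count in sorted(freq.items()):
        videos = [v for v, s in relations if s == seg]
        print(f"{seg}: used by {count} derived video(s) -> {', '.join(videos)}")

display(relationship_info)
```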
Abstract:
A signal analysis/control system includes: a signal analysis unit which analyzes an input signal of a transmission unit and generates analysis information; and a signal control unit which controls the input signal of a reception unit by using the analysis information. Since the signal analysis is performed in the transmission unit, the amount of calculation for signal analysis in the reception unit is reduced. Furthermore, the reception unit can control each constituent element of the input signal according to the signal analysis information obtained in the transmission unit.
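The sketch below illustrates the division of work: analysis once on the transmission side, control on the reception side using the received analysis information. The per-channel RMS analysis and gain control are assumptions standing in for whatever analysis and control the system actually performs.

```python
# Sketch: transmitter-side analysis + receiver-side control (assumed RMS/gain example).
import numpy as np

def analyze(signal):
    """Transmission-side analysis: per-channel RMS level as analysis information."""
    return {"rms": np.sqrt(np.mean(signal ** 2, axis=1))}

def control(signal, analysis, target_rms=0.1):
    """Reception-side control: scale each constituent element (channel) using the
    received analysis information, without re-analysing the signal."""
    gains = target_rms / np.maximum(analysis["rms"], 1e-12)
    return signal * gains[:, None]

channels = np.random.randn(2, 48000) * np.array([[0.5], [0.05]])
info = analyze(channels)              # done at the transmitter
controlled = control(channels, info)  # done at the receiver
print(np.sqrt(np.mean(controlled ** 2, axis=1)))
```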
Abstract:
A first matrix (W(k)) indicating frequency characteristics of a separation filter is calculated from input signals of a plurality of channels. A second matrix (Ws(k)) is calculated by using restriction coefficients (Ci(k)) for restricting the separation filter together with the first matrix, and separation filter coefficients (wsij(s)) are calculated from the second matrix. Using the separation filter coefficients, separation signals (ysi(t)) are then calculated from the input signals. A third matrix (Ws^-1(k)) is then calculated by inverting the second matrix at each frequency, and reproduction filter coefficients (a'I1(s), a'I2(s)) are calculated by using the third matrix. Using the reproduction filter coefficients, the synthesized signal of each channel is calculated from the separation signals. The restriction coefficients are calculated so that the reproduction filter coefficients indicate filter coefficients which perform sound source localization on the separation signals.
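A minimal numerical sketch of the per-frequency processing is given below. The array shapes, the element-wise form of the restriction, and the trivial (all-ones) restriction coefficients are assumptions used only to show how W(k), Ws(k), Ws^-1(k) and the time-domain filter coefficients relate.

```python
# Sketch: restricted separation filter per frequency, plus reproduction filters
# obtained by inverting the restricted matrix at each frequency (assumed layout).
import numpy as np

K, C = 8, 2                      # frequency bins, channels
rng = np.random.default_rng(0)

W = rng.standard_normal((K, C, C))        # first matrix W(k): separation filter spectra
Ci = np.ones((K, C, C))                   # restriction coefficients Ci(k) (identity here)

Ws = Ci * W                               # second matrix Ws(k): restricted separation filter
w_s = np.fft.ifft(Ws, axis=0).real        # separation filter coefficients wsij(s)

Ws_inv = np.linalg.inv(Ws)                # third matrix Ws^-1(k), inverted per frequency
a_rep = np.fft.ifft(Ws_inv, axis=0).real  # reproduction filter coefficients a'(s)

# Separated signals ysi(t) would be obtained by filtering the input channels with
# w_s; filtering the separated signals with a_rep re-synthesizes each channel with
# the intended sound-source localization.
print(w_s.shape, a_rep.shape)
```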
Abstract:
A signal analysis control system is provided with a signal analyzing section for analyzing signals inputted to a transmission section and generating analysis information, and a signal control section for controlling signals inputted to a receiving section by using the analysis information.
Abstract:
Advertisement information relating to an object is provided in real time while images of the object are being captured. m first local features, each a feature vector of dimension 1 to i, are stored in association with the object. n feature points are extracted from a video picture, and n second local features, each a feature vector of dimension 1 to j, are generated for them. The smaller of the number of dimensions i and the number of dimensions j is selected. When it is determined that at least a prescribed ratio of the m first local features, taken up to the selected number of dimensions, correspond to the n second local features, taken up to the selected number of dimensions, the object is recognized as being present in the video picture and advertisement information relating to that object is provided.
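The sketch below shows the dimension-selection and matching step in isolation. The distance metric, thresholds, and data layout are assumptions for illustration, not the patented method.

```python
# Sketch: truncate stored and query local features to the smaller dimensionality,
# then check whether a prescribed ratio of stored features has a close query match.
import numpy as np

def matches(first_feats, second_feats, ratio=0.5, dist_thresh=0.3):
    """first_feats: (m, i) stored features; second_feats: (n, j) query features."""
    d = min(first_feats.shape[1], second_feats.shape[1])   # select smaller dimension
    f1, f2 = first_feats[:, :d], second_feats[:, :d]
    # A stored feature "corresponds" if some query feature is close enough to it.
    dists = np.linalg.norm(f1[:, None, :] - f2[None, :, :], axis=2)
    matched_ratio = (dists.min(axis=1) < dist_thresh).mean()
    return matched_ratio >= ratio

stored = np.random.rand(10, 128)   # m first local features (dimension i)
query = np.random.rand(50, 64)     # n second local features (dimension j)
print("object recognized -> show related advertisement"
      if matches(stored, query) else "no match")
```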
Abstract:
The present approach enables the atmosphere of a scene, or the condition of an object present in the scene, at the time of photography to be pictured in a person's mind as though the person were actually at the photographed scene. A feeling-expressing-word processing device has: a feeling information calculating unit 11 that analyzes a photographed image and calculates feeling information indicating the situation of the scene portrayed in the photographed image or the condition of an object present in the scene; and a feeling-expressing-word extracting unit 12 that extracts, from among feeling-expressing words stored in a feeling-expressing-word database 21 in association with feeling information, a feeling-expressing word corresponding to the feeling information calculated by the feeling information calculating unit 11.
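A toy sketch of units 11 and 12 follows. The brightness-based feeling information and the two-entry database are assumptions; a real device would compute richer feeling information from the image.

```python
# Sketch: derive coarse feeling information from an image (unit 11) and look up
# matching feeling-expressing words in a small database (unit 12 / database 21).
import numpy as np

feeling_word_db = {          # stands in for feeling-expressing-word database 21
    "bright": ["dazzling", "cheerful"],
    "dark":   ["gloomy", "hushed"],
}

def calculate_feeling_information(image):
    """Unit 11: analyze the photographed image and produce feeling information."""
    return "bright" if image.mean() > 0.5 else "dark"

def extract_feeling_words(image):
    """Unit 12: extract feeling-expressing words matching the feeling information."""
    return feeling_word_db[calculate_feeling_information(image)]

photo = np.random.rand(480, 640)   # stand-in for a photographed image
print(extract_feeling_words(photo))
```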
Abstract:
Provided is a multi-point connection device including: a first signal receiving unit which receives a first signal containing a plurality of constituent elements and first analysis information expressing the relationship between the constituent elements contained in the first signal; a second signal receiving unit which receives a second signal containing a plurality of constituent elements and second analysis information expressing the relationship between the constituent elements contained in the second signal; a signal mixing unit which mixes the first signal and the second signal; and an analysis information mixing unit which mixes the first analysis information and the second analysis information.
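A minimal sketch of the two mixing steps follows. Additive signal mixing and a simple merge of per-element relationship entries are assumptions about the concrete operations; the abstract only specifies that both the signals and their analysis information are mixed.

```python
# Sketch: mix two received signals and merge their analysis information
# (relationships between constituent elements), as a multi-point device might.
import numpy as np

def mix_signals(sig1, sig2):
    """Signal mixing unit: combine the two received signals."""
    return sig1 + sig2

def mix_analysis(info1, info2):
    """Analysis information mixing unit: merge relationships between elements."""
    merged = {element: list(relation) for element, relation in info1.items()}
    for element, relation in info2.items():
        merged.setdefault(element, []).extend(relation)
    return merged

first_signal = np.random.randn(3, 1024)      # three constituent elements
second_signal = np.random.randn(3, 1024)
first_info = {"voice": ["dominant"], "music": ["background"]}
second_info = {"voice": ["secondary"]}

mixed = mix_signals(first_signal, second_signal)
print(mixed.shape, mix_analysis(first_info, second_info))
```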
Abstract:
The present invention notifies a recognition result for a recognition object in a video in real time while maintaining recognition accuracy. A recognition object and m first local characteristic quantities, each a 1- to i-dimensional characteristic vector, are stored in association with each other, and n second local characteristic quantities, each a 1- to j-dimensional characteristic vector, are generated for n local areas that respectively include n characteristic points extracted from an image in the video. The smaller of the dimensions i and j is selected, and the recognition object is recognized as existing in an image in the video when it is determined that a prescribed proportion or more of the m first local characteristic quantities, as characteristic vectors up to the selected number of dimensions, correspond to the n second local characteristic quantities, as characteristic vectors up to the selected number of dimensions. Information representing the recognition object is then displayed superimposed on the image in which the recognition object exists in the video.
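Complementing the matching sketch above, the following illustrates only the superposition step: once recognition succeeds, information representing the recognition object is drawn over the frame region where it was found. The simple rectangle marker stands in for real text or graphics rendering, and all names are assumptions.

```python
# Sketch: superimpose a marker for the recognized object on a video frame.
import numpy as np

def superimpose(frame, box, label):
    """Mark the recognized region on the frame and return it with the label."""
    y0, x0, y1, x1 = box
    frame = frame.copy()
    frame[y0, x0:x1] = 255        # top edge
    frame[y1 - 1, x0:x1] = 255    # bottom edge
    frame[y0:y1, x0] = 255        # left edge
    frame[y0:y1, x1 - 1] = 255    # right edge
    return frame, label

video_frame = np.zeros((480, 640), dtype=np.uint8)
annotated, text = superimpose(video_frame, (100, 200, 300, 400), "recognized: logo")
print(text, int(annotated.sum()))
```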