Abstract:
An audio analysis method that is realized by a computer system includes estimating a plurality of beat points of a musical piece by analyzing an audio signal representing a performance sound of the musical piece, receiving an instruction from a user to change a location of at least one beat point of the plurality of beat points, and updating a plurality of locations of the plurality of beat points in response to the instruction from the user.
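As a rough illustration of the flow just described (not the claimed implementation), the following Python sketch estimates beat points from an onset envelope with a toy peak picker and then updates the remaining beat points when the user moves one of them; the estimator, the update policy, and all function names are assumptions.

import numpy as np

def estimate_beats(onset_envelope, hop_seconds, threshold=0.5):
    # Toy estimator: beat points are local peaks of the onset envelope.
    beats = []
    for i in range(1, len(onset_envelope) - 1):
        v = onset_envelope[i]
        if v > threshold and v >= onset_envelope[i - 1] and v >= onset_envelope[i + 1]:
            beats.append(i * hop_seconds)
    return np.array(beats)

def update_beats(beats, index, new_time):
    # Move one beat point to the user-specified time, then rescale the
    # neighbouring beat points so the grid stays monotonic (one possible policy).
    beats = beats.copy()
    old_time = beats[index]
    beats[index] = new_time
    if index > 0 and old_time > beats[0]:
        scale = (new_time - beats[0]) / (old_time - beats[0])
        beats[1:index] = beats[0] + (beats[1:index] - beats[0]) * scale
    if index < len(beats) - 1 and beats[-1] > old_time:
        scale = (beats[-1] - new_time) / (beats[-1] - old_time)
        beats[index + 1:-1] = beats[-1] - (beats[-1] - beats[index + 1:-1]) * scale
    return beats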
Abstract:
An image processing apparatus includes a processor and a memory having stored thereon instructions executable by the processor to cause the image processing apparatus to perform: calculating, for each of a plurality of captured images obtained by successively capturing a subject, an evaluation index indicating whether a capturing condition is appropriate for each of a plurality of partial images of that captured image, the partial images corresponding to different areas of the subject; selecting the partial images corresponding to the different areas of the subject from the plurality of captured images based on the evaluation indices of the partial images; and synthesizing the selected partial images at positions corresponding to partial images in a reference image obtained by capturing the subject.
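A minimal sketch of this selection-and-synthesis idea, not the claimed apparatus: each captured image is split into tiles, a sharpness score (Laplacian variance, an assumed evaluation index) is computed per tile, the best-scoring tile for each area is selected, and the selected tiles are pasted at the corresponding positions of the output layout.

import numpy as np

def laplacian_variance(tile):
    # Assumed evaluation index: variance of a 4-neighbour Laplacian.
    lap = (-4 * tile[1:-1, 1:-1] + tile[:-2, 1:-1] + tile[2:, 1:-1]
           + tile[1:-1, :-2] + tile[1:-1, 2:])
    return float(lap.var())

def synthesize(captures, tile_h, tile_w):
    # captures: list of equally sized grayscale images (2-D float arrays).
    h, w = captures[0].shape
    result = np.zeros((h, w), dtype=captures[0].dtype)
    for y in range(0, h - tile_h + 1, tile_h):
        for x in range(0, w - tile_w + 1, tile_w):
            tiles = [img[y:y + tile_h, x:x + tile_w] for img in captures]
            best = max(tiles, key=laplacian_variance)   # highest index wins
            result[y:y + tile_h, x:x + tile_w] = best   # paste at the area's position
    return result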
Abstract:
An image correction device includes a line segment detection module, a shape specification module and an image correction module. The line segment detection module detects, from a captured image obtained by photographing a document, a plurality of line segments that correspond to the notation on the surface of the document. The shape specification module specifies, from the plurality of line segments, shape approximation lines that approximate the surface shape of the document. The image correction module utilizes the shape approximation lines specified by the shape specification module to correct the captured image.
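A minimal sketch under simplifying assumptions: the detected line segments are given as endpoint pairs, a quadratic shape approximation line is fitted to them, and the image is corrected by shifting each pixel column so the fitted curve becomes straight. The segment detector itself and a full dewarping model are outside this sketch.

import numpy as np

def fit_shape_approximation_line(segments):
    # segments: list of ((x0, y0), (x1, y1)) endpoint pairs.
    xs = np.array([p[0] for seg in segments for p in seg], dtype=float)
    ys = np.array([p[1] for seg in segments for p in seg], dtype=float)
    return np.polyfit(xs, ys, deg=2)          # y ~ a*x^2 + b*x + c

def correct_image(image, coeffs):
    # Flatten the page by removing the fitted curvature column by column.
    h, w = image.shape
    xs = np.arange(w)
    curve = np.polyval(coeffs, xs)
    shift = np.round(curve - curve.mean()).astype(int)
    corrected = np.zeros_like(image)
    for x in xs:
        corrected[:, x] = np.roll(image[:, x], -shift[x])
    return corrected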
Abstract:
An audio analysis method that is realized by a computer system includes setting a maximum tempo curve representing a temporal change of a maximum tempo value and a minimum tempo curve representing a temporal change of a minimum tempo value in accordance with an instruction from a user, and analyzing an audio signal representing a performance sound of a musical piece, thereby estimating a tempo of the musical piece within a restricted range between a maximum value represented by the maximum tempo curve and a minimum value represented by the minimum tempo curve.
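A minimal sketch of tempo estimation restricted by the two curves, not the claimed method: the maximum and minimum tempo curves are sampled per analysis frame, and for each frame the tempo candidate with the strongest onset autocorrelation is chosen only from the range the curves allow at that time. The candidate grid and the autocorrelation measure are assumptions.

import numpy as np

def estimate_tempo(onset_env, frame_rate, max_curve, min_curve):
    # max_curve / min_curve: maximum and minimum tempo (BPM) per analysis frame.
    ac = np.correlate(onset_env, onset_env, mode="full")[len(onset_env) - 1:]
    candidates_bpm = np.arange(30.0, 300.0, 1.0)
    lags = np.clip(np.round(60.0 * frame_rate / candidates_bpm).astype(int), 0, len(ac) - 1)
    tempos = []
    for hi, lo in zip(max_curve, min_curve):
        mask = (candidates_bpm >= lo) & (candidates_bpm <= hi)  # restricted range
        if not mask.any():
            tempos.append(0.5 * (lo + hi))
            continue
        allowed = candidates_bpm[mask]
        tempos.append(float(allowed[np.argmax(ac[lags[mask]])]))
    return np.array(tempos)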
Abstract:
Disclosed is an image analysis method implemented by a computer, the method including analyzing a partial image which is a part of an image of a planar subject, generating partial-image analysis data representing a characteristic of the partial image, comparing, for each of a plurality of images, candidate-image analysis data with the partial-image analysis data, the candidate-image analysis data representing a characteristic of each of the plurality of images, and selecting a candidate image among the plurality of images, the candidate image including a part corresponding to the partial image.
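A minimal sketch under assumptions: the analysis data is taken to be a normalized grayscale histogram, and the candidate whose histogram is closest to that of the partial image (smallest L1 distance) is selected. The abstract leaves the characteristic representation open, so the histogram choice is illustrative only.

import numpy as np

def analysis_data(image, bins=64):
    # Assumed characteristic: normalized intensity histogram of the image.
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def select_candidate(partial_image, candidate_images):
    # Compare the partial-image analysis data with each candidate's analysis data.
    target = analysis_data(partial_image)
    dists = [np.abs(analysis_data(c) - target).sum() for c in candidate_images]
    return int(np.argmin(dists))        # index of the selected candidate image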
Abstract:
A musical score image analyzer includes a processor and a memory having stored thereon instructions executable by the processor to cause the musical score image analyzer to perform: detecting musical symbols in a musical score image obtained by capturing a musical score having a plurality of staffs arranged in parallel to each other and the musical symbols respectively disposed in prescribed positions in the staffs; specifying a symbol column having the detected musical symbols which are arranged in a column; calculating an index relating to image capturing based on the symbol column; and instructing a capturing device to perform a capturing operation of a still image for the musical score image when the index satisfies a prescribed condition.
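A minimal sketch with assumed helpers: detected musical symbols are given as (x, y) centers, a symbol column is the set of symbols whose x coordinates cluster around a given position, the index relating to image capturing is taken to be the horizontal spread of that column, and a still-image capture is requested when the spread is small enough. The spread-based index and the threshold are assumptions.

import numpy as np

def column_index(symbol_centers, column_x, tolerance=20.0):
    # Index: standard deviation of x for symbols belonging to the column near column_x.
    xs = np.array([x for x, _ in symbol_centers], dtype=float)
    in_column = xs[np.abs(xs - column_x) < tolerance]
    return float(in_column.std()) if len(in_column) >= 2 else np.inf

def should_capture_still(symbol_centers, column_x, threshold=3.0):
    # Prescribed condition (assumed): column spread below a threshold.
    return column_index(symbol_centers, column_x) < threshold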
Abstract:
The object of the invention is to switch between plural sets of waveform data at a desired timing while preventing noise. In response to an instruction for switching from a currently reproduced set of waveform data to another set of waveform data, either a switching position in the other set of waveform data or a switching position in the currently reproduced set of waveform data is set as end timing for ending the reproduction of the currently reproduced set, with reference to switching position information of the two sets. If the switching position in the currently reproduced set is present within a 50 msec time range before a switching position in the other set that is present immediately after the switching instruction, the switching position in the currently reproduced set is set as the end timing; if not, the switching position in the other set is set as the end timing.
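A minimal sketch of the timing decision described above: given the switching positions (in seconds) registered for the current set and the other set, and the time at which the switch instruction arrives, the end timing is the current set's switching position if it falls within 50 ms before the other set's next switching position; otherwise the other set's position is used. The data layout (sorted lists of times) is an illustrative assumption.

import bisect

def end_timing(current_positions, other_positions, instruction_time, window=0.050):
    # Both position lists are sorted ascending; all times in seconds.
    # Next switching position of the other set after the instruction.
    i = bisect.bisect_right(other_positions, instruction_time)
    if i >= len(other_positions):
        return None
    other_pos = other_positions[i]
    # Is a switching position of the current set within 50 ms before it?
    j = bisect.bisect_right(current_positions, other_pos) - 1
    if j >= 0 and other_pos - window <= current_positions[j] <= other_pos:
        return current_positions[j]       # end on the current set's position
    return other_pos                      # otherwise end on the other set's position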
Abstract:
A desired character train included in a predefined reference character train, such as lyrics, is set as a target character train, and a user designates a target phoneme train that is indirectly representative of the target character train by use of a limited plurality of kinds of particular phonemes, such as vowels and particular consonants. A reference phoneme train indirectly representative of the reference character train by use of the particular phonemes is prepared in advance. Based on a comparison between the target phoneme train and the reference phoneme train, a sequence of the particular phonemes in the reference phoneme train that matches the target phoneme train is identified, and a character sequence in the reference character train that corresponds to the identified sequence of the particular phonemes is identified. The thus-identified character sequence is an estimate of the target character train.
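A minimal sketch under strong assumptions: the particular phonemes are reduced here to vowels only, the reference character train is converted to a reference phoneme train through an assumed character-to-phoneme table, and the first position where the target phoneme train matches a subsequence of the reference phoneme train identifies the corresponding character sequence.

VOWELS = set("aiueo")

def to_phoneme_train(chars, char_to_phonemes):
    # char_to_phonemes maps each character to its phonemes (assumed table).
    train, spans = [], []
    for idx, ch in enumerate(chars):
        for ph in char_to_phonemes.get(ch, ""):
            if ph in VOWELS:
                train.append(ph)
                spans.append(idx)         # which character produced this phoneme
    return train, spans

def find_target(reference_chars, target_phonemes, char_to_phonemes):
    # Match the target phoneme train against the reference phoneme train and
    # return the corresponding character sequence from the reference train.
    ref_train, spans = to_phoneme_train(reference_chars, char_to_phonemes)
    n = len(target_phonemes)
    for start in range(len(ref_train) - n + 1):
        if ref_train[start:start + n] == list(target_phonemes):
            return reference_chars[spans[start]:spans[start + n - 1] + 1]
    return None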
Abstract:
A method of processing sound includes arranging objects of a plurality of performers in a virtual space. The method also includes receiving a plurality of sound signals respectively corresponding to the plurality of performers. The method also includes obtaining, using a trained model, sound volume adjustment parameters respectively for the plurality of performers. The trained model is trained to learn a relationship between each sound signal, among the plurality of sound signals, that corresponds to each performer of the plurality of performers and each sound volume adjustment parameter, among the sound volume adjustment parameters, that corresponds to that sound signal. The method also includes adjusting and mixing sound volumes respectively of the plurality of sound signals based on the sound volume adjustment parameters obtained using the trained model.
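A minimal sketch with the trained model stubbed out: a model (assumed here to map a per-signal RMS feature to a gain) predicts one sound volume adjustment parameter per performer, and the signals are scaled by those parameters and summed into the mix. The actual model architecture and input features are not specified by the abstract.

import numpy as np

class VolumeModel:
    # Stand-in for the trained model: a fixed linear map on the signal's RMS.
    def __init__(self, weight=0.5, bias=0.2):
        self.weight, self.bias = weight, bias

    def predict_gain(self, signal):
        rms = float(np.sqrt(np.mean(signal ** 2)) + 1e-9)
        return self.bias + self.weight / rms      # quieter performers get more gain

def mix(signals, model):
    # signals: list of equal-length 1-D arrays, one per performer.
    gains = [model.predict_gain(s) for s in signals]   # per-performer parameters
    mixed = sum(g * s for g, s in zip(gains, signals))
    return mixed, gains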
Abstract:
A data modification method of controlling sound according to an embodiment includes providing a selection user interface that allows a user to select a modification mode to be applied to sound control data defining timing information of a sound generation from among a plurality of modification modes including a first modification mode and a second modification mode, modifying the sound control data by correcting the timing information and adding correction information according to an amount of correction of the timing information in a predetermined data section to the predetermined data section, in a state where the first modification mode is selected to be applied, and modifying the sound control data by correcting the timing information based on beat positions according to a predetermined tempo, in a state where the second modification mode is selected to be applied.
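A minimal sketch of the two modification modes with an assumed data layout: the sound control data is a list of note events, each carrying an onset time in seconds. The first mode shifts the onsets in a given data section by a correction amount and records that amount in the section; the second mode snaps each onset to the nearest beat position of the predetermined tempo.

def modify_mode1(events, section, correction):
    # First mode: shift onsets in [section["start"], section["end"]) and
    # store the correction information in the section itself.
    start, end = section["start"], section["end"]
    for ev in events:
        if start <= ev["onset"] < end:
            ev["onset"] += correction
    section["correction"] = correction
    return events, section

def modify_mode2(events, tempo_bpm):
    # Second mode: quantize onsets to beat positions of the predetermined tempo.
    beat = 60.0 / tempo_bpm
    for ev in events:
        ev["onset"] = round(ev["onset"] / beat) * beat
    return events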