Abstract:
A mix instruction file is provided for controlling the playback of at least one music track file, said mix instruction file comprising instructions including an indication of the at least one music track file at the point in time when the at least one music track file is to be accessed, and at least one function for manipulating the output of the at least one music track file, the sum of the indication of the at least one music track file and the at least one function constituting the state of the mix. Said mix instruction file comprises at least a first and a second packet that may be transmitted independently of each other, the second packet holding information about the playback state of the mix at the corresponding end of the first packet.
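A minimal sketch of the packet structure this abstract describes, under assumed names (`MixInstruction`, `MixPacket`, `end_state` are illustrative, not from the source): each packet carries mix instructions, and the second packet additionally records the playback state of the mix at the end of the first, so it can be decoded without the first packet.

```python
from dataclasses import dataclass, field

@dataclass
class MixInstruction:
    time: float    # point in time when the track file is to be accessed (seconds)
    track: str     # indication of the music track file
    function: str  # function manipulating the track's output, e.g. "fade_in"

@dataclass
class MixPacket:
    instructions: list
    start_state: dict = field(default_factory=dict)  # playback state at packet start

def end_state(packet: MixPacket) -> dict:
    """State of the mix at the end of a packet: for each track, the last
    function applied to it (indication + function = state of the mix)."""
    state = dict(packet.start_state)
    for ins in packet.instructions:
        state[ins.track] = ins.function
    return state

first = MixPacket(instructions=[
    MixInstruction(0.0, "a.mp3", "play"),
    MixInstruction(30.0, "b.mp3", "fade_in"),
    MixInstruction(35.0, "a.mp3", "fade_out"),
])
# The second packet begins from the state at the end of the first packet,
# so the two packets can be transmitted independently of each other.
second = MixPacket(instructions=[MixInstruction(60.0, "c.mp3", "play")],
                   start_state=end_state(first))
```

A receiver that only obtains the second packet can still resume playback correctly, because the carried `start_state` reproduces the mix state at the end of the first packet.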
Abstract:
The method of determining a parameter for mixing a first content item (X 1 ) and a second content item (X 2 ) comprises the steps of detecting a simultaneous (59) occurrence of vocals (55, 57) at a potential mixing point between the first content item (X 1 ) and the second content item (X 2 ), and determining (61, 63) a mixing parameter in dependence on the detected simultaneous occurrence of vocals at the potential mixing point. The method of mixing a first content item (X 1 ) and a second content item (X 2 ) comprises the steps of retrieving a mixing point between the first content item (X 1 ) and the second content item (X 2 ) from a database, and mixing (65) the first content item (X 1 ) and the second content item (X 2 ) at the mixing point. The electronic device and computer program of the invention are operative to perform one or both of the methods.
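A hedged sketch of the described method for one plausible mixing parameter. Per-frame vocal-activity flags for the outgoing item (X1) and incoming item (X2) around a candidate mixing point stand in for a real vocal detector; the function names and the short/long fade values are assumptions, not from the source.

```python
def simultaneous_vocals(vocals_x1, vocals_x2):
    """True if any frame in the overlap region has vocals in both items."""
    return any(a and b for a, b in zip(vocals_x1, vocals_x2))

def mixing_parameter(vocals_x1, vocals_x2, short_fade=0.5, long_fade=4.0):
    """Determine a mixing parameter (crossfade duration, seconds) in
    dependence on detected simultaneous vocals: keep the fade short when
    vocals would collide, allow a long fade otherwise."""
    return short_fade if simultaneous_vocals(vocals_x1, vocals_x2) else long_fade

# Frames of the tail of X1 and the head of X2 at the potential mixing point.
x1_tail = [True, True, False, False]   # vocals fading out
x2_head = [False, False, True, True]   # vocals entering later
fade = mixing_parameter(x1_tail, x2_head)  # → 4.0, no vocal collision
```

The determined parameter (and the mixing point itself) could then be stored in a database and retrieved at mixing time, as the second method of the abstract describes.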
Abstract:
An audio data-processing device (110), comprising a processing unit (111) adapted to generate at least one audio data transition segment (204) representing a transition between a preceding one of a plurality of audio data items (201) and a subsequent one of the plurality of audio data items (202), wherein each audio data transition segment (204) is generated on the basis of a portion of the corresponding preceding one of the audio data items (201) and on the basis of a portion of the corresponding subsequent one of the audio data items (202), and a sending interface (112) at which the at least one audio data transition segment (204) can be provided for transmission to an audio playback device (120).
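A minimal sketch of generating such a transition segment: it is built from a portion (the tail) of the preceding audio data item and a portion (the head) of the subsequent item. Plain sample lists stand in for decoded audio, and the equal-power crossfade is an assumed choice of blend, not specified by the source.

```python
import math

def transition_segment(preceding, subsequent, n):
    """Blend the last n samples of `preceding` with the first n samples of
    `subsequent` into one transition segment, using equal-power gains."""
    tail, head = preceding[-n:], subsequent[:n]
    segment = []
    for i in range(n):
        t = i / max(n - 1, 1)
        g_out = math.cos(t * math.pi / 2)  # fade-out gain for the preceding item
        g_in = math.sin(t * math.pi / 2)   # fade-in gain for the subsequent item
        segment.append(tail[i] * g_out + head[i] * g_in)
    return segment

# Two constant-amplitude items; the segment starts at the preceding item's
# level and ends at the subsequent item's level.
seg = transition_segment([1.0] * 8, [1.0] * 8, 4)
```

The device would then expose such segments at its sending interface, so the playback device receives ready-made transitions rather than computing them itself.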
Abstract:
A digital music system according to the present disclosure is a single processor system with a drag and drop interface that permits different digital content to be performed simultaneously in two or more performance zones. The user interface may be further optimized for use with a touchscreen display. Each performance zone may have a performance queue independent of other performance zones. Performance queues may be altered at any time during a performance. Transition between each item of digital content in a performance queue is accomplished using a crossfade with user-defined parameters. Additionally, a user in either performance zone may identify a digital content item for preview, and the preview may be accomplished while the music management system is performing digital content in the two or more zones.
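An illustrative sketch of the queueing model described above, with all class and method names assumed: each performance zone keeps its own queue, which may be altered at any time, and applies its own user-defined crossfade duration at each transition.

```python
from collections import deque

class PerformanceZone:
    """One performance zone with an independent queue and crossfade setting."""

    def __init__(self, crossfade_seconds=3.0):
        self.queue = deque()                      # alterable at any time
        self.crossfade_seconds = crossfade_seconds  # user-defined parameter
        self.now_playing = None

    def enqueue(self, item):
        self.queue.append(item)

    def next_transition(self):
        """Advance to the next queued item; return (outgoing, incoming, fade)."""
        outgoing = self.now_playing
        self.now_playing = self.queue.popleft() if self.queue else None
        return outgoing, self.now_playing, self.crossfade_seconds

zone_a = PerformanceZone(crossfade_seconds=2.0)
zone_b = PerformanceZone(crossfade_seconds=5.0)  # independent of zone A
zone_a.enqueue("track1.mp3")
zone_a.enqueue("track2.mp3")
zone_a.next_transition()                    # start track1 (nothing outgoing)
out_, in_, fade = zone_a.next_transition()  # crossfade track1 -> track2
```

Because each zone object is self-contained, both zones can perform simultaneously on the single-processor system while keeping their queues and crossfade parameters separate.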
Abstract:
The invention relates to a novel array or piece of equipment (100) for providing assistance while composing musical compositions, at least by means of acoustic reproduction during and/or after the composing of musical compositions or the like which are played on virtual musical instruments, preferably in a light music ensemble. Said array or piece of equipment comprises a composing computer (100) having at least one processor unit (4), at least one sequencer (5) that is data-flow connected to the latter, and at least one sound sample library storage unit (6b) that is data-flow and data-exchange connected to at least said units (4, 5). In order to manage the sound samples (61) stored in the above-mentioned storage unit (6b), a bidirectional sound parameter storage unit (6a) is provided, which is bidirectionally or multidirectionally data-flow and data-exchange connected at least to the processor unit (4) and to the sequencer (5). Each of the sound samples (61) stored in the sound sample storage unit is assigned to said bidirectional sound parameter storage unit, which contains sound definition parameters enabling access to the sound samples (61).
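A small sketch of the parameter-based access the abstract describes, with all names and parameter fields assumed: each sample in the sample library is assigned an entry of sound definition parameters, and lookups by parameter give access to the matching samples.

```python
# Sound sample library storage (placeholder bytes stand in for audio data).
sample_library = {
    "violin_a4_legato.wav": b"...",
    "violin_a4_staccato.wav": b"...",
}

# Sound parameter storage: sound definition parameters assigned per sample.
parameter_storage = {
    "violin_a4_legato.wav": {"instrument": "violin", "pitch": "A4",
                             "articulation": "legato"},
    "violin_a4_staccato.wav": {"instrument": "violin", "pitch": "A4",
                               "articulation": "staccato"},
}

def find_samples(**wanted):
    """Access sound samples via their sound definition parameters."""
    return [name for name, params in parameter_storage.items()
            if all(params.get(k) == v for k, v in wanted.items())]

matches = find_samples(instrument="violin", articulation="legato")
```

The bidirectional aspect of the described storage unit would correspond to both the processor unit and the sequencer reading and updating these parameter entries; that wiring is not modeled here.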
Abstract:
The invention provides a method for generating a sound effect audio clip based on a mix of audible characteristics of two existing audio clips. The method comprises selecting first and second audio clips, and mapping the evolution over time of a plurality of predetermined audible characteristics of the first audio clip to arrive at first mapping data. The second audio clip is then modified based on the first mapping data, so as to at least partially apply the evolution over time of audible characteristics from the first audio clip to the second audio clip, and the sound effect audio clip is output in response to the modified second audio clip. Preferred audible characteristics include amplitude, pitch, and spectral envelope (e.g. formant), which are each represented in the mapping data as values representing the audible characteristic for the duration of the first audio clip at a given time resolution, where each value represents a value or a set of values resulting from an analysis over a predetermined time window. In particular, the second audio clip may also be mapped with respect to the evolution over time of corresponding audible characteristics, and the modification of the second audio clip can then be performed in response to a mix of the two mapping data sets, e.g. by frame-by-frame processing. A time alignment of the first and second audio clips may be performed, so that the two audio clips have the same duration prior to being processed.
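A hedged sketch of the method for a single audible characteristic, amplitude: map the first clip's per-window amplitude values as mapping data, then modify the second clip window by window so it follows that evolution over time. The window size, helper names, and the plain-list audio representation are assumptions; the clips are taken as already time-aligned to the same duration.

```python
def amplitude_envelope(clip, window):
    """First mapping data: one peak-amplitude value per analysis window."""
    return [max(abs(s) for s in clip[i:i + window]) or 1.0
            for i in range(0, len(clip), window)]

def apply_envelope(clip, envelope, window):
    """Modify `clip` so each window's peak follows the mapped envelope,
    applying the first clip's amplitude evolution to the second clip."""
    out = []
    for w, target in enumerate(envelope):
        frame = clip[w * window:(w + 1) * window]
        peak = max((abs(s) for s in frame), default=1.0) or 1.0
        out.extend(s * target / peak for s in frame)
    return out

first = [0.1, 0.1, 0.9, 0.9]   # quiet then loud
second = [0.5, 0.5, 0.5, 0.5]  # flat clip of the same duration (time-aligned)
env = amplitude_envelope(first, window=2)
effect = apply_envelope(second, env, window=2)  # second clip now swells like first
```

Pitch and spectral envelope would need their own analysis and modification steps (e.g. pitch shifting and formant filtering per window), and the two mapping data sets could be blended before application, per the frame-by-frame mixing variant the abstract mentions.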