-
1.
Publication number: US11315532B2
Publication date: 2022-04-26
Application number: US16804845
Filing date: 2020-02-28
Applicant: YAMAHA CORPORATION
Inventor: Daichi Watanabe
Abstract: A chord information extraction device includes a character group extractor, a determiner and a corrector. The character group extractor extracts a character group corresponding to chord information from score image data representing a music score. The determiner determines whether the character group extracted by the character group extractor follows a predetermined chord notation rule. In a case where the character group extracted by the character group extractor does not follow the chord notation rule, the corrector corrects the extracted character group to follow the chord notation rule.
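The check-then-correct flow this abstract describes can be sketched briefly. The regex rule and the glyph-confusion table below are illustrative assumptions, not the patent's actual rule set:

```python
import re

# Hypothetical chord notation rule: a root letter, an optional accidental,
# an optional quality, and an optional tension number (e.g. "C", "F#m7",
# "Bbmaj7").
CHORD_RULE = re.compile(r"^[A-G][#b]?(?:m|maj|min|dim|aug|sus[24])?\d*$")

# Assumed table of look-alike glyphs an OCR stage might confuse.
OCR_CONFUSIONS = {"6": "G", "8": "B", "0": "D"}

def correct_chord(text):
    """Return the character group unchanged if it follows the rule;
    otherwise try single-glyph substitutions until one conforms."""
    if CHORD_RULE.match(text):
        return text
    for i, ch in enumerate(text):
        if ch in OCR_CONFUSIONS:
            candidate = text[:i] + OCR_CONFUSIONS[ch] + text[i + 1:]
            if CHORD_RULE.match(candidate):
                return candidate
    return None  # no rule-conforming correction found

print(correct_chord("Cm7"))  # already valid -> "Cm7"
print(correct_chord("6m7"))  # "6" misread for "G" -> "Gm7"
```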
-
2.
Publication number: US09728173B2
Publication date: 2017-08-08
Application number: US15262625
Filing date: 2016-09-12
Applicant: YAMAHA CORPORATION
Inventor: Daichi Watanabe
CPC classification number: G10H1/40 , G10H1/0025 , G10H1/38 , G10H1/42 , G10H2210/005 , G10H2210/051 , G10H2210/145 , G10H2210/151 , G10H2210/341
Abstract: Performance information of main music is sequentially acquired, and an accent position of the music is determined. An automatic accompaniment is progressed based on accompaniment pattern data. Upon determination that the current time point coincides with the accent position, an accompaniment event whose tone generation timing arrives within a predetermined time range following the current time point is extracted from the accompaniment pattern data, the tone generation timing of the extracted accompaniment event is shifted to the current time point, and then, accompaniment data is created based on the accompaniment event having the tone generation timing thus shifted. If there is no accompaniment event whose tone generation timing arrives at the current time point or within the predetermined time range following the current time point, automatic accompaniment data with the current time point set as its tone generation timing is additionally created.
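A minimal sketch of the timing-shift step, assuming accompaniment events are (tick, note) pairs; the window length and the pitch of the additionally created event are also assumptions:

```python
WINDOW = 120  # ticks; the abstract's "predetermined time range" (assumed)

def align_to_accent(events, now):
    """On an accent at `now`, pull the first event due within the window
    forward to `now`; if none exists, add a new event at `now`."""
    for i, (tick, note) in enumerate(events):
        if now <= tick <= now + WINDOW:
            events[i] = (now, note)  # shift its tone generation timing
            return events
    events.append((now, 60))  # no candidate: create one (pitch assumed)
    events.sort()
    return events

print(align_to_accent([(480, 36), (960, 38)], now=400))
# (480, 36) falls in [400, 520] -> [(400, 36), (960, 38)]
```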
-
3.
Publication number: US20130151556A1
Publication date: 2013-06-13
Application number: US13708723
Filing date: 2012-12-07
Applicant: Yamaha Corporation
Inventor: Daichi Watanabe
IPC: G06F17/30
CPC classification number: G06F17/30755 , G10H1/0025 , G10H1/42 , G10H7/02 , G10H2210/341 , G10H2220/106 , G10H2240/131 , G10H2250/641
Abstract: For each performance part, a database stores therein a plurality of part performance data. The part performance data for each part includes a sound generation pattern and tone data corresponding thereto. A query pattern indicative of a sound generation pattern to be made an object of search is input by a user. A search is made through the database for part performance data including a sound generation pattern matching the query pattern. In response to a user's operation, one part performance data is identified from among searched-out results, and the sound generation pattern of the identified part performance data is instructed as a new query pattern. Then, a further search is made through the database for part performance data including a sound generation pattern matching the new query pattern. The searched-out pattern can be edited by the user and a further search can be made based on the edited pattern.
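A rough sketch of the query-then-refine search loop; the database rows and the onset-distance measure are illustrative assumptions:

```python
# Toy database: part performance name -> sound generation pattern as onsets.
DB = {
    "drums_rock_01":  [0.0, 0.5, 1.0, 1.5],
    "drums_swing_01": [0.0, 0.66, 1.0, 1.66],
    "bass_rock_01":   [0.0, 1.0, 2.0, 3.0],
}

def distance(a, b):
    """Crude pattern distance (assumed metric): summed onset offsets."""
    if len(a) != len(b):
        return float("inf")
    return sum(abs(x - y) for x, y in zip(a, b))

def search(query, k=2):
    return sorted(DB, key=lambda name: distance(query, DB[name]))[:k]

hits = search([0.0, 0.5, 1.0, 1.5])  # user's initial query pattern
new_query = DB[hits[0]]              # a picked result becomes the new query
print(search(new_query))             # the further search from the abstract
```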
-
4.
Publication number: US09563701B2
Publication date: 2017-02-07
Application number: US13708723
Filing date: 2012-12-07
Applicant: Yamaha Corporation
Inventor: Daichi Watanabe
CPC classification number: G06F17/30755 , G10H1/0025 , G10H1/42 , G10H7/02 , G10H2210/341 , G10H2220/106 , G10H2240/131 , G10H2250/641
Abstract: For each performance part, a database stores therein a plurality of part performance data. The part performance data for each part includes a sound generation pattern and tone data corresponding thereto. A query pattern indicative of a sound generation pattern to be made an object of search is input by a user. A search is made through the database for part performance data including a sound generation pattern matching the query pattern. In response to a user's operation, one part performance data is identified from among searched-out results, and the sound generation pattern of the identified part performance data is instructed as a new query pattern. Then, a further search is made through the database for part performance data including a sound generation pattern matching the new query pattern. The searched-out pattern can be edited by the user and a further search can be made based on the edited pattern.
-
5.
Publication number: US09142203B2
Publication date: 2015-09-22
Application number: US14505026
Filing date: 2014-10-02
Applicant: YAMAHA CORPORATION
Inventor: Daichi Watanabe
CPC classification number: G10H7/00 , G10H1/0008 , G10H1/361 , G10H1/383 , G10H2210/005 , G10H2210/391 , G10H2210/576
Abstract: Individual chords and bar lines are extracted from an acquired chord chart described in text. Further, musical time information indicative of a musical time of music data to be generated is acquired, chord progression information is generated by allocating in-bar relative time positions to the extracted individual chords in accordance with the musical time indicated by the acquired musical time information and the extracted bar lines. A chord chart display can be provided based on the generated chord progression information. Further, accompaniment pattern data is acquired, and automatic accompaniment data can be generated by controlling the acquired accompaniment pattern data in accordance with the generated chord progression information.
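The allocation of in-bar relative time positions can be sketched as follows, assuming a "|"-delimited text chord chart and 4/4 time (both assumptions):

```python
def parse_chart(chart, beats_per_bar=4):
    """Yield (bar_index, beat_offset, chord), spreading each bar's chords
    evenly across the bar per the acquired musical time."""
    progression = []
    bars = [b.split() for b in chart.strip("|").split("|")]
    for bar_no, chords in enumerate(bars):
        step = beats_per_bar / max(len(chords), 1)
        for i, chord in enumerate(chords):
            progression.append((bar_no, i * step, chord))
    return progression

print(parse_chart("| C | Am F | G7 |"))
# -> [(0, 0.0, 'C'), (1, 0.0, 'Am'), (1, 2.0, 'F'), (2, 0.0, 'G7')]
```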
-
6.
Publication number: US11955104B2
Publication date: 2024-04-09
Application number: US17149385
Filing date: 2021-01-14
Applicant: Yamaha Corporation
Inventor: Daichi Watanabe
IPC: G10H1/38
CPC classification number: G10H1/383
Abstract: An accompaniment sound generating device includes a specifier, an accompaniment sound generator, and an accompaniment sound outputter. The specifier specifies a plurality of musical performance parts for which accompaniment sounds are generated based on an input musical performance sound. The accompaniment sound generator generates the accompaniment sounds that belong to the plurality of specified musical performance parts for each musical performance sound. The accompaniment sound outputter outputs the accompaniment sounds generated for the plurality of musical performance parts with timing for generating the accompaniment sounds aligned with timing for generating musical performance sounds.
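A minimal sketch of the specifier/generator/outputter split, assuming named parts with fixed semitone offsets (both assumptions):

```python
PART_OFFSETS = {"bass": -12, "pad": 0, "strings": 12}  # assumed parts

def generate_accompaniment(perf_note, perf_time, specified_parts):
    """For one input performance sound, emit one accompaniment note per
    specified part, all sharing the performance sound's timing (the
    alignment the outputter performs)."""
    return [(part, perf_note + PART_OFFSETS[part], perf_time)
            for part in specified_parts]

print(generate_accompaniment(60, 1.25, ["bass", "pad"]))
# -> [('bass', 48, 1.25), ('pad', 60, 1.25)]
```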
-
7.
Publication number: US11308924B2
Publication date: 2022-04-19
Application number: US16807766
Filing date: 2020-03-03
Applicant: YAMAHA CORPORATION
Inventor: Daichi Watanabe
IPC: G10G3/04 , G10H1/00 , G06V30/413
Abstract: A chord information extraction device includes an acquirer, a score type determiner, an extraction region determiner and a chord information extractor. The acquirer acquires score image data representing a reference score. The score type determiner determines a score type of the reference score from among a plurality of predetermined score types based on the acquired score image data. The extraction region determiner determines a chord extraction region in the reference score based on extraction region information that defines a relationship between a plurality of score types and a chord extraction region from which chord information is to be extracted. The chord information extractor extracts the chord information from the acquired score image data based on the determined chord extraction region.
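A minimal sketch of the extraction-region lookup, with assumed score types and assumed fractional page regions (top, left, bottom, right):

```python
EXTRACTION_REGIONS = {
    "lead_sheet":  (0.00, 0.0, 0.15, 1.0),  # chords printed above the staff
    "grand_staff": (0.00, 0.0, 0.10, 1.0),
    "tablature":   (0.85, 0.0, 1.00, 1.0),  # chords printed below the tab
}

def chord_extraction_region(score_type, page_w, page_h):
    """Map the determined score type to the pixel region chord OCR scans."""
    top, left, bottom, right = EXTRACTION_REGIONS[score_type]
    return (int(top * page_h), int(left * page_w),
            int(bottom * page_h), int(right * page_w))

print(chord_extraction_region("lead_sheet", 2480, 3508))  # A4 at 300 dpi
# -> (0, 0, 526, 2480)
```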
-
8.
Publication number: US11176917B2
Publication date: 2021-11-16
Application number: US16120885
Filing date: 2018-09-04
Applicant: YAMAHA CORPORATION
Inventor: Daichi Watanabe
Abstract: Performance information of one or more accompaniment parts is extracted from original performance information. Arranged accompaniment performance information is created using, in accordance with a predetermined arranging condition, only any one or more of pitch names included in the extracted accompaniment performance information. The predetermined arranging condition comprises instructing that any of chord component notes be thinned out in an identified chord progression, and arranged accompaniment performance information is created by thinning out an accompaniment note of a pitch name corresponding to the chord component note having been instructed to be thinned out. Another predetermined arranging condition comprises setting a ratio between chords and arpeggios in one or more measures, and a combination of chords and arpeggios in the extracted accompaniment part performance information is changed so that a ratio between chords and arpeggios in one or more measures in the extracted accompaniment performance information assumes the set ratio.
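The thinning-out condition can be sketched as below; the chord-tone table and the choice of which component note to drop are illustrative assumptions:

```python
CHORD_TONES = {"C": ("C", "E", "G"), "G7": ("G", "B", "D", "F")}

def thin_out(notes, chord, component="G"):
    """Drop accompaniment notes whose pitch name matches the chord
    component instructed to be thinned out (here C's fifth, assumed)."""
    if component not in CHORD_TONES[chord]:
        return notes
    return [(name, t) for (name, t) in notes if name != component]

notes = [("C", 0.0), ("E", 0.5), ("G", 1.0), ("C", 1.5)]
print(thin_out(notes, "C"))  # -> [('C', 0.0), ('E', 0.5), ('C', 1.5)]
```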
-
9.
Publication number: US10354628B2
Publication date: 2019-07-16
Application number: US15262548
Filing date: 2016-09-12
Applicant: YAMAHA CORPORATION
Inventor: Daichi Watanabe
Abstract: Arranged accompaniment data are created by: acquiring original performance information; extracting, from the acquired original performance information, one or more accent positions in a music piece represented by the acquired original performance information; acquiring existing accompaniment pattern data; and adjusting time positions of one or more accompaniment notes, which are to be generated on the basis of the acquired accompaniment pattern data, so as to coincide with the extracted one or more accent positions. In this way, it is possible to create accompaniment data matching accent positions (rhythmic elements) of the music piece represented by the original performance information and thereby automatically make a musical arrangement with respective characteristics of the existing accompaniment pattern data and original performance information remaining therein.
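A minimal sketch of adjusting accompaniment-note time positions to coincide with accent positions, assuming beat-valued onsets and an assumed snap threshold:

```python
SNAP = 0.25  # beats: only move a note if an accent lies this close (assumed)

def snap_to_accents(onsets, accents):
    """Shift each accompaniment onset to the nearest accent within SNAP,
    leaving the rest of the pattern's character untouched."""
    out = []
    for t in onsets:
        nearest = min(accents, key=lambda a: abs(a - t))
        out.append(nearest if abs(nearest - t) <= SNAP else t)
    return out

print(snap_to_accents([0.0, 0.9, 2.1, 2.5], accents=[0.0, 1.0, 2.0, 3.0]))
# -> [0.0, 1.0, 2.0, 2.5]  (2.5 is too far from any accent to move)
```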