METHOD AND APPARATUS FOR PROCESSING OPERATOR FOR DEEP LEARNING FRAMEWORK, AND DEVICE AND STORAGE MEDIUM

    Publication No.: EP4300363A1

    Publication Date: 2024-01-03

    Application No.: EP22925241.6

    Application Date: 2022-11-02

    IPC Classification: G06N3/04

    Abstract: The present disclosure provides an operator processing method of a deep learning framework, which relates to the field of computer technology, in particular to the field of artificial intelligence technology such as deep learning. The specific implementation scheme is: acquiring an operator to be processed, where the operator to be processed includes a template parameter independent of the deep learning framework and an operator kernel function; in response to receiving input information for the operator to be processed, parsing the template parameter by using the input information to obtain a plurality of complete template parameters related to the deep learning framework; and processing the operator kernel function according to the plurality of complete template parameters, to obtain an available operator for the deep learning framework. The present disclosure also provides an operator processing apparatus of a deep learning framework, an electronic device, and a storage medium.
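
    The claimed flow can be sketched roughly in Python: binding of a framework-independent template parameter is deferred until input information arrives, at which point the kernel is specialized into a usable operator. All names here (PendingOperator, DTYPE_MAP, add_kernel) are illustrative assumptions, not from the patent.

```python
# Hypothetical sketch: resolve a framework-independent template parameter
# into a concrete, framework-specific parameter when input info arrives.

DTYPE_MAP = {"float32": float, "int64": int}  # framework-specific bindings

class PendingOperator:
    """An operator whose kernel is written against an abstract dtype."""
    def __init__(self, kernel):
        self.kernel = kernel  # kernel(dtype, *tensors)

    def resolve(self, input_info):
        # Parse the input information to obtain the complete template
        # parameter (here, just a concrete dtype) and bind the kernel.
        dtype = DTYPE_MAP[input_info["dtype"]]
        def available_op(*tensors):
            return self.kernel(dtype, *tensors)
        return available_op

# A kernel independent of any framework: element-wise addition.
def add_kernel(dtype, a, b):
    return [dtype(x) + dtype(y) for x, y in zip(a, b)]

op = PendingOperator(add_kernel).resolve({"dtype": "float32"})
```

    The same pending operator could be resolved again with a different dtype, which is the point of keeping the template parameter abstract until call time.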

    METHOD FOR PROCESSING DATA, AND ELECTRONIC DEVICE

    Publication No.: EP4113390A2

    Publication Date: 2023-01-04

    Application No.: EP22208202.6

    Application Date: 2022-11-18

    IPC Classification: G06N3/063 G06N3/04

    Abstract: The disclosure provides a method for processing data, and an electronic device. The method includes: obtaining first attribute information of input data and second attribute information of a computing device corresponding to the input data; selecting a target operator implementation mode from a plurality of candidate operator implementation modes based on the first attribute information and the second attribute information; determining, from an operator library, a plurality of sub-operators included in an operator required for the input data based on the target operator implementation mode, to generate the operator; and obtaining an operation result by performing an operation on the input data by the computing device based on the operator.
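
    A minimal sketch of the selection-and-composition step, assuming made-up attribute keys and a toy operator library (select_mode, OPERATOR_LIBRARY and build_operator are hypothetical names, not the patent's API):

```python
def select_mode(data_attrs, device_attrs):
    # Choose a target implementation mode from the candidates based on
    # attributes of the input data and of the computing device.
    if device_attrs.get("vector_width", 1) > 1 and data_attrs["size"] >= 1024:
        return "vectorized"
    return "scalar"

# Hypothetical operator library: each mode maps to a list of sub-operators.
OPERATOR_LIBRARY = {
    "vectorized": ["tile", "simd_mul", "reduce"],
    "scalar": ["mul", "accumulate"],
}

def build_operator(data_attrs, device_attrs):
    # Determine the sub-operators for the chosen mode to generate the operator.
    mode = select_mode(data_attrs, device_attrs)
    return OPERATOR_LIBRARY[mode]
```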

    METHOD AND APPARATUS OF CONVERTING SCHEMA IN DEEP LEARNING FRAMEWORK, AND COMPUTER STORAGE MEDIUM

    Publication No.: EP3996009A1

    Publication Date: 2022-05-11

    Application No.: EP21197920.8

    Application Date: 2021-09-21

    IPC Classification: G06N20/00

    Abstract: According to exemplary embodiments of the present disclosure, there are provided a method and apparatus of converting a schema in a deep learning framework, a computer storage medium, and a computer program product, which may be used for the construction of the deep learning framework. The method of converting the schema in the deep learning framework includes: updating a first schema, based on first syntax elements in the first schema and a context relationship between the first syntax elements, so as to obtain an updated first schema; generating second syntax elements corresponding to updated first syntax elements in the updated first schema, based on a mapping relationship between the updated first syntax elements and second syntax elements in a second schema system; and combining the second syntax elements according to a context relationship between the updated first syntax elements, so as to generate a second schema. According to a solution of the present disclosure, the schema conversion may be performed efficiently.
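
    The update–map–combine pipeline can be illustrated with a toy schema of layer names; the alias table, element mapping and arrow-joined output below are assumptions for illustration, not the patent's actual schemas:

```python
# Step 1 data: context-independent cleanup of aliased first-schema elements.
ALIASES = {"convolution": "conv", "dense": "fc"}
# Step 2 data: mapping relationship between first- and second-schema elements.
ELEMENT_MAP = {"conv": "Conv2D", "relu": "ReLU", "fc": "Dense"}

def convert_schema(first_schema):
    # Step 1: update the first schema (normalize its syntax elements).
    updated = [ALIASES.get(e, e) for e in first_schema]
    # Step 2: generate second syntax elements via the mapping relationship.
    second_elements = [ELEMENT_MAP[e] for e in updated]
    # Step 3: combine them according to the original context (here, order).
    return " -> ".join(second_elements)
```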

    METHOD AND APPARATUS OF TRAINING MODEL, ELECTRONIC DEVICE, STORAGE MEDIUM, AND DEVELOPMENT SYSTEM

    Publication No.: EP3926555A2

    Publication Date: 2021-12-22

    Application No.: EP21198055.2

    Application Date: 2021-09-21

    IPC Classification: G06N20/00

    Abstract: Embodiments of the present disclosure provide a method and apparatus of training a model, an electronic device, a storage medium and a development system, which relate to a field of deep learning. The method may include calling a training preparation component to set at least a loss function and an optimization function for training the model, in response to determining that a training preparation instruction is received. The method further includes calling a training component to set a first data reading component, in response to determining that a training instruction is received. The first data reading component is configured to load a training data set for training the model. In addition, the method may further include training the model based on the training data set from the first data reading component, by using the loss function and the optimization function through the training component. Technical solutions of the present disclosure may reduce the amount of code required, so that research and development resources and time costs are significantly saved.
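
    A minimal sketch of the component-based flow, assuming a single scalar parameter and a hand-written gradient; the names Trainer, squared_error_grad and reader are illustrative, not from the patent:

```python
class Trainer:
    def prepare(self, grad_fn, lr):
        # Training preparation component: set the loss (via its gradient)
        # and the optimization function (gradient descent with rate lr).
        self.grad_fn, self.lr = grad_fn, lr

    def fit(self, data_reader, w=0.0, epochs=200):
        # Training component: the data reading component loads the training
        # set, and each sample updates the parameter w.
        for _ in range(epochs):
            for x, y in data_reader():
                w -= self.lr * self.grad_fn(w, x, y)
        return w

def squared_error_grad(w, x, y):
    # d/dw of (w * x - y) ** 2
    return 2 * (w * x - y) * x

def reader():
    # Stand-in data reading component: samples of y = 2 * x.
    yield from [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

trainer = Trainer()
trainer.prepare(squared_error_grad, lr=0.05)
w = trainer.fit(reader)  # converges toward 2.0
```

    The point of the split is that user code only supplies the loss/optimizer and the data reader; the loop itself lives in the components.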

    ACCESS METHOD AND APPARATUS, ELECTRONIC DEVICE AND COMPUTER STORAGE MEDIUM

    Publication No.: EP4123514A3

    Publication Date: 2023-07-19

    Application No.: EP22211848.1

    Application Date: 2022-12-07

    IPC Classification: G06N3/063 G06N3/04

    Abstract: The disclosure provides an access method, an access apparatus, an electronic device and a computer storage medium, and relates to a field of computer technologies, in particular to a field of artificial intelligence technologies such as chip and deep learning. The method includes: determining (S11) a computational graph for calling an access device based on operator representations in a target model; optimizing (S12) the computational graph based on information of the access device; and performing (S13) relevant running operations of the target model on the access device based on the computational graph and an interface for the access device to access a model framework of the target model, the interface being determined based on kit data of the access device.
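
    Steps S11 and S12 can be sketched as a device-aware graph rewrite; the operator names and the "supports_fused_add_relu" capability flag below are invented for illustration:

```python
def build_graph(operator_reps):
    # S11: a computational graph, modeled here as an ordered op list.
    return list(operator_reps)

def optimize(graph, device_info):
    # S12: device-aware optimization; fuse add+relu when the access
    # device advertises support for the fused kernel.
    if not device_info.get("supports_fused_add_relu"):
        return graph
    out, i = [], 0
    while i < len(graph):
        if graph[i] == "add" and i + 1 < len(graph) and graph[i + 1] == "relu":
            out.append("fused_add_relu")
            i += 2
        else:
            out.append(graph[i])
            i += 1
    return out
```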

    DISTRIBUTED TRAINING METHOD BASED ON END-TO-END ADAPTION, AND DEVICE

    Publication No.: EP4191411A1

    Publication Date: 2023-06-07

    Application No.: EP22211341.7

    Application Date: 2022-12-05

    IPC Classification: G06F9/50 G06N3/08

    Abstract: Provided are a distributed training method based on end-to-end adaption, a device and a storage medium. The method includes: obtaining (S101) slicing results by slicing a model to be trained; obtaining (S102) an attribute of computing resources allocated to the model for training by parsing the computing resources, in which the computing resources are determined based on a computing resource requirement of the model, computing resources occupied by another model being trained, and idle computing resources, and the attribute of the computing resources is configured to represent at least one of a topology relation and a task processing capability of the computing resources; determining (S103) a distribution strategy of each of the slicing results in the computing resources based on the attribute of the computing resources; and performing (S104) distributed training on the model using the computing resources based on the distribution strategy.
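
    Steps S101 and S103 can be sketched as pipeline-style slicing plus a capability-aware placement rule; the device records and the "larger slice goes to the more capable device" heuristic are illustrative assumptions, not the patent's strategy:

```python
def slice_model(layers, num_slices):
    # S101: evenly slice the model's layers into contiguous groups.
    k, m = divmod(len(layers), num_slices)
    slices, start = [], 0
    for i in range(num_slices):
        end = start + k + (1 if i < m else 0)
        slices.append(layers[start:end])
        start = end
    return slices

def distribution_strategy(slices, resources):
    # S103: assign heavier slices to resources with a higher
    # task-processing capability attribute.
    ranked = sorted(resources, key=lambda r: r["capability"], reverse=True)
    order = sorted(range(len(slices)), key=lambda i: len(slices[i]), reverse=True)
    return {dev["name"]: slices[idx] for idx, dev in zip(order, ranked)}
```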

    SHARED ENCODER GENERATION METHOD AND APPARATUS, AND ELECTRONIC DEVICE

    Publication No.: EP3855368A1

    Publication Date: 2021-07-28

    Application No.: EP20864267.8

    Application Date: 2020-04-07

    IPC Classification: G06N3/08

    Abstract: A method and an apparatus for generating a shared encoder, and an electronic device are provided by the present application, which belong to the field of computer technology. The method includes: sending, by a master node, a shared encoder training instruction to child nodes, so that each child node obtains training samples based on a type of a target shared encoder included in the training instruction; sending an initial parameter set of the target shared encoder to be trained to each child node after obtaining a confirmation message returned by each child node, so that the initial parameter set is trained by each child node with its own training samples; obtaining an updated parameter set of the target shared encoder returned by each child node; and determining a target parameter set corresponding to the target shared encoder based on a first preset rule and the updated parameter set returned by each child node. As a result, the method for generating the shared encoder may reduce the difficulty and cost of obtaining training corpora from a plurality of fields and improve the performance of the shared encoder.
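
    One round of this master/child exchange resembles federated averaging; the sketch below assumes the "first preset rule" is an element-wise mean and stands in a trivial local update for real training (aggregate, child_update and federated_round are hypothetical names):

```python
def aggregate(child_params):
    # Master's "first preset rule", assumed here to be an element-wise
    # mean over the updated parameter sets returned by child nodes.
    n, dim = len(child_params), len(child_params[0])
    return [sum(p[i] for p in child_params) / n for i in range(dim)]

def child_update(initial, samples, lr=0.5):
    # A child's local training step: move each parameter toward the mean
    # of its own training samples (a stand-in for real gradient updates).
    target = sum(samples) / len(samples)
    return [w + lr * (target - w) for w in initial]

def federated_round(initial, per_child_samples):
    # Master sends the initial set, children train locally, master merges.
    updates = [child_update(initial, s) for s in per_child_samples]
    return aggregate(updates)
```

    Each child's corpus never leaves the child node, which is why the scheme lowers the cost of collecting corpora from many fields.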

    METHOD AND APPARATUS FOR GENERATING AND APPLYING DEEP LEARNING MODEL BASED ON DEEP LEARNING FRAMEWORK

    Publication No.: EP4195108A1

    Publication Date: 2023-06-14

    Application No.: EP22180958.5

    Application Date: 2022-06-24

    IPC Classification: G06N3/08 G06N20/00

    Abstract: The present invention provides a method and apparatus for generating and applying a deep learning model based on a deep learning framework, and relates to the field of computers. A specific implementation solution includes that: a basic operating environment is established on a target device, where the basic operating environment is used for providing environment preparation for the overall generation process of a deep learning model; a basic function of the deep learning model is generated in the basic operating environment according to at least one of a service requirement and a hardware requirement, to obtain a first processing result; an extended function of the deep learning model is generated in the basic operating environment based on the first processing result, to obtain a second processing result; and a preset test script is used to perform a function test on the second processing result, to output a test result.
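
    The staged generation can be caricatured as a pipeline of dictionaries; every requirement value, op name and test condition below is invented to show the data flow only:

```python
def generate_model(service_req, hardware_req):
    # Stage 1: basic operating environment (here, a plain settings dict).
    env = {"device": hardware_req}
    # Stage 2: basic function from the service requirement
    # (the first processing result).
    ops = ["matmul", "softmax"] if service_req == "classification" else ["matmul"]
    first = dict(env, ops=ops)
    # Stage 3: extended function layered on the first result
    # (the second processing result).
    second = dict(first, precision="fp16" if hardware_req == "gpu" else "fp32")
    # Stage 4: preset "test script" validating the second result.
    test_passed = bool(second["ops"]) and second["precision"] in {"fp16", "fp32"}
    return second, test_passed
```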

    METHOD AND APPARATUS OF TRAINING MODEL, DEVICE, MEDIUM, AND PROGRAM PRODUCT

    Publication No.: EP3913542A3

    Publication Date: 2022-05-11

    Application No.: EP21197956.2

    Application Date: 2021-09-21

    Abstract: There is provided a method and apparatus of training a model, a device, and a medium, which relate to artificial intelligence, and in particular to deep learning and image processing technology. The method may include: determining a plurality of augmented sample sets associated with a plurality of original samples; determining a first constraint according to a first model based on the plurality of augmented sample sets, wherein the first constraint is associated with a difference between outputs of the first model for different augmented samples in one augmented sample set; determining a second constraint according to the first model and a second model based on the plurality of augmented sample sets, wherein the second constraint is associated with a difference between outputs of the first model and the second model for one augmented sample, and the first model has a complexity lower than that of the second model; and training the first model based on at least the first constraint and the second constraint, so as to obtain a trained first model. According to the embodiments of the present disclosure, the performance of the trained model may be optimized.
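
    The two constraints amount to a consistency term and a teacher-student (distillation) term. A scalar sketch, with squared differences assumed as the distance and models reduced to plain functions (first_constraint and second_constraint are illustrative names):

```python
def first_constraint(student, augmented_sets):
    # Penalize disagreement between the first (student) model's outputs
    # for different augmented samples of the same original sample.
    total = 0.0
    for aug_set in augmented_sets:
        outs = [student(x) for x in aug_set]
        mean = sum(outs) / len(outs)
        total += sum((o - mean) ** 2 for o in outs)
    return total

def second_constraint(student, teacher, augmented_sets):
    # Penalize the gap between the student and the higher-complexity
    # second (teacher) model, per augmented sample.
    return sum((student(x) - teacher(x)) ** 2
               for aug_set in augmented_sets for x in aug_set)
```

    Training would minimize a weighted sum of the two terms; the weighting is not specified in the abstract.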

    METHOD AND APPARATUS OF PROCESSING IMAGE, DEVICE AND MEDIUM

    Publication No.: EP3913533A3

    Publication Date: 2022-03-02

    Application No.: EP21190806.6

    Application Date: 2021-08-11

    IPC Classification: G06V30/40

    Abstract: The present disclosure provides a method and apparatus of processing an image, a device and a medium, which relate to the field of artificial intelligence, and in particular to the fields of deep learning and image processing. The method includes: determining a background image of the image, wherein the background image describes a background relative to characters in the image; determining a property of characters corresponding to a selected character section of the image; replacing the selected character section with a corresponding section in the background image, so as to obtain an adjusted image; and combining acquired target characters with the adjusted image based on the property. In this manner, it is possible to alleviate a shortage of images for different scenarios, thereby increasing the number of available images and saving time and costs for labeling images.
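
    The replace-then-combine steps can be sketched on a tiny grid of pixel values, with characters stored as their code points; the box convention and both function names are assumptions for illustration:

```python
def replace_section(image, background, box):
    # Replace the selected character section of the image with the
    # corresponding section of its background image.
    r0, c0, r1, c1 = box
    out = [row[:] for row in image]
    for r in range(r0, r1):
        for c in range(c0, c1):
            out[r][c] = background[r][c]
    return out

def combine_characters(image, chars, box):
    # Draw the acquired target characters into the cleared section
    # (one "pixel" per character, stored as its code point).
    out = [row[:] for row in image]
    r0, c0 = box[0], box[1]
    for j, ch in enumerate(chars):
        out[r0][c0 + j] = ord(ch)
    return out
```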