Abstract:
A model file management method includes: a terminal device receives a storage address of a target model file package from a server, and the terminal device obtains the target model file package based on the storage address, where the target model file package is determined based on a parameter of a model file package locally stored in the terminal device and a parameter of a model file package managed by the server. In the artificial intelligence (AI) field, an application may implement a specific function by using an AI model file. Decoupling an application from an AI model file enables the terminal device to perform centralized management of general model files.
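The exchange can be sketched as follows, assuming an HTTP-based service: the endpoint path, JSON field names, and the use of a local package version as the reported parameter are hypothetical, since the abstract only states that the server returns a storage address determined from the local and server-side package parameters.

```python
import requests

# Hypothetical service URL and field names, for illustration only.
MODEL_SERVER = "https://example.com/model-service"

def fetch_target_model_package(model_name: str, local_version: str) -> bytes:
    """Report the local package's parameter, receive the storage address of
    the target model file package, then download the package from it."""
    # Step 1: the terminal reports the parameter (here, a version) of its
    # locally stored model file package; the server compares it with the
    # packages it manages and returns the target package's storage address.
    resp = requests.post(
        f"{MODEL_SERVER}/query",
        json={"model_name": model_name, "local_version": local_version},
        timeout=10,
    )
    resp.raise_for_status()
    storage_address = resp.json()["storage_address"]

    # Step 2: the terminal obtains the target model file package
    # based on the returned storage address.
    pkg = requests.get(storage_address, timeout=30)
    pkg.raise_for_status()
    return pkg.content
```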
Abstract:
Technical effects of the method, apparatus, and system for operating a shared resource in an asynchronous multiprocessing system provided in the present invention are as follows: A processor in the asynchronous multiprocessing system operates on a shared resource by locking a hardware resource lock, and the hardware resource lock is implemented by a register. In this way, the bus in the asynchronous multiprocessing system does not need to support a synchronization operation, and the processor does not need to support a synchronization operation either; the processor can operate on the shared resource simply by accessing the register. This simplifies the operation on the shared resource by the processor, enlarges the range of processors that can be selected for the asynchronous multiprocessing system, and further improves the flexibility of the asynchronous multiprocessing system.
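A minimal sketch of the register-based lock semantics, assuming a common hardware-spinlock convention: reading the lock register returns 0 and takes the lock when it is free, or returns 1 when it is busy, and writing 0 releases it. The HardwareLockRegister class below is a hypothetical software stand-in for the real peripheral; the point is only that a processor needs nothing beyond ordinary register reads and writes.

```python
import threading
import time

class HardwareLockRegister:
    """Software stand-in for a register-backed hardware resource lock.

    Assumed convention: a read returns 0 and marks the lock as taken when it
    was free, or returns 1 when it is already taken; a write of 0 releases it.
    The real hardware performs the read's test-and-set atomically; the
    internal threading.Lock only models that atomicity in this simulation.
    """

    def __init__(self) -> None:
        self._taken = False
        self._atomic = threading.Lock()

    def read(self) -> int:
        with self._atomic:           # models the register's atomic read side effect
            if self._taken:
                return 1             # busy: another processor holds the lock
            self._taken = True       # free: the read itself acquires the lock
            return 0

    def write(self, value: int) -> None:
        with self._atomic:
            if value == 0:
                self._taken = False  # writing 0 releases the lock


def use_shared_resource(lock_reg: HardwareLockRegister, work) -> None:
    """A processor operates on the shared resource purely by accessing the register."""
    while lock_reg.read() != 0:      # spin until the read reports "acquired"
        time.sleep(0)                # yield; a real core would simply retry
    try:
        work()                       # operate on the shared resource
    finally:
        lock_reg.write(0)            # release by writing the register
```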
Abstract:
This application provides a method for training a neural network model and an apparatus. The method includes: obtaining annotation data of a service generated by a terminal device in a specified period; training a second neural network model by using that annotation data to obtain a trained second neural network model; and updating a first neural network model based on the trained second neural network model. Because training is performed on the annotation data generated by the terminal device, the updated first neural network model, compared with a universal model, produces inference results with a higher confidence level and better meets a user's personalized requirements.
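A minimal sketch of the described update loop, assuming PyTorch, a classification service, and that the first and second models share the same architecture; the annotation_loader and the choice to copy the trained weights wholesale into the first model are assumptions, since the abstract only says the first model is updated based on the trained second model.

```python
import torch
from torch import nn

def update_first_model(first_model: nn.Module,
                       second_model: nn.Module,
                       annotation_loader,   # annotation data of the service,
                                            # generated on the device in the
                                            # specified period
                       epochs: int = 1) -> None:
    """Train the second model on device-generated annotation data, then
    update the first model based on the trained second model."""
    optimizer = torch.optim.SGD(second_model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    second_model.train()
    for _ in range(epochs):
        for features, labels in annotation_loader:
            optimizer.zero_grad()
            loss = loss_fn(second_model(features), labels)
            loss.backward()
            optimizer.step()

    # Simplest possible "update based on the trained second model": copy its
    # weights into the first model. A real system might instead blend,
    # distill, or partially overwrite parameters.
    first_model.load_state_dict(second_model.state_dict())
```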
Abstract:
A tree-topology-based computing system and method. The system may include a plurality of node clusters that constitute a multi-layer network structure in a tree topology, where any minimum tree in the network structure includes a second node cluster and at least one first node cluster. Each first node cluster is configured to obtain a first computing result based on a first computing input and send the first computing result to the second node cluster. The second node cluster is configured to receive the at least one first computing result sent by the at least one first node cluster, and to aggregate the at least one first computing result with a second computing result to obtain a third computing result.
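The data flow inside one minimum tree can be sketched as follows; the cluster interfaces and the use of summation as the aggregation operation are assumptions, since the abstract does not specify how results are aggregated.

```python
from typing import Callable, List

class FirstNodeCluster:
    """Child-side cluster: computes a first result from its computing input."""
    def __init__(self, compute: Callable[[float], float]):
        self.compute = compute

    def first_result(self, computing_input: float) -> float:
        return self.compute(computing_input)


class SecondNodeCluster:
    """Parent cluster: aggregates the first results with its own second result."""
    def __init__(self, second_result: float):
        self.second_result = second_result

    def aggregate(self, first_results: List[float]) -> float:
        # Aggregation is assumed to be a sum; the abstract leaves the
        # actual aggregation operation unspecified.
        return self.second_result + sum(first_results)


# One "minimum tree": several first node clusters feeding one second node cluster.
firsts = [FirstNodeCluster(lambda x: x * x) for _ in range(3)]
second = SecondNodeCluster(second_result=10.0)
third_result = second.aggregate([c.first_result(2.0) for c in firsts])
print(third_result)  # 10.0 + 3 * 4.0 = 22.0
```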
Abstract:
An intelligent driving method causes a vehicle to perform roaming based on a high-definition map, where the roaming can include lane switching, driving direction changing, or partial navigation strategy changing. The vehicle may perform the lane switching, the driving direction changing, or the partial navigation strategy changing based on an operation of a driver.
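One way to picture this behaviour is a small dispatcher that maps a driver operation onto a roaming adjustment while the vehicle otherwise keeps roaming along the high-definition map; the operation names and the RoamingState fields are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class RoamingState:
    lane: int                 # current lane index on the high-definition map
    heading: str              # e.g. "north"
    navigation_strategy: str  # e.g. "prefer_main_roads"

def apply_driver_operation(state: RoamingState, operation: str) -> RoamingState:
    """Adjust roaming (lane, driving direction, or part of the navigation
    strategy) in response to a driver operation; otherwise roaming along
    the high-definition map continues unchanged."""
    if operation == "switch_lane_left":
        state.lane -= 1
    elif operation == "switch_lane_right":
        state.lane += 1
    elif operation.startswith("change_direction:"):
        state.heading = operation.split(":", 1)[1]
    elif operation.startswith("change_strategy:"):
        state.navigation_strategy = operation.split(":", 1)[1]
    return state
```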
Abstract:
An electronic device displays a first user interface; receives a first operation of a user; obtains first content of the first user interface in response to the first operation; displays a second user interface, where the content displayed in the second user interface includes text from the first content and the second user interface covers a part of the display area of the first user interface; reads a first sentence in the content of the second user interface; and displays marking information for the text in the second user interface that corresponds to the first sentence being read. Embodiments of this application are used for text reading.
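The reading-and-marking step can be sketched as follows, assuming the first content has already been extracted as plain text; the sentence splitter and the speak and highlight callbacks are placeholders for the device's actual text-to-speech and UI-marking facilities.

```python
import re
from typing import Callable, List

def split_sentences(text: str) -> List[str]:
    """Naive sentence split on ., !, ? followed by whitespace (illustrative only)."""
    return [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

def read_with_marking(content: str,
                      speak: Callable[[str], None],
                      highlight: Callable[[int, int], None]) -> None:
    """Read the content sentence by sentence, marking the text in the
    second user interface that corresponds to the sentence being read."""
    offset = 0
    for sentence in split_sentences(content):
        start = content.index(sentence, offset)
        end = start + len(sentence)
        highlight(start, end)   # display marking information for this sentence
        speak(sentence)         # the first sentence is read first, then the rest
        offset = end
```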