-
Publication No.: US20230306272A1
Publication Date: 2023-09-28
Application No.: US18185031
Filing Date: 2023-03-16
Applicant: Micron Technology, Inc.
Inventor: Andre Xian Ming Chang , Abhishek Chaurasia , Parth Khopkar , Bashar Romanous , Patrick Alan Estep , Skyler Arron Windh , Eugenio Culurciello , Sheik Dawood Beer Mohideen
Abstract: An artificial neural network is trained via reinforcement learning to receive first data representative of execution dependency conditions of instructions of a program, second data representative of a schedule of a first portion of the instructions of the program for execution in a device having a plurality of circuit units operable in parallel, and third data identifying a next instruction selected from a second portion of the instructions of the program remaining to be scheduled for execution in the device. The artificial neural network selects a placement of the next instruction in one of the circuit units from a plurality of possible placements of the next instruction in the device. The performance of instruction placements tested during the search for a valid schedule for running the program in the device can be measured to generate samples for training the artificial neural network via reinforcement learning.
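The abstract describes a policy that places one instruction at a time onto parallel circuit units, given the dependency graph and the partial schedule built so far. A minimal illustrative sketch of that loop follows; the dependency representation, the `score` heuristic (a stand-in for the trained policy network), and all names are hypothetical, not taken from the patent:

```python
# Illustrative sketch: greedy placement of instructions onto parallel
# circuit units. score() is a hypothetical stand-in for the learned
# policy; the patent trains a neural network for this role instead.

def ready_instructions(deps, scheduled):
    """Instructions whose dependencies are all already scheduled."""
    return [i for i in deps
            if i not in scheduled and all(d in scheduled for d in deps[i])]

def score(unit_loads, unit):
    """Stand-in for the policy network: prefer the least-loaded unit."""
    return -unit_loads[unit]

def schedule(deps, num_units):
    """Place each instruction on one of num_units parallel circuit units."""
    unit_loads = [0] * num_units
    placement = {}           # instruction -> chosen unit index
    scheduled = set()
    while len(scheduled) < len(deps):
        for instr in ready_instructions(deps, scheduled):
            unit = max(range(num_units), key=lambda u: score(unit_loads, u))
            placement[instr] = unit
            unit_loads[unit] += 1
            scheduled.add(instr)
    return placement

# Example: c depends on a and b; d depends on c.
deps = {"a": [], "b": [], "c": ["a", "b"], "d": ["c"]}
print(schedule(deps, num_units=2))   # each instruction mapped to a unit
```

In the patented scheme, measured performance of such trial placements would supply the reward signal that updates the policy, rather than the fixed load-balancing heuristic used here.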
-
Publication No.: US20240257531A1
Publication Date: 2024-08-01
Application No.: US18420489
Filing Date: 2024-01-23
Applicant: Micron Technology, Inc.
Inventor: Parth Khopkar , Shakti Nagnath Wadekar , Abhishek Chaurasia , Andre Xian Ming Chang
Abstract: Methods, systems, and devices for techniques to implement transformers with multi-task neural networks are described. A vehicle system may employ one or more transformer models in a machine learning system to generate an indication of one or more objects in an image, one or more drivable areas in the image, one or more lane lines in the image, or a combination thereof. The multi-task system may include a feature extractor that uses a set of convolutional layers to generate a corresponding set of representation vectors of the image. The system may pass the representation vectors to a set of transformer models, such that all of the transformer models share a common input. Each transformer model may use the representation vectors to generate a respective indication.
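The layout described above — one shared feature extractor whose output is consumed by several task-specific models — can be sketched as follows. The trivial `feature_extractor` and `make_head` functions here are hypothetical stand-ins for the convolutional layers and transformer models; only the dataflow (extract once, fan out to every head) reflects the abstract:

```python
# Illustrative sketch of the multi-task dataflow: the representation
# vectors are computed once and passed unchanged to every task head,
# and each head emits its own indication. All names are hypothetical.

def feature_extractor(image):
    """Stand-in for the convolutional layers: one representation
    value per row of the image."""
    return [sum(row) / len(row) for row in image]

def make_head(task_name, threshold):
    """Stand-in for one transformer model: flags rows whose
    representation exceeds a task-specific threshold."""
    def head(vectors):
        return {task_name: [i for i, v in enumerate(vectors) if v > threshold]}
    return head

heads = [make_head("objects", 0.5),
         make_head("drivable_area", 0.3),
         make_head("lane_lines", 0.7)]

image = [[0.9, 0.8], [0.2, 0.1], [0.6, 0.7]]
vectors = feature_extractor(image)       # computed once
indications = {}
for head in heads:
    indications.update(head(vectors))    # every head shares the same input
print(indications)
```

Sharing one extractor amortizes the most expensive computation across all tasks, which is the design motivation the abstract's "common input" phrasing suggests.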
-