-
Publication No.: US20220156560A1
Publication Date: 2022-05-19
Application No.: US16951848
Filing Date: 2020-11-18
Applicant: Micron Technology, Inc.
Inventor: Saideep Tiku , Poorna Kale
Abstract: Apparatuses and methods can be related to compiling instructions for implementing an artificial neural network (ANN) bypass. The bypass path can be used to bypass a portion of the ANN such that the ANN generates an output with a particular level of confidence while utilizing fewer resources than if the portion of the ANN had not been bypassed. A compiler can determine where to place the bypass path in an ANN.
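The bypass described here resembles an early-exit scheme: a lightweight head after an early layer produces a prediction, and the deeper layers run only when that prediction is not confident enough. A minimal sketch follows; all layer shapes, weights, and function names are hypothetical illustrations, not the patented implementation.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def early_layer(x):
    # Hypothetical first layer: three fixed weights for illustration.
    return [math.tanh(sum(x) * w) for w in (0.3, -0.7, 1.1)]

def early_head(h):
    # Sharpened logits so the early exit can reach high confidence.
    return softmax([5.0 * v for v in h])

def deep_layers(h):
    # Stand-in for the remaining (bypassable) portion of the ANN.
    return softmax([0.5 * v for v in h])

def infer(x, confidence_threshold=0.9):
    h = early_layer(x)
    probs = early_head(h)
    if max(probs) >= confidence_threshold:
        return probs, "bypassed"   # deeper layers skipped, saving compute
    return deep_layers(h), "full"  # low confidence: run the full network

probs, path = infer([1.0, 2.0, 0.5])
```

A compiler, as the abstract suggests, would decide at which layer such an exit head pays off given the accuracy and resource targets.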
-
Publication No.: US20220101119A1
Publication Date: 2022-03-31
Application No.: US17039243
Filing Date: 2020-09-30
Applicant: Micron Technology, Inc.
Inventor: Saideep Tiku , Poorna Kale
Abstract: Apparatuses and methods can be related to implementing age-based network training. An artificial neural network (ANN) can be trained by introducing errors into the ANN. The errors and the quantity of errors introduced into the ANN can be based on age-based characteristics of the memory device.
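One way to read this is as fault-aware training: bit errors are injected into stored weights at a rate tied to the device's age, so the ANN learns to tolerate the faults an aged memory would introduce. The sketch below is a hypothetical model; the linear wear curve, rates, and function names are illustrative assumptions only.

```python
import random

def age_based_error_rate(age_hours, base_rate=1e-4, rate_per_khour=5e-5):
    # Hypothetical wear model: raw bit-error rate grows linearly with age.
    return base_rate + rate_per_khour * (age_hours / 1000.0)

def inject_bit_errors(weight_bits, age_hours, seed=0):
    """Flip bits of a stored-weight bitstream with an age-dependent
    probability, emulating faults from an aged memory device."""
    rng = random.Random(seed)
    p = age_based_error_rate(age_hours)
    return [bit ^ 1 if rng.random() < p else bit for bit in weight_bits]

bits = [0, 1] * 1000
noisy = inject_bit_errors(bits, age_hours=20000)
```

Training against `noisy` rather than `bits` would expose the network to the error distribution it will face in deployment on an aging device.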
-
Publication No.: US20220036157A1
Publication Date: 2022-02-03
Application No.: US16942341
Filing Date: 2020-07-29
Applicant: Micron Technology, Inc.
Inventor: Poorna Kale , Amit Gattani
Abstract: Methods, systems, and apparatus related to dynamic distribution of an artificial neural network among multiple processing nodes based on real-time monitoring of a processing load on each node. In one approach, a server acts as an intelligent artificial intelligence (AI) gateway. The server receives data regarding a respective operating status for each of monitored processing devices. The monitored processing devices perform processing for an artificial neural network (ANN). The monitored processing devices each perform processing for a portion of the neurons in the ANN. The portions are distributed in response to monitoring the processing load on each processing device (e.g., to better utilize processing power across all of the processing devices).
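Distributing neurons in response to monitored load can be sketched as a simple proportional partition: each node receives a share of neurons proportional to its spare capacity. This is an illustrative heuristic, not the patented method; the load metric and function name are assumptions.

```python
def partition_neurons(num_neurons, node_loads):
    """Assign neurons to nodes in inverse proportion to each node's load.

    node_loads: current utilization in [0, 1) per monitored node
    (hypothetical metric reported to the AI gateway).
    """
    capacity = [1.0 - load for load in node_loads]
    total = sum(capacity)
    shares = [int(num_neurons * c / total) for c in capacity]
    # Give any rounding remainder to the least-loaded node.
    shares[capacity.index(max(capacity))] += num_neurons - sum(shares)
    return shares

print(partition_neurons(1000, [0.2, 0.5, 0.8]))  # -> [534, 333, 133]
```

A gateway server would recompute such shares as the reported operating status of each processing device changes, shifting neurons away from busy nodes.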
-
Publication No.: US20210400187A1
Publication Date: 2021-12-23
Application No.: US16906224
Filing Date: 2020-06-19
Applicant: Micron Technology, Inc.
Inventor: Poorna Kale
Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, a digital camera may be configured to execute instructions with matrix operands and configured with: a housing; a lens; an image sensor positioned behind the lens to generate image data of a field of view of the digital camera; random access memory to store instructions executable by the Deep Learning Accelerator and store matrices of an Artificial Neural Network; a transceiver; and a controller configured to generate, and communicate using the transceiver to a separate computer, a description of an item or event in the field of view captured in the image data, based on an output of the Artificial Neural Network receiving the image data as an input. The separate computer may selectively request a portion of image data from the digital camera based on the processing of the description.
-
Publication No.: US20210398542A1
Publication Date: 2021-12-23
Application No.: US17460122
Filing Date: 2021-08-27
Applicant: Micron Technology, Inc.
Inventor: Poorna Kale
IPC: G10L17/18 , H04R1/04 , G06N3/08 , G10L17/04 , G06F17/16 , H04R17/02 , G06N3/10 , G10L25/30 , G10L25/63 , H04R3/00 , G10L17/00
Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, a microphone may be configured to execute instructions with matrix operands and configured with: a transducer to convert sound waves to electrical signals; an analog to digital converter to generate audio data according to the electrical signals; random access memory to store instructions executable by the Deep Learning Accelerator and store matrices of an Artificial Neural Network; and a controller to store the audio data in the random access memory as an input to the Artificial Neural Network. The Deep Learning Accelerator can execute the instructions to generate an output of the Artificial Neural Network, which may be provided as the primary output of the microphone to a computer system, such as a voice-based digital assistant.
-
Publication No.: US20210320967A1
Publication Date: 2021-10-14
Application No.: US16845007
Filing Date: 2020-04-09
Applicant: Micron Technology, Inc.
Inventor: Poorna Kale , Jaime Cummins
Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. An edge server may be implemented using an integrated circuit device having: a Deep Learning Accelerator configured to execute instructions with matrix operands; random access memory configured to store first instructions of an Artificial Neural Network executable by the Deep Learning Accelerator and second instructions of a server application executable by a Central Processing Unit; and an interface to a communication device on a computer network. The Central Processing Unit may be part of the integrated circuit device, or be connected to the integrated circuit device. The server application may be configured to provide services over the computer network based on output of the Artificial Neural Network and input received from one or more local devices via a bus, or a wired or wireless local area network.
-
Publication No.: US20210319219A1
Publication Date: 2021-10-14
Application No.: US16843787
Filing Date: 2020-04-08
Applicant: Micron Technology, Inc.
Inventor: Poorna Kale
Abstract: Methods, devices, and computer-readable media for generating color-neutral representations of driving objects are disclosed. In one embodiment, a method is disclosed comprising capturing an image, the image including an object of interest; identifying the object of interest in the image based on identifying one or more colors in the image; associating the object of interest with a known traffic object; identifying a color-neutral representation of the known traffic object; and displaying the color-neutral representation to a user.
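The final steps of the claimed method map a recognized traffic object to a representation that does not rely on color perception, e.g. a distinctive shape. A minimal sketch, with a purely hypothetical mapping table:

```python
# Hypothetical mapping from a detected traffic-signal color to a
# shape-based, color-neutral representation shown to the driver.
COLOR_NEUTRAL = {
    "red": "octagon (stop)",
    "yellow": "triangle (caution)",
    "green": "circle (go)",
}

def color_neutral_representation(detected_color):
    """Return the color-neutral symbol for a recognized signal color."""
    return COLOR_NEUTRAL.get(detected_color.lower(), "unknown")
```

The upstream steps (capturing the image, isolating the object by its colors, matching it to a known traffic object) would supply `detected_color` here.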
-
Publication No.: US11112994B2
Publication Date: 2021-09-07
Application No.: US16703091
Filing Date: 2019-12-04
Applicant: Micron Technology, Inc.
Inventor: Poorna Kale
Abstract: A system including a memory device with microbumps is disclosed. A non-volatile memory device stores data for a machine learning operation. The non-volatile memory device comprises a set of microbumps. The set of microbumps are to transmit the data for the machine learning operation from the non-volatile memory device to another set of microbumps of a machine learning processing device that performs the machine learning operation.
-
Publication No.: US20210271446A1
Publication Date: 2021-09-02
Application No.: US17321351
Filing Date: 2021-05-14
Applicant: Micron Technology, Inc.
Inventor: Robert Richard Noel Bielby , Poorna Kale
Abstract: Systems, methods, and apparatus to control delivery of audio content into a vehicle cabin where occupants are seated. For example, a vehicle includes: at least one microphone configured to generate signals representing audio content presented in the cabin of the vehicle; an infotainment system having access to multiple sources of audio content; and an artificial neural network configured to receive input parameters relevant to audio control in the vehicle and generate, based on the input parameters as a function of time, predictions of audio patterns. The input parameters can include data representing the signals from the at least one microphone, data from the infotainment system, and/or at least one operating parameter of the vehicle. The vehicle is configured to adjust a setting of the infotainment system based at least in part on the predictions generated by the artificial neural network.
-
Publication No.: US20210064237A1
Publication Date: 2021-03-04
Application No.: US16557200
Filing Date: 2019-08-30
Applicant: Micron Technology, Inc.
Inventor: Poorna Kale , Ashok Sahoo
IPC: G06F3/06
Abstract: One or more usage parameter values are received from a host system. The one or more parameter values correspond to one or more operations performed at the memory sub-system. Based on the one or more usage parameter values, a first expected time period is determined during which a first set of subsequent host data will be received from the host system and a second expected time period is determined during which a second set of subsequent host data will be received from the host system. A media management operation is scheduled to be performed between the first expected time period and the second expected time period.
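The scheduling idea is to place a background media-management operation (e.g. garbage collection or wear leveling) inside the predicted idle gap between two bursts of host writes. A hypothetical sketch, with all parameter names and time units invented for illustration:

```python
def schedule_media_management(window1_end, window2_start, op_duration):
    """Pick a start time for a media-management operation in the idle
    gap between two expected periods of incoming host data.

    Times are in arbitrary units. Returns None when the gap is too
    short to fit the operation without colliding with host traffic.
    """
    gap = window2_start - window1_end
    if gap < op_duration:
        return None
    # Center the operation in the gap to leave guard margins on both sides.
    return window1_end + (gap - op_duration) / 2

start = schedule_media_management(window1_end=100, window2_start=200,
                                  op_duration=40)  # -> 130.0
```

The expected windows themselves would be derived from the usage parameter values the host reports, as the abstract describes.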
-