-
Publication No.: US20240427717A1
Publication Date: 2024-12-26
Application No.: US18819007
Filing Date: 2024-08-29
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Mihir Narendra MODY , Ankur ANKUR , Vivek Vilas DHANDE , Kedar Satish CHITNIS , Niraj NANDAN , Brijesh JADAV , Shyam JAGANNATHAN , Prithvi Shankar YEYYADI ANANTHA , Santhanakrishnan Narayanan NARAYANAN
Abstract: Systems and methods in which trace data is efficiently managed are provided. An example system includes a memory, a first interface, and a processing resource communicably coupled to the first interface and to the memory. The processing resource includes a buffer, and a first controller to transmit a set of data from the buffer with associated trace information for the set of data to the memory. A second controller transmits the set of data with the associated trace information from the memory to a second interface.
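The two-controller flow this abstract describes can be sketched in Python. Everything below (class names, trace-record fields, list-backed "memory" and "interface") is an illustrative model of the described data path, not the patent's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class TraceRecord:
    # Hypothetical trace metadata kept alongside each data block
    source_id: int
    timestamp: int

@dataclass
class TraceManagedSystem:
    memory: list = field(default_factory=list)            # the memory
    second_interface: list = field(default_factory=list)  # e.g. a debug port

    def first_controller(self, buffer, trace):
        # Transmit each block from the processing resource's buffer,
        # together with its associated trace information, to the memory.
        for block, record in zip(buffer, trace):
            self.memory.append((block, record))
        buffer.clear()

    def second_controller(self):
        # Transmit the data with its trace information from the
        # memory to the second interface.
        while self.memory:
            self.second_interface.append(self.memory.pop(0))

system = TraceManagedSystem()
buffer = [b"frame0", b"frame1"]
trace = [TraceRecord(source_id=0, timestamp=100),
         TraceRecord(source_id=0, timestamp=101)]
system.first_controller(buffer, trace)
system.second_controller()
```

The key property modeled here is that trace information stays paired with its data block across both hops, so the consumer at the second interface can correlate each block with its origin.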
-
Publication No.: US20240320045A1
Publication Date: 2024-09-26
Application No.: US18675294
Filing Date: 2024-05-28
Applicant: Texas Instruments Incorporated
Inventor: Mihir Narendra MODY , Kedar Satish CHITNIS , Kumar DESAPPAN , David SMITH , Pramod Kumar SWAMI , Shyam JAGANNATHAN
CPC classification number: G06F9/5016 , G06F9/5077 , G06F12/00 , G06F12/0223 , G06F2009/45583 , G06F9/50 , G06F9/5022 , G06N3/02 , G06N3/10 , G06N20/00
Abstract: Techniques for executing machine learning (ML) models, including: receiving an indication to run an ML model on a processing core; receiving a static memory allocation for running the ML model on the processing core; determining that a layer of the ML model uses more memory than the static memory allocated; transmitting, to a shared memory, a memory request for blocks of the shared memory; receiving an allocation of a range of memory addresses for the requested blocks; running the layer of the ML model using the static memory and the allocated range of memory addresses; and outputting results of running the layer of the ML model.
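The steps in this abstract amount to spilling an oversized layer into a shared pool and releasing the blocks afterward. A minimal sketch, assuming a block-granular shared-memory arbiter (all names, sizes, and the string result are illustrative):

```python
class SharedMemoryPool:
    """Hypothetical shared-memory arbiter handing out fixed-size blocks."""
    def __init__(self, num_blocks, block_size):
        self.free = list(range(num_blocks))
        self.block_size = block_size

    def request(self, nbytes):
        # Grant enough blocks to cover the request, or None if exhausted.
        needed = -(-nbytes // self.block_size)  # ceiling division
        if needed > len(self.free):
            return None
        grant, self.free = self.free[:needed], self.free[needed:]
        return grant

    def release(self, blocks):
        self.free.extend(blocks)

def run_layer(layer_mem, static_mem, pool):
    # Layers that fit run from the static allocation alone;
    # oversized layers borrow blocks from the shared pool.
    extra = None
    if layer_mem > static_mem:
        extra = pool.request(layer_mem - static_mem)
        if extra is None:
            raise MemoryError("shared pool exhausted")
    try:
        return f"ran layer with {len(extra or [])} extra blocks"
    finally:
        if extra:
            pool.release(extra)  # return borrowed blocks after the layer

pool = SharedMemoryPool(num_blocks=8, block_size=1024)
print(run_layer(layer_mem=3000, static_mem=1024, pool=pool))
# -> ran layer with 2 extra blocks
```

Releasing in a `finally` block mirrors the point of the scheme: the shared blocks are held only for the duration of the oversized layer, so other cores can use them between layers.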
-
Publication No.: US20230267084A1
Publication Date: 2023-08-24
Application No.: US17677638
Filing Date: 2022-02-22
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Mihir Narendra MODY , Ankur ANKUR , Vivek Vilas DHANDE , Kedar Satish CHITNIS , Niraj NANDAN , Brijesh JADAV , Shyam JAGANNATHAN , Prithvi Shankar YEYYADI ANANTHA , Santhanakrishnan Narayanan NARAYANAN
CPC classification number: G06F13/28 , G06F13/1673 , G06F13/4221 , G06F15/7807 , G06F9/4881
Abstract: A system-on-chip (SoC) in which trace data is managed includes a first memory device, a first interface to couple the first memory device to a second memory external to the system-on-chip, and a first processing resource coupled to the first interface and the first memory device. The first processing resource includes a data buffer and a first direct memory access (DMA) controller. The first DMA controller transmits data from the data buffer to the first interface over a first channel, and transmits the data from the data buffer with associated trace information for the data to the first memory device over a second channel.
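The distinguishing detail here is a single DMA controller with two channels: raw data goes out over channel 1, while the same data paired with trace information goes to on-chip memory over channel 2. A small model of that mirroring (class name, channel representation, and payloads are assumptions for illustration):

```python
class DmaController:
    """Hypothetical dual-channel DMA model: channel 1 carries data only
    to the first interface; channel 2 mirrors the same data with its
    trace information to the first memory device."""
    def __init__(self):
        self.channel1 = []   # -> first interface (toward external memory)
        self.channel2 = []   # -> first memory device (data + trace)

    def transmit(self, data_buffer, trace_info):
        for block, trace in zip(data_buffer, trace_info):
            self.channel1.append(block)            # data only
            self.channel2.append((block, trace))   # data with trace info
        data_buffer.clear()

dma = DmaController()
data_buf = [b"a", b"b"]
dma.transmit(data_buf, ["t0", "t1"])
```

Because both channels drain the same buffer in one pass, the functional data path and the trace path stay in lockstep without a separate trace-capture step.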
-
Publication No.: US20230013998A1
Publication Date: 2023-01-19
Application No.: US17378841
Filing Date: 2021-07-19
Applicant: Texas Instruments Incorporated
Inventor: Mihir Narendra MODY , Kedar Satish CHITNIS , Kumar DESAPPAN , David SMITH , Pramod Kumar SWAMI , Shyam JAGANNATHAN
Abstract: Techniques for executing machine learning (ML) models, including: receiving an indication to run an ML model on a processing core; receiving a static memory allocation for running the ML model on the processing core; determining that a layer of the ML model uses more memory than the static memory allocated; transmitting, to a shared memory, a memory request for blocks of the shared memory; receiving an allocation of a range of memory addresses for the requested blocks; running the layer of the ML model using the static memory and the allocated range of memory addresses; and outputting results of running the layer of the ML model.
-
Publication No.: US20220391776A1
Publication Date: 2022-12-08
Application No.: US17342037
Filing Date: 2021-06-08
Applicant: Texas Instruments Incorporated
Inventor: Mihir Narendra MODY , Kumar DESAPPAN , Kedar Satish CHITNIS , Pramod Kumar SWAMI , Kevin Patrick LAVERY , Prithvi Shankar YEYYADI ANANTHA , Shyam JAGANNATHAN
Abstract: Techniques for executing machine learning (ML) models, including: receiving an indication to run an ML model; receiving synchronization information for organizing the running of the ML model with other ML models; determining, based on the synchronization information, to delay running the ML model; delaying the running of the ML model; determining, based on the synchronization information, a time to run the ML model; and running the ML model at the time.
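The delay-then-run behavior in this abstract can be sketched as a priority-queue scheduler. The shape of the synchronization information (a map from model name to earliest allowed start time) is an assumption for illustration, not the patent's format:

```python
import heapq

def schedule(models, sync_info, now=0):
    """Decide start times from synchronization information.

    models: list of (name, duration) pairs.
    sync_info: hypothetical map of name -> earliest allowed start time.
    Returns (name, start_time) pairs in execution order.
    """
    # Each model becomes ready at its synchronization point (or now).
    ready = [(max(now, sync_info.get(name, now)), name, dur)
             for name, dur in models]
    heapq.heapify(ready)
    t, timeline = now, []
    while ready:
        start, name, dur = heapq.heappop(ready)
        t = max(t, start)          # delay if the sync point is later
        timeline.append((name, t))
        t += dur                   # run the model at its decided time
    return timeline

print(schedule([("det", 5), ("seg", 3)], {"seg": 2, "det": 0}))
# -> [('det', 0), ('seg', 5)]
```

In the example, `seg` is delayed past its synchronization point (time 2) because `det` occupies the core until time 5, matching the abstract's two decisions: first to delay, then to pick the actual run time.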