-
Publication No.: US20220391776A1
Publication Date: 2022-12-08
Application No.: US17342037
Filing Date: 2021-06-08
Applicant: Texas Instruments Incorporated
Inventor: Mihir Narendra MODY , Kumar DESAPPAN , Kedar Satish CHITNIS , Pramod Kumar SWAMI , Kevin Patrick LAVERY , Prithvi Shankar YEYYADI ANANTHA , Shyam JAGANNATHAN
Abstract: Techniques for executing machine learning (ML) models include receiving an indication to run an ML model, receiving synchronization information for organizing the running of the ML model with other ML models, determining, based on the synchronization information, to delay running the ML model, delaying the running of the ML model, determining, based on the synchronization information, a time to run the ML model, and running the ML model at that time.
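A minimal sketch of the delay-and-run decision described in this abstract, written in C; the sync_info_t layout, the fixed period/offset slot policy, and the millisecond timebase are illustrative assumptions rather than the claimed implementation.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical synchronization record shared among ML models (assumed layout). */
typedef struct {
    int  frame_period_ms;   /* common period all models align to           */
    int  start_offset_ms;   /* this model's slot offset within the period  */
    bool other_model_busy;  /* indicates a conflicting model is running    */
} sync_info_t;

/* Decide whether to delay, and if so compute the time to run. */
static int schedule_model_run(const sync_info_t *s, int now_ms)
{
    if (!s->other_model_busy)
        return now_ms;                       /* run immediately             */
    /* Delay to the next slot boundary derived from the sync info. */
    int next_period = ((now_ms / s->frame_period_ms) + 1) * s->frame_period_ms;
    return next_period + s->start_offset_ms; /* time to run the model       */
}

int main(void)
{
    sync_info_t s = { .frame_period_ms = 33, .start_offset_ms = 5,
                      .other_model_busy = true };
    int run_at = schedule_model_run(&s, /*now_ms=*/40);
    printf("delay model, run at t=%d ms\n", run_at); /* -> 66 + 5 = 71 */
    return 0;
}
```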
-
Publication No.: US20220058768A1
Publication Date: 2022-02-24
Application No.: US17520861
Filing Date: 2021-11-08
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Niraj NANDAN , Rajasekhar Reddy ALLU , Mihir Narendra MODY
Abstract: A lens distortion correction function operates by backmapping output images to the uncorrected, distorted input images. As a vision image processor completes processing on the image data lines needed for the lens distortion correction function to operate on a group of output, undistorted image lines, the lens distortion correction function begins processing the image data. This reduces image processing pipeline delay by overlapping the operations. The vision image processor provides output image data to a circular buffer in SRAM, rather than providing it to DRAM. The lens distortion correction function operates from the image data in the circular buffer. By operating from the SRAM circular buffer, access to the DRAM for the highly fragmented backmapping image data read operations is eliminated, improving available DRAM bandwidth. By using a circular buffer, less space is needed in the SRAM. The improved memory operations further reduce image processing pipeline delay.
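A minimal software sketch of the SRAM circular line buffer the abstract describes; the buffer depth, line width, and availability check are assumed values for illustration, not the hardware design.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define LINE_WIDTH 1920            /* pixels per line (assumed)                */
#define BUF_LINES  32              /* circular buffer depth in lines (assumed) */

static uint8_t sram_buf[BUF_LINES][LINE_WIDTH];
static int     newest_line = -1;   /* highest input line currently resident    */

/* Producer: vision image processor writes a finished input line. */
static void push_input_line(int line_no, const uint8_t *pixels)
{
    memcpy(sram_buf[line_no % BUF_LINES], pixels, LINE_WIDTH);
    newest_line = line_no;
}

/* Consumer: lens distortion correction backmaps an output pixel to an
 * (input line, input column) pair and reads it from the circular buffer.
 * Returns 0 if the needed line is not yet produced or already fell out
 * of the window, so the caller can wait or reschedule. */
static int read_backmapped_pixel(int in_line, int in_col, uint8_t *out)
{
    if (in_line > newest_line || in_line <= newest_line - BUF_LINES)
        return 0;                                   /* not in the window       */
    *out = sram_buf[in_line % BUF_LINES][in_col];
    return 1;
}

int main(void)
{
    uint8_t line[LINE_WIDTH] = {0}, px;
    for (int l = 0; l < 40; l++)
        push_input_line(l, line);
    /* Line 5 has scrolled out of the 32-line window; line 20 is still held. */
    printf("line 5 available: %d, line 20 available: %d\n",
           read_backmapped_pixel(5, 100, &px), read_backmapped_pixel(20, 100, &px));
    return 0;
}
```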
-
Publication No.: US20210291735A1
Publication Date: 2021-09-23
Application No.: US17340207
Filing Date: 2021-06-07
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Rajat SAGAR , Mihir Narendra MODY , Anthony Joseph LELL , Gregory Raymond SHURTZ
Abstract: A hub receives sensor data streams and distributes them to the various systems that use the sensor data. A demultiplexer (demux) receives the streams, filters out undesired streams, and provides desired streams to the appropriate multiplexer (mux) or muxes in a series of muxes. Each mux combines received streams and provides an output stream to a respective formatter or output block. The formatter or output block is configured based on the destination of the mux output stream, such as an image signal processor, a processor, memory, or external transmission. The output block reformats the received stream to a format appropriate for the recipient and then provides the reformatted stream to that recipient.
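A minimal sketch of the demux-to-mux-to-formatter flow, assuming a simple packet model; the destination names and filtering policy are illustrative, not the actual hub interfaces.

```c
#include <stdio.h>

/* Destinations served by the hub (names are illustrative). */
typedef enum { DEST_ISP, DEST_CPU, DEST_MEMORY, DEST_EXTERNAL, DEST_DROP } dest_t;

typedef struct {
    int    stream_id;
    dest_t dest;         /* which consumer this stream should reach */
    int    payload;      /* stand-in for a data beat                */
} packet_t;

/* Demux: filter out undesired streams, steer the rest toward a mux. */
static dest_t demux(const packet_t *p, const int *enabled_ids, int n)
{
    for (int i = 0; i < n; i++)
        if (p->stream_id == enabled_ids[i])
            return p->dest;
    return DEST_DROP;
}

/* Output block: reformat according to the recipient (placeholder logic). */
static void format_and_send(dest_t d, const packet_t *p)
{
    switch (d) {
    case DEST_ISP:      printf("ISP    <- stream %d\n", p->stream_id); break;
    case DEST_CPU:      printf("CPU    <- stream %d\n", p->stream_id); break;
    case DEST_MEMORY:   printf("memory <- stream %d\n", p->stream_id); break;
    case DEST_EXTERNAL: printf("extern <- stream %d\n", p->stream_id); break;
    default:            /* filtered out */                            break;
    }
}

int main(void)
{
    const int enabled[] = { 1, 3 };
    packet_t pkts[] = { {1, DEST_ISP, 0}, {2, DEST_CPU, 0}, {3, DEST_MEMORY, 0} };
    for (int i = 0; i < 3; i++)
        format_and_send(demux(&pkts[i], enabled, 2), &pkts[i]);
    return 0;
}
```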
-
Publication No.: US20200242048A1
Publication Date: 2020-07-30
Application No.: US16256821
Filing Date: 2019-01-24
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Sriramakrishnan GOVINDARAJAN , Gregory Raymond SHURTZ , Mihir Narendra MODY , Charles Lance FUOCO , Donald E. STEISS , Jonathan Elliot BERGSAGEL , Jason A.T. JONES
IPC: G06F12/1027 , G06F9/455
Abstract: In an example, a device includes a memory and a processor core coupled to the memory via a memory management unit (MMU). The device also includes a system MMU (SMMU) cross-referencing virtual addresses (VAs) with intermediate physical addresses (IPAs) and IPAs with physical addresses (PAs). The device further includes a physical address table (PAT) cross-referencing IPAs with each other and cross-referencing PAs with each other. The device also includes a peripheral virtualization unit (PVU) cross-referencing IPAs with PAs, and a routing circuit coupled to the memory, the SMMU, the PAT, and the PVU. The routing circuit is configured to receive a request comprising an address and an attribute and to route the request through at least one of the SMMU, the PAT, or the PVU based on the address and the attribute.
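A minimal sketch of the attribute-based routing decision, assuming a simple attribute encoding; the policy mapping VA/IPA/PA attributes to the SMMU, PVU, and PAT paths is an illustration, not the claimed routing logic.

```c
#include <stdint.h>
#include <stdio.h>

/* Address-space attribute carried with each request (illustrative encoding). */
typedef enum { ATTR_VA, ATTR_IPA, ATTR_PA } addr_attr_t;

typedef enum { ROUTE_SMMU, ROUTE_PVU, ROUTE_PAT } route_t;

/* Pick a translation path from the address and its attribute, in the spirit
 * of the routing circuit described above (policy here is an assumption). */
static route_t route_request(uint64_t addr, addr_attr_t attr)
{
    (void)addr;                       /* a real router also decodes the address   */
    switch (attr) {
    case ATTR_VA:  return ROUTE_SMMU; /* VA -> IPA -> PA via the SMMU             */
    case ATTR_IPA: return ROUTE_PVU;  /* IPA -> PA via the PVU                    */
    case ATTR_PA:  return ROUTE_PAT;  /* PA -> PA via the physical address table  */
    }
    return ROUTE_PAT;
}

int main(void)
{
    printf("VA->%d IPA->%d PA->%d\n",
           route_request(0x1000, ATTR_VA),
           route_request(0x2000, ATTR_IPA),
           route_request(0x3000, ATTR_PA));
    return 0;
}
```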
-
Publication No.: US20200210205A1
Publication Date: 2020-07-02
Application No.: US16700254
Filing Date: 2019-12-02
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Sriramakrishnan GOVINDARAJAN , Denis Roland BEAUDOIN , Gregory Raymond SHURTZ , Santhanakrishnan Badri NARAYANAN , Mark Adrian BRYANS , Mihir Narendra MODY , Jason A.T. JONES , Jayant THAKUR
IPC: G06F9/4401 , H04L12/721 , H04L12/741 , H04L12/931 , H04L12/823 , G06F13/28
Abstract: An Ethernet switch and a switch microcontroller or CPU are integrated onto a system-on-a-chip (SoC). The Ethernet switch continues operating independently at full speed even when the remainder of the SoC is being reset or is otherwise nonoperational. The Ethernet switch is on a separate power and clock domain from the remainder of the integrated SoC. A warm reset signal is trapped by a control microcontroller (MCU) to allow the switch CPU to isolate the Ethernet switch and save its state. Once the Ethernet switch is isolated and operating independently, the warm reset request is provided to the other entities on the integrated SoC. When the warm reset is completed, the state is restored and the various DMA and flow settings are reestablished in the integrated SoC to allow a return to normal operation.
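A minimal sketch of the trap-and-isolate sequence, with stub functions standing in for the switch CPU's actions; the function names and ordering are assumptions made for illustration.

```c
#include <stdio.h>

/* Stub hooks standing in for the switch CPU's actions (all assumed). */
static void isolate_switch(void)       { puts("switch isolated, running standalone"); }
static void save_switch_state(void)    { puts("switch state saved"); }
static void forward_warm_reset(void)   { puts("warm reset released to rest of SoC"); }
static void restore_switch_state(void) { puts("switch state restored, DMA/flows rebuilt"); }

/* Sequence sketch: the control MCU traps the warm reset request so the
 * Ethernet switch can detach before the rest of the SoC resets. */
static void on_warm_reset_request(void)
{
    isolate_switch();        /* switch keeps forwarding on its own power/clock domain */
    save_switch_state();
    forward_warm_reset();    /* now let the remainder of the SoC reset               */
}

static void on_warm_reset_complete(void)
{
    restore_switch_state();  /* reattach and return to normal operation              */
}

int main(void)
{
    on_warm_reset_request();
    on_warm_reset_complete();
    return 0;
}
```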
-
Publication No.: US20200074287A1
Publication Date: 2020-03-05
Application No.: US16556733
Filing Date: 2019-08-30
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Mihir Narendra MODY , Prithvi Shankar YEYYADI ANANTHA
Abstract: A hardware neural network engine uses checksums of the matrices used to perform the neural network computations. For fault correction, expected checksums are compared with checksums computed from the matrix produced by the matrix operation. The expected checksums are developed from the prior stage of the matrix operations, or from the prior stage combined with the input matrices to a matrix operation. This use of checksums allows the reading of the matrices from memory, the dot products of the matrices, and the accumulation of the matrices to be fault corrected without triplicating the matrix operation hardware or making extensive use of error correcting codes. The nonlinear stage of the neural network computation is performed using triplicated nonlinear computational logic. Fault detection is done in a similar manner, with fewer checksums needed and the correction logic removed as compared to the fault correction operation.
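A minimal sketch of checksum-based correction for a matrix multiply, in the spirit of the abstract (a standard algorithm-based fault tolerance scheme); the matrix size, integer data, and single-fault assumption are illustrative, not the engine's actual datapath.

```c
#include <stdio.h>

#define N 3

/* For C = A*B, the row sums of C must equal A * rowsum(B) and the column
 * sums of C must equal colsum(A) * B.  A single corrupted element of C is
 * located at the intersection of the failing row and column checks and
 * corrected from the checksum difference. */
static void matmul(const int A[N][N], const int B[N][N], int C[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            C[i][j] = 0;
            for (int k = 0; k < N; k++)
                C[i][j] += A[i][k] * B[k][j];
        }
}

static void correct_with_checksums(const int A[N][N], const int B[N][N], int C[N][N])
{
    int bad_r = -1, bad_c = -1, diff = 0;
    for (int i = 0; i < N; i++) {              /* row checks                   */
        int expect = 0, got = 0;
        for (int k = 0; k < N; k++)
            for (int j = 0; j < N; j++)
                expect += A[i][k] * B[k][j];
        for (int j = 0; j < N; j++) got += C[i][j];
        if (expect != got) { bad_r = i; diff = expect - got; }
    }
    for (int j = 0; j < N; j++) {              /* column checks                */
        int expect = 0, got = 0;
        for (int k = 0; k < N; k++)
            for (int i = 0; i < N; i++)
                expect += A[i][k] * B[k][j];
        for (int i = 0; i < N; i++) got += C[i][j];
        if (expect != got) bad_c = j;
    }
    if (bad_r >= 0 && bad_c >= 0)
        C[bad_r][bad_c] += diff;               /* single-fault correction      */
}

int main(void)
{
    int A[N][N] = {{1,2,3},{4,5,6},{7,8,9}}, B[N][N] = {{9,8,7},{6,5,4},{3,2,1}};
    int C[N][N];
    matmul(A, B, C);
    C[1][2] += 40;                             /* inject a fault               */
    correct_with_checksums(A, B, C);
    printf("C[1][2] corrected to %d\n", C[1][2]);   /* prints 54 */
    return 0;
}
```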
-
Publication No.: US20250079967A1
Publication Date: 2025-03-06
Application No.: US18240864
Filing Date: 2023-08-31
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Shailesh GHOTGALKAR , Mihir Narendra MODY , Ashish VANJARI , Aravindhan KARUPPIAH , Mohd FAROOQUI , Biju MG , Daniel WU
Abstract: A circuit includes a microcontroller having a first terminal and a second terminal. The microcontroller is configured to: receive a signal associated with operation of a power converter at the first terminal; adjust a switch control signal at the second terminal responsive to the signal; measure a frequency of the switch control signal; compare the measured frequency to at least one envelope of a set of envelopes to obtain monitoring results; and perform control operations responsive to the monitoring results.
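A minimal sketch of comparing a measured switching frequency against a set of envelopes; the envelope table, load-condition indexing, and threshold values are placeholders, not the claimed circuit behavior.

```c
#include <stdbool.h>
#include <stdio.h>

/* One allowed frequency envelope: [lo_hz, hi_hz] applies while the monitored
 * operating condition is within [cond_lo, cond_hi].  Values are placeholders. */
typedef struct { int cond_lo, cond_hi, lo_hz, hi_hz; } envelope_t;

static const envelope_t envelopes[] = {
    {  0,  49,  90000, 110000 },  /* light load: ~100 kHz expected */
    { 50, 100, 180000, 220000 },  /* heavy load: ~200 kHz expected */
};

/* Compare a measured switch-control frequency against the envelope that
 * matches the present condition; report whether it is within bounds. */
static bool frequency_ok(int condition, int measured_hz)
{
    for (unsigned i = 0; i < sizeof(envelopes) / sizeof(envelopes[0]); i++) {
        const envelope_t *e = &envelopes[i];
        if (condition >= e->cond_lo && condition <= e->cond_hi)
            return measured_hz >= e->lo_hz && measured_hz <= e->hi_hz;
    }
    return false;                 /* no envelope covers this condition */
}

int main(void)
{
    /* An out-of-envelope result would trigger a protective control action. */
    printf("%s\n", frequency_ok(30,  95000) ? "ok" : "fault response");
    printf("%s\n", frequency_ok(70, 120000) ? "ok" : "fault response");
    return 0;
}
```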
-
Publication No.: US20240428365A1
Publication Date: 2024-12-26
Application No.: US18826385
Filing Date: 2024-09-06
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Mihir Narendra MODY , Niraj NANDAN , Ankur ANKUR , Mayank MANGLA , Prithvi Shankar YEYYADI ANANTHA
Abstract: A technique includes receiving an image stream for processing; processing the received image stream in a real-time mode of operation; outputting an indication that an image processing pipeline has begun processing the received image stream; receiving, in response to the indication, first configuration information associated with test data for testing the image processing pipeline; switching the image processing pipeline to a non-real-time mode of operation to process the test data based on the first configuration information during a vertical blanking period of the received image stream; loading the test data from an external memory; switching an input of the image processing pipeline from the image stream to the test data; determining a checksum based on the processed test data; comparing the determined checksum to an expected checksum to determine that the test data was successfully processed; and outputting an indication that the test data was successfully processed.
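A minimal sketch of the vertical-blanking self-test flow; the checksum function, the mode-switch comments, and the golden-value handling are assumptions standing in for the real pipeline hardware.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for the pipeline: process pixels and produce an output checksum. */
static uint32_t process_pixels(const uint8_t *px, int n)
{
    uint32_t csum = 0;
    for (int i = 0; i < n; i++)
        csum = csum * 31u + px[i];
    return csum;
}

/* Run the pipeline on test data during the vertical blanking period and
 * compare against the expected (golden) checksum. */
static bool vblank_self_test(const uint8_t *test_data, int n, uint32_t expected)
{
    /* 1. switch the pipeline to non-real-time mode (not modeled here)       */
    /* 2. load test data from external memory and switch the pipeline input  */
    uint32_t got = process_pixels(test_data, n);
    /* 3. switch back to the live image stream before the next frame         */
    return got == expected;
}

int main(void)
{
    uint8_t test[4] = { 1, 2, 3, 4 };
    uint32_t golden = process_pixels(test, 4);   /* expected checksum */
    printf("self test %s\n", vblank_self_test(test, 4, golden) ? "passed" : "failed");
    return 0;
}
```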
-
Publication No.: US20240354272A1
Publication Date: 2024-10-24
Application No.: US18763195
Filing Date: 2024-07-03
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Kishon Vijay Abraham ISRAEL VIJAYPONRAJ , Sriramakrishnan GOVINDARAJAN , Mihir Narendra MODY
CPC classification number: G06F13/4027 , G06F13/1668 , G06F13/4282 , G06F21/45 , G06F2213/0026
Abstract: Example systems are provided for isolating transactions originating from different types of applications, including systems for providing such isolation on an inbound side and systems for providing such isolation on an outbound side, as well as end-to-end isolation systems. Such systems may be implemented in a peripheral component interconnect express (PCIe) environment. On an outbound side, a first interconnect receives transactions having different attributes indicating different application origins, respectively, and selects different pathways for the transactions based on their respective attributes. On an inbound side, a second interconnect selects different pathways for the transactions based on their respective attributes. Thus, transactions originating from safety and non-safety applications may be kept isolated as they are routed, and a non-safety transaction may be restricted from accessing certain portions of memory.
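A minimal sketch of attribute-based pathway selection with a protected memory window; the attribute encoding, address range, and blocking rule are illustrative assumptions, not the PCIe interconnect implementation.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Transaction attribute indicating the originating application class
 * (encoding is an assumption for illustration). */
typedef enum { ORIGIN_SAFETY, ORIGIN_NON_SAFETY } origin_t;

typedef struct { uint64_t addr; origin_t origin; } txn_t;

/* Protected region that only safety-origin transactions may reach. */
#define SAFE_REGION_BASE 0x80000000ull
#define SAFE_REGION_END  0x8FFFFFFFull

/* Interconnect sketch: pick a pathway per attribute and restrict non-safety
 * transactions from the protected memory window. */
static int route_txn(const txn_t *t)
{
    bool targets_safe = t->addr >= SAFE_REGION_BASE && t->addr <= SAFE_REGION_END;
    if (t->origin == ORIGIN_SAFETY)
        return 0;                               /* safety pathway              */
    if (targets_safe)
        return -1;                              /* blocked: isolation enforced */
    return 1;                                   /* non-safety pathway          */
}

int main(void)
{
    txn_t a = { 0x80001000ull, ORIGIN_SAFETY };
    txn_t b = { 0x80001000ull, ORIGIN_NON_SAFETY };
    printf("safety txn -> path %d, non-safety txn -> path %d\n",
           route_txn(&a), route_txn(&b));
    return 0;
}
```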
-
Publication No.: US20240248839A1
Publication Date: 2024-07-25
Application No.: US18426053
Filing Date: 2024-01-29
Applicant: TEXAS INSTRUMENTS INCORPORATED
IPC: G06F12/02
CPC classification number: G06F12/0246 , G06F2212/7201
Abstract: An Adaptive Memory Mirroring Performance Accelerator (AMMPA) includes a transaction handling block that dynamically maps the most frequently accessed memory regions into faster access memory. The technique creates shadow copies of the most frequently accessed memory regions in the faster access memory, which is associated with lower latency. The regions for which shadow copies are provided are updated dynamically based on use. The technique is flexible for different memory hierarchies.
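A minimal sketch of the dynamic shadow-copy idea; the region count, slot count, promotion threshold, and counters are assumed parameters, and the actual data copy into faster memory is not modeled.

```c
#include <stdio.h>

#define REGIONS      8   /* tracked slow-memory regions (assumed)       */
#define SHADOW_SLOTS 2   /* fast-memory capacity in regions (assumed)   */
#define THRESHOLD    4   /* accesses before a region earns a shadow     */

static unsigned hits[REGIONS];        /* access counters per region      */
static int      shadow_of[REGIONS];   /* fast-memory slot, or -1         */
static int      slot_owner[SHADOW_SLOTS];

/* Record an access and, if the region becomes hot and a slot is free,
 * create a shadow copy in the faster memory (copy itself not modeled). */
static const char *access_region(int r)
{
    hits[r]++;
    if (shadow_of[r] >= 0)
        return "fast";                          /* served from the mirror */
    if (hits[r] >= THRESHOLD) {
        for (int s = 0; s < SHADOW_SLOTS; s++)
            if (slot_owner[s] < 0) {            /* promote into free slot */
                slot_owner[s] = r;
                shadow_of[r]  = s;
                break;
            }
    }
    return "slow";
}

int main(void)
{
    for (int i = 0; i < REGIONS; i++)      shadow_of[i]  = -1;
    for (int s = 0; s < SHADOW_SLOTS; s++) slot_owner[s] = -1;
    for (int i = 0; i < 6; i++)
        printf("region 3 access %d served from %s memory\n",
               i + 1, access_region(3));
    return 0;
}
```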