-
Publication Number: US20210255869A1
Publication Date: 2021-08-19
Application Number: US17306350
Application Date: 2021-05-03
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Jayasree Sankaranarayanan , Dipan Kumar Mandal
Abstract: This disclosure is directed to the problem of parallelizing random read access within a reasonably sized block of data for a vector SIMD processor. The invention sets up plural parallel look up tables, moves data from main memory into each parallel look up table, and then employs a look up table read instruction to simultaneously move data from each parallel look up table to a corresponding part of a vector destination register. This enables data processing by vector single instruction multiple data (SIMD) operations. This vector destination register load can be repeated if the tables store additional data to be used. New data can be loaded into the original tables if appropriate. A level one memory is preferably partitioned as part data cache and part directly addressable memory. The look up tables are stored in the directly addressable memory.
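Illustration (not part of the patent record): the abstract above describes loading copies of a data block into parallel look up tables held in directly addressable level-one memory, then reading one element from each table into a lane of a vector destination register with a single look up table read. The scalar C sketch below models that gather; the table count, table size, and all names are assumptions chosen for the example, not details from the patent.

```c
/* Scalar C model of the parallel look-up-table gather described above.
   Sizes and names are illustrative assumptions. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_TABLES 8     /* one table per assumed SIMD lane */
#define TABLE_SIZE 256   /* entries per table */

/* "Directly addressable" level-one memory holding the parallel tables. */
static uint8_t l1_tables[NUM_TABLES][TABLE_SIZE];

/* Copy one block of main memory into every parallel table. */
static void load_tables(const uint8_t *main_mem, size_t len)
{
    for (int t = 0; t < NUM_TABLES; t++)
        memcpy(l1_tables[t], main_mem, len);
}

/* Model of a single look-up-table read: each lane reads its own table
   with its own index and lands in that lane of the destination "vector". */
static void lut_read(const uint8_t idx[NUM_TABLES], uint8_t dst[NUM_TABLES])
{
    for (int lane = 0; lane < NUM_TABLES; lane++)
        dst[lane] = l1_tables[lane][idx[lane]];
}

int main(void)
{
    uint8_t main_mem[TABLE_SIZE];
    for (int i = 0; i < TABLE_SIZE; i++)
        main_mem[i] = (uint8_t)(255 - i);          /* sample data */

    load_tables(main_mem, sizeof main_mem);

    uint8_t idx[NUM_TABLES] = { 0, 3, 7, 15, 31, 63, 127, 255 };
    uint8_t dst[NUM_TABLES];
    lut_read(idx, dst);                            /* one parallel gather */

    for (int lane = 0; lane < NUM_TABLES; lane++)
        printf("lane %d: table[%3u] = %u\n",
               lane, (unsigned)idx[lane], (unsigned)dst[lane]);
    return 0;
}
```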
-
Publication Number: US10996955B2
Publication Date: 2021-05-04
Application Number: US16451330
Application Date: 2019-06-25
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Jayasree Sankaranarayanan , Dipan Kumar Mandal
Abstract: This disclosure is directed to the problem of parallelizing random read access within a reasonably sized block of data for a vector SIMD processor. The invention sets up plural parallel look up tables, moves data from main memory into each parallel look up table, and then employs a look up table read instruction to simultaneously move data from each parallel look up table to a corresponding part of a vector destination register. This enables data processing by vector single instruction multiple data (SIMD) operations. This vector destination register load can be repeated if the tables store additional data to be used. New data can be loaded into the original tables if appropriate. A level one memory is preferably partitioned as part data cache and part directly addressable memory. The look up tables are stored in the directly addressable memory.
-
Publication Number: US10547859B2
Publication Date: 2020-01-28
Application Number: US15653561
Application Date: 2017-07-19
Applicant: Texas Instruments Incorporated
Inventor: Hetul Sanghvi , Mihir Narendra Mody , Niraj Nandan , Mahesh Madhukar Mehendale , Subrangshu Das , Dipan Kumar Mandal , Nainala Vyagrheswarudu , Vijayavardhan Baireddy , Pavan Venkata Shastry
Abstract: A video hardware engine which supports dynamic frame padding is disclosed. The video hardware engine includes an external memory. The external memory stores a reference frame. The reference frame includes a plurality of reference pixels. A motion estimation (ME) engine receives a current LCU (largest coding unit) and defines a search area around the current LCU for motion estimation. The ME engine receives a set of reference pixels corresponding to the current LCU. The set of reference pixels of the plurality of reference pixels is received from the external memory. The ME engine pads a set of duplicate pixels along an edge of the reference frame when a part of the search area is outside the reference frame.
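Illustration (not part of the patent record): the padding behavior described above can be modeled by clamping search-window coordinates to the frame edge, so positions outside the reference frame duplicate the nearest edge pixel. The C sketch below assumes this clamp-to-edge rule and illustrative dimensions; it is a sketch of the idea, not the patented engine's implementation.

```c
/* Minimal sketch of clamp-to-edge padding for a motion-estimation
   search window; dimensions and the clamping rule are assumptions. */
#include <stdint.h>
#include <stdio.h>

static int clampi(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* Fill a search window anchored at (x0, y0); coordinates outside the
   reference frame are replaced by the nearest edge pixel, i.e. edge
   pixels are duplicated into the out-of-frame region. */
static void fill_search_window(const uint8_t *ref, int ref_w, int ref_h,
                               int x0, int y0, int win_w, int win_h,
                               uint8_t *win)
{
    for (int y = 0; y < win_h; y++) {
        for (int x = 0; x < win_w; x++) {
            int sx = clampi(x0 + x, 0, ref_w - 1);
            int sy = clampi(y0 + y, 0, ref_h - 1);
            win[y * win_w + x] = ref[sy * ref_w + sx];
        }
    }
}

int main(void)
{
    enum { W = 8, H = 8, WIN = 4 };
    uint8_t ref[W * H];
    for (int i = 0; i < W * H; i++)
        ref[i] = (uint8_t)i;

    uint8_t win[WIN * WIN];
    /* A window starting at (-2, -2) partly falls outside the frame. */
    fill_search_window(ref, W, H, -2, -2, WIN, WIN, win);

    for (int y = 0; y < WIN; y++) {
        for (int x = 0; x < WIN; x++)
            printf("%3u ", (unsigned)win[y * WIN + x]);
        printf("\n");
    }
    return 0;
}
```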
-
Publication Number: US10331347B2
Publication Date: 2019-06-25
Application Number: US15991653
Application Date: 2018-05-29
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Jayasree Sankaranarayanan , Dipan Kumar Mandal
Abstract: This disclosure is directed to the problem of parallelizing random read access within a reasonably sized block of data for a vector SIMD processor. The invention sets up plural parallel look up tables, moves data from main memory into each parallel look up table, and then employs a look up table read instruction to simultaneously move data from each parallel look up table to a corresponding part of a vector destination register. This enables data processing by vector single instruction multiple data (SIMD) operations. This vector destination register load can be repeated if the tables store additional data to be used. New data can be loaded into the original tables if appropriate. A level one memory is preferably partitioned as part data cache and part directly addressable memory. The look up tables are stored in the directly addressable memory.
-
Publication Number: US09681150B2
Publication Date: 2017-06-13
Application Number: US14737904
Application Date: 2015-06-12
Applicant: TEXAS INSTRUMENTS INCORPORATED
Abstract: An image processing system includes a processor and optical flow determination logic. The optical flow determination logic is to quantify relative motion of a feature present in a first frame of video and a second frame of video with respect to the two frames of video. The optical flow determination logic configures the processor to convert each of the frames of video into a hierarchical image pyramid. The image pyramid comprises a plurality of image levels. Image resolution is reduced at each higher image level. For each image level and for each pixel in the first frame, the processor is configured to establish an initial estimate of a location of the pixel in the second frame and to apply a plurality of sequential searches, starting from the initial estimate, that establish refined estimates of the location of the pixel in the second frame.
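Illustration (not part of the patent record): one building block of the hierarchical pyramid described above is reducing resolution from one level to the next. The C sketch below downsamples by averaging 2×2 blocks; the actual filter used by the patented logic is not specified here, so the averaging kernel is an assumption for illustration only.

```c
/* Illustrative sketch: build one pyramid level by 2x2 averaging. */
#include <stdint.h>
#include <stdio.h>

/* Downsample src (w x h) by 2 in each dimension into dst (w/2 x h/2). */
static void pyramid_down(const uint8_t *src, int w, int h, uint8_t *dst)
{
    int ow = w / 2, oh = h / 2;
    for (int y = 0; y < oh; y++) {
        for (int x = 0; x < ow; x++) {
            int sum = src[(2 * y)     * w + 2 * x]
                    + src[(2 * y)     * w + 2 * x + 1]
                    + src[(2 * y + 1) * w + 2 * x]
                    + src[(2 * y + 1) * w + 2 * x + 1];
            dst[y * ow + x] = (uint8_t)((sum + 2) / 4);  /* rounded mean */
        }
    }
}

int main(void)
{
    enum { W = 4, H = 4 };
    uint8_t src[W * H] = {
        10, 20, 30, 40,
        10, 20, 30, 40,
        50, 60, 70, 80,
        50, 60, 70, 80,
    };
    uint8_t dst[(W / 2) * (H / 2)];
    pyramid_down(src, W, H, dst);     /* next, coarser pyramid level */

    for (int i = 0; i < (W / 2) * (H / 2); i++)
        printf("%u ", (unsigned)dst[i]);
    printf("\n");
    return 0;
}
```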
-
Publication Number: US09652686B2
Publication Date: 2017-05-16
Application Number: US15345523
Application Date: 2016-11-08
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Jayasree Sankaranarayanan , Dipan Kumar Mandal , Prashanth R Viswanath
CPC classification number: G06K9/4638 , G06K9/00986 , G06K9/4604 , G06K9/481 , G06K9/6211
Abstract: This invention enables effective corner detection in an image using the FAST algorithm on a vector SIMD processor. This invention loads an 8×8 pixel block that includes four 7×7 pixel blocks, covering the 16 peripheral pixels to be tested for each of four center pixels. This invention rearranges the 64 pixels of the 8×8 block to form a 16 element array for each center pixel, preferably using a vector permutation instruction. This invention uses vector SIMD subtraction and compare and vector SIMD addition and compare to make the FAST algorithm comparisons. The N consecutive pixel determinations of the FAST algorithm are made from the results of plural shift and AND operations. The corresponding center pixel is marked as a corner or not a corner depending upon the results of plural shift and AND operations.
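Illustration (not part of the patent record): the "N consecutive pixels" test mentioned above can be modeled as a 16-bit ring mask of circle pixels that pass the brightness comparison, checked for a run of N set bits with rotate-and-AND steps. The C sketch below shows only that contiguity test; the 8×8 block load and vector permutation steps from the abstract are not modeled, and the mask values are made up for the example.

```c
/* Scalar sketch of the FAST contiguity check via rotate-and-AND. */
#include <stdint.h>
#include <stdio.h>

/* Rotate a 16-bit mask left by s positions (the circle wraps around). */
static uint16_t rotl16(uint16_t m, int s)
{
    return (uint16_t)((m << s) | (m >> (16 - s)));
}

/* Return nonzero if the 16-bit ring mask contains at least n consecutive
   set bits: AND-ing the mask with rotated copies of itself leaves a bit
   set only where a run of the required length ends. */
static int has_n_consecutive(uint16_t mask, int n)
{
    uint16_t acc = mask;
    for (int s = 1; s < n; s++)
        acc &= rotl16(mask, s);
    return acc != 0;
}

int main(void)
{
    /* Example: 9 of the 16 circle pixels exceed center + threshold. */
    uint16_t brighter = 0x01FF;            /* bits 0..8 set */
    printf("corner (N=9):  %s\n", has_n_consecutive(brighter, 9)  ? "yes" : "no");
    printf("corner (N=12): %s\n", has_n_consecutive(brighter, 12) ? "yes" : "no");
    return 0;
}
```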
-
Publication Number: US20150296212A1
Publication Date: 2015-10-15
Application Number: US14684334
Application Date: 2015-04-11
Applicant: Texas Instruments Incorporated
Inventor: Dipan Kumar Mandal , Mihir Narendra Mody , Mahesh Madhukar Mehendale , Chaitanya Satish Ghone , Piyali Goswami , Naresh Kumar Yadav , Hetul Sanghvi , Niraj Nandan
IPC: H04N19/42 , G06F9/30 , H04N19/463 , G06F9/38
CPC classification number: H04N19/42 , G06F9/30181 , G06F9/3885 , H04N19/43 , H04N19/463
Abstract: A control processor for a video encode-decode engine is provided that includes an instruction pipeline. The instruction pipeline includes an instruction fetch stage coupled to an instruction memory to fetch instructions, an instruction decoding stage coupled to the instruction fetch stage to receive the fetched instructions, and an execution stage coupled to the instruction decoding stage to receive and execute decoded instructions. The instruction decoding stage and the instruction execution stage are configured to decode and execute a set of instructions in an instruction set of the control processor that are designed specifically for accelerating video sequence encoding and encoded video bit stream decoding.
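Illustration (not part of the patent record): the fetch, decode, and execute stages named above can be sketched as a tiny interpreter loop. The C sketch below uses hypothetical placeholder opcodes, not the video-acceleration instruction set described in the patent.

```c
/* Minimal interpreter modeling fetch / decode / execute stages.
   Opcodes are hypothetical placeholders. */
#include <stdio.h>

typedef enum { OP_NOP, OP_LOADI, OP_ADD, OP_HALT } opcode_t;
typedef struct { opcode_t op; int rd, a, b; } insn_t;

int main(void)
{
    /* Instruction memory: r0 = 5; r1 = 7; r2 = r0 + r1; halt. */
    const insn_t imem[] = {
        { OP_LOADI, 0, 5, 0 },   /* rd <- immediate a        */
        { OP_LOADI, 1, 7, 0 },
        { OP_ADD,   2, 0, 1 },   /* rd <- regs[a] + regs[b]  */
        { OP_HALT,  0, 0, 0 },
    };
    int regs[4] = { 0 };
    int pc = 0;

    for (;;) {
        insn_t fetched = imem[pc++];        /* fetch stage   */
        opcode_t op = fetched.op;           /* decode stage  */
        switch (op) {                       /* execute stage */
        case OP_LOADI: regs[fetched.rd] = fetched.a;                  break;
        case OP_ADD:   regs[fetched.rd] = regs[fetched.a]
                                        + regs[fetched.b];            break;
        case OP_NOP:   break;
        case OP_HALT:  printf("r2 = %d\n", regs[2]); return 0;
        }
    }
}
```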
-
Publication Number: US12056491B2
Publication Date: 2024-08-06
Application Number: US18321037
Application Date: 2023-05-22
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Jayasree Sankaranarayanan , Dipan Kumar Mandal
CPC classification number: G06F9/383 , G06F9/30036 , G06F9/3004 , G06F9/30043
Abstract: This disclosure is directed to the problem of parallelizing random read access within a reasonably sized block of data for a vector SIMD processor. The invention sets up plural parallel look up tables, moves data from main memory into each parallel look up table, and then employs a look up table read instruction to simultaneously move data from each parallel look up table to a corresponding part of a vector destination register. This enables data processing by vector single instruction multiple data (SIMD) operations. This vector destination register load can be repeated if the tables store additional data to be used. New data can be loaded into the original tables if appropriate. A level one memory is preferably partitioned as part data cache and part directly addressable memory. The look up tables are stored in the directly addressable memory.
-
Publication Number: US11445207B2
Publication Date: 2022-09-13
Application Number: US16714837
Application Date: 2019-12-16
Applicant: Texas Instruments Incorporated
Inventor: Hetul Sanghvi , Mihir Narendra Mody , Niraj Nandan , Mahesh Madhukar Mehendale , Subrangshu Das , Dipan Kumar Mandal , Nainala Vyagrheswarudu , Vijayavardhan Baireddy , Pavan Venkata Shastry
Abstract: A video hardware engine which supports dynamic frame padding is disclosed. The video hardware engine includes an external memory. The external memory stores a reference frame. The reference frame includes a plurality of reference pixels. A motion estimation (ME) engine receives a current LCU (largest coding unit) and defines a search area around the current LCU for motion estimation. The ME engine receives a set of reference pixels corresponding to the current LCU. The set of reference pixels of the plurality of reference pixels is received from the external memory. The ME engine pads a set of duplicate pixels along an edge of the reference frame when a part of the search area is outside the reference frame.
-
Publication Number: US10395381B2
Publication Date: 2019-08-27
Application Number: US16291405
Application Date: 2019-03-04
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Jayasree Sankaranarayanan , Dipan Kumar Mandal
Abstract: Disclosed techniques relate to forming a block sum of picture elements by employing a vector dot product instruction to sum packed picture elements with a mask, producing a vector of masked horizontal picture element sums. The block sum is formed from plural horizontal sums via vector single instruction multiple data (SIMD) addition.
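Illustration (not part of the patent record): the masked horizontal sum described above can be modeled as a dot product of a packed pixel row with a 0/1 selection mask, with the block sum formed by adding the per-row results. The C sketch below uses an assumed vector width and window size; it is a scalar model of the idea, not the patented instruction sequence.

```c
/* Scalar model: masked horizontal row sums, then a vertical add to
   form a block sum.  Vector width and window size are assumptions. */
#include <stdint.h>
#include <stdio.h>

#define VEC_LEN 8   /* assumed SIMD vector width in pixels */

/* Dot product of one packed pixel row with a 0/1 selection mask. */
static int masked_row_sum(const uint8_t row[VEC_LEN],
                          const uint8_t mask[VEC_LEN])
{
    int sum = 0;
    for (int i = 0; i < VEC_LEN; i++)
        sum += row[i] * mask[i];
    return sum;
}

int main(void)
{
    enum { H = 4, BLOCK = 3 };
    uint8_t img[H][VEC_LEN] = {
        { 1, 2, 3, 4, 5, 6, 7, 8 },
        { 1, 2, 3, 4, 5, 6, 7, 8 },
        { 1, 2, 3, 4, 5, 6, 7, 8 },
        { 1, 2, 3, 4, 5, 6, 7, 8 },
    };

    /* Mask selecting a BLOCK-wide window starting at column 2. */
    uint8_t mask[VEC_LEN] = { 0, 0, 1, 1, 1, 0, 0, 0 };

    /* Horizontal sum per row, then add BLOCK rows for the block sum. */
    int block_sum = 0;
    for (int r = 0; r < BLOCK; r++)
        block_sum += masked_row_sum(img[r], mask);

    printf("block sum = %d\n", block_sum);   /* (3+4+5) * 3 = 36 */
    return 0;
}
```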