Instant auto-focus with distance estimation

    Publication No.: US11902658B2

    Publication Date: 2024-02-13

    Application No.: US17007347

    Application Date: 2020-08-31

    CPC classification number: H04N23/67 G01B11/026 G01B11/26

    Abstract: Systems, apparatuses, and methods for implementing an instant auto-focus mechanism with distance estimation are disclosed. A camera includes at least an image sensor, one or more movement and/or orientation sensors, a timer, a lens, and a control circuit. The control circuit receives first and second images of a given scene captured by the image sensor. Using the movement and/or orientation sensors and the timer, the control circuit calculates the distance between the first and second camera locations at which the first and second images, respectively, were captured. Next, the control circuit estimates a second distance between the camera and an object in the scene based on the distance between the camera locations and the angles between the camera and the object at the first and second locations. Then, the control circuit causes the lens to be adjusted to bring the object into focus for subsequent images.
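
    The distance estimate follows from triangulation: the baseline traveled between the two capture positions and the bearing angles to the object from each position fix the object's distance. Below is a minimal sketch of that geometry in Python, assuming the baseline (from the motion sensors and timer) and the two angles are already available; the function names and values are illustrative, not taken from the patent.

```python
import math

def baseline_distance(speed_m_s: float, elapsed_s: float) -> float:
    # Distance traveled between the two capture positions, derived here from a
    # sensor-reported speed and the timer (a simplification of the patent's
    # movement/orientation sensors plus timer).
    return speed_m_s * elapsed_s

def object_distance(baseline_m: float, angle_a_rad: float, angle_b_rad: float) -> float:
    # Triangulate the perpendicular distance from the baseline to the object,
    # given the bearing angles to the object measured at the two camera positions.
    return (baseline_m * math.sin(angle_a_rad) * math.sin(angle_b_rad)
            / math.sin(angle_a_rad + angle_b_rad))

# Example: camera moved 0.10 m between shots; object seen at 80 and 85 degrees.
b = baseline_distance(speed_m_s=0.05, elapsed_s=2.0)            # 0.10 m
d = object_distance(b, math.radians(80.0), math.radians(85.0))  # ~0.38 m
print(f"estimated object distance: {d:.2f} m")  # used to drive the lens focus
```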

    Dynamic hardware selection for experts in mixture-of-experts model

    Publication No.: US11893502B2

    Publication Date: 2024-02-06

    Application No.: US15849633

    Application Date: 2017-12-20

    CPC classification number: G06N5/022 G06N20/00 G06F7/02

    Abstract: A system assigns experts of a mixture-of-experts artificial intelligence model to processing devices in an automated manner. The system includes an orchestrator component that maintains priority data storing, for each of a set of experts and for each of a set of execution parameters, ranking information that ranks different processing devices for that execution parameter. In one example, for the execution parameter of execution speed and for a first expert, the priority data indicates that a central processing unit ("CPU") executes the first expert faster than a graphics processing unit ("GPU"). In this example, for the execution parameter of power consumption and for the first expert, the priority data indicates that a GPU uses less power than a CPU. The priority data stores such information for one or more processing devices, one or more experts, and one or more execution parameters.
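
    A minimal sketch of how such priority data might be consulted follows; the table contents, device names, and function are illustrative assumptions, not the patent's actual data or API.

```python
# Hypothetical priority data: for each (expert, execution parameter), processing
# devices ranked best-first. Contents are illustrative only.
priority_data = {
    ("expert_0", "execution_speed"):   ["CPU", "GPU"],  # CPU runs expert_0 faster
    ("expert_0", "power_consumption"): ["GPU", "CPU"],  # GPU uses less power
    ("expert_1", "execution_speed"):   ["GPU", "CPU"],
    ("expert_1", "power_consumption"): ["GPU", "CPU"],
}

def assign_expert(expert: str, objective: str, available: set) -> str:
    # Pick the highest-ranked device for the chosen objective that is available.
    for device in priority_data[(expert, objective)]:
        if device in available:
            return device
    raise RuntimeError("no ranked device is available")

print(assign_expert("expert_0", "power_consumption", {"CPU", "GPU"}))  # -> GPU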

    Cross FET SRAM cell layout
    Invention Publication

    Publication No.: US20240032270A1

    Publication Date: 2024-01-25

    Application No.: US18480463

    Application Date: 2023-10-03

    CPC classification number: H10B10/12 G11C7/1045 H01L29/42392

    Abstract: A system and method for efficiently creating layouts for memory bit cells are described. In various implementations, a memory bit cell uses Cross field-effect transistors (FETs), which include vertically stacked gate-all-around (GAA) transistors whose conducting channels are oriented orthogonally to one another. The channels of the vertically stacked transistors use opposite doping polarities. The memory bit cell routes one of a read bit line and a write word line in no metal layer other than a local interconnect layer. In addition, the six-transistor (6T) random-access data storage of the memory bit cell consumes a planar area above the silicon substrate equal to that of four transistors.
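
    A small arithmetic sketch of the area claim follows: with complementary GAA devices stacked vertically, each stacked pair occupies a single transistor's planar footprint, so six transistors can fit in the area of four. The function and figures are illustrative assumptions, not taken from the patent.

```python
def cell_footprint(num_transistors: int, transistor_area_um2: float,
                   stacked_pairs: int) -> float:
    # Planar footprint of a bit cell when some complementary device pairs are
    # stacked vertically; each stacked pair counts as one planar transistor.
    planar_devices = num_transistors - stacked_pairs
    return planar_devices * transistor_area_um2

# A 6T cell with two vertically stacked n/p pairs occupies the footprint of
# four planar transistors (area value is a placeholder, not a real dimension).
print(cell_footprint(num_transistors=6, transistor_area_um2=0.01, stacked_pairs=2))
```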

    Cache access measurement deskew
    Invention Grant

    Publication No.: US11880310B2

    Publication Date: 2024-01-23

    Application No.: US17553044

    Application Date: 2021-12-16

    CPC classification number: G06F12/12 G06F2212/601

    Abstract: A processor includes a cache having two or more test regions and a larger non-test region. The processor further includes a cache controller that applies different cache replacement policies to the different test regions of the cache, and a performance monitor that measures performance metrics, such as the cache hit rate, for each test region. Based on the performance metrics, the cache controller selects a cache replacement policy for the non-test region, such as the replacement policy associated with the test region having the better performance metrics. The processor deskews the memory access measurements in response to the difference in the number of accesses to the different test regions exceeding a threshold.
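
    A minimal sketch of the policy-selection and deskew idea follows, assuming per-test-region hit and access counts are available; the normalization used for deskewing (comparing hit rates rather than raw hit counts) and all names are illustrative assumptions, not the patent's exact method.

```python
from dataclasses import dataclass

@dataclass
class TestRegionStats:
    policy: str
    hits: int = 0
    accesses: int = 0

def select_policy(region_a: TestRegionStats, region_b: TestRegionStats,
                  skew_threshold: int = 1024) -> str:
    # Choose the replacement policy for the non-test region from the test
    # regions' measurements, deskewing when access counts diverge too far.
    if abs(region_a.accesses - region_b.accesses) > skew_threshold:
        # Deskew: one region saw far more traffic, so compare hit rates
        # instead of raw hit counts (one plausible normalization).
        score_a = region_a.hits / max(region_a.accesses, 1)
        score_b = region_b.hits / max(region_b.accesses, 1)
    else:
        score_a, score_b = region_a.hits, region_b.hits
    return region_a.policy if score_a >= score_b else region_b.policy

lru  = TestRegionStats("LRU",  hits=900,  accesses=1000)
rrip = TestRegionStats("RRIP", hits=2400, accesses=4000)
print(select_policy(lru, rrip))  # access counts differ, so hit rates decide -> "LRU"
```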

    Management of thrashing in a GPU
    Invention Grant

    Publication No.: US11875197B2

    Publication Date: 2024-01-16

    Application No.: US17136738

    Application Date: 2020-12-29

    CPC classification number: G06F9/52 G06F9/30141 G06F9/3836 G06T1/20

    Abstract: Systems, apparatuses, and methods for managing the number of wavefronts permitted to execute concurrently in a processing system are disclosed. An apparatus includes a register file with a plurality of registers and a plurality of compute units configured to execute wavefronts. A control unit of the apparatus is configured to allow a first number of wavefronts to execute concurrently on the plurality of compute units. In response to detecting that thrashing of the register file exceeds a threshold, the control unit allows no more than a second number of wavefronts, less than the first number, to execute concurrently on the plurality of compute units. The control unit detects said thrashing based at least in part on the number of registers in use by executing wavefronts that are spilled to memory.
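
    A minimal sketch of the throttling decision follows; the limits, threshold, and function are illustrative assumptions, not values from the patent.

```python
def allowed_wavefronts(spilled_registers: int,
                       max_wavefronts: int = 40,
                       reduced_wavefronts: int = 24,
                       spill_threshold: int = 256) -> int:
    # Return how many wavefronts may run concurrently: drop from the first
    # (larger) limit to the second (smaller) limit when the number of registers
    # spilled to memory by executing wavefronts exceeds a threshold.
    if spilled_registers > spill_threshold:
        return reduced_wavefronts  # thrashing detected: admit fewer wavefronts
    return max_wavefronts

print(allowed_wavefronts(spilled_registers=512))  # -> 24 (throttled)
print(allowed_wavefronts(spilled_registers=10))   # -> 40
```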
