REAL-TIME AND LOW LATENCY PACKETIZATION PROTOCOL FOR LIVE COMPRESSED VIDEO DATA

    Publication No.: US20190182308A1

    Publication Date: 2019-06-13

    Application No.: US15834400

    Application Date: 2017-12-07

    Abstract: Systems, apparatuses, and methods for implementing real-time, low-latency packetization protocols for live compressed video data are disclosed. A wireless transmitter includes at least a codec and a media access control (MAC) layer unit. In order for the codec to communicate with the MAC layer unit, the codec encodes the compression ratio in a header embedded inside the encoded video stream. The MAC layer unit extracts the compression ratio from the header and determines a modulation coding scheme (MCS) for transferring the video stream based on the compression ratio. The MAC layer unit and the codec also implement a feedback loop such that the MAC layer unit can command the codec to adjust the compression ratio. Since the changes to the video might not be implemented immediately, the MAC layer unit relies on the header to determine when the video data is coming in with the requested compression ratio.
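The codec-to-MAC handshake described above can be sketched as follows. This is a minimal illustrative model, not the patent's actual header format: the one-byte fixed-point encoding of the compression ratio, the MCS lookup table, and the matching tolerance are all assumptions.

```python
def build_header(compression_ratio: float) -> bytes:
    """Codec side: embed the compression ratio in a header carried
    inside the encoded video stream (hypothetical one-byte,
    fixed-point encoding: ratio * 10)."""
    return bytes([int(compression_ratio * 10) & 0xFF])

def extract_ratio(header: bytes) -> float:
    """MAC side: recover the compression ratio from the header."""
    return header[0] / 10.0

def select_mcs(compression_ratio: float) -> int:
    """MAC side: choose a modulation coding scheme index from the
    compression ratio. Higher compression means a lower bitrate,
    so a more robust (lower) MCS suffices. Thresholds are
    illustrative, not from the patent."""
    if compression_ratio >= 8.0:
        return 2   # robust, low-rate MCS
    if compression_ratio >= 4.0:
        return 5
    return 7       # high-rate MCS for lightly compressed video

def mac_receive(header: bytes, requested_ratio: float):
    """Feedback loop: the MAC compares the ratio reported in the
    header against the ratio it commanded. Since the codec may not
    apply the change immediately, the MAC treats the request as in
    effect only once the header confirms it."""
    actual = extract_ratio(header)
    mcs = select_mcs(actual)
    request_applied = abs(actual - requested_ratio) < 0.05
    return mcs, request_applied
```

The key point the sketch captures is that the MAC never assumes its request took effect; it trusts only the ratio the header reports.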

    Land Pad Design for High Speed Terminals
    Invention Application

    Publication No.: US20190181087A1

    Publication Date: 2019-06-13

    Application No.: US15836239

    Application Date: 2017-12-08

    Abstract: An integrated circuit assembly includes an integrated circuit package substrate and a conductive land pad disposed on a surface of the integrated circuit package substrate. The conductive land pad comprises a conductor portion, an isolated conductor portion, and an isolation portion disposed between the conductor portion and the isolated conductor portion. The isolated conductor portion may surround a first side of the conductor portion and a second side of the conductor portion. The isolated conductor portion may surround a portion of a perimeter of the conductor portion. The isolation portion may include a gap between the conductor portion and the isolated conductor portion. The gap may have a width smaller than a radius of an interconnect structure of a receiving structure.

    CACHE TO CACHE DATA TRANSFER ACCELERATION TECHNIQUES

    Publication No.: US20190179758A1

    Publication Date: 2019-06-13

    Application No.: US15839662

    Application Date: 2017-12-12

    Abstract: Systems, apparatuses, and methods for accelerating cache to cache data transfers are disclosed. A system includes at least a plurality of processing nodes and prediction units, an interconnect fabric, and a memory. A first prediction unit is configured to receive memory requests generated by a first processing node as the requests traverse the interconnect fabric on the path to memory. When the first prediction unit receives a memory request, the first prediction unit generates a prediction of whether data targeted by the request is cached by another processing node. The first prediction unit is configured to cause a speculative probe to be sent to a second processing node responsive to predicting that the data targeted by the memory request is cached by the second processing node. The speculative probe accelerates the retrieval of the data from the second processing node if the prediction is correct.
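The prediction mechanism above can be modeled as a small owner table consulted as requests pass through the fabric. This is an illustrative sketch; the table organization, training policy, and class name are assumptions, not the patent's implementation.

```python
class PredictionUnit:
    """Observes memory requests traversing the interconnect fabric
    and predicts which other processing node, if any, caches the
    requested line."""

    def __init__(self):
        # Hypothetical structure: cache-line address -> node
        # believed to hold the line.
        self.owner_table = {}
        self.probes_sent = []

    def record_fill(self, addr: int, node: int):
        """Train the predictor when a node is observed caching a line."""
        self.owner_table[addr] = node

    def on_memory_request(self, addr: int, requester: int):
        """If the line is predicted to live in another node's cache,
        launch a speculative probe to that node in parallel with the
        normal path to memory; a correct prediction lets the data
        return sooner than waiting on memory."""
        owner = self.owner_table.get(addr)
        if owner is not None and owner != requester:
            self.probes_sent.append((addr, owner))
            return owner   # target of the speculative probe
        return None        # no prediction; fall back to memory
```

On a misprediction the speculative probe is simply wasted work; the normal memory request still completes, so correctness does not depend on the predictor.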

    DIFFERENTIAL PIPELINE DELAYS IN A COPROCESSOR
    Invention Application

    Publication No.: US20190179643A1

    Publication Date: 2019-06-13

    Application No.: US15837974

    Application Date: 2017-12-11

    Abstract: A coprocessor such as a floating-point unit includes a pipeline that is partitioned into a first portion and a second portion. A controller is configured to provide control signals to the first portion and the second portion of the pipeline. A first physical distance traversed by control signals propagating from the controller to the first portion of the pipeline is shorter than a second physical distance traversed by control signals propagating from the controller to the second portion of the pipeline. A scheduler is configured to cause a physical register file to provide a first subset of bits of an instruction to the first portion at a first time. The physical register file provides a second subset of the bits of the instruction to the second portion at a second time subsequent to the first time.
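The staggered operand delivery above can be sketched as a simple timing model: the pipeline portion nearer the controller gets its bit subset first, and the farther portion gets its subset a cycle later to match the longer control-signal path. The split point, skew value, and function name are illustrative assumptions.

```python
def schedule_operand_delivery(bits: str, split: int, skew: int = 1):
    """Return (subset, delivery_cycle) pairs: the first subset of
    instruction bits goes to the near pipeline portion at cycle 0;
    the second subset reaches the far portion 'skew' cycles later,
    mirroring the longer physical distance its control signals
    must traverse."""
    near = (bits[:split], 0)      # first portion, first time
    far = (bits[split:], skew)    # second portion, subsequent time
    return [near, far]
```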

    System and method for energy reduction based on history of reliability of a system

    Publication No.: US10318363B2

    Publication Date: 2019-06-11

    Application No.: US15338172

    Application Date: 2016-10-28

    Abstract: A system and method for managing operating parameters within a system for optimal power and reliability are described. A device includes a functional unit and a corresponding reliability evaluator. The functional unit provides reliability information to one or more reliability monitors, which translate the information to reliability values. The reliability evaluator determines an overall reliability level for the system based on the reliability values. The reliability monitor compares the actual usage values with the expected usage values. When the system has maintained a relatively high level of reliability for a given time interval, the reliability evaluator sends an indication to update operating parameters to reduce reliability of the system, which also reduces power consumption for the system.
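The history-based decision above can be sketched as a sliding window over recent reliability values. The window length, threshold, and class name are illustrative assumptions, not the patent's parameters.

```python
from collections import deque

class ReliabilityEvaluator:
    """Tracks recent reliability values and signals when operating
    parameters can be relaxed (e.g. a voltage margin reduced) to
    save power."""

    def __init__(self, window: int = 8, threshold: float = 0.9):
        self.history = deque(maxlen=window)  # recent reliability values
        self.threshold = threshold

    def report(self, reliability: float) -> bool:
        """Record a reliability value from a monitor. Return True
        only when the system has stayed highly reliable over the
        whole window, i.e. the history is full and every value in
        it meets the threshold."""
        self.history.append(reliability)
        full = len(self.history) == self.history.maxlen
        return full and min(self.history) >= self.threshold
```

A single low value anywhere in the window withholds the relaxation signal until a full interval of high reliability has accumulated again.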

    STREAM PROCESSOR WITH LOW POWER PARALLEL MATRIX MULTIPLY PIPELINE

    Publication No.: US20190171448A1

    Publication Date: 2019-06-06

    Application No.: US15855637

    Application Date: 2017-12-27

    Abstract: Systems, apparatuses, and methods for implementing a low power parallel matrix multiply pipeline are disclosed. In one embodiment, a system includes at least first and second vector register files coupled to a matrix multiply pipeline. The matrix multiply pipeline comprises a plurality of dot product units. The dot product units are configured to calculate dot or outer products for first and second sets of operands retrieved from the first vector register file. The results of the dot or outer product operations are written back to the second vector register file. The second vector register file provides the results from the previous dot or outer product operations as inputs to subsequent dot or outer product operations. The dot product units receive the results from previous phases of the matrix multiply operation and accumulate these previous dot or outer product results with the current dot or outer product results.
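The accumulation dataflow above can be modeled in plain Python: operands come from one source, partial results live in a second store (standing in for the second vector register file) and feed back into the next phase. Matrix sizes and the phase-per-inner-index structure are illustrative assumptions.

```python
def matmul_pipeline(A, B):
    """Multiply matrices phase by phase. Each phase handles one
    inner index k; the dot product units accumulate the previous
    partial results (read back from the result register file,
    modeled here by C) with the current products."""
    rows, inner, cols = len(A), len(B), len(B[0])
    # Second register file: holds accumulated partial results.
    C = [[0] * cols for _ in range(rows)]
    for k in range(inner):            # one "phase" per inner index
        for i in range(rows):
            for j in range(cols):
                # previous partial result + current product
                C[i][j] += A[i][k] * B[k][j]
    return C
```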

    Secure system memory training
    Invention Grant

    Publication No.: US10311236B2

    Publication Date: 2019-06-04

    Application No.: US15358640

    Application Date: 2016-11-22

    Abstract: Systems, apparatuses, and methods for performing secure system memory training are disclosed. In one embodiment, a system includes a boot media, a security processor with a first memory, a system memory, and one or more main processors coupled to the system memory. The security processor is configured to retrieve first data from the boot media and store and authenticate the first data in the first memory. The first data includes a first set of instructions which are executable to retrieve, from the boot media, a configuration block with system memory training parameters. The security processor also executes a second set of instructions to initialize and train the system memory using the training parameters. After training the system memory, the security processor retrieves, authenticates, and stores boot code in the system memory and releases the one or more main processors from reset to execute the boot code.
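The security-processor boot sequence above can be sketched at a high level. The function names, the dictionary-shaped boot media, and the use of a bare SHA-256 digest to stand in for authentication are all illustrative assumptions; a real implementation would use signed images.

```python
import hashlib

def authenticate(blob: bytes, expected_digest: str) -> bool:
    """Stand-in for image authentication: compare a SHA-256 digest."""
    return hashlib.sha256(blob).hexdigest() == expected_digest

def secure_boot(boot_media: dict, digests: dict) -> dict:
    """Security-processor flow: load and authenticate the training
    firmware, read the configuration block holding the memory
    training parameters, train system memory, then stage
    authenticated boot code so the main processors can be released
    from reset to execute it."""
    fw = boot_media["training_fw"]
    if not authenticate(fw, digests["training_fw"]):
        raise RuntimeError("training firmware failed authentication")
    config = boot_media["config_block"]      # training parameters
    trained = {"params": config}             # stand-in for trained DRAM
    boot_code = boot_media["boot_code"]
    if not authenticate(boot_code, digests["boot_code"]):
        raise RuntimeError("boot code failed authentication")
    trained["boot_code"] = boot_code
    return trained  # main processors released from reset after this
```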

    Memory including side-car arrays with irregular sized entries

    Publication No.: US10311191B2

    Publication Date: 2019-06-04

    Application No.: US15416731

    Application Date: 2017-01-26

    Abstract: A system and method for floorplanning a memory. A computing system includes a processing unit which generates memory access requests and a memory. The size of each memory line in the memory includes M bits. A memory macro block includes at least a primary array and a sidecar array. The primary array stores a first portion of a memory line and the sidecar array stores a second, smaller portion of the memory line being accessed. The primary array and the sidecar array have different heights. The height of the sidecar array is based on a notch height in at least one corner of the memory macro block. The notch creates on-die space for a reserved area on the die. The notches result in cross-shaped, T-shaped, and/or L-shaped memory macro blocks.
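The bit partitioning above can be illustrated with a toy model that splits an M-bit line between the two arrays and reassembles it on access. The split point and function names are illustrative assumptions; the patent's actual split is driven by the floorplan, not chosen per line.

```python
def split_line(line_bits: str, sidecar_width: int):
    """Store the first (larger) portion of an M-bit memory line in
    the primary array and the remaining, smaller portion in the
    sidecar array."""
    assert 0 < sidecar_width < len(line_bits)
    primary = line_bits[:-sidecar_width]
    sidecar = line_bits[-sidecar_width:]
    return primary, sidecar

def read_line(primary: str, sidecar: str) -> str:
    """Reassemble the full memory line from both arrays on access."""
    return primary + sidecar
```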

    Method and apparatus for providing clock signals for a scan chain

    Publication No.: US10310015B2

    Publication Date: 2019-06-04

    Application No.: US13946083

    Application Date: 2013-07-19

    Abstract: An integrated circuit device includes a plurality of flip flops configured into a scan chain. The plurality of flip flops includes at least one flip flop of a first type and at least one flip flop of a second type. A method includes generating a first scan clock signal for loading scan data into the at least one flip flop of the first type, generating a second scan clock signal and a third scan clock signal for loading the scan data into the at least one flip flop of the second type, and loading a test pattern into the scan chain defined by the at least one flip flop of the first type and the at least one flip flop of the second type responsive to the first, second, and third scan clock signals.
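Behaviorally, the scan load above amounts to a shift register whose stages are clocked by different scan clock signals depending on flip-flop type. The sketch below is an illustrative model only; the clock names and the mapping of clocks to types follow the abstract, but the shift mechanics are an assumption.

```python
def scan_clocks_for(ff_type: str):
    """Scan clocks that load a flip flop of the given type:
    type-A flops use SCLK1; type-B flops use SCLK2 and SCLK3."""
    return ("SCLK1",) if ff_type == "A" else ("SCLK2", "SCLK3")

def load_test_pattern(types, pattern):
    """Shift a test pattern into the scan chain. Each shift cycle
    pulses the union of scan clocks needed by the flop types
    present, so every stage captures the value from the stage
    before it; the first stage takes the incoming pattern bit."""
    clocks = set()
    for t in types:
        clocks.update(scan_clocks_for(t))
    state = [0] * len(types)
    for bit in pattern:
        for i in reversed(range(1, len(state))):
            state[i] = state[i - 1]
        state[0] = bit
    return state, sorted(clocks)
```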
