CONFIGURING PROGRAMMABLE LOGIC REGION VIA PROGRAMMABLE NETWORK

    Publication Number: US20200264901A1

    Publication Date: 2020-08-20

    Application Number: US16276178

    Application Date: 2019-02-14

    Applicant: Xilinx, Inc.

    Abstract: Examples described herein provide for an integrated circuit (IC) having a programmable logic region that is capable of being configured via a programmable network. In an example, an IC includes a programmable logic region, a controller, and a programmable network. The programmable network is connected between the controller and the programmable logic region. The controller is programmed to configure the programmable logic region via the programmable network. In some examples, the programmable logic region can be configured faster, among other benefits.
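
    As a rough illustration of the flow the abstract describes, the following Python sketch models a controller streaming configuration frames to a programmable logic (PL) region over an on-chip programmable network. All class and method names (ProgrammableNetwork, ProgrammableLogicRegion, Controller.configure) are hypothetical; this is a behavioral illustration, not the patented circuitry.

```python
# Minimal behavioral sketch (hypothetical names): a controller pushes
# configuration frames to a PL region over a programmable on-chip network.

class ProgrammableNetwork:
    """Routes write transactions to endpoints registered by address."""
    def __init__(self):
        self.endpoints = {}

    def attach(self, address, endpoint):
        self.endpoints[address] = endpoint

    def write(self, address, payload):
        self.endpoints[address].receive(payload)

class ProgrammableLogicRegion:
    """Accumulates configuration frames delivered over the network."""
    def __init__(self):
        self.config_frames = []

    def receive(self, frame):
        self.config_frames.append(frame)

class Controller:
    """Configures the PL region by streaming frames through the network."""
    def __init__(self, network, pl_address):
        self.network = network
        self.pl_address = pl_address

    def configure(self, bitstream_frames):
        for frame in bitstream_frames:
            self.network.write(self.pl_address, frame)

# Usage: the controller streams three dummy frames to the PL region at 0x10.
noc = ProgrammableNetwork()
pl = ProgrammableLogicRegion()
noc.attach(0x10, pl)
Controller(noc, 0x10).configure([b"frame0", b"frame1", b"frame2"])
assert len(pl.config_frames) == 3
```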

    Method and apparatus for a transceiver system

    Publication Number: US10742254B1

    Publication Date: 2020-08-11

    Application Number: US16056236

    Application Date: 2018-08-06

    Applicant: Xilinx, Inc.

    Inventor: Adrian Lynam

    Abstract: A leakage compensation circuit includes a compensation digital to analog converter (DAC) and an adjustment circuit. The compensation DAC is configured to: receive a first digital signal associated with a transmitter of a transceiver; generate a compensation analog signal using the first digital signal; and provide the compensation analog signal to a receiver of the transceiver. The adjustment circuit is configured to generate the first digital signal by adjusting a second digital signal from the transmitter based on one or more adjustment parameters.
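
    A hedged behavioral sketch of the leakage-compensation idea in the abstract is shown below: an adjustment step derives the first digital signal from the transmitter's digital signal, an idealized compensation DAC converts it to an analog-valued sequence, and that sequence is subtracted at the receiver. The gain/offset parameters and the ideal-DAC model are illustrative assumptions, not the patented circuit.

```python
# Behavioral sketch of transmitter-to-receiver leakage compensation
# (parameters and DAC model are assumptions for illustration only).

def adjustment_circuit(tx_digital, gain, offset):
    """Derive the first digital signal from the transmitter's digital signal."""
    return [int(round(sample * gain + offset)) for sample in tx_digital]

def compensation_dac(digital, full_scale=1.0, bits=8):
    """Ideal DAC: map signed digital codes to analog-valued samples."""
    lsb = full_scale / (2 ** (bits - 1))
    return [code * lsb for code in digital]

def receiver_with_compensation(rx_analog, compensation_analog):
    """Subtract the compensation signal from the received (leaky) signal."""
    return [rx - comp for rx, comp in zip(rx_analog, compensation_analog)]

# Usage: transmitter samples leak into the receiver at ~10% amplitude;
# the compensation path is tuned to largely cancel that leakage.
tx = [100, -50, 25, 0]
leakage = [0.1 * s / 128 for s in tx]     # leaked TX energy at the RX input
rx = [0.02, -0.01, 0.03, 0.0]             # wanted RX signal
rx_with_leak = [r + l for r, l in zip(rx, leakage)]
comp = compensation_dac(adjustment_circuit(tx, gain=0.1, offset=0))
cleaned = receiver_with_compensation(rx_with_leak, comp)
```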

    System and method for successive cancellation list decoding of polar codes

    Publication Number: US10727873B1

    Publication Date: 2020-07-28

    Application Number: US16373434

    Application Date: 2019-04-02

    Applicant: Xilinx, Inc.

    Inventor: Gordon I. Old

    Abstract: A decoder circuit includes an input configured to receive an encoded message, and a decoding loop circuit including first and second memories, an update circuit, and a sort circuit. The decoding loop circuit is configured to perform list decoding on the encoded message by successively decoding a plurality of bits of a first codeword of the encoded message in a plurality of decoding loops, respectively, and to provide, to an output, a decoded message. In each decoding loop, the update circuit is configured to receive, from the first memory, parent path values, and provide, to the second memory, child path values based on the parent path values. The sort circuit is configured to receive, from the second memory, the child path values, and provide, to the first memory, surviving child path values based on the child path values.
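
    The sketch below illustrates one list-decoding loop as the abstract frames it: an "update" step expands each parent path into two child paths (next bit 0 or 1), and a "sort" step keeps the L best survivors by path metric. The path-metric update rule here is a simplified stand-in for illustration, not the patented circuit's arithmetic.

```python
# Toy successive-cancellation-list loop: expand parent paths, then keep
# the L lowest-metric children. Metric rule is a simplified assumption.

import heapq

def update_step(parent_paths, llr):
    """Expand each (bits, metric) parent into two children for the next bit."""
    children = []
    for bits, metric in parent_paths:
        for bit in (0, 1):
            # Penalize the hypothesis that disagrees with the LLR's sign.
            penalty = abs(llr) if (llr < 0) != (bit == 1) else 0.0
            children.append((bits + [bit], metric + penalty))
    return children

def sort_step(child_paths, list_size):
    """Keep the list_size children with the smallest path metrics."""
    return heapq.nsmallest(list_size, child_paths, key=lambda p: p[1])

# Usage: decode 4 bits with list size L = 2 from toy per-bit LLRs.
L = 2
paths = [([], 0.0)]                   # one empty parent path, metric 0
for llr in [1.2, -0.4, 0.9, -2.0]:    # toy soft inputs, one per bit
    paths = sort_step(update_step(paths, llr), L)
best_bits, best_metric = paths[0]
```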

    Operator aware finite state machine for circuit design simulation

    Publication Number: US10726182B1

    Publication Date: 2020-07-28

    Application Number: US16100041

    Application Date: 2018-08-09

    Applicant: Xilinx, Inc.

    Abstract: Disclosed approaches involve simulating a circuit design specified in a hardware description language (HDL). During simulation, a thread is started at an edge of a simulation clock signal for evaluation of states of a finite state machine (FSM) that represent a series of events specified in a statement in the HDL. The thread transitions from one state to a next state in the FSM in response to evaluation of the one state. In response to encountering a fork state in the FSM, the thread is forked into two threads during simulation. The fork state represents a composite operator in the statement, and the FSM has a branch from the fork state for each operand of the composite operator. In response to encountering a join state in the FSM by the two threads, the two threads are joined into one thread.
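
    The following Python sketch illustrates the fork/join idea in the abstract: a simulation thread walks an FSM built from a sequence-of-events statement; a fork state (for a composite operator such as "and") splits the walk into one branch per operand, and the branches merge again at the join state. The state encoding and evaluation rules below are illustrative assumptions, not the disclosed simulator.

```python
# Toy operator-aware FSM walk with fork/join (hypothetical state encoding).

def walk(fsm, state, events):
    """Return True if this thread's walk from `state` matches the events."""
    kind = fsm[state]["kind"]
    if kind == "match":
        # Ordinary state: check a single named event, then advance.
        ok = fsm[state]["event"] in events
        return ok and walk(fsm, fsm[state]["next"], events)
    if kind == "fork":
        # Composite operator: fork one branch per operand, then join.
        results = [walk(fsm, branch, events) for branch in fsm[state]["branches"]]
        joined = all(results) if fsm[state]["op"] == "and" else any(results)
        return joined and walk(fsm, fsm[state]["join"], events)
    return True                       # "accept" state

# Usage: FSM for "a happens, then (b and c), then done".
fsm = {
    "S0":   {"kind": "match", "event": "a", "next": "FORK"},
    "FORK": {"kind": "fork", "op": "and", "branches": ["B1", "B2"], "join": "JOIN"},
    "B1":   {"kind": "match", "event": "b", "next": "ACC"},
    "B2":   {"kind": "match", "event": "c", "next": "ACC"},
    "JOIN": {"kind": "match", "event": "done", "next": "ACC"},
    "ACC":  {"kind": "accept"},
}
assert walk(fsm, "S0", {"a", "b", "c", "done"}) is True
assert walk(fsm, "S0", {"a", "b", "done"}) is False   # missing operand "c"
```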

    Systems for optimization of read-only memory (ROM)

    Publication Number: US10726175B1

    Publication Date: 2020-07-28

    Application Number: US16291952

    Application Date: 2019-03-04

    Applicant: Xilinx, Inc.

    Abstract: A memory optimization method includes identifying, within a circuit design, a memory having an arithmetic operator at an output side and/or an input side of the memory. The memory may include a read-only memory (ROM). In some examples, an input of the arithmetic operator includes a constant value. In some embodiments, the memory optimization method further includes absorbing a function of the arithmetic operator into the memory. By way of example, the absorbing the function includes modifying contents of the memory based on the function of the arithmetic operator to provide an updated memory and removing the arithmetic operator from the circuit design.
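
    A minimal sketch of the "absorb the operator into the memory" step follows: a ROM followed (or preceded) by an arithmetic operator with a constant input is replaced by a ROM with rewritten contents, and the operator is removed from the design. The operator names and address-width handling are illustrative assumptions.

```python
# Toy ROM optimization: fold a constant arithmetic operator into ROM contents.

def absorb_output_operator(rom, op, constant):
    """Fold `rom[addr] op constant` into the stored contents."""
    apply = {"add": lambda v: v + constant, "mul": lambda v: v * constant}[op]
    return [apply(value) for value in rom]

def absorb_input_operator(rom, constant, addr_bits):
    """Fold `rom[addr + constant]` into a re-indexed ROM of the same depth."""
    depth = 1 << addr_bits
    return [rom[(addr + constant) % depth] for addr in range(depth)]

# Usage: a 4-entry ROM with "+3" on its output becomes a plain ROM lookup.
rom = [10, 20, 30, 40]
updated = absorb_output_operator(rom, "add", 3)
assert updated[2] == rom[2] + 3       # operator removed, result preserved
```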

    Network interface device
    Invention Grant

    Publication Number: US10686731B2

    Publication Date: 2020-06-16

    Application Number: US16226453

    Application Date: 2018-12-19

    Applicant: XILINX, INC.

    Abstract: Roughly described, a network interface device has an interface. The interface is coupled to first network interface device circuitry, host interface circuitry, and host offload circuitry. The host interface circuitry is configured to interface to a host device and has a scheduler configured to schedule providing and/or receiving of data to/from the host device. The interface is configured to allow at least one of: data to be provided to said host interface circuitry from at least one of said first network interface device circuitry and said host offload circuitry; and data to be provided from said host interface circuitry to at least one of said first network interface device circuitry and said host offload circuitry.
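
    As a behavioral illustration of the data paths named in the abstract, the sketch below couples network-side circuitry and host-offload circuitry to host interface circuitry whose scheduler decides when queued data is delivered to the host. The queue discipline and the class names are assumptions made for the example.

```python
# Toy model of an interface routing data toward a scheduled host interface.

from collections import deque

class HostInterface:
    """Queues data destined for the host; a scheduler drains the queue."""
    def __init__(self):
        self.queue = deque()
        self.delivered = []

    def enqueue(self, source, data):
        self.queue.append((source, data))

    def schedule(self):
        # Toy scheduler: strict FIFO across all sources.
        while self.queue:
            self.delivered.append(self.queue.popleft())

class Interface:
    """Couples network circuitry and offload circuitry to the host side."""
    def __init__(self, host_interface):
        self.host_interface = host_interface

    def from_network(self, packet):
        self.host_interface.enqueue("network", packet)

    def from_offload(self, result):
        self.host_interface.enqueue("offload", result)

# Usage: data arrives from both the network path and the offload path,
# then the host interface's scheduler delivers it to the host in order.
host_if = HostInterface()
iface = Interface(host_if)
iface.from_network(b"rx-frame")
iface.from_offload(b"offload-result")
host_if.schedule()
assert [src for src, _ in host_if.delivered] == ["network", "offload"]
```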

    INTEGRATED CIRCUITS AND METHODS TO ACCELERATE DATA QUERIES

    Publication Number: US20200183937A1

    Publication Date: 2020-06-11

    Application Number: US16212134

    Application Date: 2018-12-06

    Applicant: Xilinx, Inc.

    Abstract: Integrated circuits and methods relating to hardware acceleration include independent, programmable, and parallel processing units (PUs) custom-adapted to process a data stream and aggregate the results to respond to a query. In an illustrative example, a data stream from a database may be divided into data blocks, each allocated to a corresponding PU. Each data block may be processed by one of the PUs to generate results according to a predetermined instruction set. A concatenate unit may merge the results of the data blocks to generate an output result for the query. In some embodiments, very large database SQL queries, for example, may be accelerated by hardware PU/concatenate engines implemented in fixed ASIC or reconfigurable FPGA hardware circuitry.
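
    The software sketch below mirrors the query-acceleration flow the abstract outlines: a data stream is split into blocks, each block is handled by one processing unit (PU), and a concatenate step merges the per-block results into one answer. The PU "instruction set" is reduced to a filter predicate here; that, and the thread-pool stand-in for parallel hardware, are assumptions for illustration.

```python
# Toy parallel query acceleration: split, process per block, concatenate.

from concurrent.futures import ThreadPoolExecutor

def processing_unit(block, predicate):
    """One PU: filter its block according to the query's predicate."""
    return [row for row in block if predicate(row)]

def concatenate_unit(partial_results):
    """Merge per-block results, preserving block order, into one result set."""
    merged = []
    for part in partial_results:
        merged.extend(part)
    return merged

def run_query(rows, predicate, num_pus=4):
    """Split rows into num_pus blocks, process them in parallel, then merge."""
    block_size = max(1, (len(rows) + num_pus - 1) // num_pus)
    blocks = [rows[i:i + block_size] for i in range(0, len(rows), block_size)]
    with ThreadPoolExecutor(max_workers=num_pus) as pool:
        partials = list(pool.map(lambda b: processing_unit(b, predicate), blocks))
    return concatenate_unit(partials)

# Usage: a toy "SELECT * WHERE value > 50" over 1000 rows.
rows = [{"id": i, "value": i % 100} for i in range(1000)]
result = run_query(rows, lambda row: row["value"] > 50)
```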
