Genome Sequence Alignment System and Method

    Publication Number: US20210201163A1

    Publication Date: 2021-07-01

    Application Number: US16729379

    Application Date: 2019-12-28

    Abstract: A system is provided that includes a bit vector-based distance counter circuitry configured to generate one or more bit vectors encoded with information about potential matches and edits between a read and a reference genome, wherein the read comprises an encoding of a fragment of deoxyribonucleic acid (DNA) encoded via bases G, A, T, C. The system further includes a bit vector-based traceback circuitry configured to divide the reference genome into one or more windows and to use the one or more bit vectors to generate a traceback output for each of the one or more windows, wherein the traceback output comprises a match, a substitution, an insert, a delete, or a combination thereof, between the read and the one or more windows.
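
    As a rough software analogue of the distance-counter stage described above (not the claimed circuitry, and leaving out the traceback stage entirely), the Python sketch below uses Myers' bit-vector recurrence to count edits between a read and a reference window; the semi-global scoring, the function name, and the use of arbitrary-precision integers in place of fixed-width hardware bit vectors are all assumptions of the sketch.

        def myers_min_edits(read, window):
            """Minimum edit distance between `read` and any substring of `window`
            (semi-global alignment), computed with Myers' bit-vector recurrence."""
            m = len(read)
            mask = (1 << m) - 1
            peq = {}
            for i, base in enumerate(read):          # one bit per read position, per base
                peq[base] = peq.get(base, 0) | (1 << i)
            pv, mv, score = mask, 0, m               # vertical +/- delta vectors, running score
            best = m
            for base in window:
                eq = peq.get(base, 0)
                xv = eq | mv
                xh = (((eq & pv) + pv) ^ pv) | eq
                ph = mv | (~(xh | pv) & mask)        # horizontal positive deltas
                mh = pv & xh                         # horizontal negative deltas
                if ph & (1 << (m - 1)):
                    score += 1
                elif mh & (1 << (m - 1)):
                    score -= 1
                ph = (ph << 1) & mask
                mh = (mh << 1) & mask
                pv = mh | (~(xv | ph) & mask)
                mv = ph & xv
                best = min(best, score)              # best alignment ending at this position
            return best

        # Example: myers_min_edits("GATTACA", "CCGATTTACAGG") returns 1 (one inserted T).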

    Technology for dynamically tuning processor features

    Publication Number: US10915421B1

    Publication Date: 2021-02-09

    Application Number: US16575535

    Application Date: 2019-09-19

    Abstract: A processor comprises a microarchitectural feature and dynamic tuning unit (DTU) circuitry. The processor executes a program for first and second execution windows with the microarchitectural feature disabled and enabled, respectively. The DTU circuitry automatically determines whether the processor achieved worse performance in the second execution window. In response to determining that the processor achieved worse performance in the second execution window, the DTU circuitry updates a usefulness state for a selected address of the program to denote worse performance. In response to multiple consecutive determinations that the processor achieved worse performance with the microarchitectural feature enabled, the DTU circuitry automatically updates the usefulness state to denote a confirmed bad state. In response to the usefulness state denoting the confirmed bad state, the DTU circuitry automatically disables the microarchitectural feature for the selected address for execution windows after the second execution window. Other embodiments are described and claimed.
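
    The toy model below mirrors the usefulness-tracking state machine described above in Python; the confirmation threshold of three consecutive worse windows, the reset on any non-worse window, and all names are assumptions made for illustration rather than details taken from the patent.

        from dataclasses import dataclass

        CONFIRM_THRESHOLD = 3      # assumed number of consecutive "worse" windows

        @dataclass
        class TuningEntry:
            worse_streak: int = 0
            confirmed_bad: bool = False

        class DynamicTuningUnit:
            """Per-address usefulness tracking for one microarchitectural feature."""
            def __init__(self):
                self.table = {}

            def feature_enabled(self, address):
                entry = self.table.get(address)
                return not (entry and entry.confirmed_bad)

            def observe(self, address, perf_disabled, perf_enabled):
                """Compare two execution windows; a lower value means worse performance."""
                entry = self.table.setdefault(address, TuningEntry())
                if entry.confirmed_bad:
                    return                              # feature already locked off here
                if perf_enabled < perf_disabled:        # the feature made things worse
                    entry.worse_streak += 1
                    if entry.worse_streak >= CONFIRM_THRESHOLD:
                        entry.confirmed_bad = True      # confirmed bad: disable from now on
                else:
                    entry.worse_streak = 0              # assumed reset on a non-worse window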

    System, Apparatus And Method For Simultaneous Read And Precharge Of A Memory

    Publication Number: US20190355411A1

    Publication Date: 2019-11-21

    Application Number: US15980813

    Application Date: 2018-05-16

    Abstract: In one embodiment, an apparatus includes a memory array having a plurality of memory cells, a plurality of bitlines coupled to the plurality of memory cells, and a plurality of wordlines coupled to the plurality of memory cells. The memory array may further include a sense amplifier circuit to sense and amplify a value stored in a memory cell of the plurality of memory cells. The sense amplifier circuit may include: a buffer circuit to store the value, the buffer circuit coupled between a first internal node of the sense amplifier circuit and a second internal node of the sense amplifier circuit; and an equalization circuit to equalize the first internal node and the second internal node while the sense amplifier circuit is decoupled from the memory array. Other embodiments are described and claimed.
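
    Sense-amplifier operation is a circuit-level matter, but the small behavioral model below (plain Python, with made-up names and a normalized supply voltage) illustrates the ordering the abstract relies on: the buffered value stays readable while the amplifier is decoupled from the array and its internal nodes are equalized for the next access.

        class SenseAmpModel:
            """Behavioral sketch only; no timing, charge sharing, or device physics."""
            VDD = 1.0

            def __init__(self):
                self.node_a = self.node_b = self.VDD / 2   # internal differential nodes
                self.buffer = None                         # value held by the buffer circuit
                self.coupled = True                        # connected to the bitlines

            def sense(self, cell_value):
                assert self.coupled, "must be coupled to the array to sense"
                # Differential sensing: one internal node pulled high, the other low.
                self.node_a, self.node_b = (self.VDD, 0.0) if cell_value else (0.0, self.VDD)
                self.buffer = cell_value                   # capture into the buffer circuit

            def decouple_and_equalize(self):
                # With the amplifier cut off from the array, the internal nodes can be
                # equalized (precharged) without disturbing the buffered data, so the
                # read-out and the precharge for the next access can overlap.
                self.coupled = False
                self.node_a = self.node_b = self.VDD / 2

            def read_out(self):
                return self.buffer                         # still valid during precharge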

    Hardware accelerator for selecting data elements

    Publication Number: US10402413B2

    Publication Date: 2019-09-03

    Application Number: US15475238

    Application Date: 2017-03-31

    Abstract: A processor may include a plurality of processing elements and a hardware accelerator for selecting data elements. The hardware accelerator may: access an input data set comprising a set of data elements, each data element having a score value; increment a plurality of bin counters based on the score values of the set of data elements, each bin counter to count a number of data elements with an associated score value; determine a cumulative sum of count values for a sequence of bin counters, the sequence beginning with a first bin counter of the plurality of bin counters; identify a second bin counter in the sequence of bin counters at which the cumulative sum reaches a selection quantity N; and generate an output data set based on a comparison of the set of data elements to a threshold score associated with the second bin counter.
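
    A minimal software version of the bin-counter selection is easy to state; the sketch below assumes integer scores in [0, num_bins), assumes higher scores are preferred, and walks the histogram from the best bin downward, none of which is specified by the abstract itself.

        def select_top_n(scores, n, num_bins=256):
            """Keep roughly the n highest-scoring elements using bin counters."""
            bins = [0] * num_bins
            for s in scores:
                bins[s] += 1                        # increment the bin for this score
            cumulative, threshold = 0, 0
            for score in range(num_bins - 1, -1, -1):
                cumulative += bins[score]           # cumulative sum over the bin sequence
                if cumulative >= n:
                    threshold = score               # bin at which the sum reaches N
                    break
            # Compare every element against the threshold score of that bin.
            return [s for s in scores if s >= threshold]

        # Example: select_top_n([3, 9, 1, 7, 7, 2], n=3) returns [9, 7, 7].
        # Ties at the threshold score can make the output slightly larger than n.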

    DYNAMIC DETECTION AND PREDICTION FOR STORE-DEPENDENT BRANCHES

    Publication Number: US20190220284A1

    Publication Date: 2019-07-18

    Application Number: US15870595

    Application Date: 2018-01-12

    CPC classification number: G06F9/3844 G06F9/3806 G06F9/3859 G06F9/3861

    Abstract: One embodiment provides an apparatus. The apparatus includes a store direct dependent (SDD) branch prediction circuitry and an SDD management circuitry. The store direct dependent (SDD) branch prediction circuitry is to store an SDD branch table. The SDD branch table is to store at least one record. Each record includes a branch instruction pointer (IP) field, a load IP field, a store IP field, a comparison info field and at least one of a store value field and/or a predicted outcome field. The SDD management circuitry is to populate the SDD branch table at runtime and to override a baseline branch prediction associated with an incoming branch IP with an SDD branch prediction, if the SDD branch table contains a first record populated with the incoming branch IP and at least one of a store value and/or an SDD predicted outcome.
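
    The sketch below models the override decision in Python; the table layout follows the fields listed in the abstract, but the comparison encoding, the immediate operand, and the policy of falling back to the baseline prediction are assumptions of the sketch rather than claimed details.

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class SDDEntry:
            branch_ip: int
            load_ip: int
            store_ip: int
            comparison: str                         # e.g. "eq" or "ne" (assumed encoding)
            store_value: Optional[int] = None
            predicted_outcome: Optional[bool] = None

        class SDDPredictor:
            def __init__(self):
                self.table = {}                     # branch IP -> SDDEntry

            def record(self, entry):
                self.table[entry.branch_ip] = entry # populated at runtime by management logic

            def predict(self, branch_ip, baseline_prediction, immediate=0):
                """Override the baseline prediction when the table has usable SDD data."""
                entry = self.table.get(branch_ip)
                if entry is None:
                    return baseline_prediction
                if entry.predicted_outcome is not None:
                    return entry.predicted_outcome  # precomputed SDD outcome wins
                if entry.store_value is not None:
                    if entry.comparison == "eq":
                        return entry.store_value == immediate
                    if entry.comparison == "ne":
                        return entry.store_value != immediate
                return baseline_prediction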

    EFFICIENT HARDWARE-BASED EXTRACTION OF PROGRAM INSTRUCTIONS FOR CRITICAL PATHS

    Publication Number: US20180232235A1

    Publication Date: 2018-08-16

    Application Number: US15433674

    Application Date: 2017-02-15

    CPC classification number: G06F9/3838 G06F8/41

    Abstract: A processor includes a memory to hold a buffer to store data dependencies comprising nodes and edges for each of a plurality of micro-operations. The nodes include a first node for dispatch, a second node for execution, and a third node for commit. A detector circuit is to queue, in the buffer, the nodes of a micro-operation; add, to determine a node weight for each of the nodes of the micro-operation, an edge weight to a previous node weight of a connected micro-operation that yields a maximum node weight for the node, wherein the node weight comprises a number of execution cycles of an out-of-order (OOO) pipeline of the processor and the edge weight comprises a number of execution cycles to execute the connected micro-operation; and identify, as a critical path, a path through the data dependencies that yields the maximum node weight for the micro-operation.
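
    Read as a graph problem, the node-weight accumulation is a longest-path computation over the dependency graph; the sketch below assumes the dispatch/execute/commit nodes are supplied in a topological order and uses invented names, so it illustrates the weighting scheme rather than the detector circuit itself.

        def critical_path(nodes, edges):
            """`nodes`: topologically ordered node ids, e.g. ("uop3", "execute").
            `edges`: node -> list of (predecessor, edge_cycles).
            A node's weight is the maximum over its predecessors of
            (predecessor weight + edge cycles), mirroring the abstract."""
            weight = {n: 0 for n in nodes}
            best_pred = {n: None for n in nodes}
            for n in nodes:
                for pred, edge_cycles in edges.get(n, []):
                    candidate = weight[pred] + edge_cycles
                    if candidate > weight[n]:
                        weight[n] = candidate
                        best_pred[n] = pred         # remember the heaviest incoming edge
            # Walk back from the heaviest node to recover the critical path.
            end = max(weight, key=weight.get)
            path = [end]
            while best_pred[path[-1]] is not None:
                path.append(best_pred[path[-1]])
            return list(reversed(path)), weight[end]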

    MEMORY AWARE REORDERED SOURCE

    Publication Number: US20180181329A1

    Publication Date: 2018-06-28

    Application Number: US15392638

    Application Date: 2016-12-28

    Abstract: Processor, apparatus, and method for reordering a stream of memory access requests to establish locality are described herein. One embodiment of a method includes: storing in a request queue memory access requests generated by a plurality of execution units, the memory access requests comprising a first request to access a first memory page in a memory and a second request to access a second memory page in the memory; maintaining a list of unique memory pages, each unique memory page associated with one or more memory access requests stored in the request queue and to be accessed by the one or more memory access requests; selecting a current memory page from the list of unique memory pages; and dispatching, from the request queue to the memory, all memory access requests associated with the current memory page before any other memory access request in the request queue is dispatched.
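
    The queue model below captures the reordering policy in a few lines of Python; grouping requests by page in arrival order and draining the oldest page first are assumptions of the sketch, since the abstract does not say how the current page is chosen.

        from collections import OrderedDict, deque

        class ReorderingRequestQueue:
            """Group queued requests by memory page and dispatch one page at a time."""
            def __init__(self):
                self.pages = OrderedDict()          # page -> requests touching that page

            def enqueue(self, request, page):
                self.pages.setdefault(page, deque()).append(request)

            def dispatch_all(self):
                order = []
                while self.pages:
                    page, requests = next(iter(self.pages.items()))  # current page
                    order.extend(requests)          # drain every request for this page
                    del self.pages[page]            # then move to the next unique page
                return order

        # Example:
        #   q = ReorderingRequestQueue()
        #   q.enqueue("ld A", page=0x10); q.enqueue("ld B", page=0x20); q.enqueue("ld C", page=0x10)
        #   q.dispatch_all() returns ["ld A", "ld C", "ld B"]: page 0x10 is drained before 0x20.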

    Data Compression In Processor Caches

    Publication Number: US20150089126A1

    Publication Date: 2015-03-26

    Application Number: US14036673

    Application Date: 2013-09-25

    CPC classification number: G06F12/126 G06F12/0895

    Abstract: In an embodiment, a processor includes a cache data array including a plurality of physical ways, each physical way to store a baseline way and a victim way; a cache tag array including a plurality of tag groups, each tag group associated with a particular physical way and including a first tag associated with the baseline way stored in the particular physical way, and a second tag associated with the victim way stored in the particular physical way; and cache control logic to: select a first baseline way based on a replacement policy, select a first victim way based on an available capacity of a first physical way including the first victim way, and move a first data element from the first baseline way to the first victim way. Other embodiments are described and claimed.
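
    The sketch below models the baseline-way/victim-way arrangement in Python; treating capacity as a spare-byte budget left over after compression, searching the physical ways in index order, and all names are assumptions made for illustration.

        from dataclasses import dataclass
        from typing import List, Optional

        @dataclass
        class PhysicalWay:
            """One physical way holding a baseline block and, space permitting, a victim block."""
            capacity: int                           # assumed byte budget after compression
            baseline_tag: Optional[int] = None
            baseline_data: bytes = b""
            victim_tag: Optional[int] = None
            victim_data: bytes = b""

            def free_space(self):
                return self.capacity - len(self.baseline_data) - len(self.victim_data)

        def retain_as_victim(ways: List[PhysicalWay], baseline_index: int) -> bool:
            """The replacement policy picked ways[baseline_index]; instead of dropping the
            block, try to park it as a victim in a physical way with enough spare capacity."""
            src = ways[baseline_index]
            tag, data = src.baseline_tag, src.baseline_data
            for way in ways:
                if way.victim_tag is None and way.free_space() >= len(data):
                    way.victim_tag, way.victim_data = tag, data
                    src.baseline_tag, src.baseline_data = None, b""
                    return True
            return False                            # no capacity anywhere: normal eviction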

