Failure analysis system of semiconductor device, failure analysis method of semiconductor device, and non-transitory computer readable medium

    Publication No.: US12235770B2

    Publication Date: 2025-02-25

    Application No.: US17180919

    Application Date: 2021-02-22

    Abstract: According to one embodiment, the failure analysis system of the semiconductor device includes a memory, a failure information management table, and an analyzing unit. The memory stores normal/failure information collected in a block unit and a column unit in a chip, in a plurality of inspection processes of the semiconductor memory. The failure information management table stores the normal/failure information in the block unit and the column unit stored in the memory, with the addition of product information, fabricating information including a lot number, a wafer number, and a chip address, process information, and test information, which is information common across the inspection processes. The analyzing unit analyzes the normal/failure information in the block unit and the column unit across the plurality of inspection processes, on the basis of the information stored in the failure information management table.
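
    The management table described above keys per-block and per-column pass/fail results by product, lot, wafer, chip address, process, and test identifiers. The C sketch below shows one way such a record and a cross-process query could look; every field name, array size, and the query itself are illustrative assumptions rather than the patented design.

```c
/* Minimal sketch (not the patented implementation) of a failure-information
 * record carrying the common keys named in the abstract -- product, lot,
 * wafer, chip address, process and test identifiers -- alongside per-block /
 * per-column pass-fail flags collected in one inspection process.
 * All names and sizes here are illustrative assumptions. */
#include <stdio.h>
#include <string.h>

#define NUM_BLOCKS  8      /* assumed block count per chip  */
#define NUM_COLUMNS 16     /* assumed column count per chip */

struct failure_record {
    char product[16];                 /* product information        */
    int  lot, wafer, chip_x, chip_y;  /* fabricating information    */
    char process[16];                 /* inspection-process name    */
    char test[16];                    /* test information           */
    unsigned char block_fail[NUM_BLOCKS];    /* 1 = block failed   */
    unsigned char column_fail[NUM_COLUMNS];  /* 1 = column failed  */
};

/* Count how many inspection processes flagged a given block as failed,
 * i.e., analyze block-level results across the plurality of processes. */
static int block_failures_across_processes(const struct failure_record *recs,
                                           int n, int block)
{
    int hits = 0;
    for (int i = 0; i < n; i++)
        hits += recs[i].block_fail[block];
    return hits;
}

int main(void)
{
    struct failure_record recs[2] = {0};
    strcpy(recs[0].process, "wafer-test");
    strcpy(recs[1].process, "final-test");
    recs[0].block_fail[3] = 1;        /* same block fails in both processes */
    recs[1].block_fail[3] = 1;
    printf("block 3 failed in %d processes\n",
           block_failures_across_processes(recs, 2, 3));
    return 0;
}
```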

    Techniques for managing writes in nonvolatile memory

    Publication No.: US20250053338A1

    Publication Date: 2025-02-13

    Application No.: US18625096

    Application Date: 2024-04-02

    Abstract: This disclosure provides techniques for managing writes of data, useful for storage systems that do not permit overwrite of a logical address. One implementation provides a nonvolatile memory storage drive, such as a flash memory drive, that provides support for zoned drive and/or Open Channel-compliant architectures. Circuitry on the storage drive tracks storage location release metadata for addressable memory space, optionally providing to a host system information upon which maintenance decisions or related scheduling can be based. The storage drive can also provide buffering support for receiving out-of-order writes and for untangling and performing those writes, with buffering resources being configurable according to any one of a number of parameters. The disclosed storage drive facilitates reduced error rates and lower request traffic in a manner consistent with newer memory standards that mandate that writes to logical addresses be sequential.
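
    The core idea in this abstract is that a drive whose zones accept only sequential writes can still receive writes out of order by buffering them until the zone's write pointer catches up. The C sketch below shows that buffering discipline under an assumed zone size; it is not the drive firmware described in the patent.

```c
/* Minimal sketch, under assumed parameters, of the buffering idea described
 * above: a zone that only permits sequential writes accepts out-of-order
 * arrivals into a small reorder buffer and commits them once the write
 * pointer catches up. */
#include <stdio.h>
#include <stdbool.h>

#define ZONE_SIZE 8   /* assumed logical blocks per zone */

struct zone {
    unsigned write_ptr;              /* next LBA the zone will accept       */
    bool     pending[ZONE_SIZE];     /* out-of-order writes waiting to land */
};

/* Accept a write to logical block 'lba'; buffer it if it is ahead of the
 * write pointer, and drain any buffered blocks that become sequential. */
static void zone_write(struct zone *z, unsigned lba)
{
    if (lba < z->write_ptr || lba >= ZONE_SIZE) {
        printf("reject LBA %u (overwrite or out of range)\n", lba);
        return;
    }
    z->pending[lba] = true;
    while (z->write_ptr < ZONE_SIZE && z->pending[z->write_ptr]) {
        printf("commit LBA %u\n", z->write_ptr);
        z->pending[z->write_ptr] = false;
        z->write_ptr++;
    }
}

int main(void)
{
    struct zone z = { .write_ptr = 0 };
    zone_write(&z, 1);   /* arrives early: buffered             */
    zone_write(&z, 0);   /* fills the gap: commits LBAs 0 and 1 */
    return 0;
}
```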

    Application-transparent near-memory processing architecture with memory channel network

    Publication No.: US12210473B2

    Publication Date: 2025-01-28

    Application No.: US17980685

    Application Date: 2022-11-04

    Abstract: A computing device includes a host processor to execute a host driver to create a host-side interface that emulates a first Ethernet interface, and to assign the host-side interface a first medium access control (MAC) address and a first Internet Protocol (IP) address. Memory components are disposed on a substrate. A memory channel network (MCN) processor is disposed on the substrate and coupled between the memory components and the host processor. The MCN processor is to execute an MCN driver to create an MCN-side interface that emulates a second Ethernet interface. The MCN processor is to assign the MCN-side interface a second MAC address and a second IP address, which identify the MCN processor as an MCN network node to the host processor.
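
    The pairing of a host-side and an MCN-side interface, each emulating Ethernet with its own MAC and IP address, is what lets the MCN processor appear as a network node to the host. A minimal C sketch of that pairing follows; the interface names and addresses are invented for illustration only.

```c
/* Minimal sketch of the pairing described above: a host-side interface and
 * an MCN-side interface each emulate an Ethernet device and are given their
 * own MAC and IP addresses.  Names, addresses, and layout are assumptions. */
#include <stdio.h>
#include <stdint.h>

struct emulated_eth_if {
    const char *name;        /* e.g. "mcn-host0" / "mcn-dev0" (hypothetical) */
    uint8_t     mac[6];      /* medium access control address                */
    const char *ip;          /* Internet Protocol address as text            */
};

static void print_if(const struct emulated_eth_if *eif)
{
    printf("%s  %02x:%02x:%02x:%02x:%02x:%02x  %s\n", eif->name,
           eif->mac[0], eif->mac[1], eif->mac[2],
           eif->mac[3], eif->mac[4], eif->mac[5], eif->ip);
}

int main(void)
{
    /* Host driver side: first MAC / first IP. */
    struct emulated_eth_if host_if = {
        "mcn-host0", {0x02, 0x00, 0x00, 0x00, 0x00, 0x01}, "10.0.0.1"
    };
    /* MCN driver side: second MAC / second IP, identifying the MCN
     * processor as a network node to the host processor. */
    struct emulated_eth_if mcn_if = {
        "mcn-dev0", {0x02, 0x00, 0x00, 0x00, 0x00, 0x02}, "10.0.0.2"
    };
    print_if(&host_if);
    print_if(&mcn_if);
    return 0;
}
```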

    Non-stalling, non-blocking translation lookaside buffer invalidation

    Publication No.: US12210459B2

    Publication Date: 2025-01-28

    Application No.: US18303183

    Application Date: 2023-04-19

    Inventor: Daniel Brad Wu

    Abstract: A method includes receiving, by a memory management unit (MMU) for a processor core, an address translation request from the processor core and providing the address translation request to a translation lookaside buffer (TLB) of the MMU; generating, by matching logic of the TLB, an address transaction that indicates whether a virtual address specified by the address translation request hits the TLB; providing the address transaction to a general-purpose transaction buffer; and receiving, by the MMU, an address invalidation request from the processor core and providing the address invalidation request to the TLB. The method also includes, responsive to a virtual address specified by the address invalidation request hitting the TLB, generating, by the matching logic, an invalidation match transaction and providing the invalidation match transaction to one of the general-purpose transaction buffer or a dedicated invalidation buffer.
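
    The key point of this method is that both translation lookups and invalidation hits are turned into transactions queued in a shared buffer rather than stalling or blocking the pipeline. The C sketch below models that flow with invented types, a toy TLB, and a single shared queue; it is only a sketch under those assumptions, not the patented matching logic.

```c
/* Minimal sketch, with invented types, of the transaction flow in the
 * abstract: a lookup that hits or misses the TLB produces an address
 * transaction, and an invalidation request that hits the TLB produces an
 * invalidation-match transaction, both queued into a general-purpose
 * transaction buffer. */
#include <stdio.h>
#include <stdbool.h>

#define TLB_ENTRIES 4
#define BUF_SLOTS   8

enum txn_kind { ADDR_TXN, INVAL_MATCH_TXN };

struct txn { enum txn_kind kind; unsigned vaddr; bool hit; };

static unsigned tlb[TLB_ENTRIES] = { 0x1000, 0x2000, 0x3000, 0x4000 };
static struct txn buffer[BUF_SLOTS];   /* general-purpose transaction buffer */
static int buf_len;

static bool tlb_match(unsigned vaddr)  /* stand-in for the matching logic */
{
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i] == vaddr)
            return true;
    return false;
}

static void enqueue(struct txn t)
{
    if (buf_len < BUF_SLOTS)
        buffer[buf_len++] = t;
}

static void translate_request(unsigned vaddr)
{
    /* Address transaction records whether the request hit the TLB. */
    enqueue((struct txn){ ADDR_TXN, vaddr, tlb_match(vaddr) });
}

static void invalidate_request(unsigned vaddr)
{
    /* Only a hit generates an invalidation match transaction. */
    if (tlb_match(vaddr))
        enqueue((struct txn){ INVAL_MATCH_TXN, vaddr, true });
}

int main(void)
{
    translate_request(0x2000);   /* address transaction, hit       */
    invalidate_request(0x2000);  /* invalidation match transaction */
    for (int i = 0; i < buf_len; i++)
        printf("txn %d: kind=%d vaddr=0x%x hit=%d\n",
               i, buffer[i].kind, buffer[i].vaddr, buffer[i].hit);
    return 0;
}
```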

    Systems and methods for translating memory addresses

    Publication No.: US12174748B1

    Publication Date: 2024-12-24

    Application No.: US18223394

    Application Date: 2023-07-18

    Applicant: ADTRAN, INC.

    Abstract: A memory system has a memory management unit (MMU) that is configured to receive data for storage into physical memory comprising a plurality of memory devices. The MMU receives a logical memory address and converts the logical memory address into at least one page address associated with data to be written to or read from physical memory. The MMU has an address translation circuit that is configured to translate each page address into a physical memory address. In translating the page address, the MMU employs an integer division operation that does not constrain the size of an arbitration map used to define the physical memory address. Thus, the operation of the memory can be better optimized using circuitry that has relatively low complexity and cost.
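
    The translation step here relies on an integer division that, per the abstract, does not constrain the size of the arbitration map. The C sketch below shows a quotient/remainder split in that spirit, spreading pages over an arbitrary number of memory devices; the patent's actual map and formula are not given in the abstract, so the arithmetic here is only an assumed illustration.

```c
/* Minimal sketch of page-to-physical translation built on integer division.
 * This only shows how a quotient/remainder split can spread pages over an
 * arbitrary device count; it is not the patented translation circuit. */
#include <stdio.h>

#define NUM_DEVICES 3        /* arbitrary -- need not be a power of two */

struct phys_addr { unsigned device; unsigned row; };

static struct phys_addr translate(unsigned page)
{
    struct phys_addr p;
    p.device = page % NUM_DEVICES;   /* which memory device    */
    p.row    = page / NUM_DEVICES;   /* row within that device */
    return p;
}

int main(void)
{
    for (unsigned page = 0; page < 7; page++) {
        struct phys_addr p = translate(page);
        printf("page %u -> device %u, row %u\n", page, p.device, p.row);
    }
    return 0;
}
```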

    Method and apparatus for data access in a heterogeneous processing system with multiple processors using memory extension operation

    Publication No.: US12169459B2

    Publication Date: 2024-12-17

    Application No.: US18099021

    Application Date: 2023-01-19

    Abstract: A heterogeneous processing system and method including a host processor, a first processor coupled to a first memory, a second processor coupled to a second memory, and switch and bus circuitry that communicatively couples the host processor, the first processor, and the second processor. The host processor is programmed to map virtual addresses of the second memory to physical addresses of the switch and bus circuitry and to configure the first processor to directly access the second memory using the mapped physical addresses according to a memory extension operation. The first processor may be a reconfigurable processor, a reconfigurable dataflow unit, or a compute engine. The first processor may directly read data from or directly write data to the second memory while executing an application. The method may include configuring the first processor to directly access the second memory while executing an application for reading or writing data.
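
    The enabling step is the host's mapping of the second memory's virtual addresses to physical addresses of the switch and bus circuitry, after which the first processor can read or write that memory directly. The C sketch below shows such a mapping table and lookup with made-up address values and sizes; it is not the system's actual configuration path.

```c
/* Minimal sketch, with made-up address values, of the mapping step described
 * above: the host records which bus physical address backs each virtual page
 * of the second processor's memory, and the first processor resolves that
 * mapping before reading or writing directly over the switch/bus fabric. */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define NUM_PAGES  4

/* Host-built table: virtual page index -> bus physical page address. */
static uint64_t bus_phys_of_page[NUM_PAGES] = {
    0x80000000, 0x80001000, 0x80002000, 0x80003000   /* illustrative */
};

/* Resolve a virtual address in the mapped window to a bus physical address,
 * as the configured first processor would before a direct access. */
static uint64_t to_bus_phys(uint64_t vaddr)
{
    uint64_t page = vaddr >> PAGE_SHIFT;
    uint64_t off  = vaddr & ((1u << PAGE_SHIFT) - 1);
    return bus_phys_of_page[page % NUM_PAGES] + off;
}

int main(void)
{
    uint64_t vaddr = 0x2040;   /* page 2, offset 0x40 */
    printf("virtual 0x%llx -> bus physical 0x%llx\n",
           (unsigned long long)vaddr, (unsigned long long)to_bus_phys(vaddr));
    return 0;
}
```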

    Page request interface support in handling direct memory access with caching host memory address translation data

    Publication No.: US20240411700A1

    Publication Date: 2024-12-12

    Application No.: US18675486

    Application Date: 2024-05-28

    Abstract: A method includes buffering, in a descriptor queue, descriptors associated with translation units of a logical block address (LBA)-based direct memory access (DMA) read command of a host system, each descriptor to be linked with a pointer including a physical destination for data associated with a respective translation unit. The method includes sending address translation requests to an address translation circuit for the pointers of respective translation units and detecting an address translation request miss at a cache of the address translation circuit for a first pointer of a first translation unit linked to a first descriptor of the buffered descriptors. The method includes causing a translation miss message to be sent to a page request interface (PRI) handler, the translation miss message containing a virtual address of the first pointer and triggering the PRI handler to send a page miss request to a translation agent of the host system.
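
    The abstract describes a miss path: a descriptor's pointer misses the address translation cache, a translation miss message carries the virtual address to the PRI handler, and the handler raises a page miss request toward the host's translation agent. The C sketch below walks that path with invented structures and sizes; it is not the controller's real datapath.

```c
/* Minimal sketch of the miss path in the abstract, with all structures
 * invented for illustration: a descriptor's pointer is looked up in a small
 * translation cache, and on a miss a translation-miss message carrying the
 * virtual address is handed to a PRI handler, which would forward a page
 * miss request to the host's translation agent. */
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define CACHE_SLOTS 4

struct descriptor { uint64_t pointer_va; };  /* destination for one TU */

static uint64_t cached_va[CACHE_SLOTS] = { 0x1000, 0x2000, 0x3000, 0x4000 };

static bool cache_lookup(uint64_t va)
{
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (cached_va[i] == va)
            return true;
    return false;
}

/* Stand-in for the PRI handler: on a real device this would queue a page
 * miss request toward the host's translation agent. */
static void pri_handle_miss(uint64_t va)
{
    printf("PRI handler: page miss request for VA 0x%llx\n",
           (unsigned long long)va);
}

static void process_descriptor(const struct descriptor *d)
{
    if (cache_lookup(d->pointer_va))
        printf("translation hit for VA 0x%llx\n",
               (unsigned long long)d->pointer_va);
    else
        pri_handle_miss(d->pointer_va);   /* translation miss message */
}

int main(void)
{
    struct descriptor hit_d  = { 0x2000 };
    struct descriptor miss_d = { 0x9000 };
    process_descriptor(&hit_d);
    process_descriptor(&miss_d);
    return 0;
}
```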
