-
Publication Number: US10733048B1
Publication Date: 2020-08-04
Application Number: US16219493
Application Date: 2018-12-13
Applicant: Amazon Technologies, Inc.
Inventor: Itai Avron , Adi Habusha , Gal Paikin , Simaan Bahouth
Abstract: A method and circuit are disclosed to calculate an error correction code (ECC) and perform a decryption in parallel when reading memory data. There are multiple modes of operation. In a normal parallel mode of operation, the data passes through a decryption engine. Simultaneously, the same data passes through an ECC decode engine. However, if no error is detected, the output of the decode engine is discarded. If there is an ECC error, an error indication is made so that the corresponding data exiting the decryption engine is discarded. The circuit then switches to a serial mode of operation, wherein the ECC decode engine corrects the data and resends the corrected data again through the decryption engine. The circuit is maintained in the serial mode until a decision is made to switch back to the parallel mode, such as when a pipeline of the ECC engine becomes empty.
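A minimal Python sketch of the mode-switching flow the abstract describes (not the patented circuit); the names ReadPath, ecc_decode, and decrypt, and the one-word-at-a-time pipeline, are illustrative assumptions.

    PARALLEL, SERIAL = "parallel", "serial"

    def ecc_decode(word):
        """Stand-in ECC decoder: returns (corrected_word, error_detected)."""
        return dict(word, ecc_error=False), word.get("ecc_error", False)

    def decrypt(word):
        """Stand-in decryption engine."""
        return dict(word, plaintext=True)

    class ReadPath:
        def __init__(self):
            self.mode = PARALLEL
            self.pending = 0                       # words still in the ECC pipeline

        def read(self, word):
            if self.mode == PARALLEL:
                plain = decrypt(word)              # decryption starts immediately
                corrected, err = ecc_decode(word)  # ECC decode runs in parallel
                if not err:
                    return plain                   # no error: ECC output is discarded
                self.mode = SERIAL                 # error: drop decrypted data, go serial
            else:
                corrected, _ = ecc_decode(word)
            # Serial path: correct first, then resend through the decryption engine.
            self.pending += 1
            result = decrypt(corrected)
            self.pending -= 1
            if self.pending == 0:                  # pipeline drained: back to parallel
                self.mode = PARALLEL
            return result

    path = ReadPath()
    print(path.read({"data": 0xAB, "ecc_error": False}))
    print(path.read({"data": 0xCD, "ecc_error": True}))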
-
Publication Number: US10725957B1
Publication Date: 2020-07-28
Application Number: US16460897
Application Date: 2019-07-02
Applicant: Amazon Technologies, Inc.
Inventor: Mark Bradley Davis , Thomas A. Volpe , Nafea Bshara , Yaniv Shapira , Adi Habusha
Abstract: A plurality of system on chips (SoCs) in a server computer can be coupled to a plurality of memory agents (MAs) via respective Serializer/Deserializer (SerDes) interfaces. Each of the plurality of MAs can include one or more memory controllers to communicate with a memory coupled to the respective MA and globally addressable by each of the SoCs. Each of the plurality of SoCs can access the memory coupled to any of the MAs in a uniform number of hops using the respective SerDes interfaces. Different types of memories, e.g., volatile memory and persistent memory, can be supported.
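As a rough software analogy only (not hardware RTL), here is a Python sketch of a flat global address space interleaved across memory agents so that each SoC reaches any MA in the same number of SerDes hops; the MA count, interleave stride, and function name are assumptions.

    NUM_MAS = 4
    MA_STRIDE = 1 << 30                       # assumed 1 GiB interleave granularity

    def route(global_addr):
        """Map a global physical address to (memory agent index, local offset)."""
        block = global_addr // MA_STRIDE      # which interleave block
        ma = block % NUM_MAS                  # agents take blocks round-robin
        local = (block // NUM_MAS) * MA_STRIDE + (global_addr % MA_STRIDE)
        return ma, local

    # Any SoC performs the same decode, so every access is a single SerDes hop
    # to the selected MA regardless of which SoC issued it.
    print(route(0x1_4000_0000))               # -> (1, local offset within that MA)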
-
Publication Number: US10719463B1
Publication Date: 2020-07-21
Application Number: US16386157
Application Date: 2019-04-16
Applicant: Amazon Technologies, Inc.
Inventor: Nafea Bshara , Mark Bradley Davis , Matthew Shawn Wilson , Uwe Dannowski , Yaniv Shapira , Adi Habusha , Anthony Nicholas Liguori
IPC: G06F13/30 , G06F3/06 , G06F12/0891 , G06F13/40 , G06F13/28
Abstract: Disclosed herein are techniques for migrating data from a source memory range to a destination memory while data is being written into the source memory range. An apparatus includes a control logic configured to receive a request for data migration and initiate the data migration using a direct memory access (DMA) controller, while the source memory range continues to accept write operations. The apparatus also includes a tracking logic coupled to the control logic and configured to track write operations performed to the source memory range while data is being copied from the source memory range to the destination memory. The control logic is further configured to initiate copying data associated with the tracked write operations to the destination memory.
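A minimal Python sketch of the copy-while-writable flow, with a set standing in for the tracking logic and a plain slice copy standing in for the DMA engine; all names are illustrative.

    source = bytearray(b"initial source contents")
    dest = bytearray(len(source))
    dirty = set()                    # offsets written while migration is in flight
    migrating = False

    def host_write(offset, value):
        """A write into the source range; tracked while the copy is running."""
        source[offset] = value
        if migrating:
            dirty.add(offset)

    def migrate():
        global migrating
        migrating = True
        dest[:] = source             # bulk "DMA" copy; writes are still accepted
        host_write(3, ord("X"))      # e.g. a write that lands mid-migration
        for off in dirty:            # second pass: copy only what changed
            dest[off] = source[off]
        migrating = False

    migrate()
    assert dest == source            # destination caught up with the tracked writes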
-
Publication Number: US10705985B1
Publication Date: 2020-07-07
Application Number: US15918930
Application Date: 2018-03-12
Applicant: Amazon Technologies, Inc.
Inventor: Benny Pollak , Dana Michelle Vantrease , Adi Habusha
Abstract: In various implementations, provided are systems and methods for an integrated circuit implementing a processor that can include a rate limiting circuit that attempts to fairly distribute processor memory bandwidth between transaction generators in the processor. The rate limiting circuit can maintain a count of tokens for each transaction generator, where a transaction generator can only transmit a transaction when the transaction generator has enough tokens to do so. Each transaction generator can send a request to the rate limiting circuit when the transaction generator wants to transmit a transaction. The rate limiting circuit can then check whether the transaction generator has sufficient tokens to transmit the transaction. When the transaction generator has enough tokens, the rate limiting circuit will allow the transaction to enter the interconnect. When the transaction generator does not have enough tokens, the rate limiting circuit will not allow the transaction to enter the interconnect.
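A small Python sketch of the per-generator token accounting the abstract describes; the class name, refill policy, and token costs are assumptions, not details from the patent.

    class RateLimiter:
        def __init__(self, generators, capacity, refill):
            self.tokens = {g: capacity for g in generators}
            self.capacity = capacity
            self.refill = refill                 # tokens granted per cycle

        def tick(self):
            """Periodic refill, capped at the bucket capacity."""
            for g in self.tokens:
                self.tokens[g] = min(self.capacity, self.tokens[g] + self.refill)

        def request(self, generator, cost=1):
            """Allow the transaction onto the interconnect only if tokens suffice."""
            if self.tokens[generator] >= cost:
                self.tokens[generator] -= cost
                return True
            return False                         # held back until more tokens arrive

    limiter = RateLimiter(["cpu0", "dma0"], capacity=4, refill=1)
    print(limiter.request("cpu0"))               # True: cpu0 holds enough tokens
    limiter.tick()                               # refill toward fair bandwidth sharing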
-
Publication Number: US10691850B1
Publication Date: 2020-06-23
Application Number: US16219205
Application Date: 2018-12-13
Applicant: Amazon Technologies, Inc.
Inventor: Lev Makovsky , Adi Habusha , Ron Diamant
IPC: G06F30/327 , G06F7/02 , G06N20/00 , G06F119/06
Abstract: A power analysis system for an integrated circuit device design can use machine learning to determine an estimated power consumption of the design. In various examples, the system can generate workloads for a power projection tool, which can include less than all the data of a full suite of power projection tests. The results from the power projection tool can be used to train a machine learning data model. From the results, the data model can learn the functions of the design by grouping together cells that are triggered together by the same signals. The data model can also learn estimated power consumption for each of the functions. The output of the data model can then be used to configure a design testing tool, which can run tests on the design. The output of the tests can then be used to compute an estimated overall power consumption for the design.
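Purely to illustrate the flow, a Python sketch that groups cells by the signal that toggles them, averages a power estimate per group from power-projection samples, and scales by activity counts from a test run; the sample data, grouping rule, and simple averaging are assumptions rather than the patent's model.

    from collections import defaultdict

    # (cell, trigger signal, measured power) samples from a power projection tool
    samples = [
        ("cell_a", "clk_fetch", 0.12), ("cell_b", "clk_fetch", 0.10),
        ("cell_c", "clk_alu",   0.30), ("cell_d", "clk_alu",   0.25),
    ]

    # Cells toggled by the same signal are grouped into one "function"; the
    # group's power estimate is the mean of its samples.
    groups = defaultdict(list)
    for cell, signal, power in samples:
        groups[signal].append(power)
    power_per_function = {s: sum(p) / len(p) for s, p in groups.items()}

    # Activity counts per function reported by a design test run (assumed values).
    activity = {"clk_fetch": 1_000, "clk_alu": 400}

    total = sum(power_per_function[f] * n for f, n in activity.items())
    print(f"estimated overall power: {total:.1f} (arbitrary units)")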
-
Publication Number: US20200012610A1
Publication Date: 2020-01-09
Application Number: US16575316
Application Date: 2019-09-18
Applicant: Amazon Technologies, Inc.
Inventor: Leah Shalev , Adi Habusha , Georgy Machulsky , Nafea Bshara , Eric Jason Brandwine
Abstract: Apparatus, methods, and computer-readable storage media are disclosed for core-to-core communication between physical and/or virtual processor cores. In some examples of the disclosed technology, application cores write notification data (e.g., to doorbell or PCI configuration memory space accesses via a memory interface) without synchronizing with the other application cores or the service cores. In one example of the disclosed technology, a message selection circuit is configured to serialize data from the plurality of user cores by: receiving data from a user core, selecting one of the service cores to send the data to based on a memory location addressed by the sending user core, and sending the received data to a respective message buffer dedicated to the selected service core.
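A small Python sketch of the address-based steering described in the last sentence; the two service cores, the bit range used to pick one, and the buffer layout are assumptions.

    NUM_SERVICE_CORES = 2
    message_buffers = {i: [] for i in range(NUM_SERVICE_CORES)}     # one per service core

    def doorbell_write(user_core, address, payload):
        """A user core writes a doorbell without synchronizing with other cores."""
        service = (address >> 12) % NUM_SERVICE_CORES   # assumed: page index picks the core
        message_buffers[service].append((user_core, payload))

    doorbell_write(user_core=0, address=0x1000, payload="job A")
    doorbell_write(user_core=3, address=0x2000, payload="job B")
    print(message_buffers)     # serialized per service core: {0: [(3, 'job B')], 1: [(0, 'job A')]}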
-
Publication Number: US10404674B1
Publication Date: 2019-09-03
Application Number: US15445190
Application Date: 2017-02-28
Applicant: Amazon Technologies, Inc.
Inventor: Nafea Bshara , Thomas A. Volpe , Adi Habusha , Yaniv Shapira
Abstract: Efficient memory management can be provided in a multi-tenant virtualized environment by encrypting data to be written in memory by a virtual machine using a cryptographic key specific to the virtual machine. Encrypting data associated with multiple virtual machines using a cryptographic key unique to each virtual machine can minimize exposure of the data stored in the memory shared by the multiple virtual machines. Thus, some embodiments can eliminate write cycles to the memory that are generally used to initialize the memory before a virtual machine can write data to the memory if the memory was used previously by another virtual machine.
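A toy Python sketch of per-VM keys on the memory path (the XOR keystream here is NOT real cryptography, just a placeholder): data written by one VM is unreadable under another VM's key, so a reused page needs no zeroing pass.

    import os
    from hashlib import sha256

    vm_keys = {"vm_a": os.urandom(16), "vm_b": os.urandom(16)}   # one key per VM
    physical_memory = {}

    def keystream(key, addr, data):
        """Placeholder cipher keyed by (VM key, address); for illustration only."""
        stream = sha256(key + addr.to_bytes(8, "little")).digest()
        return bytes(b ^ s for b, s in zip(data, stream))

    def vm_write(vm, addr, data):
        physical_memory[addr] = keystream(vm_keys[vm], addr, data)

    def vm_read(vm, addr):
        return keystream(vm_keys[vm], addr, physical_memory[addr])

    vm_write("vm_a", 0x1000, b"secret")
    print(vm_read("vm_a", 0x1000))   # b'secret'
    print(vm_read("vm_b", 0x1000))   # unintelligible bytes: vm_b cannot recover vm_a's data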
-
Publication Number: US10255210B1
Publication Date: 2019-04-09
Application Number: US15058053
Application Date: 2016-03-01
Applicant: Amazon Technologies, Inc.
Inventor: Nafea Bshara , Guy Nakibly , Adi Habusha
IPC: G06F13/36 , G06F13/42 , G06F13/362 , G06F13/40
Abstract: A master device transmits a transaction to a target device. The transaction includes a transaction identifier. An ordering message is sent to the target device over a bus that is different than a communication channel that the transaction is transmitted over. The ordering message includes the transaction identifier. The target device adjusts an order of execution of the transaction by the target device based at least in part on receiving the ordering message.
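A brief Python sketch of the target-side behavior: transactions queue up by ID, and an ordering message arriving on a separate bus moves the named IDs ahead; the class and method names are illustrative.

    class Target:
        def __init__(self):
            self.queue = []                          # transaction IDs in execution order

        def receive_transaction(self, txn_id):       # arrives on the main channel
            self.queue.append(txn_id)

        def receive_ordering(self, first_ids):       # arrives on the separate ordering bus
            """Pull the listed transaction IDs to the front, in the given order."""
            rest = [t for t in self.queue if t not in first_ids]
            self.queue = [t for t in first_ids if t in self.queue] + rest

    target = Target()
    for txn in ("T1", "T2", "T3"):
        target.receive_transaction(txn)
    target.receive_ordering(["T3"])                  # side-band: execute T3 first
    print(target.queue)                              # ['T3', 'T1', 'T2']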
-
Publication Number: US10061700B1
Publication Date: 2018-08-28
Application Number: US15230230
Application Date: 2016-08-05
Applicant: Amazon Technologies, Inc.
Inventor: Adi Habusha , Gil Stoler , Said Bshara , Nafea Bshara
IPC: G06F12/08 , G06F12/0817 , G06F12/0855
CPC classification number: G06F12/0828 , G06F12/0831 , G06F12/0833 , G06F12/0855 , G06F2212/62 , G06F2212/621 , G11C7/1072
Abstract: A method for writing data, the method may include: receiving or generating, by an interfacing module, a data unit coherent write request for performing a coherent write operation of a data unit to a first address; receiving, by the interfacing module and from a circuit that comprises a cache and a cache controller, a cache coherency indicator that indicates that a most updated version of the content stored at the first address is stored in the cache; and instructing, by the interfacing module, the cache controller to invalidate a cache line of the cache that stores the most updated version of the content at the first address, without sending that version from the cache to a memory module that differs from the cache, if a length of the data unit equals a length of the cache line.
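A compact Python sketch of the full-line case: when the incoming data unit covers an entire cache line, the cached dirty copy is simply invalidated instead of being written back first; the line size, class names, and memory model are assumptions.

    LINE_SIZE = 64

    class Cache:
        def __init__(self):
            self.lines = {}                          # addr -> (data, dirty)

        def holds_latest(self, addr):
            return addr in self.lines and self.lines[addr][1]

        def invalidate(self, addr):
            self.lines.pop(addr, None)               # drop the line, no write-back

        def writeback(self, addr, memory):
            memory[addr] = self.lines[addr][0]
            self.invalidate(addr)

    def coherent_write(addr, data, cache, memory):
        """Coherent write of `data` (bytes) to line-aligned address `addr`."""
        if cache.holds_latest(addr):
            if len(data) == LINE_SIZE:
                cache.invalidate(addr)               # full line: skip the write-back
            else:
                cache.writeback(addr, memory)        # partial line: flush the latest copy
        line = bytearray(memory.get(addr, bytes(LINE_SIZE)))
        line[:len(data)] = data
        memory[addr] = bytes(line)

    memory, cache = {}, Cache()
    cache.lines[0x80] = (bytes(LINE_SIZE), True)     # cache holds the dirty, latest copy
    coherent_write(0x80, bytes([1]) * LINE_SIZE, cache, memory)
    print(0x80 in cache.lines)                       # False: invalidated without write-back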
-
Publication Number: US09984021B1
Publication Date: 2018-05-29
Application Number: US14867431
Application Date: 2015-09-28
Applicant: Amazon Technologies, Inc.
Inventor: Christopher James BeSerra , Adi Habusha , Ziv Harel , Nafea Bshara , Hani Ayoub , Darin Lee Frink
CPC classification number: G06F13/385 , G06F13/102 , G06F13/4054 , G06F13/4221
Abstract: Provided are systems and methods for a location-aware, self-configuring peripheral device. In some implementations, the peripheral device may include two or more personalities. In these implementations, a personality enables the peripheral device to provide a service. In some implementations, the peripheral device may be configured to receive a configuration cycle. In some implementations, the peripheral device may further select a personality from among two or more personalities. The peripheral device may use information derived from the configuration cycle to make this selection. Selecting a personality may further include configuring the peripheral device according to the selected personality.
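An illustrative Python sketch of selecting a personality from information carried in a configuration cycle; the personality table, the bus-number rule, and all names are hypothetical, not drawn from the patent.

    PERSONALITIES = {
        "storage": {"class_code": 0x0108, "service": "NVMe storage controller"},
        "network": {"class_code": 0x0200, "service": "Ethernet NIC"},
    }

    def select_personality(config_cycle):
        """Derive the device's location from the config cycle and configure accordingly."""
        slot = "storage" if config_cycle["bus"] < 4 else "network"   # assumed location rule
        personality = PERSONALITIES[slot]
        # Configuring per the selected personality would set class code, BARs, etc.
        return personality

    cycle = {"bus": 2, "device": 0, "function": 0}
    print(select_personality(cycle))   # the peripheral presents itself as a storage controller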