Abstract:
A method and system for sharing hash calculations across N parallel mining threads, the method comprising: finding N Merkle root hash values that have identical marginal portions of a predetermined size, calculating a corresponding mid-state hash for each of the N Merkle root hash values, and transmitting the N Merkle root hash values along with the corresponding mid-state values to the N parallel mining threads.
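As an illustration of the scheme above, the following minimal Python sketch groups candidate Merkle roots by a shared tail ("marginal portion") and pairs each selected root with a mid-state stand-in, assuming a Bitcoin-style 80-byte block header whose first 64 bytes cover the version, previous block hash, and the first 28 bytes of the Merkle root. The tail size, thread count, extra-nonce width, and all function names are illustrative assumptions, and hashlib does not expose SHA-256's internal chaining state, so a digest of the first 64 header bytes stands in for the true mid-state.

```python
import hashlib
from collections import defaultdict

TAIL_BYTES = 1   # "predetermined size" of the shared marginal portion (kept small so the sketch terminates quickly)
N_THREADS = 4    # number of parallel mining threads (assumed)

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(coinbase_tx: bytes, tx_hashes: list) -> bytes:
    """Recompute the Merkle root after the coinbase transaction changes."""
    level = [double_sha256(coinbase_tx)] + list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [double_sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def find_shared_tail_roots(coinbase_template: bytes, tx_hashes: list) -> list:
    """Vary an extra-nonce field until N Merkle roots share the same tail bytes."""
    buckets = defaultdict(list)
    extra_nonce = 0
    while True:
        coinbase = coinbase_template + extra_nonce.to_bytes(8, "little")
        root = merkle_root(coinbase, tx_hashes)
        tail = root[-TAIL_BYTES:]
        buckets[tail].append(root)
        if len(buckets[tail]) == N_THREADS:
            return buckets[tail]
        extra_nonce += 1

def midstate_for(header_prefix_64: bytes) -> bytes:
    # hashlib does not expose SHA-256's internal chaining state, so this digest
    # of the first 64 header bytes stands in for the true mid-state value.
    return hashlib.sha256(header_prefix_64).digest()

def dispatch(header_head_36: bytes, roots: list) -> list:
    """One (Merkle root, mid-state) pair per mining thread.
    header_head_36 = version (4 bytes) + previous block hash (32 bytes), so the
    first 64 header bytes also contain the first 28 bytes of each root."""
    return [(root, midstate_for(header_head_36 + root[:28])) for root in roots]

# Example usage with placeholder transaction data:
txs = [double_sha256(bytes([i])) for i in range(3)]
roots = find_shared_tail_roots(b"coinbase-template", txs)
jobs = dispatch(b"\x00" * 36, roots)   # 36-byte stand-in for version + previous hash
```

Because the selected roots end in identical tail bytes, the portion of the header hashing that depends only on those bytes can be shared across the N threads, while each thread still receives its own root and mid-state.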
Abstract:
Input data can be split into data components that can each have a length equal to a machine word size of a processor capable of parallel processing. Hash components can be selected to have a length equal to the length of the data components. A bitwise hashing function can be performed, in which each data component is hashed with a respective different one of the hash components. A representation of the hash components can be output as the hash. The bitwise hashing function can include an exclusive-or operation and a multiplication and can be a modified Fowler-Noll-Vo hashing function, such as a modified FNV-1a function.
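A minimal Python sketch of the described approach, assuming a 64-bit machine word and the standard 64-bit FNV-1a constants; the number of lanes, the per-lane seeds, and the padding and output-combining choices are illustrative and not taken from the abstract.

```python
# Modified FNV-1a over machine-word-sized components: each data component is
# hashed (XOR then multiply) into its own hash component, and the hash
# components together form the output.
FNV64_PRIME = 0x100000001B3
FNV64_OFFSET = 0xCBF29CE484222325
MASK64 = (1 << 64) - 1
WORD_BYTES = 8
LANES = 4          # number of parallel hash components (assumed)

def split_words(data: bytes) -> list:
    """Pad and split input into machine-word-sized data components."""
    if len(data) % WORD_BYTES:
        data += b"\x00" * (WORD_BYTES - len(data) % WORD_BYTES)
    return [int.from_bytes(data[i:i + WORD_BYTES], "little")
            for i in range(0, len(data), WORD_BYTES)]

def parallel_fnv1a(data: bytes) -> bytes:
    # One hash component per lane; each data component updates a respective lane.
    lanes = [(FNV64_OFFSET + i) & MASK64 for i in range(LANES)]
    for idx, word in enumerate(split_words(data)):
        lane = idx % LANES
        lanes[lane] = ((lanes[lane] ^ word) * FNV64_PRIME) & MASK64  # FNV-1a step
    # Output a representation of the hash components as the hash.
    return b"".join(h.to_bytes(WORD_BYTES, "little") for h in lanes)

print(parallel_fnv1a(b"example input").hex())
```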
Abstract:
A processor includes a first execution unit to receive and execute a first instruction to process a first part of secure hash algorithm 256 (SHA256) message scheduling operations, the first instruction having a first operand associated with a first storage location to store a first set of message inputs and a second operand associated with a second storage location to store a second set of message inputs. The processor further includes a second execution unit to receive and execute a second instruction to process a second part of the SHA256 message scheduling operations, the second instruction having a third operand associated with a third storage location to store an intermediate result of the first part and a third set of message inputs and a fourth operand associated with a fourth storage location to store a fourth set of message inputs.
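For reference, the SHA-256 message-schedule recurrence (FIPS 180-4) factors naturally into two parts, which is the kind of split the two instructions above exploit. The scalar Python sketch below shows only that factoring; the operand packing and SIMD lane layout of the actual instructions are not modeled.

```python
# SHA-256 message schedule: W[t] = sigma1(W[t-2]) + W[t-7] + sigma0(W[t-15]) + W[t-16] mod 2^32,
# computed here as an intermediate result (first part) plus a completion step (second part).
MASK32 = 0xFFFFFFFF

def rotr(x: int, n: int) -> int:
    return ((x >> n) | (x << (32 - n))) & MASK32

def sigma0(x: int) -> int:          # lowercase sigma_0 from FIPS 180-4
    return rotr(x, 7) ^ rotr(x, 18) ^ (x >> 3)

def sigma1(x: int) -> int:          # lowercase sigma_1 from FIPS 180-4
    return rotr(x, 17) ^ rotr(x, 19) ^ (x >> 10)

def msg_schedule_part1(w: list, t: int) -> int:
    """First part: combine the older message inputs W[t-16] and W[t-15]."""
    return (w[t - 16] + sigma0(w[t - 15])) & MASK32

def msg_schedule_part2(intermediate: int, w: list, t: int) -> int:
    """Second part: fold in W[t-7] and sigma1(W[t-2]) to finish W[t]."""
    return (intermediate + w[t - 7] + sigma1(w[t - 2])) & MASK32

def expand(block_words: list) -> list:
    """Expand 16 message words into the full 64-word schedule."""
    w = list(block_words)
    for t in range(16, 64):
        part1 = msg_schedule_part1(w, t)           # role of the first instruction
        w.append(msg_schedule_part2(part1, w, t))  # role of the second instruction
    return w
```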
Abstract:
The present invention provides a bi-delta network for distributing bits through a bit distribution (BDST) instruction from an input to an output of the bi-delta network. The network comprises a control delta network constituting a forward path of a delta network for receiving n-bit data from a bitmask register; a data delta network constituting a reverse path of said delta network for n-bit data from a source register; and a plurality of control generation stages between the control delta network and the data delta network, wherein each stage operationally receives inputs from the inputs of stage m of the control delta network and generates control signals for the switches in stage m of both networks, where m represents a stage number running from 1 to log2(n). A method for distributing bits is also provided.
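Behaviorally, the BDST operation described above appears to scatter source bits into the positions selected by the bitmask (comparable to a bit-deposit operation). The Python sketch below models only that assumed input/output behavior; the delta-network hardware itself is not modeled.

```python
def bit_distribute(src: int, mask: int, n: int = 32) -> int:
    """Scatter the low-order bits of src into the positions where mask is 1."""
    result = 0
    k = 0                                  # index of the next source bit to place
    for pos in range(n):
        if (mask >> pos) & 1:
            result |= ((src >> k) & 1) << pos
            k += 1
    return result

# Example: distribute the low bits of 0b101 into the masked positions.
print(bin(bit_distribute(0b101, 0b0110_0100_0001)))  # -> 0b1000000001 (bits land at mask positions 0 and 9)
```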
Abstract:
An electronic device for encrypting and decrypting data blocks of a message having n data blocks in accordance with the data encryption standard (DES) is provided. The electronic device has a first data processing channel having a first processing stage for performing encryption and decryption of data blocks of a predefined length, and a first input data buffer coupled to a data input and to the first processing stage, and a second data processing channel having a second processing stage for performing encryption and decryption of data blocks, and a second input data buffer coupled to an output of the first processing stage and to the second processing stage. The electronic device also has a control stage (FSM) for controlling the first processing stage and the second processing stage, so as to perform an encryption or decryption step with the second processing stage on an encrypted/decrypted data block output from the first processing stage. The control stage is adapted to control the first processing stage to perform data encryption or decryption according to the data encryption standard on each block, and to control the second processing stage to compute a message authentication code, block by block, over the encrypted or decrypted message received from the first processing stage.
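A structural Python sketch of the two-channel arrangement, assuming the second stage chains a CBC-MAC-style value over the first stage's output; `des_encrypt_block` is an explicit stand-in (not DES) so the pipeline shape can run, and the keying and MAC details are illustrative rather than taken from the abstract.

```python
import hashlib

BLOCK = 8  # DES block length in bytes

def des_encrypt_block(key: bytes, block: bytes) -> bytes:
    """Stand-in for the hardware DES stage: NOT DES, just a keyed 8-byte
    placeholder so the pipeline structure below is runnable."""
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt_with_mac(data_key: bytes, mac_key: bytes, message: bytes):
    """Pipeline sketch: stage 1 encrypts each block; stage 2 updates the MAC
    over stage 1's output as soon as each encrypted block becomes available."""
    assert len(message) % BLOCK == 0
    ciphertext = b""
    mac_state = bytes(BLOCK)                              # CBC-MAC-style chaining value
    for i in range(0, len(message), BLOCK):
        ct_block = des_encrypt_block(data_key, message[i:i + BLOCK])                 # stage 1
        ciphertext += ct_block
        mac_state = des_encrypt_block(mac_key, xor_bytes(mac_state, ct_block))       # stage 2
    return ciphertext, mac_state                          # mac_state is the final MAC

ct, mac = encrypt_with_mac(b"data-key", b"mac--key", b"sixteen byte msg")
print(ct.hex(), mac.hex())
```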
Abstract:
The speed at which encrypt and decrypt operations may be performed in a general purpose processor is increased by providing a separate encrypt data path and decrypt data path. With separate data paths, each of the data paths may be individually optimized in order to reduce delays in a critical path. In addition, delays may be hidden in a non-critical last round.
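The remark about hiding delay in a non-critical last round is easier to see from the round structure defined in FIPS 197, where the final round of both the cipher and the inverse cipher omits the (Inv)MixColumns step and therefore carries less logic. The listing below is only a reference summary of that structure, not a model of the patented data paths.

```python
# Round step ordering from FIPS 197; the final round of each direction omits (Inv)MixColumns.
ENCRYPT_ROUND = ["SubBytes", "ShiftRows", "MixColumns", "AddRoundKey"]
ENCRYPT_LAST  = ["SubBytes", "ShiftRows", "AddRoundKey"]
DECRYPT_ROUND = ["InvShiftRows", "InvSubBytes", "AddRoundKey", "InvMixColumns"]
DECRYPT_LAST  = ["InvShiftRows", "InvSubBytes", "AddRoundKey"]
```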
Abstract:
A secure gateway includes a TLS server for authenticating connecting devices, a connection manager for routing requests from the TLS server to service provider adapters, and a key management system for providing key management functions. When a device provides a manufacturing certificate to one or more servers of the gateway, the servers identify the device as authentic by validating that the provided manufacturing certificate is signed by the same root that signed the servers' own certificates.
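A minimal sketch of the root-matching check using the pyca/cryptography package, assuming RSA-signed PEM certificates and a device certificate issued directly by the shared manufacturing root; the file names are illustrative.

```python
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

def load(path: str) -> x509.Certificate:
    with open(path, "rb") as f:
        return x509.load_pem_x509_certificate(f.read())

def signed_by(cert: x509.Certificate, root: x509.Certificate) -> bool:
    """True if cert was issued and signed by root (RSA PKCS#1 v1.5 assumed)."""
    if cert.issuer != root.subject:
        return False
    try:
        root.public_key().verify(
            cert.signature,
            cert.tbs_certificate_bytes,
            padding.PKCS1v15(),
            cert.signature_hash_algorithm,
        )
        return True
    except Exception:
        return False

# Accept the device only if its manufacturing certificate chains to the same
# root as the gateway server's own certificate.
root_cert = load("manufacturing_root.pem")
server_cert = load("gateway_server.pem")
device_cert = load("device_manufacturing.pem")

device_is_authentic = signed_by(device_cert, root_cert) and signed_by(server_cert, root_cert)
print("device authenticated:", device_is_authentic)
```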
Abstract:
Systems and methods, especially designed for very compact hardware implementations, are disclosed for generating random number strings with a high level of entropy at maximum speed. For immediate deployment of software implementations, certain permutations have been introduced that maintain the same level of unpredictability while being more amenable to high-level software programming, at a small cost in hardware execution time; this is typically relevant when hardware devices communicate with software implementations. Particular attention has been paid to maintaining maximum correlation immunity and to maximizing the non-linearity of the output sequence. Good stream ciphers are based on random generators that have a large number of secured internal binary variables, which leads to page-synchronized stream ciphering. The method presented for parsed page synchronization is especially valuable for Internet applications, where frame sequences are occasionally mixed. The large number of internal variables, combined with fast diffusion of individual bits and the feedback of the masked message into the machine variables, makes the design potentially ideal for message authentication procedures.