Abstract:
An apparatus comprises a mass storage unit and a plurality of circuit modules including a machine learning module, a programmable state machine module, and input/output interfaces. Switching circuitry is configured to selectively couple the circuit modules. Configuration circuitry is configured to access configuration data from the mass storage unit and to operate the switching circuitry to connect the circuit modules according to the configuration data.
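A minimal sketch (not the patented circuitry) of how configuration data pulled from storage could drive switching that couples named circuit modules. The module names and the configuration record format are illustrative assumptions.

    from dataclasses import dataclass, field

    @dataclass
    class SwitchFabric:
        """Selectively couples named circuit modules."""
        links: set = field(default_factory=set)

        def connect(self, src: str, dst: str) -> None:
            self.links.add((src, dst))

        def is_coupled(self, src: str, dst: str) -> bool:
            return (src, dst) in self.links

    def apply_configuration(config: list[dict], fabric: SwitchFabric) -> None:
        """Configuration-circuitry analogue: walk config records read from
        mass storage and operate the switch fabric accordingly."""
        for entry in config:
            fabric.connect(entry["from"], entry["to"])

    # Example: couple an ML module to an I/O interface and a state machine.
    config_from_storage = [
        {"from": "io_in", "to": "ml_module"},
        {"from": "ml_module", "to": "state_machine"},
        {"from": "state_machine", "to": "io_out"},
    ]
    fabric = SwitchFabric()
    apply_configuration(config_from_storage, fabric)
    assert fabric.is_coupled("ml_module", "state_machine")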
Abstract:
A connection between a user device and a network server is established. Via the connection, a deep learning network is formed for a processing task. A first portion of the deep learning network operates on the user device and a second portion of the deep learning network operates on the network server. Through cooperation between the user device and the network server, a boundary between the first portion and the second portion of the deep learning network is dynamically modified in response to a change in a performance indicator that could affect the processing task.
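A minimal sketch, not the claimed method, of how a split point in a layered network might be shifted between device and server when a monitored performance indicator changes. The layer names, latency budget, and shift-by-one policy are assumptions for illustration.

    LAYERS = ["conv1", "conv2", "conv3", "fc1", "fc2"]  # full network, in order

    def choose_boundary(current_split: int, device_latency_ms: float,
                        budget_ms: float) -> int:
        """Return a new split index: LAYERS[:split] run on the device,
        LAYERS[split:] run on the server."""
        if device_latency_ms > budget_ms and current_split > 0:
            return current_split - 1   # offload one more layer to the server
        if device_latency_ms < 0.5 * budget_ms and current_split < len(LAYERS):
            return current_split + 1   # pull one layer back onto the device
        return current_split

    split = 3  # device runs conv1..conv3, server runs fc1..fc2
    split = choose_boundary(split, device_latency_ms=42.0, budget_ms=30.0)
    print("device layers:", LAYERS[:split], "| server layers:", LAYERS[split:])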
Abstract:
First and second data representations are stored in first and second blocks of a non-volatile, solid-state memory. The first and second blocks share series-connected bit lines. The first and second blocks are selected and other blocks of the non-volatile, solid-state memory that share the bit lines are deselected. The bit lines are read to determine a combination of the first and second data representations. The combination may include a union or an intersection.
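A minimal sketch, assuming each data representation is a bitmap stored along shared bit lines: selecting two blocks at once and sensing the lines can be modeled as a bitwise combination of the two stored patterns. The AND/OR mapping to series-versus-parallel sensing is an assumption for illustration.

    def read_combined(block_a: int, block_b: int, mode: str, width: int = 8) -> int:
        """Model a combined bit-line read of two selected blocks.
        'intersection' behaves like series (AND-type) sensing,
        'union' like parallel (OR-type) sensing."""
        mask = (1 << width) - 1
        if mode == "intersection":
            return (block_a & block_b) & mask
        if mode == "union":
            return (block_a | block_b) & mask
        raise ValueError(mode)

    set_a = 0b1011_0110  # first data representation
    set_b = 0b0011_1100  # second data representation
    print(bin(read_combined(set_a, set_b, "intersection")))  # 0b110100
    print(bin(read_combined(set_a, set_b, "union")))         # 0b10111110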
Abstract:
A computing component may include a die package that has at least a board, a first computing layer, and a second computing layer. Dielectric layers can separate each of the board, first computing layer, and second computing layer. The first computing layer may be disposed between the board and the second computing layer. One or more interconnects can extend continuously from the second computing layer to the board and have a non-circular cross-sectional shape.
Abstract:
A system includes a plurality of nonvolatile memory cells and a map that assigns connections between nodes of a neural network to the memory cells. Memory devices containing nonvolatile memory cells and applicable circuitry for reading and writing may operate with the map. Information stored in the memory cells can represent weights of the connections. One or more neural processors can be present and configured to implement the neural network.
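A minimal sketch, assuming a flat address space of nonvolatile cells: a map that assigns each neural-network connection (layer, source node, destination node) to a cell, with the cell's stored value standing in for the connection weight. The class and method names are hypothetical.

    class WeightMap:
        def __init__(self):
            self.cells: dict[int, float] = {}   # cell address -> stored value
            self.map: dict[tuple, int] = {}     # connection -> cell address
            self._next_addr = 0

        def assign(self, layer: int, src: int, dst: int) -> int:
            """Assign a connection to the next free memory cell."""
            key = (layer, src, dst)
            if key not in self.map:
                self.map[key] = self._next_addr
                self._next_addr += 1
            return self.map[key]

        def write_weight(self, layer: int, src: int, dst: int, w: float) -> None:
            self.cells[self.assign(layer, src, dst)] = w

        def read_weight(self, layer: int, src: int, dst: int) -> float:
            return self.cells[self.map[(layer, src, dst)]]

    wm = WeightMap()
    wm.write_weight(layer=0, src=2, dst=5, w=0.37)
    print(wm.read_weight(0, 2, 5))  # 0.37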