Proactive wavelength synchronization

    Publication number: US11923899B2

    Publication date: 2024-03-05

    Application number: US17539275

    Application date: 2021-12-01

    CPC classification number: H04B10/07955

    Abstract: Examples described herein relate to a method for synchronizing a wavelength of light in an optical device. In some examples, a heater voltage may be predicted for a heater disposed adjacent to the optical device in a photonic chip. The predicted heater voltage may be applied to the heater to cause a change in the wavelength of the light inside the optical device. In response to applying the heater voltage, an optical power inside the optical device may be measured. Further, a check may be performed to determine whether the measured optical power is a peak optical power. If it is determined that the measured optical power is the peak optical power, the application of the predicted heater voltage to the heater may be continued.
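The control loop the abstract describes can be sketched as a simple hill-climb: start from the predicted heater voltage, probe neighbouring voltages, and hold the voltage once the measured power stops improving. This is an illustrative sketch only; the function names (`predict_voltage`, `apply_voltage`, `measure_power`) and the hill-climbing policy are assumptions, not the patented method.

```python
# Hypothetical sketch of the described loop: apply a predicted heater
# voltage, measure optical power, and keep applying the voltage once the
# measurement sits at the peak. All names here are illustrative.

def synchronize_wavelength(predict_voltage, apply_voltage, measure_power,
                           step=0.01, tolerance=1e-3):
    """Hill-climb the heater voltage until the measured power peaks."""
    voltage = predict_voltage()          # model-predicted starting point
    apply_voltage(voltage)
    best_power = measure_power()
    while True:
        for candidate in (voltage + step, voltage - step):
            apply_voltage(candidate)
            power = measure_power()
            if power > best_power + tolerance:
                voltage, best_power = candidate, power
                break
        else:
            # Neither neighbour improved: the measured power is at the
            # peak, so continue applying the current voltage.
            apply_voltage(voltage)
            return voltage, best_power
```

In a real photonic system the three callbacks would wrap driver and photodetector I/O; here they can be stubbed with any unimodal power-versus-voltage function to exercise the loop.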

    Adjustable Precision for Multi-Stage Compute Processes

    Publication number: US20200042287A1

    Publication date: 2020-02-06

    Application number: US16052218

    Application date: 2018-08-01

    Abstract: Disclosed techniques provide for dynamically changing precision of a multi-stage compute process. For example, changing neural network (NN) parameters on a per-layer basis depending on properties of incoming data streams and per-layer performance of an NN, among other considerations. NNs include multiple layers that may each be calculated with a different degree of accuracy and, therefore, compute resource overhead (e.g., memory, processor resources, etc.). NNs are usually trained with 32-bit or 16-bit floating-point numbers. Once trained, an NN may be deployed in production. One approach to reducing compute overhead is to reduce parameter precision of NNs to 16-bit or 8-bit for deployment. The conversion to an acceptable lower precision is usually determined manually before deployment, and precision levels are fixed while deployed. Disclosed techniques and implementations address automatic rather than manual determination of precision levels for different stages and dynamic adjustment of precision for each stage at run-time.
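The per-layer precision idea can be illustrated with a small sketch: quantize a layer's weights at candidate bit widths and pick the lowest width whose output error stays within a tolerance. The uniform quantization scheme and the error-threshold policy below are assumptions for illustration, not the claimed method.

```python
# Illustrative per-layer precision selection: try lower bit widths first
# and keep the cheapest one whose output error is acceptable. The
# quantization scheme and threshold policy are assumptions.
import numpy as np

def quantize(x, bits):
    """Uniform symmetric quantization of x to the given bit width."""
    scale = (2 ** (bits - 1) - 1) / np.max(np.abs(x))
    return np.round(x * scale) / scale

def run_layer(x, weights, bits):
    """One ReLU layer evaluated with weights quantized to `bits`."""
    return np.maximum(quantize(weights, bits) @ x, 0)

def choose_bits(x, weights, candidates=(8, 16, 32), max_err=1e-2):
    """Pick the lowest precision whose output error stays acceptable."""
    reference = np.maximum(weights @ x, 0)   # full-precision output
    for bits in candidates:
        err = np.max(np.abs(run_layer(x, weights, bits) - reference))
        if err <= max_err:
            return bits
    return candidates[-1]
```

Run at deployment time (or periodically on live data), such a check would let each layer drop to 8-bit when its inputs tolerate it and fall back to wider precision otherwise.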

    Resiliency for machine learning workloads

    Publication number: US11868855B2

    Publication date: 2024-01-09

    Application number: US16673868

    Application date: 2019-11-04

    CPC classification number: G06N20/00 G06F16/901 G06F21/602

    Abstract: In exemplary aspects, a golden data structure can be used to validate the stability of machine learning (ML) models and weights. The golden data structure includes golden input data and corresponding golden output data. The golden output data represents the known correct results that should be output by an ML model when it is run with the golden input data as inputs. The golden data structure can be stored in a secure memory and retrieved for validation separately or together with the deployment of the ML model for a requested ML operation. If the golden data structure is used to validate the model and/or weights concurrently with the performance of the requested operation, the golden input data is combined with the input data for the requested operation and run through the model. Relevant outputs are compared with the golden output data to validate the stability of the model and weights.
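The concurrent-validation path the abstract describes, combining golden inputs with the requested inputs and checking the golden slice of the outputs, can be sketched as follows. The function name and the scalar comparison are illustrative assumptions; a real deployment would compare tensors and fetch the golden data from secure memory.

```python
# Minimal sketch of golden-data validation: append known "golden" inputs
# to a requested batch, run the model once, and check the golden outputs
# before trusting the rest. Names and tolerances are assumptions.

def validated_inference(model, request_inputs, golden_inputs,
                        golden_outputs, tol=1e-6):
    batch = request_inputs + golden_inputs   # combine into one batch
    outputs = [model(x) for x in batch]
    n = len(request_inputs)
    # Compare the golden slice of the outputs against the stored results.
    for produced, expected in zip(outputs[n:], golden_outputs):
        if abs(produced - expected) > tol:
            raise RuntimeError("model/weights failed golden validation")
    return outputs[:n]
```

Because the golden inputs ride along in the same batch, a corrupted model or weight set is detected by the very inference call that would otherwise return bad results.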

    Adjustable precision for multi-stage compute processes

    Publication number: US11385863B2

    Publication date: 2022-07-12

    Application number: US16052218

    Application date: 2018-08-01

    Abstract: Disclosed techniques provide for dynamically changing precision of a multi-stage compute process. For example, changing neural network (NN) parameters on a per-layer basis depending on properties of incoming data streams and per-layer performance of an NN, among other considerations. NNs include multiple layers that may each be calculated with a different degree of accuracy and, therefore, compute resource overhead (e.g., memory, processor resources, etc.). NNs are usually trained with 32-bit or 16-bit floating-point numbers. Once trained, an NN may be deployed in production. One approach to reducing compute overhead is to reduce parameter precision of NNs to 16-bit or 8-bit for deployment. The conversion to an acceptable lower precision is usually determined manually before deployment, and precision levels are fixed while deployed. Disclosed techniques and implementations address automatic rather than manual determination of precision levels for different stages and dynamic adjustment of precision for each stage at run-time.

    RESILIENCY FOR MACHINE LEARNING WORKLOADS

    Publication number: US20210133624A1

    Publication date: 2021-05-06

    Application number: US16673868

    Application date: 2019-11-04

    Abstract: In exemplary aspects, a golden data structure can be used to validate the stability of machine learning (ML) models and weights. The golden data structure includes golden input data and corresponding golden output data. The golden output data represents the known correct results that should be output by an ML model when it is run with the golden input data as inputs. The golden data structure can be stored in a secure memory and retrieved for validation separately or together with the deployment of the ML model for a requested ML operation. If the golden data structure is used to validate the model and/or weights concurrently with the performance of the requested operation, the golden input data is combined with the input data for the requested operation and run through the model. Relevant outputs are compared with the golden output data to validate the stability of the model and weights.

    Memristive dot product engine virtualization

    Publication number: US10740125B2

    Publication date: 2020-08-11

    Application number: US15884030

    Application date: 2018-01-30

    Abstract: An example system includes at least one memristive dot product engine (DPE) having at least one resource, the DPE further having a physical interface and a controller, the controller being communicatively coupled to the physical interface, the physical interface to communicate with the controller to access the DPE, and at least one replicated interface, each replicated interface being associated with a virtual DPE and communicatively coupled to the controller. The controller is to allocate timeslots to the virtual DPE through the associated replicated interface to allow the virtual DPE access to the at least one resource.
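The controller's timeslot allocation can be sketched as a small scheduler: virtual DPEs register through their replicated interfaces and the controller hands out slots on the shared physical engine. The round-robin policy, class name, and method names below are assumptions for illustration; the patent does not specify a scheduling discipline.

```python
# Hypothetical timeslot allocator for virtual DPEs sharing one physical
# engine, loosely following the abstract. The round-robin policy is an
# assumption, not the claimed mechanism.
from collections import deque

class DPEController:
    def __init__(self, num_timeslots):
        self.timeslots = list(range(num_timeslots))
        self.queue = deque()          # virtual DPEs awaiting a slot
        self.allocation = {}          # timeslot -> virtual DPE id

    def register(self, vdpe_id):
        """A replicated interface registers its virtual DPE."""
        self.queue.append(vdpe_id)

    def schedule(self):
        """Hand out free timeslots round-robin to registered vDPEs."""
        for slot in self.timeslots:
            if slot not in self.allocation and self.queue:
                vdpe = self.queue.popleft()
                self.allocation[slot] = vdpe
                self.queue.append(vdpe)   # rotate for fairness
        return dict(self.allocation)
```

With four slots and two registered virtual DPEs, each virtual engine receives two interleaved slots, giving every tenant periodic access to the shared crossbar resource.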
