Abstract:
A test and measurement instrument includes a port to connect to a device under test (DUT) to receive waveform data, a connection to a machine learning network, and one or more processors configured to: receive one or more inputs about a three-dimensional (3D) tensor image; scale the waveform data to fit within the 3D tensor image; build the 3D tensor image; send the 3D tensor image to the machine learning network; and receive a predictive result from the machine learning network. A method includes receiving waveform data from one or more devices under test (DUTs), receiving one or more inputs about a three-dimensional (3D) tensor image, scaling the waveform data to fit within the 3D tensor image, building the 3D tensor image, sending the 3D tensor image to a pre-trained machine learning network, and receiving a predictive result from the machine learning network.
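As a rough, non-authoritative sketch of the tensor-building step described above (the names build_tensor_image and ml_network are illustrative assumptions, not the patent's terminology), the waveform could be scaled and rasterized into an image-style 3D tensor before being handed to a network:

import numpy as np

def build_tensor_image(waveform, height=256, width=256, depth=3):
    """Scale a 1-D waveform into a 3-D (height x width x depth) tensor image.

    Illustrative only: the waveform is resampled to `width` columns,
    scaled to the vertical extent of the image, rasterized as a trace,
    and replicated across `depth` channels to form an image-style input.
    """
    # Resample to one sample per image column.
    x = np.linspace(0, len(waveform) - 1, width)
    samples = np.interp(x, np.arange(len(waveform)), waveform)

    # Scale amplitudes to fit within the vertical extent of the image.
    lo, hi = samples.min(), samples.max()
    rows = ((samples - lo) / (hi - lo + 1e-12) * (height - 1)).astype(int)

    image = np.zeros((height, width), dtype=np.float32)
    image[height - 1 - rows, np.arange(width)] = 1.0   # plot the trace

    # Replicate into `depth` channels to form the 3-D tensor image.
    return np.repeat(image[:, :, None], depth, axis=2)

# tensor = build_tensor_image(acquired_waveform)
# prediction = ml_network.predict(tensor[None, ...])   # hypothetical network call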
Abstract:
A manufacturing system has a machine learning (ML) system having one or more neural networks and a configuration file associated with a trained neural network (NN); a structured data store having interfaces to the ML system and a test automation application, a training store, a reference parameter store, a communications store, and a trained model store; and one or more processors to control the data store to receive and store training data, allow the ML system to access the training data to train the one or more NNs, receive and store reference parameters and allow access to the reference parameters, receive and store prediction requests for optimal tuning parameters and associated data within the communications store, provide the requests to the ML system, allow the ML system to store trained NNs in the trained model store, and recall a selected trained NN and provide its prediction to the test automation application.
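Purely as an illustrative sketch of how the stores named above might be organized in software (the class and method names here are assumptions, not the claimed implementation, and ml_system.predict stands in for whatever interface the ML system actually exposes):

from dataclasses import dataclass, field

@dataclass
class StructuredDataStore:
    """Illustrative layout of the stores named in the abstract."""
    training_store: list = field(default_factory=list)        # training waveforms/labels
    reference_parameters: dict = field(default_factory=dict)  # reference tuning parameters
    communications_store: list = field(default_factory=list)  # prediction requests + data
    trained_models: dict = field(default_factory=dict)        # name -> trained NN / config

    def store_training_data(self, record):
        self.training_store.append(record)

    def request_prediction(self, request, ml_system):
        # Queue the request, recall the selected trained NN, and hand the
        # resulting optimal tuning parameters back to the caller
        # (e.g. a test automation application).
        self.communications_store.append(request)
        model = self.trained_models[request["model_name"]]
        return ml_system.predict(model, request["data"])   # hypothetical ML-system call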
Abstract:
A test and measurement instrument has one or more ports configured to receive a signal from one or more devices under test (DUTs), and one or more processors configured to execute code that causes the one or more processors to: acquire a waveform from the signal, derive a pattern waveform from the waveform, perform linear response extraction on the pattern waveform, present one or more data representations including a data representation of the extracted linear response to a machine learning system, and receive a prediction for a measurement from the machine learning system. A method of performing a measurement on a waveform includes acquiring the waveform at a test and measurement device, deriving a pattern waveform from the waveform, performing linear response extraction on the pattern waveform, presenting one or more data representations including a data representation of the extracted linear response to a machine learning system, and receiving a prediction of the measurement from the machine learning system.
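One hedged reading of the processing chain is sketched below: the pattern waveform is taken as the average over repetitions of a known test pattern, and the linear response is extracted by regularized frequency-domain deconvolution of the ideal pattern. The function names and the deconvolution approach are illustrative assumptions; the abstract does not specify them.

import numpy as np

def derive_pattern_waveform(waveform, pattern_len_samples, repeats):
    """Average successive repetitions of a repeating test pattern
    to form a single low-noise pattern waveform (illustrative)."""
    w = np.asarray(waveform)[:pattern_len_samples * repeats]
    return w.reshape(repeats, pattern_len_samples).mean(axis=0)

def extract_linear_response(pattern_waveform, ideal_pattern, eps=1e-6):
    """Estimate the linear (LTI) response by frequency-domain deconvolution
    of the ideal transmitted pattern from the measured pattern waveform."""
    Y = np.fft.rfft(pattern_waveform)
    X = np.fft.rfft(ideal_pattern, n=len(pattern_waveform))
    H = Y * np.conj(X) / (np.abs(X) ** 2 + eps)   # regularized division
    return np.fft.irfft(H, n=len(pattern_waveform))

# representations = [pattern, extract_linear_response(pattern, ideal_pattern)]
# measurement = ml_system.predict(representations)   # hypothetical ML-system call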
Abstract:
Systems and methods directed towards reducing noise introduced into a signal when processing the signal are discussed herein. In embodiments, a signal may initially be split by a multiplexer into two or more frequency bands. Each of the frequency bands can then be forwarded through an assigned channel. One or more channels may include an amplifier to independently boost the signal band assigned to that channel prior to a noise source within the assigned channel. This results in boosting the signal band relative to noise introduced by the noise source. In some embodiments, a filter may also be implemented in one or more of the channels to remove noise from the channel that is outside the bandwidth of the signal band assigned to that channel. Additional embodiments may be described and/or claimed herein.
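A minimal software model of the idea, assuming a digitized signal, two bands, and additive noise injected per channel (the crossover filters, gain value, and noise model are illustrative choices, not the claimed hardware):

import numpy as np
from scipy.signal import butter, sosfiltfilt

def split_and_boost(signal, fs, crossover_hz, gain_high=4.0, noise_rms=0.01):
    """Illustrative model: split the signal into low/high bands, boost the
    high band before an additive noise source, then de-boost and recombine."""
    sos_lo = butter(4, crossover_hz, btype='low', fs=fs, output='sos')
    sos_hi = butter(4, crossover_hz, btype='high', fs=fs, output='sos')
    low, high = sosfiltfilt(sos_lo, signal), sosfiltfilt(sos_hi, signal)

    rng = np.random.default_rng(0)
    noise = lambda: rng.normal(0.0, noise_rms, len(signal))

    # Boosting the high band ahead of the noise source raises its level
    # relative to the injected noise; the boost is removed afterwards.
    low_ch = low + noise()
    high_ch = (gain_high * high + noise()) / gain_high
    return low_ch + high_ch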
Abstract:
Embodiments of the present invention provide techniques and methods for improving signal-to-noise ratio (SNR) when averaging two or more data signals by finding a group delay between the signals and using it to calculate an averaged result. In one embodiment, a direct average of the signals is computed and phases are found for the direct average and each of the data signals. Phase differences are found between each signal and the direct average. The phase differences are then used to compensate the signals. Averaging the compensated signals provides a more accurate result than conventional averaging techniques. The disclosed techniques can be used for improving instrument accuracy while minimizing effects such as higher-frequency attenuation. For example, in one embodiment, the disclosed techniques may enable a real-time oscilloscope to take more accurate S parameter measurements.
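A minimal sketch of those steps, assuming equal-length acquisitions and FFT-derived phases (the function name and details are illustrative):

import numpy as np

def phase_compensated_average(signals):
    """Average repeated acquisitions after removing the phase (group-delay)
    differences between each acquisition and their direct average."""
    signals = np.asarray(signals, dtype=float)
    direct_avg = signals.mean(axis=0)

    ref_spec = np.fft.rfft(direct_avg)
    compensated = []
    for s in signals:
        spec = np.fft.rfft(s)
        # Phase difference between this acquisition and the direct average.
        dphi = np.angle(spec) - np.angle(ref_spec)
        # Remove the phase difference while keeping this acquisition's magnitude.
        compensated.append(np.fft.irfft(spec * np.exp(-1j * dphi), n=signals.shape[1]))
    # Coherent averaging of the compensated acquisitions.
    return np.mean(compensated, axis=0)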
Abstract:
A continuously or step variable passive noise filter removes noise, added by a test and measurement instrument channel, from a signal received from a DUT. The noise filter may include, for example, a splitter that splits a signal into at least a first split signal and a second split signal. A first path receives the first split signal and includes a variable attenuator and/or a variable delay line, which may be set based on the channel response of the connected DUT. The variable attenuator and/or the variable delay line may be continuously or stepped variable. A second path receives the second split signal, and a combiner combines a signal from the first path and a signal from the second path into a combined signal.
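In discrete time, the two-path topology behaves like a two-tap feed-forward filter, which the following hedged sketch models (the attenuation and delay values are placeholders, not values from the disclosure):

import numpy as np

def split_attenuate_delay_combine(signal, attenuation=0.5, delay_samples=3):
    """Illustrative discrete-time model of the topology: the input is split,
    one copy is attenuated and delayed, and the two paths are recombined.
    Varying `attenuation` and `delay_samples` reshapes the combined response,
    much as the variable attenuator and delay line would in hardware."""
    delayed = np.concatenate([np.zeros(delay_samples), np.asarray(signal)])[:len(signal)]
    return signal + attenuation * delayed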
Abstract:
Disclosed is a mechanism for reducing noise caused by an analog-to-digital conversion in a test and measurement system. An adaptive linear filter is generated based on a converted digital signal and measured signal noise. The adaptive linear filter includes a randomness suppression factor for alleviating statistical errors caused by a comparison of a signal circularity coefficient and a noise circularity coefficient in the adaptive linear filter. The adaptive linear filter is applied to the digital signal along with a stomp filter and a suppression clamp filter. The digital signal may be displayed in a complex frequency domain along with depictions of the adaptive linear filter frequency response and corresponding circularity coefficients. The display may be animated to allow a user to view the signal and/or filters in the frequency domain at different times.
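As a loosely hedged illustration of the circularity comparison only (not the disclosed adaptive, stomp, or suppression clamp filters), a circularity coefficient and a suppression-guarded comparison might look like the following; the names and the form of the guard are assumptions:

import numpy as np

def circularity_coefficient(z):
    """Second-order circularity coefficient of a complex spectrum:
    near 0 for a proper (circular) signal, approaching 1 for a highly
    improper (non-circular) one."""
    z = np.asarray(z, dtype=complex)
    return np.abs(np.mean(z * z)) / (np.mean(np.abs(z) ** 2) + 1e-12)

def circularity_gain(signal_spec, noise_spec, suppression=0.1):
    """Illustrative gain that attenuates content whose circularity does not
    sufficiently exceed the noise's; `suppression` plays the role of a
    randomness suppression factor that keeps small statistical differences
    from toggling the comparison."""
    c_sig = circularity_coefficient(signal_spec)
    c_noise = circularity_coefficient(noise_spec)
    return 1.0 if c_sig > c_noise + suppression else suppression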
Abstract:
A signal acquisition probe stores compressed or compressed and filtered time domain data samples representing at least one of an impulse response or step response characterizing the signal acquisition probe. The compressed or compressed and filtered time domain data samples of the impulse response or the step response are provided to a signal measurement instrument for compensating the signal measurement instrument for the impulse or step response of the signal acquisition probe.
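One plausible, non-authoritative way an instrument could use such stored response data is a regularized inverse (compensation) filter, sketched below; the function names, FFT length, and regularization constant are assumptions:

import numpy as np

def compensation_filter(probe_impulse_response, n_fft=4096, eps=1e-3):
    """Build an inverse (compensation) filter from a probe's stored
    impulse response using regularized frequency-domain inversion."""
    H = np.fft.rfft(probe_impulse_response, n=n_fft)
    H_inv = np.conj(H) / (np.abs(H) ** 2 + eps)   # avoids blow-up where |H| is small
    return np.fft.irfft(H_inv, n=n_fft)

def compensate(acquired_waveform, probe_impulse_response, n_fft=4096):
    """Convolve the acquisition with the compensation filter so the result
    approximates the signal at the probe tip (illustrative; the acquisition
    is assumed to be at least n_fft samples long)."""
    return np.convolve(acquired_waveform,
                       compensation_filter(probe_impulse_response, n_fft),
                       mode='same')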
Abstract:
A test and measurement system includes a test and measurement instrument, a probe connected to the test and measurement instrument, a device under test connected to the probe, at least one memory configured to store parameters for characterizing the probe, a user interface, and a processor. The user interface is configured to receive a nominal source impedance of the device under test. The processor is configured to receive the parameters for characterizing the probe from the memory and the nominal source impedance of the device under test from the user interface, and to calculate an equalization filter using the parameters for characterizing the probe and the nominal source impedance from the user interface.
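A heavily hedged sketch of the idea follows; it assumes a simple parallel R-C model for the probe input (the actual characterizing parameters are not specified here) and returns regularized inverse gains as the equalization filter:

import numpy as np

def equalization_filter(freqs_hz, source_impedance_ohms, probe_r_ohms, probe_c_farads, eps=1e-3):
    """Illustrative equalizer: model the probe input as R in parallel with C,
    compute the loaded transfer function seen with the entered nominal source
    impedance, and return the regularized inverse as per-frequency gains."""
    w = 2 * np.pi * np.asarray(freqs_hz)
    z_probe = probe_r_ohms / (1 + 1j * w * probe_r_ohms * probe_c_farads)  # R || C
    h = z_probe / (z_probe + source_impedance_ohms)   # divider formed with the source
    return np.conj(h) / (np.abs(h) ** 2 + eps)        # regularized inverse

# gains = equalization_filter(np.linspace(0, 1e9, 512), 50.0, 1e6, 10e-12)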