Abstract:
Methods, systems and apparatuses may provide for technology that conducts an automated vision analysis of image data associated with an interior of a vehicle cabin, determines a state of a child restraint system (CRS) based on the automated vision analysis, and generates an alert if the state of the CRS does not satisfy one or more safety constraints. In one example, the technology identifies the safety constraint(s) based on a geographic location of the vehicle cabin.
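As a rough illustration of the constraint check described above, the sketch below (Python) validates a detected CRS state against a location-keyed rule table. The CRSState fields, the CONSTRAINTS table, and the check_crs() helper are illustrative assumptions, not the disclosed implementation; the vision analysis that would produce the state is out of scope here.

# Minimal sketch (not the patented implementation): checking a detected CRS
# state against location-dependent safety constraints. The constraint table,
# the state fields, and the region codes are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CRSState:
    occupied: bool
    rear_facing: bool
    harness_buckled: bool
    child_age_months: int

# Hypothetical per-jurisdiction constraints keyed by region code.
CONSTRAINTS = {
    "US-CA": {"rear_facing_under_months": 24, "harness_required": True},
    "DE":    {"rear_facing_under_months": 15, "harness_required": True},
}

def check_crs(state: CRSState, region: str) -> list[str]:
    """Return alert messages for constraints the CRS state violates."""
    rules = CONSTRAINTS.get(region, {})
    alerts = []
    if state.occupied:
        if (state.child_age_months < rules.get("rear_facing_under_months", 0)
                and not state.rear_facing):
            alerts.append("Child should be in a rear-facing CRS in this region.")
        if rules.get("harness_required") and not state.harness_buckled:
            alerts.append("CRS harness is not buckled.")
    return alerts

if __name__ == "__main__":
    state = CRSState(occupied=True, rear_facing=False,
                     harness_buckled=True, child_age_months=12)
    print(check_crs(state, "US-CA"))  # expect the rear-facing alert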
Abstract:
Machine vision processing includes capturing 3D spatial data representing a field of view and including ranging measurements to various points within the field of view, applying a segmentation algorithm to the 3D spatial data to produce a segmentation assessment indicating a presence of individual objects within the field of view, wherein the segmentation algorithm is based on at least one adjustable parameter, and adjusting a value of the at least one adjustable parameter based on the ranging measurements. The segmentation assessment is based on application of the segmentation algorithm to the 3D spatial data, with different values of the at least one adjustable parameter corresponding to different values of the ranging measurements of the various points within the field of view.
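The following sketch illustrates one way the range-dependent parameter adjustment could look in practice: a greedy Euclidean region-growing pass whose neighbor-distance tolerance (the adjustable parameter) scales with each point's measured range. The point format, the scaling constants, and the segment() helper are assumptions for illustration, not the claimed algorithm.

# A minimal sketch, not the patented algorithm: Euclidean-clustering style
# segmentation where the neighbor-distance tolerance (the adjustable
# parameter) grows with each point's range, since point spacing increases
# with distance. Scaling constants are illustrative assumptions.
import numpy as np

def segment(points: np.ndarray, base_tol: float = 0.05, per_meter: float = 0.02):
    """Greedy region-growing over (N, 3) 3D points with a range-adaptive tolerance."""
    n = len(points)
    ranges = np.linalg.norm(points, axis=1)   # ranging measurement per point
    tol = base_tol + per_meter * ranges       # adjustable parameter per point
    labels = np.full(n, -1, dtype=int)
    current = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:
            i = stack.pop()
            d = np.linalg.norm(points - points[i], axis=1)
            neighbors = np.where((d < tol[i]) & (labels == -1))[0]
            labels[neighbors] = current
            stack.extend(neighbors.tolist())
        current += 1
    return labels

if __name__ == "__main__":
    cloud = np.vstack([np.random.randn(50, 3) * 0.02 + [1, 0, 0],
                       np.random.randn(50, 3) * 0.02 + [8, 2, 0]])
    print(np.unique(segment(cloud)))  # expect two cluster labels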
Abstract:
Apparatus and methods to dynamically adjust an analytics threshold are disclosed. An example method includes analyzing a set of probabilities of injury to determine the minimum and maximum probabilities in the set based on a request for an injury risk for a target player; determining possible thresholds to divide the probabilities of injury between the minimum and maximum probabilities; converting the probabilities of injury into percentages of injury based on at least one of the possible thresholds; distributing the percentages of injury into a plurality of bands based on the possible threshold(s); comparing the percentages of injury in each of the plurality of bands according to a criterion; and, when the criterion is satisfied, updating a target threshold to classify the injury risk for the target player based on at least one of the possible thresholds dividing the percentages of injury in the plurality of bands, and outputting the injury risk for the target player.
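A toy version of the threshold search might look like the following; the candidate-threshold count, the percentage conversion, and the band-balance criterion are assumptions chosen for illustration rather than the claimed method.

# A minimal sketch under assumed details: searching for a threshold that
# splits injury probabilities into reasonably balanced low/high risk bands.
# The criterion and candidate count are illustrative, not the claimed method.
import numpy as np

def choose_threshold(probs, n_candidates: int = 50, balance_tol: float = 0.2):
    probs = np.asarray(probs, dtype=float)
    lo, hi = probs.min(), probs.max()                   # min/max probability in the set
    candidates = np.linspace(lo, hi, n_candidates + 2)[1:-1]
    pct = 100.0 * probs                                 # probabilities as percentages
    best, best_gap = None, np.inf
    for t in candidates:
        low_band = pct[pct <= 100.0 * t]
        high_band = pct[pct > 100.0 * t]
        # Criterion (assumed): bands should hold comparable shares of players.
        gap = abs(len(low_band) - len(high_band)) / len(pct)
        if gap < best_gap:
            best, best_gap = t, gap
        if gap <= balance_tol:
            break
    return best

def injury_risk(target_prob: float, threshold: float) -> str:
    return "high" if target_prob > threshold else "low"

if __name__ == "__main__":
    population = np.random.beta(2, 5, size=200)         # toy probabilities of injury
    t = choose_threshold(population)
    print(round(t, 3), injury_risk(0.45, t))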
Abstract:
In various embodiments, an angular gauge reporting system (“AGRS”) may determine one or more values from an image of an angular gauge. The AGRS may receive one or more images of the gauge and develop an angular map to determine values indicated by the gauge. The AGRS may identify numbers in the image to generate the angular map. The AGRS may determine a center for the angular gauge. The AGRS may determine numerical values by processing captured images of the angular gauge through angular or linear interpolation of values. By generating the angular map prior to later determination of values, the AGRS may determine numerical values without repeating actions that may be computationally complex or resource intensive. Other embodiments may be described and/or claimed.
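The interpolation step could look roughly like the sketch below, assuming the angular map has already been built as a list of (tick angle, value) pairs; ANGULAR_MAP, value_from_angle(), and needle_angle() are hypothetical names, and the gauge geometry is invented for the example.

# A minimal sketch (assumed geometry, not the AGRS implementation): once an
# angular map of (needle angle -> value) pairs exists, a later reading only
# needs the needle angle and a linear interpolation between mapped ticks.
import bisect
import math

# Hypothetical angular map: angle of each labeled tick (degrees) -> gauge value.
ANGULAR_MAP = [(-135.0, 0.0), (-45.0, 25.0), (45.0, 50.0), (135.0, 100.0)]

def value_from_angle(needle_angle_deg: float) -> float:
    angles = [a for a, _ in ANGULAR_MAP]
    values = [v for _, v in ANGULAR_MAP]
    # Clamp to the mapped range, then interpolate linearly between ticks.
    if needle_angle_deg <= angles[0]:
        return values[0]
    if needle_angle_deg >= angles[-1]:
        return values[-1]
    i = bisect.bisect_right(angles, needle_angle_deg)
    a0, a1 = angles[i - 1], angles[i]
    v0, v1 = values[i - 1], values[i]
    frac = (needle_angle_deg - a0) / (a1 - a0)
    return v0 + frac * (v1 - v0)

def needle_angle(center, tip) -> float:
    """Needle angle from the gauge center and needle-tip pixel coordinates."""
    return math.degrees(math.atan2(tip[1] - center[1], tip[0] - center[0]))

if __name__ == "__main__":
    angle = needle_angle(center=(320, 240), tip=(400, 300))
    print(round(value_from_angle(angle), 1))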
Abstract:
Described is an apparatus which comprises: one or more sensors for coupling to a power source and for sensing electrical parameters of the power source, wherein the power source is operable to provide power to a system having one or more sub-systems; and a processor to analyze the sensed electrical parameters and to detect and identify one or more events associated with the system and the one or more sub-systems.
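As a rough analogue of event detection from sensed electrical parameters, the sketch below matches step changes in a sampled current trace against per-subsystem signatures. The SIGNATURES table, window size, and tolerance are illustrative assumptions, not the disclosed detector.

# A minimal sketch with assumed signal shapes (not the described apparatus):
# detecting and identifying events from a sensed current trace by matching
# step changes against per-subsystem signatures.
import numpy as np

# Hypothetical current-step signatures (amps) for known subsystem events.
SIGNATURES = {"disk_spinup": 0.9, "fan_on": 0.3, "display_on": 0.5}

def detect_events(current: np.ndarray, window: int = 10, tol: float = 0.1):
    """Return (sample_index, event_name) pairs from a sampled current waveform."""
    events = []
    for i in range(window, len(current) - window):
        step = current[i:i + window].mean() - current[i - window:i].mean()
        for name, magnitude in SIGNATURES.items():
            if abs(step - magnitude) < tol:
                events.append((i, name))
                break
    return events

if __name__ == "__main__":
    trace = np.concatenate([np.full(100, 1.0), np.full(100, 1.3)])  # fan turns on
    trace += np.random.normal(0, 0.01, trace.size)
    print(detect_events(trace)[:1])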
Abstract:
In one embodiment, a device comprises interface circuitry and processing circuitry. The processing circuitry receives, via the interface circuitry, a video stream captured by a camera during performance of an industrial process, wherein the video stream comprises a sequence of frames; detects, based on analyzing the sequence of frames, a degree of particle scatter that occurs during performance of the industrial process; and determines, based on the degree of particle scatter, that an anomaly occurs during performance of the industrial process.
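A simplified stand-in for the scatter analysis might difference consecutive frames and flag an anomaly when the fraction of sharply changed pixels exceeds a limit, as below; the thresholds and the synthetic frames are assumptions for illustration.

# A minimal sketch under assumed thresholds (not the claimed detector):
# estimating a degree of particle scatter from frame-to-frame differences
# and flagging an anomaly when the scatter score exceeds a limit.
import numpy as np

def scatter_score(prev_frame: np.ndarray, frame: np.ndarray, pixel_thresh: int = 25) -> float:
    """Fraction of pixels whose intensity changed sharply between frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float((diff > pixel_thresh).mean())

def detect_anomaly(frames, scatter_limit: float = 0.05) -> bool:
    """True if any consecutive frame pair shows scatter above the limit."""
    degrees = [scatter_score(a, b) for a, b in zip(frames, frames[1:])]
    return max(degrees, default=0.0) > scatter_limit

if __name__ == "__main__":
    quiet = [np.full((120, 160), 80, dtype=np.uint8) for _ in range(5)]
    sparks = quiet[0].copy()
    sparks[np.random.rand(120, 160) < 0.1] = 255   # simulated weld spatter
    print(detect_anomaly(quiet), detect_anomaly(quiet + [sparks]))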
Abstract:
A device for a vehicle may include a first wireline interface configured to receive a first data stream from a first sensor having a first sensor type for perceiving a surrounding of the vehicle, the first data stream including raw sensor data detected by the first sensor; a second wireline interface configured to receive a second data stream from a second sensor having a second sensor type for perceiving the surrounding of the vehicle, the second data stream including raw sensor data detected by the second sensor; one or more processors configured to generate a coded packet including the received first data stream and the received second data stream by employing vector packet coding on the first data stream and the second data stream; and an output wireline interface configured to transmit the generated coded packet to one or more target units of the vehicle.
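As a loose illustration only (not the described vector packet coding), the sketch below packs two raw sensor payloads into a single coded packet: the first stream is carried systematically, and a byte-wise XOR (GF(2)) combination of both streams lets a target unit recover the second. The packet layout and helper names are assumptions.

# A minimal sketch with an assumed packet layout (not the described coding
# scheme): two raw sensor payloads are padded to a common size and combined
# into one coded packet carrying stream A plus an XOR of both streams.
import struct

def encode_packet(stream_a: bytes, stream_b: bytes, symbol: int = 64) -> bytes:
    size = max(len(stream_a), len(stream_b))
    size += (-size) % symbol                      # pad up to a whole symbol count
    a = stream_a.ljust(size, b"\x00")
    b = stream_b.ljust(size, b"\x00")
    coded = bytes(x ^ y for x, y in zip(a, b))    # byte-wise GF(2) combination
    header = struct.pack("!III", len(stream_a), len(stream_b), symbol)
    return header + a + coded                     # systematic: stream A + coded block

def decode_stream_b(packet: bytes) -> bytes:
    len_a, len_b, _ = struct.unpack("!III", packet[:12])
    body = packet[12:]
    half = len(body) // 2
    a, coded = body[:half], body[half:]
    b = bytes(x ^ y for x, y in zip(a, coded))    # recover B from A and the coded block
    return b[:len_b]

if __name__ == "__main__":
    camera, radar = b"\x01\x02\x03" * 100, b"\xaa\xbb" * 90
    pkt = encode_packet(camera, radar)
    assert decode_stream_b(pkt) == radar
    print(len(pkt))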
Abstract:
Technologies for performing sensor fusion include a compute device. The compute device includes circuitry configured to obtain detection data indicative of objects detected by each of multiple sensors of a host system. The detection data includes camera detection data indicative of a two- or three-dimensional image of detected objects and lidar detection data indicative of depths of detected objects. The circuitry is also configured to merge the detection data from the multiple sensors to define final bounding shapes for the objects.
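One simplified way to merge camera and lidar detections into final bounding shapes is an IoU-based association, sketched below; the detection formats, the IoU threshold, and the merge rule (camera label plus lidar depth) are assumptions, not the claimed fusion pipeline.

# A minimal sketch with assumed detection formats: camera boxes carry class
# labels, lidar boxes carry depth, and pairs whose 2D IoU clears a threshold
# are merged into a final bounding shape.
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def fuse(camera_dets, lidar_dets, iou_thresh=0.5):
    """camera_dets: [(box, label)], lidar_dets: [(box, depth_m)] -> fused list."""
    fused = []
    for cam_box, label in camera_dets:
        best = max(lidar_dets, key=lambda d: iou(cam_box, d[0]), default=None)
        if best and iou(cam_box, best[0]) >= iou_thresh:
            lid_box, depth = best
            merged = tuple(min(a, b) for a, b in zip(cam_box[:2], lid_box[:2])) + \
                     tuple(max(a, b) for a, b in zip(cam_box[2:], lid_box[2:]))
            fused.append({"box": merged, "label": label, "depth_m": depth})
        else:
            fused.append({"box": cam_box, "label": label, "depth_m": None})
    return fused

if __name__ == "__main__":
    cams = [((100, 100, 200, 220), "pedestrian")]
    lids = [((105, 95, 205, 215), 12.4)]
    print(fuse(cams, lids))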
Abstract:
Systems, apparatuses, and methods include technology that identifies a first dataset that comprises a plurality of data values, and partitions the first dataset into a plurality of bins to generate a second dataset, where the second dataset is a compressed version of the first dataset. The technology randomly subsamples data associated with the first dataset to obtain groups of randomly subsampled data, and generates a plurality of decision tree models during an unsupervised learning process based on the groups of randomly subsampled data and the second dataset.
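The overall pattern could be sketched as follows, with scikit-learn's IsolationForest standing in for the claimed decision-tree construction; the bin count, the quantile binning, and the subsample size are illustrative assumptions.

# A minimal sketch of the general pattern, with assumed details: raw values
# are quantized into bins to form a compressed second dataset, then a tree
# ensemble is fit unsupervised on random subsamples. IsolationForest is a
# stand-in, not the claimed decision-tree construction.
import numpy as np
from sklearn.ensemble import IsolationForest

def bin_dataset(values: np.ndarray, n_bins: int = 16) -> np.ndarray:
    """Map each value to a bin index: a compressed version of the first dataset."""
    edges = np.quantile(values, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(values, edges)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    first_dataset = rng.normal(0.0, 1.0, size=(2000, 3))
    second_dataset = np.column_stack(
        [bin_dataset(first_dataset[:, j]) for j in range(first_dataset.shape[1])]
    )
    # max_samples drives the random subsampling; each tree sees its own subsample.
    forest = IsolationForest(n_estimators=50, max_samples=256, random_state=0)
    forest.fit(second_dataset)
    print(forest.decision_function(second_dataset[:5]))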