Abstract:
A device includes a memory and processing circuitry coupled to the memory. The processing circuitry, in operation, generates an indication of a predicted difference in a direction of arrival (DoA) of a signal using a trained autoregressive model. A predicted indication of a DoA of the signal is generated based on a previous indication of the DoA of the signal and the indication of the predicted difference in the DoA of the signal. The processing circuitry actuates or controls an antenna array based on predicted indications of the DoA of the signal.
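The two-step prediction described above (predict the DoA difference with an AR model, then add it to the previous DoA) can be sketched as follows. The AR coefficients, the scalar-DoA representation, and the function name are hypothetical illustrations, not details from the source:

```python
import numpy as np

def predict_doa(doa_history, ar_coeffs):
    """Predict the next DoA from an autoregressive model of DoA differences.

    doa_history: past DoA estimates (e.g., degrees), oldest first.
    ar_coeffs: hypothetical trained AR coefficients, most recent lag first.
    """
    diffs = np.diff(doa_history)                # first differences of the DoA track
    order = len(ar_coeffs)
    # AR prediction of the next difference from the `order` most recent diffs
    predicted_diff = float(np.dot(ar_coeffs, diffs[-order:][::-1]))
    # predicted DoA = previous DoA + predicted difference
    return doa_history[-1] + predicted_diff
```

For a track moving at a constant rate, a first-order model with coefficient 1.0 simply extrapolates the last step.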
Abstract:
A laserbeam light source is controlled to avoid light-sensitive regions around the laserbeam light source. One or more laserlight-sensitive regions are identified based on images of an area around the laserbeam light source, and indications of positions corresponding to the laserlight-sensitive regions are generated. The laserbeam light source is controlled based on the indications of the positions. The laserbeam light source may be controlled to deflect a laserlight beam away from laserlight-sensitive regions, to reduce an intensity of a laserlight beam directed towards a laserlight-sensitive region, etc. Motion estimation may be used to generate the indications of positions corresponding to the laserlight-sensitive regions.
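The deflection behavior described above can be sketched as a simple decision function. The rectangular region model, the deflection margin, and the function name are illustrative assumptions; the source does not specify how regions are represented or how far the beam is deflected:

```python
def control_laser(target, sensitive_regions, deflect_margin=0.1):
    """Decide a beam action for a target point given light-sensitive regions.

    target: (x, y) aim point; sensitive_regions: list of (xmin, ymin, xmax, ymax)
    boxes (an assumed representation of the detected regions).
    Returns ('deflect', new_target) or ('fire', target).
    """
    x, y = target
    for (xmin, ymin, xmax, ymax) in sensitive_regions:
        if xmin <= x <= xmax and ymin <= y <= ymax:
            # deflect the beam just past the region boundary instead of firing into it
            return ('deflect', (xmax + deflect_margin, y))
    return ('fire', target)
```

Reducing intensity instead of deflecting would be a second action branch in the same structure.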
Abstract:
In an embodiment, digital video frames in a stream are subjected to a feature-extraction method including the operations of: extracting from the video frames respective sequences of keypoint/descriptor pairs, limiting the number of pairs extracted for each frame to a threshold value; sending the extracted sequences from an extractor module to a server for processing at a bitrate value that varies in time; receiving that time-varying bitrate value at the extractor as a target bitrate for extraction; and limiting the number of pairs extracted by the extractor to a threshold value that varies in time as a function of the target bitrate.
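The final step, deriving the per-frame pair threshold from the target bitrate, can be sketched as below. The fixed bit cost per encoded pair and the saliency-score ordering are assumptions for illustration; the source does not specify how the threshold is computed from the bitrate:

```python
def limit_pairs(pairs, target_bitrate, frame_rate, bits_per_pair):
    """Limit keypoint/descriptor pairs per frame to meet a time-varying target bitrate.

    pairs: list of (score, keypoint, descriptor); higher score = more salient.
    The threshold is bits available per frame divided by an assumed
    per-pair encoding cost.
    """
    max_pairs = int(target_bitrate / frame_rate // bits_per_pair)
    # keep the highest-scoring pairs up to the time-varying threshold
    return sorted(pairs, key=lambda p: p[0], reverse=True)[:max_pairs]
```

As the server lowers the target bitrate, `max_pairs` shrinks and only the most salient keypoints survive.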
Abstract:
Local descriptors are extracted from digital image information and digital depth information related to digital images. The local descriptors convey appearance description information and shape description information related to the digital images. Global representations of the digital images are generated based on the extracted local descriptors, and are hashed. Visual search queries are generated based on the hashed global representations. The visual search queries include fused appearance description information and shape description information conveyed in the local descriptors. The fusing may occur before the global representations are generated, before the hashing, or after the hashing.
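The "fusing before the global representations are generated" variant can be sketched as follows. Concatenation as the fusion step, mean pooling as the global aggregation, and a sign-based random-projection hash are all stand-in assumptions; the source names none of these specifics:

```python
import numpy as np

def fused_hash(appearance, shape, proj):
    """Early-fuse appearance and shape local descriptors, pool, then hash.

    appearance, shape: (n_keypoints, d) arrays of local descriptors.
    proj: projection matrix for hashing (a hypothetical stand-in for
    whatever hash the source uses).
    """
    local = np.hstack([appearance, shape])           # fuse before aggregation
    global_rep = local.mean(axis=0)                  # simple global pooling
    bits = (global_rep @ proj > 0).astype(np.uint8)  # sign-based binary hash
    return bits
```

Late fusion would instead pool and hash the two modalities separately and fuse the resulting codes.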
Abstract:
Disclosed embodiments are directed to methods, systems, and circuits of generating compact descriptors for transmission over a communications network. A method according to one embodiment includes receiving an uncompressed descriptor, performing zero-thresholding on the uncompressed descriptor to generate a zero-threshold-delimited descriptor, quantizing the zero-threshold-delimited descriptor to generate a quantized descriptor, and coding the quantized descriptor to generate a compact descriptor for transmission over a communications network. The uncompressed and compact descriptors may be 3D descriptors, such as where the uncompressed descriptor is a SHOT descriptor. The operation of coding can be ZeroFlag coding, ExpGolomb coding, or Arithmetic coding, for example.
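The three-stage pipeline (zero-thresholding, quantization, coding) can be sketched as below. Uniform quantization and run-length coding of the result are illustrative stand-ins for the quantizer and the ZeroFlag/ExpGolomb/Arithmetic coders named in the source; the threshold and step values are arbitrary:

```python
import numpy as np

def compact_descriptor(desc, zero_thresh=0.05, step=0.1):
    """Zero-threshold, quantize, then code a descriptor into a compact form."""
    # 1. zero-thresholding: clamp small magnitudes to exactly zero
    d = np.where(np.abs(desc) < zero_thresh, 0.0, desc)
    # 2. uniform quantization (stand-in for the source's quantizer)
    q = np.round(d / step).astype(int)
    # 3. run-length coding as (value, run) pairs; long zero runs compact well
    coded, i = [], 0
    while i < len(q):
        j = i
        while j < len(q) and q[j] == q[i]:
            j += 1
        coded.append((int(q[i]), j - i))
        i = j
    return coded
```

The zero-thresholding step is what creates the long zero runs that make the subsequent coding stage effective.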
Abstract:
An image processing system has one or more memories and image processing circuitry coupled to the one or more memories. The image processing circuitry, in operation, compares a first image to feature data in a comparison image space using a matching model. The comparing includes: unwarping keypoints in keypoint data of the first image; and comparing the unwarped keypoints and descriptor data associated with the first image to the feature data of the comparison image. The image processing circuitry determines whether the first image matches the comparison image based on the comparing.
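The two sub-steps of the comparison (unwarp the keypoints into the comparison image space, then match against the comparison image's feature data) can be sketched as follows. Modeling the warp as an invertible 3x3 homography and matching by nearest Euclidean descriptor distance are assumptions; the source does not specify the matching model:

```python
import numpy as np

def unwarp_and_match(keypoints, descriptors, warp, ref_descriptors, max_dist=0.5):
    """Unwarp first-image keypoints into the comparison image space, then match.

    warp: assumed 3x3 homography mapping comparison space -> first image;
    keypoints (n, 2) are unwarped with its inverse.
    """
    inv = np.linalg.inv(warp)
    pts = np.hstack([keypoints, np.ones((len(keypoints), 1))])  # homogeneous coords
    unwarped = pts @ inv.T
    unwarped = unwarped[:, :2] / unwarped[:, 2:3]               # back to Cartesian
    matches = []
    for i, d in enumerate(descriptors):
        # nearest comparison-image descriptor by Euclidean distance
        dists = np.linalg.norm(ref_descriptors - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matches.append((i, j))
    return unwarped, matches
```

A geometric-consistency check on the unwarped coordinates could then confirm or reject the candidate match.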
Abstract:
One embodiment is a method for selecting and grouping key points extracted by applying a feature detector on a scene being analyzed. The method includes grouping the extracted key points into clusters that enforce a geometric relation between members of a cluster, scoring and sorting the clusters, identifying and discarding clusters composed of points that represent the background noise of the image, and sub-sampling the remaining clusters to provide a smaller number of key points for the scene.
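The group/score/discard/sub-sample sequence above can be sketched as follows. Greedy distance-based clustering stands in for the geometric relation of the source, summed detector response stands in for its scoring, and small clusters stand in for background noise; the thresholds are arbitrary:

```python
def cluster_and_subsample(keypoints, radius=1.0, min_cluster=2, keep_per_cluster=2):
    """Group keypoints into proximity clusters, drop background-like clusters,
    and sub-sample the survivors.

    keypoints: list of (x, y, score) tuples from a feature detector.
    """
    clusters = []
    for kp in sorted(keypoints, key=lambda k: -k[2]):        # strongest first
        for c in clusters:
            cx, cy = c[0][0], c[0][1]                        # cluster seed point
            if (kp[0] - cx) ** 2 + (kp[1] - cy) ** 2 <= radius ** 2:
                c.append(kp)
                break
        else:
            clusters.append([kp])
    # discard small clusters (treated here as background noise)
    clusters = [c for c in clusters if len(c) >= min_cluster]
    # score clusters by summed detector response and sort
    clusters.sort(key=lambda c: -sum(k[2] for k in c))
    # sub-sample: keep only the strongest few points from each surviving cluster
    return [k for c in clusters for k in c[:keep_per_cluster]]
```

An isolated keypoint far from any other detection never reaches `min_cluster` members, so it is dropped as background.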