Abstract:
Electronic equipment comprises a sensor circuit operative to measure at least one electrical property of a user at a plurality of frequencies to thereby capture frequency-resolved electrical characteristics of the user. The electronic equipment further comprises a processing circuit operative to perform a comparison between the frequency-resolved electrical characteristics of the user and reference characteristics to authenticate the user. The processing circuit is operative to perform an unlocking operation based on a result of the comparison.
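The comparison step could be sketched as follows in Python, assuming the electrical property is an impedance magnitude sampled at a handful of frequencies; the frequency grid, the relative-distance metric, the threshold, and all function names are illustrative assumptions, not taken from the abstract.

```python
import numpy as np

# Illustrative frequency sweep; the abstract does not specify values.
FREQUENCIES_HZ = np.array([1e3, 5e3, 1e4, 5e4, 1e5])

def authenticate(measured: np.ndarray, reference: np.ndarray,
                 threshold: float = 0.1) -> bool:
    """Return True if the frequency-resolved measurement matches the
    stored reference characteristics closely enough."""
    # Relative Euclidean distance across the frequency sweep.
    distance = np.linalg.norm(measured - reference) / np.linalg.norm(reference)
    return distance < threshold

def handle_unlock_request(measured: np.ndarray, reference: np.ndarray) -> str:
    """Perform the unlocking operation based on the comparison result."""
    return "unlocked" if authenticate(measured, reference) else "locked"

# Example: a measurement close to the reference unlocks the equipment.
reference = np.array([1200.0, 950.0, 800.0, 600.0, 450.0])  # ohms, illustrative
measured = reference * 1.02                                  # within tolerance
print(handle_unlock_request(measured, reference))            # -> "unlocked"
```

A real implementation would likely use a more robust similarity measure and per-user calibration, but the flow of comparing a frequency-resolved measurement against stored reference characteristics and gating the unlock on the result is the same.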
Abstract:
Disclosed is a method, performed in a Body Area Network (BAN)-enabled media experience device, for enabling transmission of a media experience according to a stored, predefined user configuration associated with identification data, userID, of a BAN-enabled communication device, the method comprising: establishing a connection between the BAN-enabled media experience device and the BAN-enabled communication device by using BAN; retrieving the identification data, userID, of the BAN-enabled communication device; retrieving the stored, predefined user configuration associated with the userID of the BAN-enabled communication device; and initiating transmission of the media experience by the BAN-enabled media experience device according to the stored, predefined user configuration.
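A minimal Python sketch of this workflow is given below, assuming a simulated BAN connection and an in-memory configuration store keyed by userID; the class names, configuration fields, and store layout are hypothetical illustrations only.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class UserConfiguration:
    volume: int
    subtitle_language: str

# Stored, predefined user configurations keyed by userID (illustrative).
CONFIG_STORE: Dict[str, UserConfiguration] = {
    "user-123": UserConfiguration(volume=40, subtitle_language="en"),
}

class BanCommunicationDevice:
    """Stand-in for the BAN-enabled communication device carried by the user."""
    def __init__(self, user_id: str):
        self.user_id = user_id

class BanMediaExperienceDevice:
    """Stand-in for the BAN-enabled media experience device (e.g. a TV)."""

    def establish_ban_connection(self, device: BanCommunicationDevice) -> BanCommunicationDevice:
        # A real system would use body-coupled communication; here the
        # connection is simulated by returning the peer device directly.
        return device

    def start_media_experience(self, device: BanCommunicationDevice) -> UserConfiguration:
        peer = self.establish_ban_connection(device)   # establish connection via BAN
        user_id = peer.user_id                          # retrieve the userID
        config = CONFIG_STORE[user_id]                  # retrieve stored configuration
        # Initiate transmission of the media experience with this configuration.
        print(f"Starting playback at volume {config.volume}, "
              f"subtitles: {config.subtitle_language}")
        return config

BanMediaExperienceDevice().start_media_experience(BanCommunicationDevice("user-123"))
```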
Abstract:
An image capturing device may include a recording circuit configured to record multimedia data of a field of view of the image capturing device, a receiving circuit configured to receive a wireless signal from a wireless sensor, a location determining circuit configured to determine, based on the received wireless signal, a location of the wireless sensor, and a storage circuit configured to store data associated with the wireless sensor responsive to a comparison of the determined location of the wireless sensor and the field of view of the image capturing device so as to create an association between the data associated with the wireless sensor and the recorded multimedia data.
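The location-versus-field-of-view comparison could look like the following Python sketch, which reduces the problem to a 2-D angular sector for clarity; the field-of-view model, sensor identifiers, and metadata structure are assumptions, not details from the abstract.

```python
import math
from dataclasses import dataclass

@dataclass
class FieldOfView:
    camera_x: float
    camera_y: float
    heading_deg: float      # direction the camera points
    half_angle_deg: float   # half of the horizontal opening angle
    max_range_m: float

def sensor_in_field_of_view(fov: FieldOfView, sensor_x: float, sensor_y: float) -> bool:
    """Compare the determined sensor location with the field of view."""
    dx, dy = sensor_x - fov.camera_x, sensor_y - fov.camera_y
    if math.hypot(dx, dy) > fov.max_range_m:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    # Smallest signed angle between the bearing and the camera heading.
    offset = (bearing - fov.heading_deg + 180.0) % 360.0 - 180.0
    return abs(offset) <= fov.half_angle_deg

recording_metadata = {}

def on_sensor_signal(fov: FieldOfView, sensor_id: str, location, sensor_data) -> None:
    """Store sensor data so it becomes associated with the recorded multimedia."""
    if sensor_in_field_of_view(fov, *location):
        recording_metadata[sensor_id] = sensor_data

fov = FieldOfView(camera_x=0.0, camera_y=0.0, heading_deg=0.0,
                  half_angle_deg=30.0, max_range_m=50.0)
on_sensor_signal(fov, "heart-rate-1", (20.0, 5.0), {"bpm": 142})
print(recording_metadata)   # -> {'heart-rate-1': {'bpm': 142}}
```

Storing the sensor data only when the determined location falls inside the field of view is what creates the association between that data and the multimedia being recorded at the time.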
Abstract:
A monitoring system detects and mitigates traffic risks among a group of vehicles. The group of vehicles includes a ground-based vehicle (GBV), e.g. an automotive vehicle, and an air-based vehicle (ABV), e.g. a drone, which is operated to track a ground-based object (GBO), e.g. an unprotected road user or an animal. The monitoring system performs a method comprising: obtaining (301) predicted navigation data for the ground-based vehicle and the air-based vehicle, processing (302) the predicted navigation data to obtain one or more future locations of the ground-based object and to detect an upcoming spatial proximity between the ground-based object and the ground-based vehicle, and causing (305), upon detection of the upcoming spatial proximity, an alert signal to be provided to at least one of the ground-based object and the ground-based vehicle.
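A minimal Python sketch of the proximity detection step is shown below, assuming the GBO's future locations are derived from the tracking ABV with a constant offset and that proximity is a simple distance threshold; the trajectory model, the offset, and the threshold value are all illustrative assumptions.

```python
import math
from dataclasses import dataclass
from typing import List

@dataclass
class PredictedState:
    t: float      # seconds from now
    x: float      # metres
    y: float      # metres

PROXIMITY_THRESHOLD_M = 10.0   # illustrative threshold

def predict_gbo_locations(abv_track: List[PredictedState],
                          offset_x: float, offset_y: float) -> List[PredictedState]:
    """Derive future locations of the ground-based object from the
    air-based vehicle that tracks it (assumed constant tracking offset)."""
    return [PredictedState(s.t, s.x + offset_x, s.y + offset_y) for s in abv_track]

def detect_upcoming_proximity(gbo_track: List[PredictedState],
                              gbv_track: List[PredictedState]) -> bool:
    """Detect an upcoming spatial proximity between the GBO and the GBV."""
    for gbo, gbv in zip(gbo_track, gbv_track):
        if math.hypot(gbo.x - gbv.x, gbo.y - gbv.y) < PROXIMITY_THRESHOLD_M:
            return True
    return False

def monitor(abv_track: List[PredictedState], gbv_track: List[PredictedState]) -> None:
    gbo_track = predict_gbo_locations(abv_track, offset_x=0.0, offset_y=-2.0)
    if detect_upcoming_proximity(gbo_track, gbv_track):
        print("ALERT: provide alert signal to the GBO and/or the GBV")

abv = [PredictedState(t, 5.0 * t, 0.0) for t in range(5)]
gbv = [PredictedState(t, 20.0 - 3.0 * t, -1.0) for t in range(5)]
monitor(abv, gbv)   # the trajectories converge, so an alert is raised
```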
Abstract:
Drones are controlled by one or more control devices and comprise a respective camera for image capture. The one or more control devices perform a method including obtaining projected flight paths of the drones, obtaining a projected camera setting of the respective camera, computing, as a function of the projected camera setting, a projected viewing frustum of the respective camera, defining, for the drones, projected time-space trajectories of no-fly zones based on the projected viewing frustum of the respective camera, analyzing the projected flight paths of the drones in relation to the projected time-space trajectories for detection of a violation of one or more of the no-fly zones, and setting an operative flight path and/or an operative camera setting for at least one selected drone to prevent the violation.
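The frustum-based no-fly-zone check could be sketched as follows in Python, flattened to two dimensions for readability; a real viewing frustum is a 3-D volume, and all class names, camera parameters, and sample paths here are illustrative assumptions rather than details from the abstract.

```python
import math
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class CameraSetting:
    heading_deg: float
    fov_deg: float
    range_m: float

@dataclass
class Waypoint:
    t: float
    x: float
    y: float

def in_frustum(camera_pos: Waypoint, setting: CameraSetting, point: Waypoint) -> bool:
    """Test whether a point lies inside the projected viewing frustum."""
    dx, dy = point.x - camera_pos.x, point.y - camera_pos.y
    if math.hypot(dx, dy) > setting.range_m:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    offset = (bearing - setting.heading_deg + 180.0) % 360.0 - 180.0
    return abs(offset) <= setting.fov_deg / 2.0

def detect_violations(flight_paths: Dict[str, List[Waypoint]],
                      camera_settings: Dict[str, CameraSetting]) -> List[Tuple[float, str, str]]:
    """Check each drone's projected flight path against the time-space
    trajectories of the no-fly zones defined by the other drones' frustums."""
    violations = []
    for filming, setting in camera_settings.items():
        for other, path in flight_paths.items():
            if other == filming:
                continue
            for cam_wp, wp in zip(flight_paths[filming], path):
                if wp.t == cam_wp.t and in_frustum(cam_wp, setting, wp):
                    violations.append((wp.t, other, filming))
    return violations

paths = {
    "drone_a": [Waypoint(t, 0.0, 0.0) for t in range(3)],
    "drone_b": [Waypoint(t, 10.0 + t, 1.0) for t in range(3)],
}
settings = {"drone_a": CameraSetting(heading_deg=0.0, fov_deg=60.0, range_m=40.0)}
print(detect_violations(paths, settings))
# Any violation found here would trigger setting an operative flight path
# and/or an operative camera setting for at least one selected drone.
```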
Abstract:
A control device operates a drone with an onboard camera. The control device obtains a current performance metric to be computed for an activity performed by an individual, determines, based on a positioning rule associated with the current performance metric, a selected relative position, SRP, between the individual and the onboard camera, identifies a reference plane of the individual, operates the drone to move the onboard camera from an initial relative position to attain the SRP in relation to the reference plane, operates the onboard camera, when in the SRP, to capture image(s) of the individual, and provides the image(s) for computation of the current performance metric for the activity performed by the individual. The SRP may be defined, by the positioning rule, to ensure that the orientation of the individual in the image(s) is relevant or optimal for the current performance metric.
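A compact Python sketch of this control flow is given below, assuming the positioning rule maps each metric to an SRP expressed as an angle relative to the individual's reference plane plus a distance and height; the metric names, rule values, and controller interface are hypothetical illustrations only.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

# Illustrative positioning rules: metric -> SRP as (angle from the reference
# plane in degrees, distance in metres, height in metres).
POSITIONING_RULES: Dict[str, Tuple[float, float, float]] = {
    "stride_length": (90.0, 5.0, 1.5),     # side view relative to sagittal plane
    "shoulder_rotation": (0.0, 4.0, 1.8),  # frontal view
}

@dataclass
class DroneController:
    position: Tuple[float, float, float] = (0.0, 0.0, 0.0)

    def move_camera_to(self, srp: Tuple[float, float, float]) -> None:
        """Operate the drone so the onboard camera attains the SRP
        in relation to the individual's reference plane."""
        self.position = srp

    def capture_images(self, n: int = 3) -> List[str]:
        # Placeholder frames; a real implementation would return camera images.
        return [f"frame_{i}_at_{self.position}" for i in range(n)]

def capture_for_metric(metric: str, controller: DroneController) -> List[str]:
    srp = POSITIONING_RULES[metric]        # SRP from the positioning rule
    controller.move_camera_to(srp)         # attain SRP relative to reference plane
    images = controller.capture_images()   # capture image(s) while in the SRP
    return images                          # provided for computation of the metric

print(capture_for_metric("stride_length", DroneController()))
```

Keying the rule on the metric is what ensures the individual's orientation in the captured image(s) suits the metric being computed, e.g. a side view for stride length versus a frontal view for shoulder rotation in this illustrative rule set.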