Abstract:
A device and method for providing a visual cue for improved text imaging on a mobile device. The method includes determining a minimum text size for accurate optical character recognition (OCR) of an image captured by the mobile device, receiving an image stream of a printed substrate, and displaying the image stream and a visual cue superimposed onto the image stream, wherein the visual cue is indicative of the minimum text size. The method further includes capturing a digital image of the image stream, wherein the digital image does not include the visual cue. Additionally, the method further includes notifying a user of the mobile device when text displayed within the image stream is at least as large as the minimum text size.
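To make the size check concrete, the following is a minimal sketch of the comparison the abstract describes, assuming an illustrative minimum character height in pixels and a hypothetical helper that supplies the heights of text lines detected in the live image stream; the constant, helper names, and cue geometry are assumptions for illustration, not details from the patent.

# Sketch: compare detected text height against a minimum OCR-legible size.
# MIN_TEXT_HEIGHT_PX and the detected-height list are illustrative assumptions.

MIN_TEXT_HEIGHT_PX = 24  # assumed minimum character height for reliable OCR


def cue_rectangle(frame_width, frame_height, min_height=MIN_TEXT_HEIGHT_PX):
    """Return a centered rectangle whose height equals the minimum text size,
    to be drawn over the live preview (but not into the captured image)."""
    x = frame_width // 2 - 5 * min_height
    y = frame_height // 2 - min_height // 2
    return (x, y, 10 * min_height, min_height)


def text_is_large_enough(detected_heights, min_height=MIN_TEXT_HEIGHT_PX):
    """Notify the user when every detected text line is at least the minimum size."""
    return bool(detected_heights) and min(detected_heights) >= min_height


if __name__ == "__main__":
    print(cue_rectangle(1920, 1080))           # overlay geometry for the preview
    print(text_is_large_enough([30, 41, 28]))  # True -> notify the user
    print(text_is_large_enough([12, 41]))      # False -> text still too small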
Abstract:
A system and method of providing annotated trajectories by receiving image frames from a video camera and determining a location based on the image frames from the video camera. The system and method can further include the steps of determining that the location is associated with a preexisting annotation and displaying the preexisting annotation. Additionally or alternatively, the system and method can further include the steps of generating a new annotation automatically or based on a user input and associating the new annotation with the current location.
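A minimal sketch of the lookup-or-create step follows, assuming locations are snapped to a coarse grid so nearby observations share an annotation; the grid cell size and dictionary storage are assumptions, not the patent's implementation.

# Sketch: look up or create an annotation for the location estimated from
# video frames. Quantization step and storage layout are illustrative.

annotations = {}  # maps a quantized (x, y) location to its annotation text


def quantize(location, cell_size=1.0):
    """Snap a continuous (x, y) position to a grid cell so nearby
    observations share one annotation."""
    x, y = location
    return (round(x / cell_size), round(y / cell_size))


def annotation_for(location, new_text=None):
    """Return a preexisting annotation for this location, or associate a new
    one (supplied by the user or generated automatically) if none exists."""
    key = quantize(location)
    if key in annotations:
        return annotations[key]       # display the preexisting annotation
    if new_text is not None:
        annotations[key] = new_text   # associate the new annotation
        return new_text
    return None


if __name__ == "__main__":
    print(annotation_for((3.2, 7.9), "Turn left at the lobby"))  # created
    print(annotation_for((3.4, 8.1)))                            # reused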
Abstract:
A method and device for aligning an image of a printed substrate using a mobile device. The method includes receiving, by an image capturing device, an image stream of a printed substrate; determining, by a processing device operably connected to the image capturing device, a location and a geometry of the printed substrate from the image stream; displaying, on a display operably connected to the processing device, the image stream; overlaying, by the processing device, at least a first visual marker onto the printed substrate as displayed in the image stream using the location and geometry; and instructing, by the processing device, a user of the mobile device to move the mobile device to align the mobile device and the printed substrate. The device includes the various hardware components configured to perform the method of aligning.
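As a rough illustration of the instruction step, the sketch below turns a detected substrate center into a movement hint for the user; the centering tolerance, sign convention, and message wording are assumptions for illustration only.

# Sketch: derive a movement instruction from the detected substrate location.
# Sign convention assumed: positive dx means the page appears right of center,
# so the user pans the device to the right; tolerance is illustrative.

def alignment_instruction(substrate_center, frame_size, tolerance=20):
    """Compare the detected substrate center with the frame center and tell
    the user which way to move the device."""
    cx, cy = substrate_center
    fx, fy = frame_size[0] / 2, frame_size[1] / 2
    dx, dy = cx - fx, cy - fy
    steps = []
    if dx > tolerance:
        steps.append("move the device right")
    elif dx < -tolerance:
        steps.append("move the device left")
    if dy > tolerance:
        steps.append("move the device down")
    elif dy < -tolerance:
        steps.append("move the device up")
    return " and ".join(steps) if steps else "hold steady: the page is aligned"


if __name__ == "__main__":
    print(alignment_instruction((1100, 500), (1920, 1080)))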
Abstract:
A three-dimensional (3D) printer includes a nozzle and a camera configured to capture a real image or a real video of a liquid metal while the liquid metal is positioned at least partially within the nozzle. The 3D printer also includes a computing system configured to perform operations. The operations include generating a model of the liquid metal positioned at least partially within the nozzle. The operations also include generating a simulated image or a simulated video of the liquid metal positioned at least partially within the nozzle based at least partially upon the model. The operations also include generating a labeled dataset that comprises the simulated image or the simulated video and a first set of parameters. The operations also include reconstructing the liquid metal in the real image or the real video based at least partially upon the labeled dataset.
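The simulate-then-reconstruct idea can be sketched with a toy parametric model: render synthetic images from a parameter set, keep the parameters as labels, and recover the parameters of a "real" image by matching it against the labeled simulations. The droplet-as-disk model, grid size, and nearest-neighbor matching below are assumptions, not the patent's model.

# Sketch: labeled simulated dataset plus nearest-neighbor reconstruction.
import numpy as np


def simulate(radius, size=32):
    """Render a crude simulated image: a filled disk standing in for the
    liquid metal inside the nozzle."""
    yy, xx = np.mgrid[:size, :size]
    return ((xx - size / 2) ** 2 + (yy - size / 2) ** 2 <= radius ** 2).astype(float)


# Labeled dataset: each entry pairs a simulated image with its parameter set.
dataset = [(simulate(r), {"radius": r}) for r in range(3, 15)]


def reconstruct(real_image):
    """Pick the parameters whose simulated image best matches the real image
    (nearest neighbor in pixel space)."""
    errors = [np.abs(sim - real_image).sum() for sim, _ in dataset]
    return dataset[int(np.argmin(errors))][1]


if __name__ == "__main__":
    observed = simulate(9) + 0.05 * np.random.rand(32, 32)  # noisy "real" image
    print(reconstruct(observed))  # -> {'radius': 9}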
Abstract:
A mobile electronic device processes a sequence of images to identify and re-identify an object of interest in the sequence. An image sensor of the device receives a sequence of images. The device detects an object in a first image, as well as positional parameters of the device that correspond to the object in the first image. The device determines a range of positional parameters within which the object may appear in a field of view of the device. When the device detects that the object of interest has exited the field of view, it subsequently uses motion sensor data to determine that the object of interest has likely re-entered the field of view, and it then analyzes the current frame to confirm that the object of interest has re-entered the field of view.
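A minimal sketch of that re-identification gate follows: remember the range of device orientations in which the object was visible, and only re-run the detector on the current frame when motion-sensor readings fall back inside that range. The angular margin and function names are assumptions for illustration.

# Sketch: orientation-based gate for re-running object detection.

def orientation_range(object_azimuths, margin=10.0):
    """Range of device azimuths (degrees) in which the object appeared."""
    return (min(object_azimuths) - margin, max(object_azimuths) + margin)


def likely_reentered(current_azimuth, saved_range):
    """True when motion-sensor data says the device is again pointing where
    the object was last seen, so the current frame should be re-analyzed."""
    low, high = saved_range
    return low <= current_azimuth <= high


if __name__ == "__main__":
    saved = orientation_range([42.0, 47.5, 51.0])  # while the object was tracked
    print(likely_reentered(30.0, saved))  # False: object probably still out of view
    print(likely_reentered(45.0, saved))  # True: confirm with the detector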
Abstract:
A document may include a non-magnetic substrate, a first colorant mixture printed as a first image upon the substrate, the first colorant mixture including a magnetic ink, and a second colorant mixture printed as a second image upon the substrate in substantially close spatial proximity to the printed first colorant mixture. The second colorant mixture may consist essentially of one or more non-magnetic inks and exhibit properties of both low visual contrast and high magnetic contrast against the first colorant mixture, such that the resultant printed substrate does not reveal the first image to the human eye, but will reveal the first image to a magnetic image reader.
Abstract:
Methods and systems for continuously monitoring the gaze direction of a driver of a vehicle over time. Video is received, which is captured by a camera associated with, for example, a mobile device within a vehicle, the camera and/or mobile device mounted facing the driver of the vehicle. Frames can then be extracted from the video. A facial region can then be detected, which corresponds to the face of the driver within the extracted frames. Feature descriptors can then be computed from the facial region. A gaze classifier derived from the vehicle, the driver, and the camera can then be applied, wherein the gaze classifier receives the feature descriptors as inputs and outputs at least one label corresponding to one or more of a predefined finite number of gaze classes to identify the gaze direction of the driver of the vehicle.
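The per-frame classification step can be sketched as follows, with a nearest-centroid rule standing in for the trained gaze classifier; the gaze classes listed, the toy two-dimensional descriptors, and the centroid values are illustrative assumptions rather than the patent's trained model.

# Sketch: map a facial-region feature descriptor to a finite gaze class.
import numpy as np

GAZE_CLASSES = ["road", "left mirror", "right mirror", "instrument cluster"]

# Assumed per-class centroid descriptors (e.g., pooled gradient features).
CENTROIDS = np.array([
    [0.9, 0.1],   # road
    [0.2, 0.8],   # left mirror
    [0.8, 0.7],   # right mirror
    [0.4, 0.3],   # instrument cluster
])


def classify_gaze(descriptor):
    """Label the frame with the gaze class whose centroid is closest to the
    frame's feature descriptor."""
    distances = np.linalg.norm(CENTROIDS - np.asarray(descriptor), axis=1)
    return GAZE_CLASSES[int(np.argmin(distances))]


if __name__ == "__main__":
    print(classify_gaze([0.85, 0.15]))  # -> "road"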
Abstract:
A method, non-transitory computer readable medium and apparatus for generating graphical chromophore maps are disclosed. For example, the method includes receiving an image of a customer from a mobile endpoint device of the customer, wherein the image is taken via the mobile endpoint device of the customer, converting RGB values of the image into a spectral representation, performing a constrained independent component analysis (ICA) on the spectral representation to obtain three or more independent components that are ordered, generating a first graphical chromophore map of a first independent component of the three or more independent components that are ordered and a second graphical chromophore map of a second independent component of the three or more independent components that are ordered and transmitting the first graphical chromophore map and the second graphical chromophore map to the mobile endpoint device of the customer for display.
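The decomposition step can be sketched as below: lift RGB pixels to a spectral representation and separate them into independent components that can be rendered as chromophore maps. Plain FastICA from scikit-learn stands in for the constrained ICA of the abstract, and the RGB-to-spectral matrix is a random placeholder rather than a calibrated model.

# Sketch: RGB -> spectral lift -> ICA -> per-component chromophore maps.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
image_rgb = rng.random((64, 64, 3))        # stand-in for the customer photo
rgb_to_spectral = rng.random((3, 31))      # placeholder 31-band spectral lift

pixels = image_rgb.reshape(-1, 3)
spectral = pixels @ rgb_to_spectral        # (n_pixels, 31) spectral representation

ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(spectral)   # three independent components

# Each column, reshaped to image size, is one graphical chromophore map
# (e.g., the first and second maps) to send back to the mobile endpoint device.
map_one = components[:, 0].reshape(64, 64)
map_two = components[:, 1].reshape(64, 64)
print(map_one.shape, map_two.shape)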
Abstract:
A method of automatically identifying a border in a captured image may include capturing an image of a target by an image sensor of an electronic device, and, by one or more processors, processing the image to automatically detect a border of the target in the image by applying an automatic border detection method to the image. The method may include presenting the image of the target to a user via a display of the electronic device so that the presented image comprises a visual depiction of the detected border, receiving an adjustment of the border from the user, determining whether to update the default parameters based on the received adjustment, in response to determining to update the default parameters, determining one or more updated parameters for the automatic border detection method that are based on, at least in part, the received adjustment, and saving the updated parameters.
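The feedback loop can be sketched as follows: run automatic border detection with a default parameter, accept the user's adjusted border, and fold the adjustment back into the parameter when it is large enough to matter. The single margin parameter, the inset-rectangle detector, and the update threshold are assumptions for illustration only.

# Sketch: default-parameter border detection with user-driven parameter update.

params = {"margin": 0.05}  # default: border inset 5% from each image edge


def detect_border(width, height, margin):
    """Toy automatic border: a rectangle inset by `margin` on every side."""
    dx, dy = int(width * margin), int(height * margin)
    return (dx, dy, width - dx, height - dy)


def maybe_update_params(detected, adjusted, width, threshold=0.01):
    """Update the default margin if the user's adjustment shifted the border
    by more than the threshold fraction of the image width."""
    shift = abs(adjusted[0] - detected[0]) / width
    if shift > threshold:
        params["margin"] = adjusted[0] / width
    return params


if __name__ == "__main__":
    detected = detect_border(1000, 800, params["margin"])  # (50, 40, 950, 760)
    adjusted = (80, 40, 950, 760)                           # user drags the left edge
    print(maybe_update_params(detected, adjusted, 1000))    # margin becomes 0.08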
Abstract:
In a system for detecting the location of an object inside a building, an image capture device of a mobile electronic device captures an image of a boundary of a room in which the mobile electronic device is positioned. The system extracts features of a boundary (ceiling, wall, or floor) in the image to determine whether the mobile electronic device is in a known location. When the system identifies a known location, it will take an action that provides the mobile electronic device with additional functionality at the identified known location. Such functionality may include connecting to a wireless network or communicating with a stationary device at the known location.
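The lookup step can be sketched as a match between features extracted from a ceiling, wall, or floor image and a small database of known locations, each paired with the functionality to enable there. The feature vectors, distance threshold, location names, and actions below are illustrative assumptions.

# Sketch: match boundary features to a known location and its associated action.

KNOWN_LOCATIONS = {
    "conference room": {"features": (0.9, 0.2, 0.4), "action": "connect to room Wi-Fi"},
    "print room":      {"features": (0.1, 0.8, 0.5), "action": "pair with the printer"},
}


def identify_location(features, threshold=0.3):
    """Return (location, action) if the boundary features match a known
    location closely enough, else (None, None)."""
    best, best_dist = None, float("inf")
    for name, entry in KNOWN_LOCATIONS.items():
        dist = sum((a - b) ** 2 for a, b in zip(features, entry["features"])) ** 0.5
        if dist < best_dist:
            best, best_dist = name, dist
    if best_dist <= threshold:
        return best, KNOWN_LOCATIONS[best]["action"]
    return None, None


if __name__ == "__main__":
    print(identify_location((0.85, 0.25, 0.38)))  # -> ('conference room', 'connect to room Wi-Fi')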