Abstract:
A 3D printer includes a nozzle configured to jet a drop of liquid metal therethrough. The 3D printer also includes a light source configured to illuminate the drop with a pulse of light. A duration of the pulse of light is from about 0.0001 seconds to about 0.1 seconds. The 3D printer also includes a camera configured to capture an image, video, or both of the drop. The 3D printer also includes a computing system configured to detect the drop in the image, the video, or both. The computing system is also configured to characterize the drop after the drop is detected. Characterizing the drop includes determining a size of the drop, a location of the drop, or both in the image, the video, or both.
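The abstract does not disclose the detection algorithm itself; a minimal sketch of drop characterization, assuming a simple intensity threshold on a strobed grayscale frame (the function name, threshold value, and equivalent-diameter size metric are all illustrative assumptions, not the patent's method):

```python
import math

def characterize_drop(frame, threshold=128):
    """Detect a bright drop in a grayscale frame (a 2D list of 0-255
    intensities) and characterize it: centroid location and an
    equivalent diameter in pixels. Returns None if no pixel exceeds
    the threshold, i.e. no drop was detected."""
    bright = [(x, y) for y, row in enumerate(frame)
                     for x, v in enumerate(row) if v > threshold]
    if not bright:
        return None
    cx = sum(x for x, _ in bright) / len(bright)
    cy = sum(y for _, y in bright) / len(bright)
    # Size estimate: diameter of a circle with the same pixel area.
    diameter = 2.0 * math.sqrt(len(bright) / math.pi)
    return {"centroid": (cx, cy), "diameter_px": diameter}
```

With a short strobe pulse freezing the drop's motion, one such call per frame yields the per-drop size and location the abstract describes.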
Abstract:
A method includes capturing a video of a plurality of drops being jetted through a nozzle of a printer. The method also includes measuring a signal proximate to the nozzle based at least partially upon the video. The method also includes determining one or more metrics that characterize a behavior of the drops based at least partially upon the signal.
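One way to reduce a near-nozzle signal to behavior metrics is sketched below, assuming the signal is a per-frame scalar intensity and a "drop event" is a rising crossing of a threshold (the metric names and threshold are assumptions for illustration):

```python
from statistics import pstdev

def drop_metrics(signal, fps, threshold=0.5):
    """Derive simple jetting metrics from a per-frame intensity signal
    measured near the nozzle: drop count, jetting rate, and the
    frame-to-frame jitter of the intervals between drop events."""
    # A drop event is a rising crossing of the threshold.
    events = [i for i in range(1, len(signal))
              if signal[i] >= threshold and signal[i - 1] < threshold]
    duration_s = len(signal) / fps
    intervals = [b - a for a, b in zip(events, events[1:])]
    return {
        "drop_count": len(events),
        "drops_per_second": len(events) / duration_s,
        "interval_jitter_frames": pstdev(intervals) if intervals else 0.0,
    }
```

A low jitter value indicates regular, stable jetting; a rising value flags erratic drop behavior worth investigating.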
Abstract:
A method operates a three-dimensional (3D) metal object manufacturing system to compensate for displacement errors that occur during object formation. In the method, image data of a metal object being formed by the 3D metal object manufacturing system is generated prior to completion of the metal object and compared to original 3D object design data of the object to identify one or more displacement errors. For the displacement errors outside a predetermined difference range, the method modifies machine-ready instructions for forming metal object layers not yet formed to compensate for the identified displacement errors and operates the 3D metal object manufacturing system using the modified machine-ready instructions.
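The compensation step can be sketched as follows, assuming layer positions reduce to (x, y) centroids and the correction is a uniform opposite offset applied to unformed layers (a simplification of whatever the actual machine-ready instruction format is):

```python
def compensate_layers(design, measured, pending, tolerance=0.05):
    """Compare measured (x, y) positions of already-formed layers with
    the original design positions. If the mean displacement falls
    outside the tolerance, shift the instructions for layers not yet
    formed by the opposite offset. Units are arbitrary (e.g. mm)."""
    n = len(design)
    dx = sum(m[0] - d[0] for m, d in zip(measured, design)) / n
    dy = sum(m[1] - d[1] for m, d in zip(measured, design)) / n
    if abs(dx) <= tolerance and abs(dy) <= tolerance:
        return pending  # within the predetermined range: no change
    return [(x - dx, y - dy) for x, y in pending]
```

Running this check periodically during the build lets early-layer drift be cancelled out before it accumulates through the remaining layers.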
Abstract:
A system creates an electronic file corresponding to a printed artifact by launching a video capture module that causes a mobile electronic device to capture a video of a scene that includes the printed artifact. The system analyzes image frames in the video in real time as the video is captured to identify a suitable instance. In one example, the suitable instance is a frame or sequence of frames that contain an image of a page or side of the printed artifact and that do not exhibit a page-turn event. In response to identification of the suitable instance, the system will automatically cause a photo capture module of the device to capture a still image of the printed artifact. The still image has a resolution that is higher than that of the image frames in the video. The system will save the captured still images to a computer-readable file.
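The "suitable instance" test can be approximated by a motion-stability check: a page-turn produces large inter-frame differences, so a run of low-motion frames signals a settled page. A minimal sketch, assuming frames are flat lists of pixel intensities and using mean absolute difference as the motion measure (both are illustrative choices, not the patented analysis):

```python
def find_suitable_frame(frames, motion_threshold=10.0, stable_run=3):
    """Scan video frames and return the index of the first frame that
    ends a run of `stable_run` consecutive low-motion frames -- a
    proxy for 'page visible, no page-turn event in progress'.
    Returns None if the video never settles."""
    def motion(a, b):
        # Mean absolute pixel difference between consecutive frames.
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    run = 0
    for i in range(1, len(frames)):
        if motion(frames[i - 1], frames[i]) < motion_threshold:
            run += 1
            if run >= stable_run:
                return i  # trigger the high-resolution photo capture here
        else:
            run = 0  # a page turn or camera shake resets the run
    return None
```

On the returned index, the system would switch from the low-resolution video stream to the device's photo capture module for the full-resolution still.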
Abstract:
A method, non-transitory computer readable medium and apparatus for generating an interactive image of facial skin of a user that is displayed via a mobile endpoint device of the user are disclosed. For example, the method includes displaying a guide to position a face of the user, capturing an image of the face of the user, transmitting the image to a facial skin analysis server for analyzing one or more parameters of the facial skin of the user, receiving the interactive image of the face of the user that includes metadata associated with the one or more parameters of the facial skin that were analyzed by the facial skin analysis server, and displaying the interactive image of the face of the user.
Abstract:
A method and system for identifying content relevance comprises acquiring video data, mapping the acquired video data to a feature space to obtain a feature representation of the video data, assigning the acquired video data to at least one action class based on the feature representation of the video data, and determining a relevance of the acquired video data.
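The assignment-plus-relevance step could look like the following nearest-centroid sketch, assuming the feature space is a plain vector space and relevance decays with distance to the chosen class (the distance metric and relevance formula are assumptions, not the claimed method):

```python
import math

def classify_action(feature, class_centroids):
    """Assign a video's feature vector to the nearest action-class
    centroid (Euclidean distance) and report a relevance score in
    (0, 1] that shrinks as the vector moves away from the class."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = min(class_centroids, key=lambda c: dist(feature, class_centroids[c]))
    relevance = 1.0 / (1.0 + dist(feature, class_centroids[best]))
    return best, relevance
```

A relevance near 1 means the video sits close to a known action class; a low score flags content that matches no class well and may be irrelevant.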
Abstract:
A method, non-transitory computer readable medium, and apparatus for localizing a region of interest using a hand gesture are disclosed. For example, the method acquires an image containing the hand gesture from an ego-centric video, detects pixels that correspond to one or more hands in the image using a hand segmentation algorithm, identifies a hand enclosure in the pixels that are detected within the image, localizes a region of interest based on the hand enclosure, and performs an action based on an object in the region of interest.
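Identifying the hand enclosure can be framed as finding background pixels surrounded by hand pixels. A minimal sketch, assuming a binary hand mask from the segmentation step and using a border flood fill to separate open background from enclosed background (a standard technique, not necessarily the patented one):

```python
def enclosure_roi(mask):
    """Given a binary hand mask (1 = hand pixel, 0 = background), find
    background pixels fully enclosed by the hand and return their
    bounding box (xmin, ymin, xmax, ymax), or None if no enclosure."""
    h, w = len(mask), len(mask[0])
    # Flood-fill the open background from every border pixel.
    outside, stack = set(), [(x, y) for y in range(h) for x in range(w)
                             if (x in (0, w - 1) or y in (0, h - 1))
                             and mask[y][x] == 0]
    while stack:
        x, y = stack.pop()
        if (x, y) in outside:
            continue
        outside.add((x, y))
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and mask[ny][nx] == 0:
                stack.append((nx, ny))
    # Enclosed pixels: background not reachable from the border.
    enclosed = [(x, y) for y in range(h) for x in range(w)
                if mask[y][x] == 0 and (x, y) not in outside]
    if not enclosed:
        return None
    xs, ys = [p[0] for p in enclosed], [p[1] for p in enclosed]
    return min(xs), min(ys), max(xs), max(ys)
```

The returned bounding box is the localized region of interest; whatever object it contains then drives the follow-on action.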
Abstract:
A mobile electronic device is used to decode a printed correlation mark. The device receives an image of a printed correlation mark, identifies a decoding template, applies the template to detect hidden content within the printed correlation mark, and outputs an image of the detected hidden content on the display. The device may enhance the image before presenting it on the display.
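The template-application step might be sketched as a pixel-wise combination of the mark and the decoding template. Here pixel-wise XOR on binary images stands in for the correlation operation (the real decoding transform is not disclosed by the abstract, so this operator is purely illustrative):

```python
def decode_mark(mark, template):
    """Overlay a decoding template on a printed correlation mark, both
    given as binary 2D lists. Pixel-wise XOR is used as a simple
    stand-in: cells where the mark and template patterns differ light
    up, revealing the hidden content region."""
    return [[m ^ t for m, t in zip(mrow, trow)]
            for mrow, trow in zip(mark, template)]
```

The resulting binary image would then be enhanced (e.g. contrast-stretched or despeckled) before display, as the abstract notes.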
Abstract:
A mobile electronic device processes a sequence of images to identify and re-identify an object of interest in the sequence. An image sensor of the device receives a sequence of images. The device detects an object in a first image, along with positional parameters of the device that correspond to the object in the first image. The device determines a range of positional parameters within which the object may appear in a field of view of the device. When the device detects that the object of interest has exited the field of view, it uses motion sensor data to determine when the object has likely re-entered the field of view; it then analyzes the current frame to confirm that the object of interest has re-entered the field of view.
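The motion-sensor gating can be sketched as below, assuming a single orientation angle (in degrees) stands in for the positional parameters and the visibility range is half the camera's field of view on either side of where the object was last seen (both simplifying assumptions):

```python
class ReIdTracker:
    """Decide, from motion-sensor orientation alone, whether the object
    of interest is likely back in the camera's field of view -- so the
    expensive vision-based confirmation only runs when it is."""

    def __init__(self, fov_deg=60.0):
        self.half_fov = fov_deg / 2.0
        self.object_angle = None  # orientation at last detection

    def register(self, device_angle):
        # Called when the detector finds the object: remember the
        # device orientation that corresponds to it.
        self.object_angle = device_angle

    def likely_in_view(self, device_angle):
        # True when the device points within half the FOV of where the
        # object was registered -- the cue to re-run the detector on
        # the current frame and confirm re-identification.
        if self.object_angle is None:
            return False
        return abs(device_angle - self.object_angle) <= self.half_fov
```

This keeps the detector idle while the sensors say the object is out of range, and triggers frame analysis only on likely re-entry.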
Abstract:
A mobile electronic device application uses various hardware parameters for operation. The application leverages calibration data from other users to determine what the parameters should be for the particular device model on which the application is installed. The application queries a cloud-based data store by sending the model and a hardware-variable parameter to the data store. If a value for the parameter is available in the data store, the application will receive it from the data store and use it in operation. If the value is not available, the application will prompt the user to calibrate the application. The application will use the calibration results to identify a setting, and it will send the setting to the data store for use by other instances of the application installed on the same device model.
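The lookup-or-calibrate flow reduces to a cache-miss pattern; a minimal sketch, with a plain dict standing in for the cloud-based data store and a callable standing in for the user-facing calibration routine (both hypothetical stand-ins):

```python
def get_parameter(store, model, param, calibrate):
    """Fetch a hardware-variable parameter for this device model from
    the shared data store. On a miss, run the calibration routine and
    publish the result so other installs on the same model reuse it."""
    value = store.get((model, param))
    if value is not None:
        return value                   # another user already calibrated
    value = calibrate()                # prompt-the-user calibration
    store[(model, param)] = value      # share with other instances
    return value
```

The first install of the app on a given model pays the calibration cost once; every later install on that model gets the value for free.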