Abstract:
A mechanism is described for facilitating age classification of humans using image depth and human pose according to one embodiment. A method of embodiments, as described herein, includes facilitating, by one or more cameras of a computing device, capturing of a video stream of a scene having persons, and computing overall-depth torso lengths of the persons based on depth torso lengths of the persons. The method may further include comparing the overall-depth torso lengths with a predetermined threshold value representing a separation age between adults and children, and classifying a first set of the persons as adults if a first set of the overall-depth torso lengths associated with the first set of persons is greater than the threshold value.
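The comparison step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the threshold value, length units, and function names are assumptions.

```python
# Assumed separation value between adult and child torso lengths (illustrative).
ADULT_TORSO_THRESHOLD_CM = 45.0

def classify_persons(overall_depth_torso_lengths):
    """Classify each person as 'adult' or 'child' by comparing the
    overall-depth torso length against the predetermined threshold."""
    return [
        "adult" if length > ADULT_TORSO_THRESHOLD_CM else "child"
        for length in overall_depth_torso_lengths
    ]

print(classify_persons([52.3, 30.1, 48.0]))  # → ['adult', 'child', 'adult']
```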
Abstract:
Technologies for providing cues to a user of a cognitive cuing system are disclosed. The cues can be based on the context of the user. The cognitive cuing system communicates with a knowledge-based system which provides information based on the context, such as the name of a person and the relationship the user of the cognitive cuing system has with the person. The cues can be provided to the user of the cognitive cuing system through visual, auditory, or haptic means.
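The cue flow above can be sketched as a lookup against a knowledge-based system followed by delivery through a chosen modality. The dictionary-backed knowledge base, key format, and function names below are illustrative assumptions.

```python
# Mock knowledge-based system mapping a context key to person information.
KNOWLEDGE_BASE = {
    "face:alice": {"name": "Alice", "relationship": "coworker"},
}

def get_cue(context_key):
    """Query the knowledge base with the user's context; build a cue string."""
    info = KNOWLEDGE_BASE.get(context_key)
    if info is None:
        return None
    return f"This is {info['name']}, your {info['relationship']}."

def deliver_cue(cue, modality="auditory"):
    # A real system could render the cue visually, speak it, or use haptics.
    return (modality, cue)

print(deliver_cue(get_cue("face:alice")))
```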
Abstract:
An apparatus for video summarization using semantic information is described herein. The apparatus includes a controller, a scoring mechanism, and a summarizer. The controller is to segment an incoming video stream into a plurality of activity segments, wherein each frame is associated with an activity. The scoring mechanism is to calculate a score for each frame of each activity, wherein the score is based on a plurality of objects in each frame. The summarizer is to summarize the activity segments based on the score for each frame.
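The scoring-and-summarizing pipeline can be sketched as below, assuming each frame is represented by its list of detected objects and each activity segment is a list of frames. Scoring by object count is an illustrative stand-in for the described scoring mechanism.

```python
def score_frame(objects):
    """Score a frame based on the plurality of objects it contains."""
    return len(objects)

def summarize(activity_segments, top_k=1):
    """Keep the top_k highest-scoring frames from each activity segment."""
    summary = []
    for segment in activity_segments:
        ranked = sorted(segment, key=score_frame, reverse=True)
        summary.extend(ranked[:top_k])
    return summary

segments = [
    [["dog"], ["dog", "ball", "person"]],                   # activity 1
    [["car"], ["car", "light"], ["car", "light", "sign"]],  # activity 2
]
print(summarize(segments))  # best frame per activity segment
```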
Abstract:
An example apparatus for encoding video frames includes a receiver to receive video frames and a heat map from a camera and expected object regions from a video database. The apparatus also includes a region of interest (ROI) map generator to detect a region of interest in a video frame based on the expected object regions. The ROI map generator can also detect a region of interest in the video frame based on the heat map. The ROI map generator can then generate an ROI map based on the detected regions of interest. The apparatus further includes a parameter adjuster to adjust an encoding parameter based on the ROI map. The apparatus also further includes a video encoder to encode the video frame using the adjusted encoding parameter.
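The ROI-map and parameter-adjustment steps can be sketched as follows, assuming regions of interest receive a lower quantization parameter (higher quality) than the rest of the frame. The QP values, map layout, and function names are assumptions for illustration.

```python
def generate_roi_map(width, height, roi_boxes):
    """Build a per-pixel ROI map: 1 inside any ROI box, 0 elsewhere."""
    roi_map = [[0] * width for _ in range(height)]
    for (x0, y0, x1, y1) in roi_boxes:
        for y in range(y0, y1):
            for x in range(x0, x1):
                roi_map[y][x] = 1
    return roi_map

def adjust_qp(roi_map, base_qp=32, roi_qp=24):
    """Assign a quantization parameter per pixel based on the ROI map."""
    return [[roi_qp if v else base_qp for v in row] for row in roi_map]

roi_map = generate_roi_map(4, 3, [(1, 1, 3, 2)])
qp_map = adjust_qp(roi_map)
print(qp_map[1])  # row crossing the ROI → [32, 24, 24, 32]
```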
Abstract:
Examples include a determination how to manage storage of a video clip generated from recorded video based upon a sensor event. Managing storage of the video clip may include determining whether to save or delete the video clip based on an imprint associated with an object that indicates whether the object is included in the video clip.
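The save-or-delete decision can be sketched as below, assuming an "imprint" is represented as a set of object identifiers attached to the clip. The set-based representation and names are illustrative assumptions.

```python
def should_save(clip_imprint, objects_of_interest):
    """Save the clip if its imprint indicates any object of interest
    is included in the video clip; otherwise it may be deleted."""
    return bool(clip_imprint & objects_of_interest)

print(should_save({"person", "car"}, {"person"}))  # → True
print(should_save({"tree"}, {"person"}))           # → False
```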
Abstract:
Generally discussed herein are systems and apparatuses for gesture-based augmented reality. Also discussed herein are methods of using the systems and apparatuses. According to an example a method may include detecting, in image data, an object and a gesture, in response to detecting the object in the image data, providing data indicative of the detected object, in response to detecting the gesture in the image data, providing data indicative of the detected gesture, and modifying the image data using the data indicative of the detected object and the data indicative of the detected gesture.
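The detect-then-modify flow above can be sketched as follows. Detection is stubbed (a real detector would operate on pixel data), and the gesture name, overlay format, and function names are assumptions.

```python
def detect(image_data):
    # Stub: here the "image" carries its labels directly for illustration.
    return image_data.get("object"), image_data.get("gesture")

def modify(image_data, obj, gesture):
    """Modify the image data using both the detected object and gesture."""
    if obj and gesture == "pinch":
        return {**image_data, "overlay": f"zoomed:{obj}"}
    return image_data

frame = {"object": "cup", "gesture": "pinch"}
obj, gesture = detect(frame)
print(modify(frame, obj, gesture)["overlay"])  # → zoomed:cup
```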