Abstract:
An automated computer-implemented method for generating a time-based comparative report is provided. The method includes receiving a selection of a first time period from a user device, identifying a first seasonality characteristic of the first time period, determining a second time period such that the second time period has a second seasonality characteristic matching the first seasonality characteristic, receiving from one or more data storage devices a first data set defined by the first time period and a second data set defined by at least one of the second time period and a user-selected third time period, and generating a comparative report using the first data set and the second data set.
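The seasonality-matching step above could be sketched as follows. This is an illustrative assumption, not the claimed method: `matching_period` and the year-offset rule stand in for one possible seasonality characteristic (calendar alignment with the same span of the prior year, which preserves quarter and holiday position).

```python
from datetime import date

def matching_period(start: date, end: date) -> tuple[date, date]:
    """Illustrative sketch: determine a second time period whose
    seasonality characteristic matches the first by taking the same
    calendar span one year earlier. (Does not handle Feb 29.)"""
    return (start.replace(year=start.year - 1),
            end.replace(year=end.year - 1))
```

The report generator would then fetch a first data set for the selected period and a second data set for the returned period (or a user-selected third period) before building the comparison.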
Abstract:
A method, in an application executing on a client device, includes: displaying a camera event history provided by a remote server system, where the camera event history is presented as a chronologically-ordered set of event identifiers, each event identifier corresponding to a respective event for which a remote camera has captured an associated video; receiving a user selection of a displayed event identifier; in response to receiving the user selection of the displayed event identifier: expanding the selected event identifier into a video player window, the video player window consuming a portion of the displayed camera event history, and playing, in the video player window, the captured video; and in response to terminating playback of the captured video or user de-selection of the displayed event identifier, collapsing the video player window into the selected event identifier, thereby stopping the playing of the captured video.
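The expand/collapse behavior described above can be sketched as a small state model. `EventRow` and its method names are hypothetical illustration, not the application's actual classes:

```python
class EventRow:
    """Sketch of one event identifier in the camera event history:
    selecting it expands it into a playing video window; deselecting
    it (or playback ending) collapses it and stops playback."""
    def __init__(self, event_id: str):
        self.event_id = event_id
        self.expanded = False
        self.playing = False

    def select(self) -> None:
        # Expanding the identifier into a video player window starts
        # playback of the captured video.
        self.expanded = True
        self.playing = True

    def collapse(self) -> None:
        # Invoked on user de-selection or playback termination.
        self.expanded = False
        self.playing = False
```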
Abstract:
Methods and systems for providing for display attribution data associated with one or more events are disclosed. A processor identifies channels from paths including events, each event corresponding to position data identifying a position along the path at which the event was performed. The processor determines attribution credits assigned to each event included in the paths corresponding to a channel, and determines a number of attribution credits assigned to the channel. The processor identifies, from the paths, a plurality of event-position pairs, each event-position pair corresponding to events that correspond to a respective channel and are performed at a respective position of the paths. The processor determines, for each identified event-position pair, a weighting based on an aggregate of the attribution credits assigned to the events to which the event-position pair corresponds, and provides, for display, a visual object including an indicator to display the determined weightings.
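The aggregation over event-position pairs can be sketched as below. The paths, channel names, and fractional credit values are made-up illustrative data; the abstract does not specify an attribution model.

```python
from collections import defaultdict

# Each path is an ordered list of (channel, attribution_credit) events;
# an event's position is its index along the path.
paths = [
    [("search", 0.2), ("email", 0.3), ("direct", 0.5)],
    [("search", 0.5), ("direct", 0.5)],
]

pair_weight = defaultdict(float)    # (channel, position) -> aggregated credit
channel_total = defaultdict(float)  # channel -> total credit across paths
for path in paths:
    for position, (channel, credit) in enumerate(path):
        pair_weight[(channel, position)] += credit
        channel_total[channel] += credit
```

A visual object (e.g., a heat-map-style grid of channels by position) could then display an indicator scaled by each pair's aggregated weighting.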
Abstract:
A method includes obtaining from an image sensor of a video camera a primary real-time video stream comprising images of a field of view of the video camera; identifying from the primary video stream one or more regions of interest in the field of view of the video camera; while obtaining the primary video stream, creating a first video sub-stream comprising a first plurality of images for a first one of the one or more identified regions of interest, wherein: images of the first plurality of images include image data for portions of the field of view of the video camera that include the first identified region of interest, and the images of the first plurality of images have fields of view that are smaller than the field of view of the video camera; and providing the first video sub-stream for display at a client device.
Abstract:
The various implementations described herein include systems and methods for recognizing persons in video streams. In one aspect, a method includes: (1) obtaining a live video stream; (2) detecting a first person in the stream; (3) determining, from analysis of the live video stream, personally identifiable information of the first person; (4) determining, based on the personally identifiable information, that the first person is not known to the computing system; (5) in accordance with the determination that the first person is not known: (a) storing the personally identifiable information; and (b) requesting a user to classify the first person; and (6) in accordance with (i) a determination that a predetermined amount of time has elapsed since the request was transmitted and a response was not received, or (ii) a determination that a response was received classifying the first person as a stranger, deleting the stored personally identifiable information.
Abstract:
A method at a server system includes: receiving a video stream from a remote video camera, wherein the video stream comprises a plurality of video frames; selecting a plurality of non-contiguous frames from the video stream, the plurality of non-contiguous frames being associated with a predetermined time interval; encoding the plurality of non-contiguous frames as a compressed video segment associated with the time interval; receiving a request from an application running on a client device to review video from the remote video camera for the time interval; and in response to the request, transmitting the video segment to the client device for viewing in the application.
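The frame-selection step (picking non-contiguous frames spaced by a predetermined interval, timelapse-style) can be sketched as follows; the function name and bucket rule are illustrative, and the encoding step is omitted.

```python
def select_noncontiguous(frame_timestamps: list[float],
                         interval_s: float) -> list[int]:
    """Return the indices of one frame per interval bucket: a
    non-contiguous selection from the stream suitable for encoding
    as a compressed segment covering the time interval (sketch)."""
    selected: list[int] = []
    next_cut = None
    for i, ts in enumerate(frame_timestamps):
        if next_cut is None or ts >= next_cut:
            selected.append(i)
            next_cut = ts + interval_s
    return selected
```

The server would encode the selected frames as one compressed segment and return it when the client application requests review of that interval.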
Abstract:
A method at an electronic device includes obtaining from an image sensor a primary real-time video stream comprising images of a scene; identifying from the primary video stream one or more regions of interest in the scene; while obtaining the primary video stream, creating a first video sub-stream comprising a first plurality of images for a first one of the one or more identified regions of interest, wherein: images of the first plurality of images include image data for portions of the scene that include the first identified region of interest, and the images of the first plurality of images have fields of view that are smaller than the field of view for the images of the primary video stream; and providing the first video sub-stream for display at a client device.
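The sub-stream creation can be sketched as a crop of each primary-stream image to the region of interest. Frames are modeled here as plain lists of pixel rows, and the `(x0, y0, x1, y1)` ROI format is an assumption for illustration:

```python
def crop_region(frame: list, roi: tuple) -> list:
    """frame: list of pixel rows; roi: (x0, y0, x1, y1) in pixel
    coordinates. Returns the smaller-field-of-view image covering
    only the portion of the scene that includes the ROI."""
    x0, y0, x1, y1 = roi
    return [row[x0:x1] for row in frame[y0:y1]]

def substream(primary_frames, roi):
    """While the primary stream is obtained, yield the cropped images
    that make up the first video sub-stream (sketch)."""
    for frame in primary_frames:
        yield crop_region(frame, roi)
```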
Abstract:
A method at an electronic device with a display includes: displaying a user interface having a first region and a second region; receiving, and displaying in the first region of the user interface, a live video stream of a physical environment captured by a remote video camera, where at least some of the live video stream is recorded at a remote server; displaying, in the second region, a timeline corresponding to a timespan for a first portion of a duration during which the live video stream may have been recorded; in response to receiving a user interaction to move the timespan to a second portion of the duration, transitioning the displayed timeline to a new timeline that corresponds to the timespan for the second portion, and while transitioning, displaying, in the first region, a subset of video frames representing the first and/or second portion of the duration.