Abstract:
Methods and systems for executing an application include extending a container orchestration system application programming interface (API) to handle objects that specify components of an application. An application representation is executed using the extended container orchestration system API, including the instantiation of one or more services that define a data stream path from a sensor to a device.
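A minimal sketch of how such an extended API might be exercised, assuming a Kubernetes-style orchestrator and the official kubernetes Python client; the group name apps.example.com, the DataPipeline kind, and the sensor/service/device field names are illustrative assumptions, not details from the abstract:

    # Illustrative only: submit a custom "application" object to an extended
    # orchestrator API that instantiates the services forming a
    # sensor-to-device data stream path. Group/kind/field names are assumed.
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() inside a cluster
    api = client.CustomObjectsApi()

    app_object = {
        "apiVersion": "apps.example.com/v1",
        "kind": "DataPipeline",
        "metadata": {"name": "camera-to-display"},
        "spec": {
            "source": {"sensor": "camera-01"},           # data producer
            "services": ["decode", "detect", "notify"],  # stream processing stages
            "sink": {"device": "operator-console"},      # data consumer
        },
    }

    api.create_namespaced_custom_object(
        group="apps.example.com",
        version="v1",
        namespace="default",
        plural="datapipelines",
        body=app_object,
    )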
Abstract:
Systems and methods are provided for increasing accuracy of video analytics tasks in real time by acquiring a video using video cameras, and identifying fluctuations in the accuracy of video analytics applications across consecutive frames of the video. The identified fluctuations are quantified based on an average relative difference of true-positive detection counts across consecutive frames. Fluctuations in accuracy are reduced by applying transfer learning to a deep learning model initially trained using images, and retraining the deep learning model using video frames. A quality of object detections is determined based on the number of track IDs assigned by a tracker across different video frames. Optimization of the reduction of fluctuations includes iteratively repeating the identifying, the quantifying, the reducing, and the determining of the quality of object detections until a threshold is reached. Model predictions for each frame in the video are generated using the retrained deep learning model.
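As a rough illustration of the quantification step, the sketch below computes an average relative difference of true-positive counts across consecutive frames; normalizing by the larger of the two counts is an assumption, since the abstract does not fix a normalization:

    # Illustrative sketch: quantify accuracy fluctuation as the average relative
    # difference of true-positive detection counts across consecutive frames.
    def fluctuation_score(tp_counts):
        """tp_counts: list of true-positive counts, one per video frame."""
        diffs = []
        for prev, curr in zip(tp_counts, tp_counts[1:]):
            denom = max(prev, curr)
            if denom == 0:            # no detections in either frame
                diffs.append(0.0)
            else:
                diffs.append(abs(curr - prev) / denom)
        return sum(diffs) / len(diffs) if diffs else 0.0

    # Example: counts that jump between 8 and 2 yield a high fluctuation score.
    print(fluctuation_score([8, 2, 8, 2, 8]))   # -> 0.75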
Abstract:
Systems and methods for network bandwidth optimization, including transmitting sensor data from one or more sensors over a wireless network into a generated network slice, submitting a Quality-of-Service (QoS) request for one or more applications by specifying desired network slice characteristics, and predicting the network bandwidth needed to grant the QoS request for the one or more applications using a cost function based on the magnitude, direction, and frequency of prediction error. Time-varying network bandwidth usage is continuously monitored, and new QoS requests for the one or more applications are periodically submitted based on the monitoring. An updated prediction of the bandwidth needed for the new QoS request is generated using the cost function, and network bandwidth reservations are iteratively adjusted based on the updated prediction to provide the one or more applications with sufficient network resources to support the new QoS request.
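A hedged sketch of what a cost function sensitive to the magnitude, direction, and frequency of prediction error could look like; the asymmetric weights (under-provisioning penalized more than over-provisioning) and the violation-count term are illustrative assumptions, not values from the abstract:

    # Illustrative sketch: cost of a bandwidth prediction that accounts for the
    # magnitude, direction, and frequency of prediction errors.
    def prediction_cost(predicted, observed, under_weight=4.0, over_weight=1.0,
                        freq_weight=0.5):
        """predicted/observed: lists of bandwidth samples (e.g., Mbps) over time."""
        cost = 0.0
        violations = 0
        for p, o in zip(predicted, observed):
            err = p - o
            if err < 0:                      # under-provisioned: QoS at risk
                cost += under_weight * (-err)
                violations += 1
            else:                            # over-provisioned: wasted reservation
                cost += over_weight * err
        # Penalize how often the reservation was insufficient, not just by how much.
        cost += freq_weight * violations
        return cost

    print(prediction_cost([10, 10, 10], [8, 12, 9]))  # over, under, over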
Abstract:
Systems and methods for determining dwell time are provided. The method includes receiving images of an area including one or more people from one or more cameras, and detecting a presence of each of the one or more people in the received images using a worker. The method further includes receiving, by the worker, digital facial features stored in a watch list from a master controller, and performing facial recognition and monitoring the dwell time of each of the one or more people. The method further includes determining whether each of the one or more people is in the watch list or has exceeded a dwell time threshold.
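A minimal sketch of the worker-side bookkeeping, assuming per-frame detections and a face matcher are supplied by other components; the match_watch_list helper, the detection format, and the 5-minute threshold are hypothetical:

    # Illustrative sketch: a worker tracks per-person dwell time and flags anyone
    # who is on the watch list or has stayed longer than a threshold.
    DWELL_THRESHOLD_S = 300.0   # assumed 5-minute threshold

    first_seen = {}             # person_id -> timestamp of first detection

    def process_frame(detections, timestamp, watch_list, match_watch_list):
        """detections: list of (person_id, face_features) for one frame.
        match_watch_list(features, watch_list) -> True if the face matches a
        watch-list entry (hypothetical helper backed by the master controller).
        """
        alerts = []
        for person_id, features in detections:
            first_seen.setdefault(person_id, timestamp)
            dwell = timestamp - first_seen[person_id]
            if match_watch_list(features, watch_list):
                alerts.append((person_id, "on watch list", dwell))
            elif dwell > DWELL_THRESHOLD_S:
                alerts.append((person_id, "dwell time exceeded", dwell))
        return alerts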
Abstract:
A method for performing resource orchestration for microservices-based 5G applications in a dynamic, heterogeneous, multi-tiered compute and network environment is presented. The method includes managing compute requirements and network requirements of a microservices-based application jointly by positioning computing nodes distributed across multiple layers, across edges and at a central cloud, identifying and modeling coupling relationships between compute and network resources for a plurality of microservices, when only application-level requirements are provided, to build coupling functions, solving a multi-objective optimization problem to identify how each of the plurality of microservices is deployed in the dynamic, heterogeneous, multi-tiered compute and network environment by employing the coupling functions to jointly optimize resource usage of the compute and network resources across different compute and network slices, and deriving optimal joint network and compute resource allocation and function placement decisions.
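A toy sketch of the coupling idea, assuming a coupling function that maps an application-level request rate to each microservice's compute and bandwidth needs, and a weighted-sum placement score standing in for the multi-objective solver; the coefficients and tier costs are illustrative assumptions:

    # Illustrative sketch: couple application-level load to per-microservice
    # compute and bandwidth needs, then score candidate placements.
    def coupling(requests_per_s, cpu_per_req, bytes_per_req):
        """Return (cpu_cores, mbps) needed by one microservice at the given load."""
        return requests_per_s * cpu_per_req, requests_per_s * bytes_per_req * 8e-6

    def placement_score(placement, load, profiles, tiers, w_compute=1.0, w_net=1.0):
        """placement: {service: tier}; profiles: {service: (cpu_per_req, bytes_per_req)};
        tiers: {tier: (cpu_cost, net_cost)}. Lower is better; a weighted sum is
        used here as a simple stand-in for the multi-objective optimization."""
        score = 0.0
        for service, tier in placement.items():
            cpu, mbps = coupling(load, *profiles[service])
            cpu_cost, net_cost = tiers[tier]
            score += w_compute * cpu * cpu_cost + w_net * mbps * net_cost
        return score

    profiles = {"decode": (0.002, 5e5), "detect": (0.01, 1e4)}
    tiers = {"edge": (2.0, 0.1), "cloud": (1.0, 1.0)}   # edge compute costlier, edge network cheaper
    print(placement_score({"decode": "edge", "detect": "cloud"}, 100, profiles, tiers))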
Abstract:
A method for employing a semi-supervised learning approach to improve accuracy of a small model on an edge device is presented. The method includes collecting a plurality of frames from a plurality of video streams generated from a plurality of cameras, each camera associated with a respective small model, each small model deployed in the edge device, sampling the plurality of frames to define sampled frames, performing inference on the sampled frames by using a big model, the big model shared by all of the plurality of cameras and deployed in a cloud or cloud edge, using the big model to generate labels for each of the sampled frames to generate training data, and training each of the small models with the training data to generate updated small models on the edge device.
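A minimal sketch of the labeling-and-retraining loop, assuming the big and small models expose simple predict and fit interfaces and that frame sampling is uniform; those interfaces and the sampling rate are assumptions, not details from the abstract:

    # Illustrative sketch: use a shared big (cloud) model to pseudo-label sampled
    # frames from each camera, then retrain that camera's small (edge) model.
    def sample_frames(frames, every_n=30):
        """Keep one frame out of every `every_n` (assumed uniform sampling)."""
        return frames[::every_n]

    def update_small_models(camera_streams, small_models, big_model):
        """camera_streams: {camera_id: list of frames};
        small_models: {camera_id: model with .fit(frames, labels)};
        big_model: shared model with .predict(frame) -> label."""
        for camera_id, frames in camera_streams.items():
            sampled = sample_frames(frames)
            labels = [big_model.predict(f) for f in sampled]   # pseudo-labels
            small_models[camera_id].fit(sampled, labels)        # retrain on them
        return small_models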
Abstract:
Methods and systems for video analysis and response include detecting face images within video streams. Noisy images are filtered from the detected face images. Batches of the remaining detected face images are clustered to generate mini-clusters, constrained by temporal locality. The mini-clusters are globally clustered to generate merged clusters formed of face images for respective people, using camera-chain information to constrain the set of video streams being considered. Analytics are performed on the merged clusters to identify a tracked individual's movements through an environment. A response to the tracked individual's movements is then performed.
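A simplified sketch of the two-stage clustering, assuming faces arrive as (camera, timestamp, embedding) records and that cosine similarity over embeddings is the match signal; the thresholds, the time window, and the camera-chain table are illustrative assumptions:

    # Illustrative sketch: group face embeddings into time-local mini-clusters,
    # then allow global merging only across cameras that the camera-chain
    # information links to one another.
    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def mini_cluster(records, window_s=60.0, sim_thresh=0.8):
        """records: list of (camera, ts, embedding). Greedy, time-local grouping."""
        clusters = []   # each: {"camera", "start", "end", "members"}
        for camera, ts, emb in sorted(records, key=lambda r: r[1]):
            for c in clusters:
                if (c["camera"] == camera and ts - c["end"] <= window_s
                        and cosine(emb, c["members"][0]) >= sim_thresh):
                    c["members"].append(emb)
                    c["end"] = ts
                    break
            else:
                clusters.append({"camera": camera, "start": ts, "end": ts,
                                 "members": [emb]})
        return clusters

    def can_merge(c1, c2, camera_chain):
        """Global merge step only considers clusters from the same camera or from
        cameras reachable from one another per the camera-chain information."""
        return (c1["camera"] == c2["camera"]
                or c2["camera"] in camera_chain.get(c1["camera"], set()))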
Abstract:
Methods and systems for controlling a user interface include identifying a user at a station based on facial recognition of an image of the user's face in a video stream, to match a profile for the user. At least one preference of the user is determined for the display of content, based on the matched profile. Content for the user is configured in accordance with the at least one preference. The configured content is displayed on a user interface of the station.
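A small sketch of the profile-matching and content-configuration flow, assuming face embeddings are already extracted from the video stream and that preferences are simple key/value settings; the profile store, the similarity threshold, and the preference names are assumptions:

    # Illustrative sketch: match a face embedding from the station's video stream
    # to a stored profile, then configure the station's content from that profile.
    import numpy as np

    PROFILES = {
        "alice": {"embedding": np.array([0.1, 0.9, 0.2]),
                  "preferences": {"language": "en", "font_size": "large"}},
    }

    def identify_user(face_embedding, profiles, threshold=0.8):
        best_name, best_sim = None, threshold
        for name, profile in profiles.items():
            ref = profile["embedding"]
            sim = float(np.dot(face_embedding, ref) /
                        (np.linalg.norm(face_embedding) * np.linalg.norm(ref)))
            if sim > best_sim:
                best_name, best_sim = name, sim
        return best_name            # None if no profile matched

    def configure_content(content, preferences):
        """Return the content tagged with the user's display preferences."""
        return {"body": content, **preferences}

    user = identify_user(np.array([0.1, 0.85, 0.25]), PROFILES)
    if user:
        print(configure_content("welcome screen", PROFILES[user]["preferences"]))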
Abstract:
A big data processing system includes a memory management engine having stream buffers, real-time views and models, and batch views and models, the stream buffers coupleable to one or more stream processing frameworks to process stream data, the batch models coupleable to one or more batch processing frameworks; one or more processing engines including Join, Group, Filter, Aggregate, and Project functional units and classifiers; and a client layer engine communicating with one or more big data applications, the client layer engine handling an output layer, an API layer, and a unified query layer.
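A toy sketch of the unified query layer idea, assuming batch views hold older, precomputed results and real-time views cover recent stream data that the batch layer has not yet recomputed; the view structures and the additive merge rule are assumptions for illustration:

    # Illustrative sketch: a unified query that merges a precomputed batch view
    # with a real-time view built from the stream buffers, so client applications
    # see one answer without knowing which engine produced it.
    batch_view = {"page_views": {"2024-05-01": 10_000, "2024-05-02": 12_000}}
    realtime_view = {"page_views": {"2024-05-02": 450, "2024-05-03": 1_200}}

    def unified_query(metric, batch, realtime):
        """Merge per-key results; recent (real-time) counts are added on top of
        batch counts that have not yet been recomputed."""
        merged = dict(batch.get(metric, {}))
        for key, value in realtime.get(metric, {}).items():
            merged[key] = merged.get(key, 0) + value
        return merged

    print(unified_query("page_views", batch_view, realtime_view))
    # {'2024-05-01': 10000, '2024-05-02': 12450, '2024-05-03': 1200}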
Abstract:
Systems and methods for swapping pinned memory regions out and in between main memory and a separate storage location in a system, including establishing an offload buffer in an interposing library, and swapping out pinned memory regions by transferring offload buffer data from a coprocessor memory to a host processor memory and unregistering and unmapping a memory region employed by the offload buffer from the interposing library, wherein the interposing library is pre-loaded on the coprocessor and collects and stores information employed during the swapping out. The pinned memory regions are swapped in by mapping and re-registering the files to the memory region employed by the offload buffer, and transferring the offload buffer data from the host processor memory back to the re-registered memory region.
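The registration steps are coprocessor-specific, but the bookkeeping performed around them can be sketched at a high level; the sketch below assumes swapped-out contents are written to host-side files and abstracts the real unregister/re-register driver calls behind caller-supplied callbacks (placeholders, not real APIs):

    # Illustrative sketch of the swap-out/swap-in bookkeeping only: the actual
    # unregister/unmap and re-register/map steps are driver-specific and are
    # represented here by callbacks supplied by the caller.
    import os
    import tempfile

    swap_table = {}   # region_id -> (file_path, size) recorded by the library

    def swap_out(region_id, buffer_bytes, unregister_region):
        """Write the offload buffer's contents to host-side storage, then release
        (unregister/unmap) the pinned region via the supplied callback."""
        path = os.path.join(tempfile.gettempdir(), f"offload_{region_id}.bin")
        with open(path, "wb") as f:
            f.write(buffer_bytes)
        swap_table[region_id] = (path, len(buffer_bytes))
        unregister_region(region_id)

    def swap_in(region_id, register_region):
        """Re-register/map the region via the supplied callback, then restore the
        saved contents from host-side storage."""
        path, size = swap_table.pop(region_id)
        register_region(region_id, size)
        with open(path, "rb") as f:
            data = f.read()
        os.remove(path)
        return data

    # Usage with no-op callbacks standing in for the real registration calls:
    swap_out("r0", b"\x00" * 16, unregister_region=lambda rid: None)
    print(len(swap_in("r0", register_region=lambda rid, size: None)))   # 16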