-
Publication No.: US20180046865A1
Publication Date: 2018-02-15
Application No.: US15384911
Filing Date: 2016-12-20
Applicant: QUALCOMM Incorporated
Inventor: Ying Chen, Lei Wang, Jinglun Gao
CPC classification number: G06K9/00771, G06K9/3233, G06K9/6201, G06K2009/6213, G06T7/20, G06T2207/10004, G06T2207/10016
Abstract: Techniques and systems are provided for processing video data. For example, techniques and systems are provided for matching a plurality of bounding boxes to a plurality of trackers. In some examples, a first association is performed, in which one or more of the plurality of bounding boxes are associated with one or more of the plurality of trackers by minimizing distances between the one or more bounding boxes and the one or more trackers. A set of unmatched trackers is identified from the plurality of trackers after the first association; these trackers were not associated with any bounding box from the plurality of bounding boxes during the first association. A second association is then performed, in which each of the unmatched trackers is associated with a bounding box from the plurality of bounding boxes that is within a first pre-determined distance. A set of unmatched bounding boxes is identified from the plurality of bounding boxes after the second association; these bounding boxes were not associated with any tracker from the plurality of trackers during the second association. A third association is then performed, in which each of the unmatched bounding boxes is associated with a tracker from the plurality of trackers that is within a second pre-determined distance.
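As a rough illustration of the three-stage matching described in this abstract, the Python sketch below pairs box and tracker centers with a greedy minimum-distance first pass, then applies the two distance-gated passes for unmatched trackers and unmatched boxes. The greedy matcher, the center-distance metric, and all names (associate, first_distance, second_distance) are assumptions for illustration, not the claimed implementation.

```python
import math

def center_distance(box, tracker):
    # Both are (cx, cy) centers in pixels.
    return math.hypot(box[0] - tracker[0], box[1] - tracker[1])

def associate(boxes, trackers, first_distance, second_distance):
    matches = []                       # (tracker_index, box_index) pairs
    used_boxes, used_trackers = set(), set()

    # First association: greedily pair boxes and trackers by minimum distance.
    candidates = sorted((center_distance(b, t), bi, ti)
                        for bi, b in enumerate(boxes)
                        for ti, t in enumerate(trackers))
    for _, bi, ti in candidates:
        if bi not in used_boxes and ti not in used_trackers:
            matches.append((ti, bi))
            used_boxes.add(bi)
            used_trackers.add(ti)

    # Second association: each still-unmatched tracker takes the nearest
    # bounding box that lies within the first pre-determined distance.
    for ti, t in enumerate(trackers):
        if ti in used_trackers or not boxes:
            continue
        dist, bi = min((center_distance(b, t), bi) for bi, b in enumerate(boxes))
        if dist <= first_distance:
            matches.append((ti, bi))
            used_boxes.add(bi)

    # Third association: each still-unmatched bounding box takes the nearest
    # tracker that lies within the second pre-determined distance.
    for bi, b in enumerate(boxes):
        if bi in used_boxes or not trackers:
            continue
        dist, ti = min((center_distance(b, t), ti) for ti, t in enumerate(trackers))
        if dist <= second_distance:
            matches.append((ti, bi))
    return matches
```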
-
Publication No.: US12142084B2
Publication Date: 2024-11-12
Application No.: US17561309
Filing Date: 2021-12-23
Applicant: QUALCOMM INCORPORATED
Inventor: Chun-Ting Huang, Lei Wang, Ning Bi
Abstract: Methods, systems, and apparatuses are provided to automatically determine whether an image is spoofed. For example, a computing device may obtain an image, and may execute a trained convolutional neural network to ingest elements of the image. Further, and based on the ingested elements of the image, the executed trained convolutional neural network generates an output map that includes a plurality of intensity values. In some examples, the trained convolutional neural network includes a plurality of down sampling layers, a plurality of up sampling layers, and a plurality of joint spatial and channel attention layers. Further, the computing device may determine whether the image is spoofed based on the plurality of intensity values. The computing device may also generate output data based on the determination of whether the image is spoofed, and may store the output data within a data repository.
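The structure described here (downsampling and upsampling layers with joint spatial-and-channel attention producing an intensity map that is then thresholded) can be sketched roughly in PyTorch. The layer sizes, the particular attention formulation, and the 0.5 decision threshold below are assumptions for illustration only, not the claimed network.

```python
import torch
import torch.nn as nn

class JointAttention(nn.Module):
    # Illustrative joint spatial-and-channel attention gate.
    def __init__(self, channels):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        return x * self.channel_gate(x) * self.spatial_gate(x)

class SpoofMapNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.attn = JointAttention(32)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid())

    def forward(self, x):
        # Returns an N x 1 x H x W map of per-pixel intensity values.
        return self.up(self.attn(self.down(x)))

def is_spoofed(model, image, threshold=0.5):
    # Decide from the mean of the output intensity map (assumed decision rule).
    with torch.no_grad():
        intensity_map = model(image)
    return intensity_map.mean().item() > threshold

model = SpoofMapNet().eval()
print(is_spoofed(model, torch.rand(1, 3, 128, 128)))
```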
-
Publication No.: US10372970B2
Publication Date: 2019-08-06
Application No.: US15266747
Filing Date: 2016-09-15
Applicant: QUALCOMM Incorporated
Inventor: Lei Wang, Dashan Gao, Lei Ma, Chinchuan Chiu
Abstract: To determine real-world information about objects moving in a scene, the camera capturing the scene is typically calibrated to the scene. Automatic scene calibration can be accomplished using people that are found moving about in the scene. During a calibration period, a video content analysis system processing video frames from a camera can identify blobs that are associated with people. Using an estimated height of a typical person, the video content analysis system can use the location of the person's head and feet to determine a mapping between the person's location in the 2-D video frame and the person's location in the 3-D real world. This mapping can be used to determine a cost for estimated extrinsic parameters for the camera. Using a hierarchical global estimation mechanism, the video content analysis system can determine the estimated extrinsic parameters with the lowest cost.
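The head/foot mapping and the hierarchical search over extrinsic parameters can be illustrated with a small numpy sketch: each foot pixel is back-projected onto the ground plane, the head is predicted at an assumed person height, and a coarse-to-fine grid search over camera pitch and height minimizes the reprojection cost. The coordinate conventions, the two-parameter extrinsic model, and the grid-search schedule are simplifying assumptions, not the patented estimator.

```python
import numpy as np

def rot_x(pitch):
    # Camera-to-world rotation for a camera pitched down by `pitch` radians.
    c, s = np.cos(pitch), np.sin(pitch)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

def head_foot_cost(pitch, cam_height, focal, pp, observations, person_height=1.7):
    """Pixel error between observed head points and heads predicted by dropping
    each foot point onto the ground plane and raising it by person_height.
    Convention: world y points down (like image rows), ground plane is y = 0,
    camera sits at y = -cam_height."""
    R = rot_x(pitch)
    C = np.array([0.0, -cam_height, 0.0])
    cost = 0.0
    for foot_px, head_px in observations:
        ray = R @ np.array([(foot_px[0] - pp[0]) / focal,
                            (foot_px[1] - pp[1]) / focal,
                            1.0])
        if ray[1] <= 1e-6:                 # ray never reaches the ground
            cost += 1e6
            continue
        foot_world = C + (cam_height / ray[1]) * ray
        head_world = foot_world + np.array([0.0, -person_height, 0.0])
        p_cam = R.T @ (head_world - C)
        u = focal * p_cam[0] / p_cam[2] + pp[0]
        v = focal * p_cam[1] / p_cam[2] + pp[1]
        cost += np.hypot(u - head_px[0], v - head_px[1])
    return cost

def calibrate(focal, pp, observations, levels=3):
    # Coarse-to-fine grid search standing in for the hierarchical global
    # estimation: each level re-samples a narrower window around the best
    # (pitch, camera height) found so far.
    pitch_lo, pitch_hi, h_lo, h_hi = 0.05, 1.2, 1.5, 10.0
    best = (pitch_lo, h_lo, np.inf)
    for _ in range(levels):
        for p in np.linspace(pitch_lo, pitch_hi, 11):
            for h in np.linspace(h_lo, h_hi, 11):
                c = head_foot_cost(p, h, focal, pp, observations)
                if c < best[2]:
                    best = (p, h, c)
        p_step = (pitch_hi - pitch_lo) / 10.0
        h_step = (h_hi - h_lo) / 10.0
        pitch_lo, pitch_hi = best[0] - p_step, best[0] + p_step
        h_lo, h_hi = max(0.1, best[1] - h_step), best[1] + h_step
    return best   # (pitch, camera height, cost)
```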
-
Publication No.: US10223590B2
Publication Date: 2019-03-05
Application No.: US15262700
Filing Date: 2016-09-12
Applicant: QUALCOMM Incorporated
Inventor: Ying Chen, Lei Wang, Jinglun Gao, Ning Bi
Abstract: Techniques and systems are provided for processing video data. For example, techniques and systems are provided for performing content-adaptive morphology operations. A first erosion function can be performed on a foreground mask of a video frame, including setting one or more foreground pixels of the frame to one or more background pixels. A temporary foreground mask can be generated based on the first erosion function being performed on the foreground mask. One or more connected components can be generated for the frame by performing connected component analysis to connect one or more neighboring foreground pixels. A complexity of the frame (or of the foreground mask of the frame) can be determined by comparing a number of the one or more connected components to a threshold number. A second erosion function can be performed on the temporary foreground mask when the number of the one or more connected components is higher than the threshold number. The one or more connected components can be output for blob processing when the number of the one or more connected components is lower than the threshold number.
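A compact way to picture the content-adaptive decision in this abstract is the OpenCV sketch below: erode once, count connected components, and erode a second time only when the count exceeds a threshold. The 3x3 kernel and the component-count threshold are illustrative assumptions.

```python
import numpy as np
import cv2

def adaptive_erosion(foreground_mask, component_threshold=20):
    kernel = np.ones((3, 3), np.uint8)
    # First erosion: flip isolated foreground pixels to background.
    temp_mask = cv2.erode(foreground_mask, kernel)
    # Connected-component analysis on the eroded mask (label 0 is background).
    num_labels, _ = cv2.connectedComponents(temp_mask)
    num_components = num_labels - 1
    if num_components > component_threshold:
        # Complex frame: erode again to suppress noisy fragments.
        temp_mask = cv2.erode(temp_mask, kernel)
    # Otherwise the components are clean enough to hand to blob processing.
    return temp_mask

mask = (np.random.rand(240, 320) > 0.5).astype(np.uint8) * 255
clean = adaptive_erosion(mask)
```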
-
Publication No.: US20180075593A1
Publication Date: 2018-03-15
Application No.: US15266747
Filing Date: 2016-09-15
Applicant: QUALCOMM Incorporated
Inventor: Lei Wang, Dashan Gao, Lei Ma, Chinchuan Chiu
CPC classification number: G06K9/00248, G06K9/00281, G06K9/00369, G06K9/00718, G06K9/00771, G06T7/246, G06T7/70, G06T7/73, G06T7/80, G06T7/85, G06T2207/30196, G06T2207/30201, G06T2207/30232, G06T2207/30244, H04N13/261
Abstract: To determine real-world information about objects moving in a scene, the camera capturing the scene is typically calibrated to the scene. Automatic scene calibration can be accomplished using people that are found moving about in the scene. During a calibration period, a video content analysis system processing video frames from a camera can identify blobs that are associated with people. Using an estimated height of a typical person, the video content analysis system can use the location of the person's head and feet to determine a mapping between the person's location in the 2-D video frame and the person's location in the 3-D real world. This mapping can be used to determine a cost for estimated extrinsic parameters for the camera. Using a hierarchical global estimation mechanism, the video content analysis system can determine the estimated extrinsic parameters with the lowest cost.
-
Publication No.: US20180048894A1
Publication Date: 2018-02-15
Application No.: US15402757
Filing Date: 2017-01-10
Applicant: QUALCOMM Incorporated
IPC: H04N19/142, H04N19/176, H04N19/182, H04N19/172
CPC classification number: H04N19/142, G06T5/007, G06T5/50, G06T7/11, G06T7/136, G06T7/194, G06T2207/10016, G06T2207/30232, H04N19/172, H04N19/176, H04N19/182
Abstract: Techniques and systems are provided for processing video data. For example, techniques and systems are provided for compensating for lighting changes in one or more video frames. To perform the lighting change compensation, a current frame and a background picture are obtained. A frame-level lighting condition change is then detected for the current frame. A block-level comparison of the current frame and the background picture is performed when the frame-level lighting condition change is detected. The block-level comparison includes comparing a block of pixels of the current frame with a corresponding block of pixels of the background picture. Based on the block-level comparison, it is determined that a change in the block of the current frame relative to a previous frame is associated with a change in lighting. Blob-level lighting compensation can also be performed.
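The frame-level then block-level flow can be sketched in numpy as follows: a global mean-intensity shift triggers the block comparison, and a block whose texture stays correlated with the corresponding background block is attributed to a lighting change rather than foreground motion. The mean-shift test, 16x16 block size, and correlation threshold are assumptions for illustration, not the claimed tests.

```python
import numpy as np

def frame_lighting_changed(frame, background, mean_shift_thresh=15.0):
    # Frame-level test: a large global shift in mean intensity suggests a
    # lighting change rather than new foreground objects.
    return abs(frame.mean() - background.mean()) > mean_shift_thresh

def block_is_lighting_change(block, bg_block, corr_thresh=0.9):
    # Block-level test: lighting changes preserve texture, so the block stays
    # highly correlated with the background block even if its intensity shifts.
    b, g = block.astype(np.float64).ravel(), bg_block.astype(np.float64).ravel()
    if b.std() < 1e-6 or g.std() < 1e-6:
        return True  # flat blocks: treat intensity-only differences as lighting
    return np.corrcoef(b, g)[0, 1] > corr_thresh

def lighting_compensation_map(frame, background, block=16):
    # Returns a per-block boolean map: True where the change is lighting-only.
    h, w = frame.shape
    lighting = np.zeros((h // block, w // block), dtype=bool)
    if not frame_lighting_changed(frame, background):
        return lighting
    for by in range(h // block):
        for bx in range(w // block):
            ys, xs = by * block, bx * block
            lighting[by, bx] = block_is_lighting_change(
                frame[ys:ys + block, xs:xs + block],
                background[ys:ys + block, xs:xs + block])
    return lighting
```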
-
Publication No.: US20180047173A1
Publication Date: 2018-02-15
Application No.: US15382244
Filing Date: 2016-12-16
Applicant: QUALCOMM Incorporated
Inventor: Lei Wang, Ying Chen, Jian Wei, Jinglun Gao, Chinchuan Chiu
CPC classification number: G06T7/246, G06T7/136, G06T7/194, G06T7/62, G06T2207/10016, G06T2207/20036, G06T2207/20224, G06T2207/30232, G06T2207/30241
Abstract: Techniques and systems are provided for processing video data. For example, techniques and systems are provided for performing content-adaptive object or blob tracking. To perform the content-adaptive object tracking, a blob tracker is associated with a blob generated for a video frame. The blob includes pixels of at least a portion of a foreground object in a video frame. A size of the blob can be determined to be greater than a blob size threshold. The blob tracker can be converted to a normal tracker based on the size of the blob being greater than the size threshold. The associated blob tracker and blob are output as an identified blob tracker-blob pair when the blob tracker is converted to the normal tracker.
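The size-based promotion from a new blob tracker to a normal (output) tracker can be captured in a few lines; the tracker states and the dataclass below are illustrative assumptions, not the claimed data structures.

```python
from dataclasses import dataclass

@dataclass
class BlobTracker:
    tracker_id: int
    state: str = "new"          # "new" trackers are not yet reported

def update_tracker(tracker, blob_area, blob_size_threshold):
    # Promote the tracker once its associated blob is large enough, and only
    # then emit the tracker-blob pair to downstream consumers.
    if tracker.state == "new" and blob_area > blob_size_threshold:
        tracker.state = "normal"
    return (tracker, blob_area) if tracker.state == "normal" else None
```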
-
Publication No.: US20180046877A1
Publication Date: 2018-02-15
Application No.: US15235999
Filing Date: 2016-08-12
Applicant: QUALCOMM Incorporated
Inventor: Ying Chen, Ning Bi, Lei Wang, Jinglun Gao
CPC classification number: G06K9/4642, G06K9/00711, G06K9/38, G06T7/11, G06T7/136, G06T7/194, G06T7/254, G06T2207/10016, G06T2207/30232, G06T2207/30242
Abstract: Techniques and systems are provided for processing video data. For example, techniques and systems are provided for determining blob size thresholds. Blob sizes of blobs generated for a video frame can be determined. A lower boundary of a category of blob sizes can then be determined that corresponds to a minimum blob size of the video frame. The lower boundary is determined from a plurality of possible blob sizes including the blob sizes of the blobs and one or more other possible blob sizes. One of the possible blob sizes is determined as the lower boundary when one or more lower boundary conditions are met by characteristics of the possible blob size. A blob size threshold for the video frame is assigned as the minimum blob size corresponding to the lower boundary.
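One simple way to realize the lower-boundary idea in this abstract is sketched below: candidate sizes are scanned and the largest candidate that still satisfies a coverage-style boundary condition becomes the frame's minimum blob size threshold. The candidate set and the 95% coverage condition are assumptions for illustration, not the claimed boundary conditions.

```python
def minimum_blob_size_threshold(blob_areas, coverage=0.95):
    """Pick the largest candidate size that still keeps at least `coverage`
    of the observed blobs; that candidate acts as the lower boundary and
    becomes the frame's minimum blob size threshold."""
    if not blob_areas:
        return 0
    areas = sorted(blob_areas)
    # Candidate sizes: the observed areas plus coarser half-size steps.
    candidates = sorted(set(areas) | {a // 2 for a in areas})
    threshold = candidates[0]
    for candidate in candidates:
        kept = sum(1 for a in areas if a >= candidate)
        if kept / len(areas) >= coverage:      # lower-boundary condition
            threshold = candidate
    return threshold

print(minimum_blob_size_threshold([30, 35, 40, 900, 950, 1000, 1100]))
```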
-
Publication No.: US20180046863A1
Publication Date: 2018-02-15
Application No.: US15400118
Filing Date: 2017-01-06
Applicant: QUALCOMM Incorporated
CPC classification number: G06K9/00744, G06K9/00771, G06T7/11, G06T7/246, G06T7/70, G06T2207/10016, G06T2207/30232, G06T2207/30241, G06T2210/12
Abstract: Techniques and systems are provided for maintaining lost blob trackers for one or more video frames. In some examples, one or more blob trackers maintained for a sequence of video frames are identified. The one or more blob trackers are associated with one or more blobs of the sequence of video frames. A transition of a blob tracker from a first type of tracker to a lost tracker is detected at a first video frame. For example, the blob tracker can be transitioned from the first type of tracker to the lost tracker when a blob with which the blob tracker was associated in a previous frame is not detected in the first video frame. A recovery duration is determined for the lost tracker at the first video frame. For one or more subsequent video frames obtained after the first video frame, the lost tracker is removed from the one or more blob trackers maintained for the sequence of video frames when a lost duration for the lost tracker is greater than the recovery duration. The blob tracker can be transitioned back to the first type of tracker if the lost tracker is associated with a blob in a subsequent video frame prior to expiration of the recovery duration. Trackers and associated blobs are output as identified blob tracker-blob pairs when the trackers are converted from new trackers to trackers of the first type.
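The lost-tracker lifecycle described here (transition to lost, a per-tracker recovery duration, removal once the lost duration exceeds it, and recovery on re-association) is sketched below. The state names and the recovery-duration heuristic (proportional to the tracker's age, capped at 30 frames) are illustrative assumptions.

```python
class Tracker:
    def __init__(self, tracker_id):
        self.tracker_id = tracker_id
        self.state = "normal"      # "normal" (first type) or "lost"
        self.age = 0               # frames since the tracker was created
        self.lost_duration = 0
        self.recovery_duration = 0

def update_trackers(trackers, matched_ids):
    """Advance tracker state for one frame; matched_ids holds the trackers that
    were associated with a blob in this frame."""
    kept = []
    for trk in trackers:
        trk.age += 1
        if trk.tracker_id in matched_ids:
            # Re-associated with a blob: recover to the normal state.
            trk.state, trk.lost_duration = "normal", 0
            kept.append(trk)
        elif trk.state != "lost":
            # Just lost: fix the recovery window at the moment of transition.
            trk.state = "lost"
            trk.lost_duration = 1
            trk.recovery_duration = min(trk.age, 30)
            kept.append(trk)
        else:
            trk.lost_duration += 1
            if trk.lost_duration <= trk.recovery_duration:
                kept.append(trk)   # still within the recovery window
            # Otherwise the lost tracker is removed from the maintained set.
    return kept
```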
-
Publication No.: US20180046857A1
Publication Date: 2018-02-15
Application No.: US15384997
Filing Date: 2016-12-20
Applicant: QUALCOMM Incorporated
Inventor: Jinglun Gao, Ying Chen, Lei Wang, Ning Bi
IPC: G06K9/00
CPC classification number: G06K9/00335, G06K9/00718, G06K9/00771, G06T7/246, G06T2207/30232, G06T2207/30241
Abstract: Techniques and systems are provided for processing video data. For example, techniques and systems are provided for performing context-aware object or blob tracker updates (e.g., by updating a motion model of a blob tracker). In some cases, to perform a context-aware blob tracker update, a blob tracker is associated with a first blob. The first blob includes pixels of at least a portion of one or more foreground objects in one or more video frames. A split of the first blob and a second blob in a current video frame can be detected, and a motion model of the blob tracker is reset in response to detecting the split of the first blob and the second blob. In some cases, a motion model of a blob tracker associated with a merged blob is updated to include a predicted location of the blob tracker in a next video frame. The motion model can be updated by using a previously predicted location of the blob tracker as the predicted location of the blob tracker in the next video frame in response to the blob tracker being associated with the merged blob. The previously predicted location of the blob tracker can be determined using a blob location of a blob from a previous video frame.
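The split/merge handling of the motion model can be illustrated with a simple constant-velocity stand-in: a detected split resets the model, while a merge reuses the previously predicted location as the next prediction. The constant-velocity model and the function names are assumptions; the actual system may use a different filter.

```python
class MotionModel:
    """Constant-velocity stand-in for the tracker's motion model."""
    def __init__(self, x, y):
        self.reset(x, y)

    def reset(self, x, y):
        self.x, self.y = x, y
        self.vx, self.vy = 0.0, 0.0
        self.predicted = (x, y)

    def predict(self):
        self.predicted = (self.x + self.vx, self.y + self.vy)
        return self.predicted

    def correct(self, x, y):
        self.vx, self.vy = x - self.x, y - self.y
        self.x, self.y = x, y

def update_motion_model(model, blob_center, split_detected, merged):
    if split_detected:
        # A split means the old velocity described the merged object, so the
        # motion model is reset from the blob's current location.
        model.reset(*blob_center)
    elif merged:
        # For a merged blob, reuse the previously predicted location instead of
        # pulling the tracker toward the combined blob's center.
        model.x, model.y = model.predicted
    else:
        model.correct(*blob_center)
    return model.predict()
```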
-