Abstract:
A method for panoramic image completion is disclosed. The method includes: acquiring a panoramic image; obtaining a projected image by mapping pixels of the panoramic image onto a polar coordinate system, wherein the long-side component of each pixel coordinate corresponds to the polar angle of the polar coordinate system and the short-side component corresponds to the radial coordinate; acquiring an incomplete region of the projected image, and obtaining a completed image by completing the incomplete region; and obtaining a completed panoramic image by inverse mapping the pixels of the completed image according to the polar coordinate system. A device for panoramic image completion is also disclosed. The above method and device take the perspective curvature of the panoramic image into account, improving the degree of restoration.
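As an illustration of the mapping and inverse-mapping steps described above, the following minimal sketch projects an equirectangular panorama onto a polar grid (width to polar angle, height to radius) and maps the completed disc back. The function names, nearest-neighbour sampling, and square output size are illustrative assumptions; the completion of the incomplete region itself (e.g. by an inpainting routine) is not shown.

```python
import numpy as np

def panorama_to_polar(pano, out_size=1024):
    """Project a panorama (H x W) onto a polar grid: the long (width) axis
    becomes the polar angle, the short (height) axis becomes the radius."""
    h, w = pano.shape[:2]
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    c = (out_size - 1) / 2.0
    dx, dy = xs - c, ys - c
    theta = (np.arctan2(dy, dx) + np.pi) / (2 * np.pi)   # 0..1 along the long side
    radius = np.sqrt(dx ** 2 + dy ** 2) / c              # 0..1 along the short side
    src_x = np.clip((theta * (w - 1)).astype(int), 0, w - 1)
    src_y = np.clip((radius * (h - 1)).astype(int), 0, h - 1)
    polar = pano[src_y, src_x]
    polar[radius > 1.0] = 0                               # outside the disc
    return polar

def polar_to_panorama(polar, pano_shape):
    """Inverse mapping: read each panorama pixel back out of the completed disc."""
    h, w = pano_shape[:2]
    c = (polar.shape[0] - 1) / 2.0
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    theta = xs / (w - 1) * 2 * np.pi - np.pi
    radius = ys / (h - 1) * c
    px = np.clip((c + radius * np.cos(theta)).astype(int), 0, polar.shape[1] - 1)
    py = np.clip((c + radius * np.sin(theta)).astype(int), 0, polar.shape[0] - 1)
    return polar[py, px]
```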
Abstract:
A computer-implemented method and system transform a first sequence of video frames of a first dynamic scene into a second sequence of at least two video frames depicting a second dynamic scene. A subset of video frames in the first sequence is obtained that shows movement of at least one object having a plurality of pixels located at respective x, y coordinates, and portions of the subset are selected that show non-spatially overlapping appearances of the at least one object in the first dynamic scene. The portions are copied from at least three different input frames to at least two successive frames of the second sequence without changing the respective x, y coordinates of the pixels in the object, such that at least one frame of the second sequence contains at least two portions that appear in different frames of the first sequence.
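A minimal sketch of the copy operation described above, assuming the object appearances have already been detected and assigned non-overlapping destination frames; the background model (a temporal median) and all names are illustrative, not taken from the disclosure.

```python
import numpy as np

def compose_synopsis(frames, appearances, out_len):
    """Copy object portions into a shorter second sequence without changing
    their x, y coordinates.

    frames      : list of H x W x 3 arrays (the first sequence)
    appearances : (src_frame, dst_frame, mask) tuples, where mask is a boolean
                  H x W array selecting the object's pixels; the caller is
                  assumed to have chosen non-spatially-overlapping masks.
    """
    background = np.median(np.stack(frames), axis=0).astype(frames[0].dtype)
    out = [background.copy() for _ in range(out_len)]
    for src, dst, mask in appearances:
        out[dst][mask] = frames[src][mask]   # same x, y location, new point in time
    return out
```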
Abstract:
A computer-implemented method and system transform a first sequence of video frames of a first dynamic scene, captured at regular time intervals, into a second sequence of video frames depicting a second dynamic scene. For at least two successive frames of the second sequence, portions that are spatially contiguous in the first dynamic scene are selected from at least three different frames of the first sequence and copied to a corresponding frame of the second sequence so as to maintain their spatial continuity from the first sequence. In a second aspect, for at least one feature in the first dynamic scene, respective portions of the first sequence of video frames are sampled at a different rate than surrounding portions of the first sequence, and the sampled portions are copied to a corresponding frame of the second sequence.
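The second aspect can be sketched as resampling a masked feature region at its own temporal rate and recombining it with frames taken at the base rate; the integer rates, the boolean mask, and the function name are illustrative assumptions.

```python
def resample_feature_region(frames, mask, feature_rate=2, base_rate=1):
    """Sample the masked feature at a different temporal rate than its
    surroundings and copy the samples into the corresponding output frames."""
    n_out = len(frames) // max(base_rate, 1)
    out = []
    for i in range(n_out):
        frame = frames[min(i * base_rate, len(frames) - 1)].copy()
        fast = frames[min(i * feature_rate, len(frames) - 1)]
        frame[mask] = fast[mask]          # the feature advances at its own rate
        out.append(frame)
    return out
```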
Abstract:
The invention relates to a method and device for generating a large static image M(n), such as a sprite or a mosaic, from a video sequence including successive video objects. This method comprises a first step for estimating motion parameters related to the current video object V0(n) of the sequence with respect to the previously generated static image M(n-1), a second step for warping this video object on the basis of the estimated motion parameters, and a third step for blending the warped video object WV0(n) thus obtained with the previously generated static image M(n-1). According to the invention, an additional step for computing, for each picture element of the current video object, a weighting coefficient WWF(n)[x,y] correlated to the error between the warped video object and the static image M(n-1) is provided, and the blending formula now takes into account said weighting coefficients.
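A sketch of the weighted blending step, assuming the motion estimation and warping have already produced the warped video object; the Gaussian weighting of the error and the running-average blending form are illustrative stand-ins for the patent's actual formulas.

```python
import numpy as np

def blend_weighted(mosaic, warped_vo, vo_mask, n, sigma=10.0):
    """Blend the warped video object into the static image M(n-1) using a
    per-pixel weight WWF(n)[x, y] derived from the warping error."""
    err = np.abs(warped_vo.astype(float) - mosaic.astype(float))
    wwf = np.exp(-(err / sigma) ** 2)      # small error -> weight near 1 (illustrative)
    blended = mosaic.astype(float)
    blended[vo_mask] = (
        (n * blended[vo_mask] + wwf[vo_mask] * warped_vo[vo_mask].astype(float))
        / (n + wwf[vo_mask])
    )
    return blended.astype(mosaic.dtype)
```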
Abstract:
Systems, devices, and methods disclosed herein may apply a computational spatial-temporal analysis to assess pixels across temporal and/or perspective-view imagery, determining imaging details that may be used to generate image data with an increased signal-to-noise ratio.
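One common way such an analysis can raise the signal-to-noise ratio is by combining co-registered samples of the same scene point across time or views; the sketch below assumes the frames are already aligned and simply averages them (for independent noise the SNR grows roughly with the square root of the number of frames).

```python
import numpy as np

def combine_registered_frames(aligned_frames):
    """Average a stack of co-registered frames; registration is assumed done."""
    stack = np.stack([f.astype(float) for f in aligned_frames])
    return stack.mean(axis=0)
```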
Abstract:
A method interactively displays panoramic images of a scene. The method includes measuring 3D coordinates of the scene with a 3D measuring instrument at a first position and a second position. The 3D coordinates are registered into a common frame of reference. Within the scene, a trajectory includes a plurality of trajectory points. Along the trajectory, 2D images are generated from the commonly registered 3D coordinates. A user interface provides a trajectory display mode that sequentially displays a collection of 2D images at the trajectory points. The user interface also provides a rotational display mode that allows a user to select a desired view direction at a given trajectory point. The user selects the trajectory display mode or the rotational display mode and sees the result shown on the display device.
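A minimal sketch of the two display modes, assuming the 2D images have already been rendered from the registered 3D coordinates and stored per trajectory point and per discrete view direction; the data layout and method names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class TrajectoryViewer:
    images: list          # images[point_index][direction_index], pre-rendered 2D views
    point: int = 0
    direction: int = 0

    def step_trajectory(self):
        """Trajectory display mode: advance to the next trajectory point."""
        self.point = (self.point + 1) % len(self.images)
        return self.images[self.point][self.direction]

    def rotate(self, delta):
        """Rotational display mode: change the view direction at the current point."""
        n_dirs = len(self.images[self.point])
        self.direction = (self.direction + delta) % n_dirs
        return self.images[self.point][self.direction]
```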
Abstract:
Methods and systems for navigating panoramic imagery are provided. If a user rotates panoramic imagery to a view having a view angle that deviates beyond a threshold view angle, the view of the panoramic imagery will be adjusted to the threshold view angle. In a particular implementation, the view is drifted to the threshold view angle so that a user can at least temporarily view the imagery that deviates beyond the threshold view angle. A variety of transition animations can be used as the imagery is drifted to the threshold view angle. For instance, the view can be elastically snapped back to the threshold view angle to provide a visually appealing transition to a user.
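A sketch of one possible drift animation, assuming the view angle is a scalar and using a damped oscillation to produce the elastic snap-back; the abstract leaves the exact transition open, so the easing function and its parameters are illustrative.

```python
import math

def elastic_snap_back(current_angle, threshold_angle, frames=30,
                      damping=4.0, oscillations=2.0):
    """Per-frame view angles that drift back to the threshold with a bounce."""
    offset = current_angle - threshold_angle
    path = []
    for i in range(1, frames + 1):
        t = i / frames
        decay = math.exp(-damping * t)
        path.append(threshold_angle +
                    offset * decay * math.cos(2 * math.pi * oscillations * t))
    path[-1] = threshold_angle       # settle exactly on the threshold angle
    return path
```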
Abstract:
A system and method of providing composite real-time dynamic imagery of a medical procedure site from multiple modalities which continuously and immediately depicts the current state and condition of the medical procedure site synchronously with respect to each modality and without undue latency is disclosed. The composite real-time dynamic imagery may be provided by spatially registering multiple real-time dynamic video streams from the multiple modalities to each other. Spatially registering the multiple real-time dynamic video streams to each other may provide a continuous and immediate depiction of the medical procedure site with an unobstructed and detailed view of a region of interest at the medical procedure site at multiple depths. As such, a surgeon, or other medical practitioner, may view a single, accurate, and current composite real-time dynamic imagery of a region of interest at the medical procedure site as he/she performs a medical procedure, and thereby, may properly and effectively implement the medical procedure.
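A minimal sketch of spatially registering and compositing two real-time streams, assuming a 3x3 homography that maps the second modality into the first modality's frame of reference has been obtained elsewhere (e.g. from tracked poses); the alpha blend within the region of interest and all names are illustrative.

```python
import cv2
import numpy as np

def composite_modalities(frame_a, frame_b, homography_b_to_a, roi_mask, alpha=0.5):
    """Warp modality B into modality A's frame and blend it inside the ROI."""
    h, w = frame_a.shape[:2]
    warped_b = cv2.warpPerspective(frame_b, homography_b_to_a, (w, h))
    out = frame_a.copy()
    out[roi_mask] = ((1 - alpha) * frame_a[roi_mask] +
                     alpha * warped_b[roi_mask]).astype(frame_a.dtype)
    return out
```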
Abstract:
The video summarization system and method support a moving sensor or multiple sensors by mapping imagery back to a common ortho-rectified geometry. The video summarization system includes at least one video sensor to acquire video data of at least one area of interest (AOI), including video frames having a plurality of different perspectives. The video sensor may be a moving sensor or a plurality of sensors acquiring video data of the at least one AOI from respective different perspectives. A memory stores the video data, and a processor is configured to cooperate with the memory to register video frames from the AOI, ortho-rectify the registered video frames based upon a common geometry, identify events within the ortho-rectified registered video frames, and generate a video summary of selected events shifted in time within a selected AOI based upon the identified events within the ortho-rectified registered video frames.
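A sketch of the processing chain described above, assuming per-frame homographies into the common ortho-rectified geometry are available from the registration step; simple frame differencing stands in for the event identification, and only frames flagged as events are packed (i.e. shifted together in time) into the summary. All names and thresholds are illustrative.

```python
import cv2

def summarize_ortho(frames, homographies, out_size, diff_thresh=25):
    """Ortho-rectify registered frames and keep only those containing events."""
    ortho = [cv2.warpPerspective(f, H, out_size) for f, H in zip(frames, homographies)]
    summary = []
    for prev, cur in zip(ortho, ortho[1:]):
        if cv2.absdiff(cur, prev).mean() > diff_thresh:   # crude event detector
            summary.append(cur)                           # events packed into the summary
    return summary
```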