1. Parsing location histories
    Granted invention patent; in force

    Publication No.: US07868786B2

    Publication Date: 2011-01-11

    Application No.: US10968861

    Filing Date: 2004-10-19

    IPC Class: G08G1/123

    CPC Class: G06Q30/02

    Abstract: A location history is a collection of locations over time for an object. A stay is a single instance of an object spending some time in one place, and a destination is any place where one or more objects have experienced a stay. Location histories are parsed using stays and destinations. In a described implementation, each location of a location history is recorded as a spatial position and a corresponding time at which the spatial position is acquired. Stays are extracted from a location history by analyzing its locations with regard to a temporal threshold and a spatial threshold. Specifically, two or more locations are considered a stay if they exceed a minimum stay duration and are within a maximum roaming distance. Each stay includes a location, a starting time, and an ending time. Destinations are produced from the extracted stays using a clustering operation and a predetermined scaling factor.

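    As a rough illustration of the stay-extraction step described in the abstract, the following Python sketch groups consecutive location fixes into a stay when they span at least a minimum duration while remaining within a maximum roaming distance of the run's first fix. The data layout, the planar distance metric, the threshold values, and the use of the run's centroid as the stay location are assumptions made for illustration, not the claimed method; destinations would then be obtained by clustering the resulting stay locations.

        from dataclasses import dataclass
        from math import hypot

        @dataclass
        class Fix:
            x: float      # spatial position (planar coordinates assumed)
            y: float
            t: float      # acquisition time, in seconds

        @dataclass
        class Stay:
            x: float      # representative location (centroid of the run's fixes)
            y: float
            start: float
            end: float

        def extract_stays(history, min_duration=300.0, max_roam=50.0):
            """history: list of Fix ordered by time; thresholds are illustrative."""
            stays, i = [], 0
            while i < len(history):
                j = i
                # Grow the run while each next fix stays within max_roam of the first fix.
                while (j + 1 < len(history) and
                       hypot(history[j + 1].x - history[i].x,
                             history[j + 1].y - history[i].y) <= max_roam):
                    j += 1
                if history[j].t - history[i].t >= min_duration:
                    xs = [f.x for f in history[i:j + 1]]
                    ys = [f.y for f in history[i:j + 1]]
                    stays.append(Stay(sum(xs) / len(xs), sum(ys) / len(ys),
                                      history[i].t, history[j].t))
                    i = j + 1
                else:
                    i += 1
            return stays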

2. REGION SELECTION FOR IMAGE COMPOSITING
    Invention patent application; in force

    Publication No.: US20080120560A1

    Publication Date: 2008-05-22

    Application No.: US11561407

    Filing Date: 2006-11-19

    IPC Class: G06F3/048

    CPC Class: G06T11/60

    Abstract: A technique for image compositing which allows a user to select the best image of an object, such as a person, from a set of images interactively and see how it will be assembled into a final photomontage. A user can select a source image from the set of images as an initial composite image. A region, representing a set of pixels to be replaced, is chosen by the user in the composite image. The same corresponding region is reflected in one or more source images, one of which will be selected by the user for painting into the composite image. The technique optimizes the selection of pixels around the user-chosen region or regions, seeking cut points that are least likely to show seams where the source images are merged into the composite image.

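    The cut-point optimization mentioned in the abstract is commonly posed as a seam-finding problem; the minimal Python sketch below only illustrates the underlying intuition with a greedy stand-in: the user-chosen mask is grown outward through pixels where the composite and the chosen source already agree, so that the paste boundary tends to fall where a seam would not be visible. The function names, the band width, and the tolerance are illustrative assumptions, not the patented formulation.

        import numpy as np

        def expand_mask_to_low_mismatch(composite, source, user_mask, band=10, tol=12.0):
            """Grow the user-selected mask outward, but only through pixels where the
            composite and the source already agree (small color mismatch)."""
            mismatch = np.abs(composite.astype(float) - source.astype(float)).sum(axis=2)
            mask = user_mask.astype(bool).copy()
            for _ in range(band):
                grown = mask.copy()               # 4-neighbour dilation by one pixel
                grown[1:, :] |= mask[:-1, :]
                grown[:-1, :] |= mask[1:, :]
                grown[:, 1:] |= mask[:, :-1]
                grown[:, :-1] |= mask[:, 1:]
                # Accept only newly reached pixels where the two images agree.
                mask |= grown & (mismatch < tol)
            return mask

        def paste_region(composite, source, mask):
            """Copy the masked pixels from the selected source into the composite."""
            out = composite.copy()
            out[mask] = source[mask]
            return out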

3. System and process for generating high dynamic range video
    Granted invention patent; expired

    Publication No.: US06879731B2

    Publication Date: 2005-04-12

    Application No.: US10425338

    Filing Date: 2003-04-29

    Abstract: A system and process for generating High Dynamic Range (HDR) video is presented which involves first capturing a video image sequence while varying the exposure so as to alternate between frames having a shorter and longer exposure. The exposure for each frame is set, before it is captured, as a function of the pixel brightness distribution in preceding frames. Next, for each frame of the video, the corresponding pixels between the frame under consideration and both preceding and subsequent frames are identified. For each corresponding pixel set, at least one pixel is identified as trustworthy. The pixel color information associated with the trustworthy pixels is then employed to compute a radiance value for each pixel set to form a radiance map. A tone mapping procedure can then be performed to convert the radiance map into an 8-bit representation of the HDR frame.

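    A minimal sketch of the radiance-map idea for one pair of aligned frames taken with the alternating short and long exposures: each pixel is weighted by how well exposed it is (a simple hat weight standing in for the abstract's "trustworthy" pixels), the exposure-normalized values are averaged into a radiance estimate, and a simple global curve tone-maps the result back to 8 bits. Motion compensation between frames, the per-frame exposure control, and the camera response model are omitted; grayscale frames and this particular weighting are assumptions.

        import numpy as np

        def merge_exposures(short_frame, long_frame, short_time, long_time):
            """Merge an aligned short- and long-exposure 8-bit grayscale frame pair
            into a per-pixel radiance estimate."""
            def weight(v):                                   # favour mid-range pixel values
                return 1.0 - np.abs(v / 255.0 - 0.5) * 2.0
            s, l = short_frame.astype(float), long_frame.astype(float)
            ws, wl = weight(s), weight(l)
            return (ws * (s / short_time) + wl * (l / long_time)) / (ws + wl + 1e-6)

        def tone_map(radiance):
            """Convert the radiance map to an 8-bit representation with a global curve."""
            compressed = radiance / (1.0 + radiance)         # simple Reinhard-style curve
            return np.clip(compressed * 255.0, 0, 255).astype(np.uint8)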

4. Panoramic video
    Granted invention patent; in force

    Publication No.: US06788333B1

    Publication Date: 2004-09-07

    Application No.: US09611646

    Filing Date: 2000-07-07

    IPC Class: H04N7/00

    Abstract: A system and process for generating a panoramic video. Essentially, the panoramic video is created by first acquiring multiple videos of the scene being depicted. Preferably, these videos collectively depict a full 360 degree view of the surrounding scene and are captured using a multi-camera rig. The acquisition phase also includes a calibration procedure that provides information about the camera rig used to capture the videos; this information is used in the next phase for creating the panoramic video. This next phase, which is referred to as the authoring phase, involves mosaicing or stitching individual frames of the videos, which were captured at approximately the same moment in time, to form each frame of the panoramic video. A series of texture maps is then constructed for each frame of the panoramic video. Each texture map coincides with a portion of a prescribed environment model of the scene. The texture map representations of each frame of the panoramic video are encoded so as to facilitate their transfer and viewing. This can include compressing the panoramic video frames. Such a procedure is useful in applications where the panoramic video is to be transferred over a network, such as the Internet.

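    The per-frame mosaicing step can be pictured with the toy sketch below: the frames captured at one instant by the rig's cameras are placed into a single panoramic canvas at calibrated column offsets, and overlapping pixels are averaged. Real stitching also involves lens correction, warping onto the environment model, and proper blending; the flat placement, the given offsets, and the simple averaging are assumptions made for illustration.

        import numpy as np

        def stitch_panorama_frame(frames, column_offsets, pano_width):
            """Composite same-instant frames into one panoramic frame, given each
            camera's calibrated starting column; overlaps are averaged."""
            height = frames[0].shape[0]
            acc = np.zeros((height, pano_width, 3), dtype=float)
            cnt = np.zeros((height, pano_width, 1), dtype=float)
            for frame, offset in zip(frames, column_offsets):
                cols = (np.arange(frame.shape[1]) + offset) % pano_width   # wrap at 360 degrees
                acc[:, cols] += frame.astype(float)
                cnt[:, cols] += 1.0
            return (acc / np.maximum(cnt, 1.0)).astype(np.uint8)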

5. Image blending using multi-splines
    Granted invention patent; in force

    Publication No.: US08189959B2

    Publication Date: 2012-05-29

    Application No.: US12104446

    Filing Date: 2008-04-17

    IPC Class: G06K9/36

    CPC Class: G06T3/4038

    Abstract: Multi-spline image blending technique embodiments are presented which generally employ a separate low-resolution offset field for every image region being blended, rather than a single (piecewise smooth) offset field for all the regions, to produce a visually consistent blended image. Each of the individual offset fields is smoothly varying, and so is represented using a low-dimensional spline. The resulting linear system can be rapidly solved because it involves many fewer variables than the number of pixels being blended.

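    The key idea in the abstract, one smooth low-dimensional offset field per region solved from a small linear system, can be pictured with the toy one-dimensional analogue below: two adjacent 1-D signals each receive a piecewise-linear (hat-basis) offset spline with a handful of control values, chosen by least squares so that the offset-corrected signals meet at the seam while each offset field stays smooth. This is only an analogue for intuition; the two-dimensional formulation, the constraints, and the spline type used in the patent differ.

        import numpy as np

        def hat_basis(length, num_ctrl):
            """Piecewise-linear spline basis: column k holds control point k's
            interpolation weight at each of the `length` sample positions."""
            knots = np.linspace(0, length - 1, num_ctrl)
            x = np.arange(length, dtype=float)
            return np.stack([np.interp(x, knots, np.eye(num_ctrl)[k])
                             for k in range(num_ctrl)], axis=1)

        def blend_1d(a, b, num_ctrl=4, smooth_weight=1.0, anchor_weight=0.01):
            """Blend signal `a` (left region) with signal `b` (right region) by giving
            each region its own low-dimensional offset spline."""
            a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
            Ba, Bb = hat_basis(len(a), num_ctrl), hat_basis(len(b), num_ctrl)
            rows, rhs = [], []
            # (1) the offset-corrected signals should agree at the seam
            rows.append(np.concatenate([Ba[-1], -Bb[0]]))
            rhs.append(b[0] - a[-1])
            # (2) each offset field should vary smoothly (small control differences)
            for k in range(num_ctrl - 1):
                ra = np.zeros(2 * num_ctrl)
                ra[k], ra[k + 1] = -smooth_weight, smooth_weight
                rb = np.zeros(2 * num_ctrl)
                rb[num_ctrl + k], rb[num_ctrl + k + 1] = -smooth_weight, smooth_weight
                rows += [ra, rb]
                rhs += [0.0, 0.0]
            # (3) weakly anchor the overall offset level at zero
            rows.append(np.full(2 * num_ctrl, anchor_weight))
            rhs.append(0.0)
            c, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
            return np.concatenate([a + Ba @ c[:num_ctrl], b + Bb @ c[num_ctrl:]])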

6. COMPRESSING AND DECOMPRESSING MULTIPLE, LAYERED, VIDEO STREAMS EMPLOYING MULTI-DIRECTIONAL SPATIAL ENCODING
    Invention patent application; in force

    Publication No.: US20120114037A1

    Publication Date: 2012-05-10

    Application No.: US13348262

    Filing Date: 2012-01-11

    IPC Class: H04N11/02

    Abstract: A process for compressing and decompressing non-keyframes in sequential sets of contemporaneous video frames making up multiple video streams, where the video frames in a set depict substantially the same scene from different viewpoints. Each set of contemporaneous video frames has a plurality of frames designated as keyframes, with the remainder being non-keyframes. In one embodiment, the non-keyframes are compressed using a multi-directional spatial prediction technique. In another embodiment, the non-keyframes of each set of contemporaneous video frames are compressed using a combined chaining and spatial prediction compression technique. The spatial prediction compression technique employed can be a single-direction technique, where just one reference frame, and so one chain, is used to predict each non-keyframe, or it can be a multi-directional technique, where two or more reference frames, and so two or more chains, are used to predict each non-keyframe.

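    A minimal sketch of the prediction idea for a non-keyframe, ignoring the disparity compensation, quantization, and entropy coding a real codec would apply: the non-keyframe is predicted from one or more already-decoded reference frames (one reference for the single-direction case, several for the multi-directional case) and only the residual is kept; decoding re-forms the same prediction and adds the residual back. Everything here is an illustrative assumption, not the claimed encoding.

        import numpy as np

        def compress_non_keyframe(frame, reference_frames):
            """Predict the non-keyframe as the mean of its reference frames and keep
            only the residual (which a real codec would go on to entropy code)."""
            prediction = np.mean([r.astype(np.int16) for r in reference_frames], axis=0)
            return frame.astype(np.int16) - prediction.astype(np.int16)

        def decompress_non_keyframe(residual, reference_frames):
            """Rebuild the frame by forming the same prediction and adding the residual."""
            prediction = np.mean([r.astype(np.int16) for r in reference_frames], axis=0)
            return np.clip(prediction.astype(np.int16) + residual, 0, 255).astype(np.uint8)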

7. Joint bilateral upsampling
    Granted invention patent; in force

    Publication No.: US07889949B2

    Publication Date: 2011-02-15

    Application No.: US11742325

    Filing Date: 2007-04-30

    IPC Class: G06K9/32

    CPC Class: G06T3/4007

    Abstract: A "Joint Bilateral Upsampler" uses a high-resolution input signal to guide the interpolation of a low-resolution solution set (derived from a downsampled version of the input signal) from low to high resolution. The resulting high-resolution solution set is then saved or applied to the original input signal to produce a high-resolution output signal. The high-resolution solution set is close to what would be produced directly from the input signal without downsampling. However, since the high-resolution solution set is constructed in part from a downsampled version of the input signal, it is computed using significantly less computational overhead and memory than a solution set computed directly from a high-resolution signal. Consequently, the Joint Bilateral Upsampler is advantageous for use in near real-time operations, in applications where user wait times are important, and in systems where computational costs and available memory are limited.

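    The core joint bilateral upsampling computation, as it is commonly formulated in the literature, can be written directly (if slowly) as below: each high-resolution output sample is a normalized weighted sum of nearby low-resolution solution samples, with the spatial weight measured in low-resolution coordinates and the range weight taken from the high-resolution guide image. The parameter values and single-channel layout are illustrative; the fast, low-memory framing emphasized in the abstract is not reproduced here.

        import numpy as np

        def joint_bilateral_upsample(low_res, guide, radius=2, sigma_spatial=1.0, sigma_range=10.0):
            """Upsample `low_res` to the resolution of `guide` (both single channel),
            letting edges in the guide steer the interpolation weights."""
            Hh, Wh = guide.shape
            Hl, Wl = low_res.shape
            sy, sx = Hh / Hl, Wh / Wl
            out = np.zeros((Hh, Wh), dtype=float)
            for y in range(Hh):
                for x in range(Wh):
                    yl, xl = y / sy, x / sx                  # position in low-res coordinates
                    acc = wsum = 0.0
                    for qy in range(int(yl) - radius, int(yl) + radius + 1):
                        for qx in range(int(xl) - radius, int(xl) + radius + 1):
                            if not (0 <= qy < Hl and 0 <= qx < Wl):
                                continue
                            # spatial weight, measured in low-resolution coordinates
                            ws = np.exp(-((qy - yl) ** 2 + (qx - xl) ** 2) / (2 * sigma_spatial ** 2))
                            # range weight, taken from the high-resolution guide image
                            gy, gx = min(int(qy * sy), Hh - 1), min(int(qx * sx), Wh - 1)
                            wr = np.exp(-(float(guide[y, x]) - float(guide[gy, gx])) ** 2
                                        / (2 * sigma_range ** 2))
                            acc += ws * wr * low_res[qy, qx]
                            wsum += ws * wr
                    out[y, x] = acc / wsum
            return out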

8. Automatic digital image grouping using criteria based on image metadata and spatial information
    Granted invention patent; in force

    Publication No.: US07580952B2

    Publication Date: 2009-08-25

    Application No.: US11069662

    Filing Date: 2005-02-28

    IPC Class: G06F17/00

    Abstract: An automatic digital image grouping system and method for automatically generating groupings of related images based on criteria that include image metadata and spatial information. The system and method take an unordered and unorganized set of digital images and organize and group related images into image subsets. The criteria for defining an image subset vary and can be customized depending on the needs of the user. Metadata (such as EXIF tags) already embedded inside the images is used to extract likely image subsets. This metadata may include the temporal proximity of images, focal length, color overlap, and geographical location. The first component of the automatic image grouping system and method is a subset image stage that analyzes the metadata and generates potential image subsets containing related images. The second component is an overlap detection stage, where each potential image subset is analyzed and verified by examining pixels of the related images.

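    A minimal sketch of the first-stage, metadata-driven grouping using only temporal proximity: photos are ordered by capture timestamp (as would be read from an EXIF tag) and a new candidate subset starts whenever the gap to the previous photo exceeds a threshold. The record layout, the field names, and the 30-minute gap are assumptions; the second-stage pixel-overlap verification is not shown.

        from datetime import datetime, timedelta

        def group_by_capture_time(photos, max_gap=timedelta(minutes=30)):
            """Sort photos by timestamp and split into groups at large time gaps."""
            ordered = sorted(photos, key=lambda p: p["timestamp"])
            groups, current = [], []
            for photo in ordered:
                if current and photo["timestamp"] - current[-1]["timestamp"] > max_gap:
                    groups.append(current)
                    current = []
                current.append(photo)
            if current:
                groups.append(current)
            return groups

        # Example with hypothetical records: the first two photos fall into one group.
        photos = [
            {"file": "IMG_001.jpg", "timestamp": datetime(2005, 2, 28, 10, 0)},
            {"file": "IMG_002.jpg", "timestamp": datetime(2005, 2, 28, 10, 5)},
            {"file": "IMG_003.jpg", "timestamp": datetime(2005, 2, 28, 15, 30)},
        ]
        print([[p["file"] for p in g] for g in group_by_capture_time(photos)])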

9. System and process for generating a two-layer, 3D representation of a scene
    Granted invention patent; in force

    Publication No.: US07015926B2

    Publication Date: 2006-03-21

    Application No.: US10879235

    Filing Date: 2004-06-28

    IPC Class: G09G5/02

    CPC Class: G06T15/205

    Abstract: A system and process is presented for generating a two-layer, 3D representation of a digital or digitized image from the image and a pixel disparity map of the image. The two-layer representation includes a main layer having pixels exhibiting background colors and background disparities associated with correspondingly located pixels of depth discontinuity areas in the image, as well as pixels exhibiting colors and disparities associated with correspondingly located pixels of the image not found in these depth discontinuity areas. The other layer is a boundary layer made up of pixels exhibiting foreground colors, foreground disparities and alpha values associated with the correspondingly located pixels of the depth discontinuity areas. The depth discontinuity areas correspond to prescribed-size areas surrounding depth discontinuities found in the image using its disparity map.

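    A rough sketch of how the depth discontinuity areas and the two layers might be formed from an image and its disparity map: pixels whose disparity jumps across a 4-neighbour by more than a threshold are marked, that set is dilated to a prescribed radius to form the discontinuity areas, and a boundary layer is cut out over those areas while the main layer keeps the full image. The thresholds, the placeholder alpha, and the omission of the matting step that separates foreground from background colors are simplifications, not the patented procedure.

        import numpy as np

        def depth_discontinuity_mask(disparity, threshold=4.0, radius=2):
            """Mark pixels with a large disparity jump to a 4-neighbour, then dilate
            the marked set by `radius` pixels to form the discontinuity areas."""
            d = disparity.astype(float)
            jump_h = np.abs(d[:, 1:] - d[:, :-1]) > threshold
            jump_v = np.abs(d[1:, :] - d[:-1, :]) > threshold
            mask = np.zeros(d.shape, dtype=bool)
            mask[:, 1:] |= jump_h
            mask[:, :-1] |= jump_h
            mask[1:, :] |= jump_v
            mask[:-1, :] |= jump_v
            for _ in range(radius):                          # simple 4-neighbour dilation
                grown = mask.copy()
                grown[1:, :] |= mask[:-1, :]
                grown[:-1, :] |= mask[1:, :]
                grown[:, 1:] |= mask[:, :-1]
                grown[:, :-1] |= mask[:, 1:]
                mask = grown
            return mask

        def split_layers(image, disparity, mask):
            """Main layer keeps the whole image; the boundary layer keeps color,
            disparity, and a placeholder alpha only inside the discontinuity areas."""
            boundary = {
                "color": np.where(mask[..., None], image, 0),
                "disparity": np.where(mask, disparity, 0),
                "alpha": mask.astype(float),     # a real system would estimate matting alpha
            }
            main = {"color": image.copy(), "disparity": disparity.copy()}
            return main, boundary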