Abstract:
An original image to be edited is displayed using hierarchical data. When a user draws a figure in a region of the image as an edit action, an image data updating unit generates a layer having a hierarchical structure composed of the rendered region only. More specifically, the image of the edited region is used as the lowermost hierarchical level, and the levels above this lowermost level are generated by reducing it as appropriate, so as to produce hierarchical data. During image display, when it is confirmed that the updated region is contained in a frame to be newly displayed, the image of the layer is displayed superposed on the frame rendered from the original hierarchical data.
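The following is a minimal sketch, in Python with NumPy, of how the edited-region layer could be organized as its own hierarchy (the edit image as the lowermost level, each upper level obtained by reduction) and later superposed on a displayed frame. The function names, the halving ratio, and the box-filter reduction are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np

def build_edit_layer_hierarchy(edit_region: np.ndarray, num_levels: int = 4):
    """Use the edited region (H x W x C) as the lowermost, full-resolution level
    and derive each upper level by reducing the one below it (2x2 box filter
    here; the actual reduction method is not specified in the abstract)."""
    levels = [edit_region]
    for _ in range(num_levels - 1):
        prev = levels[-1]
        h, w = prev.shape[0] // 2, prev.shape[1] // 2
        if h == 0 or w == 0:
            break
        reduced = prev[:h * 2, :w * 2].reshape(h, 2, w, 2, -1).mean(axis=(1, 3))
        levels.append(reduced.astype(prev.dtype))
    return levels

def superpose_layer(frame: np.ndarray, layer: np.ndarray, top_left: tuple) -> np.ndarray:
    """When the updated region falls inside a frame to be newly displayed,
    superpose the layer image on the frame rendered from the original data."""
    y, x = top_left
    h, w = layer.shape[:2]
    out = frame.copy()
    out[y:y + h, x:x + w] = layer
    return out
```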
Abstract:
Links are set among three sets of hierarchical data 170, 172, and 174 and one set of moving image data 182. When the display area overlaps a link area 176 while an image is being displayed using the hierarchical data 170, the display is switched to the 0-th hierarchical level of the hierarchical data 172 (link a). When the display area overlaps a link area 178 while an image is being displayed using the hierarchical data 172, the display is switched to the 0-th hierarchical level of the hierarchical data 174 (link b). The link destination of another link area 180 of the hierarchical data 170 is the moving image data 182 (link c), and moving image reproduction starts when this area is zoomed in on. The hierarchical data 170 and 172 are held on the client terminal side, and the data beyond a switching boundary 184 are transmitted by a server to the client terminal in a data stream format.
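As a rough sketch, assuming axis-aligned rectangular link areas and hypothetical names, the link-switching decision could look like the following: when the current display area overlaps a link area, the viewer switches to the destination's 0-th hierarchical level, or starts moving image reproduction if the destination is moving image data.

```python
from dataclasses import dataclass
from typing import Optional, Sequence, Tuple

Rect = Tuple[float, float, float, float]  # (x0, y0, x1, y1)

@dataclass
class LinkArea:
    rect: Rect               # link area in the source data's coordinate system
    destination: str         # e.g. "hierarchical_172" or "movie_182" (illustrative)
    is_moving_image: bool = False

def overlaps(a: Rect, b: Rect) -> bool:
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def resolve_link(display_area: Rect, links: Sequence[LinkArea]) -> Optional[LinkArea]:
    """Return the first link whose area overlaps the current display area;
    the caller then switches display data or starts movie playback."""
    for link in links:
        if overlaps(display_area, link.rect):
            return link
    return None
```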
Abstract:
A tile image sequence, obtained by dividing frames into a predetermined size, is further divided into another predetermined size on the image plane to generate voxels. If redundancy exists in the spatial or temporal direction, the data are reduced in that direction, and the sequences in the time direction are deployed on a two-dimensional plane. The voxel images are placed on an image plane of a predetermined size to generate one integrated image. Using the grouping pattern that exhibits the minimum quantization error, pixels are placed together, group by group, in the region of each voxel image of the integrated image. The re-placed integrated image is then compressed by a predetermined compression method to generate a compressed image together with reference information for determining the position of a needed pixel.
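A minimal sketch, assuming NumPy frame arrays and illustrative tile and voxel sizes, of the first two steps: dividing a frame sequence into tile images, dividing those further into voxels, and deploying a voxel's time direction on a two-dimensional plane. The redundancy reduction and the error-minimizing grouping are omitted.

```python
import numpy as np

def frames_to_voxels(frames: np.ndarray, tile: int = 256, voxel: int = 8):
    """frames: (T, H, W, C). Divide each frame into tile images of a predetermined
    size, then divide each tile into smaller blocks on the image plane; the block
    sequence over T frames forms one voxel (sizes here are illustrative)."""
    t, h, w = frames.shape[:3]
    voxels = []
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            tile_seq = frames[:, ty:ty + tile, tx:tx + tile]
            for vy in range(0, tile_seq.shape[1], voxel):
                for vx in range(0, tile_seq.shape[2], voxel):
                    voxels.append(tile_seq[:, vy:vy + voxel, vx:vx + voxel])
    return voxels

def deploy_time_on_plane(vox: np.ndarray) -> np.ndarray:
    """Deploy the time direction on a 2D plane: T blocks of size v x v become
    one v x (T*v) strip that can be placed into the integrated image."""
    return np.concatenate(list(vox), axis=1)
```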
Abstract:
A frame sequence of moving picture data is divided into a tile image sequence, and the color space of the tile image sequence is converted to generate a YCbCr image sequence. Each frame is reduced to half its size in both the vertical and horizontal directions, and a compression process is applied to generate compression data of a reference image. The compression data of the reference image is decoded and decompressed in the same manner as during image display to restore a YCbCr image serving as the reference image, and a difference image sequence is generated from the reference image and the original YCbCr image. Compression data of the difference images is then generated, and compression data obtained by concatenating the compression data of the reference image and that of the difference images is produced for every four frames of a tile image.
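A hedged sketch of one four-frame compression unit, assuming hypothetical `compress`/`decompress` callables that return and accept bytes, simple decimation for the half-size reduction, and nearest-neighbour enlargement when forming the differences; none of these particulars are specified by the abstract.

```python
import numpy as np

def compress_tile_unit(yuv_frames, compress, decompress) -> bytes:
    """yuv_frames: four YCbCr tile frames of shape (H, W, 3)."""
    assert len(yuv_frames) == 4
    # Reduce each frame to half size vertically and horizontally (decimation here).
    reduced = np.stack([f[::2, ::2] for f in yuv_frames])
    ref_data = compress(reduced)          # compression data of the reference image
    # Restore the reference exactly as the display side would.
    restored = decompress(ref_data)
    # Enlarge back and form the difference image sequence.
    diffs = np.stack([
        f.astype(np.int16) - np.kron(r, np.ones((2, 2, 1), dtype=np.int16))
        for f, r in zip(yuv_frames, restored)
    ])
    diff_data = compress(diffs)           # compression data of the difference images
    # Concatenate reference and difference data for the four-frame unit.
    return ref_data + diff_data
```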
Abstract:
Methods and apparatus provide for obtaining a data sequence representative of a three-dimensional parameter space; forming a plurality of coding units by dividing the data sequence in three dimensions; and generating, for each of the coding units: (i) a palette defined by two representative values, and (ii) a plurality of indices, each index designating, for a respective original data point, either one of the representative values or an intermediate value between them determined by linear interpolation; the palette and the plurality of indices of each coding unit are set as the compressed data.
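The scheme resembles block-palette texture compression. A minimal sketch, assuming scalar data points and four interpolation levels (the abstract does not fix the number), of encoding one coding unit into a two-value palette plus per-point indices:

```python
import numpy as np

def encode_coding_unit(block: np.ndarray, levels: int = 4):
    """Pick the minimum and maximum as the two representative values and map each
    original data point to the nearest of `levels` values obtained by linear
    interpolation between them."""
    lo, hi = float(block.min()), float(block.max())
    if hi == lo:
        return (lo, hi), np.zeros(block.shape, dtype=np.uint8)
    steps = np.linspace(lo, hi, levels)               # lo ... interpolated ... hi
    indices = np.abs(block.reshape(-1, 1) - steps).argmin(axis=1)
    return (lo, hi), indices.reshape(block.shape).astype(np.uint8)

def decode_coding_unit(palette, indices, levels: int = 4) -> np.ndarray:
    lo, hi = palette
    return np.linspace(lo, hi, levels)[indices]
```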
Abstract:
Provided is an information processor which readily permits operation input for pointing to a position on a screen when the operation input is received from a user via a captured image obtained by imaging the user. The information processor acquires a captured image including the user's face, identifies the position of the face in the acquired image, sets an operation area on the captured image at a position determined in accordance with the identified face position, detects a detection target within the operation area, and receives, as the user-pointed position, the position on the screen corresponding to the relative position of the detected target within the operation area.
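A minimal sketch of the final mapping step, assuming an axis-aligned rectangular operation area and normalized coordinates; how the operation area is positioned relative to the detected face is left to the implementation and is not shown.

```python
def pointed_screen_position(target_xy, op_area, screen_size):
    """target_xy: detected target (e.g. a hand) on the captured image.
    op_area: (x, y, width, height) of the operation area on the captured image.
    Returns the screen position corresponding to the target's relative
    position within the operation area."""
    ax, ay, aw, ah = op_area
    rel_x = min(max((target_xy[0] - ax) / aw, 0.0), 1.0)
    rel_y = min(max((target_xy[1] - ay) / ah, 0.0), 1.0)
    sw, sh = screen_size
    return rel_x * sw, rel_y * sh
```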
Abstract:
An image processing device includes: an input information obtaining section that obtains input information for changing a display region within an image serving as a display object; a display image processing section that generates, as a display image, the image inside the display region determined on the basis of the input information; and a display section that displays the generated display image on a display. When the input information obtaining section obtains input information for scaling the display image, the display image processing section scales the display image according to the input information and performs image manipulation that makes the visibility of a region within a predetermined range around the focus, which is the center of scaling in the image plane, different from the visibility of the other regions.
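As an illustration only (the abstract does not say which manipulation is applied), the following sketch scales the display image and then dims everything outside a circular region around the scaling focus, so the focus region's visibility differs from the rest. Nearest-neighbour scaling and dimming are assumptions.

```python
import numpy as np

def scale_with_focus_effect(image: np.ndarray, scale: float, focus_xy, radius: float):
    h, w = image.shape[:2]
    # Nearest-neighbour scaling about the image origin, for brevity.
    ys = (np.arange(int(h * scale)) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(int(w * scale)) / scale).astype(int).clip(0, w - 1)
    scaled = image[ys][:, xs].astype(np.float32)
    # Reduce visibility outside a circle of `radius` around the scaling focus.
    fy, fx = focus_xy[1] * scale, focus_xy[0] * scale
    yy, xx = np.mgrid[0:scaled.shape[0], 0:scaled.shape[1]]
    outside = (yy - fy) ** 2 + (xx - fx) ** 2 > radius ** 2
    scaled[outside] *= 0.5
    return scaled.astype(image.dtype)
```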
Abstract:
An imaging device includes a first camera and a second camera and shoots the same object under different shooting conditions. A shot-image data acquirer of an image analyzer acquires, from the imaging device, data of two simultaneously shot images. A correcting section aligns the luminance value distributions of the two images by correcting one of them. A correction table managing section generates and switches the correction table to be used according to the function implemented by the information processing device. A correction table storage stores the correction table, which shows the correspondence between luminance values before and after correction. A depth image generator performs stereo matching using the two images and generates a depth image.
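A hedged sketch of one way the correction table could be built and applied, assuming 8-bit luminance and histogram matching as the alignment method; the abstract only states that the table maps luminance values before correction to values after correction.

```python
import numpy as np

def build_correction_table(src_hist: np.ndarray, dst_hist: np.ndarray) -> np.ndarray:
    """256-entry table that maps the first image's luminance values so that its
    distribution matches the second image's (histogram matching)."""
    src_cdf = np.cumsum(src_hist) / np.sum(src_hist)
    dst_cdf = np.cumsum(dst_hist) / np.sum(dst_hist)
    return np.searchsorted(dst_cdf, src_cdf).clip(0, 255).astype(np.uint8)

def apply_correction(luminance: np.ndarray, table: np.ndarray) -> np.ndarray:
    """Correct one of the two images before stereo matching."""
    return table[luminance]
```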
Abstract:
An image storage section 48 stores shot image data with a plurality of resolutions transmitted from an imaging device. Depth images 152 with a plurality of resolutions are generated using stereo images at the plurality of resolution levels from the shot image data (S10). Next, template matching is performed using a reference template image 154 that represents a desired shape and size, extracting, for each distance range associated with one of the resolutions, a candidate area for a target picture having that shape and size (S12). A more detailed analysis is performed on the extracted candidate areas using the shot images stored in the image storage section 48 (S14). In some cases, a further image analysis is performed, based on the analysis result, using a shot image with a higher resolution level (S16a and S16b).
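A minimal sketch of step S12, assuming a binary reference template of fixed size and a simple overlap score: the depth image at each resolution level is first masked to the distance range assigned to that level, then the template is slid over the mask to extract candidate areas. The scoring rule is an assumption for illustration.

```python
import numpy as np

def extract_candidates(depth_images, template, distance_ranges, threshold=0.8):
    """depth_images[i] and distance_ranges[i] = (near, far) belong to the same
    resolution level. Returns (level, x, y, score) tuples for candidate areas."""
    th, tw = template.shape
    candidates = []
    for level, (depth, (near, far)) in enumerate(zip(depth_images, distance_ranges)):
        mask = ((depth >= near) & (depth < far)).astype(np.float32)
        for y in range(mask.shape[0] - th + 1):
            for x in range(mask.shape[1] - tw + 1):
                score = (mask[y:y + th, x:x + tw] * template).sum() / template.sum()
                if score >= threshold:
                    candidates.append((level, x, y, score))
    return candidates
```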