Abstract:
Disclosed is a 3-dimensional model-processing apparatus capable of processing a 3-dimensional model appearing on a display unit. The apparatus includes a sensor that generates information on a position and a posture, which the user can control arbitrarily, and control means for carrying out a grasp-state-setting process. In this process, the relation between the position and posture information generated by the sensor and the position and posture of the 3-dimensional model appearing on the display unit is taken as a constraint relation, on the basis of either the relation between the 3-dimensional position of the model and the 3-dimensional position of a tool appearing on the display unit for the sensor, or the relation between the 3-dimensional posture of the model and the 3-dimensional posture of that tool.
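A minimal sketch of the grasp-state-setting idea, assuming positions are 3-vectors and postures are 3x3 rotation matrices; the class, the distance threshold, and all names are hypothetical illustrations, not the apparatus's actual control means:

```python
import numpy as np

GRASP_DISTANCE = 0.05  # assumed threshold on tool-to-model distance

class GraspConstraint:
    def __init__(self):
        self.offset_pos = None  # model position expressed in the sensor frame
        self.offset_rot = None  # model posture relative to the sensor posture

    def try_set(self, tool_pos, sensor_pos, sensor_rot, model_pos, model_rot):
        """Enter the grasp state when the on-screen tool is near the model."""
        if np.linalg.norm(tool_pos - model_pos) < GRASP_DISTANCE:
            # Record the constraint relation between sensor pose and model pose.
            self.offset_rot = sensor_rot.T @ model_rot
            self.offset_pos = sensor_rot.T @ (model_pos - sensor_pos)

    def apply(self, sensor_pos, sensor_rot):
        """While grasped, the model's pose follows the user-controlled sensor."""
        if self.offset_pos is None:
            return None
        return (sensor_pos + sensor_rot @ self.offset_pos,
                sensor_rot @ self.offset_rot)
```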
Abstract:
A copying or printing machine comprising a microcomputer, a memory holding a sequence control program for accomplishing the copying or printing operation, and a processing unit operative to reproduce a copy. The copying machine is provided with a delivering device for delivering the copy, a sorter for sorting successive copies delivered from the delivering device, and a jam detector for detecting jamming of a copy in the sorter. The delivering device is covered with a cover which may be opened so that an operator can access the delivering device. When a malfunction in the sorter is detected by the jam detector, the microcomputer stops the operation of the copying machine. To restart the machine, the operator must remove the copy detected by the jam detector and open and then close the cover. The microcomputer enables the copying machine to restart after it detects that the jammed copy has been cleared and the cover is closed.
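The restart interlock amounts to a small state machine: resumption requires both that the jam is cleared and that the cover has been opened and closed again. A hedged sketch, with hypothetical event names:

```python
class CopierInterlock:
    def __init__(self):
        self.stopped = False
        self.jam_cleared = False
        self.cover_reopened = False  # cover opened at least once after the stop

    def on_jam_detected(self):
        self.stopped = True
        self.jam_cleared = False
        self.cover_reopened = False

    def on_jam_cleared(self):
        self.jam_cleared = True

    def on_cover_opened(self):
        self.cover_reopened = True

    def on_cover_closed(self):
        # Restart only when both conditions stated in the abstract hold.
        if self.stopped and self.jam_cleared and self.cover_reopened:
            self.stopped = False  # microcomputer allows the machine to restart
```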
Abstract:
A capture device is equipped with a stereo camera, and generates from the left and right frame images a plurality of demosaiced images of different sizes, reduced in stepwise fashion. A virtual composite image is generated that contains the plurality of demosaiced images, with their pixel rows connected in turn, one round at a time. A host terminal sends the capture device a data request signal designating a plurality of areas within the composite image that share a common range in the longitudinal direction. The capture device clips out the designated areas and sends the host terminal a stream of a new composite image comprising only the clipped-out areas. The host terminal cuts this into separate images, which are expanded into consecutive addresses in a main memory.
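The following sketch illustrates the two core ideas, stepwise reduction and row-interleaved composition, for grayscale frames. The 2x2 averaging stands in for demosaicing plus reduction, the zero-padded fixed-width rows are a simplification of the composite layout, and image dimensions are assumed to divide by 2 at every level:

```python
import numpy as np

def reduce_stepwise(frame, levels=3):
    """Stepwise reduction of one frame; 2x2 averaging stands in for
    demosaicing plus reduction."""
    images = [frame.astype(np.float32)]
    for _ in range(levels - 1):
        f = images[-1]
        images.append((f[0::2, 0::2] + f[1::2, 0::2] +
                       f[0::2, 1::2] + f[1::2, 1::2]) / 4)
    return images

def composite_rows(images):
    """One round of connection = the next available pixel row of each image,
    joined and zero-padded to a fixed composite width (a simplification)."""
    width = sum(img.shape[1] for img in images)
    cursors = [0] * len(images)
    while any(c < img.shape[0] for c, img in zip(cursors, images)):
        parts = []
        for i, img in enumerate(images):
            if cursors[i] < img.shape[0]:
                parts.append(img[cursors[i]])
                cursors[i] += 1
        row = np.concatenate(parts)
        yield np.pad(row, (0, width - row.size))
```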
Abstract:
An imaging device 12 includes a first camera 22 and a second camera 24. The two cameras capture a subject from left and right positions separated by a known distance, at the same timing and frame rate. Each captured frame image is converted into image data at a plurality of predetermined resolutions. An input information acquisition section 26 of an information processor 14 acquires an instruction input from the user. A position information generation section 28 roughly estimates a subject area, or an area with motion, as a target area using the low-resolution, wide-range images of the stereo image data, and performs stereo matching with the high-resolution images only for that area, thus identifying the three-dimensional position of the subject. An output information generation section 32 performs a necessary process based on the position of the subject, thus generating output information. A communication section 30 requests image data from the imaging device 12 and acquires it.
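A sketch of the coarse-to-fine strategy, using OpenCV block matching purely as a stand-in for the section's stereo matching; the detector, the percentile threshold, and the simple same-rectangle crop of both high-resolution images are assumptions (a real implementation would widen the crop by the expected disparity):

```python
import cv2
import numpy as np

def locate_subject(left_lo, right_lo, left_hi, right_hi, scale):
    """Coarse-to-fine stereo: rough target area at low resolution, precise
    matching at high resolution restricted to that area. Inputs are 8-bit
    grayscale images; `scale` is the resolution ratio between the pairs."""
    matcher = cv2.StereoBM_create(numDisparities=32, blockSize=15)

    # 1. Rough estimate of the target area from the low-resolution pair:
    #    here simply the bounding box of the nearest (largest-disparity) pixels.
    disp_lo = matcher.compute(left_lo, right_lo)
    ys, xs = np.where(disp_lo > np.percentile(disp_lo, 99))
    x0, y0, x1, y1 = xs.min(), ys.min(), xs.max() + 1, ys.max() + 1

    # 2. Stereo matching with the high-resolution images only for that area.
    x0, y0, x1, y1 = (int(v * scale) for v in (x0, y0, x1, y1))
    disp_hi = matcher.compute(left_hi[y0:y1, x0:x1], right_hi[y0:y1, x0:x1])
    return (x0, y0, x1, y1), disp_hi
```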
Abstract:
Frames of a moving image are configured in a hierarchical structure in which each frame is represented at a plurality of resolutions. In the hierarchical data representing a frame at each time step, some layers are set as original image layers and the other layers as difference image layers. When an area is to be displayed at the resolution of a difference image layer, the pixel values of the corresponding area held by a lower-resolution original image layer, enlarged to the resolution of the difference image layer, are added to the respective pixel values of the difference image of that area. Which layer is set as a difference image layer is switched to another layer as time passes.
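A minimal sketch of one encode/decode pair, assuming a 2x resolution step between layers and nearest-neighbour enlargement (the abstract does not fix the interpolation method):

```python
import numpy as np

def encode_difference_layer(original, lower_original):
    """Difference image layer = original minus the enlarged lower layer."""
    enlarged = np.repeat(np.repeat(lower_original, 2, axis=0), 2, axis=1)
    return original.astype(np.int16) - enlarged.astype(np.int16)

def decode_difference_layer(difference, lower_original):
    """Display at the difference layer's resolution: enlarge the lower-
    resolution original image and add the per-pixel differences back."""
    enlarged = np.repeat(np.repeat(lower_original, 2, axis=0), 2, axis=1)
    return (enlarged.astype(np.int16) + difference).astype(np.uint8)
```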
Abstract:
Links are set among three sets of hierarchical data and one set of moving image data. When the display area overlaps with a first link area while an image is being displayed using the first hierarchical data, the display switches to the 0-th hierarchical level of the second hierarchical data. When the display area overlaps with a second link area while an image is being displayed using the second hierarchical data, the display switches to the 0-th hierarchical level of the third hierarchical data. The link destination of another link area is the moving image data, and moving image reproduction is started as a result of zooming in on this area. The hierarchical data are held on the client terminal side, and the data on the other side of a switching boundary are transmitted to the client terminal in a data stream format.
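A hedged sketch of the link-area check; the data structures, names, and return values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Link:
    area: tuple       # (x0, y0, x1, y1) in the source data's virtual coordinates
    destination: str  # e.g. "hier2", "hier3", or "movie"

def overlaps(a, b):
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def check_links(display_area, links, current):
    """Return the action for the current display area: switch to the 0-th
    level of a destination hierarchy, start movie playback, or stay."""
    for link in links.get(current, []):
        if overlaps(display_area, link.area):
            if link.destination == "movie":
                return ("play_movie", link.destination)
            return ("switch_to_level_0", link.destination)
    return ("stay", current)
```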
Abstract:
In an image processing apparatus, an image pickup unit captures images of an object including the face of a person wearing the glasses used to observe a stereoscopic image, the stereoscopic image containing a first parallax image and a second parallax image obtained when an object in three-dimensional (3D) space is viewed from different viewpoints. A glasses identifying unit identifies the glasses included in the image of the object captured by the image pickup unit. A face detector detects a facial region containing the face of the person included in the captured image, based on the glasses identified by the glasses identifying unit. An augmented-reality special rendering unit adds a virtual feature to the facial region detected by the face detector.
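A pipeline sketch in which OpenCV Haar cascades merely stand in for the glasses identifying unit and the face detector, and a drawn rectangle stands in for the augmented-reality special rendering; none of these are the apparatus's actual components:

```python
import cv2

GLASSES = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye_tree_eyeglasses.xml")
FACES = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def annotate_viewer(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # 1. Identify the glasses in the captured image.
    for (x, y, w, h) in GLASSES.detectMultiScale(gray):
        # 2. Detect the facial region based on the identified glasses:
        #    search only a window around them, not the whole image.
        ox, oy = max(0, x - w), max(0, y - h)
        roi = gray[oy:y + 3 * h, ox:x + 2 * w]
        # 3. Add a virtual feature to the detected facial region (a plain
        #    rectangle stands in for the special rendering).
        for (fx, fy, fw, fh) in FACES.detectMultiScale(roi):
            cv2.rectangle(frame, (ox + fx, oy + fy),
                          (ox + fx + fw, oy + fy + fh), (0, 255, 0), 2)
    return frame
```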
Abstract:
A control panel image generation unit generates a control panel image displayed to control an application. An application execution unit executes the application based on user control information input while the control panel image is being displayed. An information image generation unit generates an information image including information related to the application. An image switching unit switches an image displayed on a display from the control panel image to the information image. The information image generation unit uses image data stored in a storage device and generates the information image including a thumbnail image corresponding to the control panel image.
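One way the thumbnail embedding could look, sketched with Pillow; the thumbnail size, corner placement, and function names are assumptions:

```python
from PIL import Image

def build_information_image(panel_image, info_background):
    """Embed a thumbnail of the control panel image into the information
    image so the user can see which screen the information relates to."""
    thumb = panel_image.copy()
    thumb.thumbnail((160, 90))  # thumbnail size is an assumption
    info = info_background.copy()
    # Corner placement is likewise assumed, not specified above.
    info.paste(thumb, (info.width - thumb.width - 8, 8))
    return info
```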
Abstract:
A parallax representation unit in a displayed image processing unit uses a height map, which holds the height of the object for each pixel, to represent the changes in appearance caused by that height. A color representation unit renders the image using, for example, the texture coordinate values derived by the parallax representation unit, shifting the pixels sampled from the color map. The color representation unit also uses a normal map, which holds a normal to the surface of the object for each pixel, to change the way light impinges on the surface and thereby represent its roughness. A shadow representation unit uses a horizon map, which holds information for each pixel indicating whether a shadow is cast depending on the angle relative to the light source, to shadow the image rendered by the color representation unit.
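A per-pixel sketch of the three stages; the offset scale, the Lambert shading, the single horizon angle per pixel, and the shadow attenuation factor are all simplifying assumptions:

```python
import numpy as np

def shade(u, v, view, light, height, color, normal, horizon, scale=8.0):
    """view/light: unit vectors with positive z toward the viewer/light."""
    # 1. Parallax: offset the color-map lookup along the view direction in
    #    proportion to the height stored for this pixel, so elevation changes
    #    what the viewer actually sees.
    h = height[v, u]
    pu = int(np.clip(u + scale * h * view[0] / view[2], 0, height.shape[1] - 1))
    pv = int(np.clip(v + scale * h * view[1] / view[2], 0, height.shape[0] - 1))

    # 2. Normal map: Lambertian shading with the per-pixel normal changes the
    #    way light impinges on the surface and conveys its roughness.
    rgb = color[pv, pu] * max(float(np.dot(normal[pv, pu], light)), 0.0)

    # 3. Horizon map: darken the pixel when the light direction falls below
    #    the stored horizon angle (one angle per pixel is a simplification;
    #    the map as described depends on the angle to the light source).
    elevation = np.arctan2(light[2], np.hypot(light[0], light[1]))
    if elevation < horizon[pv, pu]:
        rgb = rgb * 0.4  # assumed shadow attenuation factor
    return rgb
```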
Abstract:
When a single page of a newspaper article or a magazine is displayed using hierarchical data, a guidance area (indicated by a line in the hierarchical data) is defined for each article in a scenario definition file. A plurality of guidance areas are defined in the layer below, i.e., the layer having a resolution that allows characters to be legible, so that the viewer can track the article from start to end. Upon a user request for enlargement, the displayed image is first guided to the guidance area for the article. Upon a further request for enlargement, the displayed image is guided to the guidance area at the head of the article. When the user, having read the text in the current guidance area, provides an input by indicating a direction or pressing a predetermined button, the displayed image is guided to the guidance area showing the continuation of the sentence.
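A hedged sketch of how such a scenario definition and its traversal could be expressed; the dictionary shape, layer numbers, and function are hypothetical, not the scenario definition file's actual format:

```python
# Hypothetical scenario definition: one overview area per article, plus
# guidance areas in the legible-resolution layer, listed in reading order.
scenario = {
    "articles": [
        {
            "overview_area": {"layer": 1, "rect": (0, 0, 400, 300)},
            "guidance_areas": [
                {"layer": 3, "rect": (0, 0, 400, 150)},    # head of the article
                {"layer": 3, "rect": (0, 150, 400, 300)},  # continuation
            ],
        },
    ],
}

def guide(article, step):
    """step 0: first enlargement request -> overview of the article;
    step 1: further enlargement -> guidance area at the head of the article;
    later steps: direction input or button press -> continuation areas."""
    if step == 0:
        return article["overview_area"]
    areas = article["guidance_areas"]
    return areas[min(step - 1, len(areas) - 1)]
```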