Abstract:
A copying or printing machine connected with an attachment, such as a sorter, includes a microcomputer for controlling a copying or printing operation, a power supply for energizing the machine, and a power reducing device for reducing the electric power supplied by the power supply to a copy fixing device of the machine, so that less electric power is consumed than during operation. The microcomputer keeps the attachment in an inactive status while the power reducing device reduces the electric power supplied to the copy fixing device.
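The interlock described above can be sketched as follows; the class and attribute names are hypothetical, chosen only to illustrate that the attachment stays inactive whenever the fixing device is in its reduced-power state.

```python
class CopierController:
    """Minimal sketch of the microcomputer's power-saving interlock."""

    def __init__(self):
        self.fixer_power_reduced = False   # state of the power reducing device
        self.attachment_active = False     # state of the attachment (e.g. a sorter)

    def enter_power_save(self):
        """Reduce power to the copy fixing device and idle the attachment."""
        self.fixer_power_reduced = True
        self.attachment_active = False

    def request_attachment(self):
        """Activate the attachment only when full power is available."""
        if self.fixer_power_reduced:
            return False                   # kept inactive during power saving
        self.attachment_active = True
        return True
```
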
Abstract:
A user enters an input for selecting an image-capturing mode and then captures an image of a target with a desired zoom factor as a target image. An image-capturing device determines a zoom factor for an image to be captured, starting from the target image, and then captures an image while zooming out to the determined zoom factor. This process of changing the zoom factor and capturing an image is repeated until the smallest of the determined zoom factors is used. When capturing of the image with the smallest zoom factor is completed, metadata is created that includes the respective zoom factors of the images and their relative position information, and the metadata is stored in a memory unit in association with the data of the captured images.
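The zoom-out capture loop and its metadata record can be sketched as below; the function, the fixed zoom step, and the placeholder relative positions are assumptions made purely for illustration.

```python
def capture_zoom_series(target_zoom, min_zoom, step=0.5):
    """Hypothetical sketch: repeatedly zoom out from the user's target
    zoom factor down to the smallest zoom factor, recording metadata
    (zoom factor and relative position) for each captured image."""
    metadata = []
    zoom = target_zoom
    while zoom >= min_zoom:
        # stand-in for an actual capture at this zoom factor
        metadata.append({"zoom": zoom, "relative_pos": (0.0, 0.0)})
        zoom = round(zoom - step, 6)      # move to the next, smaller zoom factor
    return metadata
```
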
Abstract:
An image hierarchy generation unit reads image data stored in a hard disk drive, generates images with a plurality of resolutions, and hierarchizes the images. An image dividing unit divides the image in each layer into tile images. A redundancy detection unit analyzes the image in each layer so as to detect redundancy between images within the same layer or images from different layers. A tile image reference table creation unit creates a tile image reference table that maps area numbers to tile numbers, in view of the redundancy. An image file generation unit creates an image file that should be ultimately output and includes image data and the tile image reference table.
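The redundancy-aware reference table can be sketched as follows. Comparing tiles directly by content is an assumption (the abstract does not say how redundancy is detected); identical tiles share one stored tile number, and the table maps each (layer, area) pair to it.

```python
def build_tile_reference_table(layers):
    """Sketch: deduplicate identical tiles across and within layers,
    returning a reference table mapping (layer, area) -> tile number."""
    stored = {}      # tile content -> assigned tile number
    table = {}       # (layer index, area number) -> tile number
    for li, tiles in enumerate(layers):
        for area, tile in enumerate(tiles):
            if tile not in stored:
                stored[tile] = len(stored)   # store each distinct tile once
            table[(li, area)] = stored[tile]
    return table, stored
```
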
Abstract:
In an image processing apparatus, an image pickup unit takes images of an object including the face of a person wearing glasses by which to observe a stereoscopic image that contains a first parallax image and a second parallax image, obtained when the object in a three-dimensional (3D) space is viewed from different viewpoints. A glasses identifying unit identifies the glasses included in the image of the object taken by the image pickup unit. A face detector detects a facial region of the face of the person included in the image of the object taken by the image pickup unit, based on the glasses identified by the glasses identifying unit. An augmented-reality special rendering unit adds a virtual feature to the facial region of the face of the person detected by the face detector.
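The pipeline above (glasses first, then face region inferred from the glasses, then augmentation) can be sketched as below; the detectors are stand-ins and the fixed-size face box centred on the glasses is purely an assumption for illustration.

```python
def identify_glasses(frame):
    """Stand-in glasses detector: returns the (x, y) centre of the
    3D glasses, or None if no glasses are present."""
    return frame.get("glasses_at")

def detect_face(glasses_pos):
    """Assume the facial region is a fixed-size box around the glasses."""
    x, y = glasses_pos
    return (x - 50, y - 30, x + 50, y + 70)   # left, top, right, bottom

def augment(frame):
    """Glasses -> face region -> virtual feature, as in the abstract."""
    pos = identify_glasses(frame)
    if pos is None:
        return None
    region = detect_face(pos)
    return {"region": region, "feature": "virtual_hat"}
```
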
Abstract:
A control panel image generation unit generates a control panel image displayed to control an application. An application execution unit executes the application based on user control information input while the control panel image is being displayed. An information image generation unit generates an information image including information related to the application. An image switching unit switches an image displayed on a display from the control panel image to the information image. The information image generation unit uses image data stored in a storage device and generates the information image including a thumbnail image corresponding to the control panel image.
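A minimal sketch of the switching behaviour, with hypothetical class and key names: the information image carries a thumbnail built from the stored control panel image it replaces.

```python
class ImageSwitcher:
    """Sketch: switch the display from a control panel image to an
    information image whose thumbnail corresponds to that panel."""

    def __init__(self, storage):
        self.storage = storage             # image name -> image data
        self.displayed = "control_panel"   # currently shown image

    def make_information_image(self, app_name):
        thumb = self.storage.get("control_panel")   # thumbnail source
        return {"app": app_name, "thumbnail": thumb}

    def switch_to_information(self, app_name):
        info = self.make_information_image(app_name)
        self.displayed = info              # replaces the control panel image
        return info
```
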
Abstract:
A player can be notified both visually and aurally that his/her action has been recognized. A velocity vector calculating unit calculates a velocity vector of the movement of an object manipulated by a player as it moves toward an assumed contact surface W, using an image of the player's movement captured by a camera. A travel time calculating unit calculates the travel time required for the object to reach the contact surface W, using the velocity vector and the distance between the object and the contact surface W. A lag time acquisition unit acquires the lag time that sound output from a speaker takes to reach the player. A sound control unit allows the player to hear the sound substantially at the same time the object contacts the contact surface, by outputting a predetermined sound after the lapse of the time obtained by subtracting the lag time from the travel time.
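The timing rule reduces to simple arithmetic; the sketch below uses hypothetical arguments (distances in metres, speeds in m/s) and assumes sound propagates at roughly 343 m/s.

```python
def sound_delay(distance_to_surface, object_speed, player_distance,
                speed_of_sound=343.0):
    """Sketch of the rule above: wait (travel time - lag time) before
    outputting the sound, so it is heard as the object makes contact."""
    travel_time = distance_to_surface / object_speed   # object -> surface
    lag_time = player_distance / speed_of_sound        # speaker -> player
    return max(travel_time - lag_time, 0.0)            # never negative
```

For example, an object 3.43 m from the surface moving at 10 m/s arrives in 0.343 s; if the sound needs 0.1 s to reach the player, it should be emitted 0.243 s from now.
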
Abstract:
In the trapping process for a multicolor image, it is first judged whether trapping is required, and trapping is performed only when required. Specifically, trapping is executed when, in a portion where a plurality of figures constituting an image overlap, the plate color value of the relatively lower figure is erased or overwritten. This makes it possible to execute trapping only when there is a danger that a gap will appear at a boundary where two different colors are adjacent to each other.
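The decision rule can be sketched as a single predicate; the colour representation and flag name are assumptions for illustration only.

```python
def needs_trapping(upper_color, lower_color, lower_erased_or_overwritten):
    """Sketch of the judgment above: trapping is needed only where two
    different colours abut AND the lower figure's plate colour is
    erased or overwritten, i.e. a registration error could open a gap."""
    return lower_erased_or_overwritten and upper_color != lower_color
```
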
Abstract:
A compression format identifying unit identifies the compression format of image data stored in a main memory. A transfer function determining unit determines a transfer function in accordance with the identified compression format of a tile image. A convolution operation unit convolves a modification request signal with the determined transfer function so as to generate a modification direction signal. A read-ahead processor reads a tile image from the main memory in accordance with the modification request signal, decodes the image, and writes the decoded image into a buffer memory. A display image processor generates a display image using the modification direction signal.
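The convolution step is ordinary discrete convolution; the sketch below assumes both signals are plain numeric sequences, which is an illustrative simplification.

```python
def modification_direction(request, transfer_fn):
    """Sketch: convolve the modification request signal with the
    transfer function (chosen per compression format) to produce
    the modification direction signal."""
    n = len(request) + len(transfer_fn) - 1
    out = [0.0] * n
    for i, r in enumerate(request):
        for j, h in enumerate(transfer_fn):
            out[i + j] += r * h            # standard full convolution
    return out
```
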
Abstract:
A neighboring vector, which represents a boundary portion between two overlapping objects, is extracted. To calculate the luminance levels of the objects on both sides of the neighboring vector, a predetermined number of coordinate points (sample points) in the vicinity of the neighboring vector are extracted, at least from the image side. A rendering process is performed on an area including all the extracted sample points to acquire the color values at those points. The luminance level of the image is calculated from the acquired color values, and the luminance levels of the objects on both sides of the neighboring vector are compared with each other to determine the position (direction) in which to generate a trap graphic.
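The final comparison can be sketched as below. The Rec. 601 luminance weights and the convention of trapping toward the brighter side (spreading the lighter colour is less visible) are assumptions; the abstract only states that the two luminance levels are compared.

```python
def trap_direction(colors_side_a, colors_side_b):
    """Sketch: average the sampled luminance on each side of the
    neighboring vector and place the trap graphic on the brighter side."""
    def luminance(rgb):
        r, g, b = rgb
        return 0.299 * r + 0.587 * g + 0.114 * b   # Rec. 601 weights

    lum_a = sum(luminance(c) for c in colors_side_a) / len(colors_side_a)
    lum_b = sum(luminance(c) for c in colors_side_b) / len(colors_side_b)
    return "a" if lum_a >= lum_b else "b"
```
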