Abstract:
An image synthesis unit receives respective pixel values for a single horizontal row of a 1/4 demosaiced image, a 1/16 demosaiced image, and a 1/64 demosaiced image from a pyramid filter for reducing, in a plurality of stages, a frame of a captured moving image. The image synthesis unit then connects the pixel values according to a predetermined rule so as to generate a virtual synthesized image, and outputs the synthesized image in the form of streams. A control unit of an image transmission unit notifies a data selection unit of a request from a host terminal. The data selection unit selects and extracts the necessary data from the respective streams of the synthesized image, a RAW image, and a 1/1 demosaiced image, and generates a stream of data to be transmitted. A packetizing unit packetizes the stream and transmits it to the host terminal.
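The row-connecting step described above can be sketched as follows. This is a minimal illustration, not the patented format: the function name, the 64-pixel frame width, and the rule of simple left-to-right concatenation are all assumptions.

```python
def synthesize_row(quarter_row, sixteenth_row, sixty_fourth_row):
    """Connect one horizontal row from each reduced image into one row
    of the virtual synthesized image (simple concatenation assumed)."""
    return quarter_row + sixteenth_row + sixty_fourth_row

# One horizontal row from each reduced image of a 64-pixel-wide frame,
# assuming the fractions denote width reduction.
frame_width = 64
quarter = list(range(frame_width // 4))        # 16 pixel values
sixteenth = list(range(frame_width // 16))     # 4 pixel values
sixty_fourth = list(range(frame_width // 64))  # 1 pixel value

row = synthesize_row(quarter, sixteenth, sixty_fourth)  # 21 values total
```

Because each row of the synthesized image has a fixed length, the host terminal can later locate any sub-image within the stream by offset alone.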
Abstract:
Moving image data is delivered from an image provider server. A hierarchical data generation device decodes the moving image data and generates hierarchical data representing each frame of the moving image in a plurality of resolutions, by reducing the frames included in the moving image in a single stage or multiple stages. A decoder reads, for each frame, only the data for the layer of the hierarchical data determined by the resolution requested for display, and decodes the read data. This produces a series of frames in the requested resolution. A display device displays the frames so that the moving image is displayed in the requested resolution.
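The layer-selection rule can be sketched as below, assuming layers are identified by their pixel widths. The function name and the fallback rule (use the widest layer when nothing is large enough) are illustrative assumptions, not taken from the source.

```python
def select_layer(layer_widths, requested_width):
    """Return the index of the smallest layer that is at least as wide as
    the requested display resolution; if none is, fall back to the widest
    layer available so the display is never starved of data."""
    wide_enough = [i for i, w in enumerate(layer_widths) if w >= requested_width]
    if wide_enough:
        return min(wide_enough, key=lambda i: layer_widths[i])
    return max(range(len(layer_widths)), key=lambda i: layer_widths[i])

layers = [1920, 960, 480, 240]   # widths of each layer, largest first
chosen = select_layer(layers, 500)  # a 500-pixel-wide display request
```

Decoding only the chosen layer per frame is what keeps the per-frame work proportional to the requested resolution rather than to the full-size frame.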
Abstract:
A user enters an input for selecting an image-capturing mode. The user then captures an image of a target with a desired zoom factor as a target image. An image-capturing device determines a zoom factor for an image to be captured starting from the target image and then captures an image while zooming out to the determined zoom factor. The process of changing the zoom factor and capturing an image is repeated until the smallest of the determined zoom factors is used. When capturing of the image with the smallest zoom factor is completed, metadata is created that includes the respective zoom factors of the images and relative position information of the images, and the metadata is stored in a memory unit in association with the data of the captured images.
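The repeated zoom-out-and-capture loop and the resulting metadata can be sketched as follows. The halving rule for successive zoom factors and the centred relative position are assumptions made purely for illustration; the abstract does not specify how the device determines each zoom factor.

```python
def capture_sequence(target_zoom, smallest_zoom):
    """Simulate repeated zoom-out capture and build per-image metadata."""
    metadata = []
    zoom = target_zoom
    while zoom >= smallest_zoom:
        # Relative position is fixed at the frame centre here for simplicity;
        # the device would record each image's actual offset.
        metadata.append({"zoom": zoom, "relative_position": (0, 0)})
        zoom /= 2  # the halving rule is an illustrative assumption
    return metadata

records = capture_sequence(8.0, 1.0)  # captures at zoom 8, 4, 2, 1
```

Storing the zoom factors and relative positions together is what later allows the captured images to be assembled into a zoomable hierarchy.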
Abstract:
Provided are an image processing device, an image processing method, and a program which can prevent the total processing time from increasing significantly while maintaining the precision of image processing at a high level. An image acquiring section sequentially acquires images generated by imaging a predetermined subject; an image processing executing section executes, in each of sequentially arriving processing periods, image processing on the image acquired by the image acquiring section; a preprocessing execution result output section outputs, in some of the processing periods, an execution result of preprocessing performed on an image that the image acquiring section acquired before those periods; an execution result holding section keeps holding the execution result output by the preprocessing execution result output section at least until the next execution result is output; and the image processing executing section executes the image processing by applying the held execution result to the image acquired by the image acquiring section.
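The scheme above amounts to running an expensive preprocessing step only occasionally and holding its result for reuse in every period. A minimal sketch, assuming a fixed refresh interval `every_n` (the class name, the interval, and the callable interfaces are all illustrative):

```python
class PipelineSketch:
    """Run expensive preprocessing every Nth period; apply the most
    recently held result to every image."""

    def __init__(self, preprocess, process, every_n=3):
        self.preprocess = preprocess   # expensive step, run occasionally
        self.process = process         # per-period step using the held result
        self.every_n = every_n
        self.held = None               # the execution result holding section
        self.count = 0

    def on_image(self, image):
        if self.count % self.every_n == 0:
            self.held = self.preprocess(image)  # refresh the held result
        self.count += 1
        # Apply whatever result is currently held to the current image.
        return self.process(image, self.held)
```

Because the held result persists until the next preprocessing run, every period pays only the cost of the lightweight per-image step, which is how the total processing time is kept from growing while precision is preserved.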
Abstract:
Compressed image data of different resolutions stored in a hard disk drive is divided into blocks of substantially regular sizes. A determination is made as to whether a required block is stored in the main memory at predefined time intervals. If the block is not stored, the block is loaded into the main memory. Subsequently, the loaded compressed image data is referred to so that data for an image of an area required for display or for an image of an area predicted to be required is decoded and stored in a buffer memory. Of the images stored in a buffer area, i.e. a display buffer, the image of a display area is rendered in a frame memory. The display buffer and the decoding buffer are switched depending on the timing of completion of decoding or the amount of change in the display area.
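The periodic block check can be sketched as a simple demand-loading cache. The function and variable names are illustrative, and disk access is simulated with a stand-in loader:

```python
def ensure_block(cache, block_id, load_from_disk):
    """Return the requested block, loading it into main memory (the cache)
    only when it is not already resident."""
    if block_id not in cache:
        cache[block_id] = load_from_disk(block_id)  # cache miss: load it
    return cache[block_id]

cache = {}   # stands in for the main memory
loads = []   # records which blocks were actually read from "disk"

def fake_load(block_id):
    """Stand-in for reading a compressed block from the hard disk drive."""
    loads.append(block_id)
    return f"block-{block_id}"

first = ensure_block(cache, 3, fake_load)   # miss: loads from "disk"
second = ensure_block(cache, 3, fake_load)  # hit: no second load
```

The same check run at predefined intervals for the blocks covering the display area (and the predicted area) keeps slow disk reads off the rendering path, which is why the decode and display buffers can then be switched without stalling.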
Abstract:
In an input interface apparatus for recognizing the motion of an object, the brightness of the object to be captured may be insufficient. A camera captures images of the object operated by the player; a depth position detector detects the position of the object based on the captured frames; an action identifying unit identifies an action of the player based on the result of the detection; an input receiving unit receives the action as an instruction to an input-receiving image displayed on a display; and, in response to the action, an illumination control unit raises the brightness of the image projected on the display above its brightness before the action was detected.
Abstract:
An image processing apparatus of the present invention compares the density value of the K color at each position around a blank character with a reference density value, thereby deciding whether it is necessary to remove the other color components. Based on the result of the decision, the apparatus removes the color components. Thus, the other color components are removed only in regions where the K-color density value is high. Hence, even when the background image is uneven, kickback processing can be performed suitably around the blank character.
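The per-position decision reduces to thresholding the K plane against the reference density. A minimal sketch, assuming a grayscale K plane given as nested lists (the function name and the strict-greater-than comparison are illustrative assumptions):

```python
def kickback_mask(k_plane, reference):
    """Per-pixel decision: True where the K density exceeds the reference
    value, i.e. where the other color components should be removed."""
    return [[k > reference for k in row] for row in k_plane]

# A tiny 2x2 K plane around a blank character: light and dense K values.
k = [[10, 200],
     [180, 30]]
mask = kickback_mask(k, reference=128)
```

Restricting removal to the `True` positions is what leaves light-K regions of an uneven background untouched while still cleaning up around the blank character.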
Abstract:
A game controller includes a plurality of LEDs formed on the rear of a case. The LEDs are arranged two-dimensionally in their layout area. The game controller has a plurality of PWM control units which are provided inside the case and respectively control the lighting of the LEDs. The PWM control units control the lighting of the LEDs based on a control signal from a game apparatus. The game apparatus acquires a captured image of the game controller, and acquires the position of the game controller in the captured image based on the positions of the LEDs in the captured image.
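One plausible way to derive the controller position from the detected LED positions is their centroid. This is an illustrative assumption, not necessarily the patented computation, and LED detection itself is taken as given:

```python
def controller_position(led_points):
    """Estimate the controller position in the captured image as the
    centroid of the detected LED image coordinates (x, y)."""
    n = len(led_points)
    xs = sum(x for x, _ in led_points)
    ys = sum(y for _, y in led_points)
    return (xs / n, ys / n)

# Four LEDs detected at the corners of a 2x2 square in image coordinates.
position = controller_position([(0, 0), (2, 0), (2, 2), (0, 2)])
```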
Abstract:
An information processing method is provided wherein an identifier (ID) corresponding to an object is obtained based on information input from a sensor for detecting the object, and the obtained ID is continuously and repeatedly input to an information processing unit. The information processing unit compares the program that is set based on a newly input ID with the program that is set based on an already input ID, and ends the currently executed program when the two programs differ. In this method, placing an object in the sensor's effective area starts the program corresponding to the ID of the object, and removing the object from the effective area ends the program.
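The comparison step can be sketched as the small state machine below. The function name, the `state` dictionary, the string return values, and the `program_for` mapping are all illustrative assumptions:

```python
def handle_id(state, new_id, program_for):
    """Process one repeatedly input ID; return 'start', 'continue', or
    'end' depending on how the mapped program compares to the current one."""
    if state["id"] is None:
        state["id"] = new_id           # object entered the effective area
        return "start"                 # start the program for this ID
    if program_for(new_id) != program_for(state["id"]):
        state["id"] = None             # a different program is requested
        return "end"                   # end the currently executed program
    state["id"] = new_id               # same program: keep it running
    return "continue"
```

Because the ID is input continuously while the object sits in the effective area, the program keeps running on every `continue`, and a changed or vanished ID is what triggers `end`.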