Abstract:
A game controller includes a plurality of LEDs formed on the rear of a case. The plurality of LEDs are arranged two-dimensionally within a layout area. The game controller has a plurality of PWM control units which are provided inside the case and control the lighting of the plurality of LEDs, respectively. The PWM control units control the lighting of the LEDs based on a control signal from a game apparatus. The game apparatus acquires a captured image of the game controller, and acquires the position of the game controller in the captured image based on the positions of the LEDs in the captured image.
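The position-acquisition step described above can be illustrated with a minimal sketch, assuming the captured frame is a grayscale intensity grid and the LEDs appear as its brightest pixels. The function names, the threshold value, and the use of a simple centroid are illustrative assumptions, not the patent's exact method.

```python
def led_positions(frame, threshold=200):
    """Return (x, y) coordinates of pixels at least as bright as
    threshold -- candidate LED images in the captured frame."""
    return [(x, y) for y, row in enumerate(frame)
                   for x, v in enumerate(row) if v >= threshold]

def controller_position(frame, threshold=200):
    """Estimate the controller position in the captured image as the
    centroid of the detected LED positions (None if no LED is found)."""
    pts = led_positions(frame, threshold)
    if not pts:
        return None
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

# Example: two bright pixels at (1, 1) and (2, 2) give centroid (1.5, 1.5).
frame = [[0, 0, 0],
         [0, 255, 0],
         [0, 0, 255]]
pos = controller_position(frame)
```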
Abstract:
The present invention provides a game system using LEDs in a controller 20. In a game system of the present invention, an image pickup apparatus captures an image of the controller. The controller includes multiple input units for making an operation input to advance a game, and multiple LEDs for indicating a controller number set for the controller in the game application being executed. A game apparatus accepts an operation input provided to the controller and reflects the operation input in the processing of a game application, acquires a captured image from the image pickup apparatus, acquires the position of the controller in the captured image based on the positions of the LEDs in the captured image and reflects the acquired position information in the processing of the game application, and generates an image signal showing the result of application processing performed based on the operation input and the acquired position information.
Abstract:
A neighboring vector, which is a boundary portion between two overlapping objects, is extracted. To calculate the luminance levels of the objects on both sides of the neighboring vector, a predetermined number of coordinate points (sample points) in the vicinity of the neighboring vector are extracted, at least on the side of the image object. A rendering process is performed on an area including all the extracted sample points to acquire color values at the sample points. The luminance level of the image is calculated based on the acquired color values, and the luminance levels of the objects on both sides of the neighboring vector are compared with each other to determine the position (direction) in which to generate a trap graphic.
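The final comparison step can be sketched as follows, assuming sampled colors are RGB triples and using the common convention that the trap graphic is spread into the darker side. The ITU-R BT.601 luma weights and the function names are illustrative assumptions, not taken from the patent.

```python
def luminance(rgb):
    """Approximate perceived luminance of an (r, g, b) color in [0, 255],
    using the ITU-R BT.601 luma weights."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def trap_direction(side_a_samples, side_b_samples):
    """Compare mean luminance of the sample points on each side of the
    neighboring vector; the trap graphic extends into the darker side."""
    lum_a = sum(luminance(c) for c in side_a_samples) / len(side_a_samples)
    lum_b = sum(luminance(c) for c in side_b_samples) / len(side_b_samples)
    return "a" if lum_a < lum_b else "b"

# Example: side A is dark blue, side B is light yellow, so the trap
# graphic is generated on side A.
dark = [(20, 20, 120), (25, 25, 110)]
light = [(240, 230, 80), (235, 225, 90)]
direction = trap_direction(dark, light)
```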
Abstract:
A display timing setting unit determines the timing of rendering an image by raster scanning. A pixel reading unit reads a pixel according to timing information output from the display timing setting unit. An area-of-interest information input unit accepts information for identifying an arbitrary area of interest within an image. An area-of-interest identifying unit determines whether the pixel is included in the area of interest based on the timing information output by the display timing setting unit. A finite-bit generation unit generates a finite bit series by subjecting information on the pixel to a mapping transformation when the pixel is included in the area of interest.
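The flow above can be sketched minimally as follows, assuming the frame is a 2-D intensity grid scanned in raster order, the area of interest is a rectangle, and the mapping transformation is a simple polynomial hash folded into a fixed bit width. All of these concrete choices are assumptions for illustration only.

```python
def in_roi(x, y, roi):
    """True when pixel (x, y) lies inside the inclusive rectangle
    roi = (x0, y0, x1, y1) -- the area of interest."""
    x0, y0, x1, y1 = roi
    return x0 <= x <= x1 and y0 <= y <= y1

def finite_bits(frame, roi, bits=16):
    """Scan the frame in raster order and fold the values of pixels that
    fall inside the area of interest into a finite bit series via a
    simple mapping transformation (a polynomial hash modulo 2**bits)."""
    acc = 0
    for y, row in enumerate(frame):
        for x, v in enumerate(row):  # raster-scan order: row by row
            if in_roi(x, y, roi):
                acc = (acc * 31 + v) % (1 << bits)
    return acc
```

The result is deterministic for a given frame and area of interest, so the same region always maps to the same finite bit series.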
Abstract:
An image processing apparatus of the present invention compares the density value of the K color at each position around a blank character with a reference density value, thereby deciding whether it is necessary to remove the other color components. Based on the result of this decision, the apparatus removes the color components. Thus, the other color components can be removed only in the region where the density value of the K color is high. Hence, when the background image is an uneven image, kickback processing can be performed suitably around the blank character.
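The decision-and-removal step can be sketched as below, assuming CMYK densities normalized to [0, 1] and a hypothetical reference value of 0.5; the function names and the all-or-nothing removal of CMY are illustrative simplifications of the described processing.

```python
def needs_kickback(k_density, reference=0.5):
    """Decide whether the other color components should be removed at a
    position, by comparing its K density with the reference density."""
    return k_density > reference

def apply_kickback(pixels, reference=0.5):
    """pixels: list of (c, m, y, k) densities in [0, 1]. CMY components
    are stripped only where the K density exceeds the reference, so an
    uneven background keeps its colors where K is light."""
    out = []
    for c, m, y, k in pixels:
        if needs_kickback(k, reference):
            out.append((0.0, 0.0, 0.0, k))  # remove other components
        else:
            out.append((c, m, y, k))        # leave the pixel untouched
    return out

# Example: only the high-K pixel loses its CMY components.
result = apply_kickback([(0.2, 0.3, 0.1, 0.9), (0.2, 0.3, 0.1, 0.2)])
```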
Abstract:
Provided is a game system utilizing the LEDs of a controller (20). In the game system (1), an image pickup device (2) captures an image of the controller (20). The controller (20) includes a plurality of input units for making an operation input to advance a game, and a plurality of LEDs for indicating a controller number set for the controller (20) in the game application being executed. A game device (10) accepts the operation input from the controller (20) and reflects it in the processing of the game application, acquires the captured image from the image pickup device (2), acquires the position of the controller (20) in the captured image from the positions of the LEDs in the captured image, reflects the acquired position information in the processing of the game application, and generates an image signal indicating the result of the application processing based on the operation input and the acquired position information.
Abstract:
For page data containing shadow-casting objects, trap graphics are generated according to the following procedure. First, an ID drawing process is performed for all objects within a page, and based on the process result, a related graphic list indicating relative positional relationships between the objects is generated. Thereafter, an ID drawing process is performed for all objects in the area of each shadow-casting object, excluding the shadow-casting object. The result of the ID drawing process in the shadow-casting object area is reflected in the related graphic list, and thereafter, trap graphics are generated based on settings information in the related graphic list.
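The ID drawing process and the related graphic list can be sketched as follows, assuming axis-aligned rectangles painted into a pixel ID buffer in stacking order, with relative positions recorded as pairs of IDs that touch. The representation and function names are illustrative assumptions.

```python
def id_draw(objects, width, height):
    """Paint each object's ID into a buffer in stacking order, so a
    later object's ID overwrites earlier ones where they overlap.
    objects: list of (oid, (x0, y0, x1, y1)) with inclusive bounds."""
    buf = [[None] * width for _ in range(height)]
    for oid, (x0, y0, x1, y1) in objects:
        for y in range(y0, y1 + 1):
            for x in range(x0, x1 + 1):
                buf[y][x] = oid
    return buf

def related_pairs(buf):
    """Build a crude related graphic list: pairs of distinct IDs that
    appear on horizontally or vertically adjacent pixels."""
    pairs = set()
    h, w = len(buf), len(buf[0])
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w:
                    a, b = buf[y][x], buf[ny][nx]
                    if a is not None and b is not None and a != b:
                        pairs.add(tuple(sorted((a, b))))
    return pairs

# Two overlapping rectangles: the buffer records object 2 on top, and the
# related graphic list records that objects 1 and 2 touch.
buf = id_draw([(1, (0, 0, 3, 3)), (2, (2, 2, 5, 5))], 6, 6)
```

Re-running `id_draw` over a shadow-casting object's area with that object excluded, and merging the resulting pairs back in, corresponds to the update step the abstract describes.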
Abstract:
In the trapping process of a multicolor image, an adjacent vector between an objective figure and an upper related figure is specified first. Then, using the contour of the objective figure excluding this adjacent vector, an adjacent vector between the objective figure and a lower related figure is extracted. This makes it possible to extract, as an adjacent vector, only the portion at which the color of the objective figure and the color of the lower related figure are adjacent to each other in appearance. Therefore, the trapping process can be executed without generating any unwanted color component on the image.
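The two-step extraction can be sketched as a set operation, assuming the objective figure's contour is pre-divided into labeled segments and adjacency to the upper and lower figures is given as segment sets. The segment representation and names are illustrative assumptions.

```python
def lower_adjacent_segments(contour, upper_adjacent, lower_adjacent):
    """From the objective figure's contour segments, first drop those
    already adjacent to an upper related figure, then keep only those
    adjacent to the lower related figure -- i.e. the portions where the
    two colors are actually adjacent in appearance."""
    remaining = [s for s in contour if s not in upper_adjacent]
    return [s for s in remaining if s in lower_adjacent]

# Example: segment s1 borders both the upper and the lower figure, but
# it is covered by the upper figure, so only s3 yields a trap vector.
contour = ["s1", "s2", "s3", "s4"]
visible = lower_adjacent_segments(contour, {"s1"}, {"s1", "s3"})
```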
Abstract:
A conveyor belt is arranged to feed a scanned original from the image scanning position to an intermediate position located between the image scanning position and a paper discharge tray, so that the next original can be fed to the image scanning position before the copied original is discharged to the paper discharge tray. Since each scanned original is first moved to the intermediate position, the next original can be fed as soon as the original at the image scanning position reaches the intermediate position. Therefore, the feed operation of the next original need not wait until the original at the image scanning position is discharged to the paper discharge tray, and the original-replacement processing time can be shortened.
Abstract:
An image pickup apparatus 12 includes a first camera 22 and a second camera 24. The cameras pick up images of a target at the same timing and at the same frame rate from left and right positions spaced a known distance from each other. The picked-up frame images are converted into image data of a predetermined plurality of resolutions. An input information acquisition section 26 of an information processing apparatus 14 acquires an instruction input from a user. A position information production section 28 roughly estimates, on a low-resolution wide-range image within the stereo image data, the region of the target or a region involving movement as a target region, and carries out stereo matching only for that region on a high-resolution image to specify a three-dimensional position of the target. An output information production section 32 carries out a necessary process based on the position of the target to produce output information. A communication section 30 requests and acquires image data from the image pickup apparatus 12.
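The coarse-to-fine idea can be sketched in one dimension as follows: a crude region estimate on a coarse signal, then sum-of-absolute-differences matching restricted to that region. For simplicity the sketch uses the same resolution for both steps (a real implementation would scale the region up to the high-resolution image); the threshold, SAD matching, and function names are illustrative assumptions.

```python
def coarse_region(frame, threshold=50):
    """Roughly locate the target on a coarse image row: here, the span
    of columns whose intensity exceeds a threshold (None if empty)."""
    cols = [x for x, v in enumerate(frame) if v > threshold]
    return (min(cols), max(cols)) if cols else None

def stereo_disparity(left, right, region, max_disp=4):
    """Sum-of-absolute-differences matching between the left and right
    rows, restricted to the coarsely estimated target region; returns
    the disparity with the lowest cost (depth follows from disparity,
    the baseline, and the focal length)."""
    x0, x1 = region
    best_sad, best_d = None, 0
    for d in range(max_disp + 1):
        sad = sum(abs(left[x] - right[x - d])
                  for x in range(max(x0, d), x1 + 1))
        if best_sad is None or sad < best_sad:
            best_sad, best_d = sad, d
    return best_d

# Example: a bright blob shifted 2 pixels between the two views.
left = [0, 0, 0, 0, 100, 100, 100, 0, 0, 0]
right = [0, 0, 100, 100, 100, 0, 0, 0, 0, 0]
region = coarse_region(left)
```

Restricting the search to `region` is what keeps the high-resolution matching cheap, which is the point of the two-stage estimate.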