Abstract:
An image processing method includes a position/orientation acquisition step of acquiring a position and orientation of a viewpoint, a virtual image creation step of creating a virtual image on the basis of the position and orientation of the viewpoint, and a holding step of holding an environment mapping image. In addition, an image extraction step extracts, from the environment mapping image, a mapping image area rotated according to a viewpoint rotation component about an axis in the line-of-sight direction, on the basis of the line-of-sight direction of the viewpoint and that rotation component. An environment mapping step then executes environment mapping processing for the virtual image created in the virtual image creation step, using an image included in the mapping image area extracted in the image extraction step.
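The extraction step can be illustrated with a minimal sketch: assuming a latitude-longitude environment mapping image, a crop is centred on the line-of-sight direction and then rotated by the roll component about that axis. The crop size, the clamping at the image border, and the use of scipy's image rotation are assumptions for illustration, not the patented implementation.

```python
import numpy as np
from scipy import ndimage  # used only for the in-plane rotation of the crop

def extract_mapping_region(env_map, yaw, pitch, roll, crop_size=128):
    """Extract a crop of the environment map centred on the line-of-sight
    direction (yaw, pitch) and rotate it by the roll component about that axis.

    env_map : H x W x 3 latitude-longitude environment mapping image (assumed layout)
    yaw, pitch, roll : viewpoint angles in radians
    """
    h, w = env_map.shape[:2]
    # Centre of the crop: yaw maps to the horizontal axis, pitch to the vertical axis.
    cx = int((yaw % (2 * np.pi)) / (2 * np.pi) * w)
    cy = int((0.5 - pitch / np.pi) * h)

    # Take a square crop around the centre (clamped to the image for simplicity;
    # a real latitude-longitude map would wrap around horizontally).
    half = crop_size // 2
    y0, y1 = max(cy - half, 0), min(cy + half, h)
    x0, x1 = max(cx - half, 0), min(cx + half, w)
    crop = env_map[y0:y1, x0:x1]

    # Rotate the crop by the roll component about the line-of-sight axis, so the
    # mapping image area follows the viewpoint's rotation about that axis.
    return ndimage.rotate(crop, np.degrees(roll), reshape=False, order=1)
```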
Abstract:
An acquired real image is converted on the basis of a first parameter to produce an image having a designated characteristic. A virtual image is likewise generated on the basis of a second parameter to produce an image having a designated characteristic. The converted real image is composited with the generated virtual image, and the composite image is displayed.
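A minimal compositing sketch follows, assuming the first parameter is a hypothetical gain/gamma pair applied to the real image and the second parameter has already been used to render the virtual image; the function names and the alpha-mask compositing are illustrative assumptions, not the patented method.

```python
import numpy as np

def convert_real(real_img, gain, gamma):
    """Convert the captured real image toward the designated characteristic
    (a gain/gamma pair stands in for the 'first parameter' here)."""
    img = real_img.astype(np.float32) / 255.0
    return np.clip(gain * np.power(img, gamma), 0.0, 1.0)

def composite(real_img, virtual_img, virtual_mask, gain=1.0, gamma=1.0):
    """Composite the converted real image with the generated virtual image.

    virtual_img  : float image in [0, 1] rendered with the 'second parameter'
    virtual_mask : per-pixel alpha of the virtual image (1 where CG is drawn)
    """
    converted = convert_real(real_img, gain, gamma)
    out = virtual_mask[..., None] * virtual_img + (1.0 - virtual_mask[..., None]) * converted
    return (out * 255.0).astype(np.uint8)
```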
Abstract:
When a CG object is overlaid on a photographed real scenery image used as a background and an image of a photographed real subject is composited in front of the overlaid image, an image in which the real background and the subject are captured simultaneously, rather than captured independently, is composited with CG. A photographed real image including a specific image is acquired, area information representing the area of the specific image is detected from the photographed real image, and the area of the photographed real image other than the area of the specific image is composited with a computer graphics image using the detected area information.
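As a rough illustration, the specific-image area can stand in for a colour-keyed region: the sketch below detects it with an assumed colour range and keeps it from the real image while compositing the remaining area with the CG image. The colour-range test and the hard (non-blended) compositing are assumptions for illustration only.

```python
import numpy as np

def detect_specific_area(real_img, lower, upper):
    """Return a boolean mask of pixels whose colour falls inside [lower, upper],
    standing in for the area information of the 'specific image'."""
    return np.all((real_img >= lower) & (real_img <= upper), axis=-1)

def composite_with_cg(real_img, cg_img, lower, upper):
    """Keep the specific-image area from the photographed real image and
    composite every other pixel with the computer graphics image."""
    area = detect_specific_area(real_img, lower, upper)
    return np.where(area[..., None], real_img, cg_img)
```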
Abstract:
In step S1030, the position and orientation of a stylus operated by the user in the physical space are calculated, and it is detected whether the stylus is located on the surface of a real object in the physical space. In step S1040, a virtual index is laid out at the position in the virtual space that corresponds to the position calculated upon detection. In step S1060, an image of the virtual space including the laid-out virtual index is superimposed on the physical space.
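A small sketch of steps S1030 and S1040 follows, assuming the real object's surface is locally planar with a known point and unit normal and an assumed 5 mm tolerance for the on-surface test; the data structures and names are illustrative only.

```python
import numpy as np

virtual_indices = []  # virtual indices laid out in the virtual space

def is_on_surface(stylus_pos, surface_point, surface_normal, tol=0.005):
    """Treat the stylus tip as being on the real object's surface when it lies
    within a small tolerance of the (locally planar) surface model."""
    offset = np.asarray(stylus_pos, dtype=float) - np.asarray(surface_point, dtype=float)
    return abs(np.dot(offset, surface_normal)) <= tol

def place_virtual_index(stylus_pos, surface_point, surface_normal):
    """Step S1040 sketch: lay out a virtual index at the virtual-space position
    corresponding to the stylus position detected on the surface (step S1030)."""
    if is_on_surface(stylus_pos, surface_point, surface_normal):
        virtual_indices.append(tuple(stylus_pos))
```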
Abstract:
When presenting a synthesized image obtained by combining a virtual world image with a real world image observed from the viewpoint position and direction of a user, data representing the position and orientation of the user is acquired, a virtual image is generated based on that data, and the virtual image is synthesized with a real image corresponding to the position and orientation of the user. Area data is set based on the area in which the position and orientation of the user can be measured. Based on the data representing the position of the user and the area data, notification related to the measurable area is controlled.
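The area data and the notification control can be sketched as follows, assuming a circular measurable area on the floor plane and an assumed warning margin; the actual shape of the measurable area and the form of notification are not specified here.

```python
def set_area_data(center, radius):
    """Area data describing the sensor's measurable range; a circular region on
    the floor plane is an assumption for illustration."""
    return {"center": center, "radius": radius}

def notify_measurable_area(user_position, area, margin=0.5):
    """Control notification about the measurable area based on the user's
    position: warn when the user is near the boundary or outside the area."""
    dx = user_position[0] - area["center"][0]
    dy = user_position[1] - area["center"][1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist > area["radius"]:
        return "outside measurable area"
    if dist > area["radius"] - margin:
        return "approaching boundary of measurable area"
    return None  # no notification needed
```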
Abstract:
A set of objects to be rendered by an identical rendering method is specified from the objects that form a virtual space. A hierarchical structure formed by the objects included in the specified set is generated (step S206). The objects included in the specified set are rendered, in accordance with the generated hierarchical structure, by the rendering method common to that set (steps S207 to S209).
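A compact sketch of steps S206 to S209 follows, assuming objects are plain dictionaries keyed by their rendering method and the per-set hierarchy is reduced to a flat traversal; a real implementation would build and walk an actual scene-graph hierarchy.

```python
from collections import defaultdict

def group_by_rendering_method(objects):
    """Step S206 sketch: collect the objects of the virtual space into sets that
    share the same rendering method, keyed by that method."""
    groups = defaultdict(list)
    for obj in objects:
        groups[obj["render_method"]].append(obj)
    return groups

def render_groups(groups, renderers):
    """Steps S207-S209 sketch: render each set with the method common to it."""
    for method, objs in groups.items():
        render = renderers[method]   # rendering function shared by the set
        for obj in objs:             # flat traversal stands in for the hierarchy
            render(obj)
```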
Abstract:
A view transformation matrix that represents the position/attitude of an HMD is generated based on a signal that represents the position/attitude of the HMD (S602). On the other hand, landmarks and their locations are detected based on a captured picture (S604), and a calibration matrix ΔMc is generated using the detected locations of the landmarks (S605). The position/attitude of the HMD is calibrated using the view transformation matrix and the calibration matrix ΔMc generated by the above processes (S606), a picture of a virtual object is generated based on external parameters that represent the calibrated position/attitude of the HMD, and a mixed reality picture is generated (S607). The generated mixed reality picture is displayed on the display section (S609).
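Step S606 can be sketched as a matrix composition, assuming 4x4 homogeneous matrices and left-multiplication of the view transformation matrix by ΔMc; the actual composition order depends on the coordinate conventions used.

```python
import numpy as np

def calibrated_view_matrix(view_matrix, delta_mc):
    """S606 sketch: correct the sensor-derived view transformation matrix with
    the calibration matrix ΔMc estimated from the detected landmark locations
    (S604-S605). Left-multiplication is an assumption for illustration."""
    return delta_mc @ view_matrix

# Example 4x4 homogeneous matrices; an identity ΔMc leaves the view matrix unchanged.
view = np.eye(4)
delta_mc = np.eye(4)
external_params = calibrated_view_matrix(view, delta_mc)
# The virtual object would then be rendered with these external parameters and
# combined with the captured picture to obtain the mixed reality picture (S607).
```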
Abstract:
A CG image having a transparency parameter is superimposed on a shot image, which is an image picked up by an image-pickup device, to obtain a combined image. The combined image is displayed in a combined-image-display region. In the combined image, a mask region of the CG image is set based on parameter information used to extract a region of a hand. The transparency parameter of the CG image is set based on a ratio of the size of the region of the CG image excluding the mask region to the size of the shot image. By checking the combined image, which is displayed in the combined-image-display region, the user can set the parameter information by a simple operation.
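The ratio-based setting of the transparency parameter can be sketched as below, assuming boolean masks of the shot image's size for the CG region and the extracted hand region; mapping the ratio directly to an alpha value is an assumption for illustration.

```python
import numpy as np

def transparency_from_masks(cg_mask, hand_mask):
    """Set the CG transparency parameter from the ratio of the CG region
    excluding the hand-mask region to the size of the shot image.

    cg_mask, hand_mask : boolean arrays of the shot image's size
                         (True where CG is drawn / where the hand is extracted).
    """
    total_pixels = cg_mask.size
    visible_cg = np.count_nonzero(cg_mask & ~hand_mask)  # CG region excluding the mask
    ratio = visible_cg / total_pixels
    # Using the ratio directly as the transparency parameter is an assumption.
    return float(np.clip(ratio, 0.0, 1.0))
```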
Abstract:
When a plurality of users experience the same mixed reality (MR), there is a possibility that the users come into contact with one another. An image processing apparatus is therefore provided that can report to the MR users that there is a possibility of contact. A real space image captured from the position and orientation of a user's viewpoint is drawn, and the position and orientation of the user's viewpoint at this time are detected by a sensor unit. It is determined whether or not the distance between the users' viewpoint positions is smaller than or equal to an attention distance at which there is a possibility of contact. If it is smaller than or equal to the attention distance, an attention display indicating this fact is performed.
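The attention-distance check can be sketched as a simple distance comparison between two users' viewpoint positions; the 1 m default attention distance is an assumed value for illustration.

```python
import numpy as np

def needs_attention_display(own_viewpoint, other_viewpoint, attention_distance=1.0):
    """Return True when the distance between the two users' viewpoint positions
    is smaller than or equal to the attention distance, i.e. when the attention
    display for a possible contact should be performed."""
    distance = np.linalg.norm(np.asarray(own_viewpoint, dtype=float) -
                              np.asarray(other_viewpoint, dtype=float))
    return distance <= attention_distance
```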