Abstract:
Image classification methods and systems are provided. First, an image is obtained using a computer. The image is then processed using the computer to obtain image information. The image information includes one or any combination of: an average color difference between at least one average channel value of pixels in at least one boundary region of the image and a predefined standard value; a gradient variation magnitude difference between at least two regions of the image; and a percentage of edge pixels in the image relative to the whole image. The image is then classified using the computer according to the image information.
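A minimal sketch of how such image information might be computed and combined, assuming OpenCV/NumPy; the border width, region split, reference value, thresholds, and class labels below are illustrative assumptions not specified in the abstract:

```python
import cv2
import numpy as np

# Hypothetical reference value and thresholds; none of these are given in the abstract.
STANDARD_BOUNDARY_VALUE = 128.0
COLOR_DIFF_THRESHOLD = 40.0
GRADIENT_DIFF_THRESHOLD = 15.0
EDGE_RATIO_THRESHOLD = 0.05

def boundary_color_difference(img, border=10):
    # Average channel value over a fixed-width border region vs. a predefined standard value.
    mask = np.zeros(img.shape[:2], dtype=bool)
    mask[:border, :] = True
    mask[-border:, :] = True
    mask[:, :border] = True
    mask[:, -border:] = True
    return abs(float(img[mask].mean()) - STANDARD_BOUNDARY_VALUE)

def gradient_variation_difference(img):
    # Difference in gradient-magnitude variation between two regions (top vs. bottom halves here).
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    half = mag.shape[0] // 2
    return abs(float(mag[:half].std()) - float(mag[half:].std()))

def edge_percentage(img):
    # Fraction of pixels detected as edges relative to the whole image.
    edges = cv2.Canny(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 100, 200)
    return np.count_nonzero(edges) / edges.size

def classify(img):
    # Toy decision rule: the abstract only states that the image is classified
    # "according to the image information", so the labels below are purely illustrative.
    if (boundary_color_difference(img) < COLOR_DIFF_THRESHOLD
            and gradient_variation_difference(img) < GRADIENT_DIFF_THRESHOLD
            and edge_percentage(img) < EDGE_RATIO_THRESHOLD):
        return "uniform / graphic-like"
    return "natural / photo-like"

label = classify(cv2.imread("example.jpg"))  # "example.jpg" is a placeholder input
```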
Abstract:
A control box includes a shell, a display hole, an active member, an indicating member and an interlinking member. The display hole is formed in the shell. The active member has a turn-on end and a turn-off end and is pivotally connected to the shell so that it can seesaw. The indicating member is movably disposed in the shell and has a first identifiable portion and a second identifiable portion. The interlinking member is connected between the active member and the indicating member to drive the indicating member to move. Accordingly, when the active member seesaws under an external force, the indicating member is driven by the interlinking member to show the first identifiable portion or the second identifiable portion in the display hole.
Abstract:
An example-based 2D-to-3D image conversion method, a computer-readable medium therefor, and a system are provided. The embodiments are based on an image database that contains depth information or from which depth information can be generated. For a 2D image to be converted into 3D content, a matched background image is found in the database. In addition, graph-based segmentation and comparison techniques are employed to detect the foreground of the 2D image, so that a relative depth map can be generated from the foreground and background information. The 3D content can therefore be provided as the 2D image plus the depth information, allowing users to obtain 3D content from a 2D image rapidly and automatically and enabling the 3D content to be rendered.
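A rough sketch of this pipeline, assuming the database is supplied as (image, depth map) pairs and substituting simple histogram matching and difference thresholding for the unspecified background-matching and graph-based segmentation steps:

```python
import cv2
import numpy as np

def find_matched_background(query_bgr, database):
    # Pick the database entry most similar to the query using a color-histogram comparison.
    # `database` is assumed to be a list of (image, depth_map) pairs; the actual matching
    # metric used by the method is not specified in the abstract.
    q_hist = cv2.calcHist([query_bgr], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    cv2.normalize(q_hist, q_hist)
    best, best_score = None, -1.0
    for img, depth in database:
        h = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
        cv2.normalize(h, h)
        score = cv2.compareHist(q_hist, h, cv2.HISTCMP_CORREL)
        if score > best_score:
            best, best_score = (img, depth), score
    return best

def detect_foreground(query_bgr, matched_bgr):
    # Rough foreground mask: pixels that differ strongly from the matched background.
    # The method uses graph-based segmentation; a simple absolute-difference threshold stands in here.
    resized = cv2.resize(matched_bgr, query_bgr.shape[1::-1])
    diff = cv2.absdiff(query_bgr, resized)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY)
    return mask

def build_relative_depth_map(query_bgr, database, foreground_depth=255):
    # Combine the matched background's depth map with a constant "near" depth for the foreground.
    matched_img, matched_depth = find_matched_background(query_bgr, database)
    depth = cv2.resize(matched_depth, query_bgr.shape[1::-1]).astype(np.uint8)
    mask = detect_foreground(query_bgr, matched_img)
    depth[mask > 0] = foreground_depth  # foreground assumed closest to the viewer
    return depth
```

The 2D image together with the resulting relative depth map then forms the 3D content described above.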