Abstract:
A speech control apparatus and a method thereof are provided. The speech control apparatus logs a user into application software according to a speech signal of the user. The speech control apparatus is connected to a password bank comprising a plurality of accounts and passwords. The speech control apparatus comprises a speech processing module, a start module, a first receiving module, an identity recognition module, a selection module, and a login module. The speech processing module determines a meaning of the speech signal. The start module starts the application software according to the meaning of the speech signal. The first receiving module receives a biometric feature of the user. The identity recognition module identifies the user as authorized according to the biometric feature. The selection module selects a login set of an account and a password from the password bank according to the speech signal and the biometric feature. The login module logs the user into the application software according to the login set of the account and the password.
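The module chain above can be sketched in a few lines. This is a minimal illustration, not the patented design: the password-bank layout, the function name, and the idea that the speech meaning directly names the application are all assumptions for the example.

```python
# Hypothetical password bank: (application, user) -> (account, password).
PASSWORD_BANK = {
    ("mail", "alice"): ("alice@example.com", "s3cret"),
}

def speech_login(speech_meaning, user_id, biometric_ok):
    """Select a login set after speech and biometric checks.

    speech_meaning: the meaning the speech processing module extracted
                    (assumed here to name the application to start)
    biometric_ok:   result of the identity recognition module
    """
    if not biometric_ok:
        # Identity recognition module rejects unauthorized users.
        raise PermissionError("user not authorized")
    key = (speech_meaning, user_id)
    if key not in PASSWORD_BANK:
        # Selection module found no matching login set.
        raise KeyError("no login set for this user and application")
    account, password = PASSWORD_BANK[key]
    # The login module would now log the user in with these credentials.
    return account, password
```

For instance, `speech_login("mail", "alice", True)` returns the stored login set, while any call with a failed biometric check raises before the password bank is consulted.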
Abstract:
An exemplary electronic device (10) comprises a display (11), a chip controller (131), a power supply (14) and a main processor (12). The display has a capacitive touch screen (111). The power supply is electrically connected with and controlled by the chip controller. The main processor is electrically connected to the chip controller. The main processor stores a start operational input and calculates a touch signal generated when the touch screen is touched. The main processor further compares the touch signal with the start operational input to decide whether to send a start instruction to the chip controller to start the electronic device. The present invention further provides a method for starting the electronic device.
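The comparison step can be sketched as follows. The abstract does not specify how the touch signal is represented, so this example assumes, purely for illustration, that both the stored start operational input and the calculated touch signal are sequences of touch-point coordinates matched within a pixel tolerance.

```python
def matches_start_input(touch_signal, start_input, tol=5):
    """Return True when every touch point of the calculated signal lies
    within `tol` pixels of the corresponding point of the stored start
    operational input (point-sequence representation is an assumption).
    A True result means the main processor would send the start
    instruction to the chip controller."""
    if len(touch_signal) != len(start_input):
        return False
    return all(abs(tx - sx) <= tol and abs(ty - sy) <= tol
               for (tx, ty), (sx, sy) in zip(touch_signal, start_input))
```

For example, a two-point gesture close to the stored input matches, while a touch far from the stored pattern does not, and the device stays off.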
Abstract:
A light emitting apparatus includes a substrate, an insulating layer and at least one light emitting device. The insulating layer is disposed over the substrate and has a patterned area exposing at least a portion of the substrate. The light emitting device is disposed over the substrate and is located in the patterned area.
Abstract:
A hand-held device is provided. The hand-held device includes a rotary wheel, a first switch, a second switch, an encoder, a memory, and a controller. The rotary wheel has a plurality of positioning inputs. The encoder is responsive to operations on the rotary wheel and outputs positioning codes. The memory stores a plurality of input modes, wherein each input mode includes characters corresponding to positioning codes. The controller obtains an input mode from the memory and, in response to an input from the first switch, shifts the current input mode to the obtained input mode. In response to an input from the second switch, the controller inputs a character of the current input mode according to the positioning code output by the encoder. The rotary wheel can thus replace a keyboard in the hand-held device.
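The controller's behavior can be sketched as below. The class name, the dictionary representation of the input modes, and the simple round-robin mode shifting are assumptions for illustration; the abstract only requires that the first switch shifts modes and the second switch inputs a character for the encoder's positioning code.

```python
class RotaryInputController:
    """Minimal sketch of the controller and memory of the device."""

    def __init__(self, modes):
        # Memory: each input mode maps positioning codes to characters.
        self.modes = modes
        self.current = 0  # index of the current input mode

    def press_first_switch(self):
        # Shift to the next input mode obtained from memory
        # (round-robin ordering is an assumption).
        self.current = (self.current + 1) % len(self.modes)

    def press_second_switch(self, positioning_code):
        # Input the character of the current mode corresponding to the
        # positioning code the encoder output for the wheel position.
        return self.modes[self.current][positioning_code]
```

Usage: with a letter mode `{0: "a", 1: "b"}` and a digit mode `{0: "1", 1: "2"}`, the same wheel position yields "a" before a first-switch press and "1" after it.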
Abstract:
A method for verifying scan precision of a laser measurement machine includes the steps of: preparing a transparent flat, the flatness of each plane of which is regarded as a flatness conventional true value; determining an optimum scanning mode; determining optimum scanning parameters under the optimum scanning mode; scanning the transparent flat a certain number of times under the optimum scanning mode and the optimum scanning parameters, and obtaining measuring data; calculating a plurality of flatness values using the measuring data; calculating an average value and a standard deviation of the flatness values, and a bias between the average value and the flatness conventional true value; evaluating the repeatability of the laser measurement machine according to the standard deviation; and evaluating the accuracy of the laser measurement machine according to the bias.
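The statistical steps above can be sketched as follows; the function name and the numerical figures are illustrative, not from the method itself.

```python
import statistics

def evaluate_scan_precision(flatness_values, true_value):
    """Evaluate a laser measurement machine from repeated scans.

    flatness_values: flatness calculated from each repeated scan
    true_value:      flatness conventional true value of the
                     transparent flat
    Returns (average, standard deviation, bias): the standard
    deviation characterizes repeatability, the bias accuracy.
    """
    average = statistics.mean(flatness_values)
    std_dev = statistics.stdev(flatness_values)  # sample std deviation
    bias = average - true_value
    return average, std_dev, bias

# Hypothetical measuring data from ten repeated scans (mm):
values = [0.012, 0.011, 0.013, 0.012, 0.010,
          0.013, 0.011, 0.012, 0.012, 0.011]
avg, sd, bias = evaluate_scan_precision(values, true_value=0.010)
```

A small standard deviation indicates repeatable scanning; a bias close to zero indicates the machine measures close to the conventional true value.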
Abstract:
An image is analyzed to locate an object appearing in the image. A contour of that object is extracted from the image and normalized. Based on the normalized contour, one or more summation invariant values are determined and compared to templates, each comprising one or more summation invariants for a target object. When the summation invariants for the extracted object sufficiently match the summation invariants determined from an image of a target object, the extracted object is recognized as that target object. The summation invariants can be semi-local summation invariants determined for each point along the normalized contour, based on a number of points neighboring that point on the normalized contour. The semi-local summation invariants are determined as a function of the x and y coordinates of those points.
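The normalization and semi-local summation steps can be sketched as below. The abstract does not fix the normalization scheme or the exact invariant functions, so the centroid/unit-RMS normalization and the plain coordinate sums over a neighbor window are stand-in assumptions for illustration.

```python
import math

def normalize_contour(points):
    """Translate a closed contour to its centroid and scale it to unit
    RMS radius (one plausible normalization; the method itself does
    not prescribe a specific scheme)."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    shifted = [(x - cx, y - cy) for x, y in points]
    scale = math.sqrt(sum(x * x + y * y for x, y in shifted) / n)
    return [(x / scale, y / scale) for x, y in shifted]

def semi_local_invariants(points, window=3):
    """For each point on the closed contour, sum the x and y
    coordinates of the points in a window of neighbors around it
    (wrapping around the contour). These sums are a simplified
    stand-in for the semi-local summation invariants."""
    n = len(points)
    features = []
    for i in range(n):
        sx = sum(points[(i + k) % n][0] for k in range(-window, window + 1))
        sy = sum(points[(i + k) % n][1] for k in range(-window, window + 1))
        features.append((sx, sy))
    return features
```

Matching would then compare the per-point feature sequence of the extracted contour against each template's sequence, e.g. by a distance threshold, to decide whether the object is recognized.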