Abstract:
Embodiments include systems and methods of accessing multimedia content. One embodiment includes a system for accessing multimedia data. The system includes a tangible object comprising at least one proximity device embedded within the tangible object. The tangible object is configured to provide identification information of the tangible object. The system further includes a reader configured to wirelessly detect the tangible object based upon the proximity device and receive the identification information. The system further includes a device configured to receive a signal from the reader in response to detecting the tangible object and configured to access multimedia data based upon the provided identification information.
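The tag-to-media flow described in this abstract might be sketched as follows. All names here (`TagReader`, `MEDIA_CATALOG`, the tag IDs and file paths) are illustrative placeholders, not details from the patent:

```python
# Hypothetical catalog mapping a tag's identification info to multimedia data.
MEDIA_CATALOG = {
    "04:A2:19:7F": "stories/dragon_tale.mp4",
    "04:B3:55:1C": "songs/lullaby.mp3",
}

class TagReader:
    """Stands in for a reader that wirelessly detects an embedded proximity device."""
    def __init__(self, detected_tag_id):
        self.detected_tag_id = detected_tag_id

    def poll(self):
        # A real reader would receive this identification information over the air.
        return self.detected_tag_id

def access_multimedia(reader):
    """Device-side handler: on a signal from the reader, resolve the tag ID to media."""
    tag_id = reader.poll()
    return MEDIA_CATALOG.get(tag_id)  # None if the tangible object is unknown
```

In this sketch the "device" is just a lookup function; the claimed system leaves open where the multimedia data is stored and how the reader signals the device.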
Abstract:
A method for controlling a robotic apparatus to produce desirable photographic results. The method includes, with a motor controller, first operating a robotics assembly to animate the robotic apparatus and, then, detecting an upcoming image capture. The method further includes, with the motor controller in response to the detecting of the upcoming image capture, second operating the robotics assembly to pose the robotic apparatus for the upcoming image capture. In some embodiments, the detecting includes a sensor mounted on the robotic apparatus sensing a pre-flash of light from a red-eye effect reduction mechanism of a camera. In other cases, the detecting includes a sensor mounted on the robotic apparatus sensing a range finder signal from a range finder of a camera. The posing may include opening eyes, moving a mouth into a smile, or otherwise striking a pose that is held temporarily to facilitate image capture with a camera.
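The two-phase control loop described above (animate, then pose on detecting an upcoming capture) could be sketched as a small state machine. The class and event names are assumptions for illustration, not the patent's terminology:

```python
import enum

class Mode(enum.Enum):
    ANIMATE = "animate"  # normal animation of the robotic apparatus
    POSE = "pose"        # pose held temporarily for image capture

class MotorController:
    """Sketch: switch from animating to posing when a sensor flags a capture."""
    def __init__(self):
        self.mode = Mode.ANIMATE

    def on_sensor_event(self, event):
        # A red-eye pre-flash or a range finder signal indicates an upcoming capture.
        if event in ("pre_flash", "range_finder"):
            self.mode = Mode.POSE
            return ["open_eyes", "smile"]  # example pose commands to the assembly
        return []
```

A real controller would also time out of `POSE` back to `ANIMATE` after the capture; that step is omitted here.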
Abstract:
Realistic electronically controlled eyes for a figure such as a doll, toy, animatronic being, robot, etc., are provided by displaying a sequence of images simulating eye movement on an electronic display screen mounted to a portion of the figure. A convex lens, which serves to simulate the figure's eye, is mounted substantially in contact with a surface of the display screen system from which the light defining the sequence of images is emitted. To an observer looking at the convex surface of the lens, the lens appears as an eye characterized by realistic eye movement.
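The displayed sequence of eye images might be driven by a simple endless frame stream, sketched below. The frame names are hypothetical:

```python
import itertools

# Hypothetical image assets for one simulated eye movement cycle.
EYE_FRAMES = ["eye_center.png", "eye_left.png", "eye_center.png", "eye_blink.png"]

def frame_stream(frames):
    """Endless sequence of images to draw on the display screen behind the lens."""
    return itertools.cycle(frames)
```

Each frame would be rendered on the display screen mounted in the figure, with the convex lens in contact with the emitting surface making the sequence read as a moving eye.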
Abstract:
A system for producing motions for an animatronic figure is disclosed. The system is configured to produce different types of motions in real-time and in a life-like manner. A motion software module forms a composite motion by combining the user-inputted motion with user-selected fixed sequences and/or with algorithmically calculated motion. The motions of the animatronic figure can further be filtered to produce motions that are life-like. Combined motions are formed by superimposing, modulating, or modifying component motions. Motions are filtered based on user-inputted commands and on commands determined from a stimulus to create a life-like motion.
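The superposition-and-filtering idea above can be sketched in a few lines: a weighted sum combines the user-inputted channel with a fixed sequence and a calculated component, and a simple low-pass filter smooths the composite. The weights and the exponential filter are assumptions for illustration, not the patent's algorithm:

```python
def compose(user, fixed, calculated, weights=(1.0, 0.5, 0.5)):
    """Superimpose three motion channels sample-by-sample with given weights."""
    wu, wf, wc = weights
    return [wu * u + wf * f + wc * c for u, f, c in zip(user, fixed, calculated)]

def low_pass(samples, alpha=0.3):
    """Exponential smoothing: one simple way to make a motion track life-like."""
    out, prev = [], samples[0]
    for s in samples:
        prev = prev + alpha * (s - prev)
        out.append(prev)
    return out

composite = compose([0.0, 1.0, 0.0], [0.2, 0.2, 0.2], [0.0, 0.0, 1.0])
smooth = low_pass(composite)
```

Modulating or modifying component motions, as the abstract also mentions, would replace the weighted sum with multiplication or a per-sample transform.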
Abstract:
The subject matter disclosed herein relates to a method and/or system for generating a dynamic image based, at least in part, on attributes associated with one or more individuals.
Abstract:
A method is provided for a simulated conversation by a pre-recorded audio navigator, with particular application to informational and entertainment settings. A monitor may utilize a navigation interface to select pre-recorded responses in the voice of a character represented by a performer. The pre-recorded responses may then be queued and sent to a speaker proximate to the performer. By careful organization of an audio database including audio buckets and script-based navigation with shifts for tailoring to specific guest user profiles and environmental contexts, a convincing and dynamic simulated conversation may be carried out while providing the monitor with a user-friendly navigation interface. Thus, highly specialized training is not necessary and flexible scaling to large-scale deployments is readily supported.
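The bucket-and-queue organization described above might be sketched as follows. The bucket names, clip filenames, and class API are all hypothetical:

```python
from collections import deque

# Hypothetical audio database organized into buckets of pre-recorded responses.
AUDIO_BUCKETS = {
    "greetings": ["hello_a.wav", "hello_b.wav"],
    "farewells": ["bye_a.wav"],
}

class AudioNavigator:
    """Monitor-facing sketch: selected responses are queued for the performer's speaker."""
    def __init__(self, buckets):
        self.buckets = buckets
        self.queue = deque()

    def select(self, bucket, index=0):
        # The monitor picks a response from a bucket via the navigation interface.
        self.queue.append(self.buckets[bucket][index])

    def play_next(self):
        # Dequeue the next clip to send to the speaker proximate to the performer.
        return self.queue.popleft() if self.queue else None
```

The script-based navigation and per-guest shifts mentioned in the abstract would sit above this layer, choosing which bucket and index to pass to `select`.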