Abstract:
A camera apparatus is described that includes a frame housing and a camera module affixed to the frame housing. The camera module may include a lens and an image sensor. The camera apparatus may include a reflective element and a motor. The reflective element may be disposed within the frame housing, the reflective element being movable relative to the lens to select a direction from which the lens collects light. The motor may be adapted to move the reflective element in response to detecting a change in the magnetic field generated by at least one magnet disposed within the frame housing.
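As a rough illustration of the relationship this abstract describes, the sketch below drives a stepper-style motor to reposition the reflective element when a sensed magnetic field changes. The Hall-effect reading, threshold, and step resolution are hypothetical assumptions, since the abstract does not specify an interface.

```python
# Minimal control-loop sketch (hypothetical names and thresholds). A Hall-effect
# reading stands in for the detected magnetic field, and the motor steps the
# reflective element so the lens collects light from a newly selected direction.

FIELD_CHANGE_THRESHOLD_MT = 0.5   # assumed trigger level, in millitesla
DEGREES_PER_STEP = 0.1            # assumed motor resolution


class ReflectiveElementController:
    def __init__(self, read_field_mt, step_motor):
        self._read_field_mt = read_field_mt   # callable -> field strength (mT)
        self._step_motor = step_motor         # callable(steps) -> None
        self._last_field_mt = read_field_mt()

    def poll(self, target_angle_deg, current_angle_deg):
        """Step the reflective element toward target_angle_deg when the field changes."""
        field = self._read_field_mt()
        if abs(field - self._last_field_mt) >= FIELD_CHANGE_THRESHOLD_MT:
            error_deg = target_angle_deg - current_angle_deg
            self._step_motor(round(error_deg / DEGREES_PER_STEP))
        self._last_field_mt = field
```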
Abstract:
An apparatus is described. The apparatus includes a smart image sensor having a memory and a processor that are locally integrated with an image sensor. The memory is coupled to the image sensor and the processor, and is to store first program code and second program code to be executed by the processor. The first program code is to cause the smart image sensor to perform an analysis on one or more images captured by the image sensor. The analysis identifies a region of interest within the one or more images using machine learning from previously captured images. The second program code is to cause the smart image sensor to change an image sensing and/or optical parameter in response to the analysis of the one or more images performed by the execution of the first program code. Alternatively or in combination, the memory is to store third program code and fourth program code to be executed by the processor. The third program code is to store multiple images captured by the image sensor in the memory. The fourth program code is to merge the multiple images in the memory.
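The sketch below illustrates the program-code roles described above under stated assumptions: the ROI model, the exposure parameter, and the averaging merge are placeholders, since the abstract does not name a model, a sensing parameter, or a merge method.

```python
# Illustrative sketch of the program-code paths described in the abstract
# (all function and parameter names here are hypothetical).

import numpy as np


def find_region_of_interest(frame, roi_model):
    """First program code: analyze a captured frame and return an ROI box.

    roi_model stands in for a model trained on previously captured images;
    here it is any callable mapping a frame to (x, y, w, h).
    """
    return roi_model(frame)


def adjust_capture_parameters(frame, roi, sensor):
    """Second program code: change an image-sensing parameter based on the ROI.

    As an example, expose for the mean brightness inside the ROI.
    """
    x, y, w, h = roi
    roi_mean = frame[y:y + h, x:x + w].mean()
    sensor["exposure_gain"] = 128.0 / max(roi_mean, 1.0)   # assumed target level


def merge_frames(frames):
    """Third/fourth program code: store several frames in memory and merge them.

    A simple average stands in for whatever merge is actually contemplated
    (e.g., noise reduction or HDR fusion).
    """
    stack = np.stack(frames).astype(np.float32)
    return stack.mean(axis=0).astype(np.uint8)
```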
Abstract:
Example virtual-reality head-mounted devices having a reduced number of cameras, and methods of operating the same, are disclosed herein. A disclosed example method includes providing a virtual-reality (VR) head-mounted display (V-HMD) having an imaging sensor, the imaging sensor including color-sensing pixels and infrared (IR) sensing pixels amongst the color-sensing pixels; capturing, using the imaging sensor, an image having a color portion and an IR portion; forming an IR image from at least some of the IR portion removed from the image; performing a first tracking based on the IR image; forming a color image by replacing the at least some of the removed IR portion with color data determined from the color portion of the image and the location of the removed IR-sensing pixels in the image; and performing a second tracking based on the color image.
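A minimal sketch of the IR/color separation step is shown below; the regular one-IR-pixel-per-4x4-block layout and the single-channel mosaic frame are illustrative assumptions, not the pixel arrangement the patent specifies.

```python
# Sketch of separating the IR-sensing pixels from a mixed color/IR mosaic and
# back-filling their locations with neighboring color data, as the method
# describes. Layout and frame format are assumptions for illustration.

import numpy as np

IR_STRIDE = 4   # assumed: one IR-sensing pixel per 4x4 block


def split_ir_and_color(raw):
    """Return (ir_image, color_image) from a single-channel mosaic frame."""
    ir_mask = np.zeros(raw.shape, dtype=bool)
    ir_mask[::IR_STRIDE, ::IR_STRIDE] = True

    # IR image: keep only the IR samples (one per block); used for the first tracking.
    ir_image = raw[::IR_STRIDE, ::IR_STRIDE].copy()

    # Color image: replace each removed IR location with the mean of its
    # color-sensing neighbors; used for the second tracking.
    color_image = raw.astype(np.float32)
    ys, xs = np.nonzero(ir_mask)
    for y, x in zip(ys, xs):
        y0, y1 = max(y - 1, 0), min(y + 2, raw.shape[0])
        x0, x1 = max(x - 1, 0), min(x + 2, raw.shape[1])
        patch = raw[y0:y1, x0:x1]
        neighbors = patch[~ir_mask[y0:y1, x0:x1]]
        if neighbors.size:
            color_image[y, x] = neighbors.mean()
    return ir_image, color_image.astype(raw.dtype)
```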
Abstract:
An immersive video teleconferencing system may include a transparent display and at least one image sensor operably coupled to the transparent display. The at least one image sensor may be multiple cameras included on a rear side of the transparent display, or a depth camera operably coupled to the transparent display. Depth data may be extracted from the images collected by the at least one image sensor, and an image of a predetermined subject may be segmented from a background of the collected images based on the depth data. The image of the segmented predetermined subject may also be scaled based on the depth data. The scaled image of the segmented predetermined subject may be transmitted to a remote transparent display at a remote location, and displayed on the remote transparent display such that the background of the remote location surrounding the displayed image is visible through the transparent display, so that the predetermined subject appears to be physically located at the remote location.
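The depth-based segmentation and scaling steps might look like the sketch below; the depth threshold, reference distance, and nearest-neighbor resize are illustrative assumptions, since the abstract does not specify how the mask or the scale factor is computed.

```python
# Sketch of depth-based subject segmentation and scaling (assumed parameters).

import numpy as np

SUBJECT_MAX_DEPTH_M = 1.5   # assumed: subject sits within 1.5 m of the display
REFERENCE_DEPTH_M = 1.0     # assumed: depth at which the subject is shown life-size


def segment_subject(color, depth):
    """Keep only pixels closer than the threshold; zero out the background."""
    mask = depth < SUBJECT_MAX_DEPTH_M
    segmented = color.copy()
    segmented[~mask] = 0
    return segmented, mask


def scale_for_remote_display(segmented, depth, mask):
    """Scale the segmented subject for display at the remote location."""
    subject_depth = float(depth[mask].mean()) if mask.any() else REFERENCE_DEPTH_M
    scale = subject_depth / REFERENCE_DEPTH_M   # farther subject -> enlarged image
    h, w = segmented.shape[:2]
    new_h, new_w = max(int(h * scale), 1), max(int(w * scale), 1)
    # Nearest-neighbor resize via index maps, to avoid extra dependencies.
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return segmented[rows][:, cols]
```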
Abstract:
An apparatus is described. The apparatus includes a first camera system having a processor and a memory. The first camera system includes an interface to receive images from a second camera system. The processor and memory are to execute image processing program code for first images that are captured by the first camera system and for second images that are captured by the second camera system and received at the interface.
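A minimal sketch of the described arrangement follows: the first camera system runs one image-processing routine over both its own captures and frames received from the second camera system over an interface. The queue-based interface and the normalization step are illustrative stand-ins for whatever the real program code does.

```python
# Sketch of a first camera system that processes its own frames and frames
# received from a second camera system over an interface (assumed design).

import queue

import numpy as np


class FirstCameraSystem:
    def __init__(self):
        self.interface = queue.Queue()   # stands in for the link to the second camera

    def receive_from_second_camera(self, frame):
        """The second camera system pushes its captured frames here."""
        self.interface.put(frame)

    def process(self, frame):
        """Image-processing program code shared by both image sources
        (a simple normalization stands in for the real processing)."""
        frame = frame.astype(np.float32)
        lo, hi = float(frame.min()), float(frame.max())
        return (frame - lo) / max(hi - lo, 1e-6)

    def run_once(self, own_frame):
        """Process one of the first camera's frames plus any received frames."""
        results = [self.process(own_frame)]
        while not self.interface.empty():
            results.append(self.process(self.interface.get()))
        return results
```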
Abstract:
In one general aspect, a system can include an electromagnetic interference (EMI) filter; an alternating current (AC) rectifier bridge operatively coupled to the EMI filter, the AC rectifier bridge providing a first voltage; a first power stage including a step-down transformer, the first power stage configured to receive the first voltage and output a second voltage; a second power stage configured to receive the second voltage and to convert the second voltage to a third voltage; and a power delivery adapter controller configured to receive at least one input indicative of a requested voltage value and to provide at least one output for use by the second power stage, the second power stage configured to determine a value for the third voltage based on the at least one output.
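As a rough sketch of the control relationship, the controller below maps a requested voltage to a reference that the second power stage uses to set the third voltage; the supported voltage levels, the buck-style duty-cycle model, and the assumed second-stage input voltage are illustrative assumptions, not the patent's design.

```python
# Illustrative sketch: a power delivery adapter controller produces an output
# from a requested voltage value, and the second power stage determines the
# third (output) voltage from that output. All values are assumptions.

SUPPORTED_VOLTAGES_V = (5.0, 9.0, 15.0, 20.0)   # assumed USB-PD-style levels


class PowerDeliveryAdapterController:
    def output_for_request(self, requested_v):
        """Clamp the request to the nearest supported level and emit a reference."""
        target = min(SUPPORTED_VOLTAGES_V, key=lambda v: abs(v - requested_v))
        return {"reference_v": target}


class SecondPowerStage:
    def __init__(self, second_voltage_v=24.0):   # assumed second-voltage input
        self.input_v = second_voltage_v

    def convert(self, controller_output):
        """Determine the third voltage based on the controller's output.

        A buck-style stage is assumed: duty cycle = target / input voltage.
        """
        target = controller_output["reference_v"]
        duty = min(target / self.input_v, 1.0)
        return duty * self.input_v   # third voltage delivered to the load
```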
Abstract:
Provided are methods and systems for memory decompression using a hardware decompressor that minimizes or eliminates the involvement of software. Custom decompression hardware is added to the memory subsystem, where it handles read accesses to compressed memory blocks caused by, for example, cache misses or requests from devices, by reading the compressed block, decompressing it into an internal buffer, and returning the requested portion of the block. The custom hardware determines whether a block is compressed, and the parameters of the compression, by checking unused high bits of the physical address of the access. This allows compression to be implemented without additional metadata, because the necessary metadata can be stored in unused bits of the existing page table structures.
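One way the address-bit encoding could work is sketched below; the specific bit positions, field widths, and field meanings are illustrative assumptions, since the abstract only says that unused high physical-address bits carry the compression metadata.

```python
# Sketch of packing compression metadata into otherwise-unused high bits of a
# physical address, as the abstract describes. Bit layout is an assumption.

USED_PA_BITS = 48          # assumed physical-address width actually in use
COMPRESSED_FLAG_BIT = 63   # assumed: top bit marks a compressed block
ALGO_SHIFT, ALGO_MASK = 60, 0x7     # assumed 3-bit compression-algorithm ID
SIZE_SHIFT, SIZE_MASK = 56, 0xF     # assumed 4-bit compressed-size class


def decode_access(physical_address):
    """Split an access into (base address, compression parameters or None)."""
    base = physical_address & ((1 << USED_PA_BITS) - 1)
    if not (physical_address >> COMPRESSED_FLAG_BIT) & 1:
        return base, None   # ordinary, uncompressed access
    params = {
        "algorithm_id": (physical_address >> ALGO_SHIFT) & ALGO_MASK,
        "size_class": (physical_address >> SIZE_SHIFT) & SIZE_MASK,
    }
    return base, params


def handle_read(physical_address, offset, size, read_block, decompress):
    """Serve a read: decompress the block into a buffer, return the requested slice."""
    base, params = decode_access(physical_address)
    block = read_block(base)                 # raw (possibly compressed) block
    if params is not None:
        block = decompress(block, params)    # into an internal buffer
    return block[offset:offset + size]
```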