Abstract:
A multi view image display apparatus is provided. The multi view image display apparatus includes: a depth adjuster configured to adjust a depth of an input image; a renderer configured to render a multi view image based on the input image of which the depth is adjusted; a display configured to arrange a multi view image in a preset arrangement pattern in order to display the multi view image; and a controller configured to control the depth adjuster to shift the depth of the input image based on depth information related to at least one object of the input image so that an object satisfying a preset criterion has a preset depth value.
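The depth-shifting step described above can be illustrated with a minimal sketch. This is not the claimed implementation: the selection criterion (here, the object nearest the viewer) and the preset target depth are illustrative assumptions.

```python
# Hedged sketch: shift an input image's depth map so that a selected
# object lands at a preset depth value. The "preset criterion" is assumed
# here to be "nearest object"; the real apparatus may use another rule.

def shift_depth(depth_map, object_depths, target_depth=0.0):
    """depth_map: per-pixel depth values; object_depths: per-object depths."""
    # Select the object satisfying the preset criterion (assumed: nearest).
    focus_depth = min(object_depths)
    # Shift all depths so the selected object sits at the preset depth value.
    offset = target_depth - focus_depth
    return [d + offset for d in depth_map]

shifted = shift_depth([2.0, 3.5, 5.0], object_depths=[3.5, 5.0],
                      target_depth=0.0)
# The nearest object (depth 3.5) is moved to depth 0.0; all pixels shift by -3.5.
```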
Abstract:
A robot is provided. The robot includes a driving part, a three dimensional (3D) depth sensor, a memory storing instructions, and a processor connected to the driving part, the 3D depth sensor, and the memory. The processor is configured to execute the instructions to acquire a depth image of a driving surface photographed by the 3D depth sensor while the robot is driving in a space, acquire, based on the acquired depth image, location information of a boundary area where tilt information of the driving surface is changed, acquire type information corresponding to an outside area of the boundary area based on the acquired location information of the boundary area and the changed tilt information, and control the driving part based on the acquired type information.
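The boundary-detection idea above can be sketched on a one-dimensional ground-height profile: tilt is the successive height difference, a boundary is where the tilt changes, and the changed tilt classifies the outside area. The threshold and type labels are illustrative assumptions, not the claimed method.

```python
# Hedged sketch: locate a boundary where the driving surface's tilt
# changes, and classify the area beyond it from the changed tilt.

def find_tilt_boundary(heights, tol=0.05):
    """heights: ground-height samples along the driving direction.
    Returns (boundary index, changed tilt, type label) or None."""
    tilts = [b - a for a, b in zip(heights, heights[1:])]
    for i in range(1, len(tilts)):
        if abs(tilts[i] - tilts[i - 1]) > tol:
            # Classify the outside area from the new tilt (labels assumed).
            if tilts[i] < -tol:
                kind = "drop-off"
            elif tilts[i] > tol:
                kind = "ramp"
            else:
                kind = "flat"
            return i, tilts[i], kind
    return None

# A flat floor that suddenly falls away is reported as a drop-off boundary.
result = find_tilt_boundary([0.0, 0.0, 0.0, -0.3, -0.6])
```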
Abstract:
Provided are a method, performed by an electronic device, of controlling a vehicle, and an electronic device for the same. A method, performed by an electronic device, of controlling a vehicle includes: transmitting, to an external server communicatively connected to the vehicle, as profile information of the vehicle, sensor information regarding at least one sensor mounted on the vehicle, communication efficiency information of the vehicle, and driving information of the vehicle; receiving, from the external server, precise map data related to at least one map layer selected based on the profile information of the vehicle from among a plurality of map layers that are combined to form a precise map and distinguished according to attributes thereof; and controlling the vehicle to perform autonomous driving by using the received precise map data.
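The server-side selection step can be sketched as attribute matching between the vehicle profile and the map layers. The layer names, attribute keys, and matching rule below are illustrative assumptions, not the claimed design.

```python
# Hedged sketch: select precise-map layers whose attributes match a
# vehicle's profile (sensors and communication efficiency). All names
# and thresholds are hypothetical.

LAYERS = {
    "lane_geometry":   {"needs_sensor": "camera", "min_bandwidth": 1},
    "lidar_landmarks": {"needs_sensor": "lidar",  "min_bandwidth": 5},
    "traffic_signs":   {"needs_sensor": "camera", "min_bandwidth": 1},
}

def select_layers(profile):
    """Return the layers this vehicle can actually use, sorted by name."""
    return sorted(
        name for name, attr in LAYERS.items()
        if attr["needs_sensor"] in profile["sensors"]
        and profile["bandwidth"] >= attr["min_bandwidth"]
    )

# A camera-only, low-bandwidth vehicle receives only the camera layers.
chosen = select_layers({"sensors": {"camera"}, "bandwidth": 2})
```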
Abstract:
Provided is a method, performed by a first vehicle, of providing detailed map data, the method including collecting first traveling data about a first path using at least one first sensor while the first vehicle is traveling the first path; obtaining first detailed map data corresponding to the first path based on the first traveling data about the first path; and providing the first detailed map data to at least one external device.
Abstract:
A robot may include a LiDAR sensor, and a processor configured to acquire, based on a sensing value of the LiDAR sensor, a first map that covers a space where the robot is located, detect one or more obstacles existing in the space based on the sensing value of the LiDAR sensor, acquire a number of times that each of a plurality of areas in the first map is occupied by the one or more obstacles, based on location information of the one or more obstacles, determine an obstacle area based on the number of times that each of the plurality of areas is occupied by the one or more obstacles, and acquire a second map indicating the obstacle area on the first map to determine a driving route of the robot based on the second map.
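The occupancy-counting step above maps naturally onto a grid: count how often each cell is occupied by a detected obstacle, then mark cells whose count reaches a threshold as the obstacle area. The threshold value is an illustrative assumption.

```python
# Hedged sketch: accumulate per-cell occupancy counts from obstacle
# detections and derive the obstacle area for the second map.
from collections import Counter

def build_obstacle_area(detections, threshold=3):
    """detections: (x, y) grid cells where an obstacle was observed.
    Cells occupied at least `threshold` times form the obstacle area."""
    counts = Counter(detections)
    return {cell for cell, n in counts.items() if n >= threshold}

# Cell (0, 0) was occupied three times and is marked; (1, 1) only twice.
area = build_obstacle_area([(0, 0), (0, 0), (0, 0), (1, 1), (1, 1)])
```

The robot's driving route would then be planned on the first map while treating the returned cells as blocked.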
Abstract:
Provided is a robot device and method of controlling same, wherein the robot device includes: at least one sensor; at least one memory configured to store at least one instruction; and at least one processor configured to execute the at least one instruction to: based on the robot device being positioned at a first position, control the robot device in a first mode corresponding to the first position, identify, based on sensing data obtained by the at least one sensor, a first event of the robot device being picked up by a user and a second event of the robot device being placed down, and based on an identification that a position of the robot device is changed from the first position to a second position based on new sensing data obtained by the at least one sensor after the first event and the second event sequentially occur, control the robot device in a second mode corresponding to the second position.
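The sequential pick-up/placement identification can be sketched as a small state machine over vertical-acceleration samples. The thresholds and the use of acceleration alone are illustrative assumptions; the claimed device may fuse other sensing data.

```python
# Hedged sketch: detect a pick-up event (acceleration spike) followed by
# a placement event (return to rest), in that order.

def detect_pick_and_place(accel_z, lift_thr=2.0, rest_thr=0.2):
    """accel_z: vertical acceleration samples with gravity removed.
    Returns True once a lift spike is followed by a settled placement."""
    picked = False
    for a in accel_z:
        if not picked and a > lift_thr:
            picked = True            # first event: robot lifted by the user
        elif picked and abs(a) < rest_thr:
            return True              # second event: robot placed back down
    return False
```

Only after both events occur in sequence would the robot re-evaluate its position and switch to the mode for the new position.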
Abstract:
A vehicle is provided. The vehicle includes a light detection and ranging (LiDAR) sensor to acquire point cloud information for each channel on a surrounding ground of the vehicle by using a multichannel laser. The vehicle further includes a communicator to communicate with an external server, and a processor to control the communicator to receive map data from the external server and to determine a position of the vehicle in the map data based on the point cloud information for each channel, acquired through the LiDAR sensor.
Abstract:
An encoding apparatus and method for encoding supplementary information of a three-dimensional (3D) video may determine that an updated parameter among parameters included in camera information and parameters included in depth range information is a parameter to be encoded. The encoding apparatus may generate update information including information about the updated parameter and information about a parameter not updated, perform floating-point conversion of the updated parameter, and encode the update information and the floating-point converted parameter. A decoding apparatus and method for decoding supplementary information of a 3D video may receive and decode encoded supplementary information by determining whether the encoded supplementary information includes update information. When update information is included, the decoding apparatus may classify the decoded supplementary information, perform floating-point inverse conversion of the updated parameter, and store latest supplementary information in a storage.
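The update-flag idea can be sketched as follows: emit one flag per parameter marking whether it changed, then serialize only the changed values as 32-bit floats. The byte layout and flag encoding are illustrative assumptions, not the standardized bitstream.

```python
# Hedged sketch: encode only updated supplementary parameters, with
# per-parameter update flags and 32-bit floating-point conversion.
import struct

def encode_update(params, prev):
    """params/prev: dicts (same keys, insertion-ordered) of float parameters.
    Layout (assumed): one flag byte per parameter, then changed values."""
    flags = bytes(int(params[k] != prev.get(k)) for k in params)
    payload = b"".join(struct.pack("<f", params[k]) for k in params
                       if params[k] != prev.get(k))
    return flags + payload

# Only zfar changed, so only its 4-byte float follows the two flag bytes.
blob = encode_update({"znear": 0.5, "zfar": 100.0},
                     {"znear": 0.5, "zfar": 50.0})
```

A matching decoder would read the flag bytes, unpack one float per set flag, and keep the previous value where the flag is clear.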
Abstract:
Methods and apparatuses are provided for controlling a vehicle. The vehicle is controlled to operate in an autonomous driving mode in which the vehicle is driven without a manipulation by an operator of the vehicle. A request to switch to a manual driving mode, in which the vehicle is driven with the manipulation by the operator, is received. A range of the manipulation regarding a function of the vehicle is determined according to a driving situation of the vehicle in response to the request. The vehicle is controlled to operate in the manual driving mode in which the manipulation by the operator is limited according to the range of the manipulation.
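The range-limited manual control can be sketched as clamping the operator's input to a bound chosen per driving situation. The situation labels and numeric limits below are illustrative assumptions.

```python
# Hedged sketch: limit the operator's steering manipulation to a range
# determined by the current driving situation.

def apply_manual_input(requested_steer, situation):
    """Clamp the requested steering angle (degrees) to the range allowed
    for this situation; unknown situations allow no manipulation."""
    limits = {"highway": 5.0, "urban": 30.0, "parking": 90.0}
    bound = limits.get(situation, 0.0)
    return max(-bound, min(bound, requested_steer))

# A 45-degree request on an urban road is limited to the 30-degree range.
steer = apply_manual_input(45.0, "urban")
```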