Abstract:
According to one embodiment, a photographed image including a user region is acquired. The user region is detected from the photographed image. The user region in the photographed image is changed so that the user's orientation in the user region cannot be determined, and the photographed image in which the user region has been changed is acquired as a changed image. A position of the user region in the photographed image is calculated. An image of a back-side cloth is composited onto the user region in the changed image at the calculated position.
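A minimal sketch of the flow described above, assuming the photographed image and the back-side cloth image are NumPy arrays, the user region is a boolean mask, and the region position is taken as the top-left corner of the mask's bounding box. The function name, the mean-color fill, and the alpha compositing are illustrative choices, not the embodiment's method:

```python
import numpy as np

def composite_back_cloth(photo, user_mask, cloth, cloth_alpha):
    """photo: HxWx3 float image, user_mask: HxW bool, cloth: hxwx3, cloth_alpha: hxw in [0, 1].
    Assumes the cloth image fits inside the photo at the user-region position."""
    changed = photo.copy()
    # Change the user region so the user's orientation cannot be determined
    # (here: flatten the region to its mean color -- one possible choice).
    changed[user_mask] = changed[user_mask].mean(axis=0)

    # Position of the user region: top-left corner of its bounding box (an assumption).
    ys, xs = np.nonzero(user_mask)
    top, left = ys.min(), xs.min()

    # Alpha-composite the back-side cloth image at that position.
    h, w = cloth.shape[:2]
    a = cloth_alpha[..., None]
    changed[top:top + h, left:left + w] = (
        a * cloth + (1 - a) * changed[top:top + h, left:left + w]
    )
    return changed
```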
Abstract:
According to an embodiment, an image processing device includes a first obtaining unit to obtain a first image that contains a clothing image to be superimposed; a second obtaining unit to obtain a second image that contains a photographic subject image on which the clothing image is to be superimposed; a third obtaining unit to obtain a first outline, which is the portion of the image outline of the clothing image other than the openings formed in the clothing; a setting unit to set, as a drawing restriction area, at least some area that lies outside the clothing image in the first image and is continuous with the first outline; and a generating unit to generate a synthetic image by synthesizing the photographic subject image, from which the area corresponding to the drawing restriction area has been removed, with the clothing image.
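A rough sketch of the restriction-area idea, under the assumptions that the clothing and subject are given as boolean masks, that the first outline can be approximated by the border of the hole-filled clothing silhouette, and that the drawing restriction area is a thin band just outside that border. All names and the band width are hypothetical:

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_fill_holes

def synthesize(clothing_rgb, clothing_mask, subject_rgb, subject_mask, band=5):
    """Sketch: build a drawing restriction area outside the clothing and
    continuous with its (opening-free) outline, remove it from the subject,
    then draw the clothing on top."""
    filled = binary_fill_holes(clothing_mask)           # openings become interior, not outline
    restriction = binary_dilation(filled, iterations=band) & ~filled

    # Remove the restricted area from the subject image before synthesis.
    drawable_subject = subject_mask & ~restriction

    out = np.zeros_like(subject_rgb)
    out[drawable_subject] = subject_rgb[drawable_subject]
    out[clothing_mask] = clothing_rgb[clothing_mask]    # clothing drawn on top
    return out
```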
Abstract:
According to one embodiment, an information processor includes a memory and processing circuitry. The circuitry receives area information indicating a second area within a first area around a movable body apparatus and third areas within the first area, wherein the movable body apparatus is movable in the second area and an object is present in each of the third areas. The circuitry receives movement information including at least one of a velocity, a movement direction, or an acceleration of the apparatus. The circuitry acquires evaluation values, each indicating the damage that would be caused if the apparatus collided with the object in the corresponding third area, and determines, based on the evaluation values, a position corresponding to a first object that causes the least damage.
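A minimal sketch of the selection step, assuming each detected object has already been reduced to a representative position and a precomputed damage evaluation value; weighting those values with the movement information (velocity, direction, acceleration) is omitted here:

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    position: tuple   # representative position of the object's (third) area
    damage: float     # evaluation value: damage caused if the apparatus collides with it

def least_damage_position(obstacles):
    """Return the position corresponding to the object whose collision
    would cause the least damage (the 'first object' in the abstract)."""
    best = min(obstacles, key=lambda o: o.damage)
    return best.position

# Hypothetical usage
obstacles = [Obstacle((2.0, 1.0), damage=8.5), Obstacle((0.5, 3.0), damage=2.1)]
print(least_damage_position(obstacles))   # -> (0.5, 3.0)
```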
Abstract:
According to one embodiment, an image processing apparatus includes a first acquisition module, a receiver, a second acquisition module, a calculator, a generator and an output module. The first acquisition module acquires first model data indicating a shape corresponding to a subject. The receiver receives pose data indicating a pose. The second acquisition module acquires second model data indicating a shape corresponding to a body shape of the subject and the pose indicated by the pose data. The calculator calculates an evaluation value based on the first and second model data. The generator generates notification information based on the evaluation value. The output module outputs the notification information.
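A small sketch of the evaluation and notification steps, assuming both sets of model data are point sets in correspondence and that the evaluation value is their mean point-wise distance; the metric, threshold, and notification format are illustrative assumptions:

```python
import numpy as np

def evaluate_and_notify(first_model, second_model, threshold=0.05):
    """first_model, second_model: Nx3 arrays of corresponding surface points.
    Computes an evaluation value and generates notification information from it."""
    evaluation = float(np.mean(np.linalg.norm(first_model - second_model, axis=1)))
    if evaluation > threshold:
        return {"evaluation": evaluation, "message": "Pose differs noticeably from the target."}
    return {"evaluation": evaluation, "message": "Pose matches the target."}

# Hypothetical usage with random point sets of matching shape
rng = np.random.default_rng(0)
subject = rng.normal(size=(100, 3))
target = subject + rng.normal(scale=0.1, size=(100, 3))
print(evaluate_and_notify(subject, target))
```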
Abstract:
A data processing apparatus according to an embodiment includes a control-point calculating unit and a deformation processing unit. The control-point calculating unit calculates target position coordinates on the basis of a first model representing a shape of a first object, deformation parameters representing characteristics of deformation of the first object, and a second model representing a shape of a second object. The target position coordinates are the coordinates to which points of the first model should move, according to the second model, when the first object is deformed according to the second object. The deformation processing unit calculates reaching position coordinates, which the points actually reach, so as to minimize a sum of absolute values of differences between the target position coordinates and the reaching position coordinates. The sum takes into account the importance levels of the points.
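A toy sketch of the weighted objective, assuming 2-D control points and an added smoothness term; the regularizer and the optimizer choice are assumptions, since the abstract only specifies the importance-weighted sum of absolute differences:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 2-D control points of the first model.
targets = np.array([[1.0, 0.0], [2.0, 0.5], [3.0, 1.5]])  # target position coordinates
weights = np.array([1.0, 0.2, 2.0])                        # importance levels of the points

def objective(flat_reach):
    """Importance-weighted sum of absolute differences between the target
    position coordinates and the reaching position coordinates."""
    reach = flat_reach.reshape(targets.shape)
    data_term = np.sum(weights[:, None] * np.abs(targets - reach))
    # Assumed smoothness term between neighboring points; without some such
    # constraint the trivial minimizer would simply be reach == targets.
    smooth_term = np.sum(np.abs(np.diff(reach, axis=0)))
    return data_term + 0.5 * smooth_term

result = minimize(objective, x0=targets.ravel(), method="Powell")
reaching = result.x.reshape(targets.shape)
print(reaching)
```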
Abstract:
According to an embodiment, a first acquirer of an image processing apparatus acquires a subject image of a first subject. A second acquirer acquires a first parameter representing a body type of the first subject. A receiver receives identification information on clothing to be tried on. An identifier identifies a clothing image whose associated second parameter has a dissimilarity with the first parameter not larger than a threshold, from among the clothing images associated with the received identification information in first information, in which clothing sizes, second parameters, and clothing images are associated with each piece of identification information. The second parameters correspond to the respective clothing sizes and represent different body types. The clothing images each represent a second subject who has the body type represented by the second parameter associated with the corresponding clothing size and who is wearing the piece of clothing in that size.
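A small sketch of the identification step, assuming the first information is a table keyed by identification information, the parameters are numeric vectors, and the dissimilarity is a Euclidean distance; the data layout, distance measure, and threshold are assumptions:

```python
import numpy as np

# First information (assumed layout): per clothing id, a list of
# (size, second parameter vector, clothing image file) entries.
first_information = {
    "shirt-001": [
        ("S", np.array([80.0, 60.0, 85.0]), "shirt-001_S.png"),
        ("M", np.array([90.0, 70.0, 95.0]), "shirt-001_M.png"),
        ("L", np.array([100.0, 82.0, 105.0]), "shirt-001_L.png"),
    ],
}

def identify_clothing_images(first_parameter, clothing_id, threshold=10.0):
    """Return the clothing images whose associated second parameter has a
    dissimilarity with the first parameter not larger than the threshold."""
    matches = []
    for size, second_parameter, image in first_information[clothing_id]:
        dissimilarity = float(np.linalg.norm(first_parameter - second_parameter))
        if dissimilarity <= threshold:
            matches.append((size, image, dissimilarity))
    return matches

print(identify_clothing_images(np.array([88.0, 68.0, 93.0]), "shirt-001"))
```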
Abstract:
According to an embodiment, a searching device includes an acquiring unit, a receiver, a calculator, and a determining unit. The acquiring unit is configured to acquire a first image. The receiver is configured to receive selection of a first area contained in the first image. The calculator is configured to calculate, for each of a plurality of second images to be searched, a first similarity with the first area and a second similarity with a second area that has been received by the receiver before the first area. The determining unit is configured to determine a third image or third images to be presented from among the second images on the basis of a result of a logical operation performed on the first similarity and the second similarity for each of the second images.
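A minimal sketch of the determination step, assuming each image to be searched and each selected area is represented by a feature vector, the similarities are cosine similarities, and the logical operation is an AND of thresholded similarities; all of these concrete choices are assumptions:

```python
import numpy as np

def determine_images(second_images, first_area_feature, second_area_feature, threshold=0.7):
    """second_images: list of (name, feature vector) pairs.
    Keep the images for which the logical AND of the thresholded
    first and second similarities is true."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    presented = []
    for name, feature in second_images:
        first_similarity = cosine(feature, first_area_feature)
        second_similarity = cosine(feature, second_area_feature)
        if (first_similarity >= threshold) and (second_similarity >= threshold):
            presented.append(name)
    return presented

# Hypothetical usage with random feature vectors
rng = np.random.default_rng(1)
images = [(f"img{i}", rng.random(16)) for i in range(5)]
print(determine_images(images, rng.random(16), rng.random(16)))
```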
Abstract:
According to an embodiment, an image processing device includes a first calculator to calculate, when a photographic subject image is determined to satisfy a first condition, a first position of a clothing image in the photographic subject image such that the position of a feature area in the clothing image matches the position of the feature area in the photographic subject image; a second calculator to calculate a second position of the clothing image in the photographic subject image such that the position of a feature point in the clothing image matches the position of the feature point in the photographic subject image; and a deciding unit to decide, when the photographic subject image is determined to satisfy the first condition, on the first position as a superimposition position, and to decide, when the photographic subject image is determined not to satisfy the first condition, on the superimposition position based on the difference between the first position and the second position.
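A brief sketch of the deciding unit's two branches, assuming 2-D pixel positions. How exactly the difference between the two positions is used is not specified in the abstract, so the fixed blending weight below is purely illustrative:

```python
import numpy as np

def decide_superimposition_position(satisfies_first_condition,
                                    first_position, second_position, alpha=0.5):
    """When the subject image satisfies the first condition, use the first
    position (feature-area alignment) directly; otherwise derive the position
    from the difference between the first and second positions."""
    first_position = np.asarray(first_position, dtype=float)
    second_position = np.asarray(second_position, dtype=float)
    if satisfies_first_condition:
        return first_position
    # Illustrative rule: offset the second position by part of the difference.
    return second_position + alpha * (first_position - second_position)

print(decide_superimposition_position(False, (120, 80), (128, 86)))
```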