Abstract:
The disclosure includes implementations for executing one or more computations for a vehicle. Some implementations of a method for a vehicle may include identifying one or more computations as not being executable by any processor-based computing device of the vehicle. The method may include generating a query including query data describing the one or more computations to be executed for the vehicle. The method may include providing the query to a network. The method may include receiving a response from the network. The response may include solution data describing a result of executing the one or more computations. The response may be provided to the network by a processor-based computing device included in a hierarchy of processor-based computing devices that have greater computational ability than any processor-based computing device of the vehicle.
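A minimal sketch of the vehicle-side flow described above, not the patented implementation: the vehicle packages the computations it cannot execute locally as query data, provides the query to the network, and unpacks the solution data from the response. All names (ComputationQuery, offload, the JSON message shape) are illustrative assumptions.

```python
import json
from dataclasses import dataclass, field
from typing import Any


@dataclass
class ComputationQuery:
    """Query data describing computations the vehicle cannot execute itself."""
    vehicle_id: str
    computations: list[dict[str, Any]] = field(default_factory=list)

    def to_message(self) -> str:
        return json.dumps({"vehicle_id": self.vehicle_id,
                           "computations": self.computations})


def offload(unexecutable: list[dict[str, Any]], send_to_network) -> dict[str, Any]:
    """Build the query, provide it to the network, and return the solution data.

    `send_to_network` is any callable that delivers the query message to a more
    capable computing device in the hierarchy and returns its response.
    """
    query = ComputationQuery(vehicle_id="ego-vehicle", computations=unexecutable)
    response = send_to_network(query.to_message())
    return json.loads(response)["solution"]


if __name__ == "__main__":
    # Stand-in for the network: a remote device that executes the computation.
    def fake_network(message: str) -> str:
        request = json.loads(message)
        results = [sum(c["operands"]) for c in request["computations"]]
        return json.dumps({"solution": {"results": results}})

    print(offload([{"op": "sum", "operands": [1, 2, 3]}], fake_network))
```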
Abstract:
In an example embodiment, a computer-implemented method is disclosed that generates a spectral signature describing one or more dynamic objects and a scene layout of a current road scene; identifies, from among one or more scene clusters included in a familiarity graph associated with a user, a road scene cluster corresponding to the current road scene; determines a position of the spectral signature relative to other spectral signatures included in the identified road scene cluster; and generates a familiarity index estimating familiarity of the user with the current road scene based on the position of the spectral signature. The method can further include determining an assistance level based on the familiarity index of the user; and providing one or more of an auditory instruction, a visual instruction, and a tactile instruction to the user via one or more output devices of a vehicle at the determined assistance level.
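An illustrative sketch of one way the steps above could fit together; the distance measure, index formula, and assistance thresholds below are assumptions, not the disclosed method.

```python
import numpy as np


def nearest_cluster(signature: np.ndarray, clusters: list[np.ndarray]) -> int:
    """Return the index of the road scene cluster whose centroid is closest."""
    centroids = [c.mean(axis=0) for c in clusters]
    distances = [np.linalg.norm(signature - centroid) for centroid in centroids]
    return int(np.argmin(distances))


def familiarity_index(signature: np.ndarray, cluster: np.ndarray) -> float:
    """Estimate familiarity from the signature's position within the cluster.

    A signature closer to the cluster centroid than most stored signatures
    yields an index near 1.0 (familiar); an outlying signature yields a value
    near 0.0 (unfamiliar).
    """
    centroid = cluster.mean(axis=0)
    own_distance = np.linalg.norm(signature - centroid)
    member_distances = np.linalg.norm(cluster - centroid, axis=1)
    return float((member_distances >= own_distance).mean())


def assistance_level(index: float) -> str:
    """Map the familiarity index to a coarse assistance level (assumed cutoffs)."""
    return "minimal" if index > 0.7 else "moderate" if index > 0.3 else "full"


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    commute = rng.normal(0.0, 1.0, size=(50, 8))    # scenes the user knows well
    new_city = rng.normal(5.0, 1.0, size=(30, 8))   # scenes the user rarely sees
    signature = rng.normal(0.2, 1.0, size=8)        # spectral signature of the current scene
    clusters = [commute, new_city]
    best = nearest_cluster(signature, clusters)
    idx = familiarity_index(signature, clusters[best])
    print(best, idx, assistance_level(idx))
```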
Abstract:
In an example embodiment, a computer-implemented method is disclosed that receives road scene data and vehicle operation data from one or more sensors associated with a first vehicle on a road segment; receives situation ontology data; automatically generates a semantic road scene description of the road segment using the road scene data, the vehicle operation data, and the situation ontology data; and transmits, via a communication network, the semantic road scene description to one or more other vehicles associated with the road segment. Automatically generating the semantic road scene description of the road segment can include determining lane-level activity information for each lane based on lane information and dynamic road object information and determining a lane-level spatial layout for each lane based on the lane information and the dynamic road object information.
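A minimal sketch, not the disclosed implementation, of assembling a lane-level semantic description from lane information and dynamic road object information, with a toy "situation ontology" that maps raw speeds to semantic activity labels. Field names and thresholds are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class RoadObject:
    lane: int
    speed_mps: float
    distance_m: float   # longitudinal distance from the ego vehicle


# Toy situation ontology: numeric speed -> semantic activity label.
ONTOLOGY = [(5.0, "congested"), (15.0, "slow"), (float("inf"), "free-flowing")]


def lane_activity(objects: list[RoadObject]) -> str:
    if not objects:
        return "empty"
    mean_speed = sum(o.speed_mps for o in objects) / len(objects)
    return next(label for limit, label in ONTOLOGY if mean_speed < limit)


def describe_road_segment(lanes: list[int], objects: list[RoadObject]) -> dict:
    """Build the lane-level description to transmit to other vehicles on the segment."""
    description = {}
    for lane in lanes:
        in_lane = [o for o in objects if o.lane == lane]
        description[f"lane_{lane}"] = {
            "activity": lane_activity(in_lane),
            "spatial_layout": sorted(o.distance_m for o in in_lane),
            "vehicle_count": len(in_lane),
        }
    return description


if __name__ == "__main__":
    detections = [RoadObject(0, 3.0, 12.0), RoadObject(0, 4.0, 30.0),
                  RoadObject(1, 22.0, 18.0)]
    print(describe_road_segment(lanes=[0, 1], objects=detections))
```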
Abstract:
Technology for localized guidance of a body part of a user to specific objects within a physical environment using a vibration interface is described. An example system may include a vibration interface wearable by a user on an extremity. The vibration interface includes a plurality of motors. The system includes sensor(s) coupled to the vibration interface and a sensing system coupled to the sensor(s) and the vibration interface. The sensing system is configured to analyze a physical environment in which the user is located for a tangible object using the sensor(s), to generate a trajectory for navigating the extremity of the user to the tangible object based on a relative position of the extremity of the user bearing the vibration interface to a position of the tangible object within the physical environment, and to guide the extremity of the user along the trajectory by vibrating the vibration interface.
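An illustrative sketch of the guidance step only, under an assumed motor layout and planar geometry (not the described system): pick which motor on a wrist-worn vibration interface to drive so the user's extremity is nudged along the trajectory toward the target object.

```python
import numpy as np

# Assumed layout: four motors spaced around the wrist (up/right/down/left).
MOTOR_DIRECTIONS = {
    "up": np.array([0.0, 1.0]),
    "right": np.array([1.0, 0.0]),
    "down": np.array([0.0, -1.0]),
    "left": np.array([-1.0, 0.0]),
}


def guidance_step(hand_xy: np.ndarray, target_xy: np.ndarray) -> tuple[str, float]:
    """Return the motor to vibrate and an intensity in [0, 1].

    The motor whose direction best matches the hand-to-target vector is chosen;
    intensity grows as the hand drifts farther from the target (capped at 1).
    """
    to_target = target_xy - hand_xy
    distance = float(np.linalg.norm(to_target))
    if distance < 1e-6:
        return "none", 0.0
    heading = to_target / distance
    motor = max(MOTOR_DIRECTIONS, key=lambda m: float(heading @ MOTOR_DIRECTIONS[m]))
    return motor, min(distance / 0.5, 1.0)   # 0.5 m assumed as "far away"


if __name__ == "__main__":
    print(guidance_step(np.array([0.1, 0.0]), np.array([0.4, 0.3])))
```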
Abstract:
The disclosure includes methods for determining a current location for a user in an environment; detecting obstacles within the environment; estimating one or more physical capabilities of the user based on an electronic health record (EHR) associated with the user; generating, with a processor-based device that is programmed to perform the generating, instructions for a robot to perform a task based on the obstacles within the environment and the one or more physical capabilities of the user; and instructing the robot to perform the task.
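A minimal sketch under assumed data shapes, not the disclosed method: decide whether the robot should retrieve an object itself or guide the user to it, using obstacles detected in the environment and physical-capability estimates derived from the user's EHR. The capability fields and the straight-line obstacle test are hypothetical simplifications.

```python
from dataclasses import dataclass


@dataclass
class Capabilities:
    can_walk: bool
    reach_m: float          # how far the user can travel/reach, estimated from the EHR


def _dist(a, b) -> float:
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5


def _blocks_path(a, b, obstacle, margin: float = 0.5) -> bool:
    """True if the obstacle lies roughly on the straight segment from a to b."""
    return abs(_dist(a, obstacle) + _dist(obstacle, b) - _dist(a, b)) < margin


def plan_robot_task(user_pos, object_pos, obstacles, caps: Capabilities) -> dict:
    """Generate a simple task instruction for the robot."""
    path_blocked = any(_blocks_path(user_pos, object_pos, obs) for obs in obstacles)
    distance = _dist(user_pos, object_pos)

    if not caps.can_walk or path_blocked or distance > caps.reach_m:
        return {"task": "fetch", "target": object_pos, "deliver_to": user_pos}
    return {"task": "guide", "target": object_pos}


if __name__ == "__main__":
    caps = Capabilities(can_walk=True, reach_m=1.0)
    print(plan_robot_task((0, 0), (4, 0), obstacles=[(2, 0.1)], caps=caps))
```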
Abstract:
The disclosure describes novel technology for inferring scenes from images. In one example, the technology includes a system that can determine, for an image depicting a scene, partition regions from one or more factors that are independent of the image data; receive image data including pixels forming the image; classify pixels of the image into one or more pixel types based on one or more pixel-level features; determine, for each partition region, a set of pixel characteristic data describing a portion of the image included in the partition region based on the one or more pixel types of pixels in the partition region; and classify a scene of the image based on the set of pixel characteristic data of each of the partition regions.
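An illustrative sketch with assumed pixel types and a toy scene rule, not the disclosed classifier: partition the image into a fixed grid (independent of the image content), build a pixel-type histogram per partition region as the pixel characteristic data, and classify the scene from those per-region histograms.

```python
import numpy as np

PIXEL_TYPES = ["sky", "road", "other"]


def classify_pixels(image: np.ndarray) -> np.ndarray:
    """Assign each pixel a type index using crude color heuristics (assumed)."""
    r, g, b = (image[..., c].astype(int) for c in range(3))
    types = np.full(image.shape[:2], PIXEL_TYPES.index("other"))
    types[(b > 150) & (b > r)] = PIXEL_TYPES.index("sky")
    road = (np.abs(r - g) < 20) & (np.abs(g - b) < 20) & (r < 120)
    types[road] = PIXEL_TYPES.index("road")
    return types


def region_histograms(types: np.ndarray, grid=(2, 2)) -> np.ndarray:
    """Per-partition-region histogram of pixel types (the pixel characteristic data)."""
    h, w = types.shape
    rows, cols = grid
    hists = []
    for i in range(rows):
        for j in range(cols):
            block = types[i * h // rows:(i + 1) * h // rows,
                          j * w // cols:(j + 1) * w // cols]
            hists.append(np.bincount(block.ravel(), minlength=len(PIXEL_TYPES)))
    return np.array(hists, dtype=float)


def classify_scene(hists: np.ndarray) -> str:
    """Toy rule for the default 2x2 grid: sky above and road below -> 'highway'."""
    top, bottom = hists[:2].sum(axis=0), hists[2:].sum(axis=0)
    sky_on_top = top[PIXEL_TYPES.index("sky")] / top.sum()
    road_below = bottom[PIXEL_TYPES.index("road")] / bottom.sum()
    return "highway" if sky_on_top > 0.4 and road_below > 0.4 else "other"


if __name__ == "__main__":
    img = np.zeros((64, 64, 3), dtype=np.uint8)
    img[:32] = (80, 120, 200)    # bluish upper half
    img[32:] = (90, 90, 90)      # gray lower half
    print(classify_scene(region_histograms(classify_pixels(img))))
```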
Abstract:
A vehicular micro cloud includes a set of connected vehicles that are operable to provide computational services to one another. The disclosure includes embodiments for mobility-oriented data replication in the vehicular micro cloud. In some embodiments, a method includes, for each data set stored by the set of connected vehicles, determining a number of replicas to generate based on one or more mobility-based criteria. The method includes generating instances of replica data that describe the replicas. The method includes, for individual instances of replica data, determining which of the connected vehicles included in the set to use as storage locations for the individual instances of replica data based on the one or more mobility-based criteria. The method includes causing the individual instances of replica data to be stored in the storage locations. For example, the individual instances of replica data are transmitted to the storage locations.
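A sketch under assumed mobility-based criteria, not the disclosed embodiment: choose how many replicas of a data set to create and which member vehicles of the micro cloud should store them, favoring vehicles expected to remain in the micro cloud the longest. The residence-time field and the churn formula are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class Member:
    vehicle_id: str
    residence_time_s: float   # predicted time remaining inside the micro cloud


def replica_count(data_lifetime_s: float, members: list[Member]) -> int:
    """More replicas when membership turns over quickly relative to the data lifetime."""
    mean_residence = sum(m.residence_time_s for m in members) / len(members)
    churn = data_lifetime_s / max(mean_residence, 1.0)
    return max(1, min(len(members), round(churn)))


def choose_storage_locations(members: list[Member], count: int) -> list[str]:
    """Store replicas on the vehicles predicted to stay in the micro cloud longest."""
    ranked = sorted(members, key=lambda m: m.residence_time_s, reverse=True)
    return [m.vehicle_id for m in ranked[:count]]


if __name__ == "__main__":
    cloud = [Member("v1", 30.0), Member("v2", 120.0), Member("v3", 75.0)]
    n = replica_count(data_lifetime_s=180.0, members=cloud)
    print(n, choose_storage_locations(cloud, n))
```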
Abstract:
The disclosure includes embodiments for providing a storage service for mobile nodes in a roadway area. A method may include a vehicle communicatively coupled to a mobile node via a non-infrastructure network, wherein the vehicle and the mobile node are present in a roadway area which is partitioned into a plurality of roadway regions which are predetermined and known to the vehicle and the mobile node. The method may include transmitting, by the vehicle via the non-infrastructure network, a wireless message to the mobile node, wherein the wireless message includes content including one or more of an instance of spatial key data describing a particular roadway region of the roadway area and an identifier of a value stored in a memory present in the particular roadway region described by the spatial key data. The method may include receiving, by the mobile node via the non-infrastructure network, the wireless message.
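A minimal sketch with assumed message fields, not the disclosed protocol: the vehicle addresses a lookup to whichever mobile node currently occupies a predetermined roadway region, using spatial key data rather than a node address; a node answers only if it is inside the addressed region.

```python
from dataclasses import dataclass
from typing import Optional

# Predetermined partition of the roadway area, known to vehicle and mobile node.
REGIONS = {"R1": ((0, 0), (100, 100)), "R2": ((100, 0), (200, 100))}


@dataclass
class WirelessMessage:
    spatial_key: str        # identifies a particular roadway region, e.g. "R2"
    value_id: str           # identifier of the value stored in that region


def in_region(position: tuple[float, float], region: str) -> bool:
    (x0, y0), (x1, y1) = REGIONS[region]
    return x0 <= position[0] < x1 and y0 <= position[1] < y1


def handle_message(msg: WirelessMessage, node_position, node_store: dict) -> Optional[str]:
    """A mobile node serves the request only if it is inside the addressed region."""
    if in_region(node_position, msg.spatial_key):
        return node_store.get(msg.value_id)
    return None   # not the addressee; ignore or forward


if __name__ == "__main__":
    msg = WirelessMessage(spatial_key="R2", value_id="lane-closure")
    print(handle_message(msg, node_position=(150.0, 40.0),
                         node_store={"lane-closure": "right lane closed at km 12"}))
```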
Abstract:
The disclosure includes a method that receives a real-time image of a road from a camera sensor communicatively coupled to an onboard computer of a vehicle. The method includes dividing the real-time image into superpixels. The method includes merging the superpixels to form superpixel regions. The method includes generating prior maps from a dataset of road scene images. The method includes drawing a set of bounding boxes where each bounding box surrounds one of the superpixel regions. The method includes comparing the bounding boxes in the set of bounding boxes to a road prior map to identify a road region in the real-time image. The method includes pruning bounding boxes from the set of bounding boxes to reduce the set to remaining bounding boxes. The method may include using a categorization module that identifies the presence of a road scene object in the remaining bounding boxes.
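An illustrative sketch of the pruning step only, under assumed data shapes (not the disclosed pipeline): score each superpixel-region bounding box against a binary road prior map and keep the boxes that sufficiently overlap the road region.

```python
import numpy as np


def road_overlap(box, road_prior: np.ndarray) -> float:
    """Fraction of the box covered by road according to the prior map.

    `box` is (x0, y0, x1, y1) in pixels; `road_prior` is an HxW array of 0/1
    values built from a dataset of road scene images.
    """
    x0, y0, x1, y1 = box
    patch = road_prior[y0:y1, x0:x1]
    return float(patch.mean()) if patch.size else 0.0


def prune_boxes(boxes, road_prior: np.ndarray, threshold: float = 0.3):
    """Return the remaining bounding boxes, i.e. those that plausibly lie on the road."""
    return [b for b in boxes if road_overlap(b, road_prior) >= threshold]


if __name__ == "__main__":
    prior = np.zeros((100, 100), dtype=np.uint8)
    prior[60:, :] = 1                      # road occupies the lower part of the image
    candidates = [(10, 65, 40, 95),        # on the road
                  (10, 5, 40, 35)]         # in the sky
    print(prune_boxes(candidates, prior))
```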
Abstract:
A method receives a first image set depicting a merging zone, the first image set including first image(s) associated with a first timestamp; determines, using a trained first machine learning logic, a first state describing a traffic condition of the merging zone at the first timestamp using the first image set; determines, from a sequence of states describing the traffic condition of the merging zone at a sequence of timestamps, using a trained second machine learning logic, second state(s) associated with second timestamp(s) prior to the first timestamp of the first state using a trained backward time distance; computes, using a trained third machine learning logic, impact metric(s) for merging action(s) using the first state, the second state(s), and the merging action(s); selects, from the merging action(s), a first merging action based on the impact metric(s); and provides a merging instruction including the first merging action to a merging vehicle.
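A skeleton sketch of the decision flow only, with stand-in callables where the abstract names trained machine learning logics; the state encoding, the handling of the backward time distance as a fixed number of prior states, and the toy impact metric are assumptions, not the disclosed models.

```python
from typing import Callable, Sequence


def select_merging_action(
    images,                                   # first image set for the current timestamp
    state_history: Sequence,                  # states at earlier timestamps (newest last)
    actions: Sequence[str],                   # candidate merging actions
    estimate_state: Callable,                 # stand-in for the first machine learning logic
    backward_steps: int,                      # derived from the trained backward time distance
    impact_metric: Callable,                  # stand-in for the third machine learning logic
) -> str:
    """Estimate the current state, gather prior states, score actions, pick the best."""
    current_state = estimate_state(images)
    prior_states = list(state_history[-backward_steps:]) if backward_steps else []
    scores = {a: impact_metric(current_state, prior_states, a) for a in actions}
    return max(scores, key=scores.get)        # action with the best impact metric


if __name__ == "__main__":
    # Toy stand-ins: the "state" is a gap size in the merging zone, and the
    # impact metric rewards merging only when the gap has been growing.
    history = [2.0, 2.5, 3.0]
    best = select_merging_action(
        images=None,
        state_history=history,
        actions=["merge_now", "wait"],
        estimate_state=lambda _imgs: 3.5,
        backward_steps=2,
        impact_metric=lambda s, prev, a: (s - prev[0]) if a == "merge_now" else 0.1,
    )
    print(best)   # "merge_now" because the gap grew from 2.5 to 3.5
```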