Abstract:
A method includes interrogating devices in a network to obtain device information. Based on the device information, the method includes automatically determining which devices are at a top layer of a multi-layer hierarchical topology, wherein the devices at the top layer are core devices. The method further includes receiving input from a user to manually modify the determination as to which devices are the core devices. The method further includes determining which of the other devices in the network are at another layer in the hierarchical topology based on the core devices.
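The layering method above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: it assumes the interrogated device information reduces to a neighbor map, that core candidates are the most highly connected devices (a heuristic the abstract does not specify), and that lower layers are assigned by hop distance from the cores. All names are illustrative.

```python
from collections import deque

def find_core_devices(device_info, user_overrides=None):
    """Automatically pick top-layer (core) devices, then let the user
    manually modify the determination via user_overrides."""
    max_degree = max(len(nbrs) for nbrs in device_info.values())
    cores = {d for d, nbrs in device_info.items() if len(nbrs) == max_degree}
    if user_overrides is not None:        # manual modification step
        cores = set(user_overrides)
    return cores

def assign_layers(device_info, cores):
    """Place every other device at a layer based on the core devices:
    layer = hop distance from the nearest core (BFS)."""
    layers = {d: 0 for d in cores}
    queue = deque(cores)
    while queue:
        dev = queue.popleft()
        for nbr in device_info[dev]:
            if nbr not in layers:
                layers[nbr] = layers[dev] + 1
                queue.append(nbr)
    return layers

# Device info as obtained by interrogating the network (illustrative)
topology = {
    "core1": {"agg1", "agg2", "agg3"},
    "agg1": {"core1", "edge1"},
    "agg2": {"core1", "edge2"},
    "agg3": {"core1"},
    "edge1": {"agg1"},
    "edge2": {"agg2"},
}
cores = find_core_devices(topology)       # top layer by connectivity
layers = assign_layers(topology, cores)   # aggregation at 1, edge at 2
```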
Abstract:
A user equipment (UE) comprising a processor configured to generate a three-dimensional (3D) model by obtaining a 3D mesh comprising a plurality of reference markers, positioning at least one first-order virtual object onto a surface of the mesh by associating the first-order virtual object with at least one of the mesh reference markers, wherein the first-order virtual object comprises a plurality of reference markers, and positioning at least one second-order virtual object onto a surface of the mesh by associating the second-order virtual object with at least one of the first-order virtual object reference markers.
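The marker-based hierarchy described above can be sketched as follows. This is a minimal illustration under the assumption that each reference marker is simply a named 3D point; the class and function names are hypothetical, not from the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    """A mesh or virtual object carrying named reference markers."""
    name: str
    markers: dict = field(default_factory=dict)   # marker name -> (x, y, z)
    position: tuple = (0.0, 0.0, 0.0)
    children: list = field(default_factory=list)

def attach(parent, child, marker_name):
    """Position `child` by associating it with one of `parent`'s markers."""
    child.position = parent.markers[marker_name]
    parent.children.append(child)

# 3D mesh comprising reference markers on its surface
mesh = VirtualObject("mesh", markers={"m0": (1.0, 0.0, 0.0)})

# First-order object: associated with a mesh marker, and itself
# comprising reference markers for higher-order objects
table = VirtualObject("table", markers={"t0": (1.0, 0.0, 0.8)})
attach(mesh, table, "m0")

# Second-order object: associated with a first-order object's marker
vase = VirtualObject("vase")
attach(table, vase, "t0")
```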
Abstract:
A user equipment (UE) comprising a display, an input device configured to receive user input, a visual input configured to capture motion or stop photography as visual data, and a processor coupled to the display, the input device, and the visual input and configured to receive visual data from the visual input, overlay a model comprising network data onto the visual data to create a composite image, wherein the model is aligned to the visual data based on user input received from the input device, and transmit the composite image to the display.
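The overlay step above can be sketched as follows. This is a minimal illustration under the assumptions that a captured frame is a 2D grid of pixel values, the model is a smaller 2D grid of network-data values with `None` meaning transparent, and the user-supplied `(dx, dy)` offset stands in for "aligned to the visual data based on user input"; none of these details are specified in the abstract.

```python
def overlay(frame, model, dx, dy):
    """Return a composite image: the model drawn onto a copy of the frame
    at the user-chosen (dx, dy) alignment offset."""
    composite = [row[:] for row in frame]     # don't mutate the visual data
    for y, row in enumerate(model):
        for x, value in enumerate(row):
            if value is not None:             # None pixels are transparent
                composite[y + dy][x + dx] = value
    return composite

frame = [[0] * 4 for _ in range(3)]           # captured visual data
model = [[9, None],                           # network-data glyph
         [9, 9]]
composite = overlay(frame, model, dx=1, dy=1) # user aligns the model
# composite is then transmitted to the display
```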
Abstract:
A user equipment (UE) comprising a display, a visual input configured to capture motion or stop photography as visual data, a memory comprising instructions, and a processor coupled to the display, the visual input, and the memory and configured to execute the instructions by receiving visual data from the visual input, determining, when the visual data comprises a feature of a first area of a location, a position of the feature relative to the UE, and generating a model of the first area of the location based on the position of the feature. The disclosure also includes a method comprising receiving data indicating a position of a feature of a location relative to a UE, and generating a model of the location based on the position of the feature.
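The determine-position and generate-model steps above can be sketched as follows. This is a minimal illustration under the assumption that a feature's position relative to the UE is recovered from its pixel location plus a depth estimate via a pinhole camera model; the focal length, principal point, and all function names are hypothetical, as the abstract does not state how the position is determined.

```python
def feature_position(px, py, depth, focal=500.0, cx=320.0, cy=240.0):
    """Back-project a detected feature pixel to UE-relative coordinates
    (x, y, z) using a pinhole camera model (assumed, illustrative)."""
    x = (px - cx) * depth / focal
    y = (py - cy) * depth / focal
    return (x, y, depth)

def generate_model(feature_positions):
    """A 'model' here is just the recovered 3D points plus their extent."""
    xs = [p[0] for p in feature_positions]
    zs = [p[2] for p in feature_positions]
    return {"points": feature_positions,
            "width": max(xs) - min(xs),
            "depth": max(zs) - min(zs)}

# Two wall-corner features of the first area, both 2 m from the UE
corners = [feature_position(120, 240, 2.0),
           feature_position(520, 240, 2.0)]
model = generate_model(corners)
```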