Abstract:
A method to stream media content via hypertext transfer protocol (HTTP) that includes receiving, at a client device, metadata including an attribute indicating a grouping of representations of the media content, where each representation of the grouping comprises a respective encoding choice of the media content.
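As an illustrative sketch only, the Python below shows how a client might use such a grouping attribute to pick one encoding choice that fits the measured throughput. The field names (group_id, bandwidth, codec) are assumptions for illustration, not the actual metadata syntax.

def select_representation(metadata, group_id, available_bps):
    """Choose one encoding of the media content from the indicated grouping."""
    grouping = [r for r in metadata["representations"] if r["group_id"] == group_id]
    viable = [r for r in grouping if r["bandwidth"] <= available_bps]
    if viable:
        return max(viable, key=lambda r: r["bandwidth"])
    return min(grouping, key=lambda r: r["bandwidth"])  # fall back to the lowest-rate encoding

metadata = {"representations": [
    {"group_id": 1, "bandwidth": 500_000, "codec": "avc1"},
    {"group_id": 1, "bandwidth": 2_000_000, "codec": "avc1"},
    {"group_id": 1, "bandwidth": 8_000_000, "codec": "hvc1"},
]}
print(select_representation(metadata, group_id=1, available_bps=3_000_000)["bandwidth"])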
Abstract:
A first device that includes a processor configured to transmit a trigger message to, or receive a trigger message from, a second device via wireless short-range communication. The trigger message initiates a registration process within a wireless local area network (WLAN).
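A rough sketch of the exchange is shown below: a small message received over a short-range link (for example NFC or Bluetooth LE) prompts the receiving device to start WLAN registration. The message fields and function names are hypothetical and do not reflect any claimed format.

from dataclasses import dataclass

@dataclass
class TriggerMessage:
    # Illustrative fields; the actual trigger contents are not specified here.
    sender_id: str
    wlan_ssid: str
    action: str = "START_REGISTRATION"

def on_short_range_receive(msg: TriggerMessage):
    """Second device: react to a trigger received over the short-range link."""
    if msg.action == "START_REGISTRATION":
        begin_wlan_registration(msg.wlan_ssid)

def begin_wlan_registration(ssid: str):
    # Placeholder for the WLAN registration procedure (association, credentials, ...).
    print(f"Registering with WLAN '{ssid}'")

on_short_range_receive(TriggerMessage(sender_id="device-A", wlan_ssid="HomeAP"))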
Abstract:
Example methods and apparatus to authenticate requests for network capabilities for connecting to an access network are disclosed. A disclosed example method involves receiving a request at a first access network, the request seeking network connectivity information for connecting a wireless terminal to a second access network. The example method also involves encapsulating the request in an authentication frame that indicates the request as a white space protocol frame. The authentication frame is sent to a database addressed in the request.
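The sketch below illustrates, under assumed field names and an assumed type code, how such a request might be wrapped in an authentication frame whose type field flags it as a white space protocol frame before being forwarded to the database identified in the request; it is not the actual frame format.

WHITE_SPACE_PROTOCOL = 0x7F  # assumed type code marking a white space protocol frame

def encapsulate_request(request: dict) -> dict:
    """Wrap the connectivity request in an authentication frame."""
    return {
        "frame_type": "authentication",
        "protocol_id": WHITE_SPACE_PROTOCOL,
        "payload": request,
    }

def forward_to_database(frame: dict):
    db_address = frame["payload"]["database_address"]
    print(f"Sending authentication frame to {db_address}")

request = {
    "database_address": "db.example.net",   # database addressed in the request
    "terminal_id": "wt-42",
    "target_network": "second-access-net",
}
forward_to_database(encapsulate_request(request))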
Abstract:
An apparatus configured to encode or decode video data that includes a memory configured to store at least one reconstructed sample of video data and at least one processor, in communication with the memory, that is configured to identify the at least one reconstructed sample, determine at least one extended angular intra prediction mode to use for intra prediction of at least one sample of a current block, intra predict, using the at least one extended angular intra prediction mode, the at least one sample of the current block based on the at least one reconstructed sample, and encode or decode the current block based on the at least one predicted sample. The extended angular intra prediction modes include angular intra prediction modes other than the angular prediction modes between horizontal -45 degrees and vertical -45 degrees.
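A very simplified sketch of angular intra prediction follows: reconstructed reference samples from the row above the block are projected along a prediction angle, and angles outside the conventional range are simply allowed to produce larger displacements. It uses only the top reference row and nearest-neighbor rounding, so it illustrates the idea rather than any codec's defined mode tables or interpolation.

import math

def angular_intra_predict(top_ref, block_w, block_h, angle_deg):
    """Predict a block from the reconstructed row above it along a given angle
    (nearest-neighbor projection only; purely illustrative)."""
    pred = [[0] * block_w for _ in range(block_h)]
    t = math.tan(math.radians(angle_deg))
    for y in range(block_h):
        for x in range(block_w):
            # Project (x, y) up to the reference row; extended (wide) angles
            # produce larger horizontal displacements along the reference row.
            ref_x = int(round(x + (y + 1) * t))
            ref_x = max(0, min(len(top_ref) - 1, ref_x))
            pred[y][x] = top_ref[ref_x]
    return pred

top_row = list(range(100, 100 + 2 * 8))   # reconstructed samples above the block
block = angular_intra_predict(top_row, block_w=8, block_h=4, angle_deg=60)
print(block[0])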
Abstract:
A method of coding (e.g., encoding or decoding) video data that includes coding a first block of video data using an inter prediction coding mode, where coding the first block using the inter prediction coding mode comprises: constructing a list of candidate motion vectors for coding the first block using the inter prediction coding mode, identifying at least one motion vector predictor from among the list of candidate motion vectors, and generating a reconstructed motion vector (MV) based on the at least one motion vector predictor. The method further includes adding the reconstructed MV to a history-based motion vector prediction (HMVP) candidate list and adding, to the HMVP candidate list, at least a second motion vector associated with construction of the list of candidate motion vectors.
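The history-based list can be pictured as a bounded first-in first-out buffer with duplicate pruning, as in the sketch below; the maximum size and the tuple representation of a motion vector are illustrative assumptions rather than the normative update process.

from collections import deque

class HMVPList:
    """Bounded FIFO of recently used motion vectors with duplicate pruning
    (a simplified sketch of a history-based candidate list)."""
    def __init__(self, max_size=5):
        self.max_size = max_size
        self.candidates = deque()

    def add(self, mv):
        if mv in self.candidates:          # prune the older duplicate
            self.candidates.remove(mv)
        self.candidates.append(mv)
        while len(self.candidates) > self.max_size:
            self.candidates.popleft()      # drop the oldest entry

hmvp = HMVPList()
reconstructed_mv = (4, -2)    # MV actually used to code the block
candidate_list_mv = (3, -2)   # additional MV associated with candidate list construction
hmvp.add(reconstructed_mv)
hmvp.add(candidate_list_mv)
print(list(hmvp.candidates))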
Abstract:
A coding device configured to code video data that includes a buffer memory configured to store pictures of the video data and at least one processor implemented in circuitry and in communication with the buffer memory. The at least one processor is configured to code at least two pictures of a single coded video sequence (CVS) of the video data, where each picture of the at least two pictures is associated with an identical picture order count (POC) value and where the at least two pictures are different from one another, associate respective data with each of the at least two pictures of the single CVS, and identify, for inclusion in a reference picture set, at least one picture among the at least two pictures based on the identical POC value associated with the at least two pictures and the respective data associated with the at least one picture.
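One way to picture the identification step is sketched below: when two pictures in the same CVS share a POC value, the respective per-picture data (represented here, as an assumption, by a layer identifier) disambiguates which picture enters the reference picture set.

from dataclasses import dataclass

@dataclass(frozen=True)
class Picture:
    poc: int        # picture order count
    layer_id: int   # illustrative stand-in for the "respective data" per picture

def pick_reference(pictures, target_poc, target_layer):
    """Return the picture whose POC and associated data both match."""
    for pic in pictures:
        if pic.poc == target_poc and pic.layer_id == target_layer:
            return pic
    return None

cvs = [Picture(poc=8, layer_id=0), Picture(poc=8, layer_id=1)]  # same POC, different pictures
print(pick_reference(cvs, target_poc=8, target_layer=1))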
Abstract:
Aspects of the present disclosure relate to systems and methods for assisting in positioning a camera at different zoom levels. An example device may include a memory configured to store image data. The example device may further include a processor in communication with the memory, the processor being configured to process a first image stream associated with a scene, independently process a second image stream associated with a spatial portion of the scene, wherein the second image stream is different from the first image stream, output the processed first image stream, and output, during output of the processed first image stream, a visual indication that indicates the spatial portion associated with the second image stream.
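The sketch below illustrates the idea with plain arrays: the first stream covers the full scene, the second stream covers a cropped spatial portion, and a rectangle drawn on the first stream's output indicates where that portion lies. The frame sizes and drawing routine are assumptions for illustration only.

def crop_region(frame, x, y, w, h):
    """Second, independently processed stream: a spatial portion of the scene."""
    return [row[x:x + w] for row in frame[y:y + h]]

def draw_indicator(frame, x, y, w, h, marker=-1):
    """Mark the borders of the cropped region on the full-scene output."""
    out = [row[:] for row in frame]
    for cx in range(x, x + w):
        out[y][cx] = marker
        out[y + h - 1][cx] = marker
    for cy in range(y, y + h):
        out[cy][x] = marker
        out[cy][x + w - 1] = marker
    return out

full_frame = [[0] * 16 for _ in range(12)]        # stand-in for the first image stream
zoom_view = crop_region(full_frame, x=4, y=3, w=8, h=6)
preview_with_box = draw_indicator(full_frame, x=4, y=3, w=8, h=6)
print(len(zoom_view), len(zoom_view[0]), preview_with_box[3][4])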
Abstract:
Example methods and apparatus to provide network capabilities for connecting to an access network are disclosed. A disclosed example method involves receiving a request at a first access network of a first network type. The request is addressed to a database and requests network connectivity information for connecting a wireless terminal to a second access network of a second network type different from the first network type. The example method also involves sending a response to the wireless terminal via the first access network. The response includes the network connectivity information for connecting the wireless terminal to the second access network.
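A schematic of the exchange, with invented field names, is sketched below: the terminal's database-addressed request arrives through the first access network, and the response carries the parameters the terminal needs to join the second access network. This is a toy model, not the actual signaling.

def handle_request(request: dict, connectivity_db: dict) -> dict:
    """First access network: look up connectivity info for the second network
    and return it to the requesting wireless terminal (field names are illustrative)."""
    info = connectivity_db.get(request["target_network"])
    return {"terminal_id": request["terminal_id"],
            "network_connectivity_info": info}

connectivity_db = {
    "whitespace-net-1": {"channel": 21, "max_power_dbm": 17, "auth": "open"},
}
request = {"terminal_id": "wt-7",
           "database_address": "db.example.net",
           "target_network": "whitespace-net-1"}
print(handle_request(request, connectivity_db))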
Abstract:
Systems and methods for processing one or more images include determining one or more current exposure settings for a current image of a current scene at a current time. One or more motion characteristics associated with an image sensor are determined. Based on the one or more motion characteristics, a location of a portion of the current image to use for determining one or more future exposure settings for the image sensor is predicted, the one or more future exposure settings being for capturing a future image of a future scene at a future time, the future time being subsequent to the current time. The one or more future exposure settings are determined based on the predicted portion of the current image.
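The following sketch, which assumes a simple translational motion model and a grid of luma values, shows how estimated motion might shift the metering window so that exposure for the next frame is computed from the region the sensor is expected to cover; the windowing and gain formula are illustrative assumptions.

def predict_metering_window(cx, cy, motion_xy, win):
    """Shift the metering window center by the estimated per-frame motion."""
    dx, dy = motion_xy
    return cx + dx, cy + dy, win

def exposure_from_window(frame, cx, cy, win, target_luma=118):
    """Compute a crude exposure adjustment from the mean luma in the window."""
    half = win // 2
    rows = frame[max(0, cy - half): cy + half]
    values = [v for row in rows for v in row[max(0, cx - half): cx + half]]
    mean = sum(values) / len(values)
    return target_luma / mean          # >1 means brighten, <1 means darken

frame = [[80 if x < 8 else 200 for x in range(16)] for _ in range(16)]
cx, cy, win = predict_metering_window(4, 8, motion_xy=(8, 0), win=6)  # panning right
print(round(exposure_from_window(frame, cx, cy, win), 2))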
Abstract:
A method for camera processing using a camera application programming interface (API) is described. A processor executing the camera API may be configured to receive instructions that specify a use case for a camera pipeline, the use case defining one or more processing engines of a plurality of processing engines for processing image data with the camera pipeline, wherein the plurality of processing engines includes one or more fixed-function image signal processing nodes internal to a camera processor and one or more processing engines external to the camera processor. The processor may be further configured to route image data to the one or more processing engines specified by the instructions and return the results of processing the image data with the one or more processing engines to the application.
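A toy model of such an API is sketched below: the use case names the processing engines, and a dispatcher routes the image data through them in order and hands the result back to the caller. The engine names and registry are hypothetical and do not represent the actual API surface.

# Hypothetical engine registry: some stages stand in for fixed-function ISP nodes
# inside the camera processor, others for engines external to it (GPU, DSP, ...).
ENGINES = {
    "isp_demosaic": lambda data: data + ["demosaiced (ISP node)"],
    "isp_denoise":  lambda data: data + ["denoised (ISP node)"],
    "gpu_hdr":      lambda data: data + ["tone-mapped (external GPU engine)"],
}

def process_use_case(use_case, image_data):
    """Route image data through the engines named by the use case and
    return the result to the calling application."""
    for engine_name in use_case["engines"]:
        image_data = ENGINES[engine_name](image_data)
    return image_data

use_case = {"name": "hdr_snapshot",
            "engines": ["isp_demosaic", "isp_denoise", "gpu_hdr"]}
print(process_use_case(use_case, image_data=["raw frame"]))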