Abstract:
A method and apparatus for enabling a data call in a wireless network, comprising determining if the data call in a packet app is a relay model tethered data call, and determining if default link flow type Flow 1 is deactivated for the data call. In one aspect, one or more of the following are also included: determining if the type of the data call is CDMA2000 1X, IS-95A/B, EVDO Rev. 0, EVDO Rev. A or EVDO Rev. B; determining the type of the packet app; requesting to deactivate default link flow type Flow 1; and determining if default link flow type Flow 1 is deactivated for the data call. The type of the packet app is one of a default packet app (DPA), a multi-flow packet app (MPA), an enhanced multi-flow packet app (EMPA) or a multi-link multi-flow packet app (MMPA).
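As a rough illustration of the decision procedure above, the Python sketch below checks the call type, packet app type, and relay-model tethering before requesting deactivation of default link flow Flow 1 and confirming the result. The DataCall class and its methods are hypothetical stand-ins, not part of any standard or disclosed API.

from dataclasses import dataclass, field

SUPPORTED_CALL_TYPES = {"CDMA2000 1X", "IS-95A/B", "EVDO Rev. 0", "EVDO Rev. A", "EVDO Rev. B"}
SUPPORTED_PACKET_APPS = {"DPA", "MPA", "EMPA", "MMPA"}

@dataclass
class DataCall:
    call_type: str
    packet_app: str
    relay_model_tethered: bool
    deactivated_flows: set = field(default_factory=set)

    def request_deactivate(self, flow: str) -> None:
        # Stand-in for signaling the network to deactivate the link flow.
        self.deactivated_flows.add(flow)

def ensure_flow1_deactivated(call: DataCall) -> bool:
    """Request deactivation of default link flow 'Flow 1' for a relay-model
    tethered call on a supported air interface and packet app, then report
    whether it is deactivated."""
    if not call.relay_model_tethered:
        return False
    if call.call_type not in SUPPORTED_CALL_TYPES:
        return False
    if call.packet_app not in SUPPORTED_PACKET_APPS:
        return False
    if "Flow 1" not in call.deactivated_flows:
        call.request_deactivate("Flow 1")
    return "Flow 1" in call.deactivated_flows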
Abstract:
Techniques for routing data via lower layer paths through lower layers of a protocol stack are described. A lower layer path may be composed of a flow for packets, a link at a link layer, and a channel at a physical layer. A packet may be received from an application. A most preferred lower layer path for the packet may be selected from among at least one available lower layer path. The available lower layer path(s) may be arranged in an order of preference based on treatment of packets (e.g., best effort or QoS), protocols used at the link layer, channel types at the physical layer, and/or other factors. The packet may be sent via the selected lower layer path. A highest precedence lower layer path for the packet may be set up (e.g., in parallel) if this path is not among the at least one available lower layer path.
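A minimal Python sketch of the selection step described above, assuming a hypothetical LowerLayerPath record and an invented preference ordering (QoS before best effort, then channel type, then link protocol); the real precedence rules would depend on the system.

from dataclasses import dataclass

@dataclass(frozen=True)
class LowerLayerPath:
    flow_id: int
    link_protocol: str    # protocol used at the link layer
    channel_type: str     # channel type at the physical layer, e.g. "traffic"
    qos: bool             # True for QoS treatment, False for best effort
    available: bool

def preference_key(path: LowerLayerPath):
    # Larger tuples sort as "more preferred": QoS before best effort, then
    # (illustratively) traffic channels before others, then link protocol name.
    return (path.qos, path.channel_type == "traffic", path.link_protocol)

def select_path(paths, setup_path):
    """Return the most preferred available path for a packet. If the highest
    precedence path overall is not available, request that it be set up, which
    can proceed in parallel with sending on the chosen path."""
    best_overall = max(paths, key=preference_key)
    if not best_overall.available:
        setup_path(best_overall)
    available = [p for p in paths if p.available]
    return max(available, key=preference_key) if available else None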
Abstract:
A broadband service is provided by allocating air interface resources in a wireless network that conforms to the 1xEV-DO standard. The air interface resources are characterized by various quality of service (QoS) parameters, such as bandwidth, packet priority and error rate. Packetized information is transmitted in data flows between a base station and cell phones. A particular QoS level is reserved for each of the data flows that support the broadband service. An operating system on a cell phone monitors one data flow as well as another data flow in the opposite direction. When the base station runs out of an air interface resource, the base station suspends the QoS reservation of a data flow. The operating system determines that the QoS reservation in one direction has been suspended and sends an unsolicited message to the base station releasing the QoS reservation in the opposite direction, thereby conserving network resources.
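The release behavior might look roughly like the Python sketch below, where QosFlow, its qos_state field, and send_release are hypothetical placeholders for the handset's flow state and the 1xEV-DO signaling, which is not modeled here.

from dataclasses import dataclass

@dataclass
class QosFlow:
    direction: str      # "forward" (base station to phone) or "reverse"
    qos_state: str      # "reserved" or "suspended"

def release_opposite_if_suspended(forward: QosFlow, reverse: QosFlow, send_release):
    """If the base station has suspended the QoS reservation in one direction,
    send an unsolicited message releasing the reservation in the other."""
    if forward.qos_state == "suspended" and reverse.qos_state == "reserved":
        send_release(reverse)
    elif reverse.qos_state == "suspended" and forward.qos_state == "reserved":
        send_release(forward)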
Abstract:
Techniques for managing resources on a wireless device are described. In an aspect, congestion of resources on the wireless device may be detected. If any resources are deemed to be congested, then congestion of the congested resources may be relieved by controlling utilization of the congested resources by at least one client. In one design, flow control may be performed for at least one data flow to relieve congestion of the congested resources. A pattern indicative of when to send messages enabling data transmission and when to send messages disabling data transmission may be selected. Messages may then be sent in accordance with the pattern to control transmission of data for the at least one data flow. Another pattern with a higher ON fraction or a lower ON fraction may be selected based on usage of the congested resources.
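One way to picture the pattern selection is the Python sketch below; the ON fractions, usage thresholds, and pattern encoding are invented for illustration and are not taken from the abstract.

# Each pattern is a schedule of slots: 1 = send a message enabling data
# transmission, 0 = send a message disabling it.
PATTERNS = {
    1.00: [1, 1, 1, 1],
    0.75: [1, 1, 1, 0],
    0.50: [1, 0, 1, 0],
    0.25: [1, 0, 0, 0],
}

def select_pattern(resource_usage: float, current_on_fraction: float) -> float:
    """Move to a pattern with a lower ON fraction while the congested resource
    is still heavily used, or a higher ON fraction once usage has dropped."""
    fractions = sorted(PATTERNS)                 # [0.25, 0.50, 0.75, 1.00]
    i = fractions.index(current_on_fraction)
    if resource_usage > 0.9 and i > 0:
        return fractions[i - 1]                  # throttle harder
    if resource_usage < 0.5 and i < len(fractions) - 1:
        return fractions[i + 1]                  # relax flow control
    return current_on_fraction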
Abstract:
Certain aspects of the present disclosure provide techniques for wireless communications, wherein distinct port partitions are assigned to processing entities on a user equipment device. Doing so provides the processing entities with concurrent access to a single packet data network (PDN) connection.
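A simple Python sketch of one possible partitioning scheme, assuming the processing entities share the local port space of one PDN connection; the port range and entity names are illustrative, not drawn from the disclosure.

def assign_port_partitions(entities, low=32768, high=60999):
    """Return {entity: range} with disjoint local port ranges, so each
    processing entity can open sockets on the shared PDN connection
    concurrently without colliding."""
    span = (high - low + 1) // len(entities)
    return {
        entity: range(low + i * span, low + (i + 1) * span)
        for i, entity in enumerate(entities)
    }

partitions = assign_port_partitions(["modem_proc", "apps_proc"])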
Abstract:
Systems and methods for releasing stale connection contexts are provided herein. The systems and methods help to ensure that connection records between mobile devices and communications nodes are synchronized so as to avoid and/or fix stale connection contexts.
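At a high level, detecting stale contexts could resemble the Python sketch below, which assumes each side can enumerate the connection identifiers it believes are active; the actual synchronization and release signaling is not modeled.

def find_stale_contexts(device_contexts: set, node_contexts: set):
    """Contexts known to only one side are stale and should be released
    so the connection records stay synchronized."""
    stale_on_node = node_contexts - device_contexts
    stale_on_device = device_contexts - node_contexts
    return stale_on_node, stale_on_device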
Abstract:
Apparatus and methods are disclosed for power optimization in a wireless device. The apparatus and methods monitor the amount of data stored in a data buffer that buffers data input to and output from a processor. Parameters of a control function, such as a Dynamic Clock and Voltage Scaling (DCVS) function, are then modified based on the amount of data stored in the data buffer. By modifying or pre-empting the parameters of the control function, which controls at least the processor frequency, the processor can process applications more dynamically than under default parameter settings, especially where one or more real-time activities with strict completion deadlines are being handled by the processor, as evinced by increased buffer depth. As a result, power usage is further optimized because the control function is more responsive to processing conditions.
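A rough Python sketch of buffer-driven parameter adjustment, assuming a hypothetical dcvs object with set_min_frequency_mhz and restore_default_parameters methods and invented fill thresholds; the real DCVS interface is not specified by the abstract.

def adjust_dcvs(buffer_depth_bytes: int, buffer_capacity_bytes: int, dcvs):
    """Raise the DCVS floor when the buffer is filling (real-time work is
    backing up) and restore default scaling parameters when it drains."""
    fill = buffer_depth_bytes / buffer_capacity_bytes
    if fill > 0.75:
        dcvs.set_min_frequency_mhz(800)    # pre-empt default scaling: keep the CPU fast
    elif fill < 0.25:
        dcvs.restore_default_parameters()  # let the default DCVS policy resume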
Abstract:
Techniques for maintaining an always-on data session for an access terminal are described. Messages to keep alive the data session may be sent using non-traffic channels to avoid bringing up traffic channels just to send these messages. In one design, an access network may send a first message (e.g., a RouteUpdateRequest message) on a first non-traffic channel (e.g., a control channel) to the access terminal. The access terminal may return a second message (e.g., a RouteUpdate message) on a second non-traffic channel (e.g., an access channel) to the access network. The access network may then send a third message (e.g., for an Echo-Request) on the first non-traffic channel over a smaller area covering an approximate location of the access terminal, which may be determined based on the second message. The access terminal may return a fourth message (e.g., for an Echo-Reply) on the second non-traffic channel to the access network.
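The four-message exchange might be sketched in Python as follows; the message names come from the abstract, while the send interfaces, channel labels, and location handling are hypothetical.

def keep_alive(access_network, access_terminal):
    # 1) AN -> AT on a non-traffic (control) channel.
    access_network.send("RouteUpdateRequest", channel="control")
    # 2) AT -> AN on a non-traffic (access) channel; the reply also reveals
    #    the terminal's approximate location.
    location = access_terminal.send("RouteUpdate", channel="access")
    # 3) AN -> AT: Echo-Request sent only over the smaller area around
    #    that location.
    access_network.send("Echo-Request", channel="control", area=location)
    # 4) AT -> AN: Echo-Reply on the access channel keeps the always-on
    #    session alive without bringing up traffic channels.
    access_terminal.send("Echo-Reply", channel="access")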