Abstract:
Embodiments are provided for an asynchronous processor with a token-based very long instruction word architecture. The asynchronous processor comprises a memory configured to cache a plurality of instructions, a feedback engine configured to receive the instructions in bundles (each bundle referred to as a very long instruction word) and to decode the instructions, and a crossbar bus configured to transfer calculation information and results of the asynchronous processor. The asynchronous processor further comprises a plurality of sets of execution units (XUs) between the feedback engine and the crossbar bus. Each set of the sets of XUs comprises a plurality of XUs arranged in series and configured to process a bundle of instructions received at that set from the feedback engine.
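As an illustrative software sketch only (the class names, bundle width, and toy ALU below are assumptions, not the claimed hardware), the organization can be pictured as a feedback engine dispatching VLIW bundles to sets of series-connected XUs that publish results on a crossbar:

```python
# Hypothetical sketch of the described organization: a feedback engine decodes
# bundles (VLIWs) and hands each bundle to one set of XUs; the XUs in a set are
# chained in series and publish their result on a crossbar bus.
# FeedbackEngine, XUSet, Crossbar and the bundle width are assumed names/values.

class Crossbar:
    def __init__(self):
        self.results = []              # calculation information and results

    def publish(self, value):
        self.results.append(value)

class XU:
    def execute(self, instruction, operand):
        op, imm = instruction
        return operand + imm if op == "ADD" else operand   # toy ALU

class XUSet:
    def __init__(self, size, crossbar):
        self.xus = [XU() for _ in range(size)]
        self.crossbar = crossbar

    def process_bundle(self, bundle):
        value = 0
        for xu, instr in zip(self.xus, bundle):   # series: each XU feeds the next
            value = xu.execute(instr, value)
        self.crossbar.publish(value)

class FeedbackEngine:
    def __init__(self, xu_sets):
        self.xu_sets = xu_sets

    def dispatch(self, cached_instructions, bundle_width):
        for i in range(0, len(cached_instructions), bundle_width):
            bundle = cached_instructions[i:i + bundle_width]      # one VLIW
            target = self.xu_sets[(i // bundle_width) % len(self.xu_sets)]
            target.process_bundle(bundle)

crossbar = Crossbar()
engine = FeedbackEngine([XUSet(4, crossbar), XUSet(4, crossbar)])
engine.dispatch([("ADD", n) for n in range(8)], bundle_width=4)
print(crossbar.results)   # [6, 22]
```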
Abstract:
Embodiments are provided for an asynchronous processor with a hierarchical token system. The asynchronous processor includes a set of primary processing units configured to gate and pass a first set of tokens in a first predefined order of a primary token system. The asynchronous processor further includes a set of secondary units configured to gate and pass a second set of tokens in a second predefined order of a secondary token system. The first set of tokens includes a token that is consumed in the set of primary processing units and designated to trigger the secondary token system in the set of secondary units.
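A minimal sketch of the hierarchy (the token names and the two orderings are illustrative assumptions) showing how consuming one designated primary token launches the secondary token system:

```python
# Hypothetical sketch of the hierarchical token idea: primary units gate and
# pass primary tokens in a fixed order; consuming one designated primary token
# triggers a secondary token system inside the secondary units.
# Token names and the orderings are assumptions, not the claimed sets.

PRIMARY_ORDER   = ["launch", "fetch", "commit"]     # primary token system
SECONDARY_ORDER = ["read", "calc", "write"]         # secondary token system
TRIGGER_TOKEN   = "fetch"                           # consumed, triggers secondary

def run_secondary(unit_id):
    for token in SECONDARY_ORDER:                   # gate and pass in order
        print(f"  secondary unit {unit_id}: pass token '{token}'")

def run_primary(num_primary_units, num_secondary_units):
    for unit_id in range(num_primary_units):
        for token in PRIMARY_ORDER:
            print(f"primary unit {unit_id}: pass token '{token}'")
            if token == TRIGGER_TOKEN:              # designated token consumed here
                for s in range(num_secondary_units):
                    run_secondary(s)

run_primary(num_primary_units=2, num_secondary_units=1)
```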
Abstract:
Aspects of the present disclosure relate to reference signal assignment, in which reference signals for a first apparatus may be assigned based on second channel estimates for one or more second apparatuses having the same location and network resource configuration as the first apparatus.
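As an illustration only (the field names and the selection rule are assumptions; the disclosure does not specify this procedure in code), the channel estimates already available for co-located second apparatuses might drive the assignment for the first apparatus:

```python
# Hypothetical sketch: reuse channel estimates of second apparatuses that share
# the first apparatus's location and network resource configuration when
# assigning its reference signals. All names and the rule are assumptions.

second_apparatus_db = [
    {"location": "site-A", "resource_config": "cfg-1", "channel_estimate": 0.82},
    {"location": "site-A", "resource_config": "cfg-1", "channel_estimate": 0.79},
    {"location": "site-B", "resource_config": "cfg-2", "channel_estimate": 0.40},
]

def assign_reference_signals(first_apparatus):
    matches = [
        a["channel_estimate"]
        for a in second_apparatus_db
        if a["location"] == first_apparatus["location"]
        and a["resource_config"] == first_apparatus["resource_config"]
    ]
    if not matches:
        return None                           # fall back to direct estimation
    # Assumed rule: scale reference-signal density with the matched estimates.
    avg = sum(matches) / len(matches)
    return "dense" if avg < 0.5 else "sparse"

print(assign_reference_signals({"location": "site-A", "resource_config": "cfg-1"}))
```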
Abstract:
Systems and methods of reporting wireless channel state information are provided. In a situation where multiple UEs are close to each other, such that channel conditions may be similar across those UEs, one of the UEs is configured to report interference information according to a time pattern that has at least two measurement time durations for which interference is to be measured, for example only a subset of N consecutive measurement time durations. Other UEs may be configured to report interference information for different time patterns, for example different subsets of the N measurement time durations.
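A minimal sketch (the round-robin split below is an illustrative assumption, not the claimed scheme) of configuring nearby UEs with complementary time patterns, each covering a different subset of the N measurement time durations:

```python
# Hypothetical sketch: split N consecutive measurement time durations among
# M nearby UEs so each UE measures and reports interference only for its subset.

def assign_time_patterns(num_ues, n_durations):
    patterns = {ue: [] for ue in range(num_ues)}
    for t in range(n_durations):
        patterns[t % num_ues].append(t)       # durations this UE measures in
    return patterns

patterns = assign_time_patterns(num_ues=3, n_durations=12)
for ue, durations in patterns.items():
    print(f"UE {ue} reports interference for durations {durations}")
# UE 0 -> [0, 3, 6, 9], UE 1 -> [1, 4, 7, 10], UE 2 -> [2, 5, 8, 11]
```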
Abstract:
Some embodiments of the present disclosure relate to inferencing using a trained deep neural network. Inferencing may reasonably be expected to be a mainstream application of 6G wireless networks. Agile, robust, and accurate inferencing is important for the success of AI applications. Aspects of the present application relate to introducing coding theory into inferencing carried out in a distributed manner. It may be shown that redundant wireless bandwidth and redundant edge units help to ensure agility, robustness, and accuracy in coded inferencing networks.
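A minimal sketch of the coded-inferencing idea (the repetition code, toy model, and failure probability are assumptions; the disclosure covers coded inferencing generally): redundant edge units let the result be recovered even when some units straggle or fail:

```python
# Hypothetical sketch of coded inferencing with a simple repetition code:
# the same inference task is sent to r redundant edge units and the first
# response that arrives is used.

import random

def edge_inference(unit_id, x):
    if random.random() < 0.3:                 # straggling / failed edge unit
        return None
    return 2 * x + 1                          # toy stand-in for the trained network

def coded_inference(x, redundancy=3):
    for unit_id in range(redundancy):         # redundant edge units
        y = edge_inference(unit_id, x)
        if y is not None:
            return y                          # agility: first success wins
    raise RuntimeError("all edge units failed")

random.seed(0)
print(coded_inference(x=5))                   # 11 unless every replica fails
```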
Abstract:
A resource mapping method and apparatus, and a resource mapping indication method and apparatus, are provided to adapt to an interference cancellation function of a receiver. According to the method, a network device obtains a mapping mode used for resource mapping during uplink transmission and sends, to a terminal, information indicating the mapping mode. The mapping mode indicates mapping locations of a plurality of modulation symbols in a resource mapping block (RMB); the RMB comprises a plurality of resource elements (REs), and at least one of the plurality of REs carries at least two modulation symbols.
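A minimal sketch (the RMB size, mode names, and placement rule are assumptions) of a mapping mode that places modulation symbols onto the REs of an RMB such that at least one RE carries two symbols:

```python
# Hypothetical sketch: map modulation symbols into a resource mapping block
# (RMB) of REs according to a signalled mapping mode. In the assumed "overlap"
# mode the first RE carries two superposed symbols.

def map_to_rmb(symbols, num_res, mode):
    rmb = [[] for _ in range(num_res)]        # each RE holds a list of symbols
    if mode == "overlap":
        rmb[0] = [symbols[0], symbols[1]]     # one RE carries >= 2 symbols
        rest = symbols[2:]
        for i, s in enumerate(rest):
            rmb[1 + (i % (num_res - 1))].append(s)
    else:                                     # one symbol per RE ("plain" mode)
        for i, s in enumerate(symbols):
            rmb[i % num_res].append(s)
    return rmb

symbols = [f"s{i}" for i in range(6)]
for re_index, payload in enumerate(map_to_rmb(symbols, num_res=5, mode="overlap")):
    print(f"RE {re_index}: {payload}")
```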
Abstract:
A broadcast signaling method is performed by a network device having a protocol stack with first and second protocol layers, where the second protocol layer is below the first protocol layer. The method includes generating, by the network device, first information at the first protocol layer; generating, by the network device, second information at the second protocol layer, where the second information is used to determine a time-frequency resource corresponding to one or more synchronization signal blocks (SSBs); processing, by the network device, the first information and the second information at the second protocol layer; and sending, by the network device to a terminal device by using a physical broadcast channel (PBCH) in the one or more SSBs, data obtained after the second protocol layer processing.
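A minimal sketch of the two-layer construction (field names, widths, and the concatenation are illustrative assumptions, not a standardized format): first-layer information is generated, the second layer adds information that determines the SSB time-frequency resource, and both are carried on the PBCH:

```python
# Hypothetical sketch of the layered construction of the PBCH payload.

def first_layer_info():
    return {"system_info": 0b1010_0011}       # generated at the first layer

def second_layer_info(ssb_index):
    # Determines the time-frequency resource of the SSB(s), e.g. an SSB index.
    return {"ssb_index": ssb_index}

def second_layer_processing(l1, l2):
    # Second layer processes both pieces of information into PBCH payload bits.
    return (l1["system_info"] << 6) | l2["ssb_index"]

payload = second_layer_processing(first_layer_info(), second_layer_info(ssb_index=5))
print(f"PBCH payload bits: {payload:014b}")   # sent to the terminal in the SSB
```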
Abstract:
A number K of N sub-channels, which are defined by a code and have associated reliabilities for input bits at N input bit positions, are to be selected to carry bits that are to be encoded. A localization area that includes multiple sub-channels and is located below fewer than K of the N sub-channels in a partial order of the N sub-channels is determined based on one or more coding parameters. The fewer than K sub-channels above the localization area in the partial order are selected, and a number of sub-channels from those in the localization area are also selected. The selected fewer than K sub-channels and the sub-channels selected from the localization area together include K sub-channels to carry the bits that are to be encoded.
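A minimal sketch of the selection procedure (the reliability values, the use of a plain reliability sort as a proxy for the partial order, and the localization-area size are assumptions): take the fewer-than-K sub-channels above the localization area, then fill the remainder from inside the area:

```python
# Hypothetical sketch of the described selection over N sub-channels.

def select_subchannels(reliability, K, area_size):
    order = sorted(range(len(reliability)), key=lambda i: reliability[i],
                   reverse=True)              # proxy for the partial order
    above_area = order[:K - area_size // 2]   # fewer than K sub-channels above
    localization_area = order[len(above_area):len(above_area) + area_size]
    remaining = K - len(above_area)
    from_area = localization_area[:remaining] # pick the rest inside the area
    return sorted(above_area + from_area)     # exactly K sub-channels in total

reliability = [0.1, 0.9, 0.4, 0.8, 0.3, 0.95, 0.2, 0.7]   # N = 8
print(select_subchannels(reliability, K=4, area_size=4))   # [1, 3, 5, 7]
```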
Abstract:
This application provides a method for communicating a modulation and coding scheme (MCS). A terminal device obtains a modulation order, a code rate, or a spectral efficiency, determines an index of a reference MCS from a mapping table based on the obtained modulation order, code rate, or spectral efficiency, and reports the index of the reference MCS to a network device. The mapping table includes one or more mapping relationships between an MCS index and a modulation order, a code rate, or a spectral efficiency. The terminal device may process uplink or downlink data based on the determined MCS, thereby improving data transmission reliability.
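A minimal sketch of the terminal-side lookup (the table entries are toy values; the actual mapping table contents are defined by the system): given an observed modulation order, code rate, or spectral efficiency, find the index of the closest reference MCS and report it:

```python
# Hypothetical sketch of the reference-MCS lookup against a mapping table.

MCS_TABLE = [
    # (mcs_index, modulation_order, code_rate, spectral_efficiency)
    (0, 2, 0.12, 0.23),
    (5, 2, 0.44, 0.88),
    (10, 4, 0.33, 1.33),
    (15, 4, 0.60, 2.41),
    (20, 6, 0.55, 3.32),
]

def reference_mcs_index(spectral_efficiency):
    # Report the index whose spectral efficiency is closest to the target.
    best = min(MCS_TABLE, key=lambda row: abs(row[3] - spectral_efficiency))
    return best[0]

print(reference_mcs_index(spectral_efficiency=2.5))   # -> 15, reported to the network device
```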
Abstract:
Embodiments are provided for an asynchronous processor with a pipelined arithmetic and logic unit. The asynchronous processor includes a non-transitory memory for storing instructions and a plurality of instruction execution units (XUs) arranged in a ring architecture for passing tokens. Each one of the XUs comprises a logic circuit configured to fetch a first instruction from the non-transitory memory and execute the first instruction. The logic circuit is also configured to fetch a second instruction from the non-transitory memory and execute the second instruction, regardless of whether that XU holds a token for writing the first instruction. The logic circuit is further configured to write the first instruction to the non-transitory memory after fetching the second instruction.
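A minimal sketch of the pipelining behavior of one XU (the event trace and instruction strings are assumptions; the token arbitration itself is abstracted away): the XU fetches and executes the second instruction without waiting on the write token for the first, then performs the first write-back afterwards:

```python
# Hypothetical sketch of one XU's pipelined event order: fetch and execute
# instruction n+1 before writing back instruction n.

def run_xu(instructions):
    trace = []
    pending_write = None
    for n, instr in enumerate(instructions, start=1):
        trace.append(f"fetch+execute instruction {n} ({instr})")
        if pending_write is not None:
            # Write-back of the previous instruction happens only after the
            # next fetch, regardless of when the write token was granted.
            trace.append(f"write back instruction {pending_write}")
        pending_write = n
    trace.append(f"write back instruction {pending_write}")
    return trace

for event in run_xu(["ADD r1,r2", "SUB r3,r4", "MUL r5,r6"]):
    print(event)
```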