Abstract:
A method of using a computerized smart phone to navigate remote auto attendant telephony systems that have a menu structure. The auto attendant's menu structure is stored in an online computer database. When the caller uses the smart phone to call and establish a voice channel with a remote auto attendant telephony system (using that system's telephone number), application software running on the caller's smart phone intercepts the telephone number and, in addition to the voice channel, establishes a data channel with the online computer-accessible database. The caller's smart phone can then retrieve at least some of the menu structure of the auto attendant telephony system through this data channel. The application software can then display at least some of that menu structure on the graphical user interface of the user's smart phone, synchronized with the audio delivery of the menu, facilitating interaction with the auto attendant system.
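A minimal sketch of the data-channel side, in Python, with an in-memory dict standing in for the online database; the names MENU_DB, fetch_menu, and display_menu are illustrative, not from the abstract.

    # Minimal sketch: the dialed number keys a menu tree held in an online
    # database, modeled here as an in-memory dict. All names are hypothetical.
    MENU_DB = {
        "+18005551234": {
            "prompt": "Main menu",
            "options": {"1": "Billing", "2": "Technical support", "0": "Operator"},
        },
    }

    def fetch_menu(dialed_number):
        """Retrieve the auto attendant's menu structure over the data channel."""
        return MENU_DB.get(dialed_number)

    def display_menu(menu):
        """Render the menu on the smart phone's graphical user interface."""
        print(menu["prompt"])
        for key, label in sorted(menu["options"].items()):
            print("  press " + key + ": " + label)

    menu = fetch_menu("+18005551234")   # number intercepted when the call is placed
    if menu:
        display_menu(menu)              # shown in sync with the audio prompts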
Abstract:
A processor capable of secure execution. The processor contains an execution unit and secure partition logic that secures a partition in memory. The processor also contains cryptographic logic coupled to the execution unit that encrypts and decrypts secure data and code.
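A toy software model of the idea in Python, assuming a hypothetical SecurePartition class; the XOR keystream is a stand-in for real cryptographic logic, and the key is invented for illustration.

    # Toy model: data written to the secure partition is encrypted by the
    # "cryptographic logic" and decrypted on read. The XOR keystream below is a
    # placeholder, not a real cipher, and the key is hypothetical.
    import hashlib

    SECRET_KEY = b"per-processor-secret"

    def _keystream(length):
        out = b""
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(SECRET_KEY + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:length]

    class SecurePartition:
        """Memory region whose contents are stored only in encrypted form."""
        def __init__(self, size):
            self._mem = bytearray(size)
        def write(self, offset, data):
            enc = bytes(a ^ b for a, b in zip(data, _keystream(len(data))))
            self._mem[offset:offset + len(enc)] = enc
        def read(self, offset, length):
            enc = bytes(self._mem[offset:offset + length])
            return bytes(a ^ b for a, b in zip(enc, _keystream(length)))

    p = SecurePartition(64)
    p.write(0, b"protected code/data")
    print(p.read(0, 19))                # plaintext returned to the execution unit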
Abstract:
An apparatus includes an instruction decoder, first and second source registers, and a circuit coupled to the decoder to receive packed data from the source registers and to unpack the packed data responsive to an unpack instruction received by the decoder. A first packed data element and a third packed data element are received from the first source register. A second packed data element and a fourth packed data element are received from the second source register. The circuit copies the packed data elements into a destination register such that the second packed data element is adjacent to the first packed data element, the third packed data element is adjacent to the second packed data element, and the fourth packed data element is adjacent to the third packed data element.
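A short Python sketch of the interleaving result, with illustrative element values and counts:

    # Elements are taken alternately from the two source registers and copied,
    # adjacent to one another, into the destination register.
    def unpack_interleave(src1, src2):
        dest = []
        for a, b in zip(src1, src2):
            dest.append(a)   # first / third packed data element (from source 1)
            dest.append(b)   # second / fourth packed data element (from source 2)
        return dest

    # src1 holds the first and third elements, src2 the second and fourth:
    print(unpack_interleave([10, 30], [20, 40]))   # [10, 20, 30, 40]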
Abstract:
A computerized method of payment based on short, temporary transaction ID numbers which protect the security of the payer's (customer's) financial accounts. The payer will first register a source of funds and a payer device with a unique ID (such as a mobile phone and its phone number) with the invention's payment server. Then, once the payee (merchant) and the payer have agreed on a financial transaction amount, the payee requests a transaction ID from the payment server for that amount. The payment server sends the payee a transaction ID, which the payee then communicates to the payer. The payer in turn relays this transaction ID to the server, which validates the transaction using the payer device. The server then releases funds to the payee. The server can preserve all records for auditing purposes, but security is enhanced because the merchant never gets direct access to the customer's financial account information.
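A minimal Python sketch of this flow, with hypothetical class and method names; a real system would add device authentication, ID expiry, and durable audit storage.

    import secrets

    class PaymentServer:
        def __init__(self):
            self.registered = {}      # payer device id -> funding source
            self.pending = {}         # transaction id -> (payee, amount)
            self.audit_log = []

        def register_payer(self, device_id, funding_source):
            self.registered[device_id] = funding_source

        def request_transaction_id(self, payee, amount):
            """Payee requests a short, temporary ID for the agreed amount."""
            tx_id = secrets.token_hex(3)          # short, single-use ID
            self.pending[tx_id] = (payee, amount)
            return tx_id

        def confirm(self, tx_id, device_id):
            """Payer relays the ID; server validates it against the payer device."""
            if tx_id not in self.pending or device_id not in self.registered:
                return False
            payee, amount = self.pending.pop(tx_id)
            self.audit_log.append((tx_id, device_id, payee, amount))
            return True                           # funds released to the payee

    # Usage: the customer registers once, then each purchase uses a fresh ID.
    server = PaymentServer()
    server.register_payer("phone:+15551230000", "bank-account-ref")
    tx = server.request_transaction_id("merchant-42", 19.95)
    print(server.confirm(tx, "phone:+15551230000"))  # True; merchant never sees the account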
Abstract:
A method and apparatus for including in a processor instructions for performing multiply-add operations on packed data. In one embodiment, a processor is coupled to a memory. The memory has stored therein a first packed data and a second packed data. In response to receiving an instruction, the processor performs operations on data elements in said first packed data and said second packed data to generate a third packed data. At least two of the data elements in this third packed data store the result of performing multiply-add operations on data elements in the first and second packed data.
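As a brief illustration of the behavior (in the spirit of packed multiply-add instructions such as x86's PMADDWD), the Python sketch below multiplies corresponding elements of two packed operands and sums adjacent products; element counts and values are illustrative only.

    def packed_multiply_add(src1, src2):
        products = [a * b for a, b in zip(src1, src2)]
        # Sum each adjacent pair of products into one wider result element.
        return [products[i] + products[i + 1] for i in range(0, len(products), 2)]

    # Four narrow elements in, two accumulated results out:
    print(packed_multiply_add([1, 2, 3, 4], [5, 6, 7, 8]))  # [17, 53]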
Abstract:
An apparatus includes an instruction decoder, first and second source registers, and a circuit coupled to the decoder to receive packed data from the source registers and to pack the packed data responsive to a pack instruction received by the decoder. A first packed data element and a second packed data element are received from the first source register. A third packed data element and a fourth packed data element are received from the second source register. The circuit packs a portion of each of the packed data elements into a destination register such that the portion from the second packed data element is adjacent to the portion from the first packed data element, and the portion from the fourth packed data element is adjacent to the portion from the third packed data element.
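A short Python sketch of the pack result, assuming the "portion" kept is the low byte of each element and truncating rather than saturating (hardware pack instructions often saturate); values are illustrative.

    def pack_low_bytes(src1, src2):
        """Keep only the low 8 bits of each packed element (truncating pack)."""
        return [x & 0xFF for x in src1] + [x & 0xFF for x in src2]

    # Two wide elements per source pack into four narrow destination elements:
    print([hex(v) for v in pack_low_bytes([0x1122, 0x3344], [0x5566, 0x7788])])
    # -> ['0x22', '0x44', '0x66', '0x88']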
Abstract:
Offloading application-level communication functions from a host processor. The offloading apparatus can be configured as either a pre-processor or a co-processor. An interface is provided for receiving a network message sent to the host. An engine performs processing of the network message above OSI level 4. In one embodiment, in a fast path, a response to the message is sent back to the network without any involvement by the host, providing a complete offload. For other messages, certain pre-processing can be performed, such as parsing of a header, message authentication, and look-up of meta-data. The results of the look-up are then passed to the host with the processed header, simplifying the tasks the host needs to perform. The messages and data are transferred to the host using control and data buffers.
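The sketch below (Python, with hypothetical names such as OffloadEngine and handle_on_host) is one way to picture the fast-path versus pre-processing split: cached messages are answered without the host, everything else is parsed, looked up, and handed off.

    METADATA = {"GET /status": {"cached": "OK"}}     # hypothetical meta-data store

    class OffloadEngine:
        def __init__(self, host_handler):
            self.host_handler = host_handler         # slow path: hand off to the host

        def receive(self, message):
            header, _, body = message.partition("\n")   # parse the header
            meta = METADATA.get(header)                  # meta-data look-up
            if meta and "cached" in meta:
                return meta["cached"]                    # fast path: reply, no host involvement
            # Otherwise pass the parsed header, body, and look-up result to the host.
            return self.host_handler(header, body, meta)

    def handle_on_host(header, body, meta):
        return "host handled: " + header

    engine = OffloadEngine(handle_on_host)
    print(engine.receive("GET /status\n"))        # fast-path response
    print(engine.receive("POST /data\npayload"))  # pre-processed, then host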
Abstract:
A method and apparatus for providing, in a processor, a shift operation on a packed data element having multiple values. One embodiment of a central processing unit (CPU) includes instruction fetch logic to fetch a single-instruction-multiple-data (SIMD) shift instruction. A register stores multiple data elements to be operated upon by the SIMD shift instruction. A barrel shifter concurrently shifts the data elements in a bit-wise manner by a variable number of bit positions in response to the SIMD shift instruction.
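A small Python model of the packed shift, assuming an illustrative 16-bit element width; the shift count is the variable number of bit positions, applied to every element at once and masked so no bits cross element boundaries.

    ELEMENT_BITS = 16
    MASK = (1 << ELEMENT_BITS) - 1

    def simd_shift_left(elements, count):
        return [(x << count) & MASK for x in elements]

    def simd_shift_right_logical(elements, count):
        return [(x & MASK) >> count for x in elements]

    print(simd_shift_left([0x0001, 0x00F0, 0x8000], 4))          # [16, 3840, 0]
    print(simd_shift_right_logical([0x0010, 0x0F00, 0x8000], 4)) # [1, 240, 2048]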
Abstract:
A lookup is performed using multiple levels of compressed stride tables in a multi-bit Trie structure. An input lookup key is divided into several strides including a current stride of S bits. A valid entry in a current stride table is located by compressing the S bits to form a compressed index of D bits into the current stride table. A compression function logically combines the S bits to generate the D compressed index bits. An entry in a prior-level table points to the current stride table and has a field indicating which compression function and mask to use. Compression functions can include XOR, shifts, rotates, and multi-bit averaging. Rather than store all 2^S entries, the current stride table is compressed to store only 2^D entries. Ideally, the number of valid entries in the current stride table is between 2^(D−1) and 2^D for maximum compression. Storage requirements are reduced.
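As one concrete picture of the compression step, the Python sketch below XOR-folds the masked S stride bits into a D-bit index; the widths, mask, and table contents are illustrative, and a real lookup would also check the stored key to resolve collisions.

    # One compressed-stride lookup step: the S stride bits are combined by a
    # compression function (XOR-folding here, chosen per the prior-level entry's
    # field) into a D-bit index, so the table stores only 2**D entries.
    def xor_fold(stride_bits, d, mask):
        """Compress masked stride bits down to a D-bit index (one possible function)."""
        value = stride_bits & mask
        index = 0
        while value:
            index ^= value & ((1 << d) - 1)   # XOR successive D-bit chunks together
            value >>= d
        return index

    S, D = 8, 4
    stride_table = [None] * (1 << D)          # 2**D entries instead of 2**S
    stride_table[xor_fold(0xA7, D, 0xFF)] = "pointer to next-level stride table"

    key_stride = 0xA7                         # the current S-bit stride of the lookup key
    print(stride_table[xor_fold(key_stride, D, 0xFF)])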