Abstract:
A method and an apparatus for accessing a file are provided. The method includes: a file system receives a file access request from an application layer; when the file access request is to acquire content of the file according to a query condition, the file system acquires metadata of the file, where the metadata includes index information of the file and the query condition is used to select content of the file with respect to the index information; the file system determines, according to the index information, the content of the file that meets the query condition; and the file system acquires, using a magnetic disk input/output controller, all the content of the file that meets the query condition, so that the application layer accesses the file. Because part of the data is filtered out before being read, memory usage is reduced.
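A minimal sketch of the filtering idea described above, assuming the index information can be modeled as a list of keyed extents; the structure and function names (index_entry, read_extent, access_file) are hypothetical and only illustrate how matching extents could be selected before any disk I/O is issued.

/*
 * Minimal sketch (not the patented implementation) of filtering file
 * content by index metadata before issuing disk I/O. All names
 * (index_entry, read_extent, access_file) are hypothetical.
 */
#include <stdio.h>
#include <string.h>

struct index_entry {          /* one indexed extent inside the file   */
    const char *key;          /* value the query condition matches on */
    long        offset;       /* byte offset of the extent on disk    */
    long        length;       /* extent length in bytes               */
};

/* stand-in for the magnetic disk input/output controller path */
static void read_extent(long offset, long length)
{
    printf("disk read: offset=%ld length=%ld\n", offset, length);
}

/* return the number of extents actually read */
static int access_file(const struct index_entry *idx, int n, const char *query)
{
    int hits = 0;
    for (int i = 0; i < n; i++) {
        if (strcmp(idx[i].key, query) == 0) {   /* query condition */
            read_extent(idx[i].offset, idx[i].length);
            hits++;
        }
    }
    return hits;   /* only matching data ever reaches memory */
}

int main(void)
{
    struct index_entry idx[] = {
        { "2023", 0,    4096 },
        { "2024", 4096, 4096 },
        { "2024", 8192, 2048 },
    };
    printf("extents read: %d\n", access_file(idx, 3, "2024"));
    return 0;
}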
Abstract:
A method and a device for sharing a PCIe I/O device, and an interconnection system are provided. The method includes: determining a shared PCIe I/O device in a PCIe interconnection system; and establishing, by using a BAR at a working node, a first mapping relationship between an address of a CSR of the shared PCIe I/O device and an address, used for processing the CSR, in a working node domain. The method also includes establishing, by using an A-LUT fragment at a management node side of an NTB, a second mapping relationship between an address, used for receiving an MSI-X interrupt of the shared PCIe I/O device, in a management node domain and an address, used for processing the MSI-X interrupt, in the working node domain.
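A toy model of the two translations the abstract names, under the assumption that both the BAR-based mapping and the A-LUT-based mapping can be viewed as simple address lookup tables; the addresses and names (map_entry, bar_map, alut_map, translate) are invented for illustration and do not reflect any vendor NTB API.

/*
 * Illustrative sketch, not a vendor NTB/A-LUT API: the two address
 * translations the abstract describes, modeled as plain lookup tables.
 * All addresses and structure names are made up for the example.
 */
#include <stdint.h>
#include <stdio.h>

struct map_entry { uint64_t from; uint64_t to; };

/* first mapping: CSR address of the shared device -> working-node domain */
static const struct map_entry bar_map[] = {
    { 0xB0000000ULL /* device CSR */, 0x40000000ULL /* working node */ },
};

/* second mapping: MSI-X address in the management-node domain -> working-node domain */
static const struct map_entry alut_map[] = {
    { 0xC0001000ULL /* mgmt domain */, 0x50001000ULL /* working node */ },
};

static uint64_t translate(const struct map_entry *m, int n, uint64_t addr)
{
    for (int i = 0; i < n; i++)
        if (m[i].from == addr)
            return m[i].to;
    return 0;   /* no translation installed */
}

int main(void)
{
    printf("CSR access lands at      0x%llx\n",
           (unsigned long long)translate(bar_map, 1, 0xB0000000ULL));
    printf("MSI-X interrupt lands at 0x%llx\n",
           (unsigned long long)translate(alut_map, 1, 0xC0001000ULL));
    return 0;
}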
Abstract:
A method and an apparatus for rapid data distribution are provided. The method includes: sending, by a central processing unit, data description information to a rapid forwarding module, where the data description information includes an address and length information of data requested by a user; reading, by the rapid forwarding module according to the data description information, the data requested by the user and forwarding the data requested by the user to a network interface controller; and sending, by the network interface controller, the data requested by the user to the user. By using the method provided in the present invention, when services increase, only a network interface controller and a storage device need to be added, and the cost of the memory and the central processing unit does not need to be increased.
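A short sketch of the descriptor-driven fast path, assuming the data description information reduces to an {address, length} pair; data_desc, fast_forward, and nic_send are hypothetical names that only illustrate the hand-off from the central processing unit to the forwarding stage and the network interface controller.

/*
 * Sketch of the descriptor-driven fast path the abstract outlines:
 * the CPU only hands over {address, length}; a forwarding stage reads
 * the payload and pushes it to the NIC. Names are illustrative only.
 */
#include <stdio.h>
#include <string.h>

struct data_desc {            /* data description information   */
    const char *addr;         /* where the requested data lives */
    size_t      len;          /* how many bytes to forward      */
};

/* NIC stage: transmit the payload to the user */
static void nic_send(const char *buf, size_t len)
{
    printf("NIC -> user: %.*s\n", (int)len, buf);
}

/* rapid forwarding module: fetch by descriptor, no CPU copy involved */
static void fast_forward(const struct data_desc *d)
{
    nic_send(d->addr, d->len);
}

int main(void)
{
    static const char storage[] = "payload-bytes-from-storage";
    struct data_desc d = { storage, strlen(storage) };  /* built by the CPU */
    fast_forward(&d);                                   /* CPU is now free  */
    return 0;
}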
Abstract:
An embodiment of the present invention discloses a co-processing acceleration method, including: receiving a co-processing request message that is sent by a compute node in a computer system and carries address information of to-be-processed data; obtaining, according to the co-processing request message, the to-be-processed data and storing the to-be-processed data in a public buffer card; and allocating the to-be-processed data stored in the public buffer card to an idle co-processor card in the computer system for processing. The added public buffer card serves as a public data buffer channel between a hard disk and each co-processor card of the computer system, so the to-be-processed data does not need to be transferred through a memory of the compute node. This avoids the overhead of transmitting the data through the memory of the compute node, thereby breaking through the bottleneck of memory latency and bandwidth and increasing the co-processing speed.
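A toy model of the allocation step, assuming the public buffer card can be treated as a staging area and the co-processor cards as a set of busy/idle slots; coproc, find_idle, and dispatch are invented names, and the sketch only illustrates handing a buffered block to an idle card.

/*
 * Toy model (not the patented system) of the dispatch step: data lands
 * in a shared buffer card and is handed to whichever co-processor card
 * is currently idle. Structure and function names are hypothetical.
 */
#include <stdio.h>

#define NUM_CARDS 3

struct coproc { int id; int busy; };

/* pick an idle co-processor card, or -1 if all are busy */
static int find_idle(const struct coproc *cards, int n)
{
    for (int i = 0; i < n; i++)
        if (!cards[i].busy)
            return i;
    return -1;
}

/* allocate one buffered block to an idle card */
static void dispatch(struct coproc *cards, int n, int block)
{
    int i = find_idle(cards, n);
    if (i < 0) {
        printf("block %d stays in the public buffer card\n", block);
        return;
    }
    cards[i].busy = 1;
    printf("block %d -> co-processor card %d\n", block, cards[i].id);
}

int main(void)
{
    struct coproc cards[NUM_CARDS] = { {0, 0}, {1, 1}, {2, 0} };
    for (int block = 0; block < 4; block++)
        dispatch(cards, NUM_CARDS, block);
    return 0;
}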
Abstract:
A data processing method and apparatus, a PCI-E bus system, and a server are provided. The method includes: configuring address information of a PCI-E memory of a PCI-E device, so that the PCI-E device stores received data in the PCI-E memory; and controlling a CPU to access the data stored in the PCI-E memory, so that the CPU processes the data. This avoids a prior-art problem in which, because the PCI-E device stores received data in a memory of the CPU, access to that data by another CPU occupies part of the bandwidth of the bus between the two CPUs and also occupies the bus through which the first CPU accesses its own memory, thereby improving the utilization rate of the CPU.
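A minimal sketch of the idea, assuming the PCI-E memory of the device can be modeled as a buffer that the device writes and the CPU reads in place; pcie_window, device_receive, and cpu_process are hypothetical names used only to show that no copy into the CPU's own memory is involved.

/*
 * Minimal sketch of the idea in the abstract: the device keeps received
 * data in its own PCI-E memory window, and the CPU reads it from there
 * instead of from system memory. The "window" here is just an array;
 * all names are invented for illustration.
 */
#include <stdio.h>
#include <string.h>

#define WINDOW_SIZE 64

/* stand-in for the PCI-E memory on the device, exposed through a BAR */
static char pcie_window[WINDOW_SIZE];

/* device side: received data is written into the device's own memory */
static void device_receive(const char *data, size_t len)
{
    if (len > WINDOW_SIZE - 1)
        len = WINDOW_SIZE - 1;
    memcpy(pcie_window, data, len);
    pcie_window[len] = '\0';
}

/* CPU side: process the data in place, no copy into system memory */
static void cpu_process(void)
{
    printf("CPU processed: %s\n", pcie_window);
}

int main(void)
{
    device_receive("packet-from-the-wire", strlen("packet-from-the-wire"));
    cpu_process();
    return 0;
}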