Abstract:
Embodiments of the present invention include a method for measuring an angle between a first surface and a second surface of an object. The method includes rotating the object around a center axis of the object and directing light from a light source perpendicular to the center axis. The method further includes measuring an intensity of the reflected light with respect to time and determining an angle between two or more surfaces of the object based on the intensity of the reflected light with respect to time.
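As a rough illustration of the measurement principle (a minimal sketch; the function, the constant rotation-speed assumption, and the peak-detection threshold are assumptions, not taken from the abstract): if the object rotates at a known rate, each flat surface produces a reflection peak once per revolution, and the time between peaks multiplied by the angular velocity gives the angle between the surfaces.

```python
# Hypothetical sketch: estimating the angle between two flat surfaces of a
# rotating object from the timing of reflection peaks. Names, the constant
# rotation-speed assumption, and the threshold are illustrative only.
import numpy as np

def angle_between_surfaces(times_s, intensities, rpm):
    """Return the angle (degrees) between the two strongest reflecting surfaces.

    Assumes the object rotates at a constant speed (rpm) and that each flat
    surface produces one intensity peak per revolution when its normal sweeps
    past the light source.
    """
    omega_deg_per_s = rpm * 360.0 / 60.0          # rotation rate in degrees per second
    threshold = intensities.mean() + 2 * intensities.std()
    peaks = []
    for i in range(1, len(intensities) - 1):
        if (intensities[i] > threshold
                and intensities[i] >= intensities[i - 1]
                and intensities[i] >= intensities[i + 1]):
            peaks.append(times_s[i])
    if len(peaks) < 2:
        raise ValueError("need at least two reflection peaks")
    dt = peaks[1] - peaks[0]                      # time between the first two peaks
    return (dt * omega_deg_per_s) % 360.0

# Example with synthetic data: peaks 0.025 s apart at 600 rpm -> 90 degrees
t = np.linspace(0.0, 0.1, 1001)
sig = np.exp(-((t - 0.030) / 0.001) ** 2) + np.exp(-((t - 0.055) / 0.001) ** 2)
print(round(angle_between_surfaces(t, sig, rpm=600.0), 1))   # -> 90.0
```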
Abstract:
A slider is described with a resistive electro-lapping guide (ELG) that is aligned with a structure in the write head, such as the throat height or trailing shield thickness, extends from the lapping region through the ABS, and is connected to pads on the surface of the slider. In a second embodiment, the ELG is disposed entirely in the section of the slider that will be removed by lapping. Another embodiment of the invention is a system for single-slider lapping that simultaneously monitors the resistance of the read sensor or a read-head ELG and of at least one ELG aligned with a structure in the write head. A controller uses the resistance information to implement an algorithm that decides when lapping should be terminated.
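A minimal sketch of the kind of termination decision such a controller might implement, assuming the stop criterion is a target ELG resistance; the function name, targets, and stop rule are illustrative assumptions, not taken from the abstract.

```python
# Hypothetical sketch of the lapping-termination decision described above.
# Resistance targets and the stop rule are illustrative assumptions.
def should_stop_lapping(read_elg_ohms, write_elg_ohms,
                        read_target_ohms, write_target_ohms):
    """Return True when either monitored element has reached its target.

    ELG resistance rises as material is lapped away, so reaching the target
    resistance indicates the aligned feature (e.g. throat height) has been
    lapped to its intended dimension.
    """
    return (read_elg_ohms >= read_target_ohms
            or write_elg_ohms >= write_target_ohms)

# Example: read-sensor ELG at 310 ohms, write-head ELG at 245 ohms
print(should_stop_lapping(310.0, 245.0,
                          read_target_ohms=300.0,
                          write_target_ohms=260.0))   # -> True
```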
Abstract:
Inorganic resin compositions comprising, in combination, an aqueous solution of a metal phosphate, an oxy-boron compound, a wollastonite compound, and other optional additives; inorganic composite articles and products reinforced by fillers and fibers, including glass fibers, obtained from these compositions; and processes for preparing said products.
Abstract:
Subject matter described herein includes a multi-layer search-engine index. The search-engine index is divided into multiple indexes, each of which includes a respective set of information used to serve (i.e., respond to) a query. One layer is a term index, which organizes a set of terms that are found among a collection of documents. Another layer is a document index, which organizes a set of documents that are searchable. A computing device is used to serve the search-engine index (i.e., to analyze the index when identifying documents relevant to a search query). For example, a solid-state device might be used to serve the multi-layer search-engine index.
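A minimal sketch of the two-layer structure described above, assuming a term index that maps terms to document identifiers and a document index that stores the searchable documents; class and method names are illustrative, not from the described system.

```python
# Illustrative two-layer index: a term index mapping terms to document ids,
# and a document index holding the searchable documents.
from collections import defaultdict

class MultiLayerIndex:
    def __init__(self):
        self.term_index = defaultdict(set)   # term -> set of document ids
        self.document_index = {}             # document id -> document text

    def add_document(self, doc_id, text):
        self.document_index[doc_id] = text
        for term in text.lower().split():
            self.term_index[term].add(doc_id)

    def serve_query(self, query):
        """Intersect posting sets for the query terms, then fetch the documents."""
        terms = query.lower().split()
        if not terms:
            return []
        doc_ids = set.intersection(*(self.term_index[t] for t in terms))
        return [self.document_index[d] for d in sorted(doc_ids)]

index = MultiLayerIndex()
index.add_document(1, "solid state devices serve index lookups quickly")
index.add_document(2, "the document index organizes searchable documents")
print(index.serve_query("index documents"))   # -> only document 2 matches
```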
Abstract:
In a data over satellite system, preallocation of upstream channel resources is provided by a scheduler at the gateway satellite modem termination system (SMTS) in response to a request from the user terminal, wherein the user terminal detects web-browsing and/or bulk transfers that involve large amounts of data transfer from users via the upstream channel. A type-length-value (TLV) field is included with data packets transmitted to the gateway SMTS, where excess transfer capability is allocated to the user terminal in anticipation of load requirements.
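A minimal sketch of how a terminal might attach a TLV field carrying an anticipated-load hint and how the gateway might decode it; the type code, field widths, and byte order are assumptions, not specified by the abstract.

```python
# Hypothetical TLV encoding of an upstream preallocation hint. The type code,
# field widths, and byte order are illustrative assumptions.
import struct

PREALLOC_REQUEST_TYPE = 0x21          # assumed TLV type code for "preallocation hint"

def build_prealloc_tlv(anticipated_bytes):
    """Encode a TLV carrying the amount of upstream data the terminal expects to send."""
    value = struct.pack(">I", anticipated_bytes)                     # 4-byte big-endian value
    return struct.pack(">BB", PREALLOC_REQUEST_TYPE, len(value)) + value

def parse_tlv(tlv_bytes):
    """Gateway-side decode: return (type, anticipated_bytes)."""
    tlv_type, length = struct.unpack(">BB", tlv_bytes[:2])
    (anticipated_bytes,) = struct.unpack(">I", tlv_bytes[2:2 + length])
    return tlv_type, anticipated_bytes

tlv = build_prealloc_tlv(1_500_000)   # terminal detects a bulk upload of roughly 1.5 MB
print(parse_tlv(tlv))                 # -> (33, 1500000)
```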
Abstract:
Upstream information arriving through a gateway from a user terminal in a satellite link subject to propagation delay is efficiently scheduled through a modified Demand Assigned Multiple Access (DAMA) algorithm such that data packets arriving at the software queue at the user terminal are concatenated to form a large frame for transmission, improving efficiency. A piggyback request, replacing a conventional DAMA contention request for the succeeding packet, is issued to request bandwidth allocation for the succeeding concatenated packet. In a specific embodiment, all packets up to the physical request limit that have arrived at the user terminal since a prior piggyback request or contention request are concatenated, so that all currently known packets (up to that limit) are accounted for by the next piggyback request.
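A minimal sketch of the concatenate-and-piggyback idea, assuming a byte-based frame limit and a queue of pending packets; the frame limit, queue handling, and request format are illustrative assumptions, not the patented algorithm itself.

```python
# Illustrative concatenation of queued packets into one frame, with a
# piggyback request sized for whatever remains in the queue.
from collections import deque

MAX_FRAME_BYTES = 4096   # assumed physical request limit per transmission

def build_frame_and_piggyback(queue: deque):
    """Concatenate queued packets into one frame and compute the piggyback request.

    Packets that arrived since the last request are drained (up to the frame
    limit) and concatenated; the piggyback request asks for enough bandwidth
    to carry whatever is still waiting in the queue.
    """
    frame, used = [], 0
    while queue and used + len(queue[0]) <= MAX_FRAME_BYTES:
        pkt = queue.popleft()
        frame.append(pkt)
        used += len(pkt)
    piggyback_request_bytes = sum(len(p) for p in queue)   # bandwidth for what remains
    return b"".join(frame), piggyback_request_bytes

q = deque([b"a" * 1200, b"b" * 1200, b"c" * 1200, b"d" * 1200])
frame, request = build_frame_and_piggyback(q)
print(len(frame), request)   # -> 3600 1200
```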