Abstract:
Systems and methods for automatically building a deadlock-free inter-communication network in a multi-core system are described. The example implementations described herein involve a high-level specification that captures the internal dependencies of various cores and is used along with the user-specified system traffic profile to automatically detect protocol-level deadlocks in the system. When all detected deadlocks are resolved, or no such deadlocks are present, messages in the traffic profile between various cores of the system may be automatically mapped to the interconnect channels, and network-level deadlocks may be detected. Detected deadlocks may then be avoided by re-allocation of channel resources. An example implementation of the internal dependency specification, and of its use in a deadlock avoidance scheme, is presented for Network-on-Chip interconnects for large-scale multi-core system-on-chips.
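The deadlock check described above amounts to searching for cycles in a dependency graph built from the core-internal dependencies and the traffic profile. Below is a minimal Python sketch of that idea; the graph encoding, the function names, and the data layouts (internal_deps, traffic_profile, find_cycle) are illustrative assumptions, not the specification format described in the abstract.

    # Hypothetical sketch: protocol-level deadlock detection as cycle detection
    # over a dependency graph built from core-internal dependencies and the
    # user-specified traffic profile. Names and data layout are illustrative.

    def build_dependency_graph(internal_deps, traffic_profile):
        """internal_deps: {core: [(in_msg, out_msg), ...]} -- the core can
        consume in_msg only after it is able to send out_msg.
        traffic_profile: [(src_core, dst_core, msg), ...] -- system messages.
        Returns an adjacency dict over (core, msg) nodes."""
        graph = {}

        def add_edge(a, b):
            graph.setdefault(a, set()).add(b)
            graph.setdefault(b, set())

        # Message delivery creates a dependency from the sender's output
        # to the receiver's input.
        for src, dst, msg in traffic_profile:
            add_edge((src, msg), (dst, msg))
        # Core-internal dependencies chain an incoming message to the
        # outgoing message it waits on.
        for core, deps in internal_deps.items():
            for in_msg, out_msg in deps:
                add_edge((core, in_msg), (core, out_msg))
        return graph

    def find_cycle(graph):
        """Simple DFS cycle detection; a cycle indicates a potential
        protocol-level deadlock that must be resolved."""
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {v: WHITE for v in graph}

        def dfs(v, path):
            color[v] = GRAY
            path.append(v)
            for w in graph[v]:
                if color[w] == GRAY:          # back edge: cycle found
                    return path[path.index(w):] + [w]
                if color[w] == WHITE:
                    cycle = dfs(w, path)
                    if cycle:
                        return cycle
            path.pop()
            color[v] = BLACK
            return None

        for v in list(graph):
            if color[v] == WHITE:
                cycle = dfs(v, [])
                if cycle:
                    return cycle
        return None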
Abstract:
An apparatus comprising a plurality of nodes and a plurality of links connecting the nodes in a ring topology, wherein a first node from among the plurality of nodes is coupled to a first link from among the plurality of links, wherein the first link comprises a plurality of virtual channels, and wherein each of the plurality of virtual channels is assigned to provide service to a unique one of the plurality of nodes.
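A minimal sketch of the per-node virtual-channel assignment follows, assuming one outgoing link per node and one virtual channel reserved per possible destination node; the function name and data layout are invented for illustration, not the claimed hardware structure.

    # Hypothetical sketch: on a ring, a link's virtual channels are assigned
    # one-to-one to destination nodes, so traffic destined for different
    # nodes never shares a VC.

    def assign_virtual_channels(num_nodes):
        """Return, for each node's outgoing link, a map from destination
        node id to the virtual channel reserved for it."""
        assignment = {}
        for node in range(num_nodes):
            link_vcs = {}
            for vc, dest in enumerate(d for d in range(num_nodes) if d != node):
                link_vcs[dest] = vc  # VC 'vc' serves only destination 'dest'
            assignment[node] = link_vcs
        return assignment

    # Example: on a 4-node ring, node 0's link reserves VC 0 for node 1,
    # VC 1 for node 2, and VC 2 for node 3.
    print(assign_virtual_channels(4)[0])  # {1: 0, 2: 1, 3: 2}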
Abstract:
Systems and methods described herein are directed to solutions for NoC interconnects that provide end-to-end uniform- and weighted-fair allocation of resource bandwidths among various contenders. The example implementations are fully distributed and involve tagging the messages with meta-information when the messages are injected into the interconnection network. Example implementations may involve routers using various arbitration phases and making local arbitration decisions based on the meta-information of incoming messages. The meta-information can be of various types based on the number of router arbitration phases and the desired level of sophistication.
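A minimal sketch of the tag-at-injection idea is shown below, assuming the meta-information carried by each message is its injecting source and a requested bandwidth share, and assuming a simple credit-based local arbiter; the field names and the specific credit policy are assumptions for illustration, not the patent's mechanism.

    # Hypothetical sketch: messages are tagged at injection and each router
    # arbitrates locally using only those tags.

    from dataclasses import dataclass

    @dataclass
    class Message:
        source: int          # injecting node id, carried as meta-information
        weight: float        # requested bandwidth share, also carried as a tag
        payload: object = None

    def arbitrate(contenders, credit):
        """One local arbitration round at a router output port.
        Each contending source accrues credit equal to its weight tag; the
        source with the most accumulated credit wins and pays one unit of
        service, so grants converge toward the weighted shares (assuming the
        shares of persistent contenders sum to roughly 1)."""
        for m in contenders:
            credit[m.source] = credit.get(m.source, 0.0) + m.weight
        winner = max(contenders, key=lambda m: credit[m.source])
        credit[winner.source] -= 1.0   # one message of service consumed
        return winner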
Abstract:
An approach is provided for providing recommendations based on a recommendation model and a context-based rule. A recommendation platform receives a request for generating at least one recommendation, the request including at least one user identifier, at least one application identifier, or a combination thereof. Next, the recommendation platform determines at least one recommendation model associated with the at least one user identifier, the at least one application identifier, or a combination thereof. Then, the recommendation platform determines at least one context-based recommendation rule. Then, the recommendation platform processes and/or facilitates a processing of the at least one recommendation model, the at least one context-based recommendation rule, or a combination thereof for generating the at least one recommendation.
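As a rough illustration of the described flow (receive a request carrying user and/or application identifiers, determine the associated recommendation model, determine the applicable context-based rules, and process them together), here is a hypothetical sketch; all names and data shapes are invented.

    def generate_recommendation(request, models, context_rules, context):
        """request: {'user_id': ..., 'app_id': ...} (either may be None)."""
        # 1. Determine the recommendation model for this user and/or application.
        model = (models.get((request.get("user_id"), request.get("app_id")))
                 or models.get((request.get("user_id"), None)))
        if model is None:
            return []
        # 2. Determine the context-based recommendation rules that currently apply.
        active_rules = [r for r in context_rules if r["applies"](context)]
        # 3. Process the model and the rules together to produce recommendations.
        candidates = model(context)              # model returns ranked candidates
        for rule in active_rules:
            candidates = rule["transform"](candidates, context)
        return candidates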
Abstract:
An approach is provided for querying media based on media characteristics. A media platform processes and/or facilitates a processing of one or more images, one or more videos, or a combination thereof to determine one or more latent vectors associated with the one or more images, the one or more videos, or the combination thereof. The media platform further causes, at least in part, a comparison of the one or more latent vectors to one or more models. The media platform also causes, at least in part, an indexing of the one or more images, the one or more videos, or the combination thereof based, at least in part, on the one or more latent vectors, the one or more models, or a combination thereof.
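A minimal sketch of the comparison-and-indexing step, assuming each item's latent vector is compared to per-concept reference vectors by cosine similarity; the feature extractor, the model representation, and the threshold are placeholders rather than the platform's actual components.

    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def index_media(items, extract_latent, concept_models, threshold=0.7):
        """items: iterable of media objects; extract_latent: media -> vector;
        concept_models: {concept_name: reference vector}.
        Returns an inverted index {concept_name: [item, ...]}."""
        index = {}
        for item in items:
            v = extract_latent(item)
            for concept, ref in concept_models.items():
                if cosine(v, ref) >= threshold:
                    index.setdefault(concept, []).append(item)
        return index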
Abstract:
An approach is provided for providing abstracted user models in accordance with one or more access policies. A model platform determines an ontology for specifying a hierarchy of one or more abstraction levels for item data used in latent factorization models. The model platform further causes, at least in part, a generation of one or more user models for the one or more abstraction levels. The model platform also causes, at least in part, a selection of at least one of the one or more user models for generating one or more recommendations for one or more applications, one or more services, or a combination thereof based, at least in part, on one or more privacy policies, one or more security policies, or a combination thereof.
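A toy sketch of abstraction-level modeling follows, using a simple count-based stand-in for a latent factorization model; the ontology, the policy field, and the level numbering are invented for illustration.

    ONTOLOGY = {  # item -> [level 0 (specific), level 1, level 2 (abstract)]
        "song:shape_of_you": ["shape_of_you", "pop", "music"],
        "song:paranoid":     ["paranoid", "metal", "music"],
    }

    def build_user_models(history, ontology, num_levels=3):
        """One count-based 'model' per abstraction level (a stand-in for a
        latent factorization model built per level)."""
        models = [dict() for _ in range(num_levels)]
        for item in history:
            for level, label in enumerate(ontology[item][:num_levels]):
                models[level][label] = models[level].get(label, 0) + 1
        return models

    def select_model(models, privacy_policy):
        """The policy selects the most detailed level it allows to be shared;
        the default is the most abstract level."""
        level = privacy_policy.get("max_detail_level", len(models) - 1)
        return models[level]

    models = build_user_models(["song:shape_of_you", "song:paranoid"], ONTOLOGY)
    print(select_model(models, {"max_detail_level": 1}))  # genre-level model only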
Abstract:
An approach is presented for providing data providers for recommendation services. A data provider platform receives at least one request for at least one recommendation from at least one device. The data provider platform further determines context information associated with the at least one device, a user of the at least one device, or a combination thereof. The data provider platform further processes and/or facilitates a processing of the context information to determine, at least in part, one or more providers for generating the at least one recommendation.
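A tiny illustrative sketch of context-driven provider selection, with invented provider records and context fields:

    def select_providers(context, providers):
        """providers: [{'name': ..., 'handles': predicate(context)}, ...]"""
        return [p["name"] for p in providers if p["handles"](context)]

    providers = [
        {"name": "local_poi_provider", "handles": lambda c: "location" in c},
        {"name": "media_provider",     "handles": lambda c: c.get("activity") == "idle"},
    ]
    print(select_providers({"location": (60.17, 24.94)}, providers))
    # ['local_poi_provider']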
Abstract:
An approach is presented for providing recommendation channels. A recommendation platform receives an input for creating at least one recommendation channel, the input specifying at least one category. Next, the recommendation platform determines one or more tokens based, at least in part, on the at least one category, wherein at least one of the one or more tokens represents context information. Then, the recommendation platform determines to create the at least one recommendation channel based, at least in part, on the one or more tokens.
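A small sketch of category-to-token expansion, where tokens prefixed with '@' stand in for context information to be filled in at query time; the token vocabulary is invented.

    CATEGORY_TOKENS = {
        "dining": ["restaurant", "cuisine", "price_range", "@location", "@time_of_day"],
    }

    def create_channel(category):
        tokens = CATEGORY_TOKENS.get(category, [category])
        return {
            "category": category,
            "tokens": tokens,
            # Tokens prefixed with '@' represent context information
            # (e.g. the user's current location or time of day).
            "context_tokens": [t for t in tokens if t.startswith("@")],
        }

    print(create_channel("dining"))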
Abstract:
An apparatus comprising a chip comprising a plurality of nodes, wherein a first node from among the plurality of nodes is configured to receive a first flit comprising a first timestamp, receive a second flit comprising a second timestamp, determine whether the first flit is older than the second flit based on the first timestamp and the second timestamp, transmit the first flit before the second flit if the first flit is older than the second flit, and transmit the second flit before the first flit if the first flit is not older than the second flit.
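A minimal sketch of the age-based ordering decision, assuming integer timestamps with no wrap-around handling (real finite-width timestamp counters would need it):

    from dataclasses import dataclass

    @dataclass
    class Flit:
        timestamp: int
        payload: bytes

    def transmit_order(first, second):
        """Return the two flits in the order they should be transmitted:
        the older flit (smaller timestamp) goes first; if the first flit is
        not older, the second flit is transmitted first, as in the abstract."""
        if first.timestamp < second.timestamp:   # first flit is older
            return [first, second]
        return [second, first]                   # first flit is not older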
Abstract:
An apparatus comprising a storage device comprising a hash table including a plurality of buckets, each bucket being capable of storing at least one data item, and a processor configured to apply at least a first and a second hash function upon receiving a key to generate a first index and a second index, respectively, the first and second indices identifying first and second potential buckets in the hash table for storing a new data item associated with the key, determine whether at least one of the first and second potential buckets has space available to store the new data item, and responsive to determining that at least one of the first and second potential buckets has available space, insert the new data item into one of the first or second potential buckets determined to have available space.
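A minimal sketch of the described insert path, assuming a bucketized table with two hash functions; eviction or rehashing when both candidate buckets are full (as in cuckoo-style schemes) is outside the abstract and omitted.

    class TwoChoiceHashTable:
        def __init__(self, num_buckets=16, bucket_size=4):
            self.bucket_size = bucket_size
            self.buckets = [[] for _ in range(num_buckets)]

        def _h1(self, key):
            return hash(key) % len(self.buckets)

        def _h2(self, key):
            # Second hash: salted with a constant for illustration; a real
            # design would use an independent hash function.
            return hash((key, 0x9e3779b9)) % len(self.buckets)

        def insert(self, key, value):
            """Insert into whichever of the two candidate buckets has space;
            return False if both candidate buckets are full."""
            for idx in (self._h1(key), self._h2(key)):
                bucket = self.buckets[idx]
                if len(bucket) < self.bucket_size:
                    bucket.append((key, value))
                    return True
            return False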