Abstract:
Embodiments provide a system and method for network tracking. By using packet capture applications having a flow identifier and a time stamper, one or more raw packets from one or more packet flows intercepted from a network can be tagged with a unique identifier and a timestamp that can later be used to aggregate packet flows that have been analyzed by one or more capture applications. The unique identifier can relate to the network interface of the particular capture application and can also have an increasing value, where the increase in value can be monotonic. Later capture applications, while capable of generating secondary timestamps, can disregard those secondary timestamps in favor of the primary timestamp of the first capture application in order to remove complications arising from latency issues.
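As a rough illustration of the tagging scheme described above, the following Python sketch (all names are hypothetical, not taken from the abstract) tags raw packets with an interface-derived, monotonically increasing identifier and a primary timestamp, lets a later capture application record secondary timestamps separately, and aggregates flows using only the primary timestamp.

```python
import itertools
import time
from dataclasses import dataclass, field

@dataclass
class TaggedPacket:
    flow_id: str          # unique per flow, monotonic per interface
    primary_ts: float     # assigned by the first capture application
    payload: bytes
    secondary_ts: list = field(default_factory=list)  # kept but disregarded

class CaptureApplication:
    def __init__(self, interface: str):
        self.interface = interface
        self._counter = itertools.count(1)  # monotonically increasing value

    def tag(self, raw_packet: bytes) -> TaggedPacket:
        # The identifier embeds the interface name and an increasing counter.
        flow_id = f"{self.interface}-{next(self._counter)}"
        return TaggedPacket(flow_id=flow_id, primary_ts=time.time(), payload=raw_packet)

class DownstreamCapture:
    def observe(self, pkt: TaggedPacket) -> TaggedPacket:
        # A later capture application can note its own timestamp, but it is
        # stored separately and disregarded in favor of the primary timestamp.
        pkt.secondary_ts.append(time.time())
        return pkt

def aggregate(packets):
    # Group analyzed packets by flow identifier; ordering uses only the
    # primary timestamp, sidestepping latency skew between capture points.
    flows = {}
    for pkt in sorted(packets, key=lambda p: p.primary_ts):
        flows.setdefault(pkt.flow_id, []).append(pkt)
    return flows
```

Keying aggregation on the primary timestamp alone is what removes the latency complications mentioned above: however late a downstream capture application sees a packet, the ordering it reports is unaffected.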
Abstract:
A mashup session manager maintains state of the mashup session to ensure presentation consistency/uniformity across the execution environments. The mashup session manager also tracks the participating execution environments associated with a mashup session (e.g., usernames, device identifiers, network addresses, etc.), and transmits data for presentation consistency to the participating execution environments. In some cases, a view of the mashup session at a participating execution environment may not be current (“stale mashup session view”). The mashup session manager can detect if a view at a participating execution environment is of a past mashup session state, and provide data for the stale mashup session view to become current. In addition, the mashup session manager can propagate design modifications to the participants of the mashup session.
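A minimal Python sketch of this behaviour is given below, with hypothetical class and method names: the manager tracks participants (username, device identifier, network address), versions the session state, detects a stale mashup session view by comparing a reported version against the current one, and propagates design modifications to every participant.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    username: str
    device_id: str
    network_address: str
    last_seen_version: int = 0

class MashupSessionManager:
    def __init__(self):
        self.version = 0            # increments on every design modification
        self.state = {}             # current mashup session state
        self.participants = {}      # keyed by device identifier

    def join(self, username, device_id, network_address):
        self.participants[device_id] = Participant(username, device_id, network_address)
        return self.version, dict(self.state)   # initial, consistent view

    def modify(self, change: dict):
        # A design modification advances the session version and is
        # propagated to every participating execution environment.
        self.state.update(change)
        self.version += 1
        return [(p.network_address, change) for p in self.participants.values()]

    def sync(self, device_id, reported_version):
        # Detect a stale mashup session view: the reported version lags the
        # manager's version, so send the data needed to become current.
        participant = self.participants[device_id]
        if reported_version < self.version:
            participant.last_seen_version = self.version
            return dict(self.state)  # here, simply the full current state
        return None                  # view is already current
```

In this sketch the manager answers a stale view with the full current state; an implementation could equally return only the changes since the reported version.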
Abstract:
At least one validated results set is determined and stored in a global ontology database for future use by an entity that subscribes to the global ontology database. If global ontology data is stored in a global ontology database, an attempt is made to determine a mapping between first and second ontologies. If a mapping between the first and second ontologies can be determined from the global ontology data, the mapping is validated and the validated mapping is defined as a validated results set. If global ontology data is not stored in a global ontology database, or a mapping between the first and second ontologies cannot be determined from the global ontology data stored in the global ontology database, the first and second ontologies are unified by determining a mapping between them; that mapping is validated and the validated mapping is defined as a validated results set. The validated results set is stored in the global ontology database for future use by an entity that subscribes to the global ontology database.
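The decision flow can be summarized in the following Python sketch. The function and database methods (`lookup`, `store`) are assumptions standing in for the unspecified mapping, unification, and validation procedures.

```python
def resolve_mapping(first_ontology, second_ontology, global_db,
                    derive_mapping, unify, validate):
    """Sketch of the flow: reuse global ontology data when possible,
    otherwise unify, then validate and store the result."""
    # Attempt to determine a mapping from existing global ontology data.
    existing = global_db.lookup(first_ontology, second_ontology)
    mapping = derive_mapping(existing) if existing else None

    if mapping is None:
        # No global data, or no mapping derivable from it:
        # unify the two ontologies to determine a mapping directly.
        mapping = unify(first_ontology, second_ontology)

    # Validate the mapping; the validated mapping is the validated results set.
    validated_results_set = validate(mapping)

    # Store for future use by entities subscribing to the global database.
    global_db.store(first_ontology, second_ontology, validated_results_set)
    return validated_results_set
```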
Abstract:
A method, apparatus, and computer program product for selectively storing network traffic data are described. Network traffic is stored in a first repository according to a first packet filtering policy. The stored network traffic in the first repository is scanned according to a second packet filtering policy to identify a subset of the network traffic for archiving. The subset identified by the second packet filtering policy comprises forensically interesting packets concerning a security issue. The identified subset of network traffic from the first repository is then stored in a second repository.
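The two-stage scheme is sketched below in Python, with hypothetical policy functions and in-memory lists standing in for the two repositories: a broad first policy decides what is captured, and a second policy later rescans the first repository to flag forensically interesting packets for archiving.

```python
def first_stage(packets, first_policy, first_repository):
    # Store network traffic matching the first packet filtering policy.
    for pkt in packets:
        if first_policy(pkt):
            first_repository.append(pkt)

def second_stage(first_repository, second_policy, second_repository):
    # Scan the first repository; the second policy identifies the subset of
    # forensically interesting packets concerning a security issue.
    for pkt in first_repository:
        if second_policy(pkt):
            second_repository.append(pkt)

# Placeholder policies: keep all TCP traffic first, then archive only
# packets destined for a sensitive port.
packets = [{"proto": "tcp", "dst_port": 22}, {"proto": "udp", "dst_port": 53}]
repo1, repo2 = [], []
first_stage(packets, lambda p: p["proto"] == "tcp", repo1)
second_stage(repo1, lambda p: p["dst_port"] == 22, repo2)
```

Separating the two policies lets the coarse capture filter run at line rate while the finer, security-focused filter runs later against data already at rest.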
Abstract:
Business processes are implemented using a data collection component for storing system knowledge comprising usage history and user input relative to activities within a community of users, where the system knowledge comprises at least a folksonomy. At least one of a user client component and a monitoring component is provided: the user client component interacts with the data collection component to enable a corresponding user to contribute user-derived information to the folksonomy, and the monitoring component monitors activities associated with the community of users and interacts with the data collection component to contribute usage information to the system knowledge. Still further, a composition design application interacts with a user to build and/or modify processes built using services, wherein the composition design application recommends candidate services that can implement aspects of the business processes based upon information derived from the system knowledge.
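The recommendation step can be illustrated with the Python sketch below (all names hypothetical): the system knowledge holds folksonomy tags contributed via the user client component and usage counts contributed by the monitoring component, and the composition design application ranks candidate services against the tags describing the aspect of the business process being designed.

```python
from collections import Counter

class SystemKnowledge:
    def __init__(self):
        self.folksonomy = {}            # service -> set of user-contributed tags
        self.usage_history = Counter()  # service -> observed usage count

    def contribute_tags(self, service, tags):   # via the user client component
        self.folksonomy.setdefault(service, set()).update(tags)

    def record_usage(self, service):            # via the monitoring component
        self.usage_history[service] += 1

def recommend_candidates(knowledge, desired_tags, top_n=3):
    # Rank services whose tags overlap the aspect of the business process
    # being designed, breaking ties by how often the community used them.
    scored = []
    for service, tags in knowledge.folksonomy.items():
        overlap = len(tags & set(desired_tags))
        if overlap:
            scored.append((overlap, knowledge.usage_history[service], service))
    scored.sort(reverse=True)
    return [service for _, _, service in scored[:top_n]]
```

For example, tagging a "payment-gateway" service with {"billing", "checkout"} and recording frequent use would cause it to rank highly when a designer works on the checkout aspect of a process.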