Abstract:
Systems, methods, and non-transitory computer-readable storage media for rotating security keys for an online synchronized content management system client. A client having a first security key as an active security key may send a request to a server for a new security key as a replacement for the first security key. The server may receive the request and generate a candidate security key. The server can issue the candidate security key to the client device. After receiving the candidate security key, the client may send a key receipt confirmation message to the server. In response to the confirmation message, the server may mark the candidate key as the new security key for the client and discard the client's old security key. The server may send an acknowledgment message to the client device. In response, the client may also mark the candidate key as its new active key.
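The rotation handshake described above can be sketched in Python. This is a minimal illustration, not the disclosed implementation; the class and method names (`KeyRotationServer`, `request_new_key`, `confirm_receipt`) are hypothetical, and `secrets.token_hex` stands in for whatever key-generation scheme the system actually uses.

```python
import secrets


class KeyRotationServer:
    """Hypothetical server side of the key-rotation handshake."""

    def __init__(self):
        self.active_keys = {}     # client_id -> active security key
        self.candidate_keys = {}  # client_id -> candidate key awaiting confirmation

    def request_new_key(self, client_id):
        # Generate a candidate key; the old key stays active until confirmed.
        candidate = secrets.token_hex(32)
        self.candidate_keys[client_id] = candidate
        return candidate

    def confirm_receipt(self, client_id):
        # Promote the candidate, discard the old key, and acknowledge.
        self.active_keys[client_id] = self.candidate_keys.pop(client_id)
        return "ack"


class KeyRotationClient:
    """Hypothetical client that rotates its active security key."""

    def __init__(self, client_id, key, server):
        self.client_id = client_id
        self.active_key = key
        self.server = server

    def rotate(self):
        # Request a candidate key, confirm receipt, then adopt it on ack.
        candidate = self.server.request_new_key(self.client_id)
        if self.server.confirm_receipt(self.client_id) == "ack":
            self.active_key = candidate  # mark candidate as new active key
```

Keeping the old key active until the confirmation/acknowledgment round trip completes is what lets either side recover if a message is lost mid-rotation.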
Abstract:
Disclosed are systems, methods, and non-transitory computer-readable storage media for detecting compromised credentials. In some implementations, a content management system can receive information identifying compromised login credentials (e.g., account identifier, password, etc.) from a third party server. The login credentials can be represented by a first hash value generated using a hashing algorithm. When a user logs in to the content management system, the user can provide the user's account identifier and password for the content management system. The content management system can generate a second hash value from the user-supplied password using the same hashing algorithm used for the compromised login credentials. The content management system can determine whether the second hash value matches the first hash value and prompt the user to provide a new password for the user's content management system account when the second hash value matches the first hash value.
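The hash-comparison check described above can be sketched as follows. SHA-256 is used here purely for illustration, since the abstract leaves the hashing algorithm unspecified; the function names and the example compromised password are hypothetical.

```python
import hashlib


def hash_password(password: str) -> str:
    # SHA-256 stands in for the (unspecified) shared hashing algorithm.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()


# First hash values received from the third party server (illustrative).
compromised_hashes = {hash_password("hunter2")}


def should_prompt_new_password(user_supplied_password: str) -> bool:
    """Compare the second hash (from the user-supplied password) against
    the first hash values; a match means the user is prompted to reset."""
    return hash_password(user_supplied_password) in compromised_hashes
```

The key point is that both sides must use the same algorithm, so the comparison can happen without the content management system ever receiving the third party's plaintext passwords.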
Abstract:
To identify whether a content item is prohibited, a content management system can generate a content item fingerprint for the content item and then compare the generated content item fingerprint to a blacklist of content item fingerprints for prohibited content items. If the generated content item fingerprint matches any of the content item fingerprints included in the blacklist, the content management system can determine that the content item is prohibited. The content management system can deny requests to share prohibited content items and/or requests to assign prohibited content items to a user account on the content management system. The content management system can generate the content item fingerprint using the content item as input in a fingerprinting algorithm that was used to generate the content item fingerprints on the blacklist.
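A minimal sketch of the blacklist check follows. SHA-256 is an assumption standing in for the unspecified fingerprinting algorithm, and the blacklist contents are illustrative.

```python
import hashlib


def fingerprint(content: bytes) -> str:
    # SHA-256 stands in for the fingerprinting algorithm used for the blacklist.
    return hashlib.sha256(content).hexdigest()


# Blacklist of fingerprints for prohibited content items (illustrative).
blacklist = {fingerprint(b"prohibited material")}


def is_prohibited(content: bytes) -> bool:
    """Generate the content item's fingerprint and check it against the blacklist."""
    return fingerprint(content) in blacklist
```

Because the same algorithm generated both sides of the comparison, identical content items always produce matching fingerprints, so a share or assignment request can be denied before the content is distributed.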
Abstract:
Various embodiments restrict or enable access to content items of an account based on login information or content request properties. For example, a synchronized online content management system can receive a request including one or more content request properties from a client device to access a user account. Access rules for the user account can be obtained and applied based on the content request properties to generate an access status. In one instance, the client device is provided with full account access if the access status indicates that the client device is an authorized device. In another instance, if the client device is an unauthorized device, at least one aspect of access to the user account is restricted.
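The rule-application step can be sketched as below. The rule structure, property names, and access-status strings are hypothetical, since the abstract does not define them.

```python
def apply_access_rules(request_props: dict, rules: dict) -> str:
    """Apply the account's access rules to the content request properties
    and return an access status (names here are illustrative)."""
    if request_props.get("device_id") in rules.get("authorized_devices", set()):
        return "full_access"   # authorized device: full account access
    return "restricted"        # unauthorized device: at least one aspect restricted
```

Example: with `rules = {"authorized_devices": {"dev-1"}}`, a request carrying `device_id="dev-1"` yields full access, while any other device identifier yields a restricted status.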
Abstract:
Disclosed are systems, methods, and non-transitory computer-readable storage media for malware detection and content item recovery. For example, a content management system can receive information describing changes made to content items stored on a user device. The content management system can analyze the information to determine whether the described changes are related to malicious software on the user device. When the changes are related to malicious software, the content management system can determine which content items are affected by the malicious software and/or determine when the malicious software first started making changes to the user device. The content management system can recover affected content items associated with the user device by replacing the affected versions of the content items with versions of the content items that existed immediately before the malicious software started making changes to the user device.
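The recovery step, rolling each content item back to the version that existed immediately before the malicious software started making changes, can be sketched as follows. The data shape (item id mapped to a time-sorted version history) is an assumption for illustration.

```python
def recover_content_items(versions: dict, malware_start: float) -> dict:
    """For each content item, return the latest version saved strictly
    before `malware_start`.

    `versions` maps item id -> list of (timestamp, data) tuples sorted
    ascending by timestamp (an assumed representation of version history).
    """
    recovered = {}
    for item_id, history in versions.items():
        clean = [data for ts, data in history if ts < malware_start]
        if clean:
            # The last clean version is the one immediately before infection.
            recovered[item_id] = clean[-1]
    return recovered
```

Determining when the malware first started making changes matters because it fixes the cutoff: every version at or after that timestamp is treated as potentially corrupted and replaced.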
Abstract:
In some embodiments, upon detecting malicious activity associated with a user account, a content management system can identify other user accounts related to the malicious user account. The content management system can identify related user accounts by comparing authentication information collected for the malicious user account with authentication information collected for other user accounts. Authentication information can include IP address information, geographic information, device type, browser type, email addresses, and/or referral information, for example. The content management system can compare the content items associated with the malicious user account to content items associated with other user accounts to determine relatedness or maliciousness. After identifying related malicious user accounts, the content management system can block all related malicious user accounts.
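The authentication-information comparison can be sketched as an attribute-overlap check. The threshold, attribute names, and flat-dictionary representation are assumptions for illustration; the disclosure does not specify how relatedness is scored.

```python
def related_accounts(malicious_auth: dict, accounts_auth: dict,
                     threshold: int = 2) -> list:
    """Flag user accounts sharing at least `threshold` authentication
    attributes (e.g., IP address, device type, browser type, email,
    referral info) with the malicious account."""
    related = []
    for account_id, auth in accounts_auth.items():
        overlap = sum(1 for key, value in malicious_auth.items()
                      if auth.get(key) == value)
        if overlap >= threshold:
            related.append(account_id)
    return related
```

Accounts flagged this way could then be subjected to the content-item comparison described above before being blocked, reducing false positives from coincidental attribute matches (e.g., a shared corporate IP address).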
Abstract:
In some embodiments, a content management system can initiate a scan of a content item when the content management system detects that activity associated with the content item triggers a scan policy. In some embodiments, a content management system can initiate a scan of a user's account when the content management system detects that activity associated with a content item triggers a scan policy. A scan policy can specify, for example, a number of shares, downloads, and/or previews of the content item allowable in a period of time. When the number of shares, downloads, and/or previews exceeds the specified number in the policy in the specified period of time, the content management system can initiate a scan (e.g., virus scan, malware scan, etc.) of the content item and/or the user's account.
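The threshold check described above can be sketched as a sliding-window counter. The class name and the choice of a deque-based window are illustrative assumptions; the disclosure only specifies an allowable count within a period of time.

```python
from collections import deque


class ScanPolicy:
    """Trigger a scan when more than `limit` activity events (shares,
    downloads, previews) occur within a sliding `window` of seconds."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.events = deque()  # timestamps of recent activity events

    def record(self, timestamp: float) -> bool:
        """Record one activity event; return True if a scan should start."""
        self.events.append(timestamp)
        # Drop events that fell outside the sliding window.
        while self.events and self.events[0] <= timestamp - self.window:
            self.events.popleft()
        return len(self.events) > self.limit
```

For example, with `ScanPolicy(limit=3, window=60)`, the fourth share/download/preview within a minute would initiate the virus or malware scan of the content item and/or the user's account.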