Abstract:
Disclosed are a distributed storage system including a plurality of proxy servers, and a method for managing an object thereof. The distributed storage system of the present invention comprises: a metadata server for storing metadata that includes the unique information of the object and the unique information of the data nodes on which the object is stored; at least two proxy servers for relaying, by referencing the metadata server when an action request is received, a client's action request for performing a management action to a target data node associated with the target object of that action; and a global load balancer for authenticating the client and for selecting, from among the proxy servers, a target proxy server to process the client's action request. The target data node performs an action according to the action request received from the target proxy server and transmits the result to the target proxy server or to the client. According to the present invention, introducing multiple proxy servers provides the scalability and resilience that are advantages of cloud computing, and gives the distributed storage system a flexible structure.
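The request flow described above can be illustrated with a minimal Python sketch; every class, method, and field name below is hypothetical and not taken from the disclosure, and authentication and proxy selection are reduced to a token check and a round-robin choice.

```python
from dataclasses import dataclass, field
from itertools import cycle


@dataclass
class MetadataServer:
    # object id -> id of the data node on which the object is stored
    object_locations: dict = field(default_factory=dict)

    def locate(self, object_id: str) -> str:
        return self.object_locations[object_id]


@dataclass
class DataNode:
    node_id: str
    store: dict = field(default_factory=dict)

    def perform(self, action: str, object_id: str, payload=None):
        # Perform the requested action and return the result.
        if action == "PUT":
            self.store[object_id] = payload
            return "stored"
        if action == "GET":
            return self.store.get(object_id)
        raise ValueError(f"unsupported action: {action}")


@dataclass
class ProxyServer:
    metadata: MetadataServer
    data_nodes: dict  # node id -> DataNode

    def relay(self, action: str, object_id: str, payload=None):
        # Reference the metadata server to find the target data node, then relay.
        target = self.data_nodes[self.metadata.locate(object_id)]
        return target.perform(action, object_id, payload)


class GlobalLoadBalancer:
    def __init__(self, proxies, credentials):
        self._proxies = cycle(proxies)   # simple round-robin proxy selection
        self._credentials = credentials  # client -> expected token

    def route(self, client: str, token: str) -> ProxyServer:
        if self._credentials.get(client) != token:
            raise PermissionError("authentication failed")
        return next(self._proxies)       # the selected target proxy server
```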
Abstract:
Bulk operations including create, update, and delete operations are supported within the context of language-integrated queries. Such bulk operations can be implemented as distinct operations. Other operations, including query operators defining a collection of data over which the bulk operations can execute, can be restricted as a function of a specific bulk operation.
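As a hedged illustration of the last two sentences, the Python sketch below (the patent addresses language-integrated queries in general, so these names are invented) models bulk update and delete as distinct operations over a collection defined by query operators, and rejects query operators that are not permitted for a given bulk operation.

```python
class Query:
    ALLOWED_FOR_BULK = {"filter"}          # e.g. ordering operators are disallowed for bulk ops

    def __init__(self, rows):
        self.rows = rows
        self.operators_used = set()

    def filter(self, predicate):
        self.operators_used.add("filter")
        self.rows = [r for r in self.rows if predicate(r)]
        return self

    def order_by(self, key):
        self.operators_used.add("order_by")
        self.rows = sorted(self.rows, key=key)
        return self

    def _check_bulk_allowed(self):
        disallowed = self.operators_used - self.ALLOWED_FOR_BULK
        if disallowed:
            raise ValueError(f"operators not allowed in bulk operations: {disallowed}")

    def bulk_update(self, **changes):
        self._check_bulk_allowed()
        for row in self.rows:
            row.update(changes)            # update every row in the query-defined collection
        return len(self.rows)

    def bulk_delete(self, source):
        self._check_bulk_allowed()
        for row in self.rows:
            source.remove(row)             # delete the matched rows from the source collection
        return len(self.rows)


people = [{"name": "a", "active": False}, {"name": "b", "active": True}]
Query(people).filter(lambda r: not r["active"]).bulk_update(active=True)
```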
Abstract:
An apparatus, method, machine-readable medium, and system are disclosed. In one embodiment, the apparatus is a micro-page table engine that includes logic capable of receiving a memory page request for a page in the global memory address space. The apparatus also includes a translation lookaside buffer (TLB) capable of storing one or more memory page address translations, and a page miss handler capable of performing a micro physical address lookup in a page miss handler tag table when the TLB does not store the memory page address translation for the page referenced by the memory page request. The apparatus further includes memory management logic capable of managing the page miss handler tag table entries. The micro-page table engine allows the TLB to act as the agent that determines whether data in a two-level memory hierarchy resides in a hot region of memory or in a cold region of memory. When data is in the cold region, the micro-page table engine fetches the data into hot memory, and a hot memory block is then pushed out to the cold memory area.
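The hot/cold handling can be summarized with a small software model. This is only an illustration of the behaviour described above, not the hardware design; the table layouts and the eviction policy (FIFO over hot blocks) are assumptions.

```python
from collections import OrderedDict


class MicroPageTableEngine:
    def __init__(self, hot_capacity):
        self.tlb = {}                      # virtual page -> micro physical address
        self.tag_table = {}                # virtual page -> ("hot" | "cold", address)
        self.hot = OrderedDict()           # address -> data, in insertion order (FIFO eviction)
        self.cold = {}                     # address -> data
        self.hot_capacity = hot_capacity

    def access(self, page):
        if page in self.tlb:               # TLB hit: the page is already in hot memory
            return self.hot[self.tlb[page]]
        return self._page_miss(page)

    def _page_miss(self, page):
        region, addr = self.tag_table[page]     # micro physical address lookup in the tag table
        if region == "cold":                    # fetch the cold page into hot memory
            if len(self.hot) >= self.hot_capacity:
                self._evict_to_cold()
            self.hot[addr] = self.cold.pop(addr)
            self.tag_table[page] = ("hot", addr)
        self.tlb[page] = addr                   # install the translation in the TLB
        return self.hot[addr]

    def _evict_to_cold(self):
        victim_addr, data = self.hot.popitem(last=False)   # push a hot block out to cold memory
        self.cold[victim_addr] = data
        for p, (region, a) in self.tag_table.items():
            if a == victim_addr and region == "hot":
                self.tag_table[p] = ("cold", victim_addr)
                self.tlb.pop(p, None)          # its translation no longer points at hot memory
```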
Abstract:
Customization of a software program, such as a business application, is enabled without modifying the program's source code. External pieces of source code may be executed prior to, and/or following, the invocation of selected methods. External methods executed prior to a designated method call may change the parameter values with which the designated method is called, and methods executed after the designated method has been called may change the value it returns.
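A short Python sketch may make the pre/post mechanism concrete; the patent is not tied to Python, and `add_hooks`, `BillingService`, and `compute_discount` are hypothetical names. The hook attached before the call rewrites the parameters, the hook attached after rewrites the returned value, and the original source stays unmodified.

```python
import functools


def add_hooks(owner, method_name, pre=None, post=None):
    """Wrap an existing method so external code runs before and/or after it."""
    original = getattr(owner, method_name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        if pre is not None:
            args, kwargs = pre(args, kwargs)   # may change the call parameters
        result = original(*args, **kwargs)
        if post is not None:
            result = post(result)              # may change the returned value
        return result

    setattr(owner, method_name, wrapper)


class BillingService:                          # existing application code, left unmodified
    def compute_discount(self, price, rate):
        return price * rate


# "External" customization: cap the rate passed in and round the value returned.
add_hooks(
    BillingService, "compute_discount",
    pre=lambda args, kwargs: ((args[0], args[1], min(args[2], 0.30)), kwargs),
    post=lambda result: round(result, 2),
)
print(BillingService().compute_discount(100.0, 0.5))   # 30.0, not 50.0
```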
Abstract:
A mobile computing device with a mobile operating system and a desktop operating system running concurrently and independently on a shared kernel without virtualization. The mobile operating system provides a user experience for the mobile computing device that suits the mobile environment. The desktop operating system provides a full desktop user experience when the mobile computing device is docked to a second user environment. Cross-environment notification and event handling allow the user to be notified of and respond to events occurring within the mobile operating system through the user environment associated with the desktop operating system. Events that may trigger cross-environment notification may be local events and/or remote events. The mobile computing device may be a smartphone running the Android mobile operating system and a full desktop Linux distribution on a modified Android kernel.
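A high-level Python sketch of the cross-environment notification path follows; it is an assumption-laden stand-in (queues in place of the shared-kernel IPC, invented class names), intended only to show an event raised in the mobile environment being surfaced and answered through the desktop environment.

```python
import queue


class SharedChannel:
    """Stands in for the shared-kernel communication path between the environments."""
    def __init__(self):
        self.to_desktop = queue.Queue()
        self.to_mobile = queue.Queue()


class MobileEnvironment:
    def __init__(self, channel):
        self.channel = channel

    def raise_event(self, event):                    # a local or remote event
        self.channel.to_desktop.put(event)

    def handle_response(self):
        action = self.channel.to_mobile.get()
        return f"mobile OS performs: {action}"


class DesktopEnvironment:
    def __init__(self, channel):
        self.channel = channel

    def show_notification_and_respond(self, action):
        event = self.channel.to_desktop.get()        # surface it in the desktop UI
        print(f"desktop notification: {event}")
        self.channel.to_mobile.put(action)           # the user's response goes back


channel = SharedChannel()
mobile, desktop = MobileEnvironment(channel), DesktopEnvironment(channel)
mobile.raise_event("incoming call from Alice")
desktop.show_notification_and_respond("answer")
print(mobile.handle_response())
```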
Abstract:
Automatic system upgrades are provided to a customer by a service provider based on the customer's preferences and parameters. The automatic upgrade system may manage customer preferences for the scheduling of automatic upgrades according to parameters such as date, time, and capacity. The upgrade system may automatically provide system upgrades to customer systems according to those preferences, taking into account system health and status, customer parameters, and customer priority as determined by the service provider.
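The scheduling decision can be sketched as follows; the preference fields, health flag, and capacity measure are illustrative stand-ins for whatever the upgrade system actually tracks.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class CustomerPreferences:
    window_start_hour: int       # e.g. 1 for 01:00
    window_end_hour: int         # e.g. 5 for 05:00
    min_free_capacity: float     # fraction of capacity that must be free


@dataclass
class CustomerSystem:
    name: str
    priority: int                # assigned by the service provider; higher runs first
    prefs: CustomerPreferences
    healthy: bool
    free_capacity: float


def eligible(system: CustomerSystem, now: datetime) -> bool:
    # Respect the customer's window and the system's current health and capacity.
    in_window = system.prefs.window_start_hour <= now.hour < system.prefs.window_end_hour
    return (in_window and system.healthy
            and system.free_capacity >= system.prefs.min_free_capacity)


def schedule_upgrades(systems, now=None):
    now = now or datetime.now()
    candidates = [s for s in systems if eligible(s, now)]
    return sorted(candidates, key=lambda s: s.priority, reverse=True)
```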
Abstract:
Concepts and technologies are described herein for contextual and task-focused computing. In some embodiments, a discovery engine analyzes application data describing applications, recognizes tasks associated with the applications, and stores task data identifying and describing the tasks. The task data is searchable by search engines, indexing and search services, and task engines configured to provide tasks to client devices operating alone or in a synchronized manner, the tasks being provided on demand or based upon activity associated with the client devices. A task engine receives or obtains contextual data describing context associated with the client devices and/or social networking data associated with users of the client devices. Based upon the contextual data and/or the social networking data, the task engine identifies one or more relevant tasks and provides to the client devices information for accessing the relevant tasks, or packaged data corresponding to the relevant tasks.
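The relevance step lends itself to a small sketch; the keyword-overlap scoring and the `Task` schema below are assumptions, used only to show contextual data and social-networking data jointly selecting tasks.

```python
from dataclasses import dataclass, field


@dataclass
class Task:
    task_id: str
    application: str
    keywords: set = field(default_factory=set)


def relevant_tasks(tasks, contextual_terms, social_terms, limit=3):
    def score(task):
        # Contextual matches are weighted more heavily than social ones here.
        return (2 * len(task.keywords & set(contextual_terms))
                + len(task.keywords & set(social_terms)))
    ranked = sorted(tasks, key=score, reverse=True)
    return [t for t in ranked if score(t) > 0][:limit]


catalog = [
    Task("book-table", "DiningApp", {"restaurant", "reserve", "dinner"}),
    Task("share-photo", "PhotoApp", {"photo", "friends", "share"}),
]
print(relevant_tasks(catalog,
                     contextual_terms={"dinner", "downtown"},
                     social_terms={"friends"}))
```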
Abstract:
Subject matter described herein is directed to reallocating an application component from a faulty data-center resource to a non-faulty data-center resource. Background monitors identify data-center resources that are faulty and schedule migration of application components from the faulty data-center resources to non-faulty data-center resources. Migration is carried out in an automatic manner that allows an application to remain available. Thresholds are in place to control the rate of migration, as well as to detect when resource failure might result from data-center-wide processes or from an application failure.
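A minimal sketch of the migration control follows, assuming an invented resource/component representation and a 20% threshold: migrations are planned only for components on faulty resources, and a burst of apparent faults above the threshold is treated as a possible data-center-wide or application failure rather than migrated through.

```python
def plan_migrations(components, resources, max_migration_fraction=0.2):
    """components: component -> resource; resources: resource -> 'healthy' | 'faulty'."""
    faulty = {r for r, status in resources.items() if status == "faulty"}
    healthy = [r for r, status in resources.items() if status == "healthy"]
    to_move = [c for c, r in components.items() if r in faulty]

    # Threshold check: too many simultaneous "faults" suggests a wider problem.
    if len(to_move) > max_migration_fraction * len(components):
        return {"action": "hold",
                "reason": "possible data-center-wide or application failure"}
    if to_move and not healthy:
        return {"action": "hold", "reason": "no healthy resources available"}

    plan = {}
    for i, component in enumerate(to_move):
        plan[component] = healthy[i % len(healthy)]   # spread components over healthy resources
    return {"action": "migrate", "plan": plan}
```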
Abstract:
Hardened programmable logic devices are provided with programmable circuitry. The programmable circuitry may be hardwired to implement a custom logic circuit. Generic fabrication masks may be used to form the programmable circuitry and may be used in manufacturing a product family of hardened programmable logic devices, each of which may implement a different custom logic circuit. Custom fabrication masks may be used to hardwire the programmable circuitry to implement a specific custom logic circuit. The programmable circuitry may be hardwired in such a way that signal timing characteristics of a hardened programmable logic device that implements a custom logic circuit may match the signal timing characteristics of a programmable logic device that implements the same custom logic circuit using configuration data.
Abstract:
One example discloses a data manager of a data collector (DCDM) 8 executing on a virtual machine 6 for managing sensitive data. The DCDM 8 can have a conformance certificate that characterizes the functionality of the DCDM 8. The DCDM 8 can request sensitive data from a data subject 16, wherein the request for the sensitive data includes the conformance certificate. The DCDM 8 can further receive, in response to the request, the sensitive data encrypted with an encrypted secret key. The secret key can be decrypted with a private key stored at a trusted platform module for the data collector (DCTPM) 12.
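The encryption flow can be illustrated with the third-party Python `cryptography` package; the in-process RSA key pair below merely stands in for the DCTPM 12 (whose private key would never leave the module), and the conformance-certificate exchange is omitted.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Stand-in for the DCTPM: in a real deployment the private half stays inside the TPM.
tpm_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
tpm_public_key = tpm_private_key.public_key()

# Data subject side: encrypt the sensitive data with a fresh secret key,
# then encrypt that secret key to the collector's TPM public key.
secret_key = Fernet.generate_key()
encrypted_data = Fernet(secret_key).encrypt(b"sensitive record")
encrypted_secret_key = tpm_public_key.encrypt(secret_key, OAEP)

# Collector side (inside the certified DCDM): the TPM decrypts the secret key,
# which then decrypts the data.
recovered_key = tpm_private_key.decrypt(encrypted_secret_key, OAEP)
plaintext = Fernet(recovered_key).decrypt(encrypted_data)
assert plaintext == b"sensitive record"
```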