ABSTRACT
Distributed application development requires the programmer to specify inter-process communication, a daunting task when the program involves complex data structures. The programmer must also explicitly handle any replication of data at a node to reduce network usage, and system performance is very sensitive to how replicas are placed and maintained. Keeping the replicated data consistent further burdens the programmer.
This project creates a middleware for Java-based distributed application developers that provides transparency of distribution, replication and consistency of objects using Distributed Shared Memory (DSM). The DSM runtime system intercepts user accesses to remote objects and translates them into messages appropriate to the underlying communication medium. The programmer is thus given the illusion of a large global object space encompassing all contributing nodes. The DSM approach is attractive because most programmers find it easier to use than a message-passing paradigm, which requires them to explicitly manage communication.
The system uses prediction to dynamically replicate objects and to relocate object replicas according to the access patterns of each object. A replica of an object can propagate, perish or migrate depending on object usage, and replication is transparent to the application developer. The middleware also transparently maintains the replicas in a consistent state using the adaptive home-based lazy release consistency (AHLRC) protocol.
CHAPTER 1
INTRODUCTION
1.1 INTRODUCTION
Developing applications over distributed systems is non-trivial. Such applications require the programmer to specify inter-process communication, and when complex data structures are involved, distributed application development is a daunting task. The programmer has to explicitly manage communication alongside algorithmic development. Distributed middleware such as CORBA and .NET alleviates some of these problems by hiding lower-level network issues from the programmer.
Replication is an important issue in distributed systems, and distributed object middleware does not address it naturally: the programmer has to explicitly handle any replication of data at a node to reduce network usage. If replicas are well distributed, most accesses hit locally and good performance can be achieved. If replicas are unevenly distributed, system performance may be greatly degraded by the increased traffic of updating replicas and unnecessarily repeated fetching from replicas at other nodes. Replication of objects is therefore a key factor in the performance of distributed applications.
Maintaining replicas in a consistent state is another important issue in distributed systems, and keeping all replicas synchronized must also be managed explicitly by the programmer. Application development in a distributed environment is therefore a daunting task, and adequate infrastructure that provides a sufficient level of abstraction is necessary.
Distributed Shared Memory (DSM) is an attempt to combine the simplicity of shared memory programming with the inexpensiveness of a message-passing implementation. The idea of emulating a cache-coherent multiprocessor by using the virtual memory mechanism was proposed in [1], [2]. DSM provides an illusion of globally shared memory in which processes can share data without the application developer needing to specify explicitly where data is stored and how it should be accessed. This approach is attractive since most programmers find it easier to use than a message-passing paradigm, which requires them to explicitly partition data and manage communication. With a global address space, the programmer can focus on algorithmic development rather than on managing partitioned data sets and communicating values. In DSM systems, replication and consistency are the key issues and are handled extensively. DSM systems also focus on reducing the communication required for consistency maintenance, providing software implementations of more relaxed forms of consistency.
Recent increases in PC performance, the exceptionally low cost of PCs relative to workstations, the introduction of advanced PC operating systems, and improvements in commodity general-purpose networks and processors have combined to make networks of PCs an attractive, inexpensive alternative to large monolithic multiprocessor systems for large scientific computations. By providing an abstraction of globally shared memory on top of the physically distributed memories of networked workstations, it is possible to combine the programming advantages of shared memory with the cost advantages of distributed memory. These distributed shared memory (DSM), or shared virtual memory (SVM), runtime systems transparently intercept user accesses to remote memory and translate them into messages appropriate to the underlying communication media [31]. The programmer is thus given the illusion of a large global address space encompassing all available memory, as seen in Figure 1.1.
Figure 1.1 Distributed Shared Memory
Several factors limit the performance of shared virtual memory (SVM). Software handlers and expensive network communication between clusters to maintain data consistency greatly limit system performance. There are two avenues for performance improvement: relaxed consistency models, which aim to reduce communication traffic, and additional hardware support in the communication architecture, which can reduce the cost of communication. Since the first solution increases programming complexity while the second increases system cost, the research challenge has been to determine how far to push for better performance without compromising the advantages of the software approach.
1.2 LITERATURE SURVEY
Researchers have proposed many relaxed consistency models. The first shared virtual memory (SVM) implementation [2] used the sequential consistency (SC) model [4], which meant that coherence operations had to be propagated immediately and processes had to wait for memory operations to complete before moving on to new ones. Progress was slow until the release consistency (RC) model [5] breathed new life into the software approach in the early 1990s and led to the eager release consistency (ERC) implementation in Munin [6] and lazy release consistency (LRC) [7] in TreadMarks. Entry consistency (EC) [8], home-based lazy release consistency (HLRC) [9] and scope consistency (ScC) [10] are other relaxed consistency models.
In eager release consistency (ERC), a processor delays propagating its modifications to shared data until it releases the lock on the data; at that time it propagates the modifications to all other processors that cached the modified pages. In lazy release consistency (LRC), by contrast, the propagation of updates is delayed further, until the next acquire of the lock on the data, and the updates are propagated only to the processor that acquires the lock.
HLRC [9] is a variant of the lazy release consistency (LRC) protocol [7] that requires no hardware support and can be easily implemented on workstation clusters or multicomputers with traditional network interfaces. For these reasons, HLRC has been used in many software DSMs, including Tuple Spaces [3], GeNIMA [11], ORION [12], SHRIMP [13], ADSM [14], and KDSM [15].
Good performance results have been reported using these models. Software DSM protocols such as lazy release consistency are able to minimize false sharing and subsequent network messages by delaying the propagation of page invalidations or updates until the latest possible time. However, these protocols introduce substantial memory and other coherence-related overhead.
Home-based software DSM [16] provides a conceptually simpler way to build software DSMs. LRC systems maintain changes to shared pages locally, so multiple messages may be necessary to bring a stale page up to date. HLRC protocols, on the other hand, require changes to be flushed to a designated home node (assigned on a per-page basis); a request to bring a stale page up to date can then be satisfied with a single message to the home node, which sends the entire page back to the requester. HLRC has several advantages over LRC. First, the average critical path delay of each page access fault is reduced to one round trip. Second, the coherence-related metadata for each page is smaller. Finally, memory overhead on each node is smaller because local page versioning is not required.
1.2.1 Home-based Lazy Release Consistency (HLRC) Protocol
The key idea in the HLRC protocol is that one node is assigned to be the home node of each shared page; the home node is the node where the page resides. Shared pages are invalidated on non-home nodes as required to maintain consistency, and accesses to invalid pages on non-home nodes require a fetch of the updated page from the home node. Details of the protocol can be found in [16].
In HLRC, each shared page is assigned a single home node, which typically does not change, so the initial distribution of home nodes is important for good performance. Round robin, first touch, and block distribution are common page distribution algorithms, and some systems allow the application programmer to set the home node for a given shared address range in an attempt to assign the best home node for each page. As an example of the results of poor home assignment, suppose node 0 is initially assigned to be the home node for shared page i but never accesses the page. If node 1 reads and writes page i frequently, the home assignment is detrimental to performance, since node 1 has to repeatedly fetch the whole page from node 0, while node 0 is frequently interrupted by incoming updates for that page from node 1, hindering its forward computational progress.
Home-based software DSM system performance is very sensitive to the distribution of home pages. If the homes of shared pages are well distributed, most accesses will hit locally and good performance can be achieved. Otherwise, system performance may be greatly degraded due to increased traffic caused by updating home nodes and unnecessarily fetching pages repeatedly from the same home node.
1.2.2 Adaptive Home Protocols
There have been many adaptive protocols that seek to reduce the impact of poor home node distribution [12], [14], [17], [18], [19], [20], [21]. The idea behind these mechanisms is to detect specific application sharing patterns such as one producer-arbitrary consumer(s) [12], migratory [14], and single writer [14], [19], discussed in section 1.4.4, and redistribute the home pages accordingly in those specific cases. Although these schemes can achieve some performance improvements, they are tailored for specific memory access patterns and cannot solve home node assignment problems in other memory access patterns such as multiple-writer cases. As an example, consider two nodes that write to the same page frequently. In home-based software DSMs with HLRC and the above adaptive variants, at most one writer can be the home, and the other node still has to fetch the updated page from that home node when it wants to access it. The page fetch is still on the critical path of the second node, which prevents further performance improvement. Moreover, if the home node is initially neither of the two writers, it is difficult for the above adaptive protocols to decide how to migrate the home node for the best optimization, limiting performance improvement in those cases.
To the best of our knowledge, all adaptive HLRC protocols suffer from the following two limitations: (1) The protocols change home-distribution only after a specific memory access pattern is detected; therefore, home-redistribution lags behind changes in the memory sharing pattern. (2) Many adaptive protocols only deal with specific memory access patterns such as single writer or single producer-multiple consumer patterns. The performance may degrade for dynamically changing memory access behavior and other general memory access patterns such as multiple-writer, which are nevertheless common in parallel applications [22].
1.2.3 Adaptive HLRC (AHLRC)
Adaptive HLRC [23] is a home-based protocol that makes the redistribution of home pages general enough to apply to any sharing access pattern. Like HLRC, each page is assigned a home node, and changes to shared pages are propagated to each home node at release synchronization events. Similar to the adaptive variants, AHLRC detects memory access patterns and changes the home page distribution accordingly. However, in AHLRC every shared page can have more than one home node, with each home node maintaining an updated copy of the page after synchronization, and every node adaptively decides whether to be a home node of each shared page independently of the other nodes participating in the computation. Home pages are thus expected to be redistributed well for general memory sharing patterns, including the migratory and single-writer cases discussed in section 1.4.4. Such redistribution is based on predictions made by local online home predictors, not on system-wide sharing pattern detection [12], [14], [18]. Consequently, AHLRC can redistribute homes quickly and without costly global coordination between nodes, which makes it a good candidate for the system.
1.2.4 Object based DSM on middleware
Distributed shared memory is implemented using one or a combination of specialized hardware, conventional paged virtual memory, or middleware. Hardware-based solutions are costly, and paged virtual memory implementations are suited to collections of homogeneous computers with common data and paging formats.
On the other hand, languages such as Orca [24] and middleware such as Linda [25] and its derivatives JavaSpaces [26] and TSpaces [27] support forms of DSM without any hardware or paging support, in a platform-neutral way. In this type of implementation, sharing is implemented by communication between instances of the user-level support layer in clients and servers. Processes call this layer when they access local data items, and it communicates as necessary to maintain consistency.
Object-based DSMs perform better than page-based DSMs, whose larger granularity of sharing leads to false sharing [28]; object-based DSM alleviates this problem through finer-grained sharing. Examples of object-based DSMs include Linda [25], JDSM [29], and an object-based DSM for the .NET environment [30]. Hence object-based middleware is a good candidate for the system.
1.3 OBJECTIVES
The goal is to design and implement a software DSM system called HDSM, an object-based middleware for Java that uses the adaptive home-based lazy release consistency (AHLRC) protocol [23]. The consistency mechanism is inspired by the research in [23], but that work targeted page-based software DSM; the novelty of this work is to borrow from AHLRC and apply it to an object-based middleware for Java. The developer can then use the HDSM middleware to build Java-based distributed applications without specifying inter-process communication, without managing the creation, migration and perishing of replicas, and without handling consistency maintenance.
1.4 HDSM SYSTEM LAYERS
The local HDSM API will provide the necessary functionality. The various layers of the HDSM middleware are shown in Figure 1.2.
Figure 1.2 HDSM system layers
The HDSM middleware shall provide these functionalities transparently to the client application. The client application uses the local HDSM API to access the middleware, which provides transparency for the distribution of objects, transparency for the replication of objects, and transparency in maintaining objects in a consistent state. The architecture of the system is shown in Figure 1.3. A sample distributed application for Java objects using HDSM is discussed in section 4.2.
The APIs provided by the HDSM middleware are: creating a new object in HDSM, getting the object ID for objects in HDSM, reading objects from HDSM, writing to objects in HDSM, and removing objects from HDSM. Distributed application developers use these APIs without handling any inter-process communication, replication issues or consistency of the replicated objects; the middleware handles these issues for them.
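As an illustration of how these calls might look to a developer, the sketch below shows a minimal, purely local stand-in for the HDSM API. The class and method names and the in-memory store are assumptions for illustration only; the real middleware performs replication, locking and consistency maintenance behind these calls.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the local HDSM API surface described above.
class HdsmApi {
    private final Map<String, Object> objectStore = new ConcurrentHashMap<>();
    private final Map<String, String> classIndex = new ConcurrentHashMap<>();

    // Create a new object in HDSM; returns an ID unique across nodes.
    String create(Object obj, String className) {
        String id = UUID.randomUUID().toString();
        objectStore.put(id, obj);
        classIndex.put(className, id);
        return id;
    }

    // Get the object ID registered for a class name, or null if absent.
    String getObjectId(String className) {
        return classIndex.get(className);
    }

    // Read an object by ID (the real middleware may fetch remotely).
    Object read(String id) {
        return objectStore.get(id);
    }

    // Write an object; the real middleware would multicast this update
    // to every node in the object's home set.
    void write(String id, Object obj) {
        objectStore.put(id, obj);
    }

    // Remove an object from HDSM.
    void remove(String id) {
        objectStore.remove(id);
    }
}
```

A client would create an object once, pass the returned ID to other nodes, and read or write through the same interface on any node.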
The HDSM middleware contains four layers. The client's distributed applications are written in the Client Application layer. The client application directly uses the Local HDSM API available at each contributing node; these APIs are provided by the HDSM API layer. The third layer is the Local HDSM Manager layer, which takes care of all local HDSM middleware operations. The fourth layer is the HDSM Home Set Manager layer, which joins all the contributing nodes into the HDSM.
The Local Coordinator coordinates all Local HDSM Manager layer operations. All the objects at a home node are stored in the Local Object Store, and all prediction-related data is stored in the Local Predictor Store. The Local Lock Manager handles all locks for the objects present at the current node. The Online Home Predictor makes predictions for the objects present at the current node, and the Online Home Statistics Recorder records all prediction-related data into the Local Predictor Store. The Remote Reader allows non-home nodes to read objects from a home node, while the Remote Object Requester performs the remote read operation from a non-home node to a home node.
The Update Sender, Multicast Receiver, and Acknowledgement Receiver perform the multicast operations during a write. The Update Sender sends all multicast messages: lock requests, unlock requests, updated objects, lock acknowledgements, unlock acknowledgements, and update acknowledgements. The Multicast Receiver receives all lock, unlock and update messages from updating nodes, and the Acknowledgement Receiver receives the lock, unlock and update acknowledgements sent from home nodes.
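The six multicast message kinds listed above could be represented as a simple enumeration; the type name below is an illustrative assumption, not the actual implementation:

```java
// Message kinds exchanged during a write, as enumerated above.
enum HdsmMessageType {
    LOCK_REQUEST,    // sent to all homes before a write
    UNLOCK_REQUEST,  // sent to all homes after a write
    UPDATED_OBJECT,  // the new object value, multicast to the home set
    LOCK_ACK,        // home node acknowledges the lock
    UNLOCK_ACK,      // home node acknowledges the unlock
    UPDATE_ACK       // home node acknowledges applying the update
}
```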
The Home Set Coordinator coordinates all HDSM Home Set Manager layer operations. Home Set Data stores all home-set-related data, and Home Set performs the home-set operations on it. The Nodes List holds the list of home nodes for an object.
Figure 1.3 HDSM system architecture
1.4.1 Object Updating on Multiple Home Nodes
In the system, the same object can have multiple home nodes, so shared objects must be kept updated on all home nodes when the coherence protocol requires it. To achieve this, a set of homes (the object home set) is maintained for each shared object, recording the current list of home nodes. Updates are propagated to all nodes in the home set, and each home node applies the update to its copy of the object. Since every home node keeps an updated copy, a non-home node that wants to fetch the object can do so from any available home node. This strategy eliminates the possible “hot spots” of HLRC, since fetch requests for the same object are not necessarily directed to a single location. When a node needs to fetch a copy of an object, the system currently selects a random node from the home set.
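A minimal sketch of this home set bookkeeping follows, under the assumption of integer node IDs and a hypothetical HomeSet class: updates target every home node, while fetches pick a random home to avoid hot spots.

```java
import java.util.List;
import java.util.Random;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative sketch (names are assumptions, not the implementation)
// of per-object home set maintenance.
class HomeSet {
    private final List<Integer> homeNodes = new CopyOnWriteArrayList<>();
    private final Random random = new Random();

    // Add a node to the home set (idempotent in this sketch).
    void join(int nodeId) {
        if (!homeNodes.contains(nodeId)) {
            homeNodes.add(nodeId);
        }
    }

    // Drop a node from the home set.
    void leave(int nodeId) {
        homeNodes.remove(Integer.valueOf(nodeId));
    }

    // Every home must receive each update so all copies stay current.
    List<Integer> updateTargets() {
        return List.copyOf(homeNodes);
    }

    // Any home holds an up-to-date copy, so choose one at random
    // rather than directing all fetches to a single location.
    int chooseFetchSource() {
        return homeNodes.get(random.nextInt(homeNodes.size()));
    }
}
```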
1.4.2 Online Statistics Recording
Home nodes in HLRC do not incur the delay of remote object fetches, since an object is always up to date on a home node; however, a home node is frequently interrupted by incoming updates sent by other nodes and must apply these changes. Conversely, a non-home node saves the time of receiving and processing updates, but has to fetch whole objects from the home node whenever it accesses an invalid object. Consequently, if a node accesses a particular shared object very frequently, better performance would likely be achieved were it a home node; if a node accesses a shared object rarely, that node should not be a home node for that object. The system therefore compares the cost of being a home node (the object updating time tupd, including the time to receive object updates and apply them to the object) with the cost of not being a home node (the object fetch time tfetch, including the time to send out the object request, wait for the incoming updated copy and apply that copy to the object). In other words, if tupd > tfetch, the node should not be a home node during the current interval; if tupd < tfetch, it should be.
In order to make this comparison, the node must know the object fetch time (constant to a first-order approximation for a given system) and the object update time. The node dynamically measures tupd by recording (V, t) pairs on the home node, where V is an object version number and t is the object update time for that version; summing these times between the last and current version numbers gives the total object update time. The object version number is updated on each home node after processing all updates flushed from other nodes.
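The recording scheme can be sketched as follows; the class name and the use of microseconds are illustrative assumptions. Each applied update is logged as a (version, time) pair, and the total update cost between two versions is the sum of the recorded times in that interval:

```java
import java.util.concurrent.ConcurrentSkipListMap;

// Hedged sketch of the online statistics recorder on a home node.
class OnlineHomeStatisticsRecorder {
    // version number -> time spent receiving and applying that update
    private final ConcurrentSkipListMap<Long, Long> records =
            new ConcurrentSkipListMap<>();

    // Log the cost of applying the update that produced `version`.
    void record(long version, long updateTimeMicros) {
        records.put(version, updateTimeMicros);
    }

    // Total update cost t_upd accumulated over versions in (vLast, vCurr].
    long totalUpdateCost(long vLast, long vCurr) {
        long total = 0;
        for (long t : records.subMap(vLast, false, vCurr, true).values()) {
            total += t;
        }
        return total;
    }
}
```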
1.4.3 Online Home Prediction
When a node first accesses a shared object after a release synchronization event, it uses a local online home predictor to decide whether to become a home node, drop from the home set, or do neither. Memory-sharing patterns in applications are normally strongly correlated with past history, so predictions based on past history are fairly accurate [31]. Also, since the decision has two possible outcomes, “become a home node” or “drop from the home set”, a two-level adaptive branch predictor [32] is a good candidate for the online home predictor. HDSM implements the online home predictor as a PAp branch predictor in which each shared object has a separate history register (HR) that indexes a separate pattern history table (PHT) for that object, each PHT entry being a saturating counter. By comparing the indexed PHT entry against a pre-defined threshold, a binary prediction is generated; afterward, the PHT entry and the HR are updated according to the predicted and actual outcomes.
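A minimal sketch of such a per-object predictor is shown below, assuming a 4-bit history register and 2-bit saturating counters; the actual register width, counter width and threshold are implementation parameters not fixed by the text. A training outcome of true means that being a home node was the cheaper choice (tupd < tfetch) in the last interval:

```java
// PAp-style two-level adaptive predictor for one shared object:
// a history register (HR) of recent outcomes indexes a pattern history
// table (PHT) of saturating counters; counter >= threshold predicts
// "become (or stay) a home node". Sizes below are assumptions.
class OnlineHomePredictor {
    private static final int HISTORY_BITS = 4;
    private static final int MAX_COUNT = 3;   // 2-bit saturating counter
    private static final int THRESHOLD = 2;

    private int historyRegister = 0;                       // HR
    private final int[] pht = new int[1 << HISTORY_BITS];  // PHT

    // true => predict the node should be a home node for this object.
    boolean predict() {
        return pht[historyRegister] >= THRESHOLD;
    }

    // Train with the real outcome and shift it into the history.
    void update(boolean outcome) {
        int c = pht[historyRegister];
        pht[historyRegister] = outcome ? Math.min(c + 1, MAX_COUNT)
                                       : Math.max(c - 1, 0);
        historyRegister = ((historyRegister << 1) | (outcome ? 1 : 0))
                          & ((1 << HISTORY_BITS) - 1);
    }
}
```

After a steady run of “home was cheaper” outcomes the predictor settles into predicting home membership, and a run of opposite outcomes flips it back, which is the adaptation behavior described above.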
Online Home Prediction on a Home Node
Suppose the current version number of object i is Vi,curr, and the version number when the home node last accessed this object is Vi,last. The home node retrieves the object update records and calculates the actual total object update time by summing the recorded update times over the intervening versions:

tupd = Σ t(V), for Vi,last < V ≤ Vi,curr

The node then compares tupd with tfetch, updates the predictor state accordingly, and makes a prediction.
Online Home Prediction on a Non-home Node
Similarly, when a non-home node accesses an object i, an object fault occurs and the node must send an object request to one of the home nodes to get the updated copy of the object. The node then attempts to predict whether it would be cost effective for it to be a home node for object i.
In order to make this decision, the node must know the cost of being a home node since its last access. However, such object update times are recorded only on the home node. To solve this problem, when a non-home node sends an object request, it also sends the object version number Vi,last. When the home node receives the object request, it takes this version number, retrieves its local object update records, and calculates the total object update cost for the requesting node. This object update cost, together with the updated object copy, is returned to the requesting node, which installs the updated object. The node then compares tupd with the object fetch time tfetch, updates the saturating counter in the PHT indexed by the current history register as well as the history register itself, and makes a prediction.
1.4.4 Object Sharing Pattern Adaptation
Sharing patterns can be classified into at least three categories: migratory, single-producer/consumer, and multiple-writer [33]. This section shows how the system adapts to these sharing patterns respectively.
Migratory Sharing Pattern
In this sharing pattern, a single writer migrates from one node i to another node j. In the system, since node j now writes to the object frequently, it will become a home node according to the outcome of the online home predictor. If node i accesses the object infrequently, at its next access or object update event the home predictor will cause it to remove itself from the home set.
Single-Producer/Consumer Pattern
In this sharing pattern there is only one writer but multiple readers. In HLRC, since there can be only one home node for any object in the system, the home node is set to the writing node; readers then have to repeatedly fetch the whole object from that home node, incurring a large execution time overhead. In our system, on the other hand, the readers themselves can become home nodes, in which case changes made by the writer are propagated to the readers via one multicast message. Using the system in this manner also updates the object in advance, providing pre-fetching benefits.
Multiple-Writer Pattern
One primary advantage of this system is that home objects can be redistributed better in general sharing pattern cases. One common example is multiple producers/multiple readers, in which more than one writer modifies the shared object frequently, and more than one reader reads this object frequently at the same time. In AHLRC, all of these writers and readers may become home nodes. Consequently, changes from each writer will be propagated to each reader, eliminating subsequent object fetches from the remote home node.
1.5 ORGANIZATION OF THE REPORT
This report is split into five broad chapters. Chapter 1 gives an overall picture of the system (HDSM) to be developed. A brief introduction to object based distributed shared memory and the adaptive home-based lazy release consistency (AHLRC) has been given. A background of what is to be dealt with in the rest of the thesis is also discussed.
Chapter 2 enlists the various features and functions that are to be supported by HDSM. It gives a detailed requirement specification on the various features like create new object, read, write etc.
Chapter 3 identifies the modules that are to be implemented. These modules are later described in a detailed manner through algorithms. The various data structures are identified. A test plan is also prepared simultaneously.
In Chapter 4 the implementation details are discussed. Sample test inputs and outputs are also noted. Performance analysis of the system is done in comparison with another system implementing HLRC (home-based lazy release consistency).
In Chapter 5 the concluding remarks about the project are given.
CHAPTER 2
REQUIREMENTS SPECIFICATION
2.1 INTRODUCTION
This is the Software Requirements Specification (SRS) for Home Node prediction in Object-based Distributed Shared Memory (HDSM).
2.1.1 Purpose
The purpose of this document is to convey information about the application requirements, both functional and non-functional, to the reader. This document provides (a) a description of the environment in which the application is expected to operate, (b) a definition of the application capabilities, and (c) a specification of the application’s functional and nonfunctional requirements.
2.1.2 Intended Audience
The chapter is intended to serve several audiences.
First, it is anticipated that the application designers will use the SRS. Designers will use the information recorded here as the basis for creating the application’s design.
Second, the application maintainers will review the chapter to clarify their understanding of what the application does.
Third, test planners will use this chapter to derive test plans and test cases.
2.2 SCOPE
HDSM is an object-based middleware for Java that uses the adaptive home-based lazy release consistency (AHLRC) protocol and provides a high-level abstraction to Java-based distributed application developers.
2.3 OVERALL DESCRIPTION
This section gives an overall description of the project.
2.3.1 Product Perspective
The software is an independent system that is not part of a larger system. It can be executed on the Windows operating system.
2.3.2 Product Features
The features supported by HDSM are as below:
Object based DSM with basic features
The basic features of the object based DSM are create new object, get object ids for a class, read an object, write to an object, and remove an object.
Adaptive Home-based Lazy Release Consistency (AHLRC)
This feature is to make the replicas that exist at various nodes in HDSM use the adaptive home-based lazy release consistency model for consistency.
Two-level Adaptive Branch Predictor
The feature is to allow each node to independently predict if it is beneficial to join the home set or not.
Distributed locking protocol
The feature is to serialize all the reads and writes on the replicated objects that exist at various home nodes.
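On a single node, the per-object read/write lock bookkeeping behind this feature might be sketched as below; the class name is hypothetical, and the actual distributed protocol additionally coordinates these locks across all home nodes of an object.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hedged sketch of local lock management: one read/write lock per
// object ID, created lazily. Reads are shared, writes are exclusive.
class LocalLockManager {
    private final Map<String, ReentrantReadWriteLock> locks =
            new ConcurrentHashMap<>();

    private ReentrantReadWriteLock lockFor(String objectId) {
        return locks.computeIfAbsent(objectId,
                id -> new ReentrantReadWriteLock());
    }

    void acquireRead(String objectId)  { lockFor(objectId).readLock().lock(); }
    void releaseRead(String objectId)  { lockFor(objectId).readLock().unlock(); }
    void acquireWrite(String objectId) { lockFor(objectId).writeLock().lock(); }
    void releaseWrite(String objectId) { lockFor(objectId).writeLock().unlock(); }

    // Number of read locks currently held on the object (for inspection).
    int readLockCount(String objectId) {
        return lockFor(objectId).getReadLockCount();
    }
}
```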
2.3.3 User Classes and Characteristics
There are two types of users for HDSM. Table 2.1 describes general user characteristics that will affect the functionality of the software product.
Table 2.1 User class and characteristics
Type of User | User Characteristics | User Technical Expertise | How the user characteristics and technical expertise affect HDSM functionality
Administrator | Good understanding of HDSM operations; responsible for HDSM operations as a whole | Good technical expertise and basic knowledge of the network environment | User interface with fewer input steps; manual network configuration
Distributed Application Developer | Good understanding of HDSM operations and of application development in Java | Good technical proficiency in Java programming | User interface consists of Java DSM objects, which provide functions to access HDSM
2.3.4 Operating Environment
Hardware Platform: A set of networked PCs connected in a LAN.
Operating Systems: Microsoft Windows 2000
Software Components: Java runtime environment installation
2.3.5 Design and Implementation Constraints
The Administrator is responsible for configuring the HDSM operation over the network.
HDSM should be able to run on existing PCs in an existing LAN supporting multicasting.
LAN should support TCP/IP protocol.
HDSM should be able to handle a multi-user environment. It must allow simultaneous access by more than one terminal running HDSM at any time.
2.3.6 Assumptions and Dependencies
The following assumptions and dependencies would affect the software requirements if they turned out to be false.
The system depends on the software components and operating systems specified above.
The hardware used to interact with HDSM is assumed to be a standard keyboard and monitor.
2.4 EXTERNAL INTERFACE REQUIREMENTS
This section gives the external requirements of the project.
2.4.1 User Interface
A runtime HDSM object is present in the nodes of the system. This HDSM object serves as the interface for the client code to access the HDSM service.
2.4.2 Hardware Interface
The hardware interface for the system will be a standard keyboard and monitor. The systems need to be connected to each other in a LAN.
2.4.3 Software Interface
The system on which HDSM is running should have the following components installed:
Java runtime environment
Java standard development kit
2.4.4 Communication Interface
It is the administrator’s responsibility to configure the HDSM in the network. The administrator shall provide the address of the system where the server resides.
2.5 SYSTEM FEATURES
This section gives the main features of the project.
2.5.1 Create New Object
Description:
The user calls this function to create a new object in the HDSM.
Stimulus/Response Sequence:
The user shall provide the object and the name of the class of which it is an instance. This shall return an ID that is unique across all nodes, which can be used to access the object in the future.
Functional Requirements:
The object, object ID and class name shall be stored in the object store. The predictor data for the object shall be created and initialized. The home set for the object shall be created and the current node added as a home.
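The bookkeeping described for Create New Object can be sketched as a minimal, single-node stand-in. The class and member names here (`LocalObjectStore`, `createNew`, and the simple hash tables) are illustrative assumptions, not the actual HDSM API:

```java
import java.util.ArrayList;
import java.util.Hashtable;
import java.util.List;
import java.util.UUID;

// Illustrative single-node sketch of the Create New bookkeeping:
// store the object, its ID and class name, create predictor data,
// and add the current node to the object's home set.
public class LocalObjectStore {
    private final Hashtable<String, Object> objects = new Hashtable<>();
    private final Hashtable<String, String> classNames = new Hashtable<>();
    private final Hashtable<String, int[]> predictorData = new Hashtable<>();
    private final Hashtable<String, List<String>> homeSets = new Hashtable<>();
    private final String nodeId;

    public LocalObjectStore(String nodeId) {
        this.nodeId = nodeId;
    }

    /** Creates a new shared object and returns its unique ID. */
    public String createNew(Object obj, String className) {
        String oid = UUID.randomUUID().toString();   // unique across all nodes
        objects.put(oid, obj);
        classNames.put(oid, className);
        predictorData.put(oid, new int[8]);          // freshly initialized predictor data
        List<String> homes = new ArrayList<>();
        homes.add(nodeId);                           // creator is the initial home node
        homeSets.put(oid, homes);
        return oid;
    }

    public Object read(String oid) {
        return objects.get(oid);
    }

    public List<String> homesOf(String oid) {
        return homeSets.get(oid);
    }
}
```

A UUID stands in for whatever globally unique ID scheme the real system uses; the essential point is that creation touches the object store, the predictor data, and the home set together.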
2.5.2 Get Object ID for a class name
Description:
The user calls this function to get object ID for a given class.
Stimulus/Response Sequence:
The user shall provide the class name. This shall return the object ID.
Functional Requirements:
The object ID for the given class is searched locally. If the object ID is found, it is returned. Otherwise it is obtained from the home set manager.
2.5.3 Read existing object
Description
The user calls this function to get the object with a given object id.
Stimulus/Response Sequence
The user shall provide the object ID. This shall return the object, and make a prediction to join the home set, leave the home set, or do nothing.
Functional Requirements
The read lock must be acquired before the read and released after it. If the object is available locally, a local read is made. Otherwise the node where the object resides is found from the home set and a remote fetch is made from that node. Online statistics are recorded for the object.
2.5.4 Write to existing object
Description
The user calls this function to write an object into the HDSM.
Stimulus/Response Sequence
The user shall provide the object and the object ID.
Functional Requirements
All home node IDs are obtained from the home set. A write lock is acquired from all homes and released after the write. The update is multicast to all home nodes, where the new object is stored. The local object store is updated. Online statistics are recorded for the object.
2.5.5 Remove existing object
Figure 2.1 Context diagram
Description
The user calls this function to remove the object from HDSM.
Stimulus/Response Sequence
The user shall provide the object ID.
Functional Requirements
If it is a non-home node, no action is taken. If it is a home node, the object is removed from the object store and its predictor data is also removed. If it is the only home node, the home set is also removed.
2.6 FUNCTIONAL REQUIREMENTS
This section gives the functional requirements of the project.
2.6.1 Information Flows
Data flow diagram 1
The level 1 flow diagram for HDSM is shown in Figure 2.2.
Data entities
Object
Object id
Class name
Pertinent processes
Client process
HDSM runtime
Topology
Figure 2.2 Level 1 DFD
Data flow diagram 2
The level 2 flow diagram for HDSM is shown in Figure 2.3.
Data entities
Object
Object id
Class name
Object store
Predictor data
Home set data
Pertinent processes
Client process
Local Manager
Home set manager
Home node predictor
Topology
Figure 2.3 Level 2 DFD
Data flow diagram 3
The level 3 flow diagram for HDSM is shown in Figure 2.4.
Data entities
Object
Object id
Class name
Object store
Predictor data
Home set data
Lock release/acquire
Remote lock release/acquire
Multicast updates
Remote update
Fetch request
Remote fetch
Pertinent processes
Client process
Local manager
Local lock manager
Updates manager
Fetch manager
Home set manager
Online home predictor
Online statistics recorder
Topology
Figure 2.4 Level 3 DFD
2.6.2 Process description
Local Manager
Input data entities
Objects, Class name, Object ID
Algorithm for process
The process takes all requests from the client and provides the appropriate responses.
Affected data entities
Object Store
Local Lock Manager
Input data entities
Object ID, Node IDs.
Algorithm for process
The process is responsible for local lock acquires and releases, as well as remote lock acquires and releases.
Affected data entities
Lock data
Updates Manager
Input data entities
Object ID, Node IDs
Algorithm for process
The process is responsible for multicasting the update to all other home nodes.
The process is also responsible for receiving the updates and applying them in the local object store.
Affected data entities
Object Store
Fetch Manager
Input data entities
Object ID, Node IDs
Algorithm for process
The process is responsible for fetching an object from other home node.
The process is also responsible for receiving fetch requests and sending back the corresponding data.
Affected data entities
NA
Home set Manager
Input data entities
Node ID, Class name, Object ID
Algorithm for process
The process is responsible for providing the node IDs for a particular object ID.
Affected data entities
Home set.
Online home predictor
Input data entities
Object ID
Algorithm for process
The process uses a two-level adaptive branch predictor to predict whether a node should join the home set.
Affected data entities
Predictor Data
Online Statistics recorder
Input data entities
Object ID and object statistics
Algorithm for process
The process is responsible for storing the predictor related data for each object.
Affected data entities
Predictor data
2.6.3 Data construct specifications
Object Store
Record Type
Hash tables
Constituent fields
Object, Object ID, Class name
Home set
Record Type
Hash tables
Constituent fields
Class name, Object ID, Node IDs
Predictor data
Record Type
Hash tables
Constituent fields
Object ID, Version last access, Fetch time, Version and time for updates, Pattern history table (PHT), History register (HR)
2.7 NON-FUNCTIONAL REQUIREMENTS
This section gives the non-functional requirements of the project.
2.7.1 Performance Requirements
No performance requirements as of the last revision.
2.7.2 Safety Requirements
No safety requirements as of the last revision.
2.7.3 Security Requirements
No security requirements as of the last revision.
2.7.4 Software Quality Attributes
Availability : Not applicable as of the last revision.
Security : Not applicable as of the last revision.
Maintainability : Not applicable as of the last revision.
Portability : The software will work in all operating system environments specified earlier. Also, the software is portable to any Java Development Kit 1.1 or later.
CHAPTER 3
SYSTEMS DESIGN AND TEST PLAN
3.1 DETAILED DESIGN
This section gives the detailed design for the project.
3.1.1 Module Detailed Design
There are seven main modules in the project. These modules are discussed in this section.
3.1.1.1 Module 1 – Local Manager Description
Description:
This module is responsible for handling all operations from the client process.
Input:
Client processes shall provide the object, object ID and class name as inputs.
Output:
The outputs will be either the object or object ID.
Methods:
public String CreateNew(Object obj, String ClassName)
Algorithm:
Begin
Get the object from the client process
Get the class name from client process
Generate object ID for the object
Store the object in object store
Send the object ID and class name to home set manager
Generate predictor data for the object in online statistics recorder
Return the object id to the client process
End
public String getOID(String className)
Algorithm:
Begin
Get the class name from client process
Send class name to the Home set manager
Get back the object IDs list
Select a random object ID
Return the selected object ID to the client process
End
public String[] getOIDs(String className)
Algorithm:
Begin
Get the class name from client process
Send class name to the Home set manager
Get back the object IDs list
Return the object IDs list to the client process
End
public Object read(String oid)
Algorithm:
Begin
Get the object id from client process
If the object is in the object store
Send read lock request to local lock manager
Retrieve the object
Update the version number last accessed in online statistics recorder
Send request to online home predictor for prediction
Send read unlock request for the object
Else
Get the list of nodes having the object in their object store from the home set
Select one random node to read the object
Send object request to remote fetch manager with version number last accessed
Update the version number last accessed in online statistics recorder
Send request to online home predictor for prediction
Return the object to the client process
End
public void Write(Object obj, String oid)
Algorithm:
Begin
Get the object and the object id from client process
If the object is in the object store
Send write lock request to local lock manager
Set write lock in home set manager for the object
Get multicast address for the object from the home set
Send lock request to all nodes
Wait for lock acknowledgement till time out
If any node has not acknowledged the lock
Send lock request to non-acknowledged nodes
Wait for lock acknowledgement till time out
If any node has still not acknowledged the lock
i. Remove non-acknowledged nodes from home set
Send updates to the acknowledged nodes
Wait for update acknowledgement till time out
If any node has not acknowledged the update
Send update request to non-acknowledged nodes
Wait for update acknowledgement till time out
If any node has still not acknowledged the update
i. Remove non-acknowledged nodes from home set
Send unlock request to all nodes
Wait for unlock acknowledgement till time out
If any node has not acknowledged the unlock
Send unlock request to non-acknowledged nodes
Wait for unlock acknowledgement till time out
If any node has still not acknowledged the unlock
i. Remove non-acknowledged nodes from home set
Update the version number last accessed in online statistics recorder
Return to client process
End
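The retry-and-drop pattern that recurs in the Write algorithm above (send, wait for acknowledgement, retry once, then remove unresponsive homes) can be sketched as follows. `HomeNode`, `AckRetryPolicy` and the single-retry policy are illustrative stand-ins, not the actual HDSM network classes:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the acknowledgement/retry policy in the Write
// algorithm: each request gets one retry, and home nodes that still do
// not acknowledge are reported for removal from the home set. The
// HomeNode interface is a stand-in for the real network messaging.
public class AckRetryPolicy {
    public interface HomeNode {
        String id();
        boolean send(String message);   // true iff the node acknowledged
    }

    /** Sends a message to every home node, retrying once per node;
     *  returns the IDs of nodes that must be removed from the home set. */
    public static List<String> sendWithRetry(List<HomeNode> homes, String msg) {
        List<String> toRemove = new ArrayList<>();
        for (HomeNode home : homes) {
            if (!home.send(msg) && !home.send(msg)) {  // first try plus one retry
                toRemove.add(home.id());
            }
        }
        return toRemove;
    }
}
```

In the real system the same policy is applied in turn to lock, update and unlock messages, with a time-out deciding when an acknowledgement counts as missed.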
3.1.1.2 Module 2 – Local lock manager Description
Description:
This module is responsible for maintaining distributed read and write locks for each object. It is also responsible for acquiring and releasing locks from other home nodes. Once a lock is released, it triggers the online home predictor.
Input:
Lock request.
Output:
Lock acknowledged.
Algorithm:
Begin
If local write lock request
If write lock available
i. Set write lock for the object
Else
i. Wait until lock available
ii. Set write lock for the object
Return
If remote write lock request
If write lock available
i. Set write lock for the object
Else
i. Wait until lock available
ii. Set write lock for the object
Acknowledge write lock
If local read lock request
If read lock available
i. Set read lock for the object
Else
i. Wait until lock available
ii. Set read lock for the object
Return
If local write unlock request
Reset write lock for the object
If remote write unlock request
Reset write lock for the object
Acknowledge write unlock
If local read unlock request
Reset read lock for the object
End
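The locking discipline described above (and in Section 4.1: concurrent reads, a single writer, and no reads during a write) can be sketched as a small monitor. The class name `ObjectLock` and the non-blocking `tryWriteLock` probe are illustrative additions; remote lock messaging is omitted:

```java
// Illustrative monitor-style sketch of the per-object lock: multiple
// concurrent readers, one writer, and no readers during a write.
public class ObjectLock {
    private int readers = 0;
    private boolean writing = false;

    public synchronized void readLock() {
        while (writing) {               // readers wait for the writer
            try { wait(); } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        readers++;
    }

    public synchronized void readUnlock() {
        if (--readers == 0) notifyAll();
    }

    public synchronized void writeLock() {
        while (writing || readers > 0) { // the writer needs exclusivity
            try { wait(); } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        writing = true;
    }

    public synchronized void writeUnlock() {
        writing = false;
        notifyAll();
    }

    /** Non-blocking probe, convenient for time-outs and tests. */
    public synchronized boolean tryWriteLock() {
        if (writing || readers > 0) return false;
        writing = true;
        return true;
    }
}
```

The wait/notifyAll monitor mirrors the "wait until lock available" steps in the pseudocode above.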
3.1.1.3 Module 3 – Update manager Description
Description:
This module is responsible for getting remote updates and applying them on the object store.
Input:
Update request.
Output:
Update acknowledged.
Algorithm:
Begin
Wait until update request arrives
If update version number greater than version number available
Apply the update to the object store
Record the time to update for the current version number in online statistics recorder
Send back update acknowledgement
Else if update version number equals version number available
Send back update acknowledgement
End
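The version check in the update manager algorithm above can be sketched directly. The class `VersionedStore` and its method names are illustrative, not the actual HDSM classes:

```java
import java.util.Hashtable;

// Illustrative sketch of the update manager's version check: an
// incoming update is applied only when it is newer than the locally
// stored version, and is acknowledged when it is newer or a duplicate
// of the current version (older updates are silently ignored).
public class VersionedStore {
    private final Hashtable<String, Object> values = new Hashtable<>();
    private final Hashtable<String, Integer> versions = new Hashtable<>();

    /** Returns true iff the update should be acknowledged. */
    public boolean applyUpdate(String oid, Object value, int version) {
        int current = versions.getOrDefault(oid, 0);
        if (version > current) {        // newer: apply and acknowledge
            values.put(oid, value);
            versions.put(oid, version);
            return true;
        }
        return version == current;      // duplicate: acknowledge, do not re-apply
    }

    public Object get(String oid) {
        return values.get(oid);
    }
}
```

Acknowledging duplicates is what makes the multicast retries in the write path safe: a re-sent update is confirmed without being applied twice.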
3.1.1.4 Module 4 – Fetch manager Description
Description:
This module is responsible for handling fetches: when a fetch request arrives, it sends back the object and the object details.
Input:
Fetch request.
Output:
Fetched object and object details.
Algorithm:
Begin
Send read lock request to local lock manager
Get the object from the object store
Send request to the online statistics recorder and calculate the total update time since the last version number sent
Send read unlock request to local lock manager
Send the object and the total update time
End
3.1.1.5 Module 5 – Home set manager Description
Description:
This module is responsible for maintaining the home set for all objects. It is also responsible for providing the object to class name mapping for non-home nodes.
Input:
Node ID, class name and object ID.
Output:
Home nodes list.
Algorithm:
Begin
IF new object added
Store the object ID and class name in home set
Assign a multicast address for the new group
Add the client node to the nodes list of home nodes
IF new node added
Add the new node to the nodes list of home nodes
IF node remove request from home set
Remove the node from the home nodes list
IF home nodes list requested
Collect the list of home nodes for the object
Return the home nodes list
End
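The home set bookkeeping above can be sketched as a pair of maps plus a multicast group assignment. Names like `HomeSet`, `addObject` and the counter-based `239.0.0.x` addresses are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.Hashtable;
import java.util.List;

// Illustrative sketch of the home set manager's data: per-object home
// node lists, a class-name index, and a multicast group per object.
public class HomeSet {
    private final Hashtable<String, List<String>> homes = new Hashtable<>();
    private final Hashtable<String, List<String>> byClass = new Hashtable<>();
    private final Hashtable<String, String> groupAddress = new Hashtable<>();
    private int nextGroup = 1;

    public void addObject(String oid, String className, String creatorNode) {
        byClass.computeIfAbsent(className, k -> new ArrayList<>()).add(oid);
        groupAddress.put(oid, "239.0.0." + nextGroup++);  // illustrative scheme
        List<String> list = new ArrayList<>();
        list.add(creatorNode);
        homes.put(oid, list);
    }

    public void addHome(String oid, String node) {
        homes.get(oid).add(node);
    }

    public void removeHome(String oid, String node) {
        homes.get(oid).remove(node);
    }

    public List<String> homesOf(String oid) {
        return homes.get(oid);
    }

    public List<String> oidsOf(String className) {
        return byClass.get(className);
    }

    public String groupOf(String oid) {
        return groupAddress.get(oid);
    }
}
```

The class-name index is what lets non-home nodes resolve a class name to object IDs, as described in Section 2.5.2.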
3.1.1.6 Module 6 – Online home predictor Description
Description:
This module is responsible for predicting if the node can join the home set or not using the two-level adaptive branch predictor.
Input:
Object ID.
Output:
Home set join or leave decision.
Algorithm:
Begin
IF Pattern History Table value indexed by History register > threshold value in online statistics recorder
IF not already home node
i.Add the current node as a home node in home set manager
Else
IF already home node
i.Remove the current node from the home set
IF time to update < time to fetch in online statistics recorder
Increment the Pattern History Table value indexed by History register
Append 1 to History register
Else
Decrement Pattern History Table value indexed by History register
Append 0 to History register
End
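The two-level adaptive predictor above can be sketched as a history register (HR) of the last k outcomes indexing a pattern history table (PHT) of counters, with the join decision taken against a threshold. The class name `HomePredictor` is illustrative; the configuration of k = 3 and threshold = 2 follows Section 4.3.1:

```java
// Illustrative sketch of the two-level adaptive home predictor: the
// history register (HR) holds the last k outcomes and indexes the
// pattern history table (PHT); the node joins the home set when the
// indexed counter exceeds the threshold.
public class HomePredictor {
    private final int historyBits;
    private final int threshold;
    private final int[] pht;          // 2^k counters
    private int hr = 0;               // last k outcomes, newest bit last

    public HomePredictor(int historyBits, int threshold) {
        this.historyBits = historyBits;
        this.threshold = threshold;
        this.pht = new int[1 << historyBits];
    }

    /** True when the current pattern favours being a home node. */
    public boolean predictJoin() {
        return pht[hr] > threshold;
    }

    /** Records one outcome: was updating cheaper than fetching? */
    public void record(boolean updateCheaperThanFetch) {
        if (updateCheaperThanFetch) pht[hr]++; else pht[hr]--;  // adjust PHT entry
        int bit = updateCheaperThanFetch ? 1 : 0;
        hr = ((hr << 1) | bit) & ((1 << historyBits) - 1);      // append to HR
    }
}
```

As in the pseudocode, the PHT entry indexed by the current HR is incremented when the time to update is below the time to fetch, then the outcome bit is appended to the HR.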
3.1.1.7 Module 7 – Online statistics recorder Description
Description:
This module is responsible for recording the statistics for the online home predictor.
Input:
Object ID and object statistics.
Output:
Updated predictor data.
Algorithm:
Begin
IF new predictor data
Store the version number last accessed as 1
Store time to fetch from remote node as constant
Initialize PHT table
Initialize HR
IF record update time requested
Append the time to update for the current version number
IF calculate update time requested for a version number
Retrieve the update records
Sum up all the update times which have a version larger than the given version number
Return the total update time
End
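The update-time accounting above can be sketched with (version, time) records per object; `UpdateStats` and its method names are illustrative stand-ins:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the statistics recorder's update accounting:
// it keeps (version, timeToUpdate) pairs and can total the update time
// accrued for versions newer than a given version number.
public class UpdateStats {
    private final List<int[]> records = new ArrayList<>(); // {version, millis}

    public void recordUpdate(int version, int millis) {
        records.add(new int[] { version, millis });
    }

    /** Total update time for versions newer than the one given. */
    public int updateTimeSince(int version) {
        int total = 0;
        for (int[] r : records) {
            if (r[0] > version) total += r[1];
        }
        return total;
    }
}
```

This total is what the fetch manager sends alongside an object, and what the predictor compares against the fetch time.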
3.1.2 Data detailed design
Data entity 1 – Object store description
Name of field | Data type | Description
Object | Object | These are the objects contributed by the client process.
Object ID | String | The unique ID by which the object can be identified.
Class name | String | The class name of which the object is an instance.
Data entity 2 – Home set description
Name of field | Data type | Description
Class name | String | The class name of which the object is an instance.
Object ID | String | The unique ID by which the object can be identified.
Node IDs | String array | The remote references of all the home nodes.
Data entity 3 – Predictor data description
Name of field | Data type | Description
Object ID | String | The unique ID by which the object can be identified.
V last access | Integer | Stores the version number when the object was last accessed.
T fetch | Integer | The fetch time for a remote object.
V, T updates | Array list | Stores, for each version number, the time taken to update to that version.
History register (HR) | Integer | Stores the last k predictions made.
Pattern history table (PHT) | Array list | Stores the number of positive outcomes for each of the 2^k possible patterns in HR.
3.2 TEST PLAN
The following section gives test plan for the project.
3.2.1 Test case for Create New
Test 1
Test name: Not passing an object
Objective: Test that the object parameter passed is not an object
Input: Primitive type as input
Expected Output: Error message that the parameter is not an object
Test 2
Test name: Not passing a class name
Objective: Test that the class name parameter is missing
Input: Blank class name
Expected Output: Error message about the missing parameter
Test 3
Test name: Blank class name
Objective: Test if an empty class name is given as input
Input: Class name parameter is empty
Expected Output: Error message asking for a valid class name
Test 4
Test name: Valid input
Objective: Test that a correct create-new call creates a new object in HDSM
Input: Valid object and class name
Expected Output: The object is created in HDSM
3.2.2 Test case for Get Object id
Test 1
Test name: Not passing a class name
Objective: Test that the class name parameter is missing
Input: Blank class name
Expected Output: Error message about the missing parameter
Test 2
Test name: Class name not present
Objective: Test getting the object ID for a class name that is not in HDSM
Input: Class name that is not in HDSM
Expected Output: Returned string is null
Test 3
Test name: Valid class name (home node)
Objective: Test getting the object ID for a class name where the current node is a home node
Input: Class name that is in HDSM
Expected Output: One object ID for the class name is returned
Test 4
Test name: Valid class name (non-home node)
Objective: Test getting the object ID for a class name where the current node is a non-home node
Input: Class name that is in HDSM
Expected Output: One object ID for the class name is returned
3.2.3 Test case for Get Object ids
Test 1
Test name: Not passing a class name
Objective: Test that the class name parameter is missing
Input: Blank class name
Expected Output: Error message about the missing parameter
Test 2
Test name: Class name not present
Objective: Test getting the object IDs for a class name that is not in HDSM
Input: Class name that is not in HDSM
Expected Output: Returned string is null
Test 3
Test name: Storing multiple object IDs in a single string
Objective: Test storing an array of strings in a single string
Input: String array stored in a plain string
Expected Output: Error message about the incompatible assignment
Test 4
Test name: Valid class name (home node)
Objective: Test getting the object IDs for a class name where the current node is a home node
Input: Class name that is in HDSM
Expected Output: The object IDs for the class name are returned
Test 5
Test name: Valid class name (non-home node)
Objective: Test getting the object IDs for a class name where the current node is a non-home node
Input: Class name that is in HDSM
Expected Output: The object IDs for the class name are returned
3.2.4 Test case for Read
Test 1
Test name: Not passing an object ID
Objective: Test that the object ID parameter is missing
Input: Blank object ID
Expected Output: Error message about the missing parameter
Test 2
Test name: Read a non-existing object
Objective: Test reading an object that is not present in HDSM
Input: Object ID that is not present in HDSM
Expected Output: Null is returned
Test 3
Test name: Valid read of an object
Objective: Test reading an object present in HDSM for which the current node is a home node
Input: Object ID that is present in HDSM
Expected Output: The correct object for the object ID is returned
Test 4
Test name: Valid read of an object for which the current node is not a home node
Objective: Test reading an object that is present on another node
Input: Object ID that is present on another node
Expected Output: The correct object for the object ID is returned
Test 5
Test name: Continuously read the same object
Objective: Test converting the current non-home node into a home node
Input: Object ID that is present on another node
Expected Output: Current node is added as a home node
Test 6
Test name: Not reading the object for a long time
Objective: Test converting the current home node into a non-home node
Input: NA
Expected Output: Current node is removed from the home set
3.2.5 Test case for Write
Test 1
Test name: Not passing an object
Objective: Test that the object parameter passed is not an object
Input: Primitive type as input
Expected Output: Error message that the parameter is not an object
Test 2
Test name: Not passing an object ID
Objective: Test that the object ID parameter is missing
Input: Blank object ID
Expected Output: Error message about the missing parameter
Test 3
Test name: Writing to an object not present
Objective: Test writing with an object ID that is not in HDSM
Input: Object ID that is not present in HDSM
Expected Output: No change or removal takes place in HDSM
Test 4
Test name: Writing to an object present
Objective: Test a valid write to an object for which the current node is a home node
Input: Object ID that is present in HDSM
Expected Output: The object is updated and the other home nodes are also updated
Test 5
Test name: Continuously write to the same object
Objective: Test converting the current non-home node into a home node
Input: Object ID that is present on another node
Expected Output: Current node is added as a home node
Test 6
Test name: Not writing to the object for a long time
Objective: Test converting the current home node into a non-home node
Input: NA
Expected Output: Current node is removed from the home set
CHAPTER 4
IMPLEMENTATION AND PERFORMANCE RESULTS
4.1 IMPLEMENTATION
HDSM is implemented as middleware for Java-based applications. HDSM provides a distributed shared memory programming model for Java objects. All components of HDSM are written in the Java language using RMI (Remote Method Invocation) and sockets.
The DSM class provides all the methods listed in Section 2.5 and hides all networking issues from the programmer. The DSM class accesses the local manager, which handles all local node operations and abstracts the networking away from the programmer. The local manager also takes care of replication and consistency.
The update manager handles all multicast sending and receiving of updates. It also takes care of resending acknowledgements and processing received acknowledgements. A multicast time-out receiver is implemented to handle lost messages, and objects are versioned to manage duplicate messages.
The local lock manager handles all lock and unlock requests that arrive at a node. It manages both read and write locks for all objects at the node. Concurrent reads are allowed, but no writes are allowed while reads are in progress; only a single write is allowed at a time, during which no reads are allowed.
The fetch manager handles all fetch requests for objects at each node. It identifies whether an object replica is available locally; if so, it does not use the network. Otherwise it converts the request into a network request and returns the requested object.
The home set manager manages all the local managers. It coordinates all operations that use the network, both remote reads and remote writes, for nodes that do not have replicas.
The online home predictor makes all predictions, i.e. whether it is beneficial to create or remove a replica. It implements the operations of the two-level adaptive branch predictor and makes a prediction for each object replica at each node.
The online statistics recorder records the statistics needed for home node prediction for each object. It also maintains the version numbers of the objects.
4.2 HDSM SYSTEM
The following section shows sample screen shots of the application.
Home set manager
The screen shot of the home set manager is shown in Figure 4.1.
Figure 4.1 Home Set Manager Screen Shot
Local HDSM manager
The screen shot of the local HDSM manager is shown in Figure 4.2.
Figure 4.2 Local HDSM Manager Screen Shot
HDSM Sample distributed object’s class program
This is a sample client that can be used for working on HDSM.
import java.io.*;

public class Client1 implements Serializable {
    /***************************************************************
    Member Variables
    **************************************************************/
    int i;

    public Client1() {
        i = 10;
    }

    public void setI() {
        int integer;
        try {
            BufferedReader br = new BufferedReader(
                    new InputStreamReader(System.in));
            System.out.println("Enter some integer :");
            String input = br.readLine();
            integer = Integer.parseInt(input);
            i = integer;
        } catch (Exception e) {
            System.out.println("ERROR while setI in Client1");
            System.out.println(e);
        }
    }

    public int getI() {
        return i;
    }

    public void incrementI() {
        i++;
    }

    public void displayI() {
        System.out.println(i);
    }
}
Sample HDSM distributed program for creating new object
/***************************************************************
Imported Packages
**************************************************************/
import hdsm.*;

public class HDsmCreateClient1 {
    public static void main(String[] args) {
        DSM dsmObj = new DSM();
        Client1 cl = new Client1();
        System.out.println("Value of I = ");
        System.out.println(cl.getI());
        cl.setI();
        System.out.println("New Value of I = " + cl.getI());
        dsmObj.CreateNew(cl, "Client1");
    }
}
Sample HDSM distributed program for getting object id
/***************************************************************
Imported Packages
**************************************************************/
import hdsm.DSM;

public class HDsmGetOidClient1 {
    public static void main(String[] args) {
        DSM dsmObj = new DSM();
        System.out.println("Object ID for Client 1 -> ");
        System.out.println(dsmObj.getOID("Client1"));
    }
}
Sample HDSM distributed program for getting object ids
/***************************************************************
Imported Packages
**************************************************************/
import hdsm.DSM;

public class HDsmGetOidsClient1 {
    public static void main(String[] args) {
        DSM dsmObj = new DSM();
        String[] str = dsmObj.getOIDs("Client1");
        System.out.println("Object IDs for Client 1 -> ");
        for (int i = 0; i < str.length && str[i] != null; i++) {
            System.out.println(str[i]);
        }
    }
}
Sample HDSM distributed program for reading an object
/***************************************************************
Imported Packages
**************************************************************/
import hdsm.DSM;

public class HDsmReadAllClient1 {
    public static void main(String[] args) {
        DSM dsmObj = new DSM();
        // getOIDs returns an array of object IDs; read each one in turn
        String[] oids = dsmObj.getOIDs("Client1");
        for (int i = 0; i < oids.length && oids[i] != null; i++) {
            Client1 client1 = (Client1) dsmObj.read(oids[i]);
            client1.displayI();
        }
    }
}
Sample HDSM distributed program for writing to object
/***************************************************************
Imported Packages
**************************************************************/
import hdsm.DSM;

public class HDsmWriteClient1 {
    public static void main(String[] args) {
        DSM dsmObj = new DSM();
        String str = dsmObj.getOID("Client1");
        Client1 client1 = new Client1();
        client1.setI();
        dsmObj.write(client1, str);
    }
}
Sample HDSM distributed program for removing object
/***************************************************************
Imported Packages
**************************************************************/
import hdsm.DSM;

public class HDsmRemoveClient1 {
    public static void main(String[] args) {
        DSM dsmObj = new DSM();
        String str = dsmObj.getOID("Client1");
        System.out.println("removing " + str);
        dsmObj.remove(str);
    }
}
4.3 PERFORMANCE EVALUATION
This section presents performance results obtained by HDSM on sample distributed programs. It begins with a brief description of the experimental environment. Next, the performance of HDSM is evaluated and analyzed. Finally, the performance of HLRC is compared with that of HDSM implementing AHLRC.
4.3.1 Experimental Configuration
The performance of HDSM is evaluated on an Ethernet LAN. Each node contains a 1700 MHz Pentium IV processor and 256 megabytes of RAM, and runs Windows 2000 Professional with Java Development Kit 1.4.2_01. The interconnection network is 100 Mbps Ethernet. To support multicast for object updates, the multicast facility in j2sdk1.4.2_01 is used. Explicit acknowledgement messages are used to overcome the effect of lost messages. The creator of an object is its initial home node.
Because most accesses in DSM systems are short-term correlated [34], [35], a low history depth in the HR is enough to provide prediction accuracy. Therefore the nodes set the history length to three and the threshold to two.
4.3.2 HDSM Performance
The performance parameters were recorded for 1000 object accesses on HDSM. Table 4.1 shows the average read time for an object when a local replica exists at a node vs. when no local replica exists.
Table 4.1 HDSM Read Time (milliseconds)
Read time with local replica: 8.639
Read time without local replica: 49.073
Figure 4.3 Read Time (milliseconds) in HDSM
Read time in HDSM is reduced when a local replica exists compared to when it does not. This reduction is mainly due to the fact that reading from local memory is much faster than reading over the network from another node. There is a performance benefit of 40.434 milliseconds.
Table 4.2 shows the average write time for an object when only one replica exists and when more than one replica exists.
Table 4.2 HDSM Write Time (milliseconds)
Write time with more than one replica: 55.594
Write time with one replica: 46.477
Figure 4.4 Write Time (milliseconds) in HDSM
Write time in HDSM increases slightly when more than one replica exists compared to when only one replica exists. This write time loss is mainly due to the fact that more than one replica must be updated. There is a performance loss of 9.117 milliseconds.
On the other hand, the loss due to replication is smaller than the benefit due to replication, so replication is the better alternative. The overall performance improvement due to replication of objects at nodes is 31.317 milliseconds.
4.3.3 HDSM vs. HLRC Performance
Table 4.3 shows the average read time over 1000 reads for an object when the object is available at the local node.
Table 4.3 Local Read Time (milliseconds)
HDSM local read time: 0.48
HLRC local read time: 8.001
Figure 4.5 Local Read Time (milliseconds)
HDSM outperforms HLRC. This performance benefit arises mainly because HDSM pre-fetches objects once they are updated, whereas in HLRC an object is not updated until it is next accessed. This laziness in propagating updates to the home node increases the local read time.
Table 4.4 shows the average read time over 1000 reads for an object when the object is not available at the local node.
Table 4.4 Remote Read Time (milliseconds)
HDSM remote read time: 1.82
HLRC remote read time: 15.157
Figure 4.6 Remote Read Time (milliseconds)
HDSM outperforms HLRC. This performance benefit arises mainly because, after repeated remote reads, the local online home predictor makes the current node a home node for the object, converting subsequent remote reads into local reads. In HLRC, remote reads always use the network.
CHAPTER 5
CONCLUSION
The HDSM project presents a middleware for Java object-based distributed application development. This environment provides an abstraction for transparent sharing of distributed objects. The distributed programmer is able to use the HDSM middleware to develop Java-based distributed applications without specifying the inter-process communication. Dynamic replication of objects according to their access patterns is realized, and all replication and consistency of objects is maintained transparently, without any programming effort from the distributed programmer.
The middleware uses the adaptive home-based lazy release consistency (AHLRC) protocol to maintain consistency of the objects, built on the concept of using prediction to dynamically determine the replicating nodes for each object. By using local-only online home prediction to decide whether a node should create or drop a replica for a given object, AHLRC maintains overall system efficiency for different memory-sharing patterns.
When HDSM implementing the adaptive home-based lazy release consistency (AHLRC) protocol was compared with an object-based DSM implementing the home-based lazy release consistency (HLRC) protocol, HDSM showed better overall performance. HDSM could be extended to support fault tolerance.
REFERENCES
[1] K. Li and P. Hudak, “Memory Coherence in Shared Memory Systems”, In Proceedings of the 5th Annual ACM Symposium on Principles of Distributed Computing, page 229-239, August 1986 and ACM Transactions on Computer Systems, 7(4):321-359, November 1989.
[2] K. Li, “Shared Virtual Memory on Loosely-coupled Multiprocessors”, PhD Thesis, Yale University, October 1986. Tech Report YALEEU-RR-492.
[3] S. Ahuja, N. Carreiro, and D. Gelernter, “Linda and friends”, IEEE Computer, 19(8): 26-34, August 1986.
[4] L. Lamport, “How to Make a Multiprocessor Computer That Correctly Executes Multiprocess Programs”, IEEE Transactions on Computers, C-28(9): 690–691, 1979.
[5] K. Gharachorloo, D. Lenoski, J. Laudon, et al., “Memory consistency and event ordering in scalable shared-memory multiprocessors”, Proceedings of the 17th Annual International Symposium on Computer Architecture, pages 15–26, May 1990.
[6] J. B. Carter, J. K. Bennett, and W. Zwaenepoel, “Implementation and performance of Munin”, Proceedings of the 13th ACM Symposium on Operating Systems Principles, pages 152–164, October 1991.
[7] P. Keleher, A. L. Cox, and W. Zwaenepoel, “Lazy release consistency for software distributed shared memory”, Proceedings of the 19th Annual International Symposium on Computer Architecture, pages 13–21, May 1992.
[8] B. Bershad, M. Zekauskas, and W. Sawdon, “The Midway distributed shared memory system”, Proceedings of the 38th IEEE Computer Society International Conference, pages 528–537, March 1993.
[9] Y. Zhou, L. Iftode, and K. Li, “Performance evaluation of two home-based lazy release consistency protocols for shared virtual memory systems”, Proceedings of the 2nd USENIX Symposium on Operating System Design and Implementation, pages 75–88, October 1996.
[10] L. Iftode, J. P. Singh, and K. Li, “Scope consistency: A bridge between release consistency and entry consistency”, Proceedings of the 8th ACM Annual Symposium on Parallel Algorithms and Architectures, pages 277–287, June 1996.
[11] A. Bilas, C. Liao, and J. P. Singh, “Using network interface support to avoid asynchronous protocol processing in shared virtual memory systems”, Proceedings of the 26th International Symposium on Computer Architecture, pages 282–293, May 1999.
[12] M. C. Ng and W. F. Wong, “Orion: An adaptive home-based software distributed shared memory system”, Proceedings of the 7th International Conference on Parallel and Distributed Systems, pages 187–194, July 2000.
[13] R. Samanta, A. Bilas, L. Iftode, et al., “Home-based SVM protocols for SMP clusters: Design and performance”, Proceedings of the 4th International Symposium on High-Performance Computer Architecture, pages 113–124, February 1998.
[14] L. Whately, R. Pinto, M. Rangarajan, et al., “Adaptive techniques for home-based software DSMs”, Proceedings of the 13th Symposium on Computer Architecture and High-Performance Computing, pages 164–171, September 2001.
[15] H. C. Yun, S. K. Lee, J. Lee, et al., “An efficient lock protocol for home-based lazy release consistency”, Proceedings of the 3rd International Workshop on Software Distributed Shared Memory System, May 2001.
[16] L. Iftode, “Home-based Shared Virtual Memory”, PhD Thesis, Princeton University, 1998.
[17] B. Cheung, C. L. Wang, and K. Hwang, “A migrating-home protocol for implementing scope consistency model on a cluster of workstations”, Proceedings of the International Conference on Parallel and Distributed Processing Techniques and Applications, pages 821–827, June 1999.
[18] J. W. Chung, B. H. Seong, K. H. Park, et al., “Moving home based lazy release consistency for shared virtual memory systems”, Proceedings of the International Conference on Parallel Processing, pages 282–290, September 1999.
[19] W. Hu, W. Shi, and Z. Tang, “Home migration in home-based software DSMs”, Proceedings of the 1st ACM Workshop on Software DSM Systems, June 1999.
[20] P. Keleher, “Update protocols and iterative scientific applications”, Proceedings of the 12th International Parallel Processing Symposium, pages 675–681, March 1998.
[21] R. Stets, S. Dwarkadas, N. Hardavellas, et al., “Cashmere- 2l: Software coherent shared memory on a clustered remote write network”, Proceedings of the 16th ACM Symposium on Operating Systems Principles, pages 170–183, October 1997.
[22] P. Keleher, S. Dwarkadas, A. L. Cox, et al., “TreadMarks: Distributed shared memory on standard workstations and operating systems”, Proceedings of the Winter 1994 USENIX Conference, pages 115–131, January 1994.
[23] Song Peng and E. Speight, “Utilizing home node prediction to improve the performance of software distributed shared memory”, Proceedings of the 18th International Parallel and Distributed Processing Symposium, pages 59–68, April 2004.
[24] H. E. Bal, M. F. Kaashoek and A. S. Tanenbaum, “Experience with distributed programming in Orca”, In Proceedings of IEEE International Conference on Computer Language, pages 79-89, 1990.
[25] Nicholas Carriero and David Gelernter, “Linda in Context”, Communications of the ACM, 32(4): 444–458, April 1989.
[26] “JavaSpaces Technology”, Sun Microsystems Inc., www.java.sun.com, 1999.
[27] P. Wyckoff, S. McLaughry, T. Lehman, and D. Ford, “TSpaces”, IBM Systems Journal, 37(3), 1998.
[28] George Coulouris, Jean Dollimore, and Tim Kindberg, “Distributed Systems: Concepts and Design”, Third Edition, Pearson Education, 2001.
[29] Yukihiko Sohda, Hidemoto Nakada, and Satoshi Matsuoka, “Implementation of a portable software DSM in Java”, Proceedings of the 2001 Joint ACM-ISCOPE Conference on Java Grande, June 2001.
[30] T. Seidmann, “Distributed shared memory using the .NET framework”, Proceedings of the 3rd IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGrid 2003), pages 457–462, May 2003.
[31] E. Speight and M. Burtscher, “Delphi: Prediction-based page prefetching to improve the performance of shared virtual memory systems”, Proceedings of the International Conference on Parallel and Distributed Processing Techniques and Applications, pages 49–55, June 2002.
[32] T. Y. Yeh and Y. N. Patt, “Alternative implementation of two-level adaptive branch prediction”, Proceedings of the 19th Annual International Symposium on Computer Architecture, pages 124–134, May 1992.
[33] L. R. Monnerat and R. Bianchini, “Efficiently adapting to sharing patterns in software DSMs”, Proceedings of the 4th IEEE Symposium on High-Performance Computer Architecture, pages 289–299, February 1998.
[34] A. C. Lai and B. Falsafi, “Memory sharing predictor: The key to a speculative coherent DSM”, Proceedings of the 26th International Symposium on Computer Architecture, pages 172–183, June 1999.
[35] S. Mukherjee and M. D. Hill, “Using prediction to accelerate coherence protocols”, Proceedings of the 25th Annual International Symposium on Computer Architecture, pages 179–190, June 1998.