High Performance Computing in Remote Sensing - Chapter 9
Chapter 9

An Introduction to Grids for Remote Sensing Applications

Craig A. Lee, The Aerospace Corporation

Contents
9.1 Introduction
9.2 Previous Approaches
9.3 The Service-Oriented Architecture Concept
9.4 Current Approaches
    9.4.1 Web Services
    9.4.2 Grid Architectures
    9.4.3 End-User Tools
9.5 Current Projects
    9.5.1 Infrastructure Projects
    9.5.2 Scientific Grid Projects
9.6 Future Directions
References

This chapter introduces fundamental concepts for grid computing, Web services, and service-oriented architectures. We briefly discuss the drawbacks of previous approaches to distributed computing and how they are addressed by service architectures. After presenting the basic service architecture components, we discuss current Web service implementations, and how grid services are built on top of them to enable the design, deployment, and management of large, distributed systems. After discussing best practices and emerging standards for grid infrastructure, we also discuss some end-user tools. We conclude with a short review of some scientific grid projects whose science goals are directly relevant to remote sensing.

© 2008 by Taylor & Francis Group, LLC

9.1 Introduction

The concept of distributed computing has been around since the development of networks allowed many computers to interact. The current notion of grid computing, however, as a field of distributed computing, has been enabled by the pervasive availability of these devices and the resources they represent. In much the same way that the World Wide Web has made it easy to distribute Web content, even to PDAs and cell phones, and engage in user-oriented interactions, grid computing endeavors to make distributed computing resources easy to utilize for the spectrum of application domains [26].
Managing distributed resources for any computational purpose, however, is much more difficult than simply serving Web content. In general, grids and grid users require information and monitoring services to know what machines are available, what current loads are, and where faults have occurred. Grids also require scheduling capabilities, job submission tools, support for data movement between sites, and notification for job status and results. When managing sets of resources, the user may need workflow management tools. When managing such tasks across administrative domains, a strong security model is critical to authenticate user identities and enforce authorization policies.

With such fundamental capabilities in place, it is possible to support many different styles of distributed computing. Data grids will be used to manage access to massive data stores. Compute grids will connect supercomputing installations to allow coupled scientific models to be run. Task farming systems, such as SETI@Home [29] and Entropia [9], will be able to transparently distribute independent tasks across thousands of hosts. Supercomputers and databases will be integrated with cell phones to allow seamless interaction.

This level of managed resource sharing will enable resource virtualization. That is to say, computing tasks and computing resources will not have to be hard-wired in a fixed configuration to fixed machines to support a particular computing goal. Such flexibility will also support the dynamic construction of groups of resources and institutions into virtual organizations.

This flexibility and wide applicability to the scientific computing domain means that grids have clear relevance to remote sensing applications. From a computational viewpoint, remote sensing could have a broad interpretation to mean both on-orbit sensors that are remote from the natural phenomena being measured, and in-situ sensors that could be in a sensor web.
In both cases, the sensors are remote from the main computational infrastructure that is used to acquire, disseminate, process, and understand the data.

This chapter seeks to introduce grid computing technology in preparation for the chapters to follow. We will briefly review previous approaches to distributed computing before introducing the concept of service architectures. We then introduce current Web and grid service standards, along with some end-user tools for building grid applications. This is followed by a short survey of current grid infrastructure and science projects relevant to remote sensing. We conclude with a discussion of future directions.

9.2 Previous Approaches

The origins of the current grid computing approach can be traced to the late 1980s and early 1990s and the tremendous amounts of research being done on parallel programming and distributed systems. Parallel computers in a variety of architectures had become commercially available, and networking hardware and software were becoming more widely deployed. To effectively program these new parallel machines, a long list of parallel programming languages and tools was being developed and evaluated to support both shared-memory and distributed-memory machines [30].

With the commercial availability of networking hardware, it soon became obvious that networked groups of machines could be used together by one parallel code as a distributed-memory machine. NOWs (networks of workstations) became widely used for parallel computation. Such efforts gave rise to the notion of cluster computing, where commodity processors are connected with commodity networks. Dedicated networks with private IP addresses are used to support parallel communication, access to files, and booting the operating system on all cluster nodes from a single OS "image" file.
A special front-end machine typically provides the public interface.

Of course, networks were originally designed and built to connect heterogeneous sets of machines. Indeed, the field of distributed computing deals with essentially unbounded sets of machines where the number and location of machines may not be explicitly known. This is, in fact, a fundamental difference between clusters and grids as distributed computing infrastructures. In a cluster, the number and location of nodes are known and relatively fixed, whereas in a grid, this information may be relatively dynamic and may have to be discovered at run-time. Indeed, early distributed computing focused on basic capabilities such as algorithms for consensus, synchronization, and distributed termination detection, using whatever programming models were available.

At that time, systems such as the Distributed Computing Environment (DCE) [44] were built to facilitate the use of groups of machines, albeit in relatively static, well-defined, closed configurations. DCE used the notion of cells of machines in which users could run codes. Different mechanisms were used for inter-cell and intra-cell communication.

The Common Object Request Broker Architecture (CORBA) managed distributed systems by providing an object-oriented, client-side API that could access other objects through an Object Request Broker (ORB) [43]. CORBA is genuinely object-oriented and supports the key object-oriented properties such as encapsulation of state, inheritance, and methods that separate interfaces from implementations. To manage interfaces, CORBA used the notion of an Interface Definition Language (IDL), which could be used to produce stubs and skeletons. To use a remote object, a client would have to compile-in the required stub. If an object interface changed, the client would have to be recompiled.
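The stub mechanism just described can be sketched as follows. This is a toy illustration only: the class names are invented, and a plain dictionary stands in for the ORB transport, none of which corresponds to a real CORBA API.

```python
# Toy sketch of the CORBA stub idea: the client compiles in a proxy
# whose methods mirror the IDL interface and forward calls to the
# remote object.  A dict stands in for the ORB's transport here.

class ImageServantImpl:
    """Server-side implementation (the 'skeleton' target)."""
    def histogram(self, scene):
        return {"scene": scene, "bins": 256}

# Stand-in for the ORB: object name -> servant instance.
orb_registry = {"ImageServant": ImageServantImpl()}

class ImageServantStub:
    """Client-side stub generated from the IDL interface."""
    def __init__(self, name, orb):
        self._name, self._orb = name, orb

    def histogram(self, scene):
        # Marshal the call and forward it through the "ORB".
        servant = self._orb[self._name]
        return servant.histogram(scene)

stub = ImageServantStub("ImageServant", orb_registry)
print(stub.histogram("scene-007"))   # the client sees a local call
```

If the interface gains or changes a method, the stub class must be regenerated and the client rebuilt, which is exactly the brittleness noted above.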
Interoperability was not initially considered by the CORBA standards, and many vendor ORBs were not compatible. Hence, deploying a distributed system of any size required deploying the same ORB everywhere. Interoperability was eventually addressed by the Internet Inter-ORB Protocol (IIOP) [42]. While CORBA provided novel capabilities at the time, some people argued that the CORBA paradigm was not sufficiently loosely coupled to manage open-ended distributed systems. Interfaces were brittle, vendor ORBs were non-interoperable, and no real distinction was made between objects and object instances.

At roughly the same time, the term metacomputing was being used to describe the use of aggregate computing resources to address application requirements. Research projects such as Globus [38], Legion [41], Condor [33], and UNICORE [46] were underway and beginning to provide initial capabilities using 'home-grown' implementations. The Globus user, for instance, could use the Globus Monitoring and Discovery Service (MDS) to find appropriate hosts. The Globus Resource Access Manager (GRAM) client would then contact the Gatekeeper to do authentication, and request the local job manager to allocate and create the desired process. The Globus Access to Secondary Storage (GASS) (now deprecated) could be used to read remote files. Eventually the term metacomputing was replaced by grid computing, by way of the analogy with the electrical power grid, where power is available everywhere on demand.

While these early grid systems also provided novel capabilities, they nonetheless had design issues, too. Running a grid task using Globus was still oriented toward running a pre-staged binary identified by a known path name. Legion imposed an object model on all aspects of the system and applications regardless of whether it was necessary or not. The experience gained with these systems, however, was useful since they generated widespread interest and motivated further development.
9.3 The Service-Oriented Architecture Concept

With the rapid growth of interest in grid computing, it quickly became clear that the most widely supported models and best practices needed to be standardized. The lessons learned from these earlier approaches to distributed computing motivated the adoption and development of even more loosely coupled models that required even less a priori information about the computing environment. This approach is generally called service-oriented architecture. The fundamental notion is that hosts interact via services that can be dynamically discovered at run-time. Of course, this requires some conventions and at least one well-known service to enable the discovery process.

This is illustrated in Figure 9.1. First, an available service must register itself with a well-known registry service. A client can then query the registry to find appropriate services that match the desired criteria. One or more service handles are returned to the client, which may select among them. The service is then invoked with any necessary input data, and the results are eventually returned.

Figure 9.1 The service architecture concept: 1. Register (Publish); 2. Request; 3. Find; 4. Invoke; 5. Consume.

This fundamental interaction identifies several key components that must all be clearly defined to make this work:

- Representation. Service-related network messages and services must observe some common, base-level representation. In many cases, the eXtensible Markup Language (XML) is used.

- Transport Protocol. A network transport protocol is necessary for simply transferring service-related messages between the source and destination.

- Interaction Protocol.
Each service invocation must observe a protocol whereby the requester makes the request, the service acknowledges the request, and eventually returns the results, assuming no failures occur.

- Service Description. For each service that is available, there needs to be an on-line service description that captures all relevant properties of the service, e.g., the service name, what input data are required, what type of output data is produced, the running time, the service cost, etc.

- Service Discovery. In a loosely coupled, open-ended environment, it is untenable that every client must know a priori of every service it needs to use. Both client and service hosts may change for a variety of reasons, and being able to automatically discover and reconfigure the interactions is a key capability.

The clear benefit of this architectural concept is that service selection, location, and execution do not have to be hard-wired. The proper representations, descriptions, and protocols enable services to be potentially hosted in multiple locations, discovered and utilized as necessary. Service discovery enables a system to improve flexibility and fault tolerance by dynamically finding service providers when necessary. Of course, a service does not have to be looked up every time it is used. Once discovered, a service handle could be cached and used many times. It is also possible to compose multiple services into a single, larger, composite service.

Another extremely important concept is that service architectures enable the management of shared resources. We use the term resource, in its most general sense, to mean all manner of machines, processes, networks, bandwidth, routing, files, data, databases, instruments, sensors, signals, events, subsystems comprised of sets of resources, etc.
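Returning to the interaction pattern of Figure 9.1, the register/find/invoke cycle can be sketched with a toy in-process registry. All names and the attribute-matching scheme below are illustrative assumptions; a real deployment would use something like UDDI or a grid information service.

```python
# Toy illustration of the Figure 9.1 interaction: a registry maps
# service descriptions to handles that clients discover and invoke at
# run-time, rather than being hard-wired to one provider.

class Registry:
    def __init__(self):
        self._services = []          # (description, handle) pairs

    def register(self, description, handle):
        """Step 1: a service publishes itself."""
        self._services.append((description, handle))

    def find(self, criteria):
        """Steps 2-3: a client requests matching service handles."""
        return [h for d, h in self._services
                if all(d.get(k) == v for k, v in criteria.items())]

registry = Registry()

# A provider registers a (hypothetical) georeferencing service.
registry.register({"task": "georeference", "input": "raw-scene"},
                  handle=lambda scene: f"georeferenced({scene})")

# A client discovers matching handles and invokes one (steps 4-5).
handles = registry.find({"task": "georeference"})
result = handles[0]("landsat-042")
print(result)                        # -> georeferenced(landsat-042)
```

Because the client selects among whatever handles the registry returns, the same code keeps working when the service is re-hosted or replicated, which is the flexibility argued for above.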
Such resources can be made accessible as services in a distributed environment, which also means they can be shared among multiple clients. With multiple clients potentially competing for a shared resource, there must be well-defined security models and policies in place to establish client identity and determine who, for example, gets to use certain machines, what services or processes they can run there, who gets to read/write databases containing sensitive information, who gets notified about available information, etc.

Being able to manage shared resources means that we can virtualize those resources. Systems that are 'stovepiped' essentially have system functions that are all hard-wired to specific machines, and the interactions among those machines are also hard-wired. Hence, it is very difficult to modify or extend those systems to interact, or interoperate, with systems that they were not designed for. If specific system functions are not hard-wired to certain machines, and can be scheduled on different hosts while still functioning within the larger system, we have virtualized that system function. Hence, if system functions are addressed by logical name, they can be used regardless of what physical machine they are running on.

Note that data can also be virtualized. Data storage systems can become 'global' if the file name space is mapped across multiple physical locations. Again, by using a logical name, a file can be read or written without the client having to know where the file is physically stored. In fact, files could be striped or replicated across multiple locations to enable improved performance and reliability.

9.4 Current Approaches

This tremendous flexibility of virtualization is enabled by the notion of a service architecture, which is clearly embodied by design in Web services. As we shall see, each of the key components is represented.
While the basic Web services stack provides the fundamental mechanism for discovery and client-server interaction, there are many larger issues associated with managing sets of distributed resources that are not addressed. Such distributed resource management, however, was the original focus of grid computing to support large-scale scientific computations. After introducing the Web services stack, we shall look at how grid services build on top of it, and at some of the current and emerging end-user grid tools.

9.4.1 Web Services

Web services are not to be confused with Web pages, Web forms, or using a Web browser. Web services essentially define how a client and service can interact with a minimum of a priori information. The following Web service standards provide the necessary key capabilities [47]:

- XML (eXtensible Markup Language). XML is a common representation using content-oriented markup symbols that has gained widespread use in many applications. Besides providing basic structuring for attribute values, XML namespaces and schemas can be defined for specific applications.

- HTTP (Hyper Text Transport Protocol). HTTP is the transport protocol developed for the World Wide Web to move structured data from point A to point B. While its use for Web services is not mandatory, it is nonetheless widely used.

- SOAP (Simple Object Access Protocol). SOAP provides a request-reply interaction protocol with an XML-based message format. At the top level, SOAP messages consist of an envelope with delivery information, a header, and a message body with processing instructions. (We note that while SOAP was originally an acronym, it no longer is, since technically it is not used for objects.)

- WSDL (Web Services Description Language). WSDL provides an XML-based service interface description format for interfaces, attributes, and other properties.
- UDDI (Universal Description, Discovery and Integration). UDDI provides a platform-independent framework for publishing and discovering services. A WSDL service description may be published as part of a service's registration that is provided to the client upon look-up.

9.4.2 Grid Architectures

While Web services were originally motivated to provide basic client-server discovery and interaction, grids were originally motivated by the need to manage groups of machines for scientific computation. Hence, from the beginning, the grid community was concerned with issues such as the scheduling of resources, advance reservations (scheduling resources ahead of time), co-scheduling (scheduling sets of resources), workflow management, virtual organizations, and a distributed security model to manage access across multiple administrative domains. Since scientific computing was the primary motivation, there was also an emphasis on high performance and managing massive data. With the rapid emergence of Web services to address simple interactions in support of e-commerce, however, it quickly became evident that they would provide a widely accepted, standardized infrastructure on which to build. What Web services did not initially address adequately, however, was state management, lifetime management, and notification.

State management determines how data, i.e., state, are handled across successive service invocations. Clearly a client may need to have a sequence of related interactions with a remote service. The results of any one particular service invocation may depend on results from previous invocations. This would suggest that, in general, services may need to be stateful. However, a service may be servicing multiple clients with separate invocation streams. Also, successive service invocations may, in fact, involve the interaction of two different services in two different locations.
Hence, given these considerations, managing the context or session state separately from the service, such that the service itself is stateless rather than stateful, has distinct advantages, such as supporting workflow management and fault tolerance.

There are several design choices for how to go about this. A simple avenue is to carry all session state on each service invocation message. This allows services to be stateless, enabling better fault tolerance, because multiple servers can be used since they don't encapsulate any state. However, this approach is only reasonable for applications with small data sets, such as simple e-commerce interactions. For scientific applications where data sets may involve megabytes, gigabytes, or more, this is simply not scalable.

Another approach is to use a service factory to create a transient service instance that manages all state relevant to a particular client and invocation context. After the initial call, a new service handle is returned to the client that is used for all subsequent interactions with the transient service. While this approach may be somewhat more scalable, it means that all data are hidden in the service instance.

WS-Context [48] is yet another approach, where explicit context structures can be defined that capture all relevant context and session information for a set of related services and calls. While context structures can be passed by value (as part of a message), they can also be referenced by a URI (Uniform Resource Identifier) and accessed through a separate Context Service that manages a store of context structures. Contexts are created with a begin command and destroyed with a complete command. A timeout value can also be set for a context.

A fourth, similar, approach is the Web Services Resource Framework (WSRF) [12], where all relevant data, local or remote, can be managed as resources.
Resources are accessed through a WS-Resource-qualified endpoint reference that is essentially a 'network-wide' pointer to a WS-Resource. Such endpoints may be returned as a reply to a service request, returned from a query to a service registry, or returned from a request to a resource factory to create a new WS-Resource. In fact, WSRF does not define specific mechanisms for creating resources. Nonetheless, the lifetime of a resource can be explicitly managed. A resource may be immediately destroyed with an explicit destroy request message, or through a scheduled destruction at a future time.

Equally important as the management of state is the management of services themselves. Services can be manually installed on particular hosts and registered with a registry, but this can become untenable for even moderately sized systems. If a process or host should crash, identifying the problem and manually rebooting the system can be tedious. Hence, there is clearly a need to be able to dynamically install, boot, and terminate services. For this reason, the concept of service containers was developed. Services are typically deployed within a container and have a specific time-to-live. If a service's time-to-live is not occasionally extended, it will eventually be terminated by the container, thereby reclaiming the resources (host memory and cycles) for other purposes without having to have a distributed garbage collection mechanism.

Event notification is an essential part of any distributed system [19]. Events can be considered to be simply small messages that have the semantics of being delivered and acted upon as quickly as possible. Hence, events are commonly used to asynchronously signal any system occurrences that have a time-sensitive nature. Simple, atomic events can be used to represent occurrences such as process completion, failure, or heartbeats during execution.
Events could also be represented by attribute-value pairs with associated metadata, such as changes in a sensor value that exceed some threshold. Events could also have highly structured, compound attributes, such as the 'interest regions' in a distributed simulation. (Interest regions can be used by simulated entities to 'advertise' what types of simulated events are of interest.)

Regardless of the specific representation, events have producers and consumers. In simple systems, event producers and consumers may be explicitly known to one another and be connected point-to-point. Many event systems offer topic-oriented publish/subscribe, where producers (publishers) and consumers (subscribers) use a named channel to distribute events related to a well-known topic. In contrast, content-oriented publish/subscribe delivers events by matching an event's content (attributes and values) to content-based subscriptions posted by consumers.

In the context of grid service architectures, WSRF uses WS-Notification [28], which supports event publishers, consumers, topics, subscriptions, etc., for XML-based messages. Besides defining NotificationProducers and NotificationConsumers that can exchange events, WS-Notification also supports the notion of subscriptions to WSRF Resource Properties. That is to say, a client can subscribe to a remote (data) resource and be notified when the resource value changes. In conjunction with WS-Notification, WS-Topics presents XML descriptions of topics and associated metadata, while WS-BrokeredNotification defines an intermediary service to manage subscriptions.

Alongside all of these capabilities is an integral security model. This is critical in a distributed computing infrastructure that may span multiple administrative domains. Security requires that authentication, authorization, privacy, data integrity, and non-repudiation be enforced. Authentication establishes a user's identity, while authorization establishes what they can do.
Privacy ensures that data cannot be seen and understood by unauthorized parties, while data integrity ensures that data cannot be maliciously altered even though it may be seen (regardless of whether it is understood). Non-repudiation between two partners to a transaction means that neither partner can later deny that the transaction took place.

These capabilities are commonly provided by the Grid Security Infrastructure (GSI) [40]. GSI uses public/private key cryptography rather than simply passwords. User A's public key can be widely distributed. Any User B can use this public key to encrypt a message to User A. Only User A can decrypt the message, using the private key. In essence, public/private keys make it possible to digitally 'sign' information. GSI uses this concept to build a certificate that establishes a user's identity. A GSI certificate has a subject name (user or object), the public key associated with that subject, the name of the Certificate Authority (CA) that signed the certificate certifying that the public key and identity belong to the subject, and the digital signature of the named CA.

GSI also provides the capability to delegate trust to a proxy using a proxy certificate, thus allowing multiple entities to act on the user's behalf. This, in turn, enables the capability of single sign-on, where a user only has to 'login once' to be authenticated to all resources that are in use. Using these capabilities, we note that it is possible to securely build virtual organizations (VOs) across physical organizations by establishing one's grid identity and role within the VO.

Figure 9.2 The OGSA framework. (Applications and user-level tools atop OGSA services: execution management, data management, resource management, security, self-management, and information services; built on Web services over operating systems, networks, servers, and storage.)
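The certificate structure just described can be sketched in miniature. This is a toy model only: real GSI uses X.509 certificates and public-key signatures, whereas here the signature is faked with a keyed hash, and every name and secret is invented purely for illustration.

```python
import hashlib

# Toy model of a GSI-style certificate: a subject name, the subject's
# public key, the issuer's name, and a signature over those fields.
# The "signature" below is a keyed hash, NOT real cryptography.

def toy_sign(signer_secret, payload):
    return hashlib.sha256((signer_secret + payload).encode()).hexdigest()

def make_cert(subject, public_key, issuer, signer_secret):
    payload = f"{subject}|{public_key}|{issuer}"
    return {"subject": subject, "public_key": public_key,
            "issuer": issuer, "signature": toy_sign(signer_secret, payload)}

def verify(cert, signer_secret):
    payload = f"{cert['subject']}|{cert['public_key']}|{cert['issuer']}"
    return cert["signature"] == toy_sign(signer_secret, payload)

ca_secret, user_secret = "ca-private", "alice-private"   # illustrative
user_cert = make_cert("/O=Grid/CN=Alice", "alice-pubkey", "Example CA",
                      ca_secret)

# Delegation: in GSI the *user* signs a short-lived proxy certificate,
# so services acting on Alice's behalf never hold her long-term key.
proxy_cert = make_cert("/O=Grid/CN=Alice/CN=proxy", "proxy-pubkey",
                       "/O=Grid/CN=Alice", user_secret)

assert verify(user_cert, ca_secret) and verify(proxy_cert, user_secret)
```

Verifying the proxy against the user certificate, and the user certificate against the CA, forms the chain of trust that makes single sign-on possible.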
The GridShib project is extending GSI with work from the Internet2 Shibboleth project [24]. Shibboleth is based on the Security Assertion Markup Language (SAML) to exchange attributes between trusted organizations. To use a remote resource, a user authenticates to their home institution, which, in turn, authenticates them to the remote institution based on the user's attributes. GridShib introduces both push and pull modes for managing the exchange of attributes and certificates.

Up to this point, we have presented and discussed basic Web services and the additional fundamental capabilities that essentially extend these into grid services. We now discuss how these capabilities can be combined into a coherent service architecture. The key example here is the emerging standard of the Open Grid Services Architecture (OGSA) [10, 39]. Figure 9.2 gives a very high-level view of how OGSA interprets basic Web services to present a uniform user interface and service architecture for the management of servers, storage, and networks. OGSA provides the following broad categories of services:

- Execution Management Services
    - Execution Planning
    - Resource Selection and Scheduling
    - Workload Management
- Data Services
    - Storage Management
    - Transport
    - Replica Management
- Resource Management Services
    - Resource Virtualization
    - Reservations
    - Monitoring and Control

[...]

Microsoft announced their intent to converge some of the competing Web standards [18]. At the risk of over-simplification, this effort will attempt to merge WS-Management and WS-Distributed Management, merge WS-Notification and WS-Eventing, and refactor services for managing resources and metadata. While no timetable was announced [...]

[...] important since there will always be support for the fundamental capabilities of information discovery, remote job management, data movement, etc. Any distinctions will be based on the style of application and how it uses these fundamental capabilities. That is to say, service architectures will enable a wider range of peer-to-peer computing, Internet computing, high-throughput computing, distributed supercomputing, or ubiquitous computing.

9.4.3 End-User Tools

Thus far, we have described the service-oriented [...]

[...] supported through a joint collaboration of the UK Research Councils and likewise aims to facilitate joint academic and industrial research. Supported application domains include astronomy, physics, biology, engineering, finance, and health care. The Open Middleware Infrastructure Institute (OMII-UK) maintains an open-source Web service infrastructure and provides comprehensive training for application stakeholders [...]

NaReGI. The Japanese NaReGI (National Research Grid Initiative) [21] will facilitate collaboration among academia, industry, and government by supporting many virtual organizations with resources across the country. Super-SINET is the NaReGI network infrastructure and provides a 10-Gbps photonic backbone stretching 6000 km. The initial NaReGI [...]

[...] national and international grid infrastructures and facilitate science discovery and engineering advancement, the immediate goal now is to connect them to remote sensing applications.

DAME. As part of the UK e-Science Programme, the Distributed Aircraft Maintenance Environment (DAME) project endeavored to build a grid-based diagnosis and prognosis system for jet engines [5]. Vibration sensor data from in-service [...] arrived at an airport and have enough throughput to do an analysis within the aircraft's turn-around time. Clearly, though, with the advent of satellite-based, in-flight Internet access, such sensor data could be streamed in real-time to analysis facilities.

IVOA. The goal of the International Virtual Observatory [...]

[...] research efforts to build distributed infrastructure prototypes to the current efforts in Web/grid service architectures. Grids still, however, have a tremendous amount of unrealized potential.

Seamless Integration of Resources in a Ubiquitous Service Architecture: Grids will not begin to reach their full potential until [...]

References

[...] Hawaii International Conference on System Sciences, January 2004.
[5] J. Austin et al. DAME: Searching Large Data Sets Within a Grid-Enabled Engineering Application. Proceedings of the IEEE, pages 496–509, March 2005. Special issue on Grid Computing, M. Parashar and C. Lee, guest editors.
[6] E. Deelman et al. Mapping Abstract Complex Workflows onto Grid Environments. Journal of Grid Computing, 1(1), 2003.
[7] A. Donnellan, J. Parker, [...]
[...] http://www.earthobservations.org
[15] The MathWorks, Inc. MATLAB Reference Guide. 1992.
[16] International Virtual Observatory Alliance. http://www.ivoa.net
[17] K. Ballinger et al. Basic Profile, v1.0. http://www.ws-i.org/Profiles/BasicProfile-1.0-2004-04-16.html
[18] K. Cline and others. Towards Converging Web Service Standards for Resources, [...] http://devresource.hp.com/drc/specifications/wsm/index.jsp
[19] C. Lee, B. S. Michel, E. Deelman, and J. Blythe. From Event-Driven Workflows Towards a posteriori Computing. In Reinefeld, Laforenza, and Getov, editors, Future Generation Grids, pages 3–28. Springer-Verlag, 2006.
[20] C. Lee and D. Talia. Grid Programming Models: Current Tools, Issues and Directions. In Berman, Fox, and Hey, editors, Grid Computing: Making the Global Infrastructure [...]
