Handbook of Research on Geoinformatics - Hassan A. Karimi Part 4 pot

Network Modeling

[...] and methods for describing and measuring networks and proving properties of networks are well-developed. There are a variety of network models in GISystems, which are differentiated primarily by the topological relationships they maintain. Network models can act as the basis for location through the process of linear referencing. Network analyses such as routing and flow modeling have to some extent been implemented, although there are substantial opportunities for additional theoretical advances and diversified application.
Keywords

Capacity: The largest amount of flow permitted on an edge or through a vertex.

Graph Theory: The mathematical discipline related to the properties of networks.

Linear Referencing: The process of associating events with a network datum.

Network: A connected set of edges and vertices.

Network Design Problems: A set of combinatorially complex network analysis problems where routes across (or flows through) the network must be determined.

Network Indices: Comparisons of network measures designed to describe the level of connectivity, level of efficiency, level of development, or shape of a network.

Topology: The study of those properties of networks that are not altered by elastic deformations. These properties include adjacency, incidence, connectivity, and containment.

Chapter XVI
Artificial Neural Networks

Xiaojun Yang
Florida State University, USA

Copyright © 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.

Abstract

Artificial neural networks are increasingly being used to model complex, nonlinear phenomena. The purpose of this chapter is to review the fundamentals of artificial neural networks and their major applications in geoinformatics. It begins with a discussion of the basic structure of artificial neural networks, with a focus on multilayer perceptron networks given their robustness and popularity. This is followed by a review of the major applications of artificial neural networks in geoinformatics, including pattern recognition and image classification, hydrological modeling, and urban growth prediction. Finally, several areas are identified for further research in order to improve the success of artificial neural networks for problem solving in geoinformatics.
Introduction

An artificial neural network (commonly just "neural network") is an interconnected assemblage of artificial neurons that uses a mathematical or computational model of theorized mind and brain activity, attempting to parallel and simulate the powerful capabilities for knowledge acquisition, recall, synthesis, and problem solving. It originated from the concept of the artificial neuron introduced by McCulloch and Pitts in 1943. Over the past six decades, artificial neural networks have evolved from the preliminary development of the artificial neuron, through the rediscovery and popularization of the back-propagation training algorithm, to the implementation of artificial neural networks using dedicated hardware. Theoretically, artificial neural networks are highly robust with respect to data distribution, and can handle incomplete, noisy, and ambiguous data. They are well suited for modeling complex, nonlinear phenomena ranging from financial management and hydrological modeling to natural hazard prediction. The purpose of this article is to introduce the basic structure of artificial neural networks, review their major applications in geoinformatics, and discuss future and emerging trends.

Background

The basic structure of an artificial neural network involves a network of many interconnected neurons. These neurons are very simple processing elements that individually handle pieces of a big problem. A neuron computes an output using an activation function that considers the weighted sum of all its inputs. These activation functions can take many different forms, but the logistic sigmoid function is quite common:

    f(x) = 1 / (1 + e^(-x))     (1)

where f(x) is the output of a neuron and x represents the weighted sum of inputs to the neuron. As suggested by Equation 1, the principles of computation at the neuron level are quite simple, and the power of neural computation relies upon the use of distributed, adaptive, and nonlinear computing.
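As a quick illustration of Equation 1, the computation a single neuron performs, a weighted sum of its inputs passed through the logistic sigmoid, can be sketched in a few lines. The input and weight values below are illustrative only, not taken from the chapter:

```python
import math

def sigmoid(x):
    # Logistic sigmoid, f(x) = 1 / (1 + e^(-x)), as in Equation 1
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias=0.0):
    # A neuron applies the activation function to the weighted sum of its inputs
    x = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(x)

# Example: a neuron with three inputs and illustrative weights
print(neuron_output([1.0, 0.5, -0.5], [0.4, 0.3, 0.2]))  # a value in (0, 1)
```

Because the sigmoid squashes any weighted sum into the interval (0, 1), the neuron's output can be read as a graded response rather than a hard threshold.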
The distributed computing environment is realized through the massively interconnected neurons that share the load of the overall processing task. The adaptive property is embedded in the network by adjusting the weights that interconnect the neurons during the training phase. The use of an activation function in each neuron introduces nonlinear behavior to the network.

There are many different types of neural networks, but most fall into one of the five major paradigms listed in Table 1. Each paradigm has advantages and disadvantages depending upon the specific application. A detailed discussion of these paradigms can be found elsewhere (e.g., Bishop, 1995; Rojas, 1996; Haykin, 1999; and Principe et al., 2000). This article concentrates upon multilayer perceptron networks due to their technological robustness and popularity (Bishop, 1995).

Figure 1 illustrates a simple multilayer perceptron neural network with a 4×5×4×1 structure. This is a typical feed-forward network that allows the connections between neurons to flow in one direction. Information flow starts from the neurons in the input layer and then moves along weighted links to neurons in the hidden layers for processing. The weights are normally determined through training. Each neuron contains a nonlinear activation function that combines information from all neurons in the preceding layer. The output layer is a complex function of the inputs and internal network transformations.

The topology of a neural network is critical for neural computing to solve problems with reasonable training time and performance. For any neural computing, training time is always the biggest bottleneck, and thus every effort is needed to make training effective and affordable. Training time is a function of the complexity of the network topology, which is ultimately determined by the combination of hidden layers and neurons.
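The feed-forward flow just described, from the input layer through the hidden layers to the output layer, with each neuron combining all outputs of the preceding layer, can be sketched for the 4×5×4×1 topology of Figure 1. In a real application the weights would be determined through training; here they are random placeholders:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each neuron combines all outputs of the preceding layer
    return [sigmoid(sum(i * w for i, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

def mlp_forward(x, layers):
    # Information flows in one direction, layer by layer
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

random.seed(0)
sizes = [4, 5, 4, 1]  # the 4x5x4x1 topology of Figure 1
layers = []
for n_in, n_out in zip(sizes, sizes[1:]):
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [0.0] * n_out
    layers.append((weights, biases))

output = mlp_forward([0.2, 0.4, 0.6, 0.8], layers)
print(output)  # one value in (0, 1)
```

Replacing the random placeholders with trained weights is exactly what the learning algorithms discussed below accomplish.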
A trade-off is needed to balance the processing purpose of the hidden layers against the training time needed. A network without a hidden layer is only able to solve a linear problem. To tackle a nonlinear problem, a reasonable number of hidden layers is needed. A network with one hidden layer has the power to approximate any function, provided that the number of neurons and the training time are not constrained (Hornik, 1993). But in practice, many functions are difficult to approximate with one hidden layer, and thus Flood and Kartam (1994) suggested using two hidden layers as a starting point.

Table 1. Classification of artificial neural networks (Source: Haykin, 1999)

1. Feed-forward neural networks
   - Multi-layer perceptron: consists of multiple layers of processing units that are usually interconnected in a feed-forward way.
   - Radial basis functions: powerful interpolation techniques used to replace the sigmoidal hidden-layer transfer function in multi-layer perceptrons.
   - Kohonen self-organizing networks: use a form of unsupervised learning to map points in an input space to coordinates in an output space.
2. Recurrent networks (e.g., simple recurrent networks, Hopfield network): contrary to feed-forward networks, recurrent neural networks use bi-directional data flow and propagate data from later processing stages back to earlier stages.
3. Stochastic neural networks (e.g., Boltzmann machine): introduce random variations, often viewed as a form of statistical sampling, into the network.
4. Modular neural networks (e.g., committee of machines): use several small networks that cooperate or compete to solve problems.
5. Other types
   - Dynamic neural networks: deal not only with nonlinear multivariate behavior but also include learning of time-dependent behavior.
   - Cascading neural networks: begin training without any hidden neurons; when the output error reaches a predefined threshold, a new hidden neuron is added.
   - Neuro-fuzzy networks: fuzzy inference systems embedded in a neural network, introducing processes such as fuzzification, inference, aggregation, and defuzzification.

Figure 1. A simple multilayer perceptron (MLP) neural network with a 4×5×4×1 structure

The number of neurons for the input and output layers can be defined according to the research problem identified in an actual application. The critical aspect is the choice of the number of neurons in the hidden layers, and hence the number of connection weights. If there are too few neurons in the hidden layers, the network may be unable to approximate very complex functions because of insufficient degrees of freedom. On the other hand, if there are too many neurons, the network tends to have a large number of degrees of freedom, which may lead to overtraining and hence poor performance in generalization (Rojas, 1996). Thus, it is crucial to find the 'optimum' number of neurons in the hidden layers that adequately captures the relationships in the training data. This optimization can be achieved by trial and error or by systematic approaches such as pruning and constructive algorithms (Reed, 1993).

Training is a learning process by which the connection weights are adjusted until the network is optimal. This involves the use of training samples, an error measure, and a learning algorithm. Training samples are presented to the network with input and output data over many iterations. They should not only be large in size but also be representative of the entire data set to ensure sufficient generalization ability. There are several different error measures, such as the mean squared error (MSE), the mean squared relative error (MSRE), the coefficient of efficiency (CE), and the coefficient of determination (r²) (Dawson and Wilby, 2001). The MSE has been the most commonly used.
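The degrees-of-freedom concern above can be made concrete by counting the connection weights that training must determine in a fully connected feed-forward network (one bias term per non-input neuron is included here, which is a common convention):

```python
def count_parameters(sizes):
    # Fully connected layers: each neuron carries one weight per input
    # from the preceding layer, plus a bias term.
    weights = sum(a * b for a, b in zip(sizes, sizes[1:]))
    biases = sum(sizes[1:])
    return weights + biases

# The 4x5x4x1 network of Figure 1:
print(count_parameters([4, 5, 4, 1]))   # 54 parameters
# Enlarging the hidden layers inflates the degrees of freedom quickly:
print(count_parameters([4, 10, 8, 1]))  # 147 parameters
```

Each extra hidden neuron adds a full set of incoming and outgoing weights, which is why over-sized hidden layers invite overtraining while under-sized ones starve the approximation.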
The overall goal of training is to optimize errors through either a local or a global learning algorithm. Local methods adjust the weights of the network by using its localized input signals and the localized first- or second-order derivative of the error function. They are computationally effective for changing the weights in a feed-forward network, but are susceptible to local minima in the error surface. Global methods are able to escape local minima in the error surface and thus can find optimal weight configurations (Maier and Dandy, 2000).

By far the most popular algorithm for optimizing feed-forward neural networks is error back-propagation (Rumelhart et al., 1986). This is a first-order local method based on steepest descent, in which the descent direction is the negative of the gradient of the error. The drawback of this method is that its search for the optimal weights can become caught in local minima, resulting in suboptimal solutions. This vulnerability increases when the step size taken in weight space becomes too small. Increasing the step size can help escape local error minima, but when the step size becomes too large, training can fall into oscillatory traps (Rojas, 1996). If that happens, the algorithm diverges and the error increases rather than decreases.

It is therefore difficult to find a step size that balances high learning speed against the risk of divergence. Several algorithms have been introduced to adapt step sizes during training (e.g., Maier and Dandy, 2000); in practice, however, a trial-and-error approach has often been used to optimize the step size. Another sensitive issue in back-propagation training is the choice of initial weights. In the absence of any a priori knowledge, random values should be used for the initial weights.

The stop criteria for learning are very important.
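The step-size dilemma described above is visible even on a one-dimensional error surface E(w) = w², where steepest descent repeatedly subtracts the step size times the gradient. The surface and the two step sizes are illustrative choices, not values from the chapter:

```python
def steepest_descent(w, step, iters):
    # E(w) = w^2, so dE/dw = 2w; move opposite the gradient each iteration
    errors = []
    for _ in range(iters):
        w = w - step * 2 * w
        errors.append(w * w)   # record the error after each update
    return errors

small = steepest_descent(1.0, 0.1, 10)   # error shrinks at every step
large = steepest_descent(1.0, 1.1, 10)   # weight overshoots; error grows
print(small[-1], large[-1])
```

With a step size of 0.1 the error decreases monotonically; at 1.1 the weight overshoots the minimum with growing amplitude, the oscillatory trap noted by Rojas (1996).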
Training can be stopped when a specified total number of iterations or a targeted error value is reached, or when training is at the point of diminishing returns. It should be noted that a low error level alone is not always a safe stopping criterion, because of possible overtraining or overfitting. When this happens, the network memorizes the training patterns and loses the ability to generalize. A highly recommended method for stopping training is cross validation (e.g., Amari et al., 1997). In doing so, an independent data set is required for test purposes, and close monitoring of the error in the training set and the test set is needed. Once the error in the test set increases, the training should be stopped, since the point of best generalization has been reached.

Applications

Artificial neural networks are applicable when a relationship between the independent variables and the dependent variables exists. They have been applied to such generic tasks as regression analysis, time series prediction and modeling, pattern recognition and image classification, and data processing. Their applications in geoinformatics have concentrated on a few major areas, such as pattern recognition and image classification (Bruzzone et al., 1999), hydrological modeling (Maier and Dandy, 2000), and urban growth prediction (Yang, 2009). The following paragraphs provide a brief review of these areas.

Pattern recognition and image classification are among the most common applications of artificial neural networks in remote sensing, and the documented cases overwhelmingly relied upon multi-layer perceptron networks.
The major advantages of artificial neural networks over conventional parametric statistical approaches to image classification, such as the Euclidean, maximum likelihood (ML), and Mahalanobis distance classifiers, are that they are distribution-free, with less severe statistical assumptions needed, and that they are suitable for integrating data from various sources (Foody, 1995). Artificial neural networks have been found to be accurate in the classification of remotely sensed data, although improvements in accuracy have generally been small or modest (Campbell, 2002).

Artificial neural networks are being used increasingly to predict and forecast water resource variables such as algae concentration, nitrogen concentration, runoff, total volume, discharge, or flow (Maier and Dandy, 2000; Dawson and Wilby, 2001). Most of the documented cases used a multi-layer perceptron trained with the back-propagation algorithm. Based on the results obtained so far, there is little doubt that artificial neural networks have the potential to be a useful tool for the prediction and forecasting of water resource variables.

The application of artificial neural networks to urban predictive modeling is a new but rapidly expanding area of research (Yang, 2009). Neural networks have been used to compute development probability by integrating a set of predictive variables, as the core of a land transformation model (e.g., Pijanowski et al., 2002) or of a cellular automata-based model (e.g., Yeh and Li, 2003). All the applications documented so far involved the multilayer perceptron network, a grid-based modeling framework, and a geographic information system (GIS) that was loosely or tightly integrated with the network for input data preparation, model validation, and analysis.

Conclusion and Future Trends

Based on the many applications documented in recent years, the prospect of artificial neural networks in geoinformatics seems quite promising.
On the other hand, the capability of neural networks tends to be oversold, as if they were an all-inclusive 'black box' capable of formulating an optimal solution to any problem regardless of network architecture, system conceptualization, or data quality. As a result, the field has been characterized by inconsistent research design and poor modeling practice. Several researchers have recently emphasized the need to adopt a systematic approach to neural network model development that considers problem conceptualization, data preprocessing, network architecture design, training methods, and model validation in a sequential mode (e.g., Maier and Dandy, 2000; Dawson and Wilby, 2001; Yang, 2009).

There are a few areas where further research is needed. Firstly, there are many arbitrary decisions involved in the construction of a neural network model, and therefore there is a need to develop guidance that helps identify the circumstances under which particular approaches should be adopted and how to optimize the parameters that control them. For this purpose, more empirical inter-model comparisons and rigorous assessments of neural network performance with different inputs, architectures, and internal parameters are needed. Secondly, data preprocessing is an area where little guidance can be found. Many theoretical assumptions have not been confirmed by empirical trials, and it is not clear how different preprocessing methods affect model outcomes. Future investigation is needed to explore the impact of data quality and of different methods of data division, data standardization, and data reduction. Thirdly, continuing research is needed to develop effective strategies and probing tools for mining the knowledge contained in the connection weights of trained neural network models for prediction purposes.
This can help uncover the 'black-box' construction of a neural network, thus facilitating the understanding of the physical meanings of spatial factors and their contributions. This should help improve the success of neural network applications for problem solving in geoinformatics.

References

Amari, S., Murata, N., Muller, K. R., Finke, M., & Yang, H. H. (1997). Asymptotic statistical theory of overtraining and cross-validation. IEEE Transactions on Neural Networks, 8(5), 985-996.

Bishop, C. (1995). Neural Networks for Pattern Recognition. Oxford: Oxford University Press.

Bruzzone, L., Prieto, D. F., & Serpico, S. B. (1999). A neural-statistical approach to multitemporal and multisource remote-sensing image classification. IEEE Transactions on Geoscience and Remote Sensing, 37(3), 1350-1359.

Campbell, J. B. (2002). Introduction to Remote Sensing (3rd ed.). New York: The Guilford Press.

Dawson, C. W., & Wilby, R. L. (2001). Hydrological modelling using artificial neural networks. Progress in Physical Geography, 25(1), 80-108.

Flood, I., & Kartam, N. (1994). Neural networks in civil engineering. II: Systems and application. Journal of Computing in Civil Engineering, 8(2), 149-162.

Foody, G. M. (1995). Land cover classification using an artificial neural network with ancillary information. International Journal of Geographical Information Systems, 9, 527-542.

Haykin, S. (1999). Neural Networks: A Comprehensive Foundation. Prentice Hall.

Hornik, K. (1993). Some new results on neural-network approximation. Neural Networks, 6(8), 1069-1072.

Kwok, T. Y., & Yeung, D. Y. (1997). Constructive algorithms for structure learning in feed-forward neural networks for regression problems. IEEE Transactions on Neural Networks, 8(3), 630-645.

Maier, H. R., & Dandy, G. C. (2000). Neural networks for the prediction and forecasting of water resources variables: A review of modeling issues and applications.
Environmental Modelling & Software, 15, 101-124.

Pijanowski, B. C., Brown, D., Shellito, B., & Manik, G. (2002). Using neural networks and GIS to forecast land use changes: A land transformation model. Computers, Environment and Urban Systems, 26, 553-575.

Principe, J. C., Euliano, N. R., & Lefebvre, W. C. (2000). Neural and Adaptive Systems: Fundamentals Through Simulations. New York: John Wiley & Sons.

Reed, R. (1993). Pruning algorithms: A survey. IEEE Transactions on Neural Networks, 4(5), 740-747.

Rojas, R. (1996). Neural Networks: A Systematic Introduction. Berlin: Springer-Verlag.

Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representations by error propagation. In D. E. Rumelhart & J. L. McClelland (Eds.), Parallel Distributed Processing. Cambridge: MIT Press.

Yang, X. (2009). Artificial neural networks for urban modeling. In M. Madden (Ed.), Manual of Geographic Information Systems. American Society for Photogrammetry and Remote Sensing (in press).

Yeh, A. G. O., & Li, X. (2003). Simulation of development alternatives using neural networks, cellular automata, and GIS for urban planning. Photogrammetric Engineering and Remote Sensing, 69(9), 1043-1052.

Key Terms

Architecture: The structure of a neural network, including the number and connectivity of neurons. A network generally consists of an input layer, one or more hidden layers, and an output layer.

Back-Propagation: The training algorithm for feed-forward, multi-layer perceptron networks, which works by propagating errors back through the network and adjusting weights in the direction opposite to the largest local gradient.

Error Space: The n-dimensional surface in which the weights of a network are adjusted by the back-propagation algorithm to minimize model error.

Feed-Forward: A network in which all the connections between neurons flow in one direction, from an input layer, through hidden layers, to an output layer.
Multilayer Perceptron: The most popular network type, consisting of multiple layers of processing units interconnected in a feed-forward way.

Neuron: The basic building block of a neural network. A neuron sums the weighted inputs, processes them using an activation function, and produces an output response.

Pruning Algorithm: A training algorithm that optimizes the number of hidden-layer neurons by removing or disabling unnecessary weights or neurons from a large network that is initially constructed to capture the input-output relationship.

Training/Learning: The process by which the connection weights are adjusted until the network is optimal.

Chapter XVII
Spatial Interpolation

Xiaojun Yang
Florida State University, USA

Abstract

Spatial interpolation is a core component of data processing and analysis in geoinformatics. The purpose of this chapter is to discuss the concept and techniques of spatial interpolation. It begins with an overview of the concept and a brief history of spatial interpolation. Then, the chapter reviews some commonly used interpolators that are specifically designed for working with point data, including inverse distance weighting, kriging, triangulation, Thiessen polygons, radial basis functions, minimum curvature, and trend surface. This is followed by a discussion of some criteria that are proposed to help select an appropriate interpolator; these criteria include global accuracy, local accuracy, visual pleasantness and faithfulness, sensitivity, and computational intensity. Finally, future research needs and new, emerging applications are presented.

Introduction

Spatial interpolation is a core component of data processing and analysis in geographic information systems. It is also an important subject in spatial statistics and geostatistics.
By denition, spatial interpolation is the procedure of predi- cating the value of properties from known sites to un-sampled, missing, or obscured locations. The rationale behind interpolation is the very common observation that values at points close together in space are more likely to be similar than points further apart. This observation has been formulated as the First Law of Geography (Tobler, 1970). Data sources for spatial interpola- tion are normally scattered sample points such as soil proles, water wells, meteorological stations or counts of species, people or market outlets [...]... a 8 6 ,0 0 0 2 8 ,0 0 0 a Time-stamped tables (Gadia and Vaishnav 1985) S to ck P rice F ro m To IB M IB M IB M 16 19 16 1 0 -7 -9 1 1 0 :0 7 a m 1 0 -1 5 -9 1 4 :3 5 p m 1 0 -3 0 -9 1 4 :5 7 p m 1 0 -1 5 -9 1 4 :3 5 p m 1 0 -3 0 -9 1 4 :5 7 p m 11 -2 -9 1 1 2 :5 3 p m IB M 25 11 -2 -9 1 1 2 :5 3 p m 11 -5 -9 1 2 :0 2 p m b Time-stamped tuples (rows): an ungrouped relation (Snodgrass and Ahn 1985) N... Journal of Environmental Management, 75 (4, Spec Iss.), 32 5-3 36 Kraak, M.-J (20 04) Visualization Viewpoints - Geovisualization for Knowledge Construction and Decision SupportGeovisualization illustrated IEEE computer graphics and applications, 24( 1), 13 Kraak, M.-J., & MacEachren, A M (2005) Special Content: Geovisualization and GIScience Cartography and geographic information science, 32(2), 6 7-6 8 Kwan,... lat ion in S pat ial Infor mat ion In geographic data information granulation is a central aspect of observation or abstract representation of geographic entities The representation of space and spatial relationships in terms of idealized abstraction of geometric primitives (e.g., polygons, lines and points or pixels) and topological constraints inherently presupposes granulation of the space-time continuum... 
(Zadeh, 1996) - information granulation involves partitioning a class of objects (points) into granules, with a granule being a clump of objects (points) which are drawn together by indistinguishability, similarity, or functionality ” Instead of focusing on machine-centric approach to information, granular computing emphasizes human-centric perception in terms of formation of abstract concept in representing... Editions Technip Okabe, A., Boots, B., & Sugihara, K (1992) Spatial Tessellations, 532 New York: John Wiley & Sons Oliver, M A., & Webster, R (1990) Kriging: A method of interpolation for geographic information systems International Journal of Geographical Information Systems, 4( 3), 31 3-3 32 Tobler, W R (1970) A computer movie simulating urban growth in the Detroit region Economic Geography, 46 , 2 34 40 ... analysis of geographical data International journal of geographical information systems, 9(1), 7 Raper, J (2000) Multidimensional geographic information science London and New York: Taylor and Francis Raper, J., & Livingstone, D (1995) Development of a geomorphological spatial model using objectoriented design International Journal of Geographical Information Systems, 9 (4) , 35 9-3 84 Robinson, A C.,... h n [0 ,2 0 ] U [ 4 1 ,5 1 ] To m S a lary D epartm en t [11 , 4 9 ] 1 5 K [11 ,4 4 ] To ys [5 0 , 5 4 ] 2 0 K [4 5 , 6 0 ] S h o e s [5 5 , 6 0 ] 2 5 K [0 , 2 0 ] 2 0 K [0 , 2 0 ] H a rd w a re [4 1 , 5 1 ] 3 0 K [4 1 , 5 1 ] C lo th in g [0 ,4 4 ] U [5 0 , N o w ] [0 ,4 4 ] U [5 0 , [0 ,4 4 ] U [5 0 , N o w ] M a ry N ow] 2 5K C re d it c Time-stamp values (cells): a group relation (Gadia and Yeung... 
a second point, and so on Cross-validation outputs various summary statistics of the errors that measure the global accuracy 136 Geostatistics: A branch of statistical estimation concentrating on the application of the theory of random functions for estimating natural phenomena Sampling: The technique of acquiring sufficient observations that can be used to obtain a satisfactory representation of the... pycnophylactic interpolation for geographical regions Journal of the American Statistical Association, 74, 51 9-3 0 Yang, X., & Hodler, T (2000) Visual and statistical comparisons of surface modeling techniques for point-based environmental data Cartography and Geographic Information Science, 17(2), 16 5-1 75 key TER MS Cross Validation: A validation method in which observations are dropped one at a time, the... spatial type that contains the distribution of all sections of T Spatio-Temporal Object Modeling Figure 1 Extended spatio-temporal object types Extended object types Geometric types Geometry Geometry Collection Parametric types Time-interval Geometric types Temporal Point LineString Polygon Spatial Points LineStrings Polygons ST (e.g., all locations of sections of pavement material of type String), . prediction and modeling, pattern recognition and image classication, and data processing. The applications of articial neural networks in geoinformatics have concentrated on a few major areas. geogrpahy and regional characteristics (No. 84) . Chicago, IL: University of Chicago. Koncz, N. A. , & Adams, T. M. (2002). A data model for multi-dimensional transportation ap- plications Department of Transporta- tion. Federal Transit Administration. (2003). Best Practices for Using Geographic Data in Transit: A Location Referencing Guidebook (No. FTA-NJ- 2 6-7 04 4-2 003.1). Washington,
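The distance-decay rationale behind spatial interpolation is exactly what inverse distance weighting, the first of the point interpolators listed in the chapter abstract, makes operational: each known value contributes in proportion to an inverse power of its distance from the prediction location. A minimal sketch, with illustrative sample points and the commonly used power of 2:

```python
import math

def idw(x, y, samples, power=2):
    # samples: list of (sx, sy, value) observations
    num = den = 0.0
    for sx, sy, value in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0:
            return value           # exact hit: return the observed value
        w = 1.0 / d ** power       # nearer points weigh more (Tobler's law)
        num += w * value
        den += w
    return num / den

samples = [(0, 0, 10.0), (1, 0, 20.0), (0, 1, 30.0)]
print(idw(0.5, 0.5, samples))  # 20.0: equidistant from all three samples
```

The predicted value always stays within the range of the observed values, and a larger power parameter makes the estimate cling more tightly to the nearest samples.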
