Fast Data Processing with Spark, Second Edition


Document information

An introductory guide to Apache Spark: fast, large-scale data processing that also cuts data-movement costs. It gives an overview of a big data engine that can run workloads up to 100 times faster than Apache Hadoop MapReduce (for in-memory processing) and is fully compatible with HDFS, Hive, and other Hadoop-ecosystem systems.

Fast Data Processing with Spark
Second Edition

Perform real-time analytics using Spark in a fast, distributed, and scalable way

Krishna Sankar
Holden Karau

BIRMINGHAM - MUMBAI

Fast Data Processing with Spark, Second Edition

Copyright © 2015 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: October 2013
Second edition: March 2015
Production reference: 1250315

Published by Packt Publishing Ltd., Livery Place, 35 Livery Street, Birmingham B3 2PB, UK
ISBN 978-1-78439-257-4
www.packtpub.com

Credits

Authors: Krishna Sankar, Holden Karau
Reviewers: Robin East, Toni Verbeiren, Lijie Xu
Commissioning Editor: Akram Hussain
Acquisition Editors: Shaon Basu, Kunal Parikh
Content Development Editor: Arvind Koul
Technical Editors: Madhunikita Sunil Chindarkar, Taabish Khan
Copy Editor: Hiral Bhat
Project Coordinator: Neha Bhatnagar
Proofreaders: Maria Gould, Ameesha Green, Joanna McMahon
Indexer: Tejal Soni
Production Coordinator: Nilesh R Mohite
Cover Work: Nilesh R Mohite

About the Authors

Krishna Sankar is a chief data scientist at http://www.blackarrow.tv/, where he focuses on optimizing user experiences via inference, intelligence, and interfaces. His earlier roles include principal architect, data scientist at Tata America Intl, director of a data science and bioinformatics start-up, and a distinguished engineer at Cisco. He has spoken at various conferences, such as Strata-Sparkcamp, OSCON, PyCon, and PyData, about predicting NFL (http://goo.gl/movfds), Spark (http://goo.gl/E4kqMD), data science (http://goo.gl/9pyJMH), machine learning (http://goo.gl/SXF53n), and social media analysis (http://goo.gl/D9YpVQ). He was a guest lecturer at the Naval Postgraduate School, Monterey. His blogs can be found at https://doubleclix.wordpress.com/. His other passion is Lego robotics; you can find him at the St. Louis FLL World Competition as the robot design judge.

The credit goes to my coauthor, Holden Karau, the reviewers, and the editors at Packt Publishing. Holden wrote the first edition, and I hope I was able to contribute to the same depth. I am deeply thankful to the reviewers Lijie, Robin, and Toni. They spent time diligently reviewing the material and code, added lots of insightful tips to the text, which I have gratefully included, and their sharp eyes caught tons of errors in the code and text. Thanks to Arvind Koul, who has been the chief force behind the book; a great editor is absolutely essential for the completion of a book, and I was lucky to have Arvind. I also want to thank the editors at Packt Publishing: Anila, Madhunikita, Milton, Neha, and Shaon, with whom I had the fortune to work at various stages.
The guidance and wisdom from Joe Matarese, my boss at http://www.blackarrow.tv/, and from Paco Nathan at Databricks are invaluable. My spouse, Usha, and son, Kaushik, were always with me, cheering me on for any endeavor that I embark upon: mostly successful efforts, like this book, and occasionally foolhardy ones! I dedicate this book to my mom, who unfortunately passed away last month; she was always proud to see her eldest son as an author.

Holden Karau is a software development engineer and is active in the open source sphere. She has worked on a variety of search, classification, and distributed systems problems at Databricks, Google, Foursquare, and Amazon. She graduated from the University of Waterloo with a bachelor of mathematics degree in computer science. Other than software, she enjoys playing with fire and hula hoops, and welding.

About the Reviewers

Robin East has served in a wide range of roles covering operations research, finance, IT system development, and data science. In the 1980s, he was developing credit scoring models using data science and big data before anyone (including himself) had even heard of those terms! In the last 15 years, he has worked with numerous large organizations, implementing enterprise content search applications, content intelligence systems, and big data processing systems. He has created numerous solutions, ranging from swaps and derivatives in the banking sector to fashion analytics in the retail sector.

Robin became interested in Apache Spark after realizing the limitations of the traditional MapReduce model with respect to running iterative machine learning models. His focus is now on trying to further extend the Spark machine learning libraries, and also on teaching how Spark can be used in data science and data analytics through his blog, Machine Learning at Speed (http://mlspeed.wordpress.com).

Before NoSQL databases became the rage, he was an expert on tuning Oracle databases and extracting maximum performance from EMC Documentum systems. This work took him to clients around the world and led him to create the open source profiling tool called DFCprof, which is used by hundreds of EMC users to track down performance problems. For many years, he maintained the popular Documentum internals and tuning blog, Inside Documentum (http://robineast.wordpress.com), and contributed hundreds of posts to EMC support forums. These community efforts bore fruit in the form of the award of EMC MVP and acceptance into the EMC Elect program.

Toni Verbeiren graduated with a PhD in theoretical physics in 2003. He used to work on models of artificial neural networks, entailing mathematics, statistics, simulations, (lots of) data, and numerical computations. Since then, he has been active in the industry in diverse domains and roles: infrastructure management and deployment, service management, IT management, ICT/business alignment, and enterprise architecture. Around 2010, Toni started picking up his earlier passion, which was then named data science. The combination of data and common sense can be a very powerful basis for making decisions and analyzing risk.

Toni is active as an owner and consultant at Data Intuitive (http://www.dataintuitive.com/) in everything related to big data science and its applications to decision and risk management. He is currently involved in the Exascience Life Lab (http://www.exascience.com/) and the Visual Data Analysis Lab (http://vda-lab.be/), which is concerned with scaling up visual analysis of biological and chemical data.
I'd like to thank various employers, clients, and colleagues for the insight and wisdom they shared with me. I'm grateful to the Belgian and Flemish governments (FWO, IWT) for financial support of the aforementioned academic projects.

Lijie Xu is a PhD student at the Institute of Software, Chinese Academy of Sciences. His research interests focus on distributed systems and large-scale data analysis. He has both academic and industrial experience from Microsoft Research Asia, Alibaba Taobao, and Tencent. As an open source software enthusiast, he has contributed to Apache Spark and written a popular technical report, named Spark Internals, in Chinese, at https://github.com/JerryLead/SparkInternals/tree/master/markdown.

www.PacktPub.com

Support files, eBooks, discount offers, and more

For support files and downloads related to your book, please visit www.PacktPub.com. Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com, and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at service@packtpub.com for more details. At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.

https://www2.packtpub.com/books/subscription/packtlib

Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can search, access, and read Packt's entire library of books.

Why subscribe?
• Fully searchable across every book published by Packt
• Copy and paste, print, and bookmark content
• On demand and accessible via a web browser

Free access for Packt account holders

If you have an account with Packt at www.PacktPub.com, you can use this to access PacktLib today and view entirely free books. Simply use your login credentials for immediate access.

Table of Contents

Preface
Chapter 1: Installing Spark and Setting up your Cluster
  Directory organization and convention
  Installing prebuilt distribution
  Building Spark from source
    Downloading the source
    Compiling the source with Maven
    Compilation switches
    Testing the installation
  Spark topology
  A single machine
  Running Spark on EC2
    Running Spark on EC2 with the scripts
  Deploying Spark on Elastic MapReduce
  Deploying Spark with Chef (Opscode)
  Deploying Spark on Mesos
  Spark on YARN
  Spark Standalone mode
  Summary
Chapter 2: Using the Spark Shell
  Loading a simple text file
  Using the Spark shell to run logistic regression
  Interactively loading data from S3
  Running Spark shell in Python
  Summary

Tips and Tricks

As discussed in the earlier chapters, you have the tools to build and test Spark jobs, as well as set up a Spark cluster to run them on, so now it's time to figure out how to make the most of your time as a Spark developer. The Spark documentation includes good tips on tuning and is available at http://spark.apache.org/docs/latest/tuning.html.

Where to find logs

Spark has very useful logs for figuring out what's going on when things are not going as expected. By default, Spark keeps a per-machine log on each machine in the SPARK_HOME/work subdirectory. Spark's web UI provides a convenient place to see the STDOUT and STDERR of each job, with running and completed jobs separated out per worker.

Concurrency limitations

Spark's concurrency for operations is limited by the number of partitions. Conversely, having too many partitions can cause excess overhead by launching too many tasks. If you have too many partitions, you can shrink the partition count by using the coalesce(numPartitions, shuffle) method. The coalesce method is a good way to pack and rebalance your RDDs (for example, after a filter operation where you have less data after the action). If the new number of partitions is more than what you have now, set shuffle = true; otherwise, set shuffle = false. While creating a new RDD, you can specify the number of partitions to be used. Also, the grouping/joining mechanisms on RDDs of pairs can take the number of partitions or a custom partitioner class. The default number of partitions for new RDDs is controlled by spark.default.parallelism, which also controls the number of tasks used by groupByKey and other operations that need shuffling.
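The following is a minimal sketch of these partition controls, written against the Spark 1.x Scala API used in this book; the application name, master URL, input path, and the particular partition counts are placeholders chosen for illustration, not recommendations:

    import org.apache.spark.{SparkConf, SparkContext}

    object PartitionTuningSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("partition-tuning-sketch")
          .setMaster("local[4]")                  // placeholder: local run for illustration
          .set("spark.default.parallelism", "8")  // default partition count for shuffles
        val sc = new SparkContext(conf)

        // Ask for a specific number of partitions when the RDD is created.
        val lines = sc.textFile("hdfs:///tmp/input.txt", 32)  // placeholder path

        // After a selective filter, many partitions may be nearly empty;
        // coalesce packs them into fewer partitions without a full shuffle.
        val errors = lines.filter(_.contains("ERROR"))
        val packed = errors.coalesce(8, shuffle = false)

        // Growing the partition count requires a shuffle: coalesce with
        // shuffle = true, or equivalently repartition.
        val spread = packed.repartition(64)

        println(s"partitions: ${lines.partitions.length} -> " +
          s"${packed.partitions.length} -> ${spread.partitions.length}")
        sc.stop()
      }
    }

As a rule of thumb from the text above, shrink with coalesce(n, shuffle = false) after selective filters, and pay for a shuffle only when you genuinely need more partitions.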
Memory usage and garbage collection

To measure the impact of garbage collection, you can ask the JVM to print details about the garbage collection. You can do this by adding -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps to your SPARK_JAVA_OPTS in conf/spark-env.sh. You can also include the -Xloggc option to send the log messages to a separate file so that they are kept apart from the rest of the output. The details will then be printed to the standard output when you run your job, which will be available as described in the first section of this chapter.

If you find that your Spark cluster uses too much time collecting garbage, you can reduce the amount of space used for RDD caching by changing spark.storage.memoryFraction; the default is 0.6. If you are planning to run Spark for a long time on a cluster, you may wish to enable spark.cleaner.ttl. By default, Spark does not clean up any metadata (stages generated, tasks generated, and so on); set this to a non-zero value in seconds to clean up the metadata after that length of time. The documentation page (https://spark.apache.org/docs/latest/configuration.html) has the default settings and details about all the configuration options.

You can also control the RDD storage level if you find that you use too much memory. I usually use top to see the memory consumption of the processes. If your RDDs don't fit within memory and you still wish to cache them, you can try using a different storage level, shown as follows (also check the documentation page for the latest information on RDD persistence options at http://spark.apache.org/docs/latest/programming-guide.html#rdd-persistence):

• MEMORY_ONLY: This stores the entire RDD in memory if it can, which is the default
• MEMORY_AND_DISK: This stores each partition in memory if it fits; otherwise, it stores it on disk
• DISK_ONLY: This stores each partition on disk regardless of whether it can fit in memory

These options are set when you call the persist function (rdd.persist()) on your RDD. By default, the RDDs are stored in a deserialized form, which requires less parsing. We can save space by adding _SER to the storage level (for example, MEMORY_ONLY_SER, MEMORY_AND_DISK_SER), in which case Spark will serialize the data to be stored, which normally saves some space but increases the execution time.
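Below is a small, illustrative sketch of these caching controls in Scala; the input path, master URL, and the particular fraction and TTL values are assumptions for the example rather than recommended settings, and the spark.storage.memoryFraction and spark.cleaner.ttl keys apply to the Spark 1.x line covered here:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    object CachingSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("caching-sketch")
          .setMaster("local[2]")                      // placeholder: local run
          .set("spark.storage.memoryFraction", "0.4") // shrink the cache share (default 0.6)
          .set("spark.cleaner.ttl", "3600")           // clean old metadata after an hour
        val sc = new SparkContext(conf)

        val events = sc.textFile("hdfs:///tmp/events")  // placeholder path

        // Plain cache() is MEMORY_ONLY; a serialized, disk-backed level trades
        // CPU time for memory head-room when the data does not fit.
        events.persist(StorageLevel.MEMORY_AND_DISK_SER)

        // The first action materializes the cache; later actions reuse it.
        println(events.count())
        println(events.filter(_.nonEmpty).count())

        events.unpersist()
        sc.stop()
      }
    }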
Serialization

Spark supports different serialization mechanisms; the choice is a trade-off between speed, space efficiency, and full support of all Java objects. If you are using the serializer to cache your RDDs, you should strongly consider a fast serializer. The default serializer uses Java's default serialization. The KryoSerializer is much faster and generally uses about one-tenth of the memory of the default serializer. You can switch the serializer by setting spark.serializer to org.apache.spark.serializer.KryoSerializer. If you want to use the KryoSerializer, you need to make sure that your classes are serializable by Kryo. Spark provides a trait, KryoRegistrator, which you can extend to register your classes with Kryo, as shown in the following code:

    import com.esotericsoftware.kryo.Kryo
    import org.apache.spark.serializer.KryoRegistrator

    class MyRegistrator extends KryoRegistrator {
      override def registerClasses(kryo: Kryo) {
        kryo.register(classOf[MyClass])   // MyClass stands in for your own class
      }
    }

Take a look at https://code.google.com/p/kryo/#Quickstart to figure out how to write custom serializers for your classes if you need something customized. You can substantially decrease the amount of space used for your objects by customizing your serializers. For example, rather than writing out the full class name, you can give a class an integer ID by calling kryo.register(classOf[MyClass], 100).
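To show how such a registrator plugs into a job, here is a hedged, self-contained sketch: the ClickEvent case class and ClickRegistrator are made-up placeholder names, while spark.serializer and spark.kryo.registrator are the configuration keys used to enable Kryo in Spark 1.x:

    import com.esotericsoftware.kryo.Kryo
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.serializer.KryoRegistrator
    import org.apache.spark.storage.StorageLevel

    // Example domain type; in a real job this would be one of your own classes.
    case class ClickEvent(userId: Long, url: String)

    // Registering classes lets Kryo write a compact tag instead of the full class name.
    class ClickRegistrator extends KryoRegistrator {
      override def registerClasses(kryo: Kryo): Unit = {
        kryo.register(classOf[ClickEvent])
      }
    }

    object KryoSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("kryo-sketch")
          .setMaster("local[2]") // placeholder: local run
          .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
          .set("spark.kryo.registrator", classOf[ClickRegistrator].getName)
        val sc = new SparkContext(conf)

        val events = sc.parallelize(Seq(ClickEvent(1L, "/home"), ClickEvent(2L, "/docs")))
        // Caching in serialized form is where the Kryo settings actually pay off.
        events.persist(StorageLevel.MEMORY_ONLY_SER)
        println(events.count())
        sc.stop()
      }
    }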
IDE integration

For Emacs users, the ENSIME sbt plugin is a good addition. ENhanced Scala Interaction Mode for Emacs (ENSIME) provides many features that are available in IDEs, such as error checking and symbol inspection. You can install the latest ENSIME from https://github.com/aemoncannon/ensime/downloads (make sure you choose the one that matches your Scala version). Or, you can run the following commands:

    wget https://github.com/downloads/aemoncannon/ensime/ensime_2.10.0-RC3-0.9.8.2.tar.gz
    tar -xvf ensime_2.10.0-RC3-0.9.8.2.tar.gz

In your Emacs configuration, add this:

    ;; Load the ensime lisp code
    (add-to-list 'load-path "ENSIME_ROOT/elisp/")
    (require 'ensime)
    ;; This step causes the ensime-mode to be started whenever
    ;; scala-mode is started for a buffer. You may have to customize
    ;; this step if you're not using the standard scala mode.
    (add-hook 'scala-mode-hook 'ensime-scala-mode-hook)

You can then add the ENSIME sbt plugin to your project (in project/plugins.sbt):

    addSbtPlugin("org.ensime" % "ensime-sbt-cmd" % "0.1.0")

You should then run the following commands:

    sbt
    > ensime generate

If you are using Git, you will probably want to add the generated ensime file to your .gitignore if it isn't already present.

If you use IntelliJ, a similar plugin called sbt-idea exists, which can be used to generate IntelliJ IDEA project files. You can add it to your project (in project/plugins.sbt) like this:

    addSbtPlugin("com.github.mpeltonen" % "sbt-idea" % "1.5.1")

You should then run the following commands:

    sbt
    > gen-idea

This will generate the IDEA project file, which can be loaded into IntelliJ.

Eclipse users can also use sbt to generate Eclipse project files with the sbteclipse plugin. You can add the Eclipse sbt plugin to your project (in project/plugins.sbt) like this:

    addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.3.0")

You should then run the following commands:

    sbt
    > eclipse

This will generate the Eclipse project files, and you can then import them into your Eclipse workspace using the Import Wizard in Eclipse. Eclipse users might also find the spark-plug project useful; it can be used to launch clusters from within Eclipse. An important step is to add spark-assembly-1.2.0-hadoop2.6.0.jar to your Java build path or Maven dependencies. Pay attention so that you match the Spark version number (1.2.0) and the Hadoop version number (2.6.0) to your installation.

Using Spark with other languages

If you find yourself wanting to work with your RDD in another language, there are a few options available to you. From Java/Scala you can try using JNI, and with Python you can use the FFI. Sometimes, however, you will want to work with a language that isn't C, or work with an already compiled program. In that case, the easiest thing to do is to use the pipe interface that is available in all three of the APIs. The pipe mechanism works by taking the RDD, serializing it to strings, and then piping it to the specified program. If your data happens to be plain strings, this is very convenient; if it's not, you will need to serialize your data in such a way that it can be understood on either side. JSON or protocol buffers can be good options for this, depending on how structured your data is.
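Here is a minimal sketch of that pipe interface in Scala. The external command (tr, which upper-cases its input) is just a stand-in for whatever already-compiled program or non-JVM tool you want to call, and the data is made up for the example:

    import org.apache.spark.{SparkConf, SparkContext}

    object PipeSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("pipe-sketch").setMaster("local[2]")
        val sc = new SparkContext(conf)

        val lines = sc.parallelize(Seq("alpha 1", "beta 2", "gamma 3"))

        // One external process is started per partition; the partition's elements
        // are written to its stdin as lines, and its stdout lines become the new RDD.
        val shouted = lines.pipe(Seq("tr", "a-z", "A-Z"))

        shouted.collect().foreach(println)  // ALPHA 1, BETA 2, GAMMA 3
        sc.stop()
      }
    }

If the elements are anything richer than plain strings, format them (for example, as JSON) before the pipe and parse the program's output on the way back, as the section above suggests.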
A quick note on security

Another important consideration in your Spark setup is security. If you are using Spark on EC2 with the default scripts, you will notice that access to your Spark cluster is restricted. This is a good idea even if you aren't running inside EC2, since your Spark cluster will likely have access to data you would rather not share with the world (and even if it doesn't, you probably don't want to allow arbitrary code execution by strangers). If your Spark cluster is already on a private network, that is great; otherwise, you should talk with your system administrator about setting up some iptables rules to restrict access.

Community developed packages

A new package index site (http://spark-packages.org/) has a lot of packages and libraries that work with Apache Spark. It's an essential site to visit and make use of.

Mailing lists

Probably the most useful tip to finish this chapter with is that the Spark users' mailing list is an excellent source of up-to-date information about other people's experiences in using Spark. The best place to get information on meetups, slides, and so forth is https://spark.apache.org/community.html. The two Spark users' mailing lists are user@spark.apache.org and dev@spark.apache.org.

Some more information can be found at the following sites:
• http://blog.quantifind.com/posts/logging-post/
• https://www.assembla.com/spaces/liftweb/wiki/Emacs-ENSIME
• https://spark.apache.org/docs/latest/tuning.html
• http://spark.apache.org/docs/latest/running-on-mesos.html
• http://kryo.googlecode.com/svn/api/v2/index.html
• https://code.google.com/p/kryo/
• http://scala-ide.org/download/current.html
• http://syndeticlogic.net/?p=311
• http://mail-archives.apache.org/mod_mbox/incubator-spark-user/
• https://groups.google.com/forum/?fromgroups#!forum/spark-users
• http://jawher.net/2011/01/17/scala-development-environment-emacs-sbt-ensime/
• https://github.com/shivaram/spark-ec2/blob/master/ganglia/init.sh

Summary

That wraps up some common things that you can use to help improve your Spark development experience. I wish you the best of luck with your Spark projects; now go and solve some fun problems! :)

Index

A
accumulate 67
Alternating Least Square (ALS) algorithm: about 136; reference link 136
Amazon Machine Images (AMI) 15
architecture, Spark SQL 94

B
basic statistics, Spark MLlib examples 121-123
broadcast 67

C
Chef: about 17; Spark, deploying with 17; references 17
classification, Spark MLlib examples 126-132
clustering, Spark MLlib examples 132-135
code, testable: making 141-143
commands, quick start: URL 34
community developed packages 155
concurrency limitations: about 151; IDE integration 153, 154; memory usage and garbage collection 152; serialization 153
custom serializers: references 153

D
data: loading, from S3 32, 33; loading, into RDD 52-61; saving 62
data files, GitHub: reference link 96
directory organization and convention: references
doctest 149
double RDD functions: about 78; sampleStdev 78; Stats 78; Stdev 78; Sum 78; variance 78

E
EC2: Spark, running on 9, 10
EC2 command-line tools: references 11
EC2 scripts, Amazon: URL 10
Elastic MapReduce (EMR): Spark, deploying on 16
ENhanced Scala Interaction Mode for Emacs (ENSIME): about 153; URL 153

F
files: loading, to Parquet 109, 110; saving, to Parquet 108
flatMap function 67
functions, for joining PairRDDs: about 76; coGroup 76; join 76; subtractKey 76
functions, on JavaPairRDDs: about 84; cogroup 84; collectAsMap 84; combineByKey 84; countByKey 84; flatMapValues 84; join 84; keys 84; lookup 84; reduceByKey 85; sortByKey 85; values 85

G
general RDD functions: about 79; aggregate 79; cache 79; collect 79; count 79; countByValue 79; distinct 79; filter 79; filterWith 79; first 79; flatMap 79; fold 79; foreach 79; groupBy 79; keyBy 80; map 80; mapPartitions 80; mapPartitionsWithIndex 80; mapWith 80; persist 80; pipe 80; sample 80; takeSample 80; toDebugString 80; union 81; unpersist 81; zip 81
GitHub repository: reference link, for data files 121

H
Hadoop Distributed File System (HDFS)
HBase: about 107, 114; data, loading 115, 116; data, saving 116, 117; metadata, obtaining 117

I
Impala: Parquet files, querying 111-114
interactions: testing, with SparkContext 144-147

J
Java: RDD, manipulating in 65-75; SparkContext object, creating in 46; using, as testing library 141
Java RDD functions: about 81, 82; cache 82; coalesce 82; collect 82; common Java RDD functions 82; count 82; countByValue 82; distinct 82; filter 82; first 82; flatMap 82; fold 82; foreach 83; groupBy 83; map 83; mapPartitions 83; reduce 83; sample 83; Spark Java function classes 81

L
lambda 66
latest development source, Spark: references
linear regression, Spark MLlib examples 124, 125
logistic regression: running, Spark shell used 29-31
logs: finding 151

M
mailing lists: about 155; references 156
map 66
massively parallel processing (MPP) 111
Maven: Spark job, building with 41-43
Mesos: about 18; Spark, deploying on 18; URL 18
metadata, SparkContext object: about 48; appName 47; getConf 47; getExecutorMemoryStatus 47; Master 47; Version 47
methods, for combining JavaRDDs: about 83; subtract 83; union 83; zip 83
multiple tables: handling, with Spark SQL 98-104

N
nondata-driven methods, SparkContext object: addFile(path) 49; addJar(path) 49; clearFiles() 49; clearJars() 49; stop() 49

P
package index site: reference link 155
PairRDD functions: about 77, 89; cogroup 90; collectAsMap 77, 89; combineByKey 90; countByKey 77, 89; flatMapValues 78; groupByKey 90; join 90; leftOuterJoin 90; lookup 77; mapValues 77; partitionBy 77; reduceByKey 89; rightOuterJoin 90; zip 90
Parquet: about 107; files, loading 109, 110; files, querying with Impala 111-114; files, saving 108; processed RDD, saving 111
Personal Package Archive (PPA) 38
prebuilt distribution: installing 3
processed RDD: saving, in Parquet 111
PySpark 148
Python: RDD, manipulating in 85-88; SparkContext object, creating in 49, 50; Spark shell, running in 34, 35
Python testing, of Spark 148, 149

Q
QuickStart VM: URL 112

R
recommendation, Spark MLlib examples: about 136-140; reference link 140
reduce 66
Resilient Distributed Dataset (RDD): about 8, 27, 51; data, loading into 52-61; manipulating, in Java 65-75; manipulating, in Python 85-88; manipulating, in Scala 65-75; references 91
Run Length Encoding (RLE) 108

S
S3: data, loading from 32, 33
sbt (simple-build-tool): Spark project, building with 37-41
Scala: RDD, manipulating in 65-75; SparkContext object, creating in 46
Scala APIs 49
Scala RDD functions: about 76; foldByKey 76; groupByKey 76; reduceByKey 76
ScalaTest: using, as testing library 141
security 155
shared Java APIs 49
simple text file: loading 26-29
single machine
source, Spark: building from
spam dataset, GitHub link: URL 26
Spark: building, from source; deploying, on Elastic MapReduce (EMR) 16; deploying, on Mesos 18; deploying, with Chef 17; installation, testing; references 1-5, 19, 37; running, on EC2; running on EC2, with scripts 10-15; standalone mode 19-23; using, with other languages 155
Spark, building from source: about; compilation switches; download source; source, compiling with Maven 5
Spark community: URL
SparkContext object: creating, in Java 46; creating, in Python 49, 50; creating, in Scala 46; interactions, testing with 144-147; metadata 47, 48; references 50
Spark documentation: references 151, 152
Spark Java function classes: about 81; DoubleFlatMapFunction 82; DoubleFunction 81; FlatMapFunction 81; Function 81; Function2 82; PairFlatMapFunction 81; PairFunction 81
Spark job: building 44; building, with Maven 41-43
Spark machine learning algorithm table 120
Spark MLlib: about 119; URL 119
Spark MLlib examples: about 120; basic statistics 121-123; classification 126-132; clustering 132-135; linear regression 124, 125; recommendation 136-140
Spark, on YARN 19
Spark project: building, with sbt 37-41
Spark shell: about 25; running, in Python 34, 35; used, for running logistic regression 29-31
Spark SQL: about 93-95; architecture 94; multiple tables, handling with 98-104; overview 94; references 105; SQL access, to simple data table 95-98
Spark topology 7-9
standalone mode, Spark: reference link 19
standard RDD functions: about 88; cartesian 88; countByValue 89; distinct 88; filter 88; flatMap 88; fold 89; foreach 89; groupBy 89; mapPartitions 88; partitionBy 89; pipe 89; reduce 89; take 89; union 88

T
testing: references 150
type inference 66

Y
YARN 19
Thank you for buying Fast Data Processing with Spark, Second Edition

About Packt Publishing

Packt, pronounced 'packed', published its first book, Mastering phpMyAdmin for Effective MySQL Management, in April 2004, and subsequently continued to specialize in publishing highly focused books on specific technologies and solutions. Our books and publications share the experiences of your fellow IT professionals in adapting and customizing today's systems, applications, and frameworks. Our solution-based books give you the knowledge and power to customize the software and technologies you're using to get the job done. Packt books are more specific and less general than the IT books you have seen in the past. Our unique business model allows us to bring you more focused information, giving you more of what you need to know, and less of what you don't. Packt is a modern yet unique publishing company that focuses on producing quality, cutting-edge books for communities of developers, administrators, and newbies alike. For more information, please visit our website at www.packtpub.com.
About Packt Open Source

In 2010, Packt launched two new brands, Packt Open Source and Packt Enterprise, in order to continue its focus on specialization. This book is part of the Packt Open Source brand, home to books published on software built around open source licenses, and offering information to anybody from advanced developers to budding web designers. The Open Source brand also runs Packt's Open Source Royalty Scheme, by which Packt gives a royalty to each open source project about whose software a book is sold.

Writing for Packt

We welcome all inquiries from people who are interested in authoring. Book proposals should be sent to author@packtpub.com. If your book idea is still at an early stage and you would like to discuss it first before writing a formal book proposal, then please contact us; one of our commissioning editors will get in touch with you. We're not just looking for published authors; if you have strong technical skills but no writing experience, our experienced editors can help you develop a writing career, or simply get some additional reward for your expertise.

Machine Learning with Spark (ISBN 978-1-78328-851-9, paperback, 338 pages): Create scalable machine learning applications to power a modern data-driven business using Spark. A practical tutorial with real-world use cases allowing you to develop your own machine learning systems with Spark; combine various techniques and models into an intelligent machine learning system; use Spark's powerful tools to load, analyze, clean, and transform your data.

Application Development with Parse using iOS SDK (ISBN 978-1-78355-033-3, paperback, 112 pages): Develop the backend of your applications instantly using the Parse iOS SDK. Build your applications using Parse iOS, which serves as a complete cloud-based backend service; understand and write your code on the cloud to minimize the load on the client side; learn how to create your own applications using the Parse SDK, with the help of step-by-step, practical tutorials.

Practical Data Science Cookbook (ISBN 978-1-78398-024-6, paperback, 396 pages): 89 hands-on recipes to help you complete real-world data science projects in R and Python. Learn about the data science pipeline and use it to acquire, clean, analyze, and visualize data; understand critical concepts in data science in the context of multiple projects; expand your numerical programming skills through step-by-step code examples and learn more about the robust features of R and Python.

Starling Game Development Essentials (ISBN 978-1-78398-354-4, paperback, 116 pages): Develop and deploy isometric turn-based games using Starling. Create a cross-platform Starling isometric game; add enemy AI and multiplayer capability; explore the complete source code for web and cross-platform game development.

Please check www.PacktPub.com for information on our titles.

…

…Data in Spark, deals with how we can get data in and out of a Spark environment.

Chapter 6, Manipulating your RDD, describes how to program Resilient Distributed Datasets, the fundamental data abstraction in Spark that makes all the magic possible.

Chapter 7, Spark SQL, deals with the SQL interface in Spark. Spark SQL is probably the most widely used feature.

Chapter 8, Spark with…
…as Kafka, we can seamlessly span the data management and data science pipelines. We can build data science models on larger datasets, requiring not just sample data. However, whatever models we build can be deployed into production (with added work from engineering on the "ilities", of course). It is our hope that this book would enable an engineer to get familiar with the fundamentals of the Spark platform…

…
Chapter 4: Creating a SparkContext 45
Chapter 5: Loading and Saving Data in Spark 51
Building your Spark project with sbt
Building your Spark job with Maven
Building your Spark job with something else
Summary
Scala
Java
SparkContext – metadata
Shared Java and Scala APIs
Python
Summary
RDDs
Loading data into an RDD
Saving your data
Summary
Chapter 6: Manipulating your RDD
Manipulating your RDD in…
…
access to a simple data table
Handling multiple tables with Spark SQL
Aftermath
Summary
Chapter 8: Spark with Big Data 107
Chapter 9: Machine Learning Using Spark MLlib 119
Parquet – an efficient and interoperable big data format
Saving files to the Parquet format
Loading Parquet files
Saving processed RDD in the Parquet format
Querying Parquet files with Impala
HBase…

…parallel or carry out data parallelism, that is, we run the same algorithms over a partitioned dataset in parallel. In my humble opinion, Spark is extremely effective at data parallelism in an elegant framework. As you will see in the rest of this book, the two components are the Resilient Distributed Dataset (RDD) and the cluster manager (a short illustrative sketch follows at the end of these excerpts). The cluster manager distributes the code and manages the data that is represented…

…Spark on a single machine is excellent for testing or exploring small datasets, but here you will also learn to use Spark's built-in deployment scripts with a dedicated cluster via SSH (Secure Shell). This chapter will explain the use of Mesos and Hadoop clusters with YARN or Chef to deploy Spark. For cloud deployments of Spark, this chapter will look at EC2 (both traditional and EC2MR). Feel free…

…we do not assume any esoteric equipment for running the examples and developing the code. A normal development machine is enough.

Who this book is for

Data scientists and data engineers would benefit more from this book. Folks who have an exposure to big data and analytics will recognize the patterns and the pragmas. Having said that, anyone who wants to understand distributed programming would benefit from…

…works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy. Please contact us at copyright@packtpub.com with a link to the suspected pirated material. We appreciate your help in protecting our authors and our ability to bring you valuable content.

Questions

If you have a problem with any aspect of this book, you can contact us…

…the imagination of the analytics and big data developers, and rightfully so. In a nutshell, Spark enables distributed computing on a large scale in the lab or in production. Till now, the pipeline collect-store-transform was distinct from the data science pipeline reason-model, which was again distinct from the deployment of the analytics and machine learning models. Now, with Spark and technologies such as…
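To make the "two components" idea from the Chapter 1 excerpt above concrete, here is a minimal, hypothetical sketch: the master URL is where the cluster manager comes in (local[*] here, but it could just as well be a standalone, Mesos, or YARN master), and the RDD is the partitioned dataset that work is scheduled over. The names and numbers are placeholders, not code from the book:

    import org.apache.spark.{SparkConf, SparkContext}

    object TwoPiecesSketch {
      def main(args: Array[String]): Unit = {
        // The cluster manager is chosen through the master URL: local[*] on one
        // machine, or a standalone/Mesos/YARN master URL on a real cluster.
        val conf = new SparkConf().setAppName("two-pieces-sketch").setMaster("local[*]")
        val sc = new SparkContext(conf)

        // The RDD is the distributed, partitioned dataset the cluster schedules work over.
        val numbers = sc.parallelize(1 to 1000, numSlices = 8)
        val sumOfSquares = numbers.map(n => n.toLong * n).reduce(_ + _)
        println(s"sum of squares: $sumOfSquares")

        sc.stop()
      }
    }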
https://spark.apache.org/docs/latest/building-spark.html

Both source code and prebuilt binaries are available at this link. To interact with the Hadoop Distributed File System (HDFS), you need to use a Spark build that is built against the same version of Hadoop as your cluster. For Version 1.1.0 of Spark, the prebuilt package is built against the available Hadoop versions 1.x, 2.3, and 2.4. If you are up for the challenge…

Ngày đăng: 20/02/2016, 15:40

Từ khóa liên quan
