Learning Path: Big Data Analytics

Course

Online

£ 40 + VAT

Description

  • Type

    Course

  • Methodology

    Online

  • Start date

    Different dates available

Massive amounts of data are being generated every day, everywhere. As a result, many organizations are focusing on big data processing. In this course we'll help you understand how Hadoop, as an ecosystem, helps us store, process, and analyze data. We will then move on to developing large-scale distributed data processing applications using Apache Spark 2.

About the Authors

Randal Scott King
Randal Scott King is the Managing Partner of Brilliant Data, a consulting firm specializing in data analytics. In his 16 years of consulting, Scott has amassed an impressive list of clientele, from mid-market leaders to Fortune 500 household names. Scott lives just outside Atlanta, GA, with his children. You can visit his blog at

Rajanarayanan Thottuvaikkatumana
Rajanarayanan Thottuvaikkatumana, Raj, is a seasoned technologist with more than 23 years of software development experience at various multinational companies. He has lived and worked in India, Singapore, and the USA, and is presently based in the UK. His experience includes architecting, designing, and developing software applications, and he has worked with a range of technologies including major databases, application development platforms, web technologies, and big data technologies. Since 2000 he has worked mainly in Java-related technologies, doing heavy-duty server-side programming in Java and Scala. He has worked on highly concurrent, highly distributed, high-transaction-volume systems, and is currently building a next-generation Hadoop YARN-based data processing platform and an application suite built with Spark using Scala. Raj holds a master's degree in Mathematics and a master's degree in Computer Information Systems, and has many certifications in ITIL and cloud computing to his credit. Raj is the author of Cassandra Design Patterns - Second Edition, published by Packt.

Facilities

Location: Online

Start date: Different dates available. Enrolment now open.

About this course

Install and configure a Hadoop instance of your own
Navigate Hue, the GUI for common tasks in Hadoop
Import data manually and automatically from a database
Build scripts with Pig to perform common ETL tasks
Write and run a simple MapReduce program
Structure and query data effectively with Hive, Hadoop’s built-in data warehousing component
Get to know the fundamentals of Spark 2.0 and the Spark programming model using Scala and Python
Know how to use Spark SQL and DataFrames using Scala and Python
Get an introduction to Spark programming using R
Perform Spark data processing, charting, and plotting using Python
Get acquainted with Spark stream processing using Scala and Python
Be introduced to machine learning with Spark using Scala and Python
Get started with graph processing with Spark using Scala
Develop a complete Spark application


Reviews

This centre's achievements in 2021:

  • All courses are up to date
  • The average rating is higher than 3.7
  • More than 50 reviews in the last 12 months
  • This centre has featured on Emagister for 6 years

Subjects

  • Ms Word
  • Import
  • Systems
  • Consulting
  • Database training
  • Database
  • Server
  • Java
  • Word

Course programme

Learning Hadoop 2 (19 lectures, 01:30:04)

The Course Overview
This video will offer an overview of the course.

Overview of HDFS and YARN
This video will introduce you to the basic concepts of the Hadoop Distributed File System (HDFS) and Yet Another Resource Negotiator (YARN), the two core components of Hadoop.
  • HDFS is the file system that Hadoop uses
  • Next, we will cover YARN, the component of Hadoop that allocates resources such as CPU time and memory to jobs submitted for completion
Overview of Sqoop and Flume
An introduction to the basic concepts of Sqoop and Flume, two tools for automating data import into Hadoop.
  • First, we will talk about Sqoop
  • Next, we go over Flume
Overview of MapReduce
An introduction to the basic concepts of MapReduce, the computation engine of Hadoop.
  • Discussing the history and concept of MapReduce
  • Let's look at the word count example
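The map, shuffle, and reduce phases of the word count example can be sketched in plain Python. This is a conceptual sketch of the MapReduce model only, not actual Hadoop API code:

```python
from collections import defaultdict

def map_phase(line):
    # A mapper emits (word, 1) pairs for each word in its input line
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    # The framework's shuffle/sort step groups values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # A reducer sums the counts it receives for each word
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["the quick brown fox", "the lazy dog"]
pairs = [pair for line in lines for pair in map_phase(line)]
counts = reduce_phase(shuffle(pairs))
print(counts["the"])  # 2
```

In real Hadoop, the mapper and reducer run as separate distributed tasks and the framework handles the shuffle; the data flow, however, is exactly this.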
Overview of Pig
An introduction to the basic concepts of Pig, a scripting language for Hadoop.
  • Discuss what Pig is
  • Taking a look at the "word count" example
Overview of Hive
An introduction to the basic concepts of Hive, Hadoop’s data warehousing solution.
  • Cover the basic concept of Hive
  • Take a look at internal versus external tables
  • Understand how Hive works with metadata
  • Discuss HiveQL
Downloading and Installing Hadoop
Put a working Hadoop installation on a laptop or server; you will need it in order to continue.
  • Download the Quickstart VM from Cloudera.com
  • Start the VM
Exploring Hue
Exploring Hue, a GUI for Hadoop, to get familiar with the interface.
  • Navigate to the Hue page
  • Explanation of the file browser and query editor dropdowns
  • Create a new user
Manual Import
This video will cover how to get data into HDFS manually.
  • Use Hue to pull data from local file system to HDFS
  • Use the command line to move data from the local file system onto HDFS
Importing from Databases Using Sqoop
This video will explain how to get data from databases into HDFS.
  • Create a database in MySQL and load data
  • Use Sqoop command line to transfer data to HDFS
Using Flume to Import Streaming Data
This video will cover how to import streaming data using the Flume tool.
  • Modify the Flume Agent configuration file
  • Create a text file in the local spooling directory and check to make sure Flume imports it to HDFS
Coding "Word Count" in MapReduce
This video will explore how to build "Word Count" in Eclipse, save it to a .jar, and run it in MapReduce.
  • Opening Eclipse and using it to import the "Word Count" code
  • Save the .jar to the local file system
  • Run the code in MapReduce, check the progress of the job, and view the result
Coding "Word Count" in Pig
Coding the same word-counting program, but this time in Pig.
  • Open the Pig Script Editor in Hue and build our script
  • Save the script for future use and run it
  • Check the progress of the job in Hue and view the result
Performing Common ETL Functions in Pig
This video will discuss how to use Pig to perform common Extract, Transform, and Load functions on data.
  • Filter out certain data from a dataset and save the result
  • Append one dataset to another in an identical format using Union
  • Join one dataset to another using a common column in each
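The three ETL operations above (FILTER, UNION, JOIN) can be illustrated in plain Python over lists of dicts. The dataset and column names are made up for illustration; this shows the semantics, not Pig Latin syntax:

```python
orders_q1 = [{"id": 1, "customer": "acme", "qty": 5},
             {"id": 2, "customer": "globex", "qty": 0}]
orders_q2 = [{"id": 3, "customer": "acme", "qty": 7}]
customers = [{"customer": "acme", "region": "EU"},
             {"customer": "globex", "region": "US"}]

# FILTER: keep only the rows that satisfy a condition
filtered = [row for row in orders_q1 if row["qty"] > 0]

# UNION: append one identically structured dataset to another
combined = filtered + orders_q2

# JOIN: match rows on a common column (customer)
regions = {c["customer"]: c["region"] for c in customers}
joined = [{**row, "region": regions[row["customer"]]} for row in combined]
```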
Using User-Defined Functions in Pig
This video will explore how to use predefined code called User Defined Functions (UDFs) in Pig scripts.
  • Identify whether two UDF repositories (Piggybank and DataFu) are installed
  • Register the Stats UDF and define a Quartile function to use it
  • Write the script and run the code, resulting in a document that shows the minimum, median, and max values for Quantity in our data
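The minimum/median/maximum summary produced in this lecture can be reproduced with Python's standard statistics module. This is a stand-in for what the Quartile UDF computes, not the DataFu code itself, and the quantities are invented sample data:

```python
import statistics

quantities = [3, 7, 1, 9, 4, 12, 5]

# Summarize the Quantity column: min, median, and max
summary = {
    "min": min(quantities),
    "median": statistics.median(quantities),
    "max": max(quantities),
}
print(summary)  # {'min': 1, 'median': 5, 'max': 12}
```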
Importing Data from HDFS into Hive
Create a database in Hive.
  • Import data into an internal table (the default)
  • Import data into an external table
  • How to get data from HDFS into Hive
Importing Data Directly from a Database
This video will cover how to get data into Hive from a database without going to HDFS first.
  • Use Sqoop from the command line to move the data
  • Check the data browser to see if the right directory was created in Hive
  • Use "select * from table" to see the data in the table
Performing Basic Queries in Hive
Using queries in Hive to find information.
  • Using the basic Select From Where query
  • Combining two tables using Union
  • Creating a new table from the results of a query
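The three query patterns above can be tried out against an in-memory SQLite database, since HiveQL's SELECT/WHERE, UNION, and create-table-as-select closely mirror standard SQL. SQLite here is only a stand-in for Hive, and the table names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales_2020 (item TEXT, qty INTEGER)")
cur.execute("CREATE TABLE sales_2021 (item TEXT, qty INTEGER)")
cur.executemany("INSERT INTO sales_2020 VALUES (?, ?)",
                [("apples", 10), ("pears", 2)])
cur.executemany("INSERT INTO sales_2021 VALUES (?, ?)",
                [("apples", 14)])

# Basic SELECT ... FROM ... WHERE
cur.execute("SELECT item FROM sales_2020 WHERE qty > 5")
big = cur.fetchall()

# Combine two identically structured tables with UNION ALL
cur.execute("SELECT * FROM sales_2020 UNION ALL SELECT * FROM sales_2021")
all_rows = cur.fetchall()

# Create a new table from the results of a query
cur.execute("CREATE TABLE big_sales AS "
            "SELECT * FROM sales_2020 WHERE qty > 5")
cur.execute("SELECT COUNT(*) FROM big_sales")
count = cur.fetchone()[0]
```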
Putting It All Together
A quick summary of what the viewer has learned in the entire course.
  • Review the Hadoop Ecosystem chart
  • See graphic of structured and unstructured data import to Hadoop
  • Introduce the term “Data Lake” and understand that we can make one now

Additional information

This course is aimed at data scientists and big data architects interested in combining the data processing power of Hadoop and Apache Spark; prior knowledge of these technologies is expected.
