Learning Path: Big Data Analytics
Type: Course
Methodology: Online
Start date: Different dates available

Description
Massive amounts of data are being generated every day, everywhere. As a result, many organizations are focusing on big data processing. In this course we'll help you understand how Hadoop, as an ecosystem, helps us store, process, and analyze data. We will then move on to developing large-scale distributed data processing applications using Apache Spark 2.

About the Authors

Randal Scott King

Randal Scott King is the Managing Partner of Brilliant Data, a consulting firm specializing in data analytics. In his 16 years of consulting, Scott has amassed an impressive list of clients, from mid-market leaders to Fortune 500 household names. Scott lives just outside Atlanta, GA, with his children.

Rajanarayanan Thottuvaikkatumana

Rajanarayanan Thottuvaikkatumana, Raj, is a seasoned technologist with more than 23 years of software development experience at various multinational companies. He has lived and worked in India, Singapore, and the USA, and is presently based in the UK. His experience includes architecting, designing, and developing software applications. He has worked on a range of technologies including major databases, application development platforms, web technologies, and big data technologies. Since 2000 he has worked mainly with Java-related technologies, doing heavy-duty server-side programming in Java and Scala. He has worked on highly concurrent, highly distributed, high-transaction-volume systems. He is currently building a next-generation Hadoop YARN-based data processing platform and an application suite built with Spark using Scala.

Raj holds a master's degree in Mathematics, a master's degree in Computer Information Systems, and several certifications in ITIL and cloud computing. Raj is the author of Cassandra Design Patterns - Second Edition, published by Packt.
About this course
Install and configure a Hadoop instance of your own
Navigate Hue, the GUI for common tasks in Hadoop
Import data manually, and automatically from a database
Build scripts with Pig to perform common ETL tasks
Write and run a simple MapReduce program
Structure and query data effectively with Hive, Hadoop’s built-in data warehousing component
Get to know the fundamentals of Spark 2.0 and the Spark programming model using Scala and Python
Know how to use Spark SQL and DataFrames using Scala and Python
Get an introduction to Spark programming using R
Perform Spark data processing, charting, and plotting using Python
Get acquainted with Spark stream processing using Scala and Python
Be introduced to machine learning with Spark using Scala and Python
Get started with graph processing with Spark using Scala
Develop a complete Spark application
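The "simple MapReduce program" outcome above is easiest to grasp from the classic word-count example. As a rough sketch (plain Python, no Hadoop cluster required, with made-up input lines), the map → shuffle → reduce flow looks like this:

```python
from collections import defaultdict

def map_phase(line):
    # Mapper: emit a (word, 1) pair for every word in the line
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle: group values by key, as Hadoop does between map and reduce
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: sum the counts for each word
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data big ideas", "data lakes hold big data"]
pairs = [pair for line in lines for pair in map_phase(line)]
counts = reduce_phase(shuffle(pairs))
print(counts["big"])   # 3
print(counts["data"])  # 3
```

In real Hadoop MapReduce the same three phases run distributed across the cluster, with the shuffle handled by the framework rather than by your code.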
Reviews
This centre's achievements
All courses are up to date
The average rating is higher than 3.7
More than 50 reviews in the last 12 months
This centre has featured on Emagister for 6 years
Subjects
- Import
- Systems
- Consulting
- Database training
- Database
- Server
- Java
Course programme
- HDFS is the file system that Hadoop uses
- Next, we will cover YARN, the component of Hadoop that allocates resources such as CPU time and memory to jobs submitted for completion
- First, we will talk about Sqoop
- Next, we go over Flume
- Discuss the history and concept of MapReduce
- Look at the word count example
- Discuss what Pig is
- Take a look at the "word count" example
- Cover the basic concepts of Hive
- Take a look at internal versus external tables
- Understand how Hive works with metadata
- Discuss HiveQL
- Download the Quickstart VM from Cloudera.com
- Start the VM
- Navigate to the Hue page
- Explanation of the file browser and query editor dropdowns
- Create a new user
- Use Hue to pull data from local file system to HDFS
- Use the command line to move data from the local file system onto HDFS
- Create a database in MySQL and load data
- Use Sqoop command line to transfer data to HDFS
- Modify the Flume Agent configuration file
- Create a text file in the local spooling directory and check to make sure Flume imports it to HDFS
- Open Eclipse and use it to import the "Word Count" code
- Save the .jar to the local file system
- Run the code in MapReduce, check the progress of the job, and view the result
- Open the Pig Script Editor in Hue and build our script
- Save the script for future use and run it
- Check the progress of the job in Hue and view the result
- Filter out certain data from a dataset and save the result
- Append one dataset to another in an identical format using Union
- Join one dataset to another using a common column in each
- Identify whether two UDF repositories (Piggybank and DataFu) are installed
- Register the Stats UDF and define a Quartile function to use it
- Write the script and run the code, resulting in a document that shows the minimum, median, and max values for Quantity in our data
- Import data into an internal table (the default)
- Import data into an external table
- How to get data from HDFS into Hive
- Use Sqoop from the command line to move the data
- Check the data browser to see if the right directory was created in Hive
- Use "select * from table" to see the data in the table
- Use the basic Select From Where query
- Combine two tables using Union
- Create a new table from the results of a query
- Review the Hadoop Ecosystem chart
- See graphic of structured and unstructured data import to Hadoop
- Introduce the term “Data Lake” and understand that we can make one now
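The quartile step above (reporting the minimum, median, and maximum of Quantity) can be sketched in plain Python with the standard library; the Pig Stats/Quantile UDF computes the same order statistics over data in HDFS. The sample quantities here are invented for illustration:

```python
import statistics

# Hypothetical Quantity column from the sample dataset
quantities = [3, 7, 1, 9, 4, 6, 2]

summary = {
    "min": min(quantities),
    "median": statistics.median(quantities),  # middle value (0.5 quantile)
    "max": max(quantities),
}
print(summary)  # {'min': 1, 'median': 4, 'max': 9}
```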
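The Select-From-Where, Union, and create-table-from-a-query steps above can be illustrated with SQLite, whose SQL is close enough to HiveQL for these basics; the table names and rows are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Two tables with an identical schema, as in the Union step
cur.execute("CREATE TABLE sales_2015 (item TEXT, quantity INTEGER)")
cur.execute("CREATE TABLE sales_2016 (item TEXT, quantity INTEGER)")
cur.executemany("INSERT INTO sales_2015 VALUES (?, ?)",
                [("widget", 5), ("gadget", 2)])
cur.executemany("INSERT INTO sales_2016 VALUES (?, ?)",
                [("widget", 8)])

# Basic Select From Where
rows = cur.execute(
    "SELECT item FROM sales_2015 WHERE quantity > 3").fetchall()

# Create a new table from the results of a query that combines both tables
cur.execute("""
    CREATE TABLE all_sales AS
    SELECT * FROM sales_2015
    UNION ALL
    SELECT * FROM sales_2016
""")
total = cur.execute("SELECT COUNT(*) FROM all_sales").fetchone()[0]
print(rows, total)  # [('widget',)] 3
```

In Hive the same statements run as distributed jobs over HDFS data, and `UNION ALL` is the form used to append one dataset to another with an identical schema.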
