Get hands-on practice on Linux with Apache Hadoop (HDFS, YARN, Pig, and Hive) in this two-day ABIS training.
Nowadays everybody seems to be working with "big data", most often in the context of analytics and "Data Science". Do you also want to store and then query data from several sources (click streams, social media, relational data, sensor data, IoT, ...) and are you running into the limits of traditional data tools? Then you may need a distributed data store like HDFS and a MapReduce infrastructure like Hadoop's.
This course builds on the concepts set forth in the Big data architecture and infrastructure course. You will get hands-on practice on Linux with Apache Hadoop: HDFS, YARN, Pig, and Hive.
You learn
After successful completion of the course, you will have sufficient basic expertise to set up a Hadoop cluster, to import data into HDFS, and to query it effectively using MapReduce.
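To give a flavour of the MapReduce model covered in the course: Hadoop jobs can be written in languages such as Python via Hadoop Streaming, where a mapper emits key/value pairs and a reducer aggregates them after Hadoop's shuffle-and-sort phase. Below is a minimal word-count sketch; the function names and the local pipeline simulation are illustrative only and are not taken from the course material.

```python
# Minimal word-count mapper and reducer in the style of Hadoop Streaming.
# On a real cluster these would run as separate scripts submitted with the
# hadoop-streaming jar; here they are plain functions so the map -> sort
# (shuffle) -> reduce pipeline can be simulated locally.
from itertools import groupby

def mapper(lines):
    """Emit (word, 1) pairs for every word in the input lines."""
    for line in lines:
        for word in line.strip().split():
            yield word.lower(), 1

def reducer(pairs):
    """Sum the counts per word; the input must be sorted by key,
    exactly what Hadoop's shuffle-and-sort phase guarantees."""
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    data = ["big data big ideas", "data pipelines"]
    # Simulate map -> sort (shuffle) -> reduce.
    sorted_pairs = sorted(mapper(data))
    print(dict(reducer(sorted_pairs)))
    # {'big': 2, 'data': 2, 'ideas': 1, 'pipelines': 1}
```

On Hadoop itself, the same mapper and reducer logic runs in parallel across the cluster, with HDFS providing the distributed storage for input and output.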
If you want to use Hadoop with Spark, see the course Big data in practice using Spark.
Classroom instruction with practical examples, supported by extensive hands-on exercises.
Delivered as a live, interactive training, available in-person, online, or in a hybrid format. The training can be delivered in English, Dutch, or French.
Familiarity with the concepts of data stores, and more specifically of "big data", is necessary; see our course Big data architecture and infrastructure. In addition, minimal knowledge of SQL, Linux, and Java is useful. Experience with a programming language (e.g. Java, PHP, Python, Perl, C++, or C#) is a must.
Anyone who wants to start practising "big data": developers, data architects, and other profiles that need to work with big data technology.