In data science, data is called "big" if it cannot fit into the memory of a single standard laptop or workstation. Analyzing big data sets requires a cluster of tens, hundreds, or thousands of computers. Using such clusters effectively requires distributed file systems, such as the Hadoop Distributed File System (HDFS), and corresponding computational models, such as Hadoop MapReduce and Spark. In this course, part of the Data Science MicroMasters program, you will learn what the bottlenecks are in massively parallel computation and how to use Spark to minimize them. You will learn how to perform supervised and unsupervised machine learning on massive datasets using Spark's Machine Learning Library (MLlib). In this course, as in the others in this MicroMasters program, you will gain hands-on experience using PySpark within the Jupyter notebook environment.
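For a sense of what this looks like in practice, the following is a minimal sketch of an MLlib workflow in PySpark, assuming a local Spark installation; the toy dataset, app name, and cluster count are purely illustrative and not taken from the course materials.

# A minimal sketch: unsupervised learning (k-means) with MLlib in PySpark.
# Assumes pyspark is installed; the data values here are illustrative only.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("mllib-sketch").getOrCreate()

# Hypothetical toy dataset; at cluster scale the data would instead be
# read from a distributed file system such as HDFS.
df = spark.createDataFrame(
    [(0.0, 0.1), (0.2, 0.0), (9.0, 9.1), (9.2, 8.9)],
    ["x", "y"],
)

# MLlib estimators expect the features packed into a single vector column.
features = VectorAssembler(inputCols=["x", "y"], outputCol="features").transform(df)

# Fit a two-cluster k-means model and inspect the learned centers.
model = KMeans(k=2, seed=1).fit(features)
print(model.clusterCenters())

spark.stop()

The same estimator-and-transformer pattern carries over to supervised MLlib models such as logistic regression, which is part of what makes the library practical on massive datasets.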