The BigData Facility (DHYAN) has a 20-node data cluster (800 CPU cores) with 8 TB of RAM. The facility uses a parallel file system (~250 TB) of DDN GRIDScaler storage with 15 GB/s throughput over a 100 Gbps interconnect network.
Submitting jobs on the BigData Cluster:
Spark is built on the concept of distributed datasets, which contain arbitrary Java or Python objects. You create a dataset from external data and then apply parallel operations to it. The building block of the Spark API is its RDD API. In the RDD API there are two types of operations: transformations, which define a new dataset based on previous ones, and actions, which kick off a job to execute on the cluster. On top of Spark’s RDD API, high-level APIs are provided, e.g. the DataFrame API and the Machine Learning API. These high-level APIs provide a concise way to perform common data operations. On this page, we show examples using the RDD API as well as examples using the high-level APIs.
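As a quick illustration of the difference between transformations and actions, the following PySpark snippet (assuming sc is already available, as in the PySpark shell; the input path is a placeholder) applies transformations lazily and only launches a job on the cluster when the action count() is called:

# Transformations are lazy: these lines only describe the computation.
lines = sc.textFile("hdfs://...")
words = lines.flatMap(lambda line: line.split())
longWords = words.filter(lambda w: len(w) > 5)
# An action triggers a job on the cluster and returns a result to the driver.
print(longWords.count())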
Note: your file and directory paths must be given as HDFS paths.
Check your files on HDFS: hdfs dfs -ls /user/<username>
Spark examples
spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 4g \
  --executor-memory 2g \
  --executor-cores 1 \
  --queue default \
  spark-examples_2.11-2.3.0.2.6.5.0-292.jar 10
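A PySpark script can be submitted in the same way. Below is a minimal sketch of a Python application that estimates Pi (the script name, queue, and argument are illustrative, not part of the facility documentation); the corresponding spark-submit command is shown as a comment:

# Submit with (values are illustrative):
#   spark-submit --master yarn --deploy-mode cluster --queue default pi.py 10
import sys
from random import random
from operator import add
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("PythonPi").getOrCreate()
partitions = int(sys.argv[1]) if len(sys.argv) > 1 else 2
n = 100000 * partitions

def inside(_):
    # Sample a random point in the square and test whether it falls inside the unit circle.
    x, y = random() * 2 - 1, random() * 2 - 1
    return 1 if x * x + y * y <= 1 else 0

count = spark.sparkContext.parallelize(range(1, n + 1), partitions).map(inside).reduce(add)
print("Pi is roughly %f" % (4.0 * count / n))
spark.stop()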
Useful Commands
hdfs dfs -mkdir DIR_PATH
hdfs dfs -rm -r DIR_OR_FILE_PATH
hdfs dfs -put SOURCE_PATH DEST_PATH
hdfs dfs -get SOURCE_PATH DEST_PATH
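For example, to stage a local file into HDFS, list it, copy it back, and clean up (the directory and file names are illustrative):

hdfs dfs -mkdir mydata
hdfs dfs -put results.csv mydata/
hdfs dfs -ls mydata
hdfs dfs -get mydata/results.csv ./results_copy.csv
hdfs dfs -rm -r mydata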
In Spark, a DataFrame is a distributed collection of data organized into named columns. Users can use the DataFrame API to perform various relational operations on both external data sources and Spark’s built-in distributed collections without providing specific procedures for processing data. Also, programs based on the DataFrame API are automatically optimized by Spark’s built-in optimizer, Catalyst. Examples:
Text Search: In this example, we search through the error messages in a log file. (Written in Python)
from pyspark.sql import Row
from pyspark.sql.functions import col

# Creates a DataFrame with a single column named "line" from the text file.
textFile = sc.textFile("hdfs://...")
df = textFile.map(lambda r: Row(r)).toDF(["line"])
# Counts all the errors, then the errors mentioning MySQL.
errors = df.filter(col("line").like("%ERROR%"))
errors.count()
errors.filter(col("line").like("%MySQL%")).count()
# Fetches the MySQL errors as an array of strings.
errors.filter(col("line").like("%MySQL%")).collect()
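Because DataFrame queries are optimized by Catalyst, you can inspect the execution plan Spark will actually run for the query above with explain():

errors.filter(col("line").like("%MySQL%")).explain()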
Simple Data Operations: In this example, we read a table stored in a database and calculate the number of people for every age. Finally, we save the calculated result to S3 in JSON format. A simple MySQL table "people" is used; it has two columns, "name" and "age". (Written in Python)
# Creates a DataFrame based on a table named "people"
# (replace yourIP, yourPort, yourUsername, and yourPassword with your database details).
url = "jdbc:mysql://yourIP:yourPort/test?user=yourUsername&password=yourPassword"
df = sqlContext \
    .read \
    .format("jdbc") \
    .option("url", url) \
    .option("dbtable", "people") \
    .load()

# Looks at the schema of this DataFrame.
df.printSchema()

# Counts people by age and shows the result.
countsByAge = df.groupBy("age").count()
countsByAge.show()

# Saves countsByAge to S3 in JSON format.
countsByAge.write.format("json").save("s3a://...")
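To check the result, the saved JSON can be read back into a DataFrame (same placeholder path as above):

result = sqlContext.read.format("json").load("s3a://...")
result.show()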
MLlib, Spark’s Machine Learning (ML) library, provides many distributed ML algorithms. These algorithms cover tasks such as feature extraction, classification, regression, clustering, recommendation, and more. MLlib also provides tools such as ML Pipelines for building workflows, CrossValidator for tuning parameters, and model persistence for saving and loading models.
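As a minimal sketch of how a Pipeline and CrossValidator fit together (the column names, parameter grid, and trainingDF DataFrame below are illustrative assumptions, not part of the facility documentation):

from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.feature import HashingTF, Tokenizer
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

# Pipeline: tokenize text, hash the tokens into feature vectors, then fit a logistic regression.
tokenizer = Tokenizer(inputCol="text", outputCol="words")
hashingTF = HashingTF(inputCol="words", outputCol="features")
lr = LogisticRegression(maxIter=10)
pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])

# CrossValidator: try a few regularization values and keep the best model.
grid = ParamGridBuilder().addGrid(lr.regParam, [0.01, 0.1]).build()
cv = CrossValidator(estimator=pipeline,
                    estimatorParamMaps=grid,
                    evaluator=BinaryClassificationEvaluator(),
                    numFolds=3)
# cvModel = cv.fit(trainingDF)   # trainingDF: an assumed DataFrame with "text" and "label" columns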
Prediction with Logistic Regression: In this example, we take a dataset of labels and feature vectors and learn to predict the labels from the feature vectors using the Logistic Regression algorithm. (Written in Python)
from pyspark.ml.classification import LogisticRegression

# Every record of this DataFrame contains the label and features represented by a vector;
# "data" is a list of (label, feature vector) pairs prepared beforehand (see the sketch below).
df = sqlContext.createDataFrame(data, ["label", "features"])
lr = LogisticRegression(maxIter=10)  # limit the number of iterations to 10
model = lr.fit(df)                   # fit the model to the data
model.transform(df).show()           # predict labels for the same data and show the result
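For reference, here is one way the input data used above could be constructed, together with saving and re-loading the fitted model (the feature values and output path are illustrative assumptions):

from pyspark.ml.classification import LogisticRegressionModel
from pyspark.ml.linalg import Vectors

# Illustrative (label, feature vector) pairs; replace with your own dataset.
data = [(0.0, Vectors.dense([0.0, 1.1, 0.1])),
        (1.0, Vectors.dense([2.0, 1.0, -1.0])),
        (0.0, Vectors.dense([2.0, 1.3, 1.0])),
        (1.0, Vectors.dense([0.0, 1.2, -0.5]))]

# Persist the fitted model to your HDFS home directory and load it back later.
model.save("lr_model")
sameModel = LogisticRegressionModel.load("lr_model")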
Useful Commands
ssh <username>@172.20.70.16
Usage Guidelines