
  1. Connect to the Tufts High Performance Compute Cluster. See Connecting for a detailed guide.

  2. Load the Spark module with the following command:

    Code Block
    module load spark

    Note that you can see a list of all available modules (potentially including different versions of Spark) by typing:

    Code Block
    module avail

    You can request a specific version of Spark with the module load command, or use the generic module name (spark) to load the latest version.
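    For example, loading a specific version looks like the following (the version string here is hypothetical; use one actually listed by module avail on the cluster):

    ```shell
    # List the Spark versions installed on the cluster (output varies by site)
    module avail spark

    # Load a specific version (hypothetical version string; match one listed above)
    module load spark/3.1.1

    # Or load the default/latest version
    module load spark
    ```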

  3. Start a PySpark session by typing:

    Code Block
    pyspark

A Simple Test

To make sure that Spark and PySpark are working properly, let's load an RDD (Resilient Distributed Dataset) and perform a simple function on it. In this example, we will create a text file with a few lines and use PySpark to count both the number of lines and the number of words.

  1. Create a new text file in your home directory on the cluster using nano (or your favorite text editor):

    Code Block
    nano sparktest.txt
  2. Put a few lines of text into the file and save it, for example:

    Code Block
    This is line one
    This is line two
    This is line three
    This is line four
  3. Load the file into an RDD as follows:

    Code Block
    rdd = sc.textFile("sparktest.txt")

    Note that you can use the type() command to verify that rdd is indeed a PySpark RDD.

  4. Count the number of lines in the rdd:

    Code Block
    lines = rdd.count()
  5. Now you can use the split() and flatMap() functions to count the number of individual words:

    Code Block
    words = rdd.flatMap(lambda x: x.split()).count()
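The counting logic above can be sketched in plain Python (no Spark required) to show what the two commands compute for the four example lines in sparktest.txt:

```python
# Plain-Python sketch of the Spark counting logic above, using the
# four example lines from sparktest.txt.
lines_text = [
    "This is line one",
    "This is line two",
    "This is line three",
    "This is line four",
]

# rdd.count() counts the elements of the RDD -- here, one element per line.
lines = len(lines_text)

# rdd.flatMap(lambda x: x.split()).count() splits each line into words,
# flattens the per-line word lists into a single collection, and counts it.
words = len([word for line in lines_text for word in line.split()])

print(lines)  # 4
print(words)  # 16
```

If the PySpark commands return the same counts (4 lines and 16 words for this example file), Spark is reading and processing the file correctly.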