27  Parallelisation

“Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time.” Wikipedia

Note

In the past, a computer processing unit (CPU) would have had a single core.

However, in more recent years, having multiple cores has become the norm. The benefit of this is that multiple tasks can be handled simultaneously.

By default, our Python code will not make use of multiple cores. Everything will be run sequentially on a single core.
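It’s easy to check how many cores your machine has from within Python itself, using only the standard library:

```python
import os

# Number of CPU cores visible to Python on this machine
print(os.cpu_count())
```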

However, for some kinds of code, running it across multiple cores at once can be a great way to speed things up.

SimPy models are good candidates for running in parallel! By running our SimPy code in parallel, we can potentially cut down the length of time a trial takes to run dramatically.

Warning

You may not be able to use parallelisation when deploying your code to the web - it will vary depending on your deployment/hosting choices.

27.1 A simple joblib example

First, it may be helpful to see a simpler example of joblib.

Let’s start by looking at a for loop to square the numbers 1 to 10.

squared_numbers = []
for i in range(1, 11, 1):
  squared_numbers.append(i * i)

print(squared_numbers)
[1, 4, 9, 16, 25, 36, 49, 64, 81, 100]

We can simplify the code above into a list comprehension.

[i*i for i in range(1, 11, 1)]
[1, 4, 9, 16, 25, 36, 49, 64, 81, 100]

Why is this important? Well, to use joblib, it’s easiest to write our loop as a list comprehension.

Instead of doing i * i to square our number, we have made a new function that does the same thing.

from joblib import Parallel, delayed

def multiply_by_self(input_number):
  return input_number * input_number

Parallel(n_jobs=2)(delayed(multiply_by_self)(i) for i in range(1, 11, 1))
[1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
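It’s worth noting what delayed actually does here: it doesn’t run the function, it just captures the function and its arguments so that Parallel can dispatch the call to a worker later.

```python
from joblib import delayed

def multiply_by_self(input_number):
    return input_number * input_number

# The delayed wrapper captures the call rather than making it:
# we get back the function, its positional arguments and its
# keyword arguments, ready for Parallel to execute
captured = delayed(multiply_by_self)(4)

func, args, kwargs = captured
print(func(*args, **kwargs))  # 16
```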

27.2 A code example

Note

Thanks go to Michael Allen for providing an example of how this can be achieved in SimPy. His repository can be found here.

We will make use of the Joblib package to easily split our SimPy code to run across multiple processor cores.

We will take the model created in the Reproducibility chapter (Chapter 13) and add parallelisation to it.

27.2.1 Library imports

We will need to import Parallel and delayed from the joblib library.

You will need to run !pip install joblib if you have not previously made use of this library.

from joblib import Parallel, delayed

27.2.2 The g, Patient and Model classes

Our g, Patient and Model classes are unchanged.

27.2.3 The trial class

In the trial class, we need to change a number of functions, tweak our attributes, and make use of the joblib library.

Warning

Because joblib runs each job in a separate worker process, each worker gets its own copy of the Trial object. If we try to keep track of our results in the same way we have so far - setting up a dummy dataframe and then using the .loc accessor to write each run’s results to the correct row - the writes never make it back to the original object, and we will end up with an empty results dataframe.

Instead, we will create an empty list. Into this list we will place a dictionary of results from the run.

27.2.3.1 The init method

Let’s start by adjusting our __init__ method for our new way of carrying out the results collection.

def __init__(self):
    self.df_trial_results = []

27.2.3.2 The process_trial_results method

Next we want to create a new method that will turn our list of dictionaries into a pandas dataframe.

All we need to do is call pd.DataFrame on that object. In this case, we overwrite the original df_trial_results object.

Next we set the index of the dataframe to the run number, which is how it was set up in the original code.

def process_trial_results(self):
  self.df_trial_results = pd.DataFrame(self.df_trial_results)
  self.df_trial_results.set_index("Run Number", inplace=True)
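To see what these two lines do, here is the same pattern applied to a couple of hand-made result dictionaries (the values are invented for illustration):

```python
import pandas as pd

# One dictionary of results per run, as returned by run_single
results_list = [
    {"Run Number": 0, "Arrivals": 102, "Mean Q Time Recep": 0.00},
    {"Run Number": 1, "Arrivals": 125, "Mean Q Time Recep": 1.84},
]

# The dictionary keys become the column names...
df_trial_results = pd.DataFrame(results_list)

# ...and then the run number becomes the index, matching the original code
df_trial_results.set_index("Run Number", inplace=True)

print(df_trial_results)
```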

27.2.3.3 The print_trial_results method

Because we went to the effort of setting the index in the step above, this method can remain unchanged.
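The method itself isn’t repeated here, but judging from the output shown later in this chapter, it prints the full results dataframe followed by the mean of each column. A minimal sketch against a toy dataframe (reconstructed from that output, not copied from the original model):

```python
import pandas as pd

# Toy stand-in for self.df_trial_results after process_trial_results has run
df_trial_results = pd.DataFrame(
    {"Arrivals": [102, 125], "Mean Q Time Recep": [0.00, 1.84]},
    index=pd.Index([0, 1], name="Run Number"),
)

# Sketch of what print_trial_results produces: the header, the full
# table of per-run results, then the mean of each column across runs
print("Trial Results")
print(df_trial_results)
print(df_trial_results.mean())
```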

27.2.3.4 The run_single method

First, let’s look back at how our run_trial function was written before.

def run_trial(self):
        print(f"{g.number_of_receptionists} receptionists, {g.number_of_nurses} nurses, {g.number_of_doctors} doctors")
        print("") # Print a blank line

        # Run the simulation for the number of runs specified in g class.
        # For each run, we create a new instance of the Model class and call its
        # run method, which sets everything else in motion.  Once the run has
        # completed, we grab out the stored run results (just mean queuing time
        # here) and store it against the run number in the trial results
        # dataframe.

        for run in range(g.number_of_runs):
            random.seed(run)

            my_model = Model(run)
            patient_level_results = my_model.run()

            self.df_trial_results.loc[run] = [
                len(patient_level_results),
                my_model.mean_q_time_recep,
                my_model.mean_q_time_nurse,
                my_model.mean_q_time_doctor
                ]

        # Once the trial (ie all runs) has completed, print the final results
        self.print_trial_results()

To use parallelisation, we now split this out into two separate functions. The first is the run_single method.

Note that it’s very similar to the indented part of the for loop from the code above.

The main change is how the results are stored - they are now put into a dictionary. Remember, dictionaries use the format {“key”:value} - here we have made our column names the ‘keys’ and our results the ‘values’.

Finally, it’s important to return the results object from the function.

def run_single(self, run):
    # For each run, we create a new instance of the Model class and call its
    # run method, which sets everything else in motion.  Once the run has
    # completed, we grab out the stored run results (just mean queuing time
    # here) and store it against the run number in the trial results
    # dataframe.
    random.seed(run)

    my_model = Model(run)
    patient_level_results = my_model.run()

    results = {"Run Number":run,
        "Arrivals": len(patient_level_results),
        "Mean Q Time Recep": my_model.mean_q_time_recep,
        "Mean Q Time Nurse": my_model.mean_q_time_nurse,
        "Mean Q Time Doctor": my_model.mean_q_time_doctor
        }

    return results

27.2.3.5 The run_trial method

Finally, we need to do a few things.

The key one is making our trial now use the Parallel class and delayed function.

We set up an instance of the Parallel class and set the number of jobs to -1.

Tip

-1 just means that the joblib library will use every available core to run the code.

You can instead specify a particular number of cores to use as a positive integer value.
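joblib also provides a cpu_count function, so you can see how many cores it has available. Whatever value of n_jobs we pick, the results come back in the same order; only the number of worker processes changes:

```python
from joblib import Parallel, delayed, cpu_count

# How many cores joblib can see on this machine
print(cpu_count())

def double(x):
    return x * 2

# n_jobs=-1 uses every available core; n_jobs=2 uses exactly two
all_cores = Parallel(n_jobs=-1)(delayed(double)(i) for i in range(5))
two_cores = Parallel(n_jobs=2)(delayed(double)(i) for i in range(5))

print(all_cores)  # [0, 2, 4, 6, 8]
```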

We then pass in the self.run_single function to the delayed function.

Finally, we pass in the arguments that are required for the self.run_single function, which is just the run number.

The syntax can appear a little bit strange - take a close look at the full line below and try to understand it.

self.df_trial_results = Parallel(n_jobs=-1)(delayed(self.run_single)(run) for run in range(g.number_of_runs))

We assign all of this to the self.df_trial_results attribute of our class.

Now the only additional step is to run our new process_trial_results() function before we run print_trial_results().

def run_trial(self):
    print(f"{g.number_of_receptionists} receptionists, {g.number_of_nurses} nurses, {g.number_of_doctors} doctors")
    print("") # Print a blank line

    # Run the simulation for the number of runs specified in g class.
    self.df_trial_results = Parallel(n_jobs=-1)(delayed(self.run_single)(run) for run in range(g.number_of_runs))

    # Once the trial (ie all runs) has completed, print the final results
    self.process_trial_results()
    self.print_trial_results()

Voila! Our model is now set up to use parallelisation. Let’s take a look at how much faster this can make things.

27.3 Evaluating the code outputs

First, let’s run this the original way and time how long it takes.

1 receptionists, 1 nurses, 2 doctors

Trial Results
            Arrivals  Mean Q Time Recep  Mean Q Time Nurse  Mean Q Time Doctor
Run Number                                                                    
0              102.0               0.00              57.19                1.15
1              125.0               1.84             144.69                0.02
2              112.0               0.85              15.30                1.13
3              120.0               1.08              82.67                0.04
4              132.0               1.94             107.47                0.51
...              ...                ...                ...                 ...
995             97.0               0.59              36.91                0.00
996            111.0               1.10              68.32                0.18
997            129.0               0.99             122.27                0.06
998            140.0               1.73              92.57                0.30
999            109.0               0.67              45.83                0.39

[1000 rows x 4 columns]
Arrivals              120.98
Mean Q Time Recep       1.31
Mean Q Time Nurse      62.75
Mean Q Time Doctor      0.50
dtype: float64

It took 33.5614 seconds to do 1000 runs without parallelisation

Now let’s run it again with parallelisation.

1 receptionists, 1 nurses, 2 doctors

Trial Results
            Arrivals  Mean Q Time Recep  Mean Q Time Nurse  Mean Q Time Doctor
Run Number                                                                    
0                102               0.00              57.19                1.15
1                125               1.84             144.69                0.02
2                112               0.85              15.30                1.13
3                120               1.08              82.67                0.04
4                132               1.94             107.47                0.51
...              ...                ...                ...                 ...
995               97               0.59              36.91                0.00
996              111               1.10              68.32                0.18
997              129               0.99             122.27                0.06
998              140               1.73              92.57                0.30
999              109               0.67              45.83                0.39

[1000 rows x 4 columns]
Arrivals              120.98
Mean Q Time Recep       1.31
Mean Q Time Nurse      62.75
Mean Q Time Doctor      0.50
dtype: float64

It took 5.5744 seconds to do 1000 runs **with** parallelisation

27.4 Evaluating speed gains

Let’s run the model a few times, specifying a different number of cores to run it on each time.

This book is being compiled on a machine with a 14-core processor.

An argument has been added to the run_trial function to allow us to pass in the number of cores to use.
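The modified method isn’t shown in this chapter, so here is a minimal sketch of the idea, using a standalone function in place of the full Trial class: the cores argument is simply passed through to n_jobs (the names run_trial and number_of_runs mirror the book’s; the rest is illustrative).

```python
from joblib import Parallel, delayed

def square(run):
    return run * run

# Illustrative stand-in for Trial.run_trial(self, cores): the cores
# argument is handed straight to Parallel as n_jobs
def run_trial(number_of_runs, cores=-1):
    return Parallel(n_jobs=cores)(
        delayed(square)(run) for run in range(number_of_runs)
    )

print(run_trial(5, cores=2))  # [0, 1, 4, 9, 16]
```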

The results below all relate to 100 runs of the simulation.

speed = []

g.number_of_runs = 100

for i in range(1, 15, 1):
    start_time = time.time()
    # Create an instance of the Trial class
    my_trial = Trial()

    # Call the run_trial method of our Trial object
    my_trial.run_trial(cores=i)

    run_time = round((time.time() - start_time), 3)

    speed.append({"Cores":i, "Run Time (seconds)": run_time})

timing_results = pd.DataFrame(speed)

print(timing_results)
    Cores  Run Time (seconds)
0       1               3.437
1       2               2.234
2       3               1.580
3       4               1.372
4       5               1.215
5       6               1.092
6       7               1.118
7       8               0.709
8       9               0.678
9      10               0.691
10     11               1.055
11     12               0.620
12     13               0.602
13     14               0.593

Let’s run it again and look at the speed gains when doing 1000 runs of the simulation.

Notice that doubling the number of cores doesn’t halve the run time - there are fixed overheads, such as starting up the worker processes and passing data to and from them, which take a certain amount of time regardless of how many cores are used. This can be even more noticeable with a smaller number of runs.

We make big gains at the beginning, but the fixed overheads mean that higher numbers of cores start to have less and less of an effect.

    Cores  Run Time (seconds)
0       1              34.700
1       2              18.816
2       3              14.274
3       4              10.761
4       5               9.242
5       6               7.829
6       7               7.329
7       8               6.765
8       9               6.443
9      10               6.294
10     11               6.426
11     12               5.954
12     13               5.790
13     14               5.519
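We can put numbers on this tailing-off by computing the speedup relative to a single core from a few of the 1000-run timings above. With perfect scaling, 14 cores would finish in 34.7 / 14 ≈ 2.5 seconds; in practice we observe around 5.5 seconds, a speedup of roughly 6.3x rather than 14x:

```python
import pandas as pd

# A selection of the 1000-run timings from the table above
timing_results = pd.DataFrame({
    "Cores": [1, 2, 4, 8, 14],
    "Run Time (seconds)": [34.700, 18.816, 10.761, 6.765, 5.519],
})

# Speedup relative to running everything on a single core
timing_results["Speedup"] = (
    34.700 / timing_results["Run Time (seconds)"]
).round(2)

print(timing_results)
```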