DESperately Seeking Simulation - Part 2

Sammi Rosser, Dr Dan Chalk

Health Service Modelling Associates Programme Lambda

Recap

So far we’ve talked about the different parts of a discrete event simulation.

But what if we could actually see the simulation panning out?

Let’s revisit the key concepts.

Entities arrive in your system at varying rates

Entities are often the service users in your model.

Like buses, you might have none arrive for ages, then several at once.

Generators bring entities into being, and sinks take entities out of being.

And we might have multiple generators for different kinds of patients, if they arrive at different rates.

Our entities might take a range of different routes through the system.

Entities in queues might be seen in the order they arrive - or you may use priority-based queueing.
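
As a rough sketch of how these pieces fit together in code (a toy example, not the model behind the animations - the arrival rate, treatment time and priority levels are made-up values for illustration):

import random
import simpy

def patient_generator(env, cubicles):
    # Generator: brings entities (patients) into being at varying rates
    patient_id = 0
    while True:
        # Like buses: random gaps mean quiet spells, then several arrivals at once
        yield env.timeout(random.expovariate(1 / 10))  # roughly one arrival every 10 minutes
        patient_id += 1
        priority = random.choice([1, 2])  # 1 = urgent, 2 = routine (lower number = seen sooner)
        env.process(patient(env, f"Patient {patient_id}", cubicles, priority))

def patient(env, name, cubicles, priority):
    # Entity: waits in the (priority-ordered) queue, is seen, then leaves (the 'sink')
    with cubicles.request(priority=priority) as req:
        yield req
        yield env.timeout(random.expovariate(1 / 20))  # roughly 20 minutes of treatment

random.seed(42)
env = simpy.Environment()
cubicles = simpy.PriorityResource(env, capacity=2)
env.process(patient_generator(env, cubicles))
env.run(until=240)  # simulate 4 hours (times in minutes)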

The Power of Animations

We’ve demonstrated a few simple animations here to help recap the concepts.

Each animation is actually a real discrete event simulation running under the hood, using real simulation code.

But these animations aren’t just limited to toy examples - they can be applied to real simulations too!

From an emergency department…

To a ward…

What-if Scenarios

The Power of Questions

There are some key kinds of questions you can ask using a DES:

What happens if

  • we add or remove resources?
    • at certain times of day?
    • or if we change the timing and overlap of different shifts?
  • demand increases or decreases?
    • from certain patient subgroups?
  • patient characteristics change?
  • we reorder or change the process?
  • parts of the process get faster or slower?
  • we change how the queue is prioritised?
  • we change operating hours

And animations can help you understand the answers to your questions!
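
In code, each of these questions usually boils down to changing one or two parameters and re-running the model. A minimal sketch, where run_model is a placeholder standing in for a real simulation function (the parameter names and numbers are purely illustrative):

def run_model(clinicians, patients_per_week):
    # Placeholder standing in for a real simulation run - it just returns a
    # made-up 'mean wait' so the scenario loop below runs for illustration
    return {"mean_wait_weeks": patients_per_week / clinicians}

scenarios = {
    "baseline":        {"clinicians": 4, "patients_per_week": 25},
    "extra clinician": {"clinicians": 5, "patients_per_week": 25},
    "demand +20%":     {"clinicians": 4, "patients_per_week": 30},
}

for name, params in scenarios.items():
    results = run_model(**params)
    print(f"{name}: mean wait {results['mean_wait_weeks']:.1f} weeks")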

Other DES Outputs

Animations can help you understand what’s going on - but they’re not so good at summarising the answers to your what-if questions.

What else might we track?

Wait Times

A common thing to measure is the wait time at key points in the service.

  • The mean average wait time
  • The median average wait time
  • The maximum wait time
  • The variation in wait times

You might also subset this by time (like the hour or day), or by patient type.
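
As a minimal sketch of how these summaries might be calculated (the wait times here are made-up numbers in minutes):

import statistics

waits = [12, 45, 8, 90, 33, 5, 60, 27]  # made-up wait times in minutes

print("Mean wait:   ", statistics.mean(waits))
print("Median wait: ", statistics.median(waits))
print("Maximum wait:", max(waits))
print("Std dev:     ", round(statistics.stdev(waits), 1))  # one measure of variation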

Discussion

That gives you one idea for what you might measure in a model.

But what other sorts of things might you want to be tracking and discussing?

Discuss in your groups for 5 minutes, and then we’ll share some ideas.

Meeting wait targets

Rather than looking purely at the raw wait times, we might be interested in meeting targets.

For example:

  • what % of people waited more than 4 hours?
  • what % of cases met the 2 week wait target?
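
A minimal sketch of how a target like this might be calculated, assuming the model has recorded each patient's wait in hours (the numbers are made up):

waits_hours = [1.5, 3.2, 4.5, 2.0, 6.1, 3.8, 0.9, 4.2]  # made-up ED waits in hours

pct_over_4_hours = 100 * sum(1 for w in waits_hours if w > 4) / len(waits_hours)

print(f"% waiting over 4 hours: {pct_over_4_hours:.1f}%")
print(f"% seen within 4 hours:  {100 - pct_over_4_hours:.1f}%")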

Queue Lengths

We might also be interested in

  • the average lengths of queues
  • how queues vary across
    • different resources/parts of the process
    • different times of day/days of the week/months of the year
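
A minimal sketch of that kind of breakdown, assuming the model takes a snapshot of the queue length at regular time points (the snapshots here are made up, with time in hours):

from collections import defaultdict

# Made-up monitoring data: (simulation time in hours, queue length at that time)
snapshots = [(0, 2), (1, 3), (2, 5), (24, 1), (25, 4), (26, 6), (48, 0), (49, 2)]

by_hour_of_day = defaultdict(list)
for t, queue_length in snapshots:
    by_hour_of_day[int(t) % 24].append(queue_length)

for hour, lengths in sorted(by_hour_of_day.items()):
    print(f"Hour {hour:02d}: average queue length = {sum(lengths) / len(lengths):.1f}")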

Resource Utilisation

We may also be interested in resource utilisation.

If our waits are good, but most of the time our resources are sitting idle, this is not optimal…
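
As a rough sketch, utilisation is usually the proportion of available resource-time that was actually spent busy (the figures below are made up):

# Made-up figures for one simulated week
busy_hours = 510                          # total hours clinicians spent with patients
n_clinicians = 4
available_hours = n_clinicians * 7 * 24   # clinician-hours available across the week

utilisation = busy_hours / available_hours
print(f"Utilisation: {utilisation:.0%}")  # about 76% in this made-up example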

But what is the ideal level of resource utilisation? Shout out your thoughts.

We often don’t want all our resources being utilised 100% of the time - they’ll burn out! Or if we have a ward, we might not want it running at 100% occupancy so that there’s space for emergency admissions.

In fact, depending on your ward size and how important it is that a bed is immediately available, optimum occupancy could be as low as ~55%.

There’s a great paper here if you’d like to find out more!

And more!

  • Resources being blocked from use by a downstream resource limitation (e.g. ambulances being blocked from unloading patients)
  • Patients reneging (leaving a queue they had joined) or baulking (not joining a queue at all)
  • Non-attenders
  • Balance between admissions and discharges
  • Throughput (e.g. patients treated per day)
  • Total resources used (e.g. medication)

Cost Savings

We can bring in cost estimates for different parts of the system and use these to explore the cost-effectiveness of different scenarios.

Visualising Metrics

Let’s look at a couple of key ways you might see these sorts of things displayed in models.

Bar Plots

Bar plots can be a good way of summarising the outputs of multiple runs.

We can bring in background graphics to help us quickly interpret the results.

Box Plots

Box plots can help us understand how things can vary across runs.

Histograms

Histograms are another good tool for understanding variation.

Line Charts

Line charts can be good for helping us understand how things vary over time.
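
A minimal matplotlib sketch of two of these - a box plot of mean waits across runs and a line chart of queue length over time - using made-up numbers:

import matplotlib.pyplot as plt

mean_waits_per_run = [3.2, 4.1, 2.8, 5.0, 3.6, 4.4]  # made-up: one mean wait per run
weeks = [0, 1, 2, 3, 4, 5]                            # made-up weekly snapshots
queue_lengths = [5, 8, 12, 9, 15, 11]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

ax1.boxplot(mean_waits_per_run)
ax1.set_title("Mean wait across runs")
ax1.set_ylabel("Weeks")

ax2.plot(weeks, queue_lengths)
ax2.set_title("Queue length over time")
ax2.set_xlabel("Week")
ax2.set_ylabel("Patients waiting")

plt.tight_layout()
plt.show()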

Interacting with Models

There are a couple of different ways to interact with models.

We’ll start with the old-school way…

The Code

Here’s a sample of some code for a simulation model (that your HSMA would write!)

import simpy
import random
import math
import matplotlib.pyplot as plt
import argparse

# --------------------------
# Parse command-line arguments
# --------------------------
parser = argparse.ArgumentParser(description="Healthcare Waiting List Simulation")
parser.add_argument("--patients", type=int, default=25, help="Average new patients per week (default: 25)")
parser.add_argument("--clinicians", type=int, default=4, help="Number of clinicians (default: 4)")
parser.add_argument("--patients_per_clinician_per_week", type=int, default=5, help="Patients each clinician can see per week (default: 5)")
parser.add_argument("--duration_years", type=int, default=3, help="Simulation duration in years (default: 3)")
parser.add_argument("--initial_waitlist", type=int, default=0, help="Initial waiting list length (default: 0)")

args = parser.parse_args()

# --------------------------
# Assign from arguments
# --------------------------
patients = args.patients
clinicians = args.clinicians
patients_per_clinician_per_week = args.patients_per_clinician_per_week
sim_duration_years = args.duration_years
waiting_list_start_length = args.initial_waitlist

# --------------------------
# Tracking variables
# --------------------------
waiting_times = []
queue_lengths = []
time_points = []
patients_seen = []
all_patients = []

random.seed(42)

# --------------------------
# Simulation functions
# --------------------------
def patient(env, name, nurses, arrival_time):
    service_time = 1 / patients_per_clinician_per_week
    patient_record = {
        'name': name,
        'arrival_time': arrival_time,
        'service_start': None,
        'wait_time_weeks': None,
        'status': 'waiting'
    }
    all_patients.append(patient_record)

    with nurses.request() as request:
        yield request
        wait_time = env.now - arrival_time
        waiting_times.append(wait_time)

        patient_record['service_start'] = env.now
        patient_record['wait_time_weeks'] = wait_time
        patient_record['status'] = 'seen'

        patients_seen.append(patient_record.copy())
        yield env.timeout(service_time)

def patient_generator(env, nurses):
    for i in range(waiting_list_start_length):
        env.process(patient(env, f"Initial Patient {i+1}", nurses, 0))

    while True:
        num_arrivals = math.ceil(random.normalvariate(patients, patients * 0.2))
        num_arrivals = max(0, num_arrivals)
        for i in range(num_arrivals):
            env.process(patient(env, f"Week {math.ceil(env.now)} Patient {i+1}", nurses, env.now))
        yield env.timeout(1)

def monitor_queue(env, nurses):
    while True:
        queue_lengths.append(len(nurses.queue))
        time_points.append(env.now)
        yield env.timeout(1)

# --------------------------
# Run simulation
# --------------------------
env = simpy.Environment()
nurses_resource = simpy.Resource(env, capacity=clinicians)
env.process(patient_generator(env, nurses_resource))
env.process(monitor_queue(env, nurses_resource))
env.run(until=sim_duration_years * 52)

# --------------------------
# Plot results
# --------------------------
plt.figure(figsize=(10, 4))
plt.plot(time_points, queue_lengths, linewidth=2, color='#1f77b4')
plt.xlabel('Time (weeks)')
plt.ylabel('Waiting List Length')
plt.title('Waiting List Length Over Time')
plt.grid(True, alpha=0.3)
plt.tight_layout()
plt.show()

# --------------------------
# Print summary
# --------------------------
final_waiting_list = len(nurses_resource.queue)
patients_seen_count = len(patients_seen)
avg_wait = sum(waiting_times) / len(waiting_times) if waiting_times else 0

print("\n--- Simulation Summary ---")
print(f"Final Waiting List: {final_waiting_list}")
print(f"Patients Seen: {patients_seen_count}")
print(f"Average Wait (weeks): {avg_wait:.1f}")

Let’s now introduce multiple runs to capture variation.
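
A hedged sketch of what multiple runs might look like - run_simulation here is a placeholder standing in for the full model above (a real version would build and run the SimPy environment with the given seed and return the mean wait it observed), and the fake results are just for illustration:

import random
import statistics

def run_simulation(seed):
    # Placeholder for the full model above: a real version would build the SimPy
    # environment with this seed, run it, and return the mean wait it observed.
    random.seed(seed)
    return random.normalvariate(10, 2)  # fake 'mean wait in weeks' for illustration

results = [run_simulation(seed) for seed in range(30)]  # 30 runs with different seeds

print(f"Average of the mean waits: {statistics.mean(results):.1f} weeks")
print(f"Best run:  {min(results):.1f} weeks")
print(f"Worst run: {max(results):.1f} weeks")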

Quantifying Variability

There are a number of ways we might look to quantify variation:

  • Range (minimum value to maximum value)
  • Inter-quartile range (range of middle 50% of values)
  • Standard Deviation (average variation from the mean)
  • Confidence intervals (e.g. 95% CI around the mean)
  • Percentiles (e.g. 5th, 50th, 95th percentile)
  • and more!

Talk to your analyst to find out what they’ve chosen, and why!

Not all error bars are showing the same thing…
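
A minimal sketch of calculating some of these from per-run results (made-up numbers; the 95% confidence interval here uses a simple normal approximation):

import statistics

mean_waits = [9.8, 11.2, 10.5, 12.1, 9.1, 10.9, 11.7, 10.2]  # made-up: one mean wait per run

mean = statistics.mean(mean_waits)
sd = statistics.stdev(mean_waits)
half_width = 1.96 * sd / len(mean_waits) ** 0.5     # normal-approximation 95% CI
cuts = statistics.quantiles(mean_waits, n=20)       # 5% steps: cuts[0] is the 5th percentile

print(f"Range:               {min(mean_waits):.1f} to {max(mean_waits):.1f}")
print(f"Mean (SD):           {mean:.1f} ({sd:.1f})")
print(f"95% CI around mean:  {mean - half_width:.1f} to {mean + half_width:.1f}")
print(f"5th-95th percentile: {cuts[0]:.1f} to {cuts[-1]:.1f}")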

Now let’s make it easier to compare scenarios in the app.

Summary

  • Building a web app interface to simulation code can make it easier for non-experts to interact with simulation models
  • Animations can help communicate how the model works, and start spotting patterns
  • Thinking about the metrics to track, and how to display them, is an important part of the modelling process

Exercise: The DES Playground

The Playground

You’re now going to have a go with an interactive app.

In the app, you are running an emergency department. You usually have 24 cubicles available, which are strictly divided between two pathways and multiple steps.

The Exercise

Go to hsma.co.uk/workshop

With 24 rooms, your system is coping. However, due to building work, you are temporarily going to have 20 rooms available.

You need to reallocate the number of rooms available to different steps of the pathway - but is it still possible to keep the hospital running smoothly?

  1. Run the simulation with the default parameters. What results does it give you?

  2. Change some parameters and run it again. How do the results change?

  3. Try to meet the goals!

You have 10 minutes.

Discussion

How to Make a Good Model

A model is doomed to fail if it’s not designed well.

So how can you help ensure that you are enabling your analysts to make good models?

Understanding the Process

What is the process - really?

  • You’re setting the model up for failure if you don’t have the experts in the process in the room from the start

  • Even then, it can be complex to work out what to model and what to simplify

    • But the scenarios you want to test can really influence this!
  • And do you actually hold the data you need to parameterise the model?

    • Expert opinions can be used where data is missing, but this needs to be documented!

Measuring Up

  • From the beginning, you need to be thinking about which metrics/KPIs you want to be able to measure
  • How are you going to check that the model reflects the current reality?

Scenarios

  • What do you need to be able to change?
    • and what can you actually change in your system?
  • Will the simplified model logic support the scenarios you want to test?
  • How do you want to be able to compare scenarios?
    • looking in-depth at scenarios one at a time?
    • or displaying outputs side-by-side in a web app?

Using it for decisions

  • Is this a one-and-done output, or an ongoing tool?
  • What do you need to be able to export?
  • Have you asked the modeller how they are assessing it against the real system, and refining it?

Understanding the limits

A model is not a crystal ball!

It should generally be thought of as presenting a ‘direction of travel’ - not exact estimates.

In addition, a common mistake amongst those newer to modelling is to assume that

  • models need to capture everything (and in detail)
  • and that a more “realistic” model is a better one

Model Detail and Scope

“All models are wrong, but some are useful.”

  • George E.P. Box

(a British statistician considered one of the greatest statistical minds of the 20th century)

Model Detail and Scope (continued)

Tube Map

This model is wrong…

Actual Layout

…but very useful!

Model Simplifications

All models are built on assumptions and simplifications.


Assumptions are things we have to include because we don’t or can’t know something

Simplifications are things in a model which we choose to represent in less detail than the real world equivalent because the added detail won’t give us added benefit

As models are essentially collections of assumptions and simplifications, the more of the real world we choose to capture, the more potential inaccuracy we introduce!

Scope and Detail

When designing a model, the modeller needs to consider scope and level of detail.

Scope determines which section of the real world is carved out and represented in the model - where are the boundaries?

Level of detail is how much of the real world detail is represented vs simplified

Scope and Detail (continued)

Imagine you’re modelling an ED to identify strategies to reduce waits.

Does modelling all the tasks a nurse does during triage give you anything over simplifying it to a “triage” process that takes x amount of time?


If I want to use the model to explore changing some of those processes then maybe…

But if it will just change the overall amount of time with the nurse, it’s equally effective to say “What if the average triage time was reduced by 2 minutes?”

A good rule of thumb



Build the simplest model to sufficiently answer the question for which the model is being used


If extra scope or detail is needed, how can its representation be simplified?

How soon do you need it?

How long does a model take?

  • A good model can take time to create
    • it’s not quite like a regular analysis
  • You can help it go faster by carefully considering the things on the preceding slides
  • Set your analysts up for success!
  • A rushed model can do more harm than good
  • But it’s generally still quicker than trialling the real world changes!

No, really, how long?

  • A simple model can go together pretty quickly (hours to days)
    • but your analysts may need time to upskill in R or Python…
    • and upskill in DES-specific concepts…
    • and add in a Streamlit frontend…
    • and clear visuals…
    • and animations…
    • and more complex inputs and scenarios…
    • and the ability to compare things side-by-side…
    • and the ability to download a summary table…
    • and robust tests…

Summary

In reality, a good model may take weeks to months - and may be iterated on for years

But the time and effort can be worth it!


“I can give you an answer by this afternoon that’s probably wrong and you’ll probably have to ask me again next week.

Or I can build a model, which will take a bit longer, but you’ll probably never need to ask me that question again.”

  • One of our HSMAs from 2016 to their senior managers

Exercise: Design your own Discrete Event Simulation + Interface

Your Task

Choose a system

  • it could be the one you discussed earlier
  • or you can choose something different

Using the provided paper and markers, you are going to draw some wireframes (mock-ups) of a web application front-end for your system.

Your Task (continued)

Think about the kinds of inputs and outputs you want in your web app.

You can use multiple pieces of paper to represent different pages of the app.