MPI Component Tutorial¶
This tutorial covers the MPI component in rompy-xbeach, which controls
domain decomposition for parallel execution of XBeach simulations.
What You'll Learn¶
- How MPI parallelisation works in XBeach
- Domain decomposition strategies
- Configuring manual domain subdivision
- Best practices for parallel execution
Prerequisites¶
- Familiarity with the Config class
- Basic understanding of parallel computing concepts
Helper Functions¶
def print_params(lines: str, filename: str = "params.txt"):
    """Display parameters in a text-editor style format."""
    border = "─" * 60
    print(f"┌{border}┐")
    print(f"│ {filename:<58} │")
    print(f"├{border}┤")
    for line in lines.strip().split("\n"):
        print(f"│ {line:<58} │")
    print(f"└{border}┘")


def show_params(obj, destdir=None):
    """Show parameters that would be written to params.txt."""
    import tempfile

    if destdir is None:
        destdir = tempfile.mkdtemp()
    params = obj.get(destdir=destdir)
    lines = "\n".join(f"{k} = {v}" for k, v in params.items())
    print_params(lines)
    # return params


def print_warning(text: str):
    """Display a warning message with styling."""
    border = "─" * 60
    print(f"\033[93m┌{border}┐\033[0m")
    print(f"\033[93m│ ⚠ WARNING{' ' * 49}│\033[0m")
    print(f"\033[93m├{border}┤\033[0m")
    for line in text.strip().split("\n"):
        print(f"\033[93m│ {line:<58} │\033[0m")
    print(f"\033[93m└{border}┘\033[0m")
1. Introduction to MPI in XBeach¶
What is MPI?¶
MPI (Message Passing Interface) enables XBeach to run in parallel across multiple CPU cores by dividing the computational domain into sub-domains.
┌─────────────────────────────────────────────────────────────┐
│ Full Domain (nx × ny) │
│ │
│ ┌─────────────┬─────────────┬─────────────┬─────────────┐ │
│ │ Core 0 │ Core 1 │ Core 2 │ Core 3 │ │
│ │ │ │ │ │ │
│ │ Sub-domain │ Sub-domain │ Sub-domain │ Sub-domain │ │
│ │ 0 │ 1 │ 2 │ 3 │ │
│ └─────────────┴─────────────┴─────────────┴─────────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
Each sub-domain is computed independently, with boundary information exchanged between neighbouring domains when needed.
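For intuition, here is a small standalone sketch in plain Python (not part of rompy-xbeach or XBeach) that partitions a hypothetical grid into four strips, one per core, in the spirit of the figure above.

# Illustrative sketch only: partition a hypothetical nx x ny grid into four
# strips, one per MPI rank. This is not XBeach's decomposition code; ghost
# cells and load-balancing details are ignored.
def split_1d(n_cells: int, n_parts: int) -> list[tuple[int, int]]:
    """Return (start, stop) index ranges that partition n_cells into n_parts."""
    base, extra = divmod(n_cells, n_parts)
    ranges, start = [], 0
    for i in range(n_parts):
        stop = start + base + (1 if i < extra else 0)
        ranges.append((start, stop))
        start = stop
    return ranges

nx, ny = 200, 400  # hypothetical grid dimensions
for rank, (j0, j1) in enumerate(split_1d(ny, 4)):
    print(f"Core {rank}: alongshore rows {j0}-{j1 - 1}, all {nx} cross-shore cells")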
Running XBeach with MPI¶
XBeach is launched with MPI using a command like:
mpirun -np 4 xbeach
The number of processes (-np 4) determines how many sub-domains are created.
The Mpi component in rompy-xbeach controls how the domain is subdivided.
from rompy_xbeach.components.mpi import Mpi
Automatic Decomposition (Default)¶
XBeach automatically determines the optimal subdivision to minimise internal boundary length:
mpi_auto = Mpi(mpiboundary="auto")
show_params(mpi_auto)
┌────────────────────────────────────────────────────────────┐
│ params.txt                                                  │
├────────────────────────────────────────────────────────────┤
│ mpiboundary = auto                                          │
└────────────────────────────────────────────────────────────┘
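The idea of minimising internal boundary length can be illustrated with a toy brute-force search. This is only a sketch of the concept, not XBeach's actual algorithm: it enumerates the factor pairs of the process count and scores each by the total length of internal interface it would create.

# Toy illustration of "minimise internal boundary length"; NOT XBeach's
# actual algorithm. Every m x n factorisation of the process count is scored
# by the total number of cells along its internal interfaces.
def best_split(nx: int, ny: int, nprocs: int) -> tuple[int, int]:
    """Return the (cross-shore, alongshore) split with the least internal boundary."""
    best = None
    for m in range(1, nprocs + 1):
        if nprocs % m:
            continue
        n = nprocs // m
        # m cross-shore divisions -> (m - 1) interfaces of length ny;
        # n alongshore divisions -> (n - 1) interfaces of length nx.
        boundary = (m - 1) * ny + (n - 1) * nx
        if best is None or boundary < best[0]:
            best = (boundary, m, n)
    return best[1], best[2]

m, n = best_split(nx=200, ny=2000, nprocs=8)
print(f"Least internal boundary for a 200 x 2000 grid on 8 cores: {m} x {n} split")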
Cross-Shore Decomposition (x)¶
Subdivides in the cross-shore direction only. Each sub-domain spans the full alongshore extent:
┌─────────────────────────────────────────────────────────────┐
│ Offshore │
│ ┌─────────────────────────────────────────────────────────┐│
│ │ Sub-domain 0 ││
│ ├─────────────────────────────────────────────────────────┤│
│ │ Sub-domain 1 ││
│ ├─────────────────────────────────────────────────────────┤│
│ │ Sub-domain 2 ││
│ ├─────────────────────────────────────────────────────────┤│
│ │ Sub-domain 3 ││
│ └─────────────────────────────────────────────────────────┘│
│ Onshore │
└─────────────────────────────────────────────────────────────┘
mpi_x = Mpi(mpiboundary="x")
show_params(mpi_x)
┌────────────────────────────────────────────────────────────┐
│ params.txt                                                  │
├────────────────────────────────────────────────────────────┤
│ mpiboundary = x                                             │
└────────────────────────────────────────────────────────────┘
Alongshore Decomposition (y)¶
Subdivides in the alongshore direction only. Each sub-domain spans the full cross-shore extent:
┌─────────────────────────────────────────────────────────────┐
│ Offshore │
│ ┌──────────┬──────────┬──────────┬──────────┐ │
│ │ │ │ │ │ │
│ │ Sub- │ Sub- │ Sub- │ Sub- │ │
│ │ domain │ domain │ domain │ domain │ │
│ │ 0 │ 1 │ 2 │ 3 │ │
│ │ │ │ │ │ │
│ └──────────┴──────────┴──────────┴──────────┘ │
│ Onshore │
└─────────────────────────────────────────────────────────────┘
mpi_y = Mpi(mpiboundary="y")
show_params(mpi_y)
┌────────────────────────────────────────────────────────────┐
│ params.txt                                                  │
├────────────────────────────────────────────────────────────┤
│ mpiboundary = y                                             │
└────────────────────────────────────────────────────────────┘
Manual Decomposition (man)¶
Specify exact subdivision using mmpi (cross-shore) and nmpi (alongshore):
# 2 domains cross-shore × 4 domains alongshore = 8 total
mpi_manual = Mpi(
    mpiboundary="man",
    mmpi=2,  # Cross-shore divisions
    nmpi=4,  # Alongshore divisions
)
show_params(mpi_manual)
┌────────────────────────────────────────────────────────────┐
│ params.txt                                                  │
├────────────────────────────────────────────────────────────┤
│ mmpi = 2                                                    │
│ mpiboundary = man                                           │
│ nmpi = 4                                                    │
└────────────────────────────────────────────────────────────┘
try:
    # This will fail - manual mode requires both mmpi and nmpi
    invalid_mpi = Mpi(mpiboundary="man", mmpi=2)
except ValueError as e:
    print_warning(f"Validation error:\n{str(e)[:55]}")
┌────────────────────────────────────────────────────────────┐
│ ⚠ WARNING                                                   │
├────────────────────────────────────────────────────────────┤
│ Validation error:                                           │
│ 1 validation error for Mpi                                  │
│ Value error, When mpibound                                  │
└────────────────────────────────────────────────────────────┘
Parameter Ranges¶
Both mmpi and nmpi must be between 1 and 100:
try:
    invalid_mpi = Mpi(mpiboundary="man", mmpi=0, nmpi=4)
except Exception as e:
    print_warning("Validation error:\nmmpi must be >= 1")
┌────────────────────────────────────────────────────────────┐
│ ⚠ WARNING                                                   │
├────────────────────────────────────────────────────────────┤
│ Validation error:                                           │
│ mmpi must be >= 1                                           │
└────────────────────────────────────────────────────────────┘
# Example configuration with MPI
# config = Config(
#     grid=grid,
#     bathy=bathy,
#     input=data_interface,
#     mpi=Mpi(mpiboundary="auto"),
# )
print("Config with MPI:")
print(" mpi=Mpi(mpiboundary='auto')")
Config with MPI:
  mpi=Mpi(mpiboundary='auto')
Omitting MPI Configuration¶
If you don't specify the mpi field, XBeach uses its defaults:
- mpiboundary = auto
- mmpi = 2 (if manual mode)
- nmpi = 4 (if manual mode)
# No MPI configuration - uses XBeach defaults
# config = Config(
#     grid=grid,
#     bathy=bathy,
#     input=data_interface,
#     # mpi not specified - uses defaults
# )
print("Config without MPI field:")
print(" XBeach uses default: mpiboundary=auto")
Config without MPI field:
  XBeach uses default: mpiboundary=auto
5. Choosing a Decomposition Strategy¶
When to Use Each Strategy¶
| Strategy | Best For | Considerations |
|---|---|---|
| `auto` | Most cases | Let XBeach optimise |
| `x` | Long, narrow domains | Minimises cross-shore communication |
| `y` | Wide domains | Minimises alongshore communication |
| `man` | Specific requirements | Full control over decomposition |
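As a rough translation of the table into code, the hypothetical helper below (not part of rompy-xbeach, and the aspect-ratio threshold is arbitrary) picks a strategy from the domain shape and otherwise falls back to automatic decomposition. It reuses the show_params helper defined earlier.

# Hypothetical rule-of-thumb helper, not a rompy-xbeach API: pick a
# decomposition strategy from the grid aspect ratio, per the table above.
from rompy_xbeach.components.mpi import Mpi

def suggest_mpi(nx: int, ny: int, aspect_threshold: float = 3.0) -> Mpi:
    if nx / ny >= aspect_threshold:
        return Mpi(mpiboundary="x")  # long, narrow domain: subdivide cross-shore
    if ny / nx >= aspect_threshold:
        return Mpi(mpiboundary="y")  # wide domain: subdivide alongshore
    return Mpi(mpiboundary="auto")   # otherwise let XBeach decide

show_params(suggest_mpi(nx=200, ny=2000))  # wide domain -> mpiboundary = y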
Performance Considerations¶
- Load balancing: Ideally, each sub-domain should have similar computational load
- Communication overhead: More sub-domains = more boundary exchanges
- Memory: Each process needs memory for its sub-domain plus ghost cells (see the sketch below)
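For the memory point, a back-of-envelope sketch is shown below; the number of fields, the ghost-cell width and double-precision storage are assumptions for illustration, not XBeach's actual memory layout.

# Back-of-envelope memory estimate per process. The number of 2D fields,
# the ghost-cell width and float64 storage are assumptions, not XBeach values.
nx, ny = 200, 2000        # hypothetical grid
mmpi, nmpi = 2, 4         # manual 2 x 4 decomposition
n_fields = 20             # assumed number of 2D state arrays per process
ghost = 2                 # assumed ghost-cell halo width

sub_nx = nx // mmpi + 2 * ghost
sub_ny = ny // nmpi + 2 * ghost
mb_per_process = sub_nx * sub_ny * n_fields * 8 / 1e6
print(f"Roughly {mb_per_process:.1f} MB of 2D field data per process")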
print_warning(
    "The number of MPI processes must match the\n"
    "total number of sub-domains (mmpi × nmpi).\n"
    "Mismatches will cause XBeach to fail."
)
┌────────────────────────────────────────────────────────────┐
│ ⚠ WARNING                                                   │
├────────────────────────────────────────────────────────────┤
│ The number of MPI processes must match the                  │
│ total number of sub-domains (mmpi × nmpi).                  │
│ Mismatches will cause XBeach to fail.                       │
└────────────────────────────────────────────────────────────┘
Example: Matching Processes to Domains¶
# For a 2×4 manual decomposition:
mpi_2x4 = Mpi(mpiboundary="man", mmpi=2, nmpi=4)
total_domains = mpi_2x4.mmpi * mpi_2x4.nmpi
print(f"Manual decomposition: {mpi_2x4.mmpi} × {mpi_2x4.nmpi} = {total_domains} domains")
print(f"Required MPI command: mpirun -np {total_domains} xbeach")
Manual decomposition: 2 × 4 = 8 domains
Required MPI command: mpirun -np 8 xbeach
6. Summary¶
Key Parameters¶
| Parameter | Values | Description |
|---|---|---|
| `mpiboundary` | `auto`, `x`, `y`, `man` | Decomposition strategy |
| `mmpi` | 1-100 | Cross-shore divisions (manual mode) |
| `nmpi` | 1-100 | Alongshore divisions (manual mode) |
Typical Usage¶
# Automatic decomposition (recommended)
mpi = Mpi(mpiboundary="auto")
# Manual 2×4 decomposition for 8 cores
mpi = Mpi(mpiboundary="man", mmpi=2, nmpi=4)
# In Config
config = Config(..., mpi=Mpi(mpiboundary="auto"))
Running with MPI¶
# Run with 4 processes
mpirun -np 4 xbeach
# Run with 8 processes (for 2×4 manual decomposition)
mpirun -np 8 xbeach
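To guard against the process-count mismatch warned about above, the launch command can be derived from the component itself. The sketch below assumes the Mpi model exposes its fields as attributes (standard pydantic behaviour); mpirun_command is a hypothetical helper, not part of rompy-xbeach.

# Hypothetical helper (not a rompy-xbeach API): derive the mpirun command from
# the Mpi component so that -np always matches mmpi x nmpi in manual mode.
from rompy_xbeach.components.mpi import Mpi

def mpirun_command(mpi: Mpi, default_nprocs: int = 4) -> str:
    if mpi.mpiboundary == "man":
        nprocs = mpi.mmpi * mpi.nmpi  # manual mode: process count is fixed
    else:
        nprocs = default_nprocs       # auto/x/y: XBeach adapts to the count
    return f"mpirun -np {nprocs} xbeach"

print(mpirun_command(Mpi(mpiboundary="man", mmpi=2, nmpi=4)))   # mpirun -np 8 xbeach
print(mpirun_command(Mpi(mpiboundary="auto"), default_nprocs=16))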
Related Components¶
- Hotstart: For chained simulations (must use consistent MPI decomposition)
- Output: Output files are combined automatically after MPI runs