44 changes: 44 additions & 0 deletions partitioned-heat-conduction/metadata.yaml
@@ -0,0 +1,44 @@
name: Partitioned heat conduction
path: partitioned-heat-conduction
url: https://precice.org/tutorials-partitioned-heat-conduction.html

participants:
- Dirichlet
- Neumann

cases:
  dirichlet-fenics:
    participant: Dirichlet
    directory: ./dirichlet-fenics
    run: ./run.sh
    component: fenics-adapter

  dirichlet-nutils:
    participant: Dirichlet
    directory: ./dirichlet-nutils
    run: ./run.sh
    component: nutils-adapter

  dirichlet-openfoam:
    participant: Dirichlet
    directory: ./dirichlet-openfoam
    run: ./run.sh
    component: openfoam-adapter

  neumann-fenics:
    participant: Neumann
    directory: ./neumann-fenics
    run: ./run.sh
    component: fenics-adapter

  neumann-nutils:
    participant: Neumann
    directory: ./neumann-nutils
    run: ./run.sh
    component: nutils-adapter

  neumann-openfoam:
    participant: Neumann
    directory: ./neumann-openfoam
    run: ./run.sh
    component: openfoam-adapter
11 changes: 11 additions & 0 deletions partitioned-heat-conduction/reference-results/.gitkeep
@@ -0,0 +1,11 @@
# Reference results for partitioned-heat-conduction are stored as Git LFS archives.
#
# To generate them locally, run from tutorials/tools/tests:
Copilot AI (Mar 1, 2026):

The instructions say to run from "tutorials/tools/tests", but in this repository the path is just "tools/tests" from the repo root. Please update this path to avoid confusing users.

Suggested change:
# To generate them locally, run from tutorials/tools/tests:
# To generate them locally, run from tools/tests:
#
# python generate_reference_results.py --tutorial partitioned-heat-conduction \
#     --case-combination dirichlet-fenics neumann-fenics
#
# Expected archives (one per registered case combination in tests.yaml):
#   dirichlet-fenics_neumann-fenics.tar.gz
#   dirichlet-nutils_neumann-nutils.tar.gz
#   dirichlet-openfoam_neumann-openfoam.tar.gz
18 changes: 16 additions & 2 deletions tools/tests/systemtests.py
@@ -2,7 +2,7 @@
import argparse
from pathlib import Path
from systemtests.SystemtestArguments import SystemtestArguments
from systemtests.Systemtest import Systemtest, display_systemtestresults_as_table
from systemtests.Systemtest import Systemtest, display_systemtestresults_as_table, GLOBAL_TIMEOUT
from systemtests.TestSuite import TestSuites
from metadata_parser.metdata import Tutorials, Case
import logging
@@ -26,13 +26,27 @@ def main():
    parser.add_argument('--log-level', choices=['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'],
                        default='INFO', help='Set the logging level')

    parser.add_argument(
        '--timeout',
        type=int,
        default=GLOBAL_TIMEOUT,
        help=(
            f'Maximum number of seconds to wait for each docker-compose process '
            f'(build, run, or field-compare) before killing it and marking the '
            f'test as failed. Defaults to {GLOBAL_TIMEOUT} seconds. '
            f'Increase this value for slow machines or large simulations; '
            f'decrease it to catch hanging tests faster.'
        )
    )
Comment on lines +29 to +40
Copilot AI (Mar 1, 2026):

The new --timeout flag controls the docker-compose process timeout, but issue #402 requests overriding preCICE's in precice-config.xml for selected tests. If this PR intends to close #402, it likely needs additional YAML plumbing + XML modification (or the PR title/issue linkage should be updated to avoid implying #402 is fixed).
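As a quick sanity check of the flag's semantics, here is a minimal argparse sketch; the `GLOBAL_TIMEOUT` value below is an assumption for illustration only (the real constant lives in `systemtests/Systemtest.py`):

```python
import argparse

# Assumed value for illustration; the actual GLOBAL_TIMEOUT is defined
# in systemtests/Systemtest.py and imported from there.
GLOBAL_TIMEOUT = 600

parser = argparse.ArgumentParser()
parser.add_argument('--timeout', type=int, default=GLOBAL_TIMEOUT)

# Omitting the flag falls back to the module-level default...
assert parser.parse_args([]).timeout == GLOBAL_TIMEOUT
# ...while passing it overrides the default for this invocation only.
assert parser.parse_args(['--timeout', '1800']).timeout == 1800
```

Because the default is the module constant, existing CI invocations keep their current behavior unless the flag is passed explicitly.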

    # Parse the command-line arguments
    args = parser.parse_args()

    # Configure logging based on the provided log level
    logging.basicConfig(level=args.log_level, format='%(levelname)s: %(message)s')

    print(f"Using log-level: {args.log_level}")
    print(f"Using timeout: {args.timeout} seconds")

    systemtests_to_run = []
    available_tutorials = Tutorials.from_path(PRECICE_TUTORIAL_DIR)
@@ -61,7 +75,7 @@ def main():
            for case, reference_result in zip(
                    test_suite.cases_of_tutorial[tutorial], test_suite.reference_results[tutorial]):
                systemtests_to_run.append(
                    Systemtest(tutorial, build_args, case, reference_result))
                    Systemtest(tutorial, build_args, case, reference_result, timeout=args.timeout))

    if not systemtests_to_run:
        raise RuntimeError("Did not find any Systemtests to execute.")
87 changes: 69 additions & 18 deletions tools/tests/systemtests/Systemtest.py
@@ -74,47 +74,83 @@ class SystemtestResult:
    fieldcompare_time: float  # in seconds


def _escape_markdown_cell(text: str) -> str:
    """
    Escape content for use inside a GitHub Flavored Markdown table cell.

    The pipe character must be escaped as ``\\|`` because it is the column
    delimiter in GFM tables. Other characters that can trigger unwanted
    inline formatting (backtick, asterisk, underscore, tilde) are also
    escaped so that e.g. a tutorial path like ``fluid_openfoam`` is not
    rendered as italic text.
    """
    text = str(text)
    # Order matters: backslash first to avoid double-escaping
    for char in ('\\', '|', '`', '*', '_', '~'):
        text = text.replace(char, f'\\{char}')
    return text


def display_systemtestresults_as_table(results: List[SystemtestResult]):
    """
    Prints the result in a nice tabluated way to get an easy overview
    Prints the result in a nice tabluated way to get an easy overview.
Copilot AI (Mar 1, 2026):

Docstring typo: "tabluated" should be "tabulated".

Suggested change:
    Prints the result in a nice tabluated way to get an easy overview.
    Prints the result in a nice tabulated way to get an easy overview.

    Plain-text output goes to stdout with fixed-width columns.
    A properly-escaped GitHub Flavored Markdown table is appended to
    GITHUB_STEP_SUMMARY when that environment variable is set.
    """
    def _get_length_of_name(results: List[SystemtestResult]) -> int:
        return max(len(str(result.systemtest)) for result in results)

    max_name_length = _get_length_of_name(results)

    header = f"| {'systemtest':<{max_name_length + 2}} "\
    # --- plain-text output (terminal) ---
    header_plain = f"| {'systemtest':<{max_name_length + 2}} "\
        f"| {'success':^7} "\
        f"| {'building time [s]':^17} "\
        f"| {'solver time [s]':^15} "\
        f"| {'fieldcompare time [s]':^21} |"
    separator_plaintext = "+-" + "-" * (max_name_length + 2) + \
        "-+---------+-------------------+-----------------+-----------------------+"
    separator_markdown = "| --- | --- | --- | --- | --- |"

    print(separator_plaintext)
    print(header)
    print(header_plain)
    print(separator_plaintext)

    if "GITHUB_STEP_SUMMARY" in os.environ:
        with open(os.environ["GITHUB_STEP_SUMMARY"], "a") as f:
            print(header, file=f)
            print(separator_markdown, file=f)

    for result in results:
        row = f"| {str(result.systemtest):<{max_name_length + 2}} "\
        row_plain = f"| {str(result.systemtest):<{max_name_length + 2}} "\
            f"| {result.success:^7} "\
            f"| {result.build_time:^17.1f} "\
            f"| {result.solver_time:^15.1f} "\
            f"| {result.fieldcompare_time:^21.1f} |"
        print(row)
        print(row_plain)
        print(separator_plaintext)
        if "GITHUB_STEP_SUMMARY" in os.environ:
            with open(os.environ["GITHUB_STEP_SUMMARY"], "a") as f:
                print(row, file=f)

    # --- GitHub step summary (Markdown) ---
    if "GITHUB_STEP_SUMMARY" in os.environ:
        # Use a clean, properly-escaped Markdown table — never reuse the
        # fixed-width plain-text format because extra spaces are collapsed
        # and pipe characters in cell content would break the table structure.
        md_header = "| systemtest | success | building time [s] | solver time [s] | fieldcompare time [s] |"
        md_separator = "| --- | --- | --- | --- | --- |"

        with open(os.environ["GITHUB_STEP_SUMMARY"], "a") as f:
            print(md_header, file=f)
            print(md_separator, file=f)
            for result in results:
                # Represent success as a clear visual symbol rather than the
                # Python literal ``True`` / ``False``.
                success_icon = ":white_check_mark:" if result.success else ":x:"
                # Escape all cell content that may contain Markdown-special chars.
                name_escaped = _escape_markdown_cell(str(result.systemtest))
                md_row = (
                    f"| {name_escaped} "
                    f"| {success_icon} "
                    f"| {result.build_time:.1f} "
                    f"| {result.solver_time:.1f} "
                    f"| {result.fieldcompare_time:.1f} |"
                )
                print(md_row, file=f)
            print("\n\n", file=f)
            print(
                "In case a test fails, download the archive from the bottom of this page and look into each `stdout.log` and `stderr.log`. The time spent in each step might already give useful hints.",
@@ -134,6 +170,9 @@ class Systemtest:
    arguments: SystemtestArguments
    case_combination: CaseCombination
    reference_result: ReferenceResult
    # Maximum number of seconds to wait for a docker-compose process before
    # considering it hung and killing it. Defaults to GLOBAL_TIMEOUT.
    timeout: int = GLOBAL_TIMEOUT
    params_to_use: Dict[str, str] = field(init=False)
    env: Dict[str, str] = field(init=False)

@@ -354,6 +393,12 @@ def __write_env_file(self):
            env_file.write(f"{key}={value}\n")

    def __unpack_reference_results(self):
        if not self.reference_result.path.exists():
            raise FileNotFoundError(
                f"Reference results archive not found at '{self.reference_result.path}'. "
                f"Please generate reference results first by running "
                f"'python generate_reference_results.py' from the tools/tests directory, "
                f"or download them from the CI artifacts stored in Git LFS.")
        with tarfile.open(self.reference_result.path) as reference_results_tared:
            # specify which folder to extract to
            reference_results_tared.extractall(self.system_test_dir / PRECICE_REL_REFERENCE_DIR)
Comment on lines 402 to 404
Copilot AI (Mar 1, 2026):

Using tarfile.extractall() on an archive without validating member paths can allow path traversal (e.g., entries with "../") and overwrite files outside the target directory. Please implement a safe extraction that rejects absolute paths and parent-directory traversals before extracting.
@@ -372,7 +417,13 @@ def _run_field_compare(self):
"""
logging.debug(f"Running fieldcompare for {self}")
time_start = time.perf_counter()
self.__unpack_reference_results()
try:
self.__unpack_reference_results()
except FileNotFoundError as e:
elapsed_time = time.perf_counter() - time_start
error_msg = str(e)
logging.error(f"Cannot run field comparison for {self}: {error_msg}")
return FieldCompareResult(1, [], [error_msg], self, elapsed_time)
docker_compose_content = self.__get_field_compare_compose_file()
stdout_data = []
stderr_data = []
@@ -394,7 +445,7 @@ def _run_field_compare(self):
                cwd=self.system_test_dir)

        try:
            stdout, stderr = process.communicate(timeout=GLOBAL_TIMEOUT)
            stdout, stderr = process.communicate(timeout=self.timeout)
        except KeyboardInterrupt as k:
            process.kill()
            raise KeyboardInterrupt from k
Comment on lines 447 to 451
Copilot AI (Mar 1, 2026):

The exception handling around this docker compose execution appears broken because the outer error path later in this method calls logging.CRITICAL(...) (CRITICAL is an int constant, not a logger function). That would raise a TypeError and hide the real failure. Please change it to logging.critical(...) (or logging.exception(...)) so errors are reported correctly.
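The distinction the comment relies on is easy to verify: `logging.CRITICAL` is just the integer level constant, while `logging.critical` is the module-level function that emits a record.

```python
import logging

# CRITICAL is the numeric level (50), not a callable, so
# logging.CRITICAL("message") raises TypeError.
assert logging.CRITICAL == 50
assert not callable(logging.CRITICAL)

# critical() is the function that actually logs at that level.
assert callable(logging.critical)
```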
@@ -439,7 +490,7 @@ def _build_docker(self):
                cwd=self.system_test_dir)

        try:
            stdout, stderr = process.communicate(timeout=GLOBAL_TIMEOUT)
            stdout, stderr = process.communicate(timeout=self.timeout)
        except KeyboardInterrupt as k:
            process.kill()
            # process.send_signal(9)
@@ -483,7 +534,7 @@ def _run_tutorial(self):
                cwd=self.system_test_dir)

        try:
            stdout, stderr = process.communicate(timeout=GLOBAL_TIMEOUT)
            stdout, stderr = process.communicate(timeout=self.timeout)
        except KeyboardInterrupt as k:
            process.kill()
            # process.send_signal(9)
18 changes: 18 additions & 0 deletions tools/tests/tests.yaml
@@ -71,6 +71,24 @@ test_suites:
          - solid-upstream-dealii
          - solid-downstream-dealii
        reference_result: ./perpendicular-flap/reference-results/fluid-openfoam_solid-upstream-dealii_solid-downstream-dealii.tar.gz
  partitioned_heat_conduction_test:
    tutorials:
      - path: partitioned-heat-conduction
        case_combination:
          - dirichlet-fenics
          - neumann-fenics
        reference_result: ./partitioned-heat-conduction/reference-results/dirichlet-fenics_neumann-fenics.tar.gz
      - path: partitioned-heat-conduction
        case_combination:
          - dirichlet-nutils
          - neumann-nutils
        reference_result: ./partitioned-heat-conduction/reference-results/dirichlet-nutils_neumann-nutils.tar.gz
      - path: partitioned-heat-conduction
        case_combination:
          - dirichlet-openfoam
          - neumann-openfoam
        reference_result: ./partitioned-heat-conduction/reference-results/dirichlet-openfoam_neumann-openfoam.tar.gz
Comment on lines +74 to +90
Copilot AI (Mar 1, 2026):

This new test suite references reference_result archives that are not present in the repository (partitioned-heat-conduction/reference-results currently only contains .gitkeep). As-is, running this suite will always fail until the corresponding .tar.gz files (likely Git LFS pointers) are added, or the suite is gated/removed until reference results land.

Suggested change: remove the entire partitioned_heat_conduction_test suite (all three tutorial entries and their reference_result lines) until the reference archives are committed.

  elastic_tube_1d_test:
    tutorials:
      - path: elastic-tube-1d