Merged
Commits
30 commits
a99d3b7
Add February 2026 architectural reviews for Units and Data Access sys…
lmoresi Feb 1, 2026
b2748a2
Fix review document structure for CI validation
lmoresi Feb 1, 2026
a744bdc
Fix file paths and code examples in architecture reviews
lmoresi Feb 24, 2026
ef5763c
docs(beginner): add create_xdmf guide for PETSc HDF5->XDMF workflow
gthyagi Mar 3, 2026
992d538
Add boundary integral support (BdIntegral) — closes #47
lmoresi Mar 3, 2026
b67b078
Clarify AI attribution convention for PRs in CLAUDE.md
lmoresi Mar 3, 2026
f2018a7
Add MPI Allreduce to boundary integral wrapper
lmoresi Mar 3, 2026
bfc0b54
Fix ghost facet double-counting in boundary integrals under MPI
lmoresi Mar 3, 2026
4f76375
Add PETSc patch for ghost facet double-counting in boundary assembly
lmoresi Mar 4, 2026
f4fb037
Fix DMPlexComputeBdIntegral signature for PETSc < 3.22
lmoresi Mar 4, 2026
2881718
Skip BoxInternalBoundary tests under MPI (pre-existing mesh construct…
lmoresi Mar 4, 2026
3ba5eed
Merge pull request #69 from gthyagi/codex/write-timestep-vertex-cell-…
lmoresi Mar 5, 2026
c75ebad
Rewrite write_timestep compat groups: use uw.function.evaluate instea…
lmoresi Mar 5, 2026
504af6e
Add field projection utility for degree transfer without MeshVariable…
lmoresi Mar 5, 2026
887d49e
Refactor compat groups to use field_projection + standalone PETSc Vec…
lmoresi Mar 5, 2026
54910bd
Add tensor repacking to ParaView 9-component format for visualisation…
lmoresi Mar 5, 2026
96431a6
Fix CI failure, PETSc resource leaks, and add tensor test
lmoresi Mar 5, 2026
4af32f0
Handle PETSc 3.21+ auto-created vertex_fields in compat writer
lmoresi Mar 5, 2026
220fd1c
Guard against NULL boundary labels and add internal normal vector tests
lmoresi Mar 6, 2026
c4f450c
Switch default MPI to OpenMPI on macOS (fixes #68)
lmoresi Mar 10, 2026
ec5eb93
perf(mpi): cap thread pools for MPI and document policy
gthyagi Mar 10, 2026
9fa266d
Merge PR #72: Refine XDMF compat groups with field projection and ten…
lmoresi Mar 10, 2026
09b7907
Merge pull request #45 from underworldcode/review/architecture-2026-02
lmoresi Mar 10, 2026
91a5a20
Record review outcomes for February 2026 architectural reviews
lmoresi Mar 10, 2026
0561eae
petsc-custom: add internal-boundary ownership patch, mpi test, and docs
gthyagi Mar 12, 2026
861390b
docs(petsc): expand MR notes with reproducer, metrics, risk, and back…
gthyagi Mar 12, 2026
c98affe
Merge pull request #78 from gthyagi/feature/boundary-integrals
lmoresi Mar 12, 2026
5237564
Merge feature/boundary-integrals into cleanup branch
lmoresi Mar 12, 2026
95c6a95
Clean up boundary integral implementation after PETSc ownership patch
lmoresi Mar 12, 2026
895c46c
Remove accidentally committed worktree symlinks, fix .gitignore
lmoresi Mar 12, 2026
5 changes: 4 additions & 1 deletion .gitignore
@@ -250,6 +250,9 @@ debug_*.py

# Visualization output files (generated during notebook execution)
docs/beginner/tutorials/html5/*.html
petsc-custom/petsc/
# Pixi environment (directory in main repo, symlink in worktrees)
.pixi
.pixi-env
# PETSc build (directory in main repo, symlink in worktrees)
petsc-custom/petsc
Untitled*.ipynb
14 changes: 14 additions & 0 deletions CHANGES.md
@@ -1,5 +1,19 @@
# CHANGES: Underworld3

## 2026-03-13

- New `uw.maths.BdIntegral` for boundary and surface integrals:
- Wraps PETSc `DMPlexComputeBdIntegral` with MPI Allreduce and units support
- Works on external boundaries, internal boundaries (`AnnulusInternalBoundary`, etc.)
- Integrand can reference outward unit normal via `mesh.Gamma` / `mesh.Gamma_N`
- Handles PETSc API change in v3.22.0 (function pointer signature)
- PETSc patch for internal boundary assembly in parallel (`plexfem-internal-boundary-ownership-fix.patch`):
- Ghost facet filtering in boundary integral, residual, and Jacobian paths
- Part-consistent assembly (`support[key.part]`) with support-size guards
- Fixes rank-dependent L2 norms for internal boundary natural BCs (issue #77)
- MPI regression test: `tests/parallel/test_0765_internal_boundary_integral_mpi.py`
- Boundary integral tests: `tests/test_0502_boundary_integrals.py`

## 2025-12-21

- PETSc 3.24 compatibility verified (conda-forge petsc 3.24.2 works correctly)
27 changes: 20 additions & 7 deletions CLAUDE.md
@@ -198,16 +198,20 @@ worktree changes, the worktree edits will not be active. Always:
2. `./uw build`
3. Run your code / tests from there

### AI-Assisted Commit Attribution
When committing code developed with AI assistance, end the commit message with:
### AI-Assisted Attribution (Commits and PRs)
When committing code or creating pull requests with AI assistance, end the
message/body with:

```
Underworld development team with AI support from Claude Code
Underworld development team with AI support from [Claude Code](https://claude.com/claude-code)
```

(In commit messages, use the plain-text form without the markdown link.)

**Do NOT use**:
- `Co-Authored-By:` with a noreply email (useless for soliciting responses)
- Generic AI attribution without team context
- Emoji in PR descriptions

---

@@ -220,11 +224,20 @@ Underworld development team with AI support from Claude Code
- Requires complete rebuild (~1 hour) if relocated

### Rebuild After Source Changes
**After modifying source files, always run `pixi run underworld-build`!**
**After modifying source files, always run `./uw build`!**
- Underworld3 is installed as a package in the pixi environment
- Changes go to `.pixi/envs/default/lib/python3.12/site-packages/underworld3/`
- Verify with `uw.model.__file__`

**⚠️ STALE BUILD CACHE**: If `./uw build` succeeds but Python still uses old code
(e.g. a new parameter is "unknown"), pip's wheel cache is stale. Fix with:
```bash
rm -rf build/lib.* build/bdist.*
pixi run -e default pip install --no-build-isolation --force-reinstall --no-deps .
```
This is the most common build issue — `./uw build` reuses cached wheels when the
version number hasn't changed. Always verify changes are installed before debugging.

### Test Quality Principles
**New tests must be validated before making code changes to fix them!**
- Validate test correctness before changing main code
@@ -534,10 +547,10 @@ When working on specific subsystems, these documents provide detailed guidance.

## Quick Reference

### Pixi Commands
### Build & Test Commands
```bash
pixi run underworld-build # Rebuild after source changes
pixi run underworld-test # Run test suite
./uw build # Rebuild after source changes (preferred)
./uw test # Run test suite
pixi run -e default python # Run Python in environment
```

52 changes: 51 additions & 1 deletion docs/advanced/parallel-computing.md
@@ -18,6 +18,56 @@ Underworld3 uses PETSc for parallel operations, which means **you rarely need to

The main use of `uw.mpi.rank` is for conditional output/visualization.

## MPI + Thread Pools (Oversubscription)

When running with MPI, each rank can also spawn its own pool of BLAS/OpenMP
worker threads. Left uncontrolled, the total number of runnable threads
multiplies across ranks and performance can degrade severely.

For example, `mpirun -np 8` with an OpenBLAS default of `10` threads per rank
can create up to `80` compute threads, which typically runs slower than one
thread per rank.
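The blow-up is just ranks multiplied by threads per rank; a small helper makes the arithmetic explicit (these function names are illustrative, not part of Underworld3):

```python
def total_threads(ranks: int, threads_per_rank: int) -> int:
    """Total runnable compute threads across an MPI job."""
    return ranks * threads_per_rank

def oversubscription_factor(ranks: int, threads_per_rank: int, cores: int) -> float:
    """How many runnable threads compete for each physical core."""
    return total_threads(ranks, threads_per_rank) / cores

# 8 ranks x 10 OpenBLAS threads on an 8-core node:
print(total_threads(8, 10))               # 80
print(oversubscription_factor(8, 10, 8))  # 10.0
```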

### Default Underworld3 Policy

Underworld3 now applies MPI-safe defaults (a thread-pool size of `1`) unless
the user has explicitly set any of the following variables:

- `OMP_NUM_THREADS`
- `OPENBLAS_NUM_THREADS`
- `MKL_NUM_THREADS`
- `VECLIB_MAXIMUM_THREADS`
- `NUMEXPR_NUM_THREADS`

This happens in two places:

1. `./uw` launcher: sets defaults before Python starts.
2. `underworld3` import path: applies the same defaults for MPI runs if unset.
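In effect, both places perform a `setdefault` over the variables listed above. A minimal sketch of that logic (the function name is illustrative, not the actual Underworld3 internals):

```python
import os

# Thread-pool variables covered by the policy above
MPI_THREAD_VARS = (
    "OMP_NUM_THREADS",
    "OPENBLAS_NUM_THREADS",
    "MKL_NUM_THREADS",
    "VECLIB_MAXIMUM_THREADS",
    "NUMEXPR_NUM_THREADS",
)

def apply_mpi_thread_defaults(environ=os.environ):
    """Cap each thread pool at 1 unless the user already set the variable."""
    for var in MPI_THREAD_VARS:
        # setdefault leaves any explicit user setting untouched
        environ.setdefault(var, "1")
```

Because `setdefault` never overwrites an existing entry, explicit user settings always win over the MPI-safe default.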

### Runtime Warning

If running with MPI and any of the thread variables above are explicitly set
to values greater than `1`, Underworld3 prints a rank-0 warning about possible
oversubscription.
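The warning condition amounts to a scan over those variables; a sketch of the check (illustrative only — the real implementation also restricts the message to rank 0):

```python
# Thread-pool variables checked for the oversubscription warning
MPI_THREAD_VARS = (
    "OMP_NUM_THREADS",
    "OPENBLAS_NUM_THREADS",
    "MKL_NUM_THREADS",
    "VECLIB_MAXIMUM_THREADS",
    "NUMEXPR_NUM_THREADS",
)

def oversubscribed_vars(environ):
    """Return (name, value) pairs for thread variables explicitly set above 1."""
    offenders = []
    for var in MPI_THREAD_VARS:
        value = environ.get(var)
        if value is not None and value.isdigit() and int(value) > 1:
            offenders.append((var, value))
    return offenders
```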

### User Controls

- Disable automatic thread caps:

```bash
export UW_DISABLE_THREAD_CAPS=1
```

- Suppress warning (keep your explicit thread settings):

```bash
export UW_SUPPRESS_THREAD_WARNING=1
```

### Recommended Practice

For most MPI benchmark and production jobs, keep `1` thread per rank unless
you are intentionally tuning hybrid MPI+threads.
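A typical MPI-only job script can set the caps explicitly before launching (the `mpirun` line and script name are illustrative, so they are shown commented out):

```shell
# Pin one thread per rank for an MPI-only job
export OMP_NUM_THREADS=1
export OPENBLAS_NUM_THREADS=1
export MKL_NUM_THREADS=1

# Then launch, e.g.:
#   mpirun -np 8 python run_model.py
echo "threads per rank: $OMP_NUM_THREADS"
```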

## Parallel-Safe Output

### The Problem with Rank Conditionals
@@ -468,4 +518,4 @@ These operations require **ALL ranks** to participate:
4. **Collective operations must run on ALL ranks** - never inside rank conditionals
5. **Test with `mpirun -np N`** to catch issues early

The parallel safety system makes parallel programming in Underworld3 safer and more intuitive - collective operations are evaluated on all ranks automatically, preventing common deadlock scenarios!