
Refactor fast physics #368

Open
iboutle wants to merge 22 commits into MetOffice:main from iboutle:fast_refactor

Conversation

@iboutle
Contributor

@iboutle iboutle commented Mar 13, 2026

PR Summary

Sci/Tech Reviewer: MichaelWhitall
Code Reviewer: Lottie Turner (@mo-lottieturner)

This PR refactors fast physics to enable two key pieces of science infrastructure (both also implemented by this PR):

  • allow the implicit boundary layer to be called before convection (needed by Comorph-B)
  • allow different convection schemes to be called on different outer loops (specifically LLCS on the first outer loop)

To enable this, it has been necessary to properly separate the boundary layer and convection increments, which were previously badly intermixed. Fast physics now tracks "latest" fields which are updated by each scheme in turn (since the physics here is sequential by design), and finally constructs an increment from the difference between the final and initial fields. Because this changes the calculation order, it changes many KGOs for lfric_atm.
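The bookkeeping described above can be sketched as follows. This is an illustrative Python sketch of the sequencing pattern only, not the actual X90 code; the scheme functions and values are hypothetical stand-ins:

```python
# Illustrative sketch (Python, not the actual X90): fast physics keeps a
# "latest" copy of each prognostic, each scheme updates it in turn, and the
# net increment is reconstructed at the end as final minus initial.
# The scheme functions below are hypothetical stand-ins.

def bl_implicit(theta):
    # stand-in for the implicit boundary-layer update
    return theta + 0.5

def convection(theta):
    # stand-in for a convection scheme update
    return theta * 1.01

def fast_physics(theta_initial, bl_before_conv):
    theta_latest = theta_initial
    # physics is sequential by design: each scheme sees the previous result
    schemes = [bl_implicit, convection] if bl_before_conv else [convection, bl_implicit]
    for scheme in schemes:
        theta_latest = scheme(theme := theta_latest) if False else scheme(theta_latest)
    # the total fast-physics increment is the difference of final and initial
    theta_inc = theta_latest - theta_initial
    return theta_latest, theta_inc
```

Because the two updates do not commute in general, calling the boundary layer before convection produces a different answer, which is why the ordering change touches so many KGOs.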

The calls to these schemes in slow physics have also had to be updated for consistency, and diagnostics previously produced in bl_imp_alg but that depend on convection have been separated into their own algorithm, called after both BL and convection have completed.

Ideally, the new "BL before conv" functionality would be switched on in the comorph_dev tests; however, these currently fail when doing so. I will investigate this separately, as the failure is not connected to the code introduced by this PR (particularly the refactoring, which we want to consolidate back to main asap) but is instead a feedback on model evolution. Switching it on here would also conflict with several other PRs in flight (#292, #326), so it is best left as a separate PR.

closes #228

Code Quality Checklist

  • I have performed a self-review of my own code
  • My code follows the project's style guidelines
  • Comments have been included that aid understanding and enhance the readability of the code
  • My changes generate no new warnings
  • All automated checks in the CI pipeline have completed successfully

Testing

  • I have tested this change locally, using the LFRic Apps rose-stem suite
  • If any tests fail (rose-stem or CI) the reason is understood and acceptable (e.g. kgo changes)
  • I have added tests to cover new functionality as appropriate (e.g. system tests, unit tests, etc.)
  • Any new tests have been assigned an appropriate amount of compute resource and have been allocated to an appropriate testing group (i.e. the developer tests are for jobs which use a small amount of compute resource and complete in a matter of minutes)

trac.log

Test Suite Results - lfric_apps - fast_refactor/run1

Suite Information

Item Value
Suite Name fast_refactor/run1
Suite User ian.boutle
Workflow Start 2026-03-17T14:04:49
Groups Run developer, lfric_atm_extra
Dependency Reference Main Like
casim MetOffice/casim@2026.03.2 True
jules MetOffice/jules@2026.03.2 True
lfric_apps iboutle/lfric_apps@fast_refactor False
lfric_core MetOffice/lfric_core@2026.03.2 True
moci MetOffice/moci@2026.03.2 True
SimSys_Scripts MetOffice/SimSys_Scripts@2026.03.2 True
socrates MetOffice/socrates@2026.03.2 True
socrates-spectral MetOffice/socrates-spectral@2026.03.2 True
ukca MetOffice/ukca@2026.03.2 True

Task Information

✅ succeeded tasks - 1310

Security Considerations

  • I have reviewed my changes for potential security issues
  • Sensitive data is properly handled (if applicable)
  • Authentication and authorisation are properly implemented (if applicable)

Performance Impact

  • Performance of the code has been considered and, if applicable, suitable performance measurements have been conducted

AI Assistance and Attribution

  • Some of the content of this change has been produced with the assistance of Generative AI tool name (e.g., Met Office Github Copilot Enterprise, Github Copilot Personal, ChatGPT GPT-4, etc) and I have followed the Simulation Systems AI policy (including attribution labels)

Documentation

  • Where appropriate I have updated documentation related to this change and confirmed that it builds correctly

PSyclone Approval

  • If you have edited any PSyclone-related code (e.g. PSyKAl-lite, Kernel interface, optimisation scripts, LFRic data structure code) then please contact the TCD Team

Sci/Tech Review

  • I understand this area of code and the changes being added
  • The proposed changes correspond to the pull request description
  • Documentation is sufficient (do documentation papers need updating)
  • Sufficient testing has been completed

(Please alert the code reviewer via a tag when you have approved the SR)

Code Review

  • All dependencies have been resolved
  • Related Issues have been properly linked and addressed
  • CLA compliance has been confirmed
  • Code quality standards have been met
  • Tests are adequate and have passed
  • Documentation is complete and accurate
  • Security considerations have been addressed
  • Performance impact is acceptable

@iboutle iboutle added this to the Summer 2026 milestone Mar 13, 2026
@iboutle iboutle added the 'KGO' (This PR contains changes to KGO) and 'macro' (This PR contains a metadata upgrade macro) labels Mar 13, 2026
@github-actions github-actions bot added the 'cla-signed' label (The CLA has been signed as part of this PR - added by GA) Mar 13, 2026
@iboutle iboutle marked this pull request as ready for review March 17, 2026 15:33

Driver code owner review: Simple changes to what's in and what's out.

Contributor

@thomasmelvin thomasmelvin left a comment


Looks good to me

@MichaelWhitall

This looks good to me so far...

I think I mentioned this before, but the update of the fields in fast physics could be further simplified? As it stands theta_latest, u_latest are intent(in) to each physics scheme called from fast_physics_alg_mod.X90; each scheme outputs a separate increment array that then gets added onto theta_latest, u_latest in fast_physics_alg_mod.X90. Couldn't each scheme just update theta_latest, u_latest directly / internally instead of outputting separate increment arrays?

The biggest win would be to do this for the convection increments to qv, qcl, cf_liq, cf_bulk. As it stands, the convection increments get added onto local copies of these prognostics in pc2_conv_coupling_kernel_mod.F90 (for use in calculating the PC2 response), and then we add the convection increments onto the actual prognostics later in pc2_conv_coupling_alg_mod.x90. Instead, the convection schemes could update the prognostics directly, so that the subsequent addition of conv increments onto the prognostics in pc2_conv_coupling_kernel_mod.F90 could be skipped (the latter subroutine could then just add the PC2 increments onto the prognostics instead of updating the convection increment arrays). Then most of the convection increment arrays would be redundant and could be deleted from fast_physics_alg_mod.X90 (we'd only need the conv theta and q increments for PC2).

What do you reckon? Or is this best done in a separate PR?

Cheers!
Mike
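The simplification being proposed above can be contrasted with the current pattern in a small sketch (Python for illustration only; the function names are hypothetical, not the actual X90 interfaces):

```python
# Illustrative contrast (Python; names hypothetical, not the actual X90 API).

# Current pattern: the scheme takes the latest field as read-only input and
# returns a separate increment array, which the caller then adds on.
def scheme_with_increment(theta_latest):
    inc = 0.5                 # some computed increment
    return inc

def caller_current(theta_latest):
    inc = scheme_with_increment(theta_latest)
    return theta_latest + inc  # the addition is done by the caller

# Suggested pattern: the scheme updates the field directly, so the separate
# increment array (and the caller-side addition) disappears, except where a
# downstream scheme such as PC2 genuinely needs the increment as an input.
def scheme_direct(theta_latest):
    return theta_latest + 0.5  # update applied directly instead of returned
```

Both patterns produce the same state; the direct-update form simply removes the intermediate arrays and the second pass over the fields.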

@iboutle
Contributor Author

iboutle commented Mar 26, 2026

> This looks good to me so far...
>
> I think I mentioned this before, but the update of the fields in fast physics could be further simplified? As it stands theta_latest, u_latest are intent(in) to each physics scheme called from fast_physics_alg_mod.X90; each scheme outputs a separate increment array that then gets added onto theta_latest, u_latest in fast_physics_alg_mod.X90. Couldn't each scheme just update theta_latest, u_latest directly / internally instead of outputting separate increment arrays?
>
> The biggest win would be to do this for the convection increments to qv, qcl, cf_liq, cf_bulk. As it stands, the convection increments get added onto local copies of these prognostics in pc2_conv_coupling_kernel_mod.F90 (for use in calculating the PC2 response), and then we add the convection increments onto the actual prognostics later in pc2_conv_coupling_alg_mod.x90. Instead, the convection schemes could update the prognostics directly, so that the subsequent addition of conv increments onto the prognostics in pc2_conv_coupling_kernel_mod.F90 could be skipped (the latter subroutine could then just add the PC2 increments onto the prognostics instead of updating the convection increment arrays). Then most of the convection increment arrays would be redundant and could be deleted from fast_physics_alg_mod.X90 (we'd only need the conv theta and q increments for PC2).
>
> What do you reckon? Or is this best done in a separate PR?
>
> Cheers! Mike

Yep, I agree with the plan but would like to tackle that as a separate PR - this one seems big enough, and I can still just about convince myself that the KGOs are unchanged for several jobs. Untangling the issues around convection & PC2 increments will lead to more changes because of the way things are added and subtracted, and I think we'd want to change the convection increment from T to theta at the same time. Once that is done I think we could get each algorithm to update the latest fields, but currently the convection update can't happen in the convection algorithm because it needs to happen after the PC2 routine.
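For background, the T-to-theta conversion mentioned here follows from the definition theta = T / Pi, where Pi is the Exner pressure; at fixed Pi, a temperature increment dT therefore corresponds to a potential-temperature increment dT / Pi. A minimal illustrative sketch (Python; the function name and values are hypothetical):

```python
def t_inc_to_theta_inc(dT, exner):
    # theta = T / exner, so at fixed exner pressure an increment in
    # temperature maps to d(theta) = dT / exner
    return dT / exner
```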

@MichaelWhitall

> > This looks good to me so far...
> >
> > I think I mentioned this before, but the update of the fields in fast physics could be further simplified? As it stands theta_latest, u_latest are intent(in) to each physics scheme called from fast_physics_alg_mod.X90; each scheme outputs a separate increment array that then gets added onto theta_latest, u_latest in fast_physics_alg_mod.X90. Couldn't each scheme just update theta_latest, u_latest directly / internally instead of outputting separate increment arrays?
> >
> > The biggest win would be to do this for the convection increments to qv, qcl, cf_liq, cf_bulk. As it stands, the convection increments get added onto local copies of these prognostics in pc2_conv_coupling_kernel_mod.F90 (for use in calculating the PC2 response), and then we add the convection increments onto the actual prognostics later in pc2_conv_coupling_alg_mod.x90. Instead, the convection schemes could update the prognostics directly, so that the subsequent addition of conv increments onto the prognostics in pc2_conv_coupling_kernel_mod.F90 could be skipped (the latter subroutine could then just add the PC2 increments onto the prognostics instead of updating the convection increment arrays). Then most of the convection increment arrays would be redundant and could be deleted from fast_physics_alg_mod.X90 (we'd only need the conv theta and q increments for PC2).
> >
> > What do you reckon? Or is this best done in a separate PR?
> >
> > Cheers! Mike
>
> Yep, I agree with the plan but would like to tackle that as a separate PR - this one seems big enough, and I can still just about convince myself that the KGOs are unchanged for several jobs. Untangling the issues around convection & PC2 increments will lead to more changes because of the way things are added and subtracted, and I think we'd want to change the convection increment from T to theta at the same time. Once that is done I think we could get each algorithm to update the latest fields, but currently the convection update can't happen in the convection algorithm because it needs to happen after the PC2 routine.

Yep no problem at all to leave this for a later PR; I'm happy to tidy the convection / PC2 increment handling up at some point...

Cheers!
Mike

@MichaelWhitall

Hi just checking I understand what's going on with the refactoring of boundary-layer diagnostics code?

So, previously there was a call to bl_extra_diags_kernel_type to compute numerous diagnostics from inside subroutine output_diags_for_bl_imp in bl_imp_diags_mod.x90, which is in turn called from bl_imp_alg_mod.x90. bl_extra_diags_kernel_type calculates some diagnostics which have a dependency on convection outputs, so it needed to be moved to be called later in fast_physics_alg_mod.X90 (in the call to subroutine output_diags_for_bl_extra held in bl_extra_diags_mod.x90).

However, bl_extra_diags_kernel_type also takes as input a bunch of other non-convection BL diagnostics; those then needed to be passed out from bl_imp_alg_mod.x90, through fast_physics_alg_mod.X90, and into output_diags_for_bl_extra. Hence t1p5m, q1p5m etc now appear in various argument lists (and are local variables in fast_physics_alg_mod.X90).

Is it the case that these diagnostics could still have been output inside bl_imp_alg_mod.x90, but splitting up the code in bl_extra_diags_kernel_type to facilitate that would have been far more trouble than it's worth?

Cheers!
Mike

@iboutle
Contributor Author

iboutle commented Mar 27, 2026

> Hi just checking I understand what's going on with the refactoring of boundary-layer diagnostics code?
>
> So, previously there was a call to bl_extra_diags_kernel_type to compute numerous diagnostics from inside subroutine output_diags_for_bl_imp in bl_imp_diags_mod.x90, which is in turn called from bl_imp_alg_mod.x90. bl_extra_diags_kernel_type calculates some diagnostics which have a dependency on convection outputs, so it needed to be moved to be called later in fast_physics_alg_mod.X90 (in the call to subroutine output_diags_for_bl_extra held in bl_extra_diags_mod.x90).
>
> However, bl_extra_diags_kernel_type also takes as input a bunch of other non-convection BL diagnostics; those then needed to be passed out from bl_imp_alg_mod.x90, through fast_physics_alg_mod.X90, and into output_diags_for_bl_extra. Hence t1p5m, q1p5m etc now appear in various argument lists (and are local variables in fast_physics_alg_mod.X90).
>
> Is it the case that these diagnostics could still have been output inside bl_imp_alg_mod.x90, but splitting up the code in bl_extra_diags_kernel_type to facilitate that would have been far more trouble than it's worth?
>
> Cheers! Mike

No - the diagnostics output by bl_extra_diags are things like visibility, which require both the convection diagnostics and BL diagnostics as input. Therefore the plumbing of things like t1p5m is required because these are input to the fog and vis calculations, which also require the convective rain and suchlike. It was kind of surprising, but also somewhat pleasing, that just moving the kernel as-is was actually what was required!

@MichaelWhitall

> No - the diagnostics output by bl_extra_diags are things like visibility, which require both the convection diagnostics and BL diagnostics as input. Therefore the plumbing of things like t1p5m is required because these are input to the fog and vis calculations, which also require the convective rain and suchlike. It was kind of surprising, but also somewhat pleasing, that just moving the kernel as-is was actually what was required!

Ah OK yes that makes sense; so t1p5m etc are inputs to the visibility calculation, but that also depends on convection outputs, so t1p5m etc had to be passed around to go into the new later place where the visibility calculation is done. That all looks good to me...


@MichaelWhitall MichaelWhitall left a comment


Ah I just found the "submit review" button! I think it's been hiding all my in-line comments from you until I press this button... Can you see the comments now?

@github-actions github-actions bot removed the 'cla-signed' label (The CLA has been signed as part of this PR - added by GA) Mar 30, 2026
@iboutle
Contributor Author

iboutle commented Mar 30, 2026

> Ah I just found the "submit review" button! I think it's been hiding all my in-line comments from you until I press this button... Can you see the comments now?

Hopefully I've addressed all of these...

@MichaelWhitall

> Hopefully I've addressed all of these...

Yep this all looks good to me :) The bot seems to've already automatically moved this out of sci/tech review for me (I assumed I'd need to press a button somewhere to sign this off?). Maybe it counts as signed-off once all the comments are marked as "resolved"?

Cheers!
Mike


@MichaelWhitall MichaelWhitall left a comment


Oh no wait there actually is an "approve" button in here, which I hadn't pressed yet; I think the bot's activity was a bit premature? Anyhow I'm pressing the button now ;)

@iboutle
Contributor Author

iboutle commented Mar 30, 2026

> > Hopefully I've addressed all of these...
>
> Yep this all looks good to me :) The bot seems to've already automatically moved this out of sci/tech review for me (I assumed I'd need to press a button somewhere to sign this off?). Maybe it counts as signed-off once all the comments are marked as "resolved"?
>
> Cheers! Mike

The bot moved it to "changes requested" - I should have moved it back to sci/tech after addressing them. I think you need to tick the boxes in the science review section, and then tag the code reviewer in a comment saying you're happy it's passed - the bot is supposed to move it to code review, although I've never actually got that to work!

@MichaelWhitall

Sci/Tech review approved on this one, passing over to Lottie Turner (@mo-lottieturner) for code review...

Cheers!
Mike


Labels

  • KGO - This PR contains changes to KGO
  • macro - This PR contains a metadata upgrade macro


Development

Successfully merging this pull request may close these issues.

Add option to call implicit boundary-layer before convection

6 participants