Implementation of a Parametrised Reduced Functional #241

Open

divijghose wants to merge 23 commits into dolfin-adjoint:master from divijghose:parametrised_reduced_functional

Conversation

@divijghose
Contributor

@divijghose divijghose commented Feb 4, 2026

Current method to use parameters

The `derivative_components` optional argument of `ReducedFunctional` is used after appending the parameters to the list of controls, to specify which gradient components are kept (components omitted from `derivative_components` are zeroed out). This lets the user update parameters by calling the reduced functional while the gradient with respect to the parameters is zeroed.
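The masking mechanics can be mimicked in plain Python (a hedged sketch; `masked_derivative` and the example functional are illustrative, not the pyadjoint implementation):

```python
def masked_derivative(full_gradient, derivative_components):
    """Zero every gradient entry whose index is not listed in
    derivative_components (i.e. the appended parameter slots)."""
    keep = set(derivative_components)
    return [g if i in keep else 0.0 for i, g in enumerate(full_gradient)]

# Controls [c0, c1] plus one appended parameter p for
# J(c0, c1, p) = c0**2 + 3*c1 + p, with gradient evaluated at c0 = 1.5:
full_grad = [2.0 * 1.5, 3.0, 1.0]
grad = masked_derivative(full_grad, derivative_components=[0, 1])
print(grad)  # [3.0, 3.0, 0.0] -- the parameter slot is zeroed
```

An optimiser fed this gradient will never move the parameter slot, which is the behaviour the current workaround relies on.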

Parametrised Reduced Functional

`ParametrisedReducedFunctional` is a subclass of `ReducedFunctional` that wraps the call and derivative methods and stores the parameters as attributes. The `parameter_update` method is called to update the parameters. The parameters are not included in the derivative calculation, so the optional argument `derivative_components` is not required.

…educedFunctional` where parameters can be updated but are not included in the derivative calculations:

1. Adds a `parameter_update` method.
2. Parameters are appended to the end of the list of optimization controls, so `derivative_components` is not a required argument.
3. The `derivative` method returns only the derivatives corresponding to the optimization controls.
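Those three points can be sketched in plain Python (hedged: `ParametrisedRFSketch` is a toy stand-in, not the pyadjoint class, and takes plain values rather than `Control` objects):

```python
class ParametrisedRFSketch:
    """Toy model: parameters ride at the end of the control list but are
    excluded from the reported derivative."""

    def __init__(self, functional, controls, parameters):
        self.n_opt = len(controls)                    # number of true controls
        self.all_values = list(controls) + list(parameters)
        self.functional = functional                  # callable on all values

    def parameter_update(self, new_parameters):
        self.all_values[self.n_opt:] = list(new_parameters)

    def __call__(self, control_values):
        self.all_values[:self.n_opt] = list(control_values)
        return self.functional(self.all_values)

    def derivative(self, full_gradient):
        # Only the entries for the optimization controls are returned.
        return full_gradient[:self.n_opt]

# J(c, p) = p * c**2, so dJ/dc = 2*p*c and dJ/dp = c**2.
rf = ParametrisedRFSketch(lambda v: v[1] * v[0] ** 2, [2.0], [3.0])
assert rf([2.0]) == 12.0
rf.parameter_update([5.0])
assert rf([2.0]) == 20.0
assert rf.derivative([2 * 5.0 * 2.0, 2.0 ** 2]) == [20.0]
```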
@JHopeCollins
Contributor

In the current implementation ParameterisedReducedFunctional inherits the controls property from ReducedFunctional. I think this means that this will pass when really it shouldn't:

prf = ParameterisedReducedFunctional(functional, user_controls, parameters)
assert len(prf.controls) == len(user_controls) + len(parameters)

This will also be a problem later because the optimisers will expect:

len(prf.derivative()) == len(prf.controls)

when you will actually have (correctly):

len(prf.derivative()) == len(prf.user_controls)

If this is the case then you may need to override the controls property for ParameterisedReducedFunctional so that it returns only what the user thinks are the controls.
However, if I remember correctly how Python inheritance works, that will then mean that when the parent ReducedFunctional accesses self.controls (for example to calculate the derivative here), it won't see the full list of user_controls + parameters, but only user_controls.

To get around this, you may have to instead inherit from the AbstractReducedFunctional base class and just internally create your own ReducedFunctional(functional=functional, controls=user_controls+parameters).
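The composition route can be sketched with plain classes (hedged: `InnerRF` and `ParametrisedRF` are stand-ins for `ReducedFunctional` and the proposed wrapper, with strings in place of real `Control` objects):

```python
class InnerRF:
    """Stand-in for ReducedFunctional: owns the full control list."""
    def __init__(self, controls):
        self.controls = controls

class ParametrisedRF:
    """Stand-in for the wrapper: exposes only the user controls, while
    the internally created RF sees user_controls + parameters."""
    def __init__(self, user_controls, parameters):
        self._inner = InnerRF(list(user_controls) + list(parameters))
        self._n_opt = len(user_controls)

    @property
    def controls(self):
        return self._inner.controls[:self._n_opt]

prf = ParametrisedRF(["c0", "c1"], ["p0"])
assert len(prf.controls) == 2           # what the user (and optimiser) expects
assert len(prf._inner.controls) == 3    # what the inner RF needs internally
```

This keeps `len(prf.derivative()) == len(prf.controls)` for the optimisers while the inner object still differentiates through the full list.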

@divijghose
Contributor Author

In this intermediate implementation, the ParametrisedReducedFunctional inherits from AbstractReducedFunctional. This means that self.controls returns user_controls, which is the required behaviour as discussed above.

However, a lot of the code is duplicated, and, as @colinjcotter suggested, a more efficient way to do this would be to call ReducedFunctional inside ParametrisedReducedFunctional, which will get the required behaviour from self.controls while reusing most of the methods from ReducedFunctional.

…dFunctional` internally:

- Instead of inheriting the `AbstractReducedFunctional` or `ReducedFunctional` classes, `ParametrisedReducedFunctional` simply calls a `ReducedFunctional` object internally and passes the controls and parameters together as `all_controls`
`ParametrisedReducedFunctional` must be a subclass of `AbstractReducedFunctional`

Co-authored-by: Josh Hope-Collins <jhc.jss@gmail.com>
…h component of the parameter list must first be wrapped in `Control`.
1. Basic test to check `call`, `derivative` and `parameter_update` methods
2. Combination tests with single/multiple controls and single/multiple parameters
3. Tests to check behaviour of `controls` and `parameters` property
4. Evaluation on a more complex example
5. Tests to check behaviour in case of multiple parameter updates before a call.
…cedFunctional` with `derivative_components`.
Comment on lines +407 to +423
derivative_cb_pre (function): Callback function before evaluating
derivatives. Input is a list of Controls.
Should return a list of Controls (usually the same
list as the input) to be passed to compute_derivative.
derivative_cb_post (function): Callback function after evaluating
derivatives. Inputs are: functional.block_variable.checkpoint,
list of functional derivatives, list of functional values.
Should return a list of derivatives (usually the same
list as the input) to be returned from self.derivative.
hessian_cb_pre (function): Callback function before evaluating the Hessian.
Input is a list of Controls.
hessian_cb_post (function): Callback function after evaluating the Hessian.
Inputs are the functional, a list of Hessian, and controls.
tlm_cb_pre (function): Callback function before evaluating the tangent linear model.
Input is a list of Controls.
tlm_cb_post (function): Callback function after evaluating the tangent linear model.
Inputs are the functional, the tlm result, and controls.
Contributor

We should have a think about whether these should be for all_controls (as it is currently), or just the ParameterisedReducedFunctional.controls, and possibly add separate callbacks for the parameters.

eval_cb_post=lambda *args: None,
derivative_cb_pre=lambda controls: controls,
derivative_cb_post=lambda checkpoint, derivative_components,
controls: derivative_components,
Contributor

Repeated arg in the middle of the kwargs?

Suggested change
controls: derivative_components,

Contributor Author

@JHopeCollins, the argument list is similar to the one used in `ReducedFunctional`, with `derivative_components` omitted and `parameters` added.

Comment on lines +438 to +439
if parameters is None:
raise ValueError("Parameters must be provided. If no parameters are needed, use ReducedFunctional instead.")
Contributor

Does passing an empty list work?

Contributor Author

Passing an empty list should not work. I have updated ParametrisedReducedFunctional to check for it:

        if len(Enlist(parameters)) == 0:
            raise ValueError("Parameters list cannot be empty. If no parameters are needed, use ReducedFunctional instead.")

along with a corresponding test

def test_parametrised_rf_empty_parameter_list():
    """Test that creating a ParametrisedReducedFunctional with an empty parameter list raises an error."""
    c = AdjFloat(2.0)
    J = c * 3.0
    with pytest.raises(ValueError):
        ParametrisedReducedFunctional(J, Control(c), parameters=[])

derivatives_all = self._reduced_functional.derivative(adj_input=adj_input,
                                                      apply_riesz=apply_riesz)

return Enlist(derivatives_all)[:self.n_opt]
Contributor

If the user didn't give us a list of controls then we shouldn't return a list of derivatives

Suggested change
return Enlist(derivatives_all)[:self.n_opt]
return self.controls.delist(Enlist(derivatives_all)[:self.n_opt])
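The `Enlist`/`delist` convention can be mimicked in a few lines (a hedged sketch of the semantics as I understand them from pyadjoint, not the actual implementation):

```python
class Enlist(list):
    """Mimic of pyadjoint's Enlist: remembers whether the original input
    was already a list/tuple, so results can be 'delisted' to match."""
    def __init__(self, obj):
        self.listed = isinstance(obj, (list, tuple))
        super().__init__(obj if self.listed else [obj])

    def delist(self, result=None):
        result = self if result is None else result
        return result if self.listed else result[0]

derivs = ["dJ/dc", "dJ/dp"]
controls = Enlist("c")                          # user passed a single control
assert controls.delist(derivs[:1]) == "dJ/dc"   # single in, single out
controls_list = Enlist(["c0", "c1"])
assert controls_list.delist(derivs) == derivs   # list in, list out
```

So `self.controls.delist(...)` returns a bare derivative when the user passed a single control, matching the shape of the input.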

evaluate_tlm=evaluate_tlm,
apply_riesz=apply_riesz)

return hessian_all[:self.n_opt] # Return only the hessian components corresponding to optimization controls.
Contributor

Suggested change
return hessian_all[:self.n_opt] # Return only the hessian components corresponding to optimization controls.
# Return only the hessian components corresponding to optimization controls.
return self.controls.delist(Enlist(hessian_all)[:self.n_opt])

Comment on lines +498 to +502
def hessian(self, m_dot, hessian_input=None, evaluate_tlm=True, apply_riesz=False):
    hessian_all = self._reduced_functional.hessian(m_dot,
                                                   hessian_input=hessian_input,
                                                   evaluate_tlm=evaluate_tlm,
                                                   apply_riesz=apply_riesz)
Contributor

@JHopeCollins JHopeCollins Feb 20, 2026

self._reduced_functional.hessian will expect len(m_dot) == len(self._all_controls), so I think you will have to pad it with zeros (but you should write out the maths to check this will do the right thing). Something like:

m_dot_all = Enlist(m_dot) + [p._ad_init_zero() for p in self.parameters]
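Why zero-padding should do the right thing can be checked with a small block-matrix computation (a hedged numpy sketch; `H` is an arbitrary symmetric Hessian over `all_controls = controls + parameters`):

```python
import numpy as np

# Hessian over all_controls = [2 controls, 1 parameter], in blocks
# H = [[H_cc, H_cp], [H_pc, H_pp]]:
H = np.array([[2.0, 0.5, 1.0],
              [0.5, 3.0, 0.2],
              [1.0, 0.2, 4.0]])
n_opt = 2
m_dot = np.array([1.0, -1.0])          # perturbation of the controls only

# Pad the parameter directions with zeros, apply the full Hessian, and
# keep only the control rows:
m_dot_all = np.concatenate([m_dot, np.zeros(H.shape[0] - n_opt)])
result = (H @ m_dot_all)[:n_opt]

# Identical to acting with the control-control block alone:
assert np.allclose(result, H[:n_opt, :n_opt] @ m_dot)
```

The zeros kill the `H_cp` contribution, so the masked Hessian action equals the Hessian of the functional with the parameters held fixed.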

Comment on lines +507 to +510
@no_annotations
def tlm(self, m_dot):
    tlm_all = self._reduced_functional.tlm(m_dot)
    return tlm_all[:self.n_opt]  # Return only the tlm components corresponding to optimization controls.
Contributor

  1. You'll need the same zero-padding of m_dot here as in the hessian.
  2. The result of tlm lives in the same space as the functional so you don't need to apply the mask.
Suggested change
@no_annotations
def tlm(self, m_dot):
    tlm_all = self._reduced_functional.tlm(m_dot)
    return tlm_all[:self.n_opt]  # Return only the tlm components corresponding to optimization controls.
@no_annotations
def tlm(self, m_dot):
    tlm_all = self._reduced_functional.tlm(m_dot)
    return tlm_all

Contributor

@JHopeCollins JHopeCollins left a comment

This is shaping up nicely!

I've left a few comments, but my main two requests are for a bit more test coverage.

  1. Please can you test the convergence rates from taylor_to_dict? This will do both the first and second order taylor tests so that we can check that the tlm and hessian are correct.
  2. Please can you add an optimisation test? You could just use a quadratic polynomial as the objective functional with the coefficients being parameters. Then the optimisation should be globally convergent and you can test the result against the analytic minimum.
    If you want to be very thorough then you could also test a multivariate quadratic to make sure that it works with multiple controls.
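Such an optimisation test could look roughly like this (a hedged sketch: a hand-rolled gradient descent stands in for the real optimiser, and the parametrised quadratic J(x; a, b) = a*(x - b)**2 has its analytic minimum at x = b):

```python
# Quadratic objective with parameters a and b; minimum at x = b.
def make_objective(a, b):
    return (lambda x: a * (x - b) ** 2,       # functional
            lambda x: 2.0 * a * (x - b))      # derivative

def gradient_descent(df, x0, lr=0.1, steps=200):
    """Minimal fixed-step gradient descent on a scalar control."""
    x = x0
    for _ in range(steps):
        x -= lr * df(x)
    return x

J, dJ = make_objective(a=2.0, b=3.0)
x_min = gradient_descent(dJ, x0=0.0)
assert abs(x_min - 3.0) < 1e-6                # matches the analytic minimum
```

Since the quadratic is globally convex, any sensible optimiser must land on x = b, so the test directly exercises both the parameter plumbing and the masked derivative.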

Comment on lines +384 to +389
"""Class representing the reduced functional with parameters.

A reduced functional maps a control value to the provided functional.
It may also be used to compute the derivative of the functional with
respect to the control. In addition, parameters may be specified which
are updated, but not included in the derivative calculations.
Contributor

It'd be nice to show a few lines of maths to explain here like in the docstring of AbstractReducedFunctional
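A hedged sketch of what those lines of maths could say (symbol names m, p, u are illustrative, mirroring the AbstractReducedFunctional docstring style):

```latex
\hat{J}(m;\, p) = J\bigl(u(m, p),\, m,\, p\bigr),
\qquad
\texttt{derivative()} \;\longrightarrow\;
\left.\frac{\mathrm{d}\hat{J}}{\mathrm{d}m}\right|_{p\ \text{fixed}},
```

where u solves the forward problem for the given controls m and parameters p, and p is changed only through `parameter_update()`, never by the optimiser.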

divijghose and others added 14 commits February 20, 2026 11:27
Remove newlines at the end of file

Co-authored-by: Josh Hope-Collins <jhc.jss@gmail.com>
Co-authored-by: Josh Hope-Collins <jhc.jss@gmail.com>
Co-authored-by: Josh Hope-Collins <jhc.jss@gmail.com>
Co-authored-by: Josh Hope-Collins <jhc.jss@gmail.com>
…parameter list is passed, and a corresponding test.
…he user is not expected to pass `parameters` wrapped in `Control`
1. The helper functions now return only the expression instead of a parametrised reduced functional. The value of this expression is checked against the parametrised reduced functional's evaluation in the tests.
2. A convergence check has been added in place of checking the values of the derivatives. It uses `taylor_to_dict` to verify the `R0`, `R1` and `R2` rates of convergence.
3. Removed the comparison between float values of the evaluation; `np.isclose` is used instead.
control.update(m_i)
return m

class ParametrisedReducedFunctionalNumPy(AbstractReducedFunctional):
Contributor

Why does prf_np = ReducedFunctionalNumPy(parameterised_rf) not work? We should fix that instead of duplicating all this code. In theory the numpy rf should work for any AbstractReducedFunctional.
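The point generalises: a NumPy-style wrapper that talks only to the abstract interface works for any implementation of it. A hedged plain-Python sketch of that adapter idea (all names illustrative, not the pyadjoint API):

```python
class NumPyAdapterSketch:
    """Toy adapter: converts to/from flat sequences but delegates all
    real work to any object exposing __call__ and derivative."""
    def __init__(self, rf):
        self.rf = rf                     # any AbstractReducedFunctional-like

    def __call__(self, flat_values):
        return self.rf(list(flat_values))

    def derivative(self):
        return list(self.rf.derivative())

class TinyRF:
    """Minimal rf implementing just the interface: J(m) = m**2."""
    def __init__(self):
        self.m = [0.0]
    def __call__(self, m):
        self.m = m
        return m[0] ** 2
    def derivative(self):
        return [2.0 * self.m[0]]

np_rf = NumPyAdapterSketch(TinyRF())
assert np_rf([3.0]) == 9.0
assert np_rf.derivative() == [6.0]
```

Because the adapter never reaches into concrete internals, a second interface-conforming class would need no duplicated wrapper code, which is the argument for fixing `ReducedFunctionalNumPy` rather than cloning it.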
