Codecov Report

❌ Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main      #19       +/-  ##
==========================================
+ Coverage   58.75%   79.34%   +20.59%
==========================================
  Files          17       35       +18
  Lines         320     1225      +905
==========================================
+ Hits          188      972      +784
- Misses        132      253      +121
```

☔ View full report in Codecov by Sentry.
At this point, tests are passing for user-defined templates, including tests that run a simulation and compare cross sections parsed from the output to expected values. It should be fairly simple to copy and modify this test file to apply to a new test suite. We need test suites for problems covering the template types that can be generated in bfrescox:
There is an additional remaining task:
Also, we could probably simplify testing with https://docs.python.org/3/library/tempfile.html
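To make the suggestion concrete, here is a minimal sketch of how `tempfile.TemporaryDirectory` could keep test artifacts out of the source tree. The test name and the placeholder output file are illustrative only, not taken from the package's test suite:

```python
import tempfile
import unittest
from pathlib import Path


class TestWithTempDir(unittest.TestCase):
    def test_outputs_land_in_scratch_dir(self):
        # TemporaryDirectory is removed automatically when the block exits,
        # so failed runs never leave stale output files behind.
        with tempfile.TemporaryDirectory() as scratch:
            out_file = Path(scratch) / "fort.16"
            out_file.write_text("placeholder output\n")  # stand-in for a real run
            self.assertTrue(out_file.exists())
        # The directory and its contents are gone after the with-block.
        self.assertFalse(Path(scratch).exists())
```

This also removes the need for explicit tear-down code that deletes generated files.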
beykyle left a comment
Initial review for feature-alpha into main. I will push some updates so that everything correctly references the main branch. The rest of the issues are documented with comments.
The testing support code can likely be made far more economical by adding a common/test/, which would contain common/TestData, as well as the python code in this utils file, and versions of the harness classes that inherit from unittest.TestCase. Between the two packages, the only differences in them are 1) the import (bfrescox vs bfrescoxpro) and 2) extra work to handle pro configuration from the latter (which is always included in the json testing specs anyways). Moreover, much of the functionality between each of these test harness classes (setup, tear down, parsing and iterating over tests in a json file) is common across the types of tests (elastic, inelastic, user defined).
Therefore, the various versions of these classes between the two packages should be merged into a single TestHarness class, which takes as input into __init__ a module (either bfrescox or bfrescoxpro), and a test_handler lambda which handles the specifics of running an individual test (e.g. generate_inelastic_template vs generate_elastic_template, handling pro settings, etc.).
This will greatly simplify the testing code in each of the package test dirs themselves, removing essentially repeated code and streamlining the process of standing up new kinds of tests across both packages.
Furthermore, a brief description of the test framework should be included in the documentation, so the procedure for standing up a new test point (e.g. adding an entry to TestSuite_Elastic.json for example), and for standing up a new test suite, across both packages, is clear.
Is this what Issue #44 is a placeholder for?
Unifying and simplifying the testing infrastructure would certainly be a welcome idea. Note that it might be nice for developers to be able to use such testing tools themselves outside of the package. For instance, they might want to define a minimal test suite that they would like to use for regression testing during their current phase of development.
If this is done, perhaps that same tooling could just be used by the package with fixed, package-specific test suites.
#44 mentions adding info about testing to the documentation. This would look something like this readme. Should we just put this info in the docs and link to it here?
Good question. Who is the audience for this document? Should the general user be interested in this and have easy access to it? Or is it more for developers?
One way to think about placement: where should it go so that maintainers find it, read it, and keep it updated?
```python
"STEP_SIZE": f"{step_size_fm:.9f}",
"RMATCH": f"{R_match_fm:.9f}",
"J_TOT_MIN": f"{float(J_tot_min):.1f}",
"J_TOT_MAX": f"{float(J_tot_max):.1f}",
"E_LAB": f"{E_lab_MeV:.9f}",
"MASS_P": f"{projectile_mass_amu:.9f}",
"CHARGE_P": f"{projectile_atomic_number:.9f}",
"MASS_T": f"{target_mass_amu:.9f}",
"CHARGE_T": f"{target_atomic_number:.9f}",
"S_PROJECTILE": f"{float(projectile_spin):.1f}",
"I_GROUND": f"{float(target_spin):.1f}",
"E_GROUND": f"{E_0_MeV:.9f}",
```
Output all floating-point values with full precision.
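For context on the full-precision request: fixed-width formatting such as `:.9f` loses digits, whereas `repr()` (or `:.17g`) produces a string that round-trips an IEEE-754 double exactly. A quick illustration:

```python
# Fixed-width formatting truncates; repr() round-trips exactly.
x = 1.0 / 3.0

fixed = f"{x:.9f}"   # nine decimals, information lost
full = repr(x)       # shortest string that recovers x exactly

assert float(full) == x    # repr round-trips
assert float(fixed) != x   # nine decimals do not
# ".17g" also round-trips any double (17 significant digits suffice).
assert float(f"{x:.17g}") == x
```

So switching the template substitutions above from `:.9f` to `repr`-style output would make the generated inputs lossless.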
```python
if not _is_fraction_integer_or_half_integer(J_tot_min):
    raise ValueError("J_tot_min must be an integer or half-integer.")
if not _is_fraction_integer_or_half_integer(J_tot_max):
    raise ValueError("J_tot_max must be an integer or half-integer.")
```
These are made redundant by use of _validate_spin.
common/generate_elastic_template.py (outdated)
```python
if not _is_fraction_integer_or_half_integer(J_tot_min):
    raise ValueError("J_tot_min must be an integer or half-integer.")
if not _is_fraction_integer_or_half_integer(J_tot_max):
    raise ValueError("J_tot_max must be an integer or half-integer.")
```
These are made redundant by _validate_spin.
Left TODOs in place because we will eventually have to define what events should lead to officially publishing the book.
Try Python 3.14; we need to use the latest action versions to avoid Node.js 20 deprecations.
Include in API docs all functions that are in the public interface even if they are prototypes. If prototypes, this needs to be clearly noted. The f90nml version limit corresponds to a version from 2021. We suspect that it is not needed in practice.
It looks like load_tests and test were shuffled (for alphabetizing?). See if putting them in this order fixes issue.
Explicitly handling possibilities that a filename points to a pre-existing directory. In some cases, making it clear that if users are providing a working directory, that directory must be pre-existing. Prefer using parsing routine so that we simplify code and exercise the parsing routine in more cases.
Fixing typos and making docs more explicit.
Make changes made in one package to similar files in other package. Move error checking to lower-level code. This simplifies the higher-level code.
Cleaned up docs. Made parsing more strict and more explicit/defensive. Tests were still passing after making these changes. DataFrames produced during tests had appropriate headers.
Removed an old file that was accidentally readded during my review.
Homogenize file/directory error handling and communication.
The original implementation of this parsing routine allowed for a large amount of variability in the format/content of the fort.16 file. However, our test suite does not yet check any of that flexibility and we don't yet have a clear use case for it that we could turn into a test case. Therefore, we make parsing far more explicit and constrained so that any deviation in the file from the assumptions implicit in the current test suite should be caught in an obvious way. All tests passing locally still.
This removes parsing oddities immediately and details their cause. Assume that we will always get angle/sigma data and improve self-documentation of returned results. Tests passing locally.
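In the spirit of the constrained parsing described above, a strict parser accepts only the expected layout and fails loudly, with the offending line, on anything else. The two-column angle/sigma layout and the returned key names here are assumptions for illustration, not the actual fort.16 format:

```python
def parse_angle_sigma(lines):
    """Parse strictly: every non-blank line must be exactly two floats.

    Any deviation raises immediately with the offending line number
    rather than being silently skipped. Illustrative sketch only; the
    real fort.16 layout and parser differ.
    """
    angles, sigmas = [], []
    for lineno, line in enumerate(lines, start=1):
        if not line.strip():
            continue
        fields = line.split()
        if len(fields) != 2:
            raise ValueError(f"line {lineno}: expected 'angle sigma', got {line!r}")
        try:
            angle, sigma = float(fields[0]), float(fields[1])
        except ValueError:
            raise ValueError(f"line {lineno}: non-numeric field in {line!r}")
        angles.append(angle)
        sigmas.append(sigma)
    # Named keys document the meaning of the returned columns.
    return {"angle_deg": angles, "sigma_mb": sigmas}
```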
I have finished reviewing the main infrastructure, docs, and code. @beykyle Can you please review all of the changes that I have made since your last review? Let me know when you finish so that I can continue reviewing the test and template code.
Pending PR tasks
- bfrescox tests passing. Please document and justify all test design decisions in TestData/README (e.g., adjusting threshold test value)
- bfrescoxpro testing to run the same tests as bfrescox (check tox task)

PR Self-review
- bfrescox and confirm that contents are correct and minimal.

@jared321 PR Review
- bfrescoxpro, on a variety of different machines with modules and different MPI implementations
- main