Conversation
Note: Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough

Adds a configurable HepMC particle trigger and updates the HepMC jet trigger (lepton PDG filtering, eta gating, event-count gating). Introduces many new calibration/QA subsystems (dEdx fitter, sEPD QVec calibration, EventPlane recov, GlobalVertex/Mbd/Svtx v3), waveform functional fits, build/autotools additions, I/O hooks, and numerous API/signature and const-correctness modernizations.
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120–180 minutes
Pull request overview
This pull request contains a large set of code modernization, refactoring, and feature additions across the offline reconstruction and analysis framework. The changes include code style improvements (const-correctness, auto* usage, formatting), algorithmic optimizations (std::min/max), new analysis modules, and bug fixes.
Changes:
- Code style modernization: const-correctness for string parameters, auto* for pointers, consistent formatting
- Performance improvements: replacing manual comparisons with std::min/std::max
- New features: flow calibration system for jet background, tower combination utility, state-cluster residuals QA
- Bug fixes: micromegas hit key sample tracking, MVTX/INTT pool filling logic, MBD empty packet handling
Reviewed changes
Copilot reviewed 87 out of 87 changed files in this pull request and generated no comments.
Show a summary per file
| File | Description |
|---|---|
| PHTpcDeltaZCorrection.h, PHSimpleVertexFinder.h, PHActsTrkFitter.h, PHActsTrkFitter.cc | Added const to string parameter references for setters |
| PHSiliconTpcTrackMatching.h/.cc | Added cluster map name configurability |
| DSTClusterPruning.h/.cc | Added pruneAllSeeds feature |
| AlignmentTransformation.h/.cc | Enhanced TPC module tilt handling and transform logic |
| TrainingHits.cc | Moved member initializations to initializer list |
| TpcSimpleClusterizer.cc, TpcClusterizer.cc, etc. | Replaced manual min/max with std::min/std::max, changed auto to auto* |
| TpcRawDataTree.cc | Changed array access to .data() method |
| TpcDistortionCorrection.cc | Removed inline keyword from constexpr |
| LaserClusterizer.cc | Major formatting improvements and code cleanup |
| SegmentationAlpide.cc | Replaced boost::format with std::format |
| MvtxHitPruner.cc, MvtxClusterizer.cc | Formatting and const-correctness improvements |
| CylinderGeom_Mvtx.h/.cc | Added const to methods, removed using namespace std |
| MicromegasDefs.h/.cc | Added sample field to hit keys |
| MicromegasCombinedDataDecoder.cc | Enhanced to track sample with maximum ADC |
| MicromegasClusterizer.cc | Fixed duplicate hit handling by sample |
| MbdReco.cc, MbdEvent.cc, MbdCalib.cc | Added event number to warnings, added empty packet checks |
| DetermineTowerBackground.h/.cc | Added flow v2 calibration system |
| TriggerDSTSkimmer.h/.cc | Added max event acceptance feature |
| KFParticle_sPHENIX.cc | Changed ABORTEVENT to EVENT_OK for missing tracks/vertices |
| PhotonClusterBuilder.cc | Added e22 shower shape parameter |
| CombineTowerInfo.h/.cc | New utility for combining tower information |
| CopyIODataNodes.h/.cc | Added TowerInfo copying capability |
| SingleTriggeredInput.h/.cc | Enhanced GL1 skip logic and BCO alignment |
| SingleMvtxPoolInput.cc | Adjusted negative BCO handling |
| SingleMicromegasPoolInput_v2.h/.cc | Added matched flag to waveform structure |
| Fun4AllStreamingInputManager.cc | Refactored INTT and MVTX filling logic |
| InputFileHandler.h/.cc | Added script execution before file opening |
| Fun4AllServer.h/.cc | Added cout state restoration |
| CreateFileList.pl | Added O+O hijing production type |
| Timing.h/.cc, SyncReco.h, HeadReco.h/.cc, FlagHandler.h, CDBInterface.h/.cc | Minor cleanups |
| StateClusterResidualsQA.h/.cc | New QA module for track state-cluster residuals |
| MicromegasClusterQA.h/.cc | Added sample range filtering |
| CaloValid.h | Added trigger index 12 |
| PHHepMCGenEventv1.cc | Return NaN instead of 0 for missing flow psi |
| PHGenFit Track.cc, Fitter.cc | Code cleanup and style improvements |
Actionable comments posted: 15
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
offline/framework/fun4allraw/SingleMvtxPoolInput.cc (1)
210-229: Guard against empty `m_BclkStack` before dereferencing `it`

When `m_BclkStack` is empty, `lower_bound` returns `begin() == end()` and Line 214 dereferences `*it`, which is UB and can crash. Add an early empty check before dereferencing.

🐛 Proposed fix
```diff
-  auto strb_it = m_BclkStack.cend();
+  if (m_BclkStack.empty())
+  {
+    continue;
+  }
+  auto strb_it = m_BclkStack.cend();
   if (it == m_BclkStack.begin())
   {
     if (*it == lv1Bco)
     {
       strb_it = it;
     }
     else
     {
       strb_it = m_BclkStack.cend();
     }
   }
   else
   {
     // safe because it != begin()
     auto prev = it;
     --prev;
     strb_it = prev;
   }
```

offline/framework/fun4allraw/SingleMicromegasPoolInput_v2.cc (1)
914-967: Reset `fee_bco_predicted_matched` when no prediction is available.

When the prediction is absent on a matched waveform, the field can retain the prior value and pollute QA output. Clearing it before the optional assignment avoids stale data.
♻️ Proposed fix
```diff
-  {
-    const auto predicted = bco_matching_information.get_predicted_fee_bco(gtm_bco);;
-    if( predicted )
-    {
-      m_waveform.fee_bco_predicted_matched = predicted.value();
-    }
-  }
+  m_waveform.fee_bco_predicted_matched = 0;
+  {
+    const auto predicted = bco_matching_information.get_predicted_fee_bco(gtm_bco);
+    if (predicted)
+    {
+      m_waveform.fee_bco_predicted_matched = predicted.value();
+    }
+  }
```

offline/packages/trackreco/DSTClusterPruning.cc (1)
173-211: Guard condition blocks `pruneAllSeeds()` in minimal setups.

The function requires `m_track_seed_container` unconditionally (line 173), but the `pruneAllSeeds` branch (lines 180–211) only needs TPC and Silicon seed containers. The non-`pruneAllSeeds` path (starting line 213) is where `m_track_seed_container` is actually used. Workflows without `SvtxTrackSeedContainer` that enable `pruneAllSeeds` will unnecessarily bail out.

🔧 Suggested fix
```diff
-  if (!(m_cluster_map && m_reduced_cluster_map && m_track_seed_container && m_silicon_track_seed_container && m_tpc_track_seed_container))
+  if (!(m_cluster_map && m_reduced_cluster_map && m_silicon_track_seed_container && m_tpc_track_seed_container))
   {
     if (Verbosity() > 1){
       std::cout << "Missing container" << std::endl;
     }
     return;
   }
+  if (!m_pruneAllSeeds && !m_track_seed_container)
+  {
+    if (Verbosity() > 1){
+      std::cout << "Missing track seed container" << std::endl;
+    }
+    return;
+  }
```

offline/packages/tpc/Tpc3DClusterizer.cc (1)
684-706: Uncomment `rtree.remove()` or make the rtree parameter a const reference.

The `rtree.remove(clusHit)` call at line 694 is commented out, but `rtree` is passed by non-const reference. The similar implementation in `LaserClusterizer::remove_hits` (line 223) actively calls `rtree.remove(clusHit)`, suggesting this should not be commented out. Either:

- Uncomment the removal to match the expected behavior
- If rtree should remain unmodified, change the parameter to `const bgi::rtree<...> &rtree` to document intent

The `rtree.empty()` check (line 690) can be removed if rtree isn't being modified; it serves no algorithmic purpose and only logs a diagnostic message.
🤖 Fix all issues with AI agents
In `@generators/Herwig/HepMCTrigger/HepMCParticleTrigger.cc`:
- Around line 239-266: The AbsEta setters (SetAbsEtaHigh, SetAbsEtaLow,
SetAbsEtaHighLow) currently write into the signed-eta members
_theEtaHigh/_theEtaLow, causing signed-eta cuts to be unintentionally modified;
change the implementation to use dedicated abs-eta members (e.g. add
_theAbsEtaHigh and _theAbsEtaLow initialized in the constructor) and have
SetAbsEtaHigh/SetAbsEtaLow/SetAbsEtaHighLow assign those new members and set
_doAbsEtaHighCut/_doAbsEtaLowCut/_doBothAbsEtaCut, or alternatively explicitly
disable the signed-eta cuts inside those setters (clear
_doEtaHighCut/_doEtaLowCut) so the signed-eta thresholds are not used when
absolute-eta cuts are enabled; update any other occurrences (e.g. the similar
block at the other location noted) to use the new abs members or disabling
logic.
- Around line 22-58: The constructor HepMCParticleTrigger's initializer list
contains standalone commas that create empty initializers and break compilation;
edit the HepMCParticleTrigger(...) initializer list to remove the stray
comma-only lines so each member (threshold, goal_event_number, set_event_limit,
_theEtaHigh, _theEtaLow, _thePtHigh, _thePtLow, _thePHigh, _thePLow, _thePzHigh,
_thePzLow, _doEtaHighCut, _doEtaLowCut, _doBothEtaCut, _doAbsEtaHighCut,
_doAbsEtaLowCut, _doBothAbsEtaCut, _doPtHighCut, _doPtLowCut, _doBothPtCut,
_doPHighCut, _doPLowCut, _doBothPCut, _doPzHighCut, _doPzLowCut, _doBothPzCut)
is separated by a single comma with no empty lines; ensure no trailing commas or
blank comma lines remain in the initializer list for HepMCParticleTrigger.
In `@offline/framework/fun4all/InputFileHandler.cc`:
- Around line 183-187: The code builds a shell command by concatenating
m_RunBeforeOpeningScript, m_OpeningArgs and elements of stringvec into fullcmd
which allows shell injection; fix by avoiding passing a single shell
string—either construct an argument vector and invoke the helper directly with
an exec/spawn family call (use execvp/posix_spawn or equivalent) or, if
retaining shell execution, robustly escape/shell-quote each element
(m_OpeningArgs and each element of stringvec) before concatenation; update the
call site that uses fullcmd to accept an argv-style array or the sanitized,
safely-quoted command and reference the symbols m_RunBeforeOpeningScript,
m_OpeningArgs, stringvec and fullcmd when making changes.
In `@offline/framework/fun4allraw/Fun4AllStreamingInputManager.cc`:
- Line 1367: The calculation of ref_bco_minus_range in
Fun4AllStreamingInputManager.cc uses a fallback of m_mvtx_negative_bco when
m_RefBCO < m_mvtx_negative_bco, which is inconsistent with other places
(FillMvtx, FillInttPool) that default to 0; change the ternary so that when
m_RefBCO < m_mvtx_negative_bco the result defaults to 0 instead of
m_mvtx_negative_bco (i.e., update the computation of ref_bco_minus_range using
m_RefBCO, m_mvtx_negative_bco to return 0 on underflow) and ensure any logic
relying on ref_bco_minus_range continues to treat 0 as the minimum valid BCO.
- Around line 974-986: The computation of lower_limit in
Fun4AllStreamingInputManager::... can underflow when m_mvtx_is_triggered is
false (lower_limit = select_crossings - m_mvtx_bco_range - m_mvtx_negative_bco);
change this to perform a safe subtraction and clamp to zero (or use a
signed/extended intermediate like int64_t temp = (int64_t)select_crossings -
(int64_t)m_mvtx_bco_range - (int64_t)m_mvtx_negative_bco; lower_limit = temp < 0
? 0 : (uint64_t)temp) so that when m_RefBCO < m_mvtx_negative_bco you do not
wrap to a huge unsigned value; apply the same pattern/guarding used earlier
around line 839 and ensure the for-loop over m_MvtxRawHitMap continues to use
lower_limit and upper_limit as before.
In `@offline/framework/fun4allraw/SingleTriggeredInput.cc`:
- Around line 436-511: The skiptrace block leaks evt (and sometimes pkt) on
early continues and lacks handling when the 5-retry clock consistency loop
fails; update the block around GetEventIterator()/pkt retrieval so that whenever
you continue you first delete evt and/or pkt as appropriate (references:
GetEventIterator()->getNextEvent(), pkt from evt->getPacket(representative_pid),
FillPacketClock, m_bclkdiffarray_map, Gl1Input()/gl1), and after the
clockconsistency retry loop exhausts (clockconstcount>=5) add deterministic
failure handling—delete any allocated evt/pkt, log an error with Name(), and
propagate a failure (e.g. return -1 or call FilesDone/close) rather than falling
through so no dangling objects remain and misaligned data isn’t processed.
In `@offline/packages/CaloEmbedding/CombineTowerInfo.cc`:
- Around line 46-72: DetNode (the PHCompositeNode returned by iter.findFirst and
stored in DetNode) may be null before calling DetNode->addNode(...); update
CombineTowerInfo to check DetNode for null and handle it safely: if DetNode is
null either create and insert a new PHCompositeNode with the detector name into
the appropriate parent (so subsequent addNode calls are valid) or fail early
with a clear exception/log; ensure this null-check and handling occurs before
using DetNode to add the PHIODataNode for m_towersOut and reference the symbols
DetNode, m_outputNode, m_towersOut, and the PHIODataNode<PHObject> construction
in your change.
In `@offline/packages/CaloEmbedding/CopyIODataNodes.cc`:
- Around line 307-327: CopyTowerInfo dereferences TowerInfoContainer pointers
without null checks which can crash if findNode::getClass fails for
from_towerInfo_name or to_towerInfo_name; add null checks right after retrieving
from_towerInfo and to_towerInfo in CopyTowerInfo and bail out (log an error via
Verbosity()/std::cout or return) when either is null, and also guard the loop
that calls get_tower_at_channel/ch->copy_tower by verifying from_tow and the
result of to_towerInfo->get_tower_at_channel(ch) are non-null before calling
copy_tower; keep the existing identify() debug prints but only call them when
containers are valid.
In `@offline/packages/jetbackground/DetermineTowerBackground.cc`:
- Around line 76-108: The function DetermineTowerBackground::LoadCalibrations
currently calls exit(-1) when calibdir is empty; instead, remove the exit call
and return an error code so InitRun can handle it—specifically, when
calibdir.empty() is true (taking into account m_overwrite_average_calo_v2 and
m_overwrite_average_calo_v2_path), replace the exit(-1) with returning
Fun4AllReturnCodes::ABORTRUN and ensure callers (e.g., InitRun) will propagate
that failure.
In `@offline/packages/mvtx/MvtxHitPruner.cc`:
- Around line 158-167: Check for a nullptr return from m_hits->findHitSet inside
MvtxHitPruner::process_event before dereferencing hitset: if hitset is null,
emit a warning (or debug) message using the existing Verbosity() branch and skip
processing/continue the loop instead of accessing hitset->size() or calling
TrkrHitSet::ConstRange hitrangei = hitset->getHits(); this prevents
dereferencing a null pointer and keeps existing logging behavior consistent.
In `@offline/packages/tpc/Tpc3DClusterizer.cc`:
- Around line 659-681: The block that builds the fX array and calls
m_clusterNT->Fill(fX) must be guarded because m_clusterNT is only created when
m_output is true; restore a null-check or the original if (m_output) guard
around that block (or at minimum check m_clusterNT != nullptr before calling
Fill) to avoid a potential null pointer dereference; update the code around the
fX construction and the call to m_clusterNT->Fill so it only executes when
m_output is true and m_clusterNT is valid.
- Around line 554-559: The variables iphimin/iphimax are being assigned the
double angle phi (radians) causing type/semantic mismatch; either (A) treat them
as angle bounds: rename iphimin/iphimax to phimin/phimax and change their type
to double, update all uses (including phisize calculation) to compute angular
size from phimin/phimax, or (B) keep the iphi semantics and assign the integer
iphi (from coords[1]) instead of phi so iphimin/iphimax remain integer indices;
update any downstream code that computes phisize on the basis of index vs angle
accordingly (check uses of phi, iphi, coords[1], phisize to ensure consistency).
In `@offline/packages/trackreco/PHActsTrkFitter.cc`:
- Around line 391-409: The code currently leaves crossing_estimate set to
SHRT_MAX when m_pp_mode is true and m_enable_crossing_estimate is false, which
can propagate an invalid sentinel into drift corrections and fits; update the
logic in PHActsTrkFitter (the block referencing m_pp_mode,
m_enable_crossing_estimate, crossing, crossing_estimate and SHRT_MAX) to handle
the "no INTT crossing" case when estimate mode is disabled by either (1)
skipping the track (return/continue) or (2) setting crossing_estimate to a safe
default (e.g., 0) and not attempting fit; ensure use_estimate and nvary are only
set when m_enable_crossing_estimate is true and add a clear comment explaining
the chosen behavior.
In `@offline/QA/Tracking/StateClusterResidualsQA.cc`:
- Around line 181-198: The code calls
geometry->getGlobalPosition(state->get_cluskey(), cluster) before verifying
cluster is non-null, risking a crash; reorder and null-check so you call
cluster_map->findCluster(...) into cluster, then immediately if (!cluster)
continue (or skip) and only then call geometry->getGlobalPosition(..., cluster)
and use its result to compute cluster_x/cluster_y/cluster_z for filling
m_histograms_x/y/z; update the block around findCluster, getGlobalPosition, and
the if (cluster) check accordingly.
In `@offline/QA/Tracking/StateClusterResidualsQA.h`:
- Around line 56-101: These setters (setNMvtx, setNIntt, setNTpc, setPhiRange,
setEtaRange, setPtRange, setPositiveTracks, setNegativeTracks) assume
m_pending.back() exists and cause UB if called before an addHistogram; guard
each setter by ensuring m_pending is non-empty (e.g., if m_pending.empty()
push_back(default-constructed PendingEntry) or throw/assert with a clear
message) before accessing m_pending.back(), so callers can safely call any
setter before addHistogram; update all listed methods to perform the same
presence check and create a default pending entry when missing.
🟡 Minor comments (12)
offline/packages/mvtx/MvtxClusterPruner.cc-228-241 (1)
228-241: Guard against division by zero when computing fraction.

If no MVTX clusters are processed during the run (e.g., empty events or missing data), `m_cluster_counter_total` remains 0, causing undefined behavior on line 237.

Proposed fix
```diff
   std::cout << "MvtxClusterPruner::End -"
             << " m_cluster_counter_deleted: " << m_cluster_counter_deleted
-            << " fraction: " << double(m_cluster_counter_deleted) / m_cluster_counter_total
+            << " fraction: " << (m_cluster_counter_total > 0 ? double(m_cluster_counter_deleted) / m_cluster_counter_total : 0.0)
             << std::endl;
```

offline/framework/fun4all/InputFileHandler.cc-177-182 (1)
177-182: Permission check may be too restrictive.

The check only verifies `owner_exec` permission. If the process runs under a different user who has execute permission via group or other bits, the script will be incorrectly rejected.

Suggested fix: check any execute permission
```diff
-  if (!((std::filesystem::status(m_RunBeforeOpeningScript).permissions() & std::filesystem::perms::owner_exec) == std::filesystem::perms::owner_exec))
+  auto perms = std::filesystem::status(m_RunBeforeOpeningScript).permissions();
+  bool is_executable = (perms & std::filesystem::perms::owner_exec) != std::filesystem::perms::none ||
+                       (perms & std::filesystem::perms::group_exec) != std::filesystem::perms::none ||
+                       (perms & std::filesystem::perms::others_exec) != std::filesystem::perms::none;
+  if (!is_executable)
   {
     std::cout << PHWHERE << "RunBeforeOpeningScript script " << m_RunBeforeOpeningScript << " is not owner executable" << std::endl;
     return -1;
   }
```

offline/packages/CaloEmbedding/CopyIODataNodes.cc-322-322 (1)
322-322: Typo: "TowerInfoCOntainer" should be "TowerInfoContainer".

🔧 Fix typo
```diff
-  std::cout << "To TowerInfoCOntainer identify()" << std::endl;
+  std::cout << "To TowerInfoContainer identify()" << std::endl;
```

offline/packages/CaloEmbedding/CombineTowerInfo.cc-25-28 (1)
25-28: Missing validation for `m_detector`.

The `m_detector` member is used in `CreateNodes` to locate the detector node, but it's not validated in `InitRun` alongside the other node names.

🔧 Proposed fix
```diff
-  if (m_inputNodeA.empty() || m_inputNodeB.empty() || m_outputNode.empty())
+  if (m_inputNodeA.empty() || m_inputNodeB.empty() || m_outputNode.empty() || m_detector.empty())
   {
-    throw std::runtime_error("CombineTowerInfo: input/output node names not set");
+    throw std::runtime_error("CombineTowerInfo: input/output node names or detector not set");
   }
```

offline/packages/micromegas/MicromegasClusterizer.cc-159-166 (1)
159-166: Remove the incomplete comment and use a const reference in the range-based for loop.

Line 166 contains `//ma`, which appears to be an incomplete comment or debug artifact that should be removed before merging.

Additionally, consider using `const std::string&` in the range-based for loop to avoid unnecessary string copies.

Proposed fix
```diff
-  for( std::string geonodename: {"CYLINDERGEOM_MICROMEGAS_FULL", "CYLINDERGEOM_MICROMEGAS" } )
+  for( const std::string& geonodename: {"CYLINDERGEOM_MICROMEGAS_FULL", "CYLINDERGEOM_MICROMEGAS" } )
   {
     // try load node and test
     geonode = findNode::getClass<PHG4CylinderGeomContainer>(topNode, geonodename);
     if( geonode ) { break;}
   }
-
-  //ma
```

offline/packages/tpc/Tpc3DClusterizer.cc-150-158 (1)
150-158: Fix the inconsistent fallback value for `m_seed`.

`m_seed` is initialized as `int m_seed {-1};` in the header, but when the `RANDOMSEED` flag is absent, the code sets it to `std::numeric_limits<int>::quiet_NaN()`, which returns `0` for integer types. This creates an inconsistency in sentinel values.

Use `-1` consistently instead of `quiet_NaN()`, or choose another explicit sentinel value that matches the member initialization.

offline/packages/trackreco/PHActsTrkFitter.cc-411-418 (1)
411-418: Clarify or enforce the "veto others" behavior in non-pp mode.

The comment says "veto others," but the code now forces `crossing = 0` and continues. Either update the comment or re-enable the veto (continue) so behavior matches intent.

Example: update comment to match behavior
```diff
-  // non pp mode, we want only crossing zero, veto others
+  // non-pp mode: force crossing to zero for all tracks
```

offline/packages/Skimmers/Trigger/TriggerDSTSkimmer.h-25-29 (1)
25-29: Guard against non-positive `max_events`.

Passing `0` or negative values currently enables the cap and will abort all events. Consider treating `<= 0` as "no cap" (or assert) to avoid accidental shutdowns.

🩹 Suggested fix
```diff
   void set_accept_max(int max_events)
   {
-    use_max_accept = true;
-    max_accept = max_events;
-    return;
+    if (max_events <= 0)
+    {
+      use_max_accept = false;
+      max_accept = 0;
+      return;
+    }
+    use_max_accept = true;
+    max_accept = max_events;
   }
```

offline/packages/jetbackground/DetermineTowerBackground.h-41-45 (1)
41-45: Make `SetOverwriteCaloV2` accept a const string reference.

Line 41 takes a non-const reference, which blocks temporaries and const strings. Other setters in this class (`SetBackgroundOutputName`, `set_towerNodePrefix`) correctly use `const std::string&`.

♻️ Proposed fix
```diff
-  void SetOverwriteCaloV2(std::string &url)
+  void SetOverwriteCaloV2(const std::string &url)
   {
     m_overwrite_average_calo_v2 = true;
     m_overwrite_average_calo_v2_path = url;
   }
```

offline/packages/jetbackground/DetermineTowerBackground.cc-948-971 (1)
948-971: Replace hardcoded centrality bin bounds with a dynamic vector size check.

The `_CENTRALITY_V2` vector is initialized with exactly 100 elements (indices 0–99) and populated for all centrality bins. The condition `centrality_bin > 0 && centrality_bin < 95` incorrectly excludes valid bins 0 and 95–99, causing them to be treated as flow failures despite having populated v2 values. Use the vector size for bounds checking instead.

Fix
```diff
-  if (centrality_bin > 0 && centrality_bin < 95)
+  if (centrality_bin >= 0 &&
+      centrality_bin < static_cast<int>(_CENTRALITY_V2.size()))
   {
     _v2 = _CENTRALITY_V2[centrality_bin];
   }
```

offline/framework/fun4all/Fun4AllServer.cc-904-905 (1)
904-905: Add cout formatting restore to the NewSubsystems loop for consistency.

The Subsystems loop (line 904) restores cout formatting after calling BeginRunSubsystem, but the NewSubsystems loop (lines 906-910) does not. Since BeginRunSubsystem produces output that could modify formatting, this inconsistency can allow formatting changes to leak to subsequent code. Add the restore after the BeginRunSubsystem call in the NewSubsystems loop.
♻️ Proposed fix
```diff
   for (; !NewSubsystems.empty(); NewSubsystems.pop_front())
   {
     registerSubsystem((NewSubsystems.front()).first, (NewSubsystems.front()).second);
     BeginRunSubsystem(std::make_pair(NewSubsystems.front().first, topNode(NewSubsystems.front().second)));
+    std::cout.copyfmt(m_saved_cout_state);
   }
```

offline/packages/jetbackground/DetermineTowerBackground.cc-536-621 (1)
536-621: Add null check before dereferencing sEPDNS pointer.

Lines 605-607 assume `EventplaneinfoMap::sEPDNS` exists whenever the map is non-empty. The `get()` method returns `nullptr` if the key is absent, and a non-empty map does not guarantee this entry exists. Without a null check, dereferencing `EPDNS` will crash if `sEPDNS` was not inserted into the map.

♻️ Proposed fix
```diff
-  if (!(epmap->empty()))
-  {
-    auto *EPDNS = epmap->get(EventplaneinfoMap::sEPDNS);
-    _Psi2 = EPDNS->get_shifted_psi(2);
-  }
+  if (!(epmap->empty()))
+  {
+    auto *EPDNS = epmap->get(EventplaneinfoMap::sEPDNS);
+    if (EPDNS)
+    {
+      _Psi2 = EPDNS->get_shifted_psi(2);
+    }
+    else
+    {
+      _is_flow_failure = true;
+      _Psi2 = 0;
+    }
+  }
```
🧹 Nitpick comments (34)
offline/packages/mbd/MbdSig.h (1)
35-36: LGTM!

The new getter correctly exposes the private `_nsamples` member and follows the existing naming conventions in the class.

Optional improvement: consider adding a `const` qualifier for const-correctness:

```diff
-  int GetNSamples() { return _nsamples; }
+  int GetNSamples() const { return _nsamples; }
```

This applies to other getters in the class as well, but can be deferred to a broader const-correctness cleanup.
offline/packages/tpc/TpcClusterMover.cc (1)
161-191: LGTM! Const qualification is correct.

The method correctly doesn't modify any member state, so the `const` qualifier is appropriate.

Minor nit: there's an unnecessary nested brace block on lines 169-175. The outer braces around the `if` block serve no scoping purpose here.

🔧 Optional cleanup for redundant braces
```diff
   if (std::isnan(xplus))
   {
-    {
-      if (_verbosity > 1)
-      {
-        std::cout << " circle/circle intersection calculation failed, skip this cluster" << std::endl;
-        std::cout << " target_radius " << target_radius << " fitted R " << R << " fitted X0 " << X0 << " fitted Y0 " << Y0 << std::endl;
-      }
+    if (_verbosity > 1)
+    {
+      std::cout << " circle/circle intersection calculation failed, skip this cluster" << std::endl;
+      std::cout << " target_radius " << target_radius << " fitted R " << R << " fitted X0 " << X0 << " fitted Y0 " << Y0 << std::endl;
     }
     return Fun4AllReturnCodes::ABORTEVENT;  // skip to next cluster
   }
```

offline/packages/tpc/LaserEventIdentifier.cc (1)
141-141: Consider using integer arithmetic for the BCO threshold comparison.

The comparison mixes unsigned integer subtraction with a floating-point expression (`350.0/30*16` ≈ 186.67). If the intent is a timing window based on clock cycles, using explicit integer arithmetic would improve clarity.

💡 Suggested clarification
```diff
-  else if ((gl1pkt->getBCO() - prev_BCO) < 350.0/30*16)
+  // 350 ns timing window: 350/30 MHz * 16 clocks ≈ 186 BCO ticks
+  else if ((gl1pkt->getBCO() - prev_BCO) < 187)
```

Or keep the calculation explicit with a `constexpr` if the derivation is important for documentation.
offline/packages/tpc/TpcRawDataTree.cc (1)
180-206: Pre-existing: duplicate `m_fee == 16` condition causes dead code.

This is not related to your change, but `m_fee == 16` appears in both conditions (line 185 and line 196). Since it's checked first in the R1 block, it will never match the R2 block, making the duplicate on line 196 unreachable. Consider removing it from one of the conditions or verifying which region FEE 16 should belong to.

🔧 Suggested fix
```diff
   else if (m_fee == 11 || m_fee == 12 || m_fee == 19 || m_fee == 18 ||
            m_fee == 01 || m_fee == 00 ||
-           m_fee == 16 || m_fee == 15)
+           m_fee == 15)
```

offline/framework/fun4allraw/SingleTriggeredInput.h (1)
115-115: Consider using the `m_` prefix for naming consistency.

Most private members in this class use the `m_` prefix (e.g., `m_EventNumber`, `m_Event`, `m_EventIterator`), while `eventcounter` follows the pattern of existing exceptions like `firstcall` and `firstclockcheck`. For consistency with the dominant naming convention, consider renaming to `m_EventCounter`.

offline/packages/trackbase/AlignmentTransformation.h (1)
131-132: Consider passing `use_module_tilt` as a parameter to `newMakeTransform` instead of using member state.

This flag is set within a nested loop in `createMap` and read in `newMakeTransform`. Using member state for this transient communication is fragile; if `newMakeTransform` is ever called from another context, the state may be incorrect. Additionally, the comment "starts at false in all cases" is redundant given the initializer.

Suggested approach

Add a `bool use_module_tilt` parameter to `newMakeTransform` and pass the value directly from the call site, removing the member variable entirely.

offline/packages/trackbase/AlignmentTransformation.cc (1)
280-291: Consider replacing the magic number in `test_layer < 4` with a named constant or a clearer condition.

The value 4 is a magic number. Based on context, TPC module hitsetkeys have `test_layer` values of 0, 1, or 2 (for the three regions). A named constant or an explicit comparison against the module layer range would improve readability.

Example

```diff
+  constexpr unsigned int kMaxModuleLayer = 3;
   use_module_tilt = false;
-  if (test_layer < 4 || use_module_tilt_always)
+  if (test_layer < kMaxModuleLayer || use_module_tilt_always)
```

offline/packages/mvtx/MvtxHitPruner.cc (2)
54-67: Consider marking `begin()` and `end()` as `const` member functions.

These accessor methods don't modify state and should be `const` for correctness and to allow use with const `range_adaptor` instances.

♻️ Suggested improvement

```diff
-  const typename T::first_type& begin() { return m_range.first; }
-  const typename T::second_type& end() { return m_range.second; }
+  const typename T::first_type& begin() const { return m_range.first; }
+  const typename T::second_type& end() const { return m_range.second; }
```
140-140: Consider using `[[maybe_unused]]` or just `_` for the unused structured binding element.

The variable `unused` is intentionally unused. Using `[[maybe_unused]]` or the common convention of a single underscore makes the intent clearer and may suppress compiler warnings.

♻️ Suggested improvement

```diff
-  for (const auto& [unused, hitsetkey] : range_adaptor(bare_hitsetrange))
+  for ([[maybe_unused]] const auto& [unused, hitsetkey] : range_adaptor(bare_hitsetrange))
```

Or simply:

```diff
-  for (const auto& [unused, hitsetkey] : range_adaptor(bare_hitsetrange))
+  for (const auto& [_, hitsetkey] : range_adaptor(bare_hitsetrange))
```

offline/packages/PHGenFitPkg/PHGenFit/Fitter.cc (1)
53-58: Verify `TGeoManager::Import` ownership/overrides with pre-allocation.

`_tgeo_manager` is allocated before `TGeoManager::Import(...)`; depending on ROOT semantics, Import may create/replace the current manager, which could leave `_tgeo_manager` unused or stale. Please confirm the ROOT version's behavior and consider assigning `_tgeo_manager` to the Import result to avoid leaks or dangling references if replacement occurs.

offline/framework/fun4allraw/SingleMvtxPoolInput.cc (1)
463-467: Avoid the magic number `120` for `m_NegativeBco`

Line 466 hard-codes `120` without rationale. Consider a named constant (e.g., in `MvtxRawDefs`) or a brief comment tying it to the strobe-width window to prevent accidental regressions.

offline/packages/CaloEmbedding/CombineTowerInfo.cc (1)
85-95: Consider adding defensive null checks for tower pointers.

The `get_tower_at_channel` calls assume valid returns. While the size check in `CreateNodes` should ensure valid indices, defensive null checks would prevent crashes if tower data is inconsistent.

♻️ Optional defensive checks

```diff
   for (unsigned int ich = 0; ich < ntowers; ++ich)
   {
     TowerInfo* towerA = m_towersA->get_tower_at_channel(ich);
     TowerInfo* towerB = m_towersB->get_tower_at_channel(ich);
     TowerInfo* towerO = m_towersOut->get_tower_at_channel(ich);
+    if (!towerA || !towerB || !towerO)
+    {
+      continue;  // or log a warning
+    }
     towerO->copy_tower(towerA);
```

offline/packages/CaloEmbedding/CopyIODataNodes.h (2)
38-44: Minor: Unnecessaryreturnstatement in void setter.The
return;at line 43 is unnecessary in a void function. Other setters in this class (lines 31-37) don't use explicit returns.♻️ Align with existing style
void set_CopyTowerInfo(const std::string& set_from_towerInfo_name,const std::string& set_to_towerInfo_name) { from_towerInfo_name = set_from_towerInfo_name; to_towerInfo_name = set_to_towerInfo_name; m_CopyTowerInfoFlag = true; - return; }
76-79: Naming inconsistency: missingm_prefix on member variables.The new
from_towerInfo_nameandto_towerInfo_namemembers don't follow them_prefix convention used by other members in this class.♻️ Align naming with class convention
bool m_CopyTowerInfoFlag = false; - std::string from_towerInfo_name = {}; - std::string to_towerInfo_name = {}; + std::string m_fromTowerInfoName; + std::string m_toTowerInfoName;offline/packages/CaloEmbedding/CopyIODataNodes.cc (1)
369-369: Unconditional debug print should be gated by verbosity.This print will appear for every run regardless of verbosity settings, unlike similar debug outputs elsewhere in this file which are guarded by
Verbosity() > 0.♻️ Gate debug output
void CopyIODataNodes::CreateTowerInfo(PHCompositeNode *from_topNode, PHCompositeNode *to_topNode) { - std::cout << "copying tower info" << std::endl; + if (Verbosity() > 0) + { + std::cout << "copying tower info" << std::endl; + } TowerInfoContainer *from_towerInfo = findNode::getClass<TowerInfoContainer>(from_topNode, from_towerInfo_name);offline/packages/tpc/Tpc3DClusterizer.cc (2)
284-343: Consider extracting magic numbers into named constants.

The iphi boundary values (191, 206, 383, 576, etc.) appear to be hardware-specific pad boundaries. While the brace additions are good, extracting these into named constants or a lookup table would improve maintainability and self-documentation.

♻️ Example refactor

```cpp
// At class or namespace level
static constexpr std::array<int, 14> kExcludedPadBoundaries = {
    0, 191, 206, 383, 576, 767, 960, 1344, 1522, 1536, 1728, 1920, 2111, 2303};
static constexpr std::array<int, 14> kBoundaryTolerances = {
    2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2};

// In the loop
if (layer >= 7 + 32)
{
  for (size_t i = 0; i < kExcludedPadBoundaries.size(); ++i)
  {
    if (abs(iphi - kExcludedPadBoundaries[i]) <= kBoundaryTolerances[i])
    {
      continue; // Note: would need restructuring to break outer loop
    }
  }
}
```

523-528: Use `std::numeric_limits` for min/max initialization.

The magic numbers (6666, -1, 66666666.6, -6666666666.6) are fragile. If hardware configurations change to support larger values, these could produce incorrect results.

♻️ Recommended fix

```diff
- int iphimin = 6666;
- int iphimax = -1;
- int ilaymin = 6666;
- int ilaymax = -1;
- float itmin = 66666666.6;
- float itmax = -6666666666.6;
+ int iphimin = std::numeric_limits<int>::max();
+ int iphimax = std::numeric_limits<int>::min();
+ int ilaymin = std::numeric_limits<int>::max();
+ int ilaymax = std::numeric_limits<int>::min();
+ float itmin = std::numeric_limits<float>::max();
+ float itmax = std::numeric_limits<float>::lowest();
```

offline/packages/tpc/LaserClusterizer.cc (3)

69-70: Consider making global arrays const.

The global arrays `layerMins` and `layerMaxes` appear to hold constant configuration data. Declaring them `const` (and potentially `constexpr`) would prevent accidental modification and clarify intent.

♻️ Proposed refactor

```diff
-int layerMins[3] = {7, 23, 39};
-int layerMaxes[3] = {22, 38, 54};
+constexpr int layerMins[3] = {7, 23, 39};
+constexpr int layerMaxes[3] = {22, 38, 54};
```
295-338: Consider using `std::set` for duplicate detection.

The current implementation performs linear searches through the `usedLayer`, `usedIPhi`, and `usedIT` vectors for each hit, resulting in O(n²) complexity. Since `<set>` is already included (Line 51), using `std::set` would improve this to O(n log n).

♻️ Proposed refactor using std::set

```diff
- std::vector<float> usedLayer;
- std::vector<float> usedIPhi;
- std::vector<float> usedIT;
+ std::set<float> usedLayer;
+ std::set<float> usedIPhi;
+ std::set<float> usedIT;

 // ... in the loop:
- bool foundLayer = false;
- for (float i : usedLayer)
- {
-   if (coords[0] == i)
-   {
-     foundLayer = true;
-     break;
-   }
- }
-
- if (!foundLayer)
- {
-   usedLayer.push_back(coords[0]);
- }
+ usedLayer.insert(coords[0]);

- bool foundIPhi = false;
- for (float i : usedIPhi)
- {
-   if (coords[1] == i)
-   {
-     foundIPhi = true;
-     break;
-   }
- }
-
- if (!foundIPhi)
- {
-   usedIPhi.push_back(coords[1]);
- }
+ usedIPhi.insert(coords[1]);

- bool foundIT = false;
- for (float i : usedIT)
- {
-   if (coords[2] == i)
-   {
-     foundIT = true;
-     break;
-   }
- }
-
- if (!foundIT)
- {
-   usedIT.push_back(coords[2]);
- }
+ usedIT.insert(coords[2]);
```

Note: You'll also need to update the sorting and access patterns later (Lines 392-394, 409) to work with `std::set` iterators.
833-833: Consider extracting the event selection logic for clarity.

The conditional combines run number checks with different laser event flags, making it harder to understand at a glance. Extracting this logic to a named helper function would improve readability.

♻️ Proposed refactor

Add a helper method to the class:

```cpp
bool LaserClusterizer::isValidLaserEvent() const
{
  const auto runNumber = eventHeader->get_RunNumber();
  if (runNumber > 66153)
  {
    return m_laserEventInfo->isGl1LaserEvent();
  }
  return m_laserEventInfo->isLaserEvent();
}
```

Then simplify the conditional:

```diff
- if ((eventHeader->get_RunNumber() > 66153 && !m_laserEventInfo->isGl1LaserEvent()) || (eventHeader->get_RunNumber() <= 66153 && !m_laserEventInfo->isLaserEvent()))
+ if (!isValidLaserEvent())
  {
    return Fun4AllReturnCodes::EVENT_OK;
  }
```

offline/framework/fun4allraw/SingleTriggeredInput.cc (2)

355-359: Static local variable `firstclockarray` may not behave as intended.

The `static bool firstclockarray` is shared across all instances of `SingleTriggeredInput` and all iterations of the pid loop. It will only log for the very first pid encountered across the entire program lifetime, not once per instance or once per pool fill. If the intent is to log the first clock for each instance, this should be a member variable instead.

Consider using a member variable if per-instance logging is desired

```diff
- static bool firstclockarray=true;
- if(firstclockarray){
-   std::cout << "first clock call pid " << pid << " m_bclkarray_map[pid][0] : " << m_bclkarray_map[pid][0] << std::endl;
-   firstclockarray=false;
- }
+ if(m_firstClockLogged.insert(pid).second) {
+   std::cout << "first clock call pid " << pid << " m_bclkarray_map[pid][0] : " << m_bclkarray_map[pid][0] << std::endl;
+ }
```

This requires adding `std::unordered_set<int> m_firstClockLogged;` as a member variable.

437-491: Consider extracting the repeated event-fetch-with-file-handling pattern.

The pattern of fetching the next event with file close/reopen handling appears three times in `FillEventVector()` (lines 391-401, 440-448, 482-490). This duplication increases maintenance burden and risk of inconsistent behavior.

Consider extracting to a helper method

```cpp
// Helper method to fetch next event, handling file transitions
Event* SingleTriggeredInput::FetchNextEvent()
{
  Event* evt = GetEventIterator()->getNextEvent();
  while (!evt)
  {
    fileclose();
    if (OpenNextFile() == InputFileHandlerReturnCodes::FAILURE)
    {
      FilesDone(1);
      return nullptr;
    }
    evt = GetEventIterator()->getNextEvent();
  }
  return evt;
}
```

Then replace the duplicated blocks with:

```cpp
evt = FetchNextEvent();
if (!evt)
{
  return -1;
}
```

Additionally, consider making the retry limit (5) a named constant:

```cpp
static constexpr int kMaxClockRecoveryRetries = 5;
```

offline/packages/tpc/TpcClusterizer.cc (1)
1205-1211: Guard the geometry down-casts to avoid UB.

`static_cast` assumes the container always returns `PHG4TpcGeomv1`. If a different version appears, this is undefined behavior. Consider `dynamic_cast` with a null check (or a checked helper) to fail fast.

♻️ Safer cast pattern (apply similarly to g2/g3)

```diff
- auto *g1 = static_cast<PHG4TpcGeomv1*> (geom->GetFirstLayerCellGeom()); // cast because << not in the base class
+ auto *g1 = dynamic_cast<PHG4TpcGeomv1*>(geom->GetFirstLayerCellGeom());
+ if (!g1)
+ {
+   std::cout << PHWHERE << "Unexpected TPC geometry type for first layer" << std::endl;
+   return Fun4AllReturnCodes::ABORTRUN;
+ }
```

offline/framework/fun4allraw/Fun4AllStreamingInputManager.cc (1)

797-797: Minor: Trailing whitespace.

Line 797 has a trailing space after the closing brace.

Fix

```diff
- }
+ }
```

offline/packages/trackreco/PHActsTrkFitter.cc (1)

316-338: Guard the silicon seed lookup before calling `get()`.

`m_siliconSeeds->get(siid)` is called before checking `siid == std::numeric_limits<unsigned int>::max()`. If `get()` doesn't tolerate invalid IDs, this can read past bounds. Consider guarding first, or confirm that `get()` safely returns `nullptr` for invalid IDs.

Suggested defensive guard

```diff
- auto *siseed = m_siliconSeeds->get(siid);
- if (siseed)
- {
-   silicon_crossing = siseed->get_crossing();
- }
+ TrackSeed* siseed = nullptr;
+ if (siid != std::numeric_limits<unsigned int>::max())
+ {
+   siseed = m_siliconSeeds->get(siid);
+   if (siseed)
+   {
+     silicon_crossing = siseed->get_crossing();
+   }
+ }
```

offline/QA/Tracking/MicromegasClusterQA.h (1)
54-58: Clarify the sample range as inclusive/exclusive.

The implementation uses a half-open range (`< m_sample_max`), but the setter/member comments read like inclusive bounds. Clarifying avoids off-by-one misconfigurations.

✏️ Suggested comment tweak

```diff
-  /// set min sample for signal hits
+  /// set min sample for signal hits (inclusive)
   void set_sample_min(uint16_t value) { m_sample_min = value; }

-  /// set max sample for signal hits
+  /// set max sample for signal hits (exclusive)
   void set_sample_max(uint16_t value) { m_sample_max = value; }

-  /// min sample for signal
+  /// min sample for signal (inclusive)
   uint16_t m_sample_min = 0;

-  /// max sample for signal
+  /// max sample for signal (exclusive)
   uint16_t m_sample_max = 1024;
```

Also applies to: 108-113

offline/QA/Tracking/MicromegasClusterQA.cc (1)

76-78: Add a quick sanity check for the sample range.

If `m_sample_min >= m_sample_max`, the new gating silently drops all clusters. A simple assert (or warning) in `InitRun` would make misconfigurations obvious.

✅ Minimal guard

```diff
  std::cout << "MicromegasClusterQA::InitRun - m_sample_min: " << m_sample_min << std::endl;
  std::cout << "MicromegasClusterQA::InitRun - m_sample_max: " << m_sample_max << std::endl;
+ assert(m_sample_min < m_sample_max);
```

offline/packages/micromegas/MicromegasCombinedDataDecoder.cc (1)

261-265: Gate the duplicate-hit log behind Verbosity to avoid log spam.

This keeps logging consistent with the rest of the function and reduces overhead.

♻️ Proposed tweak

```diff
- if (hit)
- {
-   std::cout << "MicromegasCombinedDataDecoder::process_event - duplicated hit, hitsetkey: " << hitsetkey << " strip: " << strip << std::endl;
-   continue;
- }
+ if (hit)
+ {
+   if (Verbosity())
+   {
+     std::cout << "MicromegasCombinedDataDecoder::process_event - duplicated hit, hitsetkey: " << hitsetkey << " strip: " << strip << std::endl;
+   }
+   continue;
+ }
```

generators/Herwig/HepMCTrigger/HepMCJetTrigger.cc (1)
130-133: Confirm the hard |eta| cut is intended (or make it configurable).

This introduces a fixed acceptance cut before the pT threshold. Please confirm this matches the physics requirement, or consider exposing it as a parameter for flexibility.

offline/packages/Skimmers/Trigger/TriggerDSTSkimmer.cc (1)

54-54: Replace the magic GL1 packet ID with a named constant.

Using a named constant will make the intent clearer and reduce maintenance risk if the ID changes.

♻️ Proposed refactor

```diff
+namespace
+{
+constexpr int kGL1PacketNodeId = 14001;
+}  // namespace
+
- Gl1Packet *_gl1PacketInfo = findNode::getClass<Gl1Packet>(topNode, 14001);
+ Gl1Packet *_gl1PacketInfo = findNode::getClass<Gl1Packet>(topNode, kGL1PacketNodeId);
```

generators/PHPythia8/PHPythia8.cc (1)

62-67: Prefer an RAII guard for cout formatting.

The manual save/restore pairs are easy to miss on future edits. A tiny guard makes restoration automatic on all paths (including early returns).

♻️ Suggested pattern (apply in ctor / Init / process_event)

```diff
+namespace
+{
+struct CoutFormatGuard
+{
+  explicit CoutFormatGuard(std::ostream& os) : _os(os), _old(nullptr) { _old.copyfmt(os); }
+  ~CoutFormatGuard() { _os.copyfmt(_old); }
+  std::ostream& _os;
+  std::ios _old;
+};
+}  // namespace
@@
- std::ios old_state(nullptr);
- old_state.copyfmt(std::cout);
- m_Pythia8.reset(new Pythia8::Pythia(thePath));
- std::cout.copyfmt(old_state);
+ CoutFormatGuard cout_guard(std::cout);
+ m_Pythia8.reset(new Pythia8::Pythia(thePath));
@@
- std::ios old_state(nullptr);
- old_state.copyfmt(std::cout); // save current state
+ CoutFormatGuard cout_guard(std::cout);
@@
- std::cout.copyfmt(old_state); // restore state to saved state
- return Fun4AllReturnCodes::ABORTRUN;
+ return Fun4AllReturnCodes::ABORTRUN;
@@
- std::cout.copyfmt(old_state); // restore state to saved state
```

Also applies to: 126-133, 210-213, 288-289, 309-310
generators/Herwig/HepMCTrigger/HepMCParticleTrigger.h (1)
83-87: Align in-class defaults with constructor initialization.

`_thePtLow` is initialized to `-999.9` in the header but to `0` (then overridden) in the constructor. Consider aligning the in-class default or removing it to avoid drift.

♻️ Possible adjustment

```diff
- float _thePtLow{-999.9};
+ float _thePtLow{0};
```

offline/framework/fun4all/Fun4AllServer.cc (1)

131-134: Verify the cout baseline is intended to be the InitAll snapshot.

Line 131 captures `std::cout` once; subsequent restores reset formatting to that initial state, which can override caller formatting changes made after InitAll. If the intent is "restore to pre-call state," consider capturing/restoring locally around each call (or refreshing the saved state).

generators/Herwig/HepMCTrigger/Makefile.am (1)

18-20: Consider linking the new lib in testexternals for a link check.

Line 19 adds `libHepMCParticleTrigger.la`, but `testexternals` still only links `libHepMCJetTrigger.la`, so the new lib isn't link-checked. Consider including it there.

♻️ Proposed fix

```diff
-testexternals_LDADD = libHepMCJetTrigger.la
+testexternals_LDADD = libHepMCJetTrigger.la libHepMCParticleTrigger.la
```
```cpp
HepMCParticleTrigger::HepMCParticleTrigger(float trigger_thresh, int n_incom, bool up_lim, const std::string& name)
  : SubsysReco(name)
  , threshold(trigger_thresh)
  , goal_event_number(n_incom)
  , set_event_limit(up_lim)
  , _theEtaHigh(1.1)
  , _theEtaLow(-1.1)
  , _thePtHigh(999.9)
  , _thePtLow(0)
  , _thePHigh(999.9)
  , _thePLow(-999.9)
  , _thePzHigh(999.9)
  , _thePzLow(-999.9)
  ,

  _doEtaHighCut(true)
  , _doEtaLowCut(true)
  , _doBothEtaCut(true)
  ,

  _doAbsEtaHighCut(false)
  , _doAbsEtaLowCut(false)
  , _doBothAbsEtaCut(false)
  ,

  _doPtHighCut(false)
  , _doPtLowCut(false)
  , _doBothPtCut(false)
  ,
  _doPHighCut(false)
  , _doPLowCut(false)
  , _doBothPCut(false)
  ,

  _doPzHighCut(false)
  , _doPzLowCut(false)
  , _doBothPzCut(false)
```
Fix initializer list syntax (standalone commas).
The empty comma-only lines create empty initializers and will not compile.
🐛 Proposed fix

```diff
  , _thePzHigh(999.9)
  , _thePzLow(-999.9)
- ,
-
- _doEtaHighCut(true)
+ , _doEtaHighCut(true)
  , _doEtaLowCut(true)
  , _doBothEtaCut(true)
- ,
-
- _doAbsEtaHighCut(false)
+ , _doAbsEtaHighCut(false)
  , _doAbsEtaLowCut(false)
  , _doBothAbsEtaCut(false)
- ,
-
- _doPtHighCut(false)
+ , _doPtHighCut(false)
  , _doPtLowCut(false)
  , _doBothPtCut(false)
- ,
- _doPHighCut(false)
+ , _doPHighCut(false)
  , _doPLowCut(false)
  , _doBothPCut(false)
- ,
-
- _doPzHighCut(false)
+ , _doPzHighCut(false)
  , _doPzLowCut(false)
  , _doBothPzCut(false)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```cpp
HepMCParticleTrigger::HepMCParticleTrigger(float trigger_thresh, int n_incom, bool up_lim, const std::string& name)
  : SubsysReco(name)
  , threshold(trigger_thresh)
  , goal_event_number(n_incom)
  , set_event_limit(up_lim)
  , _theEtaHigh(1.1)
  , _theEtaLow(-1.1)
  , _thePtHigh(999.9)
  , _thePtLow(0)
  , _thePHigh(999.9)
  , _thePLow(-999.9)
  , _thePzHigh(999.9)
  , _thePzLow(-999.9)
  , _doEtaHighCut(true)
  , _doEtaLowCut(true)
  , _doBothEtaCut(true)
  , _doAbsEtaHighCut(false)
  , _doAbsEtaLowCut(false)
  , _doBothAbsEtaCut(false)
  , _doPtHighCut(false)
  , _doPtLowCut(false)
  , _doBothPtCut(false)
  , _doPHighCut(false)
  , _doPLowCut(false)
  , _doBothPCut(false)
  , _doPzHighCut(false)
  , _doPzLowCut(false)
  , _doBothPzCut(false)
```
🤖 Prompt for AI Agents
In `@generators/Herwig/HepMCTrigger/HepMCParticleTrigger.cc` around lines 22 - 58,
The constructor HepMCParticleTrigger's initializer list contains standalone
commas that create empty initializers and break compilation; edit the
HepMCParticleTrigger(...) initializer list to remove the stray comma-only lines
so each member (threshold, goal_event_number, set_event_limit, _theEtaHigh,
_theEtaLow, _thePtHigh, _thePtLow, _thePHigh, _thePLow, _thePzHigh, _thePzLow,
_doEtaHighCut, _doEtaLowCut, _doBothEtaCut, _doAbsEtaHighCut, _doAbsEtaLowCut,
_doBothAbsEtaCut, _doPtHighCut, _doPtLowCut, _doBothPtCut, _doPHighCut,
_doPLowCut, _doBothPCut, _doPzHighCut, _doPzLowCut, _doBothPzCut) is separated
by a single comma with no empty lines; ensure no trailing commas or blank comma
lines remain in the initializer list for HepMCParticleTrigger.
```cpp
void HepMCParticleTrigger::SetAbsEtaHigh(double pt)
{
  _theEtaHigh = pt;
  _doAbsEtaHighCut = true;
  if (_doAbsEtaLowCut)
  {
    _doBothAbsEtaCut = true;
  }
  return;
}
void HepMCParticleTrigger::SetAbsEtaLow(double pt)
{
  _theEtaLow = pt;
  _doAbsEtaLowCut = true;
  if (_doAbsEtaHighCut)
  {
    _doBothAbsEtaCut = true;
  }
  return;
}
void HepMCParticleTrigger::SetAbsEtaHighLow(double ptHigh, double ptLow)
{
  _theEtaHigh = ptHigh;
  _doAbsEtaHighCut = true;
  _theEtaLow = ptLow;
  _doAbsEtaLowCut = true;
  _doBothAbsEtaCut = true;
  return;
```
AbsEta setters overwrite signed-eta thresholds (risk of asymmetric selection).

`SetAbsEta*` writes into `_theEtaHigh`/`_theEtaLow`, which are also used by the always-enabled signed-eta cuts. This means `SetAbsEtaLow(>0)` will also set the signed-eta lower bound > 0, effectively discarding all negative-eta particles. Unless that is intended, you should separate abs-eta thresholds (or disable signed-eta cuts when abs-eta cuts are enabled).

🔧 Suggested direction (separate abs-eta thresholds)

```diff
 void HepMCParticleTrigger::SetAbsEtaHigh(double pt)
 {
-  _theEtaHigh = pt;
+  _theAbsEtaHigh = pt;
   _doAbsEtaHighCut = true;
   if (_doAbsEtaLowCut)
   {
     _doBothAbsEtaCut = true;
   }
   return;
 }
 void HepMCParticleTrigger::SetAbsEtaLow(double pt)
 {
-  _theEtaLow = pt;
+  _theAbsEtaLow = pt;
   _doAbsEtaLowCut = true;
   if (_doAbsEtaHighCut)
   {
     _doBothAbsEtaCut = true;
   }
   return;
 }
 void HepMCParticleTrigger::SetAbsEtaHighLow(double ptHigh, double ptLow)
 {
-  _theEtaHigh = ptHigh;
+  _theAbsEtaHigh = ptHigh;
   _doAbsEtaHighCut = true;
-  _theEtaLow = ptLow;
+  _theAbsEtaLow = ptLow;
   _doAbsEtaLowCut = true;
   _doBothAbsEtaCut = true;
   return;
 }

- if ((_doAbsEtaHighCut || _doBothAbsEtaCut) && std::abs(eta) > _theEtaHigh)
+ if ((_doAbsEtaHighCut || _doBothAbsEtaCut) && std::abs(eta) > _theAbsEtaHigh)
  {
    continue;
  }
- if ((_doAbsEtaLowCut || _doBothAbsEtaCut) && std::abs(eta) < _theEtaLow)
+ if ((_doAbsEtaLowCut || _doBothAbsEtaCut) && std::abs(eta) < _theAbsEtaLow)
  {
    continue;
  }
```

Note: this requires adding `_theAbsEtaHigh`/`_theAbsEtaLow` members and constructor initialization in the header/initializer list.
Also applies to: 312-318
🤖 Prompt for AI Agents
In `@generators/Herwig/HepMCTrigger/HepMCParticleTrigger.cc` around lines 239 -
266, The AbsEta setters (SetAbsEtaHigh, SetAbsEtaLow, SetAbsEtaHighLow)
currently write into the signed-eta members _theEtaHigh/_theEtaLow, causing
signed-eta cuts to be unintentionally modified; change the implementation to use
dedicated abs-eta members (e.g. add _theAbsEtaHigh and _theAbsEtaLow initialized
in the constructor) and have SetAbsEtaHigh/SetAbsEtaLow/SetAbsEtaHighLow assign
those new members and set _doAbsEtaHighCut/_doAbsEtaLowCut/_doBothAbsEtaCut, or
alternatively explicitly disable the signed-eta cuts inside those setters (clear
_doEtaHighCut/_doEtaLowCut) so the signed-eta thresholds are not used when
absolute-eta cuts are enabled; update any other occurrences (e.g. the similar
block at the other location noted) to use the new abs members or disabling
logic.
```cpp
std::string fullcmd = m_RunBeforeOpeningScript + " " + m_OpeningArgs;
for (const auto& iter : stringvec)
{
  fullcmd += " " + iter;
}
```
Potential shell command injection vulnerability.
The command is built by string concatenation without sanitizing m_OpeningArgs or elements of stringvec. If filenames or arguments contain shell metacharacters (e.g., ; rm -rf /, $(cmd), or backticks), they will be interpreted by the shell.
This is a security risk if:
- File paths come from untrusted sources (e.g., user-provided list files)
- `m_OpeningArgs` is set from external input
Consider quoting or escaping the arguments, or using an exec-style API that doesn't invoke a shell.
Minimal mitigation: shell-quote filenames

```diff
+#include <sstream>
+
+// Helper to shell-quote a string
+static std::string ShellQuote(const std::string& s) {
+  std::string result = "'";
+  for (char c : s) {
+    if (c == '\'') {
+      result += "'\\''";
+    } else {
+      result += c;
+    }
+  }
+  result += "'";
+  return result;
+}

- std::string fullcmd = m_RunBeforeOpeningScript + " " + m_OpeningArgs;
+ std::string fullcmd = ShellQuote(m_RunBeforeOpeningScript);
+ if (!m_OpeningArgs.empty()) {
+   fullcmd += " " + m_OpeningArgs; // User is responsible for quoting args
+ }
  for (const auto& iter : stringvec)
  {
-   fullcmd += " " + iter;
+   fullcmd += " " + ShellQuote(iter);
  }
```

🤖 Prompt for AI Agents
In `@offline/framework/fun4all/InputFileHandler.cc` around lines 183 - 187, The
code builds a shell command by concatenating m_RunBeforeOpeningScript,
m_OpeningArgs and elements of stringvec into fullcmd which allows shell
injection; fix by avoiding passing a single shell string—either construct an
argument vector and invoke the helper directly with an exec/spawn family call
(use execvp/posix_spawn or equivalent) or, if retaining shell execution,
robustly escape/shell-quote each element (m_OpeningArgs and each element of
stringvec) before concatenation; update the call site that uses fullcmd to
accept an argv-style array or the sanitized, safely-quoted command and reference
the symbols m_RunBeforeOpeningScript, m_OpeningArgs, stringvec and fullcmd when
making changes.
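The single-quote escaping used by the mitigation can be exercised standalone. The sketch below is illustrative (the `shellQuote` name is made up; it is not an existing helper in the codebase): each embedded `'` becomes `'\''` (close the quoted region, emit an escaped quote, reopen it), and inside single quotes POSIX shells interpret nothing else.

```cpp
#include <cassert>
#include <string>

// Wrap s in single quotes for safe use in a POSIX shell command line.
// Every embedded ' is replaced by the four-character sequence '\''.
std::string shellQuote(const std::string& s)
{
  std::string out = "'";
  for (char c : s)
  {
    if (c == '\'')
    {
      out += "'\\''";  // close quote, escaped ', reopen quote
    }
    else
    {
      out += c;  // everything else is inert inside single quotes
    }
  }
  out += "'";
  return out;
}
```

With this, a filename like `a;rm -rf /` is passed to the shell as a single literal argument rather than two commands.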
```diff
 uint64_t lower_limit = m_mvtx_is_triggered ? select_crossings : select_crossings - m_mvtx_bco_range - m_mvtx_negative_bco;
 uint64_t upper_limit = m_mvtx_is_triggered ? select_crossings + m_mvtx_bco_range : select_crossings;

 for (auto& [bco, hitinfo] : m_MvtxRawHitMap)
 {
-  while (select_crossings <= m_MvtxRawHitMap.begin()->first && m_MvtxRawHitMap.begin()->first <= select_crossings + m_mvtx_bco_range)  // triggered
+  if (bco < lower_limit)
   {
     if (Verbosity() > 2)
     {
       std::cout << "Adding 0x" << std::hex << m_MvtxRawHitMap.begin()->first
                 << " ref: 0x" << select_crossings << std::dec << std::endl;
     }
     for (auto *mvtxFeeIdInfo : m_MvtxRawHitMap.begin()->second.MvtxFeeIdInfoVector)
     {
       if (Verbosity() > 1)
       {
         mvtxFeeIdInfo->identify();
       }
       mvtxEvtHeader->AddFeeIdInfo(mvtxFeeIdInfo);
       delete mvtxFeeIdInfo;
     }
     m_MvtxRawHitMap.begin()->second.MvtxFeeIdInfoVector.clear();
     mvtxEvtHeader->AddL1Trg(m_MvtxRawHitMap.begin()->second.MvtxL1TrgBco);
     continue;
   }
   if (bco > upper_limit)
   {
     break;
   }
```
Potential unsigned underflow in lower_limit calculation.
When `m_mvtx_is_triggered` is false, Line 974 computes:

`lower_limit = select_crossings - m_mvtx_bco_range - m_mvtx_negative_bco`

Since `select_crossings = m_RefBCO + m_mvtx_bco_range` (when not triggered), this simplifies to `m_RefBCO - m_mvtx_negative_bco`. If `m_RefBCO < m_mvtx_negative_bco`, this will underflow, causing `lower_limit` to wrap to a very large value and the loop to skip all entries.

Consider adding underflow protection consistent with line 839:

Proposed fix

```diff
- uint64_t lower_limit = m_mvtx_is_triggered ? select_crossings : select_crossings - m_mvtx_bco_range - m_mvtx_negative_bco;
+ uint64_t lower_limit = m_mvtx_is_triggered ? select_crossings : (select_crossings > m_mvtx_bco_range + m_mvtx_negative_bco ? select_crossings - m_mvtx_bco_range - m_mvtx_negative_bco : 0);
```

🤖 Prompt for AI Agents
In `@offline/framework/fun4allraw/Fun4AllStreamingInputManager.cc` around lines
974 - 986, The computation of lower_limit in Fun4AllStreamingInputManager::...
can underflow when m_mvtx_is_triggered is false (lower_limit = select_crossings
- m_mvtx_bco_range - m_mvtx_negative_bco); change this to perform a safe
subtraction and clamp to zero (or use a signed/extended intermediate like
int64_t temp = (int64_t)select_crossings - (int64_t)m_mvtx_bco_range -
(int64_t)m_mvtx_negative_bco; lower_limit = temp < 0 ? 0 : (uint64_t)temp) so
that when m_RefBCO < m_mvtx_negative_bco you do not wrap to a huge unsigned
value; apply the same pattern/guarding used earlier around line 839 and ensure
the for-loop over m_MvtxRawHitMap continues to use lower_limit and upper_limit
as before.
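The clamping pattern this fix (and the one at line 839) relies on can be isolated into a tiny helper to make the wraparound behavior concrete. `clampedSub` is a hypothetical name for illustration, not part of Fun4All:

```cpp
#include <cassert>
#include <cstdint>

// Saturating subtraction for unsigned BCO arithmetic: instead of wrapping
// around to a huge value when b > a, the result is clamped to 0.
uint64_t clampedSub(uint64_t a, uint64_t b)
{
  return (a < b) ? 0 : a - b;
}
```

The third assertion below shows why the guard matters: plain `uint64_t` subtraction wraps modulo 2^64, so an unguarded `m_RefBCO - m_mvtx_negative_bco` with a small `m_RefBCO` produces an enormous lower bound.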
```diff
 int Fun4AllStreamingInputManager::FillMvtxPool()
 {
-  uint64_t ref_bco_minus_range = m_RefBCO < m_mvtx_bco_range ? m_mvtx_bco_range : m_RefBCO - m_mvtx_bco_range;
+  uint64_t ref_bco_minus_range = m_RefBCO < m_mvtx_negative_bco ? m_mvtx_negative_bco : m_RefBCO - m_mvtx_negative_bco;
```
Inconsistent underflow protection value.
Line 1367 defaults to `m_mvtx_negative_bco` when `m_RefBCO < m_mvtx_negative_bco`:

`uint64_t ref_bco_minus_range = m_RefBCO < m_mvtx_negative_bco ? m_mvtx_negative_bco : m_RefBCO - m_mvtx_negative_bco;`

This is inconsistent with:

- Line 839 (`FillMvtx`): defaults to `0`
- Lines 1243-1247 (`FillInttPool`): defaults to `0`

Defaulting to `m_mvtx_negative_bco` doesn't represent a valid minimum BCO value. This should likely default to `0` for consistency.

Proposed fix

```diff
- uint64_t ref_bco_minus_range = m_RefBCO < m_mvtx_negative_bco ? m_mvtx_negative_bco : m_RefBCO - m_mvtx_negative_bco;
+ uint64_t ref_bco_minus_range = m_RefBCO < m_mvtx_negative_bco ? 0 : m_RefBCO - m_mvtx_negative_bco;
```

🤖 Prompt for AI Agents
In `@offline/framework/fun4allraw/Fun4AllStreamingInputManager.cc` at line 1367,
The calculation of ref_bco_minus_range in Fun4AllStreamingInputManager.cc uses a
fallback of m_mvtx_negative_bco when m_RefBCO < m_mvtx_negative_bco, which is
inconsistent with other places (FillMvtx, FillInttPool) that default to 0;
change the ternary so that when m_RefBCO < m_mvtx_negative_bco the result
defaults to 0 instead of m_mvtx_negative_bco (i.e., update the computation of
ref_bco_minus_range using m_RefBCO, m_mvtx_negative_bco to return 0 on
underflow) and ensure any logic relying on ref_bco_minus_range continues to
treat 0 as the minimum valid BCO.
```cpp
iphimin = std::min<double>(phi, iphimin);
iphimax = std::max<double>(phi, iphimax);
ilaymin = std::min(lay, ilaymin);
ilaymax = std::max(lay, ilaymax);
itmin = std::min<float>(tbin, itmin);
itmax = std::max<float>(tbin, itmax);
```
Variable naming inconsistency: iphimin/iphimax store phi (radians), not iphi (integer index).
Lines 554-555 assign phi (a double representing an angle in radians) to iphimin/iphimax, but the variable names suggest they should store integer phi indices. This causes:
- Precision loss when storing doubles in `int` variables
- Confusing semantics - the variables are used to compute `phisize` on line 631
Either rename the variables to phimin/phimax and change their type to double, or use the integer iphi value from coords[1] instead.
🐛 Proposed fix (if tracking phi angles)

```diff
- int iphimin = 6666;
- int iphimax = -1;
+ double phimin = std::numeric_limits<double>::max();
+ double phimax = std::numeric_limits<double>::lowest();
  ...
- iphimin = std::min<double>(phi, iphimin);
- iphimax = std::max<double>(phi, iphimax);
+ phimin = std::min(phi, phimin);
+ phimax = std::max(phi, phimax);
  ...
- int phisize = iphimax - iphimin + 1;
+ double phisize = phimax - phimin; // Note: units are radians, not bins
```

🐛 Alternative fix (if tracking iphi indices)

```diff
- iphimin = std::min<double>(phi, iphimin);
- iphimax = std::max<double>(phi, iphimax);
+ iphimin = std::min(static_cast<int>(coords[1]), iphimin);
+ iphimax = std::max(static_cast<int>(coords[1]), iphimax);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```cpp
iphimin = std::min(static_cast<int>(coords[1]), iphimin);
iphimax = std::max(static_cast<int>(coords[1]), iphimax);
ilaymin = std::min(lay, ilaymin);
ilaymax = std::max(lay, ilaymax);
itmin = std::min<float>(tbin, itmin);
itmax = std::max<float>(tbin, itmax);
```
🤖 Prompt for AI Agents
In `@offline/packages/tpc/Tpc3DClusterizer.cc` around lines 554 - 559, The
variables iphimin/iphimax are being assigned the double angle phi (radians)
causing type/semantic mismatch; either (A) treat them as angle bounds: rename
iphimin/iphimax to phimin/phimax and change their type to double, update all
uses (including phisize calculation) to compute angular size from phimin/phimax,
or (B) keep the iphi semantics and assign the integer iphi (from coords[1])
instead of phi so iphimin/iphimax remain integer indices; update any downstream
code that computes phisize on the basis of index vs angle accordingly (check
uses of phi, iphi, coords[1], phisize to ensure consistency).
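The precision-loss point is easy to demonstrate in isolation. This sketch uses illustrative names (it is not the Tpc3DClusterizer code): `std::min<double>` compares correctly in double, but returning the result through an `int` silently truncates the fractional part of phi.

```cpp
#include <cassert>
#include <algorithm>

// Mimics the flagged pattern: a double phi is fed into an int accumulator.
// The comparison happens in double, but the return value is narrowed to
// int, truncating toward zero.
int minIntoIntAccumulator(double phi, int iphimin)
{
  return std::min<double>(phi, iphimin);  // implicit double -> int narrowing
}
```

For example, a phi of 1.9 rad stored against the sentinel 6666 comes back as the integer 1, which is neither a valid phi angle nor a valid pad index.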
```cpp
      // if (m_output){
      float fX[20] = {0};
      int n = 0;
      fX[n++] = m_event;
      fX[n++] = m_seed;
      fX[n++] = clusX;
      fX[n++] = clusY;
      fX[n++] = clusZ;
      fX[n++] = clusR;
      fX[n++] = clusPhi;
      fX[n++] = clusiPhi;
      fX[n++] = clusT;
      fX[n++] = adcSum;
      fX[n++] = maxAdc;
      fX[n++] = (layerSum / adcSum);
      fX[n++] = maxsector;
      fX[n++] = maxside;
      fX[n++] = nHits;
      fX[n++] = phisize;
      fX[n++] = tsize;
      fX[n++] = lsize;
      m_clusterNT->Fill(fX);
      // }
```
Potential null pointer dereference: m_clusterNT->Fill() called unconditionally.
m_clusterNT is only created when m_output is true (line 120), but m_clusterNT->Fill(fX) at line 680 is called unconditionally. If m_output is false, this will crash with a null pointer dereference.
The commented // if (m_output){ at line 659 and // } at line 681 suggest this guard was accidentally removed.
🐛 Proposed fix

```diff
-  // if (m_output){
   float fX[20] = {0};
   int n = 0;
   fX[n++] = m_event;
   ...
   fX[n++] = lsize;
-  m_clusterNT->Fill(fX);
-  // }
+  if (m_output)
+  {
+    m_clusterNT->Fill(fX);
+  }
```

📝 Committable suggestion
```cpp
  if (m_output)
  {
    float fX[20] = {0};
    int n = 0;
    fX[n++] = m_event;
    fX[n++] = m_seed;
    fX[n++] = clusX;
    fX[n++] = clusY;
    fX[n++] = clusZ;
    fX[n++] = clusR;
    fX[n++] = clusPhi;
    fX[n++] = clusiPhi;
    fX[n++] = clusT;
    fX[n++] = adcSum;
    fX[n++] = maxAdc;
    fX[n++] = (layerSum / adcSum);
    fX[n++] = maxsector;
    fX[n++] = maxside;
    fX[n++] = nHits;
    fX[n++] = phisize;
    fX[n++] = tsize;
    fX[n++] = lsize;
    m_clusterNT->Fill(fX);
  }
```
🤖 Prompt for AI Agents
In `@offline/packages/tpc/Tpc3DClusterizer.cc` around lines 659-681: the block that builds the fX array and calls m_clusterNT->Fill(fX) must be guarded, because m_clusterNT is only created when m_output is true. Restore the original if (m_output) guard around the block (or at minimum check m_clusterNT != nullptr before calling Fill) so the fX construction and the Fill call only execute when m_output is true and m_clusterNT is valid.
```cpp
      if (m_pp_mode)
      {
        if (m_enable_crossing_estimate && crossing == SHRT_MAX)
        {
          // this only happens if there is a silicon seed but no assigned INTT crossing, and only in pp_mode
          // If there is no INTT crossing, start with the crossing_estimate value, vary up and down, fit, and choose the best chisq/ndf
          use_estimate = true;
          nvary = max_bunch_search;
          if (Verbosity() > 1)
          {
            std::cout << " No INTT crossing: use crossing_estimate " << crossing_estimate << " with nvary " << nvary << std::endl;
          }
        }
        else
        {
          // use INTT crossing
          crossing_estimate = crossing;
        }
      }
```
Avoid using SHRT_MAX as a real crossing when estimates are disabled.
In pp mode, if there’s a silicon seed but no INTT crossing and m_enable_crossing_estimate is false, the code proceeds with crossing_estimate = SHRT_MAX. That sentinel value is likely invalid for drift corrections and fit. Consider skipping the track or defaulting to a safe crossing (e.g., 0) when estimate mode is off.
One possible safeguard (skip when no crossing)

```diff
   if (m_pp_mode)
   {
     if (m_enable_crossing_estimate && crossing == SHRT_MAX)
     {
       // this only happens if there is a silicon seed but no assigned INTT crossing, and only in pp_mode
       // If there is no INTT crossing, start with the crossing_estimate value, vary up and down, fit, and choose the best chisq/ndf
       use_estimate = true;
       nvary = max_bunch_search;
       if (Verbosity() > 1)
       {
         std::cout << " No INTT crossing: use crossing_estimate " << crossing_estimate << " with nvary " << nvary << std::endl;
       }
     }
+    else if (crossing == SHRT_MAX)
+    {
+      if (Verbosity() > 1)
+      {
+        std::cout << " No INTT crossing and estimates disabled; skipping track" << std::endl;
+      }
+      continue;
+    }
     else
     {
       // use INTT crossing
       crossing_estimate = crossing;
     }
   }
```

📝 Committable suggestion
```cpp
      if (m_pp_mode)
      {
        if (m_enable_crossing_estimate && crossing == SHRT_MAX)
        {
          // this only happens if there is a silicon seed but no assigned INTT crossing, and only in pp_mode
          // If there is no INTT crossing, start with the crossing_estimate value, vary up and down, fit, and choose the best chisq/ndf
          use_estimate = true;
          nvary = max_bunch_search;
          if (Verbosity() > 1)
          {
            std::cout << " No INTT crossing: use crossing_estimate " << crossing_estimate << " with nvary " << nvary << std::endl;
          }
        }
        else if (crossing == SHRT_MAX)
        {
          if (Verbosity() > 1)
          {
            std::cout << " No INTT crossing and estimates disabled; skipping track" << std::endl;
          }
          continue;
        }
        else
        {
          // use INTT crossing
          crossing_estimate = crossing;
        }
      }
```
🤖 Prompt for AI Agents
In `@offline/packages/trackreco/PHActsTrkFitter.cc` around lines 391-409: the code currently leaves crossing_estimate set to SHRT_MAX when m_pp_mode is true and m_enable_crossing_estimate is false, which can propagate an invalid sentinel into drift corrections and fits. Update the block referencing m_pp_mode, m_enable_crossing_estimate, crossing, crossing_estimate, and SHRT_MAX to handle the "no INTT crossing" case when estimate mode is disabled by either (1) skipping the track (return/continue) or (2) setting crossing_estimate to a safe default (e.g., 0) and not attempting the fit. Ensure use_estimate and nvary are only set when m_enable_crossing_estimate is true, and add a clear comment explaining the chosen behavior.
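The decision logic described above can be condensed into a small helper. A minimal sketch under assumed names (`resolve_crossing` is hypothetical, not the fitter's API), keeping SHRT_MAX as the "no INTT crossing" sentinel used in the real code:

```cpp
#include <cassert>
#include <climits>
#include <optional>

// Returns the crossing to use, or nothing when the track should be skipped:
// a real INTT crossing wins, the estimate is used only when enabled, and a
// bare SHRT_MAX sentinel with estimates disabled yields std::nullopt.
std::optional<short> resolve_crossing(short crossing,
                                      bool enable_estimate,
                                      short crossing_estimate)
{
  if (crossing != SHRT_MAX)
  {
    return crossing;  // use INTT crossing
  }
  if (enable_estimate)
  {
    return crossing_estimate;  // vary around the estimate during the fit
  }
  return std::nullopt;  // no crossing and estimates disabled: skip track
}
```

The empty optional makes the "skip the track" path explicit at the call site instead of letting the sentinel leak into later corrections.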
```cpp
  StateClusterResidualsQA& setNMvtx(int min, int max)
  {
    m_pending.back().min_mvtx_clusters = min;
    m_pending.back().max_mvtx_clusters = max;
    return *this;
  }
  StateClusterResidualsQA& setNIntt(int min, int max)
  {
    m_pending.back().min_intt_clusters = min;
    m_pending.back().max_intt_clusters = max;
    return *this;
  }
  StateClusterResidualsQA& setNTpc(int min, int max)
  {
    m_pending.back().min_tpc_clusters = min;
    m_pending.back().max_tpc_clusters = max;
    return *this;
  }
  StateClusterResidualsQA& setPhiRange(float min, float max)
  {
    m_pending.back().phi_min = min;
    m_pending.back().phi_max = max;
    return *this;
  }
  StateClusterResidualsQA& setEtaRange(float min, float max)
  {
    m_pending.back().eta_min = min;
    m_pending.back().eta_max = max;
    return *this;
  }
  StateClusterResidualsQA& setPtRange(float min, float max)
  {
    m_pending.back().pt_min = min;
    m_pending.back().pt_max = max;
    return *this;
  }
  StateClusterResidualsQA& setPositiveTracks()
  {
    m_pending.back().charge = 1;
    return *this;
  }
  StateClusterResidualsQA& setNegativeTracks()
  {
    m_pending.back().charge = -1;
    return *this;
  }
```
Avoid UB when setters are called before addHistogram.
All setters assume m_pending is non-empty; calling a setter first dereferences an empty vector.
🐛 Proposed fix

```diff
 StateClusterResidualsQA& setNMvtx(int min, int max)
 {
-  m_pending.back().min_mvtx_clusters = min;
-  m_pending.back().max_mvtx_clusters = max;
+  auto& cfg = current_config();
+  cfg.min_mvtx_clusters = min;
+  cfg.max_mvtx_clusters = max;
   return *this;
 }
 StateClusterResidualsQA& setNIntt(int min, int max)
 {
-  m_pending.back().min_intt_clusters = min;
-  m_pending.back().max_intt_clusters = max;
+  auto& cfg = current_config();
+  cfg.min_intt_clusters = min;
+  cfg.max_intt_clusters = max;
   return *this;
 }
 StateClusterResidualsQA& setNTpc(int min, int max)
 {
-  m_pending.back().min_tpc_clusters = min;
-  m_pending.back().max_tpc_clusters = max;
+  auto& cfg = current_config();
+  cfg.min_tpc_clusters = min;
+  cfg.max_tpc_clusters = max;
   return *this;
 }
 StateClusterResidualsQA& setPhiRange(float min, float max)
 {
-  m_pending.back().phi_min = min;
-  m_pending.back().phi_max = max;
+  auto& cfg = current_config();
+  cfg.phi_min = min;
+  cfg.phi_max = max;
   return *this;
 }
 StateClusterResidualsQA& setEtaRange(float min, float max)
 {
-  m_pending.back().eta_min = min;
-  m_pending.back().eta_max = max;
+  auto& cfg = current_config();
+  cfg.eta_min = min;
+  cfg.eta_max = max;
   return *this;
 }
 StateClusterResidualsQA& setPtRange(float min, float max)
 {
-  m_pending.back().pt_min = min;
-  m_pending.back().pt_max = max;
+  auto& cfg = current_config();
+  cfg.pt_min = min;
+  cfg.pt_max = max;
   return *this;
 }
 StateClusterResidualsQA& setPositiveTracks()
 {
-  m_pending.back().charge = 1;
+  current_config().charge = 1;
   return *this;
 }
 StateClusterResidualsQA& setNegativeTracks()
 {
-  m_pending.back().charge = -1;
+  current_config().charge = -1;
   return *this;
 }
 private:
+ ResidualHistConfig& current_config()
+ {
+   if (m_pending.empty())
+   {
+     m_pending.emplace_back();
+   }
+   return m_pending.back();
+ }
 std::vector<ResidualHistConfig> m_pending;
```

📝 Committable suggestion
```cpp
  StateClusterResidualsQA& setNMvtx(int min, int max)
  {
    auto& cfg = current_config();
    cfg.min_mvtx_clusters = min;
    cfg.max_mvtx_clusters = max;
    return *this;
  }
  StateClusterResidualsQA& setNIntt(int min, int max)
  {
    auto& cfg = current_config();
    cfg.min_intt_clusters = min;
    cfg.max_intt_clusters = max;
    return *this;
  }
  StateClusterResidualsQA& setNTpc(int min, int max)
  {
    auto& cfg = current_config();
    cfg.min_tpc_clusters = min;
    cfg.max_tpc_clusters = max;
    return *this;
  }
  StateClusterResidualsQA& setPhiRange(float min, float max)
  {
    auto& cfg = current_config();
    cfg.phi_min = min;
    cfg.phi_max = max;
    return *this;
  }
  StateClusterResidualsQA& setEtaRange(float min, float max)
  {
    auto& cfg = current_config();
    cfg.eta_min = min;
    cfg.eta_max = max;
    return *this;
  }
  StateClusterResidualsQA& setPtRange(float min, float max)
  {
    auto& cfg = current_config();
    cfg.pt_min = min;
    cfg.pt_max = max;
    return *this;
  }
  StateClusterResidualsQA& setPositiveTracks()
  {
    current_config().charge = 1;
    return *this;
  }
  StateClusterResidualsQA& setNegativeTracks()
  {
    current_config().charge = -1;
    return *this;
  }

 private:
  ResidualHistConfig& current_config()
  {
    if (m_pending.empty())
    {
      m_pending.emplace_back();
    }
    return m_pending.back();
  }
  std::vector<ResidualHistConfig> m_pending;
```
🤖 Prompt for AI Agents
In `@offline/QA/Tracking/StateClusterResidualsQA.h` around lines 56-101: these setters (setNMvtx, setNIntt, setNTpc, setPhiRange, setEtaRange, setPtRange, setPositiveTracks, setNegativeTracks) assume m_pending.back() exists and cause UB if called before addHistogram. Guard each setter by ensuring m_pending is non-empty (e.g., if m_pending.empty(), push_back a default-constructed entry, or throw/assert with a clear message) before accessing m_pending.back(), so callers can safely call any setter before addHistogram; update all listed methods to perform the same presence check and create a default pending entry when missing.
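The lazily-creating accessor from the proposed fix can be demonstrated standalone. `Cfg` and `Builder` below are reduced, hypothetical stand-ins for ResidualHistConfig and StateClusterResidualsQA:

```cpp
#include <cassert>
#include <vector>

struct Cfg
{
  int charge = 0;
};

// Reduced sketch of the guarded accessor: setters lazily create a default
// entry, so calling one before any addHistogram never dereferences an
// empty vector's back().
struct Builder
{
  std::vector<Cfg> pending;

  Cfg& current_config()
  {
    if (pending.empty())
    {
      pending.emplace_back();
    }
    return pending.back();
  }

  Builder& setPositiveTracks()
  {
    current_config().charge = 1;
    return *this;
  }
};
```

Calling `setPositiveTracks()` on a fresh `Builder` creates one default entry instead of invoking undefined behavior on an empty vector.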
Note: Docstrings generation - SUCCESS
Docstrings generation was requested by @blackcathj (see #13 (comment)). The following files were modified:
* `generators/Herwig/HepMCTrigger/HepMCJetTrigger.cc`
* `generators/Herwig/HepMCTrigger/HepMCParticleTrigger.cc`
* `generators/Herwig/HepMCTrigger/HepMCParticleTrigger.h`
* `generators/PHPythia8/PHPythia8.cc`
* `generators/phhepmc/PHHepMCGenEventv1.cc`
* `offline/QA/Calorimeters/CaloValid.cc`
* `offline/QA/Tracking/MicromegasClusterQA.cc`
* `offline/QA/Tracking/MicromegasClusterQA.h`
* `offline/QA/Tracking/StateClusterResidualsQA.cc`
* `offline/QA/Tracking/StateClusterResidualsQA.h`
* `offline/framework/ffamodules/CDBInterface.cc`
* `offline/framework/ffamodules/FlagHandler.h`
* `offline/framework/ffamodules/HeadReco.cc`
* `offline/framework/ffamodules/HeadReco.h`
* `offline/framework/ffamodules/SyncReco.h`
* `offline/framework/ffamodules/Timing.cc`
* `offline/framework/ffamodules/Timing.h`
* `offline/framework/fun4all/Fun4AllServer.cc`
* `offline/framework/fun4all/Fun4AllServer.h`
* `offline/framework/fun4all/InputFileHandler.cc`
* `offline/framework/fun4all/InputFileHandler.h`
* `offline/framework/fun4allraw/Fun4AllStreamingInputManager.cc`
* `offline/framework/fun4allraw/SingleMicromegasPoolInput_v2.cc`
* `offline/framework/fun4allraw/SingleMicromegasPoolInput_v2.h`
* `offline/framework/fun4allraw/SingleMvtxPoolInput.cc`
* `offline/framework/fun4allraw/SingleTriggeredInput.cc`
* `offline/framework/fun4allraw/SingleTriggeredInput.h`
* `offline/packages/CaloEmbedding/CombineTowerInfo.cc`
* `offline/packages/CaloEmbedding/CombineTowerInfo.h`
* `offline/packages/CaloEmbedding/CopyIODataNodes.cc`
* `offline/packages/CaloEmbedding/CopyIODataNodes.h`
* `offline/packages/CaloReco/PhotonClusterBuilder.cc`
* `offline/packages/KFParticle_sPHENIX/KFParticle_sPHENIX.cc`
* `offline/packages/PHGenFitPkg/PHGenFit/Fitter.cc`
* `offline/packages/PHGenFitPkg/PHGenFit/Track.cc`
* `offline/packages/Skimmers/Trigger/TriggerDSTSkimmer.cc`
* `offline/packages/Skimmers/Trigger/TriggerDSTSkimmer.h`
* `offline/packages/jetbackground/DetermineTowerBackground.cc`
* `offline/packages/jetbackground/DetermineTowerBackground.h`
* `offline/packages/mbd/MbdCalib.cc`
* `offline/packages/mbd/MbdEvent.cc`
* `offline/packages/mbd/MbdReco.cc`
* `offline/packages/mbd/MbdSig.h`
* `offline/packages/micromegas/MicromegasClusterizer.cc`
* `offline/packages/micromegas/MicromegasCombinedDataDecoder.cc`
* `offline/packages/micromegas/MicromegasCombinedDataDecoder.h`
* `offline/packages/micromegas/MicromegasDefs.cc`
* `offline/packages/micromegas/MicromegasDefs.h`
* `offline/packages/mvtx/CylinderGeom_Mvtx.cc`
* `offline/packages/mvtx/CylinderGeom_Mvtx.h`
* `offline/packages/mvtx/MvtxClusterPruner.cc`
* `offline/packages/mvtx/MvtxClusterizer.cc`
* `offline/packages/mvtx/MvtxHitPruner.cc`
* `offline/packages/mvtx/SegmentationAlpide.cc`
* `offline/packages/tpc/LaserClusterizer.cc`
* `offline/packages/tpc/LaserEventIdentifier.cc`
* `offline/packages/tpc/Tpc3DClusterizer.cc`
* `offline/packages/tpc/TpcClusterMover.cc`
* `offline/packages/tpc/TpcClusterMover.h`
* `offline/packages/tpc/TpcClusterizer.cc`
* `offline/packages/tpc/TpcCombinedRawDataUnpacker.cc`
* `offline/packages/tpc/TpcCombinedRawDataUnpackerDebug.cc`
* `offline/packages/tpc/TpcDistortionCorrection.cc`
* `offline/packages/tpc/TpcLoadDistortionCorrection.cc`
* `offline/packages/tpc/TpcRawDataTree.cc`
* `offline/packages/tpc/TpcRawWriter.cc`
* `offline/packages/tpc/TpcSimpleClusterizer.cc`
* `offline/packages/tpc/TrainingHits.cc`
* `offline/packages/trackbase/AlignmentTransformation.cc`
* `offline/packages/trackbase/AlignmentTransformation.h`
* `offline/packages/trackreco/DSTClusterPruning.cc`
* `offline/packages/trackreco/DSTClusterPruning.h`
* `offline/packages/trackreco/PHActsTrkFitter.cc`
* `offline/packages/trackreco/PHActsTrkFitter.h`
* `offline/packages/trackreco/PHSiliconTpcTrackMatching.cc`
* `offline/packages/trackreco/PHSiliconTpcTrackMatching.h`
* `offline/packages/trackreco/PHSimpleVertexFinder.h`
* `offline/packages/trackreco/PHTpcDeltaZCorrection.h`
Actionable comments posted: 17
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
offline/framework/frog/CreateFileList.pl (1)

171-201: Initialize `OO_pileupstring` for default pileup modes. Right now `OO_pileupstring` is only set in the `else` branch, so for pileup values 1-5 it stays undef and O+O filenames drop the pileup suffix (and can trigger warnings). Initialize it alongside the other pileup strings or explicitly reject unsupported pileup modes for O+O.

🐛 Proposed fix (initialize alongside AuAu defaults)

```diff
 if ($pileup == 1)
 {
   $AuAu_pileupstring = sprintf("_50kHz%s",$AuAu_bkgpileup);
   $pp_pileupstring = sprintf("_3MHz");
   $pAu_pileupstring = sprintf("_500kHz%s",$pAu_bkgpileup);
+  $OO_pileupstring = sprintf("_50kHz%s",$OO_bkgpileup);
 }
 elsif ($pileup == 2)
 {
   $AuAu_pileupstring = sprintf("_25kHz%s",$AuAu_bkgpileup);
+  $OO_pileupstring = sprintf("_25kHz%s",$OO_bkgpileup);
 }
 elsif ($pileup == 3)
 {
   $AuAu_pileupstring = sprintf("_10kHz%s",$AuAu_bkgpileup);
+  $OO_pileupstring = sprintf("_10kHz%s",$OO_bkgpileup);
 }
```
🤖 Fix all issues with AI agents
In @.github/workflows/fix-eof-newline-coderabbit-docstrings.yml:
- Around line 21-25: The workflow currently embeds untrusted PR metadata
directly into shell script variables (AUTHOR, BASE_REPO, HEAD_REPO, TITLE) using
${{ ... }} which can enable injection in pull_request_target; instead move those
expressions into the job or step env block (e.g., env: PR_AUTHOR: ${{
github.event.pull_request.user.login }}, PR_BASE_REPO: ${{
github.event.pull_request.base.repo.full_name }}, PR_HEAD_REPO: ${{
github.event.pull_request.head.repo.full_name }}, PR_TITLE: ${{
github.event.pull_request.title }}) and then reference them inside the run
script as normal shell variables (AUTHOR="$PR_AUTHOR",
BASE_REPO="$PR_BASE_REPO", HEAD_REPO="$PR_HEAD_REPO", TITLE="$PR_TITLE") so the
interpolations occur in the workflow engine, not inline in the script.
- Around line 101-105: The workflow is interpolating
github.event.pull_request.head.ref directly in the git push which is unsafe for
pull_request_target; instead set an environment variable from
github.event.pull_request.head.ref before the push (e.g., export HEAD_REF=...),
then use that env var in the git push command (replace the inline "${{
github.event.pull_request.head.ref }}" with the env var like "${HEAD_REF}");
update the block containing set -euo pipefail and the git push line to export
and reference the env var so the push uses the validated environment value.
In `@calibrations/tpc/dEdx/autogen.sh`:
- Around line 2-8: Replace backticks by $(dirname "$0") when setting srcdir,
quote variable expansions (e.g. "$srcdir", "${OFFLINE_MAIN}/share", and "$@") to
prevent word-splitting, and add error handling to the cd operation so the script
exits with a clear error if cd fails; specifically update the srcdir assignment,
change the subshell line that does (cd ...) to cd "$srcdir" || { echo "Failed to
cd to $srcdir" >&2; exit 1; } before running aclocal -I "${OFFLINE_MAIN}/share"
libtoolize --force automake -a --add-missing autoconf, and invoke the configure
script as "$srcdir/configure" "$@".
In `@calibrations/tpc/dEdx/bethe_bloch.h`:
- Around line 34-53: The three functions bethe_bloch_new, bethe_bloch_new_2D,
and bethe_bloch_new_1D can call TMath::Log with betagamma (or an expression
multiplying it) which is invalid for betagamma ≤ 0; add a small-clamp or guard
at the start of each function (e.g., if betagamma <= epsilon then set betagamma
= epsilon or return a sensible sentinel) before computing beta and calling
TMath::Log so you never pass non-positive values to the log; apply the same
epsilon consistently across all three functions and keep it configurable/visible
as a named constant.
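The clamp described above is a one-liner once the epsilon is named. A minimal sketch, with `std::log` standing in for `TMath::Log` and `kMinBetaGamma` as an assumed epsilon (pick a value appropriate for the fit range):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Assumed lower bound for betagamma; keeps the log argument positive.
constexpr double kMinBetaGamma = 1e-6;

double safe_log_betagamma(double betagamma)
{
  // Clamp before taking the log so betagamma <= 0 can never reach it.
  return std::log(std::max(betagamma, kMinBetaGamma));
}
```

Applying the same named constant in all three bethe_bloch variants keeps the guard consistent and visible.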
In `@calibrations/tpc/dEdx/dEdxFitter.cc`:
- Around line 53-107: process_tracks currently dereferences _trackmap without
checking for null, which can crash if GetNodes left any of the node pointers
(_trackmap, _clustermap, _geometry, _tpcgeom, _vertexmap) unset; add an early
guard at the start of dEdxFitter::process_tracks that checks these pointers (at
minimum _trackmap, and optionally _clustermap/_geometry/_tpcgeom/_vertexmap if
used later) and returns immediately (or logs and returns) when they are null to
avoid dereferencing; update any subsequent code that assumes those maps exist
(e.g., the loop over _trackmap and calls to get_nclus/get_dedx) to only run when
the guards pass, and include a clear debug/error message using
Verbosity()/std::cout or the existing logger when returning early.
- Around line 219-247: Outf (the ROOT TFile pointer created with new in
InitRun()) is closed but not deleted in End(); after calling outf->Close() add
proper deletion and nulling (e.g., delete outf; outf = nullptr;) to free the
allocated TFile and avoid the memory leak—update End() in dEdxFitter (and ensure
outf is a member variable initialized in InitRun()) to perform this cleanup.
- Around line 1-11: Include the missing standard headers at the top of
dEdxFitter.cc: add <cmath> to provide std::isnan usage and add <limits> to
provide std::numeric_limits usage so calls to std::isnan(...) and
std::numeric_limits<...>::... compile correctly (place these alongside the
existing includes in dEdxFitter.cc).
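The early-guard shape suggested for process_tracks() reduces to a few lines. `TrackMap` and `Fitter` below are hypothetical stand-ins for the real node types, not the module's API:

```cpp
#include <cassert>

struct TrackMap
{
};  // stand-in for the _trackmap node

struct Fitter
{
  TrackMap* _trackmap = nullptr;

  // Bail out before any dereference when GetNodes left the pointer unset.
  bool process_tracks() const
  {
    if (!_trackmap)
    {
      return false;  // in the real module: log via Verbosity() and return early
    }
    // ... loop over tracks here ...
    return true;
  }
};
```

The same check would extend to _clustermap, _geometry, _tpcgeom, and _vertexmap before any code that uses them.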
In `@calibrations/tpc/dEdx/dEdxFitter.h`:
- Around line 4-12: The header dEdxFitter.h is not self-contained: it uses
std::vector, std::tuple, std::unique_ptr and size_t in declarations (references
around lines where types are used) without including their standard headers; add
the appropriate standard includes (<vector>, <tuple>, <memory>, and <cstddef>)
at the top of dEdxFitter.h so symbols like std::vector, std::tuple,
std::unique_ptr and size_t are defined for consumers of the header (keep these
additions alongside the other includes and before including
"GlobaldEdxFitter.h").
In `@calibrations/tpc/dEdx/GlobaldEdxFitter.cc`:
- Around line 213-247: The loop in GlobaldEdxFitter::get_betagamma can skip all
branches when distance comparisons are equal, leaving betagamma shorter than
dEdx and causing OOB in TGraph construction; fix by making the branch selection
tie-safe so exactly one betagamma is pushed per iteration—e.g., change the
strict `<` comparisons to `<=` (or replace the final `else if(d_dist<...)` with
a plain `else`) so one of the pi/K/p/d cases always executes and always calls
betagamma.push_back(...) for each index i (use the existing variables p, dEdx,
dedx_pi/dedx_K/dedx_p/dedx_d to determine which mass to push).
- Around line 40-56: The loop can call t->GetEntry(entry) with entry >=
total_entries when skip >= total_entries; change the loop bounds so the end is
min(skip + ntracks, total_entries) (use total_entries from t->GetEntriesFast())
or return/skip early if skip >= total_entries, and after calling
t->GetEntry(entry) check its return value and continue on failure; update the
loop that iterates over entry and the use of t->GetEntry(entry) to use these
guards to avoid pushing stale px/py/pz/dedx values into p and dEdx.
- Around line 300-313: The code returns a pointer from minimizer->X() after
deleting the minimizer, causing a use-after-free; fix by copying the parameter
value from the pointer into a local double (e.g., double result = xs[0];)
immediately after calling minimizer->X(), then delete minimizer and return that
local copy; update the block around ROOT::Math::Minimizer* minimizer,
minimizer->Minimize(), const double* xs = minimizer->X(), delete minimizer,
return xs[0] to instead retrieve xs, copy xs[0] into a local variable, delete
minimizer, and return the local variable (referencing
GlobaldEdxFitter::get_fitquality_functor and minimizer/X()).
In `@calibrations/tpc/dEdx/test_sample_size.C`:
- Around line 166-170: The code dereferences TFile* f_h and TH2F* dedx_h without
null checks which will crash if the file or histogram is missing; update the
block that uses TFile::Open and f_h->Get("dedx_log_30") to check both f_h and
dedx_h for nullptr before calling dedx_h->Draw or cb->SetLogz, log or report a
clear error when either is null, and ensure f_h->Close() (or delete/cleanup) is
called on all exit paths to avoid resource leaks.
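The null-check-and-cleanup shape suggested for `TFile::Open`/`Get` carries over to any handle-returning open. Shown here with C stdio so the sketch stays self-contained; the real macro would test the `TFile*` and the fetched `TH2F*` the same way before drawing:

```cpp
#include <cstdio>

// Report a clear error instead of dereferencing a null handle; the close
// runs on every path where the open succeeded.
bool draw_histogram(const char* path)
{
  std::FILE* f = std::fopen(path, "rb");
  if (!f)
  {
    std::fprintf(stderr, "cannot open %s\n", path);
    return false;
  }
  // ... fetch the histogram here, null-checking that lookup too ...
  std::fclose(f);
  return true;
}
```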
In `@offline/framework/frog/CreateFileList.pl`:
- Around line 935-947: In the prodtype == 37 branch the code incorrectly sets
pileupstring to $AuAu_pileupstring which will strip the wrong suffix; change the
assignment so pileupstring = $OO_pileupstring instead (leave filenamestring and
the call to &commonfiletypes() as-is) to ensure the O+O pileup suffix is used
downstream.
In `@offline/packages/CaloReco/CaloWaveformFitting.cc`:
- Line 733: The TH1F histogram "h" (name "h_funcfit") is being created and will
be registered in ROOT's gDirectory by default; to avoid global-dir tracking and
potential memory/name collisions, call h.SetDirectory(nullptr) immediately after
the TH1F h(...) construction in CaloWaveformFitting (so the histogram is not
added to gDirectory and is managed only by local code).
- Line 25: The global raw pointer t is leaked because it’s initialized with new
ROOT::TThreadExecutor(1) and later reassigned in initialize_processing() without
deleting the prior instance; either delete the existing t before reassigning in
initialize_processing() or convert t to a RAII-managed object (e.g.,
std::unique_ptr<ROOT::TThreadExecutor>) or make it a class member with proper
lifecycle management so reinitialization doesn't leak; update references to t
accordingly (use reset() or release/assign as appropriate) and ensure
destruction at program shutdown.
- Around line 717-730: The software zero-suppression block in
CaloWaveformFitting.cc reads v.at(6) (and v.at(1)) without ensuring enough
samples, which can throw for nsamples in 3..6; update the if-condition that uses
_bdosoftwarezerosuppression and _maxsoftwarezerosuppression to also require at
least 7 samples (e.g. nsamples > 6 or v.size() > 6) before accessing v.at(6)
(and ensure v.size() > 1 before v.at(1)), so the branch only runs when the
vector has the needed indices (refer to variables/functions:
_bdosoftwarezerosuppression, _maxsoftwarezerosuppression, v, nsamples,
_nsoftwarezerosuppression, _nzerosuppresssamples).
- Around line 789-794: The calculation of fit_amp and fit_time can divide by the
decay parameter f.GetParameter(3), which may be zero; add a guard around the
computations that use f.GetParameter(3) (the decay parameter) in
CaloWaveformFitting.cc so if decay is 0 or |decay| < eps (choose a small
epsilon) you avoid the pow(divide) and division: e.g., detect decay via a local
variable (decay = f.GetParameter(3)) and only compute fit_amp and fit_time using
the existing formulas when decay is safe, otherwise set fit_amp and fit_time to
a sensible fallback (0, NAN, or a capped value) and optionally log/debug the
condition; leave fit_ped = f.GetParameter(4) unchanged.
🧹 Nitpick comments (7)
offline/QA/Calorimeters/CaloValid.cc (1)
179-180: Consider moving threshold initialization outside the conditional.

The `adc_threshold_hcal` and `adc_threshold_emcal` values are identical in both `AuAu` and non-`AuAu` branches (30 and 70 respectively). These could be initialized once before the `if` block to reduce duplication.

♻️ Suggested simplification

   float adc_threshold_hcal;
   float adc_threshold_emcal;
+  adc_threshold_hcal = 30;
+  adc_threshold_emcal = 70;
   float emcal_hit_threshold;
   ...
   if (m_species == "AuAu")
   {
     emcaldownscale = 1350000. / 800.;
     ihcaldownscale = 55000. / 300.;
     ohcaldownscale = 265000. / 600.;
     mbddownscale = 2800.0;
-    adc_threshold_hcal = 30;
-    adc_threshold_emcal = 70;
     ...
   }
   else
   {
     emcaldownscale = 100000. / 800.;
     ihcaldownscale = 4000. / 300.;
     ohcaldownscale = 25000. / 600.;
     mbddownscale = 200.0;
-    adc_threshold_hcal = 30;
-    adc_threshold_emcal = 70;
     ...
   }

Also applies to: 194-196, 211-212
.github/workflows/fix-eof-newline-coderabbit-docstrings.yml (1)
62-83: Harden file iteration against spaces/newlines.
`for f in $files` splits on whitespace; filenames with spaces will break. Consider `-z` with a NUL-delimited loop.

♻️ Suggested refactor

-          files=$(git diff --name-only "$BASE_SHA" "$HEAD_SHA" -- \
-            '*.C' '*.c' '*.cc' '*.cpp' '*.cxx' '*.h' '*.hh' '*.hpp' '*.hxx' || true)
-
-          if [[ -z "${files}" ]]; then
-            echo "No relevant files changed."
-            exit 0
-          fi
-
-          changed=0
-          for f in $files; do
+          files_cmd=(git diff --name-only -z "$BASE_SHA" "$HEAD_SHA" -- \
+            '*.C' '*.c' '*.cc' '*.cpp' '*.cxx' '*.h' '*.hh' '*.hpp' '*.hxx')
+
+          if ! "${files_cmd[@]}" | read -r -d '' _; then
+            echo "No relevant files changed."
+            exit 0
+          fi
+
+          changed=0
+          while IFS= read -r -d '' f; do
             [[ -f "$f" ]] || continue
@@
-          done
+          done < <("${files_cmd[@]}")

offline/packages/CaloReco/CaloWaveformProcessing.cc (1)
68-80: Consider documenting the hardcoded saturation handling.

The FUNCFIT initialization path correctly sets up the fitter with all necessary parameters. However, `set_handleSaturation(true)` is unconditionally called on line 79, which may differ from other processing types where saturation handling is configurable. Consider whether this should be a user-configurable option for consistency with `TEMPLATE_NOSAT`.

Also, the `static_cast` on line 72 correctly converts the `int` type to the `FuncFitType` enum, but this relies on the enum values staying synchronized with the integer mapping documented in comments.

♻️ Optional: Add named constants or use the enum directly through the chain

To reduce fragility, consider using the `FuncFitType` enum type throughout the configuration chain instead of `int`, or document the integer mapping in a central location that's easier to maintain.

calibrations/tpc/dEdx/bethe_bloch.h (1)
55-63: Avoid duplicate electron-mass literal; reuse `dedx_constants::m_e`.

Keeping a single source of truth helps prevent silent divergence if the constant is updated later.

♻️ Minimal refactor

 inline double bethe_bloch_species(const double betagamma, const double I)
 {
-  const double m_e = 511e3; // eV
   const double beta = betagamma/sqrt(1.+betagamma*betagamma);
-  return 1./(beta*beta)*(TMath::Log(2.*m_e/I*betagamma*betagamma)-beta*beta);
+  return 1./(beta*beta)*(TMath::Log(2.*dedx_constants::m_e/I*betagamma*betagamma)-beta*beta);
 }

calibrations/tpc/dEdx/configure.ac (1)
9-13: Make `-Werror` opt-in to avoid breaking external builds.

Treating warnings as errors is great for CI, but it can make downstream builds fragile when compilers change. Consider gating it behind a configure flag.

♻️ Optional configure flag

+dnl Enable -Werror only when explicitly requested
+AC_ARG_ENABLE([werror],
+  AS_HELP_STRING([--enable-werror],[Treat warnings as errors]),
+  [], [enable_werror=no])
+
 if test $ac_cv_prog_gxx = yes; then
-  CXXFLAGS="$CXXFLAGS -Wall -Werror"
+  CXXFLAGS="$CXXFLAGS -Wall"
+  if test "x$enable_werror" = "xyes"; then
+    CXXFLAGS="$CXXFLAGS -Werror"
+  fi
 fi

calibrations/tpc/dEdx/test_sample_size.C (1)
1-11: Add missing standard headers to make the macro self‑contained.

This file uses `std::vector`, `std::string`, `std::cout`, and `floor`/`pow`/`sqrt` without including the corresponding STL headers, which makes compilation depend on transitive ROOT includes.

♻️ Proposed include additions

 #include <TGraphErrors.h>
 #include <TFile.h>
+#include <cmath>
+#include <iostream>
+#include <string>
+#include <vector>
 #include <algorithm>
 #include <numeric>
calibrations/tpc/dEdx/GlobaldEdxFitter.h (1)

1-13: Make the header self‑contained with required STL includes.

The header uses `std::vector`, `std::string`, `std::pair`, and `size_t` but doesn’t include the corresponding headers. Adding them avoids brittle transitive dependencies.

♻️ Proposed include additions

 #include "Math/Minimizer.h"
 #include "Math/Functor.h"
 #include "Math/Factory.h"
+#include <cstddef>
+#include <string>
+#include <utility>
+#include <vector>
          AUTHOR='${{ github.event.pull_request.user.login }}'
          BASE_REPO='${{ github.event.pull_request.base.repo.full_name }}'
          HEAD_REPO='${{ github.event.pull_request.head.repo.full_name }}'
          TITLE='${{ github.event.pull_request.title }}'
Avoid inline interpolation of untrusted PR metadata.
Directly embedding PR fields in the script can enable injection in pull_request_target workflows. Pass them via env and reference shell variables instead.
🔒 Suggested fix
- name: Guard - only CodeRabbit docstring PRs from same repo
id: guard
shell: bash
+ env:
+ PR_AUTHOR: ${{ github.event.pull_request.user.login }}
+ PR_BASE_REPO: ${{ github.event.pull_request.base.repo.full_name }}
+ PR_HEAD_REPO: ${{ github.event.pull_request.head.repo.full_name }}
+ PR_TITLE: ${{ github.event.pull_request.title }}
run: |
set -euo pipefail
-
- AUTHOR='${{ github.event.pull_request.user.login }}'
- BASE_REPO='${{ github.event.pull_request.base.repo.full_name }}'
- HEAD_REPO='${{ github.event.pull_request.head.repo.full_name }}'
- TITLE='${{ github.event.pull_request.title }}'
+ AUTHOR="$PR_AUTHOR"
+ BASE_REPO="$PR_BASE_REPO"
+ HEAD_REPO="$PR_HEAD_REPO"
+ TITLE="$PR_TITLE"Also applies to: 38-39
🤖 Prompt for AI Agents
In @.github/workflows/fix-eof-newline-coderabbit-docstrings.yml around lines 21
- 25, The workflow currently embeds untrusted PR metadata directly into shell
script variables (AUTHOR, BASE_REPO, HEAD_REPO, TITLE) using ${{ ... }} which
can enable injection in pull_request_target; instead move those expressions into
the job or step env block (e.g., env: PR_AUTHOR: ${{
github.event.pull_request.user.login }}, PR_BASE_REPO: ${{
github.event.pull_request.base.repo.full_name }}, PR_HEAD_REPO: ${{
github.event.pull_request.head.repo.full_name }}, PR_TITLE: ${{
github.event.pull_request.title }}) and then reference them inside the run
script as normal shell variables (AUTHOR="$PR_AUTHOR",
BASE_REPO="$PR_BASE_REPO", HEAD_REPO="$PR_HEAD_REPO", TITLE="$PR_TITLE") so the
interpolations occur in the workflow engine, not inline in the script.
        run: |
          set -euo pipefail
          # If no commit was created, pushing will fail; so only push if HEAD is ahead.
          if git rev-parse HEAD~1 >/dev/null 2>&1; then
            git push origin "HEAD:${{ github.event.pull_request.head.ref }}"
Pass head.ref via env before using it in git push.
Inline interpolation of head.ref is untrusted in pull_request_target. Use an env var instead.
🔒 Suggested fix
- name: Push fix commit back to PR branch
if: steps.guard.outputs.run == 'true'
shell: bash
+ env:
+ HEAD_REF: ${{ github.event.pull_request.head.ref }}
run: |
set -euo pipefail
# If no commit was created, pushing will fail; so only push if HEAD is ahead.
if git rev-parse HEAD~1 >/dev/null 2>&1; then
- git push origin "HEAD:${{ github.event.pull_request.head.ref }}"
+ git push origin "HEAD:${HEAD_REF}"
fi🧰 Tools
🪛 actionlint (1.7.10)
101-101: "github.event.pull_request.head.ref" is potentially untrusted. avoid using it directly in inline scripts. instead, pass it through an environment variable. see https://docs.github.com/en/actions/reference/security/secure-use#good-practices-for-mitigating-script-injection-attacks for more details
(expression)
🤖 Prompt for AI Agents
In @.github/workflows/fix-eof-newline-coderabbit-docstrings.yml around lines 101
- 105, The workflow is interpolating github.event.pull_request.head.ref directly
in the git push which is unsafe for pull_request_target; instead set an
environment variable from github.event.pull_request.head.ref before the push
(e.g., export HEAD_REF=...), then use that env var in the git push command
(replace the inline "${{ github.event.pull_request.head.ref }}" with the env var
like "${HEAD_REF}"); update the block containing set -euo pipefail and the git
push line to export and reference the env var so the push uses the validated
environment value.
srcdir=`dirname $0`
test -z "$srcdir" && srcdir=.

(cd $srcdir; aclocal -I ${OFFLINE_MAIN}/share;\
libtoolize --force; automake -a --add-missing; autoconf)

$srcdir/configure "$@"
🧩 Analysis chain
🏁 Script executed:
# Locate and read the file
fd autogen.sh --type f

Repository: sPHENIX-Test/coresoftware
Length of output: 5081
🏁 Script executed:
# If found, display the file content
if [ -f "calibrations/tpc/dEdx/autogen.sh" ]; then
cat -n calibrations/tpc/dEdx/autogen.sh
fi

Repository: sPHENIX-Test/coresoftware
Length of output: 315
🏁 Script executed:
# Check for ShellCheck availability and run it
if command -v shellcheck >/dev/null 2>&1; then
shellcheck calibrations/tpc/dEdx/autogen.sh 2>&1
else
echo "ShellCheck not available"
fi

Repository: sPHENIX-Test/coresoftware
Length of output: 1251
Harden path handling and add error handling to cd.
ShellCheck confirms several issues in this script:
- Line 2: Use `$(...)` instead of backticks (SC2006)
- Lines 2, 5, 8: Quote variables to prevent word splitting (SC2086)
- Line 5: Add error handling to `cd` in case it fails (SC2164)
🛠️ Suggested changes
-srcdir=`dirname $0`
+srcdir="$(dirname "$0")"
test -z "$srcdir" && srcdir=.
-(cd $srcdir; aclocal -I ${OFFLINE_MAIN}/share;\
-libtoolize --force; automake -a --add-missing; autoconf)
+(cd "$srcdir" || exit 1; aclocal -I "${OFFLINE_MAIN}/share";\
+libtoolize --force; automake -a --add-missing; autoconf)
-$srcdir/configure "$@"
+"$srcdir/configure" "$@"📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
srcdir="$(dirname "$0")"
test -z "$srcdir" && srcdir=.
(cd "$srcdir" || exit 1; aclocal -I "${OFFLINE_MAIN}/share";\
libtoolize --force; automake -a --add-missing; autoconf)
"$srcdir/configure" "$@"
🧰 Tools
🪛 Shellcheck (0.11.0)
[warning] 5-5: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.
(SC2164)
🤖 Prompt for AI Agents
In `@calibrations/tpc/dEdx/autogen.sh` around lines 2 - 8, Replace backticks by
$(dirname "$0") when setting srcdir, quote variable expansions (e.g. "$srcdir",
"${OFFLINE_MAIN}/share", and "$@") to prevent word-splitting, and add error
handling to the cd operation so the script exits with a clear error if cd fails;
specifically update the srcdir assignment, change the subshell line that does
(cd ...) to cd "$srcdir" || { echo "Failed to cd to $srcdir" >&2; exit 1; }
before running aclocal -I "${OFFLINE_MAIN}/share" libtoolize --force automake -a
--add-missing autoconf, and invoke the configure script as "$srcdir/configure"
"$@".
inline double bethe_bloch_new(const double betagamma, const double A, const double B, const double C)
{
  const double beta = betagamma/sqrt(1.+betagamma*betagamma);

  return A/(beta*beta)*TMath::Log(betagamma) + A/(beta*beta)*B - A - C;
}

inline double bethe_bloch_new_2D(const double betagamma, const double A, const double B)
{
  const double beta = betagamma/sqrt(1.+betagamma*betagamma);

  return A/(beta*beta)*(2.*TMath::Log(2.*dedx_constants::m_e/dedx_constants::sphenix_I * betagamma) - beta*beta) - B;
}

inline double bethe_bloch_new_1D(const double betagamma, const double A)
{
  const double beta = betagamma/sqrt(1.+betagamma*betagamma);

  return A/(beta*beta)*(2.*TMath::Log(2.*dedx_constants::m_e/dedx_constants::sphenix_I * betagamma) - beta*beta);
}
🧩 Analysis chain
🏁 Script executed:
find . -name "bethe_bloch.h" -type f

Repository: sPHENIX-Test/coresoftware
Length of output: 106
🏁 Script executed:
cat -n ./calibrations/tpc/dEdx/bethe_bloch.h

Repository: sPHENIX-Test/coresoftware
Length of output: 7501
🏁 Script executed:
rg "bethe_bloch_new|bethe_bloch_species|bethe_bloch_total" --type cpp --type h -l | head -20

Repository: sPHENIX-Test/coresoftware
Length of output: 223
🏁 Script executed:
rg "bethe_bloch_new|bethe_bloch_total" calibrations/tpc/dEdx/GlobaldEdxFitter.cc -B 3 -A 3 | head -60

Repository: sPHENIX-Test/coresoftware
Length of output: 1617
🏁 Script executed:
rg "bethe_bloch_new|bethe_bloch_total" calibrations/tpc/dEdx/dEdxFitter.cc -B 3 -A 3 | head -80

Repository: sPHENIX-Test/coresoftware
Length of output: 805
🏁 Script executed:
rg "SetRange|SetDomain" calibrations/tpc/dEdx/ -A 2 -B 2

Repository: sPHENIX-Test/coresoftware
Length of output: 986
🏁 Script executed:
rg "TF1.*bethe_bloch" calibrations/tpc/dEdx/ -B 2 -A 5

Repository: sPHENIX-Test/coresoftware
Length of output: 5008
🏁 Script executed:
rg "GetXmin|GetXmax|SetRange" calibrations/tpc/dEdx/dEdxFitter.cc -B 2 -A 2

Repository: sPHENIX-Test/coresoftware
Length of output: 51
🏁 Script executed:
head -100 calibrations/tpc/dEdx/dEdxFitter.cc

Repository: sPHENIX-Test/coresoftware
Length of output: 2430
🏁 Script executed:
rg "addTrack|get_p\(\)" calibrations/tpc/dEdx/GlobaldEdxFitter.cc -B 2 -A 2 | head -40

Repository: sPHENIX-Test/coresoftware
Length of output: 164
Guard against log-domain errors when betagamma → 0.
Lines 38, 45, and 52 call TMath::Log(...) on betagamma without bounds checking. If betagamma ≤ 0 (e.g., zero momentum), this yields -inf/NaN and can destabilize fits. Consider clamping to a small epsilon (e.g., 1e-9) or returning early.
Suggested guard (adjust epsilon as needed for your fit range)
inline double bethe_bloch_new(const double betagamma, const double A, const double B, const double C)
{
- const double beta = betagamma/sqrt(1.+betagamma*betagamma);
+ const double bg = (betagamma > 0.) ? betagamma : 1e-9;
+ const double beta = bg/sqrt(1.+bg*bg);
- return A/(beta*beta)*TMath::Log(betagamma) + A/(beta*beta)*B - A - C;
+ return A/(beta*beta)*TMath::Log(bg) + A/(beta*beta)*B - A - C;
}
inline double bethe_bloch_new_2D(const double betagamma, const double A, const double B)
{
- const double beta = betagamma/sqrt(1.+betagamma*betagamma);
+ const double bg = (betagamma > 0.) ? betagamma : 1e-9;
+ const double beta = bg/sqrt(1.+bg*bg);
- return A/(beta*beta)*(2.*TMath::Log(2.*dedx_constants::m_e/dedx_constants::sphenix_I * betagamma) - beta*beta) - B;
+ return A/(beta*beta)*(2.*TMath::Log(2.*dedx_constants::m_e/dedx_constants::sphenix_I * bg) - beta*beta) - B;
}
inline double bethe_bloch_new_1D(const double betagamma, const double A)
{
- const double beta = betagamma/sqrt(1.+betagamma*betagamma);
+ const double bg = (betagamma > 0.) ? betagamma : 1e-9;
+ const double beta = bg/sqrt(1.+bg*bg);
- return A/(beta*beta)*(2.*TMath::Log(2.*dedx_constants::m_e/dedx_constants::sphenix_I * betagamma) - beta*beta);
+ return A/(beta*beta)*(2.*TMath::Log(2.*dedx_constants::m_e/dedx_constants::sphenix_I * bg) - beta*beta);
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
inline double bethe_bloch_new(const double betagamma, const double A, const double B, const double C)
{
  const double bg = (betagamma > 0.) ? betagamma : 1e-9;
  const double beta = bg/sqrt(1.+bg*bg);

  return A/(beta*beta)*TMath::Log(bg) + A/(beta*beta)*B - A - C;
}

inline double bethe_bloch_new_2D(const double betagamma, const double A, const double B)
{
  const double bg = (betagamma > 0.) ? betagamma : 1e-9;
  const double beta = bg/sqrt(1.+bg*bg);

  return A/(beta*beta)*(2.*TMath::Log(2.*dedx_constants::m_e/dedx_constants::sphenix_I * bg) - beta*beta) - B;
}

inline double bethe_bloch_new_1D(const double betagamma, const double A)
{
  const double bg = (betagamma > 0.) ? betagamma : 1e-9;
  const double beta = bg/sqrt(1.+bg*bg);

  return A/(beta*beta)*(2.*TMath::Log(2.*dedx_constants::m_e/dedx_constants::sphenix_I * bg) - beta*beta);
}
🤖 Prompt for AI Agents
In `@calibrations/tpc/dEdx/bethe_bloch.h` around lines 34 - 53, The three
functions bethe_bloch_new, bethe_bloch_new_2D, and bethe_bloch_new_1D can call
TMath::Log with betagamma (or an expression multiplying it) which is invalid for
betagamma ≤ 0; add a small-clamp or guard at the start of each function (e.g.,
if betagamma <= epsilon then set betagamma = epsilon or return a sensible
sentinel) before computing beta and calling TMath::Log so you never pass
non-positive values to the log; apply the same epsilon consistently across all
three functions and keep it configurable/visible as a named constant.
#include "dEdxFitter.h"

#include <phool/phool.h>
#include <phool/getClass.h>
#include <fun4all/PHTFileServer.h>
#include <fun4all/Fun4AllServer.h>
#include <trackbase_historic/TrackAnalysisUtils.h>
#include <g4detectors/PHG4TpcGeom.h>

#include <iostream>
🧩 Analysis chain
🏁 Script executed:
find . -name "dEdxFitter.cc" -type f

Repository: sPHENIX-Test/coresoftware
Length of output: 106
🏁 Script executed:
cat -n ./calibrations/tpc/dEdx/dEdxFitter.cc | head -200

Repository: sPHENIX-Test/coresoftware
Length of output: 6852
🏁 Script executed:
rg "std::isnan|std::numeric_limits" ./calibrations/tpc/dEdx/dEdxFitter.cc

Repository: sPHENIX-Test/coresoftware
Length of output: 341
🏁 Script executed:
find . -name "dEdxFitter.h" -type f

Repository: sPHENIX-Test/coresoftware
Length of output: 105
🏁 Script executed:
cat ./calibrations/tpc/dEdx/dEdxFitter.h

Repository: sPHENIX-Test/coresoftware
Length of output: 2197
Add missing standard headers for std::isnan and std::numeric_limits.
std::isnan (Line 68) and std::numeric_limits (Line 166) require <cmath> and <limits>, respectively.
📌 Header fix
 #include <g4detectors/PHG4TpcGeom.h>
+#include <cmath>
 #include <iostream>
+#include <limits>

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
#include "dEdxFitter.h"

#include <phool/phool.h>
#include <phool/getClass.h>
#include <fun4all/PHTFileServer.h>
#include <fun4all/Fun4AllServer.h>
#include <trackbase_historic/TrackAnalysisUtils.h>
#include <g4detectors/PHG4TpcGeom.h>

#include <cmath>
#include <iostream>
#include <limits>
🤖 Prompt for AI Agents
In `@calibrations/tpc/dEdx/dEdxFitter.cc` around lines 1 - 11, Include the missing
standard headers at the top of dEdxFitter.cc: add <cmath> to provide std::isnan
usage and add <limits> to provide std::numeric_limits usage so calls to
std::isnan(...) and std::numeric_limits<...>::... compile correctly (place these
alongside the existing includes in dEdxFitter.cc).
 #include <string>

-static ROOT::TThreadExecutor *t = new ROOT::TThreadExecutor(1);// NOLINT(misc-use-anonymous-namespace)
+static ROOT::TThreadExecutor *t = new ROOT::TThreadExecutor(1);  // NOLINT(misc-use-anonymous-namespace)
Memory leak when reinitializing TThreadExecutor.
The global static t is initialized here with new ROOT::TThreadExecutor(1), but in initialize_processing() at line 47, it's reassigned with t = new ROOT::TThreadExecutor(_nthreads) without deleting the previous instance.
🧹 Suggested fix
Consider deleting the old instance before reassignment in initialize_processing():
void CaloWaveformFitting::initialize_processing(const std::string &templatefile)
{
// ... existing code ...
m_peakTimeTemp = h_template->GetBinCenter(h_template->GetMaximumBin());
+ delete t;
t = new ROOT::TThreadExecutor(_nthreads);
 }

Alternatively, consider using std::unique_ptr or making t a class member with proper lifecycle management.
🤖 Prompt for AI Agents
In `@offline/packages/CaloReco/CaloWaveformFitting.cc` at line 25, The global raw
pointer t is leaked because it’s initialized with new ROOT::TThreadExecutor(1)
and later reassigned in initialize_processing() without deleting the prior
instance; either delete the existing t before reassigning in
initialize_processing() or convert t to a RAII-managed object (e.g.,
std::unique_ptr<ROOT::TThreadExecutor>) or make it a class member with proper
lifecycle management so reinitialization doesn't leak; update references to t
accordingly (use reset() or release/assign as appropriate) and ensure
destruction at program shutdown.
    // Software zero suppression check
    if ((_bdosoftwarezerosuppression && v.at(6) - v.at(0) < _nsoftwarezerosuppression) ||
        (_maxsoftwarezerosuppression && maxheight - pedestal < _nsoftwarezerosuppression))
    {
      amp = v.at(6) - v.at(0);
      time = std::numeric_limits<float>::quiet_NaN();
      ped = v.at(0);
      if (v.at(0) != 0 && v.at(1) == 0)
      {
        chi2 = 1000000;
      }
      fit_values.push_back({amp, time, ped, chi2, 0});
      continue;
    }
Potential out-of-bounds access when nsamples is between 3 and 6.
The software zero suppression check at line 718 accesses v.at(6), but the only prior guard is nsamples != _nzerosuppresssamples (which is 2). If nsamples is 3-6, v.at(6) will throw std::out_of_range.
🐛 Proposed fix
Add a minimum sample count check before accessing index 6:
+ // Software zero suppression check - requires at least 7 samples
+ if (nsamples < 7)
+ {
+ // Handle short waveforms without SW ZS check
+ fit_values.push_back({maxheight - pedestal, static_cast<float>(maxbin), pedestal,
+ std::numeric_limits<float>::quiet_NaN(), 0});
+ continue;
+ }
+
// Software zero suppression check
if ((_bdosoftwarezerosuppression && v.at(6) - v.at(0) < _nsoftwarezerosuppression) ||
     (_maxsoftwarezerosuppression && maxheight - pedestal < _nsoftwarezerosuppression))

🤖 Prompt for AI Agents
In `@offline/packages/CaloReco/CaloWaveformFitting.cc` around lines 717 - 730, The
software zero-suppression block in CaloWaveformFitting.cc reads v.at(6) (and
v.at(1)) without ensuring enough samples, which can throw for nsamples in 3..6;
update the if-condition that uses _bdosoftwarezerosuppression and
_maxsoftwarezerosuppression to also require at least 7 samples (e.g. nsamples >
6 or v.size() > 6) before accessing v.at(6) (and ensure v.size() > 1 before
v.at(1)), so the branch only runs when the vector has the needed indices (refer
to variables/functions: _bdosoftwarezerosuppression,
_maxsoftwarezerosuppression, v, nsamples, _nsoftwarezerosuppression,
_nzerosuppresssamples).
  }

  // Create histogram for fitting
  TH1F h("h_funcfit", "", nsamples, -0.5, nsamples - 0.5);
TH1F will be added to ROOT's global directory by default.
"When a histogram, a TTree or a TEventList is created, it is added by default to the list of objects in the current directory gDirectory. In many cases that is the TFile that was opened most recently." This can cause memory management issues and name collisions when processing many waveforms.
🧹 Suggested fix
Call SetDirectory(nullptr) after creating the histogram to prevent it from being tracked by gDirectory:
// Create histogram for fitting
TH1F h("h_funcfit", "", nsamples, -0.5, nsamples - 0.5);
+ h.SetDirectory(nullptr);
  int ndata = 0;

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
TH1F h("h_funcfit", "", nsamples, -0.5, nsamples - 0.5);
h.SetDirectory(nullptr);
🤖 Prompt for AI Agents
In `@offline/packages/CaloReco/CaloWaveformFitting.cc` at line 733, The TH1F
histogram "h" (name "h_funcfit") is being created and will be registered in
ROOT's gDirectory by default; to avoid global-dir tracking and potential
memory/name collisions, call h.SetDirectory(nullptr) immediately after the TH1F
h(...) construction in CaloWaveformFitting (so the histogram is not added to
gDirectory and is managed only by local code).
    // Calculate peak amplitude and time from fit parameters
    // Peak height is (p0 * Power(p2/p3, p2)) / exp(p2)
    fit_amp = (f.GetParameter(0) * pow(f.GetParameter(2) / f.GetParameter(3), f.GetParameter(2))) / exp(f.GetParameter(2));
    // Peak time is t0 + power/decay
    fit_time = f.GetParameter(1) + f.GetParameter(2) / f.GetParameter(3);
    fit_ped = f.GetParameter(4);
Potential division by zero if decay parameter is zero.
If GetParameter(3) (decay) returns zero or a very small value, the expression pow(f.GetParameter(2) / f.GetParameter(3), f.GetParameter(2)) will produce infinity or NaN. While the fit parameter limits on line 783 set a lower bound of 0 (not excluding 0), the fit could still converge to zero.
🧹 Suggested fix
Add a guard against zero decay parameter:
+ double decay = f.GetParameter(3);
+ if (decay < 1e-9)
+ {
+ fit_amp = maxheight - pedestal; // fallback to raw estimate
+ fit_time = maxbin;
+ }
+ else
+ {
// Calculate peak amplitude and time from fit parameters
- fit_amp = (f.GetParameter(0) * pow(f.GetParameter(2) / f.GetParameter(3), f.GetParameter(2))) / exp(f.GetParameter(2));
- fit_time = f.GetParameter(1) + f.GetParameter(2) / f.GetParameter(3);
+ fit_amp = (f.GetParameter(0) * pow(f.GetParameter(2) / decay, f.GetParameter(2))) / exp(f.GetParameter(2));
+ fit_time = f.GetParameter(1) + f.GetParameter(2) / decay;
+ }
fit_ped = f.GetParameter(4);🤖 Prompt for AI Agents
In `@offline/packages/CaloReco/CaloWaveformFitting.cc` around lines 789 - 794, The
calculation of fit_amp and fit_time can divide by the decay parameter
f.GetParameter(3), which may be zero; add a guard around the computations that
use f.GetParameter(3) (the decay parameter) in CaloWaveformFitting.cc so if
decay is 0 or |decay| < eps (choose a small epsilon) you avoid the pow(divide)
and division: e.g., detect decay via a local variable (decay =
f.GetParameter(3)) and only compute fit_amp and fit_time using the existing
formulas when decay is safe, otherwise set fit_amp and fit_time to a sensible
fallback (0, NAN, or a capped value) and optionally log/debug the condition;
leave fit_ped = f.GetParameter(4) unchanged.
Actionable comments posted: 15
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
offline/packages/trackreco/PHSiliconSeedMerger.cc (3)
98-98: ⚠️ Potential issue | 🔴 Critical

`std::numeric_limits<int>::quiet_NaN()` is invalid for integer types.

`quiet_NaN()` is only meaningful for floating-point types; for `int` it simply returns a value-initialized 0, not a NaN. The comparison `track1Strobe == track2Strobe` at line 188/202 will incorrectly match when both are uninitialized. Consider using a sentinel value like `-1` or `std::optional<int>` to represent "not set".

🐛 Proposed fix using sentinel value

- int track1Strobe = std::numeric_limits<int>::quiet_NaN();
+ int track1Strobe = -1; // sentinel for "no MVTX cluster found"

And similarly for line 134:

- int track2Strobe = std::numeric_limits<int>::quiet_NaN();
+ int track2Strobe = -1; // sentinel for "no MVTX cluster found"

Then update the comparison logic to check for valid strobes:

- if (track1Strobe == track2Strobe && m_mergeSeeds)
+ if (track1Strobe >= 0 && track1Strobe == track2Strobe && m_mergeSeeds)
134-134: ⚠️ Potential issue | 🔴 Critical

Same `quiet_NaN()` issue for `track2Strobe`.

See comment above regarding line 98. This also needs to be fixed.
183-225: ⚠️ Potential issue | 🔴 Critical

Logic error: `keysToKeep` is computed but never used, and duplicate insertions occur.

There are two issues in this block:

1. `keysToKeep` (lines 184, 187-191, 201, 203-205) accumulates the merged keys but is never used afterward. The intended merge is not being applied.
2. Lines 223-225 unconditionally insert `track1ID` into `matches` and `track2ID` into `seedsToDelete`, duplicating the logic already handled in the if/else branches above (lines 192-193, 206-207). This causes:
- When `mvtx1Keys.size() >= mvtx2Keys.size()`: correct behavior (track1ID kept, track2ID deleted)
- When `mvtx2Keys.size() > mvtx1Keys.size()`: wrong behavior: track2ID should be kept, but lines 223-225 override to keep track1ID and delete track2ID

🐛 Proposed fix to remove duplicate insertions and use keysToKeep

      if (mvtx1Keys.size() >= mvtx2Keys.size())
      {
        keysToKeep = mvtx1Keys;
        if (track1Strobe == track2Strobe && m_mergeSeeds)
        {
          keysToKeep.insert(mvtx2Keys.begin(), mvtx2Keys.end());
        }
        matches.insert(std::make_pair(track1ID, keysToKeep));
        seedsToDelete.insert(track2ID);
        if (Verbosity() > 2)
        {
          std::cout << " will delete seed " << track2ID << std::endl;
        }
      }
      else
      {
        keysToKeep = mvtx2Keys;
        if (track1Strobe == track2Strobe && m_mergeSeeds)
        {
          keysToKeep.insert(mvtx1Keys.begin(), mvtx1Keys.end());
        }
        matches.insert(std::make_pair(track2ID, keysToKeep));
        seedsToDelete.insert(track1ID);
        if (Verbosity() > 2)
        {
          std::cout << " will delete seed " << track1ID << std::endl;
        }
      }
      if (Verbosity() > 2)
      {
        std::cout << "Match IDed" << std::endl;
-       for (const auto& key : mvtx1Keys)
+       for (const auto& key : keysToKeep)
        {
          std::cout << " total track keys " << key << std::endl;
        }
      }
-     matches.insert(std::make_pair(track1ID, mvtx1Keys));
-     seedsToDelete.insert(track2ID);
      break;
    }

offline/packages/trackreco/PHSimpleVertexFinder.cc (2)
282-286: ⚠️ Potential issue | 🟠 Major

Potential null pointer dereference on `siseed`.

At line 286, `siseed->get_crossing()` is called without a null check. While `track` is verified non-null at line 283, `get_silicon_seed()` at line 285 can return `nullptr`, leading to a crash.

🐛 Proposed fix

```diff
 SvtxTrack *track = _track_map->get(trid);
 if (track)
 {
   auto *siseed = track->get_silicon_seed();
+  if (!siseed)
+  {
+    continue;
+  }
   short int intt_crossing = siseed->get_crossing();
```
399-400: ⚠️ Potential issue | 🟠 Major

Potential null pointer dereference on `siseed` in verbose output loop.

Same issue as above: `siseed->get_crossing()` at line 400 is called without checking if `siseed` is null.

🐛 Proposed fix

```diff
 auto *siseed = track->get_silicon_seed();
+if (!siseed)
+{
+  continue;
+}
 short int intt_crossing = siseed->get_crossing();
```
🤖 Fix all issues with AI agents
In `@generators/Herwig/HepMCTrigger/HepMCParticleTrigger.h`:
- Line 88: The default value for the member _thePtLow is inconsistent: the
header HepMCParticleTrigger.h sets float _thePtLow{-999.9} while the constructor
initializer in the .cc sets it to 0; align them by choosing one canonical
default and making both places match (preferably set the header and the
constructor initializer list for HepMCParticleTrigger to the same value, e.g.,
-999.9 or 0) so the declaration of _thePtLow and the constructor initializer
list in the HepMCParticleTrigger class are identical.
In `@offline/packages/globalvertex/GlobalVertexv3.h`:
- Around line 22-26: CloneMe currently returns a shallow copy via the default
copy constructor which duplicates raw pointers in _vtxs and leads to
double-free/UAF when Reset()/destructor run; change CloneMe in class
GlobalVertexv3 to perform a deep clone by creating a new GlobalVertexv3 instance
and for each pointer in the original's _vtxs allocate/clone a fresh vertex
object (using the vertex type's copy constructor or its Clone/CloneMe method if
available) and push those new pointers into the new object's _vtxs, ensuring
ownership semantics match Reset()/destructor and avoiding shared raw pointers
between original and clone.
In `@offline/packages/globalvertex/SvtxVertex_v2.h`:
- Around line 61-71: set_beam_crossing currently casts rollover_short(cross) to
unsigned which maps negative values (e.g. -32768) to large unsigned sentinel
values; change set_beam_crossing to validate the result of rollover_short(cross)
before casting: call rollover_short(cross) into a short int (cross_ro), and if
cross_ro is negative set _beamcrossing to a defined safe value (e.g. 0u or
another agreed clamp) instead of static_cast<unsigned int>(cross_ro); keep the
existing special-case when cross == short_int_max that sets _beamcrossing to
std::numeric_limits<unsigned int>::max(), and update any callers/contract if a
different sentinel is chosen.
- Around line 91-98: rollover_short currently maps cross == -32768 into the same
sentinel slot as short_int_max causing collisions; update rollover_short to
explicitly handle the minimum short value (use numeric_limits<short>::min() or
the defined short_int_min) and either (a) reject it (throw std::out_of_range or
assert) so set_beam_crossing can reject/document out-of-range inputs, or (b) map
it to a distinct reserved value (not short_int_max) so serialization is
bijective; ensure callers set_beam_crossing/get_beam_crossing handle the new
behavior and update any docs/comments accordingly.
In `@offline/packages/globalvertex/SvtxVertex_v3.cc`:
- Around line 106-112: The covar_index method lacks bounds checking and can
produce out-of-range indexes into the _err[6] covariance storage; update
SvtxVertex_v3::covar_index to validate that both i and j are within [0,2] before
computing the packed index (keep the existing std::swap(i,j) behavior), and if
an index is invalid, fail fast by throwing a std::out_of_range (or using an
assert) with a clear message mentioning the invalid i/j values and the object,
so callers cannot index _err out-of-bounds.
In `@offline/packages/jetbackground/DetermineTowerBackground.cc`:
- Around line 542-602: The code in DetermineTowerBackground::process_event
currently uses a literal return -1 when G4TruthInfo is missing and exit(-1) when
EventplaneinfoMap is missing; replace both with the framework-safe return value
Fun4AllReturnCodes::ABORTRUN so the framework can handle the abort: change the
early return after the truthinfo null check (where it currently does return -1)
to return Fun4AllReturnCodes::ABORTRUN, and replace the exit(-1) in the
EventplaneinfoMap null check with return Fun4AllReturnCodes::ABORTRUN (ensure
the correct header/namespace for Fun4AllReturnCodes is included where
DetermineTowerBackground::process_event is defined).
- Around line 970-976: The range check for centrality_bin in
DetermineTowerBackground.cc wrongly excludes valid indices (0 and 95–99); update
the conditional in the block that assigns _v2 (inside the code path using
centinfo->get_centrality_bin) to accept the full populated range—use
centrality_bin >= 0 && centrality_bin < 100 or better yet centrality_bin >= 0 &&
centrality_bin < static_cast<int>(_CENTRALITY_V2.size())—so _v2 is set from
_CENTRALITY_V2[centrality_bin] for valid bins and avoid incorrectly marking flow
failure for those bins.
In `@offline/packages/jetbase/TowerJetInput.cc`:
- Line 13: Remove the unused include directive for GlobalVertexv3.h: locate the
top-of-file include block where `#include <globalvertex/GlobalVertexv3.h>` is
present and delete that line; the file only uses GlobalVertexMap and calls
GlobalVertex::get_z(), so no references to GlobalVertexv3 exist and removing the
include will clean up the headers.
In `@offline/packages/mbd/MbdCalib.cc`:
- Around line 89-99: Download_Status currently treats a missing status file/URL
as an error and leaves the member _mbdstatus at -1, causing Download_All and
repeated runs to fail or retain stale values; change the logic so that a missing
status is non-fatal and defaults to "good" by setting _mbdstatus = 0 (either in
the MbdCalib constructor/initializer or at start of Download_Status), treat a
not-found / empty response as success (no error) while still parsing the content
and returning errors for malformed content, and update any checks in
Download_All that assume a non-negative _mbdstatus so they do not fail when the
status was simply absent; key symbols to modify: Download_Status, Download_All,
and the _mbdstatus initialization in the MbdCalib class.
- Around line 1421-1427: Download_TimeRMS currently returns distinct codes (1
success, 0 missing, negative errors) but callers ignore its return so they
cannot distinguish missing TimeRMS from success; update every caller that
invokes Download_TimeRMS to capture its return value, check for ==1 (proceed),
==0 (log a clear warning that _trms_y was empty and handle missing calibration
path), and <0 (treat as error: log with the returned code and propagate or fail
as appropriate). Use get_trms() only for safe access to _trms_y_interp but do
not rely on it to infer load status; if desired, convert non-1 results into
appropriate control flow (early return, exception, or status propagation) so
callers can debug and respond to missing or error states.
In `@offline/packages/TrackingDiagnostics/TrkrNtuplizer.cc`:
- Around line 1542-1549: The range check for INTT layers incorrectly uses
"layer_local < _nlayers_intt", causing most INTT layers to be skipped; update
the condition in the block that handles INTT hits (where variables like
layer_local, _nlayers_maps, _nlayers_intt, InttDefs, and CylinderGeomIntt
appear) so the upper bound is _nlayers_maps + _nlayers_intt (i.e., use
layer_local >= _nlayers_maps && layer_local < _nlayers_maps + _nlayers_intt) to
correctly detect all INTT layers before calling find_strip_center_localcoords.
In `@offline/packages/trackreco/PHSiliconSeedMerger.cc`:
- Line 92: The build fails because code (e.g., PHSiliconSeedMerger.cc using
seedsToDelete.contains(track1ID)) relies on C++20; update the build
configuration to enable C++20 by modifying configure.ac to add the C++20
standard flag to the compiler flags (either append -std=c++20 to
CXXFLAGS/CXXFLAGS_DEFAULT) or, preferably, use Autoconf’s AX_CHECK_COMPILE_FLAG
(or similar) to detect and add a supported C++20 flag; ensure this change covers
all trackreco sources that call std::set::contains and that the configure check
emits a clear error if no C++20-supporting flag is available.
In `@offline/QA/Tracking/StateClusterResidualsQA.cc`:
- Around line 114-142: The loop that pushes dynamic_cast results from
hm->getHisto into vectors (e.g., m_histograms_x, m_histograms_y,
m_histograms_layer_x, etc.) can store null pointers if getHisto() returns
nullptr or the cast fails, leading to crashes later in process_event when Fill()
is called; update the loop that iterates over m_pending (inside the
m_use_local_coords branches) to check each returned pointer before push_back (or
assert/throw with a clear message), and only push valid TH1/TH2 pointers (or
substitute a safe no-op placeholder) so that subsequent process_event accesses
of those vectors are guaranteed non-null. Ensure checks reference the same calls
shown (hm->getHisto(std::string(cfg.name + "...")) and the dynamic_cast results)
and adjust downstream logic in process_event to assume non-null or handle the
placeholder consistently.
- Around line 184-262: The loop over m_pending uses a separate counter h that is
skipped when early continue is taken (in the charge filter), causing histogram
indexing to desynchronize; fix by ensuring h is incremented before every
continue in the for (const auto& cfg : m_pending) loop (i.e., place ++h just
before each continue in the charge checks) or refactor the loop to an
index-based iteration (for (size_t h=0; h<m_pending.size(); ++h) using
m_pending[h]) so the histogram index always matches the config; update
references to cfg accordingly (m_pending, h, charge filter block, and the
histogram-fill section).
In `@offline/QA/Tracking/StateClusterResidualsQA.h`:
- Line 181: The header guard closing comment is incorrect: the guard used is
STATECLUSTERRESIDUALSQA_H but the closing comment reads TRACKFITTINGQA_H; update
the trailing comment to match the guard identifier (STATECLUSTERRESIDUALSQA_H)
so the opening and closing guard identifiers are consistent with the
`#ifndef/`#define for STATECLUSTERRESIDUALSQA_H.
🧹 Nitpick comments (26)
simulation/g4simulation/g4main/PHG4TruthTrackingAction.h (1)
22-83: Attach the Doxygen blocks directly to the declarations they document.
These comment blocks sit outside the class and may be treated as orphaned by doc tooling; move them adjacent to each member (or consolidate into class-level docs) so they're reliably picked up.

Example of moving one block (apply the same pattern to others)

```diff
-class PHG4TruthTrackingAction : public PHG4TrackingAction
-{
- public:
-  //! constructor
-  PHG4TruthTrackingAction(PHG4TruthEventAction*);
+class PHG4TruthTrackingAction : public PHG4TrackingAction
+{
+ public:
+  /**
+   * Construct a PHG4TruthTrackingAction associated with an event action.
+   * @param eventAction Pointer to the owning PHG4TruthEventAction used to record per-event truth information.
+   */
+  PHG4TruthTrackingAction(PHG4TruthEventAction*);
```

offline/packages/trackreco/PHSiliconSeedMerger.h (3)
28-50: Inconsistent indentation for method definitions.

The method bodies (`void trackMapName(...)`, `void clusterOverlap(...)`, `void searchIntt()`, `void mergeSeeds()`) start at column 0 but should be indented to match the class member indentation level. This appears to be a formatting issue where the docblocks are indented but the method signatures are not.

🔧 Suggested indentation fix

```diff
 /**
  * Set the name of the track seed container to use when retrieving silicon tracks.
  *
  * @param name Name of the TrackSeedContainer node (defaults to "SiliconTrackSeedContainer").
  */
-void trackMapName(const std::string &name) { m_trackMapName = name; }
+  void trackMapName(const std::string &name) { m_trackMapName = name; }

 /**
  * Set the maximum number of overlapping clusters considered during seed merging.
  * @param nclusters Maximum number of clusters that may overlap (overlap threshold).
  */
-void clusterOverlap(const unsigned int nclusters) { m_clusterOverlap = nclusters; }
+  void clusterOverlap(const unsigned int nclusters) { m_clusterOverlap = nclusters; }

 /**
  * @brief Allow merging searches to include the INTT detector.
  *
  * Configure the merger to include INTT clusters in subsequent processing by disabling the MVTX-only restriction.
  */
-void searchIntt() { m_mvtxOnly = false; }
+  void searchIntt() { m_mvtxOnly = false; }

 /**
  * Enable merging of silicon seed tracks during event processing.
  *
  * When enabled, the module will merge overlapping silicon seed tracks where applicable.
  */
-void mergeSeeds() { m_mergeSeeds = true; }
+  void mergeSeeds() { m_mergeSeeds = true; }
```
57-73: Same indentation issue for member variable declarations.

The member variables and their documentation blocks have inconsistent indentation - `unsigned int m_clusterOverlap{1};` and `bool m_mvtxOnly{false};` are at column 0 instead of being indented within the class.

🔧 Suggested indentation fix

```diff
 /**
  * Minimum number of clusters that must be shared between two silicon track seeds
  * for them to be considered overlapping.
  *
  * Defaults to 1.
  */
-unsigned int m_clusterOverlap{1};
+  unsigned int m_clusterOverlap{1};

 bool m_mergeSeeds{false};

 /**
  * Restrict seed processing to the MVTX detector only.
  *
  * When set to `true`, operations that iterate or merge silicon seed tracks
  * will be limited to clusters originating from the MVTX vertex detector.
  * When `false`, clusters from other silicon detectors are included.
  */
-bool m_mvtxOnly{false};
+  bool m_mvtxOnly{false};
+  bool m_mergeSeeds{false};
```
75-77: Extra blank lines at end of file.

There are unnecessary trailing blank lines after the `#endif` guard. Consider removing them for cleaner formatting.

offline/packages/jetbackground/DetermineTowerBackground.cc (1)
79-106: Use RAII for `CDBTTree` to avoid manual delete paths.

[RAII + exception safety]

♻️ Suggested change

```diff
-  CDBTTree *cdbtree_calo_v2 = nullptr;
+  std::unique_ptr<CDBTTree> cdbtree_calo_v2;
 ...
-    cdbtree_calo_v2 = new CDBTTree(calibdir);
+    cdbtree_calo_v2 = std::make_unique<CDBTTree>(calibdir);
 ...
-    cdbtree_calo_v2->LoadCalibrations();
+    cdbtree_calo_v2->LoadCalibrations();
 ...
-    _CENTRALITY_V2[icent] = cdbtree_calo_v2->GetFloatValue(icent, "jet_calo_v2");
+    _CENTRALITY_V2[icent] = cdbtree_calo_v2->GetFloatValue(icent, "jet_calo_v2");
-
-  delete cdbtree_calo_v2;
```

offline/packages/globalvertex/SvtxVertex_v2.h (1)
100-118: Add documentation explaining the rollover conversion logic.

The conversion between unsigned int and short int involves complex rollover semantics that are not immediately obvious. Consider adding comments explaining:

- The expected value ranges for `_beamcrossing`
- Why values > 32767 might exist (legacy data, serialization compatibility)
- The intended behavior of the double rollover at lines 114-117

simulation/g4simulation/g4main/PHG4TruthTrackingAction.cc (2)
360-427: Ancestry traversal lacks cycle detection—relies on Geant4 invariant.

The `while (parent)` loop at line 402 relies on the assumption that Geant4 always processes parents before children, ensuring no cycles. The comment at line 401 acknowledges this risk. While this invariant should hold in practice, a defensive check could prevent an infinite loop if the truth container is ever corrupted.

🛡️ Optional: Add visited-set guard for defensive programming

```diff
 PHG4Particle* parent = truth.GetParticle(particle->get_parent_id());
-  // if there is a loop of the parent, then we have a problem lol with this while loop btw
+  std::unordered_set<int> visited;
   while (parent)
   {
+    if (!visited.insert(parent->get_track_id()).second)
+    {
+      std::cerr << "PHG4TruthTrackingAction::issPHENIXPrimary - cycle detected in ancestry for track id " << particle->get_track_id() << std::endl;
+      return false;
+    }
     if (isLongLived(parent->get_pid()))
     {
```
379-386: Remove or guard commented-out debug code before merging.

Leaving commented debug prints in production code adds clutter. Consider removing this block entirely or guarding it behind a verbosity flag if it's needed for future debugging.
generators/Herwig/HepMCTrigger/HepMCParticleTrigger.cc (5)
12-16: Remove unused FastJet includes.

`fastjet/JetDefinition.hh` and `fastjet/PseudoJet.hh` are included but not used in this particle trigger implementation. These are likely copied from `HepMCJetTrigger` and should be removed.

♻️ Proposed fix

```diff
 #include <phool/PHCompositeNode.h>
 #include <phool/getClass.h>

-#include <fastjet/JetDefinition.hh>
-
 #include <HepMC/GenEvent.h>

-#include <fastjet/PseudoJet.hh>

 #include <map>
```
85-108: Early abort on first bad event in multi-event map loses information.

In `process_event`, when iterating over `PHHepMCGenEventMap`, an early `return Fun4AllReturnCodes::ABORTEVENT` on the first failing event (lines 90, 95, 100) discards any information about which specific event failed. If the map typically contains only one event, this is fine, but if multiple events exist, consider logging which one failed before aborting.
114-121: `AddParticles` can use `insert` for efficiency.

The loop can be replaced with a single `insert` call for better clarity and potential performance.

♻️ Proposed fix

```diff
 void HepMCParticleTrigger::AddParticles(const std::vector<int>& particles)
 {
-  for (auto p : particles)
-  {
-    _theParticles.push_back(p);
-  }
-  return;
+  _theParticles.insert(_theParticles.end(), particles.begin(), particles.end());
 }
```
344-351: Use `map::try_emplace` or `operator[]` increment for cleaner map update.

The contains-then-increment pattern can be simplified.

♻️ Proposed fix

```diff
-  if (particle_types.contains(pid))
-  {
-    particle_types[pid]++;
-  }
-  else
-  {
-    particle_types[pid] = 1;
-  }
+  particle_types[pid]++;  // operator[] default-initializes to 0 if key absent
```
302-302: The API cannot distinguish particles from antiparticles—this is an intentional design choice that affects usability.

Line 302 uses `std::abs((*iter)->pdg_id())`, which combines particles with their antiparticles. The absolute value normalization is applied consistently throughout the matching pipeline: users calling `AddParticle(11)` or `AddParticle(-11)` both match via `std::abs(trigger_particle)` in `particleAboveThreshold()` (line 363), and all event particles are stored using their absolute pdg_id in the `particle_types` map. This means electrons (11) and positrons (-11) cannot be distinguished.

If distinguishing particles from antiparticles is important for your trigger use cases, the API would need to preserve the sign of pdg_id throughout the matching logic.
generators/Herwig/HepMCTrigger/HepMCJetTrigger.h (1)
50-51: Add `const` qualifier to getter methods.

These getters don't modify state and should be marked `const` for correctness and to allow calling on const references.

♻️ Proposed fix

```diff
-  int getNevts(){return this->n_evts;}
-  int getNgood(){return this->n_good;}
+  int getNevts() const { return n_evts; }
+  int getNgood() const { return n_good; }
```

generators/Herwig/HepMCTrigger/HepMCParticleTrigger.h (2)
8-8: Remove unused FastJet include.

`<fastjet/PseudoJet.hh>` is not used in this header. This appears to be copied from `HepMCJetTrigger.h`.

♻️ Proposed fix

```diff
 #include <fun4all/SubsysReco.h>

-#include <fastjet/PseudoJet.hh>
-
 #include <cmath>
```
69-70: Add `const` qualifier to getter methods.

These getters don't modify state and should be marked `const`. This matches the recommended change for `HepMCJetTrigger.h`.

♻️ Proposed fix

```diff
-  int getNevts(){return this->n_evts;}
-  int getNgood(){return this->n_good;}
+  int getNevts() const { return n_evts; }
+  int getNgood() const { return n_good; }
```

offline/QA/Tracking/StateClusterResidualsQA.cc (3)
33-34: Unused `square` template function.

This helper is defined but never used in the file. Consider removing it to avoid dead code.

🧹 Proposed fix

```diff
 namespace
 {
-  template <typename T>
-  inline T square (T const& t) { return t * t; }
-
   template <class T> class range_adaptor
```
215-235: Unused variables `state_z` and `cluster_z` in local coordinates branch.

In the `m_use_local_coords == true` branch, `state_z` and `cluster_z` are declared but never assigned or used, leaving them uninitialized.

🧹 Proposed fix: Move declarations inside branches

```diff
-  float state_x;
-  float state_y;
-  float state_z;
-  float cluster_x;
-  float cluster_y;
-  float cluster_z;
   if (m_use_local_coords == true)
   {
-    state_x = state->get_localX();
-    state_y = state->get_localY();
+    float state_x = state->get_localX();
+    float state_y = state->get_localY();
     Acts::Vector2 loc = geometry->getLocalCoords(state->get_cluskey(), cluster);
-    cluster_x = loc.x();
-    cluster_y = loc.y();
+    float cluster_x = loc.x();
+    float cluster_y = loc.y();
     // ... rest unchanged
   }
   else
   {
-    state_x = state->get_x();
+    float state_x = state->get_x();
     // ... rest unchanged
   }
```
394-400: Unused histogram manager in `EndRun`.

The histogram manager is retrieved and asserted but not used. If no cleanup is needed, consider simplifying:

```diff
 int StateClusterResidualsQA::EndRun(const int /*unused*/)
 {
-  auto *hm = QAHistManagerDef::getHistoManager();
-  assert(hm);
-
   return Fun4AllReturnCodes::EVENT_OK;
 }
```

offline/QA/Tracking/StateClusterResidualsQA.h (1)
9-9: Unused include `<limits>`.

`FLT_MAX` comes from `<cfloat>` (line 11); `std::numeric_limits` is not used.

```diff
-#include <limits>
```

offline/packages/trackreco/PHActsSiliconSeeding.h (1)
179-188: Clarify units for MVTX/beam-spot setters.

Defaults are documented in cm, but the setters accept raw floats. A short Doxygen note (or x_cm/y_cm naming) will reduce accidental mm inputs.

💡 Suggested doc tweak

```diff
-  void set_mvtxCenterXY(const float X, const float Y)
+  /// MVTX center in cm (sPHENIX coordinates)
+  void set_mvtxCenterXY(const float X, const float Y)
   {
     m_mvtx_x0 = X;
     m_mvtx_y0 = Y;
   }
-  void set_beamSpotXY(const float X, const float Y)
+  /// Beam spot in cm (sPHENIX coordinates)
+  void set_beamSpotXY(const float X, const float Y)
   {
     m_beamSpotx = X;
     m_beamSpoty = Y;
   }
```

offline/packages/trackreco/PHActsSiliconSeeding.cc (1)
1174-1207: Simplify MVTX-center offset math in `iterateLayers` (INTT-only).

All current call sites pass `startLayer >= 3`, so the `layer < 3` branch is dead and the offset math collapses to a no-op. Consider removing it here (or add a note if MVTX layers will be added later).

♻️ Optional simplification

```diff
-  // move the fitted circle center the negative of the MVTX center position
-  float x0 = 0.0;  // cm
-  float y0 = 0.0;
   for (int layer = startLayer; layer < endLayer; ++layer)
   {
     float layerradius = m_geomContainerIntt->GetLayerGeom(layer)->get_radius();

-    if(layer < 3)
-    {
-      x0 = m_mvtx_x0;
-      y0 = m_mvtx_y0;
-    }
-
-    float xfitradius_moved = fitpars[1] - x0;
-    float yfitradius_moved = fitpars[2] - y0;
-
-    const auto [xplus, yplus, xminus, yminus] = TrackFitUtils::circle_circle_intersection(layerradius, fitpars[0], xfitradius_moved, yfitradius_moved);
-
-    float xp = xplus + x0;
-    float xm = xminus + x0;
-    float yp = yplus + y0;
-    float ym = yminus + y0;
+    const auto [xplus, yplus, xminus, yminus] =
+        TrackFitUtils::circle_circle_intersection(layerradius, fitpars[0], fitpars[1], fitpars[2]);
+
+    float xp = xplus;
+    float xm = xminus;
+    float yp = yplus;
+    float ym = yminus;
```

offline/packages/TrackingDiagnostics/TrkrNtuplizer.cc (2)
1480-1484: Use `_nlayers_maps` instead of hard-coded `3`.

Keeps MVTX layer selection consistent with configurable layer counts.

♻️ Proposed fix

```diff
-    if (layer_local < 3)
+    if (layer_local < _nlayers_maps)
```
1506-1507: Remove commented-out assignments to reduce clutter.

🧹 Cleanup

```diff
-    //fx_hit[n_hit::nhitphielem] = TpcDefs::getSectorId(hitset_key);
-    //fx_hit[n_hit::nhitzelem] = TpcDefs::getSide(hitset_key);
```

offline/packages/globalvertex/GlobalVertexv2.h (1)
82-99: Consider consolidating duplicated rollover utilities.

The `rollover_short` and `rollover_from_unsignedint` functions are identically implemented in `GlobalVertexv2.h`, `MbdVertexv2.h`, and `SvtxVertex_v2.h`. Consider extracting these to a shared utility header to reduce code duplication.

offline/packages/globalvertex/GlobalVertexv3.cc (1)
120-151: Avoid copying the SVTX vertex vector in hot-path getters.

These getters copy `svtxit->second` into a local vector; a const reference avoids repeated allocations and copies.

♻️ Suggested change (the same hunk applies in each of the four getters)

```diff
-  GlobalVertex::VertexVector trackvertices = svtxit->second;
+  const auto& trackvertices = svtxit->second;
```

Also applies to: 153-174, 176-197, 199-231
```cpp
  float _theEtaHigh{1.1};
  float _theEtaLow{-1.1};
  float _thePtHigh{999.9};
  float _thePtLow{-999.9};
```
Inconsistent default for _thePtLow between header and constructor.
The header initializes _thePtLow to -999.9, but the constructor initializer list in the .cc file sets it to 0. The constructor value wins at runtime, but this inconsistency is confusing and error-prone if the constructor initialization is ever removed.
🐛 Proposed fix (align header with .cc)
```diff
   float _thePtHigh{999.9};
-  float _thePtLow{-999.9};
+  float _thePtLow{0};
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
  float _thePtHigh{999.9};
  float _thePtLow{0};
```
🤖 Prompt for AI Agents
In `@generators/Herwig/HepMCTrigger/HepMCParticleTrigger.h` at line 88, The
default value for the member _thePtLow is inconsistent: the header
HepMCParticleTrigger.h sets float _thePtLow{-999.9} while the constructor
initializer in the .cc sets it to 0; align them by choosing one canonical
default and making both places match (preferably set the header and the
constructor initializer list for HepMCParticleTrigger to the same value, e.g.,
-999.9 or 0) so the declaration of _thePtLow and the constructor initializer
list in the HepMCParticleTrigger class are identical.
```cpp
  // PHObject virtual overloads
  void identify(std::ostream& os = std::cout) const override;
  void Reset() override;
  int isValid() const override;
  PHObject* CloneMe() const override { return new GlobalVertexv3(*this); }
```
CloneMe performs a shallow copy of owned vertices.
Line 26 uses the default copy constructor, which shallow-copies _vtxs raw pointers. Because Reset()/destructor delete these pointers, CloneMe() can lead to double-free or UAF when the clone and original are destroyed.
🛠️ Proposed deep-clone implementation
```diff
- PHObject* CloneMe() const override { return new GlobalVertexv3(*this); }
+ PHObject* CloneMe() const override;
```

```diff
+PHObject* GlobalVertexv3::CloneMe() const
+{
+  auto* clone = new GlobalVertexv3();
+  clone->_id = _id;
+  clone->_bco = _bco;
+  for (const auto& [type, vertices] : _vtxs)
+  {
+    for (const auto* vertex : vertices)
+    {
+      clone->clone_insert_vtx(type, vertex);
+    }
+  }
+  return clone;
+}
```
+}🤖 Prompt for AI Agents
In `@offline/packages/globalvertex/GlobalVertexv3.h` around lines 22 - 26, CloneMe
currently returns a shallow copy via the default copy constructor which
duplicates raw pointers in _vtxs and leads to double-free/UAF when
Reset()/destructor run; change CloneMe in class GlobalVertexv3 to perform a deep
clone by creating a new GlobalVertexv3 instance and for each pointer in the
original's _vtxs allocate/clone a fresh vertex object (using the vertex type's
copy constructor or its Clone/CloneMe method if available) and push those new
pointers into the new object's _vtxs, ensuring ownership semantics match
Reset()/destructor and avoiding shared raw pointers between original and clone.
```cpp
  void set_beam_crossing(short int cross) override
  {
    if (cross == short_int_max)
    {
      _beamcrossing = std::numeric_limits<unsigned int>::max();
      return;
    }

    const short int cross_ro = rollover_short(cross);
    _beamcrossing = static_cast<unsigned int>(cross_ro);
  }
```
Unsafe cast: negative `cross_ro` converts to large unsigned value.

At line 70, `static_cast<unsigned int>(cross_ro)` with a negative `cross_ro` (possible when input is -32768) produces `UINT_MAX` or near-max values, colliding with the sentinel. Consider adding a guard:
Proposed fix with validation

```diff
 void set_beam_crossing(short int cross) override
 {
   if (cross == short_int_max)
   {
     _beamcrossing = std::numeric_limits<unsigned int>::max();
     return;
   }
   const short int cross_ro = rollover_short(cross);
+  if (cross_ro < 0)
+  {
+    // Handle edge case - either reject or map to a valid value
+    _beamcrossing = std::numeric_limits<unsigned int>::max();
+    return;
+  }
   _beamcrossing = static_cast<unsigned int>(cross_ro);
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
  void set_beam_crossing(short int cross) override
  {
    if (cross == short_int_max)
    {
      _beamcrossing = std::numeric_limits<unsigned int>::max();
      return;
    }
    const short int cross_ro = rollover_short(cross);
    if (cross_ro < 0)
    {
      // Handle edge case - either reject or map to a valid value
      _beamcrossing = std::numeric_limits<unsigned int>::max();
      return;
    }
    _beamcrossing = static_cast<unsigned int>(cross_ro);
  }
```
🤖 Prompt for AI Agents
In `@offline/packages/globalvertex/SvtxVertex_v2.h` around lines 61 - 71,
set_beam_crossing currently casts rollover_short(cross) to unsigned which maps
negative values (e.g. -32768) to large unsigned sentinel values; change
set_beam_crossing to validate the result of rollover_short(cross) before
casting: call rollover_short(cross) into a short int (cross_ro), and if cross_ro
is negative set _beamcrossing to a defined safe value (e.g. 0u or another agreed
clamp) instead of static_cast<unsigned int>(cross_ro); keep the existing
special-case when cross == short_int_max that sets _beamcrossing to
std::numeric_limits<unsigned int>::max(), and update any callers/contract if a
different sentinel is chosen.
```cpp
  static short int rollover_short(short int cross)
  {
    if (cross == short_int_max) return short_int_max;
    if (cross >= 0) return cross;

    const int cross_ro = static_cast<int>(short_int_max) + static_cast<int>(cross);  // cross negative
    return static_cast<short int>(cross_ro);
  }
```
Critical: `rollover_short(-32768)` returns -1, causing sentinel collision.

When `cross = -32768`:

- `cross_ro = 32767 + (-32768) = -1`
- In `set_beam_crossing`, `static_cast<unsigned int>(-1)` yields `UINT_MAX`
- This conflicts with the sentinel value for `short_int_max` (32767)

Both `set_beam_crossing(32767)` and `set_beam_crossing(-32768)` result in `_beamcrossing = UINT_MAX`, making them indistinguishable.

Additionally, the conversion is non-bijective: negative values in [-32767, -1] map via rollover to [0, 32766], the same range as non-negative inputs [0, 32766]. Round-trip `set_beam_crossing(x)` → `get_beam_crossing()` will not preserve negative input values.
Proposed fix to handle -32768 edge case

```diff
 static short int rollover_short(short int cross)
 {
   if (cross == short_int_max) return short_int_max;
   if (cross >= 0) return cross;
   const int cross_ro = static_cast<int>(short_int_max) + static_cast<int>(cross);  // cross negative
+  // Ensure result is non-negative to avoid UINT_MAX collision when cast
+  if (cross_ro < 0)
+  {
+    return 0;  // or handle as special case / error
+  }
   return static_cast<short int>(cross_ro);
 }
```

Consider documenting the valid input range and expected behavior for negative values, or rejecting out-of-range inputs explicitly.
🤖 Prompt for AI Agents
In `@offline/packages/globalvertex/SvtxVertex_v2.h` around lines 91 - 98,
rollover_short currently maps cross == -32768 into the same sentinel slot as
short_int_max causing collisions; update rollover_short to explicitly handle the
minimum short value (use numeric_limits<short>::min() or the defined
short_int_min) and either (a) reject it (throw std::out_of_range or assert) so
set_beam_crossing can reject/document out-of-range inputs, or (b) map it to a
distinct reserved value (not short_int_max) so serialization is bijective;
ensure callers set_beam_crossing/get_beam_crossing handle the new behavior and
update any docs/comments accordingly.
```cpp
unsigned int SvtxVertex_v3::covar_index(unsigned int i, unsigned int j) const
{
  if (i > j)
  {
    std::swap(i, j);
  }
  return i + 1 + (j + 1) * (j) / 2 - 1;
```
Missing bounds validation for covariance matrix indices.
The covar_index function doesn't validate that i and j are within bounds (0-2). If called with indices >= 3, it will compute an out-of-bounds index into _err[6], causing undefined behavior.
🛡️ Proposed defensive fix
```diff
 unsigned int SvtxVertex_v3::covar_index(unsigned int i, unsigned int j) const
 {
+  if (i > 2 || j > 2)
+  {
+    return 0; // or throw/assert
+  }
   if (i > j)
   {
     std::swap(i, j);
   }
   return i + 1 + (j + 1) * (j) / 2 - 1;
 }
```

🤖 Prompt for AI Agents
In `@offline/packages/globalvertex/SvtxVertex_v3.cc` around lines 106 - 112, The
covar_index method lacks bounds checking and can produce out-of-range indexes
into the _err[6] covariance storage; update SvtxVertex_v3::covar_index to
validate that both i and j are within [0,2] before computing the packed index
(keep the existing std::swap(i,j) behavior), and if an index is invalid, fail
fast by throwing a std::out_of_range (or using an assert) with a clear message
mentioning the invalid i/j values and the object, so callers cannot index _err
out-of-bounds.
```cpp
if (layer_local >= _nlayers_maps && layer_local < _nlayers_intt)
{
  int row = InttDefs::getRow(hit_key);
  int col = InttDefs::getCol(hit_key);

  CylinderGeomIntt* intt_cylinder = dynamic_cast<CylinderGeomIntt*>(_inttGeom->GetLayerGeom(layer_local));
  double localcoords[3];
  intt_cylinder->find_strip_center_localcoords(InttDefs::getLadderZId(hitset_key), row, col, localcoords);
```
Fix INTT layer range check (currently skips most INTT layers).
layer_local < _nlayers_intt only handles the first INTT layer when using global layer indices. Use _nlayers_maps + _nlayers_intt as the upper bound.
🐛 Proposed fix
```diff
-  if (layer_local >= _nlayers_maps && layer_local < _nlayers_intt)
+  if (layer_local >= _nlayers_maps && layer_local < (_nlayers_maps + _nlayers_intt))
```
🤖 Prompt for AI Agents
In `@offline/packages/TrackingDiagnostics/TrkrNtuplizer.cc` around lines 1542 -
1549, The range check for INTT layers incorrectly uses "layer_local <
_nlayers_intt", causing most INTT layers to be skipped; update the condition in
the block that handles INTT hits (where variables like layer_local,
_nlayers_maps, _nlayers_intt, InttDefs, and CylinderGeomIntt appear) so the
upper bound is _nlayers_maps + _nlayers_intt (i.e., use layer_local >=
_nlayers_maps && layer_local < _nlayers_maps + _nlayers_intt) to correctly
detect all INTT layers before calling find_strip_center_localcoords.
```diff
   TrackSeed* track1 = m_siliconTracks->get(track1ID);

-  if (seedsToDelete.find(track1ID) != seedsToDelete.end())
+  if (seedsToDelete.contains(track1ID))
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Search for C++20 or C++17 configuration in build files
echo "=== Checking for C++ standard in CMake files ==="
rg -n "CMAKE_CXX_STANDARD|CXX_STANDARD|c\+\+20|c\+\+2a|std=c\+\+20" --type cmake -A 2
echo -e "\n=== Searching CMakeLists.txt files ==="
fd "CMakeLists.txt" | head -20
echo -e "\n=== Checking root CMakeLists.txt ==="
if [ -f "CMakeLists.txt" ]; then
  head -50 CMakeLists.txt
fi
echo -e "\n=== Checking offline package CMakeLists.txt ==="
if [ -f "offline/CMakeLists.txt" ]; then
  grep -n "CMAKE_CXX_STANDARD\|CXX_STANDARD" offline/CMakeLists.txt || echo "Not found in offline/CMakeLists.txt"
fi
echo -e "\n=== Checking trackreco package CMakeLists.txt ==="
if [ -f "offline/packages/trackreco/CMakeLists.txt" ]; then
  cat offline/packages/trackreco/CMakeLists.txt
fi
```

Repository: sPHENIX-Test/coresoftware
Length of output: 461
🏁 Script executed:
```bash
#!/bin/bash
# Find all CMakeLists.txt files
echo "=== Finding CMakeLists.txt files ==="
find . -name "CMakeLists.txt" -type f 2>/dev/null | head -30
echo -e "\n=== Searching for C++ standard settings (broad search) ==="
rg "CMAKE_CXX_STANDARD|set.*CXX_STANDARD|c\+\+20|c\+\+17|c\+\+2a" -i 2>/dev/null | head -40
echo -e "\n=== Checking root directory structure ==="
ls -la | head -20
echo -e "\n=== Checking offline directory ==="
ls -la offline/ | head -20
echo -e "\n=== Checking trackreco directory ==="
ls -la offline/packages/trackreco/ 2>/dev/null | head -20
```

Repository: sPHENIX-Test/coresoftware
Length of output: 3120
🏁 Script executed:
```bash
#!/bin/bash
# Find the specific file
echo "=== Locating PHSiliconSeedMerger.cc ==="
find . -name "PHSiliconSeedMerger.cc" -type f
echo -e "\n=== Checking if contains() is used elsewhere in codebase ==="
rg "\.contains\(" --type cc --type cpp -l | head -20
echo -e "\n=== Checking line 92 of PHSiliconSeedMerger.cc ==="
if [ -f "offline/packages/trackreco/PHSiliconSeedMerger.cc" ]; then
  sed -n '85,99p' offline/packages/trackreco/PHSiliconSeedMerger.cc | cat -n
else
  echo "File not found at expected location"
fi
echo -e "\n=== Looking for build configuration files ==="
find . -name "*.cmake" -o -name "Makefile*" -o -name "setup.py" -o -name "setup.cfg" 2>/dev/null | head -20
echo -e "\n=== Checking .clang-tidy for any C++ standard hints ==="
head -50 .clang-tidy
```

Repository: sPHENIX-Test/coresoftware
Length of output: 3795
🏁 Script executed:
```bash
#!/bin/bash
# Look for configure.ac or Makefile.am in trackreco
echo "=== Looking for build files in trackreco ==="
ls -la offline/packages/trackreco/ | grep -E "Makefile|configure"
echo -e "\n=== Searching for configure.ac in repo ==="
find . -name "configure.ac" -o -name "configure.in" 2>/dev/null
echo -e "\n=== Checking trackreco Makefile.am if it exists ==="
if [ -f "offline/packages/trackreco/Makefile.am" ]; then
  cat offline/packages/trackreco/Makefile.am
fi
echo -e "\n=== Searching for CXX flags in any Makefile.am ==="
rg "CXXFLAGS|CXX_STANDARD|c\+\+20" 2>/dev/null | head -20
echo -e "\n=== Looking for any contains() usage in .cc files ==="
rg "contains\(" offline/packages/trackreco/ 2>/dev/null
echo -e "\n=== Check if std::set is used elsewhere with contains ==="
rg "std::set" offline/packages/trackreco/ --context 2 2>/dev/null | head -30
```

Repository: sPHENIX-Test/coresoftware
Length of output: 16578
🏁 Script executed:
```bash
#!/bin/bash
# Check the full trackreco configure.ac file
echo "=== Full trackreco/configure.ac ==="
cat offline/packages/trackreco/configure.ac
echo -e "\n=== Check for parent configure.ac or build system defaults ==="
cat configure.ac 2>/dev/null || echo "No root configure.ac"
echo -e "\n=== Check if any compiler version is enforced ==="
rg "AC_PROG_CXX|AX_CXX" offline/packages/trackreco/ 2>/dev/null
echo -e "\n=== Count of contains() usages by file ==="
rg "\.contains\(" offline/packages/trackreco/ -l 2>/dev/null | xargs -I {} sh -c 'echo "{}:"; rg "\.contains\(" "{}" | wc -l'
```

Repository: sPHENIX-Test/coresoftware
Length of output: 1179
Add C++20 standard flag to build configuration.
The code uses std::set::contains(), a C++20 feature, but offline/packages/trackreco/configure.ac does not explicitly enable C++20 (no -std=c++20 or equivalent flag). This will fail on compilers defaulting to C++17 or earlier. The issue is widespread—six other trackreco source files also use .contains(). Update configure.ac to add the C++20 flag to CXXFLAGS or use AX_CHECK_COMPILE_FLAG to ensure C++20 support.
🤖 Prompt for AI Agents
In `@offline/packages/trackreco/PHSiliconSeedMerger.cc` at line 92, The build
fails because code (e.g., PHSiliconSeedMerger.cc using
seedsToDelete.contains(track1ID)) relies on C++20; update the build
configuration to enable C++20 by modifying configure.ac to add the C++20
standard flag to the compiler flags (either append -std=c++20 to
CXXFLAGS/CXXFLAGS_DEFAULT) or, preferably, use Autoconf’s AX_CHECK_COMPILE_FLAG
(or similar) to detect and add a supported C++20 flag; ensure this change covers
all trackreco sources that call std::set::contains and that the configure check
emits a clear error if no C++20-supporting flag is available.
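A minimal sketch of the suggested configure.ac change. The macro choice here is an assumption, not the project's current setup: `AX_CXX_COMPILE_STDCXX` from autoconf-archive is one common alternative to hand-rolling an `AX_CHECK_COMPILE_FLAG` probe, and it must be available in the build environment.

```m4
dnl configure.ac fragment: require a C++20-capable compiler
AC_PROG_CXX
dnl AX_CXX_COMPILE_STDCXX comes from autoconf-archive; "mandatory" makes
dnl configure fail with a clear error if no C++20 flag is found.
AX_CXX_COMPILE_STDCXX([20], [noext], [mandatory])
```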
```cpp
for (const auto& cfg : m_pending)
{
  if (m_use_local_coords)
  {
    m_histograms_x.push_back(dynamic_cast<TH1 *>(hm->getHisto(std::string(cfg.name + "_local_rphi"))));
    m_histograms_y.push_back(dynamic_cast<TH1 *>(hm->getHisto(std::string(cfg.name + "_local_z"))));
    m_histograms_layer_x.push_back(dynamic_cast<TH2 *>(hm->getHisto(std::string(cfg.name + "_local_layer_rphi"))));
    m_histograms_layer_y.push_back(dynamic_cast<TH2 *>(hm->getHisto(std::string(cfg.name + "_local_layer_z"))));
    m_histograms_phi_x.push_back(dynamic_cast<TH2 *>(hm->getHisto(std::string(cfg.name + "_local_phi_rphi"))));
    m_histograms_phi_y.push_back(dynamic_cast<TH2 *>(hm->getHisto(std::string(cfg.name + "_local_phi_z"))));
    m_histograms_eta_x.push_back(dynamic_cast<TH2 *>(hm->getHisto(std::string(cfg.name + "_local_eta_rphi"))));
    m_histograms_eta_y.push_back(dynamic_cast<TH2 *>(hm->getHisto(std::string(cfg.name + "_local_eta_z"))));
  }
  else
  {
    m_histograms_x.push_back(dynamic_cast<TH1 *>(hm->getHisto(std::string(cfg.name + "_x"))));
    m_histograms_y.push_back(dynamic_cast<TH1 *>(hm->getHisto(std::string(cfg.name + "_y"))));
    m_histograms_z.push_back(dynamic_cast<TH1 *>(hm->getHisto(std::string(cfg.name + "_z"))));
    m_histograms_layer_x.push_back(dynamic_cast<TH2 *>(hm->getHisto(std::string(cfg.name + "_layer_x"))));
    m_histograms_layer_y.push_back(dynamic_cast<TH2 *>(hm->getHisto(std::string(cfg.name + "_layer_y"))));
    m_histograms_layer_z.push_back(dynamic_cast<TH2 *>(hm->getHisto(std::string(cfg.name + "_layer_z"))));
    m_histograms_phi_x.push_back(dynamic_cast<TH2 *>(hm->getHisto(std::string(cfg.name + "_phi_x"))));
    m_histograms_phi_y.push_back(dynamic_cast<TH2 *>(hm->getHisto(std::string(cfg.name + "_phi_y"))));
    m_histograms_phi_z.push_back(dynamic_cast<TH2 *>(hm->getHisto(std::string(cfg.name + "_phi_z"))));
    m_histograms_eta_x.push_back(dynamic_cast<TH2 *>(hm->getHisto(std::string(cfg.name + "_eta_x"))));
    m_histograms_eta_y.push_back(dynamic_cast<TH2 *>(hm->getHisto(std::string(cfg.name + "_eta_y"))));
    m_histograms_eta_z.push_back(dynamic_cast<TH2 *>(hm->getHisto(std::string(cfg.name + "_eta_z"))));
  }
}
```
Missing null-checks after dynamic_cast may cause crashes in process_event.
If hm->getHisto() returns nullptr or the cast fails, null pointers are stored in the histogram vectors. Later calls to Fill() (lines 228-257) will dereference these nulls.
🐛 Proposed fix: Add null-checks or assertions
```diff
   for (const auto& cfg : m_pending)
   {
     if (m_use_local_coords)
     {
-      m_histograms_x.push_back(dynamic_cast<TH1 *>(hm->getHisto(std::string(cfg.name + "_local_rphi"))));
-      m_histograms_y.push_back(dynamic_cast<TH1 *>(hm->getHisto(std::string(cfg.name + "_local_z"))));
+      auto* h_x = dynamic_cast<TH1 *>(hm->getHisto(std::string(cfg.name + "_local_rphi")));
+      auto* h_y = dynamic_cast<TH1 *>(hm->getHisto(std::string(cfg.name + "_local_z")));
+      if (!h_x || !h_y)
+      {
+        std::cout << PHWHERE << " Failed to retrieve histograms for config: " << cfg.name << std::endl;
+        return Fun4AllReturnCodes::ABORTRUN;
+      }
+      m_histograms_x.push_back(h_x);
+      m_histograms_y.push_back(h_y);
       // ... similar checks for other histograms
```

🤖 Prompt for AI Agents
In `@offline/QA/Tracking/StateClusterResidualsQA.cc` around lines 114 - 142, The
loop that pushes dynamic_cast results from hm->getHisto into vectors (e.g.,
m_histograms_x, m_histograms_y, m_histograms_layer_x, etc.) can store null
pointers if getHisto() returns nullptr or the cast fails, leading to crashes
later in process_event when Fill() is called; update the loop that iterates over
m_pending (inside the m_use_local_coords branches) to check each returned
pointer before push_back (or assert/throw with a clear message), and only push
valid TH1/TH2 pointers (or substitute a safe no-op placeholder) so that
subsequent process_event accesses of those vectors are guaranteed non-null.
Ensure checks reference the same calls shown (hm->getHisto(std::string(cfg.name
+ "...")) and the dynamic_cast results) and adjust downstream logic in
process_event to assume non-null or handle the placeholder consistently.
```cpp
int h = 0;
for (const auto& cfg : m_pending)
{
  if (cfg.charge != 0)
  {
    if ((cfg.charge < 0) && track->get_positive_charge())
    {
      continue;
    }
    if ((cfg.charge > 0) && !(track->get_positive_charge()))
    {
      continue;
    }
  }
  if (cfg.min_mvtx_clusters <= counters[TrkrDefs::mvtxId] && cfg.max_mvtx_clusters >= counters[TrkrDefs::mvtxId]
      && cfg.min_intt_clusters <= counters[TrkrDefs::inttId] && cfg.max_intt_clusters >= counters[TrkrDefs::inttId]
      && cfg.min_tpc_clusters <= counters[TrkrDefs::tpcId] && cfg.max_tpc_clusters >= counters[TrkrDefs::tpcId]
      && cfg.phi_min <= track_phi && cfg.phi_max >= track_phi
      && cfg.eta_min <= track_eta && cfg.eta_max >= track_eta
      && cfg.pt_min <= track_pt && cfg.pt_max >= track_pt)
  {
    for (auto const& [path_length, state] : range_adaptor(track->begin_states(), track->end_states()))
    {
      if (path_length == 0) { continue; }

      auto *cluster = cluster_map->findCluster(state->get_cluskey());
      if (!cluster)
      {
        continue;
      }

      float state_x;
      float state_y;
      float state_z;
      float cluster_x;
      float cluster_y;
      float cluster_z;
      if (m_use_local_coords == true)
      {
        state_x = state->get_localX();
        state_y = state->get_localY();
        Acts::Vector2 loc = geometry->getLocalCoords(state->get_cluskey(), cluster);
        cluster_x = loc.x();
        cluster_y = loc.y();
        m_histograms_x[h]->Fill(state_x - cluster_x);
        m_histograms_y[h]->Fill(state_y - cluster_y);
        m_histograms_layer_x[h]->Fill(TrkrDefs::getLayer(state->get_cluskey()), state_x - cluster_x);
        m_histograms_layer_y[h]->Fill(TrkrDefs::getLayer(state->get_cluskey()), state_y - cluster_y);
        m_histograms_phi_x[h]->Fill(state->get_phi(), state_x - cluster_x);
        m_histograms_phi_y[h]->Fill(state->get_phi(), state_y - cluster_y);
        m_histograms_eta_x[h]->Fill(state->get_eta(), state_x - cluster_x);
        m_histograms_eta_y[h]->Fill(state->get_eta(), state_y - cluster_y);
      }
      else
      {
        state_x = state->get_x();
        state_y = state->get_y();
        state_z = state->get_z();
        Acts::Vector3 glob = geometry->getGlobalPosition(state->get_cluskey(), cluster);
        cluster_x = glob.x();
        cluster_y = glob.y();
        cluster_z = glob.z();
        m_histograms_x[h]->Fill(state_x - cluster_x);
        m_histograms_y[h]->Fill(state_y - cluster_y);
        m_histograms_z[h]->Fill(state_z - cluster_z);
        m_histograms_layer_x[h]->Fill(TrkrDefs::getLayer(state->get_cluskey()), state_x - cluster_x);
        m_histograms_layer_y[h]->Fill(TrkrDefs::getLayer(state->get_cluskey()), state_y - cluster_y);
        m_histograms_layer_z[h]->Fill(TrkrDefs::getLayer(state->get_cluskey()), state_z - cluster_z);
        m_histograms_phi_x[h]->Fill(state->get_phi(), state_x - cluster_x);
        m_histograms_phi_y[h]->Fill(state->get_phi(), state_y - cluster_y);
        m_histograms_phi_z[h]->Fill(state->get_phi(), state_z - cluster_z);
        m_histograms_eta_x[h]->Fill(state->get_eta(), state_x - cluster_x);
        m_histograms_eta_y[h]->Fill(state->get_eta(), state_y - cluster_y);
        m_histograms_eta_z[h]->Fill(state->get_eta(), state_z - cluster_z);
      }
    }
  }
  ++h;
}
```
Index desynchronization: continue skips ++h, causing wrong histograms to be filled.
When the charge filter at lines 189-196 triggers continue, the index h is not incremented, but the loop advances to the next config. This causes subsequent configs to fill the wrong histograms.
Example: If m_pending has 3 configs and the first config's charge check fails, h remains 0 while processing the second config, filling m_histograms_x[0] instead of m_histograms_x[1].
🐛 Proposed fix: Increment `h` before `continue`
```diff
       if (cfg.charge != 0)
       {
         if ((cfg.charge < 0) && track->get_positive_charge())
         {
+          ++h;
           continue;
         }
         if ((cfg.charge > 0) && !(track->get_positive_charge()))
         {
+          ++h;
           continue;
         }
       }
```

Alternatively, use index-based iteration:
```diff
-  int h = 0;
-  for (const auto& cfg : m_pending)
+  for (size_t h = 0; h < m_pending.size(); ++h)
   {
+    const auto& cfg = m_pending[h];
     if (cfg.charge != 0)
     {
       if ((cfg.charge < 0) && track->get_positive_charge())
       {
         continue;
       }
       ...
     }
     ...
-    ++h;
   }
```

🤖 Prompt for AI Agents
In `@offline/QA/Tracking/StateClusterResidualsQA.cc` around lines 184 - 262, The
loop over m_pending uses a separate counter h that is skipped when early
continue is taken (in the charge filter), causing histogram indexing to
desynchronize; fix by ensuring h is incremented before every continue in the for
(const auto& cfg : m_pending) loop (i.e., place ++h just before each continue in
the charge checks) or refactor the loop to an index-based iteration (for (size_t
h=0; h<m_pending.size(); ++h) using m_pending[h]) so the histogram index always
matches the config; update references to cfg accordingly (m_pending, h, charge
filter block, and the histogram-fill section).
```cpp
  std::vector<TH2*> m_histograms_eta_z{};
};

#endif  // TRACKFITTINGQA_H
```
Header guard comment mismatch.
The closing comment says TRACKFITTINGQA_H but the guard is STATECLUSTERRESIDUALSQA_H (line 3).
```diff
-#endif  // TRACKFITTINGQA_H
+#endif  // STATECLUSTERRESIDUALSQA_H
```
🤖 Prompt for AI Agents
In `@offline/QA/Tracking/StateClusterResidualsQA.h` at line 181, The header guard
closing comment is incorrect: the guard used is STATECLUSTERRESIDUALSQA_H but
the closing comment reads TRACKFITTINGQA_H; update the trailing comment to match
the guard identifier (STATECLUSTERRESIDUALSQA_H) so the opening and closing
guard identifiers are consistent with the `#ifndef/`#define for
STATECLUSTERRESIDUALSQA_H.
fix: remove reference in time frame builder
…on to waveform due to events from prior crossings better
Optimized running ped length, fixed all waveform contamination from prior crossings.
add 12 GeV jets to CreateFileList.pl
- Use 1% centrality bins instead of 10% - Allows to capture the finer variations of calibraitons that change on the order of 1% centrality
… calibration runs
Fixes for maps.
reco error update
…CMap Add a function to universally mask the INTT chip
make listPayloadIOVs useful
- Removed CDBUtils dependency: Deleted the std::unique_ptr<CDBUtils> member and removed its associated header from filter-datasets - Integrated CDBInterface: Updated getCalibration to utilize the CDBInterface::instance()->getUrl() method. - Managed Global State via recoConsts: - Updated getCalibration to dynamically set the TIMESTAMP flag in recoConsts for each IOV to ensure the correct calibration version is retrieved. - Initialized the CDB_GLOBALTAG flag within the main function of CaloCDB-FilterDatasets.cc. - Resolved Header Conflicts: Eliminated the destructor requirement that triggered the incomplete type error in the unique_ptr cleanup.
CaloCDB: Migrate FilterDatasets from CDBUtils to CDBInterface
…builder_badfees feat: fee masking in tpc decoder
- Public Headers (pkginclude_HEADERS): Retained GenStatus.h and geometry_constants.h because they are required for external macro execution in `runProd.C`. - Private Headers (noinst_HEADERS): Moved FilterDatasets.h and myUtils.h to noinst_HEADERS to prevent unnecessary global installation of internal utilities.
CaloCDB: Refactor header installation in Makefile.am
This was another reason using get_z on the global vertex returned NaN for type=truth. Now it works (tested privately)
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
…interactions Double interactions
…ition-to-truthvertex Add get_position to truthvertex
HFTrackEfficiency updates
DecayFinder search/match fix
Motivation / context
This PR is a large, cross-cutting update adding new trigger modules, QA and calibration tools, waveform fitting options, dE/dx fitting utilities, event‑plane (sEPD) calibration code, new/updated data-types (vertex/MBD V3, MbdRaw V2), many build/autotools and ROOT dictionary artifacts, and numerous refactors and defensive fixes across detectors and framework components. The changes aim to improve configurability, QA coverage, calibration workflows, and signal-fitting fidelity, while modernizing APIs and improving defensive/const-correct coding.
Key changes (high-level)
Potential risk areas
Possible future improvements
Caveat / reading note
I generated this summary from the provided change listings—AI tools can make mistakes or miss nuanced context. Please review the diffs and run a full build and validation suite (including ROOT dictionary generation and CDB interactions) before merging. I can prepare targeted build/test checklists, run repository scans, or produce focused lists of files to inspect if you want additional validation assistance.