works
with working juliac example
working on multi variable
new api working
wip
making progress...
trying to make con work
fixed
almost done... getting there
rosenrock works
some more progress
debugging App
some more work; now found the issue
type stable
still some instances not working
Add minimal working example demonstrating grpass type instability
Shows that when the same variable appears 16+ times in one expression body,
Julia's loop detection widens the accumulated cnt tuple from NTuple{N,Int64}
to Tuple{Int64,...,Vararg{Int64}}, making _simdfunction return non-concrete.
https://claude.ai/code/session_01W8YnWSNbNtcra3Pd3M4u82
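The widening described above can be reproduced with a minimal standalone sketch (hypothetical `grow` function, not the ExaModels code): accumulating onto a tuple through recursion makes inference give up on the exact length.

```julia
# Recursively grow a tuple by appending one element per step. The runtime
# values are fine, but inference widens the return type from NTuple{N,Int64}
# toward Tuple{Vararg{Int64}}, which is not concrete.
grow(t::Tuple, n::Int) = n == 0 ? t : grow((t..., length(t) + 1), n - 1)

t = grow((), 20)                 # a 20-element tuple at runtime
inferred = Base.return_types(grow, (Tuple{}, Int))[1]
println(inferred)                # inspect the widened (non-concrete) type
```

Anything downstream that stores such a tuple in a struct field then becomes non-concrete as well, which matches the `_simdfunction` symptom above.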
Fix type instability in _simdfunction by using Vector sparsity detection + NTuple construction
Replace the growing-tuple sparsity detection approach (which caused Julia to widen
Tuple{Int,Int,...} to Tuple{Int,Vararg{Int}} via loop detection) with:
- Vector-based traversal in grpass/hrpass/hdrpass comp::Nothing leaf cases
- Type-level count functions (_count_gr, _count_hr0, etc.) mirroring the AD pass structure
- @generated _gr_nnz/_hr_nnz functions returning compile-time Int constants
- ntuple(f, Val(N)) to construct NTuple{N,Int} in Compressor, preserving GPU compatibility
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
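The replacement strategy can be sketched with stand-in names (`nnz_of` standing in for `_gr_nnz`; the real pass mirrors the AD structure): compute the count at the type level, then build a concrete `NTuple` with `ntuple(f, Val(N))` so the length is part of the result type.

```julia
# Type-level count: the element count is derived from the type alone,
# so it is available at compile time (stand-in for _gr_nnz).
nnz_of(::Type{T}) where {T<:Tuple} = length(T.parameters)

# Construct a concrete NTuple{N,Int} via ntuple(f, Val(N)); no growing
# tuple is ever accumulated, so there is nothing for inference to widen.
make_offsets(::Type{T}) where {T<:Tuple} = ntuple(identity, Val(nnz_of(T)))

offsets = make_offsets(Tuple{Int,Int,Int})   # a concrete NTuple{3,Int}
```

`ntuple` with a `Val` length also avoids heap allocation, which is what preserves GPU compatibility in the Compressor.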
claude made a lot of progress
more claude progress
everything works except LinearAlgebra
Add Val-based operator specialization, Const wrapper, and auto-Const macros
- Val-based compile-time specialization for ^, +, -, *, / operators:
x^1 → identity, x^2 → abs2, x+0 → identity, x*1 → identity, etc.
Type-stable via Val dispatch (no runtime branching).
- Const{T} <: AbstractNode: value-hiding wrapper that prevents recompilation
when numeric constants change (unlike Val which encodes value in type).
- _maybe_const() runtime gatekeeper: macros auto-wrap free symbols via
_maybe_const(v), which wraps Real values in Const() and passes through
everything else (Variables, Parameters, Expressions, nodes).
- Auto-Const in @obj, @con, @con!, @expr, @var macros: free symbols in
generator bodies are automatically wrapped, excluding iterator variables,
function names, array bases, and dot-access targets.
- add_var(core, gen::Generator): creates variables constrained to equal
generator expressions (x[i] = gen.f(i)), returning (core, Variable).
- replace_T support for Const nodes in simdfunction.jl.
Benchmarks show no regressions, with up to 25% improvement on complex
expression trees (minsurf grad).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
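A minimal sketch of the two mechanisms, using stand-in names (`_pow`, `MyConst`) rather than the actual ExaModels definitions:

```julia
# Val-based specialization: the exponent is a type parameter, so each case
# dispatches to a specialized method with no runtime branching.
_pow(x, ::Val{1}) = x            # x^1 -> identity
_pow(x, ::Val{2}) = abs2(x)      # x^2 -> abs2
_pow(x, ::Val{P}) where {P} = x^P

# Value-hiding wrapper: the value lives in a field, not in the type, so
# changing the constant does not change the type and triggers no recompilation.
struct MyConst{T}
    value::T
end

println(typeof(MyConst(3.0)) === typeof(MyConst(4.0)))
```

This is the trade-off named above: `Val` encodes the value in the type (good for literal exponents known at parse time), while a `Const`-style wrapper keeps the type stable across different runtime constants.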
Fix add_var(gen) indexing: pair local indices with original parameters
The constraint generator was using gen.iter directly as parameter data,
causing y[p] to index by the original iterator values (e.g., 2:7) instead
of local 1-based indices (1:6). Fix by pairing enumerate(gen.iter) so
x[j] uses 1:n while gen.f(orig) sees the original iterator element.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
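The pairing can be illustrated in isolation, using the iterator 2:7 from the example above:

```julia
# enumerate pairs each element with a local 1-based index, so indexed
# variables can use 1:n while the generator body still receives the
# original iterator element.
iter = 2:7
paired = collect(enumerate(iter))   # [(1, 2), (2, 3), ..., (6, 7)]
locals = first.(paired)             # 1:6, used for x[j]
originals = last.(paired)           # 2:7, passed to gen.f(orig)
```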
Add multi-generator add_con and add_var(gen) indexing fix
- add_con(core, gen, gens...; kwargs...): creates constraint from first
generator, augments with subsequent ones via add_con!. Works at the
function level so @con macro gets it for free.
- @var macro now applies _auto_const_gen to generator arguments.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Fix juliac --trim=safe AOT compilation: type-stable expression tree construction
Four layered fixes to make juliac --trim=safe produce a working binary for all
three test models (rosenrock, augmented_lagrangian, broyden_tridiagonal):
1. nlp.jl: Add Base.eltype(::Type{NaNSource{T}}) for type-level eltype dispatch
used during simdfunction sparsity detection.
2. nlp.jl: Add _indexed_var bypass for Variable.getindex — stores the runtime
offset as a plain Int64 child in Node2{+, I, Int64} instead of wrapping it
in Val(runtime_int), which would produce abstract Val{<:Int64} and cascade
to non-concrete Compressor types. (440 → 24 verifier errors)
3. specialization.jl: Override Base.literal_pow for AbstractNode so the integer
exponent stays as a type parameter Val{P} through _pow_val dispatch; change
_pow_val fallthrough from v::Val (abstract) to ::Val{V} where V (type-param
bound) so juliac can trace the specific Node2{^, ..., Val{P}} constructor.
(24 → 0 verifier errors for rosenrock/augmented_lagrangian)
4. specialization.jl: Add explicit Base.:-(d1::Real, d2::AbstractNode) override
using Const(d1) instead of Val(d1). Val(d1::Real) produces abstract Val{<:Real}
when d1 is a runtime value, blocking juliac from tracing Node2 construction.
Const{T} is fully concrete (value in field, not type parameter), giving juliac
a concrete Node2{-, Const{T}, D2} to trace. (fixes 8 verifier errors from
broyden_tridiagonal's "3 - 2*x[i]" pattern)
Also: NLPModelsIpoptLite: switch default linear_solver from ma27 to mumps (ma27
produced garbage error values); add HSL_jll dep; expose linear_solver and
print_timing_statistics options. LuksanVlcekApp: uncomment broyden_tridiagonal.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
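The Val-vs-Const distinction behind fixes 2-4 can be sketched in isolation (`ConstBox` is a stand-in for the real `Const{T}`):

```julia
# Val(n) moves the value into the type: for a runtime Int, the return type
# Val{n} cannot be known in advance, so inference yields the abstract Val.
# A plain field wrapper always has a single concrete type.
struct ConstBox{T}
    value::T
end

as_val(n) = Val(n)        # abstract result type for runtime n
as_box(n) = ConstBox(n)   # concrete ConstBox{Int}

val_rt = Base.return_types(as_val, (Int,))[1]
box_rt = Base.return_types(as_box, (Int,))[1]
println((isconcretetype(val_rt), isconcretetype(box_rt)))
```

This is why `juliac --trim=safe` can trace `Node2{-, Const{T}, D2}` but not a node built from `Val(d1::Real)`.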
Add COPSBenchmark AOT test and fix nested-generator auto-const wrapping
- Fix _wrap_free_symbols to handle nested :generator expressions (e.g.
sum(f(x) for x in 1:n) inside an outer generator body). Previously,
the else-branch recursed into for-specs, turning the iterator variable
assignment `j = 1:n` into `_maybe_const(j) = 1:n`, which Julia
lowering reads as a global method definition, causing a LoadError in
COPSBenchmark extension files (catmix.jl, etc.).
- Add test/COPSApp.jl app with rocket, catmix, and chain COPS models.
- Update JuliaCTest to use JuliaC programmatic API (ImageRecipe +
LinkRecipe + compile_products/link_products) instead of shelling out
to juliac, and add COPSApp to the AOT test suite.
- Add JuliaC to test/Project.toml deps.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Fix AOT compilation for COPS models and improve docstrings
- Replace (head, rest...) tuple destructuring with Base.tail + first in
all 12 recursive NLP dispatch functions in nlp.jl, fixing juliac
--trim=safe failures for models with >10 constraint groups (glider, robot)
- Add Base.:^(AbstractNode, Real) = Node2(^, d1, Const(d2)) override in
specialization.jl to prevent abstract Val{<:Real} inference from
_register_biv, fixing minsurf (^(1/2) exponent)
- Enable glider, robot, rocket, minsurf, steering, torsion in COPSApp.jl
(only sum/prod-using models remain disabled: catmix, gasoil, marine,
methanol, pinene)
- Add docstrings for Variable, Parameter, WrapperNLPModel, TimedNLPModel,
CompressedNLPModel and improve existing Expression/ExaCore docs
- Clean up stale test code in TwoStageTest and subexpr_test
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
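The destructuring rewrite can be sketched with a toy recursive function (hypothetical `process`, not one of the 12 dispatch functions): instead of `(head, rest...)` in the signature, split the tuple in the body with `first` and `Base.tail`.

```julia
# Recursive fold over a tuple without signature-level destructuring;
# per the commit above, this form stays traceable under juliac --trim=safe
# even for long tuples (>10 constraint groups).
process(::Tuple{}) = 0
function process(t::Tuple)
    head = first(t)
    rest = Base.tail(t)
    return head + process(rest)
end
```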
fixed testing
cleanup
- backends.jl: push sub-env paths to LOAD_PATH so extension packages (KernelAbstractions, OpenCL, StaticArrays) remain resolvable after switching back to the test/ project
- nlp.jl: convert GPU arrays to CPU before passing to NLPModelMeta (its constructor uses findall, which does scalar indexing)
- JuliaCTest: gracefully skip when the JuliaC version lacks the ImageRecipe API
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- formatter.yml: exclude AOT app files using @main (unsupported by Runic) by filtering changed files before invoking Runic directly
- gpu.jl: add `using ExaModels` before macro usage in function definitions
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- formatter.yml: exclude AOT app files using @main (unsupported by Runic)
- gpu.jl: add `using ExaModels` before macro usage in function definitions
- deps: extract shared Symbolics helpers into codegen_utils.jl, reducing redundancy between generate_functionlist.jl and generate_specialfunctions.jl
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Guard JuliaC.ImageRecipe usage with isdefined check since the API is not available on older Julia/JuliaC versions. Tests gracefully skip with a warning instead of erroring. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Use @test skip= to mark juliac compilation tests as broken/skipped when JuliaC.ImageRecipe is not available, instead of failing. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Drop ExaModelsStaticArrays extension and weak dependency. The extension only provided zero/one/adjoint/dot scaffolding for StaticArrays interop which is no longer needed. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Append a .exe suffix on Windows so isfile(exe_path) finds the compiled binary
- Replace .== 0.0 with an iszero predicate to avoid Float64 promotion on Metal
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…ase functions
- Move _needs_overload into register.jl and use it in both macros instead of plain hasmethod, which was too conservative for Base generics like max/min (hasmethod returned true via the generic isless-based definition, preventing the ExaModels-specific overload from being added)
- Rewrite functionlist.jl to drive registration via @eval @register_univariate/@register_bivariate loops over _UNIVARIATES/_BIVARIATES, unifying the two workflows
- Drop _register_univ/_register_biv and their Val(d2) wrapping for runtime Real constants (Val{<:Real} is abstract and problematic for juliac --trim=safe; the macros store the Real directly as a concretely typed field instead)
- Keep the _UNIVARIATES/_BIVARIATES constants for ADTest derivative verification
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Add (::Val{V})(i, x, θ) = V so Val{V}() instances stored in Node2 by
_pow_val (for literal integer exponents >= 3) are callable in the node
evaluation context; the @register_bivariate macro's general I1,I2 eval
dispatch calls n.inner2(i, x, θ), which failed for non-callable Val{V}
- Fix LuksanVlcekApp: replace `using LuksanVlcekBenchmark` with
`import LuksanVlcekBenchmark as LV` — the `LV.` prefix was used
throughout but the alias was never defined, causing UndefVarError at
runtime for all solved cases
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
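A minimal sketch of the callable-Val fix (note that adding call methods to Base's `Val` is type piracy, shown here only as an illustration): a stored `Val{V}()` instance evaluates like a node by simply returning `V`.

```julia
# Make Val instances callable with the (i, x, θ) node-evaluation signature,
# so n.inner2(i, x, θ) works when inner2 is a Val{V}() literal exponent.
(::Val{V})(i, x, θ) where {V} = V

println(Val(3)(1, [2.0, 5.0], nothing))   # evaluates to the encoded value
```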
Ipopt status 1 (SOLVED_TO_ACCEPTABLE_LEVEL) is a valid solution, just converged to a looser tolerance. On Windows CI, MUMPS numerical behavior causes glider N=20 to land on status=1 instead of status=0; both are correct solves. Returning exit code 1 for status=1 causes the test to fail unnecessarily.
- COPSApp / LuksanVlcekApp: return 0 for status <= 1 (optimal or acceptable)
- JuliaCTest: check "Ipopt status : " (solver ran) rather than "Ipopt status : 0" (exact convergence quality), since the AOT test is about binary execution correctness, not solver optimality certification
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
N=20 was numerically harder and didn't converge to optimality on Windows CI with MUMPS. N=10 converges cleanly on all platforms. Revert the status<=1 workarounds added in the previous commit. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Glider N=20 doesn't converge to optimality on Windows CI due to MUMPS numerical differences. Use broken=Sys.iswindows() so the test is tracked as a known issue rather than silently skipped or incorrectly changed. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- generate_functionlist.jl: remove helper functions and _register_univ/_register_biv (now in src/register.jl); use @eval @register_univariate/@register_bivariate instead
- generate_specialfunctions.jl: output ext/functionlist.jl instead of a full module file (the module wrapper now stays in ext/ExaModelsSpecialFunctions.jl)
- Standardize comment headers across src/functionlist.jl and ext/functionlist.jl (consistent separator length and wording)
https://claude.ai/code/session_01QsVaXnG1Cw7LdtCzvVYRoo
juliac --trim=safe fails to resolve Core.kwcall for add_con/add_var/etc when the ExaCore type is deeply parametric (e.g. glider_model with many named constraints). Fix by adding positional-name forwarding methods and updating @var/@par/@obj/@con/@expr macros to pass name::Val as the last positional argument instead of a keyword argument. https://claude.ai/code/session_01QsVaXnG1Cw7LdtCzvVYRoo
Your PR requires formatting changes to meet the project's style guidelines. Suggested changes:

diff --git a/callback_benchmark.jl b/callback_benchmark.jl
index eb5bdfd9..a879cb47 100644
--- a/callback_benchmark.jl
+++ b/callback_benchmark.jl
@@ -6,11 +6,13 @@ using Printf
function btime(f, N)
f() # warmup
GC.gc()
- return minimum(begin
- t = time_ns()
- f()
- (time_ns() - t) / 1e9
- end for _ in 1:N)
+ return minimum(
+ begin
+ t = time_ns()
+ f()
+ (time_ns() - t) / 1.0e9
+ end for _ in 1:N
+ )
end
function benchmark_callbacks(m; N = 20)
@@ -19,39 +21,42 @@ function benchmark_callbacks(m; N = 20)
nnzj = m.meta.nnzj
nnzh = m.meta.nnzh
- x = copy(m.meta.x0)
- y = zeros(ncon)
- c = zeros(ncon)
- g = zeros(nvar)
- jac = zeros(nnzj)
+ x = copy(m.meta.x0)
+ y = zeros(ncon)
+ c = zeros(ncon)
+ g = zeros(nvar)
+ jac = zeros(nnzj)
hess = zeros(nnzh)
- tobj = btime(() -> ExaModels.obj(m, x), N)
- tcon = ncon > 0 ? btime(() -> ExaModels.cons!(m, x, c), N) : NaN
+ tobj = btime(() -> ExaModels.obj(m, x), N)
+ tcon = ncon > 0 ? btime(() -> ExaModels.cons!(m, x, c), N) : NaN
tgrad = btime(() -> ExaModels.grad!(m, x, g), N)
- tjac = ncon > 0 ? btime(() -> ExaModels.jac_coord!(m, x, jac), N) : NaN
+ tjac = ncon > 0 ? btime(() -> ExaModels.jac_coord!(m, x, jac), N) : NaN
thess = btime(() -> ExaModels.hess_coord!(m, x, y, hess), N)
- return (nvar=nvar, ncon=ncon, tobj=tobj, tcon=tcon, tgrad=tgrad, tjac=tjac, thess=thess)
+ return (nvar = nvar, ncon = ncon, tobj = tobj, tcon = tcon, tgrad = tgrad, tjac = tjac, thess = thess)
end
function print_header(title)
println()
- println("=" ^ 80)
- @printf(" %-20s %6s %6s | %8s %8s %8s %8s %8s\n",
- title, "nvar", "ncon", "obj", "cons", "grad", "jac", "hess")
- println("=" ^ 80)
+ println("="^80)
+ @printf(
+ " %-20s %6s %6s | %8s %8s %8s %8s %8s\n",
+ title, "nvar", "ncon", "obj", "cons", "grad", "jac", "hess"
+ )
+ return println("="^80)
end
function print_row(name, r)
- @printf(" %-20s %6s %6s | %8.2e %8s %8.2e %8s %8.2e\n",
+ return @printf(
+ " %-20s %6s %6s | %8.2e %8s %8.2e %8s %8.2e\n",
name,
- r.nvar < 1_000_000 ? @sprintf("%6d", r.nvar) : @sprintf("%5.0fk", r.nvar/1000),
- r.ncon < 1_000_000 ? @sprintf("%6d", r.ncon) : @sprintf("%5.0fk", r.ncon/1000),
+ r.nvar < 1_000_000 ? @sprintf("%6d", r.nvar) : @sprintf("%5.0fk", r.nvar / 1000),
+ r.ncon < 1_000_000 ? @sprintf("%6d", r.ncon) : @sprintf("%5.0fk", r.ncon / 1000),
r.tobj,
- isnan(r.tcon) ? " N/A " : @sprintf("%8.2e", r.tcon),
+ isnan(r.tcon) ? " N/A " : @sprintf("%8.2e", r.tcon),
r.tgrad,
- isnan(r.tjac) ? " N/A " : @sprintf("%8.2e", r.tjac),
+ isnan(r.tjac) ? " N/A " : @sprintf("%8.2e", r.tjac),
r.thess,
)
end
@@ -71,22 +76,22 @@ end
# ── COPS ───────────────────────────────────────────────────────────────────────
const COPS_INSTANCES = [
- (:bearing_model, (50, 50)),
- (:chain_model, (800,)),
- (:camshape_model, (1000,)),
- (:catmix_model, (500,)),
- (:channel_model, (1000,)),
- (:gasoil_model, (500,)),
- (:glider_model, (500,)),
- (:marine_model, (500,)),
- (:methanol_model, (500,)),
- (:minsurf_model, (100, 100)),
- (:pinene_model, (500,)),
- (:robot_model, (500,)),
- (:rocket_model, (2000,)),
- (:steering_model, (1000,)),
- (:torsion_model, (100, 100)),
- (:channel_model, (1000,)),
+ (:bearing_model, (50, 50)),
+ (:chain_model, (800,)),
+ (:camshape_model, (1000,)),
+ (:catmix_model, (500,)),
+ (:channel_model, (1000,)),
+ (:gasoil_model, (500,)),
+ (:glider_model, (500,)),
+ (:marine_model, (500,)),
+ (:methanol_model, (500,)),
+ (:minsurf_model, (100, 100)),
+ (:pinene_model, (500,)),
+ (:robot_model, (500,)),
+ (:rocket_model, (2000,)),
+ (:steering_model, (1000,)),
+ (:torsion_model, (100, 100)),
+ (:channel_model, (1000,)),
]
print_header("COPS")
@@ -95,7 +100,7 @@ for (sym, params) in COPS_INSTANCES
try
m = model_func(COPSBenchmark.ExaModelsBackend(), params...)
r = benchmark_callbacks(m)
- print_row("$sym($(join(params,',')))", r)
+ print_row("$sym($(join(params, ',')))", r)
catch e
@printf(" %-20s ERROR: %s\n", string(sym), e)
end
diff --git a/deps/generate_specialfunctions.jl b/deps/generate_specialfunctions.jl
index bb5222e3..ec54e2c9 100644
--- a/deps/generate_specialfunctions.jl
+++ b/deps/generate_specialfunctions.jl
@@ -152,7 +152,7 @@ end
function main()
content = generate()
- if "--write" in ARGS
+ return if "--write" in ARGS
outfile = joinpath(dirname(@__DIR__), "ext", "functionlist.jl")
write(outfile, content)
println("Wrote $(length(content)) bytes to $outfile")
diff --git a/docs/src/distillation.jl b/docs/src/distillation.jl
index 52ed949f..232f2cde 100644
--- a/docs/src/distillation.jl
+++ b/docs/src/distillation.jl
@@ -46,14 +46,14 @@ function distillation_column_model(T = 3; backend = nothing)
c = ExaCore(; backend)
- @var(c, xA, 0:T, 0:(NT+1); start = 0.5)
- @var(c, yA, 0:T, 0:(NT+1); start = 0.5)
+ @var(c, xA, 0:T, 0:(NT + 1); start = 0.5)
+ @var(c, yA, 0:T, 0:(NT + 1); start = 0.5)
@var(c, u, 0:T; start = 1.0)
@var(c, V, 0:T; start = 1.0)
@var(c, L2, 0:T; start = 1.0)
- @obj(c, (yA[t, 1] - ybar)^2 for t = 0:T)
- @obj(c, (u[t] - ubar)^2 for t = 0:T)
+ @obj(c, (yA[t, 1] - ybar)^2 for t in 0:T)
+ @obj(c, (u[t] - ubar)^2 for t in 0:T)
@con(c, xA[0, i] - xA0 for (i, xA0) in xA0s)
@con(
@@ -86,8 +86,8 @@ function distillation_column_model(T = 3; backend = nothing)
(1 / Ar) * (L2[t] * xA[t, NT] - (F - D) * xA[t, NT+1] - V[t] * yA[t, NT+1]) for
t = 1:T
)
- @con(c, V[t] - u[t] * D - D for t = 0:T)
- @con(c, L2[t] - u[t] * D - F for t = 0:T)
+ @con(c, V[t] - u[t] * D - D for t in 0:T)
+ @con(c, L2[t] - u[t] * D - F for t in 0:T)
@con(
c,
yA[t, i] * (1 - xA[t, i]) - alpha * xA[t, i] * (1 - yA[t, i]) for (t, i) in itr2
diff --git a/docs/src/gpu.jl b/docs/src/gpu.jl
index 93f88a63..fc497075 100644
--- a/docs/src/gpu.jl
+++ b/docs/src/gpu.jl
@@ -23,9 +23,9 @@ end
function luksan_vlcek_model(N)
c = ExaCore()
- @var(c, x, N; start = (luksan_vlcek_x0(i) for i = 1:N))
- @con(c, luksan_vlcek_con(x, i) for i = 1:(N-2))
- @obj(c, luksan_vlcek_obj(x, i) for i = 2:N)
+ @var(c, x, N; start = (luksan_vlcek_x0(i) for i in 1:N))
+ @con(c, luksan_vlcek_con(x, i) for i in 1:(N - 2))
+ @obj(c, luksan_vlcek_obj(x, i) for i in 2:N)
return ExaModel(c)
end
@@ -34,9 +34,9 @@ end
function luksan_vlcek_model(N, backend = nothing)
c = ExaCore(; backend = backend) # specify the backend
- @var(c, x, N; start = (luksan_vlcek_x0(i) for i = 1:N))
- @con(c, luksan_vlcek_con(x, i) for i = 1:(N-2))
- @obj(c, luksan_vlcek_obj(x, i) for i = 2:N)
+ @var(c, x, N; start = (luksan_vlcek_x0(i) for i in 1:N))
+ @con(c, luksan_vlcek_con(x, i) for i in 1:(N - 2))
+ @obj(c, luksan_vlcek_obj(x, i) for i in 2:N)
return ExaModel(c)
end
diff --git a/docs/src/guide.jl b/docs/src/guide.jl
index 4c48a4f8..b0e57009 100644
--- a/docs/src/guide.jl
+++ b/docs/src/guide.jl
@@ -29,7 +29,7 @@ c = ExaCore()
# ## Variables
# Now, let's create the optimization variables. From the problem definition, we can see that we will need $N$ scalar variables. We will choose $N=10$, and create the variable $x\in\mathbb{R}^{N}$ with the following command:
N = 10
-@var(c, x, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N))
+@var(c, x, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i in 1:N))
# This creates the variable `x`, which we will be able to refer to when we create constraints/objective constraints. Also, this modifies the information in the `ExaCore` object properly so that later an optimization model can be properly created with the necessary information. Observe that we have used the keyword argument `start` to specify the initial guess for the solution. The variable upper and lower bounds can be specified in a similar manner. For example, if we wanted to set the lower bound of the variable `x` to 0.0 and the upper bound to 10.0, we could do it as follows:
# ```julia
# @var(c, x, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N), lvar = 0.0, uvar = 10.0)
@@ -37,7 +37,7 @@ N = 10
# ## Objective
# The objective can be set as follows:
-@obj(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:N)
+@obj(c, 100 * (x[i - 1]^2 - x[i])^2 + (x[i - 1] - 1)^2 for i in 2:N)
# !!! note
# Note that the terms here are summed, without explicitly using `sum( ... )` syntax.
diff --git a/docs/src/opf.jl b/docs/src/opf.jl
index 2a4199d5..c1a44ccf 100644
--- a/docs/src/opf.jl
+++ b/docs/src/opf.jl
@@ -101,7 +101,7 @@ function ac_power_model(filename; backend = nothing, T = Float64)
w = ExaCore(T; backend = backend)
- @var(w, va, length(data.bus);)
+ @var(w, va, length(data.bus))
@var(
w,
diff --git a/docs/src/parameters.jl b/docs/src/parameters.jl
index 6c86169a..081e81eb 100644
--- a/docs/src/parameters.jl
+++ b/docs/src/parameters.jl
@@ -14,10 +14,10 @@ c_param = ExaCore()
# Define the variables as before:
N = 10
-@var(c_param, x_p, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N))
+@var(c_param, x_p, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i in 1:N))
# Now we can use the parameters in our objective function just like variables:
-@obj(c_param, θ[1] * (x_p[i-1]^2 - x_p[i])^2 + (x_p[i-1] - θ[2])^2 for i = 2:N)
+@obj(c_param, θ[1] * (x_p[i - 1]^2 - x_p[i])^2 + (x_p[i - 1] - θ[2])^2 for i in 2:N)
# Add the same constraints as before:
@con(
diff --git a/docs/src/performance.jl b/docs/src/performance.jl
index a83fedb2..f6ce053f 100644
--- a/docs/src/performance.jl
+++ b/docs/src/performance.jl
@@ -9,8 +9,8 @@ using ExaModels
t = @elapsed begin
c = ExaCore()
N = 10
- @var(c, x, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N))
- @obj(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:N)
+ @var(c, x, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i in 1:N))
+ @obj(c, 100 * (x[i - 1]^2 - x[i])^2 + (x[i - 1] - 1)^2 for i in 2:N)
@con(
c,
3x[i+1]^3 + 2 * x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] - x[i]exp(x[i] - x[i+1]) - 3 for i = 1:(N-2)
@@ -24,8 +24,8 @@ println("$t seconds elapsed")
t = @elapsed begin
c = ExaCore()
N = 10
- @var(c, x, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N))
- @obj(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:N)
+ @var(c, x, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i in 1:N))
+ @obj(c, 100 * (x[i - 1]^2 - x[i])^2 + (x[i - 1] - 1)^2 for i in 2:N)
@con(
c,
3x[i+1]^3 + 2 * x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] - x[i]exp(x[i] - x[i+1]) - 3 for i = 1:(N-2)
@@ -39,8 +39,8 @@ println("$t seconds elapsed")
# But instead, if you create a function, we can significantly reduce the model creation time.
function luksan_vlcek_model(N)
c = ExaCore()
- @var(c, x, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N))
- @obj(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:N)
+ @var(c, x, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i in 1:N))
+ @obj(c, 100 * (x[i - 1]^2 - x[i])^2 + (x[i - 1] - 1)^2 for i in 2:N)
@con(
c,
3x[i+1]^3 + 2 * x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] -
@@ -74,8 +74,8 @@ function luksan_vlcek_model_concrete(N)
arr1 = Array(2:N)
arr2 = Array(1:(N-2))
- @var(c, x, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N))
- @obj(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i in arr1)
+ @var(c, x, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i in 1:N))
+ @obj(c, 100 * (x[i - 1]^2 - x[i])^2 + (x[i - 1] - 1)^2 for i in arr1)
@con(
c,
3x[i+1]^3 + 2 * x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] -
@@ -90,8 +90,8 @@ function luksan_vlcek_model_non_concrete(N)
arr1 = Array{Any}(2:N)
arr2 = Array{Any}(1:(N-2))
- @var(c, x, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N))
- @obj(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i in arr1)
+ @var(c, x, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i in 1:N))
+ @obj(c, 100 * (x[i - 1]^2 - x[i])^2 + (x[i - 1] - 1)^2 for i in arr1)
@con(
c,
3x[i+1]^3 + 2 * x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] -
diff --git a/docs/src/quad.jl b/docs/src/quad.jl
index 49a5ad45..7c5692e9 100644
--- a/docs/src/quad.jl
+++ b/docs/src/quad.jl
@@ -22,11 +22,11 @@ function quadrotor_model(N = 3; backend = nothing)
c = ExaCore(; backend = backend)
- @var(c, x, 1:(N+1), 1:n)
+ @var(c, x, 1:(N + 1), 1:n)
@var(c, u, 1:N, 1:p)
@con(c, x[1, i] - x0 for (i, x0) in x0s)
- @con(c, -x[i+1, 1] + x[i, 1] + (x[i, 2]) * dt for i = 1:N)
+ @con(c, -x[i + 1, 1] + x[i, 1] + (x[i, 2]) * dt for i in 1:N)
@con(
c,
-x[i+1, 2] +
@@ -36,7 +36,7 @@ function quadrotor_model(N = 3; backend = nothing)
u[i, 1] * sin(x[i, 7]) * sin(x[i, 9])
) * dt for i = 1:N
)
- @con(c, -x[i+1, 3] + x[i, 3] + (x[i, 4]) * dt for i = 1:N)
+ @con(c, -x[i + 1, 3] + x[i, 3] + (x[i, 4]) * dt for i in 1:N)
@con(
c,
-x[i+1, 4] +
@@ -46,7 +46,7 @@ function quadrotor_model(N = 3; backend = nothing)
u[i, 1] * sin(x[i, 7]) * cos(x[i, 9])
) * dt for i = 1:N
)
- @con(c, -x[i+1, 5] + x[i, 5] + (x[i, 6]) * dt for i = 1:N)
+ @con(c, -x[i + 1, 5] + x[i, 5] + (x[i, 6]) * dt for i in 1:N)
@con(
c,
-x[i+1, 6] + x[i, 6] + (u[i, 1] * cos(x[i, 7]) * cos(x[i, 8]) - 9.8) * dt for
@@ -77,7 +77,7 @@ function quadrotor_model(N = 3; backend = nothing)
@obj(c, 0.5 * R * (u[i, j]^2) for (i, j, R) in itr0)
@obj(c, 0.5 * Q * (x[i, j] - d)^2 for (i, j, Q, d) in itr1)
- @obj(c, 0.5 * Qf * (x[N+1, j] - d)^2 for (j, Qf, d) in itr2)
+ @obj(c, 0.5 * Qf * (x[N + 1, j] - d)^2 for (j, Qf, d) in itr2)
m = ExaModel(c)
diff --git a/docs/src/two_stage.jl b/docs/src/two_stage.jl
index 44aa5edc..d5e836e6 100644
--- a/docs/src/two_stage.jl
+++ b/docs/src/two_stage.jl
@@ -20,13 +20,13 @@ core = TwoStageExaCore()
@var(core, d; start = 1.0, lvar = 0.0, uvar = Inf, scenario = 0) ## design variable d
# For the recourse variables `v`, we specify `scenario = [i for i=1:ns, j=1:nv]` to indicate that each variable `v[s,i]` belongs to scenario `s`. This allows us to define scenario-specific constraints and objectives that involve these recourse variables.
-@var(core, v, ns, nv; start = 1.0, lvar = 0.0, uvar = Inf, scenario = [i for i=1:ns, j=1:nv]) ## recourse variables v
+@var(core, v, ns, nv; start = 1.0, lvar = 0.0, uvar = Inf, scenario = [i for i in 1:ns, j in 1:nv]) ## recourse variables v
# Now we can define the constraints and objective function. The `scenario` keyword argument in the `constraint` and `objective` functions allows us to specify which scenario(s) each constraint or objective term belongs to.
-@con(core, v[s,1] - v[s,2]^2 for s in 1:ns; lcon = 0.0, scenario = 1:ns)
+@con(core, v[s, 1] - v[s, 2]^2 for s in 1:ns; lcon = 0.0, scenario = 1:ns)
@obj(core, d^2)
-@obj(core, weight * (v[s,i] - d)^2 for s in 1:ns, i in 1:nv)
+@obj(core, weight * (v[s, i] - d)^2 for s in 1:ns, i in 1:nv)
m = ExaModel(core)
diff --git a/ext/ExaModelsKernelAbstractions.jl b/ext/ExaModelsKernelAbstractions.jl
index d899059c..95bde223 100644
--- a/ext/ExaModelsKernelAbstractions.jl
+++ b/ext/ExaModelsKernelAbstractions.jl
@@ -121,7 +121,7 @@ end
_conaug_structure!(T, backend, ::Tuple{}, sparsity) = nothing
function _conaug_structure!(T, backend, (con, cons...), sparsity)
_conaug_structure!(T, backend, cons, sparsity)
- con isa ExaModels.ConstraintAug && !isempty(con.itr) &&
+ return con isa ExaModels.ConstraintAug && !isempty(con.itr) &&
kers(backend)(sparsity, con.f, con.itr, con.oa; ndrange = length(con.itr))
end
@kernel function kers(sparsity, @Const(f), @Const(itr), @Const(oa))
@@ -134,7 +134,7 @@ end
_grad_structure!(T, backend, ::Tuple{}, gsparsity) = nothing
function _grad_structure!(T, backend, (obj, objs...), gsparsity)
_grad_structure!(T, backend, objs, gsparsity)
- ExaModels.sgradient!(backend, gsparsity, obj, ExaModels.NaNSource{T}(), ExaModels.NaNSource{T}(), T(NaN))
+ return ExaModels.sgradient!(backend, gsparsity, obj, ExaModels.NaNSource{T}(), ExaModels.NaNSource{T}(), T(NaN))
end
function ExaModels.jac_structure!(
@@ -150,7 +150,7 @@ end
_jac_structure!(T, backend, ::Tuple{}, rows, cols) = nothing
function _jac_structure!(T, backend, (con, cons...), rows, cols)
_jac_structure!(T, backend, cons, rows, cols)
- ExaModels.sjacobian!(backend, rows, cols, con, ExaModels.NaNSource{T}(), ExaModels.NaNSource{T}(), T(NaN))
+ return ExaModels.sjacobian!(backend, rows, cols, con, ExaModels.NaNSource{T}(), ExaModels.NaNSource{T}(), T(NaN))
end
@@ -169,12 +169,12 @@ end
_obj_hess_structure!(T, backend, ::Tuple{}, rows, cols) = nothing
function _obj_hess_structure!(T, backend, (obj, objs...), rows, cols)
_obj_hess_structure!(T, backend, objs, rows, cols)
- ExaModels.shessian!(backend, rows, cols, obj, ExaModels.NaNSource{T}(), ExaModels.NaNSource{T}(), T(NaN), T(NaN))
+ return ExaModels.shessian!(backend, rows, cols, obj, ExaModels.NaNSource{T}(), ExaModels.NaNSource{T}(), T(NaN), T(NaN))
end
_con_hess_structure!(T, backend, ::Tuple{}, rows, cols) = nothing
function _con_hess_structure!(T, backend, (con, cons...), rows, cols)
_con_hess_structure!(T, backend, cons, rows, cols)
- ExaModels.shessian!(backend, rows, cols, con, ExaModels.NaNSource{T}(), ExaModels.NaNSource{T}(), T(NaN), T(NaN))
+ return ExaModels.shessian!(backend, rows, cols, con, ExaModels.NaNSource{T}(), ExaModels.NaNSource{T}(), T(NaN), T(NaN))
end
@@ -220,7 +220,7 @@ end
_cons_nln!(backend, y, ::Tuple{}, x, θ) = nothing
function _cons_nln!(backend, y, (con, cons...), x, θ)
_cons_nln!(backend, y, cons, x, θ)
- if con isa ExaModels.Constraint && !isempty(con.itr)
+ return if con isa ExaModels.Constraint && !isempty(con.itr)
kerf(backend)(y, con.f, con.itr, x, θ; ndrange = length(con.itr))
end
end
@@ -230,7 +230,7 @@ end
_conaugs!(backend, y, ::Tuple{}, x, θ) = nothing
function _conaugs!(backend, y, (con, cons...), x, θ)
_conaugs!(backend, y, cons, x, θ)
- if con isa ExaModels.ConstraintAug && !isempty(con.itr)
+ return if con isa ExaModels.ConstraintAug && !isempty(con.itr)
kerf2(backend)(y, con.f, con.itr, x, θ, con.oa; ndrange = length(con.itr))
end
end
@@ -261,7 +261,7 @@ end
_grad!(backend, y, ::Tuple{}, x, θ) = nothing
function _grad!(backend, y, (obj, objs...), x, θ)
_grad!(backend, y, objs, x, θ)
- ExaModels.sgradient!(backend, y, obj, x, θ, one(eltype(y)))
+ return ExaModels.sgradient!(backend, y, obj, x, θ, one(eltype(y)))
end
function ExaModels.jac_coord!(
@@ -276,7 +276,7 @@ end
_jac_coord!(backend, y, ::Tuple{}, x, θ) = nothing
function _jac_coord!(backend, y, (con, cons...), x, θ)
_jac_coord!(backend, y, cons, x, θ)
- ExaModels.sjacobian!(backend, y, nothing, con, x, θ, one(eltype(y)))
+ return ExaModels.sjacobian!(backend, y, nothing, con, x, θ, one(eltype(y)))
end
function ExaModels.jprod_nln!(
@@ -456,12 +456,12 @@ end
_obj_hess_coord!(backend, hess, ::Tuple{}, x, θ, obj_weight) = nothing
function _obj_hess_coord!(backend, hess, (obj, objs...), x, θ, obj_weight)
_obj_hess_coord!(backend, hess, objs, x, θ, obj_weight)
- ExaModels.shessian!(backend, hess, nothing, obj, x, θ, obj_weight, zero(eltype(hess)))
+ return ExaModels.shessian!(backend, hess, nothing, obj, x, θ, obj_weight, zero(eltype(hess)))
end
_con_hess_coord!(backend, hess, ::Tuple{}, x, θ, y) = nothing
function _con_hess_coord!(backend, hess, (con, cons...), x, θ, y)
_con_hess_coord!(backend, hess, cons, x, θ, y)
- ExaModels.shessian!(backend, hess, nothing, con, x, θ, y, zero(eltype(hess)))
+ return ExaModels.shessian!(backend, hess, nothing, con, x, θ, y, zero(eltype(hess)))
end
diff --git a/src/gradient.jl b/src/gradient.jl
index c4054cb6..5dd69f29 100644
--- a/src/gradient.jl
+++ b/src/gradient.jl
@@ -8,7 +8,7 @@ Performs dense gradient evaluation via the reverse pass on the computation (sub)
- `y`: result vector
- `adj`: adjoint propagated up to the current node
"""
-@inline function drpass(d::D, y, adj) where {D<:Union{Real,AdjointNull}}
+@inline function drpass(d::D, y, adj) where {D <: Union{Real, AdjointNull}}
nothing
end
@inline function drpass(d::D, y, adj) where {D<:AdjointNode1}
@@ -98,44 +98,44 @@ end
end
@inline function grpass(
- d::D,
- comp::Nothing,
- y,
- o1,
- cnt::Tuple{<:Tuple,<:Tuple},
- adj,
-) where {D<:Union{AdjointNull,ParIndexed,Real}}
+ d::D,
+ comp::Nothing,
+ y,
+ o1,
+ cnt::Tuple{<:Tuple, <:Tuple},
+ adj,
+ ) where {D <: Union{AdjointNull, ParIndexed, Real}}
return cnt
end
@inline function grpass(
- d::D,
- comp::Nothing,
- y,
- o1,
- cnt::Tuple{<:Tuple,<:Tuple},
- adj,
-) where {D<:AdjointNode1}
+ d::D,
+ comp::Nothing,
+ y,
+ o1,
+ cnt::Tuple{<:Tuple, <:Tuple},
+ adj,
+ ) where {D <: AdjointNode1}
return grpass(d.inner, nothing, y, o1, cnt, adj * d.y)
end
@inline function grpass(
- d::D,
- comp::Nothing,
- y,
- o1,
- cnt::Tuple{<:Tuple,<:Tuple},
- adj,
-) where {D<:AdjointNode2}
+ d::D,
+ comp::Nothing,
+ y,
+ o1,
+ cnt::Tuple{<:Tuple, <:Tuple},
+ adj,
+ ) where {D <: AdjointNode2}
cnt = grpass(d.inner1, nothing, y, o1, cnt, adj * d.y1)
return grpass(d.inner2, nothing, y, o1, cnt, adj * d.y2)
end
@inline function grpass(
- d::D,
- comp::Nothing,
- y,
- o1,
- cnt::Tuple{<:Tuple,<:Tuple},
- adj,
-) where {D<:AdjointNodeVar}
+ d::D,
+ comp::Nothing,
+ y,
+ o1,
+ cnt::Tuple{<:Tuple, <:Tuple},
+ adj,
+ ) where {D <: AdjointNodeVar}
mapping, uniques = cnt
idx = _grpass_find_ident(d.i, uniques, 1)
if idx === 0
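The `cnt::Tuple{<:Tuple, <:Tuple}` threaded through `grpass` carries a `(mapping, uniques)` pair, and `_grpass_find_ident` walks the `uniques` tuple looking for a variable index that was already seen. A hedged sketch of that idiom (the helper names `find_ident` and `record` are illustrative, not the package's API):

```julia
# Recursive linear search over a tuple of seen indices: returns the 1-based
# position of `i`, or 0 when `i` has not been recorded yet.
find_ident(i, uniques::Tuple{}, k) = 0
find_ident(i, uniques::Tuple, k) =
    first(uniques) == i ? k : find_ident(i, Base.tail(uniques), k + 1)

# A new index extends `uniques`; a repeated one records its existing slot.
function record(i, (mapping, uniques))
    idx = find_ident(i, uniques, 1)
    return idx === 0 ?
        ((mapping..., length(uniques) + 1), (uniques..., i)) :
        ((mapping..., idx), uniques)
end

cnt = ((), ())
cnt = record(7, cnt)   # first sight of variable 7 → new slot 1
cnt = record(3, cnt)   # new slot 2
cnt = record(7, cnt)   # repeat → reuses slot 1
# cnt == ((1, 2, 1), (7, 3))
```

Every `record` call here produces a tuple of a different type, which is exactly the growing-tuple behavior the commit log below describes as fragile once an expression repeats the same variable many times.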
diff --git a/src/graph.jl b/src/graph.jl
index 2ae39287..8c0af9ca 100644
--- a/src/graph.jl
+++ b/src/graph.jl
@@ -351,12 +351,12 @@ end
# ── Primal evaluation (x::AbstractVector → scalar) ───────────────────────────
-@inline (n::SumNode{Tuple{}})(i, x::V, θ) where {T, V<:AbstractVector{T}} = zero(T)
-@inline (n::SumNode)(i, x::V, θ) where {T, V<:AbstractVector{T}} =
+@inline (n::SumNode{Tuple{}})(i, x::V, θ) where {T, V <: AbstractVector{T}} = zero(T)
+@inline (n::SumNode)(i, x::V, θ) where {T, V <: AbstractVector{T}} =
mapreduce(inner -> inner(i, x, θ), +, n.inners)
-@inline (n::ProdNode{Tuple{}})(i, x::V, θ) where {T, V<:AbstractVector{T}} = one(T)
-@inline (n::ProdNode)(i, x::V, θ) where {T, V<:AbstractVector{T}} =
+@inline (n::ProdNode{Tuple{}})(i, x::V, θ) where {T, V <: AbstractVector{T}} = one(T)
+@inline (n::ProdNode)(i, x::V, θ) where {T, V <: AbstractVector{T}} =
mapreduce(inner -> inner(i, x, θ), *, n.inners)
# ── Adjoint tree (gradient) ───────────────────────────────────────────────────
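The `SumNode`/`ProdNode` callables above reduce over a tuple of inner nodes with `mapreduce`, while the dedicated `Tuple{}` method pins the element type so an empty reduction stays type-stable. A minimal sketch, assuming a simplified node type (not the package's actual `SumNode`):

```julia
# Simplified sum-of-inner-nodes callable.
struct Sum{I <: Tuple}
    inners::I
end
# Empty-tuple method: the element type of `x` determines the typed zero.
(n::Sum{Tuple{}})(i, x::AbstractVector{T}) where {T} = zero(T)
# General method: reduce each inner node's value with `+`.
(n::Sum)(i, x::AbstractVector) = mapreduce(inner -> inner(i, x), +, n.inners)

# Illustrative inner nodes: closures reading one coordinate each.
square(k) = (i, x) -> x[k]^2
s = Sum((square(1), square(2)))
s(1, [3.0, 4.0])          # 9.0 + 16.0 = 25.0
Sum(())(1, [3.0, 4.0])    # typed zero: 0.0, not an error
```

Without the `Tuple{}` specialization, `mapreduce` over an empty collection would have no way to infer the result type.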
diff --git a/src/hessian.jl b/src/hessian.jl
index 64f92a07..5b9223bc 100644
--- a/src/hessian.jl
+++ b/src/hessian.jl
@@ -268,15 +268,15 @@ end
end
@inline function hdrpass(
- t1::T1,
- t2::T2,
- comp::Nothing,
- y1,
- y2,
- o2,
- cnt::Tuple{<:Tuple,<:Tuple},
- adj,
-) where {T1<:SecondAdjointNodeVar,T2<:SecondAdjointNodeVar}
+ t1::T1,
+ t2::T2,
+ comp::Nothing,
+ y1,
+ y2,
+ o2,
+ cnt::Tuple{<:Tuple, <:Tuple},
+ adj,
+ ) where {T1 <: SecondAdjointNodeVar, T2 <: SecondAdjointNodeVar}
pair = (t1.i, t2.i)
mapping, uniques = cnt
idx = _hpass_find_pair(pair, uniques, 1)
@@ -524,7 +524,7 @@ end
y1,
y2,
o2,
- cnt::Vector,
+ cnt::Vector,
adj,
)
push!(cnt, (t1.i, t2.i))
@@ -544,15 +544,15 @@ end
end
@inline function hrpass(
- t::T,
- comp::Nothing,
- y1,
- y2,
- o2,
- cnt::Tuple{<:Tuple,<:Tuple},
- adj,
- adj2,
-) where {T<:SecondAdjointNodeVar}
+ t::T,
+ comp::Nothing,
+ y1,
+ y2,
+ o2,
+ cnt::Tuple{<:Tuple, <:Tuple},
+ adj,
+ adj2,
+ ) where {T <: SecondAdjointNodeVar}
pair = (t.i, t.i)
mapping, uniques = cnt
idx = _hpass_find_pair(pair, uniques, 1)
diff --git a/src/nlp.jl b/src/nlp.jl
index af0f31cc..2affeca9 100644
--- a/src/nlp.jl
+++ b/src/nlp.jl
@@ -16,11 +16,11 @@ A handle to a block of optimization variables added to an [`ExaCore`](@ref) via
individual entries in objective and constraint expressions. Retrieve solution
values with [`solution`](@ref).
"""
-struct Variable{D,S,O} <: AbstractVariable
+struct Variable{D, S, O} <: AbstractVariable
size::S
length::O
offset::O
- Variable(size::S, length::O, offset::O, D) where {S, O} = new{D,S,O}(size, length, offset)
+ Variable(size::S, length::O, offset::O, D) where {S, O} = new{D, S, O}(size, length, offset)
end
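The inner constructor of `Variable` above accepts the tag `D` as a runtime value and lifts it into a type parameter via `new{D, S, O}`, so variable blocks with different tags get distinct concrete types. A sketch of that idiom with a hypothetical `Tagged` struct:

```julia
# Value-to-type-parameter lifting: `D` arrives as an ordinary argument
# but becomes part of the struct's type.
struct Tagged{D, S}
    size::S
    Tagged(size::S, D) where {S} = new{D, S}(size)
end

a = Tagged((3,), :x)
b = Tagged((3,), :y)
typeof(a) === typeof(b)   # false: the tag distinguishes the types
```

This is what lets dispatch (and generated code) specialize per variable block rather than treating all blocks uniformly.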
Base.show(io::IO, v::Variable) = print(
io,
@@ -49,10 +49,10 @@ end
Base.show(io::IO, s::Expression) = print(
io,
"""
-Subexpression (reduced)
+ Subexpression (reduced)
- s ∈ R^{$(join(size(s.size), " × "))}
-""",
+ s ∈ R^{$(join(size(s.size), " × "))}
+ """,
)
"""
@@ -84,7 +84,7 @@ An objective term group added to an [`ExaCore`](@ref) via [`add_obj`](@ref) /
[`@obj`](@ref). All `Objective` objects in a core are summed at evaluation time
to form the total objective value.
"""
-struct Objective{F,I} <: AbstractObjective
+struct Objective{F, I} <: AbstractObjective
f::F
itr::I
end
@@ -108,7 +108,7 @@ A block of constraints added to an [`ExaCore`](@ref) via [`add_con`](@ref) /
Row `k` of this block maps to global constraint index `offset + k`. Dual
solution values can be retrieved with [`multipliers`](@ref).
"""
-struct Constraint{F,I,O} <: AbstractConstraint
+struct Constraint{F, I, O} <: AbstractConstraint
f::F
itr::I
offset::O
@@ -136,7 +136,7 @@ by `idx` at evaluation time. Multiple `ConstraintAug` objects can be stacked on
the same base constraint to aggregate contributions from several data sources
(e.g. summing arc flows into nodal balance constraints).
"""
-struct ConstraintAug{F,I,D} <: AbstractConstraint
+struct ConstraintAug{F, I, D} <: AbstractConstraint
f::F
itr::I
oa::Int
@@ -155,7 +155,7 @@ Constraint Augmentation
""",
)
-abstract type AbstractExaCore{T,VT,B,S} end
+abstract type AbstractExaCore{T, VT, B, S} end
"""
ExaCore([array_eltype::Type; backend = nothing, minimize = true, name = :Generic])
@@ -199,7 +199,7 @@ An ExaCore
-@inline ExaCore(::Type{T}; backend = nothing, kwargs...) where {T<:AbstractFloat} = _exa_core(; x0 = convert_array(zeros(T, 0), backend), backend, kwargs...)
+@inline ExaCore(::Type{T}; backend = nothing, kwargs...) where {T <: AbstractFloat} = _exa_core(; x0 = convert_array(zeros(T, 0), backend), backend, kwargs...)
@@ -320,7 +320,7 @@ An abstract type for ExaModel, which is a subtype of `NLPModels.AbstractNLPModel
-struct ExaModel{T,VT,E,V,P,O,C,S,R} <: AbstractExaModel{T,VT,E}
+struct ExaModel{T, VT, E, V, P, O, C, S, R} <: AbstractExaModel{T, VT, E}
@@ -715,7 +715,7 @@ function add_par(c::C, start::AbstractArray; name = nothing) where {T,C<:ExaCore
-function add_var(c::C; kwargs...) where {T,C<:ExaCore{T}}
+function add_var(c::C; kwargs...) where {T, C <: ExaCore{T}}
-function add_var(c::C, name::Symbol, args...; kwargs...) where {T,C<:ExaCore{T}}
+function add_var(c::C, name::Symbol, args...; kwargs...) where {T, C <: ExaCore{T}}
@@ -814,12 +814,12 @@ Objective
Positional-name forwarding for add_con — avoids Core.kwcall for
The previous positional-name methods had ambiguity when @con/@obj received
bare expressions (not generators), e.g. boundary conditions like
`@con(c, c2, x[1] - 1.0)`. The untyped `n` parameter in `add_con(c, n, name::Val)`
conflicted with the AbstractNode method. Fix by adding explicit AbstractNode
and Integer dispatch methods, and typing the generator methods with
Base.Generator.
https://claude.ai/code/session_01QsVaXnG1Cw7LdtCzvVYRoo
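The ambiguity described above can be reproduced in miniature: an untyped first argument paired with a `Val`-typed name can tie with a method specialized on the node type, and an explicit method covering both constraints removes the tie. A sketch (names are illustrative, not the package's API):

```julia
abstract type Node end
struct Leaf <: Node end

f(n, name::Val)  = :untyped_name    # generic: any first argument, Val name
f(n::Node, name) = :typed_node      # specific first argument, any name

# f(Leaf(), Val(:c2)) would raise a MethodError (ambiguous): neither method
# above is more specific in both argument positions. An explicit method
# that is most specific in both positions resolves it:
f(n::Node, name::Val) = :node_with_name

f(Leaf(), Val(:c2))   # → :node_with_name
```

This mirrors the fix: add explicit dispatch methods for each concrete first-argument type instead of relying on an untyped fallback.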
The previous forwarding methods called add_var(c, n1; name=name, kwargs...)
which put name=Val{:x}() back into Core.kwcall. Fix by calling the original
function WITHOUT name, then patching refs on the returned ExaCore. This keeps
Val types completely out of kwcall throughout the chain.
https://claude.ai/code/session_01QsVaXnG1Cw7LdtCzvVYRoo
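The pattern in the commit above can be sketched as follows: rather than forwarding a `Val`-typed name through keyword arguments (which routes it through `Core.kwcall`), call the keyword-free path and patch the name afterwards. All names here (`NameStore`, `add_thing!`) are hypothetical stand-ins for the ExaModels functions:

```julia
# Minimal stand-in for a core object that records names of added entities.
struct NameStore
    names::Vector{Symbol}
end

# Original method: no name keyword, so no kwcall is involved.
add_thing!(c::NameStore) = (push!(c.names, :anon); c)

# Forwarding method: takes the Val name POSITIONALLY, calls the original
# method without keywords, then patches the recorded name on the result.
function add_thing!(c::NameStore, ::Val{name}) where {name}
    add_thing!(c)
    c.names[end] = name
    return c
end

c = NameStore(Symbol[])
add_thing!(c, Val(:x))
# c.names == [:x], and Val never entered a keyword call
```

Keeping `Val` out of keyword signatures matters here because `juliac --trim=safe` must statically resolve every call, and `Core.kwcall` with a value-encoding type in the keyword tuple defeats that.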
These two new models from COPSBenchmark dd06bc8 fail juliac --trim=safe because elec_model calls ExaCore(T; backend=nothing) with an abstract T::Type parameter, and channel_model has a similar kwbody resolution issue. These need fixes on the COPSBenchmark side. https://claude.ai/code/session_01QsVaXnG1Cw7LdtCzvVYRoo
Switch to claude/compare-backends-glider-IRYgD which fixes juliac compatibility for elec_model and channel_model. https://claude.ai/code/session_01QsVaXnG1Cw7LdtCzvVYRoo
Tested all commented-out models for AOT compatibility:
- elec, channel: compile + run OK (enabled)
- polygon: compiles but NaN at runtime
- tetra_*, triangle_*: compile but crash at runtime (mesh data issues)
- lane_emden, dirichlet, henon: fail to compile (Dict{Symbol,Any} in PDEDiscretizationDomain)
https://claude.ai/code/session_01QsVaXnG1Cw7LdtCzvVYRoo
COPSBenchmark fixed the PDE and mesh models on their side. Enable all 28 models for juliac --trim=safe compilation. https://claude.ai/code/session_01QsVaXnG1Cw7LdtCzvVYRoo
…enon) These use transition_state_model which passes T::DataType abstractly through kwargs, causing juliac --trim=safe to fail. All other models (including tetra_*, triangle_*, polygon, elec, channel) now compile OK. https://claude.ai/code/session_01QsVaXnG1Cw7LdtCzvVYRoo
COPSBenchmark fixed transition_state_model for juliac compatibility. All 28 COPS models now enabled. https://claude.ai/code/session_01QsVaXnG1Cw7LdtCzvVYRoo
AOT compilation of 28 COPS models is expensive (~5 min) and only needs to run on one platform. Skip it on:
- Julia LTS (juliac features are Julia 1.12+ only)
- GPU/Metal self-hosted runners (AOT is CPU-only, wastes GPU time)
Also add runtime tests for the newly enabled elec/channel models. Set EXAMODELS_SKIP_AOT=1 to skip AOT in any CI leg.
https://claude.ai/code/session_01QsVaXnG1Cw7LdtCzvVYRoo
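A test suite can honor the EXAMODELS_SKIP_AOT flag mentioned above with a one-line environment check; the helper name `skip_aot` is an assumption for illustration, not the package's actual test code:

```julia
# Returns true when the CI leg has opted out of the expensive AOT step.
skip_aot() = get(ENV, "EXAMODELS_SKIP_AOT", "0") == "1"

ENV["EXAMODELS_SKIP_AOT"] = "1"
skip_aot()   # true: this leg would skip the ~5 min COPS AOT compilation
```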
CPU tests already run on the github matrix runners. GPU self-hosted runners should only test GPU backends to save time. Uses the existing EXAMODELS_NO_TEST_CPU flag. https://claude.ai/code/session_01QsVaXnG1Cw7LdtCzvVYRoo
The macOS CI showed "Unexpected Pass" for glider N=20, meaning our positional-name kwcall fix resolved the macOS AOT issue. https://claude.ai/code/session_01QsVaXnG1Cw7LdtCzvVYRoo
Windows CI showed "Unexpected Pass" for glider N=20, confirming the kwcall fix resolved the issue on all platforms. https://claude.ai/code/session_01QsVaXnG1Cw7LdtCzvVYRoo
Succeeds #252 #255
https://claude.ai/code/session_01QsVaXnG1Cw7LdtCzvVYRoo