
Update Rust crate ort to v2.0.0-rc.12#32

Open
renovate[bot] wants to merge 1 commit into main from renovate/ort-2.x-lockfile

Conversation


@renovate renovate bot commented Jun 1, 2025

This PR contains the following updates:

Package | Type | Update | Change
ort (source) | dependencies | patch | 2.0.0-rc.9 -> 2.0.0-rc.12

Release Notes

pykeio/ort (ort)

v2.0.0-rc.12

Compare Source

2.0.0-rc.12

💖 If you find ort useful, please consider sponsoring us on Open Collective 💖

🤔 Need help upgrading? Ask questions in GitHub Discussions or in the pyke.io Discord server!


This release was made possible by Rime.ai!

Rime.ai: Authentic AI voice models for enterprise.


📍 Multiversioning

🚨 If you used ort with default-features = false, enable the api-24 feature to use the latest features.

The big highlight of this release is multiversioning: ort can now use any minor version of ONNX Runtime from v1.17 to v1.24. New features are gated behind api-* feature flags, like api-20 or api-24. These flags will set the minimum version of ONNX Runtime required by ort.

More info 👉 https://ort.pyke.io/setup/multiversion
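For projects building with default-features = false, the migration boils down to one line in Cargo.toml. A minimal sketch (the feature list beyond api-24 depends on your setup, and the std feature shown is only needed if you rely on OS-facing APIs):

```toml
[dependencies]
# api-24 raises the minimum required ONNX Runtime version and unlocks the
# newest APIs; a lower api-* flag (e.g. api-20) keeps compatibility with
# older runtimes at the cost of newer features.
ort = { version = "=2.0.0-rc.12", default-features = false, features = ["std", "api-24"] }
```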

🪄 Automatic device selection

With ONNX Runtime 1.22 or later, ort will now automatically use an NPU if one is available for maximum efficiency & power savings! Setting your own execution providers will override this.

This is thanks to the super cool new SessionBuilder::with_auto_device API! There's also SessionBuilder::with_devices for finer control.

👁️ CUDA 13

ort now ships builds for both CUDA 12 & CUDA 13! It should automatically detect which CUDA you're using, but if it gets it wrong, you can override it by setting the ORT_CUDA_VERSION environment variable to 12 or 13.

🩹 SessionBuilder error recovery

You can now recover from errors when building a session by calling .recover() on the error type to get the SessionBuilder back.
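A hypothetical sketch of that recovery flow, assuming .recover() consumes the error and hands the SessionBuilder back so the build can be retried (the model file names are made up; check the rc.12 docs for the exact error type):

```rust
use ort::session::Session;

fn load_with_fallback() -> ort::Result<Session> {
    match Session::builder()?.commit_from_file("model_quantized.onnx") {
        Ok(session) => Ok(session),
        // rc.12: the build error exposes .recover(), returning the
        // SessionBuilder so we can try again with a different model.
        Err(err) => err.recover().commit_from_file("model_fp32.onnx"),
    }
}
```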

🛡️ Build attestations

Prebuilt binaries are now attested via GitHub Actions, so you can verify that they are untampered builds of ONNX Runtime coming straight from pyke.io.

To verify, download your binary package of choice and use the gh CLI to verify:

➜  gh attestation verify --owner pykeio ./x86_64-pc-windows-msvc+cu13.tar.lzma2
Loaded digest sha256:e96616510082108be228ad6ea026246a31650b7d446b330c6b9671fcb9ae6267 for file://./x86_64-pc-windows-msvc+cu13.tar.lzma2
Loaded 1 attestation from GitHub API

The following policy criteria will be enforced:
- OIDC Issuer must match:................... https://token.actions.githubusercontent.com
- Source Repository Owner URI must match:... https://github.com/pykeio
- Predicate type must match:................ https://slsa.dev/provenance/v1
- Subject Alternative Name must match regex: (?i)^https://github.com/pykeio/

✓ Verification succeeded!

sha256:e96616510082108be228ad6ea026246a31650b7d446b330c6b9671fcb9ae6267 was attested by:
REPO                  PREDICATE_TYPE                  WORKFLOW
pykeio/ort-artifacts  https://slsa.dev/provenance/v1  .github/workflows/build-runner.yml@refs/heads/main

(Also note that the SHA-256 hash lines up with the one defined in dist.txt.)


Moving stuff around

  • The ORT_LIB_LOCATION environment variable has been renamed to ORT_LIB_PATH.
    • Same with all other env vars ending in _LOCATION.
    • The old names will continue to work, but they won't make it into v2.0.0, so it's a good idea to change them now!
  • Everything that used to be in ort::tensor is now in ort::value, because why have a tensor module if the Tensor<T> type actually comes from the value module?
  • IoBinding and Adapter were moved from their own modules into ort::session. All sub-modules of ort::session besides builder were collapsed into ort::session.
  • All sub-modules of ort::operator were collapsed into ort::operator.
  • Session option changes:
    • with_denormal_as_zero -> with_flush_to_zero
    • with_device_allocator_for_initializers -> with_device_allocated_initializers

Fixes

  • c52bd2a Fix MIGraphX registration.
    • #512 missed a spot, thank you IntQuant =)
  • 374a9d1 Fix global environment thread pools
  • ff08428 Fix a segfault in Tensor::clone.
  • 3d6c2a9 Use new API to load the DirectML EP.
  • 5913ae0 Make vcpkg builds work again.
  • 079ecb4 Fix issues with multiple environment registration.

❤️🧡💛💚💙💜

v2.0.0-rc.11

Compare Source

rc11
💖 If you find ort useful, please consider sponsoring us on Open Collective 💖

🤔 Need help upgrading? Ask questions in GitHub Discussions or in the pyke.io Discord server!


I'm sorry it took so long to get to this point, but the next big release of ort should be, finally, 2.0.0 🎉. I know I said that about one of the old alpha releases (if you can even remember those), but I mean it this time! Also, I would really like to not have to do another major release right after, so if you have any concerns about any APIs, please speak now or forever hold your peace!

A huge thank you to all the individuals who have contributed to the Collective over the years: Marius, Urban Pistek, Phu Tran, Haagen, Yunho Cho, Laco Skokan, Noah, Matouš Kučera, mush42, Thomas, Bartek, Kevin Lacker, & Okabintaro. You guys have made these past rc releases possible.

If you are a business using ort, please consider sponsoring me. Egress bandwidth from pyke.io has quadrupled in the last 4 months, and 90% of that comes from just a handful of businesses. I'm lucky enough that I don't have to pay for egress right now, but I don't expect that arrangement to last forever. pyke & ort have been funded entirely from my own personal savings for years, and (as I'm sure you're well aware 😂) everything is getting more expensive, so that definitely isn't sustainable.

Seeing companies that raise tens of millions in funding build large parts of their business on ort, ask for support, and then not give anything back just... seems kind of unfair, no?


ort-web

ort-web allows you to use the fully-featured ONNX Runtime on the Web! This time, it's hack-free and thus here to stay (it won't be removed, and then added back, and then removed again like last time!)

See the crate docs for info on how to port your application to ort-web; there is a little bit of work involved. For a very barebones sample application, see ort-web-sample.

Documentation for ort-web, like the rest of ort, will improve by the time 2.0.0 comes around. If you ever have any questions, you can always reach out via GitHub Discussions or Discord!

Features

  • 5d85209 Add WebNN & WASM execution providers for ort-web.
  • #430 (💖 @​jhonboy121) Support statically linking to iOS frameworks.
  • #433 (💖 @​rMazeiks) Implement more traits for GraphOptimizationLevel.
  • 6727c98 Make PrepackedWeights Send + Sync.
  • 15bd15c Make the TLS backend configurable with new tls-* Cargo features.
  • f3cd995 Allow overriding the cache dir with the ORT_CACHE_DIR environment variable.
  • 🚨 8b3a1ed Load the dylib immediately when using ort::init_from.
    • You can now detect errors from dylib loading and let your program react accordingly.
  • 🚨 #484 (💖 @​michael-p) Update ndarray to v0.17.
    • This means you'll need to upgrade your ndarray dependency to v0.17, too.
  • 0084d08 New ort::lifetime tracing target tracks when objects are allocated/freed to aid in debugging leaks.

Fixes

  • 2ee17aa Fix a memory leak in IoBinding.
  • 317be20 Don't store Environment as a static.
    • This fixes a mutex lock failed: Invalid argument crash on macOS when exiting the process.
  • 466025c Fix unexpected CPU usage when copying GPU tensors.
  • ecca246 Fix UB when extracting empty tensors.
  • 22f71ba Gate the ArrayExtensions trait behind the std feature, fixing #![no_std] builds.
  • af63cea Fix an illegal memory access on no_std builds.
  • #444 (💖 @​pembem22) Fix Android link.
  • 1585268 Don't allow sessions to be created with non-CPU allocators
  • #485 (💖 @​mayocream) Fix load order when using cuda::preload_dylibs.
  • c5b68a1 Fix AsyncInferenceFut drop behavior.

Misc

  • Update ONNX Runtime to v1.23.2.
  • The MSRV is now Rust 1.88.
  • Binaries are now compressed using LZMA2, which reduces bandwidth by 30% compared to gzip but may double the time it takes to download binaries for the first time.
    • If you use ort in CI, please cache the ~/.cache/ort.pyke.io directory between runs.
  • ort's dependency tree has shrunk a little bit, so it should build a little faster!
  • b68c928 Overhaul build.rs
    • Warnings should now appear when binaries aren't available, and errors should look a lot nicer.
    • pkg-config support now requires the pkg-config feature.
  • 🚨 d269461 Make Metadata methods return Option<T> instead of Result<T>.
  • 🚨 47e5667 Gate preload_dylib and cuda::preload_dylibs behind a new preload-dylibs feature flag instead of load-dynamic.
  • 🚨 3b408b1 Shorten execution_providers to ep and XXXExecutionProvider to XXX.
    • They are still re-imported as their old names to avoid breakage, but these re-imports will be removed and thus broken in 2.0.0, so it's a good idea to change them now.
  • 🚨 38573e0 Simplify ThreadManager trait.
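For the CI caching tip in the list above, a sketch of a GitHub Actions step using the stock actions/cache action (the cache key is illustrative; any key that changes when your ort version changes will do):

```yaml
- name: Cache ort prebuilt ONNX Runtime binaries
  uses: actions/cache@v4
  with:
    path: ~/.cache/ort.pyke.io
    key: ort-dist-${{ runner.os }}-${{ hashFiles('**/Cargo.lock') }}
```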
ONNX Runtime binary changes
  • Now shipping iOS & Android builds!!! Thank you Raphael Menges!!!
  • Support for Intel macOS (x86_64-apple-darwin) has been dropped following upstream changes to ONNX Runtime & Rust.
    • Additionally, the macOS target has been raised to 13.4.
    • This means I can't debug macOS issues in my Hackintosh VM anymore, so expect little to no macOS support in general from now on. If you know where I can get a used 16GB Apple Silicon Mac Mini for cheap, please let me know!
  • ONNX Runtime is now compiled with --client_package_build, meaning default options will optimize for low-resource edge inference rather than high throughput.
    • This currently only disables spinning by default. For server deployments, re-enable inter- and intra-op spinning for best throughput.
  • Now shipping TensorRT RTX builds on Windows & Linux!
  • x86_64 builds now target x86-64-v3, aka Intel Haswell/Broadwell and AMD Zen (any Ryzen) or later.
  • Linux builds are now built with Clang instead of GCC.
  • Various CUDA changes:
    • Kernels are now shipped compressed; this saves bandwidth & file size, but may slightly increase first-run latency. It will have no effect on subsequent runs.
    • Recently-added float/int matrix multiplication kernels aren't enabled. Quantized models will miss out on a bit of performance, but it was impossible to compile these kernels within the limitations of free GitHub Actions runners.

ort-tract

  • Update tract to 0.22.
  • 2d40e05 ort-tract no longer claims it is ort-candle in ort::info().

ort-candle

  • Update candle to 0.9.

❤️🧡💛💚💙💜

v2.0.0-rc.10

Compare Source

rc10 graphic


💖 If you find ort useful, please consider sponsoring us on Open Collective 💖

🤔 Need help upgrading? Ask questions in GitHub Discussions or in the pyke.io Discord server!


🔗 Tensor Array Views

You can now create a TensorRef directly from an ArrayView. Previously, tensors could only be created via Tensor::from_array (which, in many cases, performed a copy if borrowed data was provided). The new TensorRef::from_array_view method (and the complementary TensorRefMut::from_array_view_mut) allows zero-copy creation of tensors directly from an ArrayView.

Tensor::from_array now only accepts owned data, so you should either refactor your code to use TensorRefs or pass ownership of the array to the Tensor.

⚠️ ndarrays must be in standard/contiguous memory layout to be converted to a TensorRef(Mut); see .as_standard_layout().
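A sketch of the zero-copy path, assuming TensorRef::from_array_view accepts a standard-layout ndarray view as described above (the shape is illustrative):

```rust
use ndarray::Array4;
use ort::value::TensorRef;

fn zero_copy_input() -> ort::Result<()> {
    let data: Array4<f32> = Array4::zeros((1, 3, 224, 224));
    // Borrows `data` instead of copying it into a new tensor. The view
    // must be in standard (contiguous) layout - see .as_standard_layout().
    let input = TensorRef::from_array_view(data.view())?;
    let _ = input;
    Ok(())
}
```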

↔️ Copy Tensors

rc.10 now allows you to manually copy tensors between devices using Tensor::to!

// Create our tensor in CUDA memory
let cuda_allocator = Allocator::new(
	&session,
	MemoryInfo::new(AllocationDevice::CUDA, 0, AllocatorType::Device, MemoryType::Default)?
)?;
let cuda_tensor = Tensor::<f32>::new(&cuda_allocator, [1_usize, 3, 224, 224])?;

// Copy it back to CPU
let cpu_tensor = cuda_tensor.to(AllocationDevice::CPU, 0)?;

There's also Tensor::to_async, which replicates the functionality of PyTorch's non_blocking=True. Additionally, Tensors now implement Clone.

⚙️ Alternative Backends

ort is no longer just a wrapper for ONNX Runtime; it's a one-stop shop for inferencing ONNX models in Rust thanks to the addition of the alternative backend API.

Alternative backends wrap other inference engines behind ONNX Runtime's API, which can simply be dropped in and used in ort - all it takes is one line of code:

fn main() -> ort::Result<()> {
    ort::set_api(ort_tract::api()); // <- magic!

    let session = Session::builder()?
        ...
}

Two alternative backends are shipping alongside rc.10 - ort-tract, powered by tract, and ort-candle, powered by candle - with more to come in the future.

Outside of the Rust ecosystem, these alternative backends can also be compiled as standalone libraries that can be dropped directly into applications as a replacement for libonnxruntime. 🦀🦠

✏️ Model Editor

Models can be created entirely programmatically, or edited from an existing ONNX model via the new Model Editor API.

See src/editor/tests.rs for an example of how an ONNX model can be created programmatically. You can combine the Model Editor API with SessionBuilder::with_optimized_model_path to export the model outside Rust.

⚛️ Compiler

Many execution providers internally convert ONNX graphs to a framework-specific graph representation, like CoreML networks/TensorRT engines. This process can take a long time, especially for larger and more complex models. Since these generated artifacts aren't persisted between runs, they have to be created every time a session is loaded.

The new Compiler API allows you to compile an optimized, EP-ready graph ahead-of-time, so subsequent loads are lightning fast! ⚡

ModelCompiler::new(
    Session::builder()?
        .with_execution_providers([
            TensorRTExecutionProvider::default().build()
        ])?
)?
    .with_model_from_file("model.onnx")?
    .compile_to_file("compiled_trt_model.onnx")?;

🪶 #![no_std]

🚨 BREAKING: If you previously used ort with default-features = false...

That will now disable ort's std feature, which means you don't get to use APIs that interact with the operating system, like SessionBuilder::commit_from_file - APIs you probably need!

To minimize breakage, manually enable the std feature:

[dependencies]
ort = { version = "=2.0.0-rc.10", default-features = false, features = [ "std", ... ] }

ort no longer depends on std (but does still depend on alloc) - default-features = false will enable #![no_std] for ort.

⚡ Execution Providers

🚨 BREAKING: Boolean options for ArmNN, CANN, CoreML, CPU, CUDA, MIGraphX, NNAPI, OpenVINO, & ROCm...

If you previously used an option setter on one of these EPs that took no parameters (i.e. a boolean option that was false by default), note that these functions now do take a boolean parameter to align with Rust idiom.

Migrating is as simple as passing true to these functions. Affected functions include:

  • ArmNNExecutionProvider::with_arena_allocator
  • CANNExecutionProvider::with_dump_graphs
  • CPUExecutionProvider::with_arena_allocator
  • CUDAExecutionProvider::with_cuda_graph
  • CUDAExecutionProvider::with_skip_layer_norm_strict_mode
  • CUDAExecutionProvider::with_prefer_nhwc
  • MIGraphXExecutionProvider::with_fp16
  • MIGraphXExecutionProvider::with_int8
  • NNAPIExecutionProvider::with_fp16
  • NNAPIExecutionProvider::with_nchw
  • NNAPIExecutionProvider::with_disable_cpu
  • NNAPIExecutionProvider::with_cpu_only
  • OpenVINOExecutionProvider::with_opencl_throttling
  • OpenVINOExecutionProvider::with_dynamic_shapes
  • OpenVINOExecutionProvider::with_npu_fast_compile
  • ROCmExecutionProvider::with_exhaustive_conv_search
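Migration sketch for the boolean setters listed above (module path as of rc.10; the particular options chosen here are arbitrary):

```rust
use ort::execution_providers::CUDAExecutionProvider;

fn cuda_ep_options() {
    // rc.10+: formerly parameterless setters now take an explicit bool.
    let _ep = CUDAExecutionProvider::default()
        .with_cuda_graph(true)  // was .with_cuda_graph()
        .with_prefer_nhwc(true) // was .with_prefer_nhwc()
        .build();
}
```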
🚨 BREAKING: Renamed enum options for CANN, CUDA, QNN...

The following EP option enums have been renamed to reduce verbosity:

  • CANNExecutionProviderPrecisionMode -> CANNPrecisionMode
  • CANNExecutionProviderImplementationMode -> CANNImplementationMode
  • CUDAExecutionProviderAttentionBackend -> CUDAAttentionBackend
  • CUDAExecutionProviderCuDNNConvAlgoSearch -> CuDNNConvAlgorithmSearch
  • QNNExecutionProviderPerformanceMode -> QNNPerformanceMode
  • QNNExecutionProviderProfilingLevel -> QNNProfilingLevel
  • QNNExecutionProviderContextPriority -> QNNContextPriority
🚨 BREAKING: Updated CoreML options...

CoreMLExecutionProvider has been updated to use a new registration API, unlocking more options. To migrate old options:

  • .with_cpu_only() -> .with_compute_units(CoreMLComputeUnits::CPUOnly)
  • .with_ane_only() -> .with_compute_units(CoreMLComputeUnits::CPUAndNeuralEngine)
  • .with_subgraphs() -> .with_subgraphs(true)

rc.10 adds support for 3 execution providers:

  • Azure allows you to call Azure AI models like GPT-4 directly from ort.
  • WebGPU is powered by Dawn, an implementation of the WebGPU standard, allowing accelerated inference with almost any D3D12/Metal/Vulkan/OpenGL-supported GPU. Binaries with the WebGPU EP are available on Windows & Linux, so you can start testing it straight away!
  • NV TensorRT RTX is a new execution provider purpose-built for NVIDIA RTX GPUs running with ONNX Runtime on Windows. It's powered by TensorRT for RTX, a specially-optimized inference library built upon TensorRT, releasing in June.

All binaries are now statically linked! This means the cuda and tensorrt features no longer use onnxruntime.dll/libonnxruntime.so. The EPs themselves do still require separate DLLs - like libonnxruntime_providers_cuda - but this change should make it significantly easier to set up and use ort with CUDA/TRT.

🧩 Custom Operator Improvements

🚨 BREAKING: Migrating your custom operators...
  1. All methods under Operator now take &self.
  2. The operator's kernel is no longer an associated type - create_kernel is instead expected to return a Box<dyn Kernel> (which can now be created directly from a function!)
 impl Operator for MyCustomOp {
-    type Kernel = MyCustomOpKernel;
 
-    fn name() -> &'static str {
+    fn name(&self) -> &str {
         "MyCustomOp"
     }
 
-    fn inputs() -> Vec<OperatorInput> {
+    fn inputs(&self) -> Vec<OperatorInput> {
         vec![OperatorInput::required(TensorElementType::Float32)]
     }
 
-    fn outputs() -> Vec<OperatorOutput> {
+    fn outputs(&self) -> Vec<OperatorOutput> {
         vec![OperatorOutput::required(TensorElementType::Float32)]
     }
 
-   fn create_kernel(_: &KernelAttributes) -> ort::Result<Self::Kernel> {
-       Ok(MyCustomOpKernel)
-   }
+   fn create_kernel(&self, _: &KernelAttributes) -> ort::Result<Box<dyn Kernel>> {
+       Ok(Box::new(|ctx: &KernelContext| {
+           ...
+       }))
+   }
 }

To add an operator to an OperatorDomain, you now pass the operator by value instead of as a type parameter:

 let mut domain = OperatorDomain::new("io.pyke")?;
-domain = domain.add::<MyCustomOp>()?;
+domain = domain.add(MyCustomOp)?;

Custom operators have been internally revamped to reduce code size & compilation time, and allow operators to be Sized.

🔷 Miscellaneous changes

  • Updated to ONNX Runtime v1.22.0.
  • The minimum supported Rust version (MSRV) is now 1.81.0.
  • The tracing dependency is now optional (but enabled by default).
    • To keep using tracing with default-features = false, enable the tracing feature.
    • When disabled, ONNX Runtime will log its messages directly to stdout. The log level defaults to WARN but can be controlled at runtime via the ORT_LOG environment variable by setting it to one of verbose, info, warning, error, or fatal.
  • The domain serving prebuilt binaries has moved from parcel.pyke.io to cdn.pyke.io, so make sure to update firewall exclusions.
  • The build.rs hack for Apple platforms is no longer required. (9b31680)
  • The ureq dependency (used by download-binaries/fetch-models) has been upgraded to v3.0.
    • ort with the fetch-models feature will use rustls as the TLS provider.
    • ort-sys with the download-binaries feature will use native-tls since that pulls in fewer dependencies (it previously used rustls). No prerequisites are required when building on Windows & macOS, but other platforms now require OpenSSL to be installed.
  • All ONNX Runtime tensor types are now supported - including Complex64 & Complex128, 4-bit integers, and 8-bit floats!
    • Tensors of these types cannot be created from an array or extracted since they don't have de facto Rust equivalents, but you can use DynTensor::new to allocate a tensor and DynTensor::data_ptr to access its data.
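A heavily hedged sketch of that DynTensor escape hatch. The DynTensor::new signature (allocator, element type, shape), the TensorElementType import path, and the Float8E4M3FN variant name are all assumptions inferred from the note above - verify against the rc docs before relying on any of them:

```rust
use ort::memory::Allocator;
use ort::value::{DynTensor, TensorElementType}; // paths assumed

fn alloc_fp8(allocator: &Allocator) -> ort::Result<()> {
    // Assumed signature: DynTensor::new(&allocator, element_type, shape).
    let tensor = DynTensor::new(allocator, TensorElementType::Float8E4M3FN, [1_usize, 16])?;
    // Raw pointer access; no typed extract exists for these element types.
    let _ptr = tensor.data_ptr();
    Ok(())
}
```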
  • Reduce allocations (e136869)
    • Session::run can now be zero-alloc (on the Rust side)!
  • Prebuilt binaries are now powered by KleidiAI on ARM64 - this should make them a fair bit faster!
⚠️ Breaking
  • 🚨 Session::run now takes &mut self.
    • 💡 Tip when using mutexes: You can use SessionOutputs::remove to get an owned session output.
  • 🚨 ort::inputs! no longer outputs a Result, so remove the trailing ? from any invocations of the macro.
  • 🚨 extract_tensor to extract a tensor to an ndarray has been renamed to extract_array, with extract_raw_tensor now taking the place of extract_tensor.
    • DynValue::try_extract_tensor(_mut) -> DynValue::try_extract_array(_mut)
    • Tensor::extract_tensor(_mut) -> Tensor::extract_array(_mut)
    • DynValue::try_extract_raw_tensor(_mut) -> DynValue::try_extract_tensor(_mut)
    • Tensor::extract_raw_tensor(_mut) -> Tensor::extract_tensor(_mut)
  • Session::run_async now always takes &RunOptions; Session::run_async_with_options has been removed.
  • Most instances of "dimensions" (i.e. in ValueType::tensor_dimensions) have been replaced with "shape" (so ValueType::tensor_shape) for consistency.
  • Tensor shapes now use a custom struct, ort::tensor::Shape, instead of a Vec<i64> directly.
    • Similarly, ValueType::Tensor.dimension_symbols is its own struct, SymbolicDimensions.
    • Both can be converted from their prior forms via ::from()/.into().
  • SessionBuilder::with_execution_providers now takes AsRef<[EP]> instead of any iterable type.
  • SessionBuilder::with_external_initializer_file_in_memory requires a Path for the path parameter instead of a regular &str.
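The run/extract renames above, condensed into one hedged before/after sketch (the "input"/"output" names are placeholders for your model's actual I/O names):

```rust
use ort::session::Session;
use ort::value::Tensor;

fn infer(session: &mut Session, input: Tensor<f32>) -> ort::Result<()> {
    // Before: session.run(ort::inputs!["input" => input]?)? and
    //         outputs["output"].try_extract_tensor::<f32>()?
    // After (rc.10): inputs! no longer returns a Result, run takes
    // &mut self, and ndarray extraction is spelled try_extract_array.
    let outputs = session.run(ort::inputs!["input" => input])?;
    let _array = outputs["output"].try_extract_array::<f32>()?;
    Ok(())
}
```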

🪲 Fixes

  • Zero out tensors created on the CPU via Tensor::new. (7a95f98)
    • In some cases, the memory allocated by ONNX Runtime for new tensors was not initially zeroed. Now, any tensors created in CPU-accessible memory via Tensor::new will be manually zeroed on the Rust side.
  • IoBinding::synchronize_* now takes &self so synchronize_outputs can actually be used as intended (e8d873a)
  • Fix XNNPACKExecutionProvider::is_available always returning false (5ad997c)
  • Fix a memory lifetime issue with AllocationDevice & MemoryInfo (3ca14c2)
  • Fix OpenVINO EP registration failures by ensuring an environment is available (3e7e8fe)
    • and use the new registration API for OpenVINO (5661450)
  • ort-sys crate now specifies links, hopefully preventing linking conflicts (d2dc7c8)
  • Correct the internal device name for the DirectML AllocationDevice (46c3376)
  • ort-sys no longer tries to download binaries when building with --offline (d7d4493)
  • Dylib symlinks are now properly renewed when the library updates (4b6b163)
  • ONNX Runtime log levels are now mapped directly to their corresponding tracing level instead of being knocked down a level (d8bcfd7)
  • Fixed the name of the flag set by TensorRTExecutionProvider::with_context_memory_sharing (#​327)
    • ...and with_build_heuristics & with_sparisty (b6ddfd8)
  • Fixed concurrent downloads from commit_from_url or ort-sys (eb51646/#​323)
  • Fix linking XNNPACK on ARM64. (#​384)

❤️🧡💛💚💙💜


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.


sourcery-ai bot commented Jun 1, 2025

Reviewer's Guide

Bumps the ort dependency to v2.0.0-rc.10 by updating the lockfile entries and checksums in Cargo.lock.

File-Level Changes

Change Details Files
Upgrade ort crate to v2.0.0-rc.10
  • Update version field from 2.0.0-rc.9 to 2.0.0-rc.10
  • Update package checksum to match new release
Cargo.lock


  • renovate bot force-pushed the renovate/ort-2.x-lockfile branch from eee53dd to 0f47789 (August 10, 2025 14:13)
  • force-pushed from 0f47789 to 3f06c3f (December 10, 2025 11:12)
  • force-pushed from 3f06c3f to c5119a8 (January 7, 2026 05:52)
  • renovate bot changed the title from "Update Rust crate ort to v2.0.0-rc.10" to "Update Rust crate ort to v2.0.0-rc.11" (Jan 7, 2026)
  • force-pushed from c5119a8 to 62196a5 (February 2, 2026 20:52)
  • force-pushed from 62196a5 to e450922 (February 12, 2026 17:58)
  • force-pushed from e450922 to 45e5672 (February 25, 2026 11:12)
  • force-pushed from 45e5672 to 4231388 (March 5, 2026 08:33)
  • renovate bot changed the title from "Update Rust crate ort to v2.0.0-rc.11" to "Update Rust crate ort to v2.0.0-rc.12" (Mar 5, 2026)