Merge Dev#219

Open
arjunsuresh wants to merge 12 commits into main from dev

Conversation

@arjunsuresh
Contributor

✅ PR Checklist

  • Target branch is dev

📌 Note: PRs must be raised against dev. Do not commit directly to main.

✅ Testing & CI

  • I have tested the changes in my local environment, or have noted otherwise in the PR description.
  • The change includes a GitHub Action to test the script (where feasible).
  • No existing GitHub Actions are failing because of this change.

📚 Documentation

  • README or help docs are updated for new features or changes.
  • CLI help messages are meaningful and complete.

📁 File Hygiene & Output Handling

  • No unintended files (e.g., logs, cache, temp files, pycache, output folders) are committed.

🛡️ Safety & Security

  • No secrets or credentials are committed.
  • Paths, shell commands, and environment handling are safe and portable.

🙌 Contribution Hygiene

  • PR title and description are concise and clearly state the purpose of the change.
  • Related issues (if any) are properly referenced using Fixes # or Closes #.
  • All reviewer feedback has been addressed.

sujik18 and others added 5 commits January 14, 2026 08:09
* Update repo_action.py to force index rebuild when a new repo is pulled

* Update AI PR review workflow by adding fetch step for PR head

* Update index.py to incrementally add and remove index entries of a repo
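The incremental add/remove approach in the last commit can be sketched as follows. This is a minimal illustration, not the actual index.py code: the function name, the `meta.yaml` marker, and the dictionary shapes are all assumptions made for the example.

```python
import os

def update_index(index, repo_dir, mtimes):
    """Incrementally refresh index entries for one repo (sketch; names hypothetical).

    `index` maps a script directory to its metadata, `mtimes` maps a directory to
    the last modification time seen. New or modified entries are (re)indexed;
    entries whose files vanished from disk are dropped, avoiding a full rebuild.
    """
    seen = set()
    for root, _dirs, files in os.walk(repo_dir):
        if "meta.yaml" in files:  # assumption: a meta file marks an indexable script
            path = os.path.join(root, "meta.yaml")
            seen.add(root)
            mtime = os.path.getmtime(path)
            if mtimes.get(root) != mtime:  # new or modified since last index pass
                index[root] = {"meta": path}
                mtimes[root] = mtime
    # Remove index entries for items deleted from this repo on disk
    for key in [k for k in index if k.startswith(repo_dir) and k not in seen]:
        index.pop(key)
        mtimes.pop(key, None)
    return index
```

The point of tracking modification times is that an unchanged script costs only a `stat` call on subsequent passes, while deletions are still detected by comparing the walk results against the existing index.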
@arjunsuresh arjunsuresh requested a review from a team as a code owner February 1, 2026 16:20
@github-actions

github-actions bot commented Feb 1, 2026

MLCommons CLA bot All contributors have signed the MLCommons CLA ✍️ ✅

@github-actions

github-actions bot commented Feb 1, 2026

🤖 AI PR Review Summary

This PR adds a new 'reindex' command to the mlc Action class to allow explicit reindexing of scripts, caches, experiments, or all targets. It enhances the Index class to better handle index rebuilding, including removal of deleted items and improved modified time tracking. New tests are added to the CI workflow to verify reindexing behavior and index consistency after manual deletion of indexed items. The changes improve index robustness and provide explicit control over reindexing. Risks include potential performance impact due to full index rebuilds on reindex commands and the partial implementation of per-target reindexing (currently all targets are rebuilt regardless of target). The index rebuild logic is complex and may require further testing to ensure no regressions in index consistency.

@github-actions

🤖 AI PR Review Summary

This PR adds a new GitHub Actions workflow to test the MLCFlow installer script across multiple OS environments and scenarios using both native runners and Docker containers. It also updates the AI PR review workflow to fetch the PR base ref and PR head SHA for incremental diffs. The new test workflow is comprehensive, covering various OS versions, installation modes, and validation steps. Risks include potential increased CI usage and complexity in maintaining the large test matrix. Design-wise, the workflow is well-structured with clear validation and outcome verification steps, but the concurrency cancellation setting may cause loss of intermediate test results.
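The curl-pipe method the workflow exercises relies on `bash -s --` forwarding everything after `--` as positional arguments to the script read from stdin. That mechanism can be demonstrated without touching the network by substituting a `printf` for the real `curl -sSL <url>` download:

```shell
# Sketch of the argument-forwarding mechanism behind:
#   curl -sSL <url>/mlcflow_linux.sh | bash -s -- --yes --quiet
# Here a tiny inline script stands in for the downloaded installer.
printf 'echo "flags: $@"\n' | bash -s -- --yes --quiet
# prints: flags: --yes --quiet
```

`-s` tells bash to read commands from stdin, and `--` ends bash's own option parsing, so `--yes --quiet` reach the piped script as `$1` and `$2` rather than being interpreted by bash itself.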

Comment on lines +26 to +247
name: Test on ${{ matrix.os }} (${{ matrix.scenario }})
runs-on: ${{ matrix.runner }}
strategy:
fail-fast: false
matrix:
include:
# ===================================================================
# Ubuntu 22.04 LTS Tests
# ===================================================================
# Test basic installation on Ubuntu 22.04
- os: Ubuntu 22.04
runner: ubuntu-22.04
scenario: basic-install
test-type: success
extra-flags: "--yes"
description: "Basic non-interactive installation"

# Test quiet mode with minimal output
- os: Ubuntu 22.04
runner: ubuntu-22.04
scenario: quiet-mode
test-type: success
extra-flags: "--yes --quiet"
description: "Quiet mode with minimal output"

# Test upgrade mode on existing installation
- os: Ubuntu 22.04
runner: ubuntu-22.04
scenario: upgrade-mode
test-type: success
extra-flags: "--yes --upgrade"
description: "Upgrade existing installation"

# ===================================================================
# Ubuntu 24.04 LTS Tests
# ===================================================================
# Test basic installation on latest Ubuntu LTS
- os: Ubuntu 24.04
runner: ubuntu-latest
scenario: basic-install
test-type: success
extra-flags: "--yes"
description: "Basic non-interactive installation"

# ===================================================================
# macOS Tests
# ===================================================================
# Test basic installation on macOS with Homebrew
- os: macos-latest
runner: macos-latest
scenario: basic-install
test-type: success
extra-flags: "--yes"
description: "Basic non-interactive installation"

# Test custom venv directory on macOS
- os: macos-latest
runner: macos-latest
scenario: custom-venv
test-type: success
extra-flags: "--yes --venv-dir /tmp/custom_mlcflow_venv"
description: "Custom virtual environment directory"

steps:
# Determine the source URL based on the event type
# For pull_request: use the PR head branch
# For workflow_dispatch: use dev branch from mlcommons/mlcflow
- name: Determine Installer Script URL
run: |
if [[ "${{ github.event_name }}" == "pull_request" ]]; then
# For PRs, use the head branch from the PR
OWNER="${{ github.event.pull_request.head.repo.owner.login }}"
REPO="${{ github.event.pull_request.head.repo.name }}"
BRANCH="${{ github.event.pull_request.head.ref }}"
else
# For workflow_dispatch and other events, use dev branch
OWNER="mlcommons"
REPO="mlcflow"
BRANCH="anandhu-eng-patch-1" #"dev"
fi

INSTALLER_URL="https://raw.githubusercontent.com/${OWNER}/${REPO}/refs/heads/${BRANCH}/docs/install/mlcflow_linux.sh"
echo "INSTALLER_URL=$INSTALLER_URL" >> $GITHUB_ENV
echo "✅ Installer URL: $INSTALLER_URL"

# =====================================================================
# Test Case: Install via Curl-Pipe Method
# =====================================================================
# This is the primary test that validates the installer using the exact
# method that users will use: curl <url>/mlcflow_linux.sh | bash -s -- <flags>
- name: "Test: ${{ matrix.description }}"
run: |
echo "=========================================="
echo "Test: ${{ matrix.description }}"
echo "OS: ${{ matrix.os }}"
echo "Scenario: ${{ matrix.scenario }}"
echo "Extra Flags: ${{ matrix.extra-flags }}"
echo "Installer URL: $INSTALLER_URL"
echo "=========================================="

# Download and execute the installer via curl-pipe method from GitHub
# This is the EXACT method that users will use in production
echo "Downloading and executing installer via curl-pipe method..."
if curl -sSL "$INSTALLER_URL" | bash -s -- ${{ matrix.extra-flags }}; then
echo "✅ Installer completed successfully"
INSTALL_SUCCESS=true
else
EXIT_CODE=$?
echo "❌ Installer failed with exit code: $EXIT_CODE"
INSTALL_SUCCESS=false
fi

# Store result for validation step
echo "INSTALL_SUCCESS=$INSTALL_SUCCESS" >> $GITHUB_ENV

# =====================================================================
# Validate Installation Success
# =====================================================================
# After installation completes, verify that expected artifacts exist
- name: Validate Installation Artifacts
if: env.INSTALL_SUCCESS == 'true'
run: |
echo "=========================================="
echo "Validating Installation Artifacts"
echo "=========================================="

# Determine the venv directory based on test scenario
if [[ "${{ matrix.scenario }}" == "custom-venv" ]]; then
VENV_DIR="/tmp/custom_mlcflow_venv"
else
VENV_DIR="$HOME/.mlcflow_venv"
fi

echo "Expected venv directory: $VENV_DIR"

# Validate 1: Virtual environment directory exists
if [ -d "$VENV_DIR" ]; then
echo "✅ Virtual environment directory exists"
else
echo "❌ Virtual environment directory not found"
exit 1
fi

# Validate 2: Activation script exists
if [ -f "$VENV_DIR/bin/activate" ]; then
echo "✅ Virtual environment activation script exists"
else
echo "❌ Activation script not found"
exit 1
fi

# Validate 3: Python executable exists in venv
if [ -f "$VENV_DIR/bin/python3" ] || [ -f "$VENV_DIR/bin/python" ]; then
echo "✅ Python executable found in virtual environment"
else
echo "❌ Python executable not found in virtual environment"
exit 1
fi

# Validate 4: MLCFlow package is installed
if source "$VENV_DIR/bin/activate" && python3 -c "import mlcflow" 2>/dev/null; then
echo "✅ MLCFlow package is importable"
else
echo "⚠️ MLCFlow package may not be fully installed (repo cloning may have failed)"
fi

# Validate 5: mlc CLI command is available after activation
if source "$VENV_DIR/bin/activate" && command -v mlc >/dev/null 2>&1; then
echo "✅ mlc CLI command is available"

# Try to execute a harmless command to verify CLI works
if mlc help >/dev/null 2>&1; then
echo "✅ mlc help command executed successfully"
else
echo "⚠️ mlc command exists but may not be fully functional"
fi
else
echo "⚠️ mlc CLI command not found (may be due to repo cloning issues)"
fi

# Validate 6: Check if automation repository was cloned
if [ -d "$HOME/MLC/repos" ]; then
echo "✅ MLC repos directory exists"
echo "Contents:"
ls -la "$HOME/MLC/repos" || true
else
echo "⚠️ MLC repos directory not found (repo cloning may have failed)"
fi

# =====================================================================
# Verify Expected Test Outcome
# =====================================================================
# Confirm that the test result matches the expected outcome
- name: Verify Test Outcome
run: |
echo "=========================================="
echo "Verifying Test Outcome"
echo "=========================================="

EXPECTED_RESULT="${{ matrix.test-type }}"
ACTUAL_SUCCESS="${{ env.INSTALL_SUCCESS }}"

echo "Expected: $EXPECTED_RESULT"
echo "Actual Success: $ACTUAL_SUCCESS"

if [[ "$EXPECTED_RESULT" == "success" && "$ACTUAL_SUCCESS" == "true" ]]; then
echo "✅ Test passed: Installation succeeded as expected"
exit 0
elif [[ "$EXPECTED_RESULT" == "failure" && "$ACTUAL_SUCCESS" == "false" ]]; then
echo "✅ Test passed: Installation failed as expected"
exit 0
else
echo "❌ Test failed: Unexpected outcome"
exit 1
fi

# ===========================================================================
# Test Matrix: Docker Containers for Non-Native Distributions
# ===========================================================================
# These tests run inside Docker containers for Linux distributions that
# don't have native GitHub Actions runners (Debian, RHEL-family).
test-docker-containers:

Check warning

Code scanning / CodeQL

Workflow does not contain permissions (Medium, test)

Actions job or workflow does not limit the permissions of the GITHUB_TOKEN. Consider setting an explicit permissions block, using the following as a minimal starting point: {}

Copilot Autofix

AI 3 days ago

In general, this issue is fixed by adding an explicit permissions block either at the workflow root (applying to all jobs) or per job, and configuring only the minimal scopes actually required. Since this workflow does not appear to modify repository contents or interact with issues/PRs, we can safely restrict GITHUB_TOKEN to read-only repository contents.

The best fix with no functional change is to add a top-level permissions block right after the on: section (before concurrency:), setting contents: read. This documents that the workflow only needs read access and prevents it from inheriting broader write permissions from the repo or org defaults. No other lines need to change, and no additional imports or actions are required.

Concretely, edit .github/workflows/test-installer-curl.yml to insert:

permissions:
  contents: read

between the on: block (ending at line 11–12) and the concurrency: block (starting at line 14–15).

Suggested changeset 1
.github/workflows/test-installer-curl.yml

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/.github/workflows/test-installer-curl.yml b/.github/workflows/test-installer-curl.yml
--- a/.github/workflows/test-installer-curl.yml
+++ b/.github/workflows/test-installer-curl.yml
@@ -10,6 +10,9 @@
       - '.github/workflows/test-installer-curl.yml'
   workflow_dispatch:
 
+permissions:
+  contents: read
+
 # Only allow one workflow run per PR to conserve CI resources
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
EOF
Copilot is powered by AI and may make mistakes. Always verify output.
Comment on lines +248 to +469
name: Test on ${{ matrix.os }} (${{ matrix.scenario }})
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
include:
# ===================================================================
# Ubuntu 20.04 LTS Tests
# ===================================================================
# Test basic non-interactive installation with all default settings
- os: Ubuntu 20.04
container: ubuntu:20.04
scenario: basic-install
test-type: success
extra-flags: "--yes"
description: "Basic non-interactive installation"

# Test installation with custom venv directory
- os: Ubuntu 20.04
container: ubuntu:20.04
scenario: custom-venv
test-type: success
extra-flags: "--yes --venv-dir /tmp/custom_mlcflow_venv"
description: "Custom virtual environment directory"

# ===================================================================
# Debian 11 Tests
# ===================================================================
# Test basic installation on Debian 11 (Bullseye)
- os: Debian 11
container: debian:11
scenario: basic-install
test-type: success
extra-flags: "--yes"
description: "Basic non-interactive installation"

# Test custom venv directory on Debian
- os: Debian 11
container: debian:11
scenario: custom-venv
test-type: success
extra-flags: "--yes --venv-dir /tmp/custom_mlcflow_venv"
description: "Custom virtual environment directory"

# ===================================================================
# Debian 12 Tests
# ===================================================================
# Test basic installation on Debian 12 (Bookworm)
- os: Debian 12
container: debian:12
scenario: basic-install
test-type: success
extra-flags: "--yes"
description: "Basic non-interactive installation"

# ===================================================================
# Rocky Linux 9 Tests
# ===================================================================
# Test basic installation on Rocky Linux 9 (RHEL-compatible)
- os: Rocky Linux 9
container: rockylinux:9
scenario: basic-install
test-type: success
extra-flags: "--yes"
description: "Basic non-interactive installation"

# Test verbose mode on Rocky Linux
- os: Rocky Linux 9
container: rockylinux:9
scenario: verbose-mode
test-type: success
extra-flags: "--yes --verbose"
description: "Verbose logging mode"

# ===================================================================
# AlmaLinux 9 Tests
# ===================================================================
# Test basic installation on AlmaLinux 9 (RHEL-compatible)
- os: AlmaLinux 9
container: almalinux:9
scenario: basic-install
test-type: success
extra-flags: "--yes"
description: "Basic non-interactive installation"

# ===================================================================
# CentOS Stream 9 Tests
# ===================================================================
# Test basic installation on CentOS Stream 9 (RHEL-compatible)
- os: CentOS Stream 9
container: quay.io/centos/centos:stream9
scenario: basic-install
test-type: success
extra-flags: "--yes"
description: "Basic non-interactive installation"

steps:
# Determine the source URL based on the event type
- name: Determine Installer Script URL
run: |
if [[ "${{ github.event_name }}" == "pull_request" ]]; then
OWNER="${{ github.event.pull_request.head.repo.owner.login }}"
REPO="${{ github.event.pull_request.head.repo.name }}"
BRANCH="${{ github.event.pull_request.head.ref }}"
else
OWNER="mlcommons"
REPO="mlcflow"
BRANCH="anandhu-eng-patch-1" #"dev"
fi

INSTALLER_URL="https://raw.githubusercontent.com/${OWNER}/${REPO}/refs/heads/${BRANCH}/docs/install/mlcflow_linux.sh"
echo "INSTALLER_URL=$INSTALLER_URL" >> $GITHUB_ENV
echo "✅ Installer URL: $INSTALLER_URL"

# =====================================================================
# Test Case: Install via Curl-Pipe Method in Docker Container
# =====================================================================
# This test runs the installer inside a Docker container that represents
# the target Linux distribution. The installer is downloaded via curl
# from GitHub and piped to bash, exactly as users will execute it.
- name: "Test: ${{ matrix.description }} in ${{ matrix.os }}"
run: |
echo "=========================================="
echo "Test: ${{ matrix.description }}"
echo "OS: ${{ matrix.os }}"
echo "Container: ${{ matrix.container }}"
echo "Scenario: ${{ matrix.scenario }}"
echo "Extra Flags: ${{ matrix.extra-flags }}"
echo "Installer URL: $INSTALLER_URL"
echo "=========================================="

# Run the installer inside the Docker container via curl-pipe method
# Downloads directly from GitHub, testing the real distribution method
docker run --rm \
${{ matrix.container }} \
bash -c "
# Install curl if not present (required for downloading the script)
if ! command -v curl >/dev/null 2>&1; then
if command -v apt-get >/dev/null 2>&1; then
apt-get update -qq && apt-get install -y -qq curl
elif command -v dnf >/dev/null 2>&1; then
dnf install -y -q curl-minimal
elif command -v yum >/dev/null 2>&1; then
yum install -y -q curl-minimal
fi
fi

# Download and execute installer via curl-pipe method from GitHub
echo '=== Downloading and executing installer via curl-pipe method ==='
curl -sSL '$INSTALLER_URL' | bash -s -- ${{ matrix.extra-flags }}
" && DOCKER_EXIT_CODE=0 || DOCKER_EXIT_CODE=$?

# Evaluate the result
if [ $DOCKER_EXIT_CODE -eq 0 ]; then
echo "✅ Installer completed successfully in container"
echo "INSTALL_SUCCESS=true" >> $GITHUB_ENV
else
echo "❌ Installer failed in container with exit code: $DOCKER_EXIT_CODE"
echo "INSTALL_SUCCESS=false" >> $GITHUB_ENV
fi

# =====================================================================
# Validate Installation Inside Container
# =====================================================================
# After installation completes, run the container again to verify artifacts
- name: Validate Installation Artifacts in Container
if: env.INSTALL_SUCCESS == 'true'
run: |
echo "=========================================="
echo "Validating Installation Artifacts"
echo "=========================================="

# Determine the venv directory based on test scenario
if [[ "${{ matrix.scenario }}" == "custom-venv" ]]; then
VENV_DIR="/tmp/custom_mlcflow_venv"
else
VENV_DIR="/root/.mlcflow_venv"
fi

echo "Expected venv directory: $VENV_DIR"

# Run validation commands inside a new container instance
# Note: The previous container is ephemeral, so we validate the
# installation success based on exit code rather than persistent state
docker run --rm ${{ matrix.container }} bash -c "
echo '=== Validation Complete ==='
echo 'Note: Container installations are ephemeral in CI.'
echo 'Success is determined by installer exit code.'
"

echo "✅ Container installation validation complete"

# =====================================================================
# Verify Expected Test Outcome
# =====================================================================
- name: Verify Test Outcome
run: |
echo "=========================================="
echo "Verifying Test Outcome"
echo "=========================================="

EXPECTED_RESULT="${{ matrix.test-type }}"
ACTUAL_SUCCESS="${{ env.INSTALL_SUCCESS }}"

echo "Expected: $EXPECTED_RESULT"
echo "Actual Success: $ACTUAL_SUCCESS"

if [[ "$EXPECTED_RESULT" == "success" && "$ACTUAL_SUCCESS" == "true" ]]; then
echo "✅ Test passed: Installation succeeded as expected"
exit 0
elif [[ "$EXPECTED_RESULT" == "failure" && "$ACTUAL_SUCCESS" == "false" ]]; then
echo "✅ Test passed: Installation failed as expected"
exit 0
else
echo "❌ Test failed: Unexpected outcome"
exit 1
fi

# ===========================================================================
# Final Test Summary
# ===========================================================================
test-summary:

Check warning

Code scanning / CodeQL

Workflow does not contain permissions (Medium, test)

Actions job or workflow does not limit the permissions of the GITHUB_TOKEN. Consider setting an explicit permissions block, using the following as a minimal starting point: {}

Copilot Autofix

AI 3 days ago

To fix this, explicitly restrict the GITHUB_TOKEN permissions used by this workflow to the minimum needed. The jobs only need to read repository contents/metadata (for the PR info and to construct the raw GitHub URL) and run shell/Docker commands; they do not push commits, create releases, or modify issues/PRs. The minimal standard pattern is to set permissions: contents: read at the top level of the workflow so it applies to all jobs (test-native-runners, test-docker-containers, and test-summary) that do not override it.

The best fix without changing functionality is therefore:

  • Add a root-level permissions: block right after name: Test MLCFlow Installer (or just after on: if you prefer), at the same indentation level as on: and jobs:.
  • Set contents: read as the only permission, which is sufficient for read-only access and is compatible with everything the workflow currently does.
  • No changes are needed inside individual jobs; they will automatically inherit this restricted permission set.

No imports or additional methods are required, since this is purely a YAML configuration change.

Suggested changeset 1
.github/workflows/test-installer-curl.yml

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/.github/workflows/test-installer-curl.yml b/.github/workflows/test-installer-curl.yml
--- a/.github/workflows/test-installer-curl.yml
+++ b/.github/workflows/test-installer-curl.yml
@@ -1,5 +1,8 @@
 name: Test MLCFlow Installer
 
+permissions:
+  contents: read
+
 on:
   pull_request:
     branches:
EOF
Comment on lines +470 to +516
name: Test Summary
runs-on: ubuntu-latest
needs:
  - test-native-runners
  - test-docker-containers
if: always()
steps:
  - name: Generate Test Summary
    run: |
      echo "=========================================="
      echo " MLCFlow Installer CI Test Summary"
      echo "=========================================="
      echo ""
      echo "This CI workflow validates the MLCFlow installer by downloading"
      echo "directly from GitHub and using the exact distribution method that"
      echo "users will use in production:"
      echo " curl -sSL <github-raw-url>/mlcflow_linux.sh | bash"
      echo ""
      echo "✅ Test Coverage:"
      echo " - Native GitHub runners (Ubuntu, macOS)"
      echo " - Docker containers (Debian, Rocky, Alma, CentOS)"
      echo " - Multiple installation modes (basic, custom, verbose, quiet)"
      echo ""
      echo "⚠️ Known Limitation:"
      echo " - Only non-interactive mode is tested in CI"
      echo " - Interactive prompts are not covered by automated tests"
      echo ""
      echo "=========================================="
      echo ""

      # Check if all required jobs succeeded
      NATIVE_STATUS="${{ needs.test-native-runners.result }}"
      DOCKER_STATUS="${{ needs.test-docker-containers.result }}"

      echo "Job Results:"
      echo " Native Runners: $NATIVE_STATUS"
      echo " Docker Containers: $DOCKER_STATUS"
      echo ""

      if [[ "$NATIVE_STATUS" == "success" && \
            "$DOCKER_STATUS" == "success" ]]; then
        echo "Result: ✅ ALL TESTS PASSED"
        exit 0
      else
        echo "Result: ❌ SOME TESTS FAILED"
        exit 1
      fi

Check warning

Code scanning / CodeQL

Workflow does not contain permissions (Medium, test)

Actions job or workflow does not limit the permissions of the GITHUB_TOKEN. Consider setting an explicit permissions block, using the following as a minimal starting point: {}

Copilot Autofix

AI 3 days ago

In general, to fix this problem you must add an explicit permissions block either at the top level of the workflow (to apply to all jobs) or within the specific job(s), granting only the scopes required. For a purely read‑only CI workflow that only checks out code and runs scripts, contents: read is usually sufficient as a default.

For this specific file, the simplest non‑disruptive fix is to add a workflow‑level permissions section granting only read access to repository contents. The provided snippet doesn’t show any actions that need write access (no use of actions/github-script to modify issues, no push, etc.), so contents: read is an appropriate minimal baseline. This will automatically apply to the test-summary job (and all other jobs in this workflow) without changing any existing functionality, since they already work with read‑only content. Concretely, insert:

permissions:
  contents: read

near the top of .github/workflows/test-installer-curl.yml, after the on: block (or before on: if you prefer; both are valid in the root). No imports or other definitions are required, as this is standard GitHub Actions YAML configuration.

Suggested changeset 1
.github/workflows/test-installer-curl.yml

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/.github/workflows/test-installer-curl.yml b/.github/workflows/test-installer-curl.yml
--- a/.github/workflows/test-installer-curl.yml
+++ b/.github/workflows/test-installer-curl.yml
@@ -10,6 +10,9 @@
       - '.github/workflows/test-installer-curl.yml'
   workflow_dispatch:
 
+permissions:
+  contents: read
+
 # Only allow one workflow run per PR to conserve CI resources
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
EOF
@github-actions

🤖 AI PR Review Summary

This PR introduces enhancements to the GitHub Actions workflows by refining the incremental PR diff workflow and adding a comprehensive new workflow to test the MLCFlow installer script across multiple OS environments and scenarios. The new test-installer-curl.yml workflow covers native runners and Docker containers, testing various installation scenarios including basic install, quiet mode, upgrade mode, and custom virtual environment directories. It validates installation success by checking expected artifacts and CLI functionality. Risks include increased CI runtime and complexity, potential flakiness in curl-pipe installation tests, and the concurrency setting which may cancel in-progress runs and lose intermediate results. The design is thorough and well-structured but could benefit from clearer documentation on concurrency trade-offs and potential fallback strategies for flaky tests.

@github-actions

🤖 AI PR Review Summary

This PR adds a new comprehensive GitHub Actions workflow to test the MLCFlow installer script across multiple OS environments and scenarios using both native runners and Docker containers. It also improves the existing AI PR review workflow by fetching the PR base ref and PR head commit explicitly. The new test workflow runs the installer via the exact curl-pipe method users will use, validates installation artifacts, and verifies expected outcomes. Risks include increased CI resource usage due to the large matrix and potential flakiness from network-dependent curl-pipe installation. Design is robust with detailed validation steps and concurrency control, but the hardcoded branch name in workflow_dispatch event and some commented considerations on concurrency behavior may need review.

@github-actions

🤖 AI PR Review Summary

This PR adds a new comprehensive GitHub Actions workflow to test the MLCFlow installer script across multiple OS environments and scenarios, including native runners and Docker containers. It also updates the existing AI PR review workflow to fetch the PR base ref and PR head commit explicitly. The new test workflow is extensive, covering various OS versions, installation modes, and validation steps. Risks include increased CI runtime and complexity, potential flakiness in curl-pipe installer tests, and the concurrency cancellation possibly hiding intermediate failures. Design-wise, the test matrix is well-structured and thorough, but hardcoded branch names and some commented-out code may need cleanup.

@github-actions

🤖 AI PR Review Summary

This PR adds a new comprehensive GitHub Actions workflow to test the MLCFlow installer script across multiple OS environments and scenarios using both native runners and Docker containers. It also updates the existing AI PR review workflow to fetch the PR base ref and PR head commit explicitly. The new test workflow is extensive, covering various OS versions, installation modes, and validation steps. Risks include increased CI runtime and complexity, potential flakiness due to network-dependent curl-pipe installation, and the concurrency setting that may cancel in-progress runs, possibly losing intermediate test results. Design-wise, the workflow is well-structured with clear matrix definitions and validation steps, but the hardcoded branch in workflow_dispatch and some commented-out options could be improved for flexibility.

@github-actions

🤖 AI PR Review Summary

This PR adds a new comprehensive GitHub Actions workflow to test the MLCFlow installer script across multiple OS environments and scenarios using both native runners and Docker containers. It also improves the existing AI PR review workflow by ensuring the checkout uses the PR base ref and fetches the PR head commit explicitly. The new test workflow runs the installer via the exact curl-pipe method users will use, validates installation artifacts, and verifies expected outcomes. Risks include increased CI resource usage due to the extensive test matrix and potential flakiness in curl-pipe execution. Design is robust with concurrency controls and detailed validation steps, but the hardcoded branch in workflow_dispatch event and some commented notes suggest areas for future refinement.
