160 commits
feead09
WCNF parser
ThomSerg Sep 11, 2025
5ade48e
Small docstring change
ThomSerg Sep 11, 2025
7f52f5f
OPB parser
ThomSerg Sep 11, 2025
548de8e
Move parser out of init and add cli
ThomSerg Sep 12, 2025
4505025
Add MSE and OPB datasets
ThomSerg Sep 12, 2025
2b26034
Rename datasets to dataset
ThomSerg Sep 12, 2025
e238c29
Dataset specific 'open'
ThomSerg Sep 12, 2025
669875a
Dataset module init file
ThomSerg Sep 12, 2025
c1bd2fe
Add benchmark runners
ThomSerg Sep 12, 2025
83454e0
Formatting
ThomSerg Sep 12, 2025
7f2d363
XCSP3 as dataset and benchmark
ThomSerg Sep 12, 2025
9173c9f
Parsers with changeable 'open'
ThomSerg Sep 12, 2025
52b95de
Type-hints and docstrings
ThomSerg Sep 12, 2025
bf5ecd2
Add TODOs
ThomSerg Sep 12, 2025
5dc3886
Mising helper functions
ThomSerg Sep 12, 2025
7209c62
Print stacktrace of process
ThomSerg Sep 12, 2025
f66c8c5
Fix arguments
ThomSerg Sep 12, 2025
6ab8b32
Fix overwritten open
ThomSerg Sep 12, 2025
34c8a9e
Read as string instead of StringIO
ThomSerg Sep 12, 2025
fd55b3a
Read as text instead of binary
ThomSerg Sep 12, 2025
2be9fa6
Sigterm callbacks
ThomSerg Sep 12, 2025
2e64623
Attempt at fixing some nested memory exceptions
ThomSerg Sep 12, 2025
5b92680
Overwritable exit status
ThomSerg Sep 12, 2025
8fff254
Validate dataset arguments
ThomSerg Sep 12, 2025
2b4a8f0
Check non-empty dataset
ThomSerg Sep 12, 2025
b68144d
Add feedback finished downloading
ThomSerg Sep 12, 2025
b08df43
Small fixes
ThomSerg Sep 12, 2025
431b065
Fix intermediate solutions and time tracking
ThomSerg Oct 10, 2025
7d98c35
Increase intermediate solution time resolution
ThomSerg Oct 10, 2025
4664051
Missing default return argument
ThomSerg Oct 10, 2025
582fc96
Only import "resource" when supported
ThomSerg Oct 17, 2025
2eea41c
remove var x0 which is not used in opb
OrestisLomis Oct 23, 2025
6111fc4
rcpsp dataset and benchmark
ThomSerg Oct 24, 2025
af36c87
opb fix intermediate solutions
ThomSerg Oct 24, 2025
a834387
update docstrings
ThomSerg Oct 24, 2025
8805cad
Fix more docstring
ThomSerg Oct 24, 2025
ce6b6bc
Add JSPLib dataset and benchmark
ThomSerg Oct 24, 2025
9098299
Add bounds for all jsplib instances
ThomSerg Oct 24, 2025
658967d
Fix choco args
ThomSerg Oct 25, 2025
eb41634
Merge branch 'benchmark_datasets' of https://github.com/CPMpy/cpmpy i…
ThomSerg Oct 25, 2025
38db290
Fixes
ThomSerg Oct 25, 2025
41e3768
Merge remote-tracking branch 'origin/master' into benchmark_datasets
ThomSerg Oct 27, 2025
62b605d
correct jsplib output file name
ThomSerg Nov 3, 2025
ddf6938
remove matplotlib import
ThomSerg Nov 3, 2025
344aaaf
xcsp3 track intermediate sol time
ThomSerg Nov 3, 2025
7cd1bb1
opb print intermediate solutions
ThomSerg Nov 3, 2025
a21a040
mse print intermediate solutions
ThomSerg Nov 3, 2025
eda839c
cplex and hexaly solver arguments
ThomSerg Nov 3, 2025
0bd0daa
Merge branch 'master' into benchmark_datasets
ThomSerg Jan 6, 2026
2004cfe
Add nurse rostering dataset
ThomSerg Jan 6, 2026
8b76fd3
Remove left-over print statement
ThomSerg Jan 6, 2026
e59fa99
small docstring
ThomSerg Jan 6, 2026
c8729af
Nurserostering parser tool
ThomSerg Jan 6, 2026
dd8fe9b
Nurserostering benchmark
ThomSerg Jan 6, 2026
30c9e48
ensutre soft is lower
ThomSerg Jan 6, 2026
5eb8b57
make sure path exists
ThomSerg Jan 6, 2026
cef1ab3
prototype runner
ThomSerg Jan 6, 2026
5611bf0
test_examples: missing comma in skip list
tias Jan 7, 2026
b1e6793
Missing packaging in setup.py (#813)
ThomSerg Jan 7, 2026
841a8ab
Expand CNF using encoding back-end (#782)
hbierlee Jan 8, 2026
1ac8f7c
Change `cp.sum(*iterable, **kwargs)` to `cp.sum(iterable, **kwargs)` …
hbierlee Jan 9, 2026
e30bec7
add hexaly to readme, name it a global opt solver (#767)
tias Jan 12, 2026
1186d28
Fix/bug810 handle pdk unsat with conditions (#811)
hbierlee Jan 12, 2026
76d19dc
miplib dataset
ThomSerg Jan 20, 2026
54031bd
start of reader and writer
ThomSerg Jan 20, 2026
3c5e1bc
move datastructures from class to instance level
ThomSerg Jan 22, 2026
b353a95
Fix parser and add objective transformation
ThomSerg Jan 22, 2026
73ea3de
Add basic tests
ThomSerg Jan 22, 2026
5b32da1
Add SCIP as reader/writer tool
ThomSerg Jan 29, 2026
a5fd596
Merge branch 'miplib' into benchmark_datasets
ThomSerg Jan 29, 2026
7138a20
Add dataset name as metadata
ThomSerg Jan 29, 2026
f5d40c5
Setup add io deps
ThomSerg Jan 29, 2026
b3aaf70
generic IO module
ThomSerg Jan 29, 2026
cb10ab8
Dimacs add header support
ThomSerg Jan 29, 2026
b4be3aa
Dimacs check for objective
ThomSerg Jan 29, 2026
dc34e6e
Move IO tools
ThomSerg Jan 29, 2026
ae75200
add tools to IO module
ThomSerg Jan 29, 2026
df93d35
Add comment
ThomSerg Jan 29, 2026
d6e3f97
Add docstring
ThomSerg Jan 29, 2026
06cf736
Move datasets to shared directory
ThomSerg Jan 29, 2026
4c54114
convert JSPLibDataset to _Dataset subclass
ThomSerg Jan 29, 2026
a601711
support nested files
ThomSerg Jan 29, 2026
2240e0c
opb nested files and competition filter
ThomSerg Jan 29, 2026
00cd3c9
More _Dataset subclassing
ThomSerg Jan 29, 2026
e57c533
Fix metadata order
ThomSerg Jan 30, 2026
132725d
re-usable dataset downloader
ThomSerg Jan 30, 2026
d85aeb2
nurserostering reuse downloader
ThomSerg Jan 30, 2026
61b0c42
Improve all datasets
ThomSerg Jan 30, 2026
0c838b6
add dataset classes to module
ThomSerg Jan 30, 2026
3476620
Simplify __main__
ThomSerg Jan 30, 2026
5fbc2ee
small tweaks to readers and writers
ThomSerg Jan 30, 2026
c4e6b3b
Add Ignace's opb writer
ThomSerg Jan 30, 2026
52e905d
add writer to module
ThomSerg Jan 30, 2026
52a3948
small change to docstring
ThomSerg Jan 30, 2026
0ef45ce
Merge branch 'benchmark_datasets' into observer_pattern
ThomSerg Feb 3, 2026
8005700
update io
ThomSerg Feb 4, 2026
8e7fb1c
update base
ThomSerg Feb 4, 2026
67643b3
add observers
ThomSerg Feb 4, 2026
dd53ae0
Collection of other changes
ThomSerg Feb 4, 2026
804c96f
setup command
ThomSerg Feb 4, 2026
7180214
fix for pinac
ThomSerg Feb 4, 2026
ad0d061
update to new IO location
ThomSerg Feb 11, 2026
7a48b1e
Update writers
ThomSerg Feb 12, 2026
0baaeb2
fix import
ThomSerg Feb 12, 2026
017e1a2
experimental metadata collection
ThomSerg Feb 12, 2026
95198ff
experimental download origin
ThomSerg Feb 12, 2026
41ee111
imports
ThomSerg Feb 12, 2026
f477159
dataset cli
ThomSerg Feb 12, 2026
31b0aee
remove duplicate writer
ThomSerg Feb 12, 2026
4bedebd
Dataset transform helpers
ThomSerg Feb 12, 2026
0b36a56
Cleanup dataset base
ThomSerg Feb 13, 2026
671eeb4
Remove duplicate methods
ThomSerg Feb 13, 2026
ca67973
simplify metadata logic
ThomSerg Feb 13, 2026
21ea64b
More IO optional dependencies
ThomSerg Feb 13, 2026
782ad0d
clarify name
ThomSerg Feb 13, 2026
7bd68da
support multiple citations
ThomSerg Feb 13, 2026
f36e213
Consistent dataset objects
ThomSerg Feb 16, 2026
a86544f
Remove domain and format tags
ThomSerg Feb 16, 2026
ece1fc7
consistent reader/parser/loader naming
ThomSerg Feb 17, 2026
048ad1b
writer auto format detection
ThomSerg Feb 17, 2026
655dd86
Merge branch 'observer_pattern' into benchmark_datasets
ThomSerg Feb 17, 2026
75585c6
Merge branch 'master' into benchmark_datasets
ThomSerg Feb 17, 2026
85f2f98
Metadata collection
ThomSerg Feb 28, 2026
9bba9b4
Move to datasets
ThomSerg Feb 28, 2026
e89601e
xcsp3 io tool
ThomSerg Feb 28, 2026
59623af
io tool all pip install
ThomSerg Feb 28, 2026
6e803ce
Dataset class hierarchy
ThomSerg Mar 1, 2026
9e0bbdc
Dataset metadata properties
ThomSerg Mar 1, 2026
0d88403
_loader
ThomSerg Mar 1, 2026
783504b
utils
ThomSerg Mar 1, 2026
a76560f
Fix paths
ThomSerg Mar 1, 2026
86eaf16
Files and generator datasets
ThomSerg Mar 1, 2026
ee5a030
SAT dataset
ThomSerg Mar 1, 2026
5335c43
More expressive generator datasets
ThomSerg Mar 1, 2026
c46cf03
Dataset metadata classproperty
ThomSerg Mar 1, 2026
ea40e28
Vastly expanded metadata system based on industry best practices
ThomSerg Mar 2, 2026
a1b421f
Model objects metadata
ThomSerg Mar 2, 2026
cad1228
Refactor and document datasets core
ThomSerg Mar 5, 2026
d99b77c
Clean utils
ThomSerg Mar 5, 2026
b425c22
Remove unused config
ThomSerg Mar 5, 2026
5424821
Refactor metadata
ThomSerg Mar 5, 2026
df426ea
Fix imports and names
ThomSerg Mar 6, 2026
fdad3db
Update datasets
ThomSerg Mar 6, 2026
766112a
opb writer fix transformations
ThomSerg Mar 6, 2026
9d7dade
scip small changes
ThomSerg Mar 6, 2026
e127311
path rename
ThomSerg Mar 6, 2026
81b88a3
Move dimacs
ThomSerg Mar 6, 2026
3bd4db8
Fix imports
ThomSerg Mar 6, 2026
94a0211
Writer compression option
ThomSerg Mar 6, 2026
e019155
Small fixes to metadata
ThomSerg Mar 6, 2026
aa1303a
Start of docs
ThomSerg Mar 6, 2026
e3d647f
Merge remote-tracking branch 'origin/master' into benchmark_datasets
ThomSerg Mar 9, 2026
b542c6e
Wcnf reader and writer
ThomSerg Mar 11, 2026
2aa5ed7
Dimacs loader support raw strings
ThomSerg Mar 11, 2026
b3478bf
Consistent IO naming
ThomSerg Mar 11, 2026
974df6b
Make datasets consistent with paper: remove loader
ThomSerg Mar 11, 2026
f4dd1ab
Add / update docs
ThomSerg Mar 11, 2026
cf221b5
Add missing wcnf parts
ThomSerg Mar 11, 2026
43746f8
Remove import of not yet added dataset
ThomSerg Apr 3, 2026
9369103
Disable model features temporarily
ThomSerg Apr 3, 2026
152 changes: 148 additions & 4 deletions cpmpy/cli.py
@@ -1,25 +1,146 @@
"""
Command-line interface for CPMpy.

This module provides a simple CLI to interact with CPMpy, primarily to display
version information about CPMpy itself and the available solver backends.

Usage:
    cpmpy <COMMAND>

Commands:
    version                            Show CPMpy version and solver backends
    dataset list                       List available datasets
    dataset info <name>                Show dataset details
    dataset download <name> [options]  Download a dataset
"""

import argparse
from cpmpy import __version__
import cpmpy as cp


# ── Dataset class registry ───────────────────────────────────────
# Maps CLI name -> (class, {param_name: default_value})

DATASET_CLASSES = {
    "xcsp3": ("XCSP3Dataset", {"year": 2024, "track": "CSP"}),
    "mse": ("MSEDataset", {"year": 2024, "track": "exact-unweighted"}),
    "opb": ("OPBDataset", {"year": 2024, "track": "OPT-LIN"}),
    "miplib": ("MIPLibDataset", {"year": 2024, "track": "exact-unweighted"}),
    "psplib": ("PSPLibDataset", {"variant": "rcpsp", "family": "j30"}),
    "nurserostering": ("NurseRosteringDataset", {}),
    "jsplib": ("JSPLibDataset", {}),
}


def _import_dataset_class(class_name):
    """Lazily import a dataset class from cpmpy.tools.dataset."""
    import cpmpy.tools.dataset as ds
    return getattr(ds, class_name)


# ── Commands ─────────────────────────────────────────────────────

def command_version(args):
    print(f"CPMpy version: {__version__}")
    cp.SolverLookup().print_version()


def command_dataset_list(args):
    print("Available datasets:\n")
    for name, (cls_name, params) in DATASET_CLASSES.items():
        try:
            cls = _import_dataset_class(cls_name)
            desc = getattr(cls, "description", "")
        except Exception:
            desc = ""
        line = f"  {name:<20s}"
        if desc:
            # Truncate long descriptions
            short = desc if len(desc) <= 60 else desc[:57] + "..."
            line += f" {short}"
        print(line)
    print("\nUse 'cpmpy dataset info <name>' for details.")


def command_dataset_info(args):
    name = args.name.lower()
    if name not in DATASET_CLASSES:
        print(f"Unknown dataset: {args.name}")
        print(f"Available: {', '.join(DATASET_CLASSES)}")
        return

    cls_name, params = DATASET_CLASSES[name]
    try:
        cls = _import_dataset_class(cls_name)
        meta = cls.dataset_metadata()
    except Exception as e:
        print(f"Error loading dataset class: {e}")
        return

    print(f"\n  {meta.get('name', name).upper()}")
    print(f"  {'─' * 40}")
    if meta.get("description"):
        print(f"  {meta['description']}")
        print()
    for key in ("domain", "format", "url", "license"):
        val = meta.get(key)
        if val:
            print(f"  {key:<12s} {val}")

    if params:
        print("\n  Parameters:")
        for p, default in params.items():
            print(f"    --{p:<14s} (default: {default})")

    # Show example usage
    print("\n  Example:")
    arg_parts = []
    for p, default in params.items():
        arg_parts.append(f"--{p} {default}")
    extra = (" " + " ".join(arg_parts)) if arg_parts else ""
    print(f"    cpmpy dataset download {name}{extra}")
    print()


def command_dataset_download(args):
    name = args.name.lower()
    if name not in DATASET_CLASSES:
        print(f"Unknown dataset: {args.name}")
        print(f"Available: {', '.join(DATASET_CLASSES)}")
        return

    cls_name, param_defaults = DATASET_CLASSES[name]

    # Build constructor kwargs from CLI args
    kwargs = {"root": args.root, "download": True}

    for param, default in param_defaults.items():
        cli_val = getattr(args, param, None)
        if cli_val is not None:
            # Cast to int if the default is int
            if isinstance(default, int):
                try:
                    cli_val = int(cli_val)
                except ValueError:
                    pass
            kwargs[param] = cli_val
        else:
            kwargs[param] = default

    cls = _import_dataset_class(cls_name)
    print(f"Downloading {name} dataset...")
    for param, default in param_defaults.items():
        print(f"  {param}: {kwargs.get(param, default)}")
    print(f"  root: {args.root}")
    print()

    try:
        dataset = cls(**kwargs)
        print(f"\nDone! {len(dataset)} instances downloaded to {args.root}/")
    except Exception as e:
        print(f"\nError: {e}")


# ── Main ─────────────────────────────────────────────────────────

def main():
    parser = argparse.ArgumentParser(description="CPMpy command line interface")
    subparsers = parser.add_subparsers(dest="command", required=True)
@@ -28,5 +149,28 @@ def main():
    version_parser = subparsers.add_parser("version", help="Show version information on CPMpy and its solver backends")
    version_parser.set_defaults(func=command_version)

    # cpmpy dataset ...
    dataset_parser = subparsers.add_parser("dataset", help="Browse and download benchmark datasets")
    dataset_sub = dataset_parser.add_subparsers(dest="dataset_command", required=True)

    # cpmpy dataset list
    list_parser = dataset_sub.add_parser("list", help="List available datasets")
    list_parser.set_defaults(func=command_dataset_list)

    # cpmpy dataset info <name>
    info_parser = dataset_sub.add_parser("info", help="Show dataset details")
    info_parser.add_argument("name", help="Dataset name")
    info_parser.set_defaults(func=command_dataset_info)

    # cpmpy dataset download <name> [options]
    dl_parser = dataset_sub.add_parser("download", help="Download a dataset")
    dl_parser.add_argument("name", help="Dataset name")
    dl_parser.add_argument("--root", default="./data", help="Download directory (default: ./data)")
    dl_parser.add_argument("--year", default=None, help="Year/edition")
    dl_parser.add_argument("--track", default=None, help="Track/category")
    dl_parser.add_argument("--variant", default=None, help="Variant (e.g. for psplib)")
    dl_parser.add_argument("--family", default=None, help="Family (e.g. for psplib)")
    dl_parser.set_defaults(func=command_dataset_download)

    args = parser.parse_args()
    args.func(args)
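The download command above merges registry defaults with CLI overrides, casting a string override to int when the registry default is an int. A minimal standalone sketch of that logic (the `build_kwargs` helper is hypothetical, not part of the PR):

```python
# Hypothetical sketch of command_dataset_download's kwargs construction:
# registry defaults are merged with CLI overrides, and a string override
# is cast to int when the registry default is an int.
def build_kwargs(param_defaults, cli_args, root="./data"):
    kwargs = {"root": root, "download": True}
    for param, default in param_defaults.items():
        cli_val = cli_args.get(param)
        if cli_val is not None:
            if isinstance(default, int):
                try:
                    cli_val = int(cli_val)
                except ValueError:
                    pass  # keep the raw string; the dataset class may still accept it
            kwargs[param] = cli_val
        else:
            kwargs[param] = default
    return kwargs


# "--year 2023" arrives as the string "2023" and is cast to int;
# "track" falls back to its registry default.
kw = build_kwargs({"year": 2024, "track": "CSP"}, {"year": "2023"})
```

Here `kw` ends up as `{"root": "./data", "download": True, "year": 2023, "track": "CSP"}`.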
2 changes: 1 addition & 1 deletion cpmpy/solvers/cpo.py
@@ -46,7 +46,7 @@
 import warnings

 from .solver_interface import SolverInterface, SolverStatus, ExitStatus, Callback
-from .. import DirectConstraint
+from ..expressions.globalconstraints import DirectConstraint
 from ..expressions.core import Expression, Comparison, Operator, BoolVal
 from ..expressions.globalconstraints import GlobalConstraint
 from ..expressions.globalfunctions import GlobalFunction
2 changes: 2 additions & 0 deletions cpmpy/solvers/pindakaas.py
@@ -279,6 +279,8 @@ def _post_constraint(self, cpm_expr, conditions=[]):
             raise TypeError

         """Add a single, *transformed* constraint, implied by conditions."""
+        import pindakaas as pdk
+
         if isinstance(cpm_expr, BoolVal):
             # base case: Boolean value
             if cpm_expr.args[0] is False:
68 changes: 68 additions & 0 deletions cpmpy/tools/benchmark/__init__.py
@@ -0,0 +1,68 @@
import sys
import time
import warnings
import psutil


TIME_BUFFER = 5  # seconds
# TODO : see if good value
MEMORY_BUFFER_SOFT = 2  # MiB
MEMORY_BUFFER_HARD = 0  # MiB
MEMORY_BUFFER_SOLVER = 20  # MB


def set_memory_limit(mem_limit):
    """
    Set memory limit (Virtual Memory Size).
    """
    if mem_limit is not None:
        soft = max(_mib_as_bytes(mem_limit) - _mib_as_bytes(MEMORY_BUFFER_SOFT), _mib_as_bytes(MEMORY_BUFFER_SOFT))
        hard = max(_mib_as_bytes(mem_limit) - _mib_as_bytes(MEMORY_BUFFER_HARD), _mib_as_bytes(MEMORY_BUFFER_HARD))
        soft = min(soft, hard)
        if sys.platform != "win32":
            import resource
            resource.setrlimit(resource.RLIMIT_AS, (soft, hard))  # limit memory in number of bytes
        else:
            warnings.warn("Memory limits using `resource` are not supported on Windows. Skipping hard limit.")

def disable_memory_limit():
    if sys.platform != "win32":
        import resource
        soft, hard = resource.getrlimit(resource.RLIMIT_AS)
        # set a very high soft limit
        resource.setrlimit(resource.RLIMIT_AS, (hard, hard))

def set_time_limit(time_limit, verbose: bool = False):
    """
    Set time limit (CPU time in seconds).
    """
    if time_limit is not None:
        if sys.platform != "win32":
            import resource
            soft = int(time_limit)
            hard = resource.RLIM_INFINITY
            resource.setrlimit(resource.RLIMIT_CPU, (soft, hard))
        else:
            warnings.warn("CPU time limits using `resource` are not supported on Windows. Skipping hard limit.")

def _wall_time(p: psutil.Process):
    return time.time() - p.create_time()

def _mib_as_bytes(mib: int) -> int:
    return mib * 1024 * 1024

def _mb_as_bytes(mb: int) -> int:
    return mb * 1000 * 1000

def _bytes_as_mb(bytes: int) -> int:
    return bytes // (1000 * 1000)

def _bytes_as_gb(bytes: int) -> int:
    return bytes // (1000 * 1000 * 1000)

def _bytes_as_mb_float(bytes: int) -> float:
    return bytes / (1000 * 1000)

def _bytes_as_gb_float(bytes: int) -> float:
    return bytes / (1000 * 1000 * 1000)
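`set_memory_limit` above derives a soft/hard rlimit pair from the requested cap: the soft limit sits one buffer below the cap (but never below the buffer itself) and is then clamped to the hard limit. The arithmetic can be checked in isolation; this sketch mirrors the constants and helper above, while `compute_limits` itself is a hypothetical extraction:

```python
MEMORY_BUFFER_SOFT = 2  # MiB, kept free below the soft limit
MEMORY_BUFFER_HARD = 0  # MiB


def _mib_as_bytes(mib: int) -> int:
    return mib * 1024 * 1024


def compute_limits(mem_limit_mib: int):
    """Mirror of set_memory_limit's arithmetic: soft sits one buffer below
    the cap (never below the buffer itself) and never above hard."""
    soft = max(_mib_as_bytes(mem_limit_mib) - _mib_as_bytes(MEMORY_BUFFER_SOFT),
               _mib_as_bytes(MEMORY_BUFFER_SOFT))
    hard = max(_mib_as_bytes(mem_limit_mib) - _mib_as_bytes(MEMORY_BUFFER_HARD),
               _mib_as_bytes(MEMORY_BUFFER_HARD))
    return min(soft, hard), hard
```

For a 100 MiB cap this yields a 98 MiB soft and a 100 MiB hard limit; for caps at or below the soft buffer, the soft limit is clamped down to the hard limit.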