diff --git a/README.md b/README.md index f6397ac..d243ee8 100644 --- a/README.md +++ b/README.md @@ -1,3 +1,226 @@ # ArcFlow -Code for exporting data from ArchivesSpace to ArcLight, along with additional utility scripts for data handling and transformation. \ No newline at end of file +Code for exporting data from ArchivesSpace to ArcLight, along with additional utility scripts for data handling and transformation. + +## Quick Start + +This directory contains a complete, working installation of arcflow with creator records support. To run it: + +```bash +# 1. Install dependencies +pip install -r requirements.txt + +# 2. Configure credentials +cp .archivessnake.yml.example .archivessnake.yml +nano .archivessnake.yml # Add your ArchivesSpace credentials + +# 3. Run arcflow +python -m arcflow.main \ + --arclight-dir /path/to/your/arclight-app \ + --aspace-dir /path/to/your/archivesspace \ + --solr-url http://localhost:8983/solr/blacklight-core \ + --aspace-solr-url http://localhost:8983/solr/archivesspace +``` + +--- + +## Features + +- **Collection Indexing**: Exports EAD XML from ArchivesSpace and indexes to ArcLight Solr +- **Creator Records**: Extracts creator agent information and indexes as standalone documents +- **Biographical Notes**: Injects creator biographical/historical notes into collection EAD XML +- **PDF Generation**: Generates finding aid PDFs via ArchivesSpace jobs +- **Incremental Updates**: Supports modified-since filtering for efficient updates + +## Creator Records + +ArcFlow now generates standalone creator documents in addition to collection records. Creator documents: + +- Include biographical/historical notes from ArchivesSpace agent records +- Link to all collections where the creator is listed +- Can be searched and displayed independently in ArcLight +- Are marked with `is_creator: true` to distinguish from collections +- Must be fed into a Solr instance with fields to match their specific facets (See: Configure Solr Schema below) + +### Agent Filtering + +**ArcFlow automatically filters agents to include only legitimate creators** of archival materials. The following agent types are **excluded** from indexing: + +- ✗ **System users** - ArchivesSpace software users (identified by `is_user` field) +- ✗ **System-generated agents** - Auto-created for users (identified by `system_generated` field) +- ✗ **Software agents** - Excluded by not querying the `/agents/software` endpoint +- ✗ **Repository agents** - Corporate entities representing the repository itself (identified by `is_repo_agent` field) +- ✗ **Donor-only agents** - Agents with only the 'donor' role and no creator role + +**Agents are included if they meet any of these criteria:** + +- ✓ Have the **'creator' role** in linked_agent_roles +- ✓ Are **linked to published records** (and not excluded by filters above) + +This filtering ensures that only legitimate archival creators are discoverable in ArcLight, while protecting privacy and security by excluding system users and donors. + +### How Creator Records Work + +1. **Extraction**: Agent data is exported from ArchivesSpace for use in creator records +2. **Filtering**: Creator vs. non-creator agents are determined via Solr queries built from `_get_target_agent_criteria()` and `_get_nontarget_agent_criteria()`, which exclude system users, donors, and other non-creator agents +3. **Processing**: For each target creator agent, ArcFlow generates an EAC-CPF XML document that includes bioghist notes +4. 
**Linking**: Handled via Solr using the `persistent_id` field (agents and collections are linked through bioghist references)
+5. **Indexing**: Creator XML files are indexed to Solr using `traject_config_eac_cpf.rb`
+
+### Creator Document Format
+
+Creator documents are stored as XML files in the `agents/` directory using the ArchivesSpace EAC-CPF export:
+
+```xml
+<eac-cpf xmlns="urn:isbn:1-931666-33-4">
+  <!-- control section abbreviated -->
+  <cpfDescription>
+    <identity>
+      <entityType>corporateBody</entityType>
+      <nameEntry>
+        <part>Core: Leadership, Infrastructure, Futures</part>
+        <authorizedForm>local</authorizedForm>
+      </nameEntry>
+    </identity>
+    <description>
+      <existDates>
+        <date>2020-</date>
+      </existDates>
+      <biogHist>
+        <p>Founded on September 1, 2020, the Core: Leadership, Infrastructure, Futures division of the American Library Association has a mission to cultivate and amplify the collective expertise of library workers in core functions through community building, advocacy, and learning.</p>
+        <p>In June 2020, the ALA Council voted to approve Core: Leadership, Infrastructure, Futures as a new ALA division beginning September 1, 2020, and to dissolve the Association for Library Collections and Technical Services (ALCTS), the Library Information Technology Association (LITA) and the Library Leadership and Management Association (LLAMA) effective August 31, 2020. The vote to form Core was 163 to 1.(1)</p>
+        <citation>1. "ALA Council approves Core; dissolves ALCTS, LITA and LLAMA," July 1, 2020, http://www.ala.org/news/member-news/2020/07/ala-council-approves-core-dissolves-alcts-lita-and-llama.</citation>
+      </biogHist>
+    </description>
+  </cpfDescription>
+</eac-cpf>
+``` + +### Indexing Creator Documents + +#### Configure Solr Schema (Required Before Indexing) + +⚠️ **CRITICAL PREREQUISITE** - Before you can index creator records to Solr, you must configure the Solr schema. + +**See [SOLR_SCHEMA.md](SOLR_SCHEMA.md) for complete instructions on:** +- Which fields to add (is_creator, creator_persistent_id, etc.) +- Three methods to add them (Schema API recommended, managed-schema, or schema.xml) +- How to verify they're added +- Troubleshooting "unknown field" errors + +**Quick Schema Setup (Schema API method):** +```bash +# Add is_creator field +curl -X POST -H 'Content-type:application/json' \ + http://localhost:8983/solr/blacklight-core/schema \ + -d '{"add-field": {"name": "is_creator", "type": "boolean", "indexed": true, "stored": true}}' + +# Add other required fields (see SOLR_SCHEMA.md for complete list) +``` + +**Verify schema is configured:** +```bash +curl "http://localhost:8983/solr/blacklight-core/schema/fields/is_creator" +# Should return field definition, not 404 +``` + +⚠️ **If you skip this step, you'll get:** +``` +ERROR: [doc=creator_corporate_entities_584] unknown field 'is_creator' +``` + +This is a **one-time setup** per Solr instance. + +--- + +### Traject Configuration for Creator Indexing + +The `traject_config_eac_cpf.rb` file defines how EAC-CPF creator records are mapped to Solr fields. + +**Search Order**: arcflow searches for the traject config following the collection records pattern: +1. **arcuit_dir parameter** (if provided via `--arcuit-dir`) - Highest priority, most up-to-date user control +2. **arcuit gem** (via `bundle show arcuit`) - For backward compatibility when arcuit_dir not provided +3. **example_traject_config_eac_cpf.rb** in arcflow - Fallback for module usage without arcuit + +**Example File**: arcflow includes `example_traject_config_eac_cpf.rb` as a reference implementation. For production: +- Copy this file to your arcuit gem as `traject_config_eac_cpf.rb`, or +- Specify the location with `--arcuit-dir /path/to/arcuit` + +**Logging**: arcflow clearly logs which traject config file is being used when creator indexing runs. + +To index creator documents to Solr manually: + +```bash +bundle exec traject \ + -u http://localhost:8983/solr/blacklight-core \ + -i xml \ + -c traject_config_eac_cpf.rb \ + /path/to/agents/*.xml +``` + +Or integrate into your ArcFlow deployment workflow. + +## Installation + +See the original installation instructions in your deployment documentation. + +## Configuration + +- `.archivessnake.yml` - ArchivesSpace API credentials +- `.arcflow.yml` - Last update timestamp tracking + +## Usage + +```bash +python -m arcflow.main --arclight-dir /path --aspace-dir /path --solr-url http://... 
[options] +``` + +### Command Line Options + +Required arguments: +- `--arclight-dir` - Path to ArcLight installation directory +- `--aspace-dir` - Path to ArchivesSpace installation directory +- `--solr-url` - URL of the ArcLight Solr core (e.g., http://localhost:8983/solr/blacklight-core) +- `--aspace-solr-url` URL of the ASpace Solr core + +Optional arguments: +- `--force-update` - Force update of all data (recreates everything from scratch) +- `--traject-extra-config` - Path to extra Traject configuration file +- `--agents-only` - Process only agent records, skip collections (useful for testing agents) +- `--collections-only` - Skips creators, processes EAD, PDF finding aid and indexes collections +- `--skip-creator-indexing` - Collects EAC-CPF files only, does not index into Solr +### Examples + +**Normal run (process all collections and agents):** +```bash +python -m arcflow.main \ + --arclight-dir /path/to/arclight \ + --aspace-dir /path/to/archivesspace \ + --solr-url http://localhost:8983/solr/blacklight-core \ + --aspace-solr-url http://localhost:8983/solr/archivesspace + +``` + +**Process only agents (skip collections):** +```bash +python -m arcflow.main \ + --arclight-dir /path/to/arclight \ + --aspace-dir /path/to/archivesspace \ + --solr-url http://localhost:8983/solr/blacklight-core \ + --aspace-solr-url http://localhost:8983/solr/archivesspace \ + --agents-only +``` + +**Force full update:** +```bash +python -m arcflow.main \ + --arclight-dir /path/to/arclight \ + --aspace-dir /path/to/archivesspace \ + --solr-url http://localhost:8983/solr/blacklight-core \ + --aspace-solr-url http://localhost:8983/solr/archivesspace \ + --force-update +``` + +See `--help` for all available options. \ No newline at end of file diff --git a/arcflow/main.py b/arcflow/main.py index f4f900e..9273791 100644 --- a/arcflow/main.py +++ b/arcflow/main.py @@ -9,13 +9,15 @@ import re import logging import math +import sys from xml.dom.pulldom import parse, START_ELEMENT from xml.sax.saxutils import escape as xml_escape +from xml.etree import ElementTree as ET from datetime import datetime, timezone from asnake.client import ASnakeClient from multiprocessing.pool import ThreadPool as Pool from utils.stage_classifications import extract_labels - +import glob base_dir = os.path.abspath((__file__) + "/../../") log_file = os.path.join(base_dir, 'logs/arcflow.log') @@ -38,14 +40,20 @@ class ArcFlow: """ - def __init__(self, arclight_dir, aspace_dir, solr_url, traject_extra_config='', force_update=False): + def __init__(self, arclight_dir, aspace_dir, solr_url, aspace_solr_url, traject_extra_config='', force_update=False, agents_only=False, collections_only=False, arcuit_dir=None, skip_creator_indexing=False): self.solr_url = solr_url + self.aspace_solr_url = aspace_solr_url self.batch_size = 1000 - self.traject_extra_config = f'-c {traject_extra_config}' if traject_extra_config.strip() else '' + clean_extra_config = traject_extra_config.strip() + self.traject_extra_config = clean_extra_config or None self.arclight_dir = arclight_dir self.aspace_jobs_dir = f'{aspace_dir}/data/shared/job_files' self.job_type = 'print_to_pdf_job' self.force_update = force_update + self.agents_only = agents_only + self.collections_only = collections_only + self.arcuit_dir = arcuit_dir + self.skip_creator_indexing = skip_creator_indexing self.log = logging.getLogger('arcflow') self.pid = os.getpid() self.pid_file_path = os.path.join(base_dir, 'arcflow.pid') @@ -59,10 +67,15 @@ def __init__(self, arclight_dir, aspace_dir, 
solr_url, traject_extra_config='', self.start_time = int(time.time()) try: with open(self.arcflow_file_path, 'r') as file: - config = yaml.safe_load(file) + config = yaml.safe_load(file) or {} try: - self.last_updated = datetime.strptime( - config['last_updated'], '%Y-%m-%dT%H:%M:%S%z') + date_fmt = '%Y-%m-%dT%H:%M:%S%z' + epoch = datetime.fromtimestamp(0, timezone.utc) + legacy_ts = config.get('last_updated') + collections_ts_str = config.get('last_updated_collections') or legacy_ts + creators_ts_str = config.get('last_updated_creators') or legacy_ts + self.last_updated_collections = datetime.strptime(collections_ts_str, date_fmt) if collections_ts_str else epoch + self.last_updated_creators = datetime.strptime(creators_ts_str, date_fmt) if creators_ts_str else epoch except Exception as e: self.log.error(f'Error parsing last_updated date on file .arcflow.yml: {e}') exit(0) @@ -71,7 +84,8 @@ def __init__(self, arclight_dir, aspace_dir, solr_url, traject_extra_config='', self.log.error('File .arcflow.yml not found. Create the file and try again or run with --force-update to recreate EADs from scratch.') exit(0) else: - self.last_updated = datetime.fromtimestamp(0, timezone.utc) + self.last_updated_collections = datetime.fromtimestamp(0, timezone.utc) + self.last_updated_creators = datetime.fromtimestamp(0, timezone.utc) try: with open(os.path.join(base_dir, '.archivessnake.yml'), 'r') as file: config = yaml.safe_load(file) @@ -127,13 +141,16 @@ def update_repositories(self): self.log.info('Checking for updates on repositories information...') update_repos = False + # Use the oldest of the two run timestamps so that a repo change + # is detected regardless of which pipeline last ran. + last_updated = min(self.last_updated_collections, self.last_updated_creators) for repo in repos: # python doesn't support Zulu timezone suffixes, # converting system_mtime and user_mtime to UTC offset notation - if (self.last_updated <= datetime.strptime( + if (last_updated <= datetime.strptime( repo['system_mtime'].replace('Z','+0000'), '%Y-%m-%dT%H:%M:%S%z') - or self.last_updated <= datetime.strptime( + or last_updated <= datetime.strptime( repo['user_mtime'].replace('Z','+0000'), '%Y-%m-%dT%H:%M:%S%z')): update_repos = True @@ -301,6 +318,7 @@ def task_resource(self, repo, resource_id, xml_dir, pdf_dir, indent_size=0): f'{pdf_dir}/{prev_ead_id}.pdf', indent_size=indent_size) + os.makedirs(xml_dir, exist_ok=True) self.save_file(xml_file_path, xml_content, 'XML', indent_size=indent_size) self.create_symlink( os.path.basename(xml_file_path), @@ -376,6 +394,7 @@ def task_pdf(self, repo_uri, job_id, ead_id, pdf_dir, indent_size=0): else: pdf_content = b'' # empty PDF file + os.makedirs(pdf_dir, exist_ok=True) self.save_file( f'{pdf_dir}/{ead_id}.pdf', pdf_content, @@ -389,41 +408,19 @@ def task_pdf(self, repo_uri, job_id, ead_id, pdf_dir, indent_size=0): time.sleep(5) - def update_eads(self): + def process_collections(self): """ Update EADs in ArcLight with the latest data from resources in ArchivesSpace. 
""" xml_dir = f'{self.arclight_dir}/public/xml' + resource_dir = f'{xml_dir}/resources' pdf_dir = f'{self.arclight_dir}/public/pdf' - modified_since = int(self.last_updated.timestamp()) + modified_since = int(self.last_updated_collections.timestamp()) + if self.force_update or modified_since <= 0: modified_since = 0 - # delete all EADs and related files in ArcLight Solr - try: - response = requests.post( - f'{self.solr_url}/update?commit=true', - json={'delete': {'query': '*:*'}}, - ) - if response.status_code == 200: - self.log.info('Deleted all EADs from ArcLight Solr.') - # delete related directories after suscessful - # deletion from solr - for dir_path, dir_name in [(xml_dir, 'XMLs'), (pdf_dir, 'PDFs')]: - try: - shutil.rmtree(dir_path) - self.log.info(f'Deleted {dir_name} directory {dir_path}.') - except Exception as e: - self.log.error(f'Error deleting {dir_name} directory "{dir_path}": {e}') - else: - self.log.error(f'Failed to delete all EADs from Arclight Solr. Status code: {response.status_code}') - except requests.exceptions.RequestException as e: - self.log.error(f'Error deleting all EADs from ArcLight Solr: {e}') - - # create directories if don't exist - for dir_path in (xml_dir, pdf_dir): - os.makedirs(dir_path, exist_ok=True) # process resources that have been modified in ArchivesSpace since last update self.log.info('Fetching resources from ArchivesSpace...') @@ -435,7 +432,7 @@ def update_eads(self): # Tasks for processing repositories results_1 = [pool.apply_async( self.task_repository, - args=(repo, xml_dir, modified_since, indent_size)) + args=(repo, resource_dir, modified_since, indent_size)) for repo in repos] # Collect outputs from repository tasks outputs_1 = [r.get() for r in results_1] @@ -443,7 +440,7 @@ def update_eads(self): # Tasks for processing resources results_2 = [pool.apply_async( self.task_resource, - args=(repo, resource_id, xml_dir, pdf_dir, indent_size)) + args=(repo, resource_id, resource_dir, pdf_dir, indent_size)) for repo, resources in outputs_1 for resource_id in resources] # Collect outputs from resource tasks outputs_2 = [r.get() for r in results_2] @@ -457,8 +454,8 @@ def update_eads(self): # Tasks for indexing pending resources results_3 = [pool.apply_async( - self.index, - args=(repo_id, f'{xml_dir}/{repo_id}_*_batch_{batch_num}.xml', indent_size)) + self.index_collections, + args=(repo_id, f'{resource_dir}/{repo_id}_*_batch_{batch_num}.xml', indent_size)) for repo_id, batch_num in batches] # Wait for indexing tasks to complete @@ -467,7 +464,7 @@ def update_eads(self): # Remove pending symlinks after indexing for repo_id, batch_num in batches: - xml_file_path = f'{xml_dir}/{repo_id}_*_batch_{batch_num}.xml' + xml_file_path = f'{resource_dir}/{repo_id}_*_batch_{batch_num}.xml' try: result = subprocess.run( f'rm {xml_file_path}', @@ -490,57 +487,126 @@ def update_eads(self): for r in results_4: r.get() - # processing deleted resources is not needed when - # force-update is set or modified_since is set to 0 - if self.force_update or modified_since <= 0: - self.log.info('Skipping deleted resources processing.') - return + return + + + + def process_deleted_records(self, scope): + """ + Process records deleted in ArchivesSpace since the last run. + + scope: 'collections', 'creators', or 'all' + Determines which record types are checked for deletion and which + timestamp is used as the lower bound for the delete-feed query. 
+ """ + xml_dir = f'{self.arclight_dir}/public/xml' + resource_dir = f'{xml_dir}/resources' + agent_dir = f'{xml_dir}/agents' + pdf_dir = f'{self.arclight_dir}/public/pdf' + + # Use the earlier timestamp when both types are in scope so no + # deletions are missed. Per-type filtering happens in the loop below. + if scope == 'all': + modified_since = min(int(self.last_updated_collections.timestamp()), + int(self.last_updated_creators.timestamp())) + elif scope == 'collections': + modified_since = int(self.last_updated_collections.timestamp()) + else: # 'creators' + modified_since = int(self.last_updated_creators.timestamp()) + + resource_pattern = r'^/repositories/(?P\d+)/resources/(?P\d+)$' + agent_pattern = r'^/agents/(?Ppeople|corporate_entities|families)/(?P\d+)$' - # process resources that have been deleted since last update in ArchivesSpace - pattern = r'^/repositories/(?P\d+)/resources/(?P\d+)$' page = 1 while True: deleted_records = self.client.get( - f'/delete-feed', + '/delete-feed', params={ 'page': page, 'modified_since': modified_since, } ).json() for record in deleted_records['results']: - match = re.match(pattern, record) - if match: - resource_id = match.group('resource_id') - self.log.info(f'{" " * indent_size}Processing deleted resource ID {resource_id}...') + resource_match = re.match(resource_pattern, record) + agent_match = re.match(agent_pattern, record) - symlink_path = f'{xml_dir}/{resource_id}.xml' + if resource_match and scope in ('collections', 'all'): + resource_id = resource_match.group('record_id') + self.log.info(f'Processing deleted resource ID {resource_id}...') + symlink_path = f'{resource_dir}/{resource_id}.xml' ead_id = self.get_ead_from_symlink(symlink_path) if ead_id: self.delete_ead( - resource_id, + resource_id, ead_id.replace('.', '-'), # dashes in Solr - f'{xml_dir}/{ead_id}.xml', # dots in filenames - f'{pdf_dir}/{ead_id}.pdf', - indent=4) + f'{resource_dir}/{ead_id}.xml', # dots in filenames + f'{pdf_dir}/{ead_id}.pdf', + indent_size=4) else: - self.log.error(f'{" " * (indent_size+2)}Symlink {symlink_path} not found. Unable to delete the associated EAD from Arclight Solr.') + self.log.error(f'Symlink {symlink_path} not found. 
Unable to delete the associated EAD from ArcLight Solr.') + + if agent_match and scope in ('creators', 'all'): + agent_type = agent_match.group('agent_type') + agent_id = agent_match.group('record_id') + self.log.info(f'Processing deleted agent ID {agent_id}...') + file_path = f'{agent_dir}/{agent_id}.xml' + agent_solr_id = f'creator_{agent_type}_{agent_id}' + self.delete_creator(file_path, agent_solr_id) if deleted_records['last_page'] == page: break page += 1 - def index(self, repo_id, xml_file_path, indent_size=0): + def index_collections(self, repo_id, xml_file_path, indent_size=0): + """Index collection XML files to Solr using traject.""" indent = ' ' * indent_size self.log.info(f'{indent}Indexing pending resources in repository ID {repo_id} to ArcLight Solr...') try: + # Get arclight traject config path + result_show = subprocess.run( + ['bundle', 'show', 'arclight'], + capture_output=True, + text=True, + cwd=self.arclight_dir + ) + arclight_path = result_show.stdout.strip() if result_show.returncode == 0 else '' + + if not arclight_path: + self.log.error(f'{indent}Could not find arclight gem path') + return + + traject_config = f'{arclight_path}/lib/arclight/traject/ead2_config.rb' + xml_files = glob.glob(xml_file_path) # Returns list of matching files + cmd = [ + 'bundle', 'exec', 'traject', + '-u', self.solr_url, + '-s', 'processing_thread_pool=8', + '-s', 'solr_writer.thread_pool=8', + '-s', f'solr_writer.batch_size={self.batch_size}', + '-s', 'solr_writer.commit_on_close=true', + '-i', 'xml', + '-c', traject_config, + ] + xml_files + + if self.traject_extra_config: + if isinstance(self.traject_extra_config, (list, tuple)): + cmd.extend(self.traject_extra_config) + else: + # Treat a string extra config as a path and pass it with -c + cmd.extend(['-c', self.traject_extra_config]) + + env = os.environ.copy() + env['REPOSITORY_ID'] = str(repo_id) result = subprocess.run( - f'REPOSITORY_ID={repo_id} bundle exec traject -u {self.solr_url} -s processing_thread_pool=8 -s solr_writer.thread_pool=8 -s solr_writer.batch_size={self.batch_size} -s solr_writer.commit_on_close=true -i xml -c $(bundle show arclight)/lib/arclight/traject/ead2_config.rb {self.traject_extra_config} {xml_file_path}', -# f'FILE={xml_file_path} SOLR_URL={self.solr_url} REPOSITORY_ID={repo_id} TRAJECT_SETTINGS="processing_thread_pool=8 solr_writer.thread_pool=8 solr_writer.batch_size=1000 solr_writer.commit_on_close=false" bundle exec rake arcuit:index', - shell=True, + cmd, cwd=self.arclight_dir, - stderr=subprocess.PIPE,) - self.log.error(f'{indent}{result.stderr.decode("utf-8")}') + env=env, + stderr=subprocess.PIPE, + ) + + if result.stderr: + self.log.error(f'{indent}{result.stderr.decode("utf-8")}') if result.returncode != 0: self.log.error(f'{indent}Failed to index pending resources in repository ID {repo_id} to ArcLight Solr. Return code: {result.returncode}') else: @@ -627,6 +693,420 @@ def get_creator_bioghist(self, resource, indent_size=0): return '\n'.join(bioghist_elements) return None + def _get_target_agent_criteria(self, modified_since=0): + """ + Defines the Solr query criteria for "target" agents. + These are agents we want to process. 
+ """ + criteria = [ + "linked_agent_roles:creator", + "system_generated:false", + "is_user:false", +# "is_repo_agent:false", + ] + + # Add time filter if applicable + if modified_since > 0 and not self.force_update: + mtime_utc = datetime.fromtimestamp(modified_since, tz=timezone.utc).strftime('%Y-%m-%dT%H:%M:%SZ') + criteria.append(f"system_mtime:[{mtime_utc} TO *]") + + return criteria + + def _get_nontarget_agent_criteria(self, modified_since=0): + """ + Defines the Solr query criteria for "non-target" (excluded) agents. + This is the logical inverse of the target criteria. + """ + # The core logic for what makes an agent a "target" + target_logic = " AND ".join([ + "linked_agent_roles:creator", + "system_generated:false", + "is_user:false", +# "is_repo_agent:false", + ]) + + # We find non-targets by negating the entire block of target logic + criteria = [f"NOT ({target_logic})"] + + # We still apply the time filter to the overall query + if modified_since > 0 and not self.force_update: + mtime_utc = datetime.fromtimestamp(modified_since, tz=timezone.utc).strftime('%Y-%m-%dT%H:%M:%SZ') + criteria.append(f"system_mtime:[{mtime_utc} TO *]") + + return criteria + + def _execute_solr_query(self, query_parts, solr_url=None, fields=['id'], indent_size=0): + """ + A generic function to execute a query against the Solr index. + + Args: + query_parts (list): A list of strings that will be joined with " AND ". + fields (list): A list of Solr fields to return in the response. + + Returns: + list: A list of dictionaries, where each dictionary contains the requested fields. + Returns an empty list on failure. + """ + indent = ' ' * indent_size + if not query_parts: + self.log.error("Cannot execute Solr query with empty criteria.") + return [] + + if not solr_url: + solr_url = self.solr_url + + query_string = " AND ".join(query_parts) + self.log.info(f"{indent}Executing Solr query: {query_string}") + + try: + # First, get the total count of matching documents + count_params = {'q': query_string, 'rows': 0, 'wt': 'json'} + count_response = requests.get(f'{solr_url}/select', params=count_params) + self.log.info(f" [Solr Count Request]: {count_response.request.url}") + + count_response.raise_for_status() + num_found = count_response.json()['response']['numFound'] + + if num_found == 0: + return [] # No need to query again if nothing was found + + # Now, fetch the actual data for the documents + data_params = { + 'q': query_string, + 'rows': num_found, # Use the exact count to fetch all results + 'fl': ','.join(fields), # Join field list into a comma-separated string + 'wt': 'json' + } + response = requests.get(f'{solr_url}/select', params=data_params) + response.raise_for_status() + # Log the exact URL for the data request + self.log.info(f" [Solr Data Request]: {response.request.url}") + + return response.json()['response']['docs'] + + except requests.exceptions.RequestException as e: + self.log.error(f"Failed to execute Solr query: {e}") + self.log.error(f" Failed query string: {query_string}") + return [] + + def get_all_agents(self, agent_types=None, modified_since=0, indent_size=0): + """ + Fetch target agent URIs from the Solr index and log non-target agents. 
+ """ + if agent_types is None: + agent_types = ['agent_person', 'agent_corporate_entity', 'agent_family'] + + if self.force_update: + modified_since = 0 + indent = ' ' * indent_size + self.log.info(f'{indent}Fetching agent data from Solr...') + + # Base criteria for all queries in this function + base_criteria = [f"primary_type:({' OR '.join(agent_types)})"] + + # Get and log the non-target agents + nontarget_criteria = base_criteria + self._get_nontarget_agent_criteria(modified_since) + excluded_docs = self._execute_solr_query(nontarget_criteria,self.aspace_solr_url, fields=['id']) + if excluded_docs: + excluded_ids = [doc['id'] for doc in excluded_docs] + self.log.info(f"{indent} Found {len(excluded_ids)} non-target (excluded) agents.") + # Optional: Log the actual IDs if the list isn't too long + # for agent_id in excluded_ids: + # self.log.debug(f"{indent} - Excluded: {agent_id}") + + # Get and return the target agents + target_criteria = base_criteria + self._get_target_agent_criteria(modified_since) + self.log.info('Target Criteria:') + target_docs = self._execute_solr_query(target_criteria, self.aspace_solr_url, fields=['id']) + + target_agents = [doc['id'] for doc in target_docs] + self.log.info(f"{indent} Found {len(target_agents)} target agents to process.") + + return target_agents + + def task_agent(self, agent_uri, agents_dir, repo_id=1, indent_size=0): + """ + Process a single agent and generate a creator document in EAC-CPF XML format. + Retrieves EAC-CPF directly from ArchivesSpace archival_contexts endpoint. + + Args: + agent_uri: Agent URI from ArchivesSpace (e.g., '/agents/corporate_entities/123') + agents_dir: Directory to save agent XML files + repo_id: Repository ID to use for archival_contexts endpoint (default: 1) + indent_size: Indentation size for logging + + Returns: + str: Creator document ID if successful, None otherwise + """ + indent = ' ' * indent_size + + try: + # Parse agent URI to extract type and ID + # URI format: /agents/{agent_type}/{id} + parts = agent_uri.strip('/').split('/') + if len(parts) != 3 or parts[0] != 'agents': + self.log.error(f'{indent}Invalid agent URI format: {agent_uri}') + return None + + agent_type = parts[1] # e.g., 'corporate_entities', 'people', 'families' + agent_id = parts[2] + + # Construct EAC-CPF endpoint + # Format: /repositories/{repo_id}/archival_contexts/{agent_type}/{id}.xml + eac_cpf_endpoint = f'/repositories/{repo_id}/archival_contexts/{agent_type}/{agent_id}.xml' + + self.log.debug(f'{indent}Fetching EAC-CPF from: {eac_cpf_endpoint}') + + # Fetch EAC-CPF XML + response = self.client.get(eac_cpf_endpoint) + + if response.status_code != 200: + self.log.error(f'{indent}Failed to fetch EAC-CPF for {agent_uri}: HTTP {response.status_code}') + return None + + eac_cpf_xml = response.text + + # Parse the EAC-CPF XML to validate and inspect its structure + try: + root = ET.fromstring(eac_cpf_xml) + self.log.debug(f'{indent}Parsed EAC-CPF XML root element: {root.tag}') + except ET.ParseError as e: + self.log.error(f'{indent}Failed to parse EAC-CPF XML for {agent_uri}: {e}') + return None + + # Generate creator ID + creator_id = f'creator_{agent_type}_{agent_id}' + + # Save EAC-CPF XML to file + filename = f'{agents_dir}/{creator_id}.xml' + os.makedirs(agents_dir, exist_ok=True) + with open(filename, 'w', encoding='utf-8') as f: + f.write(eac_cpf_xml) + + self.log.info(f'{indent}Created creator document: {creator_id}') + return creator_id + + except Exception as e: + self.log.error(f'{indent}Error processing agent 
{agent_uri}: {e}') + import traceback + self.log.error(f'{indent}{traceback.format_exc()}') + return None + + def process_creators(self): + """ + Process creator agents and generate standalone creator documents. + + Returns: + list: List of created creator document IDs + """ + + xml_dir = f'{self.arclight_dir}/public/xml' + agents_dir = f'{xml_dir}/agents' + modified_since = int(self.last_updated_creators.timestamp()) + indent_size = 0 + indent = ' ' * indent_size + + self.log.info(f'{indent}Processing creator agents...') + + # Get agents to process + agents = self.get_all_agents(modified_since=modified_since, indent_size=indent_size) + + # Process agents in parallel + with Pool(processes=10) as pool: + results_agents = [pool.apply_async( + self.task_agent, + args=(agent_uri_item, agents_dir, 1, indent_size)) # Use repo_id=1 + for agent_uri_item in agents] + + creator_ids = [r.get() for r in results_agents] + creator_ids = [cid for cid in creator_ids if cid is not None] + + self.log.info(f'{indent}Created {len(creator_ids)} creator documents.') + + # NOTE: Collection links are NOT added to creator XML files. + # Instead, linking is handled via Solr using the persistent_id field: + # - Creator bioghist has persistent_id as the 'id' attribute + # - Collection EADs reference creators via bioghist with persistent_id + # - Solr indexes both, allowing queries to link them + # This avoids the expensive operation of scanning all resources to build a linkage map. + + # Index creators to Solr (if not skipped) + if not self.skip_creator_indexing and creator_ids: + self.log.info(f'{indent}Indexing {len(creator_ids)} creator records to Solr...') + traject_config = self.find_traject_config() + if traject_config: + self.log.info(f'{indent}Using traject config: {traject_config}') + indexed = self.index_creators(agents_dir, creator_ids) + self.log.info(f'{indent}Creator indexing complete: {indexed}/{len(creator_ids)} indexed') + else: + self.log.warning(f'{indent}Skipping creator indexing (traject config not found)') + self.log.info(f'{indent}To index manually:') + self.log.info(f'{indent} cd {self.arclight_dir}') + self.log.info(f'{indent} bundle exec traject -u {self.solr_url} -i xml \\') + self.log.info(f'{indent} -c /path/to/arcuit-gem/traject_config_eac_cpf.rb \\') + self.log.info(f'{indent} {agents_dir}/*.xml') + elif self.skip_creator_indexing: + self.log.info(f'{indent}Skipping creator indexing (--skip-creator-indexing flag set)') + + return creator_ids + + + def find_traject_config(self): + """ + Find the traject config for creator indexing. + + Search order (follows collection records pattern): + 1. arcuit_dir if provided (most up-to-date user control) + 2. arcuit gem via bundle show (for backward compatibility) + 3. 
example_traject_config_eac_cpf.rb in arcflow (fallback when used as module without arcuit) + + Returns: + str: Path to traject config, or None if not found + """ + self.log.info('Searching for traject_config_eac_cpf.rb...') + searched_paths = [] + + # Try 1: arcuit_dir if provided (highest priority - user's explicit choice) + if self.arcuit_dir: + self.log.debug(f' Checking arcuit_dir parameter: {self.arcuit_dir}') + candidate_paths = [ + os.path.join(self.arcuit_dir, 'traject_config_eac_cpf.rb'), + os.path.join(self.arcuit_dir, 'lib', 'arcuit', 'traject', 'traject_config_eac_cpf.rb'), + ] + searched_paths.extend(candidate_paths) + for traject_config in candidate_paths: + if os.path.exists(traject_config): + self.log.info(f'✓ Using traject config from arcuit_dir: {traject_config}') + return traject_config + self.log.debug(' traject_config_eac_cpf.rb not found in arcuit_dir') + + # Try 2: bundle show arcuit (for backward compatibility when arcuit_dir not provided) + try: + result = subprocess.run( + ['bundle', 'show', 'arcuit'], + cwd=self.arclight_dir, + capture_output=True, + text=True, + timeout=10 + ) + if result.returncode == 0: + arcuit_path = result.stdout.strip() + self.log.debug(f' Found arcuit gem at: {arcuit_path}') + candidate_paths = [ + os.path.join(arcuit_path, 'traject_config_eac_cpf.rb'), + os.path.join(arcuit_path, 'lib', 'arcuit', 'traject', 'traject_config_eac_cpf.rb'), + ] + searched_paths.extend(candidate_paths) + for traject_config in candidate_paths: + if os.path.exists(traject_config): + self.log.info(f'✓ Using traject config from arcuit gem: {traject_config}') + return traject_config + self.log.debug( + ' traject_config_eac_cpf.rb not found in arcuit gem ' + '(checked root and lib/arcuit/traject/ subdirectory)' + ) + else: + self.log.debug(' arcuit gem not found via bundle show') + except Exception as e: + self.log.debug(f' Error checking for arcuit gem: {e}') + + # Try 3: example file in arcflow package (fallback for module usage without arcuit) + # We know exactly where this file is located - at the repo root + arcflow_package_dir = os.path.dirname(os.path.abspath(__file__)) + arcflow_repo_root = os.path.dirname(arcflow_package_dir) + traject_config = os.path.join(arcflow_repo_root, 'example_traject_config_eac_cpf.rb') + searched_paths.append(traject_config) + + if os.path.exists(traject_config): + self.log.info(f'✓ Using example traject config from arcflow: {traject_config}') + self.log.info( + ' Note: Using example config. For production, copy this file to your ' + 'arcuit gem or specify location with --arcuit-dir.' + ) + return traject_config + + # No config found anywhere - show all paths searched + self.log.error('✗ Could not find traject_config_eac_cpf.rb in any of these locations:') + for i, path in enumerate(searched_paths, 1): + self.log.error(f' {i}. {path}') + self.log.error('') + self.log.error(' Add traject_config_eac_cpf.rb to your arcuit gem or specify with --arcuit-dir.') + return None + + + def index_creators(self, agents_dir, creator_ids, batch_size=100): + """ + Index creator XML files to Solr using traject. 
+ + Args: + agents_dir: Directory containing creator XML files + creator_ids: List of creator IDs to index + batch_size: Number of files to index per traject call (default: 100) + + Returns: + int: Number of successfully indexed creators + """ + traject_config = self.find_traject_config() + if not traject_config: + return 0 + + indexed_count = 0 + failed_count = 0 + + # Process in batches to avoid command line length limits + total_batches = math.ceil(len(creator_ids) / batch_size) + for i in range(0, len(creator_ids), batch_size): + batch = creator_ids[i:i+batch_size] + batch_num = (i // batch_size) + 1 + + # Build list of XML files for this batch + xml_files = [f'{agents_dir}/{cid}.xml' for cid in batch] + + # Filter to only existing files + existing_files = [f for f in xml_files if os.path.exists(f)] + + if not existing_files: + self.log.warning(f' Batch {batch_num}/{total_batches}: No files found, skipping') + continue + + try: + cmd = [ + 'bundle', 'exec', 'traject', + '-u', self.solr_url, + '-i', 'xml', + '-c', traject_config + ] + existing_files + + self.log.info(f' Indexing batch {batch_num}/{total_batches}: {len(existing_files)} files') + + result = subprocess.run( + cmd, + cwd=self.arclight_dir, + stderr=subprocess.PIPE, + timeout=300 # 5 minute timeout per batch + ) + + if result.returncode == 0: + indexed_count += len(existing_files) + self.log.info(f' Successfully indexed {len(existing_files)} creators') + else: + failed_count += len(existing_files) + self.log.error(f' Traject failed with exit code {result.returncode}') + if result.stderr: + self.log.error(f' STDERR: {result.stderr.decode("utf-8")}') + + except subprocess.TimeoutExpired: + self.log.error(f' Traject timed out for batch {batch_num}/{total_batches}') + failed_count += len(existing_files) + except Exception as e: + self.log.error(f' Error indexing batch {batch_num}/{total_batches}: {e}') + failed_count += len(existing_files) + + if failed_count > 0: + self.log.warning(f'Creator indexing completed with errors: {indexed_count} succeeded, {failed_count} failed') + + return indexed_count + def get_repo_id(self, repo): """ @@ -704,64 +1184,201 @@ def create_symlink(self, target_path, symlink_path, indent_size=0): self.log.info(f'{indent}{e}') return False - - def delete_ead(self, resource_id, ead_id, - xml_file_path, pdf_file_path, indent_size=0): + def delete_arclight_solr_record(self, solr_record_id, indent_size=0): indent = ' ' * indent_size - # delete from solr + try: response = requests.post( f'{self.solr_url}/update?commit=true', - json={'delete': {'id': ead_id}}, + json={'delete': {'id': solr_record_id}}, ) if response.status_code == 200: - self.log.info(f'{indent}Deleted EAD "{ead_id}" from ArcLight Solr.') - # delete related files after suscessful deletion from solr - for file_path in (xml_file_path, pdf_file_path): - try: - os.remove(file_path) - self.log.info(f'{indent}Deleted file {file_path}.') - except FileNotFoundError: - self.log.error(f'{indent}File {file_path} not found.') - - # delete symlink if exists - symlink_path = f'{os.path.dirname(xml_file_path)}/{resource_id}.xml' - try: - os.remove(symlink_path) - self.log.info(f'{indent}Deleted symlink {symlink_path}.') - except FileNotFoundError: - self.log.info(f'{indent}Symlink {symlink_path} not found.') + self.log.info(f'{indent}Deleted Solr record {solr_record_id}. from ArcLight Solr') + return True else: - self.log.error(f'{indent}Failed to delete EAD "{ead_id}" from Arclight Solr. 
Status code: {response.status_code}') + self.log.error( + f'{indent}Failed to delete Solr record {solr_record_id} from Arclight Solr. Status code: {response.status_code}') + return False except requests.exceptions.RequestException as e: - self.log.error(f'{indent}Error deleting EAD "{ead_id}" from ArcLight Solr: {e}') + self.log.error(f'{indent}Error deleting Solr record {solr_record_id} from ArcLight Solr: {e}') + + def delete_file(self, file_path, indent_size=0): + indent = ' ' * indent_size + + try: + os.remove(file_path) + self.log.info(f'{indent}Deleted file {file_path}.') + except FileNotFoundError: + self.log.error(f'{indent}File {file_path} not found.') + + def delete_ead(self, resource_id, ead_id, + xml_file_path, pdf_file_path, indent_size=0): + # delete from solr + deleted_solr_record = self.delete_arclight_solr_record(ead_id, indent_size=indent_size) + if deleted_solr_record: + self.delete_file(pdf_file_path, indent_size=indent_size) + self.delete_file(xml_file_path, indent_size=indent_size) + # delete symlink if exists + symlink_path = f'{os.path.dirname(xml_file_path)}/{resource_id}.xml' + self.delete_file(symlink_path, indent_size=indent_size) + + def delete_creator(self, file_path, solr_id, indent_size=0): + deleted_solr_record = self.delete_arclight_solr_record(solr_id, indent_size=indent_size) + if deleted_solr_record: + self.delete_file(file_path, indent_size=indent_size) + def save_config_file(self): """ - Save the last updated timestamp to the .arcflow.yml file. + Save the last updated timestamps to the .arcflow.yml file. + Each record type (collections, creators) has its own timestamp so they + can be run independently without overwriting each other's state. """ try: + # Preserve timestamps for record types not processed in this run + try: + with open(self.arcflow_file_path, 'r') as file: + config = yaml.safe_load(file) or {} + except FileNotFoundError: + config = {} + config.pop('last_updated', None) # remove legacy single key if present + now = datetime.fromtimestamp(self.start_time, timezone.utc).strftime('%Y-%m-%dT%H:%M:%S%z') + if not self.agents_only: + config['last_updated_collections'] = now + if not self.collections_only: + config['last_updated_creators'] = now with open(self.arcflow_file_path, 'w') as file: - yaml.dump({ - 'last_updated': datetime.fromtimestamp(self.start_time, timezone.utc).strftime('%Y-%m-%dT%H:%M:%S%z') - }, file) + yaml.dump(config, file) self.log.info(f'Saved file .arcflow.yml.') except Exception as e: self.log.error(f'Error writing to file .arcflow.yml: {e}') + def run_collections(self): + """ + Teardown (if force_update or first run), set up directories, and + process collection EADs. + """ + xml_dir = f'{self.arclight_dir}/public/xml' + resource_dir = f'{xml_dir}/resources' + pdf_dir = f'{self.arclight_dir}/public/pdf' + + if self.force_update or int(self.last_updated_collections.timestamp()) <= 0: + # Delete only collection records from Solr so that creator records + # remain intact when collections are rebuilt independently. + # Standard query parser: '*:* AND NOT is_creator:true' matches all + # documents except those flagged as creators. 
+ try: + response = requests.post( + f'{self.solr_url}/update?commit=true', + json={'delete': {'query': '*:* AND NOT is_creator:true'}}, + ) + if response.status_code == 200: + self.log.info('Deleted all collection records from ArcLight Solr.') + for dir_path, dir_name in [(resource_dir, 'XMLs'), (pdf_dir, 'PDFs')]: + try: + shutil.rmtree(dir_path) + self.log.info(f'Deleted {dir_name} directory {dir_path}.') + except Exception as e: + self.log.error(f'Error deleting {dir_name} directory "{dir_path}": {e}') + else: + self.log.error(f'Failed to delete collection records from ArcLight Solr. Status code: {response.status_code}') + except requests.exceptions.RequestException as e: + self.log.error(f'Error deleting collection records from ArcLight Solr: {e}') + + os.makedirs(resource_dir, exist_ok=True) + os.makedirs(pdf_dir, exist_ok=True) + self.process_collections() + + + def run_creators(self): + """ + Teardown (if force_update or first run), set up directories, and + process creator agents. + """ + xml_dir = f'{self.arclight_dir}/public/xml' + agents_dir = f'{xml_dir}/agents' + + if self.force_update or int(self.last_updated_creators.timestamp()) <= 0: + # Delete only creator records from Solr (collections are handled separately). + try: + response = requests.post( + f'{self.solr_url}/update?commit=true', + json={'delete': {'query': 'is_creator:true'}}, + ) + if response.status_code == 200: + self.log.info('Deleted all creator records from ArcLight Solr.') + try: + shutil.rmtree(agents_dir) + self.log.info(f'Deleted agents directory {agents_dir}.') + except Exception as e: + self.log.error(f'Error deleting agents directory "{agents_dir}": {e}') + else: + self.log.error(f'Failed to delete creator records from ArcLight Solr. Status code: {response.status_code}') + except requests.exceptions.RequestException as e: + self.log.error(f'Error deleting creator records from ArcLight Solr: {e}') + + os.makedirs(agents_dir, exist_ok=True) + self.process_creators() + + + def run_all(self): + """ + Run all record-type workflows. + Updates repository metadata, then runs all record-type workflows in parallel. + This is the default execution path. When new record-type workflows + are introduced, add them here. + """ + self.update_repositories() + workflows = [self.run_collections, self.run_creators] + with concurrent.futures.ThreadPoolExecutor(max_workers=len(workflows)) as executor: + self.log.info('Running collections and creators in parallel...') + futures = [executor.submit(w) for w in workflows] + concurrent.futures.wait(futures) + exceptions = [] + for future in futures: + exc = future.exception() + if exc is not None: + self.log.error(f'Workflow failed: {exc}') + exceptions.append(exc) + if exceptions: + # Raise the first exception to signal overall failure and prevent + # downstream deleted-record processing and config timestamp updates. + raise exceptions[0] def run(self): """ Run the ArcFlow process. """ self.log.info(f'ArcFlow process started (PID: {self.pid}).') - self.update_repositories() - self.update_eads() + + if self.collections_only: + scope = 'collections' + self.run_collections() + elif self.agents_only: + scope = 'creators' + self.run_creators() + else: + scope = 'all' + self.run_all() + + # Skip deleted record processing on force_update or if all active + # timestamps indicate a first run (nothing has been indexed yet). 
+ active_timestamps = [] + if scope in ('collections', 'all'): + active_timestamps.append(int(self.last_updated_collections.timestamp())) + if scope in ('creators', 'all'): + active_timestamps.append(int(self.last_updated_creators.timestamp())) + if self.force_update or all(t <= 0 for t in active_timestamps): + self.log.info('Skipping deleted record processing.') + else: + self.process_deleted_records(scope) + self.save_config_file() self.log.info(f'ArcFlow process completed (PID: {self.pid}). Elapsed time: {time.strftime("%H:%M:%S", time.gmtime(int(time.time()) - self.start_time))}.') + def main(): parser = argparse.ArgumentParser(description='ArcFlow') parser.add_argument( @@ -779,19 +1396,48 @@ def main(): parser.add_argument( '--solr-url', required=True, - help='URL of the Solr core',) + help='URL of the ArcLight Solr core',) + parser.add_argument( + '--aspace-solr-url', + required=True, + help='URL of the ASpace Solr core',) parser.add_argument( '--traject-extra-config', default='', help='Path to extra Traject configuration file',) + parser.add_argument( + '--agents-only', + action='store_true', + help='Process only agent records, skip collections',) + parser.add_argument( + '--collections-only', + action='store_true', + help='Process only collections, skip creator processing',) + parser.add_argument( + '--arcuit-dir', + default=None, + help='Path to arcuit repository (for traject config). If not provided, will try bundle show arcuit.',) + parser.add_argument( + '--skip-creator-indexing', + action='store_true', + help='Generate creator XML files but skip Solr indexing',) args = parser.parse_args() + + # Validate mutually exclusive flags + if args.agents_only and args.collections_only: + parser.error('Cannot use both --agents-only and --collections-only') arcflow = ArcFlow( arclight_dir=args.arclight_dir, aspace_dir=args.aspace_dir, solr_url=args.solr_url, + aspace_solr_url=args.aspace_solr_url, traject_extra_config=args.traject_extra_config, - force_update=args.force_update) + force_update=args.force_update, + agents_only=args.agents_only, + collections_only=args.collections_only, + arcuit_dir=args.arcuit_dir, + skip_creator_indexing=args.skip_creator_indexing) arcflow.run() diff --git a/example_traject_config_eac_cpf.rb b/example_traject_config_eac_cpf.rb new file mode 100644 index 0000000..f17dde8 --- /dev/null +++ b/example_traject_config_eac_cpf.rb @@ -0,0 +1,272 @@ +# Traject configuration for indexing EAC-CPF creator records to Solr +# +# This config file processes EAC-CPF (Encoded Archival Context - Corporate Bodies, +# Persons, and Families) XML documents from ArchivesSpace archival_contexts endpoint. 
+# +# Usage: +# bundle exec traject -u $SOLR_URL -c example_traject_config_eac_cpf.rb /path/to/agents/*.xml +# +# For production, copy this file to your arcuit gem as traject_config_eac_cpf.rb +# +# The EAC-CPF XML documents are retrieved directly from ArchivesSpace via: +# /repositories/{repo_id}/archival_contexts/{agent_type}/{id}.xml + +require 'traject' +require 'traject_plus' +require 'traject_plus/macros' +require 'time' + +# Use TrajectPlus macros (provides extract_xpath and other helpers) +extend TrajectPlus::Macros + +# EAC-CPF namespace - used consistently throughout this config +EAC_NS = { 'eac' => 'urn:isbn:1-931666-33-4' } + +# Entity types - SINGLE SOURCE OF TRUTH +ENTITY_TYPES = ['corporate_entities', 'people', 'families'] + +# Pattern matching arcflow's creator file naming: creator_{entity_type}_{id} +CREATOR_ID_PATTERN = /^creator_(#{ENTITY_TYPES.join('|')})_\d+$/ + +settings do + provide "solr.url", ENV['SOLR_URL'] || "http://localhost:8983/solr/blacklight-core" + provide "solr_writer.commit_on_close", "true" + provide "solr_writer.thread_pool", "8" + provide "solr_writer.batch_size", "100" + provide "processing_thread_pool", "4" + + # Use NokogiriReader for XML processing + provide "reader_class_name", "Traject::NokogiriReader" +end + +# Each record from reader +each_record do |record, context| + context.clipboard[:is_creator] = true +end + +# Solr uniqueKey - extract ID from filename using arcflow's creator_{entity_type}_{id} pattern +to_field 'id' do |record, accumulator, context| + source_file = context.source_record_id || context.input_name + if source_file + id_from_filename = File.basename(source_file, '.xml') + if id_from_filename =~ CREATOR_ID_PATTERN + accumulator << id_from_filename + context.logger.info("Using filename-based ID: #{id_from_filename}") + else + context.logger.error("Filename doesn't match expected pattern 'creator_{type}_{id}': #{id_from_filename}") + context.skip!("Invalid ID format in filename") + end + else + context.logger.error("No source filename available for record") + context.skip!("Missing source filename") + end +end + +# Add is_creator boolean marker field +to_field 'is_creator' do |record, accumulator| + accumulator << true +end + +# # Record type +# to_field 'record_type' do |record, accumulator| +# accumulator << 'creator' +# end + +# Entity type (corporateBody, person, family) +to_field 'entity_type_ssi' do |record, accumulator| + entity = record.xpath('//eac:cpfDescription/eac:identity/eac:entityType', EAC_NS).first + accumulator << entity.text if entity +end + +# Title/name fields - using ArcLight dynamic field naming convention +# _tesim = text, stored, indexed, multiValued (for full-text search) +# _ssm = string, stored, multiValued (for display) +# _ssi = string, stored, indexed (for faceting/sorting) +to_field 'title_tesim' do |record, accumulator| + name = record.xpath('//eac:cpfDescription/eac:identity/eac:nameEntry/eac:part', EAC_NS) + accumulator << name.map(&:text).join(' ') if name.any? +end + +to_field 'title_ssm' do |record, accumulator| + name = record.xpath('//eac:cpfDescription/eac:identity/eac:nameEntry/eac:part', EAC_NS) + accumulator << name.map(&:text).join(' ') if name.any? +end + +to_field 'title_filing_ssi' do |record, accumulator| + name = record.xpath('//eac:cpfDescription/eac:identity/eac:nameEntry/eac:part', EAC_NS) + if name.any? 
+ text = name.map(&:text).join(' ') + # Remove leading articles and convert to lowercase for filing + accumulator << text.gsub(/^(a|an|the)\s+/i, '').downcase + end +end + +# Dates of existence - using ArcLight standard field unitdate_ssm +# (matches what ArcLight uses for collection dates) +to_field 'unitdate_ssm' do |record, accumulator| + # Try existDates element + base_path = '//eac:cpfDescription/eac:description/eac:existDates' + dates = record.xpath("#{base_path}/eac:dateRange/eac:fromDate | #{base_path}/eac:dateRange/eac:toDate | #{base_path}/eac:date", EAC_NS) + if dates.any? + from_date = record.xpath("#{base_path}/eac:dateRange/eac:fromDate", EAC_NS).first + to_date = record.xpath("#{base_path}/eac:dateRange/eac:toDate", EAC_NS).first + + if from_date || to_date + from_text = from_date ? from_date.text : '' + to_text = to_date ? to_date.text : '' + accumulator << "#{from_text}-#{to_text}".gsub(/^-|-$/, '') + else + # Single date + dates.each { |d| accumulator << d.text } + end + end +end + +# Biographical/historical note - using ArcLight conventions +# _tesim for searchable plain text +# _tesm for searchable HTML (text, stored, multiValued but not for display) +# _ssm for section heading display +to_field 'bioghist_tesim' do |record, accumulator| + # Extract text from biogHist elements for full-text search + bioghist = record.xpath('//eac:cpfDescription/eac:description/eac:biogHist//eac:p', EAC_NS) + if bioghist.any? + text = bioghist.map(&:text).join(' ') + accumulator << text + end +end + +# Biographical/historical note - HTML +to_field 'bioghist_html_tesm' do |record, accumulator| + # Extract HTML for searchable content (matches ArcLight's bioghist_html_tesm) + bioghist = record.xpath('//eac:cpfDescription/eac:description/eac:biogHist//eac:p', EAC_NS) + if bioghist.any? + # Preserve inline EAC markup inside by serializing child nodes + html = bioghist.map { |p| "

<p>#{p.inner_html}</p>

" }.join("\n") + accumulator << html + end +end + +to_field 'bioghist_heading_ssm' do |record, accumulator| + # Extract section heading (matches ArcLight's bioghist_heading_ssm pattern) + heading = record.xpath('//eac:cpfDescription/eac:description/eac:biogHist//eac:head', EAC_NS).first + accumulator << heading.text if heading +end + +# Full-text search field +to_field 'text' do |record, accumulator| + # Title + name = record.xpath('//eac:cpfDescription/eac:identity/eac:nameEntry/eac:part', EAC_NS) + accumulator << name.map(&:text).join(' ') if name.any? + + # Bioghist + bioghist = record.xpath('//eac:cpfDescription/eac:description/eac:biogHist//eac:p', EAC_NS) + accumulator << bioghist.map(&:text).join(' ') if bioghist.any? +end + +# Related agents (from cpfRelation elements) for display parsing and debugging, stored as a single line +# "https://archivesspace-stage.library.illinois.edu/agents/corporate_entities/57|associative" +to_field 'related_agents_debug_ssim' do |record, accumulator| + relations = record.xpath('//eac:cpfDescription/eac:relations/eac:cpfRelation', EAC_NS) + relations.each do |rel| + href = rel['href'] || rel['xlink:href'] + relation_type = rel['cpfRelationType'] + + if href + solr_id = aspace_uri_to_solr_id(href) + if solr_id + # Format: "solr_id|type" + accumulator << "#{solr_id}|#{relation_type || 'unknown'}" + end + end + end +end + +# Related agents - ASpace URIs, in parallel array to match ids and types +to_field 'related_agent_uris_ssim' do |record, accumulator| + relations = record.xpath('//eac:cpfDescription/eac:relations/eac:cpfRelation', EAC_NS) + relations.each do |rel| + href = rel['href'] || rel['xlink:href'] + accumulator << href if href + end +end + +# Related agents - Parallel array of relationship ids to match relationship types and uris +to_field 'related_agent_ids_ssim' do |record, accumulator| + relations = record.xpath('//eac:cpfDescription/eac:relations/eac:cpfRelation', EAC_NS) + relations.each do |rel| + href = rel['href'] || rel['xlink:href'] + if href + solr_id = aspace_uri_to_solr_id(href) # CONVERT URI TO ID + accumulator << solr_id if solr_id + end + end +end + +# Related Agents - Parallel array of relationship types to match relationship ids and uris +to_field 'related_agent_relationship_types_ssim' do |record, accumulator| + relations = record.xpath('//eac:cpfDescription/eac:relations/eac:cpfRelation', EAC_NS) + relations.each do |rel| + href = rel['href'] || rel['xlink:href'] + if href + relation_type = rel['cpfRelationType'] || 'unknown' + accumulator << relation_type # NO deduplication - keeps array parallel + end + end +end + +# Relationship types used for faceting, +to_field 'relationship_types_ssim' do |record, accumulator| + relations = record.xpath('//eac:cpfDescription/eac:relations/eac:cpfRelation', EAC_NS) + relations.each do |rel| + relation_type = rel['cpfRelationType'] + accumulator << relation_type if relation_type && !accumulator.include?(relation_type) + end +end + +# Agent source URI (from original ArchivesSpace) +to_field 'agent_uri_ssi' do |record, accumulator| + # Try to extract from control section or otherRecordId + other_id = record.xpath('//eac:control/eac:otherRecordId[@localType="archivesspace_uri"]', EAC_NS).first + if other_id + accumulator << other_id.text + end +end + +# Timestamp +to_field 'timestamp' do |record, accumulator| + accumulator << Time.now.utc.iso8601 +end + +# # Document type marker +# to_field 'document_type' do |record, accumulator| +# accumulator << 'creator' +# end + +# Log successful 
indexing +each_record do |record, context| + record_id = record.xpath('//eac:control/eac:recordId', EAC_NS).first + if record_id + context.logger.info("Indexed creator: #{record_id.text}") + end +end + + + + +# Helper to build and validate creator IDs +def build_creator_id(entity_type, id_number) + creator_id = "creator_#{entity_type}_#{id_number}" + unless creator_id =~ CREATOR_ID_PATTERN + raise ArgumentError, "Invalid creator ID: #{creator_id} doesn't match pattern" + end + creator_id +end + +# Helper to convert ArchivesSpace URI to Solr creator ID +def aspace_uri_to_solr_id(uri) + return nil unless uri + # Match: /agents/{type}/{id} or https://.../agents/{type}/{id} + if uri =~ /agents\/(#{ENTITY_TYPES.join('|')})\/(\d+)/ + build_creator_id($1, $2) + end +end \ No newline at end of file