diff --git a/docs/commons/fair/tutorials/create-metric.rst b/docs/commons/fair/tutorials/create-metric.rst
deleted file mode 100644
index eed5ce5..0000000
--- a/docs/commons/fair/tutorials/create-metric.rst
+++ /dev/null
@@ -1,2 +0,0 @@
-How to create a metric
-======================
diff --git a/docs/commons/fair/tutorials/create-benchmark.rst b/docs/commons/fair/tutorials/define-benchmark-associated-metrics.rst
similarity index 100%
rename from docs/commons/fair/tutorials/create-benchmark.rst
rename to docs/commons/fair/tutorials/define-benchmark-associated-metrics.rst
diff --git a/docs/commons/fair/tutorials/find-metrics-and-benchmarks.rst b/docs/commons/fair/tutorials/find-metrics-and-benchmarks.rst
new file mode 100644
index 0000000..6f957af
--- /dev/null
+++ b/docs/commons/fair/tutorials/find-metrics-and-benchmarks.rst
@@ -0,0 +1,243 @@
+.. _find-metrics-and-benchmarks:
+
+Finding and Reusing Metrics and Benchmarks
+==========================================
+
+This tutorial explains how to discover FAIR metrics and benchmarks using
+two complementary routes: the FAIRassist registry and FAIRsharing's own
+search and browse features. Once you have found a metric of interest,
+`find-test-for-digital-object.rst <find-test-for-digital-object.rst>`_
+explains how to locate any tests that implement it.
+
+.. contents:: Contents
+   :local:
+   :depth: 2
+
+
+What are metrics and benchmarks?
+--------------------------------
+
+FAIRsharing registers the conceptual components of the FAIR assessment
+ecosystem within its FAIRassist registry. This includes:
+
+- **Principles** — subject- and implementation-agnostic criteria or
+  high-level goals that may be refined for particular communities.
+  Principles are stored within FAIRsharing's Standards registry
+  (e.g. the `FAIR Principles `_).
+- **Metrics** — measurable criteria that interpret a Principle,
+  specifying what must be true of a digital object for that Principle to
+  be satisfied.
+- **Benchmarks** — curated sets of Metrics that together define what
+  "FAIR" means in practice for a particular community, tool, or use case.
+
+For more background on the FAIRassist registry and how it aligns with
+the FAIR Testing Resource (FTR) vocabulary, see the
+`FAIRsharing documentation on registry types `_.
+
+
+A note on object types
+----------------------
+
+Both sections of this tutorial describe a number of ways to filter metrics
+and benchmarks, including by the type of digital object they are designed
+to assess. Before using this filter, it is worth reviewing the full
+controlled vocabulary of object types in FAIRsharing, available at
+`https://fairsharing.gitbook.io/fairsharing/record-sections-and-fields/general-information/object-types
+<https://fairsharing.gitbook.io/fairsharing/record-sections-and-fields/general-information/object-types>`_.
+
+The controlled vocabulary includes specific types such as dataset,
+software application, model, terminology artifact, protocol or workflow,
+and several others. Two general-purpose types are also available:
+
+- *object type agnostic* — use this when a metric or benchmark is
+  relevant across all types of digital object.
+- *other object type* — use this only when the object type is not
+  covered by any of the other terms in the vocabulary and the resource
+  is not agnostic.
+
+.. note::
+
+   The term *object type not found* appears on a small number of older,
+   deprecated records where the original object type could not be
+   determined during curation. This is an administrative term and is
+   not relevant when searching for metrics or benchmarks; you can
+   safely ignore it.
+
+Take time to review the full vocabulary before selecting a type to
+filter by, as the most appropriate choice may not always be immediately
+obvious. For example, a metric that is relevant to all digital objects
+should be found using *object type agnostic* rather than by listing
+individual types.
+
+
+Section 1: Metric discovery via FAIRassist
+------------------------------------------
+
+The `FAIRassist registry <https://fairassist.org/>`_ is the primary
+entry point for discovering metrics and benchmarks. Although FAIRassist
+is part of FAIRsharing, its interface is purpose-built for exploring
+these record types, and presents search results in a tabular format that
+gives you an overview of the full ecosystem of resources related to a
+given set of metrics or benchmarks.
+
+FAIRassist is particularly useful for answering questions such as:
+
+- What assessments already exist for my subject area or for a particular
+  tool?
+- Which standards and databases are referenced by assessments in my
+  domain?
+- What benchmarks has a given organisation defined or contributed to?
+- Which metrics apply to a particular type of digital object?
+
+This presentation provides guidance and context around FAIR benchmarks,
+together with transparent definitions of metrics, to support their
+correct reuse.
+
+
+Using the filters
+~~~~~~~~~~~~~~~~~
+
+The FAIRassist registry provides five filters that can be used
+individually or in combination.
+
+
+Record type
+^^^^^^^^^^^
+
+Select whether you want to retrieve Metrics or Benchmarks (or both).
+This is the broadest filter; set it first, as it determines which of
+the later filters are applicable.
+
+
+Object type
+^^^^^^^^^^^
+
+Metrics can be filtered by the type of digital object they are designed
+to assess. Before using this filter, review the full controlled
+vocabulary of object types described in the `note on object types
+<#a-note-on-object-types>`_ above and in the `FAIRsharing object types
+documentation
+<https://fairsharing.gitbook.io/fairsharing/record-sections-and-fields/general-information/object-types>`_,
+to ensure you select the term that most accurately reflects your needs.
+This may include *object type agnostic* for metrics that apply across
+all digital object types, or *other object type* for those covering
+object types not represented elsewhere in the vocabulary.
+
+
+Tool
+^^^^
+
+Metrics can be filtered by the assessment tool that uses them.
+Currently available tool options include FOOPS! and FAIR Champion. This
+filter is useful when you are working with a specific tool and want to
+understand which metrics that tool draws on, or when you want to find
+metrics that are already implemented in a tool you trust.
+
+
+Subject
+^^^^^^^
+
+Both metrics and benchmarks can be filtered by subject area. The
+subjects available reflect the FAIRsharing subject hierarchy, which you
+can browse at `https://fairsharing.org/browse/subject
+<https://fairsharing.org/browse/subject>`_. Use this filter to narrow
+your results to metrics and benchmarks that are relevant to your
+research domain.
+
+
+Organisation
+^^^^^^^^^^^^
+
+Both metrics and benchmarks can be filtered by the organisation
+associated with them. This is useful when you want to find all metrics
+or benchmarks developed or maintained by a particular institution,
+project, or working group.
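+
+These filters correspond to parameters in the registry's query string,
+so a frequently used combination can be kept in a small script. The
+sketch below is illustrative only: the parameter names (``principle``,
+``organisations``, ``recordType``) and the ``search=(...)`` wrapper are
+copied from the CESSDA worked example in the next section, and the
+exact encoding FAIRassist expects may differ.
+
+.. code-block:: python
+
+   from urllib.parse import quote_plus
+
+   BASE = "https://fairassist.org/registry"
+
+   def fairassist_url(**filters):
+       # Join key=value pairs with an encoded ampersand (%26), as in the
+       # worked-example URL, and encode spaces as "+".
+       inner = "%26".join(f"{key}={quote_plus(value)}"
+                          for key, value in filters.items())
+       return f"{BASE}?search=({inner})"
+
+   # Reproduces the worked-example query shown in the next section.
+   print(fairassist_url(principle="The FAIR Principles",
+                        organisations="cessda",
+                        recordType="metric_ids"))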
+
+
+A worked example: finding metrics linked to CESSDA
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To illustrate how these filters work together, the following URL
+retrieves all metrics associated with the CESSDA organisation and
+linked to the FAIR Principles:
+
+`https://fairassist.org/registry?search=(principle=The+FAIR+Principles%26organisations=cessda%26recordType=metric_ids)
+<https://fairassist.org/registry?search=(principle=The+FAIR+Principles%26organisations=cessda%26recordType=metric_ids)>`_
+
+The results are displayed in tabular sections, one per record type,
+allowing you to quickly scan the set of metrics that CESSDA has
+contributed to or is associated with, alongside contextual information
+about each one. From any row in the results you can navigate directly
+to the full FAIRsharing record for that metric.
+
+
+Section 2: Metric discovery via FAIRsharing
+-------------------------------------------
+
+FAIRsharing's own search and browse tools can also be used to find
+metrics and benchmarks. This route is particularly useful when you want
+to combine FAIRassist record types with other FAIRsharing filters, such
+as status or object type, or when you want to use the advanced search to
+build a precise query. Full documentation on FAIRsharing's search
+features is available at
+`https://fairsharing.gitbook.io/fairsharing/how-to/searching-and-browsing
+<https://fairsharing.gitbook.io/fairsharing/how-to/searching-and-browsing>`_.
+
+
+Simple search
+~~~~~~~~~~~~~
+
+The FAIRsharing simple search bar searches across all record types,
+including metrics and benchmarks. Entering a keyword such as a FAIR
+principle identifier, a domain term, or the name of a metric will return
+all matching records. You can then use the faceted filters on the left
+of the results page to narrow results to the FAIRassist registry, and
+refine further by object type or any of the many other facets available.
+
+
+Advanced search
+~~~~~~~~~~~~~~~
+
+The advanced search gives you precise control over which record types,
+statuses, object types, and fields are queried. For example, one or more
+object types can be selected as part of the advanced search query
+construction, allowing you to combine an object type filter with other
+criteria such as record type, status, and associated tests. Before
+selecting an object type, review the full controlled vocabulary
+described in the `note on object types <#a-note-on-object-types>`_
+above, to ensure you are using the most appropriate term.
+
+The following examples show how to retrieve specific subsets of
+FAIRassist records.
+
+To retrieve all metrics with a status of Ready or In Development:
+
+`https://fairsharing.org/advancedsearch?operator=_and&fields=%28operator%3D_and%26fairassisttype%3Dmetric%26status%3Dready%2Bin_development%29
+<https://fairsharing.org/advancedsearch?operator=_and&fields=%28operator%3D_and%26fairassisttype%3Dmetric%26status%3Dready%2Bin_development%29>`_
+
+To retrieve all benchmarks with a status of Ready or In Development:
+
+`https://fairsharing.org/advancedsearch?operator=_and&fields=%28operator%3D_and%26fairassisttype%3Dbenchmark%26status%3Dready%2Bin_development%29
+<https://fairsharing.org/advancedsearch?operator=_and&fields=%28operator%3D_and%26fairassisttype%3Dbenchmark%26status%3Dready%2Bin_development%29>`_
+
+To retrieve all Ready or In Development metrics that have at least one
+associated test — a useful starting point if you are looking for metrics
+that are already implemented and ready for use:
+
+`https://fairsharing.org/advancedsearch?operator=_and&fields=%28operator%3D_and%26fairassisttype%3Dmetric%26status%3Dready%2Bin_development%26associatedTests%3Dtrue%29
+<https://fairsharing.org/advancedsearch?operator=_and&fields=%28operator%3D_and%26fairassisttype%3Dmetric%26status%3Dready%2Bin_development%26associatedTests%3Dtrue%29>`_
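+
+If you reuse these queries regularly, you can keep them in a small
+script and open one on demand. A minimal sketch, using only the Python
+standard library and the three URLs exactly as quoted above (the
+dictionary keys are hypothetical labels):
+
+.. code-block:: python
+
+   import webbrowser
+
+   # Saved advanced-search queries, copied verbatim from this tutorial.
+   QUERIES = {
+       "metrics": ("https://fairsharing.org/advancedsearch?operator=_and"
+                   "&fields=%28operator%3D_and%26fairassisttype%3Dmetric"
+                   "%26status%3Dready%2Bin_development%29"),
+       "benchmarks": ("https://fairsharing.org/advancedsearch?operator=_and"
+                      "&fields=%28operator%3D_and%26fairassisttype%3Dbenchmark"
+                      "%26status%3Dready%2Bin_development%29"),
+       "metrics_with_tests": ("https://fairsharing.org/advancedsearch?operator=_and"
+                              "&fields=%28operator%3D_and%26fairassisttype%3Dmetric"
+                              "%26status%3Dready%2Bin_development"
+                              "%26associatedTests%3Dtrue%29"),
+   }
+
+   # Open the "metrics with at least one associated test" query in the
+   # default web browser.
+   webbrowser.open(QUERIES["metrics_with_tests"])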
+
+
+Browse by subject
+~~~~~~~~~~~~~~~~~
+
+If you prefer to explore visually, the FAIRsharing subject browser at
+`https://fairsharing.org/browse/subject
+<https://fairsharing.org/browse/subject>`_ allows you to navigate a
+hierarchical sunburst diagram of subject areas. Selecting a subject will
+show you all FAIRsharing records tagged with that subject or any of its
+child terms, across all registries including FAIRassist.
+
+
+Next steps
+----------
+
+Once you have identified a metric of interest, you may want to find any
+tests that implement it. See
+`find-test-for-digital-object.rst <find-test-for-digital-object.rst>`_
+for a step-by-step guide.
diff --git a/docs/commons/fair/tutorials/find-test-for-digital-object.rst b/docs/commons/fair/tutorials/find-test-for-digital-object.rst
index 3e3aa24..d0e5835 100644
--- a/docs/commons/fair/tutorials/find-test-for-digital-object.rst
+++ b/docs/commons/fair/tutorials/find-test-for-digital-object.rst
@@ -1,2 +1,256 @@
-How find a test for my digital object
-======================================
+.. _find-test-for-digital-object:
+
+=========================================
+How to find a Test for my digital object
+=========================================
+
+This tutorial covers how to find a Test using FAIRsharing (a Test linked
+to a Metric), and how to find Tests using FAIR Champion.
+
+
+FAIRsharing: Finding a Test Linked to a Metric
+==============================================
+
+This section explains how to navigate to a metric record in FAIRsharing
+and locate any tests that implement it. If you have not yet found a
+metric of interest, see
+`find-metrics-and-benchmarks.rst <find-metrics-and-benchmarks.rst>`_ first.
+
+.. contents:: Contents
+   :local:
+   :depth: 2
+
+
+Background: how tests relate to metrics
+---------------------------------------
+
+FAIRsharing registers the conceptual components of the FAIR assessment
+ecosystem — Principles, Metrics, and Benchmarks. Tests, which are the
+concrete implementations of Metrics, are registered separately in FAIR
+Champion at `https://tools.ostrails.eu/champion/tests/
+<https://tools.ostrails.eu/champion/tests/>`_. However, FAIRsharing
+Metric records provide a cross-reference to any tests that implement
+them via the Associated Tests field in the record's Additional
+Information section.
+
+Associated tests are cross-referenced in two ways:
+
+- At regular intervals, the FAIRsharing team automatically updates
+  the Associated Tests field by querying FAIR Champion and other
+  registered test sources.
+- Metric maintainers may also add test URLs manually at any time.
+
+.. note::
+
+   FAIRsharing stores URLs pointing to tests; it does not store the
+   tests themselves. Each URL may point to an individual test or to a
+   search URL that retrieves all tests implementing that metric. An
+   optional free-text note field is available alongside each URL to
+   provide further context.
+
+For full documentation on the Associated Tests field and the related
+Positive and Negative Examples fields, see the `FAIRsharing
+documentation on metric tests and examples `_.
+
+
+A note on object types
+----------------------
+
+Metrics in FAIRsharing are tagged with one or more object types that
+describe the kind of digital object the metric is designed to assess.
+When looking for a test linked to a metric, it is therefore helpful to
+have a clear understanding of what type of digital object you are
+working with, so that you can identify the most relevant metrics before
+looking for their associated tests.
+
+The full controlled vocabulary of object types is available at
+`https://fairsharing.gitbook.io/fairsharing/record-sections-and-fields/general-information/object-types
+<https://fairsharing.gitbook.io/fairsharing/record-sections-and-fields/general-information/object-types>`_.
+The vocabulary includes specific types such as dataset, software
+application, model, terminology artifact, and protocol or workflow,
+among others.
+Two general-purpose types are also available:
+
+- *object type agnostic* — used for metrics that apply across all
+  types of digital object.
+- *other object type* — used only when the object type is not covered
+  by any other term in the vocabulary and the resource is not agnostic.
+
+.. note::
+
+   The term *object type not found* appears on a small number of older,
+   deprecated records where the original object type could not be
+   determined during curation. This is an administrative term and is
+   not relevant when searching for metrics; you can safely ignore it.
+
+Review the full vocabulary before filtering, as the most appropriate
+choice may not always be immediately obvious. For example, a metric
+relevant to all digital objects should be found using *object type
+agnostic* rather than by listing individual types.
+
+
+Step 1: Open a metric record
+----------------------------
+
+You can arrive at a metric record in either of two ways.
+
+
+Via FAIRassist
+~~~~~~~~~~~~~~
+
+Navigate to the `FAIRassist registry <https://fairassist.org/>`_
+and use the filters to find a metric relevant to your needs, as
+described in `find-metrics-and-benchmarks.rst
+<find-metrics-and-benchmarks.rst>`_. Click on any metric in the results
+to open its full FAIRsharing record.
+
+
+Via FAIRsharing directly
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Use the FAIRsharing search or advanced search to find a metric, as
+described in `find-metrics-and-benchmarks.rst
+<find-metrics-and-benchmarks.rst>`_. Alternatively, if you already have
+the DOI or URL of a specific metric, navigate directly to that record.
+For example:
+
+- `FAIR Metric - A version IRI is declared in the ontology metadata
+  <https://doi.org/10.25504/FAIRsharing.99c9f7>`_
+  (``https://doi.org/10.25504/FAIRsharing.99c9f7``)
+- `FAIR Metric - Metadata Contains CESSDA Provenance Information
+  <https://doi.org/10.25504/FAIRsharing.4e4752>`_
+  (``https://doi.org/10.25504/FAIRsharing.4e4752``)
+
+
+Step 2: Navigate to the Additional Information section
+------------------------------------------------------
+
+Once you have a metric record open, scroll down past the General
+Information section. FAIRsharing record pages are organised into
+tabbed or sectioned areas; look for the Additional Information
+section or tab. Within it, you will find the following subsections
+specific to metric records:
+
+- Associated Tests
+- Positive Examples
+- Negative Examples
+
+.. note::
+
+   The Associated Tests, Positive Examples, and Negative Examples
+   subsections appear only on records within the FAIRassist registry
+   (i.e. Metric records). They are not present on standard database,
+   policy, or other FAIRsharing record types.
+
+
+Step 3: View the Associated Tests
+---------------------------------
+
+The Associated Tests subsection lists one or more URLs, each linking
+to a test or set of tests that implement the metric. Each entry may
+also include a free-text note providing additional context about the
+test or its scope.
+
+
+Example 1: a metric with a FOOPS! test
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The metric `FAIR Metric - A version IRI is declared in the ontology
+metadata <https://doi.org/10.25504/FAIRsharing.99c9f7>`_ includes an
+Associated Tests entry linking to a FOOPS! test. FOOPS! is an automated
+tool for assessing the FAIRness of ontologies. In this case, the test
+URL has been added directly by the metric maintainer rather than via
+automated cross-referencing.
+
+Opening the metric record and navigating to Additional Information >
+Associated Tests will show you the FOOPS! test URL, from which you can
+access the test itself and understand how it operationalises the metric.
+
+
+Example 2: a metric with a FAIR Champion test
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The metric `FAIR Metric - Metadata Contains CESSDA Provenance
+Information <https://doi.org/10.25504/FAIRsharing.4e4752>`_ includes an
+Associated Tests entry linking to a test registered in FAIR Champion at
+`https://tools.ostrails.eu/champion/tests/
+<https://tools.ostrails.eu/champion/tests/>`_.
+
+FAIR Champion is a tool within the OSTrails ecosystem that registers and
+executes FAIR tests. Opening the metric record and navigating to
+Additional Information > Associated Tests will show you the FAIR
+Champion test URL. Following that link takes you to the test record in
+FAIR Champion, where you can view the test specification and, where
+supported, execute it against a digital object.
+
+.. note::
+
+   If a metric record's Associated Tests field is empty, it does not
+   necessarily mean that no test exists for that metric — it may not
+   yet have been cross-referenced. You can search for tests directly
+   in FAIR Champion at `https://tools.ostrails.eu/champion/tests/
+   <https://tools.ostrails.eu/champion/tests/>`_, or contact the
+   metric maintainer whose details appear in the record's General
+   Information section.
+
+
+Step 4: Follow the test URL
+---------------------------
+
+Once you have identified a test URL in the Associated Tests field,
+follow it to access the test itself. For tests registered in FAIR
+Champion, the full list of available tests is at
+`https://tools.ostrails.eu/champion/tests/
+<https://tools.ostrails.eu/champion/tests/>`_.
+
+
+Finding metrics that already have tests
+---------------------------------------
+
+If you want to start from a list of metrics that are confirmed to have
+at least one associated test, the following FAIRsharing advanced search
+URL retrieves all Ready or In Development metrics with at least one
+associated test:
+
+`https://fairsharing.org/advancedsearch?operator=_and&fields=%28operator%3D_and%26fairassisttype%3Dmetric%26status%3Dready%2Bin_development%26associatedTests%3Dtrue%29
+<https://fairsharing.org/advancedsearch?operator=_and&fields=%28operator%3D_and%26fairassisttype%3Dmetric%26status%3Dready%2Bin_development%26associatedTests%3Dtrue%29>`_
+
+This is a practical starting point if you are building or evaluating a
+FAIR assessment pipeline and want to work only with metrics that have
+concrete implementations available.
+
+
+Further reading
+---------------
+
+- `find-metrics-and-benchmarks.rst <find-metrics-and-benchmarks.rst>`_ —
+  how to discover metrics and benchmarks via FAIRassist and FAIRsharing.
+- `FAIRsharing documentation on metric tests and examples `_
+- `FAIRassist registry <https://fairassist.org/>`_
+- `FAIR Champion test registry <https://tools.ostrails.eu/champion/tests/>`_
+
+
+Finding a Test in FAIR Champion
+===============================
+
+`FAIR Champion <https://tools.ostrails.eu/champion/>`_ is a framework
+designed to evaluate and assess digital objects against FAIR metrics.
+More information can be found `here `_.
+
+FAIR Champion also allows you to discover FAIR Assessment Components,
+such as Tests, that are relevant to your needs.
+
+Searching by keywords
+---------------------
+
+On the FAIR Champion `landing page
+<https://tools.ostrails.eu/champion/>`_ you will find a set of links to
+different entry points. Clicking the first option, *"List all Tests in
+OSTrails Registry"*, opens a browser interface in which you can search
+for Tests using relevant terms.
+
+You may want to look for a Test you previously identified using the
+FAIRassist tool (described in the previous section of this tutorial), or
+you can discover Tests from scratch if you do not already know their
+exact name.
+
+1. Enter keywords related to your use case.
+   The system will return a list of matching Tests based on their
+   content, such as name and description.
+
+2. Once you obtain the list of matching Tests, each result includes a
+   short description to help you understand what the Test evaluates.
+   You can use this information to quickly assess whether a Test is
+   relevant to your needs.
+   Additional information, such as the **Test ID** for the Benchmark
+   Algorithm Spreadsheet, is available in the *Additional Details*
+   drop-down. The option to run the Test is also available via the
+   *Execute Test* drop-down.
+
+Refining your search
+--------------------
+
+You can narrow down your results in two ways:
+
+A. By refining your keywords to make them more specific.
+
+B. By using FAIRassist beforehand to perform a more targeted search.
+   This is especially useful if you have already identified a Test and
+   want to locate it more quickly, while accessing its description,
+   additional details, and execution options.
diff --git a/docs/commons/fair/tutorials/register-benchmark.rst b/docs/commons/fair/tutorials/register-benchmark.rst
new file mode 100644
index 0000000..cf08109
--- /dev/null
+++ b/docs/commons/fair/tutorials/register-benchmark.rst
@@ -0,0 +1,213 @@
+
+===========================
+How to register a Benchmark
+===========================
+
+This tutorial explains how to register a **community FAIR Benchmark**
+using the OSTrails FAIR Assessment framework.
+
+A Benchmark is a community-specific grouping of Metrics that provides a
+narrative description of how that community defines FAIR for assessment
+purposes. For more information, see `the FAIR Testing Resource (FTR)
+vocabulary `_.
+
+There are two ways to register a Benchmark. The first is to use the
+`FAIR Wizard authoring tool `_, a questionnaire-based knowledge model
+designed to collect and structure metadata for FAIR Assessment
+Components, including Benchmarks. It auto-generates FTR metadata and
+registers it in FAIRsharing for you. The second is to register your
+Benchmark directly with FAIRsharing. This tutorial covers both options.
+
+FAIR Wizard
+===========
+
+.. _benchmark_prerequisites:
+
+Prerequisites
+-------------
+
+Before starting you should:
+
+* Have a defined set of **FAIR Metrics**, as well as any required
+  **community-specific specialised Metrics**. You may find the tutorial
+  at `define-benchmark-associated-metrics.rst
+  <define-benchmark-associated-metrics.rst>`_ useful.
+* Have a completed **community FAIR Benchmark narrative** definition,
+  aligned with the community or discipline to which it will apply. This
+  can be done by following the tutorial at
+  `define-benchmark-associated-metrics.rst
+  <define-benchmark-associated-metrics.rst>`_. This narrative definition
+  of FAIR will contain a description of your Benchmark.
+* Have access to the `FAIR Wizard authoring tool `_.
+
+.. _benchmark_create_project:
+
+Step 1 – Create a Benchmark project in the FAIR Wizard authoring tool
+---------------------------------------------------------------------
+
+1. Go to `the dedicated environment for this questionnaire `_.
+2. Register yourself, or log in if you already have access.
+3. Navigate to Projects and click Create to start a new project.
+4. Name your project and use the "**FAIR Assessment Authoring Tool** -
+   Questionnaire for creating FAIR Assessment Components" template as
+   Knowledge Model.
+5. Enable **Filter by question tags**.
+6. Choose **Benchmark** as the artefact type.
+
+By doing this, the tool will create a Benchmark-tailored questionnaire.
+
+.. _benchmark_fill_out_questionnaire:
+
+Step 2 – Fill in the questionnaire
+----------------------------------
+
+1. Read the instructions carefully.
+2. Work through the form sequentially, completing each section with
+   information relevant to the Benchmark you are defining.
+
+Note that some questions are *mandatory* and must be answered.
+Other questions are optional. The mandatory fields that are required to
+define a FAIR Benchmark are:
+
+- ``Title``
+
+  Indicate the title of your Benchmark. To follow OSTrails best
+  practices, consider using this Benchmark naming scheme:
+  [[Principles name]] Benchmark - [[descriptive benchmark name]]
+
+  Examples:
+
+    FAIR Benchmark - Assessment of Repositories and Knowledgebases
+    (https://fairsharing.org/7162)
+
+    FAIR4RS Benchmark - General Benchmark for RSFC
+    (https://fairsharing.org/7056)
+
+    FAIR Benchmark - CESSDA Data Catalogue (CDC)
+
+- ``Description``
+
+  Provide a description of your Benchmark.
+
+- ``Abbreviation``
+
+  Provide a single-word abbreviation for your Benchmark. Note that FAIR
+  Wizard does not allow spaces or any of these special characters in
+  Benchmark/Metric/Test name abbreviations:
+  ``: / ? # [ ] @ ! $ & ' ( ) * + , ; = " < > \ ^ {``
+  (a quick character check is sketched after this list).
+  To follow OSTrails best practices, consider using this naming scheme:
+  [[Principle abbrev]]B - [[short name for the benchmark]]
+
+  Examples:
+
+    FB - ARK
+
+    FSB - RSFC
+
+    FB - CESSDA
+
+- ``License``
+
+  Include a license URL for this Benchmark. Please do not include angle
+  brackets ("<>") in your response.
+
+- ``Version``
+
+  Indicate the version number to use for this definition of your
+  Benchmark.
+
+- ``Organisation information``
+
+  This question is mandatory for FAIRsharing submission. Your Benchmark
+  might be associated with an institution as its creator or maintainer.
+
+- ``Responsible contact person``
+
+  Provide the name and email of a responsible contact person. An
+  ORCID-integrated search lets you find your details by typing either
+  your full name or your ORCID ID directly.
+
+- ``Country``
+
+  Indicate the country or geographical scope relevant to this
+  Benchmark. If the Benchmark is not limited to a specific region, you
+  can select *'Worldwide'*.
+
+- ``Subject``
+
+  Specify the application domain or area of knowledge to which the
+  Benchmark applies. If the Benchmark is intended to be
+  domain-independent, you can select *'Subject Agnostic'*.
+
+- ``Object type``
+
+  Indicate the type of digital object that the Benchmark evaluates (for
+  example datasets, software, or workflows). If the Benchmark applies
+  broadly, you can select Object type *'Agnostic'*.
+
+- ``Taxonomy``
+
+  Classify the Benchmark within a taxonomy. If no suitable
+  classification is available or needed, you can select
+  *'Not Applicable'*.
+
+- ``Link to a Metric``
+
+  Link the Benchmark to at least one Metric. Use the
+  `has_associated_metric `_ relationship to link to every Metric
+  included in this Benchmark.
+
+- ``Other related FAIR assessment components``
+
+  This question may be optional or mandatory depending on the FAIR
+  assessment component you are authoring. For Benchmarks it is
+  mandatory, as a Benchmark must have associated Metrics.
+
+**Please complete as many of the optional sections as you can. The more
+complete your Benchmark, the more reusable and FAIR it is. Incomplete
+metadata may delay the publishing of your Benchmark in FAIRsharing.**
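+
+Because the FAIR Wizard rejects spaces and the special characters
+listed under ``Abbreviation``, it can be handy to check a candidate
+abbreviation before filling in the form. A minimal sketch in Python;
+the character set is transcribed from the guidance above, and the
+function name is hypothetical:
+
+.. code-block:: python
+
+   # Characters rejected by FAIR Wizard in abbreviations, as listed in
+   # this tutorial, plus the space character.
+   FORBIDDEN = set(' :/?#[]@!$&\'()*+,;="<>\\^{')
+
+   def wizard_abbreviation_ok(abbreviation: str) -> bool:
+       """Return True if no character of the abbreviation is forbidden."""
+       return not any(ch in FORBIDDEN for ch in abbreviation)
+
+   print(wizard_abbreviation_ok("FB-CESSDA"))    # True
+   print(wizard_abbreviation_ok("FB / CESSDA"))  # False: spaces and "/"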
+
+Step 3 – Create an instance with your answers
+---------------------------------------------
+
+Once the questionnaire has been completed:
+
+1. Go to the **Documents** section in the top menu.
+2. Name your document and select the latest version of the "*FAIR
+   Assessment Authoring Tool* - Jinja2-based template for authoring and
+   registering FAIR Assessment Components" as Document Template.
+3. Choose the "Metric / Benchmark" Format option.
+4. Click on *Create*.
+
+This will create a JSON file with your input.
+
+Step 4 – Submit your document
+-----------------------------
+
+You can now review the document containing your questionnaire answers:
+clicking on it will download the file. Once you are happy with it, you
+are ready to submit your document:
+
+1. In the **Documents** section, click the three dots icon (⋯) beside
+   your document.
+2. Select *Submit*.
+
+The submission will be sent via the GitHub API to be registered in an
+`OSTrails GitHub repository for collecting metadata about these
+assessment components `_, and indexed by the `FAIRsharing `_ registry.
+
+
+Next steps
+----------
+
+Once submitted to FAIRsharing, the record will remain hidden until it
+has been approved by the FAIRsharing curation team. Once it is public,
+claim your record in FAIRsharing. Information on creating an account
+and claiming a record is available in the next section, and at
+`fairsharing.gitbook.io <https://fairsharing.gitbook.io/>`_.
+
+
+FAIRsharing
+===========
+
+This section provides a comprehensive walkthrough for registering a
+**Benchmark** directly within the FAIRassist registry on FAIRsharing.
+
+Prerequisites
+-------------
+
+* Ensure you are logged in via your ORCID. This ensures your curation
+  work is publicly attributed to you. You can find out more about
+  creating an account in our `gitbook documentation `_.
+* Create a narrative description of your Benchmark, and how it
+  interprets the `FAIR Principles `_. You may find the benchmark
+  sections of the tutorial at `define-benchmark-associated-metrics.rst
+  <define-benchmark-associated-metrics.rst>`_ useful.
+
+Creating a record
+-----------------
+
+Please follow the instructions in `our documentation `_ on how to
+create a new record in FAIRsharing. Once you have done that, you will
+be presented with the more detailed record edit interface.
+
+Editing your record
+-------------------
+
+Each of the sections below corresponds to a single tab of the edit
+interface for a FAIRsharing record, and summarises the key fields that
+should be populated. For complete documentation, see our `gitbook
+pages `_.
+
+Remember to save your record regularly.
+
+General Information
+===================
+
+The form in the general information tab establishes the identity,
+ownership, and scientific scope of your record.
+
+* `Record Name `_ (Mandatory): Provide the full name of the resource.
+  You should create a name of the format: [[Principle name]] Benchmark -
+  [[descriptive benchmark name]]. An example is "FAIR Benchmark - FAIR
+  Portugal Dataverse Benchmark".
+* `Abbreviation `_ (Optional): You should create an abbreviation of
+  the format: [[Principle abbrev]]B:[[short name for the benchmark]].
+  An example is "FB - FAIR PT-DV".
+* `Homepage `_ (Mandatory): Provide the homepage URL for the resource.
+* `Description `_ (Mandatory): Free-text summary of the resource and
+  its purpose; see also our documentation on descriptions. (Min. 40
+  characters.)
+* `Year of creation `_ (Recommended): Provide the year the resource
+  was first released.
+* `Contacts `_ (Mandatory): At least one contact point should be
+  provided, consisting of a name and email address for the person or
+  group responsible for the maintenance of the resource.
+* `Countries `_ (Mandatory): Select the country or countries where the
+  resource is hosted. At least one must be added.
+* `Subjects and Taxonomies `_ (Mandatory): Select the relevant subject
+  area and species. At least one of each must be added. "Not
+  applicable" may be used for the Taxonomy value when the species is
+  irrelevant, as is often the case for benchmarks.
+* `Object Type `_ (Mandatory): Define the type of digital research
+  object in scope. At least one object type must be provided for
+  Metrics/Benchmarks.
+
+Licence and Support Links
+=========================
+
+This tab ensures users understand how to access help and the legal
+usage rights of the metadata.
+
+* `Licences `_ (Recommended): Licences for the content of your
+  resource (e.g. the specification) should be listed. Providing
+  licences helps others understand their usage rights.
+* `Support `_ (Recommended): Support links allow you to supply
+  information about the various types of documentation, training and
+  support available for your resource.
+
+Publications
+============
+
+Connects the record to the literature related to the benchmark.
+
+* `Publications `_ (Recommended): This section is only for
+  publications that describe your resource and those you would ask
+  others to use when citing it.
+* `Citations `_ (Recommended): You may have one or more publications
+  that should be used to cite your resource. Note this using the 'Cite
+  record using this publication?' toggle.
+
+Organisations and Grants
+========================
+
+Defines the institutional backing and funding for the resource.
+
+* `Organisations `_ (Recommended): Each organisation involved should
+  be added with its role. At least one maintaining organisation and one
+  funding organisation should be added.
+
+Relations to Other Records
+==========================
+
+* `related_to `_ (Recommended): One of the most important parts of a
+  record is its relationships. Link to records (other than metrics) via
+  the autocomplete field using the FAIRsharing ID, full name, or short
+  name.
+* `has_associated_metric `_ (Mandatory): Use this relationship to link
+  to every metric included in this benchmark.
+
+Additional Information
+======================
+
+Specific functional metadata for assessment tools.
+
+* `Associated evaluation tools `_ (Optional): If your metric/benchmark
+  is available from a particular FAIR evaluation tool, please add it
+  here.
diff --git a/docs/commons/fair/tutorials/register-curate-metric-fs.rst b/docs/commons/fair/tutorials/register-curate-metric-fs.rst
deleted file mode 100644
index 0024637..0000000
--- a/docs/commons/fair/tutorials/register-curate-metric-fs.rst
+++ /dev/null
@@ -1,2 +0,0 @@
-How to register and curate a metric in FS
-==========================================
diff --git a/docs/commons/fair/tutorials/register-metric.rst b/docs/commons/fair/tutorials/register-metric.rst
new file mode 100644
index 0000000..751c064
--- /dev/null
+++ b/docs/commons/fair/tutorials/register-metric.rst
@@ -0,0 +1,224 @@
+========================
+How to register a Metric
+========================
+
+This tutorial explains how to register a **community FAIR Metric**
+using the OSTrails FAIR Assessment framework.
+
+A Metric is a narrative description of a measurable criterion that a
+Test must wholly implement. Each Metric should address exactly one
+dimension (e.g. one of the FAIR Principles). Metrics may be
+domain-agnostic or domain-specific. For more information, see `the FAIR
+Testing Resource (FTR) vocabulary `_.
+
+There are two ways to register a Metric.
+The first is to use the `FAIR Wizard authoring tool `_, a
+questionnaire-based knowledge model designed to collect and structure
+metadata for FAIR Assessment Components, including Metrics. It
+auto-generates FTR metadata and registers it in FAIRsharing for you.
+The second is to register your Metric directly with FAIRsharing. This
+tutorial covers both options.
+
+Does your metric already exist?
+===============================
+
+You should review the existing metrics in FAIRsharing for the Principle
+that you are measuring. If a suitable metric already exists, please use
+it in your benchmark rather than creating a new one. To discover the
+metrics related to a particular Principle, find the Principle in
+FAIRsharing and explore its relationships.
+
+For example, if you require a metric for F1 ((Meta)data are assigned
+globally unique and persistent identifiers) that checks the global
+uniqueness of an identifier, then visit
+https://doi.org/10.25504/FAIRsharing.a2cea7 and review the list of
+related metrics. See also the tutorial on
+`find-metrics-and-benchmarks.rst <find-metrics-and-benchmarks.rst>`_.
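+
+If you are checking several principles, you can resolve such a DOI
+programmatically to reach the FAIRsharing record. A minimal sketch,
+using the third-party ``requests`` library and the F1 DOI quoted above:
+
+.. code-block:: python
+
+   import requests
+
+   # Resolve the DOI of the F1 principle record to its FAIRsharing
+   # landing page, whose relationships list the related metrics.
+   doi_url = "https://doi.org/10.25504/FAIRsharing.a2cea7"
+   response = requests.get(doi_url, allow_redirects=True, timeout=30)
+
+   print(response.url)          # final FAIRsharing record URL
+   print(response.status_code)  # 200 if the record resolved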
+
+FAIR Wizard
+===========
+
+.. _metric_prerequisites:
+
+Prerequisites
+-------------
+
+Before starting you should:
+
+* Create a narrative description of your metric, and how it interprets
+  the `FAIR Principle `_ that it measures. You may find the metric
+  sections of the tutorial at `define-benchmark-associated-metrics.rst
+  <define-benchmark-associated-metrics.rst>`_ useful.
+* Have access to the `FAIR Wizard authoring tool `_.
+* Identify the **type of digital object** that your Metric will
+  evaluate.
+
+.. _create_project:
+
+Step 1 – Create a Metric project in the FAIR Wizard authoring tool
+------------------------------------------------------------------
+
+1. Go to `the dedicated environment for this questionnaire `_.
+2. Register yourself, or log in if you already have access.
+3. Navigate to Projects and click Create to start a new project.
+4. Name your project and use the "**FAIR Assessment Authoring Tool** -
+   Questionnaire for creating FAIR Assessment Components" template as
+   Knowledge Model.
+5. Enable **Filter by question tags**.
+6. Choose **Metric** as the artefact type.
+
+By doing this, the tool will create a Metric-tailored questionnaire.
+
+.. _fill_out_questionnaire:
+
+Step 2 – Fill in the questionnaire
+----------------------------------
+
+1. Read the instructions carefully.
+2. Work through the form sequentially, completing each section with
+   information relevant to the Metric you are defining.
+
+Note that some questions are *mandatory* and must be answered. Other
+questions are optional. The mandatory fields that are required to
+define a FAIR Metric are:
+
+- ``Title``
+
+  Indicate the name of your Metric. To follow OSTrails best practices,
+  consider using this Metric naming scheme (a helper that assembles
+  names in this form is sketched after this list):
+
+  [[Principle name]] Metric - [[Abbreviation of sub-principle]] -
+  [[Metadata|Data, depending on the focus of the metric]] -
+  [[descriptive metric name]]
+
+  Examples:
+
+    FAIR Metric - F1 - Metadata - Persistent identifiers for database
+    content
+
+    FAIR4RS Metric - F1 - Metadata - Software has persistent and unique
+    identifier (https://doi.org/10.25504/FAIRsharing.87c9a8)
+
+- ``Description``
+
+  Provide a description of your Metric.
+
+- ``Abbreviation``
+
+  Provide a single-word abbreviation for your Metric. Note that FAIR
+  Wizard does not allow spaces or any of these special characters in
+  Benchmark/Metric/Test name abbreviations:
+  ``: / ? # [ ] @ ! $ & ' ( ) * + , ; = " < > \ ^ {``
+
+  To follow OSTrails best practices, consider using this Metric naming
+  scheme:
+
+  [[Principle abbrev]]M_[[Abbreviation of sub-principle]]_[[M|D|P]]_[[short name for the metric]]
+
+  Examples:
+
+    FM_F1-PID_M_ARK
+
+    FSM_F1_M_UNIQ-ID
+
+    FM_R1.2_M_CPI
+
+- ``License``
+
+  Include a license URL for this Metric.
+
+- ``Version``
+
+  Indicate the version number to use for this definition of your
+  Metric.
+
+- ``Responsible contact person``
+
+  Provide the name and email of a responsible contact person. An
+  ORCID-integrated search lets you find your details by typing either
+  your full name or your ORCID ID directly.
+
+- ``Country``
+
+  Indicate the country or geographical scope relevant to this Metric.
+  If the Metric is not limited to a specific region, you can select
+  *'Worldwide'*.
+
+- ``Subject``
+
+  Specify the application domain or area of knowledge to which the
+  Metric applies. If the Metric is intended to be domain-independent,
+  you can select *'Subject Agnostic'*.
+
+- ``Object type``
+
+  Indicate the type of digital object that the Metric evaluates (for
+  example datasets, software, or workflows). If the Metric applies
+  broadly, you can select Object type *'Agnostic'*.
+
+- ``Taxonomy``
+
+  Classify the Metric within a taxonomy. If no suitable classification
+  is available or needed, you can select *'Not Applicable'*.
+
+- ``Link to a principle``
+
+  Link the Metric to at least one FAIR Principle. This defines which
+  aspect of FAIRness the Metric evaluates and is essential for its
+  interpretation and reuse.
+
+**Please complete as many of the optional sections as you can. The more
+complete your Metric, the more reusable and FAIR it is. Incomplete
+metadata may delay the publishing of your Metric in FAIRsharing.**
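+
+As a convenience, the two naming schemes above can be expressed as
+small helpers. A minimal sketch; the function names are hypothetical,
+and the schemes are the OSTrails conventions quoted in this section:
+
+.. code-block:: python
+
+   def metric_title(principle, sub_principle, focus, descriptive_name):
+       # focus is "Metadata" or "Data", depending on what the metric inspects.
+       return f"{principle} Metric - {sub_principle} - {focus} - {descriptive_name}"
+
+   def metric_abbreviation(principle_abbrev, sub_principle, focus, short_name):
+       # focus is one of "M", "D" or "P"; principle_abbrev is e.g. "F"
+       # for FAIR, so that the first token becomes "FM", "FSM", etc.
+       return f"{principle_abbrev}M_{sub_principle}_{focus}_{short_name}"
+
+   print(metric_title("FAIR", "F1", "Metadata",
+                      "Persistent identifiers for database content"))
+   print(metric_abbreviation("F", "F1-PID", "M", "ARK"))  # FM_F1-PID_M_ARK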
+
+Step 3 – Create an instance with your answers
+---------------------------------------------
+
+Once the questionnaire has been completed:
+
+1. Go to the **Documents** section in the top menu.
+2. Name your document and select the latest version of the "*FAIR
+   Assessment Authoring Tool* - Jinja2-based template for authoring and
+   registering FAIR Assessment Components" as Document Template.
+3. Choose the "Metric / Benchmark" Format option.
+4. Click on *Create*.
+
+This will create a JSON file with your input.
+
+Step 4 – Submit your document
+-----------------------------
+
+You can now review the document containing your questionnaire answers:
+clicking on it will download the file. Once you are happy with it, you
+are ready to submit your document:
+
+1. In the **Documents** section, click the three dots icon (⋯) beside
+   your document.
+2. Select *Submit*.
+
+The submission will be sent via the GitHub API to be registered in an
+`OSTrails GitHub repository for collecting metadata about these
+assessment components `_, and indexed by the `FAIRsharing `_ registry.
+
+
+Next steps
+----------
+
+Once submitted to FAIRsharing, the record will remain hidden until it
+has been approved by the FAIRsharing curation team. Once it is public,
+claim your record in FAIRsharing. Information on creating an account
+and claiming a record is available in the next section, and at
+https://fairsharing.gitbook.io/
+
+
+FAIRsharing
+===========
+
+This section provides a comprehensive walkthrough for registering a
+**Metric** directly within the FAIRassist registry on FAIRsharing.
+
+.. _fs_prerequisites:
+
+Prerequisites
+-------------
+
+* Ensure you are logged in via your ORCID. This ensures your curation
+  work is publicly attributed to you. You can find out more about
+  creating an account in our `gitbook documentation `_.
+* Create a narrative description of your metric, and how it interprets
+  the `FAIR Principle `_ that it measures. You may find the metric
+  sections of the tutorial at `define-benchmark-associated-metrics.rst
+  <define-benchmark-associated-metrics.rst>`_ useful.
+
+Creating a record
+-----------------
+
+Please follow the instructions in `our documentation `_ on how to
+create a new record in FAIRsharing. Once you have done that, you will
+be presented with the more detailed record edit interface.
+
+Editing your record
+-------------------
+
+Each of the sections below corresponds to a single tab of the edit
+interface for a FAIRsharing record, and summarises the key fields that
+should be populated. For complete documentation, see our `gitbook
+pages `_.
+
+Remember to save your updates regularly.
+
+General Information
+===================
+
+The form in the general information tab establishes the identity,
+ownership, and scientific scope of your record.
+
+* `Record Name `_ (Mandatory): Provide the full name of the resource.
+  You should create a name of the format: [[Principle name]] Metric -
+  [[Abbreviation of sub-principle]] - [[Metadata or Data]] -
+  [[descriptive metric name]]. An example is "FAIR Metric - F1 -
+  Metadata - Identifier is globally unique".
+* `Abbreviation `_ (Optional): You should create an abbreviation of
+  the format: [[Principle abbrev]]M:[[Abbreviation of
+  sub-principle]]:[[M or D]]:[[short name for the metric]]. An example
+  is "FM:F1:M:IdentUnique".
+* `Homepage `_ (Mandatory): Provide the homepage URL for the resource.
+* `Description `_ (Mandatory): Free-text summary of the resource and
+  its purpose; see also our documentation on descriptions. (Min. 40
+  characters.)
+* `Year of creation `_ (Recommended): Provide the year the resource
+  was first released.
+* `Contacts `_ (Mandatory): At least one contact point should be
+  provided, consisting of a name and email address for the person or
+  group responsible for the maintenance of the resource.
+* `Countries `_ (Mandatory): Select the country or countries where the
+  resource is hosted. At least one must be added.
+* `Subjects and Taxonomies `_ (Mandatory): Select the relevant subject
+  area and species. At least one of each must be added. "Not
+  applicable" may be used for the Taxonomy value when the species is
+  irrelevant, as is often the case for metrics.
+* `Object Type `_ (Mandatory): Define the type of digital research
+  object in scope. At least one object type must be provided for
+  Metrics/Benchmarks.
+
+Licence and Support Links
+=========================
+
+This tab ensures users understand how to access help and the legal
+usage rights of the metadata.
+
+* `Licences `_ (Recommended): Licences for the content of your
+  resource (e.g. the specification) should be listed. Providing
+  licences helps others understand their usage rights.
+* `Support `_ (Recommended): Support links allow you to supply
+  information about the various types of documentation, training and
+  support available for your resource.
+
+Publications
+============
+
+Connects the record to the literature related to the metric.
+
+* `Publications `_ (Recommended): This section is only for
+  publications that describe your resource and those you would ask
+  others to use when citing it.
+* `Citations `_ (Recommended): You may have one or more publications
+  that should be used to cite your resource. Note this using the 'Cite
+  record using this publication?' toggle.
+
+Organisations and Grants
+========================
+
+Defines the institutional backing and funding for the resource.
+
+* `Organisations `_ (Recommended): Each organisation involved should
+  be added with its role. At least one maintaining organisation and one
+  funding organisation should be added.
+
+Relations to Other Records
+==========================
+
+* `related_to `_ (Recommended): One of the most important parts of a
+  record is its relationships. Link to records (other than benchmarks)
+  via the autocomplete field using the FAIRsharing ID, full name, or
+  short name.
+* `measures_principle `_ (Mandatory): Every metric must be linked to
+  exactly ONE principle record from any given principle hierarchy. Use
+  the narrowest principle possible (e.g. FAIR F1). A metric cannot
+  point to two principles from the same hierarchy.
+
+Additional Information
+======================
+
+Specific functional metadata for assessment tools.
+
+* `Associated evaluation tools `_ (Optional): If your metric/benchmark
+  is available from a particular FAIR evaluation tool, please add it
+  here.
+* `Associated Tests `_ (Optional): Links to the tests that execute
+  this metric.
+* `Positive, Negative and Indeterminate examples `_ (Optional): URLs
+  that provide illustrative examples of positive, negative and
+  indeterminate outcomes.