diff --git a/tutorials/ai-core-deploy/ai-core-deploy.md b/tutorials/ai-core-deploy/ai-core-deploy.md index daf9eea8d3..24449fb460 100644 --- a/tutorials/ai-core-deploy/ai-core-deploy.md +++ b/tutorials/ai-core-deploy/ai-core-deploy.md @@ -183,8 +183,36 @@ docker push docker.io//house-server:01 -### Create a serving executable +### Set Compute Resources for Serving - Pre Read + +SAP AI Core allows you to configure compute resources for serving workloads using either an instance type or a resource plan. You must specify at least one of these options. + + - If you specify an **instance type**, a resource plan is not required. + + - If you specify a **resource plan**, an instance type is not required. + +```YAML +labels: + ai.sap.com/resourcePlan: + (or) + ai.sap.com/instanceType: +``` + +**Note:** + - Resource plans are suitable for most standard serving workloads. + + - Instance types are recommended for GPU-based or performance-critical serving scenarios. + +**Reference:** + + - [SAP Help Portal – Choose an Instance (SAP AI Core)](https://help.sap.com/docs/sap-ai-core/predictive-ai-db13d59d17204c01b3b79c24fb82a19a/choose-instance) + + - [SAP Note 3660109 – Available Instance Types](https://me.sap.com/notes/3660109) + + + +### Create a serving executable Create an executable (YAML file) named `house-price-server.yaml` in your GitHub repository. You may use the existing GitHub path which is already tracked synced to your application of SAP AI Core. @@ -219,7 +247,7 @@ spec: autoscaling.knative.dev/target: 1 autoscaling.knative.dev/targetBurstCapacity: 0 labels: | - ai.sap.com/resourcePlan: starter # computing power + ai.sap.com/resourcePlan: starter # or ai.sap.com/instanceType: spec: | predictor: imagePullSecrets: @@ -247,7 +275,7 @@ spec: 1. You use an input artifacts placeholder `housepricemodel` for your model. 2. You use an input parameters placeholder `greetmessage` to pass any value in a string. -3. 
You use the `starter` computing resource plan with `ai.sap.com/resourcePlan`. To start, using a non-GPU based resource plan for serving (like `starter`) is cost effective. Find out more about available resource plans in [the help portal](https://help.sap.com/docs/AI_CORE/2d6c5984063c40a59eda62f4a9135bee/57f4f19d9b3b46208ee1d72017d0eab6.html?locale=en-US). +3. You configure compute resources using the **ai.sap.com/resourcePlan** label. In this tutorial, the starter resource plan is used for serving, as it is cost-effective for non-GPU workloads. Alternatively, you can use **ai.sap.com/instanceType** for advanced or GPU-enabled serving scenarios. Learn more in the [SAP Help Portal – Choose an Instance](https://help.sap.com/docs/AI_CORE/2d6c5984063c40a59eda62f4a9135bee/57f4f19d9b3b46208ee1d72017d0eab6.html?locale=en-US). 4. You set the auto scaling of the server with the parameters: `minReplicas` and `maxReplicas`. 5. You set the serving code to use through a Docker `image`, and the credentials to access it via `imagePullSecrets`. You must ensure that if you are using a public docker registry that has the file type `docker.io`, your secret points to the URL `https://index.docker.io`. You may delete and recreate the docker registry secret. This will not affect training templates running in parallel. 6. You use the placeholder `env` to pass your `inputs` values as environment variables in your Docker image. 
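The compute-resource rule described in the new section above (specify an instance type, a resource plan, or both) can be sketched as a quick check in Python. This snippet is illustrative and not part of the tutorial's template; only the two label names are taken from SAP AI Core, and the example values are hypothetical:

```python
# Illustrative validator for the serving-template rule described above:
# at least one of the two compute labels must be set.
RESOURCE_PLAN = "ai.sap.com/resourcePlan"   # e.g. "starter" (non-GPU, cost effective)
INSTANCE_TYPE = "ai.sap.com/instanceType"   # for GPU or performance-critical serving

def compute_settings_valid(labels: dict) -> bool:
    """True if the labels carry a resource plan, an instance type, or both."""
    return bool(labels.get(RESOURCE_PLAN) or labels.get(INSTANCE_TYPE))

print(compute_settings_valid({RESOURCE_PLAN: "starter"}))  # True
print(compute_settings_valid({}))                          # False
```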
diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/DATASET/medicalqna_dataset.csv b/tutorials/ai-core-genaihub-evaluation-comprehensive/DATASET/medicalqna_dataset.csv new file mode 100644 index 0000000000..21ad421d1b --- /dev/null +++ b/tutorials/ai-core-genaihub-evaluation-comprehensive/DATASET/medicalqna_dataset.csv @@ -0,0 +1,70 @@ +question,sentiment,reference +how does rivatigmine and otc sleep medicine interact,Interaction,"tell your doctor and pharmacist what prescription and nonprescription medications, vitamins, nutritional supplements, and herbal products you are taking or plan to take. Be sure to mention any of the following: antihistamines; aspirin and other nonsteroidal anti-inflammatory medications (NSAIDs) such as ibuprofen (Advil, Motrin) and naproxen (Aleve, Naprosyn); bethanechol (Duvoid, Urecholine); ipratropium (Atrovent, in Combivent, DuoNeb); and medications for Alzheimer's disease, glaucoma, irritable bowel disease, motion sickness, ulcers, or urinary problems. Your doctor may need to change the doses of your medications or monitor you carefully for side effects." +how does valium affect the brain,Action,"Diazepam is a benzodiazepine that exerts anxiolytic, sedative, muscle-relaxant, anticonvulsant and amnestic effects. Most of these effects are thought to result from a facilitation of the action of gamma aminobutyric acid (GABA), an inhibitory neurotransmitter in the central nervous system." +what is morphine,Information,Morphine is a pain medication of the opiate family which is found naturally in a number of plants and animals.[5][7] It acts directly on the central nervous system (CNS) to decrease the feeling of pain. +what are the milligrams for oxycodone e,Dose,� 10 mg � 20 mg � 40 mg � 80 mg ... +81% aspirin contain resin and shellac in it. 
?,Ingredient,Inactive Ingredients Ingredient Name +what is desonide ointment used for,Indication,"Desonide is used to treat the redness, swelling, itching, and discomfort of various skin conditions, including psoriasis (a skin disease in which red, scaly patches form on some areas of the body and eczema (a skin disease that causes the skin to be dry and itchy and to sometimes develop red, scaly rashes)." +how soon can tylenol be taken after a cocktail?,Interaction,"According to the National Health Service (NHS) in the UK, it is usually safe to drink a small amount of alcohol while taking this pain reliever. ... However, when people take acetaminophen at high doses or together with alcohol, it can cause side effects ranging from minor to severe, with the possibility of fatal liver damage. This risk may be higher for people with alcohol use disorder (AUD), which was previously known as alcoholism.... According to the U.S. National Library of Medicine, taking acetaminophen can be dangerous for people who regularly drink alcohol. Manufacturers currently recommend that people who have more than 3 alcoholic drinks per day should ask their doctor before taking acetaminophen." +breo inhaler how it works,Action,"The combination of fluticasone and vilanterol is used to control wheezing, shortness of breath, coughing, and chest tightness caused by asthma and chronic obstructive pulmonary (COPD; a group of diseases that affect the lungs and airways, that includes chronic bronchitis and emphysema). Fluticasone is in a class of medications called steroids. It works by reducing swelling in the airways. Vilanterol is in a class of medications called long-acting beta-agonists (LABAs). It works by relaxing and opening air passages in the lungs, making it easier to breathe." +breo inhaler how it works,Usage,"To use the inhaler, follow these steps: + 1 If you will be using a new inhaler for the first time, remove it from the box and the foil wrapper. 
Fill in the ""Tray opened"" and ""Discard"" blanks on the inhaler label with the date that you opened the pouch and the date 6 weeks later when you must replace the inhaler. + 2 When you are ready to inhale your dose, slide the cover down to expose the mouthpiece until it clicks. If you open and close the inhaler without using your dose, you will waste the medication. + 3 The counter will count down by 1 each time you open the cover. If the counter does not count down, your inhaler will not provide the medicine. If your inhaler does not count down, call your pharmacist or doctor. + 4 Hold the inhaler away from your mouth and breathe out as far as you comfortably can. Do not breathe out into the mouthpiece. + 5 Put the mouthpiece between your lips, and close your lips firmly around it. Take a long, steady, deep breath in through your mouth. Do not breathe in through your nose. Be careful not block the air vent with your fingers. + 6 Remove the inhaler from your mouth, and hold your breath for about 3 to 4 seconds or as long as you comfortably can. Breathe out slowly. + 7 You may or may not taste or feel the medicine released by the inhaler. Even if you do not, do not inhale another dose. If you are not sure you are getting your dose of fluticasone and vilanterol, call your doctor or pharmacist. + 8 You may clean the mouthpiece with a dry tissue, if needed. Slide the cover up over the mouthpiece as far as it will go to close the inhaler. + 9 Rinse your mouth with water, but do not swallow. +Ask your pharmacist or doctor for a copy of the manufacturer's information for the patient." +qvar 40mg what is it for,Indication,"QVAR is indicated in the maintenance treatment of asthma as prophylactic therapy in patients 5 years of age and older. QVAR is also indicated for asthma patients who require systemic corticosteroid administration, where adding QVAR may reduce or eliminate the need for the systemic corticosteroids." 
+does cyclosporine ophthalmic helps for iritis?,Indication,This study showed improvement of recurrent anterior uveitis [iritis] in patients while on conventional treatment with cyclosporine A 0.05% compared with conventional treatment alone. +what ingredient in walnut interferes with synthroid drug absorption,Interaction,"Dietary fiber: Certain dietary fiber sources can impede absorption of the thyroid hormone replacement medication. Mayo Clinic staff say it is best to avoid dietary fiber in foods like walnuts, soy products, iron supplements and multivitamins containing iron." +what is the color of the fluvaastatin pill,Appearance,Product Characteristics Color RED (rust) +"is penicillin in the pill ""montelukast?""",Ingredient,"What are the ingredients in montelukast sodium tablets? + +Active ingredient: montelukast sodium, USP + +Inactive ingredients: + +10 mg tablet: croscarmellose sodium, hydroxypropyl cellulose, lactose monohydrate, magnesium stearate, and microcrystalline cellulose. The film coating contains: black iron oxide, hydroxypropyl cellulose, hypromellose, red iron oxide, titanium dioxide, and yellow iron oxide." +"can i take metamucil with ""ciprofloxacin?""",Interaction,"diarrhea is a common problem caused by antibiotics which usually ends when the antibiotic is discontinued. Sometimes after starting treatment with antibiotics, patients can develop watery and bloody stools (with or without stomach cramps and fever) even as late as two or more months after having taken the last dose of the antibiotic. If this occurs, patients should contact their physician as soon as possible.�" +how long before a meal should lansoprazole be taken,Usage,Swallow 1 capsule with a glass of water before eating in the morning. +what does using fluorouracil make your face look like,Side effects,"The most frequent adverse reactions to Fluorouracil 5% Topical Cream occur locally and are often related to an extension of the pharmacological activity of the drug. 
These include burning, crusting, allergic contact dermatitis, erosions, erythema, hyperpigmentation, irritation, pain, photosensitivity, pruritus, scarring, rash, soreness and ulceration." +why did my doctor give me level iracetam,Indication,Levetiracetam is used in combination with other medications to treat certain types of seizures in adults and children with epilepsy. Levetiracetam is in a class of medications called anticonvulsants. It works by decreasing abnormal excitement in the brain. +results of stopping terazosin?,Usage,"The effect of withdrawal of terazosin therapy in patients with mild to moderate hypertension was assessed in two double-blind, placebo-controlled studies. All patients had demonstrated a stable blood pressure response to terazosin prior to withdrawal of the drug. Patients were randomly assigned either to continue treatment with terazosin at a previously established dose that had brought blood pressure under control (dose range: 1 to 40 mg daily) or to receive a matching placebo. At the end of a six- or eight-week withdrawal period, placebo-treated patients experienced mean increases of 7.3 and 12.4 mm Hg in supine diastolic blood pressure (studies M81-020 and M81-028 site 1, respectively). These increases were significantly greater than those observed for patients who continued to receive terazosin. Similar results were observed in other blood pressure variables. Withdrawal of terazosin was accompanied by a significant weight loss (2.8 and 3.6 pounds in studies M81-020 and M81-028, respectively). There were no clinically significant changes in pulse rates, physical examinations, laboratory test results, or electrocardiograms. Headache was the most common adverse experience reported by those who received placebo during the drug withdrawal period. 
These studies demonstrate that withdrawal of terazosin therapy is associated with an increase in supine diastolic blood pressure, often to hypertensive levels, without signs of a withdrawal syndrome." +what meloxicam look like,Appearance,Product Characteristics Color YELLOW (light yellow) Score no score Shape OVAL Size 3mm Imprint Code S160 +nitroglycerin how often,Usage,"One tablet should be dissolved under the tongue or in the buccal pouch at the first sign of an acute anginal attack. The dose may be repeated approximately every 5 minutes until relief is obtained. If the pain persists after a total of 3 tablets in a 15-minute period, or if the pain is different than is typically experienced, prompt medical attention is recommended. Nitroglycerin may be used prophylactically 5 to 10 minutes prior to engaging in activities that might precipitate an acute attack." +whate is vitamin c chemicl symple ?,Information,Active Ingredient/Active Moiety ... ASCORBIC ACID ... +what is the maximum dose of pregabalin,Dose,"In view of the dose-dependent adverse reactions, treatment with doses above 300 mg/day is not recommended" +how long does marijuana it stay in system,Action/time,"The effects of marijuana usually last from 1 to 3 hours, but marijuana can stay in the body for days or even weeks after use. Organs in the body have fatty tissues that absorb the THC in marijuana. In general, standard urine tests can detect THC several days after use. In people who use heavily, however, urine tests can sometimes detect THC for several weeks." +neupro and ropinirole when is it safe to take,Interaction,"Anxiolytics; Sedatives; and Hypnotics: (Moderate) A reduction in the dose of anxiolytics, sedatives, hypnotics and concomitantly administered dopamine agonists with sedative properties (e.g., ropinirole, pramipexole, rotigotine, apomorphine) should be considered to minimize additive sedative effects. 
In addition, the risk of next-day psychomotor impairment is increased during co-administration, which may decrease the ability to perform tasks requiring full mental alertness such as driving." +neupro and ropinirole when is it safe to take,Comparison,"Switching from oral dopamine agonists to rotigotine: An open-label study of 99 subjects with Parkinson�s disease was conducted in which the subjects, previously treated with 3 to 12mg/day ropinirole with or without levodopa, were converted to treatment with transdermal rotigotine. The following dosage conversion was utilized; 3mg/day ropinirole to 2mg/24 hours rotigotine, 6mg/day ropinirole to 4mg/24 hours rotigotine, 8-9mg/day ropinirole to 6mg/24 hours rotigotine, 12mg/day ropinirole to 8mg/24 hours rotigotine. Patients were instructed to take their last dose of ropinirole in the afternoon or evening, applying a rotigotine patch the next morning upon awakening. Overall this study determined that an overnight switch from ropinirole to rotigotine was generally well tolerated without loss of efficacy." +what is prevnar >65,Information,The pneumococcal conjugate vaccine (PCV13 or Prevnar 13�) protects against 13 types of pneumococcal bacteria. CDC recommends PCV13 for use in infants and young children and adults 65 years or older. +how many mg does it take to overdose on oxycodone,Overdose,"OXYCODONE HCl CONTROLLED-RELEASE 80 mg and 160 mg Tablets, or a single dose greater than 40 mg, ARE FOR USE IN OPIOID-TOLERANT PATIENTS ONLY. A single dose greater than 40 mg, or total daily doses greater than 80 mg, may cause fatal respiratory depression when administered to patients who are not tolerant to the respiratory depressant effects of opioids." +what medication not to take with lithium,Interaction,What special precautions should I follow? 
+mst drug/?,Information,"MST�Continus� 5 mg, 10 mg, 15 mg, 30 mg, 60 mg, 100 mg and 200 mg prolonged release tablets: Morphine sulfate" +what size doses of metformin are available?,Dose,"Metformin Hydrochloride Tablets, USP ... 500 mg ... 850 mg ... 1000 mg" +"pravastatin s9 orange how many ""grams?�""",Dose,No answers +how long morphine remains in body,Action/time,"Morphine takes longer to work than heroin and the effects tend to last longer. Despite this, blood tests can only detect morphine for the first 12 hours after the last dose, and urine tests only work for up to 3 days. However, saliva tests are more effective, being able to detect traces of morphine for up to 4 days. Again, morphine stays in the hair for 90 days." +"what is the imprint on metoprolol succ., 50 mg",Appearance,"50 mg tablets: White, round, coated tablets debossed with Andrx logo and �831� on one side and scored on the other side." +what can take the place of tramadol,Alternatives,"The American Academy of Pediatrics (AAP) and other pediatric associations and academies have released guidelines on the management of nociceptive pain in children. The top 3 medications� recommendations in children are paracetamol, ibuprofen, and opioids: non-opioids for mild nociceptive pain; non-opioids + weak opioids for moderate nociceptive pain and non-opioids + strong opioids for severe nociceptive pain. Codeine and tramadol are the only two opioids classified as weak opioids. In most countries, they do not require a restricted medical drug prescription and as �weak� opioids, they are often considered to have a lower potential for adverse drug reactions (ADR) than �strong� opioids." +how to administer denosumab,Usage,"Denosumab injection comes as a solution (liquid) to be injected subcutaneously (under the skin) in your upper arm, upper thigh, or stomach area. It is usually injected by a doctor or nurse in a medical office or clinic. Denosumab injection (Prolia) is usually given once every 6 months. 
When denosumab injection (Xgeva) is used to reduce the risk of fractures from multiple myeloma, or cancer that has spread to the bones, it is usually given once every 4 weeks. When denosumab injection (Xgeva) is used to treat giant cell tumor of bone, or high calcium levels caused by cancer, it is usually given every 7 days for the first three doses (on day 1, day 8, and day 15) and then once every 4 weeks starting 2 weeks after the first three doses. + +Your doctor will tell you to take supplements of calcium and vitamin D while you are being treated with denosumab injection. Take these supplements exactly as directed. + +When denosumab injection (Prolia) is used to treat osteoporosis or bone loss, your doctor or pharmacist will give you the manufacturer's patient information sheet (Medication Guide) when you begin treatment with denosumab injection and each time you refill your prescription. Read the information carefully and ask your doctor or pharmacist if you have any questions. You can also visit the Food and Drug Administration (FDA) website (http://www.fda.gov/Drugs/DrugSafety/ucm085729.htm) or the manufacturer's website to obtain the Medication Guide." +what is barbiturates,Information,"Barbiturates are sedative-hypnotic drugs that were once commonly used as sedatives or antianxiety medications. A physician must prescribe barbiturates; otherwise, their use is considered illicit. Among their limited uses, barbiturates are used to manage some seizure disorders as well as for pre-procedural sedation. In rarer instances, they are prescribed for the treatment of headache, anxiety and insomnia. However, their use in most areas of medicine has largely been supplanted by other safer medications. Barbiturates are controlled substances due to the potential they pose for abuse, physical dependence, and addiction. Some of the more common barbiturates include Luminal (phenobarbital). Brevital (methohexital). Seconal (secobarbital). Butisol (butabarbital). 
Fiorinal (butalbital)." +what are the inactive ingredients to the pneumonia vaccine,Ingredient,Inactive Ingredients POLYSORBATE 80 � ALUMINUM PHOSPHATE +how to prep and administer insulin,Usage,"Humulin R U-100 may be administered by subcutaneous injection in the abdominal wall, the thigh, the gluteal region or in the upper arm. Subcutaneous injection into the abdominal wall ensures a faster absorption than from other injection sites. Injection into a lifted skin fold minimizes the risk of intramuscular injection. Injection sites should be rotated within the same region. As with all insulin, the duration of action will vary according to the dose, injection site, blood flow, temperature, and level of physical activity. Intravenous administration of Humulin R U-100 is possible under medical supervision with close monitoring of blood glucose and potassium levels to avoid hypoglycemia and hypokalemia. For intravenous use, Humulin R U-100 should be used at concentrations from 0.1 unit/mL to 1 unit/mL in infusion systems with the infusion fluids 0.9% sodium chloride using polyvinyl chloride infusion bags." +what is medical marijuana,Information,"Some states have approved ""medical marijuana"" to ease symptoms of various health problems. The U.S. Food and Drug Administration (FDA) has not approved the marijuana plant as a medicine. However, there have been scientific studies of cannabinoids, the chemicals in marijuana. This has led to two FDA-approved medicines. They contain THC, the active ingredient in marijuana. They treat nausea caused by chemotherapy and increase appetite in patients who have severe weight loss from HIV/AIDS. Scientists are doing more research with marijuana and its ingredients to treat many diseases and conditions." +"clonazepam "".25mg"" lowest dosage?",Dose,"Klonopin Wafers (clonazepam orally disintegrating tablets) are white, round and debossed with the tablet strength � 0.125 mg debossed 1/8 �" +levaquin treat uti?,Indication,... 
Complicated Urinary Tract Infections: ... Acute Pyelonephritis: ... Uncomplicated Urinary Tract Infections +"vitamin d 25, totalhow much to takea day",Dose,"Currently, there�s scientific debate about how much vitamin D people need each day. The Institute of Medicine, in a long-awaited report released on November 30, 2010 recommends tripling the daily vitamin D intake for children and adults in the U.S. and Canada, to 600 IU per day. (7) The report also recognized the safety of vitamin D by increasing the upper limit from 2,000 to 4,000 IU per day, and acknowledged that even at 4,000 IU per day, there was no good evidence of harm. The new guidelines, however, are overly conservative about the recommended intake, and they do not give enough weight to some of the latest science on vitamin D and health. For bone health and chronic disease prevention, many people are likely to need more vitamin D than even these new government guidelines recommend." +sickness in humans caused formaldehyde on toys from china?,Side effects,"The Uphill Battle to Better Regulate Formaldehyde ... Safety advocates say that tighter restrictions ... are necessary, particularly for products coming from China, where items as varied as toys and Christmas lights have been found to violate American safety standards." +is cyclobenzaprine a benzodiazepine?,Information,"Cyclobenzaprine is in a class of medications called skeletal muscle relaxants. It works by acting in the brain and nervous system to allow the muscles to relax. �............ Benzodiazepines (sometimes called ""benzos"") work to calm or sedate a person, by raising the level of the inhibitory neurotransmitter GABA in the brain. Common benzodiazepines include diazepam (Valium), alprazolam (Xanax), and clonazepam (Klonopin), among others." +what does vitamin d3 do,Action,"Vitamin D helps your body absorb calcium. Calcium is one of the main building blocks of bone. 
A lack of vitamin D can lead to bone diseases such as osteoporosis or rickets. Vitamin D also has a role in your nerve, muscle, and immune systems." +what drugs contain in estrone injection,Ingredient,"Estrone, sold under the brand names Estragyn, Kestrin, and Theelin among many others, is an estrogen medication and naturally occurring steroid hormone which has been used in menopausal hormone therapy and for other indications.[5][8][9][10][1][2] It has been available as an aqueous suspension or oil solution that is given by injection into muscle and as a vaginal cream that is applied inside of the vagina.[1][2][3][4] It can also be taken by mouth in the form of estrone sulfate, as in estropipate (piperazine estrone sulfate; Ogen) and conjugated estrogens (Premarin).[11][2][5]" +can i eat after taking rapaflo?,Usage,The recommended dose is 8 mg orally once daily with a meal. +how much levothyroxine is needed to treat hashimotos,Dose,"If Hashimoto's disease causes thyroid hormone deficiency, you may need replacement therapy with thyroid hormone. This usually involves daily use of the synthetic thyroid hormone levothyroxine (Levoxyl, Synthroid, others). ... Treatment with levothyroxine is usually lifelong, but because the dosage you need may change, your doctor is likely to check your TSH level about every 12 months." diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/README.md b/tutorials/ai-core-genaihub-evaluation-comprehensive/README.md new file mode 100644 index 0000000000..2528756d3a --- /dev/null +++ b/tutorials/ai-core-genaihub-evaluation-comprehensive/README.md @@ -0,0 +1,548 @@ +# Generative AI Custom Evaluation Workflow + +This notebook demonstrates a workflow for using AI Core's custom evaluation capabilities to benchmark Large Language Models (LLMs), and evaluate different prompts for a specific use case. 
It utilizes the public [MedicationQA dataset](https://langtest.org/docs/pages/benchmarks/medical/medicationqa/) to showcase how to compute industry-standard metrics and assess the reliability of LLM-generated responses. + +## Prerequisites + +Before running this notebook, ensure you have the following: + +1. **Python Environment**: A running Jupyter Notebook environment. +2. **Dependencies**: The required Python packages can be installed by running the pip command in the notebook: + ```bash + pip install -r requirements.txt + ``` +3. **Environment Variables**: Create a `.env` file in the same directory as the notebook. This file should contain your credentials for SAP AI Core and AWS. A `sample.env` file is provided as a template. The notebook will prompt for any missing values. + + Your `.env` file should look like this: + ``` + # SAP AI Core Credentials + AICORE_BASE_URL= + AICORE_RESOURCE_GROUP= + AICORE_AUTH_URL= + AICORE_CLIENT_ID= + AICORE_CLIENT_SECRET= + + # AWS Credentials + AWS_ACCESS_KEY= + AWS_BUCKET_ID= + AWS_REGION= + AWS_SECRET_ACCESS_KEY= + + # Optional Orchestration Deployment URL + DEPLOYMENT_URL= + ``` + +## Workflow Overview + +The notebook is structured into the following key steps: + +### Step 1: Setup + +* **Install Dependencies**: Installs the necessary Python packages from `requirements.txt`. +* **Load Credentials**: Loads the necessary credentials and configuration from the `.env` file. It initializes the `GenAIHubProxyClient` for interacting with SAP AI Core. + +### Step 2: Prepare for Evaluation + +This section involves preparing all the necessary assets for the evaluation run. + +1. **Register Object Store Secret**: Registers your AWS S3 bucket credentials with SAP AI Core. This allows the evaluation job to access your dataset. 
+
+An [object store secret](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/register-your-object-store-secret) is required to store the credentials used to access your AWS S3 bucket, and to limit access to a particular directory.
+You need to select a resource group when creating the secret.
+To read more about resource groups, visit: [Resource Group](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/resource-groups)
+
+The API endpoint is documented at: [Object store secret endpoint](https://api.sap.com/api/AI_CORE_API/resource/Object_Store_Secret)
+```
+{
+    "name": "genai-data-notebook",
+    "data": {
+        "AWS_ACCESS_KEY_ID": AWS_ACCESS_KEY,
+        "AWS_SECRET_ACCESS_KEY": AWS_SECRET_ACCESS_KEY
+    },
+    "type": "S3",
+    "bucket": AWS_BUCKET_ID,
+    "endpoint": "https://s3.amazonaws.com",
+    "region": AWS_REGION,
+    "pathPrefix": ""
+}
+```
+
+2. **Upload Data to S3**: Uploads the local dataset from `DATASET` to your S3 object store and registers the root folder as an artifact with AI Core. The File Upload and Artifact endpoints of the AI Core API may be used for this purpose. In this example, `genaiEvaluation/{prefix_guid}` is the root folder containing the orchestration configurations and test data, and it is registered as an AI Core artifact.
+
+3. **Register Artifact with AI Core**: Registers the uploaded dataset in S3 as an artifact in SAP AI Core. This makes the data accessible to the evaluation workflow.
+The input [artifact](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/manage-artifacts) is a placeholder in an executable or template that enables the attachment of datasets or models required for the execution of an AI workflow or pipeline.
+To register an artifact with AI Core:
+ - Upload the input files to the path specified in the object store.
+ - Register an artifact with AI Core by providing the path to the input artifact.
+
+The API endpoint is documented at: [Register Artifact](https://api.sap.com/api/AI_CORE_API/resource/Artifact)
+```
+{
+    "labels": [
+        {
+            "key": "ext.ai.sap.com/prompt-evaluation",
+            "value": "true"
+        }
+    ],
+    "name": "genai-eval-simplified-test-data",
+    "kind": "other",
+    "url": input_artifact_path, # input artifact path
+    "description": "demo artifacts for evaluation flow.",
+    "scenarioId": "genai-evaluations"
+}
+```
+ - The `url` is constructed as `ai://genai-data-notebook/genaiEvaluation/{prefix_guid}`, where `genai-data-notebook` is the object store secret name created previously and the remainder is the path of the S3 directory that contains your dataset.
+ - The `url` points to a directory, not a file. This gives you the advantage that you can store multiple files in an AWS S3 directory and register the directory containing all of the files as a single artifact.
+ - All files in the path referenced by the artifact, including subfolders (except when `kind` is `MODEL`), are copied from your S3 storage to your SAP AI Core instance during training or inferencing.
+ - The scenario ID here refers to the global workflow already present in AI Core.
+
+
+4. **Create Orchestration Deployment**: If you don't have an existing orchestration deployment, this step creates one. The deployment provides the endpoint for running the LLM.
+
+
+5. **Select Metrics**: You can select from a list of system-defined metrics (e.g., ROUGE, BERT Score, Answer Relevance) and/or register your own custom metrics through the notebook.
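The `ai://` URL convention described in step 3 can be sketched as a small helper. This is illustrative only: `genai-data-notebook` is the object store secret created earlier, and `demo-guid` stands in for the elided `{prefix_guid}`:

```python
def make_artifact_payload(secret_name: str, folder: str, scenario_id: str) -> dict:
    """Build an artifact registration body like the one shown above; the url
    must point at the S3 directory (not a single file) holding the dataset."""
    return {
        "labels": [{"key": "ext.ai.sap.com/prompt-evaluation", "value": "true"}],
        "name": "genai-eval-simplified-test-data",
        "kind": "other",
        "url": f"ai://{secret_name}/{folder}",
        "description": "demo artifacts for evaluation flow.",
        "scenarioId": scenario_id,
    }

payload = make_artifact_payload("genai-data-notebook", "genaiEvaluation/demo-guid", "genai-evaluations")
print(payload["url"])  # ai://genai-data-notebook/genaiEvaluation/demo-guid
```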
+
+The following **system-defined computed metrics** are supported:
+
+| Name | Description | Reference required |
+|------|-------------|--------------------|
+| BERT Score | https://huggingface.co/spaces/evaluate-metric/bertscore | Yes |
+| BLEU | https://huggingface.co/spaces/evaluate-metric/bleu | Yes |
+| ROUGE | https://huggingface.co/spaces/evaluate-metric/rouge | Yes |
+| JSON Schema Match | Validates the LLM-generated response against a predefined JSON schema and returns a boolean result. | Yes |
+| Content Filter on Input | Whether the orchestration input was rejected by the input filter. | No |
+| Content Filter on Output | Whether the orchestration output was rejected by the output filter. | No |
+| Exact Match | Whether the output exactly matches the reference. | Yes |
+| Language Match | Returns true/false to indicate whether the text matches the given language. | No |
+
+The following **system-defined model-as-a-judge metrics** are supported:
+
+| Name | Description | Reference required |
+|------|-------------|--------------------|
+| Pointwise Instruction Following | Assesses the model's ability to follow instructions provided in the user prompt. | No |
+| Pointwise Correctness | Assesses the model's ability to provide a correct response based on the user prompt. | Yes |
+| Pointwise Answer Relevance | Assesses whether the model's response is relevant to the user prompt. | No |
+| Pointwise Conciseness | Assesses whether the model's response is a short and concise answer to the user prompt. | No |
+
+Entries marked with an asterisk (*) are experimental metrics.
+
+
+## Model-as-a-Judge metrics internally follow these templates:
+
+Pointwise Instruction Following prompt template:
+
+```text
+Please act as an impartial judge and evaluate the quality of the responses based on the prompt and following criteria:
+
+## Metric Definition
+You will be assessing the model's ability to follow instructions provided in the user prompt.
+
+## Criteria
+Instruction following: The response demonstrates a clear understanding of the instructions in the user prompt, satisfying all of the instruction's requirements.
+Evaluate the responses STRICTLY on the ability to follow instruction ONLY.
+
+## Rating Rubric
+5: (Complete fulfillment). Response addresses all aspects and adheres to all requirements of the instruction. The user would feel like their instruction was completely understood.
+4: (Good fulfillment). Response addresses most aspects and requirements of the instruction. It might miss very minor details or have slight deviations from requirements. The user would feel like their instruction was well understood.
+3: (Some fulfillment). Response does not address some minor aspects and/or ignores some requirements of the instruction. The user would feel like their instruction was partially understood.
+2: (Poor fulfillment). Response addresses some aspects of the instruction but misses key requirements or major components. The user would feel like their instruction was misunderstood in significant ways.
+1: (No fulfillment). Response does not address the most important aspects of the instruction. The user would feel like their request was not at all understood.
+
+
+User Prompt:
+{{?aicore_prompt_template}}
+
+Model Response:
+{{?aicore_llm_completion}}
+
+Begin your evaluation by providing a short explanation. Be as unbiased as possible. After providing your explanation, please rate the response according to the rubric and outputs STRICTLY following this JSON format:
+{
+  "explanation": string,
+  "rating": integer
+}
+Output:
+```
+
+ +
+Pointwise Correctness prompt template:
+
+```text
+You are an expert evaluator. Your task is to evaluate the quality of the responses generated by AI models.
+We will provide you with the user input, an AI-generated response and a reference answer.
+You should first read the user input carefully for analyzing the task, and then evaluate the quality of the responses based on the criteria provided in the Evaluation section below.
+You will assign the response a rating following the Rating Rubric and Evaluation Steps.
+Give step-by-step explanations for your rating, and only choose ratings from the Rating Rubric.
+
+## Metric Definition
+You will be assessing correctness, which measures the ability to provide a correct response based on the user prompt and the reference.
+
+## Criteria
+Correctness: Is the response correct, accurate, and factual?
+
+## Rating Rubric
+5: (Completely correct). The response is completely correct, accurate, and factual.
+4: (Mostly correct). The response is mostly correct, accurate, and factual.
+3: (Somewhat correct). The response is somewhat correct, accurate, and factual.
+2: (Somewhat incorrect). The response is somewhat incorrect, inaccurate, and fictitious.
+1: (Incorrect). The response is incorrect, inaccurate, and fictitious.
+
+## Evaluation Steps
+STEP 1: Assess the response in aspects of Correctness. Identify any information in the response and provide assessment according to the Criteria.
+STEP 2: Score based on the rating rubric. Give a brief rationale to explain your evaluation considering Correctness.
+
+Prompt:
+{{?aicore_prompt_template}}
+
+Response:
+{{?aicore_llm_completion}}
+
+Reference:
+{{?reference}}
+
+Begin your evaluation by providing a short explanation. Be as unbiased as possible. After providing your explanation, please rate the response according to the rubric and outputs STRICTLY following this JSON format:
+{
+  "explanation": string,
+  "rating": integer
+}
+
+Output:
+```
+
+ +
+Pointwise Answer Relevance prompt template: + +```text +You are an expert evaluator. Your task is to evaluate the relevance of responses generated by AI models. +We will provide you with the user input and an AI-generated response. +You should first read the user input carefully to understand the context and intention, and then evaluate the relevance of the response based on the criteria provided in the Evaluation section below. +You will assign the response a rating following the Rating Rubric and Evaluation Steps. +Give step-by-step explanations for your rating, and only choose ratings from the Rating Rubric. + +## Metric Definition +You will be assessing relevance, which measures the ability to provide a response that is pertinent and useful based on the user prompt and the context provided. + +## Criteria +Relevance: Does the response address the user's query appropriately and provide pertinent information? + +## Rating Rubric +5: (Highly relevant). The response is highly relevant, directly addresses the user's query, and provides useful information. +4: (Mostly relevant). The response is mostly relevant and generally addresses the user's query with useful information. +3: (Somewhat relevant). The response is somewhat relevant but may miss key aspects of the user's query. +2: (Slightly relevant). The response is slightly relevant and largely misses the user's query. +1: (Irrelevant). The response is irrelevant and does not address the user's query. + +## Evaluation Steps +STEP 1: Assess the response in terms of Relevance. Identify how well the response aligns with the user's query and context according to the Criteria. +STEP 2: Score based on the rating rubric. Give a brief rationale to explain your evaluation considering Relevance. + +Prompt: +{{?aicore_prompt_template}} + +Response: +{{?aicore_llm_completion}} + +Begin your evaluation by providing a short explanation. Be as unbiased as possible. 
After providing your explanation, please rate the response according to the rubric and outputs STRICTLY following this JSON format: +{ + "explanation": string, + "rating": integer +} + +Output: +``` + +
+ +
+Pointwise Conciseness prompt template: + +```text +You are an expert evaluator. Your task is to evaluate the conciseness of responses generated by AI models. +We will provide you with the user input and an AI-generated response. +You should first read the user input carefully to understand the context and intention, and then evaluate the conciseness of the response based on the criteria provided in the Evaluation section below. +You will assign the response a rating following the Rating Rubric and Evaluation Steps. +Give step-by-step explanations for your rating, and only choose ratings from the Rating Rubric. + +## Metric Definition +You will be assessing conciseness, which measures the ability to convey the necessary information in a clear and succinct manner. + +## Criteria +Conciseness: Does the response deliver the essential information without unnecessary words or redundancy? + +## Rating Rubric +5: (Highly concise). The response is very concise, delivering all necessary information in a succinct manner without any superfluous content. +4: (Mostly concise). The response is mostly concise and generally avoids unnecessary words while covering the essential information. +3: (Somewhat concise). The response is somewhat concise but may include some unnecessary words or slightly redundant information. +2: (Slightly concise). The response is slightly concise and contains a significant amount of unnecessary or redundant information. +1: (Not concise). The response is not concise and is filled with unnecessary or redundant content that obscures the main points. + +## Evaluation Steps +STEP 1: Assess the response in terms of Conciseness. Identify how effectively the response communicates essential information without unnecessary words according to the Criteria. +STEP 2: Score based on the rating rubric. Give a brief rationale to explain your evaluation considering Conciseness. 
+ +Prompt: +{{?aicore_prompt_template}} + +Response: +{{?aicore_llm_completion}} + +Begin your evaluation by providing a short explanation. Be as unbiased as possible. After providing your explanation, please rate the response according to the rubric and outputs STRICTLY following this JSON format: +{ + "explanation": string, + "rating": integer +} + +Output: +``` + +
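Every template above ends by instructing the judge to answer in a fixed `{"explanation": ..., "rating": ...}` JSON shape. The helper below is a hypothetical illustration (not part of the evaluation service) of how such a completion can be parsed defensively, since judge LLMs occasionally wrap the JSON in extra prose:

```python
import json
import re

def parse_judge_output(completion: str) -> dict:
    """Extract the {"explanation": ..., "rating": ...} object from a judge completion.

    Judge LLMs sometimes wrap the JSON in prose or code fences, so fall back
    to grabbing the first {...} block before parsing.
    """
    try:
        result = json.loads(completion)
    except json.JSONDecodeError:
        match = re.search(r"\{.*\}", completion, re.DOTALL)
        if not match:
            raise ValueError("no JSON object found in judge output")
        result = json.loads(match.group(0))
    rating = int(result["rating"])
    if not 1 <= rating <= 5:
        raise ValueError(f"rating {rating} outside the 1-5 rubric")
    return {"explanation": result["explanation"], "rating": rating}

raw = 'The response is accurate.\n{"explanation": "Fully correct.", "rating": 5}'
print(parse_judge_output(raw))  # {'explanation': 'Fully correct.', 'rating': 5}
```

Validating the rating against the 1–5 rubric at parse time makes malformed judge completions fail loudly instead of skewing aggregate scores.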
+
+#### User-defined metrics (Custom metrics)
+
+User-defined metrics can be used to evaluate the LLM outputs according to the unique needs of a use case.
+A **user-defined LLM-as-a-judge metric** uses a judge LLM along with a rubric to compute a metric rating. The output of an LLM-as-a-judge metric can be numeric or text.
+
+
+
+The system defines a structure for the judge prompts, and users provide the metric definition in the pre-defined format. Relevant instructions, such as output instructions, are automatically added to ensure the desired output from the LLM.
+
+Example definition:
+```json
+{
+  "scenario": "genai-evaluations", #required only if metricId is not provided
+  "metricName": "my_custom_metric", #required only if metricId is not provided
+  "version": "0.0.1", #required only if metricId is not provided
+  "type": "structured", # structured
+  "model_configuration": { # model parameters are system-defined for structured prompts. User-defined model parameters will be ignored.
+    "model_name": "string",
+    "model_version": "string",
+  },
+  "prompt_configuration": {
+    "evaluation_task": "string", #Describe the goal of this evaluation.
+    "criteria": "string", #Describe in one or two sentences how the evaluation is done.
+    "rating_rubric": [
+      {
+        "rating": "number", #Rating is always an integer.
+        "rule": "string" #Describe the criteria for choosing this rating.
+      },
+      ...
+    ],
+    "include_properties": ["prompt", "reference"], #If present, a variable to hold the value (prompt, reference, etc.) will be included.
+    "examples": [ #optional, few-shot examples to provide context to the judge LLM for better results. Ensure that examples cover all ratings for good results.
+ { + "prompt": "string", #required only if prompt is present in include_properties + "response": "string", #mandatory + "reference": "string", #required only if reference is present in include_properties + "rating": "number", #mandatory + "explanation": "string", #mandatory, providing this value will improve the response from the judge llm. + }, + ... + ], + } +} +``` + +The constructed prompt will be: + +``` +Please act as an impartial judge and evaluate the quality of the responses based on the following criteria: + +## Evaluation Task +{{?evaluation_task}} # Example: You will be assessing correctness, which measures the ability to provide a correct response based on the user prompt and the reference. + +## Criteria +{{?Criteria}} # Example: Correctness: Is the response correct, accurate, and factual? + +## Rating Rubric +5: (Complete fulfillment). Response addresses all aspects and adheres to all requirements of the evaluation criteria. The user would feel like their expectations were completely met. +4: (Good fulfillment). Response addresses most aspects and requirements of the evaluation criteria. It might miss very minor details or have slight deviations from expectations. The user would feel like their expectations were well met. +3: (Some fulfillment). Response does not address some minor aspects and/or ignores some requirements of the evaluation criteria. The user would feel like their expectations were partially met. +2: (Poor fulfillment). Response addresses some aspects of the evaluation criteria but misses key requirements or major components. The user would feel like their expectations were misunderstood in significant ways. +1: (No fulfillment). Response does not address the most important aspects of the evaluation criteria. The user would feel like their expectations were not at all met. 
+
+Prompt:
+{{?aicore_llm_prompt}}
+
+Response:
+{{?aicore_llm_completion}}
+
+Reference:
+{{?reference}}
+
+Begin your evaluation by providing a short explanation. Be as unbiased as possible. After providing your explanation, please rate the response according to the rubric and outputs STRICTLY following this JSON format:
+{
+  "explanation": string,
+  "rating": integer
+}
+
+Output:
+```
+**NOTE**: "scenario", "metricName", and "version" are required parameters for the custom metric in the evaluation configuration.
+
+**NOTE**: The user must provide at least one prompt (system or user); both prompts can also be provided.
+
+
+6. **Select Models**: Choose the foundation models you want to evaluate from a list of available models in your AI Core instance.
+
+
+⚠️ **Model Availability Notice**
+If you are in a region where the `gpt-4.1` model (version `2025-04-14`) is not available, the existing LLM-as-a-Judge metrics evaluation cannot be performed. Currently, the evaluation service relies on this specific model version for metrics computation.
+
+
+7. **Create Orchestration Registry Config**: The prompt and models are provided as part of the orchestration configuration (inline prompt).
+
+Sample Body:
+```json
+{
+  "name": "genai-eval-test",
+  "version": "1.0.0",
+  "scenario": "genai-evaluations",
+  "spec": {
+    "modules": {
+      "prompt_templating": {
+        "model": {
+          "name": "model_name",
+          "version": "model_version"
+        },
+        "prompt": {
+          "template": [
+            {
+              "role": "user",
+              "content": "List the benefits and side effects of the drug in the following consumer health question: {{?question}}."
+            }
+          ]
+        }
+      }
+    }
+  }
+}
+```
+
+### Step 3: Start Evaluation Run
+
+* **Create AI Core Configuration**: A configuration is created that binds together the dataset artifact, the selected models, the chosen metrics, and the prompt template.
+After registering input artifacts, we create an AI Core configuration using the global executable of the genai-evaluations global scenario.
+The evaluation configuration takes the following input parameters, which are provided as parameterBindings.
+
+| Input parameter | Description |
+| --------------- | ----------- |
+| orchestrationDeploymentURL | The orchestration deployment to use for calling the LLM. |
+| metrics | A string containing comma-separated names of system-defined metrics, or scenario/metricName/version for custom metrics to be evaluated. |
+| testDataset | JSON containing the path to a test dataset relative to the rootFolder and its file type. |
+| orchestrationRegistryIds | The ID of the orchestration config stored in the orchestration registry. |
+| tags (Optional) | A JSON containing name-value pairs of user-defined metadata. |
+| repetitions (Optional) | The number of times the same input is submitted to the LLM to evaluate the consistency of the LLM outputs. Should be greater than 1 if specified. Default is 1. |
+| testRowCount (Optional) | Specifies the number of rows to be selected from the testDataset for evaluation. |
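The parameter values above are all passed as strings, which is easy to get wrong. The sketch below shows one plausible way a notebook might assemble them; the metric identifiers and the `testDataset` key names (`path`, `fileType`) are illustrative assumptions, not a documented schema:

```python
import json

# Comma-separated metric names; custom metrics are addressed as scenario/metricName/version.
# The exact identifiers here are assumptions for illustration.
metrics_list = ",".join([
    "Pointwise Correctness",
    "genai-evaluations/my_custom_metric/0.0.1",
])

# testDataset is itself a JSON string: a path relative to the rootFolder plus a file type.
# The key names ("path", "fileType") are assumptions for illustration only.
test_datasets = json.dumps({"path": "testdata/medicationqa.csv", "fileType": "csv"})

repetitions = "3"  # passed as a string; must be greater than 1 if specified

print(metrics_list)
print(test_datasets)
```

Serializing these values up front makes it easy to reuse them in the configuration request body shown below this table.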
+
+
+The API endpoint for this can be found here: [Configuration Endpoint](https://api.sap.com/api/AI_CORE_API/resource/Configuration)
+
+Below is an example of a configuration request body:
+
+```PYTHON
+request_body = {
+    "name": "genai-eval-conf",
+    "scenarioId": "genai-evaluations",
+    "executableId": "genai-evaluations-simplified",
+    "inputArtifactBindings": [
+        {
+            "key": "datasetFolder",
+            "artifactId": artifact_id
+        }
+    ],
+    "parameterBindings": [
+        {
+            "key": "repetitions",
+            "value": repetitions
+        },
+        {
+            "key": "orchestrationDeploymentURL",
+            "value": orchestration_deployment_url
+        },
+        {
+            "key": "metrics",
+            "value": metrics_list
+        },
+        {
+            "key": "testDataset",
+            "value": test_datasets
+        },
+        {
+            "key": "orchestrationRegistryIds",
+            "value": orchestration_registry_id
+        },
+        {
+            "key": "debugMode",
+            "value": "ON"
+        }
+    ]
+}
+```
+
+* **Execute Evaluation**: Once the configuration is created, we create the AI Core execution, which triggers the evaluation workload.
+The status of the execution needs to be `COMPLETED` for the workflow to have succeeded.
+The evaluation job produces two outputs:
+1. A SQLite DB file which stores the orchestration input, the orchestration output, the values of all the metrics calculated for this orchestration output, and statistics such as latency. These metric values are called raw metric values. The SQLite DB file is stored in the object store as an AI Core output artifact.
+2. A set of metrics whose values are aggregated from the raw metric values. The aggregate metrics are stored in the tracking service, together with the user-defined tags and the run names.
+After the execution completes, the runs generated by the workload, along with the aggregate metrics, can be viewed by calling the tracking API.
+
+The API endpoints to create an execution and monitor its status can be found here: [Execution Endpoint](https://api.sap.com/api/AI_CORE_API/resource/Execution).
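As a sketch of this step, assuming the header helper and base URL variables from earlier in this tutorial, creating an execution and polling it via the REST API could look like the following. The set of terminal states reflects common AI Core execution statuses:

```python
import time

import requests  # used throughout this tutorial for AI Core REST calls

# Common terminal states of an AI Core execution.
TERMINAL_STATES = {"COMPLETED", "DEAD", "STOPPED"}

def is_terminal(status: str) -> bool:
    """Return True once the execution can no longer make progress."""
    return status in TERMINAL_STATES

def create_execution(base_url: str, headers: dict, configuration_id: str) -> str:
    """POST /v2/lm/executions starts the evaluation workload for a configuration."""
    resp = requests.post(
        f"{base_url}/v2/lm/executions",
        headers=headers,
        json={"configurationId": configuration_id},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["id"]

def wait_for_completion(base_url: str, headers: dict, execution_id: str, poll_seconds: int = 30) -> str:
    """Poll GET /v2/lm/executions/{id} until a terminal status is reached."""
    while True:
        resp = requests.get(
            f"{base_url}/v2/lm/executions/{execution_id}", headers=headers, timeout=120
        )
        status = resp.json()["status"]
        if is_terminal(status):
            return status
        time.sleep(poll_seconds)
```

Polling with a generous interval is usually sufficient here, since an evaluation run takes minutes rather than seconds.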
+
+* **Monitor Execution**: You can monitor the status of the execution until it is `COMPLETED`.
+
+### Step 4: Analyze Evaluation Results
+
+Once the execution is complete, you can analyze the results.
+
+1. **Retrieve Aggregate Metrics**: Fetches the high-level, aggregated metrics from the AI Core Tracking service for each evaluation run. The evaluation job reports the following aggregate statistics.
+
+| statistic | description |
+| --------- | ----------- |
+| average_latency | The average time taken in seconds to get a completion from the orchestration service |
+| completion_count | Number of completions evaluated |
+| total_prompt_tokens | Sum of prompt_tokens of completions |
+| total_completion_tokens | Sum of completion_tokens of completions |
+
+The aggregate metrics can be retrieved by calling the tracking endpoint. There are two ways to do this:
+
+- By using the execution id from the previous step:
+
+  `{base_url}/v2/lm/metrics?tagFilters=evaluation.ai.sap.com/child-of={execution_id}`
+
+- By using the run name used in the dataset:
+
+  `{base_url}/v2/lm/metrics?tagFilters=evaluation.ai.sap.com/run-name={run_name}`
+
+
+For further drill-down, the output artifacts can be downloaded. The results are stored in AWS as an output artifact at the location `object-store-secret-path/`. The output artifact contains the following:
+
+- folders containing results from each step of the workflow execution. The final result is stored in the `sqlite_combined` folder in `results.db`.
+The `results.db` file contains the following tables:
+
+ | table name | description |
+ | ---------- | ----------- |
+ | runs | Stores the prompt templates provided in the dataset |
+ | aggregation_results | Stores the aggregated statistics obtained from the tracking service |
+ | completion | Stores the response body received for the provided configuration |
+ | submissions | Stores the requests that will be sent to the orchestration service |
+ | submissions_results | Stores the output received from the orchestration service |
+ | evaluation_results | Stores the computed results after applying evaluation metrics |
+
+- `Custom_logs`: useful for debugging in case of errors.
+
+
+2. **Download Raw Results**: Downloads the detailed, instance-level results, which are stored in a SQLite database (`results.db`) in your S3 bucket.
+
+3. **View Detailed Results**: The notebook provides code to connect to the downloaded SQLite database and display the contents of various tables, including:
+    * `run`: Information about each run.
+    * `configuration`: The configuration used for the run.
+    * `submission` & `submission_result`: Details of the requests and responses to the LLM.
+    * `evaluation_result`: The raw, per-instance metric scores.
+    * `aggregation_result`: The aggregated results for the entire run.
+4. **Process and Rank Results**: The notebook includes scripts to process the raw results further:
+    * It calculates mean and standard deviation for numerical metrics.
+    * It processes categorical and boolean metrics by applying a scoring system.
+    * Finally, it combines all the processed metrics and provides a weighted ranking of the different runs, helping you identify the best-performing model and prompt configuration based on your criteria.
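To illustrate the processing in step 4, the toy sketch below builds an in-memory SQLite database shaped roughly like `results.db` (the table and column names are assumptions for the demo, not the exact schema) and ranks runs by the mean of a numeric metric:

```python
import sqlite3
import statistics

# Build a throwaway database shaped like results.db; real column names may differ.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE evaluation_results (run_name TEXT, metric TEXT, value REAL)")
con.executemany(
    "INSERT INTO evaluation_results VALUES (?, ?, ?)",
    [
        ("run-a", "pointwise_correctness", 5.0),
        ("run-a", "pointwise_correctness", 4.0),
        ("run-b", "pointwise_correctness", 3.0),
        ("run-b", "pointwise_correctness", 3.0),
    ],
)

# Collect raw per-instance scores per run.
summary = {}
for run, _metric, value in con.execute(
    "SELECT run_name, metric, value FROM evaluation_results"
):
    summary.setdefault(run, []).append(value)

# Rank runs by mean score (descending), keeping the stdev as a consistency signal.
ranked = sorted(
    ((run, statistics.mean(vals), statistics.stdev(vals)) for run, vals in summary.items()),
    key=lambda row: row[1],
    reverse=True,
)
print(ranked)  # run-a (mean 4.5) ranks above run-b (mean 3.0)
```

The notebook's weighted ranking generalizes this idea across several metrics at once, after converting categorical and boolean metrics to numeric scores.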
diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/ai-core-genaihub-evaluation-comprehensive.md b/tutorials/ai-core-genaihub-evaluation-comprehensive/ai-core-genaihub-evaluation-comprehensive.md new file mode 100644 index 0000000000..5e2353108c --- /dev/null +++ b/tutorials/ai-core-genaihub-evaluation-comprehensive/ai-core-genaihub-evaluation-comprehensive.md @@ -0,0 +1,2186 @@ +--- +parser: v2 +auto_validation: true +time: 45 +primary_tag: software-product>sap-ai-core +tags: [ tutorial>beginner, topic>artificial-intelligence, topic>machine-learning, software-product>sap-ai-core ] +author_name: Smita Naik +author_profile: https://github.com/I321506 +--- + +# Custom Evaluation for Generative AI – Comprehensive Guide + This tutorial demonstrates how to use SAP AI Core Custom Evaluation to benchmark Large Language Models (LLMs) using **Orchestration Registry**. It guides you through environment setup, configuration creation, execution, and result analysis in a unified and simplified workflow. + +It extends the Quick Start tutorial and is intended for Application Developers and Data Scientists who already know the basics of GenAI workflows in SAP AI Core. + +## You will learn +- How to prepare and organize datasets for evaluation. +- How to configure and run evaluations in SAP AI Core. +- How to analyze and interpret aggregated evaluation results. + +## Prerequisites +1. **BTP Account** + Set up your SAP Business Technology Platform (BTP) account. + [Create a BTP Account](https://developers.sap.com/group.btp-setup.html) +2. **For SAP Developers or Employees** + Internal SAP stakeholders should refer to the following documentation: [How to create BTP Account For Internal SAP Employee](https://me.sap.com/notes/3493139), [SAP AI Core Internal Documentation](https://help.sap.com/docs/sap-ai-core) +3. 
**For External Developers, Customers, or Partners**
+   Follow this tutorial to set up your environment and entitlements: [External Developer Setup Tutorial](https://developers.sap.com/tutorials/btp-cockpit-entitlements.html), [SAP AI Core External Documentation](https://help.sap.com/docs/sap-ai-core?version=CLOUD)
+4. **Create BTP Instance and Service Key for SAP AI Core**
+   Follow the steps to create an instance and generate a service key for SAP AI Core:
+   [Create Service Key and Instance](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/create-service-key?version=CLOUD)
+5. **AI Core Setup Guide**
+   Step-by-step guide to set up and get started with SAP AI Core:
+   [AI Core Setup Tutorial](https://developers.sap.com/tutorials/ai-core-setup.html)
+6. An Extended SAP AI Core service plan is required, as the Generative AI Hub is not available in the Free or Standard tiers. For more details, refer to
+[SAP AI Core Service Plans](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/service-plans?version=CLOUD)
+7. **Orchestration Deployment**
+   Ensure at least one orchestration deployment is ready to be consumed during this process.
+Refer to [this tutorial to understand the basic consumption of GenAI models using orchestration](https://developers.sap.com/tutorials/ai-core-orchestration-consumption.html).
+8. **Basic Knowledge**
+   Familiarity with the orchestration workflow is recommended.
+9. **Install Dependencies**
+   Install the required Python packages using the requirements.txt file provided.
+Download [requirements.txt](img/requirements.txt)
+
+💡 Right-click the link above and choose **"Save link as..."** to download it directly.
+
+## Pre-Read
+
+This tutorial showcases how you can use AI Core custom evaluation to benchmark large language models and evaluate orchestration configurations or prompts for your use case.
+It uses the publicly available [MedicationQA dataset](https://langtest.org/docs/pages/benchmarks/medical/medicationqa/), which consists of commonly asked consumer questions about medications. The workload computes industry-standard metrics to check the reliability of the responses generated by the LLM.
+
+### Environment Variables Setup
+
+[OPTION BEGIN [SAP AI Launchpad]]
+
+- Navigate to your SAP AI Core Launchpad.
+
+- In the Workspaces section, click on "Add" to create a new workspace.
+  - A workspace in SAP AI Core is a logical container that holds your resources (like models and pipelines) and provides the isolation needed for your projects.
+
+- When prompted, enter your AI Core credentials (such as Client ID, Client Secret, and Base URL).
+  - Note: If you're unsure about where to find these credentials, refer to this [guide](https://developers.sap.com/tutorials/ai-core-generative-ai.html#1c4f36d7-f345-4822-be00-c15f133ff7d8).
+
+- Once the workspace is successfully created, select your desired Resource Group to begin the evaluation process.
+
+Refer to the screenshot below for guidance:
+![img](img/image_34.png)
+
+[OPTION END]
+
+[OPTION BEGIN [Python]]
+
+- Open **Visual Studio Code or Jupyter Notebook**. Create a new file with the .ipynb extension (e.g., custom_evaluation.ipynb).
+- Create a **.env** file in the root directory of your project.
+- Add your **AI Core** and **AWS credentials** as shown below.
+
+```env
+# AICORE CREDENTIALS
+AICORE_CLIENT_ID=
+AICORE_CLIENT_SECRET=
+AICORE_AUTH_URL=
+AICORE_BASE_URL=
+AICORE_RESOURCE_GROUP=
+
+# AWS CREDENTIALS
+AWS_ACCESS_KEY=
+AWS_BUCKET_ID=
+AWS_REGION=
+AWS_SECRET_ACCESS_KEY=
+
+# ORCHESTRATION DEPLOYMENT URL
+DEPLOYMENT_URL=
+```
+
+**Note:** Replace the placeholders (e.g., CLIENT_ID, CLIENT_SECRET, etc.) with your actual environment credentials.
+ +Refer to the below screenshot for clarity: +![img](img/image_1.png) + +#### Install Dependencies + +Install the required packages using the [requirements.txt](img/requirements.txt) file you downloaded in the Prerequisites section. +```bash +pip install -r requirements.txt +``` +#### Connect to AI Core Instance + +Once the environment variables are set and dependencies are installed, run the following code to connect to your instance: + +```PYTHON +# Loading the credentials from the env file +from gen_ai_hub.proxy.gen_ai_hub_proxy import GenAIHubProxyClient +from dotenv import load_dotenv +import os + +load_dotenv(override=True) + +# Fetching environment variables +AICORE_BASE_URL = os.getenv("AICORE_BASE_URL") +AICORE_RESOURCE_GROUP = os.getenv("AICORE_RESOURCE_GROUP") +AICORE_AUTH_URL = os.getenv("AICORE_AUTH_URL") +AICORE_CLIENT_ID = os.getenv("AICORE_CLIENT_ID") +AICORE_CLIENT_SECRET = os.getenv("AICORE_CLIENT_SECRET") + +AWS_ACCESS_KEY = os.getenv("AWS_ACCESS_KEY") +AWS_BUCKET_ID = os.getenv("AWS_BUCKET_ID") +AWS_REGION = os.getenv("AWS_REGION") +AWS_SECRET_ACCESS_KEY = os.getenv("AWS_SECRET_ACCESS_KEY") +DEPLOYMENT_URL = os.getenv("DEPLOYMENT_URL") + +# Initializing the GenAIHubProxyClient +client = GenAIHubProxyClient( + base_url=AICORE_BASE_URL, + auth_url=AICORE_AUTH_URL, + client_id=AICORE_CLIENT_ID, + client_secret=AICORE_CLIENT_SECRET, + resource_group=AICORE_RESOURCE_GROUP +) +``` + +**NOTE:** +- Ensure the **requirements.txt** installation completes successfully before running the code. +- If you face any issues, recheck your **.env** values and installed packages. 
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+
+- Download the [Bruno_collections](img/AI_Core.json) file
+
+- Please follow the steps in the [Tutorial](https://developers.sap.com/tutorials/ai-core-orchestration-consumption.html) to set up your environment: refer to the step **Set Up Your Environment and Configure Access** and proceed until the token is generated
+
+[OPTION END]
+
+### Preparing Dataset Files
+
+[OPTION BEGIN [SAP AI Launchpad]]
+
+> **Note:** This step involves local setup using Python and does not require any action on the SAP AI Launchpad.
+
+[OPTION END]
+
+[OPTION BEGIN [Python]]
+
+In this step, the evaluation notebook dynamically detects the dataset file from a predefined folder structure.
+You are not required to hardcode the dataset filename.
+
+```Python
+import os
+import json
+
+
+def get_dataset_file_name(folder_path):
+    """
+    Retrieves the name of the first file in the specified folder.
+    """
+    if not os.path.isdir(folder_path):
+        print(f"The folder path '{folder_path}' does not exist.")
+        return None
+
+    items_in_folder = os.listdir(folder_path)
+
+    for item in items_in_folder:
+        item_path = os.path.join(folder_path, item)
+        if os.path.isfile(item_path):
+            return item
+
+    print(f"No files were found in the folder '{folder_path}'.")
+    return None
+
+
+# --- MAIN EXECUTION ---
+DATASET_FOLDER = "../DATASET"
+
+DATASET_NAME = get_dataset_file_name(DATASET_FOLDER)
+
+if DATASET_NAME:
+    print(f"Dataset name: {DATASET_NAME}")
+else:
+    print("Missing run or dataset file.")
+    raise SystemExit("Exiting due to missing run/dataset file.")
+```
+
+![img](img/image_py_dtst.png)
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+
+> **Note:** This step involves local setup using Python and does not require any action in Bruno.
+
+[OPTION END]
+
+### Registering an Object Store Secret in AI Core
+
+[OPTION BEGIN [SAP AI Launchpad]]
+
+- Open the **SAP AI Core Launchpad** and navigate to the **Administration** tab.
+- Select the **Object Store** section from the left-hand menu.
+- Click on **“Add”** to register a new object store secret.
+- Fill in the required bucket details as shown in the screenshot below.
+
+![img](img/image_33.png)
+
+In the **Secret** field, use the following structure to provide your AWS credentials:
+
+```json
+{
+  "AWS_ACCESS_KEY_ID": "Enter Your value",
+  "AWS_SECRET_ACCESS_KEY": "Enter Your value"
+}
+```
+
+[OPTION END]
+
+[OPTION BEGIN [Python]]
+
+To make your evaluation files available for AI Core orchestration, you need to:
+
+- Upload them to an object store (e.g., AWS S3).
+- Register the object store secret in AI Core.
+
+#### **Setup Authentication and Headers**
+
+First, define the authentication headers for AI Core REST API calls.
+
+```PYTHON
+def _get_headers():
+    headers = {
+        "Authorization": client.get_ai_core_token(),
+        "AI-Resource-Group": AICORE_RESOURCE_GROUP,
+        "Content-Type": "application/json",
+    }
+    return headers
+```
+
+#### **Register Object Store Secret in AI Core**
+
+Register your S3 bucket and credentials as a secret.
+
+```PYTHON
+# Register the S3 secret with AI Core, which will be used as an input source
+import requests
+import json
+import logging
+
+def delete_oss_secret(oss_name=""):
+    headers = _get_headers()
+
+    DELETE_SECRETS_ENDPOINT = f'/v2/admin/objectStoreSecrets/{oss_name}'
+    request_url = f"{AICORE_BASE_URL}{DELETE_SECRETS_ENDPOINT}"
+
+    try:
+        response = requests.delete(request_url, headers=headers, timeout=120)
+        if response.status_code == 202:
+            print(f"Successfully deleted object store secret: {oss_name}")
+        elif response.status_code == 404:
+            print(f"Object store secret not found: {oss_name}. It may not exist.")
+        else:
+            logging.error(f"Failed to delete object store secret: {oss_name}, Status Code: {response.status_code}")
+    except Exception as e:
+        logging.error(f"Error occurred while attempting to delete object store secret: {e}")
+        raise
+
+def register_oss_secret(oss_name="", path_prefix=""):
+    headers = _get_headers()
+
+    POST_SECRETS_ENDPOINT = '/v2/admin/objectStoreSecrets'
+    request_url = f"{AICORE_BASE_URL}{POST_SECRETS_ENDPOINT}"
+
+    request_body = {
+        "name": oss_name,
+        "data": {
+            "AWS_ACCESS_KEY_ID": AWS_ACCESS_KEY,
+            "AWS_SECRET_ACCESS_KEY": AWS_SECRET_ACCESS_KEY
+        },
+        "type": "S3",
+        "bucket": AWS_BUCKET_ID,
+        "endpoint": "s3-eu-central-1.amazonaws.com",
+        "region": AWS_REGION,
+        "pathPrefix": path_prefix,
+        "verifyssl": "0",
+        "usehttps": "1",
+    }
+    try:
+        response = requests.post(
+            request_url, headers=headers, data=json.dumps(request_body), timeout=120
+        )
+        result = response.json()
+        return result
+    except Exception:
+        logging.error("Error occurred while attempting to create object store secret")
+        raise
+
+delete_oss_secret(oss_name="default")
+delete_oss_secret(oss_name="genai-simplified-notebook")
+
+register_oss_secret(oss_name="default", path_prefix="")
+register_oss_secret(oss_name="genai-simplified-notebook", path_prefix="")
+```
+
+![img](img/image_objsec.png)
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+
+Object store secrets securely store the AWS S3 credentials required for data access.
+
+• Expand **objectStoreSecrets** under admin and select the create a secret request.
+
+Use the payload below to create a secret for AWS S3 with NoAuthentication as the authentication type.
+
+```CODE
+{
+  "name": "genai-data",
+  "data": {
+    "AWS_ACCESS_KEY_ID": "",
+    "AWS_SECRET_ACCESS_KEY": ""
+  },
+  "type": "S3",
+  "bucket": "",
+  "endpoint": "",
+  "region": "",
+  "pathPrefix": ""
+}
+```
+• Ensure that all values in the `data` dictionary are Base64-encoded, as required for AWS S3 credentials.
+
+![img](img/image-br01.png)
+
+[OPTION END]
+
+> ⚠️ **Important Note (Must Read)**
+>
+> - You must **create an object store secret** with a user-defined name (for example, `default`) to store **output artifacts** from orchestration runs. This is **mandatory**.
+> - For **input artifacts**, you may create additional object store secrets with different names if needed.
+> - If this secret is not configured, orchestration runs will **fail** because no output target is set up.
+
+
+### Upload and Register Dataset
+
+[OPTION BEGIN [SAP AI Launchpad]]
+
+After creating the secret, upload your evaluation files to the S3 bucket and register them as an artifact in AI Core.
+
+#### **Register Uploaded Files as Artifact in AI Core**
+
+To register your evaluation dataset with SAP AI Core, you need to register it as an artifact. Follow the instructions below using the **SAP AI Launchpad UI**.
+
+---
+
+- Open the **SAP AI Core Launchpad**.
+- Navigate to the **Generative AI/Optimization/Artifacts** section to create a dataset artifact.
+
+![img](img/image_19.png)
+
+- On the **Artifacts** section, click **Add**.
+
+---
+
+- On the **General Information** screen, enter the following:
+
+    - **Select Scenario:** `genai-evaluations`
+    - **Name:** `genai-eval-test-data`
+    - **Description:** `Demo artifacts for evaluation flow.`
+    - **Select Object Store:** `genai-data`
+    - **Sub-folder path:** `genaiEvaluation/`
+
+    > 💡 Replace `` with your **SAP BTP user ID** or the folder path in your object store where the evaluation files are uploaded.
+
+- On the **Labels** screen, click **“Add Label”** and provide the following:
+
+    - **Key:** `prompt-evaluation`
+    - **Value:** `true`
+    *(Note: The prefix `ext.ai.sap.com/` is automatically pre-filled in the UI.)*
+
+    ![img](img/image_21.png)
+
+- Review all entered details carefully.
+- Click **“Add”** to complete the artifact registration.
+
+[OPTION END]
+
+[OPTION BEGIN [Python]]
+
+After creating the secret, organize your evaluation files into a local test-data folder, upload them to S3, and register them as an artifact in AI Core.
+
+#### **Upload Files to S3 Bucket**
+```python
+# Upload these files to the object store so they can be registered as an artifact in AI Core.
+
+import boto3
+import os
+import uuid
+
+def upload_folder_to_s3(folder_path, bucket_name, s3_prefix=""):
+    """
+    Upload a folder to an S3 bucket recursively.
+
+    :param folder_path: The local folder path to upload.
+    :param bucket_name: The name of the S3 bucket.
+    :param s3_prefix: Optional prefix to use for the S3 keys (e.g., subfolder in the bucket).
+    """
+    s3_client = boto3.client(
+        's3',
+        aws_access_key_id=AWS_ACCESS_KEY,
+        aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
+        region_name=AWS_REGION
+    )
+
+    for root, dirs, files in os.walk(folder_path):
+        for file_name in files:
+            local_path = os.path.join(root, file_name)
+            # Compute the relative path for the S3 key
+            relative_path = os.path.relpath(local_path, folder_path)
+            s3_key = os.path.join(s3_prefix, relative_path).replace("\\", "/") # Ensure S3-compatible paths
+            print(f"Uploading {local_path} to s3://{bucket_name}/{s3_key}")
+
+            # Upload the file
+            s3_client.upload_file(local_path, bucket_name, s3_key)
+
+# Example usage
+folder_to_upload_testdata = "../DATASET"
+user_directory_prefix = "" # replace with your i-number as a string; leave empty to generate a GUID
+prefix_guid = user_directory_prefix if user_directory_prefix else uuid.uuid4().hex
+s3_testdata_prefix = f"genaiEvaluation/{prefix_guid}/testdata" # Leave empty for root of the bucket
+
+
+upload_folder_to_s3(folder_to_upload_testdata, AWS_BUCKET_ID, s3_testdata_prefix)
+input_artifact_path = f"ai://genai-simplified-notebook/genaiEvaluation/{prefix_guid}"
+```
+ ![img](img/image_5.png)
+
+#### **Register Uploaded Files as Artifact in AI Core**
+
+```Python
+import requests
+import logging
+# Register the uploaded files from AWS as an artifact to use inside the configuration.
+
+def register_artifact():
+    headers = _get_headers()
+
+    GET_ARTIFACTS_ENDPOINT = '/v2/lm/artifacts'
+    request_url = f"{AICORE_BASE_URL}{GET_ARTIFACTS_ENDPOINT}"
+
+    request_body = {
+        "labels": [
+            {
+                "key": "ext.ai.sap.com/prompt-evaluation",
+                "value": "true"
+            }
+        ],
+        "name": "genai-eval-simplified-test-data",
+        "kind": "other",
+        "url": input_artifact_path, # input artifact path
+        "description": "demo artifacts for evaluation flow.",
+        "scenarioId": "genai-evaluations"
+    }
+    try:
+        response = requests.post(
+            request_url, headers=headers, data=json.dumps(request_body), timeout=120
+        )
+        result = response.json()
+        print(result)
+        return result['id']
+    except Exception:
+        print("Error occurred while attempting to register the artifact")
+        raise
+
+artifact_id = register_artifact()
+```
+![img](img/image_6.png)
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+
+Before registering a dataset artifact in Bruno, you must upload your CSV file to the SAP AI Core object store using the Dataset API.
+Bruno cannot upload files directly to S3; therefore, this step is required.
+
+**Prerequisites**
+
+  - An object store secret must already exist in your resource group. Typically, this is the default secret named **default**.
+
+  - The Dataset API currently supports:
+
+    - S3 object stores only
+
+    - CSV file uploads
+
+**Upload Your Dataset**
+
+Use the Dataset API – Upload File request in Bruno:
+
+```bash
+PUT {{ai_api_url}}/v2/lm/dataset/files/{{secretName}}/{{datasetPath}}
+```
+
+**Headers**
+
+```
+Authorization: Bearer {{token}}
+AI-Resource-Group: {{resourceGroup}}
+Content-Type: text/csv
+```
+
+**Body**
+
+Upload your .csv file directly as binary in Bruno’s Body.
+
+Example Path Values:
+
+  - secretName: default
+
+  - datasetPath: testdata/medicalqna_dataset.csv
+
+![img](img/image_br_dt.png)
+
+**Note:**
+
+Save the ai://… URL; you will use it when creating the dataset artifact.
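If you prefer scripting this step instead of using Bruno, the same Dataset API call can be issued with Python's `requests` library. The sketch below mirrors the request above; the helper names and all placeholder values are illustrative, not part of an official SDK:

```python
def dataset_upload_url(base_url, secret_name, dataset_path):
    """Compose the Dataset API upload endpoint shown above."""
    return f"{base_url}/v2/lm/dataset/files/{secret_name}/{dataset_path}"

def upload_dataset_csv(base_url, token, resource_group,
                       secret_name, dataset_path, local_file):
    """PUT a local CSV file to the object store via the Dataset API."""
    import requests  # imported here so the helper has no import-time dependency

    with open(local_file, "rb") as f:
        resp = requests.put(
            dataset_upload_url(base_url, secret_name, dataset_path),
            headers={
                "Authorization": f"Bearer {token}",
                "AI-Resource-Group": resource_group,
                "Content-Type": "text/csv",
            },
            data=f,
            timeout=300,
        )
    resp.raise_for_status()
    # The response body contains the ai://... URL needed for artifact registration.
    return resp.json()
```

Call it with your AI API base URL, bearer token, and resource group, and keep the returned `ai://…` URL for the artifact registration step below.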
+
+**Register the Dataset Artifact**
+
+- Click **Register artifact** under **lm → artifacts** in the Bruno collection.
+
+```CODE
+{
+    "name": "aiconfig",
+    "kind": "dataset",
+    "url": "ai://default/testdata/medicalqna_dataset.csv",
+    "scenarioId": "genai-evaluations"
+}
+```
+![img](img/image-br02.png)
+
+[OPTION END]
+
+### Approach Selection – How to Provide Prompts (Pre-Read)
+
+In this evaluation workflow, prompts can be provided in two different ways.
+Before proceeding, understand the available approaches and choose the one that fits your requirement.
+
+**🔹 Option 1 – Prompt Template + Model (Prompt Registry)**
+
+  - The prompt is stored in the Prompt Registry
+
+  - The model is referenced directly in the evaluation configuration
+
+  - Prompts are reusable and version-controlled
+
+  - Best suited for standardized or production-grade workflows
+
+**📌 When to use this?**
+
+If you want reusable, versioned prompts that can be managed independently.
+
+👉 If you would like to see this approach in action, refer to the [Evaluation Quickstart tutorial](LINK TO ADD), where we demonstrate the Prompt Registry method.
+
+**🔹 Option 2 – Orchestration Registry (Inline Prompt)**
+
+  - The prompt is defined directly inside the orchestration configuration
+
+  - No separate prompt registry entry is required
+
+  - Ideal for ad-hoc, experimental, or one-time evaluations
+
+**📌 When to use this?**
+
+If the prompt is specific to this evaluation and does not need reuse or versioning.
+
+### Create a Prompt Template in Orchestration Registry
+
+In this tutorial, we will use the **Orchestration Registry (Inline Prompt)** approach.
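To make the inline-prompt idea concrete: the template text carries a `{{?question}}` placeholder that the evaluation run fills in from each row of your test dataset. The snippet below is only a conceptual illustration of that substitution, not the orchestration service's actual implementation:

```python
import re

def render_template(template: str, params: dict) -> str:
    """Naively fill {{?name}} placeholders, mimicking how the inline
    prompt is instantiated once per test-data row."""
    return re.sub(r"\{\{\?(\w+)\}\}", lambda m: str(params[m.group(1)]), template)

prompt = ("List the benefits and side effects of the drug in the "
          "following consumer health question: {{?question}}.")
print(render_template(prompt, {"question": "Does aspirin help with headaches?"}))
```

Each dataset row supplies its own `question` value, so one template produces one rendered prompt per test sample.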
+
+**Create Orchestration Registry Configuration**
+
+[OPTION BEGIN [SAP AI Launchpad]]
+
+Go to Generative AI Hub → Orchestration → Orchestration Configurations
+
+- Click **Create**.
+
+- In templating, add the following prompt:
+
+```
+List the benefits and side effects of the drug in the following consumer health question: {{?question}}.
+```
+![img](img/image_ail_or1.png)
+
+- Select the model in the model configuration and save the orchestration configuration.
+
+![img](img/image_ail_or2.png)
+
+![img](img/image_ail_or3.png)
+
+[OPTION END]
+
+[OPTION BEGIN [Python]]
+
+The following code defines a function `create_orchestration_registry_config()` that creates a new **Orchestration Configuration** in the **Orchestration Registry**.
+
+```python
+def create_orchestration_registry_config():
+    headers = _get_headers()
+    prompt_template = {
+        "template": [
+            {
+                "role": "user",
+                "content": "List the benefits and side effects of the drug in the following consumer health question: {{?question}}."
+            }
+        ]
+    }
+    CREATE_ORCHESTRATION_REGISTRY = '/v2/registry/v2/orchestrationConfigs'
+    request_url = f"{AICORE_BASE_URL}{CREATE_ORCHESTRATION_REGISTRY}"
+    model_name, model_version = selected_models_str.split(":")
+    request_body = {
+        "name": "genai-eval-test",
+        "version": "1.0.0",
+        "scenario": "genai-evaluations",
+        "spec": {
+            "modules": {
+                "prompt_templating": {
+                    "model": {
+                        "name": model_name,
+                        "version": model_version
+                    },
+                    "prompt": prompt_template
+                }
+            }
+        }
+    }
+    try:
+        response = requests.post(
+            request_url, headers=headers, data=json.dumps(request_body), timeout=120
+        )
+        if response.status_code != 200:
+            print(response.json())
+            raise RuntimeError(f"Unexpected status code: {response.status_code}")
+        result = response.json()
+        print(result)
+        return result['id']
+    except Exception:
+        logging.error("Error occurred while attempting to create an orchestration registry configuration")
+        raise
+orchestration_registry_id = create_orchestration_registry_config()
+```
+
+![img](img/image_py_or1.png)
+
+**Note**: If you wish to use an existing orchestration config,
skip executing this cell and add the orchestration config id in `orchestration_registry_id` string in the next cell. + +[OPTION END] + +[OPTION BEGIN [Bruno]] + +You can paste this directly into a Bruno .bru file or create a new request inside Bruno. + +**Url:** +```bash +POST {{AICORE_BASE_URL}}/v2/registry/v2/orchestrationConfigs +``` + +**headers:** +``` +{ + Authorization: Bearer {{token}} + AI-Resource-Group: {{resource_group}} + Content-Type: application/json + } +``` + +**body:** +```json +{ + "name": "genai-eval-test", + "version": "1.0.0", + "scenario": "genai-evaluations", + "spec": { + "modules": { + "prompt_templating": { + "model": { + "name": "model_name", + "version": "model_version" + }, + "prompt": { + "template": [ + { + "role": "user", + "content": "List the benefits and side effects of the drug in the following consumer health question: {{?question}}." + } + ], + "defaults": {} + } + } + } + } +} +``` + +![img](img/image_br_or1.png) + +[OPTION END] + +### Understanding Metrics (Pre-Read) + +Metrics determine how your model outputs are evaluated during an evaluation run. They define the scoring logic that SAP AI Core uses to compare models, measure quality, and validate improvements over time. + +In SAP AI Core, metrics are configured during the **Create Evaluation Configuration** step: + +```json +"metrics": "Content Filter on Input,Pointwise Instruction Following,Content Filter on Output" +``` + +You can specify one or multiple metrics (comma-separated). + +#### Types of Metrics + +SAP AI Core supports two major types: + +1. System-defined Metrics (Ready to use) + +2. Custom Metrics (User-defined) + + +**1. System-defined Metrics** + +These are built-in metrics provided by SAP AI Core. No additional setup required. + +They are grouped into two categories: + +**Computed Metrics** + +These use reference data, schema validation, or deterministic logic. 
+
+| Name | Description | Reference required |
+|------|-------------|--------------------|
+| BERT Score | https://huggingface.co/spaces/evaluate-metric/bertscore | Yes |
+| BLEU | https://huggingface.co/spaces/evaluate-metric/bleu | Yes |
+| ROUGE | https://huggingface.co/spaces/evaluate-metric/rouge | Yes |
+| JSON Schema Match | Validates the LLM-generated response against a predefined JSON schema; returns a boolean result. | Yes |
+| Content Filter on Input | Whether the orchestration input was rejected by the input filter | No |
+| Content Filter on Output | Whether the orchestration output was rejected by the output filter | No |
+| Exact Match | Whether the output exactly matches the reference | Yes |
+| Language Match | Returns true/false to indicate whether the text matches the given language | No |
+
+👉 Use computed metrics when:
+
+  - You have ground truth/reference answers
+
+  - You need deterministic validation
+
+  - You want schema validation
+
+**Model-as-a-judge metrics**
+
+These use a judge LLM to evaluate responses qualitatively.
+
+| Name | Description | Reference required |
+|------|-------------|--------------------|
+| Pointwise Instruction Following | Assesses the model's ability to follow instructions provided in the user prompt | No |
+| Pointwise Correctness | Assesses the model's ability to provide a correct response based on the user prompt | Yes |
+| Pointwise Answer Relevance | Assesses whether the model's response is relevant to the user prompt | No |
+| Pointwise Conciseness | Assesses whether the model's response is a short and concise answer to the user prompt | No |
+
+*Entries marked with an asterisk (*) are experimental metrics.
+
+👉 Use model-as-a-judge metrics when:
+
+  - You need qualitative evaluation
+
+  - No exact ground truth exists
+
+  - You want human-like evaluation logic
+
+#### Custom Metrics (User-defined metrics)
+
+When system metrics are insufficient, you can define your own metric.
+
+Custom metrics can be used to evaluate LLM outputs according to the unique needs of a use case. A user-defined LLM-as-a-judge metric uses a judge LLM along with a rubric to compute a metric rating. The output of an LLM-as-a-judge metric can be numeric or text.
+
+The system defines a structure for the judge prompts, and users provide the metric definition in this predefined format. Relevant instructions, such as output instructions, are automatically added to ensure the desired output from the LLM.
+
+**Custom Metric Definition Structure**
+
+```json
+{
+  "scenario": "genai-evaluations",
+  "metricName": "my_custom_metric",
+  "version": "0.0.1",
+  "type": "structured",
+  "model_configuration": {
+    "model_name": "string",
+    "model_version": "string"
+  },
+  "prompt_configuration": {
+    "evaluation_task": "Describe the goal of this evaluation.",
+    "criteria": "Explain how evaluation is performed.",
+    "rating_rubric": [
+      {
+        "rating": 1,
+        "rule": "Poor quality response"
+      },
+      {
+        "rating": 5,
+        "rule": "Excellent response"
+      }
+    ],
+    "include_properties": ["prompt", "reference"],
+    "examples": [
+      {
+        "prompt": "Sample prompt",
+        "response": "Sample response",
+        "reference": "Expected answer",
+        "rating": 5,
+        "explanation": "Why this rating was given"
+      }
+    ]
+  }
+}
+```
+**NOTE**: `scenario`, `metricName`, and `version` are required parameters for a custom metric in the evaluation configuration.
+
+**NOTE**: At least one prompt must be provided: a system prompt, a user prompt, or both.
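Since `scenario`, `metricName`, and `version` are mandatory, a small client-side check before posting a definition can save a failed round trip. The helper below is a sketch for that purpose only; it is not part of any SAP SDK:

```python
def missing_required_fields(metric: dict) -> list:
    """Return the required custom-metric fields that are absent or empty."""
    required = ("scenario", "metricName", "version")
    return [field for field in required if not metric.get(field)]

definition = {
    "scenario": "genai-evaluations",
    "metricName": "my_custom_metric",
    "version": "0.0.1",
}
problems = missing_required_fields(definition)
if problems:
    raise ValueError(f"Custom metric definition is missing: {problems}")
```

Run the check on each definition before registering it; an empty list means the mandatory identifiers are present.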
+ +**Model Availability Notice** + +⚠️ If gpt-4.1 (2025-04-14) is not available in your region: + + - LLM-as-a-Judge metrics cannot be executed + + - Evaluation service depends on this specific model version + + +### Providing Models and Metrics for Evaluation + +Metrics determine how your model outputs are evaluated during an evaluation run. They define the scoring logic that SAP AI Core uses to compare models, measure quality, and validate improvements over time. + +Metrics must be supplied before creating an Evaluation Configuration. + +[OPTION BEGIN [SAP AI Launchpad]] + +In SAP AI Launchpad, metrics are selected visually during the Evaluation Configuration creation flow. + +You can choose: + + - System-defined metrics + + - Custom metrics (your own definitions stored in the metric registry — cannot be created directly in AI Launchpad; to use them, register them via API/Bruno mentioned in the same step and then select them in the Evaluation Configuration) + +No manual JSON input is needed—the UI provides a selectable list of available metrics. + +1. Go to Generative AI Hub → Optimization. + +2. Click Create to start a new evaluation configuration. + +![img](img/image_25.png) + +- In Select Test Input section, + + - select orchestration configuration + + - Select your registered dataset artifact + + - Enter the dataset path (example): + testdata/medicalqna_dataset.csv + + - Set the number of test samples (e.g., 20) + + ![img](img/image_26.png) + +- Click **Next** to go to Metrics selection. + +#### Select Evaluation Metrics + +Choose the metrics you want to evaluate. 
+ +You may choose one or multiple system-defined or custom metrics—examples: + + - BERT Score + + - Content Filter on Input + + - Pointwise Instruction Following + + - Content Filter on Output + +![img](img/image_27.png) + +--- + +> 📘 **Helpful Resources**: +> +> - [System-Defined Evaluation Metrics – SAP Documentation](https://help.sap.com/docs/sap-ai-core/generative-ai-hub/system-defined-evaluation-metrics) +> - [Define Your Own Custom Metrics – SAP Guide](https://help.sap.com/docs/sap-ai-core/generative-ai-hub/custom-metrics) +> *(If your evaluation requires domain-specific or advanced scoring logic)* + +> **Note: You may select additional metrics based on your use case.** + +--- + +[OPTION END] + +[OPTION BEGIN [Python]] + +**Select your Models** + +Add the models you wish to use in the string `selected_models_str` + +```Python +# Manual selection of models +selected_models_str="gemini-2.5-pro:001" +print("Selected models string:", selected_models_str) +``` + +**Metrics Handling in Python Notebook** + +When running the evaluation through the Python notebook, metric setup is partially automated. +Before the evaluation configuration is created, the script performs the following: + + - Users can manually specify metric IDs + + - Or can pass custom metrics JSON directly + + - It checks if each metric already exists in AI Core + + - If not found → creates it automatically + + - Prints final list of metric IDs used for evaluation + +This ensures all metrics exist before the evaluation configuration is created. 
+ +```Python +user_metric_ids = "d18******************d1f,dbf56**********210c7e771" + +custom_metric_list = [ + { + "name": "test-metric", + "scenario": "genai-evaluations-test", + "version": "0.0.1", + "evaluationMethod": "llm-as-a-judge", + "managedBy": "imperative", + "systemPredefined": False, + "metricType": "evaluation", + "spec": { + "outputType": "numerical", + "promptType": "structured", + "configuration": { + "modelConfiguration": { + "name": "gpt-4.1-mini", + "version": "2025-08-07", + "parameters": [ + { + "key": "max_tokens", + "value": "10000" + } + ] + }, + "promptConfiguration": { + "definition": "You are an expert evaluator. Your task is to evaluate the quality of the responses generated by AI models. We will provide you with a reference and an AI-generated response. You should first read the user input carefully for analyzing the task, and then evaluate the quality of the responses based on the criteria provided in the Evaluation section below. You will assign the response a rating following the Rating Rubric and Evaluation Steps. Give step-by-step explanations for your rating, and only choose ratings from the Rating Rubric.\n\n## Metric Definition\nYou are an INFORMATION OVERLAP classifier providing the overlap of information between a response and reference.\n\n## Criteria\nGroundedness: The of information between a response generated by AI models and provided reference.\n\n## Rating Rubric\n5: (Fully grounded). The response and the reference are fully overlapped.\n4: (Mostly grounded). The response and the reference are mostly overlapped.\n3: (Somewhat grounded). The response and the reference are somewhat overlapped.\n2: (Poorly grounded). The response and the reference are slightly overlapped.\n1: (Not grounded). There is no overlap between the response and the reference.\n\n## Evaluation Steps\nSTEP 1: Assess the response in aspects of Groundedness. 
Identify any information in the response and provide assessment according to the Criteria.\nSTEP 2: Score based on the rating rubric. Give a brief rationale to explain your evaluation considering Groundedness.\n\nReference: {{?reference}}\nResponse: {{?aicore_llm_completion}}\n\nBegin your evaluation by providing a short explanation. Be as unbiased as possible. After providing your explanation, please rate the response according to the rubric and outputs STRICTLY following this JSON format:\n\n{ \"explanation\": string, \"rating\": integer }\n\nOutput:\n", + "evaluationTask": "You are an expert evaluator. Your task is to evaluate the quality of the responses generated by AI models. We will provide you with a reference and an AI-generated response. You should first read the user input carefully for analyzing the task, and then evaluate the quality of the responses based on the criteria provided in the Evaluation section below. You will assign the response a rating following the Rating Rubric and Evaluation Steps. Give step-by-step explanations for your rating, and only choose ratings from the Rating Rubric.\n\n## Metric Definition\nYou are an INFORMATION OVERLAP classifier providing the overlap of information between a response and reference.\n\n## Criteria\nGroundedness: The of information between a response generated by AI models and provided reference.\n\n## Rating Rubric\n5: (Fully grounded). The response and the reference are fully overlapped.\n4: (Mostly grounded). The response and the reference are mostly overlapped.\n3: (Somewhat grounded). The response and the reference are somewhat overlapped.\n2: (Poorly grounded). The response and the reference are slightly overlapped.\n1: (Not grounded). There is no overlap between the response and the reference.\n\n## Evaluation Steps\nSTEP 1: Assess the response in aspects of Groundedness. 
Identify any information in the response and provide assessment according to the Criteria.\nSTEP 2: Score based on the rating rubric. Give a brief rationale to explain your evaluation considering Groundedness.\n\nReference: {{?reference}}\nResponse: {{?aicore_llm_completion}}\n\nBegin your evaluation by providing a short explanation. Be as unbiased as possible. After providing your explanation, please rate the response according to the rubric and outputs STRICTLY following this JSON format:\n\n{ \"explanation\": string, \"rating\": integer }\n\nOutput:\n", + "criteria": "You should strictly follow the instruction given to you. Please act as an impartial judge and evaluate the quality of the responses based on the prompt and following criteria:", + "ratingRubric": [ + { + "rating": 3, + "rule": "Response is completely factual with no unsupported claims" + }, + { + "rating": 2, + "rule": "Response has minor inaccuracies but no major contradictions" + }, + { + "rating": 1, + "rule": "Response contains significant factual errors or hallucinations" + } + ] + } + } + } + } +] +``` + +```python +import os +import json +import requests + + +# --- Fetch all metrics from SAP AI Core --- +def fetch_all_metrics(): + request_url = f"{AICORE_BASE_URL}/v2/lm/evaluationMetrics" + resp = requests.get(request_url, headers=_get_headers()) + resp.raise_for_status() + return resp.json().get("resources", []) + +# --- Create or fetch a metric --- +def create_or_get_metric(custom_metric, user_metric_id=None): + all_metrics = fetch_all_metrics() + + # 1️⃣ User-supplied ID lookup + if user_metric_id: + for m in all_metrics: + if m.get("id") == user_metric_id: + print(f"✅ Metric already exists by ID: {user_metric_id}") + return user_metric_id + print(f"⚠️ User metric ID {user_metric_id} not found, will only include if valid later") + + # 2️⃣ Check by scenario, name, version + scenario = custom_metric.get("scenario") + name = custom_metric.get("name") + version = custom_metric.get("version") 
+ if not all([scenario, name, version]): + raise ValueError("Metric must include 'scenario', 'name', and 'version'") + + for m in all_metrics: + if (m.get("scenario") == scenario and + m.get("name") == name and + m.get("version") == version): + metric_id = m.get("id") + print(f"✅ Metric already exists: {scenario}/{name} v{version}, ID = {metric_id}") + return metric_id + + # 3️⃣ Create metric if not found + request_url = f"{AICORE_BASE_URL}/v2/lm/evaluationMetrics" + required_fields = ["scenario", "name", "version", "evaluationMethod", "metricType"] + for f in required_fields: + if f not in custom_metric: + raise ValueError(f"❌ Missing required field: {f}") + + resp = requests.post(request_url, headers=_get_headers(), json=custom_metric) + resp.raise_for_status() + metric_id = resp.json().get("id") + print(f"✅ Metric created successfully: {name} v{version}, ID = {metric_id}") + return metric_id + +# --- Main pipeline --- + +# 1️⃣ Create/fetch metrics from SAP AI Core +metric_ids = [] +for metric in custom_metric_list: + try: + print(f"metric:{metric}") + metric_id = create_or_get_metric(metric) + metric_ids.append(metric_id) + except ValueError as e: + print(f"Skipping metric due to error: {e}") + +# 2️⃣ Validate user_metric_ids separately if provided +if user_metric_ids and user_metric_ids.strip(): + all_metrics = fetch_all_metrics() + # Split comma-separated IDs and strip whitespace + for uid in [uid.strip() for uid in user_metric_ids.split(",")]: + if any(m.get("id") == uid for m in all_metrics): + metric_ids.append(uid) + else: + print(f"⚠️ User metric ID {uid} does not exist in AI Core, skipping.") +# 3️⃣ Convert to comma-separated string +custom_metric_ids_str = ",".join(metric_ids) +print("✅ All processed metric IDs:", custom_metric_ids_str) +``` +![img](img/image_py03.png) + +This ensures all required metrics are available before launching the evaluation. 
+ +[OPTION END] + +[OPTION BEGIN [Bruno]] + +Bruno supports two ways of providing metrics: + +**Use System-Defined Metrics** + +You can directly pass system metrics in your configuration: + +Example: + +```json +"metrics": "Pointwise Answer Relevance, Pointwise Instruction Following" +``` + +If you want to register custom metrics, you must call: + +➡️ **Create Custom Metric** + +```bash +POST {{ai_api_url}}/v2/lm/evaluationMetrics +``` +**Body example:** + +```json +{ + "name": "test-metric", + "scenario": "genai-evaluations-test", + "version": "0.0.1", + "evaluationMethod": "llm-as-a-judge", + "managedBy": "imperative", + "metricType": "evaluation", + "spec": { + "outputType": "numerical", + "promptType": "structured", + "configuration": { + "modelConfiguration": { + "name": "gpt-4.1-mini", + "version": "2025-08-07", + "parameters": [ + { + "key": "max_tokens", + "value": "10000" + } + ] + }, + "promptConfiguration": { + "definition": "You are an expert evaluator. Your task is to evaluate the quality of the responses generated by AI models. We will provide you with a reference and an AI-generated response. You should first read the user input carefully for analyzing the task, and then evaluate the quality of the responses based on the criteria provided in the Evaluation section below. You will assign the response a rating following the Rating Rubric and Evaluation Steps. Give step-by-step explanations for your rating, and only choose ratings from the Rating Rubric.\n\n## Metric Definition\nYou are an INFORMATION OVERLAP classifier providing the overlap of information between a response and reference.\n\n## Criteria\nGroundedness: The of information between a response generated by AI models and provided reference.\n\n## Rating Rubric\n5: (Fully grounded). The response and the reference are fully overlapped.\n4: (Mostly grounded). The response and the reference are mostly overlapped.\n3: (Somewhat grounded). 
The response and the reference are somewhat overlapped.\n2: (Poorly grounded). The response and the reference are slightly overlapped.\n1: (Not grounded). There is no overlap between the response and the reference.\n\n## Evaluation Steps\nSTEP 1: Assess the response in aspects of Groundedness. Identify any information in the response and provide assessment according to the Criteria.\nSTEP 2: Score based on the rating rubric. Give a brief rationale to explain your evaluation considering Groundedness.\n\nReference: {{?reference}}\nResponse: {{?aicore_llm_completion}}\n\nBegin your evaluation by providing a short explanation. Be as unbiased as possible. After providing your explanation, please rate the response according to the rubric and outputs STRICTLY following this JSON format:\n\n{ \"explanation\": string, \"rating\": integer }\n\nOutput:\n", + "evaluationTask": "You are an expert evaluator. Your task is to evaluate the quality of the responses generated by AI models. We will provide you with a reference and an AI-generated response. You should first read the user input carefully for analyzing the task, and then evaluate the quality of the responses based on the criteria provided in the Evaluation section below. You will assign the response a rating following the Rating Rubric and Evaluation Steps. Give step-by-step explanations for your rating, and only choose ratings from the Rating Rubric.\n\n## Metric Definition\nYou are an INFORMATION OVERLAP classifier providing the overlap of information between a response and reference.\n\n## Criteria\nGroundedness: The of information between a response generated by AI models and provided reference.\n\n## Rating Rubric\n5: (Fully grounded). The response and the reference are fully overlapped.\n4: (Mostly grounded). The response and the reference are mostly overlapped.\n3: (Somewhat grounded). The response and the reference are somewhat overlapped.\n2: (Poorly grounded). 
The response and the reference are slightly overlapped.\n1: (Not grounded). There is no overlap between the response and the reference.\n\n## Evaluation Steps\nSTEP 1: Assess the response in aspects of Groundedness. Identify any information in the response and provide assessment according to the Criteria.\nSTEP 2: Score based on the rating rubric. Give a brief rationale to explain your evaluation considering Groundedness.\n\nReference: {{?reference}}\nResponse: {{?aicore_llm_completion}}\n\nBegin your evaluation by providing a short explanation. Be as unbiased as possible. After providing your explanation, please rate the response according to the rubric and outputs STRICTLY following this JSON format:\n\n{ \"explanation\": string, \"rating\": integer }\n\nOutput:\n", + "criteria": "You should strictly follow the instruction given to you. Please act as an impartial judge and evaluate the quality of the responses based on the prompt and following criteria:", + "ratingRubric": [ + { + "rating": 3, + "rule": "Response is completely factual with no unsupported claims" + }, + { + "rating": 2, + "rule": "Response has minor inaccuracies but no major contradictions" + }, + { + "rating": 1, + "rule": "Response contains significant factual errors or hallucinations" + } + ] + } + } + } + } +``` +![img](img/image_br_mtrs.png) + +You will receive: + +```json +"id": "" +``` + +This metric ID can be directly passed into the evaluation configuration. + +[OPTION END] + +**Note** + +To evaluate and compare multiple models in a single execution, you must create a distinct orchestration registry ID for each model you wish to test. Assign a different foundation model to each registry ID, and then pass this list of registry IDs into your evaluation configuration. This ensures the system generates separate, comparable runs for each model simultaneously. 
+ +### Define and Create Evaluation Configurations + +[OPTION BEGIN [SAP AI Launchpad]] + +Once your dataset artifact is registered and you have completed creating Orchestration Registry, the next step is to create an Evaluation Configuration. + +An Evaluation Configuration tells SAP AI Core: + + - which dataset to evaluate + + - which prompt/model or orchestration config to use + + - which metrics to compute + + - which orchestration deployment endpoint to call + + - how many repetitions to run + + - which test dataset file to load + +This configuration becomes the blueprint for your evaluation execution. + +**Steps to Create Evaluation Configuration** + +In Additional Configuration + +- Set **Number of Repetitions** to `1`. +- Choose an existing deployment for **Orchestration Endpoint**. + + ![img](img/image_29.png) +--- + +#### Final Review & Start + +- Review all the details on the summary page. +- Once confirmed, click **Create** to start the evaluation job. + +![img](img/image_40.png) + +> ✅ You have now successfully configured and triggered a Generative AI Evaluation. + +[OPTION END] + +[OPTION BEGIN [Python]] + +When using the Python notebook, the evaluation configuration is created automatically based on your selections. +Before creating the configuration, the notebook will: + + - Load the dataset artifact ID + + - Resolve metric IDs + + - Load orchestration registry IDs + + - Validate all required parameters + +**Sample parameter setup:** + +```Python +import json +test_data_path = f"testdata/{DATASET_NAME}" # specify the test data path here. 
For the full folder just specifying testdata will work +test_datasets = json.dumps({'path': test_data_path, 'type': 'csv'}) +metrics_list = ",".join([selected_metrics_str,custom_metric_ids_str]) +models_list = selected_models_str +print(f"Selected metrics: {metrics_list}") +print(f"Selected models: {models_list}") +orchestration_deployment_url = deployment_url +repetitions = "1" +``` + +#### Create Configuration Body + +The notebook builds the configuration using the required SAP AI Core fields: + + - scenarioId + + - executableId + + - dataset artifact binding + + - selected metrics + + - test dataset details + + - repetitions + + - orchestration deployment URL + + - orchestrationRegistryIds + + - models. + +The following function dynamically creates the configuration body for AI Core. + +```Python +# creating an AICORE Configuration. +import requests + +request_body = { + "name": "genai-eval-conf", + "scenarioId": "genai-evaluations", + "executableId": "genai-evaluations-simplified", + "inputArtifactBindings": [ + { + "key": "datasetFolder", + "artifactId": "e30ef8d7-c3e1-4b9c-a834-a00ac0a9a053" + } + ], + "parameterBindings": [ + { + "key": "repetitions", + "value": repetitions + }, + { + "key": "orchestrationDeploymentURL", + "value": orchestration_deployment_url + }, + { + "key": "metrics", + "value": metrics_list + }, + { + "key": "testDataset", + "value": test_datasets + }, + { + "key": "orchestrationRegistryIds", + "value": orchestration_registry_id + } + ] +} + +def create_aicore_configuration(): + headers = _get_headers() + GET_CONFIGURATIONS_ENDPOINT = '/v2/lm/configurations' + request_url = f"{AICORE_BASE_URL}{GET_CONFIGURATIONS_ENDPOINT}" + try: + response = requests.post( + request_url, headers=headers, data=json.dumps(request_body), timeout=120 + ) + print(response) + if(response.status_code != 201): + raise + result = response.json() + print(result) + return result['id'] + except: + logging.error("Error occurred while attempting to create a 
Configuration") + raise + +configuration_id = create_aicore_configuration() +``` + +You will receive a configuration ID, which is required for the next step (Execution). + +![img](img/image_py_con.png) + +SAP AI Core returns a configuration ID, which is used to trigger the evaluation execution. + +[OPTION END] + +[OPTION BEGIN [Bruno]] + +When creating an Evaluation Configuration through Bruno, you call: + +```bash +POST {{api_url}}/v2/lm/configurations +``` + +Below is the sample request body to create configuration. + +```json +{ + "name": "genai-eval-conf", + "scenarioId": "genai-evaluations", + "executableId": "genai-evaluations-simplified", + "inputArtifactBindings": [ + { + "key": "datasetFolder", + "artifactId": "{{artifactId}}" + } + ], + "parameterBindings": [ + { + "key": "repetitions", + "value": "1" + }, + { + "key": "orchestrationDeploymentURL", + "value": "{{deployment_url}}" + }, + { + "key": "metrics", + "value": "BERT Score, Pointwise Conciseness" + }, + { + "key": "testDataset", + "value": "{\"path\": \"testdata/{{dataset_file}}\", \"type\": \"csv\"}" + }, + { + "key": "orchestrationRegistryIds", + "value": "{{orchestrationRegistryIds}}" + }, + { + "key": "models", + "value": "{{model_name}}:{{model_version}}" + } + ] +} +``` +![img](img/image-br03.png) + +[OPTION END] + +### Create and Run Evaluation Execution + +After creating the Evaluation Configuration, the next step is to execute it. + +Execution triggers the evaluation workflow, which: + + - Reads the test dataset + + - Generates submissions to the orchestration service + + - Collects model outputs + + - Computes all selected metrics + + - Produces aggregate and raw evaluation results + +The process is identical for SAP AI Launchpad, Python, and Bruno, with only the invocation method differing. + +[OPTION BEGIN [SAP AI Launchpad]] + +- Once the evaluation configuration is created, the system automatically triggers an evaluation execution. 
+
+- Follow these steps to monitor its progress and verify completion:
+
+    - Navigate to **ML Operations** in the SAP AI Core Launchpad.
+
+    - In the sidebar, click **Executions**.
+
+    ![img](img/image_41.png)
+
+    - Locate the most recent execution triggered by your evaluation configuration. You can use the timestamp or configuration name to identify it.
+
+    - Click on the execution entry to open its details. The Current Status will update as the process runs.
+
+    ![img](img/image_31.png)
+
+- Once the status reaches **COMPLETED**, your evaluation has finished successfully.
+
+> [For more information](https://help.sap.com/docs/sap-ai-core/generative-ai-hub/create-evaluation)
+
+**Track Execution Status**
+
+The execution page will show:
+
+  - Unknown
+
+  - Pending
+
+  - Running
+
+  - Completed
+
+Once completed, you can navigate to:
+
+  - Outputs → Tracking Metrics (aggregate results)
+
+  - Output Artifacts (raw results stored in the SQLite DB)
+
+[OPTION END]
+
+[OPTION BEGIN [Python]]
+
+Once the configuration is ready, the next step is to trigger an execution.
+An execution is a single evaluation run based on the configuration you defined.
+
+**Create Execution**
+
+The following function starts the evaluation in SAP AI Core using the configuration ID:
+
+```python
+# create an execution with the created configuration.
+ +import requests +def create_execution(): + headers = _get_headers() + GET_EXECUTIONS_ENDPOINT = '/v2/lm/executions' + request_url = f"{AICORE_BASE_URL}{GET_EXECUTIONS_ENDPOINT}" + request_body = {"configurationId" : configuration_id} + try: + response = requests.post( + request_url, headers=headers, data=json.dumps(request_body), timeout=120 + ) + print("response received is ", response) + result = response.json() + print(result) + return result['id'] + except: + logging.error("Error occurred while attempting to create an execution") + raise + + +execution_id = create_execution() +``` +![img](img/image_44.png) + +#### Monitor Execution Status + +The execution progresses through states: + +UNKNOWN → PENDING → RUNNING → COMPLETED + +```python +# get execution status +import requests +def get_execution_status(execution_id): + headers = _get_headers() + LOG_EXECUTIONS_ENDPOINT = f'/v2/lm/executions/{execution_id}' + request_url = f"{AICORE_BASE_URL}{LOG_EXECUTIONS_ENDPOINT}" + try: + response = requests.get( + request_url, headers=headers, timeout=120 + ) + print("response received is ", response) + result = response.json() + return result + except: + logging.error("Error occurred while attempting to get execution status") + raise + + +get_execution_status(execution_id) +``` + +#### Automatic Polling + +To continuously monitor until the evaluation finishes: + +```python +# Polling the execution status until it is COMPLETED or DEAD or timeout occurs +def poll_execution_status(execution_id, timeout_minutes=1800, poll_interval=30): + start_time = time.time() + while True: + result = get_execution_status(execution_id) + print(f"Execution Status: {result.get('status')}") + if result.get("status") == "COMPLETED": + print(f"Execution completed successfully in {time.time() - start_time} seconds, proceed to fetch results.") + break + if result.get("status") == "DEAD": + print(f"Execution failed with status DEAD in {time.time() - start_time} seconds. 
Check the logs for more details.")
+            break
+        if time.time() - start_time > timeout_minutes * 60:
+            raise TimeoutError(f"Execution status polling timed out after {timeout_minutes} minutes.")
+        time.sleep(poll_interval)
+
+```
+
+![img](img/image_45.png)
+
+✅ Once the execution status shows COMPLETED, the evaluation results are available and can be analyzed in the next step.
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+
+After creating the configuration, the next step is to trigger the evaluation workload by creating an AI Core execution.
+
+**Create an Execution with the Created Configuration**
+
+- Click **Create Execution** under **Executions**, and pass the configuration ID created in the previous step.
+
+![img](img/image-br04.png)
+
+- The status field progresses through different states over time:
+UNKNOWN → PENDING → RUNNING → COMPLETED.
+
+**Get Execution Status**
+
+Check the status of the created execution by passing the execution ID. The Current Status will update as the process runs. Refer to the image below.
+
+![img](img/image-br05.png)
+
+[OPTION END]
+
+### View and Analyze Evaluation Results
+
+Once the evaluation execution is complete, SAP AI Core generates both aggregated metrics and detailed instance-level results.
+These results help you compare model performance, understand quality metrics, and debug issues.
+
+[OPTION BEGIN [SAP AI Launchpad]]
+
+Once the evaluation workflow execution is completed, this step retrieves the aggregated evaluation metrics from the SAP AI Core service by specifying the run name.
+
+1. Go to **Optimizations**.
+
+2. In the runs section, select the run you created.
+
+3. View the detailed results of the run across your selected metrics.
+
+This is the easiest way to visually inspect evaluation outcomes, and you can also compare multiple model runs.
+
+![img](img/image_46_01.png)
+
+- Compare run performance across your selected metrics. Metrics are aggregated at run level.
+
+![img](img/image_46.png)
+
+![img](img/image_46a.png)
+
+[OPTION END]
+
+[OPTION BEGIN [Python]]
+
+The notebook includes utility scripts to retrieve aggregated metrics, download detailed artifacts, and inspect SQLite results. The metrics call returns all metric values per evaluated run.
+
+**Retrieve Aggregate Metrics (Tracking API)**
+
+Aggregated metrics summarize performance across all test samples.
+To fetch them using the execution ID:
+
+```python
+# Get aggregate metrics using execution id
+import requests
+def retrieve_aggregate_metrics(execution_id):
+    headers = _get_headers()
+    GET_METRICS_ENDPOINT = f'/v2/lm/metrics?tagFilters=evaluation.ai.sap.com/child-of={execution_id}'
+    request_url = f"{AICORE_BASE_URL}{GET_METRICS_ENDPOINT}"
+    try:
+        response = requests.get(request_url, headers=headers, timeout=120)
+        print("response received is ", response)
+        result = response.json()
+        return result
+    except:
+        logging.error("Error occurred while attempting to retrieve aggregate metrics for the run")
+        raise
+
+runs_data = retrieve_aggregate_metrics(execution_id)
+```
+![img](img/image_47.png)
+
+**Download Raw Results (Output Artifact)**
+
+All detailed evaluation outputs are stored as an output artifact in your object store. To download all output files programmatically:
+
+```python
+# download the result artifacts from Object store.
+import boto3
+
+def download_all_objects(prefix, destination_folder):
+    """
+    Recursively download all objects from the S3 bucket (AWS_BUCKET_ID) starting with a specific prefix.
+
+    :param prefix: Prefix to filter objects in the bucket.
+    :param destination_folder: Local folder to save the downloaded files.
+ """ + s3_client = boto3.client( + 's3', + aws_access_key_id=AWS_ACCESS_KEY, + aws_secret_access_key=AWS_SECRET_ACCESS_KEY, + region_name=AWS_REGION + ) + + # Ensure the destination folder exists + if not os.path.exists(destination_folder): + os.makedirs(destination_folder) + + # Paginate through objects + paginator = s3_client.get_paginator('list_objects_v2') + pages = paginator.paginate(Bucket=AWS_BUCKET_ID, Prefix=prefix) + + for page in pages: + if 'Contents' in page: + for obj in page['Contents']: + key = obj['Key'] + local_file_path = os.path.join(destination_folder, os.path.relpath(key, prefix)) + + # Ensure the local directory structure exists + local_directory = os.path.dirname(local_file_path) + if not os.path.exists(local_directory): + os.makedirs(local_directory) + + # Download the object + print(f"Downloading {key} to {local_file_path}") + s3_client.download_file(AWS_BUCKET_ID, key, local_file_path) + + +# Download the evaluation results from the object store. Look at execution status under "outputArtifacts" key to see the 'url' +# which shows the data path of where your output results are stored +EXECUTION_ID = execution_id +sqlite_db_prefix = f'{EXECUTION_ID}/tmp/' # change the prefix based on where your output artifact is stored in the bucket. +destination_folder = 'results-new' + +download_all_objects(sqlite_db_prefix, destination_folder) +``` + +![img](img/image_48.png) + +**View Detailed Results (SQLite DB)** + +The evaluation stores detailed instance-level results in results.db. + +Example: Reading SQLite tables: + +```python +# viewing the results from sqlite db in tabular format.. 
+import sqlite3 +import pandas as pd +from IPython.display import display, HTML + +# Path to your SQLite database file +db_file = 'results-new/results.db' + +connection = sqlite3.connect(db_file) + +# Specify the table names you want to display +table_names = ['run','configuration', 'submission', 'submission_result', 'evaluation_result'] + +# Create the CSS and HTML container +html_content = """ + +
+<style>
+    .table-container { border-collapse: collapse; margin-bottom: 20px; }
+    .table-container th, .table-container td { border: 1px solid #ddd; padding: 6px 10px; }
+</style>
+<div>
+"""
+
+for table_name in table_names:
+    query = f"SELECT * FROM {table_name};"
+    df = pd.read_sql_query(query, connection)
+    # If you want to see all the rows across all tables, remove/comment the next line
+    df = df.head(5) # Limiting the number of rows displayed
+    table_html = df.to_html(classes='table-container', index=False)
+    html_content += f"""
+    <div>
+        <h3>Table: {table_name}</h3>
+        {table_html}
+    </div>
+    """
+
+html_content += "</div>"
+
+display(HTML(html_content))
+
+# Close the connection
+connection.close()
+```
+
+![img](img/image_py_rk.png)
+
+#### Process and Rank Results
+
+This step generates a leaderboard ranking models by their Win Rate (percentage of pairwise victories), providing a robust, comparative measure of the best-performing model and prompt configuration.
+
+```Python
+import pandas as pd
+import numpy as np
+import sqlite3
+import json
+import os
+from IPython.display import display, HTML
+
+# ==========================================
+# 1. CONFIGURATION (Separated Groups)
+# ==========================================
+METRIC_GROUPS = {
+    "Categorical": {
+        "type": "categorical",
+        "description": "Weighted Average (1-5 scale)",
+        "metrics": [
+            "Pointwise Conciseness",
+            "Pointwise Instruction Following",
+            "Pointwise Correctness",
+            "Pointwise Answer Relevance"
+        ]
+    },
+    "Boolean": {
+        "type": "categorical", # Uses same weighted avg logic (0 or 1)
+        "description": "Pass Rate (0-1 scale)",
+        "metrics": [
+            "Exact Match",
+            "Content Filter on Input",
+            "Content Filter on Output",
+            "Language Match",
+            "JSON Schema Match"
+        ]
+    },
+    "Numerical": {
+        "type": "numerical",
+        "description": "Mean Value",
+        "metrics": [
+            "BLEU",
+            "ROUGE",
+            "BERT Score",
+            "test-metric"
+        ]
+    }
+}
+
+# ==========================================
+# 2.
DATA EXTRACTION +# ========================================== +def extract_db_metadata(db_path): + if not os.path.exists(db_path): return pd.DataFrame() + conn = sqlite3.connect(db_path) + df_runs = pd.read_sql_query("SELECT id, name, tags, config FROM run", conn) + conn.close() + + meta_data = [] + for _, row in df_runs.iterrows(): + run_id = str(row["id"]) + run_name = str(row["name"]) + tags = {} + config = {} + try: tags = json.loads(row["tags"]) if isinstance(row["tags"], str) else row["tags"] + except: pass + try: config = json.loads(row["config"]) if isinstance(row["config"], str) else row["config"] + except: pass + + model = "Unknown" + try: model = config["modules"]["prompt_templating"]["model"]["name"] + except: + if isinstance(tags, dict): model = tags.get("evaluation.ai.sap.com/model", "Unknown") + elif isinstance(tags, list): + for t in tags: + if t.get("key") == "evaluation.ai.sap.com/model": model = t.get("value") + + meta_data.append({"run_id": run_id, "run_name": run_name, "model": model}) + return pd.DataFrame(meta_data) + +def extract_api_metrics(runs_data_resource): + flat_data = [] + for run in runs_data_resource: + model = "Unknown" + for t in run.get("tags", []): + if t.get("name") == "evaluation.ai.sap.com/model": + model = t.get("value") + break + for m in run.get("metrics", []): + clean_name = m.get("name", "").replace('"', '').strip() + flat_data.append({ + "model": model, + "metrics_name_clean": clean_name, + "metric_value": m.get("value") + }) + df = pd.DataFrame(flat_data) + df['metric_value'] = pd.to_numeric(df['metric_value'], errors='coerce') + return df + +# ========================================== +# 3. SCORING & HELM LOGIC +# ========================================== +def calculate_weighted_avg_score(row, cols): + """ Returns a score based on counts. + Categorical: 1-5 scale. + Boolean: 0-1 scale (Pass Rate). 
+ """ + total_score = 0 + total_count = 0 + # Check counts 0-5 (covers Boolean 0/1 and Categorical 1-5) + for rating in range(0, 6): + col_name = next((c for c in cols if f"/{rating}/count" in c), None) + if col_name and not pd.isna(row[col_name]): + count = row[col_name] + total_score += count * rating + total_count += count + return total_score / total_count if total_count > 0 else 0.0 + +def get_metric_score_series(df_metrics, metric_name, group_type): + """ Returns a Series of SCORES (Scalar) for each model for a specific metric """ + subset = df_metrics[df_metrics['metrics_name_clean'].str.startswith(metric_name)] + if subset.empty: return None + + # Pivot to get columns for this metric + pivot = subset.pivot_table(index='model', columns='metrics_name_clean', values='metric_value', aggfunc='first') + cols = pivot.columns.tolist() + + if group_type == "categorical": + # Calculate Weighted Average (or Pass Rate for Boolean) + return pivot.apply(lambda row: calculate_weighted_avg_score(row, cols), axis=1) + else: + # Calculate Mean (Numerical) + c_mean = next((c for c in cols if "mean" in c), None) + if c_mean: return pivot[c_mean] + return None + +def calculate_group_win_rate(score_table): + """ + Calculates HELM Win Rate: % of times a model beats another model across all metrics in this group. + """ + models = score_table.index.tolist() + metrics = score_table.columns.tolist() + win_rates = {} + + for model_a in models: + wins = 0 + comparisons = 0 + + for model_b in models: + if model_a == model_b: continue + + # Compare across ALL metrics in this table + for metric in metrics: + score_a = score_table.at[model_a, metric] + score_b = score_table.at[model_b, metric] + + # Only compare valid scores + if pd.isna(score_a) or pd.isna(score_b): continue + + comparisons += 1 + if score_a > score_b: + wins += 1 + + win_rates[model_a] = wins / comparisons if comparisons > 0 else 0.0 + + return pd.Series(win_rates) + +# ========================================== +# 4. 
EXECUTION +# ========================================== +db_file = 'results-new/results.db' + +# A. Metadata +df_db_meta = extract_db_metadata(db_file) +df_db_unique = df_db_meta.drop_duplicates(subset=['model'], keep='last') + +# B. CSS +html_content = """ + +
+<style>
+    .table-container { border-collapse: collapse; margin-bottom: 20px; }
+    .table-container th, .table-container td { border: 1px solid #ddd; padding: 6px 10px; }
+</style>
+<div>
+"""
+if 'runs_data' in locals() and runs_data:
+    df_metrics_all = extract_api_metrics(runs_data['resources'])
+
+    for group_name, config in METRIC_GROUPS.items():
+
+        # 1. Build Score Table
+        score_table = pd.DataFrame(index=df_db_unique['model'].unique())
+        score_table.index.name = 'model'
+
+        valid_metrics = []
+
+        # 2. Calculate Scores
+        for metric in config["metrics"]:
+            scores = get_metric_score_series(df_metrics_all, metric, config["type"])
+            if scores is not None:
+                score_table[metric] = scores
+                valid_metrics.append(metric)
+
+        if not valid_metrics:
+            continue
+
+        # 3. Calculate HELM Win Rate (Specific to this group)
+        score_table['Win Rate'] = calculate_group_win_rate(score_table[valid_metrics])
+
+        # 4. Calculate Final Rank
+        score_table['Final Rank'] = score_table['Win Rate'].rank(ascending=False, method='min')
+
+        # 5. Merge & Format
+        df_final = pd.merge(df_db_unique, score_table, on='model', how='inner')
+        df_final = df_final.sort_values('Final Rank')
+
+        # Rounding
+        for c in valid_metrics: df_final[c] = df_final[c].fillna(0.0).astype(float).round(4)
+        df_final['Win Rate'] = df_final['Win Rate'].fillna(0.0).astype(float).round(4)
+        df_final['Final Rank'] = df_final['Final Rank'].fillna(0).astype(int)
+
+        # Columns
+        meta_cols = ['run_id', 'run_name', 'model']
+        final_cols = meta_cols + ['Win Rate', 'Final Rank'] + valid_metrics
+
+        # 6. Generate HTML
+        table_html = df_final[final_cols].to_html(classes='table-container', index=False)
+
+        html_content += f"""
+        <div>
+            <h3>{group_name} Comparison</h3>
+            <p>Values: {config['description']}. Win Rate based on head-to-head performance.</p>
+            {table_html}
+        </div>
+        """
+
+    html_content += "</div>"
+    display(HTML(html_content))
+
+else:
+    print("'runs_data' missing.")
+```
+![img](img/image_py_rnk1.png)
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+
+**Retrieve Aggregate Metrics**
+
+Send a GET request:
+
+**GET**
+```bash
+{{apiurl}}/v2/lm/metrics?tagFilters=evaluation.ai.sap.com/child-of={{execution_id}}
+```
+**Retrieve Aggregate Metrics Using Run Name**
+
+Send a GET request:
+
+**GET**
+```bash
+{{apiurl}}/v2/lm/metrics?tagFilters=evaluation.ai.sap.com/run-name={{run_name}}
+```
+
+This returns aggregated values for:
+
+  - latency
+
+  - token usage
+
+  - metric scores
+
+  - completion count
+
+**Download Raw Results**
+
+1. Open the execution details.
+
+2. Copy the output artifact URL.
+
+3. Download the folder to obtain:
+
+    - step-wise results
+
+    - sqlite_combined/results.db
+
+**Inspect Detailed Results**
+
+Open the SQLite DB in any client to inspect:
+
+  - submissions
+
+  - completion responses
+
+  - evaluation_results (raw metric scores)
+
+  - aggregation_results
+
+  - custom_logs
+
+![img](img/image_49.png)
+
+[OPTION END]
+
+### Delete Evaluation Artifacts and Configurations
+
+Over time, your workspace may accumulate old configurations, executions, and metrics.
+SAP AI Core allows you to safely delete these resources once they are no longer needed.
+
+This section explains how to delete:
+
+  - Evaluation Executions
+
+  - Evaluation Configurations
+
+⚠️ Important:
+
+Deletions are permanent and cannot be undone.
+
+[OPTION BEGIN [SAP AI Launchpad]]
+
+**Delete Executions**
+
+1. Go to ML Operations → Executions
+
+2. Select the execution
+
+3. Click Delete
+
+4. Confirm the deletion
+
+**Delete Evaluation Configurations**
+
+1. Go to ML Operations → Configurations
+
+2. Select the configuration you created
+
+3. Click Delete
+
+[OPTION END]
+
+[OPTION BEGIN [Python]]
+
+**1.
Delete an Evaluation Execution**
+
+```python
+# Delete the execution by ID
+def delete_execution():
+    headers = _get_headers()
+    EXEC_ID = execution_id
+    GET_EXECUTIONS_ENDPOINT = '/v2/lm/executions/'
+    request_url = f"{AICORE_BASE_URL}{GET_EXECUTIONS_ENDPOINT}{EXEC_ID}"
+    try:
+        response = requests.delete(
+            request_url, headers=headers, timeout=120
+        )
+        print(response)
+        if response.status_code != 202:
+            raise RuntimeError(f"Unexpected status code: {response.status_code}")
+        result = response.json()
+        print(result)
+    except Exception:
+        logging.error("Error occurred while attempting to delete an Execution")
+        raise
+
+delete_execution()
+```
+**2. Delete an Evaluation Configuration**
+
+```python
+def delete_configuration(configuration_id):
+    headers = _get_headers()
+    endpoint = f"/v2/lm/configurations/{configuration_id}"
+    url = f"{AICORE_BASE_URL}{endpoint}"
+
+    response = requests.delete(url, headers=headers)
+    print("Status:", response.status_code)
+    print(response.text)
+
+# Example:
+delete_configuration(configuration_id)
+```
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+
+**1. Delete Execution**
+
+**DELETE Request**
+```bash
+{{apiurl}}/v2/lm/executions/{{execution_id}}
+```
+**Headers:**
+```
+Authorization: Bearer {{access_token}}
+AI-Resource-Group: {{resource_group}}
+```
+**2.
Delete Configuration**
+
+```bash
+DELETE {{apiurl}}/v2/lm/configurations/{{configuration_id}}
+```
+
+[OPTION END]
diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/evaluation_workflow.ipynb b/tutorials/ai-core-genaihub-evaluation-comprehensive/evaluation_workflow.ipynb
new file mode 100644
index 0000000000..fb89c71d9a
--- /dev/null
+++ b/tutorials/ai-core-genaihub-evaluation-comprehensive/evaluation_workflow.ipynb
@@ -0,0 +1,1848 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Generative AI Custom Evaluation\n",
+    "This is an example notebook which showcases how a user can use AI Core custom evaluation to benchmark large language models and evaluate orchestration configurations or prompts for their use case.\n",
+    "It uses the publicly available [MedicationQA dataset](https://langtest.org/docs/pages/benchmarks/medical/medicationqa/), which consists of commonly asked consumer questions about medications. The workload computes industry-standard metrics to check the reliability of the responses generated by the LLM.\n",
+    "
**Note: For detailed instructions please refer to [Readme](./Readme.md)**" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# SetUp (Step 1)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "! pip install -r ../requirements.txt" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Load your environment variables\n", + "\n", + "Ensure that your environment variables are set in a `.env` file (see sample.env for an example). If there is a missing field the notebook will prompt you for a value." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Loading the credentials from the env file\n", + "from gen_ai_hub.proxy.gen_ai_hub_proxy import GenAIHubProxyClient\n", + "from dotenv import load_dotenv\n", + "import os\n", + "\n", + "load_dotenv(override=True)\n", + "\n", + "\n", + "# Fetching environment variables or prompting the user if missing\n", + "AICORE_BASE_URL = os.getenv(\"AICORE_BASE_URL\") or input(\"AICORE_BASE_URL is missing. Please enter it: \")\n", + "AICORE_RESOURCE_GROUP = os.getenv(\"AICORE_RESOURCE_GROUP\") or input(\"AICORE_RESOURCE_GROUP is missing. Please enter it (default: 'default'): \") or \"default\"\n", + "AICORE_AUTH_URL = os.getenv(\"AICORE_AUTH_URL\") or input(\"AICORE_AUTH_URL is missing. Please enter it: \")\n", + "AICORE_CLIENT_ID = os.getenv(\"AICORE_CLIENT_ID\") or input(\"AICORE_CLIENT_ID is missing. Please enter it: \")\n", + "AICORE_CLIENT_SECRET = os.getenv(\"AICORE_CLIENT_SECRET\") or input(\"AICORE_CLIENT_SECRET is missing. Please enter it: \")\n", + "\n", + "AWS_ACCESS_KEY = os.getenv(\"AWS_ACCESS_KEY\") or input(\"AWS_ACCESS_KEY is missing. Please enter it: \")\n", + "AWS_BUCKET_ID = os.getenv(\"AWS_BUCKET_ID\") or input(\"AWS_BUCKET_ID is missing. Please enter it: \")\n", + "AWS_REGION = os.getenv(\"AWS_REGION\") or input(\"AWS_REGION is missing. 
Please enter it: \")\n", + "AWS_SECRET_ACCESS_KEY = os.getenv(\"AWS_SECRET_ACCESS_KEY\") or input(\"AWS_SECRET_ACCESS_KEY is missing. Please enter it: \")\n", + "DEPLOYMENT_URL = os.getenv(\"DEPLOYMENT_URL\", None)\n", + "\n", + "# Initializing the GenAIHubProxyClient\n", + "client = GenAIHubProxyClient(\n", + " base_url=AICORE_BASE_URL,\n", + " auth_url=AICORE_AUTH_URL,\n", + " client_id=AICORE_CLIENT_ID,\n", + " client_secret=AICORE_CLIENT_SECRET,\n", + " resource_group=AICORE_RESOURCE_GROUP\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Dependencies and Helper Functions (Step 2)" + ] + }, + { + "cell_type": "code", + "execution_count": 65, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Dataset name: medicalqna_dataset.csv\n" + ] + } + ], + "source": [ + "import os\n", + "import json\n", + "\n", + "\n", + "\n", + "def get_dataset_file_name(folder_path):\n", + " \"\"\"\n", + " Retrieves the name of the first file in the specified folder.\n", + " \"\"\"\n", + " if not os.path.isdir(folder_path):\n", + " print(f\"The folder path '{folder_path}' does not exist.\")\n", + " return None\n", + "\n", + " items_in_folder = os.listdir(folder_path)\n", + "\n", + " for item in items_in_folder:\n", + " item_path = os.path.join(folder_path, item)\n", + " if os.path.isfile(item_path):\n", + " return item\n", + "\n", + " print(f\"No files were found in the folder '{folder_path}'.\")\n", + " return None\n", + "\n", + "\n", + "\n", + "# --- MAIN EXECUTION ---\n", + "DATASET_FOLDER = \"../DATASET\"\n", + "\n", + "DATASET_NAME = get_dataset_file_name(DATASET_FOLDER)\n", + "\n", + "if DATASET_NAME:\n", + " print(f\"Dataset name: {DATASET_NAME}\")\n", + "else:\n", + " print(\"Missing run or dataset file.\")\n", + " raise SystemExit(\"Exiting due to missing run/dataset file.\")\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Register an Object Store Secret\n", + 
"To use the evaluations service, you must register an object store with the name default. Optionally, you can register an additional object store with a name of your choice." + ] + }, + { + "cell_type": "code", + "execution_count": 66, + "metadata": {}, + "outputs": [], + "source": [ + "# setup authentication and headers needed for AI Core requests\n", + "def _get_headers():\n", + " headers = {\n", + " \"Authorization\": client.get_ai_core_token(),\n", + " \"AI-Resource-Group\": AICORE_RESOURCE_GROUP,\n", + " \"Content-Type\": \"application/json\",\n", + " }\n", + " return headers" + ] + }, + { + "cell_type": "code", + "execution_count": 67, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Successfully deleted object store secret: default\n", + "Object store secret not found: genai-simplified-notebook. It may not exist.\n" + ] + }, + { + "data": { + "text/plain": [ + "{'message': 'secret has been created'}" + ] + }, + "execution_count": 67, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# Register S3 secret with AI Core which will be used an input source \n", + "import requests\n", + "import json\n", + "import logging\n", + "\n", + "def delete_oss_secret(oss_name=\"\"):\n", + " headers = _get_headers()\n", + " \n", + " DELETE_SECRETS_ENDPOINT = f'/v2/admin/objectStoreSecrets/{oss_name}'\n", + " request_url = f\"{AICORE_BASE_URL}{DELETE_SECRETS_ENDPOINT}\"\n", + " \n", + " try:\n", + " response = requests.delete(request_url, headers=headers, timeout=120)\n", + " if response.status_code == 202:\n", + " print(f\"Successfully deleted object store secret: {oss_name}\")\n", + " elif response.status_code == 404:\n", + " print(f\"Object store secret not found: {oss_name}. 
It may not exist.\")\n", + " else:\n", + " logging.error(f\"Failed to delete object store secret: {oss_name}, Status Code: {response.status_code}\")\n", + " except Exception as e:\n", + " logging.error(f\"Error occurred while attempting to delete object store secret: {e}\")\n", + " raise\n", + "\n", + "def register_oss_secret(oss_name=\"\", path_prefix=\"\"):\n", + " headers = _get_headers()\n", + " \n", + " POST_SECRETS_ENDPOINT = '/v2/admin/objectStoreSecrets'\n", + " request_url = f\"{AICORE_BASE_URL}{POST_SECRETS_ENDPOINT}\"\n", + " \n", + " request_body = {\n", + " \"name\": oss_name,\n", + " \"data\": {\n", + " \"AWS_ACCESS_KEY_ID\": AWS_ACCESS_KEY,\n", + " \"AWS_SECRET_ACCESS_KEY\": AWS_SECRET_ACCESS_KEY\n", + " },\n", + " \"type\": \"S3\",\n", + " \"bucket\": AWS_BUCKET_ID,\n", + " \"endpoint\": \"s3-eu-central-1.amazonaws.com\",\n", + " \"region\": AWS_REGION,\n", + " \"pathPrefix\": path_prefix,\n", + " \"verifyssl\": \"0\",\n", + " \"usehttps\": \"1\",\n", + " }\n", + " try:\n", + " response = requests.post(\n", + " request_url, headers=headers, data=json.dumps(request_body), timeout=120\n", + " )\n", + " result = response.json()\n", + " return result\n", + " except:\n", + " logging.error(\"Error occurred while attempting to create object store secret\")\n", + " raise\n", + " \n", + "delete_oss_secret(oss_name=\"default\")\n", + "delete_oss_secret(oss_name=\"genai-simplified-notebook\")\n", + " \n", + "register_oss_secret(oss_name=\"default\", path_prefix=\"\")\n", + "register_oss_secret(oss_name=\"genai-simplified-notebook\", path_prefix=\"\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# uploading these files to Object store to register as an artifact inside ai core\n", + "\n", + "import boto3\n", + "import os\n", + "import uuid\n", + "\n", + "def upload_folder_to_s3(folder_path, bucket_name, s3_prefix=\"\"):\n", + " \"\"\"\n", + " Upload a folder to an S3 bucket recursively.\n", + 
"\n", + " :param folder_path: The local folder path to upload.\n", + " :param bucket_name: The name of the S3 bucket.\n", + " :param s3_prefix: Optional prefix to use for the S3 keys (e.g., subfolder in the bucket).\n", + " \"\"\"\n", + " s3_client = boto3.client(\n", + " 's3',\n", + " aws_access_key_id=AWS_ACCESS_KEY,\n", + " aws_secret_access_key=AWS_SECRET_ACCESS_KEY,\n", + " region_name=AWS_REGION\n", + " )\n", + "\n", + " for root, dirs, files in os.walk(folder_path):\n", + " for file_name in files:\n", + " local_path = os.path.join(root, file_name)\n", + " # Compute the relative path for the S3 key\n", + " relative_path = os.path.relpath(local_path, folder_path)\n", + " s3_key = os.path.join(s3_prefix, relative_path).replace(\"\\\\\", \"/\") # Ensure S3-compatible paths\n", + " print(f\"Uploading {local_path} to s3://{bucket_name}/{s3_key}\")\n", + " \n", + " # Upload the file\n", + " s3_client.upload_file(local_path, bucket_name, s3_key)\n", + "\n", + "# Example usage\n", + "folder_to_upload_testdata = \"../DATASET\"\n", + "user_directory_prefix = \"\" # replace with your i-number as string here\n", + "prefix_guid = user_directory_prefix if user_directory_prefix else str(uuid.uuid4().hex) # fall back to a random GUID when no prefix is given\n", + "s3_testdata_prefix = f\"genaiEvaluation/{prefix_guid}/testdata\" # Leave empty for root of the bucket\n", + "\n", + "\n", + "upload_folder_to_s3(folder_to_upload_testdata, AWS_BUCKET_ID, s3_testdata_prefix)\n", + "input_artifact_path = f\"ai://genai-simplified-notebook/genaiEvaluation/{prefix_guid}\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The user stores the input files in the object store and registers the root folder as an artifact with AI Core. The File Upload and Artifact endpoints of the AI Core API may be used for this purpose. 
In this example `genaiEvaluation/{prefix_guid}` is the root folder containing the orchestration configurations and test data, and it is registered as an AI Core artifact." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import requests\n", + "import logging\n", + "# Registering the uploaded files from AWS as artifacts to use inside configuration.\n", + "\n", + "def register_artifact():\n", + " headers = _get_headers()\n", + " \n", + " GET_ARTIFACTS_ENDPOINT = '/v2/lm/artifacts'\n", + " request_url = f\"{AICORE_BASE_URL}{GET_ARTIFACTS_ENDPOINT}\"\n", + " \n", + " request_body = {\n", + " \"labels\": [\n", + " {\n", + " \"key\": \"ext.ai.sap.com/prompt-evaluation\",\n", + " \"value\": \"true\"\n", + " }\n", + " ],\n", + " \"name\": \"genai-eval-simplified-test-data\",\n", + " \"kind\": \"other\",\n", + " \"url\": input_artifact_path, # input artifact path\n", + " \"description\": \"demo artifacts for evaluation flow.\",\n", + " \"scenarioId\": \"genai-evaluations\"\n", + " }\n", + " try:\n", + " response = requests.post(\n", + " request_url, headers=headers, data=json.dumps(request_body), timeout=120\n", + " )\n", + " result = response.json()\n", + " print(result)\n", + " return result['id']\n", + " except:\n", + " print(\"Error occurred while attempting to register the artifact\")\n", + " raise\n", + " \n", + "\n", + "artifact_id = register_artifact()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Create Orchestration Deployment\n", + "An orchestration deployment URL is required to run our evaluation. Once created, we need to wait until the deployment is running and provides a deployment URL, which will be added to our configuration file in the next step. You can skip this step if you already have an orchestration deployment running."
+ ] + }, + { + "cell_type": "code", + "execution_count": 69, + "metadata": {}, + "outputs": [], + "source": [ + "import requests\n", + "import json\n", + "import time\n", + "\n", + "\n", + "\n", + "def create_orchestration_configuration():\n", + " headers = _get_headers()\n", + " request_body = {\n", + " \"name\": \"orchestrationDeployment\",\n", + " \"executableId\": \"orchestration\",\n", + " \"scenarioId\": \"orchestration\",\n", + " \"parameterBindings\": [\n", + " {\n", + " \"key\": \"modelFilterList\",\n", + " \"value\": \"null\"\n", + " },\n", + " {\n", + " \"key\": \"modelFilterListType\",\n", + " \"value\": \"allow\"\n", + " }\n", + " ],\n", + " \"inputArtifactBindings\": []\n", + " }\n", + " \n", + " GET_CONFIGURATIONS_ENDPOINT = '/v2/lm/configurations'\n", + " request_url = f\"{AICORE_BASE_URL}{GET_CONFIGURATIONS_ENDPOINT}\"\n", + " try:\n", + " response = requests.post(\n", + " request_url, headers=headers, data=json.dumps(request_body), timeout=120\n", + " )\n", + " print(response)\n", + " if(response.status_code != 201):\n", + " raise\n", + " result = response.json()\n", + " print(result)\n", + " return result['id']\n", + " except:\n", + " logging.error(\"Error occurred while attempting to create a Configuration\")\n", + " raise\n", + " \n", + "def execute_orchestration_deployment(configuration_id):\n", + " headers = _get_headers()\n", + " GET_DEPLOYMENTS_ENDPOINT = '/v2/lm/deployments'\n", + " request_url = f\"{AICORE_BASE_URL}{GET_DEPLOYMENTS_ENDPOINT}\"\n", + " \n", + " request_body = {\n", + " \"configurationId\": configuration_id\n", + " }\n", + " \n", + " try:\n", + " response = requests.post(\n", + " request_url, headers=headers, data=json.dumps(request_body), timeout=120\n", + " )\n", + " print(response)\n", + " if(response.status_code != 202):\n", + " print(\"Deployment execution failed\")\n", + " result = response.json()\n", + " print(result)\n", + " return result['id']\n", + " \n", + " except:\n", + " logging.error(\"Error occurred while 
attempting to create an execution\")\n", + " raise\n", + "\n", + "def get_deployment_status(orchestration_deployment_id):\n", + " headers = _get_headers()\n", + " api_url = f\"{AICORE_BASE_URL}/v2/lm/deployments/{orchestration_deployment_id}?$select=status\"\n", + " timeout = 400 \n", + " initial_interval = 30 \n", + " pending_interval = 10\n", + " start = time.time()\n", + "\n", + " status = None\n", + " current_interval = initial_interval\n", + "\n", + " while time.time() - start < timeout:\n", + " response = requests.get(api_url, headers=headers)\n", + " if response.status_code == 200:\n", + " status = response.json().get('status')\n", + " print(f\"Deployment {orchestration_deployment_id} status: {status}\")\n", + " # Adjust polling interval based on status\n", + " if status == 'RUNNING':\n", + " return True\n", + " elif status == 'UNKNOWN':\n", + " current_interval = initial_interval\n", + " elif status == 'PENDING':\n", + " current_interval = pending_interval\n", + "\n", + " else:\n", + " print(f\"Failed to fetch deployment status. 
HTTP {response.status_code}\")\n", + " return False\n", + "\n", + " # Wait before polling the status again\n", + " time.sleep(current_interval)\n", + "\n", + "def get_deployment_url(orchestration_deployment_id):\n", + " headers = _get_headers()\n", + " response = requests.get(f\"{AICORE_BASE_URL}/v2/lm/deployments/{orchestration_deployment_id}\", headers=headers)\n", + " if response.status_code != 200:\n", + " raise Exception(f\"Failed to get deployment URL: {response.status_code} - {response.text}\")\n", + " return response.json().get('deploymentUrl')\n", + "\n", + "# You can skip this step if you already have an orchestration deployment running\n", + "deployment_url = DEPLOYMENT_URL\n", + "if not deployment_url:\n", + " configuration_id = create_orchestration_configuration()\n", + " orchestration_deployment_id = execute_orchestration_deployment(configuration_id)\n", + " is_running = get_deployment_status(orchestration_deployment_id) \n", + " if is_running:\n", + " deployment_url = get_deployment_url(orchestration_deployment_id)\n", + " print(f\"Deployment URL: {deployment_url}\")\n", + " else:\n", + " print(\"Deployment is not running or failed.\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Manually set the orchestration deployment url\n", + "# deployment_url=\"\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Select your Models\n", + " \n", + "Add the LLMs you wish to use in the string `selected_models_str`.\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": 70, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Selected models string: gemini-2.5-pro:001\n" + ] + } + ], + "source": [ + "# Manual selection of models\n", + "selected_models_str=\"gemini-2.5-pro:001\"\n", + "print(\"Selected models string:\", selected_models_str)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + 
"source": [ + "## Select system-defined metrics\n", + " \n", + "Add the system-defined metrics you wish to use in the string `selected_metrics_str`.\n", + "\n", + "**Note: If your dataset does not have a reference column, do NOT select metrics that require a reference.**" + ] + }, + { + "cell_type": "code", + "execution_count": 71, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Pointwise Answer Relevance,Exact Match\n" + ] + } + ], + "source": [ + "# Manual Selection of Metrics\n", + "selected_metrics_str = \"Pointwise Answer Relevance,Exact Match\"\n", + "print(selected_metrics_str)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Custom Metric Creation and Selection\n", + "This script checks for an evaluation metric in SAP AI Core.\n", + "\n", + "1. You can provide metric IDs directly by setting the variable as a comma-separated string:\n", + " user_metric_ids = `\"\"`\n", + " - ✅ If the ID exists, it will be returned.\n", + " \n", + "2. You can create a new custom metric by adding its JSON to the `custom_metric_list` list.\n", + " - The script will use the contents of the `custom_metric_list`\n", + " to search for an existing metric by scenario + name + version.\n", + "\n", + "3. If no existing metric is found:\n", + " - A new metric will be created using the details in `custom_metric_list`.\n", + " - Required fields in custom_metric: scenario, name, version, evaluationMethod, metricType.\n", + "\n", + "4. 
At the end:\n", + " - The script prints the final Metric ID that was found or created.\n", + "\n", + "Note: Skip the following two cells if you do not want to create/select a custom metric for your workload" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "user_metric_ids = \"d1868b00-1601-407a-92cd-0b9065682d1f,dbf56851-8444-45d3-a0c1-adbe210c7e771\"\n", + "\n", + "custom_metric_list = [\n", + " {\n", + " \"name\": \"test-metric\",\n", + " \"scenario\": \"genai-evaluations-test\",\n", + " \"version\": \"0.0.1\",\n", + " \"evaluationMethod\": \"llm-as-a-judge\",\n", + " \"managedBy\": \"imperative\",\n", + " \"systemPredefined\": False,\n", + " \"metricType\": \"evaluation\",\n", + " \"spec\": {\n", + " \"outputType\": \"numerical\",\n", + " \"promptType\": \"structured\",\n", + " \"configuration\": {\n", + " \"modelConfiguration\": {\n", + " \"name\": \"gpt-5\",\n", + " \"version\": \"2025-08-07\",\n", + " \"parameters\": [\n", + " {\n", + " \"key\": \"max_tokens\",\n", + " \"value\": \"10000\"\n", + " }\n", + " ]\n", + " },\n", + " \"promptConfiguration\": {\n", + " \"definition\": \"You are an expert evaluator. Your task is to evaluate the quality of the responses generated by AI models. We will provide you with a reference and an AI-generated response. You should first read the user input carefully for analyzing the task, and then evaluate the quality of the responses based on the criteria provided in the Evaluation section below. You will assign the response a rating following the Rating Rubric and Evaluation Steps. 
Give step-by-step explanations for your rating, and only choose ratings from the Rating Rubric.\\n\\n## Metric Definition\\nYou are an INFORMATION OVERLAP classifier providing the overlap of information between a response and reference.\\n\\n## Criteria\\nGroundedness: The overlap of information between a response generated by AI models and provided reference.\\n\\n## Rating Rubric\\n5: (Fully grounded). The response and the reference are fully overlapped.\\n4: (Mostly grounded). The response and the reference are mostly overlapped.\\n3: (Somewhat grounded). The response and the reference are somewhat overlapped.\\n2: (Poorly grounded). The response and the reference are slightly overlapped.\\n1: (Not grounded). There is no overlap between the response and the reference.\\n\\n## Evaluation Steps\\nSTEP 1: Assess the response in aspects of Groundedness. Identify any information in the response and provide assessment according to the Criteria.\\nSTEP 2: Score based on the rating rubric. Give a brief rationale to explain your evaluation considering Groundedness.\\n\\nReference: {{?reference}}\\nResponse: {{?aicore_llm_completion}}\\n\\nBegin your evaluation by providing a short explanation. Be as unbiased as possible. After providing your explanation, please rate the response according to the rubric and outputs STRICTLY following this JSON format:\\n\\n{ \\\"explanation\\\": string, \\\"rating\\\": integer }\\n\\nOutput:\\n\",\n", + " \"evaluationTask\": \"You are an expert evaluator. Your task is to evaluate the quality of the responses generated by AI models. We will provide you with a reference and an AI-generated response. You should first read the user input carefully for analyzing the task, and then evaluate the quality of the responses based on the criteria provided in the Evaluation section below. You will assign the response a rating following the Rating Rubric and Evaluation Steps. 
Give step-by-step explanations for your rating, and only choose ratings from the Rating Rubric.\\n\\n## Metric Definition\\nYou are an INFORMATION OVERLAP classifier providing the overlap of information between a response and reference.\\n\\n## Criteria\\nGroundedness: The overlap of information between a response generated by AI models and provided reference.\\n\\n## Rating Rubric\\n5: (Fully grounded). The response and the reference are fully overlapped.\\n4: (Mostly grounded). The response and the reference are mostly overlapped.\\n3: (Somewhat grounded). The response and the reference are somewhat overlapped.\\n2: (Poorly grounded). The response and the reference are slightly overlapped.\\n1: (Not grounded). There is no overlap between the response and the reference.\\n\\n## Evaluation Steps\\nSTEP 1: Assess the response in aspects of Groundedness. Identify any information in the response and provide assessment according to the Criteria.\\nSTEP 2: Score based on the rating rubric. Give a brief rationale to explain your evaluation considering Groundedness.\\n\\nReference: {{?reference}}\\nResponse: {{?aicore_llm_completion}}\\n\\nBegin your evaluation by providing a short explanation. Be as unbiased as possible. After providing your explanation, please rate the response according to the rubric and outputs STRICTLY following this JSON format:\\n\\n{ \\\"explanation\\\": string, \\\"rating\\\": integer }\\n\\nOutput:\\n\",\n", + " \"criteria\": \"You should strictly follow the instruction given to you. 
Please act as an impartial judge and evaluate the quality of the responses based on the prompt and following criteria:\",\n", + " \"ratingRubric\": [\n", + " {\n", + " \"rating\": 3,\n", + " \"rule\": \"Response is completely factual with no unsupported claims\"\n", + " },\n", + " {\n", + " \"rating\": 2,\n", + " \"rule\": \"Response has minor inaccuracies but no major contradictions\"\n", + " },\n", + " {\n", + " \"rating\": 1,\n", + " \"rule\": \"Response contains significant factual errors or hallucinations\"\n", + " }\n", + " ]\n", + " }\n", + " }\n", + " }\n", + " }\n", + "]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "import json\n", + "import requests\n", + "\n", + "\n", + "# --- Fetch all metrics from SAP AI Core ---\n", + "def fetch_all_metrics():\n", + " request_url = f\"{AICORE_BASE_URL}/v2/lm/evaluationMetrics\"\n", + " resp = requests.get(request_url, headers=_get_headers())\n", + " resp.raise_for_status()\n", + " return resp.json().get(\"resources\", [])\n", + "\n", + "# --- Create or fetch a metric ---\n", + "def create_or_get_metric(custom_metric, user_metric_id=None):\n", + " all_metrics = fetch_all_metrics()\n", + "\n", + " # 1️⃣ User-supplied ID lookup\n", + " if user_metric_id:\n", + " for m in all_metrics:\n", + " if m.get(\"id\") == user_metric_id:\n", + " print(f\"✅ Metric already exists by ID: {user_metric_id}\")\n", + " return user_metric_id\n", + " print(f\"⚠️ User metric ID {user_metric_id} not found, will only include if valid later\")\n", + "\n", + " # 2️⃣ Check by scenario, name, version\n", + " scenario = custom_metric.get(\"scenario\")\n", + " name = custom_metric.get(\"name\")\n", + " version = custom_metric.get(\"version\")\n", + " if not all([scenario, name, version]):\n", + " raise ValueError(\"Metric must include 'scenario', 'name', and 'version'\")\n", + "\n", + " for m in all_metrics:\n", + " if (m.get(\"scenario\") == scenario and\n", + " 
m.get(\"name\") == name and\n", + " m.get(\"version\") == version):\n", + " metric_id = m.get(\"id\")\n", + " print(f\"✅ Metric already exists: {scenario}/{name} v{version}, ID = {metric_id}\")\n", + " return metric_id\n", + "\n", + " # 3️⃣ Create metric if not found\n", + " request_url = f\"{AICORE_BASE_URL}/v2/lm/evaluationMetrics\"\n", + " required_fields = [\"scenario\", \"name\", \"version\", \"evaluationMethod\", \"metricType\"]\n", + " for f in required_fields:\n", + " if f not in custom_metric:\n", + " raise ValueError(f\"❌ Missing required field: {f}\")\n", + "\n", + " resp = requests.post(request_url, headers=_get_headers(), json=custom_metric)\n", + " resp.raise_for_status()\n", + " metric_id = resp.json().get(\"id\")\n", + " print(f\"✅ Metric created successfully: {name} v{version}, ID = {metric_id}\")\n", + " return metric_id\n", + "\n", + "# --- Main pipeline ---\n", + "\n", + "# 1️⃣ Create/fetch metrics from SAP AI Core\n", + "metric_ids = []\n", + "for metric in custom_metric_list:\n", + " try:\n", + " print(f\"metric:{metric}\")\n", + " metric_id = create_or_get_metric(metric)\n", + " metric_ids.append(metric_id)\n", + " except ValueError as e:\n", + " print(f\"Skipping metric due to error: {e}\")\n", + "\n", + "# 2️⃣ Validate user_metric_ids separately if provided\n", + "if user_metric_ids and user_metric_ids.strip():\n", + " all_metrics = fetch_all_metrics()\n", + " # Split comma-separated IDs and strip whitespace\n", + " for uid in [uid.strip() for uid in user_metric_ids.split(\",\")]:\n", + " if any(m.get(\"id\") == uid for m in all_metrics):\n", + " metric_ids.append(uid)\n", + " else:\n", + " print(f\"⚠️ User metric ID {uid} does not exist in AI Core, skipping.\")\n", + "# 3️⃣ Convert to comma-separated string\n", + "custom_metric_ids_str = \",\".join(metric_ids)\n", + "print(\"✅ All processed metric IDs:\", custom_metric_ids_str)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create Orchestration Registry 
Configuration\n", + "\n", + "The following code defines a function `create_orchestration_registry_config()` that creates a new **Orchestration Configuration** in the **Orchestration Registry**.\n", + "\n", + "**Note** : If you wish to use an existing orchestration config, skip executing this cell and set the orchestration config ID in the `orchestration_registry_id` string in the next cell." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def create_orchestration_registry_config():\n", + " headers = _get_headers()\n", + " prompt_template = {\n", + " \"template\": [\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": \"List the benefits and side effects of the drug in the following consumer health question: {{?question}}.\"\n", + " }\n", + " ]\n", + " }\n", + " CREATE_ORCHESTRATION_REGISTRY = '/v2/registry/v2/orchestrationConfigs'\n", + " request_url = f\"{AICORE_BASE_URL}{CREATE_ORCHESTRATION_REGISTRY}\"\n", + " model_name,model_version=selected_models_str.split(\":\")\n", + " request_body = {\n", + " \"name\": \"genai-eval-test\",\n", + " \"version\": \"1.0.0\",\n", + " \"scenario\": \"genai-evaluations\",\n", + " \"spec\": {\n", + " \"modules\": {\n", + " \"prompt_templating\": {\n", + " \"model\": {\n", + " \"name\": model_name,\n", + " \"version\": model_version\n", + " },\n", + " \"prompt\": prompt_template\n", + " }\n", + " }\n", + " }\n", + " }\n", + " try:\n", + " response = requests.post(\n", + " request_url, headers=headers, data=json.dumps(request_body), timeout=120\n", + " )\n", + " if(response.status_code != 200):\n", + " print(response.json())\n", + " raise\n", + " result = response.json()\n", + " print(result)\n", + " return result['id']\n", + " except:\n", + " logging.error(\"Error occurred while attempting to create an orchestration registry config\")\n", + " raise\n", + "orchestration_registry_id = create_orchestration_registry_config()" + ] + }, + { + "cell_type": "code", + 
"execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Manually set orchestration config id\n", + "# orchestration_registry_id=\"\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Evaluation Configuration Creation" + ] + }, + { + "cell_type": "code", + "execution_count": 80, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Selected metrics: Pointwise Answer Relevance,Pointwise Instruction Following\n", + "Selected models: gemini-2.5-pro:001\n" + ] + } + ], + "source": [ + "\n", + "import json\n", + "test_data_path = f\"testdata/{DATASET_NAME}\" # specify the test data path here; to use the whole folder, just specify 'testdata'\n", + "test_datasets = json.dumps({'path': test_data_path, 'type': 'csv'})\n", + "metrics_list = \",\".join(filter(None, [selected_metrics_str, custom_metric_ids_str])) # drop empty entries to avoid stray commas\n", + "models_list = selected_models_str\n", + "print(f\"Selected metrics: {metrics_list}\")\n", + "print(f\"Selected models: {models_list}\")\n", + "#variable_mapping = json.dumps({'prompt/question': 'data/topic'}) # to map the question prompt variable to an entry in the dataset.\n", + "# orchestration_deployment_url = deployment_url # set this to use a specific deployment\n", + "orchestration_deployment_url = deployment_url\n", + "repetitions = \"1\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# creating an AICORE Configuration.\n", + "import requests\n", + "\n", + "request_body = {\n", + " \"name\": \"genai-eval-conf\",\n", + " \"scenarioId\": \"genai-evaluations\",\n", + " \"executableId\": \"genai-evaluations-simplified\",\n", + " \"inputArtifactBindings\": [\n", + " {\n", + " \"key\": \"datasetFolder\",\n", + " \"artifactId\": artifact_id\n", + " }\n", + " ],\n", + " \"parameterBindings\": [\n", + " {\n", + " \"key\": \"repetitions\",\n", + " \"value\": repetitions\n", + " },\n", + " {\n", + 
\"key\": \"orchestrationDeploymentURL\",\n", + " \"value\": orchestration_deployment_url\n", + " },\n", + " {\n", + " \"key\": \"metrics\",\n", + " \"value\": metrics_list\n", + " },\n", + " {\n", + " \"key\": \"testDataset\",\n", + " \"value\": test_datasets\n", + " },\n", + " {\n", + " \"key\": \"orchestrationRegistryIds\",\n", + " \"value\": orchestration_registry_id\n", + " }\n", + " ]\n", + "}\n", + "\n", + "def create_aicore_configuration():\n", + " headers = _get_headers()\n", + " GET_CONFIGURATIONS_ENDPOINT = '/v2/lm/configurations'\n", + " request_url = f\"{AICORE_BASE_URL}{GET_CONFIGURATIONS_ENDPOINT}\"\n", + " try:\n", + " response = requests.post(\n", + " request_url, headers=headers, data=json.dumps(request_body), timeout=120\n", + " )\n", + " print(response)\n", + " if(response.status_code != 201):\n", + " raise\n", + " result = response.json()\n", + " print(result)\n", + " return result['id']\n", + " except:\n", + " logging.error(\"Error occurred while attempting to create a Configuration\")\n", + " raise\n", + " \n", + "configuration_id = create_aicore_configuration()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Evaluation Execution Creation\n", + "Once the configuration is created, we create the AI Core execution, which triggers the evaluation workload.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# create an execution with the created configuration.\n", + "\n", + "import requests\n", + "def create_execution():\n", + " headers = _get_headers()\n", + " GET_EXECUTIONS_ENDPOINT = '/v2/lm/executions'\n", + " request_url = f\"{AICORE_BASE_URL}{GET_EXECUTIONS_ENDPOINT}\"\n", + " request_body = {\"configurationId\" : configuration_id} \n", + " try:\n", + " response = requests.post(\n", + " request_url, headers=headers, data=json.dumps(request_body), timeout=120\n", + " )\n", + " print(\"response received is \", response)\n", + " result = response.json()\n", 
+ " print(result)\n", + " return result['id']\n", + " except:\n", + " logging.error(\"Error occurred while attempting to create an execution\")\n", + " raise\n", + " \n", + "\n", + "execution_id = create_execution()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# get execution status\n", + "import requests\n", + "def get_execution_status(execution_id):\n", + " headers = _get_headers()\n", + " LOG_EXECUTIONS_ENDPOINT = f'/v2/lm/executions/{execution_id}'\n", + " request_url = f\"{AICORE_BASE_URL}{LOG_EXECUTIONS_ENDPOINT}\"\n", + " try:\n", + " response = requests.get(\n", + " request_url, headers=headers, timeout=120\n", + " )\n", + " print(\"response received is \", response)\n", + " result = response.json()\n", + " return result\n", + " except:\n", + " logging.error(\"Error occurred while attempting to get execution status\")\n", + " raise\n", + " \n", + "\n", + "get_execution_status(execution_id)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "1. Run the following cells only when the status field in the Execution response is \"COMPLETED\" to view the results.\n", + "2. The status field progresses through different states over time: UNKNOWN → PENDING → RUNNING → COMPLETED. Ensure it reaches COMPLETED before proceeding.\n", + "\n", + "\n", + "Note: The targetStatus will always be COMPLETED from the start, as it represents the intended final state of the Execution. Do not confuse it with the actual status field.\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Evaluation Result\n", + "The evaluation job produces two outputs\n", + "1. A SQLite DB file which stores the orchestration input, orchestration output, values for all the metrics calculated for this orchestration output and statistics such as latency for this orchestration output. These metric values are called raw metric values. 
This SQLite DB file is stored in the object store as an AI Core output artifact.\n", + "2. A set of metrics whose values are aggregated from the raw metric values. The aggregate metrics are stored in the tracking service. The user-defined tags along with the run names are stored with the metrics.\n", + "After the execution completes, the user can see the runs generated by the workload, along with the aggregate metrics, by calling the tracking API as shown below" + ] + }, + { + "cell_type": "code", + "execution_count": 90, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "response received is \n" + ] + } + ], + "source": [ + "# Get aggregate metrics using execution id\n", + "import requests\n", + "def retrieve_aggregate_metrics(execution_id):\n", + " headers = _get_headers()\n", + " GET_METRICS_ENDPOINT = f'/v2/lm/metrics?tagFilters=evaluation.ai.sap.com/child-of={execution_id}'\n", + " request_url = f\"{AICORE_BASE_URL}{GET_METRICS_ENDPOINT}\"\n", + " try:\n", + " response = requests.get(request_url, headers=headers, timeout=120)\n", + " print(\"response received is \", response)\n", + " result = response.json()\n", + " return result\n", + " except:\n", + " logging.error(\"Error occurred while attempting to retrieve aggregate metrics for the run\")\n", + " raise\n", + "\n", + "runs_data = retrieve_aggregate_metrics(execution_id)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To drill down further, you can also download the SQLite DB file from object storage and analyse the results (instance-level metrics, logs, etc.) locally."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# download the result artifacts from Object store.\n", + "import boto3\n", + "\n", + "def download_all_objects(prefix, destination_folder):\n", + " \"\"\"\n", + " Recursively download all objects from an S3 bucket starting with a specific prefix.\n", + "\n", + " :param prefix: Prefix to filter objects in the bucket (the bucket name is taken from AWS_BUCKET_ID).\n", + " :param destination_folder: Local folder to save the downloaded files.\n", + " \"\"\"\n", + " s3_client = boto3.client(\n", + " 's3',\n", + " aws_access_key_id=AWS_ACCESS_KEY,\n", + " aws_secret_access_key=AWS_SECRET_ACCESS_KEY,\n", + " region_name=AWS_REGION\n", + " )\n", + "\n", + " # Ensure the destination folder exists\n", + " if not os.path.exists(destination_folder):\n", + " os.makedirs(destination_folder)\n", + "\n", + " # Paginate through objects\n", + " paginator = s3_client.get_paginator('list_objects_v2')\n", + " pages = paginator.paginate(Bucket=AWS_BUCKET_ID, Prefix=prefix)\n", + "\n", + " for page in pages:\n", + " if 'Contents' in page:\n", + " for obj in page['Contents']:\n", + " key = obj['Key']\n", + " local_file_path = os.path.join(destination_folder, os.path.relpath(key, prefix))\n", + "\n", + " # Ensure the local directory structure exists\n", + " local_directory = os.path.dirname(local_file_path)\n", + " if not os.path.exists(local_directory):\n", + " os.makedirs(local_directory)\n", + "\n", + " # Download the object\n", + " print(f\"Downloading {key} to {local_file_path}\")\n", + " s3_client.download_file(AWS_BUCKET_ID, key, local_file_path)\n", + "\n", + "\n", + "# Download the evaluation results from the object store. 
Look at execution status under \"outputArtifacts\" key to see the 'url'\n", + "# which shows the data path of where your output results are stored\n", + "EXECUTION_ID = execution_id\n", + "sqlite_db_prefix = f'{EXECUTION_ID}/tmp/' # change the prefix based on where your output artifact is stored in the bucket.\n", + "destination_folder = 'results-new'\n", + "\n", + "download_all_objects(sqlite_db_prefix, destination_folder)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "NOTE: The cell below shows the top 5 rows of the evaluation results across all SQLite tables. If you wish to see all the entries, you can comment out the `df.head(5)` line in the cell below or modify the number accordingly." + ] + }, + { + "cell_type": "code", + "execution_count": 95, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "\n", + "\n", + "
\n", + "\n", + "
\n", + "

Table: run

\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
idnameconfigtagscreated_atupdated_at
1571f78d465d4d53961f08758a243bb8Run-genai-eval-test-gemini-2.5-pro-001{\"modules\": {\"prompt_templating\": {\"prompt\": {\"template\": [{\"content\": \"List the benefits and side effects of the drug in the following consumer health question: {{?question}}.\", \"role\": \"user\"}]}, \"model\": {\"name\": \"gemini-2.5-pro\", \"version\": \"001\", \"timeout\": 600, \"max_retries\": 2}}}}{}2026-02-10 07:23:05.0367702026-02-10 07:23:05.036775
19722d52bde94ac488b1bd8abbd5bec9Run-genai-eval-test-gemini-2.5-pro-001{\"modules\": {\"prompt_templating\": {\"prompt\": {\"template\": [{\"content\": \"{{?question}}\", \"role\": \"user\"}]}, \"model\": {\"name\": \"gemini-2.5-pro\", \"version\": \"001\", \"timeout\": 600, \"max_retries\": 2}}}}{}2026-02-10 07:23:05.0367792026-02-10 07:23:05.036779
\n", + "
\n", + " \n", + "
\n", + "

Table: configuration

\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
idtest_datasetsmetricsvariable_mappingtagsorchestration_deployment_urlrepetitionsmetric_templatescreated_atupdated_at
7ea46306da41430b88d3e4d3e83554c9{\"path\": \"testdata/medicalqna_dataset.csv\", \"type\": \"csv\"}[\"Pointwise Answer Relevance\", \"Pointwise Instruction Following\"]{}{}https://api.ai.aicore-pr.eu-west-1.mlf-aws-dev.com/v2/inference/deployments/d0d6f232abfea6721[{\"evaluationMethod\": \"llm-as-a-judge\", \"scenario\": \"genai-evaluations\", \"createdAt\": \"2025-11-19 00:00:00+00:00\", \"managedBy\": \"imperative\", \"metricType\": \"evaluation\", \"systemPredefined\": true, \"id\": \"0ae30283-0140-451e-8a88-267ef801f35c\", \"name\": \"Pointwise Answer Relevance\", \"description\": \"Measures how closely the model\\u2019s response relates to the user prompt, for both general and RAG use cases. Scores range from 1 to 5, with higher values indicating greater relevance.\", \"version\": \"1.0.0\", \"spec\": {\"outputType\": \"categorical\", \"promptType\": \"structured\", \"configuration\": {\"modelConfiguration\": {\"name\": \"gpt-4.1\", \"version\": \"2025-04-14\", \"parameters\": [{\"key\": \"temperature\", \"value\": \"0\"}]}, \"promptConfiguration\": {\"evaluationTask\": \"You are an expert evaluator. 
Your task is to evaluate the relevance of responses generated by AI models.\\nWe will provide you with the user input and an AI-generated response.\\nYou should first read the user input carefully to understand the context and intention, and then evaluate the relevance of the response based on the criteria provided in the Evaluation section below.\\nYou will assign the response a rating following the Rating Rubric and Evaluation Steps.\\nGive step-by-step explanations for your rating, and only choose ratings from the Rating Rubric.\", \"definition\": \"You will be assessing relevance, which measures the ability to provide a response that is pertinent and useful based on the user prompt and the context provided.\", \"criteria\": \"Relevance: Does the response address the user's query appropriately and provide pertinent information?\", \"ratingRubric\": [{\"rating\": \"1\", \"rule\": \"(Irrelevant). The response is irrelevant and does not address the user's query.\"}, {\"rating\": \"2\", \"rule\": \"(Slightly relevant). The response is slightly relevant and largely misses the user's query.\"}, {\"rating\": \"3\", \"rule\": \"(Somewhat relevant). The response is somewhat relevant but may miss key aspects of the user's query.\"}, {\"rating\": \"4\", \"rule\": \"(Mostly relevant). The response is mostly relevant and generally addresses the user's query with useful information.\"}, {\"rating\": \"5\", \"rule\": \"(Highly relevant). The response is highly relevant, directly addresses the user's query, and provides useful information.\"}], \"evaluationSteps\": [\"Assess the response in terms of Relevance. Identify how well the response aligns with the user's query and context according to the Criteria.\", \"Score based on the rating rubric. 
Give a brief rationale to explain your evaluation considering Relevance.\"]}}}, \"usageType\": [\"evaluation\"], \"additionalProperties\": {\"variables\": [], \"supported_values\": [1, 5], \"experimental\": true}}, {\"evaluationMethod\": \"llm-as-a-judge\", \"scenario\": \"genai-evaluations\", \"createdAt\": \"2025-11-19 00:00:00+00:00\", \"managedBy\": \"imperative\", \"metricType\": \"evaluation\", \"systemPredefined\": true, \"id\": \"cd3ffd21-faae-4f06-8184-52541182d9a5\", \"name\": \"Pointwise Instruction Following\", \"description\": \"Evaluates the model\\u2019s ability to follow the instructions provided in the user prompt. Scores range from 1 to 5, with 1 indicating no fulfillment and 5 indicating complete fulfillment.\", \"version\": \"1.0.0\", \"spec\": {\"outputType\": \"categorical\", \"promptType\": \"structured\", \"configuration\": {\"modelConfiguration\": {\"name\": \"gpt-4.1\", \"version\": \"2025-04-14\", \"parameters\": [{\"key\": \"temperature\", \"value\": \"0\"}]}, \"promptConfiguration\": {\"evaluationTask\": \"Please act as an impartial judge and evaluate the quality of the responses based on the prompt and following criteria:\", \"definition\": \"You will be assessing model's the ability to follow instructions provided in the user prompt.\", \"criteria\": \"Instruction following: The response demonstrates a clear understanding of the instructions in the user prompt, satisfying all of the instruction's requirements. Evaluate the responses STRICTLY on the ability to follow instruction ONLY.\", \"ratingRubric\": [{\"rating\": \"1\", \"rule\": \"(No fulfillment). Response does not address the most important aspects of the instruction. The user would feel like their request was not at all understood.\"}, {\"rating\": \"2\", \"rule\": \"(Poor fulfillment). Response addresses some aspects of the instruction but misses key requirements or major components. 
The user would feel like their instruction was misunderstood in significant ways.\"}, {\"rating\": \"3\", \"rule\": \"(Some fulfillment). Response does not address some minor aspects and/or ignores some requirements of the instruction. The user would feel like their instruction was partially understood.\"}, {\"rating\": \"4\", \"rule\": \"(Good fulfillment). Response addresses most aspects and requirements of the instruction. It might miss very minor details or have slight deviations from requirements. The user would feel like their instruction was well understood.\"}, {\"rating\": \"5\", \"rule\": \"(Complete fulfillment). Response addresses all aspects and adheres to all requirements of the instruction. The user would feel like their instruction was completely understood.\"}]}}}, \"usageType\": [\"evaluation\"], \"additionalProperties\": {\"variables\": [], \"supported_values\": [1, 5], \"experimental\": false}}]2026-02-10 07:23:05.0276612026-02-10 07:23:05.027666
\n", + "
\n", + " \n", + "
\n", + "

Table: submission

\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
idrun_idorchestration_configurationtemplate_variablescreated_atupdated_at
b5f494721730469f922759caf919d4701571f78d465d4d53961f08758a243bb8{\"modules\": {\"prompt_templating\": {\"prompt\": {\"template\": [{\"content\": \"List the benefits and side effects of the drug in the following consumer health question: {{?question}}.\", \"role\": \"user\"}]}, \"model\": {\"name\": \"gemini-2.5-pro\", \"version\": \"001\", \"timeout\": 600, \"max_retries\": 2}}}}{\"question\": \"how does rivatigmine and otc sleep medicine interact\", \"sentiment\": \"Interaction\", \"reference\": \"tell your doctor and pharmacist what prescription and nonprescription medications, vitamins, nutritional supplements, and herbal products you are taking or plan to take. Be sure to mention any of the following: antihistamines; aspirin and other nonsteroidal anti-inflammatory medications (NSAIDs) such as ibuprofen (Advil, Motrin) and naproxen (Aleve, Naprosyn); bethanechol (Duvoid, Urecholine); ipratropium (Atrovent, in Combivent, DuoNeb); and medications for Alzheimer's disease, glaucoma, irritable bowel disease, motion sickness, ulcers, or urinary problems. Your doctor may need to change the doses of your medications or monitor you carefully for side effects.\"}2026-02-10 07:23:05.0467472026-02-10 07:23:05.046749
cb37a740df1b4a43a50ce2bf6720eda01571f78d465d4d53961f08758a243bb8{\"modules\": {\"prompt_templating\": {\"prompt\": {\"template\": [{\"content\": \"List the benefits and side effects of the drug in the following consumer health question: {{?question}}.\", \"role\": \"user\"}]}, \"model\": {\"name\": \"gemini-2.5-pro\", \"version\": \"001\", \"timeout\": 600, \"max_retries\": 2}}}}{\"question\": \"how does valium affect the brain\", \"sentiment\": \"Action\", \"reference\": \"Diazepam is a benzodiazepine that exerts anxiolytic, sedative, muscle-relaxant, anticonvulsant and amnestic effects. Most of these effects are thought to result from a facilitation of the action of gamma aminobutyric acid (GABA), an inhibitory neurotransmitter in the central nervous system.\"}2026-02-10 07:23:05.0467532026-02-10 07:23:05.046753
fe41557ffc8d410681a10dee1da5bc691571f78d465d4d53961f08758a243bb8{\"modules\": {\"prompt_templating\": {\"prompt\": {\"template\": [{\"content\": \"List the benefits and side effects of the drug in the following consumer health question: {{?question}}.\", \"role\": \"user\"}]}, \"model\": {\"name\": \"gemini-2.5-pro\", \"version\": \"001\", \"timeout\": 600, \"max_retries\": 2}}}}{\"question\": \"what is morphine\", \"sentiment\": \"Information\", \"reference\": \"Morphine is a pain medication of the opiate family which is found naturally in a number of plants and animals.[5][7] It acts directly on the central nervous system (CNS) to decrease the feeling of pain.\"}2026-02-10 07:23:05.0467552026-02-10 07:23:05.046756
03d1b9791ee640f088980fd7cb6426a41571f78d465d4d53961f08758a243bb8{\"modules\": {\"prompt_templating\": {\"prompt\": {\"template\": [{\"content\": \"List the benefits and side effects of the drug in the following consumer health question: {{?question}}.\", \"role\": \"user\"}]}, \"model\": {\"name\": \"gemini-2.5-pro\", \"version\": \"001\", \"timeout\": 600, \"max_retries\": 2}}}}{\"question\": \"what are the milligrams for oxycodone e\", \"sentiment\": \"Dose\", \"reference\": \"\\ufffd 10 mg \\ufffd 20 mg \\ufffd 40 mg \\ufffd 80 mg ...\"}2026-02-10 07:23:05.0467582026-02-10 07:23:05.046758
d2c9940f373d423b80eb75d1ccc39ad91571f78d465d4d53961f08758a243bb8{\"modules\": {\"prompt_templating\": {\"prompt\": {\"template\": [{\"content\": \"List the benefits and side effects of the drug in the following consumer health question: {{?question}}.\", \"role\": \"user\"}]}, \"model\": {\"name\": \"gemini-2.5-pro\", \"version\": \"001\", \"timeout\": 600, \"max_retries\": 2}}}}{\"question\": \"81% aspirin contain resin and shellac in it. ?\", \"sentiment\": \"Ingredient\", \"reference\": \"Inactive Ingredients Ingredient Name\"}2026-02-10 07:23:05.0467602026-02-10 07:23:05.046760
\n", + "
\n", + " \n", + "
\n", + "

Table: submission_result

\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
submission_idrun_idrepetition_countcompletion_resultlatencycreated_atupdated_at
b5f494721730469f922759caf919d4701571f78d465d4d53961f08758a243bb81{\"request_id\": \"45f4160b-4fb2-9116-abbd-65a96bb43f04\", \"intermediate_results\": {\"templating\": [{\"content\": \"List the benefits and side effects of the drug in the following consumer health question: how does rivatigmine and otc sleep medicine interact.\", \"role\": \"user\"}], \"llm\": {\"id\": \"\", \"object\": \"chat.completion\", \"created\": 1770708295, \"model\": \"gemini-2.5-pro\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"Of course. Here is a breakdown of the benefits and side effects of the drugs involved in the question, followed by an explanation of their interaction.\\n\\n***Disclaimer: This information is for educational purposes only and is not a substitute for professional medical advice. Always consult with a healthcare provider or pharmacist before starting, stopping, or combining any medications.***\\n\\n### The Interaction: How Rivastigmine and OTC Sleep Medicine Interact\\n\\nThe interaction between rivastigmine and most over-the-counter (OTC) sleep medicines is significant and potentially harmful. They work in opposite ways in the brain.\\n\\n* **Rivastigmine's Job:** To **increase** the levels of a brain chemical called acetylcholine, which is crucial for memory and thinking.\\n* **OTC Sleep Medicine's Job:** Most OTC sleep aids (like ZzzQuil, Unisom, or Benadryl) contain antihistamines (diphenhydramine or doxylamine) that work by **blocking** the action of acetylcholine. This blocking effect causes drowsiness, but it also leads to confusion, dry mouth, and other side effects.\\n\\nBecause these drugs have opposing effects, taking them together can:\\n1. **Cancel out the benefits of rivastigmine**, making it less effective at treating the symptoms of dementia.\\n2. 
**Worsen cognitive side effects**, such as confusion, memory problems, and disorientation, which is particularly dangerous for a person already being treated for dementia.\\n\\n---\\n\\n### Drug 1: Rivastigmine (e.g., Exelon)\\n\\nThis is a prescription medication used to treat dementia.\\n\\n#### **Benefits:**\\n* **Treats Symptoms of Dementia:** Used for mild to moderate dementia associated with Alzheimer's disease and Parkinson's disease.\\n* **Improves Cognitive Function:** It can help improve or slow the decline of memory, thinking, attention, and the ability to perform simple daily tasks.\\n* **Improves Behavioral Symptoms:** In some patients, it can help manage behavioral symptoms associated with dementia, such as agitation or apathy.\\n* **It is important to note that rivastigmine is not a cure**; it only helps manage the symptoms.\\n\\n#### **Common Side Effects:**\\n* Nausea, vomiting, diarrhea\\n* Loss of appetite and weight loss\\n* Stomach pain or upset\\n* Dizziness or headache\\n* Weakness or fatigue\\n* **For the patch form:** Skin redness, itching, or irritation at the application site.\\n\\n---\\n\\n### Drug 2: Common OTC Sleep Medicines (Antihistamines)\\n\\nThese are non-prescription drugs used for short-term sleeplessness. 
The most common active ingredients are **Diphenhydramine** (found in Benadryl, ZzzQuil, Aleve PM) and **Doxylamine** (found in Unisom SleepTabs).\\n\\n#### **Benefits:**\\n* **Induces Drowsiness:** Helps a person fall asleep more easily.\\n* **Relieves Short-Term Insomnia:** Effective for occasional sleeplessness caused by stress, travel, or other temporary disruptions.\\n* **Widely Accessible:** Available over-the-counter without a prescription.\\n\\n#### **Common Side Effects (especially problematic in older adults):**\\n* **Cognitive Impairment:** **Confusion, memory problems, and difficulty concentrating.**\\n* **\\\"Hangover Effect\\\":** Next-day drowsiness, grogginess, and poor coordination.\\n* **Anticholinergic Effects:**\\n * Dry mouth, dry eyes\\n * Blurred vision\\n * Constipation\\n * Difficulty urinating (urinary retention)\\n* Dizziness and lightheadedness, which can increase the risk of falls.\"}, \"finish_reason\": \"stop\"}], \"usage\": {\"completion_tokens\": 2346, \"prompt_tokens\": 29, \"total_tokens\": 2375, \"prompt_tokens_details\": {\"cached_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 1561}}}}, \"final_result\": {\"id\": \"\", \"object\": \"chat.completion\", \"created\": 1770708295, \"model\": \"gemini-2.5-pro\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"Of course. Here is a breakdown of the benefits and side effects of the drugs involved in the question, followed by an explanation of their interaction.\\n\\n***Disclaimer: This information is for educational purposes only and is not a substitute for professional medical advice. Always consult with a healthcare provider or pharmacist before starting, stopping, or combining any medications.***\\n\\n### The Interaction: How Rivastigmine and OTC Sleep Medicine Interact\\n\\nThe interaction between rivastigmine and most over-the-counter (OTC) sleep medicines is significant and potentially harmful. 
They work in opposite ways in the brain.\\n\\n* **Rivastigmine's Job:** To **increase** the levels of a brain chemical called acetylcholine, which is crucial for memory and thinking.\\n* **OTC Sleep Medicine's Job:** Most OTC sleep aids (like ZzzQuil, Unisom, or Benadryl) contain antihistamines (diphenhydramine or doxylamine) that work by **blocking** the action of acetylcholine. This blocking effect causes drowsiness, but it also leads to confusion, dry mouth, and other side effects.\\n\\nBecause these drugs have opposing effects, taking them together can:\\n1. **Cancel out the benefits of rivastigmine**, making it less effective at treating the symptoms of dementia.\\n2. **Worsen cognitive side effects**, such as confusion, memory problems, and disorientation, which is particularly dangerous for a person already being treated for dementia.\\n\\n---\\n\\n### Drug 1: Rivastigmine (e.g., Exelon)\\n\\nThis is a prescription medication used to treat dementia.\\n\\n#### **Benefits:**\\n* **Treats Symptoms of Dementia:** Used for mild to moderate dementia associated with Alzheimer's disease and Parkinson's disease.\\n* **Improves Cognitive Function:** It can help improve or slow the decline of memory, thinking, attention, and the ability to perform simple daily tasks.\\n* **Improves Behavioral Symptoms:** In some patients, it can help manage behavioral symptoms associated with dementia, such as agitation or apathy.\\n* **It is important to note that rivastigmine is not a cure**; it only helps manage the symptoms.\\n\\n#### **Common Side Effects:**\\n* Nausea, vomiting, diarrhea\\n* Loss of appetite and weight loss\\n* Stomach pain or upset\\n* Dizziness or headache\\n* Weakness or fatigue\\n* **For the patch form:** Skin redness, itching, or irritation at the application site.\\n\\n---\\n\\n### Drug 2: Common OTC Sleep Medicines (Antihistamines)\\n\\nThese are non-prescription drugs used for short-term sleeplessness. 
The most common active ingredients are **Diphenhydramine** (found in Benadryl, ZzzQuil, Aleve PM) and **Doxylamine** (found in Unisom SleepTabs).\\n\\n#### **Benefits:**\\n* **Induces Drowsiness:** Helps a person fall asleep more easily.\\n* **Relieves Short-Term Insomnia:** Effective for occasional sleeplessness caused by stress, travel, or other temporary disruptions.\\n* **Widely Accessible:** Available over-the-counter without a prescription.\\n\\n#### **Common Side Effects (especially problematic in older adults):**\\n* **Cognitive Impairment:** **Confusion, memory problems, and difficulty concentrating.**\\n* **\\\"Hangover Effect\\\":** Next-day drowsiness, grogginess, and poor coordination.\\n* **Anticholinergic Effects:**\\n * Dry mouth, dry eyes\\n * Blurred vision\\n * Constipation\\n * Difficulty urinating (urinary retention)\\n* Dizziness and lightheadedness, which can increase the risk of falls.\"}, \"finish_reason\": \"stop\"}], \"usage\": {\"completion_tokens\": 2346, \"prompt_tokens\": 29, \"total_tokens\": 2375, \"prompt_tokens_details\": {\"cached_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 1561}}}}28.0859652026-02-10 07:31:29.3301122026-02-10 07:31:29.330117
cb37a740df1b4a43a50ce2bf6720eda01571f78d465d4d53961f08758a243bb81{\"request_id\": \"8c78b558-11d3-9c9b-867f-855688bbdcf8\", \"intermediate_results\": {\"templating\": [{\"content\": \"List the benefits and side effects of the drug in the following consumer health question: how does valium affect the brain.\", \"role\": \"user\"}], \"llm\": {\"id\": \"\", \"object\": \"chat.completion\", \"created\": 1770708294, \"model\": \"gemini-2.5-pro\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"Of course. Here is a breakdown of the benefits and side effects of Valium (diazepam) based on the question of how it affects the brain.\\n\\n### How Valium (Diazepam) Affects the Brain\\n\\nValium belongs to a class of drugs called benzodiazepines. Its primary effect on the brain is to enhance the activity of a neurotransmitter called **GABA (gamma-aminobutyric acid)**.\\n\\nThink of GABA as the brain's primary \\\"braking system\\\" or calming chemical. It reduces the excitability of nerve cells. By making GABA more effective, Valium essentially \\\"turns down the volume\\\" on brain activity. 
This leads to a sedative, anti-anxiety, and muscle-relaxing effect.\\n\\n---\\n\\n### Benefits (Therapeutic Uses)\\n\\nBy slowing down excessive nerve signaling in the brain, Valium provides the following medical benefits:\\n\\n* **Anxiety Relief:** It is effective for treating generalized anxiety disorder, panic attacks, and short-term relief from severe anxiety symptoms.\\n* **Muscle Relaxation:** It calms the nerve signals that cause muscle tightness and spasms, making it useful for conditions like back pain, cerebral palsy, or other muscle injuries.\\n* **Seizure Control:** By reducing excessive electrical activity in the brain, it can be used to treat certain types of seizure disorders (epilepsy) and to stop prolonged seizures (status epilepticus).\\n* **Management of Alcohol Withdrawal:** It helps manage and reduce severe withdrawal symptoms like agitation, tremors, and seizures by providing a calming effect on the over-stimulated brain.\\n* **Sedation:** It is often used to calm patients before medical procedures or surgery, causing drowsiness and reducing anxiety.\\n\\n### Side Effects\\n\\nThe same brain-slowing mechanism that provides benefits also causes side effects.\\n\\n#### Common Side Effects:\\n\\n* **Drowsiness and Fatigue:** The most common side effect, as the drug slows overall brain function.\\n* **Dizziness and Lightheadedness:** Can affect balance and spatial awareness.\\n* **Muscle Weakness:** An extension of its muscle-relaxing properties.\\n* **Ataxia (Loss of Coordination):** Difficulty with balance, walking, and fine motor skills.\\n* **Confusion and \\\"Brain Fog\\\":** Slower thinking, difficulty concentrating, and short-term memory impairment.\\n\\n#### Less Common or More Serious Side Effects:\\n\\n* **Anterograde Amnesia:** Difficulty forming new memories while the drug is active.\\n* **Depression:** Can worsen or, in some cases, cause depressive symptoms.\\n* **Slurred Speech:** A result of reduced motor control.\\n* **Paradoxical 
Reactions:** In rare cases, it can cause the opposite effect, leading to agitation, aggression, anxiety, or hallucinations.\\n* **Respiratory Depression:** Slowed breathing, which can be dangerous, especially when Valium is combined with other depressants like alcohol or opioid painkillers.\\n\\n#### Risks Associated with Long-Term Use:\\n\\n* **Tolerance:** The body adapts to the drug, requiring higher doses to achieve the same effect.\\n* **Dependence:** The brain becomes reliant on the drug to function normally. Stopping the drug abruptly can lead to severe and potentially life-threatening **withdrawal symptoms**, including rebound anxiety, insomnia, tremors, and seizures.\\n* **Addiction:** Compulsive use of the drug despite negative consequences.\\n\\n***\\n\\n**Disclaimer:** This information is for educational purposes only and is not a substitute for professional medical advice. Valium is a potent prescription medication that should only be taken under the guidance of a healthcare provider. It is generally recommended for short-term use due to the high risk of dependence.\"}, \"finish_reason\": \"stop\"}], \"usage\": {\"completion_tokens\": 1943, \"prompt_tokens\": 24, \"total_tokens\": 1967, \"prompt_tokens_details\": {\"cached_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 1174}}}}, \"final_result\": {\"id\": \"\", \"object\": \"chat.completion\", \"created\": 1770708294, \"model\": \"gemini-2.5-pro\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"Of course. Here is a breakdown of the benefits and side effects of Valium (diazepam) based on the question of how it affects the brain.\\n\\n### How Valium (Diazepam) Affects the Brain\\n\\nValium belongs to a class of drugs called benzodiazepines. Its primary effect on the brain is to enhance the activity of a neurotransmitter called **GABA (gamma-aminobutyric acid)**.\\n\\nThink of GABA as the brain's primary \\\"braking system\\\" or calming chemical. 
It reduces the excitability of nerve cells. By making GABA more effective, Valium essentially \\\"turns down the volume\\\" on brain activity. This leads to a sedative, anti-anxiety, and muscle-relaxing effect.\\n\\n---\\n\\n### Benefits (Therapeutic Uses)\\n\\nBy slowing down excessive nerve signaling in the brain, Valium provides the following medical benefits:\\n\\n* **Anxiety Relief:** It is effective for treating generalized anxiety disorder, panic attacks, and short-term relief from severe anxiety symptoms.\\n* **Muscle Relaxation:** It calms the nerve signals that cause muscle tightness and spasms, making it useful for conditions like back pain, cerebral palsy, or other muscle injuries.\\n* **Seizure Control:** By reducing excessive electrical activity in the brain, it can be used to treat certain types of seizure disorders (epilepsy) and to stop prolonged seizures (status epilepticus).\\n* **Management of Alcohol Withdrawal:** It helps manage and reduce severe withdrawal symptoms like agitation, tremors, and seizures by providing a calming effect on the over-stimulated brain.\\n* **Sedation:** It is often used to calm patients before medical procedures or surgery, causing drowsiness and reducing anxiety.\\n\\n### Side Effects\\n\\nThe same brain-slowing mechanism that provides benefits also causes side effects.\\n\\n#### Common Side Effects:\\n\\n* **Drowsiness and Fatigue:** The most common side effect, as the drug slows overall brain function.\\n* **Dizziness and Lightheadedness:** Can affect balance and spatial awareness.\\n* **Muscle Weakness:** An extension of its muscle-relaxing properties.\\n* **Ataxia (Loss of Coordination):** Difficulty with balance, walking, and fine motor skills.\\n* **Confusion and \\\"Brain Fog\\\":** Slower thinking, difficulty concentrating, and short-term memory impairment.\\n\\n#### Less Common or More Serious Side Effects:\\n\\n* **Anterograde Amnesia:** Difficulty forming new memories while the drug is active.\\n* 
**Depression:** Can worsen or, in some cases, cause depressive symptoms.\\n* **Slurred Speech:** A result of reduced motor control.\\n* **Paradoxical Reactions:** In rare cases, it can cause the opposite effect, leading to agitation, aggression, anxiety, or hallucinations.\\n* **Respiratory Depression:** Slowed breathing, which can be dangerous, especially when Valium is combined with other depressants like alcohol or opioid painkillers.\\n\\n#### Risks Associated with Long-Term Use:\\n\\n* **Tolerance:** The body adapts to the drug, requiring higher doses to achieve the same effect.\\n* **Dependence:** The brain becomes reliant on the drug to function normally. Stopping the drug abruptly can lead to severe and potentially life-threatening **withdrawal symptoms**, including rebound anxiety, insomnia, tremors, and seizures.\\n* **Addiction:** Compulsive use of the drug despite negative consequences.\\n\\n***\\n\\n**Disclaimer:** This information is for educational purposes only and is not a substitute for professional medical advice. Valium is a potent prescription medication that should only be taken under the guidance of a healthcare provider. It is generally recommended for short-term use due to the high risk of dependence.\"}, \"finish_reason\": \"stop\"}], \"usage\": {\"completion_tokens\": 1943, \"prompt_tokens\": 24, \"total_tokens\": 1967, \"prompt_tokens_details\": {\"cached_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 1174}}}}27.5086212026-02-10 07:31:29.3301172026-02-10 07:31:29.330118
fe41557ffc8d410681a10dee1da5bc691571f78d465d4d53961f08758a243bb81{\"request_id\": \"824ae29d-8409-9f03-8d88-a5edfc13b7f0\", \"intermediate_results\": {\"templating\": [{\"content\": \"List the benefits and side effects of the drug in the following consumer health question: what is morphine.\", \"role\": \"user\"}], \"llm\": {\"id\": \"\", \"object\": \"chat.completion\", \"created\": 1770708300, \"model\": \"gemini-2.5-pro\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"Of course. Here is a breakdown of the benefits and side effects of morphine, framed as a response to the consumer health question, \\\"what is morphine.\\\"\\n\\n***\\n\\nMorphine is a powerful prescription pain medication that belongs to a class of drugs called opioids. It is derived from the opium poppy plant and works by changing the way your brain and nervous system respond to pain. Because it is very strong and can be habit-forming, it is used to treat severe pain that is not helped by other types of pain relievers.\\n\\nHere are the primary benefits and side effects associated with its use.\\n\\n### Benefits of Morphine\\n\\nWhen used as prescribed by a healthcare professional, morphine is a highly effective medication for managing severe pain. Its main benefits include:\\n\\n* **Powerful Pain Relief:** Morphine is one of the most effective pain relievers available. 
It is often used for acute pain after major surgery, serious injuries (like severe burns or trauma), or for heart attacks.\\n* **Management of Chronic Pain:** It can be used to manage severe, persistent pain, especially pain related to cancer.\\n* **Palliative and End-of-Life Care:** Morphine is a cornerstone of palliative care, where it not only relieves pain but can also ease the sensation of shortness of breath (dyspnea) in patients with terminal illnesses.\\n* **Anxiety Reduction:** By relieving severe pain, morphine can also help reduce the significant anxiety and distress that often accompany it.\\n\\n### Side Effects of Morphine\\n\\nMorphine has a significant risk of side effects, which is why it must be used under strict medical supervision. These can be divided into common and more serious categories.\\n\\n#### Common Side Effects:\\n\\n* **Drowsiness and Dizziness:** Feeling sleepy, sedated, or lightheaded is very common.\\n* **Constipation:** This is one of the most frequent and persistent side effects of all opioids.\\n* **Nausea and Vomiting:** Many people experience this, especially when first starting the medication.\\n* **Confusion or \\\"Fogginess\\\":** Difficulty thinking clearly or feeling disoriented.\\n* **Itching or Sweating:** These can occur as a reaction to the drug.\\n\\n#### Serious Side Effects (Require Immediate Medical Attention):\\n\\n* **Respiratory Depression:** This is the most dangerous side effect. Morphine can cause breathing to become dangerously slow and shallow, which can lead to unconsciousness, coma, brain damage, or death. 
The risk is much higher with large doses or when mixed with other substances like alcohol or sedatives.\\n* **Addiction, Dependence, and Tolerance:**\\n * **Dependence:** The body adapts to the drug, and stopping it suddenly can cause withdrawal symptoms (e.g., muscle aches, anxiety, sweating, nausea).\\n * **Tolerance:** Over time, a person may need higher doses of the drug to get the same level of pain relief.\\n * **Addiction:** A compulsive desire to use the drug for its euphoric effects, despite harmful consequences. This is a serious brain disease that can develop with opioid use.\\n* **Severe Low Blood Pressure:** This can cause fainting or extreme dizziness, especially when standing up.\\n* **Overdose:** Taking too much morphine can be fatal, primarily due to respiratory depression. Signs include unresponsiveness, pinpoint pupils, and slowed or stopped breathing.\\n\\n---\\n\\n**Disclaimer:** This information is for educational purposes only and is not a substitute for professional medical advice. Morphine is a controlled substance that should only be taken exactly as prescribed and monitored by a qualified healthcare provider. Always consult a doctor or pharmacist for guidance on your specific health condition and treatments.\"}, \"finish_reason\": \"stop\"}], \"usage\": {\"completion_tokens\": 2048, \"prompt_tokens\": 20, \"total_tokens\": 2068, \"prompt_tokens_details\": {\"cached_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 1284}}}}, \"final_result\": {\"id\": \"\", \"object\": \"chat.completion\", \"created\": 1770708300, \"model\": \"gemini-2.5-pro\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"Of course. Here is a breakdown of the benefits and side effects of morphine, framed as a response to the consumer health question, \\\"what is morphine.\\\"\\n\\n***\\n\\nMorphine is a powerful prescription pain medication that belongs to a class of drugs called opioids. 
It is derived from the opium poppy plant and works by changing the way your brain and nervous system respond to pain. Because it is very strong and can be habit-forming, it is used to treat severe pain that is not helped by other types of pain relievers.\\n\\nHere are the primary benefits and side effects associated with its use.\\n\\n### Benefits of Morphine\\n\\nWhen used as prescribed by a healthcare professional, morphine is a highly effective medication for managing severe pain. Its main benefits include:\\n\\n* **Powerful Pain Relief:** Morphine is one of the most effective pain relievers available. It is often used for acute pain after major surgery, serious injuries (like severe burns or trauma), or for heart attacks.\\n* **Management of Chronic Pain:** It can be used to manage severe, persistent pain, especially pain related to cancer.\\n* **Palliative and End-of-Life Care:** Morphine is a cornerstone of palliative care, where it not only relieves pain but can also ease the sensation of shortness of breath (dyspnea) in patients with terminal illnesses.\\n* **Anxiety Reduction:** By relieving severe pain, morphine can also help reduce the significant anxiety and distress that often accompany it.\\n\\n### Side Effects of Morphine\\n\\nMorphine has a significant risk of side effects, which is why it must be used under strict medical supervision. 
These can be divided into common and more serious categories.\\n\\n#### Common Side Effects:\\n\\n* **Drowsiness and Dizziness:** Feeling sleepy, sedated, or lightheaded is very common.\\n* **Constipation:** This is one of the most frequent and persistent side effects of all opioids.\\n* **Nausea and Vomiting:** Many people experience this, especially when first starting the medication.\\n* **Confusion or \\\"Fogginess\\\":** Difficulty thinking clearly or feeling disoriented.\\n* **Itching or Sweating:** These can occur as a reaction to the drug.\\n\\n#### Serious Side Effects (Require Immediate Medical Attention):\\n\\n* **Respiratory Depression:** This is the most dangerous side effect. Morphine can cause breathing to become dangerously slow and shallow, which can lead to unconsciousness, coma, brain damage, or death. The risk is much higher with large doses or when mixed with other substances like alcohol or sedatives.\\n* **Addiction, Dependence, and Tolerance:**\\n * **Dependence:** The body adapts to the drug, and stopping it suddenly can cause withdrawal symptoms (e.g., muscle aches, anxiety, sweating, nausea).\\n * **Tolerance:** Over time, a person may need higher doses of the drug to get the same level of pain relief.\\n * **Addiction:** A compulsive desire to use the drug for its euphoric effects, despite harmful consequences. This is a serious brain disease that can develop with opioid use.\\n* **Severe Low Blood Pressure:** This can cause fainting or extreme dizziness, especially when standing up.\\n* **Overdose:** Taking too much morphine can be fatal, primarily due to respiratory depression. Signs include unresponsiveness, pinpoint pupils, and slowed or stopped breathing.\\n\\n---\\n\\n**Disclaimer:** This information is for educational purposes only and is not a substitute for professional medical advice. Morphine is a controlled substance that should only be taken exactly as prescribed and monitored by a qualified healthcare provider. 
Always consult a doctor or pharmacist for guidance on your specific health condition and treatments.\"}, \"finish_reason\": \"stop\"}], \"usage\": {\"completion_tokens\": 2048, \"prompt_tokens\": 20, \"total_tokens\": 2068, \"prompt_tokens_details\": {\"cached_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 1284}}}}33.4001852026-02-10 07:31:29.3301182026-02-10 07:31:29.330120
03d1b9791ee640f088980fd7cb6426a41571f78d465d4d53961f08758a243bb81{\"request_id\": \"a0b58a8c-b636-962f-a2ca-21df8d71ccb3\", \"intermediate_results\": {\"templating\": [{\"content\": \"List the benefits and side effects of the drug in the following consumer health question: what are the milligrams for oxycodone e.\", \"role\": \"user\"}], \"llm\": {\"id\": \"\", \"object\": \"chat.completion\", \"created\": 1770708307, \"model\": \"gemini-2.5-pro\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"Of course. It is crucial to consult with a qualified healthcare provider for any questions about prescription medications like oxycodone. The information below is for educational purposes only and is not a substitute for professional medical advice, diagnosis, or treatment.\\n\\nRegarding your question about milligrams, oxycodone dosage is highly individualized and must be determined by a doctor. It depends on many factors, including:\\n\\n* The severity of the pain.\\n* The patient's age, weight, and overall health (especially liver and kidney function).\\n* Whether the patient has taken opioids before (tolerance).\\n* The specific formulation (immediate-release vs. extended-release).\\n\\nCommon strengths for immediate-release oxycodone tablets include 5 mg, 10 mg, 15 mg, 20 mg, and 30 mg. It is also available in combination with other drugs like acetaminophen (e.g., Percocet). **Never take oxycodone without a doctor's prescription and guidance.**\\n\\nHere is a list of the medical benefits and potential side effects of oxycodone.\\n\\n### Benefits (Medical Uses) of Oxycodone\\n\\nOxycodone is a powerful opioid analgesic prescribed for its primary benefit:\\n\\n* **Pain Relief:** Its main purpose is to manage moderate to severe pain that is not adequately controlled by other pain medications. It works by binding to opioid receptors in the brain and spinal cord, changing the way your body feels and responds to pain. 
It is used for:\\n * **Acute Pain:** Short-term, severe pain, such as after surgery or a major injury.\\n * **Chronic Pain:** Long-term, around-the-clock pain, often associated with conditions like cancer. For this, an extended-release formulation is typically used.\\n\\n---\\n\\n### Side Effects & Risks of Oxycodone\\n\\nOxycodone carries significant risks and a wide range of side effects, which can be categorized from common to severe.\\n\\n#### Common Side Effects\\n\\nThese are the most frequently reported side effects. While not typically life-threatening, they can be very uncomfortable.\\n\\n* **Constipation:** This is a very common and often persistent side effect of all opioids.\\n* **Drowsiness, Dizziness, or Lightheadedness:** Can impair your ability to drive or operate heavy machinery.\\n* **Nausea and Vomiting**\\n* **Headache**\\n* **Dry Mouth**\\n* **Itching or Sweating**\\n* **Feeling tired or weak (fatigue)**\\n\\n#### Serious Side Effects (Require Immediate Medical Attention)\\n\\nThese side effects can be dangerous and require you to contact a doctor or seek emergency medical help right away.\\n\\n* **Severe Respiratory Depression:** This is the most dangerous risk. Signs include slow, shallow, or stopped breathing. 
It can lead to coma and death.\\n* **Extreme Drowsiness or Inability to Wake Up**\\n* **Confusion, Hallucinations, or Severe Mood Changes**\\n* **Seizures**\\n* **Low Blood Pressure (Hypotension):** Signs include feeling faint, dizzy, or fainting.\\n* **Allergic Reaction:** Signs include rash, hives, difficulty breathing, and swelling of the face, lips, tongue, or throat.\\n\\n#### Major Risks and Long-Term Warnings\\n\\n* **Addiction, Abuse, and Dependence:** Oxycodone has a very high potential for creating physical dependence and psychological addiction.\\n * **Dependence** means your body adapts to the drug, and you will experience withdrawal symptoms if you stop taking it abruptly.\\n * **Addiction** is a brain disease characterized by compulsive drug-seeking and use despite harmful consequences.\\n* **Overdose:** Taking too much oxycodone can be fatal, primarily due to respiratory depression (stopped breathing). The risk is significantly higher when mixed with alcohol, benzodiazepines (like Xanax or Valium), or other central nervous system depressants.\\n* **Withdrawal Symptoms:** If you stop taking oxycodone suddenly after long-term use, you may experience severe withdrawal symptoms, including muscle aches, anxiety, insomnia, sweating, nausea, diarrhea, and intense cravings.\\n\\n**If you or someone you know is struggling with substance use, please call the SAMHSA National Helpline at 1-800-662-HELP (4357).**\"}, \"finish_reason\": \"stop\"}], \"usage\": {\"completion_tokens\": 2437, \"prompt_tokens\": 25, \"total_tokens\": 2462, \"prompt_tokens_details\": {\"cached_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 1528}}}}, \"final_result\": {\"id\": \"\", \"object\": \"chat.completion\", \"created\": 1770708307, \"model\": \"gemini-2.5-pro\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"Of course. 
It is crucial to consult with a qualified healthcare provider for any questions about prescription medications like oxycodone. The information below is for educational purposes only and is not a substitute for professional medical advice, diagnosis, or treatment.\\n\\nRegarding your question about milligrams, oxycodone dosage is highly individualized and must be determined by a doctor. It depends on many factors, including:\\n\\n* The severity of the pain.\\n* The patient's age, weight, and overall health (especially liver and kidney function).\\n* Whether the patient has taken opioids before (tolerance).\\n* The specific formulation (immediate-release vs. extended-release).\\n\\nCommon strengths for immediate-release oxycodone tablets include 5 mg, 10 mg, 15 mg, 20 mg, and 30 mg. It is also available in combination with other drugs like acetaminophen (e.g., Percocet). **Never take oxycodone without a doctor's prescription and guidance.**\\n\\nHere is a list of the medical benefits and potential side effects of oxycodone.\\n\\n### Benefits (Medical Uses) of Oxycodone\\n\\nOxycodone is a powerful opioid analgesic prescribed for its primary benefit:\\n\\n* **Pain Relief:** Its main purpose is to manage moderate to severe pain that is not adequately controlled by other pain medications. It works by binding to opioid receptors in the brain and spinal cord, changing the way your body feels and responds to pain. It is used for:\\n * **Acute Pain:** Short-term, severe pain, such as after surgery or a major injury.\\n * **Chronic Pain:** Long-term, around-the-clock pain, often associated with conditions like cancer. For this, an extended-release formulation is typically used.\\n\\n---\\n\\n### Side Effects & Risks of Oxycodone\\n\\nOxycodone carries significant risks and a wide range of side effects, which can be categorized from common to severe.\\n\\n#### Common Side Effects\\n\\nThese are the most frequently reported side effects. 
While not typically life-threatening, they can be very uncomfortable.\\n\\n* **Constipation:** This is a very common and often persistent side effect of all opioids.\\n* **Drowsiness, Dizziness, or Lightheadedness:** Can impair your ability to drive or operate heavy machinery.\\n* **Nausea and Vomiting**\\n* **Headache**\\n* **Dry Mouth**\\n* **Itching or Sweating**\\n* **Feeling tired or weak (fatigue)**\\n\\n#### Serious Side Effects (Require Immediate Medical Attention)\\n\\nThese side effects can be dangerous and require you to contact a doctor or seek emergency medical help right away.\\n\\n* **Severe Respiratory Depression:** This is the most dangerous risk. Signs include slow, shallow, or stopped breathing. It can lead to coma and death.\\n* **Extreme Drowsiness or Inability to Wake Up**\\n* **Confusion, Hallucinations, or Severe Mood Changes**\\n* **Seizures**\\n* **Low Blood Pressure (Hypotension):** Signs include feeling faint, dizzy, or fainting.\\n* **Allergic Reaction:** Signs include rash, hives, difficulty breathing, and swelling of the face, lips, tongue, or throat.\\n\\n#### Major Risks and Long-Term Warnings\\n\\n* **Addiction, Abuse, and Dependence:** Oxycodone has a very high potential for creating physical dependence and psychological addiction.\\n * **Dependence** means your body adapts to the drug, and you will experience withdrawal symptoms if you stop taking it abruptly.\\n * **Addiction** is a brain disease characterized by compulsive drug-seeking and use despite harmful consequences.\\n* **Overdose:** Taking too much oxycodone can be fatal, primarily due to respiratory depression (stopped breathing). 
The risk is significantly higher when mixed with alcohol, benzodiazepines (like Xanax or Valium), or other central nervous system depressants.\\n* **Withdrawal Symptoms:** If you stop taking oxycodone suddenly after long-term use, you may experience severe withdrawal symptoms, including muscle aches, anxiety, insomnia, sweating, nausea, diarrhea, and intense cravings.\\n\\n**If you or someone you know is struggling with substance use, please call the SAMHSA National Helpline at 1-800-662-HELP (4357).**\"}, \"finish_reason\": \"stop\"}], \"usage\": {\"completion_tokens\": 2437, \"prompt_tokens\": 25, \"total_tokens\": 2462, \"prompt_tokens_details\": {\"cached_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 1528}}}}40.4348192026-02-10 07:31:29.3301212026-02-10 07:31:29.330122
d2c9940f373d423b80eb75d1ccc39ad91571f78d465d4d53961f08758a243bb81{\"request_id\": \"fb75ca86-e855-95c2-ba17-646c5297ca17\", \"intermediate_results\": {\"templating\": [{\"content\": \"List the benefits and side effects of the drug in the following consumer health question: 81% aspirin contain resin and shellac in it. ?.\", \"role\": \"user\"}], \"llm\": {\"id\": \"\", \"object\": \"chat.completion\", \"created\": 1770708312, \"model\": \"gemini-2.5-pro\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"Of course. It appears there may be a misunderstanding in the original question, which is very common. Let's clarify that first and then detail the benefits and side effects of the drug in question.\\n\\n### Clarification of the Question\\n\\nIt seems you are asking about **low-dose aspirin**, which is commonly available in an **81 mg** tablet (not 81%). The \\\"resin and shellac\\\" you mentioned are not active ingredients; they are inactive components used to create an **enteric coating**.\\n\\n* **81 mg Aspirin:** This is a low dose, often called \\\"baby aspirin,\\\" used for long-term preventative health.\\n* **Enteric Coating (Resin and Shellac):** This is a special, hard coating on the pill. Its purpose is to prevent the aspirin from dissolving in the stomach. Instead, it dissolves later in the small intestine. This coating helps reduce stomach irritation, a common side effect of aspirin. Shellac and resin are common, FDA-approved ingredients used in many medications and even some foods for this purpose.\\n\\nSo, the drug is **Aspirin (at a low dose of 81 mg)**. Here are its primary benefits and potential side effects.\\n\\n---\\n\\n### Benefits of Low-Dose (81 mg) Aspirin\\n\\nThe main benefit of a daily low-dose aspirin regimen is its ability to prevent blood clots. It works as an antiplatelet agent, making platelets in the blood less \\\"sticky.\\\" This is primarily used for cardiovascular protection.\\n\\n**1. 
Prevention of Heart Attack:** For individuals who have already had a heart attack, daily aspirin can significantly reduce the risk of having a second one.\\n\\n**2. Prevention of Ischemic Stroke:** An ischemic stroke is caused by a blood clot in the brain. For people who have had a previous ischemic stroke or a transient ischemic attack (TIA or \\\"mini-stroke\\\"), daily aspirin can help prevent another one.\\n\\n**3. After Cardiovascular Procedures:** Doctors often prescribe it after procedures like stent placement, coronary artery bypass surgery, or angioplasty to prevent new clots from forming on the treated vessels.\\n\\n**4. Primary Prevention in High-Risk Individuals:** In some cases, a doctor might recommend daily aspirin for people who have *not* yet had a heart attack or stroke but are at very high risk due to factors like diabetes, high blood pressure, and high cholesterol. **However, this practice is now less common, as the risks can outweigh the benefits for many.**\\n\\n### Side Effects and Risks of Aspirin\\n\\nEven at a low dose, daily aspirin is a powerful medication with significant risks. The decision to take it must always be made with a doctor.\\n\\n**Common Side Effects:**\\n\\n* **Stomach Upset:** Heartburn, indigestion, or nausea. The enteric coating is designed to minimize this, but it can still occur.\\n\\n**Serious Side Effects and Risks:**\\n\\n* **Gastrointestinal (GI) Bleeding:** This is the most significant risk. Aspirin can irritate the stomach lining and lead to ulcers or bleeding. Signs include black or tarry stools, vomiting blood (or what looks like coffee grounds), and persistent stomach pain. The risk increases with age and in people with a history of ulcers.\\n* **Hemorrhagic Stroke (Bleeding in the Brain):** While aspirin helps prevent strokes caused by clots, it increases the risk of strokes caused by bleeding. 
This is a critical reason why it should only be taken under medical supervision.\\n* **Allergic Reaction:** Some people are allergic to aspirin. Symptoms can include hives, facial swelling, wheezing, and anaphylactic shock.\\n* **Tinnitus:** Ringing in the ears, which usually occurs at higher doses but can happen with long-term use.\\n* **Kidney Problems:** Long-term use can, in some cases, affect kidney function.\\n\\n**\\u26a0\\ufe0f Special Warning: Reye's Syndrome**\\n\\nAspirin should **NEVER** be given to children or teenagers recovering from a viral illness like the flu or chickenpox. It is linked to Reye's syndrome, a rare but extremely serious condition that can cause swelling in the liver and brain.\\n\\n---\\n\\n### **Important: Talk to Your Doctor**\\n\\nThe decision to start or stop taking daily low-dose aspirin is a medical one that you should make with your healthcare provider. They will weigh your personal risk of a heart attack or stroke against your personal risk of serious bleeding.\\n\\n**Do not start taking daily aspirin on your own without consulting a doctor.**\\n\\nThis information is for educational purposes only and does not constitute medical advice.\"}, \"finish_reason\": \"stop\"}], \"usage\": {\"completion_tokens\": 2415, \"prompt_tokens\": 30, \"total_tokens\": 2445, \"prompt_tokens_details\": {\"cached_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 1458}}}}, \"final_result\": {\"id\": \"\", \"object\": \"chat.completion\", \"created\": 1770708312, \"model\": \"gemini-2.5-pro\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"Of course. It appears there may be a misunderstanding in the original question, which is very common. Let's clarify that first and then detail the benefits and side effects of the drug in question.\\n\\n### Clarification of the Question\\n\\nIt seems you are asking about **low-dose aspirin**, which is commonly available in an **81 mg** tablet (not 81%). 
The \\\"resin and shellac\\\" you mentioned are not active ingredients; they are inactive components used to create an **enteric coating**.\\n\\n* **81 mg Aspirin:** This is a low dose, often called \\\"baby aspirin,\\\" used for long-term preventative health.\\n* **Enteric Coating (Resin and Shellac):** This is a special, hard coating on the pill. Its purpose is to prevent the aspirin from dissolving in the stomach. Instead, it dissolves later in the small intestine. This coating helps reduce stomach irritation, a common side effect of aspirin. Shellac and resin are common, FDA-approved ingredients used in many medications and even some foods for this purpose.\\n\\nSo, the drug is **Aspirin (at a low dose of 81 mg)**. Here are its primary benefits and potential side effects.\\n\\n---\\n\\n### Benefits of Low-Dose (81 mg) Aspirin\\n\\nThe main benefit of a daily low-dose aspirin regimen is its ability to prevent blood clots. It works as an antiplatelet agent, making platelets in the blood less \\\"sticky.\\\" This is primarily used for cardiovascular protection.\\n\\n**1. Prevention of Heart Attack:** For individuals who have already had a heart attack, daily aspirin can significantly reduce the risk of having a second one.\\n\\n**2. Prevention of Ischemic Stroke:** An ischemic stroke is caused by a blood clot in the brain. For people who have had a previous ischemic stroke or a transient ischemic attack (TIA or \\\"mini-stroke\\\"), daily aspirin can help prevent another one.\\n\\n**3. After Cardiovascular Procedures:** Doctors often prescribe it after procedures like stent placement, coronary artery bypass surgery, or angioplasty to prevent new clots from forming on the treated vessels.\\n\\n**4. Primary Prevention in High-Risk Individuals:** In some cases, a doctor might recommend daily aspirin for people who have *not* yet had a heart attack or stroke but are at very high risk due to factors like diabetes, high blood pressure, and high cholesterol. 
**However, this practice is now less common, as the risks can outweigh the benefits for many.**\\n\\n### Side Effects and Risks of Aspirin\\n\\nEven at a low dose, daily aspirin is a powerful medication with significant risks. The decision to take it must always be made with a doctor.\\n\\n**Common Side Effects:**\\n\\n* **Stomach Upset:** Heartburn, indigestion, or nausea. The enteric coating is designed to minimize this, but it can still occur.\\n\\n**Serious Side Effects and Risks:**\\n\\n* **Gastrointestinal (GI) Bleeding:** This is the most significant risk. Aspirin can irritate the stomach lining and lead to ulcers or bleeding. Signs include black or tarry stools, vomiting blood (or what looks like coffee grounds), and persistent stomach pain. The risk increases with age and in people with a history of ulcers.\\n* **Hemorrhagic Stroke (Bleeding in the Brain):** While aspirin helps prevent strokes caused by clots, it increases the risk of strokes caused by bleeding. This is a critical reason why it should only be taken under medical supervision.\\n* **Allergic Reaction:** Some people are allergic to aspirin. Symptoms can include hives, facial swelling, wheezing, and anaphylactic shock.\\n* **Tinnitus:** Ringing in the ears, which usually occurs at higher doses but can happen with long-term use.\\n* **Kidney Problems:** Long-term use can, in some cases, affect kidney function.\\n\\n**\\u26a0\\ufe0f Special Warning: Reye's Syndrome**\\n\\nAspirin should **NEVER** be given to children or teenagers recovering from a viral illness like the flu or chickenpox. It is linked to Reye's syndrome, a rare but extremely serious condition that can cause swelling in the liver and brain.\\n\\n---\\n\\n### **Important: Talk to Your Doctor**\\n\\nThe decision to start or stop taking daily low-dose aspirin is a medical one that you should make with your healthcare provider. 
They will weigh your personal risk of a heart attack or stroke against your personal risk of serious bleeding.\\n\\n**Do not start taking daily aspirin on your own without consulting a doctor.**\\n\\nThis information is for educational purposes only and does not constitute medical advice.\"}, \"finish_reason\": \"stop\"}], \"usage\": {\"completion_tokens\": 2415, \"prompt_tokens\": 30, \"total_tokens\": 2445, \"prompt_tokens_details\": {\"cached_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 1458}}}}45.2776842026-02-10 07:31:29.3301222026-02-10 07:31:29.330122
\n", + "
\n", + " \n", + "
\n", + "

Table: evaluation_result

\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
submission_idrun_idrepetition_countmetricaggregating_valuemetric_resulterrorcreated_atupdated_at
b5f494721730469f922759caf919d4701571f78d465d4d53961f08758a243bb81\"Pointwise Answer Relevance\"5{\"explanation\": \"The response directly addresses the user's query by explaining both the benefits and side effects of rivastigmine and common OTC sleep medicines, as requested. It also provides a clear and detailed explanation of their interaction, including the potential risks and why combining them can be harmful. The information is organized, accurate, and highly relevant to the user's question, offering both general drug information and specific interaction details. This fully meets the criteria for relevance.\", \"rating\": 5}None2026-02-10 07:42:50.3289072026-02-10 07:42:50.328912
cb37a740df1b4a43a50ce2bf6720eda01571f78d465d4d53961f08758a243bb81\"Pointwise Answer Relevance\"5{\"explanation\": \"The response directly addresses the user's question about how Valium affects the brain by explaining its mechanism of action (enhancing GABA activity), which is highly relevant. It then clearly lists both the benefits (therapeutic uses) and side effects, specifically focusing on those related to brain function and overall neurological impact. The information is detailed, accurate, and tailored to the context of the question. The response also includes a disclaimer, which is appropriate for consumer health information. Overall, the response is highly relevant, directly answering the user's query and providing useful, pertinent information.\", \"rating\": 5}None2026-02-10 07:42:50.3289132026-02-10 07:42:50.328913
fe41557ffc8d410681a10dee1da5bc691571f78d465d4d53961f08758a243bb81\"Pointwise Answer Relevance\"5{\"explanation\": \"The response directly addresses the user's query by providing a clear explanation of what morphine is, followed by a detailed list of its benefits and side effects. It covers both common and serious side effects, and explains the contexts in which morphine is used, such as pain management and palliative care. The information is pertinent, comprehensive, and framed appropriately for a consumer health question. The response is highly relevant and useful for someone seeking information about morphine's benefits and risks.\", \"rating\": 5}None2026-02-10 07:42:50.3289142026-02-10 07:42:50.328916
03d1b9791ee640f088980fd7cb6426a41571f78d465d4d53961f08758a243bb81\"Pointwise Answer Relevance\"5{\"explanation\": \"The user asked for the benefits and side effects of oxycodone in the context of a consumer health question about its milligram dosages. The response first addresses the milligram strengths available for oxycodone, then provides a thorough and detailed list of both the benefits (medical uses) and side effects (common, serious, and long-term risks) of the drug. The information is accurate, comprehensive, and directly relevant to the user's query. The response also includes appropriate safety warnings and resources, which are pertinent for a consumer health context. Overall, the response is highly relevant and fully addresses the user's request.\", \"rating\": 5}None2026-02-10 07:42:50.3289172026-02-10 07:42:50.328918
d2c9940f373d423b80eb75d1ccc39ad91571f78d465d4d53961f08758a243bb81\"Pointwise Answer Relevance\"5{\"explanation\": \"The response directly addresses the user's query by clarifying the confusion in the question (81% vs. 81 mg aspirin, and the role of resin and shellac as enteric coating agents). It then provides a comprehensive and accurate list of the benefits and side effects of low-dose aspirin, which is the drug in question. The explanation is detailed, relevant, and includes important warnings and context for consumer health. The response is highly pertinent and useful for the user's needs.\", \"rating\": 5}None2026-02-10 07:42:50.3289192026-02-10 07:42:50.328919
\n", + "
\n", + "
" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "# View the results from the SQLite DB in tabular format.\n", + "import sqlite3\n", + "import pandas as pd\n", + "from IPython.display import display, HTML\n", + "\n", + "# Path to your SQLite database file\n", + "db_file = 'results-new/results.db'\n", + "\n", + "connection = sqlite3.connect(db_file)\n", + "\n", + "# Specify the table names you want to display\n", + "table_names = ['run', 'configuration', 'submission', 'submission_result', 'evaluation_result']\n", + "\n", + "# Create the CSS and HTML container\n", + "html_content = \"\"\"\n", + "
\n", + "\"\"\"\n", + "\n", + "for table_name in table_names:\n", + " query = f\"SELECT * FROM {table_name};\"\n", + " df = pd.read_sql_query(query, connection)\n", + " # If you want to see all the rows across all tables, remove/comment the next line\n", + " df = df.head(5) # Limiting the number of rows displayed\n", + " table_html = df.to_html(classes='table-container', index=False)\n", + " html_content += f\"\"\"\n", + "
\n", + "

Table: {table_name}

\n", + " {table_html}\n", + "
\n", + " \"\"\"\n", + "\n", + "html_content += \"
\"\n", + "\n", + "display(HTML(html_content))\n", + "\n", + "# Close the connection\n", + "connection.close()" + ] + }, + { + "cell_type": "code", + "execution_count": 94, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "\n", + "\n", + "
\n", + "\n", + "
\n", + "

Categorical Comparison

\n", + "

Values: Weighted Average (1-5 scale). Win Rate based on head-to-head performance.

\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
run_idrun_namemodelWin RateFinal RankPointwise Instruction FollowingPointwise Answer Relevance
19722d52bde94ac488b1bd8abbd5bec9Run-genai-eval-test-gemini-2.5-pro-001gemini-2.5-pro0.015.05.0
\n", + "
\n", + "
" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "import pandas as pd\n", + "import numpy as np\n", + "import sqlite3\n", + "import json\n", + "import os\n", + "from IPython.display import display, HTML\n", + "\n", + "# ==========================================\n", + "# 1. CONFIGURATION (Separated Groups)\n", + "# ==========================================\n", + "METRIC_GROUPS = {\n", + " \"Categorical\": {\n", + " \"type\": \"categorical\",\n", + " \"description\": \"Weighted Average (1-5 scale)\",\n", + " \"metrics\": [\n", + " \"Pointwise Conciseness\", \n", + " \"Pointwise Instruction Following\", \n", + " \"Pointwise Correctness\", \n", + " \"Pointwise Answer Relevance\"\n", + " ]\n", + " },\n", + " \"Boolean\": {\n", + " \"type\": \"categorical\", # Uses same weighted avg logic (0 or 1)\n", + " \"description\": \"Pass Rate (0-1 scale)\",\n", + " \"metrics\": [\n", + " \"Exact Match\",\n", + " \"Content Filter on Input\",\n", + " \"Content Filter on Output\",\n", + " \"Language Match\",\n", + " \"JSON Schema Match\"\n", + " ]\n", + " },\n", + " \"Numerical\": {\n", + " \"type\": \"numerical\",\n", + " \"description\": \"Mean Value\",\n", + " \"metrics\": [\n", + " \"BLEU\", \n", + " \"ROUGE\", \n", + " \"BERT Score\",\n", + " \"test-metric\"\n", + " ]\n", + " }\n", + "}\n", + "\n", + "# ==========================================\n", + "# 2. 
DATA EXTRACTION\n", + "# ==========================================\n", + "def extract_db_metadata(db_path):\n", + " if not os.path.exists(db_path): return pd.DataFrame()\n", + " conn = sqlite3.connect(db_path)\n", + " df_runs = pd.read_sql_query(\"SELECT id, name, tags, config FROM run\", conn)\n", + " conn.close()\n", + " \n", + " meta_data = []\n", + " for _, row in df_runs.iterrows():\n", + " run_id = str(row[\"id\"])\n", + " run_name = str(row[\"name\"])\n", + " tags = {}\n", + " config = {}\n", + " try: tags = json.loads(row[\"tags\"]) if isinstance(row[\"tags\"], str) else row[\"tags\"]\n", + " except: pass\n", + " try: config = json.loads(row[\"config\"]) if isinstance(row[\"config\"], str) else row[\"config\"]\n", + " except: pass\n", + "\n", + " model = \"Unknown\"\n", + " try: model = config[\"modules\"][\"prompt_templating\"][\"model\"][\"name\"]\n", + " except:\n", + " if isinstance(tags, dict): model = tags.get(\"evaluation.ai.sap.com/model\", \"Unknown\")\n", + " elif isinstance(tags, list):\n", + " for t in tags: \n", + " if t.get(\"key\") == \"evaluation.ai.sap.com/model\": model = t.get(\"value\")\n", + "\n", + " meta_data.append({\"run_id\": run_id, \"run_name\": run_name, \"model\": model})\n", + " return pd.DataFrame(meta_data)\n", + "\n", + "def extract_api_metrics(runs_data_resource):\n", + " flat_data = []\n", + " for run in runs_data_resource:\n", + " model = \"Unknown\"\n", + " for t in run.get(\"tags\", []):\n", + " if t.get(\"name\") == \"evaluation.ai.sap.com/model\":\n", + " model = t.get(\"value\")\n", + " break\n", + " for m in run.get(\"metrics\", []):\n", + " clean_name = m.get(\"name\", \"\").replace('\"', '').strip()\n", + " flat_data.append({\n", + " \"model\": model,\n", + " \"metrics_name_clean\": clean_name,\n", + " \"metric_value\": m.get(\"value\")\n", + " })\n", + " df = pd.DataFrame(flat_data)\n", + " df['metric_value'] = pd.to_numeric(df['metric_value'], errors='coerce')\n", + " return df\n", + "\n", + "# 
==========================================\n", + "# 3. SCORING & HELM LOGIC\n", + "# ==========================================\n", + "def calculate_weighted_avg_score(row, cols):\n", + " \"\"\" Returns a score based on counts. \n", + " Categorical: 1-5 scale. \n", + " Boolean: 0-1 scale (Pass Rate). \n", + " \"\"\"\n", + " total_score = 0\n", + " total_count = 0\n", + " # Check counts 0-5 (covers Boolean 0/1 and Categorical 1-5)\n", + " for rating in range(0, 6):\n", + " col_name = next((c for c in cols if f\"/{rating}/count\" in c), None)\n", + " if col_name and not pd.isna(row[col_name]):\n", + " count = row[col_name]\n", + " total_score += count * rating\n", + " total_count += count\n", + " return total_score / total_count if total_count > 0 else 0.0\n", + "\n", + "def get_metric_score_series(df_metrics, metric_name, group_type):\n", + " \"\"\" Returns a Series of SCORES (Scalar) for each model for a specific metric \"\"\"\n", + " subset = df_metrics[df_metrics['metrics_name_clean'].str.startswith(metric_name)]\n", + " if subset.empty: return None\n", + "\n", + " # Pivot to get columns for this metric\n", + " pivot = subset.pivot_table(index='model', columns='metrics_name_clean', values='metric_value', aggfunc='first')\n", + " cols = pivot.columns.tolist()\n", + " \n", + " if group_type == \"categorical\":\n", + " # Calculate Weighted Average (or Pass Rate for Boolean)\n", + " return pivot.apply(lambda row: calculate_weighted_avg_score(row, cols), axis=1)\n", + " else:\n", + " # Calculate Mean (Numerical)\n", + " c_mean = next((c for c in cols if \"mean\" in c), None)\n", + " if c_mean: return pivot[c_mean]\n", + " return None\n", + "\n", + "def calculate_group_win_rate(score_table):\n", + " \"\"\"\n", + " Calculates HELM Win Rate: % of times a model beats another model across all metrics in this group.\n", + " \"\"\"\n", + " models = score_table.index.tolist()\n", + " metrics = score_table.columns.tolist()\n", + " win_rates = {}\n", + "\n", + " for model_a in 
models:\n", + " wins = 0\n", + " comparisons = 0\n", + " \n", + " for model_b in models:\n", + " if model_a == model_b: continue\n", + " \n", + " # Compare across ALL metrics in this table\n", + " for metric in metrics:\n", + " score_a = score_table.at[model_a, metric]\n", + " score_b = score_table.at[model_b, metric]\n", + " \n", + " # Only compare valid scores\n", + " if pd.isna(score_a) or pd.isna(score_b): continue\n", + " \n", + " comparisons += 1\n", + " if score_a > score_b:\n", + " wins += 1\n", + " \n", + " win_rates[model_a] = wins / comparisons if comparisons > 0 else 0.0\n", + " \n", + " return pd.Series(win_rates)\n", + "\n", + "# ==========================================\n", + "# 4. EXECUTION\n", + "# ==========================================\n", + "db_file = 'results-new/results.db'\n", + "\n", + "# A. Metadata\n", + "df_db_meta = extract_db_metadata(db_file)\n", + "df_db_unique = df_db_meta.drop_duplicates(subset=['model'], keep='last')\n", + "\n", + "# B. CSS\n", + "html_content = \"\"\"\n", + "\n", + "
\n", + "\"\"\"\n", + "if 'runs_data' in locals() and runs_data:\n", + " df_metrics_all = extract_api_metrics(runs_data['resources'])\n", + " \n", + " for group_name, config in METRIC_GROUPS.items():\n", + " \n", + " # 1. Build Score Table\n", + " score_table = pd.DataFrame(index=df_db_unique['model'].unique())\n", + " score_table.index.name = 'model'\n", + " \n", + " valid_metrics = []\n", + " \n", + " # 2. Calculate Scores\n", + " for metric in config[\"metrics\"]:\n", + " scores = get_metric_score_series(df_metrics_all, metric, config[\"type\"])\n", + " if scores is not None:\n", + " score_table[metric] = scores\n", + " valid_metrics.append(metric)\n", + " \n", + " if not valid_metrics:\n", + " continue\n", + "\n", + " # 3. Calculate HELM Win Rate (Specific to this group)\n", + " score_table['Win Rate'] = calculate_group_win_rate(score_table[valid_metrics])\n", + " \n", + " # 4. Calculate Final Rank\n", + " score_table['Final Rank'] = score_table['Win Rate'].rank(ascending=False, method='min')\n", + " \n", + " # 5. Merge & Format\n", + " df_final = pd.merge(df_db_unique, score_table, on='model', how='inner')\n", + " df_final = df_final.sort_values('Final Rank')\n", + " \n", + " # Rounding\n", + " for c in valid_metrics: df_final[c] = df_final[c].fillna(0.0).astype(float).round(4)\n", + " df_final['Win Rate'] = df_final['Win Rate'].fillna(0.0).astype(float).round(4)\n", + " df_final['Final Rank'] = df_final['Final Rank'].fillna(0).astype(int)\n", + " \n", + " # Columns\n", + " meta_cols = ['run_id', 'run_name', 'model']\n", + " final_cols = meta_cols + ['Win Rate', 'Final Rank'] + valid_metrics\n", + " \n", + " # 6. Generate HTML\n", + " table_html = df_final[final_cols].to_html(classes='table-container', index=False)\n", + " \n", + " html_content += f\"\"\"\n", + "
\n", + " <div>\n", + " <h3>{group_name} Comparison</h3>\n", + " <p>Values: {config['description']}. Win Rate based on head-to-head performance.</p>\n", + " {table_html}\n", + " </div>\n", + " \"\"\"\n", + "\n", + " html_content += \"
\"\n", + " display(HTML(html_content))\n", + " \n", + "else:\n", + " print(\"'runs_data' missing.\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#Delete Execution Id\n", + "def delete_execution():\n", + " headers = _get_headers()\n", + " EXEC_ID = execution_id\n", + " GET_EXECUTIONS_ENDPOINT = '/v2/lm/executions/'\n", + " request_url = f\"{AICORE_BASE_URL}{GET_EXECUTIONS_ENDPOINT}{EXEC_ID}\"\n", + " try:\n", + " response = requests.delete(\n", + " request_url, headers=headers, params={\"AI-Resource-Group\":AICORE_RESOURCE_GROUP}, timeout=120\n", + " )\n", + " print(response)\n", + " if(response.status_code != 202):\n", + " raise\n", + " result = response.json()\n", + " print(result)\n", + " except:\n", + " logging.error(\"Error occurred while attempting to delete a Configuration\")\n", + " raise\n", + " \n", + "delete_execution()" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.4" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/AI_Core.json b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/AI_Core.json new file mode 100644 index 0000000000..bb30bf61b4 --- /dev/null +++ b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/AI_Core.json @@ -0,0 +1,1578 @@ +{ + "name": "AI Core", + "version": "1", + "items": [ + { + "type": "http", + "name": "get_token", + "filename": "get_token.bru", + "seq": 1, + "request": { + "url": "{{ai_auth_url}}/oauth/token", + "method": "POST", + "headers": [ + { + "name": "Content-Type", + "value": "application/x-www-form-urlencoded", + "enabled": true + } + ], + 
"params": [], + "body": { + "mode": "formUrlEncoded", + "formUrlEncoded": [ + { + "name": "grant_type", + "value": "client_credentials", + "enabled": true + }, + { + "name": "client_id", + "value": "{{client_id}}", + "enabled": true + }, + { + "name": "client_secret", + "value": "{{client_secret}}", + "enabled": true + } + ], + "multipartForm": [], + "file": [] + }, + "script": { + "res": "if (res.getStatus() == 200) {\n bru.setEnvVar(\"access_token\", res.body.access_token);\n}" + }, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "none" + } + } + }, + { + "type": "folder", + "name": "admin", + "filename": "admin", + "root": { + "meta": { + "name": "admin" + } + }, + "items": [ + { + "type": "folder", + "name": "objectStoreSecrets", + "filename": "objectStoreSecrets", + "root": { + "meta": { + "name": "objectStoreSecrets" + } + }, + "items": [ + { + "type": "http", + "name": "Create a secret", + "filename": "Create a secret.bru", + "seq": 1, + "request": { + "url": "{{baseUrl}}/v2/admin/objectStoreSecrets", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + }, + { + "name": "Authorization", + "value": "", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"name\": \"genai-data\",\n \"data\": {\n \"AWS_ACCESS_KEY_ID\": \"\",\n \"AWS_SECRET_ACCESS_KEY\": \"\"\n },\n \"type\": \"S3\",\n \"bucket\": \"\",\n \"endpoint\": \"https://s3.eu-central-1.amazonaws.com\",\n \"region\": \"\",\n \"pathPrefix\": \"\" \n }", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Create a secret based on the configuration in the request body\n", + "auth": { + "mode": "bearer", + "bearer": { + "token": 
"{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "Get a list of metadata of available secrets.", + "filename": "Get a list of metadata of available secrets.bru", + "seq": 2, + "request": { + "url": "{{baseUrl}}/admin/objectStoreSecrets?$top=&$skip=&$count=", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [ + { + "name": "$top", + "value": "", + "type": "query", + "enabled": true + }, + { + "name": "$skip", + "value": "", + "type": "query", + "enabled": true + }, + { + "name": "$count", + "value": "", + "type": "query", + "enabled": true + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve a list of metadata of the stored secrets.\n", + "auth": { + "mode": "oauth2", + "oauth2": { + "grantType": "authorization_code", + "callbackUrl": "", + "authorizationUrl": "", + "accessTokenUrl": "", + "refreshTokenUrl": "", + "clientId": "", + "clientSecret": "", + "scope": "", + "credentialsPlacement": "basic_auth_header", + "pkce": false, + "credentialsId": "credentials", + "tokenPlacement": "header", + "tokenHeaderPrefix": "Bearer", + "tokenQueryKey": "access_token", + "autoFetchToken": true, + "autoRefreshToken": false + } + } + } + }, + { + "type": "folder", + "name": "{objectStoreName}", + "filename": "{objectStoreName}", + "root": { + "meta": { + "name": "{objectStoreName}" + } + }, + "items": [ + { + "type": "http", + "name": "Delete object store secret", + "filename": "Delete object store secret.bru", + "seq": 2, + "request": { + "url": "{{baseUrl}}/admin/objectStoreSecrets/:objectStoreName", + "method": "DELETE", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": 
true + } + ], + "params": [ + { + "name": "objectStoreName", + "value": "qKoZ-aHSe", + "type": "path", + "enabled": true + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Delete a secret with the name of objectStoreName if it exists.", + "auth": { + "mode": "oauth2", + "oauth2": { + "grantType": "authorization_code", + "callbackUrl": "", + "authorizationUrl": "", + "accessTokenUrl": "", + "refreshTokenUrl": "", + "clientId": "", + "clientSecret": "", + "scope": "", + "credentialsPlacement": "basic_auth_header", + "pkce": false, + "credentialsId": "credentials", + "tokenPlacement": "header", + "tokenHeaderPrefix": "Bearer", + "tokenQueryKey": "access_token", + "autoFetchToken": true, + "autoRefreshToken": false + } + } + } + }, + { + "type": "http", + "name": "Returns the of metadata of secrets which match the query parameter.", + "filename": "Returns the of metadata of secrets which match the query parameter.bru", + "seq": 1, + "request": { + "url": "{{baseUrl}}/v2/admin/objectStoreSecrets", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "This retrieves the metadata of the stored secret which match the parameter objectStoreName.\nThe fetched secret is constructed like objectStoreName-object-store-secret\nThe base64 encoded field for the stored secret is not returned.\n", + "auth": { + "mode": "oauth2", + "oauth2": { + "grantType": "authorization_code", + "callbackUrl": "", + "authorizationUrl": "", + "accessTokenUrl": "", + "refreshTokenUrl": "", + "clientId": "", + "clientSecret": "", + "scope": "", + 
"credentialsPlacement": "basic_auth_header", + "pkce": false, + "credentialsId": "credentials", + "tokenPlacement": "header", + "tokenHeaderPrefix": "Bearer", + "tokenQueryKey": "access_token", + "autoFetchToken": true, + "autoRefreshToken": false + } + } + } + }, + { + "type": "http", + "name": "Update object store secret", + "filename": "Update object store secret.bru", + "seq": 3, + "request": { + "url": "{{baseUrl}}/admin/objectStoreSecrets/:objectStoreName", + "method": "PATCH", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [ + { + "name": "objectStoreName", + "value": "qKoZ-aHSe", + "type": "path", + "enabled": true + } + ], + "body": { + "mode": "json", + "json": "{\n \"name\": \"\",\n \"type\": \"\",\n \"data\": {},\n \"bucket\": \"\",\n \"endpoint\": \"\",\n \"region\": \"\",\n \"pathPrefix\": \"\",\n \"verifyssl\": \"\",\n \"usehttps\": \"1\"\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Update a secret with name of objectStoreName if it exists.\n", + "auth": { + "mode": "oauth2", + "oauth2": { + "grantType": "authorization_code", + "callbackUrl": "", + "authorizationUrl": "", + "accessTokenUrl": "", + "refreshTokenUrl": "", + "clientId": "", + "clientSecret": "", + "scope": "", + "credentialsPlacement": "basic_auth_header", + "pkce": false, + "credentialsId": "credentials", + "tokenPlacement": "header", + "tokenHeaderPrefix": "Bearer", + "tokenQueryKey": "access_token", + "autoFetchToken": true, + "autoRefreshToken": false + } + } + } + } + ] + } + ] + } + ] + }, + { + "type": "folder", + "name": "lm", + "filename": "lm", + "root": { + "meta": { + "name": "lm" + } + }, + "items": [ + { + "type": "folder", + "name": "configurations", + 
"filename": "configurations", + "root": { + "meta": { + "name": "configurations" + } + }, + "items": [ + { + "type": "http", + "name": "Create configuration Copy", + "filename": "Create configuration Copy.bru", + "seq": 3, + "request": { + "url": "{{baseUrl}}/v2/lm/configurations", + "method": "DELETE", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"id\": \"\"\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Create a new configuration linked to a specific scenario and executable for use in an execution\nor deployment.\n", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "Create configuration", + "filename": "Create configuration.bru", + "seq": 2, + "request": { + "url": "{{baseUrl}}/v2/lm/configurations", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"name\": \"genai-eval-conf\",\n \"scenarioId\": \"genai-evaluations\",\n \"executableId\": \"genai-evaluations-simplified\",\n \"inputArtifactBindings\": [\n {\n \"key\": \"datasetFolder\",\n \"artifactId\": \"\"\n }\n ],\n \"parameterBindings\": [\n {\n \"key\": \"repetitions\",\n \"value\": \"1\"\n },\n {\n \"key\": \"orchestrationDeploymentURL\",\n \"value\": \"\"\n\n },\n {\n \"key\": \"metrics\",\n \"value\": \"language_match\"\n },\n {\n \"key\": 
\"testDataset\",\n \"value\": \"{\\\"path\\\": \\\"testdata/global_customer_queries.csv\\\", \\\"type\\\": \\\"csv\\\"}\"\n },\n {\n \"key\": \"promptTemplate\",\n \"value\": \"\"\n },\n {\n \"key\": \"models\",\n \"value\": \"gpt-4.1:latest\"\n }\n ]\n}\n", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Create a new configuration linked to a specific scenario and executable for use in an execution\nor deployment.\n", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "Get list of configurations", + "filename": "Get list of configurations.bru", + "seq": 1, + "request": { + "url": "{{baseUrl}}/v2/lm/configurations", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve a list of configurations. 
Filter results by scenario ID or a list of executable IDs.\nSearch for configurations containing the search string as substring in the configuration name.\n", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "folder", + "name": "{configurationId}", + "filename": "{configurationId}", + "root": { + "meta": { + "name": "{configurationId}" + } + }, + "items": [ + { + "type": "http", + "name": "Get configuration by ID", + "filename": "Get configuration by ID.bru", + "seq": 1, + "request": { + "url": "{{baseUrl}}/v2/lm/configurations", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve details for configuration with configurationId.", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + } + ] + }, + { + "type": "folder", + "name": "$count", + "filename": "$count", + "root": { + "meta": { + "name": "$count" + } + }, + "items": [ + { + "type": "http", + "name": "Get number of configurations", + "filename": "Get number of configurations.bru", + "seq": 1, + "request": { + "url": "{{baseUrl}}/lm/configurations/$count?scenarioId=iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN&$search=}\"NI2Kn!V&searchCaseInsensitive=false&executableIds=T_jtbUJzwg0e.okSV667jeZejqVb,3e0cmfc4c-6YavNz92uztZE", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Accept", + "value": "text/plain", + "enabled": true + } + ], + "params": [ + { + "name": "scenarioId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": "query", + "enabled": true + }, + { + 
"name": "$search", + "value": "}\"NI2Kn!V", + "type": "query", + "enabled": true + }, + { + "name": "searchCaseInsensitive", + "value": "false", + "type": "query", + "enabled": true + }, + { + "name": "executableIds", + "value": "T_jtbUJzwg0e.okSV667jeZejqVb,3e0cmfc4c-6YavNz92uztZE", + "type": "query", + "enabled": true + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve the number of available configurations that match the specified filter criteria.\nFilter criteria include a scenarioId or executableIdsList. Search by substring of configuration name is also possible.\n", + "auth": { + "mode": "oauth2", + "oauth2": { + "grantType": "authorization_code", + "callbackUrl": "", + "authorizationUrl": "", + "accessTokenUrl": "", + "refreshTokenUrl": "", + "clientId": "", + "clientSecret": "", + "scope": "", + "credentialsPlacement": "basic_auth_header", + "pkce": false, + "credentialsId": "credentials", + "tokenPlacement": "header", + "tokenHeaderPrefix": "Bearer", + "tokenQueryKey": "access_token", + "autoFetchToken": true, + "autoRefreshToken": false + } + } + } + } + ] + } + ] + }, + { + "type": "folder", + "name": "artifacts", + "filename": "artifacts", + "root": { + "meta": { + "name": "artifacts" + } + }, + "items": [ + { + "type": "http", + "name": "Get list of artifacts", + "filename": "Get list of artifacts.bru", + "seq": 2, + "request": { + "url": "{{baseUrl}}/v2/lm/artifacts", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [ + { + "name": "scenarioId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": "query", + "enabled": false + }, + { + "name": "executionId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": 
"query", + "enabled": false + }, + { + "name": "name", + "value": "[G7 ovyt8i", + "type": "query", + "enabled": false + }, + { + "name": "kind", + "value": "other", + "type": "query", + "enabled": false + }, + { + "name": "artifactLabelSelector", + "value": "ext.ai.sap.com/bXN1EAk=D*", + "type": "query", + "enabled": false + }, + { + "name": "$top", + "value": "10000", + "type": "query", + "enabled": false + }, + { + "name": "$skip", + "value": "", + "type": "query", + "enabled": false + }, + { + "name": "$search", + "value": "}\"NI2Kn!V", + "type": "query", + "enabled": false + }, + { + "name": "searchCaseInsensitive", + "value": "false", + "type": "query", + "enabled": false + }, + { + "name": "$expand", + "value": "scenario", + "type": "query", + "enabled": false + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve a list of artifacts that matches the specified filter criteria.\nFilter criteria include scenario ID, execution ID, an artifact name, artifact kind, or artifact labels.\nUse top/skip parameters to paginate the result list.\nSearch by substring of artifact name or description, if required.\n", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "Register artifact", + "filename": "Register artifact.bru", + "seq": 1, + "request": { + "url": "{{baseUrl}}/v2/lm/artifacts", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"name\": \"aiconfig\",\n \"kind\": \"dataset\",\n \"url\": \"ai://genai-data/genaiEvaluation/14af1af80b974edb8731632d17286343\",\n 
\"scenarioId\": \"genai-evaluations\"\n}\n", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Register an artifact for use in a configuration, for example a model or a dataset.", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "folder", + "name": "$count", + "filename": "$count", + "root": { + "meta": { + "name": "$count" + } + }, + "items": [ + { + "type": "http", + "name": "Get number of artifacts", + "filename": "Get number of artifacts.bru", + "seq": 1, + "request": { + "url": "{{baseUrl}}/lm/artifacts/$count?scenarioId=iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN&executionId=iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN&name=[G7 ovyt8i&kind=other&$search=}\"NI2Kn!V&searchCaseInsensitive=false&artifactLabelSelector=ext.ai.sap.com/bXN1EAk=D*", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Accept", + "value": "text/plain", + "enabled": true + } + ], + "params": [ + { + "name": "scenarioId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": "query", + "enabled": true + }, + { + "name": "executionId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": "query", + "enabled": true + }, + { + "name": "name", + "value": "[G7 ovyt8i", + "type": "query", + "enabled": true + }, + { + "name": "kind", + "value": "other", + "type": "query", + "enabled": true + }, + { + "name": "$search", + "value": "}\"NI2Kn!V", + "type": "query", + "enabled": true + }, + { + "name": "searchCaseInsensitive", + "value": "false", + "type": "query", + "enabled": true + }, + { + "name": "artifactLabelSelector", + "value": "ext.ai.sap.com/bXN1EAk=D*", + "type": "query", + "enabled": true + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + 
"script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve the number of available artifacts that match the specified filter criteria.\nFilter criteria include a scenarioId, executionId, an artifact name, artifact kind, or artifact labels.\nSearch by substring of artifact name or description is also possible.\n", + "auth": { + "mode": "oauth2", + "oauth2": { + "grantType": "authorization_code", + "callbackUrl": "", + "authorizationUrl": "", + "accessTokenUrl": "", + "refreshTokenUrl": "", + "clientId": "", + "clientSecret": "", + "scope": "", + "credentialsPlacement": "basic_auth_header", + "pkce": false, + "credentialsId": "credentials", + "tokenPlacement": "header", + "tokenHeaderPrefix": "Bearer", + "tokenQueryKey": "access_token", + "autoFetchToken": true, + "autoRefreshToken": false + } + } + } + } + ] + } + ] + }, + { + "type": "folder", + "name": "executions", + "filename": "executions", + "root": { + "meta": { + "name": "executions" + } + }, + "items": [ + { + "type": "http", + "name": "Create execution", + "filename": "Create execution.bru", + "seq": 2, + "request": { + "url": "{{baseUrl}}/v2/lm/executions", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"configurationId\": \"\"\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Create an execution using the configuration specified by configurationId.", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "Get list of executions", + "filename": "Get list of executions.bru", + "seq": 1, + "request": { + "url": 
"{{baseUrl}}/v2/lm/executions/", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [ + { + "name": "scenarioId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": "query", + "enabled": false + }, + { + "name": "executionScheduleId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": "query", + "enabled": false + }, + { + "name": "status", + "value": "DEAD", + "type": "query", + "enabled": false + }, + { + "name": "$top", + "value": "10000", + "type": "query", + "enabled": false + }, + { + "name": "$skip", + "value": "", + "type": "query", + "enabled": false + }, + { + "name": "$select", + "value": "status", + "type": "query", + "enabled": false + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve a list of executions that match the specified filter criteria.\nFilter criteria include a list of executableIds, a scenarioId, a configurationId, or a execution status.\nWith top/skip parameters it is possible to paginate the result list.\nWith select parameter it is possible to select only status.\n", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "folder", + "name": "$count", + "filename": "$count", + "root": { + "meta": { + "name": "$count" + } + } + } + ] + }, + { + "type": "folder", + "name": "deployments", + "filename": "deployments", + "root": { + "meta": { + "name": "deployments" + } + }, + "items": [ + { + "type": "http", + "name": "Create deployment", + "filename": "Create deployment.bru", + "seq": 2, + "request": { + "url": "{{baseUrl}}/v2/lm/deployments", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": 
"{{resource_group}}", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"configurationId\": \"\"\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Create a deployment using the configuration specified by configurationId after synchronously checking the\ncorrectness of the configuration.\n", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "Get list of deployments", + "filename": "Get list of deployments.bru", + "seq": 1, + "request": { + "url": "{{baseUrl}}/v2/lm/deployments", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve a list of deployments that match the specified filter criteria.\nFilter criteria include a list of executableIds, a scenarioId, a configurationId, or a deployment status.\nWith top/skip parameters it is possible to paginate the result list.\nWith select parameter it is possible to select only status.\n", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "folder", + "name": "$count", + "filename": "$count", + "root": { + "meta": { + "name": "$count" + } + }, + "items": [ + { + "type": "http", + "name": "Get number of deployments", + "filename": "Get number of deployments.bru", + "seq": 1, + "request": { + "url": 
"{{baseUrl}}/lm/deployments/$count?executableIds=T_jtbUJzwg0e.okSV667jeZejqVb,3e0cmfc4c-6YavNz92uztZE&configurationId=iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN&scenarioId=iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN&status=DEAD", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Accept", + "value": "text/plain", + "enabled": true + } + ], + "params": [ + { + "name": "executableIds", + "value": "T_jtbUJzwg0e.okSV667jeZejqVb,3e0cmfc4c-6YavNz92uztZE", + "type": "query", + "enabled": true + }, + { + "name": "configurationId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": "query", + "enabled": true + }, + { + "name": "scenarioId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": "query", + "enabled": true + }, + { + "name": "status", + "value": "DEAD", + "type": "query", + "enabled": true + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve the number of available deployments. 
The number can be filtered by\nscenarioId, configurationId, executableIdsList or by deployment status.\n", + "auth": { + "mode": "oauth2", + "oauth2": { + "grantType": "authorization_code", + "callbackUrl": "", + "authorizationUrl": "", + "accessTokenUrl": "", + "refreshTokenUrl": "", + "clientId": "", + "clientSecret": "", + "scope": "", + "credentialsPlacement": "basic_auth_header", + "pkce": false, + "credentialsId": "credentials", + "tokenPlacement": "header", + "tokenHeaderPrefix": "Bearer", + "tokenQueryKey": "access_token", + "autoFetchToken": true, + "autoRefreshToken": false + } + } + } + } + ] + } + ] + }, + { + "type": "folder", + "name": "metrics", + "filename": "metrics", + "root": { + "meta": { + "name": "metrics" + } + }, + "items": [ + { + "type": "http", + "name": "Evaluation Metrics via Execution ID", + "filename": "Evaluation Metrics via Execution ID.bru", + "seq": 4, + "request": { + "url": "{{baseUrl}}/v2/lm/metrics?tagFilters=evaluation.ai.sap.com/child-of=", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [ + { + "name": "tagFilters", + "url": "{{baseUrl}}/v2/lm/metrics?tagFilters=evaluation.ai.sap.com/child-of=", + "value": "evaluation.ai.sap.com/child-of=", + "type": "query", + "enabled": true + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "Metrics by Run Name", + "filename": "Metrics by Run Name.bru", + "seq": 5, + "request": { + "url": "{{baseUrl}}/v2/lm/metrics?tagFilters=evaluation.ai.sap.com/run-name=run1", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + 
"enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [ + { + "name": "tagFilters", + "value": "evaluation.ai.sap.com/run-name=run1", + "type": "query", + "enabled": true + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + } + ] + } + ] + } + ], + "activeEnvironmentUid": "lWUmIcEkGnkMxwNBILLmY", + "environments": [ + { + "variables": [ + { + "name": "ai_auth_url", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "ai_api_url", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "client_id", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "client_secret", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "resource_group", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "orchestration_service_url", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "access_token", + "value": "", + "enabled": true, + "secret": true, + "type": "text" + } + ], + "name": "intprod" + } + ], + "root": { + "request": { + "auth": { + "mode": "oauth2", + "oauth2": { + "grantType": "authorization_code", + "callbackUrl": "", + "authorizationUrl": "", + "accessTokenUrl": "", + "refreshTokenUrl": "", + "clientId": "", + "clientSecret": "", + "scope": "", + "state": "", + "pkce": false, + "credentialsPlacement": "basic_auth_header", + "credentialsId": "credentials", + "tokenPlacement": "header", + "tokenHeaderPrefix": "Bearer", + "tokenQueryKey": "access_token", + "autoFetchToken": true, + "autoRefreshToken": false + } + }, + "vars": { + "req": [ + { + "name": "region", + "value": 
"prod.eu-central-1.aws", + "enabled": true, + "local": false, + "uid": "oYVk4DuVpyYqqP2roBVjE" + }, + { + "name": "baseUrl", + "value": "", + "enabled": true, + "local": false, + "uid": "I4KjDm7FxpSRwUYzjwfPG" + }, + { + "name": "auth_url", + "value": "", + "enabled": true, + "local": false, + "uid": "zuftvyCURtA9XYErCYDgo" + }, + { + "name": "client_id", + "value": "", + "enabled": true, + "local": false, + "uid": "JfGEVKm71BYTgR8UkQUGv" + }, + { + "name": "client_secret", + "value": "", + "enabled": true, + "local": false, + "uid": "ls3RYTJ40baTl8eYmilGt" + }, + { + "name": "AWS_ACCESS_KEY_ID", + "value": "", + "enabled": true, + "local": false, + "uid": "2O0YTTAdmYltm5XiHMhP2" + }, + { + "name": "AWS_SECRET_ACCESS_KEY", + "value": "", + "enabled": true, + "local": false, + "uid": "8rc4RYyPcHXyTkAnnI981" + }, + { + "name": "BUCKET_NAME", + "value": "", + "enabled": true, + "local": false, + "uid": "HqFIe8Rvc14i41WIAGGkl" + }, + { + "name": "DATABASE_URL", + "value": "https://s3-eu-central-1.amazonaws.com", + "enabled": true, + "local": false, + "uid": "aWIwuJZH5XQ5Guu2D69Sq" + } + ] + } + }, + "docs": "Provides tools to manage your scenarios and workflows in SAP AI Core. Execute pipelines as a batch job, for example to pre-process or train your models, or perform batch inference. Serve inference requests of trained models. Deploy а trained machine learning model as a web service to serve inference requests with high performance. 
Register your own Docker registry, synchronize your AI content from your own git repository, and register your own object store for training data and trained models.\n", + "meta": { + "name": "AI Core" + } + }, + "brunoConfig": { + "version": "1", + "name": "AI Core", + "type": "collection", + "ignore": [ + "node_modules", + ".git" + ], + "size": 0.10747432708740234, + "filesCount": 151 + } +} diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image-br01.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image-br01.png new file mode 100644 index 0000000000..5424ea51d0 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image-br01.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image-br02.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image-br02.png new file mode 100644 index 0000000000..4ed9d9ab02 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image-br02.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image-br03.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image-br03.png new file mode 100644 index 0000000000..2347470e78 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image-br03.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image-br04.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image-br04.png new file mode 100644 index 0000000000..9f8a175e47 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image-br04.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image-br05.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image-br05.png new file mode 100644 index 0000000000..69a105ef01 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image-br05.png differ diff --git 
a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image-br06.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image-br06.png new file mode 100644 index 0000000000..81128b34bb Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image-br06.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_007.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_007.png new file mode 100644 index 0000000000..0cdc4cf4a7 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_007.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_008.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_008.png new file mode 100644 index 0000000000..2f12f021a4 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_008.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_009.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_009.png new file mode 100644 index 0000000000..1c979c6b0a Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_009.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_1.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_1.png new file mode 100644 index 0000000000..6db3eb05c3 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_1.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_10.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_10.png new file mode 100644 index 0000000000..275de82544 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_10.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_19.png 
b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_19.png new file mode 100644 index 0000000000..91498a203a Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_19.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_21.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_21.png new file mode 100644 index 0000000000..dd9f9f22bb Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_21.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_22.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_22.png new file mode 100644 index 0000000000..abcae67d60 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_22.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_23.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_23.png new file mode 100644 index 0000000000..97b0bc60f0 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_23.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_24.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_24.png new file mode 100644 index 0000000000..5471c2e38f Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_24.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_25.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_25.png new file mode 100644 index 0000000000..afdb0e1975 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_25.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_26.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_26.png new file mode 100644 index 0000000000..a2107fe852 Binary files /dev/null and 
b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_26.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_27.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_27.png new file mode 100644 index 0000000000..ec99b587ca Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_27.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_29.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_29.png new file mode 100644 index 0000000000..bd3a81ebc5 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_29.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_31.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_31.png new file mode 100644 index 0000000000..7a1a959fb0 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_31.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_32.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_32.png new file mode 100644 index 0000000000..fe827f3460 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_32.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_33.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_33.png new file mode 100644 index 0000000000..546d43b52b Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_33.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_34.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_34.png new file mode 100644 index 0000000000..4fa0960a1d Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_34.png differ diff --git 
a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_40.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_40.png new file mode 100644 index 0000000000..bc104b4655 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_40.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_41.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_41.png new file mode 100644 index 0000000000..975e57dc36 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_41.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_43.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_43.png new file mode 100644 index 0000000000..d594ffa7c3 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_43.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_44.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_44.png new file mode 100644 index 0000000000..8b352c79ec Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_44.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_45.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_45.png new file mode 100644 index 0000000000..7cf1a3f633 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_45.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_46.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_46.png new file mode 100644 index 0000000000..ef67d82f29 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_46.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_46_01.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_46_01.png new 
file mode 100644 index 0000000000..131317edd6 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_46_01.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_46a.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_46a.png new file mode 100644 index 0000000000..c493e2a5d2 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_46a.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_47.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_47.png new file mode 100644 index 0000000000..fc729b5ea1 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_47.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_48.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_48.png new file mode 100644 index 0000000000..a7d8b132fb Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_48.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_49.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_49.png new file mode 100644 index 0000000000..2a2bbcd757 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_49.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_5.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_5.png new file mode 100644 index 0000000000..bc6b2a187a Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_5.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_50.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_50.png new file mode 100644 index 0000000000..74fea1ca6d Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_50.png 
differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_6.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_6.png new file mode 100644 index 0000000000..0d7a4a11aa Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_6.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_ail_or1.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_ail_or1.png new file mode 100644 index 0000000000..1c754cd040 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_ail_or1.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_ail_or2.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_ail_or2.png new file mode 100644 index 0000000000..aac9bd73a2 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_ail_or2.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_ail_or3.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_ail_or3.png new file mode 100644 index 0000000000..c3548b9f83 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_ail_or3.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_br_dt.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_br_dt.png new file mode 100644 index 0000000000..841683c510 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_br_dt.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_br_mtrs.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_br_mtrs.png new file mode 100644 index 0000000000..b2fe6925ae Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_br_mtrs.png differ diff --git 
a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_br_or1.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_br_or1.png new file mode 100644 index 0000000000..8af37314e4 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_br_or1.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_br_pr.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_br_pr.png new file mode 100644 index 0000000000..22d143968b Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_br_pr.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_objsec.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_objsec.png new file mode 100644 index 0000000000..f4905708cc Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_objsec.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_py03.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_py03.png new file mode 100644 index 0000000000..e82630d0a4 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_py03.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_py_con.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_py_con.png new file mode 100644 index 0000000000..12bf2650b4 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_py_con.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_py_dtst.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_py_dtst.png new file mode 100644 index 0000000000..71f8ba2eea Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_py_dtst.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_py_or1.png 
b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_py_or1.png new file mode 100644 index 0000000000..0469ab08c5 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_py_or1.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_py_rk.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_py_rk.png new file mode 100644 index 0000000000..36b500fa11 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_py_rk.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_py_rnk1.png b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_py_rnk1.png new file mode 100644 index 0000000000..af5cb4a4d1 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/image_py_rnk1.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/img/requirements.txt b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/requirements.txt new file mode 100644 index 0000000000..c9e0b941db --- /dev/null +++ b/tutorials/ai-core-genaihub-evaluation-comprehensive/img/requirements.txt @@ -0,0 +1,7 @@ +generative-ai-hub-sdk==4.4.3 +python-dotenv==1.0.1 +boto3==1.37.4 +pandas==2.2.3 +json2html==1.3.0 +numpy==1.26.4 +ipywidgets==8.1.0 \ No newline at end of file diff --git a/tutorials/ai-core-genaihub-evaluation-comprehensive/sample.env b/tutorials/ai-core-genaihub-evaluation-comprehensive/sample.env new file mode 100644 index 0000000000..09eeddf3f3 --- /dev/null +++ b/tutorials/ai-core-genaihub-evaluation-comprehensive/sample.env @@ -0,0 +1,13 @@ +# AICORE CREDENTIALS +AICORE_CLIENT_ID= +AICORE_CLIENT_SECRET= +AICORE_AUTH_URL= +AICORE_BASE_URL= + +# AWS CREDENTIALS +AWS_ACCESS_KEY= +AWS_BUCKET_ID= +AWS_REGION= +AWS_SECRET_ACCESS_KEY= +AWS_USERNAME= +AWS_HOST= diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/DATASET_RAG/AICore_feature_description.pdf 
b/tutorials/ai-core-genaihub-evaluation-with-grounding/DATASET_RAG/AICore_feature_description.pdf new file mode 100644 index 0000000000..b1de4b0405 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/DATASET_RAG/AICore_feature_description.pdf differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/DATASET_RAG/context_output.pdf b/tutorials/ai-core-genaihub-evaluation-with-grounding/DATASET_RAG/context_output.pdf new file mode 100644 index 0000000000..b990413c0b Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/DATASET_RAG/context_output.pdf differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/DATASET_RAG/testdata/emanual.csv b/tutorials/ai-core-genaihub-evaluation-with-grounding/DATASET_RAG/testdata/emanual.csv new file mode 100644 index 0000000000..7ccddeeec5 --- /dev/null +++ b/tutorials/ai-core-genaihub-evaluation-with-grounding/DATASET_RAG/testdata/emanual.csv @@ -0,0 +1,51 @@ +topic,answer,context +'I want to enter into Ambient mode. How can I do that?',"'To enter into Ambient Mode, you need to press the specified button on the remote control. In the provided context, it mentions that to enter Ambient Mode, you should press the button on the Samsung TV remote control.'","Changing the content and settings for Ambient Mode. When you press the button in Ambient Mode , the Ambient Mode browser screen appears. In the Ambient Mode browser screen, you can select content and change the Ambient Mode settings. Setting up the content for Ambient Mode The Ambient Mode browser screen displays content at the top and categories at the bottom. Use the left or right directional buttons in the content list at the top to move the focus to content you want, and then press the Select button. The selected content is played in Ambient Mode In the future, more content that you can set up in the Ambient Mode browser will be provided. 
You can select the following categories and content: Decor : Allows you to select beautiful screens. Info : Provides information such as weather, news headlines, and more. This function may not be supported depending on the geographical area. Photo : Allows you to set a picture stored in your mobile device as the wallpaper of the Ambient Mode screen. You can configure special layouts using your photos. To save photos from your mobile device to the TV and import them in Ambient Mode , use the SmartThings app on your mobile device. Setting up the Ambient Mode details In the Ambient Mode browser screen, move the focus to , and then press the Select button. You can change the following settings: Brightness : Adjusts the screen brightness for Ambient Mode Color Tone : Adjusts the colors of the screen for Ambient Mode Auto Brightness : Changes the auto brightness setting for Ambient Mode When this function is set to Off , the brightness level of the TV screen is not automatically adjusted according to the ambient light level. Ambient Off Timer : Sets the time that the Ambient Mode screen turns off automatically. If there is no remote control input for the set time, the TV switches to the black screen state. Changing the background color of Ambient Mode To change the background color of Ambient Mode , move the focus to in the Ambient Mode browser screen, and then press the Select button. You can change the background color or pattern. Move the focus to a color or pattern you want, and then press the Select button. Take a picture of a wall using the SmartThings app on your mobile device to set it as the background of Ambient Mode This function may have a delay in image transmission and optimization depending on the network conditions. Changing the content and settings for Ambient Mode. When you press the button in Ambient Mode , the Ambient Mode browser screen appears. In the Ambient Mode browser screen, you can select content and change the Ambient Mode settings. 
Setting up the content for Ambient Mode The Ambient Mode browser screen displays content at the top and categories at the bottom. Use the left or right directional buttons in the content list at the top to move the focus to content you want, and then press the Select button. The selected content is played in Ambient Mode In the future, more content that you can set up in the Ambient Mode browser will be provided. You can select the following categories and content: Decor : Allows you to select beautiful screens. Info : Provides information such as weather, news headlines, and more. This function may not be supported depending on the geographical area. Photo : Allows you to set a picture stored in your mobile device as the wallpaper of the Ambient Mode screen. You can configure special layouts using your photos. To save photos from your mobile device to the TV and import them in Ambient Mode , use the SmartThings app on your mobile device. Setting up the Ambient Mode details In the Ambient Mode browser screen, move the focus to , and then press the Select button. You can change the following settings: Brightness : Adjusts the screen brightness for Ambient Mode Color Tone : Adjusts the colors of the screen for Ambient Mode Auto Brightness : Changes the auto brightness setting for Ambient Mode When this function is set to Off , the brightness level of the TV screen is not automatically adjusted according to the ambient light level. Ambient Off Timer : Sets the time that the Ambient Mode screen turns off automatically. If there is no remote control input for the set time, the TV switches to the black screen state. Changing the background color of Ambient Mode To change the background color of Ambient Mode , move the focus to in the Ambient Mode browser screen, and then press the Select button. You can change the background color or pattern. Move the focus to a color or pattern you want, and then press the Select button. 
Take a picture of a wall using the SmartThings app on your mobile device to set it as the background of Ambient Mode This function may have a delay in image transmission and optimization depending on the network conditions. Using the Ambient Mode. Learn about the functions available in Ambient Mode, which is a QLED TV-specific function. Ambient Mode The image on your TV may differ from the image above depending on the model and geographical area. Ambient Mode , you can view beautiful screens, various visual information, and notifications. To enter Ambient Mode , press the button. To return to the TV mode, press the button. To shut off the TV, press the button. If you press the button when the TV is turned off, the TV turns on in Ambient Mode If you use a remote control other than the Samsung TV remote control, there may be restrictions to entering Ambient Mode Because this function is a QLED TV-specific function, it may not be supported depending on the model. Decor Info Photo " +'Where do I find signal information ?',"'You can find signal information under ""Settings"" -> ""Support"" -> ""Self Diagnosis"" -> ""Signal Information.""'","Checking digital channel signal info and strength. Settings Support Self Diagnosis Signal Information Try Now If your TV is attached to a separate, over-the-air antenna, you can use the signal strength information to adjust the antenna for increased signal strength and improved HD channel reception. Viewing Broadcast Information at a Glance. See an overview of each channel's program lineup.Finding the contact information for service. Settings Support About This TV Try Now You can view the address of the Samsung website, the call center phone number, your TV's model number, your TV's software version, Open Source License and other information you may need to get service support from a Samsung call agent or the Samsung website. You can also view information by scanning the QR code of your TV. 
You can also start this function by pressing and holding the button for 5 or more seconds. For a standard remote control, press and hold the button for 5 or more seconds." +'How can I view the channels that are searched by the auto program function, and how can I view the first five favorite channels?',"'To view the channels that are searched by the auto program function, you can access the Channel List screen by pressing the right directional button on your remote control. From there, you can select ""All Channels"" to display the channels found during the Auto Program channel search.\n\nTo view the first five favorite channels, you can also access the Channel List screen by pressing the right directional button on your remote control. From there, you can select ""Favorites"" and then choose Favorites 1 from the five favorites lists displayed on the screen. This will show you the channels in the first favorites list.'","Scanning for available channels. Settings Broadcasting Auto Program Try Now Do not use this function if your TV is connected to a cable box or satellite box. If the TV has saved channels, Auto Program deletes the existing list, and then saves the newly scanned channels. Viewing and selecting channels on Favorites lists only. Live TV Channel List Press the right directional button on your remote control. Select Favorites. Select the desired list among the five favorites lists on the screen. The channels in the selected favorites list appear. Only favorites lists that contain registered channels will display channels. Using the Channel List. Change the channel or check programs on other digital channels. Live TV Channel List Using the Channel List, you can change the channel or check programs on other digital channels while watching TV. The Channel List screen contains the following icons: : A favorite channel : A locked channel. 
If you press the right directional button when viewing the Channel List screen, you can use the following functions: All Channels: Displays the channels that the TV found during the Auto Program channel search. Favorites: Displays Favorites 1 to Favorites 5. Air Cable: To select Air or Cable as the DTV mode, move the focus to Air Cable, and then press the Select button. Air is suitable for homes equipped with separate, over-the-air antennas. Cable is suitable for homes that subscribe to a local cable service. You do not need to use this function if your TV is connected to a cable box or satellite box. TV PLUS: While the TV is connected to the Internet, you can watch popular programs or highlights for each theme through a virtual channel at any time. As with regular channels, the TV provides the daily program schedules for TV PLUS. This function may not be supported depending on the model or geographical area. " +'Can I configure Tint?',"'Yes, you can configure Tint by accessing the Picture settings under Expert Settings in the TV menu.'","Inverting the screen color. Settings General Accessibility Color Inversion Try Now You can invert the colors of the text and background displayed on the TV screen to make it easier to read them. When Color Inversion is on, some Accessibility menus are not available. This function may not be supported depending on the model or geographical area. Configuring advanced picture settings. Settings Picture Expert Settings Try Now Configure the screen settings to your taste by using the following functions: Backlight Try Now Brightness Try Now Contrast Try Now Sharpness Try Now Color Try Now Tint (G/R) Try Now Apply Picture Settings Try Now Digital Clean View Try Now Auto Motion Plus Settings Try Now When LED Clear Motion is set to , the screen appears darker than when it is Off. Local Dimming: This function may not be supported depending on the model or geographical area. 
Contrast Enhancer Try Now HDR+ Mode Try Now Automatically provide an optimal HDR effect based on the video source. The HDR (High Dynamic Range) technology implements video images that are very similar to the images seen through human eyes by finely adjusting the contrast of the source. This function may not be supported depending on the model or geographical area. Film Mode Try Now This function is only available when the input signal is TV, AV, Component (480i, 1080i), or HDMI (1080i). Color Tone Try Now White Balance Try Now Gamma Try Now RGB Only Mode Try Now Color Space Settings Try Now Reset Picture Try Now Changing the content and settings for Ambient Mode. When you press the button in Ambient Mode , the Ambient Mode browser screen appears. In the Ambient Mode browser screen, you can select content and change the Ambient Mode settings. Setting up the content for Ambient Mode The Ambient Mode browser screen displays content at the top and categories at the bottom. Use the left or right directional buttons in the content list at the top to move the focus to content you want, and then press the Select button. The selected content is played in Ambient Mode In the future, more content that you can set up in the Ambient Mode browser will be provided. You can select the following categories and content: Decor : Allows you to select beautiful screens. Info : Provides information such as weather, news headlines, and more. This function may not be supported depending on the geographical area. Photo : Allows you to set a picture stored in your mobile device as the wallpaper of the Ambient Mode screen. You can configure special layouts using your photos. To save photos from your mobile device to the TV and import them in Ambient Mode , use the SmartThings app on your mobile device. Setting up the Ambient Mode details In the Ambient Mode browser screen, move the focus to , and then press the Select button. 
You can change the following settings: Brightness: Adjusts the screen brightness for Ambient Mode. Color Tone: Adjusts the colors of the screen for Ambient Mode. Auto Brightness: Changes the auto brightness setting for Ambient Mode. When this function is set to Off, the brightness level of the TV screen is not automatically adjusted according to the ambient light level. Ambient Off Timer: Sets the time that the Ambient Mode screen turns off automatically. If there is no remote control input for the set time, the TV switches to the black screen state. Changing the background color of Ambient Mode. To change the background color of Ambient Mode, move the focus to in the Ambient Mode browser screen, and then press the Select button. You can change the background color or pattern. Move the focus to a color or pattern you want, and then press the Select button. Take a picture of a wall using the SmartThings app on your mobile device to set it as the background of Ambient Mode. This function may have a delay in image transmission and optimization depending on the network conditions. " +'How do I fix the missing/wrong color issue?',"""To fix the missing/wrong color issue on your TV, you can try the following steps:\n\n1. Run the Start Picture Test to determine if the problem is caused by the TV's settings. \n2. Adjust the color settings by going to Picture Expert Settings and modifying the Color and Tint (G/R) settings.\n3. Ensure that the video input connectors are connected properly to the external device.\n4. If the colors are still incorrect, try adjusting the Picture Mode settings such as Brightness and Sharpness.\n5. Check if Power Saving Mode is enabled and disable it.\n6. If the issue persists, try resetting the picture settings in the Picture Expert Settings or change the Picture Size to the 16:9 Standard setting.\n7. If you are using AV or analog equipment, make sure the adapter is connected to the correct input port on the TV.\n8. 
If none of the above steps resolve the issue, contact Samsung's Call Center for further assistance.""","Testing the picture. Settings Support Self Diagnosis Start Picture Test Try Now Before you review the list of problems and solutions below, use Start Picture Test to determine if the problem is caused by the TV. Start Picture Test displays a high definition picture you can examine for flaws or faults. The problem Try this! Flickering and Dimming Component Connections/ Screen Color Screen Brightness Ghosting, Blurring, or Juddering If your TV is flickering or dimming sporadically, you may need to disable some of the energy efficiency features. Disable Ambient Light Detection Power Saving Mode , or Motion Lighting Settings General Eco Solution Ambient Light Detection Settings General Eco Solution Power Saving Mode Settings General Eco Solution Motion Lighting If the color on your TV screen is not correct or the black and white colors are off, run Start Picture Test Settings Support Self Diagnosis Start Picture Test If the test results indicate that the problem is not caused by the TV, do the following: Confirm that the video input connectors are connected to the correct external device video output connectors. Check the other connections as well. If the TV is connected to an external device via a component cable, confirm that the Pb, Pr, and Y jacks are plugged into their proper connectors. If the colors on your TV are correct but just a little too dark or bright, try adjusting the following settings first. Settings Picture Expert Settings Backlight Settings Picture Expert Settings Contrast Settings Picture Expert Settings Brightness Settings Picture Expert Settings Sharpness Settings Picture Expert Settings Color Settings Picture Expert Settings Tint (G/R) If you notice ghosting or blurring on the screen, use the Auto Motion Plus Settings function to resolve the issue. Settings Picture Expert Settings Auto Motion Plus Settings The problem Try this! 
Unwanted Powering Off Problems Powering On Unable to find a Channel The TV image does not look as good as it did in the store. The picture is distorted. The color is wrong or missing. If your TV appears to turn off by itself, try disabling some of the TV's energy efficiency functions. See if Sleep Timer has been enabled. The Sleep Timer automatically turns the TV off after a specified period of time. Settings General System Manager Time Sleep Timer If the Sleep Timer has not been enabled, see if Auto Power Off or Off Timer has been enabled and disable it. Settings General Eco Solution Auto Power Off Settings General System Manager Time Off Timer If you are having problems powering on your TV, there are a number of things to check before calling the service department. Confirm that the TV's power cord is connected correctly at both ends and that the remote control is operating normally. Make sure that the antenna cable or cable TV cable is firmly connected. If you have a cable box or satellite box, confirm that it is plugged in and turned on. If your TV is not connected to a cable box or satellite box, run Auto Program. Settings Broadcasting Auto Program. Store displays are all tuned to digital, HD (high definition) channels. If you have an analog cable box or satellite box, upgrade to a digital cable box or satellite box. Use HDMI or Component cables to deliver HD (high definition) picture quality. Many HD channels are upscaled from SD (Standard Definition) content. Look for a channel that is broadcasting HD content. Cable/Satellite Subscribers: Try HD channels from the channel lineup. Air/Cable Antenna Connection: Try HD channels after running the Auto Program function. Settings Broadcasting Auto Program Adjust the cable box or satellite box's video output resolution to 1080i or 720p. The compression of video content may cause picture distortions, especially in fast moving pictures from sports programs and action movies. 
If the signal reception is weak or poor, screen distortion may be visible but it is not a malfunction. Mobile phones used close to the TV (within 3.2 ft) may cause noise on analog and digital channels. If you're using a Component connection, make sure that the Component cables are connected to the correct jacks. Incorrect or loose connections may cause color problems or a blank screen. The problem Try this! The color is poor or the picture is not bright enough. There is a dotted line on the edge of the screen. The picture is black and white. If the test picture does not appear or there is noise or distortion, the TV may have a problem. Contact Samsung's Call Center for assistance. If the test picture is displayed properly, there may be a problem with an external device. Please check the connections. If the problem persists, check the signal strength or refer to the external device's user manual. Go to Picture and then adjust the Picture Mode Brightness Sharpness , and Color settings. Settings Picture Picture Mode Settings Picture Expert Settings Brightness Settings Picture Expert Settings Sharpness Settings Picture Expert Settings Color See if Power Saving Mode has been enabled. Settings General Eco Solution Power Saving Mode Try resetting the picture. Settings Picture Expert Settings Reset Picture Change Picture Size Settings 16:9 Standard Settings Picture Picture Size Settings 16:9 Standard If you are using AV or analog equipment, disconnect the adapter from the Component (Blue) input port on the TV and connect it to the AV (Yellow) input port. The Component or AV input port may not be provided depending on the model or geographical area. Check whether Grayscale is set to Settings General Accessibility Grayscale There Is a Problem with the Picture. When the TV has trouble with the picture, these steps may help resolve the problem.Inverting the screen color. 
Settings General Accessibility Color Inversion Try Now You can invert the colors of the text and background displayed on the TV screen to make it easier to read them. When Color Inversion is on, some Accessibility menus are not available. This function may not be supported depending on the model or geographical area." +'How do I fix blurring issues on my TV screen?',"'To fix blurring issues on a TV screen, you can try adjusting the Auto Motion Plus settings in the Picture Expert settings. This function can help resolve the issue of ghosting or blurring on the screen.'","There Is a Problem with the Picture. When the TV has trouble with the picture, these steps may help resolve the problem. Setting the screen to black and white. Settings General Accessibility Grayscale Try Now You can change the color of the TV screen to black and white to sharpen blurred edges caused by colors. When Grayscale is on, some Accessibility menus are not available. This function may not be supported depending on the model or geographical area. Testing the picture. Settings Support Self Diagnosis Start Picture Test Try Now Before you review the list of problems and solutions below, use Start Picture Test to determine if the problem is caused by the TV. Start Picture Test displays a high definition picture you can examine for flaws or faults. The problem Try this! Flickering and Dimming Component Connections/ Screen Color Screen Brightness Ghosting, Blurring, or Juddering If your TV is flickering or dimming sporadically, you may need to disable some of the energy efficiency features. 
Disable Ambient Light Detection Power Saving Mode , or Motion Lighting Settings General Eco Solution Ambient Light Detection Settings General Eco Solution Power Saving Mode Settings General Eco Solution Motion Lighting If the color on your TV screen is not correct or the black and white colors are off, run Start Picture Test Settings Support Self Diagnosis Start Picture Test If the test results indicate that the problem is not caused by the TV, do the following: Confirm that the video input connectors are connected to the correct external device video output connectors. Check the other connections as well. If the TV is connected to an external device via a component cable, confirm that the Pb, Pr, and Y jacks are plugged into their proper connectors. If the colors on your TV are correct but just a little too dark or bright, try adjusting the following settings first. Settings Picture Expert Settings Backlight Settings Picture Expert Settings Contrast Settings Picture Expert Settings Brightness Settings Picture Expert Settings Sharpness Settings Picture Expert Settings Color Settings Picture Expert Settings Tint (G/R) If you notice ghosting or blurring on the screen, use the Auto Motion Plus Settings function to resolve the issue. Settings Picture Expert Settings Auto Motion Plus Settings The problem Try this! Unwanted Powering Off Problems Powering On Unable to find a Channel The TV image does not look as good as it did in the store. The picture is distorted. The color is wrong or missing. If your TV appears to turn off by itself, try disabling some of the TV's energy efficiency functions. See if Sleep Timer has been enabled. The Sleep Timer automatically turns the TV off after a specified period of time. Settings General System Manager Time Sleep Timer If the Sleep Timer has not been enabled, see if Auto Power Off Off Timer has been enabled and disable it. 
Settings General Eco Solution Auto Power Off Settings General System Manager Time Off Timer If you are having problems powering on your TV, there are a number of things to check before calling the service department. Confirm that the TV's power cord is connected correctly at both ends and that the remote control is operating normally. Make sure that the antenna cable or cable TV cable is firmly connected. If you have a cable box or satellite box, confirm that it is plugged in and turned on. If your TV is not connected to a cable box or satellite box, run Auto Program Settings Broadcasting Auto Program Store displays are all tuned to digital, HD (high definition) channels. If you have an analog cable box or satellite box, upgrade to a digital cable box or satellite box. Use HDMI or Component cables to deliver HD (high definition) picture quality. Many HD channels are upscaled from SD (Standard Definition) content. Look for a channel that is broadcasting HD content. Cable/Satellite Subscribers: Try HD channels from the channel lineup. Air/Cable Antenna Connection: Try HD channels after running the Auto Program function. Settings Broadcasting Auto Program Adjust the cable box or satellite box's video output resolution to 1080i or 720p. The compression of video content may cause picture distortions, especially in fast moving pictures from sports programs and action movies. If the signal reception is weak or poor, screen distortion may be visible but it is not a malfunction. Mobile phones used close to the TV (within 3.2 ft) may cause noise on analog and digital channels. If you're using a Component connection, make sure that the Component cables are connected to the correct jacks. Incorrect or loose connections may cause color problems or a blank screen. The problem Try this! The color is poor or the picture is not bright enough. There is a dotted line on the edge of the screen. The picture is black and white. 
If the test picture does not appear or there is noise or distortion, the TV may have a problem. Contact Samsung's Call Center for assistance. If the test picture is displayed properly, there may be a problem with an external device. Please check the connections. If the problem persists, check the signal strength or refer to the external device's user manual. Go to Picture and then adjust the Picture Mode Brightness Sharpness , and Color settings. Settings Picture Picture Mode Settings Picture Expert Settings Brightness Settings Picture Expert Settings Sharpness Settings Picture Expert Settings Color See if Power Saving Mode has been enabled. Settings General Eco Solution Power Saving Mode Try resetting the picture. Settings Picture Expert Settings Reset Picture Change Picture Size Settings 16:9 Standard Settings Picture Picture Size Settings 16:9 Standard If you are using AV or analog equipment, disconnect the adapter from the Component (Blue) input port on the TV and connect it to the AV (Yellow) input port. The Component or AV input port may not be provided depending on the model or geographical area. Check whether Grayscale is set to Settings General Accessibility Grayscale " +'What is the use of universal guide?',"""The Universal Guide app allows users to search for and enjoy various content such as TV shows, dramas, movies, sports broadcasts, and music in one place. It can recommend content based on the user's preferences and notify them of new drama series. Additionally, the Universal Guide app can be accessed on a mobile device using the Samsung SmartThings app.""","Using the Universal Guide App. Search for and enjoy content such as TV shows, dramas, movies, sports broadcasts, and music. Universal Guide The image on your TV may differ from the image above depending on the model and geographical area. Universal Guide is an app that allows you to search for and enjoy various content such as TV shows, dramas, movies, and music in one place. 
Universal Guide can recommend content tailored to your preferences and notify you of new drama series. You can use this feature on your mobile with the Samsung SmartThings app. To enjoy the content from these apps on your TV, they must be installed on the TV. When you watch some paid content, you may need to make a payment using their associated app. Images may look blurry depending on the service provider's circumstances. This function may not be supported depending on the model or geographical area. For You TV Shows Movies Sports Music On Now Accessibility Guidance. Provides a menu and a remote control guide that aid the visually impaired. Using additional functions. You can use the following features on the Source screen. Connection Guide: Displays device connection instructions. Universal Remote: Lets you register external devices to your Samsung Smart Remote and control them using the Remote. This function may not be supported depending on the model or geographical area. For more information, refer to External Devices with the Samsung Smart Remote - Using the Universal " +'What is the feature of the Bixby guide?','The Bixby guide provides a quick tutorial on how to use Bixby.',"Quick Guides. You can learn quickly how to run and use the frequently used functions, such as Bixby, Ambient Mode, and Smart Hub. Running Bixby. Press and hold the button on your Samsung Smart Remote, say a command, and then release the button. The TV recognizes the voice command. To view the Bixby guide, press the button once: When you press the button for the first time, the Using Bixby button appears at the bottom of the screen. Press the Select button. The Using Bixby popup window appears and a tutorial on using Bixby is shown. When you press the button after the first time, the Enter My Bixby button appears at the bottom of the screen. Press the Select button to go to the My Bixby screen. Running Bixby. 
Press and hold the button on your Samsung Smart Remote, say a command, and then release the button. The TV recognizes the voice command. To view the Bixby guide, press the button once: When you press the button for the first time, the Using Bixby button appears at the bottom of the screen. Press the Select button. The Using Bixby popup window appears and a tutorial on using Bixby is shown. When you press the button after the first time, the Enter My Bixby button appears at the bottom of the screen. Press the Select button to go to the My Bixby screen. " +'How to launch the last used app automatically?',"'To launch the last used app automatically, go to Settings -> General -> Smart Features -> Autorun Last App. Then, turn on the Autorun Last App function by pressing the Select button at the current menu. When this function is enabled, the last used app will automatically run when you turn on the TV.'","Launching the last used app automatically. Settings General Smart Features Autorun Last App Try Now Autorun Last App is set to , the last used app is automatically run when you turn on the TV. You can also turn this function on or off: press the Select button at the current menu. This function may not be supported depending on the app. Move Remove Installing and running an app. Installing an app Move to the app you want to install, and then press the Select button. The detailed information screen appears. Select Install . When the installation is complete, the Open menu appears. Select Open to run the app immediately. You can view installed apps on the Settings screen. If the TV's internal memory is insufficient, you can install some specific apps on a USB device. You can run an app installed on a USB device only when the USB device is connected to the TV. If the USB device is disconnected while an app is running, the app is terminated. You cannot run an app installed on a USB device on a PC or another TV. 
Launching an app. You can run the desired app from Downloaded App. The icons below appear within the selected app's icon and indicate the following: : The app is installed on a USB device. : The app has a password. : The app needs to be updated. : The app supports the mirroring function. Managing purchased or installed apps. Select Settings APPS. You can manage installed apps. The image on your TV may differ from the image above depending on the model and geographical area. Removing an app. Select an app to delete. Select Delete. The selected app is deleted. Ensure that the related app data is also removed when you remove an app. Adding apps to the Home Screen. Select an app to add. Select Add to Home. After the Home screen appears, move the selected app to the desired location. Press the Select button. The selected app is added to the Home Screen. Settings Delete Add to Home Lock/Unlock Move Reinstall View Details Auto Update : ON Locking and unlocking apps. Select an app to lock or unlock. Select Lock/Unlock. The selected app is locked or unlocked. To lock or unlock an app, enter the PIN. The default PIN is You can set the PIN in Settings General System Manager Change PIN. Moving apps. Select an app to move. Select Move. Move the app to the desired location. Press the Select button. The selected app is moved. This function may not be supported depending on the model or geographical area. Reinstalling an app. Select the app to install again. Select Reinstall. Reinstallation starts. Checking the app information details. Select the app to check. Select View Details. The app information screen appears. Automatically updating apps. Installed apps can be automatically updated. If you do not want apps automatically updated, set Auto Update to OFF. Automatic update is enabled only when the TV is connected to the Internet." 
+'Where do I find the list of my favorite channels?',"'You can find the list of your favorite channels by pressing the right directional button on your remote control to access the Live TV Channel List, then selecting ""Favorites"" and choosing the desired list among the five favorites lists displayed on the screen. Only favorites lists that contain registered channels will display the channels you have selected as your favorites.'","Creating a Personal Favorites List. Designate frequently watched channels as favorite channels. Favorite channels are highlighted in the Edit Channels and Channel List screens with the symbol. You can create up to five favorites lists so that your family members can create their own personal favorites list. Viewing and selecting channels on Favorites lists only. Live TV Channel List Press the right directional button on your remote control. Select Favorites. Select the desired list among the five favorites lists on the screen. The channels in the selected favorites list appear. Only favorites lists that contain registered channels will display channels. Adding channels to a favorites list. Select channels to add, and then select the icon. The selected channels are added to the favorites list." +'I want to set up a beautiful screen. How can I do that?',"'To set up beautiful screens, you can use the Ambient Mode feature on your TV. Simply press the button on your remote control to enter Ambient Mode, where you can view various visual information and notifications. To return to TV mode, press the button.'","Setting Up a Schedule Viewing. Configure the TV to show a specific channel or program at a specific time and date. The icon appears next to programs that have been configured for a schedule viewing. 
To set up a schedule viewing, you must first set the TV's clock ( Settings General System Manager Time Clock Setting up a schedule viewing The Guide Screen On the Guide screen, select a program you would like to view, and then press and hold the Select button. Select Schedule Viewing on the pop-up menu that appears. The Program Info Screen Press the Select button while watching the TV. The Program Info window appears. Select a broadcast scheduled program by using the left or right directional buttons, and then press the Select button. You can schedule watching the program by selecting Schedule Viewing Setting the screen to black and white. Settings General Accessibility Grayscale Try Now You can change the color of the TV screen to black and white to sharpen blurred edges caused by colors. Grayscale is on, some Accessibility menus are not available. This function may not be supported depending on the model or geographical area.Displaying the Home Screen. Press the button. The image on your TV may differ from the image above depending on the model and geographical area. On the Home Screen, you can easily run the apps you have used previously or frequently. The apps can also be moved or deleted from the screen. Notification You can view a list of notifications for all events that occur on your TV. A notification appears on the screen when it is time to view a scheduled program or when an event occurs on a registered device. If you move the focus to Notification , and then press the Select button, a notification window appears on the right and the following functions are available: Delete All You can delete all your notifications. Settings You can select services you want to be notified about. When you select Allow sound , notifications are displayed with a notification sound. Sources Connection HDMI 1 HDMI 2 USB 1 USB 2 Source Universal Remote Guide Settings When the focus is moved to the icon, a list of quick settings icons appears above the top of the menu. 
You can quickly set frequently used functions by clicking the icons. Picture Mode You can select the picture mode that provides the best viewing experience. To change the picture mode, press the Select button. To make fine adjustments, press the up directional button, and then select Picture Setup Sound Mode You can select a sound mode to optimize your listening experience. To change the sound mode, press the Select button. To make fine adjustments, press the up directional button, and then select Equalizer Setup Sound Output You can select which speakers the TV uses for audio output. To change the audio output, press the Select button. To connect to a Bluetooth speaker, press the up directional button, and then select Speaker List Connecting Bluetooth speaker may not be supported depending on the model or geographical area. Caption You can watch TV broadcasts with captions. To activate/deactivate the Caption function, press the Select button. To run Accessibility Shortcuts , press the up directional button, and then select Accessibility Shortcuts Sleep Timer You can have the TV automatically turn off at a specific time. To change the sleep time, press the Select button. To set the specific time at which the TV turns off automatically, press the up directional button, and then select Set Up Off Timer Network You can view the current network and Internet status. Press the up directional button, and then select Network Status Network Settings Pressing Settings displays all setting menus available. This function may not be supported depending on the model or geographical area. Source You can select an external device connected to the TV. For more information, refer to between external devices connected to the Search You can search the apps or games in Smart Hub services. To use this feature, the TV must be connected to the Internet. APPS You can enjoy a wide range of content including news, sports, weather, and games by installing the corresponding apps on your TV. 
To use this feature, the TV must be connected to the Internet. For more information, refer to the APPS section. Ambient Mode: In Ambient Mode, you can view beautiful screens, various visual information, and notifications. To enter Ambient Mode, press the button. To return to the TV mode, press the button. To shut off the TV, press the button. If you press the button when the TV is turned off, the TV turns on in Ambient Mode. If you use a remote control other than the Samsung TV remote control, there may be restrictions to entering Ambient Mode. Because this function is a QLED TV-specific function, it may not be supported depending on the model. For more information about Ambient Mode, refer to the Ambient Mode section. Universal Guide: Universal Guide is an app that allows you to search for and enjoy various content such as TV shows, dramas, movies, and music in one place. Universal Guide can recommend content tailored to your preferences and notify you of new drama series. For more information about the Universal Guide, refer to the Universal Guide section. Images may look blurry depending on the service provider's circumstances. This function may not be supported depending on the model or geographical area. The image on your TV may differ from the image above depending on the model and geographical area. Moving an item on the Home Screen: Move the focus to the app you want to move, press the down directional button on the remote control, and then select Move. Move the app to the desired position by pressing the left or right directional button, and then press the Select button. The selected app is moved on the Home screen. Removing an item on the Home Screen: Move the focus to the app you want to delete, press the down directional button on the remote control, and then select Remove. The selected app is deleted. You can add the apps you want to use often to the Home Screen using APPS. 
To add your favorite apps to the Home Screen, refer to purchased or installed" +'How do I record using the Timeshift function?',"'To record using the Timeshift function, you can press the button while watching the program to display a menu. From this menu, you can select the ""Record"" option, which immediately starts to record the program you are watching.'","Using Timeshift. Play, pause, or rewind live TV. You can pause and rewind live TV by using Timeshift. Activating this function disables the Broadcasting function. Try Now To activate the Timeshift function while watching TV, press the button and then select Timeshift, or press the button. This function is not available in the U.S.A. and Canada. This function may not be supported depending on the model or geographical area. When the Timeshift function is running, some functions are not available in Settings > Broadcasting. To use Timeshift, you must connect a USB device for recording. Read all precautions before using the Timeshift function. For more information, refer to Using the Recording and Timeshift. Before Using the Recording and Timeshift Functions: Read these instructions before using Recording and Timeshift. Using the Instant and Schedule Recording options from the program info window: Recording the program you are watching: To display a menu for recording a program or starting the Timeshift function, press the button while watching the program. You can select the following functions on the menu: Record: Immediately starts to record the program you are watching. Timeshift: Starts the Timeshift function. Stop: Stops the recording or the Timeshift function that is running. Info: Displays the details about the program you are watching. Scheduling viewing from the program info window: Press the Select button while viewing a program to display the program info window. 
In the program info window, use the left and right directional buttons to select a broadcast scheduled program, and then press the Select button. To schedule a recording for the program, select Schedule Recording" +'My IP auto setting failed. How do I configure it?',"'To configure the IP auto setting that failed, go to Settings > General > Network > Network Status > IP Settings. Make sure that the DHCP server is enabled on the router. Then, unplug the router and plug it back in. Reset the router if required. For a wireless connection, make sure to enter the correct password if needed.'","Troubleshooting Internet connectivity issues. Try Now After reading the following content, troubleshoot the Internet connection issue. If the problem persists, contact your Internet Service Provider. No network cable found: Make sure that the LAN cable is plugged in on both ends. If it is plugged in, make sure that the router is turned on. If the router is on, try turning it off and then on. Wireless network connection failed: If a selected wireless router is not found, go to Open Network Settings (Settings > General > Network > Open Network Settings), and then select the correct router. Unable to connect to a wireless router: Check if the router is turned on. If it is, turn it off and then on. Enter the correct password if required. IP auto setting failed: Configure the settings in IP Settings (Settings > General > Network > Network Status > IP Settings). Make sure that the DHCP server is enabled on the router, and then unplug the router and plug it back in. Reset the router if required. For wireless connection, enter the correct password if required. Unable to connect to the network: Check all IP Settings (Settings > General > Network > Network Status > IP Settings). After checking the DHCP server status (must be active) on the router, remove the LAN cable, and then connect it again. For wireless connection, enter the correct password if required. 
Connected to a local network, but not to the Internet: Make sure that the Internet LAN cable is connected to the router's external LAN port. Check the DNS values in IP Settings (Settings > General > Network > Network Status > IP Settings). Network setup is complete, but unable to connect to the Internet: If the problem persists, contact your Internet Service Provider. Setting up an Internet connection over IPv6. Configuring the IPv6 connection settings: Settings > General > Network > Expert Settings > IPv6. This function may not be supported depending on the model or geographical area. Checking the Internet connection set up over IPv6: Settings > General > Network > Expert Settings > IPv6 Status. This function may not be supported depending on the model or geographical area. Connecting an IP control device to the TV. Settings > General > Network > Expert Settings > IP Remote. You can connect an IP control device to the TV remotely to manage your IP device. To use this function, Power On with Mobile must be turned on. Turning this feature on may allow other IP control devices to control your TV. We recommend turning this feature on only if an authorized third-party custom controller is installed and configured specifically for a Samsung TV and if your Wi-Fi network is password protected. This function may not be supported depending on the model or geographical area." +'How can I connect my mobile device to the TV?',"'To connect your mobile device to the TV, you can follow these steps:\n\n1. Make sure your TV supports SmartThings and that you are signed in to your Samsung account on the TV.\n2. Start the SmartThings app on your mobile device.\n3. Tap on Add Device on the dashboard of the SmartThings app.\n4. Connectable TVs will be searched for - select the model of your TV from the search results.\n5. Enter the PIN number displayed on the TV screen.\n6. Once the TV is registered with your Samsung account, you can use the SmartThings app to control the TV.\n7. 
Make sure both your mobile device and the TV are connected to the same network for optimal functionality.'","Connecting to the TV via the SmartThings app. Turn on the TV. Start the SmartThings app on your mobile device. Tap Add Device on the dashboard of the SmartThings app. Connectable TVs are searched for. From the search results, select the model of your TV. Enter the PIN number displayed on the TV screen. Once the TV is registered with your Samsung account, you can use the SmartThings app to control the TV. If your TV is not found on your mobile device, turn both of them off and on, and then try again. If you have more than one TV, you can easily select the TV you want by setting different TV names at Settings > General > System Manager > Device Name. Connecting Your Mobile Device. Connect your mobile device to the TV on the same network to control the TV with the mobile device or enjoy the content of the mobile device on the TV. Use the SmartThings app on your mobile device to quickly and easily connect to the TV and control its main settings. In addition, you can remotely check and control the statuses of various devices such as Samsung smart devices, home appliances, and speakers registered to the server. Make sure that your TV supports SmartThings. You can check this with the Supported Device menu in the SmartThings app. To use the SmartThings app, you must be signed in to your Samsung account on the TV. This function may not be supported depending on the TV model or mobile device. The supported functions may differ depending on the version of the SmartThings app. You can install the SmartThings app from App Store or Google Play Store. Connection notes for mobile devices. To use the Smart View function, the mobile device must support a mirroring function such as Screen Mirroring or Smart View. To check whether your mobile device supports the mirroring function, please visit the mobile device manufacturer's website. 
To use Wi-Fi Direct, the mobile device must support the Wi-Fi Direct function. Please check if your mobile device supports Wi-Fi Direct. The mobile device and your Smart TV must be connected to each other on the same network. The video or audio may stop intermittently, depending on network conditions. When sharing content with other network-based devices such as those in an IP (Internet Protocol) storage system, sharing may not be supported due to the network's configuration, quality, or functionality, for example, if the network has a NAS (Network-Attached Storage) device. " +'How do I configure Contrast and Sharpness?',"'To configure Contrast and Sharpness, go to the Picture Expert Settings and use the functions provided there. Adjust the Contrast and Sharpness settings to your preference.'","Configuring advanced picture settings. Settings > Picture > Expert Settings Try Now Configure the screen settings to your taste by using the following functions: Backlight Try Now Brightness Try Now Contrast Try Now Sharpness Try Now Color Try Now Tint (G/R) Try Now Apply Picture Settings Try Now Digital Clean View Try Now Auto Motion Plus Settings Try Now When LED Clear Motion is set to On, the screen appears darker than when it is Off. Local Dimming: This function may not be supported depending on the model or geographical area. Contrast Enhancer Try Now HDR+ Mode Try Now Automatically provide an optimal HDR effect based on the video source. The HDR (High Dynamic Range) technology implements video images that are very similar to the images seen through human eyes by finely adjusting the contrast of the source. This function may not be supported depending on the model or geographical area. Film Mode Try Now This function is only available when the input signal is TV, AV, Component (480i, 1080i), or HDMI (1080i). Color Tone Try Now White Balance Try Now Gamma Try Now RGB Only Mode Try Now Color Space Settings Try Now Reset Picture Try Now Adjusting the Picture Quality. 
Change the Picture Mode and adjust Expert Settings. Setting the screen to black and white. Settings > General > Accessibility > Grayscale Try Now You can change the color of the TV screen to black and white to sharpen blurred edges caused by colors. When Grayscale is on, some Accessibility menus are not available. This function may not be supported depending on the model or geographical area." +'What are the steps to reset the network?',"'To reset the network, follow these steps:\n\n1. Go to Settings.\n2. Select General.\n3. Select Network.\n4. Choose Reset Network.\n5. Select Try Now.\n6. Restore the network settings to the factory default.'","Resetting Your Network. Settings > General > Network > Reset Network Try Now Restore the network settings to the factory default. Restoring the TV to the factory settings. Settings > Support > Self Diagnosis > Reset Try Now You can restore all TV settings (excluding the Internet and network settings) to the factory defaults. Select Reset. The security PIN entry window appears. Enter the security PIN, and then select Yes. All settings are then reset. The TV turns off and on again automatically and displays the Start Setup screen. For more information on Start Setup, refer to the user manual that came with the TV. Troubleshooting Internet connectivity issues. Try Now After reading the following content, troubleshoot the Internet connection issue. If the problem persists, contact your Internet Service Provider. No network cable found: Make sure that the LAN cable is plugged in on both ends. If it is plugged in, make sure that the router is turned on. If the router is on, try turning it off and then on. Wireless network connection failed: If a selected wireless router is not found, go to Open Network Settings (Settings > General > Network > Open Network Settings), and then select the correct router. Unable to connect to a wireless router: Check if the router is turned on. If it is, turn it off and then on. Enter the correct password if required. 
IP auto setting failed: Configure the settings in IP Settings (Settings > General > Network > Network Status > IP Settings). Make sure that the DHCP server is enabled on the router, and then unplug the router and plug it back in. Reset the router if required. For wireless connection, enter the correct password if required. Unable to connect to the network: Check all IP Settings (Settings > General > Network > Network Status > IP Settings). After checking the DHCP server status (must be active) on the router, remove the LAN cable, and then connect it again. For wireless connection, enter the correct password if required. Connected to a local network, but not to the Internet: Make sure that the Internet LAN cable is connected to the router's external LAN port. Check the DNS values in IP Settings (Settings > General > Network > Network Status > IP Settings). Network setup is complete, but unable to connect to the Internet: If the problem persists, contact your Internet Service Provider. " +'How do I view a list of mobile devices registered to the TV?',"'To view a list of mobile devices registered to the TV, you would need to go to the ""Managing Mobile Devices"" section on the TV settings. Under the Device Connect Manager, there should be an option called ""Device List"" where you can view, edit, or remove a list of mobile devices registered to the TV.'","Managing Mobile Devices. Settings > General > External Device Manager > Device Connect Manager Try Now Access Notification: Set whether to display a notification when a mobile device attempts to connect to the TV. Device List: View, edit, or remove a list of mobile devices registered to the TV. Connecting to the TV via the SmartThings app. Turn on the TV. Start the SmartThings app on your mobile device. Tap Add Device on the dashboard of the SmartThings app. Connectable TVs are searched for. From the search results, select the model of your TV. Enter the PIN number displayed on the TV screen. 
Once the TV is registered with your Samsung account, you can use the SmartThings app to control the TV. If your TV is not found on your mobile device, turn both of them off and on, and then try again. If you have more than one TV, you can easily select the TV you want by setting different TV names at Settings > General > System Manager > Device Name. Using the SmartThings App. Monitor and configure the smart devices you have connected to the SmartThings service. SmartThings The image on your TV may differ from the image above depending on the model and geographical area. From your TV, you can monitor and configure smart devices you connected to the SmartThings service via the SmartThings app installed on them. You can receive notifications about the statuses of your smart devices. To use this function, you must be logged in to your Samsung account. This function may not be supported depending on the model or geographical area. Location selection list: Press the Select button to identify all connected devices or the devices specific to a location. Sign In: Go to the Samsung Account screen to create a new account or sign in to your account. Settings: Turn notifications on or off for the connected smart devices. Set Your Scene: Create a custom mode to easily control multiple devices with a single command. Device list: View a list of the connected smart devices and their statuses. You can add a new device through the SmartThings app on your mobile phone. Please refer to the user guide of the SmartThings app for more details." +"""I get this error 'some files cannot be played'. How do I fix this?""","""To fix the error of 'some files cannot be played', you can try the following steps:\n\n1. Make sure the file format is supported by the media player or device you are using.\n2. Check the encoding method of the file and ensure it is compatible with the media player.\n3. 
If the content is not playing smoothly due to network communication problems, try using a USB device instead.\n4. Connect a USB device to the USB port of your device, then browse the list of multimedia content stored on the device and select a content item to play.""","My File Won't Play. When files don't play, this may help resolve the problem. The problem / Try this! Some files can't be played. Read Before Playing Photo, Video, or Music Files. Read this information before playing media content. Playing media content. Playing multimedia content on a PC or mobile device: Select a device with media content in Source. The media content list in the device appears. Select a media content item from the list. The content is played. For more information about buttons, refer to buttons and functions available while playing multimedia content. The content may not be played depending on the encoding method or file format. Furthermore, several functions may be restricted. Content on devices connected to the TV via your network may not play smoothly due to network communication problems. If this occurs, use a USB device. For more information about connecting to mobile devices, refer to Connecting Your Mobile Device. Playing multimedia content on a USB device: Connect a USB device to the USB port. When a pop up message appears on the screen, select Browse to easily move to the list of multimedia content stored on the device. Select a content item to play from the list. The content is played. " +'How do I set scheduled viewing time?',"'To set scheduled viewing time, first set the TV\'s clock by going to Settings --> General --> System Manager --> Time --> Clock. Then, on the Guide screen, select a program you would like to view, press and hold the Select button, and choose ""Schedule Viewing"" from the pop-up menu that appears. 
Alternatively, while watching TV, press the Select button to bring up the Program Info screen, select a program using the directional buttons, and choose ""Schedule Viewing"" to set the viewing time.'","Setting Up a Schedule Viewing. Configure the TV to show a specific channel or program at a specific time and date. The icon appears next to programs that have been configured for a schedule viewing. To set up a schedule viewing, you must first set the TV's clock (Settings > General > System Manager > Time > Clock). Setting up a schedule viewing. The Guide Screen: On the Guide screen, select a program you would like to view, and then press and hold the Select button. Select Schedule Viewing on the pop-up menu that appears. The Program Info Screen: Press the Select button while watching the TV. The Program Info window appears. Select a broadcast scheduled program by using the left or right directional buttons, and then press the Select button. You can schedule watching the program by selecting Schedule Viewing. Setting up a scheduled viewing for a channel. Using the schedule viewing options from the guide screen: On the Guide screen, select a program you would like to view, press and hold the Select button, and then select Schedule Viewing in the pop-up menu that appears. Using the schedule viewing options in the program info window: Press the Select button while watching TV. The Program Info window appears. Select a program that will be broadcast using the left or right directional buttons. Press the Select button, and then select Schedule Viewing to schedule viewing of that program. 
Using schedule viewing to view programs at a specified time on a specified date: (U.S.A. and Canada) Live TV > Schedule Manager > Add Viewing; (Other countries) Live TV > Schedule Manager > Recordings > Schedules > Add Viewing. To schedule a viewing, set the values for Antenna, Channel, Repeat, Date, and Start Time of the broadcast program to view, and then select Save. Check the current time in Settings > General > System Manager > Time > Clock. This function may not be supported depending on the model or geographical area. " +'Can I scan the TV for malicious code?',"'Yes, you can scan your TV for malicious code by accessing the Smart Security feature in the Settings of your TV. It will scan the TV and connected storage for any malicious code, and if any is found, you can quarantine it to prevent it from executing.'","Scanning the TV and connected storage for malicious code. Settings > General > System Manager > Smart Security > Scan If malicious code is found, the results of the scan will appear on the screen. This result window displays all malicious code found. You can quarantine (isolate) the code. Quarantining prevents it from executing. Select all the malicious code to quarantine. Select Isolate. The selected malicious code is relocated to the Isolated List. The Isolated List displays all quarantined malicious codes. Protecting the TV from Hacking and Malicious Code. Protect your TV from hacking attempts and malicious code. Settings > General > System Manager > Smart Security Try Now You can use Smart Security to protect the TV from hacking and malicious code when it is connected to the Internet. Scanning for available channels. Settings > Broadcasting > Auto Program Try Now Do not use this function if your TV is connected to a cable box or satellite box. If the TV has saved channels, Auto Program deletes the existing list, and then saves the newly scanned channels." 
+'What is Decor and how do I set the wallpaper of the Ambient Mode screen?',"'Decor is a category in Ambient Mode that allows you to select beautiful screens for your TV. To set a wallpaper for the Ambient Mode screen from your mobile device, you can use the Photo category. Select a picture stored in your mobile device and import it into Ambient Mode using the SmartThings app.'","Changing the content and settings for Ambient Mode. When you press the button in Ambient Mode, the Ambient Mode browser screen appears. In the Ambient Mode browser screen, you can select content and change the Ambient Mode settings. Setting up the content for Ambient Mode: The Ambient Mode browser screen displays content at the top and categories at the bottom. Use the left or right directional buttons in the content list at the top to move the focus to content you want, and then press the Select button. The selected content is played in Ambient Mode. In the future, more content that you can set up in the Ambient Mode browser will be provided. You can select the following categories and content: Decor: Allows you to select beautiful screens. Info: Provides information such as weather, news headlines, and more. This function may not be supported depending on the geographical area. Photo: Allows you to set a picture stored in your mobile device as the wallpaper of the Ambient Mode screen. You can configure special layouts using your photos. To save photos from your mobile device to the TV and import them in Ambient Mode, use the SmartThings app on your mobile device. Setting up the Ambient Mode details: In the Ambient Mode browser screen, move the focus to the corresponding icon, and then press the Select button. 
You can change the following settings: Brightness: Adjusts the screen brightness for Ambient Mode. Color Tone: Adjusts the colors of the screen for Ambient Mode. Auto Brightness: Changes the auto brightness setting for Ambient Mode. When this function is set to Off, the brightness level of the TV screen is not automatically adjusted according to the ambient light level. Ambient Off Timer: Sets the time that the Ambient Mode screen turns off automatically. If there is no remote control input for the set time, the TV switches to the black screen state. Changing the background color of Ambient Mode: To change the background color of Ambient Mode, move the focus to the corresponding icon in the Ambient Mode browser screen, and then press the Select button. You can change the background color or pattern. Move the focus to a color or pattern you want, and then press the Select button. Take a picture of a wall using the SmartThings app on your mobile device to set it as the background of Ambient Mode. This function may have a delay in image transmission and optimization depending on the network conditions. Using the Ambient Mode. Learn about the functions available in Ambient Mode, which is a QLED TV-specific function. Ambient Mode The image on your TV may differ from the image above depending on the model and geographical area. In Ambient Mode, you can view beautiful screens, various visual information, and notifications. To enter Ambient Mode, press the button. To return to the TV mode, press the button. To shut off the TV, press the button. 
If you press the button when the TV is turned off, the TV turns on in Ambient Mode. If you use a remote control other than the Samsung TV remote control, there may be restrictions to entering Ambient Mode. Because this function is a QLED TV-specific function, it may not be supported depending on the model. Decor, Info, Photo " +'Can I request service if I am having a problem with the TV?',"'Yes, you can request service if you are having a problem with the TV by selecting the item matching the problem you encountered and then selecting ""Request Now"", ""Schedule Appointment"", or ""Send"". Your service request will be registered, and the Samsung Contact Center will contact you to set up or confirm your service appointment.'","Requesting service. Settings > Support > Request Support Try Now You can request service when you encounter a problem with the TV. Select the item matching the problem that you encountered, and then select Request Now, Schedule Appointment, or Send. Your service request will be registered. The Samsung Contact Center will contact you to set up or confirm your service appointment. You must agree to the terms and conditions for the service request. This function may not be supported depending on the geographical area. This function requires an Internet connection. Troubleshooting. If you have a problem with your TV, you can access various solutions while watching TV. 
If your problem is not addressed in the Troubleshooting section, contact the Samsung Call Center." +'Where can I see program information?',"'You can see program information on the Guide screen or the Program Info window. To access program information on the Guide screen, select a program you would like to view and press and hold the Select button, then select Schedule Viewing. To access program information on the Program Info window, press the Select button while watching TV, select a program using the directional buttons, and then select Schedule Viewing.'","Viewing Broadcast Information at a Glance. See an overview of each channel's program lineup. Setting up a scheduled viewing for a channel. Using the schedule viewing options from the guide screen: On the Guide screen, select a program you would like to view, press and hold the Select button, and then select Schedule Viewing in the pop-up menu that appears. Using the schedule viewing options in the program info window: Press the Select button while watching TV. The Program Info window appears. Select a program that will be broadcast using the left or right directional buttons. Press the Select button, and then select Schedule Viewing to schedule viewing of that program. Using schedule viewing to view programs at a specified time on a specified date: (U.S.A. and Canada) Live TV > Schedule Manager > Add Viewing; (Other countries) Live TV > Schedule Manager > Recordings > Schedules > Add Viewing. To schedule a viewing, set the values for Antenna, Channel, Repeat, Date, and Start Time of the broadcast program to view, and then select Save. Check the current time in Settings > General > System Manager > Time > Clock. This function may not be supported depending on the model or geographical area. Setting Up a Schedule Viewing. Configure the TV to show a specific channel or program at a specific time and date. The icon appears next to programs that have been configured for a schedule viewing. 
To set up a schedule viewing, you must first set the TV's clock ( Settings General System Manager Time Clock ). Setting up a schedule viewing The Guide Screen On the Guide screen, select a program you would like to view, and then press and hold the Select button. Select Schedule Viewing on the pop-up menu that appears. The Program Info Screen Press the Select button while watching the TV. The Program Info window appears. Select a broadcast scheduled program by using the left or right directional buttons, and then press the Select button. You can schedule watching the program by selecting Schedule Viewing. " +'What is Source and how do I search for channels?',"'Source refers to the input signal that is currently displayed on the TV screen. To search for channels, you can use the Channel List feature on your TV. This feature allows you to change the channel or check programs on other digital channels while watching TV. The Channel List screen contains icons for favorite channels and locked channels. Additionally, by pressing the right directional button on the Channel List screen, you can access functions such as displaying all channels found during a channel search and selecting between Air and Cable as the DTV mode.'","Changing the input signal. Source When you select a connected external device on the Source screen, the output of the selected device is displayed on the TV's screen. To control a device (Blu-ray player, game console, etc.) that supports universal remote control with the TV's remote, connect the device to an HDMI port on the TV, and then turn on the device. You can switch to the output of the device automatically or you can configure universal remote control for the device automatically. To configure the device for universal remote control automatically, point the TV's remote control at the device. When a USB device is connected to the USB port, a pop-up message appears that lets you switch easily to the media content listed on the device. 
This function may not be supported depending on the device and geographical area. Connection Guide. You can view detailed information about external devices that can be connected to the TV. Source Connection Guide It shows you how to connect various external devices such as set-top boxes, game consoles, and PCs using pictures. If you select the connection method and an external device, the connection details appear. The image on your TV may differ from the image above depending on the model and geographical area. Using the Channel List. Change the channel or check programs on other digital channels. Live TV Channel List Using the Channel List , you can change the channel or check programs on other digital channels while watching TV. The Channel List screen contains the following icons: : A favorite channel : A locked channel. If you press the right directional button when viewing the Channel List screen, you can use the following functions: All Channels Displays the channels that the TV found during the Auto Program channel search. Favorites Displays Favorites 1 to Favorites 5. Air Cable To allow you to select Air or Cable as the DTV mode, move the focus to Air Cable , and then press the Select button. Air is suitable for homes equipped with separate, over-the-air antennas. Cable is suitable for homes that subscribe to a local cable service. You do not need to use this function if your TV is connected to a cable box or satellite box. TV PLUS While the TV is connected to the Internet, you can watch popular programs or highlights for each theme through a virtual channel at any time. As with regular channels, the TV provides the daily program schedules for TV PLUS. This function may not be supported depending on the model or geographical area. " +'How can I change the antenna type?',"'To change the antenna type on your TV, you can use the remote control. Press the right directional button to move the focus to Air Cable and then press the Select button. 
This function allows you to switch between different types of antenna signals. However, this function is not necessary if your TV is already connected to a cable box or satellite box.'","Connecting an Antenna. You can connect an antenna cable to your TV. An antenna connection is not necessary if you connect a cable box or satellite box. Connection diagram labels: Cable Box/Disc Player, HDMI, Game Console, Audio Device, Mobile Device, External Storage, Accessory, ANT IN. Connection Guide Changing the broadcast signal. Live TV Channel List Press the right directional button on your remote control, move the focus to Air Cable , and then press the Select button. You do not need to use this function if your TV is connected to a cable box or satellite box. Connections. You can watch live broadcasts by connecting an antenna and antenna cable to your TV and can get access to the Internet by using a LAN cable or wireless router. Using various connectors, you can also connect external devices." +'Can I turn on the TV with a mobile device?',"'Yes, you can turn on the TV with a mobile device when both the TV and the mobile device are connected to the same network. This can be done using the SmartThings app on your mobile device.'","Turning on the TV with a mobile device. Settings General Network Expert Settings Power On with Mobile You can turn on the TV with your mobile device when both of them are connected to the same network. Connecting to the TV via the SmartThings app. Turn on the TV. Start the SmartThings app on your mobile device. Tap Add Device on the dashboard of the SmartThings app. Connectable TVs are searched for. From the search results, select the model of your TV. Enter the PIN number displayed on the TV screen. Once the TV is registered with your Samsung account, you can use the SmartThings app to control the TV. If your TV is not found on your mobile device, turn both of them off and on, and then try again. 
If you have more than one TV, you can easily select the TV you want by setting different TV names at Settings General System Manager Device Name. Connecting Your Mobile Device. Connect your mobile device to the TV on the same network to control the TV with the mobile device or enjoy the content of the mobile device on the TV. Use the SmartThings app on your mobile device to quickly and easily connect to the TV and control its main settings. In addition, you can remotely check and control the statuses of various devices such as Samsung smart devices, home appliances, and speakers registered to the server. Make sure that your TV supports SmartThings. You can check this with the Supported Device menu in the SmartThings app. To use the SmartThings app, you must be signed in to your Samsung account on the TV. This function may not be supported depending on the TV model or mobile device. The supported functions may differ depending on the version of the SmartThings app. You can install the SmartThings app from the App Store or Google Play Store." +"""What is the function of 'Learn TV Remote'?""","""The function of 'Learn TV Remote' is to help individuals with visual impairments learn the positions of the buttons on the remote control. When activated, pressing a button on the remote will prompt the TV to tell the user the name of that button.""","Learning about the remote control (for the visually impaired). Settings General Accessibility Learn TV Remote Try Now This function helps individuals with a visual impairment to learn the positions of the buttons on the remote control. When this function is activated, you can press a button on the remote control and the TV will tell you its name. To return to the previous screen, press the button twice. To exit, press and hold the button. This function is only available in the U.S.A. and Canada. This function is only available when Voice Guide is enabled. About the Samsung Smart Remote (UHD TV). 
Learn about the buttons on the Samsung Smart Remote that comes with the UHD TV. Button Description (Bixby) (Number button) (Color button) Directional pad (up/down/left/right) Select (Return) Press to return to the Home Screen. (Smart Hub) (Play/pause) Move the button up or down to adjust the volume. To mute the sound, press the button. When VOL (Volume) is pressed for 1 second or more, the Accessibility Shortcuts appear. (Channel) Use the Samsung Smart Remote less than 20 feet from the TV. The usable distance may vary with the wireless environmental conditions. The images, buttons, and functions of the Samsung Smart Remote may differ with the model. The Samsung Smart Remote may not be supported depending on the model or geographical area. To use the Samsung Smart Remote to control a compatible external device that does not support HDMI-CEC (Anynet+), you must configure universal remote control for the device. For more information, refer to External Devices with the Samsung Smart Remote - Using the Universal To use the Samsung Smart Remote to control a compatible external device that does support HDMI-CEC (Anynet+), connect the device to an HDMI port on the TV using an HDMI cable. For more information, refer to Anynet+ (HDMI- Runs Bixby . Press and hold the button, say a command, and then release the button to run Bixby. When you press the button, the Enter My Bixby menu appears at the bottom of the screen. Press to bring up the virtual numeric pad on the screen. Use the numeric keypad to enter numeric values. Select numbers and then select Done to enter a numeric value. Use to change the channel, enter a PIN, enter a ZIP code, etc. If the Color Button appears with a number strip on the screen, select the Color Button and then select a specific color by using the directional pad (up/down/left/right). Use this to access additional options that are specific to the feature in use. 
Use these colored buttons to access additional options that are specific to the feature in use. These buttons are not available in the U.S.A. and Canada. Moves the focus. Selects or runs a focused item. When pressed while you are watching content, detailed program information appears. Press to return to the previous menu. When pressed for 1 second or more, the running function is terminated. When pressed while you are watching a program, the previous channel appears. When pressed, the playback controls appear. Using these controls, you can control the media content that is playing. Move the button up or down to change the channel. To see the Guide screen, press the button. Press and hold the button up or down to quickly change the channel. Using the e-Manual. Control and read the manual embedded in your TV." +'Can I select Ambient Light Detection?',"'Yes, you can select Ambient Light Detection, which automatically adjusts the brightness level of the TV according to the ambient light level to reduce power consumption.'","Reducing the energy consumption of the TV. Settings General Eco Solution Try Now You can adjust the brightness level of the TV, reduce overall power consumption, and prevent overheating. Ambient Light Detection Try Now Automatically adjusts the brightness level of the TV, according to the ambient light level, to reduce power consumption. If Ambient Light Detection has adjusted the screen brightness to a level that is too bright or too dark, you can select Minimum Backlight to manually adjust the minimum screen brightness. Minimum Backlight Try Now When Ambient Light Detection is turned on, you can manually adjust the minimum brightness of the TV screen. This function acts only when the value is less than the setting in Settings Picture Expert Settings Backlight. Power Saving Mode Try Now Allows you to select a brightness setting from the list to reduce the TV's power consumption. 
Motion Lighting Try Now Adjusts the brightness in response to on-screen movements to reduce power consumption. Auto Power Off Try Now Automatically turns off the TV to reduce unnecessary power consumption if there is no operation for 4 hours. Changing the content and settings for Ambient Mode. When you press the button in Ambient Mode , the Ambient Mode browser screen appears. In the Ambient Mode browser screen, you can select content and change the Ambient Mode settings. Setting up the content for Ambient Mode The Ambient Mode browser screen displays content at the top and categories at the bottom. Use the left or right directional buttons in the content list at the top to move the focus to content you want, and then press the Select button. The selected content is played in Ambient Mode. In the future, more content that you can set up in the Ambient Mode browser will be provided. You can select the following categories and content: Decor : Allows you to select beautiful screens. Info : Provides information such as weather, news headlines, and more. This function may not be supported depending on the geographical area. Photo : Allows you to set a picture stored in your mobile device as the wallpaper of the Ambient Mode screen. You can configure special layouts using your photos. To save photos from your mobile device to the TV and import them in Ambient Mode , use the SmartThings app on your mobile device. Setting up the Ambient Mode details In the Ambient Mode browser screen, move the focus to , and then press the Select button. You can change the following settings: Brightness : Adjusts the screen brightness for Ambient Mode. Color Tone : Adjusts the colors of the screen for Ambient Mode. Auto Brightness : Changes the auto brightness setting for Ambient Mode. When this function is set to Off , the brightness level of the TV screen is not automatically adjusted according to the ambient light level. 
Ambient Off Timer : Sets the time that the Ambient Mode screen turns off automatically. If there is no remote control input for the set time, the TV switches to the black screen state. Changing the background color of Ambient Mode To change the background color of Ambient Mode , move the focus to in the Ambient Mode browser screen, and then press the Select button. You can change the background color or pattern. Move the focus to a color or pattern you want, and then press the Select button. Take a picture of a wall using the SmartThings app on your mobile device to set it as the background of Ambient Mode. This function may have a delay in image transmission and optimization depending on the network conditions. " +'How do I fix an odd sound from the speakers?',"'To fix an odd sound coming from the speakers of the TV, you should first check the volume control of your TV and the external device (such as a cable box or DVD player) connected to your TV. If the issue persists, you can reboot the connected device by disconnecting and then reconnecting its power cable. Additionally, you should ensure that the audio cable is connected to the correct audio output connector on the external device. If there is still an issue, check the signal information for antenna or cable connections as a low signal level may cause sound distortions. If the problem continues, seek assistance from the manufacturer of the audio device connected to your TV.'","Testing the sound. 
Settings Support Self Diagnosis Start Sound Test Try Now If the TV plays the Start Sound Test melody without distortion, there may be a problem with an external device or the broadcast signal's strength. The problem: There is no sound or the sound is too low at maximum volume. The picture is good but there is no sound. No sound is heard. The speakers are making an odd sound. Try this: Check the volume control of your TV, and then check the volume control of the external device (cable box or satellite box, DVD, Blu-ray, etc.) connected to your TV. Set Sound Output to TV Speaker ( Settings Sound Sound Output ). If you are using an external device, check the device's audio output option. For example, you may need to change your cable box's audio option to HDMI if the box connected to your TV is using an HDMI cable. To listen to computer sound, connect an external speaker to the computer's audio output connector. If your TV has a headphone jack, make sure there is nothing plugged into it. Reboot the connected device by disconnecting and then reconnecting the device's power cable. Check whether the Digital Output Audio Format is set to Dolby Digital+. If you are using a receiver that does not support Dolby Digital Plus, you will hear no sound when you select Dolby Digital+ ( Settings Sound Expert Settings Digital Output Audio Format Dolby Digital+ ). Run Start Sound Test ( Settings Support Self Diagnosis Start Sound Test ). Make sure that the audio cable is connected to the correct audio output connector on the external device. For antenna or cable connections, check the signal information. A low signal level may cause sound distortions. I Can't Hear the Sound Clearly. When the TV has difficulties with sound, these steps may help resolve the problem. Connection notes for audio devices. For better audio quality, it is a good idea to use an AV receiver. If you connect an external audio device using an optical cable, the Sound Output setting is automatically changed to the connected device. 
However, to make this happen, you must turn on the external audio device before connecting the optical cable. To manually change the Sound Output setting, do one of the following: Use the Quick Settings screen to change to the connected device: Use the Select button to select Audio Out/Optical on the Sound Output menu. ( Settings directional button Sound Output ). Use the Settings screen to change to the connected device: Select Audio Out/Optical on the Sound Output menu. ( Settings Sound Sound Output ). An unusual noise coming from a connected audio device while you are using it may indicate a problem with the audio device itself. If this occurs, ask for assistance from the audio device's manufacturer. Digital audio is only available with 5.1 channel broadcasts. " +'How can I search for channels?',"'To search for channels, you can use the Channel List feature on your TV. Press the button on the remote to open Smart Hub, then use the directional buttons to move to Live TV and then to Channel List. This will display a list of channels with the program currently on. You can navigate through the list using the directional buttons and enter a channel number to go to a specific channel directly. Additionally, you can add channels to a favorites list for easier access.'","Using the Channel List. See how to list the channels available on your TV. Live TV Channel List Using Channel List , you can change the channel or check programs on other digital channels while watching TV. Press the button on the remote to open Smart Hub , press the left or right directional buttons to move to Live TV , press the up directional button to move to the top row, press the right directional button to move to Channel List , and then press the Select button. This shows a list of the channels and the program currently on. The focus will be on the channel you are currently watching, showing channel name, channel number, and program title. 
Use the up and down directional buttons to move within the channel list (or use the button to go up and down a page at a time). You can also use the button to enter a channel number and go to that channel directly. Press the right directional button in the Channel List to move to the Category List . Use the up and down directional buttons to move within this list. Press the Select button to choose a category you want from the Category List . The Category List contains All Channels and Favorites . Use the up and down directional buttons to move to a channel you want to view, and then press the Select button. All Channels Displays the channels that have been automatically searched. Favorites Displays Favorites 1 to Favorites 5 . Use the up and down directional buttons to move between the lists of favorites. Highlight the one you want, and then press Select. The channel list will now show only the channels in this list. This option is only available on specific models in specific geographical areas. Adding channels to a favorites list. Select channels to add, and then select the icon. The selected channels are added to the favorites list. Using the Channel List. Change the channel or check programs on other digital channels. Live TV Channel List Using the Channel List , you can change the channel or check programs on other digital channels while watching TV. The Channel List screen contains the following icons: : A favorite channel : A locked channel. If you press the right directional button when viewing the Channel List screen, you can use the following functions: All Channels Displays the channels that the TV found during the Auto Program channel search. Favorites Displays Favorites 1 to Favorites 5. Air Cable To allow you to select Air or Cable as the DTV mode, move the focus to Air Cable , and then press the Select button. Air is suitable for homes equipped with separate, over-the-air antennas. Cable is suitable for homes that subscribe to a local cable service. 
You do not need to use this function if your TV is connected to a cable box or satellite box. TV PLUS While the TV is connected to the Internet, you can watch popular programs or highlights for each theme through a virtual channel at any time. As with regular channels, the TV provides the daily program schedules for TV PLUS. This function may not be supported depending on the model or geographical area. " +'Explain the steps to do a Schedule Recording while watching a program.',"'To do a Schedule Recording while watching a program, follow these steps:\n\n1. Press the Select button while watching the program to display the program info window.\n2. Use the left or right directional button to select a scheduled program that you want to record.\n3. Press the Select button.\n4. Select ""Schedule Recording"" from the options presented on the screen.\n5. This will set up a scheduled recording of the selected program.\n6. The program will be scheduled to record at the specified time, and you will be returned to watching the current program.\n\nThese steps allow you to easily schedule a recording of a program while already watching another program.'","Recording programs. Using the instant and schedule recording options from the guide screen Instant Recording Select a live program from the Guide , press and hold the Select button, and then select Record from the pop-up menu to start recording immediately. Schedule Recording Select a program scheduled to be broadcast, press the Select button, and then select Schedule Recording to set up a scheduled recording of the scheduled program. Using the instant and schedule recording options while watching a program Instant Recording If you select Record after pressing the button while watching a broadcast, recording starts. Schedule Recording Press the Select button while watching a program to display the program info window. Use the left or right directional button to select a scheduled program. 
Press the Select button, and then select Schedule Recording to set up a scheduled recording of the scheduled program. Scheduling a video recording after entering the date and time Live TV Recordings Schedules Add Recording To schedule a recording, set the values for Antenna, Channel, Repeat, Date, Start Time, and End Time of the broadcast program to record, and then select Save. Check the current time in Settings General System Manager Time Clock. Using the instant and schedule recording options from the guide screen. To record a program that is on now, open the program guide, move to the program, and then press and hold the Select button. You will get a pop-up menu. Use the up and down directional buttons to move to Record . Press Select to record the program. You will be returned to the program guide. Recording automatically ends at the time when the program ends. To open the Guide and stop recording the program, press and hold the Select button. The menu is displayed and there is now a Stop Recording option instead of a Record option. Move to the Stop Recording option, and then press the Select button. To record a program that is on later, move to the program in the Guide , and then press Select. You will get a pop-up menu. Use the up and down directional button to move within this menu. Move to the Schedule Recording , and then press Select. You will hear a message saying the program has been set to record and then you will be returned to the program guide. The icon is placed visually next to the program title to show it is set to record. To find out if a program is set to record or to cancel a scheduled recording, press Select on the program. If it is already set to record, then the option on the menu will be Cancel Scheduled Recording . You can press the Select button to cancel the scheduled recording or just press the button to go back to the program guide without cancelling the recording. Setting up a scheduled viewing for a channel. 
Using the schedule viewing options from the guide screen On the Guide screen, select a program you would like to view, press and hold the Select button, and then select Schedule Viewing in the pop-up menu that appears. Using the schedule viewing options in the program info window Press the Select button while watching TV. The Program Info window appears. Select a program that will be broadcast using the left or right directional buttons. Press the Select button, and then select Schedule Viewing to schedule viewing of that program. Using schedule viewing to view programs at a specified time on a specified date (U.S.A. and Canada) Live TV Schedule Manager Add Viewing (Other countries) Live TV Schedule Manager Recordings Schedules Add Viewing To schedule a viewing, set the values for Antenna, Channel, Repeat, Date, and Start Time of the broadcast program to view, and then select Save. Check the current time in Settings General System Manager Time Clock. This function may not be supported depending on the model or geographical area. " +'How can I use HDMI UHD Color?',"""To use HDMI UHD Color, you need to access the settings on your TV. Go to General > External Device Manager > HDMI UHD Color. If an external UHD player is connected to the TV, use the HDMI UHD Color function to convert the TV's HDMI connector to a mode optimized for the UHD 50P/60P signals. You can select the HDMI connector you want to use for HDMI UHD Color and then press the Select button to set the HDMI UHD Color function to On. This will enable support for up to UHD 50P/60P 4:4:4 and 4:2:2 input signals. If your external device only supports UHD 24 or 30 Hz frequency or any FHD frequency, the HDMI UHD Color function may not be available, and you should set HDMI UHD Color to Off in that case.""","Viewing UHD videos. 
Settings General External Device Manager HDMI UHD Color Try Now If an external UHD player is connected to the TV, use the HDMI UHD Color function to convert the TV's HDMI connector to a mode optimized for the UHD 50P/60P signals. When you select the HDMI connector you want to use for HDMI UHD Color , and then press the Select button to set the HDMI UHD Color function to On , the TV screen flickers. The HDMI connection with HDMI UHD Color set to Off supports up to UHD 50P/60P 4:2:0 input signals, while the HDMI connection with HDMI UHD Color set to On supports up to UHD 50P/60P 4:4:4 and 4:2:2 input signals. For more information, refer to Supported Resolutions for UHD Input Signals. When you connect the TV to an external device that supports only the UHD 24 or 30 Hz frequency or any FHD frequency, the HDMI UHD Color function may not be available. In this case, set HDMI UHD Color to Off. If HDMI UHD Color is set to On. Color Depth / Frame rate (fps) 50 / 60 10 bit RGB 4:4:4 YCbCr 4:4:4 YCbCr 4:2:2 YCbCr 4:2:0 Chroma Sampling RGB 4:4:4 YCbCr 4:4:4 YCbCr 4:2:2 YCbCr 4:2:0 Chroma Sampling 8 bit 12 bit Supported Resolutions for UHD Input Signals. Check the supported resolution for UHD input signals. Resolution: 3840 x 2160p, 4096 x 2160p An HDMI connection with HDMI UHD Color set to Off supports up to UHD 50P/60P 4:2:0 input signals, while an HDMI connection with HDMI UHD Color set to On supports up to UHD 50P/60P 4:4:4 and 4:2:2 input signals." +'Can I turn on the TV in Ambient Mode?',"'Yes, you can turn the TV into Ambient Mode by pressing the button on the remote control when the TV is turned off.'","Using the Ambient Mode. Learn about the functions available in Ambient Mode, which is a QLED TV-specific function. Ambient Mode The image on your TV may differ from the image above depending on the model and geographical area. In Ambient Mode , you can view beautiful screens, various visual information, and notifications. To enter Ambient Mode , press the button. To return to the TV mode, press the button. 
To shut off the TV, press the button. If you press the button when the TV is turned off, the TV turns on in Ambient Mode. If you use a remote control other than the Samsung TV remote control, there may be restrictions to entering Ambient Mode. Because this function is a QLED TV-specific function, it may not be supported depending on the model. Decor Info Photo Screen states related to Ambient Mode. Ambient Mode In Ambient Mode , you can set up and enjoy visual information and design. In this state, less power is consumed and brightness is automatically adjusted by the illuminance sensor. Getting dark state In Ambient Mode , the TV screen changes automatically according to the surroundings. When the surroundings get dark, the TV screen becomes darker. When the surroundings get bright before the TV screen turns completely black or when the remote control or the Bixby function is operated, the TV screen returns to the Ambient Mode. You can change the auto brightness setting for Ambient Mode . For more information, refer to Setting up the Ambient Mode details . In this state, the TV can activate voice guides and process voice commands. 
Black screen state: In Ambient Mode, if the surroundings get to a specific level of darkness or there is no external operation, the TV turns soft-off. Then, when the surroundings get bright, the TV screen returns to the Ambient Mode screen. You can change the auto brightness setting for Ambient Mode. For more information, refer to Setting up the Ambient Mode details. In this state, the TV cannot activate voice guides and process voice commands. TV off state: To enter Ambient Mode when the TV is turned off, press the button on the remote control. To enter TV mode when the TV is turned off, press the button. In TV mode, you can watch broadcasts or enjoy the content on the connected external devices. " +'How can I jump forward / jump backward?',"'To jump forward or jump backward while playing multimedia content, you can press the up directional button to move the focus to the playback control bar. Then, select the left or right directional button in the playback control bar to move backward or forward by 10 seconds in the video. This allows you to quickly navigate through the video content.'","Buttons and functions available while recording a program or Timeshift. Review the available buttons and functions and how they are used while recording a program or using Timeshift. This function is not available in the U.S.A. and Canada. This function may not be supported depending on the model or geographical area. The available buttons and functions may differ with the function in use. When you press the down directional button, the control bar appears and the following options are available: Edit Recording Time: You can set for what period of time the program will be recorded. This function is only available when a program is being recorded. Record: You can switch from the Timeshift function to the recording function. When switching to the recording function, you can include the content recorded by the Timeshift function in the new recording. This function is only available when the Timeshift function is running.
Go to Live TV: Select to return to the current scenes of the program when the recording or Timeshift function is playing its past scenes. Stop Recording / Stop Timeshift: Ends the recording or Timeshift function. Info: Displays the program info window of the program you are recording or time-shifting. Pause / Play: You can use the following functions when the video is paused. (Note that with the video paused, the TV does not play audio.) Slow Rewind / Slow Forward: Allows you to play the video slowly (1/8, 1/4, 1/2) backward or forward by selecting the option. To increase the rewind or forward speed in slow mode up to 3 times, select the option repeatedly. To return to normal speed, select the option. When the Slow Rewind function is activated, you can view the difference between the current recording time and the current rewind time. Jump Backward / Jump Forward: Press the up directional button to move the focus to the playback control bar, and then select the left or right directional button in the playback control bar to move backward or forward by 10 seconds in the video. When the Jump Backward function is activated, you can view the difference between the current recording time and the current rewind time. Rewind / Fast Forward: This function is not available while you are watching a program that is currently being broadcast. Buttons and functions available while playing multimedia content. Review the available media playback, control, and record buttons and descriptions of how they are used. Press the Select button while playing any video, photo, or recorded content. The following buttons appear. The provided buttons or functions may differ with the media content type. The available buttons and functions may differ with the content you are viewing or playing. Pause / Play / Start: Pauses or plays the multimedia content. You can use the following functions when the video is paused.
Slow Rewind / Slow Forward: Allows you to play a video slowly backward or forward by selecting the option. There are 3 playback speeds. To change the playback speed, press the option repeatedly. To return to normal speed, select the option or press the button. Jump Backward / Jump Forward: Press the up directional button to move the focus to the playback control bar, and then select the left or right directional button in the playback control bar to move backward or forward by 10 seconds in the video. To move to a specific playback section, move the focus up to the playback bar, and then select one of the five thumbnails. This function may not be supported depending on the file format. Previous / Next: Displays the previous or the next multimedia content file. Rewind / Fast Forward: Rewinds or fast forwards the multimedia content. To increase the rewind or fast forward speed up to 3 times faster than normal, select the button repeatedly. To return to normal speed, select the option or press the button. 360 Mode: Provides a 360-degree view for videos and photos. This function may not be supported depending on the file format. Repeat: Plays the current multimedia content repeatedly or all multimedia content files in the same folder repeatedly. Shuffle: Plays music files in random order. Picture Off: Plays multimedia content with the screen off. Screen Fit: Fits a photo to the screen. Rotate left / Rotate right: Rotates a photo left or right. Zoom: Zooms in on a photo by up to a factor of 4. Background Music: During a slideshow, pauses or resumes the background music. Options: The available options may differ with the model and content. Slideshow Speed: Sets the slideshow speed. Slideshow Effect: Applies transition effects to the slideshow. Background Music: Plays background music while the TV displays photos. The music files must be saved on the same USB device as the photo files. To pause the background music, select Background Music at the bottom of the playback screen. Subtitles: Controls video subtitles. Subtitles: Turns the language of the subtitles on or off. Sync: Adjusts the synchronization when the video and the subtitles are not synchronized. Reset Sync: Resets the subtitle synchronization adjustment to 0. Size: Changes the font size of the subtitles. Encoding: Displays a list of encoding languages. Allows you to change the encoding language when subtitles are displayed incorrectly. Edge Type: Adjusts the position of text and subtitle edges. Edge Color: Changes the color of subtitle edges. Rotate: Rotates the video. Audio Language: Changes the audio language. This function is only available if the video supports multi-track sound. Information: Displays detailed information about the current multimedia content. Changing the recording time. When recording a live program, you can specify how long you want to record for. Press the down directional button to display the control bar at the bottom. Press the left or right directional buttons to move to Edit Recording Time, and then press the Select button. A slider scale appears and you will hear the recording time currently set. Use the left and right directional buttons to change the time, and then press the Select button. You will hear a menu that gives information about the recording time. To return to the program you are watching, press the button on the remote. The maximum recording time differs depending on the capacity of the USB device. " +'How can I turn on ambient mode on TV screen?',"'To turn on Ambient Mode on the TV screen, you can press the button on the remote control. If the TV is turned off, pressing the button will turn the TV on in Ambient Mode.'","Screen states related to Ambient Mode. In Ambient Mode, you can set up and enjoy visual information and design. In this state, less power is consumed and brightness is automatically adjusted by the illuminance sensor.
Getting dark state: In Ambient Mode, the TV screen changes automatically according to the surroundings. When the surroundings get dark, the TV screen becomes darker. When the surroundings get bright before the TV screen turns completely black, or when the remote control or the Bixby function is operated, the TV screen returns to the Ambient Mode screen. You can change the auto brightness setting for Ambient Mode. For more information, refer to Setting up the Ambient Mode details. In this state, the TV can activate voice guides and process voice commands. Black screen state: In Ambient Mode, if the surroundings get to a specific level of darkness or there is no external operation, the TV turns soft-off. Then, when the surroundings get bright, the TV screen returns to the Ambient Mode screen. You can change the auto brightness setting for Ambient Mode. For more information, refer to Setting up the Ambient Mode details. In this state, the TV cannot activate voice guides and process voice commands. TV off state: To enter Ambient Mode when the TV is turned off, press the button on the remote control. To enter TV mode when the TV is turned off, press the button. In TV mode, you can watch broadcasts or enjoy the content on the connected external devices. Using the Ambient Mode. Learn about the functions available in Ambient Mode, which is a QLED TV-specific function. Ambient Mode The image on your TV may differ from the image above depending on the model and geographical area. In Ambient Mode, you can view beautiful screens, various visual information, and notifications. To enter Ambient Mode, press the button. To return to the TV mode, press the button. To shut off the TV, press the button. If you press the button when the TV is turned off, the TV turns on in Ambient Mode. If you use a remote control other than the Samsung TV remote control, there may be restrictions to entering Ambient Mode. Because this function is a QLED TV-specific function, it may not be supported depending on the model. Decor Info Photo " +'What are natural and movie mode?',"'Natural mode reduces eye strain for a comfortable viewing experience, while Movie mode is suitable for watching TV or movies in a dark room.'","Choosing a picture mode. Settings Picture Picture Mode Try Now You can select the Picture Mode that provides the best viewing experience. Dynamic: Makes the picture brighter and clearer in bright viewing environments. Standard: The default mode, suitable for general viewing environments. Natural: Reduces eye strain for a comfortable viewing experience.
Movie: Suitable for watching TV or movies in a dark room. This function may not be supported depending on the model. Playing games on an optimized screen. Settings General External Device Manager Game Mode Settings Game Mode You can enable game mode to optimize the TV's settings for playing video games with a gaming console, such as a PlayStation or Xbox. The Game Mode Settings may not appear depending on the model. Game mode is not available for normal TV viewing. The screen may shake somewhat. When Game Mode is enabled, Picture Mode and Sound Mode are switched to Game automatically. To use a different external device on the same port, remove the game console connection, turn off Game Mode, and then connect the external device to the port. Screen states related to Ambient Mode. In Ambient Mode, you can set up and enjoy visual information and design. In this state, less power is consumed and brightness is automatically adjusted by the illuminance sensor. Getting dark state: In Ambient Mode, the TV screen changes automatically according to the surroundings. When the surroundings get dark, the TV screen becomes darker. When the surroundings get bright before the TV screen turns completely black, or when the remote control or the Bixby function is operated, the TV screen returns to the Ambient Mode screen. You can change the auto brightness setting for Ambient Mode. For more information, refer to Setting up the Ambient Mode details. In this state, the TV can activate voice guides and process voice commands. Black screen state: In Ambient Mode, if the surroundings get to a specific level of darkness or there is no external operation, the TV turns soft-off. Then, when the surroundings get bright, the TV screen returns to the Ambient Mode screen. You can change the auto brightness setting for Ambient Mode. For more information, refer to Setting up the Ambient Mode details. In this state, the TV cannot activate voice guides and process voice commands.
TV off state: To enter Ambient Mode when the TV is turned off, press the button on the remote control. To enter TV mode when the TV is turned off, press the button. In TV mode, you can watch broadcasts or enjoy the content on the connected external devices. " +'Can you explain Ambient Mode?',"'Ambient Mode is a feature available on some TVs that allows users to set up and enjoy visual information and design when the TV is not in use. In Ambient Mode, the TV consumes less power and automatically adjusts brightness based on the surrounding light conditions using an illuminance sensor. There are different states of Ambient Mode, such as the getting dark state and the black screen state, where the TV screen changes automatically according to the surroundings, becoming darker when it gets dark and turning off when it reaches a specific level of darkness. Users can also change settings like screen brightness, color tone, and auto brightness for Ambient Mode. Additionally, users can select different categories of content, such as Decor, Info, and Photo, to display on the Ambient Mode screen. The TV can be switched between Ambient Mode and regular TV mode by pressing buttons on the remote control.'","Screen states related to Ambient Mode. In Ambient Mode, you can set up and enjoy visual information and design. In this state, less power is consumed and brightness is automatically adjusted by the illuminance sensor. Getting dark state: In Ambient Mode, the TV screen changes automatically according to the surroundings. When the surroundings get dark, the TV screen becomes darker. When the surroundings get bright before the TV screen turns completely black, or when the remote control or the Bixby function is operated, the TV screen returns to the Ambient Mode screen. You can change the auto brightness setting for Ambient Mode. For more information, refer to Setting up the Ambient Mode details. In this state, the TV can activate voice guides and process voice commands.
Black screen state: In Ambient Mode, if the surroundings get to a specific level of darkness or there is no external operation, the TV turns soft-off. Then, when the surroundings get bright, the TV screen returns to the Ambient Mode screen. You can change the auto brightness setting for Ambient Mode. For more information, refer to Setting up the Ambient Mode details. In this state, the TV cannot activate voice guides and process voice commands. TV off state: To enter Ambient Mode when the TV is turned off, press the button on the remote control. To enter TV mode when the TV is turned off, press the button. In TV mode, you can watch broadcasts or enjoy the content on the connected external devices. Changing the content and settings for Ambient Mode. When you press the button in Ambient Mode, the Ambient Mode browser screen appears. In the Ambient Mode browser screen, you can select content and change the Ambient Mode settings. Setting up the content for Ambient Mode: The Ambient Mode browser screen displays content at the top and categories at the bottom. Use the left or right directional buttons in the content list at the top to move the focus to the content you want, and then press the Select button. The selected content is played in Ambient Mode. In the future, more content that you can set up in the Ambient Mode browser will be provided. You can select the following categories and content: Decor: Allows you to select beautiful screens. Info: Provides information such as weather, news headlines, and more. This function may not be supported depending on the geographical area. Photo: Allows you to set a picture stored in your mobile device as the wallpaper of the Ambient Mode screen. You can configure special layouts using your photos. To save photos from your mobile device to the TV and import them in Ambient Mode, use the SmartThings app on your mobile device. Setting up the Ambient Mode details: In the Ambient Mode browser screen, move the focus to , and then press the Select button. You can change the following settings: Brightness: Adjusts the screen brightness for Ambient Mode. Color Tone: Adjusts the colors of the screen for Ambient Mode. Auto Brightness: Changes the auto brightness setting for Ambient Mode. When this function is set to Off, the brightness level of the TV screen is not automatically adjusted according to the ambient light level. Ambient Off Timer: Sets the time that the Ambient Mode screen turns off automatically.
If there is no remote control input for the set time, the TV switches to the black screen state. Changing the background color of Ambient Mode: To change the background color of Ambient Mode, move the focus to in the Ambient Mode browser screen, and then press the Select button. You can change the background color or pattern. Move the focus to a color or pattern you want, and then press the Select button. Take a picture of a wall using the SmartThings app on your mobile device to set it as the background of Ambient Mode. This function may have a delay in image transmission and optimization depending on the network conditions. " +'Can I configure Brightness?',"'Yes, you can configure the brightness level of the TV using the advanced picture settings. You can adjust the Brightness setting in Settings Picture Expert Settings to your taste.'","Reducing the energy consumption of the TV. Settings General Eco Solution Try Now You can adjust the brightness level of the TV, reduce overall power consumption, and prevent overheating. Ambient Light Detection Try Now Automatically adjusts the brightness level of the TV, according to the ambient light level, to reduce power consumption. If Ambient Light Detection has adjusted the screen brightness to a too bright or dark level, you can select Minimum Backlight to manually adjust the minimum screen brightness. Minimum Backlight Try Now When Ambient Light Detection is turned on, you can manually adjust the minimum brightness of the TV screen. This function acts only when the value is less than the setting in Settings Picture Expert Settings Backlight. Power Saving Mode Try Now Allows you to select a brightness setting from the list to reduce the TV's power consumption. Motion Lighting Try Now Adjusts the brightness in response to on-screen movements to reduce power consumption. Auto Power Off Try Now Automatically turns off the TV to reduce unnecessary power consumption if there is no operation for 4 hours.
Configuring advanced picture settings. Settings Picture Expert Settings Try Now Configure the screen settings to your taste by using the following functions: Backlight Try Now Brightness Try Now Contrast Try Now Sharpness Try Now Color Try Now Tint (G/R) Try Now Apply Picture Settings Try Now Digital Clean View Try Now Auto Motion Plus Settings Try Now When LED Clear Motion is set to On, the screen appears darker than when it is Off. Local Dimming This function may not be supported depending on the model or geographical area. Contrast Enhancer Try Now HDR+ Mode Try Now Automatically provides an optimal HDR effect based on the video source. The HDR (High Dynamic Range) technology implements video images that are very similar to the images seen through human eyes by finely adjusting the contrast of the source. This function may not be supported depending on the model or geographical area. Film Mode Try Now This function is only available when the input signal is TV, AV, Component (480i, 1080i), or HDMI (1080i). Color Tone Try Now White Balance Try Now Gamma Try Now RGB Only Mode Try Now Color Space Settings Try Now Reset Picture Try Now Screen states related to Ambient Mode. In Ambient Mode, you can set up and enjoy visual information and design. In this state, less power is consumed and brightness is automatically adjusted by the illuminance sensor. Getting dark state: In Ambient Mode, the TV screen changes automatically according to the surroundings. When the surroundings get dark, the TV screen becomes darker. When the surroundings get bright before the TV screen turns completely black, or when the remote control or the Bixby function is operated, the TV screen returns to the Ambient Mode screen. You can change the auto brightness setting for Ambient Mode. For more information, refer to Setting up the Ambient Mode details. In this state, the TV can activate voice guides and process voice commands.
Black screen state: In Ambient Mode, if the surroundings get to a specific level of darkness or there is no external operation, the TV turns soft-off. Then, when the surroundings get bright, the TV screen returns to the Ambient Mode screen. You can change the auto brightness setting for Ambient Mode. For more information, refer to Setting up the Ambient Mode details. In this state, the TV cannot activate voice guides and process voice commands. TV off state: To enter Ambient Mode when the TV is turned off, press the button on the remote control. To enter TV mode when the TV is turned off, press the button. In TV mode, you can watch broadcasts or enjoy the content on the connected external devices. " +'What are the uses of buttons in the e-manual?','The uses of buttons in the e-manual are:\n1. Search: Select an item from the search results to load the corresponding page.\n2. Index: Select a keyword to navigate to the relevant page.\n3. Recently Viewed Topics: Select a topic from the list of recently viewed topics.\n4. Try Now: Allows you to access the corresponding menu item and try out the feature right away.\n5. Link: Access an underlined topic referred to on an e-Manual page immediately.',"Using the buttons in the e-Manual. Search: Select an item from the search results to load the corresponding page. Index: Select a keyword to navigate to the relevant page. Recently Viewed Topics: Select a topic from the list of recently viewed topics. Try Now: Allows you to access the corresponding menu item and try out the feature right away. Link: Access an underlined topic referred to on an e-Manual page immediately. Some menu screens cannot be accessed from the e-Manual. Launching the e-Manual. Open the user manual embedded in your TV. Press the E-MANUAL button on the standard remote control to open the menu. Alternatively, press the button to open Smart Hub, press the left directional button to move to Settings, and then press the Select button.
Use the up and down directional buttons to move to Support, and then select Open e-Manual to open the e-Manual. Settings Support Open e-Manual You can view the embedded e-Manual containing information about your TV's key features. Alternatively, you can download a copy of the e-Manual from Samsung's website. Words in blue (e.g., Picture Mode) indicate a menu item. The e-Manual contains 2 rows of menu icons. Use the up and down directional buttons to move between rows and the left and right directional buttons to move within a row. Press Select to open the section you want to read. The e-Manual contains a section called Learn TV Remote in Accessibility Guidance (at the right corner of the screen). This is particularly useful for people who cannot easily see the buttons on the control or who are not clear what each button does. When you are viewing this section of the e-Manual, pressing the buttons on the remote control will not affect the TV. Note that the Accessibility Guidance item is only available when Voice Guide (Settings General Accessibility Voice Guide Settings Voice Guide) is enabled. Launching the e-Manual. Settings Support Open e-Manual You can view the embedded e-Manual that contains information about your TV's key features. Alternatively, you can download a copy of the e-Manual from Samsung's website. Words in blue (e.g., Picture Mode) indicate a menu item. " +'Can I fix powering on issue?',"""Yes, you can fix the powering on issue by following the steps provided in the context. Some solutions include confirming that the TV's power cord is connected correctly, ensuring the remote control is operating normally, checking the antenna or cable TV connection, confirming the cable box or satellite box is plugged in and turned on, and running the Auto Program function.""","Turning on the TV with a mobile device.
Settings General Network Expert Settings Power On with Mobile You can turn on the TV with your mobile device when both of them are connected to the same network. Restoring the TV to the factory settings. Settings Support Self Diagnosis Reset Try Now You can restore all TV settings (excluding the Internet and network settings) to the factory defaults. Select Reset. The security PIN entry window appears. Enter the security PIN, and then select Yes. All settings are then reset. The TV turns off and on again automatically and displays the Start Setup screen. For more information on Start Setup, refer to the user manual that came with the TV. Testing the picture. Settings Support Self Diagnosis Start Picture Test Try Now Before you review the list of problems and solutions below, use Start Picture Test to determine if the problem is caused by the TV. Start Picture Test displays a high definition picture you can examine for flaws or faults. The problem / Try this! Flickering and Dimming; Component Connections/Screen Color; Screen Brightness; Ghosting, Blurring, or Juddering. If your TV is flickering or dimming sporadically, you may need to disable some of the energy efficiency features. Disable Ambient Light Detection, Power Saving Mode, or Motion Lighting (Settings General Eco Solution Ambient Light Detection; Settings General Eco Solution Power Saving Mode; Settings General Eco Solution Motion Lighting). If the color on your TV screen is not correct or the black and white colors are off, run Start Picture Test (Settings Support Self Diagnosis Start Picture Test). If the test results indicate that the problem is not caused by the TV, do the following: Confirm that the video input connectors are connected to the correct external device video output connectors. Check the other connections as well. If the TV is connected to an external device via a component cable, confirm that the Pb, Pr, and Y jacks are plugged into their proper connectors.
If the colors on your TV are correct but just a little too dark or bright, try adjusting the following settings first: Settings Picture Expert Settings Backlight; Settings Picture Expert Settings Contrast; Settings Picture Expert Settings Brightness; Settings Picture Expert Settings Sharpness; Settings Picture Expert Settings Color; Settings Picture Expert Settings Tint (G/R). If you notice ghosting or blurring on the screen, use the Auto Motion Plus Settings function to resolve the issue (Settings Picture Expert Settings Auto Motion Plus Settings). The problem / Try this! Unwanted Powering Off; Problems Powering On; Unable to find a Channel; The TV image does not look as good as it did in the store; The picture is distorted; The color is wrong or missing. If your TV appears to turn off by itself, try disabling some of the TV's energy efficiency functions. See if Sleep Timer has been enabled. The Sleep Timer automatically turns the TV off after a specified period of time (Settings General System Manager Time Sleep Timer). If the Sleep Timer has not been enabled, see if Auto Power Off or Off Timer has been enabled and disable it (Settings General Eco Solution Auto Power Off; Settings General System Manager Time Off Timer). If you are having problems powering on your TV, there are a number of things to check before calling the service department. Confirm that the TV's power cord is connected correctly at both ends and that the remote control is operating normally. Make sure that the antenna cable or cable TV cable is firmly connected. If you have a cable box or satellite box, confirm that it is plugged in and turned on. If your TV is not connected to a cable box or satellite box, run Auto Program (Settings Broadcasting Auto Program). Store displays are all tuned to digital, HD (high definition) channels. If you have an analog cable box or satellite box, upgrade to a digital cable box or satellite box. Use HDMI or Component cables to deliver HD (high definition) picture quality.
Many HD channels are upscaled from SD (Standard Definition) content. Look for a channel that is broadcasting HD content. Cable/Satellite Subscribers: Try HD channels from the channel lineup. Air/Cable Antenna Connection: Try HD channels after running the Auto Program function (Settings Broadcasting Auto Program). Adjust the cable box or satellite box's video output resolution to 1080i or 720p. The compression of video content may cause picture distortions, especially in fast-moving pictures from sports programs and action movies. If the signal reception is weak or poor, screen distortion may be visible, but it is not a malfunction. Mobile phones used close to the TV (within 3.2 ft) may cause noise on analog and digital channels. If you're using a Component connection, make sure that the Component cables are connected to the correct jacks. Incorrect or loose connections may cause color problems or a blank screen. The problem / Try this! The color is poor or the picture is not bright enough; There is a dotted line on the edge of the screen; The picture is black and white. If the test picture does not appear or there is noise or distortion, the TV may have a problem. Contact Samsung's Call Center for assistance. If the test picture is displayed properly, there may be a problem with an external device. Please check the connections. If the problem persists, check the signal strength or refer to the external device's user manual. Go to Picture and then adjust the Picture Mode, Brightness, Sharpness, and Color settings (Settings Picture Picture Mode; Settings Picture Expert Settings Brightness; Settings Picture Expert Settings Sharpness; Settings Picture Expert Settings Color). See if Power Saving Mode has been enabled (Settings General Eco Solution Power Saving Mode). Try resetting the picture.
Settings Picture Expert Settings Reset Picture Change Picture Size Settings 16:9 Standard Settings Picture Picture Size Settings 16:9 Standard If you are using AV or analog equipment, disconnect the adapter from the Component (Blue) input port on the TV and connect it to the AV (Yellow) input port. The Component or AV input port may not be provided depending on the model or geographical area. Check whether Grayscale is set to Settings General Accessibility Grayscale " +'how do I fix low volume issue?',"""To fix the low volume issue on your TV, you can try the following steps:\n\n1. Check the volume control of your TV and the external device (cable box, satellite box, DVD player, Blu-ray player, etc.) connected to your TV.\n2. If you are using an external device, make sure to check the device's audio output option. For example, you may need to change the audio output option to HDMI if the box connected to your TV is using an HDMI cable.\n3. If you are listening to computer sound, connect an external speaker to the computer's audio output connector.\n4. Make sure there is nothing plugged into the headphone jack of your TV.\n5. Reboot the connected device by disconnecting and then reconnecting the device's power cable.\n6. Check if the Digital Output Audio Format is set to Dolby Digital+. If you are using a receiver that does not support Dolby Digital Plus, you will hear no sound when you select Dolby Digital+.\n7. Make sure the audio cable is connected to the correct audio output connector on the external device.\n8. For antenna or cable connections, check the signal information, as a low signal level may cause sound distortions.\n\nAdditionally, for better audio quality, consider using an AV receiver when connecting external audio devices. If you connect an external audio device using an optical cable, the Sound Output setting is automatically changed to the connected device. 
To manually change the Sound Output setting, you can use the Quick Settings screen or the Settings screen on your TV.""","Testing the sound. Settings Support Self Diagnosis Start Sound Test Try Now If the TV plays the Start Sound Test melody without distortion, there may be a problem with an external device or the broadcast signal's strength. The problem Try this! There is no sound or the sound is too low at maximum volume. The picture is good but there is no sound. No sound is heard. The speakers are making an odd sound. Check the volume control of your TV, and then check the volume control of the external device (cable box or satellite box, DVD, Blu-ray, etc.) connected to your TV. Set Sound Output TV Speaker Settings Sound Sound Output If you are using an external device, check the device's audio output option. For example, you may need to change your cable box's audio option to HDMI if the box connected to your TV is using an HDMI cable. To listen to computer sound, connect an external speaker to the computer's audio output connector. If your TV has a headphone jack, make sure there is nothing plugged into it. Reboot the connected device by disconnecting and then reconnecting the device's power cable. Check whether the Digital Output Audio Format is set to Dolby Digital+ If you are using a receiver that does not support Dolby Digital Plus, you will hear no sound when you select Dolby Digital+ Settings Sound Expert Settings Digital Output Audio Format Dolby Digital+ Run Start Sound Test Settings Support Self Diagnosis Start Sound Test Make sure that the audio cable is connected to the correct audio output connector on the external device. For antenna or cable connections, check the signal information. A low signal level may cause sound distortions. I Can't Hear the Sound Clearly. When the TV has difficulties with sound, these steps may help resolve the problem.Connection notes for audio devices. For better audio quality, it is a good idea to use an AV receiver. 
If you connect an external audio device using an optical cable, the Sound Output setting is automatically changed to the connected device. However, to make this happen, you must turn on the external audio device before connecting the optical cable. To manually change the Sound Output setting, do one of the following: Use the Quick Settings screen to change to the connected device: Use the Select button to select Audio Out/Optical on the Sound Output menu. ( Settings directional button Sound Output Use the Settings screen to change to the connected device: Select Audio Out/Optical on the Sound Output menu. ( Settings Sound Sound Output An unusual noise coming from a connected audio device while you are using it may indicate a problem with the audio device itself. If this occurs, ask for assistance from the audio device's manufacturer. Digital audio is only available with 5.1 channel broadcasts. " +'My software update over the Internet has failed. How do I fix this?',"""If your software update over the Internet has failed, you can try updating via a USB device. Download the update file from the Samsung website and store it on a USB device. Connect the USB device to the TV and update using the instructions provided. Remember to save the update package in the USB device's top-level folder to ensure the TV can locate it.""","Updating the TV's software to the latest version. DO NOT turn off the TV's power until the update is complete. The TV will turn off and on automatically after completing the software update. Video and audio settings will be reset to their defaults after a software update. Updating through the Internet Settings Support Software Update Update Now Try Now Updating from the Internet requires an active Internet connection. Updating through a USB device Settings Support Software Update Update Now After downloading the update file from the Samsung website and storing it on a USB device, connect the USB device to the TV to update. 
To update using a USB flash drive, download the update package from Samsung.com to your computer. Then, save the update package in the USB device's top-level folder. Otherwise, the TV will not be able to locate the update package.Updating the TV's software to the latest version. DO NOT turn off the TV's power until the update is complete. The TV will turn off and on automatically after completing the software update. Video and audio settings will be reset to their defaults after a software update. Updating through the Internet Settings Support Software Update Update Now Try Now Updating from the Internet requires an active Internet connection. Updating through a USB device Settings Support Software Update Update Now After downloading the update file from the Samsung website and storing it on a USB device, connect the USB device to the TV to update. To update using a USB flash drive, download the update package from Samsung.com to your computer. Then, save the update package in the USB device's top-level folder. Otherwise, the TV will not be able to locate the update package.Updating the TV automatically. Settings Support Software Update Auto Update Try Now If the TV is connected to the Internet, you can have the TV's software update itself automatically while you are watching the TV. When the background update is completed, it is applied the next time the TV is turned on. If you agree to the Smart Hub terms and conditions, Auto Update is set to automatically. If you want this function disabled, use the Select button to turn it off. This function may take a longer time if another network function is running concurrently. This function requires an Internet connection. " +'How to turn TV in Ambient Mode?',"'To turn the TV in Ambient Mode, you need to press the button on the remote control. If the TV is turned off and you press the button, the TV will turn on in Ambient Mode.'","Using the Ambient Mode. 
Learn about the functions available in Ambient Mode, which is a QLED TV-specific function. Ambient Mode The image on your TV may differ from the image above depending on the model and geographical area. Ambient Mode , you can view beautiful screens, various visual information, and notifications. To enter Ambient Mode , press the button. To return to the TV mode, press the button. To shut off the TV, press the button. If you press the button when the TV is turned off, the TV turns on in Ambient Mode If you use a remote control other than the Samsung TV remote control, there may be restrictions to entering Ambient Mode Because this function is a QLED TV-specific function, it may not be supported depending on the model. Decor Info Photo Using the Ambient Mode. Learn about the functions available in Ambient Mode, which is a QLED TV-specific function. Ambient Mode The image on your TV may differ from the image above depending on the model and geographical area. Ambient Mode , you can view beautiful screens, various visual information, and notifications. To enter Ambient Mode , press the button. To return to the TV mode, press the button. To shut off the TV, press the button. If you press the button when the TV is turned off, the TV turns on in Ambient Mode If you use a remote control other than the Samsung TV remote control, there may be restrictions to entering Ambient Mode Because this function is a QLED TV-specific function, it may not be supported depending on the model. Decor Info Photo Screen states related to Ambient Mode. Ambient Mode Ambient Mode , you can set up and enjoy visual information and design. In this state, less power is consumed and brightness is automatically adjusted by the illuminance sensor. Getting dark state Ambient Mode , the TV screen changes automatically according to the surroundings. When the surroundings get dark, the TV screen becomes darker. 
When the surroundings get bright before the TV screen turns completely black or when the remote control or the Bixby function is operated, the TV screen returns to the Ambient Mode You can change the auto brightness setting for Ambient Mode . For more information, refer to up the Ambient Mode In this state, the TV can activate voice guides and process voice commands. Black screen state Ambient Mode , if the surroundings get to a specific level of darkness or there is no external operation, the TV turns soft-off. Then, when the surroundings get bright, the TV screen returns to the Ambient Mode You can change the auto brightness setting for Ambient Mode . For more information, refer to up the Ambient Mode In this state, the TV cannot activate voice guides and process voice commands. TV off state To enter Ambient Mode when the TV is turned off, press the button on the remote control. To enter TV mode when the TV is turned off, press the button. In TV mode, you can watch broadcasts or enjoy the content on the external connected devices. " +' TV audio is not being played through the receiver. What should I do?',"'To troubleshoot the issue of TV audio not being played through the receiver:\n1. Make sure the audio cable is connected to the correct audio output connector on the external device (receiver).\n2. Check the Digital Output Audio Format settings on the TV. Ensure it is set to Dolby Digital+ and change it if necessary based on the capabilities of the receiver.\n3. If using an HDMI connection, verify the audio output setting on the TV.\n4. Reboot the connected device (receiver) by disconnecting and then reconnecting its power cable.\n5. Ensure that the receiver supports Dolby Digital Plus for sound transmission.'","Testing the sound. Settings Support Self Diagnosis Start Sound Test Try Now If the TV plays the Start Sound Test melody without distortion, there may be a problem with an external device or the broadcast signal's strength. The problem Try this! 
There is no sound or the sound is too low at maximum volume. The picture is good but there is no sound. No sound is heard. The speakers are making an odd sound. Check the volume control of your TV, and then check the volume control of the external device (cable box or satellite box, DVD, Blu-ray, etc.) connected to your TV. Set Sound Output TV Speaker Settings Sound Sound Output If you are using an external device, check the device's audio output option. For example, you may need to change your cable box's audio option to HDMI if the box connected to your TV is using an HDMI cable. To listen to computer sound, connect an external speaker to the computer's audio output connector. If your TV has a headphone jack, make sure there is nothing plugged into it. Reboot the connected device by disconnecting and then reconnecting the device's power cable. Check whether the Digital Output Audio Format is set to Dolby Digital+ If you are using a receiver that does not support Dolby Digital Plus, you will hear no sound when you select Dolby Digital+ Settings Sound Expert Settings Digital Output Audio Format Dolby Digital+ Run Start Sound Test Settings Support Self Diagnosis Start Sound Test Make sure that the audio cable is connected to the correct audio output connector on the external device. For antenna or cable connections, check the signal information. A low signal level may cause sound distortions. My Computer Won't Connect. When the TV has difficulties connecting to a PC, these steps may help resolve the problem. The problem Try this! The Not Set your PC's output resolution so it matches a resolution supported by the TV. message appears. The video is OK but there is no audio. Select Source to confirm that the correct input source has been selected. If the TV is not connected to a cable box or satellite box, run Auto Program to search for channels. Settings Broadcasting Auto Program Confirm that the coaxial cable is securely connected to the TV. 
Run Start Setup Auto Program Settings General Start Setup Settings Broadcasting Auto Program Go to Caption Settings and change the Caption Mode Settings General Accessibility Caption Settings Caption Mode Some channels may not have caption data. The compression of the video content may cause picture distortions. This is especially true with fast moving pictures from sports programs and action movies. A weak signal can cause picture distortions. This is not a problem with the TV. If you are using an HDMI connection, check the audio output setting on your PC. If you are using an HDMI-to-DVI cable, a separate audio cable is required. Note that the HDMI-to-DVI connection is only supported by the HDMI (DVI) port and this port does not transmit audio. To listen to the computer sound, connect external speakers to the audio output connection of the computer. I Can't Hear the Sound Clearly. When the TV has difficulties with sound, these steps may help resolve the problem." +'How do I change the current time on TV?',"'To change the current time on your TV, you can set it manually by going to Settings > General > System Manager > Time > Clock > Clock Mode Manual. From there, you can directly enter the current time using the directional buttons on the remote control.'","Setting the current time. Settings General System Manager Time Try Now You can set the Clock manually or automatically. Once the Clock is set, you can view the current time on the TV anytime. You must reset the clock in the following cases: The power cable is disconnected and then connected. The Clock Mode is changed from Manual Auto The TV is not connected to the Internet. No broadcast signals are received. Setting the clock automatically Settings General System Manager Time Clock Clock Mode Auto This function works only when the TV is connected to the Internet or is receiving digital broadcasts through a connected antenna. The accuracy of the time information received may differ with the channel and signal. 
If you get your TV signal from a cable broadcast receiver/set-top box or a satellite receiver/satellite set-top box connected to an HDMI or Component port, you must set the current time manually. Adjusting the clock for DST and time zone Settings General System Manager Time Clock Sets the right time by setting Daylight Savings Time (DST) and your local time zone. Time Zone Selects your time zone. This function is only available when the Clock Mode is set to Auto DST Automatically adjusts for Daylight Saving Time (DST). This function is only available when the Clock Mode is set to Auto Changing the current time Settings General System Manager Time Clock Time Offset Time Offset adjusts the time through a network connection. This function is only available if Clock Mode is set to Auto and the TV is connected to the Internet though a local area network. Time Offset adjusts the time if the TV fails to receive time information through normal digital broadcast signals. Setting the clock manually Settings General System Manager Time Clock Clock Mode Manual When Clock Mode is set to Manual , you can directly enter the current time. Select the Date and Time by using the directional buttons on the remote control.Setting Up a Schedule Viewing. Configure the TV to show a specific channel or program at a specific time and date. The icon appears next to programs that have been configured for a schedule viewing. To set up a schedule viewing, you must first set the TV's clock ( Settings General System Manager Time Clock Setting up a schedule viewing The Guide Screen On the Guide screen, select a program you would like to view, and then press and hold the Select button. Select Schedule Viewing on the pop-up menu that appears. The Program Info Screen Press the Select button while watching the TV. The Program Info window appears. Select a broadcast scheduled program by using the left or right directional buttons, and then press the Select button. 
You can schedule watching the program by selecting Schedule Viewing Changing the recording time. When recording a live program you can specify how long you want to record for. Press the down directional button to display the control bar at the bottom. Press the left or right directional buttons to move to Edit Recording Time , and then press the Select button. A slider scale appears and you will hear the recording time currently set. Use the left and right directional buttons to change the time, and then press the Select button. You will hear a menu that gives information about the recording time. To return to the program you are watching, press the button on the remote. The maximum recording time differs depending on the capacity of the USB device. " +'How do I select Optimized sound mode?',"'To select the Optimized sound mode, you would need to go to Settings, then Sound, then Sound Mode, and finally choose the Optimized option.'","Choosing a sound mode. Settings Sound Sound Mode Try Now You can select a favorite sound mode suitable for the content type or the listening environment. Standard Optimized Amplify This function may not be supported depending on the model.Configuring a Sound Mode and Expert Settings. Change the Sound Mode and adjust Expert Settings.Configuring advanced sound settings. Settings Sound Expert Settings Try Now You can personalize the sound quality by adjusting the following settings. Balance Try Now Equalizer Try Now HDMI Input Audio Format Try Now Digital Output Audio Format Try Now If you use a receiver that does not support Dolby Digital Plus, you will hear no sound if you select Dolby Digital+ The Dolby Digital+ option is only available via ARC(HDMI) for external devices that support the format. Audio Delay Try Now Sound Mirroring When the Sound Mirroring function is set to , you can share your mobile device's sound using your TV speakers. This function may not be supported depending on the TV model or mobile device. 
Auto Volume Sound Feedback Try Now Reset Sound Try Now " +'How do I turn on or off Remote Management?',"'To turn on or off Remote Management, you can access the settings on your TV. From the Settings menu, go to Support, then select Remote Management. In the Remote Management section, you can read and agree to the service agreements and choose to turn it on or off. Alternatively, you can also start this function by pressing and holding the button for 5 or more seconds.'","Getting support through Remote Management. Settings Support Remote Management Try Now After consenting to our service agreement, you can use Remote Management to access Remote Support and have a Samsung service technician diagnose your TV, correct problems, and update your TV's software, remotely, via the web. You can also turn Remote Management on and off. You can also start this function by pressing and holding the button for 5 or more seconds. This function requires an Internet connection. What is Remote Support? Samsung Remote Support service offers you one-on-one support with a Samsung Technician who can remotely: Diagnose your TV Adjust the TV settings for you Perform a factory reset on your TV Install recommended firmware updates How does Remote Support Work? You can easily get Samsung Remote Support service for your TV: Call the Samsung Contact Center and ask for remote support. Open the menu on your TV and go to the Support section. ( Settings Support Select Remote Management , then read and agree to the service agreements. When the PIN screen appears, provide the PIN number to the agent. The agent will then access your TV. Getting support through Remote Management. Settings Support Remote Management Try Now After consenting to our service agreement, you can use Remote Management to access Remote Support and have a Samsung service technician diagnose your TV, correct problems, and update your TV's software remotely, via the web. You can also turn Remote Management on and off. 
You can also start this function by pressing and holding the button for 5 or more seconds. This function requires an Internet connection. What is Remote Support? Samsung Remote Support service offers you one-on-one support with a Samsung Technician who can remotely: Diagnose your TV Adjust the TV settings for you Perform a factory reset on your TV Install recommended firmware updates How does Remote Support Work? You can easily get Samsung Remote Support service for your TV: Call the Samsung Contact Center and ask for remote support. Open the menu on your TV and go to the Support section. ( Settings Support Select Remote Management , then read and agree to the service agreements. When the PIN screen appears, provide the PIN number to the agent. The agent will then access your TV. Connecting an IP control device to the TV. Settings General Network Expert Settings IP Remote You can connect an IP control device to the TV remotely to manage your IP device. To use this function, Power On with Mobile must be turned on. Turning this feature on may allow other IP control devices to control your TV. We recommend turning this feature on only if an authorized third-party custom controller is installed and configured specifically for a Samsung TV and if your Wi-Fi network is password protected. This function may not be supported depending on the model or geographical area." +'How do I turn on High Contrast using Bixby?',"'To turn on High Contrast using Bixby, you need to press the button on the Samsung Smart Remote, say ""High Contrast on"", and then release the button. This voice command will activate the High Contrast feature on your TV.'","Setting accessibility settings using Bixby. Bixby lets you control your TV with voice commands spoken into the microphone built into the Samsung Smart Remote. To use Bixby , the TV must be set up and tuned and must be connected to the Internet. 
You can connect the TV to the Internet during the initial setup or do it later through the settings menu. There are many commands you can say to control the TV. These can be used together with Voice Guide to give you voice input and voice output. To speak a command, press and hold the button (located near the top of the Samsung Smart Remote, directly below the LED power indicator and above the directional button) and say the command, and then release the button. The TV will confirm the command. For example you can: Turn Voice Guide on: Press the button on the Samsung Smart Remote, and then say Voice Guide on Turn Video Description on: Press the button on the Samsung Smart Remote, and then say Video Description on Turn High Contrast on: Press the button on the Samsung Smart Remote, and then say High Contrast on White text on black background (high contrast). Settings General Accessibility High Contrast Try Now You can change Smart Hub and setting menu screens to white text on a black background or change the transparent TV menus to opaque so that text can be more easily read. To activate this function, set High Contrast High Contrast is on, some Accessibility menus are not available.Running Bixby. Press and hold the button on your Samsung Smart Remote, say a command, and then release the button. The TV recognizes the voice command. To view the Bixby guide, press the button once: When you press the button for the first time, the Using Bixby button appears at the bottom of the screen. Press the Select button. The Using Bixby popup window appears and a tutorial on using Bixby is shown. When you press the button after the first time, the Enter My Bixby button appears at the bottom of the screen. Press the Select button to go to the My Bixby screen. " +'How do I check scheduled viewings?',"'To check scheduled viewings, you can follow these steps:\n\n1. Open the Smart Hub by pressing the button on your remote.\n2. 
Use the left or right directional button to move to Live TV.\n3. Press the up directional button to move to the top row.\n4. Use the left or right directional button to move to Schedule Manager Recordings.\n5. Press Select to open Schedule Manager Recordings.\n6. Use the up directional button to move to the Schedules menu.\n7. Use the down directional button to move to the list of programs scheduled to view.\n\nFrom there, you can see the list of programs that are scheduled for viewing.'","Setting up a scheduled viewing for a channel. Using the schedule viewing options from the guide screen On the Guide screen, select a program you would like to view, press and hold the Select button, and then select Schedule Viewing in the pop-up menu that appears. Using the schedule viewing options in the program info window Press the Select button while watching TV. The Program Info window appears. Select a program that will be broadcast using the left or right directional buttons. Press the Select button, and then select Schedule Viewing schedule viewing of that program. Using schedule viewing to view programs at a specified time on a specified date (U.S.A and Canada) Live TV Schedule Manager Add Viewing (Other countries) Live TV Schedule Manager Recordings Schedules Add Viewing To schedule a viewing, set the values for Antenna Channel Repeat Date and Start Time of the broadcast program to view, and then select Save Check the current time in Settings General System Manager Time Clock This function may not be supported depending on the model or geographical area. Setting Up a Schedule Viewing. Configure the TV to show a specific channel or program at a specific time and date. The icon appears next to programs that have been configured for a schedule viewing. 
To set up a schedule viewing, you must first set the TV's clock ( Settings General System Manager Time Clock Setting up a schedule viewing The Guide Screen On the Guide screen, select a program you would like to view, and then press and hold the Select button. Select Schedule Viewing on the pop-up menu that appears. The Program Info Screen Press the Select button while watching the TV. The Program Info window appears. Select a broadcast scheduled program by using the left or right directional buttons, and then press the Select button. You can schedule watching the program by selecting Schedule Viewing Cancelling a scheduled viewing. You have two ways to cancel a scheduled viewing. 1. Cancelling a scheduled viewing from the Guide Live TV Guide Move to the program you want to cancel scheduled viewing for, and then press the Select button. Move to Cancel Scheduled Viewing , and then press the Select button. When a pop-up message asking you to cancel the selected scheduled viewing appears, select Yes . The selected scheduled viewing is canceled and the screen returns to the Guide screen. 2. Cancelling a scheduled viewing from Smart Hub Live TV Schedule Manager Recordings Schedules Delete Press the button to open Smart Hub , and then press the left or right directional button to move to Live TV Press the up directional button to move to the top row, and then press the left or right directional button to move to Schedule Manager Recordings Press Select to open Schedule Manager Recordings Press the up directional button to move to the Schedules menu. Use the down directional button to move to the list of programs scheduled to view. Use the right directional button to move to the Delete option, and then press the Select button to delete the selected item. This function may not be supported depending on the model or geographical area. 
" +'What do I do if wireless network connection failed?',"'If wireless network connection failed, you should try going to Open Network Settings and then selecting the correct router. Additionally, make sure the selected router is turned on, and if it is, turn it off and then back on. Enter the correct password if required for the wireless connection.'","Troubleshooting Internet connectivity issues. Try Now After reading the following content, troubleshoot the Internet connection issue. If the problem persists, contact your Internet Service Provider. No network cable found Make sure that the LAN cable is plugged in on both ends. If it is plugged in, make sure that the router is turned on. If the router is on, try turning it off and then on. Wireless network connection failed If a selected wireless router is not found, go to Open Network Settings , and then select the correct router. Settings General Network Open Network Settings Unable to connect to a wireless router Check if the router is turned on. If it is, turn it off and then on. Enter the correct password if required. IP auto setting failed Configure the settings in IP Settings Settings General Network Network Status IP Settings Make sure that the DHCP server is enabled on the router, and then unplug the router and plug it back in. Reset the router if required. For wireless connection, enter the correct password if required. Unable to connect to the network Check all IP Settings Settings General Network Network Status IP Settings After checking the DHCP server status (must be active) on the router, remove the LAN cable, and then connect it again. For wireless connection, enter the correct password if required. Connected to a local network, but not to the Internet Make sure that the Internet LAN cable is connected to the router's external LAN port. 
Check the DNS values in IP Settings Settings General Network Network Status IP Settings Network setup is complete, but unable to connect to the Internet If the problem persists, contact your Internet Service Provider. The TV Won't Connect to the Internet. When the TV has difficulties connecting to the Internet, these steps may help resolve the problem. The problem Try this! The TV cannot connect to your network or apps (for Internet compatible models only). The wireless network Confirm your wireless modem/router is on and connected to the Internet. connection failed. The wireless network signal is too weak.Troubleshooting Internet Connectivity Issues. If your TV won't connect to the Internet, try the solutions below." diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/ai-core-genaihub-evaluation-with-grounding.md b/tutorials/ai-core-genaihub-evaluation-with-grounding/ai-core-genaihub-evaluation-with-grounding.md new file mode 100644 index 0000000000..1143bd8c09 --- /dev/null +++ b/tutorials/ai-core-genaihub-evaluation-with-grounding/ai-core-genaihub-evaluation-with-grounding.md @@ -0,0 +1,1692 @@ +--- +parser: v2 +auto_validation: true +time: 45 +primary_tag: software-product>sap-ai-core +tags: [ tutorial>beginner, topic>artificial-intelligence, topic>machine-learning, software-product>sap-ai-core ] +author_name: Smita Naik +author_profile: https://github.com/I321506 +--- + +# GenAI Grounding Evaluations with SAP AI Core + This guide describes how to use SAP AI Core Custom Evaluation to benchmark Large Language Models (LLMs) in a Retrieval-Augmented Generation (RAG) scenario, with a specific focus on groundedness evaluation. + +In RAG-based enterprise applications, model responses must be grounded in trusted data sources such as enterprise documents, knowledge bases, or curated repositories. 
SAP AI Core’s evaluation capabilities allow you to systematically measure grounding quality, retrieval relevance, and alignment of generated responses with source content. + +## You will learn +- How to configure a grounding evaluation workflow in SAP AI Core. +- How to upload and manage RAG-based test datasets that include retrieved context. +- How to define grounding-specific evaluation metrics for assessing LLM responses. +- How to execute grounding evaluations and analyze the grounding results. + +## Prerequisites +1. **BTP Account** + Set up your SAP Business Technology Platform (BTP) account. + [Create a BTP Account](https://developers.sap.com/group.btp-setup.html) +2. **For SAP Developers or Employees** + Internal SAP stakeholders should refer to the following documentation: [How to create BTP Account For Internal SAP Employee](https://me.sap.com/notes/3493139), [SAP AI Core Internal Documentation](https://help.sap.com/docs/sap-ai-core) +3. **For External Developers, Customers, or Partners** + Follow this tutorial to set up your environment and entitlements: [External Developer Setup Tutorial](https://developers.sap.com/tutorials/btp-cockpit-entitlements.html), [SAP AI Core External Documentation](https://help.sap.com/docs/sap-ai-core?version=CLOUD) +4. **Create BTP Instance and Service Key for SAP AI Core** + Follow the steps to create an instance and generate a service key for SAP AI Core: + [Create Service Key and Instance](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/create-service-key?version=CLOUD) +5. **AI Core Setup Guide** + Step-by-step guide to set up and get started with SAP AI Core: + [AI Core Setup Tutorial](https://developers.sap.com/tutorials/ai-core-setup.html) +6. An Extended SAP AI Core service plan is required, as the Generative AI Hub is not available in the Free or Standard tiers. 
For more details, refer to
+[SAP AI Core Service Plans](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/service-plans?version=CLOUD).
+7. **Orchestration Deployment**
+   Ensure at least one orchestration deployment is ready to be consumed during this process.
+Refer to [this tutorial to understand the basic consumption of GenAI models using orchestration](https://developers.sap.com/tutorials/ai-core-orchestration-consumption.html).
+8. **Basic Knowledge**
+   Familiarity with the orchestration workflow is recommended.
+9. **Install Dependencies**
+   Install the required Python packages using the requirements.txt file provided.
+Download [requirements.txt](img/requirements.txt)
+
+💡 Right-click the link above and choose **"Save link as..."** to download it directly.
+
+
+**Below are the Steps to Run a GenAI Evaluation in SAP AI Core**
+
+### Pre-Read
+
+This tutorial uses a structured evaluation dataset named **emanual.csv**, placed inside the **DATASET_RAG** folder.
+
+You can access DATASET_RAG.zip from the GitHub repository:
+
+- [Download Full Dataset as ZIP](https://github.com/SAP-samples/aicore-genai-samples/tree/main/genai-sample-apps/Evaluation/evaluation_with_grounding/data)
+
+**NOTE:** If you download the ZIP file, extract it and navigate to the **DATASET_RAG** folder. Place the entire folder in your designated location for further use.
+
+**Dataset**
+
+The tutorial leverages the publicly available **emanual.csv**, which contains commonly asked e-manual questions. Each entry includes:
+
+  - topic (user query)
+  - answer
+  - context
+
+#### How it works
+
+- A query and its retrieved context are sent to the model.
+
+- The model generates a grounded response.
+
+- The grounding metrics evaluate whether the output faithfully uses the provided context.
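The three steps above can be sketched in plain Python. This is a deliberately simplified illustration: `build_prompt` and the word-overlap score in `naive_groundedness` are toy stand-ins for demonstration only, not the LLM-as-judge metrics (such as Pointwise RAG Groundedness) that SAP AI Core applies.

```python
def build_prompt(topic: str, context: str) -> str:
    # Combine the user query with its retrieved context (illustrative template).
    return (
        "Answer the question using only the provided context.\n"
        f"Question: {topic}\nContext: {context}"
    )

def naive_groundedness(response: str, context: str) -> float:
    # Toy proxy for groundedness: share of response words that appear in the context.
    # Real grounding metrics use an LLM judge instead of word overlap.
    response_words = set(response.lower().split())
    context_words = set(context.lower().split())
    if not response_words:
        return 0.0
    return len(response_words & context_words) / len(response_words)

# A response fully drawn from the context scores 1.0
context = "turn the router off and then on"
print(naive_groundedness("turn the router off and on", context))  # → 1.0
```

The evaluation service automates exactly this loop over every row of the dataset and scores each generated answer against its `context` column.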
+
+### Notebook Reference
+
+For hands-on execution and end-to-end reference, use the accompanying [Evaluation Grounding Notebook](https://github.com/SAP-samples/aicore-genai-samples/blob/main/genai-sample-apps/Evaluation/evaluation_with_grounding/evaluation_RAG.ipynb). It includes complete Python code examples that align with each step of this tutorial, from dataset preparation and artifact registration to configuration creation, execution, and result retrieval.
+
+💡 Even though this tutorial provides stepwise code snippets for clarity, the notebook contains all required imports, object initializations, and helper functions to run the flow seamlessly in one place.
+
+**To use the notebook:**
+- Download and open the [notebook](https://github.com/SAP-samples/aicore-genai-samples/blob/main/genai-sample-apps/Evaluation/evaluation_with_grounding/evaluation_RAG.ipynb) in your preferred environment (e.g., VS Code, JupyterLab).
+- Configure your environment variables, such as AICORE_BASE_URL, AICORE_CLIENT_ID, AICORE_CLIENT_SECRET, and your object store credentials.
+- Execute each cell in order to reproduce the complete Evaluation Grounding workflow demonstrated in this tutorial.
+
+### Environment Variables Setup
+
+[OPTION BEGIN [SAP AI Launchpad]]
+
+- Navigate to your SAP AI Core Launchpad.
+
+- In the Workspaces section, click on "Add" to create a new workspace.
+  - A workspace in SAP AI Core is a logical container that holds your resources (like models and pipelines) and provides the isolation needed for your projects.
+
+- When prompted, enter your AI Core credentials (such as Client ID, Client Secret, and Base URL).
+  - Note: If you're unsure about where to find these credentials, refer to this [guide](https://developers.sap.com/tutorials/ai-core-generative-ai.html#1c4f36d7-f345-4822-be00-c15f133ff7d8).
+
+- Once the workspace is successfully created, select your desired Resource Group to begin the evaluation process.
+ +Refer to the screenshot below for guidance: +![img](img/image_34.png) + +[OPTION END] + +[OPTION BEGIN [Python]] + +- Open **Visual Studio Code or Jupyter Notebook**. Create a new file with the .ipynb extension (e.g., custom_evaluation.ipynb). +- Create a **.env** file in the root directory of your project. +- Add your **AI Core** and **AWS credentials** as shown below. + +```env +AICORE_CLIENT_ID="" +AICORE_CLIENT_SECRET="" +AICORE_AUTH_URL="" +AICORE_BASE_URL="" +AICORE_RESOURCE_GROUP="default" + +AWS_ACCESS_KEY="" +AWS_SECRET_ACCESS_KEY="" +AWS_BUCKET_ID="" +AWS_REGION="" +``` + +**Note:** Replace the empty strings "" with your actual credentials. + +Refer to the below screenshot for clarity: +![img](img/image_1.png) + +#### Install Dependencies + +Install the required packages using the [requirements.txt](img/requirements.txt) file you downloaded in the Prerequisites section. +```bash +pip install -r requirements.txt +``` +#### Connect to AI Core Instance + +Once the environment variables are set and dependencies are installed, run the following code to connect to your instance: + +```PYTHON +# Loading the credentials from the .env file +from gen_ai_hub.proxy.gen_ai_hub_proxy import GenAIHubProxyClient +from dotenv import load_dotenv +import os + +# Load environment variables +load_dotenv(override=True) + +# AI Core Credentials +AICORE_BASE_URL = os.getenv("AICORE_BASE_URL") +AICORE_RESOURCE_GROUP = os.getenv("AICORE_RESOURCE_GROUP") +AICORE_AUTH_URL = os.getenv("AICORE_AUTH_URL") +AICORE_CLIENT_ID = os.getenv("AICORE_CLIENT_ID") +AICORE_CLIENT_SECRET = os.getenv("AICORE_CLIENT_SECRET") + +# AWS Credentials +AWS_ACCESS_KEY = os.getenv("AWS_ACCESS_KEY") +AWS_BUCKET_ID = os.getenv("AWS_BUCKET_ID") +AWS_REGION = os.getenv("AWS_REGION") +AWS_SECRET_ACCESS_KEY = os.getenv("AWS_SECRET_ACCESS_KEY") + +# Initialize GenAIHub Proxy Client +client = GenAIHubProxyClient( + base_url=AICORE_BASE_URL, + auth_url=AICORE_AUTH_URL, + client_id=AICORE_CLIENT_ID, + 
client_secret=AICORE_CLIENT_SECRET,
+    resource_group=AICORE_RESOURCE_GROUP
+)
+```
+
+**NOTE:**
+- Ensure the **requirements.txt** installation completes successfully before running the code.
+- If you face any issues, recheck your **.env** values and installed packages.
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+
+- Download the [Bruno_collections](img/AI_Core.json) file.
+
+- Follow the steps in the [tutorial](https://developers.sap.com/tutorials/ai-core-orchestration-consumption.html) to set up your environment. See the step **Set Up Your Environment and Configure Access** and proceed until you have generated a token.
+
+[OPTION END]
+
+**Important Note:** To use the document grounding service, your request must contain the **document grounding label** set to **true**. Existing resource groups without this label will not work.
+
+### Preparing Dataset Files and Reference Files
+
+[OPTION BEGIN [SAP AI Launchpad]]
+
+> **Note:** This step involves local setup using Python and does not require any action on the SAP AI Launchpad.
+
+[OPTION END]
+
+[OPTION BEGIN [Python]]
+
+In this step, we prepare the dataset and optional reference documents required for grounding evaluation.
+
+The evaluation notebook dynamically detects the dataset file from a predefined folder structure.
+You are not required to hardcode the dataset filename.
+
+```Python
+import os
+import json
+def get_dataset_file_name(folder_path):
+    """
+    Retrieves the name of the first file in the specified folder.
+ """ + if not os.path.isdir(folder_path): + print(f"The folder path '{folder_path}' does not exist.") + return None + + items_in_folder = os.listdir(folder_path) + + for item in items_in_folder: + item_path = os.path.join(folder_path, item) + if os.path.isfile(item_path): + return item + + print(f"No files were found in the folder '{folder_path}'.") + return None + + +# --- MAIN EXECUTION --- +DATASET_FOLDER = "./DATASET_RAG/testdata" + +DATASET_NAME = get_dataset_file_name(DATASET_FOLDER) + +if DATASET_NAME: + print(f"Dataset name: {DATASET_NAME}") +else: + print("Missing run or dataset file.") + raise SystemExit("Exiting due to missing run/dataset file.") +``` + +![img](img/image_py_dtst.png) + +[OPTION END] + +[OPTION BEGIN [Bruno]] + +> **Note:** This step involves local setup using Python and does not require any action on Bruno. + +[OPTION END] + +### Registering an Object Store Secret in AI Core + +You can upload the orchestration run files, grounding test datasets, and any optional metric definitions to SAP AI Core using the Tracking API. To upload these files, you must first register an object store secret containing your object store credentials + +[OPTION BEGIN [SAP AI Launchpad]] + +- Open the **SAP AI Core Launchpad** and navigate to the **Administration** tab. +- Select the **Object Store** section from the left-hand menu. +- Click on **“Add”** to register a new object store secret. +- Fill in the required bucket details as shown in the screenshot below. + +![img](img/image_33.png) + +In the **Secret** field, use the following structure to provide your AWS credentials: + +```json +{ + "AWS_ACCESS_KEY_ID": "Enter Your value", + "AWS_SECRET_ACCESS_KEY": "Enter Your value" +} +``` + +[OPTION END] + +[OPTION BEGIN [Python]] +To make your evaluation files available for AI Core orchestration, you need to: + +- Upload them to an object store (e.g., AWS S3). +- Register the object store secret in AI Core. 
+
+**Step 4.1: Setup Authentication and Headers**
+
+First, define the authentication headers for AI Core REST API calls.
+
+```PYTHON
+def _get_headers():
+    headers = {
+        "Authorization": client.get_ai_core_token(),
+        "AI-Resource-Group": AICORE_RESOURCE_GROUP,
+        "Content-Type": "application/json",
+    }
+    return headers
+```
+
+**Step 4.2: Register Object Store Secret in AI Core**
+
+Register your S3 bucket and credentials as a secret.
+
+```PYTHON
+# Register an S3 secret with AI Core, which will be used as an input source
+import json
+import logging
+import requests
+
+def register_oss_secret():
+    headers = _get_headers()
+
+    POST_SECRETS_ENDPOINT = '/v2/admin/objectStoreSecrets'
+    request_url = f"{AICORE_BASE_URL}{POST_SECRETS_ENDPOINT}"
+
+    request_body = {
+        "name": "genai-data",
+        "data": {
+            "AWS_ACCESS_KEY_ID": AWS_ACCESS_KEY,
+            "AWS_SECRET_ACCESS_KEY": AWS_SECRET_ACCESS_KEY
+        },
+        "type": "S3",
+        "bucket": AWS_BUCKET_ID,
+        "endpoint": "s3-eu-central-1.amazonaws.com",  # adjust to match your bucket's region
+        "region": AWS_REGION,
+        "pathPrefix": ""
+    }
+    try:
+        response = requests.post(
+            request_url, headers=headers, data=json.dumps(request_body), timeout=120
+        )
+        result = response.json()
+        print(result)
+        return result
+    except Exception:
+        logging.error("Error occurred while attempting to create object store secret")
+        raise
+
+register_oss_secret()
+```
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+
+Generic secrets securely store AWS S3 credentials required for document access.
+
+• Expand **objectStoreSecrets** under **admin** and select the create-secret request.
+
+Use the payload below to create an object store secret for your AWS S3 bucket.
+
+```CODE
+{
+    "name": "genai-data",
+    "data": {
+        "AWS_ACCESS_KEY_ID": "",
+        "AWS_SECRET_ACCESS_KEY": ""
+    },
+    "type": "S3",
+    "bucket": "",
+    "endpoint": "",
+    "region": "",
+    "pathPrefix": ""
+}
+```
+• Ensure that all values in the data dictionary are Base64-encoded as per AWS S3 credential requirements.
+
+![img](img/image-br01.png)
+
+[OPTION END]
+
+> ⚠️ **Important Note (Must Read)**
+>
+> - You must **create an object store secret named `default`** to store **output artifacts** from orchestration runs. This is **mandatory**.
+> - For **input artifacts**, you may create additional object store secrets with different names if needed.
+> - If a secret named `default` is not configured, orchestration runs will **fail** due to missing output target setup.
+
+### Create a Generic Secret
+
+In the next step, we create a secret that enables grounding by adding the "labels" configuration. This generic secret provides the hyperscaler and bucket details so that the grounding service knows how to retrieve data from the bucket.
+
+[OPTION BEGIN [SAP AI Launchpad]]
+
+**Generic secret for AWS S3**
+
+1. **Open the Workspaces app** and choose the **AI API connection**.
+
+2. If needed, toggle between **tenant-level** and **resource-group-level** secret creation.
+
+3. Navigate to the **SAP AI Core Administration** app and go to **Generic Secrets**.
+
+4. Choose **Add** to create a new secret.
+
+5. Fill out the form as follows:
+   - **Resource Group**: ``
+   - **Name**: `aws-credentials-1`
+   - **Secret (JSON format)**:
+
+```json
+{
+    "access_key_id": "",
+    "secret_access_key": "",
+    "bucket": "",
+    "host": "",
+    "region": "",
+    "url": "",
+    "username": "",
+    "authentication": "NoAuthentication",
+    "description": "AWS S3 credentials for document grounding",
+    "type": "HTTP",
+    "proxyType": "Internet"
+}
+```
+
+**Labels**
+
+Add the following key-value pairs as labels:
+
+| Key | Value |
+|--------------------------------------------------|-------|
+| ext.ai.sap.com/document-grounding | true |
+| ext.ai.sap.com/documentRepositoryType | S3 |
+
+![img](img/image078.png)
+
+Click **Add** to save the secret.
+
+[OPTION END]
+
+[OPTION BEGIN [Python]]
+
+Generic secrets securely store AWS credentials required for document access.
+
+```python
+import os
+import time
+import base64
+import requests
+
+# AWS_HOST and AWS_USERNAME are additionally required for the generic secret;
+# add them to your .env file alongside the other AWS credentials.
+AWS_HOST = os.getenv("AWS_HOST")
+AWS_USERNAME = os.getenv("AWS_USERNAME")
+
+def encode_base64(value):
+    return base64.b64encode(value.encode('utf-8')).decode('utf-8')
+
+def create_generic_secret():
+    payload = {
+        "name": "groundingsecret",
+        "data": {
+            "url": encode_base64("https://s3-eu-central-1.amazonaws.com"),
+            "authentication": encode_base64("NoAuthentication"),
+            "description": encode_base64("grounding secret"),
+            "access_key_id": encode_base64(AWS_ACCESS_KEY),
+            "bucket": encode_base64(AWS_BUCKET_ID),
+            "host": encode_base64(AWS_HOST),
+            "region": encode_base64("eu-central-1"),
+            "secret_access_key": encode_base64(AWS_SECRET_ACCESS_KEY),
+            "username": encode_base64(AWS_USERNAME),
+        },
+        "labels": [
+            {
+                "key": "ext.ai.sap.com/document-grounding",
+                "value": "true"
+            },
+            {
+                "key": "ext.ai.sap.com/documentRepositoryType",
+                "value": "S3"
+            }
+        ]
+    }
+    time.sleep(60)  # allow previously created resources to propagate
+    try:
+        headers = _get_headers()
+        api_url = f"{AICORE_BASE_URL}/v2/admin/secrets"
+        response = requests.post(api_url, headers=headers, json=payload)
+        if response.status_code == 200:
+            print("Generic secret created successfully")
+        else:
+            print(f"Failed to create generic secret: {response}")
+    except Exception as e:
+        print(f"Error creating secret: {e}")
+
+create_generic_secret()
+```
+![img](img/image_py_sec.png)
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+
+**Generic secret for AWS S3**
+
+Generic secrets securely store AWS S3 credentials required for document access.
+
+**Endpoint:**
+```
+POST:{{ai_api_url}}/v2/admin/secrets
+```
+
+Use the payload below to create a secret for AWS S3 with NoAuthentication as the authentication type.
+
+```CODE
+{
+    "name": "",                                   // Name of the generic secret to be created
+    "data": {
+        "url": "",                                // Base64 encoded value of url
+        "authentication": "Tm9BdXRoZW50aWNhdGlvbg==", // Base64 encoded value for NoAuthentication
+        "description": "",                        // Base64 encoded description of the secret
+        "access_key_id": "",                      // Base64 encoded value of access key id
+        "bucket": "",                             // Base64 encoded value of bucket name
+        "host": "",                               // Base64 encoded value of host
+        "region": "",                             // Base64 encoded value of region
+        "secret_access_key": "",                  // Base64 encoded value of secret access key
+        "username": "",                           // Base64 encoded value of username
+        "type": "SFRUUA==",                       // [Optional] Base64 encoded value for HTTP
+        "proxyType": "SW50ZXJuZXQ="               // [Optional] Base64 encoded value for Internet
+    },
+    "labels": [
+        {
+            "key": "ext.ai.sap.com/document-grounding",     // Label for Document Grounding feature
+            "value": "true"
+        },
+        {
+            "key": "ext.ai.sap.com/documentRepositoryType", // Label for Document Repository Type
+            "value": "S3"
+        }
+    ]
+}
+```
+
+• Ensure that all values in the data dictionary are Base64-encoded as per AWS S3 credential requirements.
+
+![img](img/image_br_sec.png)
+
+[OPTION END]
+
+### Create a Grounding Pipeline
+
+Before running grounding evaluations, you must create a grounding pipeline in SAP AI Core. This pipeline is responsible for reading documents from your object store, processing them, and preparing them for retrieval.
+
+[OPTION BEGIN [SAP AI Launchpad]]
+
+1. Navigate to Generative AI Hub from the side menu.
+
+2.
Click on Grounding Management. + +3. Click Create to open the Create Data Repository wizard. + +4. In the **Create Data Repository** form: + - **Embedding Model**: Leave as default (`Text Embedding 3 Large`). + - **Document Store Type**: Select `S3`. + - **Document Grounding Generic Secret**: Select the AWS secret you created in **Step 5** (e.g., `aws-credentials-1`). + +5. Once selected, you're ready to proceed. The required S3 bucket, region, and credentials are handled through the secret. + +6. Click **Create** to finish. + +![img](img/image080.png) + +--- + +> ✅ After completing this step, your knowledge base (data repository) will be linked to your document source. The documents will be embedded and made available for grounding in the chat experience. + +[OPTION END] + +[OPTION BEGIN [Python]] + +The following code creates an S3-based grounding pipeline using the generic secret you created earlier: + +```python +def create_s3_grounding_pipeline(): + headers = _get_headers() + api_url = f"{AICORE_BASE_URL}/v2/lm/document-grounding/pipelines" + payload = { + "type": "S3", + "configuration": { + "destination": "groundingsecret" + } + } + time.sleep(5) + + try: + response = requests.post(api_url, headers=headers, json=payload) + if response.status_code == 201: + print("S3 document grounding pipeline created successfully") + else: + print(f"Failed to create pipeline. Status: {response.status_code}, Response: {response.text}") + except Exception as e: + print(f"Error creating S3 document grounding pipeline: {e}") +create_s3_grounding_pipeline() +``` + +This registers the grounding pipeline in SAP AI Core. After creation, you will upload documents and trigger pipeline runs to populate the data repository. 
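To sanity-check the result, you can issue a GET against the same pipelines endpoint that the creation call uses. This is a small helper sketch; the GET listing call and the shape of its JSON response are assumptions here, so inspect the raw payload before relying on specific fields.

```python
import requests

GROUNDING_PIPELINES_PATH = "/v2/lm/document-grounding/pipelines"

def grounding_pipelines_url(base_url):
    # Build the pipelines URL from the AI Core base URL.
    return f"{base_url.rstrip('/')}{GROUNDING_PIPELINES_PATH}"

def list_grounding_pipelines(base_url, headers):
    # GET on the same endpoint used above for pipeline creation (assumed to support listing).
    response = requests.get(grounding_pipelines_url(base_url), headers=headers, timeout=60)
    response.raise_for_status()
    return response.json()
```

Call it with the same headers helper used earlier, e.g. `list_grounding_pipelines(AICORE_BASE_URL, _get_headers())`, and check that your newly created pipeline appears in the output.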
+
+![img](img/image_py_pip.png)
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+
+You can create the pipeline through an API request in Bruno as shown below:
+
+**Endpoint:**
+```
+POST: {{ai_api_url}}/v2/lm/document-grounding/pipelines
+```
+
+**Headers:**
+```bash
+Content-Type: application/json
+Authorization: Bearer
+```
+**Body:**
+
+```json
+{
+  "type": "S3",
+  "configuration": {
+    "destination": "s3-secret"
+  }
+}
+```
+This creates the S3-based grounding pipeline and links it to your previously created generic secret.
+
+![img](img/image_br_pip.png)
+
+[OPTION END]
+
+### Upload Evaluation Files to Object Store and Register Artifact in AI Core
+
+[OPTION BEGIN [SAP AI Launchpad]]
+
+After creating the secret, upload your evaluation files to the S3 bucket and register them as an artifact in AI Core.
+
+#### Register Uploaded Files as Artifact in AI Core
+
+To register your evaluation dataset with SAP AI Core, you need to upload it as an artifact. Follow the instructions below using the **SAP AI Launchpad UI**.
+
+---
+
+- Open the **SAP AI Core Launchpad**.
+- Navigate to the **Generative AI/Optimization/Artifacts** section to create a dataset artifact.
+
+![img](img/image_19.png)
+
+- In the **Artifacts** section, click **Add**.
+
+---
+
+- On the **General Information** screen, enter the following:
+
+  - **Select Scenario:** `genai-evaluations`
+  - **Name:** `genai-eval-test-data`
+  - **Description:** `Demo artifacts for evaluation flow.`
+  - **Select Object Store:** `genai-data`
+  - **Sub-folder path:** `genaiEvaluation/`
+
+  > 💡 Replace `` with your **SAP BTP user ID** or the folder path in your object store where the evaluation files are uploaded.
+
+- On the **Labels** screen, click **"Add Label"** and provide the following:
+
+  - **Key:** `prompt-evaluation`
+  - **Value:** `true`
+  *(Note: The prefix `ext.ai.sap.com/` is automatically pre-filled in the UI.)*
+
+  ![img](img/image_21.png)
+
+- Review all entered details carefully.
+- Click **"Add"** to complete the artifact registration.
+
+[OPTION END]
+
+[OPTION BEGIN [Python]]
+
+After creating the secret, upload your evaluation files to the S3 bucket and register them as an artifact in AI Core.
+
+**Step 5.1: Upload Files to S3 Bucket**
+
+```PYTHON
+# Upload these files to the object store to register them as an artifact inside AI Core
+
+import boto3
+import os
+import uuid
+
+def upload_folder_to_s3(folder_path, bucket_name, s3_prefix=""):
+    """
+    Upload a folder to an S3 bucket recursively.
+
+    :param folder_path: The local folder path to upload.
+    :param bucket_name: The name of the S3 bucket.
+    :param s3_prefix: Optional prefix to use for the S3 keys (e.g., subfolder in the bucket).
+    """
+    s3_client = boto3.client(
+        's3',
+        aws_access_key_id=AWS_ACCESS_KEY,
+        aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
+        region_name=AWS_REGION
+    )
+
+    for root, dirs, files in os.walk(folder_path):
+        for file_name in files:
+            local_path = os.path.join(root, file_name)
+            # Compute the relative path for the S3 key
+            relative_path = os.path.relpath(local_path, folder_path)
+            s3_key = os.path.join(s3_prefix, relative_path).replace("\\", "/")  # Ensure S3-compatible paths
+            print(f"Uploading {local_path} to s3://{bucket_name}/{s3_key}")
+
+            # Upload the file
+            s3_client.upload_file(local_path, bucket_name, s3_key)
+
+# Example usage
+folder_to_upload_testdata = "./DATASET_RAG"
+user_directory_prefix = ""  # replace with your i-number as string here
+# Fall back to a random GUID when no user prefix is set (an empty string counts as unset)
+prefix_guid = user_directory_prefix if user_directory_prefix else str(uuid.uuid4().hex)
+s3_testdata_prefix = f"genaiEvaluation/{prefix_guid}/testdata"  # Leave empty for root of the bucket
+
+upload_folder_to_s3(folder_to_upload_testdata, AWS_BUCKET_ID, s3_testdata_prefix)
+# The ai:// URL references the object store secret registered earlier ("genai-data")
+input_artifact_path = f"ai://genai-data/genaiEvaluation/{prefix_guid}"
+```
+![img](img/image_5.png)
+
+**Step 5.2: Register Uploaded Files as Artifact in AI Core**
+
+```PYTHON
+import json
+import logging
+import requests
+
+# Registering the uploaded files from AWS as artifacts to use inside the configuration.
+
+def register_artifact():
+    headers = _get_headers()
+
+    GET_ARTIFACTS_ENDPOINT = '/v2/lm/artifacts'
+    request_url = f"{AICORE_BASE_URL}{GET_ARTIFACTS_ENDPOINT}"
+
+    request_body = {
+        "labels": [
+            {
+                "key": "ext.ai.sap.com/prompt-evaluation",
+                "value": "true"
+            }
+        ],
+        "name": "genai-eval-test-data",
+        "kind": "other",
+        "url": input_artifact_path,  # input artifact path
+        "description": "demo artifacts for evaluation flow.",
+        "scenarioId": "genai-evaluations"
+    }
+    try:
+        response = requests.post(
+            request_url, headers=headers, data=json.dumps(request_body), timeout=120
+        )
+        result = response.json()
+        print(result)
+        return result['id']
+    except Exception:
+        logging.error("Error occurred while attempting to register the artifact")
+        raise
+
+
+artifact_id = register_artifact()
+```
+![img](img/image_6.png)
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+
+Before registering a dataset artifact in Bruno, you must upload your CSV file to the SAP AI Core object store using the Dataset API.
+Bruno cannot upload files directly to S3; therefore, this step is required.
+
+**Prerequisites**
+
+  - An object store secret must already exist in your resource group. Typically, this is the default secret named **default**.
+
+  - The Dataset API currently supports:
+
+    - S3 object stores only
+
+    - CSV file uploads
+
+**Upload Your Dataset**
+
+Use the Dataset API – Upload File request in Bruno:
+
+```bash
+PUT:{{ai_api_url}}/v2/lm/dataset/files/{{secretName}}/{{datasetPath}}
+```
+
+**Headers**
+
+```json
+Authorization: Bearer {{token}}
+AI-Resource-Group: {{resourceGroup}}
+Content-Type: text/csv
+```
+
+**Body**
+
+Upload your .csv file directly as binary in Bruno's Body.
+
+Example Path Values:
+
+  - secretName: default
+
+  - datasetPath: testdata/emanual.csv
+
+![img](img/image_br_dt.png)
+
+**Note:**
+
+Save the ai://… URL; you will use it when creating the dataset artifact.
+
+**Register the Dataset Artifact**
+
+- Click **Register artifact** under lm -> artifacts in the Bruno collection to register the artifact.
+
+```CODE
+{
+    "name": "aiconfig",
+    "kind": "dataset",
+    "url": "ai://default/testdata/emanual.csv",
+    "scenarioId": "genai-evaluations"
+}
+```
+![img](img/image-br02.png)
+
+[OPTION END]
+
+### Create an Evaluation Configuration
+
+[OPTION BEGIN [SAP AI Launchpad]]
+
+**Create Orchestration Registry Configuration**
+
+The Orchestration Registry allows you to define how different modules, such as prompting, grounding, LLM execution, and safety filters, work together as a single workflow. By creating an orchestration configuration, you specify the exact steps the system will execute for each evaluation input.
+
+- Go to Generative AI Hub → Orchestration → Orchestration Configurations.
+
+- Click **Create**.
+
+- In the **Grounding** section, provide the input variables and the output variable, and under **Data repositories** add the pipeline created in the previous step.
+
+![img](img/image_ail_or1.png)
+
+![img](img/image_ail_or2.png)
+
+- In **Templating**, add the user prompt:
+
+```json
+You are a helpful assistant specialized in SAP-related topics. Answer the following SAP question using the provided context. If the answer is not explicitly available in the context, respond with: `The answer is not available in the provided context.`
+
+Request: {{?topic}}.
+
+Context: {{?groundingOutput}}
+```
+![img](img/image_ail_or3.png)
+
+- Select the model in **Model Configuration** and save.
+
+![img](img/image_ail_or4.png)
+
+![img](img/image_ail_or5.png)
+
+
+To begin evaluating your model, you need to create an Evaluation Configuration using the **genai-evaluations** scenario in SAP AI Core. This configuration defines what to evaluate, using which dataset, with which metrics, and how.
+
+#### Steps to Create Evaluation Configuration
+
+1. Go to Generative AI Hub → Optimization.
+
+2. Click Create to start a new evaluation configuration.
+
+![img](img/image_25.png)
+
+- Under **Test Input / Runs**, select **Orchestration Configuration**.
+
+Then:
+
+  - Select your registered dataset artifact
+
+  - Enter the dataset path (example):
+    testdata/emanual.csv
+
+  - Set the number of test samples (e.g., 20)
+
+  ![img](img/image_26_01.png)
+
+- Click **Next** to go to Metrics selection.
+
+#### Select Evaluation Metrics
+
+Choose the metrics you want to evaluate.
+
+You may choose one or more system-defined or custom metrics, for example:
+
+  - Pointwise RAG Groundedness
+
+  - Pointwise RAG Context Relevance
+
+  - Pointwise RAG Context Precision
+
+  - Pointwise RAG Completeness
+
+![img](img/image_27.png)
+
+---
+
+> 📘 **Helpful Resources**:
+>
+> - [System-Defined Evaluation Metrics – SAP Documentation](https://help.sap.com/docs/sap-ai-core/generative-ai-hub/system-defined-evaluation-metrics)
+> - [Define Your Own Custom Metrics – SAP Guide](https://help.sap.com/docs/sap-ai-core/generative-ai-hub/custom-metrics)
+> *(If your evaluation requires domain-specific or advanced scoring logic)*
+
+> **Note: You may select additional metrics based on your use case.**
+
+---
+
+#### Additional Configuration
+
+- Set **Number of Repetitions** to `1`.
+- Choose an existing deployment for **Orchestration Endpoint**. +- In the **Input Variable Mapping**, enter the following mapping: + + ```json + { + "prompt/question": "data/topic" + } + ``` + ![img](img/image_29.png) +--- +[Learn more about variable mapping](https://help.sap.com/docs/sap-ai-core/generative-ai-hub/variable-mapping) + +#### Final Review & Start + +- Review all the details on the summary page. +- Once confirmed, click **Create** to start the evaluation job. +![img](img/image_40.png) + +> ✅ You have now successfully configured and triggered a Generative AI Evaluation. + +[OPTION END] + +[OPTION BEGIN [Python]] + +To begin evaluating your model programmatically, you need to create an Evaluation Configuration using the **genai-evaluations** scenario in **SAP AI Core**. This configuration defines what to evaluate, which orchestration deployment to use, which dataset and metrics to apply, and how the evaluation will be executed — all through Python. + +#### Create Orchestration Deployment + +Before proceeding with the evaluation configuration, you must first deploy your orchestration workflow. + +An **orchestration deployment URL** is required to run the evaluation. Once the deployment is created, you should wait until its status is **running** and the deployment provides a **URL**. 
+
+**NOTE:** This URL will be used in the configuration definition in the next step.
+
+```PYTHON
+import requests
+import json
+import time
+import logging
+
+
+def create_orchestration_configuration():
+    headers = _get_headers()
+    request_body = {
+        "name": "orchestrationDeployment",
+        "executableId": "orchestration",
+        "scenarioId": "orchestration",
+        "parameterBindings": [
+            {
+                "key": "modelFilterList",
+                "value": "null"
+            },
+            {
+                "key": "modelFilterListType",
+                "value": "allow"
+            }
+        ],
+        "inputArtifactBindings": []
+    }
+
+    GET_CONFIGURATIONS_ENDPOINT = '/v2/lm/configurations'
+    request_url = f"{AICORE_BASE_URL}{GET_CONFIGURATIONS_ENDPOINT}"
+    try:
+        response = requests.post(
+            request_url, headers=headers, data=json.dumps(request_body), timeout=120
+        )
+        print(response)
+        if response.status_code != 201:
+            raise Exception(f"Configuration creation failed: {response.text}")
+        result = response.json()
+        print(result)
+        return result['id']
+    except Exception:
+        logging.error("Error occurred while attempting to create a configuration")
+        raise
+
+def execute_orchestration_deployment(configuration_id):
+    headers = _get_headers()
+    GET_DEPLOYMENTS_ENDPOINT = '/v2/lm/deployments'
+    request_url = f"{AICORE_BASE_URL}{GET_DEPLOYMENTS_ENDPOINT}"
+
+    request_body = {
+        "configurationId": configuration_id
+    }
+
+    try:
+        response = requests.post(
+            request_url, headers=headers, data=json.dumps(request_body), timeout=120
+        )
+        print(response)
+        if response.status_code != 202:
+            raise Exception(f"Deployment execution failed: {response.text}")
+        result = response.json()
+        print(result)
+        return result['id']
+    except Exception:
+        logging.error("Error occurred while attempting to create a deployment")
+        raise
+
+def get_deployment_status(orchestration_deployment_id):
+    """Poll the deployment until it reaches RUNNING; return True on success."""
+    headers = _get_headers()
+    api_url = f"{AICORE_BASE_URL}/v2/lm/deployments/{orchestration_deployment_id}?$select=status"
+    timeout = 400
+    initial_interval = 30
+    pending_interval = 10
+    start = time.time()
+
+    current_interval = initial_interval
+
+    while time.time() - start < timeout:
+        response = requests.get(api_url, headers=headers)
+        if response.status_code == 200:
+            status = response.json().get('status')
+            print(f"Deployment {orchestration_deployment_id} status: {status}")
+            # Adjust the polling interval based on the reported status
+            if status == 'RUNNING':
+                return True
+            elif status == 'UNKNOWN':
+                current_interval = initial_interval
+            elif status == 'PENDING':
+                current_interval = pending_interval
+        else:
+            print(f"Failed to fetch deployment status. HTTP {response.status_code}")
+            return False
+
+        # Wait before the next status check
+        time.sleep(current_interval)
+    return False
+
+def get_deployment_url(orchestration_deployment_id):
+    headers = _get_headers()
+    response = requests.get(f"{AICORE_BASE_URL}/v2/lm/deployments/{orchestration_deployment_id}", headers=headers)
+    if response.status_code != 200:
+        raise Exception(f"Failed to get deployment URL: {response.status_code} - {response.text}")
+    return response.json().get('deploymentUrl')
+
+# You can skip this step if you already have an orchestration deployment running
+deployment_url = DEPLOYMENT_URL
+if not deployment_url:
+    configuration_id = create_orchestration_configuration()
+    orchestration_deployment_id = execute_orchestration_deployment(configuration_id)
+    is_running = get_deployment_status(orchestration_deployment_id)
+    if is_running:
+        deployment_url = get_deployment_url(orchestration_deployment_id)
+        print(f"Deployment URL: {deployment_url}")
+    else:
+        print("Deployment is not running or failed.")
+```
+
+![img](img/image_36.png)
+
+#### Select your Models
+
+Add the LLMs you wish to use in the string `selected_models_str`:
+
+```PYTHON
+# Manual selection of models
+selected_models_str = "gpt-4o:2024-05-13"
+print("Selected models string:", selected_models_str)
+```
+
+#### Select system defined metrics
+
+Add the system-defined metrics you wish to use in the string `selected_metrics_str`.
+
+Note: If your dataset does not have a reference column, do not select metrics that require a reference.
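As a quick sanity check before picking reference-based metrics, you can inspect your dataset's columns with pandas. This is only a sketch: the tiny inline DataFrame stands in for your CSV, and the column name `reference` is an assumption you should adjust to your own dataset schema.

```python
import pandas as pd

# Tiny stand-in for your test dataset (in practice, load your CSV with pd.read_csv);
# the column names here are illustrative only.
df = pd.DataFrame({
    "topic": ["How do I reset the device?"],
    "reference": ["Hold the power button for 10 seconds."],
})

# Only select reference-based metrics if a reference column actually exists
has_reference = "reference" in df.columns
print("Reference column present:", has_reference)
```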
+
+```PYTHON
+# Manual selection of metrics
+selected_metrics_str = "Pointwise RAG Context Precision,Pointwise RAG Completeness"
+print(selected_metrics_str)
+```
+
+#### Create Orchestration Registry Configuration
+
+The following code defines a function `create_orchestration_registry_config()` that creates a new orchestration configuration in the Orchestration Registry.
+
+Note: If you wish to use an existing orchestration configuration, skip executing this cell and add its ID to the `orchestration_registry_id` string in the next cell.
+
+```PYTHON
+def create_orchestration_registry_config():
+    headers = _get_headers()
+
+    CREATE_ORCHESTRATION_REGISTRY = '/v2/registry/v2/orchestrationConfigs'
+    request_url = f"{AICORE_BASE_URL}{CREATE_ORCHESTRATION_REGISTRY}"
+    model_name, model_version = selected_models_str.split(":")
+    request_body = {
+        "name": "genai-eval-test-1",
+        "version": "0.0.1",
+        "scenario": "genai-evaluations",
+        "spec": {
+            "modules": {
+                "prompt_templating": {
+                    "prompt": {
+                        "template": [
+                            {
+                                "role": "user",
+                                "content": "You are a helpful assistant specialized in e-manual topics. Answer the following e-manual questions using the provided context. If the answer is not explicitly available in the context, respond with: `The answer is not available in the provided context.` \\n\\nRequest: {{?topic}}. \\n\\nContext: {{?groundingOutput}}"
+                            }
+                        ],
+                        "defaults": {}
+                    },
+                    "model": {
+                        "name": f"{model_name}",
+                        "version": f"{model_version}"
+                    }
+                },
+                "grounding": {
+                    "type": "document_grounding_service",
+                    "config": {
+                        "filters": [
+                            {
+                                "id": "helpRepo",
+                                "data_repositories": [
+                                    "*"
+                                ],
+                                "search_config": {
+                                    "max_chunk_count": 10
+                                },
+                                "data_repository_type": "help.sap.com"
+                            }
+                        ],
+                        "placeholders": {
+                            "input": [
+                                "topic"
+                            ],
+                            "output": "groundingOutput"
+                        }
+                    }
+                }
+            }
+        }
+    }
+    try:
+        response = requests.post(
+            request_url, headers=headers, data=json.dumps(request_body), timeout=120
+        )
+        if response.status_code != 200:
+            print(response.json())
+            raise Exception(f"Registry configuration creation failed: {response.status_code}")
+        result = response.json()
+        print(result)
+        return result['id']
+    except Exception:
+        logging.error("Error occurred while attempting to create an orchestration registry configuration")
+        raise
+orchestration_registry_id = create_orchestration_registry_config()
+```
+
+#### Define Evaluation Flow Parameters
+
+Below is an example of defining the required input parameters for the prompt evaluation flow.
+
+```PYTHON
+# Defining required input parameters for the prompt evaluation flow
+import json
+test_data_path = f"testdata/testdata/{DATASET_NAME}"  # specify the test data path here; to use the full folder, just specifying testdata will work
+test_datasets = json.dumps({'path': test_data_path, 'type': 'csv'})
+print(test_datasets)
+metrics_list = ",".join([selected_metrics_str])
+models_list = selected_models_str
+print(f"Selected metrics: {metrics_list}")
+print(f"Selected models: {models_list}")
+#variable_mapping = json.dumps({'prompt/question': 'data/topic'})  # to map the question prompt variable to the entry in the dataset
+# orchestration_deployment_url = deployment_url  # set this explicitly to use a specific deployment
+orchestration_deployment_url = deployment_url
+repetitions = "1"
+```
+
+**NOTE: For custom metrics, ensure they follow the structured format `scenario/metric_name/version`, for example `genai-evaluations/groundedness_formatted/0.0.1` or `genai-evaluations/correctness_structured/0.0.1`.**
+
+> 📘 **Helpful Resources**:
+>
+> - [System-Defined Evaluation Metrics – SAP Documentation](https://help.sap.com/docs/sap-ai-core/generative-ai-hub/system-defined-evaluation-metrics)
+>
+> - If your evaluation requires domain-specific or advanced scoring logic, see [Define Your Own Custom Metrics – SAP Guide](https://help.sap.com/docs/sap-ai-core/generative-ai-hub/custom-metrics)
+>
+> - [Learn more about variable mapping](https://help.sap.com/docs/sap-ai-core/generative-ai-hub/variable-mapping)
+>
+Now, we will create an AI Core configuration using the defined parameters.
+
+
+```PYTHON
+# Creating an AI Core configuration.
+import requests
+
+request_body = {
+    "name": "genai-eval-conf",
+    "scenarioId": "genai-evaluations",
+    "executableId": "genai-evaluations-simplified",
+    "inputArtifactBindings": [
+        {
+            "key": "datasetFolder",
+            "artifactId": artifact_id
+        }
+    ],
+    "parameterBindings": [
+        {
+            "key": "repetitions",
+            "value": repetitions
+        },
+        {
+            "key": "orchestrationDeploymentURL",
+            "value": orchestration_deployment_url
+        },
+        {
+            "key": "metrics",
+            "value": metrics_list
+        },
+        {
+            "key": "testDataset",
+            "value": test_datasets
+        },
+        {
+            "key": "orchestrationRegistryIds",
+            "value": orchestration_registry_id
+        },
+        {
+            "key": "testRowCount",
+            "value": "2"
+        }
+    ]
+}
+
+def create_aicore_configuration():
+    headers = _get_headers()
+    GET_CONFIGURATIONS_ENDPOINT = '/v2/lm/configurations'
+    request_url = f"{AICORE_BASE_URL}{GET_CONFIGURATIONS_ENDPOINT}"
+    try:
+        response = requests.post(
+            request_url, headers=headers, data=json.dumps(request_body), timeout=120
+        )
+        print(response)
+        if response.status_code != 201:
+            raise Exception(f"Configuration creation failed: {response.text}")
+        result = response.json()
+        print(result)
+        return result['id']
+    except Exception:
+        logging.error("Error occurred while attempting to create a configuration")
+        raise
+
+configuration_id = create_aicore_configuration()
+```
+![img](img/image_8.png)
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+
+Before setting up your evaluation configuration in **Bruno**, you need to create and deploy the orchestration workflow that powers the evaluation process. This is a prerequisite step and must be completed before proceeding.
+
+The **orchestration deployment URL** is a critical input required in the configuration JSON. Once the orchestration is deployed, ensure its status is **RUNNING** and the **deployment URL** is generated. You will reference this URL while defining the evaluation configuration.
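You can also verify the deployment directly from Bruno with a plain GET request. This is only a sketch: the endpoint matches the one used in the Python option of this tutorial, while the placeholder values in angle brackets are yours to fill in.

```
GET {AICORE_BASE_URL}/v2/lm/deployments/<deployment-id>

Authorization: Bearer <access-token>
AI-Resource-Group: <resource-group>
```

The JSON response contains a `status` field (wait for `RUNNING`) and, once running, the `deploymentUrl` to use in the evaluation configuration.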
+ +> 📘 **Need help deploying the orchestration workflow?** Check the official guide: [Deploy an Orchestration Workflow in SAP AI Core](https://developers.sap.com/tutorials/ai-core-orchestration-consumption.html) + +--- + +#### Sample Evaluation Configuration in Bruno + +Below is a sample configuration payload that you can use in **Bruno** to trigger an evaluation. Update placeholders like `` and `` with actual values. + +```json +{ + "name": "genai-eval-conf", + "scenarioId": "genai-evaluations", + "executableId": "genai-evaluations-simplified", + "inputArtifactBindings": [ + { + "key": "datasetFolder", + "artifactId": "" + } + ], + "parameterBindings": [ + { + "key": "repetitions", + "value": "1" + }, + { + "key": "orchestrationDeploymentURL", + "value": "" + }, + { + "key": "metrics", + "value": "Pointwise RAG Context Precision,Pointwise RAG Completeness" + }, + { + "key": "testDataset", + "value": "{\"path\": \"testdata/emanual.csv\", \"type\": \"csv\"}" + }, + { + "key": "orchestrationRegistryIds", + "value": "" + }, + { + "key": "testRowCount", + "value": "2" + } + ] +} +``` + +![img](img/image-br03.png) + +**NOTE: For custom metrics, ensure they follow the structured format: scenario/metric_name/version — for example, genai-evaluations/groundedness_formatted/0.0.1 or genai-evaluations/correctness_structured/0.0.1.** + +> 📘 **Helpful Resources**: +> +> - [System-Defined Evaluation Metrics – SAP Documentation](https://help.sap.com/docs/sap-ai-core/generative-ai-hub/system-defined-evaluation-metrics) +> +> - **If your evaluation requires domain-specific or advanced scoring logic -** [ Define Your Own Custom Metrics – SAP Guide](https://help.sap.com/docs/sap-ai-core/generative-ai-hub/custom-metrics) +> +> - [Learn more about variable mapping](https://help.sap.com/docs/sap-ai-core/generative-ai-hub/variable-mapping) + +[OPTION END] + +### Evaluation Execution Creation +[OPTION BEGIN [SAP AI Launchpad]] + +- Once the evaluation configuration is created, the system 
automatically triggers an evaluation execution.
+
+- Follow these steps to monitor its progress and verify completion:
+
+    - Navigate to **ML Operations** in the SAP AI Core Launchpad.
+
+    - In the sidebar, click **Executions**.
+
+    ![img](img/image_41.png)
+
+    - Locate the most recent execution triggered by your evaluation configuration. You can use the timestamp or configuration name to identify it.
+
+    - Click on the execution entry to open its details. The Current Status will update as the process runs.
+
+    ![img](img/image_31.png)
+
+- Once the Target Status reaches **COMPLETED**, your evaluation has finished successfully.
+
+![img](img/image_32.png)
+
+> [For more information](https://help.sap.com/docs/sap-ai-core/generative-ai-hub/create-evaluation)
+> You’ve now completed an evaluation run and are ready to view and interpret the results.
+
+[OPTION END]
+
+[OPTION BEGIN [Python]]
+
+After creating the configuration, the next step is to trigger the evaluation workload by creating an AI Core execution.
+
+**Create an Execution with the Created Configuration**
+
+- The code below will initiate the evaluation process based on your configuration.
+
+```PYTHON
+# Create an execution with the created configuration.
+
+import requests
+def create_execution():
+    headers = _get_headers()
+    GET_EXECUTIONS_ENDPOINT = '/v2/lm/executions'
+    request_url = f"{AICORE_BASE_URL}{GET_EXECUTIONS_ENDPOINT}"
+    request_body = {"configurationId": configuration_id}  # replace with your created configuration ID
+    try:
+        response = requests.post(
+            request_url, headers=headers, data=json.dumps(request_body), timeout=120
+        )
+        print("response received is ", response)
+        result = response.json()
+        print(result)
+        return result['id']
+    except Exception:
+        logging.error("Error occurred while attempting to create an execution")
+        raise
+
+
+execution_id = create_execution()
+```
+![img](img/image_11.png)
+
+**Get Execution Status**
+
+Check the status of the triggered execution. 
You’ll need to wait for the status to be **COMPLETED** before moving to the next steps.
+
+```PYTHON
+# Get execution status
+import requests
+def get_execution_status(execution_id):
+    headers = _get_headers()
+    LOG_EXECUTIONS_ENDPOINT = f'/v2/lm/executions/{execution_id}'
+    request_url = f"{AICORE_BASE_URL}{LOG_EXECUTIONS_ENDPOINT}"
+    try:
+        response = requests.get(
+            request_url, headers=headers, timeout=120
+        )
+        print("response received is ", response)
+        result = response.json()
+        return result
+    except Exception:
+        logging.error("Error occurred while attempting to get execution status")
+        raise
+
+
+get_execution_status(execution_id)
+```
+
+- The status field progresses through different states over time:
+UNKNOWN → PENDING → RUNNING → COMPLETED.
+
+![img](img/image_9.png)
+
+![img](img/image_10.png)
+
+![img](img/image_12.png)
+
+- Ensure it reaches COMPLETED before proceeding.
+
+> [For more information](https://help.sap.com/docs/sap-ai-core/generative-ai-hub/create-evaluation)
+
+**NOTE:** After triggering the execution, wait a few minutes, then re-run the **get_execution_status()** function. Once the status is **COMPLETED**, proceed to the next steps.
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+
+After creating the configuration, the next step is to trigger the evaluation workload by creating an AI Core execution.
+
+**Create an Execution with the Created Configuration**
+
+- Click **Create Execution** under **Executions**, and pass the configuration ID created in the previous step.
+
+![img](img/image-br04.png)
+
+- The status field progresses through different states over time:
+UNKNOWN → PENDING → RUNNING → COMPLETED.
+
+**Get Execution Status**
+
+Check the status of the created execution by passing the execution ID. The Current Status will update as the process runs; please refer to the image below.
+
+![img](img/image-br05.png)
+
+[OPTION END]
+
+### Evaluation Results Analysis
+
+[OPTION BEGIN [SAP AI Launchpad]]
+
+#### Retrieve Aggregate Metrics Using Run Name
+
+Once the evaluation workflow execution is completed, this step retrieves the aggregated evaluation metrics from the SAP AI Core service by specifying the run name.
+
+![img](img/image_35.png)
+
+[OPTION END]
+
+[OPTION BEGIN [Python]]
+
+#### Retrieve Aggregate Metrics Using Execution ID
+
+Once the evaluation workflow execution is completed, we can retrieve the aggregated evaluation metrics using the execution ID. These metrics provide a quick summary of the model's performance across all completions.
+
+Below is the Python code that calls the AI Core Tracking API to fetch these aggregated metrics.
+
+```PYTHON
+# Get aggregate metrics using the execution ID
+import requests
+def retrieve_aggregate_metrics(execution_id):
+    headers = _get_headers()
+    GET_METRICS_ENDPOINT = f'/v2/lm/metrics?tagFilters=evaluation.ai.sap.com/child-of={execution_id}'
+    request_url = f"{AICORE_BASE_URL}{GET_METRICS_ENDPOINT}"
+    try:
+        response = requests.get(request_url, headers=headers, timeout=120)
+        print("response received is ", response)
+        result = response.json()
+        return result
+    except Exception:
+        logging.error("Error occurred while attempting to retrieve aggregate metrics for the run")
+        raise
+
+runs_data = retrieve_aggregate_metrics(execution_id)
+```
+
+**Example Output**
+![img](img/image_13.png)
+The **Response [200]** indicates that the request was successful, and the aggregated metrics have been retrieved.
+
+#### Download the Result Artifacts from Object Store for Further Analysis
+
+- To drill down further into the **instance-level metrics, logs, or additional result files**, you can download the **SQLite DB** and other artifacts from object storage.
+
+```PYTHON
+# Download the result artifacts from the object store.
+import boto3 + +def download_all_objects(prefix, destination_folder): + """ + Recursively download all objects from an S3 bucket starting with a specific prefix. + + :param bucket_name: Name of the S3 bucket. + :param prefix: Prefix to filter objects in the bucket. + :param destination_folder: Local folder to save the downloaded files. + """ + s3_client = boto3.client( + 's3', + aws_access_key_id=AWS_ACCESS_KEY, + aws_secret_access_key=AWS_SECRET_ACCESS_KEY, + region_name=AWS_REGION + ) + + # Ensure the destination folder exists + if not os.path.exists(destination_folder): + os.makedirs(destination_folder) + + # Paginate through objects + paginator = s3_client.get_paginator('list_objects_v2') + pages = paginator.paginate(Bucket=AWS_BUCKET_ID, Prefix=prefix) + + for page in pages: + if 'Contents' in page: + for obj in page['Contents']: + key = obj['Key'] + local_file_path = os.path.join(destination_folder, os.path.relpath(key, prefix)) + + # Ensure the local directory structure exists + local_directory = os.path.dirname(local_file_path) + if not os.path.exists(local_directory): + os.makedirs(local_directory) + + # Download the object + print(f"Downloading {key} to {local_file_path}") + s3_client.download_file(AWS_BUCKET_ID, key, local_file_path) + +# Example usage +EXECUTION_ID = execution_id +sqlite_db_prefix = f'{EXECUTION_ID}/evaluation_result/' +destination_folder = 'results-new' + +download_all_objects(sqlite_db_prefix, destination_folder) +``` + +**Sample Output** +![img](img/image_15.png) + +#### Viewing Results from SQLite Database in a Tabular Format + +In this step, we will visualize the evaluation results stored in the SQLite database (results.db) in a clean and readable tabular format directly within the notebook. This allows for quick inspection and validation of the data across different tables such as run, configuration, submission, etc. + +**Objective** + +- Connect to the SQLite database. + +- Query specific tables. 
+
+- Display their contents in a structured HTML format.
+
+- Enhance readability using custom CSS styling.
+
+```PYTHON
+# Viewing the results from the SQLite DB in tabular format
+import sqlite3
+import pandas as pd
+from IPython.display import display, HTML
+
+# Path to your SQLite database file
+db_file = 'results-new/results.db'
+
+connection = sqlite3.connect(db_file)
+
+# Specify the table names you want to display
+table_names = ['run', 'configuration', 'submission', 'submission_result', 'evaluation_result']
+
+# Create the CSS and HTML container
+html_content = """
+<style>
+.table-container { border-collapse: collapse; margin-bottom: 20px; }
+.table-container th, .table-container td { border: 1px solid #ddd; padding: 6px; }
+</style>
+<div>
+"""
+
+for table_name in table_names:
+    query = f"SELECT * FROM {table_name};"
+    df = pd.read_sql_query(query, connection)
+    # If you want to see all the rows across all tables, remove/comment the next line
+    df = df.head(5)  # Limiting the number of rows displayed
+    table_html = df.to_html(classes='table-container', index=False)
+    html_content += f"""
+    <div>
+        <h3>Table: {table_name}</h3>
+        {table_html}
+    </div>
+    """
+
+html_content += "</div>"
+
+display(HTML(html_content))
+
+# Close the connection
+connection.close()
+```
+
+**Output Example**
+
+Below is an example of the output rendered in the notebook.
+![img](img/image_16.png)
+
+#### Delete an Execution by Execution ID (Optional)
+
+Once you have completed the evaluation and gathered the necessary aggregated metrics, you may want to delete the execution associated with a specific run. This helps maintain a clean and organized workspace in your SAP AI Core environment by removing outdated or unnecessary executions.
+
+**NOTE:** Deleting an execution is irreversible. Ensure you have saved all relevant results and metrics before proceeding.
+
+**Delete Execution by ID**
+
+```PYTHON
+# Delete an execution by ID
+def delete_execution():
+    headers = _get_headers()
+    EXEC_ID = execution_id
+    GET_EXECUTIONS_ENDPOINT = '/v2/lm/executions/'
+    request_url = f"{AICORE_BASE_URL}{GET_EXECUTIONS_ENDPOINT}{EXEC_ID}"
+    try:
+        response = requests.delete(
+            request_url, headers=headers, params={"AI-Resource-Group": AICORE_RESOURCE_GROUP}, timeout=120
+        )
+        print(response)
+        if response.status_code != 202:
+            raise Exception(f"Execution deletion failed: {response.text}")
+        result = response.json()
+        print(result)
+    except Exception:
+        logging.error("Error occurred while attempting to delete an execution")
+        raise
+
+delete_execution()
+```
+**How It Works**
+
+- **Execution ID:** This is the unique identifier for the execution you wish to delete. Ensure the `execution_id` variable is properly assigned in your script.
+
+- **DELETE Request:** The function sends an HTTP DELETE request to SAP AI Core’s executions endpoint.
+
+- **Response Handling:**
+    - If the status code is 202 Accepted, the deletion request was successfully initiated.
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+
+**Retrieve Aggregate Metrics Using Execution ID**
+
+Once the evaluation workflow execution is completed, we can retrieve the aggregated evaluation metrics using the execution ID. 
These metrics provide a quick summary of the model's performance across all completions.
+
+![img](img/image-br06.png)
+
+[OPTION END]
+
diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/evaluation_RAG.ipynb b/tutorials/ai-core-genaihub-evaluation-with-grounding/evaluation_RAG.ipynb
new file mode 100644
index 0000000000..bfac281a62
--- /dev/null
+++ b/tutorials/ai-core-genaihub-evaluation-with-grounding/evaluation_RAG.ipynb
@@ -0,0 +1,1430 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Generative AI Custom Evaluation\n",
+    "This is an example notebook which showcases how you can use AI Core custom evaluation to benchmark large language models and evaluate orchestration configurations or prompts for your use case.\n",
+    "It uses the publicly available emanual.csv. The workload computes industry-standard metrics to check the reliability of the responses generated by the LLM.\n",
+    "
**Note: For detailed instructions please refer to [Readme](./Readme.md)**" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# SetUp (Step 1)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "! pip install -r ../requirements.txt" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Load your environment variables\n", + "\n", + "Ensure that your environment variables are set in a `.env` file (see sample.env for an example). If there is a missing field the notebook will prompt you for a value." + ] + }, + { + "cell_type": "code", + "execution_count": 351, + "metadata": {}, + "outputs": [], + "source": [ + "# Loading the credentials from the env file\n", + "from gen_ai_hub.proxy.gen_ai_hub_proxy import GenAIHubProxyClient\n", + "from dotenv import load_dotenv\n", + "import os\n", + "\n", + "load_dotenv(override=True)\n", + "\n", + "\n", + "# Fetching environment variables or prompting the user if missing\n", + "AICORE_BASE_URL = os.getenv(\"AICORE_BASE_URL\") or input(\"AICORE_BASE_URL is missing. Please enter it: \")\n", + "AICORE_RESOURCE_GROUP = os.getenv(\"AICORE_RESOURCE_GROUP\") or input(\"AICORE_RESOURCE_GROUP is missing. Please enter it (default: 'default'): \") or \"default\"\n", + "AICORE_AUTH_URL = os.getenv(\"AICORE_AUTH_URL\") or input(\"AICORE_AUTH_URL is missing. Please enter it: \")\n", + "AICORE_CLIENT_ID = os.getenv(\"AICORE_CLIENT_ID\") or input(\"AICORE_CLIENT_ID is missing. Please enter it: \")\n", + "AICORE_CLIENT_SECRET = os.getenv(\"AICORE_CLIENT_SECRET\") or input(\"AICORE_CLIENT_SECRET is missing. Please enter it: \")\n", + "\n", + "AWS_ACCESS_KEY = os.getenv(\"AWS_ACCESS_KEY\") or input(\"AWS_ACCESS_KEY is missing. Please enter it: \")\n", + "AWS_BUCKET_ID = os.getenv(\"AWS_BUCKET_ID\") or input(\"AWS_BUCKET_ID is missing. Please enter it: \")\n", + "AWS_REGION = os.getenv(\"AWS_REGION\") or input(\"AWS_REGION is missing. 
Please enter it: \")\n", + "AWS_SECRET_ACCESS_KEY = os.getenv(\"AWS_SECRET_ACCESS_KEY\") or input(\"AWS_SECRET_ACCESS_KEY is missing. Please enter it: \")\n", + "DEPLOYMENT_URL = os.getenv(\"DEPLOYMENT_URL\", None)\n", + "AWS_USERNAME = os.getenv(\"AWS_USERNAME\")\n", + "AWS_HOST = os.getenv(\"AWS_HOST\")\n", + "\n", + "# Initializing the GenAIHubProxyClient\n", + "client = GenAIHubProxyClient(\n", + " base_url=AICORE_BASE_URL,\n", + " auth_url=AICORE_AUTH_URL,\n", + " client_id=AICORE_CLIENT_ID,\n", + " client_secret=AICORE_CLIENT_SECRET,\n", + " resource_group=AICORE_RESOURCE_GROUP\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Dependencies and Helper Functions (Step 2)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "import json\n", + "\n", + "\n", + "\n", + "def get_dataset_file_name(folder_path):\n", + " \"\"\"\n", + " Retrieves the name of the first file in the specified folder.\n", + " \"\"\"\n", + " if not os.path.isdir(folder_path):\n", + " print(f\"The folder path '{folder_path}' does not exist.\")\n", + " return None\n", + "\n", + " items_in_folder = os.listdir(folder_path)\n", + "\n", + " for item in items_in_folder:\n", + " item_path = os.path.join(folder_path, item)\n", + " if os.path.isfile(item_path):\n", + " return item\n", + "\n", + " print(f\"No files were found in the folder '{folder_path}'.\")\n", + " return None\n", + "\n", + "\n", + "\n", + "# --- MAIN EXECUTION ---\n", + "DATASET_FOLDER = \"../DATASET\"\n", + "\n", + "DATASET_NAME = get_dataset_file_name(DATASET_FOLDER)\n", + "\n", + "if DATASET_NAME:\n", + " print(f\"Dataset name: {DATASET_NAME}\")\n", + "else:\n", + " print(\"Missing run or dataset file.\")\n", + " raise SystemExit(\"Exiting due to missing run/dataset file.\")\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create a Bearer token" + ] + }, + { + "cell_type": "code", + 
"execution_count": 360,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Response status code: \n"
+     ]
+    }
+   ],
+   "source": [
+    "import requests\n",
+    "def create_token():\n",
+    "\n",
+    "    payload = {\n",
+    "        'grant_type': 'client_credentials',\n",
+    "        'client_id': AICORE_CLIENT_ID,\n",
+    "        'client_secret': AICORE_CLIENT_SECRET\n",
+    "    }\n",
+    "    response = requests.post(AICORE_AUTH_URL, data=payload)\n",
+    "    print(f\"Response status code: {response}\")\n",
+    "    response_data = response.json()\n",
+    "    if 'access_token' in response_data:\n",
+    "        return response_data['access_token']\n",
+    "    else:\n",
+    "        raise Exception(f\"Failed to get token: {response_data}\")\n",
+    "token = create_token()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Create a Resource Group"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "\n",
+    "If you already have a resource group provisioned with grounding and RAG enabled, you can add the name of your resource group at `user_resource_group_id`.\n",
+    "\n",
+    "**Note: the \"labels\" config is required to enable your resource group to use grounding and RAG metrics. 
Ensure you set the value to true**" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import requests\n", + "def create_resource_group():\n", + " headers = {\n", + " 'Authorization': f'Bearer {token}',\n", + " 'Content-Type': 'application/json',\n", + " }\n", + " resource_group_id =f\"rag-notebook-test\"\n", + " api_url = f\"{AICORE_BASE_URL}/v2/admin/resourceGroups\"\n", + " payload = {\n", + " \"resourceGroupId\": resource_group_id,\n", + " \"labels\": [\n", + " {\n", + " \"key\": \"ext.ai.sap.com/document-grounding\",\n", + " \"value\": \"true\"\n", + " }\n", + " ]\n", + " }\n", + " response = requests.post(api_url, json=payload, headers=headers)\n", + " if response.status_code == 202:\n", + " return resource_group_id\n", + " else:\n", + " raise Exception(f\"Failed to create resource group: {response.json()}\")\n", + "\n", + "user_resource_group_id = \"\" # add your provisioned resource group id here, if you have one\n", + "resource_group_id = user_resource_group_id or create_resource_group()\n", + "print(f\"Resource Group created with ID: {resource_group_id}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Register an Object Store Secret\n", + "To use the evaluations service, you must register an object store with the name default. Optionally, you can register an additional object store with a name of your choice." 
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 361,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Set up authentication and headers needed for AI Core requests\n",
+    "def _get_headers():\n",
+    "    headers = {\n",
+    "        \"Authorization\": client.get_ai_core_token(),\n",
+    "        \"AI-Resource-Group\": resource_group_id,\n",
+    "        \"Content-Type\": \"application/json\",\n",
+    "    }\n",
+    "    return headers"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Register the S3 secret with AI Core, which will be used as an input source\n",
+    "import requests\n",
+    "import json\n",
+    "import logging\n",
+    "\n",
+    "def delete_oss_secret(oss_name=\"\"):\n",
+    "    headers = _get_headers()\n",
+    "\n",
+    "    DELETE_SECRETS_ENDPOINT = f'/v2/admin/objectStoreSecrets/{oss_name}'\n",
+    "    request_url = f\"{AICORE_BASE_URL}{DELETE_SECRETS_ENDPOINT}\"\n",
+    "\n",
+    "    try:\n",
+    "        response = requests.delete(request_url, headers=headers, timeout=120)\n",
+    "        if response.status_code == 202:\n",
+    "            print(f\"Successfully deleted object store secret: {oss_name}\")\n",
+    "        elif response.status_code == 404:\n",
+    "            print(f\"Object store secret not found: {oss_name}. 
It may not exist.\")\n", + " else:\n", + " logging.error(f\"Failed to delete object store secret: {oss_name}, Status Code: {response.status_code}\")\n", + " except Exception as e:\n", + " logging.error(f\"Error occurred while attempting to delete object store secret: {e}\")\n", + " raise\n", + "\n", + "def register_oss_secret(oss_name=\"\", path_prefix=\"\"):\n", + " headers = _get_headers()\n", + " \n", + " POST_SECRETS_ENDPOINT = '/v2/admin/objectStoreSecrets'\n", + " request_url = f\"{AICORE_BASE_URL}{POST_SECRETS_ENDPOINT}\"\n", + " \n", + " request_body = {\n", + " \"name\": oss_name,\n", + " \"data\": {\n", + " \"AWS_ACCESS_KEY_ID\": AWS_ACCESS_KEY,\n", + " \"AWS_SECRET_ACCESS_KEY\": AWS_SECRET_ACCESS_KEY\n", + " },\n", + " \"type\": \"S3\",\n", + " \"bucket\": AWS_BUCKET_ID,\n", + " \"endpoint\": \"s3-eu-central-1.amazonaws.com\",\n", + " \"region\": AWS_REGION,\n", + " \"pathPrefix\": path_prefix,\n", + " \"verifyssl\": \"0\",\n", + " \"usehttps\": \"1\",\n", + " }\n", + " try:\n", + " response = requests.post(\n", + " request_url, headers=headers, data=json.dumps(request_body), timeout=120\n", + " )\n", + " result = response.json()\n", + " return result\n", + " except:\n", + " logging.error(\"Error occurred while attempting to create object store secret\")\n", + " raise\n", + " \n", + "delete_oss_secret(oss_name=\"default\")\n", + "delete_oss_secret(oss_name=\"genai-simplified-notebook\")\n", + " \n", + "register_oss_secret(oss_name=\"default\", path_prefix=\"\")\n", + "register_oss_secret(oss_name=\"genai-simplified-notebook\", path_prefix=\"\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create a Grounding Secret\n", + "\n", + "In the next step, we create a secret that enables grounding by adding on the \"labels\" config. This generic secret needs to be created to provide details of the hyperscaler and bucket details so that grounding service will know how to retrieve data from it." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import time\n", + "import base64\n", + "def encode_base64(value):\n", + " return base64.b64encode(value.encode('utf-8')).decode('utf-8')\n", + " \n", + "def create_generic_secret():\n", + " payload ={\n", + " \"name\": \"groundingsecret\",\n", + " \"data\": {\n", + " \"url\": encode_base64(\"https://s3-eu-central-1.amazonaws.com\"), \n", + " \"authentication\": encode_base64(\"NoAuthentication\"),\n", + " \"description\": encode_base64(\"grounding secret\"),\n", + " \"access_key_id\": encode_base64(AWS_ACCESS_KEY),\n", + " \"bucket\": encode_base64(AWS_BUCKET_ID),\n", + " \"host\": encode_base64(AWS_HOST),\n", + " \"region\": encode_base64(\"eu-central-1\"),\n", + " \"secret_access_key\": encode_base64(AWS_SECRET_ACCESS_KEY),\n", + " \"username\": encode_base64(AWS_USERNAME),\n", + " },\n", + " \"labels\": [\n", + " {\n", + " \"key\": \"ext.ai.sap.com/document-grounding\",\n", + " \"value\": \"true\"\n", + " },\n", + " {\n", + " \"key\": \"ext.ai.sap.com/documentRepositoryType\",\n", + " \"value\": \"S3\"\n", + " }\n", + " ]\n", + "}\n", + " time.sleep(60)\n", + " try:\n", + " headers = _get_headers()\n", + " api_url = f\"{AICORE_BASE_URL}/v2/admin/secrets\"\n", + " response = requests.post(api_url, headers=headers, json=payload)\n", + " if(response.status_code == 200):\n", + " print(\"Generic secret created successfully\")\n", + " else:\n", + " print(f\"Failed to create generic secret: {response}\")\n", + " except Exception as e:\n", + " print(f\"Error creating secret: {e}\")\n", + "create_generic_secret()\n", + " " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create a Grounding Pipeline\n", + "\n", + "This step creates the connection to where you have stored your grounding documents and allows a path to retrieve the documents during evaluation " + ] + }, + { + "cell_type": "code", + "execution_count": null, + 
"metadata": {}, + "outputs": [], + "source": [ + "def create_s3_grounding_pipeline():\n", + "    headers = _get_headers()\n", + "    api_url = f\"{AICORE_BASE_URL}/v2/lm/document-grounding/pipelines\"\n", + "    payload = {\n", + "        \"type\": \"S3\",\n", + "        \"configuration\": {\n", + "            \"destination\": \"groundingsecret\"\n", + "        }\n", + "    }\n", + "    time.sleep(5) # Optional wait for secret availability\n", + "\n", + "    try:\n", + "        response = requests.post(api_url, headers=headers, json=payload)\n", + "        if response.status_code == 201:\n", + "            print(\"S3 document grounding pipeline created successfully\")\n", + "        else:\n", + "            print(f\"Failed to create pipeline. Status: {response.status_code}, Response: {response.text}\")\n", + "    except Exception as e:\n", + "        print(f\"Error creating S3 document grounding pipeline: {e}\")\n", + "create_s3_grounding_pipeline()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Note: Check that the next step runs successfully to ensure everything is set up properly.**" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def test_get_retrieval_repository(headers):\n", + "    # To ensure chunking happens.\n", + "    api_url = f\"{AICORE_BASE_URL}/v2/lm/document-grounding/retrieval/dataRepositories\"\n", + "\n", + "    try:\n", + "        response = requests.get(api_url, headers=headers)\n", + "        print(\"Check that the S3 repository is listed in the body:\", response.json())\n", + "        if response.status_code == 200:\n", + "            print(\"S3 document retrieval successful\")\n", + "        else:\n", + "            raise Exception(f\"Failed to retrieve data repositories. 
Status: {response.status_code}, Response: {response.text}\")\n", + "    except Exception as e:\n", + "        raise Exception(f\"Error retrieving data repositories: {e}\")\n", + "test_get_retrieval_repository(_get_headers())" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Uploading these files to the object store to register them as an artifact inside AI Core\n", + "\n", + "import boto3\n", + "import os\n", + "import uuid\n", + "\n", + "def upload_folder_to_s3(folder_path, bucket_name, s3_prefix=\"\"):\n", + "    \"\"\"\n", + "    Upload a folder to an S3 bucket recursively.\n", + "\n", + "    :param folder_path: The local folder path to upload.\n", + "    :param bucket_name: The name of the S3 bucket.\n", + "    :param s3_prefix: Optional prefix to use for the S3 keys (e.g., subfolder in the bucket).\n", + "    \"\"\"\n", + "    s3_client = boto3.client(\n", + "        's3',\n", + "        aws_access_key_id=AWS_ACCESS_KEY,\n", + "        aws_secret_access_key=AWS_SECRET_ACCESS_KEY,\n", + "        region_name=AWS_REGION\n", + "    )\n", + "\n", + "    for root, dirs, files in os.walk(folder_path):\n", + "        for file_name in files:\n", + "            print(\"file name is \", file_name)\n", + "            local_path = os.path.join(root, file_name)\n", + "            # Compute the relative path for the S3 key\n", + "            relative_path = os.path.relpath(local_path, folder_path)\n", + "            s3_key = os.path.join(s3_prefix, relative_path).replace(\"\\\\\", \"/\") # Ensure S3-compatible paths\n", + "            print(\"s3 key is \", s3_key)\n", + "            print(f\"Uploading {local_path} to s3://{bucket_name}/{s3_key}\")\n", + "    \n", + "            # Upload the file\n", + "            s3_client.upload_file(local_path, bucket_name, s3_key)\n", + "\n", + "# Example usage\n", + "folder_to_upload_testdata = \"../DATASET_RAG\"\n", + "user_directory_prefix = \"\" # replace with your i-number as string here\n", + "prefix_guid = user_directory_prefix if user_directory_prefix else str(uuid.uuid4().hex)  # fall back to a random GUID when no prefix is set\n", + "s3_testdata_prefix = 
f\"genaiEvaluation/{prefix_guid}/testdata\" # Leave empty for root of the bucket\n", + "\n", + "\n", + "upload_folder_to_s3(folder_to_upload_testdata, AWS_BUCKET_ID, s3_testdata_prefix)\n", + "input_artifact_path = f\"ai://genai-simplified-notebook/genaiEvaluation/{prefix_guid}\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The user stores the input files in the object store and registers the root folder as an artifact with AI Core. The File Upload and Artifact endpoints of the AI Core API may be used for this purpose. In this example, `genaiEvaluation/{prefix_guid}` is the root folder containing the orchestration configurations and test data, and it is registered as an AI Core artifact." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import requests\n", + "import logging\n", + "# Registering the uploaded files from AWS as artifacts to use inside the configuration.\n", + "\n", + "def register_artifact():\n", + "    headers = _get_headers()\n", + "    \n", + "    GET_ARTIFACTS_ENDPOINT = '/v2/lm/artifacts'\n", + "    request_url = f\"{AICORE_BASE_URL}{GET_ARTIFACTS_ENDPOINT}\"\n", + "    \n", + "    request_body = {\n", + "        \"labels\": [\n", + "            {\n", + "                \"key\": \"ext.ai.sap.com/prompt-evaluation\",\n", + "                \"value\": \"true\"\n", + "            }\n", + "        ],\n", + "        \"name\": \"genai-eval-simplified-test-data\",\n", + "        \"kind\": \"other\",\n", + "        \"url\": input_artifact_path, # input artifact path\n", + "        \"description\": \"demo artifacts for evaluation flow.\",\n", + "        \"scenarioId\": \"genai-evaluations\"\n", + "    }\n", + "    try:\n", + "        response = requests.post(\n", + "            request_url, headers=headers, data=json.dumps(request_body), timeout=120\n", + "        )\n", + "        result = response.json()\n", + "        print(result)\n", + "        return result['id']\n", + "    except:\n", + "        print(\"Error occurred while attempting to register an artifact\")\n", + "        raise\n", + "    \n", + "\n", + "artifact_id = register_artifact()" + ] + }, + { + 
"cell_type": "markdown", + "metadata": {}, + "source": [ + "## Create Orchestration Deployment\n", + "An orchestration deployment URL is required to run our evaluation. Once created, we need to wait until the deployment is running and provides a deployment URL, which will be added to our configuration file in the next step. You can skip this step if you already have an orchestration deployment running." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import requests\n", + "import json\n", + "import time\n", + "\n", + "\n", + "\n", + "def create_orchestration_configuration():\n", + "    headers = _get_headers()\n", + "    request_body = {\n", + "        \"name\": \"orchestrationDeployment\",\n", + "        \"executableId\": \"orchestration\",\n", + "        \"scenarioId\": \"orchestration\",\n", + "        \"parameterBindings\": [\n", + "            {\n", + "                \"key\": \"modelFilterList\",\n", + "                \"value\": \"null\"\n", + "            },\n", + "            {\n", + "                \"key\": \"modelFilterListType\",\n", + "                \"value\": \"allow\"\n", + "            }\n", + "        ],\n", + "        \"inputArtifactBindings\": []\n", + "    }\n", + "    \n", + "    GET_CONFIGURATIONS_ENDPOINT = '/v2/lm/configurations'\n", + "    request_url = f\"{AICORE_BASE_URL}{GET_CONFIGURATIONS_ENDPOINT}\"\n", + "    try:\n", + "        response = requests.post(\n", + "            request_url, headers=headers, data=json.dumps(request_body), timeout=120\n", + "        )\n", + "        print(response)\n", + "        if(response.status_code != 201):\n", + "            raise\n", + "        result = response.json()\n", + "        print(result)\n", + "        return result['id']\n", + "    except:\n", + "        logging.error(\"Error occurred while attempting to create a Configuration\")\n", + "        raise\n", + "    \n", + "def execute_orchestration_deployment(configuration_id):\n", + "    headers = _get_headers()\n", + "    GET_DEPLOYMENTS_ENDPOINT = '/v2/lm/deployments'\n", + "    request_url = f\"{AICORE_BASE_URL}{GET_DEPLOYMENTS_ENDPOINT}\"\n", + "    \n", + "    request_body = {\n", + "        \"configurationId\": configuration_id\n", + "    
}\n", + " \n", + " try:\n", + " response = requests.post(\n", + " request_url, headers=headers, data=json.dumps(request_body), timeout=120\n", + " )\n", + " print(response)\n", + " if(response.status_code != 202):\n", + " print(\"Deployment execution failed\")\n", + " result = response.json()\n", + " print(result)\n", + " return result['id']\n", + " \n", + " except:\n", + " logging.error(\"Error occurred while attempting to create an execution\")\n", + " raise\n", + "\n", + "def get_deployment_status(orchestration_deployment_id):\n", + " headers = _get_headers()\n", + " api_url = f\"{AICORE_BASE_URL}/v2/lm/deployments/{orchestration_deployment_id}?$select=status\"\n", + " timeout = 400 \n", + " initial_interval = 30 \n", + " pending_interval = 10\n", + " start = time.time()\n", + "\n", + " status = None\n", + " current_interval = initial_interval\n", + "\n", + " while time.time() - start < timeout:\n", + " response = requests.get(api_url, headers=headers)\n", + " if response.status_code == 200:\n", + " status = response.json().get('status')\n", + " print(f\"Deployment {orchestration_deployment_id} status: {status}\")\n", + " # Adjust polling interval based on status\n", + " if status == 'RUNNING':\n", + " return True\n", + " elif status == 'UNKNOWN':\n", + " current_interval = initial_interval\n", + " elif status == 'PENDING':\n", + " current_interval = pending_interval\n", + "\n", + " else:\n", + " print(f\"Failed to fetch deployment status. 
HTTP {response.status_code}\")\n", + " return False\n", + "\n", + " # Waiting according to status for API call\n", + " time.sleep(current_interval)\n", + "\n", + "def get_deployment_url(orchestration_deployment_id):\n", + " headers = _get_headers()\n", + " response = requests.get(f\"{AICORE_BASE_URL}/v2/lm/deployments/{orchestration_deployment_id}\", headers=headers)\n", + " if response.status_code != 200:\n", + " raise Exception(f\"Failed to get deployment URL: {response.status_code} - {response.text}\")\n", + " return response.json().get('deploymentUrl')\n", + "\n", + "# You can skip this step if you already have a orchestration deployment running\n", + "deployment_url = DEPLOYMENT_URL\n", + "if not deployment_url:\n", + " configuration_id = create_orchestration_configuration()\n", + " orchestration_deployment_id = execute_orchestration_deployment(configuration_id)\n", + " is_running = get_deployment_status(orchestration_deployment_id) \n", + " if is_running:\n", + " deployment_url = get_deployment_url(orchestration_deployment_id)\n", + " print(f\"Deployment URL: {deployment_url}\")\n", + " else:\n", + " print(\"Deployment is not running or failed.\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Manually set the orchestration deployment url\n", + "# deployment_url=\"\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Select your Models\n", + " \n", + "Add the LLMs you wish to use in the string `selected_models_str`\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": 368, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Selected models string: gpt-4o:2024-05-13\n" + ] + } + ], + "source": [ + "# Manual selection of models\n", + "selected_models_str=\"gpt-4o:2024-05-13\"\n", + "print(\"Selected models string:\", selected_models_str)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + 
"source": [ + "## Select system-defined metrics\n", + " \n", + "Add the system-defined metrics you wish to use in the string `selected_metrics_str`.\n", + "\n", + "**Note: If your dataset does not have a reference column, DO NOT select metrics where a reference is required.**" + ] + }, + { + "cell_type": "code", + "execution_count": 369, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Pointwise RAG Context Precision,Pointwise RAG Completeness\n" + ] + } + ], + "source": [ + "# Manual Selection of Metrics\n", + "selected_metrics_str = \"Pointwise RAG Context Precision,Pointwise RAG Completeness\"\n", + "print(selected_metrics_str)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Custom Metric Creation and Selection\n", + "This script checks for an evaluation metric in SAP AI Core.\n", + "\n", + "1. You can provide metric IDs directly by setting the variable as a comma-separated string:\n", + "   user_metric_ids = `\"\"`\n", + "   - ✅ If the ID exists, it will be returned.\n", + "   \n", + "2. You can create a new custom metric by adding its JSON to the `custom_metric_list` list\n", + "   - The script will use the contents of the `custom_metric_list`\n", + "     to search for an existing metric by scenario + name + version.\n", + "\n", + "3. If no existing metric is found:\n", + "   - A new metric will be created using the details in `custom_metric_list`.\n", + "   - Required fields in custom_metric: scenario, name, version, evaluationMethod.\n", + "\n", + "4. 
At the end:\n", + " - The script prints the final Metric ID that was found or created.\n", + "\n", + "Note: Skip the two following cell if you do not want to create/select a custom metric for your workload" + ] + }, + { + "cell_type": "code", + "execution_count": 370, + "metadata": {}, + "outputs": [], + "source": [ + "user_metric_ids = \"d1868b00-1601-407a-92cd-0b9065682d1f,dbf56851-8444-45d3-a0c1-adbe210c7e771\"\n", + "\n", + "custom_metric_list = [\n", + " {\n", + " \"name\": \"test-metric\",\n", + " \"scenario\": \"genai-evaluations-test\",\n", + " \"version\": \"0.0.1\",\n", + " \"evaluationMethod\": \"llm-as-a-judge\",\n", + " \"managedBy\": \"imperative\",\n", + " \"systemPredefined\": False,\n", + " \"metricType\": \"evaluation\",\n", + " \"spec\": {\n", + " \"outputType\": \"numerical\",\n", + " \"promptType\": \"structured\",\n", + " \"configuration\": {\n", + " \"modelConfiguration\": {\n", + " \"name\": \"gpt-4o\",\n", + " \"version\": \"2024-05-13\",\n", + " \"parameters\": [\n", + " {\n", + " \"key\": \"max_tokens\",\n", + " \"value\": \"10000\"\n", + " }\n", + " ]\n", + " },\n", + " \"promptConfiguration\": {\n", + " \"definition\": \"You will be assessing Groundedness (also known as Faithfulness), which measures whether the response relies solely on the provided context and avoids introducing external information or making claims not supported by it.\",\n", + " \"evaluationTask\": \"You are an expert evaluator. Your task is to evaluate the groundedness of responses generated by AI models based on provided context.\\nWe will provide you with the provided context (information the AI was supposed to use) and the AI-generated response. 
The original user query is also provided for background.\\nYou should first read the provided context carefully, then evaluate if the response is fully supported by this context, based on the criteria provided in the Evaluation section below.\\nYou will assign the response a rating following the Rating Rubric and Evaluation Steps.\\nGive step-by-step explanations for your rating, and only choose ratings from the Rating Rubric.\",\n", + " \"criteria\": \"Groundedness: Is all the information presented in the response verifiable against the provided context? Does the response avoid making claims or stating facts not present in the context?\",\n", + " \"ratingRubric\": [\n", + " {\n", + " \"rating\": 3,\n", + " \"rule\": \"Response is completely factual with no unsupported claims\"\n", + " },\n", + " {\n", + " \"rating\": 2,\n", + " \"rule\": \"Response has minor inaccuracies but no major contradictions\"\n", + " },\n", + " {\n", + " \"rating\": 1,\n", + " \"rule\": \"Response contains significant factual errors or hallucinations\"\n", + " }\n", + " ]\n", + " }\n", + " }\n", + " },\n", + " \"includeProperties\": [\n", + " \"grounding_response\"\n", + " ],\n", + " \"additionalProperties\": {\n", + " \"variables\": [],\n", + " \"supported_values\": [\n", + " 1,\n", + " 3\n", + " ],\n", + " \"experimental\": False\n", + " }\n", + "}\n", + "]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "import json\n", + "import requests\n", + "\n", + "\n", + "# --- Fetch all metrics from SAP AI Core ---\n", + "def fetch_all_metrics():\n", + " request_url = f\"{AICORE_BASE_URL}/v2/lm/evaluationMetrics\"\n", + " resp = requests.get(request_url, headers=_get_headers())\n", + " resp.raise_for_status()\n", + " return resp.json().get(\"resources\", [])\n", + "\n", + "# --- Create or fetch a metric ---\n", + "def create_or_get_metric(custom_metric, user_metric_id=None):\n", + " all_metrics = 
fetch_all_metrics()\n", + "\n", + " # 1️⃣ User-supplied ID lookup\n", + " if user_metric_id:\n", + " for m in all_metrics:\n", + " if m.get(\"id\") == user_metric_id:\n", + " print(f\"✅ Metric already exists by ID: {user_metric_id}\")\n", + " return user_metric_id\n", + " print(f\"⚠️ User metric ID {user_metric_id} not found, will only include if valid later\")\n", + "\n", + " # 2️⃣ Check by scenario, name, version\n", + " scenario = custom_metric.get(\"scenario\")\n", + " name = custom_metric.get(\"name\")\n", + " version = custom_metric.get(\"version\")\n", + " if not all([scenario, name, version]):\n", + " raise ValueError(\"Metric must include 'scenario', 'name', and 'version'\")\n", + "\n", + " for m in all_metrics:\n", + " if (m.get(\"scenario\") == scenario and\n", + " m.get(\"name\") == name and\n", + " m.get(\"version\") == version):\n", + " metric_id = m.get(\"id\")\n", + " print(f\"✅ Metric already exists: {scenario}/{name} v{version}, ID = {metric_id}\")\n", + " return metric_id\n", + "\n", + " # 3️⃣ Create metric if not found\n", + " request_url = f\"{AICORE_BASE_URL}/v2/lm/evaluationMetrics\"\n", + " required_fields = [\"scenario\", \"name\", \"version\", \"evaluationMethod\", \"metricType\"]\n", + " for f in required_fields:\n", + " if f not in custom_metric:\n", + " raise ValueError(f\"❌ Missing required field: {f}\")\n", + "\n", + " resp = requests.post(request_url, headers=_get_headers(), json=custom_metric)\n", + " resp.raise_for_status()\n", + " metric_id = resp.json().get(\"id\")\n", + " print(f\"✅ Metric created successfully: {name} v{version}, ID = {metric_id}\")\n", + " return metric_id\n", + "\n", + "# --- Main pipeline ---\n", + "\n", + "# 1️⃣ Create/fetch metrics from SAP AI Core\n", + "metric_ids = []\n", + "for metric in custom_metric_list:\n", + " try:\n", + " print(f\"metric:{metric}\")\n", + " metric_id = create_or_get_metric(metric)\n", + " metric_ids.append(metric_id)\n", + " except ValueError as e:\n", + " print(f\"Skipping metric 
due to error: {e}\")\n", + "\n", + "# 2️⃣ Validate user_metric_ids separately if provided\n", + "if user_metric_ids and user_metric_ids.strip():\n", + " all_metrics = fetch_all_metrics()\n", + " # Split comma-separated IDs and strip whitespace\n", + " for uid in [uid.strip() for uid in user_metric_ids.split(\",\")]:\n", + " if any(m.get(\"id\") == uid for m in all_metrics):\n", + " metric_ids.append(uid)\n", + " else:\n", + " print(f\"⚠️ User metric ID {uid} does not exist in AI Core, skipping.\")\n", + "# 3️⃣ Convert to comma-separated string\n", + "custom_metric_ids_str = \",\".join(metric_ids)\n", + "print(\"✅ All processed metric IDs:\", custom_metric_ids_str)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create Orchestration Registry Configuration\n", + "\n", + "The following code defines a function `create_orchestration_registry_config()` that creates a new **Orchestration Configuration** in **Orchestration Registry**.\n", + "\n", + "**Note** : If you wish to use an existing orchestration config, skip executing this cell and add the orchestration config id in `orchestration_registry_id` string in the next cell." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{'message': 'Orchestration config updated successfully.', 'id': '22ba2b67-ca81-41ab-989e-cd63a54a6499', 'scenario': 'genai-evaluations', 'name': 'genai-eval-test-1', 'version': '0.0.1'}\n" + ] + } + ], + "source": [ + "def create_orchestration_registry_config():\n", + " headers = _get_headers()\n", + " \n", + " CREATE_ORCHESTRATION_REGISTRY = '/v2/registry/v2/orchestrationConfigs'\n", + " request_url = f\"{AICORE_BASE_URL}{CREATE_ORCHESTRATION_REGISTRY}\"\n", + " model_name,model_version=selected_models_str.split(\":\")\n", + " request_body = {\n", + " \"name\": \"genai-eval-test-1\",\n", + " \"version\": \"0.0.1\",\n", + " \"scenario\": \"genai-evaluations\",\n", + " \"spec\": {\n", + " \"modules\": {\n", + " \"prompt_templating\": {\n", + " \"prompt\": {\n", + " \"template\": [\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": \"You are a helpful assistant specialized in e-manual topics. Answer the following e-manual questions using the provided context. If the answer is not explicitly available in the context, respond with: `The answer is not available in the provided context.` \\\\n\\\\nRequest: {{?topic}}. 
\\\\n\\\\nContext: {{?groundingOutput}}\"\n", + " }\n", + " ],\n", + " \"defaults\": {}\n", + " },\n", + " \"model\": {\"name\": f\"{model_name}\", \"version\": f\"{model_version}\",\n", + " },\n", + " },\n", + " \"grounding\": {\n", + " \"type\": \"document_grounding_service\",\n", + " \"config\": {\n", + " \"filters\": [\n", + " {\n", + " \"id\": \"helpRepo\",\n", + " \"data_repositories\": [\n", + " \"*\"\n", + " ],\n", + " \"search_config\": {\n", + " \"max_chunk_count\": 10\n", + " },\n", + " \"data_repository_type\": \"help.sap.com\"\n", + " }\n", + " ],\n", + " \"placeholders\": {\n", + " \"input\": [\n", + " \"topic\"\n", + " ],\n", + " \"output\": \"groundingOutput\"\n", + " }\n", + " }\n", + " }\n", + " }\n", + " }\n", + " }\n", + " try:\n", + " response = requests.post(\n", + " request_url, headers=headers, data=json.dumps(request_body), timeout=120\n", + " )\n", + " if(response.status_code != 200):\n", + " print(response.json())\n", + " raise\n", + " result = response.json()\n", + " print(result)\n", + " return result['id']\n", + " except:\n", + " logging.error(\"Error occurred while attempting to create a orchestration registry id\")\n", + " raise\n", + "orchestration_registry_id = create_orchestration_registry_config()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Manually set orchestration config id\n", + "# orchestration_registry_id=\"\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Evaluation Configuration Creation" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "\n", + "import json\n", + "test_data_path = f\"testdata/testdata/{DATASET_NAME}\" # specify the test data path here. 
For the full folder just specifying testdata will work\n", + "test_datasets = json.dumps({'path': test_data_path, 'type': 'csv'})\n", + "print(test_datasets)\n", + "metrics_list = \",\".join([selected_metrics_str,custom_metric_ids_str])\n", + "models_list = selected_models_str\n", + "print(f\"Selected metrics: {metrics_list}\")\n", + "print(f\"Selected models: {models_list}\")\n", + "#variable_mapping = json.dumps({'prompt/question': 'data/topic'}) # to map the question prompt variable to the entry in dataset.\n", + "# orchestration_deployment_url = deployment_url # needs to specify this to use a specific deployment id\n", + "orchestration_deployment_url = deployment_url\n", + "repetitions = \"1\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# creating an AICORE Configuration.\n", + "import requests\n", + "\n", + "request_body = {\n", + " \"name\": \"genai-eval-conf\",\n", + " \"scenarioId\": \"genai-evaluations\",\n", + " \"executableId\": \"genai-evaluations-simplified\",\n", + " \"inputArtifactBindings\": [\n", + " {\n", + " \"key\": \"datasetFolder\",\n", + " \"artifactId\": artifact_id\n", + " }\n", + " ],\n", + " \"parameterBindings\": [\n", + " {\n", + " \"key\": \"repetitions\",\n", + " \"value\": repetitions\n", + " },\n", + " {\n", + " \"key\": \"orchestrationDeploymentURL\",\n", + " \"value\": orchestration_deployment_url\n", + " },\n", + " {\n", + " \"key\": \"metrics\",\n", + " \"value\": metrics_list\n", + " },\n", + " {\n", + " \"key\": \"testDataset\",\n", + " \"value\": test_datasets\n", + " },\n", + " {\n", + " \"key\": \"orchestrationRegistryIds\",\n", + " \"value\": orchestration_registry_id\n", + " },\n", + " {\n", + " \"key\": \"testRowCount\",\n", + " \"value\": \"2\"\n", + " }\n", + " ]\n", + "}\n", + "\n", + "def create_aicore_configuration():\n", + " headers = _get_headers()\n", + " GET_CONFIGURATIONS_ENDPOINT = '/v2/lm/configurations'\n", + " request_url = 
f\"{AICORE_BASE_URL}{GET_CONFIGURATIONS_ENDPOINT}\"\n", + "    try:\n", + "        response = requests.post(\n", + "            request_url, headers=headers, data=json.dumps(request_body), timeout=120\n", + "        )\n", + "        print(response)\n", + "        if(response.status_code != 201):\n", + "            raise\n", + "        result = response.json()\n", + "        print(result)\n", + "        return result['id']\n", + "    except:\n", + "        logging.error(\"Error occurred while attempting to create a Configuration\")\n", + "        raise\n", + "    \n", + "configuration_id = create_aicore_configuration()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Evaluation Execution Creation\n", + "Once the configuration is created, we create the AI Core execution, which triggers the evaluation workload.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# create an execution with the created configuration.\n", + "\n", + "import requests\n", + "def create_execution():\n", + "    headers = _get_headers()\n", + "    GET_EXECUTIONS_ENDPOINT = '/v2/lm/executions'\n", + "    request_url = f\"{AICORE_BASE_URL}{GET_EXECUTIONS_ENDPOINT}\"\n", + "    request_body = {\"configurationId\" : configuration_id} \n", + "    try:\n", + "        response = requests.post(\n", + "            request_url, headers=headers, data=json.dumps(request_body), timeout=120\n", + "        )\n", + "        print(\"response received is \", response)\n", + "        result = response.json()\n", + "        print(result)\n", + "        return result['id']\n", + "    except:\n", + "        logging.error(\"Error occurred while attempting to create an execution\")\n", + "        raise\n", + "    \n", + "\n", + "execution_id = create_execution()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# get execution status\n", + "import requests\n", + "def get_execution_status(execution_id):\n", + "    headers = _get_headers()\n", + "    LOG_EXECUTIONS_ENDPOINT = f'/v2/lm/executions/{execution_id}'\n", + "    request_url = 
f\"{AICORE_BASE_URL}{LOG_EXECUTIONS_ENDPOINT}\"\n", + " try:\n", + " response = requests.get(\n", + " request_url, headers=headers, timeout=120\n", + " )\n", + " print(\"response received is \", response)\n", + " result = response.json()\n", + " return result\n", + " except:\n", + " logging.error(\"Error occurred while attempting to get execution status\")\n", + " raise\n", + " \n", + "\n", + "get_execution_status(execution_id)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "1. Run the following cells only when the status field in the Execution response is \"COMPLETED\" to view the results.\n", + "2. The status field progresses through different states over time: UNKNOWN → PENDING → RUNNING → COMPLETED. Ensure it reaches COMPLETED before proceeding.\n", + "\n", + "\n", + "Note: The targetStatus will always be COMPLETED from the start, as it represents the intended final state of the Execution. Do not confuse it with the actual status field.\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Evaluation Result\n", + "The evaluation job produces two outputs\n", + "1. A SQLite DB file which stores the orchestration input, orchestration output, values for all the metrics calculated for this orchestration output and statistics such as latency for this orchestration output. These metric values are called raw metric values. This SQLite DB file is stored in the object store as an AI Core output artifact.\n", + "2. A set of metrics whose values are aggregated from the raw metric values. The aggregate metrics are stored in the tracking service. 
The user-defined tags along with the run names are stored with the metrics.\n", + "After the execution completes, the user can see the runs generated by the workload, along with the aggregate metrics, by calling the tracking API as shown below." + ] + }, + { + "cell_type": "code", + "execution_count": 299, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "response received is \n" + ] + } + ], + "source": [ + "# Get aggregate metrics using execution id\n", + "import requests\n", + "def retrieve_aggregate_metrics(execution_id):\n", + "    headers = _get_headers()\n", + "    GET_METRICS_ENDPOINT = f'/v2/lm/metrics?tagFilters=evaluation.ai.sap.com/child-of={execution_id}'\n", + "    request_url = f\"{AICORE_BASE_URL}{GET_METRICS_ENDPOINT}\"\n", + "    try:\n", + "        response = requests.get(request_url, headers=headers, timeout=120)\n", + "        print(\"response received is \", response)\n", + "        result = response.json()\n", + "        return result\n", + "    except:\n", + "        logging.error(\"Error occurred while attempting to retrieve aggregate metrics for the run\")\n", + "        raise\n", + "\n", + "runs_data = retrieve_aggregate_metrics(execution_id)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To drill down further, the user can also download the SQLite DB file from object storage and analyse the results (instance-level metrics, logs, etc.) locally."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# download the result artifacts from the object store.\n", + "import boto3\n", + "\n", + "def download_all_objects(prefix, destination_folder):\n", + "    \"\"\"\n", + "    Recursively download all objects from the S3 bucket AWS_BUCKET_ID starting with a specific prefix.\n", + "\n", + "    :param prefix: Prefix to filter objects in the bucket.\n", + "    :param destination_folder: Local folder to save the downloaded files.\n", + "    \"\"\"\n", + "    s3_client = boto3.client(\n", + "        's3',\n", + "        aws_access_key_id=AWS_ACCESS_KEY,\n", + "        aws_secret_access_key=AWS_SECRET_ACCESS_KEY,\n", + "        region_name=AWS_REGION\n", + "    )\n", + "\n", + "    # Ensure the destination folder exists\n", + "    if not os.path.exists(destination_folder):\n", + "        os.makedirs(destination_folder)\n", + "\n", + "    # Paginate through objects\n", + "    paginator = s3_client.get_paginator('list_objects_v2')\n", + "    pages = paginator.paginate(Bucket=AWS_BUCKET_ID, Prefix=prefix)\n", + "\n", + "    for page in pages:\n", + "        if 'Contents' in page:\n", + "            for obj in page['Contents']:\n", + "                key = obj['Key']\n", + "                local_file_path = os.path.join(destination_folder, os.path.relpath(key, prefix))\n", + "\n", + "                # Ensure the local directory structure exists\n", + "                local_directory = os.path.dirname(local_file_path)\n", + "                if not os.path.exists(local_directory):\n", + "                    os.makedirs(local_directory)\n", + "\n", + "                # Download the object\n", + "                print(f\"Downloading {key} to {local_file_path}\")\n", + "                s3_client.download_file(AWS_BUCKET_ID, key, local_file_path)\n", + "\n", + "\n", + "# Download the evaluation results from the object store. 
Look at the execution status under the \"outputArtifacts\" key to see the 'url'\n", + "# which shows the data path where your output results are stored\n", + "EXECUTION_ID = execution_id\n", + "sqlite_db_prefix = f'{EXECUTION_ID}/tmp/' # change the prefix based on where your output artifact is stored in the bucket.\n", + "destination_folder = 'results-new'\n", + "\n", + "download_all_objects(sqlite_db_prefix, destination_folder)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "NOTE: The cell below shows the top 5 rows of the evaluation results for each SQLite table. If you wish to see all the entries, you can comment out the `df.head(5)` line in the cell below or modify the number accordingly." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Viewing the results from the SQLite DB in tabular format.\n", + "import sqlite3\n", + "import pandas as pd\n", + "from IPython.display import display, HTML\n", + "\n", + "# Path to your SQLite database file\n", + "db_file = 'results-new/results.db'\n", + "\n", + "connection = sqlite3.connect(db_file)\n", + "\n", + "# Specify the table names you want to display\n", + "table_names = ['run','configuration', 'submission', 'submission_result', 'evaluation_result'] \n", + "\n", + "# Create the CSS and HTML container\n", + "html_content = \"\"\"\n", + "\n", + "
\n", + "\"\"\"\n", + "\n", + "for table_name in table_names:\n", + " query = f\"SELECT * FROM {table_name};\"\n", + " df = pd.read_sql_query(query, connection)\n", + " # If you want to see all the rows across all tables, remove/comment the next line\n", + " df = df.head(5) # Limiting the number of rows displayed\n", + " table_html = df.to_html(classes='table-container', index=False)\n", + " html_content += f\"\"\"\n", + "
\n", + "

Table: {table_name}

\n", + " {table_html}\n", + "
\n", + " \"\"\"\n", + "\n", + "html_content += \"
\"\n", + "\n", + "display(HTML(html_content))\n", + "\n", + "# Close the connection\n", + "connection.close()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#Delete Execution Id\n", + "def delete_execution():\n", + " headers = _get_headers()\n", + " EXEC_ID = execution_id\n", + " GET_EXECUTIONS_ENDPOINT = '/v2/lm/executions/'\n", + " request_url = f\"{AICORE_BASE_URL}{GET_EXECUTIONS_ENDPOINT}{EXEC_ID}\"\n", + " try:\n", + " response = requests.delete(\n", + " request_url, headers=headers, params={\"AI-Resource-Group\":AICORE_RESOURCE_GROUP}, timeout=120\n", + " )\n", + " print(response)\n", + " if(response.status_code != 202):\n", + " raise\n", + " result = response.json()\n", + " print(result)\n", + " except:\n", + " logging.error(\"Error occurred while attempting to delete a Configuration\")\n", + " raise\n", + " \n", + "delete_execution()" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": ".venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.9.13" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/AI_Core.json b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/AI_Core.json new file mode 100644 index 0000000000..6ceca23fc6 --- /dev/null +++ b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/AI_Core.json @@ -0,0 +1,1578 @@ +{ + "name": "AI Core", + "version": "1", + "items": [ + { + "type": "http", + "name": "get_token", + "filename": "get_token.bru", + "seq": 1, + "request": { + "url": "{{ai_auth_url}}/oauth/token", + "method": "POST", + "headers": [ + { + "name": "Content-Type", + "value": "application/x-www-form-urlencoded", + "enabled": true + } + 
], + "params": [], + "body": { + "mode": "formUrlEncoded", + "formUrlEncoded": [ + { + "name": "grant_type", + "value": "client_credentials", + "enabled": true + }, + { + "name": "client_id", + "value": "{{client_id}}", + "enabled": true + }, + { + "name": "client_secret", + "value": "{{client_secret}}", + "enabled": true + } + ], + "multipartForm": [], + "file": [] + }, + "script": { + "res": "if (res.getStatus() == 200) {\n bru.setEnvVar(\"access_token\", res.body.access_token);\n}" + }, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "none" + } + } + }, + { + "type": "folder", + "name": "admin", + "filename": "admin", + "root": { + "meta": { + "name": "admin" + } + }, + "items": [ + { + "type": "folder", + "name": "objectStoreSecrets", + "filename": "objectStoreSecrets", + "root": { + "meta": { + "name": "objectStoreSecrets" + } + }, + "items": [ + { + "type": "http", + "name": "Create a secret", + "filename": "Create a secret.bru", + "seq": 1, + "request": { + "url": "{{baseUrl}}/v2/admin/objectStoreSecrets", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + }, + { + "name": "Authorization", + "value": "", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"name\": \"genai-data\",\n \"data\": {\n \"AWS_ACCESS_KEY_ID\": \"\",\n \"AWS_SECRET_ACCESS_KEY\": \"\"\n },\n \"type\": \"S3\",\n \"bucket\": \"\",\n \"endpoint\": \"https://s3.eu-central-1.amazonaws.com\",\n \"region\": \"\",\n \"pathPrefix\": \"example-dataset/veritasai\" \n }", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Create a secret based on the configuration in the request body\n", + "auth": { + "mode": "bearer", + 
"bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "Get a list of metadata of available secrets.", + "filename": "Get a list of metadata of available secrets.bru", + "seq": 2, + "request": { + "url": "{{baseUrl}}/admin/objectStoreSecrets?$top=&$skip=&$count=", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [ + { + "name": "$top", + "value": "", + "type": "query", + "enabled": true + }, + { + "name": "$skip", + "value": "", + "type": "query", + "enabled": true + }, + { + "name": "$count", + "value": "", + "type": "query", + "enabled": true + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve a list of metadata of the stored secrets.\n", + "auth": { + "mode": "oauth2", + "oauth2": { + "grantType": "authorization_code", + "callbackUrl": "", + "authorizationUrl": "", + "accessTokenUrl": "", + "refreshTokenUrl": "", + "clientId": "", + "clientSecret": "", + "scope": "", + "credentialsPlacement": "basic_auth_header", + "pkce": false, + "credentialsId": "credentials", + "tokenPlacement": "header", + "tokenHeaderPrefix": "Bearer", + "tokenQueryKey": "access_token", + "autoFetchToken": true, + "autoRefreshToken": false + } + } + } + }, + { + "type": "folder", + "name": "{objectStoreName}", + "filename": "{objectStoreName}", + "root": { + "meta": { + "name": "{objectStoreName}" + } + }, + "items": [ + { + "type": "http", + "name": "Delete object store secret", + "filename": "Delete object store secret.bru", + "seq": 2, + "request": { + "url": "{{baseUrl}}/admin/objectStoreSecrets/:objectStoreName", + "method": "DELETE", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Accept", + "value": 
"application/json", + "enabled": true + } + ], + "params": [ + { + "name": "objectStoreName", + "value": "qKoZ-aHSe", + "type": "path", + "enabled": true + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Delete a secret with the name of objectStoreName if it exists.", + "auth": { + "mode": "oauth2", + "oauth2": { + "grantType": "authorization_code", + "callbackUrl": "", + "authorizationUrl": "", + "accessTokenUrl": "", + "refreshTokenUrl": "", + "clientId": "", + "clientSecret": "", + "scope": "", + "credentialsPlacement": "basic_auth_header", + "pkce": false, + "credentialsId": "credentials", + "tokenPlacement": "header", + "tokenHeaderPrefix": "Bearer", + "tokenQueryKey": "access_token", + "autoFetchToken": true, + "autoRefreshToken": false + } + } + } + }, + { + "type": "http", + "name": "Returns the of metadata of secrets which match the query parameter.", + "filename": "Returns the of metadata of secrets which match the query parameter.bru", + "seq": 1, + "request": { + "url": "{{baseUrl}}/v2/admin/objectStoreSecrets", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "This retrieves the metadata of the stored secret which match the parameter objectStoreName.\nThe fetched secret is constructed like objectStoreName-object-store-secret\nThe base64 encoded field for the stored secret is not returned.\n", + "auth": { + "mode": "oauth2", + "oauth2": { + "grantType": "authorization_code", + "callbackUrl": "", + "authorizationUrl": "", + "accessTokenUrl": "", + "refreshTokenUrl": "", + "clientId": "", + "clientSecret": 
"", + "scope": "", + "credentialsPlacement": "basic_auth_header", + "pkce": false, + "credentialsId": "credentials", + "tokenPlacement": "header", + "tokenHeaderPrefix": "Bearer", + "tokenQueryKey": "access_token", + "autoFetchToken": true, + "autoRefreshToken": false + } + } + } + }, + { + "type": "http", + "name": "Update object store secret", + "filename": "Update object store secret.bru", + "seq": 3, + "request": { + "url": "{{baseUrl}}/admin/objectStoreSecrets/:objectStoreName", + "method": "PATCH", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [ + { + "name": "objectStoreName", + "value": "qKoZ-aHSe", + "type": "path", + "enabled": true + } + ], + "body": { + "mode": "json", + "json": "{\n \"name\": \"\",\n \"type\": \"\",\n \"data\": {},\n \"bucket\": \"\",\n \"endpoint\": \"\",\n \"region\": \"\",\n \"pathPrefix\": \"\",\n \"verifyssl\": \"\",\n \"usehttps\": \"1\"\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Update a secret with name of objectStoreName if it exists.\n", + "auth": { + "mode": "oauth2", + "oauth2": { + "grantType": "authorization_code", + "callbackUrl": "", + "authorizationUrl": "", + "accessTokenUrl": "", + "refreshTokenUrl": "", + "clientId": "", + "clientSecret": "", + "scope": "", + "credentialsPlacement": "basic_auth_header", + "pkce": false, + "credentialsId": "credentials", + "tokenPlacement": "header", + "tokenHeaderPrefix": "Bearer", + "tokenQueryKey": "access_token", + "autoFetchToken": true, + "autoRefreshToken": false + } + } + } + } + ] + } + ] + } + ] + }, + { + "type": "folder", + "name": "lm", + "filename": "lm", + "root": { + "meta": { + "name": "lm" + } + }, + "items": [ + { + "type": "folder", + "name": 
"configurations", + "filename": "configurations", + "root": { + "meta": { + "name": "configurations" + } + }, + "items": [ + { + "type": "http", + "name": "Create configuration Copy", + "filename": "Create configuration Copy.bru", + "seq": 3, + "request": { + "url": "{{baseUrl}}/v2/lm/configurations", + "method": "DELETE", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"id\": \"\"\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Create a new configuration linked to a specific scenario and executable for use in an execution\nor deployment.\n", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "Create configuration", + "filename": "Create configuration.bru", + "seq": 2, + "request": { + "url": "{{baseUrl}}/v2/lm/configurations", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"name\": \"genai-eval-conf\",\n \"scenarioId\": \"genai-evaluations\",\n \"executableId\": \"genai-evaluations\",\n \"inputArtifactBindings\": [\n {\n \"key\": \"rootFolder\",\n \"artifactId\": \"\"\n }\n ],\n \"parameterBindings\": [\n {\n \"key\": \"repetitions\",\n \"value\": \"2\"\n },\n {\n \"key\": \"orchestrationDeploymentURL\",\n \"value\": \"\"\n },\n {\n \"key\": \"tags\",\n \"value\": \"{}\"\n },\n {\n \"key\": \"variableMapping\",\n 
\"value\": \"{\\\"prompt/question\\\": \\\"data/topic\\\"}\"\n },\n {\n \"key\": \"metrics\",\n \"value\": \"bert_score,bleu,rouge,content_filter_on_input,content_filter_on_output,exact_match,pointwise_instruction_following,pointwise_correctness,genai-evaluations/groundedness_formatted/0.0.1,genai-evaluations/correctness_structured/0.0.1\"\n },\n {\n \"key\": \"testDataset\",\n \"value\": \"{\\\"path\\\":\\\"testdata/medicalqna_dataset.csv\\\", \\\"type\\\": \\\"csv\\\"}\"\n },\n {\n \"key\": \"runs\",\n \"value\": \"runs/run1.json\"\n },\n {\n \"key\": \"customMetricConfig\",\n \"value\": \"custom-llm-metric.jsonl\"\n },\n {\n \"key\": \"testRowCount\",\n \"value\": \"2\"\n },\n {\n \"key\": \"debugMode\",\n \"value\": \"ON\"\n }\n ]\n}\n", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Create a new configuration linked to a specific scenario and executable for use in an execution\nor deployment.\n", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "Get list of configurations", + "filename": "Get list of configurations.bru", + "seq": 1, + "request": { + "url": "{{baseUrl}}/v2/lm/configurations", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve a list of configurations. 
Filter results by scenario ID or a list of executable IDs.\nSearch for configurations containing the search string as substring in the configuration name.\n", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "folder", + "name": "{configurationId}", + "filename": "{configurationId}", + "root": { + "meta": { + "name": "{configurationId}" + } + }, + "items": [ + { + "type": "http", + "name": "Get configuration by ID", + "filename": "Get configuration by ID.bru", + "seq": 1, + "request": { + "url": "{{baseUrl}}/v2/lm/configurations", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve details for configuration with configurationId.", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + } + ] + }, + { + "type": "folder", + "name": "$count", + "filename": "$count", + "root": { + "meta": { + "name": "$count" + } + }, + "items": [ + { + "type": "http", + "name": "Get number of configurations", + "filename": "Get number of configurations.bru", + "seq": 1, + "request": { + "url": "{{baseUrl}}/lm/configurations/$count?scenarioId=iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN&$search=}\"NI2Kn!V&searchCaseInsensitive=false&executableIds=T_jtbUJzwg0e.okSV667jeZejqVb,3e0cmfc4c-6YavNz92uztZE", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Accept", + "value": "text/plain", + "enabled": true + } + ], + "params": [ + { + "name": "scenarioId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": "query", + "enabled": true + }, + { + 
"name": "$search", + "value": "}\"NI2Kn!V", + "type": "query", + "enabled": true + }, + { + "name": "searchCaseInsensitive", + "value": "false", + "type": "query", + "enabled": true + }, + { + "name": "executableIds", + "value": "T_jtbUJzwg0e.okSV667jeZejqVb,3e0cmfc4c-6YavNz92uztZE", + "type": "query", + "enabled": true + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve the number of available configurations that match the specified filter criteria.\nFilter criteria include a scenarioId or executableIdsList. Search by substring of configuration name is also possible.\n", + "auth": { + "mode": "oauth2", + "oauth2": { + "grantType": "authorization_code", + "callbackUrl": "", + "authorizationUrl": "", + "accessTokenUrl": "", + "refreshTokenUrl": "", + "clientId": "", + "clientSecret": "", + "scope": "", + "credentialsPlacement": "basic_auth_header", + "pkce": false, + "credentialsId": "credentials", + "tokenPlacement": "header", + "tokenHeaderPrefix": "Bearer", + "tokenQueryKey": "access_token", + "autoFetchToken": true, + "autoRefreshToken": false + } + } + } + } + ] + } + ] + }, + { + "type": "folder", + "name": "artifacts", + "filename": "artifacts", + "root": { + "meta": { + "name": "artifacts" + } + }, + "items": [ + { + "type": "http", + "name": "Get list of artifacts", + "filename": "Get list of artifacts.bru", + "seq": 2, + "request": { + "url": "{{baseUrl}}/v2/lm/artifacts", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [ + { + "name": "scenarioId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": "query", + "enabled": false + }, + { + "name": "executionId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": 
"query", + "enabled": false + }, + { + "name": "name", + "value": "[G7 ovyt8i", + "type": "query", + "enabled": false + }, + { + "name": "kind", + "value": "other", + "type": "query", + "enabled": false + }, + { + "name": "artifactLabelSelector", + "value": "ext.ai.sap.com/bXN1EAk=D*", + "type": "query", + "enabled": false + }, + { + "name": "$top", + "value": "10000", + "type": "query", + "enabled": false + }, + { + "name": "$skip", + "value": "", + "type": "query", + "enabled": false + }, + { + "name": "$search", + "value": "}\"NI2Kn!V", + "type": "query", + "enabled": false + }, + { + "name": "searchCaseInsensitive", + "value": "false", + "type": "query", + "enabled": false + }, + { + "name": "$expand", + "value": "scenario", + "type": "query", + "enabled": false + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve a list of artifacts that matches the specified filter criteria.\nFilter criteria include scenario ID, execution ID, an artifact name, artifact kind, or artifact labels.\nUse top/skip parameters to paginate the result list.\nSearch by substring of artifact name or description, if required.\n", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "Register artifact", + "filename": "Register artifact.bru", + "seq": 1, + "request": { + "url": "{{baseUrl}}/v2/lm/artifacts", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"name\": \"aiconfig\",\n \"kind\": \"dataset\",\n \"url\": \"ai://genai-data/genaiEvaluation/14af1af80b974edb8731632d17286343\",\n 
\"scenarioId\": \"genai-evaluations\"\n}\n", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Register an artifact for use in a configuration, for example a model or a dataset.", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "folder", + "name": "$count", + "filename": "$count", + "root": { + "meta": { + "name": "$count" + } + }, + "items": [ + { + "type": "http", + "name": "Get number of artifacts", + "filename": "Get number of artifacts.bru", + "seq": 1, + "request": { + "url": "{{baseUrl}}/lm/artifacts/$count?scenarioId=iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN&executionId=iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN&name=[G7 ovyt8i&kind=other&$search=}\"NI2Kn!V&searchCaseInsensitive=false&artifactLabelSelector=ext.ai.sap.com/bXN1EAk=D*", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Accept", + "value": "text/plain", + "enabled": true + } + ], + "params": [ + { + "name": "scenarioId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": "query", + "enabled": true + }, + { + "name": "executionId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": "query", + "enabled": true + }, + { + "name": "name", + "value": "[G7 ovyt8i", + "type": "query", + "enabled": true + }, + { + "name": "kind", + "value": "other", + "type": "query", + "enabled": true + }, + { + "name": "$search", + "value": "}\"NI2Kn!V", + "type": "query", + "enabled": true + }, + { + "name": "searchCaseInsensitive", + "value": "false", + "type": "query", + "enabled": true + }, + { + "name": "artifactLabelSelector", + "value": "ext.ai.sap.com/bXN1EAk=D*", + "type": "query", + "enabled": true + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + 
"script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve the number of available artifacts that match the specified filter criteria.\nFilter criteria include a scenarioId, executionId, an artifact name, artifact kind, or artifact labels.\nSearch by substring of artifact name or description is also possible.\n", + "auth": { + "mode": "oauth2", + "oauth2": { + "grantType": "authorization_code", + "callbackUrl": "", + "authorizationUrl": "", + "accessTokenUrl": "", + "refreshTokenUrl": "", + "clientId": "", + "clientSecret": "", + "scope": "", + "credentialsPlacement": "basic_auth_header", + "pkce": false, + "credentialsId": "credentials", + "tokenPlacement": "header", + "tokenHeaderPrefix": "Bearer", + "tokenQueryKey": "access_token", + "autoFetchToken": true, + "autoRefreshToken": false + } + } + } + } + ] + } + ] + }, + { + "type": "folder", + "name": "executions", + "filename": "executions", + "root": { + "meta": { + "name": "executions" + } + }, + "items": [ + { + "type": "http", + "name": "Create execution", + "filename": "Create execution.bru", + "seq": 2, + "request": { + "url": "{{baseUrl}}/v2/lm/executions", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"configurationId\": \"\"\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Create an execution using the configuration specified by configurationId.", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "Get list of executions", + "filename": "Get list of executions.bru", + "seq": 1, + "request": { + "url": 
"{{baseUrl}}/v2/lm/executions/", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [ + { + "name": "scenarioId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": "query", + "enabled": false + }, + { + "name": "executionScheduleId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": "query", + "enabled": false + }, + { + "name": "status", + "value": "DEAD", + "type": "query", + "enabled": false + }, + { + "name": "$top", + "value": "10000", + "type": "query", + "enabled": false + }, + { + "name": "$skip", + "value": "", + "type": "query", + "enabled": false + }, + { + "name": "$select", + "value": "status", + "type": "query", + "enabled": false + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve a list of executions that match the specified filter criteria.\nFilter criteria include a list of executableIds, a scenarioId, a configurationId, or a execution status.\nWith top/skip parameters it is possible to paginate the result list.\nWith select parameter it is possible to select only status.\n", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "folder", + "name": "$count", + "filename": "$count", + "root": { + "meta": { + "name": "$count" + } + } + } + ] + }, + { + "type": "folder", + "name": "deployments", + "filename": "deployments", + "root": { + "meta": { + "name": "deployments" + } + }, + "items": [ + { + "type": "http", + "name": "Create deployment", + "filename": "Create deployment.bru", + "seq": 2, + "request": { + "url": "{{baseUrl}}/v2/lm/deployments", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": 
"{{resource_group}}", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"configurationId\": \"\"\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Create a deployment using the configuration specified by configurationId after synchronously checking the\ncorrectness of the configuration.\n", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "Get list of deployments", + "filename": "Get list of deployments.bru", + "seq": 1, + "request": { + "url": "{{baseUrl}}/v2/lm/deployments", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve a list of deployments that match the specified filter criteria.\nFilter criteria include a list of executableIds, a scenarioId, a configurationId, or a deployment status.\nWith top/skip parameters it is possible to paginate the result list.\nWith select parameter it is possible to select only status.\n", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "folder", + "name": "$count", + "filename": "$count", + "root": { + "meta": { + "name": "$count" + } + }, + "items": [ + { + "type": "http", + "name": "Get number of deployments", + "filename": "Get number of deployments.bru", + "seq": 1, + "request": { + "url": 
"{{baseUrl}}/lm/deployments/$count?executableIds=T_jtbUJzwg0e.okSV667jeZejqVb,3e0cmfc4c-6YavNz92uztZE&configurationId=iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN&scenarioId=iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN&status=DEAD", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Accept", + "value": "text/plain", + "enabled": true + } + ], + "params": [ + { + "name": "executableIds", + "value": "T_jtbUJzwg0e.okSV667jeZejqVb,3e0cmfc4c-6YavNz92uztZE", + "type": "query", + "enabled": true + }, + { + "name": "configurationId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": "query", + "enabled": true + }, + { + "name": "scenarioId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": "query", + "enabled": true + }, + { + "name": "status", + "value": "DEAD", + "type": "query", + "enabled": true + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve the number of available deployments. 
The number can be filtered by\nscenarioId, configurationId, executableIdsList or by deployment status.\n", + "auth": { + "mode": "oauth2", + "oauth2": { + "grantType": "authorization_code", + "callbackUrl": "", + "authorizationUrl": "", + "accessTokenUrl": "", + "refreshTokenUrl": "", + "clientId": "", + "clientSecret": "", + "scope": "", + "credentialsPlacement": "basic_auth_header", + "pkce": false, + "credentialsId": "credentials", + "tokenPlacement": "header", + "tokenHeaderPrefix": "Bearer", + "tokenQueryKey": "access_token", + "autoFetchToken": true, + "autoRefreshToken": false + } + } + } + } + ] + } + ] + }, + { + "type": "folder", + "name": "metrics", + "filename": "metrics", + "root": { + "meta": { + "name": "metrics" + } + }, + "items": [ + { + "type": "http", + "name": "Evaluation Metrics via Execution ID", + "filename": "Evaluation Metrics via Execution ID.bru", + "seq": 4, + "request": { + "url": "{{baseUrl}}/v2/lm/metrics?tagFilters=evaluation.ai.sap.com/child-of=", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [ + { + "name": "tagFilters", + "url": "{{baseUrl}}/v2/lm/metrics?tagFilters=evaluation.ai.sap.com/child-of=", + "value": "evaluation.ai.sap.com/child-of=", + "type": "query", + "enabled": true + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "Metrics by Run Name", + "filename": "Metrics by Run Name.bru", + "seq": 5, + "request": { + "url": "{{baseUrl}}/v2/lm/metrics?tagFilters=evaluation.ai.sap.com/run-name=run1", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + 
"enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [ + { + "name": "tagFilters", + "value": "evaluation.ai.sap.com/run-name=run1", + "type": "query", + "enabled": true + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + } + ] + } + ] + } + ], + "activeEnvironmentUid": "lWUmIcEkGnkMxwNBILLmY", + "environments": [ + { + "variables": [ + { + "name": "ai_auth_url", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "ai_api_url", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "client_id", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "client_secret", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "resource_group", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "orchestration_service_url", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "access_token", + "value": "", + "enabled": true, + "secret": true, + "type": "text" + } + ], + "name": "intprod" + } + ], + "root": { + "request": { + "auth": { + "mode": "oauth2", + "oauth2": { + "grantType": "authorization_code", + "callbackUrl": "", + "authorizationUrl": "", + "accessTokenUrl": "", + "refreshTokenUrl": "", + "clientId": "", + "clientSecret": "", + "scope": "", + "state": "", + "pkce": false, + "credentialsPlacement": "basic_auth_header", + "credentialsId": "credentials", + "tokenPlacement": "header", + "tokenHeaderPrefix": "Bearer", + "tokenQueryKey": "access_token", + "autoFetchToken": true, + "autoRefreshToken": false + } + }, + "vars": { + "req": [ + { + "name": "region", + "value": 
"prod.eu-central-1.aws", + "enabled": true, + "local": false, + "uid": "oYVk4DuVpyYqqP2roBVjE" + }, + { + "name": "baseUrl", + "value": "", + "enabled": true, + "local": false, + "uid": "I4KjDm7FxpSRwUYzjwfPG" + }, + { + "name": "auth_url", + "value": "", + "enabled": true, + "local": false, + "uid": "zuftvyCURtA9XYErCYDgo" + }, + { + "name": "client_id", + "value": "", + "enabled": true, + "local": false, + "uid": "JfGEVKm71BYTgR8UkQUGv" + }, + { + "name": "client_secret", + "value": "", + "enabled": true, + "local": false, + "uid": "ls3RYTJ40baTl8eYmilGt" + }, + { + "name": "AWS_ACCESS_KEY_ID", + "value": "", + "enabled": true, + "local": false, + "uid": "2O0YTTAdmYltm5XiHMhP2" + }, + { + "name": "AWS_SECRET_ACCESS_KEY", + "value": "", + "enabled": true, + "local": false, + "uid": "8rc4RYyPcHXyTkAnnI981" + }, + { + "name": "BUCKET_NAME", + "value": "", + "enabled": true, + "local": false, + "uid": "HqFIe8Rvc14i41WIAGGkl" + }, + { + "name": "DATABASE_URL", + "value": "https://s3-eu-central-1.amazonaws.com", + "enabled": true, + "local": false, + "uid": "aWIwuJZH5XQ5Guu2D69Sq" + } + ] + } + }, + "docs": "Provides tools to manage your scenarios and workflows in SAP AI Core. Execute pipelines as a batch job, for example to pre-process or train your models, or perform batch inference. Serve inference requests of trained models. Deploy а trained machine learning model as a web service to serve inference requests with high performance. 
Register your own Docker registry, synchronize your AI content from your own git repository, and register your own object store for training data and trained models.\n", + "meta": { + "name": "AI Core" + } + }, + "brunoConfig": { + "version": "1", + "name": "AI Core", + "type": "collection", + "ignore": [ + "node_modules", + ".git" + ], + "size": 0.10747432708740234, + "filesCount": 151 + } +} \ No newline at end of file diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/custom-eval.jpg b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/custom-eval.jpg new file mode 100644 index 0000000000..034cdba2d1 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/custom-eval.jpg differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image-br01.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image-br01.png new file mode 100644 index 0000000000..5424ea51d0 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image-br01.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image-br02.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image-br02.png new file mode 100644 index 0000000000..4ed9d9ab02 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image-br02.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image-br03.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image-br03.png new file mode 100644 index 0000000000..acba788ceb Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image-br03.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image-br04.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image-br04.png new file mode 100644 index 0000000000..9f8a175e47 Binary files /dev/null and 
b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image-br04.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image-br05.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image-br05.png new file mode 100644 index 0000000000..8ed409f630 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image-br05.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image-br06.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image-br06.png new file mode 100644 index 0000000000..cdcd63eef1 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image-br06.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image-br07.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image-br07.png new file mode 100644 index 0000000000..e4607d8171 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image-br07.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image078.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image078.png new file mode 100644 index 0000000000..91824dea4e Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image078.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image080.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image080.png new file mode 100644 index 0000000000..fae2959a08 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image080.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_1.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_1.png new file mode 100644 index 0000000000..6db3eb05c3 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_1.png differ diff --git 
a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_10.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_10.png new file mode 100644 index 0000000000..3d7e561b22 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_10.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_11.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_11.png new file mode 100644 index 0000000000..d229b3e6e7 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_11.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_12.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_12.png new file mode 100644 index 0000000000..49fda0642a Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_12.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_13.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_13.png new file mode 100644 index 0000000000..a26476d70d Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_13.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_14.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_14.png new file mode 100644 index 0000000000..c4d522fb21 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_14.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_15.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_15.png new file mode 100644 index 0000000000..c8fcfd6b23 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_15.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_16.png 
b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_16.png new file mode 100644 index 0000000000..106fc608e7 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_16.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_17.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_17.png new file mode 100644 index 0000000000..7625601321 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_17.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_18.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_18.png new file mode 100644 index 0000000000..45641b91ae Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_18.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_19.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_19.png new file mode 100644 index 0000000000..91498a203a Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_19.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_2.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_2.png new file mode 100644 index 0000000000..1232299cc5 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_2.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_20.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_20.png new file mode 100644 index 0000000000..3b58f47cea Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_20.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_21.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_21.png new file mode 100644 index 0000000000..dd9f9f22bb Binary files 
/dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_21.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_22.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_22.png new file mode 100644 index 0000000000..abcae67d60 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_22.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_23.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_23.png new file mode 100644 index 0000000000..97b0bc60f0 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_23.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_24.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_24.png new file mode 100644 index 0000000000..a1cb408220 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_24.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_25.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_25.png new file mode 100644 index 0000000000..afdb0e1975 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_25.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_26.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_26.png new file mode 100644 index 0000000000..1b0bdc0136 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_26.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_26_01.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_26_01.png new file mode 100644 index 0000000000..0e76115abc Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_26_01.png differ diff --git 
a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_27.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_27.png new file mode 100644 index 0000000000..10711ea625 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_27.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_28.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_28.png new file mode 100644 index 0000000000..89ba9a7534 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_28.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_29.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_29.png new file mode 100644 index 0000000000..bc30129b11 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_29.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_3.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_3.png new file mode 100644 index 0000000000..4020c1f58e Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_3.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_30.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_30.png new file mode 100644 index 0000000000..7cfc063425 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_30.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_31.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_31.png new file mode 100644 index 0000000000..7a1a959fb0 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_31.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_32.png 
b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_32.png new file mode 100644 index 0000000000..fe827f3460 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_32.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_33.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_33.png new file mode 100644 index 0000000000..546d43b52b Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_33.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_34.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_34.png new file mode 100644 index 0000000000..4fa0960a1d Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_34.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_35.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_35.png new file mode 100644 index 0000000000..0f08f722b2 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_35.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_36.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_36.png new file mode 100644 index 0000000000..e16733c527 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_36.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_37.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_37.png new file mode 100644 index 0000000000..93052a3066 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_37.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_38.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_38.png new file mode 100644 index 0000000000..19c9bce7f9 Binary 
files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_38.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_39.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_39.png new file mode 100644 index 0000000000..2fa160e3f6 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_39.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_4.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_4.png new file mode 100644 index 0000000000..db1a55ca0a Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_4.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_40.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_40.png new file mode 100644 index 0000000000..d7a45cf538 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_40.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_41.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_41.png new file mode 100644 index 0000000000..455af09dc3 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_41.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_42.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_42.png new file mode 100644 index 0000000000..43ca443e08 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_42.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_5.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_5.png new file mode 100644 index 0000000000..2e5ddd73fa Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_5.png differ diff --git 
a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_6.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_6.png new file mode 100644 index 0000000000..2e8d9e4f91 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_6.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_8.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_8.png new file mode 100644 index 0000000000..7b15dacbb8 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_8.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_9.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_9.png new file mode 100644 index 0000000000..c995a0bb36 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_9.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_ail_or1.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_ail_or1.png new file mode 100644 index 0000000000..9bc2a9787d Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_ail_or1.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_ail_or2.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_ail_or2.png new file mode 100644 index 0000000000..fedc326219 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_ail_or2.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_ail_or3.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_ail_or3.png new file mode 100644 index 0000000000..4819de3fc1 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_ail_or3.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_ail_or4.png 
b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_ail_or4.png new file mode 100644 index 0000000000..d523dc873c Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_ail_or4.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_ail_or5.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_ail_or5.png new file mode 100644 index 0000000000..a039b11fad Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_ail_or5.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_br_dt.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_br_dt.png new file mode 100644 index 0000000000..75aba902dc Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_br_dt.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_br_pip.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_br_pip.png new file mode 100644 index 0000000000..30b41816db Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_br_pip.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_br_sec.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_br_sec.png new file mode 100644 index 0000000000..1f08fcea14 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_br_sec.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_py_dtst.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_py_dtst.png new file mode 100644 index 0000000000..bc861c0168 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_py_dtst.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_py_pip.png 
b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_py_pip.png new file mode 100644 index 0000000000..0376b73079 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_py_pip.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_py_sec.png b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_py_sec.png new file mode 100644 index 0000000000..bbb4239783 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/image_py_sec.png differ diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/img/requirements.txt b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/requirements.txt new file mode 100644 index 0000000000..7d4a6ccffc --- /dev/null +++ b/tutorials/ai-core-genaihub-evaluation-with-grounding/img/requirements.txt @@ -0,0 +1,5 @@ +generative-ai-hub-sdk==4.4.3 +python-dotenv==1.0.1 +boto3==1.37.4 +pandas==2.2.3 +json2html==1.3.0 \ No newline at end of file diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/requirements.txt b/tutorials/ai-core-genaihub-evaluation-with-grounding/requirements.txt new file mode 100644 index 0000000000..7d4a6ccffc --- /dev/null +++ b/tutorials/ai-core-genaihub-evaluation-with-grounding/requirements.txt @@ -0,0 +1,5 @@ +generative-ai-hub-sdk==4.4.3 +python-dotenv==1.0.1 +boto3==1.37.4 +pandas==2.2.3 +json2html==1.3.0 \ No newline at end of file diff --git a/tutorials/ai-core-genaihub-evaluation-with-grounding/sample.env b/tutorials/ai-core-genaihub-evaluation-with-grounding/sample.env new file mode 100644 index 0000000000..09eeddf3f3 --- /dev/null +++ b/tutorials/ai-core-genaihub-evaluation-with-grounding/sample.env @@ -0,0 +1,13 @@ +# AICORE CREDENTIALS +AICORE_CLIENT_ID= +AICORE_CLIENT_SECRET= +AICORE_AUTH_URL= +AICORE_BASE_URL= + +# AWS CREDENTIALS +AWS_ACCESS_KEY= +AWS_BUCKET_ID= +AWS_REGION= +AWS_SECRET_ACCESS_KEY= +AWS_USERNAME= +AWS_HOST= diff --git 
a/tutorials/ai-core-genaihub-evaluation/ai-core-genaihub-evaluation.md b/tutorials/ai-core-genaihub-evaluation/ai-core-genaihub-evaluation.md new file mode 100644 index 0000000000..0356b561e0 --- /dev/null +++ b/tutorials/ai-core-genaihub-evaluation/ai-core-genaihub-evaluation.md @@ -0,0 +1,1951 @@ +--- +parser: v2 +auto_validation: true +time: 45 +primary_tag: software-product>sap-ai-core +tags: [ tutorial>beginner, topic>artificial-intelligence, topic>machine-learning, software-product>sap-ai-core ] +author_name: Smita Naik +author_profile: https://github.com/I321506 +--- + +# Generative AI Custom Evaluation - Quickstart + This tutorial demonstrates how to use SAP AI Core Custom Evaluation to benchmark Large Language Models (LLMs) using **Prompt Registry**. It guides you through environment setup, configuration creation, execution, and result analysis in a unified and simplified workflow. + +## You will learn +- How to prepare and organize datasets for evaluation. +- How to configure and run evaluations in SAP AI Core. +- How to analyze and interpret aggregated evaluation results. + +## Prerequisites +1. **BTP Account** + Set up your SAP Business Technology Platform (BTP) account. + [Create a BTP Account](https://developers.sap.com/group.btp-setup.html) +2. **For SAP Developers or Employees** + Internal SAP stakeholders should refer to the following documentation: [How to create BTP Account For Internal SAP Employee](https://me.sap.com/notes/3493139), [SAP AI Core Internal Documentation](https://help.sap.com/docs/sap-ai-core) +3. **For External Developers, Customers, or Partners** + Follow this tutorial to set up your environment and entitlements: [External Developer Setup Tutorial](https://developers.sap.com/tutorials/btp-cockpit-entitlements.html), [SAP AI Core External Documentation](https://help.sap.com/docs/sap-ai-core?version=CLOUD) +4. 
**Create BTP Instance and Service Key for SAP AI Core** + Follow the steps to create an instance and generate a service key for SAP AI Core: + [Create Service Key and Instance](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/create-service-key?version=CLOUD) +5. **AI Core Setup Guide** + Step-by-step guide to set up and get started with SAP AI Core: + [AI Core Setup Tutorial](https://developers.sap.com/tutorials/ai-core-setup.html) +6. An Extended SAP AI Core service plan is required, as the Generative AI Hub is not available in the Free or Standard tiers. For more details, refer to +[SAP AI Core Service Plans](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/service-plans?version=CLOUD) +7. **Orchestration Deployment** + Ensure at least one orchestration deployment is ready to be consumed during this process. +Refer to [this tutorial to understand the basic consumption of GenAI models using orchestration](https://developers.sap.com/tutorials/ai-core-orchestration-consumption.html). +8. **Basic Knowledge** + Familiarity with the orchestration workflow is recommended. +9. **Install Dependencies** + Install the required Python packages using the requirements.txt file provided. +Download [requirements.txt](img/requirements.txt) + +💡 Right-click the link above and choose **"Save link as..."** to download it directly. + +## Pre-Read + +This tutorial is designed for users who are new to SAP AI Core services and do not need flexibility in their use case. It is set up so that the evaluation is configured automatically for you; at a minimum, only a dataset is required. +It demonstrates a quick-start, simplified workflow for using AI Core's custom evaluation capabilities to benchmark Large Language Models (LLMs) and evaluate different prompts for a specific use case. 
 It utilizes the public [MedicationQA dataset](https://langtest.org/docs/pages/benchmarks/medical/medicationqa/) to showcase how to compute industry-standard metrics and assess the reliability of LLM-generated responses. + +### Environment Variables Setup + +[OPTION BEGIN [SAP AI Launchpad]] + +- Navigate to your SAP AI Core Launchpad. + +- In the Workspaces section, click on "Add" to create a new workspace. + - A workspace in SAP AI Core is a logical container that holds your resources (like models and pipelines) and provides the isolation needed for your projects. + +- When prompted, enter your AI Core credentials (such as Client ID, Client Secret, and Base URL). + - Note: If you're unsure about where to find these credentials, refer to this [guide](https://developers.sap.com/tutorials/ai-core-generative-ai.html#1c4f36d7-f345-4822-be00-c15f133ff7d8). + +- Once the workspace is successfully created, select your desired Resource Group to begin the evaluation process. + +Refer to the screenshot below for guidance: +![img](img/image_34.png) + +[OPTION END] + +[OPTION BEGIN [Python]] + +- Open **Visual Studio Code or Jupyter Notebook**. Create a new file with the .ipynb extension (e.g., custom_evaluation.ipynb). +- Create a **.env** file in the root directory of your project. +- Add your **AI Core** and **AWS credentials** as shown below. + +```env +# AICORE CREDENTIALS +AICORE_CLIENT_ID= +AICORE_CLIENT_SECRET= +AICORE_AUTH_URL= +AICORE_BASE_URL= +AICORE_RESOURCE_GROUP= + +# AWS CREDENTIALS +AWS_ACCESS_KEY= +AWS_BUCKET_ID= +AWS_REGION= +AWS_SECRET_ACCESS_KEY= + +# ORCHESTRATION DEPLOYMENT URL +DEPLOYMENT_URL= +``` + +**Note:** Replace placeholders (e.g., CLIENT_ID, CLIENT_SECRET, etc.) with your actual environment credentials. + +Refer to the screenshot below for clarity: +![img](img/image_1.png) + +#### Install Dependencies + +Install the required packages using the [requirements.txt](img/requirements.txt) file you downloaded in the Prerequisites section. 
+```bash +pip install -r requirements.txt +``` +#### Connect to AI Core Instance + +Once the environment variables are set and dependencies are installed, run the following code to connect to your instance: + +```PYTHON +# Loading the credentials from the env file +from gen_ai_hub.proxy.gen_ai_hub_proxy import GenAIHubProxyClient +from dotenv import load_dotenv +import os + +load_dotenv(override=True) + +# Fetching environment variables +AICORE_BASE_URL = os.getenv("AICORE_BASE_URL") +AICORE_RESOURCE_GROUP = os.getenv("AICORE_RESOURCE_GROUP") +AICORE_AUTH_URL = os.getenv("AICORE_AUTH_URL") +AICORE_CLIENT_ID = os.getenv("AICORE_CLIENT_ID") +AICORE_CLIENT_SECRET = os.getenv("AICORE_CLIENT_SECRET") + +AWS_ACCESS_KEY = os.getenv("AWS_ACCESS_KEY") +AWS_BUCKET_ID = os.getenv("AWS_BUCKET_ID") +AWS_REGION = os.getenv("AWS_REGION") +AWS_SECRET_ACCESS_KEY = os.getenv("AWS_SECRET_ACCESS_KEY") +DEPLOYMENT_URL = os.getenv("DEPLOYMENT_URL") + +# Initializing the GenAIHubProxyClient +client = GenAIHubProxyClient( + base_url=AICORE_BASE_URL, + auth_url=AICORE_AUTH_URL, + client_id=AICORE_CLIENT_ID, + client_secret=AICORE_CLIENT_SECRET, + resource_group=AICORE_RESOURCE_GROUP +) +``` + +**NOTE:** +- Ensure the **requirements.txt** installation completes successfully before running the code. +- If you face any issues, recheck your **.env** values and installed packages. + +[OPTION END] + +[OPTION BEGIN [Bruno]] + +- Download the [Bruno_collections](img/AI_Core.json) file. + +- Please follow the steps in the [Tutorial](https://developers.sap.com/tutorials/ai-core-orchestration-consumption.html) to set up your environment: refer to the step **Set Up Your Environment and Configure Access** and proceed until you have generated the token. + +[OPTION END] + +### Preparing Dataset Files + +[OPTION BEGIN [SAP AI Launchpad]] + +> **Note:** This step involves local setup using Python and does not require any action on the SAP AI Launchpad. 
+ +[OPTION END] + +[OPTION BEGIN [Python]] + +In this step, the evaluation notebook dynamically detects the dataset file from a predefined folder structure. +You are not required to hardcode the dataset filename. + +```Python +import os +import json + +def get_dataset_file_name(folder_path): + """ + Retrieves the name of the first file in the specified folder. + """ + if not os.path.isdir(folder_path): + print(f"The folder path '{folder_path}' does not exist.") + return None + + items_in_folder = os.listdir(folder_path) + + for item in items_in_folder: + item_path = os.path.join(folder_path, item) + if os.path.isfile(item_path): + return item + + print(f"No files were found in the folder '{folder_path}'.") + return None + + +# --- MAIN EXECUTION --- +DATASET_FOLDER = "./DATASET" + +DATASET_NAME = get_dataset_file_name(DATASET_FOLDER) + +if DATASET_NAME: + print(f"Dataset name: {DATASET_NAME}") +else: + print("Missing run or dataset file.") + raise SystemExit("Exiting due to missing run/dataset file.") +``` + +![img](img/image_py_dtst.png) + +[OPTION END] + +[OPTION BEGIN [Bruno]] + +> **Note:** This step involves local setup using Python and does not require any action on Bruno. + +[OPTION END] + +### Registering an Object Store Secret in AI Core + +[OPTION BEGIN [SAP AI Launchpad]] + +- Open the **SAP AI Core Launchpad** and navigate to the **Administration** tab. +- Select the **Object Store** section from the left-hand menu. +- Click on **“Add”** to register a new object store secret. +- Fill in the required bucket details as shown in the screenshot below. + +![img](img/image_33.png) + +In the **Secret** field, use the following structure to provide your AWS credentials: + +```json +{ + "AWS_ACCESS_KEY_ID": "Enter Your value", + "AWS_SECRET_ACCESS_KEY": "Enter Your value" +} +``` + +[OPTION END] + +[OPTION BEGIN [Python]] + +To make your evaluation files available for AI Core orchestration, you need to: + +- Upload them to an object store (e.g., AWS S3). 
+- Register the object store secret in AI Core.
+
+#### **Setup Authentication and Headers**
+
+First, define the authentication headers for AI Core REST API calls.
+
+```PYTHON
+def _get_headers():
+    headers = {
+        "Authorization": client.get_ai_core_token(),
+        "AI-Resource-Group": AICORE_RESOURCE_GROUP,
+        "Content-Type": "application/json",
+    }
+    return headers
+```
+
+#### **Register Object Store Secret in AI Core**
+
+Register your S3 bucket and credentials as a secret.
+
+```PYTHON
+# Register S3 secret with AI Core which will be used as an input source
+import requests
+import json
+import logging
+
+def delete_oss_secret(oss_name=""):
+    headers = _get_headers()
+
+    DELETE_SECRETS_ENDPOINT = f'/v2/admin/objectStoreSecrets/{oss_name}'
+    request_url = f"{AICORE_BASE_URL}{DELETE_SECRETS_ENDPOINT}"
+
+    try:
+        response = requests.delete(request_url, headers=headers, timeout=120)
+        if response.status_code == 202:
+            print(f"Successfully deleted object store secret: {oss_name}")
+        elif response.status_code == 404:
+            print(f"Object store secret not found: {oss_name}. It may not exist.")
+        else:
+            logging.error(f"Failed to delete object store secret: {oss_name}, Status Code: {response.status_code}")
+    except Exception as e:
+        logging.error(f"Error occurred while attempting to delete object store secret: {e}")
+        raise
+
+def register_oss_secret(oss_name="", path_prefix=""):
+    headers = _get_headers()
+
+    POST_SECRETS_ENDPOINT = '/v2/admin/objectStoreSecrets'
+    request_url = f"{AICORE_BASE_URL}{POST_SECRETS_ENDPOINT}"
+
+    request_body = {
+        "name": oss_name,
+        "data": {
+            "AWS_ACCESS_KEY_ID": AWS_ACCESS_KEY,
+            "AWS_SECRET_ACCESS_KEY": AWS_SECRET_ACCESS_KEY
+        },
+        "type": "S3",
+        "bucket": AWS_BUCKET_ID,
+        "endpoint": "s3-eu-central-1.amazonaws.com",
+        "region": AWS_REGION,
+        "pathPrefix": path_prefix,
+        "verifyssl": "0",
+        "usehttps": "1",
+    }
+    try:
+        response = requests.post(
+            request_url, headers=headers, data=json.dumps(request_body), timeout=120
+        )
+        result = response.json()
+        return result
+    except Exception:
+        logging.error("Error occurred while attempting to create object store secret")
+        raise
+
+delete_oss_secret(oss_name="default")
+delete_oss_secret(oss_name="genai-quick-data-notebook")
+
+register_oss_secret(oss_name="default", path_prefix="")
+register_oss_secret(oss_name="genai-quick-data-notebook", path_prefix="")
+```
+
+![img](img/image_objsec.png)
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+
+Generic secrets securely store the AWS S3 credentials required for document access.
+
+• Expand **objectStoreSecrets** under **admin** and select the create-secret request.
+
+Use the payload below to create a secret for AWS S3 with NoAuthentication as the authentication type.
+```CODE
+{
+  "name": "genai-data",
+  "data": {
+    "AWS_ACCESS_KEY_ID": "",
+    "AWS_SECRET_ACCESS_KEY": ""
+  },
+  "type": "S3",
+  "bucket": "",
+  "endpoint": "",
+  "region": "",
+  "pathPrefix": ""
+}
+```
+• Ensure that all values in the data dictionary are Base64-encoded as per AWS S3 credential requirements.
+
+![img](img/image-br01.png)
+
+[OPTION END]
+
+> ⚠️ **Important Note (Must Read)**
+>
+> - You must **create an object store secret** with a user-defined name (for example, `default`) to store **output artifacts** from orchestration runs. This is **mandatory**.
+> - For **input artifacts**, you may create additional object store secrets with different names if needed.
+> - If such a secret is not configured, orchestration runs will **fail** due to the missing output target setup.
+
+
+### Upload and Register Dataset
+
+[OPTION BEGIN [SAP AI Launchpad]]
+
+After creating the secret, upload your evaluation files to the S3 bucket and register them as an artifact in AI Core.
+
+#### **Register Uploaded Files as Artifact in AI Core**
+
+To register your evaluation dataset with SAP AI Core, you need to upload it as an artifact. Follow the instructions below using the **SAP AI Launchpad UI**.
+
+---
+
+- Open the **SAP AI Core Launchpad**.
+- Navigate to the **Generative AI/Optimization/Artifacts** section to create the dataset artifact.
+
+![img](img/image_19.png)
+
+- On the **Artifacts** section, click **Add**.
+
+---
+
+- On the **General Information** screen, enter the following:
+
+  - **Select Scenario:** `genai-evaluations`
+  - **Name:** `genai-eval-test-data`
+  - **Description:** `Demo artifacts for evaluation flow.`
+  - **Select Object Store:** `genai-data`
+  - **Sub-folder path:** `genaiEvaluation/<your-user-id>/`
+
+  > 💡 Replace `<your-user-id>` with your **SAP BTP user ID** or the folder path in your object store where the evaluation files are uploaded.
+ +- On the **Labels** screen, click **“Add Label”** and provide the following: + + - **Key:** `prompt-evaluation` + - **Value:** `true` + *(Note: The prefix `ext.ai.sap.com/` is automatically pre-filled in the UI.)* + + ![img](img/image_21.png) + +- Review all entered details carefully. +- Click **“Add”** to complete the artifact registration. + +[OPTION END] + +[OPTION BEGIN [Python]] + +After creating the secret, organize your evaluation files into the eval/ folder testdata. Upload them to S3 and register as artifacts in AI Core. + +#### **Upload Files to S3 Bucket** +```python +# uploading these files to Object store to register as an artifact inside ai core + +import boto3 +import os +import uuid + +def upload_folder_to_s3(folder_path, bucket_name, s3_prefix=""): + """ + Upload a folder to an S3 bucket recursively. + + :param folder_path: The local folder path to upload. + :param bucket_name: The name of the S3 bucket. + :param s3_prefix: Optional prefix to use for the S3 keys (e.g., subfolder in the bucket). 
+    """
+    s3_client = boto3.client(
+        's3',
+        aws_access_key_id=AWS_ACCESS_KEY,
+        aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
+        region_name=AWS_REGION
+    )
+
+    for root, dirs, files in os.walk(folder_path):
+        for file_name in files:
+            local_path = os.path.join(root, file_name)
+            # Compute the relative path for the S3 key
+            relative_path = os.path.relpath(local_path, folder_path)
+            s3_key = os.path.join(s3_prefix, relative_path).replace("\\", "/")  # Ensure S3-compatible paths
+            print(f"Uploading {local_path} to s3://{bucket_name}/{s3_key}")
+
+            # Upload the file
+            s3_client.upload_file(local_path, bucket_name, s3_key)
+
+# Example usage
+folder_to_upload_testdata = "../DATASET"
+user_directory_prefix = ""  # replace with your i-number as string here
+# Fall back to a random GUID when no user prefix is provided
+prefix_guid = user_directory_prefix if user_directory_prefix else str(uuid.uuid4().hex)
+s3_testdata_prefix = f"genaiEvaluation/{prefix_guid}/testdata"  # Leave empty for root of the bucket
+
+
+upload_folder_to_s3(folder_to_upload_testdata, AWS_BUCKET_ID, s3_testdata_prefix)
+input_artifact_path = f"ai://genai-quick-data-notebook/genaiEvaluation/{prefix_guid}"
+```
+ ![img](img/image_5.png)
+
+#### **Register Uploaded Files as Artifact in AI Core**
+
+```Python
+import requests
+import logging
+# Registering the uploaded files from AWS as artifacts to use inside configuration.
+
+def register_artifact():
+    headers = _get_headers()
+
+    GET_ARTIFACTS_ENDPOINT = '/v2/lm/artifacts'
+    request_url = f"{AICORE_BASE_URL}{GET_ARTIFACTS_ENDPOINT}"
+
+    request_body = {
+        "labels": [
+            {
+                "key": "ext.ai.sap.com/prompt-evaluation",
+                "value": "true"
+            }
+        ],
+        "name": "genai-eval-simplified-test-data",
+        "kind": "other",
+        "url": input_artifact_path,  # input artifact path
+        "description": "demo artifacts for evaluation flow.",
+        "scenarioId": "genai-evaluations"
+    }
+    try:
+        response = requests.post(
+            request_url, headers=headers, data=json.dumps(request_body), timeout=120
+        )
+        result = response.json()
+        print(result)
+        return result['id']
+    except Exception:
+        logging.error("Error occurred while attempting to register artifact")
+        raise
+
+artifact_id = register_artifact()
+```
+![img](img/image_6.png)
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+
+Before registering a dataset artifact in Bruno, you must upload your CSV file to the SAP AI Core object store using the Dataset API.
+
+Bruno cannot upload files directly to S3; therefore, this step is required.
+
+**Prerequisites**
+
+- An object store secret must already exist in your resource group. Typically, this is the default secret named **default**.
+
+- The Dataset API currently supports:
+
+    - S3 object stores only
+
+    - CSV file uploads
+
+**Upload Your Dataset**
+
+Use the Dataset API – Upload File request in Bruno:
+
+```bash
+PUT {{ai_api_url}}/v2/lm/dataset/files/{{secretName}}/{{datasetPath}}
+```
+
+**Headers**
+
+```
+Authorization: Bearer {{token}}
+AI-Resource-Group: {{resourceGroup}}
+Content-Type: text/csv
+```
+
+**Body**
+
+Upload your .csv file directly as binary in Bruno’s Body.
+
+Example path values:
+
+  - secretName: default
+
+  - datasetPath: testdata/medicalqna_dataset.csv
+
+![img](img/image_br_dt.png)
+
+**Note:**
+
+Save the ai://… URL; you will use it when creating the dataset artifact.
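For reference, the same upload request can also be scripted. The following minimal Python sketch mirrors the Bruno request above; `build_upload_request` is a hypothetical helper shown only to illustrate how the URL and headers fit together, and the actual PUT call is left commented out.

```python
def build_upload_request(ai_api_url, token, resource_group, secret_name, dataset_path):
    """Assemble the URL and headers for the Dataset API upload (PUT) call,
    matching the Bruno request shown above."""
    url = f"{ai_api_url}/v2/lm/dataset/files/{secret_name}/{dataset_path}"
    headers = {
        "Authorization": f"Bearer {token}",
        "AI-Resource-Group": resource_group,
        "Content-Type": "text/csv",
    }
    return url, headers

# Example usage (placeholder values):
url, headers = build_upload_request(
    "https://api.ai.example.com", "<token>", "default",
    "default", "testdata/medicalqna_dataset.csv",
)
# Sending the CSV as binary would then look like:
# import requests
# with open("medicalqna_dataset.csv", "rb") as f:
#     requests.put(url, headers=headers, data=f, timeout=120)
```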
+ +**Register the Dataset Artifact** + +- Click on **Register artifact** under lm -> artifacts in bruno collection to register the artifact + +```CODE +{ + "name": "aiconfig", + "kind": "dataset", + "url": "ai://default/testdata/medicalqna_dataset.csv", + "scenarioId": "genai-evaluations" +} +``` +![img](img/image-br02.png) + +[OPTION END] + +### Create a Prompt Template in Prompt Registry + +[OPTION BEGIN [SAP AI Launchpad]] + +A Prompt Template defines: + + - The message roles (system, user, etc.) + + - Variables that get substituted from your dataset (e.g., questions) + + - Optional model configuration (temperature, max tokens, etc.) + +We’ll create a prompt template to guide the model to answer the questions + +**create the Prompt Template** + +- In SAP AI Launchpad, go to the left-hand menu and select Generative AI Hub → Prompt Management. + +- click on Templates → create + +- This is where you can define reusable templates with variables for evaluations. + +![img](img/image_007.png) + +**Define the Prompt** + +In the Message Blocks section: + +- Add a System role message: +```json +{ + "template": [ + { + "role": "user", + "content": "List the benefits and side effects of the drug in the following consumer health question: {{?question}}." + } + ] +} +``` + +**Configure Variables** + +Scroll down to Variable Definitions and add entries for each variable: + +- question + + - Default Value: leave empty or set to en for fallback + +This ensures the placeholders are dynamically substituted during evaluation. + +![img](img/image_008.png) + +**Save the Template** + +Click Save Template (top right): + +- Scenario → genai-evaluations + +- Name → prompt-registry-eval-acc-test + +- Version → 1.0.0 + +Click Save to persist the template. + +**Verify the Template** + +Go to Generative AI Hub → Prompt Management → Templates and confirm: + +- The template appears with the correct name, scenario, and version. + +- Managed By → shows how the template is stored. 
+ +- Versioning is tracked automatically + +![img](img/image_10.png) + +[OPTION END] + +[OPTION BEGIN [Python]] + +The following code defines a function `create_prompt_template()` that creates a new **Prompt Template** in the SAP AI Core **Prompt Registry**. + +```python +def create_prompt_template(): + headers = _get_headers() + GET_PROMPT_TEMPLATES_ENDPOINT = '/v2/lm/promptTemplates' + request_url = f"{AICORE_BASE_URL}{GET_PROMPT_TEMPLATES_ENDPOINT}" + + + prompt_template = { + "template": [ + { + "role": "user", + "content": "List the benefits and side effects of the drug in the following consumer health question: {{?question}}." + } + ] + } + + request_body = { + "name": "prompt-registry-eval-demo", + "version": "1.0.0", + "scenario": "genai-evaluations", + "spec": prompt_template + } + try: + response = requests.post( + request_url, headers=headers, data=json.dumps(request_body), timeout=120 + ) + if(response.status_code != 200): + raise + result = response.json() + print(result) + return result['id'] + except: + logging.error("Error occurred while attempting to create a prompt template") + raise + +prompt_template_id = create_prompt_template() +``` +![img](img/image__py_pmtreg.png) + +**Note** + +If you wish to use a prompt template that already exists in prompt registry, you can manually set prompt_template_id in the next cell and skip executing this cell + +If you already have an existing template set the ID manually: + +```python +prompt_template_id = "" +``` + +[OPTION END] + +[OPTION BEGIN [Bruno]] + +In Bruno, you can create a prompt template by sending a POST request to the AI Core API: + +**Request: Create Prompt Template** + +**URL:** + +```bash +POST {{api_url}}/v2/lm/promptTemplates +``` + +**Headers:** +``` +Authorization: Bearer {{access_token}} +Content-Type: application/json +``` + +**Body (JSON):** +```json +{ + "name": "prompt-registry-eval-acc-test", + "version": "1.0.0", + "scenario": "genai-evaluations", + "spec": { + "template": [ + { + 
"role": "user", + "content": "List the benefits and side effects of the drug in the following consumer health question: {{?question}}." + } + ], + "defaults": {}, + "additional_fields": { + "modelParams": { + "temperature": 0.3, + "max_tokens": 100 + }, + "modelGroup": "chat" + } + } +} +``` +![img](img/image_br_pr.png) + +[OPTION END] + +🔑 Tip: Always increment the version (e.g., 1.0.1, 1.0.2) when updating a template. This ensures reproducibility across evaluations. + +### Providing Models and Metrics for Evaluation + +Metrics determine how your model outputs are evaluated during an evaluation run. They define the scoring logic that SAP AI Core uses to compare models, measure quality, and validate improvements over time. + +Metrics must be supplied before creating an Evaluation Configuration. + +[OPTION BEGIN [SAP AI Launchpad]] + +In SAP AI Launchpad, metrics are selected visually during the Evaluation Configuration creation flow, the UI provides a selectable list of available metrics. + +1. Go to Generative AI Hub → Optimization. + +2. Click Create to start a new evaluation configuration. + +![img](img/image_25.png) + +- Select Test Input, then: + + - Select the prompt and select more than one model + + - Select your registered dataset artifact + + - Enter the dataset path (example): + testdata/medicalqna_dataset.csv + + - Set the number of test samples (e.g., 20) + + ![img](img/image_ail_26.png) + +- Click **Next** to go to Metrics selection. + +#### Select Evaluation Metrics + +Choose the metrics you want to evaluate. 
+
+You may choose one or multiple system-defined metrics. Examples:
+
+  - BERT Score
+
+  - Pointwise Answer Relevance
+
+  - Pointwise Correctness
+
+  - Pointwise Instruction Following
+
+![img](img/image_27.png)
+
+---
+
+> 📘 **Helpful Resources**:
+>
+> - [System-Defined Evaluation Metrics – SAP Documentation](https://help.sap.com/docs/sap-ai-core/generative-ai-hub/system-defined-evaluation-metrics)
+
+> **Note: You may select additional metrics based on your use case.**
+
+---
+
+[OPTION END]
+
+[OPTION BEGIN [Python]]
+
+**Select your Models**
+
+Add the models you wish to use in the string `selected_models_str`:
+
+```Python
+# Manual selection of models
+selected_models_str="gemini-2.5-pro:001,gpt-4o:2024-08-06,gpt-5:2025-08-07"
+print("Selected models string:", selected_models_str)
+```
+
+**Metrics Handling in Python Notebook (Automatic Detection & Creation)**
+
+When running the evaluation through the Python notebook, metric setup is partially automated.
+Before the evaluation configuration is created, the script performs the following:
+
+  - Users can manually specify metric IDs
+
+  - It checks whether each metric already exists in AI Core
+
+  - If a metric is not found, it is created automatically
+
+  - It prints the final list of metric IDs used for evaluation
+
+This ensures all metrics exist before the evaluation configuration is created.
+
+```python
+# Manual Selection of Metrics
+selected_metrics_str = "Pointwise Conciseness,Pointwise Instruction Following,Pointwise Correctness,Pointwise Answer Relevance,Exact Match,BLEU,ROUGE,Content Filter on Input,Content Filter on Output"
+print(selected_metrics_str)
+```
+![img](img/image_py03.png)
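The check-and-create flow described above can be sketched in plain Python. Here `existing_metrics` stands in for the name-to-ID mapping the notebook fetches from AI Core; the REST lookup and creation calls themselves are omitted, so this is only an illustration of the resolution logic.

```python
def resolve_metric_ids(selected_metrics_str, existing_metrics):
    """Split the comma-separated metric selection and look each name up.

    Returns (resolved_ids, missing_names); the missing names are the metrics
    the notebook would go on to create before building the configuration.
    """
    selected = [name.strip() for name in selected_metrics_str.split(",") if name.strip()]
    resolved, missing = [], []
    for name in selected:
        if name in existing_metrics:
            resolved.append(existing_metrics[name])
        else:
            missing.append(name)
    return resolved, missing

# Example with a stand-in lookup table (IDs are illustrative):
ids, to_create = resolve_metric_ids(
    "Exact Match,BLEU,My Custom Metric",
    {"Exact Match": "m-001", "BLEU": "m-002"},
)
print(ids)        # ['m-001', 'm-002']
print(to_create)  # ['My Custom Metric']
```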
+ +[OPTION END] + +[OPTION BEGIN [Bruno]] + +You can directly pass models and system metrics in your configuration: + +Example Models: + +```json +"models":"gemini-2.5-pro:001,gpt-4o:2024-08-06,gpt-5:2025-08-07" +``` + +Example metrics: + +```json +"metrics": "Pointwise Conciseness,Pointwise Instruction Following,Pointwise Correctness,Pointwise Answer Relevance,Exact Match,BLEU,ROUGE,Content Filter on Input,Content Filter on Output" +``` + +[OPTION END] + +**Note:** + +To compare different models and generate a leaderboard, you must select more than one model. +When multiple models are provided, the evaluation system automatically creates separate +evaluation runs for each model within the same execution. This enables the evaluation workflow +to compare the runs and compute head-to-head win rates across the selected models. + +### Define and Create Evaluation Configurations + +[OPTION BEGIN [SAP AI Launchpad]] + +Once your dataset artifact is registered, the next step is to create an Evaluation Configuration. + +An Evaluation Configuration tells SAP AI Core: + + - which dataset to evaluate + + - which prompt/model or orchestration config to use + + - which metrics to compute + + - which orchestration deployment endpoint to call + + - how many repetitions to run + + - which test dataset file to load + +This configuration becomes the blueprint for your evaluation execution. + +**Steps to Create Evaluation Configuration** + +In Additional Configuration + +- Set **Number of Repetitions** to `1`. +- Choose an existing deployment for **Orchestration Endpoint**. + + ![img](img/image_29.png) +--- + +#### Final Review & Start + +- Review all the details on the summary page. +- Once confirmed, click **Create** to start the evaluation job. + +![img](img/image_40.png) + +> ✅ You have now successfully configured and triggered a Generative AI Evaluation. 
+ +[OPTION END] + +[OPTION BEGIN [Python]] + +When using the Python notebook, the evaluation configuration is created automatically based on your selections. +Before creating the configuration, the notebook will: + + - Load the dataset artifact ID + + - Resolve metric IDs + + - Load prompt template IDs + + - Validate all required parameters + +**Sample parameter setup:** + +```Python +import json +test_data_path = f"testdata/{DATASET_NAME}" # specify the test data path here. For the full folder just specifying testdata will work +test_datasets = json.dumps({'path': test_data_path, 'type': 'csv'}) +metrics_list = selected_metrics_str +models_list = selected_models_str +print(f"Selected metrics: {metrics_list}") +print(f"Selected models: {models_list}") +orchestration_deployment_url = deployment_url # needs to specify this to use a specific deployment id +repetitions = "1" +``` + +#### Create Configuration Body + +The notebook builds the configuration using the required SAP AI Core fields: + + - scenarioId + + - executableId + + - dataset artifact binding + + - selected metrics + + - test dataset details + + - repetitions + + - orchestration deployment URL + + - promptTemplate + + - models. + +The following function dynamically creates the configuration body for AI Core. + +```Python +# creating an AICORE Configuration. 
+import requests + +request_body = { + "name": "genai-eval-conf", + "scenarioId": "genai-evaluations", + "executableId": "genai-evaluations-simplified", + "inputArtifactBindings": [ + { + "key": "datasetFolder", + "artifactId": artifact_id + } + ], + "parameterBindings": [ + { + "key": "repetitions", + "value": repetitions + }, + { + "key": "orchestrationDeploymentURL", + "value": orchestration_deployment_url + }, + { + "key": "metrics", + "value": metrics_list + }, + { + "key": "testDataset", + "value": test_datasets + }, + { + "key": "promptTemplate", + "value": prompt_template_id + }, + { + "key": "models", + "value": models_list + } + ] +} + +def create_aicore_configuration(): + headers = _get_headers() + GET_CONFIGURATIONS_ENDPOINT = '/v2/lm/configurations' + request_url = f"{AICORE_BASE_URL}{GET_CONFIGURATIONS_ENDPOINT}" + try: + print(request_body) + response = requests.post( + request_url, headers=headers, data=json.dumps(request_body), timeout=120 + ) + print(response) + if(response.status_code != 201): + raise Exception(f"Failed to create configuration: {response.status_code} - {response.text}") + result = response.json() + print(result) + print(request_body) + return result['id'] + except: + logging.error("Error occurred while attempting to create a Configuration") + raise + +configuration_id = create_aicore_configuration() +``` + +You will receive a configuration ID, which is required for the next step (Execution). + +![img](img/image_py_con.png) + +SAP AI Core returns a configuration ID, which is used to trigger the evaluation execution. + +[OPTION END] + +[OPTION BEGIN [Bruno]] + +When creating an Evaluation Configuration through Bruno, you call: + +```bash +POST {{api_url}}/v2/lm/configurations +``` + +Below is the sample request body to create configuration. 
+ +```json +{ + "name": "genai-eval-conf", + "scenarioId": "genai-evaluations", + "executableId": "genai-evaluations-simplified", + "inputArtifactBindings": [ + { + "key": "datasetFolder", + "artifactId": "{{artifactId}}" + } + ], + "parameterBindings": [ + { + "key": "repetitions", + "value": "1" + }, + { + "key": "orchestrationDeploymentURL", + "value": "{{deployment_url}}" + }, + { + "key": "metrics", + "value": "language_match" + }, + { + "key": "testDataset", + "value": "{\"path\": \"testdata/{{dataset_file}}\", \"type\": \"csv\"}" + }, + { + "key": "promptTemplate", + "value": "{{prompt_template_id}}" + }, + { + "key": "models", + "value": "{{model_name}}:{{model_version}}" + } + ] +} +``` +![img](img/image-br03.png) + +[OPTION END] + +### Create and Run Evaluation Execution + +After creating the Evaluation Configuration, the next step is to execute it. + +Execution triggers the evaluation workflow, which: + + - Reads the test dataset + + - Generates submissions to the orchestration service + + - Collects model outputs + + - Computes all selected metrics + + - Produces aggregate and raw evaluation results + +The process is identical for SAP AI Launchpad, Python, and Bruno, with only the invocation method differing. + +[OPTION BEGIN [SAP AI Launchpad]] + +- Once the evaluation configuration is created, the system automatically triggers an evaluation execution. + +- Follow these steps to monitor its progress and verify completion: + + - Navigate to **ML Operations** in the SAP AI Core Launchpad. + + - In the sidebar, click **Executions**. + + ![img](img/image_41.png) + + - Locate the most recent execution triggered by your evaluation configuration. You can use the timestamp or configuration name to identify it. + + - Click on the execution entry to open its details. The Current Status will update as the process runs. + + ![img](img/image_31.png) + +- Once the Target Status reaches **COMPLETED** , your evaluation has successfully finished. 
+
+> [For more information](https://help.sap.com/docs/sap-ai-core/generative-ai-hub/create-evaluation)
+
+**Track Execution Status**
+
+The execution page will show:
+
+  - Unknown
+
+  - Pending
+
+  - Running
+
+  - Completed
+
+Once completed, you can navigate to:
+
+  - Outputs → Tracking Metrics (aggregate results)
+
+  - Output Artifacts (raw results stored in the SQLite DB)
+
+[OPTION END]
+
+[OPTION BEGIN [Python]]
+
+Once the configuration is ready, the next step is to trigger an execution.
+An execution is a single evaluation run based on the configuration you defined.
+
+**Create Execution**
+
+The following function starts the evaluation in SAP AI Core using the configuration ID:
+
+```python
+# create an execution with the created configuration.
+
+import requests
+def create_execution():
+    headers = _get_headers()
+    GET_EXECUTIONS_ENDPOINT = '/v2/lm/executions'
+    request_url = f"{AICORE_BASE_URL}{GET_EXECUTIONS_ENDPOINT}"
+    request_body = {"configurationId": configuration_id}
+    try:
+        response = requests.post(
+            request_url, headers=headers, data=json.dumps(request_body), timeout=120
+        )
+        print("response received is ", response)
+        result = response.json()
+        print(result)
+        return result['id']
+    except Exception:
+        logging.error("Error occurred while attempting to create an execution")
+        raise
+
+
+execution_id = create_execution()
+```
+![img](img/image_44.png)
+
+#### Monitor Execution Status
+
+The execution progresses through states:
+
+UNKNOWN → PENDING → RUNNING → COMPLETED
+
+```python
+# get execution status
+import requests
+def get_execution_status(execution_id):
+    headers = _get_headers()
+    LOG_EXECUTIONS_ENDPOINT = f'/v2/lm/executions/{execution_id}'
+    request_url = f"{AICORE_BASE_URL}{LOG_EXECUTIONS_ENDPOINT}"
+    try:
+        response = requests.get(
+            request_url, headers=headers, timeout=120
+        )
+        print("response received is ", response)
+        result = response.json()
+        return result
+    except Exception:
+        logging.error("Error occurred while attempting to get execution status")
+        raise
+
+
+get_execution_status(execution_id)
+```
+
+#### Automatic Polling
+
+To continuously monitor until the evaluation finishes:
+
+```python
+# Polling the execution status until it is COMPLETED or DEAD or a timeout occurs
+import time
+
+def poll_execution_status(execution_id, timeout_minutes=1800, poll_interval=30):
+    start_time = time.time()
+    while True:
+        result = get_execution_status(execution_id)
+        print(f"Execution Status: {result.get('status')}")
+        if result.get("status") == "COMPLETED":
+            print(f"Execution completed successfully in {time.time() - start_time} seconds, proceed to fetch results.")
+            break
+        if result.get("status") == "DEAD":
+            print(f"Execution failed with status DEAD in {time.time() - start_time} seconds. Check the logs for more details.")
+            break
+        if time.time() - start_time > timeout_minutes * 60:
+            raise TimeoutError(f"Execution status polling timed out after {timeout_minutes} minutes.")
+        time.sleep(poll_interval)
+
+```
+
+![img](img/image_45.png)
+
+✅ Once the execution status shows COMPLETED, the evaluation results are available and can be analyzed in the next step.
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+
+After creating the configuration, the next step is to trigger the evaluation workload by creating an AI Core execution.
+
+**Create an Execution with the Created Configuration**
+
+- Click **create execution** under **executions** and pass the configuration ID created in the previous step.
+
+![img](img/image-br04.png)
+
+- The status field progresses through different states over time:
+UNKNOWN → PENDING → RUNNING → COMPLETED.
+
+**Get Execution Status**
+
+Check the status of the created execution by passing the execution ID. The current status will update as the process runs; refer to the image below.
+
+![img](img/image-br05.png)
+
+[OPTION END]
+
+### View and Analyze Evaluation Results
+
+Once the evaluation execution is complete, SAP AI Core generates both aggregated metrics and detailed instance-level results.
+These results help compare model performance, understand quality metrics, and debug issues.
+
+[OPTION BEGIN [SAP AI Launchpad]]
+
+Once the evaluation workflow execution is completed, this step retrieves the aggregated evaluation metrics from the SAP AI Core service by specifying the run name.
+
+1. Go to **Optimization**.
+
+2. In the **Runs** section, select the runs you created.
+
+3. View the detailed results of a run across your selected metrics and models.
+
+This is the easiest way to visually inspect evaluation outcomes and compare multiple model runs.
+
+![img](img/image_46_01.png)
+
+- Compare run performance across your selected metrics. Metrics are aggregated at run level.
+
+![img](img/image_46.png)
+
+![img](img/image_46a.png)
+
+[OPTION END]
+
+[OPTION BEGIN [Python]]
+
+The notebook includes utility scripts to retrieve aggregated metrics, download detailed artifacts, and inspect the SQLite results. For each evaluated run, these scripts fetch:
+
+  - Aggregated evaluation metrics
+
+  - Raw instance-level results
+
+which the notebook then prepares for ranking and scoring.
+
+**Retrieve Aggregate Metrics (Tracking API)**
+
+Aggregated metrics summarize performance across all test samples.
+To fetch them using the execution ID:
+
+```Python
+# Get aggregate metrics using execution id
+import requests
+def retrieve_aggregate_metrics(execution_id):
+    headers = _get_headers()
+    GET_METRICS_ENDPOINT = f'/v2/lm/metrics?tagFilters=evaluation.ai.sap.com/child-of={execution_id}'
+    request_url = f"{AICORE_BASE_URL}{GET_METRICS_ENDPOINT}"
+    try:
+        response = requests.get(request_url, headers=headers, timeout=120)
+        print("response received is ", response)
+        result = response.json()
+        return result
+    except Exception:
+        logging.error("Error occurred while attempting to retrieve aggregate metrics for the run")
+        raise
+
+runs_data = retrieve_aggregate_metrics(execution_id)
+```
+
+**Transform Metrics by Model**
+
+Each run contains tags that identify the evaluated model.
+
+```python
+import pandas as pd
+from IPython.display import HTML
+
+def get_model_from_run(run):
+    for tag in run.get("tags", []):
+        if tag.get("name") == "evaluation.ai.sap.com/model":
+            return tag.get("value")
+
+def aggregate_metrics_by_model(runs_list):
+    transformed_data = []
+    for run in runs_list:
+        model = get_model_from_run(run)
+        for metric in run["metrics"]:
+            metric_value = metric.get("value")
+
+            # Override only for /mode
+            if metric.get("name").endswith("/mode"):
+                for label in metric.get("labels", []):
+                    if label.get("name") == "evaluation.ai.sap.com/mode_category":
+                        metric_value = label.get("value")
+                        break
+            output_json = {
+                "model": model,
+                "metrics_name": metric.get("name"),
+                "metric_value": metric_value
+            }
+            transformed_data.append(output_json)
+    return transformed_data
+
+
+def create_metrics_pivot_table(transformed_data):
+    """
+    Creates a pivot table where rows are models and columns are metrics.
+
+    Args:
+        transformed_data: List of dictionaries with 'model', 'metrics_name', 'metric_value'
+
+    Returns:
+        DataFrame with models as rows and metrics as columns
+    """
+    # Convert list of dictionaries to DataFrame
+    df = pd.DataFrame(transformed_data)
+
+    # Create pivot table
+    pivot_table = df.pivot_table(
+        index='model',
+        columns='metrics_name',
+        values='metric_value',
+        aggfunc='first'  # Use 'first' to get the single value, or 'mean' if there are duplicates
+    )
+
+    return pivot_table
+
+transformed_data = aggregate_metrics_by_model(runs_data['resources'])
+metrics_pivot = create_metrics_pivot_table(transformed_data)
+
+HTML(metrics_pivot.to_html())
+```
+![img](img/image_47.png)
+
+**Download Raw Results (Output Artifact)**
+
+All detailed evaluation outputs are stored as an output artifact in your object store. To download all output files programmatically:
+
+```python
+# download the result artifacts from Object store.
+import boto3 + +def download_all_objects(prefix, destination_folder): + """ + Recursively download all objects from an S3 bucket starting with a specific prefix. + + :param bucket_name: Name of the S3 bucket. + :param prefix: Prefix to filter objects in the bucket. + :param destination_folder: Local folder to save the downloaded files. + """ + s3_client = boto3.client( + 's3', + aws_access_key_id=AWS_ACCESS_KEY, + aws_secret_access_key=AWS_SECRET_ACCESS_KEY, + region_name=AWS_REGION + ) + + # Ensure the destination folder exists + if not os.path.exists(destination_folder): + os.makedirs(destination_folder) + + # Paginate through objects + paginator = s3_client.get_paginator('list_objects_v2') + pages = paginator.paginate(Bucket=AWS_BUCKET_ID, Prefix=prefix) + + for page in pages: + if 'Contents' in page: + for obj in page['Contents']: + key = obj['Key'] + local_file_path = os.path.join(destination_folder, os.path.relpath(key, prefix)) + + # Ensure the local directory structure exists + local_directory = os.path.dirname(local_file_path) + if not os.path.exists(local_directory): + os.makedirs(local_directory) + + # Download the object + print(f"Downloading {key} to {local_file_path}") + s3_client.download_file(AWS_BUCKET_ID, key, local_file_path) + + +# Download the evaluation results from the object store. Look at execution status under "outputArtifacts" key to see the 'url' +# which shows the data path of where your output results are stored +EXECUTION_ID = execution_id +sqlite_db_prefix = f'{EXECUTION_ID}/tmp/' # change the prefix based on where your output artifact is stored in the bucket. +destination_folder = 'results-new' + +download_all_objects(sqlite_db_prefix, destination_folder) +``` + +![img](img/image_48.png) + +**View Detailed Results (SQLite DB)** + +The evaluation stores detailed instance-level results in results.db. + +Example: Reading SQLite tables: + +```python +# viewing the results from sqlite db in tabular format.. 
+import sqlite3
+import pandas as pd
+from IPython.display import display, HTML
+
+# Path to your SQLite database file
+db_file = 'results-new/results.db'
+
+connection = sqlite3.connect(db_file)
+
+# Specify the table names you want to display
+table_names = ['run', 'configuration', 'submission', 'submission_result', 'evaluation_result']
+
+# Create the CSS and HTML container (minimal styling for the embedded tables)
+html_content = """
+<style>
+.table-container { border-collapse: collapse; font-size: 12px; }
+.table-container th, .table-container td { border: 1px solid #ddd; padding: 4px 8px; }
+</style>
+<div>
+"""
+
+for table_name in table_names:
+    query = f"SELECT * FROM {table_name};"
+    df = pd.read_sql_query(query, connection)
+    # If you want to see all the rows across all tables, remove/comment the next line
+    df = df.head(5)  # Limiting the number of rows displayed
+    table_html = df.to_html(classes='table-container', index=False)
+    html_content += f"""
+    <h3>Table: {table_name}</h3>
+    {table_html}
+    """
+
+html_content += "</div>"
+
+display(HTML(html_content))
+
+# Close the connection
+connection.close()
+```
+
+![img](img/image_py_rk.png)
+
+#### Process and Rank Results
+
+This step generates a leaderboard ranking models by their Win Rate (percentage of pairwise victories), providing a robust, comparative measure of the best-performing model and prompt configuration.
+
+```python
+import pandas as pd
+import numpy as np
+import sqlite3
+import json
+import os
+from IPython.display import display, HTML
+
+# ==========================================
+# 1. CONFIGURATION (Separated Groups)
+# ==========================================
+METRIC_GROUPS = {
+    "Categorical": {
+        "type": "categorical",
+        "description": "Weighted Average (1-5 scale)",
+        "metrics": [
+            "Pointwise Conciseness",
+            "Pointwise Instruction Following",
+            "Pointwise Correctness",
+            "Pointwise Answer Relevance"
+        ]
+    },
+    "Boolean": {
+        "type": "categorical",  # Uses same weighted avg logic (0 or 1)
+        "description": "Pass Rate (0-1 scale)",
+        "metrics": [
+            "Exact Match",
+            "Content Filter on Input",
+            "Content Filter on Output",
+            "Language Match",
+            "JSON Schema Match"
+        ]
+    },
+    "Numerical": {
+        "type": "numerical",
+        "description": "Mean Value",
+        "metrics": [
+            "BLEU",
+            "ROUGE",
+            "BERT Score",
+            "test-metric"
+        ]
+    }
+}
+
+# ==========================================
+# 2. 
DATA EXTRACTION +# ========================================== +def extract_db_metadata(db_path): + if not os.path.exists(db_path): return pd.DataFrame() + conn = sqlite3.connect(db_path) + df_runs = pd.read_sql_query("SELECT id, name, tags, config FROM run", conn) + conn.close() + + meta_data = [] + for _, row in df_runs.iterrows(): + run_id = str(row["id"]) + run_name = str(row["name"]) + tags = {} + config = {} + try: tags = json.loads(row["tags"]) if isinstance(row["tags"], str) else row["tags"] + except: pass + try: config = json.loads(row["config"]) if isinstance(row["config"], str) else row["config"] + except: pass + + model = "Unknown" + try: model = config["modules"]["prompt_templating"]["model"]["name"] + except: + if isinstance(tags, dict): model = tags.get("evaluation.ai.sap.com/model", "Unknown") + elif isinstance(tags, list): + for t in tags: + if t.get("key") == "evaluation.ai.sap.com/model": model = t.get("value") + + meta_data.append({"run_id": run_id, "run_name": run_name, "model": model}) + return pd.DataFrame(meta_data) + +def extract_api_metrics(runs_data_resource): + flat_data = [] + for run in runs_data_resource: + model = "Unknown" + for t in run.get("tags", []): + if t.get("name") == "evaluation.ai.sap.com/model": + model = t.get("value") + break + for m in run.get("metrics", []): + clean_name = m.get("name", "").replace('"', '').strip() + flat_data.append({ + "model": model, + "metrics_name_clean": clean_name, + "metric_value": m.get("value") + }) + df = pd.DataFrame(flat_data) + df['metric_value'] = pd.to_numeric(df['metric_value'], errors='coerce') + return df + +# ========================================== +# 3. SCORING & HELM LOGIC +# ========================================== +def calculate_weighted_avg_score(row, cols): + """ Returns a score based on counts. + Categorical: 1-5 scale. + Boolean: 0-1 scale (Pass Rate). 
+ """ + total_score = 0 + total_count = 0 + # Check counts 0-5 (covers Boolean 0/1 and Categorical 1-5) + for rating in range(0, 6): + col_name = next((c for c in cols if f"/{rating}/count" in c), None) + if col_name and not pd.isna(row[col_name]): + count = row[col_name] + total_score += count * rating + total_count += count + return total_score / total_count if total_count > 0 else 0.0 + +def get_metric_score_series(df_metrics, metric_name, group_type): + """ Returns a Series of SCORES (Scalar) for each model for a specific metric """ + subset = df_metrics[df_metrics['metrics_name_clean'].str.startswith(metric_name)] + if subset.empty: return None + + # Pivot to get columns for this metric + pivot = subset.pivot_table(index='model', columns='metrics_name_clean', values='metric_value', aggfunc='first') + cols = pivot.columns.tolist() + + if group_type == "categorical": + # Calculate Weighted Average (or Pass Rate for Boolean) + return pivot.apply(lambda row: calculate_weighted_avg_score(row, cols), axis=1) + else: + # Calculate Mean (Numerical) + c_mean = next((c for c in cols if "mean" in c), None) + if c_mean: return pivot[c_mean] + return None + +def calculate_group_win_rate(score_table): + """ + Calculates HELM Win Rate: % of times a model beats another model across all metrics in this group. + """ + models = score_table.index.tolist() + metrics = score_table.columns.tolist() + win_rates = {} + + for model_a in models: + wins = 0 + comparisons = 0 + + for model_b in models: + if model_a == model_b: continue + + # Compare across ALL metrics in this table + for metric in metrics: + score_a = score_table.at[model_a, metric] + score_b = score_table.at[model_b, metric] + + # Only compare valid scores + if pd.isna(score_a) or pd.isna(score_b): continue + + comparisons += 1 + if score_a > score_b: + wins += 1 + + win_rates[model_a] = wins / comparisons if comparisons > 0 else 0.0 + + return pd.Series(win_rates) + +# ========================================== +# 4. 
EXECUTION
+# ==========================================
+db_file = 'results-new/results.db'
+
+# A. Metadata
+df_db_meta = extract_db_metadata(db_file)
+df_db_unique = df_db_meta.drop_duplicates(subset=['model'], keep='last')
+
+# B. CSS (minimal styling for the embedded tables)
+html_content = """
+<style>
+.table-container { border-collapse: collapse; font-size: 12px; }
+.table-container th, .table-container td { border: 1px solid #ddd; padding: 4px 8px; }
+</style>
+<div>
+"""
+if 'runs_data' in locals() and runs_data:
+    df_metrics_all = extract_api_metrics(runs_data['resources'])
+
+    for group_name, config in METRIC_GROUPS.items():
+
+        # 1. Build Score Table
+        score_table = pd.DataFrame(index=df_db_unique['model'].unique())
+        score_table.index.name = 'model'
+
+        valid_metrics = []
+
+        # 2. Calculate Scores
+        for metric in config["metrics"]:
+            scores = get_metric_score_series(df_metrics_all, metric, config["type"])
+            if scores is not None:
+                score_table[metric] = scores
+                valid_metrics.append(metric)
+
+        if not valid_metrics:
+            continue
+
+        # 3. Calculate HELM Win Rate (Specific to this group)
+        score_table['Win Rate'] = calculate_group_win_rate(score_table[valid_metrics])
+
+        # 4. Calculate Final Rank
+        score_table['Final Rank'] = score_table['Win Rate'].rank(ascending=False, method='min')
+
+        # 5. Merge & Format
+        df_final = pd.merge(df_db_unique, score_table, on='model', how='inner')
+        df_final = df_final.sort_values('Final Rank')
+
+        # Rounding
+        for c in valid_metrics: df_final[c] = df_final[c].fillna(0.0).astype(float).round(4)
+        df_final['Win Rate'] = df_final['Win Rate'].fillna(0.0).astype(float).round(4)
+        df_final['Final Rank'] = df_final['Final Rank'].fillna(0).astype(int)
+
+        # Columns
+        meta_cols = ['run_id', 'run_name', 'model']
+        final_cols = meta_cols + ['Win Rate', 'Final Rank'] + valid_metrics
+
+        # 6. Generate HTML
+        table_html = df_final[final_cols].to_html(classes='table-container', index=False)
+
+        html_content += f"""
+        <div>
+          <h3>{group_name} Comparison</h3>
+          <p>Values: {config['description']}. Win Rate based on head-to-head performance.</p>
+          {table_html}
+        </div>
+        """
+
+    html_content += "</div>"
+    display(HTML(html_content))
+
+else:
+    print("'runs_data' missing.")
+```
+![img](img/image_py_rnk1.png)
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+
+**Retrieve Aggregate Metrics by `execution_id`**
+
+Send a GET request:
+
+**GET**
+```bash
+{{apiurl}}/v2/lm/metrics?tagFilters=evaluation.ai.sap.com/child-of={{execution_id}}
+```
+
+**Retrieve Aggregate Metrics Using Run Name**
+
+Send a GET request:
+
+**GET**
+```bash
+{{apiurl}}/v2/lm/metrics?tagFilters=evaluation.ai.sap.com/run-name={{run_name}}
+```
+
+This returns aggregated values for:
+
+  - latency
+
+  - token usage
+
+  - metric scores
+
+  - completion count
+
+**Download Raw Results**
+
+1. Open the execution details
+
+2. Copy the output artifact URL
+
+3. Download the folder to obtain:
+
+    - step-wise results
+
+    - `sqlite_combined/results.db`
+
+**Inspect Detailed Results**
+
+Open the SQLite DB in any client to inspect:
+
+  - `submission` entries
+
+  - completion responses
+
+  - `evaluation_result` entries (raw metric scores)
+
+  - aggregation results
+
+  - custom logs
+
+![img](img/image_49.png)
+
+![img](img/image_49a.png)
+
+[OPTION END]
+
+### Delete Evaluation Artifacts and Configurations
+
+Over time, your workspace may accumulate old configurations, executions, and metrics.
+SAP AI Core allows you to safely delete these resources once they are no longer needed.
+
+This section explains how to delete:
+
+  - Evaluation Executions
+
+  - Evaluation Configurations
+
+⚠️ Important:
+
+Deletions are permanent and cannot be undone.
+
+[OPTION BEGIN [SAP AI Launchpad]]
+
+**Delete Executions**
+
+1. Go to ML Operations → Executions
+
+2. Select the execution
+
+3. Click Delete
+
+4. Confirm the deletion
+
+**Delete Evaluation Configurations**
+
+1. Go to ML Operations → Configurations
+
+2. Select the configuration you created
+
+3. Click Delete
+
+4. Confirm the deletion
+
+[OPTION END]
+
+[OPTION BEGIN [Python]]
+
+**1. 
Delete an Evaluation Execution**
+
+```python
+# Delete an execution by its ID
+def delete_execution():
+    headers = _get_headers()
+    EXEC_ID = execution_id
+    GET_EXECUTIONS_ENDPOINT = '/v2/lm/executions/'
+    request_url = f"{AICORE_BASE_URL}{GET_EXECUTIONS_ENDPOINT}{EXEC_ID}"
+    try:
+        response = requests.delete(
+            request_url, headers=headers, params={"AI-Resource-Group": AICORE_RESOURCE_GROUP}, timeout=120
+        )
+        print(response)
+        if response.status_code != 202:
+            raise RuntimeError(f"Unexpected status code: {response.status_code}")
+        result = response.json()
+        print(result)
+    except Exception:
+        logging.error("Error occurred while attempting to delete an Execution")
+        raise
+
+delete_execution()
+```
+**2. Delete an Evaluation Configuration**
+
+```python
+def delete_configuration(configuration_id):
+    headers = _get_headers()
+    endpoint = f"/v2/lm/configurations/{configuration_id}"
+    url = f"{AICORE_BASE_URL}{endpoint}"
+
+    response = requests.delete(
+        url, headers=headers, params={"AI-Resource-Group": AICORE_RESOURCE_GROUP}, timeout=120
+    )
+    print("Status:", response.status_code)
+    print(response.text)
+
+# Example:
+delete_configuration(configuration_id)
+```
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+
+**1. Delete Execution**
+
+**DELETE Request**
+```bash
+{{apiurl}}/v2/lm/executions/{{execution_id}}
+```
+**Headers:**
+```
+Authorization: Bearer {{access_token}}
+AI-Resource-Group: {{resource_group}}
+```
+**2. 
Delete Configuration** + +```bash +DELETE {{apiurl}}/v2/lm/configurations/{{configuration_id}} +``` + +[OPTION END] diff --git a/tutorials/ai-core-genaihub-evaluation/img/AI_Core.json b/tutorials/ai-core-genaihub-evaluation/img/AI_Core.json new file mode 100644 index 0000000000..bb30bf61b4 --- /dev/null +++ b/tutorials/ai-core-genaihub-evaluation/img/AI_Core.json @@ -0,0 +1,1578 @@ +{ + "name": "AI Core", + "version": "1", + "items": [ + { + "type": "http", + "name": "get_token", + "filename": "get_token.bru", + "seq": 1, + "request": { + "url": "{{ai_auth_url}}/oauth/token", + "method": "POST", + "headers": [ + { + "name": "Content-Type", + "value": "application/x-www-form-urlencoded", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "formUrlEncoded", + "formUrlEncoded": [ + { + "name": "grant_type", + "value": "client_credentials", + "enabled": true + }, + { + "name": "client_id", + "value": "{{client_id}}", + "enabled": true + }, + { + "name": "client_secret", + "value": "{{client_secret}}", + "enabled": true + } + ], + "multipartForm": [], + "file": [] + }, + "script": { + "res": "if (res.getStatus() == 200) {\n bru.setEnvVar(\"access_token\", res.body.access_token);\n}" + }, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "none" + } + } + }, + { + "type": "folder", + "name": "admin", + "filename": "admin", + "root": { + "meta": { + "name": "admin" + } + }, + "items": [ + { + "type": "folder", + "name": "objectStoreSecrets", + "filename": "objectStoreSecrets", + "root": { + "meta": { + "name": "objectStoreSecrets" + } + }, + "items": [ + { + "type": "http", + "name": "Create a secret", + "filename": "Create a secret.bru", + "seq": 1, + "request": { + "url": "{{baseUrl}}/v2/admin/objectStoreSecrets", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + }, + { + "name": 
"Accept", + "value": "application/json", + "enabled": true + }, + { + "name": "Authorization", + "value": "", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"name\": \"genai-data\",\n \"data\": {\n \"AWS_ACCESS_KEY_ID\": \"\",\n \"AWS_SECRET_ACCESS_KEY\": \"\"\n },\n \"type\": \"S3\",\n \"bucket\": \"\",\n \"endpoint\": \"https://s3.eu-central-1.amazonaws.com\",\n \"region\": \"\",\n \"pathPrefix\": \"\" \n }", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Create a secret based on the configuration in the request body\n", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "Get a list of metadata of available secrets.", + "filename": "Get a list of metadata of available secrets.bru", + "seq": 2, + "request": { + "url": "{{baseUrl}}/admin/objectStoreSecrets?$top=&$skip=&$count=", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [ + { + "name": "$top", + "value": "", + "type": "query", + "enabled": true + }, + { + "name": "$skip", + "value": "", + "type": "query", + "enabled": true + }, + { + "name": "$count", + "value": "", + "type": "query", + "enabled": true + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve a list of metadata of the stored secrets.\n", + "auth": { + "mode": "oauth2", + "oauth2": { + "grantType": "authorization_code", + "callbackUrl": "", + "authorizationUrl": "", + "accessTokenUrl": "", + "refreshTokenUrl": "", + "clientId": "", + "clientSecret": "", + "scope": "", + "credentialsPlacement": "basic_auth_header", + "pkce": false, + "credentialsId": 
"credentials", + "tokenPlacement": "header", + "tokenHeaderPrefix": "Bearer", + "tokenQueryKey": "access_token", + "autoFetchToken": true, + "autoRefreshToken": false + } + } + } + }, + { + "type": "folder", + "name": "{objectStoreName}", + "filename": "{objectStoreName}", + "root": { + "meta": { + "name": "{objectStoreName}" + } + }, + "items": [ + { + "type": "http", + "name": "Delete object store secret", + "filename": "Delete object store secret.bru", + "seq": 2, + "request": { + "url": "{{baseUrl}}/admin/objectStoreSecrets/:objectStoreName", + "method": "DELETE", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [ + { + "name": "objectStoreName", + "value": "qKoZ-aHSe", + "type": "path", + "enabled": true + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Delete a secret with the name of objectStoreName if it exists.", + "auth": { + "mode": "oauth2", + "oauth2": { + "grantType": "authorization_code", + "callbackUrl": "", + "authorizationUrl": "", + "accessTokenUrl": "", + "refreshTokenUrl": "", + "clientId": "", + "clientSecret": "", + "scope": "", + "credentialsPlacement": "basic_auth_header", + "pkce": false, + "credentialsId": "credentials", + "tokenPlacement": "header", + "tokenHeaderPrefix": "Bearer", + "tokenQueryKey": "access_token", + "autoFetchToken": true, + "autoRefreshToken": false + } + } + } + }, + { + "type": "http", + "name": "Returns the of metadata of secrets which match the query parameter.", + "filename": "Returns the of metadata of secrets which match the query parameter.bru", + "seq": 1, + "request": { + "url": "{{baseUrl}}/v2/admin/objectStoreSecrets", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Accept", 
+ "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "This retrieves the metadata of the stored secret which match the parameter objectStoreName.\nThe fetched secret is constructed like objectStoreName-object-store-secret\nThe base64 encoded field for the stored secret is not returned.\n", + "auth": { + "mode": "oauth2", + "oauth2": { + "grantType": "authorization_code", + "callbackUrl": "", + "authorizationUrl": "", + "accessTokenUrl": "", + "refreshTokenUrl": "", + "clientId": "", + "clientSecret": "", + "scope": "", + "credentialsPlacement": "basic_auth_header", + "pkce": false, + "credentialsId": "credentials", + "tokenPlacement": "header", + "tokenHeaderPrefix": "Bearer", + "tokenQueryKey": "access_token", + "autoFetchToken": true, + "autoRefreshToken": false + } + } + } + }, + { + "type": "http", + "name": "Update object store secret", + "filename": "Update object store secret.bru", + "seq": 3, + "request": { + "url": "{{baseUrl}}/admin/objectStoreSecrets/:objectStoreName", + "method": "PATCH", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [ + { + "name": "objectStoreName", + "value": "qKoZ-aHSe", + "type": "path", + "enabled": true + } + ], + "body": { + "mode": "json", + "json": "{\n \"name\": \"\",\n \"type\": \"\",\n \"data\": {},\n \"bucket\": \"\",\n \"endpoint\": \"\",\n \"region\": \"\",\n \"pathPrefix\": \"\",\n \"verifyssl\": \"\",\n \"usehttps\": \"1\"\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Update a secret with name of objectStoreName if it 
exists.\n", + "auth": { + "mode": "oauth2", + "oauth2": { + "grantType": "authorization_code", + "callbackUrl": "", + "authorizationUrl": "", + "accessTokenUrl": "", + "refreshTokenUrl": "", + "clientId": "", + "clientSecret": "", + "scope": "", + "credentialsPlacement": "basic_auth_header", + "pkce": false, + "credentialsId": "credentials", + "tokenPlacement": "header", + "tokenHeaderPrefix": "Bearer", + "tokenQueryKey": "access_token", + "autoFetchToken": true, + "autoRefreshToken": false + } + } + } + } + ] + } + ] + } + ] + }, + { + "type": "folder", + "name": "lm", + "filename": "lm", + "root": { + "meta": { + "name": "lm" + } + }, + "items": [ + { + "type": "folder", + "name": "configurations", + "filename": "configurations", + "root": { + "meta": { + "name": "configurations" + } + }, + "items": [ + { + "type": "http", + "name": "Create configuration Copy", + "filename": "Create configuration Copy.bru", + "seq": 3, + "request": { + "url": "{{baseUrl}}/v2/lm/configurations", + "method": "DELETE", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"id\": \"\"\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Create a new configuration linked to a specific scenario and executable for use in an execution\nor deployment.\n", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "Create configuration", + "filename": "Create configuration.bru", + "seq": 2, + "request": { + "url": "{{baseUrl}}/v2/lm/configurations", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + 
"enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"name\": \"genai-eval-conf\",\n \"scenarioId\": \"genai-evaluations\",\n \"executableId\": \"genai-evaluations-simplified\",\n \"inputArtifactBindings\": [\n {\n \"key\": \"datasetFolder\",\n \"artifactId\": \"\"\n }\n ],\n \"parameterBindings\": [\n {\n \"key\": \"repetitions\",\n \"value\": \"1\"\n },\n {\n \"key\": \"orchestrationDeploymentURL\",\n \"value\": \"\"\n\n },\n {\n \"key\": \"metrics\",\n \"value\": \"language_match\"\n },\n {\n \"key\": \"testDataset\",\n \"value\": \"{\\\"path\\\": \\\"testdata/global_customer_queries.csv\\\", \\\"type\\\": \\\"csv\\\"}\"\n },\n {\n \"key\": \"promptTemplate\",\n \"value\": \"\"\n },\n {\n \"key\": \"models\",\n \"value\": \"gpt-4.1:latest\"\n }\n ]\n}\n", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Create a new configuration linked to a specific scenario and executable for use in an execution\nor deployment.\n", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "Get list of configurations", + "filename": "Get list of configurations.bru", + "seq": 1, + "request": { + "url": "{{baseUrl}}/v2/lm/configurations", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve a list of configurations. 
Filter results by scenario ID or a list of executable IDs.\nSearch for configurations containing the search string as substring in the configuration name.\n", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "folder", + "name": "{configurationId}", + "filename": "{configurationId}", + "root": { + "meta": { + "name": "{configurationId}" + } + }, + "items": [ + { + "type": "http", + "name": "Get configuration by ID", + "filename": "Get configuration by ID.bru", + "seq": 1, + "request": { + "url": "{{baseUrl}}/v2/lm/configurations", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve details for configuration with configurationId.", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + } + ] + }, + { + "type": "folder", + "name": "$count", + "filename": "$count", + "root": { + "meta": { + "name": "$count" + } + }, + "items": [ + { + "type": "http", + "name": "Get number of configurations", + "filename": "Get number of configurations.bru", + "seq": 1, + "request": { + "url": "{{baseUrl}}/lm/configurations/$count?scenarioId=iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN&$search=}\"NI2Kn!V&searchCaseInsensitive=false&executableIds=T_jtbUJzwg0e.okSV667jeZejqVb,3e0cmfc4c-6YavNz92uztZE", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Accept", + "value": "text/plain", + "enabled": true + } + ], + "params": [ + { + "name": "scenarioId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": "query", + "enabled": true + }, + { + 
"name": "$search", + "value": "}\"NI2Kn!V", + "type": "query", + "enabled": true + }, + { + "name": "searchCaseInsensitive", + "value": "false", + "type": "query", + "enabled": true + }, + { + "name": "executableIds", + "value": "T_jtbUJzwg0e.okSV667jeZejqVb,3e0cmfc4c-6YavNz92uztZE", + "type": "query", + "enabled": true + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve the number of available configurations that match the specified filter criteria.\nFilter criteria include a scenarioId or executableIdsList. Search by substring of configuration name is also possible.\n", + "auth": { + "mode": "oauth2", + "oauth2": { + "grantType": "authorization_code", + "callbackUrl": "", + "authorizationUrl": "", + "accessTokenUrl": "", + "refreshTokenUrl": "", + "clientId": "", + "clientSecret": "", + "scope": "", + "credentialsPlacement": "basic_auth_header", + "pkce": false, + "credentialsId": "credentials", + "tokenPlacement": "header", + "tokenHeaderPrefix": "Bearer", + "tokenQueryKey": "access_token", + "autoFetchToken": true, + "autoRefreshToken": false + } + } + } + } + ] + } + ] + }, + { + "type": "folder", + "name": "artifacts", + "filename": "artifacts", + "root": { + "meta": { + "name": "artifacts" + } + }, + "items": [ + { + "type": "http", + "name": "Get list of artifacts", + "filename": "Get list of artifacts.bru", + "seq": 2, + "request": { + "url": "{{baseUrl}}/v2/lm/artifacts", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [ + { + "name": "scenarioId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": "query", + "enabled": false + }, + { + "name": "executionId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": 
"query", + "enabled": false + }, + { + "name": "name", + "value": "[G7 ovyt8i", + "type": "query", + "enabled": false + }, + { + "name": "kind", + "value": "other", + "type": "query", + "enabled": false + }, + { + "name": "artifactLabelSelector", + "value": "ext.ai.sap.com/bXN1EAk=D*", + "type": "query", + "enabled": false + }, + { + "name": "$top", + "value": "10000", + "type": "query", + "enabled": false + }, + { + "name": "$skip", + "value": "", + "type": "query", + "enabled": false + }, + { + "name": "$search", + "value": "}\"NI2Kn!V", + "type": "query", + "enabled": false + }, + { + "name": "searchCaseInsensitive", + "value": "false", + "type": "query", + "enabled": false + }, + { + "name": "$expand", + "value": "scenario", + "type": "query", + "enabled": false + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve a list of artifacts that matches the specified filter criteria.\nFilter criteria include scenario ID, execution ID, an artifact name, artifact kind, or artifact labels.\nUse top/skip parameters to paginate the result list.\nSearch by substring of artifact name or description, if required.\n", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "Register artifact", + "filename": "Register artifact.bru", + "seq": 1, + "request": { + "url": "{{baseUrl}}/v2/lm/artifacts", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"name\": \"aiconfig\",\n \"kind\": \"dataset\",\n \"url\": \"ai://genai-data/genaiEvaluation/14af1af80b974edb8731632d17286343\",\n 
\"scenarioId\": \"genai-evaluations\"\n}\n", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Register an artifact for use in a configuration, for example a model or a dataset.", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "folder", + "name": "$count", + "filename": "$count", + "root": { + "meta": { + "name": "$count" + } + }, + "items": [ + { + "type": "http", + "name": "Get number of artifacts", + "filename": "Get number of artifacts.bru", + "seq": 1, + "request": { + "url": "{{baseUrl}}/lm/artifacts/$count?scenarioId=iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN&executionId=iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN&name=[G7 ovyt8i&kind=other&$search=}\"NI2Kn!V&searchCaseInsensitive=false&artifactLabelSelector=ext.ai.sap.com/bXN1EAk=D*", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Accept", + "value": "text/plain", + "enabled": true + } + ], + "params": [ + { + "name": "scenarioId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": "query", + "enabled": true + }, + { + "name": "executionId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": "query", + "enabled": true + }, + { + "name": "name", + "value": "[G7 ovyt8i", + "type": "query", + "enabled": true + }, + { + "name": "kind", + "value": "other", + "type": "query", + "enabled": true + }, + { + "name": "$search", + "value": "}\"NI2Kn!V", + "type": "query", + "enabled": true + }, + { + "name": "searchCaseInsensitive", + "value": "false", + "type": "query", + "enabled": true + }, + { + "name": "artifactLabelSelector", + "value": "ext.ai.sap.com/bXN1EAk=D*", + "type": "query", + "enabled": true + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + 
"script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve the number of available artifacts that match the specified filter criteria.\nFilter criteria include a scenarioId, executionId, an artifact name, artifact kind, or artifact labels.\nSearch by substring of artifact name or description is also possible.\n", + "auth": { + "mode": "oauth2", + "oauth2": { + "grantType": "authorization_code", + "callbackUrl": "", + "authorizationUrl": "", + "accessTokenUrl": "", + "refreshTokenUrl": "", + "clientId": "", + "clientSecret": "", + "scope": "", + "credentialsPlacement": "basic_auth_header", + "pkce": false, + "credentialsId": "credentials", + "tokenPlacement": "header", + "tokenHeaderPrefix": "Bearer", + "tokenQueryKey": "access_token", + "autoFetchToken": true, + "autoRefreshToken": false + } + } + } + } + ] + } + ] + }, + { + "type": "folder", + "name": "executions", + "filename": "executions", + "root": { + "meta": { + "name": "executions" + } + }, + "items": [ + { + "type": "http", + "name": "Create execution", + "filename": "Create execution.bru", + "seq": 2, + "request": { + "url": "{{baseUrl}}/v2/lm/executions", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"configurationId\": \"\"\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Create an execution using the configuration specified by configurationId.", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "Get list of executions", + "filename": "Get list of executions.bru", + "seq": 1, + "request": { + "url": 
"{{baseUrl}}/v2/lm/executions/", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [ + { + "name": "scenarioId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": "query", + "enabled": false + }, + { + "name": "executionScheduleId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": "query", + "enabled": false + }, + { + "name": "status", + "value": "DEAD", + "type": "query", + "enabled": false + }, + { + "name": "$top", + "value": "10000", + "type": "query", + "enabled": false + }, + { + "name": "$skip", + "value": "", + "type": "query", + "enabled": false + }, + { + "name": "$select", + "value": "status", + "type": "query", + "enabled": false + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve a list of executions that match the specified filter criteria.\nFilter criteria include a list of executableIds, a scenarioId, a configurationId, or an execution status.\nWith top/skip parameters it is possible to paginate the result list.\nWith select parameter it is possible to select only status.\n", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "folder", + "name": "$count", + "filename": "$count", + "root": { + "meta": { + "name": "$count" + } + } + } + ] + }, + { + "type": "folder", + "name": "deployments", + "filename": "deployments", + "root": { + "meta": { + "name": "deployments" + } + }, + "items": [ + { + "type": "http", + "name": "Create deployment", + "filename": "Create deployment.bru", + "seq": 2, + "request": { + "url": "{{baseUrl}}/v2/lm/deployments", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": 
"{{resource_group}}", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"configurationId\": \"\"\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Create a deployment using the configuration specified by configurationId after synchronously checking the\ncorrectness of the configuration.\n", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "Get list of deployments", + "filename": "Get list of deployments.bru", + "seq": 1, + "request": { + "url": "{{baseUrl}}/v2/lm/deployments", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve a list of deployments that match the specified filter criteria.\nFilter criteria include a list of executableIds, a scenarioId, a configurationId, or a deployment status.\nWith top/skip parameters it is possible to paginate the result list.\nWith select parameter it is possible to select only status.\n", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "folder", + "name": "$count", + "filename": "$count", + "root": { + "meta": { + "name": "$count" + } + }, + "items": [ + { + "type": "http", + "name": "Get number of deployments", + "filename": "Get number of deployments.bru", + "seq": 1, + "request": { + "url": 
"{{baseUrl}}/lm/deployments/$count?executableIds=T_jtbUJzwg0e.okSV667jeZejqVb,3e0cmfc4c-6YavNz92uztZE&configurationId=iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN&scenarioId=iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN&status=DEAD", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "", + "enabled": true + }, + { + "name": "Accept", + "value": "text/plain", + "enabled": true + } + ], + "params": [ + { + "name": "executableIds", + "value": "T_jtbUJzwg0e.okSV667jeZejqVb,3e0cmfc4c-6YavNz92uztZE", + "type": "query", + "enabled": true + }, + { + "name": "configurationId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": "query", + "enabled": true + }, + { + "name": "scenarioId", + "value": "iiwMZ8.BjeF0SgmlZJM11XXkDUxP7Sg5GQLKEEsaWb.om5wMy1gN3AtN", + "type": "query", + "enabled": true + }, + { + "name": "status", + "value": "DEAD", + "type": "query", + "enabled": true + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "Retrieve the number of available deployments. 
The number can be filtered by\nscenarioId, configurationId, executableIdsList or by deployment status.\n", + "auth": { + "mode": "oauth2", + "oauth2": { + "grantType": "authorization_code", + "callbackUrl": "", + "authorizationUrl": "", + "accessTokenUrl": "", + "refreshTokenUrl": "", + "clientId": "", + "clientSecret": "", + "scope": "", + "credentialsPlacement": "basic_auth_header", + "pkce": false, + "credentialsId": "credentials", + "tokenPlacement": "header", + "tokenHeaderPrefix": "Bearer", + "tokenQueryKey": "access_token", + "autoFetchToken": true, + "autoRefreshToken": false + } + } + } + } + ] + } + ] + }, + { + "type": "folder", + "name": "metrics", + "filename": "metrics", + "root": { + "meta": { + "name": "metrics" + } + }, + "items": [ + { + "type": "http", + "name": "Evaluation Metrics via Execution ID", + "filename": "Evaluation Metrics via Execution ID.bru", + "seq": 4, + "request": { + "url": "{{baseUrl}}/v2/lm/metrics?tagFilters=evaluation.ai.sap.com/child-of=", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [ + { + "name": "tagFilters", + "url": "{{baseUrl}}/v2/lm/metrics?tagFilters=evaluation.ai.sap.com/child-of=", + "value": "evaluation.ai.sap.com/child-of=", + "type": "query", + "enabled": true + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "Metrics by Run Name", + "filename": "Metrics by Run Name.bru", + "seq": 5, + "request": { + "url": "{{baseUrl}}/v2/lm/metrics?tagFilters=evaluation.ai.sap.com/run-name=run1", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + 
"enabled": true + }, + { + "name": "Accept", + "value": "application/json", + "enabled": true + } + ], + "params": [ + { + "name": "tagFilters", + "value": "evaluation.ai.sap.com/run-name=run1", + "type": "query", + "enabled": true + } + ], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + } + ] + } + ] + } + ], + "activeEnvironmentUid": "lWUmIcEkGnkMxwNBILLmY", + "environments": [ + { + "variables": [ + { + "name": "ai_auth_url", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "ai_api_url", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "client_id", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "client_secret", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "resource_group", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "orchestration_service_url", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "access_token", + "value": "", + "enabled": true, + "secret": true, + "type": "text" + } + ], + "name": "intprod" + } + ], + "root": { + "request": { + "auth": { + "mode": "oauth2", + "oauth2": { + "grantType": "authorization_code", + "callbackUrl": "", + "authorizationUrl": "", + "accessTokenUrl": "", + "refreshTokenUrl": "", + "clientId": "", + "clientSecret": "", + "scope": "", + "state": "", + "pkce": false, + "credentialsPlacement": "basic_auth_header", + "credentialsId": "credentials", + "tokenPlacement": "header", + "tokenHeaderPrefix": "Bearer", + "tokenQueryKey": "access_token", + "autoFetchToken": true, + "autoRefreshToken": false + } + }, + "vars": { + "req": [ + { + "name": "region", + "value": 
"prod.eu-central-1.aws", + "enabled": true, + "local": false, + "uid": "oYVk4DuVpyYqqP2roBVjE" + }, + { + "name": "baseUrl", + "value": "", + "enabled": true, + "local": false, + "uid": "I4KjDm7FxpSRwUYzjwfPG" + }, + { + "name": "auth_url", + "value": "", + "enabled": true, + "local": false, + "uid": "zuftvyCURtA9XYErCYDgo" + }, + { + "name": "client_id", + "value": "", + "enabled": true, + "local": false, + "uid": "JfGEVKm71BYTgR8UkQUGv" + }, + { + "name": "client_secret", + "value": "", + "enabled": true, + "local": false, + "uid": "ls3RYTJ40baTl8eYmilGt" + }, + { + "name": "AWS_ACCESS_KEY_ID", + "value": "", + "enabled": true, + "local": false, + "uid": "2O0YTTAdmYltm5XiHMhP2" + }, + { + "name": "AWS_SECRET_ACCESS_KEY", + "value": "", + "enabled": true, + "local": false, + "uid": "8rc4RYyPcHXyTkAnnI981" + }, + { + "name": "BUCKET_NAME", + "value": "", + "enabled": true, + "local": false, + "uid": "HqFIe8Rvc14i41WIAGGkl" + }, + { + "name": "DATABASE_URL", + "value": "https://s3-eu-central-1.amazonaws.com", + "enabled": true, + "local": false, + "uid": "aWIwuJZH5XQ5Guu2D69Sq" + } + ] + } + }, + "docs": "Provides tools to manage your scenarios and workflows in SAP AI Core. Execute pipelines as a batch job, for example to pre-process or train your models, or perform batch inference. Serve inference requests of trained models. Deploy a trained machine learning model as a web service to serve inference requests with high performance. 
Register your own Docker registry, synchronize your AI content from your own git repository, and register your own object store for training data and trained models.\n", + "meta": { + "name": "AI Core" + } + }, + "brunoConfig": { + "version": "1", + "name": "AI Core", + "type": "collection", + "ignore": [ + "node_modules", + ".git" + ], + "size": 0.10747432708740234, + "filesCount": 151 + } +} diff --git a/tutorials/ai-core-genaihub-evaluation/img/image-br01.png b/tutorials/ai-core-genaihub-evaluation/img/image-br01.png new file mode 100644 index 0000000000..5424ea51d0 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image-br01.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image-br02.png b/tutorials/ai-core-genaihub-evaluation/img/image-br02.png new file mode 100644 index 0000000000..4ed9d9ab02 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image-br02.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image-br03.png b/tutorials/ai-core-genaihub-evaluation/img/image-br03.png new file mode 100644 index 0000000000..cbfd0b4c19 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image-br03.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image-br04.png b/tutorials/ai-core-genaihub-evaluation/img/image-br04.png new file mode 100644 index 0000000000..9f8a175e47 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image-br04.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image-br05.png b/tutorials/ai-core-genaihub-evaluation/img/image-br05.png new file mode 100644 index 0000000000..69a105ef01 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image-br05.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image-br06.png b/tutorials/ai-core-genaihub-evaluation/img/image-br06.png new file mode 100644 index 0000000000..81128b34bb Binary files /dev/null and 
b/tutorials/ai-core-genaihub-evaluation/img/image-br06.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_007.png b/tutorials/ai-core-genaihub-evaluation/img/image_007.png new file mode 100644 index 0000000000..0cdc4cf4a7 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_007.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_008.png b/tutorials/ai-core-genaihub-evaluation/img/image_008.png new file mode 100644 index 0000000000..0582d66f28 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_008.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_009.png b/tutorials/ai-core-genaihub-evaluation/img/image_009.png new file mode 100644 index 0000000000..1c979c6b0a Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_009.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_1.png b/tutorials/ai-core-genaihub-evaluation/img/image_1.png new file mode 100644 index 0000000000..6db3eb05c3 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_1.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_10.png b/tutorials/ai-core-genaihub-evaluation/img/image_10.png new file mode 100644 index 0000000000..f5a0fec8e8 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_10.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_19.png b/tutorials/ai-core-genaihub-evaluation/img/image_19.png new file mode 100644 index 0000000000..91498a203a Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_19.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_21.png b/tutorials/ai-core-genaihub-evaluation/img/image_21.png new file mode 100644 index 0000000000..dd9f9f22bb Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_21.png differ diff --git 
a/tutorials/ai-core-genaihub-evaluation/img/image_22.png b/tutorials/ai-core-genaihub-evaluation/img/image_22.png new file mode 100644 index 0000000000..abcae67d60 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_22.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_23.png b/tutorials/ai-core-genaihub-evaluation/img/image_23.png new file mode 100644 index 0000000000..97b0bc60f0 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_23.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_24.png b/tutorials/ai-core-genaihub-evaluation/img/image_24.png new file mode 100644 index 0000000000..5471c2e38f Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_24.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_25.png b/tutorials/ai-core-genaihub-evaluation/img/image_25.png new file mode 100644 index 0000000000..afdb0e1975 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_25.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_26.png b/tutorials/ai-core-genaihub-evaluation/img/image_26.png new file mode 100644 index 0000000000..d2166f1b25 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_26.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_27.png b/tutorials/ai-core-genaihub-evaluation/img/image_27.png new file mode 100644 index 0000000000..5bd8e53b74 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_27.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_29.png b/tutorials/ai-core-genaihub-evaluation/img/image_29.png new file mode 100644 index 0000000000..72d40ecdf1 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_29.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_31.png b/tutorials/ai-core-genaihub-evaluation/img/image_31.png new file mode 100644 
index 0000000000..7a1a959fb0 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_31.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_32.png b/tutorials/ai-core-genaihub-evaluation/img/image_32.png new file mode 100644 index 0000000000..fe827f3460 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_32.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_33.png b/tutorials/ai-core-genaihub-evaluation/img/image_33.png new file mode 100644 index 0000000000..546d43b52b Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_33.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_34.png b/tutorials/ai-core-genaihub-evaluation/img/image_34.png new file mode 100644 index 0000000000..4fa0960a1d Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_34.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_40.png b/tutorials/ai-core-genaihub-evaluation/img/image_40.png new file mode 100644 index 0000000000..1498cb5cb7 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_40.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_41.png b/tutorials/ai-core-genaihub-evaluation/img/image_41.png new file mode 100644 index 0000000000..1bb7780d5a Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_41.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_43.png b/tutorials/ai-core-genaihub-evaluation/img/image_43.png new file mode 100644 index 0000000000..d594ffa7c3 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_43.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_44.png b/tutorials/ai-core-genaihub-evaluation/img/image_44.png new file mode 100644 index 0000000000..8b352c79ec Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_44.png differ diff --git 
a/tutorials/ai-core-genaihub-evaluation/img/image_45.png b/tutorials/ai-core-genaihub-evaluation/img/image_45.png new file mode 100644 index 0000000000..7cf1a3f633 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_45.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_46.png b/tutorials/ai-core-genaihub-evaluation/img/image_46.png new file mode 100644 index 0000000000..ef67d82f29 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_46.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_46_01.png b/tutorials/ai-core-genaihub-evaluation/img/image_46_01.png new file mode 100644 index 0000000000..131317edd6 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_46_01.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_46a.png b/tutorials/ai-core-genaihub-evaluation/img/image_46a.png new file mode 100644 index 0000000000..c493e2a5d2 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_46a.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_47.png b/tutorials/ai-core-genaihub-evaluation/img/image_47.png new file mode 100644 index 0000000000..861ec6d0a5 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_47.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_48.png b/tutorials/ai-core-genaihub-evaluation/img/image_48.png new file mode 100644 index 0000000000..78731db098 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_48.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_49.png b/tutorials/ai-core-genaihub-evaluation/img/image_49.png new file mode 100644 index 0000000000..2a2bbcd757 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_49.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_49a.png b/tutorials/ai-core-genaihub-evaluation/img/image_49a.png new file 
mode 100644 index 0000000000..07bcac05cd Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_49a.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_5.png b/tutorials/ai-core-genaihub-evaluation/img/image_5.png new file mode 100644 index 0000000000..b3a46a40ec Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_5.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_50.png b/tutorials/ai-core-genaihub-evaluation/img/image_50.png new file mode 100644 index 0000000000..74fea1ca6d Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_50.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_6.png b/tutorials/ai-core-genaihub-evaluation/img/image_6.png new file mode 100644 index 0000000000..d2936a402a Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_6.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image__py_pmtreg.png b/tutorials/ai-core-genaihub-evaluation/img/image__py_pmtreg.png new file mode 100644 index 0000000000..f0d907cf08 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image__py_pmtreg.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_ail_26.png b/tutorials/ai-core-genaihub-evaluation/img/image_ail_26.png new file mode 100644 index 0000000000..753e255051 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_ail_26.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_ail_or1.png b/tutorials/ai-core-genaihub-evaluation/img/image_ail_or1.png new file mode 100644 index 0000000000..060af6b829 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_ail_or1.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_ail_or2.png b/tutorials/ai-core-genaihub-evaluation/img/image_ail_or2.png new file mode 100644 index 0000000000..7ceaf72448 Binary files /dev/null and 
b/tutorials/ai-core-genaihub-evaluation/img/image_ail_or2.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_ail_or3.png b/tutorials/ai-core-genaihub-evaluation/img/image_ail_or3.png new file mode 100644 index 0000000000..0b60b1541d Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_ail_or3.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_br_dt.png b/tutorials/ai-core-genaihub-evaluation/img/image_br_dt.png new file mode 100644 index 0000000000..841683c510 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_br_dt.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_br_or1.png b/tutorials/ai-core-genaihub-evaluation/img/image_br_or1.png new file mode 100644 index 0000000000..8af37314e4 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_br_or1.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_br_pr.png b/tutorials/ai-core-genaihub-evaluation/img/image_br_pr.png new file mode 100644 index 0000000000..22d143968b Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_br_pr.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_objsec.png b/tutorials/ai-core-genaihub-evaluation/img/image_objsec.png new file mode 100644 index 0000000000..cccf2d1b4b Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_objsec.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_py03.png b/tutorials/ai-core-genaihub-evaluation/img/image_py03.png new file mode 100644 index 0000000000..44de78ff69 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_py03.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_py_con.png b/tutorials/ai-core-genaihub-evaluation/img/image_py_con.png new file mode 100644 index 0000000000..b929a58a25 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_py_con.png 
differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_py_dtst.png b/tutorials/ai-core-genaihub-evaluation/img/image_py_dtst.png new file mode 100644 index 0000000000..ec0dca7c3b Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_py_dtst.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_py_or1.png b/tutorials/ai-core-genaihub-evaluation/img/image_py_or1.png new file mode 100644 index 0000000000..0469ab08c5 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_py_or1.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_py_rk.png b/tutorials/ai-core-genaihub-evaluation/img/image_py_rk.png new file mode 100644 index 0000000000..f38fe6241b Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_py_rk.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/image_py_rnk1.png b/tutorials/ai-core-genaihub-evaluation/img/image_py_rnk1.png new file mode 100644 index 0000000000..12b49a4d52 Binary files /dev/null and b/tutorials/ai-core-genaihub-evaluation/img/image_py_rnk1.png differ diff --git a/tutorials/ai-core-genaihub-evaluation/img/requirements.txt b/tutorials/ai-core-genaihub-evaluation/img/requirements.txt new file mode 100644 index 0000000000..2c0a06e40e --- /dev/null +++ b/tutorials/ai-core-genaihub-evaluation/img/requirements.txt @@ -0,0 +1,7 @@ +generative-ai-hub-sdk==4.4.3 +python-dotenv==1.0.1 +boto3==1.37.4 +pandas==2.2.3 +json2html==1.3.0 +numpy==1.26.4 +ipywidgets==8.1.0 diff --git a/tutorials/ai-core-genaihub-evaluation/quick_start.ipynb b/tutorials/ai-core-genaihub-evaluation/quick_start.ipynb new file mode 100644 index 0000000000..b2a5e24d03 --- /dev/null +++ b/tutorials/ai-core-genaihub-evaluation/quick_start.ipynb @@ -0,0 +1,2402 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Generative AI Custom Evaluation\n", + "This is an example notebook which showcases how a user can use AI Core 
custom evaluation to benchmark their large language models, evaluate orchestration configurations, or evaluate prompts for their use case.\n", + "It uses the publicly available [MedicationQA dataset](https://langtest.org/docs/pages/benchmarks/medical/medicationqa/), which consists of commonly asked consumer questions about medications. The workload computes industry-standard metrics to check the reliability of the responses generated by the LLM.\n", + "
**Note: For detailed instructions, please refer to [Readme](./Readme.md)**" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Setup (Step 1)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "! pip install -r ../requirements.txt" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Load your environment variables\n", + "\n", + "Ensure that your environment variables are set in a `.env` file (see `sample.env` for an example). If a field is missing, the notebook will prompt you for a value." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "\n", + "\n", + "# Loading the credentials from the env file\n", + "from gen_ai_hub.proxy.gen_ai_hub_proxy import GenAIHubProxyClient\n", + "from dotenv import load_dotenv\n", + "import os\n", + "\n", + "load_dotenv(override=True)\n", + "\n", + "\n", + "# Fetching environment variables or prompting the user if missing\n", + "AICORE_BASE_URL = os.getenv(\"AICORE_BASE_URL\") or input(\"AICORE_BASE_URL is missing. Please enter it: \")\n", + "AICORE_RESOURCE_GROUP = os.getenv(\"AICORE_RESOURCE_GROUP\") or input(\"AICORE_RESOURCE_GROUP is missing. Please enter it (default: 'default'): \") or \"default\"\n", + "AICORE_AUTH_URL = os.getenv(\"AICORE_AUTH_URL\") or input(\"AICORE_AUTH_URL is missing. Please enter it: \")\n", + "AICORE_CLIENT_ID = os.getenv(\"AICORE_CLIENT_ID\") or input(\"AICORE_CLIENT_ID is missing. Please enter it: \")\n", + "AICORE_CLIENT_SECRET = os.getenv(\"AICORE_CLIENT_SECRET\") or input(\"AICORE_CLIENT_SECRET is missing. Please enter it: \")\n", + "\n", + "AWS_ACCESS_KEY = os.getenv(\"AWS_ACCESS_KEY\") or input(\"AWS_ACCESS_KEY is missing. Please enter it: \")\n", + "AWS_BUCKET_ID = os.getenv(\"AWS_BUCKET_ID\") or input(\"AWS_BUCKET_ID is missing. 
Please enter it: \")\n", + "AWS_REGION = os.getenv(\"AWS_REGION\") or input(\"AWS_REGION is missing. Please enter it: \")\n", + "AWS_SECRET_ACCESS_KEY = os.getenv(\"AWS_SECRET_ACCESS_KEY\") or input(\"AWS_SECRET_ACCESS_KEY is missing. Please enter it: \")\n", + "DEPLOYMENT_URL = os.getenv(\"DEPLOYMENT_URL\", None)\n", + "\n", + "# Initializing the GenAIHubProxyClient\n", + "client = GenAIHubProxyClient(\n", + " base_url=AICORE_BASE_URL,\n", + " auth_url=AICORE_AUTH_URL,\n", + " client_id=AICORE_CLIENT_ID,\n", + " client_secret=AICORE_CLIENT_SECRET,\n", + " resource_group=AICORE_RESOURCE_GROUP\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Dependencies and Helper Functions (Step 2)" + ] + }, + { + "cell_type": "code", + "execution_count": 193, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Dataset name: medicalqna_dataset.csv\n" + ] + } + ], + "source": [ + "import os\n", + "import json\n", + "\n", + "\n", + "def get_dataset_file_name(folder_path):\n", + " \"\"\"\n", + " Retrieves the name of the first file in the specified folder.\n", + " \"\"\"\n", + " if not os.path.isdir(folder_path):\n", + " print(f\"The folder path '{folder_path}' does not exist.\")\n", + " return None\n", + "\n", + " items_in_folder = os.listdir(folder_path)\n", + "\n", + " for item in items_in_folder:\n", + " item_path = os.path.join(folder_path, item)\n", + " if os.path.isfile(item_path):\n", + " return item\n", + "\n", + " print(f\"No files were found in the folder '{folder_path}'.\")\n", + " return None\n", + "\n", + "\n", + "# --- MAIN EXECUTION ---\n", + "DATASET_FOLDER = \"../DATASET\"\n", + "\n", + "DATASET_NAME = get_dataset_file_name(DATASET_FOLDER)\n", + "\n", + "if DATASET_NAME:\n", + " print(f\"Dataset name: {DATASET_NAME}\")\n", + "else:\n", + " print(\"Missing run or dataset file.\")\n", + " raise SystemExit(\"Exiting due to missing run/dataset file.\")\n" + ] + }, + { + "cell_type": 
"markdown", + "metadata": {}, + "source": [ + "### Register an Object Store Secret\n", + "To use the evaluations service, you must register an object store with the name `default`. Optionally, you can register an additional object store with a name of your choice." + ] + }, + { + "cell_type": "code", + "execution_count": 194, + "metadata": {}, + "outputs": [], + "source": [ + "# Set up the authentication headers needed for AI Core requests\n", + "def _get_headers():\n", + " headers = {\n", + " \"Authorization\": client.get_ai_core_token(),\n", + " \"AI-Resource-Group\": AICORE_RESOURCE_GROUP,\n", + " \"Content-Type\": \"application/json\",\n", + " }\n", + " return headers" + ] + }, + { + "cell_type": "code", + "execution_count": 195, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Successfully deleted object store secret: default\n", + "Successfully deleted object store secret: genai-quick-data-notebook\n" + ] + }, + { + "data": { + "text/plain": [ + "{'message': 'secret has been created'}" + ] + }, + "execution_count": 195, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# Register an S3 secret with AI Core, which will be used as an input source\n", + "import requests\n", + "import json\n", + "import logging\n", + "\n", + "def delete_oss_secret(oss_name=\"\"):\n", + " headers = _get_headers()\n", + " \n", + " DELETE_SECRETS_ENDPOINT = f'/v2/admin/objectStoreSecrets/{oss_name}'\n", + " request_url = f\"{AICORE_BASE_URL}{DELETE_SECRETS_ENDPOINT}\"\n", + " \n", + " try:\n", + " response = requests.delete(request_url, headers=headers, timeout=120)\n", + " if response.status_code == 202:\n", + " print(f\"Successfully deleted object store secret: {oss_name}\")\n", + " elif response.status_code == 404:\n", + " print(f\"Object store secret not found: {oss_name}. 
It may not exist.\")\n", + " else:\n", + " logging.error(f\"Failed to delete object store secret: {oss_name}, Status Code: {response.status_code}\")\n", + " except Exception as e:\n", + " logging.error(f\"Error occurred while attempting to delete object store secret: {e}\")\n", + " raise\n", + "\n", + "def register_oss_secret(oss_name=\"\", path_prefix=\"\"):\n", + " headers = _get_headers()\n", + " \n", + " POST_SECRETS_ENDPOINT = '/v2/admin/objectStoreSecrets'\n", + " request_url = f\"{AICORE_BASE_URL}{POST_SECRETS_ENDPOINT}\"\n", + " \n", + " request_body = {\n", + " \"name\": oss_name,\n", + " \"data\": {\n", + " \"AWS_ACCESS_KEY_ID\": AWS_ACCESS_KEY,\n", + " \"AWS_SECRET_ACCESS_KEY\": AWS_SECRET_ACCESS_KEY\n", + " },\n", + " \"type\": \"S3\",\n", + " \"bucket\": AWS_BUCKET_ID,\n", + " \"endpoint\": \"s3-eu-central-1.amazonaws.com\",\n", + " \"region\": AWS_REGION,\n", + " \"pathPrefix\": path_prefix,\n", + " \"verifyssl\": \"0\",\n", + " \"usehttps\": \"1\",\n", + " }\n", + " try:\n", + " response = requests.post(\n", + " request_url, headers=headers, data=json.dumps(request_body), timeout=120\n", + " )\n", + " result = response.json()\n", + " return result\n", + " except:\n", + " logging.error(\"Error occurred while attempting to create object store secret\")\n", + " raise\n", + " \n", + "delete_oss_secret(oss_name=\"default\")\n", + "delete_oss_secret(oss_name=\"genai-quick-data-notebook\")\n", + " \n", + "register_oss_secret(oss_name=\"default\", path_prefix=\"\")\n", + "register_oss_secret(oss_name=\"genai-quick-data-notebook\", path_prefix=\"\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The user stores the input files in the object store and registers the root folder as an artifact with AI Core. The File Upload and Artifact endpoints of the AI Core API may be used for this purpose. In this example, the test data is uploaded to a `genaiEvaluation/{prefix_guid}` folder, which is then registered as an AI Core artifact."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Uploading these files to the object store to register them as an artifact inside AI Core\n", + "\n", + "import boto3\n", + "import os\n", + "import uuid\n", + "\n", + "def upload_folder_to_s3(folder_path, bucket_name, s3_prefix=\"\"):\n", + " \"\"\"\n", + " Upload a folder to an S3 bucket recursively.\n", + "\n", + " :param folder_path: The local folder path to upload.\n", + " :param bucket_name: The name of the S3 bucket.\n", + " :param s3_prefix: Optional prefix to use for the S3 keys (e.g., subfolder in the bucket).\n", + " \"\"\"\n", + " s3_client = boto3.client(\n", + " 's3',\n", + " aws_access_key_id=AWS_ACCESS_KEY,\n", + " aws_secret_access_key=AWS_SECRET_ACCESS_KEY,\n", + " region_name=AWS_REGION\n", + " )\n", + "\n", + " for root, dirs, files in os.walk(folder_path):\n", + " for file_name in files:\n", + " local_path = os.path.join(root, file_name)\n", + " # Compute the relative path for the S3 key\n", + " relative_path = os.path.relpath(local_path, folder_path)\n", + " s3_key = os.path.join(s3_prefix, relative_path).replace(\"\\\\\", \"/\") # Ensure S3-compatible paths\n", + " print(f\"Uploading {local_path} to s3://{bucket_name}/{s3_key}\")\n", + " \n", + " # Upload the file\n", + " s3_client.upload_file(local_path, bucket_name, s3_key)\n", + "\n", + "# Example usage\n", + "folder_to_upload_testdata = \"../DATASET\"\n", + "user_directory_prefix = \"\" # replace with your i-number as string here\n", + "# Fall back to a random GUID when no user prefix is provided\n", + "prefix_guid = user_directory_prefix if user_directory_prefix else str(uuid.uuid4().hex)\n", + "s3_testdata_prefix = f\"genaiEvaluation/{prefix_guid}/testdata\" # Leave empty for root of the bucket\n", + "\n", + "\n", + "upload_folder_to_s3(folder_to_upload_testdata, AWS_BUCKET_ID, s3_testdata_prefix)\n", + "input_artifact_path = 
f\"ai://genai-quick-data-notebook/genaiEvaluation/{prefix_guid}\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The user stores the input files in the object store and registers the root folder as an artifact with AI Core. The File Upload and Artifact endpoints of the AI Core API may be used for this purpose. In this example, `genaiEvaluation/{prefix_guid}` is the root folder containing the orchestration configurations and test data, and it is registered as an AI Core artifact." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import requests\n", + "import logging\n", + "# Registering the uploaded files from AWS as artifacts to use inside configuration.\n", + "\n", + "def register_artifact():\n", + " headers = _get_headers()\n", + " \n", + " GET_ARTIFACTS_ENDPOINT = '/v2/lm/artifacts'\n", + " request_url = f\"{AICORE_BASE_URL}{GET_ARTIFACTS_ENDPOINT}\"\n", + " \n", + " request_body = {\n", + " \"labels\": [\n", + " {\n", + " \"key\": \"ext.ai.sap.com/prompt-evaluation\",\n", + " \"value\": \"true\"\n", + " }\n", + " ],\n", + " \"name\": \"genai-eval-simplified-test-data\",\n", + " \"kind\": \"other\",\n", + " \"url\": input_artifact_path, # input artifact path\n", + " \"description\": \"demo artifacts for evaluation flow.\",\n", + " \"scenarioId\": \"genai-evaluations\"\n", + " }\n", + " try:\n", + " response = requests.post(\n", + " request_url, headers=headers, data=json.dumps(request_body), timeout=120\n", + " )\n", + " result = response.json()\n", + " print(result)\n", + " return result['id']\n", + " except:\n", + " print(\"Error occurred while attempting to register artifact\")\n", + " raise\n", + " \n", + "\n", + "artifact_id = register_artifact()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Create Orchestration Deployment\n", + "An orchestration deployment URL is required to run our evaluation. 
Once it is created, we need to wait until the deployment is running and provides a deployment URL, which will be added to our configuration in the next step. \n", + "\n", + "**Note**: You can skip this step if you already have an orchestration deployment running; set the deployment URL in the next cell." + ] + }, + { + "cell_type": "code", + "execution_count": 198, + "metadata": {}, + "outputs": [], + "source": [ + "import requests\n", + "import json\n", + "import time\n", + "\n", + "\n", + "\n", + "def create_orchestration_configuration():\n", + " headers = _get_headers()\n", + " request_body = {\n", + " \"name\": \"orchestrationDeployment\",\n", + " \"executableId\": \"orchestration\",\n", + " \"scenarioId\": \"orchestration\",\n", + " \"parameterBindings\": [\n", + " {\n", + " \"key\": \"modelFilterList\",\n", + " \"value\": \"null\"\n", + " },\n", + " {\n", + " \"key\": \"modelFilterListType\",\n", + " \"value\": \"allow\"\n", + " }\n", + " ],\n", + " \"inputArtifactBindings\": []\n", + " }\n", + " \n", + " GET_CONFIGURATIONS_ENDPOINT = '/v2/lm/configurations'\n", + " request_url = f\"{AICORE_BASE_URL}{GET_CONFIGURATIONS_ENDPOINT}\"\n", + " try:\n", + " response = requests.post(\n", + " request_url, headers=headers, data=json.dumps(request_body), timeout=120\n", + " )\n", + " print(response)\n", + " if response.status_code != 201:\n", + " raise Exception(f\"Failed to create configuration: {response.status_code} - {response.text}\")\n", + " result = response.json()\n", + " print(result)\n", + " return result['id']\n", + " except:\n", + " logging.error(\"Error occurred while attempting to create a Configuration\")\n", + " raise\n", + " \n", + "def execute_orchestration_deployment(configuration_id):\n", + " headers = _get_headers()\n", + " GET_DEPLOYMENTS_ENDPOINT = '/v2/lm/deployments'\n", + " request_url = f\"{AICORE_BASE_URL}{GET_DEPLOYMENTS_ENDPOINT}\"\n", + " \n", + " request_body = {\n", + " \"configurationId\": configuration_id\n", + " }\n", + " \n", + " try:\n", + " response = requests.post(\n", + " request_url, headers=headers, 
data=json.dumps(request_body), timeout=120\n", + " )\n", + " print(response)\n", + " if(response.status_code != 202):\n", + " print(\"Deployment execution failed\")\n", + " result = response.json()\n", + " print(result)\n", + " return result['id']\n", + " \n", + " except:\n", + " logging.error(\"Error occurred while attempting to create an execution\")\n", + " raise\n", + "\n", + "def get_deployment_status(orchestration_deployment_id):\n", + " headers = _get_headers()\n", + " api_url = f\"{AICORE_BASE_URL}/v2/lm/deployments/{orchestration_deployment_id}?$select=status\"\n", + " timeout = 400 \n", + " initial_interval = 30 \n", + " pending_interval = 10\n", + " start = time.time()\n", + "\n", + " status = None\n", + " current_interval = initial_interval\n", + "\n", + " while time.time() - start < timeout:\n", + " response = requests.get(api_url, headers=headers)\n", + " if response.status_code == 200:\n", + " status = response.json().get('status')\n", + " print(f\"Deployment {orchestration_deployment_id} status: {status}\")\n", + " # Adjust polling interval based on status\n", + " if status == 'RUNNING':\n", + " return True\n", + " elif status == 'UNKNOWN':\n", + " current_interval = initial_interval\n", + " elif status == 'PENDING':\n", + " current_interval = pending_interval\n", + "\n", + " else:\n", + " print(f\"Failed to fetch deployment status. 
HTTP {response.status_code}\")\n", + " return False\n", + "\n", + " # Waiting according to status for API call\n", + " time.sleep(current_interval)\n", + "\n", + "def get_deployment_url(orchestration_deployment_id):\n", + " headers = _get_headers()\n", + " response = requests.get(f\"{AICORE_BASE_URL}/v2/lm/deployments/{orchestration_deployment_id}\", headers=headers)\n", + " if response.status_code != 200:\n", + " raise Exception(f\"Failed to get deployment URL: {response.status_code} - {response.text}\")\n", + " return response.json().get('deploymentUrl')\n", + "\n", + "# You can skip this step if you already have a orchestration deployment running\n", + "deployment_url = DEPLOYMENT_URL\n", + "if not deployment_url:\n", + " configuration_id = create_orchestration_configuration()\n", + " orchestration_deployment_id = execute_orchestration_deployment(configuration_id)\n", + " is_running = get_deployment_status(orchestration_deployment_id) \n", + " if is_running:\n", + " deployment_url = get_deployment_url(orchestration_deployment_id)\n", + " print(f\"Deployment URL: {deployment_url}\")\n", + " else:\n", + " print(\"Deployment is not running or failed.\")" + ] + }, + { + "cell_type": "code", + "execution_count": 38, + "metadata": {}, + "outputs": [], + "source": [ + "# Manually set the orchestration deployment url\n", + "# deployment_url=\"\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create a Prompt Template in Prompt Registry \n", + "\n", + "The following code defines a function `create_prompt_template()` that creates a new **Prompt Template** in the SAP AI Core **Prompt Registry**.\n", + "\n", + "**Note** : If you wish to use a prompt template that already exists in prompt registry, you can manually set `prompt_template_id` in the next cell and skip executing this cell" + ] + }, + { + "cell_type": "code", + "execution_count": 199, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + 
"{'message': 'Prompt updated successfully.', 'id': '29deed9b-6678-4548-94c1-e68e1fc2e2bd', 'scenario': 'genai-evaluations', 'name': 'prompt-registry-eval-demo', 'version': '1.0.0'}\n" + ] + } + ], + "source": [ + "def create_prompt_template():\n", + " headers = _get_headers()\n", + " GET_PROMPT_TEMPLATES_ENDPOINT = '/v2/lm/promptTemplates'\n", + " request_url = f\"{AICORE_BASE_URL}{GET_PROMPT_TEMPLATES_ENDPOINT}\"\n", + " \n", + " \n", + " prompt_template = {\n", + " \"template\": [\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": \"List the benefits and side effects of the drug in the following consumer health question: {{?question}}.\"\n", + " }\n", + " ]\n", + " }\n", + "\n", + " request_body = {\n", + " \"name\": \"prompt-registry-eval-demo\",\n", + " \"version\": \"1.0.0\",\n", + " \"scenario\": \"genai-evaluations\",\n", + " \"spec\": prompt_template\n", + " }\n", + " try:\n", + " response = requests.post(\n", + " request_url, headers=headers, data=json.dumps(request_body), timeout=120\n", + " )\n", + " if response.status_code != 200:\n", + " raise Exception(f\"Failed to create prompt template: {response.status_code} - {response.text}\")\n", + " result = response.json()\n", + " print(result)\n", + " return result['id']\n", + " except:\n", + " logging.error(\"Error occurred while attempting to create a prompt template\")\n", + " raise\n", + "\n", + "prompt_template_id = create_prompt_template()" + ] + }, + { + "cell_type": "code", + "execution_count": 40, + "metadata": {}, + "outputs": [], + "source": [ + "# Manually set prompt_template_id here if you wish to use a pre-existing prompt template\n", + "# prompt_template_id=\"\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Select your metrics\n", + " \n", + "Add the metrics that need to be evaluated to `selected_metrics_str`.\n", + "\n", + "**Note: If your dataset does not have a reference column, do NOT select metrics that require a reference.**" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "name": 
"stdout", + "output_type": "stream", + "text": [ + "Pointwise Conciseness,Pointwise Instruction Following,Pointwise Correctness,Pointwise Answer Relevance,Exact Match,BLEU,ROUGE,Content Filter on Input,Content Filter on Output\n" + ] + } + ], + "source": [ + "# Manual Selection of Metrics\n", + "selected_metrics_str = \"Pointwise Conciseness,Pointwise Instruction Following,Pointwise Correctness,Pointwise Answer Relevance,Exact Match,BLEU,ROUGE,Content Filter on Input,Content Filter on Output\"\n", + "print(selected_metrics_str)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Select your Models\n", + " \n", + "Add the models you wish to use in the string `selected_models_str`\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Selected models string: gemini-2.5-pro:001,gpt-4o:2024-08-06,gpt-5:2025-08-07\n" + ] + } + ], + "source": [ + "# Manual selection of models\n", + "selected_models_str=\"gemini-2.5-pro:001,gpt-4o:2024-08-06,gpt-5:2025-08-07\"\n", + "print(\"Selected models string:\", selected_models_str)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Start Evaluation Run (Step 3)" + ] + }, + { + "cell_type": "code", + "execution_count": 217, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Selected metrics: Pointwise Conciseness,Pointwise Instruction Following,Pointwise Correctness,Pointwise Answer Relevance,Exact Match,BLEU,ROUGE,Content Filter on Input,Content Filter on Output\n", + "Selected models: gemini-2.5-pro:001,gpt-4o:2024-08-06,gpt-5:2025-08-07\n" + ] + } + ], + "source": [ + "\n", + "import json\n", + "test_data_path = f\"testdata/{DATASET_NAME}\" # specify the test data path here. 
For the full folder just specifying testdata will work\n", + "test_datasets = json.dumps({'path': test_data_path, 'type': 'csv'})\n", + "metrics_list = selected_metrics_str\n", + "models_list = selected_models_str\n", + "print(f\"Selected metrics: {metrics_list}\")\n", + "print(f\"Selected models: {models_list}\")\n", + "#variable_mapping = json.dumps({'prompt/question': 'data/topic'}) # to map the question prompt variable to the entry in dataset.\n", + "orchestration_deployment_url = deployment_url # needs to specify this to use a specific deployment id\n", + "repetitions = \"1\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# creating an AICORE Configuration.\n", + "import requests\n", + "\n", + "request_body = {\n", + " \"name\": \"genai-eval-conf\",\n", + " \"scenarioId\": \"genai-evaluations\",\n", + " \"executableId\": \"genai-evaluations-simplified\",\n", + " \"inputArtifactBindings\": [\n", + " {\n", + " \"key\": \"datasetFolder\",\n", + " \"artifactId\": artifact_id\n", + " }\n", + " ],\n", + " \"parameterBindings\": [\n", + " {\n", + " \"key\": \"repetitions\",\n", + " \"value\": repetitions\n", + " },\n", + " {\n", + " \"key\": \"orchestrationDeploymentURL\",\n", + " \"value\": orchestration_deployment_url\n", + " },\n", + " {\n", + " \"key\": \"metrics\",\n", + " \"value\": metrics_list\n", + " },\n", + " {\n", + " \"key\": \"testDataset\",\n", + " \"value\": test_datasets\n", + " },\n", + " {\n", + " \"key\": \"promptTemplate\",\n", + " \"value\": prompt_template_id\n", + " },\n", + " {\n", + " \"key\": \"models\",\n", + " \"value\": models_list\n", + " }\n", + " ]\n", + "}\n", + "\n", + "def create_aicore_configuration():\n", + " headers = _get_headers()\n", + " GET_CONFIGURATIONS_ENDPOINT = '/v2/lm/configurations'\n", + " request_url = f\"{AICORE_BASE_URL}{GET_CONFIGURATIONS_ENDPOINT}\"\n", + " try:\n", + " print(request_body)\n", + " response = requests.post(\n", + " request_url, 
headers=headers, data=json.dumps(request_body), timeout=120\n", + " )\n", + " print(response)\n", + " if response.status_code != 201:\n", + " raise Exception(f\"Failed to create configuration: {response.status_code} - {response.text}\")\n", + " result = response.json()\n", + " print(result)\n", + " print(request_body)\n", + " return result['id']\n", + " except:\n", + " logging.error(\"Error occurred while attempting to create a Configuration\")\n", + " raise\n", + " \n", + "configuration_id = create_aicore_configuration()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Evaluation Execution Creation\n", + "Once the configuration is created, we create the AI Core execution, which triggers the evaluation workload.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# create an execution with the created configuration.\n", + "\n", + "import requests\n", + "def create_execution():\n", + " headers = _get_headers()\n", + " GET_EXECUTIONS_ENDPOINT = '/v2/lm/executions'\n", + " request_url = f\"{AICORE_BASE_URL}{GET_EXECUTIONS_ENDPOINT}\"\n", + " request_body = {\"configurationId\" : configuration_id} \n", + " try:\n", + " response = requests.post(\n", + " request_url, headers=headers, data=json.dumps(request_body), timeout=120\n", + " )\n", + " print(\"response received is \", response)\n", + " result = response.json()\n", + " print(result)\n", + " return result['id']\n", + " except:\n", + " logging.error(\"Error occurred while attempting to create an execution\")\n", + " raise\n", + " \n", + "\n", + "execution_id = create_execution()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# get execution status\n", + "import requests\n", + "def get_execution_status(execution_id):\n", + " headers = _get_headers()\n", + " LOG_EXECUTIONS_ENDPOINT = f'/v2/lm/executions/{execution_id}'\n", + " request_url = 
f\"{AICORE_BASE_URL}{LOG_EXECUTIONS_ENDPOINT}\"\n", + " try:\n", + " response = requests.get(\n", + " request_url, headers=headers, timeout=120\n", + " )\n", + " print(\"response received is \", response)\n", + " result = response.json()\n", + " return result\n", + " except:\n", + " logging.error(\"Error occurred while attempting to get execution status\")\n", + " raise\n", + " \n", + "\n", + "get_execution_status(execution_id)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "Run the following cells only when the status field in the Execution response is \"COMPLETED\" to view the results.\n", + "\n", + "The status field progresses through different states over time: UNKNOWN → PENDING → RUNNING → COMPLETED. Ensure it reaches COMPLETED before proceeding.\n", + "\n", + "\n", + "Note: The targetStatus will always be COMPLETED from the start, as it represents the intended final state of the Execution. Do not confuse it with the actual status field.\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Evaluation Result (Step 4)\n", + "The evaluation job produces two outputs\n", + "1. A SQLite DB file which stores the orchestration input, orchestration output, values for all the metrics calculated for this orchestration output and statistics such as latency for this orchestration output. These metric values are called raw metric values. This SQLite DB file is stored in the object store as an AI Core output artifact.\n", + "2. A set of metrics whose values are aggregated from the raw metric values. The aggregate metrics are stored in the tracking service. 
The user-defined tags along with the run names are stored with the metrics.\n", + "After the execution completes, the user can see the runs generated by the workload, along with the aggregate metrics, by calling the tracking API as shown below." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "response received is \n" + ] + } + ], + "source": [ + "# Get aggregate metrics using execution id\n", + "import requests\n", + "def retrieve_aggregate_metrics(execution_id):\n", + " headers = _get_headers()\n", + " GET_METRICS_ENDPOINT = f'/v2/lm/metrics?tagFilters=evaluation.ai.sap.com/child-of={execution_id}'\n", + " request_url = f\"{AICORE_BASE_URL}{GET_METRICS_ENDPOINT}\"\n", + " try:\n", + " response = requests.get(request_url, headers=headers, timeout=120)\n", + " print(\"response received is \", response)\n", + " result = response.json()\n", + " return result\n", + " except:\n", + " logging.error(\"Error occurred while attempting to retrieve aggregate metrics for the run\")\n", + " raise\n", + "\n", + "runs_data = retrieve_aggregate_metrics(execution_id)" + ] + }, + { + "cell_type": "code", + "execution_count": 229, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "
<table>\n", + "<thead><tr><th>run_name</th><th>runId</th><th>metrics_name</th><th>metric_value</th></tr></thead>\n", + "<tbody>\n", + "<tr><td>Run-prompt-registry-eval-demo-gpt-4o-2024-05-13</td><td>007d20838a084869a00767bbe2b39667</td><td>Pointwise Conciseness/1/count</td><td>0.0</td></tr>\n", + "<tr><td>Run-prompt-registry-eval-demo-gpt-4o-2024-05-13</td><td>007d20838a084869a00767bbe2b39667</td><td>Pointwise Conciseness/1/relative_frequency</td><td>0.0</td></tr>\n", + "<tr><td>Run-prompt-registry-eval-demo-gpt-4o-2024-05-13</td><td>007d20838a084869a00767bbe2b39667</td><td>Pointwise Conciseness/2/count</td><td>0.0</td></tr>\n", + "<tr><td>Run-prompt-registry-eval-demo-gpt-4o-2024-05-13</td><td>007d20838a084869a00767bbe2b39667</td><td>Pointwise Conciseness/2/relative_frequency</td><td>0.0</td></tr>\n", + "<tr><td>Run-prompt-registry-eval-demo-gpt-4o-2024-05-13</td><td>007d20838a084869a00767bbe2b39667</td><td>Pointwise Conciseness/3/count</td><td>7.0</td></tr>\n", + "</tbody>\n", + "</table>\n
" ], + "text/plain": [ + "" + ] + }, + "execution_count": 229, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# A sample method to transform the aggregate metric data returned by the tracking service as per use case\n", + "\n", + "from json2html import *\n", + "from IPython.display import HTML\n", + "def transform_run_data(runs_list): \n", + " transformed_data = []\n", + " for run in runs_list:\n", + " try:\n", + " output_json = {}\n", + "\n", + " # Extract run_name from tags\n", + " run_name = None\n", + " for tag in run.get(\"tags\", []):\n", + " if tag.get(\"name\") == \"evaluation.ai.sap.com/run-name\":\n", + " run_name = tag.get(\"value\")\n", + " break \n", + " if run_name is None:\n", + " continue\n", + "\n", + " # Rename executionId to runId\n", + " run_id = run.get(\"executionId\")\n", + " if run_id is None: \n", + " continue\n", + " \n", + " # Extract metrics_name and metric_value from metrics\n", + " metrics = run.get(\"metrics\", [])\n", + " if not metrics: \n", + " continue\n", + " for metric in metrics:\n", + " output_json = {\n", + " \"run_name\": run_name,\n", + " \"runId\": run_id,\n", + " \"metrics_name\": metric.get(\"name\"),\n", + " \"metric_value\": metric.get(\"value\")\n", + " }\n", + " transformed_data.append(output_json)\n", + "\n", + " except (TypeError, AttributeError): # Handle potential errors if input is not in the expected format\n", + " continue\n", + " return transformed_data\n", + "# Transform the run data first\n", + "transformed_data = transform_run_data(runs_data['resources'])\n", + "# Fetch unique run names\n", + "unique_run_names = list({entry['run_name'] for entry in transformed_data})\n", + "\n", + "HTML(json2html.convert(json = transformed_data[:5]))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To view this data in a more comparison-friendly format, we transform it into a tabular layout, with rows being the different models evaluated and columns being the different metrics 
calculated." + ] + }, + { + "cell_type": "code", + "execution_count": 255, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "<p>(Pivot table output: rows are the evaluated models gemini-2.5-pro, gpt-4o, and gpt-5; columns are the aggregate metrics, i.e. mean/median/p90/p95/stddev for BLEU and ROUGE, per-score count and relative_frequency plus entropy and mode for Content Filter on Input, Content Filter on Output, Exact Match, Pointwise Answer Relevance, Pointwise Conciseness, Pointwise Correctness, and Pointwise Instruction Following, as well as completion_tokens/sum, latency/average, prompt_tokens/sum, and submission/sum.)</p>\n
" + ], + "text/plain": [ + "" + ] + }, + "execution_count": 255, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "import pandas as pd\n", + "\n", + "def get_model_from_run(run):\n", + " for tag in run.get(\"tags\", []):\n", + " if tag.get(\"name\") == \"evaluation.ai.sap.com/model\":\n", + " return tag.get(\"value\")\n", + "\n", + "def aggregate_metrics_by_model(runs_list):\n", + " transformed_data = []\n", + " for run in runs_list:\n", + " model = get_model_from_run(run)\n", + " for metric in run[\"metrics\"]:\n", + " metric_value = metric.get(\"value\")\n", + "\n", + " # Override only for /mode\n", + " if metric.get(\"name\").endswith(\"/mode\"):\n", + " for label in metric.get(\"labels\", []):\n", + " if label.get(\"name\") == \"evaluation.ai.sap.com/mode_category\":\n", + " metric_value = label.get(\"value\")\n", + " break\n", + " output_json = {\n", + " \"model\": model,\n", + " \"metrics_name\": metric.get(\"name\"),\n", + " \"metric_value\": metric_value\n", + " }\n", + " transformed_data.append(output_json)\n", + " return transformed_data\n", + "\n", + "\n", + "def create_metrics_pivot_table(transformed_data):\n", + " \"\"\"\n", + " Creates a pivot table where rows are models and columns are metrics.\n", + " \n", + " Args:\n", + " transformed_data: List of dictionaries with 'model', 'metrics_name', 'metric_value'\n", + " \n", + " Returns:\n", + " DataFrame with models as rows and metrics as columns\n", + " \"\"\"\n", + " # Convert list of dictionaries to DataFrame\n", + " df = pd.DataFrame(transformed_data)\n", + " \n", + " # Create pivot table\n", + " pivot_table = df.pivot_table(\n", + " index='model',\n", + " columns='metrics_name',\n", + " values='metric_value',\n", + " aggfunc='first' # Use 'first' to get the single value, or 'mean' if there are duplicates\n", + " )\n", + " \n", + " return pivot_table\n", + "\n", + "transformed_data = aggregate_metrics_by_model(runs_data['resources'])\n", + "metrics_pivot = 
create_metrics_pivot_table(transformed_data)\n", + "\n", + "HTML(metrics_pivot.to_html())" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To drill down further, you can also download the SQLite DB file from object storage and analyse the results (instance-level metrics, logs, etc.) locally." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Download the result artifacts from the object store.\n", + "import os\n", + "\n", + "import boto3\n", + "\n", + "def download_all_objects(prefix, destination_folder):\n", + " \"\"\"\n", + " Recursively download all objects from the S3 bucket starting with a specific prefix.\n", + "\n", + " Bucket name and credentials are read from the AWS_* variables defined earlier.\n", + "\n", + " :param prefix: Prefix to filter objects in the bucket.\n", + " :param destination_folder: Local folder to save the downloaded files.\n", + " \"\"\"\n", + " s3_client = boto3.client(\n", + " 's3',\n", + " aws_access_key_id=AWS_ACCESS_KEY,\n", + " aws_secret_access_key=AWS_SECRET_ACCESS_KEY,\n", + " region_name=AWS_REGION\n", + " )\n", + "\n", + " # Ensure the destination folder exists\n", + " if not os.path.exists(destination_folder):\n", + " os.makedirs(destination_folder)\n", + "\n", + " # Paginate through objects\n", + " paginator = s3_client.get_paginator('list_objects_v2')\n", + " pages = paginator.paginate(Bucket=AWS_BUCKET_ID, Prefix=prefix)\n", + "\n", + " for page in pages:\n", + " if 'Contents' in page:\n", + " for obj in page['Contents']:\n", + " key = obj['Key']\n", + " # Skip zero-byte 'directory' placeholder keys\n", + " if key.endswith('/'):\n", + " continue\n", + " local_file_path = os.path.join(destination_folder, os.path.relpath(key, prefix))\n", + "\n", + " # Ensure the local directory structure exists\n", + " local_directory = os.path.dirname(local_file_path)\n", + " if not os.path.exists(local_directory):\n", + " os.makedirs(local_directory)\n", + "\n", + " # Download the object\n", + " print(f\"Downloading {key} to {local_file_path}\")\n", + " s3_client.download_file(AWS_BUCKET_ID, key, local_file_path)\n", +
"\n", + "\n", + "# Download the evaluation results from the object store. Look at execution status under \"outputArtifacts\" key to see the 'url'\n", + "# which shows the data path of where your output results are stored\n", + "EXECUTION_ID = execution_id\n", + "sqlite_db_prefix = f'{EXECUTION_ID}/tmp/' # change the prefix based on where your output artifact is stored in the bucket.\n", + "destination_folder = 'results-new'\n", + "\n", + "download_all_objects(sqlite_db_prefix, destination_folder)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "NOTE: The below Cell shows results of top 5 rows of the Evaluation Results across all SQLite tables. IF you wish to see all the entries you can comment the line saying df.head(10) in the below cell or modify the number accordingly." + ] + }, + { + "cell_type": "code", + "execution_count": 256, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "\n", + "\n", + "
\n", + "\n", + "
\n", + "

Table: run

\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
idnameconfigtagscreated_atupdated_at
83d31ab3552644d2bccddc73d9c9f30dRun-prompt-registry-eval-demo-gemini-2.5-pro-001{\"modules\": {\"prompt_templating\": {\"model\": {\"name\": \"gemini-2.5-pro\", \"version\": \"001\"}, \"prompt\": {\"template\": [{\"role\": \"user\", \"content\": \"List the benefits and side effects of the drug in the following consumer health question: {{?question}}.\"}], \"defaults\": {}}}}}{\"promptTemplateId\": \"ee5205b4-a3c4-4217-9292-e663e6df0012\"}2026-02-04 15:25:39.1047282026-02-04 15:25:39.104731
2663332bb4da43c089217193cbae88ceRun-prompt-registry-eval-demo-gpt-4o-2024-08-06{\"modules\": {\"prompt_templating\": {\"model\": {\"name\": \"gpt-4o\", \"version\": \"2024-08-06\"}, \"prompt\": {\"template\": [{\"role\": \"user\", \"content\": \"List the benefits and side effects of the drug in the following consumer health question: {{?question}}.\"}], \"defaults\": {}}}}}{\"promptTemplateId\": \"ee5205b4-a3c4-4217-9292-e663e6df0012\"}2026-02-04 15:25:39.1047352026-02-04 15:25:39.104735
a5be2752cae64582922f96b80c890dc8Run-prompt-registry-eval-demo-gpt-5-2025-08-07{\"modules\": {\"prompt_templating\": {\"model\": {\"name\": \"gpt-5\", \"version\": \"2025-08-07\"}, \"prompt\": {\"template\": [{\"role\": \"user\", \"content\": \"List the benefits and side effects of the drug in the following consumer health question: {{?question}}.\"}], \"defaults\": {}}}}}{\"promptTemplateId\": \"ee5205b4-a3c4-4217-9292-e663e6df0012\"}2026-02-04 15:25:39.1047392026-02-04 15:25:39.104740
\n", + "
\n", + " \n", + "
\n", + "

Table: configuration

\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
idtest_datasetsmetricsvariable_mappingtagsorchestration_deployment_urlrepetitionsmetric_templatescreated_atupdated_at
e7c519079615412c820ef465baa88905{\"path\": \"testdata/medicalqna_dataset.csv\", \"type\": \"csv\"}[\"Pointwise Conciseness\", \"Pointwise Instruction Following\", \"Pointwise Correctness\", \"Pointwise Answer Relevance\", \"Exact Match\", \"BLEU\", \"ROUGE\", \"Content Filter on Input\", \"Content Filter on Output\"]{}{}https://api.ai.aicore-pr.eu-west-1.mlf-aws-dev.com/v2/inference/deployments/d0d6f232abfea6721[{\"evaluationMethod\": \"llm-as-a-judge\", \"scenario\": \"genai-evaluations\", \"createdAt\": \"2025-11-19 00:00:00+00:00\", \"managedBy\": \"imperative\", \"metricType\": \"evaluation\", \"systemPredefined\": true, \"id\": \"95c03e1b-3938-42dd-bc69-3ec5cd0e5e18\", \"name\": \"Pointwise Conciseness\", \"description\": \"Measures how short and concise the model\\u2019s response is. Scores range from 1 to 5, with higher values indicating a more concise answer.\", \"version\": \"1.0.0\", \"spec\": {\"outputType\": \"categorical\", \"promptType\": \"structured\", \"configuration\": {\"modelConfiguration\": {\"name\": \"gpt-4.1\", \"version\": \"2025-04-14\", \"parameters\": [{\"key\": \"temperature\", \"value\": \"0\"}]}, \"promptConfiguration\": {\"evaluationTask\": \"You are an expert evaluator. 
Your task is to evaluate the conciseness of responses generated by AI models.\\nWe will provide you with the user input and an AI-generated response.\\nYou should first read the user input carefully to understand the context and intention, and then evaluate the conciseness of the response based on the criteria provided in the Evaluation section below.\\nYou will assign the response a rating following the Rating Rubric and Evaluation Steps.\\nGive step-by-step explanations for your rating, and only choose ratings from the Rating Rubric.\", \"definition\": \"You will be assessing conciseness, which measures the ability to convey the necessary information in a clear and succinct manner.\", \"criteria\": \"Conciseness: Does the response deliver the essential information without unnecessary words or redundancy?\", \"ratingRubric\": [{\"rating\": \"1\", \"rule\": \"(Not concise). The response is not concise and is filled with unnecessary or redundant content that obscures the main points.\"}, {\"rating\": \"2\", \"rule\": \"(Slightly concise). The response is slightly concise and contains a significant amount of unnecessary or redundant information.\"}, {\"rating\": \"3\", \"rule\": \"(Somewhat concise). The response is somewhat concise but may include some unnecessary words or slightly redundant information.\"}, {\"rating\": \"4\", \"rule\": \"(Mostly concise). The response is mostly concise and generally avoids unnecessary words while covering the essential information.\"}, {\"rating\": \"5\", \"rule\": \"(Highly concise). The response is very concise, delivering all necessary information in a succinct manner without any superfluous content.\"}], \"evaluationSteps\": [\"Assess the response in terms of Conciseness. Identify how effectively the response communicates essential information without unnecessary words according to the Criteria.\", \"Score based on the rating rubric. 
Give a brief rationale to explain your evaluation considering Conciseness.\"]}}}, \"usageType\": [\"evaluation\"], \"additionalProperties\": {\"variables\": [], \"supported_values\": [1, 5], \"experimental\": true}}, {\"evaluationMethod\": \"llm-as-a-judge\", \"scenario\": \"genai-evaluations\", \"createdAt\": \"2025-11-19 00:00:00+00:00\", \"managedBy\": \"imperative\", \"metricType\": \"evaluation\", \"systemPredefined\": true, \"id\": \"cd3ffd21-faae-4f06-8184-52541182d9a5\", \"name\": \"Pointwise Instruction Following\", \"description\": \"Evaluates the model\\u2019s ability to follow the instructions provided in the user prompt. Scores range from 1 to 5, with 1 indicating no fulfillment and 5 indicating complete fulfillment.\", \"version\": \"1.0.0\", \"spec\": {\"outputType\": \"categorical\", \"promptType\": \"structured\", \"configuration\": {\"modelConfiguration\": {\"name\": \"gpt-4.1\", \"version\": \"2025-04-14\", \"parameters\": [{\"key\": \"temperature\", \"value\": \"0\"}]}, \"promptConfiguration\": {\"evaluationTask\": \"Please act as an impartial judge and evaluate the quality of the responses based on the prompt and following criteria:\", \"definition\": \"You will be assessing model's the ability to follow instructions provided in the user prompt.\", \"criteria\": \"Instruction following: The response demonstrates a clear understanding of the instructions in the user prompt, satisfying all of the instruction's requirements. Evaluate the responses STRICTLY on the ability to follow instruction ONLY.\", \"ratingRubric\": [{\"rating\": \"1\", \"rule\": \"(No fulfillment). Response does not address the most important aspects of the instruction. The user would feel like their request was not at all understood.\"}, {\"rating\": \"2\", \"rule\": \"(Poor fulfillment). Response addresses some aspects of the instruction but misses key requirements or major components. 
The user would feel like their instruction was misunderstood in significant ways.\"}, {\"rating\": \"3\", \"rule\": \"(Some fulfillment). Response does not address some minor aspects and/or ignores some requirements of the instruction. The user would feel like their instruction was partially understood.\"}, {\"rating\": \"4\", \"rule\": \"(Good fulfillment). Response addresses most aspects and requirements of the instruction. It might miss very minor details or have slight deviations from requirements. The user would feel like their instruction was well understood.\"}, {\"rating\": \"5\", \"rule\": \"(Complete fulfillment). Response addresses all aspects and adheres to all requirements of the instruction. The user would feel like their instruction was completely understood.\"}]}}}, \"usageType\": [\"evaluation\"], \"additionalProperties\": {\"variables\": [], \"supported_values\": [1, 5], \"experimental\": false}}, {\"evaluationMethod\": \"llm-as-a-judge\", \"scenario\": \"genai-evaluations\", \"createdAt\": \"2025-11-19 00:00:00+00:00\", \"managedBy\": \"imperative\", \"metricType\": \"evaluation\", \"systemPredefined\": true, \"id\": \"36d1abca-cf01-48f6-9bd1-8d5e1272a374\", \"name\": \"Pointwise Correctness\", \"description\": \"Evaluates whether an LLM response is correct, accurate, and factual using a user-provided reference, for both general and retrieval-augmented (RAG) use cases. Scores range from 1 to 5, with 1 indicating completely incorrect and 5 indicating fully correct.\", \"version\": \"1.0.0\", \"spec\": {\"outputType\": \"categorical\", \"promptType\": \"structured\", \"configuration\": {\"modelConfiguration\": {\"name\": \"gpt-4.1\", \"version\": \"2025-04-14\", \"parameters\": [{\"key\": \"temperature\", \"value\": \"0\"}]}, \"promptConfiguration\": {\"evaluationTask\": \"You are an expert evaluator. 
Your task is to evaluate the correctness of responses generated by AI models.\\nWe will provide you with the user input, an AI-generated response, and a reference answer.\\nYou should first read the user input carefully to understand the task and intent, then evaluate the correctness of the response based on the criteria and rubric below.\\nAssign the response a rating using the Rating Rubric and Evaluation Steps.\\nGive step-by-step explanations for your rating, and only choose ratings from the Rating Rubric.\", \"definition\": \"You will be assessing correctness, which measures whether the response is factually accurate, complete, and directly answers the user's query as intended, using the reference as the main authority.\", \"criteria\": \"Correctness:\\n - Is the response factually accurate and free from errors?\\n- Does the response address all parts of the user's question?\\n- Does the answer avoid missing key information?\\n- Does the response avoid introducing incorrect, misleading, or unrelated information?\\n- If the question is ambiguous or lacks context, is an appropriate clarification or expression of uncertainty provided?\\n- If the reference is a refusal, clarification, or contains specific instructions (such as links or attributions), does the response follow this appropriately?\\n- Are alternative correct answers recognized, not just verbatim matches to the reference?\", \"ratingRubric\": [{\"rating\": \"1\", \"rule\": \"(Incorrect). The response is fundamentally incorrect, misleading, irrelevant, or an inappropriate refusal.\"}, {\"rating\": \"2\", \"rule\": \"(Somewhat incorrect). The response contains significant inaccuracies, omissions, or context mismatch; unreliable as an answer.\"}, {\"rating\": \"3\", \"rule\": \"(Somewhat correct). The response is partially correct, but with notable errors, missing key aspects, or context insensitivity.\"}, {\"rating\": \"4\", \"rule\": \"(Mostly correct). 
The response is mostly correct, with only minor omissions, ambiguities, or context mismatches.\"}, {\"rating\": \"5\", \"rule\": \"(Completely correct). The response is fully correct, complete, and directly answers the user's query as intended.\"}], \"evaluationSteps\": [\"Assess the response for factual accuracy, completeness, and directness in answering the user's query. Identify any errors, omissions, or irrelevant information.\", \"Consider if the response appropriately handles ambiguity, context, or special instructions from the reference.\", \"Score based on the rating rubric. Give a concise, unbiased rationale for your evaluation, focusing on correctness.\"]}}}, \"usageType\": [\"evaluation\"], \"additionalProperties\": {\"variables\": [], \"supported_values\": [1, 5], \"experimental\": false}}, {\"evaluationMethod\": \"llm-as-a-judge\", \"scenario\": \"genai-evaluations\", \"createdAt\": \"2025-11-19 00:00:00+00:00\", \"managedBy\": \"imperative\", \"metricType\": \"evaluation\", \"systemPredefined\": true, \"id\": \"0ae30283-0140-451e-8a88-267ef801f35c\", \"name\": \"Pointwise Answer Relevance\", \"description\": \"Measures how closely the model\\u2019s response relates to the user prompt, for both general and RAG use cases. Scores range from 1 to 5, with higher values indicating greater relevance.\", \"version\": \"1.0.0\", \"spec\": {\"outputType\": \"categorical\", \"promptType\": \"structured\", \"configuration\": {\"modelConfiguration\": {\"name\": \"gpt-4.1\", \"version\": \"2025-04-14\", \"parameters\": [{\"key\": \"temperature\", \"value\": \"0\"}]}, \"promptConfiguration\": {\"evaluationTask\": \"You are an expert evaluator. 
Your task is to evaluate the relevance of responses generated by AI models.\\nWe will provide you with the user input and an AI-generated response.\\nYou should first read the user input carefully to understand the context and intention, and then evaluate the relevance of the response based on the criteria provided in the Evaluation section below.\\nYou will assign the response a rating following the Rating Rubric and Evaluation Steps.\\nGive step-by-step explanations for your rating, and only choose ratings from the Rating Rubric.\", \"definition\": \"You will be assessing relevance, which measures the ability to provide a response that is pertinent and useful based on the user prompt and the context provided.\", \"criteria\": \"Relevance: Does the response address the user's query appropriately and provide pertinent information?\", \"ratingRubric\": [{\"rating\": \"1\", \"rule\": \"(Irrelevant). The response is irrelevant and does not address the user's query.\"}, {\"rating\": \"2\", \"rule\": \"(Slightly relevant). The response is slightly relevant and largely misses the user's query.\"}, {\"rating\": \"3\", \"rule\": \"(Somewhat relevant). The response is somewhat relevant but may miss key aspects of the user's query.\"}, {\"rating\": \"4\", \"rule\": \"(Mostly relevant). The response is mostly relevant and generally addresses the user's query with useful information.\"}, {\"rating\": \"5\", \"rule\": \"(Highly relevant). The response is highly relevant, directly addresses the user's query, and provides useful information.\"}], \"evaluationSteps\": [\"Assess the response in terms of Relevance. Identify how well the response aligns with the user's query and context according to the Criteria.\", \"Score based on the rating rubric. 
Give a brief rationale to explain your evaluation considering Relevance.\"]}}}, \"usageType\": [\"evaluation\"], \"additionalProperties\": {\"variables\": [], \"supported_values\": [1, 5], \"experimental\": true}}, {\"evaluationMethod\": \"computed\", \"scenario\": \"genai-evaluations\", \"createdAt\": \"2025-11-19 00:00:00+00:00\", \"managedBy\": \"imperative\", \"metricType\": \"evaluation\", \"systemPredefined\": true, \"id\": \"39f4a0ba-8a21-4cda-be95-703dba47e4f1\", \"name\": \"Exact Match\", \"description\": \"Boolean indicating whether the output exactly matches the reference.\", \"version\": \"1.0.0\", \"spec\": {\"outputType\": \"boolean\"}, \"usageType\": [\"evaluation\"], \"additionalProperties\": {\"variables\": [\"reference\"], \"supported_values\": [0, 1], \"experimental\": false}}, {\"evaluationMethod\": \"computed\", \"scenario\": \"genai-evaluations\", \"createdAt\": \"2025-11-19 00:00:00+00:00\", \"managedBy\": \"imperative\", \"metricType\": \"evaluation\", \"systemPredefined\": true, \"id\": \"3ea07c1f-5b10-4b12-bf46-6d429faf8010\", \"name\": \"BLEU\", \"description\": \"BLEU (Bilingual Evaluation Understudy) evaluates machine-translated text quality by calculating n-gram precision between candidate and reference translations. 
Scores range from 0 to 1, with higher values indicating greater similarity.\", \"version\": \"1.0.0\", \"spec\": {\"outputType\": \"numerical\"}, \"usageType\": [\"evaluation\"], \"additionalProperties\": {\"variables\": [\"reference\"], \"supported_values\": [0, 1], \"experimental\": false}}, {\"evaluationMethod\": \"computed\", \"scenario\": \"genai-evaluations\", \"createdAt\": \"2025-11-19 00:00:00+00:00\", \"managedBy\": \"imperative\", \"metricType\": \"evaluation\", \"systemPredefined\": true, \"id\": \"3904208a-b886-41b1-8448-d363245d5397\", \"name\": \"ROUGE\", \"description\": \"ROUGE (Recall-Oriented Understudy for Gisting Evaluation) is a set of metrics for evaluating summarization and machine translation by measuring overlap in n-grams, word sequences, and word pairs between generated and reference texts. This implementation is case-insensitive.\", \"version\": \"1.0.0\", \"spec\": {\"outputType\": \"numerical\"}, \"usageType\": [\"evaluation\"], \"additionalProperties\": {\"variables\": [\"reference\"], \"supported_values\": [0, 1], \"experimental\": false}}, {\"evaluationMethod\": \"computed\", \"scenario\": \"genai-evaluations\", \"createdAt\": \"2025-11-19 00:00:00+00:00\", \"managedBy\": \"imperative\", \"metricType\": \"evaluation\", \"systemPredefined\": true, \"id\": \"ba2ece64-d4ed-4645-96d3-8728af12515f\", \"name\": \"Content Filter on Input\", \"description\": \"Boolean indicating whether the input content filter was invoked.\", \"version\": \"1.0.0\", \"spec\": {\"outputType\": \"boolean\"}, \"usageType\": [\"evaluation\"], \"additionalProperties\": {\"variables\": [], \"supported_values\": [0, 1], \"experimental\": false}}, {\"evaluationMethod\": \"computed\", \"scenario\": \"genai-evaluations\", \"createdAt\": \"2025-11-19 00:00:00+00:00\", \"managedBy\": \"imperative\", \"metricType\": \"evaluation\", \"systemPredefined\": true, \"id\": \"e677ac61-5c39-4f32-8feb-4460ff6b3c23\", \"name\": \"Content Filter on Output\", \"description\": 
\"Boolean indicating whether the output content filter was invoked.\", \"version\": \"1.0.0\", \"spec\": {\"outputType\": \"boolean\"}, \"usageType\": [\"evaluation\"], \"additionalProperties\": {\"variables\": [], \"supported_values\": [0, 1], \"experimental\": false}}]2026-02-04 15:25:39.0961172026-02-04 15:25:39.096121
\n", + "
\n", + " \n", + "
\n", + "

Table: submission

\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
idrun_idorchestration_configurationtemplate_variablescreated_atupdated_at
7516fcd4de96441a8ae9e8f68bdb3f5c83d31ab3552644d2bccddc73d9c9f30d{\"modules\": {\"prompt_templating\": {\"model\": {\"name\": \"gemini-2.5-pro\", \"version\": \"001\"}, \"prompt\": {\"template\": [{\"role\": \"user\", \"content\": \"List the benefits and side effects of the drug in the following consumer health question: {{?question}}.\"}], \"defaults\": {}}}}}{\"question\": \"how does rivatigmine and otc sleep medicine interact\", \"sentiment\": \"Interaction\", \"reference\": \"tell your doctor and pharmacist what prescription and nonprescription medications, vitamins, nutritional supplements, and herbal products you are taking or plan to take. Be sure to mention any of the following: antihistamines; aspirin and other nonsteroidal anti-inflammatory medications (NSAIDs) such as ibuprofen (Advil, Motrin) and naproxen (Aleve, Naprosyn); bethanechol (Duvoid, Urecholine); ipratropium (Atrovent, in Combivent, DuoNeb); and medications for Alzheimer's disease, glaucoma, irritable bowel disease, motion sickness, ulcers, or urinary problems. Your doctor may need to change the doses of your medications or monitor you carefully for side effects.\"}2026-02-04 15:25:39.1149732026-02-04 15:25:39.114975
fbe9fd3eaa1945bfa97204593045861483d31ab3552644d2bccddc73d9c9f30d{\"modules\": {\"prompt_templating\": {\"model\": {\"name\": \"gemini-2.5-pro\", \"version\": \"001\"}, \"prompt\": {\"template\": [{\"role\": \"user\", \"content\": \"List the benefits and side effects of the drug in the following consumer health question: {{?question}}.\"}], \"defaults\": {}}}}}{\"question\": \"how does valium affect the brain\", \"sentiment\": \"Action\", \"reference\": \"Diazepam is a benzodiazepine that exerts anxiolytic, sedative, muscle-relaxant, anticonvulsant and amnestic effects. Most of these effects are thought to result from a facilitation of the action of gamma aminobutyric acid (GABA), an inhibitory neurotransmitter in the central nervous system.\"}2026-02-04 15:25:39.1149782026-02-04 15:25:39.114979
c64e5a8aef454b1f864d90611e2c1a9b83d31ab3552644d2bccddc73d9c9f30d{\"modules\": {\"prompt_templating\": {\"model\": {\"name\": \"gemini-2.5-pro\", \"version\": \"001\"}, \"prompt\": {\"template\": [{\"role\": \"user\", \"content\": \"List the benefits and side effects of the drug in the following consumer health question: {{?question}}.\"}], \"defaults\": {}}}}}{\"question\": \"what is morphine\", \"sentiment\": \"Information\", \"reference\": \"Morphine is a pain medication of the opiate family which is found naturally in a number of plants and animals.[5][7] It acts directly on the central nervous system (CNS) to decrease the feeling of pain.\"}2026-02-04 15:25:39.1149812026-02-04 15:25:39.114981
97a2a28dcc9b49029b3101c342461c8f83d31ab3552644d2bccddc73d9c9f30d{\"modules\": {\"prompt_templating\": {\"model\": {\"name\": \"gemini-2.5-pro\", \"version\": \"001\"}, \"prompt\": {\"template\": [{\"role\": \"user\", \"content\": \"List the benefits and side effects of the drug in the following consumer health question: {{?question}}.\"}], \"defaults\": {}}}}}{\"question\": \"what are the milligrams for oxycodone e\", \"sentiment\": \"Dose\", \"reference\": \"\\ufffd 10 mg \\ufffd 20 mg \\ufffd 40 mg \\ufffd 80 mg ...\"}2026-02-04 15:25:39.1149842026-02-04 15:25:39.114984
a14b6ad38ea7484b9b48a486a6d1b7d783d31ab3552644d2bccddc73d9c9f30d{\"modules\": {\"prompt_templating\": {\"model\": {\"name\": \"gemini-2.5-pro\", \"version\": \"001\"}, \"prompt\": {\"template\": [{\"role\": \"user\", \"content\": \"List the benefits and side effects of the drug in the following consumer health question: {{?question}}.\"}], \"defaults\": {}}}}}{\"question\": \"81% aspirin contain resin and shellac in it. ?\", \"sentiment\": \"Ingredient\", \"reference\": \"Inactive Ingredients Ingredient Name\"}2026-02-04 15:25:39.1149862026-02-04 15:25:39.114987
\n", + "
\n", + " \n", + "
\n", + "

Table: submission_result

\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
submission_idrun_idrepetition_countcompletion_resultlatencycreated_atupdated_at
7516fcd4de96441a8ae9e8f68bdb3f5c83d31ab3552644d2bccddc73d9c9f30d1{\"request_id\": \"37930f04-8112-9067-b59e-696e43a24410\", \"intermediate_results\": {\"templating\": [{\"content\": \"List the benefits and side effects of the drug in the following consumer health question: how does rivatigmine and otc sleep medicine interact.\", \"role\": \"user\"}], \"llm\": {\"id\": \"\", \"object\": \"chat.completion\", \"created\": 1770218849, \"model\": \"gemini-2.5-pro\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"Of course. Here is a breakdown of the benefits and side effects for each drug, followed by an explanation of their interaction, based on the consumer health question.\\n\\n***\\n\\n### 1. Rivastigmine (e.g., brand name Exelon)\\n\\nThis is a prescription medication used to treat dementia.\\n\\n**Benefits:**\\n* **Improves Cognitive Function:** It is used to treat mild to moderate dementia associated with Alzheimer's disease and Parkinson's disease.\\n* **Slows Symptom Progression:** It can help improve or stabilize functions like memory, thinking, and language. It does not cure the disease but can help manage the symptoms.\\n* **Mechanism of Action:** It works by increasing the levels of a chemical in the brain called acetylcholine, which is important for memory and thought processes.\\n\\n**Common Side Effects:**\\n* Nausea, vomiting, diarrhea\\n* Loss of appetite and weight loss\\n* Dizziness, headache, or fatigue\\n* Stomach upset or pain\\n\\n### 2. 
Common OTC (Over-the-Counter) Sleep Medicines\\n\\nThis category most often includes drugs containing **diphenhydramine** (found in Benadryl, ZzzQuil, Advil PM) or **doxylamine** (found in Unisom SleepTabs).\\n\\n**Benefits:**\\n* **Induces Drowsiness:** Helps individuals fall asleep and manage occasional insomnia.\\n* **Widely Accessible:** Available without a prescription for short-term use.\\n\\n**Common Side Effects:**\\n* Next-day drowsiness or a \\\"hangover\\\" effect\\n* Dizziness and confusion (especially in older adults)\\n* Dry mouth, blurred vision\\n* Constipation and difficulty urinating\\n* **Mechanism of Action:** These drugs have strong **anticholinergic** properties, which means they work by **blocking** the action of acetylcholine. This blocking effect causes drowsiness, but also the other side effects listed.\\n\\n---\\n\\n### The Interaction Between Rivastigmine and OTC Sleep Medicine\\n\\nThis is a significant interaction that should be avoided. The two types of drugs have opposite effects on the brain, creating a direct conflict.\\n\\n**1. Reduced Effectiveness of Rivastigmine:**\\n* Rivastigmine works by **increasing** acetylcholine to help with dementia symptoms.\\n* OTC sleep aids work by **blocking** acetylcholine.\\n* **Result:** Taking an OTC sleep aid can directly counteract the intended benefit of rivastigmine, making the dementia medication less effective.\\n\\n**2. Increased Risk of Negative Side Effects:**\\n* Both medications can cause dizziness and confusion on their own. 
Taking them together significantly increases the risk and severity of these side effects.\\n* For an older adult, this heightened confusion and dizziness can lead to a much higher risk of **falls, injury, and accidents**.\\n* The anticholinergic side effects from the sleep aid (dry mouth, constipation, blurry vision) can become more pronounced and problematic.\\n\\n### Summary and Recommendation\\n\\n**Do not combine rivastigmine with common OTC sleep medicines like ZzzQuil, Benadryl, or Unisom.** The sleep medicine is likely to make the rivastigmine less effective and increase the risk of dangerous side effects like severe confusion and falls.\\n\\nIf you or someone you care for is taking rivastigmine and having trouble sleeping, it is essential to **speak with their doctor or a pharmacist**. They can recommend safer alternatives, which may include non-drug strategies (like improving sleep hygiene) or a different type of medication that does not conflict with rivastigmine.\"}, \"finish_reason\": \"stop\"}], \"usage\": {\"completion_tokens\": 2332, \"prompt_tokens\": 29, \"total_tokens\": 2361, \"prompt_tokens_details\": {\"cached_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 1581}}}}, \"final_result\": {\"id\": \"\", \"object\": \"chat.completion\", \"created\": 1770218849, \"model\": \"gemini-2.5-pro\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"Of course. Here is a breakdown of the benefits and side effects for each drug, followed by an explanation of their interaction, based on the consumer health question.\\n\\n***\\n\\n### 1. Rivastigmine (e.g., brand name Exelon)\\n\\nThis is a prescription medication used to treat dementia.\\n\\n**Benefits:**\\n* **Improves Cognitive Function:** It is used to treat mild to moderate dementia associated with Alzheimer's disease and Parkinson's disease.\\n* **Slows Symptom Progression:** It can help improve or stabilize functions like memory, thinking, and language. 
It does not cure the disease but can help manage the symptoms.\\n* **Mechanism of Action:** It works by increasing the levels of a chemical in the brain called acetylcholine, which is important for memory and thought processes.\\n\\n**Common Side Effects:**\\n* Nausea, vomiting, diarrhea\\n* Loss of appetite and weight loss\\n* Dizziness, headache, or fatigue\\n* Stomach upset or pain\\n\\n### 2. Common OTC (Over-the-Counter) Sleep Medicines\\n\\nThis category most often includes drugs containing **diphenhydramine** (found in Benadryl, ZzzQuil, Advil PM) or **doxylamine** (found in Unisom SleepTabs).\\n\\n**Benefits:**\\n* **Induces Drowsiness:** Helps individuals fall asleep and manage occasional insomnia.\\n* **Widely Accessible:** Available without a prescription for short-term use.\\n\\n**Common Side Effects:**\\n* Next-day drowsiness or a \\\"hangover\\\" effect\\n* Dizziness and confusion (especially in older adults)\\n* Dry mouth, blurred vision\\n* Constipation and difficulty urinating\\n* **Mechanism of Action:** These drugs have strong **anticholinergic** properties, which means they work by **blocking** the action of acetylcholine. This blocking effect causes drowsiness, but also the other side effects listed.\\n\\n---\\n\\n### The Interaction Between Rivastigmine and OTC Sleep Medicine\\n\\nThis is a significant interaction that should be avoided. The two types of drugs have opposite effects on the brain, creating a direct conflict.\\n\\n**1. Reduced Effectiveness of Rivastigmine:**\\n* Rivastigmine works by **increasing** acetylcholine to help with dementia symptoms.\\n* OTC sleep aids work by **blocking** acetylcholine.\\n* **Result:** Taking an OTC sleep aid can directly counteract the intended benefit of rivastigmine, making the dementia medication less effective.\\n\\n**2. Increased Risk of Negative Side Effects:**\\n* Both medications can cause dizziness and confusion on their own. 
Taking them together significantly increases the risk and severity of these side effects.\\n* For an older adult, this heightened confusion and dizziness can lead to a much higher risk of **falls, injury, and accidents**.\\n* The anticholinergic side effects from the sleep aid (dry mouth, constipation, blurry vision) can become more pronounced and problematic.\\n\\n### Summary and Recommendation\\n\\n**Do not combine rivastigmine with common OTC sleep medicines like ZzzQuil, Benadryl, or Unisom.** The sleep medicine is likely to make the rivastigmine less effective and increase the risk of dangerous side effects like severe confusion and falls.\\n\\nIf you or someone you care for is taking rivastigmine and having trouble sleeping, it is essential to **speak with their doctor or a pharmacist**. They can recommend safer alternatives, which may include non-drug strategies (like improving sleep hygiene) or a different type of medication that does not conflict with rivastigmine.\"}, \"finish_reason\": \"stop\"}], \"usage\": {\"completion_tokens\": 2332, \"prompt_tokens\": 29, \"total_tokens\": 2361, \"prompt_tokens_details\": {\"cached_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 1581}}}}25.2656732026-02-04 15:37:07.9352152026-02-04 15:37:07.935219
fbe9fd3eaa1945bfa97204593045861483d31ab3552644d2bccddc73d9c9f30d1{\"request_id\": \"5843d2e6-ff62-9ede-aaff-f759bdac5bab\", \"intermediate_results\": {\"templating\": [{\"content\": \"List the benefits and side effects of the drug in the following consumer health question: how does valium affect the brain.\", \"role\": \"user\"}], \"llm\": {\"id\": \"\", \"object\": \"chat.completion\", \"created\": 1770218857, \"model\": \"gemini-2.5-pro\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"Of course. Here is a breakdown of the benefits and side effects of Valium (diazepam) in the context of how it affects the brain, presented in a clear, consumer-friendly format.\\n\\n**IMPORTANT:** This information is for educational purposes only and is not a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or another qualified health provider with any questions you may have regarding a medical condition or medication.\\n\\n***\\n\\n### How Valium (Diazepam) Affects the Brain\\n\\nValium belongs to a class of drugs called benzodiazepines. Its primary effect on the brain is to enhance the activity of a neurotransmitter called **GABA (gamma-aminobutyric acid)**.\\n\\nThink of GABA as the brain's main **calming agent or \\\"brake pedal.\\\"** It works by slowing down nerve cell activity. Valium essentially makes this \\\"brake pedal\\\" more effective, leading to a widespread calming and slowing of the central nervous system. 
This action is what produces both its therapeutic benefits and its side effects.\\n\\n---\\n\\n### Benefits (Therapeutic Uses)\\n\\nBy slowing down brain activity, Valium is prescribed to treat several conditions:\\n\\n* **Anxiety Disorders:** It effectively reduces the physical and emotional symptoms of anxiety, such as excessive worry, tension, panic attacks, and fear.\\n* **Muscle Spasms:** It acts as a potent muscle relaxant, helpful for conditions caused by muscle spasms, such as chronic back pain or certain neurological disorders like cerebral palsy.\\n* **Seizure Disorders:** By reducing excessive electrical activity in the brain, it can be used to stop an active seizure (status epilepticus) or as an add-on therapy to prevent seizures.\\n* **Alcohol Withdrawal:** It is used to manage and prevent severe symptoms of alcohol withdrawal, including agitation, tremors, and life-threatening seizures.\\n* **Sedation:** It is often administered before surgery or other medical procedures to reduce anxiety and induce sedation.\\n\\n---\\n\\n### Side Effects & Risks\\n\\nThe same brain-slowing mechanism that provides benefits can also cause a range of side effects.\\n\\n#### Common Side Effects:\\nThese are often related to the sedative effects of the drug.\\n\\n* **Drowsiness, Fatigue, and Dizziness:** Feeling tired, sleepy, or unsteady on your feet.\\n* **Impaired Coordination and Unsteadiness:** Difficulty with balance and fine motor skills, which increases the risk of falls. 
This is why driving or operating heavy machinery is strongly discouraged.\\n* **Confusion and Memory Problems:** Difficulty concentrating and, notably, **anterograde amnesia**, which is trouble forming new memories while the drug is active.\\n* **Muscle Weakness:** A feeling of general weakness, separate from the therapeutic muscle relaxation.\\n* **Slurred Speech.**\\n\\n#### Serious Side Effects (Require Immediate Medical Attention):\\n\\n* **Severe Respiratory Depression:** Dangerously slow or shallow breathing, especially when mixed with alcohol, opioids, or other depressants. This can be fatal.\\n* **Paradoxical Reactions:** Instead of calming, the drug can cause the opposite effect, such as increased agitation, aggression, hallucinations, or rage.\\n* **Severe Drowsiness or Unresponsiveness:** Difficulty waking up.\\n* **Suicidal Thoughts or Worsening Depression.**\\n\\n#### Risks of Long-Term Use:\\n\\n* **Tolerance:** Over time, the body adapts to the drug, and higher doses are needed to achieve the same effect.\\n* **Dependence:** The brain and body become reliant on Valium to function normally. Stopping the drug abruptly after long-term use can lead to a severe and potentially dangerous withdrawal syndrome.\\n* **Withdrawal Symptoms:** Can include rebound anxiety, insomnia, tremors, sweating, agitation, and in severe cases, seizures. 
Withdrawal should always be managed by a doctor through a gradual tapering schedule.\\n* **Addiction (Substance Use Disorder):** Characterized by compulsive drug-seeking and use despite harmful consequences.\"}, \"finish_reason\": \"stop\"}], \"usage\": {\"completion_tokens\": 2397, \"prompt_tokens\": 24, \"total_tokens\": 2421, \"prompt_tokens_details\": {\"cached_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 1562}}}}, \"final_result\": {\"id\": \"\", \"object\": \"chat.completion\", \"created\": 1770218857, \"model\": \"gemini-2.5-pro\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"Of course. Here is a breakdown of the benefits and side effects of Valium (diazepam) in the context of how it affects the brain, presented in a clear, consumer-friendly format.\\n\\n**IMPORTANT:** This information is for educational purposes only and is not a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or another qualified health provider with any questions you may have regarding a medical condition or medication.\\n\\n***\\n\\n### How Valium (Diazepam) Affects the Brain\\n\\nValium belongs to a class of drugs called benzodiazepines. Its primary effect on the brain is to enhance the activity of a neurotransmitter called **GABA (gamma-aminobutyric acid)**.\\n\\nThink of GABA as the brain's main **calming agent or \\\"brake pedal.\\\"** It works by slowing down nerve cell activity. Valium essentially makes this \\\"brake pedal\\\" more effective, leading to a widespread calming and slowing of the central nervous system. 
This action is what produces both its therapeutic benefits and its side effects.\\n\\n---\\n\\n### Benefits (Therapeutic Uses)\\n\\nBy slowing down brain activity, Valium is prescribed to treat several conditions:\\n\\n* **Anxiety Disorders:** It effectively reduces the physical and emotional symptoms of anxiety, such as excessive worry, tension, panic attacks, and fear.\\n* **Muscle Spasms:** It acts as a potent muscle relaxant, helpful for conditions caused by muscle spasms, such as chronic back pain or certain neurological disorders like cerebral palsy.\\n* **Seizure Disorders:** By reducing excessive electrical activity in the brain, it can be used to stop an active seizure (status epilepticus) or as an add-on therapy to prevent seizures.\\n* **Alcohol Withdrawal:** It is used to manage and prevent severe symptoms of alcohol withdrawal, including agitation, tremors, and life-threatening seizures.\\n* **Sedation:** It is often administered before surgery or other medical procedures to reduce anxiety and induce sedation.\\n\\n---\\n\\n### Side Effects & Risks\\n\\nThe same brain-slowing mechanism that provides benefits can also cause a range of side effects.\\n\\n#### Common Side Effects:\\nThese are often related to the sedative effects of the drug.\\n\\n* **Drowsiness, Fatigue, and Dizziness:** Feeling tired, sleepy, or unsteady on your feet.\\n* **Impaired Coordination and Unsteadiness:** Difficulty with balance and fine motor skills, which increases the risk of falls. 
This is why driving or operating heavy machinery is strongly discouraged.\\n* **Confusion and Memory Problems:** Difficulty concentrating and, notably, **anterograde amnesia**, which is trouble forming new memories while the drug is active.\\n* **Muscle Weakness:** A feeling of general weakness, separate from the therapeutic muscle relaxation.\\n* **Slurred Speech.**\\n\\n#### Serious Side Effects (Require Immediate Medical Attention):\\n\\n* **Severe Respiratory Depression:** Dangerously slow or shallow breathing, especially when mixed with alcohol, opioids, or other depressants. This can be fatal.\\n* **Paradoxical Reactions:** Instead of calming, the drug can cause the opposite effect, such as increased agitation, aggression, hallucinations, or rage.\\n* **Severe Drowsiness or Unresponsiveness:** Difficulty waking up.\\n* **Suicidal Thoughts or Worsening Depression.**\\n\\n#### Risks of Long-Term Use:\\n\\n* **Tolerance:** Over time, the body adapts to the drug, and higher doses are needed to achieve the same effect.\\n* **Dependence:** The brain and body become reliant on Valium to function normally. Stopping the drug abruptly after long-term use can lead to a severe and potentially dangerous withdrawal syndrome.\\n* **Withdrawal Symptoms:** Can include rebound anxiety, insomnia, tremors, sweating, agitation, and in severe cases, seizures. Withdrawal should always be managed by a doctor through a gradual tapering schedule.\\n* **Addiction (Substance Use Disorder):** Characterized by compulsive drug-seeking and use despite harmful consequences.\"}, \"finish_reason\": \"stop\"}], \"usage\": {\"completion_tokens\": 2397, \"prompt_tokens\": 24, \"total_tokens\": 2421, \"prompt_tokens_details\": {\"cached_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 1562}}}}33.0435612026-02-04 15:37:07.9352202026-02-04 15:37:07.935220
c64e5a8aef454b1f864d90611e2c1a9b83d31ab3552644d2bccddc73d9c9f30d1{\"request_id\": \"16cb6606-07ba-9462-88cd-62e5fe83f39a\", \"intermediate_results\": {\"templating\": [{\"content\": \"List the benefits and side effects of the drug in the following consumer health question: what is morphine.\", \"role\": \"user\"}], \"llm\": {\"id\": \"\", \"object\": \"chat.completion\", \"created\": 1770218856, \"model\": \"gemini-2.5-pro\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"Based on the consumer health question \\\"what is morphine,\\\" here is a list of the drug's benefits and side effects.\\n\\n***\\n\\n### **Benefits of Morphine**\\n\\nMorphine is a powerful opioid pain medication prescribed for the management of severe pain. Its primary benefits are related to its potent analgesic (pain-relieving) effects.\\n\\n* **Management of Severe Acute Pain:** It is highly effective for treating intense, short-term pain, such as pain experienced after major surgery, a serious injury (like a broken bone or severe burn), or a heart attack.\\n* **Cancer-Related Pain:** Morphine is a cornerstone of pain management for patients with moderate to severe pain caused by cancer or its treatments.\\n* **Palliative and End-of-Life Care:** It is used to provide comfort to patients with terminal illnesses by relieving persistent pain and the sensation of shortness of breath (dyspnea).\\n* **Chronic Pain Management:** In some specific and carefully monitored cases, it may be prescribed for severe, long-term chronic pain when other treatments have failed.\\n\\n### **Side Effects of Morphine**\\n\\nMorphine has a significant risk of side effects, ranging from common and manageable to severe and life-threatening.\\n\\n#### **Common Side Effects:**\\n\\n* **Drowsiness and Sedation:** Feeling sleepy, tired, or mentally \\\"foggy.\\\"\\n* **Constipation:** This is a very common and often persistent side effect.\\n* **Nausea and Vomiting:** Especially common 
when first starting the medication.\\n* **Dizziness and Lightheadedness:** Can increase the risk of falls, particularly in the elderly.\\n* **Confusion:** Difficulty thinking clearly or remembering things.\\n* **Itching or Sweating.**\\n\\n#### **Serious and Potentially Life-Threatening Side Effects:**\\n\\n**Seek immediate medical attention if you experience any of the following:**\\n\\n* **Respiratory Depression (Slowed or Stopped Breathing):** This is the most dangerous side effect of morphine and can be fatal. Signs include shallow, difficult, or very slow breathing.\\n* **Extreme Drowsiness or Inability to Wake Up:** Progressing from sedation to unresponsiveness.\\n* **Low Blood Pressure and Slow Heart Rate:** Causing faintness, dizziness, or confusion.\\n* **Severe Confusion, Hallucinations, or Agitation.**\\n* **Allergic Reaction:** Signs include rash, hives, swelling of the face, lips, or throat, and difficulty breathing.\\n\\n#### **Long-Term Risks and Other Important Considerations:**\\n\\n* **Addiction, Dependence, and Withdrawal:** Morphine has a high potential for abuse and addiction (the compulsive use of a drug despite harmful consequences). 
Physical dependence can also occur, meaning the body adapts to the drug and will experience withdrawal symptoms (e.g., anxiety, muscle aches, nausea, diarrhea) if the medication is stopped suddenly.\\n* **Tolerance:** Over time, the body may require higher doses of morphine to achieve the same level of pain relief.\\n* **Risk of Overdose:** Taking too much morphine, or combining it with other substances like **alcohol, benzodiazepines (e.g., Xanax, Valium), or other sedatives**, can lead to a fatal overdose, primarily by stopping breathing.\"}, \"finish_reason\": \"stop\"}], \"usage\": {\"completion_tokens\": 2131, \"prompt_tokens\": 20, \"total_tokens\": 2151, \"prompt_tokens_details\": {\"cached_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 1445}}}}, \"final_result\": {\"id\": \"\", \"object\": \"chat.completion\", \"created\": 1770218856, \"model\": \"gemini-2.5-pro\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"Based on the consumer health question \\\"what is morphine,\\\" here is a list of the drug's benefits and side effects.\\n\\n***\\n\\n### **Benefits of Morphine**\\n\\nMorphine is a powerful opioid pain medication prescribed for the management of severe pain. 
Its primary benefits are related to its potent analgesic (pain-relieving) effects.\\n\\n* **Management of Severe Acute Pain:** It is highly effective for treating intense, short-term pain, such as pain experienced after major surgery, a serious injury (like a broken bone or severe burn), or a heart attack.\\n* **Cancer-Related Pain:** Morphine is a cornerstone of pain management for patients with moderate to severe pain caused by cancer or its treatments.\\n* **Palliative and End-of-Life Care:** It is used to provide comfort to patients with terminal illnesses by relieving persistent pain and the sensation of shortness of breath (dyspnea).\\n* **Chronic Pain Management:** In some specific and carefully monitored cases, it may be prescribed for severe, long-term chronic pain when other treatments have failed.\\n\\n### **Side Effects of Morphine**\\n\\nMorphine has a significant risk of side effects, ranging from common and manageable to severe and life-threatening.\\n\\n#### **Common Side Effects:**\\n\\n* **Drowsiness and Sedation:** Feeling sleepy, tired, or mentally \\\"foggy.\\\"\\n* **Constipation:** This is a very common and often persistent side effect.\\n* **Nausea and Vomiting:** Especially common when first starting the medication.\\n* **Dizziness and Lightheadedness:** Can increase the risk of falls, particularly in the elderly.\\n* **Confusion:** Difficulty thinking clearly or remembering things.\\n* **Itching or Sweating.**\\n\\n#### **Serious and Potentially Life-Threatening Side Effects:**\\n\\n**Seek immediate medical attention if you experience any of the following:**\\n\\n* **Respiratory Depression (Slowed or Stopped Breathing):** This is the most dangerous side effect of morphine and can be fatal. 
Signs include shallow, difficult, or very slow breathing.\\n* **Extreme Drowsiness or Inability to Wake Up:** Progressing from sedation to unresponsiveness.\\n* **Low Blood Pressure and Slow Heart Rate:** Causing faintness, dizziness, or confusion.\\n* **Severe Confusion, Hallucinations, or Agitation.**\\n* **Allergic Reaction:** Signs include rash, hives, swelling of the face, lips, or throat, and difficulty breathing.\\n\\n#### **Long-Term Risks and Other Important Considerations:**\\n\\n* **Addiction, Dependence, and Withdrawal:** Morphine has a high potential for abuse and addiction (the compulsive use of a drug despite harmful consequences). Physical dependence can also occur, meaning the body adapts to the drug and will experience withdrawal symptoms (e.g., anxiety, muscle aches, nausea, diarrhea) if the medication is stopped suddenly.\\n* **Tolerance:** Over time, the body may require higher doses of morphine to achieve the same level of pain relief.\\n* **Risk of Overdose:** Taking too much morphine, or combining it with other substances like **alcohol, benzodiazepines (e.g., Xanax, Valium), or other sedatives**, can lead to a fatal overdose, primarily by stopping breathing.\"}, \"finish_reason\": \"stop\"}], \"usage\": {\"completion_tokens\": 2131, \"prompt_tokens\": 20, \"total_tokens\": 2151, \"prompt_tokens_details\": {\"cached_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 1445}}}}32.1152832026-02-04 15:37:07.9352212026-02-04 15:37:07.935223
97a2a28dcc9b49029b3101c342461c8f83d31ab3552644d2bccddc73d9c9f30d1{\"request_id\": \"c0c31906-4033-9321-affd-947bcb4bdb5b\", \"intermediate_results\": {\"templating\": [{\"content\": \"List the benefits and side effects of the drug in the following consumer health question: what are the milligrams for oxycodone e.\", \"role\": \"user\"}], \"llm\": {\"id\": \"\", \"object\": \"chat.completion\", \"created\": 1770218864, \"model\": \"gemini-2.5-pro\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"Of course. It is extremely important to understand that **only a qualified healthcare provider can determine the correct and safe dosage of oxycodone for an individual.** The information below is for educational purposes only.\\n\\nBased on your question about oxycodone, here is a summary of its medical uses (benefits), common side effects, and important warnings.\\n\\n***\\n\\n### **Medical Use (Benefit) of Oxycodone**\\n\\nThe primary benefit of oxycodone is to manage pain that is severe enough to require an opioid pain reliever and when other pain treatments (like non-opioid medications or combination products) have not worked or cannot be tolerated.\\n\\n* **How it works:** Oxycodone is an opioid analgesic. It works by changing the way the brain and nervous system respond to pain signals.\\n* **Types:** It comes in two main forms:\\n * **Immediate-release (IR):** Used for managing acute or \\\"breakthrough\\\" pain. Dosages often start at **5 mg to 15 mg** and are taken every 4 to 6 hours as needed.\\n * **Extended-release (ER):** Used for managing chronic, around-the-clock pain. These tablets are taken once or twice a day and should never be crushed, chewed, or dissolved. Dosages are higher (e.g., **10 mg, 20 mg, 40 mg, 80 mg**) because the medication is released slowly over time.\\n\\n### **Common Side Effects**\\n\\nThese are side effects that many people experience. 
While common, you should still discuss them with your doctor.\\n\\n* Drowsiness, dizziness, or feeling lightheaded\\n* Nausea and vomiting\\n* Constipation (very common with long-term use)\\n* Headache\\n* Dry mouth\\n* Sweating\\n* Itching\\n\\n### **Serious Side Effects and Important Warnings**\\n\\nThese side effects can be dangerous and require immediate medical attention. **Call 911 if you experience signs of an overdose.**\\n\\n* **Slowed or Shallow Breathing (Respiratory Depression):** This is the most dangerous side effect and can be fatal.\\n* **Extreme Drowsiness or Fainting:** Difficulty waking up or feeling faint.\\n* **Confusion, Hallucinations, or unusual thoughts.**\\n* **Signs of an Allergic Reaction:** Rash, hives, swelling of the face, lips, or throat, and difficulty breathing.\\n* **Seizures.**\\n\\n#### **BOXED WARNING: Addiction, Abuse, and Overdose**\\n\\nOxycodone carries a high risk of addiction, physical dependence, and overdose, which can lead to death.\\n\\n* **Addiction and Misuse:** Oxycodone has a high potential for abuse and can lead to psychological and physical addiction, even when taken as prescribed.\\n* **Physical Dependence and Withdrawal:** Your body can become dependent on the drug, meaning you will experience withdrawal symptoms (like anxiety, muscle aches, sweating, nausea) if you stop taking it suddenly. Your doctor will help you taper off the medication slowly to avoid this.\\n* **Overdose:** Taking too much oxycodone, or combining it with alcohol or other central nervous system depressants (like benzodiazepines, sedatives, or other opioids), can cause a fatal overdose.\\n\\n> **Important Medical Disclaimer:**\\n> This information is for educational purposes and is not a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your doctor or another qualified health provider with any questions you may have regarding a medical condition or medication. 
Never use oxycodone that was not prescribed for you, and never share your prescription with others.\\n>\\n> If you or someone you know is struggling with substance use, please contact the **SAMHSA National Helpline at 1-800-662-4357**.\"}, \"finish_reason\": \"stop\"}], \"usage\": {\"completion_tokens\": 2349, \"prompt_tokens\": 25, \"total_tokens\": 2374, \"prompt_tokens_details\": {\"cached_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 1544}}}}, \"final_result\": {\"id\": \"\", \"object\": \"chat.completion\", \"created\": 1770218864, \"model\": \"gemini-2.5-pro\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"Of course. It is extremely important to understand that **only a qualified healthcare provider can determine the correct and safe dosage of oxycodone for an individual.** The information below is for educational purposes only.\\n\\nBased on your question about oxycodone, here is a summary of its medical uses (benefits), common side effects, and important warnings.\\n\\n***\\n\\n### **Medical Use (Benefit) of Oxycodone**\\n\\nThe primary benefit of oxycodone is to manage pain that is severe enough to require an opioid pain reliever and when other pain treatments (like non-opioid medications or combination products) have not worked or cannot be tolerated.\\n\\n* **How it works:** Oxycodone is an opioid analgesic. It works by changing the way the brain and nervous system respond to pain signals.\\n* **Types:** It comes in two main forms:\\n * **Immediate-release (IR):** Used for managing acute or \\\"breakthrough\\\" pain. Dosages often start at **5 mg to 15 mg** and are taken every 4 to 6 hours as needed.\\n * **Extended-release (ER):** Used for managing chronic, around-the-clock pain. These tablets are taken once or twice a day and should never be crushed, chewed, or dissolved. 
Dosages are higher (e.g., **10 mg, 20 mg, 40 mg, 80 mg**) because the medication is released slowly over time.\\n\\n### **Common Side Effects**\\n\\nThese are side effects that many people experience. While common, you should still discuss them with your doctor.\\n\\n* Drowsiness, dizziness, or feeling lightheaded\\n* Nausea and vomiting\\n* Constipation (very common with long-term use)\\n* Headache\\n* Dry mouth\\n* Sweating\\n* Itching\\n\\n### **Serious Side Effects and Important Warnings**\\n\\nThese side effects can be dangerous and require immediate medical attention. **Call 911 if you experience signs of an overdose.**\\n\\n* **Slowed or Shallow Breathing (Respiratory Depression):** This is the most dangerous side effect and can be fatal.\\n* **Extreme Drowsiness or Fainting:** Difficulty waking up or feeling faint.\\n* **Confusion, Hallucinations, or unusual thoughts.**\\n* **Signs of an Allergic Reaction:** Rash, hives, swelling of the face, lips, or throat, and difficulty breathing.\\n* **Seizures.**\\n\\n#### **BOXED WARNING: Addiction, Abuse, and Overdose**\\n\\nOxycodone carries a high risk of addiction, physical dependence, and overdose, which can lead to death.\\n\\n* **Addiction and Misuse:** Oxycodone has a high potential for abuse and can lead to psychological and physical addiction, even when taken as prescribed.\\n* **Physical Dependence and Withdrawal:** Your body can become dependent on the drug, meaning you will experience withdrawal symptoms (like anxiety, muscle aches, sweating, nausea) if you stop taking it suddenly. 
Your doctor will help you taper off the medication slowly to avoid this.\\n* **Overdose:** Taking too much oxycodone, or combining it with alcohol or other central nervous system depressants (like benzodiazepines, sedatives, or other opioids), can cause a fatal overdose.\\n\\n> **Important Medical Disclaimer:**\\n> This information is for educational purposes and is not a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your doctor or another qualified health provider with any questions you may have regarding a medical condition or medication. Never use oxycodone that was not prescribed for you, and never share your prescription with others.\\n>\\n> If you or someone you know is struggling with substance use, please contact the **SAMHSA National Helpline at 1-800-662-4357**.\"}, \"finish_reason\": \"stop\"}], \"usage\": {\"completion_tokens\": 2349, \"prompt_tokens\": 25, \"total_tokens\": 2374, \"prompt_tokens_details\": {\"cached_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 1544}}}}40.1377342026-02-04 15:37:07.9352242026-02-04 15:37:07.935224
a14b6ad38ea7484b9b48a486a6d1b7d783d31ab3552644d2bccddc73d9c9f30d1{\"request_id\": \"cf6c8a62-85ee-9d9a-9fab-aad2cce5a825\", \"intermediate_results\": {\"templating\": [{\"content\": \"List the benefits and side effects of the drug in the following consumer health question: 81% aspirin contain resin and shellac in it. ?.\", \"role\": \"user\"}], \"llm\": {\"id\": \"\", \"object\": \"chat.completion\", \"created\": 1770218867, \"model\": \"gemini-2.5-pro\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"Of course. It's important to clarify the user's question first, as it contains a common misunderstanding.\\n\\nThe question \\\"81% aspirin contain resin and shellac in it. ?.\\\" likely has two points of confusion:\\n\\n1. **\\\"81%\\\" is almost certainly a typo for \\\"81 mg\\\"**. 81 mg is the standard \\\"low-dose\\\" or \\\"baby aspirin\\\" dosage used for cardiovascular protection.\\n2. **Resin and Shellac** are inactive ingredients used to create the \\\"enteric coating\\\" on some aspirin tablets. This special coating is designed to prevent the aspirin from dissolving in the stomach, allowing it to pass into the small intestine before being absorbed. This helps reduce stomach irritation and bleeding.\\n\\nBased on this clarification, the drug in question is **Enteric-Coated 81 mg Aspirin**. Here are its benefits and side effects.\\n\\n---\\n\\n### Benefits of Low-Dose (81 mg) Aspirin\\n\\nLow-dose aspirin is primarily used for its antiplatelet effects, meaning it helps prevent blood clots from forming. Its main benefits are related to cardiovascular health, typically under a doctor's recommendation.\\n\\n* **Prevention of Heart Attack:** For individuals who have already had a heart attack, daily low-dose aspirin can significantly reduce the risk of having a second one. 
It is also used in people with a very high risk of a first heart attack.\\n* **Prevention of Ischemic Stroke:** By preventing blood clots, aspirin can lower the risk of an ischemic stroke (a stroke caused by a blockage in a blood vessel supplying the brain). It is often prescribed to patients who have previously had this type of stroke or a transient ischemic attack (TIA or \\\"mini-stroke\\\").\\n* **Management of Cardiovascular Disease:** It is a cornerstone of therapy for people with known coronary artery disease, angina (chest pain from heart disease), or who have had procedures like bypass surgery or stent placement.\\n* **Potential Cancer Prevention:** Some studies suggest that long-term daily aspirin use may lower the risk of certain cancers, particularly colorectal cancer. However, this is not its primary approved use and should be discussed with a doctor.\\n\\n### Side Effects of Low-Dose (81 mg) Aspirin\\n\\nEven at a low dose, aspirin is a powerful medication with potential risks and side effects. The enteric coating helps reduce some risks but does not eliminate them.\\n\\n#### **Common Side Effects:**\\n\\n* **Stomach Upset:** Heartburn, indigestion, or nausea.\\n* **Easy Bruising or Minor Bleeding:** Since aspirin thins the blood, you may notice more bruising or that small cuts take longer to stop bleeding.\\n\\n#### **Serious Side Effects (Require Immediate Medical Attention):**\\n\\n* **Gastrointestinal (GI) Bleeding:** This is the most significant risk. The enteric coating reduces but does not eliminate this risk. **Signs include:**\\n * Black, bloody, or tarry stools.\\n * Vomiting blood or a substance that looks like coffee grounds.\\n * Severe stomach pain that doesn't go away.\\n* **Hemorrhagic Stroke (Bleeding in the Brain):** While aspirin helps prevent strokes caused by clots, it can increase the risk of strokes caused by bleeding. 
**Signs include:**\\n * Sudden severe headache, confusion, or vision problems.\\n * Numbness or weakness, especially on one side of the body.\\n* **Allergic Reaction:** Although rare, some people are allergic to aspirin. **Signs include:**\\n * Hives, rash, or itching.\\n * Swelling of the face, lips, or tongue.\\n * Wheezing or difficulty breathing.\\n* **Tinnitus:** Ringing or buzzing in the ears, which can be a sign of taking too much aspirin.\\n\\n### **Important Warnings**\\n\\n* **Reye's Syndrome:** Aspirin should **NEVER** be given to children or teenagers recovering from a viral infection (like the flu or chickenpox) as it can cause Reye's syndrome, a rare but life-threatening condition that causes swelling in the liver and brain.\\n* **Drug Interactions:** Aspirin can interact with other medications, especially other blood thinners (like warfarin, clopidogrel), NSAIDs (like ibuprofen, naproxen), and some antidepressants.\\n* **Alcohol:** Drinking alcohol while taking daily aspirin can increase your risk of stomach bleeding.\\n\\n> **Disclaimer:** This information is for educational purposes only and is not a substitute for professional medical advice. The decision to start, stop, or continue taking aspirin should only be made in consultation with a qualified healthcare provider who can assess your individual health risks and benefits.\"}, \"finish_reason\": \"stop\"}], \"usage\": {\"completion_tokens\": 2362, \"prompt_tokens\": 30, \"total_tokens\": 2392, \"prompt_tokens_details\": {\"cached_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 1367}}}}, \"final_result\": {\"id\": \"\", \"object\": \"chat.completion\", \"created\": 1770218867, \"model\": \"gemini-2.5-pro\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"Of course. It's important to clarify the user's question first, as it contains a common misunderstanding.\\n\\nThe question \\\"81% aspirin contain resin and shellac in it. 
?.\\\" likely has two points of confusion:\\n\\n1. **\\\"81%\\\" is almost certainly a typo for \\\"81 mg\\\"**. 81 mg is the standard \\\"low-dose\\\" or \\\"baby aspirin\\\" dosage used for cardiovascular protection.\\n2. **Resin and Shellac** are inactive ingredients used to create the \\\"enteric coating\\\" on some aspirin tablets. This special coating is designed to prevent the aspirin from dissolving in the stomach, allowing it to pass into the small intestine before being absorbed. This helps reduce stomach irritation and bleeding.\\n\\nBased on this clarification, the drug in question is **Enteric-Coated 81 mg Aspirin**. Here are its benefits and side effects.\\n\\n---\\n\\n### Benefits of Low-Dose (81 mg) Aspirin\\n\\nLow-dose aspirin is primarily used for its antiplatelet effects, meaning it helps prevent blood clots from forming. Its main benefits are related to cardiovascular health, typically under a doctor's recommendation.\\n\\n* **Prevention of Heart Attack:** For individuals who have already had a heart attack, daily low-dose aspirin can significantly reduce the risk of having a second one. It is also used in people with a very high risk of a first heart attack.\\n* **Prevention of Ischemic Stroke:** By preventing blood clots, aspirin can lower the risk of an ischemic stroke (a stroke caused by a blockage in a blood vessel supplying the brain). It is often prescribed to patients who have previously had this type of stroke or a transient ischemic attack (TIA or \\\"mini-stroke\\\").\\n* **Management of Cardiovascular Disease:** It is a cornerstone of therapy for people with known coronary artery disease, angina (chest pain from heart disease), or who have had procedures like bypass surgery or stent placement.\\n* **Potential Cancer Prevention:** Some studies suggest that long-term daily aspirin use may lower the risk of certain cancers, particularly colorectal cancer. 
However, this is not its primary approved use and should be discussed with a doctor.\\n\\n### Side Effects of Low-Dose (81 mg) Aspirin\\n\\nEven at a low dose, aspirin is a powerful medication with potential risks and side effects. The enteric coating helps reduce some risks but does not eliminate them.\\n\\n#### **Common Side Effects:**\\n\\n* **Stomach Upset:** Heartburn, indigestion, or nausea.\\n* **Easy Bruising or Minor Bleeding:** Since aspirin thins the blood, you may notice more bruising or that small cuts take longer to stop bleeding.\\n\\n#### **Serious Side Effects (Require Immediate Medical Attention):**\\n\\n* **Gastrointestinal (GI) Bleeding:** This is the most significant risk. The enteric coating reduces but does not eliminate this risk. **Signs include:**\\n * Black, bloody, or tarry stools.\\n * Vomiting blood or a substance that looks like coffee grounds.\\n * Severe stomach pain that doesn't go away.\\n* **Hemorrhagic Stroke (Bleeding in the Brain):** While aspirin helps prevent strokes caused by clots, it can increase the risk of strokes caused by bleeding. **Signs include:**\\n * Sudden severe headache, confusion, or vision problems.\\n * Numbness or weakness, especially on one side of the body.\\n* **Allergic Reaction:** Although rare, some people are allergic to aspirin. 
**Signs include:**\\n * Hives, rash, or itching.\\n * Swelling of the face, lips, or tongue.\\n * Wheezing or difficulty breathing.\\n* **Tinnitus:** Ringing or buzzing in the ears, which can be a sign of taking too much aspirin.\\n\\n### **Important Warnings**\\n\\n* **Reye's Syndrome:** Aspirin should **NEVER** be given to children or teenagers recovering from a viral infection (like the flu or chickenpox) as it can cause Reye's syndrome, a rare but life-threatening condition that causes swelling in the liver and brain.\\n* **Drug Interactions:** Aspirin can interact with other medications, especially other blood thinners (like warfarin, clopidogrel), NSAIDs (like ibuprofen, naproxen), and some antidepressants.\\n* **Alcohol:** Drinking alcohol while taking daily aspirin can increase your risk of stomach bleeding.\\n\\n> **Disclaimer:** This information is for educational purposes only and is not a substitute for professional medical advice. The decision to start, stop, or continue taking aspirin should only be made in consultation with a qualified healthcare provider who can assess your individual health risks and benefits.\"}, \"finish_reason\": \"stop\"}], \"usage\": {\"completion_tokens\": 2362, \"prompt_tokens\": 30, \"total_tokens\": 2392, \"prompt_tokens_details\": {\"cached_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 1367}}}}43.8727362026-02-04 15:37:07.9352252026-02-04 15:37:07.935225
\n", + "
\n", + " \n", + "
\n", + "

Table: evaluation_result

\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
submission_idrun_idrepetition_countmetricaggregating_valuemetric_resulterrorcreated_atupdated_at
7516fcd4de96441a8ae9e8f68bdb3f5c83d31ab3552644d2bccddc73d9c9f30d1\"Pointwise Conciseness\"4.0{\"explanation\": \"The response is thorough and clearly structured, providing the benefits and side effects of both rivastigmine and common OTC sleep medicines, followed by a detailed explanation of their interaction. However, the response includes some redundant information (e.g., repeating the mechanism of action for both drugs, restating the risks in multiple sections) and uses more words than necessary to convey the essential points. While the information is accurate and helpful, the response could be made more concise by removing some repetition and condensing explanations. Therefore, it is mostly concise but not highly concise.\", \"rating\": 4}None2026-02-04 15:51:18.6698132026-02-04 15:51:18.669817
fbe9fd3eaa1945bfa97204593045861483d31ab3552644d2bccddc73d9c9f30d1\"Pointwise Conciseness\"4.0{\"explanation\": \"The response provides a thorough and well-organized explanation of how Valium affects the brain, including both benefits and side effects. It avoids excessive repetition and presents information in a clear, consumer-friendly manner. However, there is some introductory and cautionary language (e.g., the medical disclaimer and the 'clear, consumer-friendly format' statement) that, while helpful, is not strictly necessary for conciseness. The main content is detailed but not overly verbose, and each point is relevant to the user's question. There is some slight elaboration (such as analogies and expanded explanations) that could be trimmed for maximum conciseness, but overall, the response is mostly concise and covers the essential information without significant superfluous content.\", \"rating\": 4}None2026-02-04 15:51:18.6698182026-02-04 15:51:18.669818
c64e5a8aef454b1f864d90611e2c1a9b83d31ab3552644d2bccddc73d9c9f30d1\"Pointwise Conciseness\"4.0{\"explanation\": \"The response provides a thorough and well-organized list of both benefits and side effects of morphine, clearly separated into categories. While the information is comprehensive and relevant, there is some redundancy and elaboration that could be condensed. For example, some side effects are explained in detail with examples or additional warnings, which, while helpful, add to the length. The response could be more succinct by listing the benefits and side effects without the extra explanatory sentences and examples. Therefore, it is mostly concise but not highly concise.\", \"rating\": 4}None2026-02-04 15:51:18.6698182026-02-04 15:51:18.669821
97a2a28dcc9b49029b3101c342461c8f83d31ab3552644d2bccddc73d9c9f30d1\"Pointwise Conciseness\"3.0{\"explanation\": \"The response provides a thorough overview of oxycodone's benefits, side effects, and important warnings, including dosage information for different formulations. However, it contains a significant amount of extra information, such as repeated medical disclaimers, detailed warnings, and explanations about how the drug works, which, while helpful, go beyond the essential request to list benefits and side effects. The response could be made more concise by focusing strictly on the benefits and side effects, with a brief mention of dosage forms if relevant. The inclusion of multiple warnings and disclaimers, while important for safety, adds to the length and reduces conciseness. Therefore, the response is somewhat concise but includes some unnecessary and slightly redundant information.\", \"rating\": 3}None2026-02-04 15:51:18.6698212026-02-04 15:51:18.669822
a14b6ad38ea7484b9b48a486a6d1b7d783d31ab3552644d2bccddc73d9c9f30d1\"Pointwise Conciseness\"4.0{\"explanation\": \"The response is thorough and provides all the essential information regarding the benefits and side effects of low-dose (81 mg) enteric-coated aspirin. It also clarifies the user's question, which is helpful for understanding. However, the response is quite lengthy and includes some extended explanations, such as detailed symptom lists for side effects, background on enteric coating, and a disclaimer. While these details are informative, they go beyond the essential information requested (a list of benefits and side effects). The response could be made more concise by summarizing or omitting some of the explanatory content and focusing more directly on the requested lists. Therefore, it is mostly concise but not highly concise.\", \"rating\": 4}None2026-02-04 15:51:18.6698232026-02-04 15:51:18.669823
\n", + "
\n", + "
" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "# viewing the results from sqlite db in tabular format..\n", + "import sqlite3\n", + "import pandas as pd\n", + "from IPython.display import display, HTML\n", + "\n", + "# Path to your SQLite database file\n", + "db_file = 'results-new/results.db'\n", + "\n", + "connection = sqlite3.connect(db_file)\n", + "\n", + "# Specify the table names you want to display\n", + "table_names = ['run','configuration', 'submission', 'submission_result', 'evaluation_result'] \n", + "\n", + "# Create the CSS and HTML container\n", + "html_content = \"\"\"\n", + "\n", + "
\n", + "\"\"\"\n", + "\n", + "for table_name in table_names:\n", + " query = f\"SELECT * FROM {table_name};\"\n", + " df = pd.read_sql_query(query, connection)\n", + " # If you want to see all the rows across all tables, remove/comment the next line\n", + " df = df.head(5) # Limiting the number of rows displayed\n", + " table_html = df.to_html(classes='table-container', index=False)\n", + " html_content += f\"\"\"\n", + "
\n", + "

Table: {table_name}

\n", + " {table_html}\n", + "
\n", + " \"\"\"\n", + "\n", + "html_content += \"
\"\n", + "\n", + "display(HTML(html_content))\n", + "\n", + "# Close the connection\n", + "connection.close()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "\n", + "\n", + "
\n", + "\n", + "
\n", + "

Categorical Comparison

\n", + "

Values: Weighted Average (1-5 scale). Win Rate based on head-to-head performance.

\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
run_idrun_namemodelWin RateFinal RankPointwise ConcisenessPointwise Instruction FollowingPointwise CorrectnessPointwise Answer Relevance
2663332bb4da43c089217193cbae88ceRun-prompt-registry-eval-demo-gpt-4o-2024-08-06gpt-4o0.75014.4494.91844.83674.6735
83d31ab3552644d2bccddc73d9c9f30dRun-prompt-registry-eval-demo-gemini-2.5-pro-001gemini-2.5-pro0.37523.4494.89804.77554.6735
a5be2752cae64582922f96b80c890dc8Run-prompt-registry-eval-demo-gpt-5-2025-08-07gpt-50.12534.4494.67354.48984.5306
\n", + "
\n", + " \n", + "
\n", + "

Boolean Comparison

\n", + "

Values: Pass Rate (0-1 scale). Win Rate based on head-to-head performance.

\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
run_idrun_namemodelWin RateFinal RankExact MatchContent Filter on InputContent Filter on Output
83d31ab3552644d2bccddc73d9c9f30dRun-prompt-registry-eval-demo-gemini-2.5-pro-001gemini-2.5-pro0.010.00.00.0
2663332bb4da43c089217193cbae88ceRun-prompt-registry-eval-demo-gpt-4o-2024-08-06gpt-4o0.010.00.00.0
a5be2752cae64582922f96b80c890dc8Run-prompt-registry-eval-demo-gpt-5-2025-08-07gpt-50.010.00.00.0
\n", + "
\n", + " \n", + "
\n", + "

Numerical Comparison

\n", + "

Values: Mean Value. Win Rate based on head-to-head performance.

\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
run_idrun_namemodelWin RateFinal RankBLEUROUGE
2663332bb4da43c089217193cbae88ceRun-prompt-registry-eval-demo-gpt-4o-2024-08-06gpt-4o1.0010.00420.1079
83d31ab3552644d2bccddc73d9c9f30dRun-prompt-registry-eval-demo-gemini-2.5-pro-001gemini-2.5-pro0.2520.00300.0794
a5be2752cae64582922f96b80c890dc8Run-prompt-registry-eval-demo-gpt-5-2025-08-07gpt-50.2520.00210.0930
\n", + "
\n", + "
" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "import pandas as pd\n", + "import numpy as np\n", + "import sqlite3\n", + "import json\n", + "import os\n", + "from IPython.display import display, HTML\n", + "\n", + "# ==========================================\n", + "# 1. CONFIGURATION (Separated Groups)\n", + "# ==========================================\n", + "METRIC_GROUPS = {\n", + " \"Categorical\": {\n", + " \"type\": \"categorical\",\n", + " \"description\": \"Weighted Average (1-5 scale)\",\n", + " \"metrics\": [\n", + " \"Pointwise Conciseness\", \n", + " \"Pointwise Instruction Following\", \n", + " \"Pointwise Correctness\", \n", + " \"Pointwise Answer Relevance\"\n", + " ]\n", + " },\n", + " \"Boolean\": {\n", + " \"type\": \"categorical\", # Uses same weighted avg logic (0 or 1)\n", + " \"description\": \"Pass Rate (0-1 scale)\",\n", + " \"metrics\": [\n", + " \"Exact Match\",\n", + " \"Content Filter on Input\",\n", + " \"Content Filter on Output\",\n", + " \"Language Match\",\n", + " \"JSON Schema Match\"\n", + " ]\n", + " },\n", + " \"Numerical\": {\n", + " \"type\": \"numerical\",\n", + " \"description\": \"Mean Value\",\n", + " \"metrics\": [\n", + " \"BLEU\", \n", + " \"ROUGE\", \n", + " \"BERT Score\",\n", + " \"test-metric\"\n", + " ]\n", + " }\n", + "}\n", + "\n", + "# ==========================================\n", + "# 2. 
DATA EXTRACTION\n", + "# ==========================================\n", + "def extract_db_metadata(db_path):\n", + " if not os.path.exists(db_path): return pd.DataFrame()\n", + " conn = sqlite3.connect(db_path)\n", + " df_runs = pd.read_sql_query(\"SELECT id, name, tags, config FROM run\", conn)\n", + " conn.close()\n", + " \n", + " meta_data = []\n", + " for _, row in df_runs.iterrows():\n", + " run_id = str(row[\"id\"])\n", + " run_name = str(row[\"name\"])\n", + " tags = {}\n", + " config = {}\n", + " try: tags = json.loads(row[\"tags\"]) if isinstance(row[\"tags\"], str) else row[\"tags\"]\n", + " except: pass\n", + " try: config = json.loads(row[\"config\"]) if isinstance(row[\"config\"], str) else row[\"config\"]\n", + " except: pass\n", + "\n", + " model = \"Unknown\"\n", + " try: model = config[\"modules\"][\"prompt_templating\"][\"model\"][\"name\"]\n", + " except:\n", + " if isinstance(tags, dict): model = tags.get(\"evaluation.ai.sap.com/model\", \"Unknown\")\n", + " elif isinstance(tags, list):\n", + " for t in tags: \n", + " if t.get(\"key\") == \"evaluation.ai.sap.com/model\": model = t.get(\"value\")\n", + "\n", + " meta_data.append({\"run_id\": run_id, \"run_name\": run_name, \"model\": model})\n", + " return pd.DataFrame(meta_data)\n", + "\n", + "def extract_api_metrics(runs_data_resource):\n", + " flat_data = []\n", + " for run in runs_data_resource:\n", + " model = \"Unknown\"\n", + " for t in run.get(\"tags\", []):\n", + " if t.get(\"name\") == \"evaluation.ai.sap.com/model\":\n", + " model = t.get(\"value\")\n", + " break\n", + " for m in run.get(\"metrics\", []):\n", + " clean_name = m.get(\"name\", \"\").replace('\"', '').strip()\n", + " flat_data.append({\n", + " \"model\": model,\n", + " \"metrics_name_clean\": clean_name,\n", + " \"metric_value\": m.get(\"value\")\n", + " })\n", + " df = pd.DataFrame(flat_data)\n", + " df['metric_value'] = pd.to_numeric(df['metric_value'], errors='coerce')\n", + " return df\n", + "\n", + "# 
==========================================\n", + "# 3. SCORING & HELM LOGIC\n", + "# ==========================================\n", + "def calculate_weighted_avg_score(row, cols):\n", + " \"\"\" Returns a score based on counts. \n", + " Categorical: 1-5 scale. \n", + " Boolean: 0-1 scale (Pass Rate). \n", + " \"\"\"\n", + " total_score = 0\n", + " total_count = 0\n", + " # Check counts 0-5 (covers Boolean 0/1 and Categorical 1-5)\n", + " for rating in range(0, 6):\n", + " col_name = next((c for c in cols if f\"/{rating}/count\" in c), None)\n", + " if col_name and not pd.isna(row[col_name]):\n", + " count = row[col_name]\n", + " total_score += count * rating\n", + " total_count += count\n", + " return total_score / total_count if total_count > 0 else 0.0\n", + "\n", + "def get_metric_score_series(df_metrics, metric_name, group_type):\n", + " \"\"\" Returns a Series of SCORES (Scalar) for each model for a specific metric \"\"\"\n", + " subset = df_metrics[df_metrics['metrics_name_clean'].str.startswith(metric_name)]\n", + " if subset.empty: return None\n", + "\n", + " # Pivot to get columns for this metric\n", + " pivot = subset.pivot_table(index='model', columns='metrics_name_clean', values='metric_value', aggfunc='first')\n", + " cols = pivot.columns.tolist()\n", + " \n", + " if group_type == \"categorical\":\n", + " # Calculate Weighted Average (or Pass Rate for Boolean)\n", + " return pivot.apply(lambda row: calculate_weighted_avg_score(row, cols), axis=1)\n", + " else:\n", + " # Calculate Mean (Numerical)\n", + " c_mean = next((c for c in cols if \"mean\" in c), None)\n", + " if c_mean: return pivot[c_mean]\n", + " return None\n", + "\n", + "def calculate_group_win_rate(score_table):\n", + " \"\"\"\n", + " Calculates HELM Win Rate: % of times a model beats another model across all metrics in this group.\n", + " \"\"\"\n", + " models = score_table.index.tolist()\n", + " metrics = score_table.columns.tolist()\n", + " win_rates = {}\n", + "\n", + " for model_a in 
models:\n", + " wins = 0\n", + " comparisons = 0\n", + " \n", + " for model_b in models:\n", + " if model_a == model_b: continue\n", + " \n", + " # Compare across ALL metrics in this table\n", + " for metric in metrics:\n", + " score_a = score_table.at[model_a, metric]\n", + " score_b = score_table.at[model_b, metric]\n", + " \n", + " # Only compare valid scores\n", + " if pd.isna(score_a) or pd.isna(score_b): continue\n", + " \n", + " comparisons += 1\n", + " if score_a > score_b:\n", + " wins += 1\n", + " \n", + " win_rates[model_a] = wins / comparisons if comparisons > 0 else 0.0\n", + " \n", + " return pd.Series(win_rates)\n", + "\n", + "# ==========================================\n", + "# 4. EXECUTION\n", + "# ==========================================\n", + "db_file = 'results-new/results.db'\n", + "\n", + "# A. Metadata\n", + "df_db_meta = extract_db_metadata(db_file)\n", + "df_db_unique = df_db_meta.drop_duplicates(subset=['model'], keep='last')\n", + "\n", + "# B. CSS\n", + "html_content = \"\"\"\n", + "\n", + "
\n", + "\"\"\"\n", + "if 'runs_data' in locals() and runs_data:\n", + " df_metrics_all = extract_api_metrics(runs_data['resources'])\n", + " \n", + " for group_name, config in METRIC_GROUPS.items():\n", + " \n", + " # 1. Build Score Table\n", + " score_table = pd.DataFrame(index=df_db_unique['model'].unique())\n", + " score_table.index.name = 'model'\n", + " \n", + " valid_metrics = []\n", + " \n", + " # 2. Calculate Scores\n", + " for metric in config[\"metrics\"]:\n", + " scores = get_metric_score_series(df_metrics_all, metric, config[\"type\"])\n", + " if scores is not None:\n", + " score_table[metric] = scores\n", + " valid_metrics.append(metric)\n", + " \n", + " if not valid_metrics:\n", + " continue\n", + "\n", + " # 3. Calculate HELM Win Rate (Specific to this group)\n", + " score_table['Win Rate'] = calculate_group_win_rate(score_table[valid_metrics])\n", + " \n", + " # 4. Calculate Final Rank\n", + " score_table['Final Rank'] = score_table['Win Rate'].rank(ascending=False, method='min')\n", + " \n", + " # 5. Merge & Format\n", + " df_final = pd.merge(df_db_unique, score_table, on='model', how='inner')\n", + " df_final = df_final.sort_values('Final Rank')\n", + " \n", + " # Rounding\n", + " for c in valid_metrics: df_final[c] = df_final[c].fillna(0.0).astype(float).round(4)\n", + " df_final['Win Rate'] = df_final['Win Rate'].fillna(0.0).astype(float).round(4)\n", + " df_final['Final Rank'] = df_final['Final Rank'].fillna(0).astype(int)\n", + " \n", + " # Columns\n", + " meta_cols = ['run_id', 'run_name', 'model']\n", + " final_cols = meta_cols + ['Win Rate', 'Final Rank'] + valid_metrics\n", + " \n", + " # 6. Generate HTML\n", + " table_html = df_final[final_cols].to_html(classes='table-container', index=False)\n", + " \n", + " html_content += f\"\"\"\n", + "
\n", + "

{group_name} Comparison

\n", + "

Values: {config['description']}. Win Rate based on head-to-head performance.

\n", + " {table_html}\n", + "
\n", + " \"\"\"\n", + "\n", + " html_content += \"
\"\n", + " display(HTML(html_content))\n", + " \n", + "else:\n", + " print(\"'runs_data' missing.\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#Delete Execution Id\n", + "def delete_execution():\n", + " headers = _get_headers()\n", + " EXEC_ID = execution_id\n", + " GET_EXECUTIONS_ENDPOINT = '/v2/lm/executions/'\n", + " request_url = f\"{AICORE_BASE_URL}{GET_EXECUTIONS_ENDPOINT}{EXEC_ID}\"\n", + " try:\n", + " response = requests.delete(\n", + " request_url, headers=headers, params={\"AI-Resource-Group\":AICORE_RESOURCE_GROUP}, timeout=120\n", + " )\n", + " print(response)\n", + " if(response.status_code != 202):\n", + " raise\n", + " result = response.json()\n", + " print(result)\n", + " except:\n", + " logging.error(\"Error occurred while attempting to delete a Configuration\")\n", + " raise\n", + " \n", + "delete_execution()" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.9.13" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/tutorials/ai-core-genaihub-evaluation/requirements.txt b/tutorials/ai-core-genaihub-evaluation/requirements.txt new file mode 100644 index 0000000000..c63e2f2893 --- /dev/null +++ b/tutorials/ai-core-genaihub-evaluation/requirements.txt @@ -0,0 +1,7 @@ +generative-ai-hub-sdk==4.4.3 +python-dotenv==1.0.1 +boto3==1.37.4 +pandas==2.2.3 +json2html==1.3.0 +numpy==1.26.4 +ipywidgets==8.1.0 diff --git a/tutorials/ai-core-genaihub-evaluation/sample.env b/tutorials/ai-core-genaihub-evaluation/sample.env new file mode 100644 index 0000000000..09eeddf3f3 --- /dev/null +++ b/tutorials/ai-core-genaihub-evaluation/sample.env @@ -0,0 +1,13 @@ +# AICORE 
CREDENTIALS +AICORE_CLIENT_ID= +AICORE_CLIENT_SECRET=<AICORE CLIENT SECRET> +AICORE_AUTH_URL= +AICORE_BASE_URL= + +# AWS CREDENTIALS +AWS_ACCESS_KEY= +AWS_BUCKET_ID=<AWS BUCKET ID> +AWS_REGION= +AWS_SECRET_ACCESS_KEY= +AWS_USERNAME= +AWS_HOST= diff --git a/tutorials/ai-core-genaihub-prompt-optimization/ai-core-genaihub-prompt-optimization.md b/tutorials/ai-core-genaihub-prompt-optimization/ai-core-genaihub-prompt-optimization.md new file mode 100644 index 0000000000..8f37452171 --- /dev/null +++ b/tutorials/ai-core-genaihub-prompt-optimization/ai-core-genaihub-prompt-optimization.md @@ -0,0 +1,1133 @@ +--- +parser: v2 +auto_validation: true +time: 45 +primary_tag: software-product>sap-ai-core +tags: [ tutorial>beginner, topic>artificial-intelligence, topic>machine-learning, software-product>sap-ai-core ] +author_name: Smita Naik +author_profile: https://github.com/I321506 +--- + +# Prompt optimization +This tutorial demonstrates how to use Prompt Optimization in SAP AI Core to automatically refine prompt templates using labeled datasets and evaluation metrics. +The process optimizes a prompt for a specific model, stores metrics in the ML Tracking Service, and saves the optimized prompt and results back to the Prompt Registry and Object Store. + +## You will learn +- How to prepare datasets and object stores for prompt optimization. +- How to create and register prompt templates in the Prompt Registry. +- How to configure and run prompt optimization via AI Launchpad, Bruno, and the Python SDK. +- How to monitor executions, review metrics, and save optimized prompts for reuse. + +## Prerequisites +1. **BTP Account** + Set up your SAP Business Technology Platform (BTP) account. + [Create a BTP Account](https://developers.sap.com/group.btp-setup.html) +2. 
**For SAP Developers or Employees** + Internal SAP stakeholders should refer to the following documentation: [How to create BTP Account For Internal SAP Employee](https://me.sap.com/notes/3493139), [SAP AI Core Internal Documentation](https://help.sap.com/docs/sap-ai-core) +3. **For External Developers, Customers, or Partners** + Follow this tutorial to set up your environment and entitlements: [External Developer Setup Tutorial](https://developers.sap.com/tutorials/btp-cockpit-entitlements.html), [SAP AI Core External Documentation](https://help.sap.com/docs/sap-ai-core?version=CLOUD) +4. **Create BTP Instance and Service Key for SAP AI Core** + Follow the steps to create an instance and generate a service key for SAP AI Core: + [Create Service Key and Instance](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/create-service-key?version=CLOUD) +5. **AI Core Setup Guide** + Step-by-step guide to set up and get started with SAP AI Core: + [AI Core Setup Tutorial](https://developers.sap.com/tutorials/ai-core-setup.html) +6. An Extended SAP AI Core service plan is required, as the Generative AI Hub is not available in the Free or Standard tiers. For more details, refer to +[SAP AI Core Service Plans](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/service-plans?version=CLOUD) +7. You've prepared a prompt template, and it is available in the prompt registry. For more information, see [Save a Template](https://help.sap.com/docs/AI_LAUNCHPAD/3f71b1e9d5124e26ace1aa1edb11e450/49d4248485644184ab3ca2ddf36119a6.html?locale=en-US&state=DRAFT&version=DEV) + +### Pre-Read +Before starting this tutorial, ensure that you: +- Understand the basics of Generative AI workflows in SAP AI Core. +- Are familiar with creating and managing prompt templates, artifacts, and object stores. +- Have the required roles, such as genai_manager or custom_evaluation. 
+- Have completed the Quick Start tutorial or equivalent setup for SAP AI Core and AI Launchpad access. + +### Architecture Overview + +- Prompt Optimization in SAP AI Core connects the Prompt Registry, Object Store, and ML Tracking Service to form an end-to-end optimization workflow. +- The dataset (for example, Test-Data.json) is stored in the Object Store and registered as an artifact. +- During execution, the system uses the selected prompt template, metric, and model to evaluate multiple prompt variants. +- Metrics are tracked in the ML Tracking Service, and both the optimized prompt and results are saved back to the registry and object store. +- This process runs as an execution and is model-specific, ensuring the optimized prompt aligns with the target model’s behavior. + +![img](img/image_arch.png) + +### Notebook Reference + +For hands-on execution and end-to-end reference, use the accompanying [Prompt Optimization Notebook](https://github.com/SAP-samples/aicore-genai-samples/blob/main/genai-sample-apps/prompt-optimizer/prompt-optimizer.ipynb). It includes complete Python code examples that align with each step of this tutorial, from dataset preparation and artifact registration to configuration creation, execution, and result retrieval. + +💡 Even though this tutorial provides stepwise code snippets for clarity, the notebook contains all required imports, object initializations, and helper functions to run the flow seamlessly in one place. + +**To use the notebook:** +- Download and open the [notebook](https://github.com/SAP-samples/aicore-genai-samples/blob/main/genai-sample-apps/prompt-optimizer/prompt-optimizer.ipynb) in your preferred environment (e.g., VS Code, JupyterLab). +- Configure your environment variables, such as AICORE_BASE_URL, AICORE_AUTH_TOKEN, and object store credentials. +- Execute each cell in order to reproduce the complete prompt optimization workflow demonstrated in this tutorial. 
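The Architecture Overview above notes that the dataset (for example, Test-Data.json) is stored in the Object Store and registered as an artifact. As a minimal, hedged sketch of what that registration involves, the snippet below only builds the JSON body that is typically posted to the AI API artifacts endpoint (`POST /v2/lm/artifacts`); the artifact name, scenario ID, object store prefix, and file name are hypothetical placeholders, and the secret name `default` is assumed to match your registered object store secret.

```python
import json

# Sketch only: build the request body for registering a dataset artifact.
# All values here are hypothetical placeholders -- replace the scenario ID,
# object store prefix, and file name with your own before posting this body
# to the AI API (POST {AICORE_BASE_URL}/v2/lm/artifacts).
def build_artifact_payload(scenario_id: str, path_prefix: str, file_name: str) -> dict:
    """Return the JSON body for registering a dataset artifact."""
    return {
        "name": "prompt-optimization-dataset",  # display name (placeholder)
        "kind": "dataset",                      # artifact kind for input datasets
        # 'default' is the object store secret name; the rest of the path points
        # at the file uploaded to your bucket under that secret's prefix.
        "url": f"ai://default/{path_prefix}/{file_name}",
        "scenarioId": scenario_id,              # placeholder scenario ID
        "description": "Labeled dataset for prompt optimization",
    }

payload = build_artifact_payload("my-scenario", "prompt-opt/data", "Test-Data.json")
print(json.dumps(payload, indent=2))
```

When posted with the `AI-Resource-Group` header set (as in the notebook's other requests), the response typically returns the artifact ID that later configuration steps reference.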
+ +### Environment Variables Setup + +[OPTION BEGIN [SAP AI Launchpad]] + +- Navigate to your SAP AI Core Launchpad. + +- In the Workspaces section, click on "Add" to create a new workspace. + - A workspace in SAP AI Core is a logical container that holds your resources (like models and pipelines) and provides the isolation needed for your projects. + +- When prompted, enter your AI Core credentials (such as Client ID, Client Secret, and Base URL). + - Note: If you're unsure about where to find these credentials, refer to this [guide](https://developers.sap.com/tutorials/ai-core-generative-ai.html#1c4f36d7-f345-4822-be00-c15f133ff7d8). + +- Once the workspace is successfully created, select your desired Resource Group to begin the evaluation process. + +Refer to the screenshot below for guidance: +![img](img/image_34.png) + +[OPTION END] + +[OPTION BEGIN [Python SDK]] + +- Open **Visual Studio Code or Jupyter Notebook**. Create a new file with the .ipynb extension (e.g., prompt_optimization.ipynb). +- Create a **.env** file in the root directory of your project. +- Add your **AI Core** and **AWS credentials** as shown below. + +```env +# AICORE CREDENTIALS +AICORE_CLIENT_ID= +AICORE_CLIENT_SECRET= +AICORE_AUTH_URL= +AICORE_BASE_URL= +AICORE_RESOURCE_GROUP= + +# AWS CREDENTIALS +AWS_ACCESS_KEY= +AWS_BUCKET_ID= +AWS_REGION= +AWS_SECRET_ACCESS_KEY= + +# ORCHESTRATION DEPLOYMENT URL +DEPLOYMENT_URL= +``` + +**Note:** Replace placeholders (e.g., CLIENT_ID, CLIENT_SECRET, etc) with your actual environment credentials. 
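A variable that is left unset usually surfaces later as an unhelpful authentication error, so it can pay off to check the environment up front. Below is a minimal sketch (the variable names follow the `.env` file above; `missing_vars` is an illustrative helper, not part of any SDK):

```python
import os

# Required AI Core variables from the .env file above; extend the list
# with the AWS entries if your workflow uses the object store.
REQUIRED_VARS = [
    "AICORE_CLIENT_ID", "AICORE_CLIENT_SECRET", "AICORE_AUTH_URL",
    "AICORE_BASE_URL", "AICORE_RESOURCE_GROUP",
]

def missing_vars(required=REQUIRED_VARS):
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.getenv(name)]

missing = missing_vars()
if missing:
    print("Missing environment variables:", ", ".join(missing))
```

Run this after loading the `.env` file; an empty result means all credentials are in place.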
Refer to the screenshot below for clarity:
![img](img/image_1.png)

#### Connect to AI Core Instance

Once the environment variables are set and dependencies are installed, run the following code to connect to your instance:

```PYTHON
# Load the credentials from the .env file
from gen_ai_hub.proxy.gen_ai_hub_proxy import GenAIHubProxyClient
from dotenv import load_dotenv
import os

load_dotenv(override=True)

# Fetch environment variables
AICORE_BASE_URL = os.getenv("AICORE_BASE_URL")
AICORE_RESOURCE_GROUP = os.getenv("AICORE_RESOURCE_GROUP")
AICORE_AUTH_URL = os.getenv("AICORE_AUTH_URL")
AICORE_CLIENT_ID = os.getenv("AICORE_CLIENT_ID")
AICORE_CLIENT_SECRET = os.getenv("AICORE_CLIENT_SECRET")

AWS_ACCESS_KEY = os.getenv("AWS_ACCESS_KEY")
AWS_BUCKET_ID = os.getenv("AWS_BUCKET_ID")
AWS_REGION = os.getenv("AWS_REGION")
AWS_SECRET_ACCESS_KEY = os.getenv("AWS_SECRET_ACCESS_KEY")

# Initialize the GenAIHubProxyClient
client = GenAIHubProxyClient(
    base_url=AICORE_BASE_URL,
    auth_url=AICORE_AUTH_URL,
    client_id=AICORE_CLIENT_ID,
    client_secret=AICORE_CLIENT_SECRET,
    resource_group=AICORE_RESOURCE_GROUP
)
```

[OPTION END]

[OPTION BEGIN [Bruno]]

- Follow the steps in this [tutorial](https://developers.sap.com/tutorials/ai-core-orchestration-consumption.html) to set up your environment: see the step **Set Up Your Environment and Configure Access** and proceed up to generating the token.

[OPTION END]

### Register Object Store Secret in AI Core

The object store is used by Prompt Optimization to read datasets and store generated artifacts and results.
In most environments, a default object store is already registered.
If your workspace already shows an entry named **default** under Object Stores, you can skip this step.
Otherwise, follow the instructions below to register a new one.

[OPTION BEGIN [SAP AI Launchpad]]

- Open the **SAP AI Core Launchpad** and navigate to the **Administration** tab.
- Select the **Object Store** section from the left-hand menu.
- Click **Add** to register a new object store secret.
- Fill in the required bucket details as shown in the screenshot below.

![img](img/image_33.png)

In the **Secret** field, use the following structure to provide your AWS credentials:

```json
{
  "AWS_ACCESS_KEY_ID": "Enter Your value",
  "AWS_SECRET_ACCESS_KEY": "Enter Your value"
}
```
[OPTION END]

[OPTION BEGIN [Python SDK]]

If you're running this tutorial in a Python environment and need to create a new S3-based object store, you can register it manually:

```PYTHON
def _get_headers():
    headers = {
        "Authorization": client.get_ai_core_token(),
        "AI-Resource-Group": AICORE_RESOURCE_GROUP,
        "Content-Type": "application/json",
    }
    return headers
```

Register your S3 bucket and credentials as a secret.

```PYTHON
# Register an S3 secret with AI Core, which will be used as an input source
import json
import logging

import requests

def register_oss_secret():
    headers = _get_headers()

    POST_SECRETS_ENDPOINT = '/v2/admin/objectStoreSecrets'
    request_url = f"{AICORE_BASE_URL}{POST_SECRETS_ENDPOINT}"

    request_body = {
        "name": "default",
        "data": {
            "AWS_ACCESS_KEY_ID": AWS_ACCESS_KEY,
            "AWS_SECRET_ACCESS_KEY": AWS_SECRET_ACCESS_KEY
        },
        "type": "S3",
        "bucket": AWS_BUCKET_ID,
        "endpoint": "s3-eu-central-1.amazonaws.com",  # adjust to match your bucket's region
        "region": AWS_REGION,
        "pathPrefix": ""
    }
    try:
        response = requests.post(
            request_url, headers=headers, data=json.dumps(request_body), timeout=120
        )
        result = response.json()
        print(result)
        return result
    except Exception:
        logging.error("Error occurred while attempting to create object store secret")
        raise

register_oss_secret()
```

After registration, verify that your store is visible under Object Stores in AI Launchpad or through the SDK call:

```python
client.list_object_stores()
```
[OPTION END]

[OPTION BEGIN [Bruno]]

Object store secrets securely store the AWS S3 credentials required
for dataset access.

• Expand **objectStoreSecrets** under admin and select the create-secret request.

Use the payload below to create an object store secret for AWS S3.

```CODE
{
  "name": "default",
  "data": {
    "AWS_ACCESS_KEY_ID": "",
    "AWS_SECRET_ACCESS_KEY": ""
  },
  "type": "S3",
  "bucket": "",
  "endpoint": "",
  "region": "",
  "pathPrefix": ""
}
```
• Ensure that all values in the data dictionary are Base64-encoded as per AWS S3 credential requirements

![img](img/image-br01.png)

[OPTION END]

### Prepare Dataset

The dataset provides the examples used by the Prompt Optimization process to evaluate and refine your input prompt.
Each record should contain a sample input message and its corresponding expected structured JSON output, which represents the correct behavior you want the model to learn.

**Dataset structure**

Each record must include:

- input – the user message or text prompt

- answer – the expected model response (in valid JSON format)

Example record from facility-train.json:

```json
[
  {
    "fields": {
      "input": "Subject: Urgent Assistance Required for Specialized Cleaning Services\n\nDear ProCare Facility Solutions Support Team. Could you please arrange for a specialized cleaning team to visit our home at the earliest convenience? We would greatly appreciate it if this could be prioritized since we want to host a large party this week.\n\nThank you for your prompt attention to this matter. We look forward to your swift response and assistance.\n\nBest regards,\n[Sender]"
    },
    "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"high\"}"
  },
  {...}
]
```

**Guidelines**

- Verify that all answer values are valid JSON strings following the schema defined in your prompt.

- Include diverse examples that represent various urgencies, sentiments, and categories.

- Save the file as facility-train.json.

- Ensure it's available locally for upload in the next step.

### Register Dataset Artifact

The dataset used for optimization must be registered as an artifact in SAP AI Core.
Artifacts act as the link between your files stored in the object store and the services that use them during prompt optimization runs. Each artifact is uniquely identified by its name and associated with a scenario.

In this step, you'll create an artifact entry for your prepared dataset (facility-train.json).

[OPTION BEGIN [SAP AI Launchpad]]

1. In SAP AI Launchpad, go to the Workspaces app.

2. Select the connection to your SAP AI Core runtime, and choose the resource group used for your Generative AI Hub deployment.

3. In the side navigation, expand Generative AI Hub and choose Optimizations.

4. Select the Artifacts tab and choose Add → Create.
A wizard appears to guide you through the process of uploading an artifact for optimizations.

5.
Complete the wizard fields with the following information: + + - Scenario: genai-optimizations + + - Name: facility-train + + - Description: (Optional) Dataset for facility prompt optimization + +6. Choose Add. + +7. Select how you want to add your artifact: + +**Option 1 – Upload File:** + + - Available to users with genai_manager or custom_evaluation roles only. + + - Select Upload File. + + - Add your object store (for example, default). + + - Specify a subpath, relative to the object store, e.g. datasets/. + + - Select your dataset file (facility-train.json). + + - Use the switch if you want to replace an existing file. + +**Option 2 – Use Existing URL:** + + - Available for users without upload privileges. + + - Select Existing URL. + + - Add your object store. + + - Specify the relative subpath for your file in the object store. + +8. (Optional) Choose Add Labels to include key-value tags that describe your artifact. + + - Use the ➕ icon to add more labels or the ✖ icon to delete labels. + + **Example:** + + - Key: prompt-optimization + + - Value: true + +9. Review all information and choose Add to complete the artifact registration. 
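Whichever upload option you choose, a malformed record only surfaces later, during the optimization run, so it is worth sanity-checking the file first. Below is a minimal sketch, assuming the record layout shown in the Prepare Dataset step (a `fields.input` string plus a JSON-encoded `answer`); the helper name `check_records` is illustrative, not part of any SDK:

```python
import json

def check_records(records):
    """Return (index, problem) pairs for records that break the expected layout."""
    problems = []
    for i, rec in enumerate(records):
        # Each record needs a string under fields.input
        if not isinstance(rec.get("fields", {}).get("input"), str):
            problems.append((i, "missing or non-string fields.input"))
        # The answer must be a JSON-encoded object, as required by the guidelines above
        try:
            answer = json.loads(rec.get("answer", ""))
            if not isinstance(answer, dict):
                problems.append((i, "answer is not a JSON object"))
        except (TypeError, json.JSONDecodeError):
            problems.append((i, "answer is not valid JSON"))
    return problems

# Example usage against the file prepared earlier:
# with open("facility-train.json") as f:
#     print(check_records(json.load(f)))
```

An empty result means every record matches the expected shape and its answer parses cleanly.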
![img](img/image_ail01.png)

[OPTION END]

[OPTION BEGIN [Python SDK]]

You can register the dataset as an artifact programmatically using the SAP Generative AI SDK:

```python
from typing import List
import requests
import mimetypes
from urllib.parse import quote
import pathlib
import json

# Artifact model from the AI API client SDK (a dependency of the AI Core SDK)
from ai_api_client_sdk.models.artifact import Artifact


def validate_dataset(dataset: str | pathlib.Path | list, expected_keys: None | List[str] = None) -> bool:
    if isinstance(dataset, (str, pathlib.Path)):
        with open(dataset, "r") as f:
            try:
                dataset = json.load(f)
            except json.JSONDecodeError as e:
                raise ValueError(f"Invalid JSON in file: {e}")
    if not isinstance(dataset, list):
        raise ValueError("Dataset must be a list of dictionaries.")
    return True


def upload_dataset(secret: str,
                   local_path: str | pathlib.Path,
                   remote_path: str,
                   scenario: str,
                   description: str | None = None,
                   overwrite: bool = False,
                   expected_keys: None | List[str] = None,
                   allow_bucket_root: bool = False) -> tuple[Artifact, str]:
    # Validate the dataset before uploading
    validate_dataset(local_path, expected_keys)

    # Check that the object store secret exists
    secrets = [r.name for r in client.ai_core_client.object_store_secrets.query().resources]
    if secret not in secrets:
        raise ValueError(f"Secret '{secret}' not found in object store secrets. Known secrets: {secrets}")

    # Require a subdirectory in the remote path
    remote_path = remote_path.lstrip("/")
    if "/" not in remote_path and not allow_bucket_root:
        raise ValueError(
            "Remote path must use subdirectories. Otherwise the whole bucket will be used as an input artifact. Set allow_bucket_root=True to allow this."
        )

    # URL-encode the path parameter
    path = f"{secret}/" + remote_path.lstrip("/")
    encoded_path = quote(path, safe="")
    url = f"{client.ai_core_client.base_url}/lm/dataset/files/{encoded_path}"
    params = {"overwrite": str(overwrite).lower()}

    # Prepare headers
    headers = {
        **client.request_header,
        "Content-Type": "application/octet-stream",
    }
    # Guess the MIME type of the file
    guessed_type, _ = mimetypes.guess_type(local_path)
    if guessed_type:
        headers["Content-Type"] = guessed_type

    with open(local_path, "rb") as f:
        response = requests.put(url, params=params, headers=headers, data=f)

    # Handle the response
    if response.status_code in (400, 409, 413):
        raise requests.HTTPError(f"Upload failed ({response.status_code}): {response.text}")
    response.raise_for_status()
    response = response.json()

    # Reuse an existing artifact if the uploaded file landed inside one
    artifact_url = "/".join(response["url"].split("/")[:-1])
    for artifact in client.ai_core_client.artifact.query().resources:
        if response["url"].startswith(artifact.url + "/"):
            return artifact, response["url"].removeprefix(artifact.url).lstrip("/")

    # Otherwise, create a new artifact
    path = response["url"].split("/")[-1]
    new_artifact = client.ai_core_client.artifact.create(
        name=f"{scenario}-prompt-optimization-data",
        kind=Artifact.Kind.DATASET,
        url=artifact_url,
        scenario_id=scenario,
        description="Datasets for prompt optimization" if description is None else description,
        resource_group=headers[client.ai_core_client.rest_client.resource_group_header]
    )
    return new_artifact, path


# dataset_secret, dataset_local_path, dataset_remote_path, base_template,
# and scenario are defined earlier in the accompanying notebook.
artifact, dataset_path = upload_dataset(
    secret=dataset_secret,
    local_path=dataset_local_path,
    remote_path=dataset_remote_path,
    expected_keys=base_template.placeholders,
    scenario=scenario,
    overwrite=True
)

print(f"Dataset uploaded to {artifact.url}/{dataset_path} -> Artifact ID: {artifact.id}")
```

![img](img/image_py01.png)

After registration, the artifact will be visible in AI
Launchpad → Workspaces → Artifacts, and can be reused in future prompt optimization runs.

[OPTION END]

[OPTION BEGIN [Bruno]]

Before registering a dataset artifact in Bruno, you must upload your JSON file to the SAP AI Core object store using the Dataset API.
Bruno cannot upload files directly to S3; therefore, this step is required.

**Prerequisites**

 - An object store secret must already exist in your resource group. Typically, this is the default secret named **default**.

 - The Dataset API currently supports:

   - S3 object stores only

   - JSON file uploads

**Upload Your Dataset**

Use the Dataset API – Upload File request in Bruno:

```bash
PUT {{ai_api_url}}/v2/lm/dataset/files/{{secretName}}/{{datasetPath}}
```

**Headers**

```
Authorization: Bearer {{token}}
AI-Resource-Group: {{resourceGroup}}
Content-Type: application/json
```

**Body**

Upload your .json file directly as binary in Bruno's Body.

Example Path Values:

 - secretName: default

 - datasetPath: dataset/facility-train.json

![img](img/image_br_dt.png)

**Note:**

Save the ai://… URL — you will use this when creating the dataset artifact.

**Register the Dataset Artifact**

- Click **Register artifact** under lm → artifacts in the Bruno collection to register the artifact.

```CODE
{
  "name": "facility-train",
  "kind": "dataset",
  "url": "ai://default/datasets",
  "scenarioId": "genai-optimizations"
}
```
![img](img/image_br02.png)

A successful response returns the artifact ID, which you'll use later in the optimization configuration.

[OPTION END]

### Create and save the prompt template

[OPTION BEGIN [SAP AI Launchpad]]

Prompt templates define how the model interprets each dataset input.
In this step, you'll create a structured prompt that guides the model to extract the correct fields (urgency, sentiment, and categories) from a facility-related message and return a well-formatted JSON response.
The template is registered in the Prompt Registry and later referenced by the optimization execution.

#### Create the Prompt Template

- In SAP AI Launchpad, go to the left-hand menu and select Generative AI Hub → Prompt Management.

- Click Templates → Create.

![img](img/image_007.png)

#### Define the Prompt

In the Message Blocks section:

- Add a system message and a user message:
```yaml
system: |-
  You are a helpful assistant.

user: |-
  Given the following message:
  ---
  {{?input}}
  ---
  Extract and return a JSON object with the following structure:
  {
    "urgency": "<high | medium | low>",
    "sentiment": "<negative | neutral | positive>",
    "categories": {
      "emergency_repair_services": <true | false>,
      "routine_maintenance_requests": <true | false>,
      "quality_and_safety_concerns": <true | false>,
      "specialized_cleaning_services": <true | false>,
      "general_inquiries": <true | false>,
      "sustainability_and_environmental_practices": <true | false>,
      "training_and_support_requests": <true | false>,
      "cleaning_services_scheduling": <true | false>,
      "customer_feedback_and_complaints": <true | false>,
      "facility_management_issues": <true | false>
    }
  }

  Your response must:
  - Contain only this JSON structure (no extra text).
  - Be valid JSON (parsable without errors).
  - Match the keys and value types exactly.
```

![img](img/image_008.png)

#### Save the Template

Click Save Template (top right):

- Scenario → genai-optimizations

- Name → facility-json-template

- Version → 1.0.0

Click Save to persist the template. The template will appear under your Prompt Registry and can be referenced by name in optimization jobs.

#### Verify the Template

Go to Generative AI Hub → Prompt Management → Templates and confirm:

- The template appears with the correct name, scenario, and version.

- Managed By → shows how the template is stored.

- Versioning is tracked automatically.

![img](img/image_ail02.png)

[OPTION END]

[OPTION BEGIN [Python SDK]]

In your notebook or Python environment, you can define and register the same template programmatically using the SAP Generative AI SDK.
```python
from gen_ai_hub.prompt_registry.client import PromptTemplateClient
from gen_ai_hub.prompt_registry.models.prompt_template import PromptTemplateSpec, PromptTemplate

# Initialize the Prompt Registry client
prompt_registry_client = PromptTemplateClient(proxy_client=client)

prompt_template_spec = PromptTemplateSpec(
    template=[
        PromptTemplate(
            role="system",
            content="You are a helpful assistant."
        ),
        PromptTemplate(
            role="user",
            content=(
                """Given the following message:
---
{{?input}}
---
Extract and return a json with the following keys and values:
- "urgency" as one of `high`, `medium`, `low`
- "sentiment" as one of `negative`, `neutral`, `positive`
- "categories" Create a dictionary with categories as keys and boolean values (True/False), where the value indicates whether the category is one of the best matching support category tags from: `emergency_repair_services`, `routine_maintenance_requests`, `quality_and_safety_concerns`, `specialized_cleaning_services`, `general_inquiries`, `sustainability_and_environmental_practices`, `training_and_support_requests`, `cleaning_services_scheduling`, `customer_feedback_and_complaints`, `facility_management_issues`
Your complete message should be a valid json string that can be read directly and only contain the keys mentioned in the list above. Never enclose it in ```json...```, no newlines, no unnecessary whitespaces."""
            )
        )
    ]
)

# Create the prompt template in the registry
template = prompt_registry_client.create_prompt_template(
    scenario="genai-optimizations",
    name="facility-json-template",
    version="1.0.0",
    prompt_template_spec=prompt_template_spec
)

print(f"✅ Created Prompt Template with ID: {template.id}")
```
**Notes**

- The placeholder {{?input}} will automatically be replaced by each record's input field during optimization.

- The resulting optimized prompt version will be saved back into the Prompt Registry.
- Ensure you use the same template name (facility-json-template) in the optimization configuration.

[OPTION END]

[OPTION BEGIN [Bruno]]

In Bruno, you can create a prompt template by sending a POST request to the AI Core API:

**Request: Create Prompt Template**

**URL:**

```bash
{{api_url}}/v2/lm/promptTemplates
```

**Headers:**
```
Authorization: Bearer {{access_token}}
Content-Type: application/json
```

**Body (JSON):**
```json
{
  "name": "facility-json-template",
  "version": "1.0.0",
  "scenario": "genai-optimizations",
  "spec": {
    "template": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Given the following message:\n---\n{{?input}}\n---\nExtract and return a JSON object with the following keys and values:\n- \"urgency\" as one of `high`, `medium`, or `low`\n- \"sentiment\" as one of `negative`, `neutral`, or `positive`\n- \"categories\" should be a dictionary with category names as keys and boolean values (true/false), indicating whether each category applies. The categories are: `emergency_repair_services`, `routine_maintenance_requests`, `quality_and_safety_concerns`, `specialized_cleaning_services`, `general_inquiries`, `sustainability_and_environmental_practices`, `training_and_support_requests`, `cleaning_services_scheduling`, `customer_feedback_and_complaints`, `facility_management_issues`.\nYour complete message must be a valid JSON string that can be parsed directly and should only contain the keys listed above. Never enclose it in ```json``` or include extra whitespace or newlines."
      }
    ]
  }
}
```
![img](img/image_br_pr.png)

[OPTION END]

### Register an Optimization Configuration

The optimization configuration defines how prompt optimization runs: it links the dataset artifact, prompt template, model, and metric into one executable setup.
When you run the optimization, SAP AI Core uses this configuration to iteratively tune your prompt so that the chosen metric (for example, json_exact_match) is maximized.

[OPTION BEGIN [SAP AI Launchpad]]

1. In SAP AI Launchpad, open the Workspaces app.

2. Select your AI Core runtime connection and the resource group for your Generative AI Hub deployment.

3. In the side navigation, expand Generative AI Hub → Optimizations.

4. Choose Create to launch the configuration wizard.

5. On the General Information screen, provide:

 - Scenario: genai-optimizations

 - Name: facility-prompt-optimization

 - Description: Configuration for facility prompt optimization

6. On the Configuration Details page:

 - Dataset: facility-train

 - Prompt Template: facility-json-template

 - Reference Model: select one of the supported base models (e.g., gpt-4o-2024-08-06).

 - Target Models: list the models to optimize for (e.g., gemini-2.5-pro--latest).

 - Metric: json_exact_match

 - Optimization Objective: maximize

7. Review your inputs and click Create.

The configuration will appear in the Optimizations → Configurations list.

![img](img/image_ail03.png)

![img](img/image_ail04.png)

![img](img/image_ail05.png)

![img](img/image_ail06.png)

![img](img/image_ail07.png)

![img](img/image_ail08.png)

[OPTION END]

[OPTION BEGIN [Python SDK]]

You can register the same configuration programmatically in your notebook:

```Python
# Mapping between legacy and current model names, extended to work in both directions
old_new_name_mapping = {
    "gemini-2.5-pro:001": "gemini-2.5-pro--001",
    "gpt-4o:2024-08-06": "openai/gpt-4o-2024-08-06"
}
old_new_name_mapping.update({v: k for k, v in old_new_name_mapping.items()})


# SUPPORTED_METRICS, SUPPORTED_MODELS, ParameterBinding, InputArtifactBinding,
# resource_group, and the inputs passed below are defined earlier in the
# accompanying notebook.
def create_config(metric: str,
                  reference_model: str,
                  targets: dict,
                  dataset_path: str,
                  scenario: str,
                  prompt: dict) -> str:
    assert metric in SUPPORTED_METRICS, f"Unsupported metric: {metric}. Supported metrics: {SUPPORTED_METRICS}"
    assert reference_model in SUPPORTED_MODELS, f"Unsupported reference model: {reference_model}. Supported models: {SUPPORTED_MODELS}"
    assert all(model in SUPPORTED_MODELS for model in targets.keys()), f"Unsupported target models: {targets}. Supported models: {SUPPORTED_MODELS}"

    input_parameters = [
        ParameterBinding(key="dataset", value=dataset_path),
        ParameterBinding(key="optimizationMetric", value=metric),
        ParameterBinding(key="basePrompt", value=f'{scenario}/{prompt["name"]}:{prompt["version"]}'),
        ParameterBinding(key="baseModel", value=reference_model),
        ParameterBinding(key="targetModels", value=','.join(targets.keys())),
        ParameterBinding(key="targetPromptMapping", value=",".join([f"{old_new_name_mapping[k]}={v}" for k, v in targets.items()]))
    ]

    # Reuse an existing configuration with identical parameters, if one exists
    existing_configs = client.ai_core_client.configuration.query(scenario_id='genai-optimizations', executable_ids=['genai-optimizations'])
    params = {par.key: par.value for par in input_parameters}
    for conf in existing_configs.resources:
        if {par.key: par.value for par in conf.parameter_bindings} == params:
            return conf.id

    input_artifacts = [InputArtifactBinding(key="prompt-data", artifact_id=artifact.id)]

    response = client.ai_core_client.configuration.create(
        name = "prompt-optimization-configuration",  # custom name of the configuration
        scenario_id = "genai-optimizations",  # value from workflow
        executable_id = "genai-optimizations",  # value from workflow
        resource_group = resource_group,
        parameter_bindings = input_parameters,
        input_artifact_bindings = input_artifacts
    )

    return response.id

# Create the configuration
configuration_id = create_config(
    metric=metric,
    reference_model=reference_model,
    targets=targets,
    dataset_path=dataset_path,
    scenario=scenario,
    prompt=prompt
)
print("Optimization Configuration ID:", configuration_id)
```
![img](img/image_py02.png)

[OPTION END]

[OPTION BEGIN [Bruno]]

In Bruno, you
can create a configuration by sending a POST request to the AI Core API:

**URL:**

```bash
{{base_url}}/v2/lm/configurations
```
**Headers:**

```
Authorization: Bearer {{access_token}}
Content-Type: application/json
Accept: application/json
ai-resource-group: {{resource_group}}
```

**Body (JSON):**

```json
{
  "name": "prompt-optimization-configuration",
  "scenarioId": "genai-optimizations",
  "executableId": "genai-optimizations",
  "description": "Configuration for facility prompt optimization",
  "parameterBindings": [
    { "key": "dataset", "value": "facility-train.json" },
    { "key": "optimizationMetric", "value": "json_exact_match" },
    { "key": "basePrompt", "value": "genai-optimizations/evaluate-base:0.0.1" },
    { "key": "baseModel", "value": "gpt-4o:2024-08-06" },
    { "key": "targetModels", "value": "gemini-2.5-pro:001" },
    { "key": "targetPromptMapping", "value": "gemini-2.5-pro:001=evaluate-base-gemini-2_5-pro:0.0.1" }
  ],
  "inputArtifactBindings": [
    { "key": "prompt-data", "artifactId": "" }
  ]
}
```
💡 Save the returned id — it represents your configuration and will be used in the next step to run the prompt optimization execution.

![img](img/image_br03.png)

[OPTION END]

⚠️ Note: Model availability and versions (for example, gpt-4o:2024-08-06, gemini-2.5-pro:latest) may vary across SAP AI Core tenants. Always verify available models in Generative AI Hub → Models before use.
For the latest updates, refer to [SAP Note 3437766](https://me.sap.com/notes/3437766) – Model Availability and Support for Generative AI Hub.

### Run the Prompt Optimization Execution

After registering the optimization configuration, the next step is to execute the optimization run.
This execution launches the prompt optimization workflow in SAP AI Core, which iteratively refines your prompt using the specified dataset and metric.
When the execution completes, the optimized prompt and results will be stored automatically in the prompt registry and object store.

[OPTION BEGIN [SAP AI Launchpad]]

Once you review your inputs and click Create in the Register an Optimization Configuration step, the optimization job starts automatically.

After the job reaches Completed status, you can inspect logs, review evaluation metrics, and view the optimized prompt details.

[OPTION END]

[OPTION BEGIN [Python SDK]]

In your notebook, execute the optimization programmatically using the SDK:

```Python
response = client.ai_core_client.execution.create(
    configuration_id = configuration_id,  # configuration ID from the previous step
    resource_group = resource_group
)

execution_id = response.id
print('Execution started with ID:', execution_id)
```
![img](img/image_br04.png)

When the execution completes, the optimized prompt is stored in the Prompt Registry and the metrics are stored in the ML Tracking Service.

[OPTION END]

[OPTION BEGIN [Bruno]]

You can also trigger the optimization execution using Bruno by sending the following API request.

**URL:**

```bash
{{base_url}}/v2/lm/executions
```

**Headers:**

```
Authorization: Bearer {{access_token}}
Content-Type: application/json
Accept: application/json
ai-resource-group: {{resource_group}}
```

**Body (JSON):**

```json
{
  "configurationId": ""
}
```

![img](img/image_br05.png)

[OPTION END]

### Monitor and View Optimization Progress

After triggering the prompt optimization execution, you can monitor the progress and verify its status in real time.
Monitoring helps ensure that your run completes successfully and allows you to access intermediate and final optimization results.

[OPTION BEGIN [SAP AI Launchpad]]

- Navigate to Generative AI Hub → ML Operations in your connected workspace.

- Open the Executions tab to view all recent prompt optimization runs.
- Each execution displays:

  - Execution ID – unique identifier for the run.

  - Status – shows Pending, Running, Succeeded, or Failed.

  - Start/End Time – indicates when the job started and finished.

- Select a specific execution to open the Logs tab.

  - Review live logs to check model mapping, prompt upload, and metric evaluation progress.

  - A “completed” message indicates the optimization finished successfully.

Once the execution succeeds, proceed to view the generated optimized prompt and metric results in the following step.

![img](img/image_ail10.png)

[OPTION END]

[OPTION BEGIN [Python SDK]]

Use the SDK to programmatically monitor the status of your optimization execution.

```Python
# Query the latest executions
executions = client.ai_core_client.execution.query(scenario_id="genai-optimizations")

for e in executions.resources:
    print(f"Execution ID: {e.id}, Status: {e.status}, Created At: {e.created_at}")

# Get detailed information about a specific execution
execution_details = client.ai_core_client.execution.get(execution_id)
print(execution_details)
```
![img](img/image_py04.png)

[OPTION END]

[OPTION BEGIN [Bruno]]

Use the GET executions request to fetch all executions under your resource group:

**URL**
```bash
GET {{base_url}}/v2/lm/executions
```
**Headers:**

```
Authorization: Bearer {{access_token}}
ai-resource-group: {{resource_group}}
```
The response will include the latest execution details such as:

```json
{
  "id": "",
  "status": "COMPLETED",
  "scenarioId": "genai-optimizations",
  "configurationId": "",
  "targetStatus": "COMPLETED",
  "submissionTime": "2025-11-06T06:48:53Z",
  "startTime": "...",
  "completionTime": "..."
}
```
These messages confirm a successful optimization.
![img](img/image_br06.png)

[OPTION END]

### Review Optimization Results

Once the prompt optimization execution completes successfully, the system generates an optimized version of your prompt and stores it in the Prompt Registry.
You can review the optimization results, inspect metrics, and compare the base and optimized prompts to understand how performance has improved.

[OPTION BEGIN [SAP AI Launchpad]]

- Navigate to Generative AI Hub → ML Operations → Executions.

- Select your completed execution (status: completed).

- Under Artifacts, review the linked optimized prompt and result files stored in the Object Store.

- Next, go to Prompt Management under Generative AI Hub and search for the newly created optimized prompt.

  Example: evaluate-base-gemini-2_5-pro:0.0.1

- Open the prompt entry to review the prompt structure, version, and metadata, including the metric used during optimization.

- To view detailed metric scores, navigate to Optimizations under Generative AI Hub → Runs and click the name of a recently executed run.

![img](img/image_ail11.png)

[OPTION END]

[OPTION BEGIN [Python SDK]]

Use the SDK to programmatically fetch and analyze your optimization results.
+ +```Python +result = fetch_results(execution_id) +print_result(result) +``` + +![img](img/image_py03.png) + +[OPTION END] + +[OPTION BEGIN [Bruno]] + +Use the GET executions by ID request to review the output of your specific optimization execution: + +**URL** +```bash +GET {{baseUrl}}/v2/lm/metrics?tagFilters=evaluation.ai.sap.com/child-of={evaluation-id} +``` +**Headers:** +```json +Authorization: Bearer {{access_token}} +ai-resource-group: {{resource_group}} +``` +![img](img/image_br07.png) + +You can then retrieve the optimized prompt directly from the Prompt Templates endpoint: + +**URL** +```bash +GET {{baseurl}}/v2/lm/promptTemplates +``` +**Headers:** + +```json +Authorization: Bearer {{access_token}} +ai-resource-group: {{resource_group}} +``` + +Look for the prompt name corresponding to your optimization output, for example: + +```json +"name": "evaluate-base-gemini-2_5-pro", +"version": "0.0.1" +``` + +![img](img/image_br08.png) + +[OPTION END] + diff --git a/tutorials/ai-core-genaihub-prompt-optimization/facility-synth-train/facility-train.json b/tutorials/ai-core-genaihub-prompt-optimization/facility-synth-train/facility-train.json new file mode 100644 index 0000000000..cdd4835977 --- /dev/null +++ b/tutorials/ai-core-genaihub-prompt-optimization/facility-synth-train/facility-train.json @@ -0,0 +1 @@ +[{"fields": {"input": "Subject: Urgent Assistance Required for Specialized Cleaning Services\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and my family and I have been availing your services for our home for the past year. We have always appreciated the high standards and professionalism your team brings to maintaining our living environment.\n\nHowever, we are currently facing an urgent issue that requires immediate attention. We recently hosted a large gathering at our home, and despite our best efforts, there are several areas that now require specialized cleaning. 
Specifically, we need deep cleaning for our carpets and upholstery, as well as thorough window washing. The situation is quite pressing as we have more guests arriving soon, and we want to ensure our home is in pristine condition to welcome them.\n\nWe have tried some basic cleaning ourselves, but the results have not been satisfactory. Given the high standards we have come to expect from ProCare, we are confident that your team can handle this situation efficiently and effectively.\n\nCould you please arrange for a specialized cleaning team to visit our home at the earliest convenience? We would greatly appreciate it if this could be prioritized due to the urgency of the situation.\n\nThank you for your prompt attention to this matter. We look forward to your swift response and assistance.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Inquiry About Specialized Cleaning Services\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is Alex, and I've been a client of ProCare Facility Solutions for a few months now. I must say, your services have been quite satisfactory so far, especially the routine maintenance and cleaning schedules.\n\nI am reaching out to inquire about your specialized cleaning services. Specifically, I am interested in deep cleaning and carpet maintenance for my residential property. 
While the regular cleaning has been great, I feel that a more thorough cleaning would really help maintain the pristine condition of my home.\n\nI haven't taken any steps yet to address this, as I wanted to get more information from your team first. Could you please provide me with details on how these specialized services work, the scheduling options available, and any additional costs involved?\n\nLooking forward to your response.\n\nBest regards,\nAlex"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Guidance Needed for Routine Plumbing Maintenance\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is Dr. Samuel Thompson, and I have been a satisfied client of ProCare Facility Solutions for the past two years. Your commitment to quality and sustainability has always resonated deeply with my values, and I am grateful for the exceptional service your team consistently provides.\n\nI am writing to seek your assistance with a minor plumbing issue that has recently come to my attention. While it is not an urgent matter, I believe addressing it sooner rather than later would be beneficial. Specifically, there seems to be a small leak in the plumbing system of my office building. Although it has not caused any significant disruption, I would appreciate your expert guidance on how to proceed.\n\nIn an effort to mitigate the issue, I have already inspected the area and ensured that the immediate surroundings are dry and safe. 
However, given the importance of maintaining a well-functioning facility, I would like to request a professional assessment and any necessary routine maintenance at your earliest convenience.\n\nYour expertise and dedication to excellence have always been a source of reassurance for me, and I am confident that your team will handle this matter with the same level of care and professionalism that I have come to expect.\n\nThank you for your attention to this matter. I look forward to your prompt response and guidance.\n\nWarm regards,\n\nDr. Samuel Thompson"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Urgent HVAC Repair Needed\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I\u2019ve been a delighted customer of ProCare Facility Solutions for the past year. Your services have always been stellar, and I truly appreciate the dedication and professionalism your team brings to maintaining our residential complex.\n\nHowever, I\u2019m reaching out with an urgent issue that needs immediate attention. Our HVAC system has been acting up for the past two days, and it\u2019s starting to affect the comfort of our living space. Given the current weather, this is becoming quite unbearable. I\u2019ve tried resetting the system and checking the filters, but nothing seems to work.\n\nCould you please send someone over as soon as possible to diagnose and fix the problem? 
Your prompt assistance would be greatly appreciated, as we rely heavily on a well-functioning HVAC system, especially during these times.\n\nThank you so much for your help and understanding. I look forward to your swift response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": true, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Immediate Attention Required: Serious Safety Concerns\n\nHi ProCare Support Team,\n\nI'm really upset and need your help right away. My name is Jamie, and I live in one of the residential complexes you manage. I've always thought you guys were the best at keeping everything clean and safe, but something really bad happened, and I'm not sure what to do.\n\nYesterday, I noticed some weird smells and noises coming from the HVAC system in my apartment. It was so bad that I couldn't sleep, and I'm worried it might be dangerous. I tried calling your emergency repair line, but no one picked up, and I left a message that hasn't been returned yet. This is really frustrating because I thought you guys were supposed to be on top of things like this.\n\nI need someone to come and check it out immediately. I'm really scared something might go wrong, and I don't want to wait any longer. Please send someone over as soon as possible to fix this. 
I don't feel safe in my own home right now, and that's just not okay.\n\nThanks,\nJamie"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": true, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"negative\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Assistance Needed for HVAC Maintenance in Apartment\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I reside in the apartment next door to a fellow resident who often provides a lovely backdrop of piano music to my writing sessions. I have been a resident here for a few years and have always appreciated the meticulous care your team provides to our building.\n\nRecently, I've encountered an issue with the HVAC system in my apartment. The unit seems to be malfunctioning, as it is not maintaining a consistent temperature. This has made it quite uncomfortable, especially during my extended writing sessions. While it's not an immediate crisis, it is becoming increasingly inconvenient.\n\nI have tried adjusting the thermostat and even reset the unit, but the problem persists. Given the importance of a comfortable environment for both my work and well-being, I would greatly appreciate it if your team could look into this matter at your earliest convenience.\n\nThank you for your attention to this routine maintenance request. 
I look forward to your prompt assistance.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Urgent Assistance Needed for Mold Remediation\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I\u2019ve been a dedicated audio engineer for over two decades. I\u2019ve always appreciated the meticulous care and attention to detail that ProCare Facility Solutions brings to maintaining our studio environment.\n\nRecently, we\u2019ve encountered a situation that requires your specialized cleaning services urgently. Our recording studio has experienced an unexpected issue with mold growth in the soundproofing materials. Given the sensitive nature of our equipment and the potential health risks, we need this addressed as soon as possible to ensure the safety and functionality of our space.\n\nWe\u2019ve taken some initial steps to mitigate the problem, such as increasing ventilation and isolating the affected areas, but it\u2019s clear that professional intervention is necessary. Your team\u2019s expertise in handling such specialized cleaning tasks is exactly what we need right now.\n\nCould you please arrange for a team to visit our studio at the earliest convenience? We\u2019re looking for a thorough deep cleaning and mold remediation to ensure that our environment remains pristine and safe for our ongoing projects.\n\nThank you for your prompt attention to this matter. 
I\u2019m confident that with ProCare\u2019s support, we\u2019ll have our studio back to its optimal condition in no time.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Inquiry About Facility Management Services\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am a first-time author currently working on improving the readability of my manuscript. While my primary focus is on writing, I am also responsible for managing a small residential property where I live and work.\n\nI am reaching out to inquire about your services. Specifically, I am interested in understanding how your team can assist in maintaining a clean and efficient environment, which is crucial for my productivity and well-being. Given that this is my first time managing such responsibilities, I would appreciate any guidance or recommendations you can provide.\n\nSo far, I have tried to handle basic maintenance and cleaning tasks on my own, but I find it challenging to keep up with everything while focusing on my writing. I am particularly interested in your customized maintenance plans and eco-friendly cleaning services, as these align with my values and needs.\n\nCould you please provide more information on how your services can be tailored to a small residential property like mine? Additionally, I would like to know about the process for setting up a consultation or initial assessment.\n\nThank you for your time and assistance. 
I look forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Urgent Assistance Needed for HVAC System Issue\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I\u2019ve been a satisfied client of ProCare Facility Solutions for the past year. Your team has always provided exceptional service, and I truly appreciate the dedication and professionalism you bring to maintaining my property.\n\nHowever, I\u2019m currently facing an urgent issue that requires immediate attention. Over the past few days, I\u2019ve noticed a significant drop in the efficiency of the HVAC system in my home. Despite the routine maintenance checks, the system seems to be struggling to maintain a consistent temperature, which is crucial for my training and recovery as a professional athlete.\n\nI\u2019ve already tried adjusting the thermostat and checking the filters, but the problem persists. Given the high stakes of my athletic performance, I need this issue resolved as quickly as possible to ensure my living environment remains optimal for my needs.\n\nCould you please arrange for an emergency repair at the earliest convenience? Your prompt assistance in this matter would be greatly appreciated, as it directly impacts my daily routine and overall well-being.\n\nThank you for your understanding and swift action. 
I look forward to hearing from you soon.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": true, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Inquiry About Facility Management Services\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been working closely with Mike Lee on various digital marketing campaigns for ProCare Facility Solutions. I wanted to reach out with a few questions regarding your facility management services.\n\nWe are currently exploring options to enhance the efficiency and sustainability of our office building's operations. Given ProCare's reputation for excellence in facility management, I believe your services could be a great fit for our needs. Could you provide more details on how your comprehensive oversight and management of facility operations work? Specifically, I am interested in understanding the coordination of space utilization and the implementation of best practices for energy efficiency and environmental impact reduction.\n\nAdditionally, I would appreciate it if you could share any case studies or examples of similar projects you have successfully managed. This information will help us make an informed decision and potentially move forward with your services.\n\nThank you for your time and assistance. 
I look forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Assistance Needed for Facility Management Issue\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is Dr. Emily Carter, and I am a transplant surgeon at the City Medical Center. We have been utilizing ProCare Facility Solutions for our facility management needs for the past two years, and I must say, your services have significantly contributed to maintaining a safe and efficient environment for our patients and staff.\n\nRecently, we have encountered an issue with the coordination of space utilization in our surgical wing. Specifically, there seems to be a recurring problem with the allocation of operating rooms, which has led to some scheduling conflicts and minor delays in our procedures. While this has not yet impacted patient care, it is a concern that we would like to address promptly to prevent any future complications.\n\nWe have attempted to manage the situation internally by adjusting our scheduling protocols and communicating with your on-site team. However, the issue persists, and we believe that a more comprehensive review and adjustment of the space utilization plan might be necessary.\n\nCould you please assist us in resolving this matter? We would greatly appreciate it if your team could conduct a thorough assessment and provide recommendations to optimize the use of our surgical spaces. 
Your expertise and support have always been invaluable to us, and we are confident that with your help, we can find an effective solution.\n\nThank you for your attention to this matter. We look forward to your prompt response and continued partnership in ensuring the best possible environment for our medical team and patients.\n\nWarm regards,\n\nDr. Emily Carter\nCity Medical Center"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Follow-Up on HVAC Maintenance Issue\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is Dr. [Sender], a retired professor and a long-time resident of [Residential Complex Name]. I have been utilizing your maintenance services for some time now and generally appreciate the professionalism and thoroughness your team brings to the table.\n\nHowever, I would like to bring to your attention a recent issue I encountered with the routine maintenance of the HVAC system in my apartment. While the technician was courteous and seemed knowledgeable, the problem with the system persists. Despite the service visit, the HVAC unit continues to make an unusual noise, which is quite disruptive.\n\nI have not taken any further steps beyond the initial service call, as I wanted to first communicate my concerns directly with your support team. I would appreciate it if you could arrange for a follow-up visit to address this issue more comprehensively.\n\nThank you for your attention to this matter. 
I look forward to your prompt response and resolution.\n\nBest regards,\nDr. [Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Inquiry About Service Quality and Safety\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I recently came across your services while looking for facility management solutions. I\u2019m not very familiar with the technical aspects, but I wanted to reach out regarding some concerns I have about the quality and safety of your services.\n\nI\u2019ve been considering your company for managing the maintenance and cleaning of my residential property. However, I\u2019ve read some reviews and heard from a few acquaintances that there might be issues related to the quality and safety standards of your services. This has made me a bit hesitant to proceed.\n\nI haven\u2019t taken any steps yet to address these concerns, as I thought it would be best to get in touch with you directly. Could you please provide more information on how you ensure the quality and safety of your services? Any details about your protocols, certifications, or customer satisfaction rates would be really helpful.\n\nThank you for your time and assistance. 
I look forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Immediate Assistance Required for Facility Management Issue\n\nDear ProCare Support Team,\n\nTere! I hope this message finds you well. My name is Jaan, and I have been a loyal customer of ProCare Facility Solutions for quite some time now. As someone who has driven countless miles on the Tugimaantee 17, I know the importance of smooth operations and well-maintained environments, and I have always appreciated the exceptional service your team provides.\n\nHowever, I am currently facing a pressing issue with the facility management at my residential complex. The coordination of space utilization and security measures seems to have gone awry, causing significant inconvenience to the residents. The situation has escalated to a point where immediate intervention is required to restore order and ensure the safety and efficiency of our living environment.\n\nI have already tried to address the issue by speaking with the on-site management team, but unfortunately, the problem persists. Given the urgency of the situation, I am reaching out to you for swift and effective assistance. Your expertise and experience in facility management are highly valued, and I am confident that your intervention will help resolve this matter promptly.\n\nPlease let me know the next steps we can take to address this issue. 
I am available at your earliest convenience to discuss further details and provide any additional information you may need.\n\nThank you for your prompt attention to this matter. I look forward to your swift response and resolution.\n\nParimate soovidega,\n\nJaan\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Request for Eco-Friendly Deep Cleaning Services\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I\u2019ve been a client of ProCare Facility Solutions for a while now. As someone who\u2019s spent years in the fast lane, both on and off the track, I appreciate the importance of precision and attention to detail, which is why I\u2019ve always trusted your services.\n\nI\u2019m reaching out today because I need some specialized cleaning services for my property. Specifically, I\u2019m looking for a deep cleaning of my garage and workshop area. These spaces have accumulated quite a bit of grime and dust over time, and I\u2019d like to get them back to a pristine condition. Given the nature of the work I do there, it\u2019s crucial that the cleaning is thorough and uses eco-friendly products.\n\nI haven\u2019t taken any steps to address this issue yet, as I wanted to consult with the experts first. I\u2019m hoping you can provide a customized cleaning plan that fits my needs and schedule.\n\nCould you please let me know the next steps and any details you need from my end to get this sorted? 
I\u2019m looking forward to your assistance in making my workspace spotless again.\n\nThanks in advance for your help.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Scheduling Cleaning Services for Optimal Facility Maintenance\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am a consultant specializing in optimizing facility management practices for businesses. I have had the pleasure of working with several clients who have benefited immensely from ProCare Facility Solutions' exceptional services.\n\nI am reaching out to discuss the scheduling of cleaning services for one of my clients, who is keen on maintaining a pristine environment in their commercial property. They have been very impressed with the quality and eco-friendliness of your cleaning solutions and are eager to establish a regular cleaning schedule that aligns with their operational needs.\n\nTo provide some context, we are looking to implement a cleaning routine that includes daily maintenance for high-traffic areas, weekly deep cleaning sessions, and monthly specialized services such as window washing and carpet maintenance. This approach will ensure that the facility remains in top condition, promoting a healthy and productive environment for all occupants.\n\nWe have reviewed the various options available and believe that a customized plan tailored to the specific requirements of the facility would be most effective. 
I would appreciate your assistance in coordinating a meeting to discuss the details and finalize the schedule.\n\nThank you for your attention to this matter. I look forward to your prompt response and am confident that, with your expertise, we can develop a cleaning schedule that meets and exceeds my client's expectations.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": true, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Scheduling Cleaning Services\n\nHey ProCare Team,\n\nHope you\u2019re all doing well! I\u2019m a big fan of your services and have been relying on you guys to keep my place spotless for a while now. You\u2019ve always done a fantastic job, and I really appreciate it.\n\nI wanted to touch base about scheduling my next round of cleaning services. I\u2019m looking to set up a regular cleaning schedule, maybe something like a bi-weekly or monthly plan. My place isn\u2019t too big, so I think that should work out just fine.\n\nI haven\u2019t taken any steps yet to set this up, so I thought I\u2019d reach out to you directly. Could you help me get this sorted? I\u2019m pretty flexible with dates and times, so whatever works best for your team should be good for me.\n\nThanks a bunch for your help! 
Looking forward to hearing from you soon.\n\nBest,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": true, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Scheduled Maintenance Request for Minor Plumbing Issue\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been availing your excellent facility management services for my residential property for the past year. I must say, your team's dedication to maintaining a pristine and efficient environment has been truly commendable.\n\nI am writing to bring to your attention a minor issue that has recently arisen with the plumbing system in my home. While it is not an urgent matter, I believe it would be prudent to address it sooner rather than later to prevent any potential complications. Specifically, there seems to be a small leak in one of the bathroom faucets, which, although not severe, has been persistent over the past few days.\n\nI have attempted to tighten the faucet myself, but the issue persists. Given your team's expertise, I am confident that this can be resolved efficiently with your assistance. Could you kindly arrange for a technician to visit at their earliest convenience to inspect and repair the faucet as part of the routine maintenance?\n\nThank you for your attention to this matter. 
I appreciate your continued support and look forward to your prompt response.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Immediate Attention Required for Electrical Safety Concern\n\nDear [Receiver],\n\nI hope this message finds you well. My name is [Sender], and I am currently a resident at [Residential Property Name], where ProCare Facility Solutions has been providing exceptional facility management and maintenance services. I have always been impressed with the quality and professionalism of your team.\n\nHowever, I am writing to bring to your immediate attention a critical safety concern that requires urgent resolution. Over the past few days, I have noticed a significant issue with the electrical system in my apartment. There have been frequent power surges and flickering lights, which I believe could pose a serious safety hazard.\n\nGiven the potential risks associated with electrical malfunctions, I have taken the precaution of unplugging all non-essential devices and avoiding the use of high-power appliances. Despite these measures, the problem persists, and I am deeply concerned about the safety of my living environment.\n\nI kindly request that a qualified technician be dispatched as soon as possible to assess and rectify the issue. 
Ensuring the safety and well-being of residents is paramount, and I trust that ProCare Facility Solutions will address this matter with the urgency it deserves.\n\nThank you for your prompt attention to this critical issue. I look forward to your swift response and resolution.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": true, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Inquiry on Sustainability Practices and Career Guidance\n\nHi ProCare Support Team,\n\nI hope this message finds you well! My name is [Sender], and I'm a high school student with a keen interest in cloud computing and sustainability. I recently came across ProCare Facility Solutions and was really impressed by your commitment to environmentally friendly practices.\n\nI'm reaching out because I'm eager to learn more about the sustainability and environmental practices you implement in your facility management and cleaning services. Specifically, I'm interested in how these practices can be integrated into a career in cloud computing. I believe that understanding these aspects will help me align my future career with my passion for sustainability.\n\nSo far, I've done some research on my own and have read through the information available on your website. However, I would love to get more detailed insights or any additional resources you might have. Are there any specific programs or initiatives that ProCare Facility Solutions is particularly proud of? 
Additionally, any advice on how I can incorporate these practices into my future career would be greatly appreciated.\n\nThank you so much for your time and assistance. I'm really looking forward to learning from your expertise and applying it to my future endeavors.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Immediate Attention Required for Cleaning Schedule Issue\n\nHi [Receiver],\n\nI'm reaching out because I'm beyond frustrated with the cleaning services scheduling. My name is [Sender], and I've been using ProCare Facility Solutions for my office building for the past year. Frankly, I'm not impressed right now.\n\nThe cleaning crew was supposed to be here yesterday for the weekly cleaning, but no one showed up. This isn't the first time this has happened, and it's becoming a serious problem. I don't have time to keep chasing this up, and it's unacceptable for a company that claims to be \"premier\" in facility management.\n\nI've already called your support line twice, and all I got were empty promises that someone would get back to me. Well, no one has, and my office is still a mess. I need this resolved immediately. 
Send a cleaning crew today, or I'll have to consider other options.\n\nSort this out.\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": true, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"negative\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Urgent: Ongoing Maintenance Issues at Our Facility\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am the community manager for [Community Name]. I have been overseeing our facility's operations and maintenance for quite some time now, and I must say, the recent experiences with your maintenance services have been less than satisfactory.\n\nWe have been facing several recurring issues with our HVAC and plumbing systems that have not been adequately addressed despite multiple service requests. The lack of timely and effective solutions is causing significant inconvenience to our residents and staff, and it is becoming increasingly difficult to manage the situation.\n\nTo give you a clearer picture, we have had technicians visit our facility on three separate occasions over the past month. Each time, the problem was either temporarily fixed or not resolved at all. This has led to a lot of frustration among our community members, and it is reflecting poorly on our management.\n\nI am reaching out to request a more permanent and effective solution to these ongoing maintenance issues. We need a thorough inspection and a comprehensive plan to address the root causes of these problems. 
It is crucial for us to ensure a safe and comfortable environment for everyone in our community.\n\nI trust that you understand the urgency of this matter and will prioritize our request accordingly. We have always valued the quality of service provided by ProCare Facility Solutions, and we hope to see a swift resolution to these issues.\n\nThank you for your attention to this matter. I look forward to your prompt response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"negative\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Concerns About Sustainability Practices\n\nDear ProCare Support Team,\n\nI hope this message finds you well, though I must admit, my recent experiences with your services have left me quite disheartened. As an art student deeply invested in the preservation of our environment, I was initially drawn to ProCare Facility Solutions because of your advertised commitment to sustainability. However, my recent observations have led me to question the authenticity of these claims.\n\nI have been a client for several months now, utilizing your cleaning services for my studio space. While the cleaning itself has been satisfactory, I have noticed a troubling lack of transparency regarding the eco-friendly products and practices you claim to use. 
On multiple occasions, I have seen your staff using what appear to be conventional, chemical-laden cleaning agents, which is quite disconcerting given your stated focus on environmentally friendly practices.\n\nI have attempted to address this issue by speaking directly with the cleaning staff, but their responses have been vague and unconvincing. This lack of clarity and apparent disregard for genuine sustainability is not only disappointing but also undermines the trust I placed in your company.\n\nI am reaching out to request a detailed explanation of the specific eco-friendly products and practices you employ. Additionally, I would appreciate information on how you ensure compliance with these practices across all your teams. It is crucial for me to understand whether your commitment to sustainability is more than just a marketing ploy.\n\nThank you for your attention to this matter. I look forward to your prompt and thorough response.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"negative\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Immediate Training Support Needed for In-House Maintenance Team\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I\u2019ve been working with ProCare Facility Solutions for the past three years, managing our commercial property portfolio. 
I must say, your services have always been top-notch, and I truly appreciate the dedication and expertise your team brings to the table.\n\nHowever, we\u2019re currently facing a pressing issue that requires your immediate attention. Our in-house maintenance team is in urgent need of comprehensive training on the latest facility management best practices. We\u2019ve recently expanded our operations, and the new team members are struggling to keep up with the standards we\u2019ve come to expect from ProCare.\n\nI\u2019ve already tried to address this by conducting a few internal training sessions, but it\u2019s clear that we need professional guidance to ensure everyone is up to speed. We need a detailed training program that covers everything from routine maintenance to emergency repair protocols.\n\nCould you please arrange for a training session at the earliest convenience? Given the urgency of the situation, we would appreciate it if this could be prioritized. Your prompt assistance in this matter would be greatly valued and would help us maintain the high standards we strive for.\n\nThank you for your attention to this matter. Looking forward to your swift response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": true, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Request for Training and Support on Facility Management Best Practices\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is Dr. 
Alex Turner, and I am a wildlife ecologist who has been utilizing your facility management services for our research center. We have been quite satisfied with the overall maintenance and cleaning services provided by ProCare Facility Solutions.\n\nI am reaching out to request some additional training and support for our in-house maintenance team. As our research activities expand, we find ourselves needing to better understand the best practices in facility management, particularly in areas related to energy efficiency and environmental impact reduction. This knowledge is crucial for us to maintain our facility in a way that aligns with our ecological research goals.\n\nSo far, we have tried to implement some basic practices based on general guidelines, but we believe that a more structured training program from your experts would be highly beneficial. We are looking for comprehensive training sessions that can be scheduled at a convenient time for our team.\n\nCould you please provide us with information on the available training programs and how we can arrange for these sessions? Additionally, any resources or documentation that could help us in the interim would be greatly appreciated.\n\nThank you for your attention to this matter. We look forward to your guidance and support.\n\nBest regards,\n\nDr. 
Alex Turner \nWildlife Ecologist \n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": true, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Routine Maintenance Request for HVAC System\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I manage a small equestrian facility that has been benefiting from your services for the past year. I\u2019m reaching out today regarding a routine maintenance request for our HVAC system.\n\nWe\u2019ve noticed that the system isn\u2019t performing as efficiently as it used to, and with the changing seasons, it\u2019s crucial for us to maintain a stable environment for our horses. I believe it\u2019s time for a scheduled check-up to ensure everything is running smoothly.\n\nSo far, we\u2019ve tried basic troubleshooting like cleaning the filters and checking the thermostat settings, but the issue persists. Could you please arrange for a technician to come by and perform the necessary maintenance? We\u2019re flexible with timing but would appreciate it if this could be addressed within the next week or so.\n\nThank you for your attention to this matter. 
Looking forward to your prompt response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Inquiry Regarding Facility Management Coordination\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is Dr. Alex Thompson, and I have been thoroughly impressed with the exceptional services provided by ProCare Facility Solutions for our residential complex. Your team's dedication to maintaining a pristine and efficient environment has not gone unnoticed.\n\nI am reaching out to discuss a minor issue we have encountered with the coordination of space utilization within our facility. While the overall management has been stellar, we have noticed a slight misalignment in the scheduling of common area usage, which occasionally leads to overlapping bookings. This is not an urgent matter, but I believe addressing it could further enhance the seamless experience we have come to expect from your services.\n\nTo provide some context, we have already attempted to manually adjust the schedules to avoid conflicts, but a more systematic approach might be beneficial. We are confident that with your expertise, a more efficient solution can be implemented.\n\nCould you please assist us in reviewing the current scheduling system and suggest any improvements or adjustments that could be made? Your guidance and support in this matter would be greatly appreciated.\n\nThank you for your attention to this matter. 
I look forward to your response and continuing our positive relationship with ProCare Facility Solutions.\n\nBest regards,\n\nDr. Alex Thompson"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: A Green Inquiry from a Bill Maher Enthusiast\n\nHey ProCare Support Team,\n\nHope this email finds you all in good spirits and with a dash of humor! I'm [Sender], a long-time admirer of your top-notch facility solutions. You guys are like the unsung heroes of the maintenance world, keeping everything running smoothly while the rest of us focus on our daily grind.\n\nSo, here's the deal. As a die-hard fan of Bill Maher's satirical take on the world, I can't help but appreciate the importance of sustainability and environmental practices. It's like the punchline to a joke that actually matters. I've been super impressed with your commitment to eco-friendly cleaning products and energy-efficient practices. Kudos to you for that!\n\nHowever, I've been wondering if there's more we can do to up our green game. Are there any additional initiatives or practices that you guys are planning to roll out soon? Or maybe some tips and tricks that we can implement on our end to further reduce our carbon footprint? I'm all ears and ready to take notes!\n\nI haven't really taken any steps yet, just thought I'd reach out to the experts first. 
After all, why reinvent the wheel when you have a team of pros at your disposal, right?\n\nLooking forward to hearing from you and getting some insights on how we can make our facility even more environmentally friendly. Keep up the fantastic work, and thanks for being the rockstars that you are!\n\nBest,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Routine Maintenance Request for HVAC System\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am a college student majoring in East Asian Studies with a focus on contemporary Japanese culture and media. I live in a residential complex managed by ProCare Facility Solutions.\n\nI am writing to request routine maintenance for the HVAC system in my apartment. While everything is functioning adequately, I believe it would be beneficial to have a check-up to ensure everything continues to run smoothly, especially as we transition into the colder months.\n\nI haven't taken any steps to address this issue myself, as I trust your team\u2019s expertise in handling such matters. Could you please schedule a maintenance visit at your earliest convenience?\n\nThank you for your attention to this matter. 
I appreciate the quality service ProCare consistently provides.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Inquiry About Specialized Cleaning Services\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am the owner of a small bed and breakfast. I have been very pleased with the general cleaning services your team has provided over the past year. Your attention to detail and commitment to quality have truly made a difference in maintaining the welcoming atmosphere of my establishment.\n\nI am writing to inquire about your specialized cleaning services, particularly deep cleaning and carpet maintenance. While our regular cleaning schedule has been effective, I believe that a more thorough cleaning would greatly benefit our property, especially as we prepare for the upcoming holiday season.\n\nI have not yet taken any steps to address this need, as I wanted to consult with your team first to ensure we proceed in the best possible manner. Could you please provide more information on the specialized cleaning services you offer, including any recommendations for a property like ours?\n\nThank you for your time and assistance. 
I look forward to hearing from you soon.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Request for Deep Cleaning and Carpet Maintenance\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been utilizing your services for my residential property for the past year. I must say, I have been quite satisfied with the quality and professionalism your team consistently delivers.\n\nRecently, I have encountered a situation that requires specialized cleaning services. Specifically, I need a thorough deep cleaning of my home, including window washing and carpet maintenance. Given the importance of maintaining a pristine environment, I believe your expertise in this area would be invaluable.\n\nCould you please provide me with information on the availability of your specialized cleaning services and any necessary preparations I should make before your team arrives? I would appreciate it if we could schedule this service at your earliest convenience, though I understand that it may not be immediate.\n\nThank you for your attention to this matter. 
I look forward to your prompt response and continued excellent service.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": true, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Hey ProCare Support Team,\n\nHope you all are doing great! My name is Alex, and I've been using your awesome services for my apartment complex for a few months now. I must say, you guys are doing a fantastic job keeping everything spick and span.\n\nI wanted to reach out because I've been thinking a lot about how we can make our building more eco-friendly. I know you guys are big on sustainability, which is one of the reasons I chose ProCare in the first place. I was wondering if you could share some tips or maybe even offer some additional services that could help us reduce our environmental impact even more.\n\nI haven't really done much on my own yet, just some basic recycling and switching to LED bulbs, but I feel like there's so much more we could be doing. 
Any advice or guidance you could provide would be super helpful.\n\nThanks a ton for your help and for all the great work you do!\n\nBest,\nAlex"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Seeking Advice on Ensuring a Safe Environment for My Cats\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is Margaret, and I have been a satisfied customer of ProCare Facility Solutions for the past year. I truly appreciate the excellent service your team provides in maintaining my home, which has always been a safe and clean environment for me and my beloved cats.\n\nI am writing to seek your advice on a matter that has been on my mind lately. As an elderly woman with several cats, their health and safety are of utmost importance to me. I have noticed that while your cleaning services are impeccable, I am concerned about the potential impact of certain cleaning products on my cats' health. They are very sensitive, and I want to ensure that their environment remains as safe as possible.\n\nI have not encountered any specific issues so far, but I would like to be proactive in addressing any potential risks. Could you please provide me with information on the cleaning products used in my home and whether they are pet-friendly? Additionally, if there are any alternative products or practices that could further enhance the safety of my home for my cats, I would greatly appreciate your recommendations.\n\nThank you for your attention to this matter. 
I look forward to your guidance and continuing to enjoy the excellent service provided by ProCare Facility Solutions.\n\nWarm regards,\n\nMargaret\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Inquiry on Sustainability and Environmental Practices\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been a client of ProCare Facility Solutions for the past year, primarily utilizing your comprehensive facility management and maintenance services for my commercial property.\n\nI am reaching out to inquire about the sustainability and environmental practices that ProCare Facility Solutions implements. As someone who deeply appreciates the intricate world-building and thought-provoking themes in Orson Scott Card's works, I find myself equally fascinated by the real-world application of sustainable practices and their long-term impact on our environment.\n\nWhile I am generally satisfied with the services provided, I am keen to understand more about the specific eco-friendly products and practices your team employs. 
Additionally, I would like to know how these practices align with current environmental standards and what measures are taken to ensure continuous improvement in this area.\n\nI have not encountered any immediate issues or concerns, but I believe that having a deeper understanding of your sustainability efforts will not only enhance my appreciation of your services but also allow me to better communicate these benefits to my stakeholders.\n\nCould you please provide detailed information on your sustainability initiatives and any relevant documentation or resources that outline your environmental practices? I am particularly interested in any recent updates or future plans you may have in this regard.\n\nThank you for your time and assistance. I look forward to your response and continuing our partnership in maintaining a safe, efficient, and environmentally conscious facility.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Inquiry Regarding Specialized Cleaning Services\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], a retired pilot of the Royal New Zealand Air Force. I have recently moved into a new residential property and have been considering your specialized cleaning services to maintain the pristine condition of my home.\n\nI am particularly interested in your deep cleaning and carpet maintenance services. 
While I have no immediate concerns, I would like to understand more about the process, the products used, and the scheduling options available. Ensuring a clean and healthy living environment is important to me, and I appreciate your commitment to eco-friendly practices.\n\nI have not taken any steps yet, as I wanted to gather more information before proceeding. Could you please provide me with details on how to get started, any available packages, and the associated costs?\n\nThank you for your assistance. I look forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Inquiry About Training Programs for Facility Management\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am a film producer currently overseeing a project that involves extensive use of special effects. Given the nature of our work, safety and feasibility are paramount concerns for me.\n\nI have been considering the implementation of more robust facility management practices to ensure that our working environment remains safe and efficient. I came across your comprehensive training programs on facility management best practices and was intrigued by the potential benefits they could offer to our team.\n\nCould you provide more details about the training programs you offer, particularly those that focus on safety and risk management? 
Additionally, I would appreciate information on how these programs can be tailored to meet the specific needs of a film production environment, where the use of special effects can introduce unique challenges.\n\nI have not yet taken any steps to address this matter, as I wanted to gather more information from your team first. Your expertise and guidance would be invaluable in helping us create a safer and more efficient working environment.\n\nThank you for your time and assistance. I look forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": true, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Urgent Assistance Required for Cleaning Services Scheduling\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am currently managing a commercial property that has been utilizing your services for the past year. I am reaching out to you with an urgent request regarding the scheduling of our cleaning services.\n\nWe have recently encountered a significant issue with our current cleaning schedule. Due to an upcoming event at our facility, we require an immediate adjustment to our cleaning timetable to ensure the premises are in pristine condition. Specifically, we need a deep cleaning service, including window washing and carpet maintenance, to be conducted within the next 48 hours.\n\nI have already attempted to adjust the schedule through your online portal, but it appears that the system is not allowing changes on such short notice. 
Given the urgency of our situation, I am seeking your immediate assistance to expedite this request.\n\nCould you please prioritize this matter and confirm the availability of your team to accommodate our needs? Your prompt response and support in this matter would be greatly appreciated.\n\nThank you for your attention to this urgent request. I look forward to your swift resolution.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": true, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Scheduled Maintenance Request for HVAC System\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I\u2019ve been a delighted customer of ProCare Facility Solutions for the past year. Your team has always done a fantastic job maintaining our residential complex, and I truly appreciate the dedication and professionalism you bring to your work.\n\nI\u2019m reaching out because we\u2019ve encountered an issue with our HVAC system. It\u2019s not an immediate crisis, but it\u2019s definitely something that needs attention soon. The system has been making unusual noises and isn\u2019t cooling as efficiently as it used to. Given the importance of a comfortable environment for my vocal practice sessions, I\u2019d love to get this sorted out before it becomes a bigger problem.\n\nI\u2019ve tried adjusting the thermostat and checking the filters, but the issue persists. I\u2019m hoping you can schedule a routine maintenance visit to diagnose and repair the system at your earliest convenience. 
Your prompt assistance would be greatly appreciated, as always.\n\nThank you so much for your help and for continuing to provide such excellent service. Looking forward to hearing from you soon.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Urgent Inquiry Regarding Facility Management Services\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is Alex, and I\u2019ve been a huge fan of ProCare Facility Solutions for quite some time now. Your commitment to quality and sustainability has always resonated with me, and I\u2019ve had nothing but positive experiences with your services.\n\nI\u2019m reaching out today because I need some urgent assistance regarding your facility management services. I\u2019m currently overseeing a new residential complex that\u2019s in dire need of comprehensive facility management. We\u2019re looking to implement best practices for energy efficiency and environmental impact reduction, and I believe ProCare is the perfect partner for this.\n\nI\u2019ve already reviewed your service offerings and am particularly interested in the coordination of space utilization and sustainability efforts. However, I need more detailed information on how quickly we can get started and what the initial steps would be. Time is of the essence, as we\u2019re aiming to have everything in place within the next few weeks.\n\nCould you please provide me with a detailed plan or guide on how we can proceed? 
Any immediate steps we can take to expedite the process would be greatly appreciated. I\u2019m confident that with your expertise, we can create a safe, efficient, and impeccably maintained environment for our residents.\n\nThank you so much for your prompt attention to this matter. I look forward to your swift response and working together to make this project a success.\n\nBest regards,\n\nAlex [Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": true}, \"sentiment\": \"positive\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Immediate Assistance Required for HVAC System Issue\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am a front-end developer residing in one of the residential complexes managed by ProCare Facility Solutions. I have always appreciated the meticulous attention to detail and the high standards of service your team provides.\n\nHowever, I am currently facing a significant issue that requires urgent attention. Over the past few days, there has been a noticeable decline in the efficiency of the HVAC system in my building. The temperature regulation is inconsistent, and there have been instances where the system has completely shut down, causing considerable discomfort.\n\nGiven the critical nature of this issue, I have already attempted to troubleshoot by checking the thermostat settings and ensuring that the air filters are clean. 
Unfortunately, these steps have not resolved the problem, and the situation seems to be deteriorating.\n\nI would greatly appreciate it if your team could prioritize this matter and send a technician to inspect and repair the HVAC system as soon as possible. The current state of the system is affecting not only my comfort but also my ability to work efficiently from home.\n\nThank you for your prompt attention to this matter. I look forward to your swift response and resolution.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": true, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Inquiry Regarding HVAC System Maintenance\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is Dr. Alex Thompson, and I am currently managing the facility operations for our research center. We have been utilizing ProCare Facility Solutions for our maintenance needs for the past two years, and I must commend your team for their consistent and reliable service.\n\nI am writing to bring to your attention a minor issue we have encountered with our HVAC system. While the system is still operational, we have noticed a slight decrease in its efficiency over the past few weeks. Given the critical nature of maintaining optimal environmental conditions for our research equipment, I believe it is prudent to address this matter sooner rather than later.\n\nTo provide some context, we have already performed basic troubleshooting steps, such as checking the filters and ensuring that the thermostat settings are correct. 
However, the issue persists, and we would appreciate your expertise in diagnosing and resolving the problem.\n\nCould you please arrange for a technician to visit our facility at your earliest convenience to conduct a thorough inspection and perform any necessary maintenance? While this is not an urgent matter, we would prefer to have it addressed within the next couple of weeks to prevent any potential disruptions to our research activities.\n\nThank you for your attention to this matter. I look forward to your prompt response and continued support.\n\nBest regards,\n\nDr. Alex Thompson\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Scheduled Maintenance Request for Kitchen Sink Plumbing\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am currently residing in one of the residential properties managed by ProCare Facility Solutions. I have always appreciated the high level of service and attention to detail your team provides.\n\nI am writing to bring to your attention a minor issue that has arisen in my apartment. Specifically, there seems to be a small leak in the kitchen sink plumbing. While it is not causing any immediate problems, I believe it would be best to address it before it potentially worsens.\n\nI have not taken any steps to fix the issue myself, as I trust your team\u2019s expertise in handling such matters. Could you please arrange for a technician to come by and take a look at the earliest convenience? 
I understand that this is not an urgent matter, so I am flexible with scheduling the visit.\n\nThank you for your attention to this matter. I look forward to your prompt response and assistance.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Immediate Assistance Required for HVAC System Disruption\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is Dr. [Sender], and I am a psychiatrist specializing in mental health. I have been collaborating with ProCare Facility Solutions for the past year to ensure that our facility remains a conducive environment for both our staff and patients.\n\nI am writing to bring to your immediate attention a pressing issue we are currently facing with the facility management at our clinic. Over the past week, we have experienced significant disruptions in our HVAC system, which has resulted in uncomfortable temperatures within the building. This situation is particularly concerning given the nature of our work, where a stable and comfortable environment is crucial for both therapeutic sessions and the overall well-being of our patients.\n\nWe have attempted to address the issue internally by adjusting the thermostat settings and conducting basic troubleshooting, but these measures have not resolved the problem. 
Given the urgency of maintaining a stable environment for our patients, I am reaching out to request your immediate assistance in resolving this matter.\n\nCould you please arrange for a technician to visit our facility at the earliest convenience to diagnose and fix the HVAC system? Additionally, any interim measures that can be taken to mitigate the discomfort would be greatly appreciated.\n\nThank you for your prompt attention to this matter. I look forward to your swift response and resolution.\n\nBest regards,\n\nDr. [Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": true, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Inquiry About Training Programs on Sustainable Facility Management\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am an influential blogger dedicated to raising awareness about the harmful effects of oil and gas operations on our environment. I have been following your company\u2019s commitment to sustainability and eco-friendly practices with great interest.\n\nI am reaching out to inquire about your training programs on facility management best practices, particularly those that focus on sustainability and reducing environmental impact. 
As someone who advocates for greener solutions, I am keen to learn more about how your training can help in promoting and implementing sustainable facility management practices.\n\nI have not yet taken any steps to enroll in your training programs, as I wanted to first understand the scope and content of what you offer. Specifically, I am interested in any modules or sessions that address energy efficiency, waste reduction, and the use of eco-friendly products.\n\nCould you please provide me with more information on the available training programs, including schedules, content outlines, and any prerequisites? Additionally, I would appreciate any guidance on how to get started with these programs.\n\nThank you for your time and assistance. I look forward to your response and to potentially collaborating with ProCare Facility Solutions to further our shared goal of promoting sustainability.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": true, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Inquiry About ProCare Facility Solutions Services\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am a health policy advisor working within the European Parliament. 
I have recently come across your company, ProCare Facility Solutions, and I am thoroughly impressed by the range of services you offer, particularly your commitment to sustainability and quality.\n\nAs part of my role, I am involved in crafting legislation and advising on public health matters, and I am always on the lookout for exemplary service providers who prioritize environmental impact reduction and energy efficiency. Your comprehensive facility management and maintenance services seem to align perfectly with the values we promote.\n\nI am reaching out to gather more information about your services, specifically regarding your customized maintenance plans and eco-friendly cleaning practices. Could you provide more details on how these services are tailored to meet the unique needs of different facilities? Additionally, I would appreciate any information on the training programs you offer for in-house maintenance teams and cleaning staff.\n\nI have not yet taken any steps to engage your services, as I wanted to first understand the full scope of what you offer and how it might benefit the facilities we oversee. Your prompt response would be greatly appreciated, though there is no immediate urgency.\n\nThank you for your time and assistance. 
I look forward to learning more about how ProCare Facility Solutions can support our efforts in maintaining safe, efficient, and sustainable environments.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": true, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Urgent: Immediate Attention Needed for Cleaning Schedule Issues\n\nHi [Receiver],\n\nI hope this message finds you well, though I can't say the same for my experience with ProCare Facility Solutions lately. I'm [Sender], and I've been using your services for a while now, but I'm starting to question my decision.\n\nI've been trying to get a consistent cleaning schedule set up for my property, but it's been a nightmare. The communication has been spotty at best, and the few times I've managed to get through, the scheduling has been all over the place. It's frustrating to say the least, especially when I'm trying to maintain a clean and healthy environment.\n\nI've already tried calling and emailing multiple times, but it seems like my concerns are falling on deaf ears. I even tried to use your online scheduling tool, but it was more trouble than it was worth.\n\nI need someone to take this seriously and help me get a reliable cleaning schedule in place. 
It's not too much to ask for a service I'm paying for, is it?\n\nLooking forward to a prompt resolution.\n\nBest,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": true, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"negative\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Immediate Attention Required for Safety Concerns\n\nHey ProCare Support Team,\n\nHope this message finds you well. My name is [Sender], and I've been a client of ProCare Facility Solutions for a while now, enjoying the seamless maintenance and cleaning services you provide. But today, I need to bring something to your attention that can't wait.\n\nIn the rhythm of daily life, we often overlook the small things, but this one stands out like a discordant note in a smooth melody. Recently, I've noticed some issues with the safety protocols in our building. Specifically, the emergency exits seem to be blocked by cleaning equipment, and the fire alarms haven't been tested in a while. This is a serious concern that needs immediate action.\n\nI've tried to address this with the on-site team, but the problem persists. It's like a verse that keeps repeating, unresolved. I need your expertise to ensure that our facility remains safe and compliant with all safety regulations.\n\nPlease send someone over to inspect and rectify these issues as soon as possible. 
Your prompt attention to this matter would be greatly appreciated.\n\nLooking forward to a swift resolution.\n\nBest,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Inquiry on Eco-Friendly Practices\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I've been a client of ProCare Facility Solutions for a few months now. I must say, your services have been quite impressive, especially the way you handle facility management and maintenance.\n\nAs a huge fan of the show \"Wisdom of the Crowd,\" I\u2019ve always been fascinated by how small changes can lead to significant impacts. This got me thinking about the eco-friendly practices that ProCare implements. I\u2019m particularly interested in understanding more about the sustainable cleaning products and methods you use.\n\nI haven\u2019t encountered any issues per se, but I\u2019m curious about the specifics of your sustainability efforts. For instance, what kind of products do you use, and how do they contribute to a healthier environment? Additionally, are there any new initiatives or technologies you\u2019re planning to adopt to further enhance your environmental impact?\n\nI haven\u2019t taken any steps to resolve this query as it\u2019s more of an informational request. I would appreciate it if you could provide me with some detailed insights or direct me to any resources that could help me understand your practices better.\n\nThank you for your time and assistance. 
Looking forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Inquiry on Sustainability Practices\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been a client of ProCare Facility Solutions for the past year, primarily utilizing your maintenance and cleaning services for my studio and gallery space. As an artist who has found solace and purpose in painting, especially after overcoming significant personal challenges, I deeply value the environment in which I create and exhibit my work.\n\nI am reaching out to inquire about the sustainability and environmental practices that ProCare Facility Solutions implements. While I am generally satisfied with the services provided, I am particularly interested in understanding more about the eco-friendly products and practices you use. Given the nature of my work and my commitment to advocacy through art, it is important to me that the spaces I maintain are not only clean but also environmentally responsible.\n\nI have reviewed some information on your website, but I would appreciate more detailed insights into how your sustainability efforts are integrated into your daily operations. 
Specifically, I am curious about the types of eco-friendly cleaning products used, any certifications they might have, and how your team ensures minimal environmental impact during maintenance activities.\n\nI have not encountered any specific issues, but I believe that having a deeper understanding of your practices will help me align my own efforts with those of ProCare Facility Solutions. Any additional information or resources you could provide would be greatly appreciated.\n\nThank you for your time and assistance. I look forward to your response.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Clarification Needed on Your Services\n\nHi [Receiver],\n\nI'm reaching out because I need some straightforward answers about your services. I'm a web developer, not someone who has time for all the marketing fluff.\n\nI've been looking into ProCare Facility Solutions for some facility management and maintenance needs for my office space. However, your website is filled with a lot of jargon and not enough clear information. I need to know exactly what you offer without all the buzzwords.\n\nI've already gone through your website and read the descriptions, but it's still not clear to me what sets you apart from other companies. 
Can you provide a simple, no-nonsense breakdown of your services and how they can specifically benefit a small office like mine?\n\nLooking forward to a clear and concise response.\n\nThanks,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"negative\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Immediate Assistance Required for Facility Management Issue\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am currently residing in one of the residential complexes managed by ProCare Facility Solutions. I am reaching out to you with an urgent matter that requires immediate attention.\n\nOver the past few days, I have noticed significant issues with the facility management in my building. Specifically, there have been recurring problems with the HVAC system, which has resulted in inconsistent heating and cooling. This has made it quite challenging to maintain a comfortable living environment, especially given the current weather conditions.\n\nI have already attempted to address this issue by contacting the building's maintenance staff directly, but unfortunately, the problem persists. 
Given the urgency of the situation, I am requesting that a qualified technician be dispatched as soon as possible to resolve this matter.\n\nYour prompt assistance in this regard would be greatly appreciated, as it is crucial for me to have a stable and comfortable living environment to focus on my studies.\n\nThank you for your understanding and swift action.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": true, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Inquiry About Your Services\n\nDear ProCare Facility Solutions Team,\n\nI hope this message finds you well. My name is [Sender], and I am a painter who finds inspiration in the surreal and the dreamlike. I recently came across your company and was intrigued by the comprehensive range of services you offer.\n\nAs someone who spends a lot of time in my studio, I understand the importance of a well-maintained and clean environment. I am particularly interested in your cleaning services and how they might help create a more inspiring and efficient workspace for my creative endeavors.\n\nCould you provide more details about your specialized cleaning services, particularly deep cleaning and eco-friendly practices? I am curious to know how these services could be tailored to fit the unique needs of an artist's studio.\n\nI haven't taken any steps yet to engage your services, as I wanted to gather more information first. 
Your expertise and commitment to quality are quite appealing, and I am eager to learn more about how ProCare Facility Solutions can support my artistic journey.\n\nThank you for your time and assistance. I look forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Inquiry on Sustainability and Environmental Practices\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am a sports agent who has been utilizing your facility management services for my clients' residential and commercial properties. I have always appreciated the high standards of maintenance and cleanliness your team provides, which is crucial for maintaining peak physical condition for my clients.\n\nI am reaching out to inquire about the sustainability and environmental practices implemented by ProCare Facility Solutions. Given the increasing importance of eco-friendly practices in our industry, I am keen to understand how your services align with these values. Specifically, I would like to know more about the eco-friendly cleaning products you use and any initiatives you have in place to reduce the carbon footprint of the facilities you manage.\n\nWhile I have not encountered any specific issues, I believe it is essential to stay informed about the environmental impact of the services we utilize. 
This information will not only help me make informed decisions but also ensure that we are contributing positively to the environment.\n\nCould you please provide detailed information on your sustainability practices and any certifications or recognitions you have received in this area? Additionally, if there are any upcoming initiatives or changes in your environmental policies, I would appreciate being informed.\n\nThank you for your attention to this matter. I look forward to your prompt response.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Routine Maintenance Request for HVAC System\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am the owner of a small business that has been utilizing ProCare Facility Solutions for our maintenance needs for the past two years. Your services have always been reliable, and I appreciate the peace of mind that comes with knowing our facility is in good hands.\n\nI am writing to request routine maintenance for our HVAC system. While there are no immediate issues, I believe it is prudent to ensure everything is functioning optimally, especially as we approach the colder months. Regular upkeep is crucial for us, not only to maintain a comfortable environment for our employees and customers but also to stay compliant with various regulations.\n\nSo far, we have been following the maintenance schedule provided by your team, and it has served us well. 
However, I would like to ensure that we are not missing any critical checks or updates that might be necessary at this time.\n\nCould you please arrange for a technician to perform a thorough inspection and any required maintenance on our HVAC system? I would appreciate it if this could be scheduled at your earliest convenience, though there is no immediate rush.\n\nThank you for your attention to this matter. I look forward to your prompt response and continued excellent service.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Inquiry Regarding Quality and Safety Standards\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am currently a student with a keen interest in software engineering, particularly in the realm of operating systems. I have been following ProCare Facility Solutions for some time now, and I am impressed by your commitment to quality and sustainability.\n\nI am writing to you today with a few questions regarding the quality and safety standards of your services. 
While I understand that your team adheres to high standards, I am curious about the specific protocols and measures you have in place to ensure the safety and well-being of both your staff and clients, especially in the context of your cleaning services.\n\nTo provide some context, I have been researching various facility management companies for a project, and I am particularly interested in how companies like yours maintain a balance between efficiency and safety. I have reviewed the information available on your website, but I would appreciate more detailed insights into your safety protocols, especially any recent updates or changes.\n\nI haven't taken any specific steps to address this inquiry beyond reviewing your online resources, as I believe direct communication with your support team would provide the most accurate and comprehensive information.\n\nCould you please provide me with more details on your quality and safety measures? Any additional information or resources you could share would be greatly appreciated.\n\nThank you for your time and assistance. I look forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Inquiry About Maintenance and Cleaning Services\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am a culinary instructor specializing in Indian cuisine. 
I organize cooking classes and events, and I am exploring options to ensure that the facilities I use are maintained to the highest standards.\n\nI have heard positive things about ProCare Facility Solutions and am particularly interested in your maintenance services. Could you provide more details on how your services can be tailored to meet the needs of a culinary teaching environment? Specifically, I am looking for information on your routine maintenance plans and any specialized cleaning services that might be beneficial for a kitchen setting.\n\nI have not yet taken any steps to engage your services, as I wanted to gather more information first. Your assistance in providing detailed information about your offerings would be greatly appreciated.\n\nThank you for your time and support. I look forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Scheduling Cleaning Services for Upcoming Semester\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is Dr. [Sender], and I am an engineering professor at [University/Institution]. I have been consistently impressed with the quality of your services, which have significantly contributed to maintaining a conducive learning environment for our students and staff.\n\nAs we prepare for the upcoming semester, I would like to discuss the scheduling of cleaning services for our engineering department. 
Given the importance of project management skills in technical fields, it is crucial that our facilities remain in top condition to support various hands-on projects and research activities.\n\nI have reviewed our current cleaning schedule and believe that a few adjustments could further enhance the efficiency and cleanliness of our labs and classrooms. Specifically, I am interested in exploring options for more frequent deep cleaning sessions and specialized cleaning for our high-traffic areas, including the use of eco-friendly products.\n\nWhile this request is not urgent, I would appreciate your assistance in coordinating these changes at your earliest convenience. I have not yet taken any steps to modify our existing schedule, as I wanted to consult with your team first to ensure we implement the best possible plan.\n\nThank you for your continued support and dedication to excellence. I look forward to working with you to maintain our facilities at the highest standard.\n\nBest regards,\n\nDr. [Sender] \n[University/Institution] \n[Contact Information]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": true, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Scheduling Cleaning Services for My Shop\n\nHey ProCare Team,\n\nHope y'all are doing well. My name's Hank, and I run a small auto repair shop here in town. I've been hearing good things about your cleaning services and figured it's time to give y'all a shout.\n\nMy shop's been getting a bit too dusty and greasy for my liking, and I reckon it's high time for a proper clean-up. 
I'm looking to set up a regular cleaning schedule\u2014maybe something weekly or bi-weekly. I want to make sure the place stays spick and span, especially the waiting area where my customers hang out.\n\nI haven't tried any other cleaning services yet, but I thought I'd start with the best. Could you let me know what my options are and how soon we can get this rolling? I'm not in a huge rush, but I'd like to get it sorted out sooner rather than later.\n\nThanks a bunch for your help. Looking forward to hearing from you.\n\nBest,\nHank\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": true, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Immediate Attention Required: Emergency Repair Needed\n\nHey ProCare Support Team,\n\nThis is [Sender], and I have to say, I'm not impressed. I've been using your services for a while now, and I expected better. Right now, I'm dealing with a major issue that needs your immediate attention.\n\nThe HVAC system in my building has completely failed, and it's causing a lot of problems. The temperature is unbearable, and it's affecting everyone here. This isn't just an inconvenience; it's a serious problem that needs to be fixed right away. I've tried resetting the system and checking the circuit breakers, but nothing has worked.\n\nI need your team to come out and fix this immediately. This kind of failure is unacceptable, and I expect a prompt response. 
If this isn't resolved quickly, I might have to reconsider using your services in the future.\n\nLooking forward to your swift action.\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": true, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"negative\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Immediate Attention Required for Sustainability Practices\n\nDear ProCare Support Team,\n\nI hope this message finds you well, though I must admit, I am quite frustrated as I write this. My name is [Sender], and I have been utilizing your services for my dairy farm's facility management for the past year. While I initially chose ProCare Facility Solutions for your reputed expertise and commitment to quality, recent experiences have left me questioning that decision.\n\nAs a dairy farmer who deeply values sustainable practices, I was particularly drawn to your promise of implementing best practices for energy efficiency and environmental impact reduction. However, I have noticed a significant lack of follow-through in this area. Despite multiple assurances, there has been no visible effort to incorporate eco-friendly solutions into the maintenance and cleaning services provided to my farm.\n\nI have already reached out to your team on two separate occasions to address this issue, but the responses have been unsatisfactory and vague at best. I was assured that someone would look into it, yet here we are, with no tangible progress or updates.\n\nI am requesting immediate and concrete action to rectify this situation. 
Specifically, I need a detailed plan outlining how ProCare intends to integrate sustainable practices into the services provided to my farm. This includes the use of eco-friendly cleaning products, energy-efficient maintenance solutions, and any other measures that align with your advertised commitment to sustainability.\n\nI trust that you will treat this matter with the urgency it deserves and provide a satisfactory resolution promptly. Failure to do so will force me to reconsider my association with ProCare Facility Solutions.\n\nLooking forward to your prompt response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"negative\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Assistance Needed for Routine HVAC Maintenance at Residential Complex\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am a single parent and police officer residing in one of the residential complexes managed by ProCare Facility Solutions. I have always appreciated the high level of service and attention to detail your team provides, which is why I am reaching out with confidence today.\n\nRecently, I have encountered an issue with the HVAC system in my apartment. The system has been malfunctioning intermittently, causing significant discomfort for my family, especially during these fluctuating weather conditions. 
Given my demanding job and the importance of maintaining a comfortable home environment for my children, I am seeking your assistance to address this matter promptly.\n\nI have already tried resetting the system and checked the thermostat settings, but the problem persists. I believe it may require a more thorough inspection and possibly some routine maintenance by your skilled team.\n\nCould you please arrange for a technician to visit and assess the situation at the earliest convenience? While this is not an immediate emergency, it is crucial for us to have a reliable HVAC system, especially considering the current weather patterns.\n\nThank you for your attention to this matter. I am confident that your team will handle this with the same professionalism and efficiency that I have come to expect from ProCare Facility Solutions.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Feedback on Recent Service and Minor Safety Concern\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been a satisfied client of ProCare Facility Solutions for the past two years. Your team has consistently provided exceptional service, and I truly appreciate the dedication and professionalism you bring to maintaining my property.\n\nRecently, I noticed a minor issue that I believe warrants your attention. 
During the last routine maintenance visit, I observed that the emergency exit signs in the building's common areas were not as visible as they should be. While this is not an immediate concern, I think it\u2019s important to address it to ensure the continued safety and quality of our environment.\n\nI haven't taken any steps to resolve this myself, as I trust your expertise in handling such matters. Could you please arrange for someone to inspect and, if necessary, improve the visibility of these signs at your earliest convenience?\n\nThank you for your attention to this matter and for your ongoing commitment to excellence. I look forward to your prompt response and continued outstanding service.\n\nWarm regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Request for Comprehensive Plumbing Assessment\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been using your services for my home for quite some time now. I appreciate the quality of work your team consistently delivers.\n\nI am writing to bring to your attention a recurring issue with the plumbing system in my house. Given the age of the plumbing, I frequently encounter problems that require repairs. 
While I understand that older systems can be problematic, I am concerned about the long-term safety and reliability of the plumbing in my home.\n\nIn the past, I have had your team come out for repairs, and while the immediate issues were resolved, new problems seem to arise shortly after. I have not taken any additional steps beyond calling for repairs, as I trust your expertise in handling these matters.\n\nI would like to request a more comprehensive assessment of the plumbing system to identify any underlying issues that might be causing these frequent problems. A detailed inspection and a tailored maintenance plan would be greatly appreciated to ensure the safety and reliability of the plumbing in my home.\n\nThank you for your attention to this matter. I look forward to your prompt response and assistance.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Urgent Attention Needed for HVAC Maintenance Issue\n\nDear ProCare Support Team,\n\nI hope this message finds you well, though I must admit, my current experience with your services has been far from satisfactory. My name is [Sender], and I have been a client of ProCare Facility Solutions for the past year, relying on your team to maintain my residential property. Unfortunately, I am compelled to reach out due to a persistent issue that has not been addressed adequately.\n\nFor the past two weeks, I have been experiencing significant problems with the HVAC system in my home. 
Despite following the recommended maintenance schedule and even reaching out to your team for a routine check-up, the issue remains unresolved. The system is not functioning correctly, leading to uncomfortable living conditions, which is particularly distressing given the current weather conditions.\n\nI have already contacted your support team twice, and while I appreciate the initial prompt response, the follow-up has been lacking. The technician who visited assured me that the problem was fixed, but it has since reoccurred, causing me considerable inconvenience and frustration.\n\nI am requesting immediate assistance to resolve this matter once and for all. It is disheartening to feel neglected, especially when I have placed my trust in your company to ensure my home remains a safe and comfortable environment. I urge you to prioritize this issue and provide a permanent solution at the earliest convenience.\n\nThank you for your attention to this matter. I look forward to a swift and effective resolution.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": true, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"negative\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Urgent: Immediate Attention Required for Quality and Safety Concerns\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am a product owner who has been relying on your facility management services for our commercial properties for the past year. 
Unfortunately, I am writing to express my dissatisfaction with the recent quality and safety standards observed at our facilities.\n\nOver the past few weeks, I have noticed a significant decline in the overall maintenance and cleanliness of our office spaces. Specifically, there have been recurring issues with the HVAC system, which has led to uncomfortable working conditions for our employees. Additionally, the cleaning services have not been up to the mark, with several areas being neglected and not meeting the expected hygiene standards.\n\nI have already reached out to your support team on a couple of occasions to address these concerns, but the responses have been slow and the issues remain unresolved. This lack of prompt action is quite disappointing, especially considering the premium we pay for your services.\n\nI am requesting an immediate and thorough review of the current maintenance and cleaning protocols in place at our facilities. It is crucial that these issues are addressed promptly to ensure a safe and efficient working environment for our team. I would appreciate it if you could escalate this matter to the appropriate department and provide a detailed plan of action to rectify these problems.\n\nThank you for your urgent attention to this matter. 
I look forward to your prompt response and a swift resolution to these concerns.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"negative\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Urgent Assistance Needed for HVAC System\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I've been a loyal customer of ProCare Facility Solutions for a few years now. Your team has always done a fantastic job keeping my property in top shape, and I truly appreciate the dedication and expertise you bring to the table.\n\nI'm reaching out today because I'm facing an urgent issue with the HVAC system in my home. As a music enthusiast and vinyl collector, I spend a lot of time in my dedicated music room, and maintaining the right temperature and humidity levels is crucial for preserving my collection. Unfortunately, the HVAC system seems to have malfunctioned, and it's causing significant discomfort and potential risk to my vinyl records.\n\nI've tried resetting the system and checking the thermostat, but nothing seems to be working. Given the importance of this issue, I would greatly appreciate it if you could send someone over as soon as possible to diagnose and fix the problem. Your prompt assistance with this emergency repair would mean the world to me and help ensure that my cherished collection remains in pristine condition.\n\nThank you so much for your attention to this matter. 
I look forward to your swift response and resolution.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": true, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Inquiry About Your Eco-Friendly Practices\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am a mother of a 10-year-old daughter who has recently started Irish dance. We live in a residential complex where your team provides cleaning and maintenance services.\n\nI am writing to inquire about the eco-friendly practices your company employs. As someone who is conscious about the environment and wants to set a good example for my daughter, I am keen to understand how your services align with sustainable practices. Specifically, I am interested in the types of cleaning products you use and any measures you take to reduce the carbon footprint of your operations.\n\nI have noticed that the cleaning staff is very diligent and thorough, which I appreciate. However, I would like to know more about the environmental impact of the products and methods used. Are there any certifications or standards that your company adheres to in this regard?\n\nI haven't taken any steps to address this concern previously, as I wanted to gather more information first. Your prompt response would be greatly appreciated, as it will help me make informed decisions about the services we use.\n\nThank you for your attention to this matter. 
I look forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Immediate Attention Required for Urgent HVAC Repair\n\nDear ProCare Support Team,\n\nI hope this message finds you well, though I must admit I am quite frustrated as I write this. My name is [Sender], and I am the president of the local bar association. We have been utilizing your services for our office building for some time now, and I have generally been satisfied with the quality of your work. However, recent events have left me quite disappointed.\n\nWe are currently facing a significant issue with our HVAC system, which has been malfunctioning for the past two days. This is causing considerable discomfort for our staff and visitors, and it is unacceptable given the high standards we expect from ProCare Facility Solutions. The temperature in our offices has become unbearable, and this is severely impacting our daily operations.\n\nI have already attempted to reach out to your support team via phone and email, but I have yet to receive a satisfactory response or any indication that this issue is being addressed with the urgency it requires. Given the critical nature of our work and the importance of maintaining a conducive environment, this delay is simply not acceptable.\n\nI am requesting immediate assistance to resolve this HVAC issue. We need a technician on-site as soon as possible to diagnose and fix the problem. 
This matter cannot wait any longer, and I expect prompt action from your team.\n\nAdditionally, I would like to express my dissatisfaction with the lack of timely response to this urgent matter. It is crucial that such emergency repair needs are addressed swiftly to avoid further inconvenience.\n\nThank you for your immediate attention to this urgent matter. I look forward to a swift resolution.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": true, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"negative\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Routine Maintenance Request for Theater Facility\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am a theater producer who has been working closely with your team to ensure our facility remains in top condition. I have always admired the innovative methods your team employs, which have significantly contributed to the smooth operation of our workshops and classes.\n\nI am writing to request routine maintenance for our theater facility. Specifically, we have noticed that the HVAC system is not performing optimally, and there are minor plumbing issues in the restrooms that need attention. These issues are not urgent but do require timely intervention to prevent any disruption to our scheduled activities.\n\nWe have not taken any steps to address these issues internally, as we prefer to rely on your expertise to ensure everything is handled correctly and efficiently.\n\nCould you please arrange for a maintenance visit at your earliest convenience? 
We would appreciate it if the visit could be scheduled within the next week to ensure everything is in order before our next major event.\n\nThank you for your attention to this matter. We look forward to your prompt response and continued excellent service.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Minor Facility Management Issue\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been thoroughly enjoying the exceptional facility management services provided by ProCare Facility Solutions for our office building over the past year. Your team's dedication to maintaining a seamless and efficient environment has truly been commendable.\n\nI am writing to bring a minor issue to your attention regarding the coordination of space utilization in our office. While the overall management has been excellent, we have noticed a slight inconsistency in the allocation of meeting rooms, which occasionally leads to double bookings. This is not a pressing concern, but addressing it would certainly enhance our experience further.\n\nWe have tried to manage the bookings internally by adjusting our schedules and communicating with the team, but the issue persists intermittently. I believe a review of the current system or a minor adjustment could resolve this smoothly.\n\nCould you please look into this matter at your earliest convenience? 
Your expertise and prompt attention to even the smallest details have always been appreciated, and I am confident this will be no exception.\n\nThank you for your continued support and for making our work environment so pleasant.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Routine Maintenance Request for HVAC System\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been a satisfied client of ProCare Facility Solutions for the past two years. Your team has always provided exceptional service, and I truly appreciate the dedication and professionalism you bring to maintaining our facility.\n\nRecently, I noticed a minor issue with our HVAC system. While it is still functioning, it seems to be making a slight noise that wasn't there before. It's not causing any immediate problems, but I thought it would be best to address it before it potentially becomes a bigger issue.\n\nI haven't taken any steps to resolve this on my own, as I trust your expertise in handling such matters. Could you please arrange for a technician to come by and take a look at the system at your earliest convenience? I understand this isn't an urgent matter, so scheduling it at a time that works best for your team would be perfectly fine.\n\nThank you for your attention to this matter. 
I look forward to your prompt response and continued excellent service.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Assistance Needed with Facility Management\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been using ProCare Facility Solutions for managing my home in Wisconsin. As a single parent going through a challenging divorce, I am striving to bring some tranquility and balance back into my life.\n\nRecently, I have noticed a few issues with the facility management services at my residence. Specifically, there seems to be a lack of coordination in space utilization and some inconsistencies in the implementation of energy efficiency practices. While these issues are not urgent, they are affecting the overall harmony and efficiency of my home environment.\n\nI haven't taken any specific steps to address these concerns yet, as I wanted to reach out to your team first for guidance. Could you please assist me in resolving these issues? I am looking for a more streamlined approach to space management and better adherence to energy-saving practices.\n\nThank you for your attention to this matter. 
I appreciate your support and look forward to your assistance in bringing back the balance and tranquility I seek.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Feedback on Recent Service Experience\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am a public relations officer with the [Government Department]. We have been utilizing your facility management and maintenance services for our office building for the past year, and I must say, the experience has been largely positive.\n\nHowever, I wanted to share some feedback regarding a recent interaction that, while not urgent, I believe could help improve your already commendable services. We recently had a scheduled maintenance visit for our HVAC system, and while the technician was professional and thorough, the process seemed to involve a bit more paperwork and procedural steps than necessary. This has been a point of satirical critique within our department, as it mirrors the very bureaucracy we often get lampooned for.\n\nI understand that thoroughness is key to ensuring quality service, and I appreciate the attention to detail. However, streamlining some of these processes could enhance efficiency and reduce the time spent on administrative tasks. 
I have not taken any steps to address this internally, as I believe your team is best equipped to evaluate and implement any necessary changes.\n\nCould you please look into this and consider if there are ways to simplify the procedural aspects of your maintenance services? Your support in this matter would be greatly appreciated, and I am confident that any improvements will only add to the excellent service we have come to expect from ProCare Facility Solutions.\n\nThank you for your attention to this matter. I look forward to your response and continued collaboration.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Inquiry Regarding Sustainability and Environmental Practices\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am part of the Human Resources department at [TARP Recipient Institution]. We have been utilizing your facility management services for our office building for some time now, and overall, we are quite satisfied with the quality and professionalism your team consistently delivers.\n\nI am reaching out to inquire about the sustainability and environmental practices that ProCare Facility Solutions implements as part of your service offerings. 
As an institution that places a high value on environmental responsibility, we are keen to understand more about the specific measures and initiatives your company undertakes to promote sustainability.\n\nCould you please provide detailed information on the eco-friendly products and practices you use, particularly in your cleaning services? Additionally, we are interested in learning about any energy efficiency programs or environmental impact reduction strategies you have in place for facility management and maintenance services.\n\nWe have not encountered any issues so far, but we are looking to ensure that our partnership aligns with our institution's sustainability goals. Any documentation or resources you could share would be greatly appreciated.\n\nThank you for your attention to this matter. I look forward to your response.\n\nBest regards,\n\n[Sender] \nHuman Resources Department \n[TARP Recipient Institution]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Hey ProCare Team,\n\nHope you\u2019re all doing well! My name\u2019s Jake, and I\u2019ve been using your services for a while now. Gotta say, you guys have been doing a stellar job keeping my place in top shape. I never thought I\u2019d be the type to care about this stuff, but here we are.\n\nSo, I\u2019ve got a bit of a situation. My sister and I have been spending more time at home lately, and we\u2019ve turned one of the rooms into a mini-library. It\u2019s been great, but I\u2019ve noticed the air quality in there isn\u2019t the best. 
I\u2019m guessing it\u2019s something to do with the HVAC system. It\u2019s not a huge deal, but it\u2019s definitely something I\u2019d like to get sorted out sooner rather than later.\n\nI haven\u2019t tried fixing it myself because, let\u2019s be honest, I\u2019d probably make it worse. I did check the filters, and they seem fine, but beyond that, I\u2019m out of my depth. Could you guys send someone over to take a look and maybe give the system a bit of a tune-up?\n\nThanks a ton for your help. Looking forward to hearing back from you soon.\n\nBest,\nJake"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Scheduling Cleaning Services for My Studio\n\nHi ProCare Support Team,\n\nI hope this message finds you well! My name is [Sender], and I\u2019m a concept artist who has been enjoying the pristine environment your team helps maintain. Your services have been a game-changer for my creative space, and I truly appreciate the dedication and professionalism you bring to the table.\n\nI\u2019m reaching out to discuss scheduling the next round of cleaning services for my studio. The space has been a whirlwind of creativity lately, and while it\u2019s not urgent, I\u2019d love to get a date on the calendar for a thorough cleaning. Your team\u2019s attention to detail always leaves my workspace feeling fresh and inspiring, which is crucial for my work.\n\nI haven\u2019t taken any steps yet to schedule this, as I wanted to touch base with you first to see what dates might be available. 
Ideally, I\u2019m looking for a slot sometime in the next couple of weeks, but I\u2019m flexible and happy to work around your schedule.\n\nCould you please assist me in setting up a convenient time for the cleaning? I\u2019m looking forward to continuing our collaboration and keeping my studio in top shape.\n\nThank you so much for your help!\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": true, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Assistance Needed for Facility Management Training\n\nHi ProCare Support Team,\n\nHope you're all doing well! My name is Alex, and I've recently started managing a residential complex here in the city. I've been really impressed with the quality of your services and the positive impact they've had on our property.\n\nAs someone who's more familiar with the fast-paced world of motorbike racing, I'm still getting the hang of facility management. I\u2019ve been particularly interested in your training programs and would love to get some guidance on best practices for managing our facility more efficiently.\n\nI've gone through some of the basic materials available on your website, but I feel like I could benefit from a more structured training session. Could you please provide me with more information on the available training programs and how I can enroll? 
Also, any tips on developing an in-house maintenance team would be greatly appreciated.\n\nLooking forward to your response and thank you in advance for your help!\n\nBest regards,\nAlex"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": true, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Inquiry Regarding Facility Management Services\n\nDear ProCare Facility Solutions Support Team,\n\nGreetings and blessings to you all. My name is Reverend Johnathan Smith, and I am the head pastor at Grace Community Church. We have been blessed to work with your esteemed company for the past year, and I must say, your services have been a godsend in maintaining our church facilities.\n\nI am writing to inquire about your facility management services, particularly in the context of our church's needs. We are looking to ensure that our environment remains not only clean and efficient but also a sanctuary where our congregation can gather in peace and safety. Your commitment to quality and sustainability aligns perfectly with our values, and we are eager to explore how we can further enhance our facility management practices.\n\nWhile we have been very satisfied with the routine maintenance and cleaning services provided, we are now considering a more comprehensive oversight of our facility operations. Specifically, we are interested in learning more about your space utilization and energy efficiency strategies. 
We believe that with your expertise, we can create an even more welcoming and sustainable environment for our community.\n\nWe have not yet taken any steps towards this new initiative, as we wanted to first seek your guidance and recommendations. Your support and advice have always been invaluable to us, and we trust that you will provide the best solutions tailored to our needs.\n\nCould you please provide us with more information on your facility management services and how we can integrate them into our current setup? Additionally, we would appreciate any insights on the best practices for energy efficiency and environmental impact reduction that you could share with us.\n\nThank you for your continued support and dedication to excellence. We look forward to your response and to further strengthening our partnership with ProCare Facility Solutions.\n\nBlessings,\n\nReverend Johnathan Smith\nGrace Community Church"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": true}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Inquiry About Facility Management Services\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am reaching out to you with some questions about your facility management services. 
I recently came across your company and am very interested in the comprehensive solutions you offer for both residential and commercial properties.\n\nI am currently in the process of managing a small residential complex and am exploring options to improve our facility management and maintenance routines. Your services, particularly the routine and preventative maintenance for building systems, caught my attention. However, as someone who is relatively new to this field, I find myself a bit overwhelmed and unsure about where to start.\n\nCould you please provide more detailed information on how your facility management services work, especially for someone like me who is just getting started? Additionally, I would appreciate any guidance or recommendations you might have for a novice in this area. \n\nI am also interested in learning more about your training programs and support for developing in-house maintenance teams. Any information on how to enroll in these programs would be greatly appreciated.\n\nI have not taken any specific steps yet, as I wanted to gather more information before making any decisions. Your expertise and advice would be incredibly valuable to me at this stage.\n\nThank you very much for your time and assistance. I look forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": true, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Inquiry on Enhancing Sustainability Practices\n\nHi ProCare Support Team,\n\nI hope this message finds you well! 
My name is [Sender], and I\u2019m an entrepreneur working on some exciting projects in the entertainment industry. I\u2019ve been really impressed with the comprehensive services ProCare Facility Solutions offers, especially your commitment to sustainability and environmental practices.\n\nAs someone who is passionate about creating open-source platforms that revolutionize our industry, I\u2019m always on the lookout for ways to integrate more sustainable practices into our operations. I\u2019m particularly interested in how ProCare can help us enhance our environmental impact reduction efforts and energy efficiency.\n\nI\u2019ve been exploring various options and have already implemented some basic eco-friendly measures, but I believe there\u2019s a lot more we can do. I\u2019d love to hear more about the specific strategies and technologies you recommend for a business like ours. Additionally, any insights on how we can better coordinate our sustainability efforts would be incredibly valuable.\n\nCould you please provide more details on your sustainability services and perhaps suggest a tailored plan that aligns with our goals? I\u2019m eager to collaborate and take our environmental practices to the next level.\n\nThank you so much for your time and assistance. 
I\u2019m looking forward to your response and working together to create a greener future.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Routine Maintenance Request for HVAC System\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I\u2019ve been using ProCare Facility Solutions for the maintenance of my design studio for the past year. Your services have always been top-notch, and I truly appreciate the peace of mind that comes with knowing my workspace is in good hands.\n\nI\u2019m writing to request a routine maintenance check for the HVAC system in my studio. It\u2019s been a while since the last inspection, and I want to ensure everything is running smoothly, especially with the change in seasons. There\u2019s no immediate issue, but I believe in staying ahead of potential problems.\n\nI haven\u2019t taken any steps yet, as I trust your team\u2019s expertise to handle this efficiently. Could you please schedule a visit at your earliest convenience? I\u2019m flexible with timing, so any slot that works for your team should be fine.\n\nThank you for your attention to this matter. 
Looking forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Inquiry Regarding Eco-Friendly Cleaning Services\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am a government official currently working on initiatives to promote the use of eco-friendly products, particularly in the realm of cleaning services. I have recently come across your company and am impressed by your commitment to sustainability and the use of eco-friendly cleaning products.\n\nI am reaching out to inquire about the scheduling of your specialized cleaning services that utilize eco-friendly dyes and products. As part of our regulatory efforts, we are looking to partner with organizations that can provide exemplary services while adhering to environmentally friendly practices.\n\nCould you please provide me with more information on how we can schedule these services for a few government buildings we are looking to maintain? Additionally, I would appreciate any details on the frequency and flexibility of your cleaning schedules, as well as any specific protocols you follow to ensure the use of eco-friendly products.\n\nI have not taken any prior steps regarding this matter, as I wanted to first understand the options available through your esteemed company. Your assistance in this regard would be greatly appreciated.\n\nThank you for your time and attention. 
I look forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": true, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Urgent Request for Specialized Cleaning Services\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I manage a bustling bar in the heart of the city. We've been relying on ProCare Facility Solutions for our regular cleaning needs for quite some time now, and I must say, your team has always done a commendable job.\n\nHowever, I'm reaching out today with an urgent request. Last night, we hosted a large event, and despite our best efforts, the place is in dire need of a deep clean. The carpets are stained, the windows are smudged, and there's an overall need for a thorough, specialized cleaning to get everything back to its pristine condition.\n\nGiven the high traffic and the nature of our business, it's crucial that we address this immediately to maintain our standards and ensure a welcoming environment for our patrons. I've already tried to handle some of the cleaning myself, but it's clear that we need professional intervention to get the job done right.\n\nCould you please arrange for a specialized cleaning team to come in as soon as possible? We need this taken care of urgently to avoid any disruption to our operations. Your prompt assistance would be greatly appreciated.\n\nThank you for your attention to this matter. 
I look forward to your swift response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Frustration with Specialized Cleaning Services\n\nHi [Receiver],\n\nI hope this message finds you well, though I must admit I'm not in the best of moods as I write this. My name is [Sender], and I've been working with ProCare Facility Solutions for a while now, primarily utilizing your specialized cleaning services for our office space.\n\nLately, I've been grappling with a dilemma regarding the balance between open-source and proprietary software for my projects, and it's been quite a headache. However, what\u2019s adding to my frustration is the inconsistency in the cleaning services we've been receiving. Despite having a set schedule, there have been multiple instances where the quality of cleaning has noticeably declined. This is particularly concerning given the importance of maintaining a pristine environment for our team.\n\nI've tried addressing this issue by speaking with your on-site staff, but the results have been less than satisfactory. It's disheartening to see that despite these efforts, the problem persists.\n\nI would appreciate it if you could look into this matter and provide a more reliable solution. Perhaps a review of the current cleaning protocols or a reassessment of the team assigned to our facility could help. 
I\u2019m really hoping for a resolution that ensures consistent and high-quality cleaning services moving forward.\n\nThank you for your attention to this matter. I look forward to your prompt response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"negative\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Hey ProCare Support Team,\n\nHope you're all doing well and not buried under a mountain of facility management queries! My name's Alex, and I've been a happy camper with ProCare Facility Solutions for a while now. You guys have been the unsung heroes keeping my office space spick and span, and I can't thank you enough for that.\n\nSo, here's the deal: I've got a bit of a head-scratcher for you. We're looking to ramp up our sustainability efforts in the office, and I was wondering if you could shed some light on the best practices for energy efficiency and environmental impact reduction. I know you folks are the wizards of eco-friendly solutions, and I could really use some of that magic right now.\n\nI've poked around your website and read through some of the materials you have, but I think I need a bit more guidance to get things rolling. Maybe a checklist or a step-by-step guide? Anything that can help us make our office greener without turning it into a jungle (though a few more plants wouldn't hurt!).\n\nLooking forward to your expert advice and maybe a few laughs along the way. 
Thanks a ton in advance!\n\nBest,\nAlex"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Inquiry About ProCare Facility Solutions Services\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I recently retired after a fulfilling career in the entertainment industry, where I had the pleasure of shaping PR strategies for some of the most renowned companies. I have always appreciated the importance of a well-maintained environment, both in professional and personal settings.\n\nI am reaching out to you with a few inquiries regarding the comprehensive services offered by ProCare Facility Solutions. Having heard commendable things about your expertise and commitment to quality, I am considering engaging your services for a residential property I own. Specifically, I am interested in learning more about your facility management and maintenance services, as well as the training programs you offer for in-house staff.\n\nTo provide some context, I have a luxury apartment complex that requires meticulous upkeep and efficient management. While I have a basic understanding of the services you provide, I would appreciate more detailed information on how your customized maintenance plans and eco-friendly cleaning practices could benefit my property. 
Additionally, I am keen to understand the scope and structure of your training programs, as I believe in empowering my team with the best practices in facility management.\n\nI have not yet taken any steps towards engaging a facility management service, as I wanted to ensure I gather all necessary information before making a decision. Your prompt and detailed response would be greatly appreciated, as it will help me make an informed choice.\n\nThank you for your time and assistance. I look forward to hearing from you soon and potentially working together to maintain a pristine and efficient environment for my residents.\n\nWarm regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": true, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Request for Facility Review and Maintenance\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am the administrator at [School Name], where we have been utilizing ProCare Facility Solutions for our facility management and maintenance needs. We have always appreciated the high standards of service your team provides, which aligns with our commitment to ensuring a safe and conducive learning environment for our students.\n\nRecently, we have encountered some issues that I believe need your attention. Specifically, there have been several instances where the quality and safety of our facilities have not met the expected standards. 
For example, we have noticed that the HVAC system in the main building has been inconsistent, leading to uncomfortable temperatures in classrooms. Additionally, there have been a few minor plumbing issues that, while not urgent, could potentially escalate if not addressed promptly.\n\nWe have taken some initial steps to mitigate these issues, such as adjusting the HVAC settings and performing basic plumbing checks. However, these measures have only provided temporary relief, and we believe a more thorough inspection and maintenance are required to ensure long-term solutions.\n\nGiven the importance of maintaining a safe and comfortable environment for our students and staff, we kindly request that your team conduct a comprehensive review of our facility's systems. We would appreciate it if you could schedule a visit within the next week to address these concerns.\n\nThank you for your attention to this matter. We look forward to your prompt response and continued support in maintaining the high standards we have come to expect from ProCare Facility Solutions.\n\nBest regards,\n\n[Sender] \n[School Name] \n[Contact Information]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Inquiry About Facility Management Services for Residential Property\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am a home healthcare nurse providing dedicated support to elderly expats in Spain. 
I have recently come across your company and am interested in learning more about your facility management services, particularly for residential properties.\n\nAs part of my role, I often assist my clients with various aspects of their daily lives, including ensuring their living environments are safe and well-maintained. I believe that your comprehensive facility management and maintenance services could greatly benefit the elderly individuals I care for, helping to create a more comfortable and secure living space for them.\n\nCould you please provide me with more information about your residential facility management services? Specifically, I am interested in understanding how your customized maintenance plans work and what kind of support you offer for routine and preventative maintenance.\n\nI have not yet taken any steps to engage your services, as I wanted to gather more information first. Your prompt response would be greatly appreciated, as I am looking to make an informed decision on behalf of my clients.\n\nThank you for your time and assistance. I look forward to hearing from you soon.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Immediate Attention Required for Specialized Cleaning Service Issue\n\nHi ProCare Support Team,\n\nI hope this message finds you well, though I must admit, I'm not in the best of spirits as I write this. My name is [Sender], and I've been a loyal customer of ProCare Facility Solutions for quite some time now. 
I've always appreciated your commitment to quality and sustainability, but my recent experience has left me feeling quite disheartened.\n\nI recently scheduled a specialized cleaning service for my property, which was supposed to include deep cleaning and carpet maintenance. However, the service I received was far from satisfactory. The carpets were still stained, and the overall cleanliness of the space was not up to the high standards I have come to expect from ProCare. This is particularly frustrating given the urgency of the situation; I have an important event coming up, and I trusted your team to ensure everything would be spotless.\n\nI've already tried reaching out via phone and email, but I haven't received a response yet. This lack of communication is adding to my frustration, and I need this issue resolved immediately. I am requesting an urgent follow-up and a re-scheduling of the specialized cleaning service to rectify the situation. I expect this to be done at no additional cost, given the inconvenience and the subpar service initially provided.\n\nPlease get back to me as soon as possible to confirm the new appointment and ensure that this time, the service meets the high standards ProCare is known for.\n\nThank you for your prompt attention to this matter.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"negative\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Urgent: Quality and Safety Concerns at Commercial Property\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. 
My name is [Sender], and I have been overseeing the forum discussions related to facility management and maintenance for quite some time. I have always appreciated the comprehensive services that ProCare Facility Solutions offers, particularly your commitment to quality and sustainability.\n\nHowever, I have recently encountered some serious concerns regarding the quality and safety standards of the cleaning services provided at one of our commercial properties. Specifically, there have been multiple reports from our staff about inconsistent cleaning practices and potential safety hazards, such as improperly stored cleaning supplies and inadequate signage during maintenance activities.\n\nTo address these issues, I have already conducted a preliminary review and spoken with the on-site cleaning team to understand their procedures better. Despite these efforts, the concerns persist, and I believe a more thorough investigation and immediate intervention from your end are necessary.\n\nI would appreciate it if you could look into this matter urgently and provide guidance on how we can ensure that the quality and safety standards are consistently met. Additionally, any recommendations for immediate corrective actions would be highly valuable.\n\nThank you for your prompt attention to this matter. 
I look forward to your swift response and assistance in resolving these critical concerns.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Embracing Sustainability Together \ud83c\udf3f\n\nDear ProCare Support Team,\n\nI hope this message finds you well and thriving! My name is [Sender], and I\u2019ve been a delighted client of ProCare Facility Solutions for the past year. As a novelist who spends countless hours crafting tales of love and passion, I deeply appreciate the serene and pristine environment your services provide. It\u2019s like you\u2019ve created the perfect backdrop for my stories to come to life!\n\nRecently, I\u2019ve been reflecting on the importance of sustainability and how it aligns with the values I hold dear. I\u2019m a firm believer in soulmates, and I think our planet is the ultimate soulmate we must cherish and protect. I\u2019ve noticed that ProCare already incorporates eco-friendly cleaning products and practices, which is fantastic! However, I\u2019m curious to learn more about the specific measures you\u2019re taking to further enhance sustainability and reduce environmental impact.\n\nCould you please provide more details on your current sustainability initiatives and any future plans you might have in this area? Additionally, I\u2019d love to know if there are any ways I can contribute or support these efforts as a client.\n\nThank you so much for your time and dedication. 
I\u2019m looking forward to continuing this beautiful partnership and doing our part to make the world a better place, one clean and efficient facility at a time.\n\nWarm regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Urgent HVAC System Repair Needed\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am the curator at [Museum Name]. We have been utilizing ProCare Facility Solutions for our maintenance needs for the past year, and I have always appreciated the professionalism and quality of your services.\n\nI am writing to inform you of an urgent issue we are currently facing with our HVAC system. Over the past few days, we have noticed irregularities in the temperature control within our exhibit halls. This is particularly concerning as we have several sensitive classical art pieces that require a stable climate to ensure their preservation.\n\nWe have attempted to adjust the settings manually and have conducted a basic inspection of the system, but the problem persists. Given the importance of maintaining an optimal environment for our collections, I would appreciate it if you could send a technician to address this issue immediately.\n\nThank you for your prompt attention to this matter. 
I look forward to your swift response and resolution.\n\nBest regards,\n\n[Sender] \n[Museum Name] \n[Contact Information]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": true, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Immediate Attention Required for HVAC Emergency\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been a resident of [Residential Complex Name] in Maine for the past five years. I have generally been satisfied with the services provided by ProCare Facility Solutions, but I am writing to express a concern that requires your immediate attention.\n\nOver the past week, I have noticed a significant issue with the HVAC system in my apartment. Despite setting the thermostat to a comfortable temperature, the system fails to maintain the desired climate, leading to considerable discomfort. Given the current weather conditions, this is not just an inconvenience but a pressing matter that needs to be addressed promptly.\n\nI have already attempted to troubleshoot the problem by resetting the thermostat and checking the air filters, but these efforts have not resolved the issue. I also reached out to your customer service line two days ago and was assured that a technician would be dispatched, but I have yet to see any action taken.\n\nI am requesting that a qualified technician be sent to my residence as soon as possible to diagnose and fix the HVAC system. 
This matter is urgent, and I would appreciate a swift resolution to avoid further discomfort.\n\nAdditionally, I would like to provide feedback on the delay in response to my initial request. Timely service is crucial, especially for urgent maintenance issues, and I hope this can be improved in the future.\n\nThank you for your prompt attention to this matter. I look forward to your immediate response and resolution.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": true, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Urgent Concerns Regarding Sustainability Practices\n\nDear ProCare Support Team,\n\nI hope this message finds you well, though I must admit, my current sentiment is far from positive. My name is [Sender], and I have been utilizing your services for my residential property for the past year. As someone who deeply values the intricate layers of environmental consciousness, much like the conceptual depth found in Megan Cutler's novels, I am disheartened by recent observations regarding your sustainability practices.\n\nDespite your claims of prioritizing environmentally friendly methods, I have noticed several inconsistencies that suggest otherwise. For instance, the cleaning products used in my home do not appear to be eco-friendly, as they emit strong chemical odors and lack any certification labels. 
Additionally, the waste management practices seem haphazard, with recyclables often mixed with general waste.\n\nI have attempted to address these issues by speaking with your on-site staff, but their responses have been dismissive at best. This lack of accountability is not only frustrating but also undermines the trust I placed in your company\u2019s commitment to sustainability.\n\nI am seeking immediate clarification and rectification of these practices. Specifically, I would like a detailed explanation of the eco-friendly products and methods you claim to use, as well as a revised waste management plan that aligns with sustainable practices.\n\nYour prompt attention to this matter would be greatly appreciated, as it is crucial for me to ensure that my living environment aligns with my values of environmental stewardship.\n\nThank you for your understanding and cooperation.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"negative\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: A Quick Query from a Curious Customer\n\nHi ProCare Support Team,\n\nHope this email finds you well and not knee-deep in cleaning supplies or tangled in HVAC systems! My name is [Sender], and I\u2019ve been enjoying the sparkling services of ProCare Facility Solutions for a while now. You folks really know how to keep things shipshape!\n\nI\u2019ve got a bit of a head-scratcher for you. I\u2019m curious about the eco-friendly cleaning products you use. 
I\u2019ve been telling my friends about how green and clean my place is, and they\u2019re all ears. Could you provide a bit more detail on the types of products you use and any certifications they might have? I\u2019d love to pass on the good word and maybe even convert a few more eco-warriors to your cause.\n\nI haven\u2019t done much digging myself, apart from a quick glance at your website, which, by the way, is as spotless as my windows after your team\u2019s visit. So, I thought I\u2019d go straight to the source for the juicy details.\n\nLooking forward to your response, and keep up the fantastic work!\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Scheduling Deep Cleaning Services\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is Alex, and I've been using ProCare Facility Solutions for a while now to keep my office space in top shape. I must say, your services have been quite impressive so far.\n\nI'm reaching out today because I need to schedule a cleaning service for our office. We usually have a weekly cleaning routine, but I think it's time for a deep clean, especially with the change in seasons. The carpets could use some attention, and the windows haven't been washed in a while.\n\nI haven't taken any steps yet to schedule this, so I thought I'd start by contacting you directly. Could you please help me set up a time for this deep cleaning? 
Ideally, we'd like to have it done sometime next week, but we're flexible with the exact day and time.\n\nThanks in advance for your assistance. Looking forward to hearing from you soon.\n\nBest,\nAlex"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": true, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Urgent HVAC Repair Needed\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been a satisfied customer of ProCare Facility Solutions for the past two years. Your team has always provided exceptional service, and I truly appreciate the dedication and professionalism you bring to maintaining our residential complex.\n\nI am writing to you with a high-priority concern regarding our HVAC system. Over the past few days, we have noticed that the system is not functioning as efficiently as it should. The temperature regulation has become inconsistent, and there are unusual noises coming from the unit. Given the current weather conditions, this issue is causing significant discomfort for my family, especially for my young children.\n\nI have already checked the thermostat settings and ensured that the air filters are clean, but the problem persists. Given the urgency of the situation, I kindly request that a technician be dispatched as soon as possible to diagnose and repair the issue. Your prompt attention to this matter would be greatly appreciated, as it directly impacts the comfort and well-being of my family.\n\nThank you for your understanding and swift response. 
I look forward to hearing from you soon.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": true, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Seeking Guidance on Facility Management Training Programs\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been a client of ProCare Facility Solutions for the past year, benefiting greatly from your exceptional maintenance and cleaning services. Your team's dedication to creating a pristine and efficient environment has allowed me to focus on my writing, which often delves into the intricate dance between human endeavors and the relentless march of technology.\n\nRecently, I have been contemplating the idea of developing an in-house maintenance team to better manage the unique needs of my historical property. The charm of my residence, with its vintage architecture and timeless elegance, requires a delicate touch that I believe could be best achieved through a dedicated team trained in the best practices of facility management.\n\nI am particularly interested in your comprehensive training programs and would love to learn more about how they can be tailored to suit the specific requirements of my property. Could you provide me with detailed information on the available training modules, schedules, and any prerequisites? 
Additionally, I would appreciate guidance on how to seamlessly integrate these practices into our current maintenance routine.\n\nIn preparation for this transition, I have already begun to outline the key areas that need attention and have identified a few potential team members who are eager to undergo training. However, I believe that your expertise and structured programs will be instrumental in ensuring that we achieve the highest standards of care and efficiency.\n\nThank you for your continued support and for helping me maintain a harmonious balance between the historical essence of my home and the modern conveniences that make life more comfortable. I look forward to your response and to embarking on this new journey with ProCare Facility Solutions.\n\nWarm regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": true, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Feedback on Recent Maintenance Service\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been utilizing your facility management and maintenance services for my residential property for the past year. Overall, I have been quite satisfied with the quality and professionalism of your team.\n\nHowever, I wanted to bring to your attention a recent issue I encountered with the maintenance service. Last week, I scheduled a routine HVAC maintenance check, but the technician arrived late and seemed somewhat rushed. 
While the job was completed, I noticed that the system is still making an unusual noise, which was not present before the maintenance.\n\nI have not taken any further steps to address this issue yet, as I wanted to first reach out to your support team for guidance. Could you please advise on the next steps to resolve this matter? I would appreciate it if a follow-up visit could be arranged to ensure that everything is functioning correctly.\n\nThank you for your attention to this matter. I look forward to your prompt response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Request for Training and Support on Facility Management Best Practices\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am a first responder dedicated to educating the public on fire safety and prevention, particularly during firework displays. I have been utilizing ProCare Facility Solutions for our facility management needs and have been quite satisfied with the services provided thus far.\n\nI am reaching out to request some additional training and support on facility management best practices. Given the nature of my work, it is crucial that our facilities are maintained to the highest standards to ensure safety and efficiency. 
While I have a basic understanding of the necessary protocols, I believe that further training would be beneficial for both myself and my team.\n\nTo date, I have reviewed the materials available on your website and have implemented several of the recommended practices. However, I feel that a more comprehensive training program would greatly enhance our ability to manage our facilities effectively.\n\nCould you please provide information on any upcoming training sessions or support resources that we could access? Additionally, if there are any specific materials or guides that you recommend, I would appreciate your guidance.\n\nThank you for your attention to this matter. I look forward to your response and any assistance you can provide.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": true, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Immediate Assistance Required for HVAC System Failure\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am an editor at a popular entertainment magazine. We have been utilizing ProCare Facility Solutions for our office maintenance needs for the past year, and I must say, your services have always been top-notch.\n\nHowever, we are currently facing a critical issue that requires your immediate attention. Our HVAC system has completely failed, and with the summer heat, this has created an extremely uncomfortable working environment for our staff. 
Given the nature of our work, a comfortable and conducive environment is essential for productivity.\n\nWe have tried basic troubleshooting steps, such as resetting the system and checking the circuit breakers, but nothing seems to be working. This issue is beyond our in-house capabilities and needs professional intervention.\n\nCould you please dispatch a technician as soon as possible to address this urgent repair? We are in dire need of a swift resolution to ensure our operations can continue smoothly.\n\nThank you for your prompt attention to this matter. I look forward to your quick response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": true, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Request for Specialized Cleaning Services\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I\u2019ve been a satisfied customer of ProCare Facility Solutions for the past year. Your team has always done a fantastic job maintaining our office space, and I truly appreciate the dedication and professionalism you bring to your work.\n\nRecently, we\u2019ve noticed that our carpets and windows could use some extra attention. While the regular cleaning schedule has been great, I believe a specialized deep cleaning would really help in maintaining the pristine environment we strive for. 
Given the importance of first impressions in our line of work, having spotless carpets and gleaming windows is crucial.\n\nI haven\u2019t taken any steps yet to address this, as I wanted to reach out to the experts first. Could you please assist us in scheduling a deep cleaning service at your earliest convenience? We\u2019re flexible with timing, but ideally, we\u2019d like to have this done within the next couple of weeks.\n\nThank you so much for your continued support and for always going above and beyond. Looking forward to hearing from you soon.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": true, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Hey [Receiver],\n\nHope this note finds you well. I'm reaching out to you from the lyrical side of life, where words flow like rivers and rhythms never miss a beat. Name's [Sender], and I've been vibing with ProCare Facility Solutions for a minute now, keeping my creative space spotless and serene.\n\nLately, I've been needing to sync up on a cleaning schedule. My studio's been seeing more action than usual, and it's time to get those deep cleans and window washes back on track. I know y'all got the skills to keep it pristine, just like the verses I pen.\n\nI haven't taken any steps yet, just been letting the dust settle, but it's time to get proactive. Could you help me set up a regular cleaning schedule? Maybe something weekly or bi-weekly, whatever fits best with your flow.\n\nLooking forward to hearing back and getting this sorted. 
Appreciate the help, as always.\n\nPeace and respect,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": true, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Inquiry Regarding Facility Management Practices\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been utilizing your facility management services for my commercial property for the past year. While I appreciate the efforts your team puts into maintaining our space, I have some concerns that I would like to address.\n\nSpecifically, I am curious about the methodologies and practices your team employs in managing our facility. Given the increasing scrutiny on environmental sustainability and ethical business practices, I find it essential to understand how your services align with these principles. For instance, how do you ensure that the energy efficiency measures you implement are genuinely effective and not just superficial? Additionally, what steps are taken to guarantee that the materials and products used in maintenance and cleaning are ethically sourced and environmentally friendly?\n\nI have not encountered any immediate issues that require urgent attention, but I believe it is crucial to have a transparent understanding of these aspects to ensure that we are making informed decisions about our facility management. 
I have reviewed the information available on your website, but I would appreciate more detailed insights or documentation that can provide clarity on these points.\n\nCould you please provide me with a comprehensive overview of your facility management practices, particularly focusing on sustainability and ethical considerations? Any additional information or resources you can share would be greatly appreciated.\n\nThank you for your attention to this matter. I look forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Inquiry on Eco-Friendly Practices\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been a satisfied client of ProCare Facility Solutions for several years now. As a philologist, I have always appreciated the meticulous attention to detail and the commitment to excellence that your team brings to facility management and maintenance.\n\nI am writing to you today with a friendly query regarding your eco-friendly measures. As someone who values the preservation of our environment, I am keen to understand more about the specific sustainable practices ProCare employs in its cleaning and maintenance services. 
I have always admired the way your team seamlessly integrates environmental consciousness into your operations, and I am curious to learn more about the latest initiatives or technologies you might be using to further reduce environmental impact.\n\nWhile I have not encountered any pressing issues, I believe that staying informed about these practices can only enhance my appreciation for the services you provide. I have perused your website and found some information, but I would love to delve deeper into the specifics, especially any recent advancements or future plans you might have in this area.\n\nCould you kindly provide me with more detailed insights or direct me to any resources or contacts within your team who could assist me with this information? I am particularly interested in understanding how your sustainability efforts align with the latest industry standards and what unique approaches ProCare is taking to lead in this domain.\n\nThank you for your time and assistance. I look forward to your response and continuing our positive and productive relationship.\n\nWarm regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Immediate Action Required: Environmental Practices Falling Short\n\nHey ProCare Support Team,\n\nI hope this email finds you well, though I can't say the same for my current experience with your services. I'm Alex, a comedy podcaster who relies on a clean and efficient environment to keep the creative juices flowing. 
Unfortunately, things have taken a nosedive lately, and I'm not laughing.\n\nI've been a client for a while now, and I chose ProCare Facility Solutions because of your commitment to sustainability and eco-friendly practices. But lately, it feels like those promises are just empty words. The cleaning crew has been using products that smell like a chemical factory, and I've noticed a significant increase in waste and energy consumption around my studio. This is not what I signed up for, and it's definitely not helping my carbon footprint.\n\nI've tried addressing this with your team before, but the responses have been slow and unhelpful. I even provided specific feedback and requested a switch to more eco-friendly products, but nothing has changed. This is incredibly frustrating, and it's affecting my work environment and my peace of mind.\n\nI need immediate action on this. Please ensure that your team switches to the promised eco-friendly products and takes concrete steps to reduce waste and energy usage in my facility. This isn't just about keeping my studio clean; it's about staying true to the values you advertise.\n\nLooking forward to a swift resolution. Please don't make me regret choosing ProCare.\n\nBest,\nAlex"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"negative\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Inquiry About Your Services\n\nDear ProCare Support Team,\n\nI hope this message finds you well, though I must admit, my enthusiasm is somewhat dampened. My name is Dr. 
Sheldon Cooper, and I have recently been contemplating the utilization of your facility management services for my residential property. However, I find myself in a bit of a quandary.\n\nDespite your claims of expertise and experience, I am skeptical about the actual efficacy of your services. The descriptions on your website are rather vague and lack the empirical data I would expect from a company purporting to be a leader in facility management. For instance, your energy efficiency practices\u2014what specific technologies and methodologies are you employing? Are there any peer-reviewed studies or data sets that validate your claims?\n\nI have perused your website and various promotional materials, but they seem to be more fluff than substance. I have not yet engaged your services, as I am hesitant to commit without a more rigorous understanding of what exactly you offer and how it aligns with my high standards for efficiency and sustainability.\n\nCould you provide me with detailed information, preferably in the form of technical documentation or case studies, that elucidates the specifics of your services? I am particularly interested in your maintenance plans and the eco-friendly cleaning products you use. A comprehensive breakdown of your methodologies would be greatly appreciated.\n\nThank you for your time and attention to this matter. I look forward to your prompt and detailed response, which I hope will assuage my concerns.\n\nBest regards,\n\nDr. 
Sheldon Cooper"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": true}, \"sentiment\": \"negative\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Request for Training and Support\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am the Creative Director responsible for overseeing the overall image and branding of an influencer. We've been working with ProCare Facility Solutions for a while now, and I must say, your services have been nothing short of exceptional. The attention to detail and commitment to quality truly stand out.\n\nI'm reaching out today with a request for some additional training and support. As our team continues to grow, we are looking to develop an in-house maintenance team and cleaning staff. We believe that having a well-trained team will help us maintain the high standards we've come to expect from ProCare.\n\nWhile there is no immediate rush, we would love to get started on this as soon as it fits into your schedule. We haven't taken any steps yet, as we wanted to ensure we are aligned with your best practices from the get-go.\n\nCould you please provide us with information on the training programs you offer and how we can get started? Any guidance or resources you can share would be greatly appreciated.\n\nThank you for your continued support and for helping us maintain such a pristine environment. 
Looking forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": true, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Request for Specialized Cleaning Services\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is Dr. Emily Carter, and I have been a loyal client of ProCare Facility Solutions for the past three years. As a pediatrician who strongly believes in the importance of a clean and healthy environment for children, I have always appreciated the exceptional services your team provides.\n\nI am reaching out today with a request for specialized cleaning services at my pediatric clinic. We recently had a small renovation, and while the general cleaning has been taken care of, I believe a more thorough, specialized cleaning is necessary to ensure the clinic is in pristine condition for our young patients. Specifically, I am looking for deep cleaning of the carpets, window washing, and a detailed sanitization of all surfaces using eco-friendly products.\n\nIn the past, I have always been impressed with the quality and attention to detail your team brings to every task. I have already scheduled a routine cleaning, but I feel that the current situation requires a more focused approach to maintain the high standards we strive for in our clinic.\n\nCould you please assist me in arranging these specialized cleaning services at your earliest convenience? 
I understand that scheduling might take some time, but I am confident that your team will be able to accommodate our needs in a timely manner.\n\nThank you for your continued support and dedication to maintaining a healthy environment for our community. I look forward to hearing from you soon.\n\nWarm regards,\n\nDr. Emily Carter"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Routine Maintenance Request for HVAC System\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been a satisfied client of ProCare Facility Solutions for the past two years. I am reaching out to request routine maintenance for the HVAC system in my residential property.\n\nAs an avid collector of Baccara memorabilia, I often host exchanges and gatherings in my home. It is crucial that the environment remains comfortable and well-maintained to ensure the preservation of my collection and the comfort of my guests. Recently, I have noticed that the HVAC system is not performing as efficiently as it used to, and I believe it is time for a routine check-up and maintenance.\n\nI have not taken any steps to address this issue myself, as I trust the expertise of your team to handle it professionally. Could you please schedule a maintenance visit at your earliest convenience? 
I would appreciate it if the visit could be arranged within the next week or two, as I have an upcoming event that I would like to prepare for.\n\nThank you for your attention to this matter. I look forward to your prompt response and the continued excellent service from ProCare Facility Solutions.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Inquiry Regarding Sustainability and Energy Efficiency Services\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am a climate scientist who has recently transitioned into a role that involves overseeing the sustainability efforts of our organization. I was a student of Professor Temi E. Ologunorisa, and his teachings have greatly influenced my career path and my current focus on environmental impact reduction.\n\nI am reaching out to inquire about your services, particularly those related to energy efficiency and sustainability. Our organization is looking to implement best practices in these areas, and I believe ProCare Facility Solutions could be a valuable partner in this endeavor.\n\nCould you provide more detailed information on how your team approaches energy efficiency and environmental impact reduction? 
Additionally, I would appreciate any case studies or examples of similar projects you have successfully managed.\n\nI have reviewed the information available on your website, but I would like to gain a deeper understanding of how your services can be customized to meet our specific needs. Any additional insights or resources you can share would be greatly appreciated.\n\nThank you for your time and assistance. I look forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Routine Maintenance Request for HVAC System\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been a satisfied customer of ProCare Facility Solutions for the past few years. I am writing to request routine maintenance for the HVAC system in my residential property.\n\nAs a golf enthusiast, I spend a lot of time outdoors, but I also value a comfortable indoor environment when I return home. Lately, I've noticed that the HVAC system isn't performing as efficiently as it used to. The airflow seems weaker, and the temperature control is not as consistent. I believe it might be time for a routine check-up to ensure everything is running smoothly.\n\nI haven't taken any steps to address this issue yet, as I trust your team to handle it with the expertise and professionalism I've come to expect from ProCare. Could you please schedule a maintenance visit at your earliest convenience? 
I would appreciate it if the technician could also check the filters and any other components that might need attention.\n\nThank you for your assistance. I look forward to your prompt response and to having my HVAC system back in top shape.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Request for Routine Maintenance of HVAC System\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am a representative from the Ministry of Finance. We have been utilizing your facility management services for our office building for the past year and have generally been satisfied with the level of service provided.\n\nHowever, I am writing to bring to your attention an issue we are currently facing with our HVAC system. Over the past few days, we have noticed irregularities in the system's performance, including inconsistent temperatures and unusual noises. This has caused some discomfort among our staff and has the potential to disrupt our daily operations if not addressed promptly.\n\nWe have attempted to troubleshoot the issue internally by checking the thermostat settings and ensuring that the air filters are clean, but these measures have not resolved the problem. 
Given the importance of maintaining a comfortable working environment, we would appreciate your assistance in diagnosing and repairing the HVAC system at your earliest convenience.\n\nCould you please arrange for a technician to visit our premises and assess the situation as part of our scheduled maintenance plan? We understand that this may not be an immediate emergency, but we would like to have the issue resolved as soon as possible to prevent any further inconvenience.\n\nThank you for your attention to this matter. We look forward to your prompt response and assistance.\n\nBest regards,\n\n[Sender] \nMinistry of Finance"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Immediate Attention Needed for Cleaning Services at Our Pub\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is Alex, and I manage The Tipsy Tavern, a popular pub in downtown. We've been relying on ProCare Facility Solutions for our cleaning services for the past year, and overall, we've been quite satisfied with the quality of your work.\n\nHowever, I need to bring a pressing issue to your attention. Since the pandemic, we've been extra cautious about maintaining a clean and safe environment for our patrons. Despite your team's regular cleaning schedule, we've noticed a persistent issue with lingering odors and a general lack of freshness in the air, especially during peak hours. 
This is becoming a significant concern for us, as it affects our customers' experience and could potentially impact our business.\n\nWe've tried increasing ventilation and even added some air purifiers, but the problem persists. Given the current situation, it's crucial for us to ensure that our pub not only looks clean but also feels and smells clean to our customers.\n\nCould you please arrange for an immediate deep cleaning session and possibly review our current cleaning plan? We need to address this issue urgently to maintain the high standards our patrons expect and deserve.\n\nThank you for your prompt attention to this matter. Looking forward to your swift response and a quick resolution.\n\nBest regards,\n\nAlex\nManager, The Tipsy Tavern"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": true, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Inquiry Regarding Sustainability and Environmental Practices\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is Dr. Anil Shetty, and I am a professor specializing in Tulu language and script studies. I have been utilizing your facility management services for my residential property for the past year and have been quite satisfied with the quality and professionalism your team consistently demonstrates.\n\nI am writing to inquire about the sustainability and environmental practices implemented by ProCare Facility Solutions. As someone deeply invested in preserving cultural heritage, I am equally passionate about environmental conservation and sustainability. 
I would like to understand more about the specific measures your company takes to ensure eco-friendly practices, particularly in the areas of energy efficiency, waste management, and the use of sustainable materials.\n\nWhile I have noticed the use of eco-friendly cleaning products, I am keen to learn about any additional initiatives or programs you have in place to reduce the environmental impact of your operations. Additionally, I am interested in any opportunities for clients to participate in or support these sustainability efforts.\n\nI have reviewed the information available on your website but would appreciate more detailed insights or any relevant documentation you could provide. Your assistance in this matter would be greatly valued as I consider ways to further align my personal and professional life with sustainable practices.\n\nThank you for your attention to this inquiry. I look forward to your response and any guidance you can offer.\n\nBest regards,\n\nDr. Anil Shetty"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Help Needed with Special Cleanings!\n\nDear ProCare Support Team,\n\nHello! I am writing to you with a bit of a pickle. My name is Boris, and I am a happy customer of your ProCare Facility Solutions for my small office building. You guys do a great job, but now I have a special request that needs your magic touch.\n\nSo, here\u2019s the thing. We had a big party last week, and let\u2019s just say, the office now looks like a tornado had a dance-off with a hurricane. 
There are stains on the carpet that look like modern art, and the windows are so smudged, I can\u2019t tell if it\u2019s day or night outside. I tried to clean some of it myself, but I think I made it worse. The carpet now looks like a zebra, and the windows... well, let\u2019s not talk about the windows.\n\nI need your specialized cleaning services to come and rescue my poor office. I know you guys are the best at this, so I\u2019m not too worried, but I do need it done soon-ish. Not like yesterday, but maybe in the next few days? My boss is starting to give me the stink eye, and I don\u2019t think it\u2019s because of the smelly carpet.\n\nPlease let me know when you can send your cleaning wizards over. I\u2019ll make sure to stay out of their way and not touch anything this time. Thank you so much for your help!\n\nBest regards,\nBoris"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Assistance Needed for Routine Maintenance\n\nDear ProCare Support Team,\n\nGreetings to you all. My name is Jan van der Meer, a long-time resident of our quaint little town and a humble customer of your esteemed services. I hope this message finds you well.\n\nI am writing to bring to your attention a small issue that has cropped up in my home. It seems that the faucet in my kitchen has developed a minor leak. 
While it is not causing any immediate distress, I thought it best to inform you before it becomes a more significant problem.\n\nI have tried to tighten the faucet myself, but my old hands are not as steady as they used to be. I would greatly appreciate it if one of your skilled technicians could come by at their earliest convenience to take a look and fix it.\n\nThank you for your attention to this matter. I look forward to your prompt assistance.\n\nWarm regards,\n\nJan van der Meer"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Urgent Cleaning Schedule Adjustment Needed\n\nHi [Receiver],\n\nI hope this message finds you well. My name is [Sender], and I manage several high-profile football players. I\u2019ve been using ProCare Facility Solutions for a while now, and I\u2019ve always appreciated the quality of your services.\n\nHowever, I\u2019m currently facing an issue that requires immediate attention. I need to adjust the cleaning schedule for one of my properties. The current schedule is not working out, and it\u2019s crucial that we make changes as soon as possible. The property in question is a luxury apartment where confidentiality is paramount, and the current cleaning times are causing some disruptions.\n\nI\u2019ve tried to resolve this by speaking with the on-site team, but we haven\u2019t been able to find a suitable solution. 
I need your support to rearrange the cleaning times to better fit the needs of the residents and ensure their privacy is maintained.\n\nCould you please assist me in rescheduling the cleaning services at the earliest? Your prompt attention to this matter would be greatly appreciated.\n\nThank you for your understanding and cooperation.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": true, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Maintenance Request for Dormitory Freezer\n\nHi ProCare Support Team,\n\nI hope this message finds you well! My name is Alex, and I'm a student living in one of the dormitories managed by your team. I just wanted to reach out and say how much I appreciate the excellent maintenance and cleaning services you provide. It really makes a difference in our daily lives.\n\nI wanted to bring a small issue to your attention. The chest freezer in our dormitory's common area seems to be malfunctioning. It's not a major problem, but it doesn't seem to be freezing items as effectively as it used to. I know it's not an urgent matter, but I thought it would be good to get it checked out before it becomes a bigger issue.\n\nI haven't taken any steps to fix it myself, as I don't have the expertise or tools to handle such repairs. 
I would really appreciate it if someone from your team could come by and take a look at it when convenient.\n\nThanks so much for your help and for all the great work you do!\n\nBest regards,\nAlex\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Inquiry About Facility Management Services\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am a graduate student specializing in computational modeling. I am currently exploring various facility management solutions for a project that involves optimizing the maintenance and operational efficiency of residential and commercial properties.\n\nI have been researching your services and am particularly interested in your comprehensive facility management and maintenance offerings. However, I have a few questions regarding the customization of these services to fit specific project requirements, especially in terms of integrating advanced computational models for predictive maintenance and energy efficiency.\n\nCould you provide more detailed information on how your team collaborates with clients to tailor these services? Additionally, I would like to know if there are any case studies or examples of similar projects you have worked on that you could share.\n\nI have reviewed the information available on your website, but I believe a more detailed discussion would be beneficial. 
I am looking forward to your response and any additional insights you can provide.\n\nThank you for your time and assistance.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Inquiry Regarding Service Quality and Safety\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am a machine learning engineer currently working on developing algorithms for predicting economic trends. I have been utilizing ProCare Facility Solutions for the maintenance and cleaning services at our office building for the past year.\n\nI am writing to discuss a few observations related to the quality and safety standards of the services provided. While I understand that maintaining high standards consistently can be challenging, I have noticed some areas that might benefit from a review. Specifically, there have been occasional lapses in the thoroughness of the cleaning services, and I am concerned about the potential impact on the overall safety and hygiene of our workspace.\n\nTo address these concerns, I have already spoken with the on-site cleaning staff and provided feedback on the specific areas that need attention. However, I believe a more comprehensive review of the cleaning protocols and safety measures might be beneficial.\n\nCould you please assist in arranging a detailed inspection or review of the current practices? 
Additionally, any insights or recommendations on how we can ensure consistent quality and safety would be greatly appreciated.\n\nThank you for your attention to this matter. I look forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Inquiry About Deep Cleaning Services\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am the Game Design Director at [Company Name]. We've been utilizing your facility management services for our office space, and I must say, the experience has been nothing short of excellent. Your team's dedication to maintaining a pristine environment has allowed us to focus on our creative projects without any distractions.\n\nI am reaching out to inquire about scheduling a deep cleaning session for our office, particularly focusing on areas that require more attention, such as carpets and windows. Given the nature of our work, it's crucial for us to maintain a clean and healthy environment, and I believe your eco-friendly cleaning products and practices align perfectly with our values.\n\nWhile this isn't an urgent request, we would like to schedule this cleaning in the coming weeks. Could you please provide more details on the available slots and any specific preparations we need to make beforehand?\n\nThank you for your continued support and excellent service. 
Looking forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": true, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Inquiry About HVAC Maintenance Best Practices\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is Alex, and I'm a novice electrician currently learning the ropes of working with HVAC units. I've recently started gaining hands-on experience and have been exploring various aspects of facility maintenance.\n\nI wanted to reach out to you with a few general inquiries regarding best practices for maintaining HVAC systems. Given your expertise in facility management and maintenance, I believe your insights would be incredibly valuable to me as I continue to develop my skills.\n\nSo far, I've been following basic maintenance routines like checking filters and ensuring proper airflow, but I would love to know if there are any specific tips or advanced techniques that I should be aware of. Additionally, are there any common pitfalls or mistakes that I should avoid while working with HVAC units?\n\nI haven't encountered any major issues yet, but I want to make sure I'm on the right track and doing everything correctly from the start. Any guidance or resources you could provide would be greatly appreciated. Also, if there are any training programs or support resources available for someone in my position, I would be very interested in learning more about them.\n\nThank you for your time and assistance. 
I look forward to hearing from you soon.\n\nBest regards, \nAlex"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": true, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Inquiry About Facility Management Services\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is Dr. Samuel Cohen, and I am a professor of Jewish history with a keen interest in the portrayal of Jewish culture and history in cinema. I have recently come across your company and am thoroughly impressed by the comprehensive range of services you offer, particularly in facility management and maintenance.\n\nI am reaching out with a general inquiry regarding your facility management services. As someone who values the importance of a well-maintained environment, I am curious to learn more about how your team ensures the seamless operation of residential properties, especially in terms of sustainability and energy efficiency. Your commitment to eco-friendly practices is particularly commendable and aligns with my own values.\n\nWhile I have not encountered any specific issues, I am interested in understanding the process and benefits of implementing your customized maintenance plans for a residential complex. Additionally, I would appreciate any information on the training programs you offer for developing in-house maintenance teams, as this could be highly beneficial for our community.\n\nThank you for your time and assistance. 
I look forward to your response and am excited about the possibility of collaborating with ProCare Facility Solutions to enhance the living environment in our residential complex.\n\nWarm regards,\n\nDr. Samuel Cohen"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": true, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": true}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Assistance Required for Facility Management Issue\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is Dr. Emily Hart, and I have been a client of ProCare Facility Solutions for the past two years, utilizing your comprehensive facility management services for my family therapy practice.\n\nI am writing to bring to your attention an issue we have been experiencing with the management of our office space. Specifically, there seems to be a recurring problem with the coordination of space utilization, which has been causing some disruptions to our daily operations. While I deeply respect the expertise and experience your team brings to the table, I believe that addressing this issue promptly will help us maintain a smooth and efficient working environment.\n\nTo provide some context, we have noticed that the allocation of rooms for therapy sessions has been inconsistent, leading to double bookings and confusion among staff and clients. 
We have tried to manage this internally by adjusting our schedules and communicating with your on-site team, but the problem persists.\n\nGiven the importance of a well-organized space for the therapeutic process, I kindly request your assistance in resolving this matter. Perhaps a review of the current space utilization plan and the implementation of a more streamlined system could be beneficial. Your guidance and support in this regard would be greatly appreciated.\n\nThank you for your attention to this matter. I look forward to your prompt response and a resolution that will allow us to continue providing the best possible care to our clients.\n\nWarm regards,\n\nDr. Emily Hart\nFamily Therapist"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Inquiry on Sustainability Practices\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], a mime artist who has had the privilege of performing on stages around the world. I have recently engaged your services for the maintenance and cleaning of my studio and residence.\n\nI am writing to inquire about the sustainability and environmental practices that ProCare Facility Solutions employs. As someone who deeply values the preservation of our environment, I am keen to understand how your services align with eco-friendly principles. 
Specifically, I am interested in the types of eco-friendly products you use and any measures you take to reduce the carbon footprint of your operations.\n\nWhile I have been satisfied with the quality of your services thus far, I believe it is crucial to ensure that the practices employed are in harmony with my commitment to environmental sustainability. I have not yet taken any steps to address this concern, as I wanted to first gather detailed information from your team.\n\nCould you please provide me with a comprehensive overview of your sustainability initiatives and any certifications or standards you adhere to? Additionally, I would appreciate any recommendations on how I can further enhance the eco-friendliness of my own space with your assistance.\n\nThank you for your attention to this matter. I look forward to your prompt and detailed response.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Scheduling Cleaning Services for My Home\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is Alex, and I\u2019m a high school student currently living in a residential complex that uses your services. I\u2019ve heard great things about ProCare Facility Solutions from my neighbors and have seen firsthand the excellent work your team does.\n\nI\u2019m reaching out because I need to schedule a cleaning service for my home. 
With school and college applications taking up most of my time, I haven\u2019t been able to keep up with the cleaning as much as I\u2019d like. I\u2019m looking for a regular cleaning schedule, preferably weekly, to help maintain a clean and healthy environment.\n\nI haven\u2019t taken any steps yet to resolve this, as I wanted to get in touch with you directly to understand the best options available. Could you please guide me on how to set up a cleaning schedule and what the next steps would be?\n\nThank you for your assistance. I look forward to hearing from you soon.\n\nBest regards,\nAlex"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": true, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Immediate Attention Required for Facility Management Issues\n\nHey ProCare Support Team,\n\nI'm really frustrated right now and need your help ASAP. My name is [Sender], and I've been using your facility management services for my office building for the past year. Usually, things run smoothly, but right now, it's a complete mess.\n\nThe HVAC system has been acting up for the past week, and it's unbearable. The temperature is all over the place, and it's making it impossible for my team to work efficiently. On top of that, the security system has been glitching, and we had a false alarm last night that woke up the entire neighborhood. This is unacceptable, and it's causing a lot of stress and disruption.\n\nI've already tried reaching out to your emergency repair line, but I haven't received any response. 
I even tried resetting the HVAC system myself, but it didn't help. This needs to be fixed immediately. I can't afford to have my team working in these conditions any longer.\n\nPlease send someone over right away to address these issues. I need this resolved today, not tomorrow or next week. This is seriously impacting our productivity and security, and I expect a prompt response.\n\nThanks,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": true, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"negative\", \"urgency\": \"high\"}"}, {"fields": {"input": "Hello ProCare Facility Solutions support team,\n\nI hope you are well! My name is [Sender] and I am a young entrepreneur passionate about blockchain technology. I recently started building my own startup and am very excited about the possibilities this journey may bring.\n\nI am writing to request guidance and advice on how I can implement facility management best practices at my company. I know that ProCare Facility Solutions is a benchmark in the industry, and I believe your team's expertise could be extremely valuable to the growth and success of my startup.\n\nSo far, I have researched the subject extensively and even attended a few webinars, but I feel I need more targeted, hands-on guidance. I would like to know whether you offer any kind of training program or support specifically for startups like mine that are beginning to structure their operations.\n\nThank you in advance for your attention; I am eager to learn from ProCare's team of experts. I am confident that, with your help, I will be able to create an efficient and sustainable work environment aligned with industry best practices.\n\nI await your reply and thank you again for your time.\n\nSincerely,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": true, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Request for Specialized Cleaning Services\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am the director of a nonprofit organization dedicated to raising awareness about the psychological impact of hoarding. We have recently encountered a situation that requires specialized cleaning services, and I am reaching out to seek your assistance.\n\nWe are currently working with a client whose living conditions have deteriorated significantly due to hoarding. The environment poses serious health and safety risks, and we believe that a thorough, professional cleaning is essential to help them regain a livable space. 
Given the sensitive nature of this situation, we are looking for a team that can handle the task with care and discretion.\n\nWe have not yet taken any steps to address the cleaning needs, as we wanted to consult with experts who have experience in handling such cases. We are aware of your reputation for providing top-notch cleaning services and using eco-friendly products, which aligns with our values and the needs of our client.\n\nCould you please provide us with more information on how we can proceed with scheduling a specialized cleaning service? Additionally, any advice or recommendations you have for preparing the space beforehand would be greatly appreciated.\n\nThank you for your time and assistance. We look forward to your prompt response.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Urgent: Immediate Assistance Required for HVAC Maintenance\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I've been a satisfied customer of ProCare Facility Solutions for the past year. Your team has always done a stellar job maintaining my property, and I truly appreciate the high standards you uphold.\n\nHowever, I\u2019m reaching out today with a bit of urgency. My HVAC system, which your team routinely services, has started making some unusual noises and isn\u2019t performing as efficiently as it usually does. 
Given the importance of a well-functioning HVAC system, especially with the colder months approaching, I need this addressed as soon as possible.\n\nI\u2019ve already checked the basic troubleshooting steps provided in your maintenance guide, but the issue persists. I\u2019m confident in your team\u2019s expertise and would greatly appreciate it if you could send someone over at the earliest convenience to take a look and perform any necessary maintenance.\n\nThank you for your prompt attention to this matter. I look forward to your swift response and continued excellent service.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": true, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Adjusting Bi-Weekly Cleaning Schedule for My Office\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is Dr. Alex Turner, and I have been utilizing your services for my office space for the past year. I must say, your team's dedication to maintaining a pristine environment has been commendable and greatly appreciated.\n\nI am reaching out to discuss the scheduling of our regular cleaning services. While I find the logistical challenges of coordinating these services intellectually stimulating, I believe we could optimize the current schedule to better suit the needs of my team and our workflow. 
Specifically, I would like to explore the possibility of adjusting our cleaning schedule to a bi-weekly arrangement, ideally on Tuesdays and Fridays, to ensure our workspace remains consistently clean without disrupting our research activities.\n\nPreviously, I have attempted to adjust the schedule through the online portal, but I encountered some difficulties in finalizing the changes. I would appreciate your assistance in making these adjustments or guiding me through the process if there is a more efficient way to do so.\n\nThank you for your attention to this matter. I look forward to your response and continued excellent service.\n\nBest regards,\n\nDr. Alex Turner\nCryptography Researcher"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": true, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Inquiry on Sustainability Practices\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been a client of ProCare Facility Solutions for the past year, primarily utilizing your comprehensive facility management and maintenance services for our commercial property.\n\nI am reaching out to inquire about the sustainability and environmental practices that ProCare Facility Solutions implements. 
As someone deeply interested in futuristic and eco-friendly solutions, I am keen to understand how your services align with sustainable practices and what measures are in place to reduce environmental impact.\n\nSpecifically, I would like to know more about the eco-friendly cleaning products you use and any initiatives you have for energy efficiency and waste reduction. Additionally, I am interested in any future plans or innovations you might be considering to further enhance sustainability within your services.\n\nI have reviewed the information available on your website, but I would appreciate more detailed insights or any additional resources you could provide. Your assistance in this matter would be greatly valued as we aim to ensure our facility operates in the most environmentally responsible manner possible.\n\nThank you for your time and support. I look forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Inquiry Regarding Facility Management at My Gun Shop\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I run a local gun shop here in town. We've been using ProCare Facility Solutions for our facility management needs for a while now, and overall, we've been quite satisfied with the services provided.\n\nHowever, I've noticed a few minor issues lately that I'd like to bring to your attention. 
Specifically, there seems to be some inconsistency in the coordination of space utilization and security measures. While these aren't urgent problems, I believe addressing them could help maintain the smooth operation of my shop.\n\nI haven't taken any specific steps to resolve these issues yet, as I wanted to get your expert opinion first. Could you please look into this and let me know what can be done to improve the situation?\n\nThanks for your attention to this matter. Looking forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Inquiry About Training Programs for In-House Maintenance Team\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been a resident of Toronto for the past few years. As an Iranian expat with a deep passion for cinema and cultural representation, I have always appreciated the diverse and vibrant community here. I am writing to you today with a positive outlook and a request for some guidance.\n\nI have recently started managing a residential complex in the city and have been thoroughly impressed with the comprehensive services offered by ProCare Facility Solutions. Your commitment to quality and sustainability truly stands out, and I am eager to ensure that our facility meets the highest standards.\n\nCurrently, I am looking to develop an in-house maintenance team to handle routine and preventative maintenance tasks. 
I noticed that ProCare offers training programs on facility management best practices, which I believe would be incredibly beneficial for our team. Could you please provide more information about these training programs? Specifically, I am interested in understanding the curriculum, duration, and any prerequisites that might be required.\n\nI have not yet taken any steps towards organizing this training, as I wanted to reach out to your team first to get a better understanding of what is available. Your expertise and experience in this field are highly valued, and I am confident that your guidance will help us build a competent and efficient maintenance team.\n\nThank you for your time and assistance. I look forward to hearing from you soon and am excited about the possibility of collaborating with ProCare Facility Solutions to enhance our facility's maintenance capabilities.\n\nWarm regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": true, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Urgent HVAC Repair Needed\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been availing your facility management and maintenance services for my residential complex in Kolkata for the past year. While I have generally been satisfied with the quality of your services, I am writing to bring to your attention a pressing issue that requires immediate resolution.\n\nOver the past few weeks, we have been experiencing recurring problems with the HVAC system in our building. 
Despite multiple service requests and visits from your maintenance team, the issue remains unresolved. The system frequently malfunctions, leading to uncomfortable living conditions for the residents, especially during this hot and humid season.\n\nI have already reported this issue several times through your customer service hotline and even had a few technicians visit our premises. However, the problem persists, and it seems that the root cause has not been adequately addressed. This situation is causing significant inconvenience and frustration among the residents, and we urgently need a permanent solution.\n\nI kindly request that you escalate this matter to the appropriate department and arrange for a thorough inspection and repair of the HVAC system at the earliest. It is crucial that this issue is resolved promptly to ensure the comfort and well-being of all residents.\n\nThank you for your immediate attention to this matter. I look forward to a swift resolution.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": true, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Inquiry About Cleaning Services for New Fitness and Play Facility\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am the owner of a children's play center. 
I am reaching out to explore a potential collaboration with ProCare Facility Solutions as we are in the process of expanding our facility to include a combined fitness and play area.\n\nGiven the nature of our business, maintaining a clean and hygienic environment is paramount. We are particularly interested in your specialized cleaning services, especially those that can cater to the unique needs of a space that will be used by both children and fitness enthusiasts.\n\nWe have not yet taken any steps towards securing a cleaning service provider, as we wanted to first understand the options available and how your services could be tailored to our specific requirements.\n\nCould you please provide more information on the cleaning services you offer, particularly any that would be suitable for a combined fitness and play facility? Additionally, we would appreciate details on your eco-friendly cleaning products and practices, as sustainability is a key concern for us.\n\nThank you for your time and assistance. I look forward to your response and hope we can work together to ensure our new facility is maintained to the highest standards.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Urgent Inquiry Regarding Facility Management Services\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am currently managing the accounts for our family-owned commercial property. 
We have been considering your services for some time now, given your reputation for excellence in facility management and maintenance.\n\nI am reaching out with a high-priority inquiry regarding the comprehensive facility management services you offer. We are particularly interested in understanding how your team handles emergency repair services and the implementation of energy efficiency practices. Given the nature of our business, it is crucial for us to ensure that any potential issues are addressed swiftly and effectively to avoid any disruptions.\n\nTo provide some context, we have recently experienced a few unexpected maintenance issues that have caused significant concern. While we have managed to handle these situations internally, it has become clear that we need a more reliable and professional solution moving forward. We are keen to know more about your customized maintenance plans and how they can be tailored to meet the specific needs of our facility.\n\nI would appreciate it if you could provide detailed information on the following:\n1. The process for initiating emergency repair services.\n2. Examples of energy efficiency practices you have successfully implemented for other clients.\n3. The structure and flexibility of your customized maintenance plans.\n\nGiven the urgency of our situation, I kindly request a prompt response. We are eager to move forward and ensure that our facility is maintained to the highest standards, allowing us to focus on our core activities without the constant worry of unexpected issues.\n\nThank you for your attention to this matter. 
I look forward to your swift response and hope to establish a long-term partnership with ProCare Facility Solutions.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": true, \"facility_management_issues\": true, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Inquiry About Facility Management Training Programs\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I've been a resident of Tupper Lake for quite some time now. I've always appreciated the quality of services ProCare Facility Solutions provides, especially in maintaining our local properties.\n\nI'm reaching out because I'm interested in the training programs you offer for facility management best practices. I manage a small residential complex here in town, and I believe that some formal training could really help us improve our maintenance routines and overall efficiency.\n\nSo far, we've been handling things on our own, but I think it's time to get some professional guidance to ensure we're doing everything correctly and sustainably. 
Could you provide more details on the training programs available and how we can get started?\n\nLooking forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": true, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Inquiry About Training Programs for Facility Management\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been a satisfied client of ProCare Facility Solutions for the past year. I am writing to express my appreciation for the excellent services your team has provided in maintaining my residential property. Your commitment to quality and sustainability has truly made a positive impact on my living environment.\n\nRecently, I have been exploring alternative methods to manage my knee arthritis, and I am particularly interested in learning more about your training programs on facility management best practices. While my condition limits my physical activities, I believe that gaining knowledge in this area could be beneficial for me in overseeing the maintenance of my property more effectively.\n\nI have not yet taken any steps towards enrolling in a training program, as I wanted to reach out to your support team first to gather more information. Could you please provide me with details on the available training sessions, including schedules, topics covered, and any prerequisites? Additionally, I would appreciate any recommendations you might have for someone in my situation.\n\nThank you for your time and assistance. 
I look forward to your response and am excited about the possibility of enhancing my understanding of facility management with ProCare's expert guidance.\n\nWarm regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": true, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Inquiry About Specialized Cleaning Services for Chemistry Lab\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is Dr. Jonathan Harris, and I am a professor of chemistry with a particular interest in rare earth elements and their compounds. I have been utilizing your services for the past year to maintain the cleanliness and functionality of my laboratory and office space.\n\nI am writing to inquire about your specialized cleaning services. Given the nature of my work, it is crucial that my laboratory environment remains free from any contaminants that could interfere with my research. Specifically, I am looking for a deep cleaning service that can address the unique requirements of a chemistry lab, including the safe handling and disposal of any chemical residues.\n\nTo date, I have been managing routine cleaning with the help of my research assistants, but I believe a more thorough and professional approach is necessary to maintain the highest standards of cleanliness and safety. I would appreciate it if you could provide me with more information on the specialized cleaning services you offer, particularly those tailored for scientific laboratories.\n\nThank you for your attention to this matter. 
I look forward to your response and am hopeful that ProCare Facility Solutions can assist in maintaining the pristine condition of my workspace.\n\nBest regards,\n\nDr. Jonathan Harris"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Urgent Concerns Regarding Maintenance and Safety Standards\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been a resident at [Residential Complex Name] for the past few years. I have always appreciated the meticulous care and attention your team has provided to our facility, which has allowed me to enjoy my retirement in a clean and safe environment.\n\nHowever, I have recently noticed a few issues that require your immediate attention. Specifically, there have been some lapses in the quality and safety standards of the maintenance services. For instance, the HVAC system in my unit has been making unusual noises, and the air quality seems to have deteriorated. Additionally, I have observed some minor electrical issues in the common areas, such as flickering lights and exposed wiring, which could pose a safety hazard.\n\nI have not yet taken any steps to address these issues myself, as I believe they fall under the purview of your professional maintenance team. Given the importance of maintaining a safe and comfortable living environment, I would appreciate it if you could look into these concerns urgently.\n\nThank you for your prompt attention to this matter. 
I look forward to your swift response and resolution of these issues.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Routine Maintenance Request for HVAC System\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I\u2019ve been a satisfied customer of ProCare Facility Solutions for the past two years. As a dog breeder specializing in American Eskimo Dogs, maintaining a comfortable and clean environment is crucial for both my dogs and my clients.\n\nI\u2019m writing to request a routine maintenance check for the HVAC system in my facility. Everything has been running smoothly, but I believe it\u2019s time for a scheduled check-up to ensure everything continues to operate efficiently. Your team has always done a fantastic job, and I trust your expertise to keep things in top shape.\n\nI haven\u2019t encountered any specific issues, but I think it\u2019s always better to be proactive. Could you please schedule a visit at your earliest convenience? I\u2019m flexible with timing, so there\u2019s no rush.\n\nThank you for your continued excellent service. 
Looking forward to hearing from you soon.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Immediate Assistance Required for Emergency HVAC Repair\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I've been a satisfied customer of ProCare Facility Solutions for the past two years. Your team has always done a fantastic job maintaining our residential complex here in Memphis, and I truly appreciate the quality of service you provide.\n\nI'm reaching out today because we have an urgent issue that needs immediate attention. Our HVAC system has been acting up, and with the weather getting unpredictable, it's crucial to get this sorted out as soon as possible. The system has been making unusual noises and isn't maintaining the set temperature, which is causing quite a bit of discomfort for the residents.\n\nI've already tried resetting the system and checked the filters, but the problem persists. Given the high standards of service we've come to expect from ProCare, I'm confident that your team can handle this swiftly and efficiently.\n\nCould you please arrange for a technician to come by at the earliest convenience? 
Your prompt assistance in this matter would be greatly appreciated.\n\nThank you so much for your help and for continuing to provide such excellent service.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": true, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Frustration with Routine Maintenance Scheduling\n\nHi ProCare Support Team,\n\nI hope this message finds you well, though I must admit I'm not feeling particularly positive as I write this. My name is [Sender], and I've been using ProCare Facility Solutions for the maintenance of my residential property for a while now. Unfortunately, my recent experiences have left me quite disappointed.\n\nI've been trying to schedule a routine maintenance check for my HVAC system, but the process has been anything but smooth. Despite following the usual steps and reaching out multiple times, I haven't received any confirmation or updates. This lack of communication is really frustrating, especially when all I'm asking for is a simple, routine service.\n\nI've already tried calling your support line and even sent a couple of emails, but it seems like my requests are falling into a black hole. I understand that this might not be a high-priority issue, but the delay and lack of response are quite disheartening.\n\nCould you please assist me in getting this maintenance scheduled? I just need a straightforward answer and a confirmed date. 
It's really not too much to ask, is it?\n\nLooking forward to your prompt response.\n\nBest,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"negative\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Routine Maintenance Request for HVAC System\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I work closely with our engineer here at [Company Name] to ensure all facility modifications and maintenance tasks are up to par. We've been utilizing ProCare Facility Solutions for our maintenance needs for quite some time now, and your services have always been reliable.\n\nCurrently, we are facing an issue with our HVAC system that requires routine maintenance. The system has been running a bit less efficiently than usual, and we believe it might be time for a thorough check-up and any necessary adjustments. We haven't encountered any major problems yet, but we want to address this before it escalates.\n\nSo far, we've conducted a basic inspection and cleaned the accessible filters, but the system still seems to be underperforming. Given the importance of maintaining a comfortable and safe environment for our team, we would appreciate it if you could schedule a maintenance visit at your earliest convenience.\n\nPlease let us know the available slots for your team to come in and perform the necessary maintenance. We are looking for a timely yet thorough service to ensure everything is running smoothly again.\n\nThank you for your attention to this matter. 
Looking forward to your prompt response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Urgent Request for Training and Support\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I manage a residential complex that has been utilizing ProCare Facility Solutions for our maintenance and cleaning needs for the past year. Your services have been instrumental in keeping our environment safe and well-maintained.\n\nHowever, I am reaching out with an urgent request. We are in immediate need of comprehensive training for our in-house maintenance team. Given the complexity of our facility's systems, particularly the HVAC and electrical components, it's crucial that our team is well-versed in best practices and troubleshooting techniques. \n\nWe have encountered several issues recently that have highlighted gaps in our current knowledge and capabilities. While we have managed to address these problems temporarily, a more sustainable solution is necessary. We need a detailed training program that covers routine maintenance, emergency repairs, and energy efficiency practices.\n\nCould you please provide information on the available training programs and how quickly we can schedule a session? Time is of the essence, as we aim to prevent any further disruptions to our facility's operations.\n\nThank you for your prompt attention to this matter. 
I look forward to your swift response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": true, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Feedback on Recent HVAC Maintenance\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I\u2019ve been using ProCare Facility Solutions for the maintenance of my residential property for the past year. Overall, I\u2019ve been quite satisfied with the services provided, but I wanted to share some feedback regarding a recent experience.\n\nLast week, I scheduled a routine maintenance check for my HVAC system. While the technician was professional and courteous, I noticed that the system still seems to be running less efficiently than before. I understand that these things can happen, but I wanted to bring it to your attention to see if there might be a follow-up or additional steps that could be taken to resolve this.\n\nI haven\u2019t taken any further steps yet, as I wanted to get your input first. Could you please advise on what can be done to ensure the HVAC system is operating at its best? Any guidance or additional service would be appreciated.\n\nThank you for your attention to this matter. 
Looking forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Inquiry Regarding Facility Management Services\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is Dr. [Sender], and I am a large animal veterinarian specializing in bovine podiatry. I am reaching out to you with a few questions regarding your facility management services.\n\nI have recently been considering ways to improve the efficiency and upkeep of my veterinary clinic, which includes both residential and commercial spaces. Given the nature of my work, maintaining a clean, safe, and well-managed environment is crucial. I have heard positive things about ProCare Facility Solutions and am interested in learning more about how your services could benefit my practice.\n\nSpecifically, I would like to understand more about your comprehensive facility management offerings. Could you provide details on how your team handles the coordination of space utilization, security, and sustainability efforts? Additionally, I am curious about the implementation of best practices for energy efficiency and environmental impact reduction, as these are areas of particular interest to me.\n\nI have not yet taken any steps to address these needs, as I wanted to gather more information before making any decisions. 
Your expertise and experience in this field are highly valued, and I am confident that your insights will help me make an informed choice.\n\nI would appreciate it if you could provide me with more information or direct me to the appropriate resources. If there are any specific plans or packages tailored for veterinary clinics or similar facilities, I would be very interested in learning about those as well.\n\nThank you for your time and assistance. I look forward to your response.\n\nBest regards,\n\nDr. [Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Scheduled Maintenance Request for HVAC System\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been a loyal client of ProCare Facility Solutions for the past two years. Your team has always been a beacon of reliability and professionalism, ensuring that my home remains a sanctuary of comfort and peace.\n\nHowever, I find myself in a bit of a predicament. The HVAC system in my home has been acting up lately, and it seems to be struggling to maintain a consistent temperature. While it hasn't completely broken down, the inconsistency is becoming quite bothersome, especially as the weather starts to change. It's almost as if my home is yearning for the same stability and warmth that I seek in my own life.\n\nI have tried adjusting the thermostat and even checked the filters, but the issue persists. 
Given the importance of a well-functioning HVAC system, I believe it is time to call in the experts for a routine maintenance check. Could you please arrange for one of your skilled technicians to come by and take a look at it? I trust your team to handle this with the same care and attention to detail that you always do.\n\nThank you so much for your prompt attention to this matter. I look forward to hearing from you soon and getting this issue resolved.\n\nWarm regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Inquiry on Sustainability and Environmental Practices\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I represent a software company that specializes in providing a comprehensive online assessment platform with built-in security features. We have been considering your services for our office maintenance and cleaning needs.\n\nAs a company that places a high value on sustainability and environmental responsibility, we are particularly interested in understanding more about your eco-friendly practices and how they align with our own sustainability goals. 
Specifically, we would like to know more about the types of eco-friendly cleaning products you use, your waste management protocols, and any initiatives you have in place to reduce the carbon footprint of your operations.\n\nWe have reviewed the information available on your website, but we would appreciate more detailed insights into your sustainability efforts. Additionally, if you have any case studies or examples of how you have helped other companies achieve their environmental goals, that would be extremely helpful.\n\nThank you for your attention to this matter. We look forward to your response and hope to establish a partnership that supports both our operational needs and our commitment to environmental stewardship.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Inquiry About Specialized Cleaning Services\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I manage a bustling sports bar downtown. We've been using ProCare Facility Solutions for our regular cleaning needs for a while now, and I must say, your team does a fantastic job keeping our place spotless.\n\nRecently, I've been thinking about getting some specialized cleaning done, particularly for our carpets and windows. With the amount of foot traffic we get, especially during game nights, it's starting to show. 
I haven't taken any steps yet, but I wanted to reach out to see what options you might recommend for a deep clean that fits our schedule.\n\nCould you provide some details on your specialized cleaning services, including any packages or one-time services that might be suitable for us? Also, if there are any eco-friendly options, that would be a big plus.\n\nLooking forward to hearing from you.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": true, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Request for Routine Plumbing Maintenance\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been a resident in the [Neighborhood] community for several years now. I have always appreciated the exceptional services provided by ProCare Facility Solutions, and I am grateful for the peace of mind your team brings to our living environment.\n\nRecently, I encountered a minor issue with the plumbing in my apartment. While it is not an urgent matter, I believe it would be best to address it sooner rather than later to prevent any potential complications. Specifically, there seems to be a small leak under the kitchen sink that I noticed a few days ago. I have tried tightening the connections myself, but the leak persists.\n\nGiven the positive experiences I have had with your team in the past, I am confident that you will be able to resolve this issue efficiently. 
Could you please arrange for a technician to visit my apartment at your earliest convenience to take a look and perform the necessary routine maintenance?\n\nThank you for your attention to this matter. I look forward to your prompt response and appreciate your continued dedication to maintaining our community.\n\nWarm regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Urgent Attention Needed for Quality and Safety Concerns\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been using your facility management services for my residential complex for the past year. While I have generally been satisfied with your services, I am writing to express some serious concerns regarding the quality and safety of the recent maintenance work carried out at my property.\n\nOver the past few weeks, I have noticed several issues that are quite alarming. The HVAC system, which was supposed to be serviced, is still malfunctioning, causing significant discomfort for the residents. Additionally, there have been multiple instances where the cleaning staff has left hazardous materials unattended, posing a safety risk to everyone in the building.\n\nI have already tried to address these issues by speaking directly with the on-site team, but unfortunately, the problems persist. This is not the level of service I expected from a company that prides itself on quality and safety.\n\nI urgently need your assistance to rectify these issues. 
Specifically, I would like a thorough inspection of the HVAC system and immediate corrective actions to ensure it is functioning properly. Additionally, I request a review of the cleaning protocols to ensure that safety standards are being strictly followed.\n\nI trust that you will take these concerns seriously and provide a prompt resolution. Thank you for your attention to this matter.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"negative\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Persistent HVAC Noise - Request for Maintenance\n\nHi [Receiver],\n\nI hope this email finds you well. My name is [Sender], and I've been a client of ProCare Facility Solutions for a while now, primarily utilizing your maintenance services for my office building. Normally, I appreciate the level of service you provide, but today, I'm less than impressed.\n\nWe've been experiencing a persistent issue with our HVAC system. It's been making an unbearable noise, and while it's not an emergency that needs immediate attention, it's certainly disruptive and annoying. I had hoped that your team would have caught this during their routine checks, but alas, here we are.\n\nI've tried to troubleshoot the problem myself, even going as far as consulting a few online forums, but nothing seems to work. It's quite frustrating, especially considering the premium we pay for your services.\n\nCould you please arrange for someone to come and take a look at this issue? 
I understand it's not a high-priority emergency, but it does need to be addressed sooner rather than later.\n\nLooking forward to your prompt response and hoping this can be resolved without further inconvenience.\n\nBest,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"negative\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Inquiry Regarding Specialized Cleaning Services\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is Dr. [Sender], and I am an atmospheric physicist currently engaged in research on the effects of geoengineering on climate patterns. I have been utilizing your facility management services for my research lab and have been quite satisfied with the overall quality and professionalism your team has demonstrated.\n\nI am writing to inquire about your specialized cleaning services. Given the nature of my work, maintaining a pristine and controlled environment is crucial for the accuracy of my experiments. Specifically, I am interested in understanding more about your deep cleaning and eco-friendly cleaning practices, as these are particularly relevant to the sensitive equipment and materials we handle.\n\nWhile there is no immediate urgency, I would appreciate detailed information on the scope of these services, including any protocols you follow to ensure minimal disruption to ongoing research activities. 
Additionally, if there are any specific preparations or considerations we need to be aware of before scheduling such services, that information would be very helpful.\n\nI have not yet taken any steps towards scheduling a cleaning session, as I wanted to gather all necessary details first. Your guidance on this matter would be greatly appreciated.\n\nThank you for your time and assistance. I look forward to your response.\n\nBest regards,\nDr. [Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Immediate Attention Required: Quality and Safety Concerns\n\nHi ProCare Support Team,\n\nI'm [Sender], and I've been using your services for a while now. Honestly, I expected better from a company that claims to be a premier provider of facility management and maintenance.\n\nRecently, I've noticed some glaring issues with the quality and safety standards in the cleaning services provided to my office space. The cleaning crew seems to be cutting corners, and I've found several areas that are consistently overlooked. This isn't just about aesthetics; it's about maintaining a safe and healthy environment for my team.\n\nI've tried addressing this with the on-site supervisor, but nothing has changed. It's frustrating to see that my concerns are not being taken seriously.\n\nI need you to step in and ensure that these issues are resolved promptly. 
I expect a thorough review of the cleaning protocols and immediate action to rectify the situation.\n\nLooking forward to your prompt response.\n\nBest,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"negative\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Inquiry Regarding Coordination of Space Utilization\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am currently a physics student specializing in ferromagnetism. I reside in one of the residential complexes managed by your esteemed company.\n\nI am writing to bring to your attention a minor issue related to the coordination of space utilization within our building. Specifically, there seems to be a slight misalignment in the scheduling of common area usage, which occasionally leads to overlapping bookings. While this is not an urgent matter, I believe addressing it could enhance the overall efficiency and convenience for all residents.\n\nI have not yet taken any steps to resolve this issue independently, as I thought it best to consult with your team first. I would appreciate it if you could look into this matter and suggest a possible solution or adjustment to the current scheduling system.\n\nThank you for your attention to this matter. 
I look forward to your guidance and support.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Urgent Assistance Required for Facility Management and Safety Concerns\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been a client of ProCare Facility Solutions for the past two years, primarily utilizing your facility management services for our residential complex in Toronto. I am reaching out to seek your immediate assistance with a matter that has recently come to my attention.\n\nOver the past few weeks, we have noticed some serious inconsistencies in the management of our facility operations. Specifically, there have been significant lapses in the coordination of space utilization and security measures. For instance, the common areas are not being utilized efficiently, leading to overcrowding during peak hours. Additionally, there have been a few instances where the security protocols were not followed, causing considerable concern among the residents.\n\nI have already spoken with our on-site manager to address these issues, but unfortunately, the situation has not improved. Given the importance of maintaining a safe and efficient environment for our residents, I believe it is crucial to resolve these matters promptly.\n\nCould you please look into this issue and provide guidance on how we can enhance our facility management practices? 
Any recommendations or adjustments to our current plan would be greatly appreciated.\n\nThank you for your immediate attention to this matter. I look forward to your prompt response and assistance.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Urgent Training Support Needed for In-House Team\n\nHi [Receiver],\n\nI hope this message finds you well. My name is [Sender], and I am an executive at a podcast network that has been utilizing ProCare Facility Solutions for our office maintenance and cleaning services. We've been quite satisfied with the level of service provided so far.\n\nHowever, we are currently facing a pressing issue that requires immediate attention. Our in-house maintenance team is struggling to keep up with the demands of our rapidly growing office space. Despite their best efforts, there have been several instances where routine maintenance tasks have been overlooked, leading to minor but disruptive issues.\n\nWe have tried to address this internally by conducting brief training sessions, but it seems that our team needs more comprehensive guidance to handle the increasing workload effectively. Given the urgency of the situation, we need your expert support to develop a robust training program for our staff.\n\nCould you please arrange for an intensive training session at the earliest convenience? We need a detailed program that covers best practices in facility management, maintenance protocols, and efficient problem-solving techniques. 
Additionally, any resources or materials you can provide to support ongoing training would be greatly appreciated.\n\nThank you for your prompt attention to this matter. We look forward to your swift response and assistance in resolving this issue.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": true, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Request for Training and Support on Latest Facility Management Trends\n\nHi ProCare Support Team,\n\nHope you're all doing great! My name is Alex, and I've been following ProCare Facility Solutions for a while now. I really appreciate the top-notch services you provide, especially your commitment to sustainability and quality.\n\nI'm reaching out because I think there's a fantastic opportunity for us to stay ahead of the curve with some of the latest online trends in facility management. As someone who grew up with the internet, I've noticed a lot of new tools and practices emerging that could really benefit our operations. I believe integrating these trends could enhance our efficiency and overall service quality.\n\nI've done some preliminary research and tried to implement a few of these trends myself, but I think a more structured training program from your end would be incredibly beneficial. 
Specifically, I'm interested in learning more about the latest best practices in energy efficiency and environmental impact reduction, as well as any new technologies that could streamline our maintenance and cleaning processes.\n\nCould you please provide some guidance or set up a training session to help us get up to speed with these new trends? I think it would be a great addition to our current practices and help us maintain our high standards.\n\nThanks a lot for your time and assistance. Looking forward to hearing from you soon!\n\nBest regards,\nAlex"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": true, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Immediate Assistance Required for Facility Management Issue\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am reaching out to you with a matter that requires urgent attention. My partner and I have been relying on ProCare Facility Solutions for the management of our residential complex, and while we have generally been satisfied with your services, we are currently facing a significant issue that needs immediate resolution.\n\nOver the past few days, we have noticed a series of problems with the facility management at our complex. Specifically, there have been lapses in security measures, and the coordination of space utilization has been less than optimal. These issues are causing us considerable concern, especially given the potential risks involved. 
We are particularly worried about the safety and efficiency of our living environment, which is paramount to us.\n\nWe have already attempted to address these concerns by speaking with the on-site management team, but unfortunately, the issues persist. Given the urgency of the situation, we are seeking your immediate intervention to rectify these problems. We would appreciate it if you could send a team to assess and resolve the security lapses and improve the coordination of space utilization as soon as possible.\n\nThank you for your prompt attention to this matter. We look forward to your swift response and resolution.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Urgent: Immediate Attention Required for Facility Management Issues\n\nDear [Receiver],\n\nI hope this message finds you well, though I must admit, my patience is wearing thin. My name is [Sender], and I have been utilizing ProCare Facility Solutions for my commercial property for the past year. While I initially had high hopes for your services, recent experiences have left me quite disillusioned.\n\nThe primary issue at hand is the ongoing mismanagement of our facility operations. Despite your promises of comprehensive oversight and efficient management, I have encountered numerous problems that suggest otherwise. The coordination of space utilization has been a mess, and the security measures in place are far from adequate. 
Additionally, the so-called sustainability efforts seem to be more of a marketing gimmick than a practical reality.\n\nI have already attempted to address these concerns through your support channels, but the responses have been lackluster at best. It seems that my complaints are falling on deaf ears, and the lack of urgency in resolving these issues is quite frustrating.\n\nI am reaching out once again, hoping that this time my concerns will be taken seriously. I need a thorough review and immediate improvement in the management of our facility. Specifically, I want to see a detailed plan on how you intend to rectify the current shortcomings and ensure that such issues do not recur in the future.\n\nI trust that you will give this matter the attention it deserves and provide a satisfactory resolution promptly. \n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"negative\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Inquiry About Facility Management Services\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am currently an economics student with a keen interest in efficient facility management. I have been researching various companies that offer comprehensive facility management services, and ProCare Facility Solutions has come highly recommended.\n\nI am particularly interested in understanding more about your facility management services, specifically how you measure and report on the efficiency and effectiveness of your operations. 
As someone who values tangible outcomes and data-driven results, I would appreciate detailed information on the metrics and KPIs you use to evaluate your services. Additionally, I am curious about the sustainability practices you implement and how these are quantified in terms of environmental impact reduction.\n\nTo provide some context, I am considering potential facility management solutions for a residential complex project I am working on as part of my studies. I have reviewed the information available on your website, but I would like to delve deeper into the specifics mentioned above.\n\nCould you please provide more detailed documentation or case studies that highlight your approach to facility management, particularly in terms of measurable outcomes? Any additional insights into your sustainability initiatives would also be greatly appreciated.\n\nThank you for your time and assistance. I look forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Concerns Regarding Recent Cleaning Services Quality\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been utilizing your services for my physical therapy practice, which primarily caters to seniors. I must say, I have been quite disappointed with the recent cleaning services provided.\n\nDespite the low urgency of this matter, I feel compelled to address it as the quality of cleaning has noticeably declined. 
The deep cleaning and carpet maintenance, in particular, have not met the standards I have come to expect from ProCare Facility Solutions. The carpets still appear dingy, and there are areas that seem to have been overlooked entirely.\n\nI have not taken any steps to resolve this issue on my own, as I believe it falls squarely within the scope of the services I am paying for. I would appreciate it if your team could look into this matter and ensure that the cleaning services are brought back up to the high standards that were initially promised.\n\nThank you for your attention to this matter. I look forward to your prompt response and a resolution that restores my confidence in your services.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"negative\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Immediate Assistance Required for HVAC System Disruption\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been a client of ProCare Facility Solutions for the past two years, primarily utilizing your facility management services for our commercial property. I have always appreciated the meticulous attention to detail and the seamless management provided by your team.\n\nHowever, I am writing to bring to your immediate attention a pressing issue that has arisen in our facility. Over the past week, we have encountered significant disruptions in our HVAC system, which has led to uncomfortable working conditions for our staff. 
Despite our best efforts to manage the situation internally, the problem persists and seems to be escalating.\n\nWe have already attempted basic troubleshooting measures, including resetting the system and checking for any visible obstructions or faults. Unfortunately, these steps have not resolved the issue, and we are now facing a critical need for professional intervention.\n\nGiven the urgency of the situation, I kindly request that you dispatch a technician at the earliest possible opportunity to diagnose and rectify the problem. The well-being and productivity of our team are being adversely affected, and we are keen to restore normalcy as swiftly as possible.\n\nThank you for your prompt attention to this matter. I look forward to your swift response and resolution.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": true, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Inquiry About Training Programs for Facility Management\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I work closely with the owner of a local liquor store to promote our distillery's bourbon. We've been considering ways to improve our facility management practices and came across your comprehensive services.\n\nI'm particularly interested in learning more about the training programs you offer for facility management best practices. We believe that enhancing our knowledge in this area could significantly benefit our operations. 
Could you provide more details on the available training sessions, including schedules and any associated costs?\n\nWe haven't taken any steps yet, as we wanted to gather more information before proceeding. Your guidance on how to get started would be greatly appreciated.\n\nThank you for your time and assistance.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": true, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Urgent: Immediate Attention Required for Scheduling Issues\n\nDear ProCare Support Team,\n\nI hope this message finds you well. However, I must express my deep frustration with the current state of your scheduling services. As someone who values efficiency and reliability, the lack of coordination in your cleaning services is unacceptable.\n\nI have been a loyal customer of ProCare Facility Solutions, consistently appreciating the quality of your cleaning services. Unfortunately, recent scheduling issues have caused significant disruptions. Despite my repeated attempts to establish a consistent cleaning schedule for my office space, I have encountered numerous delays and miscommunications. This is far below the standard of service I expect from a company that prides itself on professionalism and reliability.\n\nI have reached out to your team multiple times to address this issue, but the responses have been slow and unhelpful. It seems no one is taking responsibility for the scheduling mishaps, leaving me to deal with the consequences.\n\nI urge you to take immediate action to rectify this situation. 
I need a reliable and consistent cleaning schedule that aligns with my needs. Please assign a competent team member to handle this matter and ensure that such issues do not recur in the future.\n\nThank you for your prompt attention to this matter. I look forward to a swift resolution.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": true, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"negative\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Routine Maintenance Request for HVAC System\n\nHi ProCare Support Team,\n\nI hope this message finds you well! My name is [Sender], and I\u2019ve been enjoying the fantastic services provided by ProCare Facility Solutions for my residential property. Your team has always been top-notch, and I truly appreciate the attention to detail and professionalism.\n\nI\u2019m reaching out today with a small request regarding the routine maintenance of our HVAC system. I\u2019ve noticed some minor inconsistencies in the temperature control, and I think it might be time for a check-up to ensure everything is running smoothly. I\u2019m not entirely sure about the technicalities, but I trust your expertise to handle it.\n\nI haven\u2019t taken any steps yet, as I wanted to consult with the professionals first. Could you please schedule a maintenance visit at your earliest convenience? There\u2019s no rush, so whenever it fits into your schedule would be perfect.\n\nThank you so much for your help and for always providing such excellent service. 
Looking forward to hearing from you soon!\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Concerns Regarding Recent Service Experience\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I represent [Multinational Corporation Name]. We have been utilizing your facility management and maintenance services for our office buildings across several locations. Overall, we have been satisfied with the quality of service provided by ProCare Facility Solutions.\n\nHowever, I would like to bring to your attention a few concerns that have recently come to our notice. Over the past month, we have observed inconsistencies in the cleaning schedules and maintenance routines. Specifically, there have been delays in the routine maintenance of our HVAC systems, which has caused some discomfort for our employees. Additionally, the quality of the cleaning services has not been up to the usual standards we have come to expect from your team.\n\nWe have already reached out to our designated account manager to address these issues, but the response has been slower than anticipated. Given the importance of maintaining a comfortable and clean working environment for our staff, we would appreciate a more prompt resolution to these concerns.\n\nCould you please look into this matter and provide us with an update on the steps being taken to rectify these issues? 
We value the partnership we have with ProCare Facility Solutions and hope to continue working together to ensure our facilities are well-maintained.\n\nThank you for your attention to this matter. I look forward to your prompt response.\n\nBest regards,\n\n[Sender] \n[Multinational Corporation Name] \n[Contact Information]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Urgent Assistance Needed for HVAC Maintenance\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I recently started using your services for my home. I have an urgent request that I need help with.\n\nI\u2019m not very handy around the house and prefer to keep things simple. I\u2019ve been trying to follow the maintenance plan you provided, but I\u2019m finding it a bit overwhelming. Specifically, I\u2019m struggling with the HVAC system maintenance. The instructions seem too complex for someone like me who isn\u2019t very experienced with these things.\n\nI\u2019ve tried to follow the steps as best as I can, but I\u2019m worried I might mess something up. I haven\u2019t done anything drastic yet, just some basic cleaning and checking, but I\u2019m not confident I\u2019m doing it right.\n\nCould you please provide a simpler, more straightforward guide or perhaps send someone over to help me out? 
I\u2019d really appreciate any assistance you can offer to make this process easier for me.\n\nThank you for your prompt attention to this matter.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": true, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Assistance Required for Facility Management Issue\n\nDear ProCare Support Team,\n\nGreetings and blessings to you all. I hope this message finds you in good health and high spirits.\n\nMy name is [Sender], and I have been a satisfied client of ProCare Facility Solutions for the past two years. Your dedication to maintaining a safe and efficient environment has always resonated with me, and I deeply appreciate the quality of service you provide.\n\nRecently, I have encountered an issue with the facility management at our residential complex. Specifically, there seems to be a recurring problem with the coordination of space utilization in our common areas. While the overall management has been commendable, this particular aspect requires some attention to ensure that all residents can enjoy the shared spaces harmoniously.\n\nI have already spoken with our on-site manager and attempted to address the issue by rearranging some of the furniture and scheduling usage times. However, these measures have not fully resolved the problem, and I believe a more comprehensive solution is needed.\n\nI kindly request your assistance in reviewing the current space utilization plan and providing guidance on how we can optimize it for better efficiency and harmony. 
Your expertise in facility management is highly valued, and I am confident that with your support, we can find a suitable resolution.\n\nThank you for your attention to this matter. I look forward to your prompt response and am hopeful for a positive outcome.\n\nWarm regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Routine Maintenance Request for HVAC System\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is Officer John Mitchell, and I oversee the facility management for our precinct. We've been utilizing ProCare Facility Solutions for our maintenance needs for the past two years, and I must say, your services have been instrumental in keeping our operations running smoothly.\n\nI am writing to request routine maintenance for our HVAC system. As you know, maintaining a comfortable environment is crucial for our team, especially given the nature of our work. The system has been functioning adequately, but it's due for its scheduled check-up to ensure everything continues to run efficiently.\n\nSo far, we haven't encountered any major issues, but I believe it's prudent to stay ahead of any potential problems. Could you please arrange for a technician to come by sometime next week? We are flexible with the timing, but a mid-week appointment would be ideal.\n\nThank you for your attention to this matter. 
I look forward to your prompt response and continued excellent service.\n\nBest regards,\n\nOfficer John Mitchell\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Scheduling Specialized Cleaning Services for My Art Collection\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been a client of ProCare Facility Solutions for some time now. I am reaching out to discuss the scheduling of cleaning services for my residence, which houses an extensive collection of Michelangelo's sculptures.\n\nGiven the delicate nature of these artworks, it is imperative that the cleaning is conducted with the utmost care and precision. I would like to arrange a specialized cleaning schedule that ensures the sculptures are maintained in pristine condition without compromising their integrity.\n\nPreviously, I have coordinated with your team for routine cleaning services, and I have been quite satisfied with the results. However, the unique requirements of these sculptures necessitate a more tailored approach. I am looking for a cleaning plan that includes deep cleaning and the use of eco-friendly products to preserve the sculptures' original state.\n\nCould you please assist me in setting up a suitable cleaning schedule? I am available to discuss the specifics at your earliest convenience. 
Your prompt attention to this matter would be greatly appreciated.\n\nThank you for your continued support and excellent service.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": true, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Scheduled Maintenance Request for HVAC System\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been a proud client of ProCare Facility Solutions for the past three years. I have always been impressed by your commitment to quality and the exceptional service you provide, which is why I am reaching out to you today with a positive outlook.\n\nRecently, I encountered an issue with the HVAC system in my residential complex. While it is not an immediate emergency, it is something that needs to be addressed promptly to ensure the comfort and safety of the residents. The system has been making unusual noises and is not maintaining the desired temperature consistently.\n\nI have already checked the thermostat settings and ensured that the filters are clean, but the problem persists. Given your expertise and my positive experiences with your maintenance services, I am confident that your team can swiftly diagnose and resolve this issue.\n\nCould you please arrange for a technician to visit our property at the earliest convenience for a routine maintenance check? I am looking forward to your prompt assistance and am sure that, as always, ProCare will deliver a solution that exceeds expectations.\n\nThank you for your attention to this matter. 
I appreciate your support and dedication to maintaining our facilities in top condition.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Scheduled Maintenance Request for HVAC System\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been a client of ProCare Facility Solutions for the past few years, relying on your expertise to maintain my residential property. I am writing to bring to your attention an issue that has recently arisen with the HVAC system in my home.\n\nOver the past few days, I have noticed that the heating system is not functioning as it should. Despite setting the thermostat to the desired temperature, the system fails to maintain a consistent warmth throughout the house. This inconsistency is becoming increasingly uncomfortable, especially as the weather turns colder.\n\nI have attempted to troubleshoot the problem by checking the thermostat settings and ensuring that the filters are clean. However, these steps have not resolved the issue. Given the importance of a properly functioning HVAC system, I believe it is necessary to seek professional assistance to address this matter promptly.\n\nCould you please arrange for a technician to visit my property and diagnose the problem as part of the routine maintenance schedule? 
I would appreciate it if this could be scheduled at your earliest convenience, as I am keen to restore a comfortable living environment.\n\nThank you for your attention to this matter. I look forward to your prompt response and resolution.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Scheduling Cleaning Services for Upcoming Event\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I run a food truck specializing in Southern cuisine. We\u2019ve been serving up delicious meals at various country music events, and I\u2019ve been considering your cleaning services to help keep our operation spotless.\n\nWe have a big event coming up next month, and I want to ensure everything is in top shape. I\u2019m looking to schedule a thorough cleaning for our food truck, including deep cleaning and window washing. I\u2019ve heard great things about your eco-friendly cleaning products and practices, and I believe they would be perfect for our needs.\n\nI haven\u2019t taken any steps yet to resolve this, as I wanted to reach out directly to you for the best advice and scheduling options. 
Could you please provide me with available dates and any additional information I might need to get this set up?\n\nThanks for your help, and I look forward to hearing from you soon.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": true, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Urgent Assistance Needed with Facility Management Coordination\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am a solar panel engineer currently managing a residential complex that benefits from your exceptional facility management services. I have always appreciated the professionalism and dedication your team brings to maintaining our environment.\n\nRecently, I have encountered a few critical challenges with the coordination of space utilization and security measures within our facility. These issues require immediate attention to ensure the safety and well-being of everyone in the complex.\n\nTo provide some context, we have been experiencing significant inconsistencies in the scheduling and execution of space utilization plans, leading to frequent overlaps and confusion among residents. Additionally, there have been concerning lapses in our security protocols that need urgent resolution.\n\nI have already taken the initiative to review our current plans and made some adjustments to mitigate the impact. 
However, a more comprehensive approach, guided by your team\u2019s expertise, is urgently needed.\n\nCould you please assist us in reviewing and optimizing our space utilization and security measures? Your prompt insights and recommendations would be greatly valued, and I am confident that together we can enhance the overall efficiency and safety of our facility.\n\nThank you for your continued support and dedication. I look forward to your immediate response and assistance.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Routine Maintenance Request for HVAC System\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been utilizing your facility management services for my photo restoration studio for the past year. Your team has always been reliable, and I appreciate the quality of service provided.\n\nI am writing to request routine maintenance for the HVAC system in my studio. While there are no immediate issues, I believe it is prudent to ensure that everything is functioning optimally, especially as we approach the colder months. Regular maintenance is crucial for maintaining the right environment for my vintage photographs, and I trust your expertise in handling this.\n\nCould you please schedule a visit at your earliest convenience to perform the necessary checks and maintenance?\n\nThank you for your attention to this matter. 
I look forward to your prompt response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Routine Maintenance Request for HVAC System\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I manage a series of residential properties that have been utilizing your facility management services for the past year. Overall, we have been quite satisfied with the level of professionalism and efficiency your team has demonstrated.\n\nI am writing to request routine maintenance for the HVAC system in one of our residential complexes. While there are no immediate issues, I believe it is prudent to ensure that everything is functioning optimally, especially as we approach the colder months. Regular upkeep is essential to maintain the comfort and satisfaction of our tenants.\n\nTo date, we have not encountered any significant problems with the HVAC system, and this request is purely preventative. We have followed the basic maintenance guidelines provided by your team, but I think a professional check-up would be beneficial at this juncture.\n\nCould you please schedule a routine maintenance visit at your earliest convenience? We are flexible with the timing and can accommodate your team\u2019s availability.\n\nThank you for your attention to this matter. 
I look forward to your prompt response.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Assistance Needed for Quality and Safety Concerns\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am part of the facilities management team at [Technology Company]. We've been utilizing ProCare Facility Solutions for our office maintenance and cleaning services for the past year, and I must say, your team has consistently delivered exceptional service.\n\nRecently, we've encountered a few issues related to the quality and safety of our facility that we need your assistance with. Specifically, we've noticed some inconsistencies in the cleaning routines, particularly in high-traffic areas like our main lobby and conference rooms. Additionally, there have been a few minor safety concerns regarding the maintenance of our HVAC system.\n\nWe have already taken some initial steps to address these issues internally, such as adjusting our cleaning schedules and conducting a preliminary inspection of the HVAC system. However, we believe that your expertise and comprehensive approach will be invaluable in resolving these concerns effectively.\n\nCould you please arrange for a detailed inspection and provide recommendations on how we can enhance the quality and safety of our facility? We are confident that with your support, we can maintain the high standards we strive for.\n\nThank you for your attention to this matter. 
We look forward to your prompt response and continued partnership.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Inquiry Regarding Eco-Friendly Cleaning Services\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is Dr. Alex Thompson, and I am an ecological economist currently engaged in research on the economic impact of insect behavior on agriculture. I have recently come across your company and am interested in your specialized cleaning services.\n\nGiven the nature of my work, maintaining a clean and controlled environment is crucial for the accuracy of my research. I am particularly interested in your deep cleaning and eco-friendly cleaning products, as these align with my commitment to sustainability and environmental health.\n\nAt this stage, I am in the preliminary phase of exploring potential service providers and have not yet taken any specific steps towards engaging a cleaning service. I would appreciate it if you could provide more detailed information about your specialized cleaning services, including any relevant case studies or references that highlight your expertise in maintaining environments similar to mine.\n\nThank you for your time and assistance. I look forward to your response.\n\nBest regards,\n\nDr. 
Alex Thompson\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": true, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Coordination of Space Utilization in Common Areas\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been a satisfied client of ProCare Facility Solutions for the past year. As a devoted korfball player from Suriname, I truly appreciate the excellent services you provide, which allow me to focus on my training and matches without worrying about facility upkeep.\n\nI am writing to bring a minor issue to your attention regarding the management of my residential complex. Recently, I have noticed that the coordination of space utilization in our common areas could be improved. While this is not an urgent matter, I believe addressing it would enhance the overall efficiency and enjoyment of our shared spaces.\n\nI have not taken any specific steps to resolve this issue yet, as I trust your expertise and experience in handling such matters. I would greatly appreciate it if your team could look into this and suggest any possible improvements.\n\nThank you for your continued support and dedication to maintaining our facilities. 
I look forward to your assistance in resolving this minor issue.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Routine Maintenance Request for Upcoming Event\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am a publicist managing several high-profile clients. We have been utilizing ProCare Facility Solutions for our maintenance needs, and I must commend your team for the exceptional service provided thus far.\n\nI am reaching out today with a specific request that requires your timely attention. We have an upcoming event at one of our client's properties, and it is imperative that the facility is in pristine condition. Specifically, we need routine maintenance performed on the HVAC system, plumbing, and electrical systems to ensure everything operates flawlessly during the event.\n\nI have already coordinated with your team in the past for similar requests, and I trust that you understand the level of perfection expected. Given the nature of our clientele, any oversight could reflect poorly on both our brands. Therefore, I urge you to prioritize this request and ensure that all necessary maintenance is completed well in advance of the event date.\n\nPlease confirm receipt of this message and provide a timeline for when the maintenance will be conducted. 
Your prompt attention to this matter is greatly appreciated.\n\nThank you for your continued support and dedication to excellence.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Routine Maintenance Request for My Studio\n\nDear ProCare Facility Solutions Team,\n\nI hope this message finds you well. My name is [Sender], and I am a Bollywood film director currently working on an exciting new project. I have been a satisfied client of ProCare Facility Solutions for quite some time now, and I must say, your services have always been top-notch. Your team's dedication to maintaining a pristine environment has allowed me to focus on my creative endeavors without any distractions.\n\nI am writing to request routine maintenance for my studio. While everything is functioning smoothly, I believe it's always better to stay ahead with preventative measures. Specifically, I would like to schedule a check-up for the HVAC system and a general inspection of the plumbing and electrical systems. Ensuring that everything is in perfect working order will help maintain the seamless workflow we currently enjoy.\n\nI haven't encountered any issues so far, but I believe in the adage, \"Prevention is better than cure.\" Therefore, I haven't taken any steps myself, as I trust your expertise in handling these matters efficiently.\n\nCould you please arrange for a maintenance visit at your earliest convenience? 
I understand that this is not an urgent request, so I am flexible with the scheduling. Your team's professionalism and attention to detail have always impressed me, and I am confident that this routine check-up will be handled with the same level of excellence.\n\nThank you for your continued support and exceptional service. I look forward to hearing from you soon.\n\nWarm regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Concerns About Facility Safety and Maintenance\n\nHi [Receiver],\n\nI hope this message finds you well. My name is [Sender], and I\u2019ve been using ProCare Facility Solutions for our residential complex for a while now. I used to work at a record store, and I often find myself reminiscing about the good old days of flipping through vinyls and admiring album covers. But today, I\u2019m writing to you about something a bit more pressing.\n\nLately, I\u2019ve noticed a few issues around our building that have me a bit concerned. Specifically, there are a few areas where the maintenance seems to be slipping. For instance, the HVAC system has been making some unusual noises, and there\u2019s a persistent leak in the plumbing that hasn\u2019t been addressed. Additionally, I\u2019ve observed that the cleaning in common areas isn\u2019t as thorough as it used to be. 
These issues are starting to affect the overall safety and quality of our living environment.\n\nI\u2019ve already tried reaching out to the on-site maintenance team, but it seems like they\u2019re either overwhelmed or not fully aware of the extent of these problems. I understand that these things can take time, but I\u2019m hoping we can get them sorted out before they become bigger issues.\n\nCould you please look into these concerns and let me know what steps can be taken to address them? I\u2019m particularly interested in ensuring that our HVAC and plumbing systems are functioning properly and that the cleaning standards are maintained at the high level we\u2019ve come to expect from ProCare.\n\nThank you for your attention to this matter. I look forward to hearing from you soon.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Scheduled Maintenance Request for Bathroom Plumbing\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been utilizing your services for my residential property for some time now. I have always appreciated the quality and reliability of your maintenance services.\n\nI am writing to inform you of a minor issue that has recently come up. There seems to be a small problem with the plumbing in my bathroom. 
While it is not an urgent matter, I would like to have it addressed at your earliest convenience to prevent any potential complications.\n\nI have not taken any steps to resolve the issue myself, as I trust your team\u2019s expertise in handling such matters. Could you please arrange for a technician to visit my home and take a look at the problem as part of your scheduled maintenance services?\n\nThank you for your attention to this matter. I look forward to your prompt response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Inquiry Regarding Maintenance Quality and Safety for Exhibit\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is Dr. Evelyn Harper, and I am an expert in agricultural artifacts, currently curating an exhibit that will showcase a variety of historical farming tools and machinery. I have been utilizing your facility management and maintenance services for our exhibit space, and I am reaching out to discuss a few concerns regarding the quality and safety standards of the maintenance work being performed.\n\nWhile I appreciate the comprehensive services provided by ProCare Facility Solutions, I have noticed a few areas where the maintenance quality could be improved to ensure the safety and preservation of our valuable artifacts. Specifically, there have been instances where the cleaning products used seem to leave residues that could potentially harm the delicate surfaces of some items. 
Additionally, I have observed that the HVAC system's performance has been inconsistent, which is crucial for maintaining the optimal environment for artifact preservation.\n\nI have not yet taken any steps to address these issues directly with your on-site team, as I wanted to first seek guidance from your support team on the best course of action. I would greatly appreciate it if you could provide recommendations or adjustments to the current maintenance protocols to better align with the specific needs of our exhibit.\n\nThank you for your attention to this matter. I look forward to your prompt response and any assistance you can offer to ensure the continued safety and quality of our exhibit space.\n\nBest regards,\n\nDr. Evelyn Harper\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Inquiry About Scheduling Deep Cleaning Services\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am a researcher who has recently been in touch with Daniel through various academic forums. We have exchanged ideas and supported each other's work, which has been quite enriching.\n\nI am writing to inquire about scheduling cleaning services for an upcoming visit to my residential property. I have heard great things about your comprehensive cleaning services and would like to ensure that my home is in pristine condition for my guests. 
Specifically, I am interested in a deep cleaning service, including window washing and carpet maintenance.\n\nI have not yet taken any steps to schedule this service, as I wanted to confirm the availability and any specific requirements you might have. Could you please provide me with the available dates and any additional information needed to proceed with the booking?\n\nThank you for your assistance. I look forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": true, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Urgent Assistance Needed for HVAC Repair\n\nHi ProCare Support Team,\n\nI hope y'all are doing well! My name is [Sender], and I've been a happy customer of ProCare Facility Solutions for a while now. Y'all have always done such a fantastic job keeping my home in tip-top shape, and I truly appreciate it.\n\nI'm reaching out because I'm in a bit of a pickle. The HVAC system in my home has suddenly stopped working, and with the Southern heat, it's becoming unbearable. I noticed the issue this morning when the air conditioning just wouldn't kick in, and it's been getting hotter by the hour.\n\nI've tried resetting the thermostat and checking the circuit breaker, but nothing seems to be working. Given the current situation, I really need someone to come out and take a look at it as soon as possible. I know y'all are the best in the business, and I'm confident you can get this sorted out quickly.\n\nCould you please send a technician over at your earliest convenience? 
I would be so grateful for your prompt assistance with this urgent repair.\n\nThank you so much for your help!\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": true, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"high\"}"}, {"fields": {"input": "Subject: Inquiry About Facility Management Training Programs\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am currently exploring career options in facility management. I recently came across ProCare Facility Solutions and was impressed by the range of services and the emphasis on sustainability and quality.\n\nI am particularly interested in your training and support services. Could you provide more details about the comprehensive training programs you offer, especially those related to facility management best practices? 
I am keen to understand the structure, duration, and any prerequisites for these programs.\n\nI haven't taken any steps yet to resolve this query as I thought it best to reach out directly to your support team for accurate information.\n\nLooking forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": true, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": true}, \"sentiment\": \"neutral\", \"urgency\": \"low\"}"}, {"fields": {"input": "Subject: Assistance Needed for HVAC Maintenance\n\nHi [Receiver],\n\nI hope this message finds you well. My name is [Sender], and I\u2019ve been a long-time friend of Jerome Frey. I\u2019ve always heard great things about ProCare Facility Solutions from him, and I\u2019m reaching out today because I need some help with an issue at my property.\n\nRecently, I\u2019ve been experiencing some problems with the HVAC system in my building. It\u2019s not an immediate crisis, but it\u2019s definitely something that needs attention soon. The system has been making unusual noises and isn\u2019t maintaining the temperature as it should. I\u2019ve tried adjusting the thermostat and checking the filters, but the problem persists.\n\nGiven the situation, I\u2019d appreciate it if you could arrange for someone to come by and take a look at the system. I\u2019m hoping to get this resolved before it turns into a bigger issue. 
Your team\u2019s expertise in handling routine maintenance is well-known, and I\u2019m confident you\u2019ll be able to help.\n\nLooking forward to your prompt response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Concerns Regarding Cleaning Quality and Safety Standards\n\nHi [Receiver],\n\nI hope this message finds you well. My name is [Sender], and I am a developer specializing in embedded systems. I have been utilizing ProCare Facility Solutions for the maintenance and management of our office building for the past year.\n\nI am writing to discuss a few observations related to the quality and safety standards of the cleaning services provided. While I understand that maintaining a pristine environment is a continuous effort, I have noticed some inconsistencies in the cleaning routines, particularly in high-traffic areas like the main lobby and conference rooms. These areas seem to accumulate dust and debris more quickly than expected, which raises some concerns about the overall effectiveness of the cleaning protocols in place.\n\nTo address this, I have already spoken with the on-site cleaning staff and reviewed the cleaning schedules. However, the issue persists, and I believe it might be beneficial to reassess the current cleaning strategies or perhaps introduce more frequent checks in these critical areas.\n\nCould you please provide some guidance on how we can ensure that these high-traffic zones are maintained to the highest standards? 
Any recommendations or adjustments to the current cleaning plan would be greatly appreciated.\n\nThank you for your attention to this matter. I look forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Feedback on Recent Maintenance Service\n\nDear [Receiver],\n\nI hope this message finds you well. My name is [Sender], and I have been utilizing the services of ProCare Facility Solutions for the maintenance of my residential property for the past year. I am writing to provide some feedback regarding a recent maintenance service.\n\nWhile I appreciate the overall quality and professionalism that ProCare consistently delivers, I encountered a minor issue during the last scheduled maintenance visit. Specifically, the technician seemed to overlook a routine check on the HVAC system, which is a critical component of my home\u2019s comfort and efficiency. This oversight was not immediately apparent, but I noticed a slight decline in performance a few days after the visit.\n\nI have not yet taken any steps to address this issue, as I wanted to bring it to your attention first. I believe it is important for your team to be aware of such instances to ensure they are addressed promptly and do not recur in the future.\n\nI would appreciate it if you could arrange for a follow-up visit to inspect and service the HVAC system at your earliest convenience. 
Additionally, any insights or recommendations on how to prevent similar oversights would be greatly valued.\n\nThank you for your attention to this matter. I look forward to your response and continued excellent service from ProCare Facility Solutions.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Concerns About Studio Maintenance and Rent Increase\n\nDear ProCare Facility Solutions Support Team,\n\nI hope this message finds you well. My name is [Sender], and I am a local artist who has been renting a studio space in one of the properties managed by your company. I have generally been satisfied with the services provided, but I am writing to express some concerns that have arisen recently.\n\nOver the past year, my studio rent has doubled, which has been quite challenging to manage. While I understand that rent increases can happen, I am particularly concerned about the maintenance and upkeep of the studio space. Despite the significant increase in rent, I have noticed that the quality of maintenance has not improved correspondingly. There have been recurring issues with the HVAC system, and the plumbing has required frequent attention. These problems have disrupted my work and added to my stress.\n\nI have previously reported these issues to the building management, but the solutions provided have been temporary at best. 
Given the substantial rent increase, I believe it is reasonable to expect a higher standard of maintenance and more prompt resolutions to these problems.\n\nI am reaching out to request a thorough review of the maintenance services provided for my studio. Specifically, I would appreciate a comprehensive inspection of the HVAC and plumbing systems to ensure they are functioning correctly and any necessary repairs are made promptly. Additionally, I would like to understand if there are any plans to improve the overall maintenance services in light of the increased rent.\n\nThank you for your attention to this matter. I look forward to your prompt response and hope we can resolve these issues satisfactorily.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": true, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": true, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Scheduling Cleaning Services for Upcoming Event\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I\u2019ve been a loyal customer of ProCare Facility Solutions for the past year. I manage the performance analysis for the New York Red Bulls, and I must say, your services have been instrumental in maintaining our office environment, allowing us to focus on our work without any distractions.\n\nI\u2019m reaching out to schedule a cleaning service for our office space. We have an important event coming up next month, and I want to ensure that everything is in pristine condition for our guests. 
Given the nature of our work, a clean and organized environment is crucial for our productivity and overall morale.\n\nI haven\u2019t taken any steps yet to schedule this service, as I wanted to get in touch with your team directly to ensure we get the best possible arrangement. Ideally, we would need a thorough cleaning a few days before the event, including window washing and carpet maintenance.\n\nCould you please assist me in setting up a suitable cleaning schedule? I\u2019m confident that your team will handle this with the same excellence and attention to detail that we\u2019ve come to expect from ProCare.\n\nThank you so much for your help. Looking forward to your response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": true, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"positive\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Disappointed with Service Quality\n\nDear ProCare Support Team,\n\nI hope this email finds you well, though I must admit, I am not in the best of spirits as I write this. My name is Rajesh, and I run a small barber shop in Mumbai. I have been using your facility management services for the past six months, but lately, I have been quite disappointed with the quality of service.\n\nTo be honest, I expected a lot more from a company that claims to be a premier provider of facility management and maintenance. The cleaning services, in particular, have been subpar. The floors are not as clean as they used to be, and the windows have streaks that are quite noticeable. 
This is not the level of service I was promised when I signed up.\n\nI have tried to address these issues by speaking to your customer service team on a couple of occasions, but the improvements have been minimal, if any. It feels like my concerns are not being taken seriously, and this is quite frustrating.\n\nI would appreciate it if you could look into this matter and ensure that the quality of service improves. I am not asking for anything extraordinary, just the level of service that was promised when I became a customer.\n\nThank you for your attention to this matter. I hope to see some positive changes soon.\n\nBest regards,\nRajesh"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"negative\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Request for Post-Renovation Cleaning Services\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been utilizing ProCare Facility Solutions for the maintenance and management of my commercial property for the past few years. Your services have always been reliable and efficient, which is why I am reaching out to you today.\n\nI am writing to request assistance with a specialized cleaning service for our office building. We have recently undergone some renovations, and there is a significant amount of dust and debris that needs to be addressed. 
Additionally, the carpets and windows require a thorough cleaning to restore them to their original condition.\n\nI have already scheduled a routine cleaning, but I believe this situation requires more specialized attention. Given the nature of the work needed, I would appreciate it if you could arrange for a team that specializes in deep cleaning and post-renovation cleanup.\n\nCould you please provide me with the available dates and any additional information required to proceed with this request? Your prompt assistance in this matter would be greatly appreciated, as we aim to have the office fully operational and presentable as soon as possible.\n\nThank you for your attention to this matter. I look forward to your response.\n\nBest regards,\n\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": false, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": true, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Disappointing Experience with Recent Cleaning Service\n\nHi ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I\u2019ve been using ProCare Facility Solutions for my apartment cleaning for the past few months. I\u2019ve always been impressed with your services, but I have to say, my recent experience has left me quite disappointed.\n\nLast week, I scheduled a deep cleaning for my apartment, expecting the usual top-notch service. However, when I returned home, I found several areas that were clearly overlooked. The windows were still smudged, and the carpets didn\u2019t seem to have been cleaned at all. 
It\u2019s really frustrating because I rely on your team to keep my space in pristine condition, especially with my busy schedule.\n\nI\u2019ve tried to address this by calling your support line, but I haven\u2019t received any follow-up. I understand that things can get busy, but I would appreciate some acknowledgment and a plan to rectify this situation.\n\nCould you please arrange for a follow-up cleaning to address these issues? I really want to continue using your services, but this experience has shaken my confidence a bit.\n\nThank you for your attention to this matter. I look forward to hearing from you soon.\n\nBest,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": true, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": false, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"negative\", \"urgency\": \"medium\"}"}, {"fields": {"input": "Subject: Immediate Assistance Required for Emergency Repair\n\nDear ProCare Support Team,\n\nI hope this message finds you well. My name is [Sender], and I have been a resident at [Residential Complex Name] for the past few years. I have always appreciated the high standards of maintenance and cleanliness that ProCare Facility Solutions provides.\n\nHowever, I am currently facing a significant issue that requires urgent attention. Earlier today, I discovered a severe leak in the plumbing system of my apartment. The water is rapidly spreading, and I am concerned about potential damage to my property and the overall safety of the living environment.\n\nI have attempted to contain the leak by shutting off the main water valve, but the situation seems to be beyond my control. 
Given the urgency of this matter, I kindly request immediate assistance from your emergency repair team to address and resolve this issue as swiftly as possible.\n\nYour prompt response and action would be greatly appreciated, as I am deeply concerned about the potential impact on my home and the well-being of my neighbors.\n\nThank you for your attention to this matter. I look forward to your swift response.\n\nBest regards,\n[Sender]"}, "answer": "{\"categories\": {\"routine_maintenance_requests\": false, \"customer_feedback_and_complaints\": false, \"training_and_support_requests\": false, \"quality_and_safety_concerns\": true, \"sustainability_and_environmental_practices\": false, \"cleaning_services_scheduling\": false, \"specialized_cleaning_services\": false, \"emergency_repair_services\": true, \"facility_management_issues\": false, \"general_inquiries\": false}, \"sentiment\": \"neutral\", \"urgency\": \"high\"}"}] \ No newline at end of file diff --git a/tutorials/ai-core-genaihub-prompt-optimization/facility-train.json b/tutorials/ai-core-genaihub-prompt-optimization/facility-train.json new file mode 100644 index 0000000000..0a7f50f7b0 --- /dev/null +++ b/tutorials/ai-core-genaihub-prompt-optimization/facility-train.json @@ -0,0 +1 @@ +{"error": {"code": "01700003", "message": "File not found.", "requestId": "4f6e78f7-a702-95e8-a972-ca0476623ee7", "target": "/file/api/v1/files/default/example/facility-train.json"}} \ No newline at end of file diff --git a/tutorials/ai-core-genaihub-prompt-optimization/facility_prompt.yaml b/tutorials/ai-core-genaihub-prompt-optimization/facility_prompt.yaml new file mode 100644 index 0000000000..e10ca3c8b8 --- /dev/null +++ b/tutorials/ai-core-genaihub-prompt-optimization/facility_prompt.yaml @@ -0,0 +1,13 @@ +system: |- + You are a helpful assistant + +user: |- + Given the following message: + --- + {{?input}} + --- + Extract and return a json with the following keys and values: + - "urgency" as one of `high`,
`medium`, `low` + - "sentiment" as one of `negative`, `neutral`, `positive` + - "categories" Create a dictionary with categories as keys and boolean values (True/False), where the value indicates whether the category is one of the best matching support category tags from: `emergency_repair_services`, `routine_maintenance_requests`, `quality_and_safety_concerns`, `specialized_cleaning_services`, `general_inquiries`, `sustainability_and_environmental_practices`, `training_and_support_requests`, `cleaning_services_scheduling`, `customer_feedback_and_complaints`, `facility_management_issues` + Your complete message should be a valid json string that can be read directly and only contain the keys mentioned in the list above. Never enclose it in ```json...```, no newlines, no unnecessary whitespace. diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image-br01.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image-br01.png new file mode 100644 index 0000000000..1897aa2e2c Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image-br01.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_007.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_007.png new file mode 100644 index 0000000000..1076e84281 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_007.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_008.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_008.png new file mode 100644 index 0000000000..e1b61605de Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_008.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_1.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_1.png new file mode 100644 index 0000000000..b8740d5997 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_1.png differ diff --git
a/tutorials/ai-core-genaihub-prompt-optimization/img/image_33.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_33.png new file mode 100644 index 0000000000..1ee321aa5e Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_33.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_34.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_34.png new file mode 100644 index 0000000000..47498a7b07 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_34.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail01.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail01.png new file mode 100644 index 0000000000..5ddf8cf2a8 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail01.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail02.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail02.png new file mode 100644 index 0000000000..4006bb93be Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail02.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail03.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail03.png new file mode 100644 index 0000000000..5a80b84586 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail03.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail04.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail04.png new file mode 100644 index 0000000000..bb87cb3e52 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail04.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail05.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail05.png new file mode 100644 index 0000000000..b6e41a1198 Binary files 
/dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail05.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail06.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail06.png new file mode 100644 index 0000000000..1361320716 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail06.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail07.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail07.png new file mode 100644 index 0000000000..8e72747ae9 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail07.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail08.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail08.png new file mode 100644 index 0000000000..8ba5ed5c7c Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail08.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail09.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail09.png new file mode 100644 index 0000000000..a49baa84d6 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail09.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail10.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail10.png new file mode 100644 index 0000000000..03c03c0ffa Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail10.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail11.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail11.png new file mode 100644 index 0000000000..5bc18ae4bd Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_ail11.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_arch.png 
b/tutorials/ai-core-genaihub-prompt-optimization/img/image_arch.png new file mode 100644 index 0000000000..e84b599b36 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_arch.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_br02.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_br02.png new file mode 100644 index 0000000000..2ba2c4733e Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_br02.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_br03.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_br03.png new file mode 100644 index 0000000000..501b92e1dd Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_br03.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_br04.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_br04.png new file mode 100644 index 0000000000..0e23c95f94 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_br04.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_br05.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_br05.png new file mode 100644 index 0000000000..6fdaca4336 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_br05.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_br06.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_br06.png new file mode 100644 index 0000000000..1d84f81492 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_br06.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_br07.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_br07.png new file mode 100644 index 0000000000..cf1dceed06 Binary files /dev/null and 
b/tutorials/ai-core-genaihub-prompt-optimization/img/image_br07.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_br08.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_br08.png new file mode 100644 index 0000000000..12107d1738 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_br08.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_br_dt.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_br_dt.png new file mode 100644 index 0000000000..fc0a10d6b8 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_br_dt.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_br_ex.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_br_ex.png new file mode 100644 index 0000000000..61554d0c81 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_br_ex.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_br_pr.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_br_pr.png new file mode 100644 index 0000000000..a3935c9a52 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_br_pr.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_py01.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_py01.png new file mode 100644 index 0000000000..e29a2aaa85 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_py01.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_py02.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_py02.png new file mode 100644 index 0000000000..924dbab968 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_py02.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_py03.png 
b/tutorials/ai-core-genaihub-prompt-optimization/img/image_py03.png new file mode 100644 index 0000000000..9dbbccdf76 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_py03.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/image_py04.png b/tutorials/ai-core-genaihub-prompt-optimization/img/image_py04.png new file mode 100644 index 0000000000..d262f05f3a Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-optimization/img/image_py04.png differ diff --git a/tutorials/ai-core-genaihub-prompt-optimization/img/requirements.txt b/tutorials/ai-core-genaihub-prompt-optimization/img/requirements.txt new file mode 100644 index 0000000000..b749a1816d --- /dev/null +++ b/tutorials/ai-core-genaihub-prompt-optimization/img/requirements.txt @@ -0,0 +1,12 @@ +sap-ai-sdk-gen[all] +python-dotenv==1.0.1 +boto3==1.37.4 +pandas==2.2.3 +numpy==1.26.4 +PyYAML +rich +json2html==1.3.0 +ipywidgets==8.1.0 +requests +matplotlib +tqdm diff --git a/tutorials/ai-core-genaihub-prompt-optimization/onboarding-tutorial.ipynb b/tutorials/ai-core-genaihub-prompt-optimization/onboarding-tutorial.ipynb new file mode 100644 index 0000000000..f8e3db101f --- /dev/null +++ b/tutorials/ai-core-genaihub-prompt-optimization/onboarding-tutorial.ipynb @@ -0,0 +1,1116 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": null, + "id": "9c7b73af-0100-471b-81e0-4d9fbddc6de3", + "metadata": { + "id": "9c7b73af-0100-471b-81e0-4d9fbddc6de3" + }, + "outputs": [], + "source": [ + "# Loading the credentials from the env file\n", + "from gen_ai_hub.proxy.gen_ai_hub_proxy import GenAIHubProxyClient\n", + "from dotenv import load_dotenv\n", + "import os\n", + "\n", + "load_dotenv(override=True)\n", + "\n", + "# Fetching environment variables\n", + "AICORE_BASE_URL = os.getenv(\"AICORE_BASE_URL\")\n", + "AICORE_RESOURCE_GROUP = os.getenv(\"AICORE_RESOURCE_GROUP\")\n", + "AICORE_AUTH_URL = os.getenv(\"AICORE_AUTH_URL\")\n", + 
"AICORE_CLIENT_ID = os.getenv(\"AICORE_CLIENT_ID\")\n", + "AICORE_CLIENT_SECRET = os.getenv(\"AICORE_CLIENT_SECRET\")\n", + "\n", + "# Initializing the GenAIHubProxyClient\n", + "client = GenAIHubProxyClient(\n", + " base_url=AICORE_BASE_URL,\n", + " auth_url=AICORE_AUTH_URL,\n", + " client_id=AICORE_CLIENT_ID,\n", + " client_secret=AICORE_CLIENT_SECRET,\n", + " resource_group=AICORE_RESOURCE_GROUP\n", + ")\n" + ] + }, + { + "cell_type": "markdown", + "id": "Pd3wsKls4OS5", + "metadata": { + "id": "Pd3wsKls4OS5" + }, + "source": [ + "# Dependencies and Helper Functions" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "j7YWZOg103r0", + "metadata": { + "id": "j7YWZOg103r0" + }, + "outputs": [], + "source": [ + "!pip install rich PyYAML \"sap-ai-sdk-gen[all]\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "03050198-427c-4157-9c06-eeda4aa2a47f", + "metadata": { + "id": "03050198-427c-4157-9c06-eeda4aa2a47f" + }, + "outputs": [], + "source": [ + "from gen_ai_hub.proxy import get_proxy_client\n", + "import pathlib\n", + "import yaml\n", + "\n", + "from ai_api_client_sdk.models.input_artifact_binding import InputArtifactBinding\n", + "from ai_api_client_sdk.models.parameter_binding import ParameterBinding\n", + "from ai_api_client_sdk.models.artifact import Artifact\n", + "from ai_api_client_sdk.models.label import Label\n", + "\n", + "SUPPORTED_MODELS = [\n", + " 'gemini-2.5-pro:001',\n", + " 'gpt-4o:2024-08-06'\n", + "]\n", + "\n", + "SUPPORTED_METRICS = [\"LLMaaJ:Sem_Sim_1\", \"JSON_Match\"]" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "f6e39b20", + "metadata": { + "id": "f6e39b20" + }, + "outputs": [], + "source": [ + "client = get_proxy_client()" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "6ea323c6", + "metadata": { + "id": "6ea323c6" + }, + "outputs": [], + "source": [ + "from pydantic import BaseModel\n", + "from typing import
List\n", + "import re\n", + "import requests\n", + "import json\n", + "\n", + "class PromptTemplate(BaseModel):\n", + " role: str\n", + " content: str\n", + "\n", + "\n", + "class PromptTemplateSpec(BaseModel):\n", + " template: List[PromptTemplate]\n", + "\n", + "\n", + " @property\n", + " def placeholders(self):\n", + " placeholders = set()\n", + " pattern = re.compile(r'\\{\\{\\s*\\?\\s*(\\w+)\\s*\\}\\}')\n", + " for message in self.template:\n", + " placeholders.update(pattern.findall(message.content))\n", + " return placeholders\n", + "\n", + " @classmethod\n", + " def from_optimizer_result(cls, input_):\n", + " placeholders = input_[\"user_message_template_fields\"]\n", + " def replace(msg):\n", + " for key in placeholders:\n", + " msg = msg.replace(\"{\"+key+\"}\", \"{{?\"+ key + \"}}\")\n", + " return msg\n", + "\n", + " return cls(\n", + " template=[\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": replace(input_[\"system_prompt\"]),\n", + " },{\n", + " \"role\": \"user\",\n", + " \"content\": replace(input_[\"user_message_template\"]),\n", + " }\n", + " ]\n", + " )\n", + "\n", + " def escape_curly_brackets(self) -> \"PromptTemplateSpec\":\n", + " # 1. 
Hide each {{?key}} placeholder with a unique token\n", + " placeholder_pattern = re.compile(r'\\{\\{\\s*\\?\\s*(\\w+)\\s*\\}\\}')\n", + " mapping = {}\n", + " counter = 1\n", + "\n", + " def _hide(match):\n", + " nonlocal counter\n", + " token = f\"__PLACEHOLDER_{counter}__\"\n", + " mapping[token] = match.group(0)\n", + " counter += 1\n", + " return token\n", + "\n", + " new_templates = []\n", + " for msg in self.template:\n", + " # a) hide custom placeholders\n", + " hidden = placeholder_pattern.sub(_hide, msg.content)\n", + " # b) escape all remaining braces\n", + " escaped = hidden.replace('{', '{{').replace('}', '}}')\n", + " # c) restore the original placeholders\n", + " for token, original in mapping.items():\n", + " escaped = escaped.replace(token, original)\n", + "\n", + " new_templates.append(PromptTemplate(role=msg.role, content=escaped))\n", + "\n", + " # return a fresh copy\n", + " return PromptTemplateSpec(template=new_templates)\n", + "\n", + "\n", + "\n", + "def fetch_prompt_template(prompt_template: str) -> PromptTemplateSpec:\n", + " headers = {\n", + " **client.request_header,\n", + " \"Content-Type\": \"application/json\",\n", + " }\n", + " url = f\"{client.ai_core_client.base_url}/lm/promptTemplates\"\n", + " scenario, sep, name = prompt_template.partition(\"/\")\n", + " if sep:\n", + " name, sep, version = name.partition(\":\")\n", + " if sep:\n", + " body = {\"name\": name,\n", + " \"version\": version,\n", + " \"scenario\": scenario,\n", + " \"includeSpec\": True\n", + " }\n", + " response = requests.get(url, headers=headers, params=body)\n", + " response.raise_for_status()\n", + " response = response.json()\n", + " if response[\"count\"] > 0:\n", + " response = response[\"resources\"][0]\n", + " else:\n", + " raise ValueError(f\"Prompt template {name} not found.\")\n", + " else:\n", + " url += f\"/{prompt_template}\"\n", + " response = requests.get(url, headers=headers)\n", + " response.raise_for_status()\n", + " 
response = response.json()\n", + " return PromptTemplateSpec.model_validate(response[\"spec\"])\n", + "\n", + "def load_prompt_template(prompt: str | pathlib.Path | list | dict | PromptTemplateSpec) -> PromptTemplateSpec:\n", + " if isinstance(prompt, PromptTemplateSpec):\n", + " return prompt\n", + " if isinstance(prompt, (str, pathlib.Path)) and pathlib.Path(prompt).exists():\n", + " with open(prompt, \"r\") as f:\n", + " prompt = yaml.safe_load(f)\n", + " elif isinstance(prompt, str):\n", + " return fetch_prompt_template(prompt)\n", + " if isinstance(prompt, dict):\n", + " # expect dict with keys \"system\" [optional] and \"user\"\n", + " messages = []\n", + " if \"system\" in prompt:\n", + " messages.append({\"role\": \"system\", \"content\": prompt[\"system\"]})\n", + " messages.append({\"role\": \"user\", \"content\": prompt[\"user\"]})\n", + " return PromptTemplateSpec(template=messages)\n", + " elif isinstance(prompt, list):\n", + " # expect list of dicts with keys \"role\" and \"content\"\n", + " return PromptTemplateSpec(template=prompt)\n", + " else:\n", + " raise ValueError(\"Prompt must be a string, Path, list or dict\")\n", + "\n", + "\n", + "def push_prompt_template(prompt_template: PromptTemplateSpec,\n", + " prompt_template_name_registry: str,\n", + " prompt_template_version: str,\n", + " scenario: str,\n", + " update=False):\n", + " headers = {\n", + " **client.request_header,\n", + " \"Content-Type\": \"application/json\",\n", + " }\n", + " url = f\"{client.ai_core_client.base_url}/lm/promptTemplates\"\n", + " body = {\"name\": prompt_template_name_registry,\n", + " \"version\": prompt_template_version,\n", + " \"scenario\": scenario}\n", + " res = requests.get(url, headers=headers, params=body).json()\n", + " if res[\"count\"] > 0 and not update:\n", + " print(f\"Prompt template {prompt_template_name_registry} already exists. 
Use update=True to update.\")\n", + " return res[\"resources\"][0]\n", + " # Prepare body\n", + "\n", + " body[\"spec\"] = prompt_template.model_dump()\n", + " # Prepare headers\n", + " response = requests.post(url, headers=headers, json=body)\n", + " # Handle response\n", + " if response.status_code == 201:\n", + " response = response.json()\n", + " elif response.status_code in (400, 409, 413):\n", + " # Return error details\n", + " raise requests.HTTPError(f\"Upload failed ({response.status_code}): {response.text}\")\n", + " else:\n", + " response.raise_for_status()\n", + " return response.json()\n", + "\n", + "\n", + "import re\n", + "\n", + "def convert_py_notation(template):\n", + " pattern = re.compile(r'\\{\\{\\s*\\?\\s*(\\w+)\\s*\\}\\}')\n", + " return pattern.sub(lambda match: \"{\" + match.group(1) + \"}\", template)\n", + "\n", + "\n", + "def validate_prompt(prompt: PromptTemplateSpec):\n", + " values = {k: \"???\" for k in prompt.placeholders}\n", + "\n", + " for message in prompt.template:\n", + " if message.role == \"user\":\n", + " try:\n", + " convert_py_notation(message.content).format(**values)\n", + " except KeyError as err:\n", + " msg = [\"Unexpected key error when running test formatting.\"]\n", + " msg += [\"This is most likely due to unescaped curly brackets.\"]\n", + " msg += [\"You can try fixing this by running `prompt = prompt.escape_curly_brackets()` and using the new prompt template.\"]\n", + " raise ValueError(\"\\n\".join(msg)) from err\n", + " return True\n", + "\n", + "\n", + "\n", + "\n", + "from rich.console import Console\n", + "from rich.highlighter import RegexHighlighter\n", + "from rich.theme import Theme\n", + "from rich.panel import Panel\n", + "from rich import print\n", + "\n", + "class TemplateHighlighter(RegexHighlighter):\n", + " \"\"\"Apply style to {{?key}} prompt template placeholders.\"\"\"\n", + "\n", + " base_style = \"template.\"\n", + " highlights = [r\"(?P<placeholder>\\{\\{\\s*\\?[^\\{\\}\\s]+\\s*\\}\\})\"]\n", + 
"highlighter = TemplateHighlighter()\n", + "theme = Theme({\"template.placeholder\": \"bold magenta\", \"example.email\": \"bold magenta\"})\n", + "console = Console(highlighter=highlighter, theme=theme)\n", + "\n", + "\n", + "def print_prompt_template(prompt_template: PromptTemplateSpec | str | pathlib.Path, addition: str | None = None):\n", + "\n", + " prompt_template = load_prompt_template(prompt_template)\n", + " addition = f' - {addition}' if addition else ''\n", + "\n", + " for message in prompt_template.template:\n", + " if message.role == \"system\":\n", + " console.print(Panel(highlighter(message.content), title=\"System Message\" + addition, border_style=\"red\"))\n", + " elif message.role == \"user\":\n", + " console.print(Panel(highlighter(message.content), title=\"User Message\" + addition, border_style=\"green\"))\n", + " else:\n", + " console.print(Panel(highlighter(message.content), title=\"Assistant Message\" + addition))\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "067c2cf6", + "metadata": { + "id": "067c2cf6" + }, + "outputs": [], + "source": [ + "from typing import List\n", + "import requests\n", + "import mimetypes\n", + "from urllib.parse import quote\n", + "import pathlib\n", + "import json\n", + "\n", + "\n", + "def validate_dataset(dataset: str | pathlib.Path | list, expected_keys: None | List[str] = None) -> bool:\n", + " if isinstance(dataset, (str, pathlib.Path)):\n", + " with open(dataset, \"r\") as f:\n", + " try:\n", + " dataset = json.load(f)\n", + " except json.JSONDecodeError as e:\n", + " raise ValueError(f\"Invalid JSON in file: {e}\")\n", + " if not isinstance(dataset, list):\n", + " raise ValueError(\"Dataset must be a list of dictionaries.\")\n", + "\n", + " def validate_item(item: dict, excepted_keys: None | List[str]) -> bool:\n", + " excepted_keys = set(excepted_keys) if excepted_keys else None\n", + " if set(item.keys()) != {\"fields\", \"answer\"}:\n", + " raise ValueError(\"Each item 
must contain 'fields' and 'answer' keys.\")\n", + " if not isinstance(item[\"fields\"], dict):\n", + " raise ValueError(\"'fields' must be a dictionary.\")\n", + " fields = set(item[\"fields\"].keys())\n", + " if excepted_keys is not None:\n", + " if fields != excepted_keys:\n", + " if fields.difference(excepted_keys):\n", + " raise ValueError(f\"Unexpected keys in 'fields'. Expected: {excepted_keys}, Found: {fields}\")\n", + " if excepted_keys.difference(fields):\n", + " raise ValueError(f\"Missing keys in 'fields'. Expected: {excepted_keys}, Found: {fields}\")\n", + " if not all([isinstance(k, str) for k in item[\"fields\"].values()]):\n", + " raise ValueError(\"All values in 'fields' must be strings.\")\n", + " return fields\n", + "\n", + " excepted_keys = expected_keys\n", + " for i, item in enumerate(dataset):\n", + " if not isinstance(item, dict):\n", + " raise ValueError(\"Each item in the dataset must be a dictionary.\")\n", + " try:\n", + " excepted_keys = validate_item(item, excepted_keys)\n", + " except ValueError as e:\n", + " raise ValueError(f\"Error in entry {i}\") from e\n", + " return True\n", + "\n", + "\n", + "def upload_dataset(secret: str,\n", + " local_path: str | pathlib.Path,\n", + " remote_path: str,\n", + " scenario: str,\n", + " description: str | None = None,\n", + " overwrite: bool = False,\n", + " expected_keys: None | List[str] = None,\n", + "\n", + " allow_bucket_root: bool = False) -> str:\n", + " # Validate dataset\n", + " validate_dataset(local_path, expected_keys)\n", + " # check if secret exists\n", + " secrets = [r.name for r in client.ai_core_client.object_store_secrets.query().resources]\n", + " if secret not in secrets:\n", + " raise ValueError(f\"Secret '{secret}' not found in object store secrets. 
Known secrets: {secrets}\")\n", + "\n", + " # Check if local path exists\n", + " remote_path = remote_path.lstrip(\"/\")\n", + " if \"/\" not in remote_path and not allow_bucket_root:\n", + " raise ValueError(\n", + " \"Remote path must use subdirectories. Otherwise the whole bucket will be used as an input artifact. Set allow_bucket_root=True to allow this.\"\n", + " )\n", + "\n", + " # URL-encode the path parameter\n", + " path = f\"{secret}/\" + remote_path.lstrip(\"/\")\n", + " encoded_path = quote(path, safe=\"\")\n", + " url = f\"{client.ai_core_client.base_url}/lm/dataset/files/{encoded_path}\"\n", + " params = {\"overwrite\": str(overwrite).lower()}\n", + "\n", + " # Prepare headers\n", + " headers = {\n", + " **client.request_header,\n", + " \"Content-Type\": \"application/octet-stream\",\n", + " }\n", + " # Guess MIME type\n", + " guessed_type, _ = mimetypes.guess_type(local_path)\n", + " if guessed_type:\n", + " headers[\"Content-Type\"] = guessed_type\n", + "\n", + " with open(local_path, \"rb\") as f:\n", + " response = requests.put(url, params=params, headers=headers, data=f)\n", + "\n", + " # Handle response\n", + " if response.status_code == 201:\n", + " response = response.json()\n", + " elif response.status_code in (400, 409, 413):\n", + " # Return error details\n", + " raise requests.HTTPError(f\"Upload failed ({response.status_code}): {response.text}\")\n", + " else:\n", + " response.raise_for_status()\n", + " artifact_url = \"/\".join(response[\"url\"].split(\"/\")[:-1])\n", + " for artifact in client.ai_core_client.artifact.query().resources:\n", + " if response[\"url\"].startswith(artifact.url + \"/\"):\n", + " return artifact, response[\"url\"].removeprefix(artifact.url).lstrip(\"/\")\n", + "\n", + " # Create new artifact\n", + " path = response[\"url\"].split(\"/\")[-1]\n", + " new_artifact = client.ai_core_client.artifact.create(\n", + " name=f\"{scenario}-prompt-optimization\",\n", + " kind=Artifact.Kind.DATASET,\n", + " 
url=artifact_url,\n", + " scenario_id=scenario,\n", + " description=\"Datasets for prompt optimization\" if description is None else description,\n", + " resource_group=headers[client.ai_core_client.rest_client.resource_group_header]\n", + " )\n", + " return new_artifact, path\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "id": "eXT05z-77zuK", + "metadata": { + "id": "eXT05z-77zuK" + }, + "source": [ + "## Create Config" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "yCXzXl1C7zVL", + "metadata": { + "id": "yCXzXl1C7zVL" + }, + "outputs": [], + "source": [ + "old_new_name_mapping = {\n", + " \"gemini-2.5-pro:001\": \"gemini-2.5-pro:001\",\n", + " \"gpt-4o:2024-08-06\": \"openai/gpt-4o-2024-08-06\"\n", + "}\n", + "\n", + "old_new_name_mapping.update({old_new_name_mapping[k]: k for k, v in old_new_name_mapping.items()})\n", + "\n", + "\n", + "def create_config(metric: str,\n", + " reference_model: str,\n", + " targets: dict,\n", + " dataset_path: str,\n", + " scenario: str,\n", + " prompt: PromptTemplateSpec) -> str:\n", + " assert metric in SUPPORTED_METRICS, f\"Unsupported metric: {metric}. Supported metrics: {SUPPORTED_METRICS}\"\n", + " assert reference_model in SUPPORTED_MODELS, f\"Unsupported reference model: {reference_model}. Supported models: {SUPPORTED_MODELS}\"\n", + " assert all(model in SUPPORTED_MODELS for model in targets.keys()), f\"Unsupported target models: {targets}. 
Supported models: {SUPPORTED_MODELS}\"\n", + " input_parameters = [\n", + " ParameterBinding(key=\"dataset\", value=dataset_path),\n", + " ParameterBinding(key=\"optimizationMetric\", value=metric),\n", + " ParameterBinding(key=\"basePrompt\", value=f'{scenario}/{prompt[\"name\"]}:{prompt[\"version\"]}'),\n", + " ParameterBinding(key=\"baseModel\", value=reference_model),\n", + " ParameterBinding(key=\"targetModels\", value=','.join(targets.keys())),\n", + " ParameterBinding(key=\"targetPromptMapping\", value=\",\".join([f\"{old_new_name_mapping[k]}={v}\" for k, v in targets.items()]))\n", + " \n", + " \n", + " ]\n", + " existing_configs = client.ai_core_client.configuration.query(scenario_id='genai-optimizations', executable_ids=['genai-optimizations'])\n", + " params = {par.key: par.value for par in input_parameters}\n", + " for conf in existing_configs.resources:\n", + " if {par.key: par.value for par in conf.parameter_bindings} == params:\n", + " return conf.id\n", + " \n", + " input_artifacts = [InputArtifactBinding(key=\"prompt-data\", artifact_id=artifact_id)]\n", + "\n", + " response = client.ai_core_client.configuration.create(\n", + " name = \"prompt-optimization-config\", # custom name of configuration\n", + " scenario_id = \"genai-optimizations\", # value from workflow\n", + " executable_id = \"genai-optimizations\", # value from workflow\n", + " resource_group = resource_group,\n", + " parameter_bindings = input_parameters,\n", + " input_artifact_bindings = input_artifacts\n", + " )\n", + "\n", + " return response.id\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "M7NpyMM9NI7k", + "metadata": { + "id": "M7NpyMM9NI7k" + }, + "outputs": [], + "source": [ + "from rich.console import Console\n", + "from rich.table import Table\n", + "\n", + "def fetch_results(execution_id):\n", + " response = client.ai_core_client.execution.get(execution_id = execution_id)\n", + " if response.status.name not in {'DEAD', 'COMPLETED'}:\n", + " 
raise RuntimeError('Execution not finished!')\n", + " path = f\"default/{execution_id}/result-data/results.json\"\n", + " encoded_path = quote(path, safe=\"\")\n", + " url = f\"{client.ai_core_client.base_url}/lm/dataset/files/{encoded_path}\"\n", + " headers = {\n", + " **client.request_header,\n", + " }\n", + " response = requests.get(url, headers=headers)\n", + " response.raise_for_status()\n", + " return response.json()\n", + "\n", + "\n", + "def print_result(result):\n", + " origin_model = result[\"origin_model\"]\n", + " table = Table(title=\"Performance\")\n", + " table.add_column(\"Model\", justify=\"right\", style=\"cyan\", no_wrap=True)\n", + " table.add_column(\"Pre Optimization\", style=\"magenta\")\n", + " table.add_column(\"Post Optimization\", justify=\"right\", style=\"green\")\n", + " table.add_row(origin_model[\"model_name\"], f'{origin_model[\"score\"]:.3f}', \"n/a - reference run\")\n", + " for m in result[\"target_models\"]:\n", + " table.add_row(m[\"model_name\"], f'{m[\"pre_optimization_score\"]:.3f}', f'{m[\"post_optimization_score\"]:.3f}')\n", + " console.print(table)\n", + " for m in result[\"target_models\"]:\n", + " print_prompt_template(PromptTemplateSpec.from_optimizer_result(m), addition=m['model_name'])\n" + ] + }, + { + "cell_type": "markdown", + "id": "QcrTeIvD8Rgz", + "metadata": { + "id": "QcrTeIvD8Rgz" + }, + "source": [ + "### Download Demo Data" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1BRIRT5a6phA", + "metadata": { + "id": "1BRIRT5a6phA" + }, + "outputs": [], + "source": [ + "import pathlib\n", + "\n", + "files = [\n", + " (\"default/example/base-prompt.yaml\", \"./facility_prompt.yaml\"),\n", + " (\"default/example/facility-train.json\", \"./facility-train.json\")\n", + "]\n", + "\n", + "for remote, local in files:\n", + " local_path = pathlib.Path(local)\n", + " if not local_path.exists():\n", + " url = 
f\"{client.ai_core_client.base_url}/lm/dataset/files/{remote}\"\n", + " headers = {\n", + " **client.request_header,\n", + " }\n", + " response = requests.get(url, headers=headers)\n", + " response.raise_for_status()\n", + " with local_path.open(\"w\") as stream:\n", + " stream.write(response.text)" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "sDyiLKIM4ha_", + "metadata": { + "id": "sDyiLKIM4ha_" + }, + "outputs": [], + "source": [ + "resource_group = client.request_header[client.ai_core_client.rest_client.resource_group_header]" + ] + }, + { + "cell_type": "markdown", + "id": "fY-fejJ14YLC", + "metadata": { + "id": "fY-fejJ14YLC" + }, + "source": [ + "# Start Prompt Optimizer Run" + ] + }, + { + "cell_type": "markdown", + "id": "ExirmzxlEgYO", + "metadata": { + "id": "ExirmzxlEgYO" + }, + "source": [ + "### Loading a Local Prompt Template\n", + "\n", + "**The prompt template is structured into a `system` and a `user` message. Placeholders in the prompt template have to be wrapped in `{{?key}}`.**\n", + "\n", + "\n", + "Your prompt can be provided in any of the following forms and will be normalized to a `PromptTemplateSpec` under the hood:\n", + "\n", + "#### From Local Disk\n", + "**A file path** (`str` or `Path`) pointing to a YAML or JSON file defining either:\n", + " - a **mapping** with keys \n", + " - `\"user\"` (required) and \n", + " - `\"system\"` (optional)\n", + "\n", + "```yaml\n", + "system: |-\n", + " You are a helpful assistant\n", + "user: |-\n", + " Write a poem on {{?topic}}\n", + "```\n", + "or \n", + " - a **list** of message objects, each with `\"role\"` (e.g. 
`\"system\"` or `\"user\"`) and `\"content\"` (string)\n", + "\n", + "```yaml\n", + "- role: system\n", + " content: |-\n", + " You are a helpful assistant\n", + "- role: user\n", + " content: |-\n", + " Write a poem on {{?topic}}\n", + "```\n", + "\n", + "\n", + "#### Alternative: Prompt Registry\n", + "- **A lookup string** of the form `\"<scenario>/<name>:<version>\"` (or just `\"<name>:<version>\"`) will be fetched from the AI Core prompt-template API; if you omit the version, the latest version will be returned. \n" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "id": "mYsHaqTVFEQZ", + "metadata": { + "id": "mYsHaqTVFEQZ" + }, + "outputs": [ + { + "data": { + "text/html": [ + "
╭──────────────────────────────────────────────── System Message ─────────────────────────────────────────────────╮\n",
+       " You are a helpful assistant                                                                                     \n",
+       "╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯\n",
+       "
\n" + ], + "text/plain": [ + "\u001b[31m╭─\u001b[0m\u001b[31m───────────────────────────────────────────────\u001b[0m\u001b[31m System Message \u001b[0m\u001b[31m────────────────────────────────────────────────\u001b[0m\u001b[31m─╮\u001b[0m\n", + "\u001b[31m│\u001b[0m You are a helpful assistant \u001b[31m│\u001b[0m\n", + "\u001b[31m╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯\u001b[0m\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "
╭───────────────────────────────────────────────── User Message ──────────────────────────────────────────────────╮\n",
+       " Giving the following message:                                                                                   \n",
+       " ---                                                                                                             \n",
+       " {{?input}}                                                                                                      \n",
+       " ---                                                                                                             \n",
+       " Extract and return a json with the follwoing keys and values:                                                   \n",
+       " - \"urgency\" as one of `high`, `medium`, `low`                                                                   \n",
+       " - \"sentiment\" as one of `negative`, `neutral`, `positive`                                                       \n",
+       " - \"categories\" Create a dictionary with categories as keys and boolean values (True/False), where the value     \n",
+       " indicates whether the category is one of the best matching support category tags from:                          \n",
+       " `emergency_repair_services`, `routine_maintenance_requests`, `quality_and_safety_concerns`,                     \n",
+       " `specialized_cleaning_services`, `general_inquiries`, `sustainability_and_environmental_practices`,             \n",
+       " `training_and_support_requests`, `cleaning_services_scheduling`, `customer_feedback_and_complaints`,            \n",
+       " `facility_management_issues`                                                                                    \n",
+       " Your complete message should be a valid json string that can be read directly and only contain the keys         \n",
+       " mentioned in the list above. Never enclose it in ```json...```, no newlines, no unnessacary whitespaces.        \n",
+       "╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯\n",
+       "
\n" + ], + "text/plain": [ + "\u001b[32m╭─\u001b[0m\u001b[32m────────────────────────────────────────────────\u001b[0m\u001b[32m User Message \u001b[0m\u001b[32m─────────────────────────────────────────────────\u001b[0m\u001b[32m─╮\u001b[0m\n", + "\u001b[32m│\u001b[0m Giving the following message: \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m --- \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m \u001b[1;35m{{?input}}\u001b[0m \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m --- \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m Extract and return a json with the follwoing keys and values: \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m - \"urgency\" as one of `high`, `medium`, `low` \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m - \"sentiment\" as one of `negative`, `neutral`, `positive` \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m - \"categories\" Create a dictionary with categories as keys and boolean values (True/False), where the value \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m indicates whether the category is one of the best matching support category tags from: \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m `emergency_repair_services`, `routine_maintenance_requests`, `quality_and_safety_concerns`, \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m `specialized_cleaning_services`, `general_inquiries`, `sustainability_and_environmental_practices`, \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m `training_and_support_requests`, `cleaning_services_scheduling`, `customer_feedback_and_complaints`, \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m `facility_management_issues` \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m Your complete message should be a valid json string that can be read directly and only contain the keys \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m mentioned in the list above. Never enclose it in ```json...```, no newlines, no unnessacary whitespaces. 
\u001b[32m│\u001b[0m\n", + "\u001b[32m╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯\u001b[0m\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "
Prompt template loaded successfully. Placeholders found are: {'input'}\n",
+       "
\n" + ], + "text/plain": [ + "Prompt template loaded successfully. Placeholders found are: \u001b[1m{\u001b[0m\u001b[32m'input'\u001b[0m\u001b[1m}\u001b[0m\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "base_prompt_template = \"./facility_prompt.yaml\" # local path to the prompt template or Prompt Repository identifier\n", + "\n", + "\n", + "prompt = load_prompt_template(base_prompt_template) # .escape_curly_brackets() if validation fails.\n", + "print_prompt_template(prompt)\n", + "print(f\"Prompt template loaded successfully. Placeholders found are: {prompt.placeholders}\")\n", + "assert validate_prompt(prompt)\n" + ] + }, + { + "cell_type": "markdown", + "id": "wFNUcWAbtgYA", + "metadata": { + "id": "wFNUcWAbtgYA" + }, + "source": [ + "Check if all expected placeholders were found." + ] + }, + { + "cell_type": "markdown", + "id": "963zmQ0rCPkT", + "metadata": { + "id": "963zmQ0rCPkT" + }, + "source": [ + "### Validating Local Dataset\n", + "\n", + "Your dataset must be a JSON‐serializable list where each element is a dictionary with exactly two keys: **`fields`** and **`answer`**. The **`fields`** value should itself be a dictionary whose keys (e.g. `\"question\"`, `\"hint\"`, `\"term\"`, etc.) are **identical** across every entry and whose values are all strings. 
The **`answer`** value must also be a string.\n", + "\n", + "\n", + "You can validate your dataset using the `validate_dataset` method.\n", + "\n", + "If validation does not pass successfully, these might be the reasons:\n", + "\n", + "\n", + "| Condition | Exception Raised (inner) | Outer Message |\n", + "| -------------------------------------- | ------------------------------------------------------------------ | -------------------------------- |\n", + "| Non-list top-level | N/A | `Dataset must be a list…` |\n", + "| Item not a dict | N/A | `Each item…must be a dictionary` |\n", + "| Wrong item keys | `ValueError(\\"Each item must contain 'fields' and 'answer' keys.\\")` | `Error in entry i` |\n", + "| `\\"fields\\"` not a dict | `ValueError(\\"'fields' must be a dictionary.\\")` | `Error in entry i` |\n", + "| Field name mismatch (extra or missing) | `ValueError(\\"Unexpected keys…\\")` or `ValueError(\\"Missing keys…\\")` | `Error in entry i` |\n", + "| Non-string field value | `ValueError(\\"All values in 'fields' must be strings.\\")` | `Error in entry i` |\n", + "| Invalid JSON file | `ValueError(\\"Invalid JSON in file:…\\")` | N/A |\n" + ] + }, + { + "cell_type": "code", + "execution_count": 28, + "id": "RMNFj5ZWCyW8", + "metadata": { + "id": "RMNFj5ZWCyW8" + }, + "outputs": [], + "source": [ + "dataset_local_path=\"./facility-train.json\" # local path to the dataset downloaded above\n", + "\n", + "assert validate_dataset(dataset_local_path), \"Dataset not valid\"" + ] + }, + { + "cell_type": "markdown", + "id": "b2nsOfnjEK61", + "metadata": { + "id": "b2nsOfnjEK61" + }, + "source": [ + "### Remaining Config Parameters" + ] + }, + { + "cell_type": "code", + "execution_count": 29, + "id": "ab10f2a9", + "metadata": { + "id": "ab10f2a9" + }, + "outputs": [], + "source": [ + "scenario = \"genai-optimizations\"\n", + "\n", + "base_prompt_template_registry = \"evaluate-base:0.0.1\" # name:version for the template in the registry\n", + 
"dataset_secret=\"default\" # secret name in the object store you want to use to store the dataset\n", + "dataset_remote_path=\"datasets/facility-train.json\" # remote path in the object store to store the dataset\n", + "\n", + "reference_model = \"gpt-4o:2024-08-06\"\n", + "# Dictionary of models to optimize with their corresponding prompt template names under which the optimized prompt should be stored in the registry\n", + "targets = {\n", + " \"gemini-2.5-pro:001\": \"evaluate-base-gemini-2_5-pro:0.0.1\"\n", + "}\n", + "\n", + "# Metric to use for optimization\n", + "metric = \"JSON_Match\"\n" + ] + }, + { + "cell_type": "markdown", + "id": "461f1569", + "metadata": { + "id": "461f1569" + }, + "source": [ + "## Push Local Prompt to Registry" + ] + }, + { + "cell_type": "code", + "execution_count": 30, + "id": "af015c2a", + "metadata": { + "id": "af015c2a" + }, + "outputs": [ + { + "data": { + "text/html": [ + "
Prompt template evaluate-base already exists. Use update=True to update.\n",
+       "
\n" + ], + "text/plain": [ + "Prompt template evaluate-base already exists. Use \u001b[33mupdate\u001b[0m=\u001b[3;92mTrue\u001b[0m to update.\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "
Prompt present in registry under id 3a9cfc20-f972-4720-8d0e-3ac48f77f391\n",
+       "
\n" + ], + "text/plain": [ + "Prompt present in registry under id \u001b[93m3a9cfc20-f972-4720-8d0e-3ac48f77f391\u001b[0m\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "
\n",
+       "\n",
+       "=== Base Prompt ===\n",
+       "
\n" + ], + "text/plain": [ + "\n", + "\n", + "=== Base Prompt ===\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "
╭──────────────────────────────────────────────── System Message ─────────────────────────────────────────────────╮\n",
+       " You are a helpful assistant                                                                                     \n",
+       "╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯\n",
+       "
\n" + ], + "text/plain": [ + "\u001b[31m╭─\u001b[0m\u001b[31m───────────────────────────────────────────────\u001b[0m\u001b[31m System Message \u001b[0m\u001b[31m────────────────────────────────────────────────\u001b[0m\u001b[31m─╮\u001b[0m\n", + "\u001b[31m│\u001b[0m You are a helpful assistant \u001b[31m│\u001b[0m\n", + "\u001b[31m╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯\u001b[0m\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "
╭───────────────────────────────────────────────── User Message ──────────────────────────────────────────────────╮\n",
+       " Given the following message:                                                                                    \n",
+       " ---                                                                                                             \n",
+       " {{?input}}                                                                                                      \n",
+       " ---                                                                                                             \n",
+       " Extract and return a json with the following keys and values:                                                   \n",
+       " - \"urgency\" as one of `high`, `medium`, `low`                                                                   \n",
+       " - \"sentiment\" as one of `negative`, `neutral`, `positive`                                                       \n",
+       " - \"categories\" Create a dictionary with categories as keys and boolean values (True/False), where the value     \n",
+       " indicates whether the category is one of the best matching support category tags from:                          \n",
+       " `emergency_repair_services`, `routine_maintenance_requests`, `quality_and_safety_concerns`,                     \n",
+       " `specialized_cleaning_services`, `general_inquiries`, `sustainability_and_environmental_practices`,             \n",
+       " `training_and_support_requests`, `cleaning_services_scheduling`, `customer_feedback_and_complaints`,            \n",
+       " `facility_management_issues`                                                                                    \n",
+       " Your complete message should be a valid json string that can be read directly and only contain the keys         \n",
+       " mentioned in the list above. Never enclose it in ```json...```, no newlines, no unnecessary whitespaces.        \n",
+       "╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯\n",
+       "
\n" + ], + "text/plain": [ + "\u001b[32m╭─\u001b[0m\u001b[32m────────────────────────────────────────────────\u001b[0m\u001b[32m User Message \u001b[0m\u001b[32m─────────────────────────────────────────────────\u001b[0m\u001b[32m─╮\u001b[0m\n", + "\u001b[32m│\u001b[0m Giving the following message: \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m --- \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m \u001b[1;35m{{?input}}\u001b[0m \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m --- \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m Extract and return a json with the follwoing keys and values: \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m - \"urgency\" as one of `high`, `medium`, `low` \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m - \"sentiment\" as one of `negative`, `neutral`, `positive` \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m - \"categories\" Create a dictionary with categories as keys and boolean values (True/False), where the value \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m indicates whether the category is one of the best matching support category tags from: \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m `emergency_repair_services`, `routine_maintenance_requests`, `quality_and_safety_concerns`, \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m `specialized_cleaning_services`, `general_inquiries`, `sustainability_and_environmental_practices`, \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m `training_and_support_requests`, `cleaning_services_scheduling`, `customer_feedback_and_complaints`, \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m `facility_management_issues` \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m Your complete message should be a valid json string that can be read directly and only contain the keys \u001b[32m│\u001b[0m\n", + "\u001b[32m│\u001b[0m mentioned in the list above. Never enclose it in ```json...```, no newlines, no unnessacary whitespaces. 
\u001b[32m│\u001b[0m\n", + "\u001b[32m╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯\u001b[0m\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "base_template = load_prompt_template(base_prompt_template)\n", + "prompt_template_name_registry, _, prompt_template_version = base_prompt_template_registry.partition(\":\")\n", + "prompt = push_prompt_template(prompt_template=base_template,\n", + " prompt_template_name_registry=prompt_template_name_registry,\n", + " prompt_template_version=prompt_template_version,\n", + " scenario=scenario,\n", + " update=False\n", + ")\n", + "\n", + "print(f\"Prompt present in registry under id {prompt['id']}\")\n", + "\n", + "print('\\n\\n=== Base Prompt ===')\n", + "print_prompt_template(prompt[\"id\"])" + ] + }, + { + "cell_type": "markdown", + "id": "8df4d0a8", + "metadata": { + "id": "8df4d0a8" + }, + "source": [ + "## Push Local Dataset to Object Store and Create Artifact" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f46937ef", + "metadata": { + "id": "f46937ef" + }, + "outputs": [], + "source": [ + "artifact, dataset_path = upload_dataset(\n", + " secret=dataset_secret,\n", + " local_path=dataset_local_path,\n", + " remote_path=dataset_remote_path,\n", + " expected_keys=base_template.placeholders,\n", + " scenario=scenario,\n", + " overwrite=True,\n", + " allow_bucket_root=True\n", + ")\n", + "\n", + "print(f\"Dataset uploaded to {artifact.url}/{dataset_path} -> Artifact ID: {artifact.id}\")\n" + ] + }, + { + "cell_type": "markdown", + "id": "ab4ec0c5", + "metadata": { + "id": "ab4ec0c5" + }, + "source": [ + "## Create Prompt Optimizer Config" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4PuS58nc8i7w", + "metadata": { + "id": "4PuS58nc8i7w" + }, + "outputs": [], + "source": [ + "configuration_id = create_config(metric=metric,\n", + " reference_model=reference_model,\n", + " 
targets=targets,\n", + " dataset_path=dataset_path,\n", + " scenario=scenario,\n", + " prompt=prompt\n", + " )\n" + ] + }, + { + "cell_type": "markdown", + "id": "5dde0e00", + "metadata": { + "id": "5dde0e00" + }, + "source": [ + "## Start Prompt Optimizer" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "aa02b617", + "metadata": { + "id": "aa02b617" + }, + "outputs": [], + "source": [ + "response = client.ai_core_client.execution.create(\n", + " configuration_id = configuration_id, # Change this value.\n", + " resource_group = \"default\"\n", + ")\n", + "\n", + "execution_id = response.id\n", + "print('Execution started with ID:', execution_id)" + ] + }, + { + "cell_type": "markdown", + "id": "aSkihvDXKcAv", + "metadata": { + "id": "aSkihvDXKcAv" + }, + "source": [ + "## Check Execution Status" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "96ee23e5", + "metadata": { + "id": "96ee23e5" + }, + "outputs": [], + "source": [ + "result = fetch_results(execution_id)\n", + "print_result(result)" + ] + } + ], + "metadata": { + "colab": { + "collapsed_sections": [ + "Pd3wsKls4OS5" + ], + "provenance": [] + }, + "kernelspec": { + "display_name": "venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.4" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/JS_orchestration_optModules_Tutorial2.ipynb b/tutorials/ai-core-orchestration-consumption-opt-v2/JS_orchestration_optModules_Tutorial2.ipynb new file mode 100644 index 0000000000..6efbee9a06 --- /dev/null +++ b/tutorials/ai-core-orchestration-consumption-opt-v2/JS_orchestration_optModules_Tutorial2.ipynb @@ -0,0 +1,903 @@ +{ + "cells": [ + { + "cell_type": "markdown", + 
"metadata": {}, + "source": [ + "### Load environment variables\n", + "\n", + "In this step, we use the dotenv package to load environment variables from a .env file. This approach helps manage sensitive configuration details like API keys and service credentials without hardcoding them in the code.\n", + "\n", + "Key Points:\n", + "\n", + "dotenv: Automatically loads environment variables defined in a .env file into process.env.\n", + "\n", + "Access Environment Variables: The process.env object is used to access these variables in the application." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import dotenv from 'dotenv';\n", + "dotenv.config();\n", + " \n", + "console.log(process.env.AICORE_SERVICE_KEY); " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Basic Orchestration Pipeline" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\"Subject: Bestellung #1234567890 Verspätet - John Johnson Nachricht: Halle, ich schreibe ihnen um mich nach dem Status meiner Bestellung mit der Bestellnr. +1234567890 zu erkundigen. Die Lieferung war eigentlich für gestern geplant, ist bisher jedoch nicht erfolgt. Mein Name ist John Johnson und meine Lieferadresse lautet 125 Cole Meadows Drive Palo Alto, California 94301. Bitte lassen Sie mich per Telefon unter der Nummer +1 505802 2172 wissen, wann ich mit meiner Lieferung rechnen kann. 
Danke!\"\n" + ] + } + ], + "source": [ + "const txtContent = await Deno.readTextFile('./support-request.txt');\n", + "console.log(txtContent);" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Prompt Templating\n", + "\n", + "**LLM Configuration**\n", + "\n", + "Choose the LLM by setting the name property in the promptTemplating.model configuration.\n", + "\n", + "**Template Configuration**\n", + "\n", + "Use the orchestration client with the promptTemplating.prompt.template configuration to define a static prompt. This prompt can include placeholders, which are replaced with values from placeholderValues during a chatCompletion() method call \n", + "\n", + "Key Components:\n", + "- **SystemMessage**: Sets a predefined instruction for the AI assistant. This message typically includes the assistant's role and any specific guidelines it should follow.\n", + "- **UserMessage**: Represents the user's input and how it is structured in the conversation.\n", + " \n", + "In this revised prompt, only queries are passed to the assistant without any additional context. The AI is expected to respond based solely on the provided input.\n" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [], + "source": [ + "import { OrchestrationClient } from '@sap-ai-sdk/orchestration';\n", + "\n", + "const orchestrationClient = new OrchestrationClient({\n", + " promptTemplating: {\n", + " model: {\n", + " name: 'gpt-4o',\n", + " params: {\n", + " max_completion_tokens: 200,\n", + " temperature: 0\n", + " }\n", + " },\n", + " prompt: {\n", + " template: [\n", + " {\n", + " role: 'system',\n", + " content:\n", + " 'You are a customer support assistant. Analyze the sentiment of the user request provided and return whether the sentiment is positive, neutral, or negative. 
Also provide a one-line justification.'\n", + " },\n", + " {\n", + " role: 'user',\n", + " content:\n", + " 'Please analyze the sentiment of the following support request: {{ ?support_text }}'\n", + " }\n", + " ]\n", + " }\n", + " }\n", + "});\n", + "\n", + "const response = await orchestrationClient.chatCompletion({\n", + " placeholderValues: {\n", + " support_text: 'User is unhappy with the latest update and facing usability issues.'\n", + " }\n", + "});\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Data masking\n", + "\n", + "The Data Masking Module anonymizes or pseudonymizes personally identifiable information (PII) before it is processed by the LLM module. When data is anonymized, all identifying information is replaced with placeholders (e.g., MASKED_ENTITY), and the original data cannot be recovered, ensuring that no trace of the original information is retained. In contrast, pseudonymized data is substituted with unique placeholders (e.g., MASKED_ENTITY_ID), allowing the original information to be restored if needed. In both cases, the masking module identifies sensitive data and replaces it with appropriate placeholders before further processing." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import { buildDpiMaskingProvider } from '@sap-ai-sdk/orchestration';\n", + "\n", + "const maskingProvider = buildDpiMaskingProvider({\n", + " method: 'anonymization',\n", + " entities: [\n", + " 'profile-person',\n", + " 'profile-email',\n", + " 'profile-phone',\n", + " {\n", + " type: 'custom',\n", + " // Example: customer / ticket reference IDs\n", + " regex: '\\\\b(TICKET|CASE)-[0-9]{4,}\\\\b',\n", + " replacement_strategy: {\n", + " method: 'constant',\n", + " value: 'MASKED_REFERENCE_ID'\n", + " }\n", + " }\n", + " ],\n", + " allowlist: ['SAP'] // Optional\n", + "});\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Content Filtering\n", + "\n", + "The Content Filtering Module can be configured to filter both the input to the LLM module (input filter) and the output generated by the LLM (output filter). The module uses predefined classification services to detect inappropriate or unwanted content, allowing flexible configuration through customizable thresholds. These thresholds can be set to control the sensitivity of filtering, ensuring that content meets desired standards before it is processed or returned as output." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [], + "source": [ + "import { buildAzureContentSafetyFilter } from '@sap-ai-sdk/orchestration';\n", + "\n", + "// Input filter: protects what users send (support tickets)\n", + "const inputFilter = buildAzureContentSafetyFilter('input', {\n", + " hate: 'ALLOW_SAFE_LOW',\n", + " self_harm: 'ALLOW_SAFE_LOW',\n", + " sexual: 'ALLOW_SAFE_LOW',\n", + " violence: 'ALLOW_SAFE_LOW',\n", + " prompt_shield: true \n", + "});\n", + "\n", + "// Output filter: protects what the model returns\n", + "const outputFilter = buildAzureContentSafetyFilter('output', {\n", + " hate: 'ALLOW_SAFE',\n", + " self_harm: 'ALLOW_SAFE',\n", + " sexual: 'ALLOW_SAFE',\n", + " violence: 'ALLOW_SAFE'\n", + "});\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Translation" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "✅ Translation configuration defined successfully\n" + ] + } + ], + "source": [ + "import { buildTranslationConfig } from '@sap-ai-sdk/orchestration';\n", + "\n", + "const inputTranslation = buildTranslationConfig('input', {\n", + " sourceLanguage: 'de-DE',\n", + " targetLanguage: 'en-US'\n", + "});\n", + "\n", + "const outputTranslation = buildTranslationConfig('output', {\n", + " sourceLanguage: 'en-US',\n", + " targetLanguage: 'de-DE'\n", + "});\n", + "\n", + "// ✅ Combine them into ONE config object\n", + "const translationConfig = {\n", + " input: inputTranslation,\n", + " output: outputTranslation\n", + "};\n", + "\n", + "console.log('✅ Translation configuration defined successfully');\n" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": {}, + "outputs": [], + "source": [ + "import { OrchestrationClient } from '@sap-ai-sdk/orchestration';\n", + "\n", + "const orchestrationClient = new OrchestrationClient({\n", + " 
resourceGroup: 'grounding',\n", + "\n", + " // Sentiment analysis prompt\n", + " promptTemplating: {\n", + " model: {\n", + " name: 'gpt-4o',\n", + " params: {\n", + " max_completion_tokens: 200,\n", + " temperature: 0\n", + " }\n", + " },\n", + " prompt: {\n", + " template: [\n", + " {\n", + " role: 'system',\n", + " content:\n", + " 'You are a customer support assistant. Analyze the sentiment of the user request provided and return whether the sentiment is positive, neutral, or negative. Also provide a one-line justification.'\n", + " },\n", + " {\n", + " role: 'user',\n", + " content:\n", + " 'Please analyze the sentiment of the following support request: {{ ?support_text }}'\n", + " }\n", + " ]\n", + " }\n", + " },\n", + "\n", + " translation: translationConfig,\n", + "\n", + " masking: {\n", + " masking_providers: [maskingProvider]\n", + " },\n", + "\n", + " filtering: {\n", + " input: {\n", + " filters: [inputFilter]\n", + " },\n", + " output: {\n", + " filters: [outputFilter]\n", + " }\n", + " }\n", + "});\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Generate Responses \n", + "\n", + "This step outlines the process of generating responses for a set of queries using defined llm model. The generateResponsesForModels function iterates through the llm model and executes queries to gather AI-generated responses.\n", + "\n", + "Key Points:\n", + "\n", + "Query Execution: Uses OrchestrationClient to generate responses for each query." + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Die Stimmung der Supportanfrage ist neutral. 
Der Benutzer fragt einfach nach dem Status seiner Bestellung, ohne Frustration oder Unzufriedenheit auszudrücken.\n" + ] + } + ], + "source": [ + "try {\n", + " const response = await orchestrationClient.chatCompletion({\n", + " placeholderValues: {\n", + " support_text: txtContent\n", + " }\n", + " });\n", + "\n", + " console.log(response.getContent());\n", + "\n", + "} catch (error: any) {\n", + " console.error('❌ Error during support sentiment analysis');\n", + " console.error(error.message);\n", + " console.error(error.cause?.response?.data);\n", + "}\n" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "OrchestrationResponse {\n", + " rawResponse: {\n", + " status: \u001b[33m200\u001b[39m,\n", + " statusText: \u001b[32m\"OK\"\u001b[39m,\n", + " headers: Object [AxiosHeaders] {\n", + " date: \u001b[32m\"Thu, 05 Feb 2026 10:29:53 GMT\"\u001b[39m,\n", + " \u001b[32m\"content-type\"\u001b[39m: \u001b[32m\"application/json\"\u001b[39m,\n", + " \u001b[32m\"content-length\"\u001b[39m: \u001b[32m\"1955\"\u001b[39m,\n", + " \u001b[32m\"x-upstream-service-time\"\u001b[39m: \u001b[32m\"693\"\u001b[39m\n", + " },\n", + " config: {\n", + " transitional: {\n", + " silentJSONParsing: \u001b[33mtrue\u001b[39m,\n", + " forcedJSONParsing: \u001b[33mtrue\u001b[39m,\n", + " clarifyTimeoutError: \u001b[33mfalse\u001b[39m\n", + " },\n", + " adapter: [ \u001b[32m\"xhr\"\u001b[39m, \u001b[32m\"http\"\u001b[39m, \u001b[32m\"fetch\"\u001b[39m ],\n", + " transformRequest: [ \u001b[36m[Function: transformRequest]\u001b[39m ],\n", + " transformResponse: [ \u001b[36m[Function: transformResponse]\u001b[39m ],\n", + " timeout: \u001b[33m0\u001b[39m,\n", + " xsrfCookieName: \u001b[32m\"XSRF-TOKEN\"\u001b[39m,\n", + " xsrfHeaderName: \u001b[32m\"X-XSRF-TOKEN\"\u001b[39m,\n", + " maxContentLength: \u001b[33m-1\u001b[39m,\n", + " maxBodyLength: \u001b[33m-1\u001b[39m,\n", + " env: {\n", + " FormData: [Function: 
FormData] {\n", + " LINE_BREAK: \u001b[32m\"\\r\\n\"\u001b[39m,\n", + " DEFAULT_CONTENT_TYPE: \u001b[32m\"application/octet-stream\"\u001b[39m\n", + " },\n", + " Blob: \u001b[36m[class Blob]\u001b[39m\n", + " },\n", + " validateStatus: \u001b[36m[Function: validateStatus]\u001b[39m,\n", + " headers: Object [AxiosHeaders] {\n", + " Accept: \u001b[32m\"application/json, text/plain, */*\"\u001b[39m,\n", + " \u001b[32m\"Content-Type\"\u001b[39m: \u001b[32m\"application/json\"\u001b[39m,\n", + " authorization: \u001b[32m\"Bearer eyJ0eXAiOiJKV1QiLCJqaWQiOiIzN1RmdE1xQjNvWnI4c0FCekdJU1gra0x3S1dzNk1VblFaQ1N6SUk4UVFVPSIsImFsZyI6IlJTMjU2Iiwiamt1IjoiaHR0cHM6Ly9hbnVyYWd2Mi0zOXdqeTkwMi5hdXRoZW50aWNhdGlvbi5ldTEwLmhhbmEub25kZW1hbmQuY29tL3Rva2VuX2tleXMiLCJraWQiOiJkZWZhdWx0LWp3dC1rZXktM2YwNTUxZTJhYiJ9.eyJzdWIiOiJzYi1hZGNkNDkwNy1mMWE3LTQ2MmQtYmE4ZS02NDYzOTBlZTQxODUhYjM5ODQyNXxhaWNvcmUhYjU0MCIsImlzcyI6Imh0dHBzOi8vYW51cmFndjItMzl3ank5MDIuYXV0aGVudGljYXRpb24uZXUxMC5oYW5hLm9uZGVtYW5kLmNvbS9vYXV0aC90b2tlbiIsImF1dGhvcml0aWVzIjpbImFpY29yZSFiNTQwLnNjZW5hcmlvcy5leGVjdXRpb25zY2hlZHVsZXMud3JpdGUiLCJhaWNvcmUhYjU0MC5zY2VuYXJpb3MuZXZhbHVhdGlvbk1ldHJpY3MucmVhZCIsImFpY29yZSFiNTQwLnNlcnZpY2VzLnJlYWQiLCJhaWNvcmUhYjU0MC5kb2NrZXJyZWdpc3RyeXNlY3JldC5jcmVkZW50aWFscy53cml0ZSIsImFpY29yZSFiNTQwLnJlc291cmNlZ3JvdXAud3JpdGUiLCJhaWNvcmUhYjU0MC5yZXBvc2l0b3JpZXMucmVhZCIsImFpY29yZSFiNTQwLmRvY2tlcnJlZ2lzdHJ5c2VjcmV0LmNyZWRlbnRpYWxzLnJlYWQiLCJhaWNvcmUhYjU0MC5ub2Rlcy53cml0ZSIsImFpY29yZSFiNTQwLnJlc291cmNlZ3JvdXAucmVhZCIsImFpY29yZSFiNTQwLnNjZW5hcmlvcy5tZXRyaWNzLnJlYWQiLCJhaWNvcmUhYjU0MC5zY2VuYXJpb3MucHJvbXB0VGVtcGxhdGVzLndyaXRlIiwiYWljb3JlIWI1NDAubWV0YS5yZWFkIiwiYWljb3JlIWI1NDAuc2NlbmFyaW9zLmNvbmZpZ3VyYXRpb25zLndyaXRlIiwiYWljb3JlIWI1NDAuZXhlY3V0aW9ucy5sb2dzLnJlYWQiLCJhaWNvcmUhYjU0MC5zY2VuYXJpb3Mub3JjaGVzdHJhdGlvbkNvbmZpZ3MucmVhZCIsImFpY29yZSFiNTQwLnNjZW5hcmlvcy5leGVjdXRpb25zY2hlZHVsZXMucmVhZCIsImFpY29yZSFiNTQwLnNjZW5hcmlvcy5vcmNoZXN0cmF0aW9uQ29uZmlncy53cml0ZSIsImFpY29yZSFiNTQwLmxvZ3MucmVhZCIsImFpY29yZSFiNTQwLnNjZW5hcmlvc
y5kZXBsb3ltZW50cy53cml0ZSIsImFpY29yZSFiNTQwLnNjZW5hcmlvcy5leGVjdXRhYmxlcy5yZWFkIiwiYWljb3JlIWI1NDAuc2NlbmFyaW9zLnJlYWQiLCJhaWNvcmUhYjU0MC5rcGlzLnJlYWQiLCJhaWNvcmUhYjU0MC5zY2VuYXJpb3MuZGVwbG95bWVudHMucHJlZGljdCIsImFpY29yZSFiNTQwLmRlcGxveW1lbnRzLmxvZ3MucmVhZCIsImFpY29yZSFiNTQwLnNjZW5hcmlvcy5kZXBsb3ltZW50cy5yZWFkIiwiYWljb3JlIWI1NDAub2JqZWN0c3RvcmVzZWNyZXQuY3JlZGVudGlhbHMud3JpdGUiLCJhaWNvcmUhYjU0MC5zY2VuYXJpb3MubWV0cmljcy53cml0ZSIsInVhYS5yZXNvdXJjZSIsImFpY29yZSFiNTQwLnNjZW5hcmlvcy5ldmFsdWF0aW9uTWV0cmljcy5jcmVhdGUiLCJhaWNvcmUhYjU0MC5hcHBsaWNhdGlvbnMucmVhZCIsImFpY29yZSFiNTQwLmFwcGxpY2F0aW9ucy53cml0ZSIsImFpY29yZSFiNTQwLnNjZW5hcmlvcy5leGVjdXRpb25zLndyaXRlIiwiYWljb3JlIWI1NDAubm9kZXMucmVhZCIsImFpY29yZSFiNTQwLmRhdGFzZXRzLndyaXRlIiwiYWljb3JlIWI1NDAuc2VjcmV0cy5yZWFkIiwiYWljb3JlIWI1NDAucmVwb3NpdG9yaWVzLndyaXRlIiwiYWljb3JlIWI1NDAuc2NlbmFyaW9zLmV4ZWN1dGlvbnMucmVhZCIsImFpY29yZSFiNTQwLnNjZW5hcmlvcy5wcm9tcHRUZW1wbGF0ZXMucmVhZCIsImFpY29yZSFiNTQwLnNjZW5hcmlvcy5hcnRpZmFjdHMucmVhZCIsImFpY29yZSFiNTQwLnNlY3JldHMud3JpdGUiLCJhaWNvcmUhYjU0MC5zY2VuYXJpb3MuZXZhbHVhdGlvbk1ldHJpY3MuZGVsZXRlIiwiYWljb3JlIWI1NDAuZGF0YXNldHMuZG93bmxvYWQiLCJhaWNvcmUhYjU0MC5zY2VuYXJpb3MuYXJ0aWZhY3RzLndyaXRlIiwiYWljb3JlIWI1NDAub2JqZWN0c3RvcmVzZWNyZXQuY3JlZGVudGlhbHMucmVhZCIsImFpY29yZSFiNTQwLnNjZW5hcmlvcy5jb25maWd1cmF0aW9ucy5yZWFkIl0sImNsaWVudF9pZCI6InNiLWFkY2Q0OTA3LWYxYTctNDYyZC1iYThlLTY0NjM5MGVlNDE4NSFiMzk4NDI1fGFpY29yZSFiNTQwIiwiYXVkIjpbImFpY29yZSFiNTQwLm9iamVjdHN0b3Jlc2VjcmV0LmNyZWRlbnRpYWxzIiwiYWljb3JlIWI1NDAuZGF0YXNldHMiLCJhaWNvcmUhYjU0MC5rcGlzIiwiYWljb3JlIWI1NDAuZG9ja2VycmVnaXN0cnlzZWNyZXQuY3JlZGVudGlhbHMiLCJhaWNvcmUhYjU0MC5kZXBsb3ltZW50cy5sb2dzIiwiYWljb3JlIWI1NDAubWV0YSIsInVhYSIsImFpY29yZSFiNTQwLnNjZW5hcmlvcy5hcnRpZmFjdHMiLCJhaWNvcmUhYjU0MC5yZXBvc2l0b3JpZXMiLCJzYi1hZGNkNDkwNy1mMWE3LTQ2MmQtYmE4ZS02NDYzOTBlZTQxODUhYjM5ODQyNXxhaWNvcmUhYjU0MCIsImFpY29yZSFiNTQwLnNjZW5hcmlvcy5leGVjdXRpb25zY2hlZHVsZXMiLCJhaWNvcmUhYjU0MC5zY2VuYXJpb3MuZXZhbHVhdGlvbk1ldHJpY3MiLCJhaWNvcmUhYjU0MC5ub2RlcyIsImFpY29yZSFiNTQwLnNjZW5hcmlvcy5leGVjd
XRhYmxlcyIsImFpY29yZSFiNTQwLmxvZ3MiLCJhaWNvcmUhYjU0MC5zY2VuYXJpb3MuZGVwbG95bWVudHMiLCJhaWNvcmUhYjU0MC5zY2VuYXJpb3MuY29uZmlndXJhdGlvbnMiLCJhaWNvcmUhYjU0MC5zY2VuYXJpb3MuZXhlY3V0aW9ucyIsImFpY29yZSFiNTQwLnNlcnZpY2VzIiwiYWljb3JlIWI1NDAuc2NlbmFyaW9zIiwiYWljb3JlIWI1NDAuYXBwbGljYXRpb25zIiwiYWljb3JlIWI1NDAucmVzb3VyY2Vncm91cCIsImFpY29yZSFiNTQwLnNlY3JldHMiLCJhaWNvcmUhYjU0MC5zY2VuYXJpb3MubWV0cmljcyIsImFpY29yZSFiNTQwLnNjZW5hcmlvcy5vcmNoZXN0cmF0aW9uQ29uZmlncyIsImFpY29yZSFiNTQwLnNjZW5hcmlvcy5wcm9tcHRUZW1wbGF0ZXMiLCJhaWNvcmUhYjU0MC5leGVjdXRpb25zLmxvZ3MiXSwiZXh0X2F0dHIiOnsiZW5oYW5jZXIiOiJYU1VBQSIsInN1YmFjY291bnRpZCI6ImQ4MWRjOTE3LWJkYjctNDFjNC1hYjJhLWNjZTkzZjJjZjg2NiIsInpkbiI6ImFudXJhZ3YyLTM5d2p5OTAyIiwic2VydmljZWluc3RhbmNlaWQiOiJhZGNkNDkwNy1mMWE3LTQ2MmQtYmE4ZS02NDYzOTBlZTQxODUifSwiemlkIjoiZDgxZGM5MTctYmRiNy00MWM0LWFiMmEtY2NlOTNmMmNmODY2IiwiZ3JhbnRfdHlwZSI6ImNsaWVudF9jcmVkZW50aWFscyIsImF6cCI6InNiLWFkY2Q0OTA3LWYxYTctNDYyZC1iYThlLTY0NjM5MGVlNDE4NSFiMzk4NDI1fGFpY29yZSFiNTQwIiwic2NvcGUiOlsiYWljb3JlIWI1NDAuc2NlbmFyaW9zLmV4ZWN1dGlvbnNjaGVkdWxlcy53cml0ZSIsImFpY29yZSFiNTQwLnNjZW5hcmlvcy5ldmFsdWF0aW9uTWV0cmljcy5yZWFkIiwiYWljb3JlIWI1NDAuc2VydmljZXMucmVhZCIsImFpY29yZSFiNTQwLmRvY2tlcnJlZ2lzdHJ5c2VjcmV0LmNyZWRlbnRpYWxzLndyaXRlIiwiYWljb3JlIWI1NDAucmVzb3VyY2Vncm91cC53cml0ZSIsImFpY29yZSFiNTQwLnJlcG9zaXRvcmllcy5yZWFkIiwiYWljb3JlIWI1NDAuZG9ja2VycmVnaXN0cnlzZWNyZXQuY3JlZGVudGlhbHMucmVhZCIsImFpY29yZSFiNTQwLm5vZGVzLndyaXRlIiwiYWljb3JlIWI1NDAucmVzb3VyY2Vncm91cC5yZWFkIiwiYWljb3JlIWI1NDAuc2NlbmFyaW9zLm1ldHJpY3MucmVhZCIsImFpY29yZSFiNTQwLnNjZW5hcmlvcy5wcm9tcHRUZW1wbGF0ZXMud3JpdGUiLCJhaWNvcmUhYjU0MC5tZXRhLnJlYWQiLCJhaWNvcmUhYjU0MC5zY2VuYXJpb3MuY29uZmlndXJhdGlvbnMud3JpdGUiLCJhaWNvcmUhYjU0MC5leGVjdXRpb25zLmxvZ3MucmVhZCIsImFpY29yZSFiNTQwLnNjZW5hcmlvcy5vcmNoZXN0cmF0aW9uQ29uZmlncy5yZWFkIiwiYWljb3JlIWI1NDAuc2NlbmFyaW9zLmV4ZWN1dGlvbnNjaGVkdWxlcy5yZWFkIiwiYWljb3JlIWI1NDAuc2NlbmFyaW9zLm9yY2hlc3RyYXRpb25Db25maWdzLndyaXRlIiwiYWljb3JlIWI1NDAubG9ncy5yZWFkIiwiYWljb3JlIWI1NDAuc2NlbmFyaW9zLmRlcGxveW1lbnRzLndyaXRlIiwiYWljb3JlI
WI1NDAuc2NlbmFyaW9zLmV4ZWN1dGFibGVzLnJlYWQiLCJhaWNvcmUhYjU0MC5zY2VuYXJpb3MucmVhZCIsImFpY29yZSFiNTQwLmtwaXMucmVhZCIsImFpY29yZSFiNTQwLnNjZW5hcmlvcy5kZXBsb3ltZW50cy5wcmVkaWN0IiwiYWljb3JlIWI1NDAuZGVwbG95bWVudHMubG9ncy5yZWFkIiwiYWljb3JlIWI1NDAuc2NlbmFyaW9zLmRlcGxveW1lbnRzLnJlYWQiLCJhaWNvcmUhYjU0MC5vYmplY3RzdG9yZXNlY3JldC5jcmVkZW50aWFscy53cml0ZSIsImFpY29yZSFiNTQwLnNjZW5hcmlvcy5tZXRyaWNzLndyaXRlIiwidWFhLnJlc291cmNlIiwiYWljb3JlIWI1NDAuc2NlbmFyaW9zLmV2YWx1YXRpb25NZXRyaWNzLmNyZWF0ZSIsImFpY29yZSFiNTQwLmFwcGxpY2F0aW9ucy5yZWFkIiwiYWljb3JlIWI1NDAuYXBwbGljYXRpb25zLndyaXRlIiwiYWljb3JlIWI1NDAuc2NlbmFyaW9zLmV4ZWN1dGlvbnMud3JpdGUiLCJhaWNvcmUhYjU0MC5ub2Rlcy5yZWFkIiwiYWljb3JlIWI1NDAuZGF0YXNldHMud3JpdGUiLCJhaWNvcmUhYjU0MC5zZWNyZXRzLnJlYWQiLCJhaWNvcmUhYjU0MC5yZXBvc2l0b3JpZXMud3JpdGUiLCJhaWNvcmUhYjU0MC5zY2VuYXJpb3MuZXhlY3V0aW9ucy5yZWFkIiwiYWljb3JlIWI1NDAuc2NlbmFyaW9zLnByb21wdFRlbXBsYXRlcy5yZWFkIiwiYWljb3JlIWI1NDAuc2NlbmFyaW9zLmFydGlmYWN0cy5yZWFkIiwiYWljb3JlIWI1NDAuc2VjcmV0cy53cml0ZSIsImFpY29yZSFiNTQwLnNjZW5hcmlvcy5ldmFsdWF0aW9uTWV0cmljcy5kZWxldGUiLCJhaWNvcmUhYjU0MC5kYXRhc2V0cy5kb3dubG9hZCIsImFpY29yZSFiNTQwLnNjZW5hcmlvcy5hcnRpZmFjdHMud3JpdGUiLCJhaWNvcmUhYjU0MC5vYmplY3RzdG9yZXNlY3JldC5jcmVkZW50aWFscy5yZWFkIiwiYWljb3JlIWI1NDAuc2NlbmFyaW9zLmNvbmZpZ3VyYXRpb25zLnJlYWQiXSwiZXhwIjoxNzcwMzMwNTkyLCJpYXQiOjE3NzAyODczOTIsImp0aSI6IjJiNTlmNmQ3ODYyNDRkMzliYTU0NjUwNDYyMjM3MzdlIiwicmV2X3NpZyI6ImJkNWUwNzYwIiwiY2lkIjoic2ItYWRjZDQ5MDctZjFhNy00NjJkLWJhOGUtNjQ2MzkwZWU0MTg1IWIzOTg0MjV8YWljb3JlIWI1NDAifQ.NqErq4XZ2EW3_Coj3sptQBn5uTUCCbWRFtYST2bQPDW9-qKit2mb70VDFOMi-FfrJsTPqw8gvqAHfyVH1RNteFy8FCZt6X8WNHlK__VHkDfFu6qMU2kzOiZMWV7ABKkAeuEwS6vcBC-T4Mx-pBsT05HZhAGS6O5uxRpL4NRbpC1QuE02OiMM895Y2DxE22oWnS05CzfemWILHR8xoJXtc6XgzFRJeXiue2It0D2WlJNvCbDLPRcbTu-iyoJoxltzAH5IMEkz9zrfAirKoa0vlJoft1cQtB9l2FCxMjQbBVrAEU8-kF4blXiAOaLklQIr-KtQbBF4cKfLYpQVKFgzYQ\"\u001b[39m,\n", + " \u001b[32m\"ai-resource-group\"\u001b[39m: \u001b[32m\"default\"\u001b[39m,\n", + " \u001b[32m\"ai-client-type\"\u001b[39m: \u001b[32m\"AI SDK 
JavaScript\"\u001b[39m,\n", + " \u001b[32m\"User-Agent\"\u001b[39m: \u001b[32m\"axios/1.13.2\"\u001b[39m,\n", + " \u001b[32m\"Content-Length\"\u001b[39m: \u001b[32m\"594\"\u001b[39m,\n", + " \u001b[32m\"Accept-Encoding\"\u001b[39m: \u001b[32m\"gzip, compress, deflate, br\"\u001b[39m\n", + " },\n", + " httpAgent: Agent {\n", + " _events: [Object: null prototype] {\n", + " free: \u001b[36m[Function (anonymous)]\u001b[39m,\n", + " newListener: \u001b[36m[Function: maybeEnableKeylog]\u001b[39m\n", + " },\n", + " _eventsCount: \u001b[33m2\u001b[39m,\n", + " _maxListeners: \u001b[90mundefined\u001b[39m,\n", + " defaultPort: \u001b[33m80\u001b[39m,\n", + " protocol: \u001b[32m\"http:\"\u001b[39m,\n", + " options: [Object: null prototype] { path: \u001b[1mnull\u001b[22m },\n", + " requests: [Object: null prototype] {},\n", + " sockets: [Object: null prototype] {},\n", + " freeSockets: [Object: null prototype] {},\n", + " keepAliveMsecs: \u001b[33m1000\u001b[39m,\n", + " keepAlive: \u001b[33mfalse\u001b[39m,\n", + " maxSockets: \u001b[33mInfinity\u001b[39m,\n", + " maxFreeSockets: \u001b[33m256\u001b[39m,\n", + " scheduling: \u001b[32m\"lifo\"\u001b[39m,\n", + " maxTotalSockets: \u001b[33mInfinity\u001b[39m,\n", + " totalSocketCount: \u001b[33m0\u001b[39m,\n", + " [\u001b[32mSymbol(kCapture)\u001b[39m]: \u001b[33mfalse\u001b[39m\n", + " },\n", + " httpsAgent: Agent {\n", + " _events: [Object: null prototype] {\n", + " free: \u001b[36m[Function (anonymous)]\u001b[39m,\n", + " newListener: \u001b[36m[Function: maybeEnableKeylog]\u001b[39m\n", + " },\n", + " _eventsCount: \u001b[33m2\u001b[39m,\n", + " _maxListeners: \u001b[90mundefined\u001b[39m,\n", + " defaultPort: \u001b[33m443\u001b[39m,\n", + " protocol: \u001b[32m\"https:\"\u001b[39m,\n", + " options: [Object: null prototype] {\n", + " rejectUnauthorized: \u001b[33mtrue\u001b[39m,\n", + " path: \u001b[1mnull\u001b[22m\n", + " },\n", + " requests: [Object: null prototype] {},\n", + " sockets: [Object: null prototype] 
{},\n", + " freeSockets: [Object: null prototype] {},\n", + " keepAliveMsecs: \u001b[33m1000\u001b[39m,\n", + " keepAlive: \u001b[33mfalse\u001b[39m,\n", + " maxSockets: \u001b[33mInfinity\u001b[39m,\n", + " maxFreeSockets: \u001b[33m256\u001b[39m,\n", + " scheduling: \u001b[32m\"lifo\"\u001b[39m,\n", + " maxTotalSockets: \u001b[33mInfinity\u001b[39m,\n", + " totalSocketCount: \u001b[33m-1\u001b[39m,\n", + " maxCachedSessions: \u001b[33m100\u001b[39m,\n", + " _sessionCache: { map: {}, list: [] },\n", + " [\u001b[32mSymbol(kCapture)\u001b[39m]: \u001b[33mfalse\u001b[39m\n", + " },\n", + " paramsSerializer: { serialize: \u001b[36m[Function: serialize]\u001b[39m },\n", + " method: \u001b[32m\"post\"\u001b[39m,\n", + " baseURL: \u001b[32m\"https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com/v2/inference/deployments/d5efb11dee157534/v2/completion\"\u001b[39m,\n", + " params: {},\n", + " proxy: \u001b[33mfalse\u001b[39m,\n", + " data: \u001b[32m'{\"config\":{\"modules\":{\"prompt_templating\":{\"model\":{\"name\":\"gpt-4o\",\"params\":{\"max_completion_tokens\":200,\"temperature\":0}},\"prompt\":{\"template\":[{\"role\":\"system\",\"content\":\"You are a customer support assistant. Analyze the sentiment of the user request provided and return whether the sentiment is positive, neutral, or negative. 
Also provide a one-line justification.\"},{\"role\":\"user\",\"content\":\"Please analyze the sentiment of the following support request: {{ ?support_text }}\"}]}}}},\"placeholder_values\":{\"support_text\":\"User is unhappy with the latest update and facing usability issues.\"}}'\u001b[39m,\n", + " allowAbsoluteUrls: \u001b[33mtrue\u001b[39m\n", + " },\n", + " request: \u001b[36m\u001b[39m HttpsClientRequest {\n", + " _events: [Object: null prototype] {\n", + " abort: \u001b[36m[Function (anonymous)]\u001b[39m,\n", + " aborted: \u001b[36m[Function (anonymous)]\u001b[39m,\n", + " connect: \u001b[36m[Function (anonymous)]\u001b[39m,\n", + " error: \u001b[36m[Function (anonymous)]\u001b[39m,\n", + " socket: \u001b[36m[Function (anonymous)]\u001b[39m,\n", + " timeout: \u001b[36m[Function (anonymous)]\u001b[39m,\n", + " drain: \u001b[36m[Function (anonymous)]\u001b[39m\n", + " },\n", + " _eventsCount: \u001b[33m7\u001b[39m,\n", + " _maxListeners: \u001b[90mundefined\u001b[39m,\n", + " outputData: [],\n", + " outputSize: \u001b[33m0\u001b[39m,\n", + " writable: \u001b[33mtrue\u001b[39m,\n", + " destroyed: \u001b[33mfalse\u001b[39m,\n", + " _last: \u001b[33mtrue\u001b[39m,\n", + " chunkedEncoding: \u001b[33mfalse\u001b[39m,\n", + " shouldKeepAlive: \u001b[33mfalse\u001b[39m,\n", + " maxRequestsOnConnectionReached: \u001b[33mfalse\u001b[39m,\n", + " _defaultKeepAlive: \u001b[33mtrue\u001b[39m,\n", + " useChunkedEncodingByDefault: \u001b[33mtrue\u001b[39m,\n", + " sendDate: \u001b[33mfalse\u001b[39m,\n", + " _removedConnection: \u001b[33mfalse\u001b[39m,\n", + " _removedContLen: \u001b[33mfalse\u001b[39m,\n", + " _removedTE: \u001b[33mfalse\u001b[39m,\n", + " _contentLength: \u001b[32m\"594\"\u001b[39m,\n", + " _hasBody: \u001b[33mtrue\u001b[39m,\n", + " _trailer: \u001b[32m\"\"\u001b[39m,\n", + " finished: \u001b[33mtrue\u001b[39m,\n", + " _headerSent: \u001b[33mtrue\u001b[39m,\n", + " _closed: \u001b[33mfalse\u001b[39m,\n", + " socket: Socket {\n", + " _events: {\n", + " 
close: \u001b[36m[Function: onClose]\u001b[39m,\n", + " error: \u001b[36m[Function]\u001b[39m,\n", + " prefinish: \u001b[90mundefined\u001b[39m,\n", + " finish: \u001b[90mundefined\u001b[39m,\n", + " drain: \u001b[90mundefined\u001b[39m,\n", + " data: \u001b[90mundefined\u001b[39m,\n", + " end: \u001b[36m[Function: _onReadableStreamEnd]\u001b[39m,\n", + " readable: \u001b[90mundefined\u001b[39m,\n", + " connect: \u001b[36m[Function: onConnect]\u001b[39m,\n", + " free: \u001b[36m[Function: onFree]\u001b[39m,\n", + " timeout: \u001b[36m[Array]\u001b[39m,\n", + " agentRemove: \u001b[36m[Function: onRemove]\u001b[39m\n", + " },\n", + " _readableState: ReadableState {\n", + " highWaterMark: \u001b[33m16384\u001b[39m,\n", + " buffer: [],\n", + " bufferIndex: \u001b[33m0\u001b[39m,\n", + " length: \u001b[33m0\u001b[39m,\n", + " pipes: [],\n", + " awaitDrainWriters: \u001b[1mnull\u001b[22m,\n", + " [\u001b[32mSymbol(kState)\u001b[39m]: \u001b[33m1050484\u001b[39m\n", + " },\n", + " _writableState: WritableState {\n", + " highWaterMark: \u001b[33m16384\u001b[39m,\n", + " length: \u001b[33m0\u001b[39m,\n", + " corked: \u001b[33m0\u001b[39m,\n", + " onwrite: \u001b[36m[Function: bound onwrite]\u001b[39m,\n", + " writelen: \u001b[33m0\u001b[39m,\n", + " bufferedIndex: \u001b[33m0\u001b[39m,\n", + " pendingcb: \u001b[33m0\u001b[39m,\n", + " [\u001b[32mSymbol(kState)\u001b[39m]: \u001b[33m1091450228\u001b[39m,\n", + " [\u001b[32mSymbol(kBufferedValue)\u001b[39m]: \u001b[1mnull\u001b[22m\n", + " },\n", + " allowHalfOpen: \u001b[33mfalse\u001b[39m,\n", + " _maxListeners: \u001b[90mundefined\u001b[39m,\n", + " server: \u001b[1mnull\u001b[22m,\n", + " _server: \u001b[1mnull\u001b[22m,\n", + " _peername: \u001b[90mundefined\u001b[39m,\n", + " _sockname: \u001b[90mundefined\u001b[39m,\n", + " _pendingData: \u001b[1mnull\u001b[22m,\n", + " _pendingEncoding: \u001b[32m\"\"\u001b[39m,\n", + " _host: \u001b[32m\"api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com\"\u001b[39m,\n", + " 
_parent: \u001b[1mnull\u001b[22m,\n", + " _needsSockInitWorkaround: \u001b[33mfalse\u001b[39m,\n", + " autoSelectFamilyAttemptedAddresses: [ \u001b[32m\"18.184.235.243:443\"\u001b[39m, \u001b[32m\"3.126.176.189:443\"\u001b[39m ],\n", + " setTimeout: \u001b[36m[Function: setStreamTimeout]\u001b[39m,\n", + " connecting: \u001b[33mfalse\u001b[39m,\n", + " _eventsCount: \u001b[33m7\u001b[39m,\n", + " timeout: \u001b[33m0\u001b[39m,\n", + " write: \u001b[36m[Function: _writeAfterFIN]\u001b[39m,\n", + " [\u001b[32mSymbol(kCapture)\u001b[39m]: \u001b[33mfalse\u001b[39m,\n", + " [\u001b[32mSymbol(asyncIdSymbol)\u001b[39m]: \u001b[33m5\u001b[39m,\n", + " [\u001b[32mSymbol(kHandle)\u001b[39m]: \u001b[1mnull\u001b[22m,\n", + " [\u001b[32mSymbol(kSetNoDelay)\u001b[39m]: \u001b[33mfalse\u001b[39m,\n", + " [\u001b[32mSymbol(lastWriteQueueSize)\u001b[39m]: \u001b[33m0\u001b[39m,\n", + " [\u001b[32mSymbol(timeout)\u001b[39m]: \u001b[1mnull\u001b[22m,\n", + " [\u001b[32mSymbol(kBuffer)\u001b[39m]: \u001b[1mnull\u001b[22m,\n", + " [\u001b[32mSymbol(kBufferCb)\u001b[39m]: \u001b[1mnull\u001b[22m,\n", + " [\u001b[32mSymbol(kBufferGen)\u001b[39m]: \u001b[1mnull\u001b[22m,\n", + " [\u001b[32mSymbol(kBytesRead)\u001b[39m]: \u001b[33m0\u001b[39m,\n", + " [\u001b[32mSymbol(kBytesWritten)\u001b[39m]: \u001b[33m0\u001b[39m\n", + " },\n", + " _header: \u001b[32m\"POST /v2/inference/deployments/d5efb11dee157534/v2/completion HTTP/1.1\\r\\n\"\u001b[39m +\n", + " \u001b[32m\"\\r\\n\"\u001b[39m,\n", + " _keepAliveTimeout: \u001b[33m0\u001b[39m,\n", + " _onPendingData: \u001b[36m[Function: nop]\u001b[39m,\n", + " _bodyWriter: WritableStreamDefaultWriter {\n", + " closed: Promise { \u001b[90mundefined\u001b[39m },\n", + " desiredSize: \u001b[33m0\u001b[39m,\n", + " ready: Promise { \u001b[90mundefined\u001b[39m }\n", + " },\n", + " defaultProtocol: \u001b[32m\"https:\"\u001b[39m,\n", + " aborted: \u001b[33mfalse\u001b[39m,\n", + " agent: Agent {\n", + " _events: [Object: null prototype] {\n", + " 
free: \u001b[36m[Function (anonymous)]\u001b[39m,\n", + " newListener: \u001b[36m[Function: maybeEnableKeylog]\u001b[39m\n", + " },\n", + " _eventsCount: \u001b[33m2\u001b[39m,\n", + " _maxListeners: \u001b[90mundefined\u001b[39m,\n", + " defaultPort: \u001b[33m443\u001b[39m,\n", + " protocol: \u001b[32m\"https:\"\u001b[39m,\n", + " options: [Object: null prototype] {\n", + " rejectUnauthorized: \u001b[33mtrue\u001b[39m,\n", + " path: \u001b[1mnull\u001b[22m\n", + " },\n", + " requests: [Object: null prototype] {},\n", + " sockets: [Object: null prototype] {},\n", + " freeSockets: [Object: null prototype] {},\n", + " keepAliveMsecs: \u001b[33m1000\u001b[39m,\n", + " keepAlive: \u001b[33mfalse\u001b[39m,\n", + " maxSockets: \u001b[33mInfinity\u001b[39m,\n", + " maxFreeSockets: \u001b[33m256\u001b[39m,\n", + " scheduling: \u001b[32m\"lifo\"\u001b[39m,\n", + " maxTotalSockets: \u001b[33mInfinity\u001b[39m,\n", + " totalSocketCount: \u001b[33m-1\u001b[39m,\n", + " maxCachedSessions: \u001b[33m100\u001b[39m,\n", + " _sessionCache: { map: {}, list: [] },\n", + " [\u001b[32mSymbol(kCapture)\u001b[39m]: \u001b[33mfalse\u001b[39m\n", + " },\n", + " method: \u001b[32m\"POST\"\u001b[39m,\n", + " maxHeaderSize: \u001b[90mundefined\u001b[39m,\n", + " insecureHTTPParser: \u001b[90mundefined\u001b[39m,\n", + " path: \u001b[32m\"/v2/inference/deployments/d5efb11dee157534/v2/completion\"\u001b[39m,\n", + " _req: [Object: null prototype] { requestRid: \u001b[33m18\u001b[39m, cancelHandleRid: \u001b[33m19\u001b[39m },\n", + " _encrypted: \u001b[33mtrue\u001b[39m,\n", + " socketPath: \u001b[90mundefined\u001b[39m,\n", + " joinDuplicateHeaders: \u001b[90mundefined\u001b[39m,\n", + " _ended: \u001b[33mfalse\u001b[39m,\n", + " res: IncomingMessageForClient {\n", + " _events: {\n", + " close: \u001b[36m[Function (anonymous)]\u001b[39m,\n", + " error: \u001b[36m[Function: handleStreamError]\u001b[39m,\n", + " data: \u001b[36m[Function: handleStreamData]\u001b[39m,\n", + " end: 
\u001b[36m[Function: handleStreamEnd]\u001b[39m,\n", + " readable: \u001b[90mundefined\u001b[39m,\n", + " aborted: \u001b[36m[Function: handlerStreamAborted]\u001b[39m\n", + " },\n", + " _readableState: ReadableState {\n", + " highWaterMark: \u001b[33m16384\u001b[39m,\n", + " buffer: [],\n", + " bufferIndex: \u001b[33m0\u001b[39m,\n", + " length: \u001b[33m0\u001b[39m,\n", + " pipes: [],\n", + " awaitDrainWriters: \u001b[1mnull\u001b[22m,\n", + " [\u001b[32mSymbol(kState)\u001b[39m]: \u001b[33m194512764\u001b[39m\n", + " },\n", + " _maxListeners: \u001b[90mundefined\u001b[39m,\n", + " decoder: TextDecoder {\n", + " encoding: \u001b[32m\"utf-8\"\u001b[39m,\n", + " fatal: \u001b[33mfalse\u001b[39m,\n", + " ignoreBOM: \u001b[33mfalse\u001b[39m\n", + " },\n", + " socket: Socket {\n", + " _events: \u001b[36m[Object]\u001b[39m,\n", + " _readableState: \u001b[36m[ReadableState]\u001b[39m,\n", + " _writableState: \u001b[36m[WritableState]\u001b[39m,\n", + " allowHalfOpen: \u001b[33mfalse\u001b[39m,\n", + " _maxListeners: \u001b[90mundefined\u001b[39m,\n", + " server: \u001b[1mnull\u001b[22m,\n", + " _server: \u001b[1mnull\u001b[22m,\n", + " _peername: \u001b[90mundefined\u001b[39m,\n", + " _sockname: \u001b[90mundefined\u001b[39m,\n", + " _pendingData: \u001b[1mnull\u001b[22m,\n", + " _pendingEncoding: \u001b[32m\"\"\u001b[39m,\n", + " _host: \u001b[32m\"api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com\"\u001b[39m,\n", + " _parent: \u001b[1mnull\u001b[22m,\n", + " _needsSockInitWorkaround: \u001b[33mfalse\u001b[39m,\n", + " autoSelectFamilyAttemptedAddresses: \u001b[36m[Array]\u001b[39m,\n", + " setTimeout: \u001b[36m[Function: setStreamTimeout]\u001b[39m,\n", + " connecting: \u001b[33mfalse\u001b[39m,\n", + " _eventsCount: \u001b[33m7\u001b[39m,\n", + " timeout: \u001b[33m0\u001b[39m,\n", + " write: \u001b[36m[Function: _writeAfterFIN]\u001b[39m,\n", + " [\u001b[32mSymbol(kCapture)\u001b[39m]: \u001b[33mfalse\u001b[39m,\n", + " 
[\u001b[32mSymbol(asyncIdSymbol)\u001b[39m]: \u001b[33m5\u001b[39m,\n", + " [\u001b[32mSymbol(kHandle)\u001b[39m]: \u001b[1mnull\u001b[22m,\n", + " [\u001b[32mSymbol(kSetNoDelay)\u001b[39m]: \u001b[33mfalse\u001b[39m,\n", + " [\u001b[32mSymbol(lastWriteQueueSize)\u001b[39m]: \u001b[33m0\u001b[39m,\n", + " [\u001b[32mSymbol(timeout)\u001b[39m]: \u001b[1mnull\u001b[22m,\n", + " [\u001b[32mSymbol(kBuffer)\u001b[39m]: \u001b[1mnull\u001b[22m,\n", + " [\u001b[32mSymbol(kBufferCb)\u001b[39m]: \u001b[1mnull\u001b[22m,\n", + " [\u001b[32mSymbol(kBufferGen)\u001b[39m]: \u001b[1mnull\u001b[22m,\n", + " [\u001b[32mSymbol(kBytesRead)\u001b[39m]: \u001b[33m0\u001b[39m,\n", + " [\u001b[32mSymbol(kBytesWritten)\u001b[39m]: \u001b[33m0\u001b[39m\n", + " },\n", + " httpVersionMajor: \u001b[1mnull\u001b[22m,\n", + " httpVersionMinor: \u001b[1mnull\u001b[22m,\n", + " httpVersion: \u001b[1mnull\u001b[22m,\n", + " complete: \u001b[33mtrue\u001b[39m,\n", + " rawHeaders: [\n", + " \u001b[32m\"date\"\u001b[39m,\n", + " \u001b[32m\"Thu, 05 Feb 2026 10:29:53 GMT\"\u001b[39m,\n", + " \u001b[32m\"content-type\"\u001b[39m,\n", + " \u001b[32m\"application/json\"\u001b[39m,\n", + " \u001b[32m\"content-length\"\u001b[39m,\n", + " \u001b[32m\"1955\"\u001b[39m,\n", + " \u001b[32m\"x-upstream-service-time\"\u001b[39m,\n", + " \u001b[32m\"693\"\u001b[39m\n", + " ],\n", + " rawTrailers: [],\n", + " joinDuplicateHeaders: \u001b[33mfalse\u001b[39m,\n", + " aborted: \u001b[33mfalse\u001b[39m,\n", + " upgrade: \u001b[1mnull\u001b[22m,\n", + " url: \u001b[32m\"https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com/v2/inference/deployments/d5efb11dee157534/v2/completion\"\u001b[39m,\n", + " method: \u001b[1mnull\u001b[22m,\n", + " statusCode: \u001b[33m200\u001b[39m,\n", + " statusMessage: \u001b[32m\"OK\"\u001b[39m,\n", + " client: Socket {\n", + " _events: \u001b[36m[Object]\u001b[39m,\n", + " _readableState: \u001b[36m[ReadableState]\u001b[39m,\n", + " _writableState: 
\u001b[36m[WritableState]\u001b[39m,\n", + " allowHalfOpen: \u001b[33mfalse\u001b[39m,\n", + " _maxListeners: \u001b[90mundefined\u001b[39m,\n", + " server: \u001b[1mnull\u001b[22m,\n", + " _server: \u001b[1mnull\u001b[22m,\n", + " _peername: \u001b[90mundefined\u001b[39m,\n", + " _sockname: \u001b[90mundefined\u001b[39m,\n", + " _pendingData: \u001b[1mnull\u001b[22m,\n", + " _pendingEncoding: \u001b[32m\"\"\u001b[39m,\n", + " _host: \u001b[32m\"api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com\"\u001b[39m,\n", + " _parent: \u001b[1mnull\u001b[22m,\n", + " _needsSockInitWorkaround: \u001b[33mfalse\u001b[39m,\n", + " autoSelectFamilyAttemptedAddresses: \u001b[36m[Array]\u001b[39m,\n", + " setTimeout: \u001b[36m[Function: setStreamTimeout]\u001b[39m,\n", + " connecting: \u001b[33mfalse\u001b[39m,\n", + " _eventsCount: \u001b[33m7\u001b[39m,\n", + " timeout: \u001b[33m0\u001b[39m,\n", + " write: \u001b[36m[Function: _writeAfterFIN]\u001b[39m,\n", + " [\u001b[32mSymbol(kCapture)\u001b[39m]: \u001b[33mfalse\u001b[39m,\n", + " [\u001b[32mSymbol(asyncIdSymbol)\u001b[39m]: \u001b[33m5\u001b[39m,\n", + " [\u001b[32mSymbol(kHandle)\u001b[39m]: \u001b[1mnull\u001b[22m,\n", + " [\u001b[32mSymbol(kSetNoDelay)\u001b[39m]: \u001b[33mfalse\u001b[39m,\n", + " [\u001b[32mSymbol(lastWriteQueueSize)\u001b[39m]: \u001b[33m0\u001b[39m,\n", + " [\u001b[32mSymbol(timeout)\u001b[39m]: \u001b[1mnull\u001b[22m,\n", + " [\u001b[32mSymbol(kBuffer)\u001b[39m]: \u001b[1mnull\u001b[22m,\n", + " [\u001b[32mSymbol(kBufferCb)\u001b[39m]: \u001b[1mnull\u001b[22m,\n", + " [\u001b[32mSymbol(kBufferGen)\u001b[39m]: \u001b[1mnull\u001b[22m,\n", + " [\u001b[32mSymbol(kBytesRead)\u001b[39m]: \u001b[33m0\u001b[39m,\n", + " [\u001b[32mSymbol(kBytesWritten)\u001b[39m]: \u001b[33m0\u001b[39m\n", + " },\n", + " _consuming: \u001b[33mtrue\u001b[39m,\n", + " _dumped: \u001b[33mfalse\u001b[39m,\n", + " _eventsCount: \u001b[33m5\u001b[39m,\n", + " req: \u001b[36m[Circular *1]\u001b[39m,\n", + " _bodyRid: 
\u001b[33m20\u001b[39m,\n", + " responseUrl: \u001b[32m\"https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com/v2/inference/deployments/d5efb11dee157534/v2/completion\"\u001b[39m,\n", + " redirects: [],\n", + " [\u001b[32mSymbol(kCapture)\u001b[39m]: \u001b[33mfalse\u001b[39m,\n", + " [\u001b[32mSymbol(kHeaders)\u001b[39m]: {\n", + " date: \u001b[32m\"Thu, 05 Feb 2026 10:29:53 GMT\"\u001b[39m,\n", + " \u001b[32m\"content-type\"\u001b[39m: \u001b[32m\"application/json\"\u001b[39m,\n", + " \u001b[32m\"content-length\"\u001b[39m: \u001b[32m\"1955\"\u001b[39m,\n", + " \u001b[32m\"x-upstream-service-time\"\u001b[39m: \u001b[32m\"693\"\u001b[39m\n", + " },\n", + " [\u001b[32mSymbol(kHeadersCount)\u001b[39m]: \u001b[33m8\u001b[39m,\n", + " [\u001b[32mSymbol(kTrailers)\u001b[39m]: \u001b[1mnull\u001b[22m,\n", + " [\u001b[32mSymbol(kTrailersCount)\u001b[39m]: \u001b[33m0\u001b[39m\n", + " },\n", + " upgradeOrConnect: \u001b[33mfalse\u001b[39m,\n", + " parser: \u001b[1mnull\u001b[22m,\n", + " maxHeadersCount: \u001b[1mnull\u001b[22m,\n", + " reusedSocket: \u001b[33mfalse\u001b[39m,\n", + " host: \u001b[32m\"api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com\"\u001b[39m,\n", + " protocol: \u001b[32m\"https:\"\u001b[39m,\n", + " port: \u001b[33m443\u001b[39m,\n", + " hash: \u001b[90mundefined\u001b[39m,\n", + " search: \u001b[90mundefined\u001b[39m,\n", + " auth: \u001b[90mundefined\u001b[39m,\n", + " _redirectable: Writable {\n", + " _events: {\n", + " error: \u001b[36m[Function: handleRequestError]\u001b[39m,\n", + " prefinish: \u001b[90mundefined\u001b[39m,\n", + " finish: \u001b[90mundefined\u001b[39m,\n", + " drain: \u001b[90mundefined\u001b[39m,\n", + " response: \u001b[36m[Function: handleResponse]\u001b[39m,\n", + " socket: \u001b[36m[Array]\u001b[39m\n", + " },\n", + " _writableState: WritableState {\n", + " highWaterMark: \u001b[33m16384\u001b[39m,\n", + " length: \u001b[33m0\u001b[39m,\n", + " corked: \u001b[33m0\u001b[39m,\n", + " onwrite: \u001b[36m[Function: 
bound onwrite]\u001b[39m,\n", + " writelen: \u001b[33m0\u001b[39m,\n", + " bufferedIndex: \u001b[33m0\u001b[39m,\n", + " pendingcb: \u001b[33m0\u001b[39m,\n", + " [\u001b[32mSymbol(kState)\u001b[39m]: \u001b[33m17580812\u001b[39m,\n", + " [\u001b[32mSymbol(kBufferedValue)\u001b[39m]: \u001b[1mnull\u001b[22m\n", + " },\n", + " _maxListeners: \u001b[90mundefined\u001b[39m,\n", + " _options: {\n", + " maxRedirects: \u001b[33m21\u001b[39m,\n", + " maxBodyLength: \u001b[33mInfinity\u001b[39m,\n", + " protocol: \u001b[32m\"https:\"\u001b[39m,\n", + " path: \u001b[32m\"/v2/inference/deployments/d5efb11dee157534/v2/completion\"\u001b[39m,\n", + " method: \u001b[32m\"POST\"\u001b[39m,\n", + " headers: \u001b[36m[Object: null prototype]\u001b[39m,\n", + " agents: \u001b[36m[Object]\u001b[39m,\n", + " auth: \u001b[90mundefined\u001b[39m,\n", + " family: \u001b[90mundefined\u001b[39m,\n", + " beforeRedirect: \u001b[36m[Function: dispatchBeforeRedirect]\u001b[39m,\n", + " beforeRedirects: \u001b[36m[Object]\u001b[39m,\n", + " http2Options: \u001b[90mundefined\u001b[39m,\n", + " hostname: \u001b[32m\"api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com\"\u001b[39m,\n", + " port: \u001b[32m\"\"\u001b[39m,\n", + " agent: \u001b[36m[Agent]\u001b[39m,\n", + " nativeProtocols: \u001b[36m[Object]\u001b[39m,\n", + " pathname: \u001b[32m\"/v2/inference/deployments/d5efb11dee157534/v2/completion\"\u001b[39m\n", + " },\n", + " _ended: \u001b[33mtrue\u001b[39m,\n", + " _ending: \u001b[33mtrue\u001b[39m,\n", + " _redirectCount: \u001b[33m0\u001b[39m,\n", + " _redirects: [],\n", + " _requestBodyLength: \u001b[33m594\u001b[39m,\n", + " _requestBodyBuffers: [],\n", + " _eventsCount: \u001b[33m3\u001b[39m,\n", + " _onNativeResponse: \u001b[36m[Function (anonymous)]\u001b[39m,\n", + " _currentRequest: \u001b[36m[Circular *1]\u001b[39m,\n", + " _currentUrl: \u001b[32m\"https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com/v2/inference/deployments/d5efb11dee157534/v2/completion\"\u001b[39m,\n", + 
" _timeout: \u001b[1mnull\u001b[22m,\n", + " [\u001b[32mSymbol(kCapture)\u001b[39m]: \u001b[33mfalse\u001b[39m\n", + " },\n", + " _bodyWritable: WritableStream { locked: \u001b[33mtrue\u001b[39m },\n", + " _bodyWriteRid: \u001b[33m16\u001b[39m,\n", + " [\u001b[32mSymbol(kCapture)\u001b[39m]: \u001b[33mfalse\u001b[39m,\n", + " [\u001b[32mSymbol(kNeedDrain)\u001b[39m]: \u001b[33mfalse\u001b[39m,\n", + " [\u001b[32mSymbol(corked)\u001b[39m]: \u001b[33m0\u001b[39m,\n", + " [\u001b[32mSymbol(kOutHeaders)\u001b[39m]: [Object: null prototype] {\n", + " accept: [ \u001b[32m\"Accept\"\u001b[39m, \u001b[32m\"application/json, text/plain, */*\"\u001b[39m ],\n", + " \u001b[32m\"content-type\"\u001b[39m: [ \u001b[32m\"Content-Type\"\u001b[39m, \u001b[32m\"application/json\"\u001b[39m ],\n", + " authorization: [\n", + " \u001b[32m\"authorization\"\u001b[39m,\n", + " \u001b[32m\"Bearer [REDACTED]\"\u001b[39m\n", + " ],\n", + " \u001b[32m\"ai-resource-group\"\u001b[39m: [ \u001b[32m\"ai-resource-group\"\u001b[39m, \u001b[32m\"default\"\u001b[39m ],\n", + " \u001b[32m\"ai-client-type\"\u001b[39m: [ \u001b[32m\"ai-client-type\"\u001b[39m, \u001b[32m\"AI SDK JavaScript\"\u001b[39m ],\n", + " \u001b[32m\"user-agent\"\u001b[39m: [ \u001b[32m\"User-Agent\"\u001b[39m, \u001b[32m\"axios/1.13.2\"\u001b[39m ],\n", + " \u001b[32m\"content-length\"\u001b[39m: [ \u001b[32m\"Content-Length\"\u001b[39m, \u001b[32m\"594\"\u001b[39m ],\n", + " \u001b[32m\"accept-encoding\"\u001b[39m: [ \u001b[32m\"Accept-Encoding\"\u001b[39m, \u001b[32m\"gzip, compress, deflate, br\"\u001b[39m ],\n", + " host: [ \u001b[32m\"Host\"\u001b[39m, \u001b[32m\"api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com\"\u001b[39m ]\n", + " },\n", + " [\u001b[32mSymbol(kUniqueHeaders)\u001b[39m]: \u001b[1mnull\u001b[22m\n", + " },\n", + " data: {\n", + " request_id: \u001b[32m\"49c9f5f4-5992-94a0-894f-9434fc112b92\"\u001b[39m,\n", + " intermediate_results: {\n", + " templating: [ \u001b[36m[Object]\u001b[39m, \u001b[36m[Object]\u001b[39m ],\n", + " llm: {\n", + " id: \u001b[32m\"chatcmpl-D5quQNKDNokKQ4cLirGD1kN2AuOdh\"\u001b[39m,\n", + " object: \u001b[32m\"chat.completion\"\u001b[39m,\n", + " created: \u001b[33m1770287394\u001b[39m,\n", + " model: \u001b[32m\"gpt-4o-2024-08-06\"\u001b[39m,\n", + " system_fingerprint: \u001b[32m\"fp_4a331a0222\"\u001b[39m,\n", + " choices: \u001b[36m[Array]\u001b[39m,\n", + " usage: \u001b[36m[Object]\u001b[39m\n", + " }\n", + " },\n", + " final_result: {\n", + " id: \u001b[32m\"chatcmpl-D5quQNKDNokKQ4cLirGD1kN2AuOdh\"\u001b[39m,\n", + " object: \u001b[32m\"chat.completion\"\u001b[39m,\n", + " created: \u001b[33m1770287394\u001b[39m,\n", + " model: \u001b[32m\"gpt-4o-2024-08-06\"\u001b[39m,\n", + " 
system_fingerprint: \u001b[32m\"fp_4a331a0222\"\u001b[39m,\n", + " choices: [ \u001b[36m[Object]\u001b[39m ],\n", + " usage: {\n", + " completion_tokens: \u001b[33m28\u001b[39m,\n", + " prompt_tokens: \u001b[33m68\u001b[39m,\n", + " total_tokens: \u001b[33m96\u001b[39m,\n", + " prompt_tokens_details: \u001b[36m[Object]\u001b[39m,\n", + " completion_tokens_details: \u001b[36m[Object]\u001b[39m\n", + " }\n", + " }\n", + " }\n", + " },\n", + " _data: {\n", + " request_id: \u001b[32m\"49c9f5f4-5992-94a0-894f-9434fc112b92\"\u001b[39m,\n", + " intermediate_results: {\n", + " templating: [\n", + " {\n", + " role: \u001b[32m\"system\"\u001b[39m,\n", + " content: \u001b[32m\"You are a customer support assistant. Analyze the sentiment of the user request provided and return whether the sentiment is positive, neutral, or negative. Also provide a one-line justification.\"\u001b[39m\n", + " },\n", + " {\n", + " content: \u001b[32m\"Please analyze the sentiment of the following support request: User is unhappy with the latest update and facing usability issues.\"\u001b[39m,\n", + " role: \u001b[32m\"user\"\u001b[39m\n", + " }\n", + " ],\n", + " llm: {\n", + " id: \u001b[32m\"chatcmpl-D5quQNKDNokKQ4cLirGD1kN2AuOdh\"\u001b[39m,\n", + " object: \u001b[32m\"chat.completion\"\u001b[39m,\n", + " created: \u001b[33m1770287394\u001b[39m,\n", + " model: \u001b[32m\"gpt-4o-2024-08-06\"\u001b[39m,\n", + " system_fingerprint: \u001b[32m\"fp_4a331a0222\"\u001b[39m,\n", + " choices: [ \u001b[36m[Object]\u001b[39m ],\n", + " usage: {\n", + " completion_tokens: \u001b[33m28\u001b[39m,\n", + " prompt_tokens: \u001b[33m68\u001b[39m,\n", + " total_tokens: \u001b[33m96\u001b[39m,\n", + " prompt_tokens_details: \u001b[36m[Object]\u001b[39m,\n", + " completion_tokens_details: \u001b[36m[Object]\u001b[39m\n", + " }\n", + " }\n", + " },\n", + " final_result: {\n", + " id: \u001b[32m\"chatcmpl-D5quQNKDNokKQ4cLirGD1kN2AuOdh\"\u001b[39m,\n", + " object: \u001b[32m\"chat.completion\"\u001b[39m,\n", + " 
created: \u001b[33m1770287394\u001b[39m,\n", + " model: \u001b[32m\"gpt-4o-2024-08-06\"\u001b[39m,\n", + " system_fingerprint: \u001b[32m\"fp_4a331a0222\"\u001b[39m,\n", + " choices: [ { index: \u001b[33m0\u001b[39m, message: \u001b[36m[Object]\u001b[39m, finish_reason: \u001b[32m\"stop\"\u001b[39m } ],\n", + " usage: {\n", + " completion_tokens: \u001b[33m28\u001b[39m,\n", + " prompt_tokens: \u001b[33m68\u001b[39m,\n", + " total_tokens: \u001b[33m96\u001b[39m,\n", + " prompt_tokens_details: { audio_tokens: \u001b[33m0\u001b[39m, cached_tokens: \u001b[33m0\u001b[39m },\n", + " completion_tokens_details: {\n", + " accepted_prediction_tokens: \u001b[33m0\u001b[39m,\n", + " audio_tokens: \u001b[33m0\u001b[39m,\n", + " reasoning_tokens: \u001b[33m0\u001b[39m,\n", + " rejected_prediction_tokens: \u001b[33m0\u001b[39m\n", + " }\n", + " }\n", + " }\n", + " }\n", + "}" + ] + }, + "execution_count": 9, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "response" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Deno", + "language": "typescript", + "name": "deno" + }, + "language_info": { + "codemirror_mode": "typescript", + "file_extension": ".ts", + "mimetype": "text/x.typescript", + "name": "typescript", + "nbconvert_exporter": "script", + "pygments_lexer": "typescript", + "version": "5.8.3" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/Python_orchestration_optModules_Tutorial2.ipynb b/tutorials/ai-core-orchestration-consumption-opt-v2/Python_orchestration_optModules_Tutorial2.ipynb new file mode 100644 index 0000000000..cef853ee85 --- /dev/null +++ b/tutorials/ai-core-orchestration-consumption-opt-v2/Python_orchestration_optModules_Tutorial2.ipynb @@ -0,0 +1,422 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "d4d8f0d470dd5ce2", + "metadata": {}, + "source": [ + "# Orchestration with GenAI Hub" + ] + }, + { + "cell_type": "markdown", + "id": 
"5cfa2bb46ec75e47", + "metadata": {}, + "source": [ + "This notebook demonstrates setting up data masking and content filtering, configuring an orchestration pipeline, and querying multiple LLM models with GenAI Hub." + ] + }, + { + "cell_type": "markdown", + "id": "05fc1942", + "metadata": {}, + "source": [ + "Checking the installed `sap-ai-sdk-gen` package" + ] + }, + { + "cell_type": "markdown", + "id": "bdccc0b7", + "metadata": {}, + "source": [ + "The following code imports the required libraries, reads credentials from a `creds.json` file, and sets environment variables for authentication and API access." + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "e4166703", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Name: sap-ai-sdk-gen\n", + "Version: 6.1.2\n", + "Summary: SAP Cloud SDK for AI (Python): generative AI SDK\n", + "Home-page: https://www.sap.com/\n", + "Author: SAP SE\n", + "Author-email: \n", + "License: SAP DEVELOPER LICENSE AGREEMENT\n", + "Location: C:\\Users\\C5384965\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\n", + "Requires: click, dacite, h11, httpx, langchain, langchain-classic, langchain-community, langchain-openai, openai, overloading, packaging, pydantic, sap-ai-sdk-core\n", + "Required-by: \n" + ] + } + ], + "source": [ + "!pip show sap-ai-sdk-gen" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "96da5c85", + "metadata": {}, + "outputs": [], + "source": [ + "import time\n", + "import json\n", + "import os\n", + "from IPython.display import clear_output\n", + "from ai_core_sdk.ai_core_v2_client import AICoreV2Client\n", + "from ai_api_client_sdk.models.parameter_binding import ParameterBinding\n", + "from enum import Enum\n", + " \n", + "# Inline credentials\n", + "with open('creds.json') as f:\n", + " credCF = json.load(f)\n", + "\n", + "# Set environment variables\n", + "def set_environment_vars(credCF):\n", + " env_vars = {\n", + " 'AICORE_AUTH_URL': 
credCF['url'] + '/oauth/token',\n", + " 'AICORE_CLIENT_ID': credCF['clientid'],\n", + " 'AICORE_CLIENT_SECRET': credCF['clientsecret'],\n", + " 'AICORE_BASE_URL': credCF[\"serviceurls\"][\"AI_API_URL\"] + \"/v2\",\n", + " 'AICORE_RESOURCE_GROUP': \"grounding\" \n", + " }\n", + "\n", + " for key, value in env_vars.items():\n", + " os.environ[key] = value\n", + " print(key) # Print only the variable name to avoid exposing secrets\n", + "\n", + "# Create AI Core client instance\n", + "def create_ai_core_client(credCF):\n", + " set_environment_vars(credCF) # Ensure environment variables are set\n", + " return AICoreV2Client(\n", + " base_url=os.environ['AICORE_BASE_URL'],\n", + " auth_url=os.environ['AICORE_AUTH_URL'],\n", + " client_id=os.environ['AICORE_CLIENT_ID'],\n", + " client_secret=os.environ['AICORE_CLIENT_SECRET'],\n", + " resource_group=os.environ['AICORE_RESOURCE_GROUP']\n", + " )\n", + "\n", + "ai_core_client = create_ai_core_client(credCF)" + ] + }, + { + "cell_type": "markdown", + "id": "ebc8ab8b", + "metadata": {}, + "source": [ + "## Basic Orchestration Pipeline\n", + "\n", + "Now that you have YOUR_DEPLOYMENT_URL, let's walk through a basic orchestration pipeline." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "be250e32", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\"Subject: Bestellung #1234567890 Verspätet - John Johnson Nachricht: Halle, ich schreibe ihnen um mich nach dem Status meiner Bestellung mit der Bestellnr. +1234567890 zu erkundigen. Die Lieferung war eigentlich für gestern geplant, ist bisher jedoch nicht erfolgt. Mein Name ist John Johnson und meine Lieferadresse lautet 125 Cole Meadows Drive Palo Alto, California 94301. Bitte lassen Sie mich per Telefon unter der Nummer +1 505802 2172 wissen, wann ich mit meiner Lieferung rechnen kann. 
Danke!\"\n" + ] + } + ], + "source": [ + "\n", + "from gen_ai_hub.orchestration_v2.utils import load_text_file \n", + "# Load the support request file content \n", + "support_request_path = r\"C:\\Users\\C5384965\\OneDrive - SAP SE\\2026\\jan-26\\06-01-26\\ai-core-orchestration-consumption-opt\\support-request.txt\" # Specify the correct path to the file \n", + "support_request = load_text_file(support_request_path) \n", + "# Print the content to verify it has been loaded \n", + "print(support_request)" + ] + }, + { + "cell_type": "markdown", + "id": "8dac7637", + "metadata": {}, + "source": [ + "# Step 1: Templating\n", + "\n", + "Explanation of Templating Code\n", + "\n", + "This code defines a template for an AI assistant using orchestration configuration. The `Template` object is set up with system and user messages to guide the assistant’s response behavior. \n", + "\n", + "Key Components:\n", + "- **SystemMessage**: Sets a predefined instruction for the AI assistant. This message typically includes the assistant's role and any specific guidelines it should follow.\n", + "- **UserMessage**: Represents the user's input and how it is structured in the conversation.\n", + " \n", + "In this revised prompt, only queries are passed to the assistant without any additional context. The AI is expected to respond based solely on the provided input.\n" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "7ff397f4", + "metadata": {}, + "outputs": [], + "source": [ + "from gen_ai_hub.orchestration_v2.models.message import SystemMessage, UserMessage\n", + "from gen_ai_hub.orchestration_v2.models.template import Template\n", + "\n", + "# Define the sentiment analysis template\n", + "template = Template(\n", + " template=[\n", + " SystemMessage(content=\"\"\"You are a customer support assistant. Analyze the sentiment of the user request provided and return whether the sentiment is positive, neutral, or negative. 
Also provide a one-line justification for your classification.\"\"\"\n", + " ),\n", + " UserMessage(content=\"Please analyze the sentiment of the following support request: {{ ?support_text }}\"\n", + " ),\n", + " ],\n", + " defaults=\n", + " {\"support_text\":\"User is unhappy with the latest update and facing usability issues.\"}\n", + ")\n" + ] + }, + { + "cell_type": "markdown", + "id": "dbe251e0", + "metadata": {}, + "source": [ + "# Step 2: Define the LLM -list of models\n", + "\n", + "The LLM class is used to configure and initialize a model for generating text based on specific parameters. In this example, we'll use the list of models to perform the content creation task.\n", + "\n", + "ℹ️Note that virtual deployment of the model is managed automatically by the Orchestration Service, so no additional deployment setup is required on your part." + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "62d96bfd", + "metadata": {}, + "outputs": [], + "source": [ + "from gen_ai_hub.orchestration_v2.models.llm_model_details import LLMModelDetails\n", + "\n", + "llm = LLMModelDetails(name=\"gpt-5-nano\", parameters={\"max_completion_tokens\": 1028})" + ] + }, + { + "cell_type": "markdown", + "id": "e1e8a4ae", + "metadata": {}, + "source": [ + "This configuration initializes the model to use the list of llm models with the latest updates. The model will generate responses up to 256 tokens in length and produce more predictable and focused output due to the low temperature setting." + ] + }, + { + "cell_type": "markdown", + "id": "adfeeca7", + "metadata": {}, + "source": [ + "### Data Masking\n", + "\n", + "The Data Masking Module anonymizes or pseudonymizes personally identifiable information (PII) before it is processed by the LLM module. When data is anonymized, all identifying information is replaced with placeholders (e.g., MASKED_ENTITY), and the original data cannot be recovered, ensuring that no trace of the original information is retained. 
In contrast, pseudonymized data is substituted with unique placeholders (e.g., MASKED_ENTITY_ID), allowing the original information to be restored if needed. In both cases, the masking module identifies sensitive data and replaces it with appropriate placeholders before further processing." + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "764402de", + "metadata": {}, + "outputs": [], + "source": [ + "from gen_ai_hub.orchestration_v2.models.data_masking import MaskingModuleConfig, MaskingProviderConfig, MaskingMethod, DPIStandardEntity, ProfileEntity\n", + "\n", + "data_masking_config = MaskingModuleConfig(\n", + " masking_providers=[MaskingProviderConfig(\n", + " method=MaskingMethod.ANONYMIZATION,\n", + " entities=[\n", + " DPIStandardEntity(type=ProfileEntity.ADDRESS),\n", + " DPIStandardEntity(type=ProfileEntity.EMAIL),\n", + " DPIStandardEntity(type=ProfileEntity.PHONE),\n", + " DPIStandardEntity(type=ProfileEntity.PERSON),\n", + " ]\n", + " )],\n", + "\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "1490f85f", + "metadata": {}, + "source": [ + "### Content Filtering\n", + "\n", + "The Content Filtering Module can be configured to filter both the input to the LLM module (input filter) and the output generated by the LLM (output filter). The module uses predefined classification services to detect inappropriate or unwanted content, allowing flexible configuration through customizable thresholds. These thresholds can be set to control the sensitivity of filtering, ensuring that content meets desired standards before it is processed or returned as output." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 10, + "id": "b888b2dd", + "metadata": {}, + "outputs": [], + "source": [ + "from gen_ai_hub.orchestration_v2.models.azure_content_filter import AzureContentFilter, AzureThreshold\n", + "from gen_ai_hub.orchestration_v2.models.llama_guard_3_filter import LlamaGuard38bFilter\n", + "from gen_ai_hub.orchestration_v2.models.content_filtering import FilteringModuleConfig, InputFiltering, OutputFiltering\n", + "from gen_ai_hub.orchestration_v2.models.content_filter import ContentFilter, ContentFilterProvider\n", + "\n", + "content_filter_config = FilteringModuleConfig(\n", + " input=InputFiltering(filters=[\n", + " ContentFilter(type=ContentFilterProvider.AZURE, config=AzureContentFilter(hate=AzureThreshold.ALLOW_SAFE,\n", + " violence=AzureThreshold.ALLOW_SAFE,\n", + " self_harm=AzureThreshold.ALLOW_SAFE,\n", + " sexual=AzureThreshold.ALLOW_SAFE)),\n", + " ContentFilter(type=ContentFilterProvider.LLAMA_GUARD_3_8B, config=LlamaGuard38bFilter(hate=True))\n", + " ]),\n", + " output=OutputFiltering(filters=[\n", + " ContentFilter(type=ContentFilterProvider.AZURE, config=AzureContentFilter(hate=AzureThreshold.ALLOW_SAFE,\n", + " violence=AzureThreshold.ALLOW_SAFE,\n", + " self_harm=AzureThreshold.ALLOW_SAFE,\n", + " sexual=AzureThreshold.ALLOW_SAFE)),\n", + " ContentFilter(type=ContentFilterProvider.LLAMA_GUARD_3_8B, config=LlamaGuard38bFilter(hate=True))\n", + " ])\n", + "\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "6a1199e6", + "metadata": {}, + "source": [ + "### Translation\n", + "\n", + "Translation module can be used to translate text from one language to another. You can use this module to translate input text before it is processed by the LLM module, or to translate the output generated by the LLM module. The translation module uses the SAP Document Translation service to perform the translation." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 11, + "id": "74b895a3", + "metadata": {}, + "outputs": [], + "source": [ + "from gen_ai_hub.orchestration_v2.models.translation import TranslationModuleConfig, SAPDocumentTranslation, TranslationConfig\n", + "\n", + "translation_config = TranslationModuleConfig(\n", + " input=SAPDocumentTranslation(\n", + " config=TranslationConfig(\n", + " source_language=\"de-DE\",\n", + " target_language=\"en-US\"\n", + " )\n", + " ),\n", + " output=SAPDocumentTranslation(\n", + " config=TranslationConfig(\n", + " source_language=\"en-US\",\n", + " target_language=\"de-DE\"\n", + " )\n", + " )\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "c270b13d", + "metadata": {}, + "source": [ + "# Step 3: Create the Orchestration Configuration\n", + "\n", + "The OrchestrationConfig class is used to create a configuration that integrates various components, such as templates and llm models, into a unified orchestration setup. This configuration specifies how these components work together to achieve the desired workflow." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 12, + "id": "514b1d4a", + "metadata": {}, + "outputs": [], + "source": [ + "from gen_ai_hub.orchestration_v2.models.template import Template, PromptTemplatingModuleConfig\n", + "from gen_ai_hub.orchestration_v2.models.config import ModuleConfig, OrchestrationConfig\n", + "\n", + "prompt_template = PromptTemplatingModuleConfig(prompt=template,\n", + " model=llm)\n", + "\n", + "\n", + "module_config = ModuleConfig(prompt_templating=prompt_template, filtering=content_filter_config, masking=data_masking_config,\n", + " translation= translation_config)\n", + "\n", + "config = OrchestrationConfig(modules=module_config)\n" + ] + }, + { + "cell_type": "markdown", + "id": "dd79ec07", + "metadata": {}, + "source": [ + "# Step 4: Run the Orchestration Request\n", + "\n", + "The OrchestrationService class is used to interact with the orchestration service by providing a configuration and invoking its operations. This service handles the execution of workflows defined by the provided configuration and processes inputs accordingly." 
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 13,
+   "id": "9e9ac4fa",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from gen_ai_hub.orchestration_v2.service import OrchestrationService\n",
+    "\n",
+    "orchestration_service = OrchestrationService()\n",
+    "\n",
+    "# Run orchestration with the provided input (the support request content)\n",
+    "result = orchestration_service.run(config=config, placeholder_values={\"support_text\": support_request})\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 14,
+   "id": "e8c093da",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Negativ – Die Nachricht vermittelt Frustration und Besorgnis aufgrund einer verspäteten Lieferung und fordert einen sofortigen Status an.\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Extract the response content\n",
+    "response = result.final_result.choices[0].message.content\n",
+    "print(response)"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.12.4"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/ai-core-orchestration-consumption-opt-v2.md b/tutorials/ai-core-orchestration-consumption-opt-v2/ai-core-orchestration-consumption-opt-v2.md
new file mode 100644
index 0000000000..8afb6742d2
--- /dev/null
+++ b/tutorials/ai-core-orchestration-consumption-opt-v2/ai-core-orchestration-consumption-opt-v2.md
@@ -0,0 +1,940 @@
+---
+parser: v2
+auto_validation: true
+time: 45
+primary_tag: software-product>sap-ai-core
+tags: [ tutorial>beginner, topic>artificial-intelligence, topic>machine-learning, software-product>sap-ai-core ]
+author_name: Smita Naik
+author_profile: https://github.com/I321506
+---
+
+# Leveraging Orchestration (V2) Capabilities to Enhance Responses
+In this tutorial, we will explore optional orchestration service V2 capabilities available in the Gen AI Hub, such as data masking, translation, and content filtering.
+
+## You will learn
+- How to infer GenAI models using orchestration service V2 together with the data masking, translation, and content filtering features
+
+## Prerequisites
+1. **BTP Account**
+   Set up your SAP Business Technology Platform (BTP) account.
+   [Create a BTP Account](https://developers.sap.com/group.btp-setup.html)
+2. **For SAP Developers or Employees**
+   Internal SAP stakeholders should refer to the following documentation: [How to create BTP Account For Internal SAP Employee](https://me.sap.com/notes/3493139), [SAP AI Core Internal Documentation](https://help.sap.com/docs/sap-ai-core)
+3. **For External Developers, Customers, or Partners**
+   Follow this tutorial to set up your environment and entitlements: [External Developer Setup Tutorial](https://developers.sap.com/tutorials/btp-cockpit-entitlements.html), [SAP AI Core External Documentation](https://help.sap.com/docs/sap-ai-core?version=CLOUD)
+4. **Create BTP Instance and Service Key for SAP AI Core**
+   Follow the steps to create an instance and generate a service key for SAP AI Core, making sure to use the service plan **extended**:
+   [Create Service Key and Instance](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/create-service-key?version=CLOUD)
+5. **AI Core Setup Guide**
+   Step-by-step guide to set up and get started with SAP AI Core:
+   [AI Core Setup Tutorial](https://developers.sap.com/tutorials/ai-core-genaihub-provisioning.html)
+6. An **Extended** SAP AI Core service plan is required, as the Generative AI Hub is not available in the Free or Standard tiers. 
For more details, refer to
+[SAP AI Core Service Plans](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/service-plans?version=CLOUD)
+7. **AI Launchpad Setup Guide**
+   Step-by-step guide to set up AI Launchpad:
+   [AI Launchpad Tutorial](https://developers.sap.com/tutorials/ai-launchpad-provisioning.html)
+8. **Orchestration Deployment**:
+   Refer to the tutorial on [the basic consumption of GenAI models using orchestration](https://developers.sap.com/tutorials/ai-core-orchestration-consumption.html) and ensure at least one orchestration deployment is ready to be consumed during this process.
+9. **Basic Knowledge**:
+   Familiarity with the orchestration workflow is recommended.
+
+
+### Pre-Read
+
+This tutorial builds on the foundational orchestration concepts introduced in the [beginner's tutorial](https://developers.sap.com/tutorials/ai-core-orchestration-consumption.html) and focuses on enhancing GenAI responses using the orchestration service V2 modules **data masking**, **translation**, and **content filtering**.
+
+Previously, in the [beginner's tutorial](https://developers.sap.com/tutorials/ai-core-orchestration-consumption.html), we used a **resume processing use case** to illustrate how to create an orchestration workflow that consumes models through the harmonized API. In this tutorial, we use a sentiment analysis use case to demonstrate how the optional orchestration service V2 modules **Data Masking**, **Translation**, and **Content Filtering** can be applied to protect sensitive information, translate multilingual support requests, and filter out undesirable or non-compliant content, thereby enhancing the quality, safety, and compliance of generative AI outputs.
+
+**Data masking** in SAP AI Core allows you to anonymize or pseudonymize personal or confidential data before sending it to the generative AI model. 
+🔗 [Learn more about Data Masking in SAP AI Core](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/data-masking?version=CLOUD)
+
+**Translation** in SAP GenAI Orchestration enables automatic language conversion of inputs and outputs during LLM processing.
+🔗 [Learn more about Translation in SAP AI Core](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/input-translation?version=CLOUD)
+
+**Content filtering** helps identify and block inappropriate, offensive, or non-compliant input and output content within an orchestration workflow.
+🔗 [Learn more about Content Filtering in SAP AI Core](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/input-filtering?version=CLOUD)
+
+In this tutorial, we specifically focus on **data masking**, **translation**, and **content filtering**. Other orchestration modules, such as **grounding**, are also available in SAP AI Core and are covered in separate tutorials.
+
+You will learn how to:
+
+- Integrate data masking within the orchestration flow to safeguard personal or confidential information.
+- Apply content filtering to identify and restrict inappropriate or non-compliant responses.
+- Use relevant SAP AI Core features and configurations to support these capabilities.
+
+By the end of this tutorial, you will:
+
+ * Understand how to design a secure and controlled orchestration pipeline suitable for **enterprise-grade GenAI applications**.
+
+ * Know how to implement the solution using **SAP AI Launchpad**, the **Python SDK**, **Java**, **JavaScript**, and **Bruno**.
+
+### Accessing Orchestration Capabilities
+
+**In this tutorial**, we will build upon the orchestration framework introduced in the [beginner's tutorial](https://developers.sap.com/tutorials/ai-core-orchestration-consumption.html). The focus will shift from basic orchestration to leveraging optional modules that enhance data privacy and refine response quality. 
These enhancements include: + + **Data Masking** : Hiding sensitive information like phone numbers, organizational details, or personal identifiers. + + **Content Filtering** : Screening for categories such as hate speech, self-harm, explicit content, and violence to ensure safe and relevant responses. + + **Translation** : Automatically converts input and/or output text between source and target languages to support multilingual processing. + +- Here, we use a **sentiment analysis** use case, where orchestration is enhanced by incorporating data masking, translation or content filtering. These additions help improve data privacy, security, and response quality. + +[OPTION BEGIN [AI Launchpad]] + +**Access the Generative AI Hub:** +- Navigate to the resource group where your orchestration has been deployed. + +- Go to Generative AI Hub. + +- Select Orchestration and click on create. + +![img](img/image_ail_orch.png) + +![img](img/image_ail_orch1.png) + + +[OPTION END] + +[OPTION BEGIN [Python SDK]] + +**NOTE** : If you are continuing with the same notebook from the previous tutorial, skip steps 1 and 2. Otherwise, create a new notebook using the already deployed orchestration URL to access the Harmonized API. + +- The [support-request.txt](img/support-request.txt) file, containing the support request content, must be added to the working directory. Use the following code to load the file content: + + +```python +from gen_ai_hub.orchestration_v2.utils import load_text_file +# Load the support request file content +support_request_path = "support-request.txt"  # Specify the correct path to the file +support_request = load_text_file(support_request_path) +# Print the content to verify it has been loaded +print(support_request) +``` + +[OPTION END] + +[OPTION BEGIN [JavaScript SDK]] + +**NOTE** : If you are continuing with the same project from the previous tutorial, skip steps 1 and 2. 
Otherwise, create a new project using the already deployed orchestration URL to access the Harmonized API.
+
+For more information, refer to the official [documentation](https://sap.github.io/ai-sdk/docs/js/orchestration/chat-completion) of the [`@sap-ai-sdk/orchestration`](https://github.com/SAP/ai-sdk-js/tree/main/packages/orchestration) package.
+
+- The [support-request.txt](img/support-request.txt) file, containing the support request content, must be added to the working directory. Use the following code to load the file content:
+
+
+```javascript
+const txtContent = await Deno.readTextFile('./support-request.txt');
+console.log(txtContent);
+```
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+
+**Bruno Setup** : If you have already completed the environment setup, configuration, and deployment as described in the [beginner's tutorial](https://developers.sap.com/tutorials/ai-core-orchestration-consumption.html), you can proceed directly to the Data Masking Configuration. If you're new to this, please follow the steps in the [tutorial](https://developers.sap.com/tutorials/ai-core-orchestration-consumption.html) to set up your environment, configure, and deploy the orchestration before proceeding with the modules.
+
+[OPTION END]
+
+
+### Template Configuration
+
+The templating module is a mandatory step in orchestration. It allows you to define dynamic inputs using placeholders, construct structured prompts, and generate a final query that will be passed to the model configuration module.
+
+In this step, we create a template that defines how the sentiment analysis prompt will be structured using message components:
+
+• `system`: Defines assistant behavior and task.
+
+• `user`: Provides the support request input.
+
+[OPTION BEGIN [AI Launchpad]]
+
+- In the **Templating** section, locate the **message** icon with three tabs: **User, Assistant, and System**.
+
+- Click on the **User** tab. 
Enter the following details:
+
+```PROMPT
+Please analyze the sentiment of the following support request: {{ ?support_text }}
+```
+**Variable Definitions**:
+
+- The variable **support_text** will be created.
+
+- Enter default values based on your use case. For this sentiment analysis example, use the following support text:
+
+```TEXT
+"Subject: Bestellung #1234567890 Verspätet - John Johnson Nachricht: Halle, ich schreibe ihnen um mich nach dem Status meiner Bestellung mit der Bestellnr. +1234567890 zu erkundigen. Die Lieferung war eigentlich für gestern geplant, ist bisher jedoch nicht erfolgt. Mein Name ist John Johnson und meine Lieferadresse lautet 125 Cole Meadows Drive Palo Alto, California 94301. Bitte lassen Sie mich per Telefon unter der Nummer +1 505802 2172 wissen, wann ich mit meiner Lieferung rechnen kann. Danke!"
+```
+
+![img](img/image_ail_019.png)
+
+- After entering the details, click on **Add**.
+
+- A new message box will appear. Proceed to configure the **System** tab.
+
+- In the **System** tab, enter the following details:
+
+```PROMPT
+You are a customer support assistant. Analyze the sentiment of the user request provided and return whether the sentiment is positive, neutral, or negative. Also provide a one-line justification for your classification.
+```
+![img](img/image005.png)
+
+[OPTION END]
+
+[OPTION BEGIN [Python SDK]]
+
+Use the following code to create the template:
+
+```python
+from gen_ai_hub.orchestration_v2.models.message import SystemMessage, UserMessage
+from gen_ai_hub.orchestration_v2.models.template import Template
+
+# Define the sentiment analysis template
+template = Template(
+    template=[
+        SystemMessage(content="""You are a customer support assistant. Analyze the sentiment of the user request provided and return whether the sentiment is positive, neutral, or negative. 
Also provide a one-line justification for your classification."""
+        ),
+        UserMessage(content="Please analyze the sentiment of the following support request: {{ ?support_text }}"
+        ),
+    ],
+    defaults={"support_text": "User is unhappy with the latest update and facing usability issues."}
+)
+```
+- Select the model to be used for this orchestration:
+
+```python
+from gen_ai_hub.orchestration_v2.models.llm_model_details import LLMModelDetails
+
+llm = LLMModelDetails(name="gpt-5-nano", parameters={"max_completion_tokens": 1028})
+```
+[OPTION END]
+
+[OPTION BEGIN [JavaScript SDK]]
+
+```javascript
+import { OrchestrationClient } from '@sap-ai-sdk/orchestration';
+
+const orchestrationClient = new OrchestrationClient({
+  promptTemplating: {
+    model: {
+      name: 'gpt-4o',
+      params: {
+        max_completion_tokens: 200,
+        temperature: 0
+      }
+    },
+    prompt: {
+      template: [
+        {
+          role: 'system',
+          content:
+            'You are a customer support assistant. Analyze the sentiment of the user request provided and return whether the sentiment is positive, neutral, or negative. Also provide a one-line justification.'
+        },
+        {
+          role: 'user',
+          content:
+            'Please analyze the sentiment of the following support request: {{ ?support_text }}'
+        }
+      ]
+    }
+  }
+});
+
+const response = await orchestrationClient.chatCompletion({
+  placeholderValues: {
+    support_text: 'User is unhappy with the latest update and facing usability issues.'
+  }
+});
+```
+
+Orchestration provides direct access to models without requiring separate deployments, so you can use any available model.
+
+[OPTION END]
+
+### Setting Up Data Masking Parameters
+
+The **Data Masking** Module ensures data privacy by anonymizing or pseudonymizing sensitive information before it is processed.
+
+ **Anonymization** : Irreversibly replaces personal identifiers with placeholders (e.g., MASKED_ENTITY).
+
+ **Pseudonymization** : Substitutes identifiers with reversible tokens (e.g., MASKED_ENTITY_ID). 
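
To make the difference between the two methods concrete, here is a minimal, self-contained Python sketch. This is a toy illustration only, not the SAP Data Privacy Integration service: the phone-number regex and the `MASKED_PHONE` placeholder names are assumptions chosen for the example.

```python
import re

# Toy pattern for illustration only: the real masking service detects many
# entity types (person, address, email, phone, ...). Here we mask phone numbers.
PHONE_RE = re.compile(r"\+?\d[\d \-]{7,}\d")

def anonymize(text: str) -> str:
    """Irreversible: every match collapses to the same placeholder."""
    return PHONE_RE.sub("MASKED_PHONE", text)

def pseudonymize(text: str):
    """Reversible: each distinct value gets a numbered token, and the
    value-to-token mapping is kept so the text can later be restored."""
    mapping = {}
    def repl(match):
        value = match.group(0)
        return mapping.setdefault(value, f"MASKED_PHONE_{len(mapping) + 1}")
    return PHONE_RE.sub(repl, text), mapping

sample = "Bitte rufen Sie mich unter +1 505802 2172 an."
print(anonymize(sample))   # -> Bitte rufen Sie mich unter MASKED_PHONE an.
masked, mapping = pseudonymize(sample)
print(masked)              # -> Bitte rufen Sie mich unter MASKED_PHONE_1 an.
print(mapping)             # -> {'+1 505802 2172': 'MASKED_PHONE_1'}
```

In the real module the behavior is analogous: anonymization discards the originals entirely, while pseudonymization retains a mapping so that masked entities appearing in the model's answer can be unmasked before the response is returned.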
+ +[OPTION BEGIN [AI Launchpad]] + +- Navigate to the **Data Masking** section (see the screenshot below). + + In this tutorial, we have chosen 'anonymize' for enhanced privacy. Depending on your requirements, you can opt for either approach. + +- Check the boxes for the following fields that you want to mask: + - Email Address + - Organization Name + - Person's Name + - Person's Phone Number + - Username & Password + +- Ensure all 5 boxes are checked (refer to the screenshot for reference) + +![img](img/image009.png) + +[OPTION END] + +[OPTION BEGIN [Python SDK]] + +For this tutorial, we use anonymization: + +```python +from gen_ai_hub.orchestration_v2.models.data_masking import MaskingModuleConfig, MaskingProviderConfig, MaskingMethod, DPIStandardEntity, ProfileEntity + +data_masking_config = MaskingModuleConfig( + masking_providers=[MaskingProviderConfig( + method=MaskingMethod.ANONYMIZATION, + entities=[ + DPIStandardEntity(type=ProfileEntity.ADDRESS), + DPIStandardEntity(type=ProfileEntity.EMAIL), + DPIStandardEntity(type=ProfileEntity.PHONE), + DPIStandardEntity(type=ProfileEntity.PERSON), + ] + )], + +) +``` + +**NOTE:** We are anonymizing name, phone number, address (location), and email to protect user privacy in the support text. + +[OPTION END] + +[OPTION BEGIN [JavaScript SDK]] + +For this tutorial, we use anonymization: + +```javascript +import { buildDpiMaskingProvider } from '@sap-ai-sdk/orchestration'; + +const maskingProvider = buildDpiMaskingProvider({ + method: 'anonymization', + entities: [ + 'profile-person', + 'profile-email', + 'profile-phone', + { + type: 'custom', + // Example: customer / ticket reference IDs + regex: '\\b(TICKET|CASE)-[0-9]{4,}\\b', + replacement_strategy: { + method: 'constant', + value: 'MASKED_REFERENCE_ID' + } + } + ], + allowlist: ['SAP'] // Optional +}); +``` + +**NOTE** : Here, we apply data masking to customer support messages in German, masking sensitive user data like name, phone, and email. 
+ +[OPTION END] + +[OPTION BEGIN [Bruno]] + +- Before proceeding with the data masking configuration, ensure the following: + - You have completed the Bruno collection and setup as per the [Tutorial](https://developers.sap.com/tutorials/ai-core-orchestration-consumption.html). + - The deployment for the orchestration is already done and configured correctly. + +**Note**: If you have already completed these setup steps, you can proceed directly to the data masking configuration. If not, please follow the steps in the [Tutorial](https://developers.sap.com/tutorials/ai-core-orchestration-consumption.html) to complete the environment setup and deployment. + +For this tutorial, we use anonymization: + +- Navigate to the **'orchestration'** section. + +- In the list of requests, select the **completion** option to open the request designed for consuming the deployed model. + +- Expand the Body section of the request. Replace the current JSON in the Body with the following updated JSON, which includes the data masking configuration + +```JSON +"masking": { + "masking_providers": [ + { + "type": "sap_data_privacy_integration", + "method": "anonymization", + "entities": [ + { "type": "profile-email" }, + { "type": "profile-person" }, + { "type": "profile-phone" }, + { "type": "profile-org" }, + { "type": "profile-location" } + ] + } + ] + } +``` + +- After replacing the JSON, click Send to execute the request. + +- Upon sending the request, the response will return the masked result, where sensitive information like email, phone numbers, and other personal identifiers are anonymized. For reference, you can check the screenshot provided showing how the masked result will appear. 
+ +![img](img/data_masking.png) + +**NOTE:** This will mask sensitive fields from support queries — even if written in non-English languages like German + +[OPTION END] + +### Translation + +The Translation Module enables multilingual processing by translating content sent to and received from the generative AI model. This is especially useful when the user input or model output is not in the default language expected by the LLM. + + - The module uses SAP’s Document Translation service. + + - The target language is mandatory. + + - If source language is not specified, it will be automatically detected. + +[OPTION BEGIN [AI Launchpad]] + +Navigate to the Translation section in the orchestration editor. + +Specify the source and target languages for both: + + - Input Translation: before sending data to the model. + + - Output Translation: after receiving the model's response. + +For example: + + - Input Translation: German ➝ English + + - Output Translation: English ➝ German + +Refer to the screenshots below for guidance: + +![img](img/image004.png) + +![img](img/image025.png) + +[OPTION END] + +[OPTION BEGIN [Python SDK]] + +```Python +from gen_ai_hub.orchestration_v2.models.translation import TranslationModuleConfig, SAPDocumentTranslation, TranslationConfig + +translation_config = TranslationModuleConfig( + input=SAPDocumentTranslation( + config=TranslationConfig( + source_language="de-DE", + target_language="en-US" + ) + ), + output=SAPDocumentTranslation( + config=TranslationConfig( + source_language="en-US", + target_language="de-DE" + ) + ) +) +``` + +[OPTION END] + +[OPTION BEGIN [JavaScript SDK]] + +Use the buildTranslationConfig helper to configure translation. 
+ +``` javascript +import { buildTranslationConfig } from '@sap-ai-sdk/orchestration'; + +const inputTranslation = buildTranslationConfig('input', { + sourceLanguage: 'de-DE', + targetLanguage: 'en-US' +}); + +const outputTranslation = buildTranslationConfig('output', { + sourceLanguage: 'en-US', + targetLanguage: 'de-DE' +}); + +const translationConfig = { + input: inputTranslation, + output: outputTranslation +}; + +console.log('✅ Translation configuration defined successfully'); +``` + +[OPTION END] + +[OPTION BEGIN [Bruno]] + +To test translation in Bruno: + +1. Open the request in the **05_orchestration** collection. + +2. Add both input and output translation configurations under module_configurations. + + +``` JSON +"translation": { + "input": { + "type": "sap_document_translation", + "config": { + "source_language": "de-DE", + "target_language": "en-US" + } + }, + "output": { + "type": "sap_document_translation", + "config": { + "source_language": "en-US", + "target_language": "de-DE" + } + } + } +``` + 3. Click Send. + + 4. The response will show the model output in the target language, with the input also translated before being passed to the LLM. + +![img](img/translation.png) + +[OPTION END] + +### Defining Content Filtering Rules + +The **Content Filtering** Module allows screening of both input and output content to remove inappropriate or unwanted elements such as hate speech or violent content. This ensures that sentiment analysis is performed on safe and relevant inputs, and the responses generated are also safe for consumption. + +[OPTION BEGIN [AI Launchpad]] + +Navigate to the **Input Filtering** section. + +- Adjust the filtering levels for sentiment analysis inputs, based on your requirements: + + - Hate + + - Self-Harm + + - Sexual Content + + - Violence + +- This step is optional but helps sanitize user-generated content (e.g., tweets, reviews, comments) before performing sentiment analysis. 
+ +![img](img/image013.png) + +![img](img/image026.png) + +Navigate to the Model Configuration section and: + +- Select your Deployment ID + +- Choose an LLM appropriate for text classification tasks (e.g., GPT-4 or Claude) + +**NOTE** : Ensure that your orchestration deployment is in Running Status and ready to be consumed during this process. + + +![img](img/image015.png) + +![img](img/image024.png) + +- Click on the **Output Filtering** section. + +- Adjust filtering levels for content safety criteria, similar to the **Input Filtering** configuration: + + - Hate + + - Self-Harm + + - Sexual Content + + - Violence + +- This step is also optional. + +![img](img/image019.png) + +[OPTION END] + +[OPTION BEGIN [Python SDK]] + +```python +from gen_ai_hub.orchestration_v2.models.azure_content_filter import AzureContentFilter, AzureThreshold +from gen_ai_hub.orchestration_v2.models.llama_guard_3_filter import LlamaGuard38bFilter +from gen_ai_hub.orchestration_v2.models.content_filtering import FilteringModuleConfig, InputFiltering, OutputFiltering +from gen_ai_hub.orchestration_v2.models.content_filter import ContentFilter, ContentFilterProvider + +content_filter_config = FilteringModuleConfig( + input=InputFiltering(filters=[ + ContentFilter(type=ContentFilterProvider.AZURE, config=AzureContentFilter(hate=AzureThreshold.ALLOW_SAFE, + violence=AzureThreshold.ALLOW_SAFE, + self_harm=AzureThreshold.ALLOW_SAFE, + sexual=AzureThreshold.ALLOW_SAFE)), + ContentFilter(type=ContentFilterProvider.LLAMA_GUARD_3_8B, config=LlamaGuard38bFilter(hate=True)) + ]), + output=OutputFiltering(filters=[ + ContentFilter(type=ContentFilterProvider.AZURE, config=AzureContentFilter(hate=AzureThreshold.ALLOW_SAFE, + violence=AzureThreshold.ALLOW_SAFE, + self_harm=AzureThreshold.ALLOW_SAFE, + sexual=AzureThreshold.ALLOW_SAFE)), + ContentFilter(type=ContentFilterProvider.LLAMA_GUARD_3_8B, config=LlamaGuard38bFilter(hate=True)) + ]) + +) +``` + +**NOTE** : Adjust thresholds for hate, sexual, 
self-harm, and violence categories based on your use case.
+
+
+- Then combine the template, model, and modules into an orchestration configuration:
+
+```python
+from gen_ai_hub.orchestration_v2.models.template import Template, PromptTemplatingModuleConfig
+from gen_ai_hub.orchestration_v2.models.config import ModuleConfig, OrchestrationConfig
+
+prompt_template = PromptTemplatingModuleConfig(prompt=template, model=llm)
+
+module_config = ModuleConfig(prompt_templating=prompt_template,
+                             filtering=content_filter_config,
+                             masking=data_masking_config,
+                             translation=translation_config)
+
+config = OrchestrationConfig(modules=module_config)
+```
+**NOTE** : Ensure that your orchestration deployment is in Running Status and ready to be consumed during this process.
+
+[OPTION END]
+
+[OPTION BEGIN [JavaScript SDK]]
+
+```javascript
+import { buildAzureContentSafetyFilter } from '@sap-ai-sdk/orchestration';
+
+// Input filter: protects what users send (support tickets)
+const inputFilter = buildAzureContentSafetyFilter('input', {
+  hate: 'ALLOW_SAFE_LOW',
+  self_harm: 'ALLOW_SAFE_LOW',
+  sexual: 'ALLOW_SAFE_LOW',
+  violence: 'ALLOW_SAFE_LOW',
+  prompt_shield: true
+});
+
+// Output filter: protects what the model returns
+const outputFilter = buildAzureContentSafetyFilter('output', {
+  hate: 'ALLOW_SAFE',
+  self_harm: 'ALLOW_SAFE',
+  sexual: 'ALLOW_SAFE',
+  violence: 'ALLOW_SAFE'
+});
+```
+
+**NOTE** : Adjust thresholds for hate, sexual, self-harm, and violence categories based on your use case.
+ +- Then Combine the template, models, and modules into orchestration configurations: + +```javascript +import { OrchestrationClient } from '@sap-ai-sdk/orchestration'; + +const orchestrationClient = new OrchestrationClient({ + resourceGroup: 'grounding', + + // Sentiment analysis prompt + promptTemplating: { + model: { + name: 'gpt-4o', + params: { + max_completion_tokens: 200, + temperature: 0 + } + }, + prompt: { + template: [ + { + role: 'system', + content: + 'You are a customer support assistant. Analyze the sentiment of the user request provided and return whether the sentiment is positive, neutral, or negative. Also provide a one-line justification.' + }, + { + role: 'user', + content: + 'Please analyze the sentiment of the following support request: {{ ?support_text }}' + } + ] + } + }, + + translation: translationConfig, + + masking: { + masking_providers: [maskingProvider] + }, + + filtering: { + input: { + filters: [inputFilter] + }, + output: { + filters: [outputFilter] + } + } +}); +``` + +Multiple content filters can be applied for both input and output. In this tutorial, we use Azure Content Safety Filter, but you can choose from the available providers based on your use case. For more information, refer to the official [documentation](https://sap.github.io/ai-sdk/docs/js/orchestration/chat-completion) of the [`@sap-ai-sdk/orchestration`](https://github.com/SAP/ai-sdk-js/tree/main/packages/orchestration) package. + +The `filtering` configuration created in this step will be used in the next step to initialize an `OrchestrationClient` and consume the orchestration service. 
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+
+Update your JSON body for the **05_orchestration** section:
+
+```JSON
+"filtering": {
+    "input": {
+      "filters": [
+        {
+          "type": "azure_content_safety",
+          "config": {
+            "hate": 4,
+            "self_harm": 4,
+            "sexual": 4,
+            "violence": 4
+          }
+        }
+      ]
+    },
+    "output": {
+      "filters": [
+        {
+          "type": "azure_content_safety",
+          "config": {
+            "hate": 0,
+            "self_harm": 0,
+            "sexual": 0,
+            "violence": 0
+          }
+        }
+      ]
+    }
+  }
+```
+
+**NOTE** : Adjust thresholds for hate, sexual, self-harm, and violence categories based on your use case.
+
+![img](img/image028.png)
+
+[OPTION END]
+
+### Executing the Orchestration Workflow
+
+This step runs the orchestration pipeline for each selected LLM model using the provided input text for sentiment analysis. It captures and stores the model-generated responses, enabling comparison of output quality across different models.
+
+[OPTION BEGIN [AI Launchpad]]
+
+- After configuring the filtering and model settings, click the Test icon and run the orchestration.
+
+- Check the Result section for the response.
+
+![img](img/image023.png)
+
+- You can save the orchestration for future use, as shown in the image below.
+
+![img](img/image_ail_sav.png)
+
+[OPTION END]
+
+[OPTION BEGIN [Python SDK]]
+
+Finally, execute the orchestration and collect the results:
+
+```python
+from gen_ai_hub.orchestration_v2.service import OrchestrationService
+
+orchestration_service = OrchestrationService()
+
+# Run orchestration with the provided input (the support request text)
+result = orchestration_service.run(config=config, placeholder_values={"support_text": support_request})
+
+# Extract the response content
+response = result.final_result.choices[0].message.content
+print(response)
+```
+
+- A response is generated containing the output of the configured LLM.
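The accessor chain used above (`result.final_result.choices[0].message.content`) mirrors the nesting of the orchestration response. The stdlib sketch below shows how the content would be pulled out of such a payload by hand; the example payload is hypothetical and its field names are assumed to match the SDK's object model, not taken from the service specification.

```python
import json

# Hypothetical raw response payload. The nesting mirrors the SDK accessors
# used above (final_result -> choices[0] -> message -> content); treat the
# exact field names as an assumption, not a service contract.
raw = json.dumps({
    "request_id": "00000000-0000-0000-0000-000000000000",
    "final_result": {
        "choices": [
            {
                "index": 0,
                "message": {
                    "role": "assistant",
                    "content": "Sentiment: negative. The customer reports a delayed delivery."
                }
            }
        ]
    }
})

payload = json.loads(raw)
content = payload["final_result"]["choices"][0]["message"]["content"]
print(content)  # Sentiment: negative. The customer reports a delayed delivery.
```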
+
+![img](img/image_py_resp.png)
+
+[OPTION END]
+
+[OPTION BEGIN [JavaScript SDK]]
+
+```javascript
+try {
+  const response = await orchestrationClient.chatCompletion({
+    placeholderValues: {
+      support_text: txtContent
+    }
+  });
+
+  console.log(response.getContent());
+} catch (error) {
+  console.error('❌ Error during support sentiment analysis');
+  console.error(error.message);
+  console.error(error.cause?.response?.data);
+}
+```
+
+- A response is generated containing the output of the configured LLM.
+
+**Note**: Ensure that your orchestration deployment is in Running Status and ready to be consumed during this process.
+
+![img](img/image_js_resp.png)
+
+![img](img/image_js_v2.png)
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+
+- Click Send to execute the request with the updated configuration. Validate the returned response. It should contain:
+
+  - Masked Results: Sensitive phrases will be anonymized.
+
+  - Translation: Input and output translation for sentiment analysis.
+
+  - Filtered Content: Unsafe or biased sentiment analysis output will be flagged.
+
+By following these steps, you can successfully mask sensitive data and apply content filtering while consuming the deployed model.
+
+```JSON
+{
+"config": {
+  "modules": {
+    "prompt_templating": {
+      "prompt": {
+        "template": [
+          {
+            "role": "assistant",
+            "content": "Support Issue: '''{{?support-issue}}'''\n Context Information: '''{{?issue-context}}'''"
+          },
+          {
+            "role": "system",
+            "content": "You are a helpful support assistant. Your task is to help answer a given support issue. \n You proceed as follows: \n First, check if the provided context information answers the issue. Based on the result do one of the following: \n a) If yes, provide an answer based on the provided context information in form of an email and then finish. \n b) If no, only if you cannot answer the issue you summarize the issue for the human support team. 
Ignore the context information in this case and provide your answer only based on the support issue. Answer in the following format:\n - Sentiment: [your sentiment analysis] \n - Key Theme: [theme of the support issue] \n - Contact: [any contact information available in the issue]" + } + ] + }, + "model": { + "name": "gpt-4o", + "params": { + "max_completion_tokens": 300, + "temperature": 0.1, + "frequency_penalty": 0, + "presence_penalty": 0 + } + } + }, + "filtering": { + "input": { + "filters": [ + { + "type": "azure_content_safety", + "config": { + "hate": 4, + "self_harm": 4, + "sexual": 4, + "violence": 4 + } + } + ] + }, + "output": { + "filters": [ + { + "type": "azure_content_safety", + "config": { + "hate": 0, + "self_harm": 0, + "sexual": 0, + "violence": 0 + } + } + ] + } + }, + "masking": { + "providers": [ + { + "type": "sap_data_privacy_integration", + "method": "anonymization", + "entities": [ + { + "type": "profile-email" + }, + { + "type": "profile-person" + }, + { + "type": "profile-phone" + }, + { + "type": "profile-address" + } + ] + } + ] + }, + "grounding": { + "type": "document_grounding_service", + "config": { + "filters": [ + { + "id": "helpRepo", + "data_repositories": [ + "*" + ], + "search_config": { + "max_chunk_count": 3 + }, + "data_repository_type": "help.sap.com" + } + ], + "placeholders": { + "input": [ + "support-issue" + ], + "output": "issue-context" + } + } + }, + "translation": { + "input": { + "type": "sap_document_translation", + "config": { + "source_language": "de-DE", + "target_language": "en-US" + } + }, + "output": { + "type": "sap_document_translation", + "config": { + "source_language": "en-US", + "target_language": "de-DE" + } + } + } + } + }, + "placeholder_values": { + "support-issue": "Betreff: Unterstützung benötigt \nNachricht: \nHallo, ich benötige Unterstützung mit SAP Signavio. Insbesondere möchte ich Benachrichtigungen im SAP Signavio Process Manager konfigurieren. 
Bitte kontaktieren Sie mich unter Jane.Janeson@gmx.net."
+  }
+}
+```
+
+![img](img/image027.png)
+
+[OPTION END]
+
+### Conclusion
+
+Once the orchestration completes, you can observe that the output is now more refined, with sensitive information masked and inappropriate content filtered. This demonstrates the power of modules like data masking and content filtering to enhance privacy and ensure response quality.
+
+While this tutorial used a sentiment analysis use case, the same principles can be applied to other use cases. You can customize the Data Masking and Content Filtering settings based on your specific requirements to handle sensitive or categorized data effectively.
+
+By incorporating these optional modules, you can tailor your responses to meet organizational data security policies and ensure safe, reliable responses for diverse scenarios.
diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/data_masking.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/data_masking.png
new file mode 100644
index 0000000000..83e4e3a265
Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-opt-v2/img/data_masking.png differ
diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/image001.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image001.png
new file mode 100644
index 0000000000..e4033d9e7e
Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image001.png differ
diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/image003.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image003.png
new file mode 100644
index 0000000000..2ffd50b5f8
Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image003.png differ
diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/image004.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image004.png
new file mode 100644
index 0000000000..49ee1a0cd3
Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image004.png differ diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/image005.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image005.png new file mode 100644 index 0000000000..7931a791fa Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image005.png differ diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/image009.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image009.png new file mode 100644 index 0000000000..1d67ce7aab Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image009.png differ diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/image013.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image013.png new file mode 100644 index 0000000000..6c7a09f187 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image013.png differ diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/image015.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image015.png new file mode 100644 index 0000000000..6746d93103 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image015.png differ diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/image017.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image017.png new file mode 100644 index 0000000000..bdf9bc53c8 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image017.png differ diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/image018.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image018.png new file mode 100644 index 0000000000..98bf180234 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image018.png differ diff --git 
a/tutorials/ai-core-orchestration-consumption-opt-v2/img/image019.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image019.png new file mode 100644 index 0000000000..2b27cbf024 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image019.png differ diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/image023.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image023.png new file mode 100644 index 0000000000..c7993b5e89 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image023.png differ diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/image024.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image024.png new file mode 100644 index 0000000000..e038bd4192 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image024.png differ diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/image025.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image025.png new file mode 100644 index 0000000000..dd5c03d76f Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image025.png differ diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/image026.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image026.png new file mode 100644 index 0000000000..b5b454e132 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image026.png differ diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/image027.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image027.png new file mode 100644 index 0000000000..85c275b603 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image027.png differ diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/image028.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image028.png new file mode 100644 index 
0000000000..0d8ff4e859 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image028.png differ diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/image_ail_019.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image_ail_019.png new file mode 100644 index 0000000000..eee159cc5c Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image_ail_019.png differ diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/image_ail_orch.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image_ail_orch.png new file mode 100644 index 0000000000..a550a99998 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image_ail_orch.png differ diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/image_ail_orch1.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image_ail_orch1.png new file mode 100644 index 0000000000..997f2ebc7c Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image_ail_orch1.png differ diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/image_ail_sav.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image_ail_sav.png new file mode 100644 index 0000000000..96fc46a99a Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image_ail_sav.png differ diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/image_js_resp.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image_js_resp.png new file mode 100644 index 0000000000..312252dfbd Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image_js_resp.png differ diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/image_js_v2.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image_js_v2.png new file mode 100644 index 0000000000..7124aab0a7 Binary files /dev/null and 
b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image_js_v2.png differ diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/image_py_resp.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image_py_resp.png new file mode 100644 index 0000000000..84a4b30631 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-opt-v2/img/image_py_resp.png differ diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/support-request.txt b/tutorials/ai-core-orchestration-consumption-opt-v2/img/support-request.txt new file mode 100644 index 0000000000..a681f93699 --- /dev/null +++ b/tutorials/ai-core-orchestration-consumption-opt-v2/img/support-request.txt @@ -0,0 +1 @@ +"Subject: Bestellung #1234567890 Verspätet - John Johnson Nachricht: Halle, ich schreibe ihnen um mich nach dem Status meiner Bestellung mit der Bestellnr. +1234567890 zu erkundigen. Die Lieferung war eigentlich für gestern geplant, ist bisher jedoch nicht erfolgt. Mein Name ist John Johnson und meine Lieferadresse lautet 125 Cole Meadows Drive Palo Alto, California 94301. Bitte lassen Sie mich per Telefon unter der Nummer +1 505802 2172 wissen, wann ich mit meiner Lieferung rechnen kann. Danke!" 
\ No newline at end of file diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/img/translation.png b/tutorials/ai-core-orchestration-consumption-opt-v2/img/translation.png new file mode 100644 index 0000000000..d2c8ca1cab Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-opt-v2/img/translation.png differ diff --git a/tutorials/ai-core-orchestration-consumption-opt-v2/support-request.txt b/tutorials/ai-core-orchestration-consumption-opt-v2/support-request.txt new file mode 100644 index 0000000000..3a0c698a9a --- /dev/null +++ b/tutorials/ai-core-orchestration-consumption-opt-v2/support-request.txt @@ -0,0 +1 @@ +"Subject: Bestellung #1234567890 Verspätet - John Johnson Nachricht: Halle, ich schreibe ihnen um mich nach dem Status meiner Bestellung mit der Bestellnr. +1234567890 zu erkundigen. Die Lieferung war eigentlich für gestern geplant, ist bisher jedoch nicht erfolgt. Mein Name ist John Johnson und meine Lieferadresse lautet 125 Cole Meadows Drive Palo Alto, California 94301. Bitte lassen Sie mich per Telefon unter der Nummer +1 505802 2172 wissen, wann ich mit meiner Lieferung rechnen kann. Danke!" 
\ No newline at end of file diff --git a/tutorials/ai-core-orchestration-consumption-opt/ai-core-orchestration-consumption-opt.md b/tutorials/ai-core-orchestration-consumption-opt/ai-core-orchestration-consumption-opt.md index 947e517be7..ba27269fc9 100644 --- a/tutorials/ai-core-orchestration-consumption-opt/ai-core-orchestration-consumption-opt.md +++ b/tutorials/ai-core-orchestration-consumption-opt/ai-core-orchestration-consumption-opt.md @@ -663,6 +663,9 @@ Navigate to the Model Configuration section and: [OPTION BEGIN [Python SDK]] ```python +from gen_ai_hub.orchestration.models.content_filtering import ContentFiltering,InputFiltering, OutputFiltering +from gen_ai_hub.orchestration.models.azure_content_filter import AzureContentFilter, AzureThreshold +from gen_ai_hub.orchestration.models.llama_guard_3_filter import LlamaGuard38bFilter input_filter= AzureContentFilter(hate=AzureThreshold.ALLOW_SAFE, violence=AzureThreshold.ALLOW_SAFE, diff --git a/tutorials/ai-core-orchestration-consumption-v2/JavaScript_orchestration_service_Tutorial1.ipynb b/tutorials/ai-core-orchestration-consumption-v2/JavaScript_orchestration_service_Tutorial1.ipynb new file mode 100644 index 0000000000..ebdfff13dc --- /dev/null +++ b/tutorials/ai-core-orchestration-consumption-v2/JavaScript_orchestration_service_Tutorial1.ipynb @@ -0,0 +1,265 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Load environment variables\n", + "\n", + "In this step, we use the dotenv package to load environment variables from a .env file. This approach helps manage sensitive configuration details like API keys and service credentials without hardcoding them in the code.\n", + "\n", + "Key Points:\n", + "\n", + "dotenv: Automatically loads environment variables defined in a .env file into process.env.\n", + "\n", + "Access Environment Variables: The process.env object is used to access these variables in the application." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import dotenv from 'dotenv';\n", + "dotenv.config();\n", + " \n", + "console.log(process.env.AICORE_SERVICE_KEY); " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create a New Orchestration Configuration\n", + "In this step, we define a function to create an orchestration configuration using the ConfigurationApi from the SAP AI SDK. This configuration integrates various parameters needed for orchestration, such as the executable ID and scenario ID.\n", + "\n", + "Key Points:\n", + "\n", + "ConfigurationApi: Provides methods for interacting with the SAP AI SDK's configuration services.\n", + "\n", + "parameterBindings: Specifies the parameters used for orchestration." + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Configuration created\n" + ] + } + ], + "source": [ + "import { ConfigurationApi } from '@sap-ai-sdk/ai-api';\n", + "\n", + "const RESOURCE_GROUP = 'grounding'; // Please change to your desired resource group\n", + "\n", + "// Create orchestration configuration using ConfigurationApi\n", + "async function createOrchestrationConfiguration() {\n", + " try {\n", + " const response = await ConfigurationApi\n", + " .configurationCreate({\n", + " name: 'orchestration-config', // Choose a meaningful name\n", + " executableId: 'orchestration', // Orchestration executable ID\n", + " scenarioId: 'orchestration', // Orchestration scenario ID\n", + " }, {'AI-Resource-Group': RESOURCE_GROUP}).execute();\n", + "\n", + " return response;\n", + " } catch (error: any) {\n", + " // Handle API errors\n", + " console.error('Configuration creation failed:', error.stack);\n", + " }\n", + "}\n", + "\n", + "const configuration = await createOrchestrationConfiguration();\n", + "console.log(configuration?.message); // Print the 
configuration response message" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Deployment of orchestration\n", + "This step involves creating a deployment using the specified configuration and resource group. The deployment is handled via the DeploymentApi, which streamlines the process of activating the orchestration setup.\n", + "\n", + "Key Points:\n", + "\n", + "DeploymentApi: Used for initiating the deployment based on the given configuration.\n", + "\n", + "createDeployment Function: This function handles the API call to create the deployment." + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Deployment scheduled.\n" + ] + } + ], + "source": [ + "import { DeploymentApi } from '@sap-ai-sdk/ai-api'; \n", + "\n", + "// Create Orchestration deployment using DeploymentApi\n", + "async function createOrchestrationDeployment() { \n", + " // Extract the configuration ID from the result of the previous step \n", + " const configurationId = configuration.id;\n", + "\n", + " try { \n", + " const response = await DeploymentApi\n", + " .deploymentCreate(\n", + " { configurationId }, \n", + " { 'AI-Resource-Group': RESOURCE_GROUP }\n", + " ).execute(); \n", + "\n", + " return response;\n", + " } catch (error: any) { \n", + " console.error('Deployment creation failed:', error.stack);\n", + " } \n", + "} \n", + " \n", + "const deployment = await createOrchestrationDeployment();\n", + "console.log(deployment?.message) // Print the deployment creation response" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Basic Orchestration Pipeline" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [], + "source": [ + "const cvContent = await Deno.readTextFile('./cv.txt');" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Templating\n", + "\n", + 
"Explanation of Templating Code\n", + "\n", + "This code defines a template for an AI assistant using orchestration configuration. The `Template` object is set up with system and user messages to guide the assistant’s response behavior. \n", + "\n", + "Key Components:\n", + "- **SystemMessage**: Sets a predefined instruction for the AI assistant. This message typically includes the assistant's role and any specific guidelines it should follow.\n", + "- **UserMessage**: Represents the user's input and how it is structured in the conversation.\n", + " \n", + "In this revised prompt, only queries are passed to the assistant without any additional context. The AI is expected to respond based solely on the provided input.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Define the LLM \n", + "\n", + "The LLM class is used to configure and initialize a model for generating text based on specific parameters. In this example, we'll use the model to perform the content creation task.\n", + "\n", + "ℹ️Note that virtual deployment of the model is managed automatically by the Orchestration Service, so no additional deployment setup is required on your part." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Generate Responses \n", + "\n", + "This step outlines the process of generating responses for a set of queries using a defined model. The generateResponses function executes queries to gather AI-generated responses.\n", + "\n", + "Key Points:\n", + "\n", + "Query Execution: Uses OrchestrationClient to generate responses for each query." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "John Doe is a skilled data scientist with over three years of experience in data analysis, statistical modeling, and machine learning, aiming to utilize his proficiency in predictive modeling and data visualization to inform decision-making at prospective employers. He holds a Master's degree in Data Science from the University of California, Berkeley, and a Bachelor's degree in Computer Science from UCLA. John is proficient in several programming languages including Python, R, SQL, and Java, and is experienced with data analysis and visualization tools such as Pandas, NumPy, Matplotlib, Seaborn, and Tableau, alongside machine learning libraries like Scikit-learn, TensorFlow, and Keras. His professional experience includes working as a Data Scientist at DataCorp Inc., where he developed predictive models that increased ROI by 20% and visualized KPIs using Tableau to improve stakeholder decision-making. Previously, as a Data Analyst Intern at Analytics Solutions, he contributed to business growth by analyzing large datasets and assisting in the development of automated reporting tools. Among his notable projects, John conducted customer segmentation analysis using K-means clustering and achieved 85% accuracy in predictive stock price modeling through time series analysis. He holds certifications as a Certified Data Scientist by the Data Science Council of America and a Machine Learning Specialization from Coursera. John is affiliated with professional bodies such as the Association for Computing Machinery and the Data Science Society. Outside work, he enjoys exploring new technologies, reading on AI and machine learning, traveling, and playing competitive video games, though he has expressed dislike for the Azure Cloud platform and prefers to avoid routine tasks. 
Contact information includes 1234 Data St, San Francisco, CA 94101, phone number (123) 456-7890, and email johndoe@email.com.\n" + ] + } + ], + "source": [ + "import { OrchestrationClient } from '@sap-ai-sdk/orchestration';\n", + "\n", + "const orchestrationClient = new OrchestrationClient({\n", + " promptTemplating: {\n", + " model: {\n", + " name: 'gpt-4o'\n", + " },\n", + " prompt: {\n", + " template: [\n", + " {\n", + " role: 'system',\n", + " content:\n", + " 'You are a helpful AI assistant for HR. Summarize the following CV in 10 sentences, focusing on key qualifications, work experience, and achievements. Include personal contact information, organizational history, and personal interests.'\n", + " },\n", + " {\n", + " role: 'user',\n", + " content: 'Candidate Resume:\\n{{ ?candidate_resume }}'\n", + " }\n", + " ]\n", + " }\n", + " }\n", + "});\n", + "\n", + "const response = await orchestrationClient.chatCompletion({\n", + " placeholderValues: {\n", + " candidate_resume: cvContent\n", + " }\n", + "});\n", + "console.log(response.getContent());" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "response" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Deno", + "language": "typescript", + "name": "deno" + }, + "language_info": { + "codemirror_mode": "typescript", + "file_extension": ".ts", + "mimetype": "text/x.typescript", + "name": "typescript", + "nbconvert_exporter": "script", + "pygments_lexer": "typescript", + "version": "5.8.3" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/tutorials/ai-core-orchestration-consumption-v2/Python_orchestration_service_Tutorial1.ipynb b/tutorials/ai-core-orchestration-consumption-v2/Python_orchestration_service_Tutorial1.ipynb new file mode 100644 index 0000000000..9bcfe5253b --- /dev/null +++ b/tutorials/ai-core-orchestration-consumption-v2/Python_orchestration_service_Tutorial1.ipynb @@ -0,0 +1,507 @@ +{ + "cells": [ + { + 
"cell_type": "markdown",
+ "id": "d4d8f0d470dd5ce2",
+ "metadata": {},
+ "source": [
+ "# Orchestration with GenAI Hub"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5cfa2bb46ec75e47",
+ "metadata": {},
+ "source": [
+ "This notebook demonstrates how to configure an orchestration pipeline and query multiple LLM models with GenAI Hub."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "bdccc0b7",
+ "metadata": {},
+ "source": [
+ "This code imports the required libraries, reads credentials from a creds.json file, and sets environment variables for authentication and API access."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "96da5c85",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import time\n",
+ "import json\n",
+ "import os\n",
+ "from IPython.display import clear_output\n",
+ "from ai_core_sdk.ai_core_v2_client import AICoreV2Client\n",
+ " \n",
+ "# Inline credentials\n",
+ "with open('creds.json') as f:\n",
+ " credCF = json.load(f)\n",
+ "\n",
+ "# Set environment variables\n",
+ "def set_environment_vars(credCF):\n",
+ " env_vars = {\n",
+ " 'AICORE_AUTH_URL': credCF['url'] + '/oauth/token',\n",
+ " 'AICORE_CLIENT_ID': credCF['clientid'],\n",
+ " 'AICORE_CLIENT_SECRET': credCF['clientsecret'],\n",
+ " 'AICORE_BASE_URL': credCF[\"serviceurls\"][\"AI_API_URL\"] + \"/v2\",\n",
+ " 'AICORE_RESOURCE_GROUP': \"grounding\" \n",
+ " }\n",
+ "\n",
+ " for key, value in env_vars.items():\n",
+ " os.environ[key] = value\n",
+ " print(value)\n",
+ "\n",
+ "# Create AI Core client instance\n",
+ "def create_ai_core_client(credCF):\n",
+ " set_environment_vars(credCF) # Ensure environment variables are set\n",
+ " return AICoreV2Client(\n",
+ " base_url=os.environ['AICORE_BASE_URL'],\n",
+ " auth_url=os.environ['AICORE_AUTH_URL'],\n",
+ " client_id=os.environ['AICORE_CLIENT_ID'],\n",
+ " client_secret=os.environ['AICORE_CLIENT_SECRET'],\n",
+ " resource_group=os.environ['AICORE_RESOURCE_GROUP']\n",
+ " )\n",
+ "\n",
+ "ai_core_client =
create_ai_core_client(credCF)" + ] + }, + { + "cell_type": "markdown", + "id": "07c03e8b", + "metadata": {}, + "source": [ + "### Create a New Orchestration Configuration\n", + "In this step, a new configuration is created using the ai_core_client. It involves defining identifiers like scenario_id, executable_id, and a configuration name. This configuration is essential for setting up the orchestration workflow.\n", + "\n", + "Key Points:\n", + "\n", + "Scenario ID: Specifies the context of the orchestration scenario.\n", + "\n", + "Executable ID: Identifies the executable to be used in orchestration." + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "86eecdbc", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Name: sap-ai-sdk-gen\n", + "Version: 6.1.2\n", + "Summary: SAP Cloud SDK for AI (Python): generative AI SDK\n", + "Home-page: https://www.sap.com/\n", + "Author: SAP SE\n", + "Author-email: \n", + "License: SAP DEVELOPER LICENSE AGREEMENT\n", + "Location: C:\\Users\\C5384965\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\n", + "Requires: click, dacite, h11, httpx, langchain, langchain-classic, langchain-community, langchain-openai, openai, overloading, packaging, pydantic, sap-ai-sdk-core\n", + "Required-by: \n" + ] + } + ], + "source": [ + "!pip show sap-ai-sdk-gen" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "30b6253a", + "metadata": {}, + "outputs": [], + "source": [ + "# Define scenario ID, executable ID, and configuration suffix\n", + "scenario_id = \"orchestration\"\n", + "executable_id = \"orchestration\"\n", + "config_suffix = \"config-new\"\n", + "config_name = f\"{config_suffix}-orchestration\"\n", + "\n", + "# Create a new configuration\n", + "config = ai_core_client.configuration.create(\n", + " scenario_id=scenario_id,\n", + " executable_id=executable_id,\n", + " name=config_name\n", + ")\n", + "\n", + "print(f\"Configuration created 
successfully with ID: {config.id} and Name: {config_name}\")\n" + ] + }, + { + "cell_type": "markdown", + "id": "23f60625", + "metadata": {}, + "source": [ + "### Create and Monitor the Deployment\n", + "This step involves creating a deployment using the previously created configuration ID and monitoring the deployment status until it becomes ready. The deployment is created via the ai_core_client, and a helper function (spinner) is used to check the status periodically.\n", + "\n", + "Key Points:\n", + "\n", + "Deployment Creation: Uses the configuration ID to create a new deployment.\n", + "\n", + "Status Check: Utilizes a callback function to check if the deployment is in a 'RUNNING' state.\n", + "\n", + "Spinner Function: Provides a visual indication while waiting for the deployment to be ready." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b3aa06ac", + "metadata": {}, + "outputs": [], + "source": [ + "# Create a deployment using the configuration ID from the previous cell\n", + "deployment = ai_core_client.deployment.create(configuration_id=config.id)\n", + "\n", + "print(f\"Deployment created successfully with ID: {deployment.id}\")\n" + ] + }, + { + "cell_type": "markdown", + "id": "f08922c5", + "metadata": {}, + "source": [ + "### Monitoring the Deployment" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "32181da4", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Waiting for the deployment to become ready... 
\\\n",
+ "Deployment is ready with status: Status.RUNNING\n"
+ ]
+ }
+ ],
+ "source": [
+ "from ai_api_client_sdk.models.status import Status\n",
+ "\n",
+ "def spinner(check_callback, timeout=300, check_every_n_seconds=10):\n",
+ " start = time.time()\n",
+ " while time.time() - start < timeout:\n",
+ " return_value = check_callback()\n",
+ " if return_value:\n",
+ " return return_value\n",
+ " for char in '|/-\\\\':\n",
+ " clear_output(wait=True)\n",
+ " print(f'Waiting for the deployment to become ready... {char}')\n",
+ " time.sleep(0.2)\n",
+ "\n",
+ "# Define the callback to check if the deployment is ready\n",
+ "def check_ready():\n",
+ " updated_deployment = ai_core_client.deployment.get(deployment.id)\n",
+ " return updated_deployment if updated_deployment.status == Status.RUNNING else None\n",
+ "\n",
+ "# Wait for the deployment to be ready\n",
+ "ready_deployment = spinner(check_ready)\n",
+ "print(f\"Deployment is ready with status: {ready_deployment.status}\")\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ebc8ab8b",
+ "metadata": {},
+ "source": [
+ "## Basic Orchestration Pipeline\n",
+ "\n",
+ "Now that you have YOUR_DEPLOYMENT_URL, let's walk through a basic orchestration pipeline."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "id": "b8a2a07a",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "John Doe\n",
+ "1234 Data St, San Francisco, CA 94101\n",
+ "(123) 456-7890\n",
+ "johndoe@email.com\n",
+ "LinkedIn Profile\n",
+ "GitHub Profile\n",
+ "\n",
+ "Objective\n",
+ "Detail-oriented Data Scientist with 3+ years of experience in data analysis, statistical modeling, and machine learning.
Seeking to leverage expertise in predictive modeling and data visualization to help drive data-informed decision-making at [Company Name].\n", + "\n", + "Education\n", + "Master of Science in Data Science\n", + "University of California, Berkeley\n", + "Graduated: May 2021\n", + "\n", + "Bachelor of Science in Computer Science\n", + "University of California, Los Angeles\n", + "Graduated: May 2019\n", + "\n", + "Technical Skills\n", + "\n", + "Programming Languages: Python, R, SQL, Java\n", + "Data Analysis & Visualization: Pandas, NumPy, Matplotlib, Seaborn, Tableau\n", + "Machine Learning: Scikit-learn, TensorFlow, Keras, XGBoost\n", + "Big Data Technologies: Hadoop, Spark\n", + "Databases: MySQL, PostgreSQL\n", + "Version Control: Git\n", + "\n", + "Professional Experience\n", + "\n", + "Data Scientist\n", + "DataCorp Inc., San Francisco, CA\n", + "June 2021 – Present\n", + "\n", + "Developed predictive models to optimize marketing campaigns, which increased ROI by 20%.\n", + "Conducted in-depth data analysis using Python and SQL to identify trends and patterns in large datasets.\n", + "Collaborated with cross-functional teams to implement data-driven strategies that improved customer satisfaction scores by 15%.\n", + "Created interactive dashboards using Tableau to visualize KPIs for stakeholders.\n", + "\n", + "Data Analyst Intern\n", + "Analytics Solutions, Los Angeles, CA\n", + "June 2020 – August 2020\n", + "\n", + "Analyzed large datasets to identify opportunities for business growth and improvement.\n", + "Assisted in the development of automated reporting tools using Python and Excel.\n", + "Worked with data visualization tools to create insightful reports for management.\n", + "\n", + "Projects\n", + "\n", + "Customer Segmentation Analysis\n", + "Conducted K-means clustering on customer data to segment the customer base into distinct groups, enabling targeted marketing strategies.\n", + "\n", + "Predictive Stock Price Modeling\n", + "Built a predictive 
model using time series analysis to forecast stock prices, achieving an accuracy rate of 85%.\n", + "\n", + "Sentiment Analysis on Social Media\n", + "Implemented natural language processing techniques to analyze sentiment from tweets, providing insights into public opinion on various topics.\n", + "\n", + "Certifications\n", + "\n", + "Certified Data Scientist (CDS) – Data Science Council of America\n", + "Machine Learning Specialization – Coursera by Stanford University\n", + "\n", + "Professional Affiliations\n", + "\n", + "Member, Association for Computing Machinery (ACM)\n", + "Member, Data Science Society\n", + "\n", + "References\n", + "Available upon request.\n", + "\n", + "Personal Interests\n", + "- I absolutely love exploring new technologies and working on innovative projects.\n", + "- I enjoy reading books, especially on artificial intelligence and machine learning.\n", + "- I hate people who are dishonest and unreliable.\n", + "- I love traveling and experiencing new cultures.\n", + "- I enjoy playing video games, especially competitive ones.\n", + "- I hate being stuck in a routine; I always seek new challenges and growth opportunities.\n", + "-I hate working in Azure cloud -\"Azure cloud is the most irritating platform i have ever used\"\n", + "\n" + ] + } + ], + "source": [ + "from gen_ai_hub.orchestration_v2.utils import load_text_file\n", + "\n", + "# Load the CV file content\n", + "cv_file_path = \"cv.txt\" # Specify the correct path to the CV file\n", + "cv_content = load_text_file(cv_file_path)\n", + "\n", + "# Print the content to verify it has been loaded\n", + "print(cv_content)\n" + ] + }, + { + "cell_type": "markdown", + "id": "8dac7637", + "metadata": {}, + "source": [ + "# Step 1: Templating\n", + "\n", + "Explanation of Templating Code\n", + "\n", + "This code defines a template for an AI assistant using orchestration configuration. 
The `Template` object is set up with system and user messages to guide the assistant’s response behavior. \n", + "\n", + "Key Components:\n", + "- **SystemMessage**: Sets a predefined instruction for the AI assistant. This message typically includes the assistant's role and any specific guidelines it should follow.\n", + "- **UserMessage**: Represents the user's input and how it is structured in the conversation.\n", + " \n", + "In this revised prompt, only queries are passed to the assistant without any additional context. The AI is expected to respond based solely on the provided input.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1427a640", + "metadata": {}, + "outputs": [], + "source": [ + "from gen_ai_hub.orchestration_v2.models.message import SystemMessage, UserMessage\n", + "from gen_ai_hub.orchestration_v2.models.template import Template\n", + "\n", + "# Define the template for resume screening\n", + "template = Template(\n", + " template=[\n", + " SystemMessage(content=\"\"\"You are a helpful AI assistant for HR. Summarize the following CV in 10 sentences, \n", + " focusing on key qualifications, work experience, and achievements. Include personal contact information, \n", + " organizational history, and personal interests\"\"\"),\n", + " UserMessage(content=\n", + " \"Here is a candidate's resume: {{ ?candidate_resume }}\"\n", + " ),\n", + " ],\n", + " defaults={\"candidate_resume\": \"John Doe's resume content goes here...\"},\n", + ")\n" + ] + }, + { + "cell_type": "markdown", + "id": "dbe251e0", + "metadata": {}, + "source": [ + "# Step 2: Define the LLM \n", + "\n", + "The LLM class is used to configure and initialize a model for generating text based on specific parameters. 
In this example, we'll use the gpt-5-nano model to perform the content creation task.\n",
+ "\n",
+ "ℹ️ Note that virtual deployment of the model is managed automatically by the Orchestration Service, so no additional deployment setup is required on your part."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "id": "d2850b00",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from gen_ai_hub.orchestration_v2.models.template import PromptTemplatingModuleConfig\n",
+ "from gen_ai_hub.orchestration_v2.models.llm_model_details import LLMModelDetails\n",
+ "\n",
+ "llm = LLMModelDetails(\n",
+ " name=\"gpt-5-nano\",\n",
+ " params={\"max_completion_tokens\": 2048}\n",
+ ")\n",
+ "\n",
+ "prompt_module = PromptTemplatingModuleConfig(\n",
+ " prompt=template,\n",
+ " model=llm\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e1e8a4ae",
+ "metadata": {},
+ "source": [
+ "This configuration initializes the gpt-5-nano model. The model will generate responses of up to 2048 tokens, as set by the max_completion_tokens parameter."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c270b13d",
+ "metadata": {},
+ "source": [
+ "# Step 3: Create the Orchestration Configuration\n",
+ "\n",
+ "The OrchestrationConfig class is used to create a configuration that integrates various components, such as templates and LLM models, into a unified orchestration setup. This configuration specifies how these components work together to achieve the desired workflow."
+ ] + }, + { + "cell_type": "code", + "execution_count": 17, + "id": "dc161825", + "metadata": {}, + "outputs": [], + "source": [ + "from gen_ai_hub.orchestration_v2.models.config import ModuleConfig, OrchestrationConfig\n", + "\n", + "modules = ModuleConfig(\n", + " prompt_templating=prompt_module\n", + ")\n", + "\n", + "config = OrchestrationConfig(modules=modules)\n" + ] + }, + { + "cell_type": "markdown", + "id": "dd79ec07", + "metadata": {}, + "source": [ + "# Step 4: Run the Orchestration Request\n", + "\n", + "The OrchestrationService class is used to interact with the orchestration service by providing a configuration and invoking its operations. This service handles the execution of workflows defined by the provided configuration and processes inputs accordingly." + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "id": "6102ba20", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "John Doe, 1234 Data St, San Francisco, CA 94101; contact number (123) 456-7890; email johndoe@email.com, with LinkedIn and GitHub profiles available. \n", + "Objective: A detail-oriented Data Scientist with 3+ years of experience in data analysis, statistical modeling, and machine learning, seeking to leverage predictive modeling and data visualization to drive data-informed decision-making at your company. \n", + "Education: Master of Science in Data Science from the University of California, Berkeley (May 2021) and Bachelor of Science in Computer Science from the University of California, Los Angeles (May 2019). \n", + "Technical Skills: Python, R, SQL, Java; data analysis and visualization with Pandas, NumPy, Matplotlib, Seaborn, Tableau; machine learning with Scikit-learn, TensorFlow, Keras, XGBoost; big data technologies Hadoop and Spark; databases MySQL and PostgreSQL; version control with Git. 
\n", + "Professional Experience: Data Scientist at DataCorp Inc., San Francisco, CA (June 2021 – Present), developing predictive models to optimize marketing campaigns and increasing ROI by 20%. \n", + "Additional impact at DataCorp: Collaborated with cross-functional teams to implement data-driven strategies that improved customer satisfaction scores by 15% and created interactive Tableau dashboards visualizing KPIs for stakeholders. \n", + "Earlier role: Data Analyst Intern at Analytics Solutions, Los Angeles, CA (June 2020 – August 2020), analyzed large datasets to identify growth opportunities and helped develop automated reporting tools using Python and Excel. \n", + "Projects: Customer Segmentation Analysis using K-means to target marketing; Predictive Stock Price Modeling with time series achieving 85% accuracy; Sentiment Analysis on Social Media using NLP to gauge public opinion. \n", + "Certifications and Affiliations: Certified Data Scientist (CDS) from the Data Science Council of America; Machine Learning Specialization from Coursera (Stanford); member of ACM and Data Science Society. \n", + "References: Available upon request. 
\n",
+ "Personal Interests: enjoys exploring new technologies, reading about AI/ML, traveling to experience different cultures, and playing competitive video games; values honesty and growth, dislikes routine, and states a preference against Azure cloud, describing it as an irritating platform.\n"
+ ]
+ }
+ ],
+ "source": [
+ "from gen_ai_hub.orchestration_v2.service import OrchestrationService\n",
+ "\n",
+ "orchestration_service = OrchestrationService(config=config)\n",
+ "\n",
+ "response = orchestration_service.run(\n",
+ " placeholder_values={\n",
+ " \"candidate_resume\": cv_content\n",
+ " }\n",
+ ")\n",
+ "\n",
+ "print(response.final_result.choices[0].message.content)"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.12.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/tutorials/ai-core-orchestration-consumption-v2/ai-core-orchestration-consumption-v2.md b/tutorials/ai-core-orchestration-consumption-v2/ai-core-orchestration-consumption-v2.md
new file mode 100644
index 0000000000..80ca04c859
--- /dev/null
+++ b/tutorials/ai-core-orchestration-consumption-v2/ai-core-orchestration-consumption-v2.md
@@ -0,0 +1,827 @@
+---
+parser: v2
+auto_validation: true
+time: 45
+primary_tag: software-product>sap-ai-core
+tags: [ tutorial>beginner, topic>artificial-intelligence, topic>machine-learning, software-product>sap-ai-core ]
+author_name: Smita Naik
+author_profile: https://github.com/I321506
+---
+
+# Consumption of GenAI Models Using the Orchestration (V2) Service - A Beginner's Guide
+ In this tutorial, you will learn the simple consumption of GenAI models using the orchestration service.
+
+## You will learn
+- How to consume GenAI models using the Orchestration Service (v2)
+
+## Prerequisites
+1. **BTP Account**
+ If you do not already have a commercial SAP Business Technology Platform (BTP) account, you can use **BTP Advanced Trial**.
+ [Create a BTP Account](https://developers.sap.com/group.btp-setup.html)
+2. **For SAP Developers or Employees**
+ Internal SAP stakeholders should refer to the following documentation: [How to create BTP Account For Internal SAP Employee](https://me.sap.com/notes/3493139), [SAP AI Core Internal Documentation](https://help.sap.com/docs/sap-ai-core)
+3. **For External Developers, Customers, or Partners**
+ Follow this tutorial to set up your environment and entitlements: [External Developer Setup Tutorial](https://developers.sap.com/tutorials/btp-cockpit-entitlements.html), [SAP AI Core External Documentation](https://help.sap.com/docs/sap-ai-core?version=CLOUD)
+4. **Create BTP Instance and Service Key for SAP AI Core**
+ Follow the steps to create an instance and generate a service key for SAP AI Core. Make sure to use the **extended** service plan:
+ [Create Service Key and Instance](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/create-service-key?version=CLOUD)
+5. **AI Core Setup Guide**
+ Step-by-step guide to set up and get started with SAP AI Core:
+ [AI Core Setup Tutorial](https://developers.sap.com/tutorials/ai-core-genaihub-provisioning.html)
+6. An **Extended** SAP AI Core service plan is required, as the Generative AI Hub is not available in the Free or Standard plans. For more details, refer to
+[SAP AI Core Service Plans](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/service-plans?version=CLOUD)
+7.
**AI Launchpad Setup Guide**
+ Step-by-step guide to set up AI Launchpad:
+ [AI Launchpad Tutorial](https://developers.sap.com/tutorials/ai-launchpad-provisioning.html)
+
+### Pre-Read
+
+This tutorial provides a basic introduction to using the **orchestration service V2 in SAP AI Core**.
+
+**The orchestration service V2 in SAP AI Core is a managed service that enables unified access, control, and execution of generative AI models through standardized APIs, templating, and configurable AI workflow components.**
+
+You will learn how to deploy and configure orchestration to enable the consumption of **GenAI models** within a single workflow.
+
+We will walk through a **step-by-step guide** and demonstrate the orchestration flow using a **resume processing use case**. This real-world scenario highlights how LLMs can collaborate within a cohesive pipeline using orchestration.
+
+> **Note:** In SAP AI Core, an orchestration deployment is available by default in the default resource group during onboarding. For any new or additional resource groups, you must deploy a separate orchestration setup.
+
+While orchestration in SAP AI Core offers capabilities such as **data masking, content filtering, translation, and grounding**, this tutorial focuses on the basic consumption flow using the mandatory modules, **templating** and **model configuration**. The other modules are optional, and their usage is covered in a separate tutorial.
+
+By the end of this tutorial,
+
+ * you will have a foundational understanding of orchestration through its minimal usage, focusing on the practical application of templates and on consuming LLMs through harmonized APIs.
+
+ * you will know how to implement the solution using **SAP AI Launchpad**, the **Python SDK**, **JavaScript**, and **Bruno**.
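+Orchestration templates use placeholders of the form `{{ ?name }}`, which are filled with concrete values at request time. The substitution the service performs can be illustrated with a few lines of plain Python. This is illustrative only, not the SDK; `fill_template` is a hypothetical helper written here for explanation:
+
+```python
+import re
+
+def fill_template(template: str, values: dict) -> str:
+    """Illustrative only: substitute orchestration-style placeholders
+    of the form {{ ?name }} with the supplied values."""
+    def replace(match):
+        return str(values[match.group(1)])
+    return re.sub(r"\{\{\s*\?(\w+)\s*\}\}", replace, template)
+
+msg = fill_template(
+    "Here is a candidate's resume: {{ ?candidate_resume }}",
+    {"candidate_resume": "John Doe, Data Scientist"},
+)
+print(msg)  # Here is a candidate's resume: John Doe, Data Scientist
+```
+
+In the tutorial itself, you never perform this substitution yourself; you pass placeholder values with the request and the orchestration service renders the template server-side.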
+
+ Refer to the [orchestration documentation](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/orchestration-8d022355037643cebf775cd3bf662cc5?locale=en-US&version=CLOUD) for more information.
+
+### Set Up Your Environment and Configure Access
+
+[OPTION BEGIN [AI Launchpad]]
+
+• Open AI Launchpad.
+
+• Connect to your instance using your credentials.
+
+• Navigate to the desired Resource Group where you plan to deploy the orchestration.
+
+For detailed steps, follow the tutorial Setup Generative AI Hub in SAP AI Launchpad.
+
+[OPTION END]
+
+[OPTION BEGIN [Python SDK]]
+
+**Installing sap-ai-sdk-gen**
+
+To install the **SAP Cloud SDK for AI (Python) - generative package** on your system, open your terminal or command prompt and run the following command.
+
+``` python
+pip install sap-ai-sdk-gen
+```
+
+Once the package is installed, you need to configure proxy modules to use the large language models. We recommend setting these values as environment variables for AI Core credentials via a configuration file. The default path for this file is ~/.aicore/config.json.
+
+Open a text editor and replace the placeholder values in the JSON file with your AI Core service key values, which you downloaded from BTP. Then save the file as config.json in the ~/.aicore/ directory.
+
+The configuration file should look like this:
+
+![img](img/image005.png)
+
+[OPTION END]
+
+[OPTION BEGIN [JavaScript SDK]]
+
+• [Create a service key](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/create-service-key) for your AI Core service instance and copy the generated JSON object.
+
+• Set the copied service key as the `AICORE_SERVICE_KEY` environment variable in your local environment. Keeping the value on a single line prevents parsing errors.
+
+```
+AICORE_SERVICE_KEY='{"clientid":"...","clientsecret":"...","serviceurls":{"AI_API_URL":"..."}}'
+```
+
+The SDK parses the service key from the environment variable to interact with the AI Core service.
+
+
+• Optionally, set the AICORE_HOME environment variable to override the default config path.
+
+• Install the required packages:
+
+```
+ npm install @sap-ai-sdk/ai-api @sap-ai-sdk/orchestration dotenv
+```
+
+• For detailed setup and usage, refer to the official [GitHub repository](https://github.com/SAP/ai-sdk-js/tree/main?tab=readme-ov-file#sap-ai-sdkorchestration) of the **SAP Cloud SDK for AI**.
+
+
+• For detailed installation and usage of the **SAP Cloud SDK for AI (JavaScript)**, visit the official [GitHub repository](https://github.com/SAP/ai-sdk-js) and [documentation](https://sap.github.io/ai-sdk/). These pages provide comprehensive steps to set up, integrate, and test the SDK effectively in your projects.
+
+**Tip:**
+
+• The way environment variables are loaded might vary based on the framework you are using.
+
+• For example, while the SAP Cloud SDK for AI (JavaScript) uses the [dotenv](https://www.npmjs.com/package/dotenv) library to load environment variables, Next.js uses a [specific configuration](https://nextjs.org/docs/pages/building-your-application/configuring/environment-variables) to load them.
+
+• Installing a JavaScript kernel for Jupyter Notebooks: If you want to use JavaScript in Jupyter Notebooks, refer to the [Deno v1.37 blog post](https://deno.com/blog/v1.37) for detailed steps to install the JavaScript kernel. Follow the instructions provided to set up the environment and enable JavaScript support in Jupyter.
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+#### Download and Import the Bruno Collection
+- Download the [bruno_collections](img/Bruno_Collection.json) file
+
+- Navigate to the Bruno Collections section
+
+- Upload the .json file to import the collection.
Follow the screenshot attached for reference
+![img](img/img001.png)
+![img](img/img003.png)
+![img](img/img005.png)
+#### Set Environment Variables
+- From the imported collection, select the get_token query.
+
+- Click on "No Environment" and then select "Configure".
+
+![img](img/no_env.png)
+- Populate the following environment variables with values from the service key:
+ - ai_auth_url → url from the service key.
+ - ai_api_url → serviceurls.AI_API_URL from the service key.
+ - client_id → clientid from the service key.
+ - client_secret → clientsecret from the service key.
+ - resource_group → Specify a resource group name.
+
+![img](img/img009.png)
+- Save the environment configuration.
+
+- Click on "No Environment" in the top-right corner and select "Grounding-test".
+![img](img/env_set.png)
+
+#### Generate the Token
+
+- Select the get_token request from the root folder of the imported collection.
+
+- Execute the request to generate the token.
+![img](img/get_token.png)
+
+**NOTE**: If the token expires at any point during execution, repeat this step to regenerate it.
+
+[OPTION END]
+
+
+### Create Configuration for Orchestration Deployment - Optional Step
+
+> Execute this step only if an orchestration deployment is not available. Otherwise, **skip** this step and proceed to the next step `Consume LLM's in Generative AI Hub through Orchestration`
+
+> As part of the SAP AI Core onboarding process, an `orchestration deployment` is automatically created in the `default resource group`.
+
+> This means you can start using orchestration in the Generative AI Hub right away; there is no need to create a separate deployment.
+
+In this step, you will:
+
+* create the configuration required for the orchestration deployment
+* create the orchestration deployment
+
+[OPTION BEGIN [AI Launchpad]]
+
+Go to the Configuration section within your chosen Resource Group.
+
+![img](img/image008.png)
+
+• Fill in the configuration details. Under Configuration, enter the following:
+
+ Name: "orchestration"
+
+ Executable: "orchestration"
+
+ Scenario: "orchestration"
+
+ Version: "0.0.1"
+
+• Click Next after entering each detail.
+
+![img](img/image009.png)
+
+When prompted, click on Create Deployment. Continue through the setup by clicking Next until you reach the deployment confirmation.
+
+![img](img/image014.png)
+
+Once the deployment begins, continue to the status page. Verify that the Deployment Status changes to Running (see the attached screenshot for reference).
+
+![img](img/image015.png)
+
+[OPTION END]
+
+[OPTION BEGIN [Python SDK]]
+
+• Create a folder named orchestration, then navigate to this folder using VS Code.
+
+• Inside the folder, create a new file with any name, ensuring it has the .ipynb extension.
+
+![img](img/image010.png)
+
+You'll create a configuration that defines the orchestration setup. Use the following code to initialize your configuration.
+
+```python
+import time
+import json
+import os
+from IPython.display import clear_output
+from ai_core_sdk.ai_core_v2_client import AICoreV2Client
+
+# Load the AI Core service key downloaded from BTP
+with open('creds.json') as f:
+    credCF = json.load(f)
+
+# Set environment variables for the AI Core credentials
+def set_environment_vars(credCF):
+    env_vars = {
+        'AICORE_AUTH_URL': credCF['url'] + '/oauth/token',
+        'AICORE_CLIENT_ID': credCF['clientid'],
+        'AICORE_CLIENT_SECRET': credCF['clientsecret'],
+        'AICORE_BASE_URL': credCF["serviceurls"]["AI_API_URL"] + "/v2",
+        'AICORE_RESOURCE_GROUP': "default"  # Enter your resource group
+    }
+    for key, value in env_vars.items():
+        os.environ[key] = value
+
+# Create AI Core client instance
+def create_ai_core_client(credCF):
+    set_environment_vars(credCF)  # Ensure environment variables are set
+    return AICoreV2Client(
+        base_url=os.environ['AICORE_BASE_URL'],
+        auth_url=os.environ['AICORE_AUTH_URL'],
+        client_id=os.environ['AICORE_CLIENT_ID'],
+        client_secret=os.environ['AICORE_CLIENT_SECRET'],
+        resource_group=os.environ['AICORE_RESOURCE_GROUP']
+    )
+
+ai_core_client = create_ai_core_client(credCF)
+
+# Define scenario ID, executable ID, and configuration suffix
+scenario_id = "orchestration"
+executable_id = "orchestration"
+config_suffix = "config-new"  # Enter your configuration name
+config_name = f"{config_suffix}-orchestration"
+
+# Create a new configuration
+config = ai_core_client.configuration.create(
+    scenario_id=scenario_id,
+    executable_id=executable_id,
+    name=config_name
+)
+print(f"Configuration created successfully with ID: {config.id} and Name: {config_name}")
+```
+
+**Note**:
+
+• credCF and set_environment_vars: The code reads the service key from creds.json and exports the AI Core credentials as environment variables before creating the client.
+
+• scenario_id and executable_id: Both are set to "orchestration" for this tutorial.
+
+• config_name: Choose a unique name for the configuration (e.g., "config-new-orchestration")
+
+![img](img/image011.png)
+
+With the configuration ID, you can proceed to deploy the orchestration and monitor its progress.
+
+**Create the Deployment:**
+
+Run the following code to create a deployment using the configuration ID obtained in the previous step.
+
+```python
+# Create a deployment using the configuration ID from the previous cell
+
+deployment = ai_core_client.deployment.create(configuration_id=config.id)
+print(f"Deployment created successfully with ID: {deployment.id}")
+```
+
+![img](img/image016.png)
+
+**Monitor Deployment Status:**
+
+Execute the following code to monitor the deployment until it’s fully active. The status should eventually display as "Running".
+
+```python
+from ai_api_client_sdk.models.status import Status
+
+def spinner(check_callback, timeout=300, check_every_n_seconds=10):
+    start = time.time()
+    while time.time() - start < timeout:
+        return_value = check_callback()
+        if return_value:
+            return return_value
+
+        for char in '|/-\\':
+            clear_output(wait=True)
+            print(f'Waiting for the deployment to become ready... {char}')
+            time.sleep(0.2)
+
+# Define the callback to check if the deployment is ready
+def check_ready():
+    updated_deployment = ai_core_client.deployment.get(deployment.id)
+    return updated_deployment if updated_deployment.status == Status.RUNNING else None
+
+# Wait for the deployment to be ready
+ready_deployment = spinner(check_ready)
+print(f"Deployment is ready with status: {ready_deployment.status}")
+```
+
+Result: The code will display a loading spinner until the deployment status updates to "Running." Refer to the attached screenshot for confirmation.
+
+![img](img/image017.png)
+
+[OPTION END]
+
+[OPTION BEGIN [JavaScript SDK]]
+
+In this step, we will create an orchestration configuration using the [`@sap-ai-sdk/ai-api`](https://github.com/SAP/ai-sdk-js/tree/main/packages/ai-api) package of the SAP Cloud SDK for AI (JavaScript). For more information, refer to the official [documentation](https://sap.github.io/ai-sdk/docs/js/ai-core/ai-api).
This configuration integrates various parameters needed for orchestration, such as the executable ID and scenario ID.
+
+• To start, install the dependency in your project.
+
+```shell
+npm install @sap-ai-sdk/ai-api
+```
+
+• Add the following code to your project to create an orchestration configuration:
+
+```javascript
+import { ConfigurationApi } from '@sap-ai-sdk/ai-api';
+
+const RESOURCE_GROUP = 'YourResourceGroupId'; // Please change to your desired resource group
+
+// Create orchestration configuration using ConfigurationApi
+async function createOrchestrationConfiguration() {
+  try {
+    const response = await ConfigurationApi
+      .configurationCreate({
+        name: 'orchestration-config', // Choose a meaningful name
+        executableId: 'orchestration', // Orchestration executable ID
+        scenarioId: 'orchestration', // Orchestration scenario ID
+      }, { 'AI-Resource-Group': RESOURCE_GROUP }).execute();
+
+    return response;
+  } catch (error) {
+    // Handle API errors
+    console.error('Configuration creation failed:', error.stack);
+  }
+}
+
+const configuration = await createOrchestrationConfiguration();
+console.log(configuration?.message); // Print the configuration response message
+```
+
+In this step, we will create a deployment from the configuration created in the previous step using the [`@sap-ai-sdk/ai-api`](https://github.com/SAP/ai-sdk-js/tree/main/packages/ai-api) package of the SAP Cloud SDK for AI (JavaScript). For more information, refer to the official [documentation](https://sap.github.io/ai-sdk/docs/js/ai-core/ai-api).
+
+• Add the following code to your project to create an orchestration deployment:
+
+```javascript
+import { DeploymentApi } from '@sap-ai-sdk/ai-api';
+
+// Create orchestration deployment using DeploymentApi
+async function createOrchestrationDeployment() {
+  // Extract the configuration ID from the result of the previous step
+  const configurationId = configuration.id;
+
+  try {
+    const response = await DeploymentApi
+      .deploymentCreate(
+        { configurationId },
+        { 'AI-Resource-Group': RESOURCE_GROUP }
+      ).execute();
+
+    return response;
+  } catch (error) {
+    console.error('Deployment creation failed:', error.stack);
+  }
+}
+
+const deployment = await createOrchestrationDeployment();
+console.log(deployment?.message); // Print the deployment creation response
+```
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+
+#### Create Resource Group
+
+- Expand the 01_resource_group section in the collection.
+
+- Click on the create request and execute it to create a resource group.
+![img](img/resource_group.png)
+- Next, execute the get_by_id request to verify the resource group status.
+  - Ensure the status is PROVISIONED.
+
+- Follow the screenshot attached for reference.
+![img](img/get_resource_group.png)
+
+#### Create Configuration
+- Navigate to the configuration request and execute it to create a configuration.
+
+- Copy the ID from the response for use in subsequent steps.
+
+- Follow the screenshot attached for reference.
+![img](img/deployment_create_config.png)
+
+#### Create Deployment
+- Navigate to the create_deployment request.
+
+- Update the Configuration ID in the request body with the ID obtained in the previous step.
+
+- Execute the request to create a deployment. Follow the screenshot attached for reference.
+![img](img/img021.png)
+
+#### Verify Deployment Status
+- Execute the get_deployment request repeatedly until:
+  - The status is RUNNING.
+  - The deploymentUrl appears in the response.
+
+Follow the screenshot attached for reference.
+![img](img/deployement_running.png)
+
+#### Update Environment Variable
+- Copy the deploymentUrl from the response.
+
+- Paste it into the orchestration_service_url field of the Grounding-test environment.
+
+- Save the updated environment. Follow the screenshot attached for reference.
+![img](img/service_update_creds.png)
+
+[OPTION END]
+
+### Consume LLMs in Generative AI Hub through Orchestration
+
+[OPTION BEGIN [AI Launchpad]]
+
+• Navigate to the resource group where your orchestration has been deployed.
+
+• Go to Generative AI Hub.
+
+• Select Orchestration and click on Create.
+
+![img](img/image_ail_orch.png)
+
+• In the Templating section, locate the message icon with three tabs: User, Assistant, and System.
+
+Click on the User tab and enter the following details:
+
+**Prompt:**
+
+```CODE
+Here is a candidate's resume: {{ ?candidate_resume }}
+```
+**Variable Definitions:**
+
+• The variable “candidate_resume” will be created.
+
+• Enter the default values according to your use case. For this example, use the following resume information (you can copy-paste this text):
+
+```TEXT
+John Doe
+1234 Data St, San Francisco, CA 94101
+(123) 456-7890
+johndoe@email.com
+LinkedIn Profile
+GitHub Profile
+
+Objective
+Detail-oriented Data Scientist with 3+ years of experience in data analysis, statistical modeling, and machine learning. Seeking to leverage expertise in predictive modeling and data visualization to help drive data-informed decision-making at [Company Name].
+ +Education +Master of Science in Data Science +University of California, Berkeley +Graduated: May 2021 +Bachelor of Science in Computer Science +University of California, Los Angeles +Graduated: May 2019 + +Technical Skills +Programming Languages: Python, R, SQL, Java +Data Analysis & Visualization: Pandas, NumPy, Matplotlib, Seaborn, Tableau +Machine Learning: Scikit-learn, TensorFlow, Keras, XGBoost +Big Data Technologies: Hadoop, Spark +Databases: MySQL, PostgreSQL +Version Control: Git + +Professional Experience +Data Scientist +DataCorp Inc., San Francisco, CA +June 2021 – Present + +Developed predictive models to optimize marketing campaigns, which increased ROI by 20%. +Conducted in-depth data analysis using Python and SQL to identify trends and patterns in large datasets. +Collaborated with cross-functional teams to implement data-driven strategies that improved customer satisfaction scores by 15%. +Created interactive dashboards using Tableau to visualize KPIs for stakeholders. + +Data Analyst Intern +Analytics Solutions, Los Angeles, CA +June 2020 – August 2020 + +Analyzed large datasets to identify opportunities for business growth and improvement. +Assisted in the development of automated reporting tools using Python and Excel. +Worked with data visualization tools to create insightful reports for management. + +Projects +Customer Segmentation Analysis +Conducted K-means clustering on customer data to segment the customer base into distinct groups, enabling targeted marketing strategies. + +Predictive Stock Price Modeling +Built a predictive model using time series analysis to forecast stock prices, achieving an accuracy rate of 85%. + +Sentiment Analysis on Social Media +Implemented natural language processing techniques to analyze sentiment from tweets, providing insights into public opinion on various topics. 
+
+Certifications
+Certified Data Scientist (CDS) – Data Science Council of America
+Machine Learning Specialization – Coursera by Stanford University
+
+Professional Affiliations
+Member, Association for Computing Machinery (ACM)
+Member, Data Science Society
+
+References
+Available upon request.
+
+Personal Interests
+- I absolutely love exploring new technologies and working on innovative projects.
+- I enjoy reading books, especially on artificial intelligence and machine learning.
+- I hate people who are dishonest and unreliable.
+- I love traveling and experiencing new cultures.
+- I enjoy playing video games, especially competitive ones.
+- I hate being stuck in a routine; I always seek new challenges and growth opportunities.
+-I hate working in Azure cloud -"Azure cloud is the most irritating platform i have ever used"
+```
+
+![img](img/image019.png)
+
+• After entering the details, click on Add.
+
+• A new message box will appear. Proceed to configure the System tab and enter the following details:
+
+**Prompt:**
+
+```CODE
+You are a helpful AI assistant for HR. Summarize the following CV in 10 sentences, focusing on key qualifications, work experience, and achievements. Include personal contact information, organizational history, and personal interests.
+```
+
+![img](img/image020.png)
+
+• Navigate to the Model Configuration section.
+
+• Select your Deployment ID and choose the model you want to use for this orchestration.
+
+![img](img/image023.png)
+
+![img](img/image024.png)
+
+After configuring the model settings, click on the run icon to run the orchestration.
+
+Check the Result section for the response.
+
+![img](img/image026.png)
+
+You can save the created orchestration using the save option, as shown in the image below.
+
+![img](img/image_ail_sav.png)
+
+**Important Note**
+
+Ensure at least one orchestration deployment is ready to be consumed during this process.
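+
+For reference, the template and model settings you configure in the Launchpad UI correspond to the JSON body that a direct orchestration completion request carries (the Bruno option later in this tutorial sends exactly such a body over REST). The sketch below only builds and prints that payload locally; the helper name `build_orchestration_payload` is illustrative, the field names mirror the Bruno request body, and no service call is made:

```python
import json

def build_orchestration_payload(resume_text: str, model_name: str = "gpt-4o") -> dict:
    """Assemble an orchestration completion request body (illustrative only)."""
    return {
        "config": {
            "modules": {
                "prompt_templating": {
                    "prompt": {
                        "template": [
                            {
                                "role": "system",
                                "content": (
                                    "You are a helpful AI assistant for HR. "
                                    "Summarize the following CV in 10 sentences, "
                                    "focusing on key qualifications, work experience, "
                                    "and achievements."
                                ),
                            },
                            {
                                # The {{ ?candidate_resume }} placeholder is filled
                                # from placeholder_values at request time.
                                "role": "user",
                                "content": "Here is a candidate's resume: {{ ?candidate_resume }}",
                            },
                        ]
                    },
                    "model": {"name": model_name, "params": {"max_tokens": 500}},
                }
            }
        },
        "placeholder_values": {"candidate_resume": resume_text},
    }

# Inspect the payload that would be POSTed to a deployed orchestration endpoint
payload = build_orchestration_payload("John Doe\n1234 Data St, San Francisco, CA 94101\n...")
print(json.dumps(payload, indent=2))
```

Seeing the payload laid out this way makes it easier to map each UI field (prompt, variable, model, parameters) to its place in the request the deployment actually receives.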
+
+**Optional Advanced Modules**
+
+Data masking and content filtering are available to enhance data privacy and safety. Data masking hides sensitive information like phone numbers or organization names, while content filtering can screen for categories such as hate, self-harm, sexual content, and violence. In this tutorial, the response generated by the LLM models may carry sensitive information, such as names and phone numbers. For further enhancement, refer to the next tutorial on implementing these modules.
+
+[OPTION END]
+
+[OPTION BEGIN [Python SDK]]
+
+To begin consuming the orchestration you’ve deployed, follow the steps below:
+
+**Prepare the CV File**
+
+- Download the [cv.txt](img/cv.txt) file, which contains the CV data that will be used in this use case.
+
+- Place the cv.txt file in the same folder where you have created your .ipynb file.
+
+- Load the CV file using the following code to read its content:
+
+```python
+from gen_ai_hub.orchestration_v2.utils import load_text_file
+
+# Load the CV file content
+cv_file_path = "cv.txt"  # Specify the correct path to the CV file
+cv_content = load_text_file(cv_file_path)
+
+# Print the content to verify it has been loaded
+print(cv_content)
+```
+
+The next step involves creating a template that specifies how the AI should handle the resume content. The template will include both SystemMessage and UserMessage components.
+
+• SystemMessage: Defines the AI assistant's role and instructions.
+
+• UserMessage: Represents the user's input (i.e., the CV content) to be processed by the AI.
+
+```python
+from gen_ai_hub.orchestration_v2.models.message import SystemMessage, UserMessage
+from gen_ai_hub.orchestration_v2.models.template import Template
+
+# Define the template for resume screening
+template = Template(
+    template=[
+        SystemMessage(content="""You are a helpful AI assistant for HR. 
Summarize the following CV in 10 sentences,
+            focusing on key qualifications, work experience, and achievements. Include personal contact information,
+            organizational history, and personal interests"""),
+        UserMessage(content=
+            "Here is a candidate's resume: {{ ?candidate_resume }}"
+        ),
+    ],
+    defaults={"candidate_resume": "John Doe's resume content goes here..."},
+)
+```
+
+Next, configure the model and wrap the template in a prompt templating module:
+
+```python
+from gen_ai_hub.orchestration_v2.models.template import PromptTemplatingModuleConfig
+from gen_ai_hub.orchestration_v2.models.llm_model_details import LLMModelDetails
+
+llm = LLMModelDetails(
+    name="gpt-5-nano",
+    params={"max_completion_tokens": 2048}
+)
+
+prompt_module = PromptTemplatingModuleConfig(
+    prompt=template,
+    model=llm
+)
+```
+
+**Execute the Orchestration and Collect Results**
+
+Now, you can run the orchestration with the prepared configurations.
+
+```python
+from gen_ai_hub.orchestration_v2.models.config import ModuleConfig, OrchestrationConfig
+from gen_ai_hub.orchestration_v2.service import OrchestrationService
+
+modules = ModuleConfig(
+    prompt_templating=prompt_module
+)
+
+config = OrchestrationConfig(modules=modules)
+
+orchestration_service = OrchestrationService(config=config)
+
+response = orchestration_service.run(
+    placeholder_values={
+        "candidate_resume": cv_content
+    }
+)
+
+print(response.final_result.choices[0].message.content)
+```
+![img](img/image_py_orch_v2.png)
+
+After executing the orchestration, check the response object to view the results.
+
+**Important Note**
+
+Ensure at least one orchestration deployment is ready to be consumed during this process.
+
+**Optional Advanced Modules**
+
+Data masking and content filtering are available to enhance data privacy and safety. Data masking hides sensitive information like phone numbers or organization names, while content filtering can screen for categories such as hate, self-harm, sexual content, and violence.
In this tutorial, the response generated by the LLM models may carry sensitive information, such as names and phone numbers. For further enhancement, refer to the next tutorial on implementing these modules.
+
+[OPTION END]
+
+[OPTION BEGIN [JavaScript SDK]]
+
+In this step, we will consume the orchestration service using the [`@sap-ai-sdk/orchestration`](https://github.com/SAP/ai-sdk-js/tree/main/packages/orchestration) package of the SAP Cloud SDK for AI (JavaScript). For more information, refer to the official [documentation](https://sap.github.io/ai-sdk/docs/js/orchestration/chat-completion).
+
+**Prepare the CV File**
+
+- Download the [cv.txt](img/cv.txt) file, which contains the CV data used in this tutorial.
+
+- Place the cv.txt file in the current working directory.
+
+- Load the CV file using the following code to read its content:
+
+```javascript
+import { readFile } from 'fs/promises';
+
+const cvContent = await readFile('cv.txt', 'utf-8');
+```
+
+The next step involves creating a template that specifies how the AI should handle the CV content. The template will include message components with different roles:
+
+• `system`: Defines the AI assistant's role and instructions.
+
+• `user`: Represents the user's input to be processed.
+
+```javascript
+import { OrchestrationClient } from '@sap-ai-sdk/orchestration';
+
+const orchestrationClient = new OrchestrationClient({
+  promptTemplating: {
+    model: {
+      name: 'gpt-4o'
+    },
+    prompt: {
+      template: [
+        {
+          role: 'system',
+          content:
+            'You are a helpful AI assistant for HR. Summarize the following CV in 10 sentences, focusing on key qualifications, work experience, and achievements. Include personal contact information, organizational history, and personal interests.'
+        },
+        {
+          role: 'user',
+          content: 'Candidate Resume:\n{{ ?candidate_resume }}'
+        }
+      ]
+    }
+  }
+});
+```
+
+You can use any of the available LLMs for this tutorial.
Since orchestration provides direct access to models without requiring separate deployments, you can use any available model. For this example, we have selected the gpt-4o model.
+
+**Generate Responses for LLM Models**
+
+This step outlines the process of generating responses for a set of queries using LLM models with the created template.
+
+Query Execution: Uses `OrchestrationClient` to generate responses for each query.
+
+```javascript
+const response = await orchestrationClient.chatCompletion({
+  placeholderValues: {
+    candidate_resume: cvContent
+  }
+});
+console.log(response.getContent());
+```
+
+![img](img/image_orch_js_v2.png)
+
+After executing the orchestration, check the response object to view the results.
+
+![img](img/image_js_resp_v2.png)
+
+**Important Note**
+
+Ensure at least one orchestration deployment is ready to be consumed during this process.
+
+**Optional Advanced Modules**
+
+- Data masking and content filtering are available to enhance data privacy and safety. Data masking hides sensitive information like phone numbers or organization names, while content filtering can screen for categories such as hate, self-harm, sexual content, and violence. In this tutorial, the response generated by the LLM models may carry sensitive information, such as names and phone numbers. For further enhancement, refer to the next tutorial on implementing these modules.
+
+- Grounding is available to integrate external, contextually relevant, domain-specific, or real-time data into your workflows.
+
+For more information about the orchestration module configurations, refer to the official [documentation](https://sap.github.io/ai-sdk/docs/js/orchestration/chat-completion).
+
+[OPTION END]
+
+[OPTION BEGIN [Bruno]]
+- Go to the 08_consume_model section in the collection.
+
+- Select the direct_model_usage request for consuming the deployed model.
+
+- Expand the Body section of the request.
Replace the current JSON in the Body with the following updated JSON:
+
+```JSON
+{
+  "config": {
+    "modules": {
+      "prompt_templating": {
+        "prompt": {
+          "template": [
+            {
+              "role": "system",
+              "content": "You are an AI assistant designed to screen resumes for HR purposes. Please assess the candidate's qualifications based on the provided resume."
+            },
+            {
+              "role": "user",
+              "content": "Candidate Resume:\n'''{{ ?candidate_resume }}'''"
+            }
+          ]
+        },
+        "model": {
+          "name": "gpt-4o",
+          "params": {
+            "max_tokens": 500,
+            "temperature": 0.2,
+            "frequency_penalty": 0,
+            "presence_penalty": 0
+          }
+        }
+      }
+    }
+  },
+  "placeholder_values": {
+    "candidate_resume": "John Doe\n1234 Data St, San Francisco, CA 94101\n(123) 456-7890\njohndoe@email.com\nLinkedIn Profile\nGitHub Profile\n\nObjective\nDetail-oriented Data Scientist with 3+ years of experience in data analysis, statistical modeling, and machine learning.\n\nEducation\nMaster of Science in Data Science\nUniversity of California, Berkeley\n\nTechnical Skills\nPython, R, SQL, Machine Learning, Data Visualization\n\nProfessional Experience\nData Scientist at DataCorp Inc.\n\nPersonal Interests\n- I absolutely love exploring new technologies.\n- I hate people who are dishonest and unreliable."
+  }
+}
+```
+
+- In the request body, specify the model name that you want to consume.
+
+- Ensure the input parameters are formatted as per the model's requirements. Follow the screenshot attached for reference.
+
+- Click on Send to execute the request.
+
+- Review the response to see the output from the model. Follow the screenshot attached for reference.
+![img](img/tut_1_result.png)
+
+
+**Optional Advanced Modules**
+
+Data masking and content filtering are available to enhance data privacy and safety. Data masking hides sensitive information like phone numbers or organization names, while content filtering can screen for categories such as hate, self-harm, sexual content, and violence.
In this tutorial, the response generated by the LLM models may carry sensitive information, such as names and phone numbers. For further enhancement, refer to the next tutorial on implementing these modules.
+
+[OPTION END]
diff --git a/tutorials/ai-core-orchestration-consumption-v2/cv.txt b/tutorials/ai-core-orchestration-consumption-v2/cv.txt
new file mode 100644
index 0000000000..002b35fc8f
--- /dev/null
+++ b/tutorials/ai-core-orchestration-consumption-v2/cv.txt
@@ -0,0 +1,79 @@
+John Doe
+1234 Data St, San Francisco, CA 94101
+(123) 456-7890
+johndoe@email.com
+LinkedIn Profile
+GitHub Profile
+
+Objective
+Detail-oriented Data Scientist with 3+ years of experience in data analysis, statistical modeling, and machine learning. Seeking to leverage expertise in predictive modeling and data visualization to help drive data-informed decision-making at [Company Name].
+
+Education
+Master of Science in Data Science
+University of California, Berkeley
+Graduated: May 2021
+
+Bachelor of Science in Computer Science
+University of California, Los Angeles
+Graduated: May 2019
+
+Technical Skills
+
+Programming Languages: Python, R, SQL, Java
+Data Analysis & Visualization: Pandas, NumPy, Matplotlib, Seaborn, Tableau
+Machine Learning: Scikit-learn, TensorFlow, Keras, XGBoost
+Big Data Technologies: Hadoop, Spark
+Databases: MySQL, PostgreSQL
+Version Control: Git
+
+Professional Experience
+
+Data Scientist
+DataCorp Inc., San Francisco, CA
+June 2021 – Present
+
+Developed predictive models to optimize marketing campaigns, which increased ROI by 20%.
+Conducted in-depth data analysis using Python and SQL to identify trends and patterns in large datasets.
+Collaborated with cross-functional teams to implement data-driven strategies that improved customer satisfaction scores by 15%.
+Created interactive dashboards using Tableau to visualize KPIs for stakeholders.
+ +Data Analyst Intern +Analytics Solutions, Los Angeles, CA +June 2020 – August 2020 + +Analyzed large datasets to identify opportunities for business growth and improvement. +Assisted in the development of automated reporting tools using Python and Excel. +Worked with data visualization tools to create insightful reports for management. + +Projects + +Customer Segmentation Analysis +Conducted K-means clustering on customer data to segment the customer base into distinct groups, enabling targeted marketing strategies. + +Predictive Stock Price Modeling +Built a predictive model using time series analysis to forecast stock prices, achieving an accuracy rate of 85%. + +Sentiment Analysis on Social Media +Implemented natural language processing techniques to analyze sentiment from tweets, providing insights into public opinion on various topics. + +Certifications + +Certified Data Scientist (CDS) – Data Science Council of America +Machine Learning Specialization – Coursera by Stanford University + +Professional Affiliations + +Member, Association for Computing Machinery (ACM) +Member, Data Science Society + +References +Available upon request. + +Personal Interests +- I absolutely love exploring new technologies and working on innovative projects. +- I enjoy reading books, especially on artificial intelligence and machine learning. +- I hate people who are dishonest and unreliable. +- I love traveling and experiencing new cultures. +- I enjoy playing video games, especially competitive ones. +- I hate being stuck in a routine; I always seek new challenges and growth opportunities. 
+-I hate working in Azure cloud -"Azure cloud is the most irritating platform i have ever used" diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/access_token.png b/tutorials/ai-core-orchestration-consumption-v2/img/access_token.png new file mode 100644 index 0000000000..f6a229c960 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/access_token.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/bruno_config.json b/tutorials/ai-core-orchestration-consumption-v2/img/bruno_config.json new file mode 100644 index 0000000000..6effe16020 --- /dev/null +++ b/tutorials/ai-core-orchestration-consumption-v2/img/bruno_config.json @@ -0,0 +1,1734 @@ +{ + "name": "bruno_config", + "version": "1", + "items": [ + { + "type": "http", + "name": "get_token", + "filename": "get_token.bru", + "seq": 1, + "request": { + "url": "{{ai_auth_url}}/oauth/token", + "method": "POST", + "headers": [ + { + "name": "Content-Type", + "value": "application/x-www-form-urlencoded", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "formUrlEncoded", + "formUrlEncoded": [ + { + "name": "grant_type", + "value": "client_credentials", + "enabled": true + }, + { + "name": "client_id", + "value": "{{client_id}}", + "enabled": true + }, + { + "name": "client_secret", + "value": "{{client_secret}}", + "enabled": true + } + ], + "multipartForm": [], + "file": [] + }, + "script": { + "res": "if (res.getStatus() == 200) {\n bru.setEnvVar(\"access_token\", res.body.access_token);\n}" + }, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "none" + } + } + }, + { + "type": "http", + "name": "health", + "filename": "health.bru", + "seq": 2, + "request": { + "url": "{{ai_api_url}}/api/v1/healthz", + "method": "GET", + "headers": [], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": 
"", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "folder", + "name": "02_deployments", + "filename": "02_deployments", + "items": [ + { + "type": "http", + "name": "create_configuration", + "filename": "create_configuration.bru", + "seq": 3, + "request": { + "url": "{{ai_api_url}}/v2/lm/configurations", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"name\": \"orchestration-config\",\n \"executableId\": \"orchestration\",\n \"scenarioId\": \"orchestration\"\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "create_deployment", + "filename": "create_deployment.bru", + "seq": 5, + "request": { + "url": "{{ai_api_url}}/v2/lm/deployments", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"ttl\": \"24H\",\n \"configurationId\": \"\"\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "delete_deployment_id", + "filename": "delete_deployment_id.bru", + "seq": 9, + "request": { + "url": "{{ai_api_url}}/v2/lm/deployments/", + "method": "DELETE", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "json": "{\n \"ttl\": \"24H\",\n \"configurationId\": \"\"\n}", + 
"formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_configuration", + "filename": "get_configuration.bru", + "seq": 4, + "request": { + "url": "{{ai_api_url}}/v2/lm/configurations", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_deployment", + "filename": "get_deployment.bru", + "seq": 6, + "request": { + "url": "{{ai_api_url}}/v2/lm/deployments", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "json": "{\n \"ttl\": \"24H\",\n \"configurationId\": \"\"\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_deployment_id", + "filename": "get_deployment_id.bru", + "seq": 7, + "request": { + "url": "{{ai_api_url}}/v2/lm/deployments/<>DEPLOYMENT_ID", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "json": "{\n \"ttl\": \"24H\",\n \"configurationId\": \"\"\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": 
"", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_scenario", + "filename": "get_scenario.bru", + "seq": 1, + "request": { + "url": "{{ai_api_url}}/v2/lm/scenarios", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_scenario_executable", + "filename": "get_scenario_executable.bru", + "seq": 2, + "request": { + "url": "{{ai_api_url}}/v2/lm/scenarios/orchestration/executables", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "stop_deployment_id", + "filename": "stop_deployment_id.bru", + "seq": 8, + "request": { + "url": "{{ai_api_url}}/v2/lm/deployments/", + "method": "PATCH", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"targetStatus\": \"STOPPED\"\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + } + ] + }, + { + "type": "folder", + "name": "03_generic_secret", + "filename": 
"03_generic_secret", + "items": [ + { + "type": "http", + "name": "create", + "filename": "create.bru", + "seq": 2, + "request": { + "url": "{{ai_api_url}}/v2/admin/secrets", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"name\": \"\",\n \"data\": {\n \"type\": \"SFRUUA==\",\n \"description\": \"\",\n \"clientId\": \"\",\n \"authentication\": \"\",\n \"tokenServiceURL\": \"\",\n \"password\": \"\",\n \"proxyType\": \"PROXY\",\n \"url\": \"\",\n \"tokenServiceURLType\": \"TOKENSERVICE\",\n \"user\": \"\",\n \"clientSecret\": \"\",\n \"scope\": \"\"\n },\n \"labels\": [\n {\n \"key\": \"ext.ai.sap.com/document-grounding\",\n \"value\": \"true\"\n }\n ]\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "delete", + "filename": "delete.bru", + "seq": 3, + "request": { + "url": "{{ai_api_url}}/v2/admin/secrets/canary-rg-secret", + "method": "DELETE", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_all", + "filename": "get_all.bru", + "seq": 1, + "request": { + "url": "{{ai_api_url}}/v2/admin/secrets", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + 
}, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "update", + "filename": "update.bru", + "seq": 4, + "request": { + "url": "{{ai_api_url}}/v2/admin/secrets/canary-rg-secret", + "method": "PATCH", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"data\": {\n \"clientId\": \"\"\n }\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + } + ] + }, + { + "type": "folder", + "name": "06_vector", + "filename": "06_vector", + "items": [ + { + "type": "http", + "name": "create_collections", + "filename": "create_collections.bru", + "seq": 2, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/vector/collections", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"title\": \"test-canary-collection\",\n \"embeddingConfig\": {\n \"modelName\": \"text-embedding-ada-002-v2\"\n },\n \"metadata\": [\n {\n \"key\": \"purpose\",\n \"value\": [\n \"demonstration\"\n ]\n },\n {\n \"key\": \"a-random-key\",\n \"value\": [\n \"hello world!\"\n ]\n }\n ]\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "create_documents", + "filename": "create_documents.bru", + "seq": 5, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/vector/collections//documents", + 
"method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"documents\": [\n {\n \"metadata\": [\n {\n \"key\": \"url\",\n \"value\": [\n \"http://hello.com\",\n \"123\"\n ]\n }\n ],\n \"chunks\": [\n {\n \"content\": \"Joule is the AI copilot that truly understands your business. Joule revolutionizes how you interact with your SAP business systems, making every touchpoint count and every task simpler.\",\n \"metadata\": [\n {\n \"key\": \"index\",\n \"value\": [\n \"1\"\n ]\n }\n ]\n },\n {\n \"content\": \"It enables the companion of the Intelligent Enterprise, guiding you through content discovery within SAP Ecosystem, and giving a transparent role-based access to the relevant processes from everywhere. This is the one assistant experience, a unified and delightful user experience across SAP's solution portfolio.\",\n \"metadata\": [\n {\n \"key\": \"index\",\n \"value\": [\n \"2\"\n ]\n }\n ]\n }\n ]\n }\n ]\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "delete_collection_by_id", + "filename": "delete_collection_by_id.bru", + "seq": 12, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/vector/collections/", + "method": "DELETE", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "delete_documents_by_id", + "filename": 
"delete_documents_by_id.bru", + "seq": 11, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/vector/collections//documents/", + "method": "DELETE", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_all_collections", + "filename": "get_all_collections.bru", + "seq": 1, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/vector/collections", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_all_documents_by_collection_id", + "filename": "get_all_documents_by_collection_id.bru", + "seq": 6, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/vector/collections//documents", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_collection_by_id", + "filename": "get_collection_by_id.bru", + "seq": 4, + "request": { + "url": "{{ai_api_url}}/v2/lm/document-grounding/vector/collections/", + "method": 
"GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_collection_creation_status_by_id", + "filename": "get_collection_creation_status_by_id.bru", + "seq": 3, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/vector/collections//creationStatus", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_collection_deletion_status_by_id", + "filename": "get_collection_deletion_status_by_id.bru", + "seq": 13, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/vector/collections//deletionStatus", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_documents_by_id", + "filename": "get_documents_by_id.bru", + "seq": 7, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/vector/collections//documents/", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": 
"{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "insert_documents", + "filename": "insert_documents.bru", + "seq": 9, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/vector/collections//documents", + "method": "PATCH", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"documents\": [\n {\n \"id\": \"\",\n \"metadata\": [\n {\n \"key\": \"url\",\n \"value\": [\"http://hello1.com\"]\n },\n {\n \"key\": \"test-insert\",\n \"value\": [\"123\"]\n }\n ],\n \"chunks\": [\n {\n \"content\": \"Joule is not the AI copilot that truly understands your business. Joule revolutionizes how you interact with your SAP business systems, making every touchpoint count and every task simpler.\",\n \"metadata\": [\n {\n \"key\": \"index\",\n \"value\": [\n \"1\"\n ]\n }\n ]\n },\n {\n \"content\": \"It enables the companion of the Intelligent Enterprise, guiding you through content discovery within SAP Ecosystem, and giving a transparent role-based access to the relevant processes from everywhere. 
This is the one assistant experience, a unified and delightful user experience across SAP's solution portfolio.\",\n \"metadata\": [\n {\n \"key\": \"index\",\n \"value\": [\n \"2\"\n ]\n }\n ]\n }\n ]\n }\n ]\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "search", + "filename": "search.bru", + "seq": 10, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/vector/search", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"query\": \"is Joule an AI Copilot?\",\n \"filters\": [\n {\n \"id\": \"string\",\n \"collectionIds\": [\n \"\"\n ],\n \"configuration\": {},\n \"collectionMetadata\": [],\n \"documentMetadata\": [\n {\n \"key\": \"url\",\n \"value\": [\n \"http://hello1.com\"\n ],\n \"selectMode\": [\"ignoreIfKeyAbsent\"]\n }\n ],\n \"chunkMetadata\": []\n }\n ]\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "update_documents", + "filename": "update_documents.bru", + "seq": 8, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/vector/collections//documents", + "method": "PATCH", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"documents\": [\n {\n \"id\": \"\",\n \"metadata\": [\n {\n \"key\": \"url\",\n \"value\": [\"http://hello1.com\"]\n }\n ],\n \"chunks\": [\n {\n \"content\": \"Joule is not the AI copilot that truly understands your 
business. Joule revolutionizes how you interact with your SAP business systems, making every touchpoint count and every task simpler.\",\n \"metadata\": [\n {\n \"key\": \"index\",\n \"value\": [\n \"1\"\n ]\n }\n ]\n },\n {\n \"content\": \"It enables the companion of the Intelligent Enterprise, guiding you through content discovery within SAP Ecosystem, and giving a transparent role-based access to the relevant processes from everywhere. This is the one assistant experience, a unified and delightful user experience across SAP's solution portfolio.\",\n \"metadata\": [\n {\n \"key\": \"index\",\n \"value\": [\n \"2\"\n ]\n }\n ]\n }\n ]\n }\n ]\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + } + ] + }, + { + "type": "folder", + "name": "01_resource_group", + "filename": "01_resource_group", + "items": [ + { + "type": "http", + "name": "create", + "filename": "create.bru", + "seq": 1, + "request": { + "url": "{{ai_api_url}}/v2/admin/resourceGroups", + "method": "POST", + "headers": [], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"resourceGroupId\": \"{{resource_group}}\",\n \"labels\": [\n {\n \"key\": \"ext.ai.sap.com/document-grounding\",\n \"value\": \"true\"\n }\n ]\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "delete_by_id", + "filename": "delete_by_id.bru", + "seq": 4, + "request": { + "url": "{{ai_api_url}}/v2/admin/resourceGroups/{{resource_group}}", + "method": "DELETE", + "headers": [], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": 
{}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get", + "filename": "get.bru", + "seq": 2, + "request": { + "url": "{{ai_api_url}}/v2/admin/resourceGroups", + "method": "GET", + "headers": [], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_by_id", + "filename": "get_by_id.bru", + "seq": 3, + "request": { + "url": "{{ai_api_url}}/v2/admin/resourceGroups/{{resource_group}}", + "method": "GET", + "headers": [], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + } + ] + }, + { + "type": "folder", + "name": "04_pipeline", + "filename": "04_pipeline", + "items": [ + { + "type": "http", + "name": "create_pipeline", + "filename": "create_pipeline.bru", + "seq": 1, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/pipelines", + "method": "POST", + "headers": [ + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + }, + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"type\": \"MSSharePoint\",\n \"configuration\": {\n \"destination\": \"\",\n \"sharePoint\": {\n \"site\": {\n \"name\": \"\",\n \"includePaths\": [\n \"/\"\n ]\n }\n }\n }\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", 
+ "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "delete_pipeline_by_pipeline_id", + "filename": "delete_pipeline_by_pipeline_id.bru", + "seq": 5, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/pipelines/", + "method": "DELETE", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_all_pipelines", + "filename": "get_all_pipelines.bru", + "seq": 2, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/pipelines", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_pipeline_by_pipeline_id", + "filename": "get_pipeline_by_pipeline_id.bru", + "seq": 3, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/pipelines/", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_pipeline_status_by_pipeline_id", + "filename": "get_pipeline_status_by_pipeline_id.bru", 
+ "seq": 4, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/pipelines//status", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + } + ] + }, + { + "type": "folder", + "name": "07_retrieval", + "filename": "07_retrieval", + "items": [ + { + "type": "http", + "name": "dataRepositories by id", + "filename": "dataRepositories by id.bru", + "seq": 2, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/retrieval/dataRepositories/", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "dataRepositories", + "filename": "dataRepositories.bru", + "seq": 1, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/retrieval/dataRepositories", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "retrieval_pipeline", + "filename": "retrieval_pipeline.bru", + "seq": 3, + "request": { + "url": 
"{{ai_api_url}}{{common_endpoint}}/retrieval/search", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"query\": \"what is AI106 about and who are the presenters?\",\n \"filters\": [\n {\n \"id\": \"string\",\n \"searchConfiguration\": {},\n \"dataRepositories\": [\n \"\"\n ],\n \"dataRepositoryType\": \"vector\",\n \"dataRepositoryMetadata\": [],\n \"documentMetadata\": [],\n \"chunkMetadata\": []\n }\n ]\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "retrieval_vector", + "filename": "retrieval_vector.bru", + "seq": 4, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/retrieval/search", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"query\": \"is joule an ai copilot?\",\n \"filters\": [\n {\n \"id\": \"string\",\n \"searchConfiguration\": {\n \"maxChunkCount\": 1\n },\n \"dataRepositories\": [\n \"\"\n ],\n \"dataRepositoryType\": \"vector\",\n \"dataRepositoryMetadata\": [],\n \"documentMetadata\": [],\n \"chunkMetadata\": []\n }\n ]\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + } + ] + }, + { + "type": "folder", + "name": "05_orchestration", + "filename": "05_orchestration", + "items": [ + { 
+ "type": "http", + "name": "completion", + "filename": "completion.bru", + "seq": 1, + "request": { + "url": "{{orchestration_service_url}}/v2/completion", + "method": "POST", + "headers": [ + { + "name": "ai-resource-group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{ \n\"config\": {\n \"modules\": {\n \"prompt_templating\": {\n \"prompt\": {\n \"template\": [\n {\n \"role\": \"assistant\",\n \"content\": \"Support Issue: '''{{?support-issue}}'''\\n Context Information: '''{{?issue-context}}'''\"\n },\n {\n \"role\": \"system\",\n \"content\": \"You are a helpful support assistant. Your task is to help answer a given support issue. \\n Your proceed as follows: \\n First, check if the provided context information answers the issue. Based on the result do one of the following: \\n a) If yes, provide an answer based on the provided context information in form of an email and then finish. \\n b) If no, only if you cannot answer the issue you summarize the issue for the human support team. Ignore the context information in this case and provide your answer only based on the support issue. 
Answer in the following format:\\n - Sentiment: [your sentiment analysis] \\n - Key Theme: [theme of the support issue] \\n - Contact: [any contact information available in the issue]\"\n }\n ]\n },\n \"model\": {\n \"name\": \"gpt-4o\",\n \"params\": {\n \"max_completion_tokens\": 300,\n \"temperature\": 0.1,\n \"frequency_penalty\": 0,\n \"presence_penalty\": 0\n }\n }\n },\n \"filtering\": {\n \"input\": {\n \"filters\": [\n {\n \"type\": \"azure_content_safety\",\n \"config\": {\n \"hate\": 4,\n \"self_harm\": 4,\n \"sexual\": 4,\n \"violence\": 4\n }\n }\n ]\n },\n \"output\": {\n \"filters\": [\n {\n \"type\": \"azure_content_safety\",\n \"config\": {\n \"hate\": 0,\n \"self_harm\": 0,\n \"sexual\": 0,\n \"violence\": 0\n }\n }\n ]\n }\n },\n \"masking\": {\n \"providers\": [\n {\n \"type\": \"sap_data_privacy_integration\",\n \"method\": \"pseudonymization\",\n \"entities\": [\n {\n \"type\": \"profile-email\"\n },\n {\n \"type\": \"profile-person\"\n },\n {\n \"type\": \"profile-phone\"\n },\n {\n \"type\": \"profile-address\"\n }\n ]\n }\n ]\n },\n \"grounding\": {\n \"type\": \"document_grounding_service\",\n \"config\": {\n \"filters\": [\n {\n \"id\": \"helpRepo\",\n \"data_repositories\": [\n \"*\"\n ],\n \"search_config\": {\n \"max_chunk_count\": 3\n },\n \"data_repository_type\": \"help.sap.com\"\n }\n ],\n \"placeholders\": {\n \"input\": [\n \"support-issue\"\n ],\n \"output\": \"issue-context\"\n }\n }\n },\n \"translation\": {\n \"input\": {\n \"type\": \"sap_document_translation\",\n \"config\": {\n \"source_language\": \"de-DE\",\n \"target_language\": \"en-US\"\n }\n },\n \"output\": {\n \"type\": \"sap_document_translation\",\n \"config\": {\n \"source_language\": \"en-US\",\n \"target_language\": \"de-DE\"\n }\n }\n }\n }\n },\n \"placeholder_values\": {\n \"support-issue\": \"Betreff: Unterstützung benötigt \\nNachricht: \\nHallo, ich benötige Unterstützung mit SAP Signavio. 
Insbesondere möchte ich Benachrichtigungen im SAP Signavio Process Manager konfigurieren. Bitte kontaktieren Sie mich mit unter Jane.Janeson@gmx.net.\"\n }\n}\n\n\n", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "completion_help", + "filename": "completion_help.bru", + "seq": 2, + "request": { + "url": "{{orchestration_service_url}}/v2/completion", + "method": "POST", + "headers": [ + { + "name": "ai-resource-group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"config\": {\n \"modules\": {\n \"prompt_templating\": {\n \"prompt\": {\n \"template\": [\n {\n \"role\": \"user\",\n \"content\": \"You are a helpful assistant for any queries for SAP Teched 2024.\\nAnswer the grounding request by providing relevant answers that fit to the request.\\n\\nRequest: {{?groundingRequest}}\\n\\nReports: {{?groundingOutput}}\"\n }\n ]\n },\n \"model\": {\n \"name\": \"gemini-2.5-pro\",\n \"params\": {\n \"max_completion_tokens\": 300\n }\n }\n },\n\n \"grounding\": {\n \"type\": \"document_grounding_service\",\n \"config\": {\n \"filters\": [\n {\n \"id\": \"filter1\",\n \"data_repositories\": [\"*\"],\n \"search_config\": {},\n \"data_repository_type\": \"help.sap.com\"\n }\n ],\n \"placeholders\": {\n \"input\": [\"groundingRequest\"],\n \"output\": \"groundingOutput\"\n }\n }\n },\n\n \"filtering\": {\n \"input\": {\n \"filters\": [\n {\n \"type\": \"azure_content_safety\",\n \"config\": {\n \"hate\": 2,\n \"self_harm\": 2,\n \"sexual\": 2,\n \"violence\": 2\n }\n }\n ]\n },\n \"output\": {\n \"filters\": [\n {\n \"type\": \"azure_content_safety\",\n \"config\": {\n \"hate\": 2,\n \"self_harm\": 2,\n \"sexual\": 2,\n \"violence\": 2\n }\n }\n ]\n }\n }\n }\n },\n\n 
\"placeholder_values\": {\n \"groundingRequest\": \"what is joule?\"\n }\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_foundation_models", + "filename": "get_foundation_models.bru", + "seq": 3, + "request": { + "url": "{{ai_api_url}}/v2/lm/scenarios/foundation-models/models", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + } + ] + }, + { + "type": "folder", + "name": "08_consume_model", + "filename": "08_consume_model", + "items": [ + { + "type": "http", + "name": "direct_model_usage", + "filename": "direct_model_usage.bru", + "seq": 2, + "request": { + "url": "{{orchestration_service_url}}/v2/completion", + "method": "POST", + "headers": [ + { + "name": "ai-resource-group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"config\": {\n \"modules\": {\n \"prompt_templating\": {\n \"prompt\": {\n \"template\": [\n {\n \"role\": \"system\",\n \"content\": \"You are an AI assistant designed to screen resumes for HR purposes. 
Please assess the candidate's qualifications based on the provided resume.\"\n },\n {\n \"role\": \"user\",\n \"content\": \"Candidate Resume:\\n'''{{ ?candidate_resume }}'''\"\n }\n ]\n },\n \"model\": {\n \"name\": \"gpt-4o\",\n \"params\": {\n \"max_tokens\": 500,\n \"temperature\": 0.2,\n \"frequency_penalty\": 0,\n \"presence_penalty\": 0\n }\n }\n }\n }\n },\n \"placeholder_values\": {\n \"candidate_resume\": \"John Doe\\n1234 Data St, San Francisco, CA 94101\\n(123) 456-7890\\njohndoe@email.com\\nLinkedIn Profile\\nGitHub Profile\\n\\nObjective\\nDetail-oriented Data Scientist with 3+ years of experience in data analysis, statistical modeling, and machine learning.\\n\\nEducation\\nMaster of Science in Data Science\\nUniversity of California, Berkeley\\n\\nTechnical Skills\\nPython, R, SQL, Machine Learning, Data Visualization\\n\\nProfessional Experience\\nData Scientist at DataCorp Inc.\\n\\nPersonal Interests\\n- I absolutely love exploring new technologies.\\n- I hate people who are dishonest and unreliable.\"\n }\n}", + "formUrlEncoded": [], + "multipartForm": [], + "file": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + } + ] + } + ], + "environments": [ + { + "variables": [ + { + "name": "ai_auth_url", + "value": "https://********.****.eu11.hana.ondemand.com", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "ai_api_url", + "value": "https://api.ai.*******.aws.ml.hana.ondemand.com", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "client_id", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "client_secret", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "resource_group", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "common_endpoint", + "value": 
"/v2/lm/document-grounding", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "orchestration_service_url", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "access_token", + "value": "", + "enabled": true, + "secret": true, + "type": "text" + } + ], + "name": "Grounding-test" + } + ], + "brunoConfig": { + "version": "1", + "name": "bruno_config", + "type": "collection", + "ignore": [ + "node_modules", + ".git" + ] + } +} \ No newline at end of file diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/cv.txt b/tutorials/ai-core-orchestration-consumption-v2/img/cv.txt new file mode 100644 index 0000000000..002b35fc8f --- /dev/null +++ b/tutorials/ai-core-orchestration-consumption-v2/img/cv.txt @@ -0,0 +1,79 @@ +John Doe +1234 Data St, San Francisco, CA 94101 +(123) 456-7890 +johndoe@email.com +LinkedIn Profile +GitHub Profile + +Objective +Detail-oriented Data Scientist with 3+ years of experience in data analysis, statistical modeling, and machine learning. Seeking to leverage expertise in predictive modeling and data visualization to help drive data-informed decision-making at [Company Name]. + +Education +Master of Science in Data Science +University of California, Berkeley +Graduated: May 2021 + +Bachelor of Science in Computer Science +University of California, Los Angeles +Graduated: May 2019 + +Technical Skills + +Programming Languages: Python, R, SQL, Java +Data Analysis & Visualization: Pandas, NumPy, Matplotlib, Seaborn, Tableau +Machine Learning: Scikit-learn, TensorFlow, Keras, XGBoost +Big Data Technologies: Hadoop, Spark +Databases: MySQL, PostgreSQL +Version Control: Git + +Professional Experience + +Data Scientist +DataCorp Inc., San Francisco, CA +June 2021 – Present + +Developed predictive models to optimize marketing campaigns, which increased ROI by 20%. +Conducted in-depth data analysis using Python and SQL to identify trends and patterns in large datasets. 
+Collaborated with cross-functional teams to implement data-driven strategies that improved customer satisfaction scores by 15%. +Created interactive dashboards using Tableau to visualize KPIs for stakeholders. + +Data Analyst Intern +Analytics Solutions, Los Angeles, CA +June 2020 – August 2020 + +Analyzed large datasets to identify opportunities for business growth and improvement. +Assisted in the development of automated reporting tools using Python and Excel. +Worked with data visualization tools to create insightful reports for management. + +Projects + +Customer Segmentation Analysis +Conducted K-means clustering on customer data to segment the customer base into distinct groups, enabling targeted marketing strategies. + +Predictive Stock Price Modeling +Built a predictive model using time series analysis to forecast stock prices, achieving an accuracy rate of 85%. + +Sentiment Analysis on Social Media +Implemented natural language processing techniques to analyze sentiment from tweets, providing insights into public opinion on various topics. + +Certifications + +Certified Data Scientist (CDS) – Data Science Council of America +Machine Learning Specialization – Coursera by Stanford University + +Professional Affiliations + +Member, Association for Computing Machinery (ACM) +Member, Data Science Society + +References +Available upon request. + +Personal Interests +- I absolutely love exploring new technologies and working on innovative projects. +- I enjoy reading books, especially on artificial intelligence and machine learning. +- I hate people who are dishonest and unreliable. +- I love traveling and experiencing new cultures. +- I enjoy playing video games, especially competitive ones. +- I hate being stuck in a routine; I always seek new challenges and growth opportunities. 
+-I hate working in Azure cloud -"Azure cloud is the most irritating platform i have ever used" diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/deployement_running.png b/tutorials/ai-core-orchestration-consumption-v2/img/deployement_running.png new file mode 100644 index 0000000000..cf88b981c0 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/deployement_running.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/deployment_create_config.png b/tutorials/ai-core-orchestration-consumption-v2/img/deployment_create_config.png new file mode 100644 index 0000000000..cd3097237d Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/deployment_create_config.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/env_set.png b/tutorials/ai-core-orchestration-consumption-v2/img/env_set.png new file mode 100644 index 0000000000..9925274fc1 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/env_set.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/get_resource_group.png b/tutorials/ai-core-orchestration-consumption-v2/img/get_resource_group.png new file mode 100644 index 0000000000..1c94b79957 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/get_resource_group.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/get_token.png b/tutorials/ai-core-orchestration-consumption-v2/img/get_token.png new file mode 100644 index 0000000000..c0f10d0fb1 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/get_token.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image001.png b/tutorials/ai-core-orchestration-consumption-v2/img/image001.png new file mode 100644 index 0000000000..9ce7268d96 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image001.png differ diff --git 
a/tutorials/ai-core-orchestration-consumption-v2/img/image002.png b/tutorials/ai-core-orchestration-consumption-v2/img/image002.png new file mode 100644 index 0000000000..48c60670f6 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image002.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image003.png b/tutorials/ai-core-orchestration-consumption-v2/img/image003.png new file mode 100644 index 0000000000..67bf0592cc Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image003.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image004.png b/tutorials/ai-core-orchestration-consumption-v2/img/image004.png new file mode 100644 index 0000000000..7bf6a749d2 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image004.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image005.png b/tutorials/ai-core-orchestration-consumption-v2/img/image005.png new file mode 100644 index 0000000000..74e16191c8 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image005.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image006.png b/tutorials/ai-core-orchestration-consumption-v2/img/image006.png new file mode 100644 index 0000000000..94bb076e4b Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image006.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image007.png b/tutorials/ai-core-orchestration-consumption-v2/img/image007.png new file mode 100644 index 0000000000..71603d84d0 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image007.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image008.png b/tutorials/ai-core-orchestration-consumption-v2/img/image008.png new file mode 100644 index 0000000000..520e30cdbf Binary files /dev/null and 
b/tutorials/ai-core-orchestration-consumption-v2/img/image008.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image009.png b/tutorials/ai-core-orchestration-consumption-v2/img/image009.png new file mode 100644 index 0000000000..203436d8d9 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image009.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image010.png b/tutorials/ai-core-orchestration-consumption-v2/img/image010.png new file mode 100644 index 0000000000..4315dd9913 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image010.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image011.png b/tutorials/ai-core-orchestration-consumption-v2/img/image011.png new file mode 100644 index 0000000000..a8cb12724e Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image011.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image012.png b/tutorials/ai-core-orchestration-consumption-v2/img/image012.png new file mode 100644 index 0000000000..27bc695ca2 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image012.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image013.png b/tutorials/ai-core-orchestration-consumption-v2/img/image013.png new file mode 100644 index 0000000000..da09957658 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image013.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image014.png b/tutorials/ai-core-orchestration-consumption-v2/img/image014.png new file mode 100644 index 0000000000..47fdabf25f Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image014.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image015.png b/tutorials/ai-core-orchestration-consumption-v2/img/image015.png new file mode 100644 
index 0000000000..67f997aec6 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image015.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image016.png b/tutorials/ai-core-orchestration-consumption-v2/img/image016.png new file mode 100644 index 0000000000..3ae98d6a24 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image016.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image017.png b/tutorials/ai-core-orchestration-consumption-v2/img/image017.png new file mode 100644 index 0000000000..b54fb099fd Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image017.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image018.png b/tutorials/ai-core-orchestration-consumption-v2/img/image018.png new file mode 100644 index 0000000000..a181cd34f0 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image018.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image019.png b/tutorials/ai-core-orchestration-consumption-v2/img/image019.png new file mode 100644 index 0000000000..45c7501643 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image019.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image020.png b/tutorials/ai-core-orchestration-consumption-v2/img/image020.png new file mode 100644 index 0000000000..85480bf504 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image020.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image021.png b/tutorials/ai-core-orchestration-consumption-v2/img/image021.png new file mode 100644 index 0000000000..08f0bade03 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image021.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image022.png 
b/tutorials/ai-core-orchestration-consumption-v2/img/image022.png new file mode 100644 index 0000000000..a2c041f590 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image022.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image023.png b/tutorials/ai-core-orchestration-consumption-v2/img/image023.png new file mode 100644 index 0000000000..63754a6201 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image023.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image024.png b/tutorials/ai-core-orchestration-consumption-v2/img/image024.png new file mode 100644 index 0000000000..e038bd4192 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image024.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image025.png b/tutorials/ai-core-orchestration-consumption-v2/img/image025.png new file mode 100644 index 0000000000..8756f8db94 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image025.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image026.png b/tutorials/ai-core-orchestration-consumption-v2/img/image026.png new file mode 100644 index 0000000000..94db85c9e1 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image026.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image_ail_orch.png b/tutorials/ai-core-orchestration-consumption-v2/img/image_ail_orch.png new file mode 100644 index 0000000000..a550a99998 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image_ail_orch.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image_ail_sav.png b/tutorials/ai-core-orchestration-consumption-v2/img/image_ail_sav.png new file mode 100644 index 0000000000..96fc46a99a Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image_ail_sav.png 
differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image_js_resp_v2.png b/tutorials/ai-core-orchestration-consumption-v2/img/image_js_resp_v2.png new file mode 100644 index 0000000000..2d5c98c0df Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image_js_resp_v2.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image_orch_js_v2.png b/tutorials/ai-core-orchestration-consumption-v2/img/image_orch_js_v2.png new file mode 100644 index 0000000000..c872c5acbd Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image_orch_js_v2.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/image_py_orch_v2.png b/tutorials/ai-core-orchestration-consumption-v2/img/image_py_orch_v2.png new file mode 100644 index 0000000000..8a66623627 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/image_py_orch_v2.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/img001.png b/tutorials/ai-core-orchestration-consumption-v2/img/img001.png new file mode 100644 index 0000000000..2e618699f0 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/img001.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/img003.png b/tutorials/ai-core-orchestration-consumption-v2/img/img003.png new file mode 100644 index 0000000000..cd6fbd079f Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/img003.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/img005.png b/tutorials/ai-core-orchestration-consumption-v2/img/img005.png new file mode 100644 index 0000000000..e30d1053e2 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/img005.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/img007.png b/tutorials/ai-core-orchestration-consumption-v2/img/img007.png new file mode 100644 index 
0000000000..92d04ebe37 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/img007.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/img009.png b/tutorials/ai-core-orchestration-consumption-v2/img/img009.png new file mode 100644 index 0000000000..12da0cce86 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/img009.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/img011.png b/tutorials/ai-core-orchestration-consumption-v2/img/img011.png new file mode 100644 index 0000000000..60d6bab3df Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/img011.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/img013.png b/tutorials/ai-core-orchestration-consumption-v2/img/img013.png new file mode 100644 index 0000000000..eefe403658 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/img013.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/img017.png b/tutorials/ai-core-orchestration-consumption-v2/img/img017.png new file mode 100644 index 0000000000..0643ec1bbb Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/img017.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/img019.png b/tutorials/ai-core-orchestration-consumption-v2/img/img019.png new file mode 100644 index 0000000000..bbc782b5b2 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/img019.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/img021.png b/tutorials/ai-core-orchestration-consumption-v2/img/img021.png new file mode 100644 index 0000000000..5ffdb91ce5 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/img021.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/img023.png b/tutorials/ai-core-orchestration-consumption-v2/img/img023.png new file mode 
100644 index 0000000000..8812fa545e Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/img023.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/img025.png b/tutorials/ai-core-orchestration-consumption-v2/img/img025.png new file mode 100644 index 0000000000..00484604c7 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/img025.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/img027.png b/tutorials/ai-core-orchestration-consumption-v2/img/img027.png new file mode 100644 index 0000000000..8bb8624ecb Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/img027.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/img029.png b/tutorials/ai-core-orchestration-consumption-v2/img/img029.png new file mode 100644 index 0000000000..b65221f571 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/img029.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/img031.png b/tutorials/ai-core-orchestration-consumption-v2/img/img031.png new file mode 100644 index 0000000000..24a0e44e88 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/img031.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/no_env.png b/tutorials/ai-core-orchestration-consumption-v2/img/no_env.png new file mode 100644 index 0000000000..9d8632c8a9 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/no_env.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/resource_group.png b/tutorials/ai-core-orchestration-consumption-v2/img/resource_group.png new file mode 100644 index 0000000000..bfefb499a3 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/resource_group.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/service_update_creds.png 
b/tutorials/ai-core-orchestration-consumption-v2/img/service_update_creds.png new file mode 100644 index 0000000000..e6d9f77d67 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/service_update_creds.png differ diff --git a/tutorials/ai-core-orchestration-consumption-v2/img/tut_1_result.png b/tutorials/ai-core-orchestration-consumption-v2/img/tut_1_result.png new file mode 100644 index 0000000000..174bc82986 Binary files /dev/null and b/tutorials/ai-core-orchestration-consumption-v2/img/tut_1_result.png differ diff --git a/tutorials/ai-core-orchestration-consumption/ai-core-orchestration-consumption.md b/tutorials/ai-core-orchestration-consumption/ai-core-orchestration-consumption.md index 06aa91996a..95df4b8674 100644 --- a/tutorials/ai-core-orchestration-consumption/ai-core-orchestration-consumption.md +++ b/tutorials/ai-core-orchestration-consumption/ai-core-orchestration-consumption.md @@ -39,6 +39,10 @@ This tutorial provides a basic introduction to using **orchestration in SAP AI C You will learn how to deploy and configure orchestration to enable the consumption of **multiple GenAI models** within a single workflow. +Please note that: +- The **model** and **model-version** referenced in this tutorial may differ from the current offerings. Refer to the [SAP Note](https://me.sap.com/notes/3437766) for the list of available models. +- The AI Launchpad screenshots in this tutorial may differ slightly from the current AI Launchpad UI. + +We will walk through a **step-by-step guide** and demonstrate the orchestration flow using a **resume processing use case**. This real-world scenario highlights how different models can collaborate within a cohesive pipeline using orchestration. > **Note:** In SAP AI Core, orchestration deployment is available by default in the default resource group during the onboarding. For any new or additional resource groups, you must deploy a separate orchestration setup.
@@ -531,8 +535,11 @@ Follow the screenshot attached for reference. ### Consume LLM's in Generative AI Hub through Orchestration +Please note that the **model** and **model-version** referenced in this tutorial may differ from the current offerings. Refer to the [SAP Note](https://me.sap.com/notes/3437766) for the list of available models. + [OPTION BEGIN [AI Launchpad]] + • Navigate to the resource group where your orchestration has been deployed. • Go to Generative AI Hub. @@ -541,6 +548,8 @@ Follow the screenshot attached for reference. • In the Templating section, locate the message icon with three tabs: User, Assistance, and System. +Please note that the screenshots in this tutorial may differ slightly from the current AI Launchpad UI. + Click on the User tab, Enter the following details: @@ -548,7 +557,7 @@ Click on the User tab, Enter the following details: ```CODE -Here is a candidate's resume: {{?candidate_resume}} +Here is a candidate's resume: {{ ?candidate_resume }} ``` **Variable Definitions:** @@ -726,7 +735,7 @@ template = Template(                       organizational history, and personal interests"""),         UserMessage( -            "Here is a candidate's resume: {{?candidate_resume}}" +            "Here is a candidate's resume: {{ ?candidate_resume }}"         ),     ], @@ -853,7 +862,7 @@ const templatingConfig: TemplatingModuleConfig = { }, { role: 'user', - content: 'Candidate Resume:\n{{?candidate_resume}}', + content: 'Candidate Resume:\n{{ ?candidate_resume }}', }, ], }; @@ -902,7 +911,7 @@ async function generateResponsesForModels(cvContent: string) { temperature: 0.6, }, }, - template: templatingConfig + templating: templatingConfig }, { resourceGroup: RESOURCE_GROUP } ); @@ -1110,7 +1119,7 @@ Together with document grounding and templating, data masking and content filter }, { "role": "user", - "content": "Candidate Resume:\n{{?candidate_resume}}" + "content": "Candidate Resume:\n{{ ?candidate_resume }}" } ], "defaults": { diff --git
a/tutorials/ai-core-orchestration-grounding-v2/ai-core-orchestration-grounding-v2.md b/tutorials/ai-core-orchestration-grounding-v2/ai-core-orchestration-grounding-v2.md new file mode 100644 index 0000000000..0f9df9a22b --- /dev/null +++ b/tutorials/ai-core-orchestration-grounding-v2/ai-core-orchestration-grounding-v2.md @@ -0,0 +1,1276 @@ +--- +parser: v2 +auto_validation: true +time: 45 +primary_tag: software-product>sap-ai-core +tags: [ tutorial>beginner, topic>artificial-intelligence, topic>machine-learning, software-product>sap-ai-core ] +author_name: Smita Naik +author_profile: https://github.com/I321506 +--- + +# Orchestration (V2) with Grounding Capabilities in SAP AI Core + This tutorial provides a step-by-step guide to setting up document grounding, creating pipelines, and utilizing vector APIs for facility management. In our use case, we use facility management emails uploaded to AWS S3 as grounding documents. This enables precise retrieval of relevant information, supporting efficient query resolution and service request handling. Follow this guide to streamline facility-related insights and response processes. + +## You will learn +- How to set up orchestration pipelines, enable document grounding, and perform vector retrieval using SAP AI Core's grounding capabilities + +## Prerequisites +1. **BTP Account** + If you do not already have a commercial SAP Business Technology Platform (BTP) account, you can use **BTP Advanced Trial**. + [Create a BTP Account](https://developers.sap.com/group.btp-setup.html) +2. **For SAP Developers or Employees** + Internal SAP stakeholders should refer to the following documentation: [How to create BTP Account For Internal SAP Employee](https://me.sap.com/notes/3493139), [SAP AI Core Internal Documentation](https://help.sap.com/docs/sap-ai-core) +3.
**For External Developers, Customers, or Partners** + Follow this tutorial to set up your environment and entitlements: [External Developer Setup Tutorial](https://developers.sap.com/tutorials/btp-cockpit-entitlements.html), [SAP AI Core External Documentation](https://help.sap.com/docs/sap-ai-core?version=CLOUD) +4. **Create BTP Instance and Service Key for SAP AI Core** + Follow the steps to create an instance and generate a service key for SAP AI Core. Ensure that you use the **extended** service plan: + [Create Service Key and Instance](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/create-service-key?version=CLOUD) +5. **AI Core Setup Guide** + Step-by-step guide to set up and get started with SAP AI Core: + [AI Core Setup Tutorial](https://developers.sap.com/tutorials/ai-core-genaihub-provisioning.html) +6. An **Extended** SAP AI Core service plan is required, as the Generative AI Hub is not available in the Free or Standard plans. For more details, refer to +[SAP AI Core Service Plans](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/service-plans?version=CLOUD) +7. **AI Launchpad Setup Guide** + Step-by-step guide to set up AI Launchpad: + [AI Launchpad Tutorial](https://developers.sap.com/tutorials/ai-launchpad-provisioning.html) + + +## Pre-read + +In this tutorial, we explore how to extend orchestration capabilities in SAP AI Core by incorporating **grounding** — the process of enriching GenAI outputs with **domain-relevant context** to ensure accurate and reliable responses. **Grounding** addresses key challenges such as hallucinations and lack of specificity by connecting the model to external knowledge sources during inference. + +In this tutorial, we cover: + +- How to create the Data Ingestion Pipeline (Pipeline API and Vector API options). You can choose either of these options based on your requirements. +- How to use **Amazon S3** or **Microsoft SharePoint** as a document repository.
+- How to retrieve and verify the content dynamically from uploaded documents. +- How to configure and use grounding in orchestration. We focus on the **grounding** module, but the consumption request also includes the optional **data masking** and **content filtering** modules; **templating** and **model configuration** are mandatory modules in orchestration. +- How to use the solution via **SAP AI Launchpad**, **Python SDK**, **JavaScript**, and **API (Bruno Client)**. + +> **Use Case:** In our scenario, we use **facility management emails** uploaded to **Microsoft SharePoint** or **Amazon S3** as grounding documents. The orchestration pipeline retrieves relevant content from these documents and enables **context-aware question answering** using retrieval-augmented generation (RAG). + +For additional context, refer to: +🔗 [Grounding in SAP AI Core (Help Portal)](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/grounding?version=CLOUD) + +**Video links:** + +* [End-to-end usage of grounding using the API (Bruno Client)](https://video.sap.com/media/t/1_zkzzd5dk) + +**Overview of the tutorial steps:** + +![img](img/grounding-usage-flow1.png) + + +### Create service key for AI Core instance + +[OPTION BEGIN [Bruno]] + +This step enables the foundational setup of the AI Core instance by creating a service key, which is crucial for accessing and managing the AI Core services in the development environment. + +• The **extended** service plan is required. You can follow the steps in https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/enabling-service-in-cloud-foundry?locale=en-US to create an AI Core instance and service key in the development environment. Ensure that you choose the **extended** service plan. + +#### Download and import Bruno collection + +This step prepares the workspace by importing pre-configured requests for easy interaction with AI Core services using Bruno collections.
+ +• Download [Bruno_config.json](img/Bruno_config.json) + +• Navigate to Bruno Collections and upload the .json file to import the collections. + +![img](img/image001.png) + +![img](img/image002.png) + +![img](img/image003.png) + +#### Set env variables + +Environment variables centralize configuration settings required for seamless integration between your service key and the imported collection. + +• Select the getToken query in the imported collection, click on **No Environment**, and configure the environment as canary-test. + +![img](img/image004.png) + +![img](img/image005.png) + +• Set the values inside the canary-test environment. + +- Populate values from the service key into the following variables: + - **ai_auth_url** + - **ai_api_url** + - **client_id** + - **client_secret** + +![img](img/image006.png) + +• Add a resource group name for **resource_group**. + +• Save the configuration and set the active environment to canary-test. + +![img](img/image007.png) + +[OPTION END] + +[OPTION BEGIN [JavaScript SDK]] + +To interact with SAP AI Core using the SAP Cloud SDK, you first need to create a service key that grants secure access to your AI Core instance. Follow the step **Set Up Your Environment and Configure Access** in the [tutorial](https://developers.sap.com/tutorials/ai-core-orchestration-consumption.html) to establish your connection. + +[OPTION END] + +[OPTION BEGIN [Python SDK]] + +To interact with SAP AI Core using the Python Gen AI SDK, you first need to create a service key that grants secure access to your AI Core instance. + +• Configure proxy modules by setting up environment variables for AI Core credentials. + +• Replace placeholder values in ~/.aicore/config.json with the AI Core service key values from BTP. + +• Optionally, set the AICORE_HOME environment variable to override the default config path.
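The Python SDK bullets above reference `~/.aicore/config.json`. As a rough sketch, the file typically contains the following keys (key names follow the ai-core-sdk conventions; the URL shapes and angle-bracket values below are illustrative placeholders, so take the actual values from your SAP AI Core service key):

```json
{
  "AICORE_AUTH_URL": "https://<your-subaccount>.authentication.<region>.hana.ondemand.com",
  "AICORE_CLIENT_ID": "<clientid from your service key>",
  "AICORE_CLIENT_SECRET": "<clientsecret from your service key>",
  "AICORE_BASE_URL": "https://api.ai.<region>.ml.hana.ondemand.com/v2",
  "AICORE_RESOURCE_GROUP": "<your resource group>"
}
```

If `AICORE_HOME` is set, the SDK looks for `config.json` in that directory instead of `~/.aicore`.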
+ + +![img](img/image077.png) + +[OPTION END] + +### Generate token + +[OPTION BEGIN [Bruno]] + +- This step generates an access token, required for authenticating API requests during the process. + - Select the get_token request and execute it. + - **Note**: Regenerate the token if it expires during execution. + +[OPTION END] + +[OPTION BEGIN [JavaScript SDK]] + +As the access token is automatically requested and sent with every request to the server, this step is not necessary for the JavaScript SDK. + +[OPTION END] + +[OPTION BEGIN [Python SDK]] + +As the access token is automatically requested and sent with every request to the server, this step is not necessary for the Python SDK. + +[OPTION END] + +### Create/Update resource group to use grounding module + +Resource groups segment workloads and manage resources for specific AI Core services. + +To enable document grounding in SAP AI Core, your Resource Group (RG) must include a specific **label**. If you're creating a new RG, add this label during setup. + +[OPTION BEGIN [Bruno]] + + • Expand **01_resource_group** and execute the create request to create a resource group. + +```json +{ + "resourceGroupId": "{{resource_group}}", + "labels": [ + { + "key": "ext.ai.sap.com/document-grounding", + "value": "true" + } + ] +} +``` +![img](img/image008.png) + +• Verify the group status using the **get_by_id** request to ensure it is **PROVISIONED**. + +![img](img/image009.png) + +**Note:** +If you're using an existing RG, you can patch it with the label to activate grounding support. + +Send a PATCH request to the endpoint: + +{{apiurl}}/v2/admin/resourceGroups/{{resource_group_name}} with the body: + +```json +{ + "resourceGroupId": "", + "labels": [ + { + "key": "ext.ai.sap.com/document-grounding", + "value": "true" + } + ] +} +``` + +[OPTION END] + +[OPTION BEGIN [AI Launchpad]] + +• In the **Workspaces** app, choose the **AI API connection**.
+ +• Open the **SAP AI Core Administration** app and choose **Resource Groups**. + +• The **Resource Groups** screen appears with a tile for each existing **resource group**. + +• Choose Create to create reference details for a new resource group. + +• Complete the fields in the Create **Resource Group** dialog box. + +![img](img/image042.png) + +• Enter a **resource group ID**. + +**Note:** Ensure that the resource group ID is unique. If the ID is not unique and is currently in use, then the new resource group and its details will overwrite the existing resource group. + +• Choose the **subaccount_id** label key and enter a value. + +• Choose the **zone_id** label key and enter a value. + +• Choose the **instance_id** label key and enter a value. + +• Enter the **document-grounding** label key and enter the value true. + +• If additional labels are required, enter their keys and corresponding values. + +• Choose **Create** to create the **resource group**. + +• The All Resource Groups screen appears and shows the **new resource group**. + +[OPTION END] + +[OPTION BEGIN [JavaScript SDK]] + +In this step, we will create a resource group in SAP AI Core using the [`@sap-ai-sdk/ai-api`](https://github.com/SAP/ai-sdk-js/tree/main/packages/ai-api) package of the SAP Cloud SDK for AI (JavaScript). For more information, refer to the official [documentation](https://sap.github.io/ai-sdk/docs/js/ai-core/ai-api). + +**NOTE**: In order to use the document grounding service, the resource group must be created with the document grounding label set to `true`. Therefore, existing resource groups without the label will not work for document grounding. + +• To start, install the dependency in your project. + +``` +npm install @sap-ai-sdk/ai-api +``` + +• Add the following code to your project to create a resource group. 
+
+```javascript
+import { ResourceGroupApi } from '@sap-ai-sdk/ai-api';
+
+const RESOURCE_GROUP = '' // Please change to your desired ID
+
+// Create resource group using ResourceGroupApi
+async function createResourceGroup() {
+  try {
+    const response = await ResourceGroupApi.
+      kubesubmitV4ResourcegroupsCreate({
+        resourceGroupId: RESOURCE_GROUP,
+        labels: [
+          {
+            key: 'ext.ai.sap.com/document-grounding',
+            value: 'true',
+          }
+        ]
+      }).execute();
+    return response.resourceGroupId;
+  } catch (error: any) {
+    console.error('Error while creating Resource Group:', error.stack);
+  }
+}
+
+const resourceGroupId = await createResourceGroup();
+console.log("Created Resource Group with ID: ", resourceGroupId)
+```
+
+[OPTION END]
+
+[OPTION BEGIN [Python SDK]]
+
+Create a resource group in SAP AI Core. Please note that to use the document grounding service, your request must contain the document grounding **label** set to **true**. Existing resource groups without this label won't work.
+
+```Python
+from ai_core_sdk.models.resource_group import Label
+
+# Name of the resource group to create
+resource_group = ""
+
+labels = [
+    Label(
+        key="ext.ai.sap.com/document-grounding",
+        value="true"
+    )
+]
+
+# Create Resource Group
+# Note: `ai_core_client` is the AI Core client initialized earlier from your service key
+try:
+    rg = ai_core_client.resource_groups.create(
+        resource_group_id = resource_group,
+        labels = labels
+    )
+    print("Created resource group:", rg.resource_group_id)
+except Exception as e:
+    if "already exists" in str(e):
+        print(f"Resource group '{resource_group}' already exists")
+    else:
+        raise
+```
+
+[OPTION END]
+
+### Create generic secret
+
+This step is required only if you are using external document repositories such as Amazon S3 or Microsoft SharePoint for grounding.
+
+**Note:**
+
+* If you do not want to use an external document repository and instead plan to provide document chunks directly, use the Vector API to generate embeddings and store them in the vector database. In that scenario you can **SKIP** this step entirely.
+* Other options, such as **SFTP**, are also supported, but this tutorial does not currently cover them.
+
+A Generic Secret securely stores credentials for your document repository in SAP AI Core. These secrets are later used while creating the knowledge base (data repository) using the Pipeline API.
+
+
+**👉 Which one to create?**
+
+Create the generic secret for SharePoint or S3, based on the document repository you plan to use for your grounding documents.
+
+🔸 If you're using Microsoft SharePoint:
+
+Follow the instructions to create a secret with the required base64-encoded SharePoint credentials.
+
+🔸 If you're using Amazon S3:
+
+Use the AWS CLI to configure your bucket and provide the base64-encoded credentials in the secret payload.
+
+Proceed directly to creating collections using the Vector API if you're not working with S3, SharePoint, or SFTP.
+
+[OPTION BEGIN [Bruno]]
+
+#### **Generic secret for SharePoint (option-1)**
+
+Generic secrets securely store the SharePoint credentials required for document access.
+
+• Expand **03_generic_secret** and select the create request.
+
+![img](img/image013.png)
+
+• Please refer to point 2 under https://help.sap.com/docs/joule/integrating-joule-with-sap/configure-access-from-sap-btp?locale=en-US for reference values of MS SharePoint credentials.
+
+• Update **clientId**, **tokenServiceURL**, **password**, **url**, **user** and **clientSecret** according to your MS SharePoint credentials.
+
+• To prepare your own SharePoint, create a SharePoint site (optional; you can reuse an existing site if you have one):
+
+ 1. Create a Group and a Technical User (optional, existing ones can be reused)
+ 2. Register an Application, Generate a Client Secret, & Expose the application using web API
+ 3. 
Validate the SharePoint access with the Technical User
+
+• All values need to be provided as base64-encoded values.
+
+#### **Generic secret for AWS S3 (option-2)**
+
+Generic secrets securely store the AWS S3 credentials required for document access.
+
+• Expand **03_generic_secret** and select the create request.
+
+Use the payload below to create a secret for AWS S3 with NoAuthentication as the authentication type.
+
+```CODE
+{
+  "name": "", // Name of the generic secret to be created
+  "data": {
+    "url": "", // Base64-encoded value in the format https://s3.<region>.amazonaws.com
+    "authentication": "Tm9BdXRoZW50aWNhdGlvbg==", // Base64 encoded value for NoAuthentication
+    "description": "", // Base64 encoded description of the secret
+    "access_key_id": "", // Base64 encoded value of access key id
+    "bucket": "", // Base64 encoded value of bucket name
+    "host": "", // Base64 encoded value of host
+    "region": "", // Base64 encoded value of region
+    "secret_access_key": "", // Base64 encoded value of secret access key
+    "username": "", // Base64 encoded value of username
+    "type": "SFRUUA==", // [Optional] Base64 encoded value for HTTP
+    "proxyType": "SW50ZXJuZXQ=" // [Optional] Base64 encoded value for Internet
+  },
+  "labels": [
+    {
+      "key": "ext.ai.sap.com/document-grounding", // Label for Document Grounding feature
+      "value": "true"
+    },
+    {
+      "key": "ext.ai.sap.com/documentRepositoryType", // Label for Document Repository Type
+      "value": "S3"
+    }
+  ]
+}
+```
+
+• Ensure that all values in the data dictionary are Base64-encoded as per AWS S3 credential requirements.
+
+![img](img/image072.png)
+
+[OPTION END]
+
+[OPTION BEGIN [AI Launchpad]]
+
+#### **Generic secret for SharePoint (option-1)**
+
+1. In the **Workspaces** app, choose the AI API connection.
+
+2. If you want to add your secret at the resource group level, choose the resource group. Alternatively, you can use the toggles in the header or dialog box, where you will be prompted to specify a resource group.
+
+3. 
Open the **SAP AI Core Administration app** and choose **Generic Secrets**. The Generic Secrets screen appears with a tile for each existing secret.
+
+4. Choose **Add** to enter reference details for a new secret.
+
+5. Complete the fields in the Add Generic Secret dialog box as follows:
+
+   - Switch between tenant-level secrets and resource-group-level secrets.
+
+   - If your secret is at the resource-group level: confirm the resource group. To change the resource group, choose  (Change Value). Enter a name for the secret. Secret names must comply with the following criteria:
+
+     - Contain only lowercase alphanumeric characters, hyphens (-), or numbers
+
+     - Do not start or end with a hyphen (-)
+
+   ![img](img/image_gen_sec.png)
+
+   ![img](img/image054.png)
+
+   • Enter the secret in JSON format. For example:
+
+   ```CODE
+   {
+     "type": "SFRUUA==",
+     "description": "",
+     "clientId": "",
+     "authentication": "",
+     "tokenServiceURL": "",
+     "password": "",
+     "proxyType": "",
+     "url": "",
+     "tokenServiceURLType": "",
+     "user": "",
+     "clientSecret": "",
+     "scope": "SCOPE",
+     "labels": [
+       {
+         "key": "ext.ai.sap.com/document-grounding",
+         "value": "true"
+       }
+     ]
+   }
+   ```
+
+#### **Generic secret for AWS S3 (option-2)**
+
+1. **Open the Workspaces app** and choose the **AI API connection**.
+
+2. If needed, toggle between **tenant-level** and **resource-group-level** secret creation.
+
+3. Navigate to the **SAP AI Core Administration** app and go to **Generic Secrets**.
+
+4. Choose **Add** to create a new secret.
+
+5. 
Fill out the form as follows:
+   - **Resource Group**: ``
+   - **Name**: `aws-credentials-1`
+   - **Secret (JSON format)**:
+
+```json
+ {
+  "access_key_id": "",
+  "secret_access_key": "",
+  "bucket": "",
+  "host": "",
+  "region": "",
+  "url": "",
+  "username": "",
+  "authentication": "NoAuthentication",
+  "description": "AWS S3 credentials for document grounding",
+  "type": "HTTP",
+  "proxyType": "Internet"
+ }
+```
+
+**Labels**
+
+Add the following key-value pairs as labels:
+| Key | Value |
+|--------------------------------------------------|-------|
+| ext.ai.sap.com/document-grounding | true |
+| ext.ai.sap.com/documentRepositoryType | S3 |
+
+![img](img/image_gen_sec.png)
+
+![img](img/image078.png)
+
+
+6. Click **Add** to save the secret.
+
+[OPTION END]
+
+[OPTION BEGIN [JavaScript SDK]]
+
+In this step, we will create a generic secret in SAP AI Core using the [`@sap-ai-sdk/ai-api`](https://github.com/SAP/ai-sdk-js/tree/main/packages/ai-api) package of the SAP Cloud SDK for AI (JavaScript). For more information, refer to the official [documentation](https://sap.github.io/ai-sdk/docs/js/ai-core/ai-api).
+
+#### **Generic secret for SharePoint (option-1)**
+
+This step specifically creates a secret in SAP AI Core that stores Base64-encoded credentials for SharePoint access, securely enabling document grounding workflows via Microsoft Graph.
+
+```javascript
+import { SecretApi } from '@sap-ai-sdk/ai-api';
+
+// Create Secret using SecretApi
+async function createGenericSecret() {
+  try {
+    const response = await SecretApi.kubesubmitV4GenericSecretsCreate({
+      name: 'canary-rg1-secret',
+      data: {
+        type: 'SFRUUA==',
+        description: '',
+        clientId: '',
+        authentication: '',
+        tokenServiceURL: '',
+        password: '',
+        proxyType: '',
+        url: '',
+        tokenServiceURLType: '',
+        user: '',
+        clientSecret: '',
+        scope: ''
+      },
+      labels: [
+        {
+          key: 'ext.ai.sap.com/document-grounding',
+          value: 'true',
+        },
+      ],
+    }).execute();
+    return response;
+  } catch (error: any) {
+    console.error('Error while creating Generic Secret:', error.stack);
+  }
+}
+
+const secret = await createGenericSecret();
+console.log(secret?.message)
+```
+
+#### **Generic secret for AWS S3 (option-2)**
+
+Generic secrets securely store the S3 credentials required for document access. Please change the values as per your AWS S3 credentials.
+
+```javascript
+import { SecretApi } from '@sap-ai-sdk/ai-api';
+
+async function createS3GenericSecret() {
+  try {
+    const response = await SecretApi.kubesubmitV4GenericSecretsCreate(
+      {
+        name: 's3-grounding-secret',
+        data: {
+          description: "",
+          url: "",
+          authentication: "Tm9BdXRoZW50aWNhdGlvbg==",
+          access_key_id: "",
+          secret_access_key: "",
+          bucket: "",
+          region: "",
+          host: "",
+          username: "",
+          type: "SFRUUA==",
+          proxyType: ""
+        },
+        labels: [
+          {
+            key: 'ext.ai.sap.com/document-grounding',
+            value: 'true'
+          },
+          {
+            key: 'ext.ai.sap.com/documentRepositoryType',
+            value: 'S3'
+          }
+        ]
+      },
+      {
+        'AI-Resource-Group': ''
+      }
+    ).execute();
+
+    console.log('✅ S3 Generic Secret created:', response.name);
+    return response;
+  } catch (error: any) {
+    console.error(
+      '❌ Error while creating S3 Generic Secret:',
+      error.cause?.response?.data || error.message
+    );
+  }
+}
+
+await createS3GenericSecret();
+```
+
+[OPTION END]
+
+[OPTION BEGIN [Python SDK]]
+
+In this step, we will create a generic secret in SAP 
AI Core using the SAP Cloud SDK for AI (Python). For more information, refer to the official [documentation](https://help.sap.com/docs/sap-ai-core/generative-ai/generic-secrets-for-grounding-e1a201c1fc2e4eb3a570efd81a3b3616?q=document+grounding) + +#### **Generic secret for sharepoint (option-1)** + +This step specifically creates a secret in SAP AI Core that stores Base64-encoded credentials for SharePoint access, securely enabling document grounding workflows via Microsoft Graph. + +```Python +json_data = { + 'name': '', + 'data': { + 'description': '', + 'clientId': '', + 'authentication': 'T0F1dGgyUGFzc3dvcmQ=', + 'tokenServiceURL': '', + 'password': '', + 'url': 'aHR0cHM6Ly9ncmFwaC5taWNyb3NvZnQuY29t', + 'tokenServiceURLType': 'RGVkaWNhdGVk', + 'user': '', + 'clientSecret': '', + 'scope': 'aHR0cHM6Ly9ncmFwaC5taWNyb3NvZnQuY29tLy5kZWZhdWx0', + }, + 'labels': [ + { + 'key': 'ext.ai.sap.com/document-grounding', + 'value': 'true', + }, + ], +} + +secret = requests.post(f'{AI_API_URL}/v2/admin/secrets', headers=headers, json=json_data) + +secret.json() +``` + +#### **Generic secret for AWS S3 (option-2)** + +Generic secrets securely store S3 credentials required for document access. Please change the values as per your AWS S3 credentials. 
+
+```Python
+# Prepare secret payload
+secret_payload = {
+    "name": "",
+    "data": {
+        "description": "",
+        "url": "",
+        "authentication": "Tm9BdXRoZW50aWNhdGlvbg==",
+        "access_key_id": "",
+        "secret_access_key": "",
+        "bucket": "",
+        "region": "",
+        "host": "",
+        "username": ""
+    },
+    "labels": [
+        {
+            "key": "ext.ai.sap.com/document-grounding",
+            "value": "true"
+        },
+        {
+            "key": "ext.ai.sap.com/documentRepositoryType",
+            "value": "S3"
+        }
+    ]
+}
+
+# Create secret
+response = requests.post(f"{AI_API_URL}/v2/admin/secrets", headers=headers, json=secret_payload)
+print("Secret creation:", response.status_code, response.text)
+```
+
+[OPTION END]
+
+### Data Ingestion from Document Repositories via Pipeline API
+
+Choose your data repository type and set up the integration.
+
+This tutorial currently covers the following document repositories:
+
+* Microsoft SharePoint
+* AWS S3
+
+This step covers fetching documents from a supported data source (SharePoint, S3, SFTP, etc.), preprocessing and chunking those documents, and storing their semantic embeddings in the SAP HANA Vector Store.
+
+#### Create Pipeline
+
+In this use case, we have added facility management emails as grounding documents, uploading them to the designated SharePoint folder. The sample email folder [sample_emails.zip](img/sample_emails.zip) used in this scenario is attached; for practice, you can also use these emails.
+
+[OPTION BEGIN [Bruno]]
+
+#### If you are using MSSharePoint as a document repository [Option-1]
+
+• Expand **04_pipeline** and select the **create_pipeline** request.
+
+• Replace the value **generic_secret_name** with the generic secret name created in step 6.
+
+• Replace the value **sharepoint_site_name** with the site name of your MS SharePoint.
+
+• Replace the value **folder_name** with the name of the folder from which the documents have to be taken. Multiple folder names can be specified.
![img](img/image014.png)
+
+
+#### If you are using AWS S3 as a document repository [Option-2]
+
+• Expand **04_pipeline** and select the **create_pipeline** request.
+
+Use the payload below to create a pipeline for AWS S3.
+
+```CODE
+{
+  "type": "S3",
+  "configuration": {
+    "destination": "" // Name of the generic secret created for S3
+  },
+  "metadata": { // [Optional]
+    "destination": "" // Name of the generic secret created for the S3 metadata server
+  }
+}
+```
+• Replace the value **generic_secret_name** with the generic secret name created in step 8.
+
+**Note:** 'metadata' is an optional field which takes the destination name created for the S3 metadata server.
+
+![img](img/image069.png)
+
+
+How do you upload a document to AWS S3 using the AWS CLI?
+
+**Note:** Download and install the AWS CLI from the official AWS CLI page.
+
+Open a command prompt and configure the AWS CLI with your credentials. Enter your Access Key, Secret Key, Region, and Output Format when prompted.
+
+```COPY
+aws configure
+```
+![img](img/image074.png)
+
+**Upload a Document to AWS S3**
+
+To upload a grounding document to an S3 bucket, use:
+
+```CODE
+aws s3 cp <local-file-path> s3://<bucket-name>/<folder-name>/
+```
+
+**Verify File Upload**
+
+To check whether the file was uploaded successfully, list the contents of the folder:
+
+```CODE
+aws s3 ls s3://<bucket-name>/<folder-name>/
+```
+![img](img/image076.png)
+
+#### Get Pipeline by Pipeline ID
+
+This request fetches details of a specific pipeline using its unique ID. It is useful for verifying the configuration and settings of a particular pipeline.
+
+![img](img/image032.png)
+
+#### Get Pipeline Status by Pipeline ID
+
+This request checks the current status of a specific pipeline, such as whether it is running, completed, or failed. It helps in tracking the execution progress.
+
+![img](img/image033.png)
+
+Once the pipeline is successfully created, documents uploaded in SharePoint are converted into vectors via APIs. The conversion process can be validated upon successful pipeline execution.
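Instead of re-running the status request manually, you can also script the check. The sketch below is a hypothetical polling helper, not part of the Bruno collection: the terminal status values (`COMPLETED`, `FAILED`) are assumptions based on this step, and the actual HTTP call is left to a callable you supply (for example, a small wrapper around the get_pipeline_status request), so the example runs with a stub.

```python
import time

def poll_pipeline_status(get_status, interval_s=5.0, max_polls=60):
    """Call get_status() until the pipeline reaches a terminal state.

    get_status is any zero-argument callable returning a status string
    (e.g. a wrapper that executes the get_pipeline_status request and
    extracts the status field from the JSON response).
    """
    for _ in range(max_polls):
        status = get_status()
        if status in ("COMPLETED", "FAILED"):  # assumed terminal states
            return status
        time.sleep(interval_s)
    raise TimeoutError("pipeline did not reach a terminal state in time")

# Demo with a stubbed status sequence instead of real HTTP calls:
statuses = iter(["RUNNING", "RUNNING", "COMPLETED"])
print(poll_pipeline_status(lambda: next(statuses), interval_s=0))  # → COMPLETED
```

Swap the stub for a real request wrapper once you have a pipeline ID from the create_pipeline response.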
+
+[OPTION END]
+
+[OPTION BEGIN [AI Launchpad]]
+
+To enable document grounding, the next step is to **create a Data Repository** in SAP Generative AI Hub using the secrets you configured in the earlier steps.
+
+#### **Navigation Path**
+
+In the **SAP AI Launchpad**:
+
+1. Navigate to **Generative AI Hub** from the side menu.
+2. Click on **Grounding Management**.
+3. Click **Create** to open the *Create Data Repository* wizard.
+
+#### 🔹 Option 1: Microsoft SharePoint Configuration
+
+> 📸 _Refer to Screenshot: SharePoint Setup (see below)_
+
+1. In the **Create Data Repository** form:
+   - **Embedding Model**: Leave as default (`Text Embedding 3 Large`).
+   - **Document Store Type**: Select `MSSharePoint`.
+   - **Document Grounding Generic Secret**: Select the secret you created in **Step 5**.
+   - **Document Store Name**: Provide a name for your repository. For example:
+     ```
+     Dev_blr3_document
+     ```
+   - **Include Paths**: Enter the SharePoint folder path that contains your documents to test grounding.
+     Example:
+     ```
+     SharedDocuments/Sample_docs/UA_test
+     ```
+
+2. Click **Create** to finalize the setup.
+
+![img](img/image079.png)
+
+---
+
+#### 🔹 Option 2: AWS S3 Configuration
+
+> 📸 _Refer to Screenshot: S3 Setup (see below)_
+
+1. In the **Create Data Repository** form:
+   - **Embedding Model**: Leave as default (`Text Embedding 3 Large`).
+   - **Document Store Type**: Select `S3`.
+   - **Document Grounding Generic Secret**: Select the AWS secret you created in **Step 5** (e.g., `aws-credentials-1`).
+
+2. Once selected, you're ready to proceed. The required S3 bucket, region, and credentials are handled through the secret.
+
+3. Click **Create** to finish.
+
+![img](img/image080.png)
+
+---
+
+> ✅ After completing this step, your knowledge base (data repository) will be linked to your document source. The documents will be embedded and made available for grounding in the chat experience.
+
+[OPTION END]
+
+[OPTION BEGIN [JavaScript SDK]]
+
+We are creating a document-grounding pipeline using SAP AI Core. The pipeline is configured to integrate with Microsoft SharePoint as a data source, enabling AI-driven document processing. This setup allows seamless ingestion of documents from a specified SharePoint site, ensuring efficient data retrieval and processing.
+
+**Note:** For this step, we are using the [document grounding module](https://sap.github.io/ai-sdk/docs/js/ai-core/document-grounding) of the SDK, so make sure to add the dependency to your project.
+
+```javascript
+import { PipelinesApi } from '@sap-ai-sdk/document-grounding';
+
+// Request body for pipeline creation request
+const pipelineRequest: PipelinePostRequst = {
+  type: 'MSSharePoint',
+  configuration: {
+    destination: '',
+    sharePoint: {
+      site: {
+        name: '',
+        includePaths: ['/']
+      }
+    }
+  }
+};
+
+// Create the pipeline
+const pipeline = await PipelinesApi.createPipeline(pipelineRequest, {
+  'AI-Resource-Group': RESOURCE_GROUP
+}).execute();
+
+console.log('Created Pipeline with ID: ', pipeline.pipelineId);
+```
+
+[OPTION END]
+
+[OPTION BEGIN [Python SDK]]
+
+We are creating a document-grounding pipeline using SAP AI Core. The pipeline is configured to integrate with AWS S3 as a data source, enabling AI-driven document processing. This setup allows seamless ingestion of documents from a specified S3 data storage, ensuring efficient data retrieval and processing. 
+
+```Python
+from gen_ai_hub.proxy import get_proxy_client
+from gen_ai_hub.document_grounding.client import PipelineAPIClient
+from gen_ai_hub.document_grounding.models.pipeline import S3PipelineCreateRequest, CommonConfiguration
+
+aicore_client = get_proxy_client()
+pipelines_api_client = PipelineAPIClient(aicore_client)
+generic_secret_s3_bucket = ""
+s3_config = S3PipelineCreateRequest(configuration=CommonConfiguration(destination=generic_secret_s3_bucket))
+response = pipelines_api_client.create_pipeline(s3_config)
+print(f"Reference the Vector knowledge base using the pipeline ID: {response.pipelineId}")
+# Check the status of the vectorization pipeline until it is completed
+print(pipelines_api_client.get_pipeline_status(response.pipelineId))
+```
+
+[OPTION END]
+
+### Data Ingestion of Chunks via Vector API
+
+The Vector API processes the chunks provided by the user and stores their semantic embeddings in collections.
+
+[OPTION BEGIN [Bruno]]
+
+#### Create collection
+
+• Expand **06_vector** and select the **create_collections** request.
+
+• Replace the placeholder value with the required collection name.
+
+• The metadata can be an array of key-value pairs to hold some additional information.
+
+**Note:** Currently, the only supported `modelName` is `text-embedding-ada-002-v2`.
+
+![img](img/image066.png)
+
+#### Create documents
+
+• Click on the **create_document** request and replace the path parameter with a valid collection ID.
+
+• Replace the placeholder value with the required metadata key and provide corresponding values for it. Multiple metadata entries can be specified here.
+
+• Replace the placeholder values with the required chunks of the documents. Add multiple chunks based on your requirements. Metadata can also be specified for each chunk.
+
+![img](img/image067.png)
+
+#### Verifying Vector Processing (Optional)
+
+These steps help inspect vector collections and documents to confirm successful processing.
+
+ • **get_collection_creation_status_by_id** – Checks whether the collection was created successfully.
+
+![img](img/image035.png)
+
+ • **get_collection_by_id** – Retrieves detailed information about a specific collection.
+
+![img](img/image036.png)
+
+ • **get_documents_by_id** – Fetches specific documents within a collection for debugging.
+
+![img](img/image038.png)
+
+[OPTION END]
+
+### Get Data Repository ID
+
+[OPTION BEGIN [Bruno]]
+
+This ID uniquely identifies the knowledge base (data repository) created using either the Pipeline API or the Vector API. It is required during the retrieval and orchestration steps to fetch relevant grounded content.
+
+ • **dataRepositories** – Lists all data repositories.
+
+![img](img/image040.png)
+
+ • **dataRepositories by id** – Fetches details of a specific repository for targeted debugging.
+
+![img](img/image041.png)
+
+[OPTION END]
+
+### Retrieval Search Without Orchestration - Optional Step
+
+If you only want to test semantic matching from the data repository without involving LLM inference or orchestration, you can use this standalone retrieval search option.
+
+**Steps:**
+
+ • Navigate to **07_retrieval** in Bruno.
+
+ • Use **retrieval_vector** to get relevant chunks.
+
+ • Provide your **search query** and the **data repository ID(s)** in the request.
+
+ • Analyze the **chunks** field in the response.
+
+This is useful for **debugging retrieval quality** before plugging into a larger pipeline.
+
+![img](img/image068.png)
+
+### Get or create orchestration deployment
+
+Before beginning inference, make sure that:
+
+ • An orchestration deployment with the scenario ID `orchestration` is **RUNNING**.
+
+ • You have confirmed the deployment using the **get_deployment** API (under the "Deployments" section in the Bruno collection).
+
+ • The `orchestration_service_url` in your environment variables is updated accordingly.
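As a quick programmatic sanity check, you can filter the response of the deployments request for a running orchestration deployment. The helper below is a hedged sketch: the field names (`scenarioId`, `status`, `deploymentUrl`) are assumptions about the shape of the deployments response, so verify them against your own get_deployment output before relying on it.

```python
def find_running_orchestration(deployments):
    """Return the deploymentUrl of the first RUNNING orchestration deployment, or None."""
    for dep in deployments:
        if dep.get("scenarioId") == "orchestration" and dep.get("status") == "RUNNING":
            return dep.get("deploymentUrl")
    return None

# Demo with sample data shaped like a deployments response:
sample = [
    {"scenarioId": "foundation-models", "status": "RUNNING", "deploymentUrl": "https://example.com/d1"},
    {"scenarioId": "orchestration", "status": "RUNNING", "deploymentUrl": "https://example.com/d2"},
]
print(find_running_orchestration(sample))  # → https://example.com/d2
```

If this returns `None`, create an orchestration deployment first.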
+
+**Note:** If you don’t have an orchestration deployment, or would like to review the GET deployment steps, refer to the "Create Configuration for Orchestration Deployment" step in [this tutorial](https://developers.sap.com/tutorials/ai-core-orchestration-consumption.html#130d0b6a-6d86-4505-9a80-6d268f9e2e51).
+
+### Configure Grounding Module in Orchestration Workflow
+
+[OPTION BEGIN [Bruno]]
+
+Once your data repository is created (via the Pipeline API or the Vector API), you can configure orchestration to enable grounded responses. Regardless of the ingestion method, the orchestration module uses the Data Repository ID to retrieve relevant content and inject it into the prompt for context-aware generation.
+
+**Steps:**
+
+• Expand **05_orchestration** in the Bruno collection.
+
+• Use the **completion** request.
+
+• Provide the Data Repository ID in the "groundingRequest" section of the request body.
+
+• The orchestration will automatically query the referenced data repository and fetch contextual information.
+
+![img](img/image051.png)
+
+[OPTION END]
+
+[OPTION BEGIN [AI Launchpad]]
+
+Grounding is a crucial step in orchestration that ensures responses are enriched with relevant and accurate data from predefined sources. This section explains how grounding works in the AI Launchpad.
+
+**Input Variables**
+
+Input variables are parameters sent to the grounding service to facilitate data retrieval. These variables can be referenced in the template definition, allowing dynamic data incorporation based on user inputs.
+
+**Output Variable**
+
+The output variable holds the retrieved data from the grounding service. This data can then be utilized in the template definition to generate contextual and informed responses.
+
+**Selected Sources**
+
+You can specify the repositories from which the grounding module retrieves information. If no specific repositories are selected, grounding will include all available sources by default.
Selecting relevant sources ensures precise and domain-specific data retrieval for improved orchestration outcomes.
+
+![img](img/image047.png)
+
+**Templating**
+
+Templating enables you to define structured prompts and system messages for the generative AI model. Using placeholders like `{{?groundingRequest}}` and `{{?groundingOutput}}`, you can dynamically customize inputs for grounding-based data retrieval. These placeholders must follow naming rules and can have default values for testing. If no default value is set, the workflow prompts for input during execution.
+
+![img](img/image048.png)
+
+**Model configuration**
+
+Model configuration allows you to select the AI model for your workflow. If no model is selected, the default model is used. You can specify additional parameters in JSON format, such as setting the `n` parameter to receive multiple responses. You can see which models are available within an orchestration deployment by selecting the deployment ID.
+
+![img](img/image049.png)
+
+[OPTION END]
+
+[OPTION BEGIN [JavaScript SDK]]
+
+We are configuring an AI Orchestration Pipeline using SAP AI Core. The pipeline integrates multiple AI modules to process and refine inputs efficiently. This setup enables **document grounding, LLM processing, templating, and content filtering**, ensuring accurate and safe AI-generated responses.
+ +```javascript +import { + OrchestrationClient, buildDocumentGroundingConfig, buildAzureContentSafetyFilter} + from '@sap-ai-sdk/orchestration'; + +// --------------------------- +// Initialize Orchestration Client +// --------------------------- +const orchestrationClient = new OrchestrationClient({ + promptTemplating: { + model: { + name: 'gpt-4o', + params: { + max_completion_tokens: 200, + temperature: 0 + } + } + }, + grounding: buildDocumentGroundingConfig({ + placeholders: { + input: ['groundingRequest'], + output: 'groundingOutput' + }, + filters: [ + { + id: 'filter1', + data_repositories: ['a0165**************55f'], + data_repository_type: 'vector', + search_config: { max_chunk_count: 20 } + } + ] + }), + filtering: { + input: { + filters: [ + buildAzureContentSafetyFilter({ + Hate: 'ALLOW_SAFE_LOW', + Violence: 'ALLOW_SAFE_LOW' + }) + ] + }, + output: { + filters: [ + buildAzureContentSafetyFilter({ + Hate: 'ALLOW_SAFE_LOW', + Violence: 'ALLOW_SAFE_LOW' + }) + ] + } + } +}, +{resourceGroup:resourceGroupId} + +); +``` +[OPTION END] + +[OPTION BEGIN [ Python SDK]] + +We are configuring an AI Orchestration Pipeline using SAP AI Core. The pipeline integrates multiple AI modules to process and refine inputs efficiently. This setup enables **document grounding, LLM processing, templating, and content filtering**, ensuring accurate and safe AI-generated responses. 
+ +``` python +from gen_ai_hub.proxy import get_proxy_client +from gen_ai_hub.orchestration_v2.models.message import SystemMessage, UserMessage +from gen_ai_hub.orchestration_v2.models.template import Template +from gen_ai_hub.orchestration_v2.service import OrchestrationService + +# Set up Orchestration Service (V2) +proxy_client = get_proxy_client() +orchestration_service = OrchestrationService(proxy_client) + +# Runtime input for the orchestration pipeline +template = Template( + template=[ + SystemMessage(content="""Facility Solutions Company provides services to luxury residential complexes, + apartments, individual homes, and commercial properties such as office buildings, + retail spaces, industrial facilities, and educational institutions. + Customers are encouraged to reach out with maintenance requests, service deficiencies, + follow-ups, or any issues they need by email."""), + UserMessage(content="""You are a helpful assistant for any queries for answering questions. + Answer the request by providing relevant answers that fit to the request.\n\n + Request: {{ ?user_query }}\n + Context: {{ ?grounding_response }}""") + ] +) + +from gen_ai_hub.orchestration_v2.models.llm_model_details import LLMModelDetails + +llm = LLMModelDetails(name="gpt-4o", params={"max_completion_tokens": 2048}) + +from gen_ai_hub.orchestration_v2.models.document_grounding import (GroundingModuleConfig,GroundingType, +DocumentGroundingFilter,DataRepositoryType,DocumentGroundingConfig,DocumentGroundingPlaceholders,GroundingSearchConfig) + +filters=[DocumentGroundingFilter(id="vector", + data_repositories=["a0165*************10855f"], + data_repository_type=DataRepositoryType.VECTOR.value, + search_config= GroundingSearchConfig(max_chunk_count=20) + )] + + +placeholders = DocumentGroundingPlaceholders( + input=["user_query"], + output="grounding_response" +) + +# Grounding module config +grounding_config = GroundingModuleConfig( + type=GroundingType.DOCUMENT_GROUNDING_SERVICE.value, 
+    config=DocumentGroundingConfig(
+        filters=filters,
+        placeholders=placeholders
+    )
+)
+```
+
+[OPTION END]
+
+### Run Orchestration with Prompt to Get Context-aware Response
+
+[OPTION BEGIN [Bruno]]
+
+Once orchestration is configured with grounding, you can send prompts via the orchestration API or SDKs. The configured grounding module will fetch enterprise context dynamically and enrich the prompt, enabling the LLM to generate accurate and context-aware responses. Use this approach in production scenarios for precise Q&A and insights based on your internal data.
+
+![img](img/image052.png)
+
+[OPTION END]
+
+[OPTION BEGIN [AI Launchpad]]
+
+**Orchestration Workflow**
+
+After you have built your orchestration workflow, you can test it to generate output from your chosen model.
+
+1. Navigate to the Orchestration Test Run section.
+
+2. Click **Run** to view the responses.
+
+![img](img/image050.png)
+
+You can also save the created orchestration for future use, as shown in the image below.
+
+![img](img/image_ail_resp.png)
+
+[OPTION END]
+
+[OPTION BEGIN [JavaScript SDK]]
+
+The configuration defines a document grounding module that retrieves relevant context from a vector-based repository, a GPT-4o model for response generation, a templating module to structure responses, and Azure Content Safety filters to ensure compliance and content moderation. This orchestration streamlines AI-driven summarization while maintaining reliability and security.
+
+```javascript
+// Send Chat Completion Request
+
+const response = await orchestrationClient.chatCompletion({
+  messages: [
+    {
+      role: 'system',
+      content: `Facility Solutions Company provides services to luxury residential complexes, apartments,
+individual homes, and commercial properties such as office buildings, retail spaces, industrial facilities, and educational institutions. 
+Customers are encouraged to reach out with maintenance requests, service deficiencies, follow-ups, or any issues they need by email.` + }, + { + role: 'user', + content: `You are a helpful assistant for any queries. +Answer the request by providing relevant answers that fit the request. +Request: {{ ?groundingRequest }} +Context: {{ ?groundingOutput }}` + } + ], + placeholderValues: { + groundingRequest: 'Is there any complaint from customers?' + } +}, +); + +// --------------------------- +// Output Response +// --------------------------- +console.log(response.getContent()); +``` +![img](img/image_js_resp.png) + +[OPTION END] + +[OPTION BEGIN [ Python SDK]] + +The configuration defines a document grounding module that retrieves relevant context from a vector-based repository, a GPT-4o model for response generation, a templating module to structure responses, and Azure Content Safety filters to ensure compliance and content moderation. This orchestration streamlines AI-driven summarization while maintaining reliability and security. 
+
+```python
+from gen_ai_hub.orchestration_v2.models.template import PromptTemplatingModuleConfig
+from gen_ai_hub.orchestration_v2.models.config import ModuleConfig, OrchestrationConfig
+from gen_ai_hub.proxy import get_proxy_client
+from gen_ai_hub.orchestration_v2.service import OrchestrationService
+
+proxy_client = get_proxy_client()
+
+prompt_template = PromptTemplatingModuleConfig(prompt=template,
+                                               model=llm)
+
+module_config = ModuleConfig(prompt_templating=prompt_template, grounding=grounding_config)
+
+config = OrchestrationConfig(modules=module_config)
+
+orchestration_service = OrchestrationService(
+    proxy_client=proxy_client,
+    config=config
+)
+
+response = orchestration_service.run(placeholder_values={"user_query": "Is there any complaint?"})
+print(response.final_result.choices[0].message.content)
+```
+
+![img](img/image070.png)
+
+[OPTION END]
+
+### Conclusion
+
+Adding grounding significantly enhances the model's ability to provide accurate, context-specific responses. Without grounding, the model generates generic replies; with grounding, it retrieves precise information from the uploaded document. Screenshots showcasing both responses are provided for comparison.
diff --git a/tutorials/ai-core-orchestration-grounding-v2/grounding-cloud-sdk-Tutorial.ipynb b/tutorials/ai-core-orchestration-grounding-v2/grounding-cloud-sdk-Tutorial.ipynb new file mode 100644 index 0000000000..1857076f31 --- /dev/null +++ b/tutorials/ai-core-orchestration-grounding-v2/grounding-cloud-sdk-Tutorial.ipynb @@ -0,0 +1,1511 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Load the AI Core service key" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import dotenv from \"dotenv\";\n", + "dotenv.config();\n", + "\n", + "const serviceKey = JSON.parse(process.env.AICORE_SERVICE_KEY);\n", + "\n", + "const AI_API_URL = serviceKey.serviceurls.AI_API_URL;\n", + "const clientid = serviceKey.clientid;\n", + "const clientsecret = serviceKey.clientsecret;\n", + "const authUrl = serviceKey.url;" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Resource Group" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Created Resource Group with ID: rg-test2\n" + ] + } + ], + "source": [ + "import { ResourceGroupApi } from '@sap-ai-sdk/ai-api';\n", + "\n", + "const RESOURCE_GROUP = 'rg-test2' // Please change to your desired ID\n", + "\n", + "// Create resource group using ResourceGroupApi\n", + "async function createResourceGroup() {\n", + " try {\n", + " const response = await ResourceGroupApi.\n", + " kubesubmitV4ResourcegroupsCreate({\n", + " resourceGroupId: RESOURCE_GROUP,\n", + " labels: [\n", + " {\n", + " key: 'ext.ai.sap.com/document-grounding',\n", + " value: 'true',\n", + " }\n", + " ]\n", + " }).execute();\n", + " return response.resourceGroupId;\n", + " } catch (error: any) {\n", + " console.error('Error while creating Resource Group:', error.stack);\n", + " }\n", + "}\n", + "\n", + "const resourceGroupId = await 
createResourceGroup();\n", + "console.log(\"Created Resource Group with ID: \", resourceGroupId)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create a New Orchestration Configuration\n", + "In this step, we define a function to create an orchestration configuration using the ConfigurationApi from the SAP AI SDK. This configuration integrates various parameters needed for orchestration, such as the executable ID and scenario ID.\n", + "\n", + "Key Points:\n", + "\n", + "ConfigurationApi: Provides methods for interacting with the SAP AI SDK's configuration services.\n", + "\n", + "parameterBindings: Specifies the parameters used for orchestration." + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{\n", + " id: \"a4bfd193-6745-44ee-aa56-f5643830e6fc\",\n", + " message: \"Configuration created\"\n", + "}\n" + ] + } + ], + "source": [ + "import { ConfigurationApi } from '@sap-ai-sdk/ai-api';\n", + "\n", + "async function createOrchestrationConfiguration(resourceGroupId: string) {\n", + " const requestBody = {\n", + " name: 'orchestration-config',\n", + " executableId: 'orchestration',\n", + " scenarioId: 'orchestration',\n", + " parameterBindings: [\n", + " { key: 'modelFilterList', value: 'null' },\n", + " { key: 'modelFilterListType', value: 'allow' }\n", + " ],\n", + " inputArtifactBindings: []\n", + " };\n", + "\n", + " try {\n", + " const responseData = await ConfigurationApi\n", + " .configurationCreate(requestBody, {\n", + " 'AI-Resource-Group': resourceGroupId\n", + " })\n", + " .execute();\n", + "\n", + " return responseData;\n", + " } catch (error) {\n", + " if (error.response) {\n", + " console.error(error.response.status);\n", + " console.error(error.response.data);\n", + " } else {\n", + " console.error(error.message);\n", + " }\n", + " }\n", + "}\n", + "\n", + "// usage\n", + "const orchestrationConfig = await 
createOrchestrationConfiguration(resourceGroupId);\n", + "console.log(orchestrationConfig);\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Deployment of orchestration\n", + "This step involves creating a deployment using the specified configuration and resource group. The deployment is handled via the DeploymentApi, which streamlines the process of activating the orchestration setup.\n", + "\n", + "Key Points:\n", + "\n", + "DeploymentApi: Used for initiating the deployment based on the given configuration.\n", + "\n", + "createDeployment Function: This function handles the API call to create the deployment." + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": {}, + "outputs": [], + "source": [ + "import { DeploymentApi } from '@sap-ai-sdk/ai-api';\n", + "import type { AiDeploymentCreationResponse } from '@sap-ai-sdk/ai-api';\n", + "\n", + "/**\n", + " * Create a deployment using the configuration specified by configurationId.\n", + " * @param configurationId - ID of the configuration to be used.\n", + " * @param resourceGroup - AI-Resource-Group where the resources are available.\n", + " * @returns Deployment creation response with 'targetStatus': 'RUNNING'.\n", + " */\n", + "export async function createDeployment(\n", + " configurationId: string,\n", + " resourceGroup: string\n", + "): Promise {\n", + " return DeploymentApi.deploymentCreate(\n", + " { configurationId },\n", + " { 'AI-Resource-Group': resourceGroup }\n", + " ).execute();\n", + "}\n", + "/**\n", + " * Deploy the orchestration using the given configuration ID.\n", + " * @param resourceGroup - AI-Resource-Group where the resources are available.\n", + " * @returns A message indicating the result of the deployment operation.\n", + " */\n", + "export async function deployOrchestration(\n", + " resourceGroup: string\n", + "): Promise {\n", + " // Fetch the configuration ID (can be retrieved or passed dynamically)\n", + " const configurationId = 
orchestrationConfig.id;\n", + "\n", + " try {\n", + " // Step: Create deployment using the configuration ID\n", + " const response = await createDeployment(configurationId, resourceGroup);\n", + " // console.log(`Orchestration deployment created with ID: ${response.id}`);\n", + " return `Orchestration deployment created with ID: ${response.id}`;\n", + " } catch (error) {\n", + " console.error('Error creating orchestration deployment:', error);\n", + " return 'Failed to create orchestration deployment.';\n", + " }\n", + "}" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "Promise { \u001b[36m\u001b[39m }" + ] + }, + "execution_count": 13, + "metadata": {}, + "output_type": "execute_result" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Orchestration deployment created with ID: d55308d355e4a756\n" + ] + } + ], + "source": [ + "// usage to deploy orchestration\n", + "(async () => { \n", + " try {\n", + " const result = await deployOrchestration(resourceGroupId);\n", + " console.log(result); // Outputs deployment creation response\n", + " } catch (error) {\n", + " console.error('Error executing orchestration deployment:', error);\n", + " }\n", + " })();" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Here, you are explicitly defining the orchestration service deployment URL (orchestration_service_url) which points to your deployed LLM configuration. This URL is used to send inference requests (like prompt executions) to the SAP AI Core Orchestration." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Generic Secret" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### In this tutorial, we're demonstrating how to create a vector knowledge base by connecting either SharePoint or AWS S3 as the document source—multiple options are supported and optional based on your setup." 
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### creating knowledge base using Sharepoint - option 1\n", + "\n", + "This step specifically creates a secret in SAP AI Core that stores Base64-encoded credentials for SharePoint access, securely enabling document grounding workflows via Microsoft Graph." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import { SecretApi } from '@sap-ai-sdk/ai-api';\n", + "\n", + "// Create Secret using SecretApi\n", + "async function createGenericSecret() {\n", + " try {\n", + " const response = await SecretApi.kubesubmitV4GenericSecretsCreate({\n", + " name: 'canary-rg1-secret',\n", + " data: {\n", + " type: 'SFRUUA==',\n", + " description: '',\n", + " clientId: '',\n", + " authentication: '',\n", + " tokenServiceURL: '',\n", + " password: '',\n", + " proxyType: '',\n", + " url: '',\n", + " tokenServiceURLType: '',\n", + " user: '',\n", + " clientSecret: '',\n", + " scope: ''\n", + " },\n", + " labels: [\n", + " {\n", + " key: 'ext.ai.sap.com/document-grounding',\n", + " value: 'true',\n", + " },\n", + " ],\n", + " }).execute();\n", + " return response;\n", + " } catch (error: any) {\n", + " console.error('Error while creating Resource Group:', error.stack);\n", + " }\n", + "}\n", + "\n", + "const secret = await createGenericSecret();\n", + "console.log(secret?.message)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### creating knowledge base using AWS S3 - Option 2" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Alternatively, instead of SharePoint, we can use AWS S3 as a document repository for grounding. 
In the example below, we securely store credentials as a secret named aws-s3-secret that will later be referenced in the pipeline creation.\n", + "\n", + "This makes it clear that both SharePoint and AWS S3 are optional approaches and interchangeable based on the user’s infrastructure." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "✅ S3 Generic Secret created: s3-grounding-secret\n" + ] + }, + { + "data": { + "text/plain": [ + "{ message: \u001b[32m\"secret has been created\"\u001b[39m, name: \u001b[32m\"s3-grounding-secret\"\u001b[39m }" + ] + }, + "execution_count": 13, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "import { SecretApi } from '@sap-ai-sdk/ai-api';\n", + "\n", + "async function createS3GenericSecret() {\n", + " try {\n", + " const response = await SecretApi.kubesubmitV4GenericSecretsCreate(\n", + " {\n", + " name: 's3-grounding-secret',\n", + " data: {\n", + " description: \"\",\n", + " url: \"\",\n", + " authentication: \"Tm9BdXRoZW50aWNhdGlvbg==\",\n", + " access_key_id: \"\",\n", + " secret_access_key: \"\",\n", + " bucket: \"\",\n", + " region: \"\",\n", + " host: \"\",\n", + " username: \"\" ,\n", + " type: \"SFRUUA==\",\n", + " proxyType: \"\" \n", + " },\n", + " labels: [\n", + " {\n", + " key: 'ext.ai.sap.com/document-grounding',\n", + " value: 'true'\n", + " },\n", + " {\n", + " key: 'ext.ai.sap.com/documentRepositoryType',\n", + " value: 'S3'\n", + " }\n", + " ]\n", + " },\n", + " {\n", + " 'AI-Resource-Group': ''\n", + " }\n", + " ).execute();\n", + "\n", + " console.log('✅ S3 Generic Secret created:', response.name);\n", + " return response;\n", + " } catch (error: any) {\n", + " console.error(\n", + " '❌ Error while creating S3 Generic Secret:',\n", + " error.cause?.response?.data || error.message\n", + " );\n", + " }\n", + "}\n", + "\n", + "await createS3GenericSecret();\n" + ] + }, + { + 
"cell_type": "markdown", + "metadata": {}, + "source": [ + "### Pipeline Creation" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Pipeline creation using sharepoint - option 1\n", + "In this step, we are creating a document grounding pipeline using SharePoint as the knowledge source. The pipeline connects to the document repository defined in the SharePoint site using the previously created secret " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "// Request body for pipeline creation request\n", + "const pipelineRequest: PipelinePostRequst = {\n", + " type: 'MSSharePoint',\n", + " configuration: {\n", + " destination: 'canary-rg1-secret',\n", + " sharePoint: {\n", + " site: {\n", + " name: '',\n", + " includePaths: ['/']\n", + " }\n", + " }\n", + " }\n", + "};\n", + "\n", + "// Create the pipeline\n", + "const pipeline = await PipelinesApi.createPipeline(pipelineRequest, {\n", + " 'AI-Resource-Group': RESOURCE_GROUP\n", + "}).execute();\n", + "\n", + "console.log('Created Pipeline with ID: ', pipeline.pipelineId);" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Pipeline creation using AWS S3 - option 2\n", + "Once the secret (aws-s3-secret) is created, we can now configure the document grounding pipeline using AWS S3 as the data source. This example shows how to set up a pipeline by referencing the created secret. The pipeline will extract and prepare documents from the specified S3 bucket for grounding.\n", + "\n", + "🔄 You can follow a similar flow for SharePoint or other supported sources — choosing between SharePoint and S3 is flexible based on your document storage setup." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "✅ S3 Pipeline created successfully\n", + "Pipeline ID: 914b1a69-0413-4dfc-8582-5f9adf3a1fa5\n" + ] + } + ], + "source": [ + "\n", + "import { PipelinesApi } from '@sap-ai-sdk/document-grounding';\n", + "\n", + "const RESOURCE_GROUP = resourceGroupId;\n", + "\n", + "const s3PipelineRequest: CreatePipeline = {\n", + " type: 'S3',\n", + " configuration: {\n", + " destination: 's3-grounding-secret'\n", + " }\n", + "};\n", + "\n", + "try {\n", + " const pipeline = await PipelinesApi.createPipeline(\n", + " s3PipelineRequest,\n", + " { 'AI-Resource-Group': RESOURCE_GROUP }\n", + " ).execute();\n", + "\n", + " console.log('✅ S3 Pipeline created successfully');\n", + " console.log('Pipeline ID:', pipeline.pipelineId);\n", + "} catch (error: any) {\n", + " console.error('❌ Pipeline creation failed');\n", + " console.error(error.cause?.response?.data || error.message);\n", + "}\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Set Up the Orchestration Service" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now that we have our document grounding pipeline ready, we can configure the LLM Orchestration Service to process incoming user queries in context.\n", + "\n", + "We define a system message to describe the business scenario for the LLM — in this case, a Facility Solutions Company offering property maintenance and support services. The prompt template includes placeholders for the user’s query and the grounded document context (retrieved from S3 or SharePoint), making the responses personalized and context-aware.\n", + "\n", + "💡 This setup ensures that the LLM generates accurate, domain-specific, and grounded responses using the extracted content from your enterprise documents." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Based on the provided context, there are several complaints from customers regarding various services. Here are the specific complaints:\n", + "\n", + "1. **Window Cleaning Service Oversight**: Mark Phillips reported that the windows in the east wing of the Riverfront Business Complex were missed during the last cleaning service, and this has been a recurring issue.\n", + "\n", + "2. **Landscaping Service Issues**: James Anderson mentioned problems with the recent landscaping service at Crestview Gardens Apartments, where the shrubs were not trimmed properly, and debris was left behind.\n", + "\n", + "3. **Missed Cleaning in Conference Room**: Michael Nguyen noted that the conference room was missed during the cleaning service at their main office on Elm Street.\n", + "\n", + "4. **Malfunctioning Elevator**: Raj Patel is following up on a maintenance request regarding a malfunctioning elevator in Building B at Oakwood Corporate Center.\n", + "\n", + "5. 
**Heating System Malfunction**: Emily Carter reported an urgent issue with the heating system in her apartment at Greenview Residences, which needs immediate attention due\n" + ] + } + ], + "source": [ + "import {\n", + " OrchestrationClient, buildDocumentGroundingConfig, buildAzureContentSafetyFilter} \n", + " from '@sap-ai-sdk/orchestration';\n", + "\n", + "// ---------------------------\n", + "// Initialize Orchestration Client\n", + "// ---------------------------\n", + "const orchestrationClient = new OrchestrationClient({\n", + " promptTemplating: {\n", + " model: {\n", + " name: 'gpt-4o',\n", + " params: {\n", + " max_completion_tokens: 200,\n", + " temperature: 0\n", + " }\n", + " }\n", + " },\n", + " grounding: buildDocumentGroundingConfig({\n", + " placeholders: {\n", + " input: ['groundingRequest'],\n", + " output: 'groundingOutput' \n", + " },\n", + " filters: [\n", + " {\n", + " id: 'filter1',\n", + " data_repositories: ['a0165**************55f'],\n", + " data_repository_type: 'vector',\n", + " search_config: { max_chunk_count: 20 }\n", + " }\n", + " ]\n", + " }),\n", + " filtering: {\n", + " input: {\n", + " filters: [\n", + " buildAzureContentSafetyFilter({\n", + " Hate: 'ALLOW_SAFE_LOW',\n", + " Violence: 'ALLOW_SAFE_LOW'\n", + " })\n", + " ]\n", + " },\n", + " output: {\n", + " filters: [\n", + " buildAzureContentSafetyFilter({\n", + " Hate: 'ALLOW_SAFE_LOW',\n", + " Violence: 'ALLOW_SAFE_LOW'\n", + " })\n", + " ]\n", + " }\n", + " }\n", + "},\n", + "{resourceGroup:resourceGroupId}\n", + "\n", + ");\n", + "\n", + "// ---------------------------\n", + "// Send Chat Completion Request\n", + "// ---------------------------\n", + "const response = await orchestrationClient.chatCompletion({\n", + " messages: [\n", + " {\n", + " role: 'system',\n", + " content: `Facility Solutions Company provides services to luxury residential complexes, apartments,\n", + "individual homes, and commercial properties such as office buildings, retail spaces, industrial 
facilities, and educational institutions.\n", + "Customers are encouraged to reach out with maintenance requests, service deficiencies, follow-ups, or any issues they need by email.`\n", + " },\n", + " {\n", + " role: 'user',\n", + " content: `You are a helpful assistant for any queries.\n", + "Answer the request by providing relevant answers that fit the request.\n", + "Request: {{ ?groundingRequest }}\n", + "Context: {{ ?groundingOutput }}`\n", + " }\n", + " ],\n", + " placeholderValues: {\n", + " groundingRequest: 'Is there any complaint from customers?'\n", + " }\n", + "},\n", + ");\n", + "console.log(response.getContent());" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "OrchestrationResponse {\n", + " rawResponse: {\n", + " status: \u001b[33m200\u001b[39m,\n", + " statusText: \u001b[32m\"OK\"\u001b[39m,\n", + " headers: Object [AxiosHeaders] {\n", + " date: \u001b[32m\"Thu, 05 Feb 2026 10:07:32 GMT\"\u001b[39m,\n", + " \u001b[32m\"content-type\"\u001b[39m: \u001b[32m\"application/json\"\u001b[39m,\n", + " \u001b[32m\"content-length\"\u001b[39m: \u001b[32m\"16942\"\u001b[39m,\n", + " \u001b[32m\"x-upstream-service-time\"\u001b[39m: \u001b[32m\"2525\"\u001b[39m\n", + " },\n", + " config: {\n", + " transitional: {\n", + " silentJSONParsing: \u001b[33mtrue\u001b[39m,\n", + " forcedJSONParsing: \u001b[33mtrue\u001b[39m,\n", + " clarifyTimeoutError: \u001b[33mfalse\u001b[39m\n", + " },\n", + " adapter: [ \u001b[32m\"xhr\"\u001b[39m, \u001b[32m\"http\"\u001b[39m, \u001b[32m\"fetch\"\u001b[39m ],\n", + " transformRequest: [ \u001b[36m[Function: transformRequest]\u001b[39m ],\n", + " transformResponse: [ \u001b[36m[Function: transformResponse]\u001b[39m ],\n", + " timeout: \u001b[33m0\u001b[39m,\n", + " xsrfCookieName: \u001b[32m\"XSRF-TOKEN\"\u001b[39m,\n", + " xsrfHeaderName: \u001b[32m\"X-XSRF-TOKEN\"\u001b[39m,\n", + " maxContentLength: \u001b[33m-1\u001b[39m,\n", + " 
maxBodyLength: \u001b[33m-1\u001b[39m,\n", + " env: {\n", + " FormData: [Function: FormData] {\n", + " LINE_BREAK: \u001b[32m\"\\r\\n\"\u001b[39m,\n", + " DEFAULT_CONTENT_TYPE: \u001b[32m\"application/octet-stream\"\u001b[39m\n", + " },\n", + " Blob: \u001b[36m[class Blob]\u001b[39m\n", + " },\n", + " validateStatus: \u001b[36m[Function: validateStatus]\u001b[39m,\n", + " headers: Object [AxiosHeaders] {\n", + " Accept: \u001b[32m\"application/json, text/plain, */*\"\u001b[39m,\n", + " \u001b[32m\"Content-Type\"\u001b[39m: \u001b[32m\"application/json\"\u001b[39m,\n", + " authorization: \u001b[32m\"Bearer eyJ0eXAiOiJKV1QiLCJqaWQiOiJIR2FiTHZ3a0g5NzhMMXRMSzB6Zk5kV0NuQUl2ZW1sSlNPMXI0ZU1WUE5RPSIsImFsZyI6IlJTMjU2Iiwiamt1IjoiaHR0cHM6Ly9haWNvcmUtc2luLmF1dGhlbnRpY2F0aW9uLnNhcC5oYW5hLm9uZGVtYW5kLmNvbS90b2tlbl9rZXlzIiwia2lkIjoiZGVmYXVsdC1qd3Qta2V5LS02OTk3OTQ4NzkifQ.eyJzdWIiOiJzYi01M2QwMTFhZC1kNjk1LTQ5ZWItYjM0ZS1kODI0MDFkOThjOTchYjc5MzY2fHhzdWFhX3N0ZCFiNzcwODkiLCJpc3MiOiJodHRwczovL2FpY29yZS1zaW4uYXV0aGVudGljYXRpb24uc2FwLmhhbmEub25kZW1hbmQuY29tL29hdXRoL3Rva2VuIiwiYXV0aG9yaXRpZXMiOlsieHN1YWFfc3RkIWI3NzA4OS5kb2NrZXJyZWdpc3RyeXNlY3JldC5jcmVkZW50aWFscy53cml0ZSIsInhzdWFhX3N0ZCFiNzcwODkuc2NlbmFyaW9zLnJlYWQiLCJ4c3VhYV9zdGQhYjc3MDg5LmRvY2tlcnJlZ2lzdHJ5c2VjcmV0LmNyZWRlbnRpYWxzLnJlYWQiLCJ4c3VhYV9zdGQhYjc3MDg5Lm5vZGVzLndyaXRlIiwieHN1YWFfc3RkIWI3NzA4OS5zY2VuYXJpb3MuYXJ0aWZhY3RzLndyaXRlIiwieHN1YWFfc3RkIWI3NzA4OS5zY2VuYXJpb3MuZGVwbG95bWVudHMud3JpdGUiLCJ4c3VhYV9zdGQhYjc3MDg5Lm9iamVjdHN0b3Jlc2VjcmV0LmNyZWRlbnRpYWxzLnJlYWQiLCJ4c3VhYV9zdGQhYjc3MDg5LnJlc291cmNlZ3JvdXAud3JpdGUiLCJ4c3VhYV9zdGQhYjc3MDg5LmRlcGxveW1lbnRzLmxvZ3MucmVhZCIsInhzdWFhX3N0ZCFiNzcwODkuc2NlbmFyaW9zLm9yY2hlc3RyYXRpb25Db25maWdzLndyaXRlIiwieHN1YWFfc3RkIWI3NzA4OS5zY2VuYXJpb3MuZGVwbG95bWVudHMucmVhZCIsInhzdWFhX3N0ZCFiNzcwODkuYXBwbGljYXRpb25zLndyaXRlIiwieHN1YWFfc3RkIWI3NzA4OS5zY2VuYXJpb3MuZXhlY3V0aW9ucy53cml0ZSIsInhzdWFhX3N0ZCFiNzcwODkuc2NlbmFyaW9zLnByb21wdFRlbXBsYXRlcy5yZWFkIiwieHN1YWFfc3RkIWI3NzA4OS5zY2VuYXJpb3MuZGVwbG95bWVudHMucHJl
[…remainder of bearer token redacted]\"\u001b[39m,\n", + "  … verbose axios request/response object dump truncated (connection internals, agents, sockets, and raw headers omitted; the Authorization bearer token has been redacted) …\n", + "  … POST https://api.ai.internalprod.eu-central-1.aws.ml.hana.ondemand.com/v2/inference/deployments/d09e492a95564c54/v2/completion → 200 OK …\n", + "  … data.request_id: \"c9014546-d681-9bb0-b8f8-36981c425859\"; input/output filtering skipped per configuration; final_result: gpt-4o-2024-08-06 chat.completion (prompt_tokens 1215, completion_tokens 200, total_tokens 1415) …\n", + "  … intermediate_results.grounding.data.grounding_result: retrieved customer e-mails matching the query \"Is there any complaint from customers?\" (HVAC repair feedback, window-cleaning complaint, janitorial-service feedback, landscaping complaint, urgent AC repair request, emergency-maintenance inquiry, office-cleaning feedback, elevator follow-up, maintenance-package inquiry, roofing repair status) …\n", + " \u001b[32m\"Some time ago, a roofing repair was scheduled for our facility at Lakeshore Industrial Park, following a previous storm. 
Can you provide an update on progress and estimated completion?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Best,\\n\"\u001b[39m +\n", + " \u001b[32m\"Sam Rodgers```Subject: Urgent Heating System Malfunction\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hello,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"I'm writing to report an urgent issue with the heating system in my apartment at Greenview Residences. With temperatures falling, it’s becoming quite uncomfortable. Could someone be sent over today to address this?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Thank you,\\n\"\u001b[39m +\n", + " \u001b[32m\"Emily Carter```Subject: Inquiry on Eco-Friendly Cleaning Services\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Dear Facility Solutions Team,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"We are exploring options for eco-friendly cleaning services for our new corporate office at Sunnyside Greens. Could you provide information on your green cleaning initiatives and products used?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Thank you,\\n\"\u001b[39m +\n", + " \u001b[32m\"Angela Spinelli```Subject: Inquiry on Trash and Recycling Services\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hello,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"We’re interested in discussing comprehensive waste management and recycling solutions for our commercial spaces at Maplewood Retail Hub. 
Could you provide details on available packages and services?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Thank you,\\n\"\u001b[39m +\n", + " \u001b[32m\"Oliver Lewis```Subject: Follow-Up on Pest Control Inquiry\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hello,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"I manage the Pleasant Acres neighborhood association, and I previously inquired about pest control services, particularly for mosquito treatment in our shared parks. Any updates on available schedules?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Thank you,\\n\"\u001b[39m +\n", + " \u001b[32m\"Christine Allen```Subject: Query on Janitorial Service Agreements\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hello,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"We’re reconsidering our janitorial service agreements for Oxford University’s libraries. Could you provide details on your team’s availability and specialized cleaning methods?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Thank you,\\n\"\u001b[39m +\n", + " \u001b[32m\"Henry Collins```Subject: Request for Pest Control Service\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hi Facility Solutions Team,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"I’m reaching out to request pest control services for our apartment complex, Willow Creek Estates. We've noticed an increase in ants around Building D, particularly in the communal kitchen areas. 
Could this be scheduled for next week?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Thank you,\\n\"\u001b[39m +\n", + " \u001b[32m\"Laura Reynolds```Subject: Request for Plumbing Maintenance\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hello Facility Solutions Team,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"I'm writing to report a persistent leak in the residents' kitchen area at Skyview Towers. The drip from the faucet is becoming more pronounced, and water pressure seems affected as well. Could we have someone take a look at this sometime this week?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Thank you for your prompt attention to this matter.\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Best,\\n\"\u001b[39m +\n", + " \u001b[32m\"Diana Thompson```Subject: Immediate Assistance Required for Water Damage\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hi,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"I've discovered water damage on the ceiling of Unit 4B at Parchment Creek Apartments, presumably due to a plumbing issue in the unit above. This needs immediate attention to prevent further damage. Can you please prioritize this inquiry?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Regards,\\n\"\u001b[39m +\n", + " \u001b[32m\"Kevin Alvarez```Subject: Urgent Need for Electrical Repair\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hi Facility Solutions Team,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"The lighting in our showroom at Midtown Motors has been flickering, affecting our daily operations. This requires urgent attention. 
Could an electrician be scheduled for today or tomorrow morning?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Appreciatively,\\n\"\u001b[39m +\n", + " \u001b[32m\"Jessica Tran```Subject: Request for Additional Cleaning Supplies\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hi,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Could we arrange for additional cleaning supplies for our school facilities at Riverdale High? We’re running low on disinfectants and hand sanitizers. We would appreciate it if this could be expedited.\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Thanks in advance,\\n\"\u001b[39m +\n", + " \u001b[32m\"Karen Mitchell\"\u001b[39m\n", + " }\n", + " },\n", + " templating: [\n", + " {\n", + " role: \u001b[32m\"system\"\u001b[39m,\n", + " content: \u001b[32m\"Facility Solutions Company provides services to luxury residential complexes, apartments,\\n\"\u001b[39m +\n", + " \u001b[32m\"individual homes, and commercial properties such as office buildings, retail spaces, industrial facilities, and educational institutions.\\n\"\u001b[39m +\n", + " \u001b[32m\"Customers are encouraged to reach out with maintenance requests, service deficiencies, follow-ups, or any issues they need by email.\"\u001b[39m\n", + " },\n", + " {\n", + " content: \u001b[32m\"You are a helpful assistant for any queries.\\n\"\u001b[39m +\n", + " \u001b[32m\"Answer the request by providing relevant answers that fit the request.\\n\"\u001b[39m +\n", + " \u001b[32m\"Request: Is there any complaint from customers?\\n\"\u001b[39m +\n", + " \u001b[32m\"Context: Subject: Feedback on HVAC Repair\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hello,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"I wanted to thank your team for the prompt repair of our HVAC system at Lakeview Corporate Offices. 
However, there’s been a minor noise issue since. Is it possible to have a technician look into this?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Warm regards,\\n\"\u001b[39m +\n", + " \u001b[32m\"Robert Kim```Subject: Complaint: Window Cleaning Service Oversight\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Dear Facility Solutions,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"I wanted to bring to your attention that the windows in the east wing of the Riverfront Business Complex were missed during the last cleaning service. This has been recurring, and I would appreciate it if we could resolve this soon.\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Regards,\\n\"\u001b[39m +\n", + " \u001b[32m\"Mark Phillips```Subject: Feedback Request on Janitorial Service\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hi,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Following the recent janitorial service at Brookdale High School, we’ve noticed a significant improvement in cleanliness. However, some classrooms were skipped last week, and we’d appreciate a review to ensure thoroughness in future visits.\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Warm regards,\\n\"\u001b[39m +\n", + " \u001b[32m\"Lisa Chambers```Subject: Complaint About Recent Landscaping Service\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Dear Facility Solutions Team,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"I’m writing to report some issues with the recent landscaping service at Crestview Gardens Apartments. The shrubs were not trimmed properly, and debris was left behind. 
Can someone come to address this soon?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Sincerely,\\n\"\u001b[39m +\n", + " \u001b[32m\"James Anderson```Subject: Urgent Request for AC Repair\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Facility Solutions Team,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"The air conditioning unit at Sunridge Mall is malfunctioning, causing discomfort to shoppers and tenants during peak hours. Please could a technician visit us at the earliest possible convenience?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Appreciatively,\\n\"\u001b[39m +\n", + " \u001b[32m\"Tony Larson```Subject: Inquiry About Emergency Maintenance Service\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hello,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"I manage the Central Heights office building, and we’re considering your company for emergency maintenance support. Could you provide more information about response times and coverage for after-hours incidents?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Best,\\n\"\u001b[39m +\n", + " \u001b[32m\"Thomas Whitaker```Subject: Feedback on Last Week's Cleaning Service\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hi Facility Solutions Team,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"I want to express my appreciation for the cleaning service last week at our main office on Elm Street. Andre did an excellent job, but I noticed the conference room was missed. 
Can this be added to the next scheduled cleaning?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Best regards,\\n\"\u001b[39m +\n", + " \u001b[32m\"Michael Nguyen```Subject: Follow-Up Needed on Office Maintenance Request\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hello,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"I’m following up on the maintenance request I submitted about the malfunctioning elevator in Building B at Oakwood Corporate Center. Any update on when this will be resolved?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Thanks,\\n\"\u001b[39m +\n", + " \u001b[32m\"Raj Patel```Subject: Inquiry About Custom Maintenance Packages\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hi,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"We are exploring custom maintenance packages for our retail space in Downtown Plaza. Could we schedule a meeting to discuss options and pricing?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Thanks,\\n\"\u001b[39m +\n", + " \u001b[32m\"Sophia Martinez```Subject: Need Update on Roofing Repair Status\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hello,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Some time ago, a roofing repair was scheduled for our facility at Lakeshore Industrial Park, following a previous storm. 
Can you provide an update on progress and estimated completion?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Best,\\n\"\u001b[39m +\n", + " \u001b[32m\"Sam Rodgers```Subject: Urgent Heating System Malfunction\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hello,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"I'm writing to report an urgent issue with the heating system in my apartment at Greenview Residences. With temperatures falling, it’s becoming quite uncomfortable. Could someone be sent over today to address this?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Thank you,\\n\"\u001b[39m +\n", + " \u001b[32m\"Emily Carter```Subject: Inquiry on Eco-Friendly Cleaning Services\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Dear Facility Solutions Team,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"We are exploring options for eco-friendly cleaning services for our new corporate office at Sunnyside Greens. Could you provide information on your green cleaning initiatives and products used?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Thank you,\\n\"\u001b[39m +\n", + " \u001b[32m\"Angela Spinelli```Subject: Inquiry on Trash and Recycling Services\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hello,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"We’re interested in discussing comprehensive waste management and recycling solutions for our commercial spaces at Maplewood Retail Hub. 
Could you provide details on available packages and services?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Thank you,\\n\"\u001b[39m +\n", + " \u001b[32m\"Oliver Lewis```Subject: Follow-Up on Pest Control Inquiry\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hello,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"I manage the Pleasant Acres neighborhood association, and I previously inquired about pest control services, particularly for mosquito treatment in our shared parks. Any updates on available schedules?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Thank you,\\n\"\u001b[39m +\n", + " \u001b[32m\"Christine Allen```Subject: Query on Janitorial Service Agreements\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hello,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"We’re reconsidering our janitorial service agreements for Oxford University’s libraries. Could you provide details on your team’s availability and specialized cleaning methods?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Thank you,\\n\"\u001b[39m +\n", + " \u001b[32m\"Henry Collins```Subject: Request for Pest Control Service\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hi Facility Solutions Team,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"I’m reaching out to request pest control services for our apartment complex, Willow Creek Estates. We've noticed an increase in ants around Building D, particularly in the communal kitchen areas. 
Could this be scheduled for next week?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Thank you,\\n\"\u001b[39m +\n", + " \u001b[32m\"Laura Reynolds```Subject: Request for Plumbing Maintenance\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hello Facility Solutions Team,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"I'm writing to report a persistent leak in the residents' kitchen area at Skyview Towers. The drip from the faucet is becoming more pronounced, and water pressure seems affected as well. Could we have someone take a look at this sometime this week?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Thank you for your prompt attention to this matter.\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Best,\\n\"\u001b[39m +\n", + " \u001b[32m\"Diana Thompson```Subject: Immediate Assistance Required for Water Damage\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hi,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"I've discovered water damage on the ceiling of Unit 4B at Parchment Creek Apartments, presumably due to a plumbing issue in the unit above. This needs immediate attention to prevent further damage. Can you please prioritize this inquiry?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Regards,\\n\"\u001b[39m +\n", + " \u001b[32m\"Kevin Alvarez```Subject: Urgent Need for Electrical Repair\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hi Facility Solutions Team,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"The lighting in our showroom at Midtown Motors has been flickering, affecting our daily operations. This requires urgent attention. 
Could an electrician be scheduled for today or tomorrow morning?\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Appreciatively,\\n\"\u001b[39m +\n", + " \u001b[32m\"Jessica Tran```Subject: Request for Additional Cleaning Supplies\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Hi,\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Could we arrange for additional cleaning supplies for our school facilities at Riverdale High? We’re running low on disinfectants and hand sanitizers. We would appreciate it if this could be expedited.\\n\"\u001b[39m +\n", + " \u001b[32m\"\\n\"\u001b[39m +\n", + " \u001b[32m\"Thanks in advance,\\n\"\u001b[39m +\n", + " \u001b[32m\"Karen Mitchell\"\u001b[39m,\n", + " role: \u001b[32m\"user\"\u001b[39m\n", + " }\n", + " ],\n", + " input_filtering: {\n", + " message: \u001b[32m\"Filtering was skipped. Please check your configuration if this was not intended. \"\u001b[39m,\n", + " data: { azure_content_safety: {} }\n", + " },\n", + " output_filtering: {\n", + " message: \u001b[32m\"Choice 0: Filtering was skipped. 
Please check your configuration if this was not intended.\\n\"\u001b[39m,\n", + " data: { choices: \u001b[36m[Array]\u001b[39m }\n", + " },\n", + " llm: {\n", + " id: \u001b[32m\"chatcmpl-D5qYny0TzT9C669yi4I8UV0kJp5Sg\"\u001b[39m,\n", + " object: \u001b[32m\"chat.completion\"\u001b[39m,\n", + " created: \u001b[33m1770286053\u001b[39m,\n", + " model: \u001b[32m\"gpt-4o-2024-08-06\"\u001b[39m,\n", + " system_fingerprint: \u001b[32m\"fp_4a331a0222\"\u001b[39m,\n", + " choices: [ \u001b[36m[Object]\u001b[39m ],\n", + " usage: {\n", + " completion_tokens: \u001b[33m200\u001b[39m,\n", + " prompt_tokens: \u001b[33m1215\u001b[39m,\n", + " total_tokens: \u001b[33m1415\u001b[39m,\n", + " prompt_tokens_details: \u001b[36m[Object]\u001b[39m,\n", + " completion_tokens_details: \u001b[36m[Object]\u001b[39m\n", + " }\n", + " }\n", + " },\n", + " final_result: {\n", + " id: \u001b[32m\"chatcmpl-D5qYny0TzT9C669yi4I8UV0kJp5Sg\"\u001b[39m,\n", + " object: \u001b[32m\"chat.completion\"\u001b[39m,\n", + " created: \u001b[33m1770286053\u001b[39m,\n", + " model: \u001b[32m\"gpt-4o-2024-08-06\"\u001b[39m,\n", + " system_fingerprint: \u001b[32m\"fp_4a331a0222\"\u001b[39m,\n", + " choices: [ { index: \u001b[33m0\u001b[39m, message: \u001b[36m[Object]\u001b[39m, finish_reason: \u001b[32m\"length\"\u001b[39m } ],\n", + " usage: {\n", + " completion_tokens: \u001b[33m200\u001b[39m,\n", + " prompt_tokens: \u001b[33m1215\u001b[39m,\n", + " total_tokens: \u001b[33m1415\u001b[39m,\n", + " prompt_tokens_details: { audio_tokens: \u001b[33m0\u001b[39m, cached_tokens: \u001b[33m0\u001b[39m },\n", + " completion_tokens_details: {\n", + " accepted_prediction_tokens: \u001b[33m0\u001b[39m,\n", + " audio_tokens: \u001b[33m0\u001b[39m,\n", + " reasoning_tokens: \u001b[33m0\u001b[39m,\n", + " rejected_prediction_tokens: \u001b[33m0\u001b[39m\n", + " }\n", + " }\n", + " }\n", + " }\n", + "}" + ] + }, + "execution_count": 15, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + 
"response "
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Deno",
+ "language": "typescript",
+ "name": "deno"
+ },
+ "language_info": {
+ "codemirror_mode": "typescript",
+ "file_extension": ".ts",
+ "mimetype": "text/x.typescript",
+ "name": "typescript",
+ "nbconvert_exporter": "script",
+ "pygments_lexer": "typescript",
+ "version": "5.8.3"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/tutorials/ai-core-orchestration-grounding-v2/grounding-genai-sdk-Tutorial.ipynb b/tutorials/ai-core-orchestration-grounding-v2/grounding-genai-sdk-Tutorial.ipynb
new file mode 100644
index 0000000000..33e5d3d840
--- /dev/null
+++ b/tutorials/ai-core-orchestration-grounding-v2/grounding-genai-sdk-Tutorial.ipynb
@@ -0,0 +1,696 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Leverage Document Grounding in Orchestration Service for RAG-based Content Generation"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In this learning journey, you will learn how to leverage the Document Grounding module in the Orchestration Service to generate content using the Retrieval-Augmented Generation (RAG) approach.\n",
+ "The Document Grounding module grounds input questions in relevant documents.\n",
+ "The grounding process involves retrieving relevant documents from a knowledge base and using them to generate high-quality responses.\n",
+ "The knowledge base can be a collection of documents in a SharePoint folder, an AWS S3 bucket, an Elasticsearch engine, or a data repository that contains vectors.\n",
+ "\n",
+ "In this learning journey, you will perform the following steps:\n",
+ "- Create the knowledge base with the relevant documents.\n",
+ "- Configure the Document Grounding module in the Orchestration Service.\n",
+ "- Generate content based on the knowledge base using the RAG approach.\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Prerequisites\n",
+ "Install the Generative AI Hub SDK using the following command:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%pip install \"sap-ai-sdk-gen[all]\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Authenticating AI Core" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [], + "source": [ + "import json\n", + "import os\n", + "from ai_core_sdk.ai_core_v2_client import AICoreV2Client\n", + "\n", + "with open(\"creds.json\") as f:\n", + " credCF = json.load(f)\n", + "\n", + "# Set AI Core env vars (NO resource group yet)\n", + "os.environ[\"AICORE_AUTH_URL\"] = credCF[\"url\"] + \"/oauth/token\"\n", + "os.environ[\"AICORE_CLIENT_ID\"] = credCF[\"clientid\"]\n", + "os.environ[\"AICORE_CLIENT_SECRET\"] = credCF[\"clientsecret\"]\n", + "os.environ[\"AICORE_BASE_URL\"] = credCF[\"serviceurls\"][\"AI_API_URL\"] + \"/v2\"\n", + "\n", + "ai_core_client = AICoreV2Client(\n", + " base_url=os.environ[\"AICORE_BASE_URL\"],\n", + " auth_url=os.environ[\"AICORE_AUTH_URL\"],\n", + " client_id=os.environ[\"AICORE_CLIENT_ID\"],\n", + " client_secret=os.environ[\"AICORE_CLIENT_SECRET\"]\n", + ")\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Resource Group" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This step creates a new resource group in SAP AI Core and tags it with a label (document-grounding) to logically group related resources. The access token is used for authorized API access." 
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Created resource group: rg-test1\n"
+ ]
+ }
+ ],
+ "source": [
+ "from ai_core_sdk.models.resource_group import Label\n",
+ "\n",
+ "# Name of the resource group to create\n",
+ "resource_group = \"rg-test1\" \n",
+ "\n",
+ "labels = [\n",
+ "    Label(\n",
+ "        key=\"ext.ai.sap.com/document-grounding\",\n",
+ "        value=\"true\"\n",
+ "    )\n",
+ "]\n",
+ "\n",
+ "# Create Resource Group\n",
+ "try:\n",
+ "    rg = ai_core_client.resource_groups.create(\n",
+ "        resource_group_id = resource_group,\n",
+ "        labels = labels\n",
+ "    )\n",
+ "    print(\"Created resource group:\", rg.resource_group_id)\n",
+ "except Exception as e:\n",
+ "    if \"already exists\" in str(e):\n",
+ "        print(f\"Resource group '{resource_group}' already exists\")\n",
+ "    else:\n",
+ "        raise\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Scoped AI Core client"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Set Resource Group context\n",
+ "os.environ[\"AICORE_RESOURCE_GROUP\"] = resource_group\n",
+ "\n",
+ "scoped_ai_core_client = AICoreV2Client(\n",
+ "    base_url=os.environ[\"AICORE_BASE_URL\"],\n",
+ "    auth_url=os.environ[\"AICORE_AUTH_URL\"],\n",
+ "    client_id=os.environ[\"AICORE_CLIENT_ID\"],\n",
+ "    client_secret=os.environ[\"AICORE_CLIENT_SECRET\"],\n",
+ "    resource_group=os.environ[\"AICORE_RESOURCE_GROUP\"]\n",
+ ")\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Configuration and Deployment"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This step creates a configuration for an LLM orchestration scenario in SAP AI Core using the given executableId and scenarioId. A successful request returns a 201 Created status along with the configuration ID, which is used to create the deployment in the next step."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Configuration created successfully with ID: 4db27fe7-0cb4-4fd6-a189-25d519177350 and Name: config-new-orchestration\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Define scenario ID, executable ID, and configuration suffix \n",
+ "scenario_id = \"orchestration\" \n",
+ "executable_id = \"orchestration\" \n",
+ "config_suffix = \"config-new\" # Enter your configuration name \n",
+ "config_name = f\"{config_suffix}-orchestration\" \n",
+ "\n",
+ "# Create a new configuration \n",
+ "config = scoped_ai_core_client.configuration.create( \n",
+ "    scenario_id=scenario_id, \n",
+ "    executable_id=executable_id, \n",
+ "    name=config_name \n",
+ ") \n",
+ "print(f\"Configuration created successfully with ID: {config.id} and Name: {config_name}\") "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This step deploys the LLM configuration. The following cells poll until the deployment is ready and retrieve the deploymentUrl (the orchestration URL), which is used to trigger orchestration requests."
+ ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Deployment created successfully with ID: d046384b136e97c9\n" + ] + } + ], + "source": [ + "# Create a deployment using the configuration ID from the previous cell \n", + "\n", + "deployment = scoped_ai_core_client.deployment.create(configuration_id=config.id) \n", + "print(f\"Deployment created successfully with ID: {deployment.id}\") " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Poll until the deployment is ready" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Deployment status: Status.UNKNOWN\n", + "Deployment status: Status.UNKNOWN\n", + "Deployment status: Status.UNKNOWN\n", + "Deployment status: Status.UNKNOWN\n", + "Deployment status: Status.UNKNOWN\n", + "Deployment status: Status.UNKNOWN\n", + "Deployment status: Status.UNKNOWN\n", + "Deployment status: Status.UNKNOWN\n", + "Deployment status: Status.UNKNOWN\n", + "Deployment status: Status.PENDING\n", + "Deployment status: Status.PENDING\n", + "Deployment status: Status.PENDING\n", + "Deployment status: Status.PENDING\n", + "Deployment status: Status.PENDING\n", + "Deployment status: Status.PENDING\n", + "Deployment status: Status.RUNNING\n", + "Deployment status: Status.RUNNING\n", + "Deployment status: Status.RUNNING\n", + "Deployment status: Status.RUNNING\n", + "Deployment status: Status.RUNNING\n" + ] + }, + { + "ename": "KeyboardInterrupt", + "evalue": "", + "output_type": "error", + "traceback": [ + "\u001b[31m---------------------------------------------------------------------------\u001b[39m", + "\u001b[31mKeyboardInterrupt\u001b[39m Traceback (most recent call last)", + "\u001b[36mCell\u001b[39m\u001b[36m \u001b[39m\u001b[32mIn[7]\u001b[39m\u001b[32m, line 14\u001b[39m\n\u001b[32m 11\u001b[39m 
\u001b[38;5;28;01mif\u001b[39;00m status == \u001b[33m\"\u001b[39m\u001b[33mRUNNING\u001b[39m\u001b[33m\"\u001b[39m:\n\u001b[32m 12\u001b[39m \u001b[38;5;28;01mbreak\u001b[39;00m\n\u001b[32m---> \u001b[39m\u001b[32m14\u001b[39m \u001b[43mtime\u001b[49m\u001b[43m.\u001b[49m\u001b[43msleep\u001b[49m\u001b[43m(\u001b[49m\u001b[32;43m10\u001b[39;49m\u001b[43m)\u001b[49m\n", + "\u001b[31mKeyboardInterrupt\u001b[39m: " + ] + } + ], + "source": [ + "import time\n", + "\n", + "while True:\n", + " deployment_details = scoped_ai_core_client.deployment.get(\n", + " deployment_id=deployment.id\n", + " )\n", + "\n", + " status = deployment_details.status\n", + " print(\"Deployment status:\", status)\n", + "\n", + " # status is an enum (printed as Status.RUNNING), so compare its name, not the enum itself\n", + " if status.name == \"RUNNING\":\n", + " break\n", + "\n", + " time.sleep(10)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Here, you explicitly define the orchestration service deployment URL (orchestration_service_url), which points to your deployed LLM configuration. This URL is used to send inference requests (such as prompt executions) to the SAP AI Core orchestration service." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Generic Secret" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### In this tutorial, we demonstrate how to create a vector knowledge base by connecting either SharePoint or AWS S3 as the document source; both options are supported, so choose the one that fits your setup." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Creating a knowledge base using SharePoint - Option 1\n", + "\n", + "This step creates a secret in SAP AI Core that stores Base64-encoded credentials for SharePoint access, securely enabling document grounding workflows via Microsoft Graph.
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "json_data = {\n", + " 'name': '',\n", + " 'data': {\n", + " 'description': '',\n", + " 'clientId': '',\n", + " 'authentication': 'T0F1dGgyUGFzc3dvcmQ=',\n", + " 'tokenServiceURL': '',\n", + " 'password': '',\n", + " 'url': 'aHR0cHM6Ly9ncmFwaC5taWNyb3NvZnQuY29t',\n", + " 'tokenServiceURLType': 'RGVkaWNhdGVk',\n", + " 'user': '',\n", + " 'clientSecret': '',\n", + " 'scope': 'aHR0cHM6Ly9ncmFwaC5taWNyb3NvZnQuY29tLy5kZWZhdWx0',\n", + " },\n", + " 'labels': [\n", + " {\n", + " 'key': 'ext.ai.sap.com/document-grounding',\n", + " 'value': 'true',\n", + " },\n", + " ],\n", + "}\n", + "\n", + "secret = requests.post(f'{AI_API_URL}/v2/admin/secrets', headers=headers, json=json_data)\n", + "\n", + "secret.json()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Creating a knowledge base using AWS S3 - Option 2" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Instead of SharePoint, you can use AWS S3 as the document repository for grounding. In the example below, we securely store credentials as a secret named aws-s3-secret, which is later referenced during pipeline creation.\n", + "\n", + "Both SharePoint and AWS S3 are supported and interchangeable; choose the one that matches your infrastructure."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "\n", + "# Prepare secret payload\n", + "secret_payload = {\n", + " \"name\": \"\",\n", + " \"data\": { \n", + " \"description\": \"\",\n", + " \"url\": \"\",\n", + " \"authentication\": \"Tm9BdXRoZW50aWNhdGlvbg==\",\n", + " \"access_key_id\": \"\",\n", + " \"secret_access_key\": \"\",\n", + " \"bucket\": \"\",\n", + " \"region\": \"\",\n", + " \"host\": \"\",\n", + " \"username\": \"\"\n", + " },\n", + " \"labels\": [\n", + " {\n", + " \"key\": \"ext.ai.sap.com/document-grounding\",\n", + " \"value\": \"true\"\n", + " },\n", + " {\n", + " \"key\": \"ext.ai.sap.com/documentRepositoryType\",\n", + " \"value\": \"S3\"\n", + " }\n", + " ]\n", + "}\n", + "\n", + "# Create secret\n", + "response = requests.post(f\"{AI_API_URL}/v2/admin/secrets\", headers=headers, json=secret_payload)\n", + "print(\"Secret creation:\", response.status_code, response.text)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Pipeline Creation" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Pipeline creation using SharePoint - Option 1\n", + "In this step, we create a document grounding pipeline using SharePoint as the knowledge source.
The pipeline connects to the document repository defined in the SharePoint site using the previously created secret." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "json_data = {\n", + " 'type': 'MSSharePoint',\n", + " 'configuration': {\n", + " 'destination': '',\n", + " 'sharePoint': {\n", + " 'site': {\n", + " 'name': 'Dev_blr3_document',\n", + " \"includePaths\": [\n", + " \"/sample_emails/output_texts\"\n", + " ]\n", + " },\n", + " },\n", + " },\n", + "}\n", + "\n", + "while True:\n", + " pipeline = requests.post(f'{AI_API_URL}/v2/lm/document-grounding/pipelines', headers=headers, json=json_data)\n", + " if pipeline.status_code == 201:\n", + " break\n", + "\n", + "pipeline.json()['pipelineId']" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Pipeline creation using AWS S3 - Option 2\n", + "Once the secret (aws-s3-secret) is created, we can configure the document grounding pipeline using AWS S3 as the data source. This example shows how to set up a pipeline by referencing the created secret. The pipeline will extract and prepare documents from the specified S3 bucket for grounding.\n", + "\n", + "🔄 You can follow a similar flow for SharePoint or other supported sources; choose between SharePoint and S3 based on your document storage setup."
+ ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Reference the Vector knowledge base using the pipeline ID: 6e81abec-40cb-4c54-8c40-d9e0bdcf491d\n", + "lastStarted='' status='NEW'\n" + ] + } + ], + "source": [ + "from gen_ai_hub.proxy import get_proxy_client\n", + "from gen_ai_hub.document_grounding.client import PipelineAPIClient\n", + "from gen_ai_hub.document_grounding.models.pipeline import S3PipelineCreateRequest, CommonConfiguration\n", + "\n", + "aicore_client = get_proxy_client()\n", + "pipelines_api_client = PipelineAPIClient(aicore_client)\n", + "generic_secret_s3_bucket = \"aws-s3-secretnew\"\n", + "s3_config = S3PipelineCreateRequest(configuration=CommonConfiguration(destination=generic_secret_s3_bucket))\n", + "response = pipelines_api_client.create_pipeline(s3_config)\n", + "print(f\"Reference the Vector knowledge base using the pipeline ID: {response.pipelineId}\")\n", + "# check the status of the vectorization pipeline until it is completed\n", + "print(pipelines_api_client.get_pipeline_status(response.pipelineId))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Set Up the Orchestration Service" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now that we have our document grounding pipeline ready, we can configure the LLM Orchestration Service to process incoming user queries in context.\n", + "\n", + "We define a system message to describe the business scenario for the LLM — in this case, a Facility Solutions Company offering property maintenance and support services. 
The prompt template includes placeholders for the user’s query and the grounded document context (retrieved from S3 or SharePoint), making the responses personalized and context-aware.\n", + "\n", + "💡 This setup ensures that the LLM generates accurate, domain-specific, and grounded responses using the extracted content from your enterprise documents." + ] + }, + { + "cell_type": "code", + "execution_count": 29, + "metadata": {}, + "outputs": [], + "source": [ + "\n", + "from gen_ai_hub.proxy import get_proxy_client\n", + "from gen_ai_hub.orchestration_v2.models.message import SystemMessage, UserMessage\n", + "from gen_ai_hub.orchestration_v2.models.template import Template\n", + "from gen_ai_hub.orchestration_v2.service import OrchestrationService\n", + "\n", + "# Set up Orchestration Service (V2)\n", + "proxy_client = get_proxy_client()\n", + "orchestration_service = OrchestrationService(proxy_client)\n", + "\n", + "# Runtime input for the orchestration pipeline\n", + "template = Template(\n", + " template=[\n", + " SystemMessage(content=\"\"\"Facility Solutions Company provides services to luxury residential complexes, \n", + " apartments, individual homes, and commercial properties such as office buildings, \n", + " retail spaces, industrial facilities, and educational institutions. \n", + " Customers are encouraged to reach out with maintenance requests, service deficiencies, \n", + " follow-ups, or any issues they need by email.\"\"\"),\n", + " UserMessage(content=\"\"\"You are a helpful assistant for any queries for answering questions. 
\n", + " Answer the request by providing relevant answers that fit to the request.\\n\\n\n", + " Request: {{?user_query}}\\n\n", + " Context: {{?grounding_response}}\"\"\")\n", + " ]\n", + ")\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Define the LLM" + ] + }, + { + "cell_type": "code", + "execution_count": 30, + "metadata": {}, + "outputs": [], + "source": [ + "from gen_ai_hub.orchestration_v2.models.llm_model_details import LLMModelDetails\n", + "\n", + "llm = LLMModelDetails(name=\"gpt-4o\", params={\"max_completion_tokens\": 2048})\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from gen_ai_hub.orchestration_v2.models.document_grounding import (GroundingModuleConfig,GroundingType,\n", + "DocumentGroundingFilter,DataRepositoryType,DocumentGroundingConfig,DocumentGroundingPlaceholders,GroundingSearchConfig)\n", + "\n", + "filters=[DocumentGroundingFilter(id=\"vector\",\n", + " data_repositories=[\"a0165*************10855f\"],\n", + " data_repository_type=DataRepositoryType.VECTOR.value,\n", + " search_config= GroundingSearchConfig(max_chunk_count=20)\n", + " )]\n", + "\n", + "\n", + "placeholders = DocumentGroundingPlaceholders(\n", + " input=[\"user_query\"],\n", + " output=\"grounding_response\"\n", + ")\n", + "\n", + "# Grounding module config\n", + "grounding_config = GroundingModuleConfig(\n", + " type=GroundingType.DOCUMENT_GROUNDING_SERVICE.value,\n", + " config=DocumentGroundingConfig(\n", + " filters=filters,\n", + " placeholders=placeholders\n", + " )\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": 32, + "metadata": {}, + "outputs": [], + "source": [ + "from gen_ai_hub.orchestration_v2.models.template import PromptTemplatingModuleConfig\n", + "from gen_ai_hub.orchestration_v2.models.config import ModuleConfig, OrchestrationConfig\n", + "\n", + "prompt_template = PromptTemplatingModuleConfig(prompt=template,\n", + " 
model=llm)\n", + "\n", + "module_config = ModuleConfig(prompt_templating=prompt_template, grounding = grounding_config)\n", + "\n", + "config = OrchestrationConfig(modules=module_config)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + " #### Step 3: Generate context-relevant answer for a user query\n", + " - We now invoke the orchestration service by providing a user query. The query is grounded against the document index, and the LLM uses the grounding result to generate an informed response." + ] + }, + { + "cell_type": "code", + "execution_count": 33, + "metadata": {}, + "outputs": [], + "source": [ + "from gen_ai_hub.proxy import get_proxy_client\n", + "from gen_ai_hub.orchestration_v2.service import OrchestrationService\n", + "\n", + "proxy_client = get_proxy_client()\n", + "\n", + "orchestration_service = OrchestrationService(\n", + " proxy_client=proxy_client,\n", + " config=config\n", + ")\n" + ] + }, + { + "cell_type": "code", + "execution_count": 34, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Yes, there are complaints in the context provided. Here are the complaints identified:\n", + "\n", + "1. **Window Cleaning Service Oversight**: Mark Phillips reported that the windows in the east wing of the Riverfront Business Complex were missed during the last cleaning service, and this has been a recurring issue.\n", + "\n", + "2. 
**Landscaping Service Issues**: James Anderson reported that the recent landscaping service at Crestview Gardens Apartments was not performed properly, with shrubs not being trimmed correctly and debris left behind.\n", + "\n", + "These complaints should be addressed promptly to ensure customer satisfaction.\n" + ] + } + ], + "source": [ + "response = orchestration_service.run(placeholder_values={\"user_query\": \"Is there any complaint?\"})\n", + "print(response.final_result.choices[0].message.content)\n" + ] + }, + { + "cell_type": "code", + "execution_count": 35, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "request_id='bf3d5b3d-c894-9994-88fb-b7c94991230a' intermediate_results=ModuleResults(grounding=GenericModuleResult(message='grounding result', data={'grounding_query': 'grounding call', 'grounding_result': \"Subject: Feedback on HVAC Repair\\n\\nHello,\\n\\nI wanted to thank your team for the prompt repair of our HVAC system at Lakeview Corporate Offices. However, there’s been a minor noise issue since. Is it possible to have a technician look into this?\\n\\nWarm regards,\\nRobert Kim```Subject: Complaint: Window Cleaning Service Oversight\\n\\nDear Facility Solutions,\\n\\nI wanted to bring to your attention that the windows in the east wing of the Riverfront Business Complex were missed during the last cleaning service. This has been recurring, and I would appreciate it if we could resolve this soon.\\n\\nRegards,\\nMark Phillips```Subject: Complaint About Recent Landscaping Service\\n\\nDear Facility Solutions Team,\\n\\nI’m writing to report some issues with the recent landscaping service at Crestview Gardens Apartments. The shrubs were not trimmed properly, and debris was left behind. 
Can someone come to address this soon?\\n\\nSincerely,\\nJames Anderson```Subject: Feedback Request on Janitorial Service\\n\\nHi,\\n\\nFollowing the recent janitorial service at Brookdale High School, we’ve noticed a significant improvement in cleanliness. However, some classrooms were skipped last week, and we’d appreciate a review to ensure thoroughness in future visits.\\n\\nWarm regards,\\nLisa Chambers```Subject: Follow-Up Needed on Office Maintenance Request\\n\\nHello,\\n\\nI’m following up on the maintenance request I submitted about the malfunctioning elevator in Building B at Oakwood Corporate Center. Any update on when this will be resolved?\\n\\nThanks,\\nRaj Patel```Subject: Urgent Heating System Malfunction\\n\\nHello,\\n\\nI'm writing to report an urgent issue with the heating system in my apartment at Greenview Residences. With temperatures falling, it’s becoming quite uncomfortable. Could someone be sent over today to address this?\\n\\nThank you,\\nEmily Carter```Subject: Follow-Up on Pest Control Inquiry\\n\\nHello,\\n\\nI manage the Pleasant Acres neighborhood association, and I previously inquired about pest control services, particularly for mosquito treatment in our shared parks. Any updates on available schedules?\\n\\nThank you,\\nChristine Allen```Subject: Urgent Request for AC Repair\\n\\nFacility Solutions Team,\\n\\nThe air conditioning unit at Sunridge Mall is malfunctioning, causing discomfort to shoppers and tenants during peak hours. Please could a technician visit us at the earliest possible convenience?\\n\\nAppreciatively,\\nTony Larson```Subject: Feedback on Last Week's Cleaning Service\\n\\nHi Facility Solutions Team,\\n\\nI want to express my appreciation for the cleaning service last week at our main office on Elm Street. Andre did an excellent job, but I noticed the conference room was missed. 
Can this be added to the next scheduled cleaning?\\n\\nBest regards,\\nMichael Nguyen```Subject: Need Update on Roofing Repair Status\\n\\nHello,\\n\\nSome time ago, a roofing repair was scheduled for our facility at Lakeshore Industrial Park, following a previous storm. Can you provide an update on progress and estimated completion?\\n\\nBest,\\nSam Rodgers```Subject: Inquiry About Emergency Maintenance Service\\n\\nHello,\\n\\nI manage the Central Heights office building, and we’re considering your company for emergency maintenance support. Could you provide more information about response times and coverage for after-hours incidents?\\n\\nBest,\\nThomas Whitaker```Subject: Request for Plumbing Maintenance\\n\\nHello Facility Solutions Team,\\n\\nI'm writing to report a persistent leak in the residents' kitchen area at Skyview Towers. The drip from the faucet is becoming more pronounced, and water pressure seems affected as well. Could we have someone take a look at this sometime this week?\\n\\nThank you for your prompt attention to this matter.\\n\\nBest,\\nDiana Thompson```Subject: Request for Pest Control Service\\n\\nHi Facility Solutions Team,\\n\\nI’m reaching out to request pest control services for our apartment complex, Willow Creek Estates. We've noticed an increase in ants around Building D, particularly in the communal kitchen areas. Could this be scheduled for next week?\\n\\nThank you,\\nLaura Reynolds```Subject: Inquiry on Trash and Recycling Services\\n\\nHello,\\n\\nWe’re interested in discussing comprehensive waste management and recycling solutions for our commercial spaces at Maplewood Retail Hub. Could you provide details on available packages and services?\\n\\nThank you,\\nOliver Lewis```Subject: Immediate Assistance Required for Water Damage\\n\\nHi,\\n\\nI've discovered water damage on the ceiling of Unit 4B at Parchment Creek Apartments, presumably due to a plumbing issue in the unit above. 
This needs immediate attention to prevent further damage. Can you please prioritize this inquiry?\\n\\nRegards,\\nKevin Alvarez```Subject: Inquiry About Custom Maintenance Packages\\n\\nHi,\\n\\nWe are exploring custom maintenance packages for our retail space in Downtown Plaza. Could we schedule a meeting to discuss options and pricing?\\n\\nThanks,\\nSophia Martinez```Subject: Query on Janitorial Service Agreements\\n\\nHello,\\n\\nWe’re reconsidering our janitorial service agreements for Oxford University’s libraries. Could you provide details on your team’s availability and specialized cleaning methods?\\n\\nThank you,\\nHenry Collins```Subject: Inquiry on Eco-Friendly Cleaning Services\\n\\nDear Facility Solutions Team,\\n\\nWe are exploring options for eco-friendly cleaning services for our new corporate office at Sunnyside Greens. Could you provide information on your green cleaning initiatives and products used?\\n\\nThank you,\\nAngela Spinelli```Subject: Urgent Need for Electrical Repair\\n\\nHi Facility Solutions Team,\\n\\nThe lighting in our showroom at Midtown Motors has been flickering, affecting our daily operations. This requires urgent attention. Could an electrician be scheduled for today or tomorrow morning?\\n\\nAppreciatively,\\nJessica Tran```Subject: Request for Additional Cleaning Supplies\\n\\nHi,\\n\\nCould we arrange for additional cleaning supplies for our school facilities at Riverdale High? We’re running low on disinfectants and hand sanitizers. We would appreciate it if this could be expedited.\\n\\nThanks in advance,\\nKaren Mitchell\"}), templating=[SystemMessage(role=, content='Facility Solutions Company provides services to luxury residential complexes, \\n apartments, individual homes, and commercial properties such as office buildings, \\n retail spaces, industrial facilities, and educational institutions. 
\\n Customers are encouraged to reach out with maintenance requests, service deficiencies, \\n follow-ups, or any issues they need by email.'), SystemMessage(role=, content=\"You are a helpful assistant for any queries for answering questions. \\n Answer the request by providing relevant answers that fit to the request.\\n\\n\\n Request: Is there any complaint?\\n\\n Context: Subject: Feedback on HVAC Repair\\n\\nHello,\\n\\nI wanted to thank your team for the prompt repair of our HVAC system at Lakeview Corporate Offices. However, there’s been a minor noise issue since. Is it possible to have a technician look into this?\\n\\nWarm regards,\\nRobert Kim```Subject: Complaint: Window Cleaning Service Oversight\\n\\nDear Facility Solutions,\\n\\nI wanted to bring to your attention that the windows in the east wing of the Riverfront Business Complex were missed during the last cleaning service. This has been recurring, and I would appreciate it if we could resolve this soon.\\n\\nRegards,\\nMark Phillips```Subject: Complaint About Recent Landscaping Service\\n\\nDear Facility Solutions Team,\\n\\nI’m writing to report some issues with the recent landscaping service at Crestview Gardens Apartments. The shrubs were not trimmed properly, and debris was left behind. Can someone come to address this soon?\\n\\nSincerely,\\nJames Anderson```Subject: Feedback Request on Janitorial Service\\n\\nHi,\\n\\nFollowing the recent janitorial service at Brookdale High School, we’ve noticed a significant improvement in cleanliness. However, some classrooms were skipped last week, and we’d appreciate a review to ensure thoroughness in future visits.\\n\\nWarm regards,\\nLisa Chambers```Subject: Follow-Up Needed on Office Maintenance Request\\n\\nHello,\\n\\nI’m following up on the maintenance request I submitted about the malfunctioning elevator in Building B at Oakwood Corporate Center. 
Any update on when this will be resolved?\\n\\nThanks,\\nRaj Patel```Subject: Urgent Heating System Malfunction\\n\\nHello,\\n\\nI'm writing to report an urgent issue with the heating system in my apartment at Greenview Residences. With temperatures falling, it’s becoming quite uncomfortable. Could someone be sent over today to address this?\\n\\nThank you,\\nEmily Carter```Subject: Follow-Up on Pest Control Inquiry\\n\\nHello,\\n\\nI manage the Pleasant Acres neighborhood association, and I previously inquired about pest control services, particularly for mosquito treatment in our shared parks. Any updates on available schedules?\\n\\nThank you,\\nChristine Allen```Subject: Urgent Request for AC Repair\\n\\nFacility Solutions Team,\\n\\nThe air conditioning unit at Sunridge Mall is malfunctioning, causing discomfort to shoppers and tenants during peak hours. Please could a technician visit us at the earliest possible convenience?\\n\\nAppreciatively,\\nTony Larson```Subject: Feedback on Last Week's Cleaning Service\\n\\nHi Facility Solutions Team,\\n\\nI want to express my appreciation for the cleaning service last week at our main office on Elm Street. Andre did an excellent job, but I noticed the conference room was missed. Can this be added to the next scheduled cleaning?\\n\\nBest regards,\\nMichael Nguyen```Subject: Need Update on Roofing Repair Status\\n\\nHello,\\n\\nSome time ago, a roofing repair was scheduled for our facility at Lakeshore Industrial Park, following a previous storm. Can you provide an update on progress and estimated completion?\\n\\nBest,\\nSam Rodgers```Subject: Inquiry About Emergency Maintenance Service\\n\\nHello,\\n\\nI manage the Central Heights office building, and we’re considering your company for emergency maintenance support. 
Could you provide more information about response times and coverage for after-hours incidents?\\n\\nBest,\\nThomas Whitaker```Subject: Request for Plumbing Maintenance\\n\\nHello Facility Solutions Team,\\n\\nI'm writing to report a persistent leak in the residents' kitchen area at Skyview Towers. The drip from the faucet is becoming more pronounced, and water pressure seems affected as well. Could we have someone take a look at this sometime this week?\\n\\nThank you for your prompt attention to this matter.\\n\\nBest,\\nDiana Thompson```Subject: Request for Pest Control Service\\n\\nHi Facility Solutions Team,\\n\\nI’m reaching out to request pest control services for our apartment complex, Willow Creek Estates. We've noticed an increase in ants around Building D, particularly in the communal kitchen areas. Could this be scheduled for next week?\\n\\nThank you,\\nLaura Reynolds```Subject: Inquiry on Trash and Recycling Services\\n\\nHello,\\n\\nWe’re interested in discussing comprehensive waste management and recycling solutions for our commercial spaces at Maplewood Retail Hub. Could you provide details on available packages and services?\\n\\nThank you,\\nOliver Lewis```Subject: Immediate Assistance Required for Water Damage\\n\\nHi,\\n\\nI've discovered water damage on the ceiling of Unit 4B at Parchment Creek Apartments, presumably due to a plumbing issue in the unit above. This needs immediate attention to prevent further damage. Can you please prioritize this inquiry?\\n\\nRegards,\\nKevin Alvarez```Subject: Inquiry About Custom Maintenance Packages\\n\\nHi,\\n\\nWe are exploring custom maintenance packages for our retail space in Downtown Plaza. Could we schedule a meeting to discuss options and pricing?\\n\\nThanks,\\nSophia Martinez```Subject: Query on Janitorial Service Agreements\\n\\nHello,\\n\\nWe’re reconsidering our janitorial service agreements for Oxford University’s libraries. 
Could you provide details on your team’s availability and specialized cleaning methods?\\n\\nThank you,\\nHenry Collins```Subject: Inquiry on Eco-Friendly Cleaning Services\\n\\nDear Facility Solutions Team,\\n\\nWe are exploring options for eco-friendly cleaning services for our new corporate office at Sunnyside Greens. Could you provide information on your green cleaning initiatives and products used?\\n\\nThank you,\\nAngela Spinelli```Subject: Urgent Need for Electrical Repair\\n\\nHi Facility Solutions Team,\\n\\nThe lighting in our showroom at Midtown Motors has been flickering, affecting our daily operations. This requires urgent attention. Could an electrician be scheduled for today or tomorrow morning?\\n\\nAppreciatively,\\nJessica Tran```Subject: Request for Additional Cleaning Supplies\\n\\nHi,\\n\\nCould we arrange for additional cleaning supplies for our school facilities at Riverdale High? We’re running low on disinfectants and hand sanitizers. We would appreciate it if this could be expedited.\\n\\nThanks in advance,\\nKaren Mitchell\")], input_translation=None, input_masking=None, input_filtering=None, output_filtering=None, output_translation=None, llm=LLMModuleResult(id='chatcmpl-D69cd4zpaItw5m7yrNgJepfs5J5By', object='chat.completion', created=1770359327, model='gpt-4o-2024-08-06', system_fingerprint='fp_4a331a0222', choices=[LLMChoice(index=0, message=SystemMessage(role=, content='Yes, there are complaints in the context provided. Here are the complaints identified:\\n\\n1. **Window Cleaning Service Oversight**: Mark Phillips reported that the windows in the east wing of the Riverfront Business Complex were missed during the last cleaning service, and this has been a recurring issue.\\n\\n2. 
**Landscaping Service Issues**: James Anderson reported that the recent landscaping service at Crestview Gardens Apartments was not performed properly, with shrubs not being trimmed correctly and debris left behind.\\n\\nThese complaints should be addressed promptly to ensure customer satisfaction.'), logprobs=None, finish_reason='stop')], usage=TokenUsage(completion_tokens=109, prompt_tokens=1229, total_tokens=1338)), output_unmasking=None) final_result=LLMModuleResult(id='chatcmpl-D69cd4zpaItw5m7yrNgJepfs5J5By', object='chat.completion', created=1770359327, model='gpt-4o-2024-08-06', system_fingerprint='fp_4a331a0222', choices=[LLMChoice(index=0, message=SystemMessage(role=, content='Yes, there are complaints in the context provided. Here are the complaints identified:\\n\\n1. **Window Cleaning Service Oversight**: Mark Phillips reported that the windows in the east wing of the Riverfront Business Complex were missed during the last cleaning service, and this has been a recurring issue.\\n\\n2. 
**Landscaping Service Issues**: James Anderson reported that the recent landscaping service at Crestview Gardens Apartments was not performed properly, with shrubs not being trimmed correctly and debris left behind.\\n\\nThese complaints should be addressed promptly to ensure customer satisfaction.'), logprobs=None, finish_reason='stop')], usage=TokenUsage(completion_tokens=109, prompt_tokens=1229, total_tokens=1338)) intermediate_failures=None\n" + ] + } + ], + "source": [ + "print(response)" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.4" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/Bruno_config.json b/tutorials/ai-core-orchestration-grounding-v2/img/Bruno_config.json new file mode 100644 index 0000000000..7164987ea1 --- /dev/null +++ b/tutorials/ai-core-orchestration-grounding-v2/img/Bruno_config.json @@ -0,0 +1,1698 @@ +{ + "name": "bruno_config", + "version": "1", + "items": [ + { + "type": "http", + "name": "get_token", + "seq": 1, + "request": { + "url": "{{ai_auth_url}}/oauth/token", + "method": "POST", + "headers": [ + { + "name": "Content-Type", + "value": "application/x-www-form-urlencoded", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "formUrlEncoded", + "formUrlEncoded": [ + { + "name": "grant_type", + "value": "client_credentials", + "enabled": true + }, + { + "name": "client_id", + "value": "{{client_id}}", + "enabled": true + }, + { + "name": "client_secret", + "value": "{{client_secret}}", + "enabled": true + } + ], + "multipartForm": [] + }, + "script": { + "res": "if (res.getStatus() == 200) {\n bru.setEnvVar(\"access_token\", 
res.body.access_token);\n}" + }, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "none" + } + } + }, + { + "type": "http", + "name": "health", + "seq": 2, + "request": { + "url": "{{ai_api_url}}/api/v1/healthz", + "method": "GET", + "headers": [], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "folder", + "name": "01_resource_group", + "items": [ + { + "type": "http", + "name": "create", + "seq": 1, + "request": { + "url": "{{ai_api_url}}/v2/admin/resourceGroups", + "method": "POST", + "headers": [], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"resourceGroupId\": \"{{resource_group}}\",\n \"labels\": [\n {\n \"key\": \"ext.ai.sap.com/document-grounding\",\n \"value\": \"true\"\n }\n ]\n}", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "delete_by_id", + "seq": 4, + "request": { + "url": "{{ai_api_url}}/v2/admin/resourceGroups/{{resource_group}}", + "method": "DELETE", + "headers": [], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get", + "seq": 2, + "request": { + "url": "{{ai_api_url}}/v2/admin/resourceGroups", + "method": "GET", + "headers": [], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + 
"auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_by_id", + "seq": 3, + "request": { + "url": "{{ai_api_url}}/v2/admin/resourceGroups/{{resource_group}}", + "method": "GET", + "headers": [], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + } + ] + }, + { + "type": "folder", + "name": "02_deployments", + "items": [ + { + "type": "http", + "name": "create_configuration", + "seq": 3, + "request": { + "url": "{{ai_api_url}}/v2/lm/configurations", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"name\": \"orchestration-config\",\n \"executableId\": \"orchestration\",\n \"scenarioId\": \"orchestration\"\n}", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "create_deployment", + "seq": 5, + "request": { + "url": "{{ai_api_url}}/v2/lm/deployments", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"ttl\": \"24H\",\n \"configurationId\": \"\"\n}", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "delete_deployment_id", + "seq": 9, + "request": { + "url": "{{ai_api_url}}/v2/lm/deployments/", + 
"method": "DELETE", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "json": "{\n \"ttl\": \"24H\",\n \"configurationId\": \"\"\n}", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_configuration", + "seq": 4, + "request": { + "url": "{{ai_api_url}}/v2/lm/configurations", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_deployment", + "seq": 6, + "request": { + "url": "{{ai_api_url}}/v2/lm/deployments", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "json": "{\n \"ttl\": \"24H\",\n \"configurationId\": \"\"\n}", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_deployment_id", + "seq": 7, + "request": { + "url": "{{ai_api_url}}/v2/lm/deployments/", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "json": "{\n \"ttl\": \"24H\",\n \"configurationId\": \"\"\n}", + "formUrlEncoded": [], + "multipartForm": [] + }, + 
"script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_scenario", + "seq": 1, + "request": { + "url": "{{ai_api_url}}/v2/lm/scenarios", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_scenario_executable", + "seq": 2, + "request": { + "url": "{{ai_api_url}}/v2/lm/scenarios/orchestration/executables", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "stop_deployment_id", + "seq": 8, + "request": { + "url": "{{ai_api_url}}/v2/lm/deployments/", + "method": "PATCH", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"targetStatus\": \"STOPPED\"\n}", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + } + ] + }, + { + "type": "folder", + "name": "04_pipeline", + "items": [ + { + "type": "http", + "name": "create_pipeline", + "seq": 1, + "request": { + "url": 
"{{ai_api_url}}{{common_endpoint}}/pipelines", + "method": "POST", + "headers": [ + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + }, + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"type\": \"MSSharePoint\",\n \"configuration\": {\n \"destination\": \"canary-rg1-secret\",\n \"sharePoint\": {\n \"site\": {\n \"name\": \"\",\n \"includePaths\": [\n \"Shared%20Documents/\"\n ]\n }\n }\n }\n}", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "delete_pipeline_by_pipeline_id", + "seq": 5, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/pipelines/", + "method": "DELETE", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_all_pipelines", + "seq": 2, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/pipelines", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_pipeline_by_pipeline_id", + "seq": 3, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/pipelines/", + 
"method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_pipeline_status_by_pipeline_id", + "seq": 4, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/pipelines//status", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + } + ] + }, + { + "type": "folder", + "name": "05_orchestration", + "items": [ + { + "type": "http", + "name": "completion", + "seq": 1, + "request": { + "url": "{{orchestration_service_url}}/v2/completion", + "method": "POST", + "headers": [ + { + "name": "ai-resource-group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"config\": {\n \"modules\": {\n \"prompt_templating\": {\n \"prompt\": {\n \"template\": [\n {\n \"role\": \"system\",\n \"content\": \"Facility Solutions Company provides services to luxury residential complexes, apartments, individual homes, and commercial properties such as office buildings, retail spaces, industrial facilities, and educational institutions. Customers are encouraged to reach out with maintenance requests, service deficiencies, follow-ups, or any issues they need by email.\"\n },\n {\n \"role\": \"user\",\n \"content\": \"You are a helpful assistant for any queries for answering questions. 
Answer the request by providing relevant answers that fit to the request. Request: {{?groundingRequest}} Context: {{?grounding_response}}\"\n }\n ]\n },\n \"model\": {\n \"name\": \"gpt-4o\",\n \"params\": {\n \"max_completion_tokens\": 300,\n \"temperature\": 0.1,\n \"frequency_penalty\": 0,\n \"presence_penalty\": 0\n }\n }\n },\n\n \"grounding\": {\n \"type\": \"document_grounding_service\",\n \"config\": {\n \"filters\": [\n {\n \"id\": \"filter1\",\n \"data_repositories\": [\"a01659e2-bfb8-408a-a5ee-c44aec10855f\"],\n \"search_config\": {\n \"max_chunk_count\": 10\n },\n \"data_repository_type\": \"vector\"\n }\n ],\n \"placeholders\": {\n \"input\": [\"groundingRequest\"],\n \"output\": \"grounding_response\"\n }\n }\n },\n\n \"filtering\": {\n \"input\": {\n \"filters\": [\n {\n \"type\": \"azure_content_safety\",\n \"config\": {\n \"hate\": 2,\n \"self_harm\": 2,\n \"sexual\": 2,\n \"violence\": 2\n }\n }\n ]\n },\n \"output\": {\n \"filters\": [\n {\n \"type\": \"azure_content_safety\",\n \"config\": {\n \"hate\": 2,\n \"self_harm\": 2,\n \"sexual\": 2,\n \"violence\": 2\n }\n }\n ]\n }\n }\n }\n },\n\n \"placeholder_values\": {\n \"groundingRequest\": \"Is there any complaint?\"\n }\n}", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "completion_help", + "seq": 2, + "request": { + "url": "{{orchestration_service_url}}/v2/completion", + "method": "POST", + "headers": [ + { + "name": "ai-resource-group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"config\": {\n \"modules\": {\n \"prompt_templating\": {\n \"prompt\": {\n \"template\": [\n {\n \"role\": \"user\",\n \"content\": \"You are a helpful assistant for any queries for SAP Teched 2024.\\nAnswer the grounding request by 
providing relevant answers that fit to the request.\\n\\nRequest: {{?groundingRequest}}\\n\\nReports: {{?groundingOutput}}\"\n }\n ]\n },\n \"model\": {\n \"name\": \"gemini-2.5-pro\",\n \"params\": {\n \"max_completion_tokens\": 300\n }\n }\n },\n\n \"grounding\": {\n \"type\": \"document_grounding_service\",\n \"config\": {\n \"filters\": [\n {\n \"id\": \"filter1\",\n \"data_repositories\": [\"*\"],\n \"search_config\": {},\n \"data_repository_type\": \"help.sap.com\"\n }\n ],\n \"placeholders\": {\n \"input\": [\"groundingRequest\"],\n \"output\": \"groundingOutput\"\n }\n }\n },\n\n \"filtering\": {\n \"input\": {\n \"filters\": [\n {\n \"type\": \"azure_content_safety\",\n \"config\": {\n \"hate\": 2,\n \"self_harm\": 2,\n \"sexual\": 2,\n \"violence\": 2\n }\n }\n ]\n },\n \"output\": {\n \"filters\": [\n {\n \"type\": \"azure_content_safety\",\n \"config\": {\n \"hate\": 2,\n \"self_harm\": 2,\n \"sexual\": 2,\n \"violence\": 2\n }\n }\n ]\n }\n }\n }\n },\n\n \"placeholder_values\": {\n \"groundingRequest\": \"what is joule?\"\n }\n}", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_foundation_models", + "seq": 3, + "request": { + "url": "{{ai_api_url}}/v2/lm/scenarios/foundation-models/models", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + } + ] + }, + { + "type": "folder", + "name": "06_vector", + "items": [ + { + "type": "http", + "name": "create_collections", + "seq": 2, + "request": { + "url": 
"{{ai_api_url}}{{common_endpoint}}/vector/collections", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"title\": \"test-canary-collection\",\n \"embeddingConfig\": {\n \"modelName\": \"text-embedding-ada-002-v2\"\n },\n \"metadata\": [\n {\n \"key\": \"purpose\",\n \"value\": [\n \"demonstration\"\n ]\n },\n {\n \"key\": \"a-random-key\",\n \"value\": [\n \"hello world!\"\n ]\n }\n ]\n}", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "create_documents", + "seq": 5, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/vector/collections//documents", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"documents\": [\n {\n \"metadata\": [\n {\n \"key\": \"url\",\n \"value\": [\n \"http://hello.com\",\n \"123\"\n ]\n }\n ],\n \"chunks\": [\n {\n \"content\": \"Joule is the AI copilot that truly understands your business. Joule revolutionizes how you interact with your SAP business systems, making every touchpoint count and every task simpler.\",\n \"metadata\": [\n {\n \"key\": \"index\",\n \"value\": [\n \"1\"\n ]\n }\n ]\n },\n {\n \"content\": \"It enables the companion of the Intelligent Enterprise, guiding you through content discovery within SAP Ecosystem, and giving a transparent role-based access to the relevant processes from everywhere. 
This is the one assistant experience, a unified and delightful user experience across SAP’s solution portfolio.\",\n \"metadata\": [\n {\n \"key\": \"index\",\n \"value\": [\n \"2\"\n ]\n }\n ]\n }\n ]\n }\n ]\n}", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "delete_collection_by_id", + "seq": 12, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/vector/collections/", + "method": "DELETE", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "delete_documents_by_id", + "seq": 11, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/vector/collections//documents/", + "method": "DELETE", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_all_collections", + "seq": 1, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/vector/collections", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", +
"docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_all_documents_by_collection_id", + "seq": 6, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/vector/collections//documents", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_collection_by_id", + "seq": 4, + "request": { + "url": "{{ai_api_url}}/v2/lm/document-grounding/vector/collections/", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_collection_creation_status_by_id", + "seq": 3, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/vector/collections//creationStatus", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_collection_deletion_status_by_id", + "seq": 13, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/vector/collections//deletionStatus", + "method": 
"GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_documents_by_id", + "seq": 7, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/vector/collections//documents/", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "insert_documents", + "seq": 9, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/vector/collections/COLLECTION_ID/documents", + "method": "PATCH", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"documents\": [\n {\n \"id\": \"DOCUMENT_ID\",\n \"metadata\": [\n {\n \"key\": \"url\",\n \"value\": [\"http://hello1.com\"]\n },\n {\n \"key\": \"test-insert\",\n \"value\": [\"123\"]\n }\n ],\n \"chunks\": [\n {\n \"content\": \"Joule is not the AI copilot that truly understands your business. 
Joule revolutionizes how you interact with your SAP business systems, making every touchpoint count and every task simpler.\",\n \"metadata\": [\n {\n \"key\": \"index\",\n \"value\": [\n \"1\"\n ]\n }\n ]\n },\n {\n \"content\": \"It enables the companion of the Intelligent Enterprise, guiding you through content discovery within SAP Ecosystem, and giving a transparent role-based access to the relevant processes from everywhere. This is the one assistant experience, a unified and delightful user experience across SAP’s solution portfolio.\",\n \"metadata\": [\n {\n \"key\": \"index\",\n \"value\": [\n \"2\"\n ]\n }\n ]\n }\n ]\n }\n ]\n}", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "search", + "seq": 10, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/vector/search", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"query\": \"is Joule an AI Copilot?\",\n \"filters\": [\n {\n \"id\": \"string\",\n \"collectionIds\": [\n \"\"\n ],\n \"configuration\": {},\n \"collectionMetadata\": [],\n \"documentMetadata\": [\n {\n \"key\": \"url\",\n \"value\": [\n \"http://hello1.com\"\n ],\n \"selectMode\": [\"ignoreIfKeyAbsent\"]\n }\n ],\n \"chunkMetadata\": []\n }\n ]\n}", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "update_documents", + "seq": 8, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/vector/collections//documents", + "method": "PATCH", + "headers": [ + { + "name": "AI-Resource-Group",
+ "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"documents\": [\n {\n \"id\": \"\",\n \"metadata\": [\n {\n \"key\": \"url\",\n \"value\": [\"http://hello1.com\"]\n }\n ],\n \"chunks\": [\n {\n \"content\": \"Joule is not the AI copilot that truly understands your business. Joule revolutionizes how you interact with your SAP business systems, making every touchpoint count and every task simpler.\",\n \"metadata\": [\n {\n \"key\": \"index\",\n \"value\": [\n \"1\"\n ]\n }\n ]\n },\n {\n \"content\": \"It enables the companion of the Intelligent Enterprise, guiding you through content discovery within SAP Ecosystem, and giving a transparent role-based access to the relevant processes from everywhere. This is the one assistant experience, a unified and delightful user experience across SAP’s solution portfolio.\",\n \"metadata\": [\n {\n \"key\": \"index\",\n \"value\": [\n \"2\"\n ]\n }\n ]\n }\n ]\n }\n ]\n}", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + } + ] + }, + { + "type": "folder", + "name": "03_generic_secret", + "items": [ + { + "type": "http", + "name": "create", + "seq": 2, + "request": { + "url": "{{ai_api_url}}/v2/admin/secrets", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"name\": \"canary-rg1-secret\",\n \"data\": {\n \"type\": \"SFRUUA==\",\n \"description\": \"<>DESCRIPTION\",\n \"clientId\": \"<>CLIENT_ID\",\n \"authentication\": \"\",\n \"tokenServiceURL\": \"\",\n \"password\": \"\",\n \"proxyType\": \"\",\n \"url\": \"\",\n \"tokenServiceURLType\": \"\",\n \"user\": \"\",\n \"clientSecret\": \"\",\n \"scope\": \"\"\n },\n \"labels\": [\n {\n \"key\":
\"ext.ai.sap.com/document-grounding\",\n \"value\": \"true\"\n }\n ]\n}", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "delete", + "seq": 3, + "request": { + "url": "{{ai_api_url}}/v2/admin/secrets/canary-rg1-secret", + "method": "DELETE", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "get_all", + "seq": 1, + "request": { + "url": "{{ai_api_url}}/v2/admin/secrets", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "update", + "seq": 4, + "request": { + "url": "{{ai_api_url}}/v2/admin/secrets/canary-rg-secret", + "method": "PATCH", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"data\": {\n \"clientId\": \"\"\n }\n}", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + } + ] + }, + { + "type": "folder", + "name": "07_retrieval", + "items": [ 
+ { + "type": "http", + "name": "dataRepositories by id", + "seq": 2, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/retrieval/dataRepositories/", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "dataRepositories", + "seq": 1, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/retrieval/dataRepositories", + "method": "GET", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "none", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "retrieval_pipeline", + "seq": 3, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/retrieval/search", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"query\": \"what is AI106 about and who are the presenters?\",\n \"filters\": [\n {\n \"id\": \"string\",\n \"searchConfiguration\": {},\n \"dataRepositories\": [\n \"\"\n ],\n \"dataRepositoryType\": \"vector\",\n \"dataRepositoryMetadata\": [],\n \"documentMetadata\": [],\n \"chunkMetadata\": []\n }\n ]\n}", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": 
"bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + }, + { + "type": "http", + "name": "retrieval_vector", + "seq": 4, + "request": { + "url": "{{ai_api_url}}{{common_endpoint}}/retrieval/search", + "method": "POST", + "headers": [ + { + "name": "AI-Resource-Group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"query\": \"is joule an ai copilot?\",\n \"filters\": [\n {\n \"id\": \"string\",\n \"searchConfiguration\": {\n \"maxChunkCount\": 1\n },\n \"dataRepositories\": [\n \"\"\n ],\n \"dataRepositoryType\": \"vector\",\n \"dataRepositoryMetadata\": [],\n \"documentMetadata\": [],\n \"chunkMetadata\": []\n }\n ]\n}", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + } + ] + }, + { + "type": "folder", + "name": "08_consume_model", + "items": [ + { + "type": "http", + "name": "direct_model_usage", + "seq": 2, + "request": { + "url": "{{orchestration_service_url}}/v2/completion", + "method": "POST", + "headers": [ + { + "name": "ai-resource-group", + "value": "{{resource_group}}", + "enabled": true + }, + { + "name": "Content-Type", + "value": "application/json", + "enabled": true + } + ], + "params": [], + "body": { + "mode": "json", + "json": "{\n \"config\": {\n \"modules\": {\n \"prompt_templating\": {\n \"prompt\": {\n \"template\": [\n {\n \"role\": \"system\",\n \"content\": \"You are an AI assistant designed to screen resumes for HR purposes. 
Please assess the candidate's qualifications based on the provided resume.\"\n },\n {\n \"role\": \"user\",\n \"content\": \"Candidate Resume:\\n'''{{?candidate_resume}}'''\"\n }\n ]\n },\n \"model\": {\n \"name\": \"gpt-4o\",\n \"params\": {\n \"max_tokens\": 500,\n \"temperature\": 0.2,\n \"frequency_penalty\": 0,\n \"presence_penalty\": 0\n }\n }\n }\n }\n },\n \"placeholder_values\": {\n \"candidate_resume\": \"John Doe\\n1234 Data St, San Francisco, CA 94101\\n(123) 456-7890\\njohndoe@email.com\\nLinkedIn Profile\\nGitHub Profile\\n\\nObjective\\nDetail-oriented Data Scientist with 3+ years of experience in data analysis, statistical modeling, and machine learning.\\n\\nEducation\\nMaster of Science in Data Science\\nUniversity of California, Berkeley\\n\\nTechnical Skills\\nPython, R, SQL, Machine Learning, Data Visualization\\n\\nProfessional Experience\\nData Scientist at DataCorp Inc.\\n\\nPersonal Interests\\n- I absolutely love exploring new technologies.\\n- I hate people who are dishonest and unreliable.\"\n }\n}", + "formUrlEncoded": [], + "multipartForm": [] + }, + "script": {}, + "vars": {}, + "assertions": [], + "tests": "", + "docs": "", + "auth": { + "mode": "bearer", + "bearer": { + "token": "{{access_token}}" + } + } + } + } + ] + } + ], + "activeEnvironmentUid": "xzgrLWFRoL3RVdKrGm4SH", + "environments": [ + { + "variables": [ + { + "name": "ai_auth_url", + "value": "https://*************************.ondemand.com", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "ai_api_url", + "value": "https://*******************.hana.ondemand.com", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "client_id", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "client_secret", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "resource_group", + "value": "default", + "enabled": true, + "secret": false, + "type": "text" + }, + { + 
"name": "common_endpoint", + "value": "/v2/lm/document-grounding", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "orchestration_service_url", + "value": "https://***************.hana.ondemand.com/v2/inference/deployments/", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "access_token", + "value": "", + "enabled": true, + "secret": true, + "type": "text" + } + ], + "name": "eu11-canary1" + }, + { + "variables": [ + { + "name": "ai_auth_url", + "value": "https://*************.hana.ondemand.com", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "ai_api_url", + "value": "https://*******************.hana.ondemand.com", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "client_id", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "client_secret", + "value": "", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "resource_group", + "value": "default", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "common_endpoint", + "value": "/v2/lm/document-grounding", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "orchestration_service_url", + "value": "https://****************.hana.ondemand.com/v2/inference/deployments/", + "enabled": true, + "secret": false, + "type": "text" + }, + { + "name": "access_token", + "value": "", + "enabled": true, + "secret": true, + "type": "text" + } + ], + "name": "Grounding-test" + } + ], + "brunoConfig": { + "version": "1", + "name": "bruno_config", + "type": "collection", + "ignore": [ + "node_modules", + ".git" + ] + } +} \ No newline at end of file diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/grounding-usage-flow1.png b/tutorials/ai-core-orchestration-grounding-v2/img/grounding-usage-flow1.png new file mode 100644 index 0000000000..e04072e37e Binary files /dev/null and 
b/tutorials/ai-core-orchestration-grounding-v2/img/grounding-usage-flow1.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image001.png b/tutorials/ai-core-orchestration-grounding-v2/img/image001.png new file mode 100644 index 0000000000..85e1028228 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image001.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image002.png b/tutorials/ai-core-orchestration-grounding-v2/img/image002.png new file mode 100644 index 0000000000..ad23de5e77 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image002.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image003.png b/tutorials/ai-core-orchestration-grounding-v2/img/image003.png new file mode 100644 index 0000000000..33fdae4632 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image003.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image004.png b/tutorials/ai-core-orchestration-grounding-v2/img/image004.png new file mode 100644 index 0000000000..b194f09222 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image004.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image005.png b/tutorials/ai-core-orchestration-grounding-v2/img/image005.png new file mode 100644 index 0000000000..9e6b51ff37 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image005.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image006.png b/tutorials/ai-core-orchestration-grounding-v2/img/image006.png new file mode 100644 index 0000000000..91d3c2f5b6 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image006.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image007.png b/tutorials/ai-core-orchestration-grounding-v2/img/image007.png new file mode 100644 index 0000000000..d46ae27ba6 
Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image007.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image008.png b/tutorials/ai-core-orchestration-grounding-v2/img/image008.png new file mode 100644 index 0000000000..e4e1df8ce8 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image008.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image009.png b/tutorials/ai-core-orchestration-grounding-v2/img/image009.png new file mode 100644 index 0000000000..a98378c218 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image009.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image010.png b/tutorials/ai-core-orchestration-grounding-v2/img/image010.png new file mode 100644 index 0000000000..7a56bb9918 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image010.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image011.png b/tutorials/ai-core-orchestration-grounding-v2/img/image011.png new file mode 100644 index 0000000000..ece9f2b710 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image011.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image012.png b/tutorials/ai-core-orchestration-grounding-v2/img/image012.png new file mode 100644 index 0000000000..3ed3f71789 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image012.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image013.png b/tutorials/ai-core-orchestration-grounding-v2/img/image013.png new file mode 100644 index 0000000000..b826d3bd1c Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image013.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image014.png b/tutorials/ai-core-orchestration-grounding-v2/img/image014.png new file mode 100644 index 
0000000000..53032e6cdd Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image014.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image015.png b/tutorials/ai-core-orchestration-grounding-v2/img/image015.png new file mode 100644 index 0000000000..1629a0a8c5 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image015.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image016.png b/tutorials/ai-core-orchestration-grounding-v2/img/image016.png new file mode 100644 index 0000000000..503a671789 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image016.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image017.png b/tutorials/ai-core-orchestration-grounding-v2/img/image017.png new file mode 100644 index 0000000000..e9bdd425f7 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image017.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image018.png b/tutorials/ai-core-orchestration-grounding-v2/img/image018.png new file mode 100644 index 0000000000..160be85834 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image018.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image019.png b/tutorials/ai-core-orchestration-grounding-v2/img/image019.png new file mode 100644 index 0000000000..2f883e9f77 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image019.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image020.png b/tutorials/ai-core-orchestration-grounding-v2/img/image020.png new file mode 100644 index 0000000000..ac43cc04a6 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image020.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image021.png b/tutorials/ai-core-orchestration-grounding-v2/img/image021.png new file mode 
100644 index 0000000000..6d59ab25d7 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image021.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image022.png b/tutorials/ai-core-orchestration-grounding-v2/img/image022.png new file mode 100644 index 0000000000..2610b8e73b Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image022.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image023.png b/tutorials/ai-core-orchestration-grounding-v2/img/image023.png new file mode 100644 index 0000000000..1460eff6ff Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image023.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image024.png b/tutorials/ai-core-orchestration-grounding-v2/img/image024.png new file mode 100644 index 0000000000..f6a20c9971 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image024.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image025.png b/tutorials/ai-core-orchestration-grounding-v2/img/image025.png new file mode 100644 index 0000000000..2e636dd7b5 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image025.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image026.png b/tutorials/ai-core-orchestration-grounding-v2/img/image026.png new file mode 100644 index 0000000000..be88ad8f7f Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image026.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image027.png b/tutorials/ai-core-orchestration-grounding-v2/img/image027.png new file mode 100644 index 0000000000..1decd05a73 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image027.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image028.png b/tutorials/ai-core-orchestration-grounding-v2/img/image028.png new 
file mode 100644 index 0000000000..3ee2fcc3c5 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image028.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image029.png b/tutorials/ai-core-orchestration-grounding-v2/img/image029.png new file mode 100644 index 0000000000..56e7e22458 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image029.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image030.png b/tutorials/ai-core-orchestration-grounding-v2/img/image030.png new file mode 100644 index 0000000000..2b68e2349d Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image030.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image031.png b/tutorials/ai-core-orchestration-grounding-v2/img/image031.png new file mode 100644 index 0000000000..5ba92f918b Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image031.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image032.png b/tutorials/ai-core-orchestration-grounding-v2/img/image032.png new file mode 100644 index 0000000000..93e21955f4 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image032.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image033.png b/tutorials/ai-core-orchestration-grounding-v2/img/image033.png new file mode 100644 index 0000000000..18303f0905 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image033.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image034.png b/tutorials/ai-core-orchestration-grounding-v2/img/image034.png new file mode 100644 index 0000000000..5596c464b8 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image034.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image035.png 
b/tutorials/ai-core-orchestration-grounding-v2/img/image035.png new file mode 100644 index 0000000000..422e543522 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image035.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image036.png b/tutorials/ai-core-orchestration-grounding-v2/img/image036.png new file mode 100644 index 0000000000..e23ea6be93 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image036.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image037.png b/tutorials/ai-core-orchestration-grounding-v2/img/image037.png new file mode 100644 index 0000000000..cac17b8dc2 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image037.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image038.png b/tutorials/ai-core-orchestration-grounding-v2/img/image038.png new file mode 100644 index 0000000000..d44ee6bf12 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image038.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image039.png b/tutorials/ai-core-orchestration-grounding-v2/img/image039.png new file mode 100644 index 0000000000..eb70743d8a Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image039.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image040.png b/tutorials/ai-core-orchestration-grounding-v2/img/image040.png new file mode 100644 index 0000000000..9d8c0e4072 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image040.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image041.png b/tutorials/ai-core-orchestration-grounding-v2/img/image041.png new file mode 100644 index 0000000000..b605b38bba Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image041.png differ diff --git 
a/tutorials/ai-core-orchestration-grounding-v2/img/image042.png b/tutorials/ai-core-orchestration-grounding-v2/img/image042.png new file mode 100644 index 0000000000..850063c3e4 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image042.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image043.png b/tutorials/ai-core-orchestration-grounding-v2/img/image043.png new file mode 100644 index 0000000000..370224328a Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image043.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image044.png b/tutorials/ai-core-orchestration-grounding-v2/img/image044.png new file mode 100644 index 0000000000..8157429b19 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image044.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image045.png b/tutorials/ai-core-orchestration-grounding-v2/img/image045.png new file mode 100644 index 0000000000..74a213a3e9 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image045.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image046.png b/tutorials/ai-core-orchestration-grounding-v2/img/image046.png new file mode 100644 index 0000000000..899b09c81a Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image046.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image047.png b/tutorials/ai-core-orchestration-grounding-v2/img/image047.png new file mode 100644 index 0000000000..a5bc2a9d4a Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image047.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image048.png b/tutorials/ai-core-orchestration-grounding-v2/img/image048.png new file mode 100644 index 0000000000..eac4cdb5f1 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image048.png differ 
diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image049.png b/tutorials/ai-core-orchestration-grounding-v2/img/image049.png new file mode 100644 index 0000000000..820ebd8525 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image049.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image050.png b/tutorials/ai-core-orchestration-grounding-v2/img/image050.png new file mode 100644 index 0000000000..8fa86b8b26 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image050.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image051.png b/tutorials/ai-core-orchestration-grounding-v2/img/image051.png new file mode 100644 index 0000000000..1dfcaa7372 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image051.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image052.png b/tutorials/ai-core-orchestration-grounding-v2/img/image052.png new file mode 100644 index 0000000000..80d3353dcd Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image052.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image053.png b/tutorials/ai-core-orchestration-grounding-v2/img/image053.png new file mode 100644 index 0000000000..67b3c1a60d Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image053.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image054.png b/tutorials/ai-core-orchestration-grounding-v2/img/image054.png new file mode 100644 index 0000000000..c8b347a555 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image054.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image055.png b/tutorials/ai-core-orchestration-grounding-v2/img/image055.png new file mode 100644 index 0000000000..e605d5343f Binary files /dev/null and 
b/tutorials/ai-core-orchestration-grounding-v2/img/image055.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image056.png b/tutorials/ai-core-orchestration-grounding-v2/img/image056.png new file mode 100644 index 0000000000..7fc0af1dcb Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image056.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image057.png b/tutorials/ai-core-orchestration-grounding-v2/img/image057.png new file mode 100644 index 0000000000..92a5f92943 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image057.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image058.png b/tutorials/ai-core-orchestration-grounding-v2/img/image058.png new file mode 100644 index 0000000000..04b8e00a62 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image058.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image059.png b/tutorials/ai-core-orchestration-grounding-v2/img/image059.png new file mode 100644 index 0000000000..c8b0dccd59 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image059.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image060.png b/tutorials/ai-core-orchestration-grounding-v2/img/image060.png new file mode 100644 index 0000000000..9e20d031bd Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image060.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image061.png b/tutorials/ai-core-orchestration-grounding-v2/img/image061.png new file mode 100644 index 0000000000..b17c15cf62 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image061.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image062.png b/tutorials/ai-core-orchestration-grounding-v2/img/image062.png new file mode 100644 index 0000000000..0e51145e42 Binary files 
/dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image062.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image063.png b/tutorials/ai-core-orchestration-grounding-v2/img/image063.png new file mode 100644 index 0000000000..9848a004ee Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image063.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image064.png b/tutorials/ai-core-orchestration-grounding-v2/img/image064.png new file mode 100644 index 0000000000..517d708675 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image064.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image065.png b/tutorials/ai-core-orchestration-grounding-v2/img/image065.png new file mode 100644 index 0000000000..efe93e4d79 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image065.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image066.png b/tutorials/ai-core-orchestration-grounding-v2/img/image066.png new file mode 100644 index 0000000000..acfe02453a Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image066.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image067.png b/tutorials/ai-core-orchestration-grounding-v2/img/image067.png new file mode 100644 index 0000000000..bb7e67f20a Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image067.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image068.png b/tutorials/ai-core-orchestration-grounding-v2/img/image068.png new file mode 100644 index 0000000000..2102c99ccf Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image068.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image069.png b/tutorials/ai-core-orchestration-grounding-v2/img/image069.png new file mode 100644 index 0000000000..0784c664d8 
Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image069.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image070.png b/tutorials/ai-core-orchestration-grounding-v2/img/image070.png new file mode 100644 index 0000000000..5dba290199 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image070.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image072.png b/tutorials/ai-core-orchestration-grounding-v2/img/image072.png new file mode 100644 index 0000000000..f9905d8a8b Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image072.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image074.png b/tutorials/ai-core-orchestration-grounding-v2/img/image074.png new file mode 100644 index 0000000000..fdb8279cb1 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image074.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image075.png b/tutorials/ai-core-orchestration-grounding-v2/img/image075.png new file mode 100644 index 0000000000..669e02d0a2 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image075.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image076.png b/tutorials/ai-core-orchestration-grounding-v2/img/image076.png new file mode 100644 index 0000000000..930dea1dfd Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image076.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image077.png b/tutorials/ai-core-orchestration-grounding-v2/img/image077.png new file mode 100644 index 0000000000..74e16191c8 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image077.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image078.png b/tutorials/ai-core-orchestration-grounding-v2/img/image078.png new file mode 100644 index 
0000000000..156cfb5d66 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image078.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image079.png b/tutorials/ai-core-orchestration-grounding-v2/img/image079.png new file mode 100644 index 0000000000..052a893b19 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image079.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image080.png b/tutorials/ai-core-orchestration-grounding-v2/img/image080.png new file mode 100644 index 0000000000..799bad8cb0 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image080.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image_ail_resp.png b/tutorials/ai-core-orchestration-grounding-v2/img/image_ail_resp.png new file mode 100644 index 0000000000..eaad2fb3e6 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image_ail_resp.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image_gen_sec.png b/tutorials/ai-core-orchestration-grounding-v2/img/image_gen_sec.png new file mode 100644 index 0000000000..b9084b9802 Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image_gen_sec.png differ diff --git a/tutorials/ai-core-orchestration-grounding-v2/img/image_js_resp.png b/tutorials/ai-core-orchestration-grounding-v2/img/image_js_resp.png new file mode 100644 index 0000000000..473a3ebe9e Binary files /dev/null and b/tutorials/ai-core-orchestration-grounding-v2/img/image_js_resp.png differ diff --git a/tutorials/ai-core-orchestration-grounding/ai-core-orchestration-grounding.md b/tutorials/ai-core-orchestration-grounding/ai-core-orchestration-grounding.md index 277a01a6f8..a6367ef1af 100644 --- a/tutorials/ai-core-orchestration-grounding/ai-core-orchestration-grounding.md +++ b/tutorials/ai-core-orchestration-grounding/ai-core-orchestration-grounding.md @@ -16,21 +16,23 @@ 
author_profile: https://github.com/I321506 ## Prerequisites 1. **BTP Account** - Set up your SAP Business Technology Platform (BTP) account. + If you do not already have a commercial SAP Business Technology Platform (BTP) account, you can use **BTP Advanced Trial**. [Create a BTP Account](https://developers.sap.com/group.btp-setup.html) 2. **For SAP Developers or Employees** Internal SAP stakeholders should refer to the following documentation: [How to create BTP Account For Internal SAP Employee](https://me.sap.com/notes/3493139), [SAP AI Core Internal Documentation](https://help.sap.com/docs/sap-ai-core) 3. **For External Developers, Customers, or Partners** Follow this tutorial to set up your environment and entitlements: [External Developer Setup Tutorial](https://developers.sap.com/tutorials/btp-cockpit-entitlements.html), [SAP AI Core External Documentation](https://help.sap.com/docs/sap-ai-core?version=CLOUD) 4. **Create BTP Instance and Service Key for SAP AI Core** - Follow the steps to create an instance and generate a service key for SAP AI Core: + Follow the steps to create an instance and generate a service key for SAP AI Core. Ensure that you use the **extended** service plan: [Create Service Key and Instance](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/create-service-key?version=CLOUD) 5. **AI Core Setup Guide** Step-by-step guide to set up and get started with SAP AI Core: - [AI Core Setup Tutorial](https://developers.sap.com/tutorials/ai-core-setup.html) -6. An Extended SAP AI Core service plan is required, as the Generative AI Hub is not available in the Free or Standard tiers. For more details, refer to
For more details, refer to [SAP AI Core Service Plans](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/service-plans?version=CLOUD) -7. Access to Microsoft SharePoint for grounding capabilities. +7. **AI Launchpad Setup Guide** + Step-by-step guide to set up AI Launchpad: + [AI Launchpad Tutorial](https://developers.sap.com/tutorials/ai-launchpad-provisioning.html) ### Pre-read @@ -324,7 +326,7 @@ Use the below payload to create a secret for AWS S3 with NoAuthentication as aut { "name": "", // Name of the generic secret to be created "data": { - "url": "", // Base64 encoded value of url + "url": "", // Base64-encoded value in the format https://s3..amazonaws.com "authentication": "Tm9BdXRoZW50aWNhdGlvbg=", // Base64 encoded value for NoAuthentication "description": "", // Base64 encoded description of the secret "access_key_id": "", // Base64 encoded value of access key id diff --git a/tutorials/ai-launchpad-provisioning/ai-launchpad-provisioning.md b/tutorials/ai-launchpad-provisioning/ai-launchpad-provisioning.md index de16789f40..edced72b6d 100644 --- a/tutorials/ai-launchpad-provisioning/ai-launchpad-provisioning.md +++ b/tutorials/ai-launchpad-provisioning/ai-launchpad-provisioning.md @@ -57,7 +57,7 @@ Click `Configure Entitlements` > `Add Service Plans`. ![Set SAP AI Launchpad as an entitlement](img/configureentitlements.png) ![Set SAP AI Launchpad as an entitlement](img/addserviceplan.png) -Select SAP AI Core and the `standard` service plan. +Select SAP AI Launchpad and the `standard` service plan. 
![Set SAP AI Launchpad as an entitlement](img/ail_select_entitlement.png) diff --git a/tutorials/application-frontend-cli/application-frontend-cli.md b/tutorials/application-frontend-cli/application-frontend-cli.md index 3c13690c21..0528c02ee2 100644 --- a/tutorials/application-frontend-cli/application-frontend-cli.md +++ b/tutorials/application-frontend-cli/application-frontend-cli.md @@ -12,9 +12,10 @@ author_profile: https://github.com/tahelMilstein Learn how to create and deploy your first "Hello World" application using Application Frontend Service and the afctl CLI, including version management basics. ## Prerequisites - -- You have an account in SAP BTP Trial landscape us10-trial. If you don't have one yet, follow the instructions in [Get a Free Account on SAP BTP Trial](hcp-create-trial-account) -- Completed the setup steps in the [Application frontend trial setup](../application-frontend-trial-setup/application-frontend-trial-setup.md) guide. + - SAP Business Technology Platform subaccount + - Cloud Foundry environment enabled in subaccount + - [Subscription to Application Frontend service](application-frontend-trial-setup) + - Subscription to SAP Business Application Studio ## You will learn - How to log in to **Application Frontend Service** using the CLI. @@ -23,8 +24,28 @@ author_profile: https://github.com/tahelMilstein - Activate and manage different application versions using `afctl`. --- +### Enable Application Frontend CLI locally or in SAP Business Application Studio + +#### Option 1 - Create SAP Business Application Studio Dev Space: +
    +
+1. Navigate to your BTP Cockpit subaccount.
+2. Navigate to **Services** > **Instances and Subscriptions**.
+3. In the Subscriptions table, click the **SAP Business Application Studio** link.
+
+![Open SAP Business Application Studio from BTP Cockpit](open-bas-1.png)
+
+1. Click **Create Dev Space**.
+2. Enter a Dev Space name (e.g. `MyDevSpace`).
+3. Select the **SAP Fiori** kind of application.
+4. Select the **Application Frontend Service CLI** additional SAP extension.
+5. Click **Create Dev Space**.
  10. +
+
+![Create Dev Space](create-bas-ws-1.png)

-### Install Application Frontend CLI
+#### Option 2 - Install Application Frontend CLI locally

 Before you start, make sure you have the required tools installed locally.
diff --git a/tutorials/application-frontend-cli/create-bas-ws-1.png b/tutorials/application-frontend-cli/create-bas-ws-1.png
new file mode 100644
index 0000000000..a4551eecb9
Binary files /dev/null and b/tutorials/application-frontend-cli/create-bas-ws-1.png differ
diff --git a/tutorials/application-frontend-cli/open-bas-1.png b/tutorials/application-frontend-cli/open-bas-1.png
new file mode 100644
index 0000000000..e0dab28957
Binary files /dev/null and b/tutorials/application-frontend-cli/open-bas-1.png differ
diff --git a/tutorials/application-frontend-mta/trial-destinations-1.png b/tutorials/application-frontend-mta/trial-destinations-1.png
index 043e50041c..4a42aeded5 100644
Binary files a/tutorials/application-frontend-mta/trial-destinations-1.png and b/tutorials/application-frontend-mta/trial-destinations-1.png differ
diff --git a/tutorials/appstudio-sapui5-integrationcard-create/2-4 NEW.PNG b/tutorials/appstudio-sapui5-integrationcard-create/2-4 NEW.PNG
new file mode 100644
index 0000000000..92e59c88b3
Binary files /dev/null and b/tutorials/appstudio-sapui5-integrationcard-create/2-4 NEW.PNG differ
diff --git a/tutorials/appstudio-sapui5-integrationcard-create/2-4.PNG b/tutorials/appstudio-sapui5-integrationcard-create/2-4.PNG
index 2a6dec3dbc..69826b6d79 100644
Binary files a/tutorials/appstudio-sapui5-integrationcard-create/2-4.PNG and b/tutorials/appstudio-sapui5-integrationcard-create/2-4.PNG differ
diff --git a/tutorials/appstudio-sapui5-integrationcard-create/2-5 NEW.PNG b/tutorials/appstudio-sapui5-integrationcard-create/2-5 NEW.PNG
new file mode 100644
index 0000000000..e17c8a773a
Binary files /dev/null and b/tutorials/appstudio-sapui5-integrationcard-create/2-5 NEW.PNG differ
diff --git a/tutorials/appstudio-sapui5-integrationcard-create/2-6 NEW.PNG b/tutorials/appstudio-sapui5-integrationcard-create/2-6 NEW.PNG
new file mode 100644
index 0000000000..88032e5d64
Binary files /dev/null and b/tutorials/appstudio-sapui5-integrationcard-create/2-6 NEW.PNG differ
diff --git a/tutorials/appstudio-sapui5-integrationcard-create/2-7 NEW.PNG b/tutorials/appstudio-sapui5-integrationcard-create/2-7 NEW.PNG
new file mode 100644
index 0000000000..002bb5028b
Binary files /dev/null and b/tutorials/appstudio-sapui5-integrationcard-create/2-7 NEW.PNG differ
diff --git a/tutorials/appstudio-sapui5-integrationcard-create/3-1 NEW.PNG b/tutorials/appstudio-sapui5-integrationcard-create/3-1 NEW.PNG
new file mode 100644
index 0000000000..a9353de5af
Binary files /dev/null and b/tutorials/appstudio-sapui5-integrationcard-create/3-1 NEW.PNG differ
diff --git a/tutorials/appstudio-sapui5-integrationcard-create/3-2 NEW.PNG b/tutorials/appstudio-sapui5-integrationcard-create/3-2 NEW.PNG
new file mode 100644
index 0000000000..d43658d508
Binary files /dev/null and b/tutorials/appstudio-sapui5-integrationcard-create/3-2 NEW.PNG differ
diff --git a/tutorials/appstudio-sapui5-integrationcard-create/3-3 NEW.PNG b/tutorials/appstudio-sapui5-integrationcard-create/3-3 NEW.PNG
new file mode 100644
index 0000000000..b044cdc0b9
Binary files /dev/null and b/tutorials/appstudio-sapui5-integrationcard-create/3-3 NEW.PNG differ
diff --git a/tutorials/appstudio-sapui5-integrationcard-create/3-4 NEW.PNG b/tutorials/appstudio-sapui5-integrationcard-create/3-4 NEW.PNG
new file mode 100644
index 0000000000..ead83e3048
Binary files /dev/null and b/tutorials/appstudio-sapui5-integrationcard-create/3-4 NEW.PNG differ
diff --git a/tutorials/appstudio-sapui5-integrationcard-create/4-1 NEW.PNG b/tutorials/appstudio-sapui5-integrationcard-create/4-1 NEW.PNG
new file mode 100644
index 0000000000..d176a1337c
Binary files /dev/null and b/tutorials/appstudio-sapui5-integrationcard-create/4-1 NEW.PNG differ
diff --git a/tutorials/appstudio-sapui5-integrationcard-create/4-2 NEW.PNG b/tutorials/appstudio-sapui5-integrationcard-create/4-2 NEW.PNG
new file mode 100644
index 0000000000..e2e38e01da
Binary files /dev/null and b/tutorials/appstudio-sapui5-integrationcard-create/4-2 NEW.PNG differ
diff --git a/tutorials/appstudio-sapui5-integrationcard-create/4-3 DATA REQ.PNG b/tutorials/appstudio-sapui5-integrationcard-create/4-3 DATA REQ.PNG
new file mode 100644
index 0000000000..9f8425db98
Binary files /dev/null and b/tutorials/appstudio-sapui5-integrationcard-create/4-3 DATA REQ.PNG differ
diff --git a/tutorials/appstudio-sapui5-integrationcard-create/4-3 NEW.PNG b/tutorials/appstudio-sapui5-integrationcard-create/4-3 NEW.PNG
new file mode 100644
index 0000000000..4813b1d60f
Binary files /dev/null and b/tutorials/appstudio-sapui5-integrationcard-create/4-3 NEW.PNG differ
diff --git a/tutorials/appstudio-sapui5-integrationcard-create/4-4 NEW.PNG b/tutorials/appstudio-sapui5-integrationcard-create/4-4 NEW.PNG
new file mode 100644
index 0000000000..5e059b5fd4
Binary files /dev/null and b/tutorials/appstudio-sapui5-integrationcard-create/4-4 NEW.PNG differ
diff --git a/tutorials/appstudio-sapui5-integrationcard-create/5-1 NEW.PNG b/tutorials/appstudio-sapui5-integrationcard-create/5-1 NEW.PNG
new file mode 100644
index 0000000000..673fe690b4
Binary files /dev/null and b/tutorials/appstudio-sapui5-integrationcard-create/5-1 NEW.PNG differ
diff --git a/tutorials/appstudio-sapui5-integrationcard-create/5-2 NEW.PNG b/tutorials/appstudio-sapui5-integrationcard-create/5-2 NEW.PNG
new file mode 100644
index 0000000000..1670ea7c41
Binary files /dev/null and b/tutorials/appstudio-sapui5-integrationcard-create/5-2 NEW.PNG differ
diff --git a/tutorials/appstudio-sapui5-integrationcard-create/5-3 NEW.PNG b/tutorials/appstudio-sapui5-integrationcard-create/5-3 NEW.PNG
new file mode 100644
index 0000000000..70619dda3f
Binary files /dev/null and b/tutorials/appstudio-sapui5-integrationcard-create/5-3 NEW.PNG differ
diff --git a/tutorials/appstudio-sapui5-integrationcard-create/5-4 NEW.PNG b/tutorials/appstudio-sapui5-integrationcard-create/5-4 NEW.PNG
new file mode 100644
index 0000000000..a0a1cc8d60
Binary files /dev/null and b/tutorials/appstudio-sapui5-integrationcard-create/5-4 NEW.PNG differ
diff --git a/tutorials/appstudio-sapui5-integrationcard-create/5-5 NEW.PNG b/tutorials/appstudio-sapui5-integrationcard-create/5-5 NEW.PNG
new file mode 100644
index 0000000000..d722f4bc06
Binary files /dev/null and b/tutorials/appstudio-sapui5-integrationcard-create/5-5 NEW.PNG differ
diff --git a/tutorials/appstudio-sapui5-integrationcard-create/6-1 NEW.PNG b/tutorials/appstudio-sapui5-integrationcard-create/6-1 NEW.PNG
new file mode 100644
index 0000000000..e937360b6b
Binary files /dev/null and b/tutorials/appstudio-sapui5-integrationcard-create/6-1 NEW.PNG differ
diff --git a/tutorials/appstudio-sapui5-integrationcard-create/6-2 NEW.PNG b/tutorials/appstudio-sapui5-integrationcard-create/6-2 NEW.PNG
new file mode 100644
index 0000000000..51b5cf2798
Binary files /dev/null and b/tutorials/appstudio-sapui5-integrationcard-create/6-2 NEW.PNG differ
diff --git a/tutorials/appstudio-sapui5-integrationcard-create/appstudio-sapui5-integrationcard-create.md b/tutorials/appstudio-sapui5-integrationcard-create/appstudio-sapui5-integrationcard-create.md
index 8c487949f7..cb0be45111 100644
--- a/tutorials/appstudio-sapui5-integrationcard-create/appstudio-sapui5-integrationcard-create.md
+++ b/tutorials/appstudio-sapui5-integrationcard-create/appstudio-sapui5-integrationcard-create.md
@@ -1,4 +1,4 @@
-+---
+---
 parser: v2
 auto_validation: true
 time: 25
@@ -7,18 +7,16 @@ primary_tag: software-product>sap-work-zone
 author_name: Boris Dafov
 ---

-# Create a UI5 Integration Card that Displays Data from the SAP Gateway Demo System
- Create a UI5 integration card in SAP Build Work Zone, advanced edition to display data from the backend SAP Gateway Demo System.
+# Create a UI5 Integration Card that Displays Data from the Northwind Demo System
+ Create a UI5 integration card in SAP Build Work Zone to display data from the Northwind backend.

 ## Prerequisites
 - Please note that if you are following this tutorial as part of a workshop, you can skip these prerequisites.
-- You have an account on the SAP Gateway Demo System. See [Create an Account on the SAP Gateway Demo System](gateway-demo-signup).
-- You have connected the SAP BTP to your SAP Gateway Demo System Account. See [Connect SAP BTP to Your SAP Gateway Demo System Account (ES5)](cp-portal-cloud-foundry-gateway-connection).
 - You have created a dev space. See [Create a Dev Space for SAP Fiori Apps](appstudio-devspace-fiori-create).
-- To deploy a UI5 Integration card in the SAP Build Work Zone, you should have a subaccount in SAP BTP that includes a subscription to the "SAP Build Work Zone, advanced edition" service. Additionally, you have to configure a destination for the SAP Build Work Zone, advanced edition instance. See [Development Tools for SAP Build Work Zone, advanced edition](https://help.sap.com/docs/build-work-zone-advanced-edition/sap-build-work-zone-advanced-edition/development-tools-for-sap-build-work-zone-advanced-edition).
+- To deploy a UI5 Integration card in SAP Build Work Zone, you should have a subaccount in SAP BTP that includes a subscription to the SAP Build Work Zone service. Additionally, you have to configure a destination for the SAP Build Work Zone instance. See [Development Tools for SAP Build Work Zone](https://help.sap.com/docs/build-work-zone-advanced-edition/sap-build-work-zone-advanced-edition/development-tools-for-sap-build-work-zone-advanced-edition).

->**IMPORTANT:** SAP Build Work Zone, advanced edition is not available in a trial account (only Build Work Zone, standard edition). If you only have a trial account and you want to learn more about the Integration cards you can follow this tutorial from steps 1 to 5.
+>**IMPORTANT:** SAP Build Work Zone is not available in a trial account. If you only have a trial account and you want to learn more about the Integration cards, you can follow this tutorial from steps 1 to 5.

 ## You will learn

@@ -62,26 +60,26 @@ Integration cards are UI elements which display concise pieces of information in

    ![Image depicting UI Integration Card template option](2-3.PNG)

 4. Fill-in the required project details. Use the **Highlight Card** template, which creates an Integration card of type List and select Finish.

->If you are following this tutorial as part of a workshop, please give your card a unique name. In this case your card name should be `wz_products_by_category_card`.
+>If you are following this tutorial as part of a workshop, please give your card a unique name. In this case your card name should be `wz_orders_by_shipper`.

    | Description | Value
    | :------------- | :-------------
-   | Project Name | `products_by_category_card` If you're taking part in a workshop, please add your unique identifier to the project name like this: `_products_by_category_card`.
+   | Project Name | `orders_by_shipper` If you're taking part in a workshop, please add your unique identifier to the project name like this: `_orders_by_shipper`.
    | Name Space | `ns`
    | Select a Card Sample (dropdown menu) | `Highlight Card`
-   | Title | `Products by Category Card`
+   | Title | `Orders by Shipper`
    | Subtitle | `UI5 Integration Card of Type List`
    | Compatible with SAP Mobile Cards (dropdown menu) | `False`

-   ![Image depicting required Project Details](2-4.PNG)
+   ![Image depicting required Project Details](2-4 NEW.PNG)

 5. To see the card, right-click on `manifest.json` and select **UI Integration Card: Preview**.
-   ![Image depicting UI Integration Card: Preview option](2-5.PNG)
+   ![Image depicting UI Integration Card: Preview option](2-5 NEW.PNG)

 6. Currently the card displays only static data:

-   ![Image depicting the application showing only static data](2-6.PNG)
+   ![Image depicting the application showing only static data](2-6 NEW.PNG)

 7. Open the `manifest.json` file. Everything needed to render the card is described in this file.

@@ -96,202 +94,218 @@ Integration cards are UI elements which display concise pieces of information in

   - `data` sections: Define how the card handles its data. It can provide static data (see the `json` object below) or define required parameters for a data request to a backend system. Can be set on different levels (card, header, filter-definition, or content). The inner level data sections take precedence. In the example below the data section is defined on content level.

-   ![Image depicting manifest.json file structure](2-7.PNG)
+   ![Image depicting manifest.json file structure](2-7 NEW.PNG)

 In the next steps you edit the `manifest.json` file to configure the card.

 ### Add destination to connect to Gateway

-  By connecting your card to the SAP Gateway Demo System (ES5), you're enabling the card to display dynamic data. Card destinations are used for outbound communication to a remote resource and contain the required connection information.
+  By connecting your card to the public Northwind demo service, you're enabling the card to display dynamic data. Card destinations are used for outbound communication to a remote resource and contain the required connection information.

-1. To set a destination, add the following `configuration` section in the `sap.card` section after the `type` subsection. Note, that the card destination is pointing to the same (ES5) destination that is set on the subaccount level.
+1. To set a destination, add the following `configuration` section in the `sap.card` section after the `type` subsection.

    ```JSON
-   "configuration": {
-     "destinations": {
-       "ES5": {
-         "name": "ES5",
-         "defaultUrl": "/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/"
-       }
-     }
-   },
+   "configuration": {
+     "destinations": {
+       "Northwind": {
+         "name": "Northwind",
+         "label": "Northwind V4 Service URL",
+         "defaultUrl": "https://services.odata.org/V4/Northwind/Northwind.svc"
+       }
+     }
+   },
    ```

-   ![Image depicting manifest.json file – add configuration section](3-1.PNG)
+   ![Image depicting manifest.json file – add configuration section](3-1 NEW.PNG)

-2. To configure a data request pointing to the SAP Gateway Demo System, add a new `data` section after the `configuration`. In this way the `data` section will be defined on a card level. Note, that our destination is referred here using the double-bracket syntax `{{destinations.ES5}}`.
+2. To configure a data request pointing to the Northwind demo service, add a new `data` section after the `configuration`. In this way the `data` section will be defined on a card level. Note that our destination is referred to here using the double-bracket syntax `{{destinations.Northwind}}`.

    ```JSON
-   "data": {
-     "request": {
-       "url": "{{destinations.ES5}}/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/Products",
-       "withCredentials": true
-     },
-     "path": "/d/results"
-   },
+   "sap.card": {
+     "data": {
+       "request": {
+         "url": "{{destinations.Northwind}}/Orders"
+       },
+       "path": "/value"
+     }
+   },
    ```

-   ![Image depicting manifest.json file – add data section](3-2.PNG)
+   ![Image depicting manifest.json file – add data section](3-2 NEW.PNG)

+>**IMPORTANT:** Due to an issue with the **UI Integration Card: Preview** option, you may need to replace `{{destinations.Northwind}}` with `https://services.odata.org/V4/Northwind/Northwind.svc`!

-3. To display the dynamically requested data, replace the static `content` section with the following one. The `title`, `description`, `icon`, and `info` properties are now dynamically requested.
+Finally, to display the dynamically requested data, replace the static `content` section with the following one. The `title`, `description`, and `info` properties are now dynamically requested.

    ```JSON
    "content": {
-     "item": {
-       "title": "{Name}",
-       "description": "{Description}",
-       "icon": {
-         "src": "{ImageUrl}"
-       },
-       "info": {
-         "value": "{AverageRating}",
-         "state": "{= ${AverageRating} > 3.5 ? 'Success' : 'Warning' }"
-       }
-     },
-     "maxItems": 5
+     "item": {
+       "title": "{ShipName}",
+       "description": "{ShipAddress}",
+       "info": {
+         "value": "{ShipCountry}"
+       }
+     }
    }
    ```

-   ![Image depicting manifest.json file – replace content section](3-3.PNG)
+   ![Image depicting manifest.json file – replace content section](3-3 NEW.PNG)

 **Results after Step 3:**

-The application displays dynamic data loaded from the SAP Gateway Demo System (ES5). Note, that the actual displayed products may differ depending on the current data in the ES5 demo system. You can also check the [manifest.json](https://raw.githubusercontent.com/SAPDocuments/Tutorials/master/tutorials/appstudio-sapui5-integrationcard-create/manifest_after_step3.json) file at this step. To learn more, see the [Destinations](https://sapui5.hana.ondemand.com/test-resources/sap/ui/integration/demokit/cardExplorer/webapp/index.html#/learn/features/destinations) and [Data](https://sapui5.hana.ondemand.com/test-resources/sap/ui/integration/demokit/cardExplorer/webapp/index.html#/learn/features/data) sections in the Card Explorer.
+The application displays dynamic data loaded from the Northwind demo service. Note that the actual displayed orders may differ depending on the current data provided by the Northwind demo service. You can also check the [manifest.json](https://raw.githubusercontent.com/SAPDocuments/Tutorials/master/tutorials/appstudio-sapui5-integrationcard-create/manifest_after_step3.json) file at this step. To learn more, see the [Destinations](https://ui5.sap.com/test-resources/sap/ui/integration/demokit/cardExplorer/webapp/index.html#/learn/configuration/destinations) and [Data](https://ui5.sap.com/test-resources/sap/ui/integration/demokit/cardExplorer/webapp/index.html#/learn/features/data) sections in the Card Explorer.

-![Image depicting the application showing dynamic data](3-4.PNG)
+![Image depicting the application showing dynamic data](3-4 NEW.PNG)

 If you would like to deploy the card and see how it looks on SAP Build Work Zone, you can skip to Step 6 and deploy it. In the next steps you add card capabilities that can make your card more interactive.

 ### Add manifest parameters

-  Manifest parameters provide dynamic values for card attributes. They are replaced during manifest processing and can be used with the double-bracket syntax like: `{{parameters.city}}`. As an example, in this step you will add parameters to set the header (`title` and `subTitle`) properties and the number (`maxItems`) of displayed items in the content.
+  Manifest parameters provide dynamic values for card attributes. They are replaced during manifest processing and can be used from the `parameters` model, for example: `{parameters>/city/value}`. As an example, in this step you will add parameters to set the header (`title`) property and the number (`maxItems`) of displayed items in the content.
+
+  >If you are following this tutorial as part of a workshop and run out of time, you can skip steps 4, 5, and 6 and create a simpler card. You can read the steps you missed later.

 1. To define parameters - add the following `parameters` subsection in the `manifest.json` in the `configuration` section (note the comma which divides the entries).
    ```JSON
-   ,
-   "parameters": {
-     "title" : {
-       "value": "List Card with Top {{parameters.maxItems}} Products"
-     },
-     "subTitle": {
-       "value": "These are the top sellers this month"
-     },
-     "maxItems": {
-       "value": 4
-     }
-   }
+   "parameters": {
+     "title" : {
+       "value": "Orders by Shipper"
+     },
+     "maxOrdersShown": {
+       "value": "4",
+       "type": "integer",
+       "label": "Number of orders",
+       "description": "How many orders to show in the list."
+     }
+   }
+   ```
+
+   ![Image depicting manifest.json file - add parameters](4-1 NEW.PNG)
+
+2. To use the new `maxOrdersShown` parameter, add it as shown below:
+
+   ```JSON
+   "maxItems": "{parameters>/maxOrdersShown/value}"
    ```

-   ![Image depicting manifest.json file - add parameters](4-1.PNG)
+   ![Image depicting manifest.json file – use maxOrdersShown parameter](4-2 NEW.PNG)

-2. To use the new `maxItems` parameter, replace the `maxItems: 5` static value in the `content` section with the (`maxItems`) parameter as shown below:
+3. Update the data request as follows:

    ```JSON
-   "maxItems": "{{parameters.maxItems}}"
+   "data": {
+     "request": {
+       "url": "{{destinations.Northwind}}/Orders",
+       "parameters": {
+         "$top": "{parameters>/maxOrdersShown/value}"
+       }
+     },
+     "path": "/value"
+   }
    ```

+   ![Image depicting manifest.json file – use $top](4-3 DATA REQ.PNG)
-   ![Image depicting manifest.json file – use maxItems parameter](4-2.PNG)

+>**IMPORTANT:** Due to an issue with the **UI Integration Card: Preview** option, you may need to replace `{{destinations.Northwind}}` with `https://services.odata.org/V4/Northwind/Northwind.svc`!

-3. Let's also use the new parameters in the `header` section. Use the double-bracket syntax and edit (or replace) the header, so it looks like this:
+Finally, let's also use the new parameters in the `header` section. Use the `parameters` syntax and edit (or replace) the header, so it looks like this:

    ```JSON
    "header": {
-     "title": "{{parameters.title}}",
-     "subTitle": "{{parameters.subTitle}}",
-     "icon": {
-       "src": "sap-icon://desktop-mobile"
-     },
-     "status": {
-       "text": "{{parameters.maxItems}} of 20"
-     }
-   },
+     "title": "{parameters>/title/value}",
+     "icon": {
+       "src": "sap-icon://desktop-mobile"
+     },
+     "status": {
+       "text": "{parameters>/maxOrdersShown/value}"
+     }
+   },
    ```

-   ![Image depicting manifest.json file - edit header](4-3.PNG)
+   ![Image depicting manifest.json file - edit header](4-3 NEW.PNG)

 **Results after Step 4:**

-In this step you have learned how to declare configurable parameters and use them to achieve desired dynamic behavior. The application now displays a list of 4 items according to the `parameters` property (`maxItems value: 4`).
+In this step, you have learned how to declare configurable parameters and use them to achieve the desired dynamic behavior. The application now displays a list of 4 items according to the `parameters` property (`maxOrdersShown value: 4`).

-![Image depicting the application showing dynamic data using parameters](4-4.PNG)
+![Image depicting the application showing dynamic data using parameters](4-4 NEW.PNG)

-To learn more, see the [Manifest Parameters](https://sapui5.hana.ondemand.com/test-resources/sap/ui/integration/demokit/cardExplorer/webapp/index.html#/learn/features/manifestParameters) section in the Card Explorer.
+To learn more, see the [Manifest Parameters](https://ui5.sap.com/test-resources/sap/ui/integration/demokit/cardExplorer/webapp/index.html#/learn/configuration/manifestParameters) section in the Card Explorer.

 ### Add user interaction with filtering

-  You can make the card even more dynamic when using filters. Filters appear as a dropdown under the card header, and users can interact to customize the data shown by the card. The value of each filter can be used inside a data request definition by using the `{filters>/myFilter/value}` placeholder. When the end user selects different value from the dropdown - a new data request is made with the updated value. As an example, in this step you will add a filter that enables users to filter products by a selected category.
+  You can make the card even more dynamic when using filters. Filters appear as a dropdown under the card header, and users can interact to customize the data shown by the card. The value of each filter can be used inside a data request definition by using the `{filters>/myFilter/value}` placeholder. When the end user selects a different value from the dropdown, a new data request is made with the updated value. As an example, in this step you will add a filter that enables users to filter the orders by a selected shipper.

-1. Add a `filters` subsection in the `configuration` section. It defines a dropdown list with product categories, which are received by a data request.
+1. Add a `filters` subsection in the `configuration` section. It defines a dropdown list with shippers, which are received by a data request.

    ```JSON
-   ,
-   "filters": {
-     "mainCategory": {
-       "value": "{{parameters.selectedCategoryName}}",
-       "type": "string",
-       "label": "Main Category",
-       "description": "Filter products by main category.",
-       "item": {
-         "path": "/d/results",
-         "template": {
-           "key": "{Id}",
-           "title": "{Name}"
-         }
-       },
-       "data": {
-         "request": {
-           "url": "{{destinations.ES5}}/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/MainCategories",
-           "withCredentials": true
-         }
-       }
-     }
-   }
+   "filters": {
+     "shipper": {
+       "value": "{parameters>/selectedShipperID/value}",
+       "type": "Select",
+       "label": "Shipper",
+       "item": {
+         "path": "/value",
+         "template": {
+           "key": "{ShipperID}",
+           "title": "{CompanyName}"
+         }
+       },
+       "data": {
+         "request": {
+           "url": "{{destinations.Northwind}}/Shippers"
+         }
+       }
+     }
+   },
    ```

-   ![Image depicting manifest.json file - add filters section](5-1.PNG)
+   ![Image depicting manifest.json file - add filters section](5-1 NEW.PNG)

-2. Add `selectedCategoryName` subsection in the `parameters` section. This is the category that is initially selected in the filter. Later, the user can change it from the dropdown list.
+2. Add `selectedShipperID` subsection in the `parameters` section. This is the shipper that is initially selected in the filter. Later, the user can change it from the dropdown list.

    ```JSON
-   ,
-   "selectedCategoryName": {
-     "value": "Computer Systems"
-   }
+   "selectedShipperID": {
+     "value": 3,
+     "label": "The default selected shipper"
+   }
    ```

-   ![Image depicting manifest.json file – set the initially selected category](5-2.PNG)
+   ![Image depicting manifest.json file – set the initially selected shipper](5-2 NEW.PNG)

-3. Add `parameters` in the main `data` section > `request` subsection, after the `url` property as shown below. The `$filter` parameter will be used in a data request for the category with `MainCategoryName` that is equal to the one selected by the user in the filter's dropdown list.
+3. Add `parameters` in the main `data` section > `request` subsection, after the `url` property as shown below. The `$filter` parameter will be used in a data request for the orders with `shipper` that is equal to the one selected by the user in the filter's dropdown list.

    ```JSON
-   "parameters": {
-     "$filter": "MainCategoryName eq '{filters>/mainCategory/value}'"
-   },
+   "request": {
+     "url": "https://services.odata.org/V4/Northwind/Northwind.svc/Orders",
+     "parameters": {
+       "$top": "{parameters>/maxOrdersShown/value}",
+       "$filter": "Shipper/ShipperID eq {filters>/shipper/value}"
+     }
+   }
    ```

-   ![Image depicting manifest.json file - add filter parameter in the main data section](5-3.PNG)
+   ![Image depicting manifest.json file - add filter parameter in the main data section](5-3 NEW.PNG)

-4. Finally replace the title in the `header` adding the `{filters>/shipper/selectedItem/title}` parameter, which will show the selected category:
+4. Finally, replace the title in the `header` adding the `{filters>/shipper/selectedItem/title}` parameter, which will show the selected shipper:

    ```JSON
-   "title": "Products filtered by {filters>/mainCategory/selectedItem/title} category",
+   "title": "Orders by Shipper {filters>/shipper/selectedItem/title}",
    ```

-   ![Image depicting manifest.json file – use parameters in the header's title ](5-4.PNG)
+   ![Image depicting manifest.json file – use parameters in the header's title ](5-4 NEW.PNG)

 **Results after Step 5:**

 If you have any issues you can check the [manifest.json](https://raw.githubusercontent.com/SAPDocuments/Tutorials/master/tutorials/appstudio-sapui5-integrationcard-create/manifest.json) file at this step. It is configured with destinations, parameters, and a filter.

-The application displays the products from the selected category:
-![Image depicting the application showing dynamic data, parameters, and a filter](5-5.PNG)
+The application displays the orders from the selected shipper:

->**IMPORTANT:** Due to an issue with the **UI Integration Card: Preview** option, it may not be able to correctly display the products that are filtered!
+![Image depicting the application showing dynamic data, parameters, and a filter](5-5 NEW.PNG)

-To learn more, see the [Filters](https://sapui5.hana.ondemand.com/test-resources/sap/ui/integration/demokit/cardExplorer/webapp/index.html#/learn/features/filters) section in the Card Explorer.
+To learn more, see the [Card Filters](https://ui5.sap.com/test-resources/sap/ui/integration/demokit/cardExplorer/webapp/index.html#/learn/filters) section in the Card Explorer.

 ### Configure card parameters that are displayed in SAP Build Work Zone

@@ -299,11 +313,11 @@

 1. Select the `dt/configuration.js` file (in the Explorer view on the left).
-   ![Image depicting the configuration.js file in the file menu](6-1.PNG)
+   ![Image depicting the configuration.js file in the file menu](6-1 NEW.PNG)

 2. Replace the content with the code below:

-```JSON
+```JAVASCRIPT
 sap.ui.define(["sap/ui/integration/Designtime"], function (
   Designtime
 ) {
@@ -313,7 +327,7 @@ sap.ui.define(["sap/ui/integration/Designtime"], function (
       "form": {
         "items": {
           "maxItems": {
-            "manifestpath": "/sap.card/configuration/parameters/maxItems/value",
+            "manifestpath": "/sap.card/configuration/parameters/maxOrdersShown/value",
             "type": "integer",
             "label": "Maximum Items",
             "translatable": false,
@@ -331,7 +345,7 @@ sap.ui.define(["sap/ui/integration/Designtime"], function (

 The `dt/configuration.js` now looks like:

-![Image depicting the configuration.js file content](6-2.PNG)
+![Image depicting the configuration.js file content](6-2 NEW.PNG)
diff --git a/tutorials/appstudio-sapui5-integrationcard-create/manifest.json b/tutorials/appstudio-sapui5-integrationcard-create/manifest.json
index 6f7b0481d8..d0a95e4d73 100644
--- a/tutorials/appstudio-sapui5-integrationcard-create/manifest.json
+++ b/tutorials/appstudio-sapui5-integrationcard-create/manifest.json
@@ -1,9 +1,9 @@
 {
   "_version": "1.14.0",
   "sap.app": {
-    "id": "ns.products_by_category_card",
+    "id": "ns.orders_by_shipper",
     "type": "card",
-    "title": "Products by Category Card",
+    "title": "Orders by Shipper",
     "subTitle": "UI5 Integration Card of Type List",
     "applicationVersion": {
       "version": "1.0.0"
@@ -20,82 +20,77 @@
     "type": "List",
     "configuration": {
       "destinations": {
-        "ES5": {
-          "name": "ES5",
-          "defaultUrl": "/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/"
+        "Northwind": {
+          "name": "Northwind",
+          "label": "Northwind V4 Service URL",
+          "defaultUrl": "https://services.odata.org/V4/Northwind/Northwind.svc"
         }
       },
       "parameters": {
-        "title" : {
-          "value": "List Card with Top {{parameters.maxItems}} Products"
+        "title": {
+          "value": "Orders by Shipper"
         },
-        "subTitle": {
-          "value": "These are the top sellers this month"
+        "maxOrdersShown": {
+          "value": "4",
+          "type": "integer",
+          "label": "Number of orders",
+          "description": "How many orders to show in the list."
         },
-        "maxItems": {
-          "value": 4
-        },
-        "selectedCategoryName": {
-          "value": "Computer Systems"
-        }
-      },
-      "filters": {
-        "mainCategory": {
-          "value": "{{parameters.selectedCategoryName}}",
-          "type": "string",
-          "label": "Main Category",
-          "description": "Filter products by main category.",
-          "item": {
-            "path": "/d/results",
-            "template": {
-              "key": "{Id}",
-              "title": "{Name}"
-            }
-          },
-          "data": {
-            "request": {
-              "url": "{{destinations.ES5}}/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/MainCategories",
-              "withCredentials": true
-            }
-          }
-        }
-      }
+        "selectedShipperID": {
+          "value": 3,
+          "label": "The default selected shipper"
+        }
+      },
+      "filters": {
+        "shipper": {
+          "value": "{parameters>/selectedShipperID/value}",
+          "type": "Select",
+          "label": "Shipper",
+          "item": {
+            "path": "/value",
+            "template": {
+              "key": "{ShipperID}",
+              "title": "{CompanyName}"
+            }
+          },
+          "data": {
+            "request": {
+              "url": "https://services.odata.org/V4/Northwind/Northwind.svc/Shippers"
+            }
+          }
+        }
+      }
     },
     "data": {
       "request": {
-        "url": "{{destinations.ES5}}/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/Products",
-        "parameters": {
-          "$filter": "MainCategoryName eq '{filters>/mainCategory/value}'"
-        },
-        "withCredentials": true
-      },
-      "path": "/d/results"
-    },
+        "url": "https://services.odata.org/V4/Northwind/Northwind.svc/Orders",
+        "parameters": {
+          "$top": "{parameters>/maxOrdersShown/value}",
+          "$filter": "Shipper/ShipperID eq {filters>/shipper/value}"
+        }
+      },
+      "path": "/value"
+    },
     "designtime": "dt/configuration",
     "header": {
-      "title": "Products filtered by {filters>/mainCategory/selectedItem/title} category",
-      "subTitle": "{{parameters.subTitle}}",
+      "title": "Orders by Shipper {filters>/shipper/selectedItem/title}",
       "icon": {
         "src": "sap-icon://desktop-mobile"
       },
       "status": {
-        "text": "{{parameters.maxItems}} of 20"
+        "text": "{parameters>/maxOrdersShown/value}"
       }
     },
     "content": {
-      "item": {
-        "title": "{Name}",
-        "description": "{Description}",
-        "icon": {
-          "src": "{ImageUrl}"
-        },
-        "info": {
-          "value": "{AverageRating}",
-          "state": "{= ${AverageRating} > 3.5 ? 'Success' : 'Warning' }"
-        }
-      },
-      "maxItems": "{{parameters.maxItems}}"
-    }
+      "item": {
+        "title": "{ShipName}",
+        "description": "{ShipAddress}",
+        "info": {
+          "value": "{ShipCountry}"
+        }
+      },
+      "maxItems": "{parameters>/maxOrdersShown/value}"
+    }
   },
   "sap.platform.mobilecards": {
     "compatible": false
diff --git a/tutorials/appstudio-sapui5-integrationcard-create/manifest_after_step3.json b/tutorials/appstudio-sapui5-integrationcard-create/manifest_after_step3.json
index 63c4e7027a..6418d00e1b 100644
--- a/tutorials/appstudio-sapui5-integrationcard-create/manifest_after_step3.json
+++ b/tutorials/appstudio-sapui5-integrationcard-create/manifest_after_step3.json
@@ -1,9 +1,9 @@
 {
   "_version": "1.14.0",
   "sap.app": {
-    "id": "ns.products_by_category_card",
+    "id": "ns.orders_by_shipper",
     "type": "card",
-    "title": "Products by Category Card",
+    "title": "Orders by Shipper",
     "subTitle": "UI5 Integration Card of Type List",
     "applicationVersion": {
       "version": "1.0.0"
@@ -20,19 +20,19 @@
     "type": "List",
     "configuration": {
       "destinations": {
-        "ES5": {
-          "name": "ES5",
-          "defaultUrl": "/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/"
+        "Northwind": {
+          "name": "Northwind",
+          "label": "Northwind V4 Service URL",
+          "defaultUrl": "https://services.odata.org/V4/Northwind/Northwind.svc"
         }
       }
     },
     "data": {
       "request": {
-        "url": "{{destinations.ES5}}/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/Products",
-        "withCredentials": true
-      },
-      "path": "/d/results"
-    },
+        "url": "https://services.odata.org/V4/Northwind/Northwind.svc/Orders"
+      },
+      "path": "/value"
+    },
     "designtime": "dt/configuration",
     "header": {
       "title": "List Card with Top 5 Products",
@@ -45,19 +45,14 @@
       }
     },
     "content": {
-      "item": {
-        "title": "{Name}",
-        "description": "{Description}",
-        "icon": {
-          "src": "{ImageUrl}"
-        },
-        "info": {
-          "value": "{AverageRating}",
-          "state": "{= ${AverageRating} > 3.5 ? 'Success' : 'Warning' }"
-        }
-      },
-      "maxItems": 5
-    }
+      "item": {
+        "title": "{ShipName}",
+        "description": "{ShipAddress}",
+        "info": {
+          "value": "{ShipCountry}"
+        }
+      }
+    }
   },
   "sap.platform.mobilecards": {
     "compatible": false
diff --git a/tutorials/btp-app-create-cap-application/btp-app-create-cap-application.md b/tutorials/btp-app-create-cap-application/btp-app-create-cap-application.md
deleted file mode 100644
index a3bc3114de..0000000000
--- a/tutorials/btp-app-create-cap-application/btp-app-create-cap-application.md
+++ /dev/null
@@ -1,232 +0,0 @@
----
-author_name: Mahati Shankar
-author_profile: https://github.com/smahati
-title: Create a CAP-Based Application
-description: This tutorial shows you how to create a new CAP-based application, which exposes the OData V4 protocol.
-keywords: cap
-auto_validation: true
-time: 15
-tags: [ tutorial>beginner, software-product-function>sap-cloud-application-programming-model, programming-tool>node-js, software-product>sap-business-technology-platform]
-primary_tag: software-product-function>sap-cloud-application-programming-model
----
-
-## Prerequisites
- - [Prepare Your Development Environment for CAP](btp-app-prepare-dev-environment-cap)
-
-## Details
-### You will learn
- - How to use the CAP's tooling `cds init` to create your project
- - How to use the CAP's tooling `cds watch` to launch your project
- - How to add files to your project
-
----
-> This tutorial will soon be phased out.
-> -> For more tutorials about how to develop and deploy a full stack CAP application on SAP BTP, see: -> -> - [Develop a Full-Stack CAP Application Following SAP BTP Developer’s Guide](https://developers.sap.com/group.cap-application-full-stack.html) -> - [Deploy a Full-Stack CAP Application in SAP BTP, Cloud Foundry Runtime Following SAP BTP Developer’s Guide](https://developers.sap.com/group.deploy-full-stack-cap-application.html) -> - [Deploy a Full-Stack CAP Application in SAP BTP, Kyma Runtime Following SAP BTP Developer’s Guide](https://developers.sap.com/group.deploy-full-stack-cap-kyma-runtime.html) -> -> To continue learning how to implement business applications on SAP BTP, see: -> -> - [SAP BTP Developer’s Guide](https://help.sap.com/docs/btp/btp-developers-guide/what-is-btp-developers-guide?version=Cloud&locale=en-US) -> - [Related Hands-On Experience](https://help.sap.com/docs/btp/btp-developers-guide/related-hands-on-experience?version=Cloud&locale=en-US) -> - [Tutorials for ABAP Cloud](https://help.sap.com/docs/btp/btp-developers-guide/tutorials-for-abap-cloud?version=Cloud&locale=en-US) -> - [Tutorials for SAP Cloud Application Programming Model](https://help.sap.com/docs/btp/btp-developers-guide/tutorials-for-sap-cloud-application-programming-model?version=Cloud&locale=en-US) - -[ACCORDION-BEGIN [Step 1: ](Create and initialize the project)] -1. Open a command line window. - -2. Navigate to your tutorial root directory. - - ```Shell/Bash - cd - ``` - -4. Execute the following command: - - ```Shell/Bash - cds init cpapp - ``` - This creates a `cpapp` directory and an initial CAP project within the `cpapp` directory. - -3. Switch to your `cpapp` directory. - - ```Shell/Bash - cd cpapp - ``` - -5. Open the project in VS Code. - - ```Shell/Bash - code . 
- ``` - - The project looks like this in VS Code: - - ![VS Code](vscode.png) - - > You might see some hidden files in your project in case you have previously customized your `Files: Exclude` settings in VS Code. More info in [Default settings](https://code.visualstudio.com/docs/getstarted/settings#_default-settings). - -6. In VS Code choose **Terminal** → **New Terminal** from its menu. - - A new terminal opens in the lower right part of the VS Code screen. - -7. In the VS Code terminal, run the following command. - - ```Shell/Bash - npm install - ``` - -8. In the VS Code terminal, start a CAP server. - - ```Shell/Bash - cds watch - ``` - - > In case you get the error: `cds : File \cds.ps1 cannot be loaded because running scripts is disabled on this system.` after you run `cds watch` - - > You can run the command: - - > ```bash - > Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope CurrentUser - > ``` - - > This will change the script execution policy for your user to `Bypass` directly from the VS Code terminal. To learn more about execution policies, see [About Execution Policies](https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_execution_policies?view=powershell-7.1). - - The CAP server serves all the CAP sources from your project. It also "watches" all the files in your projects and conveniently restarts whenever you save a file. Changes you have made will immediately be served without you having to do anything. - - The screen now looks like this: - - ![CDS Watch](cdswatch.png) - - The CAP server tells you that there is no model and no service definitions yet that it can serve. You add some in the next step. - -[VALIDATE_1] -[ACCORDION-END] ---- -[ACCORDION-BEGIN [Step 2: ](Add files to the project)] -1. Open the Finder on Mac or the Explorer on Windows and navigate to the `tutorial` directory created in [Prepare Your Development Environment for CAP](btp-app-prepare-dev-environment-cap). - -2. 
Open the directory `templates` and keep it open as you copy a number of files from there. For this part of the tutorial and others, it's probably best if you place it next to your VS Code instance. - - !![Windows](codeandfinder.png) - - Alternatively, you can open it as a second folder in your VS Code project: **File** → **Add Folder to Workspace...**. - -3. Copy the file `schema.cds` from `templates/create-cap-application/db` to the `db` folder of your app. - - This is the code: - - - ```JavaScript - namespace sap.ui.riskmanagement; - using { managed } from '@sap/cds/common'; - entity Risks : managed { - key ID : UUID @(Core.Computed : true); - title : String(100); - prio : String(5); - descr : String; - miti : Association to Mitigations; - impact : Integer; - criticality : Integer; - } - entity Mitigations : managed { - key ID : UUID @(Core.Computed : true); - description : String; - owner : String; - timeline : String; - risks : Association to many Risks on risks.miti = $self; - } - ``` - - - It creates two entities in the namespace `sap.ui.riskmanagement`: `Risks` and `Mitigations`. Each of them has a key called `ID` and several other properties. A Risk has a Mitigation and, therefore, the property `miti` has an association to exactly one Mitigation. A Mitigation in turn can be used for many Risks, so it has a "to many" association. The key is automatically filled by the CAP server, which is exposed to the user of the service with the annotation `@(Core.Computed : true)`. - - Notice how the CAP server reacted to dropping the file. It now tells you that it has a model but there are no service definitions yet and, thus, it still can't serve anything. Next, you add a service definition. - -4. Copy the file `risk-service.cds` from `templates/create-cap-application/srv` to the `srv` folder of your app. 
- - The content of the file looks like this: - - - ```JavaScript - using { sap.ui.riskmanagement as my } from '../db/schema'; - @path: 'service/risk' - service RiskService { - entity Risks as projection on my.Risks; - annotate Risks with @odata.draft.enabled; - entity Mitigations as projection on my.Mitigations; - annotate Mitigations with @odata.draft.enabled; - } - ``` - - It creates a new service `RiskService` in the namespace `sap.ui.riskmanagement`. This service exposes two entities: `Risks` and `Mitigations`, which are exposing the entities of the database schema you've created in the step before. - - If you again look at the terminal, you see that the CAP server has noticed the new file and now tells us that it serves something under . - -5. In your browser open the link . - - !![Service](service.png) - - > You may have to stop the CAP server with Ctrl + C and restart it with the `cds watch` command. - -6. Choose the `$metadata` link. - - You see the OData metadata document of your new service. So, with just the two files for the database schema and the service exposure you added to your project, you have already got a running OData service! You might wonder why the service itself is called `risk` even though in the file it's called `RiskService`. This is a convention in CAP, the service suffix is subtracted from the name. - - If you now choose the `Risks` link, you only get this: - - ```JavaScript - { - @odata.context: "$metadata#Risks", - value: [ ] - } - ``` - - So, there's no data yet. This is because so far, your model doesn't contain any data. You add some now. - -7. Copy the folder `data` from `templates/create-cap-application/db` to the `db` folder of your app. If VS Code asks you whether to copy the folder, confirm. - - You have now added two comma-separated value (CSV) files that contain local data for both the `Risks` and the `Mitigations` entities. 
A quick look into the `sap.ui.riskmanagement-Risks.csv` (the name consists of your namespace and the name of your database entity from the `schema.cds` file) file shows data like this: - - ```csv - ID;createdAt;createdBy;title;prio;descr;miti_ID;impact - 20466922-7d57-4e76-b14c-e53fd97dcb11;2021-04-27T00:00:00.000Z;max.mustermann@muster.com;CFR non-compliance;Fred Fish;3;Recent restructuring might violate CFR code 71;20466921-7d57-4e76-b14c-e53fd97dcb11;10000 - ... - ``` - - The first line contains all the properties from your `Risks` entity. While the other ones are straight forward, consider the `miti_ID` property. In your entity, you only have a `miti` property, so where does it come from? `miti` is an association to `Mitigations`, as `Mitigations` could have several key properties, the association on the database needs to point to all of these, therefore the CAP server creates a property `_` for each key. - - As always, the CAP server has noticed the change. - - > You may have to stop the CAP server with Ctrl + C and restart it with the `cds watch` command. - -8. Revisit the `Risks` entity in your browser. You now see the data exposed. - - !![Service Data](servicedata.png) - - - > The Risks entity looks different? - - > When you revisit the **Risks** entity, you might see something like this instead of the nicely-formatted output above. - > !![No JSON Viewer](no-json-viewer.png) - > However, this doesn't mean you have made a mistake in the tutorial. Rather, this is the correct output without any formatting. If you'd like to see a formatted output in your browser, you can add a plugin to your browser. 
Here are a few exemplary JSON formatters for different browsers: - - > - [Chrome](https://chrome.google.com/webstore/detail/jsonvue/chklaanhfefbnpoihckbnefhakgolnmc) - > - [Microsoft Edge](https://microsoftedge.microsoft.com/addons/detail/jsonview/kmpfgkgaimakokfhgdahhiaaiidiphco) - > - [Safari](https://apps.apple.com/us/app/json-peep-for-safari/id1458969831?mt=12) - -And that's it. You now have a full blown OData service, which complies with the OData standard and supports the respective queries without having to code anything but the data model and exposing the service itself. - -> The service is completely exposed without any authentication or authorization check. You extend the service later in the tutorial [Implement Roles and Authorization Checks In CAP](btp-app-cap-roles) with such checks. - - - -[DONE] -The result of this tutorial can be found in the [`create-cap-application`](https://github.com/SAP-samples/cloud-cap-risk-management/tree/create-cap-application) branch. - - -[ACCORDION-END] ---- \ No newline at end of file diff --git a/tutorials/btp-app-create-cap-application/cdswatch.png b/tutorials/btp-app-create-cap-application/cdswatch.png deleted file mode 100644 index 41d0930702..0000000000 Binary files a/tutorials/btp-app-create-cap-application/cdswatch.png and /dev/null differ diff --git a/tutorials/btp-app-create-cap-application/codeandfinder.png b/tutorials/btp-app-create-cap-application/codeandfinder.png deleted file mode 100644 index be7479392a..0000000000 Binary files a/tutorials/btp-app-create-cap-application/codeandfinder.png and /dev/null differ diff --git a/tutorials/btp-app-create-cap-application/no-json-viewer.png b/tutorials/btp-app-create-cap-application/no-json-viewer.png deleted file mode 100644 index e79db8b04c..0000000000 Binary files a/tutorials/btp-app-create-cap-application/no-json-viewer.png and /dev/null differ diff --git a/tutorials/btp-app-create-cap-application/service.png 
b/tutorials/btp-app-create-cap-application/service.png deleted file mode 100644 index d03314f0fd..0000000000 Binary files a/tutorials/btp-app-create-cap-application/service.png and /dev/null differ diff --git a/tutorials/btp-app-create-cap-application/servicedata.png b/tutorials/btp-app-create-cap-application/servicedata.png deleted file mode 100644 index c885aeca7d..0000000000 Binary files a/tutorials/btp-app-create-cap-application/servicedata.png and /dev/null differ diff --git a/tutorials/btp-app-create-cap-application/vscode.png b/tutorials/btp-app-create-cap-application/vscode.png deleted file mode 100644 index 4dba09d27f..0000000000 Binary files a/tutorials/btp-app-create-cap-application/vscode.png and /dev/null differ diff --git a/tutorials/btp-app-launchpage/btp-app-launchpage.md b/tutorials/btp-app-launchpage/btp-app-launchpage.md deleted file mode 100644 index 67b35ea146..0000000000 --- a/tutorials/btp-app-launchpage/btp-app-launchpage.md +++ /dev/null @@ -1,100 +0,0 @@ ---- -author_name: Mahati Shankar -author_profile: https://github.com/smahati -title: Use a Local Launch Page -description: This tutorial shows you how to add a launch page for local testing. -keywords: cap -auto_validation: true -time: 15 -tags: [ tutorial>beginner, software-product-function>sap-cloud-application-programming-model, programming-tool>node-js, software-product>sap-business-technology-platform, software-product>sap-fiori-tools, software-product>sapui5] -primary_tag: software-product-function>sap-cloud-application-programming-model ---- - -## Prerequisites - - Before you start with this tutorial, you have two options: - - Follow the instructions in **Step 16: Start from an example branch** of [Prepare Your Development Environment for CAP](btp-app-prepare-dev-environment-cap) to checkout the [`create-ui-freestyle-sapui5`](https://github.com/SAP-samples/cloud-cap-risk-management/tree/create-ui-freestyle-sapui5) branch. 
- - Complete the previous tutorial [Create a UI Using Freestyle SAPUI5](btp-app-create-ui-freestyle-sapui5) with all its prerequisites. - - -## Details -### You will learn - - How to add a launch page for local testing - - ---- -> This tutorial will soon be phased out. -> -> For more tutorials about how to develop and deploy a full stack CAP application on SAP BTP, see: -> -> - [Develop a Full-Stack CAP Application Following SAP BTP Developer’s Guide](https://developers.sap.com/group.cap-application-full-stack.html) -> - [Deploy a Full-Stack CAP Application in SAP BTP, Cloud Foundry Runtime Following SAP BTP Developer’s Guide](https://developers.sap.com/group.deploy-full-stack-cap-application.html) -> - [Deploy a Full-Stack CAP Application in SAP BTP, Kyma Runtime Following SAP BTP Developer’s Guide](https://developers.sap.com/group.deploy-full-stack-cap-kyma-runtime.html) -> -> To continue learning how to implement business applications on SAP BTP, see: -> -> - [SAP BTP Developer’s Guide](https://help.sap.com/docs/btp/btp-developers-guide/what-is-btp-developers-guide?version=Cloud&locale=en-US) -> - [Related Hands-On Experience](https://help.sap.com/docs/btp/btp-developers-guide/related-hands-on-experience?version=Cloud&locale=en-US) -> - [Tutorials for ABAP Cloud](https://help.sap.com/docs/btp/btp-developers-guide/tutorials-for-abap-cloud?version=Cloud&locale=en-US) -> - [Tutorials for SAP Cloud Application Programming Model](https://help.sap.com/docs/btp/btp-developers-guide/tutorials-for-sap-cloud-application-programming-model?version=Cloud&locale=en-US) - -[ACCORDION-BEGIN [Step 1: ](Introduction)] -Our `risks` and `mitigations` applications have been generated by the SAP Fiori application generator and can be started independently. You can add a launch page for local testing. This page looks like a real SAP Build Work Zone, standard edition site, but is just a local copy of the otherwise centrally managed SAP Build Work Zone, standard edition site. 
It comes with a limited version of the functionality of the original SAP Build Work Zone, standard edition site. There's no option to add or remove apps via a configuration, user roles aren't at all taken into account, and end-user personalization is also not included. If you want these and other SAP Build Work Zone, standard edition functionalities included, you have to set them up for your project. Find out how to do this in [Prepare SAP Build Work Zone, Standard Edition Setup](btp-app-work-zone-setup). You stick with the launch page for this tutorial though. - -In the current implementation, the applications are launched without a launch page. You can open the `risks` application through the file `app/risks/webapp/index.html`. If you now create a second application using the SAP Fiori application generator within your project, it will be generated in the same way, again with its own `index.html` file. Instead, you want to use a launch page for all the applications. You can add a launch page by creating an `.html` file that uses the built-in UI5 shell in the `app` folder, which has both the `risks` and `mitigations` applications. - -[DONE] -[ACCORDION-END] ---- -[ACCORDION-BEGIN [Step 2: ](Implementation)] -1. Copy the file `launchpage.html` from `templates/launchpage/app` to the `app` folder of your app. - -2. With `cds watch` running, open the app in your browser at . - -3. You now see the `Mitigations` app next to the `Risks` app on the launch page. - - !![Launch Page](launchpage2apps.png) - -[DONE] -[ACCORDION-END] ---- -[ACCORDION-BEGIN [Step 3: ](Check the launchpage.html file)] -Let's have a look at the `launchpage.html` file and the configuration in there. In the first script you will see: - -```HTML[5,10,13,18] - -``` - -> Why name it `launchpage.html` instead of `index.html`? - -> You are using the name `launchpage.html` because `cds watch` by default looks for an `index.html` file in the `app` folder. 
If `cds watch` finds such a file, it replaces the default page that also contains the links to the services with the `index.html` in the folder. While this makes sense in many cases, for development purposes we stick to the index page of CDS and give a different name to our index file. - -There are two applications in the launch page with URLs that point to the respective apps. There are other properties configured here like the title and description. Similarly, another application can be added to the launch page by adding an entry here. - -[VALIDATE_1] -The result of this tutorial can be found in the [`launchpage`](https://github.com/SAP-samples/cloud-cap-risk-management/tree/launchpage) branch. - - -[ACCORDION-END] ---- \ No newline at end of file diff --git a/tutorials/btp-app-launchpage/launchpage2apps.png b/tutorials/btp-app-launchpage/launchpage2apps.png deleted file mode 100644 index fdb4fbd274..0000000000 Binary files a/tutorials/btp-app-launchpage/launchpage2apps.png and /dev/null differ diff --git a/tutorials/btp-terraform-get-started/btp-terraform-get-started.md b/tutorials/btp-terraform-get-started/btp-terraform-get-started.md index 73ae385b81..ca0dcb5e91 100644 --- a/tutorials/btp-terraform-get-started/btp-terraform-get-started.md +++ b/tutorials/btp-terraform-get-started/btp-terraform-get-started.md @@ -39,7 +39,7 @@ terraform { required_providers { btp = { source = "SAP/btp" - version = "~>1.15.0" + version = "~>1.21.3" } } } diff --git a/tutorials/cap-operator-01-prepare/cap-operator-01-prepare.md b/tutorials/cap-operator-01-prepare/cap-operator-01-prepare.md new file mode 100644 index 0000000000..6cf01b9233 --- /dev/null +++ b/tutorials/cap-operator-01-prepare/cap-operator-01-prepare.md @@ -0,0 +1,181 @@ +--- +title: Setting Up SAP BTP and Kyma Runtime for Deployment +description: Learn how to set up the SAP BTP, Kyma runtime for deploying the application. 
+parser: v2 +auto_validation: true +time: 30 +tags: [ tutorial>beginner, software-product>sap-cap-operator--kubernetes-environment, topic>cloud-operations, software-product-function>sap-cloud-application-programming-model, programming-tool>node-js, software-product>sap-business-technology-platform, software-product>sap-btp--kyma-runtime] +primary_tag: software-product>sap-cap-operator--kubernetes-environment +author_name: Anirudh Prasad +author_profile: https://github.com/anirudhprasad-sap +--- + +## You will learn + +- How to configure entitlements. +- How to enable the SAP BTP Kyma runtime in your subaccount in SAP BTP. +- How to create an SAP HANA Cloud service instance in the SAP BTP cockpit. + +## Prerequisites + +- You have an [enterprise global account](https://help.sap.com/docs/btp/sap-business-technology-platform/getting-global-account#loiod61c2819034b48e68145c45c36acba6e) in SAP BTP. To use services for free, you can sign up for an SAP BTPEA (SAP BTP Enterprise Agreement) or a Pay-As-You-Go for SAP BTP global account and use the free tier services only. See [Using Free Service Plans](https://help.sap.com/docs/btp/sap-business-technology-platform/using-free-service-plans?version=Cloud). +- You have a platform user. See [User and Member Management](https://help.sap.com/docs/btp/sap-business-technology-platform/user-and-member-management). +- You're an administrator of the global account in SAP BTP. +- You have a subaccount in SAP BTP to deploy the services and applications. + +> This tutorial follows the guidance provided in the [SAP BTP Developer's Guide](https://help.sap.com/docs/btp/btp-developers-guide/what-is-btp-developers-guide). 
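The prerequisites above can also be checked from a terminal before you start. The following is a minimal sketch using the SAP BTP command-line interface; it assumes the btp CLI is installed and uses the default CLI server URL, which may differ for your landscape:

```Shell/Bash
# Log in to your global account (you are prompted for the subdomain and your credentials)
btp login --url https://cli.btp.cloud.sap

# List the subaccounts in the global account and note the ID of the one you deploy to
btp list accounts/subaccount
```

If the login succeeds and your subaccount appears in the list, your platform user and administrator access are set up as required.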
+ +### Configure the entitlements + +To deploy the Incident Management sample application, you need the following entitlements: + +| Service | Plan | Quota required | +| ------------- | :-----------: | ----: | +| Kyma runtime | free (Environment) | 1 | +| SAP HANA Cloud | hana-free | 1 | +| SAP HANA Cloud | tools (Application) | 1 | +| SAP HANA Schemas & HDI Containers | hdi-shared | 1 | +| HTML5 Application Repository Service | app-host | 1 | +| HTML5 Application Repository Service | app-runtime | 1 | +| Destination Service | lite | 1 | +| SaaS Provisioning Service | application | 1 | +| Service Manager | container | 1 | +| Authorization and Trust Management Service | broker | 1 | + +> You can find more information about entitlements in [Configure Entitlements and Quotas](https://help.sap.com/docs/btp/sap-business-technology-platform/configure-entitlements-and-quotas-for-subaccounts). + +### Enable SAP BTP, Kyma runtime + +Let's enable your subaccount to use the SAP BTP, Kyma runtime. + +1. Navigate to your subaccount and choose **Enable Kyma** under the **Kyma Environment** tab. + + ![Enable Kyma](./img/enable-kyma.png) + +2. In the **Enable Kyma** popup, change the values for **Instance Name** and **Cluster Name** as needed and choose **Create**. + + ![Enable Kyma popup](./img/enable-kyma-popup.png) + + > Make sure that the instance name is CLI-friendly. CLI-friendly names make it easier to manage your instances with the SAP BTP command-line interface as well. + > + > A CLI-friendly name is a short string (up to 32 characters) that contains only alphanumeric characters (A-Z, a-z, 0-9), periods, underscores, and hyphens. It can't contain white spaces. + > + > When enabling the runtime, you notice that the instance name is generated automatically for you. You can use that name or replace it with the name of your choice. + + +### Subscribe to SAP HANA Cloud Administration Tools + +1. 
Navigate to your subaccount and choose **Services** → **Service Marketplace** on the left. + +2. Type **SAP HANA Cloud** in the search box and choose **Create**. + + ![Create an SAP HANA Cloud tools instance](./img/create-hana-tools.png) + +3. In the **New Instance or Subscription** popup, select **tools** from the dropdown in the **Plan** field and choose **Create**. + + ![SAP HANA Cloud tools instance creation popup](./img/create-hana-tools-popup.png) + +4. Choose **View Subscription** and wait until the status changes to **Subscribed**. + + ![View subscription](./img/view-subscription.png) + + ![Status subscribed](./img/hanatools-status-subscribed.png) + +5. In your SAP BTP subaccount, choose **Security** → **Role Collections** in the left-hand pane. + +6. Choose role collection **SAP HANA Cloud Administrator**. + +7. Choose **Edit**. + + ![Edit role](./img/hana-edit-role.png) + +8. In the **Users** section, enter your user and select the icon to add the user. + + ![Add user](./img/hana-add-user.png) + + > Keep the `Default Identity Provider` setting unless you have a custom identity provider configured. + +9. Choose **Save**. + + You've assigned the **SAP HANA Cloud Administrator** role collection to your user. + +> Log out and log back in to make sure your new role collection is considered. + +### Create an SAP HANA Cloud service instance + +SAP HANA Cloud is used as a persistence layer. + +Follow these steps to create an SAP HANA Cloud service instance in the SAP BTP cockpit: + +1. In your SAP BTP subaccount, navigate to **Services** → **Instances and Subscriptions** in the left-hand pane. + +2. Choose **SAP HANA Cloud**. You're redirected to SAP HANA Cloud multi-environment administration tools. Sign in with your SAP BTP cockpit username/email if necessary. + + ![SAP HANA Cloud Go to application](./img/hana-goto-app.png) + +3. In SAP HANA Cloud Central, choose **Create Instance**. + + ![SAP HANA Cloud create instance](./img/hana-create-instance.png) + +4. 
Choose *Configure manually* as **Instance Configuration** and *SAP HANA Database* as **Instance Type**. Then, choose **Next Step**. + + ![Create SAP HANA DB Step 1](./img/create-hana-db1.png) + +5. In the **Instance Name** field, enter *application-hana-instance*. + +6. In the **Administrator Password** and **Confirm Administrator Password** fields, enter a password for DBADMIN. Choose **Next Step**. + + ![Create SAP HANA DB Step 2](./img/create-hana-db2.png) + +7. At **SAP HANA Database: Size and Availability**, choose **Next Step**. + +8. In **SAP HANA Database: Connections**, select the **All IP addresses** radio button, and choose **Next Step**. + + ![Create SAP HANA DB Step 3](./img/create-hana-db3.png) + +9. At **SAP HANA Database: Advanced Settings**, choose **Next Step**. + +10. At **Data Lake: General**, choose **Review and Create**. + +11. Choose **Create Instance**. + +The creation of the database instance can take a few minutes to complete. + +> Your SAP HANA Cloud service instance automatically stops overnight, according to the time zone of the region where the server is located. This means you need to restart your instance every day before you start working with it. + +### Map your SAP HANA Cloud service instance to your Kyma cluster + +1. Go to SAP HANA Cloud Central. If you've closed it, open it again by following these steps: + + - In your SAP BTP subaccount, navigate to **Services** → **Instances and Subscriptions**. + - Choose **SAP HANA Cloud**. You're redirected to SAP HANA Cloud multi-environment administration tools. Sign in with your SAP BTP cockpit username/email if necessary. + +2. For the **application-hana-instance** instance, choose **Manage Configuration**. + + ![Manage instance configuration](./img/hana-config.png) + +3. Select the **Instance Mapping** tab and choose **Add Mapping**. + + ![Add instance mapping](./img/hana-add-mapping.png) + +4. Select **Kyma** from the dropdown under **Environment Type**. + +5. 
Under **Environment Instance ID**, paste the GUID of your Kyma cluster. Here's how to find it: + + - Open your Kyma dashboard. + - Choose **Namespaces** on the left and choose **kyma-system**. + - Navigate to **Configuration** → **Config Maps** and choose **sap-btp-operator-config**. + - You can see the GUID of your Kyma cluster in the **CLUSTER_ID** section. + + ![Add environment instance ID](./img/hana-kyma-cluster-id.png) + + > If no namespace is provided, the instance is mapped to all namespaces in the cluster. + +6. Choose **Review and Save**. In the popup, choose **Save Changes**. + + ![Save changes](./img/hana-save-mapping.png) + + You've mapped your SAP HANA Cloud service instance to your Kyma cluster. + + > For more information about adding a new Cloud Foundry or Kyma mapping, see [Map an SAP HANA Database to another Environment Context](https://help.sap.com/docs/HANA_CLOUD/9ae9104a46f74a6583ce5182e7fb20cb/1683421d02474567a54a81615e8e2c48.html). + diff --git a/tutorials/cap-operator-01-prepare/img/create-hana-db1.png b/tutorials/cap-operator-01-prepare/img/create-hana-db1.png new file mode 100644 index 0000000000..7fbd30d62d Binary files /dev/null and b/tutorials/cap-operator-01-prepare/img/create-hana-db1.png differ diff --git a/tutorials/cap-operator-01-prepare/img/create-hana-db2.png b/tutorials/cap-operator-01-prepare/img/create-hana-db2.png new file mode 100644 index 0000000000..210f5d1154 Binary files /dev/null and b/tutorials/cap-operator-01-prepare/img/create-hana-db2.png differ diff --git a/tutorials/cap-operator-01-prepare/img/create-hana-db3.png b/tutorials/cap-operator-01-prepare/img/create-hana-db3.png new file mode 100644 index 0000000000..cbe3303313 Binary files /dev/null and b/tutorials/cap-operator-01-prepare/img/create-hana-db3.png differ diff --git a/tutorials/cap-operator-01-prepare/img/create-hana-tools-popup.png b/tutorials/cap-operator-01-prepare/img/create-hana-tools-popup.png new file mode 100644 index 0000000000..d82dacc949 Binary files 
/dev/null and b/tutorials/cap-operator-01-prepare/img/create-hana-tools-popup.png differ diff --git a/tutorials/cap-operator-01-prepare/img/create-hana-tools.png b/tutorials/cap-operator-01-prepare/img/create-hana-tools.png new file mode 100644 index 0000000000..3dfd3fc0c4 Binary files /dev/null and b/tutorials/cap-operator-01-prepare/img/create-hana-tools.png differ diff --git a/tutorials/cap-operator-01-prepare/img/enable-kyma-popup.png b/tutorials/cap-operator-01-prepare/img/enable-kyma-popup.png new file mode 100644 index 0000000000..8430d0fd6e Binary files /dev/null and b/tutorials/cap-operator-01-prepare/img/enable-kyma-popup.png differ diff --git a/tutorials/cap-operator-01-prepare/img/enable-kyma.png b/tutorials/cap-operator-01-prepare/img/enable-kyma.png new file mode 100644 index 0000000000..9887e946f2 Binary files /dev/null and b/tutorials/cap-operator-01-prepare/img/enable-kyma.png differ diff --git a/tutorials/cap-operator-01-prepare/img/hana-add-mapping.png b/tutorials/cap-operator-01-prepare/img/hana-add-mapping.png new file mode 100644 index 0000000000..994d635c0f Binary files /dev/null and b/tutorials/cap-operator-01-prepare/img/hana-add-mapping.png differ diff --git a/tutorials/cap-operator-01-prepare/img/hana-add-user.png b/tutorials/cap-operator-01-prepare/img/hana-add-user.png new file mode 100644 index 0000000000..1370299eb8 Binary files /dev/null and b/tutorials/cap-operator-01-prepare/img/hana-add-user.png differ diff --git a/tutorials/cap-operator-01-prepare/img/hana-config.png b/tutorials/cap-operator-01-prepare/img/hana-config.png new file mode 100644 index 0000000000..0c503439ea Binary files /dev/null and b/tutorials/cap-operator-01-prepare/img/hana-config.png differ diff --git a/tutorials/cap-operator-01-prepare/img/hana-create-instance.png b/tutorials/cap-operator-01-prepare/img/hana-create-instance.png new file mode 100644 index 0000000000..8bd38404f0 Binary files /dev/null and 
b/tutorials/cap-operator-01-prepare/img/hana-create-instance.png differ diff --git a/tutorials/cap-operator-01-prepare/img/hana-edit-role.png b/tutorials/cap-operator-01-prepare/img/hana-edit-role.png new file mode 100644 index 0000000000..8de9d20d26 Binary files /dev/null and b/tutorials/cap-operator-01-prepare/img/hana-edit-role.png differ diff --git a/tutorials/cap-operator-01-prepare/img/hana-goto-app.png b/tutorials/cap-operator-01-prepare/img/hana-goto-app.png new file mode 100644 index 0000000000..012a31ce22 Binary files /dev/null and b/tutorials/cap-operator-01-prepare/img/hana-goto-app.png differ diff --git a/tutorials/cap-operator-01-prepare/img/hana-kyma-cluster-id.png b/tutorials/cap-operator-01-prepare/img/hana-kyma-cluster-id.png new file mode 100644 index 0000000000..57403aac5c Binary files /dev/null and b/tutorials/cap-operator-01-prepare/img/hana-kyma-cluster-id.png differ diff --git a/tutorials/cap-operator-01-prepare/img/hana-save-mapping.png b/tutorials/cap-operator-01-prepare/img/hana-save-mapping.png new file mode 100644 index 0000000000..abec199535 Binary files /dev/null and b/tutorials/cap-operator-01-prepare/img/hana-save-mapping.png differ diff --git a/tutorials/cap-operator-01-prepare/img/hanatools-status-subscribed.png b/tutorials/cap-operator-01-prepare/img/hanatools-status-subscribed.png new file mode 100644 index 0000000000..a1faa2b810 Binary files /dev/null and b/tutorials/cap-operator-01-prepare/img/hanatools-status-subscribed.png differ diff --git a/tutorials/cap-operator-01-prepare/img/view-subscription.png b/tutorials/cap-operator-01-prepare/img/view-subscription.png new file mode 100644 index 0000000000..fffbb70efd Binary files /dev/null and b/tutorials/cap-operator-01-prepare/img/view-subscription.png differ diff --git a/tutorials/cap-operator-02-tools/cap-operator-02-tools.md b/tutorials/cap-operator-02-tools/cap-operator-02-tools.md new file mode 100644 index 0000000000..c0fe3bd451 --- /dev/null +++ 
b/tutorials/cap-operator-02-tools/cap-operator-02-tools.md @@ -0,0 +1,213 @@ +--- +title: Install Tools for Deployment +description: Learn how to install the necessary tools for deploying the application +parser: v2 +auto_validation: true +time: 35 +tags: [ tutorial>beginner, software-product>sap-cap-operator--kubernetes-environment, topic>cloud-operations, software-product-function>sap-cloud-application-programming-model, programming-tool>node-js, software-product>sap-business-technology-platform, software-product>sap-btp--kyma-runtime] +primary_tag: software-product>sap-cap-operator--kubernetes-environment +author_name: Anirudh Prasad +author_profile: https://github.com/anirudhprasad-sap +--- + +## You will learn + +- How to install the tools required for deploying CAP applications in the SAP BTP, Kyma runtime. + - [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) + - [kubelogin](https://github.com/int128/kubelogin) + - [helm](https://helm.sh/docs/intro/install/) + - [pack](https://buildpacks.io/docs/tools/pack/#install) + - A container management app such as [Docker Desktop](https://www.docker.com/products/docker-desktop/) or [Rancher Desktop](https://rancherdesktop.io/). + +## Prerequisites + +- You've configured the respective entitlements, enabled the Kyma runtime in your subaccount, and created an SAP HANA Cloud service instance in the SAP BTP cockpit. Follow the steps in the [Setting Up SAP BTP and Kyma Runtime for Deployment](cap-operator-01-prepare) tutorial that is part of the [Application Lifecycle Management using CAP Operator](group.kyma-cap-operator-lifecycle) tutorial group. +- You have an [enterprise global account](https://help.sap.com/docs/btp/sap-business-technology-platform/getting-global-account#loiod61c2819034b48e68145c45c36acba6e) in SAP BTP. To use services for free, you can sign up for an SAP BTPEA (SAP BTP Enterprise Agreement) or a Pay-As-You-Go for SAP BTP global account and use the free tier services only. 
See [Using Free Service Plans](https://help.sap.com/docs/btp/sap-business-technology-platform/using-free-service-plans?version=Cloud). +- You have a platform user. See [User and Member Management](https://help.sap.com/docs/btp/sap-business-technology-platform/user-and-member-management). +- You're an administrator of the global account in SAP BTP. +- You have a subaccount in SAP BTP to deploy the services and applications. +- For Windows, you need Chocolatey. Chocolatey is a package manager that speeds up and eases installation of the tools in this tutorial. See how to install Chocolatey in [Setup/Install](https://docs.chocolatey.org/en-us/choco/setup). +- You've prepared a container registry and you've logged in to the container registry through your CLI. A container registry is a repository where you can push your Docker images. You can use any container registry offering as long as it can be reached from the public internet. If you don't have access to a container registry, you can use the [Docker Registry Community Module](https://kyma-project.io/external-content/docker-registry/docs/user/README.html) from Kyma. + +### Install kubectl + +[OPTION BEGIN [macOS]] +1. To install kubectl, run the following command: +```Shell/Bash +brew install kubectl +``` +2. Check if the installation is successful: +```Shell/Bash +kubectl version --client +``` +You see a version number. +[OPTION END] + +[OPTION BEGIN [Windows]] +You can install kubectl using Chocolatey. + +1. To install kubectl, run the following command: +```Shell/Bash +choco install kubernetes-cli +``` +2. 
Check if the installation is successful: +```Shell/Bash +kubectl version --client +``` +You see something like: +`Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"windows/amd64"}` +[OPTION END] + +[OPTION BEGIN [Linux]] +Follow the instructions for your preferred way of installing kubectl at [Install and Set Up kubectl on Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/). +[OPTION END] + +### Install kubelogin + +[OPTION BEGIN [macOS]] +To install kubelogin, run the following command: +```Shell/Bash +brew install int128/kubelogin/kubelogin +``` +See [Setup](https://github.com/int128/kubelogin#setup) in the kubelogin docs for more details. +[OPTION END] + +[OPTION BEGIN [Windows]] +You can install kubelogin using Chocolatey: + +```Shell/Bash +choco install kubelogin +``` + +See [Setup](https://github.com/int128/kubelogin#setup) in the kubelogin docs for more details. +[OPTION END] + +[OPTION BEGIN [Linux]] +To install kubelogin, run the following command: +```Shell/Bash +brew install int128/kubelogin/kubelogin +``` + +See [Setup](https://github.com/int128/kubelogin#setup) in the kubelogin docs for more details. +[OPTION END] + +### Log in to your Kyma cluster + +1. Choose `KubeconfigURL` under the **Kyma Environment** tab in your subaccount. + + ![Kubeconfig URL](./img/kubeconfigURL.png) + + A `kubeconfig.yaml` file is downloaded. + + ![Kubeconfig yaml](./img/kubeconfig_yaml.png) + +2. Copy the `kubeconfig.yaml` file to the `~/.kube/` directory and rename it to `config`. Replace or rename any existing file with the same name. + +There are two additional steps for Windows users only: + +3. Go to `C:\ProgramData\chocolatey\bin`. + +4. Rename `kubelogin.exe` to `kubectl-oidc_login.exe`. 
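Step 2 of the login procedure above can also be done from the command line. A minimal sketch for macOS/Linux, assuming your browser saved the file to `~/Downloads/kubeconfig.yaml` (adjust the path if it landed elsewhere):

```shell
# Hypothetical source path - change it to wherever kubeconfig.yaml was saved.
KUBECONFIG_SRC="$HOME/Downloads/kubeconfig.yaml"

# Create the default kubectl config directory if it doesn't exist yet.
mkdir -p "$HOME/.kube"

# Copy the downloaded file into place under the name kubectl expects.
if [ -f "$KUBECONFIG_SRC" ]; then
  cp "$KUBECONFIG_SRC" "$HOME/.kube/config"
fi
```

After copying, `kubectl config current-context` should print the context from the downloaded file.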
+ +### Install helm + +[OPTION BEGIN [macOS]] +There are several ways to install Helm. You can see the full list at [Installing Helm](https://helm.sh/docs/intro/install/). + +To install Helm, run the following command: +```Shell/Bash +brew install helm +``` +[OPTION END] + +[OPTION BEGIN [Windows]] +There are several ways to install Helm. You can see the full list at [Installing Helm](https://helm.sh/docs/intro/install/). + +You can install Helm using Chocolatey. + +1. To install Helm, run the following command: +```Shell/Bash +choco install kubernetes-helm +``` +2. Check if the installation is successful: +```Shell/Bash +helm version +``` +You see something like `version.BuildInfo{Version:"v3.8.0", GitCommit:"d14138609b01886f544b2025f5000351c9eb092e", GitTreeState:"clean", GoVersion:"go1.17.5"}`. +[OPTION END] + + +### Install Paketo (pack) + +[OPTION BEGIN [macOS]] +The pack CLI builds container images from collaboratively maintained buildpacks, making the resulting images easier to maintain and update. + +```Shell/Bash +brew install buildpacks/tap/pack +``` +[OPTION END] + +[OPTION BEGIN [Windows]] +The pack CLI builds container images from collaboratively maintained buildpacks, making the resulting images easier to maintain and update. + +You can install pack using Chocolatey with the following command: +```Shell/Bash +choco install pack +``` +As an alternative, you can install `pack` manually: + +1. Download `pack` for your platform from [GitHub](https://github.com/buildpacks/pack/releases). +2. Extract the `pack` binary. +3. Enter **Edit the System Environment Variables** in the Windows search box (Windows icon in the task bar). The **System Properties** dialog is opened. +4. Choose **Environment Variables...**. +5. Choose your `Path` environment variable under *User Variables for ``* and choose **Edit**. +6. Choose **Browse** and navigate to the folder where you extracted the `pack` binary. +7. 
Choose **OK** to add `pack` to your `Path` environment variable. +[OPTION END] + +[OPTION BEGIN [Linux]] +The pack CLI builds container images from collaboratively maintained buildpacks, making the resulting images easier to maintain and update. + +Follow the instructions to install the [pack CLI](https://buildpacks.io/docs/tools/pack/#install). +[OPTION END] + +### Install a container management app + +[OPTION BEGIN [Docker Desktop]] + +Kyma runs on containers. For this tutorial, you need an application that enables you to manage container images on your desktop (build, push, pull, and run) and a Docker-compatible command-line interface. We provide two examples: Docker Desktop and Rancher Desktop. You can choose one of these or any other app suitable for this purpose. + +* **macOS**: Download the installer from [Install Docker Desktop on Mac](https://docs.docker.com/desktop/mac/install/) and follow the instructions to install and set up Docker Desktop. + +* **Windows**: Download the installer from [Install Docker Desktop on Windows](https://docs.docker.com/desktop/windows/install/) and follow the instructions to install and set up Docker Desktop. + +[OPTION END] +[OPTION BEGIN [Rancher Desktop]] + +Kyma runs on containers. For this tutorial, you need an application that enables you to manage container images on your desktop (build, push, pull, and run) and a Docker-compatible command-line interface. We provide two examples: Docker Desktop and Rancher Desktop. You can choose one of these or any other app suitable for this purpose. + +* **macOS**: + + 1. Go to the [releases](https://github.com/rancher-sandbox/rancher-desktop/releases) page. + 2. Download the Rancher Desktop installer for macOS. + + > The macOS installer is called `Rancher.Desktop-.dmg`. Here's an example with the current latest version: `Rancher.Desktop-1.2.1.x86_64.dmg`. + + 3. Run the installer. When the installation is complete, drag the Rancher Desktop icon to the **Applications** folder.
+ + > You can find details about installation requirements and steps to install or uninstall in [macOS](https://docs.rancherdesktop.io/getting-started/installation#macos). + +* **Windows**: + + 1. Go to the [releases](https://github.com/rancher-sandbox/rancher-desktop/releases) page. + 2. Download the Rancher Desktop installer for Windows. + + > The Windows installer is called `Rancher.Desktop.Setup..exe`. Here's an example with the current latest version: `Rancher.Desktop.Setup.1.2.1.exe`. + + 3. Run the installer. When the installation is complete, choose **Finish**. + + > You can find details about installation requirements and steps to install or uninstall in [Windows](https://docs.rancherdesktop.io/getting-started/installation#windows). + +* **Linux**: There are several ways to install Rancher Desktop on Linux. You can find details about installation requirements and steps to install or uninstall in [Linux](https://docs.rancherdesktop.io/getting-started/installation#linux). + +[OPTION END] diff --git a/tutorials/cap-operator-02-tools/img/kubeconfigURL.png b/tutorials/cap-operator-02-tools/img/kubeconfigURL.png new file mode 100644 index 0000000000..744207e3de Binary files /dev/null and b/tutorials/cap-operator-02-tools/img/kubeconfigURL.png differ diff --git a/tutorials/cap-operator-02-tools/img/kubeconfig_yaml.png b/tutorials/cap-operator-02-tools/img/kubeconfig_yaml.png new file mode 100644 index 0000000000..ab1cd37758 Binary files /dev/null and b/tutorials/cap-operator-02-tools/img/kubeconfig_yaml.png differ diff --git a/tutorials/cap-operator-03-add-cap-operator/cap-operator-03-add-cap-operator.md b/tutorials/cap-operator-03-add-cap-operator/cap-operator-03-add-cap-operator.md new file mode 100644 index 0000000000..22cf44542a --- /dev/null +++ b/tutorials/cap-operator-03-add-cap-operator/cap-operator-03-add-cap-operator.md @@ -0,0 +1,47 @@ +--- +title: Enable CAP Operator Community Module in Kyma Cluster +description: Learn how to enable the CAP Operator 
community module in your Kyma cluster. +parser: v2 +auto_validation: true +time: 5 +tags: [ tutorial>beginner, software-product>sap-cap-operator--kubernetes-environment, topic>cloud-operations, software-product-function>sap-cloud-application-programming-model, programming-tool>node-js, software-product>sap-business-technology-platform, software-product>sap-btp--kyma-runtime] +primary_tag: software-product>sap-cap-operator--kubernetes-environment +author_name: Anirudh Prasad +author_profile: https://github.com/anirudhprasad-sap +--- + +## You will learn + +- How to enable the CAP Operator community module in your Kyma cluster. + +## Prerequisites + +- You've enabled the Kyma runtime in your subaccount. Follow the steps in the [Setting Up SAP BTP and Kyma Runtime for Deployment](cap-operator-01-prepare) tutorial that is part of the [Application Lifecycle Management using CAP Operator](group.kyma-cap-operator-lifecycle) tutorial group. +- You have an [enterprise global account](https://help.sap.com/docs/btp/sap-business-technology-platform/getting-global-account#loiod61c2819034b48e68145c45c36acba6e) in SAP BTP. To use services for free, you can sign up for an SAP BTPEA (SAP BTP Enterprise Agreement) or a Pay-As-You-Go for SAP BTP global account and use the free tier services only. See [Using Free Service Plans](https://help.sap.com/docs/btp/sap-business-technology-platform/using-free-service-plans?version=Cloud). +- You have a platform user. See [User and Member Management](https://help.sap.com/docs/btp/sap-business-technology-platform/user-and-member-management). +- You're an administrator of the global account in SAP BTP. +- You have a subaccount in SAP BTP to deploy the services and applications. + +### Enable CAP Operator community module + +1. Open your Kyma dashboard. + +2. Navigate to **Configuration** → **Modules** and choose **Add** within the **Community Modules** list. + + ![Add Community Module](./img/community-module-1.png) + +3. 
Choose **Add** in the **Source YAMLs** section to load the list of community modules. + + ![Load Community Modules](./img/community-module-2.png) + +4. In the popup, you can see the list of available community modules. Choose **Add**. + + ![Select CAP Operator Module](./img/community-module-3.png) + +5. Select the **CAP Operator** module and choose **Add**. + + ![Add CAP Operator Module](./img/community-module-4.png) + +6. Wait until the automatic installation is complete and the **Module State** changes to **Ready**. + + ![CAP Operator Installed](./img/community-module-5.png) diff --git a/tutorials/cap-operator-03-add-cap-operator/img/community-module-1.png b/tutorials/cap-operator-03-add-cap-operator/img/community-module-1.png new file mode 100644 index 0000000000..49cf5e5a4f Binary files /dev/null and b/tutorials/cap-operator-03-add-cap-operator/img/community-module-1.png differ diff --git a/tutorials/cap-operator-03-add-cap-operator/img/community-module-2.png b/tutorials/cap-operator-03-add-cap-operator/img/community-module-2.png new file mode 100644 index 0000000000..e1ea71d4ec Binary files /dev/null and b/tutorials/cap-operator-03-add-cap-operator/img/community-module-2.png differ diff --git a/tutorials/cap-operator-03-add-cap-operator/img/community-module-3.png b/tutorials/cap-operator-03-add-cap-operator/img/community-module-3.png new file mode 100644 index 0000000000..0450daeba6 Binary files /dev/null and b/tutorials/cap-operator-03-add-cap-operator/img/community-module-3.png differ diff --git a/tutorials/cap-operator-03-add-cap-operator/img/community-module-4.png b/tutorials/cap-operator-03-add-cap-operator/img/community-module-4.png new file mode 100644 index 0000000000..98ff3fc656 Binary files /dev/null and b/tutorials/cap-operator-03-add-cap-operator/img/community-module-4.png differ diff --git a/tutorials/cap-operator-03-add-cap-operator/img/community-module-5.png b/tutorials/cap-operator-03-add-cap-operator/img/community-module-5.png new file mode 
100644 index 0000000000..16589e651c Binary files /dev/null and b/tutorials/cap-operator-03-add-cap-operator/img/community-module-5.png differ diff --git a/tutorials/cap-operator-04-prepare-app/cap-operator-04-prepare-app.md b/tutorials/cap-operator-04-prepare-app/cap-operator-04-prepare-app.md new file mode 100644 index 0000000000..c5ed1d7548 --- /dev/null +++ b/tutorials/cap-operator-04-prepare-app/cap-operator-04-prepare-app.md @@ -0,0 +1,143 @@ +--- +title: Prepare Multi-Tenant Applications for Deployment with CAP Operator +description: This tutorial shows you how to prepare your multi-tenant application for deployment in SAP BTP, Kyma runtime using the CAP Operator. +parser: v2 +auto_validation: true +time: 20 +tags: [ tutorial>beginner, software-product>sap-cap-operator--kubernetes-environment, topic>cloud-operations, software-product-function>sap-cloud-application-programming-model, programming-tool>node-js, software-product>sap-business-technology-platform, software-product>sap-btp--kyma-runtime] +primary_tag: software-product>sap-cap-operator--kubernetes-environment +author_name: Anirudh Prasad +author_profile: https://github.com/anirudhprasad-sap +--- + +## You will learn + +- How to build container images for your multi-tenant application and push them to a container registry. + +## Prerequisites + +- You've configured the respective entitlements, enabled the Kyma runtime in your SAP BTP subaccount, and created an SAP HANA Cloud service instance in the SAP BTP cockpit. Follow the steps in the [Setting Up SAP BTP and Kyma Runtime for Deployment](cap-operator-01-prepare) tutorial that is part of the [Application Lifecycle Management using CAP Operator](group.kyma-cap-operator-lifecycle) tutorial group. +- You've installed all the required tools. Follow the steps in the [Install Tools for Deployment](cap-operator-02-tools) tutorial that is part of the [Application Lifecycle Management using CAP Operator](group.kyma-cap-operator-lifecycle) tutorial group. 
+ +### Download and set up the project locally + +Clone the application from the [Incident Management Application GitHub Repository](https://github.com/cap-js/incidents-app/tree/cap-operator-tutorials), for example, using: +```bash +git clone https://github.com/cap-js/incidents-app.git -b cap-operator-tutorials +``` + +This is a multi-tenant SAP Cloud Application Programming Model (CAP) application. It utilizes the [application router](https://www.npmjs.com/package/@sap/approuter) for routing, SAP Authorization and Trust Management service (XSUAA), and SAP HANA Cloud as the database. The front end is built with SAP Fiori and deployed to the HTML5 Application Repository. + +Open a command-line window in the folder that holds your **incidents-app** project and run the following command to open the project in Visual Studio (VS) Code: + +```bash +code . +``` + +### Build images + +> Make sure you're logged in to your container registry. If you don't have access to a container registry, you can use the [Docker Registry Community Module](https://kyma-project.io/external-content/docker-registry/docs/user/README.html) from Kyma. + +> If you're using a device with a non-x86 processor (for example, MacBook M1/M2), you need to instruct Docker to use x86 images by setting the **DOCKER_DEFAULT_PLATFORM** environment variable using the command `export DOCKER_DEFAULT_PLATFORM=linux/amd64`. Check the [environment variables](https://docs.docker.com/engine/reference/commandline/cli/#environment-variables) for more information. + +> Make sure to replace `` with the link to your container registry and keep in mind that `` is a string. + +> Looking for your Docker server URL?
+ +> The Docker server URL is the same as the path used for Docker login, so you can quickly check it by running the following command in your terminal: + +> ```bash +> cat ~/.docker/config.json +> ``` + +> If you're using Docker Hub as your container registry, replace the placeholder `` with your Docker Hub user ID. + +#### Build the CAP Node.js and the MTXS sidecar image + +1. In VS Code, choose **Terminal** → **New Terminal** and run the following command: + + ```bash + npm install + ``` + + This command installs the required dependencies and updates the **package-lock.json** file of your project. + +2. Create the productive CAP build for your application: + + ```bash + npx cds build --production + ``` + + The CAP build writes to the **gen/srv** folder. + +3. Build the CAP Node.js image: + + ```bash + pack build /incident-management-srv: \ + --path gen/srv \ + --builder paketobuildpacks/builder-jammy-base \ + --publish + ``` + + > The pack CLI builds the image that contains the build result in the **gen/srv** folder and the required npm packages by using the [Cloud Native Buildpack for Node.js](https://github.com/paketo-buildpacks/nodejs) provided by Paketo. + +4. Build the MTXS sidecar image: + + ```bash + pack build /incident-management-mtxs-sidecar: \ + --path gen/mtx/sidecar \ + --builder paketobuildpacks/builder-jammy-base \ + --publish + ``` + + > **IMPORTANT:** The **project.toml** file in the **gen/mtx/sidecar** folder is copied automatically from the **mtx/sidecar** folder during the build. This file exposes the node process inside the container so that CAP Operator can trigger tenant operations using the MTXS CLIs. + +#### Build the application router image + +1. In the VS Code terminal, navigate to the **app/router** folder and run the following command: + + ```bash + npm install + ``` + +2. In the VS Code terminal, navigate back to the root folder of your project: + + ```bash + cd ../.. + ``` + +3. 
Build the application router image: + + ```bash + pack build /incident-management-approuter: \ + --path app/router \ + --buildpack paketo-buildpacks/nodejs \ + --builder paketobuildpacks/builder-jammy-base \ + --env BP_NODE_RUN_SCRIPTS="" \ + --publish + ``` + +#### Build the HTML5 deployer image + +1. In the VS Code terminal, navigate to the **ui-resources** folder and run the following command: + + ```bash + npm install && npm run package + ``` + + This command builds and copies the archive **nsincidents.zip** inside the **ui-resources/resources** folder. + +2. In the VS Code terminal, navigate back to the root folder of your project: + + ```bash + cd .. + ``` + +3. Build the UI deployer image: + + ```bash + pack build /incident-management-html5-deployer: \ + --path ui-resources \ + --builder paketobuildpacks/builder-jammy-base \ + --publish + ``` diff --git a/tutorials/cap-operator-05-deploy-app/cap-operator-05-deploy-app.md b/tutorials/cap-operator-05-deploy-app/cap-operator-05-deploy-app.md new file mode 100644 index 0000000000..ec758efd1e --- /dev/null +++ b/tutorials/cap-operator-05-deploy-app/cap-operator-05-deploy-app.md @@ -0,0 +1,178 @@ +--- +title: Deploy Multi-Tenant Applications using CAP Operator +description: This tutorial shows you how to deploy your multi-tenant application in SAP BTP, Kyma runtime using the CAP Operator. +parser: v2 +auto_validation: true +time: 20 +tags: [ tutorial>beginner, software-product>sap-cap-operator--kubernetes-environment, topic>cloud-operations, software-product-function>sap-cloud-application-programming-model, programming-tool>node-js, software-product>sap-business-technology-platform, software-product>sap-btp--kyma-runtime] +primary_tag: software-product>sap-cap-operator--kubernetes-environment +author_name: Anirudh Prasad +author_profile: https://github.com/anirudhprasad-sap +--- + +## You will learn + +- How to deploy your multi-tenant application in SAP BTP, Kyma runtime using the CAP Operator. 
+ +## Prerequisites + +- You've configured the respective entitlements, enabled the Kyma runtime in your SAP BTP subaccount, and created an SAP HANA Cloud service instance in the SAP BTP cockpit. Follow the steps in the [Setting Up SAP BTP and Kyma Runtime for Deployment](cap-operator-01-prepare) tutorial that is part of the [Application Lifecycle Management using CAP Operator](group.kyma-cap-operator-lifecycle) tutorial group. +- You've installed all the required tools. Follow the steps in the [Install Tools for Deployment](cap-operator-02-tools) tutorial that is part of the [Application Lifecycle Management using CAP Operator](group.kyma-cap-operator-lifecycle) tutorial group. +- You've enabled the CAP Operator community module in your Kyma cluster. Follow the steps in the [Enable CAP Operator Community Module](cap-operator-03-add-cap-operator) tutorial that is part of the [Application Lifecycle Management using CAP Operator](group.kyma-cap-operator-lifecycle) tutorial group. +- You've prepared your multi-tenant application for deployment. Follow the steps in the [Prepare Multi-Tenant Applications for Deployment with CAP Operator](cap-operator-04-prepare-app) tutorial that is part of the [Application Lifecycle Management using CAP Operator](group.kyma-cap-operator-lifecycle) tutorial group. + +### Add CAP Operator Helm chart + +CAP Operator provides a plugin to generate a Helm chart for your CAP application. + +1. Add the [CAP Operator plugin](https://www.npmjs.com/package/@cap-js/cap-operator-plugin/) to your project by running the following command in your project root folder: + + ```bash + npm add @cap-js/cap-operator-plugin -D + ``` + +2. Run the following command to generate a Helm chart for your CAP application: + + ```bash + npx cds add cap-operator --with-templates + ``` + + As a result, you see a newly created **chart** folder in your project. 
The **chart** folder holds the Helm configuration, including the **values.yaml** file where you add your container images. + +3. Add your container image settings to your **chart/values.yaml** file: + + ```yaml[7,13,19,25] + ... + workloads: + appRouter: + ... + deploymentDefinition: + type: Router + image: /incident-management-approuter: + ... + server: + ... + deploymentDefinition: + type: CAP + image: /incident-management-srv: + ... + contentDeploy: + ... + jobDefinition: + type: Content + image: /incident-management-html5-deployer: + ... + tenantJob: + ... + jobDefinition: + type: TenantOperation + image: /incident-management-mtxs-sidecar: + ... + ... + ``` + +4. Add the `EXIT_PROCESS_AFTER_UPLOAD` environment variable to the content deploy job in your **chart/values.yaml** file to ensure that the HTML5 deployer exits after the upload is complete: + + ```yaml[8,9] + ... + contentDeploy: + ... + jobDefinition: + type: Content + image: /incident-management-html5-deployer: + env: + - name: EXIT_PROCESS_AFTER_UPLOAD + value: "true" + ... + ``` + +### Deploy CAP Operator Helm chart + +1. Run the following command to create a dedicated space for your application and enable the Istio service mesh to handle communication: + + ```bash + kubectl create namespace incident-management + kubectl label namespace incident-management istio-injection=enabled + ``` + +2. If you are using a private container registry, create a secret in the **incident-management** namespace so the cluster can pull your images. If your images are public, you can skip this step. + + ```bash + kubectl -n incident-management create secret generic regcred --from-file=.dockerconfigjson=$HOME/.docker/config.json --type=kubernetes.io/dockerconfigjson + ``` + +3. 
Run the following command to get the cluster shoot domain: + + ```bash + kubectl get gateway -n kyma-system kyma-gateway -o jsonpath='{.spec.servers[0].hosts[0]}' | sed 's/^\*\.//' + ``` + + The result looks like this: + ```bash + .kyma.ondemand.com + ``` + + > `` is a placeholder for a string of characters that’s unique for your cluster. + +4. Create a new file named **trial-env.yaml** in the project root folder with the following content and replace the placeholder values with your specific information: + + ```yaml + appName: + capOperatorSubdomain: cap-op + clusterDomain: # Value obtained in the previous step + globalAccountId: + providerSubaccountId: + providerSubdomain: + tenantId: + imagePullSecret: regcred # Only include if you performed Step 2 + ``` + + > **`appName`**: Choose a name that is unique within your subaccount region. This prevents naming collisions with other deployments. + + > **`capOperatorSubdomain`**: In Kyma clusters, the default CAP Operator subdomain is `cap-op`. + + > **`clusterDomain`**: Use the domain string you retrieved in the previous step. + + > **`globalAccountId`**: You can find this in the URL of your browser when you are viewing your subaccount in the SAP BTP Cockpit. + + > ![Save changes](./img/global-account-id.png) + + > **`providerSubaccountId`**, **`providerSubdomain`**, and **`tenantId`**: In the SAP BTP cockpit, go to your subaccount **Overview** and check the **General** section. You can find all three values there. + + > ![Save changes](./img/provider-subdomain-tenant-id.png) + + > **`imagePullSecret`**: Only include this line if you are using a private registry. If you followed Step 2, set this to `regcred`. + +5. To prepare your deployment, run the following command in your project root to generate the **runtime-values.yaml** file inside your **chart** folder: + + ```bash + npx cap-op-plugin generate-runtime-values --with-input-yaml trial-env.yaml + ``` + + > This command maps your environment settings to the application's configuration. It creates a **runtime-values.yaml** file that Helm uses during deployment to override the default settings in the **values.yaml** file with your specific cluster and account details. + +6. Make sure that your SAP HANA Cloud instance is running. Free tier HANA instances are stopped overnight. + + > Your SAP HANA Cloud service instance automatically stops overnight, according to the time zone of the region where the server is located. This means you need to restart your instance every day before you start working with it. You can restart your instance using the SAP BTP cockpit. + +7. Deploy using the Helm command: + + ```bash + helm upgrade --install incident-management --namespace incident-management ./chart \ + --set-file serviceInstances.xsuaa.jsonParameters=xs-security.json -f ./chart/runtime-values.yaml + ``` + + This command installs the Helm chart from the chart folder with the release name **incident-management** in the **incident-management** namespace. + + > With the **helm upgrade --install** command, you can install a new chart as well as upgrade an existing chart. + +8. To check the status of your deployment: + + 1. Open your Kyma dashboard. + 2. Choose **Namespaces** on the left and choose **incident-management**. + 3. Navigate to **CAP Operator** → **CAP Application**. + 4. You can see the status of your deployed application here. When the status is **Consistent**, your application is successfully deployed. + + ![Save changes](./img/kyma-dashboard-cap-application-status.png) + + > In the example deployment, the unique **appName** is **incident-tutorial**.
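The `sed` expression used when reading the cluster shoot domain above strips the leading `*.` from the wildcard host that the gateway returns. You can check the expression locally on a sample value; the shoot id `abc123` below is invented:

```shell
# Sample wildcard host as returned by the kyma-gateway (shoot id is made up).
HOST='*.abc123.kyma.ondemand.com'

# Same sed call as in the tutorial: delete a leading "*." if present.
echo "$HOST" | sed 's/^\*\.//'
# → abc123.kyma.ondemand.com
```

The `^` anchor ensures that only a leading `*.` is removed, so dots elsewhere in the domain stay untouched.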
diff --git a/tutorials/cap-operator-05-deploy-app/img/global-account-id.png b/tutorials/cap-operator-05-deploy-app/img/global-account-id.png new file mode 100644 index 0000000000..4069768c21 Binary files /dev/null and b/tutorials/cap-operator-05-deploy-app/img/global-account-id.png differ diff --git a/tutorials/cap-operator-05-deploy-app/img/hana-instance-id.png b/tutorials/cap-operator-05-deploy-app/img/hana-instance-id.png new file mode 100644 index 0000000000..63598b0108 Binary files /dev/null and b/tutorials/cap-operator-05-deploy-app/img/hana-instance-id.png differ diff --git a/tutorials/cap-operator-05-deploy-app/img/kyma-dashboard-cap-application-status.png b/tutorials/cap-operator-05-deploy-app/img/kyma-dashboard-cap-application-status.png new file mode 100644 index 0000000000..02ab42d4f7 Binary files /dev/null and b/tutorials/cap-operator-05-deploy-app/img/kyma-dashboard-cap-application-status.png differ diff --git a/tutorials/cap-operator-05-deploy-app/img/provider-subdomain-tenant-id.png b/tutorials/cap-operator-05-deploy-app/img/provider-subdomain-tenant-id.png new file mode 100644 index 0000000000..84cf59b320 Binary files /dev/null and b/tutorials/cap-operator-05-deploy-app/img/provider-subdomain-tenant-id.png differ diff --git a/tutorials/cap-operator-06-subscribe/cap-operator-06-subscribe.md b/tutorials/cap-operator-06-subscribe/cap-operator-06-subscribe.md new file mode 100644 index 0000000000..15f5eac2c6 --- /dev/null +++ b/tutorials/cap-operator-06-subscribe/cap-operator-06-subscribe.md @@ -0,0 +1,48 @@ +--- +title: Subscribe to the Multi-Tenant Application from Consumer Subaccount +description: This tutorial shows you how to subscribe to a multi-tenant application from a consumer subaccount. 
+parser: v2 +auto_validation: true +time: 10 +tags: [ tutorial>beginner, software-product>sap-cap-operator--kubernetes-environment, topic>cloud-operations, software-product-function>sap-cloud-application-programming-model, programming-tool>node-js, software-product>sap-business-technology-platform, software-product>sap-btp--kyma-runtime] +primary_tag: software-product>sap-cap-operator--kubernetes-environment +author_name: Anirudh Prasad +author_profile: https://github.com/anirudhprasad-sap +--- + +## You will learn + +- How to subscribe to a multi-tenant application from a consumer subaccount using the CAP Operator. + +## Prerequisites + +- You've deployed the application. Follow the steps in the [Deploy your Application using CAP Operator](cap-operator-05-deploy-app) tutorial that is part of the [Application Lifecycle Management using CAP Operator](group.kyma-cap-operator-lifecycle) tutorial group. +- You're an administrator of the global account in SAP BTP. + +### Create new subaccount + +Create a new subaccount in the same global account where you have deployed the multi-tenant CAP application in the previous tutorial, for example, `Customer`. You can find the steps to create a new subaccount in the [Create Subaccounts](https://help.sap.com/docs/btp/sap-business-technology-platform/create-subaccount) documentation. + +### Subscribe to the multi-tenant application + +1. Navigate to your subaccount and choose **Services** → **Service Marketplace** on the left. + +2. Type your application name in the search box and choose **Create**. + + ![application-search](./img/application-search.png) + + > When you enter your application name, ensure it matches the one you used in the previous tutorial. For instance, if you used **incident-tutorial** earlier, use that same name here. + +3. In the **New Instance or Subscription** popup, choose **Create**. 
+ + ![subscribe-application](./img/subscribe-application.png) + + > This subscribes your subaccount to the multi-tenant application deployed in the provider subaccount. + +4. In the **Creation in Progress** popup, choose **View Subscription**. + +5. Wait until the subscription status changes to **Subscribed** and choose **Go to Application**. + + ![subscription-succeeded](./img/subscription-succeeded.png) + + > You've successfully subscribed to the multi-tenant application from your consumer subaccount. diff --git a/tutorials/cap-operator-06-subscribe/img/application-search.png b/tutorials/cap-operator-06-subscribe/img/application-search.png new file mode 100644 index 0000000000..8f2efde2a8 Binary files /dev/null and b/tutorials/cap-operator-06-subscribe/img/application-search.png differ diff --git a/tutorials/cap-operator-06-subscribe/img/subscribe-application.png b/tutorials/cap-operator-06-subscribe/img/subscribe-application.png new file mode 100644 index 0000000000..51b36ae15a Binary files /dev/null and b/tutorials/cap-operator-06-subscribe/img/subscribe-application.png differ diff --git a/tutorials/cap-operator-06-subscribe/img/subscription-succeeded.png b/tutorials/cap-operator-06-subscribe/img/subscription-succeeded.png new file mode 100644 index 0000000000..a85a393b11 Binary files /dev/null and b/tutorials/cap-operator-06-subscribe/img/subscription-succeeded.png differ diff --git a/tutorials/cp-cf-create-destination/cp-cf-create-destination.md b/tutorials/cp-cf-create-destination/cp-cf-create-destination.md index 6cfda7a368..79d3f9f527 100644 --- a/tutorials/cp-cf-create-destination/cp-cf-create-destination.md +++ b/tutorials/cp-cf-create-destination/cp-cf-create-destination.md @@ -36,8 +36,7 @@ The Northwind OData services are available in several versions. 
Most tutorials c ### Enter your SAP BTP account - For (free) Trial Accounts: -- For Free Tier and Enterprise Accounts on **feature set A**: -- For Free Tier and Enterprise Accounts on **feature set B**: +- For Free Tier and Enterprise Accounts: ### Access your subaccount diff --git a/tutorials/cp-cf-understand-application-lifecycle/cp-cf-understand-application-lifecycle.md b/tutorials/cp-cf-understand-application-lifecycle/cp-cf-understand-application-lifecycle.md index f5507fd4c6..cd89128e77 100644 --- a/tutorials/cp-cf-understand-application-lifecycle/cp-cf-understand-application-lifecycle.md +++ b/tutorials/cp-cf-understand-application-lifecycle/cp-cf-understand-application-lifecycle.md @@ -86,7 +86,7 @@ Every application stops running at some point, whether from a normal shutdown, a - _Crash_ - If an application instance crashes, Cloud Foundry is designed to [automatically try to restart it](https://docs.cloudfoundry.org/devguide/deploy-apps/app-lifecycle.html#crash-events). Application crashes are usually due to an issue with the application itself, though in rare cases a crash could be caused by a problem with some underlying Cloud Foundry component(s). In such a cases, it is important to look at the logs, events and metrics to determine the cause of the crash, and to discern whether or not Cloud Foundry can recover on its own, or if human intervention is required. -- _Shutdown_ - Certain [actions](https://docs.cloudfoundry.org/devguide/deploy-apps/app-lifecycle.html#shutdown) will cause Cloud Foundry to shutdown an application instance. On shutdown, Cloud Foundry sends the app process a SIGTERM, giving the application 10 seconds to stop on its own before being forcibly terminated via SIGKILL. This 10 second limit is a system-wide setting, and is also the default configuration of the SAP BTP. An application developer should keep this in mind when creating their app so that it can handle shutdowns gracefully. 
+- _Shutdown_ - Certain [actions](https://docs.cloudfoundry.org/devguide/deploy-apps/app-lifecycle.html#shutdown) will cause Cloud Foundry to shut down an application instance. On shutdown, Cloud Foundry sends the app process a SIGTERM, giving the application 10 seconds by default to stop on its own before being forcibly terminated via SIGKILL. This limit is a system-wide setting; on SAP BTP it is configured to 60 seconds. An application developer should keep this in mind when creating their app so that it can handle shutdowns gracefully. - _Evacuation_ - In some cases, the virtual machines (VMs) that run the containers hosting an app instance may need to be restarted. For example, this may happen if underlying VM image or Cloud Foundry are updated. Through a process called [evacuation](https://docs.cloudfoundry.org/devguide/deploy-apps/app-lifecycle.html#evacuation), Cloud Foundry automatically relocates an app instance to another VM before restarting the VM that previously ran that app instance. When this occurs the app instance is recreated, and once the new app instance reports itself as healthy, the old instance is shut down. This may cause a brief period of time where duplicates of an app can be seen. If an app only has one instance, it may become unavailable during this process (if the new app instance doesn't report as healthy within the default 10 minute evacuation timeout).
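The SIGTERM-then-SIGKILL sequence described above can be handled in application code so that in-flight work finishes within the grace period. A minimal, framework-agnostic Python sketch (the handler body is illustrative — a real app would stop accepting requests, flush buffers, and close connections):

```python
import os
import signal
import time

shutting_down = False

def handle_sigterm(signum, frame):
    # Mark the app as draining; real cleanup logic would go here.
    global shutting_down
    shutting_down = True

# Register the handler so SIGTERM triggers a graceful stop
# instead of the default immediate termination.
signal.signal(signal.SIGTERM, handle_sigterm)

# Simulate the platform sending SIGTERM to this process.
os.kill(os.getpid(), signal.SIGTERM)
time.sleep(0.1)  # give the interpreter a chance to run the handler

print("graceful shutdown flag:", shutting_down)
```

If cleanup cannot finish within the platform's grace period, the process is killed anyway, so shutdown work should be kept short.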
diff --git a/tutorials/data-lake-file-containers-hdlfscli/data-lake-file-containers-hdlfscli.md b/tutorials/data-lake-file-containers-hdlfscli/data-lake-file-containers-hdlfscli.md index 96cb1ccdbb..a88f904cdf 100644 --- a/tutorials/data-lake-file-containers-hdlfscli/data-lake-file-containers-hdlfscli.md +++ b/tutorials/data-lake-file-containers-hdlfscli/data-lake-file-containers-hdlfscli.md @@ -118,7 +118,6 @@ Some HDLFSCLI help documentation should appear if it is successfully installed a [OPTION END] - ### Generate Certificates To connect the HDLFSCLI to an SAP HANA, data lake Files container, a certificate will need to be generated to make a secure connection. Below are the steps required to create a self-signed certificate to get started using the HDLFSCLI. You will require an installation of OpenSSL. Use your preferred Linux package installer to install OpenSSL if it is not already installed. If you're using a Windows machine, then Windows Subsystem for Linux (WSL) will have OpenSSL installed. Alternatively, OpenSSL can be installed for Windows from [here](https://slproweb.com/products/Win32OpenSSL.html). @@ -127,6 +126,18 @@ Then, follow these steps to creating your self-signed certificate. Make sure the certificate fields are not all exactly the same between the Certificate Authority (CA) and client certificates. Otherwise, it is assumed to be a self-signed cert and the cert validation below will fail. +Create a folder. + +```Shell (Microsoft Windows) +mkdir %HOMEPATH%\certs +cd %HOMEPATH%\certs +``` + +```Shell (Linux or Mac) +mkdir -p $HOME/certs +cd $HOME/certs +``` + Create a private key for the CA (2048 bits). ```Shell @@ -141,7 +152,7 @@ openssl req -x509 -new -key ca.key -days 200 -out ca.crt Create a signing request for the client certificate. -Provide at least a common name and fill other fields as desired. Also, leave the email-Id field blank. +Provide at least a common name and fill other fields as desired. Leave the email-Id field blank. 
The Common Name must be different from the one used for the CA public certificate. ```Shell openssl req -new -nodes -newkey rsa:2048 -out client.csr -keyout client.key diff --git a/tutorials/data-lake-file-containers-restapi-node/data-lake-file-containers-restapi-node.md b/tutorials/data-lake-file-containers-restapi-node/data-lake-file-containers-restapi-node.md index d5638c0163..622202b64d 100644 --- a/tutorials/data-lake-file-containers-restapi-node/data-lake-file-containers-restapi-node.md +++ b/tutorials/data-lake-file-containers-restapi-node/data-lake-file-containers-restapi-node.md @@ -384,7 +384,7 @@ Upon attempting to access the files via the command line, an error message indic ### Explore and Experiment! -These endpoints along with the others documented in the [REST API reference](https://help.sap.com/doc/9d084a41830f46d6904fd4c23cd4bbfa/QRC_4_2021/en-US/html/index.html) can be used by any application to manipulate or manage the files in the HANA Data Lake File Container. Other endpoints not demonstrated here include APPEND, GETRESTORSNAPSHOT, WHOAMI, RENAME, and RESTORESNAPSHOT. +These endpoints along with the others documented in the [REST API reference](https://help.sap.com/doc/9d084a41830f46d6904fd4c23cd4bbfa/latest/en-US/index.html) can be used by any application to manipulate or manage the files in the HANA Data Lake File Container. Other endpoints not demonstrated here include APPEND, GETRESTORSNAPSHOT, WHOAMI, RENAME, and RESTORESNAPSHOT. To replicate these requests in other languages or HTTP tools, copy the request headers, FILES REST API + request URL, and body contents. 
diff --git a/tutorials/data-lake-file-containers-restapi/data-lake-file-containers-restapi.md b/tutorials/data-lake-file-containers-restapi/data-lake-file-containers-restapi.md index 8a5725812a..ad6d127bc8 100644 --- a/tutorials/data-lake-file-containers-restapi/data-lake-file-containers-restapi.md +++ b/tutorials/data-lake-file-containers-restapi/data-lake-file-containers-restapi.md @@ -21,7 +21,7 @@ primary_tag: software-product-function>sap-hana-cloud--data-lake - Users without access to the HDLFSCLI can use the REST API to perform File Store operations. ## Intro -SAP HANA data lake file containers are accessible via a REST API. The official REST API reference can be found [here](https://help.sap.com/doc/9d084a41830f46d6904fd4c23cd4bbfa/latest/en-US/html/index.html). However, below are some python demonstrations using some of the common endpoints. Although this tutorial doesn't cover other endpoint testing tools, these endpoints and the contents of the request body can be used in any other http interface such as PostMan or CURL. +SAP HANA data lake file containers are accessible via a REST API. The official REST API reference can be found [here](https://help.sap.com/doc/9d084a41830f46d6904fd4c23cd4bbfa/latest/en-US/index.html). However, below are some python demonstrations using some of the common endpoints. Although this tutorial doesn't cover other endpoint testing tools, these endpoints and the contents of the request body can be used in any other http interface such as PostMan or CURL. --- @@ -270,7 +270,7 @@ Upon attempting to access the files via the command line, an error message indic ### Explore and Experiment! -These endpoints along with the others documented in the [REST API reference](https://help.sap.com/doc/9d084a41830f46d6904fd4c23cd4bbfa/latest/en-US/html/index.html) can be used by any application to manipulate or manage the files in the HANA Data Lake File Container. 
Other endpoints not demonstrated here include DELETE, APPEND, GETRESTORSNAPSHOT, WHOAMI, RENAME, and RESTORESNAPSHOT. +These endpoints along with the others documented in the [REST API reference](https://help.sap.com/doc/9d084a41830f46d6904fd4c23cd4bbfa/latest/en-US/index.html) can be used by any application to manipulate or manage the files in the HANA Data Lake File Container. Other endpoints not demonstrated here include DELETE, APPEND, GETRESTORSNAPSHOT, WHOAMI, RENAME, and RESTORESNAPSHOT. To replicate these requests in other languages or HTTP tools, copy the request headers, FILES REST API + request URL, and body contents. diff --git a/tutorials/data-lake-schedule-data-movement/data-lake-schedule-data-movement.md b/tutorials/data-lake-schedule-data-movement/data-lake-schedule-data-movement.md index 2903d108d3..9cbb48267d 100644 --- a/tutorials/data-lake-schedule-data-movement/data-lake-schedule-data-movement.md +++ b/tutorials/data-lake-schedule-data-movement/data-lake-schedule-data-movement.md @@ -182,7 +182,7 @@ CREATE TABLE HDLRE_CUSTOMER ); ``` -Here I will break down [creating an event](https://help.sap.com/viewer/19b3964099384f178ad08f2d348232a9/2021_4_QRC/en-US/a617091784f210158db2e43f0733ae5d.html?q=CREATE%20EVENT) in HDLRE. In the following SQL you create an event called `PullCustomerDataFromHANA`. Immediately after you create a schedule `SchedulePullCustomerDataFromHANA`. The schedule is scheduled to start at 12:00am and repeat the event every Sunday. Below the "HANDLER" you define the SQL script to be executed. The script creates a local temporary table (this table will be lost once the connection is dropped) and then inserts the data from that the temporary table into your `HDLRE_CUSTOMER` table which persists inside of your HDLRE instance. So, every Sunday the event is copying the data from your HANA table to your HDLRE table. 
+Here I will break down [creating an event](https://help.sap.com/docs/hana-cloud-data-lake/sql-reference-for-data-lake-relational-engine/create-event-statement-for-data-lake-relational-engine) in HDLRE. In the following SQL you create an event called `PullCustomerDataFromHANA`. Immediately after, you create a schedule `SchedulePullCustomerDataFromHANA`, which starts at 12:00am and repeats the event every Sunday. Below the "HANDLER" you define the SQL script to be executed. The script creates a local temporary table (this table will be lost once the connection is dropped) and then inserts the data from the temporary table into your `HDLRE_CUSTOMER` table, which persists inside your HDLRE instance. So, every Sunday the event copies the data from your HANA table to your HDLRE table. ```SQL CREATE EVENT PullCustomerDataFromHANA diff --git a/tutorials/data-lake-text-search/data-lake-text-search.md b/tutorials/data-lake-text-search/data-lake-text-search.md index bc2d0f05dd..aa8df6d61d 100644 --- a/tutorials/data-lake-text-search/data-lake-text-search.md +++ b/tutorials/data-lake-text-search/data-lake-text-search.md @@ -236,7 +236,7 @@ The above creates the index on both columns using the configuration that you def ### Query the Table Using the CONTAINS function -To make use of the text index that you created above, you can use the CONTAINS function. The CONTAINS function when used on a text index allows you to search your text columns for key words, partial key words (using * operator), and words that are near each other. Learn more about what's possible with text searching in the (SAP Help documentation)[https://help.sap.com/viewer/a8937bea84f21015a80bc776cf758d50/2021_4_QRC/en-US/a5f9128284f21015be99d1a8e8925c94.html?q=CONTAINS%20text%20search]. Try a simple query on your index. +To make use of the text index that you created above, you can use the CONTAINS function.
The CONTAINS function when used on a text index allows you to search your text columns for key words, partial key words (using the * operator), and words that are near each other. Learn more about what's possible with text searching in the [SAP Help documentation](https://help.sap.com/docs/hana-cloud-data-lake/administration-guide-for-data-lake-relational-engine/contains-conditions-for-full-text-searches). Try a simple query on your index. ```SQL SELECT Actor1Geo_FullName, Actor2Geo_FullName FROM EVENT CONTAINS(EVENT.Actor1Geo_FullName, 'United States'); @@ -304,7 +304,7 @@ Notice, all the entries with the closest match to the exact search term have the ### Knowledge check -You now know how to create a text index on a `text` or `varchar` column, configure that text index, and use the CONTAINS function to perform a text search on your data. Be sure to check out the SAP Help documentation for more information on [text indexes](https://help.sap.com/viewer/a8937bea84f21015a80bc776cf758d50/2021_4_QRC/en-US/a5efed9884f210158fd8bd686e7be818.html) and performing [text search](https://help.sap.com/viewer/a8937bea84f21015a80bc776cf758d50/2021_4_QRC/en-US/a5f8abf084f2101580319c6ef971d09c.html). +You now know how to create a text index on a `text` or `varchar` column, configure that text index, and use the CONTAINS function to perform a text search on your data. Be sure to check out the SAP Help documentation for more information on [text indexes](https://help.sap.com/docs/hana-cloud-data-lake/administration-guide-for-data-lake-relational-engine-sap-hana-db-managed/text-indexes-in-data-lake-relational-engine-sap-hana-db-managed).
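The text-search tutorial above mentions partial-keyword and proximity matching with CONTAINS but only demonstrates an exact term. Building on the tutorial's `EVENT` table, hedged sketches might look like the following (exact operator syntax and support vary by data lake Relational Engine version, so verify against the CONTAINS reference before relying on them):

```SQL
-- Prefix search with the * operator:
-- matches 'United States', 'United Kingdom', ...
SELECT Actor1Geo_FullName FROM EVENT
    CONTAINS(EVENT.Actor1Geo_FullName, 'Unit*');

-- Proximity search (illustrative): find rows where the two
-- terms occur near each other in the indexed column.
SELECT Actor1Geo_FullName FROM EVENT
    CONTAINS(EVENT.Actor1Geo_FullName, 'United NEAR States');
```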
diff --git a/tutorials/data-to-value-conn-concur-part01/data-to-value-conn-concur-part01.md b/tutorials/data-to-value-conn-concur-part01/data-to-value-conn-concur-part01.md index 39287fdcff..7049795615 100644 --- a/tutorials/data-to-value-conn-concur-part01/data-to-value-conn-concur-part01.md +++ b/tutorials/data-to-value-conn-concur-part01/data-to-value-conn-concur-part01.md @@ -4,7 +4,7 @@ author_profile: https://github.com/alphageek7443 keywords: tutorial auto_validation: true time: 20 -tags: [ software-product>sap-concur, tutorial>advanced ] +tags: [ software-product>sap-concur, tutorial>advanced, software-product>sap-datasphere ] primary_tag: software-product>sap-datasphere parser: v2 --- @@ -89,14 +89,12 @@ Visual Studio Code (or just VSCode) is a free source code editor developed and m 3. Copy the below code to the file ``` - ### @hostname = @your-client_id = @your-client_secret = @username = @password = - ``` Replace the placeholder as per your application configuration **your-client_id**, **your-client_secret**, **your-company_uuid** and **your-company_request_token** you have generated in previous step. @@ -111,7 +109,6 @@ The first time you request for a **refreshToken** This is used to get a new acce 1. Copy below code just below the above code to Obtain a **refresh token** and store it to a variable. ``` - ### Obtain a Refresh token # @name refeshTokenCall POST {{hostname}}/oauth2/v0/token HTTP/1.1 @@ -128,7 +125,6 @@ client_id={{your-client_id}} ### @refeshToken = {{refeshTokenCall.response.body.refresh_token}} - ``` ### Obtain an Access Token @@ -138,7 +134,6 @@ The Oauth2 service generates access tokens for authenticated applications. The t 1. Copy below code just below the above code to Obtain a **access Token** and store it to a variable. 
``` - ### Obtain an Access Token # @name accessTokenCall POST {{hostname}}/oauth2/v0/token HTTP/1.1 @@ -153,19 +148,16 @@ client_id={{your-client_id}} ### @accessToken = {{accessTokenCall.response.body.access_token}} - ``` ### Calling an API with the Access Token The base URI for all subsequent calls. Armed with the accessToken you can start making calls to an SAP Concur API. Here’s an example How you can retrieve Expense Report by utilizing the appropriate base URI with the access token. ``` - ### Get Expense Report # @name getExpenseReport GET {{hostname}}/api/v3.0/expense/reports?limit=100&user=ALL HTTP/1.1 Authorization: Bearer {{accessToken}} - ``` ### Test it out diff --git a/tutorials/deploy-nodejs-application-kyma/deploy-nodejs-application-kyma.md b/tutorials/deploy-nodejs-application-kyma/deploy-nodejs-application-kyma.md index 0e55ba99a4..9b7e65953d 100644 --- a/tutorials/deploy-nodejs-application-kyma/deploy-nodejs-application-kyma.md +++ b/tutorials/deploy-nodejs-application-kyma/deploy-nodejs-application-kyma.md @@ -176,6 +176,8 @@ spec: value: "8080" - name: TMPDIR value: /tmp + - name: BP_NODE_OPTIMIZE_MEMORY + value: "false" image: /multitenant-kyma-backend:v1 # replace with your Docker Hub account name name: kyma-multitenant-node-multitenancy ports: diff --git a/tutorials/fiori-tools-cap-create-application/application-info-page.png b/tutorials/fiori-tools-cap-create-application/application-info-page.png index 60b99853bf..dcc07a1b76 100644 Binary files a/tutorials/fiori-tools-cap-create-application/application-info-page.png and b/tutorials/fiori-tools-cap-create-application/application-info-page.png differ diff --git a/tutorials/fiori-tools-cap-create-application/choose-tile-list-report-new.png b/tutorials/fiori-tools-cap-create-application/choose-tile-list-report-new.png deleted file mode 100644 index 6532f858cb..0000000000 Binary files a/tutorials/fiori-tools-cap-create-application/choose-tile-list-report-new.png and /dev/null differ diff --git 
a/tutorials/fiori-tools-cap-create-application/choose-tile-list-report.png b/tutorials/fiori-tools-cap-create-application/choose-tile-list-report.png index d30dcf3aac..37d8de209b 100644 Binary files a/tutorials/fiori-tools-cap-create-application/choose-tile-list-report.png and b/tutorials/fiori-tools-cap-create-application/choose-tile-list-report.png differ diff --git a/tutorials/fiori-tools-cap-create-application/fiori-tools-cap-create-application.md b/tutorials/fiori-tools-cap-create-application/fiori-tools-cap-create-application.md index 7f9178bf6d..ff6c9d6e8a 100644 --- a/tutorials/fiori-tools-cap-create-application/fiori-tools-cap-create-application.md +++ b/tutorials/fiori-tools-cap-create-application/fiori-tools-cap-create-application.md @@ -3,7 +3,7 @@ title: Create an SAP Fiori Elements Application description: Create an SAP Fiori elements application of type list report object page based on the SAP Cloud Application Programming Model. auto_validation: true time: 15 -tags: [ software-product>sap-fiori, software-product>sap-fiori-tools, tutorial>beginner, software-product>sap-fiori, software-product>sap-business-application-studio, software-product-function>sap-cloud-application-programming-model, software-product>sap-business-technology-platform] +tags: [ software-product-function>sap-fiori, software-product-function>sap-fiori-tools, tutorial>beginner, software-product-function>sap-fiori, software-product-function>sap-business-application-studio, software-product-function>sap-cloud-application-programming-model, software-product-function>sap-business-technology-platform ] primary_tag: software-product>sap-fiori contributors: [ Hitesh Parmar>https://github.com/hitesh-parmar, Joachim Fiess>https://github.com/jo-fiess ] --- @@ -21,7 +21,7 @@ contributors: [ Hitesh Parmar>https://github.com/hitesh-parmar, Joachim Fiess>ht 2. Select the tile **List Report Page** and click **Next**. 
- ![Choose tile "List Report Object Page"](choose-tile-list-report.png) + !![Choose tile "List Report Object Page"](choose-tile-list-report.png) 3. Now you connect the application template with your OData service. The OData service you use for this example was already prepared during the previous tutorial: [Prepare Your Development Environment](fiori-tools-cap-prepare-dev-env) @@ -33,17 +33,27 @@ contributors: [ Hitesh Parmar>https://github.com/hitesh-parmar, Joachim Fiess>ht When finished, click **Next**. - ![Select service related parameters](enter-service-parameters.png) + !![Select service related parameters](enter-service-parameters.png) 4. For your application you need to choose the main entity set from the OData service. Objects of this type will be displayed in the list report. In your application, start with **Incidents**. As your application will not have a sub-object page, you do not need a navigation entity. Leave **Yes** selected for the prompt **Automatically add table columns to the list page and a section to the object page if none already exists?**. + Leave the selected **Table Type**. + When finished, click **Next**. -5. Maintain specific attributes of the application project as follows (Minimum SAPUI5 version is updated automatically): +5. Maintain specific attributes of the application project as follows: + + Module Name: `incidents` + + Application Title: `Incidents Management` - ![Provide project attributes](provide-project-attributes.png) + Application Namespace: `sap.fe.demo` + + Keep the rest as default. + + !![Provide project attributes](provide-project-attributes.png) >Be sure to choose exactly the **Module name** and the **Application namespace** as shown above, because these are referenced in the sample code. 
@@ -53,11 +63,11 @@ contributors: [ Hitesh Parmar>https://github.com/hitesh-parmar, Joachim Fiess>ht After the project is generated, an Application Information page is shown giving you an overview of project details and tasks that you may perform on this project. It is recommended that you keep this page open as it will be used in other steps. You can open it any time using selecting menu **View->Command Palette...** and select **Fiori: Open Application Info** - ![Application Information page](application-info-page.png) + !![Application Information page](application-info-page.png) You will also see a new folder `incidents` inside the `app` folder. - ![Review the generated artifacts](review-generated-artifacts.png) + !![Review the generated artifacts](review-generated-artifacts.png) [DONE] [ACCORDION-END] @@ -67,17 +77,19 @@ contributors: [ Hitesh Parmar>https://github.com/hitesh-parmar, Joachim Fiess>ht Your SAP Fiori elements application needs a server to run. This server is provided by the command line client and development toolkit for the SAP Cloud Application Programming Model. The setup for using the server was done in the previous tutorial [Prepare Your Development Environment](fiori-tools-cap-prepare-dev-env). 1. Select the tile **Preview Application** from **Application Information** page. - ![Select watch script](preview-application.png) + !![Select watch script](preview-application.png) 2. When the quick pick is shown, select **watch-incidents** script - ![Select watch script](select-watch-script.png) + !![Select watch script](select-watch-script.png) + + Your app should now start in a new window. If not, you can click on the link from the terminal or click on **Open in New Tab** as shown below. A dialog window may pop up and you can choose the option as follows. Click **Open in New Tab**. 
- ![Click button Open in New Tab on popup](click-open-in-new-tab.png) + !![Click button Open in New Tab on popup](click-open-in-new-tab.png) >Please check for a browser popup blocker in case the popup windows are not visible. @@ -85,7 +97,7 @@ Your SAP Fiori elements application needs a server to run. This server is provid Press **Go**. The list report table will then show the data from the sample service. - ![List Report with items](list-report-go.png) + !![List Report with items](list-report-go.png) Filter fields, actions, and table columns are defined by the annotations in the Core Data Service (CDS) files. These files are part of the OData service definition. diff --git a/tutorials/fiori-tools-cap-create-application/preview-application.png b/tutorials/fiori-tools-cap-create-application/preview-application.png index 4b444b68a7..e9c6328bf9 100644 Binary files a/tutorials/fiori-tools-cap-create-application/preview-application.png and b/tutorials/fiori-tools-cap-create-application/preview-application.png differ diff --git a/tutorials/fiori-tools-cap-create-application/provide-project-attributes.png b/tutorials/fiori-tools-cap-create-application/provide-project-attributes.png index b2a1908bf7..c8e1b1edb5 100644 Binary files a/tutorials/fiori-tools-cap-create-application/provide-project-attributes.png and b/tutorials/fiori-tools-cap-create-application/provide-project-attributes.png differ diff --git a/tutorials/fiori-tools-cap-create-application/review-generated-artifacts.png b/tutorials/fiori-tools-cap-create-application/review-generated-artifacts.png index df21533050..111d72672d 100644 Binary files a/tutorials/fiori-tools-cap-create-application/review-generated-artifacts.png and b/tutorials/fiori-tools-cap-create-application/review-generated-artifacts.png differ diff --git a/tutorials/fiori-tools-cap-create-application/select-watch-script.png b/tutorials/fiori-tools-cap-create-application/select-watch-script.png index d730b41def..d04462fc24 100644 Binary files 
a/tutorials/fiori-tools-cap-create-application/select-watch-script.png and b/tutorials/fiori-tools-cap-create-application/select-watch-script.png differ diff --git a/tutorials/fiori-tools-cap-modify-list-report/fiori-tools-cap-modify-list-report.md b/tutorials/fiori-tools-cap-modify-list-report/fiori-tools-cap-modify-list-report.md index 527aa603ff..bfe805424e 100644 --- a/tutorials/fiori-tools-cap-modify-list-report/fiori-tools-cap-modify-list-report.md +++ b/tutorials/fiori-tools-cap-modify-list-report/fiori-tools-cap-modify-list-report.md @@ -3,7 +3,7 @@ author_name: Dimitri Herber author_profile: https://github.com/fakirdi auto_validation: true time: 15 -tags: [products>sap-fiori-elements, products>sap-fiori-tools, tutorial>beginner, products>sap-fiori, products>sap-business-application-studio, software-product-function>sap-cloud-application-programming-model, products>sap-business-technology-platform] +tags: [software-product-function>sap-fiori-elements, products>sap-fiori-tools, tutorial>beginner, products>sap-fiori, products>sap-business-application-studio, software-product-function>sap-cloud-application-programming-model, products>sap-business-technology-platform] primary_tag: products>sap-fiori parser: v2 contributors: [ Hitesh Parmar>https://github.com/hitesh-parmar, Joachim Fiess>https://github.com/jo-fiess ] @@ -23,34 +23,37 @@ contributors: [ Hitesh Parmar>https://github.com/hitesh-parmar, Joachim Fiess>ht - How to configure the list report to load data automatically - In SAP Fiori elements applications, UI annotations are used to refine the user interface. All annotations are documented in the [OData 4.0 Vocabularies](https://sap.github.io/odata-vocabularies/vocabularies/UI.html). With SAP Fiori Tools - Application modeler, you don't have to be an annotation expert, as the necessary UI annotations are automatically generated when you add or modify the UI elements for your application. 
You can easily navigate to the annotations behind the UI elements to review and/or manually update them in the code editor. - ### Add filter field to the filter bar In this step, you will learn how to add filters to the List Report page of your application using the Page Editor and see the auto generated annotation code in the local annotation file. -1. From the Application Information page, click on the **ListReport** page - +1. From the Application Information page, click on the **ListReport** page. + + ![Open List Report Page](t3-open-list-report-page-app-info.png) The Page Editor view opens up listing all the major page elements in the application outline structure. 2. Press the **+** icon in the **Filter Fields** sub-node of the **Filter Bar** node on the outline. It becomes visible, once you hover over the sub-node. + ![Add Filter Fields Icon](t3-add-filter-fields.png) 3. When prompted, choose **category_code** as **Filter Field** and press **Add**. - ![Add Filter Fields Icon](t3-add-filter-fields-dialog.png) + + ![Add Filter Fields Icon](t3-add-filter-fields-dialog.png) - The new filter field is added to the filter bar. The application preview (if started) is automatically refreshed to display it. + The new filter field is added to the filter bar. The application preview (if started) is automatically refreshed to display it. ![New Filter Field](t3-annotation-selection-field-category.PNG) + > This is enabled by copying the `UI.SelectionFields` annotation to the local annotation file and updating it with `category_code` property in the background. You can press ![Navigate to source code](t3-navigate-source-code.png) (Navigate to source code) icon displayed in the **Filter Fields** sub-node on hover to see the updated annotation in the local annotation file. 
+    ```CDS
+    SelectionFields : [
+        incidentStatus_code,
+        category_code,
+    ],
+    ```
@@ -80,11 +83,13 @@ In this step, you will learn how to enhance the value help defined in the projec

    - In the Properties pane displayed to the right of the outline, find the **Display Type** property. Currently it shows **Value Help (base layer)**, indicating that value help is defined in a layer lower than this app. To enhance the value help settings, choose **Value Help** instead.

+      ![Filter Properties](t3-initial-load-filter-properties.png)

-    - In the pop-up dialog, make sure **Display as Dropdown** is switched on, press **Add Column** under **Results List**, choose **desc** in the **Property** column and press **Apply**.
+    - In the pop-up dialog, make sure **Display as Dropdown** is switched on, press **Add Column** under **Results List**, choose **desc** in the **Property** column and press **Apply**.

-      ![Value Help Dialog](t3-value-help-dialog-updated.png)
+
+      ![Value Help Dialog](t3-value-help-dialog-updated.png)

3. Application preview is refreshed and displays the **Category** filter as a drop-down list and shows the value help with the description column.

@@ -93,17 +98,19 @@ In this step, you will learn how to enhance the value help defined in the projec

### Configure the application to load data automatically

-In this step, you will learn how to configure the application to load data automatically when started without the need of pressing the **Go** button.
-
-1. In the Page Editor, select the **table** node on the outline to show the properties of the table.
-    ![Table Properties](t3-initial-load-table-properties.PNG)
-2. In the Properties pane, locate the **Initial Load** property and set it to **Enabled**.
+In this step, you will learn how to configure the application to load data automatically when started, without pressing the **Go** button.
+1. In the Page Editor, select the **table** node on the outline to show the properties of the table.
+2.
In the Properties pane, locate the **Initial Load** property and set it to **Enabled**.
+    ![Initial Load](t3-initial-load-table-properties-initial-load-true.PNG)
+    >To easily find a specific property in the Properties pane, you can use the Search Properties field in the top right corner.
-1. After the application is refreshed, the table data will be loaded automatically.
+3. After the application is refreshed, the table data will be loaded automatically.
+
+    ![Add Column](t3-initial-load-table-preview.png)
@@ -113,18 +120,21 @@ In this step, you will learn how to enhance the list report table with additiona

1. In the Page Editor, press the **+** icon in the **Column** sub-node of the **Table** node on the outline and choose **Add Basic Columns**.

+    ![Add Column](t3-add-column.png)

2. When prompted, choose **title** in the **Columns** field.

+    ![Add Column Dialog](t3-add-title-column.png)

    > You can filter the list of suggestions by typing a few characters of the option you want to choose.

3. Press **Add**.

-    The application preview refreshes and displays the column added to the table.
+    The application preview refreshes and displays the column added to the table.
+    ![Annotation Cursor](t3-annotation-line-item-LR.PNG)

    > If your preview window is not wide enough, the last column is not visible unless its Importance property is set to High or Medium.
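As with the filter field earlier, adding the **title** column only generates annotation code: a new `UI.DataField` entry in the `UI.LineItem` collection of the local annotation file. A rough sketch of the resulting shape follows; the `service.Incidents` target name is an assumption based on this tutorial's model, and the exact generated code may differ:

```CDS
annotate service.Incidents with @(
    UI.LineItem : [
        // ...columns that were already present...
        {
            $Type : 'UI.DataField',
            Value : title,
        },
    ]
);
```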
diff --git a/tutorials/fiori-tools-cap-modify-list-report/t3-add-column.png b/tutorials/fiori-tools-cap-modify-list-report/t3-add-column.png index d3cc46f14c..c9b9c2199b 100644 Binary files a/tutorials/fiori-tools-cap-modify-list-report/t3-add-column.png and b/tutorials/fiori-tools-cap-modify-list-report/t3-add-column.png differ diff --git a/tutorials/fiori-tools-cap-modify-list-report/t3-add-filter-fields-dialog.png b/tutorials/fiori-tools-cap-modify-list-report/t3-add-filter-fields-dialog.png index 71a65bdd8a..e74b0b516b 100644 Binary files a/tutorials/fiori-tools-cap-modify-list-report/t3-add-filter-fields-dialog.png and b/tutorials/fiori-tools-cap-modify-list-report/t3-add-filter-fields-dialog.png differ diff --git a/tutorials/fiori-tools-cap-modify-list-report/t3-add-filter-fields.png b/tutorials/fiori-tools-cap-modify-list-report/t3-add-filter-fields.png index 44b0bb1ee1..bc979248cf 100644 Binary files a/tutorials/fiori-tools-cap-modify-list-report/t3-add-filter-fields.png and b/tutorials/fiori-tools-cap-modify-list-report/t3-add-filter-fields.png differ diff --git a/tutorials/fiori-tools-cap-modify-list-report/t3-add-title-column.png b/tutorials/fiori-tools-cap-modify-list-report/t3-add-title-column.png index a061f9a903..1128f12d3e 100644 Binary files a/tutorials/fiori-tools-cap-modify-list-report/t3-add-title-column.png and b/tutorials/fiori-tools-cap-modify-list-report/t3-add-title-column.png differ diff --git a/tutorials/fiori-tools-cap-modify-list-report/t3-annotation-selection-field-category.PNG b/tutorials/fiori-tools-cap-modify-list-report/t3-annotation-selection-field-category.PNG index a040195692..808fe230e8 100644 Binary files a/tutorials/fiori-tools-cap-modify-list-report/t3-annotation-selection-field-category.PNG and b/tutorials/fiori-tools-cap-modify-list-report/t3-annotation-selection-field-category.PNG differ diff --git a/tutorials/fiori-tools-cap-modify-list-report/t3-initial-load-filter-properties.png 
b/tutorials/fiori-tools-cap-modify-list-report/t3-initial-load-filter-properties.png index 11a2328be5..9d281f6277 100644 Binary files a/tutorials/fiori-tools-cap-modify-list-report/t3-initial-load-filter-properties.png and b/tutorials/fiori-tools-cap-modify-list-report/t3-initial-load-filter-properties.png differ diff --git a/tutorials/fiori-tools-cap-modify-list-report/t3-initial-load-table-properties-initial-load-true.PNG b/tutorials/fiori-tools-cap-modify-list-report/t3-initial-load-table-properties-initial-load-true.PNG index 82f2d351d0..64b4336716 100644 Binary files a/tutorials/fiori-tools-cap-modify-list-report/t3-initial-load-table-properties-initial-load-true.PNG and b/tutorials/fiori-tools-cap-modify-list-report/t3-initial-load-table-properties-initial-load-true.PNG differ diff --git a/tutorials/fiori-tools-cap-modify-list-report/t3-initial-load-table-properties.PNG b/tutorials/fiori-tools-cap-modify-list-report/t3-initial-load-table-properties.PNG deleted file mode 100644 index 79de0a8f05..0000000000 Binary files a/tutorials/fiori-tools-cap-modify-list-report/t3-initial-load-table-properties.PNG and /dev/null differ diff --git a/tutorials/fiori-tools-cap-modify-list-report/t3-open-list-report-page-app-info.png b/tutorials/fiori-tools-cap-modify-list-report/t3-open-list-report-page-app-info.png index d4d8043f9c..2a8004a3a7 100644 Binary files a/tutorials/fiori-tools-cap-modify-list-report/t3-open-list-report-page-app-info.png and b/tutorials/fiori-tools-cap-modify-list-report/t3-open-list-report-page-app-info.png differ diff --git a/tutorials/fiori-tools-cap-modify-list-report/t3-value-help-dialog-updated.png b/tutorials/fiori-tools-cap-modify-list-report/t3-value-help-dialog-updated.png index da59ce04bc..5b57450df8 100644 Binary files a/tutorials/fiori-tools-cap-modify-list-report/t3-value-help-dialog-updated.png and b/tutorials/fiori-tools-cap-modify-list-report/t3-value-help-dialog-updated.png differ diff --git 
a/tutorials/fiori-tools-cap-modify-list-report/t3-value-help-icon2.PNG b/tutorials/fiori-tools-cap-modify-list-report/t3-value-help-icon2.PNG index 2563a97ba6..c43cd1f327 100644
Binary files a/tutorials/fiori-tools-cap-modify-list-report/t3-value-help-icon2.PNG and b/tutorials/fiori-tools-cap-modify-list-report/t3-value-help-icon2.PNG differ
diff --git a/tutorials/fiori-tools-cap-modify-object-page/fiori-tools-cap-modify-object-page.md b/tutorials/fiori-tools-cap-modify-object-page/fiori-tools-cap-modify-object-page.md
index cff9e4e342..8c5a3b2b88 100644
--- a/tutorials/fiori-tools-cap-modify-object-page/fiori-tools-cap-modify-object-page.md
+++ b/tutorials/fiori-tools-cap-modify-object-page/fiori-tools-cap-modify-object-page.md
@@ -3,7 +3,7 @@
author_name: Dimitri Herber
author_profile: https://github.com/fakirdi
auto_validation: true
time: 15
-tags: [ software-product>sap-fiori-elements, software-product>sap-fiori-tools, tutorial>beginner, software-product>sap-fiori, software-product>sap-business-application-studio, software-product-function>sap-cloud-application-programming-model, software-product>sap-business-technology-platform]
+tags: [ software-product-function>sap-fiori-elements, software-product>sap-fiori-tools, tutorial>beginner, software-product>sap-fiori, software-product>sap-business-application-studio, software-product-function>sap-cloud-application-programming-model, software-product>sap-business-technology-platform]
primary_tag: software-product>sap-fiori
parser: v2
contributors: [ Hitesh Parmar>https://github.com/hitesh-parmar, Joachim Fiess>https://github.com/jo-fiess ]
@@ -28,25 +28,29 @@ contributors: [ Hitesh Parmar>https://github.com/hitesh-parmar, Joachim Fiess>ht

1. Open the object page of your application by clicking one of the incidents within the list report table. You'll see the field group **Incident Details** in the **Incident Overview** section.

+    ![Annotation Cursor](t4-annotation-section-field-1.PNG)

2.
Open the Page Editor for the object page of your app: from the Application Information page, click on **ObjectPage** within Pages.

-    ![Open Object Page](t4-open-object-page-app-info.png)
+
+    ![Open Object Page](t4-open-object-page-app-info.png)

    The Page Editor view opens up listing all the major page elements in the application outline structure.

-3. Expand the nodes **Sections->Incident Overview->Subsections->Incident Details->Form**, press the **+** icon in the **Fields** sub-node and choose **Add Basic Fields**.
+3. Expand the nodes **Sections - Incident Overview - Subsections - Incident Details - Form**, press the **+** icon in the **Fields** sub-node and choose **Add Basic Fields**.

+    ![Add Fields Icon](t4-add-section-fields.png)
-
4. When prompted, choose **description** in **Fields** and press **Add**.

+    ![Add Fields Dialog](t4-add-fields-dialog.png)

5. Application preview automatically refreshes (if started) to show the additional field **Incident Description** within the field group **Incident Details**.

+    ![Annotation Cursor](t4-annotation-section-field-2.PNG)

@@ -54,25 +58,31 @@ contributors: [ Hitesh Parmar>https://github.com/hitesh-parmar, Joachim Fiess>ht

In the previous step, you added a new field to an existing field group. Now you will add a new field group to the section **Incident Overview**.

-1. In Page editor, expand the nodes **Sections->Incident Overview** if not already expanded, press the **+** icon in the **Subsections** sub-node and choose **Add Form Section**.
+1. In the Page Editor, expand the nodes **Sections - Incident Overview** if not already expanded, press the **+** icon in the **Subsections** sub-node and choose **Add Form Section**.

+    ![Add Section Icon](t4-add-section.png)

2. When prompted, type **General Information** in the **Label** field and press **Add**.

+    ![Add Sections Dialog](t4-add-section-dialog.png)

    The **General Information** section is visible in the outline.

-3.
Expand the nodes **General Information->Form**, press **+** icon in the **Fields** node and choose **Add Basic Fields**.
+3. Expand the nodes **General Information - Form**, press the **+** icon in the **Fields** node and choose **Add Basic Fields**.

-    ![Add Fields Icon](t4-add-section-fields-icon.png)
+
+    ![Add Fields Icon](t4-add-section-fields-icon.png)

4. When prompted, choose **priority_code**, **category_code** and **incidentStatus_code** as **Fields** and press **Add**.

+    ![Add Fields Dialog](t4-add-section-fields-dialog.png)

    Application preview refreshes and shows the additional field group **General Information** within the section **Incident Overview**.
+
+    ![Annotation Cursor](t4-annotation-section-field-group.PNG)

### Add custom section to the object page

@@ -83,14 +93,17 @@ To simplify this exercise, you will find prepared content in the `ext` folder of

1. Using drag and drop, move the existing folder `ext` located in `test-resources` to the `webapp` folder of the incidents application.

+    ![Annotation Cursor](t4-annotation-custom-section-ext-4.png)

2. In the Page Editor, press the **+** icon in the **Sections** node and choose **Add Custom Section**.

+    ![Annotation Cursor](t4-annotation-custom-section-page-editor-add-section.PNG)

3. In the **Add Custom Section** dialog, modify the content of the fields as shown in the image below and press **Add**.

+    ![Annotation Cursor](t4-annotation-custom-section-add-section-dialog.PNG)

    >The content of the **Fragment Name** field represents one of the prepared artifacts located in the `ext` folder.

@@ -99,6 +112,7 @@ To simplify this exercise, you will find prepared content in the `ext` folder of

You have now finished the creation of the new custom section. Once the application preview is refreshed, check the new section displayed on the object page.
+    ![Annotation Cursor](t4-annotation-custom-section-on-object-page.PNG)

### Add new column to Incident Process Flow table

@@ -107,24 +121,29 @@ Now you are going to add a new column to the object page table **Incidents Proce

1. In the Page editor, expand the nodes **Sections->Incident Process Flow->Table**, press the **+** icon in the **Columns** sub-node and choose **Add Basic Columns**.

+    ![Add Columns Icon](t4-add-column-icon.png)

2. When prompted, choose **stepStatus** in the **Columns** field and press **Add**.

+    ![Add Column Dialog](t4-add-column-dialog.png)

    The **Process Step Status** column is added at the bottom of the columns list section in the outline.

3. Drag the newly added column to the top of the columns list and drop it there.

+    ![Move Column](t4-move-column.png)

4. Choose the **Process Step Status** column to display its properties in the Properties pane to the right of the outline. In the **Criticality** field, change the value to **criticality**.

+    ![Define Criticality](t4-add-column-criticality.png)

    Once the application preview is refreshed, the new column is added to the object page table.
+    ![Annotation Cursor](t4-annotation-LSP-table-column.PNG)

### Enable the flexible column layout

@@ -135,10 +154,13 @@ The flexible column layout allows you to have the list report and the object pag

    In the **Property Panel**, select **Flexible Column Layout** and choose the `Mid-Expanded` option for the two-column layout.

+    ![Annotation Cursor](t4-flexible-column-layout-global-page-settings.PNG)

2. In the application preview, the list report and object page are now shown in a two-column layout. When you click on a different row in the list report, the object page updates accordingly.
+
+    ![Annotation Cursor](t4-flexible-column-layout-final.PNG)

### Summary

@@ -147,6 +169,7 @@ At this point, your list report object page application is complete.
> To prepare your app for translation, you can generate the translation keys for all the language dependent fields in your app. For that, choose the globe button at the top of the screen and, once prompted, press **Create**. + ![Prepare for Translation](t4-i18n.png) Over the past four tutorials, you have used the SAP Business Technology Platform, SAP Fiori tools and SAP Fiori elements to build this application. You have learned how to: diff --git a/tutorials/fiori-tools-cap-modify-object-page/t4-add-fields-dialog.png b/tutorials/fiori-tools-cap-modify-object-page/t4-add-fields-dialog.png index 88ebbe45c9..b3c1ffdee0 100644 Binary files a/tutorials/fiori-tools-cap-modify-object-page/t4-add-fields-dialog.png and b/tutorials/fiori-tools-cap-modify-object-page/t4-add-fields-dialog.png differ diff --git a/tutorials/fiori-tools-cap-modify-object-page/t4-add-section-fields-dialog.png b/tutorials/fiori-tools-cap-modify-object-page/t4-add-section-fields-dialog.png index 96c356df49..8a87818fb3 100644 Binary files a/tutorials/fiori-tools-cap-modify-object-page/t4-add-section-fields-dialog.png and b/tutorials/fiori-tools-cap-modify-object-page/t4-add-section-fields-dialog.png differ diff --git a/tutorials/fiori-tools-cap-modify-object-page/t4-add-section-fields-icon.png b/tutorials/fiori-tools-cap-modify-object-page/t4-add-section-fields-icon.png index e8a4f6dfae..a4ec25c2a1 100644 Binary files a/tutorials/fiori-tools-cap-modify-object-page/t4-add-section-fields-icon.png and b/tutorials/fiori-tools-cap-modify-object-page/t4-add-section-fields-icon.png differ diff --git a/tutorials/fiori-tools-cap-modify-object-page/t4-add-section-fields.png b/tutorials/fiori-tools-cap-modify-object-page/t4-add-section-fields.png index 36d162295a..473c0ca866 100644 Binary files a/tutorials/fiori-tools-cap-modify-object-page/t4-add-section-fields.png and b/tutorials/fiori-tools-cap-modify-object-page/t4-add-section-fields.png differ diff --git 
a/tutorials/fiori-tools-cap-modify-object-page/t4-annotation-custom-section-add-section-dialog.PNG b/tutorials/fiori-tools-cap-modify-object-page/t4-annotation-custom-section-add-section-dialog.PNG index ab5f882a48..c983c511ba 100644 Binary files a/tutorials/fiori-tools-cap-modify-object-page/t4-annotation-custom-section-add-section-dialog.PNG and b/tutorials/fiori-tools-cap-modify-object-page/t4-annotation-custom-section-add-section-dialog.PNG differ diff --git a/tutorials/fiori-tools-cap-modify-object-page/t4-annotation-section-field-2.PNG b/tutorials/fiori-tools-cap-modify-object-page/t4-annotation-section-field-2.PNG index afaefa9fd3..c0f55a2d06 100644 Binary files a/tutorials/fiori-tools-cap-modify-object-page/t4-annotation-section-field-2.PNG and b/tutorials/fiori-tools-cap-modify-object-page/t4-annotation-section-field-2.PNG differ diff --git a/tutorials/fiori-tools-cap-modify-object-page/t4-annotation-section-field-group.PNG b/tutorials/fiori-tools-cap-modify-object-page/t4-annotation-section-field-group.PNG index d8c26730a7..7a13b9e6a5 100644 Binary files a/tutorials/fiori-tools-cap-modify-object-page/t4-annotation-section-field-group.PNG and b/tutorials/fiori-tools-cap-modify-object-page/t4-annotation-section-field-group.PNG differ diff --git a/tutorials/fiori-tools-cap-modify-object-page/t4-flexible-column-layout-global-page-settings.PNG b/tutorials/fiori-tools-cap-modify-object-page/t4-flexible-column-layout-global-page-settings.PNG index b6fdd4f121..b78f3e5795 100644 Binary files a/tutorials/fiori-tools-cap-modify-object-page/t4-flexible-column-layout-global-page-settings.PNG and b/tutorials/fiori-tools-cap-modify-object-page/t4-flexible-column-layout-global-page-settings.PNG differ diff --git a/tutorials/fiori-tools-cap-modify-object-page/t4-open-object-page-app-info.png b/tutorials/fiori-tools-cap-modify-object-page/t4-open-object-page-app-info.png index a9c692ce2a..598fc14414 100644 Binary files 
a/tutorials/fiori-tools-cap-modify-object-page/t4-open-object-page-app-info.png and b/tutorials/fiori-tools-cap-modify-object-page/t4-open-object-page-app-info.png differ diff --git a/tutorials/fiori-tools-cap-prepare-dev-env/create-dev-space-BAS.png b/tutorials/fiori-tools-cap-prepare-dev-env/create-dev-space-BAS.png index 0cf5f47bd6..1aea830fa6 100644 Binary files a/tutorials/fiori-tools-cap-prepare-dev-env/create-dev-space-BAS.png and b/tutorials/fiori-tools-cap-prepare-dev-env/create-dev-space-BAS.png differ diff --git a/tutorials/fiori-tools-cap-prepare-dev-env/fiori-tools-cap-prepare-dev-env.md b/tutorials/fiori-tools-cap-prepare-dev-env/fiori-tools-cap-prepare-dev-env.md index a3a8fd091c..5903628d36 100644 --- a/tutorials/fiori-tools-cap-prepare-dev-env/fiori-tools-cap-prepare-dev-env.md +++ b/tutorials/fiori-tools-cap-prepare-dev-env/fiori-tools-cap-prepare-dev-env.md @@ -3,7 +3,7 @@ title: Prepare Your Development Environment for SAP Fiori Elements description: Set up your development environment with SAP Business Application Studio to create an SAP Fiori elements application based on the SAP Cloud Application Programming Model. auto_validation: true time: 20 minutes -tags: [ products>sap-fiori-elements, products>sap-fiori-tools, tutorial>beginner, products>sap-fiori, products>sap-business-application-studio, software-product-function>sap-cloud-application-programming-model, products>sap-business-technology-platform] +tags: [ software-product-function>sap-fiori-elements, tutorial>beginner, software-product-function>sap-business-application-studio, software-product-function>sap-cloud-application-programming-model, software-product-function>sap-business-technology-platform ] primary_tag: products>sap-fiori contributors: [ Hitesh Parmar>https://github.com/hitesh-parmar, Joachim Fiess>https://github.com/jo-fiess ] --- @@ -28,7 +28,7 @@ Click [here](https://cap.cloud.sap/docs/about/) for more information about the S Click **Create Dev Space**. 
- ![Start the Dev Space](create-dev-space-BAS.png) + !![Start the Dev Space](create-dev-space-BAS.png) Your development space is now ready to use. Wait until the status has changed from **STARTING** to **RUNNING**. After the initial creation this is done automatically. @@ -54,16 +54,15 @@ Once you are in the development space, you will see a **Welcome** page from whic 2. Click the link **Clone from Git**. - ![Click on link "Clone from Git"](click-clone-from-git.png) + !![Click on link "Clone from Git"](click-clone-from-git.png) Paste the repository link into the input field and press **Enter**. - ![Enter the github repository URL](enter-github-repository.png) + !![Enter the github repository URL](enter-github-repository.png) -3. Wait until the cloning has finished. When you see a toast message in the lower right corner, click **Open** to open the project. - You see your project in the explorer panel as shown in the image below: +3. Wait until the cloning has finished. Click **Open** to open the project. You see your project in the explorer panel as shown in the image below: - ![Explorer service structure](explorer-project-tree.png) + !![Explorer service structure](explorer-project-tree.png) [DONE] [ACCORDION-END] diff --git a/tutorials/hana-clients-choose-hana-instance/hana-clients-choose-hana-instance.md b/tutorials/hana-clients-choose-hana-instance/hana-clients-choose-hana-instance.md index 4cd7c72933..988959d0aa 100644 --- a/tutorials/hana-clients-choose-hana-instance/hana-clients-choose-hana-instance.md +++ b/tutorials/hana-clients-choose-hana-instance/hana-clients-choose-hana-instance.md @@ -7,20 +7,24 @@ primary_tag: software-product>sap-hana-cloud --- # Choose an SAP HANA Database + Learn about SAP HANA Cloud and SAP HANA, express edition and choose one that will be used with the SAP HANA client interfaces in subsequent tutorials. 
## Prerequisites - - A Microsoft Windows, Linux, or Mac computer - - A machine that can run SAP HANA, express edition if the SAP HANA Cloud trial or free tier is not used + +- A Microsoft Windows, Linux, or Mac computer +- A machine that can run SAP HANA, express edition if SAP HANA Cloud free tier is not used ## You will learn - - How to create an instance of SAP HANA Cloud or SAP HANA, express edition - - How to connect to a SAP HANA Cloud or an SAP HANA, express edition database + +- How to create an instance of SAP HANA Cloud or SAP HANA, express edition +- How to connect to a SAP HANA Cloud or an SAP HANA, express edition database ## Intro + This tutorial will provide tips and pointers on setting up an instance of [SAP HANA](https://www.sap.com/products/hana.html) running in the cloud or on-premise so that it can then be connected to using a few of the [SAP HANA Client](https://help.sap.com/docs/SAP_HANA_CLIENT) interfaces. -For more information on SAP HANA, consult [What Is SAP HANA](https://help.sap.com/docs/SAP_HANA_PLATFORM/eb3777d5495d46c5b2fa773206bbfb46/d3b1adcdbb571014a93eff11ad9a1d89.html). +For more information on SAP HANA Cloud, consult [Introduction to SAP HANA Cloud](https://help.sap.com/docs/hana-cloud/feature-scope-description-for-sap-hana-cloud-3dd959f1b8574cb0ba19ab05cfc0d3ae/introduction-to-sap-hana-cloud). > Access help from the SAP community or provide feedback on this tutorial by navigating to the **Feedback** link located on the top right of this page. @@ -40,6 +44,7 @@ For more information on SAP HANA, consult [What Is SAP HANA](https://help.sap.co --- ### Overview of SAP HANA Cloud and SAP HANA On-premise + There are multiple versions of SAP HANA. The information below is a list of links for the documentation of each version. | Version | Notes @@ -50,38 +55,38 @@ There are multiple versions of SAP HANA. 
The information below is a list of lin ### SAP HANA Cloud -Here are a few benefits of using SAP HANA Cloud: - - * Software updates are automatically applied by SAP. - * Hardware is managed by a cloud provider (e.g. AWS, Azure, or GCP). +Here are a few benefits of using SAP HANA Cloud: - * Many data center locations to choose from as listed in the [SAP Discovery Center](https://discovery-center.cloud.sap/serviceCatalog/sap-hana-cloud?region=all&tab=service_plan) +- Software updates are automatically applied by SAP. - * [Backups](https://help.sap.com/docs/hana-cloud/sap-hana-cloud-administration-guide/backup-and-recovery) are automatic and recovery can be initiated in SAP HANA Cloud Central. +- Hardware is managed by a cloud provider (e.g. AWS, Azure, or GCP). - * The memory, compute and storage settings can be changed as your needs change. Note a few operations can be performed using [service requests](https://help.sap.com/docs/hana-cloud/sap-hana-cloud-administration-guide/service-requests). +- Many data center locations to choose from as listed in the [SAP Discovery Center](https://discovery-center.cloud.sap/serviceCatalog/sap-hana-cloud?region=all&tab=service_plan) - * The ability is provided to expand data storage from in-memory, to native storage extensions, to a data lake, while providing a common access layer that enables you to have further control over performance and cost. See also [Lower Your Data Management Costs With SAP HANA Cloud](https://blogs.sap.com/2019/10/29/lower-your-data-management-costs-with-sap-hana-cloud/). +- [Backups](https://help.sap.com/docs/hana-cloud/sap-hana-cloud-administration-guide/backup-and-recovery) are automatic and recovery can be initiated in SAP HANA Cloud Central. +- The memory, compute and storage settings can be changed as your needs change. Note a few operations can be performed using [service requests](https://help.sap.com/docs/hana-cloud/sap-hana-cloud-administration-guide/service-requests). 
- Here are a few differences between SAP HANA Cloud and an on-premise version: +- The ability is provided to expand data storage from in-memory, to native storage extensions, to a data lake, while providing a common access layer that enables you to have further control over performance and cost. See also [Lower Your Data Management Costs With SAP HANA Cloud](https://blogs.sap.com/2019/10/29/lower-your-data-management-costs-with-sap-hana-cloud/). - * Every SAP HANA Cloud instance is one SAP HANA database. SAP HANA Cloud as of 2024 QRC 4 offers [multitenancy](https://help.sap.com/docs/hana-cloud/sap-hana-cloud-multitenancy/introducing-sap-hana-cloud-multitenancy) support. For further details see [The next step towards cost-effectiveness and scalability with SAP HANA Cloud Multitenancy](https://community.sap.com/t5/technology-blog-posts-by-sap/the-next-step-towards-cost-effectiveness-and-scalability-with-sap-hana/ba-p/13885564). On-premise SAP HANA also has a concept of tenant databases (a system database and one or more tenant databases) but in a different manner from SAP HANA Cloud. For further details see [SAP HANA Tenant Databases](https://help.sap.com/docs/SAP_HANA_PLATFORM/eb3777d5495d46c5b2fa773206bbfb46/0baadba82dd9407cbb852ae98f49f6bd.html). +Here are a few differences between SAP HANA Cloud and an on-premise version: - * Connections to an SAP HANA Cloud instance must be secure and require a minimum SAP HANA client version of 2.4.167. +- Every SAP HANA Cloud instance is one SAP HANA database. SAP HANA Cloud as of 2024 QRC 4 offers [multitenancy](https://help.sap.com/docs/hana-cloud/sap-hana-cloud-multitenancy/introducing-sap-hana-cloud-multitenancy) support. For further details see [The next step towards cost-effectiveness and scalability with SAP HANA Cloud Multitenancy](https://community.sap.com/t5/technology-blog-posts-by-sap/the-next-step-towards-cost-effectiveness-and-scalability-with-sap-hana/ba-p/13885564). 
On-premise SAP HANA also has a concept of tenant databases (a system database and one or more tenant databases) but in a different manner from SAP HANA Cloud. For further details see [SAP HANA Tenant Databases](https://help.sap.com/docs/SAP_HANA_PLATFORM/eb3777d5495d46c5b2fa773206bbfb46/0baadba82dd9407cbb852ae98f49f6bd.html). - * The administration user for SAP HANA Cloud is named DBADMIN while for an SAP HANA 2.0 database it is SYSTEM. For additional details see [Predefined Users](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-security-guide/predefined-users), [SAP HANA Cloud Administrator DBADMIN](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-administration-guide/user-management-with-sap-hana-database-administrator-dbadmin), and [Predefined Users in HANA 2.0](https://help.sap.com/docs/SAP_HANA_PLATFORM/b3ee5778bc2e4a089d3299b82ec762a7/de4ee8bbbb5710148a04f023da147c8d.html). +- The administration user for SAP HANA Cloud is named DBADMIN while for an SAP HANA 2.0 database it is SYSTEM. For additional details see [Predefined Users](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-security-guide/predefined-users), [SAP HANA Cloud Administrator DBADMIN](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-administration-guide/user-management-with-sap-hana-database-administrator-dbadmin), and [Predefined Users in HANA 2.0](https://help.sap.com/docs/SAP_HANA_PLATFORM/b3ee5778bc2e4a089d3299b82ec762a7/de4ee8bbbb5710148a04f023da147c8d.html). - Information on the instance size steps for SAP HANA Cloud, SAP HANA databases can be found at [Create an SAP HANA Database Instance Using SAP HANA Cloud Central](https://help.sap.com/docs/hana-cloud/sap-hana-cloud-administration-guide/create-sap-hana-database-instance-using-sap-hana-cloud-central). 
Service plan, pricing and data center availability can be found at [SAP HANA Cloud Service (SAP Discovery Center)](https://discovery-center.cloud.sap/serviceCatalog/sap-hana-cloud?region=all&tab=service_plan). Details on limitations can be found at [System Limitations](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-sql-reference-guide/system-limitations). Compatibility information can be found at [Compatibility with Other SAP HANA Versions](https://help.sap.com/docs/hana-cloud/sap-hana-cloud-migration-guide/compatibility-with-other-sap-hana-versions). Additional details can be found at [What is SAP HANA?](https://www.sap.com/products/technology-platform/hana/what-is-sap-hana.html). +- The administration tool for SAP HANA Cloud is SAP HANA Cloud Central. The administration tools for the on-premise include the SAP HANA cockpit and the SAP HANA database explorer. +Information on the instance size steps for SAP HANA Cloud, SAP HANA databases can be found at [Create an SAP HANA Database Instance Using SAP HANA Cloud Central](https://help.sap.com/docs/hana-cloud/sap-hana-cloud-administration-guide/create-sap-hana-database-instance-using-sap-hana-cloud-central). Service plan, pricing and data center availability can be found at [SAP HANA Cloud Service (SAP Discovery Center)](https://discovery-center.cloud.sap/serviceCatalog/sap-hana-cloud?region=all&tab=service_plan). Details on limitations can be found at [System Limitations](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-sql-reference-guide/system-limitations). Compatibility information can be found at [Compatibility with Other SAP HANA Versions](https://help.sap.com/docs/hana-cloud/sap-hana-cloud-migration-guide/compatibility-with-other-sap-hana-versions). Additional details can be found at [What is SAP HANA?](https://www.sap.com/products/technology-platform/hana/what-is-sap-hana.html). 
### Connect to SAP HANA Cloud + >To complete the tutorials in the mission, an SAP HANA instance is needed. Steps 3 and 5 in this tutorial provide two different, free options that can be used to set up an SAP HANA instance. Only one of these steps needs to be completed if you currently do not have access to an SAP HANA instance. -The instructions on how to setup a free SAP HANA Cloud trial or free tier within the SAP Business Technology Platform (SAP BTP), are well covered in a number of other sources listed below. Trial is only available on the US10 landscape and is in a separate SAP BTP trial account whereas free tier is available in multiple production SAP BTP accounts and provides a seamless transition from a free tier to a paid plan. +The instructions on how to set up a free SAP HANA Cloud instance within the SAP Business Technology Platform (SAP BTP) are well covered in a number of other sources listed below. The SAP BTP Trial is available on the US10 and AP21 landscapes. When using the free SAP HANA Cloud service in productive landscapes, there is an option to transition the service from the free tier to a paid plan. -* [Set Up Your SAP HANA Cloud, SAP HANA Database (free tier or trial) and Understand the Basics](group.hana-cloud-get-started-1-trial) +* [Set Up Your SAP HANA Cloud, SAP HANA Database and Understand the Basics](group.hana-cloud-get-started-1-trial) * [SAP Learning Journey - Provisioning and Administering Databases in SAP HANA Cloud](https://learning.sap.com/learning-journey/provision-and-administer-databases-in-sap-hana-cloud) @@ -99,7 +104,7 @@ For more information on SAP BTP see the following product pages and help documen * [https://help.sap.com/docs/btp](https://help.sap.com/docs/btp) -Continue with this tutorial once you have created an SAP HANA Cloud trial or free tier instance as shown below. +Continue with this tutorial once you have created an SAP HANA Cloud instance as shown below. 
![SAP HANA Cloud Trial instance](hana-cloud-instance.png) @@ -108,7 +113,7 @@ Continue with this tutorial once you have created an SAP HANA Cloud trial or fre ![SQL Endpoint](SQLEndpoint.png) - >The SAP HANA Cloud, HANA database free tier or trial instances are shut down on a nightly basis and will need to be restarted before working with them the next day. + >The SAP HANA Cloud, HANA database free tier instances are shut down on a nightly basis and will need to be restarted before working with them the next day. 2. Open a SQL console for your database instance from SAP HANA Cloud Central. @@ -151,13 +156,12 @@ Continue with this tutorial once you have created an SAP HANA Cloud trial or fre Congratulations! You have connected to SAP HANA Cloud and performed a few queries. - ### SAP HANA, express edition ->This step only needs to be completed if you currently do not have access to an SAP HANA Instance and did not setup an SAP HANA instance through the SAP HANA Cloud Trial or free tier as explained in step 3. +>This step only needs to be completed if you currently do not have access to an SAP HANA Instance and did not setup an SAP HANA instance through the SAP HANA Cloud free tier as explained in step 3. SAP provides a free streamlined version of SAP HANA that runs on developer laptops called [SAP HANA, express edition](https://www.sap.com/products/technology-platform/hana/express-trial.html). -SAP HANA runs on a few versions of Linux. SAP HANA, express edition provides a binary install as well as virtual machine images that can be run on Microsoft Windows, macOS and Linux machines. This is described in the [Getting Started with SAP HANA 2.0, express edition (Binary Installer Method)](https://help.sap.com/docs/SAP_HANA,_EXPRESS_EDITION/32c9e0c8afba4c87814e61d6a1141280) or [Getting Started with SAP HANA 2.0, express edition (Virtual Machine Method)](https://help.sap.com/docs/SAP_HANA,_EXPRESS_EDITION/8c3bbc4a904d42efac77c09da0bccf64). 
A database-only option and a database + XS Advanced Applications option are available. The database + XS Advanced Applications install includes the SAP HANA cockpit, the SAP HANA database explorer, and the SAP HANA Web IDE for SAP HANA. +SAP HANA runs on a few versions of Linux. SAP HANA, express edition provides a binary install as well as [docker images](https://hub.docker.com/u/saplabs). This is described in the [Getting Started with SAP HANA 2.0, express edition (Binary Installer Method)](https://help.sap.com/docs/SAP_HANA,_EXPRESS_EDITION/32c9e0c8afba4c87814e61d6a1141280). A database-only option and a database + XS Advanced Applications option are available. The database + XS Advanced Applications install includes the SAP HANA cockpit, the SAP HANA database explorer, and the SAP HANA Web IDE for SAP HANA. ![SAP HANA express download manager](express-download-manager.png) @@ -169,7 +173,7 @@ At this point, you should have a running instance of SAP HANA, express edition. ### Connect to SAP HANA, express edition ->This step only needs to be completed if you currently do not have access to an SAP HANA Instance and did not setup an SAP HANA instance through the SAP HANA Cloud Trial or free tier as explained in step 3. +>This step only needs to be completed if you currently do not have access to an SAP HANA Instance and did not setup an SAP HANA instance using the SAP HANA Cloud free tier as explained in step 3. A default installation will contain one [system](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/39da3d057f56427ab1bb7f738ca9e7ce.html) database named **SYSTEMDB** and one [tenant](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/623afd167e6b48bf956ebb7f2142f058.html) database named **HXE**. 
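The SYSTEMDB/HXE split above matters when a client connects: with SAP HANA, express edition, clients typically select the tenant with the `databaseName` connection property. A minimal sketch of assembling such a connection string, assuming a local express edition VM with the default `HXE` tenant (host, port number, and credentials are placeholders, not values from this tutorial):

```shell
# Placeholder endpoint for a local SAP HANA, express edition VM.
# Port 39013 is assumed here as the system database SQL port for the
# default instance number 90; verify against your own installation.
HOST="localhost"
PORT=39013
TENANT="HXE"   # default tenant database name in express edition

# databaseName routes the connection to the named tenant database.
CONN="Server=${HOST}:${PORT};databaseName=${TENANT};UID=User1;PWD=Password1"
echo "$CONN"
```

The same general string shape appears in the driver tutorials later in this mission; only the property names and casing vary slightly by interface.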
diff --git a/tutorials/hana-clients-dot-net-core/HANAClientDriver.png b/tutorials/hana-clients-dot-net-core/HANAClientDriver.png index 4c83bff1ae..0a75b84b37 100644 Binary files a/tutorials/hana-clients-dot-net-core/HANAClientDriver.png and b/tutorials/hana-clients-dot-net-core/HANAClientDriver.png differ diff --git a/tutorials/hana-clients-dot-net-core/dotNET-csproj-code.png b/tutorials/hana-clients-dot-net-core/dotNET-csproj-code.png deleted file mode 100644 index d7b8c9b73f..0000000000 Binary files a/tutorials/hana-clients-dot-net-core/dotNET-csproj-code.png and /dev/null differ diff --git a/tutorials/hana-clients-dot-net-core/hana-clients-dot-net-core.md b/tutorials/hana-clients-dot-net-core/hana-clients-dot-net-core.md index fb33df8b23..2e287d1656 100644 --- a/tutorials/hana-clients-dot-net-core/hana-clients-dot-net-core.md +++ b/tutorials/hana-clients-dot-net-core/hana-clients-dot-net-core.md @@ -7,31 +7,36 @@ primary_tag: software-product>sap-hana-cloud --- # Connect Using the SAP HANA .NET Interface + Create and debug a .NET application that connects to SAP HANA using the SAP HANA client. ## Prerequisites - - You have completed the first 3 tutorials in this mission. + +- You have completed the first 3 tutorials in this mission. ## You will learn - - How to install the .NET SDK - - How to create and debug a .NET application that queries an SAP HANA database + +- How to install the .NET SDK +- How to create and debug a .NET application that queries an SAP HANA database ## Intro + [.NET](https://en.wikipedia.org/wiki/.NET_Core) is a free and open-source software framework for Microsoft Windows, Linux and macOS operating systems and is the successor to the .NET Framework. .NET was previously known as .NET Core. --- ### Install the .NET SDK + The first step is to check if you have the .NET SDK installed and what version it is. 
Enter the following command: ```Shell dotnet --version -``` +``` + If the `dotnet` command is not recognized, it means that the .NET SDK has not been installed. If the SDK is installed, the command returns the currently installed version, such as 8.0.203. If the .NET SDK is not installed, download it from [Download .NET](https://dotnet.microsoft.com/download) and run the installer on Microsoft Windows or Mac. - ![.NET Core SDK Install](dotnet-install.png) On Linux, follow the instructions for the appropriate Linux version such as [Install the .NET SDK or the .NET Runtime on openSUSE](https://docs.microsoft.com/en-us/dotnet/core/install/linux-opensuse). @@ -40,94 +45,37 @@ In order for the shell to recognize that the .NET SDK is installed and for any ` >For further details on supported versions, see SAP Note [3165810 - SAP HANA Client Supported Platforms](https://launchpad.support.sap.com/#/notes/3165810). - ### Create a .NET application that queries an SAP HANA database -1. Create a new console app with the below commands: + +1. Create a new console app with the below commands: ```Shell (Microsoft Windows) cd %HOMEPATH%/HANAClientsTutorial dotnet new console -o dotNET ``` - >On Linux or Mac, you need to modify the `HDBDOTNETCORE` variable to point to the location of the `libadonetHDB.so` or `libadonetHDB.dylib` file before creating a new console app. - - >There are two ways to set an environment variable: using the `EXPORT` command on a Shell window or in a user's profile script. When an environment variable is modified from the Shell, however, its existence ends when the user's session ends. This could become an issue when you want the variable to persist across multiple user sessions. - - >Hence, we will set the `HDBDOTNETCORE` environment variable via the user profile. - - >Open an editor to edit the file `.bash_profile` or `.profile`. - - >```Shell (Linux or Mac) - >pico ~/.bash_profile - >``` - - >Replace `pico` with your preferred text editor. 
- - >Add the following line to it. - - >```Shell (Linux or Mac) - >export HDBDOTNETCORE=/home/dan/sap/hdbclient/dotnetcore - >``` - - >Run the source command to immediately apply all the changes made to the `.bash_profile` file. - - >```Shell (Linux or Mac) - >source ~/.bash_profile - >``` - - >Now, you may run the following command to create the console app. - - >```Shell (Linux or Mac) - >cd $HOME/HANAClientsTutorial - >dotnet new console -o dotNET - >``` - -2. Open the `dotNET.csproj` file: - - ```Shell (Microsoft Windows) - cd dotNET - notepad dotNET.csproj - ``` - ```Shell (Linux or Mac) - cd dotNET - pico dotNET.csproj + cd $HOME/HANAClientsTutorial + dotnet new console -o dotNET ``` - Add the following below the `PropertyGroup` section (within the `Project` section) to indicate where to load the SAP HANA Client .NET driver from. Modify the `HintPath` section with the information about where the dll is located on your machine. - - >The SAP HANA driver can be downloaded from [SAP Development Tools](https://tools.eu1.hana.ondemand.com/#hanatools) for either Linux, Windows or macOS if required. - - >![HANAClientDriverDownload](HANAClientDriver.png) - - ```Shell (Microsoft Windows) - - - C:\SAP\hdbclient\dotnetcore\v6.0\Sap.Data.Hana.Net.v6.0.dll - - - ``` +2. Add the SAP HANA .NET data provider which is available on [nuget](https://www.nuget.org/packages/Sap.Data.Hana.Net.v8.0/). A list of available providers from SAP is available at [SAP-SE](https://www.nuget.org/profiles/SAP-SE). - ```Shell (Linux or Mac) - - - /home/dan/sap/hdbclient/dotnetcore/v6.0/Sap.Data.Hana.Net.v6.0.dll - - - + ```Shell + cd dotNET + dotnet add package Sap.Data.Hana.Net.v8.0 ``` - ![dotNET.csproj code](dotNET-csproj-code.png) - - Once the `dotNet.csproj` file has been updated, save and close the file. -3. Run the app to validate that SAP HANA driver can be loaded: + ![HANAClientDriverDownload](HANAClientDriver.png) + +3. 
Run the app to validate that SAP HANA driver can be loaded: ```Shell dotnet run ``` - >If an error occurs, double check that the hintpath is correct. -4. Open an editor to edit the file `Program.cs`. +4. Open an editor to edit the file `Program.cs`. + ```Shell (Windows) notepad Program.cs ``` @@ -136,7 +84,7 @@ In order for the shell to recognize that the .NET SDK is installed and for any ` pico Program.cs ``` -5. Replace all content of `Program.cs` with the code below. Be sure to update values where necessary and save the file when finished. +5. Replace all content of `Program.cs` with the code below. Be sure to update values where necessary and save the file when finished. ```C# using System; @@ -149,7 +97,7 @@ In order for the shell to recognize that the .NET SDK is installed and for any ` { try { - using (var conn = new HanaConnection("Server=999deec0-ccb7-4a5e-b317-d419e19be648.hana.prod-us10.hanacloud.ondemand.com:443;UID=User1;PWD=Password1;encrypt=true;sslValidateCertificate=false")) + using (var conn = new HanaConnection("Server=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx.hana.prod-xxxx.hanacloud.ondemand.com:443;UID=User1;PWD=Password1;encrypt=true;sslValidateCertificate=false")) // encrypt and sslValidateCertificate should be true for HANA Cloud connections // As of SAP HANA Client 2.6, connections on port 443 enable encryption by default @@ -202,17 +150,20 @@ In order for the shell to recognize that the .NET SDK is installed and for any ` The above app makes use of some of the SAP HANA client .NET driver methods, such as [HanaConnection](https://help.sap.com/docs/SAP_HANA_CLIENT/f1b440ded6144a54ada97ff95dac7adf/d19390d16d6110149af29776dce510bc.html). Connection details for this class can be found at [Microsoft ADO.NET Connection Properties](https://help.sap.com/docs/SAP_HANA_CLIENT/f1b440ded6144a54ada97ff95dac7adf/469e137b6d611014ac27bffe40be2f18.html). 
Further .NET API details can be found in the [.NET API browser](https://docs.microsoft.com/en-us/dotnet/api/?view=net-6.0). -6. Run the app: +6. Run the app: ```Shell dotnet run ``` + >Before running the program, make sure you are in the directory where `Program.cs` is saved. ![Result of running the app](result.png) +7. Optionally remove any unused platform files from the runtimes folder at HANAClientsTutorial\dotNET\bin\Debug\net9.0\runtimes. ### Debug the application + 1. Open Visual Studio Code. If needed, download Visual Studio Code [here](https://code.visualstudio.com/Download). 2. If you have not already done so, choose **File | Add Folder to Workspace**, and then add the `HANAClientsTutorial` folder. @@ -238,7 +189,7 @@ In order for the shell to recognize that the .NET SDK is installed and for any ` For further information on debugging .NET apps consult [Tutorial: Debug a .NET Core console application using Visual Studio Code](https://docs.microsoft.com/en-us/dotnet/core/tutorials/debugging-with-visual-studio-code) and [Instructions for setting up the .NET Core debugger](https://github.com/OmniSharp/omnisharp-vscode/blob/master/debugger.md). ### Knowledge check -Congratulations! You have now created and debugged a .NET application that connects to and queries an SAP HANA database. +Congratulations! You have now created and debugged a .NET application that connects to and queries an SAP HANA database. 
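The `HanaConnection` example above folds host, port, credentials, and TLS options into a single string. As a rough sketch of how those pieces compose (every value below is a placeholder, and `sslValidateCertificate` is set to true, as the comments in the sample recommend for SAP HANA Cloud connections):

```shell
# Placeholder SQL endpoint pieces -- substitute your instance's values.
HOST="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx.hana.prod-xxxx.hanacloud.ondemand.com"
PORT=443
USER="User1"

# Connections on port 443 enable encryption by default as of SAP HANA
# Client 2.6, but spelling it out keeps the intent obvious.
CONN="Server=${HOST}:${PORT};UID=${USER};PWD=Password1;encrypt=true;sslValidateCertificate=true"
echo "$CONN"
```

Keeping the endpoint and user in variables like this makes it easier to reuse one script across the tutorials in this mission.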
--- diff --git a/tutorials/hana-clients-entity-framework/HANAClientDriver.png b/tutorials/hana-clients-entity-framework/HANAClientDriver.png new file mode 100644 index 0000000000..f824212930 Binary files /dev/null and b/tutorials/hana-clients-entity-framework/HANAClientDriver.png differ diff --git a/tutorials/hana-clients-entity-framework/Tables-in-USER2-schema.png b/tutorials/hana-clients-entity-framework/Tables-in-USER2-schema.png new file mode 100644 index 0000000000..8bec38d355 Binary files /dev/null and b/tutorials/hana-clients-entity-framework/Tables-in-USER2-schema.png differ diff --git a/tutorials/hana-clients-entity-framework/entity-framework-tools.png b/tutorials/hana-clients-entity-framework/entity-framework-tools.png index 58679e1318..4325f2f201 100644 Binary files a/tutorials/hana-clients-entity-framework/entity-framework-tools.png and b/tutorials/hana-clients-entity-framework/entity-framework-tools.png differ diff --git a/tutorials/hana-clients-entity-framework/hana-clients-entity-framework.md b/tutorials/hana-clients-entity-framework/hana-clients-entity-framework.md index c8bb24c1a6..0d4de86ba8 100644 --- a/tutorials/hana-clients-entity-framework/hana-clients-entity-framework.md +++ b/tutorials/hana-clients-entity-framework/hana-clients-entity-framework.md @@ -7,31 +7,31 @@ primary_tag: software-product>sap-hana-cloud --- # Connect Using the Microsoft Entity Framework Core (EF Core) + Create and debug an EF Core application that connects to SAP HANA. 
## Prerequisites - - You have completed the first 3 tutorials in this mission - - You have completed the previous tutorial on .NET in this mission + +- You have completed the first 3 tutorials in this mission +- You have completed the previous tutorial on .NET in this mission ## You will learn - - How to install the .NET Core EF CLI - - How to create and debug an EF Core application that queries an SAP HANA database - - How to use the scaffold command to generate entity classes for pre-existing schema tables + +- How to install the .NET Core EF CLI +- How to create and debug an EF Core application that queries an SAP HANA database +- How to use the scaffold command to generate entity classes for pre-existing schema tables ## Intro -[.NET](https://en.wikipedia.org/wiki/.NET_Core) is a free and open-source software framework for Microsoft Windows, Linux and Mac operating systems and is the successor to the .NET Framework. Entity Framework Core is a modern object-database mapper for .NET and can reduce data access code in an application. THe first example shown below either requires an empty database or one with the following table. -```SQL -CONNECT USER2 Password2; -CREATE TABLE "Hotel" ("Id" int, "Name" VARCHAR (20), "Address" VARCHAR (60)); -``` +[.NET](https://en.wikipedia.org/wiki/.NET_Core) is a free and open-source software framework for Microsoft Windows, Linux, and Mac operating systems and is the successor to the .NET Framework. Entity Framework Core is a modern object-database mapper for .NET and can reduce data access code in an application. --- ### Install the .NET Core EF CLI + The `dotnet` tool command can be used to install and manage tools that extend .NET. The following are a few examples that can be run to show help, to list the local and globally installed tools, to uninstall `dotnet-ef` if an incompatible version is installed, and to search the repository for version details of the `dotnet-ef` tool. -``` +```Shell dotnet tool -? 
dotnet tool list -? dotnet tool list @@ -40,12 +40,12 @@ dotnet tool uninstall dotnet-ef -g dotnet tool search dotnet-ef --detail ``` -The SAP HANA Client 2.17 release supports EF Core 6.0 & 7.0. For a list versions and support dates see [EF Core releases and planning](https://learn.microsoft.com/en-us/ef/core/what-is-new/) and SAP Note [3165810 - SAP HANA Client Supported Platforms](https://launchpad.support.sap.com/#/notes/3165810). +The SAP HANA Client 2.27 release supports EF Core 8.0 among other versions. For a list of versions and support dates see [EF Core releases and planning](https://learn.microsoft.com/en-us/ef/core/what-is-new/) and SAP Note [3165810 - SAP HANA Client Supported Platforms](https://launchpad.support.sap.com/#/notes/3165810). -Run the following command to install version 7 of the dotnet-ef tool. +Run the following command to install version 9 of the dotnet-ef tool. ```Shell -dotnet tool install dotnet-ef --version 7.0.17 -g +dotnet tool install dotnet-ef --version 9.0.14 -g dotnet tool list -g ``` @@ -60,7 +60,8 @@ dotnet ef -h ![entity framework tools](entity-framework-tools.png) ### Create a .NET Core EF application that queries an SAP HANA database -1. Create a new console app with the below commands: + +1. Create a new console app with the below commands: ```Shell (Microsoft Windows) cd %HOMEPATH%/HANAClientsTutorial @@ -72,65 +73,29 @@ dotnet ef -h dotnet new console -o EFCore ``` -2. Open the `EFCore.csproj` file: - - ```Shell (Microsoft Windows) - cd EFCore - notepad EFCore.csproj - ``` +2. Add the required packages including the SAP HANA .NET data provider which is available on [nuget](https://www.nuget.org/packages/Sap.EntityFrameworkCore.Hana.v9.0). A list of available providers from SAP is available at [SAP-SE](https://www.nuget.org/profiles/SAP-SE). 
- ```Shell (Linux or Mac) + ```Shell cd EFCore - pico EFCore.csproj + dotnet add package Sap.EntityFrameworkCore.Hana.v9.0 + dotnet add package Microsoft.EntityFrameworkCore.Relational --version 9.0.14 ``` - Add the following below the `PropertyGroup` section (within the `Project` section) to indicate where to load the SAP HANA Client .NET and entity driver from. Modify the `HintPath` section with the information about where the dlls are located on your machine. + ![HANAClientDriverDownload](HANAClientDriver.png) - ```Shell (Microsoft Windows) - - - C:\SAP\hdbclient\dotnetcore\v6.0\Sap.Data.Hana.Net.v6.0.dll - - - - C:\SAP\hdbclient\dotnetcore\v6.0\Sap.EntityFrameworkCore.Hana.v7.0.dll - - - - - - - - ``` + The packages can be listed using the command below. - ```Shell (Linux or Mac) - - - /home/dan/sap/hdbclient/dotnetcore/v6.0/Sap.Data.Hana.Net.v6.0.dll - - - /home/dan/sap/hdbclient/dotnetcore/v6.0/Sap.EntityFrameworkCore.Hana.v7.0.dll - - - - - - - + ```Shell + dotnet list package ``` - ![csproj file](csproj-file.png) - - Once the `dotNet.csproj` file has been updated, save, and close the file. - -3. Run the app to validate that SAP hdbclient DLLs can be loaded: +3. Run the app to validate that SAP hdbclient DLLs can be loaded: ```Shell dotnet run ``` - The expected output is `Hello, World!`. - >If a warning occurs mentioning that a SAP reference could not be resolved, revisit the `EFCore.csproj` file and double check that the hintpath is correct. + The expected output is `Hello, World!`. 4. Open an editor and create a file named `HotelModel.cs`. 
@@ -162,7 +127,7 @@ dotnet ef -h } protected override void OnConfiguring(DbContextOptionsBuilder options) { - options.UseHana("Server=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx.hana.prod-xxxx.hanacloud.ondemand.com:443;UserName=User2;Password=Password2;CurrentSchema=User2"); + options.UseHana("Server=xxxxxxxx-.hanacloud.ondemand.com:443;UserName=User2;Password=Password2;Current Schema=USER2"); } } @@ -174,9 +139,10 @@ dotnet ef -h } ``` - Be sure to update the host URL and optionally the user name and password. Note that calls to EnsureDeleted and EnsureCreated will delete and recreate the objects in the schema USER2. As documented at [RelationalDatabaseCreator.EnsureDeleted Method](https://learn.microsoft.com/en-us/dotnet/api/microsoft.entityframeworkcore.storage.relationaldatabasecreator.ensuredeleted) it will delete all objects in the schema USER2. + Be sure to update the host URL and optionally the user name and password. Note that calls to EnsureDeleted and EnsureCreated will delete and recreate the objects in the schema USER2. As documented at [RelationalDatabaseCreator.EnsureDeleted Method](https://learn.microsoft.com/en-us/dotnet/api/microsoft.entityframeworkcore.storage.relationaldatabasecreator.ensuredeleted), it will delete all objects in the schema USER2. + -6. Open an editor to edit the file `Program.cs`. +6. Open an editor to edit the file `Program.cs`. ```Shell (Windows) notepad Program.cs @@ -186,7 +152,7 @@ dotnet ef -h pico Program.cs ``` -7. Replace the entire contents of `Program.cs` with the code below. Save and close the file when finished. +7. Replace the entire contents of `Program.cs` with the code below. Save and close the file when finished. ```C# using var db = new HotelContext(); @@ -206,17 +172,18 @@ dotnet ef -h Further details on SAP HANA Client entity core driver can be found at [Entity Framework Core Support](https://help.sap.com/docs/SAP_HANA_CLIENT/f1b440ded6144a54ada97ff95dac7adf/3e6ef454ffc94cda8fefb0acf5be007b.html). 
Further .NET API details can be found in the [.NET API browser](https://learn.microsoft.com/en-us/dotnet/api/?view=efcore-6.0). -8. Run the app: +8. Run the app: ```Shell dotnet run ``` + >Before running the program, make sure you are in the directory where `Program.cs` is saved. ![Result of running the app](result.png) - ### Debug the application + 1. Open Visual Studio Code. If needed, download the application [here](https://code.visualstudio.com/Download). 2. If you have not already done so, choose **File | Add Folder to Workspace**, and then add the `HANAClientsTutorial` folder. @@ -238,6 +205,7 @@ dotnet ef -h For further information on debugging .NET apps consult [Tutorial: Debug a .NET Core console application using Visual Studio Code](https://docs.microsoft.com/en-us/dotnet/core/tutorials/debugging-with-visual-studio-code) and [Instructions for setting up the .NET Core debugger](https://github.com/OmniSharp/omnisharp-vscode/blob/master/debugger.md). ### Generate scaffolding classes for an existing schema + The following steps demonstrate the process of generating entity type classes and a DbContext class based on an existing database schema. Additional details can be found at [Scaffolding (Reverse Engineering)](https://learn.microsoft.com/en-us/ef/core/managing-schemas/scaffolding/?tabs=dotnet-core-cli). 1. Create a new console app with the below commands: @@ -252,56 +220,13 @@ The following steps demonstrate the process of generating entity type classes an dotnet new console -o EFCoreScaffold ``` -2. Open the `EFCoreScaffold.csproj` file: - - ```Shell (Microsoft Windows) - cd EFCoreScaffold - notepad EFCoreScaffold.csproj - ``` - - ```Shell (Linux or Mac) - cd EFCoreScaffold - pico EFCoreScaffold.csproj - ``` - - Add the following below the `PropertyGroup` section (within the `Project` section) to indicate where to load the SAP HANA Client .NET and entity driver from. 
Modify the `HintPath` section with the information about where the dlls are located on your machine. - - ```Shell (Microsoft Windows) - - - C:\SAP\hdbclient\dotnetcore\v6.0\Sap.Data.Hana.Net.v6.0.dll - - - C:\SAP\hdbclient\dotnetcore\v6.0\Sap.EntityFrameworkCore.Hana.v7.0.dll - - - - - - - - ``` - - ```Shell (Linux or Mac) - - - /home/dan/sap/hdbclient/dotnetcore/v6.0/Sap.Data.Hana.Net.v6.0.dll - - - /home/dan/sap/hdbclient/dotnetcore/v6.0/Sap.EntityFrameworkCore.Hana.v7.0.dll - - - - - - - - ``` - -3. Install the required package `Microsoft.EntityFrameworkCore.Design`. +2. Install the required packages. ```Shell - dotnet add package Microsoft.EntityFrameworkCore.Design --version 7.0.17 + cd EFCoreScaffold + dotnet add package Sap.EntityFrameworkCore.Hana.v9.0 + dotnet add package Microsoft.EntityFrameworkCore.Relational --version 9.0.14 + dotnet add package Microsoft.EntityFrameworkCore.Design --version 9.0.14 ``` The list of installed packages can be seen using the below command. @@ -314,17 +239,19 @@ The following steps demonstrate the process of generating entity type classes an Additional details can be found at [dotnet add package](https://learn.microsoft.com/en-us/dotnet/core/tools/dotnet-add-package) and [Microsoft.EntityFrameworkCore.Design](https://www.nuget.org/packages/Microsoft.EntityFrameworkCore.Design) -4. Use the scaffold command to generate entity classes for the HOTELS schema. Update the SQL endpoint. +3. Use the scaffold command to generate entity classes for the HOTELS schema. Update the SQL endpoint. 
```Shell - dotnet ef dbcontext scaffold "Server=xxxxxx-4782-bc7e-297099099b59.hana.prod-ca10.hanacloud.ondemand.com:443;uid=USER2;pwd=Password2;Current Schema=HOTELS" Sap.EntityFrameworkCore.Hana.v7.0 --schema HOTELS --context HotelsContext + dotnet ef dbcontext scaffold "Server=xxxxxxxx-.hanacloud.ondemand.com:443;uid=USER2;pwd=Password2;Current Schema=HOTELS" Sap.EntityFrameworkCore.Hana.v9.0 --schema HOTELS --context HotelsContext ``` + Notice that classes have been generated for each object in the schema HOTELS. + ![scaffold command](scaffold.png) Should you wish to regenerate the files in the future and overwrite the existing files, the `--force` parameter can be used. Additional details on the scaffold command can be found at [.NET Core CLI](https://learn.microsoft.com/en-us/ef/core/cli/dotnet#dotnet-ef-dbcontext-scaffold). - -5. Open an editor to edit the file `Program.cs`. + +4. Open an editor to edit the file `Program.cs`. ```Shell (Windows) notepad Program.cs @@ -334,7 +261,7 @@ The following steps demonstrate the process of generating entity type classes an pico Program.cs ``` -6. Replace the entire contents of `Program.cs` with the code below. Save and close the file when finished. +5. Replace the entire contents of `Program.cs` with the code below. Save and close the file when finished. ```C# using EFCoreScaffold; @@ -353,7 +280,7 @@ The following steps demonstrate the process of generating entity type classes an Console.WriteLine("Found item#: " + maintenanceItems.Mno + " Desc: " + maintenanceItems.Description); ``` -7. Open an editor to edit the file `HotelsContext.cs`. +6. Open an editor to edit the file `HotelsContext.cs`. ```Shell (Windows) notepad HotelsContext.cs @@ -363,9 +290,9 @@ The following steps demonstrate the process of generating entity type classes an pico HotelsContext.cs ``` -8. Delete the `OnConfiguring` method. This will be added to the `MyHotelsContext.cs` class. +7. Delete the `OnConfiguring` method. 
This will be added to the `MyHotelsContext.cs` class. -9. Open an editor to create and edit a new file named `MyHotelsContext.cs`. +8. Open an editor to create and edit a new file named `MyHotelsContext.cs`. ```Shell (Windows) notepad MyHotelsContext.cs @@ -375,7 +302,7 @@ The following steps demonstrate the process of generating entity type classes an pico MyHotelsContext.cs ``` -10. Add the code below. Update the Server= line to match your SAP HANA Cloud SQL endpoint. Save and close the file when finished. Note that the schema is changed to be USER2 while the original objects are in the schema HOTELS. +9. Add the code below. Update the Server= line to match your SAP HANA Cloud SQL endpoint. Save and close the file when finished. Note that the schema is changed to be USER2. ```C# using Microsoft.EntityFrameworkCore; @@ -396,12 +323,12 @@ The following steps demonstrate the process of generating entity type classes an protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder) { - optionsBuilder.UseHana("Server=xxxxxxxx-4782-bc7e-297099099b59.hana.prod-ca10.hanacloud.ondemand.com:443;uid=USER2;pwd=Password2;Current Schema=USER2"); + optionsBuilder.UseHana("Server=xxxxxxxx-.hanacloud.ondemand.com:443;uid=USER2;pwd=Password2;Current Schema=USER2"); } } ``` -11. Run the app: +10. Run the app: ```Shell dotnet run @@ -409,9 +336,12 @@ The following steps demonstrate the process of generating entity type classes an ![Result of running the app](results2.png) + Notice that tables such as CUSTOMER, HOTEL, and MAINTENANCE have now been created in the USER2 schema. + + ![tables in user2 schema](Tables-in-USER2-schema.png) ### Knowledge check -Congratulations! You have now created and debugged a .NET application that connects to and queries an SAP HANA database. +Congratulations! You have now created and debugged a .NET application that connects to and queries an SAP HANA database. 
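The scaffold step earlier in this tutorial packs a connection string, a provider name, and schema options into one CLI invocation. A sketch that builds the same `dotnet ef dbcontext scaffold` call from named parts, which can make the pieces easier to swap in a script (the endpoint is a placeholder):

```shell
# Placeholder endpoint; provider and schema match the tutorial's scaffold step.
ENDPOINT="xxxxxxxx-.hanacloud.ondemand.com:443"
CONN="Server=${ENDPOINT};uid=USER2;pwd=Password2;Current Schema=HOTELS"
PROVIDER="Sap.EntityFrameworkCore.Hana.v9.0"

# --force could be appended to overwrite previously generated classes.
CMD="dotnet ef dbcontext scaffold \"${CONN}\" ${PROVIDER} --schema HOTELS --context HotelsContext"
echo "$CMD"
```

Quoting the connection string is important since it contains a space in `Current Schema`.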
--- diff --git a/tutorials/hana-clients-entity-framework/install.png b/tutorials/hana-clients-entity-framework/install.png index 3fe89e613b..2d9c0e0eac 100644 Binary files a/tutorials/hana-clients-entity-framework/install.png and b/tutorials/hana-clients-entity-framework/install.png differ diff --git a/tutorials/hana-clients-entity-framework/package-list.png b/tutorials/hana-clients-entity-framework/package-list.png index f06ce57fb1..b1a7a2e6ce 100644 Binary files a/tutorials/hana-clients-entity-framework/package-list.png and b/tutorials/hana-clients-entity-framework/package-list.png differ diff --git a/tutorials/hana-clients-entity-framework/scaffold.png b/tutorials/hana-clients-entity-framework/scaffold.png index 74edcc43a2..4f7a610e7a 100644 Binary files a/tutorials/hana-clients-entity-framework/scaffold.png and b/tutorials/hana-clients-entity-framework/scaffold.png differ diff --git a/tutorials/hana-clients-hdbsql/hana-clients-hdbsql.md b/tutorials/hana-clients-hdbsql/hana-clients-hdbsql.md index 8365e5f165..243870c12d 100644 --- a/tutorials/hana-clients-hdbsql/hana-clients-hdbsql.md +++ b/tutorials/hana-clients-hdbsql/hana-clients-hdbsql.md @@ -72,7 +72,7 @@ This step demonstrates how to connect to a SAP HANA instance using [HDBSQL](http An example of configuring this setting is shown in [Allow connections to SAP HANA Cloud instance from selected IP addresses — using the command line](https://blogs.sap.com/2020/10/30/allow-connections-to-sap-hana-cloud-instance-from-selected-ip-addresses-using-the-command-line/). - - The SAP HANA Cloud, HANA database trial instance will be automatically stopped overnight. That means you need to restart your instance before working with it each new day. + - The SAP HANA Cloud, HANA database free tier instance will be automatically stopped overnight. That means you need to restart your instance before working with it each new day. - Connections to a HANA Cloud instance must use encryption. 
The default encryption library on Windows is mscrypto, and on Linux and macOS it is OpenSSL. The following example demonstrates how to use the SAP-provided commoncrypto library instead of the default encryption library. Note that the following steps require the SAP HANA Client to be downloaded from SAP Software Downloads, as that download includes the SAP Common Crypto library (libsapcrypto). Note that the environment variables can also be set by running source hdbclienv.sh or hdbclienv.bat. diff --git a/tutorials/hana-clients-routing/M_SQL_PLAN_CACHE.png b/tutorials/hana-clients-routing/M_SQL_PLAN_CACHE.png new file mode 100644 index 0000000000..94f4c496cf Binary files /dev/null and b/tutorials/hana-clients-routing/M_SQL_PLAN_CACHE.png differ diff --git a/tutorials/hana-clients-routing/add-replica.png b/tutorials/hana-clients-routing/add-replica.png new file mode 100644 index 0000000000..2b0b15169f Binary files /dev/null and b/tutorials/hana-clients-routing/add-replica.png differ diff --git a/tutorials/hana-clients-routing/call-insert-proc.png b/tutorials/hana-clients-routing/call-insert-proc.png new file mode 100644 index 0000000000..18e1f1ce14 Binary files /dev/null and b/tutorials/hana-clients-routing/call-insert-proc.png differ diff --git a/tutorials/hana-clients-routing/call-query-proc.png b/tutorials/hana-clients-routing/call-query-proc.png new file mode 100644 index 0000000000..514ce90afb Binary files /dev/null and b/tutorials/hana-clients-routing/call-query-proc.png differ diff --git a/tutorials/hana-clients-routing/direct-connect-dbisql.png b/tutorials/hana-clients-routing/direct-connect-dbisql.png new file mode 100644 index 0000000000..3476d66581 Binary files /dev/null and b/tutorials/hana-clients-routing/direct-connect-dbisql.png differ diff --git a/tutorials/hana-clients-routing/direct-connect-sql-console.png b/tutorials/hana-clients-routing/direct-connect-sql-console.png new file mode 100644 index 0000000000..22c208b89e Binary files /dev/null
and b/tutorials/hana-clients-routing/direct-connect-sql-console.png differ diff --git a/tutorials/hana-clients-routing/direct-connect-sql-console2.png b/tutorials/hana-clients-routing/direct-connect-sql-console2.png new file mode 100644 index 0000000000..8b67ba1fae Binary files /dev/null and b/tutorials/hana-clients-routing/direct-connect-sql-console2.png differ diff --git a/tutorials/hana-clients-routing/execution-host.png b/tutorials/hana-clients-routing/execution-host.png new file mode 100644 index 0000000000..013ceb9ef2 Binary files /dev/null and b/tutorials/hana-clients-routing/execution-host.png differ diff --git a/tutorials/hana-clients-routing/hana-clients-routing.md b/tutorials/hana-clients-routing/hana-clients-routing.md new file mode 100644 index 0000000000..9503a4aac7 --- /dev/null +++ b/tutorials/hana-clients-routing/hana-clients-routing.md @@ -0,0 +1,176 @@ +--- +parser: v2 +auto_validation: true +time: 10 +tags: [ tutorial>beginner, software-product-function>sap-hana-cloud--sap-hana-database, tutorial>license] +primary_tag: software-product>sap-hana-cloud +--- + +# Routing queries to a read only replica + + This tutorial demonstrates how read only queries can be routed to a replica. The option to add a replica to an SAP HANA Cloud instance requires a productive (non free tier) instance. + +## Prerequisites + +- An SAP HANA Cloud QRC 1 2026 (or newer) instance that supports adding a replica +- A 2.28 (QRC 1 2026) or newer version of the SAP HANA Client + +## You will learn + +- How to add a synchronous replica +- How to direct a SQL query to a replica using a hint +- How to connect to a replica so that read only queries can be executed without using hints +- Additional settings that affect routing + +## Intro + +A replica is used to provide an additional copy of your instance that is kept up to date through replication. 
The replica can then quickly take the place of the source instance using the takeover action in SAP HANA Cloud Central. Sending read-only workloads to the replica offloads work from the source node and provides better utilization. + +The following are some additional sources of information on this topic: + +- [Instance Replication](https://help.sap.com/docs/hana-cloud/sap-hana-cloud-administration-guide/instance-replication) +- [Active/Active (Read-Enabled) Replicas](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-administration-guide/active-active-read-enabled) +- [Client Support for Active/Active (Read Enabled)](https://help.sap.com/docs/SAP_HANA_CLIENT/f1b440ded6144a54ada97ff95dac7adf/c4c65c8be4ba4ef9b07a029928f322f0.html) + +--- + +### Add a replica + +The following steps demonstrate how a replica can be added to an SAP HANA Cloud instance. + +1. In SAP HANA Cloud Central, open the manage configuration wizard. + + ![open the manage configuration wizard](manage-configuration-wziard.png) + +2. Under the availability zone section, select Add Replica and choose Synchronous as the replication mode. + + ![Add replica](add-replica.png) + +3. Once the replica has been added, it is then possible, if needed, to perform a takeover so that the replica becomes the source node. This step is shown for illustrative purposes only and does not need to be completed. + + ![perform a takeover](take-over.png) + +### Hint-based routing + +Individual read-only queries can be routed to the replica. There are some conditions; for example, the isolation level must be read committed. Further details can be found at [Hint-Based Statement Routing for Active/Active (Read Enabled)](https://help.sap.com/docs/SAP_HANA_CLIENT/f1b440ded6144a54ada97ff95dac7adf/a6aa1cc4e070420c97e31fb1afd2ad3d.html). The following steps demonstrate this. + +1.
Verify the version of the SAP HANA client which needs to be 2.28 or higher by executing the below SQL. + + ```SQL + SELECT CLIENT_VERSION, CLIENT_APPLICATION, * FROM M_CONNECTIONS WHERE CONNECTION_ID = CURRENT_CONNECTION; + ``` + +2. Execute the below SQL to create and populate a table named MYTABLE. + + ```SQL + CREATE TABLE MYTABLE (C1 INT GENERATED BY DEFAULT AS IDENTITY, T1 TIMESTAMP); + INSERT INTO MYTABLE(T1) VALUES(CURRENT_TIMESTAMP); + ``` + + This table and its contents will be available on both the source and replica instances. + +3. Execute the below SQL to perform a query against the source node and the replica node. + + ```SQL + SELECT C1 AS QUERY_ON_PRIMARY, STATEMENT_EXECUTION_HOST() FROM MYTABLE; + SELECT C1 AS QUERY_ON_REPLICA, STATEMENT_EXECUTION_HOST() FROM MYTABLE WITH HINT (RESULT_LAG('hana_sr')); + ``` + + ![query with hint to run on a replica](execution-host.png) + + Notice above that the suffix (-1) of the execution host for the replica is different from the source. + +4. Examine the M_SQL_PLAN_CACHE table of the source and replica. + + ```SQL + SELECT HOST, STATEMENT_STRING, USER_NAME, LAST_EXECUTION_TIMESTAMP FROM SYS.M_SQL_PLAN_CACHE WHERE + STATEMENT_STRING LIKE '%MYTABLE%' ORDER BY LAST_EXECUTION_TIMESTAMP DESC; + --Notice that both statements appear but only the first one is executed + + SELECT HOST, STATEMENT_STRING, USER_NAME, LAST_EXECUTION_TIMESTAMP FROM _SYS_VR_REPLICA.M_SQL_PLAN_CACHE WHERE + STATEMENT_STRING LIKE '%MYTABLE%' ORDER BY LAST_EXECUTION_TIMESTAMP DESC; + --Notice that only the statement routed to the replica appears with this query + + --ALTER SYSTEM CLEAR SQL PLAN CACHE; + ``` + + ![querying M_SQL_PLAN_CACHE](M_SQL_PLAN_CACHE.png) + +5. Execute the following SQL to create two stored procedures, one that can be routed to a replica and one that cannot. 
+ + ```SQL + CREATE OR REPLACE PROCEDURE QUERY_PROC() + LANGUAGE SQLSCRIPT + READS SQL DATA + AS + BEGIN + SELECT COUNT(*), STATEMENT_EXECUTION_HOST() FROM MYTABLE; + END; + + CREATE OR REPLACE PROCEDURE INSERT_PROC() + LANGUAGE SQLSCRIPT AS + BEGIN + INSERT INTO MYTABLE(T1) VALUES(CURRENT_TIMESTAMP); + END; + ``` + + Notice above that the first procedure contains the declaration READS SQL DATA, which indicates that it does not modify the schema or data, while the second stored procedure does modify the table's data. Further details on the syntax are available at [CREATE PROCEDURE Statement](https://help.sap.com/docs/HANA_CLOUD_DATABASE_CN/1bb35593d1e54ce48b4f8ce071594d5e/20d467407519101484f190f545d54b24.html?locale=en-US). + +6. Execute the two stored procedures and examine where they are executed. + + ```SQL + CALL QUERY_PROC() WITH HINT (RESULT_LAG('hana_sr')); + + SELECT HOST, STATEMENT_STRING, USER_NAME, LAST_EXECUTION_TIMESTAMP FROM SYS.M_SQL_PLAN_CACHE WHERE + STATEMENT_STRING LIKE '%CALL QUERY_PROC%' ORDER BY LAST_EXECUTION_TIMESTAMP DESC; + + SELECT HOST, STATEMENT_STRING, USER_NAME, LAST_EXECUTION_TIMESTAMP FROM _SYS_VR_REPLICA.M_SQL_PLAN_CACHE WHERE + STATEMENT_STRING LIKE '%CALL QUERY_PROC%' ORDER BY LAST_EXECUTION_TIMESTAMP DESC; + ``` + + ![Call QUERY_PROC](call-query-proc.png) + + ```SQL + SELECT * FROM MYTABLE; + CALL INSERT_PROC() WITH HINT (RESULT_LAG('hana_sr')); + SELECT * FROM MYTABLE; + + SELECT HOST, STATEMENT_STRING, USER_NAME, LAST_EXECUTION_TIMESTAMP FROM SYS.M_SQL_PLAN_CACHE WHERE + STATEMENT_STRING LIKE '%CALL INSERT_PROC%' ORDER BY LAST_EXECUTION_TIMESTAMP DESC; + + SELECT HOST, STATEMENT_STRING, USER_NAME, LAST_EXECUTION_TIMESTAMP FROM _SYS_VR_REPLICA.M_SQL_PLAN_CACHE WHERE + STATEMENT_STRING LIKE '%CALL INSERT_PROC%' ORDER BY LAST_EXECUTION_TIMESTAMP DESC; + ``` + + ![CALL INSERT_PROC](call-insert-proc.png) + +### Directly connect to a replica + +A connection can be made directly to the replica so that individual statements do not need to
include a hint. To do so, use the connection parameter replicationRole with a value of REPLICA. + +```Shell +hdbsql -Z replicationRole=REPLICA -A -n 08849ce0-f173-4139-baba-a0a28399ef55.hana.aws.hcd-us10.hanacloud.ondemand.com:443 -u DBADMIN -p myPassword +``` + +```SQL +SELECT C1 AS QUERY_ON_REPLICA, STATEMENT_EXECUTION_HOST() FROM MYTABLE; +INSERT INTO MYTABLE(T1) VALUES(CURRENT_TIMESTAMP); +``` + +![Direct Connect with DBISQL](direct-connect-dbisql.png) + +Within the SQL Console, this parameter can be provided as shown below using the advanced options. + +![Advanced connection options in the SQL Console](direct-connect-sql-console.png) + +![Advanced connection options in the SQL Console](direct-connect-sql-console2.png) + +### Additional considerations + +If you are using hint-based routing, the statement needs to be prepared before it is executed for the hint to be considered. Some tools such as the SQL Console and HDBSQL always prepare statements before executing them. For applications that do not do this, there is a setting called [routeDirectExecute](https://help.sap.com/docs/SAP_HANA_CLIENT/f1b440ded6144a54ada97ff95dac7adf/4fe9978ebac44f35b9369ef5a4a26f4c.html) that can be enabled. A further example using this setting in a Node.js application is shown in step 7 of the tutorial [Use an Elastic Compute Node (ECN) for Scheduled Workloads](hana-cloud-ecn). + +### Knowledge check + +Congratulations, you have now directed read-only queries to a replica, which can improve utilization of SAP HANA Cloud instances.
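Where a driver connection is configured in application code rather than on the command line, the routeDirectExecute setting described under additional considerations can be supplied as a connection property. The following Node.js sketch is illustrative only: the host, credentials, and table name are placeholders, and it assumes the `@sap/hana-client` driver is installed and a replica exists.

```JavaScript
// Illustrative sketch only: host, user, and password are placeholders.
// routeDirectExecute allows the RESULT_LAG hint to be honored even when
// the application executes statements directly instead of preparing them.
const hana = require('@sap/hana-client');

const conn = hana.createConnection();
conn.connect({
    serverNode: '<host>:443',
    uid: 'DBADMIN',
    pwd: '<password>',
    encrypt: 'true',
    routeDirectExecute: 'true'
});

// Executed directly; with routeDirectExecute enabled, the hint can still
// route this read-only query to the replica.
const rows = conn.exec(
    "SELECT C1, STATEMENT_EXECUTION_HOST() FROM MYTABLE WITH HINT (RESULT_LAG('hana_sr'))"
);
console.log(rows);
conn.disconnect();
```

As in the earlier examples, the value returned by STATEMENT_EXECUTION_HOST() indicates whether the query ran on the source instance or on the replica.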
+ +--- diff --git a/tutorials/hana-clients-routing/manage-configuration-wziard.png b/tutorials/hana-clients-routing/manage-configuration-wziard.png new file mode 100644 index 0000000000..2eb38c5bd4 Binary files /dev/null and b/tutorials/hana-clients-routing/manage-configuration-wziard.png differ diff --git a/tutorials/hana-clients-routing/take-over.png b/tutorials/hana-clients-routing/take-over.png new file mode 100644 index 0000000000..505fce454d Binary files /dev/null and b/tutorials/hana-clients-routing/take-over.png differ diff --git a/tutorials/hana-clients-trace/hana-clients-trace.md b/tutorials/hana-clients-trace/hana-clients-trace.md index 9ee25a03a3..1022e16b5d 100644 --- a/tutorials/hana-clients-trace/hana-clients-trace.md +++ b/tutorials/hana-clients-trace/hana-clients-trace.md @@ -55,7 +55,7 @@ Trace settings can also be configured using environment variables, or via connec The `%p` will be replaced with the process ID of the traced application. Including `%p` in the file name ensures that each process can write its own trace file. - >The next step provides an example of sending the trace output to `stdout` or `stderr`. Another option for Node.js applications is to specify a callback to receive the trace output to using the `onTrace` method which is shown in the tutorial [Connect Using the SAP HANA Node.js Interface](hana-clients-node) + >The next step provides an example of sending the trace output to `stdout` or `stderr`. Another option for Node.js applications is to specify a callback to receive the trace output using the `onTrace` method, which is shown in the tutorial [Connect Using the SAP HANA Node.js Interface](hana-clients-node).
Example trace categories include: @@ -112,7 +112,7 @@ Trace settings can also be configured using environment variables, or via connec BUILD MODE: rel APPLICATION: C:\SAP\hdbclient\hdbsql.exe HOST: W-R90XC65K - OS USER: I826567 + OS USER: I234567 CURRENT DIRECTORY: c:\temp\traces TRACE FILE NAME: c:\temp\traces\SQLDBC-55652.txt PROCESS ID: 55652 @@ -144,6 +144,7 @@ Trace settings can also be configured using environment variables, or via connec ```Shell hdbsqldbc_cons TRACE OFF + hdbsqldbc_cons SHOW ALL ``` @@ -163,7 +164,30 @@ The following are some additional options for tracing. hdbsqldbc_cons TRACE ONLY ON ERROR 10 ``` -3. In situations where `hdbsqldbc_cons` is not accessible, perhaps because a driver was installed directly using npm or pip, trace settings can be set using environment variables. +3. Filtering can be used to reduce the size of trace files. Place the following SQL statements into a file named sql.sql to try this out. + + ```SQL + SELECT 'tracetestUSER1' FROM DUMMY; + CONNECT USER2 PASSWORD Password1; + SELECT 'tracetestUSER2' FROM DUMMY; + ``` + + Execute the commands below to reset the trace settings, enable SQL trace for USER2, and then to run the above SQL statements which will execute as USER1 and then as USER2. + + ```Shell + hdbsqldbc_cons TRACE OFF + hdbsqldbc_cons TRACE SQL ON LEVEL INFO + hdbsqldbc_cons TRACE FILTER SQL USER USER2 + hdbsql -U User1UserKey -I sql.sql + ``` + + The expected result is that the resulting trace file only traces the query from USER2. + +4. In situations where `hdbsqldbc_cons` is not accessible, perhaps because a driver was installed directly using npm or pip, trace settings can be set using environment variables. The following values can be used in the trace file name. 
+ + * %p represents the process ID + * %a represents the application user + * %c represents the connection ID ```Shell (Windows) set HDB_SQLDBC_TRACEFILE=c:\temp\traces\SQLDBC-%p.txt @@ -198,7 +222,7 @@ The following are some additional options for tracing. ![Environment Variable Values](EnvironmentVariable.png) -4. Trace information can be directed to `stdout` or `stderr`. See below for a few examples. +5. Trace information can be directed to `stdout` or `stderr`. See below for a few examples. ```Shell hdbsql -U User1UserKey -Z traceFile=stdout -Z traceOptions=sql=warning "SELECT * FROM HOTELS.CUSTOMER" @@ -212,7 +236,7 @@ The following are some additional options for tracing. set HDB_SQLDBC_TRACEFILE= ``` -5. Tracing can also be enabled in an application's connection properties. For further details see `traceFile` and `traceOptions` in [SQLDBC Connection Properties](https://help.sap.com/docs/SAP_HANA_CLIENT/f1b440ded6144a54ada97ff95dac7adf/f6fb06ffe4484f6fa61f10082b11663d.html). +6. Tracing can also be enabled in an application's connection properties. For further details see `traceFile` and `traceOptions` in [SQLDBC Connection Properties](https://help.sap.com/docs/SAP_HANA_CLIENT/f1b440ded6144a54ada97ff95dac7adf/f6fb06ffe4484f6fa61f10082b11663d.html). ### Tracing a JDBC Connection diff --git a/tutorials/hana-cloud-automation-rest/hana-cloud-automation-rest.md b/tutorials/hana-cloud-automation-rest/hana-cloud-automation-rest.md index 62cdc6ef3c..e684e7b41b 100644 --- a/tutorials/hana-cloud-automation-rest/hana-cloud-automation-rest.md +++ b/tutorials/hana-cloud-automation-rest/hana-cloud-automation-rest.md @@ -70,15 +70,15 @@ The API's can be invoked using a tool such as the [REST Client](https://marketpl #From the clientsecret field in the service binding. @clientsecret = + #Instance ID of a SAP HANA Cloud instance in the same subaccount as the service manager instance + @instanceid = + #Generated by the request bearer token call. 
Copy the access_token value from the result without the quotes @bearer = - #Instance ID of a SAP HANA Cloud instance in the same subaccount as the service manager instance - @instanceid = - #Authorization REST API call -------------------- - #Request bearer token + #Request bearer token and paste result into the bearer variable above #See also https://help.sap.com/docs/service-manager/sap-service-manager/retrieve-oauth2-access-token #Note that by default the token expires after 1799 seconds or 30 minutes as seen in the response in the expires_in field GET {{uaa_url}}/{{oauth}} @@ -89,7 +89,7 @@ The API's can be invoked using a tool such as the [REST Client](https://marketpl #Service plan query #See also https://help.sap.com/docs/service-manager/sap-service-manager/filtering-parameters-and-operators #See also https://api.sap.com/api/APIServiceManager/resource/Service_Plans - GET {{uri}}/v1/service_plans?fieldQuery=name eq 'hana' + GET {{uri}}/v1/service_plans?fieldQuery=name contains 'hana' Authorization: Bearer {{bearer}} ### diff --git a/tutorials/hana-cloud-data-products-consumption/create-formation-1.png b/tutorials/hana-cloud-data-products-consumption/create-formation-1.png new file mode 100644 index 0000000000..8e043858f8 Binary files /dev/null and b/tutorials/hana-cloud-data-products-consumption/create-formation-1.png differ diff --git a/tutorials/hana-cloud-data-products-consumption/create-formation-2.png b/tutorials/hana-cloud-data-products-consumption/create-formation-2.png new file mode 100644 index 0000000000..f7df7637a5 Binary files /dev/null and b/tutorials/hana-cloud-data-products-consumption/create-formation-2.png differ diff --git a/tutorials/hana-cloud-data-products-consumption/create-formation-3.png b/tutorials/hana-cloud-data-products-consumption/create-formation-3.png new file mode 100644 index 0000000000..ea14085598 Binary files /dev/null and b/tutorials/hana-cloud-data-products-consumption/create-formation-3.png differ diff --git 
a/tutorials/hana-cloud-data-products-consumption/explore-a-data-product.png b/tutorials/hana-cloud-data-products-consumption/explore-a-data-product.png new file mode 100644 index 0000000000..df41286333 Binary files /dev/null and b/tutorials/hana-cloud-data-products-consumption/explore-a-data-product.png differ diff --git a/tutorials/hana-cloud-data-products-consumption/explore-a-data-product2.png b/tutorials/hana-cloud-data-products-consumption/explore-a-data-product2.png new file mode 100644 index 0000000000..6a1ed4a66f Binary files /dev/null and b/tutorials/hana-cloud-data-products-consumption/explore-a-data-product2.png differ diff --git a/tutorials/hana-cloud-data-products-consumption/explore-a-data-product3.png b/tutorials/hana-cloud-data-products-consumption/explore-a-data-product3.png new file mode 100644 index 0000000000..6e699ea1de Binary files /dev/null and b/tutorials/hana-cloud-data-products-consumption/explore-a-data-product3.png differ diff --git a/tutorials/hana-cloud-data-products-consumption/find-data-product.png b/tutorials/hana-cloud-data-products-consumption/find-data-product.png new file mode 100644 index 0000000000..23668758b5 Binary files /dev/null and b/tutorials/hana-cloud-data-products-consumption/find-data-product.png differ diff --git a/tutorials/hana-cloud-data-products-consumption/find-data-product2.png b/tutorials/hana-cloud-data-products-consumption/find-data-product2.png new file mode 100644 index 0000000000..69660e1545 Binary files /dev/null and b/tutorials/hana-cloud-data-products-consumption/find-data-product2.png differ diff --git a/tutorials/hana-cloud-data-products-consumption/find-data-product3.png b/tutorials/hana-cloud-data-products-consumption/find-data-product3.png new file mode 100644 index 0000000000..026bb5fd2c Binary files /dev/null and b/tutorials/hana-cloud-data-products-consumption/find-data-product3.png differ diff --git a/tutorials/hana-cloud-data-products-consumption/find-data-product4.png 
b/tutorials/hana-cloud-data-products-consumption/find-data-product4.png new file mode 100644 index 0000000000..5db3a44b97 Binary files /dev/null and b/tutorials/hana-cloud-data-products-consumption/find-data-product4.png differ diff --git a/tutorials/hana-cloud-data-products-consumption/find-data-product5.png b/tutorials/hana-cloud-data-products-consumption/find-data-product5.png new file mode 100644 index 0000000000..d7009a80d9 Binary files /dev/null and b/tutorials/hana-cloud-data-products-consumption/find-data-product5.png differ diff --git a/tutorials/hana-cloud-data-products-consumption/hana-cloud-data-products-consumption.md b/tutorials/hana-cloud-data-products-consumption/hana-cloud-data-products-consumption.md new file mode 100644 index 0000000000..a7d44c804e --- /dev/null +++ b/tutorials/hana-cloud-data-products-consumption/hana-cloud-data-products-consumption.md @@ -0,0 +1,148 @@ +--- +parser: v2 +auto_validation: true +time: 10 +tags: [ tutorial>beginner, software-product-function>sap-hana-cloud--sap-hana-database, tutorial>license] +primary_tag: software-product>sap-hana-cloud +--- + +# Access and Query a Data Product in SAP HANA Cloud + + Learn how to share a data product that has been published in the SAP Business Data Cloud (SAP BDC) with SAP HANA Cloud so that its contents can be queried as virtual tables. Data products are read-only curated data that has been produced by an SAP application such as SAP S/4HANA, SAP SuccessFactors, SAP Ariba, and Concur among others. They provide access to SAP data without the need for complex data preparation steps. SAP Business Data Cloud (SAP BDC) is a data platform that harmonizes all data from SAP and non-SAP sources, into a unified semantic layer of trusted data, to power advanced analytics and to build AI applications. 
SAP HANA Cloud is an in-memory database that provides a multi-model engine, access to both structured and unstructured data, and embedded libraries for machine learning, enabling real-time analytics. + +For additional details on these topics see: + +* [Introducing SAP Business Data Cloud](https://learning.sap.com/courses/introducing-sap-business-data-cloud) +* [Start your free trial of SAP Business Data Cloud](https://www.sap.com/products/data-cloud/trial.html) +* [SAP Business Data Cloud | SAP Community](https://pages.community.sap.com/topics/business-data-cloud) +* [Provisioning and Administering Databases in SAP HANA Cloud](https://learning.sap.com/courses/provisioning-and-administering-databases-in-sap-hana-cloud) +* [Basic Trial - Introduction to SAP HANA Cloud](https://learning.sap.com/courses/prd-hc-introduction) +* [Data Product Support in SAP HANA Cloud | SAP Help Portal](https://help.sap.com/docs/hana-cloud/sap-hana-cloud-administration-guide/data-product-support-in-sap-hana-cloud-internal) + +## Prerequisites + +* Access to SAP Business Data Cloud and a productive SAP HANA Cloud instance + +## You will learn + +* How to configure SAP Business Data Cloud and SAP HANA Cloud in a formation so that data products can be shared from SAP BDC to an SAP HANA Cloud database +* How to share a selected data product +* How to install the shared data product into SAP HANA Cloud +* How to view the installed data product in the data products tab, along with its data sources +* How to query data from the shared data product in SAP HANA Cloud + +--- + +### Set up a formation + +A formation is a logical grouping of SAP systems that simplifies connectivity setup and provides a unified view of the systems.
For further details see [How Data Products Become Available in SAP HANA Cloud](https://help.sap.com/docs/hana-cloud/sap-hana-cloud-administration-guide/how-data-products-become-available-in-sap-hana-cloud), [Automating Integrations Using Formations](https://help.sap.com/docs/btp/sap-business-technology-platform/including-sap-systems-in-formation?locale=en-US&version=Cloud), [Creating SAP Business Data Cloud Formations](https://help.sap.com/docs/business-data-cloud/administering-sap-business-data-cloud/creating-sap-business-data-cloud-formations?locale=en-US), and [Introductory Guide to Provisioning of SAP Business Data Cloud](https://community.sap.com/t5/technology-blog-posts-by-sap/introductory-guide-to-provisioning-of-sap-business-data-cloud/ba-p/14117460). + +1. Navigate to formations in the [SAP Business Technology Platform (SAP BTP) Cockpit](https://cockpit.btp.cloud.sap/cockpit#) and select Create Formation. + + ![Formations in SAP BTP Cockpit](create-formation-1.png) + +2. Create a new formation of type Integrations with SAP Business Data Cloud. + + ![Step 1 of create formation](create-formation-2.png) + +3. Select the products to be part of the formation. + + ![Step 2 of create formation](create-formation-3.png) + +4. Complete the formation. + +### Locate and examine a data product in SAP BDC + +Now that trust has been established between SAP BDC and SAP HANA Cloud, a data product can be located in SAP BDC and shared to SAP HANA Cloud. + +1. In the SAP BDC cockpit, select Catalog & Marketplace and search for a data product. + + ![Find a data product](find-data-product.png) + +2. The details of the data product, including the description and its properties, can be examined. + + ![View the description and properties](find-data-product2.png) + +3. Details of the objects included can be displayed by examining the API. + + ![View the objects](find-data-product3.png) + + It is also possible to examine the columns of each object by clicking View Columns.
+ + ![View the objects](find-data-product4.png) + + ![View the objects](find-data-product5.png) + +### Share a data product to SAP HANA Cloud + +1. Once a data product has been identified that you wish to be able to access from SAP HANA Cloud, select the Share button in the top right of its details page. + + ![Share a data product](share-a-data-product.png) + + Specify the system to share the data product with by pressing Add a Target. + + ![Choose to add a target](share-a-data-product2.png) + + Specify an SAP HANA Cloud system as a target. + + ![Specify the target to be shared to](share-a-data-product3.png) + + Finally complete the process by pressing Share. + + ![Complete the sharing of the data product](share-a-data-product4.png) + +### Install a data product into SAP HANA Cloud + +The data product that was previously shared can now be installed into SAP HANA Cloud which will create a remote source to the data product and create virtual tables for each object in the data product. + +1. Open SAP HANA Cloud Central and select the SAP HANA Cloud instance in which you wish to install the data product. + + ![View the shared data products](install-a-data-product.png) + +2. Select the previously shared data product and choose install. + + ![Install the data product](install-a-data-product2.png) + +3. The database user that is currently connected will be granted read access to the virtual tables. The below SQL is an example of granting an additional user access to the schema that contains the virtual tables. + + ```SQL + GRANT SELECT ON SCHEMA "_SAP_DATAPRODUCT_sap_s4com_dataProduct_SalesOrder_v1_4a6dc5d7-7af5-4b74-8ac7-b9ed0d1e6e95" TO USER1; + ``` + +### Examine the remote source and virtual tables + +1. Start by selecting the schema of the data product. + + ![Select the schema](explore-a-data-product.png) + +2. The database objects app will now open with a schema filter applied. 
+ + ![database objects app filtered on schema](explore-a-data-product2.png) + + As shown above a virtual table was created for each object in the data product. + +3. A remote source was created that provides the connection to the data product. + + ![database objects app filtered on schema](explore-a-data-product3.png) + + Further details on remote sources can be found at [CREATE REMOTE SOURCE](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-sql-reference-guide/create-remote-source-statement-access-control?locale=en-US). + +### Query the data product using SAP HANA Cloud + +The data in the data product can now be queried using the virtual tables. An example is shown below. + +```SQL +SET SCHEMA "_SAP_DATAPRODUCT_sap_s4com_dataProduct_SalesOrder_v1_5aad3293-2564-4873-8c15-188e653f781b"; +SELECT COUNT(*) FROM "_SAP_DATAPRODUCT_c9429f3e-0d0b-4d0b-9364-cec9e77daebe_salesorder.SalesOrder" AS SALES_ORDER; +SELECT COUNT(*) FROM "_SAP_DATAPRODUCT_c9429f3e-0d0b-4d0b-9364-cec9e77daebe_salesorder.SalesOrderItem" AS SALES_ORDER_ITEM; +SELECT * FROM "_SAP_DATAPRODUCT_c9429f3e-0d0b-4d0b-9364-cec9e77daebe_salesorder.SalesOrder" AS SALES_ORDER; +SELECT * FROM "_SAP_DATAPRODUCT_c9429f3e-0d0b-4d0b-9364-cec9e77daebe_salesorder.SalesOrderItem" AS SALES_ORDER_ITEM; +SELECT "SalesOrderItemText", COUNT("SalesOrderItemText") FROM "_SAP_DATAPRODUCT_c9429f3e-0d0b-4d0b-9364-cec9e77daebe_salesorder.SalesOrderItem" + AS SALES_ORDER_ITEM GROUP BY "SalesOrderItemText" ORDER BY COUNT("SalesOrderItemText") DESC; +``` + +![Query the data product](query-data-product.png) + +### Knowledge check + +Congratulations! You have now learned how to install and query a data product using SAP HANA Cloud. 
See the blog post [Consuming Data Products in SAP HANA Cloud via SAP Business Application Studio/SAP Build Code](https://community.sap.com/t5/technology-blog-posts-by-sap/consuming-data-products-in-sap-hana-cloud-via-sap-business-application/ba-p/14320009) to learn how to use a data product from within a calculation view. diff --git a/tutorials/hana-cloud-data-products-consumption/install-a-data-product.png b/tutorials/hana-cloud-data-products-consumption/install-a-data-product.png new file mode 100644 index 0000000000..8f3ee8a1f7 Binary files /dev/null and b/tutorials/hana-cloud-data-products-consumption/install-a-data-product.png differ diff --git a/tutorials/hana-cloud-data-products-consumption/install-a-data-product2.png b/tutorials/hana-cloud-data-products-consumption/install-a-data-product2.png new file mode 100644 index 0000000000..bdb7be30ba Binary files /dev/null and b/tutorials/hana-cloud-data-products-consumption/install-a-data-product2.png differ diff --git a/tutorials/hana-cloud-data-products-consumption/query-data-product.png b/tutorials/hana-cloud-data-products-consumption/query-data-product.png new file mode 100644 index 0000000000..b9da128a01 Binary files /dev/null and b/tutorials/hana-cloud-data-products-consumption/query-data-product.png differ diff --git a/tutorials/hana-cloud-data-products-consumption/share-a-data-product.png b/tutorials/hana-cloud-data-products-consumption/share-a-data-product.png new file mode 100644 index 0000000000..95e7387e0b Binary files /dev/null and b/tutorials/hana-cloud-data-products-consumption/share-a-data-product.png differ diff --git a/tutorials/hana-cloud-data-products-consumption/share-a-data-product2.png b/tutorials/hana-cloud-data-products-consumption/share-a-data-product2.png new file mode 100644 index 0000000000..3455b4e383 Binary files /dev/null and b/tutorials/hana-cloud-data-products-consumption/share-a-data-product2.png differ diff --git 
a/tutorials/hana-cloud-data-products-consumption/share-a-data-product3.png b/tutorials/hana-cloud-data-products-consumption/share-a-data-product3.png new file mode 100644 index 0000000000..7f219eab60 Binary files /dev/null and b/tutorials/hana-cloud-data-products-consumption/share-a-data-product3.png differ diff --git a/tutorials/hana-cloud-data-products-consumption/share-a-data-product4.png b/tutorials/hana-cloud-data-products-consumption/share-a-data-product4.png new file mode 100644 index 0000000000..60477c8b90 Binary files /dev/null and b/tutorials/hana-cloud-data-products-consumption/share-a-data-product4.png differ diff --git a/tutorials/hana-cloud-dl-clients-dot-net/dotNET-csproj-code.png b/tutorials/hana-cloud-dl-clients-dot-net/dotNET-csproj-code.png index 4d3e1141b4..2124b5e056 100644 Binary files a/tutorials/hana-cloud-dl-clients-dot-net/dotNET-csproj-code.png and b/tutorials/hana-cloud-dl-clients-dot-net/dotNET-csproj-code.png differ diff --git a/tutorials/hana-cloud-dl-clients-dot-net/hana-cloud-dl-clients-dot-net.md b/tutorials/hana-cloud-dl-clients-dot-net/hana-cloud-dl-clients-dot-net.md index d95b647283..1947147bfe 100644 --- a/tutorials/hana-cloud-dl-clients-dot-net/hana-cloud-dl-clients-dot-net.md +++ b/tutorials/hana-cloud-dl-clients-dot-net/hana-cloud-dl-clients-dot-net.md @@ -7,26 +7,32 @@ primary_tag: software-product-function>sap-hana-cloud--data-lake --- # Connect to Data Lake Relational Engine Using the .NET Driver + Create and debug a .NET application that connects to a data lake Relational Engine. ## Prerequisites - - You have completed the first tutorial in this group. + +- You have completed the first tutorial in this group. 
## You will learn - - How to install the .NET SDK - - How to create and debug a .NET application that queries a data lake Relational Engine + +- How to install the .NET SDK +- How to create and debug a .NET application that queries a data lake Relational Engine ## Intro + [.NET](https://en.wikipedia.org/wiki/.NET_Core) is a free and open-source software framework for Microsoft Windows, Linux and Mac operating systems and is the successor to the .NET Framework. .NET was previously known as .NET Core. --- ### Install the .NET SDK + The first step is to check if you have the .NET SDK installed and what version it is. Enter the following command: ```Shell dotnet --version ``` + If the `dotnet` command is not recognized, it means that the .NET SDK has not been installed. If the SDK is installed, the command returns the currently installed version, such as 9.0.101. If the .NET SDK is not installed, download it from [Download .NET](https://dotnet.microsoft.com/download) and run the installer on Microsoft Windows. @@ -38,9 +44,9 @@ On Linux, follow the instructions for the appropriate Linux version such as [Ins In order for the shell to recognize that the .NET SDK is installed and for any `dotnet` commands in future steps to be recognized, a new shell window needs to be opened. - ### Create a .NET application that queries a data lake Relational Engine -1. Create a new console app with the below commands: + +1. Create a new console app with the below commands: ```Shell (Microsoft Windows) cd %HOMEPATH%/DataLakeClientsTutorial @@ -52,7 +58,7 @@ In order for the shell to recognize that the .NET SDK is installed and for any ` dotnet new console -o dotNET ``` -2. Open the `dotNET.csproj` file: +2. 
Open the `dotNET.csproj` file: ```Shell (Microsoft Windows) cd dotNET @@ -69,7 +75,7 @@ In order for the shell to recognize that the .NET SDK is installed and for any ` ```Shell (Microsoft Windows) - C:\SAP\DLClient\IQ-17_1\Assembly\Core2.1\Sap.Data.SQLAnywhere.Core.v2.1.dll + C:\SAP\hdlclient\sdk\dotnet\Sap.Data.SQLAnywhere.Core.v2.1.dll ``` @@ -77,33 +83,27 @@ In order for the shell to recognize that the .NET SDK is installed and for any ` ```Shell (Linux) - /home/dan/sap/dlclient/IQ-17_1/assembly/core2.1/Sap.Data.SQLAnywhere.Core.v2.1.dll + /home/dan/sap/hdlclient/sdk/dotnet/Sap.Data.SQLAnywhere.Core.v2.1.dll ``` - Note that if the developer licensed version of the data lake Client was installed the path might be similar to - - ``` - C:\SAP\hdlclient\sdk\dotnet\Sap.Data.SQLAnywhere.Core.v2.1.dll - or - /home/dan/sap/hdlclient/sdk/dotnet/Sap.Data.SQLAnywhere.Core.v2.1.dll - ``` - - ![dotNET.csproj code](dotNET-csproj-code.png) + ![dotNET.csproj updates](dotNET-csproj-code.png) Once the `dotNet.csproj` file has been updated, save and close the file. -3. Run the app to validate that data lake driver can be loaded: +3. Run the app to validate that data lake driver can be loaded: ```Shell dotnet run ``` - >If an error occurs, double check that the hintpath is correct and that the IQ.sh variables have been set. + + >If an error occurs, double check that the hintpath is correct and on Linux that the script hdlclienv.sh to set the variables has been run. ![Result of running the app](result0.png) -4. Open an editor to edit the file `Program.cs`. +4. Open an editor to edit the file `Program.cs`. + ```Shell (Windows) notepad Program.cs ``` @@ -111,8 +111,8 @@ In order for the shell to recognize that the .NET SDK is installed and for any ` ```Shell (Linux) pico Program.cs ``` - -5. Replace the entire contents of `Program.cs` with the code below. Update the host value in the connection string. + +5. Replace the entire contents of `Program.cs` with the code below. 
Update the host value in the connection string. ```C# using System; diff --git a/tutorials/hana-cloud-dl-clients-golang/createModule.png b/tutorials/hana-cloud-dl-clients-golang/createModule.png index 2c44a8aab8..b2c5b6d342 100644 Binary files a/tutorials/hana-cloud-dl-clients-golang/createModule.png and b/tutorials/hana-cloud-dl-clients-golang/createModule.png differ diff --git a/tutorials/hana-cloud-dl-clients-golang/goModContents.png b/tutorials/hana-cloud-dl-clients-golang/goModContents.png index 12eae74d75..82de8566f8 100644 Binary files a/tutorials/hana-cloud-dl-clients-golang/goModContents.png and b/tutorials/hana-cloud-dl-clients-golang/goModContents.png differ diff --git a/tutorials/hana-cloud-dl-clients-golang/hana-cloud-dl-clients-golang.md b/tutorials/hana-cloud-dl-clients-golang/hana-cloud-dl-clients-golang.md index e255e0b14b..577ef54997 100644 --- a/tutorials/hana-cloud-dl-clients-golang/hana-cloud-dl-clients-golang.md +++ b/tutorials/hana-cloud-dl-clients-golang/hana-cloud-dl-clients-golang.md @@ -7,21 +7,26 @@ primary_tag: software-product-function>sap-hana-cloud--data-lake --- # Connect to Data Lake Relational Engine Using the Go Driver + Create and debug a Go application that connects to data lake Relational Engine. ## Prerequisites - - You have completed the first tutorial in this group. + +- You have completed the first tutorial in this group. ## You will learn - - How to install Go - - How to create and debug a Go application that queries a data lake Relational Engine + +- How to install Go +- How to create and debug a Go application that queries a data lake Relational Engine ## Intro + Go is an open-source programming language developed by Google to increase productivity among programmers. For more information, see the [Go Documentation](https://golang.org/doc/). --- ### Install Go + The first step is to check if Go is installed, and if so, which version. 
To do so, enter the following command: ```Shell @@ -30,7 +35,7 @@ go version ![go version linux](version2.png) -If Go is installed, then it will return the currently installed version, such as 1.23.4 +If Go is installed, then it will return the currently installed version, such as 1.26.1 If it is not installed, download it from [Download Go](https://golang.org/dl/), run the installer, follow the provided instructions, and ensure that Go is in your path. @@ -39,6 +44,7 @@ On Linux, follow the instructions for the appropriate Linux version: [Installing >Note: A new shell window must be opened for the system to recognize the Go installation and for executing any future Go commands. ### Configure the environment + The data lake Relational Engine Client interface for Go, like the other data lake Relational Engine client interfaces (except JDBC), makes use of a C library named SQLDBC. The Go driver loads the SQLDBC library named `libdbcapiHDB` using [cgo](https://golang.org/cmd/cgo/). For further information on the following steps, consult [Go (golang) Driver](https://help.sap.com/docs/hana-cloud-data-lake/client-interfaces/go-golang-driver) in the SAP HANA Cloud, Data Lake Client Interfaces Reference Guide. In order to use the Go Driver, a 64-bit `gcc` compiler is required. @@ -68,8 +74,9 @@ The Go driver loads the SQLDBC library named `libdbcapiHDB` using [cgo](https:/ ```Shell gcc --version ``` + On Linux (if needed), install the System GNU C compiler for your version of Linux. Note that if you are using openSUSE, minGW is included in the installation for Go through YaST. - + ![gcc 64-bit](gccLinux.png) 2. Examine the Go environment by running the below command: @@ -84,15 +91,14 @@ The Go driver loads the SQLDBC library named `libdbcapiHDB` using [cgo](https:/ 3. Set the `CGO_LDFLAGS` environment variable to point to the location of the HDLRE client library as shown below. 
- On Windows, search **Edit the System Environment Variables** and click on **Environment Variables**. Add a **NEW** user variable. Set the variable name to **CGO_LDFLAGS** and the value as the location of `dbcapi` library: `C:\SAP\hdlclient\IQ-17_1\bin64\dbcapi.dll` + On Windows, search **Edit the System Environment Variables** and click on **Environment Variables**. Add a **NEW** user variable. Set the variable name to **CGO_LDFLAGS** and the value as the location of `dbcapi` library: `C:\SAP\hdlclient\bin64\dbcapi.dll` ![Set Environment Variables](setEnvVar.png) >It is also possible on Microsoft Windows to set this using the SETX command from a shell. - On Linux, check if the following variable are defined. - + ```Shell (Linux) echo $CGO_LDFLAGS echo $LD_LIBRARY_PATH @@ -102,35 +108,34 @@ The Go driver loads the SQLDBC library named `libdbcapiHDB` using [cgo](https:/ ```Shell (Linux) pico .bashrc - export CGO_LDFLAGS=$HOME/sap/dlclient/IQ-17_1/lib64/libdbcapi_r.so - export CGO_LDFLAGS=$HOME/sap/hdlclient/lib64/libdbcapi_r.so - export LD_LIBRARY_PATH=$HOME/sap/dlclient/IQ-17_1/lib64 - export LD_LIBRARY_PATH=$HOME/sap/hdlclient/lib64 + export CGO_LDFLAGS=$HDL_CLIENT_HOME/lib64/libdbcapi_r.so + export LD_LIBRARY_PATH=$HDL_CLIENT_HOME/lib64 ``` - + ![.bash_profile contents](bashProfileAfterCGO.png) 4. Navigate to the driver folder and create a Go module. Note that the path may be different depending on the data lake client install used. 
```Shell (Windows) - cd %IQDIR17%\sdk\golang\SAP\go-hdlre\driver + cd %HDL_CLIENT_HOME%\sdk\golang\SAP\go-hdlre\driver go mod init "SAP/go-hdlre/driver" go mod tidy ``` - + ```Shell (Linux) - cd $IQDIR17/sdk/golang-hdlre/src/SAP/go-hdlre/driver - cd $IQDIR17/sdk/golang/SAP/go-hdlre/driver/ + cd $HDL_CLIENT_HOME/sdk/golang/SAP/go-hdlre/driver/ go mod init "SAP/go-hdlre/driver" go mod tidy ``` + ![createModule](createModule.png) The contents of the data lake Client folder is not writeable so you may need to change the permissions on the driver folder or copy files to a new location. ### Create a Go application that queries an SAP data lake Relational Engine + 1. In a shell, create a folder named `go`, enter the newly created directory, and open a file named `goQuery.go` in an editor. - + ```Shell (Windows) mkdir %HOMEPATH%\DataLakeClientsTutorial\go cd %HOMEPATH%\DataLakeClientsTutorial\go @@ -200,8 +205,7 @@ The Go driver loads the SQLDBC library named `libdbcapiHDB` using [cgo](https:/ go mod init "go/goQuery" go mod tidy notepad go.mod - ``` - + ``` ```Shell (Linux) go mod init "go/goQuery" @@ -209,18 +213,16 @@ The Go driver loads the SQLDBC library named `libdbcapiHDB` using [cgo](https:/ pico go.mod ``` -4. Add the code below to `go.mod` under the go 1.23.4 (version) line: - +4. Add the code below to `go.mod` under the go 1.26.1 (version) line: + >Ensure you have the correct path to the driver folder. The path depends on your installation. Note that two example locations are provided. Choose the one that's closest to your installation and edit it if necessary. 
```Code (Windows) - replace SAP/go-hdlre/driver v0.1.0 => C:\SAP\dlclient\IQ-17_1\SDK\golang\SAP\go-hdlre\driver replace SAP/go-hdlre/driver v0.1.0 => C:\SAP\hdlclient\sdk\golang\SAP\go-hdlre\driver require SAP/go-hdlre/driver v0.1.0 ``` - + ```Code (Linux) - replace SAP/go-hdlre/driver v0.1.0 => /home/name/sap/dlclient/IQ-17_1/sdk/golang-hdlre/src/SAP/go-hdlre/driver replace SAP/go-hdlre/driver v0.1.0 => /home/name/sap/hdlclient/sdk/golang/SAP/go-hdlre/driver require SAP/go-hdlre/driver v0.1.0 ``` @@ -242,8 +244,8 @@ The Go driver loads the SQLDBC library named `libdbcapiHDB` using [cgo](https:/ For more information on the API's used, consult the SAP HANA Cloud, data lake connection specific properties at [Connect from Go to Data Lake Relational Engine](https://help.sap.com/docs/SAP_HANA_DATA_LAKE/a895964984f210158925ce02750eb580/0b55e305d26941c191c71eaa07f72bb5.html), [Go Database/SQL Tutorial](http://go-database-sql.org/index.html), and [Package SQL](https://golang.org/pkg/database/sql/) - ### Debug the application + Visual Studio Code provides plugins for Go and can be used to debug an application. 1. If you have not already done so, download [Visual Studio Code](https://code.visualstudio.com/Download). @@ -272,11 +274,8 @@ Visual Studio Code provides plugins for Go and can be used to debug an applicati >Note that debugging can also be performed from the command line using [Delve](https://github.com/go-delve/delve ). - - ### Knowledge check -Congratulations! You have now created and debugged a Go application that connects to and queries a data lake Relational Engine. - +Congratulations! You have now created and debugged a Go application that connects to and queries a data lake Relational Engine. 
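As a wrap-up, the cgo environment configured earlier in this tutorial can be re-verified at any time with a short script before the next `go build`. This is a sketch: the default install location below is an assumption based on the paths shown earlier, so adjust it for your machine.

```shell
# Verify the cgo environment used by the Go driver (Linux).
# HDL_CLIENT_HOME's default value here is an assumed install location.
HDL_CLIENT_HOME="${HDL_CLIENT_HOME:-$HOME/sap/hdlclient}"
export CGO_LDFLAGS="$HDL_CLIENT_HOME/lib64/libdbcapi_r.so"
export LD_LIBRARY_PATH="$HDL_CLIENT_HOME/lib64"
if [ -f "$CGO_LDFLAGS" ]; then
  echo "dbcapi library found: $CGO_LDFLAGS"
else
  echo "dbcapi library not found; check the data lake client install" >&2
fi
```

If the library is reported missing, re-run the client's environment script (or fix the exports in your `.bashrc`) before building.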
--- diff --git a/tutorials/hana-cloud-dl-clients-jdbc/hana-cloud-dl-clients-jdbc.md b/tutorials/hana-cloud-dl-clients-jdbc/hana-cloud-dl-clients-jdbc.md index 62cae697e6..b239951fae 100644 --- a/tutorials/hana-cloud-dl-clients-jdbc/hana-cloud-dl-clients-jdbc.md +++ b/tutorials/hana-cloud-dl-clients-jdbc/hana-cloud-dl-clients-jdbc.md @@ -7,21 +7,25 @@ primary_tag: software-product-function>sap-hana-cloud--data-lake --- # Connect to Data Lake Relational Engine Using the JDBC Driver + Create and debug a Java application that connects to the data lake Relational Engine. ## Prerequisites - - You have completed the first tutorial in this group. + +- You have completed the first tutorial in this group. ## You will learn - - How to create and debug a Java application that connects to and queries a data lake Relational Engine database + +- How to create and debug a Java application that connects to and queries a data lake Relational Engine database ## Intro -[Java Database Connectivity](https://en.wikipedia.org/wiki/Java_Database_Connectivity) (JDBC) provides an [API](https://docs.oracle.com/javase/8/docs/technotes/guides/jdbc/) for accessing databases from Java. An application written to the JDBC standard can be ported to other databases. Database vendors provide JDBC drivers for their database products. Further details of the SAP JDBC driver can be found at [JDBC Driver](https://help.sap.com/docs/hana-cloud-data-lake/client-interfaces/jdbc-driver). +[Java Database Connectivity](https://en.wikipedia.org/wiki/Java_Database_Connectivity) (JDBC) provides an [API](https://docs.oracle.com/javase/8/docs/technotes/guides/jdbc/) for accessing databases from Java. An application written to the JDBC standard can be ported to other databases. Database vendors provide JDBC drivers for their database products. Further details of the SAP JDBC driver can be found at [JDBC Driver](https://help.sap.com/docs/hana-cloud-data-lake/client-interfaces/jdbc-driver). 
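Before following the installation steps below, you can quickly check whether a full JDK (rather than just a JRE) is already on the path. `javac` ships only with a JDK, which makes it the useful probe; a minimal sketch:

```shell
# Probe for a JDK: javac is present only in a JDK, not in a bare JRE.
if command -v javac >/dev/null 2>&1; then
  JDK_PRESENT=yes
  javac -version
else
  JDK_PRESENT=no
  echo "no JDK on the PATH; install one as described below"
fi
echo "JDK present: $JDK_PRESENT"
```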
--- ### Install a JDK + Ensure that you have a Java Development Kit (JDK) installed and make sure it is accessible from your path. You should already have the SAP Java Virtual Machine (JVM) installed after completing the [SAP HANA Cloud, Data Lake Client Interfaces Overview](https://developers.sap.com/tutorials/hana-cloud-dl-clients-overview.html#f86d9ece-1bd4-4add-81a4-aedc6b290e97) prerequisite tutorial. @@ -43,17 +47,17 @@ For Linux, the following command will install Java on openSUSE Leap 15.4. sudo zypper install java-11-openjdk-devel ``` - ### The data lake Relational Engine JDBC driver -The data lake Relational Engine JDBC driver is a type 2 driver, which means it has a native (non-Java) component. For additional details see [Type 2 driver – Native-API driver](https://en.wikipedia.org/wiki/JDBC_driver#Type_2_driver_%E2%80%93_Native-API_driver). The driver is located in `%IQDIR17%\java\sajdbc4.jar` on Microsoft Windows and `$IQDIR17/java/sajdbc4.jar` on Linux. The native component is at `%IQDIR17%\Bin64\dbjdbc17.dll` on Microsoft Windows and `$IQDIR17\lib64\libdbjdbc17.so` on Linux. -See [data lake Relational Engine JDBC driver](https://help.sap.com/docs/hana-cloud-data-lake/developer-guide-for-data-lake-relational-engine/jdbc-drivers) for additional details. +The data lake Relational Engine JDBC driver is a type 2 driver, which means it has a native (non-Java) component. For additional details see [Type 2 driver – Native-API driver](https://en.wikipedia.org/wiki/JDBC_driver#Type_2_driver_%E2%80%93_Native-API_driver). The driver is located in `%HDL_CLIENT_HOME%\java\sajdbc4.jar` on Microsoft Windows and `$HDL_CLIENT_HOME/java/sajdbc4.jar` on Linux. The native component is at `%HDL_CLIENT_HOME%\Bin64\dbjdbc17.dll` on Microsoft Windows and `$HDL_CLIENT_HOME\lib64\libdbjdbc17.so` on Linux. 
+See [data lake Relational Engine JDBC driver](https://help.sap.com/docs/hana-cloud-data-lake/developer-guide-for-data-lake-relational-engine/jdbc-drivers) for additional details. ### Create a Java application that queries data lake Relational Engine + 1. The following commands create a folder named `java`, enter the newly created directory, create a file named `JavaQuery.java`, and open the file in notepad. - >The HOMEPATH environment variable should resolve to your user in your users folder such as c:\users\dan. Its value can be seen on Microsoft Windows by entering echo %HOMEPATH% into a shell. + >The HOMEPATH environment variable should resolve to your user in your user's folder such as c:\users\dan. Its value can be seen on Microsoft Windows by entering echo %HOMEPATH% into a shell. ```Shell (Microsoft Windows) mkdir %HOMEPATH%\DataLakeClientsTutorial\java @@ -112,19 +116,19 @@ See [data lake Relational Engine JDBC driver](https://help.sap.com/docs/hana-clo Compile the `.java` file into a `.class` file using the following command: ```Shell (Microsoft Windows) - javac -cp %IQDIR17%\Java\sajdbc4.jar;. JavaQuery.java + javac -cp %HDL_CLIENT_HOME%\Java\sajdbc4.jar;. JavaQuery.java ``` ```Shell (Linux) - javac -cp $IQDIR17/java/sajdbc4.jar:. JavaQuery.java + javac -cp $HDL_CLIENT_HOME/java/sajdbc4.jar:. JavaQuery.java ``` 4. Run `JavaQuery.class` and indicate where the JDBC driver is located. ```Shell (Microsoft Windows) - java -classpath %IQDIR17%\Java\sajdbc4.jar;. JavaQuery + java -classpath %HDL_CLIENT_HOME%\Java\sajdbc4.jar;. JavaQuery ``` ```Shell (Linux) - java -classpath $IQDIR17/java/sajdbc4.jar:. JavaQuery + java -classpath $HDL_CLIENT_HOME/java/sajdbc4.jar:. 
JavaQuery ``` ![Java Query](jdbc-query.png) @@ -132,6 +136,7 @@ See [data lake Relational Engine JDBC driver](https://help.sap.com/docs/hana-clo See [JDBC Program Structure](https://help.sap.com/viewer/a894a54d84f21015b142ffe773888f8c/latest/en-US/3bd5a89b6c5f1014ad1bae9e04645f43.html) for additional details. ### Debug the application + Visual Studio Code can run and debug a Java application. It is a lightweight but powerful source code editor available on Microsoft Windows, macOS, and Linux. 1. If required, [Download Visual Studio Code](https://code.visualstudio.com/Download). @@ -148,7 +153,7 @@ Visual Studio Code can run and debug a Java application. It is a lightweight but ![referenced libraries](ref-libraries.png) - The JDBC driver is located at `%IQDIR17%\Java\sajdbc4.jar` on Microsoft Windows and `$IQDIR17/java/sajdbc4.jar` on Linux. + The JDBC driver is located at `%HDL_CLIENT_HOME%\Java\sajdbc4.jar` on Microsoft Windows and `$HDL_CLIENT_HOME/java/sajdbc4.jar` on Linux. 5. Place a breakpoint and then select **Run | Start Debugging**. @@ -159,6 +164,7 @@ Visual Studio Code can run and debug a Java application. It is a lightweight but ![VS Code Debugging](debugging.png) ### Knowledge check + Congratulations! You have now created and debugged a Java application that connects to and queries a data lake Relational Engine database. --- diff --git a/tutorials/hana-cloud-dl-clients-node/hana-cloud-dl-clients-node.md b/tutorials/hana-cloud-dl-clients-node/hana-cloud-dl-clients-node.md index 977a0ed0d6..836bb1b5e5 100644 --- a/tutorials/hana-cloud-dl-clients-node/hana-cloud-dl-clients-node.md +++ b/tutorials/hana-cloud-dl-clients-node/hana-cloud-dl-clients-node.md @@ -7,22 +7,27 @@ primary_tag: software-product-function>sap-hana-cloud--data-lake --- # Connect to Data Lake Relational Engine Using the Node.js Driver + Create and debug a Node.js application that connects to data lake Relational Engine. 
## Prerequisites - - You have completed the first tutorial in this group. + +- You have completed the first tutorial in this group. ## You will learn - - How to install Node.js and the data lake Relational Engine Node.js driver - - How to create and debug a Node.js application - - How to use both the synchronous and asynchronous driver interfaces + +- How to install Node.js and the data lake Relational Engine Node.js driver +- How to create and debug a Node.js application +- How to use both the synchronous and asynchronous driver interfaces ## Intro + Node.js provides a JavaScript runtime outside of the browser and uses an asynchronous event driven programming model. For more details, see [Introduction to Node.js](https://nodejs.org/en/learn/getting-started/introduction-to-nodejs). On Microsoft Windows, in this tutorial, the shell used is the Command Prompt. --- ### Install Node.js + Ensure you have Node.js installed and check its version. Enter the following command: ```Shell @@ -41,8 +46,8 @@ If Node.js is not installed, download the long-term support (LTS) version of the > >![Chocolatey](Chocolatey.png) - ### Install the data lake Relational Engine client for Node.js + The Node.js driver covered in this tutorial is [@sap\iq-client](https://www.npmjs.com/package/@sap/iq-client) which supports the latest Node.js versions and includes a promise library. An alternate driver is the [SQL Anywhere](https://github.com/sqlanywhere/node-sqlanywhere) driver. 1. Open a new Shell and create a folder named `node` and enter the newly created directory. @@ -72,11 +77,11 @@ The Node.js driver covered in this tutorial is [@sap\iq-client](https://www.npmj ![npm list](npm-list.png) - ### Create a synchronous Node.js application that queries SAP data lake Relational Engine + 1. Create a new file named `nodeQuery.js` in an editor. 
-Depending on what version of the data lake client was used, execute:
+   Depending on what version of the data lake client was used, execute:

    ```Shell (Microsoft Windows)
    notepad nodeQuery.js
@@ -84,7 +89,7 @@ Depending on what version of the data lake client was used, execute:

    Substitute `pico` below for your preferred text editor.

-   ```Shell (Linux or Mac)
+   ```Shell (Linux)
    pico nodeQuery.js
    ```

@@ -117,7 +122,7 @@ Depending on what version of the data lake client was used, execute:
       connection.disconnect();
    ```

-4. Run the app.
+3. Run the app.

    ```Shell
    node nodeQuery.js
@@ -127,6 +132,12 @@ Depending on what version of the data lake client was used, execute:

    If an error appears such as Error: `libdbcapi_r.so` is missing, its location can be specified using an environment variable such as IQ_DBCAPI_DIR.

+   ```Shell (Linux)
+   IQ_DBCAPI_DIR=$HDL_CLIENT_HOME/lib64
+   export IQ_DBCAPI_DIR
+   echo $IQ_DBCAPI_DIR
+   ```
+
    Note that the above app makes use of some of the data lake Relational Engine client Node.js driver methods, such as [connect](https://help.sap.com/docs/hana-cloud-data-lake/developer-guide-for-data-lake-relational-engine/connect-string-object-function-method), [exec](https://help.sap.com/docs/hana-cloud-data-lake/developer-guide-for-data-lake-relational-engine/exec-ute-string-array-object-function-method) and [disconnect](https://help.sap.com/docs/hana-cloud-data-lake/developer-guide-for-data-lake-relational-engine/disconnect-close-end-function-method). Two examples showing the driver's methods being used asynchronously are shown in the next two steps.
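A run of the synchronous app can be wrapped so the most common failure, the missing `libdbcapi_r.so` library, produces an actionable hint. This is a sketch and assumes `nodeQuery.js` from the step above is in the current directory:

```shell
# Run the query app; on failure, print the usual fix for a missing dbcapi library.
if node nodeQuery.js 2>/dev/null; then
  RESULT=ok
else
  RESULT=failed
  echo "hint: export IQ_DBCAPI_DIR pointing at the client's lib64 directory" >&2
fi
echo "run result: $RESULT"
```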
@@ -138,9 +149,7 @@ Depending on what version of the data lake client was used, execute: >node nodeQuery.js >``` - Linux or Mac - - >```Shell + >```Shell(Linux) >export DEBUG=* >node nodeQuery.js >``` @@ -153,9 +162,7 @@ Depending on what version of the data lake client was used, execute: >set DEBUG >set DEBUG= >set DEBUG - >``` - - Linux or Mac + >``` >```Shell (Linux) >printenv | grep DEBUG @@ -164,6 +171,7 @@ Depending on what version of the data lake client was used, execute: >``` ### Create an asynchronous app that uses callbacks + Asynchronous programming enables non-blocking code execution which is demonstrated in the below example. 1. Open a file named `nodeQueryCallback.js` in an editor. @@ -254,12 +262,13 @@ Asynchronous programming enables non-blocking code execution which is demonstrat ```Shell node nodeQueryCallback.js ``` + ![Running nodeQueryCallback.js](Node-query-callback.png) Notice that asynchronous method calls use callback functions. - ### Create an asynchronous app that uses promises + The Node.js driver for the data lake Relational Engine client provides support for promises. The following example demonstrates this. Notice that there is less nesting of code then the previous example. 1. Open a file named `nodeQueryPromise.js` in an editor. @@ -270,7 +279,7 @@ The Node.js driver for the data lake Relational Engine client provides support f Substitute `pico` below for your preferred text editor. - ```Shell (Linux or Mac) + ```Shell (Linux) pico nodeQueryPromise.js ``` @@ -353,20 +362,21 @@ The Node.js driver for the data lake Relational Engine client provides support f } ``` -4. Run the app. +3. Run the app. ```Shell node nodeQueryPromise.js ``` + ![Running nodeQueryPromise.js](Node-query-promise.png) The above code makes use of the [promise module](https://help.sap.com/docs/hana-cloud-data-lake/developer-guide-for-data-lake-relational-engine/promise-module). 
Additional details on promises can be found at [Using Promises](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Using_promises).

-
### Debug the application
-Visual Studio Code can run and debug a Node.js application. It is a lightweight but powerful source code editor which is available on Windows, macOS and Linux. Ensure that Node.js is added to your path in environment variables such as C:\Program Files\nodejs.
-1. If required, download [Visual Studio Code.](https://code.visualstudio.com/Download).
+Visual Studio Code can run and debug a Node.js application. It is a lightweight but powerful source code editor available on Windows, macOS, and Linux. Ensure that Node.js is added to your path in environment variables, such as C:\Program Files\nodejs.
+
+1. If required, download [Visual Studio Code](https://code.visualstudio.com/Download).

@@ -383,13 +393,15 @@ Visual Studio Code can run and debug a Node.js application. It is a lightweight

    ![VS Code Debugging](debugging.png)

    If the error "Can't find Node.js binary 'node': path does not exist" appears, open a shell and run the following command.
+
    ```Shell
    code .
    ```

-    Then restart VSCode.
+    Then restart VS Code.

### Knowledge check
+
Congratulations! You have created and debugged a Node.js application that connects to and queries an SAP data lake Relational Engine database.
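As a follow-up to the `DEBUG` tracing flag shown earlier: prefixing the assignment to a single command keeps tracing from being left enabled in the shell, so no cleanup step is needed afterwards. A sketch, assuming `nodeQuery.js` from this tutorial:

```shell
# Start clean, then enable dbcapi tracing for exactly one run.
unset DEBUG
DEBUG='*' node nodeQuery.js 2>/dev/null || true  # the flag applies to this command only
echo "DEBUG after the run: ${DEBUG-unset}"       # the shell itself never saw it
```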
--- diff --git a/tutorials/hana-cloud-dl-clients-odbc/hana-cloud-dl-clients-odbc.md b/tutorials/hana-cloud-dl-clients-odbc/hana-cloud-dl-clients-odbc.md index db019e9a7e..5c90e7610d 100644 --- a/tutorials/hana-cloud-dl-clients-odbc/hana-cloud-dl-clients-odbc.md +++ b/tutorials/hana-cloud-dl-clients-odbc/hana-cloud-dl-clients-odbc.md @@ -7,21 +7,26 @@ primary_tag: software-product-function>sap-hana-cloud--data-lake --- # Connect to Data Lake Relational Engine Using the ODBC Driver + Configure a data source to connect to the previously created data lake Relational Engine and then use the data source in unixODBC and Microsoft Excel. ## Prerequisites - - You have completed the first tutorial in this group. + +- You have completed the first tutorial in this group. ## You will learn - - How to create an ODBC data source for a data lake Relational Engine connection - - How to use the configured data source with other applications + +- How to create an ODBC data source for a data lake Relational Engine connection +- How to use the configured data source with other applications ## Intro + [Open Database Connectivity](https://en.wikipedia.org/wiki/Open_Database_Connectivity) (ODBC) provides an [API](https://docs.microsoft.com/en-us/sql/odbc/reference/syntax/odbc-api-reference?view=sql-server-ver15) for accessing databases. Database vendors provide ODBC drivers for their database products. An application written to the ODBC standard can be ported to other databases that also provide an ODBC interface. --- ### Configure a data source on Linux with unixODBC + 1. On SUSE Linux, unixODBC can be installed using Zypper or YaST. ```Shell (Linux) @@ -33,10 +38,12 @@ primary_tag: software-product-function>sap-hana-cloud--data-lake For more details on how to accomplish this, please follow the second step of [this tutorial](hxe-ua-dbfundamentals-odbc). 2. 
The following commands can be used to confirm that unixODBC is installed and determine the location of the .odbc.ini file (if it exists). + ```Shell (Linux) cd /etc/unixODBC odbcinst -j ``` + ![odbcinst -j](odbcinst-1.png) 3. Navigate to the directory where the `.odbc.ini` file is located, similar to the one highlighted in the screenshot above. Open or create the `.odbc.ini` file with the following command: @@ -65,10 +72,11 @@ primary_tag: software-product-function>sap-hana-cloud--data-lake dbisql -hdl -c "uid=USER1;pwd=Password1;dsn=HC_DL" -nogui isql -v HC_DL USER1 Password1 ``` -dsn is the name set in the odbc.ini file in the previous step. + + dsn is the name set in the odbc.ini file in the previous step. **DBISQL** - + Some example queries you can run are listed below. ```SQL @@ -95,6 +103,7 @@ dsn is the name set in the odbc.ini file in the previous step. >``` ### Configure a data source using Microsoft Windows ODBC Data Source Administrator + The ODBC Data Source Administrator can be used to view the installed ODBC drivers and to create data sources for an installed driver. 1. Open the administrator by entering **ODBC** after clicking on the Microsoft Windows start icon. @@ -105,7 +114,6 @@ The ODBC Data Source Administrator can be used to view the installed ODBC driver ![odbc admin drivers](drivers-1.png) - 3. Click the **User DSN** tab to view the data sources. 4. Click **Add** to create a new data source to connect to a data lake Relational Engine database. @@ -123,13 +131,13 @@ The ODBC Data Source Administrator can be used to view the installed ODBC driver In the **ODBC tab** of the configuration window, fill in the **Data source name**. Switch to the **Login tab** and enter in the **USER1** credentials. - + Retrieve the SQL Endpoint for your data lake instance. You can find this via the SAP BTP Cockpit or by using the **Copy SQL Endpoint** menu option in SAP HANA Cloud Central and input into **Host** field. 
![SQL Endpoint](sql-endpoint.png) - + Select the **Connect to SAP HANA CLOUD, data lake Relational Engine** action. - + ![specify the credentials, host and port](data-source2.png) 7. Verify the connection by clicking on **Test Connection** in the ODBC tab. @@ -145,17 +153,18 @@ The ODBC Data Source Administrator can be used to view the installed ODBC driver For additional details see [Connection Properties](https://help.sap.com/viewer/a895964984f210158925ce02750eb580/latest/en-US/a6d47d6e84f210158d4980b069eff5dd.html). ### Use a data lake data source from Microsoft Excel + An application that supports ODBC can now make use of the created data source. One example on Windows is Microsoft Excel. The following steps demonstrate how to use Microsoft Excel to query data in data lake Relational Engine using the ODBC connector. 1. Open Microsoft Excel. -2. In the **Data** tab, select **Get Data | From Other Sources | From ODBC**. +2. In the **Data** tab, select **Get Data | From Other Sources | From ODBC**. ![Excel ODBC](ExcelODBC.png) -3. Select the previously created data source that contains the connection information to data lake Relational Engine. +3. Select the previously created data source that contains the connection information to data lake Relational Engine. ![Excel DSN](ExcelDSN.png) @@ -176,6 +185,7 @@ The following steps demonstrate how to use Microsoft Excel to query data in data For further information on programming an application to use the ODBC client driver, see [ODBC CLI](https://help.sap.com/viewer/a894a54d84f21015b142ffe773888f8c/latest/en-US/a3171c5084f210159caebadd9e149481.html). ### Knowledge check + Congratulations! You have configured an ODBC data source to contain connection information for a SAP HANA Cloud, data lake Relational Engine database and used that data source from unixODBC and Microsoft Excel. 
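Either of the two test clients used in this tutorial can verify a DSN, and a small script can report which ones are installed before you pick. A sketch — `dbisql` ships with the data lake client, while `isql` comes from unixODBC:

```shell
# Report which ODBC test clients are available on this machine.
FOUND=0
for tool in dbisql isql; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool is installed"
    FOUND=$((FOUND + 1))
  else
    echo "$tool not found on the PATH"
  fi
done
echo "clients found: $FOUND"
```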
---- +--- \ No newline at end of file diff --git a/tutorials/hana-cloud-dl-clients-overview/add-data-lake2.png b/tutorials/hana-cloud-dl-clients-overview/add-data-lake2.png index 9db15a3a62..7ced2180ba 100644 Binary files a/tutorials/hana-cloud-dl-clients-overview/add-data-lake2.png and b/tutorials/hana-cloud-dl-clients-overview/add-data-lake2.png differ diff --git a/tutorials/hana-cloud-dl-clients-overview/allowed-connections.png b/tutorials/hana-cloud-dl-clients-overview/allowed-connections.png index 72c91d8b3c..9d2de11c13 100644 Binary files a/tutorials/hana-cloud-dl-clients-overview/allowed-connections.png and b/tutorials/hana-cloud-dl-clients-overview/allowed-connections.png differ diff --git a/tutorials/hana-cloud-dl-clients-overview/crypto-check.png b/tutorials/hana-cloud-dl-clients-overview/crypto-check.png index d8eb841536..bf386f874a 100644 Binary files a/tutorials/hana-cloud-dl-clients-overview/crypto-check.png and b/tutorials/hana-cloud-dl-clients-overview/crypto-check.png differ diff --git a/tutorials/hana-cloud-dl-clients-overview/crypto-check0.png b/tutorials/hana-cloud-dl-clients-overview/crypto-check0.png new file mode 100644 index 0000000000..a339a7dde0 Binary files /dev/null and b/tutorials/hana-cloud-dl-clients-overview/crypto-check0.png differ diff --git a/tutorials/hana-cloud-dl-clients-overview/data-lake-client-install.png b/tutorials/hana-cloud-dl-clients-overview/data-lake-client-install.png deleted file mode 100644 index 3acf6513d9..0000000000 Binary files a/tutorials/hana-cloud-dl-clients-overview/data-lake-client-install.png and /dev/null differ diff --git a/tutorials/hana-cloud-dl-clients-overview/data-lake-running.png b/tutorials/hana-cloud-dl-clients-overview/data-lake-running.png index f4c582109c..71ab26eb8d 100644 Binary files a/tutorials/hana-cloud-dl-clients-overview/data-lake-running.png and b/tutorials/hana-cloud-dl-clients-overview/data-lake-running.png differ diff --git 
a/tutorials/hana-cloud-dl-clients-overview/dbisql-copy-endpoint.png b/tutorials/hana-cloud-dl-clients-overview/dbisql-copy-endpoint.png index f88fb57115..defe9ff5a5 100644 Binary files a/tutorials/hana-cloud-dl-clients-overview/dbisql-copy-endpoint.png and b/tutorials/hana-cloud-dl-clients-overview/dbisql-copy-endpoint.png differ diff --git a/tutorials/hana-cloud-dl-clients-overview/dl-software-downloads.png b/tutorials/hana-cloud-dl-clients-overview/dl-software-downloads.png index 096a30eb2f..d5412a4df1 100644 Binary files a/tutorials/hana-cloud-dl-clients-overview/dl-software-downloads.png and b/tutorials/hana-cloud-dl-clients-overview/dl-software-downloads.png differ diff --git a/tutorials/hana-cloud-dl-clients-overview/env-variables.png b/tutorials/hana-cloud-dl-clients-overview/env-variables.png deleted file mode 100644 index 17917d9131..0000000000 Binary files a/tutorials/hana-cloud-dl-clients-overview/env-variables.png and /dev/null differ diff --git a/tutorials/hana-cloud-dl-clients-overview/hana-cloud-dl-clients-overview.md b/tutorials/hana-cloud-dl-clients-overview/hana-cloud-dl-clients-overview.md index a22884aef3..15388b9381 100644 --- a/tutorials/hana-cloud-dl-clients-overview/hana-cloud-dl-clients-overview.md +++ b/tutorials/hana-cloud-dl-clients-overview/hana-cloud-dl-clients-overview.md @@ -7,41 +7,47 @@ primary_tag: software-product-function>sap-hana-cloud--data-lake --- # SAP HANA Cloud, Data Lake Client Interfaces Overview - Learn about SAP HANA Cloud, data lake, how to create a free tier or trial instance, how to examine the data lake Relational Engine using SAP HANA Cloud Central, how to install the data lake client, and how to query the database using the SQL Console or the Interactive SQL Client. 
+ + Learn about SAP HANA Cloud, data lake, how to create a free tier instance, how to examine the data lake Relational Engine using SAP HANA Cloud Central, how to install the data lake client, and how to query the database using the SQL Console or the Interactive SQL Client. ## Prerequisites - - A computer running Microsoft Windows or Linux. + +- A computer running Microsoft Windows or Linux. ## You will learn - - Information about SAP HANA Cloud, data lake Relational Engine - - How to install the data lake client - - How to create sample tables, views, and procedures - - How to connect using the SQL Console and database objects apps in SAP HANA Cloud Central - - How to connect using the Interactive SQL Client (DBISQL) + +- Information about SAP HANA Cloud, data lake Relational Engine +- How to install the data lake client +- How to create sample tables, views, and procedures +- How to connect using the SQL Console and database objects apps in SAP HANA Cloud Central +- How to connect using the Interactive SQL Client (DBISQL) ## Intro + This tutorial group will provide guidance on setting up an instance of [SAP HANA Cloud, data lake](https://help.sap.com/docs/hana-cloud-data-lake) so that it can then be connected to and queried using a few of the data lake client interfaces as described in [SAP HANA Cloud, Data Lake Developer Guide for Data Lake Relational Engine](https://help.sap.com/docs/hana-cloud-data-lake/developer-guide-for-data-lake-relational-engine/sap-hana-cloud-data-lake-developer-guide-for-data-lake-relational-engine) and [SAP HANA Cloud, Data Lake Client Interfaces](https://help.sap.com/docs/hana-cloud-data-lake/client-interfaces/sap-hana-cloud-data-lake-client-connections). On Microsoft Windows, in this tutorial, the shell used is the Command Prompt. > Access help from the SAP community or provide feedback on this tutorial by navigating to the Feedback link shown below. 
-> +> >![Give Feedback](feedback.png) --- ### Overview of SAP HANA Cloud + SAP HANA Cloud is composed of multiple components. - * SAP HANA is an in-memory, multi-model, column-based, relational database. For further details see [Introduction to SAP HANA Cloud](https://help.sap.com/docs/hana-cloud/sap-hana-cloud-getting-started-guide/introduction-to-sap-hana-cloud) and the tutorial mission [Use Clients to Query an SAP HANA Database](mission.hana-cloud-clients). +- SAP HANA is an in-memory, multi-model, column-based, relational database. For further details see [Introduction to SAP HANA Cloud](https://help.sap.com/docs/hana-cloud/sap-hana-cloud-getting-started-guide/introduction-to-sap-hana-cloud) and the tutorial mission [Use Clients to Query an SAP HANA Database](mission.hana-cloud-clients). - * SAP HANA Cloud, data lake is composed of two components; a data lake Relational Engine and data lake Files. +- SAP HANA Cloud, data lake is composed of two components; a data lake Relational Engine and data lake Files. [Data Lake Relational Engine](https://help.sap.com/docs/hana-cloud-data-lake/welcome-guide/data-lake-relational-engine) is a disk-based, column-oriented relational database for storing and analyzing large amounts of infrequently updated data. It descends from [SAP IQ](https://help.sap.com/docs/SAP_IQ), which was previously named Sybase IQ. Because of its heritage, there are commonalities with other Sybase products. Some of the client interface drivers are shared with [SAP SQL Anywhere](https://help.sap.com/docs/SAP_SQL_Anywhere) and SAP Adaptive Server Enterprise. [Data Lake Files](https://help.sap.com/docs/hana-cloud-data-lake/welcome-guide/data-lake-files) can be used to store and access unstructured data such as trace files and structured files like CSV, Parquet, or Delta table, or Iceberg table. 
Structured files can use [SQL on Files](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-sql-on-files-guide/sap-hana-cloud-sap-hana-database-sql-on-files-guide), which enables SQL queries to be performed on them. - >Note, that the data lake Files component, is currently not available in free tier or trial accounts. + >Note that the data lake Files component is currently not available in the free tier. ### Choose where to deploy the database instances + The SAP Business Technology Platform (SAP BTP) provides multiple runtime environments such as Cloud Foundry and Kyma. When a data lake instance is created, it can be created in an SAP BTP subaccount or in a Cloud Foundry space. SAP HANA Cloud Central can be used to provision and manage instances in the SAP BTP subaccount or in a Cloud Foundry space. In the screenshot below, there is an instance of a data lake that was provisioned in the SAP BTP subaccount (Other Environments) and one that was provisioned into Cloud Foundry. ![Runtime Environments](runtime.png) @@ -51,23 +57,24 @@ SAP HANA Cloud Central can be accessed (once a subscription and setup is complet ![multi environment tools](multi-env-tools.png) ### Create a data lake instance ->To complete the tutorials in this group, a SAP HANA Cloud, data lake instance is needed. There are two different free options available, which are the SAP BTP free-tier and SAP BTP trial. For instructions on registering, see [Set Up Your SAP HANA Cloud, SAP HANA Database (free tier or trial) and Understand the Basics](group.hana-cloud-get-started-1-trial). -The following steps provide instructions on how to create a data lake instance in the SAP BTP trial. Additional content on this topic is available at [Quick Start Tutorial for Data Lake](https://help.sap.com/docs/hana-cloud-data-lake/quick-start-tutorial-for-standalone-data-lake/quick-start-tutorial-for-standalone-data-lake).
+>To complete the tutorials in this group, an SAP HANA Cloud, data lake instance is needed. Setup instructions for the SAP BTP trial are provided at [Set Up Your SAP HANA Cloud, SAP HANA Database and Understand the Basics](group.hana-cloud-get-started-1-trial). + +The following steps provide instructions on how to create a data lake instance in the SAP BTP trial using a free tier service plan. Additional content on this topic is available at [Quick Start Tutorial for Data Lake](https://help.sap.com/docs/hana-cloud-data-lake/quick-start-tutorial-for-standalone-data-lake/quick-start-tutorial-for-standalone-data-lake). There are multiple ways to create a data lake: -* A data lake can be created in step 6 of the SAP HANA Database creation wizard. +- A data lake can be created in step 6 of the SAP HANA Database creation wizard. ![add a data lake](add-data-lake2.png) -* A data lake can be added to an already created SAP HANA database that does not have a data lake already associated with it. +- A data lake can be added to an already created SAP HANA database that does not have a data lake already associated with it. ![add data lake](add-data-lake.png) When a data lake is created in either of the previous two methods, it is configured to be maximally compatible with an SAP HANA database. -* A data lake can be created that is independent (standalone) of a SAP HANA database by using the **Create Instance** button. +- A data lake can be created that is independent (standalone) of an SAP HANA database by using the **Create Instance** button. ![independent data lake](standalone.png) @@ -87,7 +94,7 @@ Perform the following steps to create a data lake Relational Engine.
>The HDLADMIN user has a [login policy](https://help.sap.com/docs/hana-cloud-data-lake/security-for-data-lake-relational-engine/login-policy-options) that enforces the [update of the password](https://help.sap.com/docs/hana-cloud-data-lake/security-for-data-lake-relational-engine/changing-password-single-control) after 180 days. -3. If this instance is a free tier or trial instance or a test instance, set allowed connections to **Allow all IP addresses** so that client applications can connect from any IP address. +3. If this instance is a free tier instance or a test instance, set allowed connections to **Allow all IP addresses** so that client applications can connect from any IP address. ![Allowed connections](allowed-connections.png) @@ -95,10 +102,11 @@ Perform the following steps to create a data lake Relational Engine. ![data lake running](data-lake-running.png) - >**Important:** SAP HANA Cloud, HANA data lake free tier or trial instances are shut down overnight and will need to be restarted before working with them the next day. + >**Important:** SAP HANA Cloud, HANA data lake free tier instances are shut down overnight and will need to be restarted before working with them the next day. ### Examine the Data Lake -Once the data lake has been created, it's details can be examined. + +Once the data lake has been created, its details can be examined. 1. Click on the instance to show its details. @@ -110,10 +118,11 @@ Once the data lake has been created, it's details can be examined. After you enter your credentials, should you wish to use a different set of credentials, the current credentials can be updated using **Sign in to the Instance**. - ![Credentials](credentials2.png) + ![Credentials](credentials2.png) ### Create tables, views, functions, and procedures + In this step, a sample HOTEL dataset will be created comprising tables, a view, and a stored procedure. 1. From the action menu, select **Open SQL Console**. 
@@ -133,7 +142,7 @@ In this step, a sample HOTEL dataset will be created comprising tables, a view, Additional details can be found at [System Functions](https://help.sap.com/docs/hana-cloud-data-lake/sql-reference-for-data-lake-relational-engine/system-functions) and [Stored Procedures in Data Lake Relational Engine](https://help.sap.com/docs/hana-cloud-data-lake/sql-reference-for-data-lake-relational-engine/system-procedures-for-data-lake-relational-engine). -3. Execute the below SQL statements +3. Execute the SQL statements below. ```SQL ---- drops the schema and all objects it contains @@ -275,8 +284,8 @@ In this step, a sample HOTEL dataset will be created comprising tables, a view, ORDER BY CUS.NAME ASC; END; ``` - -4. Open the database objects app. Select the instance, select **Tables**, and set the schema filter to be **HOTELS** to limit the returned tables to be those that were just created in the HOTELS schema. + +4. Open the database objects app. Select the instance, select **Tables**, and set the schema filter to **HOTELS** to limit the returned tables to those that were just created in the HOTELS schema. ![database objects app](database-objects.png) @@ -286,26 +295,33 @@ In this step, a sample HOTEL dataset will be created comprising tables, a view, For additional details on the SQL Console and database objects app, see the tutorials [Query Databases Using the SQL Console in SAP HANA Cloud Central](https://developers.sap.com/tutorials/hana-dbx-hcc.html) and [Browse and Explore Catalog Objects with the Database Objects App](https://developers.sap.com/tutorials/hana-dbx-database-objects.html), which showcases many of their features. +### Download the data lake client install + +The data lake client install is available, without included cryptographic libraries, under the SAP Developer License agreement from [SAP Development Tools](https://tools.hana.ondemand.com/#hanatools).
This version does not require the user to sign in prior to downloading the software. The software will use the cryptographic library found on the OS such as OpenSSL or SAP CommonCryptoLib. Note that it currently does not include the [hdlfscli](https://help.sap.com/docs/hana-cloud-data-lake/user-guide-for-data-lake-files/hdlfscli-data-lake-files-utility) tool. -### Install the developer licensed version of the data lake client for Linux -This version of the data lake client does not include cryptographic libraries as it makes use of the libraries that are available on the operating systems such as OpenSSL. The data lake client is available for download after accepting the SAP Developer License agreement. Currently, this version is available for Microsoft Windows and Linux. Choose which client you would like to use and follow either step 6, 7 or 8. Either client can be used to complete the steps shown in this tutorial group. +![Tools on demand](tools-on-demand.png) -Ensure that [SAP JVM (Java Virtual Machine) 8.0](https://tools.hana.ondemand.com/#cloud) is installed before proceeding with the following steps. The DBISQL tool requires [SAP JVM 8.0](https://tools.hana.ondemand.com/#cloud) +The data lake client install is also available from [SAP for me](https://me.sap.com/softwarecenter). This version includes cryptographic libraries and does require a login and purchase of the software to access the download. To access it, navigate to **Support Packages & Patches** | **By Alphabetical Index (A-Z)** | **H | HANA CLOUD CLIENTS | HANA CLOUD CLIENTS 1.0 | HANA DATALAKE CLIENT 1.0**. Select the platform (Microsoft Windows or Linux) and download the latest version of the archive. +![SAP for Me Download Software](sap-for-me.png) -1. Open the HANA tab of [SAP Development Tools](https://tools.hana.ondemand.com/#hanatools). +![data lake software downloads](dl-software-downloads.png) -2. Download the SAP HANA Data Lake Client 1.0. +Either location can be used with this tutorial. 
- ![download](developmenttools.png) +Further details on the download and install process can be found at [Download the SAP HANA Data Lake Client](https://help.sap.com/docs/hana-cloud-data-lake/client-interfaces/download-sap-hana-data-lake-client). -3. Extract the archive. +### Install the data lake client for Linux + +Ensure that [SAP JVM (Java Virtual Machine) 8.0](https://tools.hana.ondemand.com/#cloud) is installed before proceeding with the following steps if you wish to use the DBISQL tool. + +1. Extract the archive. ```Shell (Linux) tar -zxvf hdlclient-latest-linux-x64.tar.gz ``` -4. Start the installer. +2. Start the installer. ```Shell (Linux) ./hdbinst @@ -313,7 +329,7 @@ Ensure that [SAP JVM (Java Virtual Machine) 8.0](https://tools.hana.ondemand.com ![run the installer](hdbinst.png) -5. Configure the environment variables. This can be done by calling `hdlclienv.sh` manually or it can be added to the Bash shell by referencing it in `.bashrc`. +3. Configure the environment variables. This can be done by calling `hdlclienv.sh` manually or it can be added to the Bash shell by referencing it in `.bashrc`. Open the `.bashrc`. @@ -343,164 +359,45 @@ Ensure that [SAP JVM (Java Virtual Machine) 8.0](https://tools.hana.ondemand.com The following command should display the install location of the data lake client. ```Shell (Linux) - echo $IQDIR17 + echo $HDL_CLIENT_HOME ``` - >In the case that the data lake client needs to be uninstalled, run the `hdbuninst` file located in the directory `~/sap/hdlclient/install`. - - --- - -### Install the developer licensed version of the data lake client for Microsoft Windows -This version of the data lake client does not include cryptographic libraries and requires openSSL to be installed separately to provide cryptography. The data lake client is available for download after accepting the SAP Developer License agreement. Currently, this version is available for Microsoft Windows and Linux. 
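The environment check above can also be scripted. Below is a small illustrative Python helper — an assumption of this write-up, not part of the client install — that reports whether `HDL_CLIENT_HOME` (the variable echoed in the step above) is set and points at a real directory:

```python
import os

def data_lake_client_home(var="HDL_CLIENT_HOME"):
    """Return the data lake client install path from the environment,
    or None when the variable is unset (hdlclienv.sh not yet sourced)."""
    path = os.environ.get(var)
    if path is None:
        print(f"{var} is not set; source hdlclienv.sh first")
        return None
    if not os.path.isdir(path):
        # Warn, but still return the value so the caller can decide what to do.
        print(f"{var} is set to {path}, but that directory does not exist")
    return path
```

On Microsoft Windows the same check applies after running `hdlclienv.bat`.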
Choose which client you would like to use and follow either step 6, 7 or 8. Either client can be used to complete the steps shown in this tutorial group. - -Ensure that [SAP JVM (Java Virtual Machine) 8.0](https://tools.hana.ondemand.com/#cloud) is installed before proceeding with the following steps. The DBISQL tool requires [SAP JVM 8.0](https://tools.hana.ondemand.com/#cloud) - -1. Open the HANA tab of [SAP Development Tools](https://tools.hana.ondemand.com/#hanatools). + >In the case that the data lake client needs to be uninstalled, run the `hdbuninst` file located in the directory `~/sap/hdlclient/install`. -2. Download the SAP HANA Data Lake Client 1.0. +### Install the data lake client for Microsoft Windows - ![download](development-tools-win.png) +Ensure that [SAP JVM (Java Virtual Machine) 8.0](https://tools.hana.ondemand.com/#cloud) is installed before proceeding with the following steps if you wish to use the DBISQL tool. -3. Run hdbsetup.exe. +1. Run hdbsetup.exe. ![run the installer](windows-dev-lic-installer.png) +2. Examine the installation log and note whether the required crypto libraries were located in the machine's path. + ![view log](crypto-check0.png) -4. Examine the installation log and take note if the required crypto libraries were located in the machines path. + Select **View Log** and search for **crypto**. ![crypto check](crypto-check.png) - If these libraries were not found but are on your machine perhaps as part of the [git client](https://git-scm.com/downloads/win), add that folder to your path (For example, C:\Git\mingw64\bin). See also [Install the SAP HANA Data Lake Client (Developer License) in GUI Mode on Microsoft Windows](https://help.sap.com/docs/hana-cloud-data-lake/client-interfaces/install-sap-hana-data-lake-client-in-gui-mode-on-microsoft-windows).
+ If these libraries were not found but are present on your machine, perhaps as part of the [git client](https://git-scm.com/downloads/win), add that folder to your path (for example, C:\Git\mingw64\bin). -5. After the installation process is completed, open Microsoft Windows, click the **Start** icon and search for **Edit the system environment variables** and press the "Environment Variables" button under the "Advanced" tab. +3. After the installation process is completed, click the Microsoft Windows **Start** icon, search for **Edit the system environment variables**, and press the **Environment Variables** button on the **Advanced** tab. ![Open Environment Variables](open-environment-var.png) - Create a new variable, set the variable name to **JAVA_HOME** and the variable value as the location where JVM has been unzipped such as C:\SAP\SAPJVM8. Press the **OK** to ensure the changes made by the installer are now active. - - ![JAVA_HOME creation](java-home-creation.png) - - -6. Ensure that the following System variables exist. - - ![Environment variables](env-variables.png) - - -7. It is also possible to run a batch file that will temporarily set the environment variables. This can be done by calling hdlclienv.bat from within a command prompt. +4. It is also possible to run a batch file that will temporarily set the environment variables. This can be done by calling `hdlclienv.bat` from within a command prompt. ![screenshot showing calling hdlclienv.bat](calling-hdlclienv.png) Then once called, the variables set can be seen by calling SET. ![variables set](effect-of-call-bat-file.png) - ->In the case the data lake client needs to be uninstalled, run the `hdbuninst` file located in the directory `C:\SAP\hdlclient\install`. - - - -### Install the data lake client downloaded from SAP Software Center -This version of the data lake client is available from the SAP Software Center and requires an S-user ID and only shows software that you have purchased.
Additional details can be found at [Software Downloads FAQ](https://support.sap.com/content/dam/support/en_us/library/ssp/my-support/help-for-sap-support-applications/online_help-software_downloads.html#faq). Either client can be used to complete the steps shown in this tutorial group. Choose which client you would like to use and follow either step 6 or step 7. - -1. Open [SAP for me](https://me.sap.com/softwarecenter) and navigate to **Support Packages & Patches** | **By Alphabetical Index (A-Z)**. - - ![SAP for Me Download Software](sap-for-me.png) - - 2. Navigate to **H | HANA CLOUD CLIENTS | HANA CLOUD CLIENTS 1.0 | HANA DATALAKE CLIENT 1.0**. Select the platform (Microsoft Windows or Linux) and download the latest version of the archive. - - ![data lake software downloads](dl-software-downloads.png) - - > Note access to the client install is currently limited to S-user IDs - -2. Extract the archive and start the installer. - - * On Microsoft Windows extract the zip and run setup.exe as an administrator. - - ![Run setup.exe as administrator](run-as-administrator.png) - - Finish the steps as instructed. - - ![data lake client](data-lake-client-install.png) - - >If the install fails on Microsoft Windows, consult the following SAP Notes: - > - >* [3001764 - SAP IQ 16.x - `InvocationTargetException` Installer Error (WINDOWS)](https://launchpad.support.sap.com/#/notes/3001764) - > - >* [3001813 - SAP IQ 16.1 SP 04 Rev08 - 'Error Loading sylapij.dll' Installer Error (WINDOWS)](https://launchpad.support.sap.com/#/notes/3001813) - - - * On Linux, extract the archive. - - ```Shell (Linux) - tar -zxvf HANADLCLIENT100*.TGZ - ``` - - Run `setup.bin` which will start either the GUI installer or text-based installer. To use the text-based installer, add `-i console` to the command. 
- - ```Shell (Linux) - cd ebf* - ./setup.bin - ``` - >If the installation fails on Linux due to an InvocationTargetException, try installing Java first before proceeding with the installation again. - - -3. Specify an install folder such as C:\sap\DLClient (for Microsoft Windows) or /home/dan/sap/dlclient (for Linux) and install all the features. Choosing a similarly structured path will ensure a smooth experience throughout the tutorial and help reduce issues with paths. - - ![GUI Installer](windows-gui-install.png) - - Console mode installer on Linux - - ![Product Features](productfeatures.png) - - Follow the remaining prompts to finish the installation. - -4. The installation location can be referenced through an environment variable. - - * On Microsoft Windows, open a new command prompt window and run the following to see the installation location. - - ```Shell (Microsoft Windows) - ECHO %IQDIR17% - ``` - - * On Linux, this environment variable and others are set in a file named `IQ.sh`. Configure it to be run each time the Bash shell is started by referencing it in `.bash_profile` or possibly `.bashrc`. - - Open the `.bash_profile`. - - ```Shell (Linux) - pico ~/.bash_profile - ``` - - >This tutorial uses notepad and `pico` as default text editors, but any text editor will do. - >`Pico` can be installed on SUSE Linux with - - >```Shell (Linux SUSE) - sudo zypper install pico - >``` - - Add the following line to point to the location where the SAP data lake client is installed. - - ```Shell (Linux) .bash_profile - source /path-to-data-lake-install/IQ.sh - ``` - - Test the change by running: - - ```Shell (Linux) - source ~/.bash_profile - ``` - - The following command should display the install location of the data lake client. - - ```Shell (Linux) - echo $IQDIR17 - ``` - - >In the case that the Data Lake Client needs to be uninstalled, run the `uninstall.exe` file located in the directory `/path-to-data-lake-install/sybuninstall/IQClientSuite/`. - - +5. 
In the case the data lake client needs to be uninstalled, run the `hdbuninst` file located in the directory `C:\SAP\hdlclient\install`. ### Connect with the Interactive SQL Client (DBISQL) + The data lake client install includes [dbisql Interactive SQL Utility](https://help.sap.com/docs/hana-cloud-data-lake/client-interfaces/dbisql-interactive-sql-utility), which can be used to connect to and query a data lake Relational Engine. The following steps will provide instructions on how to connect to the data lake Relational Engine using DBISQL and then populate the previously created tables with sample data. 1. DBISQL requires SAP JVM 8.0. Verify that you have this installed by entering java -version as shown below. @@ -513,37 +410,38 @@ If you do not have this version on Microsoft Windows, it can be downloaded and configured as shown below - * Download [SAP JVM 8.0](https://tools.hana.ondemand.com/#cloud) and unzip it to c:\SAP\SAPJVM8 + - Download [SAP JVM 8.0](https://tools.hana.ondemand.com/#cloud) and unzip it to c:\SAP\SAPJVM8 - * In your environment variables, add the bin subdirectory of JAVA_HOME to the existing Path environment variable. + - In your environment variables, add the bin subdirectory of JAVA_HOME to the existing Path environment variable. ![Windows Java variables](windows-java-home.png) - * Open a new command prompt and test the change. + - Open a new command prompt and test the change. ```Shell (Microsoft Windows) java -version dbisql ``` - The Data Lake Relational Engine should start, this may take a moment. + The Data Lake Relational Engine should start; this may take a moment. If the location has changed since the install was run, you may need to edit C:\SAP\hdlclient\bin64\dbisql.ini. If you do not have this version on Linux, it can be downloaded and installed as shown below.
- * Download and install [SAP JVM 8.0](https://tools.hana.ondemand.com/#cloud) + - Download and install [SAP JVM 8.0](https://tools.hana.ondemand.com/#cloud) ```Shell (Linux) unzip sapjvm-8.1.102-linux-x64.zip mv ./sapjvm_8 ~/ ``` - * Add the following to your bashrc. + - Add the following to your bashrc. ```Shell (Linux) export JAVA_HOME=~/sapjvm_8 export PATH=$PATH:$JAVA_HOME/bin ``` - * Apply and test the changes. + - Apply and test the changes. + ```Shell (Linux) source ~/.bashrc java -version @@ -558,7 +456,7 @@ The data lake client install includes [dbisql Interactive SQL Utility](https://h sudo zypper install libXtst6 sudo zypper install libXi6 ``` - + If an error occurs mentioning that saip17.jar file has moved or has been deleted, examine C:\Users\Public\Documents\DBISQL 17.1.6\dbisql_64.rep and optionally comment out with # the plugins that are not loading. @@ -578,7 +476,7 @@ The data lake client install includes [dbisql Interactive SQL Utility](https://h > > A failure to connect could be caused by the allowed connections list, which is editable in SAP HANA Cloud Central. - Paste the folling code in the console and click run. + Paste the following code in the console and click run. > >```SQL >SELECT * FROM SYS.SYSINFO; @@ -604,6 +502,7 @@ The data lake client install includes [dbisql Interactive SQL Utility](https://h ### Insert data with Interactive SQL Client (DBISQL) + 1. Execute the following insert statements to provide some sample data. >If you do not wish to use the GUI mode, paste the insert statements into a file first and then run `dbisql -c "uid..." sql.sql`. @@ -724,7 +623,6 @@ The data lake client install includes [dbisql Interactive SQL Utility](https://h > >![autocommit setting](auto-commit-hcc.png) - 2. Notice that pressing ctrl-space brings up auto complete (GUI mode only). 
![auto complete](show-reservations.png) @@ -752,8 +650,7 @@ The data lake client install includes [dbisql Interactive SQL Utility](https://h See [Connection Parameters](https://help.sap.com/docs/hana-cloud-data-lake/client-interfaces/connection-parameters) for additional documentation on the parameters used to connect. - -### Knowledge check +### Knowledge check Congratulations! You have created and connected to a data lake Relational Engine. In the following tutorials, the client interfaces will be used to connect from ODBC, JDBC and Node.js. diff --git a/tutorials/hana-cloud-dl-clients-overview/java-home-creation.png b/tutorials/hana-cloud-dl-clients-overview/java-home-creation.png deleted file mode 100644 index 7dcdb7291a..0000000000 Binary files a/tutorials/hana-cloud-dl-clients-overview/java-home-creation.png and /dev/null differ diff --git a/tutorials/hana-cloud-dl-clients-overview/open-environment-var.png b/tutorials/hana-cloud-dl-clients-overview/open-environment-var.png index f9d377bf71..67310cc6b4 100644 Binary files a/tutorials/hana-cloud-dl-clients-overview/open-environment-var.png and b/tutorials/hana-cloud-dl-clients-overview/open-environment-var.png differ diff --git a/tutorials/hana-cloud-dl-clients-overview/productfeatures.png b/tutorials/hana-cloud-dl-clients-overview/productfeatures.png deleted file mode 100644 index a7fe130cd7..0000000000 Binary files a/tutorials/hana-cloud-dl-clients-overview/productfeatures.png and /dev/null differ diff --git a/tutorials/hana-cloud-dl-clients-overview/run-as-administrator.png b/tutorials/hana-cloud-dl-clients-overview/run-as-administrator.png deleted file mode 100644 index 2ae48bbb7b..0000000000 Binary files a/tutorials/hana-cloud-dl-clients-overview/run-as-administrator.png and /dev/null differ diff --git a/tutorials/hana-cloud-dl-clients-overview/standalone2.png b/tutorials/hana-cloud-dl-clients-overview/standalone2.png index 53c08c13ff..cd901e1d13 100644 Binary files 
a/tutorials/hana-cloud-dl-clients-overview/standalone2.png and b/tutorials/hana-cloud-dl-clients-overview/standalone2.png differ diff --git a/tutorials/hana-cloud-dl-clients-overview/tools-on-demand.png b/tutorials/hana-cloud-dl-clients-overview/tools-on-demand.png new file mode 100644 index 0000000000..0f42335b77 Binary files /dev/null and b/tutorials/hana-cloud-dl-clients-overview/tools-on-demand.png differ diff --git a/tutorials/hana-cloud-dl-clients-overview/windows-dev-lic-installer.png b/tutorials/hana-cloud-dl-clients-overview/windows-dev-lic-installer.png index 119f004603..8f13163fcd 100644 Binary files a/tutorials/hana-cloud-dl-clients-overview/windows-dev-lic-installer.png and b/tutorials/hana-cloud-dl-clients-overview/windows-dev-lic-installer.png differ diff --git a/tutorials/hana-cloud-dl-clients-overview/windows-gui-install.png b/tutorials/hana-cloud-dl-clients-overview/windows-gui-install.png deleted file mode 100644 index f40b880735..0000000000 Binary files a/tutorials/hana-cloud-dl-clients-overview/windows-gui-install.png and /dev/null differ diff --git a/tutorials/hana-cloud-dl-clients-python/add-variable.png b/tutorials/hana-cloud-dl-clients-python/add-variable.png deleted file mode 100644 index e2aad0802d..0000000000 Binary files a/tutorials/hana-cloud-dl-clients-python/add-variable.png and /dev/null differ diff --git a/tutorials/hana-cloud-dl-clients-python/hana-cloud-dl-clients-python.md b/tutorials/hana-cloud-dl-clients-python/hana-cloud-dl-clients-python.md index 79d5b38033..ec6fa14aa7 100644 --- a/tutorials/hana-cloud-dl-clients-python/hana-cloud-dl-clients-python.md +++ b/tutorials/hana-cloud-dl-clients-python/hana-cloud-dl-clients-python.md @@ -1,4 +1,4 @@ - --- +--- parser: v2 auto_validation: true time: 15 @@ -7,24 +7,27 @@ primary_tag: software-product-function>sap-hana-cloud--data-lake --- # Connect to Data Lake Relational Engine Using Python Drivers + Create and debug a Python application that connects to a data lake Relational 
Engine using the sqlanydb python driver or the pyodbc bridge. ## Prerequisites - - You have completed the first 2 tutorials in this group +- You have completed the first 2 tutorials in this group ## You will learn - - How to install Python, the `sqlanydb`, and `pyodbc` Python drivers - - How to create, run, and debug a Python application that connects to and queries a data lake Relational Engine database +- How to install Python, the `sqlanydb`, and `pyodbc` Python drivers +- How to create, run, and debug a Python application that connects to and queries a data lake Relational Engine database --- ## Intro + In the 2023 Stack Overflow’s annual developer survey, Python ranked 3rd in the [Most popular technologies](https://survey.stackoverflow.co/2023/#most-popular-technologies-language) section. For further information on Python, see [Introduction to Python 3](https://realpython.com/python-introduction/) or [The Python Tutorial](https://docs.python.org/3/tutorial/). The following steps create a simple Python app that can connect to and query an SAP HANA data lake Relational Engine. ### Install Python + The first step is to check if Python and pip are installed. 1. Enter the commands below. @@ -33,6 +36,7 @@ The first step is to check if Python and pip are installed. python --version python3 --version ``` + If Python is installed, the command will return a value such as Python 3.13.0 >In some Linux distributions, 'python' refers to Python 2, while 'python3' refers to Python 3. However, as Python 2 is now obsolete, 'python' may refer to Python 3 instead. It may also be referred to by its specific version such as python3.13. @@ -49,7 +53,6 @@ The first step is to check if Python and pip are installed. python3.13 --version ``` - 2. Enter the commands below. ```Shell @@ -57,9 +60,10 @@ The first step is to check if Python and pip are installed. 
pip3 --version pip install --upgrade pip ``` + >If you encounter issues with user permissions, run the command prompt as an administrator and try again. - The standard package installer for Python is [pip](https://pypi.org/project/pip/). The following commands will check the version of pip and attempt to upgrade it to the latest available version. Again, use the pip or pip3 command that returns a version 3.4 or greater of Python. + The standard package installer for Python is [pip](https://pypi.org/project/pip/). The following commands will check the version of pip and attempt to upgrade it to the latest available version. >On Linux, if you encounter permission issues, one way to solve the issue is to use `sudo` before the command. @@ -71,30 +75,17 @@ The first step is to check if Python and pip are installed. zypper install python3-pip >``` - ### Install the sqlanydb Python driver + The `sqlanydb` package is the Python driver for the data lake Relational Engine; it is included in the data lake Relational Engine install and is also available at [PyPI](https://pypi.org/project/sqlanydb/). -1. Navigate to your Data Lake Client installation folder and enter the following command to install `sqlanydb`. Depending on which install of the data lake client was used, execute: +1. Navigate to your Data Lake Client installation folder and enter the following command to install the Python driver named `sqlanydb`. ```Shell (Microsoft Windows) - cd %IQDIR17%\SDK\Python + cd %HDL_CLIENT_HOME%\SDK\Python pip install sqlanydb-1.0.14.tar.gz ``` - ```Shell (Microsoft Windows) - cd %IQDIR17%\SDK\Python - python setup.py install - ``` - - - >If the error 'no module named setuptools' appears, the following may be used as a workaround until this issue is resolved. - - ```Shell - pip install setuptools - ``` - - On Linux, the rest of the steps will be executed in a virtual environment. First make a project folder, and create a virtual environment inside it.
To do so, open the terminal app, write the following command, and hit return; here, `pyvenv` is the name of the folder that you wish to create the virtual environment in. @@ -102,13 +93,15 @@ The `sqlanydb` package is the python driver for the data lake Relational Engine ```Shell(Linux) mkdir $HOME/pyvenv ``` - Now, use the venv command to create a virtual environment inside the given folder, here python-virtualenv is the name of the virtual enviroment that is to be created. + + Now, use the venv command to create a virtual environment inside the given folder; here, `python-virtualenv` is the name of the virtual environment that is to be created. ```Shell(Linux) cd $HOME/pyvenv python3 -m venv pyvenv/python-virtualenv ``` - We now activate the virtual enviroment , which we will use to complete the rest of the steps for linux based systems. + + We now activate the virtual environment, which we will use to complete the rest of the steps for Linux-based systems. ```Shell(Linux) source pyvenv/python-virtualenv/bin/activate @@ -118,27 +111,15 @@ The `sqlanydb` package is the python driver for the data lake Relational Engine ![python-install](virtualenv.png) - Depending on which install of the data lake client was used, execute - - - ```Shell (Linux) - cd $IQDIR17/sdk/python - python3 setup.py install - ``` - - or + Now install the driver named `sqlanydb`. ```Shell (Linux) - cd $IQDIR17/sdk/python + cd $HDL_CLIENT_HOME/sdk/python pip install sqlanydb-1.0.14.tar.gz ``` -2. On Microsoft Windows for the non developer licensed install, create a user environment variable named `SQLANY_API_DLL` and set it to `%IQDIR17%\Bin64\dbcapi.dll`. - - ![add a variable named SQLANY_API_DLL](add-variable.png) - - ### Create a Python application that uses sqlanydb to query the data lake Relational Engine + 1. In a shell, create a folder named `python-sqlanydb`, enter the newly created directory, and open a file named `pythonQuery.py` in an editor.
```Shell (Microsoft Windows) @@ -190,8 +171,8 @@ The `sqlanydb` package is the python driver for the data lake Relational Engine For further information on the Python Driver, visit [Python and Database Access](https://help.sap.com/docs/hana-cloud-data-lake/developer-guide-for-data-lake-relational-engine/python-and-database-access). - ### Install the Python ODBC bridge using pip and PyPI + This is an alternate method of connecting to a data lake Relational Engine from a Python app. The Python ODBC bridge is an open-source Python module available on [`PyPI`](https://pypi.org/project/pyodbc/). The performance characteristics between the two drivers may vary depending on the use case. 1. Ensure that you have created a connection to the data lake Relational Engine using ODBC as shown in step 1 (Windows) or 2 (Linux) of the [Connect to Data Lake Relational Engine Using the ODBC Driver](hana-cloud-dl-clients-odbc) tutorial. @@ -213,8 +194,8 @@ This is an alternate method of connecting to a data lake Relational Engine from a >If this command fails on Linux, you may need to install gcc-c++, python3-devel, and unixodbc-dev. - ### Create a Python application that uses pyodbc to query the data lake Relational Engine + 1. In a shell, create a folder named `python-pyodbc`, enter the newly created directory, and open a file named `pythonQuery.py` in an editor.
![ODBC Administrator](odbcWindow.png) @@ -276,8 +259,8 @@ This is an alternate method of connecting to a data lake Relation Engine from a The code in `pythonQuery.py` uses [PEP 249 -- Python Database API Specification](https://www.python.org/dev/peps/pep-0249/), which defines a set of methods that provide a consistent database interface, independent of the actual database being used. - ### Debug the application + Visual Studio Code provides plugins for Python and can be used to debug an application. 1. If you have not already done so, download [Visual Studio Code](https://code.visualstudio.com/Download). @@ -306,6 +289,4 @@ Visual Studio Code provides plugins for Python and can be used to debug an appli Congratulations! You have now created and debugged a Python application that connects to and queries a data lake Relational Engine database. - - --- diff --git a/tutorials/hana-cloud-ecn/hana-cloud-ecn.md b/tutorials/hana-cloud-ecn/hana-cloud-ecn.md index 47eb0cd404..bb5b4c69d6 100644 --- a/tutorials/hana-cloud-ecn/hana-cloud-ecn.md +++ b/tutorials/hana-cloud-ecn/hana-cloud-ecn.md @@ -504,6 +504,8 @@ Workload classes can be used to direct a specified workload to an ECN. Further ![Workload management app](routing-ui-1.png) + After clicking on the routing location, the ECN node ecn1 can be selected. + ![Select the routing location](routing-ui-2.png) 3. Map a workload to the workload class. 
diff --git a/tutorials/hana-cloud-migration/hana-cloud-migration.md b/tutorials/hana-cloud-migration/hana-cloud-migration.md index 4b701c9b7b..ee83a21080 100644 --- a/tutorials/hana-cloud-migration/hana-cloud-migration.md +++ b/tutorials/hana-cloud-migration/hana-cloud-migration.md @@ -33,7 +33,7 @@ The following topics may be of help when planning a migration to SAP HANA Cloud: * [SAP HANA Cloud Migration Guide (product documentation)](https://help.sap.com/docs/hana-cloud/sap-hana-cloud-migration-guide/sap-hana-cloud-migration-guide) -* [Migration to SAP HANA Cloud (sap.com)](https://www.sap.com/products/technology-platform/hana/cloud-migration.html) +* [Migration to SAP HANA Cloud (sap.com)](https://www.sap.com/products/technology-platform/hana/cloud-migration.html) * [Migrating your SAP HANA on-premise to SAP HANA Cloud (video)](https://www.sap.com/assetdetail/2023/01/8a0f1b28-597e-0010-bca6-c68f7e60039b.html) @@ -116,7 +116,7 @@ In order to connect from the Self-Service Migration tool running in the public i ![SAP Cloud Connector](cloud-connector.png) -The cloud connector provides connectivity from a public internet where SAP HANA Cloud is running to an on-premise SAP HANA database. Step-by-step instructions are provided at [Connect from SAP HANA Cloud to SAP HANA, express edition via the Cloud Connector](hana-dbx-remote-sources#b30ac17a-9705-4be4-bc30-d6dbdddfa6d8). +The cloud connector provides connectivity from the public internet, where SAP HANA Cloud is running, to an on-premise SAP HANA database. Step-by-step instructions are provided at [Connect from SAP HANA Cloud to SAP HANA, express edition via the Cloud Connector](hana-dbx-remote-sources). Once the cloud connector has been installed and configured to connect to an SAP BTP subaccount, it will appear as shown below in the SAP BTP Cockpit.
diff --git a/tutorials/hana-cloud-mission-trial-1/hana-cloud-mission-trial-1.md b/tutorials/hana-cloud-mission-trial-1/hana-cloud-mission-trial-1.md index c737cfb930..f3979c8cb1 100644 --- a/tutorials/hana-cloud-mission-trial-1/hana-cloud-mission-trial-1.md +++ b/tutorials/hana-cloud-mission-trial-1/hana-cloud-mission-trial-1.md @@ -8,16 +8,23 @@ tags: [tutorial>beginner, software-product>sap-hana-cloud] primary_tag: software-product>sap-hana-cloud --- -# Sign up for an SAP HANA Cloud Trial account - Learn about the SAP HANA Cloud trial and the process to sign up for it. +# Understand How the SAP HANA Cloud Free Tier Service Can Be Used in an SAP BTP Trial or Productive Account + + Learn about the process to sign up for an SAP Business Technology Platform (SAP BTP) trial or productive account and the details of the SAP HANA Cloud free tier service plan. + +## Prerequisites + +- You have access to an SAP BTP trial account or a productive account that has SAP HANA Cloud entitlements ## You will learn -- Differences between a trial SAP HANA Cloud instance and a non trial instance -- The process to sign up for a trial SAP BTP account + +- Differences between an SAP BTP trial and productive subaccount +- How to sign up for an SAP BTP account +- Differences between running a free tier SAP HANA Cloud instance in an SAP BTP trial or productive subaccount ## Intro -This tutorial is part of a mission, in which you will learn in a hands-on, end-to-end setting how to use SAP HANA Cloud, SAP HANA database. SAP offers two free options to use SAP HANA Cloud. This tutorial covers the first option, which is signing up for the SAP HANA Cloud trial. The trial allows you to use SAP HANA Cloud in a test environment and does not require payment details to sign up, whereas the free tier option can be easily upgraded to a paid version but is only available in enterprise accounts.
If you would like to learn more about the second option of using SAP HANA Cloud free tier, proceed to [this tutorial](hana-cloud-mission-trial-2-ft). +This tutorial is part of a mission, in which you will learn in a hands-on, end-to-end setting how to use SAP HANA Cloud, SAP HANA database. SAP offers a free version of the SAP HANA Cloud service. This free service is offered in both an SAP BTP trial account and an SAP BTP productive subaccount. The SAP HANA Cloud free tier service, when running in a productive subaccount, can be easily upgraded to a paid version. >![Alex Banner](banner-alex.png) > @@ -25,48 +32,64 @@ This tutorial is part of a mission, in which you will learn in a hands-on, end-t > > In this mission, we will help Alex, the CEO of a fictitious company called *Best Run Travel* to answer a concrete business question with SAP HANA Cloud, SAP HANA database: > -> * As a global travel agency, Best Run Travel has data from many different affiliates. -> * Alex needs to know the **top 5 partners** of their agency and wants to find out the **days with maximum booking of each partner**. -> * Best Run Travel uses SAP HANA Cloud, SAP HANA database to store and manage all its data. Now, your mission is to help Alex find a subset of the data related to the partner sales and create a way for Alex to share this subset with other departments in Best Run Travel. - +> - As a global travel agency, Best Run Travel has data from many different affiliates. +> - Alex needs to know the **top 5 partners** of their agency and wants to find out the **days with maximum booking of each partner**. +> - Best Run Travel uses SAP HANA Cloud, SAP HANA database to store and manage all its data. Now, your mission is to help Alex find a subset of the data related to the partner sales and create a way for Alex to share this subset with other departments in Best Run Travel.
--- - ### Get to know the SAP HANA Cloud trial -- You can use your trial account to test the following components: **SAP HANA Cloud, SAP HANA database** and **SAP HANA Cloud, data lake**. +### Get to know the SAP BTP account types -- If your trial account remains inactive, you will be asked to extend your trial every 30 days. If you regularly log in to your trial account, your trial account will be automatically extended up to 90 days. +- SAP BTP trial allows you to build full applications in a test environment to learn and explore the capabilities of SAP BTP. However, once you are ready to move to productive use, a new productive SAP BTP account is required. -- Basic and Advanced trial instances cannot be upgraded to a paid instance; only Free Tier instances have the option to upgrade to a paid instance. +- Customers with a productive SAP BTP account can use free tier service plans for SAP BTP to explore, learn, and try SAP BTP services (such as SAP HANA Cloud) with a path to productive use. -- If you already use other services in SAP Business Technology Platform, those will not be affected or limited in any way by your participation in the SAP HANA Cloud trial. +- Free tier service plans provide a means to try out selected services up to a specified capacity limit. In the case of the SAP HANA Cloud free tier service, when running in a productive account, it can be switched easily to the paid tier service, enabling additional functionality without losing any work. -- Trial database instances are stopped on a nightly basis. Each time you start working with your trial instance, you need to restart it first. +Further details can be found at [Trial Accounts and Free Tier](https://help.sap.com/docs/btp/sap-business-technology-platform/trial-accounts-and-free-tier). -- If you do not restart your instance within **30 days**, it will be **deleted**. Your trial account, however, will continue to exist and you can easily provision an instance again, if you wish to do so.
+### Sign up for an SAP BTP account -- The configuration of your trial instance of SAP HANA Cloud, SAP HANA database is **16 GB of memory, 1 vCPU, and 80 GB of storage**. +In this step, you can learn how to sign up for the SAP BTP trial or a productive account. -- Features such as JSON document store and Script Server require larger HANA Cloud configurations (3 vCPUs, 45 GB of memory) and are therefore *not supported* in a trial account. +To sign up for an SAP BTP trial account, follow the steps below. - -### Sign up for the SAP HANA Cloud trial -In this step, you can learn how to sign up for the trial of SAP HANA Cloud. If you already have an SAP BTP trial account, proceed to the next step to add the appropriate entitlements to your account. - - -1. Click on [this link](https://www.sap.com/products/technology-platform/pricing.html) to get to the try and buy page. +1. Click on [this link](https://www.sap.com/products/technology-platform/pricing.html) to get to the try and buy page. ![Screenshot Trial signup1](ss-01-trial-Signup1.png) Select the Advanced trial option. -2. You will then receive a popup and an email with a link to access the [SAP BTP Trial](https://cockpit.hanatrial.ondemand.com/trial/#/home/trial). +2. You will then receive a popup and an email with a link to access the [SAP BTP Trial](https://cockpit.hanatrial.ondemand.com/trial/#/home/trial). ![Open the trial](ss-02-Trial-Signup2.png) + Select the region that is closest to you. + + ![select region](select-region.png) + >It is important to note that the first time you access your trial, you will need to choose your identity provider (you can choose the default). Additionally, if you have two-factor authentication enabled, you will have to enter the security token that is sent to you based on the method of authentication you have chosen. -Congratulations, you have successfully signed up for the SAP HANA Cloud trial!
Learn how you can start using SAP HANA Cloud in the [next tutorial](hana-cloud-mission-trial-2). +Congratulations, you have successfully signed up for the SAP BTP trial account. + +Alternatively, if you instead wish to work in a productive account, select the free tier option on the previously shown try and buy page. Additional details can be found at [get an account on SAP BTP to try out free tier service plans](btp-free-tier-account). + +### Get to know the SAP HANA Cloud free tier service + +- You can use your SAP BTP trial or productive account to test the following components: **SAP HANA Cloud, SAP HANA database** and **SAP HANA Cloud, data lake**. + +- If your SAP BTP trial account remains inactive, you will be asked to extend your trial every 30 days. If you regularly log in to your trial account, your trial account will be automatically extended up to 90 days. + +- The SAP HANA Cloud free tier service, when running in an SAP BTP productive account, can be upgraded to a paid instance. + +- If you already use other services in SAP Business Technology Platform, those will not be affected or limited in any way by your use of the SAP HANA Cloud free tier service. + +- SAP HANA Cloud free tier instances are stopped on a nightly basis. Each time you start working with your free tier instance, you need to restart it first. + +- If you do not restart your instance within 30 days, it will be deleted. You can easily provision a new instance again, if you wish to do so. + +- The configuration of your free tier instance of SAP HANA Cloud, SAP HANA database is 16 GB of memory, 1 vCPU, and 80 GB of storage. + +- Features such as JSON document store, triple store, and script server require larger SAP HANA Cloud configurations (3 vCPUs, 45 GB of memory) and are therefore *not supported* in the free tier service. Additional details are available at [SAP HANA Database License](https://help.sap.com/docs/hana-cloud/sap-hana-cloud-administration-guide/sap-hana-database-license).
An [alert](https://help.sap.com/docs/alert-notification/sap-alert-notification-for-sap-btp/hdb-free-tier-instance-expiration) is sent if the instance is not started for 15 days. Further details on how to view and receive alerts can be found at [Alerts in SAP HANA Database and Data Lake](https://developers.sap.com/tutorials/hana-cloud-alerts.html). ### Knowledge Check diff --git a/tutorials/hana-cloud-mission-trial-1/select-region.png b/tutorials/hana-cloud-mission-trial-1/select-region.png new file mode 100644 index 0000000000..de6311c750 Binary files /dev/null and b/tutorials/hana-cloud-mission-trial-1/select-region.png differ diff --git a/tutorials/hana-cloud-mission-trial-10/hana-cloud-mission-trial-10.md b/tutorials/hana-cloud-mission-trial-10/hana-cloud-mission-trial-10.md index 3b6e866fba..2145cf9df6 100644 --- a/tutorials/hana-cloud-mission-trial-10/hana-cloud-mission-trial-10.md +++ b/tutorials/hana-cloud-mission-trial-10/hana-cloud-mission-trial-10.md @@ -12,7 +12,6 @@ primary_tag: software-product>sap-hana-cloud Learn how to create a user and grant others access to your calculation view within the SAP HANA database in SAP HANA Cloud. ## Prerequisites -- You have access to [SAP HANA Cloud trial](hana-cloud-mission-trial-2) or [SAP HANA Cloud free tier](hana-cloud-mission-trial-2-ft), or a production environment of SAP HANA Cloud, SAP HANA database - You have completed the tutorial to [provision an instance of SAP HANA Cloud, SAP HANA database](hana-cloud-mission-trial-3) - You have completed the tutorial to [import the sample data needed for this mission](hana-cloud-mission-trial-5) - You have [set up a development project in SAP Business Application Studio and connected it to your database](hana-cloud-mission-trial-8). 
diff --git a/tutorials/hana-cloud-mission-trial-2-ft/BTP-entitlement-save-ft.png b/tutorials/hana-cloud-mission-trial-2-ft/BTP-entitlement-save-ft.png deleted file mode 100644 index c48f8751c8..0000000000 Binary files a/tutorials/hana-cloud-mission-trial-2-ft/BTP-entitlement-save-ft.png and /dev/null differ diff --git a/tutorials/hana-cloud-mission-trial-2-ft/BTP-entitlements-ft.png b/tutorials/hana-cloud-mission-trial-2-ft/BTP-entitlements-ft.png deleted file mode 100644 index 1b0c67c07a..0000000000 Binary files a/tutorials/hana-cloud-mission-trial-2-ft/BTP-entitlements-ft.png and /dev/null differ diff --git a/tutorials/hana-cloud-mission-trial-2-ft/BTP-global-account.png b/tutorials/hana-cloud-mission-trial-2-ft/BTP-global-account.png deleted file mode 100644 index bd7ebe5af1..0000000000 Binary files a/tutorials/hana-cloud-mission-trial-2-ft/BTP-global-account.png and /dev/null differ diff --git a/tutorials/hana-cloud-mission-trial-2-ft/add-serv-plans.png b/tutorials/hana-cloud-mission-trial-2-ft/add-serv-plans.png deleted file mode 100644 index bb79f90292..0000000000 Binary files a/tutorials/hana-cloud-mission-trial-2-ft/add-serv-plans.png and /dev/null differ diff --git a/tutorials/hana-cloud-mission-trial-2-ft/assign-role.png b/tutorials/hana-cloud-mission-trial-2-ft/assign-role.png deleted file mode 100644 index 5b0fad7cfe..0000000000 Binary files a/tutorials/hana-cloud-mission-trial-2-ft/assign-role.png and /dev/null differ diff --git a/tutorials/hana-cloud-mission-trial-2-ft/banner-alex.png b/tutorials/hana-cloud-mission-trial-2-ft/banner-alex.png deleted file mode 100644 index 366d8c79ce..0000000000 Binary files a/tutorials/hana-cloud-mission-trial-2-ft/banner-alex.png and /dev/null differ diff --git a/tutorials/hana-cloud-mission-trial-2-ft/create-instance.png b/tutorials/hana-cloud-mission-trial-2-ft/create-instance.png deleted file mode 100644 index 6c37c5f769..0000000000 Binary files a/tutorials/hana-cloud-mission-trial-2-ft/create-instance.png and 
/dev/null differ diff --git a/tutorials/hana-cloud-mission-trial-2-ft/hana-cloud-mission-trial-2-ft.md b/tutorials/hana-cloud-mission-trial-2-ft/hana-cloud-mission-trial-2-ft.md deleted file mode 100644 index f22b420460..0000000000 --- a/tutorials/hana-cloud-mission-trial-2-ft/hana-cloud-mission-trial-2-ft.md +++ /dev/null @@ -1,165 +0,0 @@ ---- -parser: v2 -author_name: Dan van Leeuwen -author_profile: https://github.com/danielva -auto_validation: true -time: 10 -tags: [ tutorial>beginner, software-product>sap-hana-cloud, software-product-function>sap-btp-cockpit] -primary_tag: software-product>sap-hana-cloud ---- - -# Start Using SAP HANA Cloud Free Tier in SAP BTP Cockpit - Learn how to get started with SAP HANA Cloud free tier or how to add it to an existing account on SAP Business Technology Platform. - -## Prerequisites -## You will learn -- How to sign up for SAP HANA Cloud free tier -- How to add SAP HANA Cloud to an existing SAP BTP account -- How the SAP BTP Cockpit is structured and where to find SAP HANA Cloud in it - - -## Intro -This tutorial is part of a mission, in which you will learn in a hands-on, end-to-end setting how to use SAP HANA Cloud, SAP HANA database. SAP offers two free options to use SAP HANA Cloud. This tutorial covers the second option, which is using SAP HANA Cloud free tier. The free tier option can be easily upgraded to a paid version but does require payment details, while the trial allows you to use SAP HANA Cloud in a test environment and does not require payment details to sign up. If you would like to learn more about the first option (SAP HANA Cloud trial), navigate to the [previous tutorial](hana-cloud-mission-trial-2). 
- - ->![Alex Banner](banner-alex.png) -> -> **Help Alex gain business insights using SAP HANA Cloud, SAP HANA database.** -> -> In this mission, we will help Alex, the CEO of a fictitious company called *Best Run Travel* to answer a concrete business question with SAP HANA Cloud, SAP HANA database: -> -> * As a global travel agency, Best Run Travel has data from many different affiliates. -> * Alex needs to know the **top 5 partners** of their agency and wants to find out the **days with maximum booking of each partner**. -> * Best Run Travel uses SAP HANA Cloud, SAP HANA database to store and manage all its data. Now, your mission is to help Alex find a subset of the data related to the partner sales and create a way for Alex to share this subset with other departments in Best Run Travel. - - -In this tutorial, you will learn how to create an SAP Business Technology Platform (BTP) account and then add SAP HANA Cloud free tier services to your SAP BTP account. Having access to SAP HANA Cloud is a prerequisite for all other tutorials in this mission. - -> If you have a **production environment** of SAP HANA Cloud, SAP HANA database, you may also follow the steps described in this mission. - ---- - -### Get to know the SAP HANA Cloud free tier model -- Customers with an enterprise account can use the free service plans for SAP BTP to explore, learn, and try SAP BTP services (such as SAP HANA Cloud) with a path to productive use. - -- The free tier means you can try out selected services up to a specified capacity limit and switch easily to the paid tier, without losing any work. - -- The configuration of your free tier instance of SAP HANA Cloud, SAP HANA database is *16 GB of memory, 1 vCPU, and 80 GB of storage**. - -- If you do not restart your instance within **30 days**, it will be **deleted**. Additional details are available at [SAP HANA Database License](https://help.sap.com/docs/hana-cloud/sap-hana-cloud-administration-guide/sap-hana-database-license). 
An [alert](https://help.sap.com/docs/alert-notification/sap-alert-notification-for-sap-btp/hdb-free-tier-instance-expiration) is sent if the instance is not started for 15 days. Further details on how to view and receive alerts can be found at [Alerts in SAP HANA Database and Data Lake](https://developers.sap.com/tutorials/hana-cloud-alerts.html). - -- In comparison, SAP BTP trial (introduced in the first tutorial) allows you to build full applications in a test environment to learn and explore the capabilities of SAP BTP. However, once customers and partners are ready to move to the next phase and deploy to production, they will need to get a new productive account and start over. - -- SAP HANA Cloud services are available as both a trial or free tier model – customers and partners can choose the option based on their preferences. Note that this tutorial contains details for the second option: SAP HANA Cloud free tier. - -- Features such as JSON document store, knowledge graph, and Script Server require larger HANA Cloud configurations (3 `vCPUs`, 45G memory) and are therefore *not supported* when using free tier. - -### Create an SAP BTP account to use the free tier model -If you would like to use the free tier model to get started with SAP HANA Cloud, [get an account on SAP BTP to try out free tier service plans](btp-free-tier-account) by following steps 1 to 8 of the linked tutorial. - -In order to use the SAP HANA Cloud free tier model, you will need to have the SAP HANA Cloud service entitlement available in your subaccount. To provision a free tier instance, the free tier service plans must be enabled in your subaccount entitlement (the next step will walk you through this). If you plan to upgrade your instance to a paid tier, the paid tier service plans must also be enabled. No charges will occur if you are only making use of the free tier service plans. 
However, if you decide to upgrade to a paid tier service plan, applicable charges will occur once the instance has been upgraded. - -Once you have your SAP BTP account setup, proceed to the next step to learn how to add the appropriate entitlements to your account. - -### Add SAP HANA Cloud to an existing SAP BTP account ->If you have an existing SAP BTP account, this section will walk you through adding entitlements to your SAP BTP account so you can start using the appropriate services. - -1. In the SAP BTP cockpit, click on your **subaccount**. - - ![open the subaccount](open-subaccount.png) - -2. Then click on **Entitlements** on the left-hand side menu and search for entitlements for SAP HANA. - - ![BTP Entitlements](BTP-entitlements-ft.png) - -3. Confirm that you have entitlements for the services (and service plans) listed here: - - - SAP HANA Cloud: - * `tools (Application)` - * `hana-free` - * `hana-cloud-connection-free` - * `relational-data-lake-free` - - - SAP HANA Schemas & HDI Containers: - * `hdi-shared` - * `schema` - * `securestore` - -4. If you do not have any of the entitlements above, you need to add them to your account. To do that, click on **Edit** on the top right-hand corner of the screen, then click on **Add Service Plans** in the same area of the screen. - - In the pop-up that opens, type `SAP HANA` in the search box to see all relevant entitlements. - - ![BTP select entitlements](add-serv-plans.png) - - After clicking on **Add X Service Plans**, where X is the number of services you want to add, make sure to click on the **Save** button. - - ![BTP entitlements save](BTP-entitlement-save-ft.png) - -### Add a subscription to SAP HANA Cloud tools - -1. From SAP BTP Cockpit, click on **Services** and then **Service Marketplace**. Search for **SAP HANA Cloud** and click **Create** in the top-right corner. - - ![Create an instance of SAP HANA Cloud](create-instance.png) - -2. Select **SAP HANA Cloud** under Service and **tools** under Plan. 
- - ![subscribe to tooling](subscribe-to-tooling-existing-acct.png) - -3. To ensure that your desired user has the necessary permissions to manage instances in HANA Cloud Central, navigate to **Security** > **Users** in the left hand side menu. Then click on your user. - - ![user management](user-mgmt.png) - - Click on the **Assign Role Collection** button. - - ![assign role collection](assign-role.png) - - Select **SAP HANA Cloud Administrator** then click Assign Role Collection. - - ![Select SAP HANA admin role](role-selected.png) - -4. Navigate to **Services**, **Instances and Subscriptions** and click on **SAP HANA Cloud** to open SAP HANA Cloud Central. - - ![hana cloud central](hcc-app.png) - -Congratulations, you have added free tier services to your account on SAP BTP! You now have the ability to [provision your free tier instance of SAP HANA Cloud](hana-cloud-mission-trial-3) and start your journey. - - -### Get to know SAP BTP cockpit -SAP BTP cockpit is a web-based interface used to manage SAP cloud applications, such as SAP HANA Cloud. This is where you can manage your SAP Business Technology Platform account and users as well as create new instances whenever necessary. - -Use the **Help** button at the top right-hand corner of the screen once you are logged in. This will open a **Help Topics** pane where areas that you can get help custom to the page will appear, as well as embedded links to guided answers and documentation. - -![BTP Help](BTP-help.png) - -For further details, consult our documentation material [here](https://help.sap.com/docs/btp). - - -### Understand Accounts, Directories, and Subaccounts -Your account on SAP Business Technology Platform is called a **global account**. As the administrator, you will have full control of your global account and be able to create directories, subaccounts, and instances. Subaccounts are a smaller part of your global account. Directories group subaccounts under the global account. 
- -![BTP Global Account](BTP-global-account.png) - -Below you can see a simplified diagram of a global account in SAP BTP Cockpit with different ways in which directories, subaccounts, are used to organize SAP HANA database and data lake instances. Of course, once you use SAP HANA Cloud, you will most likely have many more databases, subaccounts, and perhaps even global accounts. These levels will then help you keep everything well-organized. - -![BTP Illustration](btp-illustration.png) - -> **Global Account**: Your account on the SAP BTP Platform is called a global account. As the administrator, you will have full control of your global account and be able to create subaccounts, spaces, and instances. -> -> **Directories**: Directories group subaccounts into a folder and are useful to organize them. For example, if your subaccounts are geographical regions such as countries, your directories could be continents. -> -> **Subaccounts**: Subaccounts are a smaller part of your global account. For example, if your global account is your whole organization, your subaccounts could be either your geographical regions or specific departments, depending on what your internal structure requires. -> -> **Instances**: You can create and access instances of SAP HANA Cloud, SAP HANA database and SAP HANA Cloud, data lake. -> -> **Spaces**: You can choose to optionally provision an SAP HANA Cloud instance into the Cloud Foundry runtime. If you do, multiple Cloud Foundry spaces can be used to further organize instances. - -*Well done!* - -You have completed the second tutorial of this mission! Learn in the [next tutorial](hana-cloud-mission-trial-3) how to provision an instance of SAP HANA Cloud, SAP HANA database. 
- - - -### Knowledge Check - - ---- diff --git a/tutorials/hana-cloud-mission-trial-2-ft/hcc-app.png b/tutorials/hana-cloud-mission-trial-2-ft/hcc-app.png deleted file mode 100644 index 501080de19..0000000000 Binary files a/tutorials/hana-cloud-mission-trial-2-ft/hcc-app.png and /dev/null differ diff --git a/tutorials/hana-cloud-mission-trial-2-ft/open-subaccount.png b/tutorials/hana-cloud-mission-trial-2-ft/open-subaccount.png deleted file mode 100644 index ef4ba7bb7a..0000000000 Binary files a/tutorials/hana-cloud-mission-trial-2-ft/open-subaccount.png and /dev/null differ diff --git a/tutorials/hana-cloud-mission-trial-2-ft/role-selected.png b/tutorials/hana-cloud-mission-trial-2-ft/role-selected.png deleted file mode 100644 index 0e962639aa..0000000000 Binary files a/tutorials/hana-cloud-mission-trial-2-ft/role-selected.png and /dev/null differ diff --git a/tutorials/hana-cloud-mission-trial-2-ft/subscribe-to-tooling-existing-acct.png b/tutorials/hana-cloud-mission-trial-2-ft/subscribe-to-tooling-existing-acct.png deleted file mode 100644 index 8a6d0803e4..0000000000 Binary files a/tutorials/hana-cloud-mission-trial-2-ft/subscribe-to-tooling-existing-acct.png and /dev/null differ diff --git a/tutorials/hana-cloud-mission-trial-2-ft/user-mgmt.png b/tutorials/hana-cloud-mission-trial-2-ft/user-mgmt.png deleted file mode 100644 index 375060f19a..0000000000 Binary files a/tutorials/hana-cloud-mission-trial-2-ft/user-mgmt.png and /dev/null differ diff --git a/tutorials/hana-cloud-mission-trial-2/BTP-entitlement-save-ft.png b/tutorials/hana-cloud-mission-trial-2/BTP-entitlement-save-ft.png new file mode 100644 index 0000000000..afe8fbdcc4 Binary files /dev/null and b/tutorials/hana-cloud-mission-trial-2/BTP-entitlement-save-ft.png differ diff --git a/tutorials/hana-cloud-mission-trial-2/BTP-entitlements-ft.png b/tutorials/hana-cloud-mission-trial-2/BTP-entitlements-ft.png new file mode 100644 index 0000000000..2139061592 Binary files /dev/null and 
b/tutorials/hana-cloud-mission-trial-2/BTP-entitlements-ft.png differ diff --git a/tutorials/hana-cloud-mission-trial-2/BTP-entitlements.png b/tutorials/hana-cloud-mission-trial-2/BTP-entitlements.png deleted file mode 100644 index 4fcce3d58d..0000000000 Binary files a/tutorials/hana-cloud-mission-trial-2/BTP-entitlements.png and /dev/null differ diff --git a/tutorials/hana-cloud-mission-trial-2/BTP-global-account.png b/tutorials/hana-cloud-mission-trial-2/BTP-global-account.png new file mode 100644 index 0000000000..3da60b8029 Binary files /dev/null and b/tutorials/hana-cloud-mission-trial-2/BTP-global-account.png differ diff --git a/tutorials/hana-cloud-mission-trial-2/BTP-global-account2.png b/tutorials/hana-cloud-mission-trial-2/BTP-global-account2.png deleted file mode 100644 index 7373ad4827..0000000000 Binary files a/tutorials/hana-cloud-mission-trial-2/BTP-global-account2.png and /dev/null differ diff --git a/tutorials/hana-cloud-mission-trial-2/BTP-help-trial.png b/tutorials/hana-cloud-mission-trial-2/BTP-help-trial.png deleted file mode 100644 index 0a28cbdca0..0000000000 Binary files a/tutorials/hana-cloud-mission-trial-2/BTP-help-trial.png and /dev/null differ diff --git a/tutorials/hana-cloud-mission-trial-2-ft/BTP-help.png b/tutorials/hana-cloud-mission-trial-2/BTP-help.png similarity index 100% rename from tutorials/hana-cloud-mission-trial-2-ft/BTP-help.png rename to tutorials/hana-cloud-mission-trial-2/BTP-help.png diff --git a/tutorials/hana-cloud-mission-trial-2/BTP-trial-period.png b/tutorials/hana-cloud-mission-trial-2/BTP-trial-period.png deleted file mode 100644 index d9fa70ed05..0000000000 Binary files a/tutorials/hana-cloud-mission-trial-2/BTP-trial-period.png and /dev/null differ diff --git a/tutorials/hana-cloud-mission-trial-2/add-serv-plans.png b/tutorials/hana-cloud-mission-trial-2/add-serv-plans.png new file mode 100644 index 0000000000..1c9c950f78 Binary files /dev/null and b/tutorials/hana-cloud-mission-trial-2/add-serv-plans.png 
differ diff --git a/tutorials/hana-cloud-mission-trial-2/assign-role.png b/tutorials/hana-cloud-mission-trial-2/assign-role.png index b65e51060b..fd0b9f23db 100644 Binary files a/tutorials/hana-cloud-mission-trial-2/assign-role.png and b/tutorials/hana-cloud-mission-trial-2/assign-role.png differ diff --git a/tutorials/hana-cloud-mission-trial-2-ft/btp-illustration.png b/tutorials/hana-cloud-mission-trial-2/btp-illustration.png similarity index 100% rename from tutorials/hana-cloud-mission-trial-2-ft/btp-illustration.png rename to tutorials/hana-cloud-mission-trial-2/btp-illustration.png diff --git a/tutorials/hana-cloud-mission-trial-2/btp-org-illustration.png b/tutorials/hana-cloud-mission-trial-2/btp-org-illustration.png deleted file mode 100644 index ee98091a23..0000000000 Binary files a/tutorials/hana-cloud-mission-trial-2/btp-org-illustration.png and /dev/null differ diff --git a/tutorials/hana-cloud-mission-trial-2/create-instance.png b/tutorials/hana-cloud-mission-trial-2/create-instance.png deleted file mode 100644 index 18a21fe559..0000000000 Binary files a/tutorials/hana-cloud-mission-trial-2/create-instance.png and /dev/null differ diff --git a/tutorials/hana-cloud-mission-trial-2/create-subscription.png b/tutorials/hana-cloud-mission-trial-2/create-subscription.png new file mode 100644 index 0000000000..dd90de5928 Binary files /dev/null and b/tutorials/hana-cloud-mission-trial-2/create-subscription.png differ diff --git a/tutorials/hana-cloud-mission-trial-2-ft/deletion-warning.png b/tutorials/hana-cloud-mission-trial-2/deletion-warning.png similarity index 100% rename from tutorials/hana-cloud-mission-trial-2-ft/deletion-warning.png rename to tutorials/hana-cloud-mission-trial-2/deletion-warning.png diff --git a/tutorials/hana-cloud-mission-trial-2/hana-cloud-mission-trial-2.md b/tutorials/hana-cloud-mission-trial-2/hana-cloud-mission-trial-2.md index 5a2470fdb9..14d4773cad 100644 --- 
a/tutorials/hana-cloud-mission-trial-2/hana-cloud-mission-trial-2.md +++ b/tutorials/hana-cloud-mission-trial-2/hana-cloud-mission-trial-2.md @@ -8,19 +8,22 @@ tags: [ tutorial>beginner, software-product>sap-hana-cloud, software-product-fun primary_tag: software-product>sap-hana-cloud --- -# Start Using SAP HANA Cloud Trial in SAP BTP Cockpit - Learn how to configure entitlements and create a subscription for SAP HANA Cloud. +# Start Using SAP HANA Cloud Free Tier Service in SAP BTP Cockpit + + Learn how to get started with the SAP HANA Cloud free tier service on SAP Business Technology Platform (SAP BTP). ## Prerequisites -- You have signed up for an [SAP HANA Cloud trial account](hana-cloud-mission-trial-1) + +- You have access to an SAP BTP trial account or a productive account that has SAP HANA Cloud entitlements ## You will learn -- How to add SAP HANA Cloud to an existing SAP BTP trial account -- How the SAP BTP cockpit is structured and where to find SAP HANA Cloud in it + +- How to subscribe to the SAP HANA Cloud tools +- About the SAP BTP cockpit and SAP BTP subaccounts ## Intro -This tutorial is part of a mission, in which you will learn in a hands-on, end-to-end setting how to use SAP HANA Cloud, SAP HANA database. SAP offers two free options to use SAP HANA Cloud. This tutorial covers the first option, which is using SAP HANA Cloud trial. The free tier option can be easily upgraded to a paid version but does require payment details, while the trial allows you to use SAP HANA Cloud in a test environment and does not require payment details to sign up. If you would like to learn more about the free tier option (SAP HANA Cloud trial), navigate to the [next tutorial](hana-cloud-mission-trial-2-ft). +This tutorial is part of a mission, in which you will learn in a hands-on, end-to-end setting how to use SAP HANA Cloud, SAP HANA database.
In this tutorial you will learn how to work with the entitlements and role collections required for the SAP HANA Cloud tooling as well as get an overview of the SAP BTP Cockpit and subaccount. >![Alex Banner](banner-alex.png) > @@ -28,50 +31,58 @@ This tutorial is part of a mission, in which you will learn in a hands-on, end-t > > In this mission, we will help Alex, the CEO of a fictitious company called *Best Run Travel* to answer a concrete business question with SAP HANA Cloud, SAP HANA database: > -> * As a global travel agency, Best Run Travel has data from many different affiliates. -> * Alex needs to know the **top 5 partners** of their agency and wants to find out the **days with maximum booking of each partner**. -> * Best Run Travel uses SAP HANA Cloud, SAP HANA database to store and manage all its data. Now, your mission is to help Alex find a subset of the data related to the partner sales and create a way for Alex to share this subset with other departments in Best Run Travel. +> - As a global travel agency, Best Run Travel has data from many different affiliates. +> - Alex needs to know the **top 5 partners** of their agency and wants to find out the **days with maximum booking of each partner**. +> - Best Run Travel uses SAP HANA Cloud, SAP HANA database to store and manage all its data. Now, your mission is to help Alex find a subset of the data related to the partner sales and create a way for Alex to share this subset with other departments in Best Run Travel. - In this tutorial, you will learn how to add the new multi-environment tooling SAP HANA Cloud to new or existing trial accounts of SAP Business Technology Platform. Having access to SAP HANA Cloud is a prerequisite for all other tutorials in this mission. +--- -> If you have a **production environment** of SAP HANA Cloud, SAP HANA database, you may also follow the steps described in this mission. 
+### Examine the SAP HANA Cloud entitlements -### Examine entitlements for SAP HANA Cloud +Before a service can be used in a subaccount, its entitlements must be enabled there. -1. In the SAP BTP Cockpit, click on your **subaccount**. - - ![open the trial subaccount](subaccount2.png) +1. In the SAP BTP cockpit, click on your **subaccount**. - > Currently, the trial version of SAP HANA Cloud is offered only in AWS US East. + ![open the subaccount](open-subaccount.png) -2. Then click on **Entitlements** on the left-hand side menu and search for entitlements for SAP HANA. +2. Then click on **Entitlements** on the left-hand side menu and search for entitlements for SAP HANA Cloud. - ![BTP Entitlements](BTP-entitlements.png) + ![BTP Entitlements](BTP-entitlements-ft.png) -3. Notice that the following entitlements are shown. +3. Confirm that you have entitlements for the services and plans listed here: - - SAP HANA Cloud: - * `tools (Application)` - * `hana-cloud-connection` - * `hana` - * `relational-data-lake` + - SAP HANA Cloud: + - `tools (Application)` + - `hana-free` + - `hana-cloud-connection-free` + - `relational-data-lake-free` - SAP HANA Schemas & HDI Containers: - * `hdi-shared` - * `schema` - * `securestore` + - `hdi-shared` + - `schema` + - `securestore` + +4. If you are missing any of the entitlements above, you need to add them to your account. To do that, click on **Edit** on the top right-hand corner of the screen, then click on **Add Service Plans** in the same area of the screen. + + In the pop-up that opens, type `SAP HANA` in the search box to see all relevant entitlements. + + ![BTP select entitlements](add-serv-plans.png) + + After clicking on **Add X Service Plans**, where X is the number of services you want to add, make sure to click on the **Save** button. + + ![BTP entitlements save](BTP-entitlement-save-ft.png) ### Add a subscription to SAP HANA Cloud tools 1. From SAP BTP Cockpit, click on **Services** and then **Service Marketplace**.
Search for **SAP HANA Cloud** and click **Create** in the top-right corner. - ![Create an instance of SAP HANA Cloud](create-instance.png) + ![Create an instance of SAP HANA Cloud](create-subscription.png) 2. Select **SAP HANA Cloud** under Service and **tools** under Plan. ![subscribe to tooling](subscribe-to-tooling-existing-acct.png) -3. To ensure that your desired user has the necessary permissions to manage instances in HANA Cloud Central, navigate to **Security** > **Users** in the left-hand side menu. Then click on your user. +3. To ensure that your desired user has the necessary permissions to manage instances in HANA Cloud Central, navigate to **Security** > **Users** in the left-hand side menu. Then click on your user. ![user management](user-mgmt.png) @@ -83,50 +94,32 @@ This tutorial is part of a mission, in which you will learn in a hands-on, end-t ![Select SAP HANA admin role](role-selected.png) -4. Navigate to **Instances**, **Instances and Subscriptions** and click on **SAP HANA Cloud** to open SAP HANA Cloud Central. +4. Navigate to **Services**, **Instances and Subscriptions** and click on **SAP HANA Cloud** to open SAP HANA Cloud Central. ![hana cloud central](hcc-app.png) +Congratulations, you have added free tier services to your account on SAP BTP! You now have the ability to [provision your free tier instance of SAP HANA Cloud](hana-cloud-mission-trial-3) and start your journey. -Congratulations, you have added the SAP HANA Cloud entitlements and subscribed to the multi-environment tools. You now have the ability to [provision your trial instance of SAP HANA Cloud](hana-cloud-mission-trial-3) and start your journey. - -### Get to know SAP BTP Cockpit -SAP BTP Cockpit is a web-based interface used to manage SAP cloud applications, such as SAP HANA Cloud. This is where you can manage your SAP Business Technology Platform account and users as well as create new instances whenever necessary.
+### Get to know SAP BTP cockpit -![Screenshot Trial home page](ss-09-trial-home-page.png) +SAP BTP cockpit is a web-based interface used to manage SAP cloud applications, such as SAP HANA Cloud. This is where you can manage your SAP Business Technology Platform account and users as well as create new instances whenever necessary. -When you first access your trial account, you will see the [**Trial Home Page**](https://account.hanatrial.ondemand.com/trial/#/home/trial). +Use the **Help** button at the top right-hand corner of the screen once you are logged in. This will open a **Help Topics** pane showing help topics customized to the current page, as well as embedded links to guided answers and documentation. -> In a production environment, you do not see the Trial Home Page. +![BTP Help](BTP-help.png) -This is where you can enter your account but also find helpful resources to get to know the SAP BTP Cockpit in detail: +For further details, consult our documentation material [here](https://help.sap.com/docs/btp). -- Take the virtual tour once you start your trial for the first time. +### Understand Accounts, Directories, and Subaccounts - ![Screenshot Trial home page Tour](ss-10-trial-home-page-tour.png) +Your account on SAP Business Technology Platform is called a **global account**. As the administrator, you will have full control of your global account and be able to create directories, subaccounts, and instances. Subaccounts are a smaller part of your global account. Directories group subaccounts under the global account. -There is also some built-in functionality that can help you with using SAP BTP Cockpit and provide you with more information: +![BTP Global Account](BTP-global-account.png) -- Use the **Help** button at the top right-hand corner of the screen once you are logged in.
This will open a **Help Topics** pane where areas that you can get help custom to the page will appear, as well as embedded links to guided answers and documentation. +Below you can see a simplified diagram of a global account in SAP BTP cockpit with different ways in which directories and subaccounts are used to organize SAP HANA database and data lake instances. Of course, once you use SAP HANA Cloud, you will most likely have many more databases, subaccounts, and perhaps even global accounts. These levels will then help you keep everything well-organized. - ![BTP Help](BTP-help-trial.png) - -- Use **Trial Period** to get more information about your SAP BTP Trial. - - ![BTP Help](BTP-trial-period.png) - -- For further details, consult our documentation material [here](https://help.sap.com/docs/btp). - - -### Understand Accounts, Directories, Subaccounts, and Spaces -Your account on SAP Business Technology Platform is called a **global account**. As the administrator, you will have full control of your global account and be able to create directories, subaccounts, and instances. Subaccounts are a smaller part of your global account. Directories are groups of subaccounts under the global account. - -![BTP Global Account](BTP-global-account2.png) - -Below you can see a simplified diagram of a global account in SAP BTP cockpit with different ways in which directories and subaccounts are used to organize SAP HANA database and data lake instances. Of course, once you use SAP HANA Cloud, you will most likely have many more databases, subaccounts, and perhaps even global accounts. These levels will then help you keep everything well-organized. - -![BTP Illustration](btp-org-illustration.png) +![BTP Illustration](btp-illustration.png)
> @@ -134,15 +127,15 @@ Below you can see a simplified diagram of a global account in SAP BTP cockpit wi > > **Subaccounts**: Subaccounts are a smaller part of your global account. For example, if your global account is your whole organization, your subaccounts could be either your geographical regions or specific departments, depending on what your internal structure requires. > -> **Instances**: You can create and access instances of SAP HANA Cloud, SAP HANA database, and SAP HANA Cloud, data lake. +> **Instances**: You can create and access instances of SAP HANA Cloud, SAP HANA database, and SAP HANA Cloud, data lake. > > **Spaces**: You can choose to optionally provision an SAP HANA Cloud instance into the Cloud Foundry runtime. If you do, multiple Cloud Foundry spaces can be used to further organize instances. *Well done!* -You have completed the first tutorial of this mission! Proceed to the [fourth tutorial](hana-cloud-mission-trial-3) in this mission to learn how to provision an instance of SAP HANA Cloud, SAP HANA database. +You have completed the second tutorial of this mission! Proceed to the [next tutorial](hana-cloud-mission-trial-3) to learn how to provision an instance of SAP HANA Cloud, SAP HANA database.
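The account hierarchy described in the definitions above can be pictured as a simple nested structure. The sketch below is purely illustrative — the names are invented for the Best Run Travel scenario and do not correspond to real accounts:

```json
{
  "globalAccount": "Best Run Travel",
  "directories": [
    {
      "name": "EMEA",
      "subaccounts": [
        { "name": "travel-dev",  "instances": ["HC_HDB"] },
        { "name": "travel-prod", "instances": ["HC_HDB", "HC_DATA_LAKE"] }
      ]
    }
  ]
}
```

Note that directories are optional: subaccounts can also sit directly under the global account, and Cloud Foundry spaces only come into play if an instance is provisioned into the Cloud Foundry runtime.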
-### Knowledge Check +### Knowledge Check --- diff --git a/tutorials/hana-cloud-mission-trial-2/hcc-app.png b/tutorials/hana-cloud-mission-trial-2/hcc-app.png index c10904f618..ff4f2cec80 100644 Binary files a/tutorials/hana-cloud-mission-trial-2/hcc-app.png and b/tutorials/hana-cloud-mission-trial-2/hcc-app.png differ diff --git a/tutorials/hana-cloud-mission-trial-2/open-subaccount.png b/tutorials/hana-cloud-mission-trial-2/open-subaccount.png new file mode 100644 index 0000000000..b06952a155 Binary files /dev/null and b/tutorials/hana-cloud-mission-trial-2/open-subaccount.png differ diff --git a/tutorials/hana-cloud-mission-trial-2/role-selected.png b/tutorials/hana-cloud-mission-trial-2/role-selected.png index 1a87eed9a1..31d6e54950 100644 Binary files a/tutorials/hana-cloud-mission-trial-2/role-selected.png and b/tutorials/hana-cloud-mission-trial-2/role-selected.png differ diff --git a/tutorials/hana-cloud-mission-trial-2/ss-09-trial-home-page.png b/tutorials/hana-cloud-mission-trial-2/ss-09-trial-home-page.png deleted file mode 100644 index 18d851edb7..0000000000 Binary files a/tutorials/hana-cloud-mission-trial-2/ss-09-trial-home-page.png and /dev/null differ diff --git a/tutorials/hana-cloud-mission-trial-2/ss-10-trial-home-page-tour.png b/tutorials/hana-cloud-mission-trial-2/ss-10-trial-home-page-tour.png deleted file mode 100644 index 53acc361b6..0000000000 Binary files a/tutorials/hana-cloud-mission-trial-2/ss-10-trial-home-page-tour.png and /dev/null differ diff --git a/tutorials/hana-cloud-mission-trial-2/subaccount2.png b/tutorials/hana-cloud-mission-trial-2/subaccount2.png deleted file mode 100644 index 4b9615686e..0000000000 Binary files a/tutorials/hana-cloud-mission-trial-2/subaccount2.png and /dev/null differ diff --git a/tutorials/hana-cloud-mission-trial-2/subscribe-to-tooling-existing-acct.png b/tutorials/hana-cloud-mission-trial-2/subscribe-to-tooling-existing-acct.png index d0dea7e896..f73fa6db16 100644 Binary files 
a/tutorials/hana-cloud-mission-trial-2/subscribe-to-tooling-existing-acct.png and b/tutorials/hana-cloud-mission-trial-2/subscribe-to-tooling-existing-acct.png differ diff --git a/tutorials/hana-cloud-mission-trial-2/user-mgmt.png b/tutorials/hana-cloud-mission-trial-2/user-mgmt.png index 9999bdd7ba..1becfc489f 100644 Binary files a/tutorials/hana-cloud-mission-trial-2/user-mgmt.png and b/tutorials/hana-cloud-mission-trial-2/user-mgmt.png differ diff --git a/tutorials/hana-cloud-mission-trial-3/estimator-tiers.png b/tutorials/hana-cloud-mission-trial-3/estimator-tiers.png index de5b8f7398..a00f84508a 100644 Binary files a/tutorials/hana-cloud-mission-trial-3/estimator-tiers.png and b/tutorials/hana-cloud-mission-trial-3/estimator-tiers.png differ diff --git a/tutorials/hana-cloud-mission-trial-3/hana-cloud-mission-trial-3.md b/tutorials/hana-cloud-mission-trial-3/hana-cloud-mission-trial-3.md index a5bbbf1c49..775d8acba1 100644 --- a/tutorials/hana-cloud-mission-trial-3/hana-cloud-mission-trial-3.md +++ b/tutorials/hana-cloud-mission-trial-3/hana-cloud-mission-trial-3.md @@ -9,25 +9,28 @@ primary_tag: software-product>sap-hana-cloud --- # Provision an Instance of SAP HANA Cloud, SAP HANA Database + Learn how to provision an instance of SAP HANA Cloud, SAP HANA database. ## Prerequisites -- You have access to [SAP HANA Cloud trial](hana-cloud-mission-trial-2), [SAP HANA Cloud free tier](hana-cloud-mission-trial-2-ft), or a production environment of SAP HANA Cloud, SAP HANA database +- You have access to an SAP BTP trial account or a productive account that has SAP HANA Cloud entitlements ## You will learn + +- How to use the provisioning wizard in SAP HANA Cloud to configure the settings and features of an SAP HANA Cloud database instance
## Intro -A few notes to remember about free tier model and trial accounts: -- If you are using a free tier model or trial account, you will only be able to create one instance with a predefined size (16 GB of memory, 1 vCPU, and 80 GB of storage for trial). However, the process to create the instance is very similar to production environments, the difference being that in a production environment you have the ability to further customize your instance. For example, you are able to change advanced settings for your SAP HANA Cloud instance. +A few notes to remember about free tier instances: + +- If you are using the free tier, you can create only one instance, with a predefined size (16 GB of memory, 1 vCPU, and 80 GB of storage). However, the process to create the instance is very similar to production environments, the difference being that in a production environment you can further customize your instance. For example, you can change advanced settings for your SAP HANA Cloud instance. -- Free tier model and trial instances will be **stopped on a nightly basis**. Each time you start working with your free tier model or trial instance, you need to restart it. +- Free tier instances will be **stopped on a nightly basis**. Each time you start working with your free tier instance, you need to restart it. -- If you do not restart your trial instance within **30 days**, it will be **deleted**. Your BTP account, however, will continue to exist and you can easily provision an instance again, if you wish to do so. +- If you do not restart your free tier instance within **30 days**, it will be **deleted**. Your BTP account, however, will continue to exist and you can easily provision an instance again, if you wish to do so. -- The instance summary card: Trial (left) and free tier (middle) does not display a cost estimate.
If you are using free tier, ensure you see the free tier indicator icon since paid tier (right) will show you a cost estimate meaning charges will be incurred if you create an instance. +- The instance summary card for a free tier instance does not display a cost estimate. If you are using free tier, ensure that you see the free tier indicator icon, since the paid tier card (right) shows a cost estimate, meaning charges will be incurred if you create an instance. ![Estimator for each tier](estimator-tiers.png) @@ -35,28 +38,6 @@ A few notes to remember about free tier model and trial accounts: ### Start the Provisioning Wizard -[OPTION BEGIN [Trial]] - -To create your first instance of SAP HANA Cloud, SAP HANA database, you need to follow these steps: - -1. In SAP BTP cockpit, open SAP HANA Cloud Central by clicking on the subscription to SAP HANA Cloud in the **Subscriptions** tab. - - ![HCC ME tooling](hcc-app.png) - -2. On the top-right corner of the screen, click on **Create Instance**. - - ![Create instance in SAP HANA Cloud Central](hcc-create-instance.png) - -3. Here you must choose the **Type** of instance to create. Select **SAP HANA Database**. With the trial account, you have the option to manually configure your SAP HANA Database. - - > If you would like to learn more about **SAP HANA Cloud, Data Lake**, and [Get Started with a Standalone SAP HANA Cloud, Data Lake](mission.hana-cloud-data-lake-get-started), navigate to the linked mission for the basics. - - ![Trial Provisioning Wizard](trial-step-1.png) - -4. Click on **Next Step** to continue. - -[OPTION END] - [OPTION BEGIN [Free Tier]] To create your first instance of SAP HANA Cloud, SAP HANA database, you need to follow these steps: @@ -65,19 +46,19 @@ To create your first instance of SAP HANA Cloud, SAP HANA database, you need to ![HCC ME tooling](hcc-app.png) -2. On the top-right corner of the screen, click on **Create Instance**. +2. On the top-right corner of the screen, click on **Create Instance**.
![Create instance in SAP HANA Cloud Central](hcc-create-instance.png) -3. Here you must choose the **Type** of instance to create. Select **SAP HANA Database**. - - Note that if you have enabled only one type of service plan in your SAP HANA Cloud entitlement (e.g. free tier only), the License section does not appear and that service plan type will be used automatically. +3. Here you must choose the **Type** of instance to create. Select **SAP HANA Database**. + + Note that if you have enabled only one type of service plan in your SAP HANA Cloud entitlement (e.g. free tier only), the License section does not appear, and that service plan type will be used automatically. > If you would like to learn more about **SAP HANA Cloud, Data Lake**, and [Get Started with a Standalone SAP HANA Cloud, Data Lake](mission.hana-cloud-data-lake-get-started), navigate to the linked mission for the basics. ![Free Tier Provisioning Wizard](free-tier-step-1.png) -4. Click on **Next Step** to continue. +4. Click on **Next Step** to continue. [OPTION END] @@ -85,37 +66,37 @@ To create your first instance of SAP HANA Cloud, SAP HANA database, you need to To create your first instance of SAP HANA Cloud, SAP HANA database, you need to follow these steps: -1. In SAP BTP cockpit, open SAP HANA Cloud Central by clicking on the subscription to SAP HANA Cloud in the **Subscriptions** tab. +1. In SAP BTP cockpit, open SAP HANA Cloud Central by clicking on the subscription to SAP HANA Cloud in the **Subscriptions** tab. - ![HCC ME tooling](hcc-app.png) + ![HCC ME tooling](open-hcc.png) -2. On the top-right corner of the screen, click on **Create Instance**. +2. On the top-right corner of the screen, click on **Create Instance**. ![Create instance in SAP HANA Cloud Central](hcc-create-instance.png) -3. Here you must choose the **Type** of instance to create. - +3. Here you must choose the **Type** of instance to create. + A **License** section will appear. 
To use the free tier model, click on **Free Tier** so that it is highlighted as shown below. Select **SAP HANA Database**. You have multiple options to configure your instance. Select **Configure manually**. - + Note that if you have enabled only one type of service plan in your SAP HANA Cloud entitlement (e.g. free tier only), the License section does not appear and that service plan type will be used automatically. > If you would like to learn more about **SAP HANA Cloud, Data Lake**, and [Get Started with a Standalone SAP HANA Cloud, Data Lake](mission.hana-cloud-data-lake-get-started), navigate to the linked mission for the basics. ![Paid Tier Provisioning Wizard](paid-tier-step-1.png) -4. Click on **Next Step** to continue. +4. Click on **Next Step** to continue. [OPTION END] ### Choose your license, instance name, and password -1. In the **Basics** section, enter a name for your instance in the field **Instance Name**, such as `HC_HDB`. +1. In the **Basics** section, enter a name for your instance in the field **Instance Name**, such as `HC_HDB`. > This field does not allow any spaces in the name. Keep in mind that you will not be able to change the name after the instance has been created. -2. Insert a password in the **Administrator Password** field. +2. Insert a password in the **Administrator Password** field. -3. Confirm it by typing it again in the **Confirm Administrator Password** field. +3. Confirm it by typing it again in the **Confirm Administrator Password** field. ![HANA step 1](hdb-instance-name.png) @@ -123,21 +104,29 @@ To create your first instance of SAP HANA Cloud, SAP HANA database, you need to 4. You may also choose the runtime environment. Further details can be found at [What Runtime Environment is my SAP HANA Cloud Instance Using?](https://help.sap.com/docs/hana-cloud/sap-hana-cloud-administration-guide/runtime-environments-for-sap-hana-cloud) - -5. Now click on **Next Step** to continue. - +5. Now click on **Next Step** to continue. 
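The administrator password set in this step must satisfy the database's password policy. The following is a minimal local pre-check, a sketch assuming the default SAP HANA password layout (at least 8 characters, with at least one uppercase letter, one lowercase letter, and one digit); the wizard's own validation remains authoritative:

```python
import re

def meets_default_hana_password_policy(password: str) -> bool:
    """Check a candidate password against the default SAP HANA password
    layout: minimum length 8, at least one uppercase letter, one lowercase
    letter, and one digit. This mirrors the documented defaults only;
    the provisioning wizard's validation is authoritative."""
    return (
        len(password) >= 8
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[0-9]", password) is not None
    )

print(meets_default_hana_password_policy("Secret1x"))   # True
print(meets_default_hana_password_policy("alllower1"))  # False (no uppercase letter)
```

Keep in mind that the same password is also assigned to the HDLADMIN user if you add a managed data lake later in the wizard.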
### Set up the size of your database -> There are different instructions available to you depending on whether you are using a free tier model or trial account versus a production environment. Please make sure to select the one that applies to your situation to get the most out of this tutorial. +> There are different instructions available to you depending on whether you are using a free tier instance versus a productive instance. Please make sure to select the one that applies to your situation to get the most out of this tutorial. In this step of the provisioning wizard, you can set up the size of your SAP HANA database in SAP HANA Cloud. +[OPTION BEGIN [Free Tier]] + +For a free tier instance, the size allocation is predefined to 16 GB for memory, 80 GB for storage and 1 vCPU for computation. + +![SAP HANA Database Memory Allocation](hdb-memory2.png) + +Click on **Next Step** to continue. + +[OPTION END] + [OPTION BEGIN [Production]] In a production environment, you are able to select a performance class and choose the initial size of your instance. -1. Here, you can select how much **Memory** you wish to allocate to this instance. +1. Here, you can select how much **Memory** you wish to allocate to this instance. ![HDB Memory](2-ss-04-HDB-Memory.png) @@ -147,197 +136,184 @@ In a production environment, you are able to select a performance class and choo Follow this [link](https://help.sap.com/docs/hana-cloud/sap-hana-cloud-administration-guide/sap-hana-database-size) to learn more about the memory allocation. -2. Click on **Next Step** to continue. +2. Click on **Next Step** to continue. [OPTION END] -[OPTION BEGIN [Free Tier/Trial]] +### Specify database availability zone and replicas -For a free tier and trial instance, the size allocation is predefined to 16 GB for memory, 80 GB for storage and 1 vCPUs for computation. +> There are different instructions available to you depending on whether you are using a free tier instance versus a productive instance. 
Please make sure to select the one that applies to your situation to get the most out of this tutorial.
+In this step, you can select whether you want to create **replicas** of your instance to increase your system availability. These replicas are exact duplicates of your instance that will be managed in the background and automatically synchronized. In case of issues, you can take over a replica of your instance to ensure minimal interruption.

-![SAP HANA Database Memory Allocation](hdb-memory2.png)
+[OPTION BEGIN [Free Tier]]

-Click on **Next Step** to continue.
+In a free tier instance, availability zone and replicas are not supported.

-[OPTION END]
+![HANA database replicas](hdb-replicas2.png)

+To read more about increasing system availability, you can check this [technical documentation](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-administration-guide/increasing-system-availability).

-### Specify database availability zone and replicas
+Click on **Next Step** to continue.

-> There are different instructions available to you depending on whether you are using a free tier model or trial account versus a production environment. Please make sure to select the one that applies to your situation to get the most of this tutorial.
+> Keep in mind that you cannot change the availability zone of the instance after it has been created. To update replicas, you need to delete and re-create them.

-Here, you can select in this step if you want to create **replicas** of your instance to increase your system availability. These replicas are exact duplicates of your instance that will be managed in the background and automatically synchronized. In case of issues, you can take over a replica of your instance to ensure minimal interruption.
+[OPTION END]

[OPTION BEGIN [Production]]

-1. 
Select the availability zone for your instance and optionally, include a replica.

    ![Availability Zone](avail-zone2.png)

    To read more about increasing system availability, you can check this [technical documentation](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-administration-guide/increasing-system-availability).

-2. Click on **Next Step** to continue.
+2. Click on **Next Step** to continue. 

    > Keep in mind that you cannot change the **availability zone of the instance** after it has been created. To update replicas, you need to delete and re-create the instance.

[OPTION END]

-[OPTION BEGIN [Free Tier/Trial]]
-In a free tier model or trial environment, availability zone and replicas are not supported.
+### Check the advanced settings
+
+> There are different instructions available to you depending on whether you are using a free tier instance versus a productive instance. Please make sure to select the one that applies to your situation to get the most out of this tutorial.

-![HANA database replicas](hdb-replicas2.png)
+[OPTION BEGIN [Free Tier]]

-To read more about increasing system availability, you can check this [technical documentation](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-administration-guide/increasing-system-availability).
+Now you can configure the **Advanced Settings**.

-Click on **Next Step** to continue.
+1. The Predictive Analysis Library (PAL) is not required for this tutorial mission. Further details can be found at [Hands-on Tutorial: Machine Learning with SAP HANA Cloud](https://community.sap.com/t5/artificial-intelligence-and-machine-learning-blogs/hands-on-tutorial-machine-learning-with-sap-hana-cloud/ba-p/13683430).

-> Keep in mind that you cannot change the availability zone of the instance after it has been created. To update replicas, you need to delete and re-create them.
+2. The Data Provisioning Server is not required in this tutorial mission.

-[OPTION END]
+3. 
You may manage the allowed connections for your SAP HANA database instance, i.e. allowing access to your SAP HANA database instance from outside of the SAP Business Technology Platform (SAP BTP). Selecting Allow only BTP IP addresses denies all IP addresses outside SAP BTP. You may choose to allow access to specific applications by inserting one or more specific IP addresses or you can allow connections from all IP addresses. +4. Next, you can also choose to enable the SAP Cloud Connector, which makes it easier to connect this SAP HANA database instance to an SAP HANA on-premises database. You can also set the connection preferences for your cloud connector under **Allowed connections**. + > Keep in mind that you can still change your configurations here at a later point, if you decide to do so. -### Check the advanced settings + ![HDB advanced settings](hdb-advanced-settings2.png) + +5. Instance mapping enables an instance provisioned into the SAP BTP subaccount to be mapped into a runtime environment such as Cloud Foundry. Step-by-step instructions can be found in the [Create a Development Project in SAP Business Application Studio](hana-cloud-mission-trial-8) tutorial. + +6. Click on **Next Step** in the bottom left corner to continue. -> There are different instructions are available to you depending on whether you are using a free tier model or trial account versus a production environment. Please make sure to select the one that applies to your situation to get the most of this tutorial. +[OPTION END] [OPTION BEGIN [Production]] -1. Under **Advanced Settings**, you can choose to enable additional features such as the **Script Server**, **Document Store**, **Triple Store**, **Natural Language Processing (NLP)**, and **Data Provisioning Server**. If your database does not have the required `vCPUs` for either of the first two options, you can click on the link on the error message, which will change your original setup and add more `vCPUs` automatically. +1. 
Under **Advanced Settings**, you can choose to enable additional features such as the **Script Server**, **Document Store**, **Triple Store**, **Natural Language Processing (NLP)**, and **Data Provisioning Server**. If your database does not have the required `vCPUs` for either of the first two options, you can click on the link on the error message, which will change your original setup and add more `vCPUs` automatically. ![Advanced Settings](prod-advanced-settings2.png) -2. You can now manage the allowed connections for your SAP HANA database instance, i.e., you can choose to allow access to your SAP HANA database instance from outside of the SAP Business Technology Platform. You can either limit it to SAP Business Technology Platform by denying all IP addresses, or allow specific applications to access it by inserting one or more specific IP addresses. Finally, you can also allow all connections from all IP addresses. +2. You can now manage the allowed connections for your SAP HANA database instance, i.e., you can choose to allow access to your SAP HANA database instance from outside of the SAP Business Technology Platform. You can either limit it to SAP Business Technology Platform by denying all IP addresses or allow specific applications to access it by inserting one or more specific IP addresses. Finally, you can also allow all connections from all IP addresses. -3. Next, you can also choose to enable the **SAP Cloud Connector**, which makes it easier to connect this SAP HANA database instance to an SAP HANA on-premise database. +3. Next, you can also choose to enable the **SAP Cloud Connector**, which makes it easier to connect this SAP HANA database instance to an SAP HANA on-premise database. > To get familiar with the **Cloud Connector**, you can check the [technical documentation](https://help.sap.com/docs/connectivity/sap-btp-connectivity-cf/cloud-connector). 
> >Select whether you want your SAP HANA database to connect to your on-premises remote sources through the cloud connector. For details, see the [SAP HANA Database Connectivity Documentation](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-data-access-guide/data-access-in-sap-hana-cloud-sap-hana-database?locale=en-US). > > Keep in mind that you can still change your configurations here at a later point, if you decide to do so. - -4. Instance mapping enables an instance provisioned into the SAP BTP subaccount to be mapped into a runtime environment such as Cloud Foundry. Step-by-step instructions can be found in the [Create a Development Project in SAP Business Application Studio](hana-cloud-mission-trial-8) tutorial. - -5. Click on **Next Step** in the bottom left corner to continue. - -[OPTION END] -[OPTION BEGIN [Free Tier/Trial]] - -Now you can configure the **Advanced Settings**. - -1. The Predictive Analysis Library (PAL) is not required for this tutorial mission. Further details can be found at [Hands-on Tutorial: Machine Learning with SAP HANA Cloud](https://community.sap.com/t5/artificial-intelligence-and-machine-learning-blogs/hands-on-tutorial-machine-learning-with-sap-hana-cloud/ba-p/13683430). - -2. The Data Provisioning Server is not required in this tutorial mission. - -3. You may manage the allowed connections for your SAP HANA database instance, i.e. allowing access to your SAP HANA database instance from outside of the SAP Business Technology Platform (SAP BTP). Selecting Allow only BTP IP addresses denies all IP addresses outside SAP BTP. You may choose to allow access to specific applications by inserting one or more specific IP addresses or you can allow connections from all IP addresses. - -4. Next, you can also choose to enable the SAP Cloud Connector, which makes it easier to connect this SAP HANA database instance to an SAP HANA on-premises database. 
You can also set the connection preferences for your cloud connector under **Allowed connections**.
-
-    > Keep in mind that you can still change your configurations here at a later point, if you decide to do so.
-
-    ![HDB advanced settings](hdb-advanced-settings2.png)

-5. Instance mapping enables an instance provisioned into the SAP BTP subaccount to be mapped into a runtime environment such as Cloud Foundry. Step-by-step instructions can be found in the [Create a Development Project in SAP Business Application Studio](hana-cloud-mission-trial-8) tutorial.
+4. Instance mapping enables an instance provisioned into the SAP BTP subaccount to be mapped into a runtime environment such as Cloud Foundry. Step-by-step instructions can be found in the [Create a Development Project in SAP Business Application Studio](hana-cloud-mission-trial-8) tutorial.

-6. Click on **Next Step** in the bottom left corner to continue.
+5. Click on **Next Step** in the bottom left corner to continue.

[OPTION END]

-
### Enable the SAP HANA Cloud, data lake (optional)

In the last step of the provisioning wizard, you have the option of also provisioning a managed data lake. If you enable the data lake in this step, this data lake will have maximum compatibility with SAP HANA and a remote connection between your SAP HANA database and the data lake will be created automatically during provisioning.

> If you do not wish to enable a data lake, you can skip this step by clicking on **Review and Create** in the bottom-right corner.

-[OPTION BEGIN [Production]]
-1. If you click on **Create data lake**, a managed SAP HANA Cloud, data lake will be provisioned alongside your SAP HANA database in SAP HANA Cloud and will include a data lake Files instance.
+[OPTION BEGIN [Free Tier]]

-    ![Data Lake Enabled](hdl-prod-create2.png)
+1. Once you select the **Create Data Lake** option, two more menu options will appear in the wizard with additional steps. 
Note that a data lake Files instance is not included in the free tier plan.

-2. Next, give your data lake instance a name under **Instance Name**.
+    ![Create Data Lake](hdl-create2.png)

-    ![Data lake name](hdl-prod-name2.png)
+2. Next, give your data lake instance a name under **Instance Name**.

-    > When you add a managed data lake, the HDLADMIN user is automatically created and is given the same password as DBADMIN, which you set in the first step. If later you decide to change the password of one user, the password of the other user will **not** be automatically changed.
+    >When you add a managed data lake, the HDLADMIN user is automatically created and is given the same password as DBADMIN, which you set in the first step. If later you decide to change the password of one user, the password of the other user will **not** be automatically changed.

-3. Click on **Next Step** to continue.
+    ![Name Data Lake](hdl-name2.png)

-4. This is where you can adjust how many **coordinators** and **workers** you want for your data lake, as well the amount of **storage** you wish to allocate to this instance.
+3. In a production environment, this is where you can adjust how many **coordinators** and **workers** you want for your data lake, as well as the amount of **storage** you wish to allocate to this instance. For a free tier instance, however, these settings are predefined and cannot be changed.

-    ![Data Lake IQ](hdl-prod-dlre2.png)
+    > Please remember that you can enable or disable the data lake later as well if you prefer.
+    >
+    > The coordinator and worker size, as well as the number of workers will affect instance pricing. For details, see [SAP HANA Cloud Capacity Unit Estimator](https://hcsizingestimator.cfapps.eu10.hana.ondemand.com/).

-    > The coordinator and worker size, as well as the number of workers will affect instance pricing. For details, see SAP HANA Cloud Capacity Unit Estimator.
+    ![Data lake size](hdl-size2.png)

-5. Click on **Next Step** to continue.
+4. 
Click on **Next Step** to continue.

-6. Now you can set up the **Advanced Settings** for the data lake. Here you can manage the allowed connections and choose - just like you did for your SAP HANA database in SAP HANA Cloud - if you want to allow only BTP IP addresses, all IP addresses or, specific IP addresses. The last option also gives you the option to **Copy IP addresses from the SAP HANA database** choosing again, who can have access to your data lake instance.
+5. Now you can set up the **Advanced Settings** for the data lake instance. Here you can manage the allowed connections and choose - just like for your SAP HANA database in SAP HANA Cloud - if you want to allow only BTP IP addresses, all IP addresses or specific IP addresses.

-    ![HDL Connections](hdl-prod-review2.png)
+    Note that backups are not available for free tier instances.
+    
+    ![Data lake advanced](hdl-advanced2.png)
+
-7. Lastly, click on **Review and Create** to finish the provisioning process.
+6. Lastly, click on **Review and Create** to finish the provisioning process and **Create Instance**.

+    ![Data Lake Create Instances](hdl-create-instance2.png)

[OPTION END]

-[OPTION BEGIN [Free Tier/Trial]]
-1. Once you select **Create Data Lake** option, two more menu options will appear in the wizard with additional steps. Note that a data lake Files instance is not included in free tier or trial.
+[OPTION BEGIN [Production]]
+
+1. If you click on **Create data lake**, a managed SAP HANA Cloud, data lake will be provisioned alongside your SAP HANA database in SAP HANA Cloud and will include a data lake Files instance.

-    ![Create Data Lake](hdl-create2.png)
+    ![Data Lake Enabled](hdl-prod-create2.png)

2. Next, give your data lake instance a name under **Instance Name**.

-    >When you add a managed data lake, the HDLADMIN user is automatically created and is given the same password as DBADMIN, which you set in the first step. 
If later you decide to change the password of one user, the password of the other user will **not** be automatically changed.
-
-    ![Name Data Lake](hdl-name2.png)
+    ![Data lake name](hdl-prod-name2.png)

-3. In production environment this is where you can adjust how many **coordinators** and **workers** you want for your data lake, as well the amount of **storage** you wish to allocate to this instance. But in a free tier model or trial account, you can't change these as they are predefined settings.
+    > When you add a managed data lake, the HDLADMIN user is automatically created and is given the same password as DBADMIN, which you set in the first step. If later you decide to change the password of one user, the password of the other user will **not** be automatically changed.

-    > Please remember that you can enable or disable the data lake later as well if you prefer.
-    >
-    > The coordinator and worker size, as well as the number of workers will affect instance pricing. For details, see [SAP HANA Cloud Capacity Unit Estimator](https://hcsizingestimator.cfapps.eu10.hana.ondemand.com/).
+3. Click on **Next Step** to continue.

-    ![Data lake size](hdl-size2.png)
+4. This is where you can adjust how many **coordinators** and **workers** you want for your data lake, as well as the amount of **storage** you wish to allocate to this instance.

-4. Click on **Next Step** to continue.
+    ![Data Lake IQ](hdl-prod-dlre2.png)

-5. Now you can set up the **Advanced Settings** for the data lake instance. Here you can manage the allowed connections and chose - just like for your SAP HANA database in SAP HANA Cloud - if you want to allow only BTP IP addresses, all IP addresses or specific IP addresses.
+    > The coordinator and worker size, as well as the number of workers will affect instance pricing. For details, see SAP HANA Cloud Capacity Unit Estimator.

-    Note that backups are not available for instances under free tier or trial accounts.
+5. Click on **Next Step** to continue. 
-    ![Data lake advanced](hdl-advanced2.png)
+6. Now you can set up the **Advanced Settings** for the data lake. Here you can manage the allowed connections and choose - just like you did for your SAP HANA database in SAP HANA Cloud - if you want to allow only BTP IP addresses, all IP addresses, or specific IP addresses. The last option also lets you **Copy IP addresses from the SAP HANA database**, choosing again who can have access to your data lake instance.

-6. Lastly, click on **Review and Create** to finish the provisioning process and **Create Instance**.
+    ![HDL Connections](hdl-prod-review2.png)

-    ![Data Lake Create Instances](hdl-create-instance2.png)
+7. Lastly, click on **Review and Create** to finish the provisioning process.

[OPTION END]

-
You are done! Your first SAP HANA Cloud, SAP HANA database and data lake instances will be created, and you can monitor their status to see when they will be ready to be used. This process usually takes a few minutes.

-
### Start and stop your instance

The final step is learning how to stop and start your instance.

-> In a free tier or trial account, your instance will be automatically stopped on a nightly basis, according to the server region time zone. That means you need to restart your instance before you start working with your free tier model or trial every day.
+> A free tier instance will be automatically stopped on a nightly basis, according to the server region time zone. That means you need to restart your instance before you start working with it each day.

-1. To stop an instance, just click on **Stop** in the three dots menu next to the SAP HANA Cloud instance line in SAP HANA Cloud Central. Once your instance is stopped, the menu item will be updated to **Start**.
+1. To stop an instance, just click on **Stop** in the three dots menu next to the SAP HANA Cloud instance line in SAP HANA Cloud Central. Once your instance is stopped, the menu item will be updated to **Start**. 
![Three Dots](three-dots2.png)

-2. To restart the instance, simply click on the **Start** menu item. Once it's ready to be used, it will show a green **Created** status on SAP BTP Cockpit, and a **Running** status on the SAP HANA Cloud Central.
+2. To restart the instance, simply click on the **Start** menu item. Once it's ready to be used, it will show a green **Created** status on SAP BTP Cockpit, and a **Running** status in SAP HANA Cloud Central. 

>Note that all these processes take a few minutes to be completed and to show an updated status. You can use the auto-refresh button to select how often you would like your instances list to periodically refresh.

-> ![Refresh Instances](time-refresh2.png)
+> ![Refresh Instances](time-refresh2.png) 
>

-### Upgrade to Paid Tier (Free Tier Only)
+### Upgrade to Paid Tier

-When you are ready to upgrade your free tier instance to Paid Tier, you can also choose the three dots menu (under Actions) next to the SAP HANA Cloud instance line in SAP HANA Cloud Central. From here, click on **Upgrade to Paid Tier**. Note that paid tier plans must be enabled in your SAP HANA Cloud entitlement in order for the **Upgrade to Paid Tier** menu item to appear.
+When you are ready to upgrade your free tier instance running in a productive SAP BTP account to paid tier, you can choose the three dots menu (under Actions) next to the SAP HANA Cloud instance line in SAP HANA Cloud Central. From here, click on **Upgrade to Paid Tier**. Note that paid tier plans must be enabled in your SAP HANA Cloud entitlement for the **Upgrade to Paid Tier** menu item to appear.

![upgrade to paid tier](upgrade-paid-tier-2.png)

@@ -347,11 +323,6 @@ A dialog box will appear indicating that there will be costs associated with the

Now you know how to provision an instance of SAP HANA Cloud using SAP BTP Cockpit and SAP HANA Cloud Central. In the next tutorial, learn about the tools that help to manage and access your database instance. 
- ### Knowledge Check - - - - --- diff --git a/tutorials/hana-cloud-mission-trial-3/hcc-app.png b/tutorials/hana-cloud-mission-trial-3/hcc-app.png index e193cc0f55..ff4f2cec80 100644 Binary files a/tutorials/hana-cloud-mission-trial-3/hcc-app.png and b/tutorials/hana-cloud-mission-trial-3/hcc-app.png differ diff --git a/tutorials/hana-cloud-mission-trial-3/open-hcc.png b/tutorials/hana-cloud-mission-trial-3/open-hcc.png new file mode 100644 index 0000000000..d6c34c352e Binary files /dev/null and b/tutorials/hana-cloud-mission-trial-3/open-hcc.png differ diff --git a/tutorials/hana-cloud-mission-trial-3/trial-step-1.png b/tutorials/hana-cloud-mission-trial-3/trial-step-1.png deleted file mode 100644 index 5be390df31..0000000000 Binary files a/tutorials/hana-cloud-mission-trial-3/trial-step-1.png and /dev/null differ diff --git a/tutorials/hana-cloud-mission-trial-4/hana-cloud-mission-trial-4.md b/tutorials/hana-cloud-mission-trial-4/hana-cloud-mission-trial-4.md index 85cc360104..8a0fbcdb2d 100644 --- a/tutorials/hana-cloud-mission-trial-4/hana-cloud-mission-trial-4.md +++ b/tutorials/hana-cloud-mission-trial-4/hana-cloud-mission-trial-4.md @@ -9,14 +9,15 @@ primary_tag: software-product>sap-hana-cloud --- # Tools to Manage and Access the SAP HANA Cloud, SAP HANA Database + To get started with SAP HANA Cloud, SAP HANA database, you will need to use a few different tools. Learn here what you can use them for. 
## Prerequisites -- You have access to [SAP HANA Cloud trial](hana-cloud-mission-trial-2), [SAP HANA Cloud free tier](hana-cloud-mission-trial-2-ft), or a production environment of SAP HANA Cloud, SAP HANA database -- You have completed the tutorial to [provision an instance of SAP HANA Cloud, SAP HANA database](hana-cloud-mission-trial-3) +- You have completed the tutorial to [provision an instance of SAP HANA Cloud, SAP HANA database](hana-cloud-mission-trial-3) ## You will learn + - How to use SAP HANA Cloud Central - How to access SAP HANA database explorer - How to access SAP Business Application Studio @@ -39,50 +40,49 @@ SAP HANA Cloud Central is your main administration tool for all SAP HANA Cloud i **How to open SAP HANA Cloud Central** -- In SAP BTP cockpit, open SAP HANA Cloud Central by clicking on the subscription to SAP HANA Cloud in the Subscriptions tab. +- In SAP BTP cockpit, open SAP HANA Cloud Central by clicking on the subscription to SAP HANA Cloud in the Subscriptions tab. ![BTP Manage SAP HANA Cloud](hcc-app.png) -- SAP HANA Cloud Central will open in a new tab, where you can manage this instance. +- SAP HANA Cloud Central will open in a new tab, where you can manage this instance. 
**What you can do in SAP HANA Cloud Central** -- *Get an overview of all SAP HANA Cloud instances in a subaccount* - -- *Create SAP HANA Cloud instances* +- *Get an overview of all SAP HANA Cloud instances in a subaccount* -- *Find an instance using the instance ID* +- *Create SAP HANA Cloud instances* -- *Check the status of an instance* +- *Find an instance using the instance ID* -- *Review notifications* +- *Check the status of an instance* -- *Check the memory, compute, and storage consumption* +- *Review notifications* -- *Start and stop instances* +- *Check the memory, compute, and storage consumption* -- *Manage and delete instances* +- *Start and stop instances* -- *Perform SAP HANA database migrations* +- *Manage and delete instances* -- *View alerts in the Alerts app* +- *Perform SAP HANA database migrations* -- *Run queries in the SQL console tab* +- *View alerts in the Alerts app* -- *Explore the schema of the database using the database objects app* +- *Run queries in the SQL console tab* +- *Explore the schema of the database using the database objects app* **How to find your instances** -- In SAP HANA Cloud Central you can see all your instances. If you want to manage and maintain multiple instances, you can use the filters and search options on the top center area of the screen. Use **Adapt Filters** to modify the types of filters displayed. +- In SAP HANA Cloud Central you can see all your instances. If you want to manage and maintain multiple instances, you can use the filters and search options on the top center area of the screen. Use **Adapt Filters** to modify the types of filters displayed. ![HCC filters](hcc-filters.png) **Manage your instances** -- You can open many options by clicking on the **three dots** under the **Actions** column to each instance on the list. This includes options to manage configurations, start or stop the instance, or delete it. 
From this menu, you can also open the other tools you can use with your instances, such as SAP HANA database explorer.
+- You can open many options by clicking on the **three dots** under the **Actions** column next to each instance on the list. This includes options to manage configurations, start or stop the instance, or delete it. From this menu, you can also open the other tools you can use with your instances, such as SAP HANA database explorer.

-- One of the most important options you can get is the **SQL Endpoint** of your instance. To do so, click **Copy SQL Endpoint**. You will need this for multiple tasks, such as connecting to other systems.
+- One of the most important options you can get is the **SQL Endpoint** of your instance. To do so, click **Copy SQL Endpoint**. You will need this for multiple tasks, such as connecting to other systems. 

    ![HCC SQL Endpoint](hcc-sqlend2.png)

@@ -92,16 +92,16 @@ SAP HANA Cloud Central is your main administration tool for all SAP HANA Cloud i

Click on an instance to see further details of an instance including:

-  - *Memory*
-  * *Compute*
-  * *Network*
-  * *Storage*
-  * *Consumption*
-  * *User & Authorization Management*
-  * *Workload Management*
-  * *Data Replication*
-  * *Auditing*
-  * *Performance Details including expensive statements*
+  - *Memory*
+  - *Compute*
+  - *Network*
+  - *Storage*
+  - *Consumption*
+  - *User & Authorization Management*
+  - *Workload Management*
+  - *Data Replication*
+  - *Auditing*
+  - *Performance Details including expensive statements*

    ![HCC Instance details](HCC-instance-details.png)

@@ -136,20 +136,19 @@ SAP HANA database explorer allows you to interact with SAP HANA databases, as we

The SAP HANA database explorer offers a graphical interface and the SQL console, allowing you to freely access and manage your data. 
- In SAP HANA database explorer, you can: -- *Browse the database catalog* +- *Browse the database catalog* -- *Execute SQL statements* +- *Execute SQL statements* -- *Debug stored procedures* +- *Debug stored procedures* -- *Add, remove, or manage remote sources* +- *Add, remove, or manage remote sources* -- *Import, and export data* +- *Import, and export data* -- *View diagnostic files* +- *View diagnostic files* If you want to view, add, or manage any of the catalog items, right click on the item and choose from the available options. @@ -159,15 +158,15 @@ An important part of the SAP HANA database explorer is the **Catalog** browser. **How to open SAP HANA database explorer** -1. Open SAP HANA Cloud Central. +1. Open SAP HANA Cloud Central. -2. In the row of the SAP HANA Cloud database instance you want to open in SAP HANA database explorer, click on the **three dots** in the **Actions** column. +2. In the row of the SAP HANA Cloud database instance you want to open in SAP HANA database explorer, click on the **three dots** in the **Actions** column. -3. Then, click on **Open in SAP HANA database explorer**. +3. Then, click on **Open in SAP HANA database explorer**. ![HCC Open DBX](hcc-open-dbx2.png) -4. The SAP HANA database explorer will open on a new tab. If this is the first-time you are accessing it, you will need to enter the credentials of your DBADMIN user. +4. The SAP HANA database explorer will open on a new tab. If this is the first-time you are accessing it, you will need to enter the credentials of your DBADMIN user. > In this mission, you will use the SAP HANA database explorer for many tasks, so we recommend you bookmark it for easy access. @@ -178,48 +177,43 @@ For more information on how to use the SAP HANA database explorer, you can also SAP Business Application Studio is a development environment available for users with SAP HANA Cloud, SAP HANA database. 
There, you can create your development projects and model your data, including calculation views. This is also the tool you can use to build custom applications that connect and make use of your SAP HANA Cloud databases.

-Using SAP Business Application Studio is not strictly necessary to use your trial instance, but if you would like to use calculation views and create applications it is strongly recommended. In this mission, you will learn to use it.
+Using SAP Business Application Studio is not strictly necessary to use your SAP HANA Cloud instance, but if you would like to use calculation views and create applications, it is strongly recommended. In this mission, you will learn to use it.

**What you can do in SAP Business Application Studio**

The SAP Business Application Studio provides tools specific to building business applications within the SAP ecosystem, covering the end-to-end development cycle. You can:

-- *Create development spaces*
-
-- *Clone an existing project*
+- *Create development spaces*

-- *Create a new project using a template*
+- *Clone an existing project*

-- *Use editors for SAP-specific technologies*
+- *Create a new project using a template*

-- *Test your application while consuming services from remote sources*
+- *Use editors for SAP-specific technologies*

-- *Build and deploy you application as a multi-target application*
+- *Test your application while consuming services from remote sources*

+- *Build and deploy your application as a multi-target application*

-> To use SAP Business Application Studio, you need be subscribed to this service within the SAP BTP Cockpit. You must also have Cloud Foundry enabled to add the SAP Business Application Studio entitlement to your trial account.
->
-> If you are using a *trial account*, you can subscribe automatically via the **quick tool access**.
->
-> If you are **not** using a trial account or you have added SAP HANA Cloud to an existing SAP BTP trial, you need to **subscribe manually**.
+> To use SAP Business Application Studio, you need to be subscribed to this service within the SAP BTP Cockpit. You must also have Cloud Foundry enabled to add the SAP Business Application Studio entitlement.
>
> Select the option that applies to you by clicking on the options below the step title.

[OPTION BEGIN [Quick tool access]]

-**Quick tool access in trial**
+**Quick tool access**

-1. Go to the [SAP BTP Cockpit trial home page](https://account.hanatrial.ondemand.com/trial/#/home/trial).
+1. Go to the [SAP BTP Cockpit trial home page](https://account.hanatrial.ondemand.com/trial/#/home/trial).

 ![Trial Home Page Quick Access BAS](ss-10-Trial-home-page-quick-access-BAS.png)

-2. After logging in, click on the **SAP Business Application Studio** button under the **Quick Tool Access** area.
+2. After logging in, click on the **SAP Business Application Studio** button under the **Quick Tool Access** area.

-5. A new tab will open with SAP Business Application Studio.
+3. A new tab will open with SAP Business Application Studio.

-6. Click **OK** to accept the privacy statement if this is your first-time accessing SAP Business Application Studio.
+4. Click **OK** to accept the privacy statement if this is the first time you are accessing SAP Business Application Studio.

-7. We recommend that you bookmark this URL so you can easily return to the SAP Business Application Studio.
+5. We recommend that you bookmark this URL so you can easily return to the SAP Business Application Studio.

> You can learn more about the SAP Business Application Studio by visiting the documentation [here](https://help.sap.com/docs/bas/sap-business-application-studio/what-is-sap-business-application-studio).

@@ -228,15 +222,15 @@

**Manually subscribe to SAP Business Application Studio**

-1. Navigate to your **Subaccount**.
+1. Navigate to your **Subaccount**.

-2.
Click on **Service Marketplace** on the left side of the screen.
+2. Click on **Service Marketplace** on the left side of the screen.

-3. Scroll down or use the search bar to find **SAP Business Application Studio** and click on the three dots and choose **Create** to add a subscription. If you can see the option **Go to Application**, you are already subscribed.
+3. Scroll down or use the search bar to find **SAP Business Application Studio**, click on the **three dots**, and choose **Create** to add a subscription. If you can see the option **Go to Application**, you are already subscribed.

 ![BTP Marketplace](ss-11-BTP-marketplace.png)

-4. Click on **Security** and then **Users**.
+4. Click on **Security** and then **Users**.

 ![Users](users.png)

@@ -244,14 +238,13 @@ The SAP Business Application Studio provides tools specific to building business

 ![Assign role collection](role-collection.png)

-5. Open the SAP Business Application Studio.
+5. Open the SAP Business Application Studio.

 ![Open BAS](start-bas.png)

+6. Click on **OK** to accept the privacy statement if this is the first time you are accessing SAP Business Application Studio.

-6. Click on **OK** to accept the privacy statement if this is your first-time accessing SAP Business Application Studio.
-
-7. We recommend that you bookmark this URL so you can easily return to the SAP Business Application Studio.
+7. We recommend that you bookmark this URL so you can easily return to the SAP Business Application Studio.

> You can learn more about SAP Business Application Studio [here](https://help.sap.com/docs/bas/sap-business-application-studio/what-is-sap-business-application-studio).

@@ -261,14 +254,6 @@

Well done! You have completed the fourth tutorial of this mission!

Now you know how to access the tools you need to make the best use of your SAP HANA Cloud, SAP HANA database instances. Learn in the next tutorial how to import data into your SAP HANA Cloud database.
- - - ### Knowledge Check - - - - - --- diff --git a/tutorials/hana-cloud-mission-trial-5/hana-cloud-mission-trial-5.md b/tutorials/hana-cloud-mission-trial-5/hana-cloud-mission-trial-5.md index 2671d28b0b..3de02b82b5 100644 --- a/tutorials/hana-cloud-mission-trial-5/hana-cloud-mission-trial-5.md +++ b/tutorials/hana-cloud-mission-trial-5/hana-cloud-mission-trial-5.md @@ -9,14 +9,15 @@ primary_tag: software-product>sap-hana-cloud --- # Import Data into SAP HANA Cloud, SAP HANA Database + Learn in this tutorial how to use the SAP HANA database explorer to import the sample data needed for this mission from a tar.gz file. ## Prerequisites -- You have access to [SAP HANA Cloud trial](hana-cloud-mission-trial-2) or [SAP HANA Cloud free tier](hana-cloud-mission-trial-2-ft), or a production environment of SAP HANA Cloud, SAP HANA database -- You have completed the tutorial to [provision an instance of SAP HANA Cloud, SAP HANA database](hana-cloud-mission-trial-3) +- You have completed the tutorial to [provision an instance of SAP HANA Cloud, SAP HANA database](hana-cloud-mission-trial-3) ## You will learn + - How to import catalog objects from your local machine to your database using the SAP HANA database explorer ## Intro @@ -31,50 +32,47 @@ primary_tag: software-product>sap-hana-cloud ### Download the sample data set - SAP provides a free data model focused on flight data for anyone to use. We're going to import this sample data and use it to help you complete the mission for Best Run Travel. Download the [SFLIGHT sample data](https://github.com/SAP/hana-xsa-opensap-hana7/raw/snippets_2.3.2/ex2/sflight_hana.tar.gz) from the public SAP GitHub repository and save it on your local machine. Note the location of the file. - ### Open the SAP HANA database explorer -1. Under Instances and Subscriptions, open SAP HANA Cloud Central. - +1. Under Instances and Subscriptions, open SAP HANA Cloud Central. + ![HCC application](hcc-app.png) -2. 
In the **Actions** column, click on the **three dots** and select the option to **Open in SAP HANA Database Explorer**. +2. In the **Actions** column, click on the **three dots** and select the option to **Open in SAP HANA Database Explorer**. ![Open the SAP HANA database explorer](open-dbx.png) -3. SAP HANA database explorer will open in a new tab. - +3. SAP HANA database explorer will open in a new tab. ### Import the data to your catalog -1. In the pane on the left, expand your database and right-click on **Catalog**. +1. In the pane on the left, expand your database and right-click on **Catalog**. -2. Click on **Import Catalog Objects**. +2. Click on **Import Catalog Objects**. ![DBX - import catalog objects](ss-02-dbx-import-catalog-objects.png) -3. Where it says **Local archive**, click on **Browse** and select the `SFLIGHT` file you previously downloaded to your local machine. +3. Where it says **Local archive**, click on **Browse** and select the `SFLIGHT` file you previously downloaded to your local machine. ![Browse](ss-03-browse.png) -4. Wait until the archive is uploaded completely. You can see the status of the upload next to the **Browse** button. +4. Wait until the archive is uploaded completely. You can see the status of the upload next to the **Browse** button. ![DBX uploading archive](ss-04-dbx-uploading-archive.png) -5. Once the upload is completed, you will see a list of **Catalog Objects**. All of the objects will be automatically selected for import. +5. Once the upload is completed, you will see a list of **Catalog Objects**. All of the objects will be automatically selected for import. ![DBX catalog objects](ss-05-dbx-catalog-objects.png) -6. Keep all options as they are and then click on **Import**. +6. Keep all options as they are and then click on **Import**. -7. Once the import is completed, you will see a confirmation notification on the top right-hand side of the screen. +7. 
Once the import is completed, you will see a confirmation notification on the top right-hand side of the screen. ![DBX import completed successfully](ss-06-dbx-import-completed-successfully.png) @@ -90,7 +88,6 @@ Note the location of the file. > > ![Import CSV3.png](ss-09-import-CSV3.png) - ### Preview the data Once the data is imported, you can take a look at it. @@ -125,12 +122,6 @@ You have completed the fifth tutorial of this mission! Now you know how to impor Learn in the next tutorial how to create and manage users and privileges. - ### Knowledge Check - - - - - --- diff --git a/tutorials/hana-cloud-mission-trial-6/hana-cloud-mission-trial-6.md b/tutorials/hana-cloud-mission-trial-6/hana-cloud-mission-trial-6.md index 68e5e3fae3..e0340df4cc 100644 --- a/tutorials/hana-cloud-mission-trial-6/hana-cloud-mission-trial-6.md +++ b/tutorials/hana-cloud-mission-trial-6/hana-cloud-mission-trial-6.md @@ -9,20 +9,21 @@ primary_tag: software-product>sap-hana-cloud --- # Create Users and Manage Roles and Privileges + Learn how to create users and assign roles and privileges using SQL or SAP HANA Cloud Central. 
## Prerequisites -- You have access to [SAP HANA Cloud trial](hana-cloud-mission-trial-2) or [SAP HANA Cloud free tier](hana-cloud-mission-trial-2-ft), or a production environment of SAP HANA Cloud, SAP HANA database + - You have completed the tutorial to [provision an instance of SAP HANA Cloud, SAP HANA database](hana-cloud-mission-trial-3) - You have completed the tutorial to [import data in SAP HANA Cloud, SAP HANA database](hana-cloud-mission-trial-5) ## You will learn + - The basics about the role-based security model in SAP HANA Cloud, SAP HANA database - How to create users using SQL or the User Management app - How to assign roles and privileges using SQL statements or in SAP HANA Cloud Central - ## Intro > > ![Alex Banner](banner-alex.png) @@ -35,24 +36,22 @@ primary_tag: software-product>sap-hana-cloud ### Understand roles and privileges - SAP HANA Cloud, SAP HANA database defines user permissions and privileges using a **role-based security model**. Roles and privileges can be granted to users or revoked from users. A role is a set of privileges that can, as a group, be assigned to a user. Then, as the role's privileges change, the user's privileges change accordingly. Roles can be broken down as follows: -- **User-Defined Roles** are a custom collection, often created to group privileges and tasks -- **System Roles** are built-in and automatically created with a new database +- **User-Defined Roles** are a custom collection, often created to group privileges and tasks +- **System Roles** are built-in and automatically created with a new database A privilege provides the ability to perform an operation on the system. A permission, on the other hand, is that ability in the given environment. A user may not have permission to perform a task if they have the privilege, but not on the currently acted on object. 
Privileges are broken down as follows: -- **System privileges** give you the right to perform the action -- **Object-level privileges** restrict your right to perform the action to the specified objects, on which the privilege is granted. +- **System privileges** give you the right to perform the action +- **Object-level privileges** restrict your right to perform the action to the specified objects, on which the privilege is granted. When a new object is created, the owner can be defined, otherwise, the creator becomes the owner. This gives privileges to modify the structure of the table and grant other privileges to other database users. Ownership of a table is not sufficient to load the table with data. The user must also have `INSERT` permission on the table. - ### Create users and roles and manage privileges Before you add users to an instance, you should create user roles that fit your needs. You can leverage some of the default user roles, edit them, or create completely customized ones. @@ -65,7 +64,7 @@ In this step, you can find instructions on both of these options. Click on **SQL **Create users and roles using the SQL console in HANA Cloud Central** -1. Open SAP HANA Cloud Central. Then navigate to the **SQL console** tab for the SAP HANA Cloud, SAP HANA database instance. +1. Open SAP HANA Cloud Central. Then navigate to the **SQL console** tab for the SAP HANA Cloud, SAP HANA database instance. ![SQL Console tab in HANA Cloud Central](sql-console-tab.png) @@ -73,7 +72,7 @@ In this step, you can find instructions on both of these options. Click on **SQL ![SQL console UI](sql-console-ui.png) -2. Users can be created with this simplified statement. You can replace the contents inside the `<>` placeholders to set your desired credentials for your new user. The username must be unique in the database and the password must contain lower case, upper case, and a digit. +2. Users can be created with this simplified statement. 
You can replace the contents inside the `<>` placeholders to set your desired credentials for your new user. The username must be unique in the database and the password must contain lower case, upper case, and a digit.

 ```
 CREATE USER <user_name> PASSWORD "<password>";
 ```
@@ -88,7 +87,7 @@ In this step, you can find instructions on both of these options. Click on **SQL
 CREATE USER UPS_GRANTOR PASSWORD "Password1" NO FORCE_FIRST_PASSWORD_CHANGE SET USERGROUP DEFAULT;
 ```

-3. To grant this user roles and privileges, you can use the `GRANT` statement. To use this statement to grant a certain privilege, you must have the privilege and permissions required to grant this privilege.
+3. To grant this user roles and privileges, you can use the `GRANT` statement. To use this statement to grant a certain privilege, you must have the privilege and permissions required to grant this privilege.

 First create `genericRoleForOO` and `genericRoleForAP` roles. These are generic roles for an object owner (OO) and application user (AP), which will be used in a later tutorial with SAP Business Application Studio.

@@ -133,19 +132,19 @@ In this step, you can find instructions on both of these options. Click on **SQL

**Create users and roles in the users and roles apps**

-1. Within SAP HANA Cloud Central, select your instance by clicking on it and scroll to the **User & Authorization Management** app.
+1. Within SAP HANA Cloud Central, select your instance by clicking on it and scroll to the **User & Authorization Management** app.

 ![Open cockpit from HCC](open-cockpit-hcc.png)

2. Click on **Roles** to get started.

-3.
You will be directed to the Role Management page, where you can see a list of all existing user roles as well as role groups. If you click on one of them, you will see the details of this role on the right-hand side of the screen. Clicking on one of the roles allows you to edit them, for example, you can assign System, Object and Analytic Privileges and more. ![HANA cockpit Role Management submenu](role-list-hcc.png) -4. To create a new user role, click on the **Create role** button. +4. To create a new user role, click on the **Create role** button. -5. This opens the role creation wizard on the right-hand side of the screen. First create a role named `genericRoleForOO`. Leave the rest of the settings as is. This role will be used in a later tutorial when you create a development project using SAP Business Application Studio. +5. This opens the role creation wizard on the right-hand side of the screen. First create a role named `genericRoleForOO`. Leave the rest of the settings as is. This role will be used in a later tutorial when you create a development project using SAP Business Application Studio. Click on **Create** at the bottom right corner of the screen. @@ -153,7 +152,7 @@ In this step, you can find instructions on both of these options. Click on **SQL 6. Create another role named `genericRoleForAP`, which represents a generic role for an application user. Leave the rest of the settings as is. This role will be used in a later tutorial when you create a development project using SAP Business Application Studio. -7. Now that you created the necessary roles, it's time to assign privileges to it. You have a few options here. You can add some of the existing roles into this one, combining the privileges into one single role. You can also select individual privileges, be it system, object, or analytic privileges. +7. Now that you created the necessary roles, it's time to assign privileges to it. You have a few options here. 
You can add some of the existing roles into this one, combining the privileges into one single role. You can also select individual privileges, be it system, object, or analytic privileges. For the `genericRoleForAP` user, go to the **Object Privileges** tab and select **Edit Object Privileges**, then **Add Object** at the top of the table. @@ -162,18 +161,18 @@ In this step, you can find instructions on both of these options. Click on **SQL >For more technical details on creating roles and deciding on privileges, please see our technical documentation [here](https://help.sap.com/viewer/c82f8d6a84c147f8b78bf6416dae7290/LATEST/en-US/dec8d273bb571014b4c2b771d3e0f166.html). 8. Under **Object**, search for `SFLIGHT`. Select the result with Object Type **SCHEMA**. - + ![Select SLFIGHT object](select-sflight-hcc.png) Press **Select** at the bottom-right corner. -9. Under **Select Privileges**, scroll to find **SELECT** and click on the checkbox. This will grant SELECT privileges to your user. +9. Under **Select Privileges**, scroll to find **SELECT** and click on the checkbox. This will grant SELECT privileges to your user. ![Select privileges for user](select-privileges-hcc.png) Press **OK** when done. Then press **Save** to ensure that your changes are saved. -10. Repeat steps 8 - 9 for the `genericRoleForOO` role. When you reach the **Add Objects with Privileges** pop-up, scroll to `SELECT` and click the checkbox **and** enable the toggle under **Grantable to Others**. +10. Repeat steps 8 - 9 for the `genericRoleForOO` role. When you reach the **Add Objects with Privileges** pop-up, scroll to `SELECT` and click the checkbox **and** enable the toggle under **Grantable to Others**. ![Add select privileges with grant option](select-privileges-w-grant-option-hcc.png) @@ -181,15 +180,15 @@ In this step, you can find instructions on both of these options. Click on **SQL *Your first big step is done! Now it's time to create individual users.* -11. 
To get started, switch to the **User Management** app. +11. To get started, switch to the **User Management** app. ![Select User Management](user-mgmt-card-hcc.png) -12. This screen works just like the Role Management page, so click on the **Create User** to add a new user. +12. This screen works just like the Role Management page, so click on the **Create User** to add a new user. ![HANA cockpit security user mgmt](create-user-hcc.png) -13. Give the User Name `UPS_GRANTOR`. +13. Give the User Name `UPS_GRANTOR`. ![Create a user in cockpit](user-create-hcc.png) @@ -203,11 +202,10 @@ In this step, you can find instructions on both of these options. Click on **SQL >To know more about creating user and restricted users, visit the documentation [here](https://help.sap.com/docs/hana-cloud/sap-hana-cloud-database-administration-with-sap-hana-cockpit/create-restricted-database-user). -14. Click on **Role Assignment** in the top-right corner. +14. Click on **Role Assignment** in the top-right corner. ![Role assignment dropdown](role-assignment-hcc.png) - 15. Click **Edit Assigned Roles**, then **Add**. ![Add role assignment](add-role-assignment-hcc.png) @@ -217,7 +215,7 @@ In this step, you can find instructions on both of these options. Click on **SQL ![Select generic roles](select-roles-hcc.png) 17. Under the **Grantable to Others** column, enable the toggles by clicking on them. - + ![Enable the toggles to be grantable to others](grantable-to-others-toggle-hcc.png) Press **Save** at the top of the table when finished. @@ -232,16 +230,8 @@ You have completed the fourth tutorial of this mission! Now you know how you can You now know all the basics to start working with our sample data and help Alex gain business insights about their company, **Best Run Travel**. - Learn in the next tutorial how to query the database using SQL statements. 
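As an optional cross-check of the role assignment steps above, you can query `GRANTED_ROLES`, a standard SAP HANA system view, from the SQL console (shown here as an illustrative sketch, not part of the original tutorial):

```SQL
-- Lists the roles granted to UPS_GRANTOR and whether each
-- role can be granted onward (IS_GRANTABLE).
SELECT ROLE_NAME, GRANTOR, IS_GRANTABLE
FROM GRANTED_ROLES
WHERE GRANTEE = 'UPS_GRANTOR';
```

Both `genericRoleForOO` and `genericRoleForAP` should appear in the result, with `IS_GRANTABLE` reported as `TRUE` once the **Grantable to Others** toggles are enabled.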
- - ### Knowledge Check - - - - - --- diff --git a/tutorials/hana-cloud-mission-trial-7/hana-cloud-mission-trial-7.md b/tutorials/hana-cloud-mission-trial-7/hana-cloud-mission-trial-7.md index 3dd773b584..9ff2e87639 100644 --- a/tutorials/hana-cloud-mission-trial-7/hana-cloud-mission-trial-7.md +++ b/tutorials/hana-cloud-mission-trial-7/hana-cloud-mission-trial-7.md @@ -12,7 +12,6 @@ primary_tag: software-product>sap-hana-cloud Learn how to create new tables, view table details, join tables, and extract specific data from tables using SQL statements in the SAP HANA database explorer. ## Prerequisites -- You have access to [SAP HANA Cloud trial](hana-cloud-mission-trial-2) or [SAP HANA Cloud free tier](hana-cloud-mission-trial-2-ft), or a production environment of SAP HANA Cloud, SAP HANA database - You have completed the tutorial to [provision an instance of SAP HANA Cloud, SAP HANA database](hana-cloud-mission-trial-3) - You have completed the tutorial to [import the sample data needed for this mission](hana-cloud-mission-trial-5) - Optional: You can [download the code snippets](https://github.com/SAP-samples/hana-cloud-learning/blob/4ac0be770033d3425cc30a2f22f8f5c0823bb810/Mission:%20SAP%20HANA%20Database%20in%20SAP%20HANA%20Cloud/Tutorial%206/Tutorial%206%20Queries.txt) used in this tutorial from our public GitHub repository diff --git a/tutorials/hana-cloud-mission-trial-8/hana-cloud-mission-trial-8.md b/tutorials/hana-cloud-mission-trial-8/hana-cloud-mission-trial-8.md index c77ee517e6..c258f74000 100644 --- a/tutorials/hana-cloud-mission-trial-8/hana-cloud-mission-trial-8.md +++ b/tutorials/hana-cloud-mission-trial-8/hana-cloud-mission-trial-8.md @@ -12,7 +12,6 @@ primary_tag: software-product>sap-hana-cloud Create a development project, establish a connection to a database, create a user-provided service and .hdbgrants file, and deploy your project. 
## Prerequisites -- You have access to [SAP HANA Cloud trial](hana-cloud-mission-trial-2) or [SAP HANA Cloud free tier](hana-cloud-mission-trial-2-ft), or a production environment of SAP HANA Cloud, SAP HANA database - You have completed the tutorial to [provision an instance of SAP HANA Cloud, SAP HANA database](hana-cloud-mission-trial-3) - You have completed the tutorial to [import the sample data needed for this mission](hana-cloud-mission-trial-5) - [Download the sample code](https://github.com/SAP-samples/hana-cloud-learning/blob/4ac0be770033d3425cc30a2f22f8f5c0823bb810/Mission:%20SAP%20HANA%20Database%20in%20SAP%20HANA%20Cloud/Tutorial%206/Tutorial%206%20Queries.txt) files from our public GitHub repository diff --git a/tutorials/hana-cloud-mission-trial-9/hana-cloud-mission-trial-9.md b/tutorials/hana-cloud-mission-trial-9/hana-cloud-mission-trial-9.md index 7dfa52e253..92fa5e2c43 100644 --- a/tutorials/hana-cloud-mission-trial-9/hana-cloud-mission-trial-9.md +++ b/tutorials/hana-cloud-mission-trial-9/hana-cloud-mission-trial-9.md @@ -12,7 +12,6 @@ primary_tag: software-product>sap-hana-cloud Learn how to create your own calculation views in SAP HANA Cloud, SAP HANA database with SAP Business Application Studio using Join and Rank nodes. 
## Prerequisites -- You have access to [SAP HANA Cloud trial](hana-cloud-mission-trial-2) or [SAP HANA Cloud free tier](hana-cloud-mission-trial-2-ft), or a production environment of SAP HANA Cloud, SAP HANA database - You have completed the tutorial to [provision an instance of SAP HANA Cloud, SAP HANA database](hana-cloud-mission-trial-3) - You have completed the tutorial to [import the sample data needed for this mission](hana-cloud-mission-trial-5) - You have [set up a development project in SAP Business Application Studio and connected it to your database](hana-cloud-mission-trial-8) diff --git a/tutorials/hana-dbx-browse/hana-dbx-browse.md b/tutorials/hana-dbx-browse/hana-dbx-browse.md index 4db940f7b5..eb872544e4 100644 --- a/tutorials/hana-dbx-browse/hana-dbx-browse.md +++ b/tutorials/hana-dbx-browse/hana-dbx-browse.md @@ -7,21 +7,23 @@ primary_tag: software-product>sap-hana-cloud --- # Browse Schema with the Database Browser in SAP HANA Database Explorer + See how the database browser can be used to explore and examine objects in an SAP HANA database. ## Prerequisites -- An SAP HANA database such as SAP HANA Cloud trial or the SAP HANA, express edition that includes the SAP HANA database explorer + +- An SAP HANA database such as SAP HANA Cloud free tier or the SAP HANA, express edition that includes the SAP HANA database explorer - You have completed the first 3 tutorials in this group ## You will learn - - How a schema filter can be used in the database browser - - How to explore and examine objects in an SAP HANA database + +- How a schema filter can be used in the database browser +- How to explore and examine objects in an SAP HANA database --- ### Schemas - 1. Many objects within an SAP HANA database belong to a schema. A schema allows objects, like tables, views, functions, and stored procedures, to be grouped together. The current schema in the SQL console is shown at the top of the SQL console. 
![Current Schema](CurrentSchema.png) diff --git a/tutorials/hana-dbx-connections/hana-dbx-connections.md b/tutorials/hana-dbx-connections/hana-dbx-connections.md index a757a7c322..a04697ac99 100644 --- a/tutorials/hana-dbx-connections/hana-dbx-connections.md +++ b/tutorials/hana-dbx-connections/hana-dbx-connections.md @@ -7,16 +7,20 @@ primary_tag: software-product>sap-hana-cloud --- # Add Databases to the SAP HANA Database Explorer + This tutorial will explore different instance types, such as SAP HANA Cockpit Database, SAP HANA Cloud, data lake Relational Engine, data lake Files, and SAP HANA Deployment Infrastructure (HDI) that can be added, along with the different operations that can be performed on them. ## Prerequisites -- An SAP HANA database such as SAP HANA Cloud free tier, trial or the SAP HANA, express edition that includes the SAP HANA database explorer + +- An SAP HANA database such as SAP HANA Cloud free tier or the SAP HANA, express edition that includes the SAP HANA database explorer ## You will learn + - How to add different instance types in the SAP HANA database explorer - Additional operations that can be performed on an instance ## Intro + Instances in the SAP HANA database explorer represent SAP HANA, data lake Relational Engine, or data lake Files connections that you browse and interact with. SQL consoles are associated with a database instance. @@ -28,7 +32,8 @@ SQL consoles are associated with a database instance. ### Add an SAP HANA cockpit database instance Instances shown in SAP HANA Cloud Central or in the SAP HANA cockpit can be opened in the SAP HANA database explorer. -1. From SAP HANA Cloud Central, choose **Open in SAP HANA Database Explorer**. +1. From SAP HANA Cloud Central, choose **Open in SAP HANA Database Explorer**. 
+

 ![Open in the database explorer](from-directory.png)

@@ -47,19 +52,20 @@ Instances shown in SAP HANA Cloud Central or in the SAP HANA cockpit can be open

 Hover over the database to see a summary and note that the type is Cockpit Database.

### Add an SAP HANA database connection
+
Instances can also be added directly to the SAP HANA database explorer. To connect to an SAP HANA Cloud or on-premise database, the host, port, user name, and password must be provided.

-1. In the SAP HANA database explorer, press the **+** button to add a new instance.
+1. In the SAP HANA database explorer, press the **+** button to add a new instance.

 ![Add a new database](new-connection0.png)

-2. For Instance Type, choose **SAP HANA Database**.
+2. For Instance Type, choose **SAP HANA Database**.

 ![Database types](connection-type.png)

>An SAP HANA, express edition or on-premise database can have two types of databases; system and tenant. This is known as multitenant. System databases are used to manage one or more tenant databases and are only applicable to on-premise systems. For further details, see [Server Architecture of Tenant Databases](https://help.sap.com/docs/SAP_HANA_PLATFORM/78209c1d3a9b41cd8624338e42a12bf6/f9aba40d6c4c4ae48cce461db4d42d88.html).

-3. Provide the host, port, user name, password, and name to show in display. Below are instructions on how to obtain the host name and port number.
+3. Provide the host, port, user name, password, and a display name. Below are instructions on how to obtain the host name and port number.

 ![encrypted connection](encrypted.png)

@@ -96,7 +102,7 @@ Instances can also be added directly to the SAP HANA database explorer. To conn

>Instructions on using X.509 certificate are provided at [Authenticate to SAP HANA Cloud using X.509](tutorials/hana-clients-x509).

-4. After pressing OK, a new instance will appear whose type is SAP HANA Database.
+4. After pressing OK, a new instance will appear whose type is SAP HANA Database.
![new database](new-connection.png) @@ -122,7 +128,7 @@ Instances can also be added directly to the SAP HANA database explorer. To conn > >For additional details, see [Add Instances to the SAP HANA Database Explorer](https://help.sap.com/docs/hana-cloud/sap-hana-database-explorer/add-instances-to-sap-hana-database-explorer) and the [SET Statement](https://help.sap.com/docs/HANA_CLOUD_DATABASE/c1d3f60099654ecfb3fe36ac93c121bb/20fd82b675191014b22c8af08d0b319c.html). -5. It is also possible to connect using an X.509 certificate. Instructions can be found at [Authenticate to SAP HANA Cloud using X.509](hana-clients-x509) on how to create a client certificate and how to configure SAP HANA Cloud for use with certificate authentication. +5. It is also possible to connect using an X.509 certificate. Instructions can be found at [Authenticate to SAP HANA Cloud using X.509](tutorials/hana-clients-x509) on how to create a client certificate and how to configure SAP HANA Cloud for use with certificate authentication. ![X.509 certificate authentication](cert-auth.png) @@ -162,7 +168,7 @@ A data lake Relational Engine is a column oriented, disk based relational store Diagnostic files can also be viewed in the Logs directory. -4. It is also possible to connect using an X.509 certificate. Instructions can be found at [Authenticate to SAP HANA Cloud using X.509](hana-clients-x509) on how to create a certificate. The below SQL can be used to configure the data lake Relational Engine to enable X.509 certificate authentication. +4. It is also possible to connect using an X.509 certificate. Instructions can be found at [Authenticate to SAP HANA Cloud using X.509](tutorials/hana-clients-x509) on how to create a certificate. The below SQL can be used to configure the data lake Relational Engine to enable X.509 certificate authentication. 
```SQL CREATE LOGIN POLICY X509Policy LOGIN_MODE=X509; --valid for 180 days by default @@ -181,7 +187,7 @@ A data lake Relational Engine is a column oriented, disk based relational store ### Add a data lake Files container (Optional) A [data lake Files container](https://help.sap.com/docs/hana-cloud-data-lake/user-guide-for-data-lake-files/understanding-data-lake-files) provides storage for non structured files such as images or PDF documents. It can also store structured files such as CSV, parquet, or ORC files and with the use of [SQL on Files](https://help.sap.com/docs/hana-cloud-data-lake/administration-guide-for-sql-on-files/using-sql-on-files), queries can be performed on the data contained in those files. An example of using the data lake Files container is shown as a target for an export operation at [Export and Import Data and Schema with SAP HANA Database Explorer](hana-dbx-export-import). -1. A connection can be added to a data lake Files container. A data lake Files container is not currently available in trial or free tier instances of SAP HANA Cloud. +1. A connection can be added to a data lake Files container. A data lake Files container is not currently available in free tier instances of SAP HANA Cloud. ![Add a data lake Files container](add-data-lake-file-container.png) diff --git a/tutorials/hana-dbx-create-schema/hana-dbx-create-schema.md b/tutorials/hana-dbx-create-schema/hana-dbx-create-schema.md index 6f7b8c8bf4..6f43e2b218 100644 --- a/tutorials/hana-dbx-create-schema/hana-dbx-create-schema.md +++ b/tutorials/hana-dbx-create-schema/hana-dbx-create-schema.md @@ -7,22 +7,27 @@ primary_tag: software-product>sap-hana-cloud --- # Create Database Objects with SAP HANA Database Explorer + Create a user group, users, roles, and populate a sample schema that includes tables, views, functions and procedures using the SQL console. 
## Prerequisites - - An SAP HANA database such as SAP HANA Cloud free trial, free tier, or the SAP HANA, express edition that includes the SAP HANA database explorer + +- An SAP HANA database such as SAP HANA Cloud free tier, or the SAP HANA, express edition that includes the SAP HANA database explorer ## You will learn - - How to create a user group, users, roles, and a schema - - How to create tables and import data using insert statements - - How to create views, functions, and stored procedures + +- How to create a user group, users, roles, and a schema +- How to create tables and import data using insert statements +- How to create views, functions, and stored procedures ## Intro + The following steps will create a sample hotel dataset using create and insert statements. The next tutorial will demonstrate some of the ways these objects can be exported or imported. --- ### Create a usergroup, users, roles, and a schema + 1. In the SAP HANA database explorer, select the database HC_HDB and open a SQL console. ![Open SQL console](open-sql-console.png) @@ -110,7 +115,7 @@ The following steps will create a sample hotel dataset using create and insert s DROP TABLE TEST; ``` -5. The following statements can be used to delete the schema and objects it contains as well as the users, user group and roles once the tutorials are complete. +5. The following statements can be used to delete the schema and objects it contains as well as the users, user group and roles once the tutorials are complete. **Do not execute the below until the tutorials are complete**. @@ -123,9 +128,9 @@ The following steps will create a sample hotel dataset using create and insert s DROP ROLE HOTEL_ADMIN; DROP ROLE HOTEL_READER; ``` - ### Create and populate tables + 1. Create tables that represent a basic hotel administration system by running the SQL statements below in a SQL console connected to USER1 in the schema of HOTELS. 
```SQL @@ -289,7 +294,7 @@ The following steps will create a sample hotel dataset using create and insert s For additional details see [CREATE Table statement](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-sql-reference-guide/create-table-statement-data-definition) and [Insert Statement](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-sql-reference-guide/insert-statement-data-manipulation). - 3. The data can now be queried. +3. The data can now be queried. Identifiers such as table names are automatically upper cased unless they are within "". @@ -303,12 +308,11 @@ The following steps will create a sample hotel dataset using create and insert s For further details, consult [Identifiers and case sensitivity](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-sql-reference-guide/introduction-to-sql#loio209f5020751910148fd8fe88aa4d79d9__identifiers_case). - ### Explore auto-commit -Auto-commit is a setting that when enabled, causes each SQL statement to be immediately committed to the database. When auto-commit is turned off, multiple statements can be executed and then they can all be committed together, or they can all be rolled back. There are two auto-commit settings in an SAP HANA database. -The first setting which can be set in the SQL console, applies to SQL statements that manipulate data such as insert, update, or delete statements. These types of statements are known as Data Manipulation Language (DML). The second setting can be set via SQL applies to SQL statements that modify database schema such create table statements or alter table statements. These types of statements are known as Data Definition Language (DDL). +Auto-commit is a setting that when enabled, causes each SQL statement to be immediately committed to the database. 
When auto-commit is turned off, multiple statements can be executed and then they can all be committed together, or they can all be rolled back. There are two auto-commit settings in an SAP HANA database. +The first setting, which can be set in the SQL console, applies to SQL statements that manipulate data such as insert, update, or delete statements. These types of statements are known as Data Manipulation Language (DML). The second setting, which can be set via SQL, applies to SQL statements that modify the database schema, such as create table or alter table statements. These types of statements are known as Data Definition Language (DDL). The following steps will demonstrate these settings. @@ -354,7 +358,7 @@ The following steps will demonstrate these settings. Additional details can be found at [ROLLBACK Statement](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-sql-reference-guide/rollback-statement-transaction-management) and [COMMIT Statement](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-sql-reference-guide/commit-statement-transaction-management). -3. Execute the following SQL statements. +4. Execute the following SQL statements. ```SQL SET TRANSACTION AUTOCOMMIT DDL OFF; @@ -384,7 +388,7 @@ The following steps will demonstrate these settings. Additional details can be found at [SET TRANSACTION AUTOCOMMIT DDL Statement](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-sql-reference-guide/set-transaction-autocommit-ddl-statement-transaction-management). -4. Ensure both settings are back to their default values before continuing. +5. Ensure both settings are back to their default values before continuing. ![autocommit on](autocommit-on.png) @@ -393,6 +397,7 @@ The following steps will demonstrate these settings. ``` ### Create a partition + Partitions can be created to divide the data in a large table into smaller parts. 1. 
Execute the following SQL statement to create one partition that contains older reservations and one that contains reservations made in 2020 or later. @@ -431,8 +436,8 @@ For further information see [Reduce the Memory Footprint Using Page-Loadable Col Another option for data that is accessed less frequently is the SAP HANA Data Lake. Additional information on when to use Native Store Extensions and Data Lake can be found at [Storage Options](https://help.sap.com/docs/hana-cloud/sap-hana-cloud-getting-started-guide/storage-options). - ### Create views + 1. Views can be created to combine columns from multiple tables into one view or to provide access to certain columns of a table. Executing the following SQL statements creates a view that displays all information from the reservation table. The joins allow for more information about the customer and hotel to be displayed. ```SQL @@ -484,8 +489,8 @@ Another option for data that is accessed less frequently is the SAP HANA Data La For additional details see [CREATE VIEW Statement (Data Definition)](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-sql-reference-guide/create-view-statement-data-definition). - ### Create functions and stored procedures + 1. User-defined functions and procedures can be used to save a set of SQL statements. Functions are considered read-only in that they cannot make modifications to the data. Stored procedures can modify the data using DDL or DML statements. Execute the following SQL to create a function that calculates the average price of a specific room type. @@ -618,7 +623,23 @@ Another option for data that is accessed less frequently is the SAP HANA Data La For additional details see [Procedures](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-sqlscript-reference/procedures). +### Examine the created objects using the monitoring views + +There are multiple monitoring views that contain data about the objects within a database. 
Further details can be found at [Monitoring Views](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-sql-reference-guide/monitoring-views?locale=en-US). Try out the below queries. + +```SQL +--List of tables and record counts in the schema HOTELS +SELECT SCHEMA_NAME,TABLE_NAME, RECORD_COUNT, TABLE_SIZE +FROM M_TABLES WHERE SCHEMA_NAME = 'HOTELS' ORDER BY RECORD_COUNT DESC; + +--List of columns and data types for the tables in schema HOTELS +SELECT SCHEMA_NAME,TABLE_NAME, COLUMN_NAME, DATA_TYPE_NAME +FROM TABLE_COLUMNS WHERE SCHEMA_NAME = 'HOTELS' ORDER BY TABLE_NAME ASC, COLUMN_NAME ASC; + +``` + ### Schedule a stored procedure + Procedures can also be scheduled in SAP HANA Cloud. Schedule a job using the code provided below. ```SQL @@ -638,6 +659,6 @@ Details about the scheduled job can also be viewed including its properties, par For additional details see [Scheduling Administrative Tasks](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-administration-guide/scheduling-administrative-tasks) and [CREATE SCHEDULER JOB Statement](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-sql-reference-guide/create-scheduler-job-statement-data-definition). - ### Knowledge check + Congratulations! You have now created tables and inserted data, as well as created partitions, views, functions, stored procedures, and scheduled jobs. 
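The knowledge check above can be made concrete with a few queries against the catalog and monitoring views. The sketch below is an illustrative check only: `VIEWS`, `FUNCTIONS`, and `PROCEDURES` are standard SAP HANA system views, while the exact column set of `SCHEDULER_JOBS` can vary by release, so `SELECT *` is used rather than assuming column names.

```SQL
-- Views, functions, and procedures created in the HOTELS schema
SELECT VIEW_NAME      FROM VIEWS      WHERE SCHEMA_NAME = 'HOTELS';
SELECT FUNCTION_NAME  FROM FUNCTIONS  WHERE SCHEMA_NAME = 'HOTELS';
SELECT PROCEDURE_NAME FROM PROCEDURES WHERE SCHEMA_NAME = 'HOTELS';

-- Scheduler jobs, including the one created in the scheduling step
SELECT * FROM SCHEDULER_JOBS;
```

If the earlier steps completed successfully, the first three queries return the names of the view, function, and procedure created above, and the last query should include the scheduled job.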
diff --git a/tutorials/hana-dbx-database-objects/AvgPrice.png b/tutorials/hana-dbx-database-objects/AvgPrice.png index 0d02c892d6..761fd786bd 100644 Binary files a/tutorials/hana-dbx-database-objects/AvgPrice.png and b/tutorials/hana-dbx-database-objects/AvgPrice.png differ diff --git a/tutorials/hana-dbx-database-objects/ColumnData.png b/tutorials/hana-dbx-database-objects/ColumnData.png index b0bcf8b29d..663a5b28ae 100644 Binary files a/tutorials/hana-dbx-database-objects/ColumnData.png and b/tutorials/hana-dbx-database-objects/ColumnData.png differ diff --git a/tutorials/hana-dbx-database-objects/CreateStatement.png b/tutorials/hana-dbx-database-objects/CreateStatement.png index 9e47f49bd7..d2fae548f6 100644 Binary files a/tutorials/hana-dbx-database-objects/CreateStatement.png and b/tutorials/hana-dbx-database-objects/CreateStatement.png differ diff --git a/tutorials/hana-dbx-database-objects/DBObjFilters.png b/tutorials/hana-dbx-database-objects/DBObjFilters.png deleted file mode 100644 index 1b30fb51df..0000000000 Binary files a/tutorials/hana-dbx-database-objects/DBObjFilters.png and /dev/null differ diff --git a/tutorials/hana-dbx-database-objects/DbObjNav.png b/tutorials/hana-dbx-database-objects/DbObjNav.png index 5bd13df4ab..be232b7968 100644 Binary files a/tutorials/hana-dbx-database-objects/DbObjNav.png and b/tutorials/hana-dbx-database-objects/DbObjNav.png differ diff --git a/tutorials/hana-dbx-database-objects/Fav.png b/tutorials/hana-dbx-database-objects/Fav.png index ee4a371903..daf7124819 100644 Binary files a/tutorials/hana-dbx-database-objects/Fav.png and b/tutorials/hana-dbx-database-objects/Fav.png differ diff --git a/tutorials/hana-dbx-database-objects/FavIcon.png b/tutorials/hana-dbx-database-objects/FavIcon.png index 8182161d95..afaef2876c 100644 Binary files a/tutorials/hana-dbx-database-objects/FavIcon.png and b/tutorials/hana-dbx-database-objects/FavIcon.png differ diff --git a/tutorials/hana-dbx-database-objects/FilterFav.png 
b/tutorials/hana-dbx-database-objects/FilterFav.png index b4edbf087f..ab0197632b 100644 Binary files a/tutorials/hana-dbx-database-objects/FilterFav.png and b/tutorials/hana-dbx-database-objects/FilterFav.png differ diff --git a/tutorials/hana-dbx-database-objects/FuncCall.png b/tutorials/hana-dbx-database-objects/FuncCall.png index ef460a2802..9c492c1dd9 100644 Binary files a/tutorials/hana-dbx-database-objects/FuncCall.png and b/tutorials/hana-dbx-database-objects/FuncCall.png differ diff --git a/tutorials/hana-dbx-database-objects/GenerateFuncStatement.png b/tutorials/hana-dbx-database-objects/GenerateFuncStatement.png new file mode 100644 index 0000000000..6ec90406d7 Binary files /dev/null and b/tutorials/hana-dbx-database-objects/GenerateFuncStatement.png differ diff --git a/tutorials/hana-dbx-database-objects/GenerateSQL.png b/tutorials/hana-dbx-database-objects/GenerateSQL.png index 4722c87959..3991d68200 100644 Binary files a/tutorials/hana-dbx-database-objects/GenerateSQL.png and b/tutorials/hana-dbx-database-objects/GenerateSQL.png differ diff --git a/tutorials/hana-dbx-database-objects/Instances.png b/tutorials/hana-dbx-database-objects/Instances.png index 03b2eb6894..9fa6fdec6d 100644 Binary files a/tutorials/hana-dbx-database-objects/Instances.png and b/tutorials/hana-dbx-database-objects/Instances.png differ diff --git a/tutorials/hana-dbx-database-objects/NavSQLConsole.png b/tutorials/hana-dbx-database-objects/NavSQLConsole.png new file mode 100644 index 0000000000..7c25229efd Binary files /dev/null and b/tutorials/hana-dbx-database-objects/NavSQLConsole.png differ diff --git a/tutorials/hana-dbx-database-objects/Procedure.png b/tutorials/hana-dbx-database-objects/Procedure.png index ef57467868..b66279f7ac 100644 Binary files a/tutorials/hana-dbx-database-objects/Procedure.png and b/tutorials/hana-dbx-database-objects/Procedure.png differ diff --git a/tutorials/hana-dbx-database-objects/ProcedureData.png 
b/tutorials/hana-dbx-database-objects/ProcedureData.png index 6e9d2a5e06..fb6927a64d 100644 Binary files a/tutorials/hana-dbx-database-objects/ProcedureData.png and b/tutorials/hana-dbx-database-objects/ProcedureData.png differ diff --git a/tutorials/hana-dbx-database-objects/Recent.png b/tutorials/hana-dbx-database-objects/Recent.png index 2ccb5bf053..87d4d4a540 100644 Binary files a/tutorials/hana-dbx-database-objects/Recent.png and b/tutorials/hana-dbx-database-objects/Recent.png differ diff --git a/tutorials/hana-dbx-database-objects/RuntimeInformation.png b/tutorials/hana-dbx-database-objects/RuntimeInformation.png index 8a556d0834..4ea0217885 100644 Binary files a/tutorials/hana-dbx-database-objects/RuntimeInformation.png and b/tutorials/hana-dbx-database-objects/RuntimeInformation.png differ diff --git a/tutorials/hana-dbx-database-objects/SchemaData.png b/tutorials/hana-dbx-database-objects/SchemaData.png index 523452851e..d2eb5d0fe9 100644 Binary files a/tutorials/hana-dbx-database-objects/SchemaData.png and b/tutorials/hana-dbx-database-objects/SchemaData.png differ diff --git a/tutorials/hana-dbx-database-objects/SelectDatabase.png b/tutorials/hana-dbx-database-objects/SelectDatabase.png new file mode 100644 index 0000000000..b931be9913 Binary files /dev/null and b/tutorials/hana-dbx-database-objects/SelectDatabase.png differ diff --git a/tutorials/hana-dbx-database-objects/SelectInstance.png b/tutorials/hana-dbx-database-objects/SelectInstance.png new file mode 100644 index 0000000000..9fa6fdec6d Binary files /dev/null and b/tutorials/hana-dbx-database-objects/SelectInstance.png differ diff --git a/tutorials/hana-dbx-database-objects/SelectSchema.png b/tutorials/hana-dbx-database-objects/SelectSchema.png index 497b896802..a5c0080121 100644 Binary files a/tutorials/hana-dbx-database-objects/SelectSchema.png and b/tutorials/hana-dbx-database-objects/SelectSchema.png differ diff --git a/tutorials/hana-dbx-database-objects/Settings.png 
b/tutorials/hana-dbx-database-objects/Settings.png index 0bb4fdb4f7..739a7e774a 100644 Binary files a/tutorials/hana-dbx-database-objects/Settings.png and b/tutorials/hana-dbx-database-objects/Settings.png differ diff --git a/tutorials/hana-dbx-database-objects/SettingsFunc.png b/tutorials/hana-dbx-database-objects/SettingsFunc.png new file mode 100644 index 0000000000..befbc6e1d9 Binary files /dev/null and b/tutorials/hana-dbx-database-objects/SettingsFunc.png differ diff --git a/tutorials/hana-dbx-database-objects/TableData.png b/tutorials/hana-dbx-database-objects/TableData.png index 4e3e95d912..4e15d51d86 100644 Binary files a/tutorials/hana-dbx-database-objects/TableData.png and b/tutorials/hana-dbx-database-objects/TableData.png differ diff --git a/tutorials/hana-dbx-database-objects/TableView.png b/tutorials/hana-dbx-database-objects/TableView.png index d73b3af30a..8a57cdc182 100644 Binary files a/tutorials/hana-dbx-database-objects/TableView.png and b/tutorials/hana-dbx-database-objects/TableView.png differ diff --git a/tutorials/hana-dbx-database-objects/ViewData.png b/tutorials/hana-dbx-database-objects/ViewData.png deleted file mode 100644 index a86ba46db5..0000000000 Binary files a/tutorials/hana-dbx-database-objects/ViewData.png and /dev/null differ diff --git a/tutorials/hana-dbx-database-objects/hana-dbx-database-objects.md b/tutorials/hana-dbx-database-objects/hana-dbx-database-objects.md index 666dcd6e77..db4cc53fb4 100644 --- a/tutorials/hana-dbx-database-objects/hana-dbx-database-objects.md +++ b/tutorials/hana-dbx-database-objects/hana-dbx-database-objects.md @@ -2,160 +2,140 @@ parser: v2 auto_validation: true time: 10 -tags: [ tutorial>beginner, software-product-function>sap-hana-cloud--sap-hana-database, software-product>sap-hana] +tags: [ tutorial>beginner, software-product-function>sap-hana-cloud--sap-hana-database, software-product-function>sap-hana-cloud--data-lake] primary_tag: software-product>sap-hana-cloud --- - - # Browse and Explore 
Catalog Objects with the Database Objects App - Dive into using the Database Objects tool to explore and inspect objects in an SAP HANA database. -## Prerequisites -- An SAP HANA Cloud database such as SAP HANA Cloud trial where the Database Objects tool is available. -- You have completed the first 3 tutorials in this group + Dive into using the database objects app to explore and inspect schema objects in an SAP HANA Cloud, SAP HANA database or data lake Relational Engine. + +## Prerequisites + +- An SAP HANA Cloud database such as SAP HANA Cloud free tier +- You have completed the first 3 tutorials in this group - +## You will learn + +- How to filter for specific tables and schemas within an instance +- How to inspect and explore objects in an SAP HANA Cloud database +- How to generate SQL Statements -## You will learn -- How to filter for specific tables and schemas within an instance -- How to inspect and explore objects in an SAP HANA Cloud database -- Generating SQL Statements for specific schemas for selected database objects --- +### Introduction -### Introduction ->Database Objects is a built-in tool in SAP HANA Cloud Central that enables you to search, view metadata, and generate SQL for catalog objects, right from SAP HANA Cloud Central. +The database objects app is a built-in tool in SAP HANA Cloud Central that enables you to search, view metadata, and generate SQL for catalog objects. - -### Filters and Navigation +### Filters and navigation -1. To navigate to the Database Objects tool, click on the icon for it on the left-hand side of the instances page. - - - ![DBObj Navigation](DbObjNav.png) +1. Ensure that your database instance is running before attempting to open the Database Objects app. Once it is active, you can access the app directly by selecting its icon from the left‑hand navigation panel on the Instances page. -2. 
Upon opening Database Objects all filters are empty except the **Instance Type** which pre-selects all types you have existing instances for. + ![DBObj Navigation](DbObjNav.png) + + You can also open the Database Objects app through the SQL Console. Make sure you are connected to the correct database, then click the three‑dot menu in the top‑right corner and select Open Database Objects. - - ![DB Obj Filters](DBObjFilters.png) - Click the drop down under the **Instance** filter to select your SAP HANA Database instance. + ![DBObj Navigation SQL Console](NavSQLConsole.png) + +2. Upon opening database objects, click “Select an Instance” at the top of the page to choose the database you want to work with. You can browse the list or use the search field to quickly find your instance. + + + ![Instances](Instances.png) + + ![Select Database](SelectDatabase.png) - - ![Instances](Instances.png) Once selected, the **Schema** and **Search** filter are both available to use. Select the **Schema** filter and search for the HOTELS schema. - - ![Hotels Schema](SelectSchema.png) - You can also directly search for the schema or any other objects directly in the **Search** filter. In this case after searching navigate to the **Schemas** tab directly to view the metadata for the HOTELS schema. This data includes ownership, privileges and create time. + ![Hotels Schema](SelectSchema.png) - - ![Schema Data](SchemaData.png) + You can also directly search for the schema or any other objects directly in the **Search** filter. In this case after searching navigate to the **Schemas** tab directly to view the metadata for the HOTELS schema. This data includes ownership, privileges and create time. - + ![Schema Data](SchemaData.png) -### Explore Tables +### Explore tables -Database Objects table features can be leveraged to view table information such as columns, indexes, properties, runtime information and SQL CREATE Statements. 
+Information for tables includes columns, indexes, properties, runtime information and SQL CREATE Statements. 1. Select the **Tables** tab to view all associated tables of the HOTELS schema. The page now displays all tables in the schema HOTELS and their table type. - ![Tables View](TableView.png) 2. Select the **RESERVATION** table to explore it further. Click the full screen icon on the top right of the screen to maximize the page and view all tabs. - - ![Table Data](TableData.png) + ![Table Data](TableData.png) By default you should see the column details for the table. - ![Column Data](ColumnData.png) 3. Explore the **Runtime Information** tab, where further information about the table can be found. This information includes the total number of rows, disk size, partitions and memory consumption for the table, as well as individual columns. - - - ![Runtime Information](RuntimeInformation.png) + + ![Runtime Information](RuntimeInformation.png) 4. Examine the other tabs, such as **CREATE Statements**, where SQL code to generate the table can be found. - - - ![Create Statement](CreateStatement.png) + + ![Create Statement](CreateStatement.png) 5. Select the Generate SQL Statement dropdown to see the three ways to have SQL generated for the table. - - - ![SQL Generation](GenerateSQL.png) - + ![SQL Generation](GenerateSQL.png) -### Explore Functions and Procedures +### Explore functions and procedures -1. Navigate to settings and enable the functions object type to view functions in the Database Objects app. +1. To display functions in the Database Objects app, go to settings using your profile icon and turn on the functions object type. ![Settings ](Settings.png) 2. Open the **Functions** tab and select AVERAGE_PRICE to examine it further. - - - ![Average Price Function](AvgPrice.png) + + ![Average Price Function](AvgPrice.png) Select the Generate SQL Statement dropdown and click SELECT Statement to navigate to the SQL Console. 
- Input *'suite'* in the single quotes of the SELECT statement to get the average price for suites. + ![Average Price Function Generate Statement](GenerateFuncStatement.png) - - ![Function Call](FuncCall.png) + Input *'suite'* in the single quotes of the SELECT statement to get the average price for suites. + ![Function Call](FuncCall.png) 3. Navigate back to the Database Objects app and open the **Procedure** tab. Select RESERVATION_GENERATOR to examine it further. - - - ![Procedure Data](ProcedureData.png) -4. Click Generate SQL to get SQL that runs the stored procedure. - - - ![Run Procedure](Procedure.png) + ![Procedure Data](ProcedureData.png) -To learn more about exploring database instances in Database Explorer refer to the [Browse Schema with the Database Browser in SAP HANA Database Explorer Tutorial](hana-dbx-browse) + +4. Click Generate SQL and select the CALL statement to get SQL that runs the stored procedure. + ![Run Procedure](Procedure.png) + +To learn more about exploring database instances in Database Explorer refer to the [Browse Schema with the Database Browser in SAP HANA Database Explorer Tutorial](hana-dbx-browse) -### Additional Features - +### Additional features 1. Select the **Recent** tab to view all the recent objects you opened. - - - ![Recents](Recent.png) + + ![Recents](Recent.png) 2. Navigate to an object and click the star icon on the top right of the screen to favorite it. This allows for easy access to the object through the **Favorites** tab. - - - ![Favorite Icon](FavIcon.png) + + ![Favorite Icon](FavIcon.png) Once selected as a favorite, navigate to the **Favorites** tab to see it. - - - ![Favorite](Fav.png) + ![Favorite](Fav.png) + +3. Click the All/Selected Instance toggle to filter favorites. + + ![Filter Favorites](FilterFav.png) 4. Navigate to HANA Cloud Central settings to customize preferences for the Database Objects App. 
- - - ![settings](Settings.png) + ![settings](SettingsFunc.png) ### Knowledge check -Congratulations! You have now successfully navigated the Database Objects app and learned about the various features and tools available to you right from SAP HANA Cloud Central. - +Congratulations! You have now successfully navigated the Database Objects app and learned about the various features and tools available to you right from SAP HANA Cloud Central. + +To learn how to create multi‑model artifacts like knowledge graphs, property graphs, and document stores using the Database Objects app, you can also explore the tutorial [Try Out Multi‑Model Functionality with the SAP HANA Database Explorer and Database Objects App](hana-dbx-multi-model). diff --git a/tutorials/hana-dbx-export-import/hana-dbx-export-import.md b/tutorials/hana-dbx-export-import/hana-dbx-export-import.md index c40b16b564..1f291fc1ad 100644 --- a/tutorials/hana-dbx-export-import/hana-dbx-export-import.md +++ b/tutorials/hana-dbx-export-import/hana-dbx-export-import.md @@ -7,14 +7,17 @@ primary_tag: software-product>sap-hana-cloud --- # Export and Import Data and Schema with SAP HANA Database Explorer + Use wizards or SQL statements to export and import data and schema using CSV, Apache Parquet, or binary formats. ## Prerequisites -- An SAP HANA database such as SAP HANA Cloud trial, free tier, or the SAP HANA, express edition that includes the SAP HANA database explorer + +- An SAP HANA database such as SAP HANA Cloud free tier, or the SAP HANA, express edition that includes the SAP HANA database explorer - Data lake Files, Amazon AWS, Google Cloud, or Microsoft Azure accounts will be needed for optional steps in this tutorial. - You have completed the first 3 tutorials in this group. 
## You will learn + - How to export and import data using the export and import data wizards, SQL statements export into and import from, and the download option in the SQL console results tab - How to export and import schema objects using export and import catalog wizards and the SQL statements export and import - How to use cloud storage providers as a target when exporting or importing @@ -152,7 +155,8 @@ The following steps are for illustrative purposes only and are not meant to be f ``` ### Use data lake Files for export and import from an SAP HANA Cloud, SAP HANA database (optional) -The following steps walk through the process of exporting to and importing data using data lake Files with a SAP HANA Cloud, SAP HANA database. This step requires a productive SAP HANA Cloud data lake instance as data lake files is currently not part of free tier or trial. + +The following steps walk through the process of exporting to and importing data using data lake Files with a SAP HANA Cloud, SAP HANA database. This step requires a productive SAP HANA Cloud data lake instance as data lake files is currently not included in the free tier service plan. 1. Complete steps 3 and 4 in the [Getting Started with Data Lake Files HDLFSCLI](data-lake-file-containers-hdlfscli) tutorial to configure the trust setup of the data lake Files container. @@ -271,7 +275,7 @@ The following steps walk through the process of exporting to and importing data ``` ### Use data lake Files for export and import from an SAP HANA Cloud, data lake Relational Engine database (optional) -The following steps walk through the process of exporting to and importing data using data lake Files with a SAP HANA Cloud, data lake Relational Engine database. This step requires a productive SAP HANA Cloud data lake instance as data lake files is currently not part of free tier or trial. 
The following steps assume you have followed the first two sub steps in the previous step so that a data lake Files connection has been added to the SAP HANA database explorer. +The following steps walk through the process of exporting to and importing data using data lake Files with a SAP HANA Cloud, data lake Relational Engine database. This step requires a productive SAP HANA Cloud data lake instance as data lake Files is currently not part of the free tier. The following steps assume you have followed the first two sub steps in the previous step so that a data lake Files connection has been added to the SAP HANA database explorer. 1. Create a database credential for the data lake Files container. This step is required if you wish to export to a data lake Files instance that is not the one associated with the data lake Relational Engine. Further details are described at [Unloading Data to Data Lake Files from Data Lake Relational Engine](https://help.sap.com/docs/hana-cloud-data-lake/load-and-unload-management/unloading-data-to-data-lake-files). Open a SQL Console connected to a data lake Relational Engine instance and execute the below SQL statements. @@ -559,15 +563,15 @@ The following steps walk through the process of using Microsoft Azure storage se 1. Log in to the [Microsoft Azure Portal](https://portal.azure.com/). -2. Create a resource group. +2. Create a resource group under All services, General, Resource Manager. ![Resource Group](resourceGroup.png) -3. Create a storage Service +3. Create a storage service under All services, Storage, Storage accounts. ![Storage Account](storageAccount.png) -4. Create a blob container. +4. Create a blob container using All services, Storage, Storage browser. 
![Blob Container](createBlobContainer.png) diff --git a/tutorials/hana-dbx-extension/.vscode/settings.json b/tutorials/hana-dbx-extension/.vscode/settings.json deleted file mode 100644 index aea2d6c64e..0000000000 --- a/tutorials/hana-dbx-extension/.vscode/settings.json +++ /dev/null @@ -1,3 +0,0 @@ -{ - "SAP HANA Database Explorer.Use Objects Definition For SQL Generation": true -} \ No newline at end of file diff --git a/tutorials/hana-dbx-extension/hana-dbx-extension.md b/tutorials/hana-dbx-extension/hana-dbx-extension.md index 9db750cd0d..8d8c6bf13f 100644 --- a/tutorials/hana-dbx-extension/hana-dbx-extension.md +++ b/tutorials/hana-dbx-extension/hana-dbx-extension.md @@ -7,22 +7,25 @@ primary_tag: software-product>sap-hana-cloud --- # Use the SAP HANA Database Explorer Extension - Learn how the SAP HANA database explorer for Visual Studio Code extension can be used to connect to both SAP HANA Cloud and on-premise databases, about related general Visual Studio Code features, how to use the catalog browser, and how to execute SQL queries. The SAP HANA database explorer for Visual Studio Code extension contains similar functionality to that in the web-based SAP HANA database explorer although not all functionality is available. + Learn how the SAP HANA database explorer for Visual Studio Code extension can be used to connect to both SAP HANA Cloud and on-premise databases, about related general Visual Studio Code features, how to use the catalog browser, and how to execute SQL queries. The SAP HANA database explorer for Visual Studio Code extension contains similar functionality to that in the web-based SAP HANA database explorer although not all functionality is available. 
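After a connection is added in the extension, a minimal query can confirm that it works and show which user and schema the SQL console is using. This is a sketch; `DUMMY` is the standard one-row SAP HANA system table, so the statement is safe to run against any of the database types above:

```SQL
-- Verify the connection: DUMMY always exists and contains exactly one row.
SELECT CURRENT_USER, CURRENT_SCHEMA FROM DUMMY;
```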
## Prerequisites
-- An SAP HANA database such as SAP HANA Cloud (free tier or trial) or an on-premise SAP HANA database such as the SAP HANA, express edition
+
+- An SAP HANA database such as SAP HANA Cloud free tier or an on-premise SAP HANA database such as the SAP HANA, express edition
 - You have completed the first 3 tutorials in this group

## You will learn
- - How to setup the Visual Studio Code SAP HANA database explorer extension
- - How to connect to an SAP HANA Cloud database, SAP HANA database and SAP HANA User store (to retrieve connection details)
- - How to explore and examine objects in an SAP HANA database
+
+- How to set up the Visual Studio Code SAP HANA database explorer extension
+- How to connect to an SAP HANA Cloud database, SAP HANA database and SAP HANA User store (to retrieve connection details)
+- How to explore and examine objects in an SAP HANA database

---

-### Set up
-1. If needed, download [Visual Studio Code](https://code.visualstudio.com/download) for your computer.
+### Set up
+
+1. If needed, download [Visual Studio Code](https://code.visualstudio.com/download) for your computer.

    ![Download Visual Studio Code](downloadVSCode.png)

@@ -43,7 +46,8 @@ primary_tag: software-product>sap-hana-cloud

* SAP HANA Database Explorer Connections are database connections retrieved by logging into Cloud Foundry and querying for the set of connections that the Cloud Foundry, web-based SAP HANA database explorer has created.

-### Add a local database connection
+### Add a local database connection
+
The SAP HANA database explorer extension can connect to SAP HANA Cloud and on-premise databases as well as an SAP HANA User Store. In this tutorial, a connection to an SAP HANA Cloud database will be made, but the steps to connect to the other types are very similar. Adding a local connection does not require authentication to the SAP Business Technology Platform (BTP) or Cloud Foundry.

1. 
Hover over the **Database List** section and click the **+** button to **Add SAP HANA Database**. @@ -52,7 +56,7 @@ The SAP HANA database explorer extension can connect to SAP HANA Cloud and on-pr A form to add a database will open. -2. Select **SAP HANA Cloud** as your database type and enter values for the **Host**, **Port**, **User** and **Password**, such as USER1 and Password1. You may also change the display name, as desired. +2. Select **SAP HANA Cloud** as your database type and enter values for the **Host**, **Port**, **User** and **Password**, such as USER1 and Password1. You may also change the display name, as desired. This tutorial uses the HOTELS schema. Set the default schema value in the **Advanced Options** as shown below. Subsequent SQL consoles you open will now start with this schema value. diff --git a/tutorials/hana-dbx-hcc/hana-dbx-hcc.md b/tutorials/hana-dbx-hcc/hana-dbx-hcc.md index 6d1c7cc168..b7bece1691 100644 --- a/tutorials/hana-dbx-hcc/hana-dbx-hcc.md +++ b/tutorials/hana-dbx-hcc/hana-dbx-hcc.md @@ -7,29 +7,33 @@ primary_tag: software-product>sap-hana-cloud --- # Query Databases Using the SQL Console in SAP HANA Cloud Central + Learn how the SQL console can be used within SAP HANA Cloud Central to quickly query a selected database. ## Prerequisites + - An SAP HANA Cloud database - You have completed [this](hana-dbx-create-schema) tutorial which creates a database schema for an SAP HANA Cloud, SAP HANA database. 
- You have completed [this](hana-cloud-dl-clients-overview) tutorial which creates a database schema for an SAP HANA Cloud, data lake Relational Engine ## You will learn - - How to open a SQL console, specify the credentials, and set the current schema - - An overview of the functionality provided in the SQL console + +- How to open a SQL console, specify the credentials, and set the current schema +- An overview of the functionality provided in the SQL console --- ### Open a SQL console + This step demonstrates how a SQL console can quickly be opened from within SAP HANA Cloud Central and how to change the SQL console's credentials and schema. -1. In **SAP HANA Cloud Central** open a SQL console by selecting **SQL Console** in the left pane. Notice that the SQL console is not associated with a database when opened in this way. +1. In **SAP HANA Cloud Central** open a SQL console by selecting **SQL Console** in the left pane. Notice that the SQL console is not associated with a database when opened in this way. ![open SQL console](open-sql-console.png) Additional SQL consoles can also be opened by selecting the **+** icon. -2. This time select **Instances**, select a database, and choose **Open SQL Console** from the actions menu. +2. This time select **Instances**, select a database, and choose **Open SQL Console** from the actions menu. ![open SQL console from an instance](open-sql-console-instance.png) @@ -45,8 +49,7 @@ This step demonstrates how a SQL console can quickly be opened from within SAP H ![Current user](current-user.png) - -4. If you wish to connect to the database using a different set of credentials, select the **Connect this SQL console to a different instance** icon, select the current database and uncheck **Use cached credentials if possible**. +4. 
If you wish to connect to the database using a different set of credentials, select the **Connect this SQL console to a different instance** icon, select the current database and uncheck **Use cached credentials if possible**.

    ![Change credentials](change-credentials.png)

@@ -71,7 +74,7 @@ This step demonstrates how a SQL console can quickly be opened from within SAP H

    ![Show current user for a data lake Relational Engine](current-user-dl.png)

-5. The current schema can be set and viewed for a SAP HANA database using the SQL statements below.
+5. The current schema can be set and viewed for an SAP HANA database using the SQL statements below.

    ```SQL
    SET SCHEMA HOTELS;
@@ -96,6 +99,7 @@ This step demonstrates how a SQL console can quickly be opened from within SAP H
    ![available themes](themes.png)

### Execute SQL
+
This step demonstrates how to execute a SQL query, examine the statement help, view the query results, messages, and history tabs within a SQL console.

1. Execute the following SQL statements.

@@ -133,7 +137,7 @@ This step demonstrates how to execute a SQL query, examine the statement help, v

    ![statement help panel](statement-help.png)

-    Notice that for SAP HANA Cloud, SAP HANA databases, links to the related documentation and details on the objects used in the SQL statement are shown.
+    Notice that for SAP HANA Cloud, SAP HANA databases, links to the related documentation and details on the objects used in the SQL statement are shown, including a link that opens the database objects app, where additional details of the object can be viewed.

4. Commonly used shortcut keys are listed below. Try a few of them out.

@@ -171,7 +175,7 @@ This step demonstrates how to execute a SQL query, examine the statement help, v

    ![connection settings](connection-settings.png)

-    * Execute the following SQL which is used to illustrate the result behavior settings.
+    - Execute the following SQL which is used to illustrate the result behavior settings.
    ```SQL
    SELECT * FROM M_SYSTEM_INFORMATION_STATEMENTS;
    ```

@@ -192,7 +196,7 @@ This step demonstrates how to execute a SQL query, examine the statement help, v

    ![one thousand row limit](settings-result2.png)

-    * Execute the following SQL which is used to illustrate the result display display settings.
+    - Execute the following SQL which is used to illustrate the result display settings.

    ```SQL
    SELECT CURRENT_DATE, CURRENT_TIMESTAMP(7), RAND() * 10 FROM DUMMY;
    ```

@@ -263,9 +267,8 @@ This step demonstrates how to execute a SQL query, examine the statement help, v

    ![download and import](download-and-import.png)

-
-
### Statement library
+
The statement library is a convenient location in the SQL Console to store and retrieve frequently executed SQL statements. It provides a place to store statements that are used frequently so as not to type them in repeatedly.

The library is pre-populated with useful statements called ‘SYSTEM’ statements.

@@ -273,6 +276,7 @@ The library is pre-populated with useful statements called ‘SYSTEM’ statemen

![Statement Library System Statements](statement_libaray_system.png)

You may also define custom statements that are only available to you. These are ‘USER’ statements.
+
```SQL

/*
@@ -298,10 +302,12 @@ SELECT * FROM RESERVATION

![Statement Library View User Statements](statement_libaray_user.png)

-3. To run a statement, select one from the statement library and click the Run button.
+3. To run a statement, select one from the statement library and click the Run button.

    ![Run Saved Statement](run_saved_statement.png)

+    If you select multiple saved statements, you additionally have the option to open them together in one tab or in individual tabs.
+
> It is also possible to export and import SQL statements directly to/from the file system
>
> ![Import or Export Statements](export_import_statements.png)

@@ -312,32 +318,26 @@ User-defined statements can be edited. 
From the Statement library, select the de ![Modify Saved Statements](replace_statement.png) - - ### A few things to note -The SQL console within SAP HANA Cloud Central appears similar to the one within the SAP HANA database explorer but there are some differences. -* Opening the SQL console within the SAP HANA Cloud Central can be done much quicker than opening the full SAP HANA database explorer. - -* The SQL console that you access from within SAP HANA Cloud Central can only connect to databases that are within the same BTP subaccount as SAP HANA Cloud Central. +The SQL console within SAP HANA Cloud Central appears similar to the one within the SAP HANA database explorer but there are some differences. -* The SQL console in SAP HANA Cloud Central has the following additional features +- The SQL console that you access from within SAP HANA Cloud Central can only connect to databases that are within the same BTP subaccount as SAP HANA Cloud Central. - * Ability to format results - * Support for SAP Morning and Evening Horizon themes - * Additional details such as time of execution, duration, rows returned, and success or failure in the history tab +- The SQL console in SAP HANA Cloud Central has the following additional features -* The SAP HANA database explorer has some additional functionality + - Ability to format results + - Support for SAP Morning and Evening Horizon themes + - Additional details such as time of execution, duration, rows returned, and success or failure in the history tab - * Persistency of SQL tabs and their contents - * SQL debugging - * Code completion of schema objects - * Viewer for spatial and graph data - * Analysis tab for tables and views - * Ability to search for database objects across multiple databases - * Ability to run statements in the background - * Ability to run statements against multiple instances +- The SAP HANA database explorer has some additional functionality + - SQL debugging + - Code completion of schema objects + - 
Viewer for spatial + - Analysis tab for tables and views + - Ability to search for database objects across multiple databases + - Ability to run statements against multiple instances ### Knowledge check diff --git a/tutorials/hana-dbx-hcc/run_saved_statement.png b/tutorials/hana-dbx-hcc/run_saved_statement.png index d1097f70ed..fc3250f372 100644 Binary files a/tutorials/hana-dbx-hcc/run_saved_statement.png and b/tutorials/hana-dbx-hcc/run_saved_statement.png differ diff --git a/tutorials/hana-dbx-hcc/save_custom_statement.png b/tutorials/hana-dbx-hcc/save_custom_statement.png index 2eab9c0988..18c6c0559a 100644 Binary files a/tutorials/hana-dbx-hcc/save_custom_statement.png and b/tutorials/hana-dbx-hcc/save_custom_statement.png differ diff --git a/tutorials/hana-dbx-hcc/save_statement.png b/tutorials/hana-dbx-hcc/save_statement.png index 0b37f1dc5d..7b94d77b19 100644 Binary files a/tutorials/hana-dbx-hcc/save_statement.png and b/tutorials/hana-dbx-hcc/save_statement.png differ diff --git a/tutorials/hana-dbx-hcc/statement-help.png b/tutorials/hana-dbx-hcc/statement-help.png index 75cf06578c..b5ce43b6b1 100644 Binary files a/tutorials/hana-dbx-hcc/statement-help.png and b/tutorials/hana-dbx-hcc/statement-help.png differ diff --git a/tutorials/hana-dbx-multi-model/hana-dbx-multi-model.md b/tutorials/hana-dbx-multi-model/hana-dbx-multi-model.md index db4dad6ce4..22f7188601 100644 --- a/tutorials/hana-dbx-multi-model/hana-dbx-multi-model.md +++ b/tutorials/hana-dbx-multi-model/hana-dbx-multi-model.md @@ -7,17 +7,21 @@ primary_tag: software-product>sap-hana-cloud --- # Try Out Multi-Model Functionality with the SAP HANA Database Explorer and Database Objects App + Explore knowledge graph, property graph, JSON document store, and spatial capabilities in the SAP HANA database explorer. ## Prerequisites + - A productive SAP HANA Cloud database - You have completed the first 3 tutorials in this group. 
## You will learn
+
- How to create a knowledge graph, a property graph, and a document store, and how to import spatial data.
- How the SAP HANA database explorer and the database objects app can be used with multi-model data.

## Intro
+
A [knowledge graph](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-knowledge-graph-guide/sap-hana-cloud-sap-hana-database-knowledge-graph-engine-guide) can be used to store facts in triples providing additional meaning and relationships.

A [property graph](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-property-graph-engine-reference/sap-hana-cloud-sap-hana-database-property-graph-engine-reference) can be used to show the connections between items such as the connections between airports or between people or groups in a social network.

@@ -28,8 +32,8 @@ This tutorial is meant to be an introduction to this topic.  For additional cont

---

+### Enable the triple store and create a knowledge graph
-### Enable the triple store and create a knowledge graph
The following steps will create a knowledge graph that provides information on additional hotel amenities, explore the created knowledge graph using the database objects app, and then will perform a query on the knowledge graph.

Before you can create a knowledge graph, please ensure your SAP HANA instance is version 2025.2 or above, and your instance has the triple store activated. 
Here are the steps to do this:

@@ -46,7 +50,7 @@ Before you can create a knowledge graph, please ensure your HANA Instance is ver

    ![Add Triple Store](add_triple_store.png)

-    *The knowledge graph feature is not available for trial or free tier users.*
+    *The knowledge graph feature is not available for free tier instances.*

    To learn more about knowledge graphs see [Connecting the Facts: SAP HANA Cloud’s Knowledge Graph Engine for Business Context](https://community.sap.com/t5/technology-blogs-by-sap/connecting-the-facts-sap-hana-cloud-s-knowledge-graph-engine-for-business/ba-p/13888597) and [Choosing Between Knowledge Graphs and Property Graphs in SAP HANA Cloud and Why Both Matter](https://community.sap.com/t5/technology-blogs-by-sap/choosing-between-knowledge-graphs-and-property-graphs-in-sap-hana-cloud-and/ba-p/14074575).

@@ -369,7 +373,7 @@ SAP HANA provides the ability to store and query JSON data.  This can be useful

The following steps will demonstrate how to create a JSON collection that can be used to collect notes about customers staying at a hotel.

->The creation of a JSON collection is not supported in the SAP HANA Cloud free tier or trial.
+>The creation of a JSON collection is not supported in the SAP HANA Cloud free tier.

1. Enable the JSON document store.

diff --git a/tutorials/hana-dbx-overview/hana-dbx-overview.md b/tutorials/hana-dbx-overview/hana-dbx-overview.md
index c9cff2c4a9..7dedeea26c 100644
--- a/tutorials/hana-dbx-overview/hana-dbx-overview.md
+++ b/tutorials/hana-dbx-overview/hana-dbx-overview.md
@@ -7,19 +7,23 @@ primary_tag: software-product>sap-hana-cloud
---

# SAP HANA Database Explorer Overview
- Learn about the SAP HANA database explorer and how to start using it with SAP HANA Cloud trial, free tier, SAP HANA, express edition, or SAP HANA Cloud basic trial.
+
+ Learn about the SAP HANA database explorer and how to start using it with SAP HANA Cloud free tier, SAP HANA, express edition, or SAP HANA Cloud basic trial. 
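Once the JSON document store mentioned above is enabled, a collection can be created and queried with plain SQL. The collection name and document fields below are illustrative sketches, not part of the tutorial's scripts:

```SQL
-- Hypothetical example: store and retrieve guest notes as JSON documents.
CREATE COLLECTION GUEST_NOTES;
INSERT INTO GUEST_NOTES VALUES('{"guest": "Jenny Porter", "note": "Requested a late checkout"}');
SELECT * FROM GUEST_NOTES WHERE "guest" = 'Jenny Porter';
```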
## Prerequisites - - A machine that can run SAP HANA, express edition if the other options are not used + +- A machine that can run SAP HANA, express edition if the other options are not used ## You will learn - - About the features provided by the SAP HANA database explorer - - Details about the version differences between the SAP HANA database explorer in SAP HANA Cloud and in an on-premise installation such as SAP HANA, express edition - - How to get started with SAP HANA Cloud trial, free tier, SAP HANA, express edition, or SAP HANA Cloud basic trial ---- +- About the features provided by the SAP HANA database explorer +- Details about the version differences between the SAP HANA database explorer in SAP HANA Cloud and in an on-premise installation such as SAP HANA, express edition +- How to get started with SAP HANA Cloud free tier, SAP HANA, express edition, or SAP HANA Cloud basic trial + +--- ## Intro + > Access help from the SAP community or provide feedback on this tutorial by navigating to the "Feedback" link located on the top right of this page. ### SAP HANA database explorer overview @@ -102,15 +106,13 @@ The SAP Software download links (requires an S-User ID to access) below are for [SAP HANA Runtime Tools 2.0](https://launchpad.support.sap.com/#/softwarecenter/search/XSACHRTT) (Adds the SAP HANA database explorer to the SAP HANA Web IDE) +### SAP HANA Cloud free tier - -### SAP HANA Cloud trial or free tier - -To complete the tutorials in this group, an SAP HANA instance is needed. Steps 3 and 4 in this tutorial provide two different, free options that can be used to set up an SAP HANA instance. Only one of these steps needs to be completed if you currently do not have access to an SAP HANA instance. Alternatively, step 7 provides a quick and easy way to try out SAP HANA Cloud although you will be given access to a user with fewer permissions. 
Trial is only available on the US10 landscape and is in a separate SAP BTP trial account whereas free tier is available in multiple production SAP BTP accounts and provides a seamless transition from a free tier to a paid plan. +To complete the tutorials in this group, an SAP HANA instance is needed. Steps 3 and 4 in this tutorial provide two different, free options that can be used to set up an SAP HANA instance. Only one of these steps needs to be completed if you currently do not have access to an SAP HANA instance. Alternatively, step 7 provides a quick and easy way to try out SAP HANA Cloud although you will be given access to a user with fewer permissions. The SAP BTP Trial is available on the US10 and AP21 landscapes. If a free tier instance is used in a productive subaccount, a seamless transition from a free tier to a paid plan is available. ![SAP HANA Cloud Trial instance](hana-cloud-instance.png) ->SAP HANA Cloud trial or free tier instances are shut down overnight (i.e. 10:00 PM based on the location where your instance was provisioned) and will need to be restarted before working with them the next day. The tutorial group [Automating SAP HANA Cloud Tasks](https://developers.sap.com/group.sap-hana-cloud-automating.html) provides some examples of using tools such as the BTP CLI or the SAP Automation Pilot to help with repetitive tasks such as starting and stopping instances. +>SAP HANA Cloud free tier instances are shut down overnight (i.e. 10:00 PM based on the location where your instance was provisioned) and will need to be restarted before working with them the next day. The tutorial group [Automating SAP HANA Cloud Tasks](https://developers.sap.com/group.sap-hana-cloud-automating.html) provides some examples of using tools such as the BTP CLI or the SAP Automation Pilot to help with repetitive tasks such as starting and stopping instances. >--- @@ -118,21 +120,21 @@ To complete the tutorials in this group, an SAP HANA instance is needed. 
Steps 3 The instructions on how to setup a free SAP HANA Cloud trial or free tier within SAP BTP are well covered in several other sources listed below. - * [Set Up Your SAP HANA Cloud, SAP HANA Database (free tier or trial) and Understand the Basics](group.hana-cloud-get-started-1-trial) +- [Set Up Your SAP HANA Cloud, SAP HANA Database and Understand the Basics](group.hana-cloud-get-started-1-trial) - * [SAP Learning Journey - Provisioning and Administering Databases in SAP HANA Cloud](https://learning.sap.com/learning-journey/provision-and-administer-databases-in-sap-hana-cloud) +- [SAP Learning Journey - Provisioning and Administering Databases in SAP HANA Cloud](https://learning.sap.com/learning-journey/provision-and-administer-databases-in-sap-hana-cloud) - * [SAP Discovery Center - SAP HANA Cloud, SAP HANA Database Fundamentals](https://discovery-center.cloud.sap/protected/index.html#/missiondetail/3643/) +- [SAP Discovery Center - SAP HANA Cloud, SAP HANA Database Fundamentals](https://discovery-center.cloud.sap/protected/index.html#/missiondetail/3643/) - * [Help Thomas Get Started with SAP HANA](hana-trial-advanced-analytics) (Only the first 3 steps of this tutorial are needed for basic setup of SAP HANA Cloud.) +- [Help Thomas Get Started with SAP HANA](hana-trial-advanced-analytics) (Only the first 3 steps of this tutorial are needed for basic setup of SAP HANA Cloud.) For more information on the SAP BTP see the following: - * + - - * + - - * + - Continue on to the next tutorial in this group once you have access to an SAP HANA instance. @@ -140,9 +142,9 @@ Continue on to the next tutorial in this group once you have access to an SAP HA >This step only needs to be completed if you currently do not have access to an SAP HANA instance and did not setup an SAP HANA instance through the SAP HANA Cloud as explained in step 3. -An alternative option to using the SAP HANA Cloud trial or free tier is to use the SAP HANA, express edition. 
SAP provides a free, streamlined version of SAP HANA that runs on developer laptops called [SAP HANA, express edition](https://www.sap.com/products/technology-platform/hana/express-trial.html).
+An alternative option to using the SAP HANA Cloud free tier is to use the SAP HANA, express edition. SAP provides a free, streamlined version of SAP HANA that runs on developer laptops called [SAP HANA, express edition](https://www.sap.com/products/technology-platform/hana/express-trial.html).

-SAP HANA runs on a few versions of Linux. SAP HANA, express edition provides a binary install as well as virtual machine images that can be run on Microsoft Windows, macOS and Linux machines. This is described in the [Getting Started with SAP HANA 2.0, express edition (Binary Installer Method)](https://help.sap.com/docs/SAP_HANA,_EXPRESS_EDITION/32c9e0c8afba4c87814e61d6a1141280) or [Getting Started with SAP HANA 2.0, express edition (Virtual Machine Method)](https://help.sap.com/docs/SAP_HANA,_EXPRESS_EDITION/8c3bbc4a904d42efac77c09da0bccf64). The **Applications** option adds XS Advanced, the SAP HANA cockpit, the SAP HANA database explorer, and the SAP HANA Web IDE for SAP HANA.
+SAP HANA runs on a few versions of Linux. SAP HANA, express edition provides a binary install as well as [Docker images](https://hub.docker.com/u/saplabs). This is described in the [Getting Started with SAP HANA 2.0, express edition (Binary Installer Method)](https://help.sap.com/docs/SAP_HANA,_EXPRESS_EDITION/32c9e0c8afba4c87814e61d6a1141280). A database-only option and a database + XS Advanced Applications option are available. The database + XS Advanced Applications install adds XS Advanced, the SAP HANA cockpit, the SAP HANA database explorer, and the SAP HANA Web IDE for SAP HANA. 
![SAP HANA express download manager](express-download-manager.png)

@@ -155,6 +157,7 @@ Once installed, a useful starting point is the page below.

It contains links to the SAP Web IDE for SAP HANA, SAP HANA cockpit, and the SAP HANA cockpit manager.

### SAP HANA Cloud Basic Trial
+
The SAP HANA Cloud Basic Trial provides a database user and password that has access to a specific schema free for 30 days. The database user can be used with the SAP HANA database explorer. The provided database user can be used to create database objects within the provided schema but cannot create new schemas or users. To get started, click **Try Now** in the **Discover SAP HANA Cloud** section of the trial page of [SAP HANA Cloud](https://www.sap.com/products/technology-platform/hana/trial.html).

![experience SAP HANA Cloud](experience.png)

@@ -169,6 +172,6 @@ A tutorial is available to be used with the basic trial.

### Knowledge check

-Congratulations! You have configured an instance of SAP HANA, either through the SAP HANA Cloud trial, free tier, or SAP HANA, express edition. You've also learned how to start, stop, and manage an instance of SAP HANA Cloud via the Cloud Foundry Command Line Interface.
+Congratulations! You have configured an instance of SAP HANA, either through the SAP HANA Cloud free tier or SAP HANA, express edition. You've also learned how to start, stop, and manage an instance of SAP HANA Cloud via the Cloud Foundry Command Line Interface.

---
diff --git a/tutorials/hana-dbx-query/hana-dbx-query.md b/tutorials/hana-dbx-query/hana-dbx-query.md
index af792ba765..c8b0b43afe 100644
--- a/tutorials/hana-dbx-query/hana-dbx-query.md
+++ b/tutorials/hana-dbx-query/hana-dbx-query.md
@@ -7,19 +7,24 @@ primary_tag: software-product>sap-hana-cloud
---

# Query with the SQL Console in SAP HANA Database Explorer
+
Explore features of the SQL console and see how it facilitates querying an SAP HANA database. 
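As a first taste of the querying explored in this tutorial, a simple statement against the hotel sample schema from the earlier tutorials might look like the following. The column names are assumptions based on that sample schema; adjust them to match your own tables:

```SQL
-- Illustrative query against the HOTEL table from the sample schema.
SELECT hno, name, city
FROM HOTEL
ORDER BY city;
```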
## Prerequisites
- - An SAP HANA database such as SAP HANA Cloud trial or the SAP HANA, express edition that includes the SAP HANA database explorer
- - You have completed the first 3 tutorials in this group.
+
+- An SAP HANA database such as SAP HANA Cloud free tier or the SAP HANA, express edition that includes the SAP HANA database explorer
+- You have completed the first 3 tutorials in this group.

## You will learn
- - How to run SQL queries using the SQL console and add filters to the results
- - How to use different features of the SQL console including keyboard shortcuts, autocomplete, statement help, and the statement library
+
+- How to run SQL queries using the SQL console and add filters to the results
+- How to use different features of the SQL console including keyboard shortcuts, autocomplete, statement help, and the statement library

---

### Execute SQL
+
1. Select a connection and open the SQL console.

    ![open SQL console](open-sql-console.png)
diff --git a/tutorials/hana-dbx-sof/hana-dbx-sof.md b/tutorials/hana-dbx-sof/hana-dbx-sof.md
index 349de19e1a..7c5144a892 100644
--- a/tutorials/hana-dbx-sof/hana-dbx-sof.md
+++ b/tutorials/hana-dbx-sof/hana-dbx-sof.md
@@ -109,7 +109,7 @@ Follow the steps below to connect to the database using the SQL Console.

    ![Add a database with a different user](add-database-with-different-user.png)

### Connect to the data lake Files instance
-Once the data lake Files instance has been created and configured, it can be accessed using the data lake Files app in SAP HANA Cloud Central, [REST API](https://help.sap.com/doc/9d084a41830f46d6904fd4c23cd4bbfa/2024_3_QRC/en-US/html/index.html), or [hdlfscli](https://help.sap.com/docs/hana-cloud-data-lake/user-guide-for-data-lake-files/hdlfscli-data-lake-files-utility). 
+Once the data lake Files instance has been created and configured, it can be accessed using the data lake Files app in SAP HANA Cloud Central, [REST API](https://help.sap.com/doc/9d084a41830f46d6904fd4c23cd4bbfa/latest/en-US/index.html), or [hdlfscli](https://help.sap.com/docs/hana-cloud-data-lake/user-guide-for-data-lake-files/hdlfscli-data-lake-files-utility). 1. From SAP HANA Cloud Central, select the **Data Lake Files** app. @@ -475,6 +475,8 @@ A virtual table can be changed so that the data is stored in the SAP HANA Cloud ALTER VIRTUAL TABLE TITANIC_CSV ADD SHARED SNAPSHOT REPLICA; ``` + Additional details can be found at [ALTER VIRTUAL TABLE Statement](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-sql-reference-guide/alter-virtual-table-statement-data-definition). + 3. Run the same query on the replica table and examine the time taken. ```SQL diff --git a/tutorials/hana-dbx-troubleshooting/hana-dbx-troubleshooting.md b/tutorials/hana-dbx-troubleshooting/hana-dbx-troubleshooting.md index 8f37ba243e..40b2fc7bcc 100644 --- a/tutorials/hana-dbx-troubleshooting/hana-dbx-troubleshooting.md +++ b/tutorials/hana-dbx-troubleshooting/hana-dbx-troubleshooting.md @@ -215,6 +215,12 @@ Explain plan provides a compiled plan in tabular form without executing it. Thi ![Explain plan Results](explainPlanResults.png) + To clear any cached parameters the below SQL can be run. + + ```SQL + ALTER SYSTEM CLEAR SQL PLAN CACHE; + ``` + For further details see the links below. 
[View Execution Plans for SQL Statements, Procedures, and Anonymous Blocks](https://help.sap.com/docs/hana-cloud/sap-hana-database-explorer/view-execution-plans-for-sql-statements-procedures-and-anonymous-blocks-sap-hana-cloud-database)

diff --git a/tutorials/migration-assessment/migration-assessment.md b/tutorials/migration-assessment/migration-assessment.md
index 2815284000..7135734b57 100644
--- a/tutorials/migration-assessment/migration-assessment.md
+++ b/tutorials/migration-assessment/migration-assessment.md
@@ -37,33 +37,7 @@ To connect the Migration Assessment application with your SAP Process Orchestrat

    ![Image](Images/2023-01-23_23-54-52.jpg)

-3. Make sure that your SAP Cloud Connector exposes the following API paths correctly. They're used to extract data from your SAP Process Orchestration system:
-
-    - Directory Content
-
-      - `/CommunicationChannelInService`
-      - `/IntegratedConfigurationInService`
-      - `/SenderAgreementInService`
-      - `/AlertRuleInService`
-      - `/IntegratedConfiguration750InService`
-      - `/ValueMappingInService`
-      - `/ConfigurationScenarioInService`
-      - `/BPMFacadeBeanImplService`
-      - `/ReceiverAgreementInService`
-      - `/ReceiverRuleInService`
-      - `/ReceiverDeterminationInService`
-      - `/InterfaceDeterminationInService`
-
-    - ESR Content
-
-      - `/dir/read/ext`
-      - `/dir/query/ext`
-      - `/rep/support/SimpleQuery`
-      - `/rep/read/ext`
-      - `/rep/query/ext`
-      - `/rep/query/int`
-
-    - Message Monitoring
-
-      - `/mdt`
+3. Make sure that your SAP Cloud Connector exposes the required API paths correctly. They're used to extract data from your SAP Process Orchestration system. See [Add an SAP Process Orchestration System](https://help.sap.com/docs/integration-suite/sap-integration-suite/add-sap-process-orchestration-system) for the list of paths.

4. Limit access to the previously mentioned endpoints and subpaths by changing **Access Policy** to **Path and All Sub-Paths**. 
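Related to the plan cache cleared in the troubleshooting section above, the cached entries can first be inspected through the standard SAP HANA monitoring view `M_SQL_PLAN_CACHE`. The column selection below is a sketch; many more columns are available in that view:

```SQL
-- Inspect the most expensive cached plans before clearing the cache.
SELECT STATEMENT_STRING, EXECUTION_COUNT, TOTAL_EXECUTION_TIME
FROM M_SQL_PLAN_CACHE
ORDER BY TOTAL_EXECUTION_TIME DESC
LIMIT 5;
```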
diff --git a/tutorials/mob-app-mao-erp-app-create/adminapi.png b/tutorials/mob-app-mao-erp-app-create/adminapi.png index 7d0ad9848e..d778725a3d 100644 Binary files a/tutorials/mob-app-mao-erp-app-create/adminapi.png and b/tutorials/mob-app-mao-erp-app-create/adminapi.png differ diff --git a/tutorials/mob-app-mao-erp-app-create/ccinfo.png b/tutorials/mob-app-mao-erp-app-create/ccinfo.png index 3016d63bf7..2bafc8e398 100644 Binary files a/tutorials/mob-app-mao-erp-app-create/ccinfo.png and b/tutorials/mob-app-mao-erp-app-create/ccinfo.png differ diff --git a/tutorials/mob-app-mao-erp-app-create/mdwadditionalproperties.png b/tutorials/mob-app-mao-erp-app-create/mdwadditionalproperties.png index 847e5e0e7d..3ee096313b 100644 Binary files a/tutorials/mob-app-mao-erp-app-create/mdwadditionalproperties.png and b/tutorials/mob-app-mao-erp-app-create/mdwadditionalproperties.png differ diff --git a/tutorials/mob-app-mao-erp-app-create/mdwmeteraddprop.png b/tutorials/mob-app-mao-erp-app-create/mdwmeteraddprop.png index f3fd77436a..7a2fb1bbca 100644 Binary files a/tutorials/mob-app-mao-erp-app-create/mdwmeteraddprop.png and b/tutorials/mob-app-mao-erp-app-create/mdwmeteraddprop.png differ diff --git a/tutorials/mob-app-mao-erp-app-create/metricsoutput.png b/tutorials/mob-app-mao-erp-app-create/metricsoutput.png new file mode 100644 index 0000000000..e2d90f271a Binary files /dev/null and b/tutorials/mob-app-mao-erp-app-create/metricsoutput.png differ diff --git a/tutorials/mob-app-mao-erp-app-create/metricsselscreen.png b/tutorials/mob-app-mao-erp-app-create/metricsselscreen.png new file mode 100644 index 0000000000..a0ec70c432 Binary files /dev/null and b/tutorials/mob-app-mao-erp-app-create/metricsselscreen.png differ diff --git a/tutorials/mob-app-mao-erp-app-create/mob-app-mao-erp-app-create.md b/tutorials/mob-app-mao-erp-app-create/mob-app-mao-erp-app-create.md index 6c1e1953a1..54b898870f 100644 --- a/tutorials/mob-app-mao-erp-app-create/mob-app-mao-erp-app-create.md +++ 
b/tutorials/mob-app-mao-erp-app-create/mob-app-mao-erp-app-create.md @@ -8,30 +8,29 @@ author_profile: https://github.com/I545182 parser: v2 --- -# Create an SAP Mobile Services App for the SAP Service and Asset Manager Mobile App -Create and update an SAP Mobile Services app for the SAP Service and Asset Manager mobile app using the Mobile Services App Create transaction /MERP/CPMS_APPCREATE from the SAP GUI. +# Create an SAP Service and Asset Manager Mobile Services App with Metrics +Create and update an SAP Service and Asset Manager Mobile Services App with Metrics using transaction /MERP/CPMS_APPCREATE from the SAP GUI. ## Prerequisites - Access to your SAP BTP Subaccount and Space. - Access to the SAP Mobile Services service in your SAP BTP subaccount. -- Implement SAP Notes 3504624 and 3524755 in your system to get the latest updates for the SAP Service and Asset Manager Mobile Services App Create transaction `/MERP/CPMS_APPCREATE`. Note 3504624 (component MOB-APP-MAO-FND) contains updates to the Mobile Services App Admin classes on which note 3524755 (component MOB-APP-MAO-ERP) depends on. Please ensure note 3504624 can be implemented in your system before implementing note 3524755. +- The latest App Create and Metrics updates. Please review [SAP Note 3703174](https://me.sap.com/notes/3703174) for the latest updates. ## You will learn -- How to create and update an SAP Mobile Services app for the SAP Service and Asset Manager mobile app using the MS App Create transaction `/MERP/CPMS_APPCREATE`. -- How to review the created MS app. +- How to create and update an SAP Service and Asset Manager Mobile Services App with Metrics using transaction `/MERP/CPMS_APPCREATE`. +- How to review the SAP Service and Asset Manager Mobile Services App with Metrics. - Optional Features: - 1. Use an RFC Destination (Middleware Server) to Create the App. - 2. Add `sap-client` header to the Mobile Destinations. - 3. Enable Multiple Threads in Offline Configuration. - 4.
Update the Usage Metering Middleware Server to use an RFC Destination. + 1. Use an RFC Destination (Middleware Server) to create the Mobile Services App. + 2. Use an RFC Destination to send Metrics to Cloud Reporting. + 3. Set up Satellite Systems. + 4. Enable Multiple Threads in Offline Configuration. - Troubleshoot: - 1. Prompted to sign-in after selecting the **Launch in Browser** icon when testing the Mobile Destinations. - 2. Missing Offline Configuration. - 3. Usage Metering Middleware Server Missing and/or Properties Missing. - 4. Usage Metering Background Job Missing. + 1. Missing Offline Configuration. + 2. Usage Metering Middleware Server Missing and/or Properties Missing. + 3. Usage Metering Background Job Missing. ## Intro -In this mission you will learn how to create and update an SAP Mobile Services app for the SAP Service and Asset Manager mobile app using the SAP Service and Asset Manager Mobile Services App Create transaction **`/MERP/CPMS_APPCREATE`** from the SAP GUI. The Mobile Services app created by the transaction may be used to onboard your SAP Service and Asset Manager mobile app. +In this mission, you will learn to create and update an SAP Service and Asset Manager Mobile Services App with Metrics using transaction **`/MERP/CPMS_APPCREATE`** from the SAP GUI. The Mobile Services App created by the transaction may be used to onboard your SAP Service and Asset Manager mobile app. ### Gather the Required Information @@ -43,37 +42,36 @@ In this mission you will learn how to create and update an SAP Mobile Services a ![CCInfo](ccinfo.png) - 1. The **SAP Cloud Connector Location Id** is located within the parenthesis (i.e., `ConvergedCloud`). If there are no parenthesis, then the **SCC Location Id** is not required. - - 2. The **Virtual Host** is located in the Host column of the Exposed Back-End Systems table. Please remove any leading or trailing spaces when copied.
- -### Create the Mobile Services App via the MS App Create Transaction +### Create an SAP Service and Asset Manager Mobile Services App with Metrics 1. Execute the transaction **`/MERP/CPMS_APPCREATE`** from the SAP GUI, then select your required variant (i.e., `SAP&SAM_`). -2. Fill in the **Admin API**, **SCC Location Id** and **Virtual Host**. Please ensure the **Background Job User** will maintain authorization to run the Usage Metering background job (parameter info below). To add the `sap-client` header to the Mobile Destinations please see Step 5 (recommended). To use an RFC Destination instead of the **Admin API** please see Step 4 (optional). Then execute the transaction. +2. Fill in the **Admin API**, **SCC Location Id** and **Virtual Host**. Please ensure the **Background Job User** will maintain authorization to run the Usage Metering background job (parameter info below). To use an RFC Destination to create the app instead of the **Admin API**, please see Step 4 (optional). The app is created as the Metrics Host by default; to set up the app as the Metrics Satellite, please see Step 6. Then execute the transaction. ![SelScreen](selscreen.png) - | Parameter | What's the use? | - | :--------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | - | **MS Admin API or MW Server GUID** | Used to establish a connection from the SAP Backend to the SAP Mobile Services service. | - | **OData Service Mobile App** | Used to generate the mobile app's offline configuration sent to the SAP Mobile Services **Mobile Offline Access** feature. | - | **OData Service Technical Name** | Used to generate the mobile app's offline configuration sent to the SAP Mobile Services **Mobile Offline Access** feature.
| - | **OData Service Group Version** | Used to generate the mobile app's offline configuration sent to the SAP Mobile Services **Mobile Offline Access** feature. | - | **MS Application ID** | The unique application identifier given to the SAP Mobile Services app. | - | **MS Application Name** | The application name given to the SAP Mobile Services app. | - | **MS Application Description** | The application description given to the SAP Mobile Services app. | - | **MS Vendor Name** | The vendor name given to the SAP Mobile Services app. | - | **MS Application Timeout** | The maximum time in milliseconds before a client connection times out in your environment. After that timeout period, the connection is closed. | - | **MS App License Type** | The Service Plan used by Mobile Services. The plan `basic-plus-app` is recommended for SAP mobile applications. | - | **SCC X.509 Virtual Host** | Used to generate the URL for the Mobile Services Mobile Destinations | - | **Cloud Connector Location ID** | Used to set **Cloud Connector Location Id** for the Mobile Services Mobile Destinations | - | **Background Job User** | Used to schedule the Usage Metering background job with a daily frequency. If no user is provided, then the user executing the transaction is used. Please ensure the **Background Job User** will maintain authorization to run the background job. | + | Parameter | What's the use? | + | :-------- | :-------------- | + | **MS Admin API or MW Server GUID** | Used to establish a connection from the SAP Backend to the SAP Mobile Services service. | + | **OData Service Mobile App** | Used to generate the mobile app's offline configuration sent to the SAP Mobile Services **Mobile Offline Access** feature. | + | **OData Service Technical Name** | Used to generate the mobile app's offline configuration sent to the SAP Mobile Services **Mobile Offline Access** feature. 
| + | **OData Service Group Version** | Used to generate the mobile app's offline configuration sent to the SAP Mobile Services **Mobile Offline Access** feature. | + | **MS Application ID** | The unique application identifier given to the SAP Mobile Services App. | + | **MS Application Name** | The application name given to the SAP Mobile Services App. | + | **MS Application Description** | The application description given to the SAP Mobile Services App. | + | **MS Vendor Name** | The vendor name given to the SAP Mobile Services App. | + | **MS Application Timeout** | The maximum time in milliseconds before a client connection times out in your environment. After that timeout period, the connection is closed. | + | **MS App License Type** | The Service Plan used by Mobile Services. The plan `basic-plus-app` is recommended for SAP mobile applications. | + | **SCC X.509 Virtual Host** | Used to generate the URL for the Mobile Services Mobile Destinations. | + | **Cloud Connector Location ID** | Used to set **Cloud Connector Location Id** for the Mobile Services Mobile Destinations. | + | **Client** | Used to set `sap-client` header for Mobile Services Mobile Destinations. Defaulted to current System Client. | + | **Background Job User** | Used to schedule the Usage Metering background job with a daily frequency. If no user is provided, then the user executing the transaction is used. Please ensure the **Background Job User** will maintain authorization to run the background job. | + | **Client Role** | Used by the Metrics report to determine user counts (i.e., 0 user counts sent for non-productive systems). Defaulted to System Client Role defined in transaction SCC4. Stored in MAIF Product Table. | + | **Satellite System** | Satellite Systems can be set up to avoid duplicate User Counts when multiple Production Systems are in use.
Metrics from a Satellite System will be retrieved via the Metrics Report executed in the Host System, ensuring duplicate SAP Users are only counted once. See Step 6 to set up Satellite Systems. | >**WARNING:** Any change that may affect the offline configuration (e.g., a new entity type is added to your mobile app configuration, or the **Defer Batch Response** setting is changed for the **OData Service Technical Name** provided when generating the offline configuration) will require you to update the offline configuration in Mobile Services and reset your mobile app. See Step 2.5 to update. -3. If you are not using a Middleware Server with an RFC Destination with Basic Authentication enabled, then you should receive a sign-in prompt after executing the transaction. Please use your SAP BTP username and password to sign in. +3. If you are not using a Middleware Server with an RFC Destination with Basic Authentication enabled, then you should receive a sign-in prompt after executing the transaction. Please use your SAP BTP username and password to sign in. >Please allow ~5 minutes to complete processing. @@ -81,21 +79,21 @@ In this mission you will learn how to create and update an SAP Mobile Services a ![Output](output.png) -5. The Mobile Services app can be updated by re-executing the transaction and selecting the features to update when prompted. See additional info for each option below. +5. The Mobile Services App can be updated by re-executing the transaction and selecting the features to update when prompted. See additional info for each option below. - | Feature | What is Updated? | - | :---------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------ | - | **Update Mobile Offline Access** | Updates the offline configuration of your app. | - | **Update Usage Metering** | Creates Usage Metering Middleware Server and Background Job.
Existing Middleware Server and Background Job are deleted. | - | **Compare Offline Configuration** | Compare offline configuration properties, request groups and request download phases between the backend and the mobile services app. | - | **Update Mobile Connectivity** | Updates the offline and online destination settings of your app. | - | **Add Mobile App Update** | Assigns the feature to your app if not already assigned. | - | **Update Mobile Push Notification** | Updates the Predefined Global Push Configuration to **SAP ASSET MANAGER**. | - | **Add Mobile App Catalog** | Assigns the feature to your app if not already assigned. | - | **Add Mobile Cloud Build** | Assigns the feature to your app if not already assigned. | - | **Add Mobile Client Log Upload** | Assigns the feature to your app if not already assigned. | + | Feature | What is Updated? | + | :------ | :--------------- | + | **Update Mobile Offline Access** | Updates the offline configuration of your app. | + | **Update Usage Metering** | Creates Usage Metering Middleware Server and Background Job. Existing Middleware Server and Background Job are deleted. Updates Client Role in Product Table. Updates MS_UNIFIED_SERVER system component. | + | **Compare Offline Configuration** | Compares offline configuration properties, request groups, and request download phases between the backend and the Mobile Services App. | + | **Update Mobile Connectivity** | Updates the offline and online destination settings of your app. | + | **Add Mobile App Update** | Assigns the feature to your app if not already assigned. | + | **Update Mobile Push Notification** | Updates the Predefined Global Push Configuration to **SAP ASSET MANAGER**. | + | **Add Mobile App Catalog** | Assigns the feature to your app if not already assigned. | + | **Add Mobile Cloud Build** | Assigns the feature to your app if not already assigned. | + | **Add Mobile Client Log Upload** | Assigns the feature to your app if not already assigned.
| -### Review the Created Mobile Services App +### Review an SAP Service and Asset Manager Mobile Services App with Metrics 1. In the **Native/MDK** section of the SAP Mobile Services service in your SAP BTP subaccount you should see your app in a **Started State** with the **MS Application ID** you provided in Step 2.2. @@ -103,32 +101,38 @@ In this mission you will learn how to create and update an SAP Mobile Services a 2. Select your app to see the **Assigned Features**. The following features should be assigned to your app: - - **Mobile App Catalog** - - **Mobile App Update** - - **Mobile Client Log Upload** - - **Mobile Cloud Build** - - **Mobile Connectivity** - - **Mobile Offline Access** - - **Mobile Push Notification** + - **Connectivity** + - **App Catalog** + - **App Update** + - **Client Log Upload** + - **Cloud Build** + - **Offline Access** + - **Push Notification** -3. From your app's overview screen, select the **Mobile Connectivity** feature. You should see two Mobile Destinations created with the properties below. Under the **Actions** column, selecting the **Launch in Browser** icon should return the metadata. If you are prompted to sign in after clicking the **Launch in Browser** icon then please see Step 6. +3. From your app's overview screen, select the **Connectivity** feature. You should see two Mobile Destinations created with the properties below. The `sap-client` header should be automatically added to the destinations' **Custom Headers** section. Under the **Actions** column, selecting the **Launch in Browser** icon should return the metadata.
![Destinations](destinations.png) - | Destination Name | Destination URL | - | :------------------------------- | :---------------------------------------------------------------------------- | - | `DEST_SAM_PPROP` | **`http://:/sap/opu/odata/MERP/SAP_SRV_ASSET_MANAGER_`** | + | Destination Name | Destination URL | + | :--------------- | :-------------- | + | `DEST_SAM_PPROP` | **`http://:/sap/opu/odata/MERP/SAP_SRV_ASSET_MANAGER_`** | | `DEST_SAM_ONLINE_PPROP` | **`http://:/sap/opu/odata/MERP/SAP_ONLINE_LOOKUP_EXT_`** | -4. From your app's **Mobile Connectivity** feature, select the **Service Keys** tab. You should see a Service Key with the properties below. The Key should be automatically maintained as the `X-API-Key` property in the Additional Properties of the Usage Metering Middleware Server which we will review in Step 3.7. + **Custom Headers** + + | Header Name | Header Value | + | :-----------| :----------- | + | `sap-client` | Your client (i.e., **`800`**) | + +4. From your app's **Mobile Connectivity** feature, select the **Service Keys** tab. You should see a Service Key with the properties below. The Key should be automatically maintained as the `X-API-Key` property in the Additional Properties of the Usage Metering Middleware Server which we will review in Step 3.8. - | Field Name | Value | - | :--------- | :------------------------------------------ | - | Alias | **``** (i.e., **`PRD001`**) | - | Roles | **`sap_application_metering`** | - | Type | **`API Key`** | + | Field Name | Value | + | :--------- | :---- | + | Alias | **``** (i.e., **`PRD001`**) | + | Roles | **`sap_application_metering`** | + | Type | **`API Key`** | -5. From your app's overview screen, if you select the **Mobile Offline Access** feature you should be able to display and edit the offline configuration. If the offline configuration is missing, then please see Step 7. +5. 
From your app's overview screen, if you select the **Offline Access** feature you should be able to display and edit the offline configuration. If the offline configuration is missing, then please see Step 8. 6. From your app's overview screen, select the **APIs** tab to view the onboarding QR code which you can scan from the SAP Service and Asset Manager mobile app. @@ -136,35 +140,67 @@ In this mission you will learn how to create and update an SAP Mobile Services a ![MDW](mdw.png) -8. Verify that the Middleware Server has the following **Basic Info** and **Additional Properties**. If the Middleware Server's **Basic Info** or **Additional Properties** are not as expected then please see Step 8. +8. Verify that the Middleware Server has the following **Basic Info** and **Additional Properties**. If the Middleware Server's **Basic Info** or **Additional Properties** are not as expected then please see Step 9. **Basic Info** ![MDWBasicInfo](mdwbasicinfo.png) - | Field Name | Value | - | :--------------------- | -------------------------------------------------- | - | Mobile Application | **``** | - | Server Name | **`_MS_UNIFIED_SERVER`** | - | `Middleware Svr SerNo` | **`SCP`** | - | Server GUID | **``** | - | Port | **`00443`** | - | UI Host Name | **`https://example.cfapps.sap.hana.ondemand.com`** | + | Field Name | Value | + | :--------- | ----- | + | Mobile Application | **``** | + | Server Name | **`_MS_UNIFIED_SERVER`** | + | `Middleware Svr SerNo` | **`SCP`** | + | Server GUID | **``** | + | Port | **`00443`** | + | UI Host Name | **`https://example.cfapps.sap.hana.ondemand.com`** | **Additional Properties** ![MDWAdditionalProperties](mdwadditionalproperties.png) - | Property Group | Property Name | Property Value | - | :------------- | :----------------- | :----------------------------------------- | - | **`METERING`** | **`X-API-Key`** | **``** | + | Property Group | Property Name | Property Value | + | :------------- | :------------ | :------------- | + | 
**`METERING`** | **`Host`** | **`X`** | + | **`METERING`** | **`X-API-Key`** | **``** | | **`METERING`** | **`service_path`** | **`/mobileservices/service-key/metering`** | -9. To verify that the Usage Metering Background Job is scheduled, please execute transaction **SM37** from the SAP GUI and search for the job name noted in Step 2.4. If the background job is missing, then please see Step 9. +9. To verify that the Usage Metering Background Job is scheduled, please execute transaction **SM37** from the SAP GUI and search for the job name noted in Step 2.4. If the background job is missing, then please see Step 10. ![SM37](sm37.png) -### Optional Feature 1 - Use an RFC Destination (Middleware Server) to Create the App +10. Execute transaction **/SYCLO/CONFIGPANEL** from the SAP GUI to open up the MAIF Configuration Panel. Navigate to **Mobile Application Configuration** > **System Components**. You should see a system component with the properties below. + + ![SystemComponent](syscomp.png) + + | Field Name | Value | + | :--------- | :---- | + | System Component | **`MS_UNIFIED_SERVER`** | + | System Role | **`Host`** | + | Active Flag | Selected | + +11. To ensure the Metrics Requests are sent successfully, execute transaction **SE38** and run the program **`/MFND/CORE_CLOUD_METRICS_PROG`**. Provide `SAP_SERVICE_ASSET_MANAGER` in `Product Technical Name` and execute. + + **Selection Screen** + ![MetricsSelScreen](metricsselscreen.png) + + **Successful Output** + ![MetricsOutput](metricsoutput.png) + + | Output | Explanation | + | :----- | :---------- | + | **Authorized Users** | Total of Professional and Standard Users. | + | **Professional Users** | Users having authorization for a Persona with Usage Type Advanced User. | + | **Standard Users** | Users having authorization for a Persona with Usage Type Standard User. | + | **External Users** | Users having authorization for a Persona with Usage Type External User.
| + | **Active Users** | Unique users who have completed a sync in the previous day. | + | **Month to Date Active Users** | Unique users who have completed a sync in the previous 30 days. | + | **Persona** | Users having authorization for the Persona. | + | **Mobile Application** | Sync Info for the previous day. | + + >Persona Authorization configuration can be found in the MAIF Configuration Panel. Execute transaction **/SYCLO/CONFIGPANEL** from the SAP GUI to open up the MAIF Configuration Panel and navigate to **Mobile Application Configuration** > **Application Persona**. + +### Optional Feature 1 - Use an RFC Destination (Middleware Server) to Create the Mobile Services App 1. Execute transaction **SM59** from the SAP GUI. Then click the create icon. @@ -176,11 +212,11 @@ In this mission you will learn how to create and update an SAP Mobile Services a **Admin API:** `https://mobile-service-cockpit-example.sap.hana.ondemand.com/cockpit/v1/org/ExampleOrg/space/ExampleSpace` - | Field Name | Value | - | :---------------- | :--------------------------------------------------------- | - | Target Host | **`mobile-service-cockpit-example.sap.hana.ondemand.com`** | - | Service No.(Port) | **`443`** | - | Path Prefix | **`/cockpit/v1/org/ExampleOrg/space/ExampleSpace/app`** | + | Field Name | Value | + | :--------- | :---- | + | Target Host | **`mobile-service-cockpit-example.sap.hana.ondemand.com`** | + | Service No.(Port) | **`443`** | + | Path Prefix | **`/cockpit/v1/org/ExampleOrg/space/ExampleSpace/app`** | >**HTTP Proxy Options** are available in the RFC Destination Technical Settings if required.
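The SM59 fields in the table above are simply a decomposition of the **Admin API** URL. A minimal sketch, using the tutorial's own example URL (only the `/app` suffix is appended to form the Path Prefix), shows the mapping:

```python
from urllib.parse import urlsplit

# Decompose the example Admin API URL into the SM59 Target Host /
# Service No. / Path Prefix fields shown in the table above.
admin_api = ("https://mobile-service-cockpit-example.sap.hana.ondemand.com"
             "/cockpit/v1/org/ExampleOrg/space/ExampleSpace")

parts = urlsplit(admin_api)
target_host = parts.hostname        # SM59 Target Host
port = parts.port or 443            # SM59 Service No. (443 for HTTPS)
path_prefix = parts.path + "/app"   # SM59 Path Prefix

print(target_host)   # mobile-service-cockpit-example.sap.hana.ondemand.com
print(port)          # 443
print(path_prefix)   # /cockpit/v1/org/ExampleOrg/space/ExampleSpace/app
```

The same decomposition applies to any Admin API URL for your own org and space: everything after the host becomes the Path Prefix, plus the `/app` suffix.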
@@ -204,104 +240,126 @@ In this mission you will learn how to create and update an SAP Mobile Services a ![MDWCreate](mdwcreate.png) - | Field Name | Value | - | :--------------------- | :------------------------- | - | Mobile Application | **``** | - | Server Name | **`Z_MS_ADMIN_API`** | - | Server GUID | **``** | - | Port | **`07003`** | - | `Middleware Svr SerNo` | **`SCP`** | - | RFC Destination | **`Z_MS_ADMIN_API`** | + | Field Name | Value | + | :--------- | :---- | + | Mobile Application | **``** | + | Server Name | **`Z_MS_ADMIN_API`** | + | Server GUID | **``** | + | Port | **`07003`** | + | `Middleware Svr SerNo` | **`SCP`** | + | RFC Destination | **`Z_MS_ADMIN_API`** | -10. You may now use the generated **Server GUID** instead of the **Admin API** in Step 2.2 . You may use F4 Help on the **Admin API or Middleware Server GUID** field of the MS App Create transaction to search for the created Middleware Server. +10. You may now use the generated **Server GUID** instead of the **Admin API** in Step 2.2. You may use F4 Help on the **Admin API or Middleware Server GUID** field to search for the created Middleware Server. ![ServerGUID](serverguid.png) -### Optional Feature 2 - Add sap-client header to the Mobile Destinations - -1. Follow Step 2.1 and 2.2 then return to this Step before executing. Then click **Advanced Mode**. +### Optional Feature 2 - Use an RFC Destination to send Metrics to Cloud Reporting -2. Under the **Mobile Services Connection Configuration** section, provide the JSON payload below in the **Destination Headers** field of the offline and online destinations (substituting `` with the required client). +1. Copy the **URL** of the **Server API** from the APIs tab of your Mobile Services App. - ```JSON - {"name": "sap-client", "value": "", "overwrite": false} - ``` ![UIHost](uihost.png) - ![AddClient](addclient.png) +2. Execute transaction **SM59** from the SAP GUI. Then click the create icon. -3.
If your app already exists and you are updating your app, select the option **Update Mobile Connectivity** when prompted. +3. Provide the destination name **`Z_SAM_METERING`** and set **Connection Type** to **`G HTTP Connection to External Server`** (substituting `` with your app version). -4. Alternately, you may edit the Mobile Destinations directly in the SAP Mobile Services **Mobile Connectivity** feature of your app. For each destination, click the pencil icon and navigate to the **Custom Headers** section. Add a custom header with the following values: +4. Provide the copied URL without `https://` in the **Host** field of the Target System Settings. Use the **Port** and **Path Prefix** as in the example below. - ![EditDest](editdest.png) + ![RFCTechSetMeter](rfctechsetmeter.png) - | Header Name | Header Value | - | :----------- | :---------------------------- | - | `sap-client` | Your client (i.e., **`800`**) | + | Field Name | Value | + | :--------- | :---- | + | Target Host | **`samcf-sam-cf-sam.example.hana.ondemand.com`** | + | Service No.(Port) | **`443`** | + | Path Prefix | **`/mobileservices/service-key/metering`** | + +5. In the **Logon & Security** tab, within the **Security Options** > **Status of Secure Protocol** section of your RFC destination, please set the **SSL** radio button to **Active**. + + ![RFCSecSetMetering](rfcsecsetmetering.png) -### Optional Feature 3 - Enable Multiple Threads in Offline Configuration - -1. Follow Step 2.1 and 2.2 then return to this Step before executing. Then click **Advanced Mode**. +6. Save the RFC Destination. -2. Under the **Mobile Services Offline OData Settings** section, check the `Calculate oMDO Download Phases` and `Enable Multiple Threads` options. Then, set `Number of Threads` to `3`. +7. We will now update the Usage Metering Middleware Server with the RFC Destination created. To update the Middleware Server, execute transaction **/SYCLO/ADMIN** from the SAP GUI to open up the MAIF Admin Panel. -3.
If your app already exists and you are updating your app, select the option **Update Mobile Offline Access** when prompted. +8. Navigate to the **Administration** > **Server Management** section. -4. Alternately, you may generate the offline configuration using the offline configuration program **`/MERP/CORE_OFFLINE_CONFIG_PROG`**. The generated file can then be uploaded in the SAP Mobile Services **Mobile Offline Access** feature of your app. +9. Select the Middleware Server with the name noted in Step 2.4. If the Usage Metering Middleware Server is missing then please see Step 9. + +10. Update the **RFC Destination** and click **Save**. -5. Execute the program **`/MERP/CORE_OFFLINE_CONFIG_PROG`** in transaction **SE38** from the SAP GUI, then select your required variant. + ![MDWMetering](mdwmetering.png) -6. Select the `Advanced Offline Configuration` radio button. Check the `Calculate oMDO Download Phases` and `Enable Multiple Threads` options. Then, set `Number of Threads` to `3`. Execute the transaction. +### Optional Feature 3 - Set up Satellite Systems + +1. Please ensure the app has been created and reviewed (Steps 2 and 3) in the Host System. - ![OfflineProgMT](offlineprog_mt.png) +2. Follow Steps 1 to 2.2, then return to this Step before executing. Check the `Satellite System` checkbox. Provide the `Host RFC Destination` (recommended) to create the Satellite Middleware Server in the Host System that will be used to retrieve the Metrics from the Satellite System. If you do not provide the `Host RFC Destination`, please ensure you manually create the Satellite Middleware Server in the Host System (see Step 6.5). Execute the transaction. If you are updating an existing app, please select the **Update Usage Metering** feature when prompted. -7. Please ensure to save the generated file with a `.ini` file extension. + ![SatelliteSettings](satsettings.png) -8. Import the file in the **Mobile Offline Access** feature of your app. +3.
Please take note of the **Middleware Server** created on the Host System via RFC. - ![ImportOffline](importoffline.png) + ![SatelliteOutput](satoutput.png) -### Optional Feature 4 - Update the Usage Metering Middleware Server to use an RFC Destination. +4. In the Host System, execute transaction **/SYCLO/ADMIN** from the SAP GUI to open up the MAIF Admin Panel. Navigate to the **Administration** > **Server Management** section. -1. Copy the **URL** of the **Server API**. from the APIs tab of your Mobile Services app. +5. Select the Middleware Server noted above. Create the Satellite Middleware Server if it was not created automatically via RFC in the previous step. Edit the Middleware Server and provide an RFC Destination to the Satellite System. Ensure the RFC authentication is automatic. + + **Basic Info** + + ![SatServerBasic](satserverbasic.png) + + | Field Name | Value | + | :--------- | ----- | + | Mobile Application | **``** | + | Server Name | **`_MS_UNIFIED_SERVER_CLNT`** | + | System Component | **`MS_UNIFIED_SERVER`** | + | `Middleware Svr SerNo` | **`SCP`** | + | Server GUID | **``** | + | Port | **`00443`** | + | UI Host Name | No Value Required | + | RFC Destination | Provide an RFC Destination to the Satellite System | - ![UIHost](uihost.png) + **Additional Properties** (case sensitive) -2. Execute transaction **SM59** from the SAP GUI. Then click the create icon. + ![SatServerProps](satserverprops.png) -2. Provide the destination name **`Z_SAM_METERING`** and set **Connection Type** to **`G HTTP Connection to External Server`** (substituting `` with your app version). + | Property Group | Property Name | Property Value | + | :------------- | :------------ | :------------- | + | **`METERING`** | **`Host`** | No Value | + | **`METERING`** | **`X-API-Key`** | No Value Required | + | **`METERING`** | **`service_path`** | No Value Required | + + >Ensure all properties exist even with no values. -3. Provide the copied URL without `https://` in **Host** field of the Target System Settings.
Use the **Port** and **Path Prefix** as in the example below. + >Ensure all properties exist, even with no values. - ![RFCTechSetMeter](rfctechsetmeter.png) +6. To ensure the Satellite Metrics are retrieved successfully, execute transaction **SE38** and run the program **`/MFND/CORE_CLOUD_METRICS_PROG`** in the Host System. Provide `SAP_SERVICE_ASSET_MANAGER` in `Product Technical Name` and execute. - | Field Name | Value | - | :---------------- | :--------------------------------------------------------- | - | Target Host | **`samcf-sam-cf-sam.example.hana.ondemand.com`** | - | Service No.(Port) | **`443`** | - | Path Prefix | **`/mobileservices/service-key/metering`** | - -4. In the **Logon & Security** tab, within the **Security Options** > **Status of Secure Protocol** section, of your RFC destination please set the **SSL** radio button to **Active**. - - ![RFCSecSetMetering](rfcsecsetmetering.png) + **Successful Output** + ![SatMetricOutput](satmetricoutput.png) -5. Save the RFC Destination. +### Optional Feature 4 - Enable Multiple Threads in Offline Configuration + +1. Follow Steps 2.1 and 2.2, then return to this Step before executing. Click **Advanced Mode**. -6. We will now update the Usage Metering Middleware Server with the RFC Destination created. To update the Middleware Server, execute transaction **/SYCLO/ADMIN** from the SAP GUI to open up the MAIF Admin Panel. +2. Under the **Mobile Services Offline OData Settings** section, check the `Calculate oMDO Download Phases` and `Enable Multiple Threads` options. Then, set `Number of Threads` to `3`. -7. Navigate to the **Administration** > **Server Management** section. +3. If your app already exists and you are updating your app, select the option **Update Mobile Offline Access** when prompted. -8. Select the Middleware Server with the name noted in Step 2.4. If the Usage Metering Middleware Server is missing then please see Step 8. - -9. Update the **RFC Destination** and click **Save**. +4.
Alternatively, you may generate the offline configuration using the offline configuration program **`/MERP/CORE_OFFLINE_CONFIG_PROG`**. The generated file can then be uploaded to the SAP Mobile Services **Mobile Offline Access** feature of your app. - ![MDWMetering](mdwmetering.png) +5. Execute the program **`/MERP/CORE_OFFLINE_CONFIG_PROG`** in transaction **SE38** from the SAP GUI, then select your required variant. + +6. Select the `Advanced Offline Configuration` radio button. Check the `Calculate oMDO Download Phases` and `Enable Multiple Threads` options. Then, set `Number of Threads` to `3`. Execute the transaction. -### Troubleshoot 1 - Prompted to sign-in after selecting the Launch in Browser icon + ![OfflineProgMT](offlineprog_mt.png) + +7. Please ensure you save the generated file with a `.ini` file extension. + +8. Import the file in the **Mobile Offline Access** feature of your app. -1. This can occur when the `sap-client` header is not specified on the Mobile Destinations. Follow Optional Feature 2 to add the `sap-client` header. + ![ImportOffline](importoffline.png) -### Troubleshoot 2 - Missing Offline Configuration +### Troubleshoot 1 - Missing Offline Configuration 1. Follow Step 2.5 and select the option **Update Mobile Offline Access** when prompted. The offline configuration will be regenerated and sent to SAP Mobile Services.
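The usage-metering settings configured in this tutorial — the Target Host, Service No. (Port), and Path Prefix of the RFC destination, plus the `X-API-Key` service key — all describe a single HTTPS endpoint. As a quick orientation aid, here is a minimal sketch assembling that URL from the tutorial's example values (the host is the tutorial's placeholder, not a real system):

```python
# Sketch: how the RFC destination fields for usage metering map to one
# plain HTTPS endpoint. Replace TARGET_HOST with the Server API URL
# from the APIs tab of your own Mobile Services app.
TARGET_HOST = "samcf-sam-cf-sam.example.hana.ondemand.com"  # Target Host (no https://)
PORT = 443                                                  # Service No. (Port)
PATH_PREFIX = "/mobileservices/service-key/metering"        # Path Prefix

metering_url = f"https://{TARGET_HOST}:{PORT}{PATH_PREFIX}"

# The X-API-Key header carries the Service Key generated under
# Mobile Connectivity > Service Keys (role sap_application_metering).
headers = {"X-API-Key": "<your-generated-service-key>"}

print(metering_url)
```

If metering requests fail even though the Middleware Server properties look correct, comparing this assembled URL against the Server API URL shown in the APIs tab is a quick first check.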
@@ -333,29 +391,30 @@ In this mission you will learn how to create and update an SAP Mobile Services a ![MDWCreateMeter](mdwcreatemeter.png) - | Field Name | Value | - | :--------------------- | -------------------------------------------------- | - | Mobile Application | **``** | - | Server Name | **`_MS_UNIFIED_SERVER`** | - | `Middleware Svr SerNo` | **`SCP`** | - | Server GUID | **``** | - | Port | **`00443`** | - | UI Host Name | **`https://example.cfapps.sap.hana.ondemand.com`** | + | Field Name | Value | + | :--------- | ----- | + | Mobile Application | **``** | + | Server Name | **`_MS_UNIFIED_SERVER`** | + | `Middleware Svr SerNo` | **`SCP`** | + | Server GUID | **``** | + | Port | **`00443`** | + | UI Host Name | **`https://example.cfapps.sap.hana.ondemand.com`** | - >The **UI Host Name** can be found in the **APIs** tab of your Mobile Services app. Copy the **URL** of the **Server API**. + >The **UI Host Name** can be found in the **APIs** tab of your Mobile Services App. Copy the **URL** of the **Server API**. > >![UIHost](uihost.png)
 
**Additional Properties** (case sensitive) ![MDWMeterAddProp](mdwmeteraddprop.png) - - | Property Group | Property Name | Property Value | - | :------------- | :----------------- | :----------------------------------------- | - | **`METERING`** | **`X-API-Key`** | **``** | + + | Property Group | Property Name | Property Value | + | :------------- | :------------ | :------------- | + | **`METERING`** | **`Host`** | **`X`** | + | **`METERING`** | **`X-API-Key`** | **``** | | **`METERING`** | **`service_path`** | **`/mobileservices/service-key/metering`** | - >To generate **X-API-Key** go to the **Mobile Connectivity** feature of your Mobile Services app and select the **Service Keys** tab. Click the add icon "**+**" to add a Service Key with the following values and copy the generated key. + >To generate the **X-API-Key**, go to the **Mobile Connectivity** feature of your Mobile Services App and select the **Service Keys** tab. Click the add icon "**+**" to add a Service Key with the following values and copy the generated key. > >![AddServiceKey](addservicekey.png) > @@ -363,15 +422,15 @@ In this mission you will learn how to create and update an SAP Mobile Services a **Service Key Values** - | Field Name | Value | - | :--------- | :------------------------------------- | - | Alias | Any Alias is okay (i.e., **`PR1001`**) | - | Roles | **`sap_application_metering`** | - | Type | **`API Key`** | + | Field Name | Value | + | :--------- | :---- | + | Alias | Any Alias is okay (e.g., **`PR1001`**) | + | Roles | **`sap_application_metering`** | + | Type | **`API Key`** | -### Troubleshoot 4 - Usage Metering Background Job Missing. +### Troubleshoot 3 - Usage Metering Background Job Missing -1. Follow Step 2.5 and select **Usage Metering** feature when prompted. If the background job is still missing, you may try the next steps. +1. Follow Step 2.5 and select the **Update Usage Metering** feature when prompted. If the background job is still missing, you may try the next steps.
2. Execute the program **`/MFND/CORE_CLOUD_METRICS_PROG`** in transaction **SE38** from the SAP GUI. diff --git a/tutorials/mob-app-mao-erp-app-create/output.png b/tutorials/mob-app-mao-erp-app-create/output.png index 747e94110a..23fc7aaecb 100644 Binary files a/tutorials/mob-app-mao-erp-app-create/output.png and b/tutorials/mob-app-mao-erp-app-create/output.png differ diff --git a/tutorials/mob-app-mao-erp-app-create/satmetricoutput.png b/tutorials/mob-app-mao-erp-app-create/satmetricoutput.png new file mode 100644 index 0000000000..760337d8be Binary files /dev/null and b/tutorials/mob-app-mao-erp-app-create/satmetricoutput.png differ diff --git a/tutorials/mob-app-mao-erp-app-create/satoutput.png b/tutorials/mob-app-mao-erp-app-create/satoutput.png new file mode 100644 index 0000000000..ba784bbd40 Binary files /dev/null and b/tutorials/mob-app-mao-erp-app-create/satoutput.png differ diff --git a/tutorials/mob-app-mao-erp-app-create/satserverbasic.png b/tutorials/mob-app-mao-erp-app-create/satserverbasic.png new file mode 100644 index 0000000000..ad894113ef Binary files /dev/null and b/tutorials/mob-app-mao-erp-app-create/satserverbasic.png differ diff --git a/tutorials/mob-app-mao-erp-app-create/satserverprops.png b/tutorials/mob-app-mao-erp-app-create/satserverprops.png new file mode 100644 index 0000000000..5fa16487ca Binary files /dev/null and b/tutorials/mob-app-mao-erp-app-create/satserverprops.png differ diff --git a/tutorials/mob-app-mao-erp-app-create/satsettings.png b/tutorials/mob-app-mao-erp-app-create/satsettings.png new file mode 100644 index 0000000000..2ea8b8af88 Binary files /dev/null and b/tutorials/mob-app-mao-erp-app-create/satsettings.png differ diff --git a/tutorials/mob-app-mao-erp-app-create/selscreen.png b/tutorials/mob-app-mao-erp-app-create/selscreen.png index 8c63ad14fc..dcc89d7956 100644 Binary files a/tutorials/mob-app-mao-erp-app-create/selscreen.png and b/tutorials/mob-app-mao-erp-app-create/selscreen.png differ diff --git 
a/tutorials/mob-app-mao-erp-app-create/serverguid.png b/tutorials/mob-app-mao-erp-app-create/serverguid.png index 65ac583863..6541141c9f 100644 Binary files a/tutorials/mob-app-mao-erp-app-create/serverguid.png and b/tutorials/mob-app-mao-erp-app-create/serverguid.png differ diff --git a/tutorials/mob-app-mao-erp-app-create/sm37.png b/tutorials/mob-app-mao-erp-app-create/sm37.png index 8e43f4e956..3d700406f9 100644 Binary files a/tutorials/mob-app-mao-erp-app-create/sm37.png and b/tutorials/mob-app-mao-erp-app-create/sm37.png differ diff --git a/tutorials/mob-app-mao-erp-app-create/syscomp.png b/tutorials/mob-app-mao-erp-app-create/syscomp.png new file mode 100644 index 0000000000..9224228013 Binary files /dev/null and b/tutorials/mob-app-mao-erp-app-create/syscomp.png differ diff --git a/tutorials/modernize-rfc-receiver/modernize-rfc-receiver.md b/tutorials/modernize-rfc-receiver/modernize-rfc-receiver.md index b2638511c2..47dae2d211 100644 --- a/tutorials/modernize-rfc-receiver/modernize-rfc-receiver.md +++ b/tutorials/modernize-rfc-receiver/modernize-rfc-receiver.md @@ -23,6 +23,8 @@ In this tutorial, we will simulate the process of replacing RFC Receiver communi - Generate SOAP Web Services from RFC Function Modules - Test your Consumer Proxy internally in ABAP backend system. +**Important:** When modernizing, be aware that RFC calls silently truncate values that exceed fixed-length ABAP fields (40 characters), but SOAP and OData will reject them outright with length errors. To prevent integration failures, ensure you explicitly validate or truncate these fields in your integration flow, or adjust the backend lengths accordingly. + On this Tutorial, we **won't** cover: - The development of the integration flow on SAP Cloud Integration.
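To make the truncation pitfall above concrete, here is a minimal, hypothetical sketch of the pre-send validation the note recommends (plain Python for illustration only; the field names and the 40-character limit are example assumptions, and in practice this logic would live in your integration flow):

```python
# Sketch: validate or truncate payload fields against fixed ABAP lengths
# before sending over SOAP/OData, where oversized values raise length
# errors instead of being silently truncated as with RFC.

ABAP_FIELD_LENGTHS = {"NAME1": 40, "STREET": 40}  # hypothetical field limits

def prepare_payload(payload, limits, truncate=False):
    """Truncate oversized fields, or raise so the caller can reject early."""
    prepared = {}
    for field, value in payload.items():
        limit = limits.get(field)
        if limit is not None and len(value) > limit:
            if truncate:
                # Mimic RFC's silent truncation, but do it explicitly and visibly.
                value = value[:limit]
            else:
                raise ValueError(f"{field} exceeds {limit} characters")
        prepared[field] = value
    return prepared

safe = prepare_payload({"NAME1": "A" * 50, "STREET": "Main St"},
                       ABAP_FIELD_LENGTHS, truncate=True)
```

Whether to truncate or reject is a business decision: truncating preserves the old RFC behavior, while rejecting surfaces data-quality problems that RFC used to hide.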
diff --git a/tutorials/odata-01-intro-origins/association-definition.png b/tutorials/odata-01-intro-origins/association-definition.png deleted file mode 100644 index 47d2db8e0e..0000000000 Binary files a/tutorials/odata-01-intro-origins/association-definition.png and /dev/null differ diff --git a/tutorials/odata-01-intro-origins/associationset-definition.png b/tutorials/odata-01-intro-origins/associationset-definition.png deleted file mode 100644 index 73322a02e4..0000000000 Binary files a/tutorials/odata-01-intro-origins/associationset-definition.png and /dev/null differ diff --git a/tutorials/odata-01-intro-origins/entitycontainer.png b/tutorials/odata-01-intro-origins/entitycontainer.png deleted file mode 100644 index 6fa6fe2ca6..0000000000 Binary files a/tutorials/odata-01-intro-origins/entitycontainer.png and /dev/null differ diff --git a/tutorials/odata-01-intro-origins/entry-links.png b/tutorials/odata-01-intro-origins/entry-links.png deleted file mode 100644 index 9e39a4e283..0000000000 Binary files a/tutorials/odata-01-intro-origins/entry-links.png and /dev/null differ diff --git a/tutorials/odata-01-intro-origins/northwind-metadata.png b/tutorials/odata-01-intro-origins/northwind-metadata.png deleted file mode 100644 index f5013df709..0000000000 Binary files a/tutorials/odata-01-intro-origins/northwind-metadata.png and /dev/null differ diff --git a/tutorials/odata-01-intro-origins/northwind-v3-service-document.png b/tutorials/odata-01-intro-origins/northwind-v3-service-document.png deleted file mode 100644 index e223163668..0000000000 Binary files a/tutorials/odata-01-intro-origins/northwind-v3-service-document.png and /dev/null differ diff --git a/tutorials/odata-01-intro-origins/oasis-odata-services.png b/tutorials/odata-01-intro-origins/oasis-odata-services.png deleted file mode 100644 index 53d67a198c..0000000000 Binary files a/tutorials/odata-01-intro-origins/oasis-odata-services.png and /dev/null differ diff --git 
a/tutorials/odata-01-intro-origins/odata-01-intro-origins.md b/tutorials/odata-01-intro-origins/odata-01-intro-origins.md deleted file mode 100644 index ba36a22458..0000000000 --- a/tutorials/odata-01-intro-origins/odata-01-intro-origins.md +++ /dev/null @@ -1,267 +0,0 @@ ---- -parser: v2 -author_name: DJ Adams -author_profile: https://github.com/qmacro -auto_validation: false -primary_tag: software-product>sap-business-technology-platform -tags: [ software-product>sap-business-technology-platform, topic>cloud, programming-tool>odata, tutorial>beginner ] -time: 15 ---- - -# Learn about OData Fundamentals - Discover OData's origins and learn about the fundamentals of OData by exploring a public OData service. - -## You will learn - - Where OData came from and why it's designed the way it is - - What the standard OData operations are and how they relate to HTTP - - What the public Northwind OData service has to offer - - What OData service documents and metadata documents describe - - The basics of OData entity types, sets and relationships - -## Intro -OData is an open standard that is both a data format and a protocol for consuming and manipulating data in a uniform way. It's ISO/IEC approved and managed by the [OASIS organization](https://www.oasis-open.org/). - -OData has its origins in the world of weblogs and syndication, but now serves to power a great deal of the API and integration activities in typical SAP enterprise environments. This tutorial will help you understand OData from the ground up. By looking briefly at RSS and Atom, precursors of OData in some ways, you'll understand and feel more comfortable with OData and its mechanisms. - -> This tutorial is based upon OData versions 2 and 3. With the advent of OData version 4, there are some differences, but none significant enough to distract from the purpose of this particular tutorial which is to give a simple overview of OData and its origins. 
- ---- - -### Examine RSS, an ancestor of OData - - -You can understand OData as being the combination of two essential parts. The first is the format, the second is the protocol. The format defines how data is described, how it is serialized. The protocol defines how that data is manipulated. - -The origin of OData's format comes from the world of weblogs, blogging and syndication. The Rich Site Summary (RSS) format was defined to describe a blog and the posts available in it, typically with the newest posts first, but in XML format for machine consumption. It can also describe a set of posts collected in another context, for example all posts tagged with a certain value. - -> RSS is also known as "RDF Site Summary" or "Really Simple Syndication". - -Let's look at an example of RSS. The National Aeronautics and Space Administration (NASA) maintains many RSS feeds, and you can see a list of them on the [NASA RSS Feeds](https://www.nasa.gov/content/nasa-rss-feeds) page. Go there now and select the [Breaking News](https://www.nasa.gov/rss/dyn/breaking_news.rss) feed which is at this URL: - - - -The resulting RSS content of this resource should look something like this (reduced here for brevity): - -```xml - - - - NASA Breaking News - A RSS news feed containing the latest NASA news articles and press releases. - http://www.nasa.gov/ - - en-us - - NASA Administrator to Visit Florida Students, Industry - http://www.nasa.gov/press-release/nasa-administrator-to-visit-florida-students-industry - NASA Administrator Bill Nelson will speak to elementary school students about the future of space exploration Monday, May 9, and tour a lab working on robotic construction technologies Tuesday, May 10, during a trip to Florida. 
- - http://www.nasa.gov/press-release/nasa-administrator-to-visit-florida-students-industry - Fri, 06 May 2022 11:13 EDT - NASA Breaking News - 479411 - - - NASA, ESA Astronauts Safely Return to Earth - http://www.nasa.gov/press-release/nasa-esa-astronauts-safely-return-to-earth - NASA's SpaceX Crew-3 astronauts aboard the Dragon Endurance spacecraft safely splashed down Friday in the Gulf of Mexico off the coast of Florida, completing the agency's third long-duration commercial crew mission to the International Space Station. - - http://www.nasa.gov/press-release/nasa-esa-astronauts-safely-return-to-earth - Fri, 06 May 2022 01:04 EDT - NASA Breaking News - 479399 - - - -``` - -Observe the structure of the XML document. Within the outermost `` element, it describes a `` that has some metadata such as title and description. That `` contains one or more `` elements, each of them representing a breaking news item. - -``` -rss - | - +-- channel - | - +-- item - | - +-- item - | - +-- ... -``` - -Think of this overall structure like a document, with a header and items. - - - -### Examine Atom and the Atom Publishing Protocol - - -Atom is a format very similar to RSS, serving the same purpose, and is properly known as the [Atom Syndication Format](https://tools.ietf.org/html/rfc4287). Some may call Atom a successor to RSS. Unlike RSS, which is just a format specification, Atom also has a related protocol called the [Atom Publishing Protocol](https://tools.ietf.org/html/rfc5023) that enables the manipulation of data stored in Atom-formatted resources. This was useful for weblog authors, who could use tools that spoke the Atom Publishing Protocol to edit and publish posts to remote blogging systems. - -Look at an example of the Atom format in the corresponding Wikipedia entry: - follow the link in the "Contents" box to section 5 "Example of an Atom 1.0 feed". 
Notice that the general structure of the elements is the same as RSS, consisting of a root `feed` element containing `entry` child elements. - -The Atom Publishing Protocol Request For Comments (RFC) document ([RFC5023](https://tools.ietf.org/html/rfc5023)) describes a series of standard operations that can be performed on entries in an Atom feed - in other words, operations on XML representations of blog posts that are in the form of XML `entry` elements. These operations are for listing multiple entries and creating, editing, retrieving & deleting individual entries, and they correspond to the standard HTTP methods (GET, POST, PUT and DELETE). - -The Atom Publishing Protocol specification also details the concept of a service document that describes what collections of entries are available for a given resource. Here's an example of a service document: - -``` - - - - Main Site - - My Blog Entries - - - - Pictures - image/png - image/jpeg - image/gif - - - -``` - -You will see that these fundamental building blocks of Atom are alive and well in the OData protocol today. - - - -### Look at the basics of OData - - -The ideas in Atom formed the foundation of OData. OData is described in full at but at a simple level, OData has: - - - a service document describing the data available in a given OData service - - the concept of entity sets and entities, which are direct parallels of feeds and entries, respectively, in RSS and Atom - - a basic set of operations: Create, Read, Update, Delete and Query (commonly referred to as CRUD+Q) - -There is a publicly available set of OData services maintained by the OASIS organisation, which are known as the **Northwind** services because they offer a data set based on a business scenario that revolves around a company called **Northwind Traders**. This data set contains entities such as customers, products and suppliers. - -Go to the OASIS OData sample service root URL . 
You should see something like this: - -![OASIS OData services page](oasis-odata-services.png) - -Select the link **Browse the Read-Only Northwind Service** and you will see the XML contents of this resource: . This is the service document for the OData service at this location, and the start of it should look like this: - -![Northwind service document](northwind-v3-service-document.png) - -Notice how similar it is to the Atom service document, with a "service" root element and "workspace" elements containing "collection" elements that outline the types of data available. In this case you see that there are `Categories`, `CustomerDemographics`, `Customers`, `Employees` and more available in this service. - - -### Look at an OData metadata document - - -In addition to the service document, an OData service also has a metadata document, a resource which describes the data in the OData service. The metadata document itself is available at a "well-known" URL, which is the service document URL with the value `$metadata` appended. For this Northwind OData service, this means that the metadata document should be available at: - - - -Go to this URL and examine the first part of the metadata, which should look something like this: - -![Northwind metadata document](northwind-metadata.png) - -Notice that the basic structure at the start of the metadata document describes the entity types and their properties. You should see entity type definitions for the `Category` entity type, the `CustomerDemographics` entity type, and more. - -You should also see that the properties within these entities are described, that some are defined as key properties, and also some are defined as navigation properties, that describe a link from one entity type to another. For example, there is a relationship between the `Category` entity type and the `Products` entity type by means of the "Products" navigation property in the definition of the `Category` entity type. 
- -If you're interested, you can scroll through the metadata document to the `Association` definitions to find more details about this relationship identified by the ID `FK_Products_Categories`. You will find the definition of an association that looks like this: - -![Definition of an association](association-definition.png) - -and the definition of an association set that looks like this: - -![Definition of an association set](associationset-definition.png) - - - -### View the products data in the OData service - - -In the previous step you examined entity types. These are detailed descriptions of entities available in the OData service. The entities themselves are available in so-called entity sets. The relationship between entities and entity sets with OData is the direct equivalent of the relationship between entries and feeds in RSS and Atom. In fact, you'll see that `entry` and `feed` elements live on in the OData format. - -Entity sets have their own definitions in the metadata document, described by `EntitySet` elements within the `EntityContainer` element. Scroll down to find, or search for `EntityContainer`, and you will see within it a definition of all the entity sets available in this OData service. It will look something like this: - -![entity container definition](entitycontainer.png) - -Notice there is an entity set "Products", that is a set of entities of type "Product". Search higher up in the metadata document for the entity type "Product", and examine the definition, which should look something like this: - -![product entity definition](product-entity.png) - -You can navigate directly to an entity set by appending its name onto the end of the service document URL. Do that now for the Products entity set, like this: - - - -You will see the XML representation of the Products entity set. 
Unless you already have a browser feature to display XML prettily, it will look something like this: - -![raw products entity set XML](products-entityset-raw.png) - -It's not easy to read like this, but you should be still able to discern, even in this rendering, features with which you're now familiar. Notice the XML `feed` element is the root element, representing a collection of things. Notice also the first `entry` element, representing the start of the first product record in this collection. - - - -### Install a Chrome extension for XML rendering - - -The Chrome browser is recommended here, as it has a good choice of extensions that can make life easier. There are extensions for Chrome to render XML in a more human-friendly way. One of these extensions is [XML Tree](https://chrome.google.com/webstore/detail/xml-tree/gbammbheopgpmaagmckhpjbfgdfkpadb?hl=en). There are others, but this one will do. Install this in your Chrome browser by following the instructions on the extension page and then reload the [Products entity set resource](https://services.odata.org/V3/Northwind/Northwind.svc/Products). It should now look something like this: - -![rendered products entity set XML](products-entityset-rendered.png) - -Much easier to read, and clearly visible is the structure and relationship described by the `feed` and `entry` elements. It's now also easier to see the actual product data - in this screenshot there is the `Chai` product, with 39 units in stock. - - -### Explore the navigation properties from a product - - -In the screenshot in the previous step, notice the `link` XML elements, in particular the ones with the title attribute values `Category`, `Order_Details` and `Supplier`. 
Notice also the corresponding values of their type attributes: `entry`, `feed` and `entry` respectively: - -![links from a product entry](entry-links.png) - -Re-examine the [metadata document](https://services.odata.org/V3/Northwind/Northwind.svc/$metadata) to work out what these might be. Look for the Product entity type definition, which (with the new XML Tree extension) will look something like this: - -![product entity type definition](product-entity-type.png) - -Look at the three navigation properties defined. They describe relationships between the `Product` entity type and the `Category`, `Order_Details` and `Supplier` entity types. - -![relationship diagram](relationship-diagram.png) - -The relationship to the `Category` entity type is described with the ID `NorthwindModel.FK_Products_Categories`, with the `To_Role` attribute value being `Categories`. Search elsewhere in the metadata document for `FK_Products_Categories` to find the `Association` definition: - -![products to categories association](products-categories-association.png) - -Notice that the value of the `Multiplicity` attribute for the `Categories` role is defined as "0..1". This means that there can be either zero or one categories for a product. This is why when we follow the navigation property from a `Product` entity type to a `Category` entity type (see the screenshot at the start of this step) the type of the `link` element is `entry`, not `feed`. - -Follow the same path for the relationship to the `OrderDetails` navigation property described with the `To_Role` attribute value of `Order_Details`, and you will find, via the relationship `FK_Order_Details_Products`, that the `Association` definition looks like this: - -![product to order details association](products-orderdetails-association.png) - -In this case, the value of the Multiplicity attribute described for this relationship is `*`. This means that there can be zero, one or more order details for a product. 
This is why when we follow this navigation property the type of the `link` element is `feed`, rather than `entity`. - - -### Retrieve a specific product - - -The URL shows the `Products` entity set, a feed of individual entries, each one representing a product. In each product `entry` element there is a child `id` element with the unique URL for that particular product, like in this example: - -![id of product entry](product-entry-id.png) - -Specify that ID in the browser address bar, by adding `(1)` to the end of the existing URL: - - - -Note that the resource returned is the entry for that specific product. - - - -### Retrieve order details for a specific product - - -To see how the navigation properties work, go from the individual property entry in the previous step to a list of the related order details. Remembering the navigation property concerned, `Order_Details`, add it to the end of the existing URL in the address bar to navigate to this URL: . - -You should see that the resulting resource is a feed, a collection of entries representing the orders relating to the product specified. - -Finally, use the OData system query option $count to retrieve the number of order details, rather than the order details themselves. 
Append `$count` onto the end of the existing URL like this: - - - - diff --git a/tutorials/odata-01-intro-origins/product-entity-type.png b/tutorials/odata-01-intro-origins/product-entity-type.png deleted file mode 100644 index 12d987fa15..0000000000 Binary files a/tutorials/odata-01-intro-origins/product-entity-type.png and /dev/null differ diff --git a/tutorials/odata-01-intro-origins/product-entity.png b/tutorials/odata-01-intro-origins/product-entity.png deleted file mode 100644 index 1a5fbf1805..0000000000 Binary files a/tutorials/odata-01-intro-origins/product-entity.png and /dev/null differ diff --git a/tutorials/odata-01-intro-origins/product-entry-id.png b/tutorials/odata-01-intro-origins/product-entry-id.png deleted file mode 100644 index d1f7ab9659..0000000000 Binary files a/tutorials/odata-01-intro-origins/product-entry-id.png and /dev/null differ diff --git a/tutorials/odata-01-intro-origins/products-categories-association.png b/tutorials/odata-01-intro-origins/products-categories-association.png deleted file mode 100644 index a1eaf9ae3c..0000000000 Binary files a/tutorials/odata-01-intro-origins/products-categories-association.png and /dev/null differ diff --git a/tutorials/odata-01-intro-origins/products-entityset-raw.png b/tutorials/odata-01-intro-origins/products-entityset-raw.png deleted file mode 100644 index caa0bbe5cc..0000000000 Binary files a/tutorials/odata-01-intro-origins/products-entityset-raw.png and /dev/null differ diff --git a/tutorials/odata-01-intro-origins/products-entityset-rendered.png b/tutorials/odata-01-intro-origins/products-entityset-rendered.png deleted file mode 100644 index aa7fe67c50..0000000000 Binary files a/tutorials/odata-01-intro-origins/products-entityset-rendered.png and /dev/null differ diff --git a/tutorials/odata-01-intro-origins/products-orderdetails-association.png b/tutorials/odata-01-intro-origins/products-orderdetails-association.png deleted file mode 100644 index cb01c2fbd5..0000000000 Binary files 
a/tutorials/odata-01-intro-origins/products-orderdetails-association.png and /dev/null differ diff --git a/tutorials/odata-01-intro-origins/relationship-diagram.png b/tutorials/odata-01-intro-origins/relationship-diagram.png deleted file mode 100644 index e67b8b636c..0000000000 Binary files a/tutorials/odata-01-intro-origins/relationship-diagram.png and /dev/null differ diff --git a/tutorials/odata-02-exploration-epm/basic-credentials.png b/tutorials/odata-02-exploration-epm/basic-credentials.png deleted file mode 100644 index 62632bcd7e..0000000000 Binary files a/tutorials/odata-02-exploration-epm/basic-credentials.png and /dev/null differ diff --git a/tutorials/odata-02-exploration-epm/categories-in-json.png b/tutorials/odata-02-exploration-epm/categories-in-json.png deleted file mode 100644 index a4115b13e1..0000000000 Binary files a/tutorials/odata-02-exploration-epm/categories-in-json.png and /dev/null differ diff --git a/tutorials/odata-02-exploration-epm/entity-relationships.png b/tutorials/odata-02-exploration-epm/entity-relationships.png deleted file mode 100644 index 2e2476371b..0000000000 Binary files a/tutorials/odata-02-exploration-epm/entity-relationships.png and /dev/null differ diff --git a/tutorials/odata-02-exploration-epm/execute-sicf.png b/tutorials/odata-02-exploration-epm/execute-sicf.png deleted file mode 100644 index 4908db4970..0000000000 Binary files a/tutorials/odata-02-exploration-epm/execute-sicf.png and /dev/null differ diff --git a/tutorials/odata-02-exploration-epm/main-sub-categories.png b/tutorials/odata-02-exploration-epm/main-sub-categories.png deleted file mode 100644 index e900a67303..0000000000 Binary files a/tutorials/odata-02-exploration-epm/main-sub-categories.png and /dev/null differ diff --git a/tutorials/odata-02-exploration-epm/nested-feed.png b/tutorials/odata-02-exploration-epm/nested-feed.png deleted file mode 100644 index 02d0f9f717..0000000000 Binary files a/tutorials/odata-02-exploration-epm/nested-feed.png and 
/dev/null differ diff --git a/tutorials/odata-02-exploration-epm/node-hierarchy.png b/tutorials/odata-02-exploration-epm/node-hierarchy.png deleted file mode 100644 index 54a6823295..0000000000 Binary files a/tutorials/odata-02-exploration-epm/node-hierarchy.png and /dev/null differ diff --git a/tutorials/odata-02-exploration-epm/odata-02-exploration-epm.md b/tutorials/odata-02-exploration-epm/odata-02-exploration-epm.md deleted file mode 100644 index 46b00b5db1..0000000000 --- a/tutorials/odata-02-exploration-epm/odata-02-exploration-epm.md +++ /dev/null @@ -1,299 +0,0 @@ ---- -parser: v2 -author_name: DJ Adams -author_profile: https://github.com/qmacro -auto_validation: true -primary_tag: products>sap-cloud-platform -tags: [ products>sap-cloud-platform, topic>cloud, topic>odata, tutorial>beginner ] -time: 15 ---- - -# Continue Your OData Exploration with EPM - Continue your exploration of OData with the Enterprise Procurement Model (EPM) data set in the Gateway demo system. - -## You will learn - - How to explore an OData service in your browser - - How to use navigation paths - - What the common query options are and how to use them - - How to switch to a JSON output format - -## Intro -The Enterprise Procurement Model (EPM) represents a typical business scenario that is complex enough to have meaning in an enterprise context but still simple enough to use for exploring technologies and techniques at a beginner level. - -EPM exists as data in a set of related tables and views, and there are also various OData services that marshal that data and provide business functionality. The EPM and the related OData services are available in the SAP Gateway demo system, and there is a specific EPM OData service, intended for use in a reference app called "Shop", that will be used in this tutorial. 
---

### Find the EPM OData service

In this step you'll find the EPM OData service by looking for it via the maintenance transaction for the Internet Communication Framework (ICF), to understand how web-based resources in general, and OData services in particular, are managed within an ABAP system.

Log on to the SAP Gateway Demo system via the [Web GUI](https://sapes5.sapdevcenter.com/). If necessary, use the arrow button to make the OK Code field appear, so you can enter transaction codes.

![menu option to show the OK Code field](show-okcode-field.png)

Enter transaction code **`SICF`** into the OK Code field to start the "Define Services" transaction. In the Service Path field enter **`/sap/opu/odata`** and then select the Execute function.

![executing SICF with /sap/opu/odata](execute-sicf.png)

You will be presented with a display of the ICF node hierarchy, filtered to show only those nodes starting with the path `/sap/opu/odata`, which represents the root of the OData services. Feel free to explore the hierarchy of nodes available within the `odata` branch, in particular the `sap` node, where you'll see a number of sub-nodes representing OData services.

![the ICF node hierarchy for /sap/opu/odata](node-hierarchy.png)

Now scroll down to find this OData service node:

`EPM_REF_APPS_SHOP_SRV`

This is the OData service you will explore. It is an EPM-based service for a reference app called "Shop", which explains most of the node's name. The last part, `SRV`, short for "service", is common for OData services served from ABAP systems. This is similar to the convention you may have noticed with the Northwind service in the tutorial [Learn about OData fundamentals](https://developers.sap.com/tutorials/odata-01-intro-origins.html), where the end part of the OData service name was `svc`.
- -Use the information in the node hierarchy that leads down to the `EPM_REF_APPS_SHOP_SRV` node to form the part of the OData service URL that will be relative to the SAP Gateway demo system base URL: - -`https://sapes5.sapdevcenter.com/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV` - -Enter your credentials for the SAP Gateway demo system if prompted. - -![basic authentication](basic-credentials.png) - -The resource returned is the OData service document, showing the collections available, such as `Suppliers`, `MainCategories` and so on. - -![OData service document](service-document.png) - - - - -### Explore entity relationships - - -At a very high level, the entity types and their relationships in this OData service look like this: - -![entity types and their relationships](entity-relationships.png) - -There are corresponding entity sets for each of the entity types. If you want to confirm this for yourself, look at the service's metadata document at this URL to see those relationships (look particularly at the `NavigationProperty`, `Association` and `EntitySet` elements for details): - -`https://sapes5.sapdevcenter.com/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/$metadata` - -You should now explore the `Products` entity set and see how a specific product relates to its supplier. - -First, look at all of the products, using this URL for the `Products` entity set: - -`https://sapes5.sapdevcenter.com/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/Products` - -Now, find the product entry with the ID `HT-1001` and the name "Notebook Basic 17". - -> If there isn't a product entry with this specific ID, you can choose another one - the IDs follow a similar pattern. 
- -Use the entry's ID to navigate directly to that product, as an entity: - -`https://sapes5.sapdevcenter.com/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/Products('HT-1001')` - -Now you're looking at an individual product, in the form of a single entity (`Products('HT-1001')`) rather than all the products via the entire entity set (`Products`). - -> Note the difference between the `Products` entity set resource and the resource for this specific `Product` entity - the former is represented by the root XML element `feed`, and the latter by the root XML element `entry`. - -Next, follow the link from this product to its supplier, using the information in the relevant `link` element: - -![link from product to supplier](product-to-supplier-link.png) - -In other words, specify this URL: - -`https://sapes5.sapdevcenter.com/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/Products('HT-1001')/Supplier` - -You should see data for a single supplier, Becker Berlin, described inside a root XML `entry` element, denoting a single entity: - -![single supplier Becker Berlin](single-supplier.png) - -It is also possible to navigate to individual properties within an entity. Try this now. Select the supplier's web address, by specifying this URL: - -`https://sapes5.sapdevcenter.com/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/Products('HT-1001')/Supplier/WebAddress` - - - - -### Page through the products with $top and $skip - - -OData has system query options `$top` and `$skip` that facilitate paging through large entity sets. - -First, find out how many `Suppliers` there are, using the `$count` system query option: - -`https://sapes5.sapdevcenter.com/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/Suppliers/$count` - -At the time of writing, the number of suppliers in the `Suppliers` entity set for this service is 45. You may find there is a different number, but it doesn't matter. 
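The resource paths used so far compose mechanically from the service root: entity set, entity by key, navigation property, individual property, and `$count`. As a recap, here is a small sketch in Python; the `resource` helper is purely illustrative and not part of any SAP tooling:

```python
# Service root from this tutorial; the helper below is illustrative only.
SERVICE = "https://sapes5.sapdevcenter.com/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV"

def resource(*segments):
    """Join path segments onto the OData service root."""
    return "/".join([SERVICE, *segments])

entity_set = resource("Products")                         # all products (a feed)
entity     = resource("Products('HT-1001')")              # one product (an entry)
related    = resource("Products('HT-1001')", "Supplier")  # navigation property
prop       = resource("Products('HT-1001')", "Supplier", "WebAddress")
count      = resource("Suppliers", "$count")              # plain-text count

print(count)
```

Each value is exactly one of the URLs you entered in the browser in this step.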
- -Request the first 5 suppliers, using `$top`, like this: - -`https://sapes5.sapdevcenter.com/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/Suppliers?$top=5` - -> The system query options like `$top` and `$skip` are part of the query string of the URL which itself is introduced with the `?` symbol. - -Now get the next 5 suppliers, by using `$skip` in conjunction with `$top`: - -`https://sapes5.sapdevcenter.com/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/Suppliers?$top=5&$skip=5` - -> Certain frameworks that process OData, like the [SAP UI5 toolkit](https://ui5.sap.com), use these system query options internally to allow comfortable paging through large data sets in list or table situations. - - -### Have related data included in an entity set request - - -Instead of navigating from an entity to a related entity or entity set using two requests (one for the original entity and then another for the related data), the `$expand` system query option allows for related data to be returned in-line with resources retrieved, in a single request. - -Try this out, by looking at another couple of EPM entities exposed in this OData service - the product categories. There's a `MainCategory` entity type, with a navigation property to a list of entities of type `SubCategory`. Have a quick look at the metadata document to confirm this: - -`https://sapes5.sapdevcenter.com/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/$metadata` - -![main and sub category entity types](main-sub-categories.png) - -Request a list of all the main categories, and ask for their sub categories to be returned in-line in the response, using the `$expand` system query option: - -`https://sapes5.sapdevcenter.com/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/MainCategories?$expand=SubCategories` - -If you look closely at the response, you'll see that there is a `feed` XML element at the root, containing `entry` elements. A typical entity set response. 
However, inside each `entry` element there is a nested `feed` element, itself containing further `entry` elements. - -![nested feeds with $expand](nested-feed.png) - -The `SubCategory` entities related to each `MainCategory` entity are returned in-line, in the response to the request for the `MainCategories` entity set. - - - -### Request responses in a JSON format - - -You may have found looking through the nested XML structures in the previous step quite tedious. XML is human-readable, but not necessarily human-friendly. The OData specification describes an alternative format in JavaScript Object Notation (JSON). This is a more lightweight format and somewhat easier to read, if you have a browser extension that will format JSON for you. - -First, install a JSON formatter extension for your Chrome browser. The `JSONView` extension is a good choice. Go to the [JSONView extension page](https://chrome.google.com/webstore/detail/jsonview/chklaanhfefbnpoihckbnefhakgolnmc?hl=en) and add it to your Chrome browser. - -Now, reload the main and sub category structure from the previous step with this URL: - -`https://sapes5.sapdevcenter.com/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/MainCategories?$expand=SubCategories` - -Next, append the OData system query option `$format=json` to the query string, like this: - -`https://sapes5.sapdevcenter.com/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/MainCategories?$expand=SubCategories&$format=json` - -> Adding more system query options to the query string of an OData URL is just like adding query parameters to any other URL query string - they are concatenated with the '&' symbol. Don't forget the '$' prefix on each of the OData system query options, though! 
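That concatenation of system query options can be sketched with Python's standard library (an assumption for illustration only; the tutorial itself needs nothing beyond the browser). Passing `quote` as the encoder and marking `$` as safe keeps the query string readable; the default encoding would turn each `$` into `%24`, which OData servers also accept:

```python
from urllib.parse import urlencode, quote

options = {
    "$expand": "SubCategories",
    "$format": "json",
}
# quote_via=quote with '$' and ',' marked safe keeps the string readable
query = urlencode(options, safe="$,", quote_via=quote)

url = ("https://sapes5.sapdevcenter.com/sap/opu/odata/sap/"
       "EPM_REF_APPS_SHOP_SRV/MainCategories?" + query)
print(query)  # $expand=SubCategories&$format=json
```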
The response is returned in JSON and, formatted by the `JSONView` extension, is considerably easier to read:

![categories response in JSON](categories-in-json.png)

> While the entity set and entity responses can be returned in JSON format, the service document and metadata document of an OData service, at least a V2 OData service, cannot - they only exist in XML format.

### Reduce the number of properties returned

An OData service may contain a definition of an entity type that has a large number of properties. If the consumer of the service only needs a couple of them, transferring the rest is an unnecessary load on network traffic and can increase response times. The OData system query option `$select` allows you to specify a smaller list of properties that should be returned.

Looking at the metadata document of the OData service, you will see that the `Product` entity type is one that has many properties.

First, take a look at all the properties and some sample values by requesting the first entity in the `Products` entity set with this URL:

`https://sapes5.sapdevcenter.com/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/Products?$top=1&$format=json`

Even in the more lightweight JSON format it's still a lot of data, especially if an entire entity set is requested:

![a single product entity in JSON format](single-product-entity-in-json.png)

Now reduce the number of properties down to just a few - `AverageRating`, `Name` and `StockQuantity` - using the `$select` system query option:

`https://sapes5.sapdevcenter.com/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/Products?$top=1&$format=json&$select=AverageRating,Name,StockQuantity`

You'll get a response that looks something like this - a whole lot smaller!

![product with a reduced number of properties](product-with-few-properties.png)

The `$select` system query option can also be used to specify properties that are part of entities that are returned in-line with `$expand`.
The `Product` entity type has a `Supplier` navigation property as well as a `Reviews` navigation property - these can be returned in-line with `$expand`, and a restricted set of their properties can be specified in `$select` as well. - -Specify this URL, to request the first product, along with the name of its supplier, and also the names of the users who have reviewed that product: - -`https://sapes5.sapdevcenter.com/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/Products?$top=1&$format=json&$expand=Supplier,Reviews&$select=AverageRating,Name,StockQuantity,Supplier/Name,Supplier/FormattedAddress,Reviews/UserDisplayName` - -> Depending on how the data has been modified in this demo system, you may find that the first product sometimes has no reviews. In that case, search for one using combinations of the `$top` and `$skip` that you learned about in a previous step. - -The query string portion of this URL is quite long and getting difficult to read. Broken down into its parts, we have the following: - - - Get the first entity: -`$top=1` - - - Return the response in JSON format -`$format=json` - - - Include in-line the related Supplier and Reviews data as well -`$expand=Supplier,Reviews` - - - Only return these specific properties -`$select=AverageRating,Name,StockQuantity,Supplier/Name,Supplier/FormattedAddress,Reviews/UserDisplayName` - -> Notice the format for specifying properties in related entity types, such as `Supplier/Name` and `Reviews/UserDisplayName`. - -This request brings back exactly what we asked for: - -![review and supplier detail for a product](review-supplier-detail-for-product.png) - - - - -### Reduce the number of entities returned by filtering - - -The `$filter` system query option can be used to filter the entities according to criteria that can be expressed by a broad set of operators. - -Refer back to the properties of the first product in the `Products` entity set, as shown in the screenshot in the previous step. 
Any of the properties here can be used with the `$filter` system query option. - -First, count how many products there are, using `$count`: - -`https://sapes5.sapdevcenter.com/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/Products/$count` - -This may well vary, but at the time of writing, this shows 125. - -Now, use the `$filter` system query option in conjunction with `$count` to find out how many products are in the "Computer Systems" main category: - -`https://sapes5.sapdevcenter.com/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/Products/$count?$filter=MainCategoryId%20eq%20%27Computer%20Systems%27` - -This should return a count value less than the total, earlier. At the time of writing, the value is 34. - - -> If you're wondering about the strange characters in this URL, they're just [URL encoded](https://en.wikipedia.org/wiki/Percent-encoding) versions of the space and single-quote characters, in other words %20 and %27 respectively. You can actually type the original space and single-quote values into the browser address bar like this: `$filter=MainCategoryId eq 'Computer Systems'` and the characters will be encoded automatically. - -You can double check the results by removing the `$count` part from the URL to see that each of the product entities returned really do belong to the 'Computer Systems' main category: - -`https://sapes5.sapdevcenter.com/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/Products/?$filter=MainCategoryId%20eq%20%27Computer%20Systems%27` - -![products in the Computer Systems main category](products-in-computer-systems-main-category.png) - -Operators can be combined. 
Try this, by finding out whether there are any products in the "Software" main category where the stock is low (10 units or fewer), restricting the results to just show the product name and stock information, in JSON format: - -`https://sapes5.sapdevcenter.com/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/Products/?$filter=MainCategoryId%20eq%20%27Software%27%20and%20StockQuantity%20le%2010&$select=StockQuantity,Name&$format=json` - -Again, to break this down, we have the following (before the special characters are URL encoded): - - - Restrict entries to those where the `MainCategoryId` value is "Software" and where the `StockQuantity` value is less than or equal to 10: -`$filter=MainCategoryId eq 'Software' and StockQuantity le 10` - - - Return only the values for the `StockQuantity` and `Name` properties: -`$select=StockQuantity,Name` - - - Return the response in JSON format: -`$format=json` - -You can learn more about the different operators available in the [Filter System Query Option ($filter)](https://www.odata.org/documentation/odata-version-2-0/uri-conventions/#FilterSystemQueryOption) documentation on the OASIS OData website. You'll see that there are functions available for use with `$filter` too, functions such as `substringof` and `startswith`. - - - -### Have entities returned in a certain order - - -The final system query option to examine in this tutorial is `$orderby`, which takes the specification of a list of one or more properties, and optional indicators to specify whether ascending order (the default) or descending order is desired. 
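The percent-encoding seen in the `$filter` URLs above, and needed again for `$orderby` values that contain spaces, can be reproduced with Python's standard `quote` function (a sketch for illustration; the browser performs this encoding for you automatically):

```python
from urllib.parse import quote

# Spaces become %20 and single quotes become %27, exactly as seen
# in the tutorial's filter URLs.
filter_expr  = quote("MainCategoryId eq 'Computer Systems'")
orderby_expr = quote("AverageRating desc")

print(filter_expr)   # MainCategoryId%20eq%20%27Computer%20Systems%27
print(orderby_expr)  # AverageRating%20desc
```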
- -Use the `$orderby` system query option to list the products sorted by average rating, with the most highly rated appearing first, showing the product name, price and the average rating score: - -`https://sapes5.sapdevcenter.com/sap/opu/odata/sap/EPM_REF_APPS_SHOP_SRV/Products?$format=json&$orderby=AverageRating%20desc&$select=Name,Price,CurrencyCode,AverageRating` - -This is the first part of what's returned - looks good: - -![products sorted by average rating](products-sorted-by-average-rating.png) - -There are more system query options, but what you've seen in this tutorial are the main ones. You've been using them primarily in the context of the OData "query" operation, which makes a lot of sense. Some of the system query options are used implicitly in frameworks such as UI5 as stated earlier, but all of them are useful to know to explore, "by hand", an OData service, especially when you intend to use it in building an app. - diff --git a/tutorials/odata-02-exploration-epm/product-properties.png b/tutorials/odata-02-exploration-epm/product-properties.png deleted file mode 100644 index 97036e58a9..0000000000 Binary files a/tutorials/odata-02-exploration-epm/product-properties.png and /dev/null differ diff --git a/tutorials/odata-02-exploration-epm/product-to-supplier-link.png b/tutorials/odata-02-exploration-epm/product-to-supplier-link.png deleted file mode 100644 index 9c1c0e43a4..0000000000 Binary files a/tutorials/odata-02-exploration-epm/product-to-supplier-link.png and /dev/null differ diff --git a/tutorials/odata-02-exploration-epm/product-with-few-properties.png b/tutorials/odata-02-exploration-epm/product-with-few-properties.png deleted file mode 100644 index e2f72e6557..0000000000 Binary files a/tutorials/odata-02-exploration-epm/product-with-few-properties.png and /dev/null differ diff --git a/tutorials/odata-02-exploration-epm/products-in-computer-systems-main-category.png 
b/tutorials/odata-02-exploration-epm/products-in-computer-systems-main-category.png deleted file mode 100644 index 992be45954..0000000000 Binary files a/tutorials/odata-02-exploration-epm/products-in-computer-systems-main-category.png and /dev/null differ diff --git a/tutorials/odata-02-exploration-epm/products-sorted-by-average-rating.png b/tutorials/odata-02-exploration-epm/products-sorted-by-average-rating.png deleted file mode 100644 index f0302b5482..0000000000 Binary files a/tutorials/odata-02-exploration-epm/products-sorted-by-average-rating.png and /dev/null differ diff --git a/tutorials/odata-02-exploration-epm/review-supplier-detail-for-product.png b/tutorials/odata-02-exploration-epm/review-supplier-detail-for-product.png deleted file mode 100644 index f27db03010..0000000000 Binary files a/tutorials/odata-02-exploration-epm/review-supplier-detail-for-product.png and /dev/null differ diff --git a/tutorials/odata-02-exploration-epm/service-document.png b/tutorials/odata-02-exploration-epm/service-document.png deleted file mode 100644 index a132d636c8..0000000000 Binary files a/tutorials/odata-02-exploration-epm/service-document.png and /dev/null differ diff --git a/tutorials/odata-02-exploration-epm/show-okcode-field.png b/tutorials/odata-02-exploration-epm/show-okcode-field.png deleted file mode 100644 index 277a355dc3..0000000000 Binary files a/tutorials/odata-02-exploration-epm/show-okcode-field.png and /dev/null differ diff --git a/tutorials/odata-02-exploration-epm/single-product-entity-in-json.png b/tutorials/odata-02-exploration-epm/single-product-entity-in-json.png deleted file mode 100644 index 9aad895bef..0000000000 Binary files a/tutorials/odata-02-exploration-epm/single-product-entity-in-json.png and /dev/null differ diff --git a/tutorials/odata-02-exploration-epm/single-supplier.png b/tutorials/odata-02-exploration-epm/single-supplier.png deleted file mode 100644 index 7500b6d8e8..0000000000 Binary files 
a/tutorials/odata-02-exploration-epm/single-supplier.png and /dev/null differ diff --git a/tutorials/odata-05-data-model-service/odata-05-data-model-service.md b/tutorials/odata-05-data-model-service/odata-05-data-model-service.md deleted file mode 100644 index b40eae7b9b..0000000000 --- a/tutorials/odata-05-data-model-service/odata-05-data-model-service.md +++ /dev/null @@ -1,424 +0,0 @@ ---- -parser: v2 -author_name: DJ Adams -author_profile: https://github.com/qmacro -auto_validation: true -primary_tag: software-product-function>sap-cloud-application-programming-model -tags: [products>sap-business-application-studio, programming-tool>odata, tutorial>beginner ] -time: 20 ---- - -# Define a Simple Data Model and OData Service with CDS - Use Core Data Services (CDS) in the context of the SAP Cloud Application Programming Model (CAP) to quickly set up your own simple OData service. - -## Prerequisites - - **Tutorials:** [Create a Dev Space for Business Applications](appstudio-devspace-create) - -## You will learn -- How to use CDS to model entities and services -- How to seed your OData service with test data -- What CAP can do for you in terms of generating and servicing an OData service - -## Intro -[CDS](https://help.sap.com/viewer/65de2977205c403bbc107264b8eccf4b/Cloud/en-US/855e00bd559742a3b8276fbed4af1008.html) powers a significant part of [CAP](https://cap.cloud.sap). CDS has many features, and in this tutorial you'll encounter a couple of fundamental ones - the ability to declaratively define your data model, concentrating on the domain at hand, and to then be able to expose parts (or all) of that model in a service. You'll also learn how much CAP can do for you with respect to creating full CRUD+Q\* OData services almost from nothing. It's hard to remember how difficult it was to do that before the advent of CAP. - -\*CRUD+Q is a common shorthand for referring to a fully formed OData service that sports Create, Read, Update, Delete, and Query operations. 
You'll use the SAP Business Application Studio (App Studio), with a dev space for business applications that you should already have set up from the prerequisite tutorial.

The model and service you'll create are deliberately very simple, based on a small subset of something you have seen before if you have followed previous OData tutorials (in particular the [Learn about OData Fundamentals](odata-01-intro-origins) tutorial) - the product information from the Northwind service.

---

### Remind yourself of the Northwind product data

In the tutorial [Learn about OData Fundamentals](odata-01-intro-origins), you familiarized yourself with some of the structure and content of the [Northwind OData service](https://services.odata.org/V4/Northwind/Northwind.svc/). In this tutorial, you'll create your own simple OData service based on information in the Products entity set, so now's a good time to look at that product data.

Jump to the Products entity set in the V4 version of the OData service, at this URL: `https://services.odata.org/V4/Northwind/Northwind.svc/Products`.

In a [previous tutorial](odata-01-intro-origins), we used the V3 version at `https://services.odata.org/V3/Northwind/Northwind.svc/Products`. This resource has a default representation of XML; more specifically, the value of the `Content-Type` header returned with this resource is `application/atom+xml;type=feed;charset=utf-8` (you can check this by using your browser's developer tools to inspect the HTTP response headers).

In this tutorial, we're using the V4 version. After all, OData version 4 has been around as an OASIS standard [since 2014](https://raw.githubusercontent.com/qmacro/odata-specs/master/overview.md). Notice that the default representation of most OData V4 resources here is JSON; more specifically, the value of the `Content-Type` header in the response is `application/json;odata.metadata=minimal;odata.streaming=true;IEEE754Compatible=false;charset=utf-8`.
This JSON representation is also used for OData service document resources in V4, whereas in earlier versions it was XML.

The representation of the `Products` entity set should look something like this:

```JSON
{
  "@odata.context": "https://services.odata.org/V4/Northwind/Northwind.svc/$metadata#Products",
  "value": [
    {
      "@odata.etag": "W/\"1,1\"",
      "ProductID": 1,
      "ProductName": "Chai",
      "SupplierID": 1,
      "CategoryID": 1,
      "QuantityPerUnit": "10 boxes x 20 bags",
      "UnitPrice": 18.0000,
      "UnitsInStock": 39,
      "UnitsOnOrder": 0,
      "ReorderLevel": 10,
      "Discontinued": false
    },
    {
      "@odata.etag": "W/\"1,1\"",
      "ProductID": 2,
      "ProductName": "Chang",
      "SupplierID": 1,
      "CategoryID": 1,
      "QuantityPerUnit": "24 - 12 oz bottles",
      "UnitPrice": 19.0000,
      "UnitsInStock": 17,
      "UnitsOnOrder": 40,
      "ReorderLevel": 25,
      "Discontinued": false
    }
  ]
}
```

This of course is just the data; to understand what you're looking at, look now at the heart of the definition of this entity set, in the OData service's metadata document at `https://services.odata.org/V4/Northwind/Northwind.svc/$metadata`.

Ignoring the navigation properties of the `Product` entity type for now, we see this set of property definitions:

```XML
<EntityType Name="Product">
  <Key>
    <PropertyRef Name="ProductID"/>
  </Key>
  <Property Name="ProductID" Type="Edm.Int32" Nullable="false"/>
  <Property Name="ProductName" Type="Edm.String" Nullable="false" MaxLength="40"/>
  <Property Name="SupplierID" Type="Edm.Int32"/>
  <Property Name="CategoryID" Type="Edm.Int32"/>
  <Property Name="QuantityPerUnit" Type="Edm.String" MaxLength="20"/>
  <Property Name="UnitPrice" Type="Edm.Decimal" Precision="19" Scale="4"/>
  <Property Name="UnitsInStock" Type="Edm.Int16"/>
  <Property Name="UnitsOnOrder" Type="Edm.Int16"/>
  <Property Name="ReorderLevel" Type="Edm.Int16"/>
  <Property Name="Discontinued" Type="Edm.Boolean" Nullable="false"/>
</EntityType>
```

So, we know that the `ProductID` property is the only key field, and the types of the other properties make sense to us too.

To find the right balance between realism and efficiency (no-one wants to type in a large amount of definition or data), the first entity definition in the OData service you'll create will be a cut-down version of this `Product` entity type, encompassing the following properties:

- `ProductID`
- `ProductName`
- `UnitsInStock`

Further entities will be cut-down versions of entities in the Northwind OData service too; this suggests that a cut-down name for your OData service is appropriate as well, so we'll go from `Northwind` to `Northbreeze` (see what we did there?).
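The projection from the full Northwind `Product` shape down to those three properties is easy to picture in code. A sketch in plain Python, using values from the sample payload shown earlier (illustrative only; nothing here is part of CAP):

```python
# Two products from the sample Northwind payload, trimmed for brevity
northwind_products = [
    {"ProductID": 1, "ProductName": "Chai", "UnitsInStock": 39,
     "UnitPrice": 18.0, "Discontinued": False},
    {"ProductID": 2, "ProductName": "Chang", "UnitsInStock": 17,
     "UnitPrice": 19.0, "Discontinued": False},
]

# The cut-down property set the Northbreeze model will keep
KEEP = ("ProductID", "ProductName", "UnitsInStock")

northbreeze_products = [{k: p[k] for k in KEEP} for p in northwind_products]
print(northbreeze_products[0])
# → {'ProductID': 1, 'ProductName': 'Chai', 'UnitsInStock': 39}
```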
- - - -### Start a new CAP project - - -To start creating your `Northbreeze` OData service, start by creating a new CAP project in your App Studio dev space using the "New project from Template" wizard available on the Get Started page (if you don't have the Get Started page open, you can recall it with menu path **Help** **→** **Get Started**). - -In the "Select Template and Target Location" step, select the **CAP Project** template and then use the **Start** button to continue. - -![Select the CAP Project template](select-cap-project-template.png) - -In the "CAP Project Details" step, enter `northbreeze` for the project name, ensure that "Node.js" is selected for the runtime, and leave all the other options as they are. Then select the **Finish** button to complete, and wait for the generated project to appear. - -> It's better if you use the all-lowercase version of the name (`northbreeze`) as the name is used as the name of the NPM package that you're (indirectly) creating, and convention there dictates lowercase only. - -Make yourself acquainted with the content of the generated project, by looking through some key files and directories in the App Studio's Explorer. Among these, you should see three directories named `app/`, `db/`, and `srv/`. To understand what these are, and how they relate to what you're going to do in the rest of this tutorial, think of them in a vertical structure like this: - -``` -+------+ -| app/ | -+------+ -| srv/ | -+------+ -| db/ | -+------+ -``` - -At a high level this represents a typical full stack application, with the frontend represented by `app/`, the business logic and services represented by `srv/`, and the persistence layer represented by `db/`. CAP supports work in all of these layers. - -In building your OData service, however, you won't need to make use of the `app/` layer. This is because an OData service is just that - a *service*. 
You'll be focusing your efforts at the persistence layer (in the `db/` directory) and the business logic layer (in the `srv/` directory). - -While ultimately you'll have created an OData service, which is "flat", providing access to entity data through a uniform and well understood interface, it's best if you think about that service as being the combination of two things -- schema and service -- at two different levels, thus: - -``` -+------+ -| app/ | -+------+ -| srv/ | <-- service: combination(s) of entities focused on consumption -+------+ -| db/ | <-- schema: basic level entity definitions -+------+ -``` - -The OData service you'll be creating is simple and has a one-to-one mapping between schema and service; however, note that CAP's focus on and strong support for [domain modeling](https://cap.cloud.sap/docs/about/#domain-modeling) allows for flexible relationships to be constructed between these two layers, to fit your service consumption needs precisely. - - - -### Define the schema layer - - -The `db/` directory is where entities are defined, and relationships made. Think of it as the overall schema, independent of any intended consumption. - -To keep things as simple as possible, you're going to define a single entity, with only a few properties, and (at least in this tutorial) no relationships to further entities. - -Use the context menu on the `db/` node in the Explorer view to create a new file; give it the name `schema.cds`. - -It's time to define your entity, reflecting a simplified version of the `Product` entity type in the Northwind service definition. Here's the entire content that should go into `schema.cds`. - -> Try to resist the temptation to copy/paste this content; instead, type it in and get to know the rich support for CAP that the App Studio sports, via the SAP CDS Language Support extension. 
When entering it, you don't have to worry about formatting either - the extension will do that for you too (just use the context menu or the Command Palette to invoke the "Format Document" facility).

```CDS
namespace northbreeze;

entity Products {
  key ProductID : Integer;
  ProductName : String;
  UnitsInStock : Integer;
}
```

> Note that while in the Northwind service definition the entity type followed the "singular" naming approach ("Product"), the convention in CAP is to use the "plural" naming approach for entity definitions (i.e. "Products").

Is this all that's needed for an OData service? Let's find out.

Open a terminal (menu path **Terminal** **→** **New Terminal**); that should give you a Bash shell and put you automatically in the root directory of the project you have open in your workspace, that is, `northbreeze`. You'll see a prompt, which consists of your generic username in your App Studio's dev space, the most significant part of the name of the directory you're in (enter the command `pwd` to see the full name, if you're curious) and the traditional shell prompt character `$`.

```Shell/Bash
user: northbreeze $
```

App Studio dev spaces that have been created using the "SAP Cloud Business Application" type (as you'll have done in the prerequisite tutorial) automatically have the CAP development kit installed (also known as the CDS DK, or "Development Kit", from the name of the NPM package `@sap/cds-dk`), including the main command line tool `cds`. One of the features in `cds`'s arsenal is the `watch` command, which starts the CAP server (the runtime), serves the services it finds, and restarts the server when changes are detected. It will also automatically use an in-memory persistence layer provided by SQLite, which is enough for what we need here in our explorations.
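The in-memory persistence that `cds watch` sets up for the schema above is roughly equivalent to the following sketch (an approximation: CAP maps the namespace separator `.` to `_` in table names, but the exact column types it generates may differ):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Approximately what CAP deploys for the entity northbreeze.Products
con.execute("""
    CREATE TABLE northbreeze_Products (
        ProductID    INTEGER PRIMARY KEY,
        ProductName  TEXT,
        UnitsInStock INTEGER
    )
""")
con.execute("INSERT INTO northbreeze_Products VALUES (?, ?, ?)",
            (1, "Chai", 39))
row = con.execute(
    "SELECT ProductName, UnitsInStock FROM northbreeze_Products"
).fetchone()
print(row)  # ('Chai', 39)
```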
- -At the prompt, enter `cds watch`, and observe the output, which should look something like this: - -```Shell/Bash -user: northbreeze $ cds watch - -cds serve all --with-mocks --in-memory? -live reload enabled for browsers - - ___________________________ - -[cds] - loaded model from 1 file(s): - - db/schema.cds - -[cds] - connect using bindings from: { registry: '~/.cds-services.json' } -[cds] - connect to db > sqlite { url: ':memory:' } -/> successfully deployed to in-memory database. - - -[cds] - server listening on { url: 'http://localhost:4004' } -[cds] - launched at 10/23/2024, 3:31:06 PM, version: 8.3.1, in: 363.072ms -[cds] - [ terminate with ^C ] - - - No service definitions found in loaded models. - Waiting for some to arrive... -``` - -This tells us an awful lot already; most importantly for our question, however, is the line "No service definitions found in loaded models - Waiting for some to arrive...". - -You have defined an entity, in a namespace, but not exposed it yet in a service definition. Moreover, if you navigate to the port 4004 that App Studio will have prompted you to connect to, you'll see a welcome page describing what is being served, and the list of service endpoints is currently empty. - -So, creating a service definition is next. You can leave the `cds watch` process running, and it will notice and react to anything you subsequently add or change. - - -### Define the service layer - - -In this step, you'll create the simplest service definition exposing the entire `Products` entity (all three elements) in a service called `Main`. - -Create a new file in the `srv/` directory, calling it `service.cds`. 
In the same fashion as in the previous step, type (rather than copy/paste) the following into it, exploring what features such as completion help are offered by the language support for CDS in the editor: - -```CDS -using northbreeze from '../db/schema'; - -service Main { - entity Products as projection on northbreeze.Products; -} -``` - -You should see some new output from the `cds watch` process in the terminal, that looks like this: - -``` -[cds] - loaded model from 2 file(s): - - srv/service.cds - db/schema.cds - -[cds] - connect using bindings from: { registry: '~/.cds-services.json' } -[cds] - connect to db > sqlite { url: ':memory:' } -/> successfully deployed to in-memory database. - -[cds] - using auth strategy { - kind: 'mocked', - impl: '../../../../managed-content/globals/pnpm/5/.pnpm/@sap+cds@8.3.1_express@4.21.1/node_modules/@sap/cds/lib/auth/basic-auth' -} - -[cds] - using new OData adapter -[cds] - serving Main { path: '/odata/v4/main' } - -[cds] - server listening on { url: 'http://localhost:4004' } -``` - -This looks promising, in particular the message about the Main service being served. - -If you have still got a browser tab open and looking at the service (or lack thereof), jump to that tab and hit refresh. If you haven't got such a browser tab open, use the Command Palette (call it up with menu path **View** **→** **Find Command...**) to invoke the "Ports: Preview" command, which should give you a link to connections to ports that are currently being exposed. It should look something like this: - -![ports preview](ports-preview.png) - -Make the selection, and you should see a welcome page, this time listing a service endpoint, similar to this: - -![service endpoint](service-endpoint.png) - -This tells us that you have your very own OData service, being served by the CAP runtime. Congratulations! - -Let's pause for a moment to understand what we're seeing here. 
First, there are the two well-known URLs that are standard for any OData service - the service document, represented by the `/odata/v4/main` hyperlink, and the metadata document, represented by the `$metadata` hyperlink. Note also that these two components are joined with slashes like this: - -``` -/odata/v4/main/$metadata -``` - -This denotes the relative path info for the URL of your OData service. In other words, independent of what host is to serve this service, `/odata/v4/main/` is the actual relative path for the service document. - -Explore the service document and the metadata document now, by following the hyperlinks. There are some high-level observations that are worth making here: - -- The service document faithfully reflects the fact that there is a single entity set `Products` available. -- The metadata document reflects exactly the details that you defined for the entity at the schema layer; this is because the service exposure (in `srv/service.cds`) was the simplest thing that could possibly work, that is, a "pass through" (aka "naked") service, where no properties were filtered out, or added from elsewhere. -- The types in the entity definition (`Integer`, `String`) have been translated into OData types (`Edm.Int32`, `Edm.String`) in the `Property` elements within the `EntityType` element in the metadata document. -- The `ProductID` property has been correctly marked as being a key property. -- An entity set has been defined automatically for the `Products` entity definition, as can be seen within the `EntityContainer` element. - -Note also that: - -- In the root element (`Edmx`) there's a `Version` attribute that declares that the OData version is 4.0. - - -Don't forget to leave the `cds watch` running, ready for the next step! - -### Add data - - -You have got a fully functioning OData service, but it's not as exciting as it could be - there's no data in it yet! 
If you had selected the `Products` hyperlink on the welcome page in the previous step, you'd have seen something like this: - -```JSON -{ - "@odata.context": "$metadata#Products", - "value": [] -} -``` - -In this penultimate step, you're going to seed your fledgling OData service with data. This will allow you to better kick the tires and discover that yes, this really is a fully functional CRUD+Q OData service that you have created. - -Add a new directory below the `db/` directory, called `data/`, and in there, create a comma-separated value (CSV) file. Given the right names, CSV files in this directory are automatically read, and the data within imported, into the corresponding entities, and that data can then be served in the OData service. - -In order for this to work, the names of the CSV files are important, and are based on a combination of namespace and entity name, separated by a dash. - -So, create a file in the new `db/data/` directory called `northbreeze-Products.csv` and add the following records to it: - -```CSV -ProductID,ProductName,UnitsInStock -1,Chai,39 -2,Chang,17 -3,Aniseed Syrup,13 -``` - -As soon as the contents of this file are saved, you should notice the `cds watch` restart the CAP server, but there's also a new line in the output, that should look something like this: - -```Shell/Bash -> init from db/data/northbreeze-Products.csv -``` - -Great, your seed data is now part of your OData service. - -Jump back to the service (via the welcome page in the previous step) and reselect the `Products` entity set resource. 
Rather than an empty array for the `value` property, you should now see something like this: - -```JSON -{ - "@odata.context": "$metadata#Products", - "value": [ - { - "ProductID": 1, - "ProductName": "Chai", - "UnitsInStock": 39 - }, - { - "ProductID": 2, - "ProductName": "Chang", - "UnitsInStock": 17 - }, - { - "ProductID": 3, - "ProductName": "Aniseed Syrup", - "UnitsInStock": 13 - } - ] -} -``` - -It's now time to finish this tutorial with a few OData operations. - - -### Try some OData operations - - -There's plenty to explore now you have some data in your simple OData service. Try your own queries, or experiment with some of these. Each time, manipulate the path info and query string as appropriate, based on the URL in your browser. Remember that for the purposes of this tutorial, the URL can be thought of as being made up of three parts. If we take an example OData URL from App Studio, it might look something like this: - -``` -https://port4004-workspaces-ws-czcx7.us10.trial.applicationstudio.cloud.sap/odata/v4/main/Products?$top=11 -``` - -- The first part is the fully qualified hostname, all the way up to the first single slash. -- The second part is the path info, all the way up to the question mark. -- The third part is the query string, introduced by the question mark and made up of one or more `key=value` pairs, with URL-encoded values where appropriate, and joined together with & characters. - -(There is another common part that we see in some URLs, and that's the document fragment identifier, also known as the hash path, introduced with the # character, but this part is not relevant for OData URL construction). 
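If you want to convince yourself of this three-part anatomy, here's a small illustration, using nothing but Python's standard library (nothing CAP- or OData-specific), that takes the example URL apart, and also shows where the URL-encoded values in a query string come from:

```python
from urllib.parse import urlsplit, parse_qs, quote

url = ("https://port4004-workspaces-ws-czcx7.us10.trial"
       ".applicationstudio.cloud.sap/odata/v4/main/Products?$top=11")

parts = urlsplit(url)
print(parts.netloc)           # part 1: the fully qualified hostname
print(parts.path)             # part 2: the path info, /odata/v4/main/Products
print(parse_qs(parts.query))  # part 3: the query string, {'$top': ['11']}

# Values in the query string are URL-encoded where appropriate, for
# example the spaces in an OData filter expression:
print(quote("UnitsInStock gt 15"))  # UnitsInStock%20gt%2015
```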
**Return just the first product**
`/odata/v4/main/Products?$top=1`

**Return a count of how many products are available**
`/odata/v4/main/Products/$count`

**Return a single product**
`/odata/v4/main/Products(2)`

**Return only those "highly stocked" products**
`/odata/v4/main/Products?$filter=UnitsInStock%20gt%2015`

Your OData service isn't read-only either - it supports all operations (Create, Read, Update, Delete, and Query) out of the box, with no effort on your part at all.

Try out some write operations now, by opening up a second terminal and using the command line user agent `curl` that's available automatically in all App Studio dev spaces. Here are a few for you to try; in each example, you'll see the prompt (`user: northbreeze $`), the actual invocation (with `curl`) and an indication of the expected output.

**Add a further product**

```Shell/Bash
user: northbreeze $ curl -H "Content-Type: application/json" -d '{"ProductID":77,"ProductName":"Original Frankfurter grüne Soße","UnitsInStock":32}' http://localhost:4004/odata/v4/main/Products
{"@odata.context":"$metadata#Products/$entity","ProductID":77,"ProductName":"Original Frankfurter grüne Soße","UnitsInStock":32}
```

Once you have added this new product, you can check its existence by going back to the query of the entire entity set:

`/odata/v4/main/Products`

**Reduce the number of units in stock for the Chai product**

```Shell/Bash
user: northbreeze $ curl -H "Content-Type: application/json" -d '{"UnitsInStock":1}' -X PATCH "http://localhost:4004/odata/v4/main/Products(1)"
{"@odata.context":"$metadata#Products/$entity","ProductID":1,"ProductName":"Chai","UnitsInStock":1}
```

**Remove the recently added product**

```Shell/Bash
user: northbreeze $ curl -X DELETE "http://localhost:4004/odata/v4/main/Products(77)"
```

At this point, you have exercised your OData service and tried out all five OData operation types.

Well done!
- diff --git a/tutorials/odata-05-data-model-service/ports-preview.png b/tutorials/odata-05-data-model-service/ports-preview.png deleted file mode 100644 index 49ae201915..0000000000 Binary files a/tutorials/odata-05-data-model-service/ports-preview.png and /dev/null differ diff --git a/tutorials/odata-05-data-model-service/select-cap-project-template.png b/tutorials/odata-05-data-model-service/select-cap-project-template.png deleted file mode 100644 index 46541878c0..0000000000 Binary files a/tutorials/odata-05-data-model-service/select-cap-project-template.png and /dev/null differ diff --git a/tutorials/odata-05-data-model-service/service-endpoint.png b/tutorials/odata-05-data-model-service/service-endpoint.png deleted file mode 100644 index c089ce16a1..0000000000 Binary files a/tutorials/odata-05-data-model-service/service-endpoint.png and /dev/null differ diff --git a/tutorials/odata-06-extend-odata-service/main-hyperlink.png b/tutorials/odata-06-extend-odata-service/main-hyperlink.png deleted file mode 100644 index e955c47813..0000000000 Binary files a/tutorials/odata-06-extend-odata-service/main-hyperlink.png and /dev/null differ diff --git a/tutorials/odata-06-extend-odata-service/northbreeze-Products.csv b/tutorials/odata-06-extend-odata-service/northbreeze-Products.csv deleted file mode 100644 index ea876efbc7..0000000000 --- a/tutorials/odata-06-extend-odata-service/northbreeze-Products.csv +++ /dev/null @@ -1,78 +0,0 @@ -ProductID,Category_CategoryID,ProductName,UnitsInStock -1,1,"Chai",39 -2,1,"Chang",17 -3,2,"Aniseed Syrup",13 -4,2,"Chef Anton's Cajun Seasoning",53 -5,2,"Chef Anton's Gumbo Mix",0 -6,2,"Grandma's Boysenberry Spread",120 -7,7,"Uncle Bob's Organic Dried Pears",15 -8,2,"Northwoods Cranberry Sauce",6 -9,6,"Mishi Kobe Niku",29 -10,8,"Ikura",31 -11,4,"Queso Cabrales",22 -12,4,"Queso Manchego La Pastora",86 -13,8,"Konbu",24 -14,7,"Tofu",35 -15,2,"Genen Shouyu",39 -16,3,"Pavlova",29 -17,6,"Alice Mutton",0 -18,8,"Carnarvon Tigers",42 
-19,3,"Teatime Chocolate Biscuits",25 -20,3,"Sir Rodney's Marmalade",40 -21,3,"Sir Rodney's Scones",3 -22,5,"Gustaf's Knäckebröd",104 -23,5,"Tunnbröd",61 -24,1,"Guaraná Fantástica",20 -25,3,"NuNuCa Nuß-Nougat-Creme",76 -26,3,"Gumbär Gummibärchen",15 -27,3,"Schoggi Schokolade",49 -28,7,"Rössle Sauerkraut",26 -29,6,"Thüringer Rostbratwurst",0 -30,8,"Nord-Ost Matjeshering",10 -31,4,"Gorgonzola Telino",0 -32,4,"Mascarpone Fabioli",9 -33,4,"Geitost",112 -34,1,"Sasquatch Ale",111 -35,1,"Steeleye Stout",20 -36,8,"Inlagd Sill",112 -37,8,"Gravad lax",11 -38,1,"Côte de Blaye",17 -39,1,"Chartreuse verte",69 -40,8,"Boston Crab Meat",123 -41,8,"Jack's New England Clam Chowder",85 -42,5,"Singaporean Hokkien Fried Mee",26 -43,1,"Ipoh Coffee",17 -44,2,"Gula Malacca",27 -45,8,"Rogede sild",5 -46,8,"Spegesild",95 -47,3,"Zaanse koeken",36 -48,3,"Chocolade",15 -49,3,"Maxilaku",10 -50,3,"Valkoinen suklaa",65 -51,7,"Manjimup Dried Apples",20 -52,5,"Filo Mix",38 -53,6,"Perth Pasties",0 -54,6,"Tourtière",21 -55,6,"Pâté chinois",115 -56,5,"Gnocchi di nonna Alice",21 -57,5,"Ravioli Angelo",36 -58,8,"Escargots de Bourgogne",62 -59,4,"Raclette Courdavault",79 -60,4,"Camembert Pierrot",19 -61,2,"Sirop d'érable",113 -62,3,"Tarte au sucre",17 -63,2,"Vegie-spread",24 -64,5,"Wimmers gute Semmelknödel",22 -65,2,"Louisiana Fiery Hot Pepper Sauce",76 -66,2,"Louisiana Hot Spiced Okra",4 -67,1,"Laughing Lumberjack Lager",52 -68,3,"Scottish Longbreads",6 -69,4,"Gudbrandsdalsost",26 -70,1,"Outback Lager",15 -71,4,"Flotemysost",26 -72,4,"Mozzarella di Giovanni",14 -73,8,"Röd Kaviar",101 -74,7,"Longlife Tofu",4 -75,1,"Rhönbräu Klosterbier",125 -76,1,"Lakkalikööri",57 -77,2,"Original Frankfurter grüne Soße",32 diff --git a/tutorials/odata-06-extend-odata-service/odata-06-extend-odata-service.md b/tutorials/odata-06-extend-odata-service/odata-06-extend-odata-service.md deleted file mode 100644 index 0dd2940351..0000000000 --- a/tutorials/odata-06-extend-odata-service/odata-06-extend-odata-service.md +++ 
/dev/null @@ -1,642 +0,0 @@ ---- -parser: v2 -author_name: DJ Adams -author_profile: https://github.com/qmacro -auto_validation: true -primary_tag: software-product-function>sap-cloud-application-programming-model -tags: [ software-product-function>sap-business-application-studio, programming-tool>odata, tutorial>beginner ] -time: 20 ---- - -# Extend your Simple Data Model with a Second Entity - Explore entity relationships and navigation properties by extending your simple OData service with further Core Data Services (CDS) definitions. - -## You will learn -- How OData metadata navigation properties work -- How to define relationships between entities in CDS -- What those relationships look like in an OData context - -## Intro -This tutorial assumes you've completed the tutorial [Define a Simple Data Model and OData Service with CDS](odata-05-data-model-service). If you have done, you'll have a brand new OData service `Northbreeze` of your own to use. However, it's still rather simple, with just a single entity. - -In this tutorial, you'll first study the relationship between products and categories in the Northwind OData V4 service. Then, in your own service, you'll add a second entity at the `db/` layer and define a relation between it and the first entity. You'll then expose this second entity at the `srv/` layer and examine what this looks like from an OData metadata and operations perspective. Finally, you'll build a relationship between those two entities, add some data, and check that everything works as intended. - -Before you start, open up the workspace in the SAP Business Application Studio (App Studio) dev space you were using in that previous tutorial, ready to extend the CDS definitions you have so far. - ---- - -### Take a look at the Northwind Product / Category relationship - -In the [previous tutorial in this group](odata-05-data-model-service) you added a cut down version of the `Product` entity type. 
If you examine [Northwind's metadata document](https://services.odata.org/V4/Northwind/Northwind.svc/$metadata), you should see that this entity type actually has relationships with three other entity types - look for the `NavigationProperty` elements in this extract from the metadata document (some of the `Property` elements have been omitted for brevity):

```XML
<EntityType Name="Product">
  <Key>
    <PropertyRef Name="ProductID"/>
  </Key>
  <Property Name="ProductID" Type="Edm.Int32" Nullable="false"/>
  <Property Name="ProductName" Type="Edm.String" Nullable="false"/>
  <Property Name="SupplierID" Type="Edm.Int32"/>
  <Property Name="CategoryID" Type="Edm.Int32"/>
  <Property Name="UnitsInStock" Type="Edm.Int16"/>
  <NavigationProperty Name="Category" Type="NorthwindModel.Category" Partner="Products"/>
  <NavigationProperty Name="Order_Details" Type="Collection(NorthwindModel.Order_Detail)" Partner="Product"/>
  <NavigationProperty Name="Supplier" Type="NorthwindModel.Supplier" Partner="Products"/>
</EntityType>
```

Let's focus on the relationship to the `Category` entity type, defined in the corresponding `NavigationProperty` element. Here's what that looks like, with some added whitespace for readability:

```xml
<NavigationProperty
  Name="Category"
  Type="NorthwindModel.Category"
  Partner="Products"
/>
```

Digging into the [Navigation Property section of the OData V4 standards document](http://docs.oasis-open.org/odata/odata/v4.0/errata03/os/complete/part3-csdl/odata-v4.0-errata03-os-part3-csdl-complete.html#_Toc453752536) we can better understand what this declaration is telling us. Here's what we can work out together, at a high level:

The navigation property

- is on the `Product` entity type
- is itself called `Category`
- leads to the `Category` entity type
- has a corresponding path back from a `Category` entity to entities of this `Product` entity type via a `Products` path (this is defined in the optional `Partner` attribute here)

That's great to know from a theory perspective, but what does this mean in practical terms?

Well, for starters, it means that we can use the relationship defined to find out the details of the category for a product. For example, taking the first Northwind product, we can navigate to the corresponding category. We can even request the product _and_ category information together.
- -Here are three relative URLs, and the default JSON representations of those URLs, that demonstrate this: - -A read of the first Northwind product, via /Products(1): - -```JSON -{ - "@odata.context": "https://services.odata.org/V4/Northwind/Northwind.svc/$metadata#Products/$entity", - "@odata.etag": "W/\"1,1\"", - "ProductID": 1, - "ProductName": "Chai", - "SupplierID": 1, - "CategoryID": 1, - "QuantityPerUnit": "10 boxes x 20 bags", - "UnitPrice": 18, - "UnitsInStock": 39, - "UnitsOnOrder": 0, - "ReorderLevel": 10, - "Discontinued": false -} -``` - -Here, we only see the value of the `CategoryID` property (`1`), but none of the details of the category itself. - -We can ask to see all the information on the category for that product, with /Products(1)/Category: - -```JSON -{ - "@odata.context": "https://services.odata.org/V4/Northwind/Northwind.svc/$metadata#Categories/$entity", - "CategoryID": 1, - "CategoryName": "Beverages", - "Description": "Soft drinks, coffees, teas, beers, and ales", - "Picture": "FRwvAAIA..." -} -``` - -> From a path info perspective, this (`/Products(1)/Category`) is nothing special; it's just the same as specifying a normal (non-navigation) property, such as `ProductName`, like this: `/Products(1)/ProductName` - -And this is an example of a request for product _and_ category information to be returned in the same response, using the OData system query option `$expand`: /Products(1)?$expand=Category: - -```JSON -{ - "@odata.context": "https://services.odata.org/V4/Northwind/Northwind.svc/$metadata#Products/$entity", - "@odata.etag": "W/\"1,1\"", - "ProductID": 1, - "ProductName": "Chai", - "SupplierID": 1, - "CategoryID": 1, - "QuantityPerUnit": "10 boxes x 20 bags", - "UnitPrice": 18, - "UnitsInStock": 39, - "UnitsOnOrder": 0, - "ReorderLevel": 10, - "Discontinued": false, - "Category": { - "CategoryID": 1, - "CategoryName": "Beverages", - "Description": "Soft drinks, coffees, teas, beers, and ales", - "Picture": "FRwvAAIA..." 
  }
}
```

### Take a look at the Northwind Category -> Product relationship

To balance things out, let's take a brief look at the other entity type in this relationship - and that's the `Category` entity type. Here's what the definition looks like in the metadata document:

```XML
<EntityType Name="Category">
  <Key>
    <PropertyRef Name="CategoryID"/>
  </Key>
  <Property Name="CategoryID" Type="Edm.Int32" Nullable="false"/>
  <Property Name="CategoryName" Type="Edm.String" Nullable="false"/>
  <Property Name="Description" Type="Edm.String"/>
  <Property Name="Picture" Type="Edm.Binary"/>
  <NavigationProperty Name="Products" Type="Collection(NorthwindModel.Product)" Partner="Category"/>
</EntityType>
```

Of course, our gaze falls immediately upon the single `NavigationProperty` here which is `Products`, the "other end of the connection" to what we looked at in the previous step.

Notice here that in contrast to the type defined for the `NavigationProperty` in the previous step, the type defined for this one is a [built-in abstract type](http://docs.oasis-open.org/odata/odata/v4.0/errata03/os/complete/part3-csdl/odata-v4.0-errata03-os-part3-csdl-complete.html#_Toc453752518) - a collection of zero or more entities of the type `NorthwindModel.Product`.

Thinking about what this data model is about, this makes sense, of course. There are categories, and there are products that belong to categories. There may be many products belonging to a single category (say, Beverages), and there may theoretically also be categories for which there are no products. Note also that a product can only belong to a single category here.

This is a relationship that might be commonly expressed like this:

```
+------------+                +------------+
| Categories | 1 ------- 0..N |  Products  |
+------------+                +------------+
```

Before moving on to start creating a relationship like this in our own `Northbreeze` OData service, let's just try out an OData query operation on Northwind that uses this `NavigationProperty` that we've been looking at, and that is a count of the products in the seventh category ("Produce"): /Categories(7)/Products/$count. This should return a simple numeric value (which is 5, at the time of writing).
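To picture what a request like /Categories(7)/Products/$count boils down to on the server side, here's a tiny sketch in Python over some invented (ProductID, CategoryID) pairs - a made-up sample, not Northwind's real data - where the count simply follows the relationship backwards:

```python
# Invented (ProductID, CategoryID) pairs; the category reference on
# each product is what realises the 1 to 0..N relationship.
products = [(1, 1), (2, 1), (3, 2), (7, 7), (14, 7)]

def products_in_category(category_id):
    """Mimic /Categories(n)/Products: follow the relationship backwards."""
    return [pid for pid, cid in products if cid == category_id]

print(len(products_in_category(7)))  # 2 (in this invented sample)
print(products_in_category(99))      # [] - a category may have no products
```

The empty-list case mirrors the observation above that there may theoretically be categories for which there are no products.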
- -Staying with Northwind, let's look at the details for that seventh category next, including a list of products in that category /Categories(7)?$expand=Products (only a couple of the products are shown in this sample output, for brevity): - -```JSON -{ - "@odata.context": "https://services.odata.org/V4/Northwind/Northwind.svc/$metadata#Categories/$entity", - "CategoryID": 7, - "CategoryName": "Produce", - "Description": "Dried fruit and bean curd", - "Picture": "FRwvAAIA...", - "Products": [ - { - "@odata.etag": "W/\"3,7\"", - "ProductID": 7, - "ProductName": "Uncle Bob's Organic Dried Pears", - "SupplierID": 3, - "CategoryID": 7, - "QuantityPerUnit": "12 - 1 lb pkgs.", - "UnitPrice": 30, - "UnitsInStock": 15, - "UnitsOnOrder": 0, - "ReorderLevel": 10, - "Discontinued": false - }, - { - "@odata.etag": "W/\"6,7\"", - "ProductID": 14, - "ProductName": "Tofu", - "SupplierID": 6, - "CategoryID": 7, - "QuantityPerUnit": "40 - 100 g pkgs.", - "UnitPrice": 23.25, - "UnitsInStock": 35, - "UnitsOnOrder": 0, - "ReorderLevel": 0, - "Discontinued": false - } - ] -} -``` - -### Add a new Categories entity to your Northbreeze schema - -Now it's time to build out your own OData service to reflect a similar relationship. As it's a cut down version of Northwind, and we want to concentrate here on the relationships rather than anything else, you'll only include the category ID, name, and description properties. - -Assuming you have your App Studio dev space already open at the workspace you were using in the previous tutorial (see the instructions at the top of this tutorial), open up a terminal (menu path **Terminal** > **New Terminal**) and start `cds watch`, whereupon you should see some familiar output like this: - -```Shell/Bash -user: northbreeze $ cds watch - -cds serve all --with-mocks --in-memory? 
-live reload enabled for browsers - - ___________________________ - -[cds] - loaded model from 2 file(s): - - srv/service.cds - db/schema.cds - -[cds] - connect using bindings from: { registry: '~/.cds-services.json' } -[cds] - connect to db > sqlite { url: ':memory:' } - > init from db/data/northbreeze-Products.csv -/> successfully deployed to in-memory database. - -[cds] - using auth strategy { - kind: 'mocked', - impl: '../../../../managed-content/globals/pnpm/5/.pnpm/@sap+cds@8.3.1_express@4.21.1/node_modules/@sap/cds/lib/auth/basic-auth' -} - -[cds] - using new OData adapter -[cds] - serving Main { path: '/odata/v4/main' } - -[cds] - server listening on { url: 'http://localhost:4004' } -[cds] - launched at 10/24/2024, 10:22:57 AM, version: 8.3.1, in: 371.49ms -[cds] - [ terminate with ^C ] -``` - -Using `cds watch` gives you a lovely tight [feedback loop](https://martinfowler.com/articles/developer-effectiveness.html#FeedbackLoops) where you can make changes, observe their effects, make further changes, and experience the joy of learning-by-doing. - -Now your feedback loop is active, add a new `Categories` entity definition to your `db/schema.cds` file, so that the resulting entire contents looks like this: - -```CDS -namespace northbreeze; - -entity Products { - key ProductID : Integer; - ProductName : String; - UnitsInStock : Integer; -} - -entity Categories { - key CategoryID : Integer; - CategoryName : String; - Description : String; -} -``` - -> Again, try to resist the temptation just to copy/paste this new definition, and type it in manually instead to learn more about the support for CDS that App Studio has (via the [SAP CDS Language Support extension](https://www.youtube.com/watch?v=eY7BTzch8w0)). - -As soon as the file is saved, you'll see the `cds watch` process restart things. Take a look at the metadata definition to check if you can now see the `Categories` entity type. 
If you've previously closed the tab for this, remember that you can get to the list of ports that the App Studio is exposing for you in this dev space with the **Ports: Preview** command (use the Command Palette).

Following the link to the preview of port 4004 (which is where we find the CAP Welcome page), and from there the link to the `/main/$metadata` resource, you may be unsurprised to see that your new entity type isn't there:

```XML
<edmx:Edmx Version="4.0" xmlns:edmx="http://docs.oasis-open.org/odata/ns/edmx">
  <edmx:DataServices>
    <Schema Namespace="Main" xmlns="http://docs.oasis-open.org/odata/ns/edm">
      <EntityContainer Name="EntityContainer">
        <EntitySet Name="Products" EntityType="Main.Products"/>
      </EntityContainer>
      <EntityType Name="Products">
        <Key>
          <PropertyRef Name="ProductID"/>
        </Key>
        <Property Name="ProductID" Type="Edm.Int32" Nullable="false"/>
        <Property Name="ProductName" Type="Edm.String"/>
        <Property Name="UnitsInStock" Type="Edm.Int32"/>
      </EntityType>
    </Schema>
  </edmx:DataServices>
</edmx:Edmx>
```

You know why, don't you? Because you haven't yet exposed that entity definition at the **service layer**.

Do that now, by adding a second line to the definition details for the `Main` service in your `srv/service.cds` file, adding a reference to the `Categories` entity, so the entire file then looks like this:

```CDS
using northbreeze from '../db/schema';

service Main {
  entity Products as projection on northbreeze.Products;
  entity Categories as projection on northbreeze.Categories;
}
```

You should notice that `cds watch` restarts things - when that happens, re-request the metadata document in the other tab, and you should then see the new entity type:

```XML
<edmx:Edmx Version="4.0" xmlns:edmx="http://docs.oasis-open.org/odata/ns/edmx">
  <edmx:DataServices>
    <Schema Namespace="Main" xmlns="http://docs.oasis-open.org/odata/ns/edm">
      <EntityContainer Name="EntityContainer">
        <EntitySet Name="Products" EntityType="Main.Products"/>
        <EntitySet Name="Categories" EntityType="Main.Categories"/>
      </EntityContainer>
      <EntityType Name="Products">
        <Key>
          <PropertyRef Name="ProductID"/>
        </Key>
        <Property Name="ProductID" Type="Edm.Int32" Nullable="false"/>
        <Property Name="ProductName" Type="Edm.String"/>
        <Property Name="UnitsInStock" Type="Edm.Int32"/>
      </EntityType>
      <EntityType Name="Categories">
        <Key>
          <PropertyRef Name="CategoryID"/>
        </Key>
        <Property Name="CategoryID" Type="Edm.Int32" Nullable="false"/>
        <Property Name="CategoryName" Type="Edm.String"/>
        <Property Name="Description" Type="Edm.String"/>
      </EntityType>
    </Schema>
  </edmx:DataServices>
</edmx:Edmx>
```

So far so good, right? Well, let's see. Let's add some category data in the next step.

### Add data for the Categories entity

In the same way as in the [tutorial that precedes this](odata-05-data-model-service), you should now seed this entity with some data, using the same Comma Separated Value (CSV) file based technique as before.
- -In the `db/data/` directory, next to the existing `northbreeze-Products.csv` file, create a new file `northbreeze-Categories.csv` and save the following content into it (including the header line): - -```CSV -CategoryID,CategoryName,Description -1,"Beverages","Soft drinks, coffees, teas, beers, and ales" -2,"Condiments","Sweet and savory sauces, relishes, spreads, and seasonings" -3,"Confections","Desserts, candies, and sweet breads" -4,"Dairy Products","Cheeses" -5,"Grains/Cereals","Breads, crackers, pasta, and cereal" -6,"Meat/Poultry","Prepared meats" -7,"Produce","Dried fruit and bean curd" -8,"Seafood","Seaweed and fish" -``` - -Back at the CAP Welcome page (the root resource on the exposed port 4004), you can now follow links to both the `Categories` and `Products` entity sets, and see some data for both. - -But there's something missing - there's no way to get from a product to a category, or from a category to a list of products. - -### Define relationships in your schema - -While we can retrieve data for both entities, there's no connection yet between them. Let's address that in this step. - -We understand what a (two-way) relationship looks like when expressed in OData metadata XML. Now you're going to define such a relationship at the schema layer in your simple OData service, drawing a link between the `Products` and `Categories` entities. You're going to do this declaratively in CDS, using [Associations](https://cap.cloud.sap/docs/cds/cdl#associations). 
Open up the `db/schema.cds` file and add a new element to each of the entities, so that the resulting contents look like this:

```CDS
namespace northbreeze;

entity Products {
  key ProductID : Integer;
  ProductName : String;
  UnitsInStock : Integer;
  Category : Association to Categories;
}

entity Categories {
  key CategoryID : Integer;
  CategoryName : String;
  Description : String;
  Products : Association to many Products
    on Products.Category = $self;
}
```

> Note the terminology difference for what we might think of as "fields"; OData uses the term "property", and in CAP, or more specifically in CDS, the term "element" is used.

You've added a pair of pointers, effectively, but note that they're different:

- the `Category` property is a pointer to a single instance of the `Categories` entity
- the `Products` property is a pointer to zero or more instances of the `Products` entity, and has a qualifying expression to ensure the right relationships

These types of association you've added are "managed" associations, and allow you to remain at the "what you want" level rather than have to descend to the "how you need to achieve it" level. That said, we need to understand what these declarations mean in terms of OData metadata. Let's find out.

On saving this `db/schema.cds` file, you should notice that the `cds watch` process (that you should still have running in your terminal session) should restart things. At this point, go to the CAP Welcome page (served as usual on the exposed port 4004) and retrieve the metadata document again. It should now look something like this:

```XML
<edmx:Edmx Version="4.0" xmlns:edmx="http://docs.oasis-open.org/odata/ns/edmx">
  <edmx:DataServices>
    <Schema Namespace="Main" xmlns="http://docs.oasis-open.org/odata/ns/edm">
      <EntityContainer Name="EntityContainer">
        <EntitySet Name="Products" EntityType="Main.Products">
          <NavigationPropertyBinding Path="Category" Target="Categories"/>
        </EntitySet>
        <EntitySet Name="Categories" EntityType="Main.Categories">
          <NavigationPropertyBinding Path="Products" Target="Products"/>
        </EntitySet>
      </EntityContainer>
      <EntityType Name="Products">
        <Key>
          <PropertyRef Name="ProductID"/>
        </Key>
        <Property Name="ProductID" Type="Edm.Int32" Nullable="false"/>
        <Property Name="ProductName" Type="Edm.String"/>
        <Property Name="UnitsInStock" Type="Edm.Int32"/>
        <NavigationProperty Name="Category" Type="Main.Categories" Partner="Products">
          <ReferentialConstraint Property="Category_CategoryID" ReferencedProperty="CategoryID"/>
        </NavigationProperty>
        <Property Name="Category_CategoryID" Type="Edm.Int32"/>
      </EntityType>
      <EntityType Name="Categories">
        <Key>
          <PropertyRef Name="CategoryID"/>
        </Key>
        <Property Name="CategoryID" Type="Edm.Int32" Nullable="false"/>
        <Property Name="CategoryName" Type="Edm.String"/>
        <Property Name="Description" Type="Edm.String"/>
        <NavigationProperty Name="Products" Type="Collection(Main.Products)" Partner="Category"/>
      </EntityType>
    </Schema>
  </edmx:DataServices>
</edmx:Edmx>
```

Excellent! We now not only have the definitions of the entity types `Products` and `Categories`, but these definitions also contain appropriate `NavigationProperty` elements describing the two-way relationship between them.
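Before going further, it's worth internalizing what these two associations mean. Here's the `on Products.Category = $self` condition read as plain filtering, sketched in Python over invented records, with the `category` field standing in for the stored link that the managed association maintains for you:

```python
# Invented sample records; "category" stands in for the association's link.
categories = [{"CategoryID": 1, "CategoryName": "Beverages"}]
products = [
    {"ProductID": 1, "ProductName": "Chai", "category": 1},
    {"ProductID": 3, "ProductName": "Aniseed Syrup", "category": 2},
]

def resolve_products(category):
    """Categories.Products: every product whose Category points at $self."""
    return [p for p in products if p["category"] == category["CategoryID"]]

def resolve_category(product):
    """Products.Category: the single category this product points at."""
    return next((c for c in categories
                 if c["CategoryID"] == product["category"]), None)

print([p["ProductName"] for p in resolve_products(categories[0])])  # ['Chai']
print(resolve_category(products[0])["CategoryName"])                # Beverages
```

Note the asymmetry, just as in the CDS: resolving `Category` yields at most one record, while resolving `Products` yields zero or more.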
- -Our extended simple OData service's metadata is now what we want it to be, but there's one more thing to do. While we have an indication of a relationship in the metadata, there's no indication or evidence of that relationship in our (CSV-based) test data. - -What do we need to do here? Well, take a closer look at the XML above. The relationship is qualified by a referential constraint, that refers to a new property (which now also appears as a `Property` element within the `EntityType` definition). That property is `Category_CategoryID`. This has been generated by the managed association and is a clue for us as to what we need to do in the data to bring the actual relationships to life. - -### Modify and extend the data for the Products entity - -In fact, if you look at the `Products` entity set (follow the `Products` hyperlink from the CAP Welcome page), you will see that this `Category_CategoryID` is indeed a new property, and there are currently no values for it in any of the entities: - -```JSON -{ - "@odata.context": "$metadata#Products", - "value": [ - { - "ProductID": 1, - "ProductName": "Chai", - "UnitsInStock": 39, - "Category_CategoryID": null - }, - { - "ProductID": 2, - "ProductName": "Chang", - "UnitsInStock": 17, - "Category_CategoryID": null - }, - { - "ProductID": 3, - "ProductName": "Aniseed Syrup", - "UnitsInStock": 13, - "Category_CategoryID": null - } - ] -} -``` - -Let's rectify that now, and also add more products. - -We know that each entity in the [Products entity set in the original Northwind service](https://services.odata.org/V4/Northwind/Northwind.svc/Products) contains a value for the property `CategoryID`; so it's just a case of taking all that data and putting it into our `db/data/northbreeze-Products.csv` file. There's a file associated with this tutorial that's been prepared for you: - - - -Open it and copy the entire contents - then use that to replace what you have in `db/data/northbreeze-Products.csv`. 
- -After doing this, the contents of that file should look like this: - -```CSV -ProductID,Category_CategoryID,ProductName,UnitsInStock -1,1,"Chai",39 -2,1,"Chang",17 -3,2,"Aniseed Syrup",13 -4,2,"Chef Anton's Cajun Seasoning",53 -5,2,"Chef Anton's Gumbo Mix",0 -6,2,"Grandma's Boysenberry Spread",120 -7,7,"Uncle Bob's Organic Dried Pears",15 -8,2,"Northwoods Cranberry Sauce",6 -9,6,"Mishi Kobe Niku",29 -... -``` - -Note that in the header line, we have the property names - they should match the names of the elements in the entity definition; this includes the link to the `Categories` entity, represented by `Category_CategoryID`. - -On saving the file, you should again see the `cds watch` process restart things, and also (as before) a successful loading of data, indicated by these two lines in the log output: - -```Shell/Bash -> init from db/data/northbreeze-Products.csv -> init from db/data/northbreeze-Categories.csv -``` - -Go back to the `Products` entity set in your service that you were looking at in the previous step, and re-request it. You should now see not only more products, but also that there's a value for the `Category_CategoryID` properties in each of the entities. Here's a cut down version of what you should see: - -```JSON -{ - "@odata.context": "$metadata#Products", - "value": [ - { - "ProductID": 1, - "ProductName": "Chai", - "UnitsInStock": 39, - "Category_CategoryID": 1 - }, - { - "ProductID": 2, - "ProductName": "Chang", - "UnitsInStock": 17, - "Category_CategoryID": 1 - }, - { - "ProductID": 3, - "ProductName": "Aniseed Syrup", - "UnitsInStock": 13, - "Category_CategoryID": 2 - } - ] -} -``` - -That looks good! - -### Exercise the new relationships in your OData service - -Let's use this final step to check if we can now explore the relationship between the two entities in our simple `Northbreeze` OData service. Why don't we use the same approach and set of requests from the first step in this tutorial? 
- -All of these request URLs will be based on and relative to your `Northbreeze` OData service document - use the CAP Welcome page to get to that service document, and then append each of the relative path info examples onto that. - -In other words, in the CAP Welcome page, select the `/odata/v4/main` hyperlink from the `/odata/v4/main / $metadata` line directly following the "Service Endpoints" heading: - -![main hyperlink](main-hyperlink.png) - -This should take you to the service document, with a URL that looks similar to this: - -`https://port4004-workspaces-ws-czcx7.us10.trial.applicationstudio.cloud.sap/odata/v4/main` - -All paths you use in the rest of this step should be relative to (that is, appended to) this service document URL. - -Let's start by requesting just the first `Northbreeze` product, via `/Products(1)` - -```JSON -{ - "@odata.context": "$metadata#Products/$entity", - "ProductID": 1, - "ProductName": "Chai", - "UnitsInStock": 39, - "Category_CategoryID": 1 -} -``` - -Here, we only see the value of the `Category_CategoryID` property (`1`), but none of the details of the category itself. 
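Without navigation support in the service, a client wanting the category details at this point would have to fetch both entity sets and perform the join itself. Here's a hypothetical sketch of that chore, with made-up in-memory data - exactly the work that OData's navigation facilities save us from:

```JavaScript
// Hypothetical client-side join a consumer would face without navigation.
const product = {
  ProductID: 1, ProductName: "Chai", UnitsInStock: 39, Category_CategoryID: 1
}
const categories = [
  { CategoryID: 1, CategoryName: "Beverages" },
  { CategoryID: 2, CategoryName: "Condiments" }
]

// The foreign key value is all the product gives us; resolving it is on us.
const category = categories.find(c => c.CategoryID === product.Category_CategoryID)

console.log(category.CategoryName) // Beverages
```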
- -We can ask to see all the information on the category for that product, with `/Products(1)/Category`: - -```JSON -{ - "@odata.context": "../$metadata#Categories/$entity", - "CategoryID": 1, - "CategoryName": "Beverages", - "Description": "Soft drinks, coffees, teas, beers, and ales" -} -``` - -And this is the product _and_ category example request again, a request where we expect information from both entities to be returned in the same response, using the `$expand` OData system query option `/Products(1)?$expand=Category`: - -```JSON -{ - "@odata.context": "$metadata#Products/$entity", - "ProductID": 1, - "ProductName": "Chai", - "UnitsInStock": 39, - "Category_CategoryID": 1, - "Category": { - "CategoryID": 1, - "CategoryName": "Beverages", - "Description": "Soft drinks, coffees, teas, beers, and ales" - } -} -``` - -As you can see, your simple OData service is looking great. With just a few lines of declarative schema definitions, plus pass-through exposure of the entities in a minimal service definition, and some CSV based data, you have yourself a fully functioning OData service. - -Congratulations! - diff --git a/tutorials/odata-07-extend-custom-code/odata-07-extend-custom-code.md b/tutorials/odata-07-extend-custom-code/odata-07-extend-custom-code.md deleted file mode 100644 index ace2f4f64d..0000000000 --- a/tutorials/odata-07-extend-custom-code/odata-07-extend-custom-code.md +++ /dev/null @@ -1,326 +0,0 @@ ---- -parser: v2 -author_name: DJ Adams -author_profile: https://github.com/qmacro -auto_validation: true -primary_tag: software-product-function>sap-cloud-application-programming-model -tags: [ software-product-function>sap-business-application-studio, programming-tool>odata, tutorial>beginner ] -time: 20 ---- - -# Extend the Built-In OData Features with Custom Code - Learn how to customize your OData service with event handlers. 
- -## You will learn -- What custom event handlers are -- Where and how to define a simple event handler -- How to use a custom event handler to define an OData function import - -## Intro -This tutorial assumes you've completed the tutorial [Extend your Simple Data Model with a Second Entity](odata-06-extend-odata-service). If you have done, you'll have an OData service `Northbreeze` with two related entities. All OData operations - create, read, update, delete and query - are supported out of the box. - -In this tutorial, you'll learn how to add custom behaviour, in the form of handlers, to make your OData service do what you want it to do, beyond the standard operation handling. - -Before you start, open up the workspace in the SAP Business Application Studio (App Studio) dev space you were using in that previous tutorial, ready to add code. - ---- - -### Review the product data - -Let's take the `Products` entity as the target for our explorations of custom functions. Remind yourself of what the data looks like by starting up the service with `cds watch` in a terminal, just like you've done in the previous tutorial. - -Open up the service in a new browser tab or window, and navigate to the `Products` entity set. You should see the familiar list of products, with values for the properties in each case, and it should look like this (only the first two products are shown here): - -```JSON -{ - "@odata.context": "$metadata#Products", - "value": [ - { - "ProductID": 1, - "ProductName": "Chai", - "UnitsInStock": 39, - "Category_CategoryID": 1 - }, - { - "ProductID": 2, - "ProductName": "Chang", - "UnitsInStock": 17, - "Category_CategoryID": 1 - } - ] -} -``` - -Remember that at this stage your fully functioning OData service is a result of purely declarative definitions. Now it's time to add some simple business logic. 
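Before wiring anything into CAP, it's worth trying the kind of transformation we're about to register on plain data. This standalone sketch (illustrative only, outside of any CAP context, with made-up items) decorates product names where stock is high:

```JavaScript
// Standalone, illustrative version of a "sale" decoration - plain data,
// no CAP involved.
const items = [
  { ProductID: 1, ProductName: "Chai", UnitsInStock: 39 },
  { ProductID: 6, ProductName: "Grandma's Boysenberry Spread", UnitsInStock: 120 }
]

const decorated = items.map(item => {
  if (item.UnitsInStock > 100) {
    item.ProductName += " SALE NOW ON!"
  }
  return item
})

console.log(decorated[1].ProductName) // Grandma's Boysenberry Spread SALE NOW ON!
```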
- -### Create a service implementation - -Business logic in OData services belongs in a [service implementation](https://cap.cloud.sap/docs/node.js/core-services#implementing-services). The simplest way to do this is to create a `service.js` file in the same directory as your `service.cds` file, i.e. in the `srv/` directory. The framework will automatically recognize and use this "sibling" file. - -In a new `srv/service.js` file, add the following JavaScript: - -```JavaScript -module.exports = srv => { - srv.after('READ', 'Products', items => { - return items.map(item => { - if (item.UnitsInStock > 100) { - item.ProductName += ' SALE NOW ON!' - } - return item - }) - }) -} -``` - -Let's stare at this for a few moments. You won't be far wrong if you guess that it's something to do with adding an indication of a product sale for items where there's a high number of units in stock. But how does it work, and in what context? - -First, in order to be used by CAP's runtime framework, a service implementation file such as this needs to offer a function definition for the framework to call on startup. This "offer" is via Node.js's module export mechanism, and what's exported here is the anonymous function which (apart from the `module.exports =` part itself) is the entire file contents. - -When the framework finds and invokes this anonymous function, it passes a server object, which we can use to define event handlers via the [Handler Registration API](https://cap.cloud.sap/docs/node.js/core-services#srv-on-before-after). That's why we have a single `srv` parameter defined, and that's what we use to access the `srv.after` API to declare a function to be run under specific circumstances (more on that shortly). - -Examining that API call, we see this pattern: - -```JavaScript -srv.after('READ', 'Products', items => { ... }) -``` - -This is how we can add custom business logic to extend the standard handling that is provided for us out of the box. 
Specifically, this call defines a function (`items => { ... }`) that should be executed whenever there's an OData READ (or QUERY) operation on the `Products` entity data. - -The use of the specific `after` API call is quite common, and allows us to jump onto the request processing flow towards the end, when the heavy lifting of data retrieval from the persistence layer has been done for us. As well as `after`, the Handler Registration API supports `before` and `on` events, but right now, `after` is what we want here. - -What does the function specified in this API call do? As you'd correctly guessed, it just adds a string on to the end of the value for each of the product names, specifically for the cases where the number of units in stock is high. - -In its simplest form, the function provided is given the data retrieved, and whatever the function does ends up in the response to the original request. Note, however, that in the context of the `after` API call, the handler function cannot change the "shape" of the data, such as omit specific items. We'll look at how to do that later on in this tutorial. - -So with the simple `map` invocation, we are modifying the values for the `ProductName` properties of those items where the `UnitsInStock` value is more than 100. - -Once you've added this code and saved the file, check that the `cds watch` process has restarted the service successfully, and have another look at the `Products` entity set. - -Here's an example of what you should see; this data was retrieved using the system query options `$skip=4` and `$top=2` to narrow in on just two of the products, with "Grandma's Boysenberry Spread" having 120 units in stock and the extra "SALE NOW ON!" 
text: - -```JSON -{ - "@odata.context": "$metadata#Products", - "value": [ - { - "ProductID": 5, - "ProductName": "Chef Anton's Gumbo Mix", - "UnitsInStock": 0, - "Category_CategoryID": 2 - }, - { - "ProductID": 6, - "ProductName": "Grandma's Boysenberry Spread SALE NOW ON!", - "UnitsInStock": 120, - "Category_CategoryID": 2 - } - ] -} -``` - -### Modify the custom code - -That's great, but let's look now at a simple example of where we might want to change the shape of the data, or, as the documentation describes it, to make "asynchronous modifications". - -If we wanted to reduce the list of products returned - to omit those products that had a low stock count - we would not use the `after` API call, but the `on` API call, and provide a function that effectively replaces the standard processing. - -The prospect of doing this isn't as daunting as it first seems, as we're given everything that we need to be able to do this. - -Remove the entire call to `srv.after` and replace it with a call to `srv.on`, so that the resulting `service.js` content looks like this: - -```JavaScript -module.exports = srv => { - srv.on('READ', 'Products', async (req, next) => { - const items = await next() - return items.filter(item => item.UnitsInStock > 100) - }) -} -``` - -This differs from the previous step thus in a number of ways. - -First, we're using the `on` API call to provide a function that should be run _instead of_ standard processing when product data is requested. - -Next, the function we provide doesn't expect the data (like we did in the previous function, with the `items` parameter), as the data will not be provided to it. Instead, it's expecting to be given the original request object (`req`), and a reference to the subsequent standard handler (`next`). We can use this `next` handler to actually do the work of retrieving the data for us, and are then free to do what we want with it. 
- -Finally, because we're wanting to call that `next` function synchronously (with `await`), we must declare our function with the `async` keyword. - -Once we have the data, in `items`, we return a filtered subset that only includes those products where the value of the `UnitsInStock` property is greater than 100. - -Once you have this new implementation saved, and your service has restarted, check the `Products` entity set once more, and you should see only a small number of entries; if you're still using the data provided in the tutorials prior to this, there should be 10. - -### Define a function import - -That's great, but there's more that can be done in such a service implementation file. - -The two JavaScript functions you've provided so far have been to affect the processing of standard OData operations on the `Products` entity. But OData V4 defines [actions and functions](http://docs.oasis-open.org/odata/odata/v4.0/os/part1-protocol/odata-v4.0-os-part1-protocol.html#_Toc372793604), in addition to entities. Actions and functions can be bound, or unbound. Think of such things as the next generation of function imports that you might know from OData V2. - -So to round off this tutorial, let's define a simple [unbound function](https://cap.cloud.sap/docs/cds/cdl#actions) on our OData service. - -> Bear in mind the distinction between "function" in the JavaScript sense, and "function" in the OData sense. - -While the custom logic that we've written so far has been implicit in our OData service's definition, as they work as handlers for existing operations, an OData function needs to be explicitly declared and described in the service's metadata. - -To do this, extend the CDS definition in `srv/service.cds`, where you should add a line to define a function `TotalStockCount` in the `Main` service. 
The resulting content of `srv/service.cds` should look like this: - -```CDS -using northbreeze from '../db/schema'; - -service Main { - entity Products as projection on northbreeze.Products; - entity Categories as projection on northbreeze.Categories; - function TotalStockCount() returns Integer; -} -``` - -At this point, it's worth checking to see if this has any effect on your OData service. Once the CDS file is saved, and your service has restarted, navigate to the metadata document (that's the relative path `/odata/v4/main/$metadata`, but you knew that already, right?). It should look something like this: - -```XML - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -``` - -We can see that this simple declarative definition has already had an effect - there is evidence of this new function import: - -* in the `EntityContainer` element where it's listed alongside the two entity sets -* defined near the bottom, after the definitions of the `Categories` and `Products` entity types - -The function import definition here in the metadata document reflects what we intended; in particular, the function is called `TotalStockCount`, is unbound, and has an integer return type: - -```XML - - - -``` - -Great. Now we can get to writing the implementation of this function import. - -### Implement the function import - -The implementation of this function import might as well go in the same `srv/service.js` file as before, to keep things simple. 
Here's what the entire contents of the file should look like with all the additions: - -```JavaScript -const { Products } = cds.entities('northbreeze') - -module.exports = srv => { - srv.on('READ', 'Products', async (req, next) => { - const items = await next() - return items.filter(item => item.UnitsInStock > 100) - }) - - srv.on('TotalStockCount', async (req) => { - const items = await cds.tx(req).run(SELECT.from(Products)) - return items.reduce((a, item) => a + item.UnitsInStock, 0) - }) -} -``` - -Let's look at what's new. - -First, at the top of the file, there is this new line: - -```JavaScript -const { Products } = cds.entities('northbreeze') -``` - -Here, we're using destructuring to pull out the `Products` entity definition from the `northbreeze` service, via the `cds` module. - -Next, directly below the existing `srv.on('READ', 'Products', async (req, next) => { ... })` call that you already had, there is now a second call to the Handler Registration API to define a handler for the `TotalStockCount` function import. - -This handler is an anonymous function just like the other, except that it only expects and needs the request (in `req`). It uses this as a context for the transaction that it creates, within which it then retrieves the product data. - -> Note that `Products` is a constant, not a literal string, and refers to the entity set object that we retrieved via `cds.entities` earlier. - -The product data retrieved is stored in the `items` constant, and looks like this: - -```JavaScript -[ { ProductID: 1, - ProductName: 'Chai', - UnitsInStock: 39, - Category_CategoryID: 1 }, - { ProductID: 2, - ProductName: 'Chang', - UnitsInStock: 17, - Category_CategoryID: 1 }, - ... -] -``` - -It's then just a simple case of summing the values of the `UnitsInStock` property for each of the items, which we do cleanly with a simple [reduce](https://www.google.com/search?q=site%3Aqmacro.org+reduce) function, and return the result. 
Being a numeric value, the result type corresponds to what we defined as what the function import returns, back in the CDS file: - -```CDS -function TotalStockCount() returns Integer; -``` - -Once you've saved the service implementation file and the `cds watch` process has restarted the service, you should try this function import out. Switch to the other tab and navigate to the relative path: - -``` -/odata/v4/main/TotalStockCount() -``` - -The response should look something like this: - -```JSON -{ - "@odata.context": "$metadata#Edm.Int32", - "value": 3119 -} -``` - -That is, there are a total of 3119 stock units across all products. - -Well done! You've now successfully implemented an OData V4 unbound function, and hopefully feel comfortable enough to implement your own custom business logic for your CAP-powered OData services. - - ---- diff --git a/tutorials/odata-dd-1-origins/odata-dd-1-origins.md b/tutorials/odata-dd-1-origins/odata-dd-1-origins.md new file mode 100644 index 0000000000..cfddae61f2 --- /dev/null +++ b/tutorials/odata-dd-1-origins/odata-dd-1-origins.md @@ -0,0 +1,462 @@ +--- +parser: v2 +author_name: DJ Adams +author_profile: https://github.com/qmacro +auto_validation: false +primary_tag: software-product>sap-business-technology-platform +tags: [ software-product>sap-business-technology-platform, topic>cloud, programming-tool>odata, tutorial>beginner ] +time: 20 +--- + +# Learn about OData's origins + + Discover OData's origins in RSS and Atom. + +## You will learn + +- Where OData came from +- Why OData looks and acts the way it does + +## Intro + +OData is an open standard that is both a data format and a protocol for consuming and manipulating data in a uniform way. It's ISO/IEC approved and managed by the [OASIS organization](https://www.oasis-open.org/). + +OData has its origins in the world of weblogs and syndication, but now serves to power a great deal of the API and integration activities in typical SAP enterprise environments. 
This tutorial will help you understand OData's origins.
+
+> This tutorial belongs to the OData Deep Dive mission, a re-write of the original. The re-write is a work in progress, so please proceed with caution! More info can be found in the blog post [OData Deep Dive rewrite in the open](https://qmacro.org/blog/posts/2026/02/02/odata-deep-dive-rewrite-in-the-open/).
+
+---
+
+### Examine RSS, an ancestor of OData
+
+You can understand OData as the combination of two essential parts: the format and the protocol. The format defines how data is structured, described and serialized. The protocol defines how that data is retrieved, manipulated, and maintained.
+
+The origin of OData's format lies in the world of weblogs: blogging and syndication. The Rich Site Summary (RSS) format was defined to describe a blog and the posts available in it, typically with the newest posts first, in an XML format designed for machine consumption.
+
+> RSS is also known as "RDF Site Summary" or "Really Simple Syndication".
+
+Let's look at an example of RSS. The British Broadcasting Corporation (BBC) maintains a number of [news feeds](https://www.bbc.co.uk/news/10628494), one of which is for [World News](https://feeds.bbci.co.uk/news/world/rss.xml).
+ +The content of this feed looks something like this (reduced to just a few items for brevity): + +```xml + + + + <![CDATA[BBC News]]> + + https://www.bbc.co.uk/news/world + + https://news.bbcimg.co.uk/nol/shared/img/bbc_news_120x60.gif + BBC News + https://www.bbc.co.uk/news/world + + RSS for Node + Mon, 19 Jan 2026 15:14:13 GMT + + + + 15 + + <![CDATA[IMF warns of trade tension risk to global growth]]> + + https://www.bbc.com/news/articles/c0r47ey0d1vo?at_medium=RSS&at_campaign=rss + https://www.bbc.com/news/articles/c0r47ey0d1vo#0 + Mon, 19 Jan 2026 11:04:56 GMT + + + + <![CDATA[Japan PM Takaichi calls snap election three months after taking office]]> + + https://www.bbc.com/news/articles/c1dk0x0v6pdo?at_medium=RSS&at_campaign=rss + https://www.bbc.com/news/articles/c1dk0x0v6pdo#0 + Mon, 19 Jan 2026 12:00:48 GMT + + + + <![CDATA[South African team helps search for politician swept away by Mozambique floodwaters]]> + + https://www.bbc.com/news/articles/c62nen4n971o?at_medium=RSS&at_campaign=rss + https://www.bbc.com/news/articles/c62nen4n971o#1 + Mon, 19 Jan 2026 14:22:15 GMT + + + + <![CDATA[China's birth rate hits record low as population continues to shrink]]> + + https://www.bbc.com/news/articles/c79r7v7qr53o?at_medium=RSS&at_campaign=rss + https://www.bbc.com/news/articles/c79r7v7qr53o#1 + Mon, 19 Jan 2026 07:12:01 GMT + + + + +``` + +Observe the structure of the XML document. + +- Following the XML declaration (first line), `` is the outermost enclosing element, and in the opening tag, comes with a number of namespace (`xmlns`) declarations and a version. Think of this as the "envelope". +- Nested a level within the `` element is the `` element, which is the encoded representation of this feed resource, with elements that one would expect here, conveying the feed's title, image link, last updated date, and so on. Think of this as the "header". 
+- Also within the `` element are multiple `` elements, each representing an individual news item in the context of the channel, i.e. world news items. Each item has elements conveying the details, such as news item title, short description, publication date, and so on. + +Here's the structure in simple terms: + +```text +rss + | + +-- channel + | + +-- item + | + +-- item + | + +-- ... +``` + +Within the RSS envelope, this is indeed much like a document (in the ERP sense), with a header and items. + +Beyond news feeds like this, the archetypal use case for RSS (and indeed Atom) is for weblogs, where a channel represents a weblog and the items represent the individual posts. + +### Examine the Atom Syndication Format + +Atom is a format very similar to RSS, serving the same purpose, that came about for reasons that are not relevant for this tutorial (but see the Further Info section for a link to more background details if you're curious). It's known as the Atom Syndication Format, described in [RFC 4287](https://tools.ietf.org/html/rfc4287). + +Some may call Atom a successor to RSS. Unlike RSS, which is just a _format_ specification, Atom also has a related _protocol_ which we'll look at shortly. + +Like RSS, the vast majority of Atom use is as machine readable representations of weblogs. 
+ +Here's an example of the Atom format, representing the weblog at (the Atom feed itself is at ); just two entries are shown, for brevity: + +```xml + + + DJ Adams + Reserving the right to be wrong + + + 2026-01-14T00:00:00Z + https://qmacro.org/blog/ + + DJ Adams + + + Modules, modularity & reuse in CDS models - part 3 - publishing the simple reuse package + + 2026-01-14T00:00:00Z + https://qmacro.org/blog/posts/2026/01/14/modules-modularity-and-reuse-in-cds-models-part-3-publishing-the-simple-reuse-package/ + <p>(Get to all the parts in this series via the <a href="https://qmacro.org/blog/posts/2026/01/01/modules-modularity-and-reuse-in-cds-models/">series + post</a>.)</p><p>What we have from <a href="https://qmacro.org/blog/posts/2026/01/07/modules-modularity-and-reuse-in-cds-models-part-2-creating-a-simple-reuse-package/"> + ... (the rest of the blog post content) + + + + Modules, modularity & reuse in CDS models - part 2 - creating a simple reuse package + + 2026-01-07T00:00:00Z + https://qmacro.org/blog/posts/2026/01/07/modules-modularity-and-reuse-in-cds-models-part-2-creating-a-simple-reuse-package/ + <p>(Get to all the parts in this series via the <a href="https://qmacro.org/blog/posts/2026/01/01/modules-modularity-and-reuse-in-cds-models/">series post</a>.)</p> + <p>In <a href="https://qmacro.org/blog/posts/2026/01/01/modules-modularity-and-reuse-in-cds-models-part-1-an-introduction/#wrapping-up"> + ... (the rest of the blog post content) +1</a> + + + +``` + +Notice that in both RSS and Atom feeds, the item can contain just a short summary, as in the BBC World News RSS example, or the entire content, as in this weblog Atom example - although for brevity, again, the content has been elided. 
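To make the feed-and-entries shape concrete from a consumer's perspective, here's a small, hypothetical Node.js sketch that picks the entry titles out of a minimal Atom fragment. A real consumer would of course use a proper XML parser rather than a regular expression; this is just to show the structure at work:

```JavaScript
// Illustrative only: extract entry titles from a tiny, made-up Atom fragment.
const atom = `
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example feed</title>
  <entry><title>First post</title></entry>
  <entry><title>Second post</title></entry>
</feed>`

const titles = [...atom.matchAll(/<entry><title>([^<]+)<\/title>/g)]
  .map(match => match[1])

console.log(titles) // [ 'First post', 'Second post' ]
```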
+
+The Atom structure is very similar to that of RSS; apart from the element names themselves, the main difference is that there's no "envelope" element enclosing everything - the outermost element here is `<feed>`:
+
+```text
+feed
+ |
+ +-- entry
+ |
+ +-- entry
+ |
+ +-- ...
+```
+
+While the XML representation of RSS is not namespaced, Atom's is - notice the non-qualified (i.e. default) `xmlns` attribute in the opening `<feed>` tag:
+
+```xml
+<feed xmlns="http://www.w3.org/2005/Atom">
+```
+
+This, formally, is very much an Atom feed. Indeed, if you're curious, you may have opened that namespace URL to see what it is. If you haven't yet, go ahead now :-) .
+
+### Examine the Atom Publishing Protocol
+
+Complementing the Atom Syndication Format is a protocol, designed to enable the manipulation of data stored in Atom-formatted resources. This was originally designed for weblog authoring - using tools that spoke the protocol, one could create, edit and push posts to remote blogging systems for publication.
+
+Officially called the [Atom Publishing Protocol](https://tools.ietf.org/html/rfc5023), this protocol is alternatively known as AtomPub, or simply APP for short.
+
+AtomPub's Request For Comments (RFC) document ([RFC 5023](https://tools.ietf.org/html/rfc5023)) describes a series of standard operations that can be performed on entries in an Atom feed - in other words, operations on XML representations of blog posts in the form of `entry` elements.
+
+These operations are for listing (querying) multiple entries, and creating, editing, retrieving & deleting individual entries, and they correspond to the standard HTTP methods (GET, POST, PUT and DELETE).
+
+> And if you're thinking these operations sound familiar, you'd be on exactly the right track - the same five operations are defined in OData.
+
+The Atom Publishing Protocol specification also includes the concept of a service document that describes which collections of entries are available.
Here's an example of an Atom service document, from that same RFC document:
+
+```xml
+<?xml version="1.0" encoding="utf-8"?>
+<service xmlns="http://www.w3.org/2007/app"
+         xmlns:atom="http://www.w3.org/2005/Atom">
+  <workspace>
+    <atom:title>Main Site</atom:title>
+    <collection href="http://example.org/blog/main">
+      <atom:title>My Blog Entries</atom:title>
+      <categories href="http://example.com/cats/forMain.cats"/>
+    </collection>
+    <collection href="http://example.org/blog/pic">
+      <atom:title>Pictures</atom:title>
+      <accept>image/png</accept>
+      <accept>image/jpeg</accept>
+      <accept>image/gif</accept>
+    </collection>
+  </workspace>
+</service>
+```
+
+You will see that these fundamental building blocks of Atom are alive and well in the OData protocol today. In fact, this example Atom Publishing Protocol service document ... was taken from an [OData Specification Document](https://docs.oasis-open.org/odata/odata-atom-format/v4.0/cs01/odata-atom-format-v4.0-cs01.html#_Toc365464533).
+
+### Consider the basics of OData
+
+The ideas in Atom formed the foundation of OData. The [Wikipedia page](https://en.wikipedia.org/wiki/Open_Data_Protocol) has a good overview, but at a simple level, there is:
+
+- a service document describing the data available in a given OData service
+- the concept of entity sets and entities, which are direct parallels of feeds and entries, respectively, in Atom
+- a basic set of operations: Create, Read, Update, Delete and Query (commonly referred to as CRUD+Q)
+
+Since [OData has been under the stewardship of OASIS](https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=odata), it has [moved through a few iterations](https://github.com/qmacro/odata-specs/blob/master/overview.md), the most current major version of which is 4.
+
+### Look at the public Northwind service
+
+There is a [publicly available set of OData services](https://services.odata.org) maintained by the OASIS organization. They are known as the **Northwind** services because they offer a data set based on a business scenario that revolves around a company called **Northwind Traders**. This data set contains entities such as customers, products and suppliers.
+
+### Retrieve the Northwind service's service document
+
+Head to the OData V4 flavor of the Northwind service at <https://services.odata.org/V4/Northwind/Northwind.svc>.
+ +As is standard, the resource at an OData service's root URL is the service document, and it looks like this (some collections have been left out, for brevity): + +```xml + + + + Default + + Categories + + + Products + + + Suppliers + + + Territories + + + Alphabetical_list_of_products + + + +``` + +Compare this to the Atom service document earlier - from a structural point of view it's almost identical. From a content perspective, while the previous Atom service document example described blog content, this OData service document describes business entities such as products and suppliers. + +### Retrieve entity data + +The business entities are available from a resource address perspective by appending the name, such as `Suppliers`, onto the service's root URL. Go ahead and do this, to retrieve data for a handful of suppliers, by following this link: . + +Technically speaking in OData terms, by following this link you are performing an OData query operation on the `Suppliers` entity set, limiting the results to 3 via the `$top` system query option. We'll dig into these terms and much more in subsequent tutorials, but for now, take a look at what is returned, which is something like this: + +```json +{ + "@odata.context": "https://services.odata.org/V4/Northwind/Northwind.svc/$metadata#Suppliers", + "value": [ + { + "SupplierID": 1, + "CompanyName": "Exotic Liquids", + "ContactName": "Charlotte Cooper", + "ContactTitle": "Purchasing Manager", + "Address": "49 Gilbert St.", + "City": "London", + "Region": null, + "PostalCode": "EC1 4SD", + "Country": "UK", + "Phone": "(171) 555-2222", + "Fax": null, + "HomePage": null + }, + { + "SupplierID": 2, + "CompanyName": "New Orleans Cajun Delights", + "ContactName": "Shelley Burke", + "ContactTitle": "Order Administrator", + "Address": "P.O. 
Box 78934", + "City": "New Orleans", + "Region": "LA", + "PostalCode": "70117", + "Country": "USA", + "Phone": "(100) 555-4822", + "Fax": null, + "HomePage": "#CAJUN.HTM#" + }, + { + "SupplierID": 3, + "CompanyName": "Grandma Kelly's Homestead", + "ContactName": "Regina Murphy", + "ContactTitle": "Sales Representative", + "Address": "707 Oxford Rd.", + "City": "Ann Arbor", + "Region": "MI", + "PostalCode": "48104", + "Country": "USA", + "Phone": "(313) 555-5735", + "Fax": "(313) 555-3349", + "HomePage": null + } + ] +} +``` + +These days, particularly with OData V4, the default representation of entity set resources is JSON. With earlier versions of OData that default was XML ... specifically Atom flavored XML! In fact, some OData servers still support the Atom flavored XML, which we can explicitly ask for using HTTP's `Accept` header in the request. + +Try this now, if you have access to an HTTP client, such as `curl` on the command line, or [Postman](https://web.postman.co/) on the Web. + +If you want to use `curl`, try this: + +```shell +curl \ + --header 'Accept: application/atom+xml' \ + --url 'https://services.odata.org/V4/Northwind/Northwind.svc/Suppliers?$top=3' +``` + +If you're using Postman or similar, be sure to add the `Accept` header with the value `application/atom+xml`. 
+
+This should return the same supplier data resource, but in a different representation - Atom XML:
+
+```xml
+<feed xml:base="https://services.odata.org/V4/Northwind/Northwind.svc/" xmlns="http://www.w3.org/2005/Atom" xmlns:d="http://docs.oasis-open.org/odata/ns/data" xmlns:m="http://docs.oasis-open.org/odata/ns/metadata">
+  <id>https://services.odata.org/V4/Northwind/Northwind.svc/Suppliers</id>
+  <title type="text">Suppliers</title>
+  <updated>2026-01-20T15:32:43Z</updated>
+  <link rel="self" title="Suppliers" href="Suppliers"/>
+  <entry>
+    <id>https://services.odata.org/V4/Northwind/Northwind.svc/Suppliers(1)</id>
+    <category term="#NorthwindModel.Supplier" scheme="http://docs.oasis-open.org/odata/ns/scheme"/>
+    <link rel="edit" title="Supplier" href="Suppliers(1)"/>
+    <link rel="http://docs.oasis-open.org/odata/ns/related/Products" type="application/atom+xml;type=feed" title="Products" href="Suppliers(1)/Products"/>
+    <title/>
+    <updated>2026-01-20T15:32:43Z</updated>
+    <author>
+      <name/>
+    </author>
+    <content type="application/xml">
+      <m:properties>
+        <d:SupplierID m:type="Int32">1</d:SupplierID>
+        <d:CompanyName>Exotic Liquids</d:CompanyName>
+        <d:ContactName>Charlotte Cooper</d:ContactName>
+        <d:ContactTitle>Purchasing Manager</d:ContactTitle>
+        <d:Address>49 Gilbert St.</d:Address>
+        <d:City>London</d:City>
+        <d:Region m:null="true"/>
+        <d:PostalCode>EC1 4SD</d:PostalCode>
+        <d:Country>UK</d:Country>
+        <d:Phone>(171) 555-2222</d:Phone>
+        <d:Fax m:null="true"/>
+        <d:HomePage m:null="true"/>
+      </m:properties>
+    </content>
+  </entry>
+  <entry>
+    <id>https://services.odata.org/V4/Northwind/Northwind.svc/Suppliers(2)</id>
+    <category term="#NorthwindModel.Supplier" scheme="http://docs.oasis-open.org/odata/ns/scheme"/>
+    <link rel="edit" title="Supplier" href="Suppliers(2)"/>
+    <link rel="http://docs.oasis-open.org/odata/ns/related/Products" type="application/atom+xml;type=feed" title="Products" href="Suppliers(2)/Products"/>
+    <title/>
+    <updated>2026-01-20T15:32:43Z</updated>
+    <author>
+      <name/>
+    </author>
+    <content type="application/xml">
+      <m:properties>
+        <d:SupplierID m:type="Int32">2</d:SupplierID>
+        <d:CompanyName>New Orleans Cajun Delights</d:CompanyName>
+        <d:ContactName>Shelley Burke</d:ContactName>
+        <d:ContactTitle>Order Administrator</d:ContactTitle>
+        <d:Address>P.O. Box 78934</d:Address>
+        <d:City>New Orleans</d:City>
+        <d:Region>LA</d:Region>
+        <d:PostalCode>70117</d:PostalCode>
+        <d:Country>USA</d:Country>
+        <d:Phone>(100) 555-4822</d:Phone>
+        <d:Fax m:null="true"/>
+        <d:HomePage>#CAJUN.HTM#</d:HomePage>
+      </m:properties>
+    </content>
+  </entry>
+  <entry>
+    <id>https://services.odata.org/V4/Northwind/Northwind.svc/Suppliers(3)</id>
+    <category term="#NorthwindModel.Supplier" scheme="http://docs.oasis-open.org/odata/ns/scheme"/>
+    <link rel="edit" title="Supplier" href="Suppliers(3)"/>
+    <link rel="http://docs.oasis-open.org/odata/ns/related/Products" type="application/atom+xml;type=feed" title="Products" href="Suppliers(3)/Products"/>
+    <title/>
+    <updated>2026-01-20T15:32:43Z</updated>
+    <author>
+      <name/>
+    </author>
+    <content type="application/xml">
+      <m:properties>
+        <d:SupplierID m:type="Int32">3</d:SupplierID>
+        <d:CompanyName>Grandma Kelly's Homestead</d:CompanyName>
+        <d:ContactName>Regina Murphy</d:ContactName>
+        <d:ContactTitle>Sales Representative</d:ContactTitle>
+        <d:Address>707 Oxford Rd.</d:Address>
+        <d:City>Ann Arbor</d:City>
+        <d:Region>MI</d:Region>
+        <d:PostalCode>48104</d:PostalCode>
+        <d:Country>USA</d:Country>
+        <d:Phone>(313) 555-5735</d:Phone>
+        <d:Fax>(313) 555-3349</d:Fax>
+        <d:HomePage m:null="true"/>
+      </m:properties>
+    </content>
+  </entry>
+</feed>
+```
+
+Does this format and structure look familiar? Yes, of course it does - it's the same as the Atom Syndication Format example from earlier - a `<feed>` element containing `<entry>` elements, each with a `<content>` element where the actual item's data is to be found.
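Notice, too, how each entry's `<id>` addresses an individual supplier, by appending the key in parentheses to the entity set name - `Suppliers(1)`, `Suppliers(2)` and so on. A minimal sketch of how such entity URLs are composed (the `entity_url` function name is just for illustration):

```shell
# Compose the URL addressing a single entity: the key goes in
# parentheses directly after the entity set name.
service_root='https://services.odata.org/V4/Northwind/Northwind.svc'

entity_url() {
  printf '%s/%s(%s)\n' "$service_root" "$1" "$2"
}

entity_url Suppliers 2
# prints https://services.odata.org/V4/Northwind/Northwind.svc/Suppliers(2)
```

Following such a URL retrieves a single entity, rather than an entity set.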
+ +Indeed, the set of formal documents that form the complete OData specification includes the [OData Atom Format Version 4.0](https://docs.oasis-open.org/odata/odata-atom-format/v4.0/odata-atom-format-v4.0.html), which is currently at Committee Specification level 02, the introduction to which reads: + +> "The OData protocol is comprised of a set of specifications for representing and interacting with structured content. The core specification for the protocol is in OData-Protocol. The OData Atom Format specification extends the former by defining representations for OData requests and responses using an Atom format." + +The use of "OData-Protocol" is a reference to a related document in the OData V4 specification set, specifically the [OData Version 4.0 Part 1: Protocol](https://docs.oasis-open.org/odata/odata/v4.0/odata-v4.0-part1-protocol.html) specification. In the next tutorial you'll learn more about the protocol specification and how to navigate it. + +### Further info + +- [Accuracy and precision in language](https://qmacro.org/blog/posts/2024/01/22/accuracy-and-precision-in-language/) on the distinction between "blog" and "post" +- [Monday morning thoughts: OData](https://qmacro.org/blog/posts/2018/08/20/monday-morning-thoughts-odata/) has further details on the journey from RSS, through Atom and AtomPub, to OData diff --git a/tutorials/odata-dd-2-standards/odata-dd-2-standards.md b/tutorials/odata-dd-2-standards/odata-dd-2-standards.md new file mode 100644 index 0000000000..9985637f3f --- /dev/null +++ b/tutorials/odata-dd-2-standards/odata-dd-2-standards.md @@ -0,0 +1,246 @@ +--- +parser: v2 +author_name: DJ Adams +author_profile: https://github.com/qmacro +auto_validation: false +primary_tag: software-product>sap-business-technology-platform +tags: [ software-product>sap-business-technology-platform, topic>cloud, programming-tool>odata, tutorial>beginner ] +time: 20 +--- + +# Get to know the OData standards resources on the Web + +<!-- description --> 
Get acquainted with the OData standards and how to navigate them.
+
+## You will learn
+
+- What the important Web-based resources are for OData
+- How to make sense of them
+
+## Intro
+
+There are myriad resources on the Web about OData. After all, it's a standard that's been around since 2007. There are some key resources that are important to know about, and in this tutorial, you'll find out about them and become more comfortable navigating them.
+
+> This tutorial belongs to the OData Deep Dive mission, a re-write of the original. The re-write is a work in progress, so please proceed with caution! More info can be found in the blog post [OData Deep Dive rewrite in the open](https://qmacro.org/blog/posts/2026/02/02/odata-deep-dive-rewrite-in-the-open/).
+
+---
+
+### Understand OASIS and its role
+
+OASIS is one of the world's foremost non-profit standards bodies, and is a destination for the discussion, management and standardization of open protocols, such as OData. Within OASIS there are various programmes, one of which is the set of [Projects and Technical Committees](https://www.oasis-open.org/projects-committees/).
+
+Within the context of a Technical Committee, specifications are developed in the open, following a lightweight process of review, feedback, approval and publication. There is a Technical Committee in OASIS for OData - the [OASIS Open Data Protocol (OData) TC](https://groups.oasis-open.org/communities/tc-community-home2?CommunityKey=e7cac2a9-2d18-4640-b94d-018dc7d3f0e2).
+
+### Take a first look at the technical work produced by the OData TC
+
+Various work products are listed in the section titled "Technical Work Produced by the Committee" on [the main OData TC page](https://groups.oasis-open.org/communities/tc-community-home2?CommunityKey=e7cac2a9-2d18-4640-b94d-018dc7d3f0e2).
+
+That list can be somewhat overwhelming at first, which is partly due to:
+
+- OData's considerable core scope
+- use of other adjacent standards
+- dedicated attention to detail
+- the protocol's inherent extensibility
+
+It is also due to the review, feedback, approval and publication process. Although relatively lightweight, this process does mean that there are various stages that documents can be in, defined briefly in the process's [Specification Lifecycle](https://docs.oasis-open.org/templates/TCHandbook/content/tcprocess/standardsapprovalprocess/specificationlifecycle.htm). These stages are charted in this [OASIS Documents](https://github.com/qmacro/odata-specs/blob/master/overview.md) resource, which attempts to show the stages and progress of key OData standards documents over time.
+
+In that resource one can see the various stages represented, such as "cs" for Committee Specification, "cos" for Candidate OASIS Standard, "os" for OASIS Standard, and so on. These stage identifiers form part of a work product document's official name, such as:
+
+<https://docs.oasis-open.org/odata/odata/v4.01/os/part1-protocol/odata-v4.01-os-part1-protocol.html>
+
+Looking at this document as a typical example, what can we discern? Well:
+
+- it is in an HTML format
+- it represents part 1 of the OData standard
+- that part being about the protocol (as opposed to the URL conventions)
+- the standard is at version 4.01 (the latest)
+- the specification lifecycle stage is "os" i.e. the final one (OASIS Standard)
+
+We'll come back to this particular document later on in this tutorial.
+
+### Get an overview of the OData standards documents
+
+Going back to the list of work products, let's enumerate them and think briefly about what they are. Doing this is important if we want to be able to make sense of them, to navigate between and within them, and ultimately to find whatever we're looking for.
+
+Open up the [OData TC page](https://groups.oasis-open.org/communities/tc-community-home2?CommunityKey=e7cac2a9-2d18-4640-b94d-018dc7d3f0e2) in a window separate to this one and jump to the [Technical Work Produced by the Committee](https://groups.oasis-open.org/communities/tc-community-home2?CommunityKey=e7cac2a9-2d18-4640-b94d-018dc7d3f0e2#technical) section, ready to scroll through it as you work through this section.
+
+#### The main standard
+
+There's the main standard itself, the "OData Version 4.01 OASIS Standard". This comprises three distinct parts:
+
+- OData Version 4.01 Part 1: Protocol
+- OData Version 4.01 Part 2: URL Conventions
+- ABNF components: OData ABNF Construction Rules Version 4.01 and OData ABNF Test Cases Version 4.01
+
+The [Augmented Backus-Naur form](https://en.wikipedia.org/wiki/Augmented_Backus%E2%80%93Naur_form) (ABNF) resource is the formal grammar of the OData protocol, in a form that is common for such protocol format descriptions. There's the grammar itself in one file, and a series of test cases for the grammar in another. It's rare that you'll need to consult this.
+
+The other two documents that form the main standard, Parts 1 and 2, are where most things in this OData mission, and most things that you'll encounter in normal circumstances when working with OData, are described.
+
+Notice that each of these documents is available in different formats:
+
+- an authoritative source format
+- other formats (downstream from the source format)
+
+In this case, the authoritative source format for parts 1 and 2 is `.docx`, but for other OData standards documents it is sometimes `.md`. Practically, one can use the `.html` format, as `.docx` is not a format native to the open Web.
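Since the sibling formats of a given work product share the same basename, getting from the authoritative source URL to a downstream rendering is mechanical. A small sketch, assuming the `.docx`-to-`.html` sibling convention observed on the TC page:

```shell
# Swap a .docx extension for .html; assumes the downstream rendering
# lives alongside the authoritative source with the same basename.
html_variant() {
  echo "${1%.docx}.html"
}

html_variant '/odata/odata/v4.01/os/part1-protocol/odata-v4.01-os-part1-protocol.docx'
# prints /odata/odata/v4.01/os/part1-protocol/odata-v4.01-os-part1-protocol.html
```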
+ +#### CSDL as a supporting standard + +Alongside the main standard there are supporting standards, the first one of which is expressed in a pair of documents: + +- OData Common Schema Definition Language (CSDL) JSON Representation Version 4.01 OASIS Standard +- OData Common Schema Definition Language (CSDL) XML Representation Version 4.01 OASIS Standard + +These describe two representations of Common Schema Definition Language (CSDL), and we might be most familiar with the XML representation of CSDL, in that it's the EDMX in OData service metadata documents, such as [the one for the Northwind service we looked at in the previous tutorial](https://services.odata.org/V4/Northwind/Northwind.svc/$metadata). Each of these documents is also available in multiple formats, again, the `.docx` format being authoritative. + +#### JSON format as a supporting standard + +Next in the list of supporting standards is: + +- OData JSON Format Version 4.01 OASIS Standard + +This is the formal description of the JSON format used to represent resources such as entities and entity sets transmitted and received in the context of OData operations. In other words, it describes everything we need to know (and more) about OData JSON payloads. In earlier versions of the OData standard, a version of the XML-based Atom Syndication Format was used, but this has been largely superseded by this JSON format. 
+ +#### Extension standards + +The OData standard is extensible, and in that context, the next two standards documents listed in the work products section are in this category: + +- OData Extension for Data Aggregation Version 4.0 Committee Specification 03 +- OData Extension for Temporal Data Version 4.0 Committee Specification 01 + +Note that in addition to the multiple formats available for each extension standard, there are supplemental parts; as for the main OData standard, there's a pair of ABNF components (grammar construction rules, and test cases), and there's also a vocabulary, expressed in EDMX (and other formats) and containing annotation definitions relating to the given extension, definitions to be used in OData service metadata documents. + +#### Other supporting standards + +There is one more standard listed in the work products, that describes a method for retrying unsafe requests without incurring unintended side-effects. This is: + +- Repeatable Requests Version 1.0 Committee Specification 01 + +By now, we know the drill. This is a standards document that is available in various formats, and is at a particular specification lifecycle stage. + +#### Lifecycle stage identifiers + +In each case, the URLs pointing to the standards resources contain a path info section that indicates the stage that resource is in. The stage also appears in the final "filename" section. Let's have a look at the relative path info sections of the URLs representing the authoritative sources for each of the standards we've seen. 
+ +First, there are the main standards: + +- `/odata/odata/v4.01/os/part1-protocol/odata-v4.01-os-part1-protocol.docx` +- `/odata/odata/v4.01/os/part2-url-conventions/odata-v4.01-os-part2-url-conventions.docx` +- `/odata/odata/v4.01/os/abnf/` +- `/odata/odata-csdl-json/v4.01/os/odata-csdl-json-v4.01-os.docx` +- `/odata/odata-csdl-xml/v4.01/os/odata-csdl-xml-v4.01-os.docx` +- `/odata/odata-json-format/v4.01/os/odata-json-format-v4.01-os.docx` + +These are all "OASIS Standard" documents, each with the "os" stage identifier. + +Here are the rest of the standards: + +- `/odata/odata-data-aggregation-ext/v4.0/cs03/odata-data-aggregation-ext-v4.0-cs03.md` +- `/odata/odata-temporal-ext/v4.0/cs01/odata-temporal-ext-v4.0-cs01.docx` +- `/odata/repeatable-requests/v1.0/cs01/repeatable-requests-v1.0-cs01.docx` + +All these are at various iteration levels of the "Committee Specification" stage ("cs"), with the extension for data aggregation being at level 03, and the other two being at level 01. + +### Explore the document URL chains + +We now know about the different technical work artifacts and how they are manifested, with the document URLs, each of which: + +- describes a specific part of the overall OData standard +- is tied to a particular specification lifecycle stage +- is in a format indicated in the URL + +Now, as a final step in learning about the OData standards resources, we should make sure we understand the myriad specification URLs at the top of any given document. + +By way of an example, visit the (HTML version of the) [OData Version 4.01 Part 1: Protocol](https://docs.oasis-open.org/odata/odata/v4.01/os/part1-protocol/odata-v4.01-os-part1-protocol.html). The first section looks like this: + +![odata-protocol-doc-first-section](odata-protocol-doc-first-section.png) + +There are a lot of URLs here! But the pattern is straightforward, and following it is easy given what we now know. 
We should start by noticing the three groups: + +- This stage +- Previous stage +- Latest stage + +The URLs are there to allow the navigation forwards and backwards through time, through the different specification lifecycle stages the given document has been through. + +> We should ignore the `.docx` and `.pdf` formats for the purposes of this exploration, and focus only on the `.html` format, for easy navigation. Note that the screenshot shows these HTML links with the standard "visited" color purple (as opposed to the standard blue of "unvisited" links), indicating that these are indeed the key links that have been navigated, while this section of the tutorial was being written. + +The URLs for the "Latest stage" links do not contain any specification lifecycle indicators, and so can - and should - be used to generically and canonically refer to the latest stage version of the document in question. + +The URLs for the links in the "This stage" and "Previous stage" sections do contain specification lifecycle indicators, and as such, are pointers to specific stage versions. + +Thus requesting the resource at either of these two URLs ... 
will result in the same resource content: + +- [/odata/odata/v4.01/odata-v4.01-part1-protocol.html](https://docs.oasis-open.org/odata/odata/v4.01/odata-v4.01-part1-protocol.html) +- [/odata/odata/v4.01/os/part1-protocol/odata-v4.01-os-part1-protocol.html](https://docs.oasis-open.org/odata/odata/v4.01/os/part1-protocol/odata-v4.01-os-part1-protocol.html) + +Following the "Previous stage" HTML link: + +- [/odata/odata/v4.01/cs02/part1-protocol/odata-v4.01-cs02-part1-protocol.html](https://docs.oasis-open.org/odata/odata/v4.01/cs02/part1-protocol/odata-v4.01-cs02-part1-protocol.html) + +leads to an earlier specification lifecycle stage (Committee Specification 02), which, in turn, via its own "Previous stage" link, leads to the next earliest, which is at the specification lifecycle stage with the indicator "csprd06" - Committee Specification / Public Review Draft 06: + +- [/odata/odata/v4.01/csprd06/part1-protocol/odata-v4.01-csprd06-part1-protocol.html](https://docs.oasis-open.org/odata/odata/v4.01/csprd06/part1-protocol/odata-v4.01-csprd06-part1-protocol.html) + +And so on, back in time. 
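Given the consistent `/odata/<document>/<version>/<stage>/...` layout of these stage-specific paths, the lifecycle stage identifier can be picked out mechanically. A minimal sketch (which applies only to stage-specific paths - the "Latest stage" URLs have no stage segment):

```shell
# Extract the specification lifecycle stage identifier, which is the
# fourth path segment in stage-specific document paths.
stage_of() {
  echo "$1" | cut -d '/' -f 5
}

stage_of '/odata/odata/v4.01/os/part1-protocol/odata-v4.01-os-part1-protocol.html'           # os
stage_of '/odata/odata/v4.01/cs02/part1-protocol/odata-v4.01-cs02-part1-protocol.html'       # cs02
stage_of '/odata/odata/v4.01/csprd06/part1-protocol/odata-v4.01-csprd06-part1-protocol.html' # csprd06
```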
+ +We can visualize the "This", "Previous" and "Latest" link sets for the "OData Version 4.01 Part 1: Protocol" document like this: + +```text ++--------------+ +--------------------------------------+ +| | | | +| Latest |--->| OASIS Standard | +| stage | | | +| | | | ++--------------+ +--------------------------------------+ + | + previous + V + +--------------------------------------+ + | | + | Committee Specification 02 | + | | + | | + +--------------------------------------+ + | + previous + V + +--------------------------------------+ + | | + | Committee Specification Draft 06 | + | / Public Review Draft 06 | + | | + +--------------------------------------+ + | + previous + V + +--------------------------------------+ + | | + | Committee Specification Draft 05 | + | / Public Review Draft 05 | + | | + +--------------------------------------+ + | + previous + V + +--------------------------------------+ + | | + | Committee Specification 01 | + | | + | | + +--------------------------------------+ + | + previous + V + +--------------------------------------+ + | | + | ... 
| + | | + | | + +--------------------------------------+ +``` + +### Further info + +- [Guidelines for Visualizing Links](https://www.nngroup.com/articles/guidelines-for-visualizing-links/) +- [OASIS document tree](https://github.com/qmacro/odata-specs/blob/master/overview.md) +- [OData Vocabularies](https://github.com/oasis-tcs/odata-vocabularies) diff --git a/tutorials/odata-dd-2-standards/odata-protocol-doc-first-section.png b/tutorials/odata-dd-2-standards/odata-protocol-doc-first-section.png new file mode 100644 index 0000000000..ffaf821390 Binary files /dev/null and b/tutorials/odata-dd-2-standards/odata-protocol-doc-first-section.png differ diff --git a/tutorials/odata-dd-3-northbreeze/cap-start-page.png b/tutorials/odata-dd-3-northbreeze/cap-start-page.png new file mode 100644 index 0000000000..27195d9904 Binary files /dev/null and b/tutorials/odata-dd-3-northbreeze/cap-start-page.png differ diff --git a/tutorials/odata-dd-3-northbreeze/odata-dd-3-northbreeze.md b/tutorials/odata-dd-3-northbreeze/odata-dd-3-northbreeze.md new file mode 100644 index 0000000000..80f3247425 --- /dev/null +++ b/tutorials/odata-dd-3-northbreeze/odata-dd-3-northbreeze.md @@ -0,0 +1,115 @@ +--- +parser: v2 +author_name: DJ Adams +author_profile: https://github.com/qmacro +auto_validation: false +primary_tag: software-product>sap-business-technology-platform +tags: [ software-product>sap-business-technology-platform, topic>cloud, programming-tool>odata, tutorial>beginner ] +time: 20 +--- + +# Set up your own OData service + +<!-- description --> Get your own OData service up and running. + +## You will learn + +- How CAP can effortlessly provide OData services + +## Intro + +In order to take you through various OData concepts in the [OData Deep Dive](https://developers.sap.com/mission.odata-deep-dive.html) mission, we'll need an OData service. 
Running your own instance of the OData service upon which subsequent tutorials in this mission are based means that you can explore and try out more than just read-only concepts. The OData service is deliberately simple and is called Northbreeze, because it's based upon, and is a cut-down version of, the original [Northwind](https://services.odata.org/V4/Northwind/Northwind.svc/) service.
+
+The service is provided by a CAP server, and the definition is in a publicly available repository, along with the definition of a container image.
+
+> This tutorial belongs to the OData Deep Dive mission, a re-write of the original. The re-write is a work in progress, so please proceed with caution! More info can be found in the blog post [OData Deep Dive rewrite in the open](https://qmacro.org/blog/posts/2026/02/02/odata-deep-dive-rewrite-in-the-open/).
+
+You will need:
+
+- A Docker engine (in the form of Docker Desktop, Podman, or similar)
+
+> This tutorial is to receive further updates to include:
+>
+> - Making multi-arch images available
+> - Running as a dev container in VS Code and GitHub Codespaces
+> - Running in an SAP Business Application Studio dev space
+>
+> In the meantime, you can simply clone the repository that has the source for the container image and the CAP-based OData service and run it yourself, locally.
+
+---
+
+### Inspect the source for the OData service
+
+The OData service is deliberately simple and contains a small handful of entities in the domain model, and a single service made available via the OData V4 protocol.
+
+Head to <https://github.com/qmacro/odata-dd-server> and take a look around, especially in the CAP project directory [northbreeze](https://github.com/qmacro/odata-dd-server/tree/main/northbreeze).
+ +### Run as a Docker container + +Use the `docker` command line tool to pull the image which contains the Northbreeze CAP server, and run a container based upon it: + +```shell +docker run \ + --rm \ + --tty \ + --publish 4004:4004 \ + ghcr.io/qmacro/northbreeze +``` + +This should first emit `docker` output: + +```log +Unable to find image 'ghcr.io/qmacro/northbreeze:latest' locally +latest: Pulling from qmacro/northbreeze +c74c1b58c0fe: Pull complete +599d5b6b6766: Extracting [===>] ... +c9b629762372: ... +Digest: sha256:c2678197eb57da768edee4184901be3fa96d4c894a3396d09a2e5e36a1c91c42 +Status: Downloaded newer image for ghcr.io/qmacro/northbreeze:latest +``` + +Then the CAP server output should appear: + +```log +cds serve all --with-mocks --in-memory? +( live reload enabled for browsers ) + + ___________________________ + +[cds] - loaded model from 2 file(s): + + northbreeze/srv/main.cds + northbreeze/db/schema.cds + +[cds] - using bindings from: { registry: '~/.cds-services.json' } +[cds] - connect to db > sqlite { database: ':memory:' } + > init from northbreeze/db/data/northbreeze-Suppliers.csv + > init from northbreeze/db/data/northbreeze-Products.csv + > init from northbreeze/db/data/northbreeze-Categories.csv +/> successfully deployed to in-memory database. + +[cds] - using auth strategy { kind: 'mocked' } +[cds] - serving Main { + at: [ '/northbreeze' ], + decl: 'northbreeze/srv/main.cds:4' +} +[cds] - server listening on { url: 'http://localhost:4004' } +[cds] - server v9.7.0 launched in 260 ms +[cds] - [ terminate with ^C ] +``` + +At this point, visit <http://localhost:4004>, where the default start page will be displayed: + +![Default CAP server start page](cap-start-page.png) + +Links to the Northbreeze OData service document, metadata document, and entitysets for the three available entities are shown. 
+ +### Visit the publicly available read-only service + +In case you haven't yet got round to running your own instance of the Northbreeze service, there's a publicly available instance that is fully read-only. So you can try out any of the read-only activities in subsequent tutorials in this mission (for other activities you will have to set up and run your own, as described in this tutorial). + +Head over to <https://odd.cfapps.eu10.hana.ondemand.com/> ("odd" is short for "OData Deep Dive") to see the start page, and take a look around. + +### Further info + +- [Capire](https://cap.cloud.sap/docs/) - the official CAP documentation diff --git a/tutorials/odata-dd-4-metadata/metadata-document.png b/tutorials/odata-dd-4-metadata/metadata-document.png new file mode 100644 index 0000000000..c75d3d2411 Binary files /dev/null and b/tutorials/odata-dd-4-metadata/metadata-document.png differ diff --git a/tutorials/odata-dd-4-metadata/odata-dd-4-metadata.md b/tutorials/odata-dd-4-metadata/odata-dd-4-metadata.md new file mode 100644 index 0000000000..2e84647c42 --- /dev/null +++ b/tutorials/odata-dd-4-metadata/odata-dd-4-metadata.md @@ -0,0 +1,420 @@ +--- +parser: v2 +author_name: DJ Adams +author_profile: https://github.com/qmacro +auto_validation: false +primary_tag: software-product>sap-business-technology-platform +tags: [ software-product>sap-business-technology-platform, topic>cloud, programming-tool>odata, tutorial>beginner ] +time: 30 +--- + +# Learn how to read OData metadata documents + +<!-- description --> Take a tour of a simple metadata document and get to know its key sections. + +## You will learn + +- What the metadata document looks like +- What it describes +- How to navigate it + +## Intro + +A key resource in any OData service is its metadata document. 
In this tutorial you'll take a tour of a simple metadata document (the one for the Northbreeze service introduced in the [previous tutorial in this mission](https://developers.sap.com/tutorials/odata-dd-3-northbreeze.html)).
+
+Throughout this tutorial you should endeavor to use your own instance of the Northbreeze service (see the previous [Northbreeze](https://developers.sap.com/tutorials/odata-dd-3-northbreeze.html) tutorial); for illustration purposes, URLs for the publicly available instance will be used here.
+
+> This tutorial belongs to the OData Deep Dive mission, a re-write of the original. The re-write is a work in progress, so please proceed with caution! More info can be found in the blog post [OData Deep Dive rewrite in the open](https://qmacro.org/blog/posts/2026/02/02/odata-deep-dive-rewrite-in-the-open/).
+
+---
+
+<!-- 1 -->
+### Retrieve the Northbreeze metadata document
+
+Head over to your Northbreeze service and request the metadata document resource. The URL is formed from the OData service root:
+
+<https://odd.cfapps.eu10.hana.ondemand.com/northbreeze>
+
+with `$metadata` added as a further path segment:
+
+<https://odd.cfapps.eu10.hana.ondemand.com/northbreeze/$metadata>
+
+<!-- 2 -->
+### Take a first look at the content
+
+Initially the content of this resource can be a little overwhelming. Here's what the first part looks like:
+
+![Part of the metadata document](metadata-document.png)
+
+But if we [stare at it](https://qmacro.org/blog/posts/2017/02/19/the-beauty-of-recursion-and-list-machinery/#initial-recognition) for long enough, it becomes less overwhelming and we start to see the structure.
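By the way, if you request the metadata document URL from a shell rather than a browser, remember that `$metadata` starts with a `$`, which the shell will otherwise treat as introducing a variable. A small sketch of constructing the URL safely (the single quotes protect the `$`):

```shell
# Build the metadata document URL from the service root; the single
# quotes stop the shell expanding $metadata as a variable.
service_root='https://odd.cfapps.eu10.hana.ondemand.com/northbreeze'
metadata_url="$service_root"'/$metadata'

echo "$metadata_url"
```

The resulting value in `metadata_url` can then be passed, quoted, to an HTTP client such as `curl`.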
+ +<!-- 3 --> +### Consider the high level XML structure + +Regard this drastically reduced version of the entire metadata document XML structure: + +```xml +<?xml version="1.0" encoding="utf-8"?> +<edmx:Edmx Version="4.0" xmlns:edmx="http://docs.oasis-open.org/odata/ns/edmx"> + <edmx:Reference Uri="https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Capabilities.V1.xml"></edmx:Reference> + <edmx:Reference Uri="https://sap.github.io/odata-vocabularies/vocabularies/Common.xml"></edmx:Reference> + <edmx:Reference Uri="https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Core.V1.xml"></edmx:Reference> + <edmx:DataServices> + <Schema Namespace="Main" xmlns="http://docs.oasis-open.org/odata/ns/edm"> + <Annotation Term="Core.Links"></Annotation> + <EntityContainer Name="EntityContainer"> + <EntitySet Name="Products" EntityType="Main.Products"></EntitySet> + <EntitySet Name="Categories" EntityType="Main.Categories"></EntitySet> + <EntitySet Name="Suppliers" EntityType="Main.Suppliers"></EntitySet> + </EntityContainer> + <EntityType Name="Products"></EntityType> + <EntityType Name="Categories"></EntityType> + <EntityType Name="Suppliers"></EntityType> + <Annotations Target="Main.EntityContainer/Categories"> + <Annotation Term="Capabilities.DeleteRestrictions"></Annotation> + </Annotations> + </Schema> + </edmx:DataServices> +</edmx:Edmx> +``` + +It allows us to see the overall structure of this resource, and start to feel a bit more comfortable navigating it. Being XML, the first thing we see is the XML declaration (`<?xml ...?>` - see the [Key terminology](https://en.wikipedia.org/wiki/XML#Key_terminology) section of the XML page on Wikipedia), and then we have the document itself. 
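One way to arrive at such a reduced view yourself is with simple text tooling. As a rough sketch only - `grep` is not an XML parser, but it serves for a first look - here's how the entity set names could be pulled out of an entity container fragment like the one above:

```shell
# List the entity set names in an (inlined) entity container fragment.
# grep/cut are a rough first-look sketch, not a real way to parse XML.
sets=$(cat <<'EOF' | grep -o 'EntitySet Name="[^"]*"' | cut -d '"' -f 2
<EntityContainer Name="EntityContainer">
  <EntitySet Name="Products" EntityType="Main.Products"/>
  <EntitySet Name="Categories" EntityType="Main.Categories"/>
  <EntitySet Name="Suppliers" EntityType="Main.Suppliers"/>
</EntityContainer>
EOF
)

echo "$sets"
```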
+
+The outermost (or "root") element is `Edmx`, which has:
+
+- a `Version` attribute which reflects the OData version
+- a namespace prefix
+- a namespace declaration
+
+It also contains, as children:
+
+- a number of references to vocabularies
+- a single `DataServices` element
+
+The primary area of interest to us in any metadata document is the content within the `DataServices` element, as that's [where the rubber meets the road](https://en.wiktionary.org/wiki/the_rubber_meets_the_road) with respect to what the OData service represents for us as architects or developers. But it helps if we are comfortable with the rest of the document, the "context" for the content of the `DataServices` element so to speak, if only to be able to mentally put it aside, move past it and get to what we're looking for.
+
+So we will look briefly at namespaces in the next step. We'll look at OData vocabularies & annotations in the next couple of tutorials.
+
+<!-- 4 -->
+### Understand the XML namespaces
+
+While not critical to getting to the heart of what the metadata document conveys, it's worth dwelling for a moment on all those element name prefixes (such as the `edmx` part of `<edmx:Edmx>`, `<edmx:Reference>` and so on).
+
+These are artifacts relating to the use of XML namespaces.
+
+> There are actually two different types of namespaces at play in these OData metadata document resources. There are the XML namespaces, which are the subject of this step. There are also OData namespaces. These are found in `Namespace` attributes of the `<edmx:Include>` and `<Schema>` elements. We'll look at the schema namespace in a later step in this tutorial, and at the namespaces in the `<edmx:Include>` elements in the next tutorial.
+ +For the usual reasons, namespaces are used in XML to compartmentalize element and attribute names, which allow the use of various XML vocabularies (not to be confused with the OData vocabularies which we'll look at next) together in a single document, without element and attribute name collisions. + +These XML namespaces are declared with `xmlns` attributes, which are either in the pure `xmlns` form, or in a `xmlns:prefix` form. The first form is how a default namespace is declared, the second is how non-default (named) namespaces are declared. Any element can be specified with a namespace prefix (such as `edmx:Reference`) or without (such as `<Schema>`). Elements without a specific namespace prefix are considered to belong to the default namespace. + +So, if we look again at the entire XML structure, differently compacted this time: + +```xml +<?xml version="1.0" encoding="utf-8"?> +<edmx:Edmx Version="4.0" xmlns:edmx="http://docs.oasis-open.org/odata/ns/edmx"> + <edmx:Reference Uri="https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Capabilities.V1.xml"> + <edmx:Include Alias="Capabilities" Namespace="Org.OData.Capabilities.V1"/> + </edmx:Reference> + <edmx:DataServices> + <Schema Namespace="Main" xmlns="http://docs.oasis-open.org/odata/ns/edm"> + <Annotation Term="Core.Links"> ... </Annotation> + <EntityContainer Name="EntityContainer"> + <EntitySet Name="Products" EntityType="Main.Products"> ... </EntitySet> + </EntityContainer> + <EntityType Name="Products"> ... </EntityType> + </Schema> + </edmx:DataServices> +</edmx:Edmx> +``` + +we see that there are two XML namespaces at play, a named one (i.e. 
using a prefix) and a default one:
+
+Namespace|Prefix|Covers
+-|-|-
+`http://docs.oasis-open.org/odata/ns/edmx`|`edmx`|`Edmx`, `Reference`, `Include`, `DataServices`
+`http://docs.oasis-open.org/odata/ns/edm`|(default)|`Schema`, `EntityContainer`, `EntitySet`, `EntityType` etc
+
+As the primary area of interest in such resources is what's in the `DataServices` section (the entity type definitions, the entitysets, annotations and so on) it makes sense to specify the namespace that encompasses the elements that are used for such definitions ... as the default, affording clarity in such declarations (i.e. less "busy", as the element names aren't prefixed).
+
+<!-- 5 -->
+### Learn about the DataServices context
+
+To understand the context of the `DataServices` element, let's use what we learned in the [Standards](https://developers.sap.com/tutorials/odata-dd-2-standards.html) tutorial in this mission on navigating OData standards documents.
+
+We should refer to the OData standards document "OData Version 4.0. Part 3: Common Schema Definition Language (CSDL)", the latest version being available at the canonical URL <https://docs.oasis-open.org/odata/odata/v4.0/odata-v4.0-part3-csdl.html>, which brings us specifically to the "OASIS Standard Plus Errata 03" version which has its own URL <https://docs.oasis-open.org/odata/odata/v4.0/errata03/os/complete/part3-csdl/odata-v4.0-errata03-os-part3-csdl-complete.html>.
+
+The document's "Abstract" section tells us that we're on the right track:
+
+> "OData services are described by an Entity Data Model (EDM). The Common Schema Definition Language (CSDL) defines an XML representation of the entity data model exposed by an OData service."
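Such an XML representation can also be navigated programmatically, provided we handle the two XML namespaces described in the previous step. Here's a minimal, hypothetical sketch using only Python's standard library; the cut-down metadata string, the `ns` mapping and the prefix names in it are our own choices for illustration (only the namespace URIs must match the document):

```python
import xml.etree.ElementTree as ET

# A heavily reduced version of the metadata document shown earlier.
metadata = """\
<edmx:Edmx Version="4.0" xmlns:edmx="http://docs.oasis-open.org/odata/ns/edmx">
  <edmx:DataServices>
    <Schema Namespace="Main" xmlns="http://docs.oasis-open.org/odata/ns/edm">
      <EntityContainer Name="EntityContainer"/>
    </Schema>
  </edmx:DataServices>
</edmx:Edmx>"""

# Map our own prefixes to the two XML namespace URIs; the prefix names
# are a local choice and need not match those used in the document.
ns = {
    "edmx": "http://docs.oasis-open.org/odata/ns/edmx",
    "edm": "http://docs.oasis-open.org/odata/ns/edm",
}

root = ET.fromstring(metadata)
schemas = root.findall("edmx:DataServices/edm:Schema", ns)
print(root.get("Version"), [s.get("Namespace") for s in schemas])
# prints: 4.0 ['Main']
```

Note that the elements in the default XML namespace (such as `Schema`) still need a prefix in our code's search paths; "default" is a property of the document's serialization, not of the namespace itself.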
+ +In this document, [section 3 Entity Model Wrapper](https://docs.oasis-open.org/odata/odata/v4.0/errata03/os/complete/part3-csdl/odata-v4.0-errata03-os-part3-csdl-complete.html#_Toc453752500) tells us all about this context: + +- the root `edmx:Edmx` element (a) is mandatory and (b) must contain a single `edmx:DataServices` element +- that single `edmx:DataServices` element must contain one or more `edm:Schema` elements +- it is in these `edm:Schema` elements that our OData service schemas (service and entity detail) are defined + +In our case, there's one schema, and therefore a single `edm:Schema` element. + +<!-- 6 --> +### Get acquainted with the schema element + +> The `edm` prefix to the `Schema` element name here is from the documentation; in our particular metadata document the namespace represented by this prefix, `http://docs.oasis-open.org/odata/ns/edm`, is defined as the default (see the previous step). From now on, element names in the standards document that are prefixed with `edm` will be written here without the prefix, to stay close to our specific metadata document. + +To become acquainted with the `Schema` element, we can now jump to [section 5 Schema](https://docs.oasis-open.org/odata/odata/v4.0/errata03/os/complete/part3-csdl/odata-v4.0-errata03-os-part3-csdl-complete.html#_Toc453752520) to know what to expect inside it. 
The section tells us to expect one or more of the following elements: + +- `Action` +- `Annotations` +- `Annotation` +- `ComplexType` +- `EntityContainer` +- `EntityType` +- `EnumType` +- `Function` +- `Term` +- `TypeDefinition` + +If we inspect what's in our `Schema`, we see these elements at the next level: + +- `Annotation` (and `Annotations`) +- `EntityContainer` +- `EntityType` + +Visualizing our path through this metadata document, we've now found our way to what we really want to know: + +```text ++------+ +| Edmx |--+ ++------+ | + | + +--------------+ + | DataServices |--+ + +--------------+ | + | + +--------+ + | Schema | + +--------+ + | + +---------+-------+------------------+ + | | | + +-----------------+ +------------+ +------------+ + | EntityContainer | | EntityType |-+ | Annotation |-+ + +-----------------+ +------------+ | +------------+ | + +------------+ +------------+ +``` + +Note that there is only a single entity container (see [section 13 Entity Container](https://docs.oasis-open.org/odata/odata/v4.0/errata03/os/complete/part3-csdl/odata-v4.0-errata03-os-part3-csdl-complete.html#_Toc453752596)), but multiple entity types and annotations. + +Annotations are covered in a subsequent tutorial, so that leaves the `EntityContainer` and `EntityType` elements. Let's take these one at a time to round out our brief look at the metadata document. + +<!-- 7 --> +### Take a brief look at the OData namespace + +Before we do, we should make a note of one more thing at this level, and that's the `Namespace` attribute in the `<Schema>` element: + +```xml +<Schema Namespace="Main" xmlns="http://docs.oasis-open.org/odata/ns/edm"> + ... 
+</Schema>
+```
+
+Both attributes of this element relate to namespaces:
+
+- `xmlns` gives us the XML namespace (as discussed previously)
+- `Namespace` is an OData mechanism
+
+That OData namespace mechanism is described in the aforementioned CSDL standards document, in [section 5.1.1 Attribute namespace](https://docs.oasis-open.org/odata/odata/v4.0/errata03/os/complete/part3-csdl/odata-v4.0-errata03-os-part3-csdl-complete.html#_Toc453752522), thus:
+
+> "A schema is identified by a namespace. All `edm:Schema` elements MUST have a namespace defined through a `Namespace` attribute which MUST be unique within the document, and SHOULD be globally unique ... It is combined with the name of elements in the entity model to create unique qualified names ..."
+
+The value of this `Namespace` attribute is `Main` (this OData service is served from a CAP server, which generates the value from the service name).
+
+This value is indeed unique within the metadata document itself, but it is certainly not globally unique.
+ +This is fine according to the "MUST" and "SHOULD" terms, which are defined according to [RFC2119](https://www.ietf.org/rfc/rfc2119.txt) (see [section 1.1 Terminology](https://docs.oasis-open.org/odata/odata/v4.0/errata03/os/complete/part3-csdl/odata-v4.0-errata03-os-part3-csdl-complete.html#_Toc453752492)): + +- "MUST" means that the definition is an absolute requirement (which is fulfilled, here) +- "SHOULD" means that the definition is a recommendation + +Incidentally, here are some values for this OData namespace definition from similar services curated and maintained by OASIS: + +- [Northwind](https://services.odata.org/V4/Northwind/Northwind.svc/$metadata): + + ```xml + <Schema + xmlns="http://docs.oasis-open.org/odata/ns/edm" + Namespace="NorthwindModel"> + ``` + +- [TripPin](https://services.odata.org/V4/TripPinServiceRW/$metadata): + + ```xml + <Schema + xmlns="http://docs.oasis-open.org/odata/ns/edm" + Namespace="Microsoft.OData.SampleService.Models.TripPin"> + ``` + +Let's have a brief look at where this `Main` OData namespace is used. Here's another drastically reduced version of the entire XML document, showing where `Main` is found: + +```xml +<edmx:Edmx Version="4.0" xmlns:edmx="http://docs.oasis-open.org/odata/ns/edmx"> + <edmx:DataServices> + <Schema Namespace="Main" xmlns="http://docs.oasis-open.org/odata/ns/edm"> + <EntityContainer Name="EntityContainer"> + <EntitySet Name="Products" EntityType="Main.Products"> ... </EntitySet> + <EntitySet Name="Categories" EntityType="Main.Categories"> ... </EntitySet> + <EntitySet Name="Suppliers" EntityType="Main.Suppliers"> ... </EntitySet> + </EntityContainer> + <EntityType Name="Products"> + ... + <NavigationProperty Name="Category" Type="Main.Categories" Partner="Products"> ... </NavigationProperty> + <NavigationProperty Name="Supplier" Type="Main.Suppliers" Partner="Products"> ... </NavigationProperty> + </EntityType> + <EntityType Name="Categories"> + ... 
+ <NavigationProperty Name="Products" Type="Collection(Main.Products)" Partner="Category"/> + </EntityType> + ... + <Annotations Target="Main.EntityContainer/Categories"> + <Annotation Term="Capabilities.DeleteRestrictions"> ... </Annotation> + ... + </Annotations> + </Schema> + </edmx:DataServices> +</edmx:Edmx> +``` + +The `Main` namespace is used to prefix the "variable building blocks" of the schema when referencing them. So: + +- the `EntitySet` "Products" refers to `Main.Products` as the type of the entity contained +- within the corresponding `EntityType` definition we see that the `NavigationProperty` "Category" is of type `Main.Categories` + +Following that to the "Categories" `EntityType` we see that: + +- there's another `NavigationProperty` "Products" that leads back to the `Main.Products` type, this time in a `Collection( ... )` expression (denoting a zero-or-more relationship) + +Finally: + +- annotations have targets, which are expressed as containees (such as the "Products" `EntitySet`) of the `Main.EntityContainer` element + +<!-- 8 --> +### Consider the entity container + +The `EntityContainer` element within the `Schema` represents the "shop front" of the OData service. It's here that the entitysets are declared. If there are any actions or functions defined in the service, they would be found listed here too, via `<ActionImport>` and `<FunctionImport>` elements respectively. + +If there are any relationships emanating from the `EntityType` upon which an `EntitySet` is based, these are declared in that definition. 
For example, the "Products" `EntityType` has relationships with "Categories" and "Suppliers", as declared with the `NavigationPropertyBinding` elements within the "Products" `EntitySet`: + +```xml +<EntitySet Name="Products" EntityType="Main.Products"> + <NavigationPropertyBinding Path="Category" Target="Categories"/> + <NavigationPropertyBinding Path="Supplier" Target="Suppliers"/> +</EntitySet> +``` + +Thus the `EntityContainer` is a great machine-readable overview of the entire service. At a high level (from an entityset perspective), it corresponds with the content of the service document, which looks like this: + +```json +{ + "@odata.context": "$metadata", + "@odata.metadataEtag": "...", + "value": [ + { + "name": "Products", + "url": "Products" + }, + { + "name": "Categories", + "url": "Categories" + }, + { + "name": "Suppliers", + "url": "Suppliers" + } + ] +} +``` + +<!-- 9 --> +### Take a look at the entity type definitions + +Last but certainly not least, we come to the `EntityType` elements within the `Schema`. These correspond directly to the business objects, or entities, in our data model, and as such, are described using elements that reflect such detail. 
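Incidentally, the correspondence just described between the entity container and the service document can be demonstrated mechanically. Here's a minimal, hypothetical sketch (Python standard library only); the reduced XML string is ours, based on the container shown in this tutorial:

```python
import json
import xml.etree.ElementTree as ET

# Reduced metadata containing just the entity container and its entitysets.
metadata = """\
<edmx:Edmx Version="4.0" xmlns:edmx="http://docs.oasis-open.org/odata/ns/edmx">
  <edmx:DataServices>
    <Schema Namespace="Main" xmlns="http://docs.oasis-open.org/odata/ns/edm">
      <EntityContainer Name="EntityContainer">
        <EntitySet Name="Products" EntityType="Main.Products"/>
        <EntitySet Name="Categories" EntityType="Main.Categories"/>
        <EntitySet Name="Suppliers" EntityType="Main.Suppliers"/>
      </EntityContainer>
    </Schema>
  </edmx:DataServices>
</edmx:Edmx>"""

ns = {
    "edmx": "http://docs.oasis-open.org/odata/ns/edmx",
    "edm": "http://docs.oasis-open.org/odata/ns/edm",
}

container = ET.fromstring(metadata).find(".//edm:EntityContainer", ns)

# Each EntitySet in the container becomes a {name, url} entry, just as
# in the "value" array of the service document.
value = [
    {"name": es.get("Name"), "url": es.get("Name")}
    for es in container.findall("edm:EntitySet", ns)
]
print(json.dumps({"value": value}, indent=2))
```

The output mirrors the `value` array of the service document shown above (minus the `@odata.*` control information, which the server adds). With that aside, back to the entity types.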
Let's take one of the types as an example:
+
+```xml
+<EntityType Name="Products">
+  <Key>
+    <PropertyRef Name="ProductID"/>
+  </Key>
+  <Property Name="ProductID" Type="Edm.Int32" Nullable="false"/>
+  <Property Name="ProductName" Type="Edm.String"/>
+  <Property Name="QuantityPerUnit" Type="Edm.String"/>
+  <Property Name="UnitPrice" Type="Edm.Decimal" Scale="variable"/>
+  <NavigationProperty Name="Category" Type="Main.Categories" Partner="Products">
+    <ReferentialConstraint Property="Category_CategoryID" ReferencedProperty="CategoryID"/>
+  </NavigationProperty>
+  <Property Name="Category_CategoryID" Type="Edm.Int32"/>
+  <NavigationProperty Name="Supplier" Type="Main.Suppliers" Partner="Products">
+    <ReferentialConstraint Property="Supplier_SupplierID" ReferencedProperty="SupplierID"/>
+  </NavigationProperty>
+  <Property Name="Supplier_SupplierID" Type="Edm.Int32"/>
+  <Property Name="UnitsInStock" Type="Edm.Int32"/>
+  <Property Name="UnitsOnOrder" Type="Edm.Int32"/>
+  <Property Name="ReorderLevel" Type="Edm.Int32"/>
+  <Property Name="Discontinued" Type="Edm.Boolean"/>
+</EntityType>
+```
+
+The `EntityType` clearly defines:
+
+- the individual properties and their [types](https://docs.oasis-open.org/odata/odata/v4.0/errata03/os/complete/part3-csdl/odata-v4.0-errata03-os-part3-csdl-complete.html#_Toc453752517)
+- which property or properties make up the entity's key structure
+- the navigation properties, their relationship conditions, and the type of the navigation (relation)'s target
+
+> Some of the `Property` elements have more information in further attributes (such as `Nullable="false"` for key properties, and `Scale="variable"` indicating that the number of decimal places can vary).
+
+Mostly these definitions are self-explanatory, but it's worth digging a little into the detail of navigation properties.
Let's take the "Category" one here, and also look at the relevant part of the "Category" `EntityType`: + +```xml +<Schema Namespace="Main" xmlns="http://docs.oasis-open.org/odata/ns/edm"> + ... + <EntityType Name="Products"> + ... + <NavigationProperty Name="Category" Type="Main.Categories" Partner="Products"> + <ReferentialConstraint Property="Category_CategoryID" ReferencedProperty="CategoryID"/> + </NavigationProperty> + <Property Name="Category_CategoryID" Type="Edm.Int32"/> + ... + </EntityType> + <EntityType Name="Categories"> + <Key> + <PropertyRef Name="CategoryID"/> + </Key> + <Property Name="CategoryID" Type="Edm.Int32" Nullable="false"/> + <NavigationProperty Name="Products" Type="Collection(Main.Products)" Partner="Category"/> + </EntityType> + ... +</Schema> +``` + +If we stare at this for a moment, this relation is expressed beautifully: + +```text ++----------+ N:1 +------------+ +| Products |------| Categories | ++----------+ +------------+ +``` + +Looking at the "Products" `EntityType`, we see that: + +- there is a `Property` "Category_CategoryID" to hold the actual category foreign key +- there is also a `NavigationProperty` "Category" [along which we can traverse the relationship](https://odd.cfapps.eu10.hana.ondemand.com/northbreeze/Products?$top=3&$expand=Category) with, say, the OData system query option `$expand` +- the `ReferentialConstraint` that is contained within (and therefore qualifies) the `NavigationProperty` says that the value of "Category_CategoryID" must equal the value of the property "CategoryID" in the target ("Main.Categories") + +Looking at the "Categories" `EntityType`, we see that: + +- the "CategoryID" property is indeed the key property +- there's a reverse `NavigationProperty` "Products" but the type is a _collection_ of `Main.Products` signifying a to-many relationship (as in "N" in the relation diagram above) + +### Further info + +- [Serving OData APIs](https://cap.cloud.sap/docs/guides/protocols/odata) in Capire +- 
[ABAP RESTful Application Programming Model](https://help.sap.com/docs/abap-cloud/abap-rap/abap-restful-application-programming-model)
diff --git a/tutorials/odata-dd-5-vocabularies/oasis-vocabularies-toc.png b/tutorials/odata-dd-5-vocabularies/oasis-vocabularies-toc.png
new file mode 100644
index 0000000000..04e1bcc579
Binary files /dev/null and b/tutorials/odata-dd-5-vocabularies/oasis-vocabularies-toc.png differ
diff --git a/tutorials/odata-dd-5-vocabularies/odata-dd-5-vocabularies.md b/tutorials/odata-dd-5-vocabularies/odata-dd-5-vocabularies.md
new file mode 100644
index 0000000000..8b283e866a
--- /dev/null
+++ b/tutorials/odata-dd-5-vocabularies/odata-dd-5-vocabularies.md
@@ -0,0 +1,257 @@
+---
+parser: v2
+author_name: DJ Adams
+author_profile: https://github.com/qmacro
+auto_validation: false
+primary_tag: software-product>sap-business-technology-platform
+tags: [ software-product>sap-business-technology-platform, topic>cloud, programming-tool>odata, tutorial>beginner ]
+time: 20
+---
+
+# Understand how vocabularies are used in OData metadata documents
+
+<!-- description --> Metadata can be extended, using content organized into vocabularies.
+
+## You will learn
+
+- How vocabularies are used to organize annotations
+- How they're included within an OData metadata document
+
+## Intro
+
+The members of the OData Technical Committee have worked hard on OData as a robust, well-defined and extensible standard. In this tutorial we'll look at a key extensibility mechanism in the form of vocabularies, and how they're used to organize annotations.
+
+> This tutorial belongs to the OData Deep Dive mission, a re-write of the original. The re-write is a work in progress, please proceed with caution! More info can be found in the blog post [OData Deep Dive rewrite in the open](https://qmacro.org/blog/posts/2026/02/02/odata-deep-dive-rewrite-in-the-open/).
+ +--- + +### Examine the rest of the entity model wrapper + +In the previous [Metadata](https://developers.sap.com/tutorials/odata-dd-4-metadata.html) tutorial we saw how the schema was presented within a context. That context is called the [entity model wrapper](https://docs.oasis-open.org/odata/odata/v4.0/errata03/os/complete/part3-csdl/odata-v4.0-errata03-os-part3-csdl-complete.html#_Toc453752500). We left the examination of part of that wrapper - the references to vocabularies - to this tutorial. Let's start by digging into those now. + +Here's what the relevant section of the wrapper in [the Northbreeze OData service's metadata document](https://odd.cfapps.eu10.hana.ondemand.com/northbreeze/$metadata) looks like: + +```xml +<edmx:Edmx Version="4.0" xmlns:edmx="http://docs.oasis-open.org/odata/ns/edmx"> + <edmx:Reference + Uri="https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Capabilities.V1.xml"> + <edmx:Include Alias="Capabilities" Namespace="Org.OData.Capabilities.V1"/> + </edmx:Reference> + <edmx:Reference + Uri="https://sap.github.io/odata-vocabularies/vocabularies/Common.xml"> + <edmx:Include Alias="Common" Namespace="com.sap.vocabularies.Common.v1"/> + </edmx:Reference> + <edmx:Reference + Uri="https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Core.V1.xml"> + <edmx:Include Alias="Core" Namespace="Org.OData.Core.V1"/> + </edmx:Reference> + ... +</edmx:Edmx> +``` + +What are these references? Well, section [3.3 Element edmx:Reference](https://docs.oasis-open.org/odata/odata/v4.0/errata03/os/complete/part3-csdl/odata-v4.0-errata03-os-part3-csdl-complete.html#_Toc453752504) of the CSDL standards document is helpful here (as well as section [3.1 Element edmx:Edmx](https://docs.oasis-open.org/odata/odata/v4.0/errata03/os/complete/part3-csdl/odata-v4.0-errata03-os-part3-csdl-complete.html#_Toc453752504) telling us that there can be zero or more of them within an `<edmx:Edmx>` element). 
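Before unpacking what these references mean, note that they're easy to enumerate programmatically. Here's a minimal, hypothetical sketch using Python's standard library; the reduced XML string (just two of the references) and the variable names are ours, for illustration:

```python
import xml.etree.ElementTree as ET

# The reference section of the entity model wrapper, reduced to two references.
metadata = """\
<edmx:Edmx Version="4.0" xmlns:edmx="http://docs.oasis-open.org/odata/ns/edmx">
  <edmx:Reference Uri="https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Capabilities.V1.xml">
    <edmx:Include Alias="Capabilities" Namespace="Org.OData.Capabilities.V1"/>
  </edmx:Reference>
  <edmx:Reference Uri="https://sap.github.io/odata-vocabularies/vocabularies/Common.xml">
    <edmx:Include Alias="Common" Namespace="com.sap.vocabularies.Common.v1"/>
  </edmx:Reference>
  <edmx:DataServices/>
</edmx:Edmx>"""

ns = {"edmx": "http://docs.oasis-open.org/odata/ns/edmx"}
root = ET.fromstring(metadata)

# Build an alias -> namespace map from each Reference's Include children.
aliases = {
    inc.get("Alias"): inc.get("Namespace")
    for ref in root.findall("edmx:Reference", ns)
    for inc in ref.findall("edmx:Include", ns)
}
print(aliases)
```

Running this produces a simple map from each short alias to the full vocabulary namespace it stands for, which hints at how these references work; read on for the detail.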
+
+Basically, `edmx:Reference` elements point to external CSDL documents, specific content from which (indicated by the `edmx:Include` elements within) is then added to the overall scope of the referring (OData metadata) document.
+
+Think of it like an "include" or "import" as found in various programming languages; these references might look like this in pseudo-JavaScript, for example:
+
+```javascript
+import { "Org.OData.Capabilities.V1" as "Capabilities" } from "https://oasis-tcs.github.io/.../Org.OData.Capabilities.V1.xml";
+import { "com.sap.vocabularies.Common.v1" as "Common" } from "https://sap.github.io/.../Common.xml";
+import { "Org.OData.Core.V1" as "Core" } from "https://oasis-tcs.github.io/.../Org.OData.Core.V1.xml";
+```
+
+### Meet the OData vocabularies
+
+Note that the referenced external documents in our OData metadata document are all resources in GitHub repositories:
+
+- two belonging to OASIS in <https://oasis-tcs.github.io/odata-vocabularies/>
+- one belonging to SAP in <https://sap.github.io/odata-vocabularies/>
+
+Taking the first of the three `<edmx:Reference>` elements here:
+
+```xml
+<edmx:Reference
+  Uri="https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Capabilities.V1.xml">
+  <edmx:Include Alias="Capabilities" Namespace="Org.OData.Capabilities.V1"/>
+</edmx:Reference>
+```
+
+we see that:
+
+- the reference points to an [XML representation](https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Capabilities.V1.xml) of a CSDL document "Org.OData.Capabilities.V1"
+- moving one level up from that CSDL document resource's location, there is a [vocabularies overview page](https://oasis-tcs.github.io/odata-vocabularies/vocabularies/) listing each of the OASIS Technical Committee vocabularies, including this "Capabilities" one.
+  ![OASIS OData TC - Vocabularies](oasis-vocabularies-toc.png)
+- for each of these vocabulary resources there are HTML, XML and JSON representations
+- the HTML representation is especially useful for us as it describes the vocabulary's purpose in general, and gives details for each of the terms and types contained within it
+
+Here's a visual representation of those resources at <https://oasis-tcs.github.io/odata-vocabularies/>:
+
+```text
+vocabularies/                          <-- vocabularies overview page
+  |
+  +-- Org.OData.Capabilities.V1.html   <-- detailed info on the vocabulary & its terms
+  |
+  +-- Org.OData.Capabilities.V1.xml    <-- an XML representation of the vocabulary
+  |
+  +-- Org.OData.Capabilities.V1.json   <-- a JSON representation of the vocabulary
+```
+
+If we look specifically at the [XML representation](https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Capabilities.V1.xml) of the "Capabilities" vocabulary in CSDL form, we see this:
+
+```xml
+<?xml version="1.0" encoding="utf-8"?>
+<!--
+  ...
+
+  Abstract:
+  This document contains terms describing capabilities of an OData service.
+
+-->
+<edmx:Edmx xmlns:edmx="http://docs.oasis-open.org/odata/ns/edmx" Version="4.0">
+  <edmx:Reference Uri="https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Authorization.V1.xml">
+    <edmx:Include Alias="Authorization" Namespace="Org.OData.Authorization.V1" />
+  </edmx:Reference>
+  <edmx:Reference Uri="https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Core.V1.xml">
+    <edmx:Include Alias="Core" Namespace="Org.OData.Core.V1" />
+  </edmx:Reference>
+  <edmx:Reference Uri="https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Validation.V1.xml">
+    <edmx:Include Alias="Validation" Namespace="Org.OData.Validation.V1" />
+  </edmx:Reference>
+  <edmx:DataServices>
+    <Schema xmlns="http://docs.oasis-open.org/odata/ns/edm" Namespace="Org.OData.Capabilities.V1" Alias="Capabilities">
+      ...
+    </Schema>
+  </edmx:DataServices>
+</edmx:Edmx>
+```
+
+It's another EDMX document! There is definitely a certain beauty to the OData specifications.
+
+Anyway, there are two things of note here:
+
+- this document _also_ has references to vocabularies
+- there is a single `<Schema>` element, with the OData namespace "Org.OData.Capabilities.V1"
+
+And that specific schema is exactly the one that's referenced in the `<edmx:Include>` element within our first `<edmx:Reference>` element:
+
+```xml
+<edmx:Include Alias="Capabilities" Namespace="Org.OData.Capabilities.V1"/>
+```
+
+It just so happens that in this referenced CSDL document, there _is_ only one schema, but there could be more.
+
+So an `<edmx:Include>` element forms an important part of the `<edmx:Reference>`.
+
+> Indeed, [section 3.3 Element edmx:Reference](https://docs.oasis-open.org/odata/odata/v4.0/errata03/os/complete/part3-csdl/odata-v4.0-errata03-os-part3-csdl-complete.html#_Toc453752504) states that at least one `<edmx:Include>` or `<edmx:IncludeAnnotation>` is mandatory.
+
+The `<edmx:Include>` element serves to identify a particular schema to be included, from the referenced vocabulary. It also does one more thing here - it specifies a short name, in the form of a value for an `Alias` attribute, which can be used to refer to that imported vocabulary schema. Just like the `as` aliasing in our imaginary JavaScript equivalent example earlier. So instead of the full name "Org.OData.Capabilities.V1" being needed to prefix vocabulary terms, the short form "Capabilities" can be used, as we'll see in subsequent steps.
+
+### Take a first look at the annotations
+
+Now that we understand how vocabulary resources are referenced and included in an OData metadata document, and how they're used to organize annotations, let's take a first look at which annotations are used in this particular OData metadata document, and how.
+
+To set the scene, if we take a brief look at the CSDL specification, we learn that:
+
+- "Vocabulary annotations can be specified as a child of the model element being annotated or as a child of an `edm:Annotations` element that targets the model element." (from [section 4.6 Annotations](https://docs.oasis-open.org/odata/odata/v4.0/errata03/os/complete/part3-csdl/odata-v4.0-errata03-os-part3-csdl-complete.html#_Toc453752519))
+- "An annotation applies a term to a model element and defines how to calculate a value for the applied term." (from [section 14 Vocabulary and Annotation](https://docs.oasis-open.org/odata/odata/v4.0/errata03/os/complete/part3-csdl/odata-v4.0-errata03-os-part3-csdl-complete.html#_Vocabulary_and_Annotation))
+
+Additionally:
+
+- "A service SHOULD NOT require a client to interpret annotations. Clients SHOULD ignore unknown terms and silently treat unexpected or invalid values (including invalid type, invalid literal expression, etc.) as an unknown value for the term." (also from [section 14 Vocabulary and Annotation](https://docs.oasis-open.org/odata/odata/v4.0/errata03/os/complete/part3-csdl/odata-v4.0-errata03-os-part3-csdl-complete.html#_Vocabulary_and_Annotation))
+
+With the information from the first two points here, we can see that both approaches to annotation are used in our OData metadata document.
+
+First, we see the "annotation as child element" approach, where the annotation "Core.Links" is applied directly to the `<Schema>` element via the `<Annotation>` element which appears as a direct child of `<Schema>`:
+
+```xml
+<Schema Namespace="Main" xmlns="http://docs.oasis-open.org/odata/ns/edm">
+  <Annotation Term="Core.Links">
+    <Collection>
+      <Record>
+        <PropertyValue Property="rel" String="author"/>
+        <PropertyValue Property="href" String="https://cap.cloud.sap"/>
+      </Record>
+    </Collection>
+  </Annotation>
+  ...
+</Schema>
+```
+
+Here, "Core" is the annotation vocabulary, and "Links" is the term (we'll look in more detail at this in the next tutorial).
+
+So in this case, the (entire) schema is being annotated with "Core.Links":
+
+```text
++--------+                     +------+
+|        |      annotates      | Core |
+| Schema | <-------------- +----------+
+|        |                 |  Links   |
++--------+                 +----------+
+```
+
+Further on in the metadata document we see an example of the other approach, where `<Annotation>` elements appear as direct children of a containing `<Annotations>` element, and the schema element to which the annotations are to be applied is specified in the `Target` attribute of that `<Annotations>` container:
+
+> Again, as explained in the previous [Metadata](https://developers.sap.com/tutorials/odata-dd-4-metadata.html) tutorial, in the "Get acquainted with the schema element" step, we'll leave out the XML namespace prefix `edm` here when writing elements that belong to that namespace.
+
+```xml
+<Schema Namespace="Main" xmlns="http://docs.oasis-open.org/odata/ns/edm">
+  ...
+  <EntityContainer Name="EntityContainer">
+    <EntitySet Name="Categories" EntityType="Main.Categories">
+      <NavigationPropertyBinding Path="Products" Target="Products"/>
+    </EntitySet>
+  </EntityContainer>
+  ...
+  <Annotations Target="Main.EntityContainer/Categories">
+    <Annotation Term="Capabilities.DeleteRestrictions">
+      <Record Type="Capabilities.DeleteRestrictionsType">
+        <PropertyValue Property="Deletable" Bool="false"/>
+      </Record>
+    </Annotation>
+    <Annotation Term="Capabilities.InsertRestrictions">
+      <Record Type="Capabilities.InsertRestrictionsType">
+        <PropertyValue Property="Insertable" Bool="false"/>
+      </Record>
+    </Annotation>
+    <Annotation Term="Capabilities.UpdateRestrictions">
+      <Record Type="Capabilities.UpdateRestrictionsType">
+        <PropertyValue Property="Updatable" Bool="false"/>
+      </Record>
+    </Annotation>
+  </Annotations>
+</Schema>
+```
+
+Here, three different annotation terms:
+
+- Capabilities.DeleteRestrictions
+- Capabilities.InsertRestrictions
+- Capabilities.UpdateRestrictions
+
+are being applied via `Target="Main.EntityContainer/Categories"` to the "Categories" entityset in the entity container named "EntityContainer" in the "Main" OData namespace.
+
+```text
+                                        +--------------+
+                                        | Capabilities |
+                                   +--- +--------------------+
+                                   |    | DeleteRestrictions |
+                                   |    +--------------------+
++----------------------+           |
+| Main.EntityContainer |           |    +--------------+
++--------------------------+       |    | Capabilities |
+| Categories                | <----+--- +--------------------+
++--------------------------+       |    | InsertRestrictions |
+               annotates           |    +--------------------+
+                                   |
+                                   |    +--------------+
+                                   |    | Capabilities |
+                                   +--- +--------------------+
+                                        | UpdateRestrictions |
+                                        +--------------------+
+```
+
+In the next tutorial we'll dig into the detail of the annotations used in this OData metadata document.
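Such restriction annotations lend themselves to mechanical consumption, too. Here's a minimal, hypothetical sketch (Python standard library only) that turns an `<Annotations>` element like the one above into a simple map of term to boolean value; the reduced XML string and the variable names are ours, for illustration:

```python
import xml.etree.ElementTree as ET

# A reduced <Annotations> block for the Categories entityset; the default
# namespace is declared explicitly so the fragment parses on its own.
snippet = """\
<Annotations xmlns="http://docs.oasis-open.org/odata/ns/edm"
             Target="Main.EntityContainer/Categories">
  <Annotation Term="Capabilities.DeleteRestrictions">
    <Record Type="Capabilities.DeleteRestrictionsType">
      <PropertyValue Property="Deletable" Bool="false"/>
    </Record>
  </Annotation>
  <Annotation Term="Capabilities.InsertRestrictions">
    <Record Type="Capabilities.InsertRestrictionsType">
      <PropertyValue Property="Insertable" Bool="false"/>
    </Record>
  </Annotation>
</Annotations>"""

ns = {"edm": "http://docs.oasis-open.org/odata/ns/edm"}
annotations = ET.fromstring(snippet)

# Map each applied term to the boolean value of its single record property.
capabilities = {
    ann.get("Term"): ann.find("edm:Record/edm:PropertyValue", ns).get("Bool") == "true"
    for ann in annotations.findall("edm:Annotation", ns)
}
print(annotations.get("Target"), capabilities)
```

This is essentially what an annotation-aware client (a UI framework, say) does when deciding whether to render a delete button for an entityset.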
+ +### Further info + +- [OData @ SAP - SAP Vocabularies](https://sap.github.io/odata-vocabularies/) diff --git a/tutorials/odata-dd-6-annotations/core-vocab-link.png b/tutorials/odata-dd-6-annotations/core-vocab-link.png new file mode 100644 index 0000000000..d0f5d09377 Binary files /dev/null and b/tutorials/odata-dd-6-annotations/core-vocab-link.png differ diff --git a/tutorials/odata-dd-6-annotations/core-vocab-top.png b/tutorials/odata-dd-6-annotations/core-vocab-top.png new file mode 100644 index 0000000000..ece0ba6ff0 Binary files /dev/null and b/tutorials/odata-dd-6-annotations/core-vocab-top.png differ diff --git a/tutorials/odata-dd-6-annotations/delete-restrictions-type.png b/tutorials/odata-dd-6-annotations/delete-restrictions-type.png new file mode 100644 index 0000000000..1af5b7e3ce Binary files /dev/null and b/tutorials/odata-dd-6-annotations/delete-restrictions-type.png differ diff --git a/tutorials/odata-dd-6-annotations/oasis-vocabularies-toc.png b/tutorials/odata-dd-6-annotations/oasis-vocabularies-toc.png new file mode 100644 index 0000000000..04e1bcc579 Binary files /dev/null and b/tutorials/odata-dd-6-annotations/oasis-vocabularies-toc.png differ diff --git a/tutorials/odata-dd-6-annotations/odata-dd-6-annotations.md b/tutorials/odata-dd-6-annotations/odata-dd-6-annotations.md new file mode 100644 index 0000000000..3c413d0892 --- /dev/null +++ b/tutorials/odata-dd-6-annotations/odata-dd-6-annotations.md @@ -0,0 +1,249 @@ +--- +parser: v2 +author_name: DJ Adams +author_profile: https://github.com/qmacro +auto_validation: false +primary_tag: software-product>sap-business-technology-platform +tags: [ software-product>sap-business-technology-platform, topic>cloud, programming-tool>odata, tutorial>beginner ] +time: 20 +--- + +# Learn how to read annotations in OData metadata documents + +<!-- description --> Annotations from different vocabularies can be found throughout OData metadata documents. 
+
+## You will learn
+
+- How annotations are defined & structured
+- How annotations are used to extend the information in the schema
+
+## Intro
+
+In all cases, annotations consist of two parts - the term and the value. In this tutorial we'll examine the terms, along with their types and possible values, in our [Northbreeze OData metadata document](https://odd.cfapps.eu10.hana.ondemand.com/northbreeze/$metadata).
+
+> This tutorial belongs to the OData Deep Dive mission, a re-write of the original. The re-write is a work in progress, please proceed with caution! More info can be found in the blog post [OData Deep Dive rewrite in the open](https://qmacro.org/blog/posts/2026/02/02/odata-deep-dive-rewrite-in-the-open/).
+
+---
+
+### Revisit the Core.Links annotation
+
+One of the annotations we took a brief first look at in the previous tutorial was "Core.Links", where "Core" is the vocabulary alias, referring here to "Org.OData.Core.V1".
+
+```xml
+<Schema Namespace="Main" xmlns="http://docs.oasis-open.org/odata/ns/edm">
+  <Annotation Term="Core.Links">
+    <Collection>
+      <Record>
+        <PropertyValue Property="rel" String="author"/>
+        <PropertyValue Property="href" String="https://cap.cloud.sap"/>
+      </Record>
+    </Collection>
+  </Annotation>
+  ...
+</Schema>
+```
+
+The XML element structure contained within the `<Annotation>` element is the annotation term's value. [Staring at](https://qmacro.org/blog/posts/2017/02/19/the-beauty-of-recursion-and-list-machinery/#initial-recognition) this structure for a bit, we see that it's a [collection](https://docs.oasis-open.org/odata/odata/v4.0/errata03/os/complete/part3-csdl/odata-v4.0-errata03-os-part3-csdl-complete.html#_Toc453752651) of records, each of which has two properties (or fields) "rel" and "href", for which there are corresponding string values.
+
+There's only a single record in the collection, and it looks like this:
+
+rel|href
+-|-
+`author`|`https://cap.cloud.sap`
+
+### Dig into the XML representation of the Core vocabulary
+
+Let's first go the hard way round to understand what this term is, and how the value structure is defined. To do that, we should look at the [XML representation of the Org.OData.Core.V1 vocabulary](https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Core.V1.xml), and search for the relevant term.
+
+> As we search, we'll also notice that the schema within this resource, specifically the "Org.OData.Core.V1" schema, is adorned with a "Core.Links" annotation itself, too! And it works because the OData namespace "Core" provides a pointer to itself. There's further beauty here, of which we'll see more shortly.
+
+Within the "Org.OData.Core.V1" schema, we find two elements, right next to each other, that are relevant to our search - a `<Term>` and a `<ComplexType>`:
+
+```xml
+<Term Name="Links" Type="Collection(Core.Link)" Nullable="false">
+  <Annotation Term="Core.Description" String="Link to related information" />
+</Term>
+
+<ComplexType Name="Link">
+  <Annotation Term="Core.Description" String="The Link type is inspired by the `atom:link` element, see [RFC4287](https://tools.ietf.org/html/rfc4287#section-4.2.7), and the `Link` HTTP header, see [RFC5988](https://tools.ietf.org/html/rfc5988)" />
+  <Property Name="rel" Type="Edm.String" Nullable="false">
+    <Annotation Term="Core.Description" String="Link relation type, see [IANA Link Relations](http://www.iana.org/assignments/link-relations/link-relations.xhtml)" />
+  </Property>
+  <Property Name="href" Type="Edm.String" Nullable="false">
+    <Annotation Term="Core.IsURL" />
+    <Annotation Term="Core.Description" String="URL of related information" />
+  </Property>
+</ComplexType>
+```
+
+This is where the "Links" term, in the "Core" namespace (representing the vocabulary), is defined, i.e. in a `<Term>` element.
The `<Term>` element itself is defined in the OData standards document that has accompanied us on this journey of discovery: "OData Version 4.0. Part 3: Common Schema Definition Language (CSDL)", specifically in [section 14.1 Element edm:Term](https://docs.oasis-open.org/odata/odata/v4.0/errata03/os/complete/part3-csdl/odata-v4.0-errata03-os-part3-csdl-complete.html#_Toc453752620).
+
+Here's what we can discern from this `<Term>` element:
+
+- the name is "Links"
+- the type is defined as being a collection (an array) of individual "Core.Link" items
+- it's annotated with a "Core.Description" term (which is a part of this very vocabulary, more beauty!) that tells us this term is for a "Link to related information"
+
+The "Core.Link" item is defined with a corresponding `<ComplexType>` element, which:
+
+- is annotated with a "Core.Description" term telling us more about it
+- includes, in that description, a reference to the `atom` namespace and RFC 4287 (Atom Syndication Format) which we looked at in the first tutorial in this mission on the [Origins](https://developers.sap.com/tutorials/odata-dd-1-origins.html) of OData
+- is defined as having two properties "rel" and "href", each of which has the type "Edm.String" and each of which is also annotated with "Core.Description" (the latter is also annotated with "Core.IsURL")
+
+Here's what that looks like, pictorially:
+
+```text
+Namespace: Core
+
+  +-- Description: "Link to related information"
+  |
+  |      +-- Description: "The Link type is inspired by ..."
+  V      |
++-------+    +------V-------------+
+|       |    | +----------------+ |
+| Links +--->| |      Link      | |
+|       |    | |                | |
++-------+    | |  +----------+  | |
+             | |  | rel      |<------ Description: "Link relation type ..."
+             | |  | (String) |  | |
+             | |  +----------+  | |
+             | |  +----------+  | |
+             | |  | href     |<------ Description: "URL of related information ..."
+             | |  | (String) |<------ IsURL: true
+             | |  +----------+  | |
+             | +----------------+ |
+             +--------------------+
+               (Collection)
+```
+
+### Explore the Core.IsURL annotation
+
+As a bonus, and to help drive home how OData vocabularies and metadata metadata (yes, that is deliberately written twice) work, let's spend a moment following the other annotation with which the "Core.Link" complex type's property "href" is annotated, namely "Core.IsURL".
+
+Look again through the [Org.OData.Core.V1 schema XML](https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Core.V1.xml) for the "IsURL" term. You should find this:
+
+```xml
+<Term Name="IsURL" Type="Core.Tag" Nullable="false" DefaultValue="true" AppliesTo="Property Term">
+  <Annotation Term="Core.Description" String="Properties and terms annotated with this term MUST contain a valid URL" />
+  <Annotation Term="Core.RequiresType" String="Edm.String" />
+</Term>
+```
+
+Descend yet one level deeper to find out what the "Core.Tag" type is, whereupon you should find:
+
+```xml
+<TypeDefinition Name="Tag" UnderlyingType="Edm.Boolean">
+  <Annotation Term="Core.Description" String="This is the type to use for all tagging terms" />
+</TypeDefinition>
+```
+
+And with the definition of this type being a Boolean (in the "Edm" namespace, see the [Metadata](https://developers.sap.com/tutorials/odata-dd-4-metadata.html) tutorial earlier in this mission), we've bottomed out our investigation.
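Since "Core.IsURL" declares that annotated properties MUST contain a valid URL, we can imagine what a consuming tool might do with that knowledge. The following is just an illustrative sketch (the `is_url` function and its scheme-plus-host rule are our own simplification, not anything defined by the OData specifications):

```python
from urllib.parse import urlparse

def is_url(value: str) -> bool:
    """Toy check for a Core.IsURL-annotated string property:
    require at least a scheme and a host to be present."""
    parts = urlparse(value)
    return bool(parts.scheme and parts.netloc)

# The href value from the Core.Links record passes ...
print(is_url("https://cap.cloud.sap"))  # True
# ... while the rel value ("author") is just a plain string
print(is_url("author"))                 # False
```

This is exactly the sort of mechanical check that the "Tag" type (a Boolean, as we're about to see) makes possible: the annotation's mere presence switches the behaviour on.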
+
+Tracking back up to where we started this descent, in the schema in our [OData metadata document](https://odd.cfapps.eu10.hana.ondemand.com/northbreeze/$metadata), we can now confidently understand that:
+
+Level 0 (our OData metadata)
+
+- that schema is annotated with the "Links" term
+- that "Links" term is in the Org.OData.Core.V1 vocabulary
+
+Level 1 (the "Core" vocabulary)
+
+- that vocabulary has the short alias "Core"
+- within the "Core" vocabulary, the "Links" term is defined as a Collection of the "Link" complex type
+- that "Link" complex type has two properties, "rel" and "href"
+
+Level 2 (annotations used for vocabulary content)
+
+- the "Links" term, the "Link" complex type, and both properties are also themselves annotated with terms from "Core"
+- the predominant term used in these (meta) annotations within the "Core" vocabulary is "Description"
+- but there's also the term "IsURL", which is defined as being of type "Tag"
+
+Level 3 (annotations used for building blocks of annotations)
+
+- and the "Tag" type is defined as a Boolean
+- as well as being annotated itself (with the "Description" term)
+
+Phew!
+
+### Revisit the Core vocabulary via the HTML representation
+
+Now that we've done the hard work of examining the XML representation of the "Core" vocabulary, let's take a breather and look at the HTML representation (we saw how these are related in the previous tutorial on [Vocabularies](https://developers.sap.com/tutorials/odata-dd-5-vocabularies.html)), at <https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Core.V1.html>.
+
+We see some familiar information that should help clarify and cement our understanding.
+
+![The first part of the HTML representation of the "Core" vocabulary](core-vocab-top.png)
+
+First, the description "Core terms needed to write vocabularies" explains why so many aspects of the "Core" vocabulary were indeed annotated themselves with terms from that very same vocabulary.
+ +Next, in the list of terms, we see the "Links" term with its type defined as "[Link]", i.e. an array (`[...]`) or collection of "Link" types. Following the hyperlinked "Link" we are taken to the [Link](https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Core.V1.html#Link) type definition: + +![The Link type definition](core-vocab-link.png) + +The keen observers amongst you will realise that the descriptions in this HTML representation are taken directly from the values of the "Core.Description" terms that adorn the XML representation, suggesting that the HTML representation is generated from the XML representation too. + +> In fact, [the source of the HTML representation is in Markdown format](https://github.com/oasis-tcs/odata-vocabularies/blob/main/vocabularies/Org.OData.Core.V1.md), which makes sense too, given that there is Markdown in some of the "Core.Description" string values ([the description for this "Core.Link" type](https://github.com/oasis-tcs/odata-vocabularies/blob/main/vocabularies/Org.OData.Core.V1.xml#L119) is a good example of this). 
+
+### Examine the other Capabilities annotations
+
+Now that we understand how to read, interpret and navigate annotations, let's turn our attention to the other annotations in our [Northbreeze OData metadata document](https://odd.cfapps.eu10.hana.ondemand.com/northbreeze/$metadata):
+
+```xml
+<Annotations Target="Main.EntityContainer/Categories">
+  <Annotation Term="Capabilities.DeleteRestrictions">
+    <Record Type="Capabilities.DeleteRestrictionsType">
+      <PropertyValue Property="Deletable" Bool="false"/>
+    </Record>
+  </Annotation>
+  <Annotation Term="Capabilities.InsertRestrictions">
+    <Record Type="Capabilities.InsertRestrictionsType">
+      <PropertyValue Property="Insertable" Bool="false"/>
+    </Record>
+  </Annotation>
+  <Annotation Term="Capabilities.UpdateRestrictions">
+    <Record Type="Capabilities.UpdateRestrictionsType">
+      <PropertyValue Property="Updatable" Bool="false"/>
+    </Record>
+  </Annotation>
+</Annotations>
+```
+
+From our first look at annotations in the previous tutorial on [Vocabularies](https://developers.sap.com/tutorials/odata-dd-5-vocabularies.html) we understand that these annotations are targeting the "Categories" entityset. XML has a reputation for being verbose, and that reputation is earned here.
+
+However, with the ability we now have to read and understand annotation terms & values, we can see that all these annotation terms are from the [Capabilities](https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Capabilities.V1.xml) vocabulary (Org.OData.Capabilities.V1) and that they all share the same theme of operational limitations, with the terms being "DeleteRestrictions", "InsertRestrictions" and "UpdateRestrictions".
+
+The entityset is annotated with three terms, each of which has a record structure as its type. Let's dig into the first occurring term, which is "Capabilities.DeleteRestrictions".
+
+> The way this works for the other terms (for insert and update operations) is very similar; digging into those is left as an exercise for you, dear reader.
+
+Starting with the annotation target, which is "Main.EntityContainer/Categories", we see that the first of the three annotations being applied is "Capabilities.DeleteRestrictions":
+
+```xml
+<Annotations Target="Main.EntityContainer/Categories">
+  <Annotation Term="Capabilities.DeleteRestrictions">
+    <Record Type="Capabilities.DeleteRestrictionsType">
+      <PropertyValue Property="Deletable" Bool="false"/>
+    </Record>
+  </Annotation>
+  ...
+</Annotations>
+```
+
+If we look at the [HTML representation of the Org.OData.Capabilities.V1 vocabulary](https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Capabilities.V1.html) we see that "DeleteRestrictions" is indeed listed in the Terms section, and is defined as having the type [DeleteRestrictionsType](https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Capabilities.V1.html#DeleteRestrictionsType), which looks like this:
+
+![deleterestrictionstype definition](delete-restrictions-type.png)
+
+One of the properties in this type (this record structure, effectively) is the Boolean "Deletable", a value for which (`false`) is provided in the annotation for the entityset.
+
+> Note that the "DeleteRestrictionsType" type is defined as being derived from [DeleteRestrictionsBase](https://oasis-tcs.github.io/odata-vocabularies/vocabularies/Org.OData.Capabilities.V1.html#DeleteRestrictionsBase). The difference between this base type and the derived type is that the derived type has one additional property [NonDeletableNavigationProperties](https://github.com/oasis-tcs/odata-vocabularies/blob/main/vocabularies/Org.OData.Capabilities.V1.xml#L876), which is a detail we don't have to worry about at this level of exploration.
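To make the reading of these restriction annotations concrete, here's a small, illustrative sketch (again our own code, with the annotations fragment inlined as a string) that folds the three records into a single capability map for the entityset:

```python
import xml.etree.ElementTree as ET

EDM = "{http://docs.oasis-open.org/odata/ns/edm}"

# The three Capabilities annotations targeting the Categories entityset,
# inlined here as a string for the purposes of illustration
SNIPPET = """
<Annotations Target="Main.EntityContainer/Categories"
             xmlns="http://docs.oasis-open.org/odata/ns/edm">
  <Annotation Term="Capabilities.DeleteRestrictions">
    <Record Type="Capabilities.DeleteRestrictionsType">
      <PropertyValue Property="Deletable" Bool="false"/>
    </Record>
  </Annotation>
  <Annotation Term="Capabilities.InsertRestrictions">
    <Record Type="Capabilities.InsertRestrictionsType">
      <PropertyValue Property="Insertable" Bool="false"/>
    </Record>
  </Annotation>
  <Annotation Term="Capabilities.UpdateRestrictions">
    <Record Type="Capabilities.UpdateRestrictionsType">
      <PropertyValue Property="Updatable" Bool="false"/>
    </Record>
  </Annotation>
</Annotations>
"""

annotations = ET.fromstring(SNIPPET)
capabilities = {}
for pv in annotations.iter(f"{EDM}PropertyValue"):
    # Bool attribute values are the literal strings "true" / "false"
    capabilities[pv.get("Property")] = pv.get("Bool") == "true"

print(capabilities)
# {'Deletable': False, 'Insertable': False, 'Updatable': False}
```

A client could use such a map, for example, to hide the delete, create and edit controls in a generated UI for this entityset.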
+
+Following these annotation terms, with their Boolean `false` values, we can see that the entityset is effectively read-only.
+
+> If at this point you're still looking at the metadata document for the publicly available read-only service (mentioned in the [Northbreeze](https://developers.sap.com/tutorials/odata-dd-3-northbreeze.html) tutorial) at <https://odd.cfapps.eu10.hana.ondemand.com/northbreeze/$metadata>, all the entitysets are read-only and you'll see the same `<Annotations Target="...">` pattern for the other entitysets.
+
+By the way, this OData service is being served by a CAP Node.js server, where the entity projection(s) in the service definition have the simple CDS-level annotation `@readonly` assigned. This is translated into these triplets of "Capabilities" based delete, insert and update restriction annotations at the OData level.
+
+### Further info
+
+- the [@readonly](https://cap.cloud.sap/docs/guides/services/constraints#readonly) section of the Input Validation topic in Capire
+- if you're wondering why there's a third vocabulary [com.sap.vocabularies.Common.v1](https://sap.github.io/odata-vocabularies/vocabularies/Common.xml) included in the references section of this OData service's metadata document, but there are no "Common" terms used, that's just because this is served by a CAP server, and the CAP compiler will by default always include references to both the "Core" and "Common" vocabularies (see the `csn2annotationEdm` function in `@sap/cds-compiler/lib/edm/annotations/genericTranslations.js` for details)
diff --git a/tutorials/opps-manual-setup/opps-manual-setup.md b/tutorials/opps-manual-setup/opps-manual-setup.md
index 601a10d453..0363d2ea7a 100644
--- a/tutorials/opps-manual-setup/opps-manual-setup.md
+++ b/tutorials/opps-manual-setup/opps-manual-setup.md
@@ -36,7 +36,7 @@ primary_tag: products>sap-business-technology-platform
 
 2. Navigate to your subaccount. It is usually named `trial`.
-In case you plan to use the trial subaccount that was initially created when setting up your SAP BTP trial account, please proceed with the following steps:
+If you plan to use the trial subaccount that was initially created when setting up your SAP BTP trial account, please proceed with the following steps:
 
 1. In the navigation pane, open **Services > Service Marketplace**.
 
@@ -48,7 +48,7 @@ In case you plan to use the trial subaccount that was initially created when set
 
 <!-- The subscription process is finished once the status icon changes from **Processing** to **Subscribed**. -->
 
-
+<!--
 In case you plan to use a manually created subaccount or want to add new services not present at the time of creating your subaccount, please proceed with the following steps:
 
 1. In the navigation pane, open **Entitlements**.
@@ -74,7 +74,7 @@ In case you plan to use a manually created subaccount or want to add new service
 
 11. In the notification box shown in the header choose **Enable Cloud Foundry**.
 
 12. Navigate back to your subaccount and choose **Create Space**.
-
+-->
 
 ### Set up roles and authorizations
 
@@ -93,9 +93,9 @@ In order to use the apps provided with SAP Omnichannel Promotion Pricing, you mu
 
 6. Under **Role Name**, select **`Configure_OPPS`** and **`Maintain_OPPS_Promotions`** from the dropdown list.
 
-7. Assign **User** or **User Groups** to your role collection and choose **Save**.
+7. Assign yourself as a **User** or assign **User Groups** to your role collection and choose **Save**.
 
-Optional: Once you have set up the roles and authorizations, you can do the following steps:
+Optional: Once you have set up the roles and authorizations, you can check out our apps as follows:
 
 1. Navigate back to your subaccount.
diff --git a/tutorials/sapui5-appfrontend-create/sapui5-appfrontend-create.md b/tutorials/sapui5-appfrontend-create/sapui5-appfrontend-create.md
index 90c74fff10..1cb80fe3cf 100644
--- a/tutorials/sapui5-appfrontend-create/sapui5-appfrontend-create.md
+++ b/tutorials/sapui5-appfrontend-create/sapui5-appfrontend-create.md
@@ -2,7 +2,7 @@
 parser: v2
 auto_validation: true
 primary_tag: programming-tool>html5
-tags: [ tutorial>beginner, programming-tool>html5, programming-tool>sapui5, topic>user-interface ]
+tags: [ tutorial>beginner, programming-tool>html5, programming-tool>sapui5, topic>user-interface ]
 time: 20
 author_name: Shrinivasan Neelamegam
 author_profile: https://github.com/neelamegams
@@ -14,7 +14,7 @@ author_profile: https://github.com/neelamegams
 
 ## Prerequisites
 
-- Install **Node.js (version ≥ 22.6.0)** on your machine
+- Install **Node.js (version ≥ 22.6.0)** on your machine
 
 ## You will learn
@@ -55,7 +55,6 @@
    ui5 init
    ```
 3. Update `ui5.yaml` with the following code:
-
 ```yaml
 specVersion: '4.0'
 metadata:
@@ -65,10 +64,8 @@ framework:
   name: SAPUI5
   version: '1.141.1'
 ```
-
 > We created a minimal `ui5.yaml` file describing our UI5 project. The UI5 Tooling uses this file to configure the web server that the application will be hosted on.
-
 ### Create the view and index.html
 
 1. Add a view `App.view.xml` inside `webapp/view` folder and paste the following code
@@ -105,10 +102,7 @@ xmlns:f="sap.f" xmlns:m="sap.m" xmlns:mvc="sap.ui.core.mvc">
 <html>
 <head>
 <meta charset="utf-8" />
-<meta
-name="viewport"
-content="width=device-width, initial-scale=1.0"
-/>
+<meta name="viewport" content="width=device-width, initial-scale=1.0" />
 <title>Employee DashBoard