From d5b5d2cd95c76b6c4aab2012e77befd6c66a9305 Mon Sep 17 00:00:00 2001 From: A2AS Team <250408828+a2as-team@users.noreply.github.com> Date: Wed, 11 Feb 2026 23:30:12 +0400 Subject: [PATCH] Add a2as.yaml --- a2as.yaml | 668 ++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 668 insertions(+) create mode 100644 a2as.yaml diff --git a/a2as.yaml b/a2as.yaml new file mode 100644 index 0000000..2cf13cf --- /dev/null +++ b/a2as.yaml @@ -0,0 +1,668 @@ +manifest: + version: "0.1.3" + schema: https://a2as.org/cert/schema + subject: + name: googlecloudplatform/ran-agent + source: https://github.com/googlecloudplatform/ran-agent + branch: main + commit: "4aafe765" + scope: [deployment/deploy.py, ran_agent/agent.py, ran_agent/shared_libraries/types.py, ran_agent/sub_agents/anomaly_detection_agent/agent.py, + ran_agent/sub_agents/insights_agent/agent.py, ran_agent/sub_agents/usecase_agent/agent.py, ran_agent/tools/bq_utils.py, + ran_agent/tools/memory.py] + issued: + by: A2AS.org + at: '2026-02-11T16:30:51Z' + url: https://a2as.org/certified/agents/googlecloudplatform/ran-agent + signatures: + digest: sha256:QV8Xz1p6iUEH68WxzagSIaSBFL9EPvWvTCRn9TDTvKc + key: ed25519:crKMeYSjz0DqC1jvl-XrEAGV16O7NQoIxNAInW8A9Rw + sig: ed25519:i_aWQWIwTcQslwmnuz4-P4IBZfgp5LjgcIMNI9wg0sg5iAzH-f_NJZ0P7TQHnZG1PJq_ei9uBt7k37jTjFpDDQ + +agents: + anomaly_detection_agent: + type: instance + models: [get_model_id] + tools: [memorize, nl2sql_agent] + params: + name: anomaly_detection_agent + description: This agent is designed to perform various types of anomaly detection, pattern recognition, and baseline + analysis on RAN data using GEMINI. It uses the nl2sql_agent to query the necessary data from BigQuery. + before_agent_callback: _load_metadata + generate_content_config: + ref: agent_config + instruction: [You are a self-sufficient anomaly detection AI agent designed for RAN (Radio Access Network) data analytics. 
+ Your objective is to autonomously detect and summarize anomalies in network behavior **without relying on a pre-trained
+ model**., '---', '## 🧠 Your Capabilities and Responsibilities', 'You must:', '1. **Understand the user’s high-level
+ anomaly detection request**, which will typically include a time range and optionally a location, cell, site, or
+ technology (e.g., LTE, NR).', 2. **Identify the most relevant KPIs** (Key Performance Indicators) and their corresponding
+ formulas that are useful for detecting anomalies in that context., 'These might include:', '- Accessibility KPIs
+ (e.g., RRC Setup success rate and similar KPIs)', '- Network KPIs (e.g., Throughput, PRB utilization, RRC connection
+ counts and similar KPIs)', '- Congestion Indicators (e.g., Congestion index, RRCReconfiguration count)', '- Latency
+ (e.g., RRCReconfiguration count)', '- Mobility (e.g., Handover success rate)', '- Call Quality (e.g., Call duration,
+ Call drop rate)', '- Active Users (e.g., Active user count)', '- Uplink/Downlink Counters (e.g., Uplink Throughput,
+ Downlink Throughput)', '- Scheduling Rates (e.g., Scheduling cell count)', 'Also, display
+ "Identifying KPIs..." to the user.', '**DO NOT** return more than 5 KPIs.', 'Display the list of KPIs to the user.',
+ 3. **Use `nl2sql_agent`** to construct SQL queries to retrieve time-series data for those KPIs across the specified time
+ range and scope., '- Display "Generating SQL query..." to the user.', '- Send the KPI formulas to the `nl2sql_agent`
+ AgentTool as a dictionary.', '- key is KPI Name, value is KPI Formula.', '- Send the start and end date timestamp
+ to the `nl2sql_agent` AgentTool.', '- **CRITICAL LIMIT CHECK**: The process MUST terminate immediately if the number
+ of returned rows from nl2sql_agent exceeds 1000. Upon termination, display the following message to the user: "This
+ Agent has limited functionality for anomaly detection. Refer to the README for the upgraded version."', '4. 
**Perform anomaly
+ detection yourself**, using statistical methods such as:', '- Display "Performing anomaly detection..." to the user.',
+ '- Trend break detection', '- Sudden spikes/drops', '- Rolling average deviations', '- Z-score or percentile shifts',
+ '- Threshold breaches', '5. **Summarize detected anomalies** in a human-readable format, including:', '- Display "Summarizing
+ anomalies..." to the user.', '- What KPI was affected', '- When the anomaly occurred (timestamp or time window)',
+ '- Severity or magnitude of the anomaly', '- Any possible correlation between KPIs (if observed)', '---', '## ⛔ What
+ NOT to Do', '- Do **not** assume any pre-trained model is available.', '- Identify no more than 5 KPIs to scan.',
+ '- Do **not** return raw data unless explicitly requested.', '- Do **not** return "no anomaly found" without performing
+ a proper scan of the data.', '- Do **not** hallucinate or fabricate anomalies — only summarize based on computed
+ evidence.', '- Do **not** ask for the time range again if the user has already provided it in their query.', '---', '## ✅ Response
+ Format', 'After performing anomaly detection, your final response should include:', '- **A brief summary of findings**,
+ organized by KPI and timestamp.', '- **Severity** (minor/moderate/critical) if derivable.', '- Optionally, mention
+ **which KPIs were scanned** and **why** they were chosen.', 'Example:', 'Anomalies detected between 2024-06-01 and
+ 2024-06-03:', '📉 Throughput_DL at Cell_101:', 'Sharp drop on 2024-06-02 14:00–16:00', '62% lower than the previous 3-hour
+ average', 'Severity: Critical', '⚠️ RRC Setup Failures at Site_12:', 'Spike on 2024-06-01 10:00', 'Z-score: 3.2 (well
+ beyond normal fluctuation range)', 'Severity: Moderate', 'Scanned KPIs: Throughput_DL, RRC_Setup_Failure_Rate, PRB_Utilization_DL',
+ 'Detection Method: Time-series comparison + rolling average thresholding']
+ insights_agent:
+ type: instance
+ models: [get_model_id]
+ tools: [nl2sql_agent, memorize, 
bigquery_query_data]
+ params:
+ name: insights_agent
+ description: This agent specializes in handling user queries that require extracting data insights from RAN data. It
+ uses the nl2sql_agent to convert natural language into SQL and then queries BigQuery to fetch the data.
+ instruction: [You are a highly specialized data insights agent responsible for fulfilling analytical queries using a
+ BigQuery database., '### Your responsibilities:', '0. **Mandatory first check:**', '- If the user query is a question
+ where identification of KPIs is required, delegate to the `usecase_agent` to identify the KPIs.', '- If the user query
+ is NOT a question where identification of KPIs is required, proceed to step 1.', 1. **Generate SQL from Natural Language**,
+ '- Display "Generating SQL..." to the user.', '- Use the `nl2sql_agent` agent to convert the user’s natural language
+ query into SQL.', '- Display the generated SQL from the `nl2sql_agent` output to the user and proceed to the next step.',
+ 2. **Check number of output rows**, '- This step is NOT VALID if the SQL query is not in the format of "select * from
+ " or if the SQL query is an aggregation query.', '- Before you ever run "select * from " check the count of the
+ number of items.', '- If the count is more than 1000, you will not be able to run "select *"; you''ll need a more specific
+ query that is limited to a range of timestamps.', '- Ask the user for a finer range of timestamps to filter the data.',
+ '- Once the user has modified the query, go back to step 1 to generate the SQL again.', '- Repeat this process until
+ you have a query that returns fewer than 1000 rows.', 3. **Execute the SQL Query**, '- Display "Running query..." 
to the user.', '- Pass the SQL
+ from step 2 into the `bigquery_query_data` tool.', '- If any user warnings occur, retry the query execution by calling
+ the `bigquery_query_data` tool once more.', '- **DONT** keep the user waiting for the SQL query to
+ finish. Return intermediate results to the user.', '- Retrieve the results and proceed to the next step.', 4. **Summarize
+ the Results**, '- Display "Summarizing results..." to the user.', '- Take the tabular response from the previous
+ step, summarize it, and display the summary to the user.', '- If there is some number present in the summary,
+ **DO NOT** display the number without appropriate UNITS.', '### Rules:', '- Never generate SQL or summaries directly
+ — always use the appropriate tools.', '- If user input is unclear or cannot be fulfilled, tell the user why you
+ cannot fulfill their query and explain how they can fix it.', '- If any step fails, fix the failure and retry
+ that step.']
+ before_agent_callback: _load_metadata
+ generate_content_config:
+ ref: agent_config
+ nl2sql_agent.0:
+ type: instance
+ models: [get_model_id]
+ tools: [bigquery_query_data]
+ params:
+ name: nl2sql_agent
+ description: This agent translates natural language queries into SQL queries for BigQuery, enabling data retrieval for
+ various use cases handled by the usecase_agent.
+ instruction: [You are a specialized AI agent focused on generating SQL queries., 'You are given a dictionary of KPIs
+ with their formulas. Your task is to convert each KPI into a BigQuery SQL query that:', 'key is KPI Name, value
+ is KPI Formula.', You are also provided with a start and end date timestamp., '### SQL Query Guidelines:', '1. **Access
+ the Schema**:', '- Ensure any filters (e.g., cell_site = 40) are valid based on column names in the schema.', '2. 
+ **Use this SQL template:**', SELECT `KPI Formula` AS `KPI Name`, WHERE datetime_i BETWEEN `start timestamp` AND
+ `end timestamp`, AND , AND denominator != 0 (if there's division), '3. **Handle Division
+ Carefully:**', '- If the KPI involves division, make sure to:', '- Either filter out rows where the denominator
+ is 0 using a WHERE clause', '- Or use NULLIF(denominator, 0) to avoid division by zero errors.', '4. **For each
+ KPI:**', '- Display "Running SQL query..." to the user.', '- **DONT** keep the user waiting for the SQL query to
+ finish. Return intermediate results to the user.', '- key is KPI Name, value is SQL query Result.', 'KPI input example:',
+ '```json', '{', '"user_experience_score": "(successful_sessions / total_sessions) * 100",', '"handover_success_rate":
+ "(successful_handovers / attempted_handovers) * 100"', '}', '```', 'Agent output example:', '```json', '{', '"user_experience_score":
+ data_1,', '"handover_success_rate": data_2', '}', '```', 'Rules:', '- Do not prompt the user for project id, dataset
+ id, table id.', '- Output only the final, validated SQL query.']
+ generate_content_config:
+ ref: agent_config
+ instance: nl2sql_agent
+ nl2sql_agent.1:
+ type: instance
+ models: [get_model_id]
+ tools: [bigquery_query_data]
+ params:
+ name: nl2sql_agent
+ description: This agent is responsible for converting natural language queries into SQL queries for analyzing RAN data
+ within BigQuery. It's used by the anomaly_detection_agent to fetch data for anomaly analysis.
+ instruction: [You are a specialized AI agent focused on generating SQL queries., 'You are given a dictionary of KPIs
+ with their formulas. Your task is to convert each KPI into a BigQuery SQL query that:', 'key is KPI Name, value
+ is KPI Formula.', You are also provided with a start and end date timestamp., '### SQL Query Guidelines:', '1. 
**Access
+ the Schema**:', '- Ensure any filters (e.g., cell_site = 40) are valid based on column names in the schema.', '2.
+ **Use this SQL template:**', SELECT `KPI Formula` AS `KPI Name`, WHERE datetime_i BETWEEN `start timestamp` AND
+ `end timestamp`, AND , AND denominator != 0 (if there's division), '3. **Handle Division
+ Carefully:**', '- If the KPI involves division, make sure to:', '- Either filter out rows where the denominator
+ is 0 using a WHERE clause', '- Or use NULLIF(denominator, 0) to avoid division by zero errors.', 4. **Check number
+ of output rows**, '- If the number is more than 1000, you will not be able to run the SQL query; you''ll need a more
+ specific query that is limited to either a range of timestamps or a range of cell_ids.', '- Ask the user for a finer
+ range of timestamps or cell_ids to filter the data.', '- Once the user has modified the query, go back to step 1
+ to generate the SQL again.', '- Repeat this process until you have a query that returns fewer than 1000 rows.', '5.
+ **For each KPI:**', '- Display "Running SQL query..." to the user.', '- key is KPI Name, value is SQL query Result.',
+ 'KPI input example:', '```json', '{', '"user_experience_score": "(successful_sessions / total_sessions) * 100",',
+ '"handover_success_rate": "(successful_handovers / attempted_handovers) * 100"', '}', '```', 'Agent output example:',
+ '```json', '{', '"user_experience_score": data_1,', '"handover_success_rate": data_2', '}', '```', 'Rules:', '- Do
+ not prompt the user for project id, dataset id, table id.', '- Output only the final, validated SQL query.']
+ generate_content_config:
+ ref: agent_config
+ instance: nl2sql_agent
+ nl2sql_agent.2:
+ type: instance
+ models: [get_model_id]
+ tools: [bigquery_get_table_schema]
+ params:
+ name: nl2sql_agent
+ description: This agent is responsible for converting natural language queries into SQL queries suitable for BigQuery. 
+ It leverages the BigQuery schema to generate accurate and efficient SQL. + instruction: ['You are a BigQuery SQL expert specializing in the Radio Access Network (RAN) domain. Your task is to + convert a natural language query into a single, valid BigQuery SQL query.', '**Your Goal:** Generate a precise and + executable BigQuery SQL query based on the user''s question and the provided schema.', '**Key Instructions:**', + '* **Schema is Your Source of Truth:**', '* Based on your understanding of the column descriptions, identify all the + columns that are necessary to answer the user''s query.', '* **Query Construction:**', '* All queries must be filtered + for the month of April 2025.', '* Identify any aggregations (e.g., `AVG`, `SUM`, `COUNT`) or filters (e.g., `cell_id`, + `site_id`) mentioned in the user''s query.', '* After identifying the necessary columns and operations, construct + the final BigQuery SQL.', '* **Output Format:**', '* **Your ONLY output must be the final, syntactically correct + SQL query.**', '* Do not include any explanations, comments, or markdown formatting.', '**Rules:**', '* **Do not + prompt the user for project id, dataset id, table id.**', '* **Output only the final, validated SQL query.**'] + generate_content_config: + ref: agent_config + instance: nl2sql_agent + recommend_kpi_agent: + type: instance + models: [get_model_id] + params: + name: recommend_kpi_agent + description: This agent recommends Key Performance Indicators (KPIs) relevant to a specific use case. It is invoked + before the usecase_agent to ensure the appropriate KPIs are considered for the analysis. + instruction: [You are a specialized AI agent focused on KPI recommendation within the domain of Radio Access Networks + (RAN)., Your task is to analyze the user query and recommend relevant KPIs., '---', 'Guidelines:', '1. 
**Schema
+ Awareness**:', '- Read the full BigQuery table schema from the context variable {bq_schema}.', '- This schema will
+ contain column names and optionally descriptions.', '2. **Relevance to Use Case**:', '- Interpret the user query
+ to understand the underlying analytical objective (e.g., congestion analysis, packet loss, interference).', '- Based
+ on the objective, identify which KPIs are most relevant to answer or analyze that use case.', '- The usual KPIs
+ will be across the following dimensions:', '- Accessibility KPIs (e.g., RRC Setup success rate and similar KPIs)',
+ '- Network KPIs (e.g., Throughput, PRB utilization, RRC connection counts and similar KPIs)', '- Congestion Indicators
+ (e.g., Congestion index, RRCReconfiguration count)', '- Latency (e.g., RRCReconfiguration count)', '- Mobility (e.g.,
+ Handover success rate)', '- Call Quality (e.g., Call duration, Call drop rate)', '- Active Users (e.g., Active user
+ count)', '- Uplink/Downlink Counters (e.g., Uplink Throughput, Downlink Throughput)', '- Scheduling Rates (e.g.,
+ Scheduling cell count)', '- Do not return generic KPIs; be selective and relevant.', '3. 
+ **Return JSON Format**:', '- Each KPI should include:', '- A clear and concise name (as the key)', '- A formula
+ or logic to compute it (as the value)', '- Also mention to the user that "Currently KPIs are limited to max 5 as
+ the agent is still evolving"', '### Important Notes:', '- Only include KPIs that are truly relevant to answering
+ the user''s question.', '- Do not return more than 5 KPIs unless the query explicitly requires many.', '### Example
+ Output:', '```json', '{', '"user_experience_score": "(successful_sessions / total_sessions) * 100",', '"handover_success_rate":
+ "(successful_handovers / attempted_handovers) * 100"', '}', '```']
+ generate_content_config:
+ ref: agent_config
+ before_agent_callback: get_kpis
+ root_agent:
+ type: instance
+ models: [get_model_id]
+ params:
+ name: root_agent
+ description: The root agent for the RAN AI assistant. It is responsible for receiving all top-level user queries, classifying
+ their intent, and delegating them to the appropriate sub-agents for further processing. This agent acts as the primary
+ orchestrator of the RAN AI system.
+ instruction: [You are a methodical and helpful RAN AI assistant. Your primary responsibility is to understand the user's
+ query related to RAN data insights and act accordingly., '---', 1. 
**Intent Classification**, 'Analyze the user''s + raw query and classify it into one of the following intents:', '- "general_insight": Basic data summaries, aggregations, + metric views.', '- "usecase": Specific to defined network use cases (congestion, packet loss, etc.).', '- "anomaly_detection": + Anomaly detection for any metric or KPI.', '- "greeting": User is just saying hello or asking about capabilities.', + '- "failsafe": Irrelevant, unclear, or out-of-scope queries.', '---', '### Intent Definitions:', '- **"general_insight"**: + The user is asking for basic metrics, aggregations, or summaries.', 'Examples:', '- “What’s the average download + speed last week?”', '- "How many cell sites you have data for?"', '- **"usecase"**: The query maps to predefined + analytical use cases.', 'Use cases include:', '- Congestion analysis', '- Summary queries', '- Packet loss', '- + Network health', '- Worst performing cells', '- Handover', '- Call completion', '- Active user insights', '- Cellwide + reports', 'Examples:', '- “Show congestion data for cell 32.”', '- “What’s the active user trend for site 10?”', + '- "Show me top 10 worst performing cells"', '- "summarize the data for the last month"', '- "what is overall handover + success rate at cell site 40?"', '- "what are congestion metrics at cell site 40 for 1st 6 hours on Apr 1st 2025?"', + '- "what is the call drop rate at cell site 70?"', '- "what is the handover success rate at cell site 100?"', '- "Give + me the top 10 worst performing cell sites?"', '- "Show me all important KPIs at cell site 100?"', '- "Analyze cell + performance on few random cells on few random dates in april?"', '- **"anomaly_detection"**: The user is looking + to detect spikes, drops, or outliers.', 'Examples:', '- “Detect abnormal traffic behavior.”', '- “Any anomalies + in latency today?”', '- “Is there congestion in cell 101?”', '- “Any congestion in cell 101 today?”', '- "Is there + any anomaly in the data for cell 101 today?"', '- 
"Interference anomaly in cell 101 today?"', '- **"greeting"**:
+ The user says hello or inquires about capabilities.', 'Examples:', '- “Hi there.”', '- “What can you help me with?”',
+ '- **"failsafe"**: Any other query that doesn''t fit into any of the above categories.', 'Examples:', '- “What’s the
+ weather?”', '- “asdj123”', '- "what KPIs your monitor?"', '---', '### Step 2: Respond or Delegate Based on Intent',
+ 'After classifying, follow this logic:', '- If `"intent"` is **"general_insight"**:', ➤ Acknowledge and transfer to
+ `insights_agent`., '- If `"intent"` is **"usecase"**:', ➤ Acknowledge and transfer to `usecase_agent`., '- If `"intent"`
+ is **"anomaly_detection"**:', ➤ Acknowledge and transfer to `anomaly_detection_agent`., '- If `"intent"` is **"greeting"**:',
+ '➤ Respond warmly. Say: *"Hi! I''m a RAN agent. I can help you explore RAN data and insights—just ask me a question!"*',
+ '- If `"intent"` is **"failsafe"** or an unclear/error query:', '➤ Respond politely. Say: *"I''m not sure how to process
+ that as I am still evolving. Could you try rephrasing your question so that it falls into one of the categories below?" 
+ and list the categories*', '---', '### Key Rules:', '- Always perform classification first and produce the required
+ JSON output.', '- Follow the logic tree **exactly**—do not make assumptions or skip steps.', '- Only use designated
+ downstream agents to act on queries (`insights_agent`, `usecase_agent`, `anomaly_detection_agent`).', '- Stay within
+ the RAN analytics domain.']
+ before_agent_callback: [_load_starting_state, _load_metadata]
+ sub_agents: [insights_agent, usecase_agent, anomaly_detection_agent]
+ usecase_agent:
+ type: instance
+ models: [get_model_id]
+ tools: [recommend_kpi_agent, nl2sql_agent, memorize, memorize_list]
+ params:
+ name: usecase_agent
+ description: This agent is responsible for handling queries related to specific RAN use cases, such as congestion analysis,
+ handover performance, and call completion rates. It utilizes the recommend_kpi_agent and nl2sql_agent to gather necessary
+ data and insights.
+ instruction: [You are a domain-specialized analytics agent for **Radio Access Networks (RAN)**., 'You are designed to
+ handle complex insight queries that involve reasoning, KPI interpretation, and multi-metric analysis.', Your exclusive
+ responsibility is to detect and **solve** use cases using the available schema and KPI definitions. You work strictly
+ within the RAN domain and must not attempt to reason about data or metrics unrelated to mobile networks or radio
+ access systems., '---', '### ✅ RAN Context:', 'RAN datasets typically contain telemetry at cell or sector level,
+ including metrics like:', '- Throughput, PRB utilization, RRC connection counts', '- Signal quality (RSRP, SINR),
+ handovers, call drops', '- Congestion indicators, latency, active users', '- Uplink/downlink counters, scheduling
+ rates', '---', '### 🔍 Your Responsibilities:', 1. **Interpret the User Query**, '- Display "Analyzing query..." 
+ to the user.', '- Carefully analyze the user’s question to understand the underlying analytical objective (e.g.,
+ congestion, packet loss, interference, etc.).', '- If the user''s query is not clear, or if the schema is not clear,
+ ask the user for more details.', 2. **Use `recommend_kpi_agent` Agent tool to get KPIs**, '- Display "Recommending
+ KPIs..." to the user.', '- Pass the user’s query to the `recommend_kpi_agent` agent. This agent returns a dictionary of
+ pre-defined KPIs relevant to the query.', '- Display the KPIs to the user.', 'Example response:', '```json', '{',
+ '"user_experience_score": "(successful_sessions / total_sessions) * 100",', '"handover_success_rate": "(successful_handovers
+ / attempted_handovers) * 100"', '}', '```', 3. **Ask user for time range only if not provided in the query**, '- Ask the
+ user for the date range or the start and end date timestamps of the analysis.', '- The user can provide a time
+ range within a certain day. For example, "6 hours on 2025-03-25".', '- Resolve such a time range to start and end
+ date timestamps.', '- Both start and end date timestamps should be in sync with the {bq_schema}.', '- Display the start
+ and end date to the user.', 4. **Use `nl2sql_agent` Agent tool to generate SQL queries**, '- Display "Generating
+ SQL queries..." to the user.', '- Send the {kpi_dict}, start and end date timestamps to the `nl2sql_agent` tool.',
+ '- While you are generating and running SQL queries, display which type of KPI query you are running. **DONT** keep
+ the user waiting for the SQL query to finish. Return intermediate results to the user.', 5. **Summarize the results**,
+ '- Display "Summarizing results..." 
to the user.', '- Check if data is present in the context variable {bq_data} for
+ each KPI.', '- Summarize the data in a single sentence about each KPI to the user and answer the user query.', '-
+ If data is not present, then let the user know that the data is not available.', '---', '### 🚫 Domain Constraints:',
+ '- You are only allowed to analyze data from the **Radio Access Network** (RAN) domain.', '- Do **not** ask for the time
+ range again if the user has already provided it in their query.', '---', '### 🔁 Process Notes:', '- Think step-by-step before
+ executing queries.', '- Use tools for all computations — do not fabricate results.', '- Always remember that the
+ data is for the month of April 2025.']
+ before_agent_callback: _load_metadata
+ generate_content_config:
+ ref: agent_config
+
+models:
+ get_model_id:
+ type: function
+ agents: [recommend_kpi_agent, nl2sql_agent.0, usecase_agent, root_agent, nl2sql_agent.1, anomaly_detection_agent, nl2sql_agent.2,
+ insights_agent]
+ params:
+ wrapper: get_model_id
+
+tools:
+ bigquery_get_table_schema:
+ type: function
+ agents: [nl2sql_agent.2]
+ params:
+ description: |-
+ Retrieves the schema of a specific BigQuery table.
+
+ Args:
+ project_id: The ID of the GCP project.
+ dataset_id: The ID of the BigQuery dataset.
+ table_id: The ID of the BigQuery table.
+
+ Returns:
+ A JSON string representing the table schema,
+ or an error message if an issue occurs.
+ bigquery_query_data:
+ type: function
+ agents: [nl2sql_agent.0, nl2sql_agent.1, insights_agent]
+ params:
+ description: |-
+ Executes a BigQuery SQL query.
+
+ Args:
+ project_id: The ID of the GCP project.
+ query: The SQL query to execute.
+
+ Returns:
+ A JSON string representing the query results,
+ or an error message if an issue occurs.
+ memorize:
+ type: function
+ agents: [usecase_agent, anomaly_detection_agent, insights_agent]
+ params:
+ description: |-
+ Memorize pieces of information, one key-value pair at a time. 
+ + Args: + key: the label indexing the memory to store the value. + value: the information to be stored. + tool_context: The ADK tool context. + + Returns: + A status message. + memorize_list: + type: function + agents: [usecase_agent] + params: + description: |- + Memorize pieces of information. + + Args: + key: the label indexing the memory to store the value. + value: the information to be stored. + tool_context: The ADK tool context. + + Returns: + A status message. + nl2sql_agent: + type: agent + agents: [usecase_agent, anomaly_detection_agent, insights_agent] + params: + wrapper: AgentTool + agent: nl2sql_agent + recommend_kpi_agent: + type: agent + agents: [usecase_agent] + params: + wrapper: AgentTool + agent: recommend_kpi_agent + +teams: + root_agent: + type: hierarchy + agents: [root_agent, insights_agent, usecase_agent, anomaly_detection_agent] + +imports: + _load_metadata: ran_agent.tools.memory._load_metadata + _load_starting_state: tools.memory._load_starting_state + AdkApp: vertexai.preview.reasoning_engines.AdkApp + Agent: google.adk.agents.Agent + agent: agent + agent_config: ran_agent.shared_libraries.types.agent_config + agent_engines: vertexai.agent_engines + agent_tool: google.adk.tools.agent_tool + anomaly_detection_agent: sub_agents.anomaly_detection_agent.agent.anomaly_detection_agent + Any: typing.Any + app: absl.app + bigquery: google.cloud.bigquery + bigquery_get_table_schema: ran_agent.tools.bq_utils.bigquery_get_table_schema + bigquery_query_data: ran_agent.tools.bq_utils.bigquery_query_data + callback_context: google.adk.agents.callback_context + constants: ran_agent.shared_libraries.constants + datetime: datetime + dotenv: dotenv + exceptions: google.cloud.exceptions + flags: absl.flags + get_kpis: ran_agent.tools.memory.get_kpis + get_model_id: ran_agent.tools.memory.get_model_id + google_exceptions: google.api_core.exceptions + insights_agent: sub_agents.insights_agent.agent.insights_agent + json: json + logging: logging + 
Mapping: collections.abc.Mapping + memorize: ran_agent.tools.memory.memorize + memorize_list: ran_agent.tools.memory.memorize_list + os: os + prompt: prompt + pydantic: pydantic + query_counters: ran_agent.tools.bq_utils.query_counters + query_KPIs: ran_agent.tools.bq_utils.query_KPIs + retry: google.api_core.retry + root_agent: ran_agent.agent.root_agent + Sequence: collections.abc.Sequence + state: google.adk.sessions.state + storage: google.cloud.storage + table: google.cloud.bigquery.table + tools: google.adk.tools + types: google.genai.types + usecase_agent: sub_agents.usecase_agent.agent.usecase_agent + vertexai: vertexai + +functions: + __init__: + type: sync + module: ran_agent.shared_libraries.types + args: [self, model_name, time_series_timestamp_col, time_series_data_col, external_regressors, time_series_id_col, data_frequency, + holiday_region, auto_arima, horizon, date_range, model_description] + _format_schema_field: + type: sync + module: ran_agent.tools.bq_utils + args: [self, field] + params: + returns: dict + _load_bigquery_schema: + type: sync + module: ran_agent.tools.memory + args: [current_callback_context] + _load_metadata: + type: sync + module: ran_agent.tools.memory + args: [current_callback_context] + params: + returns: None + _load_starting_state: + type: sync + module: ran_agent.tools.memory + args: [current_callback_context] + params: + returns: None + _set_initial_states: + type: sync + module: ran_agent.tools.memory + args: [source, target] + params: + returns: None + add_new_rows_to_table: + type: sync + module: ran_agent.tools.bq_utils + args: [self, dataset_id, table_id, rows] + params: + returns: None + bigquery_get_all_table_schemas_in_dataset: + type: sync + module: ran_agent.tools.bq_utils + args: [project_id, dataset_id] + params: + returns: str + bigquery_get_all_table_schemas_in_project: + type: sync + module: ran_agent.tools.bq_utils + args: [project_id] + params: + returns: str + bigquery_get_table_schema: + type: sync + 
module: ran_agent.tools.bq_utils + args: [project_id, dataset_id, table_id] + params: + returns: str + bigquery_query_data: + type: sync + module: ran_agent.tools.bq_utils + args: [project_id, query] + params: + returns: str + create: + type: sync + module: deployment.deploy + args: [env_vars] + params: + returns: None + create_table: + type: sync + module: ran_agent.tools.bq_utils + args: [self, dataset_id, table_id, schema] + params: + returns: bigquery.Table + delete: + type: sync + module: deployment.deploy + args: [resource_id] + params: + returns: None + forget: + type: sync + module: ran_agent.tools.memory + args: [key, value, tool_context] + params: + returns: dict + from_dict: + type: sync + module: ran_agent.shared_libraries.types + args: [data_dict] + from_json: + type: sync + module: ran_agent.shared_libraries.types + args: [data_str] + get_all_table_schemas_in_dataset: + type: sync + module: ran_agent.tools.bq_utils + args: [self, dataset_id] + params: + returns: str + get_all_table_schemas_in_project: + type: sync + module: ran_agent.tools.bq_utils + args: [self] + params: + returns: str + get_kpis: + type: sync + module: ran_agent.tools.memory + args: [current_callback_context] + get_model_id: + type: sync + module: ran_agent.tools.memory + params: + returns: str + get_table_description: + type: sync + module: ran_agent.tools.bq_utils + args: [self, dataset_id, table_id] + params: + returns: str + get_table_schema: + type: sync + module: ran_agent.tools.bq_utils + args: [self, dataset_id, table_id] + params: + returns: str + list_datasets: + type: sync + module: ran_agent.tools.bq_utils + args: [self] + list_tables: + type: sync + module: ran_agent.tools.bq_utils + args: [self, dataset_id] + load_bigquery_schema: + type: sync + module: ran_agent.tools.memory + args: [current_callback_context] + load_metadata: + type: sync + module: ran_agent.tools.memory + args: [current_callback_context] + params: + returns: None + load_starting_state: + type: sync 
+ module: ran_agent.tools.memory + args: [current_callback_context] + params: + returns: None + main: + type: sync + module: deployment.deploy + args: [argv] + params: + returns: None + memorize: + type: sync + module: ran_agent.tools.memory + args: [key, value, tool_context] + params: + returns: dict + memorize_json: + type: sync + module: ran_agent.tools.memory + args: [key, value, tool_context] + params: + returns: dict + memorize_list: + type: sync + module: ran_agent.tools.memory + args: [key, value, tool_context] + params: + returns: dict + query_counters: + type: sync + module: ran_agent.tools.bq_utils + args: [project_id, dataset_id, table_id, counters, start_date, end_date, tool_context] + params: + returns: Mapping + query_data: + type: sync + module: ran_agent.tools.bq_utils + args: [self, query, max_results] + params: + returns: str + query_kpis: + type: sync + module: ran_agent.tools.bq_utils + args: [project_id, dataset_id, table_id, kpi_clause, start_date, end_date, tool_context] + params: + returns: dict + run_query: + type: sync + module: ran_agent.tools.bq_utils + args: [self, query] + params: + returns: table.RowIterator + setup_staging_bucket: + type: sync + module: deployment.deploy + args: [project_id, location, bucket_name] + params: + returns: str + +variables: + BQ_DATASET_ID: + type: env + params: + caller: [os.getenv] + path: [deployment.deploy] + BQ_PROJECT_ID: + type: env + params: + caller: [os.getenv] + path: [deployment.deploy] + CODE_INTERPRETER_EXTENSION_NAME: + type: env + params: + caller: [os.getenv] + path: [deployment.deploy] + GOOGLE_CLOUD_LOCATION: + type: env + params: + caller: [os.getenv] + path: [deployment.deploy] + GOOGLE_CLOUD_PROJECT: + type: env + params: + caller: [os.getenv] + path: [deployment.deploy] + GOOGLE_CLOUD_STORAGE_BUCKET: + type: env + params: + caller: [os.getenv] + path: [deployment.deploy] + +files: + ../profiles/variables.json: + type: pattern + actions: [read] + params: + caller: [os.path.join] + 
pattern: [os.path.dirname(), ../profiles/variables.json] + VARIABLES_PATH: + type: variable + actions: [read] + params: + caller: [open, json.load] + alias: [file]
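A note on the manifest structure above: the `agents:` and `tools:` sections cross-reference each other, so every tool an agent declares should also list that agent back in the corresponding `tools:` entry. The sketch below illustrates how such back-references could be checked. It is not part of the A2AS tooling; the inlined `manifest` dict is an abbreviated excerpt standing in for loading `a2as.yaml` with a YAML parser, and the helper name `dangling_tool_refs` is hypothetical.

```python
# Cross-check agents<->tools back-references in an a2as.yaml-style manifest.
# A small inlined excerpt of the manifest stands in for yaml.safe_load(...).
manifest = {
    "agents": {
        "insights_agent": {"tools": ["nl2sql_agent", "memorize", "bigquery_query_data"]},
        "usecase_agent": {"tools": ["recommend_kpi_agent", "nl2sql_agent", "memorize", "memorize_list"]},
    },
    "tools": {
        "nl2sql_agent": {"agents": ["usecase_agent", "anomaly_detection_agent", "insights_agent"]},
        "memorize": {"agents": ["usecase_agent", "anomaly_detection_agent", "insights_agent"]},
        "memorize_list": {"agents": ["usecase_agent"]},
        "bigquery_query_data": {"agents": ["nl2sql_agent.0", "nl2sql_agent.1", "insights_agent"]},
        "recommend_kpi_agent": {"agents": ["usecase_agent"]},
    },
}

def dangling_tool_refs(m):
    """Return (agent, tool) pairs whose back-reference is missing or broken."""
    problems = []
    for agent, spec in m["agents"].items():
        for tool in spec.get("tools", []):
            entry = m["tools"].get(tool)
            # A tool is dangling if it is absent from the tools section,
            # or if it does not list this agent among its users.
            if entry is None or agent not in entry.get("agents", []):
                problems.append((agent, tool))
    return problems

print(dangling_tool_refs(manifest))  # prints [] when every back-reference is mirrored
```

Run against the full manifest, a check like this would flag, for example, a tool removed from the `tools:` section but still referenced by an agent's `tools:` list.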