Merged
4 changes: 2 additions & 2 deletions docs/opentech/agentstack/pre-work/README.md
Original file line number Diff line number Diff line change
@@ -35,13 +35,13 @@ To run models locally on your machine:
2. Run or pull the Granite model:

```bash
-ollama pull granite4:micro-h
+ollama pull ibm/granite4:micro-h
```

or

```bash
-ollama run granite4:micro-h
+ollama run ibm/granite4:micro-h
```

### Granite model links
2 changes: 1 addition & 1 deletion docs/opentech/agentstack/run/README.md
@@ -62,7 +62,7 @@ agentstack ui
```

You should be prompted to select a model provider and model if you did not already set these.
-For this lab, use Ollama and granite4:micro-h. for the language model.
+For this lab, use Ollama and `ibm/granite4:micro-h` for the language model.

You should see the UI launch in your browser.

4 changes: 2 additions & 2 deletions docs/opentech/beeaiframework/prework/README.md
@@ -21,15 +21,15 @@ Most users can simply download from the Ollama [website](https://ollama.com/down
Please pull the models to be used in the workshop before arriving!

```shell
-ollama pull granite4:micro-h
+ollama pull ibm/granite4:micro-h
```

## Chat with the model

For a quick test, you can use the `ollama` CLI to ask the model a question.

```shell
-ollama run granite4:micro-h "what model am I chatting with and and who created you?"
+ollama run ibm/granite4:micro-h "what model am I chatting with and who created you?"
```

## Install `uv`
2 changes: 1 addition & 1 deletion docs/opentech/llm/lab-1/README.md
@@ -39,7 +39,7 @@ If you see something similar, Open-WebUI is installed correctly! Continue on, if

Click *Getting Started*. Fill out the next screen and click *Create Admin Account*. This will be your login for your local machine. Remember it, because it will be your Open-WebUI configuration login if you want to dig deeper into it after this workshop.

-You should see the Open-WebUI main page now, with `granite4:micro-h` right there in the center!
+You should see the Open-WebUI main page now, with `ibm/granite4:micro-h` right there in the center!

![main screen](../images/openwebui_main_screen.png)

8 changes: 4 additions & 4 deletions docs/opentech/llm/lab-2/README.md
@@ -33,16 +33,16 @@ ollama list

And you'll see a list similar to the following:

-```text
-ollama list
+```console
+$ ollama list
NAME                    ID              SIZE      MODIFIED
-granite4:micro-h        ba791654cc27    1.9 GB    23 hours ago
+ibm/granite4:micro-h    076afb3855dc    1.9 GB    About a minute ago
```

Next, use Ollama to run one of the models:

```shell
-ollama run granite4:micro-h
+ollama run ibm/granite4:micro-h
```

And ask it questions, like this:
8 changes: 4 additions & 4 deletions docs/opentech/llm/lab-5/README.md
@@ -22,14 +22,14 @@ Open up [Open-WebUI](http://localhost:8080/), and you should see something like
If you see this, Open-WebUI is installed correctly and we can continue with configuration. If not, please find a workshop TA or raise your hand and we'll help you ASAP.

-Next as a sanity check, run the following command to confirm you have the [granite4:micro-h](https://ollama.com/library/granite4:micro-h)
+Next, as a sanity check, run the following command to confirm you have the [ibm/granite4:micro-h](https://ollama.com/ibm/granite4:micro-h)
model downloaded in `ollama`. This may take a bit, but we should have a way to copy it directly on your laptop.

```shell
-ollama pull granite4:micro-h
+ollama pull ibm/granite4:micro-h
```

-If you didn't know, the supported languages with `granite4:micro-h` now include:
+If you didn't know, the supported languages with `ibm/granite4:micro-h` now include:

- English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may fine-tune this Granite model for languages beyond these 12.

@@ -45,7 +45,7 @@ And the Capabilities also include:
- Multilingual dialog use cases
- Fill-In-the-Middle (FIM) code completions

-Next click on the down arrow at the top and select the "granite4:micro-h" if it's not already selected.
+Next, click the down arrow at the top and select "ibm/granite4:micro-h" if it's not already selected.

![model selection](../images/openwebui_model_selection.png)

4 changes: 2 additions & 2 deletions docs/opentech/llm/pre-work/README.md
@@ -38,15 +38,15 @@ Most users can simply download from the Ollama [website](https://ollama.com/down
Please pull the models to be used in the workshop before arriving!

```shell
-ollama pull granite4:micro-h
+ollama pull ibm/granite4:micro-h
```

## Chat with the model

For a quick test, you can use the `ollama` CLI to ask the model a question.

```shell
-ollama run granite4:micro-h "what model am I chatting with and and who created you?"
+ollama run ibm/granite4:micro-h "what model am I chatting with and who created you?"
```

## Install Open WebUI
4 changes: 2 additions & 2 deletions docs/opentech/mellea/pre-work/README.md
@@ -37,13 +37,13 @@ Most users can simply download from the Ollama [website](https://ollama.com/down
Please pull the models to be used in the workshop before arriving!

```shell
-ollama pull granite4:micro-h
+ollama pull ibm/granite4:micro-h
```

## Chat with the model

For a quick test, you can use the `ollama` CLI to ask the model a question.

```shell
-ollama run granite4:micro-h "what model am I chatting with and and who created you?"
+ollama run ibm/granite4:micro-h "what model am I chatting with and who created you?"
```
2 changes: 1 addition & 1 deletion opentech/agentstack/src/agent.py
@@ -84,7 +84,7 @@
# Try several models to see how your agent performs. Note that you may need to modify the system prompt for each model, as they all have their own system prompt best practice.

PROVIDER_ID = os.getenv("PROVIDER_ID", "ollama")
-MODEL_ID = os.getenv("MODEL_ID", "granite4:micro-h")
+MODEL_ID = os.getenv("MODEL_ID", "ibm/granite4:micro-h")
MODEL_NAME = ":".join([PROVIDER_ID, MODEL_ID]) if PROVIDER_ID and MODEL_ID else None

# Load the chat model
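For readers following along, here is a minimal, self-contained sketch of how the agent snippet above assembles its model name. The environment-variable names and defaults mirror the diff; everything else is illustrative only:

```python
import os

# Defaults mirror the diff above; both can be overridden via environment variables.
PROVIDER_ID = os.getenv("PROVIDER_ID", "ollama")
MODEL_ID = os.getenv("MODEL_ID", "ibm/granite4:micro-h")

# Join provider and model into a single "provider:model" identifier,
# falling back to None if either piece is empty.
MODEL_NAME = ":".join([PROVIDER_ID, MODEL_ID]) if PROVIDER_ID and MODEL_ID else None

print(MODEL_NAME)  # ollama:ibm/granite4:micro-h (with the default env values)
```

Note that the model tag itself already contains a colon (the `micro-h` variant), so the joined identifier holds two colons; any downstream code that splits on `:` should split only on the first occurrence.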
74 changes: 37 additions & 37 deletions opentech/beeaiframework/notebooks/beeai.ipynb
@@ -171,50 +171,14 @@
},
"source": [
"### *❗* Exercise: Select your Language Model Provider\n",
"\n",
"/\n",
"Change the `provider` and `model` variables to your desired provider and model.\n",
"\n",
"If you select a provider that requires an API key URL or Project_ID, select the key icon on the left menu and set the variables to match those in the userdata.get() function.\n",
"\n",
"Try several models to see how your agent performs. Note that you may need to modify the system prompt for each model, as they all have their own system prompt best practice."
]
},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {
-"id": "YdDRJpoPhvrz"
-},
-"outputs": [],
-"source": [
-"load_dotenv()\n",
-"\n",
-"provider=\"ollama\"\n",
-"\n",
-"# Ollama - No parameters required\n",
-"if provider==\"ollama\":\n",
-" model=\"granite4:micro-h\" # model=\"granite4:tiny-h\"\n",
-" provider_model=provider+\":\"+model\n",
-" llm=ChatModel.from_name(provider_model, ChatModelParameters(temperature=0))\n",
-"\n",
-"# OpenAI - Place OpenAI API Key in Colab Secrets (key icon) as OPENAI_KEY\n",
-"elif provider==\"openai\":\n",
-" model=\"gpt-5-mini\"\n",
-" provider_model=provider+\":\"+model\n",
-" api_key = userdata.get('OPENAI_KEY') #Set secret value using key in left menu\n",
-" llm=ChatModel.from_name(provider_model, ChatModelParameters(temperature=1), api_key=api_key)\n",
-"# WatsonX - Place Project ID, API Key and WatsonX URL in Colab Secrets (key icon)\n",
-"elif provider==\"watsonx\":\n",
-" model=\"ibm/granite-3-8b-instruct\"\n",
-" provider_model=provider+\":\"+model\n",
-" project_id = userdata.get('WATSONX_PROJECT_ID') #Set secret value using key in left menu\n",
-" api_key = userdata.get('WATSONX_APIKEY') #Set secret value using key in left menu\n",
-" base_url = userdata.get('WATSONX_URL') #Set secret value using key in left menu\n",
-" llm=ChatModel.from_name(provider_model, ChatModelParameters(temperature=0), project_id=project_id, api_key=api_key, base_url=base_url)\n",
-"else:\n",
-" print(\"Provider \" + provider + \" undefined\")"
-]
-},
{
"cell_type": "markdown",
"metadata": {
@@ -252,6 +216,42 @@
"Try modifying the system prompt. Customize the \"basic rules\" section to add your own. Note that changes to the system prompt will affect the performance of the model. Creating a great `System Prompt` is an art, not a science."
]
},
+{
+"cell_type": "code",
+"outputs": [],
+"source": [
+"load_dotenv()\n",
+"\n",
+"provider=\"ollama\"\n",
+"\n",
+"# Ollama - No parameters required\n",
+"if provider==\"ollama\":\n",
+" model=\"ibm/granite4:micro-h\" # model=\"ibm/granite4:tiny-h\"\n",
+" provider_model=provider+\":\"+model\n",
+" llm=ChatModel.from_name(provider_model, ChatModelParameters(temperature=0))\n",
+"\n",
+"# OpenAI - Place OpenAI API Key in Colab Secrets (key icon) as OPENAI_KEY\n",
+"elif provider==\"openai\":\n",
+" model=\"gpt-5-mini\"\n",
+" provider_model=provider+\":\"+model\n",
+" api_key = userdata.get('OPENAI_KEY') #Set secret value using key in left menu\n",
+" llm=ChatModel.from_name(provider_model, ChatModelParameters(temperature=1), api_key=api_key)\n",
+"# WatsonX - Place Project ID, API Key and WatsonX URL in Colab Secrets (key icon)\n",
+"elif provider==\"watsonx\":\n",
+" model=\"ibm/granite-3-8b-instruct\"\n",
+" provider_model=provider+\":\"+model\n",
+" project_id = userdata.get('WATSONX_PROJECT_ID') #Set secret value using key in left menu\n",
+" api_key = userdata.get('WATSONX_APIKEY') #Set secret value using key in left menu\n",
+" base_url = userdata.get('WATSONX_URL') #Set secret value using key in left menu\n",
+" llm=ChatModel.from_name(provider_model, ChatModelParameters(temperature=0), project_id=project_id, api_key=api_key, base_url=base_url)\n",
+"else:\n",
+" print(\"Provider \" + provider + \" undefined\")"
+],
+"metadata": {
+"id": "YdDRJpoPhvrz"
+},
+"execution_count": 0
+},
{
"cell_type": "code",
"execution_count": null,
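As a compact illustration of the notebook cell above, this sketch reproduces just the string-building part of the provider dispatch. The `ChatModel.from_name` call is left as a comment because it requires the BeeAI framework and a running backend; the provider and model names come from the diff, and nothing else is from the notebook:

```python
# Build the "provider:model" identifier exactly as the notebook cell does.
provider = "ollama"

if provider == "ollama":
    model = "ibm/granite4:micro-h"  # or "ibm/granite4:tiny-h"
elif provider == "openai":
    model = "gpt-5-mini"
elif provider == "watsonx":
    model = "ibm/granite-3-8b-instruct"
else:
    raise ValueError(f"Provider {provider} undefined")

provider_model = provider + ":" + model
# llm = ChatModel.from_name(provider_model, ChatModelParameters(temperature=0))

print(provider_model)  # ollama:ibm/granite4:micro-h
```

Raising `ValueError` for an unknown provider (instead of the notebook's `print`) fails fast, which is usually preferable outside of an interactive notebook.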
218 changes: 17 additions & 201 deletions opentech/docling/notebooks/Chunking.ipynb

Large diffs are not rendered by default.
