LCORE-716: integration tests for conversation management endpoints#1362
anik120 wants to merge 1 commit into lightspeed-core:main from Conversation
Walkthrough

Adds three new integration test modules: v1 and v2 conversation endpoint suites and a test fixture. v1 tests mock an AsyncLlamaStack client and override the DB engine/session; v2 tests exercise an in-memory SQLite conversation cache. Tests cover list/get/delete/update and error paths.

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~30 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 1
🧹 Nitpick comments (2)
tests/integration/endpoints/test_conversations_v1_integration.py (1)
548-624: Assert the turn metadata this test seeds.

Right now this only proves chat_history is present. If provider, model, or the turn timestamps stopped being returned, the test would still pass.

Suggested assertions:

```diff
 # Verify response includes turn metadata
 assert response.conversation_id == TEST_CONVERSATION_ID
 assert response.chat_history is not None
+assert len(response.chat_history) == 1
+
+turn = response.chat_history[0]
+assert turn.provider == "test-provider"
+assert turn.model == "test-model"
+assert turn.started_at is not None
+assert turn.completed_at is not None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/integration/endpoints/test_conversations_v1_integration.py` around lines 548-624, test_get_conversation_with_turns_metadata currently only checks response.conversation_id and that response.chat_history exists; update it to assert the seeded turn metadata is returned by get_conversation_endpoint_handler by verifying response.chat_history contains a turn for TEST_CONVERSATION_ID with the expected provider ("test-provider"), model ("test-model"), and present timestamps (started_at and completed_at), so the test will fail if provider, model, or turn timestamps stop being returned.

tests/integration/endpoints/test_conversations_v2_integration.py (1)
90-127: Keep CacheEntry fixtures aligned with the model contract.

models/cache_entry.py only defines the query/response/provider/model/timestamp/tool fields. Passing conversation_id, user_id, and topic_summary into CacheEntry(...) makes these fixtures depend on data the cache entry model does not own, which is easy to misread as "topic summary is being persisted here" when it is not.

Also applies to: 516-550
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration/endpoints/test_conversations_v2_integration.py` around lines 90 - 127, The test fixture create_test_cache_entry is constructing a CacheEntry with fields (conversation_id, user_id, topic_summary) that are not part of the CacheEntry contract in models/cache_entry.py; update create_test_cache_entry to only pass the model-owned fields (query, response, provider, model, referenced_documents, tool_calls, tool_results, started_at, completed_at) when instantiating CacheEntry and remove conversation_id, user_id, and topic_summary from the CacheEntry(...) call (if tests need those values keep them as separate local variables or return a tuple/dict alongside the CacheEntry), ensuring all other usages of create_test_cache_entry are adjusted accordingly.
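The suggested fixture shape can be sketched as follows. This is a minimal stand-in: the dataclass below is hypothetical and only mimics the field list the review attributes to models/cache_entry.py; the real model and its exact fields are assumptions here.

```python
from dataclasses import dataclass

# Hypothetical stand-in for models/cache_entry.py's CacheEntry; per the
# review, the real model owns only query/response/provider/model/
# timestamp/tool fields -- not conversation_id, user_id, or topic_summary.
@dataclass
class CacheEntry:
    query: str
    response: str
    provider: str
    model: str
    started_at: str
    completed_at: str

def create_test_cache_entry(query: str = "what is lightspeed?",
                            response: str = "an assistant"):
    """Build a CacheEntry plus the identifiers the cache layer owns."""
    entry = CacheEntry(
        query=query,
        response=response,
        provider="test-provider",
        model="test-model",
        started_at="2024-01-01T00:00:00Z",
        completed_at="2024-01-01T00:00:01Z",
    )
    # conversation_id / user_id / topic_summary travel alongside the entry
    # rather than inside it, so the fixture can no longer be misread as
    # persisting a topic summary in the cache entry itself.
    context = {"conversation_id": "conv-123", "user_id": "user-1",
               "topic_summary": "greeting"}
    return entry, context
```

Callers that previously read those extra attributes off the entry would instead unpack the returned pair.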
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/integration/endpoints/test_conversations_v1_integration.py`:
- Around line 488-544: The test asserts that an APIStatusError from
mock_llama_stack_client.conversations.items.list should map to a 500, but
get_conversation_endpoint_handler (see conversations_v1.py mapping around the
APIStatusError handling) currently maps APIStatusError with
response.status_code=404 to a 404; update the test to expect HTTP 404 instead of
500 and verify the error detail shape accordingly (i.e., assert
exc_info.value.status_code == status.HTTP_404_NOT_FOUND and that
exc_info.value.detail is the expected dict/structure).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 552e05f7-46cf-405c-a40e-eeab62ea6e0b
📒 Files selected for processing (2)
tests/integration/endpoints/test_conversations_v1_integration.py
tests/integration/endpoints/test_conversations_v2_integration.py
tisnik
left a comment
Please create a 1st PR with refactoring of the existing fixtures. Then these tests will be a bit shorter. Then we can look at how to make these tests even shorter by refactoring common patterns out of them.
```python
@pytest.fixture(name="mock_llama_stack_client")
def mock_llama_stack_client_fixture(
```
Would be nice to refactor it out, as it is used in other tests too; you could make a module with this fixture.
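A hedged sketch of what that shared module could look like. The module path and helper name are illustrative (a hypothetical tests/integration/fixtures.py), and AsyncMock stands in for the mocked AsyncLlamaStack client described in the walkthrough.

```python
import asyncio
from unittest.mock import AsyncMock

# Hypothetical shared helper (e.g. in tests/integration/fixtures.py) so
# each test file no longer declares its own copy of the mocked
# AsyncLlamaStack client.
def make_mock_llama_stack_client() -> AsyncMock:
    client = AsyncMock()
    # AsyncMock auto-creates awaitable child mocks for attribute chains.
    client.conversations.items.list.return_value = []
    return client

# In conftest.py this would be wrapped as the pytest fixture:
#
#   @pytest.fixture(name="mock_llama_stack_client")
#   def mock_llama_stack_client_fixture():
#       yield make_mock_llama_stack_client()

client = make_mock_llama_stack_client()
items = asyncio.run(client.conversations.items.list())
assert items == []
```

Individual tests can still override return values per scenario, since each test gets a fresh mock from the fixture.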
```python
yield mock_client
```

```python
@pytest.fixture(name="patch_db_session", autouse=True)
```
Again, this can probably be refactored. In this case, it needs to be decided whether autouse can still be used.
anik120
left a comment
Peer review feedback from @radofuchs
```python
# Verify response indicates success (local deletion succeeded)
assert response.conversation_id == TEST_CONVERSATION_ID
assert response.success is True
```
Check for 200 status code.
Looks like the handler returning without an exception is what indicates HTTP 200 here; there's no explicit 200 status code to check, @radofuchs.
When the handler is called directly, `response = await delete_conversation_endpoint_handler(...)` gives back a ConversationDeleteResponse model object, not an HTTP response. FastAPI only attaches the HTTP status code when the endpoint is called via an actual HTTP request.
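That distinction can be illustrated with a minimal stand-in. The dataclass and handler below are simplified assumptions, not the project's real types; they only show that a directly awaited handler yields a plain model with no status code attached.

```python
import asyncio
from dataclasses import dataclass

@dataclass
class ConversationDeleteResponse:
    conversation_id: str
    success: bool

# Simplified stand-in for delete_conversation_endpoint_handler: invoked
# directly, it just returns the response model, so there is no HTTP
# status code anywhere on the returned object.
async def delete_conversation_endpoint_handler(conversation_id: str):
    return ConversationDeleteResponse(conversation_id=conversation_id,
                                      success=True)

response = asyncio.run(delete_conversation_endpoint_handler("conv-123"))
assert response.success is True
assert not hasattr(response, "status_code")  # the HTTP layer never ran
```

Asserting a 200 would only make sense in a test that goes through an HTTP test client rather than calling the coroutine directly.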
Force-pushed from 0eb3eca to c757eeb
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/integration/conftest.py`:
- Around line 178-195: Replace the banned unittest.mock.patch usage by accepting
the pytest-mock fixture and using mocker.patch as a context manager: add the
pytest fixture parameter (mocker) to the test function/fixture that yields
test_request, remove the import of unittest.mock.patch, and change the with
patch("authorization.resolvers.NoopAccessResolver.get_actions",
return_value=standard_actions): block to use with
mocker.patch("authorization.resolvers.NoopAccessResolver.get_actions",
return_value=standard_actions): so the NoopAccessResolver.get_actions is stubbed
with standard_actions via pytest-mock.
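pytest-mock's mocker.patch has the same call signature as unittest.mock.patch but undoes the patch automatically at test teardown. The behaviour can be sketched with the stdlib alone; the resolver class below is a hypothetical stand-in for authorization.resolvers.NoopAccessResolver.

```python
from unittest import mock

# Hypothetical stand-in for authorization.resolvers.NoopAccessResolver.
class NoopAccessResolver:
    def get_actions(self):
        return []

standard_actions = ["read", "write"]

# With pytest-mock the fixture would read:
#
#   def test_request(mocker):
#       mocker.patch(
#           "authorization.resolvers.NoopAccessResolver.get_actions",
#           return_value=standard_actions,
#       )
#
# Equivalent behaviour with unittest.mock, shown for illustration only:
with mock.patch.object(NoopAccessResolver, "get_actions",
                       return_value=standard_actions):
    assert NoopAccessResolver().get_actions() == ["read", "write"]

assert NoopAccessResolver().get_actions() == []  # patch reverted
```

The mocker form drops the explicit with-block, which is why it pairs well with yielding fixtures.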
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 60fefba2-83b9-409e-b146-cc5b4dc91859
📒 Files selected for processing (3)
tests/integration/conftest.py
tests/integration/endpoints/test_conversations_v1_integration.py
tests/integration/endpoints/test_conversations_v2_integration.py
✅ Files skipped from review due to trivial changes (1)
- tests/integration/endpoints/test_conversations_v2_integration.py
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/integration/endpoints/test_conversations_v1_integration.py
Force-pushed from c757eeb to 3abadd4
This PR introduces comprehensive integration tests for conversation management endpoints in
both v1 (database + Llama Stack) and v2 (cache-based) APIs.
**Test Coverage:**
- List conversations (GET `/v{1,2}/conversations`)
- Get conversation details (GET `/v{1,2}/conversations/{id}`)
- Delete conversation (DELETE `/v{1,2}/conversations/{id}`)
- Update conversation (PUT `/v{1,2}/conversations/{id}`)
**Scenarios Tested:**
- Success paths with real database/cache integration
- Error handling (invalid IDs, not found, connection errors, API errors)
- Edge cases (empty lists, non-existent resources, cache unavailable)
- Special features (tool calls, turn metadata, multi-turn conversations)
**Notes**
- Tests use real database sessions (SQLite in-memory) and cache instances for true integration testing
- Only external Llama Stack client is mocked to avoid external dependencies
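The "real database, mocked client" pattern these notes describe can be sketched with stdlib sqlite3. Table and column names below are hypothetical, not the project's schema; the point is that an in-memory database exercises real SQL reads and writes with no external service.

```python
import sqlite3

def make_session() -> sqlite3.Connection:
    # In-memory SQLite: a fresh, isolated database per test, gone when
    # the connection closes.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE conversations (id TEXT PRIMARY KEY, topic_summary TEXT)"
    )
    return conn

conn = make_session()
conn.execute("INSERT INTO conversations VALUES (?, ?)",
             ("conv-123", "greeting"))
rows = conn.execute("SELECT id, topic_summary FROM conversations").fetchall()
assert rows == [("conv-123", "greeting")]

conn.execute("DELETE FROM conversations WHERE id = ?", ("conv-123",))
assert conn.execute("SELECT COUNT(*) FROM conversations").fetchone()[0] == 0
```

Deletion and not-found paths then hit real SQL behaviour instead of mock return values.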
Signed-off-by: Anik Bhattacharjee <anbhatta@redhat.com>
Force-pushed from 3abadd4 to 026a639
Fyi @radofuchs I had to introduce a new fixture. The existing one includes elevated "OTHERS" permissions, so it was not possible to use it to test authenticated users' permissions. The new fixture excludes all elevated "OTHERS" permissions, which allowed testing that users can only access their own data.
PR Reviewer Guide 🔍

Here are some key observations to aid the review process:
Description
This PR introduces comprehensive integration tests for conversation management endpoints in both v1 (database + Llama Stack) and v2 (cache-based) APIs.
Test Coverage:
- List conversations (GET `/v{1,2}/conversations`)
- Get conversation details (GET `/v{1,2}/conversations/{id}`)
- Delete conversation (DELETE `/v{1,2}/conversations/{id}`)
- Update conversation (PUT `/v{1,2}/conversations/{id}`)

Scenarios Tested:
Notes
Type of change
Tools used to create PR
Identify any AI code assistants used in this PR (for transparency and review context)
Related Tickets & Documents
Checklist before requesting a review
Testing
All 33 integration tests introduced here pass in 0.62s:
Summary by CodeRabbit