
Elasticsearch sync Job manage table #107

Closed
vanitha1822 wants to merge 3 commits into main from nd/vs/es_update

Conversation

Member

@vanitha1822 vanitha1822 commented Mar 19, 2026

📋 Description

JIRA ID:

Some columns were missed earlier. They have been added in the same file to preserve the column order.


✅ Type of Change

  • 🐞 Bug fix (non-breaking change which resolves an issue)

Summary by CodeRabbit

  • New Features

    • Enhanced job records with additional metadata and timestamps (created/updated times, trigger info) and clarified status/default values for clearer job history.
  • Chores

    • Updated database schema: adjusted column types/nullability/defaults and revised indexing strategy to reflect the schema changes.


coderabbitai bot commented Mar 19, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 4e184223-0690-4f59-9a86-d95a25406f8f

📥 Commits

Reviewing files that changed from the base of the PR and between 817155a and d0a4cf8.

📒 Files selected for processing (1)
  • src/main/resources/db/migration/dbidentity/V7__ELASTICSEARCH.sql

📝 Walkthrough

Walkthrough

Alters the Elasticsearch job tracking table schema: changes column types and nullability/defaults, adds triggered_by, created_date, and last_updated columns, restructures indexes, and sets explicit MySQL engine/charset/collation options.

Changes

Cohort / File(s) Summary
Database Migration
src/main/resources/db/migration/dbidentity/V7__ELASTICSEARCH.sql
Replaces table creation with MySQL-specific DDL: changes current_offset from BIGINT to int, converts started_at/completed_at from DATETIME to nullable timestamp, removes NOT NULL from counter columns while keeping defaults, sets status default 'PENDING', adds triggered_by, created_date (CURRENT_TIMESTAMP), and last_updated (CURRENT_TIMESTAMP ON UPDATE), removes old indexes and adds idx_status, idx_created_date, and idx_job_type, and specifies ENGINE=InnoDB, CHARSET=utf8mb4, COLLATE=utf8mb4_unicode_ci.
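The summary above can be sketched as a hypothetical reconstruction of the migration DDL. The table name, varchar lengths, and any column not explicitly named in the summary (e.g., `id`, `job_type`) are assumptions; only the column types, defaults, indexes, and table options described above are taken from the review.

```sql
-- Hypothetical sketch of V7__ELASTICSEARCH.sql based on the walkthrough;
-- table name and unmentioned columns are illustrative assumptions.
CREATE TABLE `elasticsearch_sync_job` (
  `id` bigint NOT NULL AUTO_INCREMENT,
  `job_type` varchar(64) DEFAULT NULL,
  `status` varchar(32) DEFAULT 'PENDING',
  `current_offset` int DEFAULT '0',                -- changed from BIGINT to int
  `total_records` bigint DEFAULT '0',              -- NOT NULL removed, default kept
  `processed_records` bigint DEFAULT '0',          -- NOT NULL removed, default kept
  `started_at` timestamp NULL DEFAULT NULL,        -- was DATETIME, now nullable timestamp
  `completed_at` timestamp NULL DEFAULT NULL,      -- was DATETIME, now nullable timestamp
  `triggered_by` varchar(64) DEFAULT NULL,         -- new column
  `created_date` timestamp NULL DEFAULT CURRENT_TIMESTAMP,  -- new column
  `last_updated` timestamp NULL DEFAULT CURRENT_TIMESTAMP
      ON UPDATE CURRENT_TIMESTAMP,                 -- new column
  PRIMARY KEY (`id`),
  KEY `idx_status` (`status`),
  KEY `idx_created_date` (`created_date`),
  KEY `idx_job_type` (`job_type`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
```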

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 I hopped through rows and fields tonight,
I nudged defaults and timestamps right,
New columns sprout, old indexes rest,
A tidy table, trimmed and dressed.
sniffle — schema snug and bright.

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title is partially related to the changeset: it names the Elasticsearch sync job management table, which is the subject of the change, but "manage table" is vague about whether the table schema is being created, updated, or managed.
  • Docstring Coverage: ✅ Passed. No functions were found in the changed files, so the docstring coverage check was skipped.


@vanitha1822 vanitha1822 requested a review from paras-dba March 19, 2026 08:59
@vanitha1822 vanitha1822 self-assigned this Mar 19, 2026

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
src/main/resources/db/migration/dbidentity/V7__ELASTICSEARCH.sql (1)

9-9: Consider using BIGINT for current_offset for consistency.

current_offset is defined as int (max ~2.1 billion) while other counters like total_records, processed_records are BIGINT. If the offset could potentially exceed INT's range for large datasets, this may cause overflow issues.

Suggested change for consistency:

```diff
-  `current_offset` int DEFAULT '0',
+  `current_offset` bigint DEFAULT '0',
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/resources/db/migration/dbidentity/V7__ELASTICSEARCH.sql` at line 9,
Change the current_offset column from INT to BIGINT to match other counters and
avoid overflow: update the column definition in the V7__ELASTICSEARCH migration
(column name current_offset) to use BIGINT with the same default and
nullability, and review any places that cast or assume INT (e.g.,
queries/variables that reference current_offset or use it to compute totals) to
ensure they handle 64-bit values.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/main/resources/db/migration/dbidentity/V7__ELASTICSEARCH.sql`:
- Line 22: The migration contains a hardcoded AUTO_INCREMENT=54 in the table
definition (the token "AUTO_INCREMENT=54"); remove the AUTO_INCREMENT=54 clause
from the CREATE/ALTER TABLE statement so MySQL can manage the next
auto-increment value automatically, leaving the ENGINE and CHARSET/COLLATE
settings intact (i.e., update the line containing "AUTO_INCREMENT=54" to omit
that clause).
- Line 22: The SQL CREATE statement in migration V7__ELASTICSEARCH.sql ends with
") ENGINE=InnoDB AUTO_INCREMENT=54 DEFAULT CHARSET=utf8mb4
COLLATE=utf8mb4_0900_ai_ci" but lacks a terminating semicolon; update that
closing statement in V7__ELASTICSEARCH.sql to append a semicolon so the
statement ends with "...COLLATE=utf8mb4_0900_ai_ci;" ensuring the migration SQL
is properly terminated.
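Taken together, the two inline fixes above (dropping the hardcoded AUTO_INCREMENT and terminating the statement) would leave the closing line of the CREATE TABLE looking like this sketch, with the column list unchanged:

```sql
-- Before: ) ENGINE=InnoDB AUTO_INCREMENT=54 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
-- After (AUTO_INCREMENT clause removed, semicolon added):
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
```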


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a4479f8a-f8d3-4964-8e91-6b16a78d0580

📥 Commits

Reviewing files that changed from the base of the PR and between ea5c345 and 817155a.

📒 Files selected for processing (1)
  • src/main/resources/db/migration/dbidentity/V7__ELASTICSEARCH.sql

@sonarqubecloud

@vanitha1822 (Member, Author) commented:

No need to create the table again. In the next PR, add the missing columns in the correct positions.
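The follow-up the author describes — adding the missed columns in place rather than recreating the table — could be sketched as an ALTER TABLE migration. The table name, anchor columns for AFTER, and the next migration version are assumptions for illustration:

```sql
-- Hypothetical V8 migration: add the missed columns at the desired
-- positions instead of recreating the table. Names are illustrative.
ALTER TABLE `elasticsearch_sync_job`
  ADD COLUMN `triggered_by` varchar(64) DEFAULT NULL
      AFTER `completed_at`,
  ADD COLUMN `created_date` timestamp NULL DEFAULT CURRENT_TIMESTAMP
      AFTER `triggered_by`,
  ADD COLUMN `last_updated` timestamp NULL DEFAULT CURRENT_TIMESTAMP
      ON UPDATE CURRENT_TIMESTAMP
      AFTER `created_date`;
```

MySQL's `AFTER` clause places each new column at an explicit position, which matches the stated goal of maintaining column order without a table rebuild.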
