src/main/resources/db/migration/dbidentity/V12__Elasticsearch_Columns.sql (new file)
@@ -0,0 +1,124 @@
USE db_identity;

SET @dbname = 'db_identity';
SET @tablename = 't_elasticsearch_sync_job';

-- Add triggered_by
SET @columnname = 'triggered_by';
SET @preparedStatement = (SELECT IF(
(SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = @dbname AND TABLE_NAME = @tablename AND COLUMN_NAME = @columnname) > 0,
'SELECT 1',

-- Check failure on line 11 in src/main/resources/db/migration/dbidentity/V12__Elasticsearch_Columns.sql
-- SonarQubeCloud / SonarCloud Code Analysis: Define a constant instead of duplicating this literal 9 times.
-- See more on https://sonarcloud.io/project/issues?id=PSMRI_Amrit-DB&issues=AZ0ejtCRzF9ecMqza8PU&open=AZ0ejtCRzF9ecMqza8PU&pullRequest=112
CONCAT('ALTER TABLE ', @dbname, '.', @tablename,

-- Check failure on line 12 in src/main/resources/db/migration/dbidentity/V12__Elasticsearch_Columns.sql
-- SonarQubeCloud / SonarCloud Code Analysis: Define a constant instead of duplicating this literal 3 times.
-- See more on https://sonarcloud.io/project/issues?id=PSMRI_Amrit-DB&issues=AZ0ejtCRzF9ecMqza8PW&open=AZ0ejtCRzF9ecMqza8PW&pullRequest=112
' ADD COLUMN triggered_by VARCHAR(100) DEFAULT NULL AFTER error_message;')
));
PREPARE stmt FROM @preparedStatement;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

-- Add created_date
SET @columnname = 'created_date';
SET @preparedStatement = (SELECT IF(
(SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = @dbname AND TABLE_NAME = @tablename AND COLUMN_NAME = @columnname) > 0,
'SELECT 1',
CONCAT('ALTER TABLE ', @dbname, '.', @tablename,
' ADD COLUMN created_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER triggered_by;')
));
PREPARE stmt FROM @preparedStatement;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
Comment on lines +19 to +30

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's see the full context of both migration files
cd src/main/resources/db/migration/dbidentity && ls -la V*.sql | head -20

Repository: PSMRI/AMRIT-DB

Length of output: 1026


🏁 Script executed:

# Check the V7 migration to understand the existing schema
cat src/main/resources/db/migration/dbidentity/V7__ELASTICSEARCH.sql

Repository: PSMRI/AMRIT-DB

Length of output: 952


🏁 Script executed:

# Check the V12 migration file in full
cat src/main/resources/db/migration/dbidentity/V12__Elasticsearch_Columns.sql

Repository: PSMRI/AMRIT-DB

Length of output: 4695


🌐 Web query:

MySQL ADD COLUMN NOT NULL DEFAULT CURRENT_TIMESTAMP existing rows behavior

πŸ’‘ Result:

When adding a column to an existing MySQL table using ALTER TABLE ADD COLUMN with NOT NULL and DEFAULT CURRENT_TIMESTAMP, the existing rows are populated with the default value, which is the current timestamp at the time the ALTER statement executes.

Backfill created_date from started_at before making it non-null.

Line 26 will stamp every pre-existing job record with the migration execution time, not the row's original start time. When MySQL executes ADD COLUMN created_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, all existing rows are populated with the current timestamp at that momentβ€”not the historical started_at value. This causes two problems: (1) historical chronology is lost, and (2) the idx_created_date index will be built on incorrect timestamps.

Add created_date as nullable first, backfill from started_at, then alter it to NOT NULL:

Suggested fix
-         ' ADD COLUMN created_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER triggered_by;')
+         ' ADD COLUMN created_date TIMESTAMP NULL AFTER triggered_by;')
 ));
 PREPARE stmt FROM @preparedStatement;
 EXECUTE stmt;
 DEALLOCATE PREPARE stmt;
+
+UPDATE db_identity.t_elasticsearch_sync_job
+SET created_date = COALESCE(started_at, CURRENT_TIMESTAMP)
+WHERE created_date IS NULL;
+
+ALTER TABLE db_identity.t_elasticsearch_sync_job
+MODIFY COLUMN created_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP;
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-- Add created_date
SET @columnname = 'created_date';
SET @preparedStatement = (SELECT IF(
(SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = @dbname AND TABLE_NAME = @tablename AND COLUMN_NAME = @columnname) > 0,
'SELECT 1',
CONCAT('ALTER TABLE ', @dbname, '.', @tablename,
' ADD COLUMN created_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER triggered_by;')
));
PREPARE stmt FROM @preparedStatement;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
-- Add created_date
SET @columnname = 'created_date';
SET @preparedStatement = (SELECT IF(
(SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = @dbname AND TABLE_NAME = @tablename AND COLUMN_NAME = @columnname) > 0,
'SELECT 1',
CONCAT('ALTER TABLE ', @dbname, '.', @tablename,
' ADD COLUMN created_date TIMESTAMP NULL AFTER triggered_by;')
));
PREPARE stmt FROM @preparedStatement;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
UPDATE db_identity.t_elasticsearch_sync_job
SET created_date = COALESCE(started_at, CURRENT_TIMESTAMP)
WHERE created_date IS NULL;
ALTER TABLE db_identity.t_elasticsearch_sync_job
MODIFY COLUMN created_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP;
πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `src/main/resources/db/migration/dbidentity/V12__Elasticsearch_Columns.sql`
around lines 19-30: the migration currently adds created_date as NOT NULL with
DEFAULT CURRENT_TIMESTAMP which stamps existing rows with the migration time;
instead, first add created_date as a nullable column (use the same
`@preparedStatement` / PREPARE stmt pattern to execute an ALTER TABLE ... ADD
COLUMN created_date TIMESTAMP NULL AFTER triggered_by), then run an UPDATE to
backfill created_date from started_at (e.g., UPDATE <table> SET created_date =
started_at WHERE created_date IS NULL), and finally ALTER TABLE to make
created_date NOT NULL with DEFAULT CURRENT_TIMESTAMP and then create
idx_created_date; keep using the same @preparedStatement / PREPARE stmt / EXECUTE
stmt / DEALLOCATE PREPARE pattern around each DDL statement and UPDATE.
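The nullable-first sequence described above can be sketched end to end. The following is a hypothetical demo using Python's sqlite3 as a stand-in for MySQL (table and column names mirror the migration; SQLite has no `MODIFY COLUMN`, so the final NOT NULL step is shown only as a comment):

```python
import sqlite3

# Stand-in schema: only the columns relevant to the backfill.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE t_elasticsearch_sync_job (
    id INTEGER PRIMARY KEY,
    started_at TEXT)""")
conn.executemany(
    "INSERT INTO t_elasticsearch_sync_job (id, started_at) VALUES (?, ?)",
    [(1, "2024-01-01 10:00:00"), (2, None)],
)

# Step 1: add created_date as nullable -- pre-existing rows get NULL
# instead of being stamped with the migration execution time.
conn.execute("ALTER TABLE t_elasticsearch_sync_job ADD COLUMN created_date TEXT")

# Step 2: backfill historical rows from started_at, falling back to now.
conn.execute("""UPDATE t_elasticsearch_sync_job
    SET created_date = COALESCE(started_at, CURRENT_TIMESTAMP)
    WHERE created_date IS NULL""")

# Step 3 (MySQL only, for completeness):
# ALTER TABLE ... MODIFY COLUMN created_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP;

rows = conn.execute(
    "SELECT id, created_date FROM t_elasticsearch_sync_job ORDER BY id"
).fetchall()
print(rows[0])  # row 1 keeps its historical started_at value
```

Run against MySQL, the equivalent three statements preserve each job's original chronology, so the idx_created_date index is built on meaningful timestamps.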


-- Add last_updated
SET @columnname = 'last_updated';
SET @preparedStatement = (SELECT IF(
(SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = @dbname AND TABLE_NAME = @tablename AND COLUMN_NAME = @columnname) > 0,
'SELECT 1',
CONCAT('ALTER TABLE ', @dbname, '.', @tablename,
' ADD COLUMN last_updated TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP AFTER created_date;')
));
PREPARE stmt FROM @preparedStatement;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

-- Modify columns (always execute)
ALTER TABLE db_identity.t_elasticsearch_sync_job
MODIFY COLUMN status VARCHAR(50) NOT NULL DEFAULT 'PENDING',
MODIFY COLUMN total_records BIGINT DEFAULT 0,
MODIFY COLUMN processed_records BIGINT DEFAULT 0,
MODIFY COLUMN success_count BIGINT DEFAULT 0,
MODIFY COLUMN failure_count BIGINT DEFAULT 0,
MODIFY COLUMN current_offset INT DEFAULT 0,
Comment on lines +46 to +52

⚠️ Potential issue | πŸ”΄ Critical

🧩 Analysis chain

🏁 Script executed:

fd "V7__ELASTICSEARCH.sql" src/

Repository: PSMRI/AMRIT-DB

Length of output: 122


🏁 Script executed:

fd "V12__Elasticsearch_Columns.sql" src/

Repository: PSMRI/AMRIT-DB

Length of output: 131


🏁 Script executed:

cat -n src/main/resources/db/migration/dbidentity/V7__ELASTICSEARCH.sql

Repository: PSMRI/AMRIT-DB

Length of output: 1092


🏁 Script executed:

cat -n src/main/resources/db/migration/dbidentity/V12__Elasticsearch_Columns.sql

Repository: PSMRI/AMRIT-DB

Length of output: 5563


Fix the counter column types and constraints in this ALTER statement.

Line 52 narrows current_offset from BIGINT to INT; the original V7 schema explicitly defined it as BIGINT NOT NULL DEFAULT 0 to prevent offset overflow on large jobs. The same block also removes NOT NULL from all five counter columns (lines 48–51), which violates the original schema design. On large sync jobs, the INT offset can overflow, and missing NOT NULL constraints allow unexpected NULL values in counter fields.

πŸ”§ Restore original types and NOT NULL constraints
 ALTER TABLE db_identity.t_elasticsearch_sync_job
 MODIFY COLUMN status VARCHAR(50) NOT NULL DEFAULT 'PENDING',
- MODIFY COLUMN total_records BIGINT DEFAULT 0,
- MODIFY COLUMN processed_records BIGINT DEFAULT 0,
- MODIFY COLUMN success_count BIGINT DEFAULT 0,
- MODIFY COLUMN failure_count BIGINT DEFAULT 0,
- MODIFY COLUMN current_offset INT DEFAULT 0,
+ MODIFY COLUMN total_records BIGINT NOT NULL DEFAULT 0,
+ MODIFY COLUMN processed_records BIGINT NOT NULL DEFAULT 0,
+ MODIFY COLUMN success_count BIGINT NOT NULL DEFAULT 0,
+ MODIFY COLUMN failure_count BIGINT NOT NULL DEFAULT 0,
+ MODIFY COLUMN current_offset BIGINT NOT NULL DEFAULT 0,
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
ALTER TABLE db_identity.t_elasticsearch_sync_job
MODIFY COLUMN status VARCHAR(50) NOT NULL DEFAULT 'PENDING',
MODIFY COLUMN total_records BIGINT DEFAULT 0,
MODIFY COLUMN processed_records BIGINT DEFAULT 0,
MODIFY COLUMN success_count BIGINT DEFAULT 0,
MODIFY COLUMN failure_count BIGINT DEFAULT 0,
MODIFY COLUMN current_offset INT DEFAULT 0,
ALTER TABLE db_identity.t_elasticsearch_sync_job
MODIFY COLUMN status VARCHAR(50) NOT NULL DEFAULT 'PENDING',
MODIFY COLUMN total_records BIGINT NOT NULL DEFAULT 0,
MODIFY COLUMN processed_records BIGINT NOT NULL DEFAULT 0,
MODIFY COLUMN success_count BIGINT NOT NULL DEFAULT 0,
MODIFY COLUMN failure_count BIGINT NOT NULL DEFAULT 0,
MODIFY COLUMN current_offset BIGINT NOT NULL DEFAULT 0,
πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `src/main/resources/db/migration/dbidentity/V12__Elasticsearch_Columns.sql`
around lines 46-52: the ALTER statement incorrectly narrows current_offset to
INT and drops NOT NULL on counters; update the ALTER for table
db_identity.t_elasticsearch_sync_job so that current_offset is BIGINT NOT NULL
DEFAULT 0 and the counter columns total_records, processed_records,
success_count, and failure_count are defined as BIGINT NOT NULL DEFAULT 0 (leave
status as VARCHAR(50) NOT NULL DEFAULT 'PENDING') to restore the original types
and NOT NULL constraints.
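The overflow risk above is easy to quantify. A small illustrative Python check of the signed 32-bit range MySQL uses for INT (the bounds are from the MySQL integer-type documentation; the helper function is hypothetical):

```python
# MySQL's signed INT holds -2**31 .. 2**31 - 1; BIGINT holds -2**63 .. 2**63 - 1.
INT_MAX = 2**31 - 1        # 2,147,483,647
BIGINT_MAX = 2**63 - 1     # 9,223,372,036,854,775,807

def fits_in_int(offset: int) -> bool:
    """True if the offset is representable as a MySQL signed INT."""
    return -2**31 <= offset <= INT_MAX

# A sync job paging past ~2.1 billion records would overflow an INT
# current_offset, while BIGINT has ample headroom.
print(fits_in_int(2_000_000_000))   # True
print(fits_in_int(3_000_000_000))   # False
print(3_000_000_000 <= BIGINT_MAX)  # True
```

This is why V7 declared current_offset as BIGINT in the first place; narrowing it in V12 reintroduces the overflow window.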

MODIFY COLUMN started_at TIMESTAMP NULL DEFAULT NULL,
MODIFY COLUMN completed_at TIMESTAMP NULL DEFAULT NULL,
MODIFY COLUMN estimated_time_remaining BIGINT NULL AFTER last_updated,
MODIFY COLUMN processing_speed DOUBLE NULL AFTER estimated_time_remaining;

-- Drop indexes if exist
SET @indexname = 'idx_job_status';
SET @preparedStatement = (SELECT IF(
(SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_SCHEMA = @dbname AND TABLE_NAME = @tablename AND INDEX_NAME = @indexname) > 0,
CONCAT('DROP INDEX ', @indexname, ' ON ', @dbname, '.', @tablename),

-- Check failure on line 63 in src/main/resources/db/migration/dbidentity/V12__Elasticsearch_Columns.sql
-- SonarQubeCloud / SonarCloud Code Analysis: Define a constant instead of duplicating this literal 3 times.
-- See more on https://sonarcloud.io/project/issues?id=PSMRI_Amrit-DB&issues=AZ0ejtCRzF9ecMqza8PV&open=AZ0ejtCRzF9ecMqza8PV&pullRequest=112
'SELECT 1'
));
PREPARE stmt FROM @preparedStatement;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

SET @indexname = 'idx_started_at';
SET @preparedStatement = (SELECT IF(
(SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_SCHEMA = @dbname AND TABLE_NAME = @tablename AND INDEX_NAME = @indexname) > 0,
CONCAT('DROP INDEX ', @indexname, ' ON ', @dbname, '.', @tablename),
'SELECT 1'
));
PREPARE stmt FROM @preparedStatement;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

SET @indexname = 'idx_status_started_at';
SET @preparedStatement = (SELECT IF(
(SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_SCHEMA = @dbname AND TABLE_NAME = @tablename AND INDEX_NAME = @indexname) > 0,
CONCAT('DROP INDEX ', @indexname, ' ON ', @dbname, '.', @tablename),
'SELECT 1'
));
PREPARE stmt FROM @preparedStatement;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

-- Add new indexes (only if not exist)
SET @indexname = 'idx_status';
SET @preparedStatement = (SELECT IF(
(SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_SCHEMA = @dbname AND TABLE_NAME = @tablename AND INDEX_NAME = @indexname) > 0,
'SELECT 1',
CONCAT('CREATE INDEX idx_status ON ', @dbname, '.', @tablename, ' (status);')
));
PREPARE stmt FROM @preparedStatement;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

SET @indexname = 'idx_created_date';
SET @preparedStatement = (SELECT IF(
(SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_SCHEMA = @dbname AND TABLE_NAME = @tablename AND INDEX_NAME = @indexname) > 0,
'SELECT 1',
CONCAT('CREATE INDEX idx_created_date ON ', @dbname, '.', @tablename, ' (created_date);')
));
PREPARE stmt FROM @preparedStatement;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

SET @indexname = 'idx_job_type';
SET @preparedStatement = (SELECT IF(
(SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_SCHEMA = @dbname AND TABLE_NAME = @tablename AND INDEX_NAME = @indexname) > 0,
'SELECT 1',
CONCAT('CREATE INDEX idx_job_type ON ', @dbname, '.', @tablename, ' (job_type);')
));
PREPARE stmt FROM @preparedStatement;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;