fix - entity too big error for stats #363
Conversation
frikky
left a comment
Just minor stuff to consider. Fine to merge if you're okay with it.
```go
log.Printf("[WARNING] SetOrgStatistics: GCS archive failed for org %s: %s – trimming anyway", id, archiveErr)
}
```

```go
if len(stats.DailyStatistics) > 60 {
```
Is there any chance at all that 60 can be too big as well? Have you checked the average size per day?
This should not happen. GCP Datastore allows a maximum entity size of 1 MB, and storing 60 days of data should stay well under that limit.
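The 1 MB claim can be sanity-checked with a rough back-of-the-envelope calculation. This is only a sketch: the trimmed-down `DailyStatistics` struct and its field names below are assumptions for illustration, not the real Shuffle types, and JSON size is used as a crude proxy for Datastore entity size:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// DailyStatistics is a hypothetical, minimal stand-in for the real struct;
// the field names here are assumptions, not the actual Shuffle schema.
type DailyStatistics struct {
	Date               time.Time `json:"date"`
	AppExecutions      int64     `json:"app_executions"`
	WorkflowExecutions int64     `json:"workflow_executions"`
}

// estimateEntityBytes returns the approximate JSON size of n days of stats,
// a rough proxy for how much of the 1 MiB Datastore entity limit they use.
func estimateEntityBytes(n int) int {
	day := DailyStatistics{
		Date:               time.Now().UTC(),
		AppExecutions:      123456,
		WorkflowExecutions: 7890,
	}
	b, _ := json.Marshal(day)
	return n * len(b)
}

func main() {
	fmt.Printf("~%d bytes for 60 days (Datastore limit: 1048576)\n", estimateEntityBytes(60))
}
```

Even with generous per-day payloads, 60 entries of this shape land in the kilobyte range, far from the 1 MiB ceiling.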
```go
}
dateMap := make(map[string]DailyStatistics, dateMapCap)
for _, d := range existingStats {
	dateMap[d.Date.UTC().Format("2006-01-02")] = d
```
This feels a bit awkward mapping-wise, but at this small scale it should be fine.
```go
gcsWriter := obj.NewWriter(ctx)
if _, writeErr := gcsWriter.Write(mergedBytes); writeErr != nil {
	_ = gcsWriter.Close()
```
Does this actually return a value?
Can't you just defer gcsWriter.Close()?
Okay, changing this :)
@frikky I looked into this and found that using `defer gcsWriter.Close()` silently drops the error returned by `Close()`.
I've used the following instead, which checks and returns the error properly. This makes it easier to debug issues in case of a GCS write failure:

```go
if closeErr := gcsWriter.Close(); closeErr != nil {
	return fmt.Errorf("archiveOldStatsToGCSBucket: failed to close GCS writer for org %s: %w", orgId, closeErr)
}
```
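To see the difference concretely, here is a self-contained sketch. The `flakyWriter` type is a hypothetical stand-in for the GCS writer (not the real `storage.Writer`), chosen because `Close` on a GCS writer performs the final flush and can legitimately fail:

```go
package main

import (
	"errors"
	"fmt"
)

// flakyWriter stands in for a GCS writer whose Close (the final flush) fails.
type flakyWriter struct{}

func (f *flakyWriter) Close() error { return errors.New("upload failed on close") }

// withDefer drops the Close error: a bare defer discards the return value.
func withDefer() error {
	w := &flakyWriter{}
	defer w.Close() // error silently ignored
	return nil
}

// withCheck surfaces the Close error, mirroring the explicit-check pattern.
func withCheck() error {
	w := &flakyWriter{}
	if closeErr := w.Close(); closeErr != nil {
		return fmt.Errorf("failed to close writer: %w", closeErr)
	}
	return nil
}

func main() {
	fmt.Println("defer:", withDefer()) // defer: <nil>
	fmt.Println("check:", withCheck()) // check: failed to close writer: upload failed on close
}
```

An alternative that keeps `defer` while still propagating the error is a named return value (`defer func() { if err == nil { err = w.Close() } }()`), but the explicit check reads more clearly here.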
```go
}

// Sideload GCS overflow stats (entries >60 days old archived from Datastore), cached 30 min.
if project.Environment == "cloud" && len(orgFileBucket) > 0 {
```
"Max size" is also a problem on-prem, though.
Let's make this available on Cloud first. If we should prioritize on-prem as well, let me know :)
@frikky We can merge this PR :)