NEW @W-20485780@ - Add Fixes and Suggestions support in eslint#441
nikhil-mittal-165 wants to merge 20 commits into dev from
Conversation
Define Fix and Suggestion types in engine-api, add core interfaces and implementations, update JSON/SARIF/XML output formats, and pass includeFixes/includeSuggestions flags through RunOptions to all engines.
Convert ESLint native Rule.Fix and Linter.LintSuggestion into the engine-agnostic Fix and Suggestion types. Add byte-offset to line/column conversion, file content caching, and Fixable tag on rule descriptions.
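As an illustration only, the engine-agnostic shapes described above might look roughly like this. The interface and field names below are assumptions for the sketch, not the actual engine-api definitions:

```typescript
// Illustrative sketch only -- not the actual engine-api definitions.
// A Fix pairs a source range with replacement text; a Suggestion adds
// a human-readable message, mirroring ESLint's Linter.LintSuggestion.
interface CodeLocation {
    startLine: number;
    startColumn: number;
    endLine: number;
    endColumn: number;
}

interface Fix {
    location: CodeLocation;   // region of the file to replace
    fixedCode: string;        // replacement text for that region
}

interface Suggestion extends Fix {
    message: string;          // description shown to the user
}

// Example: replace "var" (line 1, columns 1-4) with "let".
const exampleFix: Fix = {
    location: { startLine: 1, startColumn: 1, endLine: 1, endColumn: 4 },
    fixedCode: 'let',
};
```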
The implementation treats byte offsets as character indices, which will produce incorrect results for files with multi-byte characters. Do you agree?
Assumes UTF-8 encoding. ESLint may analyze files with different encodings? |
We always read the source file in UTF-8 encoding (see this: ( )). This gets internally converted to JavaScript strings, and the ranges are calculated based on character indices, which are uniform regardless of the source encoding. I have added a test case for multibyte files as well; please check the commit titled "multibyte test case".
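To make the point above concrete, here is a minimal sketch of offset-to-line/column conversion over a decoded JavaScript string. The helper names (`computeLineStartOffsets`, `offsetToLocation`) are hypothetical, not the PR's actual functions; the idea is that once the file is decoded, ESLint ranges index into the string, so the on-disk byte encoding no longer matters:

```typescript
// Hypothetical sketch: ESLint fix ranges index into the decoded
// JavaScript string, so line/column can be recovered from precomputed
// line start offsets regardless of the file's on-disk encoding.

function computeLineStartOffsets(content: string): number[] {
    const starts: number[] = [0];
    for (let i = 0; i < content.length; i++) {
        if (content[i] === '\n') {
            starts.push(i + 1); // next line begins after the newline
        }
    }
    return starts;
}

function offsetToLocation(
    offset: number,
    lineStartOffsets: number[]
): { line: number; column: number } {
    // Binary search for the last line start <= offset.
    let lo = 0;
    let hi = lineStartOffsets.length - 1;
    while (lo < hi) {
        const mid = Math.ceil((lo + hi) / 2);
        if (lineStartOffsets[mid] <= offset) {
            lo = mid;
        } else {
            hi = mid - 1;
        }
    }
    // 1-based line and column, as ESLint reports them.
    return { line: lo + 1, column: offset - lineStartOffsets[lo] + 1 };
}

// Multibyte example: 'é' is 2 bytes in UTF-8 on disk, but one index
// in the decoded string, so string offsets stay consistent.
const src = 'const é = 1;\nconst x = 2;';
const starts = computeLineStartOffsets(src);
console.log(offsetToLocation(13, starts)); // start of line 2
```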
function getFileContent(filePath: string, cache: Map<string, string>): string {
    if (!cache.has(filePath)) {
        cache.set(filePath, fsSync.readFileSync(filePath, 'utf8'));
This is a big no-no... you will kill the performance and responsiveness of Code Analyzer if you add sync-based operations. Why are you not using promises?
Fixed; it will now consume the content directly from lintResult.
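For reference, the promise-based alternative the reviewer is asking about would look something like this. This is a minimal sketch, not the merged fix (which avoids the disk read entirely by taking the source from the lint result):

```typescript
import { promises as fs } from 'fs';

// Sketch only: the actual fix reads source text from the ESLint
// LintResult rather than from disk. If a read were still needed,
// the non-blocking form avoids stalling the event loop:
async function getFileContent(filePath: string): Promise<string> {
    return fs.readFile(filePath, 'utf8');
}
```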
    if (!cache.has(filePath)) {
        cache.set(filePath, fsSync.readFileSync(filePath, 'utf8'));
    }
    return cache.get(filePath)!;
If there are a lot of files, your cache is going to get very, very large, I'm guessing.
All this code is sensitive, so please find the best approach: test on very large projects with lots of violations and measure performance before and after.
Not using the cache anymore. I did a before/after comparison following your review comment fix.
The repo I used was https://github.com/SalesforceFoundation/NPSP, which contains 12k target files.
It took 23-24 seconds to run all engines (without sfge) in both cases, with fixes and without fixes.
I have also attached evidence of the fixes, covering all the scenarios, here:
https://docs.google.com/document/d/1dTAXfDjgbx2nLrytjGV2PkjCDUpUZGh8TwubaxbOEtk
if (lineStartOffsets) {
    if (includeFixes && resultMsg.fix) {
        violation.fixes = [convertEslintFix(file, resultMsg.fix, lineStartOffsets)];
    }
nit: Nested ifs increase code complexity; we can simply return the violation early if lineStartOffsets is null/empty. Something like this:
if (!lineStartOffsets) {
return violation;
}
if (includeFixes && resultMsg.fix) {
violation.fixes = [convertEslintFix(file, resultMsg.fix, lineStartOffsets)];
}
if (includeSuggestions && resultMsg.suggestions?.length) {
violation.suggestions = resultMsg.suggestions.map(s =>
convertEslintSuggestion(file, s, lineStartOffsets));
}