3 changes: 2 additions & 1 deletion CHANGELOG.md
@@ -5,7 +5,8 @@
# Change Log

## 0.9.19

- Added configuration option to control the max cache size while resolving references in semantic analysis, defaulting to 500MB
- Added environment variable to control max per-log output length for most logging from the DLS, defaulting to 1000 bytes..
Copilot AI Apr 1, 2026

The changelog entry ends with a double period ("1000 bytes.."). Please fix the punctuation to avoid shipping a typo in release notes.

Suggested change
- Added environment variable to control max per-log output length for most logging from the DLS, defaulting to 1000 bytes..
- Added environment variable to control max per-log output length for most logging from the DLS, defaulting to 1000 bytes.


## 0.9.18
- Fixed a rare case where the DLS would crash when reporting device contexts
1 change: 1 addition & 0 deletions Cargo.toml
@@ -32,6 +32,7 @@ crossbeam = "0.8"
crossbeam-deque = "0.8.1"
crossbeam-utils = "0.8.7"
env_logger = "0.11"
hashlink = "0.11"
itertools = "0.14"
jsonrpc = "0.19"
lsp-types = { version = "0.97" }
3 changes: 3 additions & 0 deletions USAGE.md
@@ -84,6 +84,9 @@ instantiated.
including direct instantiation sites (so this is a superset of
`goto-implementations`)

## Relevant environment variables
* `MAX_LOG_MESSAGE_LENGTH` When set, will truncate any outputted logs that are longer than the value. Defaults to 1000 bytes. Set to 0 to turn off truncation.
Copilot AI Apr 1, 2026

The docs say logs will be truncated to MAX_LOG_MESSAGE_LENGTH, but the implementation currently slices to max_len and then appends " ...", producing output longer than the configured value when truncation occurs. Please clarify the behavior here (either adjust docs to mention the suffix, or adjust truncation logic to keep the final output within the configured limit).

Suggested change
* `MAX_LOG_MESSAGE_LENGTH` When set, will truncate any outputted logs that are longer than the value. Defaults to 1000 bytes. Set to 0 to turn off truncation.
* `MAX_LOG_MESSAGE_LENGTH` When set, will truncate any outputted log messages that are longer than the value and append `" ..."`. Up to `MAX_LOG_MESSAGE_LENGTH` bytes of the original message are preserved; the suffix may cause the final output to slightly exceed this value. Defaults to 1000 bytes. Set to 0 to turn off truncation.

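The behavior flagged by the review — keep up to the configured number of bytes, then append a suffix — can be sketched as a small helper. This is a hypothetical illustration, not the DLS's actual code; the function name and the UTF-8 boundary back-off are assumptions:

```rust
/// Hypothetical helper mirroring the reviewed behavior: up to
/// `max_len` bytes of the original message are preserved, then
/// " ..." is appended, so truncated output can exceed `max_len`
/// by four bytes. A limit of 0 disables truncation.
fn truncate_log_message(msg: &str, max_len: usize) -> String {
    if max_len == 0 || msg.len() <= max_len {
        return msg.to_string();
    }
    // Back off to a char boundary so slicing cannot panic on UTF-8.
    let mut end = max_len;
    while !msg.is_char_boundary(end) {
        end -= 1;
    }
    format!("{} ...", &msg[..end])
}
```

In the server itself the limit would presumably be read once from `MAX_LOG_MESSAGE_LENGTH` (defaulting to 1000); that wiring is omitted here.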

## In-Line Linting Configuration
It may be desirable to control linting on a per-file basis, rather than
relying on the linting configuration. This can be done with in-line
2 changes: 1 addition & 1 deletion clients.md
@@ -35,7 +35,7 @@ Once you have this basic support in place, the hard work begins:
(note that you need to support dynamic registration of
"didChangeConfiguration" and support the "workspace/configuration" request on the client
for pull-style updates)
- For the config options, see [config.rs](./src/config.rs#L99-L111)
- For the config options, see [config.rs](./src/config.rs#L123-L141)
* Check for and install the DLS
- Download the latest [binary](https://github.com/intel/dml-language-server/actions/workflows/rust.yml).
Currently, official releases are not being made from a public-facing repository. So you should on occasion
76 changes: 55 additions & 21 deletions src/actions/analysis_queue.rs
@@ -13,22 +13,24 @@ use std::sync::{Arc, Mutex};
use std::sync::atomic::AtomicBool;
use std::thread::{self, Thread};
use std::time::SystemTime;

use crate::actions::DeviceAnalysisJobOptions;
use crate::lint::LintCfg;
use crate::lint::LinterAnalysis;

use itertools::{Either, Itertools};

use crate::actions::analysis_storage::{AnalysisStorage, ResultChannel,
TimestampedStorage, timestamp_is_newer};
use crate::analysis::{DeviceAnalysis, IsolatedAnalysis};
use crate::analysis::{AnalysisError, DeviceAnalysis, IsolatedAnalysis};
use crate::analysis::structure::objects::Import;

use crate::concurrency::JobToken;
use crate::file_management::CanonPath;
use crate::vfs::{TextFile, Vfs};
use crate::server::ServerToHandle;

use log::{info, debug, trace, error};
use crate::logging::{info, debug, trace, error};
use crossbeam::channel;

// Maps in-process device jobs to the timestamps of their dependencies
@@ -76,7 +78,7 @@ impl AnalysisQueue {
tracking_token: JobToken) -> bool {
match LinterJob::new(tracking_token, storage, cfg, vfs, file) {
Ok(newjob) => {
self.enqueue(QueuedJob::FileLinterJob(newjob));
self.enqueue(newjob.into());
true
},
Err(desc) => {
@@ -104,7 +106,7 @@ impl AnalysisQueue {
Ok(newjob) => {
debug!("Enqueued isolated analysis job of {}",
newjob.path.as_str());
self.enqueue(QueuedJob::IsolatedAnalysisJob(newjob));
self.enqueue(newjob.into());
true
},
Err(desc) => {
@@ -119,8 +121,9 @@
storage: &mut AnalysisStorage,
device: &CanonPath,
bases: HashSet<CanonPath>,
tracking_token: JobToken) -> bool {
match DeviceAnalysisJob::new(tracking_token, storage, bases, device) {
tracking_token: JobToken,
device_analysis_options: DeviceAnalysisJobOptions) -> bool {
match DeviceAnalysisJob::new(tracking_token, storage, bases, device, device_analysis_options) {
Ok(newjob) => {
if let Some((_, previous_bases)) = self.device_tracker
.lock().unwrap()
@@ -146,7 +149,7 @@
}
}
debug!("Enqueued device analysis job of {:?}", device);
self.enqueue(QueuedJob::DeviceAnalysisJob(newjob));
self.enqueue(newjob.into());
true
},
Err(desc) => {
@@ -360,16 +363,33 @@ impl Drop for AnalysisQueue {
}
}

#[allow(clippy::large_enum_variant)]
#[derive(Debug)]
enum QueuedJob {
IsolatedAnalysisJob(IsolatedAnalysisJob),
FileLinterJob(LinterJob),
DeviceAnalysisJob(DeviceAnalysisJob),
IsolatedAnalysisJob(Box<IsolatedAnalysisJob>),
FileLinterJob(Box<LinterJob>),
DeviceAnalysisJob(Box<DeviceAnalysisJob>),
Sentinel,
Terminate,
}

impl From<IsolatedAnalysisJob> for QueuedJob {
fn from(job: IsolatedAnalysisJob) -> Self {
QueuedJob::IsolatedAnalysisJob(Box::new(job))
}
}

impl From<LinterJob> for QueuedJob {
fn from(job: LinterJob) -> Self {
QueuedJob::FileLinterJob(Box::new(job))
}
}

impl From<DeviceAnalysisJob> for QueuedJob {
fn from(job: DeviceAnalysisJob) -> Self {
QueuedJob::DeviceAnalysisJob(Box::new(job))
}
}
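The boxing in this hunk is the standard fix for clippy's `large_enum_variant` lint: a Rust enum occupies the size of its largest variant, so boxing the big payloads keeps every `QueuedJob` small, including cheap ones like `Sentinel`. A self-contained sketch with hypothetical stand-in types (not from this PR):

```rust
// Hypothetical stand-in for a large job payload.
#[allow(dead_code)]
struct BigJob {
    buffer: [u8; 1024],
}

// Without boxing, even the tiny `Sentinel` variant costs ~1 KiB,
// because the enum is sized for its largest variant.
#[allow(dead_code)]
enum Unboxed {
    Big(BigJob),
    Sentinel,
}

// With boxing, the payload lives on the heap and the enum itself
// stays roughly pointer-sized for every variant.
#[allow(dead_code)]
enum Boxed {
    Big(Box<BigJob>),
    Sentinel,
}
```

`std::mem::size_of::<Unboxed>()` is at least 1024 bytes while the boxed version is far smaller, which is why clippy flags the unboxed form.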

impl QueuedJob {
fn hash(&self) -> Option<u64> {
match self {
@@ -439,7 +459,7 @@ impl IsolatedAnalysisJob {
self.context.clone()
};
let import_paths = analysis.get_import_names();
self.report.send(TimestampedStorage::make_isolated_result(
self.report.send(TimestampedStorage::make_timestamped(
self.timestamp,
analysis)).ok();
self.notify.send(ServerToHandle::IsolatedAnalysisDone(
@@ -448,9 +468,12 @@
import_paths
)).ok();
},
Err(e) => {
trace!("Failed to create isolated analysis: {}", e);
Err(AnalysisError::VFSError(e)) => {
error!("Failed to create isolated analysis: {}", e);
// TODO: perhaps collect this for reporting to server
},
Err(AnalysisError::Cancelled) => {
debug!("Isolated analysis of {} was cancelled", self.path.as_str());
}
}
}
@@ -467,13 +490,15 @@ pub struct DeviceAnalysisJob {
notify: channel::Sender<ServerToHandle>,
hash: u64,
token: JobToken,
device_analysis_options: DeviceAnalysisJobOptions,
}

impl DeviceAnalysisJob {
fn new(token: JobToken,
analysis: &mut AnalysisStorage,
bases: HashSet<CanonPath>,
root: &CanonPath)
root: &CanonPath,
device_analysis_options: DeviceAnalysisJobOptions)
-> Result<DeviceAnalysisJob, String> {
info!("Creating a device analysis job of {:?}", root);
// TODO: Use some sort of timestamp from VFS instead of systemtime
@@ -526,30 +551,36 @@ impl DeviceAnalysisJob {
notify: analysis.notify.clone(),
hash,
token,
device_analysis_options,
})
}

fn process(self) {
let root_path = self.root.path.clone();
info!("Started work on device analysis of {:?}, depending on {:?}",
self.root.path,
self.bases.iter().map(|i|&i.stored.path)
.collect::<Vec<&CanonPath>>());
match DeviceAnalysis::new(self.root,
self.bases,
self.import_sources,
self.device_analysis_options,
self.token.status) {
Ok(analysis) => {
info!("Finished device analysis of {:?}", analysis.name);
self.notify.send(ServerToHandle::DeviceAnalysisDone(
analysis.path.clone())).ok();
self.report.send(TimestampedStorage::make_device_result(
self.report.send(TimestampedStorage::make_timestamped(
self.timestamp,
analysis)).ok();
},
// In general, an analysis shouldn't fail to be created
Err(e) => {
trace!("Failed to create device analysis: {}", e);
Err(AnalysisError::VFSError(e)) => {
error!("Failed to create device analysis: {}", e);
// TODO: perhaps collect this for reporting to server
},
Err(AnalysisError::Cancelled) => {
debug!("Device analysis of {} was cancelled", root_path.as_str());
}
}
}
@@ -606,14 +637,17 @@ impl LinterJob {
self.ast,
self.token.status) {
Ok(analysis) => {
self.report.send(TimestampedStorage::make_linter_result(
self.report.send(TimestampedStorage::make_timestamped(
self.timestamp,
analysis)).ok();
self.notify.send(ServerToHandle::LinterDone(
self.file.clone())).ok();
},
Err(e) => {
debug!("Failed to create isolated linter analysis: {}", e);
Err(AnalysisError::VFSError(e)) => {
error!("Failed to create isolated linter analysis: {}", e);
},
Err(AnalysisError::Cancelled) => {
debug!("Linter analysis of {} was cancelled", self.file.as_str());
}
}
}
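The error handling changed throughout this file follows one pattern: the analysis constructors now return a structured error, and cancellation (routine, logged at debug level) is distinguished from genuine VFS failures (logged at error level). A simplified sketch — the enum below is a stand-in, not the PR's actual `AnalysisError` definition:

```rust
// Simplified stand-in for the PR's AnalysisError.
#[derive(Debug)]
enum AnalysisError {
    VFSError(String),
    Cancelled,
}

// Maps an analysis outcome to the log level it would be reported at:
// cancellation is expected and quiet, real failures are loud.
fn log_level_for(result: &Result<(), AnalysisError>) -> &'static str {
    match result {
        Ok(()) => "info",
        Err(AnalysisError::VFSError(_)) => "error",
        Err(AnalysisError::Cancelled) => "debug",
    }
}
```

Matching exhaustively on the error variants (rather than logging every `Err` the same way, as the old `trace!` calls did) lets each outcome get an appropriate severity.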
68 changes: 30 additions & 38 deletions src/actions/analysis_storage.rs
@@ -2,7 +2,7 @@
// SPDX-License-Identifier: Apache-2.0 and MIT
//! Stores currently completed analysis.

use log::{debug, info, trace};
use crate::logging::{debug, trace, info};

use crossbeam::channel;

@@ -26,12 +26,29 @@ use crate::lint::LinterAnalysis;

use crate::file_management::{PathResolver, CanonPath};

#[allow(clippy::large_enum_variant)]
#[derive(Debug)]
pub enum AnalysisResult {
Isolated(IsolatedAnalysis),
Linter(LinterAnalysis),
Device(DeviceAnalysis),
Isolated(Box<IsolatedAnalysis>),
Linter(Box<LinterAnalysis>),
Device(Box<DeviceAnalysis>),
}

impl From<IsolatedAnalysis> for AnalysisResult {
fn from(analysis: IsolatedAnalysis) -> Self {
AnalysisResult::Isolated(Box::new(analysis))
}
}

impl From<LinterAnalysis> for AnalysisResult {
fn from(analysis: LinterAnalysis) -> Self {
AnalysisResult::Linter(Box::new(analysis))
}
}

impl From<DeviceAnalysis> for AnalysisResult {
fn from(analysis: DeviceAnalysis) -> Self {
AnalysisResult::Device(Box::new(analysis))
}
}

impl AnalysisResult {
@@ -52,29 +69,13 @@ pub struct TimestampedStorage<T> {
pub stored: T,
}

impl TimestampedStorage<AnalysisResult> {
pub fn make_isolated_result(timestamp: SystemTime,
analysis: IsolatedAnalysis)
-> TimestampedStorage<AnalysisResult>{
TimestampedStorage {
timestamp,
stored : AnalysisResult::Isolated(analysis),
}
}
pub fn make_device_result(timestamp: SystemTime,
analysis: DeviceAnalysis)
-> TimestampedStorage<AnalysisResult>{
TimestampedStorage {
timestamp,
stored : AnalysisResult::Device(analysis),
}
}
pub fn make_linter_result(timestamp: SystemTime,
analysis: LinterAnalysis)
-> TimestampedStorage<AnalysisResult> {
impl<T> TimestampedStorage<T> {
pub fn make_timestamped<F>(timestamp: SystemTime, result: F) -> Self
where F: Into<T>
{
TimestampedStorage {
timestamp,
stored: AnalysisResult::Linter(analysis),
stored: result.into(),
}
}
}
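The generic `make_timestamped` above pairs with the new `From` impls to replace three near-identical `make_*_result` constructors. A minimal self-contained sketch of the same pattern, using simplified stand-in types rather than the PR's actual ones:

```rust
use std::time::SystemTime;

// Simplified mirror of TimestampedStorage<T>.
struct Timestamped<T> {
    timestamp: SystemTime,
    stored: T,
}

impl<T> Timestamped<T> {
    // One generic constructor bounded on Into<T> replaces a family
    // of per-variant helpers; each From impl supplies the conversion.
    fn make_timestamped<F: Into<T>>(timestamp: SystemTime, value: F) -> Self {
        Timestamped { timestamp, stored: value.into() }
    }
}

// Stand-in for AnalysisResult with one boxed variant.
enum StoredResult {
    Text(Box<String>),
}

impl From<String> for StoredResult {
    fn from(s: String) -> Self {
        StoredResult::Text(Box::new(s))
    }
}
```

Call sites then pass the concrete analysis value directly, and the `Into` bound both boxes it and picks the right variant.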
@@ -489,10 +490,7 @@ impl AnalysisStorage {
trace!("was new, or fresh compared to previous");
dependencies_to_update.insert(canon_path.clone());
self.isolated_analysis.insert(canon_path.clone(),
TimestampedStorage {
timestamp,
stored: analysis,
});
TimestampedStorage::make_timestamped(timestamp, *analysis));
self.last_use.insert(canon_path.clone(),
Mutex::new(SystemTime::now()));
self.update_last_use(&canon_path);
Expand All @@ -509,10 +507,7 @@ impl AnalysisStorage {
AnalysisResult::Linter(analysis) => {
let canon_path = analysis.path.clone();
self.lint_analysis.insert(canon_path.clone(),
TimestampedStorage {
timestamp,
stored: analysis,
});
TimestampedStorage::make_timestamped(timestamp, *analysis));
},
}
}
@@ -542,10 +537,7 @@ impl AnalysisStorage {
trace!("was not invalidated by recent \
isolated analysis");
self.device_analysis.insert(canon_path,
TimestampedStorage {
timestamp,
stored: analysis,
});
TimestampedStorage::make_timestamped(timestamp, *analysis));
}
}
} else {
2 changes: 1 addition & 1 deletion src/actions/hover.rs
@@ -1,6 +1,6 @@
// © 2024 Intel Corporation
// SPDX-License-Identifier: Apache-2.0 and MIT
use log::*;
use crate::logging::*;

use crate::span::{Range, ZeroIndexed};
use serde::{Deserialize, Serialize};