ThePhoenixAgency/AI-Pulse


AI-PULSE Banner

Curated content from the best sources


Last Update: Sat, 28 Feb 2026 18:23:15 GMT


About The Developer

Built by ThePhoenixAgency - AI & Cybersecurity Specialist

Passionate about building secure, privacy-first applications that make a difference. This project showcases my expertise in full-stack development, security engineering, and data privacy.


Real-Time News Roundup

AI - Artificial Intelligence

Source: VentureBeat AI

The artificial intelligence coding revolution comes with a catch: it's expensive. Claude Code, Anthropic's terminal-based AI agent that can write, debug, and deploy code autonomously, has captured the imagination of software developers worldwide. But its pricing — ranging from $20 to $200 per month depending on usage — has sparked a growing rebellion among the very programmers it aims to serve.

Now, a free alternative is gaining traction. Goose, an open-source AI agent developed by Block (the financial technology company formerly known as Square), offers nearly identical functionality to Claude Code but runs entirely on a user's local machine. No subscription fees. No cloud dependency. No rate limits that reset every five hours.

"Your data stays with you, period," said Parth Sareen, a software engineer who demonstrated the tool during a recent livestream. The line captures the core appeal: Goose gives developers complete control over their AI-powered workflow, including the ability to work offline — even on an airplane.

The project has exploded in popularity. Goose now boasts more than 26,100 stars on GitHub, the code-sharing platform, with 362 contributors and 102 releases since its launch. The latest version, 1.20.1, shipped on January 19, 2026, reflecting a development pace that rivals commercial products. For developers frustrated by Claude Code's pricing structure and usage caps, Goose represents something increasingly rare in the AI industry: a genuinely free, no-strings-attached option for serious work.

Anthropic's new rate limits spark a developer revolt

To understand why Goose matters, you need to understand the Claude Code pricing controversy. Anthropic, the San Francisco artificial intelligence company founded by former OpenAI executives, offers Claude Code as part of its subscription tiers. The free plan provides no access whatsoever.
The Pro plan, at $17 per month with annual billing (or $20 monthly), limits users to just 10 to 40 prompts every five hours — a constraint that serious developers exhaust within minutes of intensive work. The Max plans, at $100 and $200 per month, offer more headroom: 50 to 200 prompts and 200 to 800 prompts respectively, plus access to Anthropic's most powerful model, Claude 4.5 Opus. But even these premium tiers come with restrictions that have inflamed the developer community.

In late July, Anthropic announced new weekly rate limits. Under the system, Pro users receive 40 to 80 hours of Sonnet 4 usage per week. Max users at the $200 tier get 240 to 480 hours of Sonnet 4, plus 24 to 40 hours of Opus 4. Nearly five months later, the frustration has not subsided.

The problem? Those "hours" are not actual hours. They represent token-based limits that vary wildly depending on codebase size, conversation length, and the complexity of the code being processed. Independent analysis suggests the actual per-session limits translate to roughly 44,000 tokens for Pro users and 220,000 tokens for the $200 Max plan.

"It's confusing and vague," one developer wrote in a widely shared analysis. "When they say '24-40 hours of Opus 4,' that doesn't really tell you anything useful about what you're actually getting."

The backlash on Reddit and developer forums has been fierce. Some users report hitting their daily limits within 30 minutes of intensive coding. Others have canceled their subscriptions entirely, calling the new restrictions "a joke" and "unusable for real work."

Anthropic has defended the changes, stating that the limits affect fewer than five percent of users and target people running Claude Code "continuously in the background, 24/7." But the company has not clarified whether that figure refers to five percent of Max subscribers or five percent of all users — a distinction that matters enormously.
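To see why those token budgets bind so quickly, a rough back-of-envelope calculation helps. The session budgets below are the independent estimates cited above, not official Anthropic figures, and the tokens-per-line number is an assumption for illustration only:

```python
# Rough arithmetic on Claude Code's estimated per-session token budgets.
# PRO_SESSION_TOKENS and MAX_SESSION_TOKENS are the third-party estimates
# quoted in the article; TOKENS_PER_LINE is an assumed average for typical
# source code, not a published number.

PRO_SESSION_TOKENS = 44_000
MAX_SESSION_TOKENS = 220_000
TOKENS_PER_LINE = 10  # assumption for illustration

def sessions_consumed(lines_of_code: int, reads: int, budget: int) -> float:
    """Fraction of a session budget used by reading a codebase `reads` times."""
    return lines_of_code * TOKENS_PER_LINE * reads / budget

# A modest 2,000-line project loaded into context twice:
pro_usage = sessions_consumed(2_000, reads=2, budget=PRO_SESSION_TOKENS)
max_usage = sessions_consumed(2_000, reads=2, budget=MAX_SESSION_TOKENS)
print(f"Pro budget used: {pro_usage:.0%}")  # → 91%: most of a Pro session gone
print(f"Max budget used: {max_usage:.0%}")  # → 18%
```

Under these assumptions, merely re-reading a small project twice consumes nearly an entire Pro session, which is consistent with the "limits hit within 30 minutes" reports above.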
How Block built a free AI coding agent that works offline

Goose takes a radically different approach to the same problem. Built by Block, the payments company led by Jack Dorsey, Goose is what engineers call an "on-machine AI agent." Unlike Claude Code, which sends your queries to Anthropic's servers for processing, Goose can run entirely on your local computer using open-source language models that you download and control yourself.

The project's documentation describes it as going "beyond code suggestions" to "install, execute, edit, and test with any LLM." That last phrase — "any LLM" — is the key differentiator. Goose is model-agnostic by design. You can connect Goose to Anthropic's Claude models if you have API access. You can use OpenAI's GPT-5 or Google's Gemini. You can route it through services like Groq or OpenRouter. Or — and this is where things get interesting — you can run it entirely locally using tools like Ollama, which let you download and execute open-source models on your own hardware.

The practical implications are significant. With a local setup, there are no subscription fees, no usage caps, no rate limits, and no concerns about your code being sent to external servers. Your conversations with the AI never leave your machine. "I use Ollama all the time on planes — it's a lot of fun!" Sareen noted during a demonstration, highlighting how local models free developers from the constraints of internet connectivity.

What Goose can do that traditional code assistants can't

Goose operates as a command-line tool or desktop application that can autonomously perform complex development tasks. It can build entire projects from scratch, write and execute code, debug failures, orchestrate workflows across multiple files, and interact with external APIs — all without constant human oversight.
The architecture relies on what the AI industry calls "tool calling" or "function calling" — the ability for a language model to request specific actions from external systems. When you ask Goose to create a new file, run a test suite, or check the status of a GitHub pull request, it doesn't just generate text describing what should happen. It actually executes those operations.

This capability depends heavily on the underlying language model. Claude 4 models from Anthropic currently perform best at tool calling, according to the Berkeley Function-Calling Leaderboard, which ranks models on their ability to translate natural language requests into executable code and system commands. But newer open-source models are catching up quickly. Goose's documentation highlights several options with strong tool-calling support: Meta's Llama series, Alibaba's Qwen models, Google's Gemma variants, and DeepSeek's reasoning-focused architectures.

The tool also integrates with the Model Context Protocol, or MCP, an emerging standard for connecting AI agents to external services. Through MCP, Goose can access databases, search engines, file systems, and third-party APIs — extending its capabilities far beyond what the base language model provides.

Setting Up Goose with a Local Model

For developers interested in a completely free, privacy-preserving setup, the process involves three main components: Goose itself, Ollama (a tool for running open-source models locally), and a compatible language model.

Step 1: Install Ollama

Ollama is an open-source project that dramatically simplifies the process of running large language models on personal hardware. It handles the complex work of downloading, optimizing, and serving models through a simple interface. Download and install Ollama from ollama.com. Once installed, you can pull models with a single command.
For coding tasks, Qwen 2.5 offers strong tool-calling support:

ollama run qwen2.5

The model downloads automatically and begins running on your machine.

Step 2: Install Goose

Goose is available as both a desktop application and a command-line interface. The desktop version provides a more visual experience, while the CLI appeals to developers who prefer working entirely in the terminal. Installation instructions vary by operating system but generally involve downloading from Goose's GitHub releases page or using a package manager. Block provides pre-built binaries for macOS (both Intel and Apple Silicon), Windows, and Linux.

Step 3: Configure the Connection

In Goose Desktop, navigate to Settings, then Configure Provider, and select Ollama. Confirm that the API Host is set to http://localhost:11434 (Ollama's default port) and click Submit. For the command-line version, run goose configure, select "Configure Providers," choose Ollama, and enter the model name when prompted.

That's it. Goose is now connected to a language model running entirely on your hardware, ready to execute complex coding tasks without any subscription fees or external dependencies.

The RAM, processing power, and trade-offs you should know about

The obvious question: what kind of computer do you need? Running large language models locally requires substantially more computational resources than typical software. The key constraint is memory — specifically, RAM on most systems, or VRAM if using a dedicated graphics card for acceleration.

Block's documentation suggests that 32 gigabytes of RAM provides "a solid baseline for larger models and outputs." For Mac users, this means the computer's unified memory is the primary bottleneck. For Windows and Linux users with discrete NVIDIA graphics cards, GPU memory (VRAM) matters more for acceleration. But you don't necessarily need expensive hardware to get started. Smaller models with fewer parameters run on much more modest systems.
Qwen 2.5, for instance, comes in multiple sizes, and the smaller variants can operate effectively on machines with 16 gigabytes of RAM. "You don't need to run the largest models to get excellent results," Sareen emphasized. The practical recommendation: start with a smaller model to test your workflow, then scale up as needed.

For context, Apple's entry-level MacBook Air with 8 gigabytes of RAM would struggle with most capable coding models. But a MacBook Pro with 32 gigabytes — increasingly common among professional developers — handles them comfortably.

Why keeping your code off the cloud matters more than ever

Goose with a local LLM is not a perfect substitute for Claude Code. The comparison involves real trade-offs that developers should understand.

Model Quality: Claude 4.5 Opus, Anthropic's flagship model, remains arguably the most capable AI for software engineering tasks. It excels at understanding complex codebases, following nuanced instructions, and producing high-quality code on the first attempt. Open-source models have improved dramatically, but a gap persists — particularly for the most challenging tasks. One developer who switched to the $200 Claude Code plan described the difference bluntly: "When I say 'make this look modern,' Opus knows what I mean. Other models give me Bootstrap circa 2015."

Context Window: Claude Sonnet 4.5, accessible through the API, offers a massive one-million-token context window — enough to load entire large codebases without chunking or context management issues. Most local models are limited to 4,096 or 8,192 tokens by default, though many can be configured for longer contexts at the cost of increased memory usage and slower processing.

Speed: Cloud-based services like Claude Code run on dedicated server hardware optimized for AI inference. Local models, running on consumer laptops, typically process requests more slowly.
The difference matters for iterative workflows where you're making rapid changes and waiting for AI feedback.

Tooling Maturity: Claude Code benefits from Anthropic's dedicated engineering resources. Features like prompt caching (which can reduce costs by up to 90 percent for repeated contexts) and structured outputs are polished and well-documented. Goose, while actively developed with 102 releases to date, relies on community contributions and may lack equivalent refinement in specific areas.

How Goose stacks up against Cursor, GitHub Copilot, and the paid AI coding market

Goose enters a crowded market of AI coding tools, but occupies a distinctive position.

Cursor, a popular AI-enhanced code editor, charges $20 per month for its Pro tier and $200 for Ultra — pricing that mirrors Claude Code's Max plans. Cursor provides approximately 4,500 Sonnet 4 requests per month at the Ultra level, a substantially different allocation model than Claude Code's hourly resets.

Cline, Roo Code, and similar open-source projects offer AI coding assistance but with varying levels of autonomy and tool integration. Many focus on code completion rather than the agentic task execution that defines Goose and Claude Code.

Amazon's CodeWhisperer, GitHub Copilot, and enterprise offerings from major cloud providers target large organizations with complex procurement processes and dedicated budgets. They are less relevant to individual developers and small teams seeking lightweight, flexible tools.

Goose's combination of genuine autonomy, model agnosticism, local operation, and zero cost creates a unique value proposition. The tool is not trying to compete with commercial offerings on polish or model quality. It's competing on freedom — both financial and architectural.

The $200-a-month era for AI coding tools may be ending
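The "tool calling" loop described earlier in this piece — the model emits a structured action instead of prose, the harness executes it, and the result is fed back — can be sketched in a few lines of Python. This is an illustrative stand-in, not Goose's actual implementation: the tool registry, the JSON shape of the model's reply, and the helper names are all assumptions for the example.

```python
# Minimal sketch of an agent's tool-calling dispatch loop. A real agent like
# Goose would get `reply` from a live LLM; here it is hard-coded to keep the
# example self-contained.
import json
import subprocess
from pathlib import Path

def write_file(path: str, content: str) -> str:
    """Tool: create or overwrite a file on disk."""
    Path(path).write_text(content)
    return f"wrote {len(content)} bytes to {path}"

def run_command(cmd: str) -> str:
    """Tool: run a shell command and capture its stdout."""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.stdout.strip()

TOOLS = {"write_file": write_file, "run_command": run_command}

def dispatch(model_output: str) -> str:
    """Parse a JSON tool call emitted by the model and execute it."""
    call = json.loads(model_output)
    return TOOLS[call["tool"]](**call["args"])

# A model asked to create a file might emit this instead of plain text:
reply = '{"tool": "write_file", "args": {"path": "hello.py", "content": "print(42)"}}'
print(dispatch(reply))  # the tool's result is fed back into the model's context
```

The quality difference the article describes between models comes down to how reliably a model emits well-formed calls like `reply` above, which is exactly what the Berkeley Function-Calling Leaderboard measures.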

Source: VentureBeat AI

Nous Research, the open-source artificial intelligence startup backed by crypto venture firm Paradigm, released a new competitive programming model on Monday that it says matches or exceeds several larger proprietary systems — trained in just four days using 48 of Nvidia's latest B200 graphics processors.

The model, called NousCoder-14B, is another entry in a crowded field of AI coding assistants, but arrives at a particularly charged moment: Claude Code, the agentic programming tool from rival Anthropic, has dominated social media discussion since New Year's Day, with developers posting breathless testimonials about its capabilities. The simultaneous developments underscore how quickly AI-assisted software development is evolving — and how fiercely companies large and small are competing to capture what many believe will become a foundational technology for how software gets written.

NousCoder-14B achieves a 67.87 percent accuracy rate on LiveCodeBench v6, a standardized evaluation that tests models on competitive programming problems published between August 2024 and May 2025. That figure represents a 7.08 percentage point improvement over the base model it was trained from, Alibaba's Qwen3-14B, according to Nous Research's technical report published alongside the release.

"I gave Claude Code a description of the problem, it generated what we built last year in an hour," wrote Jaana Dogan, a principal engineer at Google responsible for the Gemini API, in a viral post on X last week that captured the prevailing mood around AI coding tools. Dogan was describing a distributed agent orchestration system her team had spent a year developing — a system Claude Code approximated from a three-paragraph prompt.
The juxtaposition is instructive: while Anthropic's Claude Code has captured imaginations with demonstrations of end-to-end software development, Nous Research is betting that open-source alternatives trained on verifiable problems can close the gap — and that transparency in how these models are built matters as much as raw capability.

How Nous Research built an AI coding model that anyone can replicate

What distinguishes the NousCoder-14B release from many competitor announcements is its radical openness. Nous Research published not just the model weights but the complete reinforcement learning environment, benchmark suite, and training harness — built on the company's Atropos framework — enabling any researcher with sufficient compute to reproduce or extend the work.

"Open-sourcing the Atropos stack provides the necessary infrastructure for reproducible olympiad-level reasoning research," noted one observer on X, summarizing the significance for the academic and open-source communities.

The model was trained by Joe Li, a researcher in residence at Nous Research and a former competitive programmer himself. Li's technical report reveals an unexpectedly personal dimension: he compared the model's improvement trajectory to his own journey on Codeforces, the competitive programming platform where participants earn ratings based on contest performance. Based on rough estimates mapping LiveCodeBench scores to Codeforces ratings, Li calculated that NousCoder-14B's improvement — from approximately the 1600-1750 rating range to 2100-2200 — mirrors a leap that took him nearly two years of sustained practice between ages 14 and 16. The model accomplished the equivalent in four days.

"Watching that final training run unfold was quite a surreal experience," Li wrote in the technical report. But Li was quick to note an important caveat that speaks to broader questions about AI efficiency: he solved roughly 1,000 problems during those two years, while the model required 24,000.
Humans, at least for now, remain dramatically more sample-efficient learners.

Inside the reinforcement learning system that trains on 24,000 competitive programming problems

NousCoder-14B's training process offers a window into the increasingly sophisticated techniques researchers use to improve AI reasoning capabilities through reinforcement learning. The approach relies on what researchers call "verifiable rewards" — a system where the model generates code solutions, those solutions are executed against test cases, and the model receives a simple binary signal: correct or incorrect.

This feedback loop, while conceptually straightforward, requires significant infrastructure to execute at scale. Nous Research used Modal, a cloud computing platform, to run sandboxed code execution in parallel. Each of the 24,000 training problems contains hundreds of test cases on average, and the system must verify that generated code produces correct outputs within time and memory constraints — 15 seconds and 4 gigabytes, respectively.

The training employed a technique called DAPO (Dynamic Sampling Policy Optimization), which the researchers found performed slightly better than alternatives in their experiments. A key innovation involves "dynamic sampling" — discarding training examples where the model either solves all attempts or fails all attempts, since these provide no useful gradient signal for learning.

The researchers also adopted "iterative context extension," first training the model with a 32,000-token context window before expanding to 40,000 tokens. During evaluation, extending the context further to approximately 80,000 tokens produced the best results, with accuracy reaching 67.87 percent.

Perhaps most significantly, the training pipeline overlaps inference and verification — as soon as the model generates a solution, it begins work on the next problem while the previous solution is being checked.
This pipelining, combined with asynchronous training where multiple model instances work in parallel, maximizes hardware utilization on expensive GPU clusters.

The looming data shortage that could slow AI coding model progress

Buried in Li's technical report is a finding with significant implications for the future of AI development: the training dataset for NousCoder-14B encompasses "a significant portion of all readily available, verifiable competitive programming problems in a standardized dataset format." In other words, for this particular domain, the researchers are approaching the limits of high-quality training data.

"The total number of competitive programming problems on the Internet is roughly the same order of magnitude," Li wrote, referring to the 24,000 problems used for training. "This suggests that within the competitive programming domain, we have approached the limits of high-quality data."

This observation echoes growing concern across the AI industry about data constraints. While compute continues to scale according to well-understood economic and engineering principles, training data is "increasingly finite," as Li put it. "It appears that some of the most important research that needs to be done in the future will be in the areas of synthetic data generation and data efficient algorithms and architectures," he concluded.

The challenge is particularly acute for competitive programming because the domain requires problems with known correct solutions that can be verified automatically. Unlike natural language tasks where human evaluation or proxy metrics suffice, code either works or it doesn't — making synthetic data generation considerably more difficult.

Li identified one potential avenue: training models not just to solve problems but to generate solvable problems, enabling a form of self-play similar to techniques that proved successful in game-playing AI systems.
"Once synthetic problem generation is solved, self-play becomes a very interesting direction," he wrote.

A $65 million bet that open-source AI can compete with Big Tech
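The verifiable-rewards signal and dynamic-sampling filter described in this article can be sketched in a few lines of Python. The function names and toy problems below are illustrative assumptions, not the Atropos or DAPO implementation: a candidate solution is scored 1 only if it passes every test case, and problems whose rollouts all pass or all fail are dropped because they carry no gradient signal.

```python
# Sketch of a verifiable-rewards check plus DAPO-style dynamic sampling.
# `verify` is the binary reward; `dynamic_sample` keeps only problems with
# a mix of passing and failing rollouts.

def verify(solution, test_cases) -> int:
    """Binary reward: 1 only if the solution is correct on every test case."""
    return int(all(solution(inp) == expected for inp, expected in test_cases))

def dynamic_sample(rewards_per_problem):
    """Discard problems where every rollout passed or every rollout failed."""
    return {pid: rs for pid, rs in rewards_per_problem.items()
            if 0 < sum(rs) < len(rs)}

# Toy problem: square a number. One correct rollout, one buggy one.
tests = [(2, 4), (3, 9)]
rollouts = {
    "p1": [verify(lambda x: x * x, tests),   # correct -> reward 1
           verify(lambda x: x + x, tests)],  # buggy   -> reward 0
    "p2": [1, 1],  # solved by every rollout: no learning signal
    "p3": [0, 0],  # failed by every rollout: no learning signal
}
print(dynamic_sample(rollouts))  # only "p1" survives the filter
```

In the real system the solutions are sandboxed programs judged under the 15-second / 4-gigabyte limits mentioned above rather than in-process lambdas, but the reward shape is the same.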

Source: VentureBeat AI

Railway, a San Francisco-based cloud platform that has quietly amassed two million developers without spending a dollar on marketing, announced Thursday that it raised $100 million in a Series B funding round, as surging demand for artificial intelligence applications exposes the limitations of legacy cloud infrastructure.

TQ Ventures led the round, with participation from FPV Ventures, Redpoint, and Unusual Ventures. The investment values Railway as one of the most significant infrastructure startups to emerge during the AI boom, capitalizing on developer frustration with the complexity and cost of traditional platforms like Amazon Web Services and Google Cloud.

"As AI models get better at writing code, more and more people are asking the age-old question: where, and how, do I run my applications?" said Jake Cooper, Railway's 28-year-old founder and chief executive, in an exclusive interview with VentureBeat. "The last generation of cloud primitives were slow and outdated, and now with AI moving everything faster, teams simply can't keep up."

The funding is a dramatic acceleration for a company that has charted an unconventional path through the cloud computing industry. Railway raised just $24 million in total before this round, including a $20 million Series A from Redpoint in 2022. The company now processes more than 10 million deployments monthly and handles over one trillion requests through its edge network — metrics that rival far larger and better-funded competitors.

Why three-minute deploy times have become unacceptable in the age of AI coding assistants

Railway's pitch rests on a simple observation: the tools developers use to deploy and manage software were designed for a slower era. A standard build-and-deploy cycle using Terraform, the industry-standard infrastructure tool, takes two to three minutes.
That delay, once tolerable, has become a critical bottleneck as AI coding assistants like Claude, ChatGPT, and Cursor can generate working code in seconds.

"When godly intelligence is on tap and can solve any problem in three seconds, those amalgamations of systems become bottlenecks," Cooper told VentureBeat. "What was really cool for humans to deploy in 10 seconds or less is now table stakes for agents."

The company claims its platform delivers deployments in under one second — fast enough to keep pace with AI-generated code. Customers report a tenfold increase in developer velocity and up to 65 percent cost savings compared to traditional cloud providers. These numbers come directly from enterprise clients, not internal benchmarks.

Daniel Lobaton, chief technology officer at G2X, a platform serving 100,000 federal contractors, measured a sevenfold improvement in deployment speed and an 87 percent cost reduction after migrating to Railway. His infrastructure bill dropped from $15,000 per month to approximately $1,000.

"The work that used to take me a week on our previous infrastructure, I can do in Railway in like a day," Lobaton said. "If I want to spin up a new service and test different architectures, it would take so long on our old setup. In Railway I can launch six services in two minutes."

Inside the controversial decision to abandon Google Cloud and build data centers from scratch

What distinguishes Railway from competitors like Render and Fly.io is the depth of its vertical integration. In 2024, the company made the unusual decision to abandon Google Cloud entirely and build its own data centers, a move that echoes the famous Alan Kay maxim: "People who are really serious about software should make their own hardware."

"We wanted to design hardware in a way where we could build a differentiated experience," Cooper said.
"Having full control over the network, compute, and storage layers lets us do really fast build and deploy loops, the kind that allows us to move at 'agentic speed' while staying 100 percent the smoothest ride in town."

The approach paid dividends during recent widespread outages that affected major cloud providers — Railway remained online throughout.

This soup-to-nuts control enables pricing that undercuts the hyperscalers by roughly 50 percent and newer cloud startups by three to four times. Railway charges by the second for actual compute usage: $0.00000386 per gigabyte-second of memory, $0.00000772 per vCPU-second, and $0.00000006 per gigabyte-second of storage. There are no charges for idle virtual machines — a stark contrast to the traditional cloud model where customers pay for provisioned capacity whether they use it or not.

"The conventional wisdom is that the big guys have economies of scale to offer better pricing," Cooper noted. "But when they're charging for VMs that usually sit idle in the cloud, and we've purpose-built everything to fit much more density on these machines, you have a big opportunity."

How 30 employees built a platform generating tens of millions in annual revenue

Railway has achieved its scale with a team of just 30 employees generating tens of millions in annual revenue — a ratio of revenue per employee that would be exceptional even for established software companies. The company grew revenue 3.5 times last year and continues to expand at 15 percent month-over-month.

Cooper emphasized that the fundraise was strategic rather than necessary. "We're default alive; there's no reason for us to raise money," he said. "We raised because we see a massive opportunity to accelerate, not because we needed to survive." The company hired its first salesperson only last year and employs just two solutions engineers.
Nearly all of Railway's two million users discovered the platform through word of mouth — developers telling other developers about a tool that actually works. "We basically did the standard engineering thing: if you build it, they will come," Cooper recalled. "And to some degree, they came."

From side projects to Fortune 500 deployments: Railway's unlikely corporate expansion

Despite its grassroots developer community, Railway has made significant inroads into large organizations. The company claims that 31 percent of Fortune 500 companies now use its platform, though deployments range from company-wide infrastructure to individual team projects. Notable customers include Bilt, the loyalty program company; Intuit's GoCo subsidiary; TripAdvisor's Cruise Critic; and MGM Resorts.

Kernel, a Y Combinator-backed startup providing AI infrastructure to over 1,000 companies, runs its entire customer-facing system on Railway for $444 per month. "At my previous company Clever, which sold for $500 million, I had six full-time engineers just managing AWS," said Rafael Garcia, Kernel's chief technology officer. "Now I have six engineers total, and they all focus on product. Railway is exactly the tool I wish I had in 2012."

For enterprise customers, Railway offers security certifications including SOC 2 Type 2 compliance and HIPAA readiness, with business associate agreements available upon request. The platform provides single sign-on authentication, comprehensive audit logs, and the option to deploy within a customer's existing cloud environment through a "bring your own cloud" configuration. Enterprise pricing starts at custom levels, with specific add-ons for extended log retention ($200 monthly), HIPAA BAAs ($1,000), enterprise support with SLOs ($2,000), and dedicated virtual machines ($10,000).
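As a sanity check on the per-second rates quoted earlier, a short calculation can estimate a monthly bill. The sample workload (1 vCPU, 1 GB of RAM, 10 GB of storage, running nonstop for a 30-day month) is an assumption for illustration; a real invoice would also reflect idle time, egress, and any enterprise add-ons:

```python
# Back-of-envelope monthly cost from Railway's published per-second rates
# (as quoted in the article). The workload below is a hypothetical example.

MEM_PER_GB_SECOND     = 0.00000386  # $ per gigabyte-second of memory
VCPU_PER_SECOND       = 0.00000772  # $ per vCPU-second
STORAGE_PER_GB_SECOND = 0.00000006  # $ per gigabyte-second of storage
SECONDS_PER_MONTH = 30 * 24 * 60 * 60  # 2,592,000 for a 30-day month

def monthly_cost(vcpus: float, mem_gb: float, storage_gb: float) -> float:
    """Cost of running the given resources nonstop for a 30-day month."""
    per_second = (vcpus * VCPU_PER_SECOND
                  + mem_gb * MEM_PER_GB_SECOND
                  + storage_gb * STORAGE_PER_GB_SECOND)
    return per_second * SECONDS_PER_MONTH

# 1 vCPU + 1 GB RAM + 10 GB storage, always on:
print(f"${monthly_cost(1, 1, 10):.2f}/month")  # → $31.57/month
```

Since Railway bills only for seconds actually used, a service idle half the time under this model would cost roughly half that figure, which is the contrast with provisioned-capacity pricing the article draws.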
The startup's bold strategy to take on Amazon, Google, and a new generation of cloud rivals

Railway enters a crowded market that includes not only the hyperscale cloud providers — Amazon Web Services, Microsoft Azure, and Google Cloud Platform — but also a growing cohort of developer-focused platforms like Vercel, Render, Fly.io, and Heroku. Cooper argues that Railway's competitors fall into two camps, neither of which has fully committed to the new infrastructure model that AI demands.

"The hyperscalers have two competing systems, and they haven't gone all-in on the new model because their legacy revenue stream is still printing money," he observed. "They have this mammoth pool of cash coming from people who provision a VM, use maybe 10 percent of it, and still pay for the whole thing. To what end are they actually interested in going all the way in on a new experience if they don't really need to?"

Against startup competitors, Railway differentiates by covering the full infrastructure stack. "We're not just containers; we've got VM primitives, stateful storage, virtual private networking, automated load balancing," Cooper said. "And we wrap all of this in an absurdly easy-to-use UI, with agentic primitives so agents can move 1,000 times faster."

The platform supports databases including PostgreSQL, MySQL, MongoDB, and Redis; provides up to 256 terabytes of persistent storage with over 100,000 input/output operations per second; and enables deployment to four global regions spanning the United States, Europe, and Southeast Asia. Enterprise customers can scale to 112 vCPUs and 2 terabytes of RAM per service.

Why investors are betting that AI will create a thousand times more software than exists today

Railway's fundraise reflects broader investor enthusiasm for companies positioned to benefit from the AI coding revolution.
As tools like GitHub Copilot, Cursor, and Claude become standard fixtures in developer workflows, the volume of code being written — and the infrastructure needed to run it — is expanding dramatically.

"The amount of software that's going to come online over the next five years is unfathomable compared to what existed before — we're talking a thousand times more software," Cooper predicted. "All of that has to run somewhere."

The company has already integrated directly with AI systems, building what Cooper calls "loops where Claude can hook in, call deployments, and analyze infrastructure automatically." Railway released a Model Context Protocol server in August 2025 that allows AI coding agents to deploy applications and manage infrastructure directly from code editors.

"The notion of a developer is melting before our eyes," Cooper said. "You don't have to be an engineer to engineer things anymore — you just need critical thinking and the ability to analyze things in a systems capacity."

What Railway plans to do with $100 million and zero marketing experience

Railway plans to use the new capital to expand its global data center footprint, grow its team beyond 30 employees, and build what Cooper described as a proper go-to-market operation for the first time in the company's five-year history.

"One of my mentors said you raise money when you can change the trajectory of the business," Cooper explained. "We've built all the required substrate to scale indefinitely; what's been holding us back is simply talking about it. 2026 is the year we play on the world stage."

The company's investor roster reads like a who's who of developer infrastructure. Angel investors include Tom Preston-Werner, co-founder of GitHub; Guillermo Rauch, chief executive of Vercel; Spencer Kimball, chief executive of Cockroach Labs; Olivier Pomel, chief executive of Datadog; and Jori Lallo, co-founder of Linear.
The timing of Railway's expansion coincides with what many in Silicon Valley view as a fundamental shift in how software gets made. Coding assistants are no longer experimental curiosities — they have become essential tools that millions of developers rely on daily. Each line of AI-generated code needs somewhere to run, and the incumbents, by Cooper's telling, are too wedded to their existing business models to fully capitalize on the moment. Whether Railway can translate developer enthusiasm into sustained enterprise adoption remains an open question. The cloud infrastructure market is littered with promising startups that failed to break the grip of Amazon, Microsoft, and Google. But Cooper, who previously worked as a software engineer at Wolfram Alpha, Bloomberg, and Uber before founding Railway in 2020, seems unfazed by the scale of his ambition. "In five years, Railway [will be] the place where software gets created and evolved, period," he said. "Deploy instantly, scale infinitely, with zero friction. That's the prize worth playing for, and there's no bigger one on offer." For a company that built a $100 million business by doing the opposite of what conventional startup wisdom dictates — no marketing, no sales team, no venture hype—the real test begins now. Railway spent five years proving that developers would find a better mousetrap on their own. The next five will determine whether the rest of the world is ready to get on board.

Source: VentureBeat AI Salesforce on Tuesday launched an entirely rebuilt version of Slackbot, the company's workplace assistant, transforming it from a simple notification tool into what executives describe as a fully powered AI agent capable of searching enterprise data, drafting documents, and taking action on behalf of employees. The new Slackbot, now generally available to Business+ and Enterprise+ customers, is Salesforce's most aggressive move yet to position Slack at the center of the emerging "agentic AI" movement — where software agents work alongside humans to complete complex tasks. The launch comes as Salesforce attempts to convince investors that artificial intelligence will bolster its products rather than render them obsolete. "Slackbot isn't just another copilot or AI assistant," said Parker Harris, Salesforce co-founder and Slack's chief technology officer, in an exclusive interview with VentureBeat. "It's the front door to the agentic enterprise, powered by Salesforce." From tricycle to Porsche: Salesforce rebuilt Slackbot from the ground up Harris was blunt about what distinguishes the new Slackbot from its predecessor: "The old Slackbot was, you know, a little tricycle, and the new Slackbot is like, you know, a Porsche." The original Slackbot, which has existed since Slack's early days, performed basic algorithmic tasks — reminding users to add colleagues to documents, suggesting channel archives, and delivering simple notifications. The new version runs on an entirely different architecture built around a large language model and sophisticated search capabilities that can access Salesforce records, Google Drive files, calendar data, and years of Slack conversations. "It's two different things," Harris explained. "The old Slackbot was algorithmic and fairly simple. The new Slackbot is brand new — it's based around an LLM and a very robust search engine, and connections to third-party search engines, third-party enterprise data." 
Salesforce chose to retain the Slackbot brand despite the fundamental technical overhaul. "People know what Slackbot is, and so we wanted to carry that forward," Harris said. Why Anthropic's Claude powers the new Slackbot — and which AI models could come next The new Slackbot runs on Claude, Anthropic's large language model, a choice driven partly by compliance requirements. Slack's commercial service operates under FedRAMP Moderate certification to serve U.S. federal government customers, and Harris said Anthropic was "the only provider that could give us a compliant LLM" when Slack began building the new system. But that exclusivity won't last. "We are, this year, going to support additional providers," Harris said. "We have a great relationship with Google. Gemini is incredible — performance is great, cost is great. So we're going to use Gemini for some things." He added that OpenAI remains a possibility as well. Harris echoed Salesforce CEO Marc Benioff's view that large language models are becoming commoditized: "You've heard Marc talk about LLMs are commodities, that they're democratized. I call them CPUs." On the sensitive question of training data, Harris was unequivocal: Salesforce does not train any models on customer data. "Models don't have any sort of security," he explained. "If we trained it on some confidential conversation that you and I have, I don't want Carolyn to know — if I train it into the LLM, there is no way for me to say you get to see the answer, but Carolyn doesn't." Inside Salesforce's internal experiment: 80,000 employees tested Slackbot with striking results Salesforce has been testing the new Slackbot internally for months, rolling it out to all 80,000 employees. According to Ryan Gavin, Slack's chief marketing officer, the results have been striking: "It's the fastest adopted product in Salesforce history." 
Internal data shows that two-thirds of Salesforce employees have tried the new Slackbot, with 80% of those users continuing to use it regularly. Internal satisfaction rates reached 96% — the highest for any AI feature Slack has shipped. Employees report saving between two and 20 hours per week. The adoption happened largely organically. "I think it was about five days, and a Canvas was developed by our employees called 'The Most Stealable Slackbot Prompts,'" Gavin said. "People just started adding to it organically. I think it's up to 250-plus prompts that are in this Canvas right now." Kate Crotty, a principal UX researcher at Salesforce, found that 73% of internal adoption was driven by social sharing rather than top-down mandates. "Everybody is there to help each other learn and communicate hacks," she said. How Slackbot transforms scattered enterprise data into executive-ready insights During a product demonstration, Amy Bauer, Slack's product experience designer, showed how Slackbot can synthesize information across multiple sources. In one example, she asked Slackbot to analyze customer feedback from a pilot program, uploaded an image of a usage dashboard, and had Slackbot correlate the qualitative and quantitative data. "This is where Slackbot really earns its keep for me," Bauer explained. "What it's doing is not just simply reading the image — it's actually looking at the image and comparing it to the insight it just generated for me." Slackbot can then query Salesforce to find enterprise accounts with open deals that might be good candidates for early access, creating what Bauer called "a really great justification and plan to move forward." Finally, it can synthesize all that information into a Canvas — Slack's collaborative document format — and find calendar availability among stakeholders to schedule a review meeting. "Up until this point, we have been working in a one-to-one capacity with Slackbot," Bauer said. 
"But one of the benefits that I can do now is take this insight and have it generate this into a Canvas, a shared workspace where I can iterate on it, refine it with Slackbot, or share it out with my team." Rob Seaman, Slack's chief product officer, said the Canvas creation demonstrates where the product is heading: "This is making a tool call internally to Slack Canvas to actually write, effectively, a shared document. But it signals where we're going with Slackbot — we're eventually going to be adding in additional third-party tool calls." MrBeast's company became a Slackbot guinea pig—and employees say they're saving 90 minutes a day Among Salesforce's pilot customers is Beast Industries, the parent company of YouTube star MrBeast. Luis Madrigal, the company's chief information officer, joined the launch announcement to describe his experience. "As somebody who has rolled out enterprise technologies for over two decades now, this was practically one of the easiest," Madrigal said. "The plumbing is there. Slack as an implementation, Enterprise Tools — being able to turn on the Slackbot and the Slack AI functionality was as simple as having my team go in, review, do a quick security review." Madrigal said his security team signed off "rather quickly" — unusual for enterprise AI deployments — because Slackbot accesses only the information each individual user already has permission to view. "Given all the guardrails you guys have put into place for Slackbot to be unique and customized to only the information that each individual user has, only the conversations and the Slack rooms and Slack channels that they're part of—that made my security team sign off rather quickly." One Beast Industries employee, Sinan, the head of Beast Games marketing, reported saving "at bare minimum, 90 minutes a day." Another employee, Spencer, a creative supervisor, described it as "an assistant who's paying attention when I'm not." 
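The guardrail Madrigal describes, where the assistant can only surface what the asking user could already open, amounts to filtering retrieval results through existing access-control lists before anything reaches the model. A minimal sketch of that pattern in Python (our own illustration with hypothetical names, not Slack's implementation):

```python
# Hypothetical sketch of permission-scoped retrieval: the agent inherits
# the asking user's permissions instead of organization-wide access.
from dataclasses import dataclass


@dataclass(frozen=True)
class Doc:
    doc_id: str
    text: str


def search(query: str, index: list[Doc]) -> list[Doc]:
    # Stand-in for the real search engine: naive substring match.
    return [d for d in index if query.lower() in d.text.lower()]


def answer_for(user: str, query: str, index: list[Doc],
               acl: dict[str, set[str]]) -> list[Doc]:
    # Filter after retrieval: only documents the user can already view
    # ever reach the language model's context window.
    hits = search(query, index)
    return [d for d in hits if user in acl.get(d.doc_id, set())]


index = [
    Doc("deal-123", "Acme renewal deal notes"),
    Doc("hr-9", "Confidential HR deal review"),
]
acl = {"deal-123": {"alice", "bob"}, "hr-9": {"carol"}}

print([d.doc_id for d in answer_for("alice", "deal", index, acl)])  # ['deal-123']
```

The design choice worth noting is that the filter is applied per request, per user; there is no privileged "agent account" whose access could leak across users.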
Other pilot customers include Slalom, reMarkable, Xero, Mercari, and Engine. Mollie Bodensteiner, SVP of Operations at Engine, called Slackbot "an absolute 'chaos tamer' for our team," estimating it saves her about 30 minutes daily "just by eliminating context switching." Slackbot vs. Microsoft Copilot vs. Google Gemini: The fight for enterprise AI dominance

Source: VentureBeat AI Anthropic released Cowork on Monday, a new AI agent capability that extends the power of its wildly successful Claude Code tool to non-technical users — and according to company insiders, the team built the entire feature in approximately a week and a half, largely using Claude Code itself. The launch marks a major inflection point in the race to deliver practical AI agents to mainstream users, positioning Anthropic to compete not just with OpenAI and Google in conversational AI, but with Microsoft's Copilot in the burgeoning market for AI-powered productivity tools. "Cowork lets you complete non-technical tasks much like how developers use Claude Code," the company announced via its official Claude account on X. The feature arrives as a research preview available exclusively to Claude Max subscribers — Anthropic's power-user tier priced between $100 and $200 per month — through the macOS desktop application. For the past year, the industry narrative has focused on large language models that can write poetry or debug code. With Cowork, Anthropic is betting that the real enterprise value lies in an AI that can open a folder, read a messy pile of receipts, and generate a structured expense report without human hand-holding. How developers using a coding tool for vacation research inspired Anthropic's latest product The genesis of Cowork lies in Anthropic's recent success with the developer community. In late 2024, the company released Claude Code, a terminal-based tool that allowed software engineers to automate rote programming tasks. The tool was a hit, but Anthropic noticed a peculiar trend: users were forcing the coding tool to perform non-coding labor. According to Boris Cherny, an engineer at Anthropic, the company observed users deploying the developer tool for an unexpectedly diverse array of tasks. 
"Since we launched Claude Code, we saw people using it for all sorts of non-coding work: doing vacation research, building slide decks, cleaning up your email, cancelling subscriptions, recovering wedding photos from a hard drive, monitoring plant growth, controlling your oven," Cherny wrote on X. "These use cases are diverse and surprising — the reason is that the underlying Claude Agent is the best agent, and Opus 4.5 is the best model." Recognizing this shadow usage, Anthropic effectively stripped the command-line complexity from their developer tool to create a consumer-friendly interface. In its blog post announcing the feature, Anthropic explained that developers "quickly began using it for almost everything else," which "prompted us to build Cowork: a simpler way for anyone — not just developers — to work with Claude in the very same way." Inside the folder-based architecture that lets Claude read, edit, and create files on your computer Unlike a standard chat interface where a user pastes text for analysis, Cowork requires a different level of trust and access. Users designate a specific folder on their local machine that Claude can access. Within that sandbox, the AI agent can read existing files, modify them, or create entirely new ones. Anthropic offers several illustrative examples: reorganizing a cluttered downloads folder by sorting and intelligently renaming each file, generating a spreadsheet of expenses from a collection of receipt screenshots, or drafting a report from scattered notes across multiple documents. "In Cowork, you give Claude access to a folder on your computer. Claude can then read, edit, or create files in that folder," the company explained on X. "Try it to create a spreadsheet from a pile of screenshots, or produce a first draft from scattered notes." The architecture relies on what is known as an "agentic loop." When a user assigns a task, the AI does not merely generate a text response. 
Instead, it formulates a plan, executes steps in parallel, checks its own work, and asks for clarification if it hits a roadblock. Users can queue multiple tasks and let Claude process them simultaneously — a workflow Anthropic describes as feeling "much less like a back-and-forth and much more like leaving messages for a coworker." The system is built on Anthropic's Claude Agent SDK, meaning it shares the same underlying architecture as Claude Code. Anthropic notes that Cowork "can take on many of the same tasks that Claude Code can handle, but in a more approachable form for non-coding tasks." The recursive loop where AI builds AI: Claude Code reportedly wrote much of Claude Cowork Perhaps the most remarkable detail surrounding Cowork's launch is the speed at which the tool was reportedly built — highlighting a recursive feedback loop where AI tools are being used to build better AI tools. During a livestream hosted by Dan Shipper, Felix Rieseberg, an Anthropic employee, confirmed that the team built Cowork in approximately a week and a half. Alex Volkov, who covers AI developments, expressed surprise at the timeline: "Holy shit Anthropic built 'Cowork' in the last... week and a half?!" This prompted immediate speculation about how much of Cowork was itself built by Claude Code. Simon Smith, EVP of Generative AI at Klick Health, put it bluntly on X: "Claude Code wrote all of Claude Cowork. Can we all agree that we're in at least somewhat of a recursive improvement loop here?" The implication is profound: Anthropic's AI coding agent may have substantially contributed to building its own non-technical sibling product. If true, this is one of the most visible examples yet of AI systems being used to accelerate their own development and expansion — a strategy that could widen the gap between AI labs that successfully deploy their own agents internally and those that do not. 
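The "agentic loop" described earlier in this section (formulate a plan, execute steps, check the work, ask for clarification when unsure) can be sketched in a few lines. This is our own illustrative skeleton with made-up names, not Anthropic's code:

```python
# Illustrative skeleton of an agentic loop: plan, execute, self-check,
# and pause for clarification instead of guessing. All names are ours.
from dataclasses import dataclass


@dataclass
class Step:
    description: str
    ambiguous: bool = False
    done: bool = False


def plan(task: str) -> list[Step]:
    # A real agent would have the model decompose the task;
    # we hard-code three steps for illustration.
    return [Step(f"{task}: step {i}") for i in (1, 2, 3)]


def execute(step: Step) -> None:
    # Stand-in for a tool call (read a file, edit a file, browse...).
    step.done = True


def self_check(step: Step) -> bool:
    # Stand-in for the verification pass over the step's output.
    return step.done


def agentic_loop(task: str, ask_user) -> list[str]:
    log = []
    for step in plan(task):
        if step.ambiguous:
            # Ask the human rather than act on a shaky interpretation.
            step.description += f" [{ask_user(step.description)}]"
        execute(step)
        if not self_check(step):
            execute(step)  # one retry before surfacing the failure
        log.append(f"done: {step.description}")
    return log


print(agentic_loop("organize downloads", ask_user=lambda q: "use file dates"))
```

Queueing several tasks, as Cowork allows, is then just running this loop over a list of tasks rather than a single one.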
Connectors, browser automation, and skills extend Cowork's reach beyond the local file system Cowork doesn't operate in isolation. The feature integrates with Anthropic's existing ecosystem of connectors — tools that link Claude to external information sources and services such as Asana, Notion, PayPal, and other supported partners. Users who have configured these connections in the standard Claude interface can leverage them within Cowork sessions. Additionally, Cowork can pair with Claude in Chrome, Anthropic's browser extension, to execute tasks requiring web access. This combination allows the agent to navigate websites, click buttons, fill forms, and extract information from the internet — all while operating from the desktop application. "Cowork includes a number of novel UX and safety features that we think make the product really special," Cherny explained, highlighting "a built-in VM [virtual machine] for isolation, out of the box support for browser automation, support for all your claude.ai data connectors, asking you for clarification when it's unsure." Anthropic has also introduced an initial set of "skills" specifically designed for Cowork that enhance Claude's ability to create documents, presentations, and other files. These build on the Skills for Claude framework the company announced in October, which provides specialized instruction sets Claude can load for particular types of tasks. Why Anthropic is warning users that its own AI agent could delete their files The transition from a chatbot that suggests edits to an agent that makes edits introduces significant risk. An AI that can organize files can, theoretically, delete them. In a notable display of transparency, Anthropic devoted considerable space in its announcement to warning users about Cowork's potential dangers — an unusual approach for a product launch. 
The company explicitly acknowledges that Claude "can take potentially destructive actions (such as deleting local files) if it's instructed to." Because Claude might occasionally misinterpret instructions, Anthropic urges users to provide "very clear guidance" about sensitive operations. More concerning is the risk of prompt injection attacks — a technique where malicious actors embed hidden instructions in content Claude might encounter online, potentially causing the agent to bypass safeguards or take harmful actions. "We've built sophisticated defenses against prompt injections," Anthropic wrote, "but agent safety — that is, the task of securing Claude's real-world actions — is still an active area of development in the industry." The company characterized these risks as inherent to the current state of AI agent technology rather than unique to Cowork. "These risks aren't new with Cowork, but it might be the first time you're using a more advanced tool that moves beyond a simple conversation," the announcement notes. Anthropic's desktop agent strategy sets up a direct challenge to Microsoft Copilot The launch of Cowork places Anthropic in direct competition with Microsoft, which has spent years attempting to integrate its Copilot AI into the fabric of the Windows operating system with mixed adoption results. However, Anthropic's approach differs in its emphasis on isolation. By confining the agent to specific folders and requiring explicit connectors, the company is attempting to strike a balance between the utility of an OS-level agent and the security of a sandboxed application. What distinguishes Anthropic's approach is its bottom-up evolution. Rather than designing an AI assistant and retrofitting agent capabilities, Anthropic built a powerful coding agent first — Claude Code — and is now abstracting its capabilities for broader audiences. This technical lineage may give Cowork more robust agentic behavior from the start. 
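The folder confinement described above, where the agent may only touch files under a user-designated directory, is typically enforced with a path-resolution check so that "../" tricks and symlinks cannot escape the sandbox. A minimal sketch (our own code, not Anthropic's implementation):

```python
# Minimal sandbox-confinement check: resolve the candidate path first,
# then verify it still sits under the designated root directory.
from pathlib import Path


def is_inside_sandbox(sandbox: Path, candidate: str) -> bool:
    """Return True only if `candidate` resolves to a path under `sandbox`.

    Resolving before comparing defeats `..` components and symlink escapes.
    """
    root = sandbox.resolve()
    target = (root / candidate).resolve()
    return target == root or root in target.parents


sandbox = Path("/tmp/cowork-demo")
print(is_inside_sandbox(sandbox, "notes/draft.txt"))   # inside the folder
print(is_inside_sandbox(sandbox, "../../etc/passwd"))  # attempted escape
```

An agent would run a check like this before every read, write, or delete, refusing any operation whose resolved target falls outside the designated folder.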
Claude Code has generated significant enthusiasm among developers since its initial launch as a command-line tool in late 2024. The company expanded access with a web interface in October 2025, followed by a Slack integration in December. Cowork is the next logical step: bringing the same agentic architecture to users who may never touch a terminal. Who can access Cowork now, and what's coming next for Windows and other platforms For now, Cowork remains exclusive to Claude Max subscribers using the macOS desktop application. Users on other subscription tiers — Free, Pro, Team, or Enterprise — can join a waitlist for future access. Anthropic has signaled clear intentions to expand the feature's reach. The blog post explicitly mentions plans to add cross-device sync and bring Cowork to Windows as the company learns from the research preview. Cherny set expectations appropriately, describing the product as "early and raw, similar to what Claude Code felt like when it first launched." To access Cowork, Max subscribers can download or update the Claude macOS app and click on "Cowork" in the sidebar. The real question facing enterprise AI adoption For technical decision-makers, the implications of Cowork extend beyond any single product launch. The bottleneck for AI adoption is shifting — no longer is model intelligence the limiting factor, but rather workflow integration and user trust. Anthropic's goal, as the company puts it, is to make working with Claude feel less like operating a tool and more like delegating to a colleague. Whether mainstream users are ready to hand over folder access to an AI that might misinterpret their instructions remains an open question. But the speed of Cowork's development — a major feature built in ten days, possibly by the company's own AI — previews a future where the capabilities of these systems compound faster than organizations can evaluate them. The chatbot has learned to use a file manager. 
What it learns to use next is anyone's guess.

Source: VentureBeat AI Alfred Wahlforss was running out of options. His startup, Listen Labs, needed to hire over 100 engineers, but competing against Mark Zuckerberg's $100 million offers seemed impossible. So he spent $5,000 — a fifth of his marketing budget — on a billboard in San Francisco displaying what looked like gibberish: five strings of random numbers. The numbers were actually AI tokens. Decoded, they led to a coding challenge: build an algorithm to act as a digital bouncer at Berghain, the Berlin nightclub famous for rejecting nearly everyone at the door. Within days, thousands attempted the puzzle. 430 cracked it. Some got hired. The winner flew to Berlin, all expenses paid. That unconventional approach has now attracted $69 million in Series B funding, led by Ribbit Capital with participation from Evantic and existing investors Sequoia Capital, Conviction, and Pear VC. The round values Listen Labs at $500 million and brings its total capital to $100 million. In nine months since launch, the company has grown annualized revenue by 15x to eight figures and conducted over one million AI-powered interviews. "When you obsess over customers, everything else follows," Wahlforss said in an interview with VentureBeat. "Teams that use Listen bring the customer into every decision, from marketing to product, and when the customer is delighted, everyone is." Why traditional market research is broken, and what Listen Labs is building to fix it Listen's AI researcher finds participants, conducts in-depth interviews, and delivers actionable insights in hours, not weeks. The platform replaces the traditional choice between quantitative surveys — which provide statistical precision but miss nuance—and qualitative interviews, which deliver depth but cannot scale. Wahlforss explained the limitation of existing approaches: "Essentially surveys give you false precision because people end up answering the same question... You can't get the outliers. 
People are actually not honest on surveys." The alternative, one-on-one human interviews, "gives you a lot of depth. You can ask follow up questions. You can kind of double check if they actually know what they're talking about. And the problem is you can't scale that." The platform works in four steps: users create a study with AI assistance, Listen recruits participants from its global network of 30 million people, an AI moderator conducts in-depth interviews with follow-up questions, and results are packaged into executive-ready reports including key themes, highlight reels, and slide decks. What distinguishes Listen's approach is its use of open-ended video conversations rather than multiple-choice forms. "In a survey, you can kind of guess what you should answer, and you have four options," Wahlforss said. "Oh, they probably want me to buy high income. Let me click on that button versus an open ended response. It just generates much more honesty." The dirty secret of the $140 billion market research industry: rampant fraud Listen finds and qualifies the right participants in its global network of 30 million people. But building that panel required confronting what Wahlforss called "one of the most shocking things that we've learned when we entered this industry"—rampant fraud. "Essentially, there's a financial transaction involved, which means there will be bad players," he explained. "We actually had some of the largest companies, some of them have billions in revenue, send us people who claim to be kind of enterprise buyers to our platform and our system immediately detected, like, fraud, fraud, fraud, fraud, fraud." The company built what it calls a "quality guard" that cross-references LinkedIn profiles with video responses to verify identity, checks consistency across how participants answer questions, and flags suspicious patterns. The result, according to Wahlforss: "People talk three times more. 
They're much more honest when they talk about sensitive topics like politics and mental health." Emeritus, an online education company that uses Listen, reported that approximately 20% of survey responses previously fell into the fraudulent or low-quality category. With Listen, they reduced this to almost zero. "We did not have to replace any responses because of fraud or gibberish information," said Gabrielli Tiburi, Assistant Manager of Customer Insights at Emeritus. How Microsoft, Sweetgreen, and Chubbies are using AI interviews to build better products The speed advantage has proven central to Listen's pitch. Traditional customer research at Microsoft could take four to six weeks to generate insights. "By the time we get to them, either the decision has been made or we lose out on the opportunity to actually influence it," said Romani Patel, Senior Research Manager at Microsoft. With Listen, Microsoft can now get insights in days, and in many cases, within hours. The platform has already powered several high-profile initiatives. Microsoft used Listen Labs to collect global customer stories for its 50th anniversary celebration. "We wanted users to share how Copilot is empowering them to bring their best self forward," Patel said, "and we were able to collect those user video stories within a day." Traditionally, that kind of work would have taken six to eight weeks. Simple Modern, an Oklahoma-based drinkware company, used Listen to test a new product concept. The process took about an hour to write questions, an hour to launch the study, and 2.5 hours to receive feedback from 120 people across the country. "We went from 'Should we even have this product?' to 'How should we launch it?'" said Chris Hoyle, the company's Chief Marketing Officer. Chubbies, the shorts brand, achieved a 24x increase in youth research participation—growing from 5 to 120 participants — by using Listen to overcome the scheduling challenges of traditional focus groups with children. 
"There's school, sports, dinner, and homework," explained Lauren Neville, Director of Insights and Innovation. "I had to find a way to hear from them that fit into their schedules." The company also discovered product issues through AI interviews that might have gone undetected otherwise. Wahlforss described how the AI "through conversations, realized there were like issues with the kids short line, and decided to, like, interview hundreds of kids. And I understand that there were issues in the liner of the shorts and that they were, like, scratchy, quote, unquote, according to the people interviewed." The redesigned product became "a blockbuster hit." The Jevons paradox explains why cheaper research creates more demand, not less Listen Labs is entering a massive but fragmented market. Wahlforss cited research from Andreessen Horowitz estimating the market research industry at roughly $140 billion annually, populated by legacy players — some with more than a billion dollars in revenue — that he believes are vulnerable to disruption. "There are very much existing budget lines that we are replacing," Wahlforss said. "Why we're replacing them is that one, they're super costly. Two, they're kind of stuck in this old paradigm of choosing between a survey or interview, and they also take months to work with." But the more intriguing dynamic may be that AI-powered research doesn't just replace existing spending — it creates new demand. Wahlforss invoked the Jevons paradox, the economic principle that when technology makes a resource more efficient to use, overall consumption of that resource tends to increase rather than decrease. "What I've noticed is that as something gets cheaper, you don't need less of it. You want more of it," Wahlforss explained. "There's infinite demand for customer understanding. 
So the researchers on the team can do an order of magnitude more research, and also other people who weren't researchers before can now do that as part of their job." Inside the elite engineering team that built Listen Labs before they had a working toilet Listen Labs traces its origins to a consumer app that Wahlforss and his co-founder built after meeting at Harvard. "We built this consumer app that got 20,000 downloads in one day," Wahlforss recalled. "We had all these users, and we were thinking like, okay, what can we do to get to know them better? And we built this prototype of what Listen is today." The founding team brings an unusual pedigree. Wahlforss's co-founder "was the national champion in competitive programming in Germany, and he worked at Tesla Autopilot." The company claims that 30% of its engineering team are medalists from the International Olympiad in Informatics — the same competition that produced the founders of Cognition, the AI coding startup. The Berghain billboard stunt generated approximately 5 million views across social media, according to Wahlforss. It reflected the intensity of the talent war in the Bay Area. "We had to do these things because some of our, like early employees, joined the company before we had a working toilet," he said. "But now we fixed that situation." The company grew from 5 to 40 employees in 2024 and plans to reach 150 this year. It hires engineers for non-engineering roles across marketing, growth, and operations — a bet that in the AI era, technical fluency matters everywhere. Synthetic customers and automated decisions: what Listen Labs is building next Wahlforss outlined an ambitious product roadmap that pushes into more speculative territory. The company is building "the ability to simulate your customers, so you can take all of those interviews we've done, and then extrapolate based on that and create synthetic users or simulated user voices." 
Beyond simulation, Listen aims to enable automated action based on research findings. "Can you not just make recommendations, but also create spawn agents to either change things in code or some customer churns? Can you give them a discount and try to bring them back?" Wahlforss acknowledged the ethical implications. "Obviously, as you said, there's kind of ethical concerns there. Of like, automated decision making overall can be bad, but we will have considerable guardrails to make sure that the companies are always in the loop." The company already handles sensitive data with care. "We don't train on any of the data," Wahlforss said. "We will also scrub any sensitive PII automatically so the model can detect that. And there are times when, for example, you work with investors, where if you accidentally mention something that could be material, non public information, the AI can actually detect that and remove any information like that." How AI could reshape the future of product development Perhaps the most provocative implication of Listen's model is how it could reshape product development itself. Wahlforss described a customer — an Australian startup — that has adopted what amounts to a continuous feedback loop. "They're based in Australia, so they're coding during the day, and then in their night, they're releasing a Listen study with an American audience. Listen validates whatever they built during the day, and they get feedback on that. They can then plug that feedback directly into coding tools like Claude Code and iterate." The vision extends Y Combinator's famous dictum — "write code, talk to users" — into an automated cycle. "Write code is now getting automated. And I think like talk to users will be as well, and you'll have this kind of infinite loop where you can start to ship this truly amazing product, almost kind of autonomously." 
Whether that vision materializes depends on factors beyond Listen's control — the continued improvement of AI models, enterprise willingness to trust automated research, and whether speed truly correlates with better products. A 2024 MIT study found that 95% of AI pilots fail to move into production, a statistic Wahlforss cited as the reason he emphasizes quality over demos. "I'm constantly have to emphasize like, let's make sure the quality is there and the details are right," he said. But the company's growth suggests appetite for the experiment. Microsoft's Patel said Listen has "removed the drudgery of research and brought the fun and joy back into my work." Chubbies is now pushing its founder to give everyone in the company a login. Sling Money, a stablecoin payments startup, can create a survey in ten minutes and receive results the same day. "It's a total game changer," said Ali Romero, Sling Money's marketing manager. Wahlforss has a different phrase for what he's building. When asked about the tension between speed and rigor — the long-held belief that moving fast means cutting corners — he cited Nat Friedman, the former GitHub CEO and Listen investor, who keeps a list of one-liners on his website. One of them: "Slow is fake." It's an aggressive claim for an industry built on methodological caution. But Listen Labs is betting that in the AI era, the companies that listen fastest will be the ones that win. The only question is whether customers will talk back.

Source: Hugging Face Blog Back to Articles GOAL: End-to-end Machine Learning experiments Setup and Install Install Codex Install the Hugging Face Skills Connect to Hugging Face Your first AI Experiment Instruct Codex to do an end-to-end fine-tuning experiment Updating the Training Report Dataset Validation Review Before Submitting Track Progress using the Training Report Use Your Model Hardware and Cost What's Next Resources Codex Hugging Face Skills Building on our work to get Claude Code to train open source models, we are now getting Codex to go further. We gave Codex access to the Hugging Face Skills repository, which contains skills for Machine Learning and AI tasks such as training or evaluating models. With HF skills, a coding agent can: Fine-tune and apply RL alignment on language models Review, explain, and act on live training metrics from Trackio Evaluate checkpoints and act on evaluation results Create reports from experiments Export to and quantize models with GGUF for local deployment Publish models to the Hub This tutorial dives even deeper and shows you how it works and how to use it yourself. So let's get started. Codex uses AGENTS.md files to accomplish specialized tasks, whilst Claude Code uses 'Skills'. Fortunately, 'HF-skills' is compatible with both approaches and works with major coding agents like Claude Code, Codex, or Gemini CLI. With HF-s

Source: Hugging Face Blog Back to Articles The "Black Box" Problem of Agent Benchmarks The Experiment: Diagnosing ITBench Agents Finding 1: Stronger models like Gemini-3-Flash show surgical (isolated) failure modes per trace, whereas open-source Kimi-K2 and GPT-oss-120b show compounding failure patterns Finding 2: "Non-Fatal" vs. "Fatal" Failures The "Non-Fatal" (Benign) Flaws The "Fatal" Flaws Case Study: Gemini-3-Flash (Decisive but Overconfident) Case Study: GPT-OSS-120B A different (and more useful) way to read the plots: “fatal” vs “non-fatal” Recoverable / structural (show up even in successful traces) Fatal / decisive (strongly associated with failed traces) Conclusion Ayhan Sebin Saurabh Jha Rohan Arora Daby Sow Mert Cemri Melissa Pan Ion Stoica ITBench HF Space ITBench HF Dataset MAST HF Dataset ITBench Github MAST Github IBM Research and UC Berkeley collaborated to study how agentic LLM systems break in real-world IT automation, for tasks involving incident triage, logs/metrics queries, and Kubernetes actions in long-horizon tool loops. Benchmarks typically reduce performance to a single number, telling you whether an agent failed but never why. To solve this black-box problem, we applied MAST (Multi-Agent System Failure Taxonomy), an emerging practice for diagnosing agentic reliability. By leveraging MAST to analyze ITBench—the industry benchmark for SRE, Se
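The "fatal" versus "non-fatal" distinction above amounts to asking, for each failure mode, how strongly its presence in a trace is associated with the trace failing. A minimal sketch of that tally, using hypothetical mode labels and toy annotations rather than the actual MAST taxonomy categories or ITBench data:

```python
from collections import Counter

# Hypothetical trace annotations: each trace lists the failure modes
# observed in it and whether the trace ultimately succeeded.
traces = [
    {"modes": ["premature_termination"], "success": False},
    {"modes": ["step_repetition"], "success": True},
    {"modes": ["step_repetition", "premature_termination"], "success": False},
    {"modes": [], "success": True},
]

def failure_rate_by_mode(traces):
    """For each mode, the fraction of traces containing it that failed."""
    seen, failed = Counter(), Counter()
    for t in traces:
        for m in set(t["modes"]):
            seen[m] += 1
            if not t["success"]:
                failed[m] += 1
    return {m: failed[m] / seen[m] for m in seen}

rates = failure_rate_by_mode(traces)
# "Fatal" modes appear only in failed traces; "non-fatal" ones also
# show up in traces that still succeeded.
fatal = {m for m, r in rates.items() if r == 1.0}
```

On this toy data, `premature_termination` comes out fatal while `step_repetition` is recoverable, mirroring the structural/decisive split the post describes.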

Source: VentureBeat AI When the creator of the world's most advanced coding agent speaks, Silicon Valley doesn't just listen — it takes notes. For the past week, the engineering community has been dissecting a thread on X from Boris Cherny, the creator and head of Claude Code at Anthropic. What began as a casual sharing of his personal terminal setup has spiraled into a viral manifesto on the future of software development, with industry insiders calling it a watershed moment for the startup. "If you're not reading the Claude Code best practices straight from its creator, you're behind as a programmer," wrote Jeff Tang, a prominent voice in the developer community. Kyle McNease, another industry observer, went further, declaring that with Cherny's "game-changing updates," Anthropic is "on fire," potentially facing "their ChatGPT moment." The excitement stems from a paradox: Cherny's workflow is surprisingly simple, yet it allows a single human to operate with the output capacity of a small engineering department. As one user noted on X after implementing Cherny's setup, the experience "feels more like Starcraft" than traditional coding — a shift from typing syntax to commanding autonomous units. Here is an analysis of the workflow that is reshaping how software gets built, straight from the architect himself. How running five AI agents at once turns coding into a real-time strategy game The most striking revelation from Cherny's disclosure is that he does not code in a linear fashion. In the traditional "inner loop" of development, a programmer writes a function, tests it, and moves to the next. Cherny, however, acts as a fleet commander. "I run 5 Claudes in parallel in my terminal," Cherny wrote. "I number my tabs 1-5, and use system notifications to know when a Claude needs input." By utilizing iTerm2 system notifications, Cherny effectively manages five simultaneous work streams. 
While one agent runs a test suite, another refactors a legacy module, and a third drafts documentation. He also runs "5-10 Claudes on claude.ai" in his browser, using a "teleport" command to hand off sessions between the web and his local machine. This validates the "do more with less" strategy articulated by Anthropic President Daniela Amodei earlier this week. While competitors like OpenAI pursue trillion-dollar infrastructure build-outs, Anthropic is proving that superior orchestration of existing models can yield exponential productivity gains. The counterintuitive case for choosing the slowest, smartest model In a surprising move for an industry obsessed with latency, Cherny revealed that he exclusively uses Anthropic's heaviest, slowest model: Opus 4.5. "I use Opus 4.5 with thinking for everything," Cherny explained. "It's the best coding model I've ever used, and even though it's bigger & slower than Sonnet, since you have to steer it less and it's better at tool use, it is almost always faster than using a smaller model in the end." For enterprise technology leaders, this is a critical insight. The bottleneck in modern AI development isn't the generation speed of the token; it is the human time spent correcting the AI's mistakes. Cherny's workflow suggests that paying the "compute tax" for a smarter model upfront eliminates the "correction tax" later. One shared file turns every AI mistake into a permanent lesson Cherny also detailed how his team solves the problem of AI amnesia. Standard large language models do not "remember" a company's specific coding style or architectural decisions from one session to the next. To address this, Cherny's team maintains a single file named CLAUDE.md in their git repository. "Anytime we see Claude do something incorrectly we add it to the CLAUDE.md, so Claude knows not to do it next time," he wrote.
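The thread does not reproduce the file itself; a hypothetical CLAUDE.md following the pattern Cherny describes (one accumulated rule per observed mistake, checked into the repository) might look like:

```markdown
# CLAUDE.md — project conventions (hypothetical example)
- Use pnpm, not npm, for all package commands.
- Never edit generated files under src/gen/; change the schema instead.
- Run the full test suite before proposing a commit.
- Prefer small, focused diffs; do not reformat unrelated code.
```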

Source: Siecle Digital In the United States, Anthropic has just entered an unprecedented standoff with the Department of Defense, one that shows artificial intelligence has now moved to the heart of geopolitical decision-making. As Le Figaro reports, the Trump administration is asking the company to authorize unrestricted use of Claude, its AI model, by the Pentagon. A request that the company […]

Source: OpenAI Blog GABRIEL is a new open-source toolkit from OpenAI that uses GPT to turn qualitative text and images into quantitative data, helping social scientists analyze research at scale.

Source: OpenAI Blog OpenAI’s business model scales with intelligence—spanning subscriptions, API, ads, commerce, and compute—driven by deepening ChatGPT adoption.

Source: OpenAI Blog OpenAI expands data residency for ChatGPT Enterprise, ChatGPT Edu, and the API Platform, enabling eligible customers to store data at rest in-region.

Source: Hugging Face Blog Back to Articles TL;DR Table-of-Contents Datasets: Ready for the Next Wave of Large-Scale Robot Learning What's New in Datasets v3.0? New Feature: Dataset Editing Tools! Simulation Environments: Expanding Your Training Grounds LIBERO Support Meta-World Integration Codebase: Powerful Tools For Everyone The New Pipeline for Data Processing Multi-GPU Training Made Easy Policies: Unleashing Open-World Generalization PI0 and PI0.5 GR00T N1.5 Robots: A New Era of Hardware Integration with the Plugin System Key Benefits Reachy 2 Integration Phone Integration The Hugging Face Robot Learning Course Deep Dive: The Modern Robot Learning Tutorial Final thoughts from the team We're thrilled to announce a series of significant advancements across LeRobot, designed to make open-source robot learning more powerful, scalable, and user-friendly than ever before! From revamped datasets to versatile editing tools, new simulation environments, and a groundbreaking plugin system for hardware, LeRobot is continuously evolving to meet the demands of cutting-edge embodied AI. TL;DR LeRobot v0.4.0 delivers a major upgrade for open-source robotics, introducing scalable Datasets v3.0, powerful new VLA models like PI0.5 and GR00T N1.5, and a new plugin system for easier hardware integration. The release also adds support for LI

Source: Hugging Face Blog Back to Articles Why this matters How the collaboration works Benefits for the community Join us We’re excited to announce a new collaboration between Hugging Face and VirusTotal, the world’s leading threat-intelligence and malware analysis platform. This collaboration enhances the security of files shared across the Hugging Face Hub, helping protect the machine learning community from malicious or compromised assets. As of today, the HF Hub hosts 2.2 million public model artifacts. As we continue to grow into the world’s largest open platform for Machine Learning models and datasets, ensuring that shared assets remain safe is essential. Threats can take many forms: Malicious payloads disguised as model files or archives Files that have been compromised before upload Binary assets linked to known malware campaigns Dependencies or serialized objects that execute unsafe code when loaded By collaborating with VirusTotal, we’re adding an extra layer of protection and visibility by enabling files shared through H

Source: Hugging Face Blog Back to Articles Intel and Hugging Face collaborated to demonstrate the real-world value of upgrading to Google’s latest C4 Virtual Machine (VM) running on Intel Xeon 6 processors (codenamed Granite Rapids (GNR)). We specifically wanted to benchmark improvements in the text generation performance of the OpenAI GPT OSS Large Language Model (LLM). The results are in, and they are impressive, demonstrating a 1.7x improvement in Total Cost of Ownership (TCO) over the previous-generation Google C3 VM instances. The Google Cloud C4 VM instance further resulted in: 1.4x to 1.7x TPOT throughput/vCPU/dollar Lower price per hour over C3 VM Introduction GPT OSS is a common name for an open-source Mixture of Experts (MoE) model released by OpenAI. An MoE model is a deep neural network architecture that uses specialized “expert” sub-networks and a “gating network” to decide which experts to use for a given input. MoE models allow you to scale your model capacity efficiently without linearly scaling compute costs. They also allow for specialization, where different “experts” learn different skills, allowing them to adapt to diverse data distributions. Even with very large parameters, only a small subset of experts is activated per token, making CPU inference viable. Intel and Hugging Face collaborated to merge an expert execution optimization (PR #40304)
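The gating mechanism described above can be sketched in a few lines. This is a toy, plain-Python illustration of top-k expert routing, not the GPT OSS implementation; the expert functions and gate weights are made up:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(token, experts, gate_weights, top_k=2):
    """Toy Mixture-of-Experts step: the gating network scores every
    expert, but only the top_k experts actually run for this token."""
    # Gating network: one score per expert (a simple dot product here).
    scores = [sum(w * x for w, x in zip(gw, token)) for gw in gate_weights]
    probs = softmax(scores)
    # Activate only the highest-scoring experts, renormalize their weights.
    chosen = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    norm = sum(probs[i] for i in chosen)
    return sum(probs[i] / norm * experts[i](token) for i in chosen)

# Four "experts", each a trivial function; only 2 of them run per token,
# which is why compute does not scale linearly with parameter count.
experts = [lambda t, k=k: k * sum(t) for k in range(1, 5)]
gate_weights = [[0.1, 0.2], [0.3, 0.1], [0.9, 0.4], [0.2, 0.8]]
out = moe_forward([1.0, 2.0], experts, gate_weights, top_k=2)
```

The output is a weighted blend of only the two selected experts' outputs, which is the property that keeps per-token inference cheap enough for CPUs even at large total parameter counts.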

Source: Hugging Face Blog Back to Articles TL;DR: This work shows how a lightweight vision–language model can acquire GUI-grounded skills and evolve into an agentic GUI coder. We release all training recipes, data-processing tools, resulting model, demo and datasets to enable full reproducibility and foster further research. Find the collection here. This video demonstrates the model obtained through the recipe described below, executing a task end-to-end. Table of Contents Introduction

  1. Data Transformation and Unified Action Space The Challenge of Inconsistent Action Spaces Our Unified Approach Example Data Transformation Custom Action Space Adaptation with Action Space Converter Key Features Usage Example Transformed and Released Datasets 2. Phase 1: From Zero to Perception Training Data Optimization Experiments Image Resolution and Coordinate System Analysis Key Findings Phase 1 Results 3. Phase 2: From Perception to Cognition Training Data Phase 2 Results 4. All you need is Open Source
  5. Conclusion What's Next? Introduction Graphical User Interface (GUI) automation is one of the most challenging frontiers in computer vision. Developing models that see and interact with user interfaces enables AI agents to navigate mobile, desktop, and web platforms. This will reshape the future of digital interaction. In th

Source: Hugging Face Blog Back to Articles Accessing SAIR 1. Install essentials 2. Authenticate 3. Load the main table (sair.parquet) 4. (Optional) List available structure archives 5. (Optional) Download and extract structures Questions? This summer, SandboxAQ released the Structurally Augmented IC50 Repository (SAIR), the largest dataset of co-folded 3D protein-ligand structures paired with experimentally measured IC₅₀ labels, directly linking molecular structure to drug potency and overcoming a longstanding scarcity in training data. This dataset is now available on Hugging Face, and for the first time, researchers have open access to more than 5 million AI‑generated, high‑accuracy protein-ligand 3D structures, each paired with validated empirical binding potency data. SAIR is an open-sourced dataset and is publicly available for free under a permissive CC BY 4.0 license, making it immediately actionable for commercial and non-commercial R&D pipelines. More than just a dataset, SAIR is a strategic asset that bridges the long-standing data gap in AI-powered drug design. It empowers pharmaceutical, biotech, and tech‑bio leaders to accelerate R&D, expand target horizons, and supercharge AI models – moving more of the costly, lengthy drug design and optimization from the wet lab to in silico. This means shorter hit‑to‑lead timelines, more efficient lead optimization, fewer
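The access steps in the outline above (install, authenticate, load sair.parquet) are elided in this excerpt. As a stand-in, here is a tiny self-contained sketch of the structure-to-potency linkage the dataset provides; the column names (`structure_id`, `target`, `ic50_nM`) and all values are hypothetical, so check the dataset card for the real schema:

```python
# Hypothetical rows mimicking the idea of SAIR: each entry pairs a
# co-folded protein-ligand structure ID with a measured IC50 (nM).
records = [
    {"structure_id": "S001", "target": "EGFR", "ic50_nM": 12.0},
    {"structure_id": "S002", "target": "EGFR", "ic50_nM": 850.0},
    {"structure_id": "S003", "target": "KRAS", "ic50_nM": 40.0},
]

def potent_hits(records, target, threshold_nM=100.0):
    """Lower IC50 means higher potency; keep sub-threshold binders."""
    return [r["structure_id"] for r in records
            if r["target"] == target and r["ic50_nM"] < threshold_nM]

hits = potent_hits(records, "EGFR")  # keeps only the 12 nM binder
```

In practice the same filter would run over the real parquet table after downloading it from the Hub; the point is just that every structure comes with an empirical potency label you can threshold on.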

Source: Siecle Digital As Europe seeks to establish itself as strategic ground for the giants of artificial intelligence, its capitals are competing, on political ambitions, talent, and technological infrastructure, to attract the most advanced research labs. As Reuters reports, OpenAI has decided to turn its London office into its largest research center […]

Source: Siecle Digital With a new update announced by Google, Gemini takes on a new dimension on Android: artificial intelligence becomes a genuine agent capable of acting directly inside applications. It is an evolution that sketches the future of mobile assistance, where AI no longer merely answers but acts. This feature marks a further step in the strategy […]

Source: Towards Data Science Utilizing feature stores like Feast and distributed compute frameworks like Ray in production machine learning systems

Source: Towards Data Science A deep dive into the Sharpness-Aware-Minimization (SAM) algorithm and how it improves the generalizability of modern deep learning models

Source: OpenAI Blog Introducing GPT-5.3-Codex-Spark—our first real-time coding model. 15x faster generation, 128k context, now in research preview for ChatGPT Pro users.

Source: Hugging Face Blog Back to Articles China's Organic Open Source AI Ecosystem The Established The Normalcy of "DeepSeek Moments" Foundations for the Future Looking Back to Look Forward This is the third and final blog in a three-part series on China's open source community's historical advancements since January 2025's "DeepSeek Moment." The first blog on strategic changes and open artifact growth is available here, and the second blog on architectural and hardware shifts is available here. In this third article, we examine paths and trajectories of prominent Chinese AI organizations, and posit future directions for open source. For AI researchers and developers contributing to and relying on the open source ecosystem and for policymakers understanding the rapidly changing environment, due to intraorganizational and global community gains, open source is the dominant and popular approach for Chinese AI organizations for the near future. Openly sharing artifacts from models to papers to deployment infrastructure maps to a strategy with the goal of large-scale deployment and integration. China's Organic Open Source AI Ecosystem Having examined strategic and architectural changes since DeepSeek's R1, we get a glimpse for the first time at how an organic open source AI ecosystem is taking shape in China. A culmination of powerful players, some established in open

Source: OpenAI Blog OpenAI and Snowflake partner in a $200M agreement to bring frontier intelligence into enterprise data, enabling AI agents and insights directly in Snowflake.

Source: OpenAI Blog Taisei Corporation uses ChatGPT Enterprise to support HR-led talent development and scale generative AI across its global construction business.

Source: Hugging Face Blog Back to Articles What are agent skills? 1. Get the teacher (Claude Opus 4.5) to build a kernel 2. Make an agent skill from the trace 3. Take your skill to an open source, smaller, or cheaper model Deep dive tutorial into building kernels with agent skills Setup and Install Skill Generation Generate the Skill Evaluate on a Different Model How the evaluation in upskill works What's Next Resources The best thing about agent skills is upskilling your agents on hard problems. There are two ways to look at that: You can take Opus 4.5 or other SOTA models and tackle the hardest problems out there. You can take models that run on your laptop and upskill them to harder problems. In this blog post, we’ll show you how to take on the latter, walking through the process of using a new tool, upskill, to generate and evaluate agent skills with large models and use them with smaller models. We will benchmark upskill on the task of writing CUDA kernels for diffusers models, but the process is generally useful for cutting costs or for using smaller models on hard and domain-specific problems. What are agent skills? In case you missed it, agent skills are taking the coding agent game by storm. In fact, they’re a straightforward concept: model context defined as files, such as instructions as markdown and code as scripts. The file for

Source: OpenAI Blog A data-driven report on how workers across industries use ChatGPT—covering adoption trends, top tasks, departmental patterns, and the future of AI at work.

Source: Hugging Face Blog Back to Articles The Seeds of China’s Organic Open Source AI Ecosystem DeepSeek R1: A Turning Point From DeepSeek to AI+: Strategic Realignment Global Reception and Response This is the first blog in a series that will examine China’s open source community’s historical advancements in the past year and its reverberations in shaping the entire ecosystem. Much of 2025’s progress can be traced back to January’s “DeepSeek Moment”, when Hangzhou-based AI company DeepSeek released their R-1 model. The first blog addresses strategic changes and the explosion of new open models and open source players. The second covers architectural and hardware choices made largely by Chinese companies in the wake of a growing open ecosystem, available here. The third analyzes prominent organizations’ trajectories and the future of the global open source ecosystem, available here. For AI researchers and developers contributing to and relying on the open source ecosystem and for policymakers understanding the rapidly changing environment, there has never been a better time to build and release open models and artifacts, as proven by the past year’s immense growth catalyzed by DeepSeek. Notably, geopolitics has driven adoption; with models developed in China dominating across metrics throughout 2025 and new players leapfrogging each other, Western AI communities are see

Source: Hugging Face Blog Back to Articles Discover more in our official blogpost, featuring an interactive experience The journey of building world-class Arabic language models has been one of continuous learning and iteration. Today, we're excited to announce Falcon-H1-Arabic, our most advanced Arabic language model family to date, representing a significant leap forward in both architecture and capabilities. This release embodies months of research, community feedback, and technical innovation, culminating in three powerful models that set new standards for Arabic natural language processing. Building on Success: The Evolution from Falcon-Arabic When we launched Falcon-Arabic a few months ago, the response from the community was both humbling and enlightening. Developers, researchers and students across the Arab world used the model for real use cases, pushing them to its limits and providing invaluable feedback. We learned where the model excelled and, more importantly, where it struggled. Long-context understanding, dialectal variations, mathematical reasoning, and domain-specific knowledge emerged as key areas requiring deeper attention. We didn't just want to make incremental improvements, we wanted to fundamentally rethink our approach. The result is Falcon-H1-Arabic, a model family that addresses every piece of feedback we received while

Source: OpenAI Blog OpenAI is collaborating with Deutsche Telekom to bring advanced, multilingual AI experiences to millions of people across Europe. ChatGPT Enterprise will also be deployed to help employees at Deutsche Telekom improve workflows and accelerate innovation.

Source: OpenAI Blog Key findings from OpenAI’s enterprise data show accelerating AI adoption, deeper integration, and measurable productivity gains across industries in 2025.

Source: OpenAI Blog OpenAI takes an ownership stake in Thrive Holdings to accelerate enterprise AI adoption, embedding frontier research and engineering directly into accounting and IT services to boost speed, accuracy, and efficiency while creating a scalable model for industry-wide transformation.

Source: Hugging Face Blog Back to Articles Looking to show off your robotics aptitude? The AMD Open Robotics Hackathon hosted by AMD, Hugging Face, and Data Monsters is the place to do it. Whether you’re a student, hobbyist, startup founder, or seasoned engineer, this event brings together makers, coders, and roboticists for a fast-paced, hands-on competition that turns bold ideas into functioning demos. The first of two in-person hackathons will take place from December 5-7, 2025 in Tokyo, Japan. Our next stop will be in Paris, France from December 12-14, 2025. Preparing for the Hackathon: Form a team of up to four roboticists (ages 18+) to take on two missions over the course of 3 days. Mission 1 — An instructor-led exploration and preparation session. Learn how to set up the LeRobot development environment using AMD AI solutions. Mission 2 — Build your own creative solution to a real-world problem. Your team has two days to develop an innovative freestyle project using LeRobot. Technical proficiency: • Strong Linux development skills and experience with Python and related tooling and containerization • Machine learning skills, familiarity with PyTorch, and hands-on experience with model training and inference • Bonus if your team has experience with ROCm, LeRobot, and embedded development. Hardware will be provided to contestants in the form of SO-101 robotics kits, AMD Ryz

Source: Hugging Face Blog Back to Articles Summary The State of Global Compute The Beginning of a Rewiring The Reaction: Powering Chinese AI How China’s Compute Landscape Catalyzed the Cambrian Explosion of Open Models Advances in Compute-Constrained Environments Pushing the Technical Frontier The Aftermath: Hardware, Software and Soft Power From Sufficient to Demanded Domestic Synergy A New Software Landscape Looking Ahead Acknowledgements Appendix: A Timeline of Chip Usage and Controls Summary The status quo of AI chip usage, which was once almost entirely U.S.-based, is changing. China’s immense progress in open-weight AI development is now being met with rapid domestic AI chip development. In the past few months, inference for highly performant open-weight AI models in China has started to be powered by chips such as Huawei’s Ascend and Cambricon, with some models starting to be trained using domestic chips. There are two large implications for policymakers and AI researchers and developers respectively: U.S. export controls correlate with expedited Chinese chip production, and chip scarcity in China likely incentivized many of the innovations that are open-sourced and shaping global AI development. China’s chip development correlates highly with stronger export controls from the U.S. Under uncertainty of chip access, Chinese companies have innovated wit

Source: Hugging Face Blog Back to Articles A hands-on guide to collecting data, training policies, and deploying autonomous medical robotics workflows on real hardware SO-ARM Starter Workflow; Building an Embodied Surgical Assistant Technical Implementation Sim-to-Real Mixed Training Approach Hardware Requirements Data Collection Implementation Simulation Teleoperation Controls Model Training Pipeline End-to-End Sim Collect–Train–Eval Pipelines Generate Synthetic Data in Simulation Train and Evaluate Policies Convert Models to TensorRT Getting Started Resources A hands-on guide to collecting data, training policies, and deploying autonomous medical robotics workflows on real hardware Simulation has been a cornerstone in medical imaging to address the data gap. However, in healthcare robotics until now, it's often been too slow, siloed, or difficult to translate into real-world systems. That’s now changing. With new advances in GPU-accelerated simulation and digital twins, developers can design, test, and validate robotic workflows entirely in virtual environments - reducing prototyping time from months to days, improving model accuracy, and enabling safer, faster innovation before a single device reaches the operating room. That's why NVIDIA introduced Isaac for Healthcare earlier this year, a developer framework for AI healthcare robotics, that enables develope

Source: Hugging Face Blog Back to Articles The Story Behind the Library The Foundation Years (2020-2021) The Great Shift: Git to HTTP (2022) An Expanding API Surface (2022–2024) Ready. Xet. Go! (2024-2025) Measuring Growth and Impact Building for the Next Decade Modern HTTP Infrastructure with httpx and hf_xet Agents Made Simple with MCP and Tiny-Agents A Fully-Featured CLI for Modern Workflows Cleaning House for the Future The Migration Guide Acknowledgments TL;DR: After five years of development, huggingface_hub has reached v1.0 - a milestone that marks the library's maturity as the Python package powering 200,000 dependent libraries and providing core functionality for accessing over 2 million public models, 0.5 million public datasets, and 1 million public Spaces. This release introduces breaking changes designed to support the next decade of open machine learning, driven by a global community of almost 300 contributors and millions of users. We highly recommend upgrading to v1.0 to benefit from major performance improvements and new capabilities. pip install --upgrade huggingface_hub Major changes in this release include the migration to httpx as the backend library, a completely redesigned hf CLI (which replaces the deprecated huggingface-cli) featuring a Typer-based interface with a significantly expanded feature set, and full adoption of hf_xet for file t

Source: Hugging Face Blog Back to Articles With the growing capability of large language models (LLMs), a new class of models has emerged: Vision Language Models (VLMs). These models can analyze images and videos to describe scenes, create captions, and answer questions about visual content. While running AI models on your own device can be difficult, as these models are often computationally demanding, it also offers significant benefits, including improved privacy, since your data stays on your machine, and enhanced speed and reliability, because you're not dependent on an internet connection or external servers. This is where tools like Optimum Intel and OpenVINO come in, along with a small, efficient model like SmolVLM. In this blog post, we'll walk you through three easy steps to get a VLM running locally, with no expensive hardware or GPUs required (though you can run all the code samples from this blog post on Intel GPUs). Deploy your model with Optimum Small models like SmolVLM are built for low-resource consumption, but they can be further optimized. In this blog post we will see how to optimize your model to lower memory usage and speed up inference, making it more efficient for deployment on devices with limited resources. To follow this tutorial, you need to install optimum and openvino, which you can do with: pip install optimum-intel[openvino] transf

Source: Hugging Face Blog Back to Articles Getting Started in 5 Minutes How It Works: What This Means for Developers How Teams Are Using This Feature? Show Us What You Build! The pace of AI development today is breathtaking. Every single day, hundreds of new models appear on the Hugging Face Hub, some are specialized variants of popular base models like Llama or Qwen, others feature novel architectures or have been trained from scratch for specific domains. Whether it's a medical AI trained on clinical data, a coding assistant optimized for a particular programming language, or a multilingual model fine-tuned for specific cultural contexts, the Hugging Face Hub has become the beating heart of open-source AI innovation. But here's the challenge: finding an amazing model is just the beginning. What happens when you discover a model that's 90% perfect for your use case, but you need that extra 10% of customization? Traditional fine-tuning infrastructure is complex, expensive, and often requires significant DevOps expertise to set up and maintain. This is exactly the gap that Together AI and Hugging Face are bridging today. We're announcing a powerful new capability that makes the entire Hugging Face Hub available for fine-tuning using Together AI's infrastructure. Now, any compatible LLM on the Hub, whether it's from Meta or an individual contributor, can be fine-tuned with

Source: Hugging Face Blog Back to Articles TL;DR Training Data Training Recipe and Novel Components Architecture Three-Phase Training Approach Novel Training Techniques Results Natural Language Understanding (NLU) Retrieval Performance Learning Languages in the Decay Phase Efficiency Improvements Usage Examples Fine-tuning Examples Encoders Model Family and Links TL;DR This blog post introduces mmBERT, a state-of-the-art massively multilingual encoder model trained on 3T+ tokens of text in over 1800 languages. It shows significant performance and speed improvements over previous multilingual models, being the first to improve upon XLM-R, while also developing new strategies for effectively learning low-resource languages. mmBERT builds upon ModernBERT for a blazingly fast architecture, and adds novel components to enable efficient multilingual learning. If you are interested in trying out the models yourself, some example boilerplate is available at the end of this blogpost! Training Data Figure 1: the training data is progressively annealed to include more languages and more uniform sampling throughout training. mmBERT was trained on a carefully curated multilingual dataset totaling over 3T tokens across three distinct training phases. The foundation of our training data consists of three primary open-source and high

Source: Hugging Face Blog Neural Super Sampling (NSS), a next-generation AI-powered upscaling solution from Arm, is released for graphics and gaming developers to start experimenting with today! Elevated by machine learning NSS is designed for real-time performance on future mobile devices with Arm Neural Technology. However, latency depends on implementation factors such as GPU configuration, resolution, and use case. In our Enchanted Castle demo video, NSS reduced GPU workload by 50 percent: the model rendered at 540p and upscaled to 1080p in just 4 ms in a sustained-performance setup. Learn about our NSS Model NSS is a parameter prediction model for real-time temporal super sampling developed by Arm, optimized for execution on Neural Accelerators (NX) in mobile GPUs. It enables high-resolution rendering at a lower compute cost by reconstructing high-quality output frames from low-resolution temporal inputs. NSS is particularly suited for mobile gaming, XR, and other power-constrained graphics use cases. Get started with our NSS model today. If you want to go deeper, check out the following resources: Technical Blog: How Neural Super Sampl

Source: Hugging Face Blog Measuring Open-Source Llama Nemotron Models on DeepResearch Bench

Source: Towards Data Science How reusable, lazy-loaded instructions solve the context bloat problem in AI-assisted development.

Source: Siecle Digital While many companies are multiplying their experiments with artificial intelligence, scaling up remains complex, as a recent MIT study shows. Between abandoned projects piling up and deployments that are genuinely industrialized, the gap is very real. Faced with this, AI vendors are looking for new drivers of […]

Source: OpenAI Blog Microsoft and OpenAI continue to work closely across research, engineering, and product development, building on years of deep collaboration and shared success.

Source: Towards Data Science A practical guide to identifying, restoring, and transforming elements within your images

Source: Towards Data Science Have you ever wondered what happens when you apply a filter in a DAX expression? Well, today I will take you on a deep dive into this fascinating topic, with examples to help you learn something new and surprising.

Source: Siecle Digital Technology is no longer the automatic safe haven it had become. After several years of euphoria, driven by the promise of an artificial intelligence capable of transforming every sector, the markets have recently and abruptly changed their tone. AI, yesterday an undisputed engine of growth, has suddenly come to look like a potential threat to part of the business models of […]

Source: Towards Data Science Understanding the foundational distortion of digital audio from first principles, with worked examples and visual intuition
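The foundational distortion the article refers to is quantization error, whose size is usually summarized by the rule of thumb SNR ≈ 6.02·N + 1.76 dB for an N-bit full-scale sine. A minimal sketch checking that rule numerically (the quantizer and test-tone choices below are my own illustration, not the article's code):

```python
import numpy as np

def quantize(x, bits):
    # Uniform mid-tread quantizer over [-1, 1): round to the nearest
    # of 2**bits evenly spaced levels.
    step = 2.0 / (2 ** bits)
    return np.clip(np.round(x / step) * step, -1.0, 1.0 - step)

def snr_db(signal, quantized):
    # Signal-to-noise ratio of the quantization error, in decibels.
    noise = signal - quantized
    return 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

# Near-full-scale test tone at a frequency coprime with the sample rate,
# so the error decorrelates from the signal.
t = np.arange(48_000) / 48_000
x = 0.99 * np.sin(2 * np.pi * 997 * t)

for bits in (8, 12, 16):
    print(f"{bits}-bit: measured {snr_db(x, quantize(x, bits)):.1f} dB, "
          f"rule of thumb {6.02 * bits + 1.76:.1f} dB")
```

The measured values land within a fraction of a decibel of the 6.02·N + 1.76 prediction, since the quantization error of a busy full-scale signal behaves almost like uniform noise with variance step²/12.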

Source: Towards Data Science Inside the research that shows algorithmic price-fixing isn't a bug in the code. It's a feature of the math.

Source: Hugging Face Blog This blog post covers how to use Unsloth and Hugging Face Jobs for fast LLM fine-tuning (specifically LiquidAI/LFM2.5-1.2B-Instruct) through coding agents like Claude Code and Codex. Unsloth provides ~2x faster training and ~60% less VRAM usage compared to standard methods, so training small models can cost just a few dollars. Why a small model? Small language models like LFM2.5-1.2B-Instruct are ideal candidates for fine-tuning. They are cheap to train, fast to iterate on, and increasingly competitive with much larger models on focused tasks. LFM2.5-1.2B-Instruct runs in under 1 GB of memory and is optimized for on-device deployment, so what you fine-tune can be served on CPUs, phones, and laptops. What you will need: We are giving away free credits to fine-tune models on Hugging Face Jobs. Join the Unsloth Jobs Explorers organization to claim your free credits and one-month Pro subscription. A Hugging Face account (required for HF Jobs). Billing setup (for verification; you can monitor your usage and manage your billing on your billing page). A Hugging Face token with write permissions (optional). A coding agent (Open Code, Claude Code, or Codex). Run the Job If you want to train a model using HF Jobs and Unsloth, you can simply use the hf jobs CLI to submit a job. First, you need to install the hf CLI.

Source: OpenAI Blog Introducing Lockdown Mode and Elevated Risk labels in ChatGPT to help organizations defend against prompt injection and AI-driven data exfiltration.

Source: Hugging Face Blog tl;dr: We built an agent skill that teaches coding agents how to write production CUDA kernels. Then we pointed Claude and Codex at two real targets: a diffusers pipeline and a transformers model. The agents produced working kernels for both, with correct PyTorch bindings and benchmarks, end to end. Writing CUDA kernels is hard. Writing CUDA kernels that correctly integrate with transformers and diffusers is harder. There are architecture-specific memory access patterns, vectorization strategies, warp shuffle reductions, and a dozen integration pitfalls that trip up even experienced developers. It is exactly the kind of specialized, high-stakes problem where agent skills shine. We gave coding agents the domain knowledge they need, like which GPU architecture to target, how to structure a kernel-builder project, when to use shared memory versus registers, and how to write PyTorch bindings. The agents did the rest. If you have used the LLM training skill or read We Got Claude to Teach Open Models, the pattern will feel familiar: package domain expertise into a skill, point the agent at a problem, and let it work. Why a skill for kernels? The Kernel Hub solved the distribution of custom hardware kernels. You can load pre-compiled kernels from the Hub with a single get_kernel call. No builds, no flags. However, someone still

Source: Hugging Face Blog AI agents often perform impressively in controlled research settings, yet struggle when deployed in real-world systems where they must reason across multiple steps, interact with real tools and APIs, operate under partial information, and recover from errors in stateful, permissioned environments—highlighting a persistent gap between research success and production reliability. OpenEnv is an open-source framework from Meta and Hugging Face designed to address this challenge by standardizing how agents interact with real environments. As part of this collaboration, Turing contributed a production-grade calendar management environment to study tool-using agents under realistic constraints such as access control, temporal reasoning, and multi-agent coordination. In this post, we explore how OpenEnv works in practice, why calendars serve as a powerful benchmark for real-world agent evaluation, and what our findings reveal about the current limitations of tool-using agents. What Is OpenEnv? OpenEnv is a framework for evaluating AI agents against real systems rather than simulations. It provides a standardized way to connect agents to real tools and

Source: Hugging Face Blog SyGra 2.0.0 introduces Studio, an interactive environment that turns synthetic data generation into a transparent, visual craft. Instead of juggling YAML files and terminals, you compose flows directly on the canvas, preview datasets before committing, tune prompts with inline variable hints, and watch executions stream live—all from a single pane. Under the hood it's the same platform, so everything you do visually generates the corresponding SyGra-compatible graph config and task executor scripts. What Studio lets you do: Configure and validate models with guided forms (OpenAI, Azure OpenAI, Ollama, Vertex, Bedrock, vLLM, custom endpoints). Connect Hugging Face, file-system, or ServiceNow data sources and preview rows before execution. Configure nodes by selecting models, writing prompts (with auto-suggested variables), and defining outputs or structured schemas. Design downstream outputs using shared state variables and Pydantic-powered mappings.

Source: Hugging Face Blog TL;DR: Benchmark datasets on Hugging Face can now host leaderboards. Models store their own eval scores. Everything links together. The community can submit results via PR. Verified badges prove that the results can be reproduced. Evaluation is broken Let's be real about where we are with evals in 2026. MMLU is saturated above 91%. GSM8K hit 94%+. HumanEval is conquered. Yet some models that ace benchmarks still can't reliably browse the web, write production code, or handle multi-step tasks without hallucinating, based on usage reports. There is a clear gap between benchmark scores and real-world performance. Furthermore, there is another gap within reported benchmark scores: multiple sources report different results. From model cards to papers to evaluation platforms, there is no alignment in reported scores. The result is that the community lacks a single source of truth. What We're Shipping Decentralized and transparent evaluation reporting. We are going to take evaluations on the Hugging Face Hub in a new direction by decentralizing reporting and allowing the entire community to openly report scores for benchmarks. At first, we will start with a shortlist of 4 benchmarks and over time we'll expand to the most relev

Source: OpenAI Blog How OpenAI built an in-house AI data agent that uses GPT-5, Codex, and memory to reason over massive datasets and deliver reliable insights in minutes.

Source: OpenAI Blog PVH Corp., parent company of Calvin Klein and Tommy Hilfiger, is adopting ChatGPT Enterprise to bring AI into fashion design, supply chain, and consumer engagement.

Source: OpenAI Blog A technical deep dive into the Codex agent loop, explaining how Codex CLI orchestrates models, tools, prompts, and performance using the Responses API.

Source: OpenAI Blog Cisco and OpenAI redefine enterprise engineering with Codex, an AI software agent embedded in workflows to speed builds, automate defect fixes, and enable AI-native development.

Source: OpenAI Blog OpenAI is investing in Merge Labs to support new brain computer interfaces that bridge biological and artificial intelligence to maximize human ability, agency, and experience.

Source: OpenAI Blog By rolling out ChatGPT Enterprise company-wide, Zenken has boosted sales performance, cut preparation time, and increased proposal success rates. AI-supported workflows are helping a lean team deliver more personalized, effective customer engagement.

Source: OpenAI Blog OpenAI and Datadog brand graphic with the OpenAI wordmark on the left, the Datadog logo on the right, and a central abstract brown fur-like texture panel on a white background.

Source: OpenAI Blog ChatGPT Health is a dedicated experience that securely connects your health data and apps, with privacy protections and a physician-informed design.

Source: OpenAI Blog OpenAI is strengthening ChatGPT Atlas against prompt injection attacks using automated red teaming trained with reinforcement learning. This proactive discover-and-patch loop helps identify novel exploits early and harden the browser agent’s defenses as AI becomes more agentic.

Source: OpenAI Blog OpenAI is updating its Model Spec with new Under-18 Principles that define how ChatGPT should support teens with safe, age-appropriate guidance grounded in developmental science. The update strengthens guardrails, clarifies expected model behavior in higher-risk situations, and builds on our broader work to improve teen safety across ChatGPT.

Source: OpenAI Blog OpenAI shares new AI literacy resources to help teens and parents use ChatGPT thoughtfully, safely, and with confidence. The guides include expert-vetted tips for responsible use, critical thinking, healthy boundaries, and supporting teens through emotional or sensitive topics.

Source: OpenAI Blog OpenAI and the U.S. Department of Energy have signed a memorandum of understanding to deepen collaboration on AI and advanced computing in support of scientific discovery. The agreement builds on ongoing work with national laboratories and helps establish a framework for applying AI to high-impact research across the DOE ecosystem.

Source: Hugging Face Blog It has become increasingly challenging to assess whether a model's reported improvements reflect genuine advances or variations in evaluation conditions, dataset composition, or training data that mirrors benchmark tasks. The NVIDIA Nemotron approach to openness addresses this by publishing transparent and reproducible evaluation recipes that make results independently verifiable. NVIDIA released Nemotron 3 Nano 30B A3B with an explicitly open evaluation approach to make that distinction clear. Alongside the model card, we are publishing the complete evaluation recipe used to generate the results, built with the NVIDIA NeMo Evaluator library, so anyone can rerun the evaluation pipeline, inspect the artifacts, and analyze the outcomes independently. We believe that open innovation is the foundation of AI progress. This level of transparency matters because most model evaluations omit critical details. Configs, prompts, harness versions, runtime settings, and logs are often missing or underspecified, and even small differences in these parameters can materially change results. Without a complete recipe, it's nearly impossible to tell whether a model is genuinely more intelligent or simply optimized for a benchmark. This blog shows developers exactly how to reproduce the evaluation behind Nemotron 3 Nano 30B A3B usin

Source: OpenAI Blog OpenAI introduces FrontierScience, a benchmark testing AI reasoning in physics, chemistry, and biology to measure progress toward real scientific research.

Source: Hugging Face Blog AI agents are rapidly becoming essential for building intelligent applications, but creating robust, adaptable agents that scale across domains remains a challenge. Many existing frameworks struggle with brittleness, tool misuse, and failures when faced with complex workflows. CUGA (Configurable Generalist Agent) was designed to overcome these limitations. It's an open-source AI agent that combines flexibility, reliability, and ease of use for enterprise use cases. By abstracting orchestration complexity, CUGA empowers developers to focus on domain requirements rather than the internals of agent building. And now, with its integration into Hugging Face Spaces, experimenting with CUGA and open models has never been easier. What is CUGA? CUGA is a configurable, general-purpose AI agent that supports complex, multi-step tasks across web and API environments. It has achieved state-of-the-art performance on leading benchmarks: #1 on AppWorld, a benchmark with 750 real-world tasks across 457 APIs; top-tier on WebArena (#1 from 02/25 to 09/25), which showcases CUGA's computer-use capabilities with a compl

Source: OpenAI Blog BBVA is expanding its work with OpenAI through a multi-year AI transformation program, rolling out ChatGPT Enterprise to all 120,000 employees. Together, the companies will develop AI solutions that enhance customer interactions, streamline operations, and help build an AI-native banking experience.

Source: OpenAI Blog GPT-5.2 is OpenAI’s strongest model yet for math and science, setting new state-of-the-art results on benchmarks like GPQA Diamond and FrontierMath. This post shows how those gains translate into real research progress, including solving an open theoretical problem and generating reliable mathematical proofs.

Source: OpenAI Blog Disney and OpenAI have reached an agreement to bring more than 200 Disney, Marvel, Pixar and Star Wars characters to Sora for fan-inspired short videos. The agreement emphasizes responsible AI in entertainment and includes Disney’s company-wide use of ChatGPT Enterprise and the OpenAI API.

Source: OpenAI Blog GPT-5.2 is the latest model family in the GPT-5 series. The comprehensive safety mitigation approach for these models is largely the same as that described in the GPT-5 System Card and GPT-5.1 System Card. Like OpenAI’s other models, the GPT-5.2 models were trained on diverse datasets, including information that is publicly available on the internet, information that we partner with third parties to access, and information that our users or human trainers and researchers provide or generate.

Source: OpenAI Blog Denise Dresser is joining as Chief Revenue Officer, overseeing OpenAI’s global revenue strategy across enterprise and customer success. She will help more businesses put AI to work in their day-to-day operations as OpenAI continues to scale.

Source: OpenAI Blog Commonwealth Bank of Australia partners with OpenAI to roll out ChatGPT Enterprise to 50,000 employees, building AI fluency at scale to improve customer service and fraud response.

Source: OpenAI Blog OpenAI and Instacart are deepening their longstanding partnership by bringing the first fully integrated grocery shopping and Instant Checkout payment app to ChatGPT.

Source: Hugging Face Blog We Got Claude to Fine-Tune an Open Source LLM

Source: OpenAI Blog OpenAI is acquiring Neptune to deepen visibility into model behavior and strengthen the tools researchers use to track experiments and monitor training.

Source: OpenAI Blog Mirakl is redefining commerce through AI agents and ChatGPT Enterprise—achieving faster documentation, smarter customer support, and building toward agent-native commerce with Mirakl Nexus.

Source: OpenAI Blog Accenture and OpenAI are collaborating to help enterprises bring agentic AI capabilities into the core of their business and unlock new levels of growth.

Source: OpenAI Blog OpenAI shares details about a Mixpanel security incident involving limited API analytics data. No API content, credentials, or payment details were exposed. Learn what happened and how we’re protecting users.

Source: Hugging Face Blog LLMs have become essential tools for building software. But for Apple developers, integrating them remains unnecessarily painful. Developers building AI-powered apps typically take a hybrid approach, adopting some combination of: local models using Core ML or MLX for privacy and offline capability; cloud providers like OpenAI or Anthropic for frontier capabilities; Apple's Foundation Models as a system-level fallback. Each comes with different APIs, different requirements, different integration patterns. It's a lot, and it adds up quickly. When I interviewed developers about building AI-powered apps, friction with model integration came up immediately. One developer put it bluntly: "I thought I'd quickly use the demo for a test and maybe a quick and dirty build but instead wasted so much time. Drove me nuts." The cost to experiment is high, which discourages developers from discovering that local, open-source models might actually work great for their use case. Today we're announcing AnyLanguageModel, a Swift package that provides a drop-in replacement for Apple's Foundation Models framework with support for multiple model providers. Our goal is to reduc

Source: Hugging Face Blog In this blog post, we introduce the idea of a 'voice consent gate' to support voice cloning with consent. We provide an example Space and accompanying code to start the ball rolling on the idea. Realistic voice generation technology has gotten uncannily good in the past few years. In some situations, it's possible to generate a synthetic voice that sounds almost exactly like the voice of a real person. And today, what once felt like science fiction is reality: voice cloning. With just a few seconds of recorded speech, anyone's voice can be made to say almost anything. Voice generation, and in particular the subtask of voice cloning, has notable risks and benefits. The risks of "deepfakes", such as the cloned voice of former President Biden used in robocalls, can mislead people into thinking that people have said things that they haven't said. On the other hand, voice cloning can be a powerful beneficial tool, helping people who've lost the ability to speak communicate in their own voice again, or assisting people in learning new languages and dialects. So how do we create meaningful use without malicious use? We're exploring one possible answer: a voice consent gate. That's a system where a voice can be cloned only when the
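The post describes the gate conceptually rather than prescribing code, but the core gating logic can be sketched. Everything below (the class, method names, the consent registry) is hypothetical illustration of the idea, not the authors' implementation:

```python
import hashlib

class VoiceConsentGate:
    """Illustrative consent registry: cloning is only permitted for
    speakers who have an explicit consent record on file."""

    def __init__(self):
        self._consents = {}  # speaker_id -> hash of the consent statement

    def register_consent(self, speaker_id, statement):
        # In a real system the statement would be spoken aloud by the
        # voice owner and verified (e.g. via speech recognition and
        # speaker verification) before being recorded; here we simply
        # hash and store it.
        digest = hashlib.sha256(statement.encode("utf-8")).hexdigest()
        self._consents[speaker_id] = digest

    def may_clone(self, speaker_id):
        # The gate itself: no consent record, no cloning.
        return speaker_id in self._consents

gate = VoiceConsentGate()
print(gate.may_clone("alice"))   # no consent recorded yet
gate.register_consent("alice", "I consent to my voice being cloned.")
print(gate.may_clone("alice"))
```

The design point is that consent becomes system infrastructure: the cloning pipeline refuses to run unless the check passes, rather than relying on policy text alone.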

Source: Hugging Face Blog TL;DR: We boosted load_dataset('dataset', streaming=True), which streams datasets without downloading them, with one line of code! Start training on multi-TB datasets immediately, without complex setups, downloading, "disk out of space" failures, or 429 "stop requesting!" errors. It's super fast, outrunning our local SSDs when training on 64xH100 with 256 workers downloading data. We've improved streaming to make 100x fewer requests, resolve data 10x faster, double samples/sec, and run with 0 worker crashes at 256 concurrent workers. Loading data, especially at the terabyte scale, is a major pain in any machine learning workflow. We suffered this while training SmolLM3: at one point we had to wait 3 hours before each run to download enough data. Streaming has always been possible in the datasets library, but large-scale training with massive datasets remained a challenge. That changes today. We spent a few months improving the backend, focusing on streaming datasets to make it faster and more efficient. What did we do exactly? Streaming: The Same Easy API First things first: our

Source: Hugging Face Blog Today, we are announcing that Sentence Transformers is transitioning from Iryna Gurevych's Ubiquitous Knowledge Processing (UKP) Lab at the TU Darmstadt to Hugging Face. Hugging Face's Tom Aarsen has already been maintaining the library since late 2023 and will continue to lead the project. At its new home, Sentence Transformers will benefit from Hugging Face's robust infrastructure, including continuous integration and testing, ensuring that it stays up-to-date with the latest advancements in Information Retrieval and Natural Language Processing. Sentence Transformers (a.k.a. SentenceBERT or SBERT) is a popular open-source library for generating high-quality embeddings that capture semantic meaning. Since its inception by Nils Reimers in 2019, Sentence Transformers has been widely adopted by researchers and practitioners for various natural language processing (NLP) tasks, including semantic search, semantic textual similarity, clustering, and paraphrase mining. After years of development and training by and for the community, over 16,000 Sentence Transformers models are publicly available on the Hugging Face Hub, serving more than a million monthly unique users. "Sentence Transformers has been a huge success story and a culmination of our long-standing research on computing semantic similarities
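Tasks like semantic search and semantic textual similarity reduce to vector math over the embeddings such models produce. A minimal sketch of that final step, with toy vectors standing in for real model output (a model like all-MiniLM-L6-v2 would produce 384-dimensional vectors via model.encode, which needs the library and a model download):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors: 1.0 means
    # identical direction, near 0.0 means unrelated.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings" for illustration only.
query = np.array([0.1, 0.9, 0.2, 0.0])
docs = {
    "close": np.array([0.2, 0.8, 0.1, 0.1]),
    "far":   np.array([0.9, 0.0, 0.1, 0.8]),
}

# Semantic search in miniature: rank documents by similarity to the query.
ranked = sorted(docs, key=lambda k: cosine_similarity(query, docs[k]),
                reverse=True)
print(ranked)
```

The same ranking step powers retrieval over the 16,000+ community models mentioned above; only the encoder producing the vectors changes.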

Source: Hugging Face Blog Update: We have added Chandra and OlmOCR-2 to this blog, as well as OlmOCR scores for the models. TL;DR: The rise of powerful vision-language models has transformed document AI. Each model comes with unique strengths, making it tricky to choose the right one. Open-weight models offer better cost efficiency and privacy. To help you get started with them, we've put together this guide. In this guide, you'll learn: the landscape of current models and their capabilities; when to fine-tune models vs. use models out-of-the-box; key factors to consider when selecting a model for your use case; how to move beyond OCR with multimodal retrieval and document QA. By the end, you'll know how to choose the right OCR model, start building with it, and gain deeper insights into document AI. Let's go!

Source: Hugging Face Blog Appendix datasets covered include: SDAP 2.0 (Structural Database of Allergenic Proteins), DAVIS (kinase inhibitor binding affinities), QsarDB ((Q)SAR model repository), the e-Drug3D Database, Stanford Offsides/Twosides drug data, DrugCentral (open drug information repository), MedKG (medical knowledge graph), PDBBind+ (protein-ligand binding database), the Human Metabolome Database (HMDB), the Therapeutic Target Database, Therapeutic Data Commons (TDC), STITCH (chemical–protein interaction database), the M3-20M multi-modal molecule dataset, SAIR (Structurally Augmented IC Repository), AllerBase, the AlgPred 2.0 dataset, AllerCatPro 2.0, AllergenAI, NetAllergen, and QM9 (molecular property prediction for quantum chemistry). Let's get straight to the point: worldwide, an estimated 220 million people suffer from at least one food allergy, and in the United States alone, this accounts for roughly 10% of the population. This means that if you don't have an allergy, you'll likely know someone who does — and it's not a pleasant situation to be in. This condition affects no

Source: Hugging Face Blog A compound AI approach to Indian personas grounded in real-world distributions. Open Data for India's AI Future India represents one of the world's largest AI opportunities — with over 700 million internet users, a multitude of languages, and a rapidly growing developer ecosystem. Yet most open datasets reflect Western norms and English-only contexts, creating a data gap that limits AI adoption in India's multilingual, multi-script environment. Today, we're releasing Nemotron-Personas-India, the first open synthetic dataset of Indic personas aligned to India's real-world demographic, geographic, and cultural distributions. Licensed under CC BY 4.0, this dataset offers a privacy-preserving, regulation-ready foundation for scaling AI systems that reflect Indian society—without relying on sensitive personal data. Built with NeMo Data Designer, NVIDIA's enterprise-grade synthetic data generation microservice, Nemotron-Personas-India extends our global collection of Sovereign AI datasets. It builds on the success of our US and Japan persona datasets and includes

Source: Hugging Face Blog In an ideal world, AI agents would be reliable assistants. When given a query, they would easily manage ambiguity in instructions, construct step-by-step plans, correctly identify necessary resources, execute those plans without getting sidetracked, and adapt to unexpected events, all while maintaining accuracy and avoiding hallucinations. However, developing agents and testing these behaviors is no small feat: if you have ever tried to debug your own agent, you've probably observed how tedious and frustrating this can be. Existing evaluation environments are tightly coupled with the tasks they evaluate, lack real-world flexibility, and do not reflect the messy reality of open-world agents: simulated pages never fail to load, events don't spontaneously emerge, and asynchronous chaos is absent. That's why we're very happy to introduce Gaia2, the follow-up to the agentic benchmark GAIA, allowing analysis of considerably more complex be

Source: Hugging Face Blog Authors: Dhruv Nathawani, Shuoyang Ding, Vitaly Lavrukhin, Jane Polak Scowcroft, Oleksii Kuchaiev. NVIDIA continues releasing permissive datasets in support of the open ecosystem with the 6 Million Multilingual Reasoning Dataset. Continuing the success of the recent Nemotron Post-Training Dataset v1 release used in the Llama Nemotron Super model, and our Llama Nemotron Post-Training Dataset release earlier this year, we're excited to release the reasoning dataset translated into five target languages: French, Spanish, German, Italian, and Japanese. The newly released NVIDIA Nemotron Nano 2 9B brings these capabilities to the edge with leading accuracy and efficiency, using a hybrid Transformer–Mamba architecture and a configurable thinking budget, so you can dial accuracy, throughput, and cost to match your real-world needs. Model Highlights (TL;DR): Model size: 9B parameters. Architecture: hybrid Transformer–Mamba (Mamba-2 plus a small number of attention layers) for higher throughput at accuracy similar to Transformer-only peers. Throughput: up to 6x higher token generation than other leading models in its size class. Cost: the thinking budget lets you control how many "thinking" tokens are used, cutting reasoning costs by up to 60%. Target: agents for customer service, support chatbots, analytics copilots, and edge/RTX dep


OpenClaw

Source: Gladys Assistant (Forum) Hi everyone! You've surely heard of OpenClaw, the AI framework that exploded on GitHub a few weeks ago and has just been acquired by OpenAI. I tried connecting OpenClaw to Gladys to test the possibilities, and it's really impressive. I go into more detail in the video: "J'ai laissé l'IA OpenClaw contrôler ma Maison (C'est fou)" ("I let the OpenClaw AI control my home (it's crazy)"). Note: I advise against installing OpenClaw on your Gladys server; it's still early-stage software that touches a bit of everything and has been widely criticized for its security flaws. For this test, I deployed OpenClaw on a cloud VM to keep it in an isolated environment. 7 messages - 3 participants. Read the full topic.


Cybersecurity / Cybersécurité

Source: SecurityWeek CISA has released an advisory to warn about four vulnerabilities discovered by a researcher in Gardyn Home and Gardyn Studio.

Source: The Hacker News Cybersecurity researchers have disclosed multiple security vulnerabilities in Anthropic's Claude Code, an artificial intelligence (AI)-powered coding assistant, that could result in remote code execution and theft of API credentials. "The vulnerabilities exploit various configuration mechanisms, including Hooks, Model Context Protocol (MCP) servers, and environment variables – executing

Source: The Hacker News Cybersecurity researchers have discovered four malicious NuGet packages that are designed to target ASP.NET web application developers to steal sensitive data. The campaign, discovered by Socket, exfiltrates ASP.NET Identity data, including user accounts, role assignments, and permission mappings, as well as manipulates authorization rules to create persistent backdoors in victim applications.

Source: The Hacker News Cybersecurity researchers have disclosed details of a new ClickFix campaign that abuses compromised legitimate sites to deliver a previously undocumented remote access trojan (RAT) called MIMICRAT (aka AstarionRAT). "The campaign demonstrates a high level of operational sophistication: compromised sites spanning multiple industries and geographies serve as delivery infrastructure, a multi-stage

Source: The Hacker News Cybersecurity researchers have discovered what they say is the first Android malware that abuses Gemini, Google's generative artificial intelligence (AI) chatbot, as part of its execution flow and achieves persistence. The malware has been codenamed PromptSpy by ESET. The malware is equipped to capture lockscreen data, block uninstallation efforts, gather device information, take screenshots,

Source: Bleeping Computer The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has released new details about RESURGE, a malicious implant used in zero-day attacks exploiting CVE-2025-0282 to breach Ivanti Connect Secure devices. [...]

Source: The Hacker News Cybersecurity researchers have disclosed details of a malicious Go module that's designed to harvest passwords, create persistent access via SSH, and deliver a Linux backdoor named Rekoobe. The Go module, github[.]com/xinfeisoft/crypto, impersonates the legitimate "golang.org/x/crypto" codebase, but injects malicious code that's responsible for exfiltrating secrets entered via terminal password

Source: The Hacker News Cybersecurity researchers have disclosed details of a new botnet loader called Aeternum C2 that uses a blockchain-based command-and-control (C2) infrastructure to make it resilient to takedown efforts. "Instead of relying on traditional servers or domains for command-and-control, Aeternum stores its instructions on the public Polygon blockchain," Qrator Labs said in a report shared with The

Source: The Hacker News Cybersecurity researchers have disclosed details of a new malicious package discovered on the NuGet Gallery, impersonating a library from financial services firm Stripe in an attempt to target the financial sector. The package, codenamed StripeApi.Net, attempts to masquerade as Stripe.net, a legitimate library from Stripe that has over 75 million downloads. It was uploaded by a user named

Source: The Hacker News The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Tuesday added a recently disclosed vulnerability in FileZen to its Known Exploited Vulnerabilities (KEV) catalog, citing evidence of active exploitation. The vulnerability, tracked as CVE-2026-25108 (CVSS v4 score: 8.7), is a case of operating system (OS) command injection that could allow an authenticated user to execute

Source: The Hacker News Cybersecurity researchers have disclosed details of a new cryptojacking campaign that uses pirated software bundles as lures to deploy a bespoke XMRig miner program on compromised hosts. "Analysis of the recovered dropper, persistence triggers, and mining payload reveals a sophisticated, multi-stage infection prioritizing maximum cryptocurrency mining hashrate, often destabilizing the victim

Source: The Hacker News Security news rarely moves in a straight line. This week, it feels more like a series of sharp turns, some happening quietly in the background, others playing out in public view. The details are different, but the pressure points are familiar. Across devices, cloud services, research labs, and even everyday apps, the line between normal behavior and hidden risk keeps getting thinner. Tools

Source: The Hacker News Cybersecurity researchers have disclosed what they say is an active "Shai-Hulud-like" supply chain worm campaign that has leveraged a cluster of at least 19 malicious npm packages to enable credential harvesting and cryptocurrency key theft. The campaign has been codenamed SANDWORM_MODE by supply chain security company Socket. As with prior Shai-Hulud attack waves, the malicious code embedded

Source: The Hacker News Artificial intelligence (AI) company Anthropic has begun to roll out a new security feature for Claude Code that can scan a user's software codebase for vulnerabilities and suggest patches. The capability, called Claude Code Security, is currently available in a limited research preview to Enterprise and Team customers. "It scans codebases for security vulnerabilities and suggests targeted

Source: PortSwigger Research Welcome to the Top 10 Web Hacking Techniques of 2025, the 19th edition of our annual community-powered effort to identify the most innovative must-read web security research published in the last year

Source: PortSwigger Research Nominations are now open for the top 10 new web hacking techniques of 2024! Every year, security researchers from all over the world share their latest findings via blog posts, presentations, PoCs, an

Source: Krebs on Security In early January 2026, KrebsOnSecurity revealed how a security researcher disclosed a vulnerability that was used to assemble Kimwolf, the world's largest and most disruptive botnet. Since then, the person in control of Kimwolf -- who goes by the handle "Dort" -- has coordinated a barrage of distributed denial-of-service (DDoS), doxing and email flooding attacks against the researcher and this author, and more recently caused a SWAT team to be sent to the researcher's home. This post examines what is knowable about Dort based on public information.

Source: The Hacker News New research has found that Google Cloud API keys, typically designated as project identifiers for billing purposes, could be abused to authenticate to sensitive Gemini endpoints and access private data. The findings come from Truffle Security, which discovered nearly 3,000 Google API keys (identified by the prefix "AIza") embedded in client-side code to provide Google-related services like

Source: CERT-FR A vulnerability has been discovered in Stormshield Network Security. It allows an attacker to bypass the security policy.

Source: The Hacker News A "coordinated developer-targeting campaign" is using malicious repositories disguised as legitimate Next.js projects and technical assessments to trick victims into executing them and establish persistent access to compromised machines. "The activity aligns with a broader cluster of threats that use job-themed lures to blend into routine developer workflows and increase the likelihood of code

Source: The Hacker News SolarWinds has released updates to address four critical security flaws in its Serv-U file transfer software that, if successfully exploited, could result in remote code execution. The vulnerabilities, all rated 9.1 on the CVSS scoring system, are listed below - CVE-2025-40538 - A broken access control vulnerability that allows an attacker to create a system admin user and execute arbitrary

Source: The Hacker News The Iranian hacking group known as MuddyWater (aka Earth Vetala, Mango Sandstorm, and MUDDYCOAST) has targeted several organizations and individuals mainly located across the Middle East and North Africa (MENA) region as part of a new campaign codenamed Operation Olalampo. The activity, first observed on January 26, 2026, has resulted in the deployment of new malware families that share

Source: The Hacker News With $5.5 trillion in global AI risk exposure and 700,000 U.S. workers needing reskilling, four new AI certifications and Certified CISO v4 help close the gap between AI adoption and workforce readiness. EC-Council, creator of the world-renowned Certified Ethical Hacker (CEH) credential and a global leader in applied cybersecurity education, today launched its Enterprise AI Credential Suite,

Source: Krebs on Security Microsoft today released updates to fix more than 50 security holes in its Windows operating systems and other software, including patches for a whopping six "zero-day" vulnerabilities that attackers are already exploiting in the wild.

Source: Krebs on Security A new Internet-of-Things botnet called Kimwolf has spread to more than 2 million devices, forcing infected systems to participate in massive distributed denial-of-service (DDoS) attacks and to relay other malicious and abusive Internet traffic. Kimwolf's ability to scan the local networks of compromised systems for other IoT devices to infect makes it a sobering threat to organizations, and new research reveals Kimwolf is surprisingly prevalent in government and corporate networks.

Source: Krebs on Security Our first story of 2026 revealed how a destructive new botnet called Kimwolf rapidly grew to infect more than two million devices by mass-compromising a vast number of unofficial Android TV streaming boxes. Today, we'll dig through digital clues left behind by the hackers, network operators, and cybercrime services that appear to have benefitted from Kimwolf's spread.

Source: PortSwigger Research Manual testing doesn't have to be repetitive. In this post, we're introducing Repeater Strike - a new AI-powered Burp Suite extension designed to automate the hunt for IDOR and similar vulnerabilities

Source: PortSwigger Research Tired of repeating yourself? Automate your web security audit trail. In this post I'll introduce a new Burp AI extension that takes the boring bits out of your pen test. Web security testing can be a

Source: PortSwigger Research Have you ever wondered how many vulnerabilities you've missed by a hair's breadth, due to a single flawed choice? We've just released Shadow Repeater, which enhances your manual testing with AI-powere

Source: PortSwigger Research HTTP cookies often control critical website features, but their long and convoluted history exposes them to parser discrepancy vulnerabilities. In this post, I'll explore some dangerous, lesser-known

Source: PortSwigger Research The strength of our URL Validation Bypass Cheat Sheet lies in the contributions from the web security community, and today’s update is no exception. We are excited to introduce a new and improved IP a

Source: PortSwigger Research We're delighted to announce three major research releases from PortSwigger Research will be published at both Black Hat USA and DEF CON 32. In this post, we'll offer a quick teaser of each talk, info

Source: PortSwigger Research Security research involves a lot of failure. It's a perpetual balancing act between taking small steps with a predictable but boring outcome, and trying out wild concepts that are so crazy they might

Source: PortSwigger Research In this post, I'll share my approach to developing custom automation to aid research into under-appreciated attack classes and (hopefully) push the boundaries of web security. As a worked example, I'l


Local News (IP-based) / Informations locales (basées IP)

Source: Le Dauphiné Isère Sud On Sunday, October 28, 2012, heavy snowfall caught everyone off guard. Such an amount falling on Grenoble in October was "unheard of for at least 70 years," according to the meteorologists at MeteoNews. With a thick layer of powder on the roads, chaos quickly spread through the city center of the capital of the Alps. A look back in pictures.

Source: Le Dauphiné Isère Sud Last weekend should have seen the round of 32 of the Coupe de France football tournament. But Covid-19 got in the way... So, just for fun, we invite you to relive, in pictures, one of the greatest upsets in the competition's history. It was six years ago to the day. In a white-hot Stade des Alpes, the GF38 of Bengriba and Cattier knocked out the OM of Bielsa and Thauvin, then leaders of Ligue 1. Unforgettable.

Source: Le Dauphiné Isère Sud For the first time in their history, FCG and VRDR met on a rugby pitch in October 2019 at the Stade des Alpes. This first-ever "derby du Dauphiné" turned heavily in favor of the Isère side (49-8). A look back in pictures at the evening in the Grenoble stadium.

Source: Le Dauphiné Isère To ensure the most comprehensive coverage possible of the municipal elections, the editorial team invites candidates or upcoming lists who have not yet come forward to contact us by email at LDLcentregre@ledauphine.

Source: Le Dauphiné Isère Nearly a hundred people gathered late Saturday afternoon in front of the BHV to protest the opening, on Wednesday, February 25, of the ultra-fast-fashion chain in the city center.

Source: Le Dauphiné Isère It is the long-announced conclusion of a saga that began in 2018. The gigantic Amazon warehouse located a stone's throw from Saint-Exupéry airport will open its doors in June. Eventually, 3,000 people will work there daily, processing the several thousand parcels destined mainly for the Auvergne-Rhône-Alpes region.

Source: Le Dauphiné Isère Sud Earthquakes of magnitude 5 or higher on the Richter scale are unusual in France. In the greater south-east, that threshold has been exceeded only five times over the past 60 years: in the Ardèche, in Isère, in the Alpes-de-Haute-Provence and the Hautes-Alpes, and in Haute-Savoie. Here are the most powerful earthquakes recorded in our departments.

Source: Le Dauphiné Isère Sud FC Grenoble Rugby hosts its "neighbor" Valence Romans Drôme Rugby this Friday, November 10 at 9 p.m. for the 10th round of Pro D2. This is only the fourth meeting between the two teams, as VRDR was founded in 2016. From the 1950s through the 1990s, however, matches between the Isère side and the Drôme teams of Valence and Romans were plentiful. A look back in pictures at these derbies, old and new.

Source: Le Dauphiné Isère Sud On the evening of November 13, 2015, nine men carried out a series of attacks near the Stade de France in Saint-Denis, on restaurant terraces, and in the Bataclan concert hall in Paris, killing 130 people and injuring more than 350. After the horror, in the days that followed, gatherings were organized in Grenoble, Valence and Annecy. Other towns, such as Gap, Jarrie and Gilly-sur-Isère, mourned loved ones killed in the attacks. A look back in pictures at these moments of remembrance.

Source: Le Dauphiné Isère Sud As the Foire des Rameaux returns to Grenoble this weekend, we take one last dive into our archives. We go back to the 1980s, with a fairground worker threatening to set himself on fire and Alain Carignon riding the slide. A look back in pictures.


French Government / Gouvernement & Services Publics

Source: ANSSI (CERT-FR) Multiple vulnerabilities have been discovered in Microsoft products. Some of them allow an attacker to cause a remote denial of service, a breach of data confidentiality, and a bypass of the security policy.

Source: ANSSI (CERT-FR) This CERT-FR news bulletin reviews the significant vulnerabilities of the past week to highlight their criticality. It does not replace an analysis of all the advisories and alerts published by CERT-FR as part of a risk analysis to prioritize the application of...


France News / Journaux France

Source: Franceinfo "I ask you to ensure that the internal security forces placed under your authority are put on alert," the minister writes in a telegram sent to prefects on Saturday and seen by France Télévisions. Published 28/02/2026 18:13, updated 28/02/2026 18:57. Reading time: 1 min. The Minister of the Interior, Laurent Nuñez, during his visit to the Paris Agricultural Show, February 26, 2026. (DANIEL PERRON / HANS LUCAS / AFP) A message to "detect any action likely to disturb public order." On Saturday, February 28, the Minister of the Interior, Laurent Nuñez, called for France's internal security forces to be placed "on alert" "in the context of international tensions in Iran and the Middle East." "I ask you to be particularly attentive, and where appropriate to report immediately to the intelligence services


Weather / Météo

Source: Météo-Paris The blizzard of late December 1970 caused monumental chaos on the Autoroute du Soleil between Valence and Montélimar: a snowstorm on the scale of the one New York has just experienced. archives meteo-paris.com (colorized photo) New York has just endured its heaviest snowstorm in decades, with 50 cm at Central Park! Could such a blizzard happen in France? Our article explains. Half a meter of snow in New York! New York is no stranger to snowstorms. Even so, the one that struck on Monday, February 23, 2026 was unusually intense. Snow accumulations reached 50 centimeters at Central Park in the heart of the city and 58 cm at the La Guardia airport station! A snowstorm of such intensity is rare, occurring on average only once every 25 years in New York! Long Island was hit even harder, with up to 74 cm measured at MacArthur Airport, a total unseen since 1963! 50 cm of snow blanketed New York (USA) on Monday, February 23, 2026 - photo Pictures of New York Even so, it was not the biggest storm New York has seen. The biggest occurred on January 22-23, 2016, only 10 years ago. Back then, 70 centimeters of snow fell at Central Park, a total never measured since weather records began in 1869! Local life slowed dramatically and cars were buried under that unprecedented blanket of snow. 70 cm of snow in New York after the January 2016 storm, a record! - photo Jackson Krule Are such snowstorms possible in France? Snowstorms on the scale of the one that just hit the north-eastern USA are not really possible in France, for geographical reasons.
Indeed, when polar air sweeps down over the north-eastern United States, it reaches the Atlantic Ocean without having crossed any sea, so it is still particularly cold. This icy air creates a sharp thermal contrast with the mild Atlantic waters, which can trigger explosive cyclogenesis over the ocean, with depressions deepening very quickly and generating intense snowstorms on the American east coast, as happened on Monday, February 23, 2026. Satellite image of the eastern USA and the Atlantic on Monday, February 23, 2026 - NASA In France, the situation is different, because the country is surrounded by several seas and by the Atlantic. It is therefore impossible for a polar air mass to reach France intact. The cold-air outbreaks that hit mainland France are far less intense than those that strike North America, so the contrast with the mild ocean waters is correspondingly reduced, producing less intense depressions. Even so, low-pressure systems can still deepen where mild and cold air meet, generating strong snowstorms and blizzards, as happened in Normandy in March 2013. One can also mention the blizzard of early January 1979, between the Beauce and Paris, and that of February 1986, which hit practically the same regions. During the night of February 25-26, 1958, a violent blizzard struck the Nord-Pas-de-Calais, forming snowdrifts more than two meters high. In south-eastern France, blizzard episodes generally result from the combination of a deep Mediterranean depression and a polar air mass. The situation was particularly chaotic in February 1954 around Perpignan, in February 1956 in Provence, and at the end of December 1970 in the middle Rhône valley.
This list is of course not exhaustive: many other episodes have also produced snowdrifts several meters high in our lowland regions. Two-meter snowdrifts at Gonneville in the Cotentin on March 13, 2013 - via infoclimat.fr In summary: the snowstorms that hit France stem from depressions that are generally shallower and smaller than those in the United States, because the thermal conflicts between polar air and ocean air are less pronounced. As well as being generally weaker, they are also far less frequent than on the other side of the Atlantic, where they occur every winter. Even so, the clashes of air masses over Europe can still produce major snow episodes. Author: Alexandre Slowik

Source: Météo-Paris Impressive snowdrifts in the streets of New York in March 1888 - archives New York has just lived through an impressive snowstorm. Yet the city has seen even more impressive blizzards in the past. Our article looks back at the four most memorable storms in New York history. #4 - Storm of March 1888: 53 cm In terms of direct consequences, this was the most devastating snowstorm in New York's history! And yet it ranks only 4th by the amount of snow that fell, with 53 cm over the city. It was marked, however, by very violent winds that formed snowdrifts sometimes reaching 1 to 2 meters in the heart of the city! At the time there were no weather forecasts, and no one was prepared. The city was paralyzed and power lines were damaged. The storm caused more than 400 deaths, including around 200 in New York City. The streets of New York after the remarkable snowstorm of March 1888 - archives #3 - Storm of December 1947: 67 cm The snowstorm that struck the day after Christmas 1947 remains to this day on the podium of the strongest ever to hit New York. On December 26, 1947, the city was buried under 67 cm of snow and completely paralyzed. Worse, the blizzard had been badly anticipated by the weather services of the time, which had not warned the population. The storm claimed 77 lives in the north-eastern USA. 67 cm of snow in the streets of New York after the snowstorm, December 27, 1947 - photo Al Fenn #2 - Storm of February 2006: 68 cm Among the most memorable snowstorms in New York, the one of February 11-12, 2006 left its mark.
At the time, a depression deepened to around 970 hPa off the American coast, and particularly heavy snowfall hit the north-eastern states. New York was especially affected, receiving 68 centimeters of snow, which at the time set an absolute record! New York City estimated that the massive snow-clearing operations cost the city around $27 million. New York under the blizzard during the storm of February 12, 2006, which brought 68 cm! - photo Wikipedia #1 - Storm of January 2016: 70 cm Remarkable as the storms above were, climatologically speaking they did not bring the largest snow totals. New York's biggest snowstorm is recent, occurring on January 22-23, 2016, only 10 years ago. Back then, 70 centimeters of snow fell at Central Park, a total never measured since weather records began in 1869! Local life slowed dramatically and cars were buried under that unprecedented blanket of snow. 70 cm of snow in New York after the January 2016 storm, a record! - photo Jackson Krule Note: The figures cited in this article come from snow-depth measurements at Central Park, the reference point for New York City; American meteorologists and climatologists rank New York blizzards on that basis. Author: Alexandre Slowik

Source: Météo-Paris The cold wave opens with 2 m of snow in the Midi! On January 30, 1986, an exceptionally violent snowstorm struck Languedoc-Roussillon, the Ariège and the southern Massif Central. In barely a day and a half, the totals shattered every record: nearly two meters of snow at Loubaresse, in the Ardèche, 1.70 meters at Réal, in the Pyrénées-Orientales, and up to 50 centimeters at Carcassonne. Roads disappeared, villages were cut off from the world. The army was called in. The situation quickly became critical: a million people were plunged into darkness. The ORSEC emergency plan was triggered in the Ardèche. In some villages of the Massif Central, power did not return for three weeks, sparking a fierce controversy over the handling of the crisis. The Ardèche in the extraordinary snowstorm of January 30, 1986 - archives meteo-paris.com Several days of blizzard between Brittany and the Beauce This storm heralded an extraordinary February. After the already memorable winter of 1984-1985, the cold settled in for good. Over the northern half of the country, February 1986 became the coldest month since 1956. The cold wave began on February 5 and did not let up until the 28th. An exceptional duration, with dramatic consequences: according to climatologist Daniel Rousseau, nearly 13,000 excess deaths were recorded compared with a "normal" winter. The regions on the edge of the icy air paid a heavy price. Brittany, the Pays de la Loire, the Centre, Burgundy and Rhône-Alpes were regularly swept by snow and true blizzards. In the Loiret, the ORSEC plan was triggered once again: the main roads were paralyzed, as in the winter of 1979. On the Atlantic coast, the landscapes turned surreal: 30 centimeters of snow at Pornic, 16 at Lorient. At La Baule and Quiberon, skiers glided along the beaches.
Mid-February 1986: farmers come to the rescue of motorists trapped by snow in the Beauce - photo meteo-paris.com Snow even on the Corsican coast! Further south, the cold was briefer, from February 8 to 14, but intense enough to cover all of Corsica with snow, including Ajaccio, as well as the Côte d'Azur. In Nice, the carnival was canceled. February 28 marked the end of this icy episode, but it ended brutally. Heavy snowfall hit the entire northern half of the country, depositing 20 centimeters on the Paris region. In Brittany, freezing rain turned roads and sidewalks into deadly traps. In Lorient, the hospital took in 75 injured in just eight hours. Three quarters of the schools closed their doors. The winter of 1986 left behind a country worn down, frozen by the cold and durably marked in memory. Snow in Ajaccio in early February 1986 - photo meteo-paris.com Author: Guillaume Séchet

Source: Météo-Paris The Château de Versailles under snow in a polar atmosphere, February 15, 2010 - Archives Météo-Villes An exceptionally snowy January! After a rather mild autumn 2009, winter made a noted return to France in December. A first cold wave spread across the country from December 13, with icy air from Russia invading the whole of France and snow falling on many regions, even reaching the Nice area on December 18. This first cold wave ended almost on Christmas Day, but the mild spell that closed out the year proved very short-lived. After that brief respite, the cold returned in force from December 31, 2009. Early January 2010 was thus very wintry over almost all of France, with the first snowfalls on January 1 between the Channel coast and the Île-de-France (10 cm in Seine-Maritime). On January 3 and 4, the snow became more widespread over the south, falling from Poitou to Rhône-Alpes via the Limousin and the Auvergne: 20 cm were measured in Grenoble, 13 cm in Lyon and 8 cm in Clermont-Ferrand. 8 cm in Lyon, January 4, 2010 - Archives Météo-Villes On January 6, snow fell in showers over Normandy, with 20 cm measured at Honfleur. The next day, 30 cm were measured at Alençon, and 20 cm at Dreux, Chartres and across the southern parts of departments 78, 91 and 77 as a cold pool passed, while it also snowed in the south-east from Languedoc-Roussillon to the Camargue, then into the Rhône valley and the Alps the following night. While the morning of January 8 was glacial in the north of the country, with lows down to -20.6°C at Brétigny-sur-Orge, beating the 1985 record, very heavy snowfall persisted over the south: up to 50 cm at Gap, 35 cm in Grenoble, 30 cm at Orange and even 20 cm in the Camargue.
The snowfall persisted the next day, with totals becoming impressive inland from Languedoc-Roussillon to the Tarn. Some high-altitude villages were cut off from the world, with sometimes more than 50 to 60 cm accumulated! 60 cm of snow on January 9, 2010 at Les Martys (900 m) in the Aude department - Photograph: meteo81 Snow also moved back up over the north of the country during that day, bringing a further 2 to 10 cm as far as Alsace, Burgundy, Franche-Comté and even Brittany. A great many French regions were thus under snow. After another disturbance bringing snow (3 to 7 cm over a large part of the north and east of the country) and freezing rain between the Pays de la Loire, Brittany and Lower Normandy, the weather turned drier until January 20, with a very gradual thaw over most of the country. Once again, it did not last long. A third cold wave invaded France from January 25, 2010. On January 28, fresh snow showers were already falling over a broad north-eastern quarter down to the northern Alps, ahead of a more active disturbance the next day that brought heavier snowfall to Champagne-Ardenne, Lorraine, Burgundy, Franche-Comté and southern Alsace. The snow mainly stuck above 300 m altitude. On January 31, snow again reached the Côte d'Azur, with several centimeters between Nice and Fréjus, especially between Cannes and Saint-Raphaël, where the beaches were well whitened. At the same time, -14°C was measured at Aurillac and -11°C at Nevers. Beaches whitened by snow at Cannes on the morning of January 31, 2010 - Archives Météo-Villes A February just as wintry! February thus began with cold and snow over many regions. On February 1, snowfall was observed over part of the north and east, with 5-7 cm in Lyon, for example.
On February 2, 30 cm of snow were measured from 200 m altitude in the north-east, before a clear thaw on the 3rd and 4th across the whole country. At La Mure, for example, the temperature went from -16.9°C at nightfall to 10.7°C during the day! Once again the thaw was temporary: the season's fourth cold wave invaded France on February 9, with snow returning to the north by the end of the day. On February 10, snow showers were observed over much of the north and center of the country, most frequent over the Pas-de-Calais. It was on February 11, though, that the snow became more widespread and at times heavy, once again on the Côte d'Azur: 5 cm were measured at Nice at the end of the day, 10 to 15 cm at Cannes, and up to 30-40 cm in the Grasse area from 200 m altitude! The following days were drier but generally glacial over almost the whole country, except between the tip of Brittany, the Côte d'Azur and the Corsican coast, with widespread and sometimes severe frosts, minimums often dropping below -10/-15°C. The successive snowfalls and the cold weighed heavily on many French people - front page of February 10, 2010 This fourth cold wave ended on February 17 with the return of a very disturbed but milder oceanic flow. Several successive storms then hit France at the end of February, notably the famous storm Xynthia on the evening of February 27. Winter had not said its last word, however, making one final return during March. This start of 2010 was therefore exceptionally snowy across France, the snowiest since the winter of 1986-1987, so much so that few regions were spared the snowflakes between December, January and February. Author: Tristan Bergen

Source: Météo-Paris At the end of winter 1963, the snowpack was at times spectacular in the mountain ranges, notably the Vosges, where it locally exceeded 10 metres on the highest ridges. meteo-paris.com archives

The winter of 1962-63 was the longest in centuries. After a relatively mild late 1950s and early 1960s, the winter of 1962-1963 stands out as one of the longest and most remarkable of the 20th century. In Paris it became the coldest recorded since the winter of 1879-1880. Frost appeared from mid-November 1962 and settled in durably, easing only briefly until early March 1963. At the same moment, the Algerian War was ending, triggering the mass exodus of the pieds-noirs to mainland France. Accustomed to milder winters, they discovered a France with an almost polar climate. In Marseille, the liners Ville-d'Oran and Hairouan even had their departure for Algiers delayed by a day because of the snow and intense cold. Everywhere, the country froze. In Deauville, yachts remained trapped in the ice, while thousands of barges were immobilised on frozen canals and rivers. From late December, the Rhine, the Rhône and the Seine carried ice floes, soon joined by the Garonne and the Loire. Brittany was far from escaping this monumental cold wave... here, at the entrance to Rennes, at the end of February 1963 – meteo-paris.com archives

The period from 12 January to 6 February was the harshest, marked by near-permanent frost. Temperatures reached exceptional levels: -27°C at Ambérieux, -26°C at Vichy, -23°C at Lyon, -18°C at Montpellier, -14°C at Dinard and -13°C in Paris. Marseille saw its fourth snowfall of the winter, with a further 20 cm. Fuel shortages resurfaced: coal consumption rose by 40% and fuel-oil consumption doubled.

In early February, many tenants of Parisian social housing (HLM) found themselves without heating. The Union des Vieux de France demanded an emergency allowance for the elderly. On the fringes of Burgundy, Berry, Lorraine and Isère, a few wolves from Eastern Europe were sighted, driven on by the cold. Better to make the most of the situation… Carcassonne under several tens of centimetres of snow in February 1963 – colourised photo – meteo-paris.com archives

-29°C in the Hérault... and pack ice on the North Sea coast! The intensity of the frost was such that pack ice formed along the North Sea coast, from Dunkirk to De Panne in Belgium. The sea also froze in Charente-Maritime, at La Courbe. All the major rivers carried ice floes, some even becoming locally completely icebound. On 4 February, a violent snowstorm paralysed Languedoc-Roussillon and Corsica. Factories collapsed under the weight of the snow. At Saint-Martin-de-Londres, the temperature fell to -29°C, destroying entire orchards. On the Côte d'Azur, flower production in the greenhouses of Antibes was wiped out. A temporary thaw on 6 and 7 February raised hopes of improvement, but the cold quickly reasserted itself. On 19 and 20 February, fresh snowfalls covered the country. In the Paris region, 15 to 20 cm of powder turned slopes into improvised ski runs. Skiers on the Trocadéro esplanade, in front of the Eiffel Tower, after the late-February 1963 snowfalls – colourised photo – meteo-paris.com archives

Very many deaths... and a very late thaw. In March, the thaw caused major damage to roads. While agriculture suffered less than in 1956, the winter wheat was partially destroyed. In France, the number of deaths linked to this exceptional winter reached 30,000, an alarming toll given the duration and intensity of the cold.

This winter of 1962-1963 was also remarkable for its global scale: the eastern United States, Canada, China, Japan, Siberia and all of Western Europe experienced extremely harsh conditions, while Alaska, Iceland, North Africa, the Middle East and India enjoyed unusual mildness. >>> After the war, the ordeal of the great cold of winter 47-48 >>> The torment of the terrible winter of 1917 >>> Up to 60 cm of snow on the Côte d'Azur at the end of February! >>> -40 wind chill in Marseille and Dunkirk blocked by pack ice: it can happen! >>> A polar February 1963, at the end of the longest winter of the 20th century >>> February 1954: a cold wave like no other Author: Guillaume Séchet

Source: Météo-Paris After a particularly wet start to the year, high pressure is restoring calm weather that looks set to last. Should we fear a drought despite a very rainy winter? This article offers some answers.

Very well recharged aquifers! After a particularly rainy start to 2026, groundwater tables have been able to recharge effectively. A few days from the start of meteorological spring, and approaching the end of the aquifer recharge season, the situation is more than satisfactory across most regions: 70% of France's aquifers show levels at or above normal. Another piece of good news: the Aude and Pyrénées-Orientales, which had been suffering chronic drought, have received very abundant rainfall and their aquifers have risen to levels not seen for many years. Groundwater levels this Thursday 26 February 2026 – info-secheresse.fr

Beyond the situation at depth, surface conditions also deserve mention. After this exceptional start to 2026, the soil moisture index is breaking records! A few days ago, the national index stood at its highest level for this time of year since measurements began. It even ranks among the top soil-moisture situations ever recorded in France, all dates combined: only December 1982 and January 1994, both marked by serious flooding, saw average moisture slightly higher than today's. Soil moisture index, national average, 18 February 2025 to 17 February 2026 – Météo France

In short: at the end of February 2026 we are at the opposite extreme from drought, with surface soils saturated with moisture and groundwater tables at high levels in many regions. Does a drought remain possible by summer?

With groundwater often at very satisfactory levels, the spectre of drought is inevitably smaller than it has been in recent years. However, the current turning point bears watching: the return of drier weather looks likely to last. The latest trends for March 2026 point to a dry month in France, even very dry in the southern half, where the rainfall deficit could be marked. At a season when awakening vegetation is thirsty for water, soils will therefore tend to dry out. Rainfall anomaly forecast for Europe in March 2026 – NOAA

High groundwater tables do not protect us from the risk of surface drought. As its name suggests, surface drought is a pronounced moisture deficit in the topsoil that can impair plant growth, which is why it is often called "agricultural drought". Unlike deep drought (linked to aquifers), surface drought can develop in just a few weeks when high pressure settles in and rain is lacking, especially with strong sunshine and high temperatures. Surface soil drought can appear within a few weeks – photo Fabrice Elsner

Thus, the risk of a major deep drought appears very limited this year, thanks to high groundwater levels at the end of winter. On the other hand, a dry, warm spring would be enough to dry out surface soils considerably and could cause a surface drought, even with high water tables. It is important to distinguish these two types of drought, which can occur independently of each other. See also: >>> Nearly 140 heat records broken this Wednesday in France! >>> What if March turns out very dry? >>> Could a New York-style blizzard happen in France?
>>> 80 cm of snow in the Var: the chaos of late February 2001 >>> The endless winter... from mid-November to mid-March! >>> Our weather bulletin, updated daily >>> Our widely followed Twitter account, a reference across the media! Author: Alexandre Slowik

Source: Météo-Paris Snow in Paris – Place de l'Opéra – late February 1948 – meteo-paris.com archives

A particularly powerful polar outbreak. In late January and early February 1948, a powerful anticyclonic cell settled over Scandinavia and northern Europe. This configuration blocked Atlantic disturbances and favoured a persistent flow of very cold continental air from eastern Europe and Russia. These dry, glacial air masses spread towards the west and south-west of the continent. The glacial air descending from the Baltic Sea towards France, 20 February 1948 – Source: Wetterzentrale

An intense cold wave at the end of February. From 20 to 27 February 1948, cold and snow invaded the whole of France. Brittany was particularly affected by this winter offensive: the temperature fell to -13°C at Brest, where the city lay under a thick white blanket. On 22 and 23 February 1948, a snowstorm of rare violence paralysed the northern half of the country. The temperature dropped to -20°C at Clermont-Ferrand and -10 to -12°C in Île-de-France. Snow even reached the Côte d'Azur. Daily maxima frequently stayed below freezing for several consecutive days, and the cold was accentuated by sometimes sustained winds, heightening the sensation of frost. Temperature evolution in Paris during February 1948 – source: meteo-climat website

The capital and other major French cities paralysed. The snow, at times heavy, persisted on the ground because of durably sub-zero temperatures. In some areas, rivers partially froze and the ground remained frozen to an unusual depth. Snowploughs appeared in the streets of Paris, where traffic had become impassable. The paralysis of the capital was thus a major issue.

The use of snowploughs had become necessary in Paris at the end of February 1948 – meteo-paris.com archives

Coal shortages and weakened infrastructure. The February 1948 cold wave struck in a post-war context of weakened infrastructure and shortages, notably of coal and fuel. Rail and road transport were severely disrupted by snow and ice. Energy supply became difficult in several regions, leading to heating cuts. On the human level, the intense cold caused excess mortality, particularly among the most vulnerable populations. Agriculture also suffered notable damage, with crops and fruit trees affected by the prolonged frost. Difficult traffic on the Place de la Concorde at the end of February 1948 – meteo-paris.com archives

In the meteorological archives, the February 1948 cold wave is often associated with the "great winter of 1947-1948". It remains a reference for the study of extreme cold episodes in Western Europe, for its duration as much as its intensity and socio-economic impacts. See also: >>> The last winter of the war was terribly cold in France... >>> The torment of the terrible winter of 1917 >>> The deadly blizzard of late February 1958 >>> Up to 60 cm of snow on the Côte d'Azur at the end of February! >>> -40 wind chill in Marseille and Dunkirk blocked by pack ice: it can happen! >>> Our very numerous articles (1 to 3 per day) >>> Our weather almanac of the main climate events in France since 1850 >>> Our chronicle of climate events since 1709 >>> Our widely followed Twitter account, a reference across the media! Author: Guillaume Séchet

Source: Météo-Paris The temperature anomalies forecast for next Wednesday will be very large: locally up to 12°C above seasonal norms! After long weeks of very unsettled weather, with heavy rain, violent winds and flooding, the situation is finally starting to improve. Even if winter is not yet over, the end of February often brings the first signs of spring, and that is exactly what seems to be shaping up this year.

Return of the Azores High. For several weeks the Azores High has been neglecting us. Positioned much too far south, it has let the disturbed oceanic flow influence the whole of Europe, including the Iberian Peninsula. In this context, France has seen particularly unsettled weather, which explains the increasingly widespread and significant flooding. The situation is about to change, however. From this weekend, high pressure should gradually move back up over continental Europe, with the anticyclone centring over Andalusia. The oceanic westerly flow will remain over France, but it will be far less active: rain will be confined mainly to the Channel coast. Air-mass evolution forecast between Tuesday and Wednesday – WRF Meteociel animation

A spring interlude of one to two days. From Monday, and especially Tuesday, high pressure will shift further towards the continent. The wind will turn southerly, favouring the arrival of very mild air from Morocco. By late February the sun is already climbing higher in the sky, so in such a configuration temperatures can rise more easily than in midwinter. The Valentine's Day period often marks the start of nature's awakening: some birds begin their breeding season, which is said to be the origin of the holiday.

The end of February regularly offers one or two days with a spring feel. Tuesday and especially Wednesday should illustrate this perfectly: skies will finally be clear over three quarters of the country, and afternoon temperatures will frequently exceed 15°C. With a southerly wind blowing at the foot of the Pyrenees, the warmth threshold could even be approached, or locally reached, with up to 25°C in the Basque Country. METEO-VILLES temperature forecast for next Wednesday

This spring interlude could be fairly short-lived. From Thursday, the disturbed oceanic flow should regain the upper hand, bringing back clouds, some rain from the west and, inevitably, somewhat less pleasant temperatures, even if mild conditions will persist. See also: >>> What if March turns out very dry? >>> Alpine resorts buried under several metres of snow! >>> Why has it rained so much since the start of the year? >>> Our weather bulletin, updated daily >>> Our widely followed Twitter account, a reference across the media! Author: Guillaume Séchet

Source: Météo-Paris With milder weather set to assert itself in the coming days, one wonders whether winter is over in France. Yet both the distant and the recent past have shown that winter can still strike well beyond February. 2-metre snowdrifts at Gonneville in the Cotentin (50) on 13 March 2013 – via infoclimat.fr

Major cold outbreaks can still occur. As the days lengthen and the North Pole warms, the polar vortex — which concentrates most of the cold air at high latitudes — becomes less stable, that is, less and less compact. It can then deform, dragging with it larger undulations of the jet stream. Cold-air outbreaks can thus escape the pole towards the mid-latitudes, including France. Late cold snaps in March are therefore entirely normal in our regions, which is why winter should never be buried too soon. Diagram of an unstable polar vortex and an undulating jet stream (typical in spring) – NOAA

A look at past records shows that marked cold can still occur during March. At Paris-Montsouris, temperatures can still drop below -5°C in the first half of the month; the most recent example dates from 13 March 2013, with -5.5°C at dawn. Days without thaw also remain possible until early March. Indeed, the latest ice day on record is quite recent, likewise on 13 March 2013, when the thermometer did not exceed -1.4°C in the capital! Lowest minimum and maximum temperatures measured at Paris-Montsouris in March since 1886 – infoclimat.fr

Winter offensives in March: recent examples. It is therefore important to remember that it is far too early to bury winter.

While meteorological winter ends on 28 February, cold and lowland snow can still strike well after that! Remember that snow can still fall anywhere in France during March. And there is no need to dig through distant archives to find notable wintry episodes in March. In 2010, the Perpignan conurbation found itself under 25 to 40 cm of snow on 8 March, and temperatures plunged locally to -10°C in the north-east of the country! 30 to 40 cm of snow over the Perpignan conurbation (66) on Monday 8 March 2010 – Météo Villes

Even closer to us, there was the historic snowstorm that struck from Brittany to Belgium on 12 March 2013. Normandy was the worst-affected region, and Météo France even issued a RED snow warning for the Manche and Calvados. The violent wind created snowdrifts 1 to 2 metres high; in places, vehicles were literally buried! Outside the drifts, 20 to 40 cm fell widely across these departments. The village of Les Pieux in the Cotentin (50), during the snowstorm of 13 March 2013 – via infoclimat.fr

While there is for now no real signal of a return of cold, it cannot be ruled out at all while we are still only in February. These past examples remind us that cold-air outbreaks can follow the first peaks of spring-like warmth. See also: >>> Alpine resorts buried under several metres of snow! >>> What if March turns out very dry? >>> Storm Pedro: the last straw! >>> 1 dead and major damage: storm Nils struck hard! >>> Our weather bulletin, updated daily >>> Our widely followed Twitter account, a reference across the media! Author: Alexandre Slowik

Source: Météo-Paris The beach at La Ciotat (13) packed with visitors on Sunday 22 February 2026 – photo: town hall

Temperatures are soaring to record levels at the end of February, reaching 25°C in central France and flirting with 30°C at the foot of the Pyrenees. Early warm spells are becoming ever more frequent and ever easier to reach.

Almost 30°C in late February! Spring is already here, not to say summer! In recent hours, temperatures have soared to remarkable levels in France. The year's first warm spell hit the south-west on Tuesday 24 February 2026. At the foot of the Pyrenees, the 25°C mark was comfortably exceeded and the Béarn even approached 30°C, with a maximum of 29.5°C measured at Saint-Gladie-Arrive-Munein, far beyond its February 2020 record of 27.0°C! Also of note: 26.6°C at Biarritz, 26.2°C at Pau, 25.9°C at Saint-Girons and 25.2°C at Dax. Maximum temperatures recorded in the south-west on Tuesday 24 February 2026 – meteociel.fr

This unusual heat, while we are still in winter, continued on Wednesday 25 February 2026, spreading to northern France. The country experienced an extraordinary day, with up to 28°C in the Basque Country and dozens of warmth records broken all the way to the North Sea shores! Among the most striking records: 26.5°C at Biscarrosse in the Landes, 25.6°C at Tiranges in Haute-Loire, 25.2°C at Montgivray in the Indre, 25.1°C at Tulle in Corrèze, 24.7°C at Montluçon in the Allier, 22.4°C at Orléans in the Loiret and 22.3°C at Le Mans in the Sarthe! Maximum temperatures measured in France this Wednesday 25 February 2026 – Météo Villes

More than 100 monthly records broken on 25 February! More than 100 warmth records were broken this Wednesday 25 February 2026, proof that it was indeed one of the warmest February days ever recorded.

Many of the previous records dated from late February 1960, 1990, 1998 or 2019 >>> list of the records of this 25 February 2026 here >>> Map of monthly temperature records broken on 25 February 2026 – Meteociel.fr

Early warmth increasingly easy to reach. When discussing early warmth, it is hard not to mention the 31.2°C at Saint-Girons (Ariège) on 29 February 1960. It should be noted, however, that the southerly flow at the time was much stronger, with the air-mass temperature at 1,500 m flirting with 20°C along the Atlantic seaboard! Yesterday, the air mass read "only" 12 to 14°C at 1,500 m over Aquitaine, which did not stop the thermometer from approaching 30°C! This shows how easy it has become to reach thermal peaks even without a record air mass. If the same situation as in late February 1960 occurred today, we would probably reach 32-33°C at the foot of the Pyrenees! Comparison of the air masses observed on 29 February 1960 and 24 February 2026 – meteociel.fr

Indeed, for several years now the end of meteorological winter has often resembled spring. France has seen 8 consecutive Februaries milder than normal, with substantial anomalies: 5 of the last 8 Februaries recorded a thermal deviation of +2°C or more! The record warm spells are too numerous to count: 19.5°C in Belgium in February 2025; 22°C in central France in February 2024; nearly 23°C in Alsace in February 2021; no less than 27°C on the Basque coast in February 2020; not to mention the remarkable late February 2019, with 20 to 25°C over almost the whole country and 27°C in Aquitaine!

Thermal anomaly (relative to 1991-2020 norms) in France for February, 1988 to 2026 – Météo France

With climate change, February is tending to lose its wintry character and is increasingly becoming a spring month. This triggers an early awakening of vegetation, which is then overexposed to the risk of late frost in March and April. Even so, cold Februaries remain possible in France, as was last the case in 2018. See also: >>> Could a New York-style blizzard happen in France? >>> 80 cm of snow in the Var: the chaos of late February 2001 >>> The endless winter... from mid-November to mid-March! >>> Alpine resorts buried under several metres of snow! >>> What if March turns out very dry? >>> Our weather bulletin, updated daily >>> Our widely followed Twitter account, a reference across the media! Author: Alexandre Slowik

Source: Météo-Paris Diagram of the Saharan dust plumes that could reach France, notably at the start of March 2026 – illustration from the book "Y a plus de saison", Guillaume Séchet, 2008

Saharan sand is back after several months' absence… and the quantities could become more massive at the start of March.

First dust plumes already under way. With the powerful oceanic flow of recent weeks, it was impossible for Saharan sand to reach our regions. But the situation has changed: the wind has turned southerly across all of Western Europe, and Saharan dust plumes have begun to affect us. Until this weekend, however, these plumes will remain fairly limited and barely visible in the sky, and from Friday they will drift away eastwards, pushed by a light oceanic flow. Simulation of Saharan dust at 3,000 m altitude through Wednesday evening – Meteociel

Much more massive plumes in early March. For the phenomenon to become far more massive, a depression needs to form over the Iberian Peninsula and dive towards the Moroccan and Algerian desert, sweeping up large quantities of Saharan sand that then cross the Mediterranean and reach our regions. That is precisely what will probably happen at the start of March, when a cut-off low settles over Spain and generates major dust plumes towards France. Although this is still some way off, the risk is fairly high and the successive model scenarios all point the same way.

Forecast of Saharan sand plumes between 1 and 4 March – Greek model (University of Athens)

A fairly common phenomenon in late winter and spring. Early spring is a favourable period for this type of event, as cut-off lows often become isolated over the Iberian Peninsula and warm surges from North Africa are fairly frequent. This was for example the case on: >>> 20 March 2025 >>> 3 March 2025 >>> 17 February 2025 >>> 6 April 2024 >>> 30 March 2024 >>> 20 February 2023 >>> 26 March 2022 >>> 16 March 2022 Large quantities of Saharan sand in the sky over Aguilas (south-east Spain) on 14 March 2022 – photo Jose Gome Ros Author: Guillaume Séchet

Source: Météo-Paris Could cold and snow return to France in the coming weeks? – Illustrative image

This year, spring seems to have arrived early in France. Late February is particularly mild, even warm, with exceptionally high temperatures over many regions this week. Do these abnormally high temperatures herald the end of winter and rule out any return of cold in France? Probably not...

What weather for March? For now, most seasonal models agree that March should be milder than normal over France, but also drier. Temperature anomalies remain positive over France, as over much of Europe, while precipitation anomalies remain negative over most of the country, except near the Mediterranean, where the weather could be more regularly wet. Temperature and precipitation anomalies over France for March 2026 – via TropicalTidBits

In this context, March should therefore be regularly anticyclonic over most of France, with mild or very mild weather on average over the month. No signal of a more or less durable return of cold is currently expected for this first month of spring 2026. But does that mean winter is well and truly over?

Are cold snaps still possible? Early, marked mildness at the end of February does not necessarily spell the end of winter in France. In the past, some warm spells during this period have been followed by more or less marked returns of cold, and even snow, during March — sometimes even later. 2021: spring in February, winter in March!

During the second half of February 2021, for example, spring already seemed to be settling in even though winter was not yet over. From 16 to 25 February, exceptional mildness affected France, with temperatures staying well above the norms for the period. Maximum temperatures recorded in France on 24 February 2021 – via Infoclimat

On 24 February 2021, 15-20°C was exceeded across the whole country, often more than 21-22°C over the southern half and at times more than 24-25°C between the south-west and the Massif Central. Numerous monthly warmth records were broken during this period, and many believed winter was well and truly over. Yet three weeks later, between 15 and 23 March 2021, snow and cold returned to our country. Under a flow that had swung to north/north-west and then north-east aloft, air quite cold for the season poured into France, first bringing abundant snowfall to the mountains before snow reached the lowlands shortly before the spring equinox. 4-5 cm of snow was recorded in Clermont-Ferrand on 19 March, where more than 22°C had been recorded a month earlier. 4 to 5 cm of snow on the morning of Friday 19 March 2021 in Clermont-Ferrand – Photo: Daniel Paquet via Twitter: @Danieldeclerm

Snowfall was also observed at low and even very low altitude as far as south-east France, as well as in the Pyrenees, before a return to dry weather under persistent cold. Frosts were indeed observed over three quarters of France just as calendar spring began.

1960: summer in February before the return of snow and frost at the end of spring! The end of February 1960 had also been abnormally warm in France.

Between 27 and 29 February, warm air from the Sahara invaded the whole country, bringing temperatures worthy of late spring or even early summer. During this period, temperatures rose to 29°C at Biarritz, 28°C at Pau, 26°C at Clermont-Ferrand, 24°C at Nevers, 22°C at Reims and 21°C in Paris. Under a foehn effect, up to 31°C was even recorded at Saint-Girons in the Ariège, a February record for France. The late-February 1960 warmth by the lake in the Bois de Boulogne, Paris – Météo-Villes archives

Even so, winter had not said its last word over our country. The cold made a brutal, striking return at the end of April. From 26 April to 5 May, air particularly cold for the season managed to pour into France, bringing snowfall to the lowlands in some regions. On 29 April, 5 cm of snow fell at Belfort and 4 cm at Luxeuil-les-Bains in the Vosges. The next day, frost was widespread across the country, with -4°C at Limoges and -3°C at Nevers. This cold weather persisted into early May, causing major damage to crops. Other examples of early mildness followed by late cold snaps can be cited, such as 1998, when exceptional mildness affected France at the end of February before a temporary return of cold and snow for Easter. Thus, a spell of exceptional mildness does not necessarily mean the end of winter: temporary cold snaps remain possible until April, or even early May. See also: >>> Nearly 140 heat records broken this Wednesday in France! >>> What if March turns out very dry? >>> Could a New York-style blizzard happen in France? >>> Will drought follow the floods?
>>> Our weather bulletin, updated daily >>> Our widely followed Twitter account, a reference across the media! Author: Tristan Bergen

Source: Météo-Paris Air-mass temperature at around 1500 m on 13 February 1929 - reanalysis via meteociel.fr. The cold wave of February 1929 ranks among the most intense to hit France during the 20th century. Temperatures dropped as low as -30°C on the Auvergne plain! A look back at this memorable episode.

-30°C in Auvergne: an intense cold wave! The cold wave of February 1929 was remarkable, following cold that had already set in by late January. In the second ten days of February, a powerful Scandinavian anticyclone faced off against a depression over Italy, setting up a genuine "Moscow-Paris" flow. This advected glacial air towards France, and the air mass at 850 hPa (around 1500 m altitude) reached -20°C in the east of the country, a rarely seen level! The peak of the cold came between 11 and 15 February 1929, with temperatures remaining remarkably low by night and by day. The Geneva region was particularly affected, and Lake Geneva largely froze over! Geneva's harbour locked in ice during the cold wave of February 1929 - Chronique Météo Villes. In Strasbourg, the mean minimum over the whole month was -13°C and the mean maximum -3°C, a monthly deficit of 11.5°C against modern climatological norms! No fewer than 5 nights ran between -20 and -22°C, and maximum temperatures stayed between -12 and -15°C from 11 to 14 February 1929! It was in Auvergne that the cold was sharpest: the thermometer plunged to -30°C on the Limagne plain, in the Clermont-Ferrand region! Minimum and maximum temperatures measured in Strasbourg (67) in February 1929 - infoclimat.fr

Rivers locked in ice With cold so intense, and which had set in during the last week of January 1929, many French rivers froze over completely. The Somme was entirely frozen at Amiens, as were the Meuse, the Aisne at Rethel, the Yonne and the Seine upstream of Montereau, many stretches of the Loire and a good part of the Rhône. The Mediterranean regions were not spared by this cold wave, and some coastal towns saw snowfall. The Rhône was even partially frozen as far down as Arles in the Bouches-du-Rhône! The Rhône partially frozen at Arles (13) in February 1929 - Chronique Météo Villes. This cold wave made daily life particularly hard. In the countryside, where running water was generally not yet installed at the time, water had to be fetched from fountains, but most of them no longer worked! People melted blocks of ice or snow to get water. Snow, it must be said, was abundant in some regions: 10 to 20 cm lay from Brittany to the Lyon area (10 cm in Angers, 17 cm in Clermont-Ferrand). The frozen Saône could be crossed on foot at Chalon-sur-Saône (71) in February 1929 - Chronique Météo Villes. Author: Alexandre Slowik

Source: Météo-Paris Water levels are set to fall across France with the return of much drier conditions - though the recession looks slow in some areas. Striking images of the Charente in flood at Saintes on Wednesday 18 February 2026, with the level still set to rise overnight and tomorrow. (photos via EPTB Charente)

The recession begins at the end of this week The weather has been exceptionally disturbed and wet in France for several weeks now. As a result, floods have been numerous and at times very severe across the country, notably in the west and south-west. Several river stretches were still under red alert from Vigicrues this Friday 20 February: the lower Angevin valleys; the lower Loire; the Loire around Saumur; the lower Charente. On these stretches, river levels are still rising at the end of this week, ahead of a flood peak expected this weekend. At Saintes, for example, the Charente should peak on Sunday 22 February at around 6.60 m (the record being 6.84 m, set in December 1982). Evolution of the Charente's level at Saintes from 8 to 22 February 2026 – Vigicrues. This peak should be followed by a slow recession there, as on most rivers of western France. A ridge of high pressure is indeed building over France from this Friday, bringing back calmer, drier weather at least until the middle or even the end of next week. Precipitation totals expected through Friday 27 February 2026 in France – GFS model via meteociel. This return to dry weather should therefore allow river levels to begin a more or less marked fall from this weekend onwards and for several days, good news for the stricken regions.

A recession that looks slow in some areas However, do not expect a return to normal as early as next week. The recession looks slow to very slow on most French rivers. All the water in the catchment basins must drain away before rivers and streams return to more normal levels, which promises to take a long time given that soils are completely saturated across almost the entire country. A lag between the end of the rain and the start of the recession is therefore to be expected: the time it takes for rain that fell upstream to propagate downstream. Explanatory diagram of a catchment basin – METEO-EXTREME. On top of this comes the more or less pronounced melting of the snowpack expected on the mountains in the coming days. This calmer, drier spell should indeed be accompanied by a return of spring-like mildness over the country, at low levels but also in the mountains. That mildness should trigger the start of melting in the sometimes exceptional snowpack covering our ranges: on 20 February there was often more than 250 to 350 cm of snow in the high Alps, locally 4 metres in Isère. The Pyrenees are also heavily snow-covered, generally with 250 to 280 cm on the region's summits. Meltwater should therefore feed the rivers again and keep levels fairly high despite the end of the precipitation. The snowpack exceeds 250 cm in the high Pyrenees at the end of this week, as here at the Col du Portalet - via Twitter @CyNPirineos. Finally, note that some scenarios already foresee the return of disturbed oceanic conditions at the end of next week, with widespread rain and successive fronts once again, which could drive river levels back up. This trend remains uncertain, however, and will need confirming in the coming days. Author: Tristan Bergen

Source: Météo-Paris The cold wave of February 1917 in Paris: even the horses could not withstand it! meteo-paris.com archives. The last two winters of the First World War were particularly harsh in France, a country already partly devastated by three years of fighting. Between 20 January and 15 February 1917, an exceptional cold wave struck mainly the north and east of the country, peaking in early February with bitterly cold temperatures. "Le Monde Illustré" of 3 February 1917 noted that this winter revived tradition, since, according to the paper, "the great winters of old are becoming rarer and rarer...". Indeed, the curve of winter temperatures in France shows a warming from the start of the century up to the outbreak of the First World War.

Weather conditions unbearable for troop morale The frozen ground of the Aisne, paradoxically, allowed troop movements that would have been impossible on the usual muddy soil. But the French army suffered terribly from the cold, being clearly under-equipped to withstand it, unlike the German army. Regiments had only a few animal hides, and some Algerian tirailleurs were even shod in open shoes and dressed in short breeches. These harsh conditions weighed heavily on troop morale. The relief marching under the snow during the war - early 1917 - meteo-paris.com archives.

Down to -26°C in the plains and valleys of eastern France! The cold peaked at the very start of February with glacial temperatures: -26°C at Bonneville, -23°C at Commercy, -22°C at Montbrison, -20°C at Grenoble, -18°C at Lyon, -17°C at Alençon and Clermont-Ferrand, and -15.5°C in Paris. The first ten days of February were compared to the situation of February 1895. In Paris, clearing snow from the roads proved very difficult owing to the shortage of labour. Women were therefore requisitioned. February 1917 - meteo-paris.com archives.

The rivers gradually freeze The rivers of the east began freezing on 24 January, while those of the north, including the Paris region, froze in the last days of January, something unseen since 1895. Navigation became impossible on the canals and then on the Seine. At the same time, heavy demand for coal caused major supply difficulties in Paris, as in London. Despite the use of a few icebreakers and the construction of barriers to hold back the ice near Rouen, barges remained stuck between Rouen and Paris. A special motor-transport service was then set up. Rouen - 7 February 1917 - meteo-paris.com archives.

The price of coal soars! Queues to buy coal lengthened and prices soared. Even the well-to-do ladies of the smart districts had to wait for hours, which provoked no little gnashing of teeth, both figurative and literal. The coal shortage, at a time when so many machines depended on it, had a growing impact on economic activity. Tram lines were interrupted, factories closed their doors, and the laundries, heated with coke, gradually ceased operating. Some newspapers were even indignant that German prisoners were better heated than the French. The scarcity of coal drove up firewood prices in the big cities; it was sold by the kilo, after being sawn and weighed on hand scales. Rabbit-skin furs, meanwhile, became very cheap. Unloading by hand after traffic was halted by the freeze - 1917 cold wave - meteo-paris.com archives. Author: Guillaume Séchet

Source: Météo-Paris Some beaches on the Atlantic coast packed as in midsummer at the end of February 1990 - Météo-Villes archives.

A radical change of weather After three particularly stormy weeks, with a succession of fronts bringing heavy snow to the mountains, abundant rain to many regions and even a powerful storm over north-western France, the situation changed radically for the last ten days of February 1990. A ridge of high pressure built over the country from 20 February, bringing back calm, dry weather along with a clear surge of mildness as the flow aloft swung to the south/south-west. Atmospheric situation over Europe on 22 February 1990 – Wetterzentrale. While the previous weeks had already been fairly mild under the disturbed oceanic flow, temperatures took on a whole new dimension at the start of the last ten days of February 1990.

Heat in February! From 20 February, temperatures soared across the entire country: a genuine winter heat wave set in over France under this powerful south/south-westerly flow aloft. As early as 20 February, 20°C was reached as far north as Hauts-de-France, with 18.5°C in Paris and 19°C in Strasbourg, but it was above all 23 and 24 February that proved exceptionally mild across the whole country and even hot in some regions. On 23 February, the heat threshold was widely passed in south-western France, with for example 25.7°C in Biarritz, 25.9°C in Mont-de-Marsan and even 27.2°C in Dax! On 24 February, 26-27°C was exceeded in southern Aquitaine, with up to 28°C at Peyrehorade (40) and even 28.1°C at Agnos (64). Up to 25°C was also recorded in Bordeaux, 23.5°C in Clermont-Ferrand, 22.6°C in Bourges, 22°C in Mulhouse, 21°C in Orléans and 20°C in Paris. Some stations in central France also reached the heat threshold, and a great many records were set. Maximum temperatures recorded in France on 24 February 1990 – Météo-Villes archive. This spell of mildness/heat, exceptional for the season, ended over most of the country the very next day with the return of less mild oceanic air. Only the regions from the Massif Central to the north-east kept very mild temperatures, with 22.3°C in Saint-Étienne, 21.8°C in Colmar and 19.9°C in Vichy. Two storms then struck France between 26 and 28 February. February 1990 was, overall, exceptionally mild: the national mean temperature for the month exceeded the 1981-2010 normal by 4°C, something never equalled since. Only February 2024 came close to this exceptional monthly mean, with an anomaly of +3.6°C nationwide. February temperature anomalies in France, 1967-2016 – Météo-France. Author: Tristan Bergen

Source: Météo-Paris Thermal anomalies forecast between 23 February and 2 March 2026 - ECMWF model. Temperatures have already edged up in recent days, but that is nothing compared with what awaits us this week. The flow will swing towards the south, allowing warm air from Morocco to spread directly over France. Belgium, Germany and Switzerland will also be touched by this surge of mildness across western Europe.

Up to 12°C above normal! Throughout the week, temperatures will run well above the seasonal norms for late February, notably between Tuesday and Wednesday, when sunshine will cover three quarters of the country. Departures from normal could reach seven to ten degrees on Tuesday, and even ten to twelve degrees on Wednesday! Forecast maximum-temperature anomalies for Tuesday and Wednesday.

Above 15°C almost everywhere for four days Between Monday and Thursday, afternoon temperatures will exceed fifteen degrees Celsius over almost all of France. The mildest, locally even warm, day will probably be Wednesday, as the pronounced mildness that will already have reached the south-western quarter the day before spreads up to the northern regions. Forecast maximum temperatures between Monday 23 and Thursday 26 February according to METEO-VILLES.COM.

Perhaps a few monthly records broken? The monthly mildness records for February will be hard to reach: they stand at close to thirty degrees in the far south-west and at twenty to twenty-three degrees over most regions. In Paris, however, the late-February records could come within reach. The record for the last ten days of February is 21.4 degrees, set in 1960. For a 24 February the record is 20.3 degrees (1990), and for a 25 February Paris-Montsouris has reached as high as 17.9 degrees (2019). Maximum temperature records for February - Meteociel. For late February, the benchmark years are thus 1960, 1990 and 2019; for the southern half, 2020 and 2012 can also be cited. The banks of the Seine at the end of February 2019, with temperatures around 18°C in the shade in Paris - meteo-paris.com archives. Author: Guillaume Séchet

Source: Météo-Paris 80 cm of snow and a paralysed road in the Saint-Maximin area (Var) on 28 February 2001 - Chronique Météo Villes. The end of February 2001 was marked by exceptional snowfall in south-eastern France. Up to 80 cm fell in the Var, bringing the roads to a complete standstill! A look back at this memorable episode.

Snowstorm in the south-east In late February 2001, France faced its first real winter offensive since November 1999! A depression tracked over the Paris basin, advecting cold air across the country. At the same time, a secondary low deepened in the Gulf of Genoa, setting up an easterly return flow responsible for heavy, persistent precipitation over northern Italy and south-eastern France. Isothermal conditions set in and snow began falling on the Provence lowlands, especially during the night of 27-28 February 2001. Weather situation over Europe on Wednesday 28 February 2001 - reanalysis via meteociel.fr. Snow then fell across the whole south-east, but the quantities were most remarkable over Provence and in Ardèche. On Wednesday 28 February 2001, up to 80 centimetres of snow lay on the ground at Saint-Maximin in the Var, 65 cm at Sault in the Vaucluse and 52 cm at Régusse (Var)! Such totals are remarkable for these regions, and daily life was severely disrupted. Remarkable snow cover at Saint-Maximin (83) on 28 February 2001 - Chronique Météo Villes.

Utter chaos on the roads! With so much snow in a region so unaccustomed to it, driving became nearly impossible in places! Secondary roads were left impassable, buried under several tens of centimetres of heavy, sticky snow! The snow also caused damage and numerous power cuts. More than 100,000 households were without power on 28 February 2001! A road buried under a thick blanket of snow at Signes in the Var on 28 February 2001 - Chronique Météo Villes. As the snowstorm struck in midweek, between Tuesday 27 and Wednesday 28 February 2001, many commuters and lorry drivers found themselves stuck on the road. The major routes were not spared: the A8 motorway was paralysed in the Var, and several thousand people were left stranded, which quickly led to controversy over the lack of preparation for such an episode. Thousands stranded on the roads of Provence on 28 February 2001 - Chronique Météo Villes. Author: Alexandre Slowik

Source: Météo-Paris At Ville d'Avray (west of Paris), the snow depth reached 55 cm! meteo-paris.com archive photo.

Snowstorm over northern France From late February 1946, high pressure built over the Atlantic and Greenland, triggering an outbreak of polar air that flooded all of northern Europe and plunged into the North Sea. South of this cold air, a depression deepened near Portugal then drifted over France, where it lingered for several days in early March 1946. Ahead of the depression, mild air covered the south of the country, while the low dragged the cold air back across the northern regions. A major clash of air masses ensued, with at its heart an active, persistent snow event stretching from the Pays de la Loire to Belgium and hitting Île-de-France particularly hard! Weather situation in Europe on 1 March 1946 - Météo Villes. Some 40 cm of snow lay on the ground in Paris, a depth never measured since weather records began at Paris-Montsouris! Eighty years on, that measurement has still not been equalled. The press of the day noted that one probably had to go back to the winter of 1829-1830 to find a comparable snow cover in the Paris region. The depth was even greater west of Paris, reaching up to 55 cm in the Yvelines! Period photographs show Parisian café terraces buried under snow! Paris under 40 cm of snow, 2 March 1946! meteo-paris.com archive photo.

Paris paralysed by 40 cm of snow! In early March 1946, Paris turned into a veritable winter-sports resort! Many residents strapped on skis to get around the streets, sliding down the steps of the Trocadéro to the foot of the Eiffel Tower or down the slopes of the Butte Montmartre! Six months after the end of the Second World War, Parisians made the most of this exceptional episode. Yet this heavy snow also caused many problems, paralysing traffic. A skier descending the Butte Montmartre in Paris, early March 1946 - Chronique Météo Villes. The weight of the snow was such that roofs and glass canopies collapsed in several districts of the capital, and foodstuffs struggled to reach the region, where shop shelves emptied. Rail traffic was likewise paralysed. City streets were barely passable and many people took falls. It took several days for activity to gradually return to normal. Deep snow at the Brochant metro station in Paris, early March 1946 - Chronique Météo Villes. It remains, to this day, the heaviest snow event in Paris since weather records began. Author: Alexandre Slowik

Source: Météo-Paris This February's mildness, combined with its high humidity, is allowing vegetation to restart strongly and early. As a result, pollens are spreading and allergy sufferers are already feeling their unwelcome effects.

A surge of mildness after the rain: vegetation explodes Recent and upcoming weather conditions bring together all the ingredients for an explosion of vegetation. Great mildness is settling over France and will intensify in the coming days, peaking on Tuesday 24 and Wednesday 25 February 2026. Highs of 20°C could be reached as far as the northern regions, with the season's first 25°C expected in southern Aquitaine, all under bright sunshine! With soils very wet after the heavy rain of recent weeks and temperatures worthy of April, vegetation is growing very quickly and pollens are spreading. Forecast maximum temperatures for Tuesday 24 and Wednesday 25 February 2026 - Météo Villes. Pollens are therefore back in force, and allergy risks will be high in France during this spring-like week. On Tuesday 24 February 2026, the allergy risk is rated "high" by Atmo-France over most regions, somewhat lower in parts of the south-west. The same will apply on Wednesday and Thursday, before lower risks on Friday as a rain front crosses the country. Map of the allergy risk for Tuesday 24 February 2026 - Atmo-France.

Cypress and alder: the main threats If you suffer from allergies, you are therefore strongly advised to resume your treatment or make an appointment with your doctor and/or allergist. People allergic to cypress are particularly concerned, since this pollen poses a serious threat this week, with very high concentrations in the Mediterranean regions, high concentrations in the south-west and moderate ones elsewhere. Cypress pollen is highly allergenic and often triggers rhinoconjunctivitis. Cypress pollens pose a very strong allergy threat - illustration photo. The other pollen causing trouble across the country is alder. It is released by what are called catkins (photo below), similar to those of the hazel. This pollen is less conspicuous but no less formidable, causing rhinoconjunctivitis and asthma attacks in allergic individuals. Its concentration is currently high, creating a significant risk of allergic reaction in most French regions. Alder pollens are released by what are called "catkins" - illustration photo. With the return of sunshine and particularly mild temperatures after long weeks of bad weather, many people will spend long hours outdoors, so particular vigilance towards pollens is called for. Author: Alexandre Slowik


International News / Journaux internationaux

Source: The Guardian World Legendary nightclub Le Palace, where Serge Gainsbourg and Prince also performed, to rise again In the late 1970s, Le Palace in Paris’s busy theatre district was one of continental Europe’s most famous nightclubs. On the opening night on 1 March 1978, Grace Jones stunned VIP guests with her rendition of Edith Piaf’s classic La Vie en Rose. Later, Serge Gainsbourg and Prince came to perform, Bob Marley was photographed there and Mick Jagger, Andy Warhol and Karl Lagerfeld were part of a glittering cast of international celebrities, politicians, designers and models who came to drink and dance. Continue reading...

Source: The Guardian World Department of foreign affairs warns travellers of risk of reprisal attacks, further escalation and flight cancellations in Middle East US-Israel attack on Iran – live updates Get our breaking news email, free app or daily news podcast Australia has declared its support for US action against Iran to prevent it from obtaining a nuclear weapon and “to prevent Iran continuing to threaten international peace and security”. But Australia’s department of foreign affairs (Dfat) has warned of the risk of “reprisal attacks and further escalation” across the Middle East after the attack. Continue reading...

Source: The Guardian World US threats to seize Greenland have created ‘new international fault lines’ that can be used to spread disinformation, Danish intelligence agencies say Denmark’s intelligence services have warned that a foreign power may try to sway the general election on 24 March, saying the main threat was from Russia over support for Ukraine but also citing the chaos caused by US efforts to seize Greenland. The PET police intelligence service and FE military intelligence said in a joint statement the election campaign could be marked by disinformation and cyberattacks “to sow division, influence the public debate or to target candidates, parties or specific political programmes”. Continue reading...

Source: New York Times World Judges at the International Criminal Court have heard starkly different interpretations this week of the words of former President Rodrigo Duterte of the Philippines.


Raspberry Pi

Source: Framboise 314 Behind Macé Robotics, Nicolas combines component-level electronics repair and board design for professional clients with the development of mobile robots for education and research. Notable projects include robots based on the Raspberry Pi and Raspberry Pi Pico (MRPi1, MR-Pico), accompanied by articles and documentation. In this context, he is organising […] The article Mace Robotics giveaway: a Raspberry Pi 5 (and a Pico 2W) to be won! first appeared on Framboise 314, le Raspberry Pi à la sauce française - France's reference site for the Raspberry Pi, by the author of the book "Raspberry Pi 4" published by ENI.

Source: Framboise 314 On 14 and 15 February 2026, join me in Vitré for the Tech Inn’Vitré show (digital uses), organized by Vitré Communauté and Makeme. Two days to discover concrete uses of digital technology, to test things hands-on... and above all to meet “in real life”, at the Centre culturel de Vitré.

Source: Framboise 314 The SunFounder Fusion HAT+ looks like a simple Raspberry Pi HAT... until you realize it is more of a Swiss Army knife for an “AI-assisted” robot. It does not “do” the AI itself: the neurons stay on the Raspberry Pi (a Pi 5 in my case), but the board supplies the muscle: a 2x18650 battery power supply, motor control, and “AI-ready” expansion for the Raspberry Pi.

Source: Framboise 314 In this second part, the Raspberry Pi 5 gets to work with real-time video accelerated by the Hailo-10H. Person detection, dynamic framing, skeleton pose estimation and hand recognition: concrete models, one after another. The goal is to assess real-world performance, the limits, and the right trade-offs in practical situations. No cloud here: this is part 2 of the "Raspberry Pi AI HAT+ 2" computer-vision series.

Source: Framboise 314 With the Raspberry Pi AI HAT+ 2, Raspberry Pi introduces a HAT+ board integrating the Hailo-10H accelerator and 8 GB of dedicated memory, designed exclusively for the Raspberry Pi 5. Connected over PCIe Gen 3, it targets local execution of AI models with no cloud dependency. This first article presents the hardware and its installation on a Raspberry Pi 5.

Source: Framboise 314 Today, let's discover Pimmich, an open source connected photo frame based on the Raspberry Pi, designed to display your memories with no cloud and no subscription, staying 100% local. With the recent changes on the Google Photos side, many of you have had to rethink your habits... and Aurélien had the right reflex: building on [...]

Source: Framboise 314 Setting up a personal server at home on a Raspberry Pi 5 or Pi 500+ is within any maker's reach... provided you follow the right method. In this article, we install YunoHost on an NVMe SSD, run the post-install, install a first app (WordPress), then make the server reachable from the outside with HTTPS via Let's Encrypt, step by step.

Source: Framboise 314 The Google Earth application is no longer really maintained on Linux, and it no longer exists at all as a native build for ARM architectures such as those of the Raspberry Pi. The last official Linux version dates from 2020, and installing it on a Pi (ARM) is now doomed to fail. In practice, the way to use Google Earth on a Raspberry Pi today is the web version, which works.

Source: Framboise 314 The question of driving WS2812B LEDs on the Raspberry Pi 5 was recently raised by Victor during an exchange on social media. The Raspberry Pi 5 introduces a new hardware architecture that complicates driving WS2812B LEDs with the historical libraries. PWM- and DMA-based solutions quickly show their limits; the reliable approach is to drive the LEDs over the SPI bus.

Source: Toms Hardware Raspberry Pi VEEB Projects has put together a cool transparent Raspberry Pi display using a glass dome and a program that replicates the Pepper's Ghost effect.

Source: Toms Hardware Raspberry Pi Abe's Projects has put together a custom mini PC using two Raspberry Pi Picos featuring a touchscreen, custom apps, and a built-in keyboard.

Source: RaspberryTips.fr You don't need sensors, screens or extra gadgets to build something great with your Raspberry Pi. In fact, many of the most useful and rewarding projects can be done with nothing more than your Pi, a microSD card and a power supply. The Raspberry Pi can be used as a server...

Source: Raspberry Pi We’ve updated our pages, forms, and CAPTCHA infrastructure on raspberrypi.com to improve accessibility for screen reader users.

Source: Toms Hardware Raspberry Pi Powered by the Raspberry Pi Zero 2 W, Jeff Merrick's slab of 1970s/1980s aesthetic channels the worn, broken "charm" of the Alien universe, belying the powerful single-board computer within.

Source: Toms Hardware Raspberry Pi Raspberry Pi's latest AI accessory brings a more powerful Hailo NPU, capable of LLMs and image inference, but the price tag is a key deciding factor.

Source: Toms Hardware Raspberry Pi Raspberry Pi pricing has now reached parity with Intel N100 mini PCs at just over $200, as flash memory price spikes continue to push prices up across the board.

Source: Toms Hardware Raspberry Pi The invitation to Mayor-elect Mamdani's inauguration lists Raspberry Pi and Flipper Zero as prohibited items but does not provide a reason.

Source: Toms Hardware Raspberry Pi This Raspberry Pi project captures Wi-Fi data and then blasts it out as sound to make it feel like you're connecting via a dial-up modem.

Source: Toms Hardware Raspberry Pi Raspberry Pi has released an updated version of the Raspberry Pi 500 and this time the omitted NVMe storage is present, as is an RGB mechanical keyboard.

Source: Toms Hardware Raspberry Pi Raspberry Pi releases a smaller model of its updated touch display. This time with $20 off the price but the same display as the larger model.

Source: Toms Hardware Raspberry Pi Argon40’s new Raspberry Pi Compute Module 5-powered laptop has it all, but the price makes it a considered purchase.

Source: Toms Hardware Raspberry Pi GamerCard is a retro gaming handheld so portable that it's literally the size of a gift card, so you can now casually spend $170 at checkout.

Source: Toms Hardware Raspberry Pi Spacerower is using a Raspberry Pi Zero to power this custom 3D-printed camera that instantly prints out photos using thermal paper.

Source: Toms Hardware Raspberry Pi Goblinhan Yıkan has created a Raspberry Pi Pico-powered fight stick that has extra buttons for throwing random combos while playing fighting games.

Source: Toms Hardware Raspberry Pi Dan McCreary shows off how to create your own FFT sound spectrum analyzer using our favorite microcontroller, the Raspberry Pi Pico 2.

Source: Toms Hardware Raspberry Pi Raspberry Pi's new $25 PoE+ Injector brings Power over Ethernet for the Raspberry Pi 3B+ and 4 via existing PoE HATs. The Raspberry Pi has to wait for the PoE+ HAT+, which has been in development since late 2023.

Source: Toms Hardware Raspberry Pi André Esser is using a Raspberry Pi to power this ASCII camera project that he recently created for Pi Jam, celebrating Pi day.

Source: Toms Hardware Raspberry Pi Maker Jackw01 and Soaporsalad have put together a cool Raspberry Pi handheld featuring a Raspberry Pi Zero 2 W that's small enough to fit in an Altoids tin.

Source: Toms Hardware Raspberry Pi John Park has created a cool Raspberry Pi-powered wall arcade that features multiple matrix panels as its main display.

Source: Toms Hardware Raspberry Pi NeverCode has created a Raspberry Pi Pico smart clock and shared lots of details on how you can recreate it for yourself at home.

Source: Toms Hardware Raspberry Pi Pollux Labs is using a Raspberry Pi to power this rotary phone project that integrates ChatGPT and remembers previous conversations.

Source: Toms Hardware Raspberry Pi Raspberry Pi has announced the general availability of the RP2350 A and B, the chip that powers the Raspberry Pi Pico 2, which features both Arm and RISC-V CPU cores.

Source: Toms Hardware Raspberry Pi ClockworkPi has released a cool Raspberry Pi Pico kit that lets you create a calculator suitable for handling your math homework or playing games.

Source: Toms Hardware Raspberry Pi Glossyio has created a Raspberry Pi-powered traffic monitor that uses AI to watch traffic and gather statistics on specific road users such as cyclists and pedestrians.

Source: Toms Hardware Raspberry Pi Maker 3megabytesofhotram is using a Raspberry Pi to power a voice-activated paper towel dispenser that makes it easier than ever to dry your hands.

Source: Toms Hardware Raspberry Pi Tribal2 is using a Raspberry Pi to drive this cool interactive LED world map that integrates with his smart home setup.

Source: Toms Hardware Raspberry Pi Blink twice to control the robot arm

Source: Toms Hardware Raspberry Pi The Civitas Universe has put together a cool Raspberry Pi cyberdeck that scans brains and features a cool cyberpunk theme in a custom 3D-printed case.

Source: Toms Hardware Raspberry Pi Install Windows 11 for Arm on the Raspberry Pi 5 using the simplest installation method that we have ever encountered.

Source: Toms Hardware Raspberry Pi RFID cards and tags are everywhere! We use them in buildings for access control. Printers and photocopiers can use them to identify staff members. Livestock tagging and pet identification tags all use a form of RFID. The tech to read an RFID device is cheap: for around $5 you can get the reader, and for $4 a Raspberry Pi Pico can read the IDs from the cards / tags. In this how-to, we will learn how to read RFID tags and cards using an MFRC522 reader and a Raspberry Pi Pico; the goal is to create a fictional RFID access control system that will allow users into a building, or alert security to remove them. Before we can do that, we need to identify the ID of our cards / tags. The first section of this how-to does just that, and then we will insert some code to control two LEDs to simulate the locking mechanism. For this how-to you will need: a Raspberry Pi Pico running MicroPython, an MFRC522 RFID reader, a large breadboard, 11 male-to-male jumper wires, a green LED, a red LED, and 2 x 100 Ohm resistors (brown - black - brown - gold). Building the hardware: the build is split into two sections. First is the wiring for the MFRC522 RFID reader. The reader uses SPI to communicate with the Raspberry Pi Pico and requires seven pins to do so: two for power (3.3V and GND) and the rest for SPI.
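The allow/deny logic this how-to builds toward can be sketched independently of the hardware. The sketch below models only the decision step; actually reading a card's UID would go through an MFRC522 MicroPython driver over SPI on the Pico, and the two LEDs would be driven via `machine.Pin`. All names and example UIDs here are ours, not from the article:

```python
# Sketch of the access-control decision described in the how-to.
# Reading the UID and driving the LEDs are hardware steps (MFRC522
# driver + machine.Pin on the Pico) and are only modelled in comments.

ALLOWED_IDS = {0x93A4F2C1, 0x1B2C3D4E}  # example card UIDs, made up

def check_access(card_id, allowed=ALLOWED_IDS):
    """Return the LED to light: 'green' unlocks, 'red' alerts security."""
    return "green" if card_id in allowed else "red"

# On the Pico, the main loop would read a UID from the reader and then:
#   led_green.value(check_access(uid) == "green")
#   led_red.value(check_access(uid) == "red")
```

Keeping the decision in a plain function like this also means it can be tested on a desktop before being dropped into the MicroPython loop.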

Source: Toms Hardware Raspberry Pi Arnov Sharma built a Raspberry Pi Pico studio light from scratch that can be controlled using push buttons to adjust the LEDs with precision.

Source: Toms Hardware Raspberry Pi Yaluke has created a Raspberry Pi Pico-powered protractor that can be used to calculate rotation data for simulating steering wheels when driving.

Source: Toms Hardware Raspberry Pi Arnov Sharma has created a temperature gun from scratch using a Raspberry Pi Pico 2 as the main board.

Source: Toms Hardware Raspberry Pi Arnov Sharma has put together a cool Raspberry Pi-powered handheld console for playing the classic game Snake on a Matrix.

Source: Toms Hardware Raspberry Pi Visible_Turnover3952 has created a Raspberry Pi-powered cat house with luxurious smart home features and automated systems to keep them cozy.

Source: Toms Hardware Raspberry Pi Three new carrier boards for your Compute Module 5 and the older Compute Module 4 which bring Raspberry Pi 5 accessories to the CM5, and PoE before Raspberry Pi releases its version.

Source: Toms Hardware Raspberry Pi Nicholas LaBonte is using a Raspberry Pi to power this custom cyberdeck handheld complete with custom-milled keys and wood finishing.

Source: Toms Hardware Raspberry Pi Tonight-we-ride has put together a cool Raspberry Pi music player with a touchscreen and customizable interface with Winamp.

Source: Toms Hardware Raspberry Pi Coming soon is a Kickstarter that sees the Compute Module 5 inside of a custom designed laptop.

Source: Toms Hardware Raspberry Pi Efren Lopez has created a Raspberry Pi-powered Creeper robot from the Minecraft universe complete with an AI chip and a motorized body.

Source: Toms Hardware Raspberry Pi Aforsberg has created a cool LED matrix display for their 1U server rack that's decked out like the WOPR computer from the 1983 movie War Games.

Source: Toms Hardware Raspberry Pi The Raspberry Pi RP2040 now officially supports 200 MHz operation, thanks to the latest Pico-SDK release.

Source: Toms Hardware Raspberry Pi Bicapitate has created a custom Raspberry Pi-powered 3D-printed map of Manhattan that displays the location of subway trains in real time using LEDs and optical fiber.

Source: Raspberry Pi Spy Displaying the pinout of a Raspberry Pi Pico is possible using my “picopins” script. The script displays the pinout in a colour coded format showing the location of power, ground and GPIO pins. I find it useful if I’m coding Pico projects on my laptop or Pi 400 and need to check the location of [...]
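The colour-coding idea behind a pinout script like this is easy to illustrate. The snippet below is our own toy sketch, not the actual picopins script: it maps a few real Pico pins (pin 1 is GP0, pin 36 is 3V3(OUT), pin 38 is a GND) to ANSI colour escapes, red for power, white for ground, green for GPIO:

```python
# Illustrative sketch only -- not the actual "picopins" script.
# Colour-codes a small excerpt of the Raspberry Pi Pico pinout with
# ANSI escapes: red = power, white = ground, green = GPIO.

COLOURS = {"power": "\033[31m", "ground": "\033[37m", "gpio": "\033[32m"}
RESET = "\033[0m"

# Pin number -> (name, kind); the full board has 40 pins.
PINS = {
    1: ("GP0", "gpio"),
    2: ("GP1", "gpio"),
    36: ("3V3(OUT)", "power"),
    38: ("GND", "ground"),
}

def render_pin(number):
    """Return one colour-coded line for the given physical pin."""
    name, kind = PINS[number]
    return f"{COLOURS[kind]}{number:>2} {name}{RESET}"

for n in sorted(PINS):
    print(render_pin(n))
```

Run in a terminal, each line prints in the colour of its pin class, which is exactly the at-a-glance effect the script above provides for the whole 40-pin header.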

Source: Raspberry Pi Spy This guide explains how to disable auto-login on Raspberry Pi OS. By default when you install the Raspberry Pi OS with the desktop it will auto-login when you power-up the Pi. This is really convenient for lots of projects as it gets you straight to the desktop. If you are using your Pi as a [...]


IoT - Internet of Things / IdO - Internet des Objets

Source: Home Assistant (Blog officiel) Home Assistant 2025.11! November is here, and we’ve been hard at work refining some of the main experiences that you interact with every day, and I think you’re going to love what we’ve built. My personal favorite this release? The brand new target picker. It’s one of those changes that seems simple on the surface, but makes such a huge difference in how you build automations. You can finally see exactly what you’re targeting, with full context about which device an entity belongs to and which area it’s in. No more guessing whether you’re controlling the right ceiling light when you have three of them! But that’s just the beginning. We’re continuing with the automation editor improvements, this time with a completely redesigned dialog for adding triggers, conditions, and actions. It’s cleaner, easier to read, and sets the foundation for some really exciting stuff coming in future releases. And speaking of making things clearer, you can now control exactly how entity names appear on your dashboard cards. Want to show just the entity name? The device name? The area? Or combine them? Even if you rename things, your dashboards will stay perfectly in sync. No more manual updates needed! Oh, and energy dashboard fans will appreciate the new pie chart view for device energy, complete with totals displayed in the corner of every energy card. Enjoy the release! ../Frenck PS: Oh, and pssst… Don’t tell anyone , but there might be something exciting being released on November 19th. Hit the bell on this announced YouTube stream to not miss it. Stay tuned! 
A huge thank you to all the contributors who made this release possible! And a special shout-out to @bramkragten, @JLo, @MindFreeze, @agners, and @piitaya who helped write the release notes this release. Also, @silamon and @GemPolisher for putting effort into tweaking its contents. Thanks to them, these release notes are in great shape. A brand new target picker Have you ever been building an automation and wondered, “Wait, which ceiling light is this?” when you see three entities all named “Ceiling light”? Or tried to figure out how many lights you’re actually controlling when you target an entire floor or area? We’ve all been there. Until now, the target picker didn’t show you the full picture. You couldn’t see which device an entity belonged to or which area it was assigned to. And when you selected a floor or area as your target, you had no idea how many entities you were actually affecting. This uncertainty meant many of you stuck with targeting individual entities, even though larger targets (like areas and floors) can make your automations much more flexible. The new target picker changes all that. Now you get full context for everything you’re targeting, and you can see exactly how many entities will be affected by your action. Want to dig deeper?
You can expand any floor, area, or device to see exactly which entities are included and where they’re coming from. This makes it so much easier to build automations that scale with your home. When you target an area or floor, your automation automatically adapts as you add or remove devices. No more updating your automations every time you add a new light or sensor. Your automations just work, which is exactly how it should be. A brand new way to add triggers, conditions, and actions in your automations It’s no secret that we’re currently working hard on making automations easier to create. After the release of the automation sidebar two releases ago, we are now introducing a new dialog to add triggers, conditions, and actions. The changes are purely cosmetic: the dialog is bigger, so the description of each block is simpler to read, with a two-pane layout to ease both navigation and block selection. The building blocks (which are used to perform more complex conditions or sequences of actions, such as repeating actions or branching out your sequence into multiple paths) have been moved into the main dialog on a second tab. There is now a single entry point to add something to an automation instead of two, greatly reducing the number of buttons in complex automations. As mentioned above, these changes are purely cosmetic, for now! But this new dialog is the foundation of what’s coming next, and we cannot wait to present that to you once it finally lands. Naming entities on your dashboard A few releases ago, we gave the entity picker a big upgrade by adding more context so you could easily see where each entity belongs (May 2025 release). In this release, we’re bringing that same flexibility to your dashboards. You can now choose how names appear on your cards: show the entity, device, area, floor, or even combine them. This gives you full control over how your dashboards look and feel. 
For example, in a dedicated section for a specific device, you might choose to display only the entity name to avoid repeating the device name on every card. Of course, you can still set a custom name if you want complete control over the text shown. And the best part? If you rename an entity or device, your dashboards will automatically stay in sync. No more manual edits needed; everything just updates itself. Energy pie We’ve added a new layout to the devices energy graph: “pie” . You can toggle between the regular bar chart and the new pie chart by clicking the icon in the top-right corner. Doing this made the top-right corner of the other energy cards feel empty, so we used that space to display the total energy for the selected period. For example, if the date picker is set to today, the total solar energy for today will be displayed in the corner of the solar production graph card. Progress for Home Assistant and Add-on updates With this release, you can now track the progress of updates to Home Assistant and Add-ons (managed by the Supervisor)! The progress includes the stages of downloading and unpacking, so the time required will vary based on your internet speed, CPU performance, and system load. As a result, the progress is not reflected as perfectly linear, but it does still provide a good estimate of how far along the update is. Integrations Thanks to our community for keeping pace with the new integrationsIntegrations connect and integrate Home Assistant with your devices, services, and more. [Learn more] and improvements to existing ones! You’re all awesome. New integrations We welcome the following new integrations in this release: Actron Air, added by @kclif9 Sunricher DALI, added by @niracler Sunricher DALI, a platform for managing and monitoring DALI-based lighting systems. Fing, added by @Lorenzo-Gasparini Fing integration provides network scanning, device detection, and presence monitoring capabilities using the Fing platform. 
Firefly III, added by @erwindouna Firefly III project, a free open source personal finance manager with full transaction management, budgets, categories, and reports. iNELS, added by @epdevlab iNELS smart home system to manage lighting, heating, and automation components for enhanced home control. Lunatone Gateway, added by @MoonDevLT Lunatone Gateway, enabling control and monitoring of DALI lighting systems through Lunatone’s DALI gateway interface. Meteo.lt, added by @xE1H Lithuanian Hydrometeorological Service (LHMT) to provide regional weather forecasts for locations in Lithuania. Nintendo Parental Controls, added by @pantherale0 Nintendo Parental Controls integration connects with Nintendo’s parental management service, allowing you to monitor and manage device usage and restrictions. OpenRGB, added by @felipecrs OpenRGB integration allows unified control of RGB lighting across various hardware brands and devices through the OpenRGB project. Noteworthy improvements to existing integrations It’s not just new integrationsIntegrations connect and integrate Home Assistant with your devices, services, and more. [Learn more] that have been added; existing integrations are also being constantly improved. Here are some of the noteworthy changes: The SwitchBot integration now supports garage door openers. Thanks @zerzhang! @tr4nt0r added support for notifications to the Habitica integration. Nice work! The VegeHub integration now has support for switches to control actuators. Cool @Thulrus! The Portainer integration gained support for switches, buttons, and sensors, so you can control and monitor all your containers! Well done @erwindouna! The Volvo integration can now show the location of your car and has buttons to control it. We got @thomasddn to thank for that! ElevenLabs can now be used for speech-to-text. Thanks @ehendrix23! You can now control the LEDs of supported UniFi network devices! Thanks @Sese-Schneider! 
@barneyonline added binary sensors to the Yardian integration. Nice! You can now set the temperature of your 3D printer’s tool and bed with the OctoPrint integration. Thanks @AmadeusW! The Niko Home Control integration now also adds your scenes into Home Assistant! Thanks @VandeurenGlenn! Your Control4 climate devices (for example, thermostats) are now supported in Home Assistant. Thanks @davidrecordon! Support for controlling Growatt MIN/TLX inverters was added, and you can now enable grid charge! Thanks @johanzander! @hanwg added event entities to the Telegram bot integration. You can use these entities to more easily automate when you get a message, for example! Cool! The Xbox integration now has support for images! It shows an image of the game you are currently playing, the avatar, and the Gamerpic for yourself and your friends. Thanks @tr4nt0r! @AndyTempel added support for solar production forecasting to Victron Remote Monitoring, so you can now use it in the energy dashboard to see a forecast of how much solar energy you will produce today! The Shelly integration now supports climate and valve entities. Thanks @thecode! @starkillerOG improved the Reolink integration; it can now report bicycles and the type of person, vehicle, and animal. So you now know if a man or a woman is detected on your cameras. Great work! Now available to set up from the UI While most integrationsIntegrations connect and integrate Home Assistant with your devices, services, and more. [Learn more] can be set up directly from the Home Assistant user interface, some were only available using a YAML configuration. We keep moving more integrations to the UI, making them more accessible for everyone to set up and use. The following integration is now available via the Home Assistant UI: London Underground by @HarvsG Integration quality scale achievements One thing we are incredibly proud of in Home Assistant is our integration quality scale. 
This scale helps us and our contributors to ensure integrations are of high quality, maintainable, and provide the best possible user experience. This release, we celebrate several integrationsIntegrations connect and integrate Home Assistant with your devices, services, and more. [Learn more] that have improved their quality scale: Seven integrations reached platinum APC UPS Daemon, thanks to @yuxincs IMGW-PIB, thanks to @bieniu LG WebOS TV, thanks to @thecode Mealie, thanks to @andrew-codechimp NextDNS, thanks to @bieniu ntfy, thanks to @tr4nt0r Volvo, thanks to @thomasddn Four integrations reached silver 1-wire, thanks to @epenet Ubiquiti airOS, thanks to @CoMPaTech LetPot, thanks to @jpelgrom Switcher, thanks to @thecode This is a huge achievement for these integrations and their maintainers. The effort and dedication required to reach these quality levels is significant, as it involves extensive testing, documentation, error handling, and often complete rewrites of parts of the integration. A big thank you to all the contributors involved! Farewell to the following The following integrationsIntegrations connect and integrate Home Assistant with your devices, services, and more. [Learn more] are no longer available as of this release: Vultr has been removed. The integration has not been working since the API v1 that it used was taken offline in September 2023. IBM Watson IoT Platform has been removed. On September 8, 2020, IBM announced the withdrawal of its support for the IBM Watson IoT Platform and successively discontinued all versions until September 30, 2022. Plum Lightpad has been removed. Their servers have been shut down, which made the integration non-functional. Other noteworthy changes There are many more improvements in this release; here are some of the other noteworthy changes: @thecode added group support for valves, so you can group multiple valves into one. 
Searching in data tables got a lot better; you can now search over multiple columns at once. Thanks @wendevlin! Energy graphs now show the total of the period in the top-right corner. Great addition, @MindFreeze! Thanks to @karwosts, you can now use images from any integration providing images for your dashboard background. Improved logging efficiency If you’re using the Home Assistant Operating System, we have some great news for you! We’ve made our logging system way more efficient. You might not realize it, but all those Home Assistant logs you can find in Settings > System > Logs were actually being stored on your disk twice. Home Assistant OS keeps all logs for everything, including Home Assistant itself, in a very efficient way, even across restarts! But on top of that, we were also writing them to a log file in your Home Assistant configuration folder. That’s not ideal. It takes twice the disk space, but more importantly, it causes unnecessary wear on your storage medium, which means it will fail sooner. This is especially concerning if you’re using an SD card in, for example, a Raspberry Pi. As of this release, we’ve stopped writing logs to the configuration folder. You can still view and download all logs from the Home Assistant settings page, just like before. We’ve adapted that page to read the logs from the OS directly instead. Tip Are you more into the command line? No worries, our Home Assistant CLI has you covered. Check it out by running ha core logs --help for more information. The new Home Dashboard keeps getting smarter Following the improvements introduced in the latest releases, this release makes the experience even smoother and more intuitive. We’ve simplified and reorganized things: Suggested entities and favorites are now combined into a single, smart section, showing you what’s most relevant in one place. Areas are now grouped by floor, making it easier to browse and understand your home’s layout at a glance. 
The Lights, Climate, and Security views have been moved to their own dedicated dashboards, so you can access them directly under Settings > Dashboards. These dashboards now also include devices that aren’t assigned to any specific area, ensuring nothing is overlooked. These improvements bring everything together more naturally, helping your Home Dashboard feel less like a setup and more like a true reflection of your home.

Patch releases

We will also release patch releases for Home Assistant 2025.11 in November. These patch releases only contain bug fixes. Our goal is to release a patch release once a week, aiming for Friday.

2025.11.1 - November 7

- Improve scan interval for Airthings Corentium Home 2 (@LaStrada - #155694)
- Remove @progress_step decorator from ZHA and Hardware integration (@puddly - #155867)
- Fix KNX Climate humidity DPT (@farmio - #155942)
- Truncate password before sending it to bcrypt (@cdce8p - #155950)
- Fix for corrupt restored state in miele consumption sensors (@astrandb - #155966)
- Handle empty fields in SolarEdge config flow (@tronikos - #155978)
- Fix SolarEdge unload failing when there are no sensors (@tronikos - #155979)
- Bump aioamazondevices to 8.0.1 (@chemelli74 - #155989)
- Fix Growatt integration authentication error for legacy config entries (@johanzander - #155993)
- Bump tuya-device-sharing-sdk to 0.2.5 (@epenet - #156014)
- Bump onedrive-personal-sdk to 0.0.16 (@zweckj - #156021)
- Fix the exception caused by the missing Foscam integration key (@Foscam-wangzhengyu - #156022)
- Bump intents to 2025.11.7 (@synesthesiam - #156063)

2025.11.2 - November 14

- Bump cronsim to 2.7 (@dgomes - #155648)
- Avoid firing discovery events when flows immediately create a config entry (@puddly - #155753)
- Remove arbitrary forecast limit for meteo_lt (@xE1H - #155877)
- Fix progress step bugs (@emontnemery - #155923)
- Make sure to clean register callbacks when mobile_app reloads (@TimoPtr - #156028)
- Bump pyportainer 1.0.13 (@erwindouna - #155783)
- Bump pyportainer 1.0.14 (@erwindouna - #156072)
- Log HomeAssistantErrors in ZHA config flow (@TheJulianJES - #156075)
- Bump aio-ownet to 0.0.5 (@jrieger - #156157)
- Fix MFA Notify setup flow schema (@abmantis - #156158)
- Update xknx to 3.10.1 (@farmio - #156177)
- Forbid to choose state in Ukraine Alarm integration (@PaulAnnekov - #156183)
- Fix set_absolute_position angle (@starkillerOG - #156185)
- Fix config flow reconfigure for Comelit (@chemelli74 - #156193)
- Bump pyvesync to 3.2.1 (@cdnninja - #156195)
- Fix Climate state reproduction when target temperature is None (@mib1185 - #156220)
- Foscam Integration with Legacy Model Compatibility (@Foscam-wangzhengyu - #156226)
- Bump pypalazzetti lib from 0.1.19 to 0.1.20 (@dotvav - #156249)
- Bump pySmartThings to 3.3.2 (@joostlek - #156250)
- Correct migration to recorder schema 51 (@emontnemery - #156267)
- Improve logging of failing miele action commands (@astrandb - #156275)
- Ituran: Don’t cache properties (@shmuelzon - #156281)
- tplink: handle repeated, unknown thermostat modes gracefully (@rytilahti - #156310)
- Check collation of statistics_meta DB table (@emontnemery - #156327)
- Fix support for Hyperion 2.1.1 (@antoniocifu - #156343)
- Update pyMill to 0.14.1 (@Danielhiversen - #156396)
- Prevent sensor updates caused by fluctuating “last seen” timestamps in Xbox integration (@tr4nt0r - #156419)
- Fix update progress in Teslemetry (@Bre77 - #156422)
- Bump pyvesync to 3.2.2 (@cdnninja - #156423)
- Fix lamarzocco update status (@zweckj - #156442)
- Add firmware flashing debug loggers to hardware integrations (@puddly - #156480)
- URL-encode the RTSP URL in the Foscam integration (@Foscam-wangzhengyu - #156488)
- Update Home Assistant base image to 2025.11.0 (@sairon - #156517)
- Bump pySmartThings to 3.3.3 (@joostlek - #156528)
- Update bsblan to python-bsblan version 3.1.1 (@liudger - #156536)
- Bump reolink-aio to 0.16.5 (@starkillerOG - #156553)
- Bump python-open-router to 0.3.3 (@joostlek - #156563)
- Bump ZHA to 0.0.78 (@TheJulianJES - #155937)
- Bump ZHA to 0.0.79 (@TheJulianJES - #156571)
- Fix sfr_box entry reload (@epenet - #156593)
- Fix model_id in Husqvarna Automower (@Thomas55555 - #156608)
- Add debounce to Alexa Devices coordinator (@chemelli74 - #156609)

2025.11.3 - November 21

- Cache token info in Wallbox (@hesselonline - #154147)
- Bump version of python_awair to 0.2.5 (@averybiteydinosaur - #155798)
- Fix args passed to check_config script (@tmonck - #155885)
- Update methods to non-deprecated methods in vesync (@cdnninja - #155887)
- Fix wrong BrowseError module in Kodi (@charrus - #155971)
- Bump universal-silabs-flasher to v0.1.0 (@puddly - #156291)
- Reset state on error during VOIP announcement (@jaminh - #156384)
- Bump pyiCloud to 2.2.0 (@PaulCavill - #156485)
- Fix is_matching in samsungtv config flow (@FredrikM97 - #156594)
- Bump async-upnp-client to 0.46.0 (@edenhaus - #156622)
- Bump tplink-omada-api to 1.5.3 (@MarkGodwin - #156645)
- Fix missing description placeholders in MQTT subentry flow (@jbouwh - #156684)
- Fix missing temperature_delta device class translations (@jbouwh - #156685)
- Bump ohmepy and remove advanced_settings_coordinator (@dan-r - #156764)
- Fix blocking call in cync (@epenet - #156782)
- Lamarzocco fix websocket reconnect issue (@zweckj - #156786)
- Fix hvv_departures to pass config_entry explicitly to DataUpdateCoordinator (@Copilot - #156794)
- Bump aioautomower to 2.7.1 (@Thomas55555 - #156826)
- Bump pySmartThings to 3.3.4 (@joostlek - #156830)
- Bump universal-silabs-flasher to 0.1.2 (@puddly - #156849)
- Bump onedrive-personal-sdk to 0.0.17 (@zweckj - #156865)
- Bump aiounifi to 88 (@Sese-Schneider - #156867)
- Rework CloudhookURL setup for mobile app (@TimoPtr - #156940)
- Bump go2rtc to 1.9.12 and go2rtc-client to 0.3.0 (@edenhaus - #156948)
- Update frontend to 20251105.1 (@bramkragten - #156992)
- Throttle Decora wifi updates (@joostlek - #156994)

Need help? Join the community

Home Assistant has a great community of users who are all more than willing to help each other out. So, join us!
Our very active Discord chat server is an excellent place to be, and don’t forget to join our amazing forums. Found a bug or issue? Please report it in our issue tracker to get it fixed! Or check our help page for guidance on more places you can go.

Source: Home Assistant (Official blog) Home Assistant 2026.2! February is the month of love, and this release is here to share it! The new Home Dashboard is now the official default for all new installations. If you’ve been using Home Assistant for a while and never customized your default view, you’ll get a suggestion to switch; give it a try! I also need your help! The Open Home Foundation device database is being built as a community-powered resource to help everyone make informed decisions about smart home devices. Head to Home Assistant Labs to opt in and contribute your anonymized device data. Add-ons are now called Apps! After a lot of community discussion, it was time to use terminology that everyone understands. Your TV has apps, your phone has apps, and now Home Assistant has apps too. My personal favorite this release? The completely redesigned Quick search! If you’re like me and navigate Home Assistant using your keyboard, you’re going to love this one. Press ⌘ + K (or Ctrl + K on Windows/Linux) and you have instant access to everything. Enjoy the release! ../Frenck

A huge thank you to all the contributors who made this release possible! And a special shout-out to @laupalombi and @mkerstner, who helped write the release notes this release. Also, @wollew, @Diegorro98, and @MindFreeze for putting effort into tweaking its contents. Thanks to them, these release notes are in great shape.

A new way to view your home

The Home Dashboard is now called Overview, as it becomes the official default, replacing the old “Overview” for all new instances. If you’re a long-time user who never customized your default view, we’ll suggest the switch to you; otherwise, you can find it in Settings > Dashboards to try it out whenever you’re ready. Liked the old Overview as a way to build your custom dashboards? You can still do it. Go to Settings > Dashboards, select Create, and pick the Overview (legacy) template.

Discovered devices at a glance

Check out the new card in the For You section! It instantly displays any new devices your Home Assistant has discovered, allowing you to add them on the spot or jump straight to device management without digging through menus.

Area assignments made easy

In the last release, we added a dedicated Devices area within the Home Dashboard to catch everything currently unassigned. Now this section provides quick prompts to help you categorize your devices into the right rooms, keeping your setup organized with minimal effort.

Faster area edits

Need to swap the area temperature sensor? Area pages now feature a shortcut in the Edit button. This lets you jump straight to the area’s configuration to update primary sensors like humidity or temperature in seconds. We’ve also tidied up the interface by removing awkward empty spaces and fixing issues with some back arrows. Navigating through your sub-menus should now feel as smooth and predictable as you’d expect.
UX and visual upgrades

Modern look in the default theme: We’ve retired the old blue top bar in favor of a clean, consistent theme that matches our Settings page. This distraction-free design lets your cards and data take center stage.

Personalized themes per user: Themes have moved! You can now find and toggle your favorite looks directly within your User profile, making it easier to set up a theme that works for you on any device you’re logged in to.

Device database: We need your help!

Finding reliable information about smart home devices before you buy them can be challenging. That’s why we’re building the Open Home Foundation device database: a community-powered resource that helps you make informed decisions based on real-world data. We’ve been working with early contributors to lay the groundwork, and the results are already impressive: over 10,000 unique devices across more than 260 integrations have been submitted by Home Assistant users who opted in to share their anonymized data.

Help us out and share your devices

Since we’re still in the early stages, the device database lives in Home Assistant Labs, where you can opt in to share anonymized information about the devices in your home. We have also added a new section called Device analytics to Home Assistant Analytics, which shows up when you enable it in Home Assistant Labs. If you opt in, you are, of course, able to opt out at any time. Privacy is our foundation. We collect zero personal data, period. Only aggregated, anonymized device information is shared if someone chooses to opt in, providing valuable insights while keeping your privacy intact. You can preview what is being sent using the Preview device analytics option available in the top-right corner on the Analytics page. Read our Data Use Statement for complete details.

See the data in action

We’ve launched an initial public dashboard where you can explore aggregated statistics as the database grows. This is just our first step.
We want to build what comes next together with you.

Join us in building something meaningful

Head to Settings > System > Labs to enable device analytics and start contributing your real-world anonymized device data to help others make better choices. Read our blog post for more details and join the conversation in our Discord project channel; we’d love to hear your ideas, feedback, and questions as we shape this resource together.

Add-ons are now called Apps

Starting with this release, add-ons are now called apps! You might be wondering: why change the name? The answer comes down to making Home Assistant more approachable for everyone, especially newcomers. When you first open Home Assistant, you see two sections that sound very similar: “Add-ons” and “Integrations.” Both names imply something you add to extend Home Assistant, but they serve fundamentally different purposes. For those of us who’ve been in the ecosystem for a while, this distinction is second nature. But we keep seeing new users getting confused, attempting to install add-ons when they need integrations, or vice versa. This is where the rename helps: use terminology that people already understand. Most people know what an “app” is. You open your phone’s app store, you pick an app, you install it. Your TV has an app store. Your NAS has apps. Heck, even some fridges have apps these days. It’s a concept everyone understands. The same mental model now applies to Home Assistant: Apps are standalone applications that run alongside Home Assistant. Integrations connect Home Assistant to your devices and services. Apps are separate software managed by your Home Assistant Operating System, running next to Home Assistant itself. They can be things like code editors, media servers, MQTT brokers, or database tools. Some apps even pair with integrations: for example, the Mosquitto MQTT broker app provides the service, while the MQTT integration connects Home Assistant to it.
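To make the app/integration split concrete: the Mosquitto app runs the MQTT broker itself, while configuration on the integration side tells Home Assistant what to do with the messages it receives. A minimal sketch of an MQTT sensor in configuration.yaml (the topic and sensor name below are hypothetical, for illustration only):

```yaml
# Sketch only: the Mosquitto *app* provides the broker, the MQTT
# *integration* subscribes to topics on it. Topic and name are made up.
mqtt:
  sensor:
    - name: "Garden temperature"
      state_topic: "garden/temperature"
      unit_of_measurement: "°C"
      device_class: temperature
```

Anything publishing to that topic on the broker then shows up as a regular sensor entity in Home Assistant.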
Existing documentation, community posts, and tutorials will continue to reference “add-ons” for some time. Search engines and AI assistants will also need time to catch up. We’ve put redirects in place to ensure that searching for “add-ons” will still get you where you need to go. Thank you to everyone who participated in the community discussion and architecture proposal. Whether you supported the idea, pushed back, or landed somewhere in between, your feedback was invaluable.

A faster, snappier Apps panel

Besides the rename, we did a major refactoring under the hood of the Apps panel (formerly known as the Add-ons panel) in this release. Previously, this panel was served by a separate process (the Supervisor), but it has now been fully integrated into the Home Assistant frontend. You shouldn’t notice much of a difference visually, but the panel is now much faster and snappier to use. More importantly, this change makes future development on Apps significantly easier, paving the way for more improvements down the road.

Purpose-specific triggers and conditions progress

In Home Assistant 2025.12, we introduced purpose-specific triggers and conditions. Instead of thinking in technical state changes, you can simply pick things like “When a light turns on” or “If the climate is heating” when building your automations. In Home Assistant 2026.1, we added more triggers and laid the groundwork for conditions. This feature is still being refined in Home Assistant Labs, but we continue to expand it with every release. This release brings a mix of new triggers and, for the first time, a whole set of purpose-specific conditions!

New triggers

The following new triggers have been added in this release:

- Calendar triggers fire when a calendar event starts or ends.
- Person triggers now cover when a person arrives home or leaves home.
- Vacuum triggers fire when a vacuum cleaner returns to its dock.

New conditions

Purpose-specific conditions are expanding!
In the previous release, we introduced the first purpose-specific condition for lights. This release adds a whole set of new conditions across many more entity types. Just like triggers, conditions now allow you to express your intent in a more natural way. Instead of checking if the state of an entity equals a specific value, you can now simply ask “If the climate is heating” or “If the lock is locked”. The following purpose-specific conditions are now available:

- Alarm control panel conditions check if the alarm is armed (home, away, night, or vacation), disarmed, or triggered.
- Assist satellite conditions check if your voice assistant satellites are idle, listening, processing, or responding.
- Climate conditions check if the climate device is on, off, heating, cooling, or drying.
- Device tracker conditions check if a device is home or not home.
- Fan conditions check if a fan is on or off.
- Humidifier conditions check if a humidifier is on, off, humidifying, or drying.
- Lawn mower conditions check if your lawn mower is mowing, docked, paused, returning, or encountering an error.
- Lock conditions check if a lock is locked, unlocked, open, or jammed.
- Media player conditions check if a media player is on, off, playing, paused, or not playing.
- Person conditions check if a person is home or not home.
- Siren conditions check if a siren is on or off.
- Switch conditions check if a switch is on or off.
- Vacuum conditions check if a vacuum is cleaning, docked, paused, returning, or encountering an error.

Head over to Settings > System > Labs to enable purpose-specific triggers and conditions and give them a try!

A brand new card: The distribution card

Meet the distribution card, a brand new dashboard card that visualizes how values are distributed across multiple entities. It displays your data as a proportional horizontal bar chart with an interactive legend, perfect for seeing at a glance where your power, storage, or any other measurable quantity is going.
The card is fully interactive: select legend items to hide or show entities (the percentages recalculate dynamically), and select bar segments to open the more-info dialog for that entity. When you have many entities, the legend shows the first items with a More button to expand the rest. The distribution card is smart about what you can combine. It validates that all entities share the same domain and device class, so you won’t accidentally mix power sensors with battery sensors. It even handles related units gracefully: mixing watts and kilowatts works just fine. Some ideas for how you might use it:

- Power monitoring: See which circuits or appliances are consuming the most electricity right now.
- Storage usage: Visualize how storage is distributed across drives or folders.
- Any proportional data: Compare any group of entities with the same unit.

Thanks to @jlpouffier for building this card!

Quick search: The fastest way to anything

We continue to make it easier to access and find things in Home Assistant. The quick bar has been completely redesigned and is now simply called Quick search. Think of it as the command center for your entire Home Assistant: navigate anywhere, run commands, find entities, devices, or areas, all from a single, unified search. Open Quick search from anywhere by pressing ⌘ + K on macOS or Ctrl + K on Windows and Linux. The new design features category filters at the top: Navigate, Commands, Entities, Devices, and Areas. Select a filter to instantly narrow your results, or just start typing to search across everything. Full keyboard navigation makes Quick search a power user’s friend. Use the arrow keys to move through results, Enter to select, and Esc to close. On mobile, you can assign Quick search to a gesture for one-tap access.

Your favorite shortcuts still work

If you’ve been using the single-key shortcuts from the old quick bar, they still work!
The difference is that they now open Quick search with the corresponding filter already selected:

- e opens Quick search with the Entities filter
- d opens Quick search with the Devices filter
- c opens Quick search with the Commands filter
- a still opens Assist directly
- m still creates a My link for the current page (unrelated, but still a useful mention!)

This means your muscle memory is preserved while you get access to all the new capabilities.

Integrations

Thanks to our community for keeping pace with the new integrations and improvements to existing ones! You’re all awesome.

New integrations

We welcome the following new integrations in this release:

- Cloudflare R2, added by @corrreia. R2 offers generous free tier storage with no egress fees, making it an affordable option for keeping your backups safe in the cloud.
- Green Planet Energy, added by @petschni. Monitor hourly prices and optimize your energy consumption by shifting it to cheaper hours.
- HDFury, added by @glenndehaan, supports HDFury HDMI video processing devices, like the VRROOM and Diva. Manage HDMI port selection, operation modes, audio muting, and monitor input/output signal status.
- NRGkick, added by @andijakl, connects to the NRGkick Gen2 mobile EV charger locally. Track charging status, energy consumption, power flow across all phases, and device temperatures without requiring a cloud connection.
- Prana, added by @prana-dev-official, supports Prana heat recovery ventilation systems. Prana HRV units provide balanced mechanical ventilation with energy-efficient heat exchange, and you can now control and monitor them directly from Home Assistant.
- uHoo, added by @getuhoo and @joshsmonta, supports uHoo indoor air quality monitors to track temperature, humidity, CO2, PM2.5, and other air quality metrics. Also includes proprietary health indices for virus and mold risk.
Noteworthy improvements to existing integrations

It is not just new integrations that have been added; existing ones are also being constantly improved. Here are some of the noteworthy changes to existing integrations:

- The ESPHome integration now supports water heater devices! Thanks, @dhoeben, for adding this!
- The Music Assistant integration now supports pre-announce URLs, thanks to @arturpragacz. Use your custom announcement sounds before your text-to-speech message plays!
- @fr33mang made it possible to play your “Liked Songs” collection directly in the Spotify integration. No more searching for that special playlist.
- The Sonos integration now shows your podcast favorites in the media browser, thanks to @divers33. May we recommend the Home Assistant Podcast?
- @starkillerOG added a new pet chime option to the Reolink integration. Now you can trigger a special chime when your furry friends are at the door!
- The SmartThings integration now supports audio notifications, thanks to @vmonkey.
- @Lash-L improved the Roborock integration by adding sensors for the dock water box status. Nice!
- The Tibber integration received several enhancements from @Danielhiversen: new binary sensors for EV charger status, additional temperature and grid sensors, and more EV settings to fine-tune your charging experience.
- @LG-ThinQ-Integration added support for controlling humidifiers and dehumidifiers in the LG ThinQ integration. Thanks!
- Thanks to @ptarjan, the Hikvision integration now has camera support! You can view snapshots and streams from your Hikvision cameras and NVRs directly in Home Assistant.
- @cdnninja added PM1 and PM10 air quality sensors to the VeSync integration. Nice!
- The Bang & Olufsen integration received battery support from @mj23000. You can now monitor battery levels and charging status for your portable Beosound speakers and Beoremote One remotes.
- @erwindouna enhanced the Portainer integration with a new prune images button and a state sensor. Awesome!
- Thanks to @klaasnicolaas, the Powerfox integration now supports gas meters alongside electricity meters.
- @terop added an Indoor Air Quality Score (IAQS) sensor to the Ruuvi integration. Great!
- @pandanz added an ambient temperature sensor to the ToGrill integration. Keep an eye on the temperature around your grill, not just inside it!
- @tr4nt0r added support for sequence IDs to the ntfy integration, allowing notifications to be updated, and added two new actions to dismiss and delete notifications.

Integration quality scale achievements

One thing we are incredibly proud of in Home Assistant is our integration quality scale. This scale helps us and our contributors ensure integrations are of high quality, maintainable, and provide the best possible user experience. This release, we celebrate several integrations that have improved their quality scale:

3 integrations reached platinum:
- Airobot, thanks to @mettolen
- Duck DNS, thanks to @tr4nt0r
- Saunum, thanks to @mettolen

4 integrations reached silver:
- Feedreader, thanks to @mib1185
- NINA, thanks to @DeerMaximum
- Velbus, thanks to @cereal2nd
- Velux, thanks to @wollew

1 integration reached bronze:
- TP-Link Omada, thanks to @MarkGodwin

This is a huge achievement for these integrations and their maintainers. The effort and dedication required to reach these quality levels is significant, as it involves extensive testing, documentation, error handling, and often complete rewrites of parts of the integration. A big thank you to all the contributors involved!

Now available to set up from the UI

While most integrations can be set up directly from the Home Assistant user interface, some were only available using YAML configuration. We keep moving more integrations to the UI, making them more accessible for everyone to set up and use. The following integrations are now available via the Home Assistant UI:

- Namecheap DynamicDNS, done by @tr4nt0r
- OpenEVSE, done by @c00w
- Proxmox VE, done by @erwindouna
- WaterFurnace, done by @masterkoppa

Other noteworthy changes

There are many more improvements in this release; here are some of the other noteworthy changes:

- The Developer tools have been moved to the Settings area. This change keeps all administrative and system tools in one central location, making the interface cleaner and more consistent. We understand this might take some getting used to, and we hear you! We’re actively exploring adding full sidebar menu customization capabilities in the future, giving you the flexibility to organize your navigation exactly the way you want it.
- Dashboards now support calendar colors! Pick a color for each calendar, and it will show up in your calendar cards. The Google Calendar integration already supports this feature, thanks to @Misiu.
- @karwosts added live inline template previews to the template editor. As you type, you can instantly see the result of your template without needing to manually refresh.
- The sidebar now features a subtle scroll fade effect and keeps Settings always visible at the bottom, so you never have to scroll to find it. Thanks, @ildar170975!
- @MindFreeze added tap action and image tap action options to the area card, giving you more control over what happens when you interact with your areas.
- The entity card now supports actions, thanks to @ildar170975. Configure tap, hold, or double-tap actions to trigger anything you want directly from the card.
- @Thomas55555 added parts per billion (ppb) as a valid unit of measurement for sulfur dioxide sensors and number entities.
The Energy dashboard now supports power sensors in other formats without the need for a template sensor, thanks to @MindFreeze. You can now use a single sensor with an inverted polarity for grid or battery. You can also configure two separate positive sensors for charge and discharge (or import/export).

Add buttons to your heading card

The heading card now supports button badges, giving you a new way to add quick actions right alongside your section headings. Display an icon, text, or both, pick a custom color, and configure tap, hold, or double-tap actions to trigger anything you want. You can also set visibility conditions to show or hide buttons based on entity states. Combined with the existing entity badges, this makes the heading card a versatile anchor for your dashboard sections, whether you want to display status information, provide quick controls, or both. Thanks to @piitaya for this addition!

Pick specific entities in your area card

The area card now lets you select individual entities as control buttons, not just entire types of entities like all lights or all switches in the area. Previously, adding a light control meant showing all lights in the area. Now you can pick exactly which entities appear. Great job, @MindFreeze!

Patch releases

We will also release patch releases for Home Assistant 2026.2 in February. These patch releases only contain bug fixes. Our goal is to release a patch release once a week, aiming for Friday.
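The Energy dashboard change mentioned above removes the need for the old workaround of flipping an inverted grid sensor with a template sensor. For reference, that workaround looked roughly like this (a sketch only; sensor.grid_power and the name are hypothetical):

```yaml
# Sketch of the template-sensor workaround previously needed to flip
# an inverted grid power sensor. Entity names here are made up.
template:
  - sensor:
      - name: "Grid power (corrected)"
        unit_of_measurement: "W"
        device_class: power
        state: "{{ -1 * (states('sensor.grid_power') | float(0)) }}"
```

With this release, the inverted sensor can be assigned to the Energy dashboard directly instead.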
2026.2.1 - February 6

- Fix redundant off preset in Tuya climate (@epenet - #161040)
- Fix device_class of backup reserve sensor (@jonootto - #161178)
- Bump evohome-async to 1.1.3 (@zxdavb - #162232)
- Bump google_air_quality_api to 3.0.1 (@Thomas55555 - #162233)
- Bump denonavr to 1.3.2 (@ol-iver - #162271)
- Fix multipart upload to use consistent part sizes for R2/S3 (@corrreia - #162278)
- Add mapping for stopped state to denonavr media player (@ol-iver - #162283)
- Fix unicode escaping in MCP server tool response (@luochen1990 - #162319)
- Bump pyenphase to 2.4.5 (@catsmanac - #162324)
- Fix Shelly Linkedgo Thermostat status update (@thecode - #162339)
- Update pynintendoparental requirement to version 2.3.2.1 (@pantherale0 - #162362)
- Fix conversion of data for todo.* actions (@boralyl - #162366)
- Bump python-smarttub to 0.0.47 (@mdz - #162367)
- Add missing config flow strings to SmartTub (@mdz - #162375)
- Remove entity id overwrite for ambient station (@joostlek - #162403)
- Bump librehardwaremonitor-api to version 1.9.1 (@Sab44 - #162409)
- Remove double unit of measurement for yardian (@joostlek - #162412)
- Fix invalid yardian snapshots (@epenet - #162422)
- Make bad entity ID detection more lenient (@arturpragacz - #162425)
- Bump aioamazondevices to 11.1.3 (@jamesonuk - #162437)

2026.2.2 - February 13

- Bump essent-dynamic-pricing to 0.3.1 (@jaapp - #160958)
- Fix AsyncIteratorReader blocking after stream exhaustion (@ElCruncharino - #161731)
- Fix absolute humidity sensor on HmIP-WGT glass thermostats (@lackas - #162455)
- Fix device_class of backup reserve sensor in teslemetry (@Bre77 - #162458)
- Fix device_class of backup reserve sensor in Tessie (@Bre77 - #162459)
- Fix JSON serialization of time objects in OpenAI tool results (@Shulyaka - #162490)
- Fix JSON serialization of datetime objects in Google Generative AI tool results (@Shulyaka - #162495)
- Fix JSON serialization of time objects in Ollama tool results (@Shulyaka - #162502)
- Fix JSON serialization of time objects in Open Router tool results (@Shulyaka - #162505)
- Fix JSON serialization of time objects in Cloud conversation tool results (@Shulyaka - #162506)
- Fix Green Planet Energy price unit conversion (@petschni - #162511)
- Bump grpc to 1.78.0 (@allenporter - #162520)
- Fix Tesla Fleet partner registration to use all regions (@Bre77 - #162525)
- Sentence-case “speech-to-text” in google_cloud (@NoRi2909 - #162534)
- Add new Miele mappings (@aturri - #162544)
- Fix config flow bug for Telegram bot (@hanwg - #162555)
- Add timeout to B2 metadata downloads to prevent backup hang (@ElCruncharino - #162562)
- Migrate velbus config entries (@cereal2nd - #162565)
- Bump aioimmich to 0.12.0 (@mib1185 - #162573)
- Bump aioautomower to 2.7.3 (@Thomas55555 - #162583)
- Increase max tasks retrieved per page to prevent timeout (@boralyl - #162587)
- Pin setuptools to 81.0.0 (@joostlek - #162589)
- Improve MCP SSE fallback error handling (@allenporter - #162655)
- Bump intellifire4py to 4.3.1 (@jeeftor - #162659)
- Bump reolink-aio to 0.19.0 (@starkillerOG - #162672)
- Fix handling when FRITZ!Box reboots in FRITZ!Smarthome (@mib1185 - #162676)
- Fix Cloudflare R2 setup screen info (@corrreia - #162677)
- Fix handling when FRITZ!Box reboots in FRITZ!Box Tools (@mib1185 - #162679)
- Bump onedrive-personal-sdk to 0.1.2 (@zweckj - #162689)
- Fix unavailable status in Tuya (@epenet - #162709)
- Fix alarm refresh warning for Comelit SimpleHome (@chemelli74 - #162710)
- Fix image platform state for Vodafone Station (@chemelli74 - #162747)
- Fix bug in edit_message_media action for Telegram bot (@hanwg - #162762)
- Bump cryptography to 46.0.5 (@edenhaus - #162783)
- Bump pySmartThings to 3.5.2 (@joostlek - #162809)
- Filter out transient zero values from qBittorrent alltime stats (@Xitee1 - #162821)
- Bump slixmpp to 1.13.2 (@Lyokovic - #162837)
- Bump pydaikin to 2.17.2 (@YoshiWalsh - #162846)
- Bump pytouchlinesl to 0.6.0 (@jnsgruk - #162856)
- Add Miele TQ1000WP tumble dryer programs and program phases (@andrei-marinache - #162871)
- Bump ZHA to 0.0.90 (@puddly - #162894)
- Log remaining token duration in onedrive (@zweckj - #162933)

2026.2.3 - February 20

- Add the ability to select region for Roborock (@Lash-L - #160898)
- Fix dynamic entity creation in eheimdigital (@autinerd - #161155)
- Fix HomematicIP entity recovery after access point cloud reconnect (@lackas - #162575)
- Show progress indicator during backup stage of Core/App update (@hbludworth - #162683)
- Fix Z-Wave climate set preset (@MartinHjelmare - #162728)
- Block redirect to localhost (@edenhaus - #162941)
- Bump pypck to 0.9.10 (@alengwenus - #162333)
- Bump pypck to 0.9.11 (@alengwenus - #163043)
- Fix blocking call in Xbox config flow (@tr4nt0r - #163122)
- Bump ical to 13.2.0 (@allenporter - #163123)
- Add Lux to homee units (@Taraman17 - #163180)
- Fix remote calendar event handling of events within the same update period (@allenporter - #163186)
- Fix Control4 HVAC action mapping for multi-stage and idle states (@davidrecordon - #163222)
- NRGkick: do not update vehicle connected timestamp when vehicle is not connected (@andijakl - #163292)
- Add Miele dishwasher program code (@astrandb - #163308)
- Bump pyrainbird to 6.0.5 (@allenporter - #163333)
- Fix touchline_sl zone availability when alarm state is set (@molsmadsen - #163338)
- Bump pySmartThings to 3.5.3 (@joostlek - #163375)
- Fix hassfest requirements check (@cdce8p - #163681)
- Bump eheimdigital to 1.6.0 (@autinerd - #161961)

Need help? Join the community

Home Assistant has a great community of users who are all more than willing to help each other out. So, join us! Our very active Discord chat server is an excellent place to be, and don’t forget to join our amazing forums. Found a bug or issue? Please report it in our issue tracker to get it fixed! Or check our help page for guidance on more places you can go.

Source: Gladys Assistant (Forum) Hello everyone. I’m Fabien, 45 years old, drawn to managing my household’s daily life through home automation. An acquaintance installed Home Assistant for me about two years ago, because as a novice my computer skills didn’t allow me to do it myself. Today I’m discovering Gladys, which seems more accessible for my level. In the meantime, I’ve moved into a house much better suited to home automation than the old one: an RJ45 network has been run to nearly every room and connected to fiber. I’d like to create automations and add video surveillance. Currently on Home Assistant I have temperature sensors, probes, smart plugs, and connected relays, all over Zigbee, with Alexa as the voice assistant. As a novice, I may pester you with questions. 6 messages - 6 participants. Read the full topic

Source: Domoticz (Forum News) Hi, I am using Domoticz on a Raspberry Pi. After an update, a login screen appears upon startup. It asks for a username and password. Then a message appears stating they are incorrect, and after three attempts, a screen appears with the message: "congratulations …….the end of the internet". The strange thing is that Domoticz is otherwise working fine; the lights are on. But Domoticz is inaccessible. Software and hardware used: Domoticz running on a Raspberry Pi. What I have already found or tried: I connect to the Raspberry Pi via SSH. I tried to repair Domoticz or completely reinstall it. Nothing helped. Does anyone have a solution to my problem? Statistics: Posted by miguell — Tuesday 24 February 2026 9:46 — Replies 3 — Views 389

Source: Home Assistant (Blog officiel) After an amazing 2025 that saw 12 new Works with Home Assistant partners join the program, it’s now time to say “Hei” to the first partner joining us this year: Heiman. Founded back in 2005, Heiman specialize in smart home security devices, and are bringing an impressive selection of safety-focused sensors and alarms to the program: including the first Matter carbon monoxide alarms to be certified, along with smoke alarms designed for international markets. Keep it local, keep it safe If you’re new to the Works with Home Assistant program, it’s designed to help you identify devices that work brilliantly with Home Assistant, and support the Open Home Foundation’s principles of privacy, choice, and sustainability. These values all pivot around local control, something that’s essential when it comes to home safety. Your smoke and CO alarms need to work when you need them most, regardless of your internet connection or cloud service status (though if you want to check in on your devices while away from home, Home Assistant Cloud provides secure remote access, and your subscription helps fund this very program, among other things!). Our in-house team has thoroughly tested Heiman’s devices to ensure they meet this key requirement, and we’re happy to report they did! But Heiman has gone further still by using the Matter open connectivity standard… Why this matters Matter was launched to be a unifying connectivity type with interoperability at its heart. Instead of being locked into one company’s ecosystem, Matter devices work across Home Assistant, as well as other platforms like Google Home. Heiman’s Matter devices work over Thread, which adds another layer of benefits. Thread is a low-power wireless mesh network protocol that creates resilient connectivity throughout your home, perfect for battery-powered sensors that need reliable communication while staying energy efficient. 
So why does all this matter for safety devices specifically? Well firstly, it’s important to know these smart devices will still work as “dumb” ones, so there’s always a failsafe if you decide to rebuild your Thread network, or start making tweaks. If your sensors integrate locally, it means you can automate basic checks, such as reminders to test an alarm once a month, or notifications of hardware faults. If you want to go even further, your smoke alarm could trigger emergency lighting, your CO detector could shut off your gas fireplace, or your leak sensor could close water valves, all without sending your private data through a third-party server. And this is just the sort of complete, interoperable ecosystem Heiman aims to provide. "Our core goal has always been to enable every family to enjoy a safe and intelligent living experience. Home Assistant, as a world-leading open source smart home platform, has an open and inclusive ecological philosophy and strong compatibility with multi-brand and multi-protocol devices, which are highly consistent with the direction of our product research and development. We deeply understand that only by integrating into an open ecosystem can we break down device barriers and provide users with a truly seamless whole-house smart solution."

  • Leo Xie, Software Engineer Manager at Heiman Working with the community Heiman is showing they’re true to these ambitions. Beyond getting certified, they’re planning to take an active role in the Home Assistant community by participating in discussions, listening to real-world feedback, and continuously optimizing their products based on what users actually need. They’re also sharing their technical expertise in smart home security, collaborating with developers to explore innovative safety scenarios that benefit everyone. Devices Heiman’s commitment to openness and community is also reflected in the devices we’ve certified, which also meet strict safety regulations across the US, Europe, Asia and beyond. Before Heiman joined, we had one Zigbee smoke alarm in the program. Now there are Matter options for multiple regions, plus the first certified carbon monoxide alarms: more choice, more coverage. What devices have been certified? Heiman Smart Smoke Alarm (USA) Heiman Smart Smoke Alarm (EU and China) Heiman Smart Carbon Monoxide Alarm (USA) Heiman Smart Carbon Monoxide Alarm (EU and China) Heiman Motion Sensor Heiman Water Leak Sensor Heiman Humidity and Temperature Sensor Also worth noting: Heiman’s global presence allows them to deliver quality devices at prices that won’t break the bank. Safety sensors and alarms shouldn’t be a luxury, and Heiman’s approach means they don’t have to be. No more guessing games! Accessible pricing is just one way Heiman expands choice for users. We’ve found they also deliver on the other core principles behind the Works with Home Assistant program: local control protects privacy, and open standards ensure sustainability. And that’s the whole point of our certification process: to make it easier for you to spot manufacturers who genuinely commit to these values, taking the guesswork out of building your open home. For full details of all Works with Home Assistant partners, check out our certified device list. 
Welcome to the program, Heiman, we’re excited to see what the community builds with these devices! Frequently asked questions If I have a device that is not listed under Works with Home Assistant, does this mean it’s not supported? No! It just means that it hasn’t gone through a testing schedule with our team, or doesn’t fit the requirements of the program. It might function perfectly well but be added to the testing schedule in the future. OK, so what’s the point of the Works with program? It highlights the devices we know work well with Home Assistant and the brands that make a long-term commitment to keeping support for these devices going. The certification agreement specifies that brands must continue to support the devices in the program. How were these devices tested? All devices in this list were tested using a standard Home Assistant Green Hub with the Home Assistant Connect ZBT-2 as the Thread Border Router and with our certified Matter integration. Will you be adding more Heiman devices to the program? Why not! We’re thrilled to foster a close relationship with the team at Heiman to work together on any upcoming releases or add in further products that are not yet listed here. We are also chatting with them about some exciting future plans.
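The local automation ideas mentioned above (smoke alarm triggering emergency lighting, CO detector shutting off a gas fireplace) translate directly into Home Assistant automations. A minimal sketch, assuming hypothetical entity IDs that you would replace with your own devices:

```yaml
# Hedged sketch — both entity IDs are placeholders, not real Heiman device names
alias: Smoke alarm triggers emergency lighting
triggers:
  - trigger: state
    entity_id: binary_sensor.heiman_smoke_alarm  # hypothetical sensor entity
    to: "on"
actions:
  - action: light.turn_on
    target:
      entity_id: light.hallway  # hypothetical light entity
    data:
      brightness_pct: 100
```

Because the trigger and action both resolve locally, this keeps working even when the internet is down — the property the blog post highlights for safety devices.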

Source: Domoticz (Forum News) I designed a plugin for Airplane Tracking based on the work of @janpep 'Script for Airplanes.live API'. Installation is quite simple.

  1. mkdir ~/domoticz/plugins/AirPlaneTracker
  2. Copy the code to ~/domoticz/plugins/AirPlaneTracker/plugin.py
  3. Do a 'sudo systemctl restart domoticz'
  4. Go to Hardware and select the plugin 'Airplane Tracker', give it a name (adapt settings) and click on 'Add'
  5. Go to the 'Utility' tab, find the Airplanes - Counter, click Edit and change the counter Type from Energy to Custom
  6. Off you go

The names of the sensors are made up of two parts:
  • The name you give to the plugin @install
  • the name of the sensor

Example: I gave the plugin the name Vliegtuigen, so the name of the Tracker sensor is Vliegtuigen - Tracker.

The sensors are:
  • Tracker - displays the planes flying over your head
  • Counter - counts the planes flying over your head
  • Types - counts the types of planes flying over your head

When you click on the CallSign of a plane like KLM1844 or RYR9HN or BAW979 in the Tracker sensor, a new Airplanes.live tab opens in your browser and shows the plane on the map. This is a one-day score within 9 miles from my house.

Code:

```python
# Airplane Tracker Plugin for Domoticz
# Author: Hein
"""
Airplane Tracker

Monitor live air traffic around your location using the airplanes.live API.

Features
    Tracker: Shows the last 3 aircraft detected with details (altitude, speed, direction)
    Counter: Cumulative count of all aircraft seen (resets daily by Domoticz)
    Types:   Daily summary of aircraft types detected

Configuration
    Radius: Detection radius in miles around your Domoticz location
    API Interval: Time between API calls in seconds (default: 60).
    Log Unknown Types: Logs unclassified types to ~/unknown_aircraft_types.log.
    Log Level: Controls logging verbosity.

Important - Counter Device: After creation, go to Devices, click Edit on the
Counter, and change Type to Custom.
"""
import Domoticz
import urllib.request, json, socket, time, math, os
from datetime import datetime

# Parameters, Devices, and Settings are globals injected by the Domoticz
# Python plugin framework at runtime.


class BasePlugin:
    def __init__(self):
        self.last_seen_ts = {}
        self.tracker_buffer = {}
        self.type_counts = {}
        self.logged_unknowns = set()
        self.today = datetime.now().strftime('%Y-%m-%d')
        self.total_count = 0
        self.home_dir = os.path.expanduser("~")
        self.last_api_call = 0

    def should_log(self, level):
        log_level = Parameters.get("Mode6", "Error")
        if log_level == "Info":
            return True
        elif log_level == "Status":
            return level in ["Status", "Error"]
        else:
            return level == "Error"

    def classify_aircraft(self, t, desc):
        t = t.upper() if t else ""
        desc = desc.upper() if desc else ""
        # 1. Helicopters (Priority)
        if "HELICOPTER" in desc or "ROTORCRAFT" in desc or t in ["EC35", "EC45", "H135", "H145", "AS32", "EH10", "NH90", "CH47"]:
            return ("Helicopters", "helicopter")
        # 2. Special / Military / Large Transport
        if "RIVET" in desc or t == "R135":
            return ("Boeing RC-135 (Radar)", "known")
        if "AWACS" in desc or t == "E3TF":
            return ("Boeing E-3 AWACS", "known")
        if "GLOBEMASTER" in desc or t == "C17":
            return ("Boeing C-17 Globemaster", "known")
        if "ATLAS" in desc or t == "A400":
            return ("Airbus A400M Atlas", "known")
        if "HERCULES" in desc or t == "C130":
            return ("Lockheed C-130 Hercules", "known")
        if "STRATOTANKER" in desc or t == "K35R":
            return ("Boeing KC-135 Tanker", "known")
        # Specific check for A330 tankers (Voyager/MRTT) based on the description
        if "VOYAGER" in desc or "MRTT" in desc:
            return ("Airbus A330 Tanker/Transport", "known")
        if "BELUGA" in desc or t in {"A3ST", "A337"}:
            return ("Airbus Beluga", "known")
        # 3. Airbus Families
        if t.startswith("A38") or "380" in desc:
            return ("Airbus A380", "known")
        if t.startswith("A35") or "350" in desc:
            return ("Airbus A350", "known")
        if t.startswith("A34") or "340" in desc:
            return ("Airbus A340", "known")
        # General A330 check (now also catches an A332 that is not a Voyager)
        if t.startswith("A33") or "330" in desc:
            return ("Airbus A330", "known")
        if (t.startswith("A31") or t.startswith("A32") or t.startswith("A2")
                or "A32" in desc or "A-32" in desc):
            return ("Airbus A320-family", "known")
        if t.startswith("BCS") or "A220" in desc:
            return ("Airbus A220", "known")
        # 4. Boeing Families
        if t.startswith("B73") or t.startswith("B3") or "737" in desc:
            return ("Boeing 737", "known")
        if t.startswith("B74") or "747" in desc:
            return ("Boeing 747", "known")
        if t.startswith("B77") or "777" in desc:
            return ("Boeing 777", "known")
        if t.startswith("B78") or "787" in desc:
            return ("Boeing 787", "known")
        if t.startswith("B75") or "757" in desc:
            return ("Boeing 757", "known")
        if t.startswith("B76") or "767" in desc:
            return ("Boeing 767", "known")
        # 5. Commercial Regional
        if (t.startswith("E17") or t.startswith("E19") or t.startswith("E2")
                or "E-JET" in desc or "E170" in desc or "E190" in desc):
            return ("Embraer E-Jet Family", "known")
        if t.startswith("DH8") or "DASH 8" in desc or "Q400" in desc:
            return ("Dash 8 / Q400", "known")
        if t.startswith("AT") or "ATR" in desc:
            return ("ATR 42/72", "known")
        if t.startswith("RJ") or t.startswith("B46") or "AVRO" in desc or "BAE 146" in desc:
            return ("Avro RJ / BAe 146", "known")
        if "FOKKER" in desc or t in ["F70", "F100"]:
            return ("Fokker 70/100", "known")
        # 6. Business & Recreational
        if (t in ["E135", "E140", "E145", "ER3", "ER4", "ERJ"]
                or "ERJ-1" in desc or "ERJ 1" in desc):
            return ("Business & Recreational", "small")
        small_codes = {
            "C25A", "C25B", "C25C", "C525", "C550", "C560", "C680", "C510",
            "C500", "C501", "C551", "GLF4", "GLF5", "GLF6", "G280", "GALX",
            "FA50", "FA7X", "FA20", "FA2K", "FA10", "LJ35", "LJ45", "LJ60",
            "BE20", "BE30", "BE40", "BE9L", "BE10", "PC12", "PC6", "PC24",
            "CL30", "CL35", "CL60", "GL5T", "H25B", "H25C", "HA4T", "SW4",
            "P28A", "P28R", "P28T", "PA46", "C172", "C182", "C208", "SR20", "SR22"
        }
        small_keywords = ["CITATION", "GULFSTREAM", "FALCON", "LEARJET", "HAWKER",
                          "CHALLENGER", "PHENOM", "LEGACY", "BEECHCRAFT", "PILATUS",
                          "CESSNA", "PIPER", "CIRRUS", "METRO", "KINGAIR"]
        if t in small_codes or any(k in desc for k in small_keywords):
            return ("Business & Recreational", "small")
        # 7. Other Commercial / Classics
        if "MD11" in desc or t == "MD11":
            return ("McDonnell Douglas MD-11", "known")
        if "MD8" in desc or t.startswith("MD8"):
            return ("McDonnell Douglas MD-80", "known")
        return ("Other", "other")

    def log_unknown_type(self, hex_id, t, desc, reg):
        if Parameters.get("Mode3", "False") != "True":
            return
        log_key = f"{t}|{desc}"
        if log_key in self.logged_unknowns:
            return
        self.logged_unknowns.add(log_key)
        log_file = os.path.join(self.home_dir, "unknown_aircraft_types.log")
        timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
        log_entry = f"{timestamp} | hex={hex_id} | t={t} | desc={desc} | r={reg}\n"
        try:
            with open(log_file, 'a') as f:
                f.write(log_entry)
        except:
            pass

    def get_cardinal_dir(self, angle):
        directions = ["N", "NNE", "NE", "ENE", "E", "ESE", "SE", "SSE",
                      "S", "SSW", "SW", "WSW", "W", "WNW", "NW", "NNW"]
        return directions[int((angle + 11.25) / 22.5) % 16]

    def calculate_distance(self, lat1, lon1, lat2, lon2):
        # Haversine great-circle distance in km
        R = 6371
        dLat, dLon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = (math.sin(dLat / 2) ** 2
             + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
             * math.sin(dLon / 2) ** 2)
        return R * 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))

    def onStart(self):
        if 1 not in Devices:
            Domoticz.Device(Name="Tracker", Unit=1, TypeName="Text", Used=1).Create()
        if 2 not in Devices:
            Domoticz.Device(Name="Counter", Unit=2, Type=113, Subtype=0, Used=1).Create()
        else:
            try:
                if Devices[2].sValue:
                    self.total_count = int(Devices[2].sValue.split(';')[0])
            except:
                self.total_count = 0
        if 3 not in Devices:
            Domoticz.Device(Name="Types", Unit=3, TypeName="Text", Used=1).Create()
        Domoticz.Log("Airplane Tracker started.")

    def onHeartbeat(self):
        try:
            if "Location" not in Settings:
                return
            loc = Settings["Location"].split(";")
            my_lat, my_lon = float(loc[0]), float(loc[1])
            now_ts, now_dt = time.time(), datetime.now()
            if now_dt.strftime('%Y-%m-%d') != self.today:
                # New day: reset daily statistics
                self.today = now_dt.strftime('%Y-%m-%d')
                self.last_seen_ts.clear()
                self.tracker_buffer.clear()
                self.type_counts.clear()
            api_interval = int(Parameters.get("Mode2", "60"))
            if (now_ts - self.last_api_call) < api_interval:
                self._update_displays(now_ts, now_dt, False)
                return
            self.last_api_call = now_ts
            url = f"https://api.airplanes.live/v2/point/{my_lat}/{my_lon}/{Parameters['Mode1']}"
            req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
            new_reg_found = False
            try:
                with urllib.request.urlopen(req, timeout=10) as r:
                    ac_data = json.loads(r.read().decode()).get('ac', [])
                for p in ac_data:
                    hex_id = p.get('hex', '0').upper()
                    if hex_id == '0':
                        continue
                    flight = (p.get('flight') or hex_id).strip()
                    t, desc, reg = p.get('t', ''), p.get('desc', 'Unknown'), p.get('r', '')
                    if hex_id not in self.last_seen_ts or (now_ts - self.last_seen_ts[hex_id]) > 300:
                        self.total_count += 1
                        new_reg_found = True
                        t_disp, cat = self.classify_aircraft(t, desc)
                        self.type_counts[t_disp] = self.type_counts.get(t_disp, 0) + 1
                        if cat == "other":
                            self.log_unknown_type(hex_id, t, desc, reg)
                    self.last_seen_ts[hex_id] = now_ts
                    t_disp, _ = self.classify_aircraft(t, desc)
                    alt = round((p.get('alt_geom') or p.get('alt_baro', 0)) * 0.3048)
                    gs = round(p.get('gs', 0) * 1.852)
                    v_rate = p.get('baro_rate', 0)
                    v_str = f" (↑ {round(v_rate * 0.3048)} m/min)" if v_rate > 128 else (f" (↓ {round(abs(v_rate) * 0.3048)} m/min)" if v_rate < -128 else "")
                    dist = self.calculate_distance(my_lat, my_lon, p.get('lat', 0), p.get('lon', 0))
                    link = f"{flight}"
                    line1 = f"{now_dt.strftime('%H:%M')} {link} - {t_disp} ({reg})"
                    line2 = f"{alt}m{v_str}, {gs}km/h, {dist:.1f}km {self.get_cardinal_dir(p.get('track', 0))}"
                    self.tracker_buffer[hex_id] = (line1 + "" + line2 + "", now_ts)
            except Exception as e:
                if self.should_log("Error"):
                    Domoticz.Error(f"API Error: {e}")
            self._update_displays(now_ts, now_dt, new_reg_found)
        except Exception as e:
            if self.should_log("Error"):
                Domoticz.Error(f"Heartbeat Error: {e}")

    def _update_displays(self, now_ts, now_dt, new_reg_found):
        # Drop aircraft not seen for 30 minutes
        expired = [k for k, v in self.tracker_buffer.items() if (now_ts - v[1]) > 1800]
        for k in expired:
            del self.tracker_buffer[k]
        if 1 in Devices:
            sorted_p = sorted(self.tracker_buffer.values(), key=lambda x: x[1], reverse=True)
            Devices[1].Update(nValue=0, sValue="".join([p[0] for p in sorted_p[:3]]) if sorted_p else now_dt.strftime('%H:%M') + " No traffic")
        if 2 in Devices and new_reg_found:
            Devices[2].Update(nValue=0, sValue=str(self.total_count))
        if 3 in Devices and self.type_counts:
            sorted_t = sorted(self.type_counts.items(), key=lambda x: (x[0] == "Other", -x[1], x[0]))
            Devices[3].Update(nValue=0, sValue="".join([f"{n}: {c}" for n, c in sorted_t]))


_plugin = BasePlugin()


def onStart():
    _plugin.onStart()


def onHeartbeat():
    _plugin.onHeartbeat()
```

Statistics: Posted by HvdW — Tuesday 10 February 2026 14:31 — Replies 3 — Views 263
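Outside Domoticz, the plugin's two building blocks — the airplanes.live point query and the haversine distance — can be exercised standalone. A minimal sketch, assuming only that the endpoint URL shape matches the plugin above; the function names and example coordinates are mine, not part of the plugin:

```python
import json
import math
import urllib.request


def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in kilometres (same formula the plugin uses)
    r = 6371
    d_lat, d_lon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = (math.sin(d_lat / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(d_lon / 2) ** 2)
    return r * 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))


def fetch_aircraft(lat, lon, radius_miles):
    # Same point endpoint the plugin polls; returns the raw aircraft list
    url = f"https://api.airplanes.live/v2/point/{lat}/{lon}/{radius_miles}"
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read().decode()).get("ac", [])
```

Usage would look like `for p in fetch_aircraft(52.37, 4.90, 9): print(p.get("flight"), haversine_km(52.37, 4.90, p.get("lat", 0), p.get("lon", 0)))` — handy for checking your radius setting before wiring the plugin into Domoticz.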

Source: Home Assistant (Blog officiel) It’s been a busy few months composing behind the scenes, building up to a massive crescendo. Today, the beat finally drops on Music Assistant’s biggest update yet. With version 2.7, Music Assistant is getting all jazzed up with a visual overhaul, a chart-topping lineup of new features and providers, along with a brand-new streaming protocol we’re spinning up ourselves. Of course, you can always update and experience all the great new stuff without reading the rest of this, but you might miss a deep cut. In fact, we can’t even cover everything in this blog (there really is that much), so go sing your praises for anything we missed in the ! Table of contents Marvin joins the team A visual overhaul Users and logins Remote music streaming Introducing Sendspin AirPlay additions Lyrics support Smart fading And much more Join the audio revolution “With a Little Help from My Friends” Marvin joins the team Music Assistant has gained its first full-time employee at the Open Home Foundation. No, not me! My day job is leading the Ecosystems department at the foundation (which comprises all the software projects the Foundation has that are not Home Assistant itself). Marvin will be joining the foundation in the new year to work full-time on Music Assistant, leading the project’s day-to-day operations. Marvin has been contributing to the project for three years now, working on all sorts of parts of the project, and specifically with the Apple Music and YouTube providers. Not to worry, I’m pretty obsessed with my audio setup and will still be tinkering on my little pet project . “Everything in Its Right Place” A visual overhaul Music Assistant joining the foundation has given us a lot more than a nice open home; it’s given the project clearer direction and some expert help. 
One area some people felt Music Assistant fell short was its UI and UX, and in version 2.7, we’re starting the process of giving it a major overhaul, making it look as good as your music sounds! This is just the beginning of a big process, so expect every update to bring more polish. The first thing you’ll probably notice is the collapsible navbar on the left of the screen, which looks pretty familiar to another Assistant . Now it’s much more intuitive, especially for new users. The settings page has also been made much easier to navigate with breadcrumbs. The biggest star of the show is the new Built-in Player, which lets you listen to music on the browser you’re using to hunt for your next track. Great for double-checking if the next song is family-friendly before sending it to every speaker in the home. “Bulletproof” Users and logins A lot of new features we’ve implemented wouldn’t be possible without some form of login and authentication. It was a much-requested feature, as security even within your home shouldn’t be ignored. We know logging in every once in a while can be a minor inconvenience, but we’ve tried to make it as unobtrusive as possible, even implementing a way to use your Home Assistant login as a “Single Sign-On”. You can now have different user profiles with their own music providers. No more having four Tidal accounts all sitting next to each other, cluttering up the Playlists tab. You can even assign who has access to each speaker; say goodbye to the kids playing Demon Hunters on your office speaker during your performance review . In Settings, just head to the User Management section, where you can add and edit your new users. “Around the world” Remote music streaming One feature made possible with our new login interface is remote music streaming – yes, that’s correct, Music Assistant anywhere you can connect to the internet. We’ve created a new web app that allows for remote connections while you’re out and about. 
It uses Home Assistant Cloud’s built-in multimedia streaming capabilities (WebRTC) to help route the audio from your Music Assistant server to wherever you are. A Home Assistant Cloud subscription is not required to use this feature; a big shoutout to Nabu Casa for providing their infrastructure for free to our users. Home Assistant Cloud subscribers get access to even more powerful routing, which improves streaming in more places. This subscription also supports the full-time development of Music Assistant . This connection is peer-to-peer and end-to-end encrypted, meaning no one will know if you’re listening to ABBA . I wouldn’t say it’s ready to replace your current music streaming service, but it’s a great way to get your FLACs playing at a friend’s house. You could even open two instances of the web app and stream it to two devices, and they’ll be synchronized… but how is that even possible? “Spin me right round” Introducing Sendspin For some time, the Music Assistant team has been looking for the best way to stream audio, album art, and other music visualizations to the devices we have around our homes. There are a couple of projects out there doing cool stuff with streaming audio, but not any that fit our needs. So, when it doesn’t exist, it’s time to start building. Introducing Sendspin, a new multimedia streaming and synchronizing protocol. It’s fully open source and free to use. Sendspin can stream high-fidelity audio, album art, and visualizer data, automatically adapting to each device’s capabilities. Imagine an e-paper display showcasing the album cover, while multiple speakers play in sync, and smart lights pulse to the rhythm. The best way to use it right now is either via your browser or a Home Assistant Voice Preview Edition running beta firmware. 
We’ve built the experimental ability to use Sendspin on Google Cast-capable speakers (we’re also looking to do the same with AirPlay-capable speakers), which will allow Sendspin to work with a lot of different hardware. A big thanks to Maxim and Kevin at the Open Home Foundation, who have been instrumental in making Sendspin a reality. Even though it can do some impressive stuff today, it’s very much a tech preview, and this announcement is our call to all developers and DIY audio hobbyists – we need your help building and testing this. This is the spec, start building with it! All the best things in life are meant to be shared, and your music should be as free and open as the software we love. So spin that record , drop the needle, and send that music across your entire home. “Aeroplane” AirPlay additions We recently added support for external audio sources, the first being Spotify Connect. This allows you to stream audio from the Spotify app to your Music Assistant server, which could send it across all your speakers, even if they don’t support Spotify Connect. We’ve now added the ability to send AirPlay audio to Music Assistant, which you can then send anywhere in your home. We also now support AirPlay 2 speakers as a player provider, which means perfectly synced audio across all your AirPlay 2-capable speakers, like HomePods. We recommend reading the limitations in the documentation, as not all AirPlay 2 devices are made equal . “Sing” Lyrics support Never again be left guessing what Kurt is saying in Smells Like Teen Spirit. As of Music Assistant 2.6, you can now see the lyrics of the song you’re playing. If the lyrics provider supports it, there is the ability to have these words time-synced, making it more like karaoke. Lyrics can be found when you open the queue menu and it will be in the “lyrics” tab (this tab will only appear if the track name, artist and album are matched to the lyrics providers). 
We started with support of LRCLIB, but have since added Tidal lyric syncing, Genius lyrics, and local LRC files. “Smooth operator” Smart fading Music Assistant is now your personal in-house DJ, perfectly blending one song into the next, and unlike a DJ it always takes your requests . This latest update adds Smart fading, which takes into account the BPM of each song, to make crossfading between songs sound more natural. To turn it on, go to your player of choice, scroll down to the Audio section, and choose “Enable Smart Fades”. “All the small things” And much more None of these updates are small things, but I’m running out of space, so here is the rest of the hot 100: There are now DSP presets that allow you to quickly save and apply custom configurations. Track and share your listening history, with the addition of scrobbling, with support for LastFM, ListenBrainz, and Subsonic. Several new player providers have been added, including Yamaha MusicCast, and Roku devices running Media Assistant. Added VBAN as a new input provider. New radio and podcast providers include Radio Paradise, Podcast Index, BBC Sounds, gPodder, iTunes Podcasts, Dl.fm, and ARD Audiothek. Can’t follow Phish on tour? Luckily, the new Phish.in provider has you covered. There’s also Nugs.net if you’re looking for more live music. Another cool hodgepodge of audio is the Internet Archive, which can now be added as a provider. One of Japan’s biggest streaming platforms Niconico has been added as an audio provider . “Rebel yell” Join the audio revolution Your music, your players – it’s time to take back control of your music and the devices you want to play it on. If you’re new to Music Assistant, check how to get started here. While we’re excited about these new features, we’re not hitting pause anytime soon. We’d love to hear your feedback in the or on Discord.

Source: Home Assistant (Blog officiel) Boo! We just celebrated our birthday, which means it is time for spooky season; get ready for Halloween! And, hello to the October release of Home Assistant 2025.10! This release iterates on some of the features we introduced in the last couple of releases, but also introduces some brand-new ones! The highlight of this release is definitely the iteration on the automation editor, which gained a sidebar last release and now gains undo/redo functionality, a resizable sidebar, improved copy/paste, and more! Thanks for all the feedback you provided on the previous release; it made a massive difference in this one. Using multiple wake words for voice assistants is now possible, which opens up a lot of possibilities, especially for dual-language households (like mine). Dashboards get more intelligent by suggesting entities based on your usage patterns, and the AI Task can now generate images; I’m curious to see what the community will do with it! Enjoy the release! ../Frenck

Table of contents:
  • Automation editor
    • The sidebar is resizable
    • CTRL+V
    • The overflow menu is back
    • Undo/Redo
    • Repeat repeat repeat repeat
    • Automation editor feedback
  • AI Task - Draw me a sheep
  • Dashboards get smarter - let your home suggest what to show
  • Voice
    • Hello, hola
    • Beep boop
  • Integrations
    • New integrations
    • Noteworthy improvements to existing integrations
    • Integration quality scale achievements
    • Now available to set up from the UI
  • Other noteworthy changes
    • New more information dialog for media player entities
    • Sync zooming charts in the history panel
    • Template & YAML editors get a toolbar
  • Patch releases
    • 2025.10.1 - October 3
    • 2025.10.2 - October 10
    • 2025.10.3 - October 17
    • 2025.10.4 - October 24
  • Need help? Join the community
  • Backward-incompatible changes
  • All changes

A huge thank you to all the contributors who made this release possible! And a special shout-out to @JLo, @laupalombi, and @piitaya who helped write the release notes this release.
Also, @googanhiem, @SeraphicRav, @tronikos, and @richardpolzer for putting effort into tweaking its contents. Thanks to them, these release notes are in great shape. Automation editor In the last release, we introduced a new layout for the automation editor, and your feedback has been invaluable in helping us refine it! This release fixes a few of the most common issues we managed to gather from all of you. Thanks for all the feedback! The sidebar is resizable Working on an action that is too complex for a small sidebar? Maybe one with a few YAML fields? You can now resize the sidebar to adapt the layout to your current task! CTRL+V We previously introduced keyboard shortcuts to copy and cut. Pasting was more complex to bring to life because you can paste a block (trigger, condition, action) in many different locations in your automation. In this release, we introduce a really simple pattern. If you previously copied a block, you can paste it below any block simply by selecting it and pressing CTRL+V. Another very simple, but very welcome, quality-of-life improvement to the automation editor! The overflow menu is back We initially relocated the overflow menu (the menu that appears when you click the ⋮) with all the options related to a block on the sidebar, thinking this would make the flow cleaner. Due to popular demand and helpful feedback that some actions were more difficult to reach (such as testing a condition or running an action), we decided to bring it back to the main section of the editor as well. Undo/Redo We’ve all been there: you’re building a complex automation, make a mistake, and want to revert it, only to find out that it’s really not simple. Up until now, the only way to revert some unsaved changes made to an automation was to close it and start over again… A very painful workflow. This release introduces an Undo functionality (and its associated Redo). 
You can now undo up to 75 steps back in your automation editing history (and redo them if you want). Standard keyboard shortcuts (CTRL+Z and CTRL+Y) are also available! An amazing contribution from @jpbede, thanks!

Repeat repeat repeat repeat

Finally, we noticed some unwanted complexity in our "repeat" building block, which allows you to repeat one or multiple actions for as long as you need to. This complexity stemmed from the fact that we were trying to cover four main use cases in a single block. We decided to split this building block into four smaller ones, with simpler descriptions explaining each use case. Nice! Here's how they were separated:

- Repeat multiple times - Repeat a sequence of actions a fixed number of times.
- Repeat until - Repeat a sequence of actions until a condition is satisfied. The condition is checked after each run of the sequence.
- Repeat while - Repeat a sequence of actions as long as a condition is satisfied. The condition is checked before each run of the sequence.
- Repeat for each - Repeat a sequence for each element of a list.

Note: For our advanced users: this evolution is only cosmetic. The YAML format of the repeat block does not change; this means your existing automations will not be affected by this change.

Automation editor feedback

Tip: One of Home Assistant's greatest strengths is our community. We're building this automation editor together, and your input will shape where it goes next. There are two ways to get involved: share your thoughts in our survey, or join the conversation in the automations & scripts development channel on Discord.

AI Task - Draw me a sheep

In 2025.8, we introduced a way to generate data using the LLM of your choice, paving the way to more AI-driven automations, dashboards, and other smart home interactions. In this release, we introduce a way to generate images! Now every time someone rings your doorbell, you can receive a notification with a cartoon version of the doorbell snapshot.
@JLo has made this example a reality, and here's his demo with the associated automation!

Automation details:

```yaml
alias: Demo Doorbell
triggers:
  - trigger: state
    entity_id:
      - binary_sensor.doorbell_demo
    to: "on"
actions:
  - action: notify.mobile_app_iphone
    data:
      title: "Doorbell"
      message: Processing image ...
      data:
        tag: doorbell
  - action: ai_task.generate_data
    data:
      task_name: Doorbell description
      instructions: |-
        Someone rang my doorbell.
        Instructions:
        - Describe the scene, describe every person on the scene
        - Count People
        - Count Animals
      entity_id: ai_task.ai_task_gpt_4o
      structure:
        summary:
          description: >-
            Summary of the scene and the people inside it.
            Keep it under 180 characters
          selector:
            text: null
        person_count:
          description: Number of person in the scene
          selector:
            number: null
        animal_count:
          description: Number of animal in the scene
          selector:
            number: null
      attachments:
        media_content_id: media-source://media_source/local/doorbell_test.png
        media_content_type: image/png
        metadata:
          title: doorbell_test.png
          thumbnail: null
          media_class: image
          children_media_class: null
          navigateIds:
            - {}
            - media_content_type: app
              media_content_id: media-source://media_source
    response_variable: ai
  - action: notify.mobile_app_iphone
    data:
      title: >-
        Doorbell ({{ai.data.person_count}} / {{ai.data.animal_count}})
      message: "{{ai.data.summary}}"
      data:
        tag: doorbell
  - action: ai_task.generate_image
    data:
      task_name: Manga
      instructions: Transform this image into a super cute manga!
      entity_id: ai_task.google_ai_task
      attachments:
        media_content_id: media-source://media_source/local/doorbell_test.png
        media_content_type: image/png
        metadata:
          title: doorbell_test.png
          thumbnail: null
          media_class: image
          children_media_class: null
          navigateIds:
            - {}
            - media_content_type: app
              media_content_id: media-source://media_source
    response_variable: ai_image
    enabled: true
  - action: notify.mobile_app_iphone
    data:
      title: >-
        Doorbell ({{ai.data.person_count}} / {{ai.data.animal_count}})
      message: "{{ai.data.summary}}"
      data:
        tag: doorbell
      image: http://homeassistant.local:8123{{ai_image.url}}
    enabled: true
mode: single
```

Image generation is already working great, and we cannot wait to see what you will build with this!

Dashboards get smarter - let your home suggest what to show

In the last release, we introduced the Home dashboard, offering a simpler way to control and monitor your smart home if you don't have the time, energy, or need to customize your own dashboard in detail. Now we've added a new concept: sections of suggested entities. This follows a basic algorithm that suggests entities you have interacted with the most in the past. It then shows these entities based on the hour of the day, with only relevant controls being suggested.

Adding prediction entities to any dashboard

If you're creating a manual dashboard with sections, you can integrate these prediction controls directly into it. The setup follows a section-based approach:

1. Add a new section.
2. Open and edit the YAML of that section.
3. Replace the entire section YAML with the following snippet:

```yaml
strategy:
  type: common-controls
title: Common controls
```

Tip: One of Home Assistant's greatest strengths is our community. We're building this dashboard together, and your input will shape where it goes next.
There are two ways to get involved: share your thoughts in our survey, or join the conversation in the dashboard development channel on Discord.

Voice

Hello, hola

For a very long time, ESPHome-based voice assistants (even the tiny Atom Echo) secretly supported multiple wake words under the hood. With this release, we're finally opening up this feature to you! You can now define two wake words and two assistants for every voice assistant in your home! This makes it straightforward to support dual-language households by assigning different wake words to different languages. For example, "Okay Nabu" could be used for French, while "Hey Jarvis" is used for English. Multiple wake words and assistants can be used for other purposes as well. Want to keep your local and cloud-based voice assistants separate? Easy! "Okay Nabu" could be used for a cloud-based assistant while "Hey Jarvis" is used for a local one. We'd love to hear feedback on how you plan to use multiple wake words in your home!

Beep boop

After a voice command, Assist responds with a short confirmation like "Turned on the lights" or "Brightness set". This lets you know that it understood your command and took the appropriate actions. However, if you're in the same room as the voice assistant, this confirmation can feel redundant, since you can see or hear that the appropriate actions were taken. Starting with this release, Assist will detect if your voice command's actions all took place within the same area as the satellite device. If so, a short confirmation "beep" will be played instead of the full verbal response. Besides being less verbose, this also serves as a quick reminder that your voice command only affected the current area.

Note: This feature does not work for AI-enabled Assistants, as they can generate a wide variety of responses that can't be replaced with a simple beep.
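On the device side, loading more than one wake word is a short configuration change for ESPHome-based satellites. A minimal sketch, assuming the ESPHome `micro_wake_word` component and its stock model names; note that which assistant each wake word triggers is paired in Home Assistant's Voice Assistants settings, not in this snippet:

```yaml
# ESPHome satellite sketch: load two wake word models on one device.
# The wake-word-to-assistant pairing is configured in the Home Assistant UI.
micro_wake_word:
  models:
    - model: okay_nabu   # e.g. routed to a French-speaking assistant
    - model: hey_jarvis  # e.g. routed to an English-speaking assistant
```

Running both models on-device means either phrase can start a pipeline without any cloud round trip.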
Integrations

Thanks to our community for keeping pace with the new integrations and improvements to existing ones! You're all awesome.

New integrations

We welcome the following new integrations in this release:

- Compit, added by @Przemko92
- Cync, added by @Kinachi249
- Droplet, added by @sarahseidman
- ekey bionyx, added by @richardpolzer
- IRM KMI, added by @jdejaegh
- Libre Hardware Monitor, added by @Sab44
- Portainer, added by @erwindouna
- Smart Meter B Route, added by @SeraphicRav
- SFTP Storage, added by @maretodoric
- Usage Prediction, added by @balloob
- Victron Remote Monitoring, added by @AndyTempel

Noteworthy improvements to existing integrations

It is not just new integrations that have been added; existing integrations are also constantly improved. Here are some of the noteworthy changes to existing integrations:

- Philips Hue expanded with support for MotionAware sensors on the new Hue Bridge Pro! Thanks, @marcelveldt!
- The LG ThinQ integration now provides energy usage sensors for better energy monitoring of your devices! Nice!
- Amazing work from @natekspencer: Litter-Robot got several enhancements: last feeding sensors, food dispensed today tracking, next feeding sensors, gravity mode switch, and globe light settings for Litter-Robot 4!
- AccuWeather now provides hourly forecasts, giving you more detailed weather predictions throughout the day! Thanks, @bieniu!
- The Blue Current integration got a new start charge session action for managing your EV charging! Nice work, @NickKoepr!
- The Ecowitt integration now supports the LDS01 sensor! Great addition, @GSzabados!
- Reolink cameras got several new features, including an encoding select entity, Home Hub siren support, and color temperature support for light entities! Awesome work from @starkillerOG!
- Geocaching enthusiasts will love the new cache sensors added to the Geocaching integration by @marc7s! Nice if you have hidden one!
- Lutron Caseta now supports multi-tap actions for more advanced button control! Thanks, @rlopezdiez!
- Thanks to @alexqzd, SmartThings air conditioners can now control the AC display light!
- Shelly devices received massive updates, including an illuminance sensor for Plug US Gen4, presence component entities, virtual buttons support, object-based entities, presence zone component support, and a cable unplugged sensor for Flood Gen4! Great work from @chemelli74, @bieniu, and @thecode!
- The SwitchBot integration expanded device support with Plug Mini EU, RelaySwitch 2PM, and K11+ Vacuum! Thanks, @zerzhang!
- The SwitchBot Cloud integration got several improvements, including AC off support, a humidifier platform, Plug-Mini-EU support, and Climate Panel support! Great work from @SeraphicRav and @XiaoLing-git!
- Thanks to @timmo001, the System Bridge integration now includes a power usage sensor for better system monitoring!
- Exciting to see that the Tasmota integration now supports camera functionality! Nice addition from @anishsane!
- Using the Tibber integration? It now provides 15-minute price data, which goes into effect on October 1st. Good timing, @Danielhiversen!
- The Tuya integration received extensive updates with support for various new device categories and sensors: energy sensors for TDQ devices, power sensors for ZNDB devices, energy sensors for DLQ devices, solar inverter support, energy consumption for several smart switches, PM10 air quality monitoring, motor rotation mode for curtains that support it, charge state for siren alarms, cooking thermometer support, cat toilet support, electric desk support, white noise machine support, and water quality sensor support! What an impressive list! Thanks, @zzysszzy, @rokam, and @mhalano!
- The Workday integration now has a calendar that you can view from the calendar sidebar! Thanks, @gjohansson-ST!
- The ntfy integration got a big upgrade! You can now send richer, customizable notifications with tags, icons, URLs, and attachments. Plus, with the new event platform, you can subscribe to topics and trigger automations from incoming messages. Thanks, @tr4nt0r!

Integration quality scale achievements

One thing we are incredibly proud of in Home Assistant is our integration quality scale. This scale helps us and our contributors ensure integrations are of high quality, maintainable, and provide the best possible user experience. This release, we celebrate several integrations that have improved their quality scale:

3 integrations reached platinum:
- Android TV Remote, thanks to @tronikos
- Miele, thanks to @astrandb
- Sleep as Android, thanks to @tr4nt0r

2 integrations reached silver:
- Samsung Smart TV, thanks to @chemelli74
- Whirlpool Appliances, thanks to @abmantis

3 integrations reached bronze:
- NextDNS, thanks to @bieniu
- Opower, thanks to @tronikos
- Sonos, thanks to @PeteRager

This is a huge achievement for these integrations and their maintainers. The effort and dedication required to reach these quality levels is significant, as it involves extensive testing, documentation, error handling, and often complete rewrites of parts of the integration. A big thank you to all the contributors involved!

Now available to set up from the UI

While most integrations can be set up directly from the Home Assistant user interface, some were only available using YAML configuration. We keep moving more integrations to the UI, making them more accessible for everyone to set up and use.
The following integrations are now available via the Home Assistant UI:

- Nederlandse Spoorwegen (NS), done by @heindrichpaul
- Satel Integra, done by @Tommatheussen

Other noteworthy changes

There are many more improvements in this release; here are some of the other noteworthy changes:

- The "Logbook" has been renamed to "Activity" in the UI. This better reflects its purpose of showing a timeline of activities and events in your Home Assistant instance.
- Matter continues to expand with occupancy sensing hold time, climate running state for heat/cool fans, and thermostat outdoor temperature sensors! Great contributions from @lboue and @virtualbitzz!
- Lawn mower entities now support start mowing and dock intents for better voice control! Thanks, @piitaya!
- The analog clock we introduced last release got some more options! You can now enable smooth motion for the seconds hand. Beautiful, @timmo001!
- Need the version of the Home Assistant Companion app you are using? If you have installed the latest versions of our apps, the version is now shown on the About page in the settings menu! Nice one, @TimoPtr!
- The thermostat card now supports water heater entities. Thanks, @karwosts!
- Thanks to @cr7pt0gr4ph7, the add-on configuration UI now supports more complex configurations; this means you will get a better experience when configuring add-ons with more complex options (like lists or user accounts). Well done!
- Speaking of add-ons, we now include switch entities for them, making it easier to control your add-ons. Thanks, @felipecrs!
- Using a webhook trigger in your automation? You can now make it even more dynamic by using a template for the webhook_id. Thanks, @RoboMagus!
- We now have support for MCF (1000 cubic feet) as an alternate unit of measure for volume, thanks to @ekobres; @xtimmy86x added m/min for speed sensors; and @pioto added inH₂O pressure unit support. Nice!
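The templated webhook_id mentioned above can be sketched roughly as follows. The automation, the ID pattern, and the `input_text` helper it reads are all hypothetical examples, and exactly which template features are allowed in this field is not spelled out in the note above:

```yaml
# Hypothetical example: derive the webhook ID from a helper entity, so the
# same automation can be shared across instances without a hardcoded ID.
automation:
  - alias: Doorbell webhook
    triggers:
      - trigger: webhook
        local_only: true
        webhook_id: "doorbell-{{ states('input_text.webhook_suffix') }}"
    actions:
      - action: persistent_notification.create
        data:
          message: Doorbell webhook received
```

The practical benefit is that the secret part of the webhook URL no longer has to live verbatim in the automation YAML.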
New more information dialog for media player entities

This one, we have @jpbede and @matthiasdebaat to thank for! The 'more information' dialogs for media players have a revamped design, offering a cleaner and more intuitive interface.

Sync zooming charts in the history panel

When you have multiple charts in the history panel, zooming in on one chart will now automatically zoom in on all other charts as well. This makes it easier to compare data across different entities. Well done, @birrejan!

Template & YAML editors get a toolbar

@TCWORLD has contributed a toolbar for the YAML and template code editors in our UI. This solves an issue where the previous floating button would float over the content of the editor and obscure it from view. The new toolbar also includes undo and redo buttons, bringing the same convenient undo and redo functionality we introduced for the automation editor to these code editors as well. Plus, there's a nice little copy button to quickly copy your code! Nice!

Patch releases

We will also release patch releases for Home Assistant 2025.10 in October. These patch releases only contain bug fixes. Our goal is to release a patch release once a week, aiming for Friday.
2025.10.1 - October 3

- Bump airOS dependency (@CoMPaTech - #153065)
- Bump airOS module for alternative login url (@CoMPaTech - #153317)
- Bump aiohasupervisor to 0.3.3 (@agners - #153344)
- Do not reset the adapter twice during ZHA options flow migration (@puddly - #153345)
- Fix Nord Pool 15 minute interval (@gjohansson-ST - #153350)
- Explicitly check for None in raw value processing of modbus (@alengwenus - #153352)
- Set config entry to None in ProxmoxVE (@mib1185 - #153357)
- Explicit pass in the config entry to coordinator in airtouch4 (@mib1185 - #153361)
- Add Roborock mop intensity translations (@starkillerOG - #153380)
- Correct blocking update in ToGrill with lack of notifications (@elupus - #153387)
- Bump python-roborock to 2.49.1 (@Lash-L - #153396)
- Pushover: Handle empty data section properly (@linuxkidd - #153397)
- Increase onedrive upload chunk size (@zweckj - #153406)
- Bump pyportainer 1.0.2 (@erwindouna - #153326)
- Bump pyportainer 1.0.3 (@erwindouna - #153413)
- Disable thinking for unsupported gemini models (@Shulyaka - #153415)
- Fix Satel Integra creating new binary sensors on YAML import (@Tommatheussen - #153419)
- Update markdown field description in ntfy integration (@tr4nt0r - #153421)
- Fix Z-Wave RGB light turn on causing rare ZeroDivisionError (@TheJulianJES - #153422)
- Bump aiohomekit to 3.2.19 (@bdraco - #153423)
- Fix sentence-casing in user-facing strings of slack (@NoRi2909 - #153427)
- Add missing translation for media browser default title (@timmo001 - #153430)
- Fix missing powerconsumptionreport in Smartthings (@joostlek - #153438)
- Update Home Assistant base image to 2025.10.0 (@agners - #153441)
- Disable baudrate bootloader reset for ZBT-2 (@puddly - #153443)
- Add translation for turbo fan mode in SmartThings (@joostlek - #153445)
- Fix next event in workday calendar (@gjohansson-ST - #153465)
- Update OVOEnergy to 3.0.1 (@timmo001 - #153476)
- Fix missing parameter pass in onedrive (@zweckj - #153478)
- Bump pyTibber to 0.32.2 (@Danielhiversen - #153484)
- Bump reolink-aio to 0.16.1 (@starkillerOG - #153489)
- Fix VeSync zero fan speed handling (@cdnninja - #153493)
- Bump universal-silabs-flasher to 0.0.35 (@puddly - #153500)
- Debounce updates in Idasen Desk (@abmantis - #153503)
- Z-Wave to support migrating from USB to socket with same home ID (@balloob - #153522)
- When discovering a Z-Wave adapter, always configure add-on in config flow (@balloob - #153575)

2025.10.2 - October 10

- Prevent reloading the ZHA integration while adapter firmware is being updated (@puddly - #152626)
- Wallbox fix Rate Limit issue for multiple chargers (@hesselonline - #153074)
- Fix power device classes for system bridge (@timmo001 - #153201)
- Bump PyCync to 0.4.1 (@Kinachi249 - #153401)
- Updated VRM client and accounted for missing forecasts (@AndyTempel - #153464)
- Bump python-roborock to 2.50.2 (@Lash-L - #153561)
- Bump aioamazondevices to 6.2.8 (@chemelli74 - #153592)
- Switch Roborock to v4 of the code login api (@Lash-L - #153593)
- Fix MQTT Lock state reset to unknown when a reset payload is received (@jbouwh - #153647)
- Gemini: Use default model instead of where applicable (@Shulyaka - #153676)
- Fix ViCare pressure sensors missing unit of measurement (@CFenner - #153691)
- Bump pyvesync to 3.1.0 (@cdnninja - #153693)
- Modbus Fix message_wait_milliseconds is no longer applied (@peetersch - #153709)
- Bump opower to 0.15.6 (@tronikos - #153714)
- Version bump pydaikin to 2.17.0 (@fredrike - #153718)
- Version bump pydaikin to 2.17.1 (@fredrike - #153726)
- Fix missing google_assistant_sdk.send_text_command (@tronikos - #153735)
- Bump airOS to 0.5.5 using formdata for v6 firmware (@CoMPaTech - #153736)
- Align Shelly presencezone entity to the new API/firmware (@bieniu - #153737)
- Synology DSM: Don't reinitialize API during configuration (@oyvindwe - #153739)
- Upgrade python-melcloud to 0.1.2 (@Sander0542 - #153742)
- Fix sensors availability check for Alexa Devices (@chemelli74 - #153743)
- Bump aioamazondevices to 6.2.9 (@chemelli74 - #153756)
- Remove stale entities from Alexa Devices (@chemelli74 - #153759)
- vesync correct fan set modes (@cdnninja - #153761)
- Handle ESPHome discoveries with uninitialized Z-Wave antennas (@balloob - #153790)
- Fix Tuya cover position when only control is available (@epenet - #153803)
- Bump pySmartThings to 3.3.1 (@joostlek - #153826)
- Catch update exception in AirGradient (@joostlek - #153828)
- Add motion presets to SmartThings AC (@joostlek - #153830)
- Fix delay_on and auto_off with multiple triggers (@Petro31 - #153839)
- Fix PIN validation for Comelit SimpleHome (@chemelli74 - #153840)
- Bump aiocomelit to 1.1.1 (@chemelli74 - #153843)
- Limit SimpliSafe websocket connection attempts during startup (@bachya - #153853)
- Handle timeout errors gracefully in Nord Pool services (@gjohansson-ST - #153856)
- Add plate_count for Miele KM7575 (@derytive - #153868)
- Fix restore cover state for Comelit SimpleHome (@chemelli74 - #153887)
- fix typo in icon assignment of AccuWeather integration (@CFenner - #153890)
- Add missing translation string for Satel Integra subentry type (@Tommatheussen - #153905)
- Do not auto-set up ZHA zeroconf discoveries during onboarding (@TheJulianJES - #153914)
- sharkiq dependency bump to 1.4.2 (@Freebien - #153931)
- Fix HA hardware configuration message for Thread without HAOS (@TheJulianJES - #153933)
- Adjust OTBR config entry name for ZBT-2 (@TheJulianJES - #153940)
- Bump pylamarzocco to 2.1.2 (@zweckj - #153950)
- Bump holidays to 0.82 (@gjohansson-ST - #153952)
- Fix update interval for AccuWeather hourly forecast (@bieniu - #153957)
- Bump env-canada to 0.11.3 (@michaeldavie - #153967)
- Fix empty llm api list in chat log (@arturpragacz - #153996)
- Don't mark ZHA coordinator as via_device with itself (@joostlek - #154004)
- Filter out invalid Renault vehicles (@epenet - #154070)
- Bump aioamazondevices to 6.4.0 (@chemelli74 - #154071)
- Bump brother to version 5.1.1 (@bieniu - #154080)
- Fix for multiple Lyrion Music Server on a single Home Assistant server for Squeezebox (@peteS-UK - #154081)
- Z-Wave: ESPHome discovery to update all options (@balloob - #154113)
- Add missing entity category and icons for smlight integration (@piitaya - #154131)
- Update frontend to 20251001.2 (@bramkragten - #154143)
- IOmeter bump version v0.2.0 (@jukrebs - #154150)
- Bump deebot-client to 15.1.0 (@edenhaus - #154154)
- Fix Shelly RPC cover update when the device is not initialized (@thecode - #154159)
- Fix shelly remove orphaned entities (@thecode - #154182)

2025.10.3 - October 17

- Bump aioasuswrt to 1.5.1 (@kennedyshead - #153209)
- PushSafer: Handle empty data section properly (@LennartC - #154109)
- Remove redudant state write in Smart Meter Texas (@srirams - #154126)
- Fix state class for Overkiz water consumption (@Yvan13120 - #154164)
- Bump frontend 20251001.4 (@piitaya - #154218)
- Bump aioamazondevices to 6.4.1 (@chemelli74 - #154228)
- Move URL out of Mealie strings.json (@andrew-codechimp - #154230)
- Move URL out of Mastodon strings.json (@andrew-codechimp - #154231)
- Move URL out of Switcher strings.json (@thecode - #154240)
- Remove URL from ViCare strings.json (@CFenner - #154243)
- Fix August integration to handle unavailable OAuth implementation at startup (@bdraco - #154244)
- Fix Yale integration to handle unavailable OAuth implementation at startup (@bdraco - #154245)
- Move url like strings to placeholders for nibe (@elupus - #154249)
- Add description placeholders in Uptime Kuma config flow (@tr4nt0r - #154252)
- Add description placeholders to pyLoad config flow (@tr4nt0r - #154254)
- Fix home wiziard total increasing sensors returning 0 (@jbouwh - #154264)
- Bump pyprobeplus to 1.1.0 (@pantherale0 - #154265)
- Update Snoo strings.json to include weaning_baseline (@dschafer - #154268)
- Move Electricity Maps url out of strings.json (@jpbede - #154284)
- Bump aioamazondevices to 6.4.3 (@chemelli74 - #154293)
- Move URL out of Overkiz Config Flow descriptions (@iMicknl - #154315)
- AsusWRT: Pass only online clients to the device list from the API (@Vaskivskyi - #154322)
- Move Ecobee authorization URL out of strings.json (@ogruendel - #154332)
- Move URLs out of SABnzbd strings.json (@shaiu - #154333)
- Move developer url out of strings.json for coinbase setup flow (@ogruendel - #154339)

Source: Home Assistant (official blog) Last year, we laid out our vision for AI in the smart home, which opened up experimentation with AI in Home Assistant. In that update, we made it easier to integrate all sorts of local and cloud AI tools, and provided ways to use them to control and automate your home. A year has passed, a lot has happened in the AI space, and our community has made sure that Home Assistant has stayed at the frontier. We beat big tech to the punch; we were the first to make AI useful in the home. We did it by giving our community complete control over how and when they use AI, making AI a powerful tool to use in the home, as opposed to something that takes over your home. Our community is taking advantage of AI's unique abilities (for instance, its image recognition or summarizing skills), while having the ability to exclude it from mission-critical things they'd prefer it not to handle. Best of all, this can all be run locally, without any data leaving your home! Moreover, if users don't want AI in their homes, that's their choice, and they can choose not to enable any of these features. I hope to see big tech take an approach this measured, but judging by their last couple of keynotes, I'm not holding my breath. Over the past year, we've added many new AI features and made them easy to use directly through Home Assistant's user interface. We have kept up with all the developments in AI land and are using the latest standards to integrate more models and tools than ever before. We're also continuing to benchmark local and cloud models to give users an idea of what works best. Keep reading to check out everything new, and maybe you can teach your smart home some cool new tricks.

Local AI is making the home very natural to control

Big thanks to our AI community contributor team: @AllenPorter, @shulyaka, @tronikos, @IvanLH, @Joostlek!

Supercharging voice control with AI

We were doing voice assistants before AI was cool.
In 2023, we kicked off our Year of the Voice. Since then, we've worked towards our goal of building all the parts needed for a local, open, and private voice assistant. When AI became all the rage, we were quick to integrate it. Today, users can chat with any large language model (LLM) that is integrated into Home Assistant, whether that's in the cloud or run locally via a service like Ollama. Where Assist, our home-grown (non-AI) voice assistant agent, is focused on a predetermined list of mostly home-control commands, AI allows you to ask more open-ended questions. Summarize what's happening across the smart home sensors you've exposed to Assist, or get answers to trivia questions. You can even give your LLM a personality! Users can also leverage the power of AI to speak the way they naturally speak, as LLMs are much better at understanding the intent behind the words. By default, Assist will handle commands first. Only questions or commands it can't understand will be sent to the AI you've set up. For instance, "Turn on the kitchen light" can be handled by Assist, while "It's dark in the kitchen, can you help?" could be processed by an AI. This speeds up response times for simple commands and makes for a more sustainable voice assistant. Another powerful addition from the past year is context sharing between agents: your Assist agent can share the most recent commands with your chosen AI agent. This shared context lets you say something like "Add milk to my shopping list," which Assist will act on, and to add more items, just say "Add rice." The AI agent understands that these commands are connected and can act accordingly. Here is an excellent walkthrough video of JLo's AI-powered home, showing many of these new features in action. Another helpful addition keeps the conversation going; if the LLM asks you a question, your Assist hardware will listen for your reply.
If you say something like "It's dark", it might ask whether you'd like to turn on some lights, and you could tell it to proceed. We have taken this even further than other voice assistants, as you can now have Home Assistant initiate conversations. For example, you could set up an automation that detects when the garage door is open and asks if you'd like to close it (though this can also be done without AI with a very clever Blueprint). AI pushed us to completely revamp our Text-to-Speech (TTS) system to take advantage of streaming responses from LLMs. While local AI models can be slow, we use a simple trick to make the delay almost unnoticeable. Now, both Piper (our local TTS) and Home Assistant Cloud TTS can begin generating audio as soon as the LLM produces the first few words, improving the speed of the spoken response by roughly a factor of ten.

Prompt: "Tell me a long story about a frog"

| Setup | Time to start speaking |
| --- | --- |
| Cloud, non-streaming | 6.62 sec |
| Cloud, streaming | 0.51 sec (13x faster) |
| Piper, non-streaming | 5.31 sec |
| Piper, streaming | 0.56 sec (9.5x faster) |

Measured with Ollama gemma3:4b on an RTX 3090, and Piper on an i5.

Great hardware to work with AI

People built some really cool voice hardware, from landline telephones to little talking robots, but the fact that it was so DIY was always a barrier to entry. To make our voice assistant available to everyone, we released the Home Assistant Voice Preview Edition. This is an easy and affordable way to try Home Assistant Voice. It has some seriously powerful audio processing hardware inside its sleek package. If you were on the fence about trying out voice, it really is the best way to get started. It's now easier than ever to set up your Assist hardware to work with LLMs with our Voice Assistants settings page, and you can even assign a different LLM to each device. The LLM can recognize the room it's in and the devices within it, making its responses more relevant.
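The garage-door example above could be sketched as an automation along these lines, assuming the `assist_satellite.start_conversation` action; the entity IDs and wording are illustrative, not part of the announcement:

```yaml
# Sketch: have a voice satellite proactively ask about the open garage door.
# Entity IDs are hypothetical; adjust to your own setup.
automation:
  - alias: Ask about the open garage door
    triggers:
      - trigger: state
        entity_id: cover.garage_door
        to: open
        for: "00:10:00"
    actions:
      - action: assist_satellite.start_conversation
        target:
          entity_id: assist_satellite.hallway
        data:
          start_message: The garage door has been open for ten minutes. Want me to close it?
          extra_system_prompt: If the user agrees, close the garage door.
```

After the satellite speaks the opening message, it keeps listening for your reply, which is what makes the back-and-forth described above possible.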
Assist was built to be a great way to control devices in your home, but with AI, it becomes so much more.

AI-powered suggestions

Last month, Home Assistant launched a new opt-in feature to leverage the power of AI when automating with Home Assistant. The goal is to shorten the journey from a blank slate to your finished idea. When saving an automation or script, you can now use the new Suggest button: when clicked, it sends your automation configuration, along with the titles of your existing automations and labels, to the AI to suggest a name, description, category, and labels for your new automation. Over the coming months, we're going to explore what other features can benefit from AI suggestions. To opt in to this feature, you need to take two steps. First, configure an integration that provides an AI Task entity. For local AI, you can configure Ollama, or you can leverage cloud-based AI like Google, OpenAI, or Anthropic. Once configured, go to the new AI Task preferences pane under System -> General and pick the AI Task entity to power suggestions in the UI. If you don't configure an AI Task entity, the Suggest button will not be visible.

AI Tasks gets the job done

Enabling AI Tasks does more than quickly label and summarize your automations; its true superpower is making AI easy to use in templates, scripts, and automations. AI Tasks allow other code to leverage AI to generate data, including options to attach files and define how you want that data output (for instance, a JSON schema). We have all seen those incredible community creations, where a user leverages AI image recognition and analysis to detect available parking spots or count the number of chickens in the chicken coop. It's likely that AI Tasks can now help you do this easily in Home Assistant, without the need for complex scripts, extra add-ons, or HACS integrations.
Below is a template entity that counts chickens in a video feed, all via a short and simple set of instructions.

```yaml
template:
  - triggers:
      - trigger: homeassistant
        event: start
      - trigger: time_pattern
        minutes: "/5"
    actions:
      - action: ai_task.generate_data
        data:
          task_name: Count chickens
          instructions: >-
            This is the inside of my coop. How many birds (chickens, geese,
            and ducks) are inside the coop?
          structure:
            birds:
              selector:
                number:
          attachments:
            media_content_id: media-source://camera/camera.chicken_coop
            media_content_type: image/jpeg
        response_variable: result
    sensor:
      - name: "Chickens"
        state: "{{ result.data.birds }}"
        state_class: total
```

This template sends a snapshot of the camera to the AI, asking it to analyze what is going on. It defines that the output should always be a number, since we want to use that information in Home Assistant. All of this is embedded in a template entity that automatically updates every 5 minutes. An AI Task could also be embedded in an automation, a script, or any other place that can execute actions. Lastly, users can set a default AI Task entity. This allows users to skip picking an entity ID when creating AI automations. It also lets you migrate everything that uses AI Tasks to the latest model with a single click. This also makes it easy to share blueprints that leverage AI Tasks, like this blueprint that analyzes a camera snapshot when motion is detected.

MCP opens a whole new world

Model Context Protocol (MCP) is a thin layer that allows LLMs to integrate with just about anything. When the specification was announced, we quickly jumped on it and integrated it into Home Assistant. Effectively, these servers give Home Assistant's Assist conversation agent access to all sorts of new tools.
You could connect MCP servers that give Assist access to the latest news stories, your to-do lists, or a server that catalogues your vinyl collection, allowing you to have richer conversations ("Okay Nabu, which Replacements albums do I have, and which aren't on my Vinyl-to-Purchase list?"). On the flip side, you can also turn Home Assistant into an MCP server, allowing an AI system to access information about your home. For instance, you could create a local AI that's great at making Home Assistant automations, and it could include all your entity names or available actions. MCP keeps gaining more support, and there are some great cloud and self-hosted solutions available.

How to pick a model

There are a lot of models available, and it's hard to know where to start. Luckily, Home Assistant's resident AI guru @AllenPorter is here to help. He has put together an incredibly useful Home LLM Leaderboard. This dataset includes his extensive tests of cloud and local LLM options, and even has tests that give small local LLMs a fighting chance (see assist-mini). Currently, the charts show the big cloud players' most recent models ranking pretty close to each other, while recent local models that use 8GB or more of VRAM are nearly keeping up. In the past, there was a big disparity between most models, but now it's hard to go wrong. This is especially helpful as the options for LLMs in Home Assistant have just grown exponentially with the addition of OpenRouter, a unified interface for LLMs. With OpenRouter, users can access over 400 new models in Home Assistant, and it supports AI Tasks right from day one. We really are spoiled for choice.

The future is Open, and Open Source

Home Assistant is open. We believe that you should be in control of your data and your smart home. All of it. Local LLMs and the way we have architected Home Assistant extend this choice to the AI space, all while maintaining your privacy. Most crucially, we've made all of this open source.
We are community-driven and work on this together with our community. The Open Home Foundation has no investors and is not beholden to anyone but our users. Our work is funded through hardware purchases and Home Assistant Cloud subscriptions, allowing us to make all the technology we build free and open.
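The `structure:` option in the chicken-counting template above is, in essence, a tiny output schema: the model must return JSON whose fields match the declared selectors, and the integration coerces them into usable states. A minimal sketch of that idea in Python; the `coerce_structured` helper and the schema shape are purely illustrative, not Home Assistant's actual `ai_task` internals:

```python
import json

def coerce_structured(raw: str, structure: dict) -> dict:
    """Parse a model's JSON reply and coerce fields to their declared
    selector types (only the 'number' selector is handled here),
    mirroring the idea behind ai_task.generate_data's `structure` option."""
    data = json.loads(raw)
    out = {}
    for field, spec in structure.items():
        if "number" in spec.get("selector", {}):
            # Reject non-numeric replies early instead of storing garbage state.
            out[field] = float(data[field])
        else:
            out[field] = data[field]
    return out

# The template sensor's state would then be result["birds"].
schema = {"birds": {"selector": {"number": {}}}}
result = coerce_structured('{"birds": "7"}', schema)
print(result["birds"])  # 7.0
```

Constraining the output to a number is what makes the result safe to use as a `sensor` state; free-form text would not survive the `state_class: total` statistics pipeline.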

Source: Home Assistant Community Forum (Latest) I come from Deconz and it worked satisfactorily for 7 years (raspbee1+2+conbee3). The fact that new devices weren’t supported right from the start was never a big problem. But then I wanted to streamline my setup a bit and, after much deliberation, switched to ZHA, with ZBT-2 to be added later. Three months have now passed. It runs just as stably as Deconz. Very good! But the way the devices are supported is terrible. Especially when it comes to something as essential as lights. In my case, Hue Bulbs. Color loop: just on/off, no further settings. DIY Flash light: Lights do not retain their previous state. DIY Dimming lights with buttons: Forget it! Build something! DIY! No problem, but how do you stop a transition? Search the annals of GitHub - done There’s more, but it’s not that important. So far, I’ve managed to get everything the way I wanted it, but in the long run, I don’t know what else is in store for me. I know you can do everything yourself, but why replicate native functions? For me, ZHA is nothing more than a good starting point when you start with HomeAssistant. I’m going to switch back to Deconz or maybe even Z2M. After experiencing that very trivial things don’t work out of the box, I don’t believe that HomeAssistant or NabuCasa are serious about the ZBT-2. Maybe they should focus more on the basics than on gimmicks like voice, AI, and dashboards. 1 post - 1 participant Read full topic

Source: Gladys Assistant (Forum) Hello everyone, I just installed MQTT in Gladys along with the zwavejs integration, and everything is connected. However, I don't see my stick in the network discovery, and I have nothing to configure the Gateway with, as described in the docs. There is much less to configure than in Jeedom. Any ideas? 4 posts - 2 participants Read the full topic

Source: Gladys Assistant (Forum) @pierre-gilles, Following up on this topic: I've finished the checks for the PR improving the energy-monitoring recalculation. I think you can review it: github.com/GladysAssistant/Gladys Energy monitoring - New Add date range recalculation for energy monitoring with improved jobs and tests master ← Terdious:energy-recalc-date-and-multi-features 05:16PM - 06 Jan 26 UTC +3203 -428 ### Pull Request check-list To ensure your Pull Request can be accepted as fast as possible, make sure to review and check all of these items: - [x] If your changes affect code, did you write the tests?

  • Are tests passing? (npm test on both front/server)
  • Is the linter passing? (npm run eslint on both front/server)
  • Did you run prettier? (npm run prettier on both front/server)
  • If you are adding new features/services, did you run the integration comparator? (npm run compare-translations on front)
  • Did you test this pull request in real life? With real devices? If this development is a big feature or a new service, we recommend that you provide a Docker image to the community (forum) for testing before merging.
  • If your changes modify the API (REST or Node.js), did you modify the API documentation? (Documentation is based on in code)
  • If you are adding new features/services that need explanation, did you modify the user documentation? See the GitHub repo and the website.
  • Did you add fake requests data for the demo mode (front/src/config/demo.js) so that the demo website works without a backend? (if needed) See https://demo.gladysassistant.com. NOTE: these things are not required to open a PR and can be done afterwards / while the PR is open.

### Description of change

  • Translate: +156 lines / -21 lines
  • Front: +923 lines / -137 lines
  • Server: +699 lines / -126 lines
  • Tests: +1232 lines / -129 lines
  • Add start/end date support for energy consumption and cost recalculation (full or selected features).
  • Purge recalculated consumption states within the selected period before recomputing.
  • Return job_id on recalculation endpoints and handle errors on the UI side.
  • Display recalculation period in background jobs.
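The PR's core behavior, per its own description, is "purge recalculated consumption states within the selected period before recomputing." A minimal sketch of that purge-then-recompute idea; the data shapes and the `recalculate_range` helper are illustrative assumptions, not Gladys's actual server code:

```python
from datetime import date

def recalculate_range(states, readings, start, end):
    """Purge computed daily-consumption states inside [start, end], then
    recompute them from consecutive meter index readings
    (consumption on day d = index[d] - index[previous reading])."""
    # 1. Purge previously computed states within the selected period.
    states = [s for s in states if not (start <= s["date"] <= end)]
    # 2. Recompute consumption from the raw meter index, in date order.
    ordered = sorted(readings, key=lambda r: r["date"])
    for prev, cur in zip(ordered, ordered[1:]):
        if start <= cur["date"] <= end:
            states.append({"date": cur["date"],
                           "consumption": cur["index"] - prev["index"]})
    return sorted(states, key=lambda s: s["date"])

readings = [{"date": date(2026, 1, 1), "index": 100},
            {"date": date(2026, 1, 2), "index": 110},
            {"date": date(2026, 1, 3), "index": 125}]
stale = [{"date": date(2026, 1, 2), "consumption": 999}]  # bad value to purge
fixed = recalculate_range(stale, readings, date(2026, 1, 2), date(2026, 1, 3))
print([s["consumption"] for s in fixed])  # [10, 15]
```

Running the recomputation as a background job with a returned `job_id`, as the PR does, keeps long date ranges from blocking the API request.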

Source: Gladys Assistant (Forum) Hi everyone! You've surely heard of OpenClaw, the AI framework that blew up on GitHub a few weeks ago and has just been acquired by OpenAI. I tried connecting OpenClaw to Gladys to explore the possibilities, and it's really impressive. I go into more detail in the video: J'ai laissé l'IA OpenClaw contrôler ma Maison (C'est fou) Note: I advise against installing OpenClaw on your Gladys server; it's still early-stage software that touches a bit of everything and has been widely criticized for security flaws. For this test, I deployed OpenClaw on a cloud VM to keep it in an isolated environment. 7 posts - 3 participants Read the full topic

Source: Adafruit Blog Hodzinets shares: USB Solder dispenser is easy to print, no soldering required (ironically). download the files on: https://makerworld.com/en/models/2337723-motorized-usb-solder-dispenser Every Thursday is #3dthursday here at Adafruit! The DIY 3D printing community has passion and dedication for making solid objects from digital models. Recently, we have noticed electronics projects integrated with 3D printed enclosures, brackets, and sculptures, […]

Source: Gladys Assistant (Forum) Hello everyone, I'm discovering Gladys Assistant and it's very interesting. Is it possible to install Gladys on an internet box other than the Freebox Delta? I think it needs at least the VMs feature. Thank you, and see you soon! 2 posts - 2 participants Read the full topic

Source: openHAB Community (Latest) I’ve been using the Sonos binding for years. I have about 12 amps and I play whole house music from a filesystem of MP3s on my local NAS. My openhab is a 5.1.3 installation on a Debian-based server at my house. I use 3 scripts: A morning script that sets the volume of 10 different amps, joins them to a single “control” amp, and starts playback on that amp. A later-in-the-morning script that increases the volume plus-one-at-a-time on a handful of those amps (the ones closer to my bedroom) An evening script that decreases the volume minus-one-at-a-time on all amps, until they reach 0, then pauses the music, then resets the volumes to sane values for the next day. I have been annoyed for some time, that the Coordinator value isn’t working right. It doesn’t properly reflect the “control” amp. Most of the amps either show NULL, or their own UDN as the value of the Coordinator. I was motivated recently to look at the code, because when I turn on the family room TV, I want the kitchen amp to change sources to family room TV (rather than the “control” amp playing the whole-house music). Having finished messing around with the Blink Binding, I decided to look at the Sonos Binding code. I updated the demo distribution “app” xml to include the Sonos binding (alongside the Blink binding I’ve been developing). I struggled with the stupid dependency refresh buttons for an hour or two (translation: I don’t understand what it’s doing), and I got it working. I use a Mac laptop as my desktop / development environment. openhab calls itself a 5.2.0-SNAPSHOT version. It found the Sonos amps, and I added the Things from the inbox, then created a number of Items. To my surprise, it all works perfectly on the laptop instance of openhab running within eclipse. 
I tried to copy the .jar file from the Mac to the server, just like I had done successfully with the Blink binding jar file, but it won't run; it complains about a missing upnp package: 2026-02-25 00:55:12.669 [WARN ] [org.apache.felix.fileinstall ] - Error while starting bundle: file:/usr/share/openhab/addons/org.openhab.binding.sonos-5.2.0-SNAPSHOT.jar org.osgi.framework.BundleException: Could not resolve module: org.openhab.binding.sonos [311] Unresolved requirement: Import-Package: org.openhab.core.config.discovery.upnp at org.eclipse.osgi.container.Module.start(Module.java:463) ~[org.eclipse.osgi-3.18.0.jar:?] at org.eclipse.osgi.internal.framework.EquinoxBundle.start(EquinoxBundle.java:445) ~[org.eclipse.osgi-3.18.0.jar:?] at org.apache.felix.fileinstall.internal.DirectoryWatcher.startBundle(DirectoryWatcher.java:1260) [bundleFile:3.7.4] Stymied by not being able to run the 5.2.0 SNAPSHOT version of Sonos on my server, I tried a different approach… Guessing perhaps I had some stale Things in my server-based openhab 5.1.3 instance, I deleted all 66 Sonos Items, all Sonos Things, and uninstalled the Sonos Add-On. I reinstalled the add-on from the marketplace, and proceeded to recreate dozens of Things and Items. It still doesn't work. The volume Items are all NULL, and the Coordinator values are generally set to each amp's own UDN. The controls work but the state updates back to the Items don't: If I use the MediaControl for the "control" amp, I can advance to the next song (good) If I use debian server openhab to modify the volume of one amp, then the Item will show the new volume (good) (and also, the volume does change in the room) If I use the Sonos app to modify the volume of one amp, then the debian server openhab Item does not change (bad), but the Mac/eclipse openhab Item does change (good) So it's probably not the Sonos Binding. But what is it? 
Server openhab Sonos Binding is not receiving incoming updates from the Sonos hardware, but Mac openhab Sonos Binding is. Not even sure where to start. The server is on wired ethernet, in the same IP network range as the MacBook's wifi. They are both on a 192.168.192.0/18 network. Both the MacBook and the server are in the 192.168.213.* range and the Sonos amps are in the 192.168.215.* range. Again, based on netmask these are the same network, it is a /18, NOT a /24. During openhab startup on the server (but not on the MacBook), there is an ominous message: 2026-02-25 00:46:29.974 [INFO ] [.network.internal.utils.NetworkUtils] - CIDR prefix is smaller than /24 on interface with address 192.168.213.20/18, truncating to /24, some addresses might be lost But I don't know who's responsible for that message, so I don't know the scope of what it affects. Might be relevant, but then again, the MacBook eclipse openhab works just fine on this same network range (and without the INFO log message). Any ideas? Thanks in advance! 4 posts - 2 participants Read full topic
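The poster's /18 claim can be checked mechanically. The snippet below (using Python's standard `ipaddress` module; the specific amp address is an assumed example from the stated 192.168.215.* range) confirms the server and amps do share one 192.168.192.0/18 network, while the logged truncation to /24 would put the amps outside the server's range:

```python
import ipaddress

# The /18 covers 192.168.192.0 through 192.168.255.255.
net18 = ipaddress.ip_network("192.168.192.0/18")
server = ipaddress.ip_address("192.168.213.20")   # from the log message
amp = ipaddress.ip_address("192.168.215.40")      # assumed amp address

# Both hosts fall inside the /18, so they are on one logical network.
print(server in net18, amp in net18)  # True True

# But if discovery truncates the prefix to /24, as the INFO log warns,
# the server's /24 no longer contains the amps:
server24 = ipaddress.ip_network("192.168.213.0/24")
print(amp in server24)  # False
```

If the Sonos binding's UPnP discovery or event subscriptions only scan the truncated /24, that would be consistent with the server receiving no incoming state updates from the .215.* amps.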

Source: Domoticz (Forum News) Domoticz version: 2025.2 (build 17252) Platform: Raspbian 13.2 After upgrading to the latest beta version 2025.2 (build 17252), I noticed an issue with the lastUpdate.minutesAgo parameter in dzVents. I am using OWFS 1-Wire temperature sensors. The sensors update correctly — at least once per minute (confirmed in the device log and at the hardware level). In my dzVents scripts, I check: device.lastUpdate.minutesAgo Even though a sensor was updated less than a minute ago, Domoticz shows an indefinitely increasing minutesAgo value for randomly selected sensors. The value keeps increasing as if the device had not been updated at all, while in reality: The device value is changing correctly. The device "Last Update" timestamp in the UI appears correct. Only lastUpdate.minutesAgo in dzVents is incorrect. The issue does not affect all sensors at once — it appears randomly on different OWFS devices. The same scripts worked correctly in previous versions. Statistics: Posted by tomes — Tuesday 24 February 2026 23:19 — Replies 4 — Views 414
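In principle, `lastUpdate.minutesAgo` is just a whole-minutes delta between "now" and the device's last-update timestamp. A sketch of the expected computation (plain Python, not dzVents' actual Lua internals) shows what a script should see for a sensor updated 40 seconds ago, which is the value the poster reports drifting:

```python
from datetime import datetime, timedelta

def minutes_ago(last_update: datetime, now: datetime) -> int:
    """Whole minutes elapsed since the device's last update, i.e. the
    value dzVents exposes as device.lastUpdate.minutesAgo."""
    return int((now - last_update).total_seconds() // 60)

now = datetime(2026, 2, 24, 23, 19, 0)
print(minutes_ago(now - timedelta(seconds=40), now))  # 0: updated under a minute ago
print(minutes_ago(now - timedelta(minutes=5), now))   # 5
```

Since the UI's "Last Update" timestamp is correct while `minutesAgo` keeps growing, the bug is presumably on the side that caches or parses the timestamp for scripts, not in the sensor updates themselves.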

Source: Gladys Assistant (Forum) Hello, New to Gladys and used to tinkering with Home Assistant, I wanted to migrate for simplicity's sake. However, I'm running into several problems after a bad network configuration crashed my Unraid server. My setup: Hardware: Lenovo ThinkCentre M710q. Storage: TerraMaster 4-bay DAS (USB). Dongles: Zigbee (on /dev/ttyACM0) and an Asus BT500 Bluetooth adapter. Gladys: installed via Docker on Unraid (currently in Bridge network mode on port 8006 to avoid conflicts). The problem: while trying to configure the Bluetooth dongle, I switched Gladys to Host network mode, which caused a total Unraid crash (IP/port conflict) and corrupted my boot key. I had to create a "New Config" and delete the corrupted Docker image. Since then, I have two major issues: MQTT and Zigbee services: the integrated containers no longer exist. I tried a manual reinstall via Unraid's Apps tab, but Gladys no longer seems to communicate with them. Is there a way to force Gladys to reinstall and manage these containers (Mosquitto/Zigbee2MQTT) itself, to restore the native behavior? And how do I make sure the new containers point correctly to my old data in /mnt/user/appdata/? Bluetooth (Asus BT500): my dongle is still not recognized in the Gladys interface ("No device found"). I did enable Privileged mode and tried a USB passthrough via /dev/bus/usb, but without success. Do you have any tips for stabilizing Bluetooth on Unraid and correctly relinking the MQTT/Zigbee services after such a crash? Thanks in advance for your help! Regards, Didier 5 posts - 3 participants Read the full topic

Source: Gladys Assistant (Forum) Hello, I don't know if it's since the update or since I moved MQTT into Docker, but none of my zigbee2mqtt devices are controllable in Gladys anymore. I'm seeing a lot of logs like this:

2026-02-23T19:05:07+0100 handleMqttMessage.js:109 () Failed to convert value for device Prise baie informatique: Error: Zigbee2mqqt expose not found on device "Prise baie informatique" with property "linkquality". at Zigbee2mqttManager.readValue (/src/server/services/zigbee2mqtt/lib/readValue.js:16:11) at /src/server/services/zigbee2mqtt/lib/handleMqttMessage.js:105:31 at Array.forEach () at Zigbee2mqttManager.handleMqttMessage (/src/server/services/zigbee2mqtt/lib/handleMqttMessage.js:97:41) at MqttClient. (/src/server/services/zigbee2mqtt/lib/connect.js:60:12) at MqttClient.emit (node:events:519:28) at MqttClient._handlePublish (/src/server/services/zigbee2mqtt/node_modules/mqtt/lib/client.js:1277:12) at MqttClient._handlePacket (/src/server/services/zigbee2mqtt/node_modules/mqtt/lib/client.js:410:12) at work (/src/server/services/zigbee2mqtt/node_modules/mqtt/lib/client.js:321:12) at Writable.writable._write (/src/server/services/zigbee2mqtt/node_modules/mqtt/lib/client.js:335:5) at doWrite (/src/server/services/zigbee2mqtt/node_modules/readable-stream/lib/_stream_writable.js:409:139) at writeOrBuffer (/src/server/services/zigbee2mqtt/node_modules/readable-stream/lib/_stream_writable.js:398:5) at Writable.write (/src/server/services/zigbee2mqtt/node_modules/readable-stream/lib/_stream_writable.js:307:11) at TLSSocket.ondata (node:internal/streams/readable:1009:22) at TLSSocket.emit (node:events:519:28) at addChunk (node:internal/streams/readable:561:12) at readableAddChunkPushByteMode (node:internal/streams/readable:512:3) at TLSSocket.Readable.push (node:internal/streams/readable:392:5) at TLSWrap.onStreamRead (node:internal/stream_base_commons:189:23)

The same error, with an identical stack trace, repeats for the other properties:

2026-02-23T19:05:07+0100 handleMqttMessage.js:109 () Failed to convert value for device Prise baie informatique: Error: Zigbee2mqqt expose not found on device "Prise baie informatique" with property "power".
2026-02-23T19:05:07+0100 handleMqttMessage.js:109 () Failed to convert value for device Prise baie informatique: Error: Zigbee2mqqt expose not found on device "Prise baie informatique" with property "state".
2026-02-23T19:05:07+0100 handleMqttMessage.js:109 () Failed to convert value for device Prise baie informatique: Error: Zigbee2mqqt expose not found on device "Prise baie informatique" with property "voltage".
2026-02-23T19:05:09+0100 handleMqttMessage.js:109 () Failed to convert value for device Capteur air salon: Error: Zigbee2mqqt expose not found on device "Capteur air salon" with property "humidity".
2026-02-23T19:05:09+0100 handleMqttMessage.js:109 () Failed to convert value for device Capteur air salon: Error: Zigbee2mqqt expose not found on device "Capteur air salon" with property "linkquality".

Everything seems properly connected: when I control my devices directly from zigbee2mqtt it works, and the zigbee2mqtt logs do appear to publish to mosquitto:

[2026-02-17 10:47:44] info: z2m:mqtt: MQTT publish: topic 'zigbee2mqtt/Prise baie informatique', payload '{"child_lock":"UNLOCK","countdown":0,"current":0.78,"device":{"applicationVersion":192,"dateCode":"","friendlyName":"Prise baie informatique","hardwareVersion":1,"ieeeAddr":"0xa4c138ab6aac9d17","manufacturerID":4417,"manufacturerName":"_TZ3000_2putqrmw","model":"A1Z","networkAddress":43679,"powerSource":"Mains (single phase)","stackVersion":0,"type":"Router","zclVersion":3},"energy":570.99,"indicator_mode":"off/on","last_seen":"2026-02-17T10:47:44+01:00","linkquality":174,"power":128,"power_outage_memory":"restore","state":"ON","voltage":231}'

[2026-02-17 10:47:48] info: z2m:mqtt: MQTT publish: topic 'zigbee2mqtt/bridge/health', payload '{"response_time":1771321668141,"os":{"load_average":[0,0,0],"memory_used_mb":486.23,"memory_percent":24.7113},"process":{"uptime_sec":838810,"memory_used_mb":137.43,"memory_percent":6.9845},"mqtt":{"connected":true,"queued":0,"published":634770,"received":4819},"devices":{"0xa4c138ab6aac9d17":{"messages":148990,"messages_per_sec":0.1776,"leave_count":0,"network_address_changes":0}, [... per-device counters for roughly 34 more devices trimmed ...]}}'

A second publish to 'zigbee2mqtt/Prise baie informatique' follows at 10:47:59 with the same payload ("current" now 0.89).

Any idea what the problem is, @pierre-gilles? Thanks. 13 messages - 3 participants Read the full topic
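The repeated "expose not found" error comes from looking up an expose by property name on the device definition Gladys has stored and finding nothing, which suggests the stored definition has diverged from what Zigbee2MQTT now publishes (e.g. after the MQTT move). A minimal sketch of that lookup; the `read_value` helper and data shapes are illustrative, not Gladys's actual readValue.js:

```python
def read_value(device: dict, prop: str) -> dict:
    """Find the expose matching `prop` on a stored device definition,
    raising (as the Gladys logs show) when the definition lacks it."""
    for expose in device.get("exposes", []):
        if expose.get("property") == prop:
            return expose
    raise KeyError(
        f'Zigbee2MQTT expose not found on device "{device["name"]}" '
        f'with property "{prop}"'
    )

# A stale stored definition that only knows about "state":
stale = {"name": "Prise baie informatique", "exposes": [{"property": "state"}]}
read_value(stale, "state")            # found
# read_value(stale, "linkquality")    # raises KeyError, like the logged errors
```

If this is the failure mode, re-pairing or re-synchronizing the device list so Gladys refreshes its stored exposes would make the incoming `linkquality`/`power`/`voltage` values resolvable again.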

Source: openHAB Community (Latest) Hi all, I have several H2EU Rollershutter devices that I operate in Matter mode. The devices have been correctly added to OpenHAB via Matter. I can see their state. If I close them manually via the hardware button, the state updates fine in OpenHAB to a value between 0 and 100 (where 100 is fully closed). The weird thing happens when I try to send commands to the device: Sending "100" as a command closes the rollershutter by exactly 1%. After the blinds stop moving, the state in OpenHAB shows as "1". Sending any other number between 0 and 99 drives the blinds to the fully open state. Consequently, it seems as if the value is somehow divided by 100 before it is sent to the device. I tried multiplying my value by 100 (e.g. sending 5000), but anything >100 is ignored completely. Log, when sending "100": 2026-02-23 17:48:45.261 [INFO ] [openhab.event.ItemCommandEvent ] - Item 'Rollo_WZ_links_Window_Covering_Lift' received command 100 (source: org.openhab.ui=>org.openhab.core.io.rest$meph) 2026-02-23 17:48:45.261 [INFO ] [penhab.event.ItemStatePredictedEvent] - Item 'Rollo_WZ_links_Window_Covering_Lift' predicted to become 100 2026-02-23 17:48:45.262 [INFO ] [openhab.event.ItemStateChangedEvent ] - Item 'Rollo_WZ_links_Window_Covering_Lift' changed from 0 to 100 (source: org.openhab.core.autoupdate.optimistic) 2026-02-23 17:48:46.445 [INFO ] [openhab.event.ItemStateChangedEvent ] - Item 'Rollo_WZ_links_Active_Power' changed from 0 W to 97 W (source: org.openhab.core.thing$matter:node:0e7486ecf4:17811946753991212479:0#electricalpowermeasurement-activepower) 2026-02-23 17:48:46.461 [INFO ] [openhab.event.ItemStateChangedEvent ] - Item 'Rollo_WZ_links_Window_Covering_Lift' changed from 100 to 1 (source: org.openhab.core.thing$matter:node:0e7486ecf4:17811946753991212479:1#windowcovering-lift) 2026-02-23 17:48:47.449 [INFO ] [openhab.event.ItemStateChangedEvent ] - Item 'Rollo_WZ_links_Active_Power' changed from 97 W to 0 W (source: 
org.openhab.core.thing$matter:node:0e7486ecf4:17811946753991212479:0#electricalpowermeasurement-activepower) Log when sending “50” (after “100” was sent and blinds are 1% closed): 2026-02-23 17:48:58.093 [INFO ] [openhab.event.ItemCommandEvent ] - Item 'Rollo_WZ_links_Window_Covering_Lift' received command 50 (source: org.openhab.ui=>org.openhab.core.io.rest$meph) 2026-02-23 17:48:58.095 [INFO ] [penhab.event.ItemStatePredictedEvent] - Item 'Rollo_WZ_links_Window_Covering_Lift' predicted to become 50 2026-02-23 17:48:58.096 [INFO ] [openhab.event.ItemStateChangedEvent ] - Item 'Rollo_WZ_links_Window_Covering_Lift' changed from 1 to 50 (source: org.openhab.core.autoupdate.optimistic) 2026-02-23 17:48:59.261 [INFO ] [openhab.event.ItemStateChangedEvent ] - Item 'Rollo_WZ_links_Active_Power' changed from 0 W to 3 W (source: org.openhab.core.thing$matter:node:0e7486ecf4:17811946753991212479:0#electricalpowermeasurement-activepower) 2026-02-23 17:49:00.280 [INFO ] [openhab.event.ItemStateChangedEvent ] - Item 'Rollo_WZ_links_Active_Power' changed from 3 W to 0 W (source: org.openhab.core.thing$matter:node:0e7486ecf4:17811946753991212479:0#electricalpowermeasurement-activepower) 2026-02-23 17:49:01.320 [INFO ] [openhab.event.ItemStateChangedEvent ] - Item 'Rollo_WZ_links_Window_Covering_Lift' changed from 50 to 0 (source: org.openhab.core.thing$matter:node:0e7486ecf4:17811946753991212479:1#windowcovering-lift) I also already tried applying a “|(input*100)” transformation to the channel. It did not really change a thing (other than 0 and 1 being the only “valid” numbers to send - still not able to lower the blinds below 1%). Problem does not seem to be about the hardware of the device itself (I thought that first), since a parallel HomeAssistant installation can operate the rollershutter fine. OpenHAB is on version 5.1.1 (running within docker container) Firmware on all devices is upgraded to the current version. I am really running out of ideas what else I can try. 
If anybody would have any ideas about this, it would be greatly appreciated. 5 posts - 2 participants Read full topic
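One hypothesis worth checking (an assumption, not confirmed in the thread): the Matter WindowCovering cluster expresses lift position in percent100ths, i.e. hundredths of a percent on a 0-10000 scale. If a plain 0-100 percentage is written where percent100ths is expected, a "100" arrives as exactly 1%, which matches the observed behaviour. The two scalings, sketched in Python:

```python
def percent_to_percent100ths(percent: float) -> int:
    """Scale a 0-100 lift percentage to the Matter WindowCovering
    percent100ths unit (0-10000, hundredths of a percent)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be in 0..100")
    return round(percent * 100)

def percent100ths_to_percent(value: int) -> float:
    """Inverse conversion: percent100ths back to a plain percentage."""
    if not 0 <= value <= 10000:
        raise ValueError("percent100ths must be in 0..10000")
    return value / 100

# The symptom in the logs: a raw "100" interpreted as percent100ths reads as 1%.
print(percent100ths_to_percent(100))
```

If this is indeed the cause, it would sit in the binding's unit handling rather than in the device, which is consistent with the parallel Home Assistant installation working fine.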

Source: Gladys Assistant (Forum) Hello everyone, I have just published a new version of Gladys, with several improvements and fixes.

New features:
Matterbridge integration: Gladys now lets you launch a Matterbridge container in one click from within Gladys, opening the door to even more compatible devices. If you're curious, I developed this integration with AI; I say more about it on YouTube: "Claude AI a codé mon intégration domotique en 30 min (j'ai rien fait)". Documentation: Matterbridge | Gladys Assistant. Warning: if you have already launched Matterbridge on your instance, this can cause a port conflict. There is no real benefit to launching Matterbridge through Gladys if you were already able to launch it yourself outside of Gladys.
Favorites for integrations: you can now mark your preferred integrations as favorites to find them more easily.
Tasmota: automatic IP discovery via MQTT, a direct link to the device's interface, and better sorting during discovery.

Fixes and improvements:
Room temperature widget: outlier temperature values are now excluded, and the Fahrenheit conversion for the maximum value has been fixed.
Chat: spaces in messages are now correctly preserved thanks to pre-wrap.
DuckDB updated to version 1.4.4.
Fixed typos in translations.
Improved robustness of the MCP service.

Thanks to all the contributors: @bertrandda, @mutmut, @Will_71, @qleg and @Terdious for this great collaborative work! Enjoy the update! 12 messages - 5 participants Read full topic

Source: Domoticz (Forum News) Hello. I have a problem with my Ecodevice (first generation) provided by GCE Electronics (2 inputs for Teleinfo one and two, and 2 other inputs for 2 counters). It works perfectly when I connect only Teleinfo 1. All devices are created: Teleinfo 1 Alerte courant, Teleinfo 1 Tarif en cours, Teleinfo 1 Pourcentage de Charge, Teleinfo 1 Courant, Teleinfo 1 kWh Total (it's also OK for the 2 counters C1 and C2). But when I connect the second signal, Teleinfo 2, the devices are also created, except one (the most important): Teleinfo 2 kWh. The data for Teleinfo 1 is briefly displayed on the sensor Teleinfo 1 kWh but immediately replaced with Teleinfo 2's data. In the log, I have the line for Teleinfo 2 kWh Total twice (and 2 different lines for other sensors like Teleinfo Courant):

2026-02-23 12:55:47.050 ECO-DEVICES: Fetching Teleinfo 1 data
2026-02-23 12:55:47.152 ECO-DEVICES: Fetching Teleinfo 2 data
2026-02-23 12:55:47.152 ECO-DEVICES: P1 Smart Meter (Teleinfo 2 kWh Total)
2026-02-23 12:55:47.257 ECO-DEVICES: P1 Smart Meter (Teleinfo 2 kWh Total)
...
2026-02-23 12:55:47.259 ECO-DEVICES: Current (Teleinfo 2 Courant)
2026-02-23 12:56:47.754 ECO-DEVICES: Current (Teleinfo 1 Courant)
...

I know that the Ecodevice (first generation) is an old system. I also know that this system is only used in France, and almost everyone there uses only one Teleinfo signal. And perhaps the problem comes from GCE Electronics. So I'm not sure someone can help me… But I still hope... Patrick Statistics: Posted by Pchatill — Monday 23 February 2026 13:06 — Replies 1 — Views 241

Source: Gladys Assistant (Forum) Hello, As promised, here is a tutorial for connecting an SMLIGHT dongle to Gladys over the network rather than over USB: plugging this dongle in over USB makes you lose its whole point, and Gladys natively only supports USB mode (@pierre-gilles, stop me if I'm wrong). I went with an installation where the MQTT broker and Z2M are external to Gladys. Here I use port 443 for z2m over HTTPS, but to avoid any conflict if you decide to install everything on the same machine, you should change it to port 4343, for example. I had already written a complete tutorial back when I was using HAOS, which you can find here: Home Assistant Communauté Francophone – 29 May 25 [Tuto] Installer HAOS sur Proxmox avec Z2M et MQTT (Full SSL/TLS - Lets Encrypt)

Installing Mosquitto (MQTT)

Install a VM under Ubuntu 24.04, fully update it, and run the following commands.

To install Docker:

  curl -sSL https://get.docker.com/ | CHANNEL=stable sh
  systemctl enable --now docker

Add a user, for example docker_mosquitto:

  adduser docker_mosquitto

Retrieve its ID (here 1002):

  cat /etc/passwd | grep docker_mosquitto

Create the mosquitto folder in /opt:

  mkdir /opt/mosquitto

Create the docker-compose.yml file with the following content (replace 1002 with the ID retrieved just before):

  services:
    mosquitto:
      image: eclipse-mosquitto:2.0.22
      container_name: mosquitto
      restart: unless-stopped
      user: "1002:1002"
      ports:
        - "1883:1883"   # MQTT
        - "8883:8883"   # MQTTS (secure)
        - "9001:9001"   # WebSockets
      volumes:
        - ./mosquitto/config:/mosquitto/config
        - ./mosquitto/data:/mosquitto/data
        - ./mosquitto/log:/mosquitto/log
        - /etc/localtime:/etc/localtime:ro
        - /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/chain.pem:/etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/chain.pem:ro
        - /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/privkey.pem:/etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/privkey.pem:ro
        - /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/cert.pem:/etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/cert.pem:ro

Create a mosquitto folder with its subfolders and apply the permissions (it will contain the configuration, data and logs):

  mkdir /opt/mosquitto/mosquitto
  mkdir /opt/mosquitto/mosquitto/data
  mkdir /opt/mosquitto/mosquitto/config
  mkdir /opt/mosquitto/mosquitto/log
  touch /opt/mosquitto/mosquitto/log/mosquitto.log
  chown -R 1002:1002 /opt/mosquitto/mosquitto

Create the config file /opt/mosquitto/mosquitto/config/mosquitto.conf:

  persistence true
  persistence_location /mosquitto/data/
  log_dest file /mosquitto/log/mosquitto.log
  listener 1883 localhost
  allow_anonymous false
  #password_file /mosquitto/config/passwd
  tls_version tlsv1.3
  listener 8883
  certfile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/cert.pem
  cafile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/chain.pem
  keyfile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/privkey.pem
  listener 9001
  protocol websockets
  certfile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/cert.pem
  cafile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/chain.pem
  keyfile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/privkey.pem

Enable SSL (adapt this to the plugin you use to retrieve the certificate for your domain name; here is an example with Infomaniak):

  apt install certbot
  apt install python3-pip
  pip install certbot-dns-infomaniak
  export INFOMANIAK_API_TOKEN=xxx
  certbot certonly \
    --authenticator dns-infomaniak \
    --server https://acme-v02.api.letsencrypt.org/directory \
    --agree-tos \
    --rsa-key-size 4096 \
    -d 'mqtt.xxx.local.srv-home.fr'

By default, certbot installs a service that periodically renews its certificates automatically. To do this, the command must know the API key, otherwise it will fail silently. To enable automatic renewal of your wildcard certificates, you will need to edit /lib/systemd/system/certbot.service and add the following line in the Service section, substituting your token:

  Environment="INFOMANIAK_API_TOKEN="

Then open the renewal config file:

  nano /etc/letsencrypt/renewal/xxx.conf

Add:

  renew_hook = docker restart mosquitto

  chmod -R 755 /etc/letsencrypt/live
  chmod -R 755 /etc/letsencrypt/archive

Start the container:

  cd /opt/mosquitto
  docker compose up -d

You can watch the Docker container's logs:

  docker logs mosquitto -f

Enable authentication (replace username with a user, for example "mqttuser"):

  docker exec -it mosquitto mosquitto_passwd -c /mosquitto/config/passwd username

Uncomment the line "password_file /mosquitto/config/passwd" in "/opt/mosquitto/mosquitto/config/mosquitto.conf", which now gives:

  persistence true
  persistence_location /mosquitto/data/
  log_dest file /mosquitto/log/mosquitto.log
  listener 1883 localhost
  allow_anonymous false
  password_file /mosquitto/config/passwd
  tls_version tlsv1.3
  listener 8883
  certfile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/cert.pem
  cafile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/chain.pem
  keyfile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/privkey.pem
  listener 9001
  protocol websockets
  certfile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/cert.pem
  cafile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/chain.pem
  keyfile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/privkey.pem

Then restart the container:

  docker restart mosquitto

You can then enter this login in Gladys and, right after, in zigbee2mqtt.

Installing Zigbee2MQTT

Install a VM under Ubuntu 24.04, fully update it, and follow this procedure: Linux Docker | Zigbee2MQTT

To install Docker:

  curl -sSL https://get.docker.com/ | CHANNEL=stable sh
  systemctl enable --now docker

Here is my docker-compose.yml file:

  services:
    zigbee2mqtt:
      container_name: zigbee2mqtt
      image: koenkk/zigbee2mqtt:2.8.0
      restart: unless-stopped
      volumes:
        - ./data:/app/data
        - /run/udev:/run/udev:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/letsencrypt/live/z2m.xxx.local.srv-home.fr/fullchain.pem:/etc/letsencrypt/live/z2m.xxx.local.srv-home.fr/fullchain.pem:ro
        - /etc/letsencrypt/live/z2m.xxx.local.srv-home.fr/privkey.pem:/etc/letsencrypt/live/z2m.xxx.local.srv-home.fr/privkey.pem:ro
      devices:
        - /dev/serial/by-id/usb-Silicon_Labs_Sonoff_Zigbee_3.0_USB_Dongle_Plus_0001-if00-port0:/dev/serial/by-id/usb-Silicon_Labs_Sonoff_Zigbee_3.0_USB_Dongle_Plus_0001-if00-port0
      ports:
        - "443:443"   # external port 443 → internal port 443
      environment:
        - TZ=Europe/Paris
      networks:
        - z2m_net

  networks:
    z2m_net:
      driver: bridge

Here is my /opt/z2m/data/configuration.yaml as an example; you must change the auth_token (which will let you log in to the web interface) as well as the password of the z2m user we set earlier during the MQTT installation:

  homeassistant:
    enabled: false
  mqtt:
    base_topic: zigbee2mqtt
    server: mqtts://mqtt.xxx.local.srv-home.fr:8883
    user: mqttuser
    password: achanger
    keepalive: 60
    reject_unauthorized: true
    version: 4
    include_device_information: true
  serial:
    port: tcp://192.168.xx.xx:7638
    baudrate: 460800
    adapter: zstack
    disable_led: false
  advanced:
    pan_id: GENERATE
    network_key: GENERATE
    channel: 25
    homeassistant_legacy_entity_attributes: false
    legacy_api: false
    legacy_availability_payload: false
    log_level: info
    log_syslog:
      app_name: Zigbee2MQTT
      eol: /n
      host: localhost
      localhost: localhost
      path: /dev/log
      pid: process.pid
      port: 514
      protocol: udp4
      type: '5424'
    last_seen: ISO_8601
  frontend:
    enabled: true
    package: zigbee2mqtt-windfront
    port: 443
    host: 0.0.0.0
    url: https://z2m.xxx.local.srv-home.fr
    ssl_cert: /etc/letsencrypt/live/z2m.xxx.local.srv-home.fr/fullchain.pem
    ssl_key: /etc/letsencrypt/live/z2m.xxx.local.srv-home.fr/privkey.pem
  auth_token: achanger

Replace this with the IP of your dongle, and don't forget to change auth_token and password:

  port: tcp://192.168.xx.xx:7638

Enable SSL (adapt this to the plugin you use to retrieve the certificate for your domain name; here is an example with Infomaniak):

  apt install certbot
  apt install python3-pip
  pip install certbot-dns-infomaniak
  export INFOMANIAK_API_TOKEN=xxx
  certbot certonly \
    --authenticator dns-infomaniak \
    --server https://acme-v02.api.letsencrypt.org/directory \
    --agree-tos \
    --rsa-key-size 4096 \
    -d 'z2m.xxx.local.srv-home.fr'

By default, certbot installs a service that periodically renews its certificates automatically. To do this, the command must know the API key, otherwise it will fail silently. To enable automatic renewal of your wildcard certificates, you will need to edit /lib/systemd/system/certbot.service and add the following line in the Service section, substituting your token:

  Environment="INFOMANIAK_API_TOKEN="

  nano /etc/letsencrypt/renewal/z2m.xxx.local.srv-home.fr

Add (if you have a trick to integrate a reload instead, I'm interested):

  renew_hook = docker restart zigbee2mqtt

  chmod -R 755 /etc/letsencrypt/live
  chmod -R 755 /etc/letsencrypt/archive

In the /etc/systemd/system/zigbee2mqtt.service file I had to replace User=pi with User=root. I haven't found a way around it for the moment, but if you have an idea for running it under a user with fewer rights, I'm interested as well.

Start the container:

  cd /opt/z2m
  docker compose up -d

Zigbee2mqtt should now be available at https://z2m.xxx.local.srv-home.fr, using the password you set in auth_token.

Going without SSL/TLS

You can perfectly well skip the SSL/TLS part and use port 8080 for Z2M and port 1883 for MQTT by adjusting the configuration.

Z2M config, docker-compose.yml:

  services:
    zigbee2mqtt:
      container_name: zigbee2mqtt
      image: koenkk/zigbee2mqtt:2.8.0
      restart: unless-stopped
      volumes:
        - ./data:/app/data
        - /run/udev:/run/udev:ro
        - /etc/localtime:/etc/localtime:ro
      devices:
        - /dev/serial/by-id/usb-Silicon_Labs_Sonoff_Zigbee_3.0_USB_Dongle_Plus_0001-if00-port0:/dev/serial/by-id/usb-Silicon_Labs_Sonoff_Zigbee_3.0_USB_Dongle_Plus_0001-if00-port0
      ports:
        - "8080:8080"   # external port 8080 → internal port 8080
      environment:
        - TZ=Europe/Paris
      networks:
        - z2m_net

  networks:
    z2m_net:
      driver: bridge

configuration.yaml:

  homeassistant:
    enabled: false
  mqtt:
    base_topic: zigbee2mqtt
    server: mqtt://mqtt.xxx.local.srv-home.fr:1883
    user: mqttuser
    password: achanger
    keepalive: 60
    reject_unauthorized: true
    version: 4
    include_device_information: true
  serial:
    port: tcp://192.168.xx.xx:7638
    baudrate: 460800
    adapter: zstack
    disable_led: false
  advanced:
    pan_id: GENERATE
    network_key: GENERATE
    channel: 25
    homeassistant_legacy_entity_attributes: false
    legacy_api: false
    legacy_availability_payload: false
    log_level: info
    log_syslog:
      app_name: Zigbee2MQTT
      eol: /n
      host: localhost
      localhost: localhost
      path: /dev/log
      pid: process.pid
      port: 514
      protocol: udp4
      type: '5424'
    last_seen: ISO_8601
  frontend:
    enabled: true
    package: zigbee2mqtt-windfront
    port: 8080
    host: 0.0.0.0
    url: http://192.168.xx.xx
    # ssl_cert: /etc/letsencrypt/live/z2m.xxx.local.srv-home.fr/fullchain.pem
    # ssl_key: /etc/letsencrypt/live/z2m.xxx.local.srv-home.fr/privkey.pem
  auth_token: achanger

MQTT conf:

  persistence true
  persistence_location /mosquitto/data/
  log_dest file /mosquitto/log/mosquitto.log
  listener 1883
  allow_anonymous false
  password_file /mosquitto/config/passwd

Then configure Gladys to use the external MQTT broker, and everything should be OK. The tutorial may not be perfect, but it tries to cover all the possible cases, and I may have forgotten things, so if you have remarks or questions, don't hesitate. 5 messages - 2 participants Read full topic
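Before restarting the broker, it can help to check which listeners in the tutorial's mosquitto.conf actually have TLS material attached. This standalone Python sketch (not part of the tutorial) does a naive pass over a conf snippet, assuming certfile lines appear after the listener they belong to, as in the file above:

```python
def tls_listeners(conf: str) -> dict:
    """Map each `listener <port>` in a mosquitto.conf snippet to whether
    a certfile is declared for it (naive line-by-line parsing)."""
    listeners, current = {}, None
    for raw in conf.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition(" ")
        if key == "listener":
            current = int(value.split()[0])  # port is the first argument
            listeners[current] = False
        elif key == "certfile" and current is not None:
            listeners[current] = True
    return listeners

sample = """
listener 1883 localhost
allow_anonymous false
#password_file /mosquitto/config/passwd
listener 8883
certfile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/cert.pem
listener 9001
protocol websockets
certfile /etc/letsencrypt/live/mqtt.xxx.local.srv-home.fr/cert.pem
"""

print(tls_listeners(sample))
```

Here 1883 correctly comes out without a certificate (it is bound to localhost only), while 8883 and 9001 carry TLS.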

Source: Domoticz (Forum News) Hi all, This morning my Docker container was updated to the latest beta (17204). There seems to be a problem with a Python library, if I interpret the log correctly:

2026-02-21 13:32:06.430 Status: Domoticz V2025.2 (build 17204) (c)2012-2026 GizMoCuz
2026-02-21 13:32:06.430 Status: Build Hash: 71a61ff22, Date: 2026-02-20 11:33:19
2026-02-21 13:32:06.431 Status: Startup Path: /opt/domoticz/
2026-02-21 13:32:06.511 Status: PluginSystem: Failed dynamic library load, install the latest libpython3.x library that is available for your platform.
2026-02-21 13:32:06.513 Status: PluginSystem: 'ConBee2' Registration ignored, Plugins are not enabled.
2026-02-21 13:32:06.513 Status: PluginSystem: 'SolarEdge' Registration ignored, Plugins are not enabled.
2026-02-21 13:32:06.513 Status: PluginSystem: 'Shelly MQTT' Registration ignored, Plugins are not enabled.

It should not be related (because I use Docker), but I am running Docker on a headless Debian Trixie NUC. I am posting this in the hope that this will be corrected in the next beta; I have no desire to go fiddle inside the container. Statistics: Posted by Sjonnie2017 — Saturday 21 February 2026 13:40 — Replies 2 — Views 318

Source: Domoticz (Forum News) After an update from beta 17099 to 17189, Domoticz did not restart. I did the update through the settings screen. The counter counted up to 100, then came the message "update failed, no internet connection"... After a Domoticz service restart, Domoticz was running normally on the downloaded beta 17189. I thought this problem was solved? Attached are the update log and the Domoticz crash log: domoticz_crash.log update.log Statistics: Posted by Rik60 — Sunday 15 February 2026 20:21 — Replies 5 — Views 518

Source: Domoticz (Forum News) Since yesterday I'm getting the following error on the buienradar integration: Code: Error: Internet weer: Invalid data received (station measurement empty), or no data returned! Started just somewhere during the day. Updated this morning to latest beta, did not resolve the issue. Any clue to the root cause? API change? Statistics: Posted by JanJaap — Tuesday 10 February 2026 11:45 — Replies 2 — Views 198

Source: Domoticz (Forum News) System: Raspberry Pi ZeroW, Raspbian Bookworm Lite. It is a new, headless installation, no hardware attached. SSH is functional, but I am not sure whether ports 8080 or 443 are open. wget 192.162.1.82:8080 is not answering.

Code:
sudo service domoticz.sh status
domoticz.service - LSB: Home Automation System
  Loaded: loaded (/etc/init.d/domoticz.sh; generated)
  Active: active (exited) since Sat 2026-02-07 19:54:02 CET; 7min ago
  Docs: man:systemd-sysv-generator(8)
  Process: 2666 ExecStart=/etc/init.d/domoticz.sh start (code=exited, status=0/SUCCESS)
  CPU: 331ms
Feb 07 19:54:00 haz11 systemd[1]: Starting domoticz.service - LSB: Home Automation System...
Feb 07 19:54:01 haz11 domoticz.sh[2666]: Time synchronized, starting Domoticz...
Feb 07 19:54:02 haz11 domoticz.sh[2666]: Illegal instruction <------------------!!!!
Feb 07 19:54:02 haz11 systemd[1]: Started domoticz.service - LSB: Home Automation System.

This seems to be the critical line:

Code:
Feb 07 19:54:02 haz11 domoticz.sh[2666]: Illegal instruction

What is the problem there, and what is the solution? Is the Pi Zero suitable for running Domoticz? If the current OS or Domoticz version is not suitable for the Pi Zero, which is the latest usable version? Greetings to everyone and have a nice weekend. Statistics: Posted by rabbit — Saturday 07 February 2026 20:36 — Replies 26 — Views 859

Source: Domoticz (Forum News) I used "Open Hardware Monitor" for a long time with a Domoticz "Motherboard" hardware entry. With my new PC, "Open Hardware Monitor" doesn't support many of the motherboard's sensors. Some older Domoticz release notes show that "Libre Hardware Monitor" is supported. However, if I try to use it like "Open Hardware Monitor", it seems not to be recognized by Domoticz. Furthermore, I couldn't find any how-to hints. Can anyone give advice? Addition: the Domoticz log shows "Warning, neither Libre Hardware Monitor nor Open Hardware Monitor are installed on this system." However, Libre Hardware Monitor is running: Domoticz 2025.2 is running on Windows 11, as is Libre Hardware Monitor 0.9.5. Statistics: Posted by Itschi — Friday 06 February 2026 8:51 — Replies 1 — Views 229

Source: Domoticz (Forum News) Version: 2025.2 Platform: Pi4 Plugin/Hardware: RFXxl Description: E-mail notifications with a picture (security cam) do not work anymore. The error message is:

2026-02-03 17:16:09.188 Error: SMTP Mailer: Error sending Email to: !
2026-02-03 17:16:09.188 Error: libcurl: (55)
2026-02-03 17:16:09.188 Error: Send failure: Connection reset by peer

Settings screen attached; error messages and test e-mail are received OK. Until recently, the mails with a snapshot of the security cam were received, and the snapshot still works in the camera settings menu. Settings screen attached. Statistics: Posted by Verdwaald — Tuesday 03 February 2026 22:15 — Replies 2 — Views 222

30. EN v2.0.2

Source: HomeGenie (GitHub Releases) HomeGenie v2.0.2 — Advanced Energy Reporting & Charting System Unleashed! We are excited to announce the stable release of HomeGenie 2.0.2, bringing powerful new capabilities for data visualization and energy management to your programmable intelligence platform. This release focuses on transforming raw system data into clear, actionable insights directly within your HomeGenie dashboard.

Key Highlights & New Capabilities:

Advanced Charting Widget & UI:
  • Multi-Dataset Visualization: The Chart component now supports rendering multiple datasets simultaneously, allowing for mixed bar and line chart types to represent diverse data.
  • Intuitive Historical Navigation: Navigate through your historical data with ease using new Year and Day selectors, complete with convenient Prev/Next navigation buttons.
  • Dynamic Labels: X-axis labels now dynamically adapt based on the selected date range, enhancing data clarity and readability.
  • Optimized Performance: Refactored to employ a "One Worker per Dataset" logic for modular and parallel data fetching, ensuring responsive performance even with large historical logs.
  • UI/UX Improvements: Enhanced Chart.js integration provides theme-aware colors and opacity (supporting glassmorphism effects), alongside fixes for update synchronization and layout shifts.

Daily Energy Reporting System (Backend):
  • New Automation Program: Introduced the Daily Energy Report program, designed to automatically aggregate and log Meter.Watts.Hour data from your devices.
  • YAML Persistence: Implemented robust data storage using daily YAML files (e.g., YYYY_DDD_daily_stats.yaml) for reliable historical tracking of energy consumption.
  • Dedicated APIs: Exposed Statistics.Providers/DailyEnergyReport for dynamic widget configuration, enabling seamless integration of daily data. Added DataProcessing.Statistics/DailyEnergyReport APIs for efficient fetching of raw dataset values.
  • Flexible Data Retrieval: Implemented logic to retrieve specific energy data for chosen years and days via new API parameters.
  • Device Integration: Added the Include in Daily Wh report feature toggle, allowing users to easily select which devices contribute to the energy reports.

Why HomeGenie 2.0.2? This release further solidifies HomeGenie's commitment to empowering you with local-first, cloud-independent, and intelligent programmable systems. Gain unparalleled insights into your energy consumption and system behavior, enabling smarter automations and a more efficient environment—all managed privately on your own hardware. Download the latest stable build of HomeGenie 2.0.2 from our repository today! Happy Automating! Full Changelog: v2.0.1...v2.0.2
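The release notes mention daily YAML files named YYYY_DDD_daily_stats.yaml. Assuming DDD is the zero-padded day of the year (the notes do not spell this out), such a filename can be derived in one line of Python:

```python
from datetime import date

def daily_stats_filename(day: date) -> str:
    """Build a YYYY_DDD_daily_stats.yaml name, where DDD is assumed to be
    the zero-padded day of the year (%j)."""
    return day.strftime("%Y_%j_daily_stats.yaml")

print(daily_stats_filename(date(2026, 2, 28)))
```

One file per day keeps each YAML small and makes the year/day navigation in the charting widget a simple filename lookup.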

31. EN v2.0.1

Source: HomeGenie (GitHub Releases) HomeGenie v2.0.1 — Your Local AI-Powered Programmable Intelligence Unleashed! We are thrilled to announce the official stable release of HomeGenie 2.0.1, culminating over three years of dedicated development into a completely re-imagined platform. HomeGenie has evolved into a robust, local-first, and privacy-centric system of programmable intelligence, with Agentic AI at its core. This release empowers you with cutting-edge capabilities to transform any environment into a truly intelligent and autonomous system.

Key Highlights & New Capabilities:

Local AI & Lailama Agentic Engine:
  • Intelligent & Adaptive: The Lailama engine dynamically optimizes its parameters (Context Window, Batch Size) based on your system's available RAM, ensuring stable and efficient operation across diverse hardware.
  • Granular AI Control: A brand-new, intuitive configuration UI (supporting both Light and Dark themes) allows you to fine-tune Lailama's behavior: adjust Creativity (Temperature) from precise logic to creative responses, manage Working Memory (Context Window) for enhanced AI recall, and control System Context Sharing to feed real-time module and sensor data to the AI for highly accurate agentic actions.
  • Enhanced Reasoning: Improved System Report formatting and refined system prompts for Lailama and Gemini providers lead to smarter intent recognition and execution.
  • AI Vision Suite: Full integration of YOLO (Object Detection, Instance Segmentation, Pose Estimation) directly on server and ESP32-CAM modules.
  • Agentic Scheduling (Genie Command): The Scheduler now hosts AI-driven tasks, allowing natural language commands to define complex automations autonomously.
  • Speech Recognition & Synthesis: Improved microphone input and voice responses in the new AI chat interface.
  • Async Model Downloads: Robust Download Manager for GGUF models with pause/resume support.

Developer API & Framework:
  • New Licensing Model: Re-licensed to GNU Affero General Public License v3.0 for a protected open-source ecosystem.
  • Extended Widget Capabilities: zuix.d.ts is updated with new widget controller methods: this.apiCall(), this.showSettings(), and this.translate() for deeper integration.
  • Universal Fluent API Generator: Generates ready-to-use C#, JavaScript, and Python code with a unified syntax for module interaction.
  • ModuleField API: Added the .decimalValue property to ModuleField for simplified numeric handling in UI logic.

User Interface (UI) & Experience (UX) Overhaul:
  • Modernized UI: A sleek, responsive, and multilingual interface with full support for Light/Dark themes and enhanced readability.
  • Redesigned AI Chat: "Bottom sheet" style chat with explicit thought processes, smart scroll, token buffering, and unified "Stop" commands.
  • Customizable Dashboards: New preferences for custom wallpapers (including animated GIFs), widget card colors, opacity, and blur effects.
  • Quick Control Sheets: Implemented Floating Action Buttons (FABs) for rapid control of scenes, lights, colors, and shutters directly from the dashboard.
  • Smart Display Integration: ESP32/ESP32-S3 devices with touch displays now function as customizable and autonomous control centers.
  • Revamped Log Events Viewer: Interactive chart preview with seamless navigation for efficient log analysis.
  • Code Editor Minimap: Enabled for faster navigation within long scripts.
  • Clearer Visual Programming: The "VPL" entry has been renamed to "Visual Program" for improved clarity and accessibility.

Why HomeGenie 2.0.1? This release represents our unwavering commitment to empowering users with local-first, cloud-independent, and intelligent programmable systems. With Lailama, your HomeGenie server transforms into a truly autonomous agent, capable of understanding, reasoning, and acting on your unique environment or project—all while keeping your data private and secure on your hardware.
Happy Automating! Full Changelog: v2.0.0-rc.15...v2.0.1

Source: HomeGenie (GitHub Releases) HomeGenie Server v2.0.0-rc.15 - The Era of Local Agentic AI This release introduces significant advancements in local AI integration and a refined developer experience for building integrated widgets and automation programs.

Local AI & Lailama Enhancements: The Lailama engine is now more intelligent, highly customizable, and fully context-aware.
  • Dynamic Memory Optimization: The engine now automatically optimizes model parameters (Context Window and Batch Size) based on the system's available RAM, ensuring stability across different hardware profiles.
  • Refined Configuration UI: A new, intuitive settings panel (supporting both Light and Dark themes) for granular AI control: Creativity (Temperature), real-time adjustment from precise logic to creative responses; Working Memory (Context Window), managing how much information the AI can process simultaneously; System Context Sharing, a toggle to provide the AI with real-time status of modules and sensors for precise agentic actions.
  • Improved Context Manager: Enhanced formatting for the System Report to reduce hallucinations and improve intent-to-API mapping.
  • System Prompt Tuning: Refined default prompts for both Lailama and Gemini providers to improve reasoning consistency.

Developer API & Framework (zuix.js): We have extended the widget controller capabilities to allow deeper integration with the HomeGenie core.
  • New Controller Methods: Updated zuix.d.ts with built-in methods: apiCall (directly invoke HomeGenie backend services and program APIs from a widget), showSettings (programmatically open the widget's configuration interface), and translate (easily handle localized labels using the internal i18n engine).
  • ModuleField API: Added the .decimalValue property to the ModuleField object for simplified numeric handling in the UI layer.

UI & Editor Improvements:
  • Code Editor: Enabled the minimap for faster navigation within long scripts and programs.
  • Chat UI: Fixed vertical message alignment to the bottom and optimized rendering during token streaming to eliminate Markdown flickering.
  • File Editor Fix: Resolved a layout issue where scrollable content could overlap dialog action buttons.
  • Visual Programming: Renamed the "VPL" entry to "Visual Program" for better clarity and accessibility.

Happy Automating! Full Changelog: v2.0.0-rc.14...v2.0.0-rc.15

Source: Gladys Assistant (Blog officiel) Security is the foundation of home automation. Today, a complete alarm feature lands in Gladys, letting you manage the security of your home.

Source: Domoticz (Forum News) Using Domoticz version 2024.7 on Windows 11, having rebuilt the database following corruption. The Domoticz app is working fine, no errors in the log. When I try to do a backup, I get an empty file created called 'backupdatabase.php'. The Backup button was working normally before. Anyone got any ideas? Statistics: Posted by peterchef — Saturday 28 February 2026 16:40 — Replies 2 — Views 48

Source: Domoticz (Forum News) As discussed in another topic, I have my Marstek Venus E2 working with hame-relay and hm2mqtt, all running in Docker.

Install https://github.com/tomquist/hame-relay

compose.yaml
Code:
services:
  mqtt-forwarder:
    container_name: hame-relay
    image: ghcr.io/tomquist/hame-relay:latest
    restart: unless-stopped
    dns_search: .
    volumes:
      - ./config:/app/config
    environment:
      - LOG_LEVEL=info
networks: {}

config.json
Code:
{
  "broker_url": "mqtt://your-broker-ip",
  "username": "your-marstek-user",
  "password": "your-marstek-password",
  "inverse_forwarding": "true"
}

Install https://github.com/tomquist/hm2mqtt

compose.yaml
Code:
services:
  hm2mqtt:
    container_name: hm2mqtt
    image: ghcr.io/tomquist/hm2mqtt:latest
    restart: unless-stopped
    environment:
      - MQTT_BROKER_URL=mqtt://your-broker-ip:1883
      - MQTT_USERNAME=''
      - MQTT_PASSWORD=''
      - MQTT_POLLING_INTERVAL=60
      - MQTT_RESPONSE_TIMEOUT=45
      - POLL_CELL_DATA=true
      - POLL_EXTRA_BATTERY_DATA=true
      - POLL_CALIBRATION_DATA=true
      - DEVICE_0=HMG-50:0019aa0d4dcb # 12-character MAC address
#     - DEVICE_0=HMA-1:0019aa0d4dcb # 12-character MAC address
networks: {}

The DEVICE must be HMG-50 for the Venus E2, and the MAC must be the Bluetooth MAC address of your battery. From now on, hm2mqtt polls the Marstek cloud every minute and sends data over MQTT. As I had MQTTAD running, some 94 devices were created in Domoticz!

  • don't be surprised to see some timeouts on polling
  • don't be surprised to see some very strange values in the TEMPERATURE devices Statistics: Posted by eddieb — Saturday 28 February 2026 16:14 — Replies 0 — Views 39
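Since hm2mqtt publishes the battery state as MQTT messages, a consumer on the Domoticz/Home Assistant side only has to parse the JSON payloads. A minimal sketch of such a consumer follows; the topic layout and field names here are assumptions for illustration, not hm2mqtt's actual schema:

```python
import json

# Illustrative consumer for an hm2mqtt-style MQTT payload. The topic layout
# and JSON field names below are assumptions, NOT hm2mqtt's real schema.
def parse_battery_message(topic: str, payload: bytes) -> dict:
    """Extract the device id and numeric readings from a JSON MQTT payload."""
    # Assumed topic shape: <prefix>/<device-id>/state
    device_id = topic.split("/")[-2]
    data = json.loads(payload)
    return {
        "device": device_id,
        "soc_percent": float(data["soc"]),
        "power_w": float(data["power"]),
    }

msg = parse_battery_message(
    "hm2mqtt/HMG-50-0019aa0d4dcb/state",
    b'{"soc": 87, "power": -250}',
)
assert msg["device"] == "HMG-50-0019aa0d4dcb"
assert msg["soc_percent"] == 87.0
```

Validating and coercing the numeric fields up front also makes it easy to discard the "very strange values" mentioned above before they reach a temperature device.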

Source: Home Assistant Community Forum (Latest) Hi all, I needed accurate price data from ENGIE so I spent some time reverse-engineering their platform. I'm planning to publish my integration on HACS soon, but I would like to have it tested by other people first. A few things to note:
  • Currently, only 2FA via SMS and e-mail is supported. Passkeys and itsme support are off the table; I'm not interested in touching those. I'm also not sure what will happen when ENGIE forces 2FA on all of their customers in March.
  • I was only able to test with Easy Variabel. The API call for prices uses MONTHLY for maxGranularity, so this might not be ideal for people with a dynamic contract.
  • ENGIE seems to be very strict with their access tokens: they get revoked after about 2 minutes, so the integration refreshes the token every minute in the background. There is a binary sensor to keep track of the authentication status.
  • The integration creates sensors for offtake and injection prices (both including and excluding VAT), per EAN. Injection sensors are only created when the data is available (e.g. gas will only have offtake).
For those interested in the login flow, I've included a Bruno collection in the repository. Feel free to test this out by spinning up the devcontainer or by manually adding the integration; all contributions and improvements are welcome. github.com GitHub - DaanVervacke/hass-engie-be: Retrieve energy price data from the ENGIE Belgium... Retrieve energy price data from the ENGIE Belgium API 1 post - 1 participant Read full topic
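The token strategy described above (refresh every minute because tokens die after roughly two minutes) amounts to treating a token as stale well before its revocation window. A minimal sketch of that idea, with hypothetical names and not the integration's actual code:

```python
import time

# Hypothetical sketch of short-lived-token handling: refresh once the token
# is older than (lifetime - margin), mirroring the "refresh every minute
# against a ~2 minute revocation window" strategy described above.
class ShortLivedToken:
    def __init__(self, fetch, lifetime_s=120.0, refresh_margin_s=60.0):
        self._fetch = fetch          # callable returning a fresh token string
        self._refresh_after = lifetime_s - refresh_margin_s  # 60 s here
        self._token = None
        self._fetched_at = 0.0

    def get(self, now=None):
        now = time.monotonic() if now is None else now
        # Refresh eagerly, long before the server would revoke the token.
        if self._token is None or now - self._fetched_at > self._refresh_after:
            self._token = self._fetch()
            self._fetched_at = now
        return self._token

counter = iter(range(100))
tok = ShortLivedToken(lambda: f"token-{next(counter)}")
assert tok.get(now=0.0) == "token-0"    # first fetch
assert tok.get(now=30.0) == "token-0"   # still fresh, no refresh
assert tok.get(now=61.0) == "token-1"   # refreshed after the 60 s margin
```

Refreshing on a margin rather than on failure means API calls never race the revocation: by the time a token could be revoked, the client has already replaced it.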

Source: Home Assistant Community Forum (Latest) Hevy to Garmin Connect: Automatic Strength Workout Sync via Home Assistant. GitHub: github.com/amasolov/hevy2garmin

The Problem
I use Hevy to track my strength workouts (exercises, sets, reps, weights) and a Garmin watch for everything else. Garmin Connect is my single dashboard for all fitness data, but there's no native integration between the two. I wanted my Hevy workouts to show up in Garmin Connect automatically, as proper strength training activities with exercise names, sets, reps, and weights, without any manual effort.

The Solution
hevy2garmin fetches workouts from the official Hevy API, converts them into Garmin FIT files (the native binary format Garmin uses), and uploads them to Garmin Connect. It runs from Home Assistant on a 30-minute schedule, so my workouts appear in Garmin shortly after I finish them.

How It Works
  • Fetches recent workouts from Hevy (last 7 days by default) via the official Hevy API v1
  • Skips workouts that were already synced (tracked in a state file, so no duplicates)
  • Auto-maps Hevy exercise names to Garmin FIT exercise categories using keyword heuristics (e.g., "Barbell Bench Press" → bench_press, "Romanian Deadlift" → deadlift)
  • Builds valid FIT files with a custom binary encoder: each set appears in Garmin with the correct exercise name, reps, weight, and duration
  • Uploads the FIT files to Garmin Connect via OAuth (using the garth library)

Key Features
  • Custom FIT builder: no external FIT SDK needed; generates proper strength training activities that Garmin displays correctly
  • Smart exercise mapping: automatically maps Hevy exercises to Garmin categories using keyword heuristics. Unknown exercises are logged for manual review
  • Multi-user support: configure multiple users in users.json, each synced independently
  • Garmin MFA support: a helper script creates an OAuth session locally (where you can enter MFA), then the sync script reuses it with token auto-refresh
  • Duplicate prevention: a state file per user ensures workouts are never synced twice
  • Retry logic: handles transient API errors and Garmin rate limits gracefully

Home Assistant Integration
The HA integration uses PyScript (available via HACS) to trigger a shell_command on a cron schedule. The sync runs in the background with nohup to avoid the 60-second shell_command timeout.

Full Setup Guide

Step 1: Install PyScript
Install PyScript via HACS:
  • Open HACS in your Home Assistant UI
  • Go to Integrations → click the + button → search for PyScript → install it
  • Restart Home Assistant
  • Go to Settings → Devices & Services → Add Integration → search for PyScript → add it
Alternatively, add pyscript: to your configuration.yaml manually.

Step 2: Clone the project to your Home Assistant
SSH into your Home Assistant host and clone the repo into the config directory:

cd /config/scripts
git clone https://github.com/amasolov/hevy2garmin.git
cd hevy2garmin

Note: If you're using Home Assistant OS (supervised), you can SSH in via the SSH & Web Terminal add-on. The config directory is typically /config/ or /homeassistant/ depending on your setup.

Step 3: Create a Python virtual environment and install dependencies

cd /config/scripts/hevy2garmin
python3 -m venv .venv
.venv/bin/pip install --upgrade pip
.venv/bin/pip install -r requirements.txt

This installs garth (the Garmin OAuth library) and requests.

Step 4: Get your Hevy API key
  • Go to hevy.com/settings (requires Hevy Pro)
  • Under the Developer section, generate or copy your API key

Step 5: Configure users

cd /config/scripts/hevy2garmin
cp users.json.example users.json

Edit users.json with your details:

[
  {
    "id": "myname",
    "hevy_api_key": "YOUR_HEVY_API_KEY",
    "garmin_email": "your.email@example.com",
    "garmin_password": "your_garmin_password"
  }
]

You can add multiple users to the array if needed.
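The keyword-heuristic exercise mapping described above can be sketched in a few lines. The keyword table and category names here are illustrative assumptions, not hevy2garmin's actual tables:

```python
# Sketch of keyword-based exercise-name mapping, similar in spirit to
# hevy2garmin's approach. The keyword table is illustrative only.
KEYWORD_TO_CATEGORY = {
    "bench press": "bench_press",
    "romanian deadlift": "deadlift",
    "deadlift": "deadlift",
    "squat": "squat",
    "row": "row",
}

def map_exercise(hevy_name: str):
    """Return a Garmin-style category for a Hevy exercise name, or None."""
    name = hevy_name.lower()
    # Try longer keywords first so "romanian deadlift" wins over "deadlift".
    for keyword in sorted(KEYWORD_TO_CATEGORY, key=len, reverse=True):
        if keyword in name:
            return KEYWORD_TO_CATEGORY[keyword]
    return None  # unknown exercise: log it for manual review

assert map_exercise("Barbell Bench Press") == "bench_press"
assert map_exercise("Romanian Deadlift") == "deadlift"
assert map_exercise("Mystery Move") is None
```

Substring matching keeps the table small: one "bench press" entry covers barbell, dumbbell, and incline variants, while anything unmatched falls through to the manual-review log.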

Source: Home Assistant Community Forum (Latest) Got a super weird problem. The last few days I've noticed issues with HA: the weather app periodically reports unavailable, and automations are intermittently not working. I finally got some time to dig into it, and it seems to be a network problem. HTTP requests are periodically failing in the container with "Network unreachable". This occurs both for addresses on my local network and for the Internet. I can re-create this by running wget inside the container, e.g.:

f0a135ba2c48:/config# wget https://api.glowmarkt.com -O - >/dev/null
Connecting to api.glowmarkt.com (154.51.163.100:443)
wget: server returned error: HTTP/1.1 404 Not Found
f0a135ba2c48:/config# wget https://api.glowmarkt.com -O - >/dev/null
Connecting to api.glowmarkt.com (154.51.163.100:443)
wget: can't connect to remote host (154.51.163.100): Network unreachable
f0a135ba2c48:/config# wget https://api.glowmarkt.com -O - >/dev/null
Connecting to api.glowmarkt.com (154.51.163.100:443)
wget: can't connect to remote host (154.51.163.100): Network unreachable
f0a135ba2c48:/config# wget https://api.glowmarkt.com -O - >/dev/null
Connecting to api.glowmarkt.com (154.51.163.100:443)
wget: server returned error: HTTP/1.1 404 Not Found

I've tried deleting and recreating the container, and I've updated to the latest version. I run a number of containers on this host, and the issue is isolated to this one; the others are fine, as is network connectivity on the host itself. It also seems that only HTTP(S)/TCP traffic is impacted; pings at least work fine. I'm running Ubuntu 24.04. Anyone else seen this? I'm a bit stumped. 3 posts - 2 participants Read full topic

Source: Gladys Assistant (Forum) Hello everyone. As a novice, I somehow managed to create my Gladys account, but when activating the Gladys Plus trial I must have skipped some steps, and since then nothing works. That was on my personal PC. I have since picked up a mini PC that will run Gladys. If I understand correctly, I download Ubuntu on my personal PC and flash it with Balena Etcher; but when I go back to Balena Etcher, SELECT TARGET can no longer be selected and my mouse pointer turns into a "no entry" sign. Does anyone have an idea what the problem is? 2 posts - 2 participants Read full topic

Source: Gladys Assistant (Forum) Hello, I would like to get a Ugreen DH2300 NAS (octa-core ARM Rockchip RK3576 clocked at 2.2 GHz, with 4 GB of DDR4 RAM) and I would like to know whether Gladys could run on this kind of hardware using Docker, or whether I need a more powerful model? @pierre-gilles 3 posts - 2 participants Read full topic


Source: HomeGenie (GitHub Releases) HomeGenie v2.0.7 - AI Vision & Integrated NVR

Welcome to HomeGenie 2.0.7! This major update transforms your smart home setup into a cutting-edge, AI-powered video surveillance platform. We've completely rewritten how HomeGenie handles cameras and added a built-in Network Video Recorder (NVR) that works seamlessly with our newest Artificial Intelligence models. Here is what's new:

Smarter, Faster AI
  • Next-Gen Object Detection: We have upgraded our AI engine to support YOLO26, the absolute latest generation of AI vision models. It is faster, more accurate, and catches details like never before.
  • Smarter Text Processing: The underlying LLamaSharp engine has been updated, making text-based automations and local LLM processing snappier.
  • Total Performance Control: Worried about AI using all your CPU/GPU? You can now set a "Max Requests Per Second" limit on Object Detection, Pose Estimation, and Segmentation. Your system stays perfectly responsive while still catching all the action.

The "AI Vision Hub" Dashboard
Keep an eye on everything at once. We've introduced the AI Vision Hub, a brand-new dashboard tailored for security. You can pin up to 5 live camera feeds side-by-side, watching real-time AI bounding boxes and detections directly on the video streams.

Built-in Smart NVR (Network Video Recorder)
You no longer need third-party software to record your cameras. HomeGenie now includes a highly optimized, enterprise-grade NVR system out of the box.
  • AI-Triggered Recording: Why record hours of empty rooms? HomeGenie's NVR talks directly to the AI engine. You can set it to record only when a specific object (like a person or a car) is detected, saving massive amounts of disk space. Continuous recording is also available.
  • Ultra-Efficient Saving: If your camera supports standard video streams, HomeGenie saves the footage without re-encoding it. This means you can record dozens of HD cameras even on low-power devices like a Raspberry Pi without breaking a sweat.
  • Makes "Dumb" Cameras Smart: Got a cheap camera that on
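The "Max Requests Per Second" throttle described above is, in effect, a rate limiter in front of the detection engine. A minimal sliding-window sketch of the idea follows; this is an illustrative assumption, not HomeGenie's implementation:

```python
import time
from collections import deque

# Illustrative sliding-window rate limiter in the spirit of the
# "Max Requests Per Second" setting described above. Requests beyond the
# budget are skipped so the AI engine never saturates the CPU/GPU.
class RateLimiter:
    def __init__(self, max_per_second: int):
        self.max_per_second = max_per_second
        self._stamps = deque()  # timestamps of requests in the last second

    def allow(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop timestamps that have fallen out of the one-second window.
        while self._stamps and now - self._stamps[0] >= 1.0:
            self._stamps.popleft()
        if len(self._stamps) < self.max_per_second:
            self._stamps.append(now)
            return True
        return False  # over budget: skip this frame's detection request

limiter = RateLimiter(max_per_second=2)
assert limiter.allow(now=0.0) is True
assert limiter.allow(now=0.1) is True
assert limiter.allow(now=0.2) is False  # third request within one second
assert limiter.allow(now=1.2) is True   # the window has moved on
```

Skipping frames (rather than queueing them) is the sensible policy for live video: a stale detection request is worthless once the next frame has arrived.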
