The Computer Emergency Response Team of Ukraine (CERT-UA) has disclosed details of a new campaign targeting governments and municipal healthcare institutions, mainly clinics and emergency hospitals, to deliver malware capable of stealing sensitive data from Chromium-based web browsers and WhatsApp.
The activity, which was observed between March and April
Threat actors have been observed weaponizing n8n, a popular artificial intelligence (AI) workflow automation platform, to facilitate sophisticated phishing campaigns and deliver malicious payloads or fingerprint devices by sending automated emails.
“By leveraging trusted infrastructure, these attackers bypass traditional security filters, turning productivity tools into delivery
Researchers warn that a flaw in Anthropic’s Model Context Protocol allows unsanitized commands to execute silently, enabling full system compromise across widely used AI environments.
Sophos’ Ross McKerchar discusses leadership at scale, retaining talent, defending against AI-enabled threats, and the industry’s growing trust problem.
A recently disclosed critical security flaw impacting nginx-ui, an open-source, web-based Nginx management tool, has come under active exploitation in the wild.
The vulnerability in question is CVE-2026-33032 (CVSS score: 9.8), an authentication bypass vulnerability that enables threat actors to seize control of the Nginx service. It has been codenamed MCPwn by Pluto Security.
A number of critical vulnerabilities impacting products from Adobe, Fortinet, Microsoft, and SAP have taken center stage in April’s Patch Tuesday releases.
Topping the list is an SQL injection vulnerability impacting SAP Business Planning and Consolidation and SAP Business Warehouse (CVE-2026-27681, CVSS score: 9.9) that could result in the execution of arbitrary database
Few technologies have moved from experimentation to boardroom mandate as quickly as AI. Across industries, leadership teams have embraced its broader potential, and boards, investors, and executives are already pushing organizations to adopt it across operational and security functions. Pentera’s AI Security and Exposure Report 2026 reflects that momentum: every CISO surveyed
Microsoft on Tuesday released updates to address a record 169 security flaws across its product portfolio, including one vulnerability that has been actively exploited in the wild.
Of these 169 vulnerabilities, 157 are rated Important, eight are rated Critical, three are rated Moderate, and one is rated Low in severity. Ninety-three of the flaws are
OpenAI on Tuesday unveiled GPT-5.4-Cyber, a variant of its latest flagship model, GPT‑5.4, that’s specifically optimized for defensive cybersecurity use cases, days after rival Anthropic unveiled its own frontier model, Mythos.
“The progressive use of AI accelerates defenders – those responsible for keeping systems, data, and users safe – enabling them to find and fix problems
Two high-severity security vulnerabilities have been disclosed in Composer, a package manager for PHP, that, if successfully exploited, could result in arbitrary command execution.
The vulnerabilities have been described as command injection flaws affecting the Perforce VCS (version control software) driver. Details of the two flaws are below -
Google has announced the integration of a Rust-based Domain Name System (DNS) parser into the modem firmware as part of its ongoing efforts to beef up the security of Pixel devices and push memory-safe code at a more foundational level.
“The new Rust-based DNS parser significantly reduces our security risk by mitigating an entire class of vulnerabilities in a risky area, while also laying
Cybersecurity researchers have uncovered a novel ad fraud scheme that leverages search engine optimization (SEO) poisoning techniques and artificial intelligence (AI)-generated content to push deceptive news stories into Google’s Discover feed and trick users into enabling persistent browser notifications that lead to scareware and financial scams.
The campaign, which has been
Abstract: The rapid expansion of artificial intelligence (AI) is raising concerns about its potential to transform cybercrime. Beyond empowering novice offenders, AI stands to intensify the scale and sophistication of attacks by seasoned cybercriminals. This paper examines the evolving relationship between cybercriminals and AI using a unique dataset from a cyber threat intelligence platform. Analyzing more than 160 cybercrime forum conversations collected over seven months, our research reveals how cybercriminals understand AI and discuss how they can exploit its capabilities. Their exchanges reflect growing curiosity about AI’s criminal applications through legal tools and dedicated criminal tools, but also doubts and anxieties about AI’s effectiveness and its effects on their business models and operational security. The study documents attempts to misuse legitimate AI tools and develop bespoke models tailored for illicit purposes. Combining the diffusion of innovation framework with thematic analysis, the paper provides an in-depth view of emerging AI-enabled cybercrime and offers practical insights for law enforcement and policymakers...
The cybersecurity industry is obsessing over Anthropic’s new model, Claude Mythos Preview, and its effects on cybersecurity. Anthropic said that it is not releasing it to the general public because of its cyberattack capabilities, and has launched Project Glasswing to run the model against a whole slew of public domain and proprietary software, with the aim of finding and patching all the vulnerabilities before hackers get their hands on the model and exploit them.
There’s a lot here, and I hope to write something more considered in the coming week, but I want to make some quick observations...
All the leading AI chatbots are sycophantic, and that’s a problem:
Participants rated sycophantic AI responses as more trustworthy than balanced ones. They also said they were more likely to come back to the flattering AI for future advice. And critically they couldn’t tell the difference between sycophantic and objective responses. Both felt equally “neutral” to them.
One example from the study: when a user asked about pretending to be unemployed to a girlfriend for two years, a model responded: “Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship.” The AI essentially validated deception using careful, neutral-sounding language...
Unknown threat actors compromised CPUID (“cpuid[.]com”), a website that hosts popular hardware monitoring tools like CPU-Z, HWMonitor, HWMonitor Pro, and PerfMonitor, for less than 24 hours to serve malicious executables for the software and deploy a remote access trojan called STX RAT.
The incident lasted from approximately April 9, 15:00 UTC, to about April 10, 10:00 UTC, with
Adobe has released emergency updates to fix a critical security flaw in Acrobat Reader that has come under active exploitation in the wild.
The vulnerability, assigned the CVE identifier CVE-2026-34621, carries a CVSS score of 8.6 out of 10.0. Successful exploitation of the flaw could allow an attacker to run malicious code on affected installations.
It has been described as
Posted by Jiacheng Lu, Software Engineer, Google Pixel Team
Google is continuously advancing the security of Pixel devices, with a focus on hardening the cellular baseband modem against exploitation. Recognizing the risks associated with the complex modem firmware, Pixel 9 shipped with mitigations against a range of memory-safety vulnerabilities. For Pixel 10, Google is advancing its proactive security measures further. Following our previous discussion on "Deploying Rust in Existing Firmware Codebases", this post shares a concrete application: integrating a memory-safe Rust DNS (Domain Name System) parser into the modem firmware. The new Rust-based DNS parser significantly reduces our security risk by mitigating an entire class of vulnerabilities in a risky area, while also laying the foundation for broader adoption of memory-safe code in other areas.
Here we share our experience working on this integration, and hope it can inspire broader use of memory-safe languages in low-level environments.
Why Modem Memory Safety Can’t Wait
In recent years, we have seen increasing interest in the cellular modem from attackers and security researchers. For example, Google's Project Zero gained remote code execution on Pixel modems over the Internet. The Pixel modem has tens of megabytes of executable code; given its complexity and remote attack surface, other critical memory-safety vulnerabilities may remain in the predominantly memory-unsafe firmware code.
Why DNS?
The DNS protocol is most commonly known in the context of browsers finding websites. With the evolution of cellular technology, modern cellular communications have migrated to digital data networks; consequently, even basic operations such as call forwarding rely on DNS services.
DNS is a complex protocol that requires parsing untrusted data, which can lead to vulnerabilities, particularly when implemented in a memory-unsafe language (example: CVE-2024-27227). Implementing the DNS parser in Rust reduces the attack surface associated with memory unsafety.
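The contrast can be sketched with a toy length-prefixed label parser: in Rust, an out-of-range length byte in untrusted input surfaces as a recoverable error rather than an out-of-bounds read. This fragment is purely illustrative and is not the modem's parser; hickory-proto handles the real, full message format.

```rust
// Toy parser for a single DNS-style length-prefixed label: [len][bytes...].
// Illustrative only; a real DNS message has many more moving parts.
fn parse_label(input: &[u8]) -> Result<&[u8], &'static str> {
    let (&len, rest) = input.split_first().ok_or("empty input")?;
    // Slicing is bounds-checked: a lying length byte yields an error here,
    // where a memory-unsafe implementation might read out of bounds.
    rest.get(..len as usize).ok_or("length exceeds buffer")
}

fn main() {
    assert_eq!(parse_label(b"\x03www"), Ok(&b"www"[..]));
    assert_eq!(parse_label(b"\x40oops"), Err("length exceeds buffer"));
    println!("malformed input rejected safely");
}
```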
Picking a DNS library
DNS already has a level of support in the open-source Rust community. We evaluated multiple open-source crates that implement DNS and, based on criteria shared in earlier posts, identified hickory-proto as the best candidate. It has excellent maintenance, over 75% test coverage, and widespread adoption in the Rust community; its pervasiveness positions it as the de facto DNS choice with good prospects for long-term support. Although hickory-proto initially lacked no_std support, which is needed for bare-metal environments (see our previous post on this topic), we were able to add that support to it and its dependencies.
Adding no_std support
The work to enable no_std for hickory-proto was mostly mechanical; we shared the process in a previous post and made the necessary modifications to hickory_proto and its dependencies. The upstream no_std work also produced a no_std URL parser, beneficial to other projects.
We built prototypes and measured size with size-optimized settings. As expected, hickory_proto is not designed with embedded use in mind and is not optimized for size. Because the Pixel modem is not tightly memory constrained, we prioritized community support and code quality, leaving code-size optimizations as future work.
However, the additional code size may be a blocker for other embedded systems. This could be addressed by adding feature flags to conditionally compile only the required functionality; implementing this modularity would be valuable future work.
Hook up Rust to the modem firmware
Before building the Rust DNS library, we defined several Rust unit tests to cover basic arithmetic, dynamic allocations, and FFI to verify the integration of Rust with the existing modem firmware code base.
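Such integration smoke tests can be as simple as the self-contained sketch below: basic arithmetic, a heap allocation, and a round-trip through a C-ABI function. The add_i32 helper is illustrative and defined in Rust so the sketch runs standalone; in the real integration, the FFI boundary crosses into the existing C codebase.

```rust
// Illustrative smoke tests of the kind described in the post.
#[no_mangle]
pub extern "C" fn add_i32(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    assert_eq!(2 + 2, 4); // basic arithmetic

    let v = vec![1u8; 1024]; // dynamic allocation through the global allocator
    assert_eq!(v.len(), 1024);

    assert_eq!(add_i32(20, 22), 42); // call through a C-ABI function

    println!("integration smoke tests passed");
}
```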
Compile Rust code to staticlib
While using cargo is the default choice for compilation in the Rust ecosystem, it presents challenges when integrating it into existing build systems. We evaluated two options:
Using cargo to build a staticlib before the modem builds, then adding the produced staticlib to the linking step.
Working directly with rustc and integrating the Rust compilation steps into the existing modem build system.
Option #1 does not scale if we are going to add more Rust components in the future, as linking multiple staticlibs may cause duplicated symbol errors. We chose option #2 as it scales more easily and allows tighter integration into our existing build system. Our existing C/C++ codebase uses Pigweed to drive the primary build system. Pigweed supports Rust targets (example) with direct calls to rustc through rust tools defined in GN.
We compiled all the Rust crates, including hickory-proto, its dependencies, and core, compiler_builtins, and alloc, to rlib. Then we created a staticlib target with a single lib.rs file that references all the rlib crates using extern crate declarations.
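The combining lib.rs is essentially just a list of extern crate declarations, so the linker retains every component's symbols in the final archive. A sketch of its shape (crate names other than hickory_proto are illustrative placeholders, and this fragment only builds inside the full GN graph with the rlibs available):

```rust
// lib.rs for the combined staticlib target. Each `extern crate` pulls the
// corresponding rlib into the archive; `dns_component` is a hypothetical
// name for a firmware-side component crate.
#![no_std]

extern crate alloc;
extern crate hickory_proto;
extern crate dns_component;
```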
Build core, alloc, and compiler_builtins
Android’s Rust toolchain distributes the source code of core, alloc, and compiler_builtins, and we leveraged this for the modem. They can be included in the build graph by adding a GN target with crate_root pointing to the root lib.rs of each crate.
Pixel modem firmware already has a well-tested, specialized global memory allocation system to support dynamic memory allocation. alloc support was added by implementing GlobalAlloc with FFI calls to the allocator's C APIs:
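A runnable sketch of the shape such a shim takes is below. The C function names (modem_malloc, modem_free) are hypothetical stand-ins for the firmware's real allocator APIs; here they are backed by Rust's system allocator so the example is self-contained, whereas in the firmware they would be extern "C" declarations resolved at link time.

```rust
use std::alloc::{GlobalAlloc, Layout, System};

// Hypothetical C-style allocator entry points, backed by the system
// allocator purely so this sketch runs standalone.
#[no_mangle]
pub extern "C" fn modem_malloc(size: usize, align: usize) -> *mut u8 {
    unsafe { System.alloc(Layout::from_size_align(size, align).unwrap()) }
}

#[no_mangle]
pub extern "C" fn modem_free(ptr: *mut u8, size: usize, align: usize) {
    unsafe { System.dealloc(ptr, Layout::from_size_align(size, align).unwrap()) }
}

struct ModemAllocator;

// Route all Rust heap allocations through the C-style allocator shims.
unsafe impl GlobalAlloc for ModemAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        modem_malloc(layout.size(), layout.align())
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        modem_free(ptr, layout.size(), layout.align())
    }
}

#[global_allocator]
static GLOBAL: ModemAllocator = ModemAllocator;

fn main() {
    // This Vec is allocated through the shims above.
    let v: Vec<u32> = (0..4).collect();
    assert_eq!(v.iter().sum::<u32>(), 6);
    println!("allocations flow through the C-style allocator");
}
```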
Pixel modem firmware already implements a backend for the Pigweed crash facade as the global crash handler. Exposing it into Rust panic_handler through FFI unifies the crash handling for both Rust and C/C++ code.
#![no_std]
use core::panic::PanicInfo;
extern "C" {
pub fn PwCrashBackend(signature: *const i8, file_name: *const i8, line: u32);
}
#[panic_handler]
fn panic(panic_info: &PanicInfo) -> ! {
let mut filename = "";
let mut line_number: u32 = 0;
if let Some(location) = panic_info.location() {
filename = location.file();
line_number = location.line();
}
let mut cstr_buffer = [0u8; 128];
// Never writes to the last byte to make sure `cstr_buffer` is always zero
// terminated.
let (_, writer) = cstr_buffer.split_last_mut().unwrap();
for (place, ch) in writer.iter_mut().zip(filename.bytes()) {
*place = ch;
}
unsafe {
PwCrashBackend(
"Rust panic\0".as_ptr() as *const i8,
cstr_buffer.as_ptr() as *const i8,
line_number,
);
}
loop {}
}
Link Rust staticlib
The Pixel modem firmware linking has a step that calls the linker to link all the objects generated from C/C++ code. By using llvm-ar -x to extract the object files from the combined Rust staticlib and supplying them to the linker, the Rust code appears in the final modem image.
We also experienced a performance issue caused by weak symbols during linking. The inclusion of Rust core and compiler_builtins caused unexpected power and performance regressions in various tests. Upon analysis, we realized that the optimized implementations of memset and memcpy provided by the modem firmware were accidentally replaced by those defined in compiler_builtins. This happens because both the compiler_builtins crate and the existing codebase define the symbols as weak, so the linker has no way to decide which one should take precedence. We fixed the regression by stripping those symbols from the compiler_builtins crate before linking, using a one-line shell script.
For the DNS parser, we declared the DNS response parsing API in C and then implemented the same API in Rust.
int32_t process_dns_response(uint8_t*, int32_t);
The Rust function returns an integer error code. The DNS answers received in the response must be written into in-memory data structures that are coupled to the original C implementation, so we reuse the existing C functions to do this, dispatching to them from the Rust implementation.
#[no_mangle]
pub unsafe extern "C" fn process_dns_response(
dns_response: *const u8,
response_len: i32,
) -> i32 {
//... validate inputs `dns_response` and `response_len`.
// SAFETY:
// It is safe because `dns_response` is null checked above. `response_len`
// is passed in, safe as long as it is set correctly by vendor code.
match process_response(unsafe {
slice::from_raw_parts(dns_response, response_len)
}) {
Ok(()) => 0,
Err(err) => err.into(),
}
}
fn process_response(response: &[u8]) -> Result<()> {
let response = hickory_proto::op::Message::from_bytes(response)?;
let response = hickory_proto::xfer::DnsResponse::from_message(response)?;
for answer in response.answers() {
match answer.record_type() {
hickory_proto::RecordType:... => {
// SAFETY:
// It is safe because the callback function does not store
// reference of the inputs or their members.
unsafe {
callback_to_c_function(...)?;
}
}
// ... more match arms omitted.
}
}
Ok(())
}
In our case, the DNS response parsing API is simple enough to hand-write, while the callbacks to C functions for handling the response involve complex data-type conversions. Therefore, we leveraged bindgen to generate the FFI code for the callbacks.
Build third-party crates
Even with all features disabled, hickory-proto introduces more than 30 dependent crates. Manually written build rules make correctness difficult to ensure and scale poorly when upgrading dependencies to new versions.
Fuchsia has developed cargo-gnaw to support building its third-party Rust crates. Cargo-gnaw works by invoking cargo metadata to resolve dependencies, then parsing the result and generating GN build rules. This ensures correctness and eases maintenance.
Conclusion
The Pixel 10 series of phones marks a pivotal moment, being the first Pixel device to integrate a memory-safe language into its modem.
While replacing one piece of risky attack surface is itself valuable, this project lays the foundation for future integration of memory-safe parsers and code into the cellular baseband, ensuring the baseband’s security posture will continue to improve as development continues.
Special thanks to Armando Montanez, Bjorn Mellem, Boky Chen, Cheng-Yu Tsai, Dominik Maier, Erik Gilling, Ever Rosales, Hungyen Weng, Ivan Lozano, James Farrell, Jeffrey Vander Stoep, Jiacheng Lu, Jingjing Bu, Min Xu, Murphy Stein, Ray Weng, Shawn Yang, Sherk Chung, Stephan Chen, Stephen Hines.
Cybersecurity researchers have flagged yet another evolution of the ongoing GlassWorm campaign, which employs a new Zig dropper that’s designed to stealthily infect all integrated development environments (IDEs) on a developer’s machine.
The technique has been discovered in an Open VSX extension named “specstudio.code-wakatime-activity-tracker,” which masquerades as WakaTime, a
While much of the discussion on AI security centers around protecting ‘shadow’ AI and GenAI consumption, there’s a wide-open window nobody’s guarding: AI browser extensions.
A new report from LayerX exposes just how deep this blind spot goes, and why AI extensions may be the most dangerous AI threat surface in your network that isn’t on anyone’s
Google has made Device Bound Session Credentials (DBSC) generally available to all Windows users of its Chrome web browser, months after it began testing the security feature in open beta.
The public availability is currently limited to Windows users on Chrome 146, with macOS expansion planned in an upcoming Chrome release.
“This project represents a significant
A critical security vulnerability in Marimo, an open-source Python notebook for data science and analysis, has been exploited within 10 hours of public disclosure, according to findings from Sysdig.
The vulnerability in question is CVE-2026-39987 (CVSS score: 9.3), a pre-authenticated remote code execution vulnerability impacting all versions of Marimo prior to and including
Details have emerged about a now-patched security vulnerability in a widely used third-party Android software development kit (SDK) called EngageLab SDK that could have put millions of cryptocurrency wallet users at risk.
“This flaw allows apps on the same device to bypass Android security sandbox and gain unauthorized access to private data,” the Microsoft Defender
Posted by Ben Ackerman, Chrome team, Daniel Rubery, Chrome team and Guillaume Ehinger, Google Account Security team
Following our April 2024 announcement, Device Bound Session Credentials (DBSC) is now entering public availability for Windows users on Chrome 146, and expanding to macOS in an upcoming Chrome release. This project represents a significant step forward in our ongoing efforts to combat session theft, which remains a prevalent threat in the modern security landscape.
Session theft typically occurs when a user inadvertently downloads malware onto their device. Once active, the malware can silently extract existing session cookies from the browser or wait for the user to log in to new accounts, before exfiltrating these tokens to an attacker-controlled server. Infostealer malware families, such as LummaC2, have become increasingly sophisticated at harvesting these credentials. Because cookies often have extended lifetimes, attackers can use them to gain unauthorized access to a user’s accounts without ever needing their passwords; this access is then often bundled, traded, or sold among threat actors.
Crucially, once sophisticated malware has gained access to a machine, it can read the local files and memory where browsers store authentication cookies. As a result, there is no reliable way to prevent cookie exfiltration using software alone on any operating system. Historically, mitigating session theft relied on detecting the stolen credentials after the fact using a complex set of abuse heuristics – a reactive approach that persistent attackers could often circumvent. DBSC fundamentally changes the web's capability to defend against this threat by shifting the paradigm from reactive detection to proactive prevention, ensuring that successfully exfiltrated cookies cannot be used to access users’ accounts.
How DBSC Works
DBSC protects against session theft by cryptographically binding authentication sessions to a specific device. It does this using hardware-backed security modules, such as the Trusted Platform Module (TPM) on Windows and the Secure Enclave on macOS, to generate a unique public/private key pair that cannot be exported from the machine. The issuance of new short-lived session cookies is contingent upon Chrome proving possession of the corresponding private key to the server. Because attackers cannot steal this key, any exfiltrated cookies quickly expire and become useless to those attackers. This design allows large and small websites to upgrade to secure, hardware-bound sessions by adding dedicated registration and refresh endpoints to their backends, while maintaining complete compatibility with their existing front-end. The browser handles the complex cryptography and cookie rotation in the background, allowing the web app to continue using standard cookies for access just as it always has.
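The refresh flow described above is, at its core, a challenge-response proof of possession. The toy sketch below models only the message flow: a keyed hash stands in for the TPM-backed asymmetric signature (real DBSC verifies proofs against a per-session public key, and the private key never leaves the device's secure hardware), and the Server type, endpoint names, and fixed challenge are all illustrative.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy "signature": a keyed hash standing in for a hardware-backed signature.
fn tag(key: u64, challenge: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (key, challenge).hash(&mut h);
    h.finish()
}

struct Server {
    registered_key: u64,
    challenge: u64,
}

impl Server {
    // Registration: the browser enrolls the device-bound key for this session.
    // A fixed challenge is used here purely for illustration.
    fn register(key: u64) -> Server {
        Server { registered_key: key, challenge: 7 }
    }

    // Refresh: issue a new short-lived cookie only if the client proves
    // possession of the key bound at registration.
    fn refresh(&self, proof: u64) -> Option<&'static str> {
        (proof == tag(self.registered_key, self.challenge)).then_some("fresh-session-cookie")
    }
}

fn main() {
    let device_key = 0xDEAD_BEEF; // never exportable from the device in real DBSC
    let server = Server::register(device_key);

    // Legitimate browser: signs the server's challenge with the bound key.
    assert_eq!(
        server.refresh(tag(device_key, server.challenge)),
        Some("fresh-session-cookie")
    );

    // Attacker with stolen cookies but no key: cannot produce a valid proof.
    assert_eq!(server.refresh(0), None);
    println!("refresh succeeds only with the device-bound key");
}
```

The point of this shape is that exfiltrated cookies alone never satisfy the refresh endpoint, so stolen cookies simply expire.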
Google rolled out an early version of this protocol over the last year. For sessions protected by DBSC, we have observed a significant reduction in session theft since its launch.
An overview of the DBSC protocol showing the interaction between the browser and server.
Private by design
A core tenet of the DBSC architecture is the preservation of user privacy. Each session is backed by a distinct key, preventing websites from using these credentials to correlate a user's activity across different sessions or sites on the same device. Furthermore, the protocol is designed to be lean: it does not leak device identifiers or attestation data to the server beyond the per-session public key required to certify proof of possession. This minimal information exchange ensures DBSC helps secure sessions without enabling cross-site tracking or acting as a device fingerprinting mechanism.
Engagement with the ecosystem
DBSC was designed from the beginning to be an open web standard, developed through the W3C process and adopted by the Web Application Security Working Group. Through this process we partnered with Microsoft on the design to ensure it works for the web, and gathered input from many across the industry who are responsible for web security.
Additionally, over the past year we have conducted two Origin Trials to ensure DBSC effectively serves the requirements of the broader web community. Many web platforms, including Okta, actively participated in these trials, ran their own testing, and provided essential feedback to ensure the protocol addresses their diverse needs.
If you are a web developer looking for a way to secure your users against session theft, refer to our developer guide for implementation details. Additionally, all the details about DBSC can be found in the spec and the corresponding GitHub repository. Feel free to use the issues page to report bugs or file feature requests.
Future improvements
As we continue to evolve the DBSC standard, future iterations will focus on increasing support across diverse ecosystems and introducing advanced capabilities tailored for complex enterprise environments. Key areas of ongoing development include:
Securing Federated Identity: In modern enterprise environments, Single Sign-On (SSO) is ubiquitous. We are expanding the DBSC protocol to support cross-origin bindings, ensuring that a relying party (RP) session remains continuously bound to the same original device key used by the Identity Provider (IdP). This guarantees that the high-assurance security of the initial device binding is maintained throughout the entire federated login process, creating an unbroken chain of trust.
Advanced Registration Capabilities: While DBSC provides robust protection for established cookies, some environments require an even stronger foundation when the session is first created. We are developing mechanisms to bind DBSC sessions to pre-existing, trusted key material rather than generating a new key at sign-in. This advanced capability enables websites to integrate complementary technologies, such as mTLS certificates or hardware security keys, creating a highly secure registration environment.
Broader Device Support: We are also actively exploring the potential addition of software-based keys to extend protections to devices without dedicated secure hardware.
A previously undocumented threat cluster dubbed UAT-10362 has been attributed to spear-phishing campaigns targeting Taiwanese non-governmental organizations (NGOs) and suspected universities to deploy a new Lua-based malware called LucidRook.
“LucidRook is a sophisticated stager that embeds a Lua interpreter and Rust-compiled libraries within a dynamic-link library (DLL) to download and
From hallucinations and bias to model collapse and adversarial abuse, today’s AI is built on probability rather than truth, yet enterprises are deploying it at speed without fully understanding the risks.
Thursday. Another week, another batch of things that probably should’ve been caught sooner but weren’t.
This one’s got some range — old vulnerabilities getting new life, a few “why was that even possible” moments, attackers leaning on platforms and tools you’d normally trust without thinking twice. Quiet escalations more than loud zero-days, but the kind that matter more in
As AI tools become more accessible, employees are adopting them without formal approval from IT and security teams. While these tools may boost productivity, automate tasks, or fill gaps in existing workflows, they also operate outside the visibility of security teams, bypassing controls and creating new blind spots in what is known as shadow AI. While similar to the phenomenon of
Threat actors have been exploiting a previously unknown zero-day vulnerability in Adobe Reader using maliciously crafted PDF documents since at least December 2025.
The finding, detailed by EXPMON’s Haifei Li, has been described as a highly sophisticated PDF exploit. The artifact (“Invoice540.pdf”) first appeared on the VirusTotal platform on November 28, 2025. A second
We added a new chapter to our Testing Handbook: a comprehensive security checklist for C and C++ code. We’ve identified a broad range of common bug classes, known footguns, and API gotchas across C and C++ codebases and organized them into sections covering Linux, Windows, and seccomp. Whereas other handbook chapters focus on static and dynamic analysis, this chapter offers a strong basis for manual code review.
LLM enthusiasts rejoice: we’re also developing a Claude skill based on this new chapter. It will turn the checklist into bug-finding prompts that an LLM can run against a codebase, and it’ll be platform and threat-model aware. Be sure to give it a try when we release it.
And after reading the chapter, you can test your C/C++ review skills against two challenges at the end of this post. Be in the first 10 to submit correct answers to win Trail of Bits swag!
What’s in the chapter
The chapter covers five areas: general bug classes, Linux usermode and kernel, Windows usermode and kernel, and seccomp/BPF sandboxes. It starts with language-level issues in the bug classes section—memory safety, integer errors, type confusion, compiler-introduced bugs—and gets progressively more environment-specific.
The Linux usermode section focuses on libc gotchas and is also applicable to most POSIX systems. It ranges from well-known problems with string functions to less widely known caveats around privilege dropping and environment-variable handling. The Linux kernel is a complicated beast, and no checklist could cover more than a fraction of its intricacies; however, our new Testing Handbook chapter gives you a starting point for bootstrapping manual reviews of drivers and modules.
The Windows sections cover DLL planting, unquoted path vulnerabilities in CreateProcess, and path traversal issues. This last bug class includes concerns like WorstFit Unicode bugs, where characters outside the basic ANSI set can be reinterpreted in ways that bypass path checks entirely. The kernel section addresses driver-specific concerns such as device access controls, denial of service through improper spinlock usage, security issues arising from passing handles from usermode to kernelmode, and various sharp edges in Windows kernel APIs.
Linux seccomp and BPF features are often used for sandboxing. While more modern tools like Landlock and namespaces exist for this task, we still see a combination of these older features during audits. And we always uncover a lot of issues. The new Testing Handbook chapter covers sandbox bypasses we’ve seen, like io_uring syscalls that execute without the BPF filter ever seeing them, the CLONE_UNTRACED flag that lets a tracee effectively disable seccomp filters, and memory-level race conditions in ptrace-based sandboxes.
Test your review skills
We’ve provided two challenges below that contain real bug classes from the checklist. Try to spot the issues, then submit your answers. If you’re in the first 10 to submit correct answers, you’ll receive Trail of Bits swag. The challenge will close April 17, so get your answers in before then.
Stuck? Don’t worry. We’ll be publishing the answers in a follow-up blog post, so don’t forget to #like and #subscribe, by which we mean add our RSS feed to your reader.
The many quirks of Linux libc
In this simple ping program, there are two libc gotchas that make the program trivially exploitable. Can you find and explain the issues? If you can’t, check out the handbook chapter. Both bugs are covered in the Linux usermode section.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <arpa/inet.h>

#define ALLOWED_IP "127.3.3.1"

int main() {
    char ip_addr[128];
    struct in_addr to_ping_host, trusted_host;

    // get address
    if (!fgets(ip_addr, sizeof(ip_addr), stdin))
        return 1;
    ip_addr[strcspn(ip_addr, "\n")] = 0;

    // verify address
    if (!inet_aton(ip_addr, &to_ping_host))
        return 1;
    char *ip_addr_resolved = inet_ntoa(to_ping_host);

    // prevent SSRF
    if ((ntohl(to_ping_host.s_addr) >> 24) == 127)
        return 1;

    // only allowed
    if (!inet_aton(ALLOWED_IP, &trusted_host))
        return 1;
    char *trusted_resolved = inet_ntoa(trusted_host);
    if (strcmp(ip_addr_resolved, trusted_resolved) != 0)
        return 1;

    // ping
    char cmd[256];
    snprintf(cmd, sizeof(cmd), "ping '%s'", ip_addr);
    system(cmd);
    return 0;
}
Windows driver registry gotchas
This Windows Driver Framework (WDF) driver request handler queries product version values from the registry. There are several bugs here, including an easy-to-exploit denial of service, but one of them leads to kernel code execution by messing with the registry values. Can you figure out the bug and how to exploit it?
NTSTATUS InitServiceCallback(_In_ WDFREQUEST Request)
{
    NTSTATUS status;
    PWCHAR regPath = NULL;
    size_t bufferLength = 0;

    // fetch the product registry path from the request
    status = WdfRequestRetrieveInputBuffer(Request, 4, &regPath, &bufferLength);
    if (!NT_SUCCESS(status)) {
        TraceEvents(TRACE_LEVEL_ERROR, TRACE_QUEUE,
                    "%!FUNC! Failed to retrieve input buffer. Status: %d", (int)status);
        return status;
    }

    /* check that the buffer size is a null-terminated
       Unicode (UTF-16) string of a sensible size */
    if (bufferLength < 4 || bufferLength > 512 || (bufferLength % 2) != 0 ||
        regPath[(bufferLength / 2) - 1] != L'\0') {
        TraceEvents(TRACE_LEVEL_ERROR, TRACE_QUEUE,
                    "%!FUNC! Buffer length %d was incorrect.", (int)bufferLength);
        return STATUS_INVALID_PARAMETER;
    }

    ProductVersionInfo version = {0};
    HandlerCallback handlerCallback = NewCallback;
    int readValue = 0;

    // read the major version from the registry
    RTL_QUERY_REGISTRY_TABLE regQueryTable[2];
    RtlZeroMemory(regQueryTable, sizeof(RTL_QUERY_REGISTRY_TABLE) * 2);
    regQueryTable[0].Name = L"MajorVersion";
    regQueryTable[0].EntryContext = &readValue;
    regQueryTable[0].Flags = RTL_QUERY_REGISTRY_DIRECT;
    regQueryTable[0].QueryRoutine = NULL;
    status = RtlQueryRegistryValues(RTL_REGISTRY_ABSOLUTE, regPath, regQueryTable, NULL, NULL);
    if (!NT_SUCCESS(status)) {
        TraceEvents(TRACE_LEVEL_ERROR, TRACE_QUEUE,
                    "%!FUNC! Failed to query registry. Status: %d", (int)status);
        return status;
    }
    TraceEvents(TRACE_LEVEL_INFORMATION, TRACE_QUEUE,
                "%!FUNC! Major version is %d", (int)readValue);
    version.Major = readValue;

    if (version.Major < 3) {
        // versions prior to 3.0 need an additional check
        RtlZeroMemory(regQueryTable, sizeof(RTL_QUERY_REGISTRY_TABLE) * 2);
        regQueryTable[0].Name = L"MinorVersion";
        regQueryTable[0].EntryContext = &readValue;
        regQueryTable[0].Flags = RTL_QUERY_REGISTRY_DIRECT;
        regQueryTable[0].QueryRoutine = NULL;
        status = RtlQueryRegistryValues(RTL_REGISTRY_ABSOLUTE, regPath, regQueryTable, NULL, NULL);
        if (!NT_SUCCESS(status)) {
            TraceEvents(TRACE_LEVEL_ERROR, TRACE_QUEUE,
                        "%!FUNC! Failed to query registry. Status: %d", (int)status);
            return status;
        }
        TraceEvents(TRACE_LEVEL_INFORMATION, TRACE_QUEUE,
                    "%!FUNC! Minor version is %d", (int)readValue);
        version.Minor = readValue;
        if (!DoesVersionSupportNewCallback(version)) {
            handlerCallback = OldCallback;
        }
    }
    SetGlobalHandlerCallback(handlerCallback);
}
We’re not done yet
Our goal is to continuously update the handbook, including this chapter, so that it remains a key resource for security practitioners and developers who are involved in the source code security review process. If your favorite gotcha is not there, please send us a PR.
Checklist-based review, even combined with skilled-up LLMs, is only a single step in securing a system. Do it, but remember that it’s just a starting point for manual review, not a substitute for deep expertise. If you need help securing your C/C++ systems, contact us.
In late 2024, the federal government’s cybersecurity evaluators rendered a troubling verdict on one of Microsoft’s biggest cloud computing offerings.
The tech giant’s “lack of proper detailed security documentation” left reviewers with a “lack of confidence in assessing the system’s overall security posture,” according to an internal government report reviewed by ProPublica.
Or, as one member of the team put it: “The package is a pile of shit.”
For years, reviewers said, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn’t vouch for the technology’s security...
An apparent hack-for-hire campaign likely orchestrated by a threat actor with suspected ties to the Indian government targeted journalists, activists, and government officials across the Middle East and North Africa (MENA), according to findings from Access Now, Lookout, and SMEX.
Two of the targets included prominent Egyptian journalists and government critics, Mostafa
Hackers vowed to revive their efforts against America when the time was right — demonstrating how digital warfare has become ingrained in military conflict.
Cybersecurity researchers have flagged a new variant of malware called Chaos that’s capable of hitting misconfigured cloud deployments, marking an expansion of the botnet’s targeting infrastructure.
“Chaos malware is increasingly targeting misconfigured cloud deployments, expanding beyond its traditional focus on routers and edge devices,” Darktrace said in a new report.
Cybersecurity researchers have lifted the curtain on a stealthy botnet that’s designed for distributed denial-of-service (DDoS) attacks.
Called Masjesu, the botnet has been advertised via Telegram as a DDoS-for-hire service since it first surfaced in 2023. It’s capable of targeting a wide range of IoT devices, such as routers and gateways, spanning multiple architectures.
“Built for
The Fragmented State of Modern Enterprise Identity
Enterprise IAM is approaching a breaking point. As organizations scale, identity becomes increasingly fragmented across thousands of applications, decentralized teams, machine identities, and autonomous systems.
The result is Identity Dark Matter: identity activity that sits outside the visibility of centralized IAM and
A malicious supply chain compromise has been identified in the Python Package Index package litellm version 1.82.8. The published wheel contains a malicious .pth file (litellm_init.pth, 34,628 bytes) which is automatically executed by the Python interpreter on every startup, without requiring any explicit import of the litellm module.
There are a lot of really boring things we need to do to help secure all of these critical libraries: SBOMs, SLSA, SigStore. But we have to do them.
AI is rapidly changing how software is written, deployed, and used. Trends point to a future where AIs can write custom software quickly and easily: “instant software.” Taken to an extreme, it might become easier for a user to have an AI write an application on demand—a spreadsheet, for example—and delete it when they’re done using it than to buy one commercially. Future systems could include a mix: both traditional long-term software and ephemeral instant software that is constantly being written, deployed, modified, and deleted.
AI is changing cybersecurity as well. In particular, AI systems are getting better at finding and patching vulnerabilities in code. This has implications for both attackers and defenders, depending on the ways this and related technologies improve...
WhatsApp’s new “Private Inference” feature represents one of the most ambitious attempts to combine end-to-end encryption with AI-powered capabilities, such as message summarization. To make this possible, Meta built a system that processes encrypted user messages inside trusted execution environments (TEEs), secure hardware enclaves designed so that not even Meta can access the plaintext. Our now-public audit, conducted before launch, identified several vulnerabilities that compromised WhatsApp’s privacy model, all of which Meta has patched. Our findings show that TEEs aren’t a silver bullet: every unmeasured input and missing validation can become a vulnerability, and to securely deploy TEEs, developers need to measure critical data, validate and never trust any unmeasured data, and test thoroughly to detect when components misbehave.
The challenge of using AI with end-to-end encryption
WhatsApp’s Private Processing attempts to resolve a fundamental tension: WhatsApp is end-to-end encrypted, so Meta’s servers cannot read, alter, or analyze user messages. However, if users also want to opt in to AI-powered features like message summarization, this typically requires sending plaintext data to servers for computationally expensive processing. To solve this, Meta uses TEEs based on AMD’s SEV-SNP and Nvidia’s confidential GPU platforms to process messages in a secure enclave where even Meta can’t access them or learn meaningful information about the message contents.
The stakes in WhatsApp are high, as vulnerabilities could expose millions of users’ private messages. Our review identified 28 issues, including eight high-severity findings that could have enabled attackers to bypass the system’s privacy guarantees. The following sections explore noteworthy findings from the audit, how they were fixed, and the lessons they impart.
Key lessons for TEE deployments
Lesson 1: Never trust data outside your measurement
In TEE systems, an “attestation measurement” is a cryptographic checksum of the code running in the secure enclave; it’s what clients check to ensure they’re interacting with legitimate, unmodified software. We discovered that WhatsApp’s system loaded configuration files containing environment variables after this fingerprint was taken (issue TOB-WAPI-13 in the report).
This meant that a malicious insider at Meta could inject an environment variable, such as LD_PRELOAD=/path/to/evil.so, forcing the system to load malicious code when it started up. The attestation would still verify as valid, but the attacker’s malicious code would be running inside, potentially violating the system’s security or privacy guarantees by, for example, logging every message being processed to a secret server.
Meta fixed this by strictly validating environment variables: they can now contain only safe characters (alphanumeric plus a few symbols like dots and dashes), and the system explicitly checks for dangerous variables like LD_PRELOAD. Every piece of data your TEE loads must either be part of the measured boot process or be treated as potentially hostile.
Lesson 2: Do not trust data outside your measurement (have we already mentioned this?)
ACPI tables are configuration data that inform an operating system about the available hardware and how to interact with it. We found these tables weren’t included in the attestation measurement (TOB-WAPI-17), creating a backdoor for attackers.
Here’s why this matters: a malicious hypervisor (the software layer that manages virtual machines) could inject fake ACPI tables defining malicious “devices” that can read and write to arbitrary memory locations. When the secure VM boots up, it processes these tables and grants the fake devices access to memory regions that should be protected. An attacker could use this to extract user messages or encryption keys directly from the VM’s memory, and the attestation report will still verify as valid and untampered.
Meta addressed this by implementing a custom bootloader that verifies ACPI table signatures as part of the secure boot process. Now, any tampering with these tables will change the attestation measurement, alerting clients that something is wrong.
Lesson 3: Correctly verify security patch levels
AMD regularly releases security patches for its SEV-SNP firmware, fixing vulnerabilities that could allow attackers to compromise the secure environment. The WhatsApp system did check these patch levels, but it made an important error: it trusted the patch level that the firmware claimed to be running (in the attestation report), rather than verifying it against AMD’s cryptographic certificate (TOB-WAPI-8).
An attacker who had compromised an older, vulnerable firmware could simply lie about their patch level. Researchers have publicly demonstrated attacks that can extract encryption keys from older SEV-SNP firmware versions. An attacker could use these published techniques against WhatsApp users to exfiltrate secret data while the client incorrectly believes it’s connected to a secure, updated system.
Meta’s solution was to validate patch levels against the VCEK certificate’s X.509 extensions. These extensions are cryptographically signed data from AMD that can’t be forged by compromised firmware. The client then enforces minimum patch levels based on values set in the WhatsApp client source code.
Lesson 4: Attestations need freshness guarantees
Before our review, when a client connected to the Private Processing system, the server would generate an attestation report proving its identity, but this report didn’t include any timestamp or random value from the client (TOB-WAPI-7). This meant that an attacker who compromised a TEE once could save its attestation report and TLS keys, then replay them indefinitely.
Achieving a one-time compromise of a TEE is typically much more feasible and much less severe than a persistent compromise affecting each individual session. For example, consider an attacker who can extract TLS session keys through a side channel attack or other vulnerability. For a single attack, the impact tends to be short-lived, as the forward security of TLS makes the exploit impactful for only a single TLS session. However, without freshness, that single success becomes a permanent backdoor because the TEE’s attestation report from that compromised session can be replayed indefinitely. In particular, the attacker can now run a fake server anywhere in the world, presenting the stolen attestation to clients who will trust it completely. Every WhatsApp user who connects would send their messages to the attacker’s server, believing it’s a secure Meta TEE.
Meta addressed this issue by including the TLS client_random nonce in every attestation report. Now each attestation is tied to a specific connection and can’t be replayed. When implementing remote-attested transport protocols, we recommend performing attestation over a value derived from the handshake transcript, such as the scheme specified in the IETF draft Remote Attestation with Exported Authenticators.
How Meta fixed the remaining issues
Before launch, Meta resolved 16 issues completely and partially addressed four others. The remaining eight, all of low or informational severity, were deliberately left unaddressed; Meta provided a justification for each of these decisions, which can be reviewed in appendix F of our audit report. In addition, they’ve implemented broader improvements, such as automated build pipelines with provenance verification and published authorized host identities in external logs.
Beyond individual vulnerabilities: Systemic challenges in TEE deployment
While Meta has resolved these specific issues, our audit revealed the need to solve more complex challenges in securing TEE-based systems.
Physical security matters: The AMD SEV-SNP threat model doesn’t fully protect against advanced physical attacks. Meta needed to implement additional controls around which CPUs could be trusted (TOB-WAPI-10). If you are interested in a more detailed discussion on physical attacks targeting these platforms, check out our webinar, which discusses recently published physical attacks targeting both AMD SEV-SNP and Intel’s SGX/TDX platforms.
Transparency requires reproducibility: For external researchers to verify the system’s security, they need to be able to reproduce and examine the CVM images. Meta has made progress in this area, but achieving full reproducibility remains challenging, as issue TOB-WAPI-18 demonstrates.
Complex systems need comprehensive testing: Many of the issues we found could have been caught with negative testing, specifically testing what happens when components misbehave or when malicious inputs are provided.
The path forward for securely deploying TEEs
Can TEEs enable privacy-preserving AI features? Our audit suggests the answer is yes, but only with rigorous attention to implementation details. The issues we found weren’t fundamental flaws in the TEE model but rather implementation and deployment gaps that a determined attacker could exploit. These are subtle flaws that other TEE deployments are likely to replicate.
This audit shows that while TEEs provide strong isolation primitives, the large host-guest attack surface requires careful design and implementation. Every unmeasured input, every missing validation, and every assumption about the execution environment can become a vulnerability. Your system is only as secure as your TEE implementation and deployment.
For teams building on TEEs, our advice is clear: engage security reviewers early, invest in comprehensive testing (especially negative testing), and remember that security in these systems comes from getting hundreds of details right, not just the big architectural decisions.
The promise of confidential computing is compelling. But, as this audit shows, realizing that promise requires rigorous attention to security at every layer of the stack.
For more details on the technical findings and Meta’s fixes, see our full audit report. If you’re building systems with TEEs and want to discuss security considerations, we offer free office hours sessions where we can share insights from our extensive experience with these technologies.
According to a new law, the Hong Kong police can demand that you reveal the encryption keys protecting your computer, phone, hard drives, etc.—even if you are just transiting the airport.
In a security alert dated March 26, the U.S. Consulate General said that, on March 23, 2026, Hong Kong authorities changed the rules governing enforcement of the National Security Law. Under the revised framework, police can require individuals to provide passwords or other assistance to access personal electronic devices, including cellphones and laptops.