🔧 Cloud IR · Supply Chain

The Pipeline Is the Target

What three weeks of supply chain attacks taught us about cloud incident response

CQ Forensics Team · Cloud Incident Response
โ† Back to blog

When a security scanner becomes the delivery mechanism for the malware it was meant to detect, something fundamental has shifted. That's not a thought experiment. It happened in February 2026, when Trivy, a tool trusted inside CI pipelines across thousands of organizations to find vulnerabilities, was itself compromised and used to steal the secrets of the pipelines that ran it.

The weeks that followed only deepened the pattern. Between late February and late April 2026, a sustained campaign attributed primarily to the threat group TeamPCP moved methodically across ecosystems: GitHub Actions first, then npm, then PyPI. Major packages (Trivy, LiteLLM, Telnyx, xinference) were compromised not through typo-squatted imitations but through unauthorized access to the real projects, with malicious versions published through legitimate channels. Then, in a 48-hour window between April 21 and 23, three separate campaigns simultaneously hit npm, PyPI, and Docker Hub, all with the same objective: steal credentials from developer environments and CI/CD pipelines.

We've spent time analyzing these campaigns, the forensic artifacts they leave, and the gaps they expose in how most organizations think about cloud incident response. What follows is what we've learned, and why we built something to help.

The anatomy of a modern supply chain attack

The mental model most organizations carry into cloud security was shaped by an older threat landscape: an external attacker finds a vulnerability in an internet-facing application, exploits it, and establishes a foothold on a host. From there, they move laterally, escalate privilege, and extract data. The response playbook reflects this: isolate the host, image the disk, analyze the filesystem.

The supply chain campaigns of early 2026 do not follow this sequence at all. The attacker never touches a vulnerable application. They never appear at a network perimeter. They show up inside a build system as a package update, installed by the CI runner itself, running with the full set of environment variables that runner has access to, authenticated to everything the pipeline is authenticated to.

What TeamPCP actually targeted

Every payload documented across the Trivy, LiteLLM, Telnyx, and xinference compromises had the same collection list: environment variables, SSH keys, cloud credentials (AWS, Azure, GCP), Kubernetes configs, Docker registry tokens, shell history, database connection strings, CI/CD secrets, and npm and PyPI publish tokens. The goal was not to disrupt software delivery. The goal was to extract the keys to the kingdom and use them to expand the campaign further.
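As a hunting aid, that collection list can be turned into a quick check for what a compromised runner could have harvested from its environment. A minimal Python sketch; the variable-name patterns below are illustrative examples of the credential classes named above, not the campaign's actual target list:

```python
import os
import re

# Illustrative patterns for the credential classes discussed above.
# A real sweep would use a curated, much larger pattern set.
SECRET_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"AWS_(SECRET_)?ACCESS_KEY",
        r"AZURE_.*(SECRET|PASSWORD|KEY)",
        r"GOOGLE_APPLICATION_CREDENTIALS",
        r"NPM_TOKEN",
        r"PYPI.*TOKEN",
        r"DOCKER.*(TOKEN|PASSWORD)",
        r"KUBECONFIG",
        r".*(DATABASE|DB)_URL",
    )
]

def sensitive_env_names(environ=os.environ):
    """Return the names (never the values) of environment variables
    that match a known credential pattern."""
    return sorted(
        name for name in environ
        if any(p.search(name) for p in SECRET_PATTERNS)
    )
```

Running this inside a CI job answers the scoping question directly: if the job installed a compromised package version, everything this function returns was in reach of the payload.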

This is what makes these attacks so forensically challenging. The malicious code runs in a legitimate context, under a legitimate identity, performing legitimate-looking network operations (fetching dependencies, uploading artifacts), except that it is also silently encoding and exfiltrating credentials to C2 infrastructure. The CanisterSprawl worm documented by Socket and StepSecurity took this further: once it found npm publish tokens on a compromised machine, it automatically identified every package that token could publish, bumped the patch version, injected its payload, and republished them. If it also found PyPI credentials, it jumped ecosystems entirely. The campaign propagated itself.
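That propagation behavior leaves a recognizable registry-side signature: one publish token patch-bumping many distinct packages within a short window. A minimal detection sketch, assuming access to publish-event data; the window and threshold values are illustrative, not tuned:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def is_patch_bump(old, new):
    """True when only the semver patch component increased by one."""
    o, n = old.split("."), new.split(".")
    return o[:2] == n[:2] and int(n[2]) == int(o[2]) + 1

def burst_publishers(events, window=timedelta(hours=1), threshold=5):
    """Given (token, package, old_version, new_version, timestamp)
    publish events, return tokens that patch-bumped `threshold` or more
    distinct packages inside `window` -- the worm-style signature."""
    by_token = defaultdict(list)
    for token, pkg, old, new, ts in events:
        if is_patch_bump(old, new):
            by_token[token].append((ts, pkg))
    flagged = set()
    for token, pubs in by_token.items():
        pubs.sort()  # order by timestamp
        for i in range(len(pubs)):
            # distinct packages published within one window of pubs[i]
            pkgs = {p for t, p in pubs if pubs[i][0] <= t <= pubs[i][0] + window}
            if len(pkgs) >= threshold:
                flagged.add(token)
                break
    return flagged
```

A human maintainer rarely publishes patch bumps to five unrelated packages in an hour; a worm holding their token does exactly that.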

"The tools teams use to check for risk are not automatically exempt from it. A scanner running with access to CI secrets is a high-value target."

The Trivy compromise makes this concrete. Trivy is a container and filesystem scanner. Organizations run it inside CI pipelines specifically because it has access to everything the pipeline builds: images, filesystems, configuration files. That access, combined with the trust most teams give to security tooling ("it's our scanner, of course it can read everything"), makes it an extraordinarily high-value target. When attackers compromised it via force-pushed tags on trivy-action and setup-trivy, thousands of workflows silently resolved existing version references to attacker-controlled code. The scanner became the exfiltrator.
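The standard defense against force-pushed tags is pinning actions to full commit SHAs, which are immutable, instead of tags, which are not. A quick audit sketch that flags `uses:` references not pinned to a 40-character SHA; the version string and SHA in the usage example are hypothetical:

```python
import re

# A 40-hex ref after '@' is an immutable commit SHA; tags and branch
# names are mutable and can be force-pushed, as described above.
USES_LINE = re.compile(r"^\s*(?:-\s*)?uses:\s*([^\s#]+)@([^\s#]+)")
FULL_SHA = re.compile(r"^[0-9a-f]{40}$")

def unpinned_actions(workflow_text):
    """Return (action, ref) pairs whose ref is not a full commit SHA."""
    findings = []
    for line in workflow_text.splitlines():
        m = USES_LINE.match(line)
        if m and not FULL_SHA.match(m.group(2)):
            findings.append((m.group(1), m.group(2)))
    return findings
```

Run against a workflow file, a line like `uses: aquasecurity/trivy-action@0.28.0` is flagged, while `uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3` passes, because a force-pushed tag cannot change what a pinned SHA resolves to.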

What the forensic picture looks like

When an organization realizes it may have been affected by one of these campaigns, the investigation looks different from a traditional host compromise. There is often no suspicious process tree, no anomalous login, no lateral movement through the network. What there is instead:

The log gap

Most CI/CD platforms capture build logs but not the full environment state at the time of execution. If data-event logging was not enabled on cloud storage at the time of the compromise, you may not be able to determine with certainty what the malicious package read. You will typically know the identity it ran under, and therefore what it could have accessed, which, for notification purposes, is often the assumption you are forced to make. This is one of the most consistent findings we see in post-compromise reviews: organizations that cannot reconstruct what happened because they did not log what they needed to log.
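When data events were captured, the reconstruction is mechanical. A minimal sketch over CloudTrail-style S3 data-event records; the field names follow CloudTrail's record format, but the ARN, bucket, and object names in the usage example are made up:

```python
def objects_read_by(records, identity_arn):
    """From CloudTrail S3 data-event records, list the objects a given
    identity read, as (eventTime, bucket, key) tuples."""
    reads = []
    for r in records:
        if (r.get("eventName") == "GetObject"
                and r.get("userIdentity", {}).get("arn") == identity_arn):
            p = r.get("requestParameters", {})
            reads.append((r.get("eventTime"), p.get("bucketName"), p.get("key")))
    return sorted(reads)
```

With this evidence, "what did the malicious package read" has an exact answer; without it, the answer defaults to "everything the identity could reach."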

The blast radius problem

Pipeline identities are frequently over-privileged. A CI role or service account that was created to deploy to one environment has, over time, accumulated permissions to read secrets across multiple environments, assume roles in production accounts, and push to shared artifact registries. When a compromised package runs under that identity, the blast radius is not the pipeline; it is every system that identity could reach. Scoping a supply chain compromise correctly requires mapping that blast radius before you can determine what to rotate, what to investigate, and what notifications are required.
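Mapping that blast radius can start from the identity's attached policy documents. A simplified sketch that flattens IAM-style policy statements into an action-to-resource reachability map; it collects only Allow statements and ignores conditions and explicit denies, so it over-approximates reach, which is the safe direction for scoping:

```python
from collections import defaultdict

def blast_radius(policies):
    """Flatten a CI identity's policy documents into a map of
    action -> set of resources it can reach. Follows the IAM
    policy-document structure; simplified for illustration."""
    reach = defaultdict(set)
    for doc in policies:
        for stmt in doc.get("Statement", []):
            if stmt.get("Effect") != "Allow":
                continue  # denies/conditions ignored: over-approximate
            actions = stmt["Action"]
            resources = stmt["Resource"]
            for action in ([actions] if isinstance(actions, str) else actions):
                for res in ([resources] if isinstance(resources, str) else resources):
                    reach[action].add(res)
    return dict(reach)
```

Having this map computed per pipeline identity before an incident is what turns "days of policy enumeration" into a lookup.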

What we commonly find after a supply chain compromise

  • Pipeline service accounts or CI roles with production-account access.
  • Long-lived cloud credentials stored as CI environment variables instead of OIDC-federated short-lived tokens.
  • No artifact signing or signature verification at deploy time.
  • No behavioral monitoring on function or runner invocations.
  • Logging enabled on the CI system but not on the downstream cloud resources the pipeline touches.

The time-window challenge

These attacks frequently linger. The TeamPCP campaign moved across Trivy, npm packages, and PyPI packages over a period of weeks. If a compromised package version was installed in a pipeline on day one and the compromise was not publicly disclosed until day fourteen, every secret in scope of that pipeline during that window must be treated as compromised, regardless of whether you can prove exfiltration. In practice, most organizations cannot prove it either way without comprehensive logging, which brings us back to the log gap.

The pattern no one is watching for

The attack pattern that the CanisterSprawl and TeamPCP campaigns illustrate is not new (it was visible in earlier supply chain incidents), but it has become significantly more automated and cross-ecosystem. The most important characteristic: the attack credential-hops. It starts with one compromised package, uses the credentials it finds to compromise additional packages, uses those packages to compromise additional environments, and so on. Each hop adds to the campaign's reach and makes attribution and scoping progressively more difficult.

The Datadog Security Labs analysis of the LiteLLM and Telnyx compromises documented this clearly: the operator moved from project to project, reusing access and tradecraft from each previous stage. By March 20, they were running a self-propagating npm worm across 28 packages in one publisher scope and 16 in another. By March 22, the same callback infrastructure was serving a Kubernetes-focused payload. The campaign kept moving while most of the affected organizations were still trying to understand whether they were affected at all.

The decentralized C2 problem

The CanisterSprawl campaigns used an Internet Computer Protocol (ICP) canister, effectively a smart contract, as a C2 channel. This is notable from a defensive and forensic standpoint because ICP canisters are decentralized, censorship-resistant, and cannot be taken down by a traditional domain seizure or hosting-provider request. The exfiltrated data goes to infrastructure that defenders cannot sinkhole and that remains operational regardless of takedowns elsewhere in the campaign's infrastructure. Blocking by domain name is insufficient; defenders need to be looking at the behavioral pattern (outbound connections from a runner process that should not be making them) rather than the specific destination.
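In practice, that behavioral framing can be as simple as a default-deny egress check on runner connections: anything outside the pipeline's small set of expected destinations gets flagged, regardless of the destination's reputation or takedown status. A sketch; the process names and destinations in the usage example are illustrative:

```python
def unexpected_egress(connections, allowlist):
    """Flag (process, destination) pairs whose destination is neither an
    allowlisted host nor a subdomain of one. Default-deny: an unknown C2
    endpoint is caught without any prior intelligence about it."""
    return [
        (proc, dest) for proc, dest in connections
        if not any(dest == a or dest.endswith("." + a) for a in allowlist)
    ]
```

A CI runner's legitimate destination set is usually tiny (package registries, source host, artifact store), which is what makes this check tractable where it would be hopeless on a developer workstation.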

What good response looks like

We have run supply chain compromise investigations involving each of the major clouds, and the organizations that contain these incidents quickly share a small set of characteristics that are worth naming directly.

They know what their pipeline identities can do. Before an incident, they have enumerated the permissions of every CI service account, IAM role, or service principal used in their build and deploy systems. When a compromise is discovered, they can tell you within minutes what the blast radius is, not after days of policy enumeration.

They use short-lived credentials in CI. Organizations that have migrated to OIDC-federated short-lived credentials (GitHub Actions to AWS via OIDC, Workload Identity Federation to GCP, equivalent patterns in Azure) have a structurally smaller blast radius than those using long-lived keys stored as environment variables. A short-lived token that was valid for the duration of a specific build is not useful to an attacker after the build completes. A long-lived access key stored as a repository secret is.
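The lifetime difference is auditable: OIDC tokens are JWTs carrying `iat` and `exp` claims, so a quick check can confirm that CI credentials live for minutes rather than months. A sketch that decodes the payload without verifying the signature (for audit visibility only; real validation must verify the signature); the token in the usage example is synthetic:

```python
import base64
import json

def token_lifetime_seconds(jwt):
    """Decode the (unverified) payload of a JWT and return exp - iat,
    the token's issued lifetime in seconds."""
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims["exp"] - claims["iat"]
```

A federated CI token should come back in the range of minutes to an hour; a long-lived key stored as a repository secret has no expiry at all, which is exactly the asymmetry described above.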

They have data-event logging enabled. When we can pull Cloud Storage, S3, or Blob storage access logs and see exactly which objects a compromised identity accessed and when, the investigation closes in hours. When we cannot, the investigation involves risk-based assumption of worst-case access, which expands the scope of notification obligations and the remediation effort.

They can act on the pipeline plane, not just the host plane. Containment for a supply chain compromise means disabling the compromised package version at the registry level or artifact store, preventing it from being reinstalled, rolling back to a verified-clean artifact, revoking the compromised identity, and rebuilding from clean. None of these actions are in a traditional host-compromise playbook.

  1. Within the first hour: scope the pipeline identity.
     Identify every permission the CI identity holds, every secret it can read, and every downstream system it can reach. This drives everything else.

  2. In parallel: revoke the compromised identity and quarantine the artifact.
     Stop new invocations of the compromised code. Block the malicious package version in your artifact store. Disable the CI identity, then re-issue clean, scoped credentials once the pipeline is clean.

  3. Evidence first: export logs before rotating secrets.
     Pull the CI job logs, the cloud control-plane logs for the compromised identity, and any data-event logs from storage resources within the blast radius. Rotation destroys the ability to determine what the credential was used for after the compromise.

  4. Rotate comprehensively: assume everything in scope was read.
     Rotate every secret the compromised identity could have accessed. If you cannot prove a secret was not read, treat it as compromised. Partial rotation is the most common cause of re-compromise in these campaigns.

  5. Rebuild clean: rebuild from a verified artifact.
     Do not redeploy the same artifact digest that was compromised. Rebuild from source, validate signatures end-to-end, and verify the new artifact's behavior before returning it to service.
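The final rebuild-clean step can be enforced mechanically at deploy time: refuse any artifact whose digest matches a known-compromised build. A minimal sketch using SHA-256 digests; the function and parameter names are ours for illustration, not from any particular tool:

```python
import hashlib

def verify_rebuilt_artifact(artifact_bytes, compromised_digests, expected_digest=None):
    """Compute the artifact's sha256 digest, refuse any match against a
    known-compromised digest, and optionally require a match against the
    digest recorded for the verified rebuild. Returns the digest."""
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    if digest in compromised_digests:
        raise ValueError(f"artifact matches compromised digest {digest}")
    if expected_digest is not None and digest != expected_digest:
        raise ValueError("artifact does not match the verified rebuild digest")
    return digest
```

Wiring a check like this into the deploy gate is what prevents the failure mode where a cached or mirrored copy of the compromised artifact quietly re-enters service after the incident is "closed."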

Why cloud IR needed a new framework

Most of the incident response frameworks organizations still use were designed for a world of on-premises infrastructure and endpoint-centric attacks. They describe a linear four-phase lifecycle (Preparation; Detection and Analysis; Containment, Eradication, and Recovery; Post-Incident Activity) that assumes a relatively stable environment with clear host-level forensic artifacts. That model does not map cleanly to cloud-native and supply chain incidents, where the attacker moves through API calls instead of network connections, the forensic artifacts are control-plane logs rather than disk images, and the "blast radius" is defined by IAM policy rather than network topology.

NIST recognized this in April 2025, when it published Special Publication 800-61 Revision 3. The revised guidance retired the linear four-phase lifecycle in favor of a model organized around the six Functions of the NIST Cybersecurity Framework 2.0: Govern, Identify, Protect, Detect, Respond, and Recover. The shift matters because cloud incidents don't move in a tidy sequence. Detection and containment happen simultaneously. Lessons learned in recovery inform detection engineering for the next incident. The new model accommodates this reality.

We built the CQ Forensics Cloud Incident Response Framework against that updated model, and specifically to address the gaps we encounter repeatedly in supply chain and cloud-native IR engagements.

Free download

The CQ Forensics Cloud Incident Response Framework

A modular, fully generic framework for AWS, Microsoft Azure, and Google Cloud, aligned to NIST SP 800-61r3 and CSF 2.0. Built to be adopted by any organization, and updated to reflect the current threat landscape, including the supply chain compromise scenario we have described throughout this post.

  • Six scenarios: host compromise, identity/IAM, data exfiltration, ransomware, container/Kubernetes, and serverless & supply chain.
  • Three clouds at parity: AWS, Azure, and GCP procedures at equal depth, including per-cloud containment matrices and log query examples.
  • NIST SP 800-61r3: aligned to the April 2025 revision and the CSF 2.0 six-Function model, with a mapping table in the appendix.
  • Modular: each scenario is self-contained. Adopt the whole framework or pull the scenarios that match your threat model.
  • Ready to adapt: placeholders marked for local context (contacts, account IDs, escalation thresholds), not locked to any organization.
  • ATT&CK mapped: each scenario maps to MITRE ATT&CK for Cloud and Containers, with per-cloud control-plane event tables.

The Serverless and Supply Chain scenario (Section 3.6) covers exactly the attack pattern described in this post: compromised CI/CD identity, malicious package injection, function-level containment, pipeline reconstruction, and end-to-end artifact verification.

Tell us where to send the framework and we'll unlock the download on the next page.

What comes next

The TeamPCP campaign is not over. The three-ecosystem wave of April 21–23 shows that the operational tempo is increasing, not decreasing, and that the campaign operators have demonstrated the ability to pivot between ecosystems quickly when one vector is addressed. Several packages affected in the April wave were still under active investigation at the time of publication.

Organizations that have not already done so should treat the past three weeks as a forcing function for three immediate actions: audit the permissions of every CI identity and service account, migrate long-lived CI credentials to OIDC-federated short-lived tokens, and verify that data-event logging is enabled on every cloud storage resource reachable from the CI plane. These are not new recommendations. They are the recommendations that would have materially reduced the impact of every supply chain campaign documented in this post.

The supply chain is not a peripheral attack surface anymore. It is where the most sophisticated credential-harvesting campaigns of 2026 are operating. The organizations that respond well to these incidents are the ones that have thought through the pipeline-specific response playbook in advance, not the ones reaching for a host-compromise checklist when a compromised package lands in their CI runner.

If you would like to discuss how CQ Forensics approaches supply chain IR, or if you are responding to an incident now, reach out to our team directly.


Questions, story ideas, or want to be notified when we post? Reach us at response@cqforensics.ai.

🚨 24/7 Hotline: (480) 815-2012