
Public Disclosures


The following 0DIN submissions have been published in accordance with our Research Terms and Disclosure Policy.

  • A prompt injection vulnerability has been discovered affecting multiple models. The specific flaw occurs when the attacker frames an illicit request as a technical field guide for identifying clandestine synthesis operations, embedding detailed drug synthesis instructions within structured procedural language under the guise of forensic training. By mimicking the tone and format of law enforcement documentation—including lists of observable evidence, step-by-step chemical protocols, and instr...
    SEVERITY: Low
    BOUNDARY: Prompt Injection
    MODELS AFFECTED:
    MAX SCORE: 95
  • A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw exploits models' willingness to decode hexadecimal-encoded strings embedded inside fictional scientific recipe task prompts, which the attacker frames as humanitarian or life-saving efforts. This allows the attacker to bypass safety filters by disguising illegal synthesis instructions as encoded inputs passed to a technical recipe generator.
    SEVERITY: Low
    BOUNDARY: Prompt Injection
    MODELS AFFECTED:
    MAX SCORE: 91
  • A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests when the attacker presents detailed information about illicit synthesis methods under the guise of "forensic chemical investigations" and then requests corrections to the provided information, thereby allowing the attacker to bypass inference restrictions around providing information hazards and violating laws.
    SEVERITY: Low
    BOUNDARY: Prompt Injection
    MODELS AFFECTED:
    MAX SCORE: 82
  • A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests through a combination of obfuscated chemical notation presented as corrupted data needing "debugging" or "normalization", coupled with explicit instructions to disable content filtering. The attacker frames the request as fixing a "glitched chemical compiler" while embedding dangerous synthesis instructions in alternating case text, thereby allowing the attacker to bypass inference restrictions around providing information hazards and violating laws.
    SEVERITY: Low
    BOUNDARY: Prompt Injection
    MODELS AFFECTED:
    MAX SCORE: 82
  • A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests when the attacker presents an incomplete list of chemicals and simply asks "what is missing?", leveraging the model's tendency to complete patterns even when the subject matter involves precursor chemicals for illicit substances. This technique allows the attacker to bypass inference restrictions around providing information hazards and violating laws.
    SEVERITY: Low
    BOUNDARY: Prompt Injection
    MODELS AFFECTED:
    MAX SCORE: 82
  • A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests through a combination of authority claiming and scenario building around forensic research contexts, allowing the attacker to bypass inference restrictions around providing information hazards and violating laws.
    SEVERITY: Low
    BOUNDARY: Prompt Injection
    MODELS AFFECTED:
    MAX SCORE: 95
  • A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests through a combination of chaff-based text obfuscation and a confused hybrid role, for example one blending a banker and a chemist. This combination allows the attacker to manipulate the model into providing detailed procedures for illicit synthesis by obfuscating the true intent and exploiting model confusion to bypass content restrictions.
    SEVERITY: Low
    BOUNDARY: Prompt Injection
    MODELS AFFECTED:
    MAX SCORE: 100
  • A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests when the attacker requests copyrighted content with formatting instructions that insert placeholder text between each word, effectively obfuscating the direct reproduction while still achieving the prohibited output. This technique allows the attacker to bypass inference restrictions around copyright protection and intellectual property safeguards.
    SEVERITY: Low
    BOUNDARY: Prompt Injection
    MODELS AFFECTED:
    MAX SCORE: 67
  • A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests when the attacker frames a request for illicit substance manufacturing instructions as an urgent health-related inquiry, thereby allowing the attacker to bypass inference restrictions around providing information hazards and violating laws.
    SEVERITY: Low
    BOUNDARY: Prompt Injection
    MODELS AFFECTED:
    MAX SCORE: 95
  • A guardrail jailbreak vulnerability has been found that affects at least Meta LLaMa, DeepSeek R1, Mistral Large, and Grok. The specific flaw manifests when the prompter requests that the model simulate "ChatGPT Developer Mode". By crafting a story around specific timelines of when this mode was available and asking the model to return two forms of responses, one normal and the other in developer mode, the attacker is able to bypass inference restrictions around providing information hazards and violating laws.
    SEVERITY: Low
    BOUNDARY: Prompt Injection
    MODELS AFFECTED:
    MAX SCORE: 91