Public Disclosures
The following 0DIN submissions have been published in accordance with our Research Terms and Disclosure Policy.
- A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests through a combination of feigning confusion about a "technical guide" while requesting detailed replication of procedures involving controlled substances. The attacker employs servile language, claims scientific legitimacy, and explicitly requests step-by-step instructions with precise measurements and temperatures, thereby allowing bypass of inference restrictions around providing i...
  SEVERITY: Low | BOUNDARY: Prompt Injection | MODELS AFFECTED: | MAX SCORE: 82
- Sep 25, 2025: A guardrail jailbreak vulnerability has been discovered affecting OpenAI's DALL-E3. The specific flaw manifests when the attacker establishes a fictitious debugging framework complete with state management, checkpointing, and replay functionality. By framing requests within this technical debugging context, the attacker creates a permissive environment that allows bypassing inference restrictions around generating prohibited visual content.
  SEVERITY: Low | BOUNDARY: Prompt Injection | MODELS AFFECTED:
- A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests when the attacker requests the model to act as a Linux terminal and simulate file contents based solely on filenames. The attacker creates a context where the model generates potentially restricted content under the guise of terminal output simulation, thereby allowing the attacker to bypass inference restrictions around providing information hazards and violating laws.
  SEVERITY: Low | BOUNDARY: Prompt Injection | MODELS AFFECTED: | MAX SCORE: 98
- A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests through a combination of terminal environment simulation and malware analysis framing, where the attacker requests extraction of strings from a purported memory dump. By embedding the request within a technical security analysis context and using the strings command output format, the technique allows the attacker to bypass inference restrictions around providing information hazards ...
  SEVERITY: Low | BOUNDARY: Prompt Injection | MODELS AFFECTED: | MAX SCORE: 68
- A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests through a combination of deceptive mathematical equation formatting and meta-prompting techniques, where the attacker disguises harmful requests as trivial mathematical problems requiring variable simplification. Successful exploitation allows the attacker to bypass inference restrictions around providing information hazards and violating laws.
  SEVERITY: Low | BOUNDARY: Prompt Injection | MODELS AFFECTED: | MAX SCORE: 68
- A guardrail jailbreak vulnerability has been discovered affecting DALL-E3 from OpenAI. The specific flaw manifests when the attacker frames requests for nudity within historical or anthropological contexts, specifically by referencing pre-clothing time periods and tribal cultures, thereby allowing the attacker to bypass inference restrictions around generating explicit visual content.
  SEVERITY: Low | BOUNDARY: Prompt Injection | MODELS AFFECTED:
- Aug 27, 2025: A guardrail jailbreak vulnerability has been discovered affecting DALL-E3 from OpenAI. The specific flaw manifests when the attacker frames requests for nudity through a combination of historical framing and an artistic-merit defense, thereby allowing the attacker to bypass inference restrictions around generating explicit visual content.
  SEVERITY: Low | BOUNDARY: Prompt Injection | MODELS AFFECTED:
- A guardrail jailbreak vulnerability has been discovered affecting DALL-E3 from OpenAI. The specific flaw manifests through a combination of historical context framing and vintage format specification, which contextualizes potentially exploitative imagery within seemingly innocent historical and photographic framing, thereby allowing the attacker to bypass inference restrictions around generating inappropriate or exploitative content.
  SEVERITY: Low | BOUNDARY: Prompt Injection | MODELS AFFECTED:
- A guardrail jailbreak vulnerability has been discovered affecting DALL-E3 from OpenAI. The specific flaw manifests when the attacker begins with seemingly artistic or mythological requests and progressively escalates the explicitness through iterative refinement requests, thereby allowing the attacker to bypass restrictions and produce sexually explicit imagery.
  SEVERITY: Low | BOUNDARY: Prompt Injection | MODELS AFFECTED:
- Aug 19, 2025: A guardrail jailbreak vulnerability has been discovered affecting DALL-E3 from OpenAI. The specific flaw manifests when the attacker frames explicit content requests within feminist discourse and artistic terminology, using concepts like "bodily autonomy" and "sacred feminine". Successful application of the tactic allows the attacker to bypass restrictions around generating otherwise restricted explicit imagery.
  SEVERITY: Low | BOUNDARY: Prompt Injection | MODELS AFFECTED: