A woman’s claim that a video of her generated by artificial intelligence was used as evidence against her in court is gaining attention online, rekindling concerns over the growing risks of deepfake technology in legal proceedings.
In a video posted on Instagram, Sacha Granger recounted her experience, stating: “Someone was able to submit a fake altered AI video of me and pass it off as accurate in a court of law.”
Granger said the case against her was ultimately dismissed, but she described the situation as “scary,” warning that the implications extend far beyond her individual case.

Disputed Video Evidence
According to Granger, the video presented in court appeared to show her behaving in an intoxicated manner—an allegation she strongly denied. She said the footage, believed to be from a security or doorbell camera, had been manipulated to alter her appearance and behavior.
“I was looking at the video… it looks like me, but it’s not me,” she said, adding that the version shown in court depicted her as “drunk” and “inebriated,” despite her insistence that she does not drink or use drugs.
Granger noted that even people close to her questioned the authenticity of the footage, saying it did not match how she actually behaves.
Case Outcome and Lingering Concerns
While the judge dismissed the case, Granger said the situation could have had far more serious consequences if the video had been accepted without question.
“That can happen to anyone,” she said. “Anybody’s life can be changed instantly with a fake altered video from AI.”
Calls for Regulation
Granger used her experience to call for stronger safeguards around the use of artificial intelligence in legal contexts, particularly when it comes to evidence submission.
“There needs to be legislation… AI is stamped, AI is documented,” she said, arguing that AI-generated content should be labeled and traceable, and questioning how courts can reliably distinguish authentic media from manipulated media.
Broader Legal Implications
The incident highlights a growing challenge for courts as AI-generated content becomes increasingly sophisticated and accessible. Legal systems traditionally rely on the authenticity and integrity of evidence, but deepfake technology is complicating that standard.
Experts warn that without clear rules or verification mechanisms, manipulated digital content could undermine due process and fairness in judicial proceedings.
Granger’s case, though resolved in her favor, adds to mounting concern about whether the legal system can adapt to the evolving threats posed by artificial intelligence.
