ClothOff is a controversial AI-powered "nudify" service that uses deep learning models, including GANs and diffusion models, to digitally remove clothing from uploaded photos. Available mainly through clothoff.net, it also offers mobile apps for Android, iOS, and macOS, as well as a Telegram bot. Features include realistic nude image generation, custom undress videos with motion and effects, face swaps (including pornographic variants), multi-image uploads, adjustable body parameters (e.g., breast and butt size), sex poses, and queue skipping. Basic undress and video generation are available as free trials; premium access comes via one-time purchases of VIP Coins, which unlock higher quality, faster processing, and advanced tools.

The platform claims strong privacy measures: no data storage, automatic deletion of uploads, and no distribution without consent. It also says it strictly prohibits non-consensual use, the processing of images of minors (with alleged technical blocks and account bans for attempts), and illegal activities, and states that it partners with Asulabel to donate funds supporting victims of AI abuse.

Nevertheless, the tool has drawn major ethical and legal backlash for enabling non-consensual deepfake pornography and child sexual abuse material (CSAM). A prominent federal lawsuit in New Jersey (Jane Doe v. AI/Robotics Venture Strategy 3 Ltd.) alleges the service was used to create and distribute hyper-realistic fake nudes of a minor from her social media photos; it invokes the TAKE IT DOWN Act to seek mandatory image removals, data destruction, damages, and a potential shutdown. The case, supported by Yale Law clinics, highlights real harms such as emotional distress, bullying, and harassment.

Investigative reports from Der Spiegel, Bellingcat, Ars Technica, The Guardian, and others link the operation to regions in the former Soviet Union, reveal acquisitions of rival nudify apps, and document its role in widespread abuse cases, including school incidents worldwide.
Despite blocks in countries such as the UK and Italy, and removal from platforms such as Telegram, ClothOff continues to operate with millions of users, intensifying global debates over consent, privacy, and AI regulation. The company denies responsibility for misuse.