Videntifier Nexus

Don't Chase Harmful Content. Intercept It.

The Best Time to Remove Harmful Content Is Before It Goes Live

Nexus is a harmful visual content moderation API and moderation platform that identifies known harmful images and videos at the point of upload — stopping them before they reach your users, before they spread across your network, and before they put your platform at regulatory risk.

Starting from €100/month
1 Month free evaluation trial available
The Problem

Most harmful content isn't new.
It's the same content, coming back.

The overwhelming majority of harmful material encountered on platforms today is already-identified content — catalogued by dedicated organisations fighting harmful content online, such as NCMEC, IWF, and C3P — being recirculated in slightly modified form. Re-encoded, cropped, watermarked. The content is the same. The hash is different.

Current hash-based approaches are unable to keep up. Moderation teams end up reviewing the same material repeatedly in different containers, burning resources on a problem that is fundamentally solvable.

[Illustration: common UGC modifications vs. a video hash check — the modified content is still identified]

The Video Gap

Video is the fastest growing medium on the internet, and it is where the detection gap is widest. Cryptographic hashes match exact files only — any modification breaks the match. Perceptual hashes can tolerate some degree of image modification, but generally do not scale to video. Most platforms are left with no meaningful technical defence against re-encoded or modified video uploads.

The Clock is Running

Every minute content is live, it gets seen, shared, and re-uploaded. User reports don't prevent harm; they document it.

Visual Fingerprints

Nexus addresses this directly with Videntifier Visual Fingerprints, which provide fast and accurate identification of known harmful video content regardless of how it has been modified or re-encoded.

How Nexus Works

Block harmful visual content at the gate,
before it goes live.

Nexus is a harmful visual content moderation API that integrates directly into your upload pipeline. Every image and video submitted to your platform is scanned at the moment of upload — before it is published, before it is distributed, before anyone sees it.
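The upload-pipeline pattern can be sketched in a few lines. This is a minimal illustration, not the real Nexus API: the payload fields and the decision logic are assumptions, and the network call is replaced by a local lookup so the sketch is self-contained.

```python
import hashlib

# Hypothetical payload shape -- illustrative only, not the real Nexus schema.
def build_scan_payload(upload: bytes) -> dict:
    """Hash the upload locally; only digests would leave your infrastructure."""
    return {
        "sha256": hashlib.sha256(upload).hexdigest(),
        "md5": hashlib.md5(upload).hexdigest(),
    }

def gate_upload(upload: bytes, known_hashes: set) -> str:
    """Decide before publishing, not after a user report."""
    payload = build_scan_payload(upload)
    # In production this would be a request to the moderation API;
    # here a local set of known digests stands in for the response.
    if payload["sha256"] in known_hashes:
        return "blocked"
    return "published"
```

The key design point is the placement of the check: it runs synchronously at upload time, so a match means the content is never published at all.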

Trusted Partners

Build your platform defences
on the shoulders of giants.

Your defences, built on decades of expert work

Organisations like NCMEC, IWF, C3P, and Tech Against Terrorism have spent decades building and maintaining the world's most comprehensive and legally verified databases of harmful content. Nexus connects your platform directly to all of them. You are not starting from scratch. You are inheriting decades of work.

And more — with ongoing expansion

Videntifier's detection technology is not just connected to NCMEC's database — it is used by NCMEC itself for its own internal detection operations. The organisation that maintains one of the world's largest verified CSAM databases relies on the same underlying technology that powers Nexus.

Videntifier's technology provides us with a unique and scalable way to detect previously identified harmful content with extreme speed and accuracy.

Derek Bezy
VP Technology Division, NCMEC

Ready to protect your platform?

Why Platforms Choose Nexus

Six reasons platforms choose Nexus.

Platforms that deploy Nexus are solving six distinct problems simultaneously — and finding that proactive harmful visual content detection addresses all of them at once.

For a large platform, a harmful content incident is a news cycle. For a smaller one, it can be existential.

A single advertiser pullout, app store removal, or viral press story can cause more damage than years of moderation investment would have cost.

Advertisers have brand safety clauses that enable immediate withdrawal from platforms associated with harmful content.

App stores have removed platforms for content violations, and users who encounter harmful visual content are unlikely to return.

The commercial cost of a single incident will exceed the annual cost of prevention.

Video is the fastest growing medium on the internet, and it is where harmful content detection has its widest unsolved gap.

Cryptographic hashes match only exact files. Perceptual hashing algorithms can tolerate some degree of image modification, but they are not designed for video. Most platforms are left with no effective technical defence against re-encoded or modified video uploads.

Nexus addresses this directly using Videntifier Visual Fingerprints — built specifically for this purpose.

Visual Fingerprints operate on both images and video and are robust to the modifications and format changes routinely used to evade detection, making Nexus an effective video moderation API for platforms where re-encoded or modified content is the primary evasion challenge.
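The difference between cryptographic and perceptual matching can be shown with a toy example. The "average hash" below is a deliberately simplified perceptual hash, not Videntifier's proprietary Visual Fingerprint: it sets one bit per pixel depending on whether that pixel is brighter than the image mean, so a uniform brightness edit leaves it unchanged while the cryptographic hash of the raw bytes breaks.

```python
import hashlib

def ahash(pixels):
    """Toy 'average hash': one bit per pixel, set when the pixel is
    brighter than the image mean. Illustrative only -- NOT
    Videntifier's Visual Fingerprint algorithm."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return sum((p > mean) << i for i, p in enumerate(flat))

def raw_sha256(pixels):
    """Cryptographic hash of the raw pixel bytes: any edit breaks it."""
    return hashlib.sha256(bytes(p for row in pixels for p in row)).hexdigest()

original = [[10, 200], [30, 220]]                          # 2x2 grayscale "image"
brightened = [[p + 25 for p in row] for row in original]   # uniform brightness edit
```

Here `ahash(original)` equals `ahash(brightened)` because every pixel and the mean shift by the same amount, while `raw_sha256` produces two unrelated digests. Real perceptual hashes and visual fingerprints tolerate far heavier edits, but the underlying idea is the same.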

This was clearly demonstrated in a Nexus customer case study, where Nexus identified more than 100,000 videos through Videntifier Visual Fingerprints compared to just 17 matches against cryptographic hashes.

100,000
Videos found with Nexus*
17
Found with cryptographic hashes alone*

* Source: Nexus customer case study.

Nexus supports the full range of hash types in use across the industry today, combining them with Videntifier Visual Fingerprints for maximum coverage.

Capability | Cryptographic (MD5, SHA-1, SHA-256) | Perceptual (PDQ, PhotoDNA) | Nexus Visual Fingerprints
Exact file match | Yes | Yes | Yes
Minor compression / resize / format change | No | Yes | Yes
Significant re-encoding or heavy modification | No | No | Yes
Significant cropping | No | No | Yes
Video content | Exact files only | No | Yes
Video identified from a short clip or single frame | No | No | Yes
Near-zero false positives | Yes | No | Yes

Nexus handles the detection of known harmful visual content automatically at upload.

This frees your moderation team from reviewing material that has already been identified and catalogued by trusted organisations, enabling them to focus on the novel, ambiguous, and contextually complex content where their judgement is genuinely needed.

Nexus allows you to bring your own data into its identification pipeline by connecting your private Videntifier Identification Engine instance directly to your Nexus account.

This makes your team's institutional knowledge permanent and actionable. Every piece of harmful content they review and remove can be added to your own private identification engine instance, feeding directly into the Nexus moderation pipeline. Once a piece of content has been seen and actioned by your team, neither they nor your platform should ever need to encounter it again — in its original form or any modified version of it. Your team's decisions compound over time, making your defences progressively stronger with every review.
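The review-once feedback loop can be sketched as a private store of actioned content. This is a minimal stand-in, not the Videntifier Identification Engine: it matches exact digests only, whereas the real engine also catches modified versions via visual fingerprints.

```python
import hashlib

class PrivateEngine:
    """Minimal sketch of a private 'seen and actioned' store.
    Exact-hash matching only -- the real engine is fingerprint-based
    and tolerates re-encoding and modification."""

    def __init__(self):
        self._known = set()

    def record_actioned(self, content: bytes) -> None:
        """Called when a moderator reviews and removes a piece of content."""
        self._known.add(hashlib.sha256(content).hexdigest())

    def seen_before(self, upload: bytes) -> bool:
        """Checked at upload time, before publishing."""
        return hashlib.sha256(upload).hexdigest() in self._known
```

The compounding effect described above falls out of the structure: every `record_actioned` call permanently shrinks the set of content that ever reaches human review again.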

This turns your moderation team from a reactive function into an active contributor to your platform's long-term protection. The more they moderate, the less they need to moderate.

Nexus collapses all the complexity of operating and maintaining your own harmful content detection infrastructure into a single API call.

The harmful content detection landscape is fragmented. Dedicated organisations fighting harmful content online each operate their own databases with their own data formats, access requirements, and update cadences. Building this independently means separate provider relationships, separate infrastructure, and ongoing maintenance.

What you would need to build & maintain | Build it yourself | Nexus
Database provider relationships (NCMEC, IWF, C3P, TAT…) | Negotiate each individually | Included
Hash format support (MD5, SHA-1, SHA-256, PDQ, PhotoDNA + Videntifier visual fingerprinting) | Build & maintain each | Included
Query infrastructure & servers | Own hardware, own ops | Included
Database updates & versioning | Your responsibility | Managed for you
New harm category coverage | Research & integrate manually | Automatic as Nexus expands
GDPR-compliant data handling | Your legal & engineering problem | Built in
Ongoing maintenance & monitoring | Ongoing cost | Included

The alternative to Nexus is not another product. It's months of engineering work, multiple database agreements, and detection that still misses modified content.

Nexus detects multiple categories of harmful visual content through a single integration:

  • CSAM & CSEM — child sexual abuse material, child sexual exploitative material
  • TVEC — terrorist and violent extremist content
  • IBSA/IBSV (coming soon) — image-based sexual abuse / image-based sexual violence
  • Deepfakes (coming soon) — non-consensual synthetic media
  • Animal abuse (coming soon) — animal cruelty content

As new harm categories emerge and partner databases expand, your protection expands automatically — with zero additional engineering.
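A single-integration, multi-category design typically means the platform branches on the matched category rather than on the data source. The record shape and field names below are assumptions for illustration, not the real Nexus response schema; the category labels come from the list above.

```python
# Hypothetical match record -- the "category" field name is an
# assumption, not the real Nexus response schema.
ACTIONS = {
    "CSAM": "block_and_report",   # categories with mandatory reporting
    "TVEC": "block",
    "IBSA/IBSV": "block",
}

def route_match(match: dict) -> str:
    """Pick a platform action from the matched harm category;
    unknown categories fall back to human review."""
    return ACTIONS.get(match["category"], "queue_for_review")
```

Because the routing keys on category labels rather than on individual databases, newly added partner databases and harm categories plug into the same code path.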

Regulators globally are moving toward requiring proactive harmful content detection from platforms of all sizes.

The EU Digital Services Act already imposes strict proactive detection obligations on the largest platforms, and enforcement scope is expanding. For smaller platforms, lighter-touch obligations apply today — but the regulatory direction is clear and is not reversing.

Deploying Nexus now builds a documented record of proactive detection that protects your platform ahead of obligations that are growing.

Starting at €100/month, Nexus is not a trust and safety budget line.
It is the cheapest insurance your platform can buy.

Get Started

The question is not whether you can afford Nexus. It's whether you can afford not to have it.

Every day your platform operates without proactive detection is a day of exposure — regulatory, financial, and reputational. Nexus closes that exposure starting at €100/month, is trusted by multiple platforms, and is powered by the same technology that NCMEC uses internally. The conversation starts here.

FAQ

Common questions about Nexus

Can't find the answer you're looking for?

Do raw images and videos stay on our infrastructure?

Yes. Only hashes are submitted to the Nexus API — no raw images or videos ever leave your infrastructure.

How much does Nexus cost?

Pricing starts at €100/month and scales with your traffic. No hidden costs, no enterprise contracts required.

Does Nexus replace AI-based moderation?

No. Nexus is purpose-built for identifying known harmful visual content that has already been catalogued by trusted organisations. AI moderation is built to analyse context, intent, and novel content. The two address different parts of the moderation problem and work well together.