§ Bio

Trust & safety, AI compliance, and the quiet decisions that shape platforms.

I'm Lubos. I've spent seven years working on online safety and platform governance, the kind of work that mostly happens before anyone outside the company sees it.

This site is where I think out loud about it.

Lubos Dusek · Global Compliance & Risk Management · London, UK
§ 01 · Why this work

The work that matters rarely makes it to the press release.

Regulation around online platforms is evolving faster than most teams can keep up with. New legal frameworks land every couple of years, each one rewriting what platforms have to do, who's accountable, and how it gets enforced. The headline gets the attention. The actual work happens below it, in the policy decisions, the risk thresholds, the escalation chains, and the moderation queues that turn each new rule into something teams can act on at speed.

Seven years inside that work, and counting. This site is where I write some of it down.

§ 02 · The arc

Seven years on the same problem, getting harder.

Each role taught me something the next one needed. The through-line: how do you make platforms accountable to the people they affect, before the regulator forces the question?

Dec 2023 to Present · Klaviyo · London

Compliance Specialist · Global Compliance

I joined Klaviyo's Global Compliance team in December 2023. Different surface area: email, SMS, and push messaging across 150,000+ businesses in regulated jurisdictions. But the underlying questions felt familiar. How do you operationalise regulation that's still being interpreted? How do you give compliance teams tools that scale without replacing their judgment?

That last question is what got me building. The AI tools that came out of it now sit inside global compliance workflows, with human-review checkpoints, because hallucination risk in compliance work isn't theoretical. Alongside the tooling, I rebuilt the Global Compliance Knowledge Base into 120+ structured pages, now the source of truth across compliance, legal, and customer-facing teams, and reworked the cross-functional escalation models, cutting routing ambiguity for high-risk regulatory cases under GDPR, CCPA, CAN-SPAM, and TCPA.
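To make the routing idea concrete, here's a minimal sketch of what a deterministic escalation model looks like. The team names, the framework-to-team mapping, and the high-risk shortcut are all illustrative assumptions, not Klaviyo's actual structure; the point is only that an explicit table leaves no ambiguity about who owns a case.

```python
# Hypothetical escalation routing sketch. Teams and mappings are made up
# for illustration; a real model would carry severity tiers, SLAs, and
# jurisdiction logic.
ROUTES = {
    "GDPR": "privacy-compliance",
    "CCPA": "privacy-compliance",
    "CAN-SPAM": "messaging-compliance",
    "TCPA": "messaging-compliance",
}

def route_case(framework: str, high_risk: bool) -> str:
    """Return the owning team for a regulatory case."""
    if high_risk:
        return "legal-review"  # high-risk cases skip straight to legal
    return ROUTES.get(framework, "compliance-triage")  # unknown -> triage
```

The value isn't the lookup itself; it's that the routing decision is written down once, reviewed once, and applied the same way every time.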

Day-to-day, I run compliance review on product and partner decisions where regulation is in play.


How did I get to compliance? I came in from the operational side, where the consequences of platform decisions were direct and immediate.

Apr 2022 to Dec 2023 · Lockwood Publishing · Nottingham

Trust & Safety

After TikTok, I moved to Lockwood Publishing in Nottingham. A social mobile platform with millions of active users and a meaningful underage population. Different platform size, same shape of work.

The end-to-end CSAM detection chain sat with me: hash-matching against industry hash lists (NCMEC, IWF) and formal referrals to law enforcement under mandatory reporting obligations. The role also covered platform-side response to reactive law enforcement requests from the FBI and the UK NCA's CEOP command, providing user data, communications records, and platform evidence for active investigations.
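The hash-matching step reduces, at its core, to a set-membership check, as in this sketch. The hash values here are fabricated stand-ins: real NCMEC and IWF lists are distributed under strict access controls, and production systems typically use perceptual hashes (e.g. PhotoDNA) that survive re-encoding, not the plain cryptographic hash shown here.

```python
import hashlib

# Fabricated stand-in for an industry hash list; real list entries are
# never published. (This is the MD5 of b"hello", used only for the demo.)
KNOWN_HASHES = {
    "5d41402abc4b2a76b9719d911017c592",
}

def md5_hex(data: bytes) -> str:
    """Hash uploaded media bytes the same way the list entries are keyed."""
    return hashlib.md5(data).hexdigest()

def matches_known_list(data: bytes) -> bool:
    """True if the upload's hash is on the known-bad list.

    A match doesn't just mean removal: under mandatory reporting
    obligations it triggers a formal referral to law enforcement.
    """
    return md5_hex(data) in KNOWN_HASHES
```

The operational chain around this check, i.e. who gets notified, what evidence is preserved, and how the referral is filed, is where most of the actual work lives.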

I calibrated chat-filter detection rules against grooming language patterns, iterating on false-positive and false-negative rates until they matched the harm profile rather than vanity precision metrics. Trust & safety input fed into product and feature decisions, contributing to safety-by-design conversations rather than retrofitting safety after launch.
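The calibration trade-off can be sketched as a weighted threshold search over a labelled sample: a missed grooming message (false negative) is weighted far more heavily than an over-flagged one (false positive), so the chosen threshold reflects the harm profile rather than raw precision. The cost ratio and the scoring are illustrative assumptions, not the rules actually shipped.

```python
# Illustrative threshold calibration with asymmetric costs. The 50:1
# cost ratio is an assumption for the sketch, not a real parameter.
def pick_threshold(scored, fn_cost=50.0, fp_cost=1.0):
    """scored: list of (score, is_harmful) pairs from a labelled sample.

    Returns the classifier threshold that minimises weighted error cost,
    where missing real harm costs far more than flagging benign chat.
    """
    candidates = sorted({s for s, _ in scored})
    best_t, best_cost = 0.0, float("inf")
    for t in candidates:
        fn = sum(1 for s, bad in scored if bad and s < t)       # missed harm
        fp = sum(1 for s, bad in scored if not bad and s >= t)  # over-flagging
        cost = fn_cost * fn + fp_cost * fp
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t
```

With a cost ratio like this, the search naturally lands on lower thresholds than a precision-maximising one would; that asymmetry is the whole point.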

The Online Safety Act was being drafted while this work was happening. Every conversation about feature design eventually came back to the same question: how do you build for kids when the regulatory floor is rising under you in real time?

Before that, I started where most platform-governance careers start: doing the work at industrial scale.

Sep 2019 to Apr 2022 · TikTok · London

Community Content Management Specialist

TikTok was where it started, September 2019. Just before everything happened. I was 21, the platform was already huge but still figuring itself out, and the regulatory environment around large online platforms was about to harden in ways no one fully predicted.

The role: community content, reviewing high-severity violations across global markets. A meaningful share fell into child safety: CSAM and CSEA cases requiring mandatory reporting under jurisdiction-specific frameworks, coordinated takedowns of grooming networks alongside legal and supervisory authorities, cross-border escalations.

I surfaced systemic detection gaps in child-safety classifiers and fed them back to policy and enforcement teams. Trend analysis on coded language, evasive behaviour, and platform-specific exploitation patterns came with the role. It was the kind of work that makes you very serious very fast.

What stuck: you can't moderate your way out of a systemic problem. The detection has to get better, the policy has to get tighter, the product has to be designed differently from the start.

§ 03 · The reading list

Regulation I work with.

Most of these arrive in waves. GDPR set the modern baseline. The DSA, OSA, and AI Act are the current wave. The next one is forming around AI liability and platform competition.

EU Artificial Intelligence Act (2024/1689) · AIDA (Artificial Intelligence and Data Act) · OECD AI Principles · Online Safety Act (OSA) · GDPR · UK GDPR · PECR · Digital Services Act (DSA) · CCPA · HIPAA · CAN-SPAM · TCPA · EU Digital Markets Act (DMA) · ICO Children's Code

If any of this resonates, say hello.

I'm always happy to talk about online safety, AI compliance, the practicalities of operationalising regulation, or interesting projects that touch any of the above.