Designing a scalable Transaction Monitoring platform

Product Design | UX | UI
Shipped FrankieOne’s first Transaction Monitoring MVP on our v2 platform in one quarter, from discovery through API-aligned UX, helping secure our first TM customer.
Role: Product Design Lead (sole designer on team)
Team: Engineering Lead, Backend + Frontend Engineers

The opportunity

FrankieOne had strong KYC and KYB capabilities, but customers increasingly required ongoing monitoring of transactional behaviour. Sales teams were receiving detailed requirement sheets from prospects, but these were largely shaped by familiarity with legacy systems.

The opportunity was to design a Transaction Monitoring product that solved operator problems rather than replicating outdated tooling.

Key challenges

  • Requirements driven by legacy expectations
  • No formal case management product
  • New backend architecture being built in parallel
  • PM departure during early build phase
  • Need to support multiple monitoring vendors via a single API layer

Separating feature requests from root problems

The core challenge was not building alerts — it was designing a scalable information architecture that reduced cognitive load for compliance operators.

What customers asked for

  • Alert queues
  • Case management
  • Risk configuration
  • Escalation workflows

What operators actually needed

  • Clear Activity → Alert → Entity relationships
  • Efficient review flows
  • Audit-ready decision trails
  • Flexible risk configuration
  • Scalable multi-vendor, multi-workflow architecture

Reframing the problem
From “case management” to existing capabilities

THE CHALLENGE

Most transaction monitoring activities require an extensive case management system
Customers consistently asked for “case management” — a standard feature in most transaction monitoring platforms.

Initially, we assumed this meant building a dedicated case management system.

However, through customer interviews and sales conversations, I identified that much of what customers described as “case management” was already supported by the FrankieOne platform.
The gap wasn’t functionality — it was framing
FrankieOne already supported:
  • Creating custom filtered lists
  • Assignment and notifications
  • Entity-level comments and audit trails
The issue was that these capabilities were not positioned as a cohesive system. This created a perceived product gap in sales conversations.

THE SHIFT

Instead of building a new system, I reframed these capabilities into a foundational case management narrative.
This allowed Sales to:
  • Confidently position the product against competitors
  • Demonstrate investigation workflows using existing features
  • Reduce pressure to build unnecessary functionality early
This shifted the problem from “what do we need to build?” to “how do we make existing capabilities legible and valuable?”

SYSTEM MODEL

Structuring risk and investigation

The core design challenge was not just designing UI — it was defining how risk is structured and evaluated across the system.
I established a clear relationship between three levels:
  • Alerts — rule-level signals triggered by specific conditions
  • Activities — transactional and behavioural events that may trigger multiple alerts
  • Entities — the customer being evaluated over time
An activity can trigger multiple alerts across different rules, but operators ultimately assess risk at the activity level.

This model ensured that:

  • Alerts are not evaluated in isolation
  • Activities reflect combined risk signals
  • Entities provide longitudinal context across activities
This became the foundation for how investigation and decision-making works in the product.
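The three-level model can be sketched as a minimal data structure. This is an illustrative assumption, not FrankieOne’s actual schema — the class names, fields, and risk roll-up below exist only to show how risk aggregates from alerts to activities to entities:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the Alert -> Activity -> Entity hierarchy.
# All names and fields are hypothetical, not FrankieOne's real schema.

@dataclass
class Alert:
    rule_id: str            # which monitoring rule fired
    triggered: bool = True

@dataclass
class Activity:
    activity_id: str
    alerts: list[Alert] = field(default_factory=list)  # one activity, many alerts

    @property
    def combined_risk(self) -> bool:
        # Operators assess risk at the activity level: the activity
        # carries the combined signal of all alerts it triggered.
        return any(a.triggered for a in self.alerts)

@dataclass
class Entity:
    entity_id: str
    activities: list[Activity] = field(default_factory=list)  # longitudinal context

    def flagged_activities(self) -> list[Activity]:
        # The entity view provides context across activities over time.
        return [act for act in self.activities if act.combined_risk]
```

The key design property is that no alert is evaluated in isolation: risk only becomes actionable once it rolls up to the activity, and the entity gives the longitudinal frame for that judgement.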

Key design decisions

Progressive disclosure for high-density data

The alerts table needed to support high-volume scanning without overwhelming operators with detail.
I introduced a side drawer pattern to progressively reveal alert and activity details alongside the table, allowing operators to:
  • Review alerts in context without losing their place
  • Compare multiple alerts and activities quickly
  • Avoid disruptive page transitions
This enabled efficient triage while preserving access to deeper investigation layers.

Linking alerts to activities as the core investigative model

An activity can trigger multiple alerts across different rules, but operators ultimately need to assess risk at the activity level.
I designed a clear relationship between alerts and activities:
  • Alerts surface rule-level signals
  • Activities represent the combined risk context
Activities are visibly marked as suspicious based on aggregated alert outcomes, ensuring that risk is not evaluated in isolation.

Designing for permission-based views without fragmentation

Different operator roles (e.g. AML vs Fraud) require access to different data.
Instead of creating separate interfaces, I designed the system as modular components:
  • Sections, badges and data blocks can be shown or hidden
  • Layouts remain structurally consistent regardless of permissions
This ensured the experience remained coherent and scalable across roles without duplicating UI or creating broken states.
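One way to sketch this modular approach: a single shared layout filtered by role permissions, so sections are omitted rather than rearranged. The role names and section keys below are hypothetical, chosen only to illustrate the pattern:

```python
# Hypothetical sketch: one shared layout, with sections shown or hidden
# per role. Role names and section keys are illustrative assumptions.

LAYOUT = ["summary", "aml_screening", "fraud_signals", "audit_trail"]

ROLE_PERMISSIONS = {
    "aml_analyst":   {"summary", "aml_screening", "audit_trail"},
    "fraud_analyst": {"summary", "fraud_signals", "audit_trail"},
}

def visible_sections(role: str) -> list[str]:
    """Filter the single shared layout by permission.

    The layout order never changes, so the page stays structurally
    consistent across roles; sections are simply omitted, never
    rearranged or duplicated into a separate interface.
    """
    allowed = ROLE_PERMISSIONS.get(role, set())
    return [s for s in LAYOUT if s in allowed]
```

Because every role renders from the same ordered layout, there is no second interface to maintain and no risk of the role-specific views drifting apart.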

Supporting non-linear investigation flows

Investigation is not a linear process. Operators frequently move between alerts and activities, comparing individual events against broader behavioural patterns.
I designed navigation and linking to support this:
  • Seamless movement between alerts and related activities
  • Side-by-side comparison via progressive disclosure
  • Quick access to related alerts for pattern recognition
This allowed operators to follow investigative paths naturally, rather than forcing a rigid workflow.

Structured decision-making through reason systems

Resolution decisions require consistent and auditable reasoning.
I designed a structured reason system:
  • Operators select contextual reasons based on alert outcomes
  • Reasons ladder up from alerts to activities
  • Only activities with confirmed “true positive” reasons are automatically marked as suspicious
This ensured that risk classification is both explainable and system-driven, rather than relying on inconsistent manual judgement.
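The laddering rule can be expressed compactly: an activity is automatically marked suspicious only when at least one of its alerts is resolved with a confirmed true-positive reason. The reason codes and function name below are illustrative assumptions, not the product’s actual taxonomy:

```python
# Hypothetical reason roll-up: reasons ladder from alert resolutions
# up to an activity-level classification. Reason codes are illustrative.

TRUE_POSITIVE_REASONS = {"confirmed_fraud", "confirmed_structuring"}

def activity_is_suspicious(alert_resolutions: list[str]) -> bool:
    """Auto-mark an activity suspicious only if any of its alerts was
    resolved with a confirmed true-positive reason; false positives
    and unresolved alerts never flag the activity on their own."""
    return any(r in TRUE_POSITIVE_REASONS for r in alert_resolutions)
```

This keeps the classification system-driven and explainable: the audit trail for a suspicious activity is simply the set of confirmed reasons that laddered up to it.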

Continuous discovery & validation

Customer & internal discovery

  • Requirement sheets from prospects
  • Interviewed customers to understand workflow pain points
  • Partnered with Sales and Implementations to identify friction
  • Distinguished legacy familiarity from genuine operational gaps

Information architecture exploration

  • Rapid wireframes to test IA options
  • Entity-level vs activity-level review
  • Alert grouping logic and data priority
  • Flow clarity over visual polish

AI-assisted concept prototyping

  • Built fast AI-generated mockups in v0 (Vercel) to explore UX directions and review common patterns
  • Used them for internal alignment before committing to high-fidelity
  • Reduced iteration cycles before formal design phase

Demo-led validation

  • Conducted walkthrough demos with existing customers
  • Used prototypes in live prospect calls
  • Gauged alignment and commercial viability before MVP build

Designing through ambiguity

At the start of the design phase, the PM left the company. I partnered directly with the Engineering Lead to shape both the product direction and execution.
My responsibilities expanded to include:
  • Prioritisation with Sales & Implementations
  • Feature scoping and backlog shaping
  • Ongoing customer and internal validation

Designing with the API, not after

DESIGNING BOTH UX AND TECH AT THE SAME TIME

Designing directly against the API reduced rework and ensured system integrity across vendors

I worked closely with engineering leads from both my squad and the platform team to ensure the UX and technical architecture were designed together, intentionally.
The process required:
  • Referencing Swagger documentation throughout UX design, giving feedback where needed
  • Mapping all data objects and available actions
  • Ensuring frontend behaviour aligned with backend capabilities
  • Designing scalable IA to support multiple TM vendors

MVP in One Quarter

The MVP was scoped against real customer timelines and delivered with continuous engineering sync.
Within one quarter, we:
  • Designed core activity and alert review experience
  • Built alert workflows and resolution experiences
  • Created new monitoring steps for transactions and activities
  • Integrated new backend architecture

What we achieved and learned

By the time we launched the MVP, we had hired a new PM who was deeply experienced with our product (they had even been a FrankieOne customer at a previous company). This accelerated learning for the next iteration, helping us quickly uncover the deeper investigation and resolution journeys that come with a Transaction Monitoring product.

Our first customer

  • Launching quickly allowed us to sign and provide early access to our first customer
  • Working closely with our first customer uncovered deeper needs
  • Built backlog based on real world needs and actual operator behaviour
  • Having a base for our transaction monitoring product gave us the starting point to discover case management as a product

Evolving beyond the MVP

As we move forward, we will face new and ongoing challenges:
  • Serving multiple customers with different compliance models and operational workflows
  • Testing the product against additional vendors — we are currently connected to only one
  • Maintaining a scalable yet intuitive information hierarchy as we discover more complexity
  • Building out case management and orchestrated automation, which will be key to an effective transaction monitoring product