
We work in industries where what we build reaches millions of people. A recommendation engine shapes what someone buys. A risk model decides who gets a loan. A diagnostic tool influences treatment. The work matters, and the margin for error is small.
We work closely with client teams. That means understanding not just the technology but the context around it. What the regulators expect, how the teams actually work, what the data really looks like.

Boehringer Ingelheim
Most of our pharma work involves processing large volumes of research data and documents. Clinical trial reports, regulatory submissions, lab results. We build systems that help teams find what they need faster without losing the detail that matters.
Compliance shapes the architecture from day one. GxP, GDPR for health data, audit trails.
Boehringer Ingelheim uses message testing as part of clinical trial positioning. Healthcare professionals, payers, insurers, and patients are interviewed to measure how they respond to product messaging. Real interviews are expensive, slow to coordinate across global markets, and limited in how many variations you can test.
We built an agentic AI interview platform. Drug teams upload transcripts from real interviews. A processing pipeline sanitizes all personally identifiable information before the data enters the system. Personas are grounded through two layers of retrieval. The first uses chunk-based semantic similarity search for direct grounding. The second runs GraphRAG with local and global searches across a knowledge graph built from hundreds of thousands of internal documents, enabling multi-hop retrieval across named entities in the corpus.
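In outline, the grounding step combines the two layers like this. The interfaces below are illustrative stand-ins, not the production components:

from typing import Protocol

class ChunkIndex(Protocol):
    def search(self, query: str, top_k: int) -> list[str]: ...

class GraphRag(Protocol):
    def local_search(self, query: str) -> list[str]: ...   # entity-anchored, multi-hop
    def global_search(self, query: str) -> list[str]: ...  # community-level summaries

def ground_persona(query: str, chunks: ChunkIndex, graph: GraphRag) -> list[str]:
    """Merge direct semantic hits with multi-hop graph context for one turn."""
    direct = chunks.search(query, top_k=8)                          # layer 1
    hops = graph.local_search(query) + graph.global_search(query)  # layer 2
    # Deduplicate while preserving rank order before prompting the persona.
    seen: set[str] = set()
    return [p for p in direct + hops if not (p in seen or seen.add(p))]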

28 drug teams
6 mo → 3 wks cycle time
40% fewer interviews
Each persona is configured with traits: region, specialty, role, and attitude toward new treatments. An interviewer agent conducts structured conversations with these personas simultaneously, spanning segments from early adopters to conservative prescribers and markets across the US, EU, Japan, and China.
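As a sketch, a persona configuration can be as small as a typed record whose traits feed the agent's system prompt. The schema here is illustrative, not the production one:

from dataclasses import dataclass

@dataclass(frozen=True)
class Persona:
    region: str      # e.g. "US", "EU", "JP", "CN"
    specialty: str   # e.g. "oncology"
    role: str        # e.g. "prescriber", "payer", "patient"
    attitude: str    # "early adopter" through "conservative"

    def system_prompt(self) -> str:
        return (f"You are a {self.attitude} {self.specialty} {self.role} "
                f"based in {self.region}. Answer every question as this person would.")

segments = [
    Persona("US", "oncology", "prescriber", "early adopter"),
    Persona("JP", "oncology", "payer", "conservative"),
]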
The platform is multimodal. It presents slides with drug dosages, efficacy data, and positioning statements alongside conversational prompts. Responses are analyzed across segments to surface which messages land, where they fall flat, and how regional differences shape reception. The underlying data involves real patients with real conditions. Every layer of the system is designed with that in mind.


ABN AMRO
Our work in financial services centers on enterprise data architecture and governance. Large banks operate thousands of interconnected applications, and understanding how they connect, what data flows through them, and how that data is classified is foundational to nearly every operational decision. Terminology carries particular weight in this context, where a word like "default" has specific regulatory meaning that demands consistency across the entire organization.
Mapping dependencies across an enterprise of this scale is typically slow and manual, with knowledge about applications, data flows, and ownership distributed across spreadsheets, SharePoint sites, and various internal systems. ABN AMRO needed a way to bring structure to this body of institutional knowledge.
We built a document intelligence platform that processed over ten thousand technical documents, architectural records, and corporate specifications. A parsing pipeline traversed each document to extract entities, relationships, and dependencies, using Word2Vec word embeddings for semantic similarity and relationship extraction. This was pre-LLM work, with a production NLP pipeline handling named entity recognition and dependency parsing to turn unstructured text into structured, graph-ready data.
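A condensed sketch of that extraction step, using spaCy and gensim as stand-ins for the custom production models:

import spacy
from gensim.models import Word2Vec

nlp = spacy.load("en_core_web_sm")  # any pretrained pipeline with NER and a parser

corpus_lines = [
    "The Payment Gateway application sends settlement files to the Ledger system.",
    "The Ledger system stores booking data classified as confidential.",
]

def extract(text: str):
    doc = nlp(text)
    # Named entity recognition: candidate applications, teams, data objects.
    entities = [(ent.text, ent.label_) for ent in doc.ents]
    # Dependency parsing: subject-verb-object patterns become graph edges.
    triples = [(tok.head.text, tok.dep_, tok.text)
               for tok in doc if tok.dep_ in ("nsubj", "dobj")]
    return entities, triples

# Word2Vec trained on the corpus supplies the semantic-similarity signal
# for linking mentions and keeping terminology consistent.
sentences = [[tok.lemma_.lower() for tok in nlp(line)] for line in corpus_lines]
w2v = Word2Vec(sentences, vector_size=100, window=5, min_count=1)
score = w2v.wv.similarity("ledger", "system")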

10,000+ documents processed
5,000+ applications mapped
1,000+ business entities
The extracted knowledge fed into a graph database built on Neo4j and Azure Cosmos DB, modeling relationships across applications, teams, data flows, and business processes. Traversal algorithms powered data lineage visualization and impact analysis, giving teams a way to trace how changes in one system ripple through others. This is the kind of cross-system visibility that banking regulators expect under frameworks like BCBS 239. Data classification and terminology were standardized across the full corpus, and the platform was deployed on Azure Cloud.
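For a flavor of the impact analysis, a downstream-dependency query with the Neo4j Python driver might look like this. The labels and relationship types are illustrative, not the actual schema:

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

IMPACT_QUERY = """
MATCH (a:Application {name: $name})<-[:DEPENDS_ON*1..4]-(downstream:Application)
RETURN DISTINCT downstream.name AS name
"""

def downstream_of(app_name: str) -> list[str]:
    # Everything that transitively depends on `app_name` (up to four hops)
    # sits in the blast radius of a change to it.
    with driver.session() as session:
        return [record["name"] for record in session.run(IMPACT_QUERY, name=app_name)]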


MyParcel / PostNL
Logistics platforms move high volumes of real-time data through complex integration layers. Shipment records, carrier APIs, address validation, tracking updates. When throughput reaches hundreds of thousands of daily transactions, the infrastructure has to be fast, reliable, and precise.
MyParcel connects merchants to carriers across Europe, processing over a hundred thousand shipments per day through PostNL, DHL, UPS, FedEx, DPD, and more. Each carrier has its own API conventions, address formats, and status taxonomies, and the platform needed a data infrastructure that could handle this volume reliably while keeping search and retrieval fast across millions of records.
We re-architected the data processing pipeline from ingestion through indexing. An event-driven microservices architecture handles real-time data from each carrier integration, with LLM-powered ETL pipelines automating the cleaning, validation, and correction of address and shipment information. Elasticsearch provides sub-second search and retrieval across the full shipment corpus.
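The address-correction step, reduced to a sketch. The complete() callable stands in for whatever model the pipeline invokes; the key design point is that validation stays deterministic, so the LLM can propose but never decide:

import json
import re

POSTCODE_NL = re.compile(r"^\d{4}\s?[A-Z]{2}$")  # Dutch postcode format

def clean_address(raw: dict, complete) -> dict:
    prompt = ("Normalize this shipping address to JSON with keys "
              "street, house_number, postcode, city, country:\n" + json.dumps(raw))
    fixed = json.loads(complete(prompt))
    # Hard validation gate: if the model's output fails a deterministic
    # check, fall back to the raw record and flag it for review.
    if fixed.get("country") == "NL" and not POSTCODE_NL.match(fixed.get("postcode", "")):
        return {**raw, "needs_review": True}
    return fixed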

100K+ daily shipments
ML-powered carrier matching
sub-second search latency
A machine learning model replaced the deterministic rule engine that previously assigned shipments to carriers. The model evaluates parcel characteristics, destination constraints, and service requirements to match each shipment to the best-fit carrier, adapting as networks and conditions change. The front-end infrastructure was rebuilt on Vue, Nuxt, and Vite, and the full platform runs on AWS with auto-scaling to absorb demand spikes.
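In miniature, the matching model is a standard supervised classifier over shipment features. The feature names and model choice below are illustrative, not the production setup:

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

features = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"),
     ["destination_country", "service_level"]),
], remainder="passthrough")  # numeric: weight_kg, volume_dm3, deadline_hours

model = Pipeline([("features", features),
                  ("clf", GradientBoostingClassifier())])

# X: historical shipments; y: the carrier that actually delivered on time.
# Unlike the old rule engine, the model retrains as networks change.
X = pd.DataFrame([
    {"destination_country": "DE", "service_level": "standard",
     "weight_kg": 2.3, "volume_dm3": 12.0, "deadline_hours": 48},
    {"destination_country": "NL", "service_level": "express",
     "weight_kg": 0.4, "volume_dm3": 1.5, "deadline_hours": 24},
])
y = ["DHL", "PostNL"]
model.fit(X, y)
print(model.predict(X))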


ING
Retail technology touches end consumers directly. We build systems that handle the full chain, from product catalog to checkout to payment processing. The work has to be fast for the user and secure underneath.
ING offered small and medium-sized businesses a turnkey e-commerce platform called Kassa Compleet. Merchants could set up a webshop, manage inventory, and accept payments through a single integrated system. We built the platform end to end.
The payment gateway handled real-time Visa and Mastercard authorization, clearing, and settlement with PCI-DSS compliant data handling and 3D Secure authentication. A supervised machine learning model ran fraud detection across every transaction, using logistic regression with geospatial feature engineering. The model correlated cardholder and merchant locations, analyzed cross-border transaction patterns, and scored risk based on historical behavior per card and merchant.
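The shape of that geospatial feature engineering, sketched with scikit-learn. The features are illustrative of the approach described, not the model ING ran:

from math import asin, cos, radians, sin, sqrt
import numpy as np
from sklearn.linear_model import LogisticRegression

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between cardholder and merchant, in km."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def features(txn):
    return [
        haversine_km(txn["card_lat"], txn["card_lon"],
                     txn["merch_lat"], txn["merch_lon"]),    # location gap
        float(txn["card_country"] != txn["merch_country"]),  # cross-border flag
        txn["amount"] / max(txn["card_avg_amount"], 1.0),    # deviation from history
    ]

# Trained offline on labeled transactions; scored in real time at authorization.
ok = {"card_lat": 52.37, "card_lon": 4.90, "merch_lat": 52.36, "merch_lon": 4.88,
      "card_country": "NL", "merch_country": "NL", "amount": 25.0, "card_avg_amount": 30.0}
sus = {"card_lat": 52.37, "card_lon": 4.90, "merch_lat": 40.71, "merch_lon": -74.0,
       "card_country": "NL", "merch_country": "US", "amount": 900.0, "card_avg_amount": 30.0}

clf = LogisticRegression().fit(np.array([features(ok), features(sus)]), [0, 1])
risk = clf.predict_proba(np.array([features(sus)]))[0, 1]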

800K+ monthly transactions
1,000+ active webshops
real-time fraud scoring
This was pre-deep-learning work, demonstrating classical ML techniques in a high-stakes financial system. The platform ran on AWS, built with Angular, Node.js, and Python.

Have a project in mind? Get in touch →