Resources
AFASA

What AFASA-compliant authentication actually looks like in production

6 min read

After the first article on compliance versus readiness, the question our team gets most often is the same one: fine, but what does this actually look like when it is running?

Most conversations about AFASA-compliant authentication stop at the requirement level. Biometrics. Liveness detection. Device-bound authentication. The circular is specific about what institutions must implement. It is less specific about what that implementation looks like at 9pm on a Tuesday when a customer in Cagayan de Oro is trying to add a new payee on a four-year-old Android handset with an inconsistent connection.

That gap between regulatory requirement and production reality is where most implementations either hold up or start showing cracks. After working across more than 50 BSP-supervised institutions in the Philippines, our team has a clear picture of what the gap looks like and what it takes to close it.

The three components that actually matter

AFASA-compliant biometric authentication in a Philippine BFSI context requires three things working correctly in sequence.

  • Liveness and deepfake detection. Its job is to confirm that the person in front of the camera is physically present and alive, not a printed photograph, not a screen replay, not an AI-generated video feed. This sounds straightforward. In production it is not. The liveness system has to work in inconsistent lighting conditions. It has to handle lower-resolution front cameras on budget devices. It has to distinguish between a genuine blink and a looped video. And it has to do all of this fast enough that the customer does not abandon the transaction out of frustration.

  • ID matching. Liveness confirms the person in front of the camera; ID matching connects that person to a verified identity document. In the Philippines, that means the system has to work reliably across the PhilSys national ID, driver's licenses in their various issue formats, postal IDs, and passports. A system trained primarily on identity documents from other markets will produce accuracy numbers in a controlled environment that do not reflect what happens when a postal ID from 2017 is presented under fluorescent lighting in a rural branch. The accuracy gap between a well-calibrated system and a poorly calibrated one is not marginal. It shows up in false rejection rates, and false rejection rates show up in customer complaints and abandoned transactions.

  • Face retrieval. This is the component that most institutions underinvest in during initial deployment, and it is the one that matters most for ongoing compliance. Face retrieval is what enables re-authentication beyond onboarding. When a customer initiates a large transfer, adds a new payee, or updates their registered mobile number, the system needs to verify their identity again against the biometric anchor established at onboarding. Circular 1213 covers all of these touchpoints explicitly. An institution that deploys strong onboarding authentication but weak re-authentication has not solved the problem. It has moved the exposure point downstream.
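At its core, the re-authentication step described above is a comparison between the face embedding captured now and the biometric anchor stored at onboarding. A minimal sketch in Python, assuming embeddings are plain numeric vectors and using a hypothetical match threshold; in a real deployment both the embedding model and the threshold come from the vendor's calibrated pipeline, not hand-picked numbers:

```python
import math

# Illustrative threshold only. Real systems derive this from calibration
# against the institution's own device and demographic mix.
MATCH_THRESHOLD = 0.85

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def reauthenticate(anchor_embedding, live_embedding, threshold=MATCH_THRESHOLD):
    """Return True if the live selfie matches the onboarding anchor.

    Called at high-risk touchpoints: large transfers, new payees,
    mobile number changes.
    """
    return cosine_similarity(anchor_embedding, live_embedding) >= threshold
```

The point of the sketch is the architecture, not the math: every high-risk touchpoint routes back to the same anchor established at onboarding, which is why weak face retrieval moves the exposure point downstream rather than eliminating it.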

What breaks in production

The failure modes our team sees most consistently are not technology failures. They are integration and calibration failures.

  1. Miscalibration - Liveness systems require tuning. A threshold set too strict creates unacceptable false rejection rates for legitimate customers. A threshold set too loose allows presentation attacks through. Getting this right requires testing on the actual device profiles and demographic spread of the institution's customer base, not a benchmark dataset from another market.

  2. Incomplete lifecycle mapping - Teams scope for onboarding, complete the integration, and then discover three months later that the re-authentication touchpoints were not included in the original implementation. Adding them after go-live is more complex and more expensive than building them in from the start. The institutions that map the full lifecycle before implementation begins avoid this almost entirely.

  3. Network performance assumptions - Authentication flows designed for stable broadband connections behave differently on the mobile networks most Filipino customers actually use. Timeout handling, image compression, and fallback logic all need to be tested under realistic network conditions before deployment, not after.
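The calibration trade-off in the first point can be made concrete with a small threshold sweep: for each candidate threshold, measure the false rejection rate (FRR) on genuine attempts and the false acceptance rate (FAR) on recorded presentation attacks. A hedged sketch, assuming match scores in [0, 1] where higher means more likely genuine; the scores themselves would come from the institution's own test data, not a benchmark dataset from another market:

```python
def calibrate_threshold(genuine_scores, attack_scores, thresholds):
    """Sweep candidate thresholds and report the FRR/FAR trade-off.

    genuine_scores: match scores from legitimate customer attempts.
    attack_scores:  match scores from presentation attacks (replays, photos).
    Returns a list of (threshold, frr, far) tuples.
    """
    results = []
    for t in thresholds:
        # Too strict: genuine customers scored below t get rejected.
        frr = sum(s < t for s in genuine_scores) / len(genuine_scores)
        # Too loose: attacks scored at or above t get through.
        far = sum(s >= t for s in attack_scores) / len(attack_scores)
        results.append((t, frr, far))
    return results
```

Running this sweep on scores collected from the actual device profiles and demographic spread of the customer base is what "tuning" means in practice: the operating point is chosen from the measured curve, not copied from a vendor datasheet.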

What good looks like

When these components are working correctly together, the customer experience is unremarkable in the best possible way. The transaction completes. The fraud attempt does not. The institution has a defensible audit trail. The false rejection rate is low enough that the compliance team is not fielding customer complaints about the authentication system.

The institutions that are already running this in production did not get there by treating AFASA compliance as a procurement decision. They treated it as an infrastructure build. The procurement decision was just the beginning.



Alok Chaubey | Chief Revenue Officer, Trusting Social Philippines

Alok Chaubey is a revenue architect with 19 years of experience scaling fintech, credit intelligence, and alternative data across India and Southeast Asia. At Trusting Social, Alok has been instrumental in onboarding 50+ institutional clients in the Philippines, driving AI-powered identity and fraud intelligence to bridge the gap for the unbanked. He holds an MBA from Nottingham Trent University.

Is your institution actually ready for June 30?

Compliant and ready are not the same thing, and the difference usually shows up mid-implementation. Let's talk through where your institution stands before the deadline hits.

LET'S TALK