Redefining Trust, Security and Customer Experience with AI
Artificial intelligence (AI) is no longer a future concept in banking. AI is already shaping how customers are served, how risks are managed, and how trust is earned. For financial institutions, that last point matters most. Trust is the currency of banking, and AI is changing the way it’s built, tested, and sustained.
Photo: Nam A Bank is one of the pioneering banks in Vietnam to apply AI robots in transactions
In the past, trust was anchored primarily in brand strength, balance sheet resilience, and long-standing relationships. Those still matter. But in a digital-first world, trust is increasingly judged by system behaviour: whether services are consistently reliable, whether decisions are fair and explainable, whether data is protected, and whether customers can get help quickly when it matters.
Phil Wright, COO of HSBC Vietnam, said: “AI is a powerful tool to improve customer outcomes and strengthen the integrity of the financial system, but only if it’s deployed responsibly, securely, and with clear human accountability. The question isn’t whether banks should use AI. The question is how we use it in a way that earns trust every day.”
Trust is becoming evidence-based and regulators are raising the bar
AI is raising expectations from both customers and regulators. People don’t just want a smooth app experience; they want confidence that the bank is acting in their best interests, protecting their data, and making decisions that are consistent and fair.
That’s why trust is becoming more measurable. It’s no longer enough to say, “trust us”. Banks need to demonstrate governance, auditability, and outcomes. When AI is involved in decisions, whether it’s detecting fraud, prioritising service requests, or supporting risk processes, leaders must be able to answer basic questions with clarity: What data is being used? For what purpose? How is the model tested? How is it monitored? What happens when it’s wrong?
“This direction of travel is visible in conversations globally, from emerging AI regulations to strengthened expectations around model risk management, operational resilience, and data protection. In Vietnam, the regulatory landscape is also evolving quickly. The Government has signalled clear intent to shape the safe development and use of AI, and organisations should expect increasing focus on transparency, accountability, privacy, and cybersecurity as Vietnam’s AI regulatory framework matures. For banks, the practical implication is straightforward: build Responsible AI now, in a way that can stand up to future scrutiny”, said Phil Wright.
HSBC’s approach is to embed Responsible AI into the full lifecycle: selecting appropriate use cases, ensuring strong data governance and privacy protections, testing for robustness and bias, documenting decisions, and monitoring continuously for drift and unintended impacts. In banking, “set and forget” is not a strategy.
And crucially, accountability remains human. AI can inform decisions, but it doesn’t own them. For material decisions and exceptions - especially those that could significantly affect a customer - humans must remain in the loop. That’s not a limitation of AI; it’s a feature of responsible banking.
AI changes the game, but it doesn’t end it
One of the most common questions is whether AI can eliminate fraud. AI is particularly effective at identifying patterns that are difficult for humans to see at scale: unusual transaction behaviour, device and network signals, mule-account activity, and emerging scam typologies. This helps banks move from reactive controls to earlier intervention, preventing harm rather than simply responding after the fact. However, as banks improve controls, criminals evolve their tactics too, with AI-enabled deepfakes making attacks more sophisticated.
A practical example is how HSBC has applied advanced analytics and machine learning to strengthen financial crime controls. One initiative is its Dynamic Risk Assessment in transaction monitoring, which helps risk-rate activity more dynamically and prioritise alerts more intelligently. The objective is not to “automate judgement”, but to improve signal quality: reducing noise, focusing investigative effort where it matters most, and supporting faster, more consistent outcomes.
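To make the idea of risk-rating and alert prioritisation concrete, here is a minimal, purely illustrative sketch. The signals, weights, and thresholds are invented for this example and bear no relation to HSBC’s actual Dynamic Risk Assessment; real systems use far richer features and governed, tested models.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    amount: float          # transaction amount
    new_beneficiary: bool  # first payment to this counterparty?
    odd_hours: bool        # outside the customer's usual activity window?

def risk_score(a: Alert) -> float:
    """Combine weighted signals into one priority score (weights are illustrative)."""
    score = min(a.amount / 10_000, 1.0) * 0.5  # size relative to an arbitrary cap
    score += 0.3 if a.new_beneficiary else 0.0
    score += 0.2 if a.odd_hours else 0.0
    return round(score, 3)

def prioritise(alerts: list[Alert]) -> list[Alert]:
    """Rank alerts so investigators see the highest-risk cases first."""
    return sorted(alerts, key=risk_score, reverse=True)

alerts = [
    Alert("A1", 250.0, new_beneficiary=False, odd_hours=False),
    Alert("A2", 9_500.0, new_beneficiary=True, odd_hours=True),
    Alert("A3", 4_000.0, new_beneficiary=True, odd_hours=False),
]
ranked = prioritise(alerts)
print([a.alert_id for a in ranked])  # highest risk first
```

The point of the sketch is the shape of the approach, not the numbers: scoring improves signal quality by ordering work, while the judgement about each case stays with the investigator.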
But in Phil Wright’s opinion, AI is only one layer of defence. A robust fraud strategy combines AI with strong identity and access management, authentication, transaction controls, and clear escalation paths. It also requires continuous tuning and monitoring, because fraud patterns shift quickly.
There’s another dimension that matters just as much: customer experience and fairness. Overly aggressive models can generate high false-positive rates, blocking legitimate transactions, frustrating customers, and potentially excluding vulnerable groups. Responsible AI means balancing protection with access, and ensuring there’s a clear path for human review when a customer is impacted.
Customer experience is a differentiator when it’s built on trust
In financial services, customer experience has become a major differentiator. But it’s not simply about speed or convenience. The best experience is one that customers can rely on, especially in moments that matter: a suspected scam, a disputed transaction, a major life purchase, or financial hardship.
“AI can improve experience in tangible ways: faster onboarding, quicker query resolution, more proactive alerts, and more personalised support. It can also help colleagues by summarising information, suggesting next-best actions, and reducing manual work so they can spend more time on complex customer needs”, stated Phil Wright.
However, Phil Wright said, personalisation must be handled carefully. Customers should feel helped, not watched. That’s why Responsible AI principles such as data minimisation, purpose limitation, transparency, and security are essential. The goal is to use data in ways that are expected, appropriate, and beneficial, with clear boundaries.
And again, humans matter. Banking is not purely transactional. When customers face complex or stressful situations, they want a person who can listen, explain, and take responsibility. AI should augment colleagues, not replace the human relationship at the heart of trust.
The real challenges are data, models, and people
Adopting AI securely is achievable, but in Phil Wright’s view, it requires banks to address three core challenges.
First: data risk. AI depends on data quality, lineage, and access control. If you can’t prove where data came from, who touched it, and whether it’s appropriate for the purpose, you’re building on sand. Strong privacy controls and disciplined data governance are foundational, not optional.
Second: model risk. Models can drift as customer behaviour and economic conditions change. They can behave unpredictably at the edges. And with GenAI, new risks emerge such as prompt injection, sensitive data leakage, and overreliance on outputs that may sound confident but be incorrect. These risks require robust testing, explainability where needed, continuous monitoring, and clear controls around third-party tools and dependencies.
Third: people and process. Secure AI isn’t just a technology problem; it’s an operating discipline. Teams need the skills to build and validate models, the processes to manage change safely, and the culture to challenge outcomes and escalate concerns. When something unusual happens, judgement matters. That’s why human-led oversight and incident response remain essential.
Embedding AI into core decision-making
Many organisations treat AI as an add-on by running a set of pilots, proofs of concept, and isolated tools. That approach rarely scales, and it often increases risk. To embed AI into core decision-making, Phil Wright suggests that banks should start with the decision and the customer journey, not the model. Identify where prediction, prioritisation, or automation improves outcomes, and define success metrics that include risk and fairness, not just efficiency.
Next, design “decisioning with controls”. Be explicit about decision rights, thresholds, and human review points. For example: AI can recommend and rank cases, but a colleague approves high-impact actions; exceptions are routed for review; and customers have a path to challenge outcomes. This is how AI becomes usable in a regulated environment.
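The “decisioning with controls” pattern described above can be sketched as a small routing function. The routes, thresholds, and the high-impact rule here are hypothetical illustrations of the principle (explicit decision rights, human review points, exception routing), not any bank’s actual policy.

```python
from enum import Enum

class Route(Enum):
    AUTO_APPROVE = "auto_approve"        # low risk: straight-through processing
    EXCEPTION_QUEUE = "exception_queue"  # uncertain: routed for human review
    HUMAN_REVIEW = "human_review"        # high impact: a colleague must approve

# Illustrative thresholds; in a real bank these are governed and documented.
AUTO_THRESHOLD = 0.2
REVIEW_THRESHOLD = 0.7

def route_decision(model_score: float, high_impact: bool) -> Route:
    """AI recommends and ranks; decision rights stay explicit.
    High-impact actions always go to a human, whatever the score."""
    if high_impact:
        return Route.HUMAN_REVIEW
    if model_score < AUTO_THRESHOLD:
        return Route.AUTO_APPROVE
    if model_score < REVIEW_THRESHOLD:
        return Route.EXCEPTION_QUEUE
    return Route.HUMAN_REVIEW

print(route_decision(0.05, high_impact=False).value)  # auto_approve
print(route_decision(0.05, high_impact=True).value)   # human_review
```

Note the design choice: the high-impact check comes first, so no model score can bypass human approval for decisions that significantly affect a customer.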
Finally, industrialise delivery. Reusable platforms, model lifecycle management, monitoring, and change management are what turn AI from a one-off project into a safe, repeatable capability. And it must be cross-functional from day one with business, technology, risk, and compliance working together.
Governance that enables innovation safely
Responsible AI governance should feel familiar to banking, but extended for AI’s unique risks.
“A strong model is the three lines of defence, clearly applied to AI: the business owns the use case and outcomes; independent risk and compliance provide challenge and approval; and audit provides assurance. Clarity of ownership matters because ambiguity is where problems hide. Governance must also be lifecycle-based: from use-case assessment to retirement, with continuous monitoring for drift and unintended impacts. It must be transparent and documented: data lineage, testing evidence, explainability appropriate to the decision, and decision logs. This supports audit, regulatory review, and customer recourse”, emphasized Phil Wright.
Governance isn’t there to slow innovation. It’s there to make innovation safe, scalable, and worthy of trust.
Preparing the workforce
Finally, none of this works without people. Preparing for an AI-driven future requires role-based upskilling: AI literacy for all colleagues, deeper skills for builders, and governance capability for leaders and control functions. It also requires clear ways of working: approved tools, defined use cases, data handling rules, and explicit expectations for human-in-the-loop.
Phil Wright said this reduces “shadow AI” and ensures innovation happens within guardrails. And it requires change leadership: communicating purpose, measuring outcomes across productivity and risk, and supporting colleagues as roles evolve. Responsible transformation includes looking after people, not just deploying technology.
Innovate at pace, but never at the expense of trust
AI can make banking safer, faster, and more personalised. It can help us detect fraud earlier, serve customers better, and support colleagues with smarter tools. But in financial services, progress is only progress if it strengthens trust. That’s why the path forward is clear: Responsible AI by design, secure foundations, strong governance, and humans in the loop, so innovation moves at pace, while accountability remains firmly in human hands. In the end, customers and regulators aren’t asking banks to use less AI. They’re asking banks to use AI well.