Not OK: Blocking abusive payment messages
Women’s safety, including in workplaces, has rightly been in focus in recent months.
For us as a bank, a clear extension of this discourse is financial safety – and the question of what more the industry can do to ensure all customers feel safe and protected in their financial affairs.
Sadly, around two years ago a disturbing form of abuse was uncovered: a digital banking feature put in place to benefit customers was being turned against them.
Abusers had worked out that one of the many features of Australia's industry-wide New Payments Platform, launched in 2018 – the ability to include messages of up to 280 characters to enhance a payment's remittance information – enabled them to use transactions to send inappropriate messages to their victims.
That people would even think to use the banking system as an alternative messaging platform was incredible enough, but using it to send abuse, to threaten and to intimidate was unnerving and, quite frankly, disgusting.
Almost three decades in customer care roles at Westpac has shown me how often disrespect, shocking behaviour, manipulation and abuse in people's personal lives can spill over into banking. But what shocked me most about the abusive payment messages was their scale – hundreds of customers affected, daily – and the fact that the abuse directed towards women was particularly appalling.
The issue traces back to about a year after the New Payments Platform went live, when the industry – in Westpac's case, our fraud team – detected a notable increase in "colourful" language in payment messages.
While most of this was simply banter – for example, jokes between friends paying each other their share of a night out – some was far more sinister, with abusive, harassing and sometimes violent undertones. The perpetrators were making low-value transactions – often as little as 1c – as a means of contacting their victims.
From that moment, banks moved to deal with these unintended consequences – and have agreed as an industry to act against this misuse.
At Westpac, we immediately began manual detection and escalation while developing an automated monitoring solution using advanced data analytics. Since we switched on the new technology in January, more than 6000 payments made by Westpac and St.George customers have been blocked in real time – with the sender notified – because they included a message containing words deemed inappropriate or offensive.
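To make the mechanics concrete, here is a minimal sketch of how real-time screening of payment messages can work. It is purely illustrative: the blocklist, function names and simple word-matching logic are hypothetical assumptions for this sketch, not Westpac's actual system, which relies on more advanced data analytics.

```python
# Illustrative sketch only. A production system like the one described
# above would use far more sophisticated analytics; the blocklist and
# all names here are hypothetical placeholders.
import re

BLOCKLIST = {"exampleslur", "examplethreat"}  # placeholder terms


def words_in(message: str) -> list[str]:
    """Lower-case the payment message and split it into words."""
    return re.findall(r"[a-z']+", message.lower())


def should_block(message: str) -> bool:
    """Return True if the message contains a blocklisted word."""
    return any(word in BLOCKLIST for word in words_in(message))


def process_payment(message: str) -> str:
    """Screen the message in real time, blocking the payment and
    notifying the sender if it is deemed inappropriate or offensive."""
    if should_block(message):
        return "BLOCKED: please change your language before the payment is processed"
    return "PROCESSED"


print(process_payment("thanks for dinner!"))      # PROCESSED
print(process_payment("watch out, exampleslur"))  # BLOCKED: ...
```

In practice, screening at the moment of payment means the abusive message is never delivered at all – the sender is asked to change their language before the transaction proceeds, as described below.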
We took another major step last month, enabling customers who receive an abusive message to report it to us at the click of a report button, which alerts a dedicated team.
The customers involved are a small minority – the vast majority use payment messages respectfully – but the importance of introducing these capabilities, in particular the abuse-reporting function, cannot be overestimated, and the implications have been eye-opening.
Among the customers who have reported abuse – so far, around a dozen – many have told us the person sending the abuse had already been blocked on social media platforms and phones, sometimes because an apprehended violence order (AVO) had been taken out against them. Yet, frighteningly, those senders had circumvented the AVO by using payments to continue their harassment.
In those circumstances, we’ve escalated the issue to police, and warned the senders’ financial institutions.
Among customers whose transactions have been blocked, our response varies with the level of threat – from harmless banter to dangerous intimidation. Some have been advised to change their language before the transaction is processed; around 75 repeat offenders have been sent warning letters telling them to stop; and, in a handful of extreme cases, customers have been "unbanked". A few cases have also warranted a "suspicious matter report" being lodged with the financial intelligence agency, AUSTRAC.
On a positive note, since introducing these interventions we've seen inappropriate messages fall, and we expect perpetrator behaviour to change over time as people realise we simply won't tolerate it.
Importantly, the ability to report abuse has in fact opened a new way for people – especially women – to send us a cry for help. It’s a signal for us to check on their safety and provide extra care using our well-established, confidential support model for customers experiencing domestic or family violence or financial abuse – from helping with emergency assistance for someone fleeing an abusive relationship, to helping unwind banking arrangements into which they’ve been coerced.
Cracking down on abusive messages and lifting the standard of language used in payments has been a step in the right direction towards restoring our customers' safety – and, once all industry players are on board with blocking and reporting functionality, this behaviour can, in theory, be stamped out altogether.
But, unfortunately, human nature and history tell us perpetrators are likely to keep seeking new ways to harass and abuse others, including by manipulating technology.
That puts the onus on us to stay constantly vigilant to new behaviours and to the abuse of new technologies. If there's one place people should feel safe and spared from abuse, it's their digital banking experience – and we're up for making that happen.
If you or someone you know is experiencing violence or abuse, please contact 1800 RESPECT.