2021 will go down as one of the worst years for spam messages. While email services like Gmail offer pretty sophisticated AI to determine what’s junk and what’s legit, traditional SMS has become a vector for a lot of scam messages this year.
Generally, these have been fairly easy for the trained observer to spot, but given they contain links, it is easy to understand how some people are tempted to tap through to see what’s on the other side, enter their details, and have them compromised.
The SMSs are often strange, unexpected, garbled messages with an invitation to click a link to what is often a malicious website. Despite many of us complaining online, it seemed Australian telcos were unable to do anything about the SMS scams, but fortunately, companies like Telstra have been working on a solution.
This year Telstra has received 11,100 SMS scam reports from customers compared to just 50 in 2020. This explosion is due in large part to the cybercrime campaign known as FluBot, a clear example of how technology is evolving and criminals are getting smarter.
To get ahead of the challenge, Telstra’s technology solutions are also evolving, and the telco is developing a new cyber safety capability to help turn the tables on the scammers targeting Australians. The tool is designed to automatically detect and block scam SMS messages as they travel across its network, stopping them before they reach your mobile phone.
This new technology is complex, but in simple terms, it applies knowledge of what a scam SMS looks like as it travels across the network, and if a message looks suspicious, it blocks it. It does this by automatically scanning the content of messages for suspicious patterns and characteristics, along with other data including the time, the sender, the number of messages sent, and the recipient.
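Telstra hasn’t published the internals of its filter, but a minimal sketch of the idea, with entirely hypothetical rules, field names and thresholds, might combine simple content checks with metadata about the sender:

```python
import re
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SmsEvent:
    sender: str
    recipient: str
    body: str
    sent_at: datetime
    sender_recent_count: int  # messages this sender has sent recently

def suspicion_score(sms: SmsEvent) -> float:
    """Toy scoring of an SMS using content patterns plus sender metadata."""
    score = 0.0
    if re.search(r"https?://\S+", sms.body):                       # contains a link
        score += 0.3
    if re.search(r"(voicemail|parcel|delivery|prize)", sms.body, re.I):
        score += 0.2                                               # common scam lures
    if sms.sender_recent_count > 1000:                             # unusually high send rate
        score += 0.3
    if sms.sent_at.hour < 6:                                       # odd hour for a legitimate sender
        score += 0.1
    return score

def should_block(sms: SmsEvent, threshold: float = 0.6) -> bool:
    return suspicion_score(sms) >= threshold

# Example: a FluBot-style lure sent from a high-volume number in the early hours
msg = SmsEvent("+61400000000", "+61411111111",
               "You have a new voicemail: http://example.com/xyz",
               datetime(2021, 11, 20, 3, 15), sender_recent_count=5000)
print(should_block(msg))  # True
```

A production filter would obviously use far richer signals than these hand-picked rules, which is where the machine learning described next comes in.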
Telstra is currently running a pilot of this capability internally, so that any scam SMS messages sent to its people help ‘train’ the system to spot the difference between a legitimate and a malicious SMS. The more scams it sees, the smarter it becomes. This is ultimately a data problem: machine learning models are trained on the text of reported messages to determine which sentence structures are spammy and which are legitimate.
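Telstra hasn’t said which models it uses, but the general shape of the approach is familiar: labelled examples train a text classifier that scores new messages. A minimal sketch in scikit-learn, with invented training data and placeholder URLs, looks like this:

```python
# Toy text classifier: labelled SMS examples train a model that
# scores new messages as scam or legit. Training data is invented
# purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "You have a missed call, listen here: http://bad.example/v1",
    "Your parcel could not be delivered, reschedule: http://bad.example/p2",
    "Hey, running 10 minutes late, see you soon",
    "Your dentist appointment is confirmed for 2pm Tuesday",
]
labels = ["scam", "scam", "legit", "legit"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Each newly reported scam becomes another labelled example, and retraining
# on the growing dataset is what makes the system "smarter" over time.
print(model.predict(["New voicemail waiting: http://bad.example/v9"]))
```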
During the pilot, employees’ family and friends, as well as volunteers, will be added to the program over the coming month. Similar to how Tesla trains its cars to be autonomous, manual intervention will be required, adding to the training inputs that improve the model.
A small technical team will access the platform to review suspected scam messages, with sender and recipient data removed so individuals are not identifiable, to protect privacy. The platform is also set up so it cannot be used for any other purpose.
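Telstra doesn’t describe how this de-identification works, but a simple sketch of the idea is to strip or irreversibly hash the sender and recipient before a message ever reaches a reviewer (the field names and salt here are hypothetical):

```python
import hashlib

def redact_for_review(sms: dict, salt: str = "rotating-secret") -> dict:
    """Keep the content needed to assess a scam, drop anything identifying."""
    def pseudonymise(number: str) -> str:
        # One-way hash so reviewers can group messages from the same source
        # without ever seeing the actual phone number.
        return hashlib.sha256((salt + number).encode()).hexdigest()[:12]

    return {
        "body": sms["body"],
        "sender_id": pseudonymise(sms["sender"]),
        "recipient_id": pseudonymise(sms["recipient"]),
        # Raw numbers are intentionally not copied across.
    }

print(redact_for_review({
    "sender": "+61400000000",
    "recipient": "+61411111111",
    "body": "Your package is waiting: http://bad.example/track",
}))
```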
It is secured to the same high standard as the rest of Telstra’s network, with access restricted and logged. Once the system can accurately and effectively block the majority of scam SMS, Telstra plans to enable it across its mobile network, probably early next year.
This effort is part of Telstra’s ‘Cleaner Pipes’ initiative, which aims to leverage its expertise to proactively protect customers, businesses and the nation more broadly against cybercrime and scams.
Even with all of this work, it is important to understand that this will not be a foolproof solution to scams; no technology is perfect, malicious actors will continue to find new methods to attempt to scam Australians, and SMS will likely continue to be a part of that. We all need to remain alert to the possibility that an unexpected message may not be genuine.
Given this problem is shared by all Australians, it would be great to see all telco providers work together to solve it and improve the detection of scam SMS.
Telstra says it’s proud to address this complex issue with the help of the Federal Government, which is providing the necessary guidance and regulatory amendments to support the development and use of this technical capability.
While Telstra doesn’t provide detail on exactly what this means, it is likely the company required permission to intercept messages before delivery to the intended recipient, expand the URLs from potentially malicious messages, and open them in a sandbox to confirm whether a shortened URL would take the user to a known malicious destination. If it would, Telstra could block the message before the user ever receives it.
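We don’t know exactly how Telstra detonates these links, but the simplest version of the idea is to follow a shortened URL’s redirect chain and compare the final destination against a blocklist. A rough sketch, with a placeholder blocklist and placeholder URLs (a real system would render the page in an isolated sandbox and use live threat-intelligence feeds):

```python
# Follow a shortened URL's redirects (without rendering the page) and check
# whether the final destination is on a known-bad list.
from urllib.parse import urlparse
import requests

KNOWN_BAD_HOSTS = {"flubot-payload.example", "fake-parcel-tracking.example"}

def is_malicious_link(short_url: str) -> bool:
    try:
        # allow_redirects=True follows the chain; resp.url is the final hop
        resp = requests.head(short_url, allow_redirects=True, timeout=5)
    except requests.RequestException:
        return True  # unreachable or odd behaviour: treat cautiously in this sketch
    return urlparse(resp.url).hostname in KNOWN_BAD_HOSTS

if is_malicious_link("https://tinyurl.example/abc123"):
    print("Block the SMS before delivery")
else:
    print("Deliver the SMS")
```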
On the surface, this sounds like a scary concept, but when you realise this technique has been applied to email and security software for years, it’s really nothing new. Let’s hope Telstra’s AI is well implemented and we don’t get any false positives, particularly given SMS is now heavily relied on for multi-factor authentication requests.