Trend Micro says developing app to detect deepfakes | ABS-CBN
Trend Micro says developing app to detect deepfakes
Arthur Fuentes,
ABS-CBN News
Published May 17, 2024 02:24 PM PHT

Trend Micro Global Risk and Security Strategist Shannon Murphy talks about generative AI and its implications for cybersecurity during the company's Risk to Resilience World Tour held in BGC, Taguig on Thursday, May 16, 2024. Arthur Fuentes, ABS-CBN News

MANILA - Cybersecurity firm Trend Micro said it is developing software to quickly spot deepfake video and audio as the technology emerges as a cybersecurity threat.
During a briefing with tech journalists, Trend Micro Global Risk and Security Strategist Shannon Murphy said the company is developing security software that can analyze video and audio and tell if these were generated by artificial intelligence.
“This is actively in development right now,” Murphy said on the sidelines of the company’s Risk to Resilience World Tour held in BGC, Taguig on Thursday.
Deepfakes, or AI-generated video clips that use another person’s face and voice, have become a concern over their use in scams and misinformation.
Murphy said the cybersecurity firm has been doing extensive research into trends and activities in the cybercriminal underground, which prompted the company to invest in deepfake and audio-fake detection.
Clues to detecting deepfakes
Murphy said that Trend Micro aims to release its deepfake detector “later this year.”
She said that while people can teach themselves to spot clues to a deepfake, Trend Micro aims to let technology do the heavy lifting in analyzing manipulated videos.
The software under development does this by analyzing biological, audio-frequency, and spatial signals.
Using biological signals, the software “can actually see the heartbeat underneath your eyes,” Murphy said.
“Your forehead also gets warmer when you're speaking, and it [a deepfake] is not able to do that,” she added.
A real person’s voice also has a lot of frequency dynamism in it, “and that is hard for a deepfake or an audiofake to do today as well.”
The software can also go down to the pixel level to check if the face matches up with the background.
“You and I won't be able to see that, but again, we can use, you know, spatial-based signals, computer vision, and that type of thing to start to actually detect that.”
Deepfakes and the 2025 elections
A video clip recently surfaced that used the voice of President Ferdinand Marcos Jr. The manipulated audio was designed to sound as if the President had authorized the use of force against China amid tensions in the West Philippine Sea.
With the 2025 elections on the horizon, the Department of Information and Communications Technology has also warned about the possible proliferation of deepfakes that aim to misinform voters and swing elections.
David Ng, managing director for Trend Micro in Singapore, the Philippines and Indonesia, said AI will figure prominently in the coming polls.
“So in the elections to come, there will be a lot of AI that will be used to slander opposition.
“But I think the good thing is that a lot of organizations are also signing up to have a responsible AI,” Ng said.
He noted that tech firms have been setting rules on the ethical use of AI, but added that new cybersecurity laws need to be enacted.
Ng said social media platforms also need to step up.
“The second is a call to the social platforms themselves to do better content moderation, to actually flag that type of behavior as well, to help inform the electorate so they can make the best possible decision.”
ChatGPT for criminals?
Murphy meanwhile said there was no evidence of criminal groups deploying their own large language models such as DarkBard or FraudGPT.
She acknowledged that these concepts were being discussed in criminal forums as tools for creating new malware or launching attacks.
“But it's very challenging to actually pull this off,” Murphy said.
She added that there was one “legitimate good attempt” called WormGPT.
“It was up for about two weeks, I believe, and then it was pulled down from the developer because it hit this mainstream media and he was afraid of going to jail, essentially, so he pulled it.”
“Everything else, though, you know, the DarkBard, the FraudGPT, all of those, we saw pricing come out and we saw demo videos come out. Bad guys, it's not always bad guys versus good guys,” Murphy said.
The supposed criminal AI tools turned out to be vaporware, or simply scams on other scammers.
“Bad guys are totally willing to scam other bad guys.”
She said it was more common for criminals to try to "jailbreak" LLMs, or trick these AI systems into giving answers or solutions that violate the ethical guardrails set by developers.