New Delhi, March 14 : Researchers have warned users to avoid chatbots that do not appear on a company's official website or app, and to be cautious about sharing any personal information with someone they are chatting with online, a new report said on Tuesday.
According to the Norton Consumer Cyber Safety Pulse report, cybercriminals can now quickly and easily craft email or social media phishing lures that are even more convincing by using AI chatbots like ChatGPT, making it more difficult to tell what's legitimate and what's a threat.
"We know cybercriminals adapt quickly to the latest technology, and we're seeing that ChatGPT can be used to quickly and easily create convincing threats," said Kevin Roundy, Senior Technical Director of Norton. Moreover, the report said that bad actors can also use AI technology to create deepfake chatbots.
These chatbots can impersonate humans or legitimate sources, like a bank or government entity, to manipulate victims into handing over their personal information, which attackers can then use to access sensitive accounts, steal money or commit fraud.
To stay safe from these new threats, experts advise users to think before clicking on links sent in unsolicited phone calls, emails or messages. They also recommend that users keep their security solution updated and ensure it has a full set of security layers that go beyond known-malware detection, such as behavioural detection and blocking.
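The "think before clicking" advice above can be partly automated. As an illustration only (this is not how Norton's product works, and the allowlist and distance threshold are assumptions), here is a minimal Python sketch that flags links whose domain is a near-miss of a known legitimate domain, a common phishing trick such as "paypa1.com":

```python
from urllib.parse import urlparse

# Hypothetical allowlist of legitimate domains; a real tool would
# use a much larger, curated and regularly updated list.
KNOWN_DOMAINS = {"paypal.com", "google.com", "norton.com"}

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def looks_suspicious(url: str) -> bool:
    """Flag URLs whose domain is close to, but not exactly,
    a known legitimate domain (a lookalike-domain phishing trick)."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in KNOWN_DOMAINS:
        return False
    # A distance of 1-2 characters suggests a lookalike, e.g. paypa1.com
    return any(0 < edit_distance(domain, known) <= 2
               for known in KNOWN_DOMAINS)
```

For example, `looks_suspicious("http://paypa1.com/login")` returns True, while the genuine `https://www.paypal.com/` is not flagged. A heuristic like this is only one layer; it cannot replace user caution or a full security suite.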
(The above story first appeared on LatestLY on Mar 14, 2023 03:30 PM IST. For more news and updates on politics, world, sports, entertainment and lifestyle, log on to our website latestly.com).