

Gmail, Outlook and Apple users urged to watch out for this new email scam: Cybersecurity experts sound alarm

Artificial intelligence: authentic scams.

AI tools are being maliciously used to send “hyper-personalized emails” that are so sophisticated victims can’t identify that they’re fraudulent.

According to the Financial Times, AI bots are compiling information about unsuspecting email users by analyzing their “social media activity to determine what topics they may be most likely to respond to.”

Scam emails that appear to have been written by family and friends are then sent to those users. Because of the personal nature of the messages, recipients often cannot tell that they are actually nefarious.

“This is getting worse and it’s getting very personal, and this is why we suspect AI is behind a lot of it,” Kristy Kelly, the chief information security officer at the insurance agency Beazley, told the outlet.

“We’re starting to see very targeted attacks that have scraped an immense amount of information about a person.” 

Malicious actors use artificial intelligence to scrape online data about potential targets as well as pen convincing fraudulent emails. Kaspars Grinvalds – stock.adobe.com

“AI is giving cybercriminals the ability to easily create more personalized and convincing emails and messages that look like they’re from trusted sources,” security company McAfee recently warned. “These types of attacks are expected to grow in sophistication and frequency.”

While many savvy internet users now know the telltale signs of traditional email scams, it’s much harder to tell when these new personalized messages are fraudulent.

Gmail, Outlook, and Apple Mail do not yet have adequate “defenses in place to stop this,” Forbes reports.

“Social engineering,” ESET cybersecurity advisor Jake Moore told Forbes, “has an impressive hold over people due to human interaction but now as AI can apply the same tactics from a technological perspective, it is becoming harder to mitigate unless people really start to think about reducing what they post online.”

Experts warn that the phishing emails are so advanced that they can slip past security measures and dupe users. Prostock-studio – stock.adobe.com

Bad actors are also able to use AI to write convincing phishing emails that mimic banks, online accounts and other trusted services. According to US Cybersecurity and Infrastructure Security Agency data cited by the Financial Times, over 90% of successful breaches start with phishing messages.

These highly sophisticated scams can bypass security measures, and the inbox filters meant to screen out scam emails may fail to catch them, Nadezda Demidova, cybercrime security researcher at eBay, told the Financial Times.

“The availability of generative AI tools lowers the entry threshold for advanced cybercrime,” Demidova said.

Users have been urged to bolster online account security and to verify the legitimacy of links and their senders before clicking. PhotoGranary – stock.adobe.com

McAfee warned that 2025 would usher in a wave of advanced AI used to “craft increasingly sophisticated and personalized cyber scams,” according to a recent blog post.

Software company Check Point issued a similar prediction for the new year.

“In 2025, AI will drive both attacks and protections,” Dr. Dorit Dor, the company’s chief technology officer, said in a statement. “Security teams will rely on AI-powered tools tailored to their unique environments, but adversaries will respond with increasingly sophisticated, AI-driven phishing and deepfake campaigns.”

To protect themselves, users should never click on links within emails unless they can verify the legitimacy of the sender. Experts also recommend bolstering account security with two-factor authentication and strong passwords or passkeys.
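Part of that advice, checking a link's real destination before clicking, can be automated. Below is a minimal, hypothetical Python sketch (the function names and the example domains are illustrative, not from any real security product) that flags one classic phishing tell: visible link text showing one domain while the underlying href points somewhere else entirely.

```python
from urllib.parse import urlparse

def hostname(url: str) -> str:
    """Extract the lowercase hostname from a URL, or '' if none."""
    return (urlparse(url).hostname or "").lower()

def link_is_suspicious(display_text: str, href: str) -> bool:
    """Flag a link whose visible text shows one domain but whose
    actual target (href) resolves to a different one.

    Returns False when the visible text does not look like a
    domain or URL, since there is nothing to compare."""
    text = display_text.strip()
    if "." not in text or " " in text:
        return False  # plain text like "Click here": nothing to compare
    shown = hostname(text if "://" in text else "https://" + text)
    actual = hostname(href)
    if not shown or not actual:
        return False
    # Allow exact matches and subdomains of the shown domain.
    return not (actual == shown or actual.endswith("." + shown))

# Text says mybank.com, but the link really goes to evil-site.ru
print(link_is_suspicious("www.mybank.com",
                         "https://mybank.com.evil-site.ru/login"))
```

A check like this only catches mismatched-domain links; it cannot verify that a sender is who they claim to be, which is why the experts quoted above still recommend out-of-band verification and two-factor authentication.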

“Ultimately,” Moore told Forbes, “whether AI has enhanced an attack or not, we need to remind people about these increasingly more sophisticated attacks and how to think twice before transferring money or divulging personal information when requested — however believable the request may seem.”
