But today, a new threat is emerging at an unprecedented pace: LLM-powered social engineering. Large Language Models have become sophisticated enough that, for the first time, attackers can produce astonishingly convincing scams, impersonations, and manipulative messages at scale. These AI-driven attacks are now considered among the biggest security threats in crypto, as investors increasingly rely on digital communication, online support, and automated systems.
When AI Meets Social Engineering
Until recently, social engineering relied on manual human effort: scammers crafted individual messages, built fake personas, and tricked victims one at a time. The emergence of AI changed everything.
LLMs can now:
Create highly persuasive text
Imitate writing styles
Translate conversations into multiple languages
Generate fake expert advice
Automate scam conversations
Create realistic personas in minutes
This means that crypto users, exchanges, Web3 startups, wallet providers, and even developers face a new category of risk: the attack surface grows sharply once attackers gain superhuman communication capabilities. The speed, scale, and personalized nature of these attacks make them extremely dangerous, especially for anyone involved in wallet management, customer support, or online crypto communities.
Understanding LLM-Powered Social Engineering
Social engineering involves manipulating individuals into revealing sensitive information, authorizing transactions, or clicking malicious links. Applied with LLMs, these tactics become:
Faster
More personalized
More automated
Harder to detect
Highly scalable
LLMs can draw on huge volumes of data: social media profiles, forum posts, transaction histories, and publicly shared wallet information. The messages they produce are built around trust, confidence, authority, and urgency, some of the most common psychological triggers in scams.
Why AI Makes Social Engineering More Dangerous
LLMs allow attackers to:
Perform mass attacks with zero human effort
Tailor scams based on a victim's background
Write with perfect grammar and fluent conversation
Produce content in several languages instantly
Generate believable scripts for voice scams
Simulate professional communication styles
This is no longer mere phishing; it has escalated into AI-boosted psychological manipulation.
How Attackers Use LLMs in Crypto Scams
Sophisticated attacks that were once impractical because they took too much time or skill become practical with LLMs. Some common use cases are listed below.
1. Posing as Crypto Companies and Their Support Teams
For example, LLMs can compose messages that read as if they came from:
Wallet operators
Crypto exchanges
Blockchain foundations
NFT marketplaces
DeFi protocol teams
Customer service representatives
Attackers craft emails, chat messages, or support responses that look and sound fully legitimate. Many crypto users fall victim because they believe they are interacting with a genuine representative.
Common impersonation messages include:
"We detected an issue with your wallet. Please verify ownership."
"Your account has been temporarily locked. Reset your credentials here."
"A suspicious transaction was detected. Approve this verification request."
Victims often unknowingly sign permissions for malicious smart contracts.
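One concrete way to see what such a "verification request" actually does is to decode the transaction's calldata before signing. The following is a minimal, self-contained Python sketch, not a complete safety check: it assumes the raw data field shown in the wallet prompt is available as a hex string, and it only recognizes the standard ERC-20 approve(address,uint256) selector (0x095ea7b3).

# Minimal sketch (illustrative only): decode calldata before signing it.
# 0x095ea7b3 is the standard 4-byte selector for ERC-20 approve(address,uint256).

APPROVE_SELECTOR = "095ea7b3"
UNLIMITED = 2**256 - 1

def describe_calldata(data_hex: str) -> str:
    data = data_hex.lower().removeprefix("0x")
    if not data.startswith(APPROVE_SELECTOR):
        return "Not an ERC-20 approve call; other selectors need their own decoding."
    # ABI encoding: two 32-byte words follow the 4-byte selector.
    spender = "0x" + data[8:72][-40:]    # word 1: last 20 bytes hold the spender address
    amount = int(data[72:136], 16)       # word 2: the allowance being granted
    if amount == UNLIMITED:
        return f"WARNING: unlimited token allowance granted to {spender}"
    return f"approve({spender}, {amount})"

# Hypothetical example: an unlimited approval to an unknown spender address.
spender_word = ("ab" * 20).rjust(64, "0")   # fake address, ABI-padded to 32 bytes
amount_word = "f" * 64                      # 2**256 - 1, i.e. unlimited
print(describe_calldata("0x" + APPROVE_SELECTOR + spender_word + amount_word))

Wallets and browser extensions perform similar decoding; the point is that the calldata, not the message that accompanies it, determines what the victim is actually authorizing.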
2. Fake Airdrops and Token Claims
Airdrop scams have existed for many years, but LLMs now supercharge them.
AI can:
Generate personalized messages claiming eligibility
Describe project details in a convincing yet completely false manner
Write plausible technical instructions
Redirect victims to malicious claim websites
Attackers also pair LLMs with data scraped from Twitter, Discord, and Telegram to identify users who actively engage in Web3 communities.
3. Deepfake Text Conversations for High-Value Theft
For instance, LLMs may generate messages like the following to convince a victim to transfer funds:
False founder announcements
Sham project updates
Fake investor instructions
Fake developer messages
If an attacker gains access to a community channel, an LLM can instantly post plausible warnings, requests, or instructions that sound urgent and credible.
4. Automated "Pig Butchering" Schemes
Pig butchering scams involve long-term manipulation, in which an attacker builds emotional or financial trust with the victim before stealing from them.
LLMs enable scammers to:
Hold daily conversations
Create romantic or mentor-like personas
Provide fake market analysis
Pretend to be trading experts
Create convincing screenshots of profits
What used to take months of effort can now be automated by AI in minutes.
5. Malicious Smart Contract Explanations
One of the newest threats:
Attackers are using LLMs to describe malicious smart contracts as harmless.
Victims often ask questions like:
"Is this permission safe to sign?"
"What does this transaction do?"
"Is this NFT mint contract verified?"
An AI-powered scam bot will provide totally plausible but absolutely untrue explanations.
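The safest response to such questions is to decode the transaction rather than trust any explanation, human or AI. The sketch below is an illustrative Python check, using a hypothetical allowlist of operator addresses, for the blanket NFT permission setApprovalForAll(address,bool), whose standard ERC-721/ERC-1155 selector is 0xa22cb465; signing it hands control of every token in a collection to the operator, however reassuring the accompanying explanation sounds.

# Minimal sketch (illustrative only): check what an NFT "mint" transaction really requests.
# 0xa22cb465 is the standard selector for setApprovalForAll(address,bool).

SET_APPROVAL_FOR_ALL = "a22cb465"

# Hypothetical allowlist of operator contracts the project is known to use.
TRUSTED_OPERATORS = {"0x1111111111111111111111111111111111111111"}

def check_nft_permission(data_hex: str) -> str:
    data = data_hex.lower().removeprefix("0x")
    if not data.startswith(SET_APPROVAL_FOR_ALL):
        return "Not a setApprovalForAll call."
    operator = "0x" + data[8:72][-40:]     # word 1: operator address
    approved = int(data[72:136], 16) == 1  # word 2: boolean flag
    if approved and operator not in TRUSTED_OPERATORS:
        return f"WARNING: grants {operator} control over ALL tokens in this collection"
    return f"setApprovalForAll({operator}, {approved})"

# Example: blanket approval to an unknown operator address.
print(check_nft_permission(
    "0x" + SET_APPROVAL_FOR_ALL + ("cd" * 20).rjust(64, "0") + "1".rjust(64, "0")
))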
6. AI-Generated Phishing Emails and Landing Pages
LLMs generate phishing content that is professionally designed, free of grammatical errors, and perfectly aligned with the tone of a brand.
Attackers use them to generate:
Emails
Announcement posts
Technical documentation
FAQ pages
Customer support responses
That makes phishing sites almost indistinguishable from the originals.
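Because the copy on such pages can be flawless, the hostname is often the only reliable signal. Here is a minimal sketch, assuming a hypothetical official domain, that flags any link whose host is not the real domain or a subdomain of it; typo-squats and homoglyph lookalikes fail the check because they can never be an exact match.

# Minimal sketch (illustrative only): a phishing page can match the brand's text
# perfectly, but its hostname cannot. "example-exchange.com" is a hypothetical domain.
from urllib.parse import urlparse

OFFICIAL_DOMAINS = {"example-exchange.com"}

def looks_like_phishing(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Only an exact match on the official domain, or a subdomain of it, passes.
    return not any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

print(looks_like_phishing("https://support.example-exchange.com/reset"))  # False (legitimate)
print(looks_like_phishing("https://examp1e-exchange.com/claim"))          # True (typo-squat)
print(looks_like_phishing("https://еxample-exchange.com"))                # True (Cyrillic 'е' homoglyph)

Real defenses also need to normalize punycode and follow redirects, but even this trivial check defeats a pixel-perfect clone hosted on the wrong domain.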