Large language models and generative AI have dominated the news cycle and captivated the imaginations of many enthusiasts, but even casual users have started to pay attention. Billions have been poured into the “AI” sector, and now that end-users can get a taste of its abilities, we’re seeing a push to apply AI to everything.
Your social media feeds are filled with AI slop, your search results are stocked with AI-generated answers, and your devices offer AI-powered features.
What for? I don’t know, but AI powers it, so go with it.
NVIDIA needs the chip demand, or the stock market will tank.
Whenever a new technology hits the market, it shakes things up, and it can destroy the technology we previously relied on.
That is the essence of tech disruption.
Bitcoin was created over a decade before these AI models and LLMs took the world by storm. Now that the average internet user has access to an advanced chatbot for the price of a cup of coffee, we’ve seen all sorts of theories pop up.
Like:
- Will AI disrupt Bitcoin?
- Can AI steal your Bitcoin?
These questions, or rather theories, come mostly from people who don’t understand how computers work, or from gold bugs desperately holding on to anything that might discredit Bitcoin, even if that means venturing into the realm of science fiction.
While it sounds outlandish to a seasoned holder, the idea of an AI being able to access 12 to 24 words sounds plausible enough, especially to a normie who won’t bother doing the research.
So, Bitcoin is cooked, and AI will steal all of it. It is the FUD of this cycle, along with mining centralisation, but let’s tackle one FUD narrative at a time, shall we?
Can AI guess your keys?
No!
Moving on!
It’s unlikely that an AI will ever be able to muster enough processing power to guess a specific number out of a set containing more numbers than there are atoms in the known universe.
A basic seed phrase of 12 words drawn at random from the 2,048-word BIP-39 list would be practically impossible to guess: at 11 bits per word, that’s 132 bits, of which 4 are a checksum, leaving 2^128 (roughly 3.4 × 10^38) valid combinations.
Now, consider that most of us have moved to 24 words, with many throwing multi-sig into the ring, and any seed-guessing hack is over before it begins.
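The maths is easy to check for yourself. Here is a quick back-of-the-envelope calculation in Python, using nothing beyond the numbers in the BIP-39 spec:

```python
# Back-of-the-envelope: the search space of a BIP-39 seed phrase.
# Each word comes from a fixed list of 2,048 (= 2^11) words, and 1 bit
# in every 33 is a checksum bit, per the BIP-39 spec.

BITS_PER_WORD = 11  # log2(2048)

for words in (12, 24):
    raw_bits = words * BITS_PER_WORD         # 132 bits for 12 words, 264 for 24
    checksum_bits = raw_bits // 33           # 4 and 8 respectively
    entropy_bits = raw_bits - checksum_bits  # 128 and 256 respectively
    print(f"{words} words: 2^{entropy_bits} = {2 ** entropy_bits:.2e} valid phrases")

# 12 words: 2^128 = 3.40e+38 valid phrases
# 24 words: 2^256 = 1.16e+77 valid phrases
```

For scale, the number of valid 24-word phrases is in the same ballpark as the estimated number of atoms in the observable universe.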
There are far easier targets, and far better risk-reward ways to generate income with that kind of computing power.
Can AI steal your keys?
If an AI can’t guess your keys, maybe it can steal them. An LLM can only source information from scrapeable data, such as public websites; it relies on crawler bots that scour the web for new data to feed into its training set.
If you’ve created a seed phrase using a signing device, that combination of words has never touched an internet-connected device. There is no record of them for an LLM to scrape, so you’re safe.
Where it does move into the danger zone is when users leave their seed phrase on a device. This could be a text file on your phone or laptop, a notes app connected to the cloud or the good old screenshot of a seed phrase that is auto-uploaded to your cloud storage service.
Suppose an AI can tap into those devices or cloud accounts; in that case, it’s game over for you. And considering that Microsoft, Apple and Google, the most popular consumer cloud platforms, all have AI products, it’s not outside the realm of possibility that their models could tap into cloud content.
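To see how trivial that exposure is, here is a hypothetical sketch of the kind of pattern matching any scraper could run over synced files; you could just as well run it defensively over your own notes and exports. It assumes the BIP-39 English word list has been saved locally as english.txt, and the file paths are whatever you pass in:

```python
# Hypothetical sketch: flag runs of 12+ consecutive BIP-39 words in text
# files, the same trivial pattern match a scraper could run over synced
# cloud notes. Assumes the BIP-39 English word list is saved as english.txt.
import re
import sys
from pathlib import Path

BIP39 = set(Path("english.txt").read_text().split())

def find_candidate_phrases(text: str, min_words: int = 12) -> list[str]:
    tokens = re.findall(r"[a-z]+", text.lower())
    hits, run = [], []
    for tok in tokens:
        if tok in BIP39:
            run.append(tok)
            continue
        if len(run) >= min_words:
            hits.append(" ".join(run))
        run = []
    if len(run) >= min_words:
        hits.append(" ".join(run))
    return hits

for path in sys.argv[1:]:
    for phrase in find_candidate_phrases(Path(path).read_text(errors="ignore")):
        print(f"{path}: possible seed phrase -> {phrase[:40]}...")
```

If a twenty-line script can find it, so can a model with access to your cloud drive.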
To add insult to injury, the push for on-device AI is on: Apple is pushing Apple Intelligence, Google is pushing Gemini, and who knows what access these AIs have on your device?
- Can it take screenshots without you knowing?
- Can it log keystrokes without you knowing?
While this would be a total invasion of privacy, the rapid advancement of artificial intelligence has pushed developers to take the brakes off, and as AI becomes more sophisticated, who knows if it will abide by the hard limits set for it?
Would I leave a hot wallet on my AI-enabled phone (if I had one) with a sizable amount of Bitcoin?
Probably not, but that’s just me.
AI Is a Threat to Custodial Bitcoin
If we consider the range of Bitcoin custody solutions, your hot wallet and cold wallet setup is far safer than using a third-party custodian, and if anything, AI is a catalyst that should promote the value of self-custody.
Each year, billions of dollars are stolen from exchanges. Blockchain forensics firm Chainalysis expects the trend to continue, with its recent research projecting that transfers to fraudulent addresses could rise to $12.4 billion in 2025.
Generative AI has significantly lowered barriers for scammers, enabling them to create compelling synthetic identities, fake investment schemes, and deepfake-driven scams. The report revealed that 85% of scams involve fully verified accounts that bypass traditional identity verification.
“GenAI is amplifying scams by making fraud more scalable, cost-effective, and harder to detect. It allows criminals to impersonate real users, generate fake content, and orchestrate elaborate investment scams.”
Elad Fouks, head of fraud products at Chainalysis
The Evolution of AI-Enhanced Security Threats
Modern AI systems have transformed the security landscape through their ability to process vast amounts of data, recognise patterns, and adapt to new situations.
While these capabilities drive incredible innovations, they also enhance the sophistication of potential security threats.
Adversarial AI can be used to:
- Analyse blockchain transactions to identify high-value wallets (see the sketch after this list)
- Generate convincing phishing attempts using natural language processing
- Automate social engineering attacks at scale
- Detect patterns in key generation methods
- Create deepfake videos or clone voices for impersonation scams
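The first bullet is worth dwelling on, because the data is already public by design. As a rough illustration, the sketch below asks a local Bitcoin Core node (assuming JSON-RPC is enabled; the credentials and the 10 BTC threshold are placeholders) to flag large outputs in the latest block:

```python
# Sketch: how easily public chain data exposes high-value outputs.
# Assumes a local Bitcoin Core node with JSON-RPC enabled; the
# credentials and threshold below are placeholders.
import requests

RPC_URL = "http://127.0.0.1:8332"
RPC_AUTH = ("rpcuser", "rpcpassword")  # placeholder credentials

def rpc(method: str, *params):
    resp = requests.post(
        RPC_URL, auth=RPC_AUTH,
        json={"jsonrpc": "1.0", "id": "scan", "method": method, "params": list(params)},
    )
    resp.raise_for_status()
    return resp.json()["result"]

tip_hash = rpc("getbestblockhash")
block = rpc("getblock", tip_hash, 2)  # verbosity 2 = full transaction detail

THRESHOLD_BTC = 10  # arbitrary example threshold
for tx in block["tx"]:
    for vout in tx["vout"]:
        if vout["value"] >= THRESHOLD_BTC:
            addr = vout["scriptPubKey"].get("address", "non-standard")
            print(f"{tx['txid']}: {vout['value']} BTC -> {addr}")
```

Nothing here is hacking; it’s just the transparency of the chain. The point is that an adversarial AI can run this sort of analysis continuously, across every block, and correlate it with off-chain data.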
Now we are seeing protective AI released to try and stop the rogue AIs, but do I really have time to worry about all that drama and hope the AI fighting for me wins?
No, so I’m just not going to use an AI-powered device close to anything I do with Bitcoin; it’s just that simple.
Common Attack Vectors and Prevention Strategies
So, how are AI-powered hackers emptying out people’s accounts?
Well, it’s got far less to do with breaking through firewalls, finding exploits in their servers, or running SQL injection scripts, and far more to do with mass-spamming personalised messages to get users to give up their passwords.
Why?
Because the average user is always going to be the easiest point of attack, and with AI you can attack more average users than ever before.
Social Engineering Enhanced by AI
AI language models can generate highly persuasive messages that appear to come from legitimate sources. These include urgent security alerts, fake customer support conversations, or investment opportunities that seem too good to pass up.
To protect yourself:
- Never share private keys or seed phrases with anyone, regardless of how legitimate they appear
- Enable two-factor authentication on all cryptocurrency accounts (see the sketch after this list)
- Verify requests through official channels, not through links in messages
- Be especially wary of time-pressure tactics or unusual reward offers
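That second bullet is less magic than it sounds: a TOTP authenticator app simply derives a rolling code from a shared secret and the current time. Here is a minimal sketch of RFC 6238 using only Python’s standard library; the secret below is a throwaway example, not one to reuse:

```python
# Minimal TOTP (RFC 6238) sketch: derive the rolling 6-digit code an
# authenticator app shows, from a shared secret and the current time.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                # 30-second time step
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # throwaway example secret
```

Because the code changes every 30 seconds and never travels with your password, a phished password alone isn’t enough, which is exactly why scammers now try to phish the codes too.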
Apart from social engineering, which often trends towards romance scams, other methods of using AI to break into users’ accounts include:
AI-driven phishing:
AI analyses publicly available information and even social media to craft highly personalised and convincing phishing emails, increasing the likelihood of victims clicking malicious links or revealing sensitive information. This goes beyond simple personalisation and can mimic writing styles and even predict emotional responses to tailor the attack.
Automated vulnerability discovery and exploitation:
AI algorithms can scan vast networks and systems much faster than humans, identifying vulnerabilities and even automatically developing exploits to take advantage of them. This accelerates the attack process and allows hackers to target a wider range of victims.
Polymorphic malware generation:
AI can create malware that constantly changes its code (polymorphism) to evade detection by traditional antivirus software. The AI can learn what detection methods are being used and adapt the malware in real time.
Deepfake-enhanced social engineering:
AI-generated deepfakes can be used to impersonate trusted individuals, such as executives or colleagues, to manipulate victims into performing actions like transferring money or revealing confidential information. This adds a new level of realism and persuasiveness to social engineering attacks.
Predictive attacks:
AI can analyse data to predict the best time to launch an attack, maximising its impact and minimising the chances of detection. This could involve targeting systems during off-peak hours or exploiting predictable patterns in employee behaviour.
Automated Vulnerability Scanning
AI systems can comb through any published research on weaknesses in wallet software, exchange platforms, and smart contracts.
Counter these threats by:
- Keeping all software updated to the latest version (see the verification sketch after this list)
- Using hardware wallets for significant holdings
- Regularly reviewing authorised applications and revoking unnecessary permissions
- Conducting transactions with your own node as far as possible
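On the first of those counters: an update only helps if the file you install is the genuine one. A simple habit, sketched below, is to compare the SHA-256 hash of your download against the hash the project publishes on its release page (the file name and hash are passed in as arguments; both are placeholders here):

```python
# Sketch: verify a downloaded wallet binary against the project's
# published SHA-256 hash before installing.
# Usage: python check_download.py <downloaded-file> <published-hash>
import hashlib
import sys

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

actual = sha256_of(sys.argv[1])
published = sys.argv[2].lower()

if actual == published:
    print("OK: checksum matches the published hash")
else:
    print(f"MISMATCH: got {actual}, expected {published}. Do not install.")
```

A matching hash rules out a corrupted or swapped file; checking the project’s PGP signature on the hash file goes a step further and guards against a compromised download page.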
Best Practices for Modern Crypto Security
The most secure approach to protecting significant cryptocurrency holdings is cold storage.
When you pick a cold storage solution:
- Use hardware wallets from reputable manufacturers
- Store backup seed phrases in multiple secure, physical locations
- Consider multi-signature setups for institutional-grade security (see the sketch after this list)
- Test recovery procedures regularly with small amounts
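On the multi-sig point: it sounds more exotic than it is. In Bitcoin Core, a 2-of-3 wallet is just an output descriptor, and the sketch below asks a local node to derive the first few receive addresses for one. The xpubs and RPC credentials are placeholders; in a real setup, each xpub would come from a separate signing device:

```python
# Sketch: derive receive addresses for a 2-of-3 multi-sig from an output
# descriptor via Bitcoin Core's JSON-RPC. The xpubs and credentials are
# placeholders; each xpub should come from a separate signing device.
import requests

RPC_URL = "http://127.0.0.1:8332"
RPC_AUTH = ("rpcuser", "rpcpassword")  # placeholder credentials

def rpc(method: str, *params):
    resp = requests.post(
        RPC_URL, auth=RPC_AUTH,
        json={"jsonrpc": "1.0", "id": "ms", "method": method, "params": list(params)},
    )
    resp.raise_for_status()
    return resp.json()["result"]

XPUBS = ["xpubA...", "xpubB...", "xpubC..."]  # placeholder extended public keys
base = f"wsh(sortedmulti(2,{','.join(x + '/0/*' for x in XPUBS)}))"

desc = rpc("getdescriptorinfo", base)["descriptor"]  # appends required checksum
print(rpc("deriveaddresses", desc, [0, 4]))          # first five receive addresses
```

Any two of the three devices can sign a spend, so a thief, human or AI, needs to compromise two separate keys before a single sat moves.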
Transaction Security
When making transactions:
- Double-check all wallet addresses before sending, and use your own node instead of a third-party explorer (see the checksum sketch after this list)
- Use test transactions for large transfers
- Consider transaction privacy tools when appropriate
- Monitor blockchain analytics for suspicious activity
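On double-checking addresses: you aren’t limited to eyeballing characters, because legacy addresses carry a built-in checksum, which is why a single typo almost never yields another valid address. Here is a minimal sketch of that Base58Check verification; modern bech32 (bc1...) addresses use a different checksum scheme, but the principle is the same:

```python
# Minimal sketch: verify the built-in checksum of a legacy (Base58Check)
# Bitcoin address. A single mistyped character fails this check.
import hashlib

ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check_ok(addr: str) -> bool:
    n = 0
    for ch in addr:
        n = n * 58 + ALPHABET.index(ch)  # raises ValueError on invalid characters
    raw = n.to_bytes(25, "big")          # version (1) + hash160 (20) + checksum (4)
    payload, checksum = raw[:-4], raw[-4:]
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4] == checksum

# A well-known example address from the Bitcoin documentation:
print(base58check_ok("1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2"))  # True
```

Your wallet runs this check for you; the reason to still compare the full string, ideally against your own node, is clipboard-hijacking malware that swaps in an attacker’s perfectly valid address.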
Building a Security-First Mindset
The most effective defence against AI-enhanced threats is developing strong security habits:
- Treat all unexpected contacts with scepticism
- Maintain separate wallets for different purposes (trading, long-term storage, etc.)
- Perform regular security audits of your crypto holdings and access methods
- Stay informed about emerging threats and security practices
$2 trillion secure, $900 trillion far less secure
As AI technology evolves, we can expect threats to become more sophisticated. Today, AI empowers hackers by automating attacks, personalising phishing emails, and creating malware that evades traditional security.
These tactics pose a significant risk to custodial services, banks, and fintech companies, as AI-powered attacks can lead to data breaches, financial fraud, and service disruptions.
While Bitcoin held on exchanges is at risk, exchange hacks don’t affect the chain itself, which continues to operate as normal. The same cannot be said for centralised databases and traditional finance security systems.
As AI-powered hackers look for their next payday, they will be looking at Bitcoin and any financial product they can get their hands on.
If you’re worried about the funds stored in the most secure open database in the world, wouldn’t you be worried about every other far less secure financial database, too?
According to Statista, cybercrime losses in 2023 totalled well over $12 billion, a figure that has quadrupled in the last five years, so I’m sure that in 2024 and beyond those numbers will only break new highs off the back of AI.
Skynet is not coming for your Sats
The intersection of AI and Bitcoin security presents both challenges and opportunities. While artificial intelligence can enhance potential threats, understanding these risks only puts a premium on holding your coins in self-custody.
If anything, a $200 signing device looks as cheap as chips when you consider the number of attack vectors it mitigates.
Even after securing your funds, remember that security is not a one-time setup but an ongoing process of learning, adaptation, and vigilance.
The most effective defence is a combination of technical solutions and human awareness; when you’re responsible for your own wealth, you can’t afford to fall asleep on your watch, even for a second, or, at today’s hacking speeds, a microsecond.