- February 16 2026
- Neha Purkar
Inside the Attacker’s Mind
In May 2021, the Russia-based Conti ransomware gang broke into Ireland’s Health Service Executive (HSE), encrypting key computer systems and forcing hospitals to cancel appointments nationwide, up to 80% in some places. The attackers demanded 20 million dollars in Bitcoin via a dark web portal and leaked patient data when the HSE refused to pay.
Cyber attackers like Conti don’t just chase greed or grudges. Their minds run on deeper psychological currents that cybersecurity still tends to ignore.
How do these people shrug off wrecked lives and billions in damage like it’s just “business”? How do they rationalise inflicting global chaos, data breaches, and financial ruin without any guilt?
To answer that, we need to move beyond firewalls and forensics into frameworks: theories from psychology that explain why some attackers strike impulsively while others stalk relentlessly for years.
Let’s step into their heads and see why they do it.
Self-Determination Theory: Autonomy as a Drug
Self-Determination Theory (SDT), proposed by Deci and Ryan, says people feel most alive when three core needs are met:
- Autonomy (“I choose what I do”)
- Competence (“I’m good at it”)
- Relatedness (“I belong to a group that gets me”)
Hackers twist this into a behavioural engine. For some, slipping past defences and watching a “secure” network crumble delivers a dopamine micro-dose with every bypass. Their behaviour is intrinsically motivated: the challenge itself is the reward.
Then there are the others, whose behaviour is extrinsically “controlled”: driven by payouts, status on forums, and fear of losing rank in their crew. Dark web job markets make cybercrime feel like normal work, supplying:
- Autonomy kicks (“we decide the targets, we set the rules”)
- Competence highs (mocking incident responders in real time)
- Extrinsic payouts (Bitcoin windfalls that validate their identity as elite)
When autonomy, competence, and relatedness are all satisfied inside a criminal subculture, you don’t just get one-off attacks; you get sustained operations.
The “Dark Triad”: Personality Wired for Predation
The “Dark Triad” is the toxic trio of personality traits that keeps turning up in cybercrime research. It includes:
Machiavellianism
Strategic manipulation, long-game scheming, using people as tools. Machiavellians architect operations the way groups like FIN7 do: carefully crafted phishing, research-heavy targeting, and quiet persistence inside networks.
Narcissism
Grandiosity, craving admiration, entitlement, rage when the ego is threatened. Narcissists crave spectacle: they want headlines, attribution, and a spotlight. Headline-grabbing outbreaks like WannaCry, high-profile defacements, and bragging on Telegram or underground forums all function as narcissistic supply.
Psychopathy
Low empathy, shallow affect, impulsivity or calculated cruelty. Psychopaths are the ones who can watch hospitals go dark and feel nothing. They sit at the top of the malicious-intent food chain, focused on their own gain and indifferent to the harm.
Empirical studies show that high Dark Triad scorers persist longer on difficult tasks, especially when they feel success proves their superiority. Applied to hacking, intrusion becomes an ego puzzle: a way to demonstrate they’re smarter than everyone else.
Worse, high psychopathy and Machiavellianism correlate with insider threats. When you don’t really care about colleagues and see organisations as faceless systems, exfiltrating data feels like a clever move, not a betrayal.
Moral Disengagement: How Guilt Gets Switched Off
The psychologist Albert Bandura proposed the theory of moral disengagement, which explains how people commit harm without drowning in guilt.
It’s not that their moral code vanishes; it gets bypassed using cognitive tricks like:
Moral justification
“I’m fighting corrupt corporations” or “They deserve it for being careless.”
Euphemistic labeling
It’s not “extortion”; it’s “business” or “security testing gone wrong.”
Advantageous comparison
“If they had decent security, this wouldn’t have happened.”
Diffusion of responsibility
“I’m just a coder; the admin posts the ransom; the boss decides the targets.”
Dehumanization
Victims become “users” and “records,” not humans anymore.
Minimizing consequences
“It’s just data,” “Insurance will cover it,” “No one really gets hurt.”
Studies with imprisoned cyber offenders show a high reliance on these mechanisms.
In the Conti–HSE case, taunts about the health system’s “pathetic defences” and “your fault for not paying” are textbook attribution of blame and minimization. The narrative isn’t “We attacked a healthcare system”; it’s “They failed their own people.” Group chats amplify this. Shared mockery of victims, in-jokes about “noobs,” and bragging about hauls create “collective effervescence”: group energy that normalises harm. Inside this bubble, cruelty becomes culture.
Rational Choice and Prospect Theory: Crime as a “Good Bet”
Beneath the drama, many attackers run on Rational Choice Theory: people weigh costs and benefits, then act. Cybercrime feels like a skewed equation:
- High perceived payoff (Bitcoin, data)
- Low perceived risk (jurisdictional gaps, weak law enforcement)
- Flexible exit strategies (proxies, crypto mixers)
Prospect Theory from behavioural economics adds another nuance: humans overweight small probabilities when the payoff is huge.
A slim chance at a million-dollar ransom feels better than a boring guaranteed salary in a dead-end job. Add in anonymity and distance, and the perceived risk shrinks further. The result: from the attacker’s perspective, cybercrime isn’t chaos. It’s a calculated, psychologically tilted bet.
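To make that concrete, here is a minimal sketch in Python using the gain-side value and probability-weighting functions from Tversky and Kahneman’s cumulative prospect theory, with their published parameter estimates (alpha ≈ 0.88, gamma ≈ 0.61). The dollar figures are illustrative assumptions: a 1% shot at a $1,000,000 ransom has the same expected value as a guaranteed $10,000, yet the weighted gamble feels roughly three times more attractive.

```python
# Sketch: prospect-theory scoring of a long-shot ransom vs a sure salary.
# Functions follow Tversky & Kahneman (1992), gains only; the dollar
# amounts are illustrative assumptions, not data.

def value(x: float, alpha: float = 0.88) -> float:
    """Subjective value of a gain: sensitivity diminishes as amounts grow."""
    return x ** alpha

def weight(p: float, gamma: float = 0.61) -> float:
    """Decision weight: small probabilities get overweighted."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

gamble = weight(0.01) * value(1_000_000)  # 1% chance at a $1M ransom
sure_thing = value(10_000)                # guaranteed $10k, same expected value

print(f"a 1% chance is felt as roughly {weight(0.01):.1%}")  # ~5.5%
print(f"the gamble feels worth {gamble:,.0f}")               # ~10,500
print(f"the sure thing feels worth {sure_thing:,.0f}")       # ~3,300
```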
Dual-Process Thinking
Daniel Kahneman’s dual-process model gives us another lens. It distinguishes two systems: System 1 is fast, automatic, and impulsive, while System 2 is slow, deliberate, and strategic.
Opportunistic hackers are mostly System 1 operators. They see an easy target like a leaked credential or misconfigured server and feel a flash of temptation. Then they hit fast for a small but immediate reward.
Persistent attackers, by contrast, live in System 2 mode. They watch for weeks: mapping networks, escalating privileges, testing backups. Then they strike at the worst possible time (holidays, quarter-end, crisis periods).
Psychologically, opportunists are more likely to be driven by impulsivity, sensation-seeking, and weak self-control. Persistent actors overlap with traits like high need for cognition, strategic Machiavellianism, and what criminologists call “criminal professionalism.”
The cyber battlefield holds both: the digital smash-and-grab and the long con.
Social Identity and Group Norms: “We” Justify “I”
Cybercrime isn’t only a solo sport played by loners in hoodies. It’s also a social identity.
On dark web forums and private channels, group norms do the heavy lifting:
- Newcomers learn the language: “targets,” “fish,” “logs,” “drops.” The jargon creates emotional distance.
- Status systems (reputation points, “vouches,” rankings) reward technical skill and audacity.
- Ideological wrappers (“hacktivism,” “anti-West,” “revenge for sanctions”) provide ready-made narratives.
Social Identity Theory explains how once you define yourself as part of an “in-group” (elite hackers, a specific crew), you exaggerate differences with the “out-group” (victims, companies, governments). This bias makes it easier to harm outsiders while feeling loyal, even heroic, inside your own circle.
In-group praise replaces moral doubt. The question shifts from “Is this right?” to “Does this boost our status?”
Cognitive Dissonance: When Beliefs and Actions Clash
Most people like to think of themselves as “good.” When their actions clearly hurt others, a psychological tension called cognitive dissonance kicks in: the brain scrambles to align belief and behaviour.
Cyber offenders either change their behaviour (“I should stop doing this”) or change their beliefs (“It’s not really that harmful,” “Everyone would do the same if they could,” “I’m just more honest about my greed”).
Most take the easier, faster path and change their beliefs. Over time, every successful attack that goes unpunished becomes evidence for a new self-story: “I am smart, untouchable, justified.” The original discomfort fades. The moral muscle atrophies.
Cognitive dissonance doesn’t just disappear; it gets resolved in favour of the crime.
Learned Behaviour: From Script Kiddies to Architects
Cybercrime is also learned behaviour. Social Learning Theory says people learn by observing, imitating, and being rewarded. Aspiring attackers observe (tutorials, logs of past breaches, code repositories, Telegram channels), imitate (running scripts, copying malware, reusing playbooks), and get rewarded (the first successful compromise, the first ransom paid, the first “respect” comment from an established actor).
Each positive reinforcement strengthens the behaviour chain. Banter about “easy money,” screenshots of wallets, and success stories turn crime into a replicable lifestyle pattern. Over time, the identity of “script kiddie” can evolve into “operator” and then “architect.”
This is not random talent. It’s conditioned competence embedded in a deviant culture.
Turning Psychology into a Defensive Weapon
If attackers hack systems, we need to hack MINDSETS.
By understanding the psychological engines behind cybercrime, we can design a different kind of defence strategy:
Disrupt autonomy and competence highs
Make intrusion feel frustrating, unpredictable, and unrewarding. Deception technologies (honeypots, fake data, moving-target defences) waste their time and erode the sense of mastery they live on. If every door might be a trap, autonomy shrinks.
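As a toy illustration of the deception idea, here is a minimal honeypot sketch in Python. The port and fake SSH banner are arbitrary choices, and a real deception platform would add interaction depth, alerting, and isolation.

```python
import socket
from datetime import datetime

# Toy honeypot: listen on a port no legitimate service uses, log every
# connection attempt, and present a fake "old and easy" SSH banner.
# Port 2222 and the banner string are illustrative assumptions.
FAKE_BANNER = b"SSH-2.0-OpenSSH_7.4\r\n"

def run_honeypot(host: str = "0.0.0.0", port: int = 2222) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                # Every hit is high-signal: nothing legitimate should be here.
                print(f"{datetime.now().isoformat()} probe from {addr[0]}:{addr[1]}")
                conn.sendall(FAKE_BANNER)

if __name__ == "__main__":
    run_honeypot()
```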
Attack status and reputation
Publicly attributing operations, exposing handles, and documenting failures or arrests hit their core needs. Losing face in front of peers can be as painful as losing money.
Raise perceived risk, not just real risk
Clear cross-border prosecutions, visible sentencing, and widely shared case studies shift the cost–benefit calculation. When the myth of “low risk” breaks, Rational Choice falls apart.
Break moral disengagement narratives
Storytelling matters. Highlight real-world victims: cancelled surgeries, delayed treatments, people whose lives were derailed. Turn “data” back into humans. Make it harder for attackers and sympathisers to hide behind “it’s just numbers.”
Target the pipeline, not only the elite
Many future attackers sit today in grey areas: cheating in games, minor fraud, low-level data theft. Early interventions that reshape norms, provide legitimate autonomy and competence outlets (bug bounties, ethical hacking paths), and challenge “easy money” myths can divert them before the Dark Triad culture locks in.
Use behavioural signals, not just technical ones
Insider-threat programmes should include psychological risk factors: chronic grievance, entitlement, sudden shift to “us vs them” narratives. Combine this with strict privacy and fairness safeguards, and you get a humane but realistic lens on risk.
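As a purely hypothetical sketch of what combining such signals might look like, here is a toy scoring function in Python. The signal names and weights are invented for illustration, not a validated instrument.

```python
from dataclasses import dataclass

# Hypothetical insider-risk scoring: signals and weights are invented for
# illustration. A real programme needs validated instruments plus the
# privacy and fairness safeguards mentioned above.
WEIGHTS = {
    "chronic_grievance": 0.4,
    "entitlement_markers": 0.3,
    "us_vs_them_shift": 0.3,
}

@dataclass
class BehaviouralSignals:
    chronic_grievance: float      # each signal normalised to 0.0-1.0
    entitlement_markers: float
    us_vs_them_shift: float

def risk_score(s: BehaviouralSignals) -> float:
    """Weighted sum of normalised signals; higher means review sooner."""
    return sum(w * getattr(s, name) for name, w in WEIGHTS.items())

print(risk_score(BehaviouralSignals(0.8, 0.2, 0.6)))  # ≈ 0.56
```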
The Real Battlefield
Ransomware isn’t just code, and breaches aren’t just logs. They’re the behavioural output of warped motivation, skewed reward systems, and personalities tuned for exploitation.
We can keep patching and hardening forever and still stay one step behind if we ignore the human operating system behind the keyboard. If we fuse psychology with cybersecurity, we can stop playing only defence and start fighting the mind war.