Who Does Social Engineering Target And Why?

Explore the rise of social engineering threats. Understand how individuals are manipulated to divulge sensitive information, passwords, and financial details.

While technical security measures like antivirus software, spyware detection, and two-factor authentication are essential first lines of defence, they can’t protect against your organisation’s most vulnerable element: people.

That’s why social engineering penetration testing has become a critical component of modern cybersecurity strategies – helping organisations identify vulnerabilities in human decision-making before attackers exploit them.

What is social engineering?

Social engineering is the psychological manipulation of people into performing actions or divulging confidential information. Unlike technical hacking that exploits software vulnerabilities, social engineering exploits human psychology, emotions, and trust.

These attacks bypass even the most sophisticated security systems by targeting the weakest link in your security chain – human decision-making under pressure, distraction, or emotional manipulation.

Why social engineering works

Attackers don’t need to break through your firewall if they can simply ask for access. Social engineering succeeds because it leverages fundamental human traits:

  • Trust: We want to believe people are honest
  • Helpfulness: We’re conditioned to assist others, especially authority figures
  • Fear: We react quickly to perceived threats without fully thinking through consequences
  • Curiosity: We want to know more, even when we shouldn’t
  • Greed: Promises of rewards or benefits cloud our judgment

Modern social engineering has evolved beyond simple phishing emails. Attackers now use AI-generated deepfake voices, sophisticated impersonation tactics, and multi-channel approaches combining email, phone calls, text messages, and social media to create convincing scenarios.

The six psychological principles of social engineering

Psychologist Robert Cialdini identified six principles of influence that drive human decision-making. Social engineers weaponise these principles to manipulate targets. Understanding these principles helps you recognise when you’re being exploited.

Reciprocity: People who like to give back

The principle: Humans have a deeply ingrained desire to return favours. When someone does something for us, we feel obligated to reciprocate – it’s the “golden rule” in action.

How attackers exploit it:

Social engineers create artificial debts that victims feel compelled to repay. They might:

  • Offer unsolicited help or “free” services that require information in return
  • Send unexpected gifts or vouchers with links requiring personal details to claim
  • Provide “warnings” about security threats, then ask for credentials to “verify” your account
  • Claim you’ve won prizes in contests you never entered, but need bank details to transfer winnings
  • Offer refunds for services you haven’t paid for, requesting payment information to process the return

Real-world example: An attacker sends an email appearing to be from IT support, offering to help “optimise” your computer performance. After you accept their help, they ask for your login credentials to “complete the maintenance” – you feel obligated to provide them since they’ve “helped” you.

How to protect yourself:

  • Verify the identity of anyone offering unsolicited help or compensation
  • Remember that legitimate organisations don’t ask for sensitive information to give you refunds or prizes
  • Be especially cautious when you feel you “owe” someone you’ve just met
  • Question why someone is offering you something for free

If something feels like an obligation you didn’t create, it probably is a manipulation tactic.

Commitment and consistency: People who follow through

The principle: Once we commit to something, we feel psychological pressure to follow through and remain consistent with that decision, even when circumstances change.

How attackers exploit it:

Social engineers get you to make small initial commitments that escalate into larger, more damaging actions:

  • Start with small, reasonable requests, then gradually escalate to sensitive information
  • Get you to agree to help with a “simple task,” which grows into providing access or data
  • Present investment or money-making “opportunities” where backing out feels like admitting you were wrong
  • Create scenarios where you’ve already “started” a process, making you feel you should complete it

Real-world example: A scammer contacts you claiming to be from a recruitment agency with a “perfect job opportunity.” They ask you to fill out a simple application form first (commitment). Then they need your national insurance number for “background checks,” then bank details for “direct deposit setup.” Each step feels like continuing what you’ve already started.

How to protect yourself:

  • Recognise that you can walk away from any interaction, regardless of what you’ve already done
  • Don’t let past actions dictate future decisions – “sunk cost” doesn’t apply to security
  • Be especially cautious of multi-step processes that gradually request more sensitive information
  • Question whether your commitment is to a legitimate person or organisation

Remember: legitimate organisations won’t pressure you to continue something that feels wrong just because you’ve started.

Authority: People who respect power and position

The principle: We’re conditioned to obey authority figures and defer to perceived expertise. This response is so ingrained that we often comply without question.

How attackers exploit it:

Social engineers impersonate authority figures to bypass critical thinking:

  • Fake emails from senior executives requesting urgent wire transfers (CEO fraud/business email compromise)
  • Messages appearing to be from IT departments demanding immediate password changes
  • Calls from “police” or “tax authorities” threatening legal action unless immediate payment is made
  • Texts claiming to be from banks, warning of suspicious activity requiring urgent verification
  • AI-generated deepfake audio mimicking executive voices requesting confidential information
  • Impersonation of regulatory bodies demanding compliance information

Real-world example: You receive an email that appears to be from your CEO, marked urgent, requesting an immediate bank transfer to complete a confidential acquisition. The email matches their writing style and comes from what looks like their address. The urgency and authority override your normal verification procedures.

How to protect yourself:

  • Verify unusual requests through separate communication channels, regardless of apparent urgency
  • Establish verification protocols for sensitive requests (verbal confirmation via known phone numbers)
  • Be suspicious of urgent demands from “authority” figures you don’t regularly communicate with
  • Remember that legitimate authorities (police, tax services, banks) don’t demand immediate payment via email or text
  • Implement dual-authorisation requirements for financial transactions

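The dual-authorisation requirement above can be expressed as a simple rule: no transfer executes until two different people, neither of them the requester, have signed off. Here is a minimal sketch of that rule – the class and field names are illustrative, not a real payment system, which would also authenticate approvers, enforce role separation, and keep an audit log:

```python
class TransferRequest:
    """A payment request that requires two distinct approvers before release.

    Minimal dual-authorisation sketch for illustration only.
    """

    def __init__(self, amount, payee, requested_by):
        self.amount = amount
        self.payee = payee
        self.requested_by = requested_by
        self.approvers = set()  # distinct people who have signed off

    def approve(self, approver):
        # The requester can never approve their own transfer –
        # this is what defeats a single compromised mailbox.
        if approver == self.requested_by:
            raise ValueError("requester cannot self-approve")
        self.approvers.add(approver)

    @property
    def authorised(self):
        # Release only after two *different* people have approved.
        return len(self.approvers) >= 2
```

Even if an attacker perfectly impersonates the CEO, the fraudulent request still has to convince a second, independent approver – which is exactly where out-of-band verification catches it.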
Phishing simulation helps employees practise identifying these authority-based attacks in a safe environment.

Social proof: People who follow the crowd

The principle: When uncertain, humans look to others for guidance. We assume that if many people are doing something, it must be correct or safe.

How attackers exploit it:

Social engineers create false consensus to make malicious actions seem normal and safe:

  • Fake testimonials and reviews making scams appear legitimate
  • Claims that “thousands of people” have already benefited from an offer
  • Notifications that “colleagues have already completed” a fake security update
  • Social media posts showing others winning prizes or making money (often fake accounts)
  • Cryptocurrency and investment scams featuring fake success stories
  • Urgency combined with popularity: “Only 3 spots left! 1,247 people have already signed up!”

Real-world example: A phishing email claims your company is rolling out new security software and states that “85% of your colleagues have already installed it.” It includes a link to download the “security update.” The implication that most people have complied makes you less likely to question the legitimacy.
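One quick sanity check against emails like this is whether the “update” link actually points at the claimed sender’s domain. The sketch below is a crude heuristic for illustration only – real mail filters rely on far more signals (SPF/DKIM/DMARC, sender reputation, lookalike-domain detection) – but it shows the basic idea:

```python
from urllib.parse import urlparse

def link_matches_sender(sender_address: str, link: str) -> bool:
    """Return True if the link's host is the sender's domain or a subdomain of it.

    Illustration only: attackers register lookalike domains precisely
    to defeat naive checks like this one.
    """
    sender_domain = sender_address.rsplit("@", 1)[-1].lower()
    host = (urlparse(link).hostname or "").lower()
    return host == sender_domain or host.endswith("." + sender_domain)

# A link whose host merely *starts* with the sender's domain is suspect:
link_matches_sender("it@example.com", "https://example.com/update")     # True
link_matches_sender("it@example.com", "https://example.com.evil.io/x")  # False
```

Note that `example.com.evil.io` fails the check even though it begins with the legitimate domain – the registrable part of a hostname is read from the right, which is exactly what hurried readers miss.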

Contemporary threat – AI-generated fake personas: Attackers now use AI to create realistic fake social media profiles, complete with generated photos, posts, and interaction histories. These fake personas endorse scams or build trust before attacking.

How to protect yourself:

  • Verify claims independently rather than trusting testimonials or success stories
  • Be suspicious of overwhelming positive feedback with no negative reviews
  • Check official company communications rather than relying on what “everyone else” is doing
  • Research investment opportunities through independent sources
  • Look for verified badges and check business registrations

Just because something appears popular doesn’t make it legitimate.

Liking: People who easily build rapport

The principle: We’re more likely to be influenced by people we like, find attractive, or share commonalities with. Building rapport creates trust.

How attackers exploit it:

Social engineers establish artificial connections to lower your defences:

  • Research targets on social media to find common interests, shared connections, or alma maters
  • Use charming, friendly communication styles in phone calls and emails
  • Mirror your communication patterns and language
  • Express agreement with your opinions to build rapport
  • Reference mutual connections (real or fabricated)
  • Present themselves as helpful, trustworthy individuals who want to assist you
  • Use attractive profile pictures on social media or dating apps to build trust before scams

Real-world example: An attacker calls claiming to be from a software vendor your company uses. They mention they also graduated from your university (information found on LinkedIn), discuss common interests, and build a friendly rapport. After establishing this connection, they ask for system access to “help optimise” your setup. The personal connection makes you less likely to follow security protocols.

Contemporary threat – Romance scams: Attackers build long-term emotional relationships online, sometimes over months, before eventually requesting money for fabricated emergencies.

How to protect yourself:

  • Recognise that professional interactions don’t require personal rapport
  • Don’t let likability override security protocols
  • Be cautious about information you share on social media that can be used to build false rapport
  • Separate personal feelings from business security decisions
  • Verify identities through official channels, regardless of how friendly someone seems

Professional does not mean friendly – maintain appropriate boundaries in security-sensitive situations.

Scarcity and urgency: People who fear missing out 

The principle: We perceive limited-availability items or time-sensitive opportunities as more valuable. The fear of missing out (FOMO) drives hasty decisions without proper evaluation.

How attackers exploit it:

Social engineers create artificial scarcity and urgency to bypass rational decision-making:

  • “Limited time offers” requiring immediate action
  • Account suspension warnings demanding urgent password updates
  • Prize notifications that “expire in 24 hours”
  • Flash sales with countdown timers
  • “Your account will be closed unless you verify immediately”
  • “Last chance” investment opportunities
  • Urgent wire transfer requests from executives “closing a deal”

Real-world example: You receive an email claiming your email account will be suspended in 2 hours due to “unusual activity” unless you verify your credentials immediately. The urgency prevents you from carefully examining the sender’s address or recognising warning signs. You click the link and enter your password to avoid losing access.

How to protect yourself:

  • Pause when you feel rushed – urgency is a red flag
  • Legitimate organisations provide reasonable timeframes for action
  • Verify urgent requests through known contact information, not details provided in the suspicious message
  • Remember that “limited time” offers can wait for verification
  • Set up automatic account alerts to distinguish real security warnings from fake ones

If an offer or threat creates panic, it’s probably designed to manipulate you.
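The pressure tactics in the example above leave linguistic fingerprints. A toy heuristic – with a hypothetical phrase list; real filters combine many more signals – can count urgency cues in a message, and a high count is a cue to slow down and verify:

```python
import re

# Hypothetical cue list for illustration; real systems use far richer models.
URGENCY_PATTERNS = [
    r"\bimmediately\b",
    r"\burgent\b",
    r"within \d+ hours?",
    r"\bsuspended\b",
    r"last chance",
    r"act now",
]

def urgency_score(text: str) -> int:
    """Count distinct urgency cues present in a message."""
    lowered = text.lower()
    return sum(bool(re.search(p, lowered)) for p in URGENCY_PATTERNS)

msg = "Your account will be suspended within 2 hours unless you verify immediately."
urgency_score(msg)  # 3: 'immediately', 'within 2 hours', 'suspended'
```

A keyword count will never catch every attack, but it mirrors the human habit worth building: notice the pressure first, then decide.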

Who is most vulnerable to social engineering?

While these principles affect everyone, certain individuals and roles face elevated risk:

High-value targets

  • Executive and senior management: Attackers target executives through “whaling” attacks because they have access to sensitive information and financial authorisation
  • Finance and accounting staff: Those who process payments and have access to financial systems are prime targets for business email compromise and fraudulent transfer requests
  • IT and system administrators: Individuals with elevated system access are targeted because compromising their credentials provides extensive network access
  • HR personnel: Access to employee personal information, payroll systems, and hiring processes makes HR staff valuable targets

Personality-based vulnerabilities

  • Helpful and accommodating individuals: Those who prioritise being helpful may overlook security protocols to assist someone in apparent need
  • Authority-respecting employees: People who don’t question senior management are vulnerable to CEO fraud and executive impersonation
  • Time-pressured workers: Employees under deadline pressure make hasty decisions without proper verification
  • New employees: Less familiar with company procedures, communication patterns, and verification protocols
  • Remote workers: Isolated from in-person verification and casual conversations that might expose inconsistencies

Protect your organisation from social engineering attacks

Social engineering succeeds because it exploits human nature, not technical vulnerabilities. Understanding the psychological principles attackers use – reciprocity, commitment, authority, social proof, liking, and scarcity – helps you recognise manipulation attempts before they succeed.

No one is immune. Even security professionals fall victim to sophisticated social engineering when attackers push the right psychological buttons. The defence is awareness, verification procedures, and a culture where questioning and verifying are encouraged, not seen as distrustful or unhelpful.

Your employees are either your strongest defence or your weakest link – training and preparation determine which.

OnSecurity’s phishing simulation service provides realistic, safe training that prepares your team to recognise and resist manipulation tactics. Our social engineering penetration testing reveals your actual vulnerability and helps you build effective defences. 

Get an instant quote today and take the first step in strengthening your human security layer.
