Social engineering attacks exploit a personality trait that most people have. And it's about to get a whole lot worse with AI.
Everyone can learn the process of critical thinking. But when someone receives information from others, their brain has to decide whether to trigger that critical thinking process.
For most people, information from others is assumed to be true unless there is some reason to believe otherwise.
For example, if they receive an email purporting to be from their boss, Joe Smith, most people will assume Joe Smith actually sent that email unless there is something off about the email (e.g., Joe is asking them to do something that would normally come from the Finance department rather than Joe).
This means a social engineering attack will be successful with most people so long as the request doesn't have visible red flags.
When you think about it, this is a really low bar, and AI is about to demolish it. AI will help social engineers craft fraudulent written and even voice requests that contain no visible red flags.
A better approach is to attach skepticism proportional to the nature of the requested action. A request for a wire transfer, for example, should be assumed to be illegitimate, even if there are no yellow or red flags in the request, until it is affirmatively determined to be legitimate.
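To make the idea concrete, here is a toy sketch of "skepticism proportional to the requested action" as a policy table. The action names and verification levels are my own illustrative choices, not an established standard; the point is only that high-risk actions default to "illegitimate until verified out of band."

```python
# Verification levels, from weakest to strongest.
NONE = 0            # no extra checks needed
SENDER_CHECK = 1    # confirm the message really came from the claimed sender
OUT_OF_BAND = 2     # confirm via a separate channel (e.g., call a known number)

# Default-deny policy: any action not listed gets the strongest requirement.
# These action names are illustrative assumptions.
REQUIRED_VERIFICATION = {
    "status_update": NONE,
    "meeting_request": SENDER_CHECK,
    "credential_reset": OUT_OF_BAND,
    "wire_transfer": OUT_OF_BAND,
}

def is_allowed(action: str, verification_done: int) -> bool:
    """Allow the request only if the verification performed meets or
    exceeds the level required for this action type."""
    required = REQUIRED_VERIFICATION.get(action, OUT_OF_BAND)
    return verification_done >= required

# A wire transfer stays blocked until verified out of band, even if the
# email itself shows no red flags.
print(is_allowed("wire_transfer", SENDER_CHECK))  # False
print(is_allowed("wire_transfer", OUT_OF_BAND))   # True
print(is_allowed("status_update", NONE))          # True
```

Notice that the check never asks "does the request look suspicious?" The required scrutiny is set by what is being requested, which is exactly what defeats an AI-polished message with no red flags.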
