
Caught in a Net
What once lived in spam folders now hides in job offers and social feeds.
Story by Claire Conger
Illustrations by Jack DeKoker

Urgent Action Required: Your Account Has Been Suspended!
Dear Valued Customer,
We regret to inform you that your account has been temporarily suspended due to suspicious activity. To protect your privacy and secure your information, we have placed a hold on your account until we can verify your identity.
Please click the link below to verify your account details and regain access immediately:
Failure to verify your information within the next 24 hours will result in permanent suspension.
In 2006, José Dominguez won the lottery. The news came in a letter from Spain. He might have believed it had he not known a thing or two about scams.
Dominguez was already about a decade into his career managing cybersecurity risks at the University of Oregon. Identifying and taking down scammers is all part of the job.
Catching a phish is all in the details — oddities, subtle inconsistencies and missteps. It’s about piecing together fragments of evidence and digital breadcrumbs that lead to enough proof to expose a fraud.
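One such detail is the sender's domain: scammers often register an address a single character off from the real one. As a rough illustration, and not a tool Dominguez's team actually uses, a few lines of Python can surface lookalike domains by edit distance (the trusted list and sample address below are invented):

```python
# Illustrative sketch: flag a sender domain that is suspiciously close to a
# trusted one, e.g. "uoreg0n.edu" vs. "uoregon.edu". Hypothetical examples.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

TRUSTED = {"uoregon.edu", "handshake.com", "linkedin.com"}  # invented list

def flag_sender(address: str) -> str:
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED:
        return "domain matches a trusted sender"
    for real in TRUSTED:
        if edit_distance(domain, real) <= 2:
            return f"suspicious: {domain!r} is nearly {real!r}"
    return "unknown domain: verify through another channel"

print(flag_sender("payroll@uoreg0n.edu"))  # suspicious: nearly 'uoregon.edu'
```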
The consequences of a digital deception like getting phished aren't just financial; the loss isn't only tens, hundreds or thousands of dollars. It's one's identity, and the crawling feeling that someone was meticulous enough to manipulate trust and imitate it.
Tactics like these are defined as social engineering: malicious manipulation used to trick users into disclosing personal information. These deceptions reach beyond financial loss; they undermine users' sense of security and legitimacy by exploiting their habits, circumstances and weaknesses.
As technology advances, so do the methods phishing attempts use to infiltrate users' lives. Over time, social media platforms have intertwined with professional networks such as LinkedIn, Handshake and Substack, reshaping how people interact online.
Simultaneously, artificial intelligence is becoming more embedded in daily life, bringing both powerful potential and user skepticism.
While the term was coined in the 20th century, artificial intelligence's 21st-century breakthroughs, particularly OpenAI's release of ChatGPT, have sparked widespread attention and marked a turning point for the field. ChatGPT's ability to mimic human language and interaction makes it increasingly difficult for users to distinguish between human-made and artificial creations.
The Pew Research Center's 2023 survey of 11,004 U.S. adults revealed that while many Americans recognize common uses of AI, only 30% could correctly identify the various AI uses mentioned in the survey, including spam emails and customer service chatbots. Cybercriminals can use AI to generate emails or messages, create deepfake audio and video, or scan social media and public data to personalize phishing attempts.
In February of 2024, scammers used deepfake technology to scam a finance worker at a multinational firm out of $25 million by posing as the company's chief financial officer in a fake conference call. Every other person on the call was a deepfake recreation. The worker, who remained anonymous in the original report by CNN, was initially lured into the scam with a suspicious text message. He put his doubts aside when he saw what appeared to be his colleagues in the video call.
This growing awareness gap is particularly concerning as cybercriminals leverage advanced techniques to exploit even the smallest vulnerabilities, making online fraud increasingly sophisticated and hard to detect.
According to Dominguez, phishing is a form of organized crime. It homes in on the details of someone's life and exposes any sliver of vulnerability that makes a fraudulent link more enticing or a deceitful email more trustworthy.
According to the Federal Bureau of Investigation's 2023 Internet Crime Report, the FBI received 3.79 million complaints, with victims suffering $37.4 billion in losses from scams between 2019 and 2023.
Dominguez speculates about why scams are our Achilles' heel.
“Every civilization has some behavior or actions that have destroyed that civilization,” Dominguez said. “For us, for this civilization, it's going to be convenience.”
Convenience, in this regard, refers to the ease of using shorter passwords or skipping extra steps such as two-factor authentication, a process that adds about 30 seconds to logging in.
Kevin Long is the CEO and founder of Social Imposture, an organization that finds, reports and removes fake social network profiles for high-profile clients such as celebrities or politicians. Long has been referred to as a Facebook Bounty Hunter.
After more than a decade in cybersecurity, Long has identified some of the details that users can look for to detect a potential scam and avoid getting phished.
“Be careful who you accept requests from, first of all,” he said. “Second of all, make sure that you have two-factor authentication turned on on your account. That's probably the base, most basic security element to keep your account from being hacked and then turned into a phishing account for other people.”
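The one-time codes behind most two-factor apps follow a public standard, RFC 6238. A minimal sketch using only Python's standard library shows the idea: each code is derived from a shared secret and the current 30-second window, so a stolen password alone isn't enough (the secret below is a placeholder, not a real credential):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA1, 30-second steps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    step = int((time.time() if for_time is None else for_time) // 30)
    msg = struct.pack(">Q", step)              # time step as an 8-byte counter
    mac = hmac.new(key, msg, "sha1").digest()  # HMAC of the counter
    offset = mac[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Placeholder secret for illustration; real secrets are generated at enrollment.
print(totp("JBSWY3DPEHPK3PXP"))  # a 6-digit code valid only for ~30 seconds
```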
Long highlighted the growing trend of excess media consumption, where users spend hours scrolling through content. According to the Global Web Index, the average global user spends about two and a half hours per day on social media.
“When you're just mindlessly scrolling through, you still got to be on alert and on guard, and that's going to be there,” Long said.
Dominguez supports Long’s recommendation of using two-factor authentication and either lengthening passwords or using a password manager to store strong passwords securely. He holds his passwords to a high standard: most are around 50 characters, and he avoids anything shorter than 25.
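The arithmetic behind that habit is straightforward: for a uniformly random password, every extra character multiplies the number of guesses an attacker must make. A back-of-the-envelope sketch, assuming the 94 printable ASCII characters a password manager can draw from:

```python
import math

def entropy_bits(length: int, alphabet: int = 94) -> float:
    """Bits of entropy for a uniformly random password: length * log2(alphabet)."""
    return length * math.log2(alphabet)

for n in (8, 25, 50):
    print(f"{n:>2} chars: about {entropy_bits(n):4.0f} bits of entropy")
# 8 chars: ~52 bits; 25 chars: ~164 bits; 50 chars: ~328 bits
```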
However, even with long passwords, two-factor authentication, and a deep understanding of phishing tactics, Dominguez and Long constantly face new challenges in their work.
AI’s presence has created obstacles in identifying and troubleshooting phishing scams.
“The first level, which is the most obvious, is that the organizations that are building these fake accounts are using AI to build them, and they're able to do it more rapidly than they were able to do before because of that,” Long said.
Some social networks rely on AI to flag fake accounts. Their algorithms can take accounts down automatically, but they lack the human touch of “getting into the nuance” of a post to distinguish authenticity from a phishing attempt.
On these networks, AI follows preset instructions for specific scenarios: when x, y or z happens, take action. If something unexpected occurs, like an imposter account behaving or appearing in an unusual way, the system might not flag it as fake.
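A toy example makes the gap concrete. The rules and thresholds below are invented for illustration, not any platform's actual system; the point is that a preset-rule filter only catches the scenarios its authors anticipated:

```python
# Invented example of preset-rule moderation: every rule is a scenario the
# authors anticipated. An imposter who avoids all three patterns sails through.

RULES = [
    ("mass_messaging", lambda a: a["messages_per_day"] > 500),
    ("brand_new_spam", lambda a: a["account_age_days"] < 2 and a["links_posted"] > 20),
    ("reported_often", lambda a: a["user_reports"] > 50),
]

def flag(account: dict) -> list:
    """Return the names of every preset rule this account trips."""
    return [name for name, rule in RULES if rule(account)]

# A careful imposter: aged account, modest activity, few reports. No rule fires.
imposter = {"messages_per_day": 40, "account_age_days": 90,
            "links_posted": 5, "user_reports": 3}
print(flag(imposter) or "not flagged")  # -> not flagged
```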
In 2025, Meta announced that it was ending its long-standing fact-checking program.
The complexity in identifying fake accounts is only one layer of the challenge — another is the psychological tactics used to exploit people.
Some scams succeed because the victim wants to avoid additional hardship: a parking ticket, having their password reset, or having a job offer expire if they don’t fill out a form in time.
Dominguez highlighted the use of urgency in these scams. “It really talks to you about the psychology of the individuals that send these messages and their victims,” he said. “If you are struggling financially, and you get an email, or a voice call, or anything that says that your life will get even harder if you don't reach back out, there's a good chance you will react.”
University of Oregon junior Avery Wachowiak experienced this firsthand.
In 2023, Wachowiak needed a job. She turned to Handshake, a popular hiring platform, and found one: a remote position working for a UO professor, offering around $500 per month.
Wachowiak got the job. Soon after, she received her first assignment via text. The assignment was due by the end of the weekend, and she spent hours attempting to complete it by the deadline.
“[The scams] keep trying to find ways to trick the users into chasing something. And the way to chase it is, by saying things like, ‘I have a great job for you,’” Dominguez said. “That happens a lot with the students trying to figure out how to make some money while attending school. And you get these great opportunities. This is one of those examples where if it is too good to be true, it's probably a lie.”
One of the first steps of a scam, Dominguez says, is isolating the victim — like a pride of lions separating its prey from the herd.
Wachowiak posted her new position on LinkedIn. That’s when the truth started to reveal itself.
“The actual professor reached out to me and was like, ‘Hey, I'm actually on sabbatical right now. I think you got phished because I'm not hiring,’” she said. “It was really frustrating because it was at a time when I really needed a job and I finally got a job, but yeah, then it was fake.”
After discovering the truth, Wachowiak contacted UO's phishing-report email address, where students and other UO affiliates can report scams.
“We develop a lot of tools. One of those is a thing we call Phish Tank,” Dominguez said. “The Phish Tank basically has specimens of every phishing message that we see around the university.”
In the Phish Tank, users can browse through actual phishing attempts, including the sender’s email and the contents of the message. “DON’T MISS OUT ON FLEXIBLE OPPORTUNITY,” “Free Items Available” and “CONTACT THE BARRISTER OFFICE NOW” are a few examples of the flashy subject lines found in the Phish Tank.
Alongside the measurable damage of being caught in a scam, there is also an emotional toll.
“I don't know. It made me feel kind of dumb because, how did this happen to me?” Wachowiak said.
Eric Howald, the assistant director of issues management for the University of Oregon, emphasizes the importance of erasing stigma around being a victim of a scam.
“There's no shame in getting taken in by this. There are people who are organizing to take things from you,” Howald said. “They were engineering a way to do this.”
Wachowiak is certainly not alone in this experience. “Even the federal government gets hacked,” Dominguez said, laughing.
As technology dependence grows, understanding its impact is no longer optional — it’s critical for protecting autonomy.
New platforms, tools and methods of manipulation will continue to emerge. The power of the media lies not just in what is created, but in how it is consumed.
“It still bugs me to this day that I don't know who did it,” Wachowiak said. Stories like hers prove that anyone can get phished, but as the line between fact and fiction blurs, media literacy and critical thinking become more essential than ever.