#220: Social Engineering for Counter-Adversaries |
|
|
Welcome to another _secpro!
This week, we're poking the brain of CISO expert David Gee to deliver you some insights which line up nicely with his new book: A Day in the Life of a CISO. We've also included our popular PDF resource again, to help you improve your training sessions and help the non-specialists amongst us to make the right moves in the age of AI. Check it out!
|
If you want more, you know what you need to do: sign up to the premium and get access to everything we have on offer. Click the link above to visit our Substack and sign up there! Cheers! Austin Miller Editor-in-Chief |
|
|
Recently, along with a wealth of other industry-critical information and resources, Palo Alto’s Unit 42 published their incident response report concerning social engineering. As an area of practice that has always fascinated me—as more art than science—this immediately grabbed my attention and almost forced me to start taking notes. With this in mind, we as a team are heading out over the next few weeks to dig deeper into social engineering and help you discern the golden kernels that you need to access.
|
Unit 42: PhantomVAI Loader Delivers a Range of Infostealers: Researchers from Unit 42 describe a new loader named PhantomVAI, used to deploy various infostealers (malware that exfiltrates sensitive data). The loader uses techniques like steganography (hiding payload in an image file, e.g. a GIF file or other image) and obfuscated PowerShell to download and load the payload. The embedded data (DLL) is encoded inside images, hiding the payload from simple detection. Once loaded, it communicates with command-and-control servers to pull further stages.
Unit 42: When AI Remembers Too Much – Persistent Behaviors in AI Agents via Indirect Prompt Injection: Shows a proof of concept demonstrating how adversaries can perform indirect prompt injection against AI agents. The technique doesn’t require direct user prompt, but uses external content (webpages, documents, metadata) feeding into the the agent’s memory or long-term memory subsystem. Once instructions are embedded via external content, they persist across sessions, meaning an attacker can embed malicious instructions that get loaded into the agent memory and later used to exfiltrate data, by instructing the agent to leak conversation history or other secrets. The attack is stealthy because it uses external content rather than explicit prompts.
Unit 42: The Golden Scale: Bling Libra and the Evolving Extortion Economy: This threat brief analyzes how extortion actors (including groups using variants like Bling Libra) are evolving. They discuss stolen data, ransom demands, deadlines, leaking stolen credentials or data, and extortion notes targeted at executives. The group is apparently coordinating via Telegram channels, recruiting other actors to send extortion notes (e.g. executive level), focusing on stolen data (Salesforce data) and pressing for payment. They set deadlines (e.g. threat actor set Oct 10, 2025 as a deadline to pay ransom or leak files).
CrowdStrike: Campaign targeting Oracle E‑Business Suite (Oracle EBS) zero-day CVE-2025-61882: CrowdStrike reports on a campaign targeting the zero-day vulnerability CVE-2025-61882 in Oracle E-Business Suite. This is an unauthenticated remote code execution (RCE) vulnerability (i.e. attackers can exploit without prior credentials). Oracle disclosed the vulnerability on 4 October 2025, but CrowdStrike observes that there are indicators of potential or likely exploitation in the wild. They note IOCs, commands, and files from Oracle’s advisory, suggesting real-world exploitation.
Unit 42: 2025 Global Incident Response Report: Social Engineering Edition: A large incident response / threat intelligence report covering social engineering cases from May 2024 to May 2025. Some key findings: social engineering was the top initial access vector in their caseload (~36% of cases). Techniques go beyond phishing to non-phishing vectors (help desk, fake system prompts, help desk manipulation, fake prompts). Attackers exploit trust, identity workflow, help desk resets, compromised accounts, etc. They provide recommendations for defenders: just-in-time provisioning, restricting sensitive workflows, data loss prevention, identity correlation, etc. (Check in next week to read our first steps into unpacking this important analysis!)
|
|
|
Neuromorphic Mimicry Attacks Exploiting Brain-Inspired Computing for Covert Cyber Intrusions: (Hemanth Ravipati) Neuromorphic computing, which mimics the brain’s neural structure in hardware, is increasingly used for efficient AI/edge computing. This paper introduces Neuromorphic Mimicry Attacks (NMAs), a novel class of threats that exploit the probabilistic, non-deterministic behavior of neuromorphic chips. By manipulating synaptic weights or poisoning sensory inputs, attackers can mimic legitimate neural activity, thereby evading standard intrusion detection systems. The work includes a theoretical framework, simulation experiments, and proposals for defenses—e.g. anomaly detection tuned to synaptic behavior, secure synaptic learning. The paper highlights that neuromorphic architectures introduce new cybersecurity risk surfaces.
APT-LLM: Embedding-Based Anomaly Detection of Cyber Advanced Persistent Threats Using Large Language Models: (Sidahmed Benabderrahmane, Petko Valtchev, James Cheney, Talal Rahwan) This paper tackles the hard problem of detecting Advanced Persistent Threats (APTs), which tend to blend into normal system behavior. Their approach, APT-LLM, uses large language models (e.g. BERT, ALBERT, etc.) to embed process–action provenance traces into semantic-rich embeddings. They then use autoencoder models (vanilla, variational, denoising) to learn normal behavior and flag anomalies. Evaluated on highly imbalanced real-world datasets (some with only 0.004% APT-like traces), they demonstrate substantial gains over traditional anomaly detection methods. The core idea is leveraging the representational strength of LLMs for cybersecurity trace analysis.
Precise Anomaly Detection in Behavior Logs Based on LLM Fine-Tuning: (S. Song et al.) Insider threats are notoriously difficult to detect because anomalies in user behavior often blur with benign but unusual actions. This paper proposes converting user behavior logs into natural language narratives, then fine-tuning a large language model with a contrastive learning objective (first at a global behavior level, then refined per user) to distinguish between benign and malicious anomalies. They also propose a fine-grained tracing mechanism to map detected anomalies back to behavioral steps. On the CERT v6.2 dataset, their approach achieves F1 ≈ 0.8941, outperforming various baseline methods. The method aims to reduce information loss in translation of logs to features and improve interpretability.
Exposing the Ghost in the Transformer: Abnormal Detection for Large Language Models via Hidden State Forensics: (Shide Zhou, Kailong Wang, Ling Shi, Haoyu Wang) As LLMs are embedded into real-world systems, they become potential attack targets (jailbreaks, backdoors, adversarial attacks). This work proposes a detection method that inspects internal hidden states (activation patterns) across layers and uses “hidden state forensics” to detect abnormal behaviors in real-time. The approach is claimed to detect a variety of threats (e.g. backdoors, deviations) with >95% accuracy and low overhead. The method operates without needing to retrain or heavily instrument the model, offering a promising path toward monitoring LLM security in deployment.
Robust Anomaly Detection in O-RAN: Leveraging LLMs against Data Manipulation Attacks: (Thusitha Dayaratne, Ngoc Duy Pham, Viet Vo, Shangqi Lai, Sharif Abuadbba, Hajime Suzuki, Xingliang Yuan, Carsten Rudolph) The Open Radio Access Network (O-RAN) architecture, used in 5G, introduces openness and programmability (xApps), but also novel attack vectors. The authors identify a subtle “hypoglyph” attack: injecting Unicode-wise manipulations (e.g. look-alike characters) into data that evade traditional ML-based anomaly detectors. They propose using LLMs (via prompt engineering) to robustly detect anomalies, even in manipulated data, and demonstrate low detection latency (<0.07 s), making it potentially viable for near-real-time use in RAN systems. This work bridges wireless systems and AI-based security in a timely domain.
Generative AI in Cybersecurity: A Comprehensive Review of Future Directions: (M. A. Ferrag et al.) This is a survey/review paper covering the intersection of Generative AI / LLMs and cybersecurity. It synthesizes recent research on how generative models can be used for threat creation (e.g. adversarial attacks, automated phishing, malware synthesis) and defense (e.g. automated patch generation, security policy synthesis, anomaly detection). The paper also outlines open challenges and risks (e.g. misuse, model poisoning, hallucination) and proposes a structured roadmap for future research. As the field is evolving rapidly, this review is becoming a frequently cited reference point.
|
|
|
Copyright (C) 2025 Packt Publishing. All rights reserved. Our mailing address is: Packt Publishing, Grosvenor House, 11 St Paul's Square, Birmingham, West Midlands, B3 1RB, United Kingdom
Want to change how you receive these emails? You can update your preferences or unsubscribe. |
|
|
|