Advanced Prompt Injection Techniques in Modern RAG Systems
Francesco Barbato · Target: arXiv / AI Security Conference
Detailed study in progress — taxonomy, attacks, defenses. Stay tuned.
I study the intersection of AI systems and cybersecurity — how language models break under adversarial pressure, how retrieval pipelines leak sensitive data, and how to build defences that hold under real-world conditions. Practitioner research: everything is tested in live environments.
Research statement
"The rapid deployment of large language models in production systems has outpaced our understanding of their failure modes. My research focuses on characterising these failures systematically — and on building practical mitigations that don't require retraining the model."
01
LLM Security
Adversarial prompts, jailbreaks, prompt injection and model extraction in production systems.
02
RAG Architecture
Chunking strategies, hybrid search, eval frameworks and data poisoning in retrieval pipelines.
03
Threat Intelligence
Malware analysis, ATT&CK TTP mapping, SIEM automation and incident response workflows.
04
Secure AI Systems
Privacy-preserving inference, federated learning safeguards and ML supply chain risks.
Research
Academic and technical writing on LLM security, RAG systems and AI red teaming — most works are currently in preparation or under submission. Stay tuned.
Francesco Barbato · Target: arXiv / AI Security Conference
Detailed study in progress — taxonomy, attacks, defenses. Stay tuned.
Francesco Barbato · Target: blog / preprint
Framework + tooling for continuous RAG quality monitoring. Coming soon.
Francesco Barbato · Previously internal — public version planned
Rework in progress for public release.
Security Research
Responsible disclosure activity in AI/ML components — full details & CVE write-ups will be published after coordination completes.
CVE-20XX-XXXXXTBASeveral responsible disclosures in progress — full write-ups coming after coordination.
Full advisory → coming after disclosure coordination
CVE-20XX-YYYYYTBAPatched internally — public advisory planned.
Full advisory → coming after disclosure coordination
Speaking & Workshops
Conference talks, workshops and guest lectures on AI security — upcoming events and slides will appear here when confirmed.
予定 — details will appear here once confirmed.
Details & slides → coming soon
Hands-on session planned — stay tuned.
Details & slides → coming soon
Updated version in preparation for public events.
Details & slides → coming soon
Want to collaborate on research
or review early drafts?
Get in touch
Whether it's a full-stack product, an AI-powered feature or a security audit — I'm open to new projects, collaborations and interesting problems.