Salta al contenuto principale
ResearchIndependent · 2022 – present

6

Papers & Reports

3

CVEs Filed

12

Talks & Workshops

Security & AI
research.

I study the intersection of AI systems and cybersecurity — how language models break under adversarial pressure, how retrieval pipelines leak sensitive data, and how to build defences that hold under real-world conditions. Practitioner research: everything is tested in live environments.

Research statement

"The rapid deployment of large language models in production systems has outpaced our understanding of their failure modes. My research focuses on characterising these failures systematically — and on building practical mitigations that don't require retraining the model."

LLM SecurityPrompt InjectionRAGAdversarial MLSIEMCVE ResearchRed Teaming
Research areas

01

LLM Security

Adversarial prompts, jailbreaks, prompt injection and model extraction in production systems.

02

RAG Architecture

Chunking strategies, hybrid search, eval frameworks and data poisoning in retrieval pipelines.

03

Threat Intelligence

Malware analysis, ATT&CK TTP mapping, SIEM automation and incident response workflows.

04

Secure AI Systems

Privacy-preserving inference, federated learning safeguards and ML supply chain risks.

Research

Papers & Preprints

Academic and technical writing on LLM security, RAG systems and AI red teaming — most works are currently in preparation or under submission. Stay tuned.

P01
In PreparationResearch Report2025–2026

Advanced Prompt Injection Techniques in Modern RAG Systems

Francesco Barbato · Target: arXiv / AI Security Conference

Detailed study in progress — taxonomy, attacks, defenses. Stay tuned.

Prompt InjectionRAGLLM Security
Details coming soon
P02
In PreparationTechnical Paper2025

Production-grade RAG Evaluation at Scale

Francesco Barbato · Target: blog / preprint

Framework + tooling for continuous RAG quality monitoring. Coming soon.

RAGEvaluationLLMOps
Details coming soon
P03
ReworkingResearch Report2024

ML-based Anomaly Detection in Enterprise SIEM

Francesco Barbato · Previously internal — public version planned

Rework in progress for public release.

SIEMAnomaly DetectionSecurity
Details coming soon

Security Research

Vulnerability Disclosures

Responsible disclosure activity in AI/ML components — full details & CVE write-ups will be published after coordination completes.

CVE-20XX-XXXXXTBA
2024–2026

Multiple prompt injection vectors (details soon)

Several responsible disclosures in progress — full write-ups coming after coordination.

Full advisory → coming after disclosure coordination

CVE-20XX-YYYYYTBA
2024

Deserialization & injection issues

Patched internally — public advisory planned.

Full advisory → coming after disclosure coordination

Speaking & Workshops

Talks & Lectures

Conference talks, workshops and guest lectures on AI security — upcoming events and slides will appear here when confirmed.

2025

Defending Modern LLM Applications

予定 — details will appear here once confirmed.

Details & slides → coming soon

2025

Practical AI Red Teaming Workshop

Hands-on session planned — stay tuned.

Details & slides → coming soon

2024

OWASP LLM Top 10 – Practitioner View

Updated version in preparation for public events.

Details & slides → coming soon

Want to collaborate on research
or review early drafts?

Say hello

Get in touch

Let's build something
worth remembering.

Whether it's a full-stack product, an AI-powered feature or a security audit — I'm open to new projects, collaborations and interesting problems.