About #
I’m a master’s student in the Erasmus Mundus CYBERUS program, specializing in software cybersecurity with a focus on AI safety and security. With 4+ years of experience in software development and QA engineering, I’m passionate about advancing AI security through rigorous research and collaboration.
Research Interests #
My current research interest is trustworthy AI: ensuring that systems remain resilient to adversarial attacks. I’m particularly interested in:
- Adversarial Machine Learning: Studying poisoning attacks, model robustness, and defense mechanisms.
- LLM Security: Exploring alignment, prompt injection, and jailbreaking techniques.
- Android Penetration Testing: Assessing the security of mobile applications, from static APK analysis to runtime instrumentation.
Through hands-on labs, I’ve worked on LLM alignment and jailbreaking using greedy coordinate gradient (GCG) optimization (implementing the attack from “Universal and Transferable Adversarial Attacks on Aligned Language Models”), and built adversarially robust malware classifiers for Android APKs.
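The core loop of that attack is a greedy coordinate search: at each step, try swapping tokens of the adversarial suffix and keep the single swap that most reduces the loss. Below is a minimal, self-contained sketch of that idea. The loss here is a toy stand-in (distance to a fixed target sequence) rather than the real objective, which is the model’s negative log-likelihood of an affirmative response, and the full GCG attack uses gradients to rank candidate swaps instead of trying them all:

```python
# Toy greedy coordinate descent over a discrete "suffix" of tokens.
# TARGET and the integer vocabulary are illustrative stand-ins; in the
# real attack the suffix is appended to a prompt and the loss comes
# from a language model.
TARGET = [3, 1, 4, 1, 5]
VOCAB = list(range(10))

def loss(suffix):
    """Stand-in loss: elementwise distance to the target sequence."""
    return sum(abs(a - b) for a, b in zip(suffix, TARGET))

def greedy_coordinate_descent(suffix, iters=20):
    """Each iteration, evaluate every candidate token at every
    position and keep the single swap that lowers the loss most."""
    suffix = list(suffix)
    for _ in range(iters):
        best_loss, best_pos, best_tok = loss(suffix), None, None
        for i in range(len(suffix)):
            for tok in VOCAB:
                cand = suffix[:i] + [tok] + suffix[i + 1:]
                cand_loss = loss(cand)
                if cand_loss < best_loss:
                    best_loss, best_pos, best_tok = cand_loss, i, tok
        if best_pos is None:  # no improving swap -> converged
            break
        suffix[best_pos] = best_tok
    return suffix

result = greedy_coordinate_descent([0, 0, 0, 0, 0])
```

On this separable toy loss the search converges to the target in a handful of iterations; the real difficulty in GCG is that the LLM loss is expensive and non-separable, which is why the paper uses gradient information to shortlist swaps.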
Current Focus #
I’m currently deepening my understanding of transformer architectures and mechanistic interpretability through the ARENA course, while also pentesting deliberately vulnerable mobile applications.
Experience Highlights #
Open Source Cybersecurity: I currently work with the AsyncAPI Initiative implementing security best practices including incident response plans, SBOMs, and GitHub security hardening (MFA, CodeQL, protected branches).
Security Research: At Grenoble LIG Lab, I validated a privacy-preserving authentication protocol using ProVerif and built a DNSSEC-enabled server prototype with KnotDNS, researching FIDO key integration for enhanced security.
Software Security Testing: My QA background across multiple companies (Gotu, Wattics/EnergyCAP, Brrng) sharpened my adversarial thinking and manual vulnerability-assessment skills. I’ve conducted practical penetration tests (XSS, SQLi, CSRF, XXE, command injection) using Burp Suite and ZAP against DVWA and Juice Shop.
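The SQL injection class of bug exercised in labs like DVWA and Juice Shop comes down to concatenating user input into a query string. A minimal sketch using Python’s built-in `sqlite3` (the table, credentials, and payload are made up for illustration):

```python
import sqlite3

# Hypothetical login check against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def vulnerable_login(name, pw):
    # BUG: user input is concatenated straight into the SQL string.
    q = f"SELECT COUNT(*) FROM users WHERE name = '{name}' AND pw = '{pw}'"
    return conn.execute(q).fetchone()[0] > 0

def safe_login(name, pw):
    # FIX: parameterized query; the driver handles escaping.
    q = "SELECT COUNT(*) FROM users WHERE name = ? AND pw = ?"
    return conn.execute(q, (name, pw)).fetchone()[0] > 0

# Classic tautology payload: turns the WHERE clause into
# ... AND pw = '' OR '1'='1', which matches every row.
payload = "' OR '1'='1"
```

Here `vulnerable_login("alice", payload)` succeeds without the password, while `safe_login` only accepts the real credentials; the same tautology trick is what tools like Burp Suite and ZAP automate when probing for injection points.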
Technical Background #
AI Security: Adversarial attacks, model robustness evaluation, LLM security
Security Tools: Burp Suite, Metasploit, Wireshark, Ghidra, ZAP, Frida, ProVerif
Development: Python, Java, C/C++, PyTorch, TensorFlow
Cloud & DevOps: AWS, Docker, Kubernetes, GitHub Actions
What I’m Looking For #
I’m available for a 6-month thesis internship starting February 2026, ideally focused on adversarial machine learning, model robustness, or trustworthy AI evaluation. I’m also actively seeking AI safety and security fellowships to deepen my research contributions.
Get in Touch #
Feel free to reach out if you’re working on AI safety, adversarial ML, or security research—I’m always excited to collaborate and learn from others in the field.
🛠 Skills #
| Programming | Security Tools | DevOps | Documentation | Testing |
|---|---|---|---|---|
| C, C++ | Splunk, Wireshark, OSINT | AWS, Kubernetes, Docker | Technical Writing | Selenium, Puppeteer |
| Java, Kotlin | Bash Scripting | | | Cypress |
| Ruby on Rails, Python | | | | Manual Testing |
✍️ Published Articles and Documentation #
Open Source Contributions #
- Creating a Generator Template
- Generator Tool Introduction
- Installation Guide
- Usage Guide
- AsyncAPI Document
- Template Context
- Generator Version vs Template Version