I am a PhD researcher in Computer Science at Loyola University Chicago, working at the intersection of adversarial machine learning, natural language processing, and social computing. I build AI systems that are technically strong, socially aware, and dependable in high-stakes environments.
My work is driven by a core question: how do we make AI systems that are not only powerful but also safe, fair, and accountable in the real world? I pursue this across three interconnected areas: online safety and social dynamics, adversarial robustness in distributed AI, and reasoning verification in large language models.
PI: Dr. Yasin N. Silva · Department of Computer Science, Loyola University Chicago
Within BullyBlocker, my work focuses on making social media safer for everyone. Using transformer-based NLP models, I build systems that detect and understand online harassment targeting marginalized communities, going beyond simple keyword filtering to capture conversational context, identity-aware signals, and subtle patterns of harm. The broader goal is to develop AI that can proactively identify toxic behavior, de-escalate conflict, and foster healthier online spaces at scale.
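To give a rough flavor of what "conversational context" means in practice, the minimal sketch below scores a reply jointly with its parent comment as a sentence pair, so the classifier sees the exchange rather than the reply in isolation. The model name (the public unitary/toxic-bert checkpoint) and the label index are illustrative stand-ins, not the models we actually train for BullyBlocker.

```python
# A minimal sketch, assuming a HuggingFace-style classifier; the model name
# and label index are stand-ins, not the BullyBlocker models themselves.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "unitary/toxic-bert"  # public toxicity checkpoint, used here for illustration

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

def toxicity_score(parent: str, reply: str) -> float:
    """Score a reply together with its parent comment as a sentence pair,
    so the classifier sees conversational context, not the reply alone."""
    inputs = tokenizer(parent, reply, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # toxic-bert is multi-label; index 0 is assumed to be the "toxic" label.
    return torch.sigmoid(logits)[0, 0].item()

print(toxicity_score("You clearly didn't read the thread.",
                     "People like you shouldn't be posting here."))
```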
PI: Dr. Mohammed Abuhamad · Department of Computer Science, Loyola University Chicago
At AISeC, I work on adversarial robustness in federated learning systems and AI security more broadly. My research examines how distributed AI systems can be attacked through poisoning, model inversion, and evasion, and how to build defenses that are robust, privacy-preserving, and practical at scale.
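To make the poisoning threat concrete, here is a toy example (my own minimal illustration, not a specific published defense) of how a single malicious client can drag a FedAvg-style mean aggregate far off course, while a coordinate-wise median stays near the honest consensus.

```python
# Toy illustration: one poisoned client update skews mean aggregation,
# while coordinate-wise median aggregation blunts the attack.
import numpy as np

rng = np.random.default_rng(seed=0)
honest = rng.normal(loc=0.0, scale=0.1, size=(9, 4))  # 9 honest client updates
poisoned = np.full((1, 4), 50.0)                      # 1 attacker's outsized update
updates = np.vstack([honest, poisoned])

fedavg = updates.mean(axis=0)        # mean is pulled toward the attacker
robust = np.median(updates, axis=0)  # median ignores the single outlier per coordinate

print("FedAvg aggregate :", np.round(fedavg, 3))
print("Median aggregate :", np.round(robust, 3))
```

Real defenses must of course handle colluding attackers and subtler perturbations, but the asymmetry between the two aggregators captures the core intuition.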
PI: Dr. Mohammed Abuhamad · Department of Computer Science, Loyola University Chicago
Within the AI4SE group, I explore how formal methods can be applied to make AI systems more reliable and trustworthy. This includes using specification languages like TLA+ to formally model and verify the behavior of AI-driven systems, ensuring they satisfy correctness properties before deployment. The core idea is to bring the rigor of formal verification into the AI development pipeline: rather than relying solely on empirical testing, we reason mathematically about what a system will and will not do. I work on applying these techniques to verify properties of LLM-based pipelines, agentic systems, and safety-critical software, bridging the gap between theoretical guarantees and practical AI engineering.
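As a loose analogy for what a model checker like TLC does with a TLA+ spec, the toy sketch below exhaustively explores the reachable states of a hypothetical agent pipeline and checks a safety property on every trace. The stages, transition relation, and property are invented purely for illustration.

```python
# Toy sketch of exhaustive state exploration, loosely analogous to model
# checking a TLA+ spec with TLC. The pipeline and property are hypothetical.
from collections import deque

# Transition relation for a made-up review pipeline: an action may only be
# executed after it has been reviewed and approved.
NEXT = {
    "draft": {"reviewed"},
    "reviewed": {"approved", "draft"},  # a reviewer can send work back
    "approved": {"executed"},
    "executed": set(),
}

def safe(trace):
    # Safety property: "executed" never appears in a trace without "approved".
    return "executed" not in trace or "approved" in trace

def check(init="draft"):
    # Breadth-first exploration; pruning on visited stages keeps this toy
    # search finite, which suffices for this small example.
    frontier, seen = deque([(init,)]), set()
    while frontier:
        trace = frontier.popleft()
        if not safe(trace):
            return "violation: " + " -> ".join(trace)
        stage = trace[-1]
        if stage in seen:
            continue
        seen.add(stage)
        for nxt in NEXT[stage]:
            frontier.append(trace + (nxt,))
    return "property holds on all explored traces"

print(check())
```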
Cyberbullying detection, hate speech classification, LLM-based mediation, and understanding social dynamics in online communities.
Robustness of federated learning against poisoning and evasion attacks, privacy-preserving AI, and secure model training.
Evaluating and improving the reliability of large language models through formal verification, chain-of-thought analysis, and alignment research.
Transformer-based models, conversational context modeling, identity-aware NLP, and cross-lingual transfer for social good applications.