Research Focus
My work sits at the intersection of machine learning, algorithmic fairness, and large language models. I design experimental frameworks and evaluation pipelines to study when LLMs can reliably follow fairness objectives in ranking and re-ranking tasks—especially for structured tabular decision data.
- Fairness-aware ranking and re-ranking (exposure, representation, utility trade-offs)
- LLM prompt design, sensitivity analysis, and cross-model stability
- Reproducible experimentation pipelines (Python-based evaluation workflows)
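The exposure and utility trade-offs mentioned above can be made concrete with a small sketch. This is an illustrative computation, not code from any specific project: it measures the gap in position-discounted exposure between two groups in a ranking, using the common 1/log2(rank + 1) discount. All names are assumptions.

```python
import math

def exposure_by_group(ranking, groups):
    """Mean position-discounted exposure per group.

    ranking: list of item ids, best first.
    groups: dict mapping item id -> group label.
    Rank i (1-indexed) contributes exposure 1 / log2(i + 1).
    """
    totals, counts = {}, {}
    for i, item in enumerate(ranking, start=1):
        g = groups[item]
        totals[g] = totals.get(g, 0.0) + 1.0 / math.log2(i + 1)
        counts[g] = counts.get(g, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

def exposure_gap(ranking, groups):
    """Absolute difference in mean exposure; assumes exactly two groups."""
    a, b = exposure_by_group(ranking, groups).values()
    return abs(a - b)
```

For example, a ranking that places all of one group on top yields a larger gap than an interleaved ranking of the same items, which is the tension a fairness-aware re-ranker tries to resolve without sacrificing utility.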
Selected Research
Fairness-aware ranking and LLM evaluation
Can Large Language Models Rank Tabular Data Fairly?
An empirical study investigating whether LLMs can perform fairness-aware re-ranking while preserving ranking utility in real-world tabular decision settings.
- Structured serialization for tabular ranking inputs
- Modular prompting with explicit fairness objectives and sensitive attributes
- Evaluation across multiple LLM families and prompt variants
- Analysis of prompt sensitivity and stability across models
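As a rough illustration of the first two bullets, the sketch below serializes tabular rows into attribute: value strings and assembles a re-ranking prompt with an explicit fairness objective. The field names, template wording, and function names are illustrative assumptions, not the study's actual serialization format.

```python
def serialize_row(row, sensitive_attrs):
    """Render one tabular record as a compact 'attribute: value' string,
    listing sensitive attributes last so the prompt can reference them."""
    ordinary = [(k, v) for k, v in row.items() if k not in sensitive_attrs]
    sensitive = [(k, v) for k, v in row.items() if k in sensitive_attrs]
    return "; ".join(f"{k}: {v}" for k, v in ordinary + sensitive)

def build_ranking_prompt(rows, sensitive_attrs, objective):
    """Assemble a re-ranking prompt with an explicit fairness objective."""
    lines = [f"[{i}] {serialize_row(r, sensitive_attrs)}"
             for i, r in enumerate(rows, start=1)]
    return (
        f"Fairness objective: {objective}\n"
        "Re-rank the candidates below and return their indices, best first.\n"
        + "\n".join(lines)
    )
```

Keeping serialization and objective wording in separate functions makes it straightforward to swap in prompt variants for the sensitivity analysis described above.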
Hidden or Inferred: Fair Learning-To-Rank with Unknown Demographics
Studies how errors in demographic inference affect fairness performance when sensitive attributes are unavailable, highlighting risks of relying on inferred demographic signals in ranking systems.
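A toy simulation, with illustrative numbers rather than the paper's experimental design, shows the effect this study examines: when inferred group labels are wrong with some probability, a measured parity statistic attenuates toward zero even though the true disparity is unchanged.

```python
import random

def parity_gap(selected, groups):
    """Difference in selection rate between groups 'A' and 'B'."""
    rate = {}
    for g in ("A", "B"):
        members = [i for i, gg in enumerate(groups) if gg == g]
        rate[g] = sum(selected[i] for i in members) / len(members)
    return rate["A"] - rate["B"]

def noisy_groups(groups, error_rate, rng):
    """Flip each inferred label with probability error_rate."""
    flip = {"A": "B", "B": "A"}
    return [flip[g] if rng.random() < error_rate else g for g in groups]

rng = random.Random(0)
n = 10_000
groups = ["A"] * (n // 2) + ["B"] * (n // 2)
# Group A is selected at 60%, group B at 40%: true gap is about 0.2.
selected = [int(rng.random() < (0.6 if g == "A" else 0.4)) for g in groups]
true_gap = parity_gap(selected, groups)
# With 25% inference error, the measured gap shrinks toward zero,
# understating the disparity the system would need to correct.
inferred_gap = parity_gap(selected, noisy_groups(groups, 0.25, rng))
```

Under symmetric label noise at rate e, the measured gap attenuates by roughly a factor of (1 - 2e), which is why relying on inferred demographics can mask real unfairness.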
Ongoing Research
Exploring advanced AI methodologies across model optimization, autonomous systems, and interpretability.
- LLM Finetuning: Developing efficient fine-tuning techniques for domain-specific applications
- Agentic AI: Building autonomous AI systems with goal-directed reasoning and decision-making
- Evolutionary Algorithms: Applying evolutionary computation to optimize AI model architectures and hyperparameters
- Explainable AI: Creating interpretable models and explanation methods for transparent AI decision-making
Technical Strengths
AI research + practical ML engineering
Machine Learning & AI
Building and evaluating ranking systems with focus on fairness metrics, experimental design, ablation studies, and reproducible ML pipelines.
Large Language Models
Designing robust prompts, evaluating in-context learning, testing structured input formats, and measuring cross-model stability for research applications.
Research Engineering
End-to-end ML experiment pipelines, data processing workflows, API development, and interactive analysis for iterative research.
Work Experience
Technical roles combining engineering and problem-solving
Technical Support Engineer — Microsoft 365
Tek Experts
T4 Trainer and First Point of Contact for Frontline Engineers
- Delivered enterprise-level support for Microsoft 365 and Exchange Server deployment and integration, using SCCM, Intune, Group Policy, and the Office Deployment Tool for large-scale configuration, activation, and mail-service management.
- Trained and mentored junior engineers in debugging techniques, system architecture, and structured problem-solving to strengthen technical proficiency and independent troubleshooting skills.
- Automated internal workflows, improving ticket triage speed and response accuracy.
Software Developer
DiamondScripts Ltd.
Backend and Android Developer
- Built Android and web apps for retail and logistics clients.
- Designed and deployed backend services using PHP and MySQL for admin dashboards.
Freelance Software Developer
Full-Stack and AI Application Developer
- Delivered Android, web, and AI-based projects for global clients, integrating REST APIs and ML models.
- Developed and maintained websites including continentalgeneticsltd.com.
Contact
Open to research and applied ML roles
Email: sma.olulana@gmail.com
LinkedIn: Oluseun Olulana
GitHub: sewen007 •
Google Scholar: Oluseun Olulana
Note: This site highlights academic and research work.