CV
You can download my full CV.
General Information
| Full Name | Seyedhamidreza Mousavi |
| Location | Västerås, Sweden |
| Languages | English (Professional), Persian (Native) |
| Email | seyedhamidreza.mousavi@mdu.se |
| GitHub | hamidmousavi0 |
| Website | hamidmousavi0.github.io |
Summary
Applied Machine Learning Researcher and Engineer with over 5 years of experience in efficient and robust deep learning. Specialized in model compression, Neural Architecture Search (NAS), robustness, and fault-tolerant learning, with strong hands-on expertise in PyTorch, distributed training, and MLOps. Proven track record of translating research into real-world deployments on GPUs, FPGAs, and microcontrollers for safety-critical and resource-constrained systems.
Education
- 2022 – Present: PhD in Computer Science, Mälardalen University, Västerås, Sweden
  - Research focus on efficient, compact, robust, and reliable deep neural networks.
  - Emphasis on automatic model design (NAS) for safety-critical systems and autonomous driving.
  - Thesis: Efficient Design and Training of Compact and Robust Deep Neural Networks
- 2012 – 2019: MSc & BSc in Computer Engineering, Shahid Bahonar University, Kerman, Iran
  - Master’s thesis on reliability and security analysis of deep learning models.
  - Bachelor’s thesis on ARM7-TDMI implementation on a Xilinx FPGA.
Experience
- 2022 – Present: PhD Researcher – Efficient & Robust Deep Learning, Mälardalen University, Sweden
  - Lead research on efficient and reliable deep learning for edge and embedded systems.
  - Developed one-shot NAS frameworks with knowledge distillation, achieving up to a 60× reduction in search cost.
  - Designed fault-tolerant activation functions that improve resilience to hardware bit-flip errors.
  - Deployed TinyDL models on STM32 microcontrollers using TinyEngine.
  - Mentored junior researchers and contributed to teaching Deep Learning and Embedded Systems courses.
- 2020 – 2022: Machine Learning Researcher (Remote), Simon Fraser University, Canada
  - Contributed to ICCAD 2020 work on aging-aware delay modeling using neural networks.
  - Conducted research on fault injection and adversarial robustness of hardware-protected DNNs.
  - Co-authored publications on stealthy bit-flip attacks and reliability-aware ML models.
Skills
- Machine Learning & AI: Efficient AI, Model Compression, NAS, Pruning, Quantization, Robustness, Fault Tolerance
- Frameworks: PyTorch, HuggingFace, LangGraph, TensorRT
- Distributed & MLOps: DDP, Horovod, Docker, DVC, Git, Google Vertex AI, GCP
- Programming: Python, C/C++, VHDL, LaTeX
- Embedded & Hardware: FPGA (Vivado), STM32, NVIDIA Jetson
- Soft Skills: Problem-solving, simplifying complex systems, cross-disciplinary collaboration, mentoring
Selected Publications
- ProARD: Progressive Adversarial Robustness Distillation, IJCNN 2025
- DASS: Differentiable Architecture Search for Sparse Neural Networks, TECS 2023
- TAS: Ternarized NAS for Resource-Constrained Edge Devices, DATE 2022
- Aadam: Aging-Aware Cell Library Delay Modeling, ICCAD 2020