Hello! I am Mehdi Ghasemzadeh. I am a master's student in Electrical Engineering (Digital Electronic Systems) at Sharif University of Technology, and I am passionate about Artificial Intelligence (AI) and Computer Vision. My current research concentrates on Computer Vision and on developing vision systems for self-driving cars. I am also experienced in Machine Learning, Deep Learning, Embedded Systems, and Microprocessors. I plan to pursue a Ph.D. in AI and Computer Vision in order to become a proficient researcher in these fields.
My primary research interests are Computer Vision, Machine Learning, Deep Learning, Self-Driving Cars, and Embedded Systems.
2020 - 2023
Sharif University of Technology
Tehran, Iran
GPA: 3.86/4
Supervisor: Dr. Saeed Bagheri Shouraki
2015 - 2020
Shahid Rajaee Teacher Training University
Tehran, Iran
GPA: 3/4
Supervisor: Dr. Mohammad Shams Esfand Abadi
2010 - 2014
17 Shahrivar High School
Tehran, Iran
GPA: 4/4
2025
Heritage Branch, Library & Archives Canada
M. Yazdani, A. Razzaghi, S. Daneshi, A. Afshari, A. Azadnia, M. Ghasemzadeh, S. Yazdani, Practical Guide to Using Advanced Driver Assistance Systems (ADAS), 2025, Heritage Branch, Library & Archives Canada.
2024
Simorgh Aseman Azargan Publishing Institute
M. Ghasemzadeh, S. Hosseini, M. Kalani, and A. Afshari, Driver Safer: Autonomous Driving and Intelligent Driving Systems (using Artificial intelligence and Deep Learning), 2024, Simorgh Aseman Azargan Publishing Institute.
2024
Russian Open Medical Journal
A. Afshari, A. Azadnia, M. Ghasemzadeh, and M. Yazdani, Driver Safer: A Look at How Men and Women Use Advanced Driver Assistance Systems, 2024, Russian Open Medical Journal.
2023
International Conference on Artificial Intelligence and Smart Vehicle
M. Ghasemzadeh and S. B. Shouraki, Semantic Segmentation using Events and Combination of Events and Frames, 2023, Communications in Computer and Information Science (CCIS) Book Series.
Jun 2023 - Feb 2024
Deep Learning Center in Iran | Sharif University of Technology
Tehran, Iran
Designed and graded homework and projects
Jan 2022 - Aug 2022
Sharif University of Technology
Tehran, Iran
Designed and graded homework and projects
Jun 2023 - Feb 2024
Sharif University of Technology
Supervisor: Dr. Saeed Bagheri Shouraki
In this research, we are developing a monocular depth estimation system that combines an event camera with a standard camera, using a combination of CNNs and RNNs such as Conv-LSTM or Conv-GRU.
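As an illustration of the recurrent building block mentioned above, here is a minimal ConvLSTM cell in PyTorch; the channel counts and kernel size are illustrative only, not the project's actual architecture:

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: one convolution produces all four gates."""
    def __init__(self, in_ch, hid_ch, kernel=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel,
                               padding=kernel // 2)

    def forward(self, x, state):
        h, c = state
        # compute input, forget, output, and candidate gates in one pass
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

# toy usage: e.g. concatenated event + image feature maps, one recurrent step
x = torch.randn(1, 8, 32, 32)
cell = ConvLSTMCell(8, 16)
h = c = torch.zeros(1, 16, 32, 32)
h, c = cell(x, (h, c))
print(h.shape)  # torch.Size([1, 16, 32, 32])
```

Running the cell over a sequence of fused event/frame features lets the network accumulate temporal context, which is what motivates Conv-LSTM/Conv-GRU for depth estimation.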
Feb 2021 - Feb 2023
Sharif University of Technology
Supervisor: Dr. Saeed Bagheri Shouraki
In this research, we developed a semantic segmentation model for self-driving cars using an event camera; code and videos of the model are available on the GitHub Page. We also developed a novel semantic segmentation model that combines an event camera with a standard camera (sensor fusion); its code and videos are likewise available on the GitHub Page.
The paper for this research is available here.
The presentation's PDF is available here.
May 2021
Sharif University of Technology
Dr. Mohammad Sharifkhani
This survey discussed the advantages and disadvantages of two types of DC-DC converters. The presentation's PDF is available here.
2019-2020
Shahid Rajaee Teacher Training University
Supervisor: Dr. Mohammad Shams Esfand Abadi
We designed a switching DC-DC buck-boost converter with a 200 W power rating, using an AVR microcontroller as the converter's controller.
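In steady state, an ideal buck-boost converter follows the relation |Vout| = Vin · D / (1 − D), where D is the switching duty cycle. A quick illustrative computation (not the project's firmware):

```python
def buck_boost_vout(vin, duty):
    """Ideal steady-state output magnitude of a buck-boost converter.

    vin: input voltage in volts; duty: switching duty cycle in (0, 1).
    """
    return vin * duty / (1.0 - duty)

print(buck_boost_vout(12.0, 0.5))   # 12.0 -> unity gain at D = 0.5
print(buck_boost_vout(12.0, 0.75))  # 36.0 -> boost mode for D > 0.5
```

The controller's job is essentially to adjust D so the measured output tracks the desired voltage despite load changes.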
Jul 2019 - Feb 2020
Nira System, Tehran, Iran
Microcontroller programming for ARM (e.g., STM32 Cortex-M3) and AVR devices, using peripherals such as I2C, SPI, and UART.
Computer Vision
Semantic Segmentation using an event camera
Event cameras are bio-inspired sensors with outstanding properties compared to frame-based cameras: high dynamic range (about 120 dB vs. 60 dB), low latency, and no motion blur. Event cameras are well suited to challenging scenarios such as vision systems in self-driving cars, and they have been used for high-level computer vision tasks such as semantic segmentation and depth estimation. In this work, we tackled semantic segmentation for self-driving cars using an event camera. We introduce a new event-based semantic segmentation network and evaluate it on the DDD17 dataset and on the Event-Scape dataset, which was produced using the Carla simulator. Code, videos, and more details are available on the Project's GitHub Page.
A video from this project on DDD17 dataset:
Event: Event Camera, Image: Frame Camera, P: Predicted, and GT: Ground Truth
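A common first step when feeding events to a segmentation CNN (not necessarily the exact preprocessing used in this work) is to accumulate the asynchronous (x, y, t, polarity) stream into an image-like tensor:

```python
import numpy as np

def events_to_histogram(events, height, width):
    """Accumulate (x, y, t, polarity) events into a 2-channel tensor:
    channel 0 counts positive-polarity events, channel 1 negative ones."""
    hist = np.zeros((2, height, width), dtype=np.float32)
    for x, y, t, p in events:
        hist[0 if p > 0 else 1, y, x] += 1.0
    return hist

# toy event stream: two positive events at (3, 5), one negative at (7, 2)
events = [(3, 5, 0.01, +1), (3, 5, 0.02, +1), (7, 2, 0.03, -1)]
h = events_to_histogram(events, height=10, width=10)
print(h[0, 5, 3], h[1, 2, 7])  # 2.0 1.0
```

Once events are in this dense form, any standard encoder-decoder segmentation architecture can consume them like an ordinary image.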
Combining images and events data for Semantic Segmentation tasks
Event cameras are bio-inspired sensors with outstanding properties compared to frame-based cameras: high dynamic range (about 120 dB vs. 60 dB), low latency, and no motion blur. Event cameras are well suited to challenging scenarios such as vision systems in self-driving cars, and they have been used for high-level computer vision tasks such as semantic segmentation and depth estimation. In this work, we tackled semantic segmentation for self-driving cars using an event camera. Event-based networks are robust to lighting conditions, but their accuracy is low compared to common frame-based networks. To boost accuracy, we propose a novel event-frame semantic segmentation network that uses both images and events. We also introduce a novel training method (a blurring module); results show that it improves the network's recognition of small and distant objects, and that the network keeps working even when the images suffer from blurring. Code, videos, and more details are available on the Project's GitHub Page.
A video from this project on Event-Scape dataset:
Event: Event Camera, Image: Frame Camera, P: Predicted, and GT: Ground Truth
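The blurring module's exact design is not described above; one plausible sketch of such a training-time augmentation (hypothetical, in NumPy) is to randomly blur the frame branch so the network learns not to over-rely on images:

```python
import numpy as np

def random_blur(image, p=0.5, k=5, rng=None):
    """With probability p, box-blur the input frame (kernel size k) so the
    network is forced to lean on the blur-free event branch during training."""
    rng = rng or np.random.default_rng()
    if rng.random() > p:
        return image
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image, dtype=np.float32)
    for dy in range(k):  # accumulate the k*k shifted copies, then normalize
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (k * k)

img = np.arange(64, dtype=np.float32).reshape(8, 8)
blurred = random_blur(img, p=1.0)  # force the blur path for the demo
print(blurred.shape)  # (8, 8)
```

At test time the augmentation is disabled; its only purpose is to make the fused network degrade gracefully when real frames are blurry.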
Crowd Counting: counting the number of people in images or videos using detection-based methods such as YOLOv5
This model is inspired by yolov5-crowdhuman and counts heads and people in images and videos. The model was evaluated on the Mall Dataset. Code, videos, and more details are available on the Project's GitHub Page.
A video from this project on Mall dataset:
The number of people and detected heads are shown in this video.
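The counting step on top of a detector is straightforward; a hypothetical sketch, assuming the detector returns (x1, y1, x2, y2, confidence, class) tuples:

```python
# hypothetical raw detections from a YOLO-style detector
detections = [
    (10, 10, 40, 60, 0.92, "person"),
    (50, 12, 80, 65, 0.88, "head"),
    (51, 13, 79, 64, 0.35, "head"),  # low confidence, discarded below
]

def count_class(dets, cls, conf_thresh=0.5):
    """Count detections of one class above a confidence threshold."""
    return sum(1 for *_, conf, name in dets if name == cls and conf >= conf_thresh)

print(count_class(detections, "person"), count_class(detections, "head"))  # 1 1
```

In practice the detector also applies non-maximum suppression before this step, so near-duplicate boxes like the third one are usually removed earlier.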
Crowd Counting: counting the number of people in images or videos using density-based methods
This network is used for crowd counting and density map visualization. It is inspired by Encoder-Decoder Based Convolutional Neural Networks with Multi-Scale-Aware Modules for Crowd Counting (SFANet). The model was evaluated on various datasets, and we provide many samples collected from different scenarios to show the network's performance. Code, videos, and more details are available on the Project's GitHub Page.
A video from this project on some famous datasets:
The density map of people's heads and the number of people are shown in this video.
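In density-based methods the predicted count is simply the integral (sum) of the predicted density map, since each head contributes roughly unit mass. A minimal illustration with an idealized map:

```python
import numpy as np

def count_from_density(density_map):
    """Head count = sum over the density map (each head integrates to ~1)."""
    return float(density_map.sum())

# toy density map with two heads; in a real map each head would be a small
# Gaussian blob rather than a single unit spike
dmap = np.zeros((16, 16), dtype=np.float32)
dmap[4, 4] = dmap[10, 12] = 1.0
print(round(count_from_density(dmap)))  # 2
```

This is why density-based networks are trained with per-pixel regression losses rather than classification losses: the map itself carries the count.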
Machine Learning
Blood Pressure Estimation from PPG Signal
This project was the final project of the machine learning course at Sharif University of Technology, and it achieved the highest accuracy among all projects in the class. Random forest regression is used to estimate blood pressure from the PPG signal.
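A minimal sketch of the approach on synthetic data (the real project extracts features such as pulse widths and peak intervals from actual PPG recordings; the features and targets below are stand-ins):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# synthetic stand-in for PPG-derived feature vectors and systolic targets
n = 400
features = rng.normal(size=(n, 6))
systolic = 120 + 10 * features[:, 0] - 5 * features[:, 1] + rng.normal(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(features, systolic, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
mae = np.abs(model.predict(X_te) - y_te).mean()
print(f"MAE: {mae:.1f} mmHg")
```

Random forests are a reasonable fit here because the PPG-to-pressure mapping is nonlinear and the hand-crafted feature set is small.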
Autonomous Vehicles
Producing a dataset for autonomous driving using Carla simulator
Recent advancements in computer graphics allow more realistic rendering of driving environments, and they have enabled self-driving car simulators such as DeepGTA-V and CARLA (Car Learning to Act) to generate large amounts of synthetic data that can complement existing real-world datasets for training autonomous car perception. In this project, a large dataset for autonomous driving was produced; it contains RGB images, events, semantic segmentation labels, depth information, and control commands (steering angle, etc.).
Parallel Programming
Implementing parallel algorithms using CUDA and the Pthread library
This project implements parallel matrix multiplication, a parallel scan algorithm, and a parallel Fast Fourier Transform (FFT) using CUDA, and parallelizes the merge sort algorithm using the Pthread library. Code and more details are available on the Project's GitHub Page.
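The parallel scan can be illustrated with the Hillis-Steele scheme, shown here in NumPy for readability (the project's implementation is in CUDA): each loop iteration below corresponds to one fully parallel kernel step, so an n-element scan takes about log2(n) steps.

```python
import numpy as np

def hillis_steele_scan(a):
    """Inclusive prefix sum via Hillis-Steele: log2(n) data-parallel steps."""
    out = np.asarray(a, dtype=np.int64).copy()
    shift = 1
    while shift < len(out):
        # every element reads its neighbor `shift` positions back, in parallel;
        # the right-hand side is evaluated before assignment, as on a GPU with
        # double buffering
        out[shift:] = out[shift:] + out[:-shift]
        shift *= 2
    return out

print(hillis_steele_scan([3, 1, 7, 0, 4, 1, 6, 3]))
# [ 3  4 11 11 15 16 22 25]
```

A CUDA version would place the array in shared memory and synchronize the block between steps; the data-flow is identical.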
Embedded Systems
Designing a Linux-based Smart Home
This project is a Linux-based smart home that includes cameras for recording and face detection, microphones, light and temperature controllers, and an HTTP server for displaying home information. The system can connect to a database to save and reload data.
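A minimal sketch of the home-information endpoint using only Python's standard library (the sensor names and values are hypothetical, and the actual system's server is not shown here):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HomeInfoHandler(BaseHTTPRequestHandler):
    """Replies to GET requests with current sensor readings as JSON."""
    sensors = {"temperature_c": 22.5, "light_on": True}  # hypothetical state

    def do_GET(self):
        body = json.dumps(self.sensors).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

# demo: serve a single request on an ephemeral port and query it
server = HTTPServer(("127.0.0.1", 0), HomeInfoHandler)
threading.Thread(target=server.handle_request, daemon=True).start()
reply = json.loads(urllib.request.urlopen(
    f"http://127.0.0.1:{server.server_port}/").read())
print(reply)  # {'temperature_c': 22.5, 'light_on': True}
```

In the real system the handler would read live values from the controllers (and the database) instead of a static dictionary, and `serve_forever()` would run in the main loop.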
Fuzzy Systems
Implementing Fuzzy Estimator
This project implements the Takagi-Sugeno model and a fuzzy-based active learning method for estimating a nonlinear function, using Python. Code and more details are available on the Project's GitHub Page.
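A zero-order Takagi-Sugeno estimator can be sketched in a few lines; the Gaussian rule base below is a hypothetical example approximating y = x², not the project's learned rules:

```python
import numpy as np

def ts_estimate(x, rules):
    """Zero-order Takagi-Sugeno estimator: Gaussian memberships weight each
    rule's constant consequent; output is the normalized weighted average.

    rules: list of (center, sigma, consequent) tuples."""
    centers = np.array([c for c, _, _ in rules])
    sigmas = np.array([s for _, s, _ in rules])
    consequents = np.array([y for _, _, y in rules])
    w = np.exp(-0.5 * ((x - centers) / sigmas) ** 2)  # rule firing strengths
    return float(w @ consequents / w.sum())

# three rules roughly covering y = x^2 on [0, 2]
rules = [(0.0, 0.5, 0.0), (1.0, 0.5, 1.0), (2.0, 0.5, 4.0)]
print(round(ts_estimate(1.0, rules), 2))  # 1.21
```

A first-order model would replace each constant consequent with a linear function of x; the weighting and normalization stay the same.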
Programming: C/C++, Python, Matlab, and Shell
Deep Learning: PyTorch, TensorFlow, Keras, and scikit-learn
Common Libraries: OpenCV (C++, Python), NumPy, Pandas, and …
Parallel Programming: CUDA, and Pthread library
Driving Simulation: Carla, AirSim
Embedded Systems: ST and AVR microcontroller programming, Boost.Beast library, and MQTT library
Hardware: Nvidia Jetson, Raspberry Pi, ARM, AVR, Arduino, FPGA
Electronics: Altium Designer, OrCAD PSpice, Multisim, Proteus
Operating Systems: Linux, Windows
Other: Make, CMake, Git, STM32CubeIDE, Xilinx ISE Design Suite
Address
Tehran, Iran
Phone
+98-910-910-8893
ghasemzadehmehdi07@gmail.com
