I am serving as a Lecturer in the Department of Computer Science and Engineering at International Islamic University, Chittagong. I graduated from Chittagong University of Engineering & Technology in June 2024 with a B.Sc. degree in Electrical and Electronic Engineering. My research interests include medical image processing, real-time vision and surveillance in healthcare, deep learning, and transfer learning. Recently, I developed a lightweight encoder-decoder segmentation model for brain tumor MRI scans, and I am currently analyzing the Segment Anything Model (SAM) with the aim of integrating novel features for image processing tasks. I have also worked in Natural Language Processing (NLP) with transformer models such as BERT. I believe that the opportunity to work with leading researchers across ECE, CS, and biomedical disciplines would help me bring innovation to my research fields.
I am looking for a fully funded PhD position as a Graduate Research Assistant or Teaching Assistant in the USA. I am interested in joining the PhD program of an ECE or CS department at a reputable university in Spring 2027 or Fall 2027, with funding support.
The LCG Attention model is a novel deep neural network that fuses an attention-enhanced CNN layer with an attention-enhanced LSTM. The model is validated on historical load data from the Chattogram district as well as public benchmark datasets. Results show that it outperforms several state-of-the-art methods, setting a new benchmark for regional short-term load prediction. Although it is primarily benchmarked on load forecasting, a time-series application, it can be extended to other forecasting tasks and to NLP. The overall methodology involves data preprocessing, feature expansion using time-lag features and statsmodels, feature importance calculation, data splitting, training of the LCG Attention model, and model evaluation.
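The time-lag step of the methodology above converts a raw load series into a supervised learning dataset. The sketch below is a minimal illustration of that idea only; the load values, window size, and function name are hypothetical and do not come from the actual Chattogram dataset or pipeline.

```python
import numpy as np

def make_lag_features(series, n_lags):
    """Build a supervised dataset from a univariate load series.

    Each row contains the previous `n_lags` observations; the target
    is the value that immediately follows the window.
    """
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(series[t - n_lags:t])
        y.append(series[t])
    return np.array(X), np.array(y)

# Hypothetical hourly load values (MW), for illustration only.
load = np.array([210.0, 215.5, 230.1, 228.4, 240.2, 251.0, 247.3, 239.8])
X, y = make_lag_features(load, n_lags=3)
print(X.shape, y.shape)  # (5, 3) (5,)
```

In the full pipeline these lagged columns would be combined with decomposition-based features (e.g. trend and seasonality from statsmodels) before feature-importance filtering and splitting.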
A hybrid deep learning architecture is established, combining a Convolutional Neural Network (CNN) with attention mechanisms (Squeeze-and-Excitation and spatial attention), residual connections, and transfer learning to classify and detect diseases in tomato leaves. The model demonstrates the potential of channel and spatial attention to identify irregularities in leaves, achieving 99.69% accuracy on the Plant Village tomato leaf dataset. This method enables more reliable and simplified solutions that could benefit both producers and agricultural practitioners in the future.
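The Squeeze-and-Excitation (channel attention) mechanism mentioned above can be sketched in a few lines: global-average-pool each channel, pass the pooled vector through a small bottleneck, and use a sigmoid gate to reweight the channels. This is a generic NumPy illustration of the SE idea with random weights, not the trained model from the paper.

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """Channel attention on an (H, W, C) feature map.

    Squeeze: global average pool per channel -> (C,)
    Excite:  bottleneck dense layer (ReLU) then expansion (sigmoid)
    Scale:   reweight each channel of the input map.
    """
    z = feature_map.mean(axis=(0, 1))        # squeeze: (C,)
    s = np.maximum(z @ w1, 0.0)              # bottleneck + ReLU
    s = 1.0 / (1.0 + np.exp(-(s @ w2)))      # sigmoid gate in (0, 1)
    return feature_map * s                   # broadcast over H and W

rng = np.random.default_rng(0)
C, r = 8, 2                                  # channels, reduction ratio
fmap = rng.normal(size=(4, 4, C))
w1 = rng.normal(size=(C, C // r))            # illustrative random weights
w2 = rng.normal(size=(C // r, C))
out = squeeze_excite(fmap, w1, w2)
print(out.shape)  # (4, 4, 8)
```

Because the gate lies in (0, 1), each channel is attenuated rather than amplified, which lets the network suppress uninformative channels; spatial attention applies the same gating idea across the H and W dimensions instead.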
The proposed brain tumor segmentation model is a lightweight, two-stage deep learning framework designed for efficient detection and segmentation of Lower-Grade Gliomas (LGG) from MRI images. The model employs an encoder-decoder architecture inspired by U-Net, using depth-wise convolutions as the backbone for high-efficiency feature extraction. It also integrates attention into the skip connections to filter redundant noise and emphasize tumor-relevant features. Operating through a cascaded pipeline, the system first classifies whether a tumor is present before performing pixel-wise segmentation, significantly reducing unnecessary computational overhead. Experimental results demonstrate state-of-the-art performance, achieving a mean Dice score of 0.95 and a mean IoU of 0.91, while maintaining a significantly lower parameter count than traditional models such as U-Net.
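The Dice and IoU metrics reported above measure the overlap between a predicted binary mask and the ground-truth mask. A minimal sketch of both metrics on toy masks (the 4x4 arrays are illustrative only, not real MRI predictions):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient: 2*|A & B| / (|A| + |B|)."""
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-7):
    """Intersection-over-Union: |A & B| / |A | B|."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# Toy 4x4 binary masks for illustration.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)
target = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]], dtype=bool)
print(round(dice_score(pred, target), 3))  # 0.857
print(round(iou_score(pred, target), 3))   # 0.75
```

Dice is always at least as large as IoU for the same pair of masks, which is why the reported mean Dice (0.95) exceeds the mean IoU (0.91).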
Natural Language Processing (NLP) is one of the trending topics in AI. Text classification, text completion, and sentiment analysis have been carried out for many languages, including English, Bengali, and Spanish. However, there are still languages where the potential of NLP remains unexplored. The Chakma language, spoken by a significant ethnic group in the Chittagong Hill Tracts of Bangladesh, is one such case. This study makes a novel contribution by assessing sentiment classification in the Chakma language for the first time. A Chakma dataset was built by collecting frequently used texts from social media networks and Chakma peers; it contains more than 8000 text samples with three labels: positive, negative, and neutral. To classify this low-resource language effectively, several BERT-based classifiers were fine-tuned and validated using accuracy and confusion matrices. Among the classifiers, bert-base-uncased obtained 0.85 accuracy and 0.46 validation loss, surpassing the others. This study can serve as a starting point for other low-resource ethnic languages that remain unexplored in this domain.
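The evaluation described above reduces to computing a 3x3 confusion matrix and overall accuracy over the three sentiment labels. A minimal sketch of that bookkeeping, using hypothetical label indices rather than the actual Chakma dataset or model outputs:

```python
import numpy as np

LABELS = ["positive", "negative", "neutral"]

def confusion_matrix(y_true, y_pred, n_classes=3):
    """Rows index the true class, columns the predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical predictions (0=positive, 1=negative, 2=neutral).
y_true = [0, 0, 1, 2, 1, 2, 0, 1]
y_pred = [0, 1, 1, 2, 1, 0, 0, 1]
cm = confusion_matrix(y_true, y_pred)
accuracy = np.trace(cm) / cm.sum()  # correct predictions on the diagonal
print(cm)
print(accuracy)  # 0.75
```

The off-diagonal cells show which sentiment pairs the classifier confuses, which is more informative than accuracy alone for an imbalanced low-resource dataset.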