Zahiri, S. H., Iranpoor, R., Mehrshad, N. (2024). Paying Attention to the Features Extracted from the Image to Person Re-identification. Journal of Electrical and Computer Engineering Innovations (JECEI), 141-148. doi: 10.22061/jecei.2024.10968.752
Paying Attention to the Features Extracted from the Image to Person Re-identification
Journal of Electrical and Computer Engineering Innovations (JECEI)
Department of Electrical Engineering, Faculty of Engineering, University of Birjand, Birjand, Iran.
Received: 11 Tir 1403 (1 July 2024)
Revised: 5 Mehr 1403 (26 September 2024)
Accepted: 19 Mehr 1403 (10 October 2024)
Abstract
Background and Objectives: Person re-identification, recognizing the same individual across non-overlapping camera views, is an important application in computer vision. The task is particularly challenging because of the large number of pedestrians and the variation in their appearance, pose, and environment, and a variety of learning approaches have been employed to address it. A key focus of this research is the balance between speed and accuracy. Recently introduced Transformer-based models have made significant strides in machine vision, but they are demanding in both computation time and input data. This research aims to rebalance such models by reducing the information fed to them, directing attention solely at features already extracted by a convolutional neural network.

Methods: This research integrates convolutional neural network (CNN) and Transformer architectures. A CNN first extracts the salient features of a person from an image; the attention mechanism of a Transformer model then processes these features instead of raw image patches. The primary objective of this work is to improve both the computational speed and the accuracy of Transformer architectures.

Results: The results demonstrate improved performance of the architectures under consistent conditions. On the Market-1501 dataset, the mAP of the downsized Transformer model increased from approximately 30% to around 74% after applying the proposed modifications, and the Rank-1 metric improved from 48% to approximately 89%.

Conclusion: Although the downsized Transformer architecture still has limitations compared to larger Transformer models, it is far more computationally efficient, and applying similar modifications to larger models could likewise have positive effects. Balancing computational cost against detection accuracy remains a relative goal that depends on the application domain and its priorities, and the choice of method may emphasize one aspect over the other.
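The Methods section describes the core design: a CNN produces a compact feature map, and the Transformer's attention operates on those features rather than on raw image patches, which shrinks the token sequence the attention layers must process. The PyTorch sketch below is a minimal illustration of this idea, not the authors' implementation; the backbone choice (ResNet-50), embedding size, layer counts, and classification head are all assumptions (751 is the number of training identities in Market-1501).

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class CNNTransformerReID(nn.Module):
    """Hypothetical CNN + Transformer hybrid: the CNN extracts a feature map,
    and a Transformer encoder applies attention over the resulting tokens."""
    def __init__(self, embed_dim=256, num_heads=4, num_layers=2, num_ids=751):
        super().__init__()
        # ResNet-50 truncated before global pooling, so it outputs a
        # (B, 2048, H, W) feature map; pretrained weights would normally be used.
        backbone = resnet50(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])
        self.proj = nn.Conv2d(2048, embed_dim, kernel_size=1)  # reduce token dim
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.classifier = nn.Linear(embed_dim, num_ids)  # identity head

    def forward(self, x):
        f = self.proj(self.cnn(x))              # (B, D, H, W) feature map
        tokens = f.flatten(2).transpose(1, 2)   # (B, H*W, D): one token per location
        tokens = self.encoder(tokens)           # attention over CNN features only
        embedding = tokens.mean(dim=1)          # pooled re-ID descriptor
        return embedding, self.classifier(embedding)

model = CNNTransformerReID()
emb, logits = model(torch.randn(2, 3, 256, 128))  # typical re-ID input size
```

Because self-attention cost grows quadratically with sequence length, attending over the 8x4 = 32 CNN tokens produced here is far cheaper than attending over the 128 patch tokens a pure vision Transformer with 16x16 patches would create from the same 256x128 image, which is consistent with the speed gains the abstract reports.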
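The Results section reports the two standard re-identification metrics: Rank-1 is the fraction of queries whose top-ranked gallery image has the correct identity, and mAP averages the precision achieved at every true match in the ranked gallery list. The NumPy sketch below shows this standard evaluation under simplifying assumptions; it omits the camera-ID filtering used in the official Market-1501 protocol, and all function and variable names are illustrative.

```python
import numpy as np

def evaluate_reid(query_emb, query_ids, gallery_emb, gallery_ids):
    """Compute Rank-1 accuracy and mAP from cosine-ranked gallery lists.
    All inputs are NumPy arrays; IDs are integer identity labels."""
    # L2-normalize so the dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    sim = q @ g.T
    rank1_hits, aps = [], []
    for i in range(len(query_ids)):
        order = np.argsort(-sim[i])                     # gallery by similarity
        matches = (gallery_ids[order] == query_ids[i]).astype(float)
        hit_positions = np.flatnonzero(matches)
        if hit_positions.size == 0:                     # no true match: skip query
            continue
        rank1_hits.append(matches[0])                   # correct ID at the top?
        # Average precision: precision at each true-match position.
        precisions = (np.arange(hit_positions.size) + 1) / (hit_positions + 1)
        aps.append(precisions.mean())
    return float(np.mean(rank1_hits)), float(np.mean(aps))

# Toy usage with random 128-D embeddings for 10 queries and 100 gallery images.
rng = np.random.default_rng(0)
r1, mAP = evaluate_reid(rng.normal(size=(10, 128)), rng.integers(0, 5, 10),
                        rng.normal(size=(100, 128)), rng.integers(0, 5, 100))
```

Under this definition, the reported jump from roughly 30% to 74% mAP means that, on average, true matches moved much closer to the top of every query's ranked gallery list, not merely that the single best match improved.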