Abstract
This research presents an image similarity learning method that focuses on
extracting multi-resolution features from images. The proposed method involves a
series of steps: image collection, normalization, image pairing based on visual
judgment and a hash algorithm, and division of the data into training and testing
sets. A network model is then constructed using a deep learning framework, and a
specific objective function and optimizer are designated for similarity learning.
The network model is trained and tested on the prepared data sets. This method
addresses several challenges encountered in conventional image similarity learning,
including limited extraction of feature information, inadequate description of image
features, constraints imposed by data volume during network training, and
susceptibility to overfitting.
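The abstract does not specify which hash algorithm is used for pairing. The sketch below illustrates the pairing step with a simple average hash (aHash) and a Hamming-distance threshold; the hash choice, the threshold value, and the helper names are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of hash-based image pairing, assuming an average hash (aHash);
# the hash algorithm and threshold below are illustrative assumptions.
from itertools import combinations
from PIL import Image
import numpy as np

def average_hash(path, hash_size=8):
    """Return a binary average hash of the image at `path`."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = np.asarray(img, dtype=np.float32)
    return (pixels > pixels.mean()).flatten()

def hamming(h1, h2):
    """Number of differing bits between two binary hashes."""
    return int(np.count_nonzero(h1 != h2))

def pair_images(paths, threshold=10):
    """Label image pairs: 1 = similar (hash distance <= threshold), 0 = dissimilar."""
    hashes = {p: average_hash(p) for p in paths}
    pairs = []
    for a, b in combinations(paths, 2):
        label = 1 if hamming(hashes[a], hashes[b]) <= threshold else 0
        pairs.append((a, b, label))
    return pairs
```

In practice such automatic labels would be combined with the visual judgment mentioned above before the pairs are split into training and testing sets.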
Keywords: Deep learning, Data set division, Image similarity learning, Multi-resolution features, Network model, Overfitting.
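The abstract names an objective function and an optimizer but does not identify them. The following PyTorch sketch shows one plausible setup under stated assumptions: a small convolutional encoder whose feature maps are pooled at several resolutions, a contrastive margin loss over image pairs, and the Adam optimizer. The backbone, loss, hyperparameters, and class names are illustrative, not the paper's implementation.

```python
# Minimal PyTorch sketch of similarity learning with multi-resolution features.
# Backbone, contrastive margin loss, and Adam optimizer are assumptions; the
# paper only states that an objective function and optimizer are designated.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiResolutionEncoder(nn.Module):
    """Pools convolutional feature maps at several resolutions into one embedding."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Pool the same feature map on 1x1, 2x2, and 4x4 grids (spatial-pyramid style).
        self.pools = nn.ModuleList(nn.AdaptiveAvgPool2d(s) for s in (1, 2, 4))
        self.fc = nn.Linear(64 * (1 + 4 + 16), embed_dim)

    def forward(self, x):
        feats = self.conv(x)
        pooled = [p(feats).flatten(1) for p in self.pools]
        return F.normalize(self.fc(torch.cat(pooled, dim=1)), dim=1)

def contrastive_loss(z1, z2, label, margin=1.0):
    """label = 1 for similar pairs, 0 for dissimilar pairs."""
    dist = F.pairwise_distance(z1, z2)
    return (label * dist.pow(2) + (1 - label) * F.relu(margin - dist).pow(2)).mean()

model = MultiResolutionEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on a random batch of image pairs.
x1, x2 = torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(model(x1), model(x2), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```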