Owing to the outstanding performance of deep network–based hashing for data storage and retrieval in recent years, it has been widely applied to large-scale image retrieval. Most previous approaches, however, have paid little attention to the significant effect of quantization error on the hash model during learning. Furthermore, a saturated loss function may cause very different images to be assigned similar hash codes. The underuse of classification information during training also degrades the retrieval performance of the learned hash codes. In this paper, we propose a novel quantization regularization term with an exponential convergence rate that minimizes the impact of quantization error on the model and accelerates the convergence of the network. To resolve the dilemma caused by saturated loss functions during training, we propose a new sigmoid function whose slope parameter is adjusted automatically according to the number of iterations. To exploit classification information fully, triplet labels and image labels are used in parallel within the same framework by integrating the image labels into the output layer. Experimental results on two standard datasets show that our algorithm outperforms several state-of-the-art hashing methods.
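The two training-side ideas above can be illustrated with a minimal sketch. The slope schedule, the hyperparameter names (`beta0`, `gamma`), and the squared-gap regularizer below are assumptions chosen for illustration, not the paper's exact formulation: the sigmoid's slope grows with the iteration count so gradients do not saturate early, and the penalty pushes continuous outputs toward the binary values {-1, +1} to reduce quantization error.

```python
import numpy as np

def adaptive_sigmoid(x, iteration, beta0=1.0, gamma=1e-3):
    """Sigmoid whose slope grows with the training iteration.

    beta0 and gamma are illustrative hyperparameters: early in training
    the slope is gentle, so gradients do not vanish in the saturated
    regions; as iterations accumulate the function sharpens toward the
    sign function used to binarize hash codes at test time.
    """
    beta = beta0 * (1.0 + gamma * iteration)
    return 1.0 / (1.0 + np.exp(-beta * x))

def quantization_penalty(h):
    """A common form of quantization regularizer: penalize the gap
    between continuous network outputs h and the binary set {-1, +1}.
    The penalty is zero exactly when every entry of h is already binary.
    """
    return np.mean((np.abs(h) - 1.0) ** 2)
```

In such a scheme, the sigmoid behaves almost linearly in early iterations and approaches a step function later, while the regularizer keeps the relaxed codes close to values that survive final binarization.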