CN107133640A - Image classification method based on local image patch descriptors and Fisher vectors - Google Patents
- Tue Sep 05 2017
Info
-
Publication number
- CN107133640A (application CN201710269907.1A)
Authority
- CN (China)
Prior art keywords
- image, feature, data set, test data
Prior art date
- 2017-04-24
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/192—Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
- G06V30/194—References adjustable by an adaptive method, e.g. learning
Abstract
The invention discloses an image classification method based on local image patch descriptors and Fisher vectors. The steps are, in order: construct patch-based feature descriptors; model the descriptors of the training data set with a Gaussian mixture model; generate Fisher vectors for the training and test image sets; build a Gaussian scale-space pyramid to capture multi-scale spatial information of each image; compute the feature sets of the training and test data sets; select features from these sets with a mutual-information method; and train a classifier to perform image classification. The invention captures image information accurately, improves classification accuracy, and can be used to build large-scale image classification and retrieval systems.
Description
Technical field
The invention belongs to the technical fields of machine learning and computer vision, and relates to an image classification method.
Background
With the development of multimedia technology, image classification has become a central research topic in computer vision. Image classification assigns an image to one of several preset categories according to its attributes. Representing images effectively is the key to high classification accuracy, and the selection and extraction of features remain difficult open problems. With the rapid development of the mobile Internet, human society has entered the era of big data. Traditional hand-crafted features such as SIFT and HOG can extract certain image properties and have achieved good results in image classification, but such manually designed features have inherent shortcomings. In recent years, deep learning models, especially convolutional neural networks (CNNs), have made breakthroughs in feature extraction, yet CNNs require tuning a large number of parameters and incur enormous computational cost. Computational cost has thus become a core issue in object detection and image classification.
Aggregating local features is widely used in 2D image classification and retrieval tasks. Many feature aggregation algorithms have been proposed; the Bag-of-Visual-Words (BOW) model is among the most widely used. In brief, the model extracts a set of local features from each image, quantizes these local features into discrete visual words, and finally represents the image as a compact histogram. Although the BOW model performs well, it is limited in two respects: (1) it does not take spatial position into account, and (2) it relies only on zero-order feature statistics.
Summary of the invention
To solve the technical problems described in the background above, the present invention provides an image classification method based on local image patch descriptors and Fisher vectors, which overcomes the shortcomings of existing image classification methods, reduces computational cost, and improves classification accuracy.
To achieve the above technical objective, the technical scheme of the present invention is as follows.
An image classification method based on local image patch descriptors and Fisher vectors comprises the following steps:
(1) Select a training data set and a test data set, and construct local-image-patch-based feature descriptors for both sets;
(2) Cluster the feature descriptors of the training data set with a Gaussian mixture model to obtain the model parameters;
(3) Generate a Fisher vector for each image in the training and test data sets;
(4) Build a Gaussian scale-space pyramid for each image in the training and test data sets;
(5) Compute the Fisher vector of every image in the Gaussian pyramid of each image in both sets, forming the feature sets of the training and test data sets;
(6) Perform feature selection on the feature sets of the training and test data sets with the mutual-information method;
(7) Train a classifier on the selected feature values of the training data set, and feed the selected feature values of the test data set into the trained classifier to classify the images.
Further, the specific steps of step (1) are as follows:
(11) Select a training data set and a test data set;
(12) Extract T image patches from each image in both sets by random sampling;
(13) Apply brightness normalization, contrast normalization, and whitening preprocessing to the extracted patches, and convert them into pixel-value column-vector feature descriptors, forming the descriptors of the training data set and of the test data set respectively;
(14) Apply PCA dimensionality reduction to the descriptors of both sets.
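The preprocessing in steps (12)-(14) can be sketched as follows. This is a minimal illustration under assumed parameters (patch size 8, ZCA-style whitening, a small PCA dimension); the helper names, patch counts, and random images are hypothetical stand-ins, not taken from the patent.

```python
import numpy as np
from sklearn.decomposition import PCA

def extract_patches(image, num_patches=200, patch_size=8, rng=None):
    """Randomly sample square patches and flatten each to a row vector."""
    rng = rng or np.random.default_rng(0)
    h, w = image.shape[:2]
    patches = []
    for _ in range(num_patches):
        q = rng.integers(0, h - patch_size)
        y = rng.integers(0, w - patch_size)
        patches.append(image[q:q + patch_size, y:y + patch_size].ravel())
    return np.array(patches, dtype=np.float64)

def normalize_and_whiten(patches, eps=1e-2):
    """Per-patch brightness/contrast normalization followed by ZCA whitening."""
    patches = patches - patches.mean(axis=1, keepdims=True)                # brightness
    patches = patches / np.sqrt(patches.var(axis=1, keepdims=True) + eps)  # contrast
    cov = np.cov(patches, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    zca = evecs @ np.diag(1.0 / np.sqrt(evals + eps)) @ evecs.T
    return patches @ zca

# Assemble descriptors for a few stand-in 96x96 images, then reduce with PCA.
images = [np.random.rand(96, 96) for _ in range(5)]
descs = np.vstack([normalize_and_whiten(extract_patches(im)) for im in images])
pca = PCA(n_components=32).fit(descs)
reduced = pca.transform(descs)
print(reduced.shape)  # (1000, 32)
```

The patent's embodiment uses 1000 patches of size 8*8 per image; the counts here are kept small only to keep the sketch fast.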
Further, the specific steps of step (2) are as follows:
(21) Denote the Gaussian mixture model with K Gaussian components as p(x|θ) = Σ_{i=1}^{K} π_i p_i(x), where p_i(x) is the i-th Gaussian component over the D-dimensional feature descriptor x, and π_i, μ_i, Σ_i are the weight, mean, and covariance matrix of the i-th component;
(22) Estimate all parameters θ = {π_i, μ_i, Σ_i, i = 1, …, K} of the Gaussian mixture model from the set C of training feature descriptors by constructing the probability formula for C:
L(C|θ) = Σ_{s=1}^{S} log Σ_{i=1}^{K} π_i p_i(x_s | μ_i, Σ_i),
where S is the number of feature descriptors in C;
(23) Estimate the parameters of the Gaussian mixture model with the expectation-maximization algorithm, using the probability formula in step (22).
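Steps (21)-(23) correspond to standard EM fitting of a Gaussian mixture. A minimal sketch with scikit-learn, assuming diagonal covariances and synthetic stand-in descriptors (the embodiment fixes K = 256; a small K is used here purely for illustration):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for the set C of PCA-reduced training descriptors (S rows, D dims).
C = np.vstack([rng.normal(loc=m, scale=0.5, size=(500, 16))
               for m in (-2.0, 0.0, 2.0)])

# EM maximizes L(C|theta) = sum_s log sum_i pi_i * p_i(x_s | mu_i, Sigma_i).
gmm = GaussianMixture(n_components=3, covariance_type="diag",
                      max_iter=200, random_state=0).fit(C)

print(round(gmm.weights_.sum(), 6))  # mixture weights pi_i sum to 1.0
```

`weights_`, `means_`, and `covariances_` then hold the estimated π_i, μ_i, Σ_i used in the following steps.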
Further, the specific steps of step (3) are as follows:
(31) Using the Gaussian mixture model parameters, compute the gradient vector of each feature descriptor in the training and test data sets with respect to the i-th Gaussian component;
(32) Compute the Fisher information matrix F_i of the probability density function p_i of the i-th Gaussian component;
(33) Apply the Cholesky decomposition to the information matrix F_i to obtain the Cholesky factor, multiply this factor by the gradient vector obtained in step (31), and obtain the Fisher vector F_0 of each image in the data set, thus forming the Fisher vector sets of the training and test data sets.
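For a diagonal-covariance GMM, the Fisher information matrix admits a closed-form diagonal approximation, so the normalization of step (33) reduces to a per-component scaling. The sketch below follows the usual Fisher-vector formulation (mean and variance gradients) under that standard assumption; it is not necessarily the patent's exact derivation, and the fitted GMM and descriptors are synthetic stand-ins.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(X, gmm):
    """Fisher vector of descriptor set X under a diagonal-covariance GMM.

    Concatenates the normalized gradients w.r.t. means and variances,
    giving a 2*K*D-dimensional encoding.
    """
    T, D = X.shape
    pi, mu, var = gmm.weights_, gmm.means_, gmm.covariances_   # (K,), (K,D), (K,D)
    gamma = gmm.predict_proba(X)                               # (T,K) posteriors
    diff = (X[:, None, :] - mu[None, :, :]) / np.sqrt(var)[None, :, :]
    g_mu = (gamma[:, :, None] * diff).sum(0) / (T * np.sqrt(pi)[:, None])
    g_var = (gamma[:, :, None] * (diff ** 2 - 1)).sum(0) / (T * np.sqrt(2 * pi)[:, None])
    return np.hstack([g_mu.ravel(), g_var.ravel()])

rng = np.random.default_rng(0)
gmm = GaussianMixture(3, covariance_type="diag",
                      random_state=0).fit(rng.normal(size=(600, 8)))
fv = fisher_vector(rng.normal(size=(100, 8)), gmm)
print(fv.shape)  # (48,) = 2 * K * D
```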
Further, the specific steps of step (4) are as follows:
(41) Generate the scale space of an image I in the data set with a Gaussian convolution: L(q, y, σ) = G(q, y, σ) * I(q, y), where G(q, y, σ) is a variable-scale Gaussian kernel, "*" denotes convolution, (q, y) are spatial coordinates on image I, σ is the scale coordinate, and k is the smoothing coefficient; varying the smoothing coefficient k produces multiple images, which form the layers of the first octave of the pyramid of image I;
(42) Downsample the third-from-last layer of the first octave obtained in step (41) by a factor of 2 to obtain the first layer of the second octave; varying the smoothing coefficient k again produces the layers of the second octave;
(43) Repeat steps (41)-(42) to obtain O octaves in total, each containing S layers, i.e. O×S images altogether; these images form the Gaussian scale-space pyramid of image I.
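Steps (41)-(43) can be sketched with a standard Gaussian filter. The octave/layer counts and the downsampling of the third-from-last layer follow the text, while the sigma0 and k values are illustrative choices (the embodiment later suggests O = 3, S = 4):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(image, octaves=3, layers=4, sigma0=1.6, k=2 ** 0.25):
    """Build an O-octave, S-layer Gaussian scale-space pyramid.

    Each octave blurs its base image with increasing sigma; the next octave
    starts from a 2x-downsampled copy of the previous octave's
    third-from-last layer.
    """
    pyramid = []
    base = image.astype(np.float64)
    for _ in range(octaves):
        octave = [gaussian_filter(base, sigma0 * (k ** s)) for s in range(layers)]
        pyramid.append(octave)
        base = octave[-3][::2, ::2]   # downsample the third-from-last layer
    return pyramid

pyr = gaussian_pyramid(np.random.rand(96, 96))
print(len(pyr), len(pyr[0]))  # 3 4  -> 3 octaves of 4 layers each
```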
Further, the specific steps of step (5) are as follows:
(51) Compute the Fisher vector F_i, i = 1, 2, …, O×S, of each image in the Gaussian pyramid of image I, and concatenate these O×S Fisher vectors with the vector F_0 obtained in step (33) to form the feature representation F = [F_0, F_1, …, F_{O×S}] of image I;
(52) Apply step (51) to every image in the training and test data sets to obtain the feature sets of both.
Further, the specific steps of step (6) are as follows:
(71) Transpose the feature sets of the training and test data sets; denote the labels of the training samples l_1 and those of the test samples l_2, where l_1 and l_2 record the class of each image in the respective sets;
(72) Build the mutual-information model between each feature dimension of the training feature set and the labels l_1:
I(x_{:i}, l_1) = H(l_1) + H(x_{:i}) - H(x_{:i}, l_1),
where H(x) is the discrete entropy of a variable x, x_{:i} is the i-th feature dimension of the training feature set, and H(x_{:i}, l_1) is the joint entropy of x_{:i} and l_1;
(73) Compute the discrete entropy H(x) with the 1-bit quantization method: H(x) = -Σ_j p_j log_2(p_j), where p_j is the probability that x falls into the j-th discrete bin, the two bins being obtained by thresholding x at zero (x > 0 or not). From the entropy values and the mutual-information model, compute the mutual information between each feature dimension of the training feature set and the labels l_1;
(74) Sort the mutual-information values in descending order, select the top d features, and record their indices index;
(75) Use the indices index obtained in step (74) to select the same d features from the test feature set.
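A compact sketch of the mutual-information ranking in steps (71)-(75), using sign-based 1-bit quantization and the identity I(x; l) = H(l) + H(x) - H(x, l). The synthetic features, the injected informative dimension, and the value of d are illustrative assumptions:

```python
import numpy as np

def entropy(values):
    """Discrete entropy in bits of an integer-coded variable."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def mutual_info_1bit(features, labels):
    """I(x_i; l) per feature dimension after 1-bit quantization of x_i."""
    bits = (features > 0).astype(int)        # 1-bit quantization per dimension
    h_l = entropy(labels)
    mi = np.empty(features.shape[1])
    for i in range(features.shape[1]):
        # Encode (bit, label) pairs as single integers for the joint entropy.
        joint = bits[:, i] * (labels.max() + 1) + labels
        mi[i] = h_l + entropy(bits[:, i]) - entropy(joint)
    return mi

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=500)
X = rng.normal(size=(500, 50))
X[:, 0] += labels                            # dimension 0 carries label information
mi = mutual_info_1bit(X, labels)
d = 5
index = np.argsort(mi)[::-1][:d]             # indices of the top-d features
print(index.shape)  # (5,)
```

Selecting the recorded `index` columns from the test features then mirrors step (75).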
The beneficial effects of the above technical scheme are as follows:
(1) The invention builds local feature descriptors directly from local image patch vectors, effectively reducing computational cost and complexity;
(2) The invention builds a Gaussian pyramid based on image scale-space theory. The scale-space approach places traditional single-scale image processing within a dynamic analysis framework of continuously varying scales, making it easier to capture the essential features of an image. Moreover, the images in the scale space become progressively more blurred, simulating how a target forms on the retina as the observer's distance to it grows from near to far;
(3) The invention replaces the traditional BOW with the Fisher vector and applies the mutual-information method to select among the high-dimensional features of the final data set, achieving the desired classification results.
Brief description of the drawings
Figure 1 is the basic flow chart of the present invention.
Detailed description
The technical scheme of the present invention is described in detail below with reference to the accompanying drawing.
The embodiment uses the STL-10 database as an example. The database contains 10 classes of RGB images, each of size 96*96. There are 5000 training samples for supervised training in total, divided into ten folds; each fold uses 1000 training samples for supervised training, and there are 8000 test samples.
The image classification method based on local image patch descriptors and Fisher vectors is shown in Figure 1; the specific steps are as follows.
Step 1: construct descriptors based on local image patches.
(1a) Divide the image data set to be classified into class-wise training and test data sets, with 1000 training images and 8000 test images;
(1b) Extract 1000 image patches of size 8*8 from each image in the training and test sets by random sampling;
(1c) Apply brightness and contrast normalization and whitening preprocessing to the collected patches, and convert them into pixel-value column-vector feature descriptors, forming the descriptors of the training set and of the test set respectively;
(1d) Apply PCA dimensionality reduction to the descriptors of step (1c), obtaining the final sets of training and test descriptors, denoted C and E respectively.
Step 2: model the feature descriptors of the training set with a Gaussian mixture model. Cluster the training descriptor set C with a Gaussian mixture model to obtain the parameters θ = {π_i, μ_i, Σ_i, i = 1, …, 256}, where the number of Gaussian components is 256 and π_i, μ_i, Σ_i are the weight, mean, and covariance matrix of the i-th component:
(2a) Denote the Gaussian mixture model with 256 Gaussian components as p(x|θ) = Σ_{i=1}^{256} π_i p_i(x), where p_i(x) is the i-th Gaussian component and D is the dimensionality of the feature descriptor x;
(2b) Estimate all parameters θ = {π_i, μ_i, Σ_i, i = 1, …, 256} from the training descriptor set C; the probability formula for C is:
L(C|θ) = Σ_{s=1}^{S} log Σ_{i=1}^{256} π_i p_i(x_s | μ_i, Σ_i),
where S, the number of feature descriptors in C, is 1000;
(2c) Estimate the parameters of the Gaussian mixture model with the expectation-maximization algorithm, using the probability formula in step (2b).
Step 3: generate the Fisher vector of each image in the training and test sets.
(3a) Let X = {x_t, t = 1, …, 1000} be the set of feature descriptors of an image I in the data set, and let p_θ be the probability density function modeling the generation of X. Using the Gaussian mixture parameters θ, compute the gradient vector of each descriptor in X with respect to the i-th Gaussian component;
(3b) Compute the Fisher information matrix F_i of the probability density function p_i;
(3c) Apply the Cholesky decomposition to the information matrix F_i to obtain the Cholesky factor, and multiply it by the gradient vector obtained in step (3a) to obtain the Fisher vector F_0 of image I;
(3d) Compute the Fisher vectors of all images in the data set, forming the Fisher vector sets of the training and test sets.
Step 4: build the Gaussian scale-space pyramid.
(4a) Generate the scale space of image I with a Gaussian convolution: L(q, y, σ) = G(q, y, σ) * I(q, y), where G(q, y, σ) is a variable-scale Gaussian function, (q, y) are spatial coordinates, and σ is the scale coordinate. Varying the smoothing coefficient through σ, kσ, k^2σ, … produces multiple images, which form the layers of the first octave O_1 of the pyramid;
(4b) Downsample the third-from-last layer of the octave in step (4a) by a factor of 2 to obtain the first layer of the second octave; varying the smoothing coefficient through 2σ, 2kσ, 2k^2σ, … produces the layers of the second octave O_2;
(4c) Repeat steps (4a) and (4b) to obtain O octaves of S layers each, O*S images in total, which form the Gaussian scale-space pyramid. Preferably O = 3 and S = 4; letting k = 2^{1/S}, the scale of layer s in octave o is σ = σ_0 · 2^o · k^s, taking σ_0 = 1.6 · 2^{1/S}.
Step 5: compute the final feature descriptions of the training and test sets.
(5a) Compute the Fisher vector F_j, j = 1, …, 12, of each image in the Gaussian pyramid of image I, and concatenate the 12 Fisher vectors with the vector F_0 obtained in step (3c) to form the final feature representation F = [F_0, F_1, …, F_12] of image I;
(5b) Compute the final feature representation of each image in the training and test sets, forming the final feature description sets trainfeature and testfeature.
Step 6: select features from the final data set features with the mutual-information method. For the final high-dimensional feature vectors of step (5b), use the importance-ranking-based mutual-information method for feature selection. The specific implementation steps are as follows:
(6a) Transpose the final feature description sets trainfeature and testfeature, and denote the training and test sample labels l_1 and l_2 respectively. The importance of each feature dimension of the training feature set is its mutual information with the labels, computed as I(x_{:i}, l_1) = H(l_1) + H(x_{:i}) - H(x_{:i}, l_1), where x_{:i} is the i-th feature dimension of trainfeature and H is the entropy of a random variable;
(6b) Use the 1-bit quantization method instead of estimating a probability density function. 1-bit quantization maps the real number x into one of two discrete bins by thresholding it at zero (x > 0 or not). The discrete entropies H(x_{:i}) and H(x_{:i}, l_1) are then computed with the formula H(x) = -Σ_j p_j log_2(p_j), where p_j is the probability that x falls into the j-th bin, j = 1, 2;
(6c) Compute the mutual-information value I(x_{:i}, l_1) of each feature dimension of trainfeature, sort the values, select the top d features (preferably d = 10000), and record their indices index;
(6d) Following steps (6b) and (6c), compute the mutual-information value I(x'_{:i}, l_2) of each feature dimension of testfeature, where x'_{:i} is the i-th dimension of testfeature, and select the top d features according to the indices index obtained in step (6c).
Step 7: train the SVM classifier with the feature values of the training samples selected in step 6, and feed the feature values of the test samples into the trained SVM classifier to classify the images.
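Step 7 maps onto a standard linear SVM. A sketch with scikit-learn, using 10 classes as in STL-10; the synthetic feature generator, sample sizes, and C value are illustrative assumptions standing in for the selected Fisher-vector features:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_split(n, n_classes=10, dim=40, signal=4.0):
    """Synthetic stand-in for d selected features with class-dependent structure."""
    y = rng.integers(0, n_classes, size=n)
    X = rng.normal(size=(n, dim))
    X[np.arange(n), y] += signal   # boost one class-indicative coordinate
    return X, y

X_train, y_train = make_split(500)
X_test, y_test = make_split(200)

# Train on the selected training features, then classify the test features.
clf = LinearSVC(C=1.0, max_iter=5000).fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(round(acc, 2))
```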
The embodiment only illustrates the technical idea of the present invention and does not limit the scope of protection of the present invention; any modification made on the basis of the technical scheme in accordance with the technical idea proposed by the present invention falls within the scope of protection of the present invention.
Claims (7)
1. An image classification method based on local image patch descriptors and Fisher vectors, characterized by comprising the following steps:
(1) selecting a training data set and a test data set, and constructing local-image-patch-based feature descriptors for the two data sets respectively;
(2) clustering the feature descriptors of the training data set with a Gaussian mixture model to obtain the Gaussian mixture model parameters;
(3) generating a Fisher vector for each image in the training data set and the test data set;
(4) building a Gaussian scale-space pyramid for each image in the training data set and the test data set;
(5) computing the Fisher vector of each image in the Gaussian pyramid of every image in the training data set and the test data set, thereby forming the feature sets of the training data set and the test data set;
(6) performing feature selection on the feature sets of the training data set and the test data set with the mutual-information method;
(7) training a classifier with the selected feature values of the training data set, and inputting the selected feature values of the test data set into the trained classifier to classify the images.
2. The image classification method based on local image patch descriptors and Fisher vectors according to claim 1, characterized in that the specific steps of step (1) are:
(11) selecting a training data set and a test data set;
(12) extracting T image patches from each image in the training data set and the test data set by random sampling;
(13) applying brightness normalization, contrast normalization, and whitening preprocessing to the extracted patches, and converting them into pixel-value column-vector feature descriptors, forming the feature descriptors of the training data set and of the test data set respectively;
(14) applying PCA dimensionality reduction to the feature descriptors of the training data set and of the test data set.
3. The image classification method based on local image patch descriptors and Fisher vectors according to claim 1, characterized in that step (2) comprises the following steps:
(21) denoting the Gaussian mixture model with K Gaussian components as

$$p(x) = \sum_{i=1}^{K} \pi_i \, p_i(x),$$

where $p_i(x)$ denotes the i-th Gaussian component,

$$p_i(x \mid \mu_i, \Sigma_i) = \frac{1}{(2\pi)^{D/2} |\Sigma_i|^{1/2}} \exp\!\left( -\frac{1}{2} (x - \mu_i)^\top \Sigma_i^{-1} (x - \mu_i) \right),$$

D denotes the dimension of the feature descriptor x, and $\pi_i$, $\mu_i$, $\Sigma_i$ denote the weight, mean, and covariance matrix of the i-th Gaussian component, respectively;
(22) estimating all parameters $\theta = \{\pi_i, \mu_i, \Sigma_i,\ i = 1 \ldots K\}$ of the Gaussian mixture model from the set C formed by the feature descriptors of the training data set, with the log-likelihood of set C constructed as

$$L(C \mid \theta) = \sum_{s=1}^{S} \log \sum_{i=1}^{K} \pi_i \, p_i(x_s \mid \mu_i, \Sigma_i),$$

where S is the number of feature descriptors in set C;
(23) estimating the parameters of the Gaussian mixture model with the expectation-maximization (EM) algorithm, according to the log-likelihood in step (22).
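Steps (21)-(23) amount to maximum-likelihood fitting of a GMM by EM. A sketch using scikit-learn's `GaussianMixture` on a toy descriptor set; the component count K, the diagonal-covariance choice, and the synthetic data are all illustrative:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy descriptor set C: S = 400 descriptors of dimension D = 8, two clusters.
C = np.vstack([rng.normal(-2.0, 0.5, (200, 8)),
               rng.normal(+2.0, 0.5, (200, 8))])

K = 2
gmm = GaussianMixture(n_components=K, covariance_type='diag',
                      random_state=0).fit(C)   # EM estimation of theta

pi, mu = gmm.weights_, gmm.means_              # pi_i and mu_i (Sigma_i in gmm.covariances_)
log_likelihood = gmm.score(C) * len(C)         # L(C | theta) from step (22)
print(pi.shape, mu.shape)
```

`score` returns the mean per-sample log-likelihood, so multiplying by the number of descriptors recovers the summed log-likelihood of step (22).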
4. The image classification method based on local image patch descriptors and Fisher vectors according to claim 3, characterized in that step (3) comprises the following steps:
(31) using the Gaussian mixture model parameters, computing the gradient vector of each feature descriptor in the training data set and the test data set with respect to the i-th Gaussian component;
(32) computing the Fisher information matrix of the probability density function $p_i$ of the i-th Gaussian component;
(33) applying Cholesky decomposition to the information matrix $F_i$ to obtain the Cholesky factor, and multiplying this factor by the gradient vectors obtained in step (31) to obtain the Fisher vector $F_0$ of each image in the data set, thereby forming the Fisher vector sets of the training data set and of the test data set.
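Steps (31)-(33) can be sketched under the common diagonal-covariance simplification, where the Fisher-information normalization has a closed form (the Perronnin-style mean and variance gradients) instead of an explicit Cholesky multiplication; this is a standard approximation, not necessarily the patent's exact derivation:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(X, gmm):
    """Fisher vector of descriptor set X under a diagonal-covariance GMM:
    per-component mean and variance gradients, normalized in closed form."""
    S, D = X.shape
    q = gmm.predict_proba(X)                   # soft assignments, shape (S, K)
    pi, mu, var = gmm.weights_, gmm.means_, gmm.covariances_
    diff = (X[:, None, :] - mu[None]) / np.sqrt(var)[None]             # (S, K, D)
    g_mu = (q[..., None] * diff).sum(0) / (S * np.sqrt(pi)[:, None])
    g_var = (q[..., None] * (diff**2 - 1.0)).sum(0) / (S * np.sqrt(2.0 * pi)[:, None])
    return np.concatenate([g_mu.ravel(), g_var.ravel()])               # length 2*K*D

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                  # toy descriptors of one image
gmm = GaussianMixture(n_components=4, covariance_type='diag',
                      random_state=0).fit(X)
fv = fisher_vector(X, gmm)
print(fv.shape)
```

One such vector per image gives the Fisher vector sets of the training and test data sets.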
5. The image classification method based on local image patch descriptors and Fisher vectors according to claim 4, characterized in that step (4) comprises the following steps:
(41) generating the scale space of each image I in the data set with a Gaussian convolution kernel: $L(q, y, \sigma) = G(q, y, \sigma) * I(q, y)$, where

$$G(q, y, \sigma) = \frac{1}{2\pi\sigma^2} \exp\!\left( -\frac{q^2 + y^2}{2\sigma^2} \right)$$

is the variable-scale Gaussian kernel, "*" denotes convolution, (q, y) are the spatial coordinates on image I, σ is the scale coordinate, and k is the smoothing factor; varying the smoothing factor k yields multiple images, which constitute the layers of the first octave of the pyramid of image I;
(42) downsampling the third-from-last layer of the first octave obtained in step (41) by a scale factor of 2 to obtain the first layer of the second octave, and varying the smoothing factor k to obtain the remaining layers of the second octave;
(43) repeating steps (41)-(42) to obtain O octaves, each octave containing S layers, O × S images in total; these images constitute the Gaussian scale-space pyramid of image I.
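Steps (41)-(43) build a SIFT-style Gaussian scale-space pyramid. A sketch using SciPy's Gaussian filter; the octave count, layer count, and base scale `sigma0` are illustrative values, not fixed by the claim:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(I, num_octaves=3, layers=4, sigma0=1.6):
    """Steps (41)-(43): octaves of progressively smoothed images; each new
    octave starts from the third-from-last layer of the previous octave,
    downsampled by a factor of 2 (step (42))."""
    k = 2.0 ** (1.0 / (layers - 1))          # smoothing factor between layers
    pyramid = []
    base = I.astype(float)
    for _ in range(num_octaves):
        octave = [gaussian_filter(base, sigma0 * k**s) for s in range(layers)]
        pyramid.append(octave)
        base = octave[-3][::2, ::2]          # downsample third-from-last layer
    return pyramid

I = np.random.default_rng(0).random((128, 128))
pyr = gaussian_pyramid(I)
print(len(pyr), len(pyr[0]), pyr[1][0].shape)   # O octaves, S layers each
```

The O × S blurred images returned here are the inputs to the per-layer Fisher-vector computation of claim 6.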
6. The image classification method based on local image patch descriptors and Fisher vectors according to claim 5, characterized in that step (5) comprises the following steps:
(51) computing the Fisher vector $F_i$, i = 1, 2, ..., O × S, of each image in the Gaussian scale-space pyramid of image I, and concatenating these O × S Fisher vectors with the vector $F_0$ obtained in step (33) to form the feature representation of image I: $F = [F_0, F_1, \ldots, F_{O \times S}]$;
(52) applying the operation of step (51) to each image in the training data set and the test data set, to obtain the feature sets of the training data set and of the test data set.
7. The image classification method based on local image patch descriptors and Fisher vectors according to claim 1, characterized in that step (6) comprises the following steps:
(71) transposing the feature set of the training data set and the feature set of the test data set, and setting the training sample labels to $l_1$ and the test sample labels to $l_2$, where $l_1$ and $l_2$ hold one group of values per image class in the training samples and test samples respectively;
(72) building a mutual information model between each feature dimension in the training feature set and the label $l_1$:

$$I(x_{:i}; l_1) = H(l_1) + H(x_{:i}) - H(x_{:i}, l_1),$$

where H(x) is the discrete entropy of variable x, $x_{:i}$ is the i-th feature dimension in the training feature set, and $H(x_{:i}, l_1)$ is the joint discrete entropy of $x_{:i}$ and $l_1$;
(73) solving the discrete entropy H(x) with a 1-bit quantization method:

$$H(x) = -\sum_{j=1}^{2} p_j \log_2(p_j),$$

where $p_j$ is the probability that variable x falls into the j-th discrete bin, the bins being given by

$$x \leftarrow \begin{cases} 1, & x \geq 0, & j = 1 \\ -1, & x < 0, & j = 2 \end{cases}$$

and computing, from the discrete entropy and the mutual information model, the mutual information between each feature dimension in the training feature set and the label $l_1$;
(74) sorting the mutual information values in descending order, selecting the top d features, and recording the index values index of the top d features;
(75) selecting the corresponding d features from the feature set of the test data set according to the index values index obtained in step (74).
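Steps (71)-(75) rank features by mutual information after 1-bit quantization. A NumPy sketch; the toy feature matrix and label vector are illustrative, and the informative dimension is planted so the ranking is verifiable:

```python
import numpy as np

def entropy(counts):
    """Discrete entropy H = -sum p_j log2(p_j), from bin counts."""
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def select_features(F, l1, d):
    """Steps (72)-(74): 1-bit quantize each feature dimension, compute
    I(x_i; l1) = H(l1) + H(x_i) - H(x_i, l1), keep the top-d indices."""
    B = np.where(F >= 0, 1, -1)                         # step (73): 1-bit bins
    H_l = entropy(np.unique(l1, return_counts=True)[1])
    mi = np.empty(F.shape[1])
    for i in range(F.shape[1]):
        H_x = entropy(np.unique(B[:, i], return_counts=True)[1])
        pairs = np.stack([B[:, i], l1], axis=1)         # joint (x_i, l1) outcomes
        H_xl = entropy(np.unique(pairs, axis=0, return_counts=True)[1])
        mi[i] = H_l + H_x - H_xl
    return np.argsort(mi)[::-1][:d]                     # step (74): descending order

rng = np.random.default_rng(0)
l1 = np.repeat([0, 1], 50)                              # two-class training labels
F = rng.normal(size=(100, 5))
F[:, 2] = np.where(l1 == 1, 1.0, -1.0)                  # one perfectly informative dim
index = select_features(F, l1, d=2)
print(index[0])                                         # the informative feature leads
```

Step (75) then applies the same `index` to the columns of the test feature set.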
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710269907.1A CN107133640A (en) | 2017-04-24 | 2017-04-24 | Image classification method based on local image patch descriptors and Fisher vectors |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710269907.1A CN107133640A (en) | 2017-04-24 | 2017-04-24 | Image classification method based on local image patch descriptors and Fisher vectors |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107133640A true CN107133640A (en) | 2017-09-05 |
Family
ID=59716068
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710269907.1A Pending CN107133640A (en) | 2017-04-24 | 2017-04-24 | Image classification method based on local image patch descriptors and Fisher vectors |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107133640A (en) |
Cited By (10)
* Cited by examiner, † Cited by third partyPublication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108108751A (en) * | 2017-12-08 | 2018-06-01 | 浙江师范大学 | Scene recognition method based on convolutional multi-features and deep random forest |
CN108154183A (en) * | 2017-12-25 | 2018-06-12 | 深圳市唯特视科技有限公司 | Object classification method based on local and deep feature sets |
CN108319961A (en) * | 2018-01-23 | 2018-07-24 | 西南科技大学 | Fast image ROI detection method based on local feature points |
CN108399370A (en) * | 2018-02-02 | 2018-08-14 | 达闼科技(北京)有限公司 | Expression recognition method and cloud system |
CN108416389A (en) * | 2018-03-15 | 2018-08-17 | 盐城师范学院 | Image classification method based on denoising sparse autoencoder and density-space sampling |
CN109344709A (en) * | 2018-08-29 | 2019-02-15 | 中国科学院信息工程研究所 | Method for detecting fake generated face images |
CN109447978A (en) * | 2018-11-09 | 2019-03-08 | 河北工业大学 | Defect classification method for electroluminescence images of photovoltaic solar cells |
WO2019100348A1 (en) * | 2017-11-24 | 2019-05-31 | 华为技术有限公司 | Image retrieval method and device, and image library generation method and device |
CN114581797A (en) * | 2022-01-24 | 2022-06-03 | 航天时代飞鸿技术有限公司 | Unmanned aerial vehicle image feature selection method and system and readable storage medium |
CN118172599A (en) * | 2024-03-14 | 2024-06-11 | 上海交通大学 | Mobile terminal new category learning and lightweight updating migration method based on parameter freezing |
Citations (5)
* Cited by examiner, † Cited by third partyPublication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050276500A1 (en) * | 2004-06-15 | 2005-12-15 | Canon Kabushiki Kaisha | Image encoding apparatus, and image processing apparatus and its control method |
CN105488519A (en) * | 2015-11-13 | 2016-04-13 | 同济大学 | Video classification method based on video scale information |
CN106056159A (en) * | 2016-06-03 | 2016-10-26 | 西安电子科技大学 | Image fine classification method based on Fisher Vector |
CN106203354A (en) * | 2016-07-14 | 2016-12-07 | 南京信息工程大学 | Scene recognition method based on interacting depth structure |
CN106230444A (en) * | 2016-06-07 | 2016-12-14 | 云南大学 | Concurrent dual-band transmitter with band-pass and low-pass ΔΣ hybrid modulation |
Non-Patent Citations (4)
* Cited by examiner, † Cited by third partyTitle |
---|
YU ZHANG et al.: "Compact Representation for Image Classification: To Choose or to Compress?", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition * |
R. Szeliski: "Computer Vision: Algorithms and Applications" (Chinese edition), 31 January 2012 * |
Yang Jie, Zhang Xiang (eds.): "Video Object Detection and Tracking and Its Applications", 31 August 2012 * |
Wang Peng: "Research and Application of the Bag of Features Algorithm Based on the Spatial Fisher Kernel Framework", China Master's Theses Full-text Database, Information Science and Technology * |
Cited By (13)
* Cited by examiner, † Cited by third partyPublication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019100348A1 (en) * | 2017-11-24 | 2019-05-31 | 华为技术有限公司 | Image retrieval method and device, and image library generation method and device |
CN108108751A (en) * | 2017-12-08 | 2018-06-01 | 浙江师范大学 | Scene recognition method based on convolutional multi-features and deep random forest |
CN108108751B (en) * | 2017-12-08 | 2021-11-12 | 浙江师范大学 | Scene recognition method based on convolution multi-feature and deep random forest |
CN108154183A (en) * | 2017-12-25 | 2018-06-12 | 深圳市唯特视科技有限公司 | Object classification method based on local and deep feature sets |
CN108319961A (en) * | 2018-01-23 | 2018-07-24 | 西南科技大学 | Fast image ROI detection method based on local feature points |
CN108319961B (en) * | 2018-01-23 | 2022-03-25 | 西南科技大学 | A fast detection method of image ROI based on local feature points |
CN108399370A (en) * | 2018-02-02 | 2018-08-14 | 达闼科技(北京)有限公司 | Expression recognition method and cloud system |
CN108416389A (en) * | 2018-03-15 | 2018-08-17 | 盐城师范学院 | Image classification method based on denoising sparse autoencoder and density-space sampling |
CN109344709A (en) * | 2018-08-29 | 2019-02-15 | 中国科学院信息工程研究所 | Method for detecting fake generated face images |
CN109447978A (en) * | 2018-11-09 | 2019-03-08 | 河北工业大学 | Defect classification method for electroluminescence images of photovoltaic solar cells |
CN109447978B (en) * | 2018-11-09 | 2021-11-02 | 河北工业大学 | A method for classifying defects in electroluminescence images of photovoltaic solar cells |
CN114581797A (en) * | 2022-01-24 | 2022-06-03 | 航天时代飞鸿技术有限公司 | Unmanned aerial vehicle image feature selection method and system and readable storage medium |
CN118172599A (en) * | 2024-03-14 | 2024-06-11 | 上海交通大学 | Mobile terminal new category learning and lightweight updating migration method based on parameter freezing |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107133640A (en) | 2017-09-05 | Image classification method based on local image patch descriptors and Fisher vectors |
CN105760821B (en) | 2017-06-06 | Face recognition method based on grouped accumulated sparse representation in kernel space |
CN103942568B (en) | 2017-04-05 | Classification method based on unsupervised feature selection |
CN105426919B (en) | 2017-11-14 | Image classification method based on saliency-guided unsupervised feature learning |
CN104268593B (en) | 2017-10-17 | Face recognition method using multiple sparse representations under small-sample conditions |
CN107122375A (en) | 2017-09-01 | Image subject recognition method based on image features |
CN103258210B (en) | 2016-09-14 | High-resolution image classification method based on dictionary learning |
CN105574534A (en) | 2016-05-11 | Salient object detection method based on sparse subspace clustering and low-rank representation |
CN109376787B (en) | 2021-02-26 | Manifold learning network and computer vision image set classification method based on manifold learning network |
CN103177265B (en) | 2016-09-14 | High-resolution image classification method based on kernel functions and sparse coding |
CN111860171A (en) | 2020-10-30 | A method and system for detecting irregularly shaped targets in large-scale remote sensing images |
CN101699514B (en) | 2011-09-21 | SAR Image Segmentation Method Based on Immune Cloning Quantum Clustering |
CN109710804B (en) | 2022-10-18 | Teaching video image knowledge point dimension reduction analysis method |
CN118799619A (en) | 2024-10-18 | A method for batch recognition and automatic classification and archiving of image content |
Qian et al. | 2021 | Classification of rice seed variety using point cloud data combined with deep learning |
CN105320764A (en) | 2016-02-10 | 3D model retrieval method and 3D model retrieval apparatus based on slow increment features |
CN110347851A (en) | 2019-10-18 | Image search method and system based on convolutional neural networks |
CN105046272A (en) | 2015-11-11 | Image classification method based on concise unsupervised convolutional network |
CN111311702A (en) | 2020-06-19 | Image generation and identification module and method based on BlockGAN |
CN110097096A (en) | 2019-08-06 | A kind of file classification method based on TF-IDF matrix and capsule network |
CN103971136A (en) | 2014-08-06 | Large-scale data-oriented parallel structured support vector machine classification method |
CN114330516A (en) | 2022-04-12 | Small sample logo image classification based on multi-graph guided neural network model |
CN110991554B (en) | 2023-04-18 | Improved PCA (principal component analysis) -based deep network image classification method |
CN113723456B (en) | 2023-10-17 | Automatic astronomical image classification method and system based on unsupervised machine learning |
CN104331717B (en) | 2017-10-17 | The image classification method that a kind of integration characteristics dictionary structure is encoded with visual signature |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
2017-09-05 | PB01 | Publication | |
2017-09-29 | SE01 | Entry into force of request for substantive examination | |
2019-05-31 | RJ01 | Rejection of invention patent application after publication | |