
CN111638465B - Lithium battery health state estimation method based on convolutional neural network and transfer learning


Info

Publication number
CN111638465B
CN111638465B (application CN202010475482.1A)
Authority
CN
China
Prior art keywords
layer
value
model
neural network
parameter
Prior art date
2020-05-29
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010475482.1A
Other languages
Chinese (zh)
Other versions
CN111638465A (en)
Inventor
陶吉利
李央
马龙华
白杨
乔志军
谢亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Science and Technology ZUST
Original Assignee
Zhejiang University of Science and Technology ZUST
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2020-05-29
Filing date
2020-05-29
Publication date
2023-02-28
2020-05-29 Application filed by Zhejiang University of Science and Technology ZUST
2020-05-29 Priority to CN202010475482.1A
2020-09-08 Publication of CN111638465A
2023-02-28 Application granted
2023-02-28 Publication of CN111638465B
Status: Active
2040-05-29 Anticipated expiration


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01RMEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R31/36Arrangements for testing, measuring or monitoring the electrical condition of accumulators or electric batteries, e.g. capacity or state of charge [SoC]
    • G01R31/392Determining battery ageing or deterioration, e.g. state of health
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01RMEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R31/36Arrangements for testing, measuring or monitoring the electrical condition of accumulators or electric batteries, e.g. capacity or state of charge [SoC]
    • G01R31/367Software therefor, e.g. for battery testing using modelling or look-up tables
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01RMEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R31/36Arrangements for testing, measuring or monitoring the electrical condition of accumulators or electric batteries, e.g. capacity or state of charge [SoC]
    • G01R31/382Arrangements for monitoring battery or accumulator variables, e.g. SoC
    • G01R31/3842Arrangements for monitoring battery or accumulator variables, e.g. SoC combining voltage and current measurements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Secondary Cells (AREA)
  • Charge And Discharge Circuits For Batteries Or The Like (AREA)

Abstract

The invention discloses a lithium battery state-of-health estimation method based on a convolutional neural network and transfer learning. A base model is pre-trained offline on the complete cycle data of an accelerated aging experiment together with the final roughly 7.5% of cycle data from the life of a spent battery; the parameters of the base model are then fine-tuned with normal-speed aging data from only the first 15% of a new battery's cycles, so that the battery's state of health can be estimated online at any moment. Because the accelerated aging experiment greatly shortens battery life, the final cycles of a spent battery are easy to collect, as are the first 15% of a new battery's cycles; this saves a large amount of training-data collection time, reduces the size of the model input, and speeds up computation.

Description

Lithium battery state-of-health estimation method based on convolutional neural network and transfer learning

Technical Field

The invention belongs to the field of automation technology and relates to a method for estimating the state of health of a lithium battery based on a convolutional neural network and transfer learning.

Background

Existing lithium battery state-of-health estimation methods fall into two broad categories: model-based and data-driven. Model-based methods place high demands on modeling the complex physical mechanisms inside the battery. Data-driven methods mainly extract important features manually from raw battery voltage, current, and capacity data and feed them to traditional machine learning models. Among these, incremental capacity analysis is widely used: it converts the voltage plateau on the original charge-discharge voltage-capacity curve, which reflects the battery's first-order phase transition, into a clearly identifiable ΔQ/ΔV peak on the incremental capacity curve, and then extracts features such as the ΔQ/ΔV peak and the corresponding voltage value as model inputs. Research shows that the physical changes of battery aging are reflected in the voltage-capacity curve.

In recent years, deep learning has been applied in many fields; it can automatically extract important features from large amounts of data, avoiding the information loss and heavy workload that manual feature extraction in traditional machine learning may cause. S. Shen et al., in "A deep learning method for online capacity estimation of lithium-ion batteries" (Journal of Energy Storage, vol. 25, p. 100817, 2019), first introduced deep learning into battery state-of-health estimation; however, that method relies on 10 years of cycling data, and collecting such data is extremely time-consuming. The present invention aims to improve the accuracy of the state-of-health estimation model while overcoming this excessive dependence on data.

Summary of the Invention

To overcome shortcomings of the prior art, such as the huge amount of data required to build a model, a method based on a convolutional neural network and transfer learning is proposed. Transfer learning pre-trains a base model offline using the complete cycle data of an accelerated aging experiment and the final roughly 7.5% of cycle data from the life of a spent battery, then fine-tunes the base model's parameters using normal-speed aging data from only the first 15% of a new battery's cycles, so that the battery's state of health can be estimated at any moment. Because the accelerated aging experiment greatly shortens battery life, the final cycles of a spent battery are easy to collect, as are the first 15% of a new battery's cycles; this saves a large amount of training-data collection time, reduces the size of the model input, and speeds up computation.

The technical solution of the present invention establishes, through data collection, model building, and fine-tuning, a lithium battery state-of-health estimation method based on a convolutional neural network and transfer learning. The method effectively improves the accuracy of battery state-of-health estimation.

The specific technical solution of the present invention is as follows:

A lithium battery state-of-health estimation method based on a convolutional neural network and transfer learning, with the following steps:

S1: Obtain the input data for the convolutional neural network, specifically:

S11: Select several brand-new lithium batteries of different models and run an accelerated aging experiment on each to collect cycle data, repeatedly consuming battery capacity through cycles of constant-current charging, constant-voltage charging, and constant-current discharging until the state of health drops below 80%.

At the same time, obtain spent lithium batteries of the same models that are near the end of their life and run a normal-speed aging experiment on each to collect cycle data, likewise consuming capacity through constant-current charging, constant-voltage charging, and constant-current discharging until the state of health drops below 80%.

Also obtain brand-new lithium batteries of the same models and run a normal-speed aging experiment on each to collect cycle data, performing charge-discharge cycles of constant-current charging, constant-voltage charging, and constant-current discharging to obtain the first 15% of the battery's life-cycle data.

S12: From the voltage and current values of the constant-current charging stage collected in the different aging experiments of S11, compute the battery capacity, and arrange the values of the three variables (voltage, current, and capacity) into a matrix that serves as the input to the convolutional neural network.

S2: Build the convolutional neural network model. The network comprises convolutional layers, pooling layers, and fully connected layers; a rectified linear unit (ReLU) is chosen as the activation function and applied to the output of every convolutional and pooling layer.

S3: Pre-train the model built in S2, specifically through S31 and S32:

S31: Divide the input data obtained from the accelerated aging experiment on the brand-new batteries in S1 into several mini-batches of training samples and feed them batch by batch into the neural network built in S2, updating the parameters by stochastic gradient descent during iterative learning. This yields the first pre-trained model; save its parameter values, including the convolution kernel values k_{a,b,c,k}, the bias values b_k, and the fully connected layer weights W_l and biases b_l.

S32: Divide the input data obtained from the normal-speed aging experiment on the spent batteries in S1 into several mini-batches of training samples and feed them batch by batch into the network trained in S31. Starting from the saved parameter values of the first pre-trained model, continue iterative learning and further adjust the parameters by stochastic gradient descent. This yields the second pre-trained model; save its parameter values, including the new convolution kernel values k'_{a,b,c,k}, bias values b'_k, and fully connected layer weights W_l' and biases b_l'.

At this stage the forward propagation and parameter update are:

C'_{i,j,k} = f( Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} Σ_{c=1}^{c_k} k'_{a,b,c,k} · x_{i',j',c} + b'_k )   (16)

a_l' = f(z_l') = f(W_l' a_{l-1}' + b_l')   (17)

θ'_{j+1} = θ'_j − (α/m) Σ_{i=1}^{m} ∂( ŷ_i(x)_j − y_i(x)_j )² / ∂θ'_j + γ Δθ'_j   (18)

where Δθ'_j = θ'_j − θ'_{j-1} is the momentum term; the model's internal parameters θ'_j include the convolution kernel values k'_{a,b,c,k}, the bias values b'_k, and the fully connected layer weights W_l' and biases b_l'. The prime superscript on a parameter in equations (16)-(18) denotes its forward-propagation update value during the pre-training stage.

S4: Divide the input data obtained from the normal-speed aging experiment on the brand-new batteries in S1 into several mini-batches of training samples and feed them batch by batch into the pre-trained model obtained in S3 for iterative learning. During this learning the convolutional-layer parameters of the pre-trained model are frozen, i.e. k'_{a,b,c,k} and b'_k are kept fixed, and only the fully connected layer weights W_l' and biases b_l' are updated to W_l'' and b_l''; save the updated parameters to obtain the final estimation model.
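The freezing rule of S4 (convolution kernels and biases fixed, only fully connected parameters updated) can be sketched as follows; parameter names and shapes are illustrative, not from the patent:

```python
import numpy as np

def finetune_step(params, grads, alpha=0.01):
    """One fine-tuning update in the spirit of step S4: convolutional
    parameters (kernels and their biases) stay frozen, and only the
    fully connected weights and biases move down the gradient."""
    frozen = {"conv_kernels", "conv_biases"}
    return {name: (val if name in frozen else val - alpha * grads[name])
            for name, val in params.items()}

# toy parameters and gradients, just to show which entries move
params = {"conv_kernels": np.ones(2), "conv_biases": np.zeros(1),
          "fc_weights": np.ones(2), "fc_biases": np.zeros(1)}
grads = {name: np.ones_like(v) for name, v in params.items()}
new = finetune_step(params, grads, alpha=0.1)
print(new["conv_kernels"], new["fc_weights"])  # conv unchanged, fc moved to 0.9
```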

At this stage the forward propagation and parameter update are:

C''_{i,j,k} = f( Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} Σ_{c=1}^{c_k} k'_{a,b,c,k} · x_{i',j',c} + b'_k )   (19)

a_l'' = f(z_l'') = f(W_l'' a_{l-1}'' + b_l'')   (20)

θ''_{j+1} = θ''_j − (α/m) Σ_{i=1}^{m} ∂( ŷ_i(x)_j − y_i(x)_j )² / ∂θ''_j + γ Δθ''_j   (21)

where the model's internal parameters θ''_j include the fully connected layer weights W_l'' and biases b_l''. The double-prime superscript on a parameter in equations (19)-(21) denotes its forward-propagation update value during the fine-tuning stage.

S5: Run one constant-current charging test on the lithium battery to be estimated to obtain its voltage, current, and capacity measurements, and form the matrix of the three as the input X of the estimation model obtained in S4. The network's forward propagation uses the parameters k'_{a,b,c,k}, b'_k, W_l'', and b_l'' saved in S4:

C'''_{i,j,k} = f( Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} Σ_{c=1}^{c_k} k'_{a,b,c,k} · x_{i',j',c} + b'_k )   (22)

a_l''' = f(z_l''') = f(W_l'' a_{l-1}''' + b_l'')   (23)

The triple-prime superscript on a parameter in equations (22)-(23) denotes its forward-propagation value during the estimation stage.

Finally, the estimation model outputs the battery's state of health at that moment.

Preferably, in S1, the accelerated aging experiment overcharges and overdischarges the battery, i.e. it sets a higher upper voltage limit for constant-current charging and a lower cutoff voltage for constant-current discharging.

Preferably, in S1, the normal-speed aging experiment on the spent lithium batteries runs 35 to 40 charge-discharge cycles.

Preferably, in S1, the normal-speed aging experiment on the brand-new lithium batteries runs 75 charge-discharge cycles.

Preferably, in S1, the data collected in the different aging experiments are each arranged into the model input X:

X = [ V_1  V_2  …  V_k
      I_1  I_2  …  I_k
      C_1  C_2  …  C_k ]   (1)

where k is the number of sampling points in the constant-current charging stage, and V_i, I_i, and C_i are the voltage, current, and capacity values at the i-th sampling point.
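As a minimal sketch (function and variable names are assumptions, not from the patent), the 3 × k input matrix described above can be assembled with NumPy:

```python
import numpy as np

def build_input_matrix(voltage, current, capacity):
    """Stack voltage, current, and capacity samples from the
    constant-current charging stage into a 3 x k input matrix X."""
    V, I, C = (np.asarray(a, dtype=float) for a in (voltage, current, capacity))
    assert V.shape == I.shape == C.shape, "all three series must share k samples"
    return np.vstack([V, I, C])  # row order: voltage, current, capacity

# example: k = 4 sampling points from a constant-current charge segment
X = build_input_matrix([3.0, 3.2, 3.5, 3.9],   # volts
                       [2.0, 2.0, 2.0, 2.0],   # amps (constant current)
                       [0.0, 0.4, 0.8, 1.2])   # amp-hours accumulated so far
print(X.shape)  # (3, 4)
```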

Preferably, in the convolutional neural network model, the forward propagation of a convolutional layer is computed as:

C_{i,j,k} = f( Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} Σ_{c=1}^{c_k} k_{a,b,c,k} · x_{i',j',c} + b_k )   (2)

i' = (i-1) × h_s + a   (3)

j' = (j-1) × w_s + b   (4)

where k indexes the convolution kernels of the layer (their number equals the number of channels of the output matrix), C_{i,j,k} is the value at row i, column j of the k-th channel of the output matrix, b_k is the bias value, h_k, w_k, and c_k are the height, width, and channel count of the convolution kernel, h_s and w_s are the strides of the kernel in the height and width directions as it scans the input matrix, x_{i',j',c} is the value at row i', column j' of channel c of the input matrix, k_{a,b,c,k} is the value at row a, column b, channel c of the k-th convolution kernel, and f is the activation function.
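A direct, loop-based sketch of this convolutional forward pass (using 0-based indices instead of the 1-based indices of equations (3)-(4); names are illustrative):

```python
import numpy as np

def conv_forward(x, kernels, biases, hs=1, ws=1, f=lambda z: np.maximum(z, 0.0)):
    """Forward pass of a convolutional layer in the form described above.
    x:        input of shape (h_in, w_in, c_in)
    kernels:  shape (h_k, w_k, c_in, n_k), i.e. k_{a,b,c,k}
    biases:   shape (n_k,), i.e. b_k
    hs, ws:   strides in the height and width directions
    f:        activation (ReLU by default, as chosen in S2)."""
    h_in, w_in, _ = x.shape
    h_k, w_k, _, n_k = kernels.shape
    h_out = (h_in - h_k) // hs + 1
    w_out = (w_in - w_k) // ws + 1
    out = np.empty((h_out, w_out, n_k))
    for i in range(h_out):
        for j in range(w_out):
            patch = x[i * hs:i * hs + h_k, j * ws:j * ws + w_k, :]
            for k in range(n_k):
                # triple sum over a, b, c of k_{a,b,c,k} * x_{i',j',c}, plus b_k
                out[i, j, k] = np.sum(patch * kernels[:, :, :, k]) + biases[k]
    return f(out)

x = np.arange(3 * 4 * 1, dtype=float).reshape(3, 4, 1)  # 3x4 input, 1 channel
k = np.ones((2, 2, 1, 1))                               # one 2x2 kernel
y = conv_forward(x, k, np.zeros(1))
print(y.shape)  # (2, 3, 1)
```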

The dimensions of the convolutional layer are computed as:

w_out = (w_in − w_k + 2·w_p) / w_s + 1   (5)

h_out = (h_in − h_k + 2·h_p) / h_s + 1   (6)

where w_k and h_k are the width and height of the convolution kernel, w_s and h_s are the strides in the width and height directions as the kernel scans the input matrix, w_in and w_out are the widths of the input and output matrices, h_in and h_out are their heights, and w_p and h_p are the numbers of zero elements padded symmetrically on the left/right and top/bottom of the input matrix to keep boundary information from being lost over repeated convolutions.
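A small helper for this dimension arithmetic might look like the following (the standard stride/padding formula; the example shapes are illustrative, not from the patent):

```python
def conv_output_size(w_in, h_in, w_k, h_k, ws, hs, wp=0, hp=0):
    """Output width/height of a convolutional layer:
    w_out = (w_in - w_k + 2*w_p)/w_s + 1, and likewise for the height."""
    w_out = (w_in - w_k + 2 * wp) // ws + 1
    h_out = (h_in - h_k + 2 * hp) // hs + 1
    return w_out, h_out

# e.g. a hypothetical 3 x 200 input (V/I/C rows x 200 samples), 3 x 5 kernel,
# stride 1, zero-padding of 2 in the width direction keeps the width at 200
print(conv_output_size(200, 3, 5, 3, 1, 1, wp=2))  # (200, 1)
```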

The forward propagation of the max pooling layer is computed as:

M_{i,j,k} = max_{0 ≤ σ_1 < e_1, 0 ≤ σ_2 < e_2} x_{e_i+σ_1, e_j+σ_2, k}   (7)

Equation (7) splits the feature map into i × j regions of size e_1 × e_2 and applies one max-pooling operation to the feature points of each region, where M_{i,j,k} is the value at row i, column j of the k-th channel of the pooling-layer output, x_{e_i+σ_1, e_j+σ_2, k} is the value at row e_i+σ_1, column e_j+σ_2 of the k-th channel of the preceding convolutional layer, and (e_i, e_j) is the top-left coordinate of the e_1 × e_2 region at row i, column j of the feature map.
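The max pooling of equation (7) can be sketched as follows (assuming the feature map tiles exactly into e_1 × e_2 regions; names are illustrative):

```python
import numpy as np

def max_pool(x, e1, e2):
    """Max pooling per equation (7): split each channel of x
    (shape (h, w, n_channels)) into e1 x e2 regions and keep each
    region's maximum."""
    h, w, n = x.shape
    assert h % e1 == 0 and w % e2 == 0, "feature map must tile exactly"
    out = np.empty((h // e1, w // e2, n))
    for i in range(h // e1):
        for j in range(w // e2):
            region = x[i * e1:(i + 1) * e1, j * e2:(j + 1) * e2, :]
            out[i, j, :] = region.max(axis=(0, 1))
    return out

x = np.arange(16, dtype=float).reshape(4, 4, 1)
m = max_pool(x, 2, 2)          # maxima of the four 2x2 quadrants
print(m.shape)                 # (2, 2, 1); top-left max is 5, bottom-right 15
```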

The forward propagation of the fully connected layer is computed as:

a_l = f(z_l) = f(W_l a_{l-1} + b_l)   (8)

f(x) = max(0, x)   (9)

where f(x) is the (ReLU) activation function, W_l and b_l are the weight and bias values of layer l, and a_{l-1} is the input to layer l.
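Equation (8), together with the ReLU activation chosen in S2, can be sketched as:

```python
import numpy as np

def relu(z):
    """ReLU activation: f(x) = max(0, x)."""
    return np.maximum(z, 0.0)

def fc_forward(a_prev, W, b, f=relu):
    """Fully connected forward pass per equation (8):
    a_l = f(z_l) = f(W_l a_{l-1} + b_l)."""
    return f(W @ a_prev + b)

# toy 2-in / 2-out layer: the second pre-activation (0.5) survives ReLU
W = np.array([[1.0, -1.0], [0.5, 0.5]])
b = np.array([0.0, -1.0])
a = fc_forward(np.array([2.0, 1.0]), W, b)
print(a)  # [1.  0.5]
```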

The backpropagation of the convolutional layer is:

δ_{l-1} = δ_l ∗ rot180(k_l) ⊙ f'(z_{l-1})   (10)

where rot180 denotes rotating the convolution kernel by 180 degrees, ∗ denotes convolution, ⊙ denotes element-wise multiplication, and δ_l is the derivative of the objective function with respect to the output of layer l.

The objective function J of the neural network is:

J = (1/2n) Σ_{i=1}^{n} ( ŷ_i(x) − y_i(x) )² + (λ/2) ||W||²   (12)

where ŷ_i(x) is the output of the output layer, y_i(x) is the true label value, n is the number of samples, and λ is the regularization parameter.

The network's internal parameters θ_j, which comprise the weights W and biases b, are updated according to objective function (12) as:

v_{j+1} = γ v_j − (α/m) Σ_{i=1}^{m} ∂( ŷ_i(x)_j − y_i(x)_j )² / ∂θ_j   (13)

θ_{j+1} = θ_j + v_{j+1}   (14)

where m is the number of samples in a mini-batch, ŷ_i(x)_j is the output for the i-th input of the mini-batch at iteration j, y_i(x)_j is the corresponding true value, θ_j are the internal parameters at iteration j, α is the learning rate, and γ is the momentum value.
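The mini-batch stochastic-gradient-descent-with-momentum update described here can be sketched in a standard momentum form (the patent's exact update expression is shown only as an image, so this form is an assumption):

```python
import numpy as np

def sgd_momentum_step(theta, v, sample_grads, alpha=0.01, gamma=0.9):
    """One mini-batch update: the velocity v accumulates the averaged
    per-sample gradients of the mini-batch, then the parameters move
    by the velocity.
    sample_grads: list of m per-sample gradient arrays."""
    g = np.mean(sample_grads, axis=0)   # (1/m) * sum of per-sample gradients
    v = gamma * v - alpha * g           # velocity update with momentum gamma
    theta = theta + v                   # parameter update
    return theta, v

theta, v = np.array([1.0]), np.array([0.0])
for _ in range(3):                      # three iterations on a fixed gradient of 2
    theta, v = sgd_momentum_step(theta, v, [np.array([2.0])], alpha=0.1)
print(theta)                            # momentum accelerates the descent
```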

Preferably, a strategy is added to the convolutional neural network model to prevent overfitting: a fixed proportion of hidden neurons is temporarily and randomly dropped from the network while the input and output neurons are kept unchanged. The input is propagated forward through the modified network, and the resulting loss is propagated backward through the same modified network; after a mini-batch of training samples has gone through this process, the parameters of the neurons that were not dropped are updated by stochastic gradient descent.
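This dropout-style strategy can be sketched as follows; the inverted-dropout rescaling is an assumption (the patent only specifies randomly dropping a proportion of hidden neurons):

```python
import numpy as np

def dropout_forward(a, drop_prob, rng, train=True):
    """Randomly zero a proportion of hidden activations during training;
    at inference all neurons are kept. Uses the common 'inverted dropout'
    scaling so the expected activation is unchanged."""
    if not train or drop_prob == 0.0:
        return a
    mask = rng.random(a.shape) >= drop_prob   # 1 = keep, 0 = drop
    return a * mask / (1.0 - drop_prob)       # rescale the survivors

rng = np.random.default_rng(0)
a = np.ones(10)
out = dropout_forward(a, 0.5, rng)
print(int((out == 0).sum()), "of 10 neurons dropped this pass")
```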

Preferably, the convolutional neural network model adopts a piecewise-constant decay strategy that adjusts the learning rate during training rather than keeping it fixed, so that the model converges quickly.
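A piecewise-constant learning-rate schedule can be sketched as follows (the boundary steps and rates are illustrative, not from the patent):

```python
def piecewise_constant_lr(step, boundaries=(100, 300), rates=(0.01, 0.001, 0.0001)):
    """Piecewise-constant decay: the learning rate drops to the next
    constant value each time the training step passes a boundary."""
    for bound, rate in zip(boundaries, rates):
        if step < bound:
            return rate
    return rates[-1]   # final rate once all boundaries are passed

print([piecewise_constant_lr(s) for s in (0, 150, 500)])  # [0.01, 0.001, 0.0001]
```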

Compared with the prior art, the beneficial effects of the present invention are as follows:

(1) The invention uses a convolutional neural network to extract features automatically from voltage, current, and capacity data, eliminating manual feature extraction and the loss of important information it may cause.

(2) Because the accelerated aging experiment greatly shortens battery life, the final cycles of a spent battery are easy to collect, as are the first 15% of a new battery's cycles; the invention therefore saves a large amount of training-data collection time, reduces the size of the model input, and speeds up computation.

(3) The model trained in the accelerated mode can be quickly transferred to the normal-speed mode, showing good generalization.

(4) Only data from the constant-current charging stage are taken as input, which reduces the size of the model input and speeds up computation.

(5) The invention improves the accuracy of online lithium battery state-of-health estimation.

Brief Description of the Drawings

Figure 1 is a schematic diagram of the convolutional neural network structure;

Figure 2 shows the changes in voltage, current, and capacity over a single charge-discharge cycle;

Figure 3 is a schematic diagram of the transfer learning scheme;

Figure 4 shows the online estimation results of Experiment 2 in Table 1, for the SONY US18650VTC6 battery;

Figure 5 shows the online estimation results of Experiment 4 in Table 1, for the FST2000 battery.

Detailed Description

The present invention is further elaborated below with reference to the drawings and specific embodiments. The technical features of the various embodiments may be combined as long as they do not conflict with one another.

The invention provides a lithium battery state-of-health estimation method based on a convolutional neural network and transfer learning, with the following steps:

S1: Obtain the input data for the convolutional neural network, specifically:

S11: Select several brand-new lithium batteries of different models and run an accelerated aging experiment on each to collect cycle data, repeatedly consuming capacity through cycles of constant-current charging, constant-voltage charging, and constant-current discharging until the state of health drops below 80%. Accelerated aging means overcharging and overdischarging the battery, i.e. setting a higher upper voltage limit for constant-current charging and a lower cutoff voltage for constant-current discharging (the same below).

At the same time, obtain spent lithium batteries of the same models that are near the end of their life and run a normal-speed aging experiment on each to collect cycle data, likewise consuming capacity through constant-current charging, constant-voltage charging, and constant-current discharging until the state of health drops below 80%; about 35-40 cycles suffice. Since a battery's full life under normal aging is roughly 500 cycles, 35-40 cycles is about 7.5% of the full life.

Also obtain brand-new lithium batteries of the same models and run a normal-speed aging experiment on each to collect cycle data, performing 75 charge-discharge cycles of constant-current charging, constant-voltage charging, and constant-current discharging to obtain the first 15% of the battery's life-cycle data. Since a battery's full life under normal aging is roughly 500 cycles, 75 cycles is about 15% of the full life.

S12: From the voltage and current values of the constant-current charging stage collected in the different aging experiments of S11, compute the battery capacity by coulomb counting, and arrange the values of the three variables (voltage, current, and capacity) into a matrix as the input to the convolutional neural network. The model input X has the form:

X = [ V_1  V_2  …  V_k
      I_1  I_2  …  I_k
      C_1  C_2  …  C_k ]   (1)

where k is the number of sampling points in the constant-current charging stage, and V_i, I_i, and C_i are the voltage, current, and capacity at the i-th sampling point.
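The coulomb-counting step can be sketched as follows (the sampling interval and current values are illustrative):

```python
import numpy as np

def coulomb_count(current_a, dt_s):
    """Capacity curve via coulomb counting, as used in S12: integrate
    the charging current over time to get cumulative charge, converted
    from ampere-seconds to ampere-hours."""
    current_a = np.asarray(current_a, dtype=float)
    return np.cumsum(current_a * dt_s) / 3600.0   # A*s -> Ah

# constant 2 A over four 900 s intervals -> 0.5, 1.0, 1.5, 2.0 Ah
cap = coulomb_count([2.0, 2.0, 2.0, 2.0], 900.0)
print(cap)
```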

S2: Build the convolutional neural network model. The network comprises convolutional layers, pooling layers, and fully connected layers; a rectified linear unit (ReLU) is chosen as the activation function and applied to the output of every convolutional and pooling layer.

The forward propagation of a convolutional layer is computed as:

C_{i,j,k} = f( Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} Σ_{c=1}^{c_k} k_{a,b,c,k} · x_{i',j',c} + b_k )   (2)

i' = (i-1) × h_s + a   (3)

j' = (j-1) × w_s + b   (4)

where k indexes the convolution kernels of the layer (their number equals the number of channels of the output matrix), C_{i,j,k} is the value at row i, column j of the k-th channel of the output matrix, b_k is the bias value, h_k, w_k, and c_k are the height, width, and channel count of the convolution kernel, h_s and w_s are the strides of the kernel in the height and width directions as it scans the input matrix, x_{i',j',c} is the value at row i', column j' of channel c of the input matrix, k_{a,b,c,k} is the value at row a, column b, channel c of the k-th convolution kernel, and f is the activation function.

The output dimensions of a convolutional layer are calculated as:

    w_out = (w_in − w_k + 2·w_p) / w_s + 1                   (5)

    h_out = (h_in − h_k + 2·h_p) / h_s + 1                   (6)

where w_k and h_k are the width and height of the convolution kernel; w_s and h_s are the strides of the kernel in the width and height directions as it scans the input matrix; w_in and w_out are the widths of the input and output matrices; h_in and h_out are their heights; and w_p and h_p are the numbers of zero elements padded symmetrically on the left/right and top/bottom of the input matrix, which prevents the boundary information of the matrix from being lost under repeated convolution.
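Equations (5)–(6) can be checked with a one-line helper; the example values (4000×3 input, 5×2 kernel, w_p = 1, h_p = 0, stride 1) are taken from the embodiment described later.

```python
def conv_output_size(n_in, k, pad, stride):
    """Eqs. (5)-(6) along one axis, with symmetric zero padding `pad`
    and stride `stride` (floor division)."""
    return (n_in - k + 2 * pad) // stride + 1

h_out = conv_output_size(4000, 5, 0, 1)  # height: 4000, kernel 5, hp = 0
w_out = conv_output_size(3, 2, 1, 1)     # width: 3, kernel 2, wp = 1
```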

The forward pass of the max-pooling layer is computed as:

    M_{i,j,k} = max_{0≤σ1<e_1, 0≤σ2<e_2}  C_{e_i+σ1, e_j+σ2, k}    (7)

Equation (7) splits the feature map into i×j regions of size e_1×e_2 and applies one max-pooling operation to the feature points of each e_1×e_2 region. Here M_{i,j,k} is the value at row i, column j of the k-th channel of the pooling-layer output, C_{e_i+σ1, e_j+σ2, k} is the value at row e_i+σ1, column e_j+σ2 of the k-th channel of the preceding convolutional layer, and (e_i, e_j) is the top-left coordinate of the e_1×e_2 region at row i, column j of the feature map.
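The pooling of equation (7) can be sketched as follows (illustrative names, non-overlapping regions assumed):

```python
import numpy as np

def max_pool(c, e1, e2):
    """Eq. (7): split each channel of c (H, W, K) into non-overlapping
    e1 x e2 regions and keep the maximum of each region."""
    H, W, K = c.shape
    out = np.zeros((H // e1, W // e2, K))
    for k in range(K):
        for i in range(H // e1):
            for j in range(W // e2):
                out[i, j, k] = c[i * e1:(i + 1) * e1,
                                 j * e2:(j + 1) * e2, k].max()
    return out

c = np.arange(16.0).reshape(4, 4, 1)
pooled = max_pool(c, 2, 2)
```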

The forward pass of the fully connected layer is computed as:

    a_l = f(z_l) = f(W_l a_{l−1} + b_l)                      (8)

    (z_l)_j = Σ_i (W_l)_{j,i} (a_{l−1})_i + (b_l)_j          (9)

where f(x) is the activation function, W_l and b_l are the weight and bias values of layer l, and a_{l−1} is the input to layer l (a_l being its output).
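A minimal sketch of equation (8), assuming ReLU as f (names are illustrative):

```python
import numpy as np

def fc_forward(a_prev, W, b, f=lambda v: np.maximum(v, 0)):
    """Eq. (8): a_l = f(W_l a_{l-1} + b_l)."""
    return f(W @ a_prev + b)

a = fc_forward(np.array([1.0, -2.0]), np.eye(2), np.zeros(2))
```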

The backpropagation through a convolutional layer is:

    δ_{l−1} = δ_l ∗ rot180(K_l) ⊙ f'(z_{l−1})                (10)

where rot180 denotes rotating the convolution kernel by 180 degrees, and δ_l is the derivative of the objective function with respect to the output of layer l.

The objective function J of the neural network is established as:

    J = 1/(2n) · Σ_{i=1}^{n} ( ŷ_i(x) − y_i(x) )² + λ/(2n) · Σ_W W²    (12)

where ŷ_i(x) is the output of the output layer, y_i(x) is the true label value, n is the number of samples, and λ is the regularization parameter.
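A sketch of the objective, assuming a half-MSE loss with an L2 weight penalty — which matches the stated ingredients (network output ŷ, true label y, sample count n, regularization parameter λ); the exact normalization is an assumption.

```python
import numpy as np

def objective(y_hat, y, weights, lam):
    """Half mean squared error over n samples plus an L2 penalty
    lam/(2n) on the sum of squared weights."""
    n = len(y)
    err = np.asarray(y_hat, float) - np.asarray(y, float)
    mse = float(np.sum(err ** 2)) / (2 * n)
    reg = lam / (2 * n) * sum(float(np.sum(W ** 2)) for W in weights)
    return mse + reg

J = objective([1.0, 2.0], [1.0, 0.0], [np.array([[2.0]])], lam=1.0)
```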

The internal network parameters θ_j, comprising the weights W and biases b, are updated according to objective function (12) as:

    v_{j+1} = γ·v_j − α · ∂/∂θ_j [ 1/(2m) · Σ_{i=1}^{m} ( ŷ_i(x)_j − y_i(x)_j )² ]    (13)

    θ_{j+1} = θ_j + v_{j+1}                                  (14)

where m is the number of samples in a mini-batch, ŷ_i(x)_j is the output for the i-th input of the mini-batch at iteration j, y_i(x)_j is the corresponding true value, v_j is the accumulated momentum (velocity) term, θ_j is the internal parameter at iteration j, α is the learning rate, and γ is the momentum value.
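One momentum-SGD step of equations (13)–(14) can be sketched as below; the classical-momentum form and the auxiliary velocity variable v are assumptions, since the original images are not reproduced.

```python
def momentum_step(theta, velocity, grad, alpha, gamma):
    """Eqs. (13)-(14): v_{j+1} = gamma*v_j - alpha*grad,
    theta_{j+1} = theta_j + v_{j+1}."""
    velocity = gamma * velocity - alpha * grad
    return theta + velocity, velocity

theta, v = 1.0, 0.0
theta, v = momentum_step(theta, v, grad=2.0, alpha=0.1, gamma=0.9)
```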

S3: Pre-train the model constructed in S2; the specific method is S31–S32:

S31: Divide the input data obtained from the accelerated-aging experiments on the brand-new lithium batteries in S1 into several mini-batches of training samples and feed them batch by batch into the neural network constructed in S2, updating the parameters by stochastic gradient descent during the iterative learning to obtain the first pre-trained model. Save the parameter values of the first pre-trained model, including the kernel values k_{a,b,c,k}, the bias values b_k, and the fully connected layer weights W_l and biases b_l;

S32: Divide the input data obtained from the normal-speed aging experiments on the discarded lithium batteries in S1 into several mini-batches of training samples and feed them batch by batch into the network trained in S31, continuing the iterative learning from the saved parameter values of the first pre-trained model and further adjusting the parameters by stochastic gradient descent to obtain the second pre-trained model. Save the parameter values of the second pre-trained model, including the new kernel values k'_{a,b,c,k}, bias values b'_k, and fully connected layer weights W_l' and biases b_l';

At this stage the forward propagation and parameter update are:

    C'_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k} · x_{i',j',c} + b'_k )    (16)

    a_l' = f(z_l') = f(W_l' a_{l−1}' + b_l')                 (17)

    v'_{j+1} = γ·v'_j − α·∂J/∂θ'_j ,   θ'_{j+1} = θ'_j + v'_{j+1}    (18)

where the internal model parameters θ'_j comprise the kernel values k'_{a,b,c,k}, the bias values b'_k, and the fully connected layer weights W_l' and biases b_l'; the superscript "'" of a parameter in equations (16)–(18) denotes the forward-propagation updated value of that parameter in the pre-training stage;

In the convolutional neural network model, a strategy is added to prevent overfitting: a fixed proportion of the hidden neurons is temporarily and randomly dropped from the network, while the input and output neurons remain unchanged. The input is propagated forward through the modified network, and the resulting loss is then back-propagated through the same modified network. After a batch of training samples has gone through this process, the parameters of the neurons that were not dropped are updated by stochastic gradient descent.
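The dropout strategy described above can be sketched as follows; the patent does not specify the rescaling, so this sketch assumes the common inverted-dropout variant in which survivors are scaled by 1/(1−p).

```python
import numpy as np

def dropout_forward(a, p, rng):
    """Drop each hidden activation with probability p during training;
    rescale survivors by 1/(1-p) so the expected activation is unchanged."""
    mask = (rng.random(a.shape) >= p).astype(a.dtype)
    return a * mask / (1.0 - p), mask

rng = np.random.default_rng(0)
out, mask = dropout_forward(np.ones(1000), p=0.5, rng=rng)
```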

In the convolutional neural network model, a piecewise-constant decay strategy is adopted: the learning rate is adjusted during training rather than held fixed, so that the model converges quickly.
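A sketch of piecewise-constant decay; the initial value 1×10⁻⁵ and factor 0.7 come from the embodiment below, while the decay period of 1000 iterations is an assumption (the text only says "every certain iteration period").

```python
def piecewise_constant_lr(step, initial_lr, decay_factor, decay_every):
    """Multiply the learning rate by decay_factor after every
    decay_every iterations."""
    return initial_lr * decay_factor ** (step // decay_every)

lr0 = piecewise_constant_lr(0, 1e-5, 0.7, 1000)
lr1 = piecewise_constant_lr(1000, 1e-5, 0.7, 1000)
```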

S4: Divide the input data obtained from the normal-speed aging experiments on the brand-new lithium batteries in S1 into several mini-batches of training samples and feed them batch by batch into the pre-trained model obtained in S3 for iterative learning. During this learning the convolutional-layer parameters of the pre-trained model are kept fixed, i.e. k'_{a,b,c,k} and b'_k are unchanged, and only the fully connected layer weights W_l' and biases b_l' are updated, to W_l'' and b_l''. Save the updated parameters to obtain the final estimation model;
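The freeze-and-fine-tune idea of S4 can be sketched as a parameter update that skips the convolutional parameters; momentum is omitted here for brevity, and all names are illustrative.

```python
def fine_tune_step(params, grads, alpha, frozen=("conv_kernels", "conv_bias")):
    """Apply a plain gradient step to every parameter except the frozen
    convolutional ones, which keep their pre-trained values."""
    return {
        name: value if name in frozen else value - alpha * grads[name]
        for name, value in params.items()
    }

params = {"conv_kernels": 1.0, "conv_bias": 0.5, "fc_W": 2.0, "fc_b": 0.1}
grads = {name: 1.0 for name in params}
new_params = fine_tune_step(params, grads, alpha=0.1)
```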

At this stage the forward propagation and parameter update are:

    C''_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k} · x_{i',j',c} + b'_k )    (19)

    a_l'' = f(z_l'') = f(W_l'' a_{l−1}'' + b_l'')            (20)

    v''_{j+1} = γ·v''_j − α·∂J/∂θ''_j ,   θ''_{j+1} = θ''_j + v''_{j+1}    (21)

where the internal model parameters θ''_j comprise the fully connected layer weights W_l'' and biases b_l''; the superscript "''" of a parameter in equations (19)–(21) denotes the forward-propagation updated value of that parameter in the fine-tuning stage;

S5: Perform one constant-current charging experiment on the lithium battery to be estimated to obtain its voltage, current and capacity test values, and use the matrix formed by the three as the input X of the estimation model obtained in S4. The forward propagation of the network uses the parameters k'_{a,b,c,k}, b'_k, W_l'' and b_l'' saved in S4:

    C'''_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k} · x_{i',j',c} + b'_k )    (22)

    a_l''' = f(z_l''') = f(W_l'' a_{l−1}''' + b_l'')         (23)

The superscript "'''" of a parameter in equations (22)–(23) denotes the forward-propagation updated value of that parameter in the estimation stage.

Finally, the estimation model outputs the state of health of the battery at that moment.

The method above is now applied to a specific embodiment to demonstrate its implementation process and technical effects.

Example

In this embodiment, the specific steps are as follows:

Step (1): obtain the input data of the convolutional neural network.

a. Three brand-new SONY US18650VTC6 cells and three brand-new FST2000 cells are selected; for each model, complete overcharge and over-discharge aging experiments and 75 normal-speed aging cycles are carried out. One discarded SONY US18650VTC6 cell and one discarded FST2000 cell, each close to end of life, are selected and subjected to normal-speed aging experiments, consuming the battery capacity through cycles of constant-current charging, constant-voltage charging and constant-current discharging until the state of health drops below 80%, which takes roughly 35–40 cycles. This yields 8 data sets in total. The upper and lower cut-off voltages of normal-speed aging are 4.2 V and 2.75 V respectively, the overcharge upper cut-off voltage is 4.4 V, and the over-discharge lower cut-off voltage is 2 V.

The changes of voltage, current and capacity during a single charge-discharge cycle are shown in Figure 2.

b. Collect the voltage and current values of the battery during the constant-current charging stage of each charge-discharge cycle, and obtain the capacity by coulomb counting. The values of these three variables form a matrix used as the input X of the convolutional neural network, as in equation (1); the dimension of X is 4000×3.

Step (2): design the convolutional neural network algorithm.

a. The network consists of convolutional layers, pooling layers and fully connected layers. The rectified linear unit is chosen as the activation function, applied to the output of every convolutional and pooling layer.

The forward pass of a convolutional layer:

    C_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k_{a,b,c,k} · x_{i',j',c} + b_k )    (2)

    i' = (i−1)·h_s + a                                       (3)

    j' = (j−1)·w_s + b                                       (4)

where k is the number of convolution kernels in the layer, i.e. the number of channels of the output matrix; C_{i,j,k} is the value at row i, column j of the k-th channel of the output matrix; b_k is the bias value; h_k, w_k and c_k are the height, width and number of channels of the convolution kernel; h_s and w_s are the strides of the kernel in the height and width directions as it scans the input matrix; x_{i',j',c} is the value at row i', column j' of channel c of the input matrix; k_{a,b,c,k} is the value at row a, column b of channel c of the k-th convolution kernel; and f is the activation function.

The output dimensions of a convolutional layer are calculated as:

    w_out = (w_in − w_k + 2·w_p) / w_s + 1                   (5)

    h_out = (h_in − h_k + 2·h_p) / h_s + 1                   (6)

where w_k and h_k are the width and height of the convolution kernel; w_s and h_s are the strides of the kernel in the width and height directions as it scans the input matrix; w_in, w_out, h_in and h_out are the widths and heights of the input and output matrices; and w_p and h_p are the numbers of zero elements padded symmetrically on the left/right and top/bottom of the input matrix, preventing the boundary information of the matrix from being lost under repeated convolution.

The forward pass of the max-pooling layer:

    M_{i,j,k} = max_{0≤σ1<e_1, 0≤σ2<e_2}  C_{e_i+σ1, e_j+σ2, k}    (7)

where M_{i,j,k} is the value at row i, column j of the k-th channel of the pooling-layer output, and C_{e_i+σ1, e_j+σ2, k} is the value at row e_i+σ1, column e_j+σ2 of the k-th channel of the preceding convolutional layer. The formula applies one max-pooling operation to each e_1×e_2 region of the feature map.

The forward pass of the fully connected layer:

    a_l = f(z_l) = f(W_l a_{l−1} + b_l)                      (8)

    (z_l)_j = Σ_i (W_l)_{j,i} (a_{l−1})_i + (b_l)_j          (9)

where f(x) is the activation function, W_l and b_l are the weight and bias values of layer l, and a_{l−1} is the input to layer l; in a convolutional layer the corresponding operation is the convolution computation.

The kernel sizes of the convolutional layers and the dimension changes through the max-pooling and fully connected layers are shown in Figure 1. The input X has dimension 4000×3×1. It first passes through 6 convolution kernels of size 5×2×1; taking w_p = 1 and h_p = 0, equations (5) and (6) give a convolutional-layer output of dimension 3996×4×6. A max-pooling layer then applies one pooling operation per 4×2 region, giving an output of 999×2×6. Next, 16 kernels of size 5×1×6 produce an output of 995×2×16, and a pooling operation per 5×1 region gives 199×2×16, which is flattened into a column vector of dimension 6368. This vector then passes through 3 fully connected layers whose dimensions become 80 and 40 in turn, and the network finally outputs a single neuron.
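The layer dimensions described above can be verified directly from equations (5)–(6):

```python
def out_size(n_in, k, pad, stride=1):
    # Eqs. (5)-(6) along one axis
    return (n_in - k + 2 * pad) // stride + 1

# conv1: 4000x3 input, 5x2 kernel, hp = 0, wp = 1 -> 3996x4 (6 channels)
h1, w1 = out_size(4000, 5, 0), out_size(3, 2, 1)
# pool1: 4x2 regions -> 999x2
h2, w2 = h1 // 4, w1 // 2
# conv2: 5x1 kernel, no padding -> 995x2 (16 channels)
h3, w3 = out_size(h2, 5, 0), out_size(w2, 1, 0)
# pool2: 5x1 regions -> 199x2; flatten with 16 channels
h4, w4 = h3 // 5, w3 // 1
flat = h4 * w4 * 16
```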

Backpropagation through a convolutional layer:

    δ_{l−1} = δ_l ∗ rot180(K_l) ⊙ f'(z_{l−1})                (10)

where rot180 denotes rotating the convolution kernel by 180 degrees, and δ_l is the derivative of the objective function with respect to the output of layer l.

The objective function J is established as:

    J = 1/(2n) · Σ_{i=1}^{n} ( ŷ_i(x) − y_i(x) )² + λ/(2n) · Σ_W W²    (12)

where W and b are the weights and biases inside the network, ŷ_i(x) is the output of the output layer, y_i(x) is the true label value, n is the number of samples, and λ is the regularization parameter; λ = 0.001 is taken.

The weights and biases are updated as:

    v_{j+1} = γ·v_j − α · ∂/∂θ_j [ 1/(2m) · Σ_{i=1}^{m} ( ŷ_i(x)_j − y_i(x)_j )² ]    (13)

    θ_{j+1} = θ_j + v_{j+1}                                  (14)

where m is the number of samples in a mini-batch, ŷ_i(x)_j is the output for the i-th input of the mini-batch at iteration j, y_i(x)_j is the corresponding true value, v_j is the accumulated momentum (velocity) term, θ_j is the internal parameter at iteration j, α is the learning rate, and γ is the momentum value. Here m = 64 and γ = 0.9; the setting of the learning rate is described in step c.

The model is evaluated with accuracy and root-mean-square error:

    Accuracy = ( 1 − 1/n · Σ_{i=1}^{n} | ŷ_i − y_i | / y_i ) × 100%

    RMSE = sqrt( 1/n · Σ_{i=1}^{n} ( ŷ_i − y_i )² )
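The two evaluation metrics can be sketched as follows; the RMSE has its standard form, while the accuracy formula (100% minus the mean absolute relative error) is an assumption, since the original images are not reproduced.

```python
import numpy as np

def rmse(y_hat, y):
    """Root-mean-square error between estimates and true SOH values."""
    y_hat, y = np.asarray(y_hat, float), np.asarray(y, float)
    return float(np.sqrt(np.mean((y_hat - y) ** 2)))

def accuracy(y_hat, y):
    """Assumed form: 100% minus the mean absolute relative error."""
    y_hat, y = np.asarray(y_hat, float), np.asarray(y, float)
    return float((1.0 - np.mean(np.abs(y_hat - y) / y)) * 100.0)

r = rmse([0.9, 1.1], [1.0, 1.0])
a = accuracy([0.9, 1.1], [1.0, 1.0])
```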

b. A strategy is added on top of the network to prevent overfitting: a proportion p of the hidden neurons is temporarily and randomly dropped, while the input and output neurons remain unchanged. The input is propagated forward through the modified network, and the resulting loss is back-propagated through the same modified network. After a mini-batch of training samples has gone through this process, the parameters of the neurons that were not dropped are updated by stochastic gradient descent. Each hidden neuron stops computing with probability p = 0.5.

c. A piecewise-constant decay strategy adjusts the learning rate during training rather than holding it fixed, so that the model converges quickly. The initial learning rate is 1×10⁻⁵, decaying to 0.7 times its value after every fixed number of iterations.

Step (3): model pre-training.

a. Transfer learning first trains a base network on a base data set and base task, then fine-tunes the learned features and transfers them to a second, target network, which is trained with the target data set and target task. Since battery aging under accelerated conditions and at normal speed shows a certain similarity, the accelerated-aging data can be used to pre-train a base model for normal-speed aging. The specific transfer-learning strategy of the present invention is shown in Figure 3.

The accelerated-aging experiments of step (1) yield cycle data for batteries whose lifetime has been shortened by overcharge and over-discharge. All cycles are divided into a series of mini-batches of training samples and fed into the network batch by batch, with the parameters updated by stochastic gradient descent, producing a preliminary pre-trained model. After the iterative learning the internal network parameters are fixed and their values are saved: the kernel values k_{a,b,c,k}, the bias values b_k, and the fully connected layer weights W_l and biases b_l.

b. If the model is pre-trained on accelerated-aging data only and then used directly in the fine-tuning and testing of step (4), it performs poorly at test time on the last part of the battery's life cycle and the SOH estimate becomes quite inaccurate. This is probably because, near end of life, aging under accelerated and normal conditions differs markedly, so the learned features do not generalize well. Therefore, after the pre-training stage on accelerated-aging data, a second pre-training stage is run using the normal-speed aging data of a portion of the discarded batteries.

Similarly, the normal-speed aging experiments on discarded batteries in step (1) yield data for the last part of the discarded batteries' life cycle. These cycles are divided into a series of mini-batches of training samples and fed into the network batch by batch; the parameters are updated by stochastic gradient descent, further adjusting the preliminary pre-trained model to obtain a new pre-trained model. Specifically, the parameters saved in the previous stage are used to initialize the new model; after the iterative learning the internal network parameters are again fixed and their values are saved: k'_{a,b,c,k}, b'_k, W_l' and b_l'.

The forward propagation and parameter update at this stage are as follows, where θ'_j comprises the kernel values k'_{a,b,c,k}, the bias values b'_k, and the fully connected layer weights W_l' and biases b_l':

    C'_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k} · x_{i',j',c} + b'_k )    (16)

    a_l' = f(z_l') = f(W_l' a_{l−1}' + b_l')                 (17)

    v'_{j+1} = γ·v'_j − α·∂J/∂θ'_j ,   θ'_{j+1} = θ'_j + v'_{j+1}    (18)

Four groups of experiments are designed with the data obtained in step (1); each group goes through the preliminary pre-training stage and the further pre-training stage. The data used in each stage are listed in Table 1.

Table 1: data used in each stage of the 4 groups of experiments


Step (4): model fine-tuning and online estimation.

a. The normal-speed aging experiments of step (1) yield the first 15% of the batteries' cycle data, which is divided into a series of mini-batches of training samples and fed into the network batch by batch; the parameters are adjusted on top of the pre-trained model obtained in step (3), and the fine-tuned model is used to estimate the battery's state of health online at any moment. Unlike the second pre-training stage, the fine-tuning stage does not update all parameters: the convolutional-layer parameters are kept fixed, i.e. k'_{a,b,c,k} and b'_k are unchanged, and only the fully connected layer weights W_l' and biases b_l' are changed, being updated to W_l'' and b_l''. The values of these parameters are saved as the final test model. The data used in the fine-tuning stage are listed in Table 1.

The forward propagation and parameter update at this stage are as follows, where θ''_j comprises the fully connected layer weights W_l'' and biases b_l'':

    C''_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k} · x_{i',j',c} + b'_k )    (19)

    a_l'' = f(z_l'') = f(W_l'' a_{l−1}'' + b_l'')            (20)

    v''_{j+1} = γ·v''_j − α·∂J/∂θ''_j ,   θ''_{j+1} = θ''_j + v''_{j+1}    (21)

b. For online estimation, only one constant-current charging experiment on the battery at the moment of interest is needed. The matrix formed by its voltage, current and capacity is used as the model input X, and the forward propagation of the network uses the parameters saved in the fine-tuning stage: k'_{a,b,c,k}, b'_k, W_l'' and b_l''. After the computation of equations (2)–(9) the battery's state of health at that moment is output; no back-propagation is needed during testing.

The forward propagation at this stage is:

    C'''_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k} · x_{i',j',c} + b'_k )    (22)

    a_l''' = f(z_l''') = f(W_l'' a_{l−1}''' + b_l'')         (23)

Figures 4 and 5 show part of the experimental results, corresponding to Experiment 2 and Experiment 4 in Table 1 respectively; the triangles represent the true SOH and the circles the online estimates. The results show that the accuracy of the present invention reaches 99.56% and 99.01%, with root-mean-square errors of 0.435% and 1.120%, respectively.

The embodiment described above is only a preferred solution of the present invention and is not intended to limit it. Those of ordinary skill in the relevant technical fields can make various changes and modifications without departing from the spirit and scope of the present invention. Therefore, all technical solutions obtained by equivalent replacement or equivalent transformation fall within the protection scope of the present invention.

Claims (8)

1. A lithium battery health state estimation method based on a convolutional neural network and transfer learning, characterized by comprising the following steps:

S1: acquiring the input data of the convolutional neural network by the following method:

S11: selecting a plurality of brand-new lithium batteries of different models, performing an accelerated-aging experiment on each to acquire cycle data, and continuously consuming the battery capacity through cycles of constant-current charging, constant-voltage charging and constant-current discharging until the state of health drops below 80%;

meanwhile, obtaining discarded lithium batteries of the same models that are close to the end of their service life, performing normal-speed aging experiments on each to collect cycle data, and consuming the battery capacity through the constant-current charging, constant-voltage charging and constant-current discharging process until the state of health drops below 80%;

acquiring brand-new lithium batteries of the same models, performing a normal-speed aging experiment on each to acquire cycle data, carrying out charge-discharge cycles through the constant-current charging, constant-voltage charging and constant-current discharging process, and acquiring the first 15% of the cycle data of the battery's service life;

S12: calculating the battery capacity from the voltage and current values of the battery in the constant-current charging stage of the different aging experiments collected in S11, and forming a matrix from the values of the three variables voltage, current and battery capacity as the input data of the convolutional neural network;

s2: constructing a convolutional neural network model, wherein the whole network comprises convolutional layers, pooling layers and full-connection layers, and selecting a modified linear unit as an activation function to be connected with the output of each convolutional layer and each pooling layer;

s3: pre-training the model constructed in S2, wherein the specific method is S31-S32:

S31: dividing the input data obtained from the accelerated-aging experiments on the brand-new lithium batteries in S1 into several mini-batches of training samples, inputting them batch by batch into the neural network constructed in S2, and updating the parameters by stochastic gradient descent during the iterative learning to obtain a first pre-trained model; saving the parameter values of the first pre-trained model, including the kernel values k_{a,b,c,k}, the bias values b_k, and the fully connected layer weights W_l and biases b_l;

S32: dividing the input data obtained from the normal-speed aging experiments on the discarded lithium batteries in S1 into several mini-batches of training samples, inputting them batch by batch into the neural network trained in S31, performing iterative learning on the basis of the model parameter values saved from the first pre-trained model, and further adjusting the parameters by stochastic gradient descent to obtain a second pre-trained model; saving the parameter values of the second pre-trained model, including the new kernel values k'_{a,b,c,k}, bias values b'_k, and fully connected layer weights W_l' and biases b_l';

The forward propagation and parameter update at this time are as follows:

    C'_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k} · x_{i',j',c} + b'_k )    (16)

    a_l' = f(z_l') = f(W_l' a_{l−1}' + b_l')                 (17)

    v'_{j+1} = γ·v'_j − α·∂J/∂θ'_j ,   θ'_{j+1} = θ'_j + v'_{j+1}    (18)

wherein: k in equation (16) is the number of convolution kernels in the convolutional layer, i.e. the number of channels of the output matrix; C'_{i,j,k} is the value at row i, column j of the k-th channel of the output matrix; x_{i',j',c} is the value at row i', column j' of channel c of the input matrix; b'_k is the bias value; h_k, w_k and c_k are the height, width and number of channels of the convolution kernel; and f is the activation function. In equation (17), f(x) is the activation function, W_l' and b_l' are the weight and bias values of layer l, and a_{l−1}' is the input to layer l;

the model internal parameters θ'_j comprise the kernel values k'_{a,b,c,k}, the bias values b'_k, and the fully connected layer weights W_l' and biases b_l'; the superscript "'" of a parameter in equations (16)–(18) indicates the forward-propagation updated value of that parameter in the pre-training phase; λ is a regularization parameter, α is a learning rate, and γ is a momentum value;

S4: dividing the input data obtained from the normal-speed aging experiments on the brand-new lithium batteries in S1 into several mini-batches of training samples and inputting them batch by batch into the pre-trained model obtained in S3 for iterative learning, keeping the convolutional-layer parameters of the pre-trained model fixed during the iterative learning, i.e. keeping k'_{a,b,c,k} and b'_k unchanged, and updating only the fully connected layer weights W_l' and biases b_l' to W_l'' and b_l''; saving the updated parameters to obtain the final estimation model;

the forward propagation and parameter update at this time are as follows:

C''_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k} · x_{i',j',c} + b'_k )   (19)

a''_l = f(z''_l) = f(W''_l · a''_{l-1} + b''_l)   (20)

θ''_{j+1} = θ''_j + γ·Δθ''_j − (α/m) Σ_{i=1}^{m} ∂(h_{θ''_j}(x_i) − y_i(x)_j)² / ∂θ''_j − α·λ·θ''_j   (21)

wherein: internal parameter of model theta' j Weight W including full connection layer l And bias b l "; the superscript "" of a parameter in equations (19) - (21) indicates that the parameter is propagating forward to update values during the fine-tuning phase;

S5: performing a constant-current charging experiment on the lithium battery to be estimated to obtain voltage, current and capacity test values of the battery, and taking the matrix formed by the three as the input X of the estimation model obtained in S4; the network forward propagation uses the parameters k'_{a,b,c,k}, b'_k, W''_l and b''_l saved in S4, and the forward propagation at this time is as follows:

C'''_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k} · x_{i',j',c} + b'_k )   (22)

a'''_l = f(z'''_l) = f(W''_l · a'''_{l-1} + b''_l)   (23)

the superscript ""' of the parameter in equations (22) to (23) indicates the forward propagation update value of the parameter in the estimation stage;

finally, the estimation model outputs the state of health of the battery at that time.

2. The method for estimating the state of health of the lithium battery based on the convolutional neural network and the transfer learning as claimed in claim 1, wherein in S1, the accelerated aging test refers to overcharging and overdischarging the battery by raising the upper voltage limit of constant-current charging and lowering the lower voltage limit of constant-current discharging.

3. The method for estimating the health state of a lithium battery based on a convolutional neural network and transfer learning as claimed in claim 1, wherein in S1, the normal speed aging test of the waste lithium battery is performed for 35-40 charge and discharge cycles.

4. The method for estimating the health state of a lithium battery based on a convolutional neural network and transfer learning as claimed in claim 1, wherein in S1, 75 charge and discharge cycles are performed in a normal speed aging experiment of a brand new lithium battery.

5. The lithium battery health state estimation method based on the convolutional neural network and the transfer learning of claim 1, wherein in S1, data collected by different aging experiments are respectively constructed as model inputs X:

X = [ V_1 I_1 C_1 ; V_2 I_2 C_2 ; … ; V_K I_K C_K ]   (1)

i.e., X is a K×3 matrix whose ith row is (V_i, I_i, C_i);

wherein: K is the number of sampling points in the constant-current charging stage, and V_i, I_i and C_i are respectively the voltage, current and capacity at the ith sampling point.
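The input construction of equation (1) amounts to stacking the three measured series column-wise into a K×3 matrix. A minimal NumPy sketch, with invented sample values (K = 4) purely for illustration:

```python
import numpy as np

# Hypothetical readings from a constant-current charging stage
# (values are illustrative, not from the patent's experiments).
voltage = [3.60, 3.72, 3.85, 3.98]   # V_i (V)
current = [1.50, 1.50, 1.50, 1.50]   # I_i (A), constant-current stage
capacity = [0.10, 0.45, 0.80, 1.15]  # C_i (Ah)

# Stack the three series column-wise into the K x 3 model input X,
# row i holding (V_i, I_i, C_i).
X = np.column_stack([voltage, current, capacity])
print(X.shape)  # (4, 3)
```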

6. The lithium battery state of health estimation method based on convolutional neural network and transfer learning of claim 1, wherein in the convolutional neural network model, the calculation of forward propagation of convolutional layer is as follows:

C_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k_{a,b,c,k} · x_{i',j',c} + b_k )   (2)

i' = (i − 1) × h_s + a   (3)

j' = (j − 1) × w_s + b   (4)

where K is the number of convolution kernels in the convolutional layer, i.e., the number of channels in the output matrix; C_{i,j,k} is the value at row i, column j of the k-th channel of the output matrix; b_k is a bias value; h_k, w_k and c_k are respectively the height, width and channel number of the convolution kernel; w_s and h_s are the step sizes in width and height with which the convolution kernel scans the input matrix; x_{i',j',c} is the value at row i', column j' in the c-th channel of the input matrix; k_{a,b,c,k} is the value at row a, column b in the c-th channel of the k-th convolution kernel; and f is the activation function;
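The forward pass of equations (2)-(4) can be sketched as a direct NumPy loop. This is an illustration only: ReLU is an assumed activation, padding is omitted, and 1-based patent indices become 0-based slices:

```python
import numpy as np

def conv_forward(x, kernels, b, hs, ws, f=lambda z: np.maximum(z, 0.0)):
    """Forward pass of one convolutional layer in the spirit of (2)-(4).

    x:       input of shape (h_in, w_in, c_in)
    kernels: shape (h_k, w_k, c_k, K) -- K convolution kernels
    b:       shape (K,), one bias per kernel
    hs, ws:  vertical/horizontal strides
    f:       activation function (ReLU here, an assumed choice)
    """
    h_k, w_k, c_k, K = kernels.shape
    h_in, w_in, _ = x.shape
    h_out = (h_in - h_k) // hs + 1
    w_out = (w_in - w_k) // ws + 1
    C = np.zeros((h_out, w_out, K))
    for k in range(K):
        for i in range(h_out):
            for j in range(w_out):
                # i' = (i-1)*h_s + a, j' = (j-1)*w_s + b (1-based) become
                # the 0-based window below.
                patch = x[i*hs:i*hs + h_k, j*ws:j*ws + w_k, :]
                C[i, j, k] = f(np.sum(patch * kernels[:, :, :, k]) + b[k])
    return C
```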

the dimensions of the convolutional layer output are calculated as follows:

w_out = (w_in − w_k + 2·w_p) / w_s + 1   (5)

h_out = (h_in − h_k + 2·h_p) / h_s + 1   (6)

wherein w_k and h_k are respectively the width and height of the convolution kernel; w_s and h_s are the step sizes of the convolution kernel in the width and height directions when scanning the input matrix; w_in and w_out are respectively the widths of the input and output matrices; h_in and h_out are respectively their heights; and w_p and h_p are the numbers of zero elements symmetrically padded on the left/right and top/bottom of the input matrix, which prevents the boundary information of the matrix from being lost as convolutions proceed;
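Equations (5)-(6) can be checked with a small helper. Floor division is used here, an assumed common convention when the stride does not divide the span evenly:

```python
def conv_output_size(w_in, h_in, w_k, h_k, w_s, h_s, w_p=0, h_p=0):
    """Output width/height per equations (5)-(6): kernel (w_k, h_k),
    strides (w_s, h_s), symmetric zero padding (w_p, h_p) per side."""
    w_out = (w_in - w_k + 2 * w_p) // w_s + 1
    h_out = (h_in - h_k + 2 * h_p) // h_s + 1
    return w_out, h_out
```

For example, a 5×5 kernel with stride 1 and padding 2 preserves a 28×28 input ("same" convolution), while a 3×3 kernel with stride 2 and no padding shrinks it to 13×13.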

the forward propagation for the maximum pooling layer is calculated as:

M_{i,j,k} = max_{1≤σ_1≤e_1, 1≤σ_2≤e_2} C_{e_1·i+σ_1, e_2·j+σ_2, k}   (7)

the above equation (7) shows that the feature map is divided into i×j regions of size e_1×e_2, and one maximum pooling operation is performed on the feature points of each e_1×e_2 region; wherein M_{i,j,k} is the value at row i, column j of the k-th channel of the pooling layer output, C_{e_1·i+σ_1, e_2·j+σ_2, k} is the value at row e_1·i+σ_1, column e_2·j+σ_2 of the k-th channel of the preceding convolutional layer, and (e_1·i, e_2·j) is the upper-left corner coordinate of the e_1×e_2 region corresponding to row i, column j of the feature map;

the forward propagation of the fully connected layer is calculated as:

a_l = f(z_l) = f(W_l · a_{l-1} + b_l)   (8)

f(x) = max(0, x)   (9)

where f (x) is an activation function, W l And b l Respectively, the weight and offset value of the l-th layer, a l Is an input to the l-th layer;

the back propagation of the convolutional layer is:

δ^{l−1} = δ^l ∗ rot180(W^l) ⊙ f'(z^{l−1})   (10)

where rot180 denotes rotating the convolution kernel by 180 degrees, ∗ denotes convolution, ⊙ denotes element-wise multiplication, and δ^l represents the derivative of the objective function with respect to the output of the l-th layer;

the objective function J for establishing the neural network is:

J = (1/n) Σ_{i=1}^{n} (h_{W,b}(x_i) − y_i(x))² + (λ/2) Σ W²   (12)

wherein h_{W,b}(x_i) is the output of the output layer, y_i(x) is the true label value, n is the sample number, λ is the regularization parameter, and W and b are respectively the weights and offset values inside the network;

updating the network internal parameters θ_j, which comprise the weights W and offsets b, according to the objective function (12):

Δθ_{j+1} = γ·Δθ_j − (α/m) Σ_{i=1}^{m} ∂(h_{θ_j}(x_i) − y_i(x)_j)² / ∂θ_j   (13)

θ_{j+1} = θ_j + Δθ_{j+1}   (14)

where m represents the number of samples contained in a small batch, h_{θ_j}(x_i) represents the output value for the ith input of the small batch at the jth iteration, y_i(x)_j is the corresponding true value, θ_j is the internal parameter at the jth iteration, α is the learning rate, and γ is the momentum value.
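The momentum update of equations (13)-(14) can be sketched as a single step function. The exact placement of the regularization term relative to the momentum term is an assumption, since the patent's equations are reproduced here from their surrounding definitions:

```python
def sgd_momentum_step(theta, v, grad, alpha, gamma, lam=0.0):
    """One mini-batch update in the spirit of (13)-(14): the velocity v
    accumulates a gamma-weighted history of past steps, and the current
    (optionally L2-regularized) mini-batch gradient is scaled by the
    learning rate alpha. Returns the new (theta, v)."""
    v = gamma * v - alpha * (grad + lam * theta)
    return theta + v, v
```

With γ = 0 this reduces to plain stochastic gradient descent; a nonzero γ lets consecutive consistent gradients accelerate the parameter along their shared direction.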

7. The lithium battery health state estimation method based on the convolutional neural network and the transfer learning of claim 1, wherein a strategy is added to the convolutional neural network model to prevent overfitting: a certain proportion of hidden neurons in the network are temporarily and randomly deleted while the input and output neurons are kept unchanged; the input is propagated forward through the modified network, and the resulting loss is then propagated backward through the modified network; after a batch of training samples completes this process, the parameters of the neurons that were not deleted are updated by stochastic gradient descent.
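The random-deletion procedure of claim 7 is the standard dropout technique; a minimal forward-pass sketch follows. Rescaling the surviving activations ("inverted dropout") is an assumed common variant, since the claim does not specify how test-time and training-time activations are matched:

```python
import numpy as np

def dropout_forward(a, p_drop, rng):
    """Training-time dropout in the spirit of claim 7: a random fraction
    p_drop of the activations in `a` is temporarily zeroed (the deleted
    neurons); the survivors are rescaled by 1/(1 - p_drop) so the
    expected activation matches the unmodified network."""
    mask = (rng.random(a.shape) >= p_drop).astype(a.dtype)
    return a * mask / (1.0 - p_drop), mask
```

The backward pass then multiplies incoming gradients by the same mask, so only the surviving neurons' parameters receive updates.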

8. The lithium battery health state estimation method based on the convolutional neural network and the transfer learning of claim 1, wherein a piecewise constant decay strategy is adopted in the convolutional neural network model: instead of being fixed, the learning rate is adjusted during training so that the model converges quickly.
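The piecewise constant decay of claim 8 can be sketched as a lookup over epoch boundaries; the boundary epochs and rate values used below are illustrative assumptions, as the patent does not publish its schedule:

```python
def piecewise_constant_lr(epoch, boundaries, rates):
    """Piecewise-constant learning-rate decay: the rate is held at
    rates[i] until `epoch` reaches boundaries[i], then drops to the
    next value; len(rates) == len(boundaries) + 1."""
    for b, r in zip(boundaries, rates):
        if epoch < b:
            return r
    return rates[-1]
```

For example, with boundaries [10, 20] and rates [0.1, 0.01, 0.001], training runs at 0.1 for the first 10 epochs, 0.01 for the next 10, and 0.001 thereafter.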

CN202010475482.1A 2020-05-29 2020-05-29 Lithium battery health state estimation method based on convolutional neural network and transfer learning Active CN111638465B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010475482.1A CN111638465B (en) 2020-05-29 2020-05-29 Lithium battery health state estimation method based on convolutional neural network and transfer learning


Publications (2)

Publication Number Publication Date
CN111638465A CN111638465A (en) 2020-09-08
CN111638465B true CN111638465B (en) 2023-02-28

Family

ID=72332399

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010475482.1A Active CN111638465B (en) 2020-05-29 2020-05-29 Lithium battery health state estimation method based on convolutional neural network and transfer learning

Country Status (1)

Country Link
CN (1) CN111638465B (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112345952B (en) * 2020-09-23 2024-07-05 上海电享信息科技有限公司 Power battery aging degree judging method
CN112444748B (en) * 2020-10-12 2024-10-08 武汉蔚来能源有限公司 Battery abnormality detection method, device, electronic device and storage medium
CN112231975B (en) * 2020-10-13 2024-11-01 中国铁路上海局集团有限公司南京供电段 Data modeling method and system based on reliability analysis of railway power supply equipment
CN112083337B (en) * 2020-10-22 2023-06-16 重庆大学 A power battery health prediction method for predictive operation and maintenance
CN112666480B (en) * 2020-12-02 2023-04-28 西安交通大学 Battery life prediction method based on characteristic attention of charging process
CN112666479B (en) * 2020-12-02 2023-05-16 西安交通大学 Battery life prediction method based on charge cycle fusion
CN112684346B (en) * 2020-12-10 2023-06-20 西安理工大学 Lithium battery state of health estimation method based on genetic convolutional neural network
CN112834945B (en) * 2020-12-31 2024-06-21 东软睿驰汽车技术(沈阳)有限公司 Evaluation model establishment method, battery health state evaluation method and related products
CN112798960B (en) * 2021-01-14 2022-06-24 重庆大学 A battery pack remaining life prediction method based on transfer deep learning
CN113406496B (en) * 2021-05-26 2023-02-28 广州市香港科大霍英东研究院 Battery capacity prediction method, system, device and medium based on model migration
CN113612269B (en) * 2021-07-02 2023-06-27 国网山东省电力公司莱芜供电公司 Method and system for controlling charge and discharge of battery monomer of lead-acid storage battery energy storage station
CN113536676B (en) * 2021-07-15 2022-09-27 重庆邮电大学 Lithium battery health monitoring method based on feature transfer learning
JP7269999B2 (en) * 2021-07-26 2023-05-09 本田技研工業株式会社 Battery model construction method and battery deterioration prediction device
CN113740736B (en) * 2021-08-31 2024-04-02 哈尔滨工业大学 A deep network adaptive SOH estimation method for electric vehicle lithium batteries
CN113777499A (en) * 2021-09-24 2021-12-10 山东浪潮科学研究院有限公司 Lithium battery capacity estimation method based on convolutional neural network
CN113721151B (en) * 2021-11-03 2022-02-08 杭州宇谷科技有限公司 Battery capacity estimation model and method based on double-tower deep learning network
CN114578250B (en) * 2022-02-28 2022-09-02 广东工业大学 Lithium battery SOH estimation method based on double-triangular structure matrix
CN115015760B (en) * 2022-05-10 2024-06-14 香港中文大学(深圳) Lithium battery health status assessment method based on neural network and transfer ensemble learning
CN114720882B (en) * 2022-05-20 2023-02-17 东南大学溧阳研究院 Reconstruction method of maximum capacity fading curve of lithium ion battery
CN115184805B (en) * 2022-06-21 2025-01-17 东莞新能安科技有限公司 Battery state of health acquisition method, apparatus, device and computer program product
CN115267550B (en) * 2022-07-26 2025-02-11 西安秉绎新能源有限公司 A method for detecting abnormal individual battery packs based on convolutional neural network
JP7521154B1 (en) 2023-03-08 2024-07-24 Ec Sensing株式会社 Secondary battery system and method for predicting abnormal deterioration of secondary battery
CN117054892B (en) * 2023-10-11 2024-02-27 特变电工西安电气科技有限公司 Evaluation method, device and management method for battery state of energy storage power station
CN117706383A (en) * 2023-12-11 2024-03-15 山东大学 State-of-charge estimation method for sodium-ion batteries based on variable learning rate multilayer perceptron
CN118226280B (en) * 2024-05-24 2024-07-30 云储新能源科技有限公司 Battery aging evaluation method based on multi-source multi-scale high-dimensional state space modeling

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034045A (en) * 2018-07-20 2018-12-18 中南大学 A kind of leucocyte automatic identifying method based on convolutional neural networks
CN109523013A (en) * 2018-10-15 2019-03-26 西北大学 A kind of air particle pollution level estimation method based on shallow-layer convolutional neural networks
CN109784480A (en) * 2019-01-17 2019-05-21 武汉大学 A State Estimation Method of Power System Based on Convolutional Neural Network
CN109918752A (en) * 2019-02-26 2019-06-21 华南理工大学 Mechanical fault diagnosis method, equipment and medium based on transfer convolutional neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11637331B2 (en) * 2017-11-20 2023-04-25 The Trustees Of Columbia University In The City Of New York Neural-network state-of-charge and state of health estimation


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Danhua Zhou; Zhanying Li; Jiali Zhu; Haichuan Zhang; Lin Hou. State of Health Monitoring and Remaining Useful Life Prediction of Lithium-Ion Batteries Based on Temporal Convolutional Network. IEEE Access, vol. 8, 2020. *
Microstrong0305. A Survey of Convolutional Neural Networks (CNN). CSDN Blog, 2018. *
Sheng Shen; Mohammadkazem Sadoughi; Xiangyi Chen; Mingyi Hong; Chao Hu. A deep learning method for online capacity estimation of lithium-ion batteries. Journal of Energy Storage, 2019. *
Yohwan Choi; Seunghyoung Ryu; Kyungnam Park; Hongseok Kim. Machine Learning-Based Lithium-Ion Battery Capacity Estimation Exploiting Multi-Channel Charging Profiles. IEEE Access, vol. 7, 2019. *
蜜丝特湖. The Pooling Layer and Its Formula Derivation. CSDN Blog, 2018. *

Also Published As

Publication number Publication date
CN111638465A (en) 2020-09-08

Similar Documents

Publication Publication Date Title
CN111638465B (en) 2023-02-28 Lithium battery health state estimation method based on convolutional neural network and transfer learning
CN108519556A (en) 2018-09-11 A Lithium-ion Battery SOC Prediction Method Based on Recurrent Neural Network
CN110888059B (en) 2022-04-19 Charge state estimation algorithm based on improved random forest combined volume Kalman
CN113917337A (en) 2022-01-11 Battery state of health estimation method based on charging data and LSTM neural network
CN111832220A (en) 2020-10-27 A method for estimating state of health of lithium-ion battery based on codec model
Li et al. 2020 CNN and transfer learning based online SOH estimation for lithium-ion battery
CN111458646A (en) 2020-07-28 A Lithium Battery SOC Estimation Method Based on PSO-RBF Neural Network
CN114726045B (en) 2022-09-30 A Lithium Battery SOH Estimation Method Based on IPEA-LSTM Model
CN113702843A (en) 2021-11-26 Lithium battery parameter identification and SOC estimation method based on suburb optimization algorithm
WO2024016496A1 (en) 2024-01-25 Method and apparatus for estimating soh of lithium battery
CN112782594B (en) 2022-09-20 A data-driven algorithm considering internal resistance to estimate the SOC of lithium batteries
CN115963407A (en) 2023-04-14 A lithium battery SOC estimation method based on ICGWO optimized ELM
CN112163372A (en) 2021-01-01 SOC estimation method of power battery
CN114167295B (en) 2022-08-30 Lithium ion battery SOC estimation method and system based on multi-algorithm fusion
CN115219906A (en) 2022-10-21 Multi-model fusion battery state of charge prediction method and system based on GA-PSO optimization
CN111537888A (en) 2020-08-14 A data-driven SOC prediction method for echelon batteries
CN117949832B (en) 2024-06-18 Battery SOH analysis method based on optimized neural network
CN117471320A (en) 2024-01-30 Battery state of health estimation method and system based on charging fragments
CN117572236A (en) 2024-02-20 A method for estimating the state of charge of lithium batteries based on transfer learning
Peng et al. 2024 A hybrid-aided approach with adaptive state update for estimating the state-of-charge of LiFePO4 batteries considering temperature uncertainties
CN117686937A (en) 2024-03-12 A method for estimating the health status of individual cells in a battery system
Ge et al. 2024 A novel data-driven IBA-ELM model for SOH/SOC estimation of lithium-ion batteries
CN116298916A (en) 2023-06-23 Deep learning cross-domain prediction method for health condition of lithium battery of new energy aircraft
CN116029183A (en) 2023-04-28 A power battery temperature prediction method based on iPSO-LSTM model
CN112257348A (en) 2021-01-22 Method for predicting long-term degradation trend of lithium battery

Legal Events

Date Code Title Description
2020-09-08 PB01 Publication
2020-10-02 SE01 Entry into force of request for substantive examination
2023-02-28 GR01 Patent grant