[Paper Translation] Long-term Blood Pressure Prediction with Deep Recurrent Neural Networks


Original paper: https://arxiv.org/pdf/1705.04524v3


Long-term Blood Pressure Prediction with Deep Recurrent Neural Networks

Abstract— Existing methods for arterial blood pressure (BP) estimation directly map the input physiological signals to output BP values without explicitly modeling the underlying temporal dependencies in BP dynamics. As a result, these models suffer from accuracy decay over time and thus require frequent calibration. In this work, we address this issue by formulating BP estimation as a sequence prediction problem in which both the input and target are temporal sequences. We propose a novel deep recurrent neural network (RNN) consisting of multi-layered Long Short-Term Memory (LSTM) networks, which incorporate (1) a bidirectional structure to access larger-scale context information of the input sequence, and (2) residual connections to allow gradients in the deep RNN to propagate more effectively. The proposed deep RNN model was tested on a static BP dataset, where it achieved root mean square errors (RMSE) of 3.90 and 2.66 mmHg for systolic BP (SBP) and diastolic BP (DBP) prediction respectively, surpassing the accuracy of traditional BP prediction models. On a multi-day BP dataset, the deep RNN achieved RMSEs of 3.84, 5.25, 5.80 and 5.81 mmHg for SBP prediction on the 1st day, 2nd day, 4th day and 6 months after the 1st day, respectively, and 1.80, 4.78, 5.0 and 5.21 mmHg for the corresponding DBP predictions, outperforming all previous models by a notable margin. The experimental results suggest that modeling the temporal dependencies in BP dynamics significantly improves long-term BP prediction accuracy.

I. INTRODUCTION

As the leading risk factor for cardiovascular diseases (CVD) [1], high blood pressure (BP) is commonly used as a critical criterion for diagnosing and preventing CVD. Therefore, accurate and continuous BP monitoring in people's daily life is imperative for early detection and intervention of CVD. Traditional BP measurement devices, e.g., Omron products, are cuff-based and therefore bulky, uncomfortable to use, and only capable of snapshot measurements. These disadvantages restrict the use of cuff-based devices for long-term and continuous BP measurement, which is essential for nighttime monitoring and precise diagnosis of different CVD symptoms.

A key feature of our cardiovascular system is its complex dynamic self-regulation, which involves multiple feedback control loops that respond to BP variation [2]. This mechanism gives BP dynamics a temporal dependency nature. Accordingly, such dependency is critical for continuous BP prediction and, in particular, for long-term BP prediction.

Existing methods for cuffless and continuous BP estimation fall into two groups: physiological models, e.g., the pulse transit time model [3][4], and regression models, such as decision trees and support vector regression [5][6]. These models suffer from accuracy decay over time, especially for multi-day continuous BP prediction. This limitation has become the bottleneck that prevents their use in practical applications. Notably, the aforementioned models directly map the present input to the target while ignoring the important temporal dependencies in BP dynamics, which could be the root of the long-term inaccuracy.

Compared with static BP prediction, multi-day BP prediction is generally much more challenging. Due to the complex regulation mechanisms of the human body, multi-day BP dynamics have more intricate temporal dependencies and a larger variation range. In this paper, we formulate BP prediction as a sequence learning problem and propose a novel deep RNN model, which proves very effective for modeling long-range dependencies in BP dynamics and achieves state-of-the-art accuracy on multi-day continuous BP prediction.

Peng Su, Xiao-Rong Ding, Yuan-Ting Zhang, Jing Liu and Ni Zhao are with the Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong (psu@ee.cuhk.edu.hk; nzhao@ee.cuhk.edu.hk). Fen Miao is with the Key Laboratory for Health Informatics of the Chinese Academy of Sciences, Shenzhen Institutes of Advanced Technology, Shenzhen, China.

Published as a conference paper at IEEE International Conference on Biomedical and Health Informatics (BHI) 2018.

II. THE MODEL

The goal of arterial BP prediction is to use multiple temporal physiological signals to predict a BP sequence. Let $X_{T}=[x_{1},x_{2},\dots,x_{T}]$ be the input features extracted from electrocardiography (ECG) and photoplethysmogram (PPG) signals, and let $Y_{T}=[y_{1},y_{2},\dots,y_{T}]$ denote the target BP sequence. The conditional probability $p(Y_{T}\mid X_{T})$ is factorized as:

$$
p(Y_{T}\mid X_{T})=\prod_{t=1}^{T}p(y_{t}\mid h_{t}),
$$

where $h_{t}$ can be interpreted as the hidden state of the BP dynamic system; it is generated from the previous hidden state $h_{t-1}$ and the current input $x_{t}$ as:

$$
h_{t}=f(h_{t-1},x_{t}).
$$
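Concretely, this formulation amounts to a per-time-step regression on top of a recurrent state. The following minimal PyTorch sketch illustrates the setup; all shapes and sizes are illustrative assumptions, and the 3-component target anticipates the SBP/DBP/MBP multi-task setup described later, not values from the paper.

```python
import torch

# Assumed shapes: N sequences of T time steps, D features per step
# extracted from ECG/PPG, and 3 BP targets (SBP, DBP, MBP) per step.
N, T, D = 32, 8, 7
X = torch.randn(N, T, D)    # X_T = [x_1, ..., x_T], batched
Y = torch.randn(N, T, 3)    # Y_T = [y_1, ..., y_T], batched

# h_t = f(h_{t-1}, x_t): any recurrent cell realizes this recursion.
rnn = torch.nn.LSTM(input_size=D, hidden_size=64, batch_first=True)
head = torch.nn.Linear(64, 3)
H, _ = rnn(X)               # H[:, t, :] plays the role of the hidden state h_t
Z = head(H)                 # p(y_t | h_t) modeled as a regression, shape (N, T, 3)
```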

Figure 1 illustrates an overview of our proposed deep RNN model. The deep RNN consists of a bidirectional LSTM at the bottom layer and a stack of multi-layered Long Short-Term Memory (LSTM) networks with residual connections. The full network was trained with backpropagation through time [7] to minimize the difference between the BP prediction and the ground truth.


Fig. 1: DeepRNN architecture. Each rectangular box is an LSTM cell. The green dashed box at bottom is a bidirectional LSTM layer consisting of forward (orange) and backward (green) LSTM. The orange dashed box depicts the LSTM layer with residual connections.

A. Bidirectional LSTM Structure

First, we introduce the basic block of our deep RNN model: a one-layer bidirectional Long Short-Term Memory (LSTM) network. LSTM [8] was designed to address the vanishing gradient problem of conventional RNNs by introducing a memory cell state $c_{t}$ and multiple gating mechanisms into the standard RNN hidden state transition process.
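The gate equations themselves (Equations 3-7 in the original paper) are not reproduced in this excerpt. For reference, one standard LSTM formulation consistent with the surrounding description is:

$$
\begin{aligned}
f_{t} &= \sigma\big(W_{f}[h_{t-1},x_{t}]+b_{f}\big)\\
i_{t} &= \sigma\big(W_{i}[h_{t-1},x_{t}]+b_{i}\big)\\
o_{t} &= \sigma\big(W_{o}[h_{t-1},x_{t}]+b_{o}\big)\\
c_{t} &= f_{t}\odot c_{t-1}+i_{t}\odot\tanh\big(W_{c}[h_{t-1},x_{t}]+b_{c}\big)\\
h_{t} &= o_{t}\odot\tanh(c_{t}),
\end{aligned}
$$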

where $f$, $i$ and $o$ are the forget gate, input gate and output gate that control how much information is forgotten, accumulated, or outputted, respectively. The $W$ and $b$ terms denote weight matrices and bias vectors. $\sigma$ and $\tanh$ stand for element-wise application of the logistic sigmoid and hyperbolic tangent functions, and $\odot$ denotes element-wise multiplication.

Conventional LSTMs use $h_{t}$ to capture information from the past history $x_{1},\dots,x_{t-1}$ and the present input $x_{t}$. To access a larger-scale temporal context of the input sequence, one can also incorporate nearby future information $x_{t+1},\dots,x_{T}$ to inform the downstream modeling process. A bidirectional RNN (BRNN) [9] realizes this by processing the data in both forward and backward directions with two separate hidden layers, which then merge into the same output layer. As illustrated at the bottom of Figure 1, a BRNN computes a forward hidden state $h_{t}^{f}$, a backward hidden state $h_{t}^{b}$ and the final output $h_{t}$ by the following equations:

$$
\begin{aligned}
h_{t}^{f} &= \mathcal{H}\big(W_{hh}^{f}h_{t-1}^{f}+W_{xh}^{f}x_{t}+b_{f}\big)\\
h_{t}^{b} &= \mathcal{H}\big(W_{hh}^{b}h_{t+1}^{b}+W_{xh}^{b}x_{t}+b_{b}\big)\\
h_{t} &= W^{f}h_{t}^{f}+W^{b}h_{t}^{b}+b_{h},
\end{aligned}
$$

where $\mathcal{H}$ is implemented by Equations 3-7.
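A minimal PyTorch sketch of such a bidirectional layer follows (sizes assumed, not from the paper). Note that a single linear layer over the concatenated forward and backward states reproduces the weighted merge $W^{f}h_{t}^{f}+W^{b}h_{t}^{b}+b_{h}$.

```python
import torch
import torch.nn as nn

class BiLSTMLayer(nn.Module):
    """Bidirectional LSTM layer with a learned merge of the two directions."""
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        # bidirectional=True runs forward and backward passes and
        # concatenates h_t^f and h_t^b along the feature dimension.
        self.bilstm = nn.LSTM(input_size, hidden_size,
                              batch_first=True, bidirectional=True)
        # Linear over [h^f; h^b] equals W^f h^f + W^b h^b + b_h.
        self.merge = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, x):                 # x: (N, T, input_size)
        h, _ = self.bilstm(x)             # (N, T, 2 * hidden_size)
        return self.merge(h)              # (N, T, hidden_size)
```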

B. Multi-layered Architecture with Residual Connections

A variety of experimental results [10][11] suggest that RNNs with deep architectures can significantly outperform shallow RNNs, and simply stacking multiple RNN layers readily gains expressive power. However, such a deep network can become difficult to train as it grows deeper, likely due to exploding and vanishing gradient problems [12].

Inspired by the idea of attaching an identity skip connection between adjacent layers, which has shown good performance for training deep neural networks [13][14][15], we incorporate a residual connection from one LSTM layer to the next in our model, as shown in Figure 2.

Fig. 2: LSTM with residual connection.

Let $x_{t}^{i}$, $h_{t}^{i}$, and $\mathcal{H}_{i}$ be the input, hidden state and LSTM function associated with the $i$-th LSTM layer $(i=1,2,\dots,L)$, and let $W^{i}$ denote the corresponding weights of $\mathcal{H}_{i}$. The input to the $i$-th LSTM layer, $x_{t}^{i}$, is element-wise added to this layer's hidden state $h_{t}^{i}$; the sum $x_{t}^{i+1}$ is then fed to the next LSTM layer. The LSTM block with residual connections is implemented by:

$$
\begin{aligned}
h_{t}^{i} &= \mathcal{H}_{i}\left(x_{t}^{i},h_{t-1}^{i};W^{i}\right)\\
x_{t}^{i+1} &= h_{t}^{i}+x_{t}^{i}\\
h_{t}^{i+1} &= \mathcal{H}_{i+1}\left(x_{t}^{i+1},h_{t-1}^{i+1};W^{i+1}\right).
\end{aligned}
$$

The deep RNN model is created by stacking multiple such LSTM blocks on top of each other, with the output of the previous block forming the input of the next. Once the top-layer hidden state is computed, the output $z_{t}$ is obtained by:

$$
z_{t}=\sigma\big(W_{hz}^{L}h_{t}^{L}+W_{xz}^{L}x_{t}^{L}+b^{L}\big).
$$
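Putting the pieces together, a sketch of the overall architecture might look as follows, reusing `BiLSTMLayer` from Section II-A above. The depth, widths, and output handling are assumptions; in particular, the output equation applies $\sigma$, which presumes BP targets normalized to $(0,1)$.

```python
import torch
import torch.nn as nn

class ResidualLSTMBlock(nn.Module):
    """One LSTM layer whose input is added element-wise to its hidden state."""
    def __init__(self, size: int):
        super().__init__()
        self.lstm = nn.LSTM(size, size, batch_first=True)

    def forward(self, x):                 # x: (N, T, size) = x_t^i
        h, _ = self.lstm(x)               # h_t^i = H_i(x_t^i, h_{t-1}^i; W^i)
        return h + x, h                   # x_t^{i+1} = h_t^i + x_t^i

class DeepRNN(nn.Module):
    """Bidirectional bottom layer + residual LSTM stack + output head."""
    def __init__(self, input_size: int, hidden_size: int, num_blocks: int = 3):
        super().__init__()
        assert num_blocks >= 1
        self.bottom = BiLSTMLayer(input_size, hidden_size)
        self.blocks = nn.ModuleList(
            ResidualLSTMBlock(hidden_size) for _ in range(num_blocks))
        # A single Linear over the concatenation [h_t^L; x_t^L] realizes
        # W_hz h_t^L + W_xz x_t^L + b^L; 3 outputs for SBP, DBP, MBP.
        self.head = nn.Linear(2 * hidden_size, 3)

    def forward(self, x):
        x = self.bottom(x)                # x_t^1
        for block in self.blocks:
            x_in = x                      # ends up holding x_t^L
            x, h = block(x)               # ends up holding h_t^L
        # Sigmoid per the output equation; drop it for raw mmHg targets.
        return torch.sigmoid(self.head(torch.cat([h, x_in], dim=-1)))
```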

C. Multi-task Training

Given multiple closely related supervision signals, namely systolic BP (SBP), diastolic BP (DBP) and mean BP (MBP), we adopt a multi-task training strategy to train a single model to predict SBP, DBP and MBP in parallel. Accordingly, the training objective is to minimize the mean squared error (MSE) over the $N$ training samples as follows:

$$
\mathcal{L}\big(\{x_{1:T},y_{1:T}\}_{N}\big)=\frac{1}{N}\sum_{i=1}^{N}\sum_{t=1}^{T}\left\lVert z_{t}-y_{t}\right\rVert^{2}+\lambda\lVert\theta\rVert^{2},
$$

where $y_{t}=[SBP, DBP, MBP]$ represents the ground truth and $z_{t}$ the corresponding prediction, $\lVert\theta\rVert^{2}$ is the $L_{2}$ regularization of the model parameters, and $\lambda$ is the corresponding penalty coefficient. One advantage of multi-task training is that learning to predict the different BP values simultaneously implicitly encodes the quantitative constraints among SBP, DBP and MBP.
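A minimal sketch of this objective is shown below; the penalty coefficient value is an assumption, and in practice the $L_{2}$ term is often delegated to the optimizer's `weight_decay` instead.

```python
import torch

def multitask_bp_loss(z, y, params, lam=1e-4):
    """Per-sample squared error summed over time steps and the three BP
    targets (SBP, DBP, MBP), averaged over N samples, plus an L2 penalty
    on the model parameters. lam is an assumed value."""
    mse = ((z - y) ** 2).sum(dim=(1, 2)).mean()
    l2 = sum(p.pow(2).sum() for p in params)
    return mse + lam * l2

# Usage with the DeepRNN sketch above:
# loss = multitask_bp_loss(model(X), Y, model.parameters())
```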

III. ANALYSIS OF DEEP RNN ARCHITECTURE

RNNs are inherently deep in time because of their hidden state transitions. Beyond the depth in time, the proposed deep RNN model is also deep along its layer structure. To simplify the analysis, we focus here on the gradient flow along the layer depth. Recursively applying Equation 12 gives:

$$
x_{t}^{L}=x_{t}^{l}+\sum_{i=l}^{L-1}\mathcal{H}_{i}\left(x_{t}^{i},h_{t-1}^{i};W^{i}\right),
$$

for any deeper layer $L$ and shallower layer $l$. Equation 16 leads to nice backward propagation properties. Denoting the loss function as $\mathcal{L}$, the chain rule of backpropagation gives:

$$
\begin{aligned}
\frac{\partial\mathcal{L}}{\partial x_{t}^{l}} &= \frac{\partial\mathcal{L}}{\partial x_{t}^{L}}\frac{\partial x_{t}^{L}}{\partial x_{t}^{l}}\\
&= \frac{\partial\mathcal{L}}{\partial x_{t}^{L}}\Big(1+\frac{\partial}{\partial x_{t}^{l}}\sum_{i=l}^{L-1}\mathcal{H}_{i}\big(x_{t}^{i},h_{t-1}^{i};W^{i}\big)\Big).
\end{aligned}
$$

Equation 17 shows that the gradient $\frac{\partial\mathcal{L}}{\partial x_{t}^{l}}$ decomposes into two additive terms: a term $\frac{\partial\mathcal{L}}{\partial x_{t}^{L}}$ that propagates information directly without passing through any weight layers, and a term $\frac{\partial\mathcal{L}}{\partial x_{t}^{L}}\big(\frac{\partial}{\partial x_{t}^{l}}\sum_{i=l}^{L-1}\mathcal{H}_{i}\big)$ that propagates through the weight layers. The first term ensures that supervised information can back-propagate directly to any shallower layer $x_{t}^{l}$. In general, the term $\frac{\partial}{\partial x_{t}^{l}}\sum_{i=l}^{L-1}\mathcal{H}_{i}$ cannot be $-1$ for all samples in a mini-batch, so the gradient $\frac{\partial\mathcal{L}}{\partial x_{t}^{l}}$ is unlikely to be canceled out. This implies that the gradients of a layer do not vanish even when the intermediate weights are arbitrarily small. This backpropagation property allows us to train a deep RNN model with more expressive power without worrying about the vanishing gradient problem.
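This argument can be checked numerically. The toy autograd example below (not from the paper) builds a residual chain with arbitrarily small weights and shows that the gradient reaching the bottom stays near one, because each layer's Jacobian contains an identity term.

```python
import torch

# With x^{i+1} = x^i + H_i(x^i), each layer's Jacobian is (1 + H_i'),
# so the bottom gradient keeps an identity component even for tiny weights.
x = torch.randn(5, requires_grad=True)
out = x
for _ in range(8):                        # 8 residual "layers"
    w = torch.full((5,), 1e-3)            # arbitrarily small weights
    out = out + torch.tanh(w * out)       # x^{i+1} = x^i + H_i(x^i; w)
out.sum().backward()
print(x.grad)    # entries stay near 1; without the skip connections they
                 # would be on the order of (1e-3)**8 and effectively vanish
```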

IV. EXPERIMENTS

We evaluate the proposed model on both a static and a multi-day continuous BP dataset. Root mean square error (RMSE) is used as the evaluation metric, defined as $RMSE=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\sum_{t=1}^{T}\lVert z_{t}-y_{t}\rVert^{2}}$. On both datasets we compare our model with the following reference models:
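For concreteness, a direct implementation of the metric defined above might look as follows (tensor layout assumed, not code from the paper):

```python
import torch

def rmse(z, y):
    """RMSE as defined above: sum squared error over time steps (and any BP
    components), average over the N test sequences, then take the root."""
    return torch.sqrt(((z - y) ** 2).flatten(1).sum(dim=1).mean())
```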


Fig. 3: Illustration of PPG feature.

A. Dataset

Static continuous BP dataset. The dataset, including ECG, PPG and BP, was obtained from 84 healthy people (51 males and 33 females). ECG and PPG signals were acquired with a Biopac system and reference cont