PINN for Parameter Inversion of the N-S Equations: TF2 Code and Explanation



I work on industrial digital-twin technology, and PINNs have recently become very popular because they balance physical mechanisms and data so well. This post summarizes my first steps with PINNs: the original author's code is written in TF 1.x, so I rewrote it in TF 2.x and am posting it here for reference. Fellow digital-twin researchers are welcome to discuss.

1. Links

1.1 PINN Basics

1.2 Original PINN Paper and Code

The original PINN author, Maziar Raissi, implemented his code in TensorFlow 1.x. His GitHub repository contains the write-ups, implementation code, and data for four example problems; see the author's GitHub for details.

2. The N-S Inverse Problem

The author's Markdown write-up of the N-S inverse problem can be found in his GitHub project; the gist is as follows:
given a small number of measurement points and a PINN model, identify the parameters of the 2D Navier-Stokes equations:
$$u_t + \lambda_1 (u u_x + v u_y) = -p_x + \lambda_2 (u_{xx} + u_{yy}),$$
$$v_t + \lambda_1 (u v_x + v v_y) = -p_y + \lambda_2 (v_{xx} + v_{yy}),$$
where
$u(t, x, y)$ is the velocity component along the $x$-axis, $v(t, x, y)$ is the velocity component along the $y$-axis, and $p(t, x, y)$ is the pressure field; none of these fields is steady.
$\lambda = (\lambda_1, \lambda_2)$ are the two parameters to be inverted; their true values are 1.0 and 0.01, respectively.
The Navier-Stokes solution also satisfies the divergence-free (continuity) condition:
$$u_x + v_y = 0.$$
The velocity components $u$ and $v$ can in turn be written as derivatives of a stream function $\psi$:
$$u = \psi_y, \qquad v = -\psi_x,$$
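This parameterization satisfies the continuity equation above by construction, since
$$u_x + v_y = \psi_{yx} - \psi_{xy} = 0.$$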
Combining this with the following measurements, the task is to recover the two coefficients $\lambda_1, \lambda_2$ of the N-S equations:
$$\{t^i, x^i, y^i, u^i, v^i\}_{i=1}^{N}.$$
The PDE residual terms are defined as:
$$f := u_t + \lambda_1 (u u_x + v u_y) + p_x - \lambda_2 (u_{xx} + u_{yy}),$$
$$g := v_t + \lambda_1 (u v_x + v v_y) + p_y - \lambda_2 (v_{xx} + v_{yy}).$$

Adding the mismatch between the network predictions and the measurements, the PINN loss can be written as:
$$MSE := \frac{1}{N}\sum_{i=1}^{N}\left(|u(t^i,x^i,y^i)-u^i|^2 + |v(t^i,x^i,y^i)-v^i|^2\right) + \frac{1}{N}\sum_{i=1}^{N}\left(|f(t^i,x^i,y^i)|^2 + |g(t^i,x^i,y^i)|^2\right).$$

3. TF 2.x Implementation

3.1 PINN Model Definition

Import the required packages and define the PINN model by subclassing Keras's Model class. In the original paper the weights are initialized with the Xavier method, which is reproduced here via Keras's glorot_normal initializer.
An L2 weight-decay (kernel regularization) term is also added to each Dense layer, the idea being to keep the layer weights small and smooth in the hope of speeding up convergence.

import tensorflow as tf
from keras.optimizers import Adam
from keras import Sequential, Model
from keras.layers import Dense, Flatten, BatchNormalization
from keras.regularizers import l2

import random
import numpy as np
import scipy.io
import time, datetime


class PhysicsInformedNN(Model):
    # Initialize the class
    def __init__(self, layer, lr=1e-3):
        super(PhysicsInformedNN, self).__init__()

        # Initialize the parameters λ1 and λ2 to be inverted
        self.lambda_1 = tf.Variable([0.0], dtype=tf.float32, trainable=True)
        self.lambda_2 = tf.Variable([0.0], dtype=tf.float32, trainable=True)

        # Fully connected network
        self.model = Sequential()
        self.model.add(Flatten(input_shape=(3,)))  # input is the concatenated (x, y, t) columns
        self.model.add(Dense(layer[0], activation='tanh', name="Dense_0",
                             kernel_initializer="glorot_normal",
                             bias_initializer='zeros',
                             kernel_regularizer=l2(theata)  # L2 weight decay; theata is a global defined in the main script
                             ))

        for i in range(1, len(layer) - 1):
            self.model.add(Dense(layer[i], activation='tanh', name="Dense_{}".format(i),
                                 kernel_initializer="glorot_normal",  # Xavier initialization
                                 bias_initializer='zeros',
                                 kernel_regularizer=l2(theata)   # weight decay
                                 ))
            # self.model.add(BatchNormalization(name="BN{}".format(i)))  # optional batch-normalization layer

        self.model.add(Dense(layer[-1], activation='tanh', name='Dense_end',
                             kernel_initializer="glorot_normal",
                             bias_initializer='zeros',
                             kernel_regularizer=l2(theata)
                             ))
        self.optimizer = Adam(learning_rate=lr)
	

3.2 Model Call and Prediction Methods

TF2's tf.GradientTape is used to record the model's computation so that its gradients can be taken; these derivatives are needed because the loss function (the PDE residual terms) depends on them.
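As a minimal standalone sketch of this mechanism (independent of the PINN code; the variable names are illustrative only), nested tapes yield higher-order derivatives. Here the first and second derivatives of x³ are recovered as 3x² and 6x:

import tensorflow as tf

x = tf.Variable(2.0)
with tf.GradientTape() as outer_tape:
    with tf.GradientTape() as inner_tape:
        y = x ** 3
    dy_dx = inner_tape.gradient(y, x)       # 3 * x^2 -> 12.0
d2y_dx2 = outer_tape.gradient(dy_dx, x)     # 6 * x   -> 12.0
print(dy_dx.numpy(), d2y_dx2.numpy())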

    # Forward pass of the fully connected network
    def call(self, X):
        y = self.model.call(X)
        return y

    # PINN prediction: velocities, pressure, and PDE residuals
    def predict(self, x, y, t):
        lambda_1 = self.lambda_1
        lambda_2 = self.lambda_2

        # The network predicts ψ(t,x,y) and p(t,x,y)
        x_var = tf.Variable(tf.cast(x, dtype=tf.float32))
        y_var = tf.Variable(tf.cast(y, dtype=tf.float32))
        t_var = tf.Variable(tf.cast(t, dtype=tf.float32))

        with tf.GradientTape(persistent=True) as tape_uxx:
            with tf.GradientTape(persistent=True) as tape_ux:
                with tf.GradientTape(persistent=True) as tape_psi:
                    psi_and_p = self.call(tf.concat([x_var, y_var, t_var], 1))
                    psi = psi_and_p[:, 0:1]
                    p = psi_and_p[:, 1:2]

                # First-order derivatives of ψ and p
                u = tape_psi.gradient(psi, y_var)
                v = -tape_psi.gradient(psi, x_var)
                p_x = tape_psi.gradient(p, x_var)
                p_y = tape_psi.gradient(p, y_var)

            # First-order derivatives of u and v (second order in ψ)
            u_t = tape_ux.gradient(u, t_var)
            u_x = tape_ux.gradient(u, x_var)
            u_y = tape_ux.gradient(u, y_var)
            v_t = tape_ux.gradient(v, t_var)
            v_x = tape_ux.gradient(v, x_var)
            v_y = tape_ux.gradient(v, y_var)

        # Second-order derivatives of u and v (third order in ψ)
        u_xx = tape_uxx.gradient(u_x, x_var)
        u_yy = tape_uxx.gradient(u_y, y_var)
        v_xx = tape_uxx.gradient(v_x, x_var)
        v_yy = tape_uxx.gradient(v_y, y_var)

        # PDE residual terms f and g
        f_u_pred = u_t + lambda_1 * (u * u_x + v * u_y) + p_x - lambda_2 * (u_xx + u_yy)
        f_v_pred = v_t + lambda_1 * (u * v_x + v * v_y) + p_y - lambda_2 * (v_xx + v_yy)

        u_pred = u
        v_pred = v
        p_pred = p

        # Release the persistent tapes
        del tape_psi
        del tape_ux
        del tape_uxx

        return u_pred, v_pred, p_pred, f_u_pred, f_v_pred

3.3 Loss Function and Parameter Updates

The loss consists of 4 terms. A parameter update takes the derivative of the loss with respect to the network weights $w, b$ and the parameters $\lambda_1, \lambda_2$, and applies the gradients with the Adam optimizer.

    # Loss computation
    def loss_function(self, u_pred, v_pred, u_real, v_real, f_u_pred, f_v_pred):
        loss = tf.reduce_sum(tf.square(u_real - u_pred)) + \
            tf.reduce_sum(tf.square(v_real - v_pred)) + \
            tf.reduce_sum(tf.square(f_u_pred)) + \
            tf.reduce_sum(tf.square(f_v_pred))
        return loss


    # Run one optimization step
    def run_optimizer(self, x, y, t, u_real, v_real):
        optimizer = self.optimizer

        with tf.GradientTape() as Tape:
            # Model prediction
            u_pred, v_pred, p_pred, f_u_pred, f_v_pred = self.predict(x, y, t)

            # The loss consists of 4 terms
            loss = self.loss_function(u_pred, v_pred, u_real, v_real, f_u_pred, f_v_pred)

        trainable_variables = self.trainable_variables
        gradients = Tape.gradient(loss, trainable_variables)
        optimizer.apply_gradients(zip(gradients, trainable_variables))

        return loss
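One caveat (an observation about the code above, not something it handles): with a custom GradientTape training loop, Keras does not automatically add the kernel_regularizer penalties to the loss; they only accumulate in self.model.losses. If the L2 weight decay is actually meant to take effect, the penalties would have to be added by hand, roughly like this:

        # hypothetical variant of run_optimizer: include the L2 penalties collected by Keras
        with tf.GradientTape() as Tape:
            u_pred, v_pred, p_pred, f_u_pred, f_v_pred = self.predict(x, y, t)
            loss = self.loss_function(u_pred, v_pred, u_real, v_real, f_u_pred, f_v_pred)
            if self.model.losses:                       # regularization terms, if any
                loss = loss + tf.add_n(self.model.losses)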

3.4 Main Script

Parameter settings and training-data preparation are not repeated here; see the complete code at the bottom if needed.
The model is trained with mini-batches (the data_iter iterator used by the loop is shown right after this paragraph); the training loop is as follows:
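For reference, this is the batch iterator the loop relies on; it also appears in the complete code in section 4. It shuffles the sample indices and yields random mini-batches via tf.gather:

# Mini-batch data iterator (same function as in the complete code below)
def data_iter(batch_size, x, y, t, u, v):
    num_examples = len(x)
    indices = list(range(num_examples))
    random.shuffle(indices)  # draw samples in random order
    for i in range(0, num_examples, batch_size):
        j = tf.constant(indices[i: min(i + batch_size, num_examples)])
        yield tf.gather(x, j), tf.gather(y, j), tf.gather(t, j), tf.gather(u, j), tf.gather(v, j)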

# ==================== 2. Train the model ====================
    # Instantiate the model
    pinn = PhysicsInformedNN(layer=layers, lr=1e-3)

    start_time = time.time()
    total_time = time.time()
    for step in range(nIter):
        for x_batch,y_batch,t_batch,u_batch,v_batch in data_iter(batch_size, x_train,y_train,t_train,u_train,v_train):
            # Gradient update
            loss = pinn.run_optimizer(x_batch,y_batch,t_batch,u_batch,v_batch)

        # Print
        if step % 10 == 0:
            elapsed = time.time() - start_time
            print('It: %d, Loss: %.3f, λ1: %.3f, λ2: %.5f, Time: %.2f s' %
                  (step+1, loss, pinn.lambda_1.numpy(), pinn.lambda_2.numpy(), elapsed))
            start_time = time.time()

        if loss < 5:
            break

    print("\n\ntrain time: %.1f s" %(time.time()-total_time))

4. Complete Code

A caveat first: I have not trained the model to completion. The original author used 200,000 epochs, and in my own attempts the model was indeed hard to converge; the hyperparameters and architecture still leave room for optimization, and I would be glad to discuss improvements together.
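One thing that may be worth trying (an untested suggestion, not part of the code below): the last Dense layer here uses a tanh activation, which bounds the predicted ψ and p to [-1, 1], whereas the original TF1 implementation leaves the output layer linear. A minimal sketch of that change:

        # hypothetical variant of the output layer: linear activation, as in the original TF1 code
        self.model.add(Dense(layer[-1], activation=None, name='Dense_end',
                             kernel_initializer="glorot_normal",
                             bias_initializer='zeros'))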

Also, the plotting part of the code has not been written yet.

"""
@author: Maziar Raissi
@modify: Steven Yin
"""

import tensorflow as tf
from keras.optimizers import Adam
from keras import Sequential, Model
from keras.layers import Dense, Flatten, BatchNormalization
from keras.regularizers import l2

import os
import random
import numpy as np
import scipy.io
import time, datetime


class PhysicsInformedNN(Model):
    # Initialize the class
    def __init__(self, layer, lr=1e-3):
        super(PhysicsInformedNN, self).__init__()

        # Initialize the parameters λ1 and λ2 to be inverted
        self.lambda_1 = tf.Variable([0.0], dtype=tf.float32, trainable=True)
        self.lambda_2 = tf.Variable([0.0], dtype=tf.float32, trainable=True)

        # Fully connected network
        self.model = Sequential()
        self.model.add(Flatten(input_shape=(3,)))  # input is the concatenated (x, y, t) columns
        self.model.add(Dense(layer[0], activation='tanh', name="Dense_0",
                             kernel_initializer="glorot_normal",
                             bias_initializer='zeros',
                             kernel_regularizer=l2(theata)
                             ))

        for i in range(1, len(layer) - 1):
            self.model.add(Dense(layer[i], activation='tanh', name="Dense_{}".format(i),
                                 kernel_initializer="glorot_normal",
                                 bias_initializer='zeros',
                                 kernel_regularizer=l2(theata)
                                 ))
            # self.model.add(BatchNormalization(name="BN{}".format(i)))  # optional batch-normalization layer

        self.model.add(Dense(layer[-1], activation='tanh', name='Dense_end',
                             kernel_initializer="glorot_normal",
                             bias_initializer='zeros',
                             kernel_regularizer=l2(theata)
                             ))
        self.optimizer = Adam(learning_rate=lr)


    # Forward pass of the fully connected network
    def call(self, X):
        y = self.model.call(X)
        return y


    # PINN prediction: velocities, pressure, and PDE residuals
    def predict(self, x, y, t):
        lambda_1 = self.lambda_1
        lambda_2 = self.lambda_2

        # The network predicts ψ(t,x,y) and p(t,x,y)
        x_var = tf.Variable(tf.cast(x, dtype=tf.float32))
        y_var = tf.Variable(tf.cast(y, dtype=tf.float32))
        t_var = tf.Variable(tf.cast(t, dtype=tf.float32))

        with tf.GradientTape(persistent=True) as tape_uxx:
            with tf.GradientTape(persistent=True) as tape_ux:
                with tf.GradientTape(persistent=True) as tape_psi:
                    psi_and_p = self.call(tf.concat([x_var, y_var, t_var], 1))
                    psi = psi_and_p[:, 0:1]
                    p = psi_and_p[:, 1:2]

                # First-order derivatives of ψ and p
                u = tape_psi.gradient(psi, y_var)
                v = -tape_psi.gradient(psi, x_var)
                p_x = tape_psi.gradient(p, x_var)
                p_y = tape_psi.gradient(p, y_var)

            # First-order derivatives of u and v (second order in ψ)
            u_t = tape_ux.gradient(u, t_var)
            u_x = tape_ux.gradient(u, x_var)
            u_y = tape_ux.gradient(u, y_var)
            v_t = tape_ux.gradient(v, t_var)
            v_x = tape_ux.gradient(v, x_var)
            v_y = tape_ux.gradient(v, y_var)

        # Second-order derivatives of u and v (third order in ψ)
        u_xx = tape_uxx.gradient(u_x, x_var)
        u_yy = tape_uxx.gradient(u_y, y_var)
        v_xx = tape_uxx.gradient(v_x, x_var)
        v_yy = tape_uxx.gradient(v_y, y_var)

        # PDE residual terms f and g
        f_u_pred = u_t + lambda_1 * (u * u_x + v * u_y) + p_x - lambda_2 * (u_xx + u_yy)
        f_v_pred = v_t + lambda_1 * (u * v_x + v * v_y) + p_y - lambda_2 * (v_xx + v_yy)

        u_pred = u
        v_pred = v
        p_pred = p

        # Release the persistent tapes
        del tape_psi
        del tape_ux
        del tape_uxx

        return u_pred, v_pred, p_pred, f_u_pred, f_v_pred


    # Loss computation
    def loss_function(self, u_pred, v_pred, u_real, v_real, f_u_pred, f_v_pred):
        loss = tf.reduce_sum(tf.square(u_real - u_pred)) + \
            tf.reduce_sum(tf.square(v_real - v_pred)) + \
            tf.reduce_sum(tf.square(f_u_pred)) + \
            tf.reduce_sum(tf.square(f_v_pred))
        return loss


    # Run one optimization step
    def run_optimizer(self, x, y, t, u_real, v_real):
        optimizer = self.optimizer

        with tf.GradientTape() as Tape:
            # Model prediction
            u_pred, v_pred, p_pred, f_u_pred, f_v_pred = self.predict(x, y, t)

            # The loss consists of 4 terms
            loss = self.loss_function(u_pred, v_pred, u_real, v_real, f_u_pred, f_v_pred)

        trainable_variables = self.trainable_variables
        gradients = Tape.gradient(loss, trainable_variables)
        optimizer.apply_gradients(zip(gradients, trainable_variables))

        return loss


    # Error evaluation
    def error(self, u_star, v_star, p_star, u_pred, v_pred, p_pred, lambda_1_value, lambda_2_value):
        # Error
        error_u = np.linalg.norm(u_star - u_pred, 2) / np.linalg.norm(u_star, 2)
        error_v = np.linalg.norm(v_star - v_pred, 2) / np.linalg.norm(v_star, 2)
        error_p = np.linalg.norm(p_star - p_pred, 2) / np.linalg.norm(p_star, 2)

        error_lambda_1 = np.abs(lambda_1_value - 1.0) * 100
        error_lambda_2 = np.abs(lambda_2_value - 0.01) / 0.01 * 100

        return error_u, error_v, error_p, error_lambda_1, error_lambda_2


# Mini-batch data iterator
def data_iter(batch_size, x, y, t, u, v):
    num_examples = len(x)
    indices = list(range(num_examples))
    # shuffle indices so samples are drawn in random order
    random.shuffle(indices)
    for i in range(0, num_examples, batch_size):
        j = tf.constant(indices[i: min(i + batch_size, num_examples)])
        yield tf.gather(x, j), tf.gather(y, j), tf.gather(t, j), tf.gather(u, j), tf.gather(v, j)
        
        
if __name__ == "__main__":

    # ==================== 0. Global parameter settings ====================
    nIter = 20000
    N_train = 5000  # number of randomly selected training samples
    batch_size = 100
    layers = [3,32,32,32,32,32,2]
    theata = 0.3  # L2 weight-decay penalty factor


    # ==================== 1. Prepare the training data ====================
    # Load Data
    data = scipy.io.loadmat('../Data/cylinder_nektar_wake.mat')
           
    U_star = data['U_star'] # N x 2 x T
    P_star = data['p_star'] # N x T
    t_star = data['t'] # T x 1
    X_star = data['X_star'] # N x 2
    
    N = X_star.shape[0]     # x∈[1,8] 5000 samples
    T = t_star.shape[0]     # time
    
    # Rearrange Data
    XX = np.tile(X_star[:,0:1], (1,T)) # N x T
    YY = np.tile(X_star[:,1:2], (1,T)) # N x T
    TT = np.tile(t_star, (1,N)).T # N x T
    
    UU = U_star[:,0,:] # N x T
    VV = U_star[:,1,:] # N x T
    PP = P_star # N x T
    
    x = XX.flatten()[:,None] # NT x 1
    y = YY.flatten()[:,None] # NT x 1
    t = TT.flatten()[:,None] # NT x 1
    
    u = UU.flatten()[:,None] # NT x 1
    v = VV.flatten()[:,None] # NT x 1
    p = PP.flatten()[:,None] # NT x 1

    idx = np.random.choice(N*T, N_train, replace=False)
    x_train = tf.constant(x[idx,:], dtype=tf.float32)
    y_train = tf.constant(y[idx,:], dtype=tf.float32)
    t_train = tf.constant(t[idx,:], dtype=tf.float32)
    u_train = tf.constant(u[idx,:], dtype=tf.float32)
    v_train = tf.constant(v[idx,:], dtype=tf.float32)
    p_train = tf.constant(p[idx,:], dtype=tf.float32)

    # mini-batch training (see data_iter above)


    # ==================== 2. Train the model ====================
    # Instantiate the model
    pinn = PhysicsInformedNN(layer=layers, lr=1e-3)

    start_time = time.time()
    total_time = time.time()
    for step in range(nIter):
        for x_batch,y_batch,t_batch,u_batch,v_batch in data_iter(batch_size, x_train,y_train,t_train,u_train,v_train):
            # Gradient update
            loss = pinn.run_optimizer(x_batch,y_batch,t_batch,u_batch,v_batch)

        # Print
        if step % 10 == 0:
            elapsed = time.time() - start_time
            print('It: %d, Loss: %.3f, λ1: %.3f, λ2: %.5f, Time: %.2f s' %
                  (step+1, loss, pinn.lambda_1.numpy(), pinn.lambda_2.numpy(), elapsed))
            start_time = time.time()

        if loss < 5:
            break

    print("\n\ntrain time: %.1f s" %(time.time()-total_time))


    # ==================== 3. Evaluate the model ====================
    # Test Data
    snap = np.array([100])
    x_star = X_star[:, 0:1]
    y_star = X_star[:, 1:2]
    t_star = TT[:, snap]

    u_star = U_star[:, 0, snap]
    v_star = U_star[:, 1, snap]
    p_star = P_star[:, snap]

    # Prediction
    u_pred, v_pred, p_pred, f_u_pred, f_v_pred = pinn.predict(x_star, y_star, t_star)
    lambda_1_value = pinn.lambda_1.numpy()
    lambda_2_value = pinn.lambda_2.numpy()

    error_u, error_v, error_p, error_lambda_1, error_lambda_2 = \
        pinn.error(u_star, v_star, p_star, u_pred, v_pred, p_pred, lambda_1_value, lambda_2_value)

    print('Error u: %e' % error_u)
    print('Error v: %e' % error_v)
    print('Error p: %e' % error_p)
    print('Error l1: %.5f%%' % error_lambda_1)
    print('Error l2: %.5f%%' % error_lambda_2)


    # ==================== 4. Save results & plot ====================
    # Save the model
    now_time = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    os.makedirs("./models", exist_ok=True)
    pinn.model.save("./models/pinn_model_%s.h5" %now_time)

    # Plot Results