{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "pDyCVsqlFA_5"
},
"source": [
"Updated 19/Nov/2021 by Yoshihisa Nitta "
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "I4thkRCveS2o"
},
"source": [
"\n",
"\n",
"# Variational Auto Encoder Analysis for the MNIST Dataset with TensorFlow 2 on Google Colab\n",
"\n",
"This notebook assumes that the model has already been trained with VAE_MNIST_Train.ipynb.\n",
"\n",
"\n",
"## MNIST データセットを用いて Variational Auto Encoder を Google Colab 上の TensorFlow 2 で解析する\n",
"\n",
"このノートブックを実行するには VAE_MNIST_Train.ipynb で訓練した後の状態であることを仮定している。"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"executionInfo": {
"elapsed": 3,
"status": "ok",
"timestamp": 1637572808348,
"user": {
"displayName": "Yoshihisa Nitta",
"photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJLeg9AmjfexROvC3P0wzJdd5AOGY_VOu-nxnh=s64",
"userId": "15888006800030996813"
},
"user_tz": -540
},
"id": "gLzKsQqnToi6"
},
"outputs": [],
"source": [
"#! pip install tensorflow==2.7.0"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 2015,
"status": "ok",
"timestamp": 1637572810361,
"user": {
"displayName": "Yoshihisa Nitta",
"photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJLeg9AmjfexROvC3P0wzJdd5AOGY_VOu-nxnh=s64",
"userId": "15888006800030996813"
},
"user_tz": -540
},
"id": "j3o8E-o4TpyQ",
"outputId": "bfcc50a5-ce5d-4195-face-07cc15598efc"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"2.7.0\n"
]
}
],
"source": [
"%tensorflow_version 2.x\n",
"\n",
"import tensorflow as tf\n",
"print(tf.__version__)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "RV4ugCIeerBy"
},
"source": [
"# Check the Google Colab runtime environment\n",
"\n",
"## Google Colab 実行環境を確認する"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 1159,
"status": "ok",
"timestamp": 1637572811513,
"user": {
"displayName": "Yoshihisa Nitta",
"photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJLeg9AmjfexROvC3P0wzJdd5AOGY_VOu-nxnh=s64",
"userId": "15888006800030996813"
},
"user_tz": -540
},
"id": "2BkP2SYVeOtQ",
"outputId": "36188a02-1704-4f09-b2d8-044d8a4eeb53"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Mon Nov 22 09:20:10 2021 \n",
"+-----------------------------------------------------------------------------+\n",
"| NVIDIA-SMI 495.44 Driver Version: 460.32.03 CUDA Version: 11.2 |\n",
"|-------------------------------+----------------------+----------------------+\n",
"| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n",
"| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n",
"| | | MIG M. |\n",
"|===============================+======================+======================|\n",
"| 0 Tesla P100-PCIE... Off | 00000000:00:04.0 Off | 0 |\n",
"| N/A 34C P0 26W / 250W | 0MiB / 16280MiB | 0% Default |\n",
"| | | N/A |\n",
"+-------------------------------+----------------------+----------------------+\n",
" \n",
"+-----------------------------------------------------------------------------+\n",
"| Processes: |\n",
"| GPU GI CI PID Type Process name GPU Memory |\n",
"| ID ID Usage |\n",
"|=============================================================================|\n",
"| No running processes found |\n",
"+-----------------------------------------------------------------------------+\n",
"processor\t: 0\n",
"vendor_id\t: GenuineIntel\n",
"cpu family\t: 6\n",
"model\t\t: 79\n",
"model name\t: Intel(R) Xeon(R) CPU @ 2.20GHz\n",
"stepping\t: 0\n",
"microcode\t: 0x1\n",
"cpu MHz\t\t: 2199.998\n",
"cache size\t: 56320 KB\n",
"physical id\t: 0\n",
"siblings\t: 2\n",
"core id\t\t: 0\n",
"cpu cores\t: 1\n",
"apicid\t\t: 0\n",
"initial apicid\t: 0\n",
"fpu\t\t: yes\n",
"fpu_exception\t: yes\n",
"cpuid level\t: 13\n",
"wp\t\t: yes\n",
"flags\t\t: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities\n",
"bugs\t\t: cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa\n",
"bogomips\t: 4399.99\n",
"clflush size\t: 64\n",
"cache_alignment\t: 64\n",
"address sizes\t: 46 bits physical, 48 bits virtual\n",
"power management:\n",
"\n",
"processor\t: 1\n",
"vendor_id\t: GenuineIntel\n",
"cpu family\t: 6\n",
"model\t\t: 79\n",
"model name\t: Intel(R) Xeon(R) CPU @ 2.20GHz\n",
"stepping\t: 0\n",
"microcode\t: 0x1\n",
"cpu MHz\t\t: 2199.998\n",
"cache size\t: 56320 KB\n",
"physical id\t: 0\n",
"siblings\t: 2\n",
"core id\t\t: 0\n",
"cpu cores\t: 1\n",
"apicid\t\t: 1\n",
"initial apicid\t: 1\n",
"fpu\t\t: yes\n",
"fpu_exception\t: yes\n",
"cpuid level\t: 13\n",
"wp\t\t: yes\n",
"flags\t\t: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities\n",
"bugs\t\t: cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa\n",
"bogomips\t: 4399.99\n",
"clflush size\t: 64\n",
"cache_alignment\t: 64\n",
"address sizes\t: 46 bits physical, 48 bits virtual\n",
"power management:\n",
"\n",
"Ubuntu 18.04.5 LTS \\n \\l\n",
"\n",
" total used free shared buff/cache available\n",
"Mem: 12G 738M 9G 1.2M 2.0G 11G\n",
"Swap: 0B 0B 0B\n"
]
}
],
"source": [
"! nvidia-smi\n",
"! cat /proc/cpuinfo\n",
"! cat /etc/issue\n",
"! free -h"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "PLB9-03nfCxx"
},
"source": [
"# Mount Google Drive from Google Colab\n",
"\n",
"## Google Colab から GoogleDrive をマウントする"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 38581,
"status": "ok",
"timestamp": 1637572850092,
"user": {
"displayName": "Yoshihisa Nitta",
"photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJLeg9AmjfexROvC3P0wzJdd5AOGY_VOu-nxnh=s64",
"userId": "15888006800030996813"
},
"user_tz": -540
},
"id": "JS_DxEGYe3bK",
"outputId": "23b987ed-ba88-4dd5-bb1d-5d03577fb250"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Mounted at /content/drive\n"
]
}
],
"source": [
"from google.colab import drive\n",
"drive.mount('/content/drive')"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 6,
"status": "ok",
"timestamp": 1637572850093,
"user": {
"displayName": "Yoshihisa Nitta",
"photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJLeg9AmjfexROvC3P0wzJdd5AOGY_VOu-nxnh=s64",
"userId": "15888006800030996813"
},
"user_tz": -540
},
"id": "2B6gVAM6fJEw",
"outputId": "1af2f27c-e878-4476-81b3-871b8058c369"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"MyDrive Shareddrives\n"
]
}
],
"source": [
"! ls /content/drive"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0EN3D7nffROf"
},
"source": [
"# Download source file from Google Drive or nw.tsuda.ac.jp\n",
"Basically, use gdown to download from Google Drive. Download from nw.tsuda.ac.jp only if the specifications of Google Drive change and you cannot download from Google Drive.\n",
"\n",
"## Google Drive または nw.tsuda.ac.jp からファイルをダウンロードする\n",
"基本的に Google Drive から gdown してください。Google Drive の仕様が変わってダウンロードができない場合にのみ、nw.tsuda.ac.jp からダウンロードしてください。"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 3977,
"status": "ok",
"timestamp": 1637572854067,
"user": {
"displayName": "Yoshihisa Nitta",
"photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJLeg9AmjfexROvC3P0wzJdd5AOGY_VOu-nxnh=s64",
"userId": "15888006800030996813"
},
"user_tz": -540
},
"id": "F3wJxpfjfQxJ",
"outputId": "9393e00c-c465-4f28-ad44-205f8a2063c2"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Downloading...\n",
"From: https://drive.google.com/uc?id=1ZCihR7JkMOity4wCr66ZCp-3ZOlfwwo3\n",
"To: /content/nw/VariationalAutoEncoder.py\n",
"\r",
" 0% 0.00/18.7k [00:00, ?B/s]\r",
"100% 18.7k/18.7k [00:00<00:00, 14.9MB/s]\n"
]
}
],
"source": [
"# Download source file\n",
"nw_path = './nw'\n",
"! rm -rf {nw_path}\n",
"! mkdir -p {nw_path}\n",
"\n",
"if True: # from Google Drive\n",
" url_model = 'https://drive.google.com/uc?id=1ZCihR7JkMOity4wCr66ZCp-3ZOlfwwo3'\n",
" ! (cd {nw_path}; gdown {url_model})\n",
"else: # from nw.tsuda.ac.jp\n",
" URL_NW = 'https://nw.tsuda.ac.jp/lec/GoogleColab/pub'\n",
" url_model = f'{URL_NW}/models/VariationalAutoEncoder.py'\n",
" ! wget -nd {url_model} -P {nw_path}"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 23,
"status": "ok",
"timestamp": 1637572854068,
"user": {
"displayName": "Yoshihisa Nitta",
"photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJLeg9AmjfexROvC3P0wzJdd5AOGY_VOu-nxnh=s64",
"userId": "15888006800030996813"
},
"user_tz": -540
},
"id": "LdJkkfFtfW72",
"outputId": "6e7b56dc-3708-4be6-e8c5-f64d326fb1f8"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"import tensorflow as tf\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"\n",
"import os\n",
"import pickle\n",
"import datetime\n",
"\n",
"class Sampling(tf.keras.layers.Layer):\n",
" def __init__(self, **kwargs):\n",
" super().__init__(**kwargs)\n",
"\n",
" def call(self, inputs):\n",
" mu, log_var = inputs\n",
" epsilon = tf.keras.backend.random_normal(shape=tf.keras.backend.shape(mu), mean=0., stddev=1.)\n",
" return mu + tf.keras.backend.exp(log_var / 2) * epsilon\n",
"\n",
"\n",
"class VAEModel(tf.keras.models.Model):\n",
" def __init__(self, encoder, decoder, r_loss_factor, **kwargs):\n",
" super().__init__(**kwargs)\n",
" self.encoder = encoder\n",
" self.decoder = decoder\n",
" self.r_loss_factor = r_loss_factor\n",
"\n",
"\n",
" @tf.function\n",
" def loss_fn(self, x):\n",
" z_mean, z_log_var, z = self.encoder(x)\n",
" reconstruction = self.decoder(z)\n",
" reconstruction_loss = tf.reduce_mean(\n",
" tf.square(x - reconstruction), axis=[1,2,3]\n",
" ) * self.r_loss_factor\n",
" kl_loss = tf.reduce_sum(\n",
" 1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var),\n",
" axis = 1\n",
" ) * (-0.5)\n",
" total_loss = reconstruction_loss + kl_loss\n",
" return total_loss, reconstruction_loss, kl_loss\n",
"\n",
"\n",
" @tf.function\n",
" def compute_loss_and_grads(self, x):\n",
" with tf.GradientTape() as tape:\n",
" total_loss, reconstruction_loss, kl_loss = self.loss_fn(x)\n",
" grads = tape.gradient(total_loss, self.trainable_weights)\n",
" return total_loss, reconstruction_loss, kl_loss, grads\n",
"\n",
"\n",
" def train_step(self, data):\n",
" if isinstance(data, tuple):\n",
" data = data[0]\n",
" total_loss, reconstruction_loss, kl_loss, grads = self.compute_loss_and_grads(data)\n",
" self.optimizer.apply_gradients(zip(grads, self.trainable_weights))\n",
" return {\n",
" \"loss\": tf.math.reduce_mean(total_loss),\n",
" \"reconstruction_loss\": tf.math.reduce_mean(reconstruction_loss),\n",
" \"kl_loss\": tf.math.reduce_mean(kl_loss),\n",
" }\n",
"\n",
" def call(self,inputs):\n",
" _, _, z = self.encoder(inputs)\n",
" return self.decoder(z)\n",
"\n",
"\n",
"class VariationalAutoEncoder():\n",
" def __init__(self, \n",
" input_dim,\n",
" encoder_conv_filters,\n",
" encoder_conv_kernel_size,\n",
" encoder_conv_strides,\n",
" decoder_conv_t_filters,\n",
" decoder_conv_t_kernel_size,\n",
" decoder_conv_t_strides,\n",
" z_dim,\n",
" r_loss_factor, ### added\n",
" use_batch_norm = False,\n",
" use_dropout = False,\n",
" epoch = 0\n",
" ):\n",
" self.name = 'variational_autoencoder'\n",
" self.input_dim = input_dim\n",
" self.encoder_conv_filters = encoder_conv_filters\n",
" self.encoder_conv_kernel_size = encoder_conv_kernel_size\n",
" self.encoder_conv_strides = encoder_conv_strides\n",
" self.decoder_conv_t_filters = decoder_conv_t_filters\n",
" self.decoder_conv_t_kernel_size = decoder_conv_t_kernel_size\n",
" self.decoder_conv_t_strides = decoder_conv_t_strides\n",
" self.z_dim = z_dim\n",
" self.r_loss_factor = r_loss_factor ### added\n",
" \n",
" self.use_batch_norm = use_batch_norm\n",
" self.use_dropout = use_dropout\n",
"\n",
" self.epoch = epoch\n",
" \n",
" self.n_layers_encoder = len(encoder_conv_filters)\n",
" self.n_layers_decoder = len(decoder_conv_t_filters)\n",
" \n",
" self._build()\n",
" \n",
"\n",
" def _build(self):\n",
" ### THE ENCODER\n",
" encoder_input = tf.keras.layers.Input(shape=self.input_dim, name='encoder_input')\n",
" x = encoder_input\n",
" \n",
" for i in range(self.n_layers_encoder):\n",
" x = conv_layer = tf.keras.layers.Conv2D(\n",
" filters = self.encoder_conv_filters[i],\n",
" kernel_size = self.encoder_conv_kernel_size[i],\n",
" strides = self.encoder_conv_strides[i],\n",
" padding = 'same',\n",
" name = 'encoder_conv_' + str(i)\n",
" )(x)\n",
"\n",
" if self.use_batch_norm: ### The order of layers is opposite to AutoEncoder\n",
" x = tf.keras.layers.BatchNormalization()(x) ### AE: LeakyReLU -> BatchNorm\n",
" x = tf.keras.layers.LeakyReLU()(x) ### VAE: BatchNorm -> LeakyReLU\n",
" \n",
" if self.use_dropout:\n",
" x = tf.keras.layers.Dropout(rate = 0.25)(x)\n",
" \n",
" shape_before_flattening = tf.keras.backend.int_shape(x)[1:]\n",
" \n",
" x = tf.keras.layers.Flatten()(x)\n",
" \n",
" self.mu = tf.keras.layers.Dense(self.z_dim, name='mu')(x)\n",
" self.log_var = tf.keras.layers.Dense(self.z_dim, name='log_var')(x) \n",
" self.z = Sampling(name='encoder_output')([self.mu, self.log_var])\n",
" \n",
" self.encoder = tf.keras.models.Model(encoder_input, [self.mu, self.log_var, self.z], name='encoder')\n",
" \n",
" \n",
" ### THE DECODER\n",
" decoder_input = tf.keras.layers.Input(shape=(self.z_dim,), name='decoder_input')\n",
" x = decoder_input\n",
" x = tf.keras.layers.Dense(np.prod(shape_before_flattening))(x)\n",
" x = tf.keras.layers.Reshape(shape_before_flattening)(x)\n",
" \n",
" for i in range(self.n_layers_decoder):\n",
" x = conv_t_layer = tf.keras.layers.Conv2DTranspose(\n",
" filters = self.decoder_conv_t_filters[i],\n",
" kernel_size = self.decoder_conv_t_kernel_size[i],\n",
" strides = self.decoder_conv_t_strides[i],\n",
" padding = 'same',\n",
" name = 'decoder_conv_t_' + str(i)\n",
" )(x)\n",
" \n",
" if i < self.n_layers_decoder - 1:\n",
" if self.use_batch_norm: ### The order of layers is opposite to AutoEncoder\n",
" x = tf.keras.layers.BatchNormalization()(x) ### AE: LeakyReLU -> BatchNorm\n",
" x = tf.keras.layers.LeakyReLU()(x) ### VAE: BatchNorm -> LeakyReLU \n",
" if self.use_dropout:\n",
" x = tf.keras.layers.Dropout(rate=0.25)(x)\n",
" else:\n",
" x = tf.keras.layers.Activation('sigmoid')(x)\n",
" \n",
" decoder_output = x\n",
" self.decoder = tf.keras.models.Model(decoder_input, decoder_output, name='decoder') ### added (name)\n",
" \n",
" ### THE FULL AUTOENCODER\n",
" self.model = VAEModel(self.encoder, self.decoder, self.r_loss_factor)\n",
" \n",
" \n",
" def save(self, folder):\n",
" self.save_params(os.path.join(folder, 'params.pkl'))\n",
" self.save_weights(folder)\n",
"\n",
"\n",
" @staticmethod\n",
" def load(folder, epoch=None): # VariationalAutoEncoder.load(folder)\n",
" params = VariationalAutoEncoder.load_params(os.path.join(folder, 'params.pkl'))\n",
" VAE = VariationalAutoEncoder(*params)\n",
" if epoch is None:\n",
" VAE.load_weights(folder)\n",
" else:\n",
" VAE.load_weights(folder, epoch-1)\n",
" VAE.epoch = epoch\n",
" return VAE\n",
"\n",
" \n",
" def save_params(self, filepath):\n",
" dpath, fname = os.path.split(filepath)\n",
" if dpath != '' and not os.path.exists(dpath):\n",
" os.makedirs(dpath)\n",
" with open(filepath, 'wb') as f:\n",
" pickle.dump([\n",
" self.input_dim,\n",
" self.encoder_conv_filters,\n",
" self.encoder_conv_kernel_size,\n",
" self.encoder_conv_strides,\n",
" self.decoder_conv_t_filters,\n",
" self.decoder_conv_t_kernel_size,\n",
" self.decoder_conv_t_strides,\n",
" self.z_dim,\n",
" self.r_loss_factor,\n",
" self.use_batch_norm,\n",
" self.use_dropout,\n",
" self.epoch\n",
" ], f)\n",
"\n",
"\n",
" @staticmethod\n",
" def load_params(filepath):\n",
" with open(filepath, 'rb') as f:\n",
" params = pickle.load(f)\n",
" return params\n",
"\n",
"\n",
" def save_weights(self, folder, epoch=None):\n",
" if epoch is None:\n",
" self.save_model_weights(self.encoder, os.path.join(folder, f'weights/encoder-weights.h5'))\n",
" self.save_model_weights(self.decoder, os.path.join(folder, f'weights/decoder-weights.h5'))\n",
" else:\n",
" self.save_model_weights(self.encoder, os.path.join(folder, f'weights/encoder-weights_{epoch}.h5'))\n",
" self.save_model_weights(self.decoder, os.path.join(folder, f'weights/decoder-weights_{epoch}.h5'))\n",
"\n",
"\n",
" def save_model_weights(self, model, filepath):\n",
" dpath, fname = os.path.split(filepath)\n",
" if dpath != '' and not os.path.exists(dpath):\n",
" os.makedirs(dpath)\n",
" model.save_weights(filepath)\n",
"\n",
"\n",
" def load_weights(self, folder, epoch=None):\n",
" if epoch is None:\n",
" self.encoder.load_weights(os.path.join(folder, f'weights/encoder-weights.h5'))\n",
" self.decoder.load_weights(os.path.join(folder, f'weights/decoder-weights.h5'))\n",
" else:\n",
" self.encoder.load_weights(os.path.join(folder, f'weights/encoder-weights_{epoch}.h5'))\n",
" self.decoder.load_weights(os.path.join(folder, f'weights/decoder-weights_{epoch}.h5'))\n",
"\n",
"\n",
" def save_images(self, imgs, filepath):\n",
" z_mean, z_log_var, z = self.encoder.predict(imgs)\n",
" reconst_imgs = self.decoder.predict(z)\n",
" txts = [ f'{p[0]:.3f}, {p[1]:.3f}' for p in z ]\n",
"        VariationalAutoEncoder.showImages(imgs, reconst_imgs, txts, 1.4, 1.4, 0.5, filepath)  # fixed: class is VariationalAutoEncoder\n",
" \n",
"\n",
" def compile(self, learning_rate):\n",
" self.learning_rate = learning_rate\n",
"        optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)  # 'lr' is deprecated in TF 2.x\n",
" self.model.compile(optimizer=optimizer) # CAUTION!!!: loss(y_true, y_pred) function is not specified.\n",
" \n",
" \n",
" def train_with_fit(\n",
" self,\n",
" x_train,\n",
" batch_size,\n",
" epochs,\n",
" run_folder='run/'\n",
" ):\n",
" history = self.model.fit(\n",
" x_train,\n",
" x_train,\n",
" batch_size = batch_size,\n",
" shuffle=True,\n",
" initial_epoch = self.epoch,\n",
" epochs = epochs\n",
" )\n",
" if (self.epoch < epochs):\n",
" self.epoch = epochs\n",
"\n",
" if run_folder != None:\n",
" self.save(run_folder)\n",
" self.save_weights(run_folder, self.epoch-1)\n",
" \n",
" return history\n",
"\n",
"\n",
" def train_generator_with_fit(\n",
" self,\n",
" data_flow,\n",
" epochs,\n",
" run_folder='run/'\n",
" ):\n",
" history = self.model.fit(\n",
" data_flow,\n",
" initial_epoch = self.epoch,\n",
" epochs = epochs\n",
" )\n",
" if (self.epoch < epochs):\n",
" self.epoch = epochs\n",
"\n",
" if run_folder != None:\n",
" self.save(run_folder)\n",
" self.save_weights(run_folder, self.epoch-1)\n",
" \n",
" return history\n",
"\n",
"\n",
" def train_tf(\n",
" self,\n",
" x_train,\n",
" batch_size = 32,\n",
" epochs = 10,\n",
" shuffle = False,\n",
" run_folder = 'run/',\n",
" optimizer = None,\n",
" save_epoch_interval = 100,\n",
" validation_data = None\n",
" ):\n",
" start_time = datetime.datetime.now()\n",
" steps = x_train.shape[0] // batch_size\n",
"\n",
" total_losses = []\n",
" reconstruction_losses = []\n",
" kl_losses = []\n",
"\n",
" val_total_losses = []\n",
" val_reconstruction_losses = []\n",
" val_kl_losses = []\n",
"\n",
" for epoch in range(self.epoch, epochs):\n",
" epoch_loss = 0\n",
" indices = tf.range(x_train.shape[0], dtype=tf.int32)\n",
" if shuffle:\n",
" indices = tf.random.shuffle(indices)\n",
" x_ = x_train[indices]\n",
"\n",
" step_total_losses = []\n",
" step_reconstruction_losses = []\n",
" step_kl_losses = []\n",
" for step in range(steps):\n",
" start = batch_size * step\n",
" end = start + batch_size\n",
"\n",
" total_loss, reconstruction_loss, kl_loss, grads = self.model.compute_loss_and_grads(x_[start:end])\n",
" optimizer.apply_gradients(zip(grads, self.model.trainable_weights))\n",
" \n",
" step_total_losses.append(np.mean(total_loss))\n",
" step_reconstruction_losses.append(np.mean(reconstruction_loss))\n",
" step_kl_losses.append(np.mean(kl_loss))\n",
" \n",
" epoch_total_loss = np.mean(step_total_losses)\n",
" epoch_reconstruction_loss = np.mean(step_reconstruction_losses)\n",
" epoch_kl_loss = np.mean(step_kl_losses)\n",
"\n",
" total_losses.append(epoch_total_loss)\n",
" reconstruction_losses.append(epoch_reconstruction_loss)\n",
" kl_losses.append(epoch_kl_loss)\n",
"\n",
" val_str = ''\n",
" if not validation_data is None:\n",
" x_val = validation_data\n",
" tl, rl, kl = self.model.loss_fn(x_val)\n",
" val_tl = np.mean(tl)\n",
" val_rl = np.mean(rl)\n",
" val_kl = np.mean(kl)\n",
" val_total_losses.append(val_tl)\n",
" val_reconstruction_losses.append(val_rl)\n",
" val_kl_losses.append(val_kl)\n",
" val_str = f'val loss total {val_tl:.3f} reconstruction {val_rl:.3f} kl {val_kl:.3f} '\n",
"\n",
" if (epoch+1) % save_epoch_interval == 0 and run_folder != None:\n",
" self.save(run_folder)\n",
" self.save_weights(run_folder, self.epoch)\n",
"\n",
" elapsed_time = datetime.datetime.now() - start_time\n",
" print(f'{epoch+1}/{epochs} {steps} loss: total {epoch_total_loss:.3f} reconstruction {epoch_reconstruction_loss:.3f} kl {epoch_kl_loss:.3f} {val_str}{elapsed_time}')\n",
"\n",
" self.epoch += 1\n",
"\n",
" if run_folder != None:\n",
" self.save(run_folder)\n",
" self.save_weights(run_folder, self.epoch-1)\n",
"\n",
" dic = { 'loss' : total_losses, 'reconstruction_loss' : reconstruction_losses, 'kl_loss' : kl_losses }\n",
" if not validation_data is None:\n",
" dic['val_loss'] = val_total_losses\n",
" dic['val_reconstruction_loss'] = val_reconstruction_losses\n",
" dic['val_kl_loss'] = val_kl_losses\n",
"\n",
" return dic\n",
" \n",
"\n",
" def train_tf_generator(\n",
" self,\n",
" data_flow,\n",
" epochs = 10,\n",
" run_folder = 'run/',\n",
" optimizer = None,\n",
" save_epoch_interval = 100,\n",
" validation_data_flow = None\n",
" ):\n",
" start_time = datetime.datetime.now()\n",
" steps = len(data_flow)\n",
"\n",
" total_losses = []\n",
" reconstruction_losses = []\n",
" kl_losses = []\n",
"\n",
" val_total_losses = []\n",
" val_reconstruction_losses = []\n",
" val_kl_losses = []\n",
"\n",
" for epoch in range(self.epoch, epochs):\n",
" epoch_loss = 0\n",
"\n",
" step_total_losses = []\n",
" step_reconstruction_losses = []\n",
" step_kl_losses = []\n",
"\n",
" for step in range(steps):\n",
" x, _ = next(data_flow)\n",
"\n",
" total_loss, reconstruction_loss, kl_loss, grads = self.model.compute_loss_and_grads(x)\n",
" optimizer.apply_gradients(zip(grads, self.model.trainable_weights))\n",
" \n",
" step_total_losses.append(np.mean(total_loss))\n",
" step_reconstruction_losses.append(np.mean(reconstruction_loss))\n",
" step_kl_losses.append(np.mean(kl_loss))\n",
" \n",
" epoch_total_loss = np.mean(step_total_losses)\n",
" epoch_reconstruction_loss = np.mean(step_reconstruction_losses)\n",
" epoch_kl_loss = np.mean(step_kl_losses)\n",
"\n",
" total_losses.append(epoch_total_loss)\n",
" reconstruction_losses.append(epoch_reconstruction_loss)\n",
" kl_losses.append(epoch_kl_loss)\n",
"\n",
" val_str = ''\n",
" if not validation_data_flow is None:\n",
" step_val_tl = []\n",
" step_val_rl = []\n",
" step_val_kl = []\n",
" for i in range(len(validation_data_flow)):\n",
" x, _ = next(validation_data_flow)\n",
" tl, rl, kl = self.model.loss_fn(x)\n",
" step_val_tl.append(np.mean(tl))\n",
" step_val_rl.append(np.mean(rl))\n",
" step_val_kl.append(np.mean(kl))\n",
" val_tl = np.mean(step_val_tl)\n",
" val_rl = np.mean(step_val_rl)\n",
" val_kl = np.mean(step_val_kl)\n",
" val_total_losses.append(val_tl)\n",
" val_reconstruction_losses.append(val_rl)\n",
" val_kl_losses.append(val_kl)\n",
" val_str = f'val loss total {val_tl:.3f} reconstruction {val_rl:.3f} kl {val_kl:.3f} '\n",
"\n",
" if (epoch+1) % save_epoch_interval == 0 and run_folder != None:\n",
" self.save(run_folder)\n",
" self.save_weights(run_folder, self.epoch)\n",
"\n",
" elapsed_time = datetime.datetime.now() - start_time\n",
" print(f'{epoch+1}/{epochs} {steps} loss: total {epoch_total_loss:.3f} reconstruction {epoch_reconstruction_loss:.3f} kl {epoch_kl_loss:.3f} {val_str}{elapsed_time}')\n",
"\n",
" self.epoch += 1\n",
"\n",
" if run_folder != None:\n",
" self.save(run_folder)\n",
" self.save_weights(run_folder, self.epoch-1)\n",
"\n",
" dic = { 'loss' : total_losses, 'reconstruction_loss' : reconstruction_losses, 'kl_loss' : kl_losses }\n",
" if not validation_data_flow is None:\n",
" dic['val_loss'] = val_total_losses\n",
" dic['val_reconstruction_loss'] = val_reconstruction_losses\n",
" dic['val_kl_loss'] = val_kl_losses\n",
"\n",
" return dic\n",
"\n",
"\n",
" @staticmethod\n",
" def showImages(imgs1, imgs2, txts, w, h, vskip=0.5, filepath=None):\n",
" n = len(imgs1)\n",
" fig, ax = plt.subplots(2, n, figsize=(w * n, (2+vskip) * h))\n",
" for i in range(n):\n",
" if n == 1:\n",
" axis = ax[0]\n",
" else:\n",
" axis = ax[0][i]\n",
" img = imgs1[i].squeeze()\n",
" axis.imshow(img, cmap='gray_r')\n",
" axis.axis('off')\n",
"\n",
" axis.text(0.5, -0.35, txts[i], fontsize=10, ha='center', transform=axis.transAxes)\n",
"\n",
" if n == 1:\n",
" axis = ax[1]\n",
" else:\n",
" axis = ax[1][i]\n",
" img2 = imgs2[i].squeeze()\n",
" axis.imshow(img2, cmap='gray_r')\n",
" axis.axis('off')\n",
"\n",
" if not filepath is None:\n",
" dpath, fname = os.path.split(filepath)\n",
" if dpath != '' and not os.path.exists(dpath):\n",
" os.makedirs(dpath)\n",
" fig.savefig(filepath, dpi=600)\n",
" plt.close()\n",
" else:\n",
" plt.show()\n",
"\n",
" @staticmethod\n",
" def plot_history(vals, labels):\n",
" colors = ['red', 'blue', 'green', 'orange', 'black', 'pink']\n",
" n = len(vals)\n",
" fig, ax = plt.subplots(1, 1, figsize=(9,4))\n",
" for i in range(n):\n",
" ax.plot(vals[i], c=colors[i], label=labels[i])\n",
" ax.legend(loc='upper right')\n",
" ax.set_xlabel('epochs')\n",
" # ax[0].set_ylabel('loss')\n",
" \n",
" plt.show()\n"
]
}
],
"source": [
"! cat {nw_path}/VariationalAutoEncoder.py"
]
},
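{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `Sampling` layer in the file above implements the reparameterization trick: rather than sampling `z` directly from `N(mu, sigma^2)`, it draws `epsilon` from `N(0, 1)` and computes `z = mu + exp(log_var / 2) * epsilon`, which keeps `z` differentiable with respect to `mu` and `log_var`. The next cell is a minimal standalone NumPy sketch of this computation (added for illustration; it is not part of VariationalAutoEncoder.py):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"rng = np.random.default_rng(0)\n",
"mu = np.array([0.0, 2.0])\n",
"log_var = np.array([0.0, np.log(4.0)])   # variances 1 and 4\n",
"\n",
"# Reparameterization: z = mu + sigma * epsilon, with epsilon ~ N(0, 1)\n",
"epsilon = rng.standard_normal((10000, 2))\n",
"z = mu + np.exp(log_var / 2) * epsilon\n",
"\n",
"print(z.mean(axis=0))   # close to [0, 2]\n",
"print(z.var(axis=0))    # close to [1, 4]"
]
},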
{
"cell_type": "markdown",
"metadata": {
"id": "A1sT3O6Ofdd2"
},
"source": [
"# Preparing MNIST dataset\n",
"\n",
"## MNIST データセットを用意する"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 18,
"status": "ok",
"timestamp": 1637572854068,
"user": {
"displayName": "Yoshihisa Nitta",
"photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJLeg9AmjfexROvC3P0wzJdd5AOGY_VOu-nxnh=s64",
"userId": "15888006800030996813"
},
"user_tz": -540
},
"id": "zXeh_DeCfZjD",
"outputId": "8be082bd-3934-4c9c-f2ad-828986c64b35"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"2.7.0\n"
]
}
],
"source": [
"%tensorflow_version 2.x\n",
"\n",
"import tensorflow as tf\n",
"import numpy as np\n",
"\n",
"print(tf.__version__)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 1100,
"status": "ok",
"timestamp": 1637572855155,
"user": {
"displayName": "Yoshihisa Nitta",
"photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJLeg9AmjfexROvC3P0wzJdd5AOGY_VOu-nxnh=s64",
"userId": "15888006800030996813"
},
"user_tz": -540
},
"id": "yHVkd5c0fkzB",
"outputId": "54d98c13-6bdf-4c2c-9833-18ddcab165fa"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n",
"11493376/11490434 [==============================] - 0s 0us/step\n",
"11501568/11490434 [==============================] - 0s 0us/step\n",
"(60000, 28, 28)\n",
"(60000,)\n",
"(10000, 28, 28)\n",
"(10000,)\n"
]
}
],
"source": [
"# prepare data\n",
"(x_train_raw, y_train_raw), (x_test_raw, y_test_raw) = tf.keras.datasets.mnist.load_data()\n",
"print(x_train_raw.shape)\n",
"print(y_train_raw.shape)\n",
"print(x_test_raw.shape)\n",
"print(y_test_raw.shape)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 29,
"status": "ok",
"timestamp": 1637572855158,
"user": {
"displayName": "Yoshihisa Nitta",
"photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJLeg9AmjfexROvC3P0wzJdd5AOGY_VOu-nxnh=s64",
"userId": "15888006800030996813"
},
"user_tz": -540
},
"id": "pakzbL16iVP3",
"outputId": "b9236c19-c21d-406d-d303-5d49101b2fd8"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(60000, 28, 28, 1)\n",
"(10000, 28, 28, 1)\n"
]
}
],
"source": [
"x_train = x_train_raw.reshape(x_train_raw.shape+(1,)).astype('float32') / 255.0\n",
"x_test = x_test_raw.reshape(x_test_raw.shape+(1,)).astype('float32') / 255.0\n",
"print(x_train.shape)\n",
"print(x_test.shape)"
]
},
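{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell above appends a channel axis and rescales the pixel values from [0, 255] to [0.0, 1.0]. The next cell repeats the same preprocessing on synthetic data as a quick sanity check (added for illustration; `imgs` is a dummy array, not MNIST):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Two synthetic 28x28 uint8 'images' standing in for MNIST digits\n",
"imgs = np.arange(2 * 28 * 28, dtype=np.uint8).reshape(2, 28, 28)\n",
"x = imgs.reshape(imgs.shape + (1,)).astype('float32') / 255.0\n",
"\n",
"print(x.shape)           # (2, 28, 28, 1)\n",
"print(x.min(), x.max())  # 0.0 1.0"
]
},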
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 143
},
"executionInfo": {
"elapsed": 23,
"status": "ok",
"timestamp": 1637572855160,
"user": {
"displayName": "Yoshihisa Nitta",
"photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJLeg9AmjfexROvC3P0wzJdd5AOGY_VOu-nxnh=s64",
"userId": "15888006800030996813"
},
"user_tz": -540
},
"id": "45ij1IwMfnFH",
"outputId": "ee169e72-f658-40d2-f58e-efd0e46f95d4"
},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAABigAAACSCAYAAADfGkI9AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjIsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+WH4yJAAAgAElEQVR4nO3df/zW870/8PcHRSkJ86MxbGLI/Cg3O7jJr3Zj0U7ZnGZKtuTHiaRRzqKNmjHWYYT8OCp2kpgzRyVldPJjR520TdpGi2rEKBGR+nz/+p7beb+f71xXV9fnfX1+3O//PR+9rvf1dOvV9bk+18v1ftbV19cnAAAAAAAARdqq1g0AAAAAAAAtjwMKAAAAAACgcA4oAAAAAACAwjmgAAAAAAAACueAAgAAAAAAKNw2n/eHdXV19UU1QtNQX19fV8Tz2Htk2XvUir1Hrdh71Iq9R63Ye9RKEXvPviPLax61Yu9RK5vae75BAQAAAAAAFM4BBQAAAAAAUDgHFAAAAAAAQOEcUAAAAAAAAIVzQAEAAAAAABTOAQUAAAAAAFA4BxQAAAAAAEDhHFAAAAAAAACFc0ABAAAAAAAUzgEFAAAAAABQOAcUAAAAAABA4RxQAAAAAAAAhXNAAQAAAAAAFM4BBQAAAAAAUDgHFAAAAAAAQOEcUAAAAAAAAIVzQAEAAAAAABTOAQUAAAAAAFC4bWrdADR1u+66a6ru2bNnWNOnT5+QvfLKKyGbOHFiyTUbNmzY3BZpwVq1ahWyI444ImQjR45M1ZMmTQprpkyZUr3GAABoMG3atAnZCy+8kKrXrFkT1gwdOjRk8+bNq15jOb70pS+l6oEDB4Y1nTp1CtmcOXNClv19CoCWbdtttw1Zx44dSz7u6quvDtkFF1xQUQ+HHHJIyN59992QrV+/vqx1zZFvUAAAAAAAAIVzQAEAAAAAABTOAQUAAAAAAFA4BxQAAAAAAEDhDMmGzXDttdeG7NJLL03Vbdu2DWvq6+tDduqpp4bssssuS9UPP/xwWHPNNdeE7OWXX47N0uJ06NAhZHnDrr/5zW+GbNGiRal66dKlVesLoDFq3759qn7kkUfCmpNOOilkdXV1IXvvvfdS9X/+53+GNU899VTIJkyYULJPgEr07NkzZHlDOrMGDRoUsldeeaWiHvr27RuyPn36hOzII49M1bvssktZ19+4cWPIDMlm//33D9msWbNCtvPOO6fqvH8fS5YsqV5jQINr06ZNyM4555yQ3XrrrRVdP+/nTjkWLlwYsq22it8ZWLx4cci++93vpupVq1aFNW+88UZFfTUmvkEBAAAAAAAUzgEFAAAAAABQOAcUAAAAAABA4ery7o3/v39YV7fpP2zkWrVqFbK8+3vlyd5TbP369VXpqTmor6+PN15uAEXvvex9qJMkSW688caQ/dM//VPI2rVrl6rz7k39ef/ONlfefjzttNNCNnv27Ko9Z2PQXPdeNd1www0hy841KdecOXNCduKJJ1Z0rabO3itWdq5PkuTP3rnnnntCNnTo0AbpqVbsverJ2xv/8i//kqp32mmnsq6V93N+xYoVJR/XqVOnkHXu3DlkjeF+1/ZesbL3QU+SJHn33Xdr0Ent2XvVM3/+/JAdfvjhNeikOtauXRuy7H25kyR/BlA5ith7LWHfFS3vc5/p06eH7IQTTgjZlVdemarzfpdqaF7zqJXmuvcOPfTQkOX9PGwM8j6jLmfGRd5nfRdffHHI/vznP1fWWAPb1N7zDQoAAAAAAKBwDigAAAAAAIDCOaAAAAAAAAAK54ACAAAAAAAo3Da1buD/2m677ULWsWPHkJUzBLNXr14hO+CAA8rqY/ny5an63//938Oav/zlLyG7//77Q/bxxx+X9ZzUVt6Q6R/84Ac16KS01q1bh2yPPfaoQSfU2jnnnJOqhw0bFtZUOqD9uOOOq+hxsLl++MMfpuq8n/GTJk0K2S
WXXBKyBx98MFW/8MILW9gdjd3WW28dsr59+4Zs5MiRIVuzZk2qHjduXFjz8MMPl9XHm2++maqzr89JkiTXX399yB599NGQHXXUUanae8mm7eCDDw7Z2Wefnaovu+yysObHP/5xyK677rqq9UXTlfe6d9VVV4XssMMOK6Kd/7V+/fqQvfXWWyF74IEHQvbiiy+m6ryB2AceeGDIKh2ITfMxYsSIkOUNxM7zySefVLsdmol99tknZLvttluqPuWUU8Ka7HvLJEmSdevWhez222+vvDlq6tlnnw3ZkiVLSj7uzDPPDFmbNm0q6uGkk04K2U033RSy4cOHh2zRokUVPWcRfIMCAAAAAAAonAMKAAAAAACgcA4oAAAAAACAwjmgAAAAAAAAClfYkOzevXuHLDu8ctq0aWFN0cO9kiRJ9txzz1R9+eWXl/W4vAEk2YGI9957b1izYcOGzeiOhpA34Khc8+bNS9XPPPNMxdfKDtXcZZddKr4WzV/eQERozHr16hWyUaNGpeojjzwyrHnttddC1qNHj5DtscceW9AdTdEhhxwSsp/97Gch++ijj0J26qmnpurFixdXra9Zs2ZV7Vo0HaeffnrIfvWrX4Wsbdu2Ja91xRVXhKx///4lH5cd2J4kcQBxkiRJfX19yWvlWb58ecj+8pe/hCw7pHblypVhzdixYyvqoaX7h3/4h5BdffXVZT02+/c3bty4sCZvb5Tz+vjOO++E7Lnnniurr3LMnDmzateiaWjVqlXIjjvuuFR93nnnlXWtvNeppUuXVtQXjVPeYOsuXbqk6m7duoU1Z511VsjyPofZaqv0/9/dvn37svqaMWNGyAzJbjh5P4tuvPHGkB188MEhy/695322N3ny5JC99NJLJfv661//GrIf//jHJR9XruzvNUmSJBs3bgzZlVdemaob09Bs36AAAAAAAAAK54ACAAAAAAAonAMKAAAAAACgcA4oAAAAAACAwtV93oC0urq6yqan5cgbBD179uxUfdRRR4U1a9eurej58gaj3HHHHRVdK0/nzp1DdvHFF4dsm23Sc8gPP/zwsKacgSqNRX19fV0Rz1PNvVeOiRMnhux73/teWY/NDn19/PHHK+4jO6gnb7DcDjvsELK77rorZIMGDaq4j8aoue69cuX9HX//+99P1Xmv5z//+c9DdsQRR4Ts5JNPTtV5A6FOPPHEkn02Ry1971Vqv/32C9mTTz4ZsjFjxqTqu+++u6zr/+u//mvI1qxZk6rLHRraWNl70Ze+9KVUvWDBgrCmY8eOITvnnHNCNmnSpOo1lnHbbbeF7MILLwxZdthnkiTJ3LlzG6SnzWHvVWbdunUhyxvy+tRTT6Xqdu3ahTV5vxflDcDO2n333UNWVxf/Oisdkl3ptfKGhPbs2TPvWvZeRteuXVP1tGnTwpovfOELIcsbAJwdovmnP/1py5prRorYe01p3zUG2267bcj+4z/+I1X36NGjrGt17949ZH7e1l7ea1ffvn1Tdd7PnTPOOCNkeQOwP/nkk1T96quvhjV5n7nMnz8/ZNn3nDfccENYc/zxx4esd+/eIXviiSdCVrSWvvcOOuigkGWHZM+ZM6dBexg2bFjI8t7HDR06tGrPmf2d6IEHHqjatcu1qb3nGxQAAAAAAEDhHFAAAAAAAACFc0ABAAAAAAAUbpvSS6pj4MCBIXv55ZdT9UcffRTW/PGPf2ywnqrtww8/DNlVV12Vqi+77LKwpn///g3WE+V5++23Q5Z3P92RI0eGLDtLZUtk/03kzWDJu0/x3nvvXbUeaJzy7qmZ3aN5+/jKK68MWd59oLPXevHFFze3RVqwNm3ahCzvHvxPP/10yP7t3/6touds27ZtyFauXFnRtWg6br/99lS94447hjWvvfZayPLuJVxNnTp1StWbuLd+yN57770G64mGlXfv6bz7pb/yyishy96POu93iMYgO68gSZKkdevWIcu7p3f2PS3lyd57PUmS5Oabb07Vef
dsz5N3b2szJ2issrM7kyRJrrvuupAdeuihJa/161//OmS///3vK2uMimTv5Z8kSfLDH/4wZHlzMzt06JCq82ZQTJkyJWQDBgwIWXb+3erVq8OaPKecckrIHn744VSdN2e3T58+IWsM8yaIFi1aVOsWkptuuilkea9x1ZxBcf7556fqhQsXhjW1+hzeNygAAAAAAIDCOaAAAAAAAAAK54ACAAAAAAAonAMKAAAAAACgcIUNya50CGZTsmDBglq3QIXGjRsXsoMPPjhkkydPDtm6deuq1sfJJ5+cqvOGf+aZNGlS1Xqg6Zo6dWrI2rdvH7Kdd965iHZoQb7+9a+H7KSTTgrZaaedFrK8AXNZBxxwQMiyQ2aTJEm6d+9e8lo0HXkDh3faaaeSj/vrX/8asm9+85sh22233VL13Llzw5r169eX1Vd2mPAXv/jFsOa5554LWWMY0Ed5jj/++FR94YUXhjVvv/12yPL2XmMdip01ffr0WrfQrHXr1i1kt9xyS8jyhs2W47777gtZpb+Tr1y5MmSPPPJIqp4wYUJY89prr4Us73UVzjrrrJANGTKk5OPyfraOHj06ZGvWrKmsMcqSfR/0i1/8IqzZc889QzZnzpyQPf3006n6oYceCmuWLl1aVl+tW7dO1Xk/uy+//PKQrV27NmTnnntuqs7be9DYHX300al69913D2sMyQYAAAAAAFoMBxQAAAAAAEDhHFAAAAAAAACFc0ABAAAAAAAUrrAh2S3BrrvuWnJN3vA8am/JkiUhyw56KsKJJ56YqrNDnWgZ8gYO77fffiUf98Ybb4Tss88+C1k5g90POuigkN11110hyxvwWI477rgjZHfeeWdF16L2zjvvvJBNnDgxZDNmzCh5rXbt2oUsb7907NgxZMccc0yqNoC4aevcuXPIvva1r5V83L777huyG264IWT19fWp+re//W1YM2rUqJBtv/32IRs/fnyqfv/998OaM888MzZLo9S2bduQjRw5suSa73znOyErd5AnzVvea9e0adNCVulA7Dzt27ev2rV22GGHkA0fPvxz6yTJH/p96aWXVq0vmqa8QcXjxo0L2caNG0O2atWqkteq1YDXliz7PugPf/hDWNOjR4+QLV++vGo9HHvssSG75557UvWOO+4Y1kyaNClk11xzTcgMWqcIf/7zn0N20UUXpeq818vmwDcoAAAAAACAwjmgAAAAAAAACueAAgAAAAAAKJwDCgAAAAAAoHCGZFfotNNOC9n1119f8nF5gz7h/ytnMPdHH30UMoNgm5e8QYRt2rQp+bhhw4aFLG/YYteuXUteK28v1tXVhSw7ZLZct912W8iOOuqokH3/+9+v6Po0rOwezRv+27t374quPXjw4JB17969rMdutZX/76I5yRtyuXDhwlT99a9/PazJDkRMkiRZsWJFyL71rW+l6j59+oQ1J554Ysk+kyRJfve736Xqm266Kax58803y7oWtTdo0KCQnXDCCan6pz/9aVgzY8aMBuuJpm3s2LEhq3Qg9vr160P2m9/8JmSzZ88uea25c+eG7MADDwxZly5dQtarV69Ufeihh4Y1l1xySci23nrrkI0ZMyZVv/XWW7FZmqzDDjssVZfzO2+S5P/ee+utt6bqV199tfLGqJpu3bql6r///e9hzYYNGyq69u677x6y7D5IkvzP6J555plUPXz48LDmpZdeqqgvaAgff/xxyJYuXVp8IzXgN3kAAAAAAKBwDigAAAAAAIDCOaAAAAAAAAAKV/d59w+vq6ur7ObiTdyee+6Zqjt06BDWTJkyJWQHHXRQyN59991Uffjhh4c1y5Yt29wWa6a+vj7ehL4BtNS9l73/4SGHHBLW5N2nf968eQ3WU2PR0vfec889F7LsXsi7//7GjRur1sPbb78dskpnUHzhC18I2TbbxLFI/fr1S9X3339/Rc+3JVr63svTvn37VJ13n9mbb745ZAsWLAhZ9u+4R48eYU3enJ28n7nZ18
zFixeHNU2JvRdlXzvy5k307NkzZGvWrAlZ9jWnbdu2ZfXw4Ycfhiy7H/NmXjQlLWnvbbfddiGbOXNmyI499tiS18q7j3X23utJkiQPPfRQqr7vvvvCmunTp5d8vuaoue69Tp06hexHP/pRyPL2Y3YOU9++fcOaWvwukJ2PNnny5LDm9NNPL+ta2fd3AwYMCGuq+Z42TxF7rzG85tVCdhZJ3kyWvFl3ebObvvjFL1avsUagub7mlSvv99fLLrssVV9xxRVhzf/8z/+E7Oc//3nIsjPC8t7D5dl+++1DduSRR6bqvM9q8uTt7XfeeSdV5/3MX716dVnXr1RL33t5Mz7zPvstWt4cqHJmSlU6L/Qb3/hGyGbNmlXycVtiU3vPNygAAAAAAIDCOaAAAAAAAAAK54ACAAAAAAAonAMKAAAAAACgcI1qSHarVq1CNnr06JB17do1ZI8++miq/sMf/lDWc373u98NWXZA50477RTW7LjjjmVd/6mnnkrVp512Wljz8ccfl3WtxqClD9Kppi5duoTs6aefTtV5+yxvkHBL0NL33pe//OWQXXzxxan6+OOPD2u23XbbkO2///4ln2/ChAkh+8EPflDyceX629/+FrLddtstZL/61a9SdXagchFa+t4rR96wwyFDhpT12BdeeCFV//KXvwxrhg8fHrL//u//DtmgQYPKes6mwt4r7ZFHHgnZt771rZCtXLkyZOPHj0/VS5cuDWvGjRsXsjfeeCNkAwcOTNVz584Na5qSlrT38t6PLVy4sOTjXn311ZBt2LChrOfcd999U3Xr1q3DmgMOOKCs52xuWtLeawlef/31kO21114lH3fWWWeFLG8IdzUZkl0deZ/pLFu2LFXnvef/9NNPQ3bMMceErBYD4RtSS3rN22GHHUI2YsSIkGXf9w8ePDisWbBgQcgOO+ywkJ1++ukl+8obLvzVr341ZHvvvXfJa5V7/eznsGvXrg1rRo4cGbKpU6eGLO/36nK0pL3Xvn37kF1yySUh+8lPflJEO/8rb0j8xo0bC+3hoosuCtmkSZNCVs3PrQ3JBgAAAAAAGg0HFAAAAAAAQOEcUAAAAAAAAIVzQAEAAAAAABSuUQ3JHjp0aMh+8YtfFNlCg8sb6pQddJskcWhoY9GSBuk0tDvvvDNk2SHEeUOQ+vbt22A9NWb2XmXyBkLNnj07ZEcccUSqPvroo8OavKHElSp3SHZ2UGm2zyLYe8U67rjjQvbkk0+GrFevXiF74oknGqSnWrH3op/+9KepOm+w23XXXRey66+/vqLnu/3220N2/vnnh+zXv/51qj7jjDMqer7GoiXtva233jpk++23X8nHbcmQ7H/+539O1bfccktY88ADD4Ssf//+ZV2/KWtJe68lGDBgQMjuvffeko/L2//9+vWrRkubZEj25mvTpk3ILrjggpDdeOONJa+1ZMmSkHXu3LmyxpqQlvSad+WVV4ZszJgxJR9XzpDpJMkfND1nzpyS13/++edDlve53YwZM0peq1J5A8Tnz58fsryhyl/5ylcqes7muvfyPv/Ivu9KkiQZPXp0Ee18rsYwJDtP3sD5P/7xj1W7viHZAAAAAABAo+GAAgAAAAAAKJwDCgAAAAAAoHAOKAAAAAAAgMJtU+sG/q/sgMEkSZIdd9wxZHlDUrNDPPbcc8/qNVZF3bp1C9lvfvObkD344IMhGzZsWKr+9NNPq9cYhdt7771Lrnn99dcL6ITm7IMPPgjZe++9V/JxeQNeqzkke8KECSG74oorqnZ9mq68IWazZs0KWXMbiE3UpUuXkGUHVOcNMZw4cWLVerj22mtL9pAkSdK7d+9U/bWvfS2s+f3vf1+1vqievMHWf/rTnxr0OcePH5+qf/KTn4Q1p5xySoP2QO3lvU589NFHIcsbyN5UVHOoJo1P//79Q1bOQOzHH388ZL169apKTzQO7dq1C1neAPW8YdfZn8HTpk0La5YuXRqyRx99NGTLly//vDYbja
9+9ash69SpU8iGDh1aRDtNSps2bVL1JZdcEtbkvc9i004//fSQ5b0XWbduXVWf1zcoAAAAAACAwjmgAAAAAAAACueAAgAAAAAAKFyjmkGRdx+5UaNGlfXYIUOGpOru3buHNVOmTAnZ4sWLy2uuSgYPHhyys846q6x12f+mvHs+vvTSS1vQHQ1l1113DVk5MyigVvLm5VTTu+++26DXp+nYaqv0/ytx5plnhjUDBw4sqh0akd/97nchmz9/fqrOu0//Z599VrUe8mahrVq1quS6k046Kawxg4L/b/369ak67x7cdXV1RbVDA2jdunWqvvnmm8Oas88+O2Q33HBDyPJm4TQG2Z/fu+++e1jzwAMPVHTtat/Xmi23//77h+zSSy8t67HvvPNOqp45c2ZVeqLxynsvljcj4vnnnw/ZY489lqrXrl1bvcYaiWOPPTZV583jffLJJ0OWnWFFknTo0CFVlztvYtGiRSHLm3dSqQEDBqTqXXbZpWrXbmhf+cpXQrbNNg1/fOAbFAAAAAAAQOEcUAAAAAAAAIVzQAEAAAAAABTOAQUAAAAAAFC4RjUke0tkB4/lDSJrDPIGfeYNv7n88stD1rVr11Q9a9assObkk08OmcHZxWrfvn3IZs+eHbK8QWNZbdu2DVneALo82SHcZ5xxRlmPyxvKmB3euM8++4Q1o0ePDpmBoI3Tf/3Xf4WsR48eqfr4449v0B6GDRsWsuywxSRpnkPRSDvvvPNS9VtvvRXWTJ48uah2KEDHjh1DNmPGjJCtXLkyZN/73vdSdTUHYufJG6CXlx199NGp+vDDD2+wnmh+8oZk07TdcccdqTo7LHNTKh0q3dB23nnnkN1+++2p+tvf/nbF1//ggw9S9Y033ljxtWgYd999d8jK+X02SZJkyJAhqfrBBx+sSk80XnmD7rP7oDnadtttQ9a5c+eQ3Xfffan62WefDWvOPvvsqvVFNH/+/JCNGDGiomudf/75W9rOFnv66adD9vjjj1d0rUmTJoXsww8/rOham8M3KAAAAAAAgMI5oAAAAAAAAArngAIAAAAAACicAwoAAAAAAKBwzWZIdlOWNyTqiSeeCNljjz2Wqo899tiw5qmnngrZTjvttAXdsbm23377kB144IEhK2cg4oUXXhiyiy66qKJrlaucIdl5rrnmmqr1QMO69957QzZw4MBUvddee4U1p556asimT59e8vnyXqvatWsXstWrV4esX79+Ja9P05bde2+++WZYY1h685I3OHG77bYLWXbIbJIkybJlyxqkp0358pe/XNa6VatWpeoFCxY0RDs0E/vss0+qbtOmTVizYsWKgrqhIZx77rmpOu+99AUXXBCy119/vaLn69SpU8g+++yzkLVv3z5VDx8+PKzp3bt3yFq1ahWyHXbYoWRfGzZsCNn48eNDNnXq1FS9ZMmSktemevL+Lm+55ZZUfcghh5R1rV/+8pche/HFFytrDBqxvKHfAwYMCNkuu+wSspEjR6bqCRMmVK2vlub9999P1VdddVVYc+2114Ys7zOKiRMnVtTDCSecELK8v/eGNG/evJCNHTu20B62lG9QAAAAAAAAhXNAAQAAAAAAFM4BBQAAAAAAUDgHFAAAAAAAQOEMyW6k8obFZocQT5s2Lazp2LFjg/UENA95Q4gXLlyYqvOGZI8ZMyZkr732Wsi6d++eqgcPHhzW5A0EHTduXMiWLl0aMpquXXfdNWRdunRJ1XmDkWleWrduHbK8QXLPPfdcEe18ru985zshyxuq98ILL6Tq++67r6FaohnIDpFt27ZtWPPQQw8V1Q41kjd0uG/fviHLvif7xje+EdYceuihIVu3bl3I9thjj81pcbPMnTs3ZKNHjw7ZzJkzG6wHKtOzZ8+Q9evXr+TjsoO0kyRJRowYEbJPPvmkssagRg466KCQ3Xnnnan6yCOPDGvuuuuukN10000h8ztu9Xz88cep+tZbbw1rNm7cGLK8zzb23Xff6jVWRf
Pnz0/VP/rRj8Ka5rCnfIMCAAAAAAAonAMKAAAAAACgcA4oAAAAAACAwtXV19dv+g/r6jb9h9TcBx98ELJ27dqFrK6urmrPWV9fX72LfY6mvPc6dOgQsnnz5oWs0vvb5f19ft6/44a4ft4Mg6OPPjpky5Ytq1pf9l7DOuaYY1L12LFjw5quXbuG7J133glZ3r3ks6ZOnRqyiy++uKzrF83eq568fTVkyJBUnTdL6f3332+wnhqz5rr3unXrFrJZs2aFLO/nyqJFi6rWR3YWzvjx48Oab3/72yG77bbbQjZq1KhUvXbt2i3srraa697LM2XKlJBNnz49ZL/97W9T9YcffljW9fPuwT9o0KBU/eqrr4Y1+++/f1nXb26ay97Lvneu5nv1alqxYkXInn322ZA98sgjIXvsscdS9aeffhrWbNiwYQu6K1YRe68xvObtsMMOIcv7GZx935+dnZMkSTJ8+PCQmTexeZrLa15jcNVVV4Us+zlM3ryevPd6Bx54YMiys6Huv//+sCbv31Jj1ZL2Xvv27UOWNyfz2muvLaKd/7Vq1aqQnXDCCSHLvuds6vMmNrX3fIMCAAAAAAAonAMKAAAAAACgcA4oAAAAAACAwjmgAAAAAAAACrdNrRugfEuWLEnVeQOxqb28Ya6nnnpqyJ544omQ7b333iWv/8wzz4TsxRdfLHmtvIGdf//730M2Z86cktfPG4K3evXq2CxNRnYgYt5wpp/97Gchu/DCC0tee8KECSHLDkZOkvIHjtJ0dejQoeSaljoQuyV5+eWXQ/b666+H7Pnnnw9Z3759U/Urr7wS1uQNO8yTHcJ98sknhzXZgYhJkiRjxowJWVMfit2SLV++PGS33HJLyNq2bVvyWnV1ceZf3nDk7NDOPn36lLw2Tcvll1+eqvOGnpczCDNJkuSDDz5I1YsXLy6rh7yfp9nXtLzX3pUrV5Z1fZqmnj17hiw7EDtJkuRvf/tbqs6+biWJgdg0LnvttVfIRowYkarfe++9sCZvsPWoUaNCNnPmzFS9fv36zW2RGsn+HE2SJBk7dmzI7r777lR99dVXhzXlfP6Rp0uXLiHL+zwuL2spfIMCAAAAAAAonAMKAAAAAACgcA4oAAAAAACAwjDhp2MAAAInSURBVDmgAAAAAAAACleXN7jtf/+wrm7Tf0iDGjp0aMh22223VJ03iC/P8OHDq9JTkiRJfX19eU+6hew9suw9asXeq0zeQOxFixaFbPbs2am6f//+DdZTU9OS9t4RRxwRsilTpoRs3333rej6ee+ZsoMSBw8eHNZMnjy5oudr6lrS3suTN7y4X79+qXq//fYLa5YtWxayqVOnhuzxxx9P1Z9++unmtthstfS9R+0Usfcaw74bOHBgyO68886Q/eM//mOqfuyxxxqsp5bMa17DOvjgg1P1ihUrwprVq1cX1U6jYu9RK5vae75BAQAAAAAAFM4BBQAAAAAAUDgHFAAAAAAAQOEcUAAAAAAAAIUzJJvNYpAOtWLvUSv2XmXOPffckN1zzz0hyw5Hfumllxqsp6bG3qNW7D1qxd6jVlrKkGwaF6951Iq9R60Ykg0AAAAAADQaDigAAAAAAIDCOaAAAAAAAAAKZwYFm8V96qgVe49asfeoFXuPWrH3qBV7j1oxg4Ja8JpHrdh71IoZFAAAAAAAQKPhgAIAAAAAACicAwoAAAAAAKBwDigAAAAAAIDCOaAAAAAAAAAK54ACAAAAAAAonAMKAAAAAACgcA4oAAAAAACAwjmgAAAAAAAACldXX19f6x4AAAAAAIAWxjcoAAAAAACAwjmgAAAAAAAACueAAgAAAAAAKJwDCgAAAAAAoHAOKAAAAAAAgMI5oAAAAAAAAAr3/wA+k9rCQlIqjwAAAABJRU5ErkJggg==\n",
"text/plain": [
"