[minGPT] play_math walkthrough
Train GPT on addition¶
Train a GPT model on a dedicated addition dataset to see if a Transformer can learn to add.
github: https://github.com/karpathy/minGPT
A notebook to help understand the minGPT code.
The attention mechanism itself is not covered here.
For the full code and references, see Andrej Karpathy's GitHub.
Purpose of play_math
- learn addition of ndigit-digit numbers
In [2]:
"""
Drive Mount and change directory
If you will not use google colab then you don't need this cell
code: cs231n/assignments
"""
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
# Enter the foldername in your Drive where you have saved the unzipped minGPT folder
FOLDERNAME = 'minGPT/'
assert FOLDERNAME is not None, "[!] Enter the foldername."
import sys
sys.path.append('/content/drive/My Drive/{}'.format(FOLDERNAME))
%cd drive/My\ Drive/$FOLDERNAME
Mounted at /content/drive
/content/drive/My Drive/minGPT
In [ ]:
# set up logging
import logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO,
)
In [ ]:
# make deterministic
from mingpt.utils import set_seed
set_seed(42)
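set_seed just seeds every relevant RNG so the run is reproducible; roughly the following (a sketch -- see mingpt/utils.py for the exact code):

import random
import numpy as np
import torch

def set_seed(seed):
    # seed Python, NumPy and PyTorch (CPU and all GPUs) for reproducibility
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)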
In [ ]:
import numpy as np
import torch
import torch.nn as nn
from torch.nn import functional as F
In [ ]:
from torch.utils.data import Dataset
class AdditionDataset(Dataset):
"""
Returns addition problems of up to some number of digits in the inputs. Recall
that all GPT cares about are sequences of integers, and completing them according to
patterns in the data. Therefore, we have to somehow encode addition problems
as a sequence of integers.
The sum of two n-digit numbers gives a third up to (n+1)-digit number. So our
encoding will simply be the n-digit first number, n-digit second number,
and (n+1)-digit result, all simply concatenated together. Because each addition
problem is so structured, there is no need to bother the model with encoding
+, =, or other tokens. Each possible sequence has the same length, and simply
contains the raw digits of the addition problem.
As a few examples, the 2-digit problems:
- 85 + 50 = 135 becomes the sequence [8, 5, 5, 0, 1, 3, 5]
- 6 + 39 = 45 becomes the sequence [0, 6, 3, 9, 0, 4, 5]
etc.
We will also only train GPT on the final (n+1)-digits because the first
two n-digits are always assumed to be given. So when we give GPT an exam later,
we will e.g. feed it the sequence [0, 6, 3, 9], which encodes that we'd like
to add 6 + 39, and hope that the model completes the integer sequence with [0, 4, 5]
in 3 sequential steps.
fun exercise: does it help if the result is asked to be produced in reverse order?
"""
def __init__(self, ndigit, split):
self.split = split # train/test
self.ndigit = ndigit
self.vocab_size = 10 # 10 possible digits 0..9
# +1 due to potential carry overflow, but then -1 because very last digit doesn't plug back
self.block_size = ndigit + ndigit + ndigit + 1 - 1
# split up all addition problems into either training data or test data
num = (10**self.ndigit)**2 # total number of possible combinations
r = np.random.RandomState(1337) # make deterministic
perm = r.permutation(num)
num_test = min(int(num*0.2), 1000) # 20% of the whole dataset, or only up to 1000
self.ixes = perm[:num_test] if split == 'test' else perm[num_test:]
def __len__(self):
return self.ixes.size
def __getitem__(self, idx):
# given a problem index idx, first recover the associated a + b
idx = self.ixes[idx]
nd = 10**self.ndigit
a = idx // nd
b = idx % nd
c = a + b
render = f'%0{self.ndigit}d%0{self.ndigit}d%0{self.ndigit+1}d' % (a,b,c) # e.g. 03+25=28 becomes "0325028"
dix = [int(s) for s in render] # convert each character to its token index
# x will be input to GPT and y will be the associated expected outputs
x = torch.tensor(dix[:-1], dtype=torch.long)
y = torch.tensor(dix[1:], dtype=torch.long) # predict the next token in the sequence
y[:self.ndigit*2-1] = -100 # we will only train in the output locations. -100 will mask loss to zero
return x, y
The dataset class
In this notebook there is no need to collect training data for the model: every example can be generated in code.
Encoding examples (ndigit = 2):
1) 85 + 50 = 135 -> [8, 5, 5, 0, 1, 3, 5]
2) 6 + 39 = 45 -> [0, 6, 3, 9, 0, 4, 5]
cf) Each operand has at most two digits, so unused positions are filled with 0.
The result can have up to three digits (e.g. 99 + 99 = 198); it is zero-padded in the same way.
class AdditionDataset(Dataset):
    def __init__(self, ndigit, split):
        ...
        """
        num: the number of possible operand pairs
        e.g. ndigit=2 -> the tens digit can be 0..9 and the ones digit can be 0..9,
             so each operand has 10*10 possibilities
        cf) normally a 2-digit number cannot start with 0, but here it can,
            because unused positions are automatically zero-padded.
        """
        num = (10**ndigit)**2     # ndigit=2 -> num = 10**4
        perm = r.permutation(num) # random permutation of the integers 0 .. 10**4 - 1
        ...
    def __getitem__(self, idx):
        idx = self.ixes[idx]      # one of the indices 0..9999 (up to 4 digits), e.g. 325
        nd = 10**self.ndigit
        a = idx // nd             # a = 325 // (10**2) = 3
        b = idx % nd              # b = 325 %  (10**2) = 25
        c = a + b                 # c = 3 + 25 = 28
        render = f'%0{self.ndigit}d%0{self.ndigit}d%0{self.ndigit+1}d' % (a,b,c)
        # e.g. 03+25=28 becomes "0325028"
        # '%0nd' % a -> %d formats the number a, and 0n pads it to n digits with leading zeros
        # e.g. '%02d' % 2 = '02', '%02d' % 10 = '10'
        dix = [int(s) for s in render]                 # [0, 3, 2, 5, 0, 2, 8]
        x = torch.tensor(dix[:-1], dtype=torch.long)   # [0, 3, 2, 5, 0, 2]
        y = torch.tensor(dix[1:], dtype=torch.long)    # [3, 2, 5, 0, 2, 8]
        y[:self.ndigit*2-1] = -100                     # [-100, -100, -100, 0, 2, 8]
        # -100 is the ignore_index of the cross-entropy loss, so only the answer positions are trained on
        return x, y
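To see the rendering and zero-padding in isolation, here is a small standalone sketch. The helper encode_addition is hypothetical, but it uses the same format string as __getitem__ above:

def encode_addition(a, b, ndigit=2):
    # zero-pad a and b to ndigit digits and the sum to ndigit+1 digits,
    # then return the concatenated digit tokens
    render = f'%0{ndigit}d%0{ndigit}d%0{ndigit + 1}d' % (a, b, a + b)
    return [int(s) for s in render]

print(encode_addition(85, 50))   # [8, 5, 5, 0, 1, 3, 5]
print(encode_addition(6, 39))    # [0, 6, 3, 9, 0, 4, 5]
print(encode_addition(3, 25))    # [0, 3, 2, 5, 0, 2, 8]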
In [ ]:
# create a dataset for e.g. 2-digit addition; the Trainer below needs both splits
ndigit = 2
train_dataset = AdditionDataset(ndigit=ndigit, split='train')
test_dataset = AdditionDataset(ndigit=ndigit, split='test')
train_dataset[0] # sample a training instance just to see what one raw example looks like
Out[ ]:
(tensor([4, 7, 1, 7, 0, 6]), tensor([-100, -100, -100, 0, 6, 4]))
In [ ]:
from mingpt.model import GPT, GPTConfig, GPT1Config
# initialize a baby GPT model
mconf = GPTConfig(train_dataset.vocab_size, train_dataset.block_size,
n_layer=2, n_head=4, n_embd=128)
model = GPT(mconf)
05/07/2022 00:57:32 - INFO - mingpt.model - number of parameters: 4.001280e+05
In [3]:
# minGPT model architecture
from IPython.display import Image
Image('minGPT.png', width=1000)
Out[3]:
GPT model init¶
model.py
class GPT(nn.Module):
def __init__(self, config):
super().__init__()
# layer init
self.apply(self._init_weights) # Applies _init_weights function recursively to every submodule
# (ref: pytorch.docs)
def _init_weights(self, module):
        '''
        Initialize the layer weights:
        1) Linear weight, Embedding weight -> normal(mean=0, std=0.02)
        2) Linear bias -> zeros
        3) LayerNorm weight -> ones, bias -> zeros
        4) pos_emb -> normal(mean=0, std=0.02)
        '''
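As a reference, a minimal sketch of such an init function, assuming the std=0.02 scheme described above (close to what mingpt/model.py does, minus the pos_emb special case):

import torch.nn as nn

def _init_weights(module):
    # Linear / Embedding weights from N(0, 0.02), biases to zero,
    # LayerNorm reset to the identity (weight=1, bias=0)
    if isinstance(module, (nn.Linear, nn.Embedding)):
        module.weight.data.normal_(mean=0.0, std=0.02)
        if isinstance(module, nn.Linear) and module.bias is not None:
            module.bias.data.zero_()
    elif isinstance(module, nn.LayerNorm):
        module.bias.data.zero_()
        module.weight.data.fill_(1.0)

Calling model.apply(_init_weights) in the constructor then applies this recursively to every submodule.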
In [ ]:
from mingpt.trainer import Trainer, TrainerConfig
# initialize a trainer instance and kick off training
tconf = TrainerConfig(max_epochs=50, batch_size=512, learning_rate=6e-4,
lr_decay=True, warmup_tokens=1024, final_tokens=50*len(train_dataset)*(ndigit+1),
num_workers=4)
trainer = Trainer(model, train_dataset, test_dataset, tconf)
trainer.train()
0%| | 0/18 [00:00<?, ?it/s]/apcv/shared/conda-envs/apcv-6244e1d-566/lib/python3.8/site-packages/torch/nn/parallel/_functions.py:61: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
warnings.warn('Was asked to gather along dimension 0, but all '
epoch 1 iter 17: train loss 1.74049. lr 5.994512e-04: 100%|██████████| 18/18 [00:30<00:00, 1.70s/it]
08/16/2020 23:48:16 - INFO - mingpt.trainer - test loss: 1.693525
epoch 2 iter 17: train loss 1.50974. lr 5.977197e-04: 100%|██████████| 18/18 [00:01<00:00, 11.61it/s]
08/16/2020 23:48:18 - INFO - mingpt.trainer - test loss: 1.466473
epoch 3 iter 17: train loss 1.31133. lr 5.948114e-04: 100%|██████████| 18/18 [00:01<00:00, 11.45it/s]
08/16/2020 23:48:20 - INFO - mingpt.trainer - test loss: 1.256615
epoch 4 iter 17: train loss 1.22379. lr 5.907379e-04: 100%|██████████| 18/18 [00:01<00:00, 11.50it/s]
08/16/2020 23:48:21 - INFO - mingpt.trainer - test loss: 1.160792
epoch 5 iter 17: train loss 1.14308. lr 5.855153e-04: 100%|██████████| 18/18 [00:01<00:00, 11.63it/s]
08/16/2020 23:48:23 - INFO - mingpt.trainer - test loss: 1.091487
epoch 6 iter 17: train loss 1.09970. lr 5.791641e-04: 100%|██████████| 18/18 [00:01<00:00, 11.56it/s]
08/16/2020 23:48:25 - INFO - mingpt.trainer - test loss: 1.050111
epoch 7 iter 17: train loss 1.08481. lr 5.717095e-04: 100%|██████████| 18/18 [00:01<00:00, 11.53it/s]
08/16/2020 23:48:26 - INFO - mingpt.trainer - test loss: 1.037456
epoch 8 iter 17: train loss 1.03496. lr 5.631810e-04: 100%|██████████| 18/18 [00:01<00:00, 11.59it/s]
08/16/2020 23:48:28 - INFO - mingpt.trainer - test loss: 0.997156
epoch 9 iter 17: train loss 0.98606. lr 5.536122e-04: 100%|██████████| 18/18 [00:01<00:00, 11.67it/s]
08/16/2020 23:48:30 - INFO - mingpt.trainer - test loss: 0.836543
epoch 10 iter 17: train loss 0.59589. lr 5.430411e-04: 100%|██████████| 18/18 [00:01<00:00, 12.80it/s]
08/16/2020 23:48:31 - INFO - mingpt.trainer - test loss: 0.438013
epoch 11 iter 17: train loss 0.50257. lr 5.315093e-04: 100%|██████████| 18/18 [00:01<00:00, 12.99it/s]
08/16/2020 23:48:33 - INFO - mingpt.trainer - test loss: 0.343370
epoch 12 iter 17: train loss 0.44096. lr 5.190624e-04: 100%|██████████| 18/18 [00:01<00:00, 12.08it/s]
08/16/2020 23:48:34 - INFO - mingpt.trainer - test loss: 0.277625
epoch 13 iter 17: train loss 0.37445. lr 5.057497e-04: 100%|██████████| 18/18 [00:01<00:00, 12.84it/s]
08/16/2020 23:48:36 - INFO - mingpt.trainer - test loss: 0.236511
epoch 14 iter 17: train loss 0.31269. lr 4.916238e-04: 100%|██████████| 18/18 [00:01<00:00, 12.90it/s]
08/16/2020 23:48:37 - INFO - mingpt.trainer - test loss: 0.207689
epoch 15 iter 17: train loss 0.34095. lr 4.767405e-04: 100%|██████████| 18/18 [00:01<00:00, 12.74it/s]
08/16/2020 23:48:39 - INFO - mingpt.trainer - test loss: 0.165566
epoch 16 iter 17: train loss 0.25957. lr 4.611586e-04: 100%|██████████| 18/18 [00:01<00:00, 12.69it/s]
08/16/2020 23:48:40 - INFO - mingpt.trainer - test loss: 0.123080
epoch 17 iter 17: train loss 0.23488. lr 4.449397e-04: 100%|██████████| 18/18 [00:01<00:00, 12.85it/s]
08/16/2020 23:48:42 - INFO - mingpt.trainer - test loss: 0.091252
epoch 18 iter 17: train loss 0.20269. lr 4.281479e-04: 100%|██████████| 18/18 [00:01<00:00, 12.72it/s]
08/16/2020 23:48:43 - INFO - mingpt.trainer - test loss: 0.078601
epoch 19 iter 17: train loss 0.19535. lr 4.108497e-04: 100%|██████████| 18/18 [00:01<00:00, 12.78it/s]
08/16/2020 23:48:45 - INFO - mingpt.trainer - test loss: 0.055412
epoch 20 iter 17: train loss 0.16152. lr 3.931133e-04: 100%|██████████| 18/18 [00:01<00:00, 12.66it/s]
08/16/2020 23:48:46 - INFO - mingpt.trainer - test loss: 0.051874
epoch 21 iter 17: train loss 0.14061. lr 3.750088e-04: 100%|██████████| 18/18 [00:01<00:00, 12.84it/s]
08/16/2020 23:48:48 - INFO - mingpt.trainer - test loss: 0.044502
epoch 22 iter 17: train loss 0.16309. lr 3.566079e-04: 100%|██████████| 18/18 [00:01<00:00, 12.67it/s]
08/16/2020 23:48:49 - INFO - mingpt.trainer - test loss: 0.036376
epoch 23 iter 17: train loss 0.14411. lr 3.379832e-04: 100%|██████████| 18/18 [00:01<00:00, 13.21it/s]
08/16/2020 23:48:51 - INFO - mingpt.trainer - test loss: 0.029843
epoch 24 iter 17: train loss 0.12110. lr 3.192084e-04: 100%|██████████| 18/18 [00:01<00:00, 12.74it/s]
08/16/2020 23:48:52 - INFO - mingpt.trainer - test loss: 0.025040
epoch 25 iter 17: train loss 0.11360. lr 3.003577e-04: 100%|██████████| 18/18 [00:01<00:00, 12.77it/s]
08/16/2020 23:48:54 - INFO - mingpt.trainer - test loss: 0.023500
epoch 26 iter 17: train loss 0.13910. lr 2.815056e-04: 100%|██████████| 18/18 [00:01<00:00, 12.78it/s]
08/16/2020 23:48:55 - INFO - mingpt.trainer - test loss: 0.022606
epoch 27 iter 17: train loss 0.07931. lr 2.627266e-04: 100%|██████████| 18/18 [00:01<00:00, 12.74it/s]
08/16/2020 23:48:57 - INFO - mingpt.trainer - test loss: 0.015403
epoch 28 iter 17: train loss 0.09684. lr 2.440948e-04: 100%|██████████| 18/18 [00:01<00:00, 11.92it/s]
08/16/2020 23:48:58 - INFO - mingpt.trainer - test loss: 0.015245
epoch 29 iter 17: train loss 0.09055. lr 2.256841e-04: 100%|██████████| 18/18 [00:01<00:00, 12.77it/s]
08/16/2020 23:49:00 - INFO - mingpt.trainer - test loss: 0.012647
epoch 30 iter 17: train loss 0.08837. lr 2.075671e-04: 100%|██████████| 18/18 [00:01<00:00, 12.59it/s]
08/16/2020 23:49:01 - INFO - mingpt.trainer - test loss: 0.011611
epoch 31 iter 17: train loss 0.08425. lr 1.898155e-04: 100%|██████████| 18/18 [00:01<00:00, 12.43it/s]
08/16/2020 23:49:03 - INFO - mingpt.trainer - test loss: 0.009952
epoch 32 iter 17: train loss 0.10772. lr 1.724993e-04: 100%|██████████| 18/18 [00:01<00:00, 12.40it/s]
08/16/2020 23:49:05 - INFO - mingpt.trainer - test loss: 0.008648
epoch 33 iter 17: train loss 0.07272. lr 1.556871e-04: 100%|██████████| 18/18 [00:01<00:00, 12.57it/s]
08/16/2020 23:49:06 - INFO - mingpt.trainer - test loss: 0.010154
epoch 34 iter 17: train loss 0.05550. lr 1.394453e-04: 100%|██████████| 18/18 [00:01<00:00, 12.47it/s]
08/16/2020 23:49:08 - INFO - mingpt.trainer - test loss: 0.007668
epoch 35 iter 17: train loss 0.05451. lr 1.238381e-04: 100%|██████████| 18/18 [00:01<00:00, 12.59it/s]
08/16/2020 23:49:09 - INFO - mingpt.trainer - test loss: 0.008095
epoch 36 iter 17: train loss 0.09133. lr 1.089272e-04: 100%|██████████| 18/18 [00:01<00:00, 12.39it/s]
08/16/2020 23:49:11 - INFO - mingpt.trainer - test loss: 0.006615
epoch 37 iter 17: train loss 0.06825. lr 9.477150e-05: 100%|██████████| 18/18 [00:01<00:00, 12.27it/s]
08/16/2020 23:49:12 - INFO - mingpt.trainer - test loss: 0.005874
epoch 38 iter 17: train loss 0.05798. lr 8.142699e-05: 100%|██████████| 18/18 [00:01<00:00, 12.49it/s]
08/16/2020 23:49:14 - INFO - mingpt.trainer - test loss: 0.005701
epoch 39 iter 17: train loss 0.06975. lr 6.894639e-05: 100%|██████████| 18/18 [00:01<00:00, 12.88it/s]
08/16/2020 23:49:15 - INFO - mingpt.trainer - test loss: 0.005469
epoch 40 iter 17: train loss 0.06070. lr 6.000000e-05: 100%|██████████| 18/18 [00:01<00:00, 12.80it/s]
08/16/2020 23:49:17 - INFO - mingpt.trainer - test loss: 0.005307
epoch 41 iter 17: train loss 0.06378. lr 6.000000e-05: 100%|██████████| 18/18 [00:01<00:00, 12.60it/s]
08/16/2020 23:49:18 - INFO - mingpt.trainer - test loss: 0.005681
epoch 42 iter 17: train loss 0.04885. lr 6.000000e-05: 100%|██████████| 18/18 [00:01<00:00, 12.81it/s]
08/16/2020 23:49:20 - INFO - mingpt.trainer - test loss: 0.005456
epoch 43 iter 17: train loss 0.06409. lr 6.000000e-05: 100%|██████████| 18/18 [00:01<00:00, 12.81it/s]
08/16/2020 23:49:21 - INFO - mingpt.trainer - test loss: 0.004907
epoch 44 iter 17: train loss 0.07563. lr 6.000000e-05: 100%|██████████| 18/18 [00:01<00:00, 12.69it/s]
08/16/2020 23:49:23 - INFO - mingpt.trainer - test loss: 0.004650
epoch 45 iter 17: train loss 0.03149. lr 6.000000e-05: 100%|██████████| 18/18 [00:01<00:00, 12.79it/s]
08/16/2020 23:49:24 - INFO - mingpt.trainer - test loss: 0.004626
epoch 46 iter 17: train loss 0.07037. lr 6.000000e-05: 100%|██████████| 18/18 [00:01<00:00, 12.86it/s]
08/16/2020 23:49:26 - INFO - mingpt.trainer - test loss: 0.004147
epoch 47 iter 17: train loss 0.07650. lr 6.000000e-05: 100%|██████████| 18/18 [00:01<00:00, 12.82it/s]
08/16/2020 23:49:27 - INFO - mingpt.trainer - test loss: 0.004611
epoch 48 iter 17: train loss 0.06342. lr 6.000000e-05: 100%|██████████| 18/18 [00:01<00:00, 12.63it/s]
08/16/2020 23:49:29 - INFO - mingpt.trainer - test loss: 0.004083
epoch 49 iter 17: train loss 0.12429. lr 6.000000e-05: 100%|██████████| 18/18 [00:01<00:00, 12.69it/s]
08/16/2020 23:49:30 - INFO - mingpt.trainer - test loss: 0.004081
epoch 50 iter 17: train loss 0.04616. lr 6.000000e-05: 100%|██████████| 18/18 [00:01<00:00, 12.19it/s]
08/16/2020 23:49:32 - INFO - mingpt.trainer - test loss: 0.003922
GPT model train¶
- get optimizer
- train
  - data load
  - forward
  - backprop and gradient descent
  - model save (if needed)
get optimizer(AdamW)¶
trainer.py
class Trainer:
def train(self):
...
optimizer = raw_model.configure_optimizers(config)
...
model.py
class GPT(nn.Module):
def configure_optimizers(self, train_config):
"""
learning rate decay할 대상 지정한 후에 AdamW 적용
decay: (Linear weights)
no decay: (biases, layernorm/embedding wegihts, position embedding(special case))
"""
for mn, m in self.named_moduels(): # mn: index, m: each module
for pn, p in m.named_parameters(): #pn: parameter's name, p: parameters(tensor)
fpn = '%s.%s' %(mn, pn) if mn else pn
...
'''
error 처리
inter_params = decay & no_decay
따라서 inter_params: 공집합
union_params = decay | no_decay
따라서 union_params: 전체 params
'''
'''
AdamW 적용
learning_rate=6e-4, warmup_tokens=1024, betas = (0.9, 0.95)
'''
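A condensed, standalone sketch of the idea. The helper build_adamw is hypothetical; minGPT's actual configure_optimizers additionally whitelists/blacklists by module type and special-cases pos_emb:

import torch
import torch.nn as nn

def build_adamw(model, learning_rate=6e-4, betas=(0.9, 0.95), weight_decay=0.1):
    decay, no_decay = set(), set()
    for mn, m in model.named_modules():
        for pn, p in m.named_parameters(recurse=False):
            fpn = '%s.%s' % (mn, pn) if mn else pn   # full parameter name
            if pn.endswith('bias'):
                no_decay.add(fpn)                    # biases are never decayed
            elif pn.endswith('weight') and isinstance(m, nn.Linear):
                decay.add(fpn)                       # Linear weights are decayed
            else:
                no_decay.add(fpn)                    # LayerNorm/Embedding weights, pos_emb, etc.
    params = dict(model.named_parameters())
    groups = [
        {'params': [params[n] for n in sorted(decay)], 'weight_decay': weight_decay},
        {'params': [params[n] for n in sorted(no_decay)], 'weight_decay': 0.0},
    ]
    return torch.optim.AdamW(groups, lr=learning_rate, betas=betas)

Usage would simply be optimizer = build_adamw(model).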
In [ ]:
# named_modules, named_parameters() example
layer = nn.Sequential(
nn.Dropout(p=0.1),
nn.Linear(1024, 10)
)
for mn, m in layer.named_modules():
print('mn: ', mn)
print('m: ', m)
for pn, p in m.named_parameters():
fpn = '%s.%s' %(mn, pn) if mn else pn
print('fpn: ', fpn)
print('pn: ', pn)
print('p: ', p)
mn:
m: Sequential(
(0): Dropout(p=0.1, inplace=False)
(1): Linear(in_features=1024, out_features=10, bias=True)
)
fpn: 1.weight
pn: 1.weight
p: Parameter containing:
tensor([[ 0.0034, -0.0203, -0.0144, ..., 0.0188, 0.0111, -0.0010],
[ 0.0080, -0.0057, -0.0203, ..., 0.0142, -0.0210, 0.0054],
[-0.0114, 0.0158, 0.0054, ..., -0.0186, -0.0239, 0.0026],
...,
[ 0.0074, -0.0309, -0.0093, ..., -0.0144, -0.0162, -0.0270],
[ 0.0233, -0.0047, 0.0205, ..., 0.0007, -0.0309, -0.0135],
[-0.0008, -0.0115, 0.0058, ..., -0.0091, -0.0117, 0.0023]],
requires_grad=True)
fpn: 1.bias
pn: 1.bias
p: Parameter containing:
tensor([ 0.0073, -0.0023, -0.0302, -0.0146, -0.0222, -0.0180, -0.0174, 0.0301,
-0.0267, 0.0203], requires_grad=True)
mn: 0
m: Dropout(p=0.1, inplace=False)
mn: 1
m: Linear(in_features=1024, out_features=10, bias=True)
fpn: 1.weight
pn: weight
p: Parameter containing:
tensor([[ 0.0034, -0.0203, -0.0144, ..., 0.0188, 0.0111, -0.0010],
[ 0.0080, -0.0057, -0.0203, ..., 0.0142, -0.0210, 0.0054],
[-0.0114, 0.0158, 0.0054, ..., -0.0186, -0.0239, 0.0026],
...,
[ 0.0074, -0.0309, -0.0093, ..., -0.0144, -0.0162, -0.0270],
[ 0.0233, -0.0047, 0.0205, ..., 0.0007, -0.0309, -0.0135],
[-0.0008, -0.0115, 0.0058, ..., -0.0091, -0.0117, 0.0023]],
requires_grad=True)
fpn: 1.bias
pn: bias
p: Parameter containing:
tensor([ 0.0073, -0.0023, -0.0302, -0.0146, -0.0222, -0.0180, -0.0174, 0.0301,
-0.0267, 0.0203], requires_grad=True)
train¶
Back in the train() function:
trainer.py
class Trainer:
def train(self):
...
# optimizer
        def run_epoch(loader, is_train):
            # skipped for now -- covered in the next subsection
        best_loss = float('inf')
        self.tokens = 0   # token counter for learning-rate decay (warmup while tokens < warmup_tokens)
        """
        build the train data loader (and a test data loader if a test set was given)
        cf) DataLoader: an iterable that keeps feeding the model batches of the configured batch size,
        so you don't have to hand-roll random batch sampling yourself. It also supports single- and
        multi-process data loading and more; see the PyTorch docs (torch.utils.data) and the short
        sketch after this block.
        """
        # run run_epoch once per epoch
        for epoch in range(config.max_epochs):
            run_epoch(train_loader, is_train=True)
            if self.test_dataset is not None:
                test_loss = run_epoch(test_loader, is_train=False)
            # checkpoint the model whenever the test loss improves
            good_model = self.test_dataset is None or test_loss < best_loss
            if self.config.ckpt_path is not None and good_model:
                best_loss = test_loss
                self.save_checkpoint()
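For reference, a minimal sketch of the loaders the trainer builds (batch size and num_workers follow the TrainerConfig used above; treat the exact arguments as illustrative):

from torch.utils.data import DataLoader

# wrap the datasets in shuffling, batched iterators
train_loader = DataLoader(train_dataset, batch_size=512, shuffle=True,
                          pin_memory=True, num_workers=4)
test_loader = DataLoader(test_dataset, batch_size=512, shuffle=False,
                         pin_memory=True, num_workers=4)

for x, y in train_loader:
    # x: (batch, 3*ndigit) digit tokens, y: next-token targets with -100 at masked positions
    print(x.shape, y.shape)
    break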
run_epoch¶
- collect the losses
- if train
  - gradient descent
  - learning rate decay (when lr_decay is enabled)
- if test
  - return test_loss
trainer.py
class Trainer:
def train(self):
def run_epoch(loader, is_train):
            model.train(is_train)   # nn.Module method: puts the model in train/eval mode (affects dropout, batchnorm, etc.); not the Trainer.train above
losses = []
            pbar = tqdm(enumerate(loader), total=len(loader)) if is_train else enumerate(loader)   # progress bar so we can see how long an epoch takes; see the tqdm library for details
for it, (x, y) in pbar:
x = x.to(self.device)
y = y.to(self.device)
                with torch.set_grad_enabled(is_train):   # enable gradients only when training
...
if is_train:
# backprop, gradient update
if config.lr_decay:
                        self.tokens += (y >= 0).sum()   # count the real target tokens processed so far (labels that are not -100)
                        if self.tokens < config.warmup_tokens:
                            # linear warmup
                        else:
                            # cosine learning rate decay
                        lr = config.learning_rate * lr_mult
                        for param_group in optimizer.param_groups:
                            param_group['lr'] = lr   # apply the new learning rate
else:
# no learning rate decay
# report (epoch, iter, train loss, lr)
if not is_train:
test_loss = float(np.mean(losses))
logger.info('test loss: %f', test_loss)
return test_loss
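The elided warmup/cosine branch computes a multiplier on the base learning rate; a sketch along these lines (constants such as the 0.1 floor follow my reading of trainer.py and should be treated as approximate):

import math

def lr_multiplier(tokens_seen, warmup_tokens, final_tokens):
    # linear warmup followed by cosine decay, floored at 10% of the base lr
    if tokens_seen < warmup_tokens:
        return float(tokens_seen) / float(max(1, warmup_tokens))
    progress = float(tokens_seen - warmup_tokens) / float(max(1, final_tokens - warmup_tokens))
    return max(0.1, 0.5 * (1.0 + math.cos(math.pi * progress)))

# lr = config.learning_rate * lr_multiplier(self.tokens, config.warmup_tokens, config.final_tokens)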
In [ ]:
# now let's give the trained model an addition exam
from torch.utils.data.dataloader import DataLoader
from mingpt.utils import sample
def give_exam(dataset, batch_size=32, max_batches=-1):
results = []
loader = DataLoader(dataset, batch_size=batch_size)
for b, (x, y) in enumerate(loader):
x = x.to(trainer.device)
d1d2 = x[:, :ndigit*2]
d1d2d3 = sample(model, d1d2, ndigit+1)
d3 = d1d2d3[:, -(ndigit+1):]
factors = torch.tensor([[10**i for i in range(ndigit+1)][::-1]]).to(trainer.device)
# decode the integers from individual digits
d1i = (d1d2[:,:ndigit] * factors[:,1:]).sum(1)
d2i = (d1d2[:,ndigit:ndigit*2] * factors[:,1:]).sum(1)
d3i_pred = (d3 * factors).sum(1)
d3i_gt = d1i + d2i
correct = (d3i_pred == d3i_gt).cpu() # Software 1.0 vs. Software 2.0 fight RIGHT on this line, lol
for i in range(x.size(0)):
results.append(int(correct[i]))
judge = 'YEP!!!' if correct[i] else 'NOPE'
if not correct[i]:
print("GPT claims that %03d + %03d = %03d (gt is %03d; %s)"
% (d1i[i], d2i[i], d3i_pred[i], d3i_gt[i], judge))
if max_batches >= 0 and b+1 >= max_batches:
break
print("final score: %d/%d = %.2f%% correct" % (np.sum(results), len(results), 100*np.mean(results)))
sample¶
for each of `steps` iterations:
    next_token = the token the model assigns the highest probability, given the sequence so far
    sequence = concat(sequence, next_token)
In [ ]:
from IPython.display import Image
Image('sample.PNG')
Out[ ]:
utils.py
def sample(model, x, steps, temperature=1.0, sample=False, top_k=None):
block_size = model.get_block_size() # ndigit + ndigit + ndigit + 1 - 1
    model.eval()   # evaluation mode
    for k in range(steps):   # here steps = ndigit+1 (but what if we wanted multiplication, subtraction or division?)
        x_cond = x if x.size(1) <= block_size else x[:, -block_size:]   # crop the context so that x_cond.size(1) <= block_size
        logits, loss = model(x_cond)   # loss is None (no targets were given)
        logits = logits[:, -1, :] / temperature   # scores for the next token, e.g. given [0, 3, 5, 9] -> what comes next?
        # optionally keep only the top_k logits
        # probs = softmax(logits)
        if sample:
            # ix = draw one token from the multinomial distribution (num_samples=1)
        else:
            # ix = the token with the highest probability (greedy)
        x = torch.cat((x, ix), dim=1)   # append the predicted token to the growing sequence
return x
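For completeness, a sketch of the two elided branches wrapped in a hypothetical helper pick_next_token (minGPT's utils.py implements the same idea with a top_k_logits helper plus torch.multinomial / torch.topk):

import torch
from torch.nn import functional as F

def pick_next_token(logits, do_sample=False, top_k=None):
    # logits: (batch, vocab_size) scores for the next token
    if top_k is not None:
        v, _ = torch.topk(logits, top_k)
        logits = logits.clone()
        logits[logits < v[:, [-1]]] = -float('inf')   # keep only the top_k logits
    probs = F.softmax(logits, dim=-1)
    if do_sample:
        return torch.multinomial(probs, num_samples=1)   # sample one token per row
    _, ix = torch.topk(probs, k=1, dim=-1)               # greedy: most probable token
    return ix

# e.g. pick_next_token(torch.randn(4, 10)) -> tensor of shape (4, 1)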
give_exam¶
Print how many predictions are correct, and print the ones that are wrong.
For the walkthrough below, suppose one test example is test_data = [5, 5, 4, 5], i.e. 55 + 45.
def give_exam(dataset, batch_size=32, max_batches=-1):
...
for b, (x, y) in enumerate(loader):
...
        d1d2d3 = sample(model, d1d2, ndigit+1)   # d1d2d3 = concat(test prompt, predicted answer digits)
        d3 = d1d2d3[:, -(ndigit+1):]             # the predicted answer digits, e.g. [0, 9, 0]
        factors = torch.tensor([[10**i for i in range(ndigit+1)][::-1]]).to(trainer.device)  # [[100, 10, 1]]
        d1i = (d1d2[:,:ndigit] * factors[:,1:]).sum(1)          # 5*10 + 5*1 = 55
        d2i = (d1d2[:,ndigit:ndigit*2] * factors[:,1:]).sum(1)  # 4*10 + 5*1 = 45
        d3i_pred = (d3 * factors).sum(1)                        # d3[0]*100 + d3[1]*10 + d3[2]*1 = 90
        d3i_gt = d1i + d2i                                      # 55 + 45 = 100
        correct = (d3i_pred == d3i_gt).cpu()
        # (rest omitted)
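To make the digit-to-integer decoding concrete, here is a tiny standalone check using the one example the model gets wrong (55 + 45, predicted 090):

import torch

ndigit = 2
d1d2 = torch.tensor([[5, 5, 4, 5]])               # the test prompt: operands 55 and 45
d3 = torch.tensor([[0, 9, 0]])                    # the (wrong) answer digits the model produced
factors = torch.tensor([[10**i for i in range(ndigit + 1)][::-1]])   # [[100, 10, 1]]

d1i = (d1d2[:, :ndigit] * factors[:, 1:]).sum(1)          # 5*10 + 5*1 = 55
d2i = (d1d2[:, ndigit:ndigit*2] * factors[:, 1:]).sum(1)  # 4*10 + 5*1 = 45
d3i_pred = (d3 * factors).sum(1)                          # 0*100 + 9*10 + 0*1 = 90
d3i_gt = d1i + d2i                                        # 100
print(d3i_pred == d3i_gt)                                 # tensor([False])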
In [ ]:
# training set: how well did we memorize?
give_exam(train_dataset, batch_size=1024, max_batches=10)
final score: 9000/9000 = 100.00% correct
In [ ]:
# test set: how well did we generalize?
give_exam(test_dataset, batch_size=1024, max_batches=-1)
GPT claims that 055 + 045 = 090 (gt is 100; NOPE)
final score: 999/1000 = 99.90% correct
In [ ]:
# well that's amusing... our model learned everything except 55 + 45