Albumentations, Image Transform


Official documentation

https://albumentations.ai/docs/getting_started/mask_augmentation/

Reference blog

https://hoya012.github.io/blog/albumentation_tutorial/

Colab examples

https://colab.research.google.com/drive/1KgVb5W2UeXHAgwZfIe_0mJZvXqBmbJal

https://colab.research.google.com/drive/1JuZ23u0C0gx93kV0oJ8Mq0B6CBYhPLXy#scrollTo=8H4FnMgNdR7A&forceEdit=true&sandboxMode=true

Albumentations can be used for image classification, segmentation, object detection, keypoints, and more.
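
For example, a segmentation mask is passed alongside the image, and bounding boxes are declared through bbox_params. A minimal sketch below; image, mask, bboxes, and class_labels are placeholder variables.

import albumentations as A

# Segmentation: pass the mask together with the image; the same spatial transform is applied to both.
seg_transform = A.Compose([A.HorizontalFlip(p=0.5)])
# transformed = seg_transform(image=image, mask=mask)
# transformed["image"], transformed["mask"]

# Object detection: declare the bbox format via bbox_params.
det_transform = A.Compose(
    [A.HorizontalFlip(p=0.5)],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["class_labels"]),
)
# transformed = det_transform(image=image, bboxes=bboxes, class_labels=class_labels)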

Install

pip install -U albumentations

Import - when using ToTensorV2, PyTorch must be installed and imported yourself

import albumentations as A
from albumentations.pytorch import ToTensorV2  # needed for ToTensorV2; requires PyTorch to be installed separately
import cv2
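
A minimal sketch of where ToTensorV2 usually sits in a pipeline (assumption: it is placed last, after Normalize, so the returned "image" is a CHW torch.Tensor):

import albumentations as A
from albumentations.pytorch import ToTensorV2

to_tensor_pipeline = A.Compose([
    A.Resize(256, 256),
    A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
    ToTensorV2(),               # HWC numpy array -> CHW torch.Tensor
])
# tensor_image = to_tensor_pipeline(image=image)["image"]   # torch.Tensor of shape (3, 256, 256)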

Define Augmentation Pipeline

transform = A.Compose([
    A.RandomCrop(width=256, height=256),
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.2),
])

Image Read - convert to a numpy array

# cv2
image = cv2.imread("/path/to/image.jpg")        # read as a numpy array (BGR order)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # convert to RGB - needed if the image looks blue

# PIL
import numpy as np
from PIL import Image
pillow_image = Image.open("image.jpg")
image = np.array(pillow_image)                  # convert PIL.Image -> numpy array

Augmentation

transformed = transform(image=image)	# pass img to the object
transformed_image = transformed["image"]
# next image
another_transformed_image = transform(image=another_image)["image"]

Multi-input Augmentation: additional_targets

transform = A.Compose(
    [A.VerticalFlip(p=1)],
    additional_targets={'image0': 'image', 'image1': 'image'}  # pass a dict to additional_targets
)
transformed = transform(image=image, image0=image0, image1=image1)  # multiple inputs

import matplotlib.pyplot as plt

def visualize(image):
    plt.figure(figsize=(10, 10))
    plt.axis('off')
    plt.imshow(image)

visualize(transformed['image'])
visualize(transformed['image0'])
visualize(transformed['image1'])

Example

A PIL.Image must first be converted to a numpy array. When the result is fed to a model, it also needs to be converted to a tensor.

class A_DataTransform():
    def __init__(self, input_size, color_mean, color_std):
        self.data_transform = A.Compose([
            A.Resize(input_size, input_size),
            A.Rotate(p=0.5),
            A.HorizontalFlip(p=0.5),
            A.Normalize(color_mean, color_std),
            # ToTensorV2(),              # convert to a torch.Tensor
            A.OneOf([                    # pick exactly one of the transforms inside
                A.Rotate(p=0.5),
                A.HorizontalFlip(p=0.5),
            ]),
        ])

    def __call__(self, phase, img, anno_class_img):
        return self.data_transform(image=img, mask=anno_class_img)

After creating an A_DataTransform object, each call to trans returns a dictionary; the transformed image is read from its 'image' key.

When the next image is transformed, the previous result is discarded and replaced by the newly returned dictionary, so keep each result if you need it later (see the sketch after the example below).

cat = Image.open('/content/cat.jpg')
color_mean = [0.485, 0.456, 0.406]
color_std = [0.229, 0.224, 0.225]
trans = A_DataTransform(300, color_mean, color_std)

cat = np.array(cat)
img = trans('train', cat, cat)
plt.imshow(img['image'])
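
If several transformed results need to be kept, store each returned dictionary (or just its 'image' entry) before the next call overwrites the variable. A small sketch, where image_list is a hypothetical list of numpy arrays:

transformed_images = []
for np_img in image_list:                        # image_list: hypothetical list of numpy arrays
    out = trans('train', np_img, np_img)
    transformed_images.append(out['image'])      # keep each result before the next call replaces it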

Summary

https://github.com/albumentations-team/albumentations_examples/blob/master/notebooks/migrating_from_torchvision_to_albumentations.ipynb
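
A minimal sketch of the usual migration pattern: read the image as a numpy array inside the Dataset and call the Albumentations pipeline with the image= keyword argument (file_paths and labels below are placeholders):

import cv2
import albumentations as A
from albumentations.pytorch import ToTensorV2
from torch.utils.data import Dataset

class AlbumentationsDataset(Dataset):
    def __init__(self, file_paths, labels, transform=None):
        self.file_paths = file_paths     # placeholder: list of image file paths
        self.labels = labels             # placeholder: list of labels
        self.transform = transform

    def __len__(self):
        return len(self.file_paths)

    def __getitem__(self, idx):
        image = cv2.imread(self.file_paths[idx])
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)       # BGR -> RGB
        if self.transform is not None:
            image = self.transform(image=image)["image"]     # keyword argument, unlike torchvision
        return image, self.labels[idx]

train_transform = A.Compose([
    A.Resize(256, 256),
    A.HorizontalFlip(p=0.5),
    A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
    ToTensorV2(),
])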
