<https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced>
Author: Bharath Raj <https://thatbrguy.github.io/>.

Can my “state-of-the-art” neural network perform well with the meagre amount of data I have?

Yes. The aim of optimization is to adjust the parameters in the right direction so that the model’s loss reaches its minimum.

How do I get more data, if I don’t have “more data”?

Can augmentation help even if I have lots of data?

Your neural network is only as good as the data you feed it.

Where do we augment data in our ML pipeline?

Popular Augmentation Techniques

1. Flip

# NumPy. 'img' = A single image.
flip_1 = np.fliplr(img)

# TensorFlow. 'x' = A placeholder for an image.
shape = [height, width, channels]
x = tf.placeholder(dtype=tf.float32, shape=shape)
flip_2 = tf.image.flip_up_down(x)
flip_3 = tf.image.flip_left_right(x)
flip_4 = tf.image.random_flip_up_down(x)
flip_5 = tf.image.random_flip_left_right(x)
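As a quick, self-contained sanity check, the same horizontal and vertical flips can be reproduced with plain NumPy on a tiny array (the 2×2 “image” here is just an illustration):

```python
import numpy as np

# A tiny 2x2 single-channel "image".
img = np.array([[1, 2],
                [3, 4]])

flip_lr = np.fliplr(img)  # horizontal flip (mirror left-right)
flip_ud = np.flipud(img)  # vertical flip (mirror top-bottom)

print(flip_lr.tolist())  # [[2, 1], [4, 3]]
print(flip_ud.tolist())  # [[3, 4], [1, 2]]
```

`np.fliplr` reverses the column order and `np.flipud` the row order, which is exactly what the TensorFlow ops above do to the image tensor.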
2. Rotation

# TensorFlow. Placeholders: 'x' = A single image, 'y' = A batch of images
# 'k' denotes the number of 90 degree anticlockwise rotations
shape = [height, width, channels]
x = tf.placeholder(dtype=tf.float32, shape=shape)
rot_90 = tf.image.rot90(x, k=1)
rot_180 = tf.image.rot90(x, k=2)

# To rotate by any angle. In the example below, 'angles' is in radians
shape = [batch, height, width, 3]
y = tf.placeholder(dtype=tf.float32, shape=shape)
rot_tf_180 = tf.contrib.image.rotate(y, angles=3.1415)

# Scikit-Image. 'angle' = Degrees. 'img' = Input Image
# For details about 'mode', check out the interpolation section below.
rot = skimage.transform.rotate(img, angle=45, mode='reflect')
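The 90-degree case is easy to verify without TensorFlow: `np.rot90` uses the same convention (`k` anticlockwise quarter-turns) as `tf.image.rot90`:

```python
import numpy as np

img = np.array([[1, 2],
                [3, 4]])

# k = number of 90-degree anticlockwise rotations, matching tf.image.rot90
rot_90 = np.rot90(img, k=1)
rot_180 = np.rot90(img, k=2)

print(rot_90.tolist())   # [[2, 4], [1, 3]]
print(rot_180.tolist())  # [[4, 3], [2, 1]]
```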
3. Scale

# Scikit-Image. 'img' = Input Image, 'scale' = Scale factor
# For details about 'mode', check out the interpolation section below.
scale_out = skimage.transform.rescale(img, scale=2.0, mode='constant')
scale_in = skimage.transform.rescale(img, scale=0.5, mode='constant')
# Don't forget to crop the images back to the original size (for scale_out)
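To make the idea concrete without scikit-image, here is a minimal nearest-neighbour rescale in plain NumPy. The helper name `rescale_nn` is illustrative only; unlike `skimage.transform.rescale` it does no interpolation or anti-aliasing, but it shows how scaling out/in changes the image dimensions:

```python
import numpy as np

def rescale_nn(img, scale):
    """Nearest-neighbour rescale: a crude stand-in for
    skimage.transform.rescale (no interpolation or anti-aliasing)."""
    h, w = img.shape[:2]
    new_h, new_w = int(h * scale), int(w * scale)
    # Map each output pixel back to its nearest source pixel.
    rows = (np.arange(new_h) / scale).astype(int)
    cols = (np.arange(new_w) / scale).astype(int)
    return img[rows][:, cols]

img = np.arange(16).reshape(4, 4)
scale_out = rescale_nn(img, 2.0)  # zoom in: 4x4 -> 8x8
scale_in = rescale_nn(img, 0.5)   # zoom out: 4x4 -> 2x2
print(scale_out.shape, scale_in.shape)  # (8, 8) (2, 2)
```

After zooming in you would crop the 8×8 result back to 4×4, as the comment in the article's snippet notes.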
4. Crop

# TensorFlow. 'x' = A placeholder for an image.
original_size = [height, width, channels]
x = tf.placeholder(dtype=tf.float32, shape=original_size)

# Use the following commands to perform random crops
crop_size = [new_height, new_width, channels]
seed = np.random.randint(1234)
x = tf.random_crop(x, size=crop_size, seed=seed)
output = tf.image.resize_images(x, size=original_size[:2])  # resize expects [height, width]
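The random-crop step itself is just index arithmetic; a plain-NumPy sketch (the helper name `random_crop` is illustrative, not a library function):

```python
import numpy as np

def random_crop(img, crop_h, crop_w, rng=None):
    """Randomly crop a (H, W, ...) image to (crop_h, crop_w, ...)."""
    rng = rng or np.random.default_rng(1234)
    h, w = img.shape[:2]
    # Pick a random top-left corner that keeps the crop inside the image.
    top = rng.integers(0, h - crop_h + 1)
    left = rng.integers(0, w - crop_w + 1)
    return img[top:top + crop_h, left:left + crop_w]

img = np.arange(100).reshape(10, 10)
crop = random_crop(img, 6, 6)
print(crop.shape)  # (6, 6)
```

In a real pipeline the crop would then be resized back to the network's input resolution, as in the TensorFlow snippet above.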
5. Translation

# pad_left, pad_right, pad_top, pad_bottom denote the pixel displacement.
# Set one of them to the desired value and rest to 0
shape = [batch, height, width, channels]
x = tf.placeholder(dtype=tf.float32, shape=shape)
# We use two functions to get our desired augmentation
x = tf.image.pad_to_bounding_box(x, pad_top, pad_left, height + pad_bottom + pad_top, width + pad_right + pad_left)
output = tf.image.crop_to_bounding_box(x, pad_bottom, pad_right, height, width)
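The same pad-then-crop trick can be sketched in plain NumPy; zeros are shifted in on one side and the overflow is cropped off the other (the helper `translate` is illustrative and handles non-negative shifts only):

```python
import numpy as np

def translate(img, shift_down, shift_right):
    """Translate a 2-D image by zero-padding one side and cropping
    back to the original size (non-negative shifts only)."""
    h, w = img.shape[:2]
    padded = np.pad(img, ((shift_down, 0), (shift_right, 0)), mode='constant')
    return padded[:h, :w]

img = np.array([[1, 2],
                [3, 4]])
print(translate(img, 1, 0).tolist())  # [[0, 0], [1, 2]]
print(translate(img, 0, 1).tolist())  # [[0, 1], [0, 3]]
```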
6. Gaussian Noise

# TensorFlow. 'x' = A placeholder for an image.
shape = [height, width, channels]
x = tf.placeholder(dtype=tf.float32, shape=shape)
# Adding Gaussian noise
noise = tf.random_normal(shape=tf.shape(x), mean=0.0, stddev=1.0, dtype=tf.float32)
output = tf.add(x, noise)
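The NumPy equivalent is a one-liner; a sketch that also clips the result back into a valid pixel range (the `stddev=0.1` value here is just an example, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
img = np.zeros((64, 64), dtype=np.float32)  # a dummy image in [0, 1]

# Zero-mean Gaussian noise; stddev controls the strength of the corruption.
noise = rng.normal(loc=0.0, scale=0.1, size=img.shape)
noisy = np.clip(img + noise, 0.0, 1.0)  # keep pixel values in a valid range
print(noisy.shape)  # (64, 64)
```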

- The season in which the photographs were taken.

GANs to the rescue

Changing seasons using a CycleGAN (Source)
<https://junyanz.github.io/CycleGAN/>

<https://arxiv.org/abs/1703.07511>

A brief note on interpolation

**But is that the right assumption?** In real-world scenarios, it mostly isn’t. Image processing and ML frameworks have some standard ways by which you can decide how to fill the unknown space. They are defined as follows.

1. Constant

2. Edge

3. Reflect

4. Symmetric

5. Wrap
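The five fill strategies above map directly onto the `mode` argument of NumPy’s `np.pad`; a quick way to see what each one does is to pad a tiny 1-D array:

```python
import numpy as np

a = np.array([1, 2, 3])

# Pad 2 elements on each side with each of the five fill strategies.
for mode in ['constant', 'edge', 'reflect', 'symmetric', 'wrap']:
    print(mode, np.pad(a, 2, mode=mode).tolist())

# constant  [0, 0, 1, 2, 3, 0, 0]
# edge      [1, 1, 1, 2, 3, 3, 3]
# reflect   [3, 2, 1, 2, 3, 2, 1]
# symmetric [2, 1, 1, 2, 3, 3, 2]
# wrap      [2, 3, 1, 2, 3, 1, 2]
```

Note the difference between `reflect` (mirrors about the edge pixel, excluding it) and `symmetric` (mirrors including the edge pixel); scikit-image's `mode` parameter follows the same naming.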

So, if I use ALL of these techniques, my ML algorithm would be robust right?

Is it really worth the effort?


Part 1 of this series: How to Use Deep Learning When You Have Limited Data
<https://medium.com/nanonets/nanonets-how-to-use-deep-learning-when-you-have-limited-data-f68c0b512cab>

<https://github.com/thatbrguy/VGG19-Transfer-Learn>, which is based on the implementation here:
<https://github.com/machrisaa/tensorflow-vgg>

<https://mega.nz/#!xZ8glS6J!MAnE91ND_WyfZ_8mvkuSa2YcA7q-1ehfSm-Q1fxOvvs>
(for transfer learning). You can now run the model to verify its performance.

<https://nanonets.com/?utm_source=Medium&utm_campaign=data%20augmentation/>
They use transfer learning and data augmentation internally to provide the best results using minimal data. All you need to do is upload your data on their website.

Results:
VGG19 (No Augmentation): 76% test accuracy (highest)
Nanonets (With Augmentation): 94.5% test accuracy

Error rates of popular neural networks on the CIFAR-10 (C10) and CIFAR-100 (C100) datasets. The C10+ and C100+ columns show the error rates with data augmentation.