
TensorFlow 2 quickstart for beginners




This short introduction uses Keras to:

1. Build a neural network that classifies images.
2. Train this neural network.
3. Evaluate the accuracy of the model.
Import TensorFlow into your program:

import tensorflow as tf

Load and prepare the MNIST dataset. Convert the samples from integers to floating-point numbers:

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
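As a quick sanity check (a small optional sketch; the shapes in the comments assume the standard MNIST split of 60,000 training and 10,000 test images):

print(x_train.shape, x_train.dtype)  # (60000, 28, 28) float64 after the division above
print(y_train.shape, y_train.dtype)  # (60000,) uint8 -- integer class labels 0-9
print(x_test.shape)                  # (10000, 28, 28)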

Build the tf.keras.Sequential model by stacking layers. Choose an optimizer and loss function for training:

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)
])
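Calling model.summary() is an optional way to verify the stack before training; a short sketch (the parameter counts in the comments are what this particular architecture works out to):

model.summary()
# Flatten turns each 28x28 image into a 784-long vector (no parameters),
# Dense(128) has 784*128 + 128 = 100,480 parameters,
# Dropout has none, and Dense(10) has 128*10 + 10 = 1,290,
# for a total of 101,770 trainable parameters.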

For each example the model returns a vector of "logits" or "log-odds" scores, one for each class.

predictions = model(x_train[:1]).numpy()
predictions
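The returned array has one row per input example and one column per class; a small sketch of reading it (tf.argmax simply picks the class with the highest score, which is meaningless before training):

print(predictions.shape)               # (1, 10): one example, ten class scores
print(tf.argmax(predictions, axis=1))  # index of the currently highest-scoring class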

The tf.nn.softmax function converts these logits to "probabilities" for each class:

tf.nn.softmax(predictions).numpy()

Note: It is possible to bake this tf.nn.softmax in as the activation function for the last layer of the network. While this can make the model output more directly interpretable, this approach is discouraged as it's impossible to provide an exact and numerically stable loss calculation for all models when using a softmax output.
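To see the numerical issue the note refers to, consider extreme logits (the values below are an illustrative assumption, not part of the tutorial):

import numpy as np

extreme_logits = np.array([[1000.0, 0.0]], dtype=np.float32)
print(tf.nn.softmax(extreme_logits).numpy())  # [[1. 0.]] -- the second class underflows to exactly 0
# A loss computed from these probabilities would need log(0) for the second class,
# so it has to clip or saturate. Passing the raw logits with from_logits=True
# computes the same quantity in a numerically stable way:
stable_loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
print(stable_loss(np.array([1]), extreme_logits).numpy())  # ~1000.0, the exact negative log-probability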

The losses.SparseCategoricalCrossentropy loss takes a vector of logits and a True index and returns a scalar loss for each example.

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

This loss is equal to the negative log probability of the true class: It is zero if the model is sure of the correct class.

This untrained model gives probabilities close to random (1/10 for each class), so the initial loss should be close to -tf.math.log(1/10) ~= 2.3
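A quick check of that claim (a sketch; the all-zero logits stand in for a perfectly uniform prediction):

import numpy as np

uniform_logits = np.zeros((1, 10), dtype=np.float32)    # softmax of zeros gives 1/10 per class
print(loss_fn(np.array([0]), uniform_logits).numpy())   # ~2.3026 == -log(1/10)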

loss_fn(y_train[:1], predictions).numpy()

Before training, configure and compile the model with Model.compile: set the optimizer to adam, use the loss_fn defined above, and track accuracy as the metric:

model.compile(optimizer='adam',
              loss=loss_fn,
              metrics=['accuracy'])

The Model.fit method adjusts the model parameters to minimize the loss:

model.fit(x_train, y_train, epochs=5)
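Optionally, Model.fit returns a History object whose history dictionary records the metrics per epoch; a small sketch (it is the same call as above, so running it trains for another five epochs):

history = model.fit(x_train, y_train, epochs=5)
print(history.history['loss'])       # training loss for each epoch
print(history.history['accuracy'])   # training accuracy for each epoch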

The Model.evaluate method checks the model's performance, usually on a validation set or test set.

model.evaluate(x_test, y_test, verbose=2)
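Because compile was given 'accuracy' as the only metric, evaluate returns a [loss, accuracy] pair; a short sketch of reading it:

test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print(f"test loss: {test_loss:.4f}, test accuracy: {test_acc:.4f}")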

The image classifier is now trained to ~98% accuracy on this dataset. To learn more, read the TensorFlow tutorials.

If you want your model to return a probability, you can wrap the trained model, and attach the softmax to it:

probability_model = tf.keras.Sequential([
    model,
    tf.keras.layers.Softmax()
])
probability_model(x_test[:5])
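Each row of the wrapped model's output is now a proper probability distribution; a brief usage sketch comparing the predicted digits with the true labels:

probs = probability_model(x_test[:5]).numpy()
print(probs.sum(axis=1))     # each row sums to ~1
print(probs.argmax(axis=1))  # predicted digit for the first five test images
print(y_test[:5])            # true labels, for comparison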

Code link: https://codechina.csdn.net/csdn_codechina/enterprise_technology/-/blob/master/CV_Classification/TensorFlow%202%20quickstart%20for%20beginners.ipynb


This article is reposted from the web. Original link: https://developer.aliyun.com/article/785857


