I have only one GPU, so all of the discussion below is for a single GPU (no multi-GPU).
TensorFlow uses the GPU by default when one is available.
Example:
import tensorflow as tf

a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))
# log_device_placement=True logs which device runs each operation
Here,
MatMul: (MatMul)/job:localhost/replica:0/task:0/device:GPU:0
a: (Const)/job:localhost/replica:0/task:0/device:GPU:0
b: (Const)/job:localhost/replica:0/task:0/device:GPU:0
MatMul has both CPU and GPU kernels; when a GPU is available, the GPU kernel is selected by default.
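As a side check (plain Python, no TensorFlow required), the 2x3 by 3x2 product that sess.run(c) prints can be computed by hand:

```python
# Same values as the tf.constant examples above, as nested lists.
a = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
b = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]

# Matrix multiply: each entry is the dot product of a row of a with a column of b.
c = [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]
print(c)  # [[22.0, 28.0], [49.0, 64.0]]
```

This matches the array the session prints, regardless of which device ran the kernel.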
Running on CPU:
with tf.device('/cpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))
Here,
MatMul: (MatMul)/job:localhost/replica:0/task:0/device:CPU:0
a: (Const)/job:localhost/replica:0/task:0/device:CPU:0
b: (Const)/job:localhost/replica:0/task:0/device:CPU:0
Running on GPU: same as above, but pin the ops with
with tf.device('/gpu:0'):
Running some on CPU some on GPU:
with tf.device('/cpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))
# Note: c = tf.matmul(a, b) is not indented, so it is outside the with block
MatMul: (MatMul)/job:localhost/replica:0/task:0/device:GPU:0
a: (Const)/job:localhost/replica:0/task:0/device:CPU:0
b: (Const)/job:localhost/replica:0/task:0/device:CPU:0
Now a and b are assigned to cpu:0. Since a device was not explicitly specified for the MatMul operation, the TensorFlow runtime will choose one based on the operation and available devices (gpu:0 in this example) and automatically copy tensors between devices if required.
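The same guide also describes allow_soft_placement, a ConfigProto option that lets the runtime fall back to a valid device instead of raising an error when a pinned device cannot be used (for example, /gpu:0 is requested but no GPU exists). A minimal sketch of that session config, in the same TF 1.x style as the examples above:

```python
import tensorflow as tf

# allow_soft_placement=True: if a requested device assignment cannot be
# satisfied, place the op on a supported device instead of raising an error.
config = tf.ConfigProto(allow_soft_placement=True,
                        log_device_placement=True)

with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)

sess = tf.Session(config=config)
print(sess.run(c))
```

On a machine without a GPU, this still runs (on CPU) rather than failing with an InvalidArgumentError.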
Source: https://www.tensorflow.org/guide/using_gpu