If you want to close other sessions using the GPU:
if 'session' in locals() and session is not None:
    print('Close interactive session')
    session.close()
Reinstall the NVIDIA driver if nvidia-smi does not work.
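A quick way to check whether TensorFlow itself sees the GPU (TF 1.x), independent of nvidia-smi:

import tensorflow as tf
from tensorflow.python.client import device_lib

# True if TF can see a CUDA device
print(tf.test.is_gpu_available())
# lists all devices, including /device:GPU:0 if present
print(device_lib.list_local_devices())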
If you see:
failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED
or
InternalError: Blas GEMM launch failed : a.shape=
Try:
sudo rm -rf .nv/
This probably clears the NVIDIA cache.
https://github.com/tensorflow/tensorflow/issues/5354
If you're still having trouble, try adding /usr/local/cuda/extras/CUPTI/lib64 to your LD_LIBRARY_PATH. I had the same error and this fixed it (I was on a Mac, though, so verify that directory on your system).
export tfhome="/media/rb/Omega/tensorflow"
export PATH=/usr/local/cuda-10.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/extras/CUPTI/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
Second:
Weight initialization for conv2d_transpose as:
sd = np.sqrt(2. / in_c)
w_init = tf.truncated_normal_initializer(stddev=sd)
FAILED.
I first thought I should add tf.nn.tanh(conv5) at the last layer of the generator to make the output good, but it turned out the problem was with the initializer instead. Switching to
w_init = tf.contrib.layers.xavier_initializer()  # instead of tf.truncated_normal_initializer(stddev=sd)
fixed it; even tf.nn.tanh(conv5) was not necessary after this new initialization method. But why did the sd method not work?
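For reference, a minimal sketch of plugging the working initializer into a transposed conv (TF 1.x; the input shape and layer sizes here are made up):

import tensorflow as tf

w_init = tf.contrib.layers.xavier_initializer()
x = tf.placeholder(tf.float32, [None, 24, 24, 64])  # hypothetical input
conv5 = tf.layers.conv2d_transpose(
    x, filters=3, kernel_size=3, strides=2,
    padding='same', kernel_initializer=w_init)  # -> (None, 48, 48, 3)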
Maybe because I did not understand fan_in and fan_out properly.
It should have been (probably):
Source: http://deeplearning.net/tutorial/lenet.html
fan_in = n_feature_maps_in * receptive_field_height * receptive_field_width
fan_out = n_feature_maps_out * receptive_field_height * receptive_field_width / max_pool_area
where receptive_field_height and receptive_field_width correspond to those of the conv layer under consideration and max_pool_area is the product of the height and width of the max pooling that follows the convolution layer.
Source: https://stackoverflow.com/questions/42670274/
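To make that concrete, a sketch with made-up layer sizes, contrasting this fan_in with the bare in_c I used in np.sqrt(2. / in_c):

import numpy as np

# hypothetical conv layer: 3x3 kernel, 64 -> 128 feature maps,
# followed by 2x2 max pooling
n_in, n_out = 64, 128
kh, kw = 3, 3
max_pool_area = 2 * 2

fan_in = n_in * kh * kw                     # 576, not just in_c = 64
fan_out = n_out * kh * kw / max_pool_area   # 288.0

sd = np.sqrt(2. / fan_in)   # ~0.059, vs np.sqrt(2. / 64) ~ 0.177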
This is also a good site for understanding convolutional neural nets:
https://www.datacamp.com/community/tutorials/cnn-tensorflow-python
Also tried tf.truncated_normal_initializer(stddev=0.02)
https://github.com/nnUyi/SRGAN/blob/master/SRGAN.py
Almost the same results.
Maybe check with the loss graphs.
tf.summary - how to use it (see the sketch below).
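A minimal TF 1.x sketch (the placeholder here stands in for the real loss tensor):

import tensorflow as tf

g_loss = tf.placeholder(tf.float32, name='g_loss')
tf.summary.scalar('generator_loss', g_loss)
merged = tf.summary.merge_all()

with tf.Session() as sess:
    writer = tf.summary.FileWriter('./logs', sess.graph)
    for step in range(100):
        # in a real run, fetch merged alongside the train op
        summary = sess.run(merged, feed_dict={g_loss: 1.0 / (step + 1)})
        writer.add_summary(summary, step)
    writer.close()
# then: tensorboard --logdir ./logs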
Check your output range: is it -1 to 1 or 0 to 1?
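For example, a tanh output lives in [-1, 1] and needs rescaling before being saved or compared against 0-to-1 images (a minimal sketch):

import numpy as np

def to_01(img):
    # map a tanh output in [-1, 1] to [0, 1] for saving/display
    return (img + 1.0) / 2.0

img = np.tanh(np.random.randn(24, 24, 3))
out = to_01(img)
assert 0.0 <= out.min() and out.max() <= 1.0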
Pixel shuffling in 2D means reshaping a tensor of shape (N, f1*f2*C, H, W) to (N, C, f1*H, f2*W), thereby effectively upscaling the images by (f1, f2). So, r is the scale factor and n_splits is the output feature size. A sketch of that reshape is below.
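A sketch of that reshape in TF 1.x. The channel ordering (f1, f2 factors varying slowest) is an assumption; for NHWC tensors with f1 = f2 = r, tf.depth_to_space does this kind of rearrangement in one call:

import tensorflow as tf

def pixel_shuffle_nchw(x, f1, f2):
    # (N, f1*f2*C, H, W) -> (N, C, f1*H, f2*W)
    _, c_in, h, w = x.get_shape().as_list()
    c = c_in // (f1 * f2)
    x = tf.reshape(x, [-1, f1, f2, c, h, w])
    x = tf.transpose(x, [0, 3, 4, 1, 5, 2])  # (N, C, H, f1, W, f2)
    return tf.reshape(x, [-1, c, f1 * h, f2 * w])

x = tf.placeholder(tf.float32, [None, 2 * 2 * 64, 24, 24])
y = pixel_shuffle_nchw(x, 2, 2)  # -> (None, 64, 48, 48)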