Installing CUDA 8.0 and TensorFlow on Ubuntu 16.04

Reposted from: http://www.cnblogs.com/arkenstone/p/6900956.html

The GPU build of TensorFlow is without doubt a great asset for deep learning; of course, frameworks such as Caffe can also use the GPU to speed up training.

Note: the installation below assumes Python 2.7.

1. Install dependencies

$ sudo apt-get install openjdk-8-jdk git python-dev python3-dev python-numpy python3-numpy python-six python3-six build-essential python-pip python3-pip python-virtualenv swig python-wheel python3-wheel libcurl3-dev libcupti-dev

Of these, openjdk is required; without it the configuration step later on will fail with an error.

2. Install CUDA and cuDNN

CUDA and cuDNN are low-level compute libraries developed by NVIDIA specifically for machine learning; the combination of hardware and software acceleration is what lets NVIDIA GPUs leave integrated graphics far behind for deep learning.
Which CUDA and cuDNN versions to install depends on the graphics card you use; you can look this up here. We are using a GeForce GTX 1080 Ti, so the corresponding versions are CUDA 8.0 (the .run installer, download here) and cuDNN 6.0 (download here).

# Install CUDA
$ wget https://developer.nvidia.com/compute/cuda/8.0/Prod2/local_installers/cuda_8.0.61_375.26_linux-run
$ sudo sh cuda_8.0.61_375.26_linux-run --override --silent --toolkit  # CUDA is installed under /usr/local/cuda
# Install cuDNN
$ cd /usr/local/cuda  # unpack the cuDNN archive in this directory
$ sudo tar -xzvf cudnn-8.0-linux-x64-v6.0.tgz
$ sudo cp cuda/include/cudnn.h /usr/local/cuda/include
$ sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
$ sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
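
Before moving on, it can help to verify that the files landed where TensorFlow expects them (an optional sanity check, not part of the original post; with cuDNN 6.0 the major version reported should be 6):

$ ls /usr/local/cuda/lib64/libcudnn*
$ grep -A 2 CUDNN_MAJOR /usr/local/cuda/include/cudnn.h  # should show CUDNN_MAJOR 6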

Then add the following paths to your environment variables:

export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
export CUDA_HOME=/usr/local/cuda

That is, append the lines above to ~/.bashrc, save the file, and then run source ~/.bashrc.
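
As a quick check (optional, not from the original post), confirm the variables are picked up and, assuming the NVIDIA driver is already installed, that the card is visible:

$ source ~/.bashrc
$ echo $CUDA_HOME   # should print /usr/local/cuda
$ nvidia-smi        # should list the GeForce GTX 1080 Ti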

3. Install TensorFlow

Since the official release of TensorFlow 1.0, installation has become very easy. Once CUDA and cuDNN are installed, the GPU version of tf takes only two more steps:

$ sudo apt-get install python-pip python-dev
$ pip install tensorflow-gpu

However, tf installed this way prints warnings at runtime saying that the binary was not compiled to use CPU instruction-set extensions such as SSE4.1, SSE4.2, AVX, AVX2, and FMA, even though the machine supports them.

This is because the default pip package is not compiled with these instruction sets enabled; to get them you have to build TensorFlow yourself.

3.1 Install bazel

$ echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list$ curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -$ sudo apt-get update$ sudo apt-get install bazel

3.2 Clone tf

$ git clone https://github.com/tensorflow/tensorflow
$ cd tensorflow
$ git checkout r1.1  # choose the tf 1.1 branch

3.3 Configure tf

$ ./configure  # an example session follows
Please specify the location of python. [Default is /usr/bin/python]: y
Invalid python path. y cannot be found
Please specify the location of python. [Default is /usr/bin/python]: 
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]: 
Do you wish to use jemalloc as the malloc implementation? [Y/n] y
jemalloc enabled
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N] n
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N] y
Hadoop File System support will be enabled for TensorFlow
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N] n
No XLA JIT support will be enabled for TensorFlow
Found possible Python library paths:
  /usr/local/lib/python2.7/dist-packages
  /usr/lib/python2.7/dist-packages
Please input the desired Python library path to use.  Default is [/usr/local/lib/python2.7/dist-packages]
Using python library path: /usr/local/lib/python2.7/dist-packages
Do you wish to build TensorFlow with OpenCL support? [y/N] n
No OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N] y
CUDA support will be enabled for TensorFlow
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]: 
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to use system default]: 8.0
Please specify the location where CUDA 8.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: 
Please specify the Cudnn version you want to use. [Leave empty to use system default]: 6
Please specify the location where cuDNN 6 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: 
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size.
[Default is: "3.5,5.2"]: 6.1
INFO: Starting clean (this may take a while). Consider using --expunge_async if the clean takes more than several minutes.
.........
INFO: All external dependencies fetched successfully.
Configuration finished

3.4 Build tf

To compile in support for the SSE4.1, SSE4.2, AVX, AVX2, and FMA instructions mentioned above, pass the following options to bazel build (for details see the Stack Overflow link in the references):

$ bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.2 --config=cuda -k //tensorflow/tools/pip_package:build_pip_package
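
Before running the build above, you may want to confirm which of these instruction sets your CPU actually supports (an optional check, not from the original post); only pass the --copt flags whose names show up:

$ grep -m1 flags /proc/cpuinfo | tr ' ' '\n' | grep -E '^(sse4_1|sse4_2|avx|avx2|fma)$'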

3.5 Package tf

$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
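
This writes a pip wheel into /tmp/tensorflow_pkg; listing the directory shows the exact filename (the version and Python ABI parts of the name depend on your setup):

$ ls /tmp/tensorflow_pkg/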

3.6 Install tf

$ sudo pip install /tmp/tensorflow_pkg/tensorflow*.whl

3.7 Test

$ python
import tensorflow as tf
sess = tf.Session()
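
If the GPU build works, creating the session should print log lines showing the GTX 1080 Ti being picked up. A slightly fuller check (a sketch, not from the original post) that also confirms ops actually run on the GPU:

import tensorflow as tf

# Log device placement so each op reports whether it ran on /gpu:0 or /cpu:0
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
a = tf.constant([1.0, 2.0, 3.0], name='a')
b = tf.constant([4.0, 5.0, 6.0], name='b')
print(sess.run(a + b))  # expected output: [ 5.  7.  9.]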

References

https://alliseesolutions.wordpress.com/2016/09/08/install-gpu-tensorflow-from-sources-w-ubuntu-16-04-and-cuda-8-0/
https://stackoverflow.com/questions/41293077/how-to-compile-tensorflow-with-sse4-2-and-avx-instructions
https://www.tensorflow.org/install/install_linux
