Numpy, BLAS and CUBLAS
Question
Numpy can be "linked/compiled" against different BLAS implementations (MKL, ACML, ATLAS, GotoBlas, etc). That's not always straightforward to configure but it is possible.
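As a quick way to see which BLAS implementation a given NumPy build is actually linked against, NumPy can report its own build configuration (a minimal sketch; the exact output format differs between NumPy versions):

```python
import numpy as np

# Print the BLAS/LAPACK libraries this NumPy build was compiled against
# (e.g. MKL, OpenBLAS, ATLAS). The output layout varies across versions.
np.show_config()
```

Running this on an MKL-linked build, for instance, will mention `mkl` in the library lists, while an OpenBLAS build will mention `openblas`.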
Is it also possible to "link/compile" numpy against NVIDIA's CUBLAS implementation?
I couldn't find any resources on the web, and before I spend too much time trying it I wanted to make sure it is possible at all.
Answer
In a word: no, you can't do that.
There is a rather good scikit called scikits.cuda which provides access to CUBLAS from scipy, and which is built on top of PyCUDA. PyCUDA provides a numpy.ndarray-like class which allows seamless manipulation of numpy arrays in GPU memory with CUDA. So you can use CUBLAS and CUDA together with numpy, but you can't just link against CUBLAS and expect it to work.
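A hedged sketch of what that workflow looks like with scikits.cuda (distributed today as `scikit-cuda`, imported as `skcuda`): a numpy array is copied into a PyCUDA `GPUArray`, and `skcuda.linalg.dot` dispatches the multiply to CUBLAS. This requires a CUDA-capable GPU with PyCUDA and scikit-cuda installed, so the sketch degrades gracefully when they are absent:

```python
import numpy as np

try:
    import pycuda.autoinit            # creates a CUDA context on import
    import pycuda.gpuarray as gpuarray
    import skcuda.linalg as linalg    # CUBLAS-backed linear algebra

    linalg.init()
    a = np.random.rand(4, 4).astype(np.float32)
    a_gpu = gpuarray.to_gpu(a)        # copy the numpy array to GPU memory
    c_gpu = linalg.dot(a_gpu, a_gpu)  # GEMM executed by CUBLAS
    # c_gpu.get() copies the result back into a plain numpy array
    print(np.allclose(a @ a, c_gpu.get(), atol=1e-4))
except Exception:
    # PyCUDA / scikit-cuda not installed, or no CUDA device present
    pass
```

The point of the sketch is the division of labour: numpy holds host-side arrays, PyCUDA owns the device copies, and CUBLAS does the math, with nothing relinked inside numpy itself.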
There is also a commercial library that provides numpy and cublas like functionality and which has a Python interface or bindings, but I will leave it to one of their shills to fill you in on that.