
银河里的星星

落在人间

Mixed MPI and CUDA programming; general CUDA programming

2010-03-26 09:39:18 | Category: High-Performance Computing

http://forums.nvidia.com/index.php?showtopic=98213
I am running Ubuntu 8.04 with the CUDA 2.0 toolkit, driver version 177.73, and OpenMPI. With this configuration everything works fine: I am able to compile and execute MPI code simply by replacing g++/gcc with mpic++ in common.mk.

My issue is that when I try to upgrade my driver to version 180.22 (to get support for my new 295 cards), I get an immediate segmentation fault even with the most trivial programs (an empty int main). The problem happens only when I compile with the CUDA template; other programs compiled with just the mpic++ command line run fine, and when I go back to driver v177.73 everything works again. This issue occurs with nearly identical software configurations on 5 different workstations with different motherboards, CPUs, chipsets, and graphics cards.

Has anyone had this issue in the past? I suspect there may be a compiler flag I can pass to fix this, but that is well above my pay grade. I have found that things compile and run if I switch to MPICH and the mpicc wrapper.
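For reference, the common.mk change the poster describes (swapping the host compilers for the MPI wrappers in the CUDA 2.x SDK build system) looks roughly like this. Treat it as a sketch: the exact variable names varied between SDK releases, and the paths assume OpenMPI's wrappers are on the PATH.

```makefile
# Sketch of the common.mk tweak described above (CUDA SDK 2.x style;
# variable names may differ between SDK releases).
# Replacing the host compilers with the MPI wrappers makes nvcc's host
# compilation and the final link pick up the MPI headers and libraries:
CXX  := mpic++        # was: g++
CC   := mpicc         # was: gcc
LINK := mpic++ -fPIC  # was: g++ -fPIC
```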

http://forums.nvidia.com/index.php?showtopic=30741

http://forums.nvidia.com/index.php?showtopic=96620&hl=mpi
http://forums.nvidia.com/index.php?showtopic=75796&hl=mpi
http://forums.nvidia.com/index.php?showtopic=71498&hl=mpi
http://forums.nvidia.com/index.php?showtopic=159179

A complete guide to mixed CUDA and MPI programming
Mixing CUDA and openMPI
the GPU I was using with MPI was in protected mode, which prevented me from running
CUDA and autoconf

compiling MPI and CUDA C
 MPI and CUDA C
Elementary CUDA question: MPI and CUDA, can one run a program already written in standard MPI?

how to compile MPI and CUDA.

CUDA and MPI

cuda + openmpi

CUDA with OpenMPI on Ubuntu 8.04, libcudart.so.2: cannot open shared object file: No such file or directory
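The "libcudart.so.2: cannot open shared object file" error in the thread above usually means the CUDA runtime directory is not on the dynamic loader's search path. A minimal fix, assuming the default install prefix /usr/local/cuda (32-bit toolkits of that era used lib instead of lib64):

```shell
# Make the CUDA runtime visible to the dynamic loader for this session.
# Assumes the default install prefix /usr/local/cuda; use lib instead
# of lib64 on 32-bit systems.
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
```

To make the change permanent, add the line to ~/.bashrc, or add the directory to a file under /etc/ld.so.conf.d/ and run ldconfig as root.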

Mixed CUDA and MPI programming

MPI causing trouble in memory allocation?


Question about using cudaMemcpy in mixed CUDA/MPI Programming
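On the cudaMemcpy question above: plain MPI of this era could only transfer host memory, not device pointers, so device data is staged through a host buffer around each MPI call. A minimal sketch of the pattern, assuming one device buffer per rank (buffer names and sizes are illustrative, not from the thread):

```cuda
// Sketch: exchanging device data between two MPI ranks by staging
// through host buffers, since MPI cannot read device pointers directly.
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int N = 1024;
    float *h_buf = (float *)malloc(N * sizeof(float));
    float *d_buf = NULL;
    cudaMalloc((void **)&d_buf, N * sizeof(float));

    if (rank == 0) {
        // Copy device -> host, then send the host copy.
        cudaMemcpy(h_buf, d_buf, N * sizeof(float), cudaMemcpyDeviceToHost);
        MPI_Send(h_buf, N, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        // Receive into the host buffer, then copy host -> device.
        MPI_Recv(h_buf, N, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        cudaMemcpy(d_buf, h_buf, N * sizeof(float), cudaMemcpyHostToDevice);
    }

    cudaFree(d_buf);
    free(h_buf);
    MPI_Finalize();
    return 0;
}
```

Compile the .cu file with nvcc and link with the MPI wrapper (or use the common.mk approach quoted earlier), then run with mpirun -np 2.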

 CUDA visual profiler using mpi?

Sharing 1 GPU between MPI tasks, works fine with 4 MPI tasks but cudaMalloc "unknown error" with
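For the multi-rank cudaMalloc errors in the thread above, a common remedy is to pin each rank to a device explicitly before the first CUDA call. A sketch, assuming a round-robin rank-to-device mapping (the mapping is an assumption, not taken from the thread):

```cuda
// Sketch: assigning each MPI rank a GPU before any allocation.
// The round-robin mapping (rank % device count) is an assumption.
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, ndev = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    cudaGetDeviceCount(&ndev);
    if (ndev > 0)
        cudaSetDevice(rank % ndev);  // must precede the first allocation

    void *p = NULL;
    cudaError_t err = cudaMalloc(&p, 1 << 20);
    if (err != cudaSuccess)
        fprintf(stderr, "rank %d: cudaMalloc failed: %s\n",
                rank, cudaGetErrorString(err));
    cudaFree(p);
    MPI_Finalize();
    return 0;
}
```

Note that if the GPU's compute mode is set to an exclusive mode, only one process may use it at a time, and additional ranks will fail regardless of the mapping; the mode can be inspected and changed with nvidia-smi.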

Sort-of MPI on the Tesla, Development of high-level routines

CUDA multicore/mpi


