Views: 11714 | Replies: 42

[News] CORE22 0.0.13 POWERUP WITH NV CUDA SUPPORT (springtime for Kepler and Pascal)

Posted on 2020-9-20 10:26:52
Last edited by 金鹏 on 2020-9-29 18:31

So far, Pascal cards are seeing huge gains: a 1070 Ti on 1343x projects hit 1.6M+ PPD (an absolute sweet spot), a 1080 on 134xx projects hit 1.6M+, and a 1070 on 1343x hit 1.2M+.
The biggest surprise is the 1080 Ti, which reached 2.6-2.8M PPD, a 63% improvement.

My 2080 Ti kept failing work units, probably from excessive overclocking; after lowering the clocks a bit it stabilized at 4.2M PPD, a 20% improvement, smaller than Pascal's.

PS: If your card is overclocked, reduce the clocks slightly to keep it stable, and update your driver to a recent version.

TOGETHER WE ARE EVEN MORE POWERFUL: GPU FOLDING GETS A POWERUP WITH NVIDIA CUDA SUPPORT!
September 28, 2020
CUDA support comes to Folding@home to give NVIDIA GPUs big boosts in speed, and you don’t have to do anything to activate it!
GPU Folders make up a huge fraction of the number-crunching power of Folding@home, enabling us to help projects like the COVID Moonshot open science drug discovery project evaluate thousands of molecules per week in their quest to produce a new low-cost patent-free therapy for COVID-19.

https://youtu.be/VnyaAmM1nhE

The COVID Moonshot (@covid_moonshot) is using the number-crunching power of Folding@home to evaluate thousands of molecules per week, synthesizing hundreds of these molecules in their quest to develop a patent-free drug for COVID-19 that could be taken as a simple 2x/day pill.
As of today, your folding GPUs just got a big powerup! Thanks to NVIDIA engineers, our Folding@home GPU cores—based on the open source OpenMM toolkit—are now CUDA-enabled, allowing you to run GPU projects significantly faster. Typical GPUs will see 15-30% speedups on most Folding@home projects, drastically increasing both science throughput and points per day (PPD) these GPUs will generate.
GPU speedups for CUDA-enabled Folding@home core22 on typical Folding@home projects range from 15-30% for most GPUs with some GPUs seeing even larger benefits.
Even more exciting is that the COVID Moonshot Sprints—which use special OpenMM features to estimate how tightly potential therapeutics will inhibit the SARS-CoV-2 main viral protease—can see speedups up to 50-100% on many GPUs, helping us enormously accelerate our progress toward a cure. You can follow Moonshot’s progress on Twitter.
GPU speedups for CUDA-enabled Folding@home core22 on COVID Moonshot projects—which use special features of OpenMM to help identify promising therapeutics—range from 50-400%!
To see these speed boosts, you won’t have to do anything—the new 0.0.13 release of core22 will automatically roll out over the next few days on many projects, automatically downloading the CUDA-enabled version of the core and CUDA runtime compiler libraries needed to accelerate our code. If you have an NVIDIA GPU, your client logs will show that the 0.0.13 core will attempt to launch the faster CUDA version.
Folding@home core22 0.0.13 performance benchmark for the largest system ever simulated on Folding@home—the SARS-CoV-2 spike protein (448,584 atoms)—with PME electrostatics and a 2 fs timestep.
Folding@home core22 0.0.13 performance benchmark (higher ns/day is more science per day!) for a small system—the DHFR benchmark (23,558 atoms)—with PME electrostatics and 4 fs timesteps using a BAOAB Langevin integrator.
To get the most performance out of the new CUDA-enabled core, be sure to update your NVIDIA drivers! There’s no need to install the CUDA Toolkit.
While core22 0.0.13 should automatically enable CUDA support for Kepler and later NVIDIA GPU architectures, if you encounter any issues, please see the Folding Forum for help in troubleshooting. Both Folding@home team members and community volunteers can help debug any issues.
Besides CUDA support, core22 0.0.13 includes a number of bugfixes and new science features, as well as more useful information displayed in the logs.
We’re incredibly grateful to all those who contributed to the development of the latest version of the Folding@home GPU core, especially:
  • Peter Eastman, lead OpenMM developer (Stanford)
  • Joseph Coffland, lead Folding@home developer (Cauldron Development)
  • Adam Beberg, Principal Architect, Distributed Systems (NVIDIA) and original co-creator of Folding@home nearly 21 years ago!
We’d like to send special thanks to Jensen Huang and everyone at NVIDIA for their incredible support for Folding@home, which was featured in the recent NVIDIA GeForce RTX 30 Series launch event:

https://youtu.be/E98hC9e__Xs

In addition, we couldn’t have brought you these improvements without the incredible effort of all of the Folding@home volunteers who helped us test many builds, especially PantherX, Anand Bhat, Jesse_V, bruce, toTOW, davidcoton, mwroggenbuck, artoar_11, rhavern, hayesk, muziqaz, Zach Hillard, _r2w_ben, bollix47, joe_h, ThWuensche, and everyone else who tested the core and provided feedback.
core22 0.0.13 in BETA testing : CUDA support at last!
by JohnChodera » Sun Sep 20, 2020 9:22 am
After a lot of work from a large number of awesome folks, we've just rolled out core22 0.0.13 to BETA!

Currently, we're only testing project 17102, which is a collection of different RUNs (RUN0 through RUN17) that test different workloads in short WUs for benchmarking and stability analysis.
As a reminder, you can set your

CODE: SELECT ALL
client-type beta

to run through BETA projects (like 17102).
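For reference, the same setting can also live in the FAHClient configuration file. A minimal sketch, assuming the standard `config.xml` format used by FAHClient (the file's location varies by platform):

```xml
<config>
  <!-- Request beta work units (e.g. project 17102) -->
  <client-type v='beta'/>
</config>
```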

After that, we'll test more broadly in BETA projects before the full announcement.

Full release notes are below:

New features

  • 0.0.13 adds support for CUDA for NVIDIA GPUs! This enables NVIDIA GPUs to run ~25% faster on normal projects and up to 50-100% faster for COVID Moonshot free energy sprints. Expect to see noticeably lower time per frame (TPF) numbers for these projects with significantly higher points per day (PPD). The core will fall back to OpenCL if CUDA cannot be configured.
  • More useful debugging information is printed in the client logs when the CUDA or OpenCL platforms cannot be configured.
  • The core now bundles dynamic libraries instead of compiling libraries statically to permit various platforms and plugins to be loaded dynamically. This was essential for enabling CUDA support.
  • The OpenCL driver version is now reported in the client logs.
  • Checkpoints are now reported in the client logs.
  • The final integrator state is returned at the end of the WU, allowing for more complex simulation schemes that employ OpenMM's CustomIntegrator.

Bugfixes

  • Several OpenMM bugfixes for AMD GPUs were incorporated.
  • Temperature monitor warnings have been removed.
  • Requesting the core to shut down during core initialization should now be handled more gracefully.
  • On older clients, `-gpu` is interpreted as a synonym for `-opencl-device`, and a warning to upgrade the client is issued.

Known issues

  • We are noticing that some NVIDIA Linux beta drivers sometimes fail when using the OpenCL platform with the Windows Subsystem for Linux 2 (WSL2) with
    `ERROR:exception: Error initializing context: clCreateContext (218)`

    and are actively working with NVIDIA to track down and fix this issue. In the meantime, we hope the CUDA support will provide a work-around for this issue. Please post here if you encounter this issue so we can work with you to more quickly test a fix.
  • Older NVIDIA GPUs (Fermi and older) may not be able to make use of CUDA, but will fall back to OpenCL instead

Acknowledgments

Enormous thanks to the following people for their help in producing the latest core:
  • Peter Eastman, lead OpenMM developer (Stanford)
  • Joseph Coffland, lead Folding@home developer (Cauldron Development)
  • Adam Beberg, Principal Architect, Distributed Systems (NVIDIA) (and original co-architect of Folding@home nearly 21 years ago!)

Huge thanks to the extremely patient Folding@home volunteers who helped us test many builds of this core, especially PantherX, Anand Bhat, Jesse_V, bruce, toTOW, davidcoton, mwroggenbuck, artoar_11, rhavern, hayesk, muziqaz, Zach Hillard, _r2w_ben, bollix47, joe_h, ThWuensche, and everyone else who tested the core and provided feedback.
Re: core22 0.0.13 in BETA testing : CUDA support at last!
by Jesse_V » Sun Sep 20, 2020 9:39 am
John deserves his share of the credit too for an enormous amount of time and effort getting this version to run smoothly, reliably, and ready for beta testing. We have all been working on this and testing different builds for several months now.

During our testing, we've also been able to perform some benchmarking between Windows -> Linux and OpenCL -> CUDA. While a ~20-25% performance boost is fairly common for Linux when compared to Windows, there's a very big boost when comparing OpenCL on Windows to CUDA on Linux. You can see this in the time per frame (TPF) for specific projects. For example, on COVID Moonshot free energy calculation projects:

p13426 at 10% in Windows 10 on GTX 1080 Ti running OpenCL: TPF of 1:57
p13426 at 10% in Windows 10 on GTX 1080 Ti running CUDA: TPF of 1:16
p13426 at 10% in Debian on GTX 1080 Ti running OpenCL: TPF of 1:34
p13426 at 10% in Debian on GTX 1080 Ti running CUDA: TPF of 0:59

So if Windows-OpenCL is our baseline, then on this project, the speed improvements are:
Windows-CUDA: +53%
Debian-OpenCL: +24%
Debian-CUDA: +98%
CUDA average: +75%
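These percentages follow directly from the TPF values. A minimal Python sketch that recomputes them from the figures quoted above (the helper names are ours, not part of any Folding@home tooling):

```python
# Recompute the speedup percentages from the reported times per frame (TPF).
# TPF values are taken from the post; lower TPF means a faster run.

def tpf_seconds(tpf: str) -> int:
    """Convert an 'M:SS' time-per-frame string to seconds."""
    minutes, seconds = tpf.split(":")
    return int(minutes) * 60 + int(seconds)

def speedup_pct(baseline: str, faster: str) -> float:
    """Percent speedup of `faster` relative to `baseline`."""
    return (tpf_seconds(baseline) / tpf_seconds(faster) - 1.0) * 100.0

baseline = "1:57"  # Windows 10, GTX 1080 Ti, OpenCL (p13426)
for label, tpf in [("Windows-CUDA", "1:16"),
                   ("Debian-OpenCL", "1:34"),
                   ("Debian-CUDA", "0:59")]:
    print(f"{label}: +{speedup_pct(baseline, tpf):.0f}%")
```

The Debian figures come out exactly as quoted (+24%, +98%); Windows-CUDA computes to about +54%, matching the post's +53% up to rounding.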

The core will attempt to use CUDA when available. This translates into a very significant gain in PPD, as Adam Beberg found for another project across different Nvidia hardware in Linux:

TPF 73s - GTX 1080Ti running OpenCL/ 1.554 M PPD
TPF 57s - GTX 1080Ti running CUDA / 2.253 M PPD
TPF 49s - RTX 2080Ti running OpenCL/ 2.826 M PPD
TPF 39s - RTX 2080Ti running CUDA / 3.981 M PPD
TPF 36s - RTX 3080 running OpenCL / 4.489 M PPD
TPF 31s - RTX 3080 running CUDA / 5.618 M PPD
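The CUDA-over-OpenCL PPD gain implied by this table can be computed per GPU. A small Python sketch using the PPD figures as reported (the pairing by model is ours):

```python
# Implied CUDA-vs-OpenCL PPD gain per GPU, from the reported figures (in M PPD).
reported = {
    "GTX 1080 Ti": (1.554, 2.253),  # (OpenCL PPD, CUDA PPD)
    "RTX 2080 Ti": (2.826, 3.981),
    "RTX 3080":    (4.489, 5.618),
}

def gain_pct(opencl_ppd: float, cuda_ppd: float) -> float:
    """Percent PPD gain from switching OpenCL -> CUDA on the same GPU."""
    return (cuda_ppd / opencl_ppd - 1.0) * 100.0

for gpu, (opencl, cuda) in reported.items():
    print(f"{gpu}: +{gain_pct(opencl, cuda):.0f}% PPD")
```

Interestingly, the relative gain shrinks on the newer architectures here (roughly +45%, +41%, +25%), consistent with the thread's observation that Pascal benefits the most.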

Finally, I'd like to highlight that core22 v0.0.13 currently seems incompatible with the pocl-opencl-icd package in Ubuntu, which depends on libpocl2. These packages are not necessary to run F@h, but if you have them installed, please remove them before running the core. Other than that, if the core segfaults at startup or fails to start or run in any way, please post below!
Re: core22 0.0.13 in BETA testing : CUDA support at last!
by JohnChodera » Mon Sep 21, 2020 1:38 pm
Heads up that we've just moved the longer 1343x projects to BETA and enabled them for core22 0.0.13 so that we can test full-length projects to make sure failure rates are within expected margins.


Posted on 2020-9-21 00:49:00
A 3080 at 5.6M PPD... holy...

Posted on 2020-9-21 11:46:33
Even my GTX 980 has reached 1.3M+ PPD, and the temperature is delightful too, just a bit over 60°C.


Posted on 2020-9-21 13:06:48
Finally free of that sluggish OpenCL 1.2.

Posted on 2020-9-21 13:40:56
Last edited by 牵牛星 on 2020-9-21 19:03

CUDA engaged successfully on a 13435 work unit; so far I'm seeing about a 30% PPD increase. I'll update after a full WU finishes.

Update: the 13435 WU finished at 2.84M PPD, a 35% improvement! A newly picked-up 13430 is running around 2.6M PPD, an improvement of over 50%!



Posted on 2020-9-21 14:05:57
My 2070 Super on Windows 10 picked up a 13432 WU; PPD rose from the usual 1.6-2.1M to 2.62M.


OP | Posted on 2020-9-21 15:43:02
zflowers posted on 2020-9-21 14:05:
My 2070 Super on Windows 10 picked up a 13432 WU; PPD rose from the usual 1.6-2.1M to 2.62M.

My 1080 Ti shot up from 1.6M to 2.6M.

Posted on 2020-9-21 16:24:57
The optimization looks substantial. I thought that without a Turing card I had no shot at higher scores; I didn't expect gains this big. Delighted.


Posted on 2020-9-22 09:24:39
My mid- and low-end cards here aren't receiving these CUDA WUs at all; the 1070, 106-100, 474, and other cards all can't get them.

Posted on 2020-9-22 09:59:02
Waiting for the new core to graduate out of beta.

Posted on 2020-9-22 14:26:39
I added my other three cards at home to beta: the 1070 gained 80%, and the M1650 Ti and 1660 Ti both gained 55%. A big treat for old Pascal.


Posted on 2020-9-22 20:31:20
The 1660 has already broken 1M PPD.


Posted on 2020-9-23 09:54:56
Today my 1070, which usually does 670-730K PPD, picked up a CUDA WU and turned in an astonishing 1.39M PPD.


Posted on 2020-9-25 12:39:01
Looks like the 0.0.13 WUs have run out for now?

Posted on 2020-9-25 13:58:39
牵牛星 posted on 2020-9-25 12:39:
Looks like the 0.0.13 WUs have run out for now?

Seems the core has been promoted out of beta. Without adding any flags, I received a 14484 WU and it automatically started downloading the new core.
回复

使用道具 举报

中国分布式计算总站 (China Distributed Computing Forum)

Powered by Discuz! X3.5
