GridcoinStats uses the popular blockchain-based blogging platform Steemit to relay news and updates. To read the full article, comment, and vote, browse to it by clicking on its title.

Here you can read the latest headlines about this page, as well as the latest headlines in the "gridcoin" category, both from Steemit.com. News feeds for all currently whitelisted projects are collected on this page as well, sourced from their respective forums.
Posts from the community that are made in sub-categories for a Gridcoin whitelisted project. Read more project-specific news on each project's page in the Projects section.
Headlines from projects on the whitelist. Read more by clicking the link; commenting is usually possible on the respective project's forum. More headlines are available on each project's details page, reached by clicking the project name below the article.
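
This page is essentially a feed aggregator. For readers who want to pull the same headlines themselves, here is a minimal sketch assuming the standard BOINC rss_main.php news endpoint and the third-party feedparser library; the feed URLs below are illustrative, not a list this page actually uses:

import time
import feedparser  # third-party library: pip install feedparser

# Illustrative project feeds; most BOINC servers publish project news
# as RSS at <project_url>/rss_main.php.
FEEDS = {
    "milkyway@home": "https://milkyway.cs.rpi.edu/milkyway/rss_main.php",
    "lhc@home_classic": "https://lhcathome.cern.ch/lhcathome/rss_main.php",
}

def latest_headlines(limit=5):
    """Collect entries from every configured feed, newest first."""
    items = []
    for project, url in FEEDS.items():
        for entry in feedparser.parse(url).entries:
            published = entry.get("published_parsed") or time.gmtime(0)
            items.append((published, project, entry.get("title", "")))
    items.sort(key=lambda item: item[0], reverse=True)
    return items[:limit]

if __name__ == "__main__":
    for published, project, title in latest_headlines():
        date = time.strftime("%Y-%m-%d", published)
        print(f"{title}\n  By {project} at {date}\n")
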
Planned Server Maintenance

Hi everyone, the MilkyWay@home server will be shut down at noon tomorrow (6/20). We plan on bringing it back online shortly after. If there are any problems, we will post to our social media accounts. Thank you for your support! - Tom

By milkyway@home at 2019-06-19

killing extremely long SixTrack tasks

Dear all, we had to kill ~10k WUs named w-c*_job*__s__62.31_60.32__*__7__*_sixvf_boinc* due to a mismatch between the requested disk space and that actually needed by the job. These tasks would in any case be killed by the BOINC manager at some point with an EXIT_DISK_LIMIT_EXCEEDED message - please see [Link] https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5062 for further info. These tasks cover 10^7 LHC turns, a factor of 10 larger than usual, with files growing in size until the limit is hit.

The killing does not involve all tasks with such names - I have killed only those that should cover the stable part of the beam, since those tasks are expected to run for a long time and hence reach the disk-usage limit. The other WUs should see enough beam losses that the limit is not reached - please post in this thread if this is not the case. This cherry-picked killing was done in an effort to preserve as many as possible of the tasks already being crunched or pending validation. As soon as you update the LHC@home project in your BOINC manager, you should see the tasks being killed.

We will soon resubmit the same tasks with appropriate disk requirements. Apologies for the disturbance, and thanks for your understanding. A.

By lhc@home_classic at 2019-06-18
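
As a back-of-the-envelope illustration of why tasks covering ten times more turns hit the disk bound, here is a minimal sketch; the per-turn output growth and the disk limit below are hypothetical numbers, since the announcement does not quote them:

# Hypothetical numbers showing how 10x more turns can exceed a fixed
# per-task disk bound while the usual, shorter tasks stay under it.
USUAL_TURNS = 10**6       # typical task length (the post says 10^7 is "a factor 10 larger than usual")
LONG_TURNS = 10**7        # the killed tasks
BYTES_PER_TURN = 30       # hypothetical: average output growth per turn
DISK_LIMIT = 100 * 2**20  # hypothetical: 100 MiB per-task disk bound

for turns in (USUAL_TURNS, LONG_TURNS):
    usage = turns * BYTES_PER_TURN
    status = "OK" if usage <= DISK_LIMIT else "EXIT_DISK_LIMIT_EXCEEDED"
    print(f"{turns:>12,} turns -> {usage / 2**20:7.1f} MiB ({status})")
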

Subfield 5 almost complete

As you may have noticed, subfield 5 is nearing completion. In the next few hours the last of the work units will be sent out. Of course, it will still take ~2 weeks for the final results to trickle in before it's officially complete. This is a huge milestone! This subfield was originally estimated to take over a year to complete; thanks to the new optimized apps (including GPU apps), it was completed in just several months.

We will be moving on to subfield 4, but first we will make a short detour and finish off subfield 6 DS7. Don't let the ~2.5 million work units scare you. These were generated for the old apps and would have taken about 2 hours a pop, but with the newer apps they should be about 10x faster. That means we should blow through about 350k work units per day. If this turns out to be too much of a strain on the server, then we will jump straight to subfield 4 (and run sf6 DS7 in parallel).

Thanks everyone for your contributions! We couldn't have done this without our volunteers.

By numberfields@home at 2019-06-16
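
A quick sanity check of the throughput figures quoted above, using only numbers from the announcement:

# Figures quoted in the announcement above.
work_units = 2.5e6     # remaining sf6 DS7 work units
old_hours_each = 2.0   # runtime per WU with the old apps
speedup = 10           # newer apps are ~10x faster
per_day = 350_000      # projected daily throughput

new_minutes_each = old_hours_each * 60 / speedup
days_to_finish = work_units / per_day
print(f"~{new_minutes_each:.0f} min per WU with the new apps")
print(f"~{days_to_finish:.1f} days to clear {work_units:,.0f} WUs at {per_day:,} per day")
# -> ~12 min per WU, and roughly a week to clear the batch at the quoted rate.
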

Using a local proxy to reduce network traffic for CMS

Thanks to computezrmle, with additional work from Laurence and a couple of CMS experts (and my adding one line to the site-local-config file), there is now a way to set up a local caching proxy to greatly reduce your network traffic. Each job instance that runs within a CMS BOINC task must retrieve a lot of set-up data from our database. This data doesn't change very often, so if you keep a local copy, the job can access that rather than going over the network every time. Instructions on how to do this are available at [Link] https://lhcathomedev.cern.ch/lhcathome-dev/forum_thread.php?id=475&postid=6396 or [Link] https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5052&postid=39072

By lhc@home_classic at 2019-06-07
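
The linked threads contain the actual setup instructions. Purely to illustrate the underlying caching idea (fetch once, serve repeat requests locally), here is a hypothetical Python sketch; it is not the mechanism the CMS jobs actually use:

import urllib.request

# Hypothetical illustration of the caching idea behind a local proxy:
# set-up data rarely changes, so the first request goes over the network
# and repeat requests are served from the local copy.
_cache = {}

def fetch_cached(url):
    """Return the response body for url, hitting the network once per URL."""
    if url not in _cache:
        with urllib.request.urlopen(url) as response:
            _cache[url] = response.read()
    return _cache[url]

# Every job instance re-reads the same set-up data; with the cache in
# place, only the first read generates WAN traffic.
# data = fetch_cached("https://example.org/cms-setup-data")  # hypothetical URL
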

Citizen scientists use Foldit to successfully design synthetic proteins

Citizen scientists can now use Foldit to successfully design synthetic proteins. The initial results of this unique collaboration are described in Nature. Brian Koepnick, a recent PhD graduate in the Baker lab, led a team that worked on Foldit behind the scenes, introducing new features into the game that they believed would help players home in on better folded structures. Read more from the [Link] Baker Lab. Thanks to all Rosetta@home participants who helped in this study. Many of the designs were validated using forward folding on Rosetta@home. Read the full manuscript: https://doi.org/10.1038/s41586-019-1274-4 [Link] PDF

By rosetta@home at 2019-06-06

NewER runs for Milkyway_nbody

Hey all,
Sorry, there was a problem with the newest runs for nbody. I've taken them down and replaced them with these runs:
- de_nbody_06_06_2019_v176_40k__sim__1
- de_nbody_06_06_2019_v176_40k__sim__2
- de_nbody_06_06_2019_v176_40k__sim__3
I apologize for any inconvenience. Thank you once again for your support.
-Eric

By milkyway@home at 2019-06-06

Formula BOINC Sprint 07.06.2019 04:00 (UTC) - 10.06.2019 03:59 (UTC)

SRBase has been chosen as the next Sprint project. More work will be available soon.

By srbase at 2019-06-06

New runs for Milkyway_nbody

Hello all,
Just wanted to let you know that a new batch of runs for milkyway_nbody was just put on the server. The names of the runs are as follows:
- de_nbody_06_05_2019_v176_40k__sim__1
- de_nbody_06_05_2019_v176_40k__sim__2
- de_nbody_06_05_2019_v176_40k__sim__3
If you have any issues or questions about these runs, please let me know. Thank you all for your support.
-Eric

By milkyway@home at 2019-06-05

new exes for SixTrack 5.02.05

Dear volunteers, we are pleased to announce the release to production (SixTrack app) of new exes for the current pro version (v5.02.05). We have new exes for FreeBSD (avx/sse2), an exe for XP hosts (32-bit), an aarch64 executable for Linux, and one for Android. Many thanks to James, Kyrre and Veronica for finding the time to produce them.

Distributing an exe compatible with XP hosts is not meant to encourage people to stay with unsupported OSs, but rather to allow a smooth transition to more recent OSs. This way, people with XP hosts do not miss the chance to contribute to the present wave of SixTrack tasks (expected to be quite long) while considering options for upgrading their hosts. At the same time, we are looking into preparing 32-bit Linux exes. It should be noted that all Win exes are distributed without targeting specific kernel versions - hence, XP hosts may receive tasks with regular Windows exes that immediately fail, but the BOINC server should quickly learn that the XP-compatible exe is the appropriate one.

We are also very happy to start involving FreeBSD and Android users in our production chain. For the latter platform, the present exe won't run on Android versions >=8 - James is still looking into this. Since Android version filtering needs a fix on the scheduler side ([Link] https://github.com/BOINC/boinc/issues/3172), we labelled the Android exe as beta. Hence, SixTrack beta users with Android 8 and later should not request tasks for that host or untick the tes...

By lhc@home_classic at 2019-06-04
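
The post says the BOINC server should "quickly learn" that the XP-compatible exe is the right one for an XP host. Purely as an illustration of that kind of feedback loop (this is a hypothetical sketch, not BOINC's actual scheduler code):

from collections import defaultdict

# Hypothetical illustration of a scheduler "learning" which app version
# works on a host: prefer the version with the best observed success rate.
# This is NOT BOINC's actual selection logic.
stats = defaultdict(lambda: {"ok": 0, "fail": 0})  # (host, version) -> counts

def record(host, version, success):
    stats[(host, version)]["ok" if success else "fail"] += 1

def pick_version(host, candidates):
    def score(version):
        s = stats[(host, version)]
        # Start optimistic so untried versions still get handed out.
        return (s["ok"] + 1) / (s["ok"] + s["fail"] + 2)
    return max(candidates, key=score)

# An XP host immediately fails the regular Windows exe a few times...
for _ in range(3):
    record("xp-host", "win_generic", success=False)
record("xp-host", "win_xp", success=True)
# ...after which the XP-compatible exe scores higher and gets selected.
print(pick_version("xp-host", ["win_generic", "win_xp"]))  # -> win_xp
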

New Separation Runs

Hi everyone,
I put up some new separation runs so that we wouldn't be wasting GPU cycles while I am analysing the results from the last runs. The names of these runs are:
- de_modfit_80_bundle4_4s_south4s_1
- de_modfit_81_bundle4_4s_south4s_1
- de_modfit_82_bundle4_4s_south4s_1
- de_modfit_83_bundle4_4s_south4s_1
- de_modfit_84_bundle4_4s_south4s_1
- de_modfit_85_bundle4_4s_south4s_1
- de_modfit_86_bundle4_4s_south4s_1
Note that these names are very similar to those of the previous runs, distinguished only by a "1" at the end instead of a "0". These runs have slightly different parameters, so it will be interesting to see whether or not they result in the same problems we were having at the end of the last runs. If you have any issues with these runs, please don't hesitate to post about them here. There are lots of great people on these forums who try their hardest to help solve these problems. Thank you all for your continued support!
- Tom

By milkyway@home at 2019-05-31

Article on SETI@home's 20th anniversary

An [Link] article by Ben Lindbergh in The Ringer discusses the history and status of SETI@home as it turns 20.

By seti@home at 2019-05-25

base R951 proven / Megaprime

On the 4th of May, whizbang, a member of the team Ars Technica, found the last prime for base R951. The prime 38*875^256892-1 has 1,107,391 digits and entered the TOP5000 in Chris Caldwell's The Largest Known Primes Database.

By srbase at 2019-05-25
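
The digit counts reported for primes of the form k*b^n-1 follow directly from base-10 logarithms; a small sketch (the helper function is ours, not SRBase's):

import math

def decimal_digits(k, b, n):
    """Decimal digit count of k * b**n - 1 (assuming k * b**n is not a power of 10)."""
    return math.floor(math.log10(k) + n * math.log10(b)) + 1

# Small verifiable example: 3 * 2**10 - 1 = 3071, which has 4 digits.
assert decimal_digits(3, 2, 10) == 4
# The same function gives the digit count for any announced k*b^n-1 prime.
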


Please note that all data is provided as-is and comes from the Gridcoin project blockchain. Estimates may be incorrect.
If you enjoy this service, please consider voting for our Steemit Witness @sc-steemit.

Issues with the page should be submitted in our GitHub repo. You can contact @startail directly on our Gridcoin Chat.