GridcoinStats uses the popular blockchain-based blogging platform Steemit to relay news and updates. To read the full article, comment, and vote, browse to it by clicking on the title.

Here you can read our latest headlines about the page, as well as the latest headlines in the "gridcoin" category. News feeds from all currently whitelisted projects are also collected on this page, sourced from their respective forums.
Posts from the community made in sub-categories for a Gridcoin whitelisted project. Read more project-specific news on each project's page in the Projects section.
Headlines from projects on the whitelist. Read more by clicking the link; commenting is usually possible on the respective project's forum. More headlines are available on each project's details page, reached by clicking the project name below the article.
Acemd apps should be fixed

We are aware of the problem and working on it. Sorry for the inconvenience.

By gpugrid at 2019-08-10

License expired for Windows

We are aware of the problem and working on it. Sorry for the inconvenience.

By gpugrid at 2019-08-10

Entering Final Phase of Subfield 4

The final batch of subfield 4 is starting to rear its ugly head. This beast has 1.6M work units averaging about 2.5 hours apiece (on a 4 GHz CPU).
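As a rough sense of scale, the quoted figures imply a substantial amount of total compute. A back-of-the-envelope estimate (using only the numbers stated above):

```python
# Back-of-the-envelope estimate of the compute in the final
# subfield 4 batch, using the figures quoted in the post above.
work_units = 1_600_000     # work units in the batch
hours_per_unit = 2.5       # average runtime on a 4 GHz CPU core

total_cpu_hours = work_units * hours_per_unit
total_cpu_years = total_cpu_hours / (24 * 365)

print(f"{total_cpu_hours:,.0f} CPU-hours ≈ {total_cpu_years:,.0f} CPU-years")
# → 4,000,000 CPU-hours ≈ 457 CPU-years
```

That total is spread across the whole volunteer pool, so wall-clock time depends on how many cores are attached to the project.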

By numberfields@home at 2019-08-07

new GPU app versions

I was noticing a slowdown with the GPU app versions on the 15x271 data set. I found the problem - a large fraction of the discriminants were exceeding the hard-coded precision. When this happens the GPU kicks it back to the CPU to handle. As a result, the GPU spent more time idling as it waited on the CPU to finish the task. A side effect of this was that the WU would also use almost an entire CPU core. I increased the hard-coded precision and tested on the troublesome data sets as well as the newest 16x271 data set. The issue seems to be fixed, but please report any unexpected behavior.
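The fallback behavior described above can be sketched roughly as follows. This is an illustrative sketch only, not the actual NumberFields code; the function names and the precision limit are hypothetical:

```python
# Illustrative sketch of a GPU/CPU fallback on precision, loosely modelled
# on the behaviour described in the post above. Names and the precision
# limit are hypothetical, not from the real NumberFields application.
GPU_PRECISION_BITS = 128   # hard-coded precision limit (hypothetical value)

def required_bits(discriminant: int) -> int:
    """Bits needed to represent the discriminant exactly."""
    return discriminant.bit_length()

def process(discriminant: int) -> str:
    # If the discriminant fits within the GPU's hard-coded precision, the
    # GPU handles it; otherwise the work is kicked back to a CPU routine,
    # leaving the GPU idle while the (much slower) CPU path finishes.
    if required_bits(discriminant) <= GPU_PRECISION_BITS:
        return "gpu"
    return "cpu"

print(process(2**100))   # fits in 128 bits: handled on the GPU
print(process(2**200))   # exceeds the limit: falls back to the CPU
```

Raising the hard-coded limit, as described in the post, widens the range of discriminants the fast GPU path can keep.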

By numberfields@home at 2019-08-07

New Expanded Runs for Milkyway_nbody (08/06/2019)

Hello all, I've just placed two new runs up onto MilkyWay@home:
- de_nbody_08_06_2019_v176_40k__dataExpand__1
- de_nbody_08_06_2019_v176_40k__dataExpand__2
These runs are going to search over a larger phase space; however, due to the technological limitations of simulating incredibly dense dwarf galaxies, a significant portion of this phase space will be skipped. This limitation has applied to previous runs using the current version of Milkyway_nbody, but the larger phase space will make it more apparent. We set up our lua file such that runs with abnormally long runtimes will only take a few seconds to complete and return the worst-case likelihood score. As credits are calculated dynamically for Nbody, it may take a few runs for Milkyway@home to assign the proper amount of credits. We plan on adding another expanded data run once de_nbody_07_10_2019_v176_40k__data__7 converges. Thank you for your support, Eric

By milkyway@home at 2019-08-06

New Separation Testing Runs

Hi Everyone, Looking at all of the previous runs (including those from previous projects), there are patterns of interesting and unexpected behavior in the optimization results. In order to better understand these results, I have planned a series of extensive testing runs. These tests will allow us to learn how the optimization routine responds to specific circumstances and inputs. I will make a comment attached to this post describing the tests & reasons for the tests in greater detail if anyone is interested. EVERYTHING IS WORKING AS INTENDED. These results are not entirely understood, but they are not incorrect. I don't expect to invalidate any results from this project, but I do plan on producing more, better results as I learn why the optimizer outputs what it does. I am rolling out the first 2 series of tests, and have released the runs on the server. The names of these runs are:
- de_modfit_14_bundle5_testing_4s3f_1
- de_modfit_14_bundle5_testing_4s3f_2
- de_modfit_14_bundle5_testing_4s3f_3
- de_modfit_14_bundle4_testing_3s4f_1
- de_modfit_14_bundle4_testing_3s4f_2
- de_modfit_14_bundle4_testing_3s4f_3
Please let me know if you experience any problems with these runs. Thank you all for your help with this project! Best, Tom

By milkyway@home at 2019-07-29

firewall black listing

It has come to my attention that some users have been blacklisted by the university firewall system. This usually means that the volunteer can't download tasks or even connect to the website. We are currently looking at ways to reduce the firewall restrictions so this will stop happening. In the meantime, the workaround is for me to ask the IT department to whitelist an IP address on an individual basis. If you or someone you know is experiencing connection problems, please let me know so I can have them added to the whitelist.

By numberfields@home at 2019-07-27

Results databases

After the last server update I have also prepared NFS-based folders with results databases. So [Link] here is the results page where you can download all processed data from our project.

By universe@home at 2019-07-26

New Separation Runs [UPDATE]

Hi Everyone, I just put some new Separation runs up on the server and took down the old ones. You may still see new workunits from old runs in your queues for a few days as those runs finish validating. The names of the new runs are:
- de_modfit_80_bundle4_4s_south4s_bgset
- de_modfit_81_bundle4_4s_south4s_bgset
- de_modfit_82_bundle4_4s_south4s_bgset
- de_modfit_83_bundle4_4s_south4s_bgset
- de_modfit_84_bundle4_4s_south4s_bgset
- de_modfit_85_bundle4_4s_south4s_bgset
- de_modfit_86_bundle4_4s_south4s_bgset
An error processing a flag in the parameter files has been fixed and updated runs have been released. These runs are confirmed to be returning results as of 10 PM on 7/24. I apologize for the inconvenience. The names of the new runs are:
- de_modfit_80_bundle4_4s_south4s_bgset_2
- de_modfit_81_bundle4_4s_south4s_bgset_2
- de_modfit_82_bundle4_4s_south4s_bgset_2
- de_modfit_83_bundle4_4s_south4s_bgset_2
- de_modfit_84_bundle4_4s_south4s_bgset_2
- de_modfit_85_bundle4_4s_south4s_bgset_2
- de_modfit_86_bundle4_4s_south4s_bgset_2
As these runs optimize we may see increased invalidated returns with the stripe 84 and 85 runs. This is a known issue that I believe has something to do with a data cut made in those stripes. If/when this starts happening and the stripes are sufficiently optimized, I will take them down so you all don't have to worry about crunching invalidated WUs. If you have any questions/comments/concerns, please feel free to post them here. Thanks for all your support! Best, Tom

By milkyway@home at 2019-07-24

Nbody Data Run 4 Replaced

Hey all, Something happened with de_nbody_07_10_2019_v176_40k__data__4 that caused it to end prematurely. As such, a new run has been implemented to replace it, titled de_nbody_07_10_2019_v176_40k__data__7. We apologize for any inconvenience. -Eric

By milkyway@home at 2019-07-23

2019 SETI.Germany Wow! anniversary event: 15-29 August

Every year [Link] SETI.Germany organizes an event in honor of the anniversary of the Wow! signal. This year's event (the 42nd anniversary) takes place from 15 to 29 August. Further information can be found [Link] here.

By seti@home at 2019-07-22

CMS@Home disruption, Monday 22nd July

I've had the following notice from CERN/CMS IT:
>> following the hypervisor reboot campaign, as announced by CERN IT here:
>> the following VMs - under the CMS Production openstack project - will be rebooted on Monday July 22 (starting at 8:30am CERN time): ...
>> | vocms0267 | cern-geneva-b | cms-home
to which I replied:
> Thanks, Alan. vocms0267 runs the CMS@Home campaign. Should I warn the volunteers of the disruption, or will it be mainly transparent?
and received this reply:
Running jobs will fail because they won't be able to connect to the schedd condor_shadow process. So this will be the visible impact on the users. There will also be a short time window (until I get the agent restarted) where there will be no jobs pending in the condor pool. So it might be worth giving the users a heads-up.
So, my recommendation is that you set "No New Tasks" for CMS@Home sometime Sunday afternoon, to let tasks complete before the 0830 CST restart. I'll let you know as soon as Alan informs me that vocms0267 is up and running again.

By lhc@home_classic at 2019-07-17

Please note that all data is as-is and comes from the Gridcoin Project blockchain. Estimations may be incorrect.
If you enjoy this service, please consider voting for our Steemit Witness @sc-steemit.

Issues with the page should be submitted via our GitHub repo. You can contact @startail directly on our Gridcoin Chat.