GridcoinStats uses the popular blockchain-based blogging platform Steemit to relay news and updates. To read the full article, comment, and vote, please browse to it by clicking on the title.
Here you can read our latest headlines about the site, as well as the latest headlines in the "gridcoin" category, both from Steemit.com.
News feeds for all currently whitelisted projects are also collected on this page, drawn from their respective forums.
Posts from the community made in sub-categories for a Gridcoin-whitelisted project. Read more project-specific news on each project's page in the Projects section.
Headlines from projects on the whitelist. Read more by clicking the link; commenting is usually possible on the respective project's forum. More headlines are available on each project's details page by clicking on the project name below the article.
New GPU app versions
I was noticing a slowdown with the GPU app versions on the 15x271 data set. I found the problem: a large fraction of the discriminants were exceeding the hard-coded precision. When this happens, the GPU kicks the task back to the CPU to handle. As a result, the GPU spent more time idling as it waited on the CPU to finish the task. A side effect was that the WU would also use almost an entire CPU core.
I increased the hard-coded precision and tested on the troublesome data sets as well as the newest 16x271 data set. The issue seems to be fixed, but please report any unexpected behavior.
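The dispatch behavior described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the precision limit, the bit-counting check, and the function names are all assumptions made for the example.

```python
# Hypothetical precision bound; the app's real hard-coded limit is not
# stated in the post, so this value is illustrative only.
GPU_PRECISION_BITS = 64

def bits_needed(n: int) -> int:
    """Number of bits required to represent |n| exactly."""
    return abs(n).bit_length()

def dispatch_discriminant(d: int) -> str:
    """Sketch of the fallback logic: discriminants that fit within the
    GPU's hard-coded precision stay on the GPU; larger ones are kicked
    back to the CPU, leaving the GPU idle while one CPU core works."""
    if bits_needed(d) <= GPU_PRECISION_BITS:
        return "gpu"   # fast path
    return "cpu"       # slow fallback described in the post

# Raising GPU_PRECISION_BITS (as the fix did) keeps more discriminants
# on the "gpu" path, so the GPU idles less.
```

The fix amounts to raising the bound so that far fewer work items take the slow branch.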
New Expanded Runs for Milkyway_nbody (08/06/2019)
I've just placed two new runs up onto MilkyWay@home:
These runs will search over a larger phase space; however, due to the technological limitations of simulating incredibly dense dwarf galaxies, a significant portion of this phase space will be skipped. This limitation was also present in previous runs using the current version of Milkyway_nbody, but the larger phase space will make it more apparent. We set up our lua file such that runs with abnormally long runtimes will take only a few seconds to complete and return the worst-case likelihood score. As credits are calculated dynamically for Nbody, it may take a few runs for Milkyway@home to assign the proper amount of credit. We plan on adding another expanded data run once de_nbody_07_10_2019_v176_40k__data__7 converges.
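The early-exit guard described above can be sketched like this. It is a hedged illustration in Python rather than the project's actual lua file; the cutoff, the sentinel score, and every name here are assumptions, not MilkyWay@home's real settings.

```python
# Illustrative sentinel returned instead of running an over-dense simulation.
WORST_CASE_LIKELIHOOD = -9_999_999.0
# Assumed threshold marking an "abnormally long" run; the real cutoff
# lives in the project's lua configuration and is not given in the post.
MAX_TIMESTEPS = 40_000

def score_parameters(estimated_timesteps: int) -> float:
    """Runs that fall in the skipped (too-dense) region of phase space
    return the worst-case likelihood almost immediately, instead of
    simulating for an impractically long time."""
    if estimated_timesteps > MAX_TIMESTEPS:
        return WORST_CASE_LIKELIHOOD  # finishes in seconds, worst score
    # Placeholder for the real N-body simulation + likelihood evaluation:
    return -0.001 * estimated_timesteps
```

The effect is that the optimizer still receives a (maximally bad) score for those parameter sets, steering it away from the skipped region without wasting volunteer compute time.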
Thank you for your support,
New Separation Testing Runs
Looking at all of the previous runs (including those from previous projects), there are patterns of interesting and unexpected behavior in the optimization results. In order to better understand these results, I have planned a series of extensive testing runs. These tests will allow us to learn how the optimization routine responds to specific circumstances and inputs. I will make a comment attached to this post describing the tests and the reasons for them in greater detail if anyone is interested.
EVERYTHING IS WORKING AS INTENDED. These results are not incorrect, merely not entirely understood. I don't expect to invalidate any results from this project, but I do plan on producing more and better results as I learn why the optimizer outputs what it does.
I am rolling out the first two series of tests and have released the runs on the server.
The names of these runs are:
Please let me know if you experience any problems with these runs. Thank you all for your help with this project!
Firewall blacklisting
It has come to my attention that some users have been blacklisted by the university firewall system. This usually means that the volunteer can't download tasks or even connect to the website.
We are currently looking at ways to reduce the firewall restrictions so this stops happening. In the meantime, the workaround is for me to ask the IT department to whitelist an IP address on an individual basis.
If you or someone you know is experiencing connection problems, please let me know so I can have them added to the whitelist.
After the last server update, I have also prepared NFS-based folders with results databases.
So, [Link] here is the results page where you can download all processed data from our project.
New Separation Runs [UPDATE]
I just put some new Separation runs up on the server and took down the old ones. You may still see new workunits from old runs in your queues for a few days as those runs finish validating.
The names of the new runs are:
An error processing a flag in the parameter files has been fixed and updated runs have been released. These runs are confirmed to be returning results as of 10 PM on 7/24. I apologize for the inconvenience.
The names of the new runs are:
As these runs optimize we may see increased invalidated returns with the stripe 84 and 85 runs. This is a known issue that I believe has something to do with a data cut made in those stripes. If/when this starts happening and the stripes are sufficiently optimized, I will take them down so you all don't have to worry about crunching invalidated WUs.
If you have any questions/comments/concerns, please feel free to post them here. Thanks for all your support!
Nbody Data Run 4 Replaced
Something happened with de_nbody_07_10_2019_v176_40k__data__4 that caused it to end prematurely. As such, a new run, titled de_nbody_07_10_2019_v176_40k__data__7, has been implemented to replace it.
We apologize for any inconvenience.
CMS@Home disruption, Monday 22nd July
I've had the following notice from CERN/CMS IT:
>> following the hypervisor reboot campaign, as announced by CERN IT here: https://cern.service-now.com/service-portal/view-outage.do?n=OTG0051185
>> the following VMs - under the CMS Production openstack project - will be rebooted on Monday July 22 (starting at 8:30am CERN time):
>> | vocms0267 | cern-geneva-b | cms-home
to which I replied:
> Thanks, Alan. vocms0267 runs the CMS@Home campaign. Should I warn the volunteers of the disruption, or will it be mainly transparent?
and received this reply:
Running jobs will fail because they won't be able to connect to the schedd condor_shadow process. So this will be the visible impact on the users. There will be also a short time window (until I get the agent restarted) where there will be no jobs pending in the condor pool.
So it might be worth giving the users a heads-up.
So, my recommendation is that you set "No New Tasks" for CMS@Home sometime Sunday afternoon, to let tasks complete before the 08:30 CERN-time restart on Monday. I'll let you know as soon as Alan informs me that vocms0267 is up and running again.