GridcoinStats uses the popular blockchain-based blogging platform Steemit to relay news and updates. To read a full article, comment, or vote, browse to it by clicking on its title.
Here you can read our latest headlines about this site, as well as the latest headlines in the "gridcoin" category, both from Steemit.com.
The news feeds of all currently whitelisted projects are also collected on this page, sourced from their respective forums.
Posts from the community made in sub-categories for a Gridcoin-whitelisted project. Read more project-specific news on each project's page in the Projects Section.
Headlines from projects on the whitelist. Read more by clicking the link; commenting is usually possible on the respective project's forum. Find more headlines on each project's details page by clicking the project name below the article.
Another paper on GW170817
Sorry for the late notification!
Another study of the famous GW170817 source, which used data obtained from the Universe@Home project, was published last year. It showed that the double neutron star merger rates calculated from this one direct observation contradict those inferred from observations of double pulsars in our Galaxy. The reason behind this tension is a delay-time distribution that favours short delays. The study highlights the problem scientists encounter when trying to connect evolutionary predictions with the observations of the first double neutron star merger.
The original paper can be found here:
Problem writing CMS job results; please avoid CMS tasks until we find the reason
Since some time last night, CMS jobs appear to have problems writing results to CERN storage (DataBridge). As far as I can see it's not affecting BOINC tasks; they keep running and credit is given. However, Dashboard does see the jobs as failing, hence the large red areas on the job plots.
Until we find out where the problem lies, it's best to set No New Tasks or otherwise avoid CMS jobs. I'll let you know when things are back to normal again.
By lhc@home classic
Long Outage Today
We had to recover the master database on oscar from a backup taken today on carolyn. Oscar is now back to being the master DB and carolyn is once again the replica DB. Things will be a bit slow as the database becomes resident in memory.
GPU app status update
So there have been some new developments over the last week. It's both good and bad.
First of all, some history. The reason I waited so long to develop a GPU app is that the calculation was heavily dependent on multi-precision libraries (GMP) and number-theoretic libraries (PARI/GP). Both of these use dynamically allocated memory, which is a big no-no on GPUs. I found a multi-precision library online that I could use by hard-coding the precision to the maximum required (about 750 bits), thereby removing the dependence on memory allocations. The next piece of the puzzle was to code up a polynomial discriminant function. After doing that, I could finally compile a kernel for the GPU. That is the history of the current GPU app. It is about 20 to 30 times faster than the current cpu version (depending on the WU and cpu/gpu speeds).
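As an aside for the technically curious, here is a minimal CUDA-flavored sketch of the fixed-width trick described above. The post does not say which GPU framework or library the app actually uses, and the names Fix768, add and addKernel are illustrative, not the project's code; the point is simply that all storage is a fixed array of limbs, so nothing inside the kernel ever calls an allocator.

    // Fixed 768-bit unsigned integer: 12 x 64-bit limbs, little-endian.
    // The size is hard-coded, so no dynamic allocation is needed on the GPU;
    // the post mentions roughly 750 bits, which 12 x 64 = 768 bits covers.
    struct Fix768 { unsigned long long limb[12]; };

    __device__ Fix768 add(const Fix768 &a, const Fix768 &b) {
        Fix768 r;
        unsigned long long carry = 0;
        for (int i = 0; i < 12; ++i) {
            unsigned long long s = a.limb[i] + carry;
            carry = (s < carry);               // carry out of (a + carry-in)
            r.limb[i] = s + b.limb[i];
            carry += (r.limb[i] < s);          // carry out of (s + b)
        }
        return r;                              // overflow past 768 bits wraps
    }

    __global__ void addKernel(const Fix768 *a, const Fix768 *b, Fix768 *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = add(a[i], b[i]);   // one fixed-width add per thread
    }

Because every value has the same fixed size, the compiler can keep limbs in registers and no thread ever waits on an allocator, which is the trade-off that makes hard-coding the precision worthwhile.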
But then I got thinking... my GPU polynomial discriminant algorithm is different from the one in the PARI library (theirs works for any degree and mine is specialized to degree 10). So to do a true apples-to-apples comparison, I replaced the PARI algorithm with mine in the cpu version of the code. I was shocked by what I found... the cpu version was now about 10x faster than it used to be. I never thought I was capable of writing an algorithm that would be 10x faster than a well established library function. WTF? Now I'm kicking myself in the butt for not having done this sooner!
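For readers who want to see what the generic (any-degree) computation looks like, here is a hedged sketch: the discriminant of a degree-n polynomial f is, up to sign and the leading coefficient, the resultant of f and its derivative, disc(f) = (-1)^(n(n-1)/2) * Res(f, f') / a_n, and the resultant can be computed exactly as the determinant of the Sylvester matrix using fraction-free (Bareiss) elimination. This is the textbook construction, not my specialized degree-10 algorithm and not PARI's implementation, and real degree-10 inputs would need multiprecision coefficients (e.g. GMP) where this sketch uses long long.

    #include <cstdio>
    #include <vector>
    #include <utility>

    // Determinant of an integer matrix via the Bareiss algorithm.
    // All divisions are exact, so the arithmetic stays in integers.
    long long bareissDet(std::vector<std::vector<long long>> M) {
        int n = (int)M.size();
        long long sign = 1, prev = 1;
        for (int k = 0; k < n - 1; ++k) {
            if (M[k][k] == 0) {                     // find a nonzero pivot below
                int p = k + 1;
                while (p < n && M[p][k] == 0) ++p;
                if (p == n) return 0;               // singular matrix
                std::swap(M[k], M[p]);
                sign = -sign;
            }
            for (int i = k + 1; i < n; ++i) {
                for (int j = k + 1; j < n; ++j)
                    M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) / prev;
                M[i][k] = 0;
            }
            prev = M[k][k];
        }
        return sign * M[n - 1][n - 1];
    }

    // disc(f) = (-1)^(n(n-1)/2) * Res(f, f') / a_n, where Res(f, f') is the
    // determinant of the (2n-1) x (2n-1) Sylvester matrix of f and f'.
    long long discriminant(const std::vector<long long> &f) {  // f[i]: coeff of x^i
        int n = (int)f.size() - 1;                  // degree of f
        std::vector<long long> d(n);                // derivative f'
        for (int i = 1; i <= n; ++i) d[i - 1] = i * f[i];
        int N = 2 * n - 1;
        std::vector<std::vector<long long>> S(N, std::vector<long long>(N, 0));
        for (int r = 0; r < n - 1; ++r)             // n-1 shifted rows of f
            for (int i = 0; i <= n; ++i) S[r][r + i] = f[n - i];
        for (int r = 0; r < n; ++r)                 // n shifted rows of f'
            for (int i = 0; i < n; ++i) S[n - 1 + r][r + i] = d[n - 1 - i];
        long long sgn = ((n * (n - 1) / 2) % 2) ? -1 : 1;
        return sgn * bareissDet(S) / f[n];
    }

    int main() {
        std::vector<long long> f = {6, -5, 1};      // x^2 - 5x + 6 = (x-2)(x-3)
        printf("disc = %lld\n", discriminant(f));   // prints 1, matching b^2-4ac
    }

The appeal of specializing to a single degree is that for degree 10 the whole 19 x 19 elimination can be unrolled and simplified ahead of time instead of handled generically, though I won't go into the details of where the speedup comes from here.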
This brings mixed emotions. On the one hand, it is great that I now have a cpu version that is 10x faster. But it also means that my GPU code is total crap. Wit...
This time the primary database machine crashed and hasn't automatically recovered. We've fallen back to the replica machine, and the only symptom should be a few extra hours of outage.
I'm glad we have the replica.
Option to delete your account is now enabled
As supported by the current BOINC server software, I have enabled the option to delete your NFS@Home account if you wish. Please note that if you choose to do so, it is permanent. Deleted accounts are removed from the database and cannot be recovered.
It's been quite a while since the last news post, but work has been continuing. On the status pages you can follow the many completed factorizations. Also, yesterday I updated the BOINC server code to the latest version.