BOINC & Grid Computing

As of 10/16/2014 I have a bit over 2.9 million credits on Berkeley's BOINC computing platform, which puts me in the top 2% of users (98.329th percentile rank).

All but roughly 200k of that credit has been earned since January. The earlier credit dates back to when I ran SETI@HOME on my Amiga 1200 many years ago; I haven't had a running Amiga since the early 2000s.

The neat part is that I am now in the top 2% of grid computing users in the world after only 10 months. The sad part is that grid computing must be in a sorry state if I can reach the top 2% in less than a year, with just three older Macs (more below), when the BOINC project has been running for over 10 years.

I realize there are multiple components to this, and I will try to talk about each of the ones I am aware of:

  1. Competition for Grid Computing Project Resources and the tactics projects use to get the computing they need.
  2. Lack of Interest and / or Declining Host Base
  3. Energy / Cost Savings.
  4. Inconsistent client and project ability to fully utilize system resources.

Competition

On the competition for resources by the different projects: this has been a bit of a frustration for me, as I believe there are a lot of good projects out there. Over the last year, even with the limited number of projects I run, I have seen three different methods employed to utilize my system resources. One in particular heavily wasted my computing resources, so I now keep it very much in check, only occasionally giving it computing power.

The Mega-Payload: most payloads from ClimatePrediction.net fall into this category; I have seen some come down requiring 500 hours of computing. SETI@HOME also has its AstroPulse payloads, but those are generally in the 50-100 hour range. These mega-payloads, even with turnaround times as long as 365 days, are still difficult for a "spare cycles" BOINC user to complete, as the rough math below shows.
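To put that in perspective, here is a quick back-of-the-envelope calculation; the two hours per day of spare cycles is purely my assumption about a casual user, not a figure from any project:

    # Back-of-the-envelope: can a casual user finish a mega-payload in time?
    payload_hours = 500        # e.g. a large ClimatePrediction.net work unit
    spare_hours_per_day = 2    # assumption: the PC idle-crunches ~2 h/day
    days_needed = payload_hours / spare_hours_per_day
    print(days_needed)         # 250.0 -- uncomfortably close to a 365-day deadline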

The Small Payload: This seems to be the main strategy of Rosetta and World Community Grid, but World Community Grid (WCG) takes deadline management to an extreme, sometimes allowing less than 2-6 hours to turn around a payload that has an estimated 3-4 hours of computing. Last I checked, WCG's work is not real-time life-saving; it runs on schedules similar to Rosetta and many other projects, so its deadlines make no sense. Unfortunately, the BOINC client does not say "I am not going to make that" and abandon the payload; it processes it anyway and submits it. The user then potentially isn't contributing anything for their CPU cycles, and perhaps gets discouraged.

More annoying, the payload you downloaded a couple of minutes ago gets "priority status" over all of the other work you have queued up, and if WCG repeatedly issues these short-deadline payloads, your other work will never get processed while you churn through data that may be rejected or given no credit. A sketch of the feasibility check I wish the client performed follows below.
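For what it is worth, the check I have in mind is simple. Here is a minimal sketch; the function and its inputs are hypothetical, not part of the real BOINC client:

    from datetime import datetime, timedelta

    def can_meet_deadline(est_hours, deadline, crunch_hours_per_day, queued_hours=0.0):
        """Hypothetical test: would this payload finish before its deadline?"""
        days_left = (deadline - datetime.now()).total_seconds() / 86400.0
        hours_available = days_left * crunch_hours_per_day
        # Work already queued ahead of this payload eats into the budget.
        return hours_available >= queued_hours + est_hours

    # A WCG-style payload: ~4 h of computing, 6 h deadline, host crunching 8 h/day.
    deadline = datetime.now() + timedelta(hours=6)
    print(can_meet_deadline(4.0, deadline, crunch_hours_per_day=8))  # False -> abandon it

A client that refused or abandoned infeasible payloads would keep short-deadline work from starving the rest of the queue.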

"Let it ride" mixed payloads, moderate deadlines: This seems to be the general strategy of Rosetta and SETI@HOME. Most payloads are in the 3-4 hour range and allow at least 7-30 days to complete processing. This makes sense for anyone who doesn't leave their computers running just for BOINC.

So when WCG uses its ridiculously short deadlines, it is effectively crowding out other projects, and those projects compensate with techniques of their own: mega-payloads, or WCG's other tactic of shipping its own client and discouraging the use of BOINC.

I think all projects deserve a reasonable chance, and I would like to see something in the BOINC system for managing and regulating payloads to bring some sort of balance.

Note: On missed deadlines, I believe World Community Grid and ClimatePrediction.net will reject the work / give no credit for missing the deadline. Rosetta and SETI, I believe, have a small threshold for lateness within which they will still give credit.

Lack of Interest

Something has changed; maybe it is busier lifestyles, mobile computing, the shift from desktop computers to tablets and phones, or just the general public's mindset about computing. Back in the day (early 2000s) all of my friends owned multiple computers, and many, when not gaming on them, would run BOINC or dedicated SETI@HOME clients. Now none do, even though every single one of them has just as many desktop, notebook, and even server computers as they did 10+ years ago, if not more, plus mobile devices. Is it lack of incentive, priorities, or desire to contribute to the community good? Or could it be a lack of reward and credit for the time they are willing to contribute? I can't explain this one, especially in the age of crowdsourcing everything... perhaps it is just not chic.

One thing I think has real potential is corporate PC processing power. During the day those machines may be busy, but at night many corporations tell employees to leave PCs on for patching, updates, etcetera. A corporate-administered BOINC server and client administration tool for large numbers of PCs, downloading payloads and distributing them to machines running all night, would be a good boost for grid computing, and I would even argue corporations should get a tax credit or some other incentive for the work units and BOINC-style credits they contribute each year. A rough sketch of the client-side scheduling follows below.
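As a minimal sketch of the client-side half of that idea, assuming the stock boinccmd command-line tool is installed on each machine and remote control is enabled (the host names are made up, and authentication is omitted for brevity):

    import subprocess

    HOSTS = ["pc-001", "pc-002", "pc-003"]   # hypothetical fleet inventory

    def set_run_mode(host, mode):
        # boinccmd can control a remote client via --host; "always" starts
        # crunching and "never" stops it.
        subprocess.run(["boinccmd", "--host", host, "--set_run_mode", mode],
                       check=True)

    def night_shift(start):
        """Turn the whole fleet's crunching on at night, off in the morning."""
        for host in HOSTS:
            set_run_mode(host, "always" if start else "never")

    night_shift(start=True)   # run at ~19:00 via cron; night_shift(False) at ~07:00

The scheduling itself could live in cron; the sketch only covers flipping crunching on and off across the fleet.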

Energy Conservation

BOINC and SETI@HOME (both at Berkeley) have, I believe, always said not to set up computers and leave them running overnight or around the clock just to support this computing. That advice runs contrary to the corporate idea above, and I can understand why: even across a small office's PC base the extra power could add up quickly, depending on the configuration of the machines. Depending on how a project is coded, it may max out the CPU, the graphics card, and the fans on a system while processing BOINC work, significantly increasing power usage over normal day-to-day activity.

The 8-core 2008 Mac Pro described below draws, according to a Kill A Watt meter, about 30 kWh over 3 days running BOINC projects full time, 24/7, on its 980 watt power supply. My 2012 i7 Mac Mini and my early 2013 i7 Retina MacBook Pro each individually crank out just as many credits as that Mac Pro, with only an 85 watt power supply apiece. While I don't want to retire my perfectly working Mac Pro (plus it has tons of extras), its power draw adds up, in a little over a year, to roughly the price of the newly released entry-level 2014 i5 Mac Mini in electricity costs, for about the same processing power.
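The arithmetic behind that claim, with an assumed electricity rate of $0.13/kWh (rates vary by region; the ~$499 figure is the 2014 entry-level Mac Mini's launch price):

    # Annual electricity cost of the 2008 Mac Pro crunching 24/7.
    kwh_per_day = 30 / 3            # 30 kWh measured over 3 days -> 10 kWh/day
    annual_kwh = kwh_per_day * 365  # ~3,650 kWh/year
    print(annual_kwh * 0.13)        # ~$475/year, vs. ~$499 for a 2014 entry Mac Mini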

The Daily Update of my stats:

 

Geekbench 3

    ID      Name                                      Processor             Freq (MHz)  Cores  Platform  Arch    Single-core  Multi-core
    607969  MacBook Pro (15-inch Retina Early 2013)   Intel Core i7-3635QM  2400        4      Mac OS X  x86-64  3214         12362
    508671  Mac Pro (Early 2008)                      Intel Xeon X5482      3200        8      Mac OS X  x86-64  1881         13079
    472836  Mac Pro (Early 2008)                      Intel Xeon E5462      2800        4      Mac OS X  x86-64  1669         5677
    464822  Mac Pro (Early 2008)                      Intel Xeon E5462      2800        4      Mac OS X  x86-64  1614         4525
    464817  MacBook Pro (15-inch Early 2010)          Intel Core i5-520M    2400        2      Mac OS X  x86-64  2077         4179
    464807  Mac mini (Late 2012)                      Intel Core i7-3615QM  2300        4      Mac OS X  x86-64  3077         11988

 
