Well, if someone's login or database credentials are exposed because of this, then they deserve to get hacked. There are a couple of reasons:
1- The crawler can only find files that are exposed through links or through a directory listing (which the server generates when there is no 'index' page in the directory). Apparently people are linking to their backups, or the crawlers are picking them up from those listings. If the crawler can find it, so could any visitor to the site.
2- You don't put sensitive, unencrypted backups in the web root! That sounds about as stupid as if Galactic were to post the root password to the machine on the forums here. (One way to hand out a backup without leaving it in the web root is sketched just below this list.)
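If you really do need a backup to be downloadable, one approach is to keep the dump outside the document root and serve it through a script that checks credentials first. Here's a minimal sketch in PHP (the /var/backups path, the BACKUP_USER / BACKUP_PASS environment variables and the filename are made-up examples, not anything Galactic or any real site actually uses):

    <?php
    // download_backup.php: sketch only; the path and credential names below
    // are hypothetical examples, not real configuration.
    $user = isset($_SERVER['PHP_AUTH_USER']) ? $_SERVER['PHP_AUTH_USER'] : '';
    $pass = isset($_SERVER['PHP_AUTH_PW'])   ? $_SERVER['PHP_AUTH_PW']   : '';

    // Require HTTP Basic auth before handing anything over.
    if ($user === '' || $user !== getenv('BACKUP_USER') || $pass !== getenv('BACKUP_PASS')) {
        header('WWW-Authenticate: Basic realm="Backups"');
        header('HTTP/1.0 401 Unauthorized');
        exit('Authentication required.');
    }

    // The dump lives OUTSIDE the web root, so no crawler or visitor can ever
    // fetch it directly; it is only reachable through this authenticated script.
    $file = '/var/backups/site-db.sql.gz';
    header('Content-Type: application/x-gzip');
    header('Content-Disposition: attachment; filename="site-db.sql.gz"');
    readfile($file);

The point isn't this particular script; it's that the backup file itself has no URL, so there is nothing for a crawler (or a nosy visitor) to stumble over, linked or not.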
As for 'unparsed PHP files': the crawler can't do anything the web server won't allow. If the file is named .php (and nothing else prevents it from being handled by PHP), the server executes the script and sends only its output to the client; it will not hand over the raw script code.
In other words, Google can only crawl and index source code that is actually exposed in raw form. It can't go picking apart any site's arbitrary code (e.g. it couldn't start dissecting the PHP code we use here at OPU, since that code is never available in raw / source form; the only way it can access the PHP pages we use is after they've been rendered).
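To illustrate, here's a contrived example (not code from the actual OPU site or anywhere else). Say a site has this file at /hello.php:

    <?php
    // hello.php: contrived example, not code from any real site.
    $db_password = 'supersecret';   // exists only in the source
    $name = isset($_GET['name']) ? $_GET['name'] : 'world';
    echo 'Hello, ' . htmlspecialchars($name) . '!';

A browser, or Googlebot, requesting /hello.php?name=Bob gets back nothing but the rendered output, "Hello, Bob!". The $db_password line and the rest of the source never leave the server, as long as the file really is being handled by PHP.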
Bottom line is, people who have problems with this have no one to blame but themselves. Don't put raw DB dumps or backup files in a directory accessible to the web, and for gosh's sake, don't link to them so the robot can find them!
It's not a security problem with Google Code Search; it's a matter of ignorance / stupidity on the webmaster's or site developer's part if bad things happen to them because of it.