Yes, I also feel the whole prior probability thing is open to abuse.
But Bayesian stats also offers something really simple and useful that might help us get around some of the problems of null hypothesis testing:
Introducing: The Bayes Factor:
http://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=1167&context=jps
Here's how it works. You might be deciding which of two hypotheses is best supported by your data - maybe a null hypothesis and an alternative one (but it could be other pairings too). The Bayes Factor (BF) is a simple ratio: how likely your data are under one hypothesis, relative to the other. So, for example, you might calculate how likely your data are under the alternative hypothesis relative to the null. The bigger the Bayes Factor, the stronger the evidence for the alternative hypothesis. The smaller the Bayes Factor, the weaker it is. A Bayes Factor of 10,000 is pretty persuasive. One of 2.00 is not. The article linked above provides ways to calculate and interpret the values.
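Just to make the "simple ratio" concrete, here's a minimal sketch in Python using a made-up coin-flip example. The data (62 heads in 100 flips) and the two point hypotheses are my own illustrative assumptions, not from the linked article:

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of k successes in n trials, each with success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Made-up data: 62 heads in 100 coin flips.
k, n = 62, 100

# H0: fair coin (p = 0.5); H1: biased coin (p = 0.7).
likelihood_h0 = binom_pmf(k, n, 0.5)
likelihood_h1 = binom_pmf(k, n, 0.7)

# The Bayes Factor is just the ratio of the two likelihoods.
bf = likelihood_h1 / likelihood_h0
print(f"BF (H1 vs H0) = {bf:.2f}")  # values above 1 favour H1
```

With these numbers the BF comes out around 4 - the data lean towards the biased coin, but not overwhelmingly, which matches the "2.00 is not persuasive" end of the scale above.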
Doesn't seem much different to what we do now, right? But you can do more - you're not limited to testing against the null hypothesis. You can compare two alternative hypotheses. Based on Bayes Factors, you can even argue the null hypothesis is actually correct (whereas with conventional hypothesis testing, you can only reject H0 or fail to reject it - you can never accept the null).
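To see how a BF can actually favour the null, here's the same made-up coin example turned round: if the data land right on what H0 predicts, the ratio comes out well below 1. Again, the hypotheses and data are just illustrative assumptions:

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of k successes in n trials, each with success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Made-up data: exactly 50 heads in 100 flips - just what a fair coin predicts.
k, n = 50, 100

likelihood_h0 = binom_pmf(k, n, 0.5)   # H0: fair coin
likelihood_h1 = binom_pmf(k, n, 0.7)   # H1: biased coin

bf = likelihood_h1 / likelihood_h0
print(f"BF (H1 vs H0) = {bf:.6f}")  # far below 1: the data favour H0
```

Here the BF is tiny (well under 0.01), so you have positive evidence *for* the null - something a conventional p-value can never give you.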
This kind of approach might help get us out of the quagmire that null hypothesis testing has got us into: the pressure to find a difference - that is, to reject H0 - or else your study is meaningless (and unpublishable).