It's really never as simple as "accurate" vs. "not accurate", either.
False negatives and false positives don't always happen at the same rate -- in other words, for some tests, you can pretty much trust a positive, but you can't really trust a negative, or the other way 'round.
That's why test accuracy is usually divided into two separate measures: sensitivity, the ability of a test to pick up a true positive result, and specificity, the ability of a test to NOT flag a false positive. So a highly sensitive test has fewer false negatives, and a highly specific test has fewer false positives. Some tests are both highly specific and highly sensitive. Some are sensitive but not very specific, and some are specific but not very sensitive.
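To make that concrete, here's a small sketch of how the two measures are calculated from a batch of test results. The counts are made up purely for illustration; they aren't from any real study:

```python
# Hypothetical counts for illustration only -- not real data.
tp, fn = 90, 10   # true positives, false negatives (people who ARE infected)
tn, fp = 95, 5    # true negatives, false positives (people who are NOT)

# Sensitivity: of the people who really have it, how many does the test catch?
sensitivity = tp / (tp + fn)

# Specificity: of the people who really don't, how many does the test clear?
specificity = tn / (tn + fp)

print(f"sensitivity = {sensitivity:.2f}")  # 0.90
print(f"specificity = {specificity:.2f}")  # 0.95
```

Note that the two numbers come from completely separate groups of people, which is why a test can score high on one and low on the other.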
I bring all this up because I think that (1) if you're getting tested now, it's not necessarily useless -- but it's hard to interpret, because the early versions of tests aren't necessarily both highly sensitive and highly specific, and (2) specifically for you, I don't think it's very safe to assume that a negative result here means you don't have XMRV. You may not have XMRV. But it may be true, instead, that false negatives are common right now, and that as the testing is refined, it will become easier to interpret negative results with some confidence. We don't have that confidence right now.
It's not as easy as "if they're saying it's a worthwhile test, it must be accurate", is all I'm saying. My impression is that you can pretty much trust a positive result, but that a negative isn't necessarily trustworthy at this time. They'll be working to increase the sensitivity over time.
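Here's a rough sketch of why that pattern falls out of the arithmetic. The sensitivity, specificity, and prevalence numbers below are assumptions I picked to illustrate an early-stage test, not measured values for any real assay:

```python
# Hypothetical numbers only: a test with high specificity but modest
# sensitivity, in a group where half truly have the infection.
prevalence = 0.5
sensitivity = 0.60   # assumed: many true positives are missed
specificity = 0.99   # assumed: false positives are rare

# Positive predictive value: if you test positive, how likely is it real?
ppv = (sensitivity * prevalence) / (
    sensitivity * prevalence + (1 - specificity) * (1 - prevalence))

# Negative predictive value: if you test negative, how likely are you clear?
npv = (specificity * (1 - prevalence)) / (
    specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)

print(f"PPV = {ppv:.2f}")  # 0.98 -- a positive is very likely real
print(f"NPV = {npv:.2f}")  # 0.71 -- a negative still leaves real doubt
```

With these assumed numbers, a positive result is trustworthy but a negative one leaves roughly a 3-in-10 chance you have it anyway, which is exactly the "trust a positive, not a negative" situation.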