FDR correction is a way of reducing the chances of getting statistically significant findings by chance alone when you run many tests. If an analysis yields a p value below .05, this means that if the two samples being compared were in fact drawn from identical populations, a difference this large would turn up by chance less than 1 time in 20. You're still not certain (because you can never be certain), but you figure 1 in 20 is a reasonably conservative standard.
This is okay for a single test, but think about what happens when you do 20 comparisons. By the 1-in-20 reasoning, you'd expect about one of them to come out significant purely by chance, and the odds of getting at least one false positive are roughly 64%.
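A quick sketch of that arithmetic (an illustration added here, not part of the original argument), assuming the tests are independent:

```python
# Chance of at least one false positive across n independent tests,
# each run at the conventional alpha = .05.
def familywise_error(n_tests, alpha=0.05):
    """Probability of one or more false positives in n_tests independent tests."""
    return 1 - (1 - alpha) ** n_tests

print(round(familywise_error(1), 3))   # one test: just alpha itself, 0.05
print(round(familywise_error(20), 3))  # twenty tests: roughly 0.64
```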
So statisticians agree that we need to be more stringent as the number of comparisons grows.
The Bonferroni method is the traditional way of doing this. You simply divide the significance threshold (not the p value itself) by the number of comparisons you plan to do. So if you do 1 comparison, you stick with the .05 threshold. But if you do 2 comparisons (double the chance of a false positive), you halve that to .025. And if you do 20, the threshold for assessing each of these comparisons is .0025.
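The Bonferroni arithmetic is simple enough to show in a couple of lines (a minimal sketch, matching the numbers above):

```python
# Bonferroni correction: divide the significance threshold by the number
# of comparisons, then judge every individual p value against the result.
def bonferroni_threshold(alpha, n_comparisons):
    """Per-comparison threshold that keeps the overall false-positive rate at alpha."""
    return alpha / n_comparisons

print(bonferroni_threshold(0.05, 1))   # 1 comparison: .05 unchanged
print(bonferroni_threshold(0.05, 2))   # 2 comparisons: .025
print(bonferroni_threshold(0.05, 20))  # 20 comparisons: .0025
```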
But some researchers have argued, based on statistical reasoning and simulation studies, that the Bonferroni method may be way too stringent when you are dealing with masses of comparisons. Guarding against even a single false positive across thousands of tests means almost nothing survives, and when the tests are correlated (as neighbouring brain regions tend to be), the correction is more conservative than it needs to be. The FDR method is a less conservative way of correcting: instead of controlling the chance of any false positive at all, it controls the expected proportion of false positives among the findings you declare significant. It's the one most commonly used in fMRI studies, where you have thousands of comparisons.