No problem, happy to help. In a lot of cases, even direct methods can't reach 100%. Sometimes the problem definition, combined with ordinary noise in your input, means you can have examples with essentially the same input but different classes.
In the blur domain, for example, if one of your original "unblurred" images was already blurred (or just out of focus), it might be practically indistinguishable from a "blurred" image. Then the only way for the net to "learn" to solve that case is by overfitting to some unique value in that particular image.
A lot of machine learning is just making sure the nets are actually solving your problem rather than figuring out a way to cheat.
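A toy sketch of the point above (hypothetical data, not the blur task itself): if two training examples have literally identical inputs but opposite labels, any classifier, which is just a function of the input, must give them the same prediction, so it can get at most one of the two right.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend this feature vector is a "sharp" photo that happened to be
# shot slightly out of focus...
ambiguous = rng.random(64)

# ...so the dataset contains the same input twice with conflicting labels.
X = np.stack([ambiguous, ambiguous.copy()])
y = np.array([0, 1])  # 0 = "unblurred", 1 = "blurred"

def accuracy(predict):
    preds = np.array([predict(x) for x in X])
    return (preds == y).mean()

# Any deterministic predictor maps identical inputs to the same class,
# so it tops out at 50% on this pair no matter how clever it is.
always_sharp = lambda x: 0
always_blurred = lambda x: 1
print(accuracy(always_sharp), accuracy(always_blurred))
```

A real net can still hit 100% training accuracy on near-duplicates by latching onto some pixel-level quirk, which is exactly the "cheating" failure mode rather than a solution to the actual problem.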
Uh, don't have that meeting then? This work has to at least touch on controlled unclassified information, and the rules don't go away just because your new boss doesn't know them. Also, this is exactly the sort of thing people would do to test that the rules were being followed. Work to rule.