If a user manually navigates to the pages you specified, 404 errors are expected. If someone is "smart enough" to probe those pages directly, a blank page being returned should be fine as long as directory browsing is turned off. My guess is that bots or some other crawler is requesting those URLs because you have them referenced somewhere in the rendered HTML.
In your robots.txt, exclude both /CMSPages and /CMSTemplates.
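A minimal robots.txt sketch for that exclusion might look like this (the two paths come from the answer above; adjust them if your site serves these folders under a different root):

```
User-agent: *
Disallow: /CMSPages/
Disallow: /CMSTemplates/
```

Keep in mind that only well-behaved crawlers honor robots.txt; malicious bots ignore it, so keeping directory browsing disabled still matters.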