During my first pentest reports, it happened a few times that I suddenly realised I hadn’t saved some data that would have been good to include in the report. Usually this only costs you the time to reproduce the problem and gather the data again, but sometimes that is not possible: either you are no longer allowed to access the target system, or, in the case of a test environment, it might not even be running anymore. In that case you have to accept that your report will be worse than it could have been. To avoid these situations I tried to summarize for myself what is needed for a report and how all relevant data should be stored for a finding. Bear in mind that usually every penetration tester has their own way of keeping their data, so this here is just my way (or the highway..).

Although I have sometimes found the endless variety of note-taking programs helpful, in this case I don’t use them. There are two main reasons: first, I use different operating systems for different tests, so I would need a program that is portable. Second, I still have data stuck in a backup file of Basket, because I used it in BackTrack but then changed to GNOME and didn’t want to bother with its installation. Therefore I basically collect all the information in simple text files in a simple directory structure. One more sidenote: this is mostly relevant for webapp pentest findings, because for a network pentest you probably need some kind of database to keep everything. If I have time I will write about that as well.

So here it goes:

1. Scanner results

If scanners such as nikto or nmap were run, their results should be saved in separate text files in the project root folder.

Background information

There should be a background.txt that describes the application itself. It should not be too detailed, but it must give some overview of the application’s core functionality, the technologies it uses and its complexity (approximate number of static and dynamic pages, input fields, etc.).


There should be a tools.txt that contains the list of tools used during the test. This is just to keep track of exactly what was used, so it can be included later in the report.

2. Important data for findings


Create a separate directory for each finding. The directory name should start with a sequence number, followed by the title of the finding. In the directory name the underscore (‘_’) character is preferred over spaces. The title should be descriptive enough to distinguish the finding from the others. An example directory: 01_reflected_xss_in_search_field.
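The naming convention above can be sketched as a small helper. This is just an illustration of the scheme (the function name, zero-padding width and the example finding title are my own assumptions, not part of the original workflow):

```python
import re
from pathlib import Path

def finding_dir(root: Path, seq: int, title: str) -> Path:
    """Build a finding directory: zero-padded sequence number plus
    the finding title, with whitespace turned into underscores."""
    slug = re.sub(r"\s+", "_", title.strip().lower())
    path = root / f"{seq:02d}_{slug}"
    path.mkdir(parents=True, exist_ok=True)
    return path

# Hypothetical finding title used for illustration:
d = finding_dir(Path("project"), 1, "Reflected XSS in search field")
print(d.name)  # -> 01_reflected_xss_in_search_field
```

The zero-padding keeps the directories sorted in finding order in any file listing.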


All information for the particular finding should be saved in that directory.

Finding details

For every finding at least the following information must be saved in the directory of the finding:

  • description of the finding (i.e.: description.txt)
  • every request used for the attack
  • every response during the attack
  • screenshot if applicable


Every request used to execute the attack should be saved in a separate text file. The sequence of the requests should be marked in the file names or described in the description. The full request must always be saved, including the headers.


Every response to the attack requests should be saved in a separate text file. It must be clear from the filenames which requests and responses belong together, for example: 01_request.txt and 01_response.txt.


The full response should always be saved, including the headers.
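A minimal sketch of saving a request/response pair under matching numbered filenames. The helper name, the paired-filename pattern and the sample raw HTTP strings are illustrative assumptions, not prescribed by the workflow above:

```python
from pathlib import Path

def save_pair(finding_dir: Path, seq: int, request_raw: str, response_raw: str) -> None:
    """Save a full raw request and its response (headers included)
    as a numbered pair so they are easy to match up later."""
    finding_dir.mkdir(parents=True, exist_ok=True)
    (finding_dir / f"{seq:02d}_request.txt").write_text(request_raw)
    (finding_dir / f"{seq:02d}_response.txt").write_text(response_raw)

# Hypothetical example: a raw GET request and its response, headers and all.
req = "GET /search?q=<script>alert(1)</script> HTTP/1.1\r\nHost: example.com\r\n\r\n"
resp = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n...reflected payload..."
save_pair(Path("project/01_reflected_xss_in_search_field"), 1, req, resp)
```

Writing both files from one call makes it hard to end up with a request whose response was never captured.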


If the attack has some visual effect, screenshots should be taken. The screenshot must contain as much important detail as possible, for example:

  • Browser address bar; if it holds some special information then that part should be visible, e.g. the attack string of a reflected XSS in a GET parameter.
  • Parts of the page that make it identifiable.
  • The attack itself.

The captured screen area should be as small as possible: to save an alert window of an XSS, the browser should not be fullscreen on a 24″ monitor; instead the browser window should be resized to show only the important information.


There should always be a description.txt. It should contain the following:

  • description of the vulnerability
  • description of an attack scenario
  • description of the proof-of-concept attack
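Following the list above, a minimal skeleton for description.txt might look like this (the section wording and placeholders are my own, not taken from the original workflow):

```
Vulnerability:
  <what the flaw is and where in the application it was found>

Attack scenario:
  <who could exploit it and what the impact would be>

Proof of concept:
  <steps taken, referencing the saved request/response files and screenshots>
```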