From the course: Burp Suite Essential Training

Crawling the website

- [Instructor] Let's use Burp Suite Professional Edition and take a deeper look at crawling a website. We'll switch off intercept, open the browser, and visit 10.0.2.20, which is our Metasploitable2 server. In our site map, we can expand 10.0.2.20 in the left-hand panel and see our visited page and the forward links. Let's right-click and open the scan panel. Here we can see the scan parameters set to crawl and audit the URL 10.0.2.20 using HTTP and HTTPS. We won't make any changes, so let's select OK, and we can see the website tree being populated. Back at the dashboard, we can see in the lower right pane that the crawler has started, and we can scroll to watch the crawl happening. Crawling a website takes a while, so we'll wait until it finishes. Once the crawl finishes, Burp Suite begins its audit, and we can see more issues appearing in the top right-hand issues pane. Back in the site map, red dots start to appear beside pages with high-severity issues. If we click on mutillidae, we can see in the lower right pane that it's vulnerable to file path manipulation. We can also click on an issue in the top right-hand pane and get the details shown together with the relevant web page. The audit will take quite a while, but let's see what we have so far. Let's take a look at the OS command injection issue on the web home entry. We can see not only the advisory in the lower pane but also two request and response messages. Sometimes we'll be doing authenticated scans, or we'll have found credentials in some way. If we have credentials to use, we can add them. We do that on the scan panel by selecting Application Login, and we can add a new application login, perhaps for DVWA, with a username of admin and a password of password. This allows Burp Suite to use the credentials when it sees the login form on the website, and allows further crawling of the pages behind the login page.
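The video shows the crawl purely through the Burp Suite UI, so it may help to sketch what a crawler does conceptually. Below is a minimal breadth-first link crawler in Python, using only the standard library; it is an illustration of the idea, not Burp's crawler, and everything beyond the 10.0.2.20 host from the video (function names, the page limit) is made up for the example.

```python
# Minimal breadth-first crawler sketch -- illustrative only, not Burp's crawler.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags on a single page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(start_url, max_pages=50):
    """Visit pages breadth-first, staying on the starting host."""
    seen, queue = set(), deque([start_url])
    host = urlparse(start_url).netloc
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)
            if urlparse(absolute).netloc == host:  # stay in scope
                queue.append(absolute)
    return seen


if __name__ == "__main__":
    for page in sorted(crawl("http://10.0.2.20/")):
        print(page)
```

Burp's crawler is considerably smarter than this (it handles forms, sessions, and JavaScript-driven navigation), but the core loop of fetching a page, extracting links, and queueing in-scope ones is the same.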
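The file path manipulation issue reported against mutillidae is a path traversal flaw: user input ends up in a filesystem path, so a value such as ../../../etc/passwd can reach files outside the web root. Here is a minimal sketch of the vulnerable pattern and one common mitigation, with a hypothetical page parameter and web root (not code from the scanned application):

```python
# Sketch of a file path manipulation (path traversal) flaw and one mitigation.
# The "page" parameter and web root are hypothetical, for illustration only.
from pathlib import Path

WEB_ROOT = Path("/var/www/html").resolve()


def serve_file_vulnerable(page: str) -> bytes:
    # Attacker-controlled input is joined straight into the path:
    # page = "../../../etc/passwd" escapes the web root.
    return (WEB_ROOT / page).read_bytes()


def serve_file_safer(page: str) -> bytes:
    # Resolve the final path and confirm it still lives under the web root.
    target = (WEB_ROOT / page).resolve()
    if not target.is_relative_to(WEB_ROOT):  # Python 3.9+
        raise PermissionError("path escapes web root")
    return target.read_bytes()
```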
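OS command injection, the other finding mentioned above, arises when request input is spliced into a shell command. A short sketch of the flaw and the usual fix, again with a hypothetical host parameter rather than anything taken from the scanned site:

```python
# Sketch of OS command injection and its fix -- the parameter is hypothetical.
import subprocess


def ping_vulnerable(host: str) -> str:
    # host = "10.0.2.20; cat /etc/passwd" runs a second command via the shell.
    return subprocess.run(f"ping -c 1 {host}", shell=True,
                          capture_output=True, text=True).stdout


def ping_safer(host: str) -> str:
    # Pass arguments as a list so the shell never interprets the input.
    return subprocess.run(["ping", "-c", "1", host],
                          capture_output=True, text=True).stdout
```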
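Finally, the application login feature matters because an unauthenticated crawler only ever sees the login form. The sketch below shows the same idea outside Burp, using the third-party requests library with the admin/password credentials from the video; the DVWA URL paths and form field names are assumptions about that application (newer DVWA versions also require a CSRF token), not something demonstrated in the course.

```python
# Sketch of an authenticated crawl outside Burp, using the requests library.
# The DVWA paths and form field names ("username", "password", "Login") are
# assumptions for illustration; real forms may differ (e.g. CSRF tokens).
import requests

session = requests.Session()
login_url = "http://10.0.2.20/dvwa/login.php"

# Submit the credentials configured in the application login: admin / password.
session.post(login_url, data={
    "username": "admin",
    "password": "password",
    "Login": "Login",
})

# The session now carries the login cookie, so pages behind the login form
# become reachable -- this is what Burp's application login enables.
response = session.get("http://10.0.2.20/dvwa/index.php")
print(response.status_code, len(response.text))
```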
