Web Crawler and Scraper
Return browser (webdriver) location path
ContentScraper
Fetch page using web driver/Session
Get encoding of an HTML page
Install PhantomJS webdriver
LinkExtractor
Link Normalization
Get the list of parameters and values from a URL
Link parameters filter
ListProjects
LoadHTMLFiles
Open a logged-in session
Rcrawler
RobotParser fetch and parse robots.txt
Start up a web driver process on localhost with a random port
Stop the web driver process and remove its object
Performs parallel web crawling and web scraping. It is designed to crawl, parse, and store web pages to produce data that can be used directly in analysis applications. For details see Khalil and Fakir (2017) <DOI:10.1016/j.softx.2017.04.004>.
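The package's entry point is the `Rcrawler()` function, which drives the parallel crawl described above. A minimal sketch of a crawl with XPath-based extraction follows; it assumes the package is installed and that the argument names used here (`Website`, `no_cores`, `no_conn`, `MaxDepth`, `ExtractXpathPat`, `PatternsNames`) match the installed version:

```r
library(Rcrawler)

# Crawl a site with 4 worker processes and 4 simultaneous connections,
# following links at most 2 levels deep, and extract the page title and
# main heading of every crawled page by XPath.
Rcrawler(
  Website         = "https://www.example.com/",  # hypothetical target site
  no_cores        = 4,
  no_conn         = 4,
  MaxDepth        = 2,
  ExtractXpathPat = c("//title", "//h1"),
  PatternsNames   = c("title", "heading")
)

# On completion the crawl index (INDEX) and the extracted contents (DATA)
# are made available in the global environment for analysis.
```

A single page can also be scraped without a full crawl, e.g. `ContentScraper(Url = "https://www.example.com/", XpathPatterns = "//title")`, and `LinkExtractor()` fetches one page and returns its outgoing links.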