Visual Web Ripper is a powerful visual tool used for automated web scraping, web harvesting and content extraction from the web.
Visual Web Ripper can automatically walk through whole web sites and collect complete content structures such as product catalogues or search results, and save the content in many different formats, including XML, spreadsheets and databases.
*Visual Web Ripper has a visual editor to define projects and templates. You use a mouse to click on the content you want to collect, so no coding is required.
*Visual Web Ripper has a fast multi-threaded data collector for web sites where AJAX is not required for data extraction.
*Visual Web Ripper can repeatedly submit forms for all possible combinations of input values in dropdown boxes, or you can supply a list of input values yourself.
*You can supply parameter data from a database, such as form input values or URLs that should be visited.
*Visual Web Ripper can extract data from most framesets and iframes.
*Semi-automatic data extraction from web sites using CAPTCHA protection.
*Duplicate data detection can be used to extract only new data.
*A list of anonymous proxy servers can be set up to hide your IP address and facilitate anonymous web scraping.
*You can schedule content extraction to keep data up-to-date. The scheduler includes email notification, logging and status screens.
*Advanced selection techniques make project templates more resistant to structural changes on web pages, so a scheduled project can keep collecting data even if the structure of a webpage changes slightly.
*Email notifications can be sent if the structure of a webpage changes so much that you must modify the scheduled project to continue extracting content from the webpage.
*Unique features allow you to extract data from web pages with an unstructured "flow" of content. Most other web data extraction tools are unable to extract data from such web pages.
*You can collect many different types of content, such as text, links, images, files, meta tags, tag attributes and many more.
*You can run data extraction projects from the command line.
*You can save extracted content to databases, spreadsheets, XML or CSV files. You can also save the data in an internal memory structure that can be used in conjunction with the API.
*Custom scripting in C#, VB.NET or Regex allows transformation of content as it is being extracted.
*You can use custom post-processing modules (.NET assemblies or scripts) to post-process data after it has been extracted. Custom modules are automatically triggered after a project has run.
*Visual Web Ripper includes a powerful API. You can use the API to modify and run projects from within your own applications, or use the API in conjunction with a post-processing module to easily post-process collected data.
*The Visual Web Ripper installer package includes examples showing how to build custom post-processing modules and how to use the API.
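The form-submission feature above boils down to iterating the cartesian product of all dropdown values. A minimal Python sketch of that idea, purely illustrative and not Visual Web Ripper's actual engine or API:

```python
from itertools import product

def all_form_submissions(fields):
    """Yield one dict of form values per combination of the given options."""
    names = list(fields)
    for combo in product(*(fields[name] for name in names)):
        yield dict(zip(names, combo))

# Two dropdowns with two options each yield 2 x 2 = 4 form submissions.
forms = list(all_form_submissions({
    "category": ["books", "music"],
    "condition": ["new", "used"],
}))
```

Each yielded dict represents one set of form inputs; a scraper would submit the form once per dict and collect the resulting pages.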
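Duplicate data detection, as mentioned above, is commonly implemented by fingerprinting each extracted record and skipping fingerprints seen in earlier runs. This Python sketch shows only the general technique; Visual Web Ripper's internal mechanism is not documented here:

```python
import hashlib
import json

def new_records(records, seen_hashes):
    """Return only records whose hash is not in seen_hashes; update the set."""
    fresh = []
    for rec in records:
        digest = hashlib.sha256(
            json.dumps(rec, sort_keys=True).encode("utf-8")
        ).hexdigest()
        if digest not in seen_hashes:
            seen_hashes.add(digest)
            fresh.append(rec)
    return fresh

seen = set()
first_run = new_records([{"id": 1}, {"id": 2}], seen)   # both records are new
second_run = new_records([{"id": 2}, {"id": 1}, {"id": 3}], seen)  # only id 3 is new
```

Persisting `seen_hashes` between scheduled runs is what lets a project extract only data added since the last run.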
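The Regex transformation feature mentioned above typically applies a pattern to raw page text as it is extracted, for example normalising a scraped price string. The snippet below is a hypothetical illustration of that kind of transformation; it does not use Visual Web Ripper's actual script hook signature:

```python
import re

def transform_price(raw):
    """Extract the numeric part of a price string and drop thousands separators."""
    match = re.search(r"[\d.,]+", raw)
    if not match:
        return None  # no numeric content found in the extracted text
    return match.group(0).replace(",", "")

transform_price("Price: $1,299.99")
```

In the real product, an equivalent transformation would run per extracted element, so the saved data contains the cleaned value rather than the raw page text.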
Changelog for this release:
- Fixed an issue where auto-submit forms did not always fire the required events in the editor.
15-day trial; extracts only the first 100 web elements on a website.