Scrapy is a free, open-source web crawling framework.
Users can build and scale large crawling projects with Scrapy. It includes a built-in mechanism called Selectors for extracting data, automatically adjusts crawling speed through its AutoThrottle extension, and generates feed exports in formats such as JSON, CSV, and XML.
Scrapy provides built-in support for selecting and extracting data from sources using either XPath or CSS expressions. A companion service called Scrapyd lets you deploy projects and control spiders through a JSON API.
Scrapy ships with an interactive console, the Scrapy shell, for testing a site's behavior. The shell lets you inspect what a web page returns and decide which components to target when scraping data.
Scrapy is an open-source web crawling framework written in Python that scrapes and extracts data with the support of selectors based on XPath. BeautifulSoup, by contrast, is easy for beginners to pick up and is widely used for scraping tasks. BeautifulSoup is essentially an HTML and XML parser; it relies on additional libraries such as requests or urllib2 to open URLs and fetch the pages it parses.
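As a rough comparison, and assuming the bs4 package is installed, a typical BeautifulSoup workflow parses HTML that was fetched separately (a literal string stands in for a downloaded page here, so the example runs without network access):

```python
from bs4 import BeautifulSoup

# In a real script, requests.get(url).text would supply this HTML;
# a literal string is used here so the example runs offline.
html = """
<html><body>
  <ul>
    <li class="item">first</li>
    <li class="item">second</li>
  </ul>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
items = [li.get_text() for li in soup.find_all("li", class_="item")]
print(items)  # ['first', 'second']
```

Note that BeautifulSoup only parses: fetching URLs, scheduling requests, and exporting results are left to other libraries, which is exactly what Scrapy bundles into one framework.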
Becoming proficient with Scrapy requires more practice to learn all of its functionality.
Scrapy can crawl a group of URLs in a few minutes because it is built on Twisted, an asynchronous (non-blocking) networking library that enables concurrency.
Scrapy offers Item Pipelines, where you write functions that process the data your spiders scrape, such as validating it, removing unwanted items, and saving it to a database. Scrapy also provides spider contracts for testing spiders, and supports building both generic and deep crawlers. If a scraping or crawling project does not involve much logic, BeautifulSoup is a good fit for the job; if it requires heavy customization, such as proxies, cookie management, and data pipelines, then Scrapy is the better option.
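A sketch of such a pipeline, using a locally defined DropItem exception as a stand-in for scrapy.exceptions.DropItem so the example runs without Scrapy installed; the item fields are hypothetical:

```python
class DropItem(Exception):
    """Stand-in for scrapy.exceptions.DropItem."""

class ValidatePricePipeline:
    """Validates scraped items: drops those without a price,
    normalizes the price field on the rest."""

    def process_item(self, item, spider):
        if item.get("price") is None:
            # Raising DropItem tells Scrapy to discard this item.
            raise DropItem("missing price")
        item["price"] = round(float(item["price"]), 2)
        return item

# Simulated use: Scrapy would call process_item for each scraped item.
pipeline = ValidatePricePipeline()
good = pipeline.process_item({"name": "book", "price": "12.499"}, spider=None)
print(good)  # {'name': 'book', 'price': 12.5}
```

In a real project the class would be registered in the ITEM_PIPELINES setting, and Scrapy would feed every scraped item through it automatically.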