Link: https://jn.zu.ke.com/zufang
First, create a Scrapy project with the command scrapy startproject Beike, then open the project in PyCharm and flesh out the item.
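For reference, startproject generates the standard Scrapy scaffold, which is where the files edited below live:
- Beike/
-     scrapy.cfg            # deploy configuration
-     Beike/
-         __init__.py
-         items.py          # item definitions (we edit this first)
-         middlewares.py
-         pipelines.py      # item pipelines (used later for CSV output)
-         settings.py       # project settings
-         spiders/          # our spider will be generated here
-             __init__.py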
Next, open the settings file and comment out (or delete) ROBOTSTXT_OBEY = True, which tells Scrapy not to obey the site's robots.txt.
Also set a User-Agent request header to masquerade as a browser; without one the site can recognize the Scrapy framework immediately and blacklist your machine.
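Concretely, the two relevant lines in settings.py end up like this (the User-Agent string is just one example; any current browser UA works):
- # settings.py
- ROBOTSTXT_OBEY = False  # or simply comment the original ROBOTSTXT_OBEY = True line out
- USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36'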
The fields we scrape are the listing details shown on the page: title, link, address, size, orientation, layout, and price.
With the item defined, we generate the spider with the command: scrapy genspider beike jn.zu.ke.com
beike ------ the spider name
jn.zu.ke.com ------ the allowed domain
Next we write the spider file: first the list-page parser, then the detail-page parser. We leave pagination for later and build it up step by step.
The code:
- # -*- coding: utf-8 -*-
- import scrapy
- from Beike.items import BeikeItem
- import copy
- 
- 
- class BeikeSpider(scrapy.Spider):
-     name = 'beike'
-     allowed_domains = ['jn.zu.ke.com']
-     start_urls = ['https://jn.zu.ke.com/zufang']
-     page = 2  # next list page to request; start_urls already covers page 1
- 
-     def parse(self, response):
-         print(response.url)
-         # one node per listing on the list page
-         node_list = response.xpath('//div[@class="content__list--item--main"]')
-         print(len(node_list))
-         # note: a single item instance is reused for every listing,
-         # which is why it is deep-copied before each request below
-         item = BeikeItem()
-         for node in node_list:
-             item["title"] = node.xpath("./p[1]/a/text()").extract_first().strip()
-             item["link"] = response.urljoin(node.xpath("./p[1]/a/@href").extract_first().strip())
-             item["address"] = node.xpath("./p[2]/a[3]/text()").extract_first().strip()
-             item["big"] = node.xpath("./p[2]/text()[5]").extract_first().strip()
-             item["where"] = node.xpath("./p[2]/text()[6]").extract_first().strip()
-             item["how"] = node.xpath("./p[2]/text()[7]").extract_first().strip()
-             item["price"] = node.xpath(
-                 './span[@class="content__list--item-price"]/em/text()').extract_first().strip() + '元/月'
-             yield scrapy.Request(
-                 url=item["link"],
-                 callback=self.detail_parse,
-                 # deep copy so later iterations don't overwrite this request's data
-                 meta={"item": copy.deepcopy(item)},
-                 dont_filter=True
-             )
- 
-     def detail_parse(self, response):
-         item = response.meta['item']
-         # contact name shown in the sidebar of the detail page
-         item["name"] = response.xpath('//*[@id="aside"]/div[2]/div[2]/div[1]/span/text()').extract_first()
-         print(item["title"])
-         yield item
Because the next-page links are loaded dynamically, we simply construct each next-page URL ourselves. Compare:
Page 2: https://jn.zu.ke.com/zufang/pg2/#contentList
Page 3: https://jn.zu.ke.com/zufang/pg3/#contentList
Only the number after pg changes, so we append the following to the end of parse:
-         # appended at the end of parse(): request the next list page
-         if self.page < 100:  # crawl up to page 99
-             next_url = 'https://jn.zu.ke.com/zufang/pg{}/#contentList'.format(self.page)
-             self.page += 1
-             yield scrapy.Request(next_url, callback=self.parse)
The requests are then handed to the Scrapy engine to fetch and parse. One bug kept me stuck all last night: the list pages parsed fine, but the detail pages all produced identical data. After a long search (the answer eventually turned up via Baidu), the fix was to import the copy module and deep-copy the item when passing it through meta: meta={"item": copy.deepcopy(item)}. Because parse reuses a single item instance across the loop, every pending request otherwise points at the same object, which by the time the detail pages are parsed holds only the last listing's values. Two more things to watch when writing the spider: make sure the field names used in the spider match the ones declared in the item exactly, or Scrapy will raise an error; and set dont_filter=True on the detail requests to disable duplicate filtering, otherwise some requests get filtered out and the page effectively falls back (via the Referer) to the previous one, yielding the same data again.
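An alternative fix, shown here only as a sketch (this project keeps the deepcopy version), is to create a fresh BeikeItem inside the loop, so each request carries its own object and no deep copy is needed:
-     # inside BeikeSpider: one item per listing instead of one shared instance
-     def parse(self, response):
-         for node in response.xpath('//div[@class="content__list--item--main"]'):
-             item = BeikeItem()  # fresh instance per listing, nothing shared
-             item["title"] = node.xpath("./p[1]/a/text()").extract_first().strip()
-             item["link"] = response.urljoin(node.xpath("./p[1]/a/@href").extract_first())
-             # ... fill address/big/where/how/price exactly as above ...
-             yield scrapy.Request(
-                 url=item["link"],
-                 callback=self.detail_parse,
-                 meta={"item": item},  # no copy.deepcopy needed now
-                 dont_filter=True,
-             )
Either route works; the deep copy preserves the original loop structure, while per-iteration items avoid the shared state in the first place.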
With the spider written, run it with scrapy crawl beike and check the output:
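Tip: for a quick look you can also let Scrapy's built-in feed export dump the items without any pipeline, e.g. scrapy crawl beike -o beike.csv. Below we write our own pipeline instead, so we control the column headers and the file encoding.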
Next we save the data to CSV by writing the items out through an item pipeline.
Note: the pipeline must be registered in the settings file; the lower the number you assign it, the earlier (higher priority) it runs.
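For this project the registration looks like this (300 is an arbitrary middle value; pipeline orders run from 0 to 1000):
- # settings.py
- ITEM_PIPELINES = {
-     'Beike.pipelines.SavePipeline': 300,
- }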
The complete code:
items.py:
- # -*- coding: utf-8 -*-
- 
- # Define here the models for your scraped items
- #
- # See documentation in:
- # https://docs.scrapy.org/en/latest/topics/items.html
- 
- import scrapy
- 
- 
- class BeikeItem(scrapy.Item):
-     # define the fields for your item here like:
-     # name = scrapy.Field()
-     title = scrapy.Field()    # 标题: listing title
-     link = scrapy.Field()     # 链接: detail-page URL
-     address = scrapy.Field()  # 地址: address
-     big = scrapy.Field()      # 大小: floor area
-     where = scrapy.Field()    # 方向: orientation
-     how = scrapy.Field()      # 居室: layout / rooms
-     price = scrapy.Field()    # 价格: monthly rent
-     name = scrapy.Field()     # 名字: contact name from the detail page
The spider file (beike.py):
- # -*- coding: utf-8 -*-
- import scrapy
- from Beike.items import BeikeItem
- import copy
- 
- 
- class BeikeSpider(scrapy.Spider):
-     name = 'beike'
-     allowed_domains = ['jn.zu.ke.com']
-     start_urls = ['https://jn.zu.ke.com/zufang']
-     page = 2  # next list page to request
- 
-     def parse(self, response):
-         print(response.url)
-         node_list = response.xpath('//div[@class="content__list--item--main"]')
-         print(len(node_list))
-         item = BeikeItem()  # shared instance, deep-copied per request below
-         for node in node_list:
-             item["title"] = node.xpath("./p[1]/a/text()").extract_first().strip()
-             item["link"] = response.urljoin(node.xpath("./p[1]/a/@href").extract_first().strip())
-             item["address"] = node.xpath("./p[2]/a[3]/text()").extract_first().strip()
-             item["big"] = node.xpath("./p[2]/text()[5]").extract_first().strip()
-             item["where"] = node.xpath("./p[2]/text()[6]").extract_first().strip()
-             item["how"] = node.xpath("./p[2]/text()[7]").extract_first().strip()
-             item["price"] = node.xpath(
-                 './span[@class="content__list--item-price"]/em/text()').extract_first().strip() + '元/月'
-             yield scrapy.Request(
-                 url=item["link"],
-                 callback=self.detail_parse,
-                 meta={"item": copy.deepcopy(item)},
-                 dont_filter=True
-             )
-         # pagination: list pages follow /zufang/pgN/#contentList
-         if self.page < 100:
-             next_url = 'https://jn.zu.ke.com/zufang/pg{}/#contentList'.format(self.page)
-             self.page += 1
-             yield scrapy.Request(next_url, callback=self.parse)
- 
-     def detail_parse(self, response):
-         item = response.meta['item']
-         item["name"] = response.xpath('//*[@id="aside"]/div[2]/div[2]/div[1]/span/text()').extract_first()
-         print(item["title"])
-         yield item
pipelines.py:
- # -*- coding: utf-8 -*-
- 
- # Define your item pipelines here
- #
- # Don't forget to add your pipeline to the ITEM_PIPELINES setting
- # See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
- import csv
- 
- 
- class SavePipeline(object):
-     def open_spider(self, spider):
-         # gb18030 so the Chinese text opens cleanly in Excel on Windows;
-         # note that mode 'a' appends, so the header row is re-written on every run
-         self.file = open("贝壳.csv", 'a', newline="", encoding="gb18030")
-         self.csv_writer = csv.writer(self.file)
-         self.csv_writer.writerow(["标题", "链接", "地址", "大小", "方向", "居室",
-                                   "价格", "名字"])
- 
-     def process_item(self, item, spider):
-         self.csv_writer.writerow(
-             [item["title"], item["link"], item["address"],
-              item["big"], item["where"], item["how"], item["price"], item["name"]]
-         )
-         return item
- 
-     def close_spider(self, spider):
-         self.file.close()
settings.py:
- # -*- coding: utf-8 -*-
-
- # Scrapy settings for Beike project
- #
- # For simplicity, this file contains only settings considered important or
- # commonly used. You can find more settings consulting the documentation:
- #
- # https://docs.scrapy.org/en/latest/topics/settings.html
- # https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
- # https://docs.scrapy.org/en/latest/topics/spider-middleware.html
-
- BOT_NAME = 'Beike'
-
- SPIDER_MODULES = ['Beike.spiders']
- NEWSPIDER_MODULE = 'Beike.spiders'
-
- # Crawl responsibly by identifying yourself (and your website) on the user-agent
- USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 ' \
-              'Safari/537.36 '
-
- # Obey robots.txt rules
- # ROBOTSTXT_OBEY = True
-
- # Configure maximum concurrent requests performed by Scrapy (default: 16)
- # CONCURRENT_REQUESTS = 32
-
- # Configure a delay for requests for the same website (default: 0)
- # See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
- # See also autothrottle settings and docs
- # DOWNLOAD_DELAY = 3
- # The download delay setting will honor only one of:
- # CONCURRENT_REQUESTS_PER_DOMAIN = 16
- # CONCURRENT_REQUESTS_PER_IP = 16
-
- # Disable cookies (enabled by default)
- # COOKIES_ENABLED = False
-
- # Disable Telnet Console (enabled by default)
- # TELNETCONSOLE_ENABLED = False
-
- # Override the default request headers:
- # DEFAULT_REQUEST_HEADERS = {
- # "Referer": "https://jn.zu.ke.com/zufang"
- # }
-
- # Enable or disable spider middlewares
- # See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
- # SPIDER_MIDDLEWARES = {
- # 'Beike.middlewares.BeikeSpiderMiddleware': 543,
- # }
-
- # Enable or disable downloader middlewares
- # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
- # DOWNLOADER_MIDDLEWARES = {
- # 'Beike.middlewares.BeikeDownloaderMiddleware': 543,
- # }
-
- # Enable or disable extensions
- # See https://docs.scrapy.org/en/latest/topics/extensions.html
- # EXTENSIONS = {
- # 'scrapy.extensions.telnet.TelnetConsole': None,
- # }
-
- # Configure item pipelines
- # See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
- ITEM_PIPELINES = {
-     'Beike.pipelines.SavePipeline': 300,
- }
-
- # Enable and configure the AutoThrottle extension (disabled by default)
- # See https://docs.scrapy.org/en/latest/topics/autothrottle.html
- # AUTOTHROTTLE_ENABLED = True
- # The initial download delay
- # AUTOTHROTTLE_START_DELAY = 5
- # The maximum download delay to be set in case of high latencies
- # AUTOTHROTTLE_MAX_DELAY = 60
- # The average number of requests Scrapy should be sending in parallel to
- # each remote server
- # AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
- # Enable showing throttling stats for every response received:
- # AUTOTHROTTLE_DEBUG = False
-
- # Enable and configure HTTP caching (disabled by default)
- # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
- # HTTPCACHE_ENABLED = True
- # HTTPCACHE_EXPIRATION_SECS = 0
- # HTTPCACHE_DIR = 'httpcache'
- # HTTPCACHE_IGNORE_HTTP_CODES = []
- # HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
-
-
-
-
-
The overall implementation is fairly simple, though it has its pitfalls; projects like this are mainly how you build up experience. This site has no anti-scraping measures and no cookie-based login to simulate. If you run into anti-scraping later, a pool of request headers plus an IP proxy pool will let you keep collecting data; a minimal sketch of such a header pool closes out this post. All of the above is personal learning experience; if anything is off, please point it out, thanks!
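A request-header pool can be as small as a downloader middleware that stamps a random User-Agent onto every outgoing request. Here is a minimal sketch; UA_POOL and RandomUserAgentMiddleware are illustrative names of my own, not part of the project above:
- # middlewares.py (hypothetical example)
- import random
- 
- UA_POOL = [
-     'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36',
-     'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36',
- ]
- 
- class RandomUserAgentMiddleware(object):
-     def process_request(self, request, spider):
-         # overwrite the User-Agent header before the request goes out
-         request.headers['User-Agent'] = random.choice(UA_POOL)
It would then be enabled in settings.py through DOWNLOADER_MIDDLEWARES, e.g. {'Beike.middlewares.RandomUserAgentMiddleware': 400}.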