- 1. Instantiate a BeautifulSoup object and load the page source data into it
- 2. Call the object's attributes and methods to locate tags and extract data
- Environment setup:
    - pip install bs4
    - pip install lxml
- How to instantiate a BeautifulSoup object:
    - from bs4 import BeautifulSoup
    - Instantiation:
        - 1. Load the data of a local HTML document into the object
            fp = open('./test.html', 'r', encoding='utf-8')
            soup = BeautifulSoup(fp, 'lxml')
        - 2. Load page source fetched from the internet into the object
            page_text = response.text
            soup = BeautifulSoup(page_text, 'lxml')
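For example, a minimal self-contained sketch of both ways (it assumes a local file named ./test.html exists; the URL is the same novel index used in the script below, and in practice a User-Agent header may also be needed):

import requests
from bs4 import BeautifulSoup

# 1. From a local HTML file (assumes ./test.html exists)
with open('./test.html', 'r', encoding='utf-8') as fp:
    local_soup = BeautifulSoup(fp, 'lxml')

# 2. From page source fetched over the internet
response = requests.get('http://www.shicimingju.com/book/sanguoyanyi.html')
response.encoding = 'utf-8'
soup = BeautifulSoup(response.text, 'lxml')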
- Methods and attributes provided for data parsing:
    - soup.tagName: returns the first tag named tagName that appears in the document
    - soup.find():
        - find('tagName'): equivalent to soup.div
        - Locating by attribute:
            - soup.find('div', class_/id/attr='song') (class_ is written with a trailing underscore because class is a Python keyword)
    - soup.find_all('tagName'): returns all matching tags (as a list)
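A short illustration of soup.tagName, find() and find_all() (the HTML string and the 'song' class are invented purely for this example):

from bs4 import BeautifulSoup

html = '<div class="song"><a href="http://example.com">link1</a></div><div><a>link2</a></div>'
soup = BeautifulSoup(html, 'lxml')

print(soup.div)                         # the first <div> in the document
print(soup.find('div'))                 # same as soup.div
print(soup.find('div', class_='song'))  # locate by the class attribute
print(soup.find_all('a'))               # a list containing every <a> tag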
    - select:
        - select('some CSS selector (id, class, tag ... selector)'): returns a list
        - Hierarchical selectors:
            - soup.select('.tang > ul > li > a'): ">" denotes a single level
            - soup.select('.tang > ul a'): a space denotes multiple levels
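The difference between the two hierarchy forms, shown on a small invented snippet with a .tang wrapper:

from bs4 import BeautifulSoup

html = '<div class="tang"><ul><li><a href="#1">poem1</a></li><li><span><a href="#2">poem2</a></span></li></ul></div>'
soup = BeautifulSoup(html, 'lxml')

print(soup.select('.tang > ul > li > a'))  # ">" crosses exactly one level: only poem1 matches
print(soup.select('.tang > ul a'))         # a space crosses any number of levels: poem1 and poem2 match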
    - Getting the text between tags:
        - soup.a.text/string/get_text()
        - text/get_text(): gets all of the text content inside a tag, including its descendants
        - string: gets only the text placed directly inside that tag
    - Getting an attribute value of a tag:
        - soup.a['href']
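A quick comparison of text/get_text(), string and attribute access, again on invented markup:

from bs4 import BeautifulSoup

html = '<div><a href="http://example.com">outer <span>inner</span></a></div>'
soup = BeautifulSoup(html, 'lxml')

print(soup.a.text)        # 'outer inner' -> all text, including the nested <span>
print(soup.a.get_text())  # same as .text
print(soup.a.string)      # None, because <a> has more than one direct child
print(soup.a['href'])     # 'http://example.com'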
Putting this together, the script below scrapes every chapter title and chapter body of "Romance of the Three Kingdoms" (三国演义):
# -*- encoding: utf-8 -*-
"""
@File    : 爬取小说.py
@Time    : 2022/3/11 13:00
@Author  : simon
@Email   : 294168604@qq.com
@Software: PyCharm
"""
import requests
from bs4 import BeautifulSoup

# Goal: scrape all chapter titles and chapter contents of "Romance of the Three Kingdoms"
# from http://www.shicimingju.com/book/sanguoyanyi.html


if __name__ == "__main__":
    url = 'http://www.shicimingju.com/book/sanguoyanyi.html'
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36'
    }
    page_text = requests.get(url=url, headers=headers)
    page_text.encoding = 'utf-8'
    # Parse the chapter titles and detail-page URLs out of the table-of-contents page
    # 1. Instantiate a BeautifulSoup object and load the page source data into it
    soup = BeautifulSoup(page_text.text, 'lxml')
    # Parse the chapter titles and the detail-page URLs
    li_list = soup.select('.book-mulu > ul > li')
    for li in li_list:
        title = li.a.string
        detail_url = 'http://www.shicimingju.com' + li.a['href']
        # Request the detail page and parse out the chapter content
        detail_page_text = requests.get(url=detail_url, headers=headers)
        detail_page_text.encoding = 'utf-8'
        # Parse the chapter content from the detail page
        detail_soup = BeautifulSoup(detail_page_text.text, 'lxml')
        div_tag = detail_soup.find('div', class_='chapter_content')
        # The chapter body text
        content = div_tag.text
        # Open in append mode ('a'); opening with 'w' inside the loop would
        # truncate the file on every iteration and keep only the last chapter
        with open('./sanguo.txt', 'a', encoding='utf-8') as fp:
            fp.write(title + ':' + content + '\n')
            print(title, 'scraped successfully!!!')
Result demonstration