
Scraping the first ten pages of the Qiushibaike "Text" section

Scrape pages 1 through 10 of the Qiushibaike text listing:
import urllib.request
import re


def jokeCrawler(url):
    """Fetch one Qiushibaike text page and return a dict
    mapping username -> joke text."""
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Maxthon/4.4.3.4000 Chrome/30.0.1599.101 Safari/537.36"
    }
    req = urllib.request.Request(url, headers=headers)
    response = urllib.request.urlopen(req)

    html = response.read().decode('utf-8')

    # Each joke block starts at the author div and ends at the vote counter.
    pat = r'<div class="author clearfix">(.*?)<span class="stats-vote"><i class="number">'
    re_joke = re.compile(pat, re.S)
    divsList = re_joke.findall(html)

    dic = {}
    for div in divsList:
        # Username
        re_u = re.compile(r'<h2>(.*?)</h2>', re.S)
        username = re_u.findall(div)[0]

        # Joke text
        re_d = re.compile(r'<div class="content">\n<span>(.*?)</span>', re.S)
        duanzi = re_d.findall(div)[0]

        dic[username] = duanzi

    # Optionally dump the raw page for debugging:
    # with open(r'E:\all-workspace\qianfeng\0802-爬虫简介与json\file\file3.html', 'w') as f:
    #     f.write(html)

    return dic


for i in range(1, 11):  # pages 1 through 10 (the original range(1, 10) stopped at page 9)
    # The original wrote "...page/str(i)/" inside the string literal,
    # so the page number was never substituted; build the URL explicitly.
    url = "https://www.qiushibaike.com/text/page/" + str(i) + "/"
    info = jokeCrawler(url)
    for k, v in info.items():
        print(k + " says: " + v)
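Because the site's markup can change and the script needs network access, the extraction step can be checked offline against a small HTML fragment shaped like the markup the regexes expect. The fragment below is a made-up sample for illustration, not real Qiushibaike output:

```python
import re

# A minimal, invented HTML fragment mimicking the structure the crawler's
# regexes assume: an author block followed by a vote counter.
sample = (
    '<div class="author clearfix"><h2>alice</h2>'
    '<div class="content">\n<span>a short joke</span></div>'
    '<span class="stats-vote"><i class="number">'
)

# Same patterns as in the crawler above.
re_joke = re.compile(
    r'<div class="author clearfix">(.*?)'
    r'<span class="stats-vote"><i class="number">', re.S)
div = re_joke.findall(sample)[0]

username = re.findall(r'<h2>(.*?)</h2>', div, re.S)[0]
duanzi = re.findall(r'<div class="content">\n<span>(.*?)</span>', div, re.S)[0]
print(username, duanzi)  # → alice a short joke
```

Note that `re.S` (DOTALL) makes `.` match newlines too, which is why the lazy `(.*?)` groups can span the multi-line joke blocks in the real pages.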









