
Python Self-Study Log -- Practice Project 1 -- Scraping Gallery Images


Target site: http://www.netbian.com/

Modules used

import requests
from lxml import etree

# Set a User-Agent so requests look like a normal browser
header = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 "
                  "Safari/537.36"}

Page = int(input('Number of pages to download: '))
if Page == 1:
    url = 'http://www.netbian.com/meinv/'
    response = requests.get(url, headers=header).text
    html = etree.HTML(response)
    for a in range(4, 21):
        print('Downloading image', a - 3)
        response2 = html.xpath('//*[@id="main"]/div[3]/ul/li[' + str(a) + ']/a/@href')
        # Build the detail-page URL for this image
        CQT_url = 'http://www.netbian.com/' + str(response2[0])

        response2 = requests.get(CQT_url, headers=header).text
        html2 = etree.HTML(response2)
        TP_url = html2.xpath('//*[@id="main"]/div[3]/div/p/a/img/@src')[0]
        TP_name = html2.xpath('//*[@id="main"]/div[3]/div/p/a/img/@alt')[0]
        image = requests.get(TP_url, headers=header)
        # Save the image (adjust the folder to one that exists on your machine)
        with open(fr"D:/代码保存图片/{TP_name}.jpg", "wb") as file:
            file.write(image.content)
elif Page >= 2:
    # Page 1 has no index_N suffix, so it is handled separately inside the loop
    for page in range(1, Page + 1):
        if page == 1:
            url = 'http://www.netbian.com/meinv/'
        else:
            url = 'http://www.netbian.com/meinv/index_' + str(page) + '.htm'
        response = requests.get(url, headers=header).text
        html = etree.HTML(response)
        for a in range(4, 21):
            print('Downloading page', page, 'image', a - 3)
            response2 = html.xpath('//*[@id="main"]/div[3]/ul/li[' + str(a) + ']/a/@href')
            CQT_url = 'http://www.netbian.com/' + str(response2[0])

            response2 = requests.get(CQT_url, headers=header).text
            html2 = etree.HTML(response2)
            TP_url = html2.xpath('//*[@id="main"]/div[3]/div/p/a/img/@src')[0]
            TP_name = html2.xpath('//*[@id="main"]/div[3]/div/p/a/img/@alt')[0]
            image = requests.get(TP_url, headers=header)

            with open(fr"D:/代码保存图片/{TP_name}.jpg", "wb") as file:
                file.write(image.content)
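The two branches above differ only in the listing URL (page 1 has no `index_N` suffix), so the pagination could be unified with a small helper instead of a top-level `if`. A minimal sketch, using the same URL pattern as the code above (`page_url` is a hypothetical name, not part of the original script):

```python
def page_url(page: int) -> str:
    """Return the listing URL for a given page; page 1 has no index_N suffix."""
    base = 'http://www.netbian.com/meinv/'
    return base if page == 1 else f'{base}index_{page}.htm'

print(page_url(1))  # http://www.netbian.com/meinv/
print(page_url(3))  # http://www.netbian.com/meinv/index_3.htm
```

With this, a single `for page in range(1, Page + 1)` loop covers every page and the duplicated download code collapses into one body.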

The URLs for pages 2 and up don't follow the same pattern as page 1, so I used an if statement. The site also seems to have some anti-scraping in place: each page lists 20 images, but the third one's xpath differs from all the others, so I start from the fourth. If any of you seniors have a moment, please take a look and give me some pointers; I've attached a screenshot of the difference.
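One way around the odd third item is to stop indexing `li[4]`..`li[20]` positionally and instead select every `li/a/@href` under the list in one query, then filter out entries that don't look like detail-page links. A sketch on a toy HTML snippet (the real page's markup is assumed to match the author's xpath; the snippet below is a stand-in, and the `/desk/` link shape is an assumption):

```python
from lxml import etree

# Toy stand-in for the listing page: two normal thumbnails plus
# one odd item (like the third image the author had to skip)
html_text = """
<div id="main"><div></div><div></div><div class="list">
  <ul>
    <li><a href="/desk/1.htm"></a></li>
    <li class="odd"><a href="http://ads.example.com"></a></li>
    <li><a href="/desk/2.htm"></a></li>
  </ul>
</div></div>
"""
html = etree.HTML(html_text)
# Grab every thumbnail link under the list in one query...
hrefs = html.xpath('//*[@id="main"]//ul/li/a/@href')
# ...then keep only relative detail-page links, dropping any item
# whose markup differs from the rest
detail = [h for h in hrefs if h.startswith('/')]
print(detail)  # ['/desk/1.htm', '/desk/2.htm']
```

This also removes the hard-coded `range(4, 21)`, so the loop no longer breaks if the site changes how many items a page lists.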

Attaching a screenshot!


Reprinted from www.051e.com; original article: http://www.051e.com/it/295743.html