Resource description
A working Google Images crawler; the default keywords are mood words such as angry and sad.
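As the header comments in the snippet below note, search queries are formed by pairing every main keyword with every supplemented keyword. A minimal sketch of that pairing (the keyword lists here are hypothetical examples, not taken from the code):

```python
from itertools import product

# Hypothetical keyword lists; the crawler's defaults are mood words such as 'angry' and 'sad'.
main_keywords = ['angry', 'sad']
supplemented_keywords = ['face', 'person']

# Every main keyword is paired with every supplemented keyword.
queries = ['{} {}'.format(m, s) for m, s in product(main_keywords, supplemented_keywords)]
# queries -> ['angry face', 'angry person', 'sad face', 'sad person']
```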
Code snippet and file information
# -*- coding: utf-8 -*-
# @Author: wlc
# @Date: 2017-09-25 23:54:24
# @Last Modified by: Henry
# @Last Modified time: 2018-7-11 22:40:11
####################################################################################################################
# Download images from Google with the specified search keywords.
# Each search query is built as "main_keyword + supplemented_keyword";
# with multiple keywords, every main_keyword is joined with every supplemented_keyword.
# Mainly uses urllib; each search query downloads at most 100 images, since Google limits the page source.
# Supports downloading in a single process or in multiple processes.
####################################################################################################################
import os
import time
import re
import logging
import urllib.request
import urllib.error
from multiprocessing import Pool
from user_agent import generate_user_agent
log_file = 'download.log'
logging.basicConfig(level=logging.DEBUG, filename=log_file, filemode='a+', format='%(asctime)-15s %(levelname)-8s %(message)s')
def download_page(url):
    """download raw content of the page
    Args:
        url (str): url of the page
    Returns:
        raw content of the page
    """
    try:
        headers = {}
        headers['User-Agent'] = generate_user_agent()
        headers['Referer'] = 'https://www.google.com'
        req = urllib.request.Request(url, headers=headers)
        resp = urllib.request.urlopen(req)
        return str(resp.read())
    except Exception as e:
        print('error while downloading page {0}'.format(url))
        logging.error('error while downloading page {0}'.format(url))
        return None
def parse_page(url):
    """parse the page and get all the links of images; max number is 100 due to Google's limit
    Args:
        url (str): url of the page
    Returns:
        A set containing the urls of images
    """
    page_content = download_page(url)
    if page_content:
        link_list = re.findall('"ou":"(.*?)"', page_content)
        if len(link_list) == 0:
            print('get 0 links from page {0}'.format(url))
            logging.info('get 0 links from page {0}'.format(url))
            return set()
        else:
            return set(link_list)
    else:
        return set()
def download_images(main_keyword, supplemented_keywords, download_dir):
    """download images with one main keyword and multiple supplemented keywords
    Args:
        main_keyword (str): main keyword
        supplemented_keywords (list[str]): list of supplemented keywords
        download_dir (str): directory to save images into
    Returns:
        None
    """
    image_links = set()
    print('Process {0} Main keyword: {1}'.format(os.getpid(), main_keyword))
    # create a directory for a main keyword
    img_dir = download_dir + main_keyword + '/'
    if not os.path.exists(img_dir):
        os.makedirs(img_dir)
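The snippet is cut off after the directory check. Below is a hedged sketch of how the functions above might be driven, assuming the standard Google Images URL format (`q` plus `tbm=isch`) and the multi-process mode the header describes; `build_search_url` and `run` are illustrative names, not part of the original code:

```python
from multiprocessing import Pool
from urllib.parse import quote

def build_search_url(query):
    # Google Images result page; 'tbm=isch' selects image search (assumed URL format).
    return 'https://www.google.com/search?q={}&tbm=isch'.format(quote(query))

def run(main_keywords, supplemented_keywords, download_dir):
    # One task per main keyword, mirroring the header's multi-process option.
    # download_images is the function defined in the snippet above.
    with Pool(processes=len(main_keywords)) as pool:
        pool.starmap(download_images,
                     [(kw, supplemented_keywords, download_dir) for kw in main_keywords])
```

For example, `build_search_url('angry face')` yields `https://www.google.com/search?q=angry%20face&tbm=isch`, which `download_page` and `parse_page` can then fetch and scan for `"ou"` links.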