Crawlers are really fun. I was following a tutorial on scraping Taobao images, but no matter what I tried I couldn't find where the image URLs live in the page source. Then I happened upon the pretty wallpapers on netbian (彼岸) and couldn't resist, so after half a day of tinkering I wrote a crawler. It only scrapes the anime category. I'm sharing the source with everyone here; the code may not be great, so please bear with me.
The pictures seem to be only 720p though. Can someone teach me how to get the 4K versions?
https://pic.netbian.com/4kdongman/index.html is the URL. The "dongman" in the path is pinyin for "anime", i.e. the anime category. You can swap it for the scenery category or the others; they should all work the same way.
for i in range(0, 10) controls how many listing pages get scraped. I only do 10 pages here; feel free to change it.
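The pagination pattern the loop relies on can be pulled out into a tiny helper. This is just a sketch of what the loop computes (assumption: pic.netbian.com serves page 1 at index.html and page N at index_N.html for N >= 2; `page_url` is a made-up name, not part of the script below):

```python
def page_url(n):
    """Listing URL for page n of the 4kdongman category (hypothetical helper)."""
    base = 'https://pic.netbian.com/4kdongman/'
    # page 1 has no number suffix; every later page does
    return base + ('index.html' if n == 1 else f'index_{n}.html')
```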
dd='C:\\Users\\86185\\Desktop\\22\\'+str(a)+'.jpg' is the save path. Just replace the C:\\Users\\86185\\Desktop\\22 part with whatever folder on your own machine you want to save to (the folder has to exist already, or urlretrieve will fail).
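If you don't want to create the folder by hand, you can have the script make it for you. A minimal sketch, assuming a hypothetical relative folder downloads/22 in place of the desktop path (swap in your own):

```python
import os

# Hypothetical save folder -- replace with your own path.
# makedirs with exist_ok=True creates it up front, so urlretrieve
# never fails on a missing directory.
save_dir = os.path.join('downloads', '22')
os.makedirs(save_dir, exist_ok=True)

def save_path(n):
    """Filename for the n-th downloaded picture (illustrative helper)."""
    return os.path.join(save_dir, f'{n}.jpg')
```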
import urllib.request, re, random

btk = [
    'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
    'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
    'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)',
    'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)',
    'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)',
]

def ua(btk):
    # install a randomly chosen User-Agent as the global opener
    thisua = random.choice(btk)
    headers = ('User-Agent', thisua)
    opener = urllib.request.build_opener()
    opener.addheaders = [headers]
    urllib.request.install_opener(opener)

try:
    a = 0
    for i in range(0, 10):
        if i == 0:
            # page 1 is plain index.html; later pages are index_N.html
            url = 'https://pic.netbian.com/4kdongman/index.html'
        else:
            url = 'https://pic.netbian.com/4kdongman/index_' + str(i + 1) + '.html'
        ua(btk)
        data = urllib.request.urlopen(url).read().decode('utf-8', 'ignore')
        pat = '</li><li><a href="(.*?)"'  # links to each picture's detail page
        rst = re.compile(pat).findall(data)
        for j in range(0, len(rst)):
            a += 1
            url1 = 'https://pic.netbian.com' + rst[j]
            ua(btk)
            data1 = urllib.request.urlopen(url1).read().decode('utf-8', 'ignore')
            pat1 = 'id="img"><img src="(.*?)"'  # the actual image URL on the detail page
            rst1 = re.compile(pat1).findall(data1)
            url2 = 'https://pic.netbian.com' + rst1[0]
            dd = 'C:\\Users\\86185\\Desktop\\22\\' + str(a) + '.jpg'
            urllib.request.urlretrieve(url2, filename=dd)  # download everything and save it
            print(f'Picture {a} downloaded successfully')
except urllib.error.URLError as e:
    print(e.code)    # error status
    print(e.reason)
--------------------------------------------------------------
Dividing line. Here's a bonus crawler, you know the kind; remember to change the save path. This comic keeps showing up in every app I use, so go for it, brothers:

import urllib.request, re, random
btk = [
    'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
    'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
    'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)',
    'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)',
    'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)',
]

def ua(btk):
    # install a randomly chosen User-Agent as the global opener
    thisua = random.choice(btk)
    headers = ('User-Agent', thisua)
    opener = urllib.request.build_opener()
    opener.addheaders = [headers]
    urllib.request.install_opener(opener)

try:
    a = 0
    for i in range(0, 100):
        url = 'https://rouman5.com/books/63b65185-f798-4c8f-a0b0-8811615908fd/' + str(i)
        ua(btk)
        data = urllib.request.urlopen(url).read().decode('utf-8', 'ignore')
        pat = '"no-referrer" src="(.*?)"'  # image URLs on the chapter page
        rst = re.compile(pat).findall(data)
        for j in range(len(rst)):
            a += 1
            # use the running counter a, so later chapters don't overwrite
            # earlier files; remember to change the save path
            dd = 'C:\\Users\\86185\\Desktop\\22\\' + str(a) + '.jpg'
            urllib.request.urlretrieve(rst[j], filename=dd)  # download everything and save it
            print(f'Picture {a} downloaded successfully')
except urllib.error.URLError as e:
    print(e.code)    # error status
    print(e.reason)