Batch-decoding Baidu search-result URLs for a list of keywords with Python

    xiaoxiao · 2021-11-30

    Outline of the code:

    1. Read the keywords from a TXT file, one per line.
    2. For each keyword, scrape the top 10 Baidu search results.
    3. Collect the search term, the matched result title, the matched result URL (encrypted), and the rank.
    4. Batch-decode the encrypted Baidu URLs.
    5. Store the decoded real URLs.

    This can be used to verify large-scale rank-boosting campaigns (e.g. 100K rankings), or to aggregate the URLs behind Baidu keyword results and identify the main traffic platforms.

    Import the required libraries

    # coding: utf-8
    import requests
    from bs4 import BeautifulSoup
    import re
    import time

    Read the keywords from the TXT file into a list

    with open('key.txt', 'r') as f:
        result = f.read()
    keys = result.split('\n')
    key_words = list(enumerate(keys, start=1))
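    Note that splitting on '\n' keeps empty strings from blank or trailing lines, which would then be searched as empty keywords. A small sketch (the helper name `load_keywords` is my own, not from the original) that filters them out:

```python
def load_keywords(text):
    # Strip whitespace and drop blank lines, then number from 1,
    # matching the (index, keyword) tuples used by the scraping loop.
    keys = [line.strip() for line in text.split('\n') if line.strip()]
    return list(enumerate(keys, start=1))

# Works on raw file content, so no key.txt is needed to try it:
print(load_keywords('python tutorial\n\nbaidu seo\n'))
```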

    Scrape the Baidu search results keyword by keyword, decode the URLs, and store them. Because Baidu blocks crawlers, a request header with User-Agent and Cookie is included. Result URLs are handled differently depending on their type: direct URLs, 200 responses, and 302 redirects.

    for key in key_words:
        url = 'https://www.baidu.com/s?wd=' + key[1]
        header = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36',
            'Cookie': 'PSTM=1476231684; BIDUPSID=4F526560482E2A5E68D69CC8B0998806; plus_cv=1::m:92e3c68f; BAIDUID=C5A710455602AEA5BEC3D1B13B26321B:FG=1;'
                      ' BDUSS=W5zS3JSeVYwSHZjVm5SdTdjQjlKNC1FLWJqbklvaEptZjVZVkl2bXhMN1o1amhZSVFBQUFBJCQAAAAAAAAAAAEAAACj2nZjanVleWluZ3MAAAAAAAAAAAAAAAAAAAA'
                      'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAANlZEVjZWRFYT; BD_HOME=1; BD_UPN=12314353; sug=3; sugstore=0; ORIGIN=2; bdime=0;'
                      ' H_PS_645EC=78d5XI4+j6NkSjLKSmkiYdx/5jHNa0c4UemYz6WwEpyczIPebiQwaLtzwnXd2gUHv28P; BDRCVFR[feWj1Vr5u3D]=I67x6TjHwwYf0; BD_CK_SAM=1;'
                      ' PSINO=6; H_PS_PSSID=1448_18288_21112_17001_20241_21455_21406_21394_21377_21192_20929; BDSVRTM=0'
        }
        web_db = requests.get(url, headers=header)
        time.sleep(2)  # throttle requests to avoid being blocked
        soup = BeautifulSoup(web_db.text, 'lxml')
        # each <a> element carries both the title text and the (encrypted) href
        titles = soup.select('#content_left > div > h3 > a')
        for rank, link in enumerate(titles[:10], start=1):
            baidu_url = link.get('href')
            if str(baidu_url).find('link?url=') > 0:
                # encrypted redirect link: fetch it without following redirects
                web_db2 = requests.get(baidu_url, allow_redirects=False)
                if web_db2.status_code == 200:
                    # real URL sits inside a <noscript> meta-refresh tag
                    soup2 = BeautifulSoup(web_db2.text, 'lxml')
                    urls = soup2.select('head > noscript')
                    url_math = re.search(r'\'(.*?)\'', str(urls[0]), re.S)
                    web_url = url_math.group(1)
                elif web_db2.status_code == 302:
                    # real URL is in the Location header
                    web_url = web_db2.headers['location']
                else:
                    web_url = 'error'
            else:
                # already a direct URL
                web_url = baidu_url
            data = {
                'key': key,
                'title': link.get_text(),
                'url': web_url.encode('utf-8'),
                'rank': rank,
            }
            with open('info.txt', 'a') as f:
                f.write(str(data) + '\n')
        print('Finished keyword ' + str(key[0]) + ' ********** of ' + str(len(key_words)) + ' total')
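    The 200-status branch above pulls the real URL out of a `<noscript>` meta-refresh tag with a quoted-string regex. That extraction step can be isolated into a small pure function and checked without hitting the network (the sample HTML below is illustrative, not a real Baidu response; the function name `extract_noscript_url` is my own):

```python
import re

def extract_noscript_url(noscript_html):
    # The redirect target is the first single-quoted string in the tag,
    # e.g. <noscript><meta ... content="0;URL='http://...'"></noscript>
    match = re.search(r"'(.*?)'", str(noscript_html), re.S)
    return match.group(1) if match else None

sample = '<noscript><meta http-equiv="refresh" content="0;URL=\'http://example.com/page\'"></noscript>'
print(extract_noscript_url(sample))  # http://example.com/page
```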

    The TXT output is then complete, one record per line. You can paste the results straight into Excel to tabulate them.
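    Since each line of `info.txt` is the `str()` of a Python dict, the records can also be parsed back programmatically with `ast.literal_eval` instead of pasting by hand (the sample line below mirrors the format written by the loop above):

```python
import ast

# One line as written by the scraping loop: key tuple, title, URL bytes, rank.
line = "{'key': (1, 'python tutorial'), 'title': 'Example', 'url': b'http://example.com', 'rank': 1}"
record = ast.literal_eval(line)
print(record['rank'], record['url'].decode('utf-8'))
```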

    For readers who don't code, an EXE version with one-click operation is also available; leave a comment below if you need it. Note that the EXE has not been packed or obfuscated.

    Please credit the original article when reposting: https://ju.6miu.com/read-679251.html
