Technical notes on saving Scrapy spider output as a CSV file

    xiaoxiao  2021-12-12

    For work I needed to save a spider's output as CSV; previously I had only saved JSON. Many of the methods circulating online do not work. The two most common are the following. The first approach:


    from scrapy import signals
    # In modern Scrapy this lives in scrapy.exporters; scrapy.contrib.exporter is the old path.
    from scrapy.exporters import CsvItemExporter

    class CSVPipeline(object):
        def __init__(self):
            self.files = {}

        @classmethod
        def from_crawler(cls, crawler):
            pipeline = cls()
            crawler.signals.connect(pipeline.spider_opened, signals.spider_opened)
            crawler.signals.connect(pipeline.spider_closed, signals.spider_closed)
            return pipeline

        def spider_opened(self, spider):
            file = open('%s_items.csv' % spider.name, 'w+b')
            self.files[spider] = file
            self.exporter = CsvItemExporter(file)
            # List of field names to export - the order is important.
            self.exporter.fields_to_export = ['names', 'stars', 'subjects', 'reviews']
            self.exporter.start_exporting()

        def spider_closed(self, spider):
            self.exporter.finish_exporting()
            file = self.files.pop(spider)
            file.close()

        def process_item(self, item, spider):
            self.exporter.export_item(item)
            return item
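For either pipeline to run at all, it has to be registered in the project's settings.py; the module path `myproject.pipelines` below is a placeholder for your own project name:

```python
# settings.py — register the pipeline ('myproject.pipelines' is a hypothetical module path)
ITEM_PIPELINES = {
    'myproject.pipelines.CSVPipeline': 300,  # lower number = runs earlier in the pipeline chain
}
```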

    The second approach:


    import csv

    class CSVPipeline(object):
        def __init__(self):
            # 'wb' is Python 2 style; on Python 3 open with mode 'w' and newline=''.
            self.csvwriter = csv.writer(open('items.csv', 'w', newline=''), delimiter=',')
            self.csvwriter.writerow(['names', 'stars', 'subjects', 'reviews'])

        def process_item(self, item, spider):
            rows = zip(item['names'], item['stars'], item['subjects'], item['reviews'])
            for row in rows:
                self.csvwriter.writerow(row)
            return item

    Neither approach worked; the file would not save. After some digging, it turned out the root cause was that the data format the spider produced did not match the format the file-writing code expected. After adjusting the format, saving succeeded.
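The mismatch described above typically happens when some item fields hold a single scalar value while the `zip`-based writer expects every field to be a list (or vice versa). A minimal sketch of normalizing both shapes before zipping, using the field names from the examples above (the helper names `to_list` and `item_to_rows` are my own, not Scrapy API):

```python
def to_list(value):
    # Wrap scalars in a list so every field has the same shape; pass lists/tuples through.
    return list(value) if isinstance(value, (list, tuple)) else [value]

def item_to_rows(item, fields=('names', 'stars', 'subjects', 'reviews')):
    # Normalize each field to a list, then zip the columns into CSV rows.
    # Note: zip stops at the shortest column, so ragged fields silently truncate.
    columns = [to_list(item.get(f, '')) for f in fields]
    return list(zip(*columns))

# A scalar 'stars' field and one-element list fields now zip cleanly into one row.
item = {'names': ['Alice'], 'stars': 5, 'subjects': ['math'], 'reviews': ['good']}
rows = item_to_rows(item)  # → [('Alice', 5, 'math', 'good')]
```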

    Opening the saved file directly in Excel shows garbled characters (mojibake).


    Open the file in another tool such as EditPlus and re-save it in a BOM encoding (UTF-8 with BOM).


    Open it again in Excel and the file displays correctly.
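The EditPlus round-trip can be skipped entirely: Python can write the BOM itself by opening the CSV with the `utf-8-sig` encoding, which Excel recognizes as UTF-8. The file name and sample rows here are just for illustration:

```python
import csv

# 'utf-8-sig' prepends a UTF-8 BOM (b'\xef\xbb\xbf') so Excel detects the encoding.
with open('items.csv', 'w', newline='', encoding='utf-8-sig') as f:
    writer = csv.writer(f)
    writer.writerow(['names', 'stars', 'subjects', 'reviews'])
    writer.writerow(['示例', '5', '测试', '很好'])  # non-ASCII data now opens cleanly in Excel
```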


    Please credit the original article when reposting: https://ju.6miu.com/read-900264.html
