After installing Caffe and building the Python version of py-faster-rcnn from GitHub, you can train Faster R-CNN on your own data.

I. File modifications

1. In the py-faster-rcnn directory, take lib/datasets/pascal_voc.py as the starting point, modify the functions below one by one, and save the result as lib/datasets/hs.py (the class is renamed to hs). If you plan to add non-ASCII (e.g. Chinese) comments, put # encoding: utf-8 at the top of the file, otherwise Python will raise an error. The modifications are as follows:
1) Modify the __init__ function and rename the class:
class hs(imdb):
    def __init__(self, image_set, devkit_path=None):  # modified
        imdb.__init__(self, image_set)
        self._image_set = image_set
        self._devkit_path = devkit_path                 # datasets path
        self._data_path = os.path.join(self._devkit_path, image_set)   # image folder path
        self._classes = ('__background__',              # always index 0
                         'jyz', 'fzc', 'qnq')           # three target classes
        self._class_to_ind = dict(zip(self.classes, xrange(self.num_classes)))
        # builds the dict {'__background__': 0, 'jyz': 1, 'fzc': 2, 'qnq': 3}
        self._image_ext = '.jpg'
        self._image_index = self._load_image_set_index('ImageList.txt')
        # Default to roidb handler
        self._roidb_handler = self.selective_search_roidb
        self._salt = str(uuid.uuid4())
        self._comp_id = 'comp4'

        # PASCAL specific config options
        self.config = {'cleanup'     : True,
                       'use_salt'    : True,
                       'use_diff'    : False,
                       'matlab_eval' : False,
                       'rpn_file'    : None,
                       'min_size'    : 16}              # discard boxes smaller than 16 pixels

        assert os.path.exists(self._devkit_path), \
                'VOCdevkit path does not exist: {}'.format(self._devkit_path)
        assert os.path.exists(self._data_path), \
                'Path does not exist: {}'.format(self._data_path)
2) Modify the image_path_from_index function:

    def image_path_from_index(self, index):  # modified
        """
        Construct an image path from the image's "index" identifier.
        """
        image_path = os.path.join(self._data_path, index + '.jpg')
        assert os.path.exists(image_path), \
                'Path does not exist: {}'.format(image_path)
        return image_path
3) Modify the _load_image_set_index function:

    def _load_image_set_index(self, imagelist):  # modified
        """
        Load the indexes listed in this dataset's image set file.
        """
        # Example path to image set file: self._devkit_path + /ImageList.txt
        image_set_file = os.path.join(self._devkit_path, imagelist)
        assert os.path.exists(image_set_file), \
                'Path does not exist: {}'.format(image_set_file)
        with open(image_set_file) as f:
            image_index = [x.strip() for x in f.readlines()]
        return image_index
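As the functions above imply, the devkit path is expected to contain the image folder hs/ with the .jpg files, the Annotations/ folder with the XML labels (used in step 4) below), and an ImageList.txt listing one image basename per line, without the extension. A small helper sketch for generating ImageList.txt (the devkit path is the one used throughout this example):

import os

# Build ImageList.txt (one basename per line, no extension) from the image folder
# expected by the hs class above. The devkit path matches this example's setup.
devkit = '/home/panyiming/py-faster-rcnn/lib/datasets'
image_dir = os.path.join(devkit, 'hs')        # self._data_path when image_set is 'hs'

names = sorted(os.path.splitext(f)[0]
               for f in os.listdir(image_dir) if f.endswith('.jpg'))

with open(os.path.join(devkit, 'ImageList.txt'), 'w') as f:
    f.write('\n'.join(names) + '\n')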
4) Modify _load_pascal_annotation(self, index):

    def _load_pascal_annotation(self, index):  # modified
        """
        Load image and bounding boxes info from an XML file in the
        PASCAL VOC format.
        """
        filename = os.path.join(self._devkit_path, 'Annotations', index + '.xml')
        tree = ET.parse(filename)
        objs = tree.findall('object')
        if not self.config['use_diff']:
            # Exclude the samples labeled as difficult
            non_diff_objs = [
                obj for obj in objs if int(obj.find('difficult').text) == 0]
            # if len(non_diff_objs) != len(objs):
            #     print 'Removed {} difficult objects'.format(
            #         len(objs) - len(non_diff_objs))
            objs = non_diff_objs
        num_objs = len(objs)

        boxes = np.zeros((num_objs, 4), dtype=np.uint16)
        gt_classes = np.zeros((num_objs), dtype=np.int32)
        overlaps = np.zeros((num_objs, self.num_classes), dtype=np.float32)
        # "Seg" area for pascal is just the box area
        seg_areas = np.zeros((num_objs), dtype=np.float32)

        # Load object bounding boxes into a data frame.
        for ix, obj in enumerate(objs):
            bbox = obj.find('bndbox')
            x1 = float(bbox.find('xmin').text)
            y1 = float(bbox.find('ymin').text)
            x2 = float(bbox.find('xmax').text)
            y2 = float(bbox.find('ymax').text)
            cls = self._class_to_ind[obj.find('name').text.lower().strip()]
            boxes[ix, :] = [x1, y1, x2, y2]
            gt_classes[ix] = cls
            overlaps[ix, cls] = 1.0
            seg_areas[ix] = (x2 - x1 + 1) * (y2 - y1 + 1)

        overlaps = scipy.sparse.csr_matrix(overlaps)

        return {'boxes' : boxes,
                'gt_classes': gt_classes,
                'gt_overlaps' : overlaps,
                'flipped' : False,
                'seg_areas' : seg_areas}
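_load_pascal_annotation looks up the lower-cased <name> of every object in _class_to_ind, so a label that is misspelled or missing from the class tuple will raise a KeyError during training; it also expects every object to carry a <difficult> tag. A small sanity-check sketch over the Annotations folder (paths and class tuple as in this example) catches such problems early:

import os
import xml.etree.ElementTree as ET

# Check that every annotated object uses a known class name and a valid box.
devkit = '/home/panyiming/py-faster-rcnn/lib/datasets'
classes = ('__background__', 'jyz', 'fzc', 'qnq')
ann_dir = os.path.join(devkit, 'Annotations')

for xml_file in sorted(os.listdir(ann_dir)):
    if not xml_file.endswith('.xml'):
        continue
    tree = ET.parse(os.path.join(ann_dir, xml_file))
    for obj in tree.findall('object'):
        name = obj.find('name').text.lower().strip()
        if name not in classes:
            print 'unknown class {} in {}'.format(name, xml_file)
        if obj.find('difficult') is None:
            print 'missing <difficult> tag in {}'.format(xml_file)
        b = obj.find('bndbox')
        x1, y1 = float(b.find('xmin').text), float(b.find('ymin').text)
        x2, y2 = float(b.find('xmax').text), float(b.find('ymax').text)
        if x1 < 0 or y1 < 0 or x2 <= x1 or y2 <= y1:
            print 'bad box {} in {}'.format((x1, y1, x2, y2), xml_file)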
5) Adjust the paths in the __main__ block at the bottom of the file:

if __name__ == '__main__':
    from datasets.hs import hs
    d = hs('hs', '/home/panyiming/py-faster-rcnn/lib/datasets')
    res = d.roidb
    from IPython import embed; embed()
2. In the py-faster-rcnn directory, open lib/datasets/factory.py and modify it. The modified file looks like this:

# --------------------------------------------------------
# Fast R-CNN
# Copyright (c) 2015 Microsoft
# Licensed under The MIT License [see LICENSE for details]
# Written by Ross Girshick
# --------------------------------------------------------

"""Factory method for easily getting imdbs by name."""

__sets = {}

from datasets.hs import hs
import numpy as np

# # Set up voc_<year>_<split> using selective search "fast" mode
# for year in ['2007', '2012']:
#     for split in ['train', 'val', 'trainval', 'test']:
#         name = 'voc_{}_{}'.format(year, split)
#         __sets[name] = (lambda split=split, year=year: pascal_voc(split, year))
#
# # Set up coco_2014_<split>
# for year in ['2014']:
#     for split in ['train', 'val', 'minival', 'valminusminival']:
#         name = 'coco_{}_{}'.format(year, split)
#         __sets[name] = (lambda split=split, year=year: coco(split, year))
#
# # Set up coco_2015_<split>
# for year in ['2015']:
#     for split in ['test', 'test-dev']:
#         name = 'coco_{}_{}'.format(year, split)
#         __sets[name] = (lambda split=split, year=year: coco(split, year))

name = 'hs'
devkit = '/home/panyiming/py-faster-rcnn/lib/datasets'
__sets['hs'] = (lambda name=name, devkit=devkit: hs(name, devkit))

def get_imdb(name):
    """Get an imdb (image database) by name."""
    if not __sets.has_key(name):
        raise KeyError('Unknown dataset: {}'.format(name))
    return __sets[name]()

def list_imdbs():
    """List all registered imdbs."""
    return __sets.keys()
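With the dataset registered, train_net.py can find it by name through get_imdb. Before training it is worth a quick interactive check that the registration and the annotation loading work; a minimal sketch (Python 2, run from $FRCN_ROOT so the relative lib/ path resolves):

import sys
sys.path.insert(0, 'lib')              # make the datasets package importable

from datasets.factory import list_imdbs, get_imdb

print list_imdbs()                     # should contain 'hs'

imdb = get_imdb('hs')                  # same lookup train_net.py does for --imdb hs
print imdb.name, imdb.num_classes      # 'hs', 4 (3 target classes + background)
print imdb.num_images                  # number of entries in ImageList.txt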
II. Model selection, training, and testing

1. Pre-trained models

The build instructions for py-faster-rcnn on GitHub include the following step:

cd $FRCN_ROOT
./data/scripts/fetch_faster_rcnn_models.sh

When it finishes, the archive faster_rcnn_models.tgz appears under the data directory; unpacking it yields the faster_rcnn_models folder, which contains three networks trained by the authors with Faster R-CNN, corresponding to small, medium, and large models. You can try them out to get a feel for the detection quality. All three were trained for 80,000 iterations on the PASCAL VOC dataset.
The general-purpose models pre-trained on ImageNet can be downloaded with:
cd $FRCN_ROOT
./data/scripts/fetch_imagenet_models.sh

When it finishes, the archive imagenet_models.tgz appears under the data directory; unpacking it yields the imagenet_models folder, which contains the models pre-trained on ImageNet. They are used here to initialize the network weights.
2. Modify the model configuration files

The model files live under models/, in the folder of the corresponding network; here I use the medium network's configuration as an example. My detection targets fall into 3 classes, so the network has 4 classes in total: the background plus the 3 target classes. Open the network's model folder and edit train.prototxt; there are four places to change:
1) In the input-data layer, change num_classes from 21 (20 classes + background) to 4 (3 target classes + background).
2) In the cls_score layer, change num_output from 21 to 4.
3) Under RoI Proposal there is a layer named 'roi-data'; change its num_classes to 4 as well.
4) In the bbox_pred layer, change num_output from 84 to 16, i.e. 4 times the number of classes.

If you also want to change the learning rate, step size, gamma value, or the name of the saved model, edit solver.prototxt in the same directory.
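The numbers above all follow from the size of the class list, so they are easy to recompute before editing the prototxt. A quick sketch (the class tuple mirrors the one defined in hs.py):

# Derive the prototxt values from the class list: background + 3 target classes.
classes = ('__background__', 'jyz', 'fzc', 'qnq')

num_classes = len(classes)               # 4  -> num_classes of the data and 'roi-data'
                                         #       layers, and num_output of cls_score
bbox_pred_outputs = 4 * num_classes      # 16 -> num_output of bbox_pred (4 coords per class)

print num_classes, bbox_pred_outputs     # 4 16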
3. Launch Faster R-CNN training
python ./tools/train_net.py --gpu 1 \
    --solver models/hs/faster_rcnn_end2end/solver.prototxt \
    --weights data/imagenet_models/VGG_CNN_M_1024.v2.caffemodel \
    --imdb hs \
    --iters 80000 \
    --cfg experiments/cfgs/faster_rcnn_end2end.yml

Explanation of the command:
1) train_net.py is the training entry script; everything after it is a command-line argument.
2) --gpu is the index of the GPU to use. On NVIDIA Tesla cards you can run nvidia-smi in a terminal to check the current load and pick a suitable device.
3) --solver is the solver configuration file; the path to train.prototxt is specified inside it.
4) --weights is the file used to initialize the weights. Here we use a model pre-trained on ImageNet; for the medium network that is VGG_CNN_M_1024.v2.caffemodel. This argument can be omitted, in which case the weights are initialized automatically.
5) --imdb is the name of the training dataset; it must be registered in __sets in factory.py. In the file above we registered __sets['hs'], so train_net.py asks factory.py to construct the hs class and read the data.
6) --iters is the number of training iterations.
7) --cfg is the experiment configuration file; here it selects the end-to-end Faster R-CNN training scheme.
4. Run detection with the trained network

For detection you can start from tools/demo.py, adapt it to your model, and write the detected coordinates to a txt file.
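Below is a minimal sketch of such a script, modeled on tools/demo.py. The prototxt, caffemodel, image, and output paths are placeholders for this example (the snapshot name depends on your solver settings), and the class tuple matches the one defined in hs.py; like demo.py, it assumes it is run from the tools/ directory so that _init_paths sets up the lib/ imports. Note that the cls_score and bbox_pred layers of the test prototxt need the same num_output values as train.prototxt.

#!/usr/bin/env python
# Sketch: run the trained detector on one image and write the kept boxes to a txt file.
import _init_paths                      # adds lib/ to sys.path, as in tools/demo.py
from fast_rcnn.config import cfg
from fast_rcnn.test import im_detect
from fast_rcnn.nms_wrapper import nms
import numpy as np
import caffe, cv2

CLASSES = ('__background__', 'jyz', 'fzc', 'qnq')
CONF_THRESH, NMS_THRESH = 0.8, 0.3

cfg.TEST.HAS_RPN = True                 # use RPN-generated proposals
caffe.set_mode_gpu()
caffe.set_device(0)

# Placeholder paths: point these at your test prototxt, trained weights and image.
net = caffe.Net('models/hs/faster_rcnn_end2end/test.prototxt',
                'output/faster_rcnn_end2end/hs/vgg_cnn_m_1024_faster_rcnn_iter_80000.caffemodel',
                caffe.TEST)

im = cv2.imread('data/demo/000001.jpg')
scores, boxes = im_detect(net, im)      # scores: (R, 4), boxes: (R, 16) for 4 classes

with open('detections.txt', 'w') as f:
    for cls_ind, cls in enumerate(CLASSES[1:], start=1):    # skip background
        cls_boxes = boxes[:, 4 * cls_ind:4 * (cls_ind + 1)]
        cls_scores = scores[:, cls_ind]
        dets = np.hstack((cls_boxes, cls_scores[:, np.newaxis])).astype(np.float32)
        dets = dets[nms(dets, NMS_THRESH), :]                # per-class NMS, as in demo.py
        for x1, y1, x2, y2, score in dets[dets[:, -1] >= CONF_THRESH]:
            f.write('{} {:.3f} {:.1f} {:.1f} {:.1f} {:.1f}\n'.format(
                cls, score, x1, y1, x2, y2))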