Caffe code reading: SyncedMemory and Blob


    1. SyncedMemory

    The SyncedMemory class handles data storage: it keeps one memory region on the CPU and one on the GPU, each of size size_.

    First, the initialization:

        SyncedMemory()
            : cpu_ptr_(NULL), gpu_ptr_(NULL), size_(0), head_(UNINITIALIZED),
              own_cpu_data_(false), device_context_(Caffe::GetDefaultDeviceContext()),
              cl_gpu_mem_(NULL) {}

    The important member variables:

          enum SyncedHead {
            UNINITIALIZED,
            HEAD_AT_CPU,
            HEAD_AT_GPU,
            SYNCED
          };  // The sync state: unallocated, data current on the CPU, data current on the GPU, or both sides in sync.

         private:
          void to_cpu();  // Checks head_: allocates CPU memory if there is none yet, or copies from the GPU (see the sketch after this block).
          void to_gpu();  // Checks head_: allocates GPU memory if there is none yet, or copies from the CPU.
          // The two data pointers.
          void* cpu_ptr_;
          void* gpu_ptr_;
          size_t size_;        // Data size in bytes.
          SyncedHead head_;    // Allocation/sync state.
          bool own_cpu_data_;  // Whether the CPU pointer owns its memory or references someone else's (I am not fully sure).
          DeviceContext *device_context_;  // The device.
        #ifdef USE_GREENTEA
          cl_mem cl_gpu_mem_;  // Used on the OpenCL path; gpu_ptr_ is then just the cast (void*) cl_gpu_mem_.
        #endif
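    To make the head_ state machine concrete, here is a minimal standalone sketch of the to_cpu() control flow. This is not Caffe's actual code: std::malloc/std::memcpy stand in for CaffeMallocHost and the CUDA/OpenCL device-to-host copy, and MiniSyncedMem is a made-up name.

        #include <cstddef>
        #include <cstdlib>
        #include <cstring>

        enum SyncedHead { UNINITIALIZED, HEAD_AT_CPU, HEAD_AT_GPU, SYNCED };

        // Mock of SyncedMemory, just enough to show how head_ drives syncing.
        struct MiniSyncedMem {
          void* cpu_ptr_ = nullptr;
          void* gpu_ptr_ = nullptr;  // stands in for device memory
          std::size_t size_ = 0;
          SyncedHead head_ = UNINITIALIZED;
          bool own_cpu_data_ = false;

          void to_cpu() {
            switch (head_) {
              case UNINITIALIZED:                // first touch: allocate and zero on the CPU
                cpu_ptr_ = std::malloc(size_);
                std::memset(cpu_ptr_, 0, size_);
                head_ = HEAD_AT_CPU;
                own_cpu_data_ = true;
                break;
              case HEAD_AT_GPU:                  // GPU copy is newer: bring it back
                if (cpu_ptr_ == nullptr) {       // lazily allocate the CPU side
                  cpu_ptr_ = std::malloc(size_);
                  own_cpu_data_ = true;
                }
                std::memcpy(cpu_ptr_, gpu_ptr_, size_);  // mock of the device-to-host copy
                head_ = SYNCED;                  // both copies now agree
                break;
              case HEAD_AT_CPU:
              case SYNCED:
                break;                           // CPU copy is already current
            }
          }
        };

    to_gpu() is the mirror image: allocate on the device if needed, copy host-to-device when head_ is HEAD_AT_CPU, and set head_ to SYNCED.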

    There is also an important distinction to note:

        const void* cpu_data();         // Returns the CPU pointer; note it is const, so the data cannot be modified through it. to_cpu() is called before returning.
        void set_cpu_data(void* data);  // Points the CPU pointer at data; the object no longer owns its CPU data. Note that size_ is not updated.
        const void* gpu_data();         // Same as cpu_data(), for the GPU side.
        void* mutable_cpu_data();       // Returns a mutable pointer through which the data can be modified; to_cpu() is called before returning, and head_ is set to the out-of-sync state (HEAD_AT_CPU).
        void* mutable_gpu_data();       // Same, for the GPU side.
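    A short usage sketch of why this distinction matters (written against stock Caffe's SyncedMemory(size_t) constructor; the OpenCL fork discussed here may additionally take a DeviceContext):

        #include "caffe/syncedmem.hpp"

        void example() {
          caffe::SyncedMemory mem(16 * sizeof(float));

          // Read-only access: to_cpu() runs, but a const pointer comes back,
          // so head_ can stay SYNCED on later reads and no copy is wasted.
          const float* r = static_cast<const float*>(mem.cpu_data());

          // Writable access: head_ becomes HEAD_AT_CPU, marking the GPU copy stale.
          float* w = static_cast<float*>(mem.mutable_cpu_data());
          w[0] = 3.14f;

          // This read re-synchronizes: the CPU buffer is copied to the device.
          const void* g = mem.gpu_data();
          (void)r; (void)g;
        }

    The rule of thumb: call the const accessors whenever you only read, so the class never schedules an unnecessary copy.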

    2. Blob

    This class can be understood as the parameter data of each layer (each layer keeps a vector of Blobs holding its weights, biases, and so on); it stores both the raw data and the gradients passed back.

    Looking at the member variables; one thing worth noting is that num_axes generally means the size of shape_.

         protected:
          shared_ptr<SyncedMemory> data_;        // The raw data.
          shared_ptr<SyncedMemory> diff_;        // The gradient passed back.
          shared_ptr<SyncedMemory> shape_data_;  // Shape information (the numbers in shape_ as a buffer).
          vector<int> shape_;  // The shape (or dimensions); most commonly 4 axes (num, channel, height, width), at most INT_MAX axes. What the extra axes are for, I am not sure.
          int count_;     // Number of Dtype elements, i.e. num*channel*height*width.
          int capacity_;  // In principle the same as count_; its exact purpose is unclear to me.
          DeviceContext *device_context_;  // The device.

    Initialization:
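    As a quick sanity check of how count_ relates to shape_, a standalone sketch (count_from_shape is a made-up helper, not a Blob method):

        #include <cassert>
        #include <cstddef>
        #include <vector>

        // count_ is just the product of the shape entries, so in the common
        // 4-axis case count_ == num * channel * height * width.
        int count_from_shape(const std::vector<int>& shape) {
          int count = 1;
          for (std::size_t i = 0; i < shape.size(); ++i) count *= shape[i];
          return count;
        }

        int main() {
          std::vector<int> shape = {64, 3, 224, 224};  // num, channel, height, width
          assert(count_from_shape(shape) == 64 * 3 * 224 * 224);
          return 0;
        }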

        Blob()
            : data_(), diff_(), count_(0), capacity_(0),
              device_context_(Caffe::GetDefaultDeviceContext()) {}

    Some representative functions:

        bool Blob<Dtype>::Reshape(const vector<int>& shape);
            // Does what the name suggests: mainly updates shape_, shape_data_ (its memory),
            // count_, capacity_, and the memory sizes of the two data buffers.
        inline int offset(const int n, const int c = 0, const int h = 0, const int w = 0);
            // Shows the data layout: ((n * channels() + c) * height() + h) * width() + w
            // (see the worked example after this block).
        void Update();  // data minus diff, done via the math library.
        void FromProto(const BlobProto& proto, bool reshape = true);
            // Presumably this is what gets called when reading a proto file (to be checked).

        // The remaining operations (asum, sumsq, scale, share, and so on) do what their
        // names say; pasted directly below, with their English comments:

        /// @brief Compute the sum of absolute values (L1 norm) of the data.
        Dtype asum_data() const;
        /// @brief Compute the sum of absolute values (L1 norm) of the diff.
        Dtype asum_diff() const;
        /// @brief Compute the sum of squares (L2 norm squared) of the data.
        Dtype sumsq_data() const;
        /// @brief Compute the sum of squares (L2 norm squared) of the diff.
        Dtype sumsq_diff() const;
        /// @brief Scale the blob data by a constant factor.
        void scale_data(Dtype scale_factor);
        /// @brief Scale the blob diff by a constant factor.
        void scale_diff(Dtype scale_factor);
        /// @brief Add diff from other blob   by alan
        void AddDiffFrom(const Blob& other);
        /// @brief Clear diff
        void ClearDiff();
        /**
         * @brief Set the data_ shared_ptr to point to the SyncedMemory holding the
         *        data_ of Blob other -- useful in Layers which simply perform a copy
         *        in their Forward pass.
         *
         * This deallocates the SyncedMemory holding this Blob's data_, as
         * shared_ptr calls its destructor when reset with the "=" operator.
         */
        void ShareData(const Blob& other);
        /**
         * @brief Set the diff_ shared_ptr to point to the SyncedMemory holding the
         *        diff_ of Blob other -- useful in Layers which simply perform a copy
         *        in their Forward pass.
         *
         * This deallocates the SyncedMemory holding this Blob's diff_, as
         * shared_ptr calls its destructor when reset with the "=" operator.
         */
        void ShareDiff(const Blob& other);
        bool ShapeEquals(const BlobProto& other);
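    A worked example of the offset() arithmetic, for a blob of shape (2, 3, 4, 5); the free function below just restates the formula outside the class:

        #include <cassert>

        // Row-major NCHW layout: ((n * C + c) * H + h) * W + w.
        int offset(int n, int c, int h, int w, int C, int H, int W) {
          return ((n * C + c) * H + h) * W + w;
        }

        int main() {
          const int N = 2, C = 3, H = 4, W = 5;  // count = N*C*H*W = 120
          assert(offset(0, 0, 0, 1, C, H, W) == 1);          // w varies fastest
          assert(offset(0, 1, 0, 0, C, H, W) == H * W);      // one channel = 20 elements
          assert(offset(1, 0, 0, 0, C, H, W) == C * H * W);  // one image = 60 elements
          assert(offset(1, 2, 3, 4, C, H, W) == N * C * H * W - 1);  // last element, 119
          return 0;
        }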

    To sum up: having written all this down, SyncedMemory takes care of storing and copying the data (between CPU and GPU), while Blob wraps the operations on that data (the operations deep learning needs), and its organization (data plus diff) mirrors the structure of the network.
