Quite a few people have asked me why they keep hitting a NullPointerException when writing to HBase, even though the program itself looks perfectly correct.
The reported error generally looks like this:
Error: application failed with exception java.lang.RuntimeException: java.lang.NullPointerException
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:209)
    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:288)
    at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:268)
    at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:140)
    at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:135)
    at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:802)
    at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:200)
    at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:85)
    at org.apache.hadoop.hbase.client.MetaScanner.allTableRegions(MetaScanner.java:310)
    at org.apache.hadoop.hbase.client.HTable.getRegionLocations(HTable.java:666)
    at org.apache.hadoop.hbase.util.RegionSizeCalculator.<init>(RegionSizeCalculator.java:79)
    at org.apache.hadoop.hbase.util.RegionSizeCalculator.<init>(RegionSizeCalculator.java:64)
    at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:160)
    at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:98)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:220)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:218)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:218)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1335)
    at org.apache.spark.rdd.RDD.count(RDD.scala:925)
    at HBaseTest$.main(HBaseTest.scala:27)
    at HBaseTest.main(HBaseTest.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:367)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:77)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.NullPointerException
    at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.getMetaReplicaNodes(ZooKeeperWatcher.java:269)
    at org.apache.hadoop.hbase.zookeeper.MetaRegionTracker.blockUntilAvailable(MetaRegionTracker.java:241)
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:62)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateMeta(ConnectionManager.java:1213)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1174)
    at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:294)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:130)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:55)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:201)
    ... 28 more
Today I am writing the fix up as a blog post; the solution is actually very simple. The problem occurs in this piece of code:
    hbaseContext.bulkPut[(Array[Byte], Array[(Array[Byte], Array[Byte], Array[Byte])])](
      rdd,
      tableName,
      (putRecord) => {
        val put = new Put(putRecord._1)
        putRecord._2.foreach((putValue) =>
          put.add(putValue._1, putValue._2, putValue._3))
        put
      },
      true)
The root cause is that, in the DataFrame read through the HiveContext, some rows carry a NULL in one of their cells. That null value ends up inside putRecord, so put.add is handed a null and the exception above is thrown.
The fix is therefore to test each cell before calling put.add.
Check the source row with isNullAt(index); if the cell is NULL, substitute a fixed placeholder string instead, and the problem is solved immediately. A sketch of this is shown below.
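To make that concrete, here is a minimal sketch of building the put-record RDD with the NULL guard in place. It assumes a DataFrame df whose first column is a string row key and whose remaining columns go into a column family named "cf"; df, the "NULL" placeholder, and the column-family name are all illustrative assumptions of mine, not from the original program.

    import org.apache.hadoop.hbase.util.Bytes

    val columns = df.columns  // capture the schema outside the closure, so df itself is not serialized
    val rdd = df.rdd.map { row =>
      val rowKey = Bytes.toBytes(row.getString(0))
      val cells = (1 until row.length).map { index =>
        // Guard: a NULL cell from the HiveContext would otherwise reach put.add as null.
        val value = if (row.isNullAt(index)) "NULL" else row.get(index).toString
        (Bytes.toBytes("cf"), Bytes.toBytes(columns(index)), Bytes.toBytes(value))
      }.toArray
      (rowKey, cells)
    }

An RDD built this way can be passed to the bulkPut call above unchanged, since every value handed to put.add is now guaranteed to be non-null.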