Friday, May 6, 2016

[Spark1.6.0] ERROR SparkContext: Error initializing SparkContext


[hadoop@master01 spark-1.6.0]$ spark-shell
16/05/06 16:46:43 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.0
      /_/

Using Scala version 2.10.5 (OpenJDK 64-Bit Server VM, Java 1.8.0_91)
Type in expressions to have them evaluated.
Type :help for more information.
16/05/06 16:46:54 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: Log directory hdfs:///user/spark/eventlog does not exist.

    at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:101)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:549)
    at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
    at $line3.$read$$iwC$$iwC.<init>(<console>:15)
    at $line3.$read$$iwC.<init>(<console>:24)
    at $line3.$read.<init>(<console>:26)
    at $line3.$read$.<init>(<console>:30)
    at $line3.$read$.<clinit>(<console>)
    at $line3.$eval$.<init>(<console>:7)
    at $line3.$eval$.<clinit>(<console>)



Solution: create the missing event log directory on HDFS:
hdfs dfs -mkdir -p /user/spark/eventlog
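
The same event-logging settings can also be applied programmatically when building a SparkConf for a standalone application. This is only a minimal sketch (the application name is an arbitrary example); the property names match the spark-defaults.conf entries used later in this post, and the directory must still exist on HDFS:

import org.apache.spark.{SparkConf, SparkContext}

// SparkContext fails at start-up if spark.eventLog.dir points to a
// directory that does not exist, which is exactly the error above.
val conf = new SparkConf()
  .setAppName("EventLogExample")   // arbitrary example name
  .set("spark.eventLog.enabled", "true")
  .set("spark.eventLog.dir", "hdfs:///user/spark/eventlog")
val sc = new SparkContext(conf)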

[Spark1.6.0] Install Scala & Spark


Download and install Scala 2.11.8


Set the Scala environment variables

---------------------------------------------------------------------------------------
sudo gedit ~/.bashrc

#scala
export SCALA_HOME=/opt/scala-2.11.8
export PATH=$PATH:$SCALA_HOME/bin

source ~/.bashrc




---------------------------------------------------------------------------------------
test
[hadoop@master01 lib]$ scala
Welcome to Scala 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_91).
Type in expressions for evaluation. Or try :help.

scala> 1+1
res0: Int = 2
---------------------------------------------------------------------------------------
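
As an extra sanity check, you can also ask the REPL which Scala build is on the PATH (for this install the expected answer is 2.11.8):

scala> util.Properties.versionString
res1: String = version 2.11.8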



Download and install Spark 1.6.0 (pre-built for Hadoop 2.6)
Set the Spark environment variables
---------------------------------------------------------------------------------------
sudo gedit ~/.bashrc

#Spark
export SPARK_HOME=/opt/spark-1.6.0
export PATH=$PATH:$SPARK_HOME/bin

source ~/.bashrc
---------------------------------------------------------------------------------------


 
Create spark-env.sh from the template in $SPARK_HOME/conf and edit it:
cp spark-env.sh.template spark-env.sh
sudo gedit spark-env.sh

export SCALA_HOME=/opt/scala-2.11.8
export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")
export SPARK_MASTER_IP=master01
export SPARK_WORKER_MEMORY=1024m


These properties go in conf/spark-defaults.conf (create it from spark-defaults.conf.template):

spark.master             spark://master01:7077
spark.eventLog.enabled   true
spark.eventLog.dir       hdfs:///user/spark/eventlog
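
Once spark-shell comes up with these settings, you can confirm they were picked up from inside the REPL (a quick check; it should return the values configured above):

scala> sc.getConf.getOption("spark.eventLog.dir")
res0: Option[String] = Some(hdfs:///user/spark/eventlog)

scala> sc.getConf.getOption("spark.master")
res1: Option[String] = Some(spark://master01:7077)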



Check whether any Spark processes are running (at this point only the grep itself shows up, i.e. no Spark daemons have been started yet):
ps aux | grep spark
hadoop     969  0.0  0.0 112644   952 pts/0    R+   21:21   0:00 grep --color=auto spark



---------------------------------------------------------------------------------------
Monte Carlo estimate of Pi (the classic Spark example). NUM_SAMPLES must be defined, and inside spark-shell the existing sc should be reused instead of creating a new SparkContext:

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

// Number of random points to sample (any reasonably large value works).
val NUM_SAMPLES = 100000

// In spark-shell, skip this line and use the sc that already exists.
val sc = new SparkContext(new SparkConf().setAppName("Spark Count"))

// Count how many random points fall inside the unit circle.
val count = sc.parallelize(1 to NUM_SAMPLES).map { i =>
  val x = Math.random()
  val y = Math.random()
  if (x * x + y * y < 1) 1 else 0
}.reduce(_ + _)

println("Pi is roughly " + 4.0 * count / NUM_SAMPLES)

---------------------------------------------------------------------------------------
Word count example

scala> val textFile = sc.textFile("hdfs://master01:9000/opt/hadoop-2.7.1/input/text34mb.txt")
textFile: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[9] at textFile at <console>:27

scala> val wordCounts = textFile.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey((a, b) => a + b)
wordCounts: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[12] at reduceByKey at <console>:29

scala> wordCounts.collect()
res0: Array[(String, Int)] = Array(('lopin',1), (Ah!,99), (houres,,36), (Committee,),1), (bone,40), (fleein',1), (�Head.�,1), (delinquents.,2), (Malwa,1), (routing*,2), ('farthest,1), (Dollours,2), (Feldkirch,,3), ((1754-1831),,1), (nothin,1), (untruthfulness.,1), (signal.,6), (langwidge,3), (drad;*,1), (meets,,3), (Lost.,3), (Papists,,6), (accompts,,2), (Goodbye!,1), (Galliard,4), ((1563-1631),1), (Anthonio,,40), (God-forsaken,4), (rightly-,1), (fowl,30), (coat;,3), (husky,5), (Carpenter,4), (precious*,1), (ampullaria,1), (afterward,64), (armes*,,2), (entend*,1), (provisioned,,1), (wicked?,3), (Francaise,1), (Herefords,2), (Souls.",1), (/Loci,2), (speak:,9), (half-crowns,1), (Thunder.,18), (Halkar;,2), (HISTORIES.,1), (feats;,1), (robin,1), (fixed-I,1), (undeterred,2), (fastenings,4), ...
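
The raw collect() output is hard to read. A small follow-up sketch (the HDFS output path is just an example) pulls out the ten most frequent words and writes the full result back to HDFS:

scala> wordCounts.map(_.swap).sortByKey(ascending = false).take(10).foreach(println)

scala> wordCounts.saveAsTextFile("hdfs://master01:9000/opt/hadoop-2.7.1/output/spark-wordcount")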

 

[Hadoop2.7.1] Can't run Datanode


Error: java.io.IOException: Incompatible clusterIDs

This usually happens when the NameNode has been reformatted while the DataNode still holds the old clusterID in its storage directory.

Solution (note: this wipes all data stored in HDFS):
\rm -r /opt/hadoop-2.7.1/tmp/
hadoop namenode -format

After that, you can start the daemons again.



Reference
http://blog.chinaunix.net/uid-20682147-id-4214553.html

Thursday, May 5, 2016

[Hadoop2.7.1] Wordcount


hadoop fs -mkdir -p /opt/hadoop-2.7.1/input

hadoop fs -copyFromLocal /opt/hadoop-2.7.1/text/text34mb.txt /opt/hadoop-2.7.1/input

hadoop jar /opt/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /opt/hadoop-2.7.1/input/text34mb.txt /opt/hadoop-2.7.1/output




[hadoop@master01 lib]$ hadoop jar /opt/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /opt/hadoop-2.7.1/input/text34mb.txt /opt/hadoop-2.7.1/output
16/05/05 16:30:43 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
16/05/05 16:30:43 INFO input.FileInputFormat: Total input paths to process : 1
16/05/05 16:30:44 INFO mapreduce.JobSubmitter: number of splits:1
16/05/05 16:30:44 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1462429858916_0001
16/05/05 16:30:45 INFO impl.YarnClientImpl: Submitted application application_1462429858916_0001
16/05/05 16:30:45 INFO mapreduce.Job: The url to track the job: http://master01:8088/proxy/application_1462429858916_0001/
16/05/05 16:30:45 INFO mapreduce.Job: Running job: job_1462429858916_0001
16/05/05 16:30:53 INFO mapreduce.Job: Job job_1462429858916_0001 running in uber mode : false
16/05/05 16:30:53 INFO mapreduce.Job:  map 0% reduce 0%
16/05/05 16:31:04 INFO mapreduce.Job:  map 42% reduce 0%
16/05/05 16:31:09 INFO mapreduce.Job:  map 67% reduce 0%
16/05/05 16:31:11 INFO mapreduce.Job:  map 100% reduce 0%
16/05/05 16:31:19 INFO mapreduce.Job:  map 100% reduce 100%
16/05/05 16:31:19 INFO mapreduce.Job: Job job_1462429858916_0001 completed successfully
16/05/05 16:31:19 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=9917184
        FILE: Number of bytes written=15106616
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=35926297
        HDFS: Number of bytes written=3103134
        HDFS: Number of read operations=6
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters
        Launched map tasks=1
        Launched reduce tasks=1
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=15003
        Total time spent by all reduces in occupied slots (ms)=4504
        Total time spent by all map tasks (ms)=15003
        Total time spent by all reduce tasks (ms)=4504
        Total vcore-seconds taken by all map tasks=15003
        Total vcore-seconds taken by all reduce tasks=4504
        Total megabyte-seconds taken by all map tasks=15363072
        Total megabyte-seconds taken by all reduce tasks=4612096
    Map-Reduce Framework
        Map input records=788346
        Map output records=6185757
        Map output bytes=59289268
        Map output materialized bytes=4958589
        Input split bytes=121
        Combine input records=6185757
        Combine output records=328274
        Reduce input groups=272380
        Reduce shuffle bytes=4958589
        Reduce input records=328274
        Reduce output records=272380
        Spilled Records=984822
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=209
        CPU time spent (ms)=11810
        Physical memory (bytes) snapshot=327483392
        Virtual memory (bytes) snapshot=4164567040
        Total committed heap usage (bytes)=219676672
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=35926176
    File Output Format Counters
        Bytes Written=3103134



---------------------------------------------------------------------------------------

Delete the output directory (needed before re-running the job)
hdfs dfs -rm -r /opt/hadoop-2.7.1/output

---------------------------------------------------------------------------------------
[hadoop@master01 lib]$ ls /opt/hadoop-2.7.1/share/hadoop/mapreduce/
hadoop-mapreduce-client-app-2.7.1.jar
hadoop-mapreduce-client-common-2.7.1.jar
hadoop-mapreduce-client-core-2.7.1.jar
hadoop-mapreduce-client-hs-2.7.1.jar
hadoop-mapreduce-client-hs-plugins-2.7.1.jar
hadoop-mapreduce-client-jobclient-2.7.1.jar
hadoop-mapreduce-client-jobclient-2.7.1-tests.jar
hadoop-mapreduce-client-shuffle-2.7.1.jar
hadoop-mapreduce-examples-2.7.1.jar
lib
lib-examples
sources
---------------------------------------------------------------------------------------


Reference
http://kurthung1224.pixnet.net/blog/post/175503049
https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html#Example:_WordCount_v1.0

[CentOS] How to Share Your Computer’s Files With a Virtual Machine


Mount the VirtualBox shared folder (C_DRIVE is the shared folder name configured in the VM settings):

sudo mkdir /c
sudo mount -t vboxsf C_DRIVE /c

Reference
http://www.howtogeek.com/189974/how-to-share-your-computers-files-with-a-virtual-machine/

Wednesday, July 15, 2015

[Raspbian] Moving the system to an SD card of a different size

I was originally using an 8GB SD card, but it felt far too small, so I bought a

SanDisk Extreme microSD U3 16GB
http://24h.pchome.com.tw/prod/DGAG0H-A9005QU7O?q=/S/DGAG0H

After dumping the original system to an image file and writing it to the new 16GB card, only 8GB showed up!
Moving it over with dd gave the same result.

Thanks to a junior labmate who found the magic command:
sudo raspi-config

This opens the raspi-config menu; choose Expand Filesystem and reboot, and the filesystem will automatically grow to fill the card!



Friday, July 3, 2015

[Linux] CentOS USB handling, file editing, and frequently used commands

Find the USB device:
fdisk -l

Mount the USB drive (the mount point is usually /mnt):
mount /dev/sdb1 /mnt

Or, alternatively:
mount -t vfat /dev/sdb1 usb/



Unmount the USB drive:
umount /mnt

If umount cannot detach the device cleanly, try killing the processes that are using it first:
sudo fuser -km /dev/sdb1
sudo umount /dev/sdb1



[root@master01 spark]# mkdir /mnt/usb
[root@master01 spark]# mount -v -t auto /dev/sdb1 /mnt/usb
mount: you didn't specify a filesystem type for /dev/sdb1
       I will try type vfat
/dev/sdb1 on /mnt/usb type vfat (rw)
[root@master01 spark]# mount /dev/sdb1 /mnt/usb
mount: /dev/sdb1 already mounted or /mnt/usb busy
mount: according to mtab, /dev/sdb1 is already mounted on /mnt/usb
[root@master01 spark]# cd /mnt/usb/
[root@master01 usb]# ls
123.txt
[root@master01 usb]# cp 123.txt /mnt/usb /opt/spark/bin/examples/jverne/



Quick check of disk usage:
df -h

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_master01-lv_root
                       50G  2.7G   44G   6% /
tmpfs                  16G     0   16G   0% /dev/shm
/dev/sda2             477M   67M  385M  15% /boot
/dev/sda1             200M  260K  200M   1% /boot/efi
/dev/mapper/vg_master01-lv_home
                      378G  1.4G  357G   1% /home
/dev/sdb1             7.5G  5.2G  2.3G  70% /home/hduser/usb
/dev/sdc1             7.5G  5.2G  2.3G  70% /home/hduser/usb




vi and vim command reference
http://www.vixual.net/blog/archives/234

Compression and decompression command reference
http://www.centoscn.com/CentOS/help/2014/0613/3133.html

Force shutdown (note that on CentOS, shutdown -f actually means "skip fsck on the next boot"):
shutdown -f

Shut down immediately:
shutdown -h now