I was originally using an 8GB SD card, but it felt far too small, so I bought a
SanDisk Extreme microSD U3 16GB
http://24h.pchome.com.tw/prod/DGAG0H-A9005QU7O?q=/S/DGAG0H
After dumping the old system to an image file and writing it to the new 16GB card, it reported only 8GB!
Copying it over with the dd command gave exactly the same result.
Thanks to a junior labmate who found the magic command:
sudo raspi-config
This brings up the screen shown below; choose Expand Filesystem and reboot, and the system will automatically grow to fit the card!
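For reference, Expand Filesystem essentially automates a partition grow plus a filesystem resize. A minimal manual sketch, assuming a typical Raspberry Pi layout where the card is /dev/mmcblk0 and the root filesystem is ext4 on partition 2:
sudo fdisk /dev/mmcblk0          # delete partition 2, recreate it with the same start sector but spanning the whole card
sudo reboot                      # let the kernel re-read the partition table
sudo resize2fs /dev/mmcblk0p2    # grow the ext4 filesystem to fill the enlarged partition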
Wednesday, July 15, 2015
Friday, July 3, 2015
[Linux] CentOS USB Handling, File Editing, and Common Commands
Find the USB drive:
fdisk -l
Mount the USB drive (the mount point is conventionally /mnt):
mount /dev/sdb1 /mnt
or, alternatively:
mount -t vfat /dev/sdb1 usb/
Unmount the USB drive:
umount /mnt
If umount refuses to release the device, try this instead:
sudo fuser -km /dev/sdb1
sudo umount /dev/sdb1
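If you would rather see what is actually holding the device before killing anything, fuser can list it first, and umount -l is a gentler fallback (a sketch, assuming the same /dev/sdb1 device):
sudo fuser -vm /dev/sdb1    # verbosely list the processes keeping the mount busy
sudo umount -l /dev/sdb1    # lazy unmount: detach now, finish cleanup once the device is free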
[root@master01 spark]# mkdir /mnt/usb
[root@master01 spark]# mount -v -t auto /dev/sdb1 /mnt/usb
mount: you didn't specify a filesystem type for /dev/sdb1
I will try type vfat
/dev/sdb1 on /mnt/usb type vfat (rw)
[root@master01 spark]# mount /dev/sdb1 /mnt/usb
mount: /dev/sdb1 already mounted or /mnt/usb busy
mount: according to mtab, /dev/sdb1 is already mounted on /mnt/usb
[root@master01 spark]# cd /mnt/usb/
[root@master01 usb]# ls
123.txt
[root@master01 usb]# cp 123.txt /opt/spark/bin/examples/jverne/
Quickly check disk usage:
df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/vg_master01-lv_root   50G  2.7G   44G   6% /
tmpfs                             16G     0   16G   0% /dev/shm
/dev/sda2                        477M   67M  385M  15% /boot
/dev/sda1                        200M  260K  200M   1% /boot/efi
/dev/mapper/vg_master01-lv_home  378G  1.4G  357G   1% /home
/dev/sdb1                        7.5G  5.2G  2.3G  70% /home/hduser/usb
/dev/sdc1                        7.5G  5.2G  2.3G  70% /home/hduser/usb
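Besides df -h, lsblk gives a quick tree view of block devices, which helps spot which /dev/sdX a freshly plugged-in USB drive became (assuming your util-linux is recent enough to ship lsblk):
lsblk -f    # list block devices with filesystem type, label, UUID, and mount point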
A summary of vi and vim commands:
http://www.vixual.net/blog/archives/234
A summary of compression and decompression commands:
http://www.centoscn.com/CentOS/help/2014/0613/3133.html
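For the everyday cases, a few tar invocations cover most of what the linked page describes (a quick sketch using a hypothetical mydir/):
tar -czvf archive.tar.gz mydir/    # create (-c) a gzip-compressed (-z) archive, verbose (-v), to the file (-f) archive.tar.gz
tar -xzvf archive.tar.gz           # extract (-x) it again
unzip file.zip                     # .zip archives need the separate unzip tool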
Force shutdown (note: on CentOS, -f actually just skips fsck on the next boot rather than forcing power-off):
shutdown -f
Shut down immediately:
shutdown -h now
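Two related variants that come in handy (standard shutdown options, not from the original post):
shutdown -r now    # reboot immediately instead of halting
shutdown -h +10    # halt in 10 minutes, broadcasting a warning to logged-in users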
Thursday, July 2, 2015
[Hadoop] Starting Hadoop with hadoop-daemon.sh
I recently needed to benchmark the cluster with the master excluded from computation.
After removing the master from the slaves list, however, the whole Hadoop cluster stopped working.
I asked around in the community and was advised to start each daemon with hadoop-daemon.sh start
(I had always just used start-all.sh).
Starting the daemons one by one takes a few more commands, but it at least gets around the problem above.
First, run the following on the master to start its daemons individually:
hadoop-daemon.sh start namenode
hadoop-daemon.sh start secondarynamenode
yarn-daemon.sh start nodemanager
yarn-daemon.sh start resourcemanager
A quick check:
[hduser@master01 ~]$ jps
22550 NameNode
22818 SecondaryNameNode
9958 Master
23420 Jps
23027 NodeManager
23187 ResourceManager
21980 RunJar
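Incidentally, the same scripts accept stop, so the daemons can be shut down individually again when the test is over:
hadoop-daemon.sh stop namenode
hadoop-daemon.sh stop secondarynamenode
yarn-daemon.sh stop nodemanager
yarn-daemon.sh stop resourcemanager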
Next, go to each slave you want online and start its datanode:
hadoop-daemon.sh start datanode
A quick check:
[hduser@slave02 ~]$ jps
11274 Jps
11212 DataNode
4361 Worker
[Note] On a Banana Pi or Raspberry Pi the paths differ slightly; you have to change into the hadoop directory before starting:
hduser@banana01 ~ $ hadoop-daemon.sh start datanode
-bash: hadoop-daemon.sh: command not found
hduser@banana01 ~ $ cd /opt/hadoop/
hduser@banana01 /opt/hadoop $ sbin/hadoop-daemon.sh start datanode
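To avoid having to cd in first every time, you can append Hadoop's sbin directory to PATH (a sketch assuming Hadoop lives in /opt/hadoop, as in the transcript above):
echo 'export PATH=$PATH:/opt/hadoop/sbin' >> ~/.bashrc
source ~/.bashrc
hadoop-daemon.sh start datanode    # now resolvable from any directory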
Everything looks fine now; time for one final check:
[hduser@master01 ~]$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
15/07/02 17:28:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 52844687360 (49.22 GB)
Present Capacity: 47795707904 (44.51 GB)
DFS Remaining: 47795683328 (44.51 GB)
DFS Used: 24576 (24 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Live datanodes (1): ← mission accomplished!
Name: 192.168.70.103:50010 (slave02)
Hostname: slave02
Decommission Status : Normal
Configured Capacity: 52844687360 (49.22 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 5048979456 (4.70 GB)
DFS Remaining: 47795683328 (44.51 GB)
DFS Used%: 0.00%
DFS Remaining%: 90.45%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Jul 02 17:28:05 CST 2015
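As the DEPRECATED warning at the top of the report says, the modern spelling of the same check is:
hdfs dfsadmin -report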
[References]
What is best way to start and stop hadoop ecosystem? http://stackoverflow.com/questions/17569423/what-is-best-way-to-start-and-stop-hadoop-ecosystem
hadoop启动之“hadoop-daemon.sh”详解 (a detailed look at hadoop-daemon.sh startup) http://blog.csdn.net/sinoyang/article/details/8021296