
Here's what happened.

Pairing S3 with EC2 is already a classic combination on AWS,

but I'm still fairly new to AWS.

In another article I had finished setting up the environment and mounting S3,

and after configuring fstab so it would remount automatically on reboot,

the instance status check failed...

and I couldn't SSH into the instance anymore....

The four ways I know of to run something automatically at boot:

1. /etc/fstab -> its entries may run before system services have fully started

2. /etc/rc.d/rc.local -> the first thing executed once the boot sequence completes

3. crontab's @reboot schedule

4. AWS user data + cloud-init

 

Fixing the failure caused by method 1...

The system log showed:

[FAILED] Failed to mount /mnt/mypay.webserver.bucket.
See 'systemctl status mnt-mypay.webserver.bucket.mount' for details.
[DEPEND] Dependency failed for Local File Systems.
[DEPEND] Dependency failed for Relabel all filesystems, if necessary.
[DEPEND] Dependency failed for Mark the need to relabel after reboot.
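(Since SSH was already dead, this boot log has to be read from the instance's console output: in the EC2 console via the system log view, or with the CLI. A sketch with a placeholder instance ID:)

aws ec2 get-console-output --instance-id i-xxxxxxxx --output text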

The problem is likely that the kernel or the network was not fully up yet when the mount was attempted.

That's because /etc/fstab entries are not guaranteed to be processed only after all services have started.
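In hindsight, the entry could have been written so that a failed mount doesn't block the whole boot. A sketch of a safer fstab line, reusing the bucket and paths from the earlier setup (_netdev waits for the network before mounting; nofail lets boot continue even if the mount fails):

s3fs#bucket_name /path/mount_folder fuse _netdev,nofail,allow_other,use_cache=/path/cache_folder 0 0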

But the damage was already done,

so I went googling and found someone on the AWS forum who hit exactly the same problem:

Can't SSH into EC2 instance after adding s3fs mount in fstab and restarting

The only way out is to stop the instance, detach its volume,

and attach the volume to another working instance.
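(The same steps can also be done with the AWS CLI instead of the console; a sketch with placeholder IDs, where vol-xxxxxxxx is the broken instance's root volume and i-yyyyyyyy is the rescue instance:)

aws ec2 stop-instances --instance-ids i-xxxxxxxx
aws ec2 detach-volume --volume-id vol-xxxxxxxx
aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-yyyyyyyy --device /dev/sdf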

 

Now, over on the healthy instance:

the volume is attached, but it still has to be mounted before it can be used. (I've always used Windows, so this takes some getting used to.)

First, check with lsblk:

xvda    202:0    0   8G  0 disk
└─xvda1 202:1    0   8G  0 part /
xvdf    202:80   0   8G  0 disk
└─xvdf1 202:81   0   8G  0 part

xvdf1 is the newly attached volume.
Since this volume was in use by another EC2 instance, it already has a file system.

sudo file -s /dev/xvdf1
/dev/xvdf1: SGI XFS filesystem data (blksz 4096, inosz 256, v2 dirs)
No need to format it; just mount it directly:

sudo mount /dev/xvdf1 /tmp

But that failed....
mount: wrong fs type, bad option, bad superblock on /dev/xvdf1,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.

Let's check:

parted -l
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvda: 8590MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  8590MB  8589MB  primary  xfs          boot


Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdf: 8590MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  8590MB  8589MB  primary  xfs          boot

dmesg | tail
[    7.216174] ISOFS: Unable to identify CD-ROM format.
[   10.840435] ISOFS: Unable to identify CD-ROM format.
[   10.890416] ISOFS: Unable to identify CD-ROM format.
[   11.371911] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
[   53.659838] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
[  400.033376] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
[ 1670.060581] XFS (xvdf1):
Filesystem has duplicate UUID ef6ba050-6cdc-416a-9380-c14304d0d206 - can't mount
[ 3999.196798] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
[ 4069.123734] XFS (xvdf1):
Filesystem has duplicate UUID ef6ba050-6cdc-416a-9380-c14304d0d206 - can't mount
[ 4231.286556]  xvdf: xvdf1

So either mount it while ignoring the UUID, or generate a new UUID first:

sudo mount -o nouuid /dev/xvdf1 /tmp

OR

sudo xfs_admin -U generate /dev/xvdf1
sudo mount /dev/xvdf1 /tmp

Next, edit /tmp/etc/fstab and remove the s3fs mount entry.
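(One quick way, assuming the s3fs entry is the only line in that fstab mentioning s3fs, is to comment it out in place:)

sudo sed -i '/s3fs/s/^/#/' /tmp/etc/fstab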

Unmount -> shut down -> attach back to the original instance -> restart the instance.
(When attaching, be careful to change the device name => /dev/vda1)
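(Again doable from the CLI; a sketch with placeholder IDs:)

sudo umount /tmp
aws ec2 detach-volume --volume-id vol-xxxxxxxx
aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx --device /dev/vda1  # must match the original root device name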

2. The rc.local approach

Add the mount to /etc/rc.local. Since rc.local is a shell script, this takes the s3fs command form rather than fstab syntax:

/usr/local/bin/s3fs bucket_name /path/mount_folder -o allow_other,use_cache=/path/cache_folder

Give the file execute permission:

chmod a+x /etc/rc.d/rc.local

3. The simple cron boot-schedule approach

crontab -e

@reboot /usr/local/bin/s3fs -o use_cache=/path/cache_folder bucket_name /path/mount_folder

4. AWS user data

The official docs seem to just say to put the script in user data, but there is this note:

Important

User data scripts and cloud-init directives only run during the first boot cycle when an instance is launched.

So it's presumably meant for configuring an instance on its first launch. (Haven't tried it.)
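(A minimal user-data sketch, untested, assuming the AMI already has s3fs installed and credentials configured:)

#!/bin/bash
mkdir -p /path/mount_folder
/usr/local/bin/s3fs bucket_name /path/mount_folder -o allow_other,use_cache=/path/cache_folder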

 

References

Make an Amazon EBS volume available for use

Created an EBS volume from a snapshot, but can't mount it?
