HP StorageWorks Scalable File Share User Manual Page 57

To see if the problem can be fixed with writeconf, run the following test:
1. On the MGS node run:
[root@adm ~]# debugfs -c -R 'dump CONFIGS/testfs-client /tmp/testfs-client' /dev/mapper/mpath0
Replace testfs with the name of your file system, and mpath0 with the mpath device for the MGS.
2. Convert the dump file to ASCII:
[root@adm ~]# llog_reader /tmp/testfs-client > /tmp/testfs-client.txt
[root@adm ~]# grep MDT /tmp/testfs-client.txt
#05 (224)marker 4 (flags=0x01, v1.6.6.0) scratch-MDT0000 'add mdc' Wed Dec 10 09:53:41 2008-
#07 (136)attach 0:scratch-MDT0000-mdc 1:mdc 2:scratch-MDT0000-mdc_UUID
#08 (144)setup 0:scratch-MDT0000-mdc 1:scratch-MDT0000_UUID 2:10.129.10.1@o2ib
#09 (128)mount_option 0: 1:scratch-client 2:scratch-clilov 3:scratch-MDT0000-mdc
#10 (224)marker 4 (flags=0x02, v1.6.6.0) scratch-MDT0000 'add mdc' Wed Dec 10 09:53:41 2008-
The problem is in line #08. The MDT is associated with 10.129.10.1@o2ib, but in this example
that IP address belongs to the MGS node, not the MDT node, so the MDT will never mount on
the MDT node.
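The stale-NID check above can be scripted. A minimal sketch, using the sample records decoded above (in practice, read them from /tmp/testfs-client.txt), that pulls the NID out of the MDC setup record so it can be compared against the output of lctl list_nids on the MDT node:

```shell
# Sample llog_reader records (the ones shown above); in practice use:
#   sample=$(cat /tmp/testfs-client.txt)
sample='#07 (136)attach 0:scratch-MDT0000-mdc 1:mdc 2:scratch-MDT0000-mdc_UUID
#08 (144)setup 0:scratch-MDT0000-mdc 1:scratch-MDT0000_UUID 2:10.129.10.1@o2ib'

# The "setup" record for the MDC names the NID clients will contact.
mdc_nid=$(printf '%s\n' "$sample" |
  awk '/setup 0:.*-mdc / {sub(/^2:/, "", $NF); print $NF; exit}')
echo "client llog points the MDC at: $mdc_nid"
# If this NID does not appear in "lctl list_nids" run on the MDT node,
# the configuration log is stale and the writeconf procedure is required.
```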
To fix the problem, use the following procedure:
IMPORTANT: The following steps must be performed in the exact order as they appear below.
1. Unmount HP SFS from all client nodes.
# umount /testfs
2. Stop Heartbeat on HP SFS server nodes.
a. Stop the Heartbeat service on all the OSS nodes:
# pdsh -w oss[1-n] service heartbeat stop
b. Stop the Heartbeat service on the MDS and MGS nodes:
# pdsh -w mgs,mds service heartbeat stop
c. To prevent the file system components and the Heartbeat service from automatically
starting on boot, enter the following command:
# pdsh -a chkconfig --level 345 heartbeat off
After this, you must manually start the Heartbeat service and the file system whenever a
file system server node is rebooted.
3. Verify that the Lustre mount-points are unmounted on the servers.
# pdsh -a "df | grep mnt"
4. Run the following command on the MGS node:
# tunefs.lustre --writeconf /dev/mapper/mpath[mgs]
5. Run the following command on the MDT node:
# tunefs.lustre --writeconf /dev/mapper/mpath[mdt]
6. Run the following command on each OSS server node, for every mpath that the node normally
mounts:
# tunefs.lustre --writeconf /dev/mapper/mpath[oss]
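Because an OSS typically serves several OSTs, the per-mpath commands in step 6 can be generated as a dry run first. The mpath names below are placeholders for your site's multipath devices; review the output before removing the echo:

```shell
# Dry run: generate the writeconf command for each mpath that this OSS
# normally mounts.  The names below are placeholders; substitute the
# multipath devices your OSS actually serves, then drop the "echo".
oss_mpaths="mpath2 mpath3 mpath4"

for mp in $oss_mpaths; do
  echo "tunefs.lustre --writeconf /dev/mapper/$mp"
done
```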
7. Manually mount the MGS mpath on the MGS server. Monitor /var/log/messages
to verify that it mounts without errors.
8. Manually mount the MDT mpath on the MDT server. Monitor /var/log/messages
to verify that there are no errors and the mount completes. This might take several minutes.
9. Manually mount each OST on the OSS server where it normally runs.
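Steps 7 through 9 can be sketched as a dry run that preserves the required order: MGS first, then MDT, then each OST. The mount points and mpath names here are placeholders; mount -t lustre is the standard server-side mount for a Lustre target:

```shell
# Dry run of the restart order: MGS, then MDT, then each OST.
# Mount points and mpath names are placeholders for your configuration;
# after each real mount, watch /var/log/messages before continuing.
targets="mgs /dev/mapper/mpath0 /mnt/mgs
mdt /dev/mapper/mpath1 /mnt/mdt
ost /dev/mapper/mpath2 /mnt/ost0"

printf '%s\n' "$targets" | while read -r role dev mnt; do
  echo "[$role] mount -t lustre $dev $mnt"
done
```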
5.7 Testing Your Configuration 57
