Tuesday, June 12, 2012

Attaching EMC SAN storage to Solaris 9

I have EMC SAN storage. I have always depended on the storage guy to give me detailed instructions on how to mount it on Solaris 9.

I was given device # 836 and told to put it in sd.conf.

But I can't figure out exactly how this entry should look, as it does not match the existing entries in sd.conf.

Also, am I required to reboot to see these new drives?
I already have EMC disks mounted on the server.

I am used to getting it in c3t1d1 format. 

Can anyone point me to a primer or other information?
Your EMC SAN storage is, I suppose, a CLARiiON CX series. If your Solaris server is connected to the SAN switch (and has two adapters), then Solaris 9 will not see the disks the way it sees a SCSI external disk or an FC-AL disk (probe-scsi at the OpenBoot prompt, or format in multi-user mode).

Here is the complete procedure (it was part of a training course at EMC, and you will notice that presenting a LUN to a Solaris host takes longer than on a Windows host, as you can see below).

If you want to configure your Sun host to access EMC LUNs, there is a specific utility:
% cd /usr/sbin/lpfc
% ./lputil (the LightPulse utility for Solaris)

Choose 5. Persistent Bindings.
Then enter choice 1 (Display Current Bindings).
The list of persistent bindings should show the CLARiiON ports that your host is 
connected to. 

If not all of the entries are present, you will need to re-create them. Return to the Persistent Bindings Menu.
Choose 6. Delete Binding(s) 
Choose 2. Delete All Bindings and confirm 

When all current bindings are deleted, return to the Persistent Bindings Menu 
(Option 0). 

Choose 5. Bind Automapped Targets. 
Select the entry for the first adapter. 
Select YES to bind all automapped targets.
Choose to bind By Port Name.

Repeat for the second adapter. 

After selecting a binding method for the first lpfc (HBA), you won't be prompted for the binding method for any remaining lpfcs, because the binding method will be the same.

View all the current bindings. You should now have 8 entries for CLARiiON ports. 

Exit from the lputil utility (Option 0). 

Change to the /kernel/drv directory, and view the lpfc.conf file. Use the following command: 
% more lpfc.conf 

The lpfc.conf file should have CLARiiON WWPNs in the uncommented Persistent Bindings section.

Each line links a CLARiiON SP port WWPN to a host controller/target combination. 
Ensure there are 4 WWPNs for your primary storage system. 
Ensure there are 4 WWPNs for your secondary storage system. 
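
For illustration only, here is a minimal sketch of what the uncommented persistent bindings in lpfc.conf might look like (the WWPNs, lpfc instances and target numbers below are placeholders, not real values; with two HBAs and two storage systems you would expect eight such entries):

fcp-bind-WWPN="5006016000000001:lpfc0t0",
              "5006016800000001:lpfc0t1",
              "5006016100000001:lpfc1t0",
              "5006016900000001:lpfc1t1";

Each entry ties one SP port WWPN to an HBA instance (lpfc0, lpfc1) and a target number; those target numbers are what you reference again in sd.conf.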

At this point the lpfc.conf file reflects the bindings of the HBAs to the WWPNs of the SP ports. Next, we need to configure the sd.conf file to reflect the HLU addresses that the SPs present in the Storage Groups.

Make a backup copy of sd.conf using the following command: 
cp sd.conf sd.conf.old 

The LUN numbers you want to enter in the sd.conf file must be the HLUs and not the ALUs. 
Open sd.conf with the vi editor (vi sd.conf). 

Edit the existing entries so that you have an entry for each target/LUN combination, each prefixed by name="sd" parent="lpfc".
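
As an illustration (the target and LUN numbers here are placeholders; use the target numbers from your persistent bindings and the HLUs from your Storage Group), the added entries typically look like this:

name="sd" parent="lpfc" target=0 lun=0;
name="sd" parent="lpfc" target=0 lun=1;
name="sd" parent="lpfc" target=1 lun=0;
name="sd" parent="lpfc" target=1 lun=1;

One line per target/LUN combination, each terminated with a semicolon.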

Save the new sd.conf file and exit vi ( <Esc> and :wq! ). 
Reboot your Sun host using the following command: 
reboot -- -r 

When the Sun host is back up, run the following commands: 
% powermt display dev=all
% powercf -q
% powermt config
% powermt check force
% powermt display dev=all
% powermt save
This will add new LUNs to the PowerPath configuration file (and remove any unused LUNs).
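
For reference, a hedged sketch of the same sequence with a note on what each step does (these are standard PowerPath commands, run as root):

% powermt display dev=all    (show the PowerPath devices and paths currently configured)
% powercf -q                 (scan the HBAs and update the emcpower device configuration)
% powermt config             (add the newly discovered LUNs and paths to PowerPath)
% powermt check force        (remove dead or unused paths without prompting)
% powermt display dev=all    (verify the new LUNs and all their paths are visible)
% powermt save               (save the configuration so it persists across reboots)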


Finally, format the LUNs that will be mounted, and then:
- Create file systems on the LUNs
- Create mount points
- Mount the file systems onto the mount points (a sketch with placeholder names follows this list)
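
A minimal sketch of those final steps, assuming a placeholder PowerPath pseudo device emcpower0c and a placeholder mount point /data01 (substitute your own device slice and directory):

% format                              (label the new LUN; pick the emcpower device from the list)
% newfs /dev/rdsk/emcpower0c          (create a UFS file system on the chosen slice)
% mkdir /data01                       (create the mount point)
% mount /dev/dsk/emcpower0c /data01   (mount the file system)

Add a line to /etc/vfstab if the file system should be mounted automatically at boot.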

I hope this helps. Good luck.
Hello, 

I believe that is a LUN number; what you need is the mapping from that LUN to a physical device.

You can get that, I believe, by typing /etc/powerpath display dev=all, or with inq. I could be wrong, as my SAN admin days are long past; on the other hand, you could try to get more information from your SAN guy. As for the reboot: run devfsadm and look for the new disks with cfgadm -al; a reboot should not be needed.
Sorry, the command is /etc/powermt display dev=all
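
For what it is worth, a minimal sketch of that no-reboot rescan (standard Solaris commands; the device names you see will depend on your HBAs and zoning):

% devfsadm                       (build device nodes for any newly presented LUNs)
% cfgadm -al                     (list attachment points; the new FC LUNs should appear here)
% echo | format                  (quick, non-interactive way to list the disks Solaris now sees)
% /etc/powermt display dev=all   (confirm PowerPath sees the new devices and paths)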
