Using Disk Storage from Internal Clients
Disk Storage Servers (GPFS Servers)
Disk storage servers, including the CIFS/NFS servers, provide disk space to the batch servers, work servers, grid servers, and research group hosts at KEK.
Research group hosts should access the disk space via NFS v4; user authentication is managed by LDAP.
Client nodes can also access the disk space via NFS v4, although for security reasons this is permitted only in particular cases.
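For reference, a minimal NFS v4 mount sketch from a registered client is shown below. The server name nfs.cc.kek.jp and the export path are hypothetical placeholders, not the actual values; obtain the real export information from the system administrators.

# Mount the Acc home area via NFS v4 (server name and export path are hypothetical examples)
$> sudo mount -t nfs4 nfs.cc.kek.jp:/home/acc /mnt/home/acc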
Disk Storage Space
This section describes the disk spaces provided by GPFS servers.
In the new system, as in the previous system, do not use the df command; use the grpquota or gquota command instead to check the disk space assigned to and used by each group. You can also check the capacity and usage of the home domain with the hquota command. See "Notice of the GPFS usage" for more information.
| File System Domain (File System Name) | Work group /Sub group | Directory | Quota size |
|---|---|---|---|
| Home domain (gpfs_home) | Acc | /home/acc | 100 GB/user |
| | Atlas | /home/atlas | 100 GB/user |
| | Bess | /home/bess | 100 GB/user |
| | Central | /home/ce | 100 GB/user |
| | CMB | /home/cmb | 100 GB/user |
| | Had | /home/had | 100 GB/user |
| | ILC | /home/ilc | 100 GB/user |
| | ITDC | /home/itdc | 100 GB/user |
| | QUP | /home/qup | 100 GB/user |
| | MLF | /home/mlf | 100 GB/user |
| | PS | /home/ps | 100 GB/user |
| | T2K | /home/t2k | 100 GB/user |
| | Theory | /home/th | 100 GB/user |
| | Belle | /home/belle | 250 GB/user |
| | Belle2 | /home/belle2 | 100 GB/user |
| File System Domain (File System Name) | Work group | Sub group | Directory | Quota size |
|---|---|---|---|---|
| Group domain (gpfs_group) | Acc | | /group/acc | 1 TB |
| | Atlas | | /group/atlas | 5 TB |
| | Bess | | /group/bess | 5 TB |
| | Central | ce_ce | /group/ce | --- |
| | | ce_cc | /group/ce/cc | 1 TB |
| | | ce_ccs | /group/ce/ccs | 1 TB |
| | | ce_ccx | /group/ce/ccx | 1 TB |
| | | ce_cryo | /group/ce/cryo | 2 TB |
| | | ce_ebes | /group/ce/ebes | 20 TB |
| | | ce_rad | /group/ce/rad | 30 TB |
| | | ce_geant4 | /group/ce/geant4 | 1 TB |
| | | ce_geant4_grid_storm | /group/grid/geant4 | 200 GB |
| | | ce_pf | /group/ce/pf | 20 TB |
| | | ce_kagra | /group/ce/kagra | 1 TB |
| | | ce_kagra_grid_storm | /group/grid/kagra | 200 GB |
| | | ce_sbrc | /group/ce/sbrc | 1 TB |
| | CMB | | /group/cmb | 370 TB |
| | Had | had_had | /group/had | --- |
| | | had_koto | /group/had/koto | 3 PB |
| | | had_trek | /group/had/trek | 30 TB |
| | | had_sks | /group/had/sks | 200 TB |
| | | had_knucl | /group/had/knucl | 100 TB |
| | | had_muon | /group/had/muon | 300 TB |
| | | had_staff | /group/had/staff | 10 TB |
| | | had_g-2 | /group/had/g-2 | 6 TB |
| | | had_high-p | /group/had/high-p | 100 TB |
| | ILC | ilc_ilc | /group/ilc | 260 TB |
| | | ilc_grid | /group/ilc/grid | 350 TB |
| | MLF | mlf_mlf | /group/mlf | 70 TB |
| | | mlf_nu | /group/mlf/nu | 180 TB |
| | | mlf_deeme | /group/mlf/deeme | 100 TB |
| | NU | nu_hk | /group/nu/hk | 10 TB |
| | | nu_ninja | /group/nu/ninja | 10 TB |
| | ITDC | itdc_tbl | /group/itdc/tbl | 5 TB |
| | QUP | qup_gen | /group/qup/gen | 10 TB |
| | | qup_ldm | /group/qup/ldm | 75 TB |
| | PFCS | | /group/pfcs | 3 TB |
| | PS | | /group/ps | 1 TB |
| | SGR | | /group/sgr | 10 TB |
| | T2K | | /group/t2k | 600 TB |
| | Theory | | /group/th | 20 TB |
| | tmp | | /group/tmp | 1 TB |
| | Belle | | /group/belle | 1260 TB |
| | | | /group/belle/users | 1 TB/user |
| | Belle2 | | /group/belle2 | 210 TB |
| | | | /group/belle2/dataprod | 1.5 PB |
| | | | /group/belle2/CALIB | 200 TB |
| File System Domain (File System Name) | Work group /Sub group | Directory | Quota size |
|---|---|---|---|
| Library domain (sw) | --- | /sw | --- |
CIFS/NFS Servers
Server List
The CIFS service is provided by two servers in a redundant configuration.
Use the delegate host name smbgw.cc.kek.jp to access disk storage via the CIFS service.
Note that passwords for the CIFS (Samba) servers and the work servers are not synchronized.
The paths for accessing disk storage via CIFS are listed below.
| Domain | Work group | Path on disks | Windows access path | macOS access address |
|---|---|---|---|---|
| Home | Acc | /home/acc/username | \\smbgw.cc.kek.jp\h_acc | smb://smbgw.cc.kek.jp/h_acc |
| | Atlas | /home/atlas/username | \\smbgw.cc.kek.jp\h_atlas | smb://smbgw.cc.kek.jp/h_atlas |
| | Bess | /home/bess/username | \\smbgw.cc.kek.jp\h_bess | smb://smbgw.cc.kek.jp/h_bess |
| | Central | /home/ce/username | \\smbgw.cc.kek.jp\h_ce | smb://smbgw.cc.kek.jp/h_ce |
| | CMB | /home/cmb/username | \\smbgw.cc.kek.jp\h_cmb | smb://smbgw.cc.kek.jp/h_cmb |
| | Had | /home/had/username | \\smbgw.cc.kek.jp\h_had | smb://smbgw.cc.kek.jp/h_had |
| | ILC | /home/ilc/username | \\smbgw.cc.kek.jp\h_ilc | smb://smbgw.cc.kek.jp/h_ilc |
| | ITDC | /home/itdc/username | \\smbgw.cc.kek.jp\h_itdc | smb://smbgw.cc.kek.jp/h_itdc |
| | QUP | /home/qup/username | \\smbgw.cc.kek.jp\h_qup | smb://smbgw.cc.kek.jp/h_qup |
| | MLF | /home/mlf/username | \\smbgw.cc.kek.jp\h_mlf | smb://smbgw.cc.kek.jp/h_mlf |
| | PS | /home/ps/username | \\smbgw.cc.kek.jp\h_ps | smb://smbgw.cc.kek.jp/h_ps |
| | T2K | /home/t2k/username | \\smbgw.cc.kek.jp\h_t2k | smb://smbgw.cc.kek.jp/h_t2k |
| | Theory | /home/th/username | \\smbgw.cc.kek.jp\h_th | smb://smbgw.cc.kek.jp/h_th |
| | Belle | /home/belle/username | \\smbgw.cc.kek.jp\h_belle | smb://smbgw.cc.kek.jp/h_belle |
| | Belle2 | /home/belle2/username | \\smbgw.cc.kek.jp\h_belle2 | smb://smbgw.cc.kek.jp/h_belle2 |
| Groups | Acc | /group/acc | \\smbgw.cc.kek.jp\g_acc | smb://smbgw.cc.kek.jp/g_acc |
| | Atlas | /group/atlas | \\smbgw.cc.kek.jp\g_atlas | smb://smbgw.cc.kek.jp/g_atlas |
| | Bess | /group/bess | \\smbgw.cc.kek.jp\g_bess | smb://smbgw.cc.kek.jp/g_bess |
| | Central | /group/ce | \\smbgw.cc.kek.jp\g_ce | smb://smbgw.cc.kek.jp/g_ce |
| | CMB | /group/cmb | \\smbgw.cc.kek.jp\g_cmb | smb://smbgw.cc.kek.jp/g_cmb |
| | Had | /group/had | \\smbgw.cc.kek.jp\g_had | smb://smbgw.cc.kek.jp/g_had |
| | ILC | /group/ilc | \\smbgw.cc.kek.jp\g_ilc | smb://smbgw.cc.kek.jp/g_ilc |
| | ITDC | /group/itdc | \\smbgw.cc.kek.jp\g_itdc | smb://smbgw.cc.kek.jp/g_itdc |
| | QUP | /group/qup | \\smbgw.cc.kek.jp\g_qup | smb://smbgw.cc.kek.jp/g_qup |
| | MLF | /group/mlf | \\smbgw.cc.kek.jp\g_mlf | smb://smbgw.cc.kek.jp/g_mlf |
| | PFCS | /group/pfcs | \\smbgw.cc.kek.jp\g_pfcs | smb://smbgw.cc.kek.jp/g_pfcs |
| | PS | /group/ps | \\smbgw.cc.kek.jp\g_ps | smb://smbgw.cc.kek.jp/g_ps |
| | T2K | /group/t2k | \\smbgw.cc.kek.jp\g_t2k | smb://smbgw.cc.kek.jp/g_t2k |
| | Theory | /group/th | \\smbgw.cc.kek.jp\g_th | smb://smbgw.cc.kek.jp/g_th |
| | Belle | /group/belle | \\smbgw.cc.kek.jp\g_belle | smb://smbgw.cc.kek.jp/g_belle |
| | Belle2 | /group/belle2 | \\smbgw.cc.kek.jp\g_belle2 | smb://smbgw.cc.kek.jp/g_belle2 |
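On macOS, the smb:// addresses in the table can be opened directly from Finder (Go > Connect to Server). From a Linux client inside KEK, the same shares can be reached with a standard Samba client; below is a minimal sketch using smbclient, assuming the package is installed and that h_acc and username are replaced with your own share and account name.

# Browse a home-domain share interactively (h_acc and username are example values)
$> smbclient //smbgw.cc.kek.jp/h_acc -U username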
Access instructions from Windows are described below; a command-line alternative follows the list.
- Start Explorer and search for computers connected to the network.
- Enter "smbgw.cc.kek.jp" as the computer name and search.
- Double-click the computer found by the search.
- Enter the ID and password issued with your application to log in.
- Referring to the table above, access the Group/Home domain.
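Alternatively, a share can be mapped to a drive letter from the Windows command prompt. The drive letter Z: and the share h_acc below are example values; substitute your own workgroup's share from the table above.

C:\> rem Map a home-domain share to drive Z: (Z:, h_acc, and username are example values)
C:\> net use Z: \\smbgw.cc.kek.jp\h_acc /user:username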
Changing CIFS Password
You can change your password via the following web interface.
https://smbgw.cc.kek.jp/smbpasswd/index.html
Access the web page and fill in the form as follows.

| Information | Content |
|---|---|
| User name | your user name on the Data Analysis System |
| Old Password | the password given for CIFS access |
| New Password | any string satisfying the rules below |
| Confirm New Password | re-type your new password |

A new password must satisfy all of the following rules:
- at least 9 characters long
- includes at least one alphabetical character
- contains at least one symbol
- contains at least one digit
Data from the home domain of the previous system
This part explains where the home-domain data from the old system is stored.
The data saved from the old system is mounted on /gpfs/home/old as read-only. You can copy any old data you need to your home directory, as shown in the example after the table below.
Below are the paths to access the saved data.
Location of the saved data: /gpfs/home/old/<workgroup name>/<user name>
| Work group /Sub group | Current path | Former path on the Common Computation System |
|---|---|---|
| Acc | /gpfs/home/old/acc | /home/acc |
| Atlas | /gpfs/home/old/atlas | /home/atlas |
| Bess | /gpfs/home/old/bess | /home/bess |
| Central | /gpfs/home/old/ce | /home/ce |
| CMB | /gpfs/home/old/cmb | /home/cmb |
| Had | /gpfs/home/old/had | /home/had |
| ILC | /gpfs/home/old/ilc | /home/ilc |
| MLF | /gpfs/home/old/mlf | /home/mlf |
| PS | /gpfs/home/old/ps | /home/ps |
| T2K | /gpfs/home/old/t2k | /home/t2k |
| Theory | /gpfs/home/old/th | /home/th |
| Belle | /gpfs/home/old/belle | /home/belle |
| Belle2 | /gpfs/home/old/belle2 | /home/belle2 |
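For example, a Central (ce) user could copy a saved directory back into the current home directory as follows; the user name username and the directory analysis are hypothetical, so adjust the workgroup and paths to your own case.

# Copy a directory tree from the read-only old home area into the current home directory
$> cp -a /gpfs/home/old/ce/username/analysis ~/analysis

rsync works equally well and can resume an interrupted copy.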
Data from the group domain of the previous system
This part explains where the group-domain data from the old system is stored.
The data saved from the old system is kept in the same directories as in the previous system.
Data from the library domain of the previous system
This part explains where the library-domain data from the old system is stored.
The data saved from the old system is mounted on /sw/old as read-only.
Below are the paths to access the saved data.
| Target Domain | Current path | Former path on the Common Computation System |
|---|---|---|
| Library domain | /sw/old | /sw |
Notice of the GPFS usage
User application
- Using CIFS requires an ID for the Data Analysis System. The application form is available here; check "CIFS server" on the user account application form.
- Only clients inside KEK are allowed to use the NFS/CIFS services.
- Access from NAT environments is not permitted.
- A source host must be registered in the internal node database and DNS.
- The system registers NFS clients' IP addresses on the servers. If your IP address changes, please report it.
- NFS clients are not allowed to access the NFS area with root privileges.
Available Domains
This part explains how to check the allocated size and usage of the home domain and the group domain.
As explained in the section "Disk Storage Space", use the grpquota or gquota command to check the disk space assigned to and used by each group.
[NOTICE] The grpquota command always shows 0 for a domain that contains no files.
To check the capacity and usage of your own home domain, use the rquota or hquota command.
To check /group/belle/users/, use the bquota command.
grpquota command
<Syntax> : grpquota [OPTION] {Group_DIR}
Options:
-h, --help display this help and exit
-l, --list list group directories
-a, --show-all show all usages
-t, --terabyte print usage in terabyte (default auto)
-g, --gigabyte print usage in gigabyte
-m, --megabyte print usage in megabyte
-k, --kilobyte print usage in kilobyte
$> grpquota /group/ps
GROUP_DIR Used Quota Use%
/group/ps 323 GB 1 TB 31%
$> grpquota -k /group/ps
GROUP_DIR Used Quota Use%
/group/ps 338813952 KB 1073741824 KB 31%
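To see which group directories grpquota accepts on your system, the -l option documented above lists them (output omitted here, as it depends on your group memberships):

$> grpquota -l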
gquota command
<Syntax> : gquota [ -K | -M | -G ] {Directory}
Directory :
/group/ps | /group/mlf | /group/ilc | /group/acc | /group/atlas | /group/ce | /group/ce/cc | /group/ce/ccs |
/group/ce/ccx | /group/ce/rad | /group/pfcs | /group/th | /group/bess | /group/sgr | /group/belle | /group/belle/users |
/group/belle2 | /group/had | /group/had/koto | /group/had/g-2 | /group/had/trek | /group/had/sks |
/group/t2k | /group/cmb
$> gquota -M /group/ps
Checking Quota Data...
Block Limits | File Limits
Filesystem type MB quota limit in_doubt grace | files quota limit in_doubt grace Remarks
/group/ps FILESET 858233 0 1048576 0 none | 527893 0 0 0 none
$> gquota -G /group/ps
Checking Quota Data...
Block Limits | File Limits
Filesystem type GB quota limit in_doubt grace | files quota limit in_doubt grace Remarks
/group/ps FILESET 838 0 1024 0 none | 527893 0 0 0 none
$> gquota -K /group/belle/users
Checking Quota Data...
Block Limits | File Limits
Filesystem type KB quota limit in_doubt grace | files quota limit in_doubt grace Remarks
/group/belle/users USR 0 0 1073741824 0 none | 1 0 0 0 none
rquota command
<Syntax> : rquota
$> rquota
Block Limits | File Limits
Filesystem type KB quota limit in_doubt grace | files quota limit in_doubt grace Remarks
/home/ce USR 416 0 104857600 0 none | 13 0 0 0 none
hquota command
<Syntax> : hquota [ -h | -g | -k ]
$> hquota
HOME directory usage: 0/100 GB (0%)
$> hquota -k
HOME directory usage: 1376K / 100GB (0%)
(Example for a Belle user)
$> hquota
HOME directory usage: 33G / 250GB (13%)
Your usage in /group/belle/users/: 43G / 1024GB (4%)
bquota command
<Syntax> : bquota
$> bquota
Block Limits | File Limits
Filesystem type KB quota limit in_doubt grace | files quota limit in_doubt grace Remarks
/group/belle/users/[USERNAME] USR 45412849 0 1073741824 0 none | 54739 0 0 0 none
For gquota, the size unit can be specified as an option, and a specific domain can be given to display only the information for that domain.
For hquota, the size unit can also be specified as an option.
The output shows the following information:
- Filesystem : specified domain
- type : quota type (USR: user quota, FILESET: fileset quota)
≪Block Limits≫
- KB (or MB/GB) : used disk size in KB (or MB/GB)
- quota : soft limit
- limit : hard limit
- grace : grace period (unset; "*" is displayed when usage reaches the hard limit)
≪File Limits≫
- files : number of files
- quota : soft limit on files (unset)
- limit : hard limit on files (unset)
- grace : grace period on files (unset)