docs:vserver

Differences

This shows you the differences between two versions of the page.

docs:vserver [2013-02-11 18:22]
glen [Using quota in vservers] fix formatting
docs:vserver [2015-10-05 14:40]
arekm [XFS filesystem - kernel upgrade causes xfs related oops (xfs_filestream_lookup_ag)]
Line 15: Line 15:
 ===== Installing Vserver host on PLD Linux =====
  
-vserver support is included in PLD Linux main kernels, so you can just install kernel package
-
-<file>
-# poldek -u kernel
-</file>
-
-or alternatively, a longterm stable kernel:
-<file>
-# poldek -u kernel-longterm
-</file>
+Ensure you have appropriate [[packages:kernel]] installed.
  
 +You can check this from kernel config:
 +<​code>​
 +# modprobe configs
 +# zgrep CONFIG_VSERVER /​proc/​config.gz ​
 +CONFIG_VSERVER=y
 +</​code>​
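 +
 +util-vserver can also report whether the running kernel exposes the vserver API, e.g. via ''vserver-info'' (the ''- SYSINFO'' form is the usual diagnostic invocation; the exact output varies between versions, so treat this as a sketch):
 +<file>
 +## output format varies by util-vserver version
 +# vserver-info - SYSINFO
 +</file>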
 ===== Installing guest PLD Linux Vserver =====
  
Line 65: Line 62:
 If you need to use another combination, then there are two versions of PLD available for guest systems:
  
-  * pld-ac - [[:AcInfo|PLD 2.0 (Ac)]]
-  * pld-th - [[:ThInfo|PLD 3.0 (Th)]]
+  * pld-ac - [[:ac|PLD 2.0 (Ac)]]
+  * pld-th - [[:th|PLD 3.0 (Th)]]
  
 You may choose one using the ''-d'' option:
Line 395: Line 392:
 [[http://www.solucorp.qc.ca/howto.hc?projet=vserver&id=72|http://www.solucorp.qc.ca/howto.hc?projet=vserver&id=72]]
  
-You can use //lcap// program to see available capatabilities
+You can use the //lcap// program to see available capabilities.
  
  
Line 579: Line 576:
  
 ==== Running 32 bit vserver on a 64 bit host ====
-With recent PLD util-vserver package you can create 32-bit guest systems inside a 64-bit host. First you need to prepare a new distribution definition skeleton: ​ 
  
 +With a recent [[package>util-vserver]] package you can create 32-bit guest systems inside a 64-bit host.
  
 +To specify the arch during guest creation, use the ''-d'' option, and to change what ''uname'' returns, use the arguments ''%%--personality linux_32bit --machine i686%%'':
  
-<file># mkdir -p /etc/vservers/.distributions/pld-th-i686/poldek/repos.d
 +<file># vserver test build --context <num> -n test -m poldek -- -d pld-th-i686 --personality linux_32bit --machine i686
 </file>
-Then copy your repository configuration to ''/​etc/​vservers/​.distributions/​pld-th-i686/​poldek/​repos.d/​pld.conf''​ and change the architecture and source paths to your liking. When configuration is ready, create a new guest vserver using the ''​-d''​ command line option: ​ 
  
-<file># vserver test build --context <num> -n test -m poldek -- -d pld-th-i686
 +If you need to set ''uts'' parameters afterwards, you can just echo them:
 +<file>
 +# echo linux_32bit >> /etc/vservers/test/personality
 +# echo i686 > /​etc/​vservers/​test/​uts/​machine
 </file>
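 +
 +To verify, you can check what the guest reports after starting it (a sketch, assuming the guest is named ''test''; ''uname -m'' should print ''i686''):
 +<file>
 +## "test" is an example guest name
 +# vserver test start
 +# vserver test exec uname -m
 +</file>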
-Later to force i686 32bit use:  
  
- 
- 
-<​file>#​ echo linux_32bit >> /​etc/​vservers/​test/​personality 
-# echo i686 > /​etc/​vservers/​test/​uts/​machine 
-</​file>​ 
-however, you can do that at vserver build time using arguments ''​--personality linux_32bit --machine i686''​. ​ 
  
  
Line 719: Line 710:
  
  
 +==== Running auditd inside guest ====
 +
 +You need ''CAP_AUDIT_CONTROL'' in ''bcapabilities'', and ''priority_boost'' lowered to ''0'' in ''/etc/audit/auditd.conf''.
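 +
 +For example (a minimal sketch; the guest name ''test'' is just an example, and it assumes ''auditd.conf'' already contains a ''priority_boost'' line):
 +<file>
 +## "test" is an example guest name - adjust to yours
 +# echo CAP_AUDIT_CONTROL >> /etc/vservers/test/bcapabilities
 +# vserver test exec sed -i 's/^priority_boost.*/priority_boost = 0/' /etc/audit/auditd.conf
 +# vserver test restart
 +</file>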
 +
 +==== XFS filesystem - kernel upgrade causes xfs related oops (xfs_filestream_lookup_ag) ====
 +
 +After upgrading from a 2.6-3.4 kernel (possibly other versions too) to 3.18 (tested; possibly other versions as well), the kernel oopses
 +almost immediately after accessing some files on an XFS filesystem, with ''xfs_filestream_lookup_ag''
 +(or another filestream-related function) visible in the stack trace.
 +
 +That's because the vserver patch for kernels earlier than 2.6.23 patched the xfs filesystem to introduce a new flag:
 +
 +<​file>​
 +#define XFS_XFLAG_BARRIER ​    ​0x00004000 ​     /* chroot() barrier */
 +</​file>​
 +
 +and files/dirs with this flag set were saved on your filesystem.
 +
 +Starting with 2.6.23, the kernel introduced filestreams, which use the 0x00004000 bit, thus conflicting with the vserver flag.
 +
 +<​file>​
 +#define XFS_XFLAG_FILESTREAM ​  ​0x00004000 ​     /* use filestream allocator */
 +</​file>​
 +
 +Vserver stopped setting this xfs xflag in 3.13, BUT your existing filesystem can still have XFS_XFLAG_BARRIER (0x00004000) set,
 +causing an oops in newer kernels.
 +
 +How to find out if I'm affected?
 +
 +If you don't use the filestream feature, then modify http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/xfstests.git;a=blob_plain;f=src/bstat.c;hb=HEAD to show only files with XFS_XFLAG_FILESTREAM set:
 +
 +<​file>​
 +diff --git a/​src/​bstat.c b/​src/​bstat.c
 +index 4e22ecd..887512f 100644
 +--- a/​src/​bstat.c
 ++++ b/​src/​bstat.c
 +@@ -34,19 +34,21 @@ dotime(void *ti, char *s)
 + void
 + ​printbstat(xfs_bstat_t *sp)
 + {
 +-       ​printf("​ino %lld mode %#o nlink %d uid %d gid %d rdev %#​x\n",​
 +-               (long long)sp->​bs_ino,​ sp->​bs_mode,​ sp->​bs_nlink,​
 +-               ​sp->​bs_uid,​ sp->​bs_gid,​ sp->​bs_rdev);​
 +-       ​printf("​\tblksize %d size %lld blocks %lld xflags %#x extsize %d\n",
 +-               ​sp->​bs_blksize,​ (long long)sp->​bs_size,​ (long long)sp->​bs_blocks,​
 +-               ​sp->​bs_xflags,​ sp->​bs_extsize);​
 +-       ​dotime(&​sp->​bs_atime,​ "​atime"​);​
 +-       ​dotime(&​sp->​bs_mtime,​ "​mtime"​);​
 +-       ​dotime(&​sp->​bs_ctime,​ "​ctime"​);​
 +-       ​printf( "​\textents %d %d gen %d\n",
 +-               ​sp->​bs_extents,​ sp->​bs_aextents,​ sp->​bs_gen);​
 +-       ​printf( "​\tDMI:​ event mask 0x%08x state 0x%04x\n",​
 +-               ​sp->​bs_dmevmask,​ sp->​bs_dmstate);​
 ++       if (sp->​bs_xflags & XFS_XFLAG_FILESTREAM) {
 ++               ​printf("​ino %lld mode %#o nlink %d uid %d gid %d rdev %#​x\n",​
 ++                               (long long)sp->​bs_ino,​ sp->​bs_mode,​ sp->​bs_nlink,​
 ++                               ​sp->​bs_uid,​ sp->​bs_gid,​ sp->​bs_rdev);​
 ++               ​printf("​\tblksize %d size %lld blocks %lld xflags %#x extsize %d\n",
 ++                               ​sp->​bs_blksize,​ (long long)sp->​bs_size,​ (long long)sp->​bs_blocks,​
 ++                               ​sp->​bs_xflags,​ sp->​bs_extsize);​
 ++               ​dotime(&​sp->​bs_atime,​ "​atime"​);​
 ++               ​dotime(&​sp->​bs_mtime,​ "​mtime"​);​
 ++               ​dotime(&​sp->​bs_ctime,​ "​ctime"​);​
 ++               ​printf( "​\textents %d %d gen %d\n",
 ++                               ​sp->​bs_extents,​ sp->​bs_aextents,​ sp->​bs_gen);​
 ++               ​printf( "​\tDMI:​ event mask 0x%08x state 0x%04x\n",​
 ++                               ​sp->​bs_dmevmask,​ sp->​bs_dmstate);​
 ++       }
 + }
 +</​file>​
 +
 +and then run it against the mount point of each filesystem (''bstat /''; ''bstat /home'', etc). It will print "ino ..." information for files that have the filestream flag set.
 +
 +
 +How to clean up?
 +
 +rsync the files to another partition, recreate the problematic partition, and then copy the files back.
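 +
 +A minimal sketch of such a cleanup (the device ''/dev/sdb1'', mount point ''/home'' and backup location ''/mnt/backup'' are only examples - adjust to your setup):
 +<file>
 +## example devices/paths only - adjust before running
 +# rsync -aHAXS /home/ /mnt/backup/home/
 +# umount /home
 +# mkfs.xfs -f /dev/sdb1
 +# mount /home
 +# rsync -aHAXS /mnt/backup/home/ /home/
 +</file>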
  
 ===== Debian or Ubuntu guest installation =====
Line 1006: Line 1074:
  
   * add ''quota_ctl'' to ''/etc/vservers/test/ccapabilities'':
   * restart your vserver and run ''edquota'' inside
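 +
 +For example (a sketch; the guest name ''test'' matches the examples above and ''someuser'' is a hypothetical user inside the guest):
 +<file>
 +## "test" and "someuser" are examples
 +# echo quota_ctl >> /etc/vservers/test/ccapabilities
 +# vserver test restart
 +# vserver test enter
 +# edquota -u someuser
 +</file>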
 ===== Network namespace in vservers =====
 +
 +Starting from util-vserver 0.30.216-1.pre3054 there is basic support for creating network namespaces with interfaces inside.
 +
 +Enabling netns and two capabilities: NET_ADMIN (allows interfaces in the guest to be managed) and NET_RAW (makes iptables work).
 +
 +
 +<​file>​mkdir /​etc/​vservers/​test/​spaces
 +touch /etc/vservers/test/spaces/net
 +echo NET_ADMIN >> /​etc/​vservers/​test/​bcapabilities
 +echo NET_RAW >> /​etc/​vservers/​test/​bcapabilities
 +echo '​plain'​ > /​etc/​vservers/​test/​apps/​init/​style
 +</​file>​
 +
 +Avoid context isolation since it makes little sense when using network namespaces:
 +<file>touch /etc/vservers/test/noncontext</file>
 +
 +Configure interfaces:
 +
 +The values used below mean:
 +  * ''0'' - an arbitrary directory name, used only for ordering
 +  * ''myiface0'' - the interface name inside the guest (optional; the defaults are geth0, geth1 and so on)
 +  * ''veth-host'' - the interface name on the host side
 +
 +<​file>​
 +mkdir -p /​etc/​vservers/​test/​netns/​interfaces/​0
 +echo myiface0 > /etc/vservers/test/netns/interfaces/0/guest
 +echo veth-host > /etc/vservers/test/netns/interfaces/0/host
 +</​file>​
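 +
 +Once the guest is running, ''NET_ADMIN'' should let you configure that interface from inside the guest. A hypothetical sketch (the interface name ''myiface0'' follows the example above; the address is made up):
 +<file>
 +## run inside the guest; the address is only an example
 +# ip link set myiface0 up
 +# ip addr add 192.168.1.2/24 dev myiface0
 +</file>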
 +
 +!!! FINISH ME. FINISH ME. FINISH ME. !!!
 +
 +===== Network namespace in vservers (OLD WAY) =====
 Enabling netns and two capabilities: NET_ADMIN (allows interfaces in the guest to be managed) and NET_RAW (makes iptables work).
  
Line 1014: Line 1117:
  
  
-<​file>​mkdir /etc/vserver/​test/​spaces +<​file>​mkdir /etc/vservers/​test/​spaces 
-touch /etc/vserver/​test/​spaces/​net+touch /etc/vservers/​test/​spaces/​net
 echo NET_ADMIN >> /etc/vservers/test/bcapabilities
 echo NET_RAW >> /etc/vservers/test/bcapabilities
Line 1106: Line 1209:
  
 <file># cat /proc/mounts |grep cgroup
-cgroup /dev/​cgroup/​blkio cgroup rw,​relatime,​blkio 0 0 +cgroup /sys/fs/​cgroup/​blkio cgroup rw,​relatime,​blkio 0 0 
-cgroup /dev/cgroup/cpu cgroup rw,​relatime,​cpu 0 0 +cgroup /sys/fs/cgroup/cpu cgroup rw,​relatime,​cpu 0 0 
-cgroup /dev/​cgroup/​cpuacct cgroup rw,​relatime,​cpuacct 0 0 +cgroup /sys/fs/​cgroup/​cpuacct cgroup rw,​relatime,​cpuacct 0 0 
-cgroup /dev/​cgroup/​cpuset cgroup rw,​relatime,​cpuset 0 0 +cgroup /sys/fs/​cgroup/​cpuset cgroup rw,​relatime,​cpuset 0 0 
-cgroup /dev/​cgroup/​devices cgroup rw,​relatime,​devices 0 0 +cgroup /sys/fs/​cgroup/​devices cgroup rw,​relatime,​devices 0 0 
-cgroup /dev/​cgroup/​freezer cgroup rw,​relatime,​freezer 0 0 +cgroup /sys/fs/​cgroup/​freezer cgroup rw,​relatime,​freezer 0 0 
-cgroup /dev/​cgroup/​memory cgroup rw,​relatime,​memory 0 0 +cgroup /sys/fs/​cgroup/​memory cgroup rw,​relatime,​memory 0 0 
-cgroup /dev/​cgroup/​net_cls cgroup rw,​relatime,​net_cls 0 0+cgroup /sys/fs/​cgroup/​net_cls cgroup rw,​relatime,​net_cls 0 0
 </file>
 For these to work you need at least util-vserver-0.30.216-1.pre2955.3 (that .3 is important) and turn on per subsys support by doing:
- 
  
  
 <file># mkdir /etc/vservers/.defaults/cgroup
 # touch /etc/vservers/.defaults/cgroup/per-ss
 +</​file>​
 +
 +===== cgroups mountpoint =====
 +
 +If you have cgroups mounted somewhere else, you can inform vserver of that (it searches in ''/sys/fs/cgroup'' by default). For example, with this fstab entry:
 +
 +<​file>​
 +none        /​dev/​cgroup ​    ​cgroup ​ cpuset,​cpu,​cpuacct,​devices,​freezer,​net_cls ​ 0 0
 +</​file>​
 +
 +You need to tell vserver where it is mounted:
 +<​file>​
 +# cat /​etc/​vservers/​.defaults/​cgroup/​mnt
 +/dev/cgroup
 </file>
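 +
 +For example, to point util-vserver at the ''/dev/cgroup'' mount point used above (a sketch - use whatever path you actually mount cgroups on):
 +<file>
 +## /dev/cgroup matches the fstab example above
 +# mkdir -p /etc/vservers/.defaults/cgroup
 +# echo /dev/cgroup > /etc/vservers/.defaults/cgroup/mnt
 +</file>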