<!--

lxc: linux Container library

(C) Copyright IBM Corp. 2007, 2008

Authors:
Daniel Lezcano <daniel.lezcano at free.fr>

This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.

This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.

You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

-->

<!DOCTYPE refentry PUBLIC @docdtd@ [

<!ENTITY seealso SYSTEM "@builddir@/see_also.sgml">
]>

<refentry>

  <docinfo>
    <date>@LXC_GENERATE_DATE@</date>
  </docinfo>


  <refmeta>
    <refentrytitle>lxc</refentrytitle>
    <manvolnum>7</manvolnum>
    <refmiscinfo>
      Version @PACKAGE_VERSION@
    </refmiscinfo>
  </refmeta>

  <refnamediv>
    <refname>lxc</refname>

    <refpurpose>
      linux containers
    </refpurpose>
  </refnamediv>

  <refsect1>
    <title>Quick start</title>
    <para>
      You are in a hurry and you don't want to read this man page? Ok,
      without warranty, here is the command to launch a shell inside
      a container with a predefined configuration template; it may
      work.
      <command>@BINDIR@/lxc-execute -n foo -f
      @DOCDIR@/examples/lxc-macvlan.conf /bin/bash</command>
    </para>
  </refsect1>

  <refsect1>
    <title>Overview</title>
    <para>
      Container technology is actively being pushed into the
      mainstream linux kernel. It provides resource management
      through the control groups (aka process containers) and
      resource isolation through the namespaces.
    </para>

    <para>
      The linux containers project, <command>lxc</command>, aims to
      use these new functionalities to provide a userspace container
      object which provides full resource isolation and resource
      control for an application or a system.
    </para>

    <para>
      The first objective of this project is to make life easier
      for the kernel developers involved in the containers project,
      and especially to continue working on the new
      Checkpoint/Restart features. <command>lxc</command> is small
      enough to easily manage a container with simple command lines
      and complete enough to be used for other purposes.
    </para>
  </refsect1>

  <refsect1>
    <title>Requirements</title>
    <para>
      <command>lxc</command> relies on a set of functionalities
      provided by the kernel which need to be active. Depending on
      which functionalities are missing, <command>lxc</command> will
      work with a restricted set of features or will simply fail.
    </para>

    <para>
      The following list gives the kernel features that must be
      enabled to get a fully featured container:
    </para>
    <programlisting>
      * General setup
        * Control Group support
          -> Namespace cgroup subsystem
          -> Freezer cgroup subsystem
          -> Cpuset support
          -> Simple CPU accounting cgroup subsystem
          -> Resource counters
            -> Memory resource controllers for Control Groups
        * Group CPU scheduler
          -> Basis for grouping tasks (Control Groups)
        * Namespaces support
          -> UTS namespace
          -> IPC namespace
          -> User namespace
          -> Pid namespace
          -> Network namespace
      * Device Drivers
        * Character devices
          -> Support multiple instances of devpts
        * Network device support
          -> MAC-VLAN support
          -> Virtual ethernet pair device
      * Networking
        * Networking options
          -> 802.1d Ethernet Bridging
      * Security options
        -> File POSIX Capabilities
    </programlisting>

    <para>

      The kernel version >= 2.6.27, as shipped with the distros,
      will work with <command>lxc</command>; it will provide fewer
      functionalities, but enough to be interesting.

      With the kernel 2.6.29, <command>lxc</command> is fully
      functional.

      The helper script <command>lxc-checkconfig</command> will give
      you information about your kernel configuration.
    </para>
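
    <para>
      For example, run the script to see which of the features listed
      above are available (assuming it is installed in @BINDIR@ along
      with the other commands):
      <programlisting>
        @BINDIR@/lxc-checkconfig
      </programlisting>
    </para>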

    <para>
      Before using <command>lxc</command>, your system should be
      configured with file capabilities, otherwise you will need to
      run the <command>lxc</command> commands as root.
    </para>
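
    <para>
      As a rough sketch only, file capabilities can be granted with
      the <command>setcap</command> utility from libcap; the
      capability list below is an assumption for illustration, not
      the reference set needed by each command:
      <programlisting>
        # illustration: grant file capabilities to the lxc-execute binary
        # (the capability list is an assumption; adapt it to your setup)
        setcap cap_sys_admin,cap_net_admin,cap_dac_override=ep @BINDIR@/lxc-execute
      </programlisting>
    </para>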

    <para>
      The control group can be mounted anywhere, eg:
      <command>mount -t cgroup cgroup /cgroup</command>.

      If you want to dedicate a specific cgroup mount point
      for <command>lxc</command>, that is to have different cgroups
      mounted at different places with different options but
      let <command>lxc</command> use one location, you can bind
      the mount point to the <option>lxc</option> name, eg:
      <command>mount -t cgroup lxc /cgroup4lxc</command> or
      <command>mount -t cgroup -o ns,cpuset,freezer,devices
      lxc /cgroup4lxc</command>

    </para>
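
    <para>
      To make such a dedicated mount persistent across reboots, an
      entry can be added to <filename>/etc/fstab</filename>; a sketch
      reusing the mount point and subsystems from the example above:
      <programlisting>
        # /etc/fstab entry mounting the cgroup hierarchy used by lxc at boot
        lxc  /cgroup4lxc  cgroup  ns,cpuset,freezer,devices  0 0
      </programlisting>
    </para>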

  </refsect1>

  <refsect1>
    <title>Functional specification</title>
    <para>
      A container is an object isolating some resources of the host
      for the application or system running in it.
    </para>
    <para>
      The application / system will be launched inside a
      container specified by a configuration that is either
      initially created or passed as a parameter of the start
      commands.
    </para>

    <para>How to run an application in a container?</para>
    <para>
      Before running an application, you should know which
      resources you want to isolate. The default configuration is to
      isolate the pids, the sysv ipc and the mount points. If you want
      to run a simple shell inside a container, a basic configuration
      is needed, especially if you want to share the rootfs. If you
      want to run an application like <command>sshd</command>, you
      should provide a new network stack and a new hostname. If you
      want to avoid conflicts with some files,
      eg. <filename>/var/run/httpd.pid</filename>, you should
      remount <filename>/var/run</filename> with an empty
      directory. If you want to avoid conflicts in all cases,
      you can specify a rootfs for the container. The rootfs can be a
      directory tree, previously bind mounted with the initial rootfs,
      so you can still use your distro but with your
      own <filename>/etc</filename> and <filename>/home</filename>.
    </para>
    <para>
      Here is an example of a directory tree
      for <command>sshd</command>:
    <programlisting>
[root@lxc sshd]$ tree -d rootfs

rootfs
|-- bin
|-- dev
|   |-- pts
|   `-- shm
|       `-- network
|-- etc
|   `-- ssh
|-- lib
|-- proc
|-- root
|-- sbin
|-- sys
|-- usr
`-- var
    |-- empty
    |   `-- sshd
    |-- lib
    |   `-- empty
    |       `-- sshd
    `-- run
        `-- sshd
    </programlisting>

      and the mount points file associated with it:
      <programlisting>
        [root@lxc sshd]$ cat fstab

        /lib /home/root/sshd/rootfs/lib none ro,bind 0 0
        /bin /home/root/sshd/rootfs/bin none ro,bind 0 0
        /usr /home/root/sshd/rootfs/usr none ro,bind 0 0
        /sbin /home/root/sshd/rootfs/sbin none ro,bind 0 0
      </programlisting>
    </para>
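
    <para>
      A minimal configuration sketch tying this directory tree and the
      fstab file together could look as follows; the paths and the
      hostname are assumptions based on the example above, and the
      exact syntax of the keys is described in
      <citerefentry>
        <refentrytitle><filename>lxc.conf</filename></refentrytitle>
        <manvolnum>5</manvolnum>
      </citerefentry>:
      <programlisting>
        # sketch of a configuration for the sshd application container
        lxc.utsname = sshd
        lxc.rootfs = /home/root/sshd/rootfs
        lxc.mount = /home/root/sshd/fstab
      </programlisting>
    </para>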

    <para>How to run a system in a container?</para>

    <para>Running a system inside a container is paradoxically easier
      than running an application. Why? Because you don't have to care
      about which resources are to be isolated: everything needs to be
      isolated. The other resources are specified as being isolated but
      without configuration because the container will set them
      up, eg. the ipv4 address will be set up by the system container
      init scripts. Here is an example of the mount points file:

      <programlisting>
        [root@lxc debian]$ cat fstab

        /dev /home/root/debian/rootfs/dev none bind 0 0
        /dev/pts /home/root/debian/rootfs/dev/pts none bind 0 0
      </programlisting>

      More information can be added to the container to facilitate the
      configuration. For example, to make the host's
      <filename>resolv.conf</filename> file accessible from the
      container:

      <programlisting>
        /etc/resolv.conf /home/root/debian/rootfs/etc/resolv.conf none bind 0 0
      </programlisting>
    </para>
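
    <para>
      A system container will usually also want its own network
      stack. A sketch of the network part of its configuration,
      assuming a bridge named <filename>br0</filename> already exists
      on the host (the bridge name is an assumption; the ipv4 address
      itself is left to the container's init scripts as explained
      above):
      <programlisting>
        # sketch: give the container a veth interface attached to br0
        lxc.utsname = debian
        lxc.network.type = veth
        lxc.network.link = br0
        lxc.network.flags = up
      </programlisting>
    </para>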

    <refsect2>
      <title>Container life cycle</title>
      <para>
        When the container is created, it contains the configuration
        information. When a process is launched, the container goes
        through the starting and running states. When the last process
        running inside the container exits, the container is stopped.
      </para>
      <para>
        In case of failure when the container is initialized, it will
        pass through the aborting state.
      </para>

      <programlisting>
<![CDATA[
   ---------
  | STOPPED |<----------------
   ---------                 |
       |                     |
     start                   |
       |                     |
       V                     |
   ----------                |
  | STARTING |--error-       |
   ----------        |       |
       |             |       |
       V             V       |
   ---------    ----------   |
  | RUNNING |  | ABORTING |  |
   ---------    ----------   |
       |             |       |
  no process         |       |
       |             |       |
       V             |       |
   ----------        |       |
  | STOPPING |<-------       |
   ----------                |
       |                     |
        ----------------------
]]>
      </programlisting>
    </refsect2>

    <refsect2>
      <title>Configuration</title>
      <para>The container is configured through a configuration
        file; the format of the configuration file is described in
        <citerefentry>
          <refentrytitle><filename>lxc.conf</filename></refentrytitle>
          <manvolnum>5</manvolnum>
        </citerefentry>
      </para>
    </refsect2>

    <refsect2>
      <title>Creating / Destroying container
        (persistent container)</title>
      <para>
        A persistent container object can be
        created via the <command>lxc-create</command>
        command. It takes the container name as a parameter, plus an
        optional configuration file and template.
        The name is used by the different
        commands to refer to this
        container. The <command>lxc-destroy</command> command will
        destroy the container object.
        <programlisting>
          lxc-create -n foo
          lxc-destroy -n foo
        </programlisting>
      </para>
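
      <para>
        For example, to create a container from a configuration file
        written beforehand (the path is only an illustration):
        <programlisting>
          lxc-create -n foo -f /etc/lxc/foo.conf
        </programlisting>
      </para>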
    </refsect2>

    <refsect2>
      <title>Volatile container</title>
      <para>It is not mandatory to create a container object
        before starting it.
        The container can be directly started with a
        configuration file as parameter.
      </para>
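
      <para>
        For example, to start a volatile container directly from a
        configuration file (again, the path is only an illustration):
        <programlisting>
          lxc-start -n foo -f /etc/lxc/foo.conf
        </programlisting>
      </para>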
    </refsect2>

    <refsect2>
      <title>Starting / Stopping container</title>
      <para>When the container has been created, it is ready to run an
        application / system.
        This is the purpose of the <command>lxc-execute</command> and
        <command>lxc-start</command> commands.
        If the container was not created before
        starting the application, the container will use the
        configuration file passed as a parameter to the command,
        and if there is no such parameter either, then
        it will use a default isolation.
        When the application ends, the container will be stopped as well,
        but if needed the <command>lxc-stop</command> command can
        be used to kill the still-running application.
      </para>

      <para>
        Running an application inside a container is not exactly the
        same thing as running a system. For this reason, there are two
        different commands to run an application in a container:
        <programlisting>
          lxc-execute -n foo [-f config] /bin/bash
          lxc-start -n foo [-f config] [/bin/bash]
        </programlisting>
      </para>

      <para>
        The <command>lxc-execute</command> command will run the
        specified command inside the container via an intermediate
        process, <command>lxc-init</command>.
        After launching the specified command, lxc-init will wait for
        it and for all other reparented processes to exit
        (this allows daemons to be supported in the container).
        In other words, in the
        container, <command>lxc-init</command> has the pid 1 and the
        first process of the application has the pid 2.
      </para>

      <para>
        The <command>lxc-start</command> command will run the specified
        command directly inside the container.
        The pid of the first process is 1. If no command is
        specified, <command>lxc-start</command> will
        run <filename>/sbin/init</filename>.
      </para>

      <para>
        To summarize, <command>lxc-execute</command> is for running
        an application and <command>lxc-start</command> is better suited for
        running a system.
      </para>

      <para>
        If the application is no longer responding, is inaccessible or is
        not able to finish by itself, a
        wild <command>lxc-stop</command> command will kill all the
        processes in the container without pity.
        <programlisting>
          lxc-stop -n foo
        </programlisting>
      </para>
    </refsect2>

    <refsect2>
      <title>Connect to an available tty</title>
      <para>
        If the container is configured with ttys, it is possible
        to access it through them. It is up to the container to
        provide a set of available ttys to be used by the following
        command. When the tty is lost, it is possible to reconnect to it
        without logging in again.
        <programlisting>
          lxc-console -n foo -t 3
        </programlisting>
      </para>
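
      <para>
        The number of ttys the container makes available is part of
        its configuration; a sketch using the <option>lxc.tty</option>
        key described in lxc.conf(5):
        <programlisting>
          # sketch: let the container provide 4 ttys for lxc-console
          lxc.tty = 4
        </programlisting>
      </para>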
    </refsect2>

    <refsect2>
      <title>Freeze / Unfreeze container</title>
      <para>
        Sometimes, it is useful to stop all the processes belonging to
        a container, eg. for job scheduling. The commands:
        <programlisting>
          lxc-freeze -n foo
        </programlisting>

        will put all the processes in an uninterruptible state and

        <programlisting>
          lxc-unfreeze -n foo
        </programlisting>

        will resume them.
      </para>

      <para>
        This feature is enabled if the cgroup freezer is enabled in the
        kernel.
      </para>
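
      <para>
        The current freezer state of the container can be read back
        through its control group, for instance with
        <command>lxc-cgroup</command> (a sketch, assuming the freezer
        subsystem is part of the mounted cgroup hierarchy):
        <programlisting>
          # reports FROZEN after lxc-freeze and THAWED after lxc-unfreeze
          lxc-cgroup -n foo freezer.state
        </programlisting>
      </para>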
    </refsect2>

    <refsect2>
      <title>Getting information about container</title>
      <para>When there are a lot of containers, it is hard to follow
        what has been created or destroyed, what is running or which
        pids are running in a specific container. For this reason, the
        following commands may be useful:
        <programlisting>
          lxc-ls
          lxc-ps --name foo
          lxc-info -n foo
        </programlisting>
      </para>
      <para>
        <command>lxc-ls</command> lists the containers of the
        system. The command is a script built on top
        of <command>ls</command>, so it accepts the options of the
        <command>ls</command> command, eg:
        <programlisting>
          lxc-ls -C1
        </programlisting>
        will display the list of containers in one column, or:
        <programlisting>
          lxc-ls -l
        </programlisting>
        will display the list of containers and their permissions.
      </para>

      <para>
        <command>lxc-ps</command> will display the pids for a specific
        container. Like <command>lxc-ls</command>, <command>lxc-ps</command>
        is built on top of <command>ps</command> and accepts the same
        options, eg:
        <programlisting>lxc-ps --name foo --forest</programlisting>
        will display the process hierarchy for the processes
        belonging to the 'foo' container.

        <programlisting>lxc-ps --lxc</programlisting>
        will display all the containers and their processes.
      </para>

      <para>
        <command>lxc-info</command> gives information about a specific
        container; at present, only the state of the container is
        displayed.
      </para>

      <para>
        Here is an example of how these commands can be combined to
        list all the containers and retrieve their state.
        <programlisting>
          for i in $(lxc-ls -1); do
            lxc-info -n $i
          done
        </programlisting>

        And to display all the pids of all the containers:

        <programlisting>
          for i in $(lxc-ls -1); do
            lxc-ps --name $i --forest
          done
        </programlisting>

      </para>

      <para>
        <command>lxc-netstat</command> displays network information for
        a specific container. This command is built on top of
        the <command>netstat</command> command and will accept its
        options.
      </para>

      <para>
        The following command will display the socket information for
        the container 'foo'.
        <programlisting>
          lxc-netstat -n foo -tano
        </programlisting>
      </para>

    </refsect2>

    <refsect2>
      <title>Monitoring container</title>
      <para>It is sometimes useful to track the states of a container,
        for example to monitor it or just to wait for a specific
state in a script.
</para>
<para>
The <command>lxc-monitor</command> command will monitor one or
several containers. The parameter of this command accepts a
regular expression, for example:
<programlisting>
lxc-monitor -n "foo|bar"
</programlisting>
will monitor the states of containers named 'foo' and 'bar', and:
<programlisting>
lxc-monitor -n ".*"
</programlisting>
will monitor all the containers.
</para>
<para>
For a container 'foo' that starts, does some work and exits,
the output will be of the form:
<programlisting>
'foo' changed state to [STARTING]
'foo' changed state to [RUNNING]
'foo' changed state to [STOPPING]
'foo' changed state to [STOPPED]
</programlisting>
</para>
<para>
The <command>lxc-wait</command> command will wait for a specific
state change and then exit. This is useful for scripting, to
synchronize the launch or the termination of a container. The
parameter is an ORed combination of different states. The
following example shows how to wait for a container that was
started in the background.
<programlisting>
<![CDATA[
# launch lxc-wait in background
lxc-wait -n foo -s STOPPED &
LXC_WAIT_PID=$!
# this command goes in background
lxc-execute -n foo mydaemon &
# block until the lxc-wait exits
# and lxc-wait exits when the container
# is STOPPED
wait $LXC_WAIT_PID
echo "'foo' is finished"
]]>
</programlisting>
</para>
</refsect2>
<refsect2>
<title>Setting the control group for container</title>
<para>The container is tied to the control groups: when a
container is started, a control group is created and associated
with it. The control group properties can be read and modified
while the container is running by using the lxc-cgroup command.
</para>
<para>
The <command>lxc-cgroup</command> command is used to set or get a
control group subsystem which is associated with a
container. The subsystem name is handled by the user; the
command won't do any syntax checking on the subsystem name, and if
the subsystem name does not exist, the command will fail.
</para>
<para>
<programlisting>
lxc-cgroup -n foo cpuset.cpus
</programlisting>
will display the content of this subsystem.
<programlisting>
lxc-cgroup -n foo cpu.shares 512
</programlisting>
will set the subsystem to the specified value.
</para>
</refsect2>
</refsect1>
<refsect1>
<title>Bugs</title>
<para><command>lxc</command> is still in development, so the
command syntax and the API can change. Version 1.0.0 will be
the frozen version.</para>
</refsect1>
&seealso;
<refsect1>
<title>Author</title>
<para>Daniel Lezcano <email>daniel.lezcano@free.fr</email></para>
</refsect1>
</refentry>
<!-- Keep this comment at the end of the file Local variables: mode:
sgml sgml-omittag:t sgml-shorttag:t sgml-minimize-attributes:nil
sgml-always-quote-attributes:t sgml-indent-step:2 sgml-indent-data:t
sgml-parent-document:nil sgml-default-dtd-file:nil
sgml-exposed-tags:nil sgml-local-catalogs:nil
sgml-local-ecat-files:nil End: -->