<!--
lxc: linux Container library
(C) Copyright IBM Corp. 2007, 2008
Authors:
Daniel Lezcano <dlezcano at fr.ibm.com>
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-->
<!DOCTYPE refentry PUBLIC "-//Davenport//DTD DocBook V3.0//EN" [
<!ENTITY seealso SYSTEM "@builddir@/see_also.sgml">
]>
<refentry>
<docinfo>
<date>@LXC_GENERATE_DATE@</date>
</docinfo>
<refmeta>
<refentrytitle>lxc</refentrytitle>
<manvolnum>7</manvolnum>
<refmiscinfo>
Version @LXC_MAJOR_VERSION@.@LXC_MINOR_VERSION@.@LXC_MICRO_VERSION@
</refmiscinfo>
</refmeta>
<refnamediv>
<refname>lxc</refname>
<refpurpose>
linux containers
</refpurpose>
</refnamediv>
<refsect1>
<title>Quick start</title>
<para>
You are in a hurry, and you don't want to read this man page? Ok,
without warranty, here is a command to launch a shell inside a
container with a predefined configuration template. It may
work.
<command>
@BINDIR@/lxc-execute -n foo -f @SYSCONFDIR@/lxc/lxc-macvlan.conf /bin/bash
</command>
</para>
</refsect1>
<refsect1>
<title>Overview</title>
<para>
Container technology is actively being pushed into the
mainstream Linux kernel. It provides resource management
through control groups, aka process containers, and resource
isolation through namespaces.
</para>
<para>
Linux containers, <command>lxc</command>, aims to use these
new functionalities to provide a userspace container object
which provides full resource isolation and resource control for
an application or a system.
</para>
<para>
The first objective of this project is to make life easier
for the kernel developers involved in the containers project,
especially those working on the new Checkpoint/Restart
features. <command>lxc</command> is small enough to easily
manage a container with simple command lines and complete enough
to be used for other purposes.
</para>
</refsect1>
<refsect1>
<title>Requirements</title>
<para>
<command>lxc</command> relies on a set of functionalities
provided by the kernel which need to be active. Depending on
which functionalities are missing, <command>lxc</command> will
work with a restricted set of features or will simply
fail.
</para>
<para>
The following list gives the kernel features which must be
enabled for a fully featured container:
</para>
<programlisting>
* General setup
* Control Group support
-> Namespace cgroup subsystem
-> Freezer cgroup subsystem
-> Cpuset support
-> Simple CPU accounting cgroup subsystem
-> Resource counters
-> Memory resource controllers for Control Groups
* Group CPU scheduler
-> Basis for grouping tasks (Control Groups)
* Namespaces support
-> UTS namespace
-> IPC namespace
-> User namespace
-> Pid namespace
-> Network namespace
* Security options
-> File POSIX Capabilities
</programlisting>
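<para>
On a distro kernel, these options can be quickly checked against
the installed configuration file (a minimal sketch, assuming the
configuration is available under <filename>/boot</filename>; the
option names may vary slightly between kernel versions):
<programlisting>
grep -E 'CONFIG_(CGROUPS|CGROUP_FREEZER|CPUSETS|NAMESPACES|UTS_NS|IPC_NS|USER_NS|PID_NS|NET_NS)=' /boot/config-$(uname -r)
</programlisting>
</para>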
<para>
For the moment, the easiest way to have all these features in
the kernel is to use the git tree at:
<systemitem>
git://git.kernel.org/pub/scm/linux/kernel/git/daveh/linux-2.6-lxc.git
</systemitem>
But a kernel version >= 2.6.27, as shipped with the distros, may
work with <command>lxc</command>; it will have fewer
functionalities, but enough to be interesting.
<command>lxc</command> is planned to be fully functional with
kernel version 2.6.29.
</para>
<para>
Before using <command>lxc</command>, your system should be
configured with file capabilities, otherwise you will need
to run the <command>lxc</command> commands as root. The
control group filesystem can be mounted anywhere, eg:
<command>mount -t cgroup cgroup /cgroup</command>
</para>
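<para>
For example, to create the mount point and mount the control
group filesystem (a minimal sketch; the mount point location is
arbitrary):
<programlisting>
mkdir -p /cgroup
mount -t cgroup cgroup /cgroup
</programlisting>
</para>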
</refsect1>
<refsect1>
<title>Functional specification</title>
<para>
A container is an object in which the configuration is
persistent. The application will be launched inside this
container and will use the configuration which was previously
created.
</para>
<para>How to run an application in a container?</para>
<para>
Before running an application, you should know which
resources you want to isolate. The default configuration is to
isolate the pids, the sysv ipc and the mount points. If you want
to run a simple shell inside a container, a basic configuration
is needed, especially if you want to share the rootfs. If you
want to run an application like <command>sshd</command>, you
should provide a new network stack and a new hostname. If you
want to avoid conflicts with some files,
eg. <filename>/var/run/httpd.pid</filename>, you should
remount <filename>/var/run</filename> with an empty
directory. If you want to avoid conflicts in all cases,
you can specify a rootfs for the container. The rootfs can be a
directory tree, previously bind mounted from the initial rootfs,
so you can still use your distro but with your
own <filename>/etc</filename> and <filename>/home</filename>.
<para>
Here is an example of a directory tree
for <command>sshd</command>:
<programlisting>
[root@lxc sshd]$ tree -d rootfs
rootfs
|-- bin
|-- dev
| |-- pts
| `-- shm
| `-- network
|-- etc
| `-- ssh
|-- lib
|-- proc
|-- root
|-- sbin
|-- sys
|-- usr
`-- var
|-- empty
| `-- sshd
|-- lib
| `-- empty
| `-- sshd
`-- run
`-- sshd
</programlisting>
and the mount points file associated with it:
<programlisting>
[root@lxc sshd]$ cat fstab
/lib /home/root/sshd/rootfs/lib none ro,bind 0 0
/bin /home/root/sshd/rootfs/bin none ro,bind 0 0
/usr /home/root/sshd/rootfs/usr none ro,bind 0 0
/sbin /home/root/sshd/rootfs/sbin none ro,bind 0 0
</programlisting>
</para>
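<para>
A configuration file can then tie the rootfs and the mount
points file together. Here is a minimal sketch, assuming the
lxc.utsname, lxc.rootfs and lxc.mount keys described in
<citerefentry>
<refentrytitle><filename>lxc.conf</filename></refentrytitle>
<manvolnum>5</manvolnum>
</citerefentry>:
<programlisting>
lxc.utsname = sshd
lxc.rootfs = /home/root/sshd/rootfs
lxc.mount = /home/root/sshd/fstab
</programlisting>
</para>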
<para>How to run a system in a container?</para>
<para>Running a system inside a container is paradoxically easier
than running an application. Why? Because you don't have to care
about which resources are to be isolated: everything needs to be
isolated, except <filename>/dev</filename>, which needs to be
remounted in the container rootfs. The other resources are
specified as isolated but without configuration, because the
container will set them up itself, eg. the ipv4 address will be
set up by the system container init scripts. Here is an example
of the mount points file:
<programlisting>
[root@lxc debian]$ cat fstab
/dev /home/root/debian/rootfs/dev none bind 0 0
/dev/pts /home/root/debian/rootfs/dev/pts none bind 0 0
</programlisting>
More entries can be added to facilitate the configuration. For
example, to make the resolv.conf file belonging to the host
accessible from the container:
<programlisting>
/etc/resolv.conf /home/root/debian/rootfs/etc/resolv.conf none bind 0 0
</programlisting>
</para>
<refsect2>
<title>Container life cycle</title>
<para>
When the container is created, it contains the configuration
information. When a process is launched, the container goes
through the starting and then the running states. When the last
process running inside the container exits, the container is
stopped.
</para>
<para>
If the container fails to initialize, it will pass through
the aborting state.
</para>
<programlisting>
---------
| STOPPED |<---------------
--------- |
| |
start |
| |
V |
---------- |
| STARTING |--error- |
---------- | |
| | |
V V |
--------- ---------- |
| RUNNING | | ABORTING | |
--------- ---------- |
| | |
no process | |
| | |
V | |
---------- | |
| STOPPING |<------- |
---------- |
| |
---------------------
</programlisting>
</refsect2>
<refsect2>
<title>Configuration</title>
<para>The container is configured through a configuration
file; the format of the configuration file is described in
<citerefentry>
<refentrytitle><filename>lxc.conf</filename></refentrytitle>
<manvolnum>5</manvolnum>
</citerefentry>
</para>
</refsect2>
<refsect2>
<title>Creating / Destroying the containers</title>
<para>
The container is created via the <command>lxc-create</command>
command. It takes the container name as a parameter and an
optional configuration file. The name is used by the different
commands to refer to this
container. The <command>lxc-destroy</command> command will
destroy the container object.
<programlisting>
lxc-create -n foo
lxc-destroy -n foo
</programlisting>
</para>
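<para>
For example, to create the container with the predefined macvlan
configuration template used in the quick start above:
<programlisting>
lxc-create -n foo -f @SYSCONFDIR@/lxc/lxc-macvlan.conf
</programlisting>
</para>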
</refsect2>
<refsect2>
<title>Starting / Stopping a container</title>
<para>When the container has been created, it is ready to run an
application or a system. When the application has to be
destroyed, the container can be stopped; that will kill all the
processes of the container.</para>
<para>
Running an application inside a container is not exactly the
same thing as running a system. For this reason, there are two
commands to run an application in a container:
<programlisting>
lxc-execute -n foo [-f config] /bin/bash
lxc-start -n foo [/bin/bash]
</programlisting>
</para>
<para>
The <command>lxc-execute</command> command will run the
specified command in a container, but it will mount
<filename>/proc</filename> and will automatically create and
destroy the container if it does not exist. It will furthermore
create an intermediate process,
<command>lxc-init</command>, which is in charge of launching
the specified command; this makes it possible to support daemons
in the container. In other words, in the
container <command>lxc-init</command> has pid 1 and the
first process of the application has pid 2.
</para>
<para>
The <command>lxc-start</command> command will run the specified
command in the container, doing nothing other than using the
configuration specified by <command>lxc-create</command>.
The pid of the first process is 1. If no command is
specified, <command>lxc-start</command> will
run <filename>/sbin/init</filename>.
</para>
<para>
To summarize, <command>lxc-execute</command> is for running
an application and <command>lxc-start</command> is for
running a system.
</para>
<para>
If the application is no longer responding, is inaccessible or
is not able to finish by itself, a
wild <command>lxc-stop</command> command will kill all the
processes in the container without pity.
<programlisting>
lxc-stop -n foo
</programlisting>
</para>
</refsect2>
<refsect2>
<title>Connect to an available tty</title>
<para>
If the container is configured with ttys, it is possible
to access it through them. It is up to the container to
provide a set of available ttys to be used by the following
command. When the tty is lost, it is possible to reconnect to it
without logging in again.
<programlisting>
lxc-console -n foo -t 3
</programlisting>
</para>
</refsect2>
<refsect2>
<title>Freeze / Unfreeze a container</title>
<para>
Sometimes, it is useful to stop all the processes belonging to
a container, eg. for job scheduling. The commands:
<programlisting>
lxc-freeze -n foo
</programlisting>
will put all the processes in an uninterruptible state and
<programlisting>
lxc-unfreeze -n foo
</programlisting>
will resume all the tasks.
</para>
<para>
This feature is only available if the cgroup freezer is enabled
in the kernel.
</para>
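<para>
For example, a job scheduler could suspend a container while a
higher priority job runs, then resume it (a minimal sketch using
only the two commands above):
<programlisting>
lxc-freeze -n foo
# ... run the higher priority job here ...
lxc-unfreeze -n foo
</programlisting>
</para>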
</refsect2>
<refsect2>
<title>Getting information about the container</title>
<para>When there are a lot of containers, it is hard to keep track
of what has been created or destroyed, what is running, or which
pids are running in a specific container. For this reason, the
following commands give this information:
<programlisting>
lxc-ls
lxc-ps -n foo
lxc-info -n foo
</programlisting>
</para>
<para>
<command>lxc-ls</command> lists the containers of the
system. The command is a script built on top
of <command>ls</command>, so it accepts the options of
the <command>ls</command> command, eg:
<programlisting>
lxc-ls -C1
</programlisting>
will display the list of containers in one column, or:
<programlisting>
lxc-ls -l
</programlisting>
will display the list of containers and their permissions.
</para>
<para>
<command>lxc-ps</command> will display the pids for a specific
container. Like <command>lxc-ls</command>, <command>lxc-ps</command>
is built on top of <command>ps</command> and accepts the same
options, eg:
<programlisting>
lxc-ps -n foo --forest
</programlisting>
will display the process hierarchy for the container 'foo'.
</para>
<para>
<command>lxc-info</command> gives information about a specific
container; at present, only the state of the container is
displayed.
</para>
<para>
Here is an example of how these commands can be combined to
list all the containers and retrieve their state.
<programlisting>
for i in $(lxc-ls -1); do
lxc-info -n $i
done
</programlisting>
And to display the pids of all the containers:
<programlisting>
for i in $(lxc-ls -1); do
lxc-ps -n $i --forest
done
</programlisting>
</para>
<para>
<command>lxc-netstat</command> displays network information for
a specific container. This command is built on top of
the <command>netstat</command> command and will accept its
options.
</para>
<para>
The following command will display the socket information for
the container 'foo'.
<programlisting>
lxc-netstat -n foo -tano
</programlisting>
</para>
</refsect2>
<refsect2>
<title>Monitoring the containers</title>
<para>It is sometimes useful to track the state of a container,
for example to monitor it or just to wait for a specific
state in a script.
</para>
<para>
The <command>lxc-monitor</command> command will monitor one or
several containers. The parameter of this command accepts a
regular expression, for example:
<programlisting>
lxc-monitor -n "foo|bar"
</programlisting>
will monitor the states of containers named 'foo' and 'bar', and:
<programlisting>
lxc-monitor -n ".*"
</programlisting>
will monitor all the containers.
</para>
<para>
For a container 'foo' starting, doing some work and exiting,
the output will be in the form:
<programlisting>
'foo' changed state to [STARTING]
'foo' changed state to [RUNNING]
'foo' changed state to [STOPPING]
'foo' changed state to [STOPPED]
</programlisting>
</para>
<para>
The <command>lxc-wait</command> command will wait for a specific
state change and then exit. This is useful in scripts to
synchronize on the launch or the termination of a container. The
parameter is an ORed combination of different states. The
following example shows how to wait for a container which was
launched in the background.
<programlisting>
# launch lxc-wait in background
lxc-wait -n foo -s STOPPED &
LXC_WAIT_PID=$!
# this command goes in background
lxc-execute -n foo mydaemon &
# block until the lxc-wait exits
# and lxc-wait exits when the container
# is STOPPED
wait $LXC_WAIT_PID
echo "'foo' is finished"
</programlisting>
</para>
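<para>
Several states can be ORed in a single argument; for example, to
return as soon as the container 'foo' is either running or
already stopped:
<programlisting>
lxc-wait -n foo -s 'RUNNING|STOPPED'
</programlisting>
</para>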
</refsect2>
<refsect2>
<title>Setting the control group for a container</title>
<para>The container is tied to the control groups: when a
container is started, a control group is created and associated
with it. The control group properties can be read and modified
while the container is running by using
the <command>lxc-cgroup</command> command.
</para>
<para>
The <command>lxc-cgroup</command> command is used to get or set
a control group subsystem property associated with a
container. The subsystem name is handled by the user; the
command won't do any syntax checking on it, and if the
subsystem name does not exist, the command will fail.
</para>
<para>
<programlisting>
lxc-cgroup -n foo cpuset.cpus
</programlisting>
will display the content of this subsystem.
<programlisting>
lxc-cgroup -n foo cpu.shares 512
</programlisting>
will set the subsystem to the specified value.
</para>
</refsect2>
</refsect1>
<refsect1>
<title>Bugs</title>
<para><command>lxc</command> is still in development, so the
command syntax and the API may change. Version 1.0.0 will be
the frozen version.</para>
</refsect1>
&seealso;
<refsect1>
<title>Author</title>
<para>Daniel Lezcano <email>daniel.lezcano@free.fr</email></para>
</refsect1>
</refentry>
<!-- Keep this comment at the end of the file Local variables: mode:
sgml sgml-omittag:t sgml-shorttag:t sgml-minimize-attributes:nil
sgml-always-quote-attributes:t sgml-indent-step:2 sgml-indent-data:t
sgml-parent-document:nil sgml-default-dtd-file:nil
sgml-exposed-tags:nil sgml-local-catalogs:nil
sgml-local-ecat-files:nil End: -->