lxc.conf.sgml.in revision 2f3f41d0d586bbf4d16969ea13074eddf761d1d1
<!--
lxc: linux Container library
(C) Copyright IBM Corp. 2007, 2008
Authors:
Daniel Lezcano <dlezcano at fr.ibm.com>
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-->
<!DOCTYPE refentry PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
<!ENTITY seealso SYSTEM "@builddir@/see_also.sgml">
]>
<refentry>
<docinfo><date>@LXC_GENERATE_DATE@</date></docinfo>
<refmeta>
<refentrytitle>lxc.conf</refentrytitle>
<manvolnum>5</manvolnum>
</refmeta>
<refnamediv>
<refname>lxc.conf</refname>
<refpurpose>
linux container configuration file
</refpurpose>
</refnamediv>
<refsect1>
<title>Description</title>
<para>
Linux containers (<command>lxc</command>) are always created
before being used. This creation defines a set of system
resources to be virtualized or isolated when a process uses
the container. By default, the pids, sysv ipc and mount points
are virtualized and isolated. The other system resources are
shared across containers unless they are explicitly defined in
the configuration file. For example, if there is no network
configuration, the network will be shared between the creator of
the container and the container itself; but if the network is
specified, a new network stack is created for the container and
the container can no longer use the network of its ancestor.
</para>
<para>
The configuration file defines the different system resources to
be assigned for the container. At present, the utsname, the
network, the mount points, the root file system and the control
groups are supported.
</para>
<para>
Each option in the configuration file has the form <command>key
= value</command> and fits on a single line. A line starting
with the '#' character is a comment.
</para>
<refsect2>
<title>Architecture</title>
<para>
Allows setting the architecture for the container. For example,
set a 32-bit architecture for a container running 32-bit
binaries on a 64-bit host. This fixes the container scripts
which rely on the architecture to do some work, such as
downloading packages.
</para>
<variablelist>
<varlistentry>
<term>
<option>lxc.arch</option>
</term>
<listitem>
<para>
Specify the architecture for the container.
</para>
<para>
Valid options are
<option>x86</option>,
<option>i686</option>,
<option>x86_64</option>,
<option>amd64</option>
</para>
</listitem>
</varlistentry>
</variablelist>
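<para>
For example, to run 32-bit binaries in a container on a 64-bit
host, one could use:
</para>
<programlisting>
lxc.arch = x86
</programlisting>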
</refsect2>
<refsect2>
<title>Hostname</title>
<para>
The utsname section defines the hostname to be set for the
container. That means the container can set its own hostname
without changing the one from the system. That makes the
hostname private for the container.
</para>
<variablelist>
<varlistentry>
<term>
<option>lxc.utsname</option>
</term>
<listitem>
<para>
specify the hostname for the container
</para>
</listitem>
</varlistentry>
</variablelist>
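<para>
For example (the container name below is only illustrative):
</para>
<programlisting>
lxc.utsname = mycontainer
</programlisting>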
</refsect2>
<refsect2>
<title>Stop signal</title>
<para>
Allows specifying the signal name or number sent by lxc-stop to
shut down the container. Different init systems may use
different signals to perform a clean shutdown sequence. This
option allows the signal to be specified in kill(1) fashion,
e.g. SIGKILL, SIGRTMIN+14, SIGRTMAX-10 or a plain number.
</para>
<variablelist>
<varlistentry>
<term>
<option>lxc.stopsignal</option>
</term>
<listitem>
<para>
specify the signal used to stop the container
</para>
</listitem>
</varlistentry>
</variablelist>
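<para>
For example, using one of the kill(1)-style names mentioned
above (the signal chosen here is only illustrative):
</para>
<programlisting>
lxc.stopsignal = SIGRTMIN+14
</programlisting>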
</refsect2>
<refsect2>
<title>Network</title>
<para>
The network section defines how the network is virtualized in
the container. The network virtualization acts at layer
two. In order to use the network virtualization, parameters
must be specified to define the network interfaces of the
container. Several virtual interfaces can be assigned and used
in a container even if the system has only one physical
network interface.
</para>
<variablelist>
<varlistentry>
<term>
<option>lxc.network.type</option>
</term>
<listitem>
<para>
specify what kind of network virtualization to be used
for the container. Each time
a <option>lxc.network.type</option> field is found, a new
round of network configuration begins. In this way,
several network virtualization types can be specified
for the same container, as well as assigning several
network interfaces to one container. The different
virtualization types can be:
</para>
<para>
<option>empty:</option> will create only the loopback
interface.
</para>
<para>
<option>veth:</option> a peer network device is created
with one side assigned to the container and the other
side attached to a bridge specified by
the <option>lxc.network.link</option> option. If the bridge is
not specified, then the veth pair device will be created
but not attached to any bridge. Otherwise, the bridge
has to be set up beforehand on the
system; <command>lxc</command> won't handle any
configuration outside of the container. By
default <command>lxc</command> chooses a name for the
network device belonging to the outside of the
container; this name is handled
by <command>lxc</command>, but if you wish to handle
this name yourself, you can tell <command>lxc</command>
to set a specific name with
the <option>lxc.network.veth.pair</option> option.
<para>
<option>vlan:</option> a vlan interface is linked with
the interface specified by
the <option>lxc.network.link</option> option and assigned to
the container. The vlan identifier is specified with the
<option>lxc.network.vlan.id</option> option.
<para>
<option>macvlan:</option> a macvlan interface is linked
with the interface specified by
the <option>lxc.network.link</option> option and assigned to
the container.
<option>lxc.network.macvlan.mode</option> specifies the
mode the macvlan will use to communicate between
different macvlan interfaces on the same upper device. The accepted
modes are <option>private</option>, the device never
communicates with any other device on the same upper_dev (default),
<option>vepa</option>, the new Virtual Ethernet Port
Aggregator (VEPA) mode, it assumes that the adjacent
bridge returns all frames where both source and
destination are local to the macvlan port, i.e. the
bridge is set up as a reflective relay. Broadcast
frames coming in from the upper_dev get flooded to all
macvlan interfaces in VEPA mode, local frames are not
delivered locally, or <option>bridge</option>, it
provides the behavior of a simple bridge between
different macvlan interfaces on the same port. Frames
from one interface to another one get delivered directly
and are not sent out externally. Broadcast frames get
flooded to all other bridge ports and to the external
interface, but when they come back from a reflective
relay, we don't deliver them again. Since we know all
the MAC addresses, the macvlan bridge mode does not
require learning or STP like the bridge module does.
</para>
<para>
<option>phys:</option> an already existing interface
specified by the <option>lxc.network.link</option> option is
assigned to the container.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<option>lxc.network.flags</option>
</term>
<listitem>
<para>
specify an action to perform on the
network.
</para>
<para><option>up:</option> activates the interface.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<option>lxc.network.link</option>
</term>
<listitem>
<para>
specify the interface to be used for real network
traffic.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<option>lxc.network.name</option>
</term>
<listitem>
<para>
the interface name is dynamically allocated, but if
another name is needed because the configuration files
being used by the container use a generic name,
eg. eth0, this option will rename the interface in the
container.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<option>lxc.network.hwaddr</option>
</term>
<listitem>
<para>
the interface MAC address is dynamically allocated by
default to the virtual interface, but in some cases
setting it explicitly is needed, for example to resolve a MAC
address conflict or to always have the same link-local ipv6 address.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<option>lxc.network.ipv4</option>
</term>
<listitem>
<para>
specify the ipv4 address to assign to the virtualized
interface. Several lines specify several ipv4 addresses.
The address is in format x.y.z.t/m,
eg. 192.168.1.123/24. The broadcast address should be
specified on the same line, right after the ipv4
address.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<option>lxc.network.ipv4.gateway</option>
</term>
<listitem>
<para>
specify the ipv4 address to use as the gateway inside the
container. The address is in format x.y.z.t, eg.
192.168.1.123.
Can also have the special value <option>auto</option>,
which means to take the primary address from the bridge
interface (as specified by the
<option>lxc.network.link</option> option) and use that as
the gateway. <option>auto</option> is only available when
using the <option>veth</option> and
<option>macvlan</option> network types.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<option>lxc.network.ipv6</option>
</term>
<listitem>
<para>
specify the ipv6 address to assign to the virtualized
interface. Several lines specify several ipv6 addresses.
The address is in format x::y/m,
eg. 2003:db8:1:0::1/64.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<option>lxc.network.ipv6.gateway</option>
</term>
<listitem>
<para>
specify the ipv6 address to use as the gateway inside the
container. The address is in format x::y,
eg. 2003:db8:1:0::1
Can also have the special value <option>auto</option>,
which means to take the primary address from the bridge
interface (as specified by the
<option>lxc.network.link</option> option) and use that as
the gateway. <option>auto</option> is only available when
using the <option>veth</option> and
<option>macvlan</option> network types.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<option>lxc.network.script.up</option>
</term>
<listitem>
<para>
add a configuration option to specify a script to be
executed after creating and configuring the network used
from the host side. The following arguments are passed
to the script: container name and config section name
(net). Additional arguments depend on the config section
employing a script hook; the following are used by the
network system: execution context (up) and network type.
Depending on the network type, other arguments (such as
the host-side device name) may be passed.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<option>lxc.network.script.down</option>
</term>
<listitem>
<para>
add a configuration option to specify a script to be
executed before destroying the network used from the
host side. The following arguments are passed to the
script: container name and config section name (net).
Additional arguments depend on the config section
employing a script hook; the following are used by the
network system: execution context (down) and network type.
Depending on the network type, other arguments (such as
the host-side device name) may be passed.
</para>
</listitem>
</varlistentry>
</variablelist>
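<para>
As an illustration, the following assigns a macvlan interface in
bridge mode to the container, attached to the host interface
eth0 (the host interface name is only an example):
</para>
<programlisting>
lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge
lxc.network.flags = up
lxc.network.link = eth0
</programlisting>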
</refsect2>
<refsect2>
<title>New pseudo tty instance (devpts)</title>
<para>
For stricter isolation the container can have its own private
instance of the pseudo tty.
</para>
<variablelist>
<varlistentry>
<term>
<option>lxc.pts</option>
</term>
<listitem>
<para>
If set, the container will have a new pseudo tty
instance, making this private to it. The value specifies
the maximum number of pseudo ttys allowed for a pts
instance (this limitation is not implemented yet).
</para>
</listitem>
</varlistentry>
</variablelist>
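<para>
For example, to give the container its own devpts instance (the
value 1024 is only an example):
</para>
<programlisting>
lxc.pts = 1024
</programlisting>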
</refsect2>
<refsect2>
<title>Container system console</title>
<para>
If the container is configured with a root filesystem and the
inittab file is set up to use the console, you may want to specify
where the output of this console goes.
</para>
<variablelist>
<varlistentry>
<term>
<option>lxc.console</option>
</term>
<listitem>
<para>
Specify a path to a file where the console output will
be written. The keyword 'none' will simply disable the
console. This is dangerous: if the rootfs has a console
device file that the application can write to, the
messages will end up on the host.
</para>
</listitem>
</varlistentry>
</variablelist>
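<para>
For example, to send the container console output to a file on
the host (the path is only an example), or to disable it
entirely:
</para>
<programlisting>
lxc.console = /var/log/mycontainer.console
# or
lxc.console = none
</programlisting>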
</refsect2>
<refsect2>
<title>Console through the ttys</title>
<para>
If the container is configured with a root filesystem and the
inittab file is set up to launch a getty on the ttys, this
option specifies the number of ttys to be made available to the
container. The number of gettys in the inittab file of the
container should not be greater than the number of ttys
specified in this configuration file, otherwise the excess
getty sessions will die and respawn indefinitely, giving
annoying messages on the console.
</para>
<variablelist>
<varlistentry>
<term>
<option>lxc.tty</option>
</term>
<listitem>
<para>
Specify the number of ttys to make available to the
container.
</para>
</listitem>
</varlistentry>
</variablelist>
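<para>
For example, to match an inittab that spawns four gettys:
</para>
<programlisting>
lxc.tty = 4
</programlisting>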
</refsect2>
<refsect2>
<title>Console devices location</title>
<para>
LXC consoles are provided through Unix98 PTYs created on the
host and bind-mounted over the expected devices in the container.
By default, they are bind-mounted over <filename>/dev/console</filename>
and <filename>/dev/ttyN</filename>. This can prevent package upgrades
in the guest. Therefore you can specify a directory location (under
<filename>/dev</filename>) under which LXC will create the files and
bind-mount over them. These will then be symbolically linked to
<filename>/dev/console</filename> and <filename>/dev/ttyN</filename>.
A package upgrade can then succeed as it is able to remove and replace
the symbolic links.
</para>
<variablelist>
<varlistentry>
<term>
<option>lxc.devttydir</option>
</term>
<listitem>
<para>
Specify a directory under <filename>/dev</filename>
under which to create the container console devices.
</para>
</listitem>
</varlistentry>
</variablelist>
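<para>
For example, to have the console devices created under
<filename>/dev/lxc</filename> (the directory name is only an
example):
</para>
<programlisting>
lxc.devttydir = lxc
</programlisting>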
</refsect2>
<refsect2>
<title>/dev directory</title>
<para>
By default, lxc does nothing with the container's
<filename>/dev</filename>. This allows the container's
<filename>/dev</filename> to be set up as needed in the container
rootfs. If lxc.autodev is set to 1, then after mounting the container's
rootfs LXC will mount a fresh tmpfs under <filename>/dev</filename>
(limited to 100k) and fill in a minimal set of initial devices.
This is generally required when starting a container containing
a "systemd" based "init" but may be optional at other times. Additional
devices in the container's /dev directory may be created through the
use of the <option>lxc.hook.autodev</option> hook.
</para>
<variablelist>
<varlistentry>
<term>
<option>lxc.autodev</option>
</term>
<listitem>
<para>
Set this to 1 to have LXC mount and populate a minimal
<filename>/dev</filename> when starting the container.
</para>
</listitem>
</varlistentry>
</variablelist>
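<para>
For example, for a systemd based container:
</para>
<programlisting>
lxc.autodev = 1
</programlisting>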
</refsect2>
<refsect2>
<title>Enable kmsg symlink</title>
<para>
Enable creating /dev/kmsg as a symlink to /dev/console. This
defaults to 1.
</para>
<variablelist>
<varlistentry>
<term>
<option>lxc.kmsg</option>
</term>
<listitem>
<para>
Set this to 0 to disable the /dev/kmsg symlink.
</para>
</listitem>
</varlistentry>
</variablelist>
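<para>
For example, to disable the symlink:
</para>
<programlisting>
lxc.kmsg = 0
</programlisting>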
</refsect2>
<refsect2>
<title>Mount points</title>
<para>
The mount points section specifies the different places to be
mounted. These mount points will be private to the container
and won't be visible by the processes running outside of the
container. This is useful to mount /etc, /var or /home, for
example.
</para>
<variablelist>
<varlistentry>
<term>
<option>lxc.mount</option>
</term>
<listitem>
<para>
specify a file location in
the <filename>fstab</filename> format, containing the
mount information. If the rootfs is an image file or a
block device and the fstab is used to mount a point
somewhere in this rootfs, the path of the rootfs mount
point should be prefixed with the
<filename>@LXCROOTFSMOUNT@</filename> default path or
the value of <option>lxc.rootfs.mount</option> if
specified.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<option>lxc.mount.entry</option>
</term>
<listitem>
<para>
specify a mount point corresponding to a line in the
fstab format.
</para>
</listitem>
</varlistentry>
</variablelist>
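<para>
For example, the container could use a private fstab file and an
additional inline entry (the paths below are only examples):
</para>
<programlisting>
lxc.mount = /var/lib/lxc/mycontainer/fstab
lxc.mount.entry = proc proc proc nodev,noexec,nosuid 0 0
</programlisting>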
</refsect2>
<refsect2>
<title>Root file system</title>
<para>
The root file system of the container can be different than that
of the host system.
</para>
<variablelist>
<varlistentry>
<term>
<option>lxc.rootfs</option>
</term>
<listitem>
<para>
specify the root file system for the container. It can
be an image file, a directory or a block device. If not
specified, the container shares its root file system
with the host.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<option>lxc.rootfs.mount</option>
</term>
<listitem>
<para>
where to recursively bind <option>lxc.rootfs</option>
before pivoting. This is to ensure success of the
<citerefentry>
<refentrytitle><command>pivot_root</command></refentrytitle>
<manvolnum>8</manvolnum>
</citerefentry>
syscall. Any directory suffices, the default should
generally work.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
</term>
<listitem>
<para>
where to pivot the original root file system under
<option>lxc.rootfs</option>, specified relative to
that. The default is <filename>mnt</filename>.
It is created if necessary, and also removed after
unmounting everything from it during container setup.
</para>
</listitem>
</varlistentry>
</variablelist>
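<para>
For example, pointing the container at a directory tree on the
host (the path is only an example):
</para>
<programlisting>
lxc.rootfs = /var/lib/lxc/mycontainer/rootfs
</programlisting>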
</refsect2>
<refsect2>
<title>Control group</title>
<para>
The control group section contains the configuration for the
different subsystems. <command>lxc</command> does not check the
correctness of the subsystem names. This has the disadvantage
of not detecting configuration errors until the container is
started, but has the advantage of permitting any future
subsystem.
</para>
<variablelist>
<varlistentry>
<term>
<option>lxc.cgroup.[subsystem name]</option>
</term>
<listitem>
<para>
specify the control group value to be set. The
subsystem name is the literal name of the control group
subsystem. The permitted names and the syntax of their
values is not dictated by LXC, instead it depends on the
features of the Linux kernel running at the time the
container is started, eg. <option>lxc.cgroup.cpuset.cpus</option>.
</para>
</listitem>
</varlistentry>
</variablelist>
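<para>
For example, assuming the memory controller is available on the
host:
</para>
<programlisting>
lxc.cgroup.memory.limit_in_bytes = 256M
</programlisting>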
</refsect2>
<refsect2>
<title>Capabilities</title>
<para>
Capabilities can be dropped in the container if it
is run as root.
</para>
<variablelist>
<varlistentry>
<term>
<option>lxc.cap.drop</option>
</term>
<listitem>
<para>
Specify the capability to be dropped in the container. A
single line defining several capabilities with a space
separation is allowed. The format is the lower case of
the capability definition without the "CAP_" prefix,
eg. CAP_SYS_MODULE should be specified as
sys_module. See
<citerefentry>
<refentrytitle><command>capabilities</command></refentrytitle>
<manvolnum>7</manvolnum>
</citerefentry>.
</para>
</listitem>
</varlistentry>
</variablelist>
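<para>
For example, to drop module loading and raw socket capabilities:
</para>
<programlisting>
lxc.cap.drop = sys_module net_raw
</programlisting>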
</refsect2>
<refsect2>
<title>UID mappings</title>
<para>
A container can be started in a private user namespace with
user and group id mappings. For instance, you can map userid
0 in the container to userid 200000 on the host. The root
user in the container will be privileged in the container,
but unprivileged on the host. Normally a system container
will want a range of ids, so you would map, for instance,
user and group ids 0 through 20,000 in the container to the
ids 200,000 through 220,000.
</para>
<variablelist>
<varlistentry>
<term>
<option>lxc.id_map</option>
</term>
<listitem>
<para>
Four values must be provided. First a character, either
'u', or 'g', to specify whether user or group ids are
being mapped. Next is the first userid as seen in the
user namespace of the container. Next is the userid as
seen on the host. Finally, a range indicating the number
of consecutive ids to map.
</para>
</listitem>
</varlistentry>
</variablelist>
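<para>
The mapping described above (container ids 0 through 20,000
mapped to host ids starting at 200,000) could be written as:
</para>
<programlisting>
lxc.id_map = u 0 200000 20000
lxc.id_map = g 0 200000 20000
</programlisting>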
</refsect2>
<refsect2>
<title>Startup hooks</title>
<para>
Startup hooks are programs or scripts which can be executed
at various times in a container's lifetime.
</para>
<variablelist>
<varlistentry>
<term>
<option>lxc.hook.pre-start</option>
</term>
<listitem>
<para>
A hook to be run in the host's namespace before the
container ttys, consoles, or mounts are up.
</para>
</listitem>
</varlistentry>
</variablelist>
<variablelist>
<varlistentry>
<term>
<option>lxc.hook.pre-mount</option>
</term>
<listitem>
<para>
A hook to be run in the container's fs namespace but before
the rootfs has been set up. This allows for manipulation
of the rootfs, i.e. to mount an encrypted filesystem. Mounts
done in this hook will not be reflected on the host (apart from
mounts propagation), so they will be automatically cleaned up
when the container shuts down.
</para>
</listitem>
</varlistentry>
</variablelist>
<variablelist>
<varlistentry>
<term>
<option>lxc.hook.mount</option>
</term>
<listitem>
<para>
A hook to be run in the container's namespace after
mounting has been done, but before the pivot_root.
</para>
</listitem>
</varlistentry>
</variablelist>
<variablelist>
<varlistentry>
<term>
<option>lxc.hook.autodev</option>
</term>
<listitem>
<para>
A hook to be run in the container's namespace after
mounting has been done and after any mount hooks have
run, but before the pivot_root, if
<option>lxc.autodev</option> == 1.
The purpose of this hook is to assist in populating the
/dev directory of the container when using the autodev
option for systemd based containers. The container's /dev
directory is relative to the
${<option>LXC_ROOTFS_MOUNT</option>} environment
variable available when the hook is run.
</para>
</listitem>
</varlistentry>
</variablelist>
<variablelist>
<varlistentry>
<term>
<option>lxc.hook.start</option>
</term>
<listitem>
<para>
A hook to be run in the container's namespace immediately
before executing the container's init. This requires the
program to be available in the container.
</para>
</listitem>
</varlistentry>
</variablelist>
<variablelist>
<varlistentry>
<term>
<option>lxc.hook.post-stop</option>
</term>
<listitem>
<para>
A hook to be run in the host's namespace after the
container has been shut down.
</para>
</listitem>
</varlistentry>
</variablelist>
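<para>
For example, to run scripts from the container's configuration
directory at the relevant points (the paths are only examples):
</para>
<programlisting>
lxc.hook.pre-start = /var/lib/lxc/mycontainer/pre-start.sh
lxc.hook.post-stop = /var/lib/lxc/mycontainer/post-stop.sh
</programlisting>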
</refsect2>
<refsect2>
<title>Startup hooks Environment Variables</title>
<para>
A number of environment variables are made available to the startup
hooks to provide configuration information and assist in the
functioning of the hooks. Not all variables are valid in all
contexts. In particular, all paths are relative to the host system
and, as such, not valid during the <option>lxc.hook.start</option> hook.
</para>
<variablelist>
<varlistentry>
<term>
<option>LXC_NAME</option>
</term>
<listitem>
<para>
The LXC name of the container. Useful for logging messages
in common log environments. [<option>-n</option>]
</para>
</listitem>
</varlistentry>
</variablelist>
<variablelist>
<varlistentry>
<term>
<option>LXC_CONFIG_FILE</option>
</term>
<listitem>
<para>
Host relative path to the container configuration file. This
gives the container a way to reference the original, top level,
configuration file for the container in order to locate any
additional configuration information not otherwise made
available. [<option>-f</option>]
</para>
</listitem>
</varlistentry>
</variablelist>
<variablelist>
<varlistentry>
<term>
<option>LXC_CONSOLE</option>
</term>
<listitem>
<para>
The path to the console output of the container if not NULL.
</para>
</listitem>
</varlistentry>
</variablelist>
<variablelist>
<varlistentry>
<term>
<option>LXC_CONSOLE_LOGPATH</option>
</term>
<listitem>
<para>
The path to the console log output of the container if not NULL.
[<option>-L</option>]
</para>
</listitem>
</varlistentry>
</variablelist>
<variablelist>
<varlistentry>
<term>
<option>LXC_ROOTFS_MOUNT</option>
</term>
<listitem>
<para>
The mount location to which the container is initially bound.
This will be the host relative path to the container rootfs
for the container instance being started and is where changes
should be made for that instance.
</para>
</listitem>
</varlistentry>
</variablelist>
<variablelist>
<varlistentry>
<term>
<option>LXC_ROOTFS_PATH</option>
</term>
<listitem>
<para>
The host relative path to the container root which has been
mounted to the rootfs.mount location.
</para>
</listitem>
</varlistentry>
</variablelist>
</refsect2>
</refsect1>
<refsect1>
<title>Examples</title>
<para>
In addition to the few examples given below, you will find
some other examples of configuration files in @DOCDIR@/examples
</para>
<refsect2>
<title>Network</title>
<para>This configuration sets up a container to use a veth pair
device with one side plugged to a bridge br0 (which has been
configured before on the system by the administrator). The
virtual network device visible in the container is renamed to
eth0.</para>
<programlisting>
lxc.utsname = myhostname
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.hwaddr = 4a:49:43:49:79:bf
lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3597
</programlisting>
</refsect2>
<refsect2>
<title>UID/GID mapping</title>
<para>This configuration will map both user and group ids in the
range 0-9999 in the container to the ids 100000-109999 on the host.
</para>
<programlisting>
lxc.id_map = u 0 100000 10000
lxc.id_map = g 0 100000 10000
</programlisting>
</refsect2>
<refsect2>
<title>Control group</title>
<para>This configuration will set up several control groups for
the application: cpuset.cpus restricts usage to the defined cpus,
cpu.shares prioritizes the control group, and devices.allow makes
the specified devices usable.</para>
<programlisting>
lxc.cgroup.cpuset.cpus = 0,1
lxc.cgroup.cpu.shares = 1234
lxc.cgroup.devices.allow = c 1:3 rw
lxc.cgroup.devices.allow = b 8:0 rw
</programlisting>
</refsect2>
<refsect2>
<title>Complex configuration</title>
<para>This example shows a complex configuration building a
complex network stack, using the control groups, setting a new
hostname, mounting some locations and changing the root file
system.</para>
<programlisting>
lxc.utsname = complex
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 4a:49:43:49:79:bf
lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3597
lxc.network.ipv6 = 2003:db8:1:0:214:5432:feab:3588
lxc.network.type = macvlan
lxc.network.flags = up
lxc.network.link = eth0
lxc.network.hwaddr = 4a:49:43:49:79:bd
lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3596
lxc.network.type = phys
lxc.network.flags = up
lxc.network.link = dummy0
lxc.network.hwaddr = 4a:49:43:49:79:ff
lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3297
lxc.cgroup.cpuset.cpus = 0,1
lxc.cgroup.cpu.shares = 1234
lxc.cgroup.devices.allow = c 1:3 rw
lxc.cgroup.devices.allow = b 8:0 rw
lxc.mount = /etc/fstab.complex
lxc.mount.entry = /lib /root/myrootfs/lib none ro,bind 0 0
lxc.rootfs = /mnt/rootfs.complex
lxc.cap.drop = sys_module mknod setuid net_raw
lxc.cap.drop = mac_override
</programlisting>
</refsect2>
</refsect1>
<refsect1>
<title>See Also</title>
<simpara>
<citerefentry>
<refentrytitle><command>chroot</command></refentrytitle>
<manvolnum>1</manvolnum>
</citerefentry>,
<citerefentry>
<refentrytitle><command>pivot_root</command></refentrytitle>
<manvolnum>8</manvolnum>
</citerefentry>,
<citerefentry>
<refentrytitle><filename>fstab</filename></refentrytitle>
<manvolnum>5</manvolnum>
</citerefentry>
</simpara>
</refsect1>
&seealso;
<refsect1>
<title>Author</title>
<para>Daniel Lezcano <email>daniel.lezcano@free.fr</email></para>
</refsect1>
</refentry>
<!-- Keep this comment at the end of the file
Local variables:
mode: sgml
sgml-omittag:t
sgml-shorttag:t
sgml-minimize-attributes:nil
sgml-always-quote-attributes:t
sgml-indent-step:2
sgml-indent-data:t
sgml-parent-document:nil
sgml-default-dtd-file:nil
sgml-exposed-tags:nil
sgml-local-catalogs:nil
sgml-local-ecat-files:nil
End:
-->