<!--

lxc: linux Container library

(C) Copyright IBM Corp. 2007, 2008

Authors:
Daniel Lezcano <dlezcano at fr.ibm.com>

This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.

This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.

You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

-->

<!DOCTYPE refentry PUBLIC "-//Davenport//DTD DocBook V3.0//EN" [

<!ENTITY seealso SYSTEM "@builddir@/see_also.sgml">
]>

<refentry>

  <docinfo>
    <date>@LXC_GENERATE_DATE@</date>
  </docinfo>


  <refmeta>
    <refentrytitle>lxc</refentrytitle>
    <manvolnum>7</manvolnum>
    <refmiscinfo>
      Version @LXC_MAJOR_VERSION@.@LXC_MINOR_VERSION@.@LXC_MICRO_VERSION@
    </refmiscinfo>
  </refmeta>

  <refnamediv>
    <refname>lxc</refname>

    <refpurpose>
      linux containers
    </refpurpose>
  </refnamediv>

  <refsect1>
    <title>Quick start</title>
    <para>
      You are in a hurry and you don't want to read this man page. Ok,
      without warranty, here is a command to launch a shell inside a
      container with a predefined configuration template; it may work:
      <command>
	@BINDIR@/lxc-execute -n foo -f @SYSCONFDIR@/lxc/lxc-macvlan.conf /bin/bash
      </command>
    </para>
  </refsect1>

  <refsect1>
    <title>Overview</title>
    <para>
      Container technology is actively being pushed into the mainstream
      linux kernel. It provides resource management through the control
      groups (aka process containers) and resource isolation through the
      namespaces.
    </para>

    <para>
      The linux containers, <command>lxc</command>, aim to use these new
      functionalities to provide a userspace container object which
      provides full resource isolation and resource control for an
      application or a system.
    </para>

    <para>
      The first objective of this project is to make life easier for the
      kernel developers involved in the containers project, especially
      those continuing to work on the new Checkpoint/Restart features.
      <command>lxc</command> is small enough to easily manage a container
      with simple command lines and complete enough to be used for other
      purposes.
    </para>
  </refsect1>

  <refsect1>
    <title>Requirements</title>
    <para>
      <command>lxc</command> relies on a set of functionalities provided
      by the kernel, which need to be active. Depending on which
      functionalities are missing, <command>lxc</command> will either
      work with a reduced feature set or simply fail.
    </para>

    <para>
      The following list gives the kernel features which must be enabled
      for a full-featured container (a quick way to check your running
      kernel is shown after the list):
    </para>
    <programlisting>
      * General setup
        * Control Group support
          -> Namespace cgroup subsystem
          -> Freezer cgroup subsystem
          -> Cpuset support
          -> Simple CPU accounting cgroup subsystem
          -> Resource counters
            -> Memory resource controllers for Control Groups
        * Group CPU scheduler
          -> Basis for grouping tasks (Control Groups)
        * Namespaces support
          -> UTS namespace
          -> IPC namespace
          -> User namespace
          -> Pid namespace
          -> Network namespace
      * Security options
        -> File POSIX Capabilities
</programlisting>
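    <para>
      A minimal sketch of how to check these options on a running kernel,
      assuming your distribution installs the kernel configuration
      as <filename>/boot/config-$(uname -r)</filename>; the option names
      correspond to the features listed above and may vary slightly with
      the kernel version:
      <programlisting>
# option names may vary with the kernel version
grep -E 'CONFIG_CGROUPS|CONFIG_CGROUP_NS|CONFIG_CGROUP_FREEZER|CONFIG_CPUSETS|CONFIG_NAMESPACES|CONFIG_UTS_NS|CONFIG_IPC_NS|CONFIG_USER_NS|CONFIG_PID_NS|CONFIG_NET_NS' /boot/config-$(uname -r)
      </programlisting>
    </para>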
    <para>
      A kernel version >= 2.6.27, as shipped with recent distros, will
      work with <command>lxc</command>, though with fewer functionalities,
      but enough to be interesting. With kernel 2.6.29,
      <command>lxc</command> is fully functional.
    </para>
    <para>
      Before using <command>lxc</command>, your system should be
      configured with file capabilities, otherwise you will need to run
      the <command>lxc</command> commands as root.
    </para>
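    <para>
      As a sketch, file capabilities can be granted to the installed
      <command>lxc</command> binaries with <command>setcap</command>
      from the libcap tools; the capability set below is illustrative
      only, the exact set needed depends on the container configuration:
      <programlisting>
# illustrative only: the exact capability set depends on the configuration
setcap cap_sys_admin,cap_net_admin=ep @BINDIR@/lxc-execute
      </programlisting>
    </para>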
    <para>
      The control group can be mounted anywhere, eg:
      <command>mount -t cgroup cgroup /cgroup</command>.
      If you want to dedicate a specific cgroup mount point
      to <command>lxc</command>, that is to have different cgroups
      mounted at different places with different options but
      let <command>lxc</command> use one location, you can mount a
      hierarchy with the <option>lxc</option> name, eg:
      <command>mount -t cgroup lxc /cgroup4lxc</command> or
      <command>mount -t cgroup -o ns,cpuset,freezer,devices
      lxc /cgroup4lxc</command>.
    </para>
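    <para>
      A minimal sketch of how such a dedicated mount could be made
      persistent, assuming the mount point
      <filename>/cgroup4lxc</filename> already exists; the subsystem
      list is illustrative:
      <programlisting>
# /etc/fstab entry dedicating a cgroup hierarchy named 'lxc'
lxc   /cgroup4lxc   cgroup   ns,cpuset,freezer,devices   0 0
      </programlisting>
    </para>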
</refsect1>
<refsect1>
<title>Functional specification</title>
    <para>
      A container is an object in which the configuration is
      persistent. The application will be launched inside this
      container and will use the configuration which was previously
      created.
    </para>
    <para>How to run an application in a container?</para>
    <para>
      Before running an application, you should know which resources
      you want to isolate. The default configuration is to isolate the
      pids, the sysv ipc and the mount points. If you want to run a
      simple shell inside a container, a basic configuration is enough,
      especially if you want to share the rootfs. If you want to run an
      application like <command>sshd</command>, you should provide a new
      network stack and a new hostname. If you want to avoid conflicts
      with some files, eg. <filename>/var/run/httpd.pid</filename>, you
      should remount <filename>/var/run</filename> on an empty
      directory. If you want to avoid conflicts in all cases, you can
      specify a rootfs for the container. The rootfs can be a directory
      tree whose contents were previously bind mounted from the initial
      rootfs, so you can still use your distro but with your own
      <filename>/etc</filename> and <filename>/home</filename>.
    </para>
<para>
      Here is an example of a directory tree
      for <command>sshd</command>:
<programlisting>
[root@lxc sshd]$ tree -d rootfs
rootfs
|-- bin
|-- dev
| |-- pts
| `-- shm
| `-- network
|-- etc
| `-- ssh
|-- lib
|-- proc
|-- root
|-- sbin
|-- sys
|-- usr
`-- var
|-- empty
| `-- sshd
|-- lib
| `-- empty
| `-- sshd
`-- run
`-- sshd
</programlisting>
and the mount points file associated with it:
<programlisting>
[root@lxc sshd]$ cat fstab
/lib /home/root/sshd/rootfs/lib none ro,bind 0 0
/bin /home/root/sshd/rootfs/bin none ro,bind 0 0
/usr /home/root/sshd/rootfs/usr none ro,bind 0 0
/sbin /home/root/sshd/rootfs/sbin none ro,bind 0 0
</programlisting>
</para>
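    <para>
      A minimal sketch of how such a container could then be launched,
      assuming a configuration file <filename>sshd.conf</filename> (a
      hypothetical name) points at this rootfs and this fstab:
      <programlisting>
# sshd.conf is a hypothetical configuration file for this container
lxc-execute -n sshd -f sshd.conf /usr/sbin/sshd
      </programlisting>
    </para>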
    <para>How to run a system in a container?</para>
    <para>Running a system inside a container is paradoxically easier
      than running an application. Why? Because you don't have to care
      about which resources are to be isolated; everything needs to be
      isolated, except <filename>/dev</filename> which needs to be
      remounted in the container rootfs. The other resources are
      specified as isolated but without configuration, because the
      container will set them up, eg. the ipv4 address will be set up
      by the system container's init scripts. Here is an example of the
      mount points file:
<programlisting>
[root@lxc debian]$ cat fstab
/dev /home/root/debian/rootfs/dev none bind 0 0
/dev/pts /home/root/debian/rootfs/dev/pts none bind 0 0
</programlisting>
      More information can be added to the container to ease the
      configuration. For example, you can make the host's
      <filename>resolv.conf</filename> file accessible from the
      container:
<programlisting>
/etc/resolv.conf /home/root/debian/rootfs/etc/resolv.conf none bind 0 0
</programlisting>
</para>
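    <para>
      A minimal sketch of the matching configuration file (the key names
      are indicative; see
      <citerefentry>
	<refentrytitle><filename>lxc.conf</filename></refentrytitle>
	<manvolnum>5</manvolnum>
      </citerefentry> for the exact syntax), followed by the commands
      creating and starting the container:
      <programlisting>
# hypothetical debian.conf
lxc.utsname = debian
lxc.rootfs = /home/root/debian/rootfs
lxc.mount = /home/root/debian/fstab
lxc.network.type = veth
lxc.network.link = br0
      </programlisting>
      <programlisting>
lxc-create -n debian -f debian.conf
lxc-start -n debian
      </programlisting>
    </para>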
<refsect2>
<title>Container life cycle</title>
      <para>
	When the container is created, it contains the configuration
	information. When a process is launched, the container goes
	through the starting state and is then running. When the last
	process running inside the container exits, the container is
	stopped.
      </para>
      <para>
	In case of failure when the container is initialized, it
	passes through the aborting state.
      </para>
<programlisting>
---------
| STOPPED |<---------------
--------- |
| |
start |
| |
V |
---------- |
| STARTING |--error- |
---------- | |
| | |
V V |
--------- ---------- |
| RUNNING | | ABORTING | |
--------- ---------- |
| | |
no process | |
| | |
V | |
---------- | |
| STOPPING |<------- |
---------- |
| |
---------------------
</programlisting>
</refsect2>
<refsect2>
<title>Configuration</title>
      <para>The container is configured through a configuration
	file; the format of the configuration file is described in
	<citerefentry>
	  <refentrytitle><filename>lxc.conf</filename></refentrytitle>
	  <manvolnum>5</manvolnum>
	</citerefentry>.
      </para>
</refsect2>
<refsect2>
<title>Creating / Destroying the containers</title>
<para>
The container is created via the <command>lxc-create</command>
command. It takes a container name as parameter and an
optional configuration file. The name is used by the different
commands to refer to this
container. The <command>lxc-destroy</command> command will
destroy the container object.
<programlisting>
lxc-create -n foo
lxc-destroy -n foo
</programlisting>
</para>
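      <para>
	For example, reusing the template configuration shown in the
	quick start section (the path is illustrative):
	<programlisting>
lxc-create -n foo -f @SYSCONFDIR@/lxc/lxc-macvlan.conf
	</programlisting>
      </para>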
</refsect2>
<refsect2>
<title>Starting / Stopping a container</title>
      <para>When the container has been created, it is ready to run an
	application / system. When the application has to be destroyed,
	the container can be stopped; that will kill all the processes
	of the container.</para>
      <para>
	Running an application inside a container is not exactly the
	same thing as running a system. For this reason, there are two
	commands to run an application in a container:
<programlisting>
lxc-execute -n foo [-f config] /bin/bash
lxc-start -n foo [/bin/bash]
</programlisting>
</para>
<para>
	The <command>lxc-execute</command> command will run the
	specified command in a container, but it will mount /proc
	and auto-create/auto-destroy the container if it does not
	already exist. It will furthermore create an intermediate
	process, <command>lxc-init</command>, which is in charge of
	launching the specified command; that allows daemons to be
	supported in the container. In other words, in the
	container <command>lxc-init</command> has pid 1 and the
	first process of the application has pid 2.
</para>
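      <para>
	A quick way to see this, as a sketch:
	<programlisting>
lxc-execute -n foo /bin/bash
# inside the container, running `ps` typically shows lxc-init
# with pid 1 and the shell with pid 2
	</programlisting>
      </para>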
<para>
	The <command>lxc-start</command> command will run the specified
	command in the container, doing nothing else than using the
	configuration specified by <command>lxc-create</command>.
	The pid of the first process is 1. If no command is
	specified, <command>lxc-start</command> will
	run <filename>/sbin/init</filename>.
</para>
<para>
To summarize, <command>lxc-execute</command> is for running
an application and <command>lxc-start</command> is for
running a system.
</para>
<para>
	If the application is no longer responding, is inaccessible or
	is not able to finish by itself, a
	wild <command>lxc-stop</command> command will kill all the
	processes in the container without pity.
<programlisting>
lxc-stop -n foo
</programlisting>
</para>
</refsect2>
<refsect2>
<title>Connect to an available tty</title>
<para>
	If the container is configured with ttys, it is possible to
	access it through them. It is up to the container to provide a
	set of available ttys to be used by the following command. When
	a tty is lost, it is possible to reconnect to it without logging
	in again.
<programlisting>
lxc-console -n foo -t 3
</programlisting>
</para>
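      <para>
	A minimal sketch of the matching configuration, assuming
	<option>lxc.tty</option> is the key declaring the number of
	ttys made available to the container (see
	<citerefentry>
	  <refentrytitle><filename>lxc.conf</filename></refentrytitle>
	  <manvolnum>5</manvolnum>
	</citerefentry>):
	<programlisting>
# number of ttys made available to lxc-console (illustrative)
lxc.tty = 4
	</programlisting>
      </para>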
</refsect2>
<refsect2>
<title>Freeze / Unfreeze a container</title>
<para>
	Sometimes, it is useful to stop all the processes belonging to
	a container, eg. for job scheduling. The commands:
	<programlisting>
	  lxc-freeze -n foo
	</programlisting>
	will put all the processes in an uninterruptible state and
	<programlisting>
	  lxc-unfreeze -n foo
	</programlisting>
	will resume all the tasks.
</para>
<para>
This feature is enabled if the cgroup freezer is enabled in the
kernel.
</para>
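      <para>
	As a sketch, and assuming the cgroup hierarchy used
	by <command>lxc</command> is mounted
	on <filename>/cgroup</filename>, the freezer state of the
	container can be inspected directly:
	<programlisting>
# assumes the lxc cgroup hierarchy is mounted on /cgroup
cat /cgroup/foo/freezer.state
	</programlisting>
      </para>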
</refsect2>
<refsect2>
<title>Getting information about the container</title>
      <para>When there are a lot of containers, it is hard to keep track
	of what has been created or destroyed, what is running, or which
	pids are running inside a specific container. For this reason,
	the following commands give this information:
<programlisting>
lxc-ls
lxc-ps -n foo
lxc-info -n foo
</programlisting>
</para>
<para>
	<command>lxc-ls</command> lists the containers of the
	system. The command is a script built on top
	of <command>ls</command>, so it accepts the options of
	the <command>ls</command> command, eg:
<programlisting>
lxc-ls -C1
</programlisting>
will display the containers list in one column or:
<programlisting>
lxc-ls -l
</programlisting>
will display the containers list and their permissions.
</para>
<para>
<command>lxc-ps</command> will display the pids for a specific
container. Like <command>lxc-ls</command>, <command>lxc-ps</command>
is built on top of <command>ps</command> and accepts the same
options, eg:
<programlisting>
lxc-ps -n foo --forest
</programlisting>
will display the process hierarchy for the container 'foo'.
</para>
<para>
	<command>lxc-info</command> gives information about a specific
	container; at present, only the state of the container is
	displayed.
</para>
<para>
	Here is an example of how the combination of these commands
	allows listing all the containers and retrieving their state.
<programlisting>
for i in $(lxc-ls -1); do
lxc-info -n $i
done
</programlisting>
And displaying all the pids of all the containers:
<programlisting>
for i in $(lxc-ls -1); do
lxc-ps -n $i --forest
done
</programlisting>
</para>
<para>
	<command>lxc-netstat</command> displays network information for
	a specific container. This command is built on top of
	the <command>netstat</command> command and will accept its
	options.
</para>
<para>
	The following command will display the socket information for
	the container 'foo':
<programlisting>
lxc-netstat -n foo -tano
</programlisting>
</para>
</refsect2>
<refsect2>
<title>Monitoring the containers</title>
      <para>It is sometimes useful to track the states of a container,
	for example to monitor it or just to wait for a specific
	state in a script.
</para>
<para>
	The <command>lxc-monitor</command> command will monitor one or
	several containers. The parameter of this command accepts a
	regular expression, for example:
<programlisting>
lxc-monitor -n "foo|bar"
</programlisting>
will monitor the states of containers named 'foo' and 'bar', and:
<programlisting>
lxc-monitor -n ".*"
</programlisting>
will monitor all the containers.
</para>
<para>
For a container 'foo' starting, doing some work and exiting,
the output will be in the form:
<programlisting>
'foo' changed state to [STARTING]
'foo' changed state to [RUNNING]
'foo' changed state to [STOPPING]
'foo' changed state to [STOPPED]
</programlisting>
</para>
<para>
	The <command>lxc-wait</command> command will wait for a specific
	state change and then exit. This is useful in scripts to
	synchronize on the launch or the termination of a container. The
	parameter is an ORed combination of different states. The
	following example shows how to wait for a container that was
	started in the background.
<programlisting>
# launch lxc-wait in background
lxc-wait -n foo -s STOPPED &
LXC_WAIT_PID=$!
# this command goes in background
lxc-execute -n foo mydaemon &
# block until the lxc-wait exits
# and lxc-wait exits when the container
# is STOPPED
wait $LXC_WAIT_PID
echo "'foo' is finished"
</programlisting>
</para>
</refsect2>
<refsect2>
<title>Setting the control group for a container</title>
      <para>The container is tied to the control groups; when a
	container is started, a control group is created and associated
	with it. The control group properties can be read and modified
	when the container is running by using the lxc-cgroup command.
      </para>
      <para>
	The <command>lxc-cgroup</command> command is used to set or get a
	control group subsystem which is associated with a
	container. The subsystem name is handled by the user; the
	command won't do any syntax checking on the subsystem name, and
	if the subsystem name does not exist, the command will fail.
      </para>
<para>
<programlisting>
lxc-cgroup -n foo cpuset.cpus
</programlisting>
will display the content of this subsystem.
<programlisting>
lxc-cgroup -n foo cpu.shares 512
</programlisting>
will set the subsystem to the specified value.
</para>
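      <para>
	For example, to restrict the container to the first two cpus,
	assuming the cpuset subsystem is part of the cgroup hierarchy
	used by <command>lxc</command> (a sketch):
	<programlisting>
# pin the container 'foo' to cpus 0 and 1
lxc-cgroup -n foo cpuset.cpus 0,1
	</programlisting>
      </para>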
</refsect2>
</refsect1>
<refsect1>
<title>Bugs</title>
    <para><command>lxc</command> is still under development, so the
      command syntax and the API may change. Version 1.0.0 will be
      the frozen version.</para>
</refsect1>
&seealso;
<refsect1>
<title>Author</title>
<para>Daniel Lezcano <email>daniel.lezcano@free.fr</email></para>
</refsect1>
</refentry>
<!-- Keep this comment at the end of the file Local variables: mode:
sgml sgml-omittag:t sgml-shorttag:t sgml-minimize-attributes:nil
sgml-always-quote-attributes:t sgml-indent-step:2 sgml-indent-data:t
sgml-parent-document:nil sgml-default-dtd-file:nil
sgml-exposed-tags:nil sgml-local-catalogs:nil
sgml-local-ecat-files:nil End: -->