\documentclass{entcs} \usepackage{entcsmacro}
\usepackage{graphicx}
\usepackage{mathpartir}
\sloppy
% A couple of exemplary definitions:
\newcommand{\Nat}{{\mathbb N}}
\newcommand{\Real}{{\mathbb R}}
\def\lastname{Hausmann and Schr\"oder}
\begin{document}
\begin{frontmatter}
\title{Optimizing Conditional Logic (Draft)}
\author[Bremen]{Daniel Hausmann\thanksref{ALL}\thanksref{myemail}}
\author[Bremen]{Lutz Schr\"oder\thanksref{ALL}\thanksref{coemail}}
\address[Bremen]{Deutsches Forschungszentrum f\"ur K\"unstliche Intelligenz (DFKI)\\ Universit\"at Bremen, Germany}
\thanks[ALL]{This work is supported by DFG-project no1234}
\thanks[myemail]{Email: \href{mailto:Daniel.Hausmann@dfki.de} {\texttt{\normalshape Daniel.Hausmann@dfki.de}}}
\thanks[coemail]{Email: \href{mailto:Lutz.Schroeder@dfki.de} {\texttt{\normalshape Lutz.Schroeder@dfki.de}}}
\begin{abstract}
We describe a specific optimization of the decision procedure
for conditional logic. The optimization is developed in the
general framework of coalgebraic modal logic;
the actual implementation extends CoLoSS,
the generic satisfiability solver for modal logics.
\end{abstract}
\begin{keyword}
Coalgebraic modal logic, conditional logic,
satisfiability / provability, optimization by memoizing
\end{keyword}
\end{frontmatter}
\section{Introduction}\label{intro}
Introduce here :)
Structure of the paper: First, we give a short introduction to
the theory of coalgebraic modal logics, the general abstract
setting in which this work takes place. Then we describe the
satisfiability (and provability) solving algorithm and its
optimization with regard to conditional logics. Further, we
analyze the quality of this optimization by relating
the structure of the input formula to the amount of work that may
be saved by the proposed optimization. Furthermore,
we compare our tool with other provers for conditional logic,
and finally we discuss to what extent the described
optimization can be used for modal logics other than
conditional logic.
\section{Coalgebraic conditional logic}
Some theory here...
\section{The algorithm}
\subsection{The generic sequent calculus}
Based on the introduced framework, we now devise a generic
algorithm that decides the provability of formulae. By
instantiating the generic algorithm to a specific modal logic
(simply by supplying the defining modal rule of that logic), one
obtains an algorithm that decides provability of
formulae of this specific modal logic.
\begin{definition}
A \emph{sequent} $\Gamma$ is a set of formulae. If the provability
of a sequent is yet to be decided, it is called an \emph{open sequent}.
If the provability of a sequent has already been shown or refuted,
the sequent is called a \emph{treated sequent}. A \emph{premise}
$\Lambda$ is a set of open sequents.
\end{definition}
\begin{definition}
\noindent A \emph{rule} $r=(\Lambda,\Gamma)$ consists of a premise
and an open sequent (usually called \emph{conclusion}).
\end{definition}
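In the Haskell implementation, these notions admit a direct representation. The following is a minimal stand-alone sketch; the type and field names are hypothetical illustrations, not the actual CoLoSS definitions:

```haskell
-- Minimal sketch; names are hypothetical, not the actual CoLoSS types.
data Formula = Atom String          -- propositional atom p
             | Bot                  -- falsum
             | Neg Formula          -- negation
             | And Formula Formula  -- conjunction
             | Box Formula          -- modal operator of K
             deriving (Eq, Show)

type Sequent = [Formula]  -- a sequent: a set of formulae
type Premise = [Sequent]  -- a premise: a set of open sequents

-- a rule consists of a premise and a conclusion
data Rule = Rule { rulePremise :: Premise, conclusion :: Sequent }
  deriving (Show)

-- e.g. an instance of the branching rule (AND): from G,A and G,B infer G,A/\B
conjRule :: Sequent -> Formula -> Formula -> Rule
conjRule gamma a b = Rule [a : gamma, b : gamma] (And a b : gamma)
```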
\noindent The generic sequent calculus is given by a set of rules
$\mathcal{R}_{sc}$ which consists of the finishing and the branching
rules $\mathcal{R}^b_{sc}$ (i.e. rules with no premise or more than
one premise), the linear rules $\mathcal{R}^l_{sc}$ (i.e. rules with
exactly one premise) and the modal rule $\mathcal R^m_{sc}$. The
finishing and the branching rules are presented in Figure~\ref{fig:branching}
(where $\top=\neg\bot$ and $p$ is an atom), the
linear rules are shown in Figure~\ref{fig:linear}. For now, we consider the modal
rule $\mathcal R^m_{sc}$ of the standard modal logic \textbf{K}
(as given by Figure~\ref{fig:modalK}). Note again that
it is possible to treat different modal logics with the same generic
calculus just by appropriately instantiating the modal rule.
\begin{figure}[!h]
\begin{center}
\begin{tabular}{| c c c |}
\hline
& & \\[-5pt]
(\textsc {$\neg$F}) \inferrule{ }{\Gamma, \neg\bot} &
(\textsc {Ax}) \inferrule{ }{\Gamma, p, \neg p} &
(\textsc {$\wedge$}) \inferrule{\Gamma, A \\ \Gamma, B}{\Gamma, A\wedge B} \\[-5pt]
& & \\
\hline
\end{tabular}
\end{center}
\caption{The finishing and the branching sequent rules $\mathcal{R}^b_{sc}$}
\label{fig:branching}
\end{figure}
\begin{figure}[!h]
\begin{center}
\begin{tabular}{| c c |}
\hline
& \\[-5pt]
(\textsc {$\neg\neg$})\inferrule{\Gamma, A}{\Gamma, \neg\neg A} &
(\textsc {$\neg\wedge$}) \inferrule{\Gamma, \neg A, \neg B}{\Gamma, \neg(A\wedge B)} \\[-5pt]
& \\
\hline
\end{tabular}
\end{center}
\caption{The linear sequent rules $\mathcal{R}^l_{sc}$}
\label{fig:linear}
\end{figure}
\begin{figure}[!h]
\begin{center}
\begin{tabular}{| c |}
\hline
\\[-5pt]
(\textsc {\textbf{K}})\inferrule{\neg A_1, \ldots , \neg A_n, A_0}
{\Gamma, \neg \Box A_1,\ldots,\neg \Box A_n, \Box A_0 } \\[-5pt]
\\
\hline
\end{tabular}
\end{center}
\caption{The modal rule of \textbf{K}}
\label{fig:modalK}
\end{figure}
Algorithm~\ref{alg:seq} makes use of the sequent rules in the following manner:
In order to show the \emph{provability} of a formula $\phi$, the
algorithm starts with the sequent $\{\phi\}$ and tries to apply all
of the sequent rules $r\in \mathcal{R}_{sc}$ to
it.\footnote{The actual Haskell
implementation of the algorithm first tries to apply the linear rules
from $\mathcal{R}^l_{sc}$ and only afterwards the remaining
rules from $\mathcal{R}^b_{sc}$ and $\mathcal{R}^m_{sc}$.}
% explain what 'applying' means
The result of this one step of rule application is a set
of premises $P$. The proving function is then called recursively on
each sequent of every premise in this set. If there is a premise
all of whose sequents are provable, the application of the corresponding rule is
justified and hence the algorithm succeeds.
\begin{remark}
Due to $\ldots$ %what?
, a formula $\phi$ is \emph{satisfiable} if and only if its
negation $\neg \phi$ is not provable.
\end{remark}
\begin{algorithm}[h]
\begin{alg}
\begin{upshape}
Step 1: Take the input formula $\phi$, construct the initial open
sequent $\Gamma_0 = \{\phi\}$ from it. Set $i=0$.\\
Step 2: Try to apply all rules $r\in \mathcal{R}_{sc}$ to any open sequent
$\Gamma_i$. Let $\Lambda_i$ denote the set of the resulting open
premises.\\
Step 3: If $\Lambda_i$ contains the empty premise, mark the open
sequent $\Gamma_i$ as proven. If $\Lambda_i$ contains no premise at
all, mark the open sequent $\Gamma_i$ as refuted.\\
Step 4: Else, add all premises from $\Lambda_i$ to the set of open
premises and choose any premise from this set. Add all $\Gamma^j_i$
from this premise to the set of open sequents and continue with
Step 2.
\end{upshape}
\label{alg:seq}
\end{alg}
\end{algorithm}
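To make the procedure concrete, the proof search of Algorithm~\ref{alg:seq} can be sketched as a stand-alone Haskell function. This is a simplification with hypothetical names, not the actual CoLoSS code: it tries the finishing rules first, then the linear and branching rules, and finally the modal rule of \textbf{K}.

```haskell
import Data.List (delete)

data Formula = Atom String | Bot | Neg Formula | And Formula Formula | Box Formula
  deriving (Eq, Show)

type Sequent = [Formula]

-- A sequent is proven if some applicable rule yields a premise all of whose
-- sequents are provable; the finishing rules (negF) and (Ax) have the empty premise.
provable :: Sequent -> Bool
provable g
  | Neg Bot `elem` g                            = True  -- (negF)
  | or [ Neg (Atom p) `elem` g | Atom p <- g ]  = True  -- (Ax)
  | f@(Neg (Neg a)) : _ <- dnegs    = provable (a : delete f g)              -- (negneg)
  | f@(Neg (And a b)) : _ <- nconjs = provable (Neg a : Neg b : delete f g)  -- (negAND)
  | f@(And a b) : _ <- conjs        = provable (a : delete f g)
                                      && provable (b : delete f g)           -- (AND)
  | otherwise                       = modalK g
  where
    dnegs  = [ x | x@(Neg (Neg _))   <- g ]
    nconjs = [ x | x@(Neg (And _ _)) <- g ]
    conjs  = [ x | x@(And _ _)       <- g ]

-- (K): prove Neg A1, ..., Neg An, A0 for some choice of Box A0 in the sequent
modalK :: Sequent -> Bool
modalK g = or [ provable (a0 : map Neg negBoxes) | a0 <- posBoxes ]
  where
    posBoxes = [ a | Box a       <- g ]
    negBoxes = [ a | Neg (Box a) <- g ]

-- provability of a single formula; satisfiability reduces to
-- non-provability of the negation
provableF, satisfiable :: Formula -> Bool
provableF phi   = provable [phi]
satisfiable phi = not (provable [Neg phi])
```

Termination follows since every rule application strictly decreases the size of the sequent.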
\begin{proposition}
\begin{upshape}
Algorithm~\ref{alg:seq} is sound and complete w.r.t. provability. Furthermore,
Algorithm~\ref{alg:seq} has complexity X.
\end{upshape}
\end{proposition}
\begin{proof}
...
\end{proof}
\subsection{The conditional logic instance}
The genericity of the introduced sequent calculus allows us to easily
create instantiations of Algorithm~\ref{alg:seq} for a large variety
of modal logics:
by setting $\mathcal R^m_{sc}$, for instance, to the rule shown in
Figure~\ref{fig:modalCKCEM}
(where $A_a = A_b$ is shorthand for $(A_a\rightarrow A_b)\wedge (A_b\rightarrow A_a)$), we
obtain an algorithm for deciding provability (and satisfiability) of
conditional logic. Note, however, that we restrict ourselves to the
exemplary conditional logic \textbf{CKCEM} for the remainder of this section
(slightly adapted versions of the optimization work for other conditional logics).
\begin{figure}[h!]
\begin{center}
\begin{tabular}{| c |}
\hline
\\[-5pt]
(\textsc {\textbf{CKCEM}})\inferrule{A_0 = \ldots = A_n \\ B_0,\ldots, B_j,\neg B_{j+1},\ldots,\neg B_n}
{\Gamma, (A_0\Rightarrow B_0),\ldots,(A_j\Rightarrow B_j),
\neg(A_{j+1}\Rightarrow B_{j+1}),\ldots,\neg(A_n\Rightarrow B_n) } \\[-5pt]
\\
\hline
\end{tabular}
\end{center}
\caption{The modal rule $\textbf{CKCEM}$ of conditional logic}
\label{fig:modalCKCEM}
\end{figure}
In the following, we use the notions of \emph{conditional antecedent} and
\emph{conditional consequent} to refer to the parameters of the modal operator of
conditional logic.
In order to decide whether there is an
instance of the modal rule of conditional logic that can be applied to
the current sequent, it is necessary to create a preliminary premise
for each possible combination of equalities between the antecedents of the
modal operators in this sequent. This results in $2^n$ new premises for
a sequent with $n$ top-level modal operators.
\begin{example}
Consider the sequent $\Gamma=\{(A_0\Rightarrow B_0),(A_1\Rightarrow B_1),
(A_2\Rightarrow B_2)\}$. By instantiating the modal rule, several
new premises
such as $\{\{A_0=A_1=A_2\},\{B_0,B_1,B_2\}\}$ or $\{\{A_0=A_2\},\{B_0,B_2\}\}$
may be obtained. Only after the equalities in the first sequent of each
of these premises have been shown (or refuted) is it clear whether the
current instantiation of the modal rule actually leads to a possibly
provable premise.
\label{ex:cond}
\end{example}
A more efficient approach is to first
partition the set of all antecedents of the top-level modal operators in the
current sequent into equivalence classes with respect to logical equivalence.
This partition not only allows for a reduction of the number of newly
created open premises in each modal step,
but it also allows us to separate the two parts of the conditional modal step
(i.e.\ the first part, showing the necessary equalities, and the second
part, showing that the resulting sequent -- consisting only of the appropriate
conditional consequents -- is itself provable).
\begin{example}
Consider again the sequent from Example~\ref{ex:cond}. Using the exemplary
knowledge that $A_0=A_1$, $A_1\neq A_2$ and $A_0\neq A_2$, it is immediate
that there are just two reasonable instantiations of the modal rule, leading
to the two premises $\{\{B_0,B_1\}\}$ and $\{\{B_2\}\}$. For the
first of these two premises, note that it is not necessary to show the
equivalence of $A_0$ and $A_1$ again.
\end{example}
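The partitioning step can be sketched as follows. The sketch is stand-alone: antecedents are represented as strings, and a hard-coded, hypothetical equality oracle stands in for the pre-proved equivalences.

```haskell
-- Partition a list into equivalence classes w.r.t. an (assumed)
-- equivalence relation eq: each element joins the first class whose
-- representative it is equivalent to.
classes :: (a -> a -> Bool) -> [a] -> [[a]]
classes eq = foldr insert []
  where
    insert x (c : cs) | eq x (head c) = (x : c) : cs
                      | otherwise     = c : insert x cs
    insert x []       = [[x]]

-- One candidate premise per equivalence class of antecedents (instead
-- of 2^n preliminary premises): each class contributes the sequent of
-- its consequents.  Conditionals are pairs (antecedent, consequent).
premisesFor :: (a -> a -> Bool) -> [(a, b)] -> [[b]]
premisesFor eq conds =
  [ map snd c | c <- classes (\x y -> eq (fst x) (fst y)) conds ]

-- hypothetical equality oracle for the situation of the example:
-- A0 = A1 holds, A2 is different
oracle :: String -> String -> Bool
oracle x y = x == y || (x, y) `elem` eqs || (y, x) `elem` eqs
  where eqs = [("A0", "A1")]
```

For the situation of Example~\ref{ex:cond}, \texttt{premisesFor} yields exactly the premises $\{B_0,B_1\}$ and $\{B_2\}$ rather than $2^3$ preliminary candidates.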
\begin{remark}
In the case of conditional logic, observe the following: since the
modal antecedents that appear in a formula are not changed by any
rule of the sequent calculus, it is possible to extract all possibly
relevant antecedents of a formula even \emph{before} the actual sequent
calculus is applied. This allows us to first compute the equivalence
classes of all the relevant antecedents and then feed this knowledge into
the actual sequent calculus.
\end{remark}
\section{The optimization}
\begin{definition}
A conditional antecedent of \emph{modal nesting depth} $i$ is a
conditional antecedent which contains at least one antecedent of
modal nesting depth $i-1$ and which contains no antecedent
of modal nesting depth greater than $i-1$. A
conditional antecedent of nesting depth 0 is an antecedent
that does not contain any further modal operators.
Let $p_i$ denote the set of all conditional antecedents of modal
nesting depth $i$. Further, let $prems(n)$ denote the set of all
conditional antecedents of modal nesting depth at most $n$ (i.e.
$prems(n)=\bigcup_{j=0}^{n} p_j$).
Finally, let $depth(\phi)$ denote the maximal modal nesting in
the formula $\phi$.
\end{definition}
\begin{example}
In the formula $\phi=((p_0\Rightarrow p_1) \wedge ((p_2\Rightarrow p_3)\Rightarrow p_4))
\Rightarrow (p_5\Rightarrow p_6)$,
the antecedent $((p_0\Rightarrow p_1) \wedge ((p_2\Rightarrow p_3)\Rightarrow p_4))$ has a
nesting depth of 2 (and not 1), $(p_2\Rightarrow p_3)$ has a nesting depth of 1,
and $p_0$, $p_2$ and $p_5$ all have a nesting depth of 0.
Furthermore, $prems(1)=\{p_0,p_2,p_5,(p_2\Rightarrow p_3)\}$
and $depth(\phi)=3$.
\end{example}
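These notions can be computed by a straightforward recursion over the formula. The following stand-alone Haskell sketch (hypothetical names; the syntax is restricted to atoms, conjunction and the conditional) follows the definition above:

```haskell
import Data.List (nub)

-- conditional formulae: atoms, conjunction, and the conditional A => B
data C = P String | C :&: C | C :=> C
  deriving (Eq, Show)

-- all conditional antecedents occurring in a formula
ants :: C -> [C]
ants (P _)     = []
ants (a :&: b) = ants a ++ ants b
ants (a :=> b) = a : ants a ++ ants b

-- modal nesting depth of an antecedent: 0 if it contains no further
-- antecedent, otherwise one more than the maximal depth of the
-- antecedents occurring inside it
adepth :: C -> Int
adepth f = case ants f of
  [] -> 0
  xs -> 1 + maximum (map adepth xs)

-- prems(n): all antecedents of nesting depth at most n
prems :: Int -> C -> [C]
prems n phi = nub [ a | a <- ants phi, adepth a <= n ]

-- depth(phi): the maximal modal nesting in the whole formula
depth :: C -> Int
depth (P _)     = 0
depth (a :&: b) = max (depth a) (depth b)
depth (a :=> b) = 1 + max (depth a) (depth b)
```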
\begin{definition}
A set $\mathcal{K}$ of treated sequents together with a function
$eval:\mathcal{K}\rightarrow \{\top,\bot\}$ is called \emph{knowledge}.
\end{definition}
We may now construct an optimized algorithm which allows us
to decide provability (and satisfiability) of formulae more efficiently
in some cases. The optimized algorithm is built from two functions
(namely the so-called \emph{pre-proving} function and the actual proving
function):
\begin{algorithm}[h]
\begin{alg}
\begin{upshape}
Step 1: Take a formula $\phi$ and knowledge $(\mathcal{K},eval)$ as
input. Set $\Gamma_0 = \{\phi\}$, $i=0$.\\
Step 2: Try to apply all rules $r\in \mathcal{RO}_{sc}$
(supplying $\mathcal{RO}^m_{sc}$ with knowledge $(\mathcal{K},eval)$)
to any open sequent $\Gamma_i$. Let
$\Lambda_i$ denote the set of the resulting premises.\\
Step 3: If $\Lambda_i$ contains the empty premise, mark the open
sequent $\Gamma_i$ as proven. If $\Lambda_i$ contains no premise at
all, mark the open sequent $\Gamma_i$ as refuted.\\
Step 4: Else, add all premises from $\Lambda_i$ to the set of open
premises and choose any premise from this set. Add all $\Gamma^j_i$
from this premise to the set of open sequents and continue with
Step 2.
\label{alg:optSeq}
\end{upshape}
\end{alg}
\end{algorithm}
Algorithm~\ref{alg:optSeq} is very similar to Algorithm~\ref{alg:seq};
however, it uses a modified
set of rules $\mathcal{RO}_{sc}$ and it additionally takes as
input knowledge $(\mathcal{K},eval)$ in the form of a set of sequents
together with an evaluation function. This knowledge is passed on to
the modified modal rule of conditional logic, which makes appropriate use of it.
$\mathcal{RO}_{sc}$ is obtained from $\mathcal{R}_{sc}$ by replacing
the modal rule from Figure~\ref{fig:modalCKCEM} with the modified modal
rule from Figure~\ref{fig:modalCKCEMm}.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{| c |}
\hline
\\[-5pt]
(\textsc {\textbf{CKCEM}$^m$})\inferrule{\bigwedge{}_{i,j\in\{0..n\}}{eval(A_i=A_j)=\top}
\\ B_0,\ldots, B_j,\neg B_{j+1},\ldots,\neg B_n}
{\Gamma, (A_0\Rightarrow B_0),\ldots,(A_j\Rightarrow B_j),
\neg(A_{j+1}\Rightarrow B_{j+1}),\ldots,\neg(A_n\Rightarrow B_n) } \\[-5pt]
\\
\hline
\end{tabular}
\end{center}
\caption{The modified modal rule \textbf{CKCEM}$^m$ of conditional logic}
\label{fig:modalCKCEMm}
\end{figure}
\begin{algorithm}[h]
\begin{alg}
\begin{upshape}
Step 1: Take a formula $\phi$ as input. Set $i=0$, $\mathcal{K}_0=\emptyset$, $eval_0=\emptyset$.\\
Step 2: Generate the set $prems_i$ of all conditional antecedents of $\phi$
of nesting depth at most $i$. If $i<depth(\phi)$ continue
with Step 3, else set $\mathcal{K}=\mathcal{K}_{i}, eval=eval_{i}$ and continue with Step 4.\\
Step 3: Let $eq_i$ denote the set of all equalities $A_a = A_b$ for different
formulae $A_a,A_b\in prems_i$. Compute
Algorithm~\ref{alg:optSeq} ($\psi$, $(\mathcal{K}_i,eval_i)$) for all $\psi\in eq_i$.
For each equality $\psi\in eq_i$,
set $eval_{i+1}(\psi)=\top$ if the result of Algorithm~\ref{alg:optSeq} was \verb+True+
and $eval_{i+1}(\psi)=\bot$ otherwise (extending $eval_i$). Set
$\mathcal{K}_{i+1} = \mathcal{K}_i\cup eq_i$, set $i = i + 1$ and continue with Step 2.\\
Step 4: Call Algorithm~\ref{alg:optSeq} ($\phi$, $(\mathcal{K},eval)$).
\label{alg:preprove}
\end{upshape}
\end{alg}
\end{algorithm}
Algorithm~\ref{alg:preprove} first computes the knowledge $(\mathcal{K},eval)$ about specific
subformulae of $\phi$ and only then checks provability of
$\phi$ (using this knowledge): in order to show the equivalence of
two conditional antecedents of nesting depth at most $i$, we assume
that the equalities $\mathcal{K}_{i}$ between modal antecedents of nesting depth less
than $i$ have already been computed and the results are stored in $eval_i$; hence,
two antecedents are equal if their equivalence is provable by
Algorithm~\ref{alg:optSeq} using only the knowledge $(\mathcal{K}_{i},eval_i)$.
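The level-by-level computation of the knowledge can be sketched as follows. The sketch is stand-alone and parametric in the underlying prover: the argument \texttt{prove} stands for the call to Algorithm~\ref{alg:optSeq} and is merely stubbed by syntactic comparison in the test; the names and the representation of knowledge as an association list are hypothetical.

```haskell
import Data.List (nub)

data C = P String | C :&: C | C :=> C
  deriving (Eq, Show)

-- knowledge: treated equalities between antecedents, with their eval value
type Knowledge = [((C, C), Bool)]

-- all conditional antecedents occurring in a formula
ants :: C -> [C]
ants (P _)     = []
ants (a :&: b) = ants a ++ ants b
ants (a :=> b) = a : ants a ++ ants b

-- nesting depth of an antecedent / of the whole formula
adepth, depth :: C -> Int
adepth f = case ants f of { [] -> 0; xs -> 1 + maximum (map adepth xs) }
depth (P _)     = 0
depth (a :&: b) = max (depth a) (depth b)
depth (a :=> b) = 1 + max (depth a) (depth b)

-- For i = 0 .. depth(phi)-1, decide all new equalities between antecedents
-- of depth at most i, using only the knowledge gathered so far.
preprove :: ((C, C) -> Knowledge -> Bool) -> C -> Knowledge
preprove prove phi = go 0 []
  where
    go i know
      | i >= depth phi = know
      | otherwise      = go (i + 1) (know ++ newFacts)
      where
        cands    = [ (a, b) | a <- level i, b <- level i, a /= b ]
        newFacts = [ (e, prove e know) | e <- cands, e `notElem` map fst know ]
    level i = nub [ a | a <- ants phi, adepth a <= i ]
```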
\subsection{Treatment of needed equivalences only}
Since Algorithm~\ref{alg:preprove} tries to show the logical equivalence of every
pair of conditional antecedents that appears in $\phi$, it will have a worse completion
time than Algorithm~\ref{alg:seq} on many formulae:
\begin{example}
Consider the formula
\begin{quote}
$\phi=(((p_0\Rightarrow p_1)\Rightarrow p_2)\Rightarrow p_4)\vee
(((p_5\Rightarrow p_6)\Rightarrow p_7)\Rightarrow p_8)$.
\end{quote}
Algorithm~\ref{alg:preprove} will not only
try to show the necessary equivalences between the pairs
$(((p_0\Rightarrow p_1)\Rightarrow p_2), ((p_5\Rightarrow p_6)\Rightarrow p_7))$,
$((p_0\Rightarrow p_1), (p_5\Rightarrow p_6))$ and $(p_0,p_5)$, but it will
also try to show equivalences between any two conditional antecedents (e.g. $(p_0,
(p_5\Rightarrow p_6))$), even though these equivalences may not be needed
during the execution of Algorithm~\ref{alg:optSeq}.
\end{example}
Based on this observation, it is possible to assign a category to each pair of
antecedents that appears in $\phi$:
\begin{definition}
Two conditional antecedents $A$ and $B$ are called \emph{connected (in $\phi$)} if
they both are the antecedent of a top-level modality.
Furthermore, we define sub-antecedents $C$ and $D$ of connected antecedents $A$ and $B$
to be connected to each other, if they appear at the same position in a chain
of conditional antecedents (formally: $A=(\ldots(C\Rightarrow a_n)\ldots \Rightarrow a_0)$,
$B=(\ldots(D\Rightarrow b_n)\ldots \Rightarrow b_0)$).
Two conditional antecedents $A$ and $B$ are called \emph{independent (in $\phi$)} if
they occur at different modal depths (even though their modal nestings may have the same depth).
The relevance of two antecedents with regard to each other is said to be
\emph{pending} if they occur at the same depth in $\phi$ but each
in the consequent of a different modal operator.
\end{definition}
Since two independent antecedents will never appear in the scope of the
same application of the modal rule, it is in no case necessary to show (or
refute) the logical equivalence of independent
conditional antecedents. Hence it suffices to focus
our attention on the connected and pending conditional antecedents. By pre-proving only
the equivalences of the connected antecedents, we can be assured that we are
always as efficient as Algorithm~\ref{alg:optSeq}. However, it would still
be necessary to treat the equivalence of those initially pending antecedents which
turn out to be connected during the execution of Algorithm~\ref{alg:optSeq}.
This would not allow us to completely separate the treatment of equivalences
(achieved by Algorithm~\ref{alg:preprove}) from the actual sequent calculus (realized by
Algorithm~\ref{alg:optSeq}).
If the pending antecedents are also included in the pre-proving, it is obvious that
any possibly needed equivalence and its truth-value are already included in $(\mathcal{K},eval)$
when the main proving phase is started. On the other hand, we have to be aware that
we may show equivalences of antecedents which later turn out to
be independent.
As a result of these considerations, we devise Algorithm~\ref{alg:optPreprove},
an improved version of Algorithm~\ref{alg:preprove}. The only difference is
that before proving any equivalence, Algorithm~\ref{alg:optPreprove} checks
whether the current pair of conditional antecedents is actually connected or whether their
relevance to each other is pending; only then does it treat the equivalence. Hence
independent pairs of antecedents remain untreated.
\begin{algorithm}[h]
\begin{alg}
\begin{upshape}
Step 1: Take a formula $\phi$ as input. Set $i=0$, $\mathcal{K}_0=\emptyset$, $eval_0=\emptyset$.\\
Step 2: Generate the set $prems_i$ of all conditional antecedents of $\phi$
of nesting depth at most $i$. If $i<depth(\phi)$ continue
with Step 3, else set $\mathcal{K}=\mathcal{K}_{i}, eval=eval_{i}$ and continue with Step 4.\\
Step 3: Let $eq_i$ denote the set of all equalities $A_a = A_b$ for different and not independent
pairs of formulae $A_a,A_b\in prems_i$. Compute
Algorithm~\ref{alg:optSeq} ($\psi$, $(\mathcal{K}_i,eval_i)$) for all $\psi\in eq_i$.
For each equality $\psi\in eq_i$,
set $eval_{i+1}(\psi)=\top$ if the result of Algorithm~\ref{alg:optSeq} was \verb+True+
and $eval_{i+1}(\psi)=\bot$ otherwise (extending $eval_i$). Set
$\mathcal{K}_{i+1} = \mathcal{K}_i\cup eq_i$, set $i = i + 1$ and continue with Step 2.\\
Step 4: Call Algorithm~\ref{alg:optSeq} ($\phi$, $(\mathcal{K},eval)$).
\label{alg:optPreprove}
\end{upshape}
\end{alg}
\end{algorithm}
\begin{proposition}
\begin{upshape}
Algorithm~\ref{alg:optPreprove} is sound and complete w.r.t. provability.
Algorithm~\ref{alg:optPreprove} has complexity X. Furthermore, Algorithm~\ref{alg:optPreprove}
allows us to completely separate the pre-proving of equivalences from
the actual sequent calculus.
\end{upshape}
\end{proposition}
\begin{proof}
...
\end{proof}
\section{Generalized optimization}
As previously mentioned, the demonstrated optimization is not restricted to the
case of conditional
modal logics.
\begin{definition}
If $\Gamma$ is a sequent, we denote the set of all arguments of
top-level modalities from $\Gamma$ by $arg(\Gamma)$.
A \emph{short sequent} is a sequent which consists of just one formula which
itself is a propositional formula over a fixed maximal number of modal arguments
from $arg(\Gamma)$. In the following, we fix the maximal number of modal arguments
in short sequents to be 2.
\end{definition}
The general method of the optimization then takes the following form:
Let $S_1,\ldots,S_n$ be short sequents and assume that there is
a (w.r.t.\ the considered modal logic) sound instance of the generic rule
depicted in Figure~\ref{fig:modalOpt} (where $\mathcal{S}$ is a set of
arbitrary sequents).
\begin{figure}[!h]
\begin{center}
\begin{tabular}{| c |}
\hline
\\[-5pt]
(\textsc {\textbf{Opt}}) \inferrule{ S_1 \\ \ldots \\ S_n \\ \mathcal{S} }
{ \Gamma } \\[-5pt]
\\
\hline
\end{tabular}
\end{center}
\caption{The general rule-scheme to which the optimization may be applied}
\label{fig:modalOpt}
\end{figure}
We are now able to devise a final version (Algorithm~\ref{alg:genOptPreprove}) of the
optimized algorithm: instead of considering only equivalences of conditional antecedents
for pre-proving, we now extend our attention to arbitrary short sequents over modal arguments.
\begin{algorithm}[h]
\begin{alg}
\begin{upshape}
Step 1: Take a formula $\phi$ as input. Set $i=0$, $\mathcal{K}_0=\emptyset$, $eval_0=\emptyset$.\\
Step 2: Generate the set $args_i$ of all modal arguments of $\phi$
which have nesting depth at most $i$. If $i<depth(\phi)$ continue
with Step 3, else set $\mathcal{K}=\mathcal{K}_{i}, eval=eval_{i}$ and continue with Step 4.\\
Step 3: Let $seq_i$ denote the set of all short sequents over at most two formulae
$A_a,A_b\in args_i$. Compute Algorithm~\ref{alg:optSeq} ($\psi$, $(\mathcal{K}_i,eval_i)$) for all
$\psi\in seq_i$. For each short sequent
$\psi\in seq_i$, set $eval_{i+1}(\psi)=\top$ if the result of Algorithm~\ref{alg:optSeq} was
\verb+True+ and $eval_{i+1}(\psi)=\bot$ otherwise (extending $eval_i$). Set
$\mathcal{K}_{i+1} = \mathcal{K}_i\cup seq_i$, set $i = i + 1$ and continue with Step 2.\\
Step 4: Call Algorithm~\ref{alg:optSeq} ($\phi$, $(\mathcal{K},eval)$).
\end{upshape}
\label{alg:genOptPreprove}
\end{alg}
\end{algorithm}
In order to use Algorithm~\ref{alg:genOptPreprove}, the employed ruleset has to be
extended by the generic modified rule given in Figure~\ref{fig:modModalOpt}.
\begin{figure}[!h]
\begin{center}
\begin{tabular}{| c |}
\hline
\\[-5pt]
(\textsc {\textbf{Opt}$^m$}) \inferrule{ eval(S_1)=\top \\ \ldots \\ eval(S_n)=\top \\ \mathcal{S} }
{ \Gamma } \\[-5pt]
\\
\hline
\end{tabular}
\end{center}
\caption{The general optimized rule}
\label{fig:modModalOpt}
\end{figure}
\begin{example}
The following two cases are instances of the generic optimization:
\begin{enumerate}
\item (Classical modal logics / neighbourhood semantics) Let $\Gamma = \{\Box A = \Box B\}$,
$n=1$, $S_1=\{A=B\}$ and $\mathcal{S}=\emptyset$. Algorithm~\ref{alg:genOptPreprove}
may then be applied whenever the following congruence rule is sound in the considered
logic:
\begin{quote}
\begin{center}
(\textsc {\textbf{Opt}$_{Cong}$}) \inferrule{ A=B }
{ \Box A = \Box B }
\end{center}
\end{quote}
The according modified version of this rule is as follows:
\begin{quote}
\begin{center}
(\textsc {\textbf{Opt}$^m_{Cong}$}) \inferrule{ {eval(A=B)=\top} }
{ \Box A = \Box B }
\end{center}
\end{quote}
\item (Monotone modal logics) By setting $\Gamma = \{\Box A \rightarrow \Box B\}$,
$n=1$, $S_1=\{A\rightarrow B\}$ and $\mathcal{S}=\emptyset$, we may instantiate
the generic algorithm to the case of modal logics which are monotone w.r.t. their
modal operator. So assume the following rule to be sound in the considered modal
logic:
\begin{quote}
\begin{center}
(\textsc {\textbf{Opt}$_{Mon}$}) \inferrule{ A\rightarrow B }
{ \Box A \rightarrow \Box B }
\end{center}
\end{quote}
The according modified version of this rule is as follows:
\begin{quote}
\begin{center}
(\textsc {\textbf{Opt}$^m_{Mon}$}) \inferrule{ {eval(A\rightarrow B)=\top} }
{ \Box A \rightarrow \Box B }
\end{center}
\end{quote}
In the case that \textbf{Opt}$_{Mon}$ is the only modal rule of the
considered logic (i.e.\ the case of plain monotone modal logic), all the
proof work connected to the modal operator is shifted to the
pre-proving process. In particular, matching against the modal rules
$\mathcal{RO}^m_{sc}$ becomes a mere lookup of the value of $eval$.
This means that every call of the sequent algorithm Algorithm~\ref{alg:optSeq}
corresponds in complexity to ordinary SAT solving for propositional logic.
Furthermore, Algorithm~\ref{alg:optSeq} will be called $|\phi|$ times. This
observation may be generalized:
\end{enumerate}
\label{ex:neighMon}
\end{example}
\begin{remark}
In the case that all modal rules of the considered logic are instances of
the generic rule (\textbf{Opt}) with $\mathcal{S}=\emptyset$ (as seen in Example~\ref{ex:neighMon}),
the optimization does not only allow for a reduction of computing time, but
it also allows us to effectively reduce the sequent calculus to a sat-solving
algorithm.
Furthermore, the optimized algorithm will always be as efficient as the
original one in this case (since every occurrence of short sequents over $arg(\Gamma)$
that accords with the current instantiation of the rule (\textbf{Opt})
would have to be shown or refuted during the course of the original algorithm anyway).
\end{remark}
\section{Implementation}
Write a few words about the Haskell implementation here. Nothing interesting apart
from the Web interface, and that all of this is part of CoLoSS.
\section{Summary}
\end{document}