\documentclass{entcs} \usepackage{entcsmacro}
\usepackage{graphicx}
\usepackage{mathpartir}
\sloppy
% The following is enclosed to allow easy detection of differences in
% ascii coding.
% Upper-case A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
% Lower-case a b c d e f g h i j k l m n o p q r s t u v w x y z
% Digits 0 1 2 3 4 5 6 7 8 9
% Exclamation ! Double quote " Hash (number) #
% Dollar $ Percent % Ampersand &
% Acute accent ' Left paren ( Right paren )
% Asterisk * Plus + Comma ,
% Minus - Point . Solidus /
% Colon : Semicolon ; Less than <
% Equals = Greater than > Question mark ?
% At @ Left bracket [ Backslash \
% Right bracket ] Circumflex ^ Underscore _
% Grave accent ` Left brace { Vertical bar |
% Right brace } Tilde ~
% A couple of exemplary definitions:
\newcommand{\Nat}{{\mathbb N}}
\newcommand{\Real}{{\mathbb R}}
\newcommand{\COLOSS}{\textrm{CoLoSS}}
\def\lastname{Hausmann and Schr\"oder}
\begin{document}
\begin{frontmatter}
\title{Optimizing Conditional Logic Reasoning within \COLOSS}
\author[DFKI]{Daniel Hausmann\thanksref{myemail}}
\author[DFKI,UBremen]{Lutz Schr\"oder\thanksref{coemail}}
\address[DFKI]{DFKI Bremen, SKS}
\address[UBremen]{Department of Mathematics and Computer Science, Universit\"at Bremen, Germany}
% \thanks[ALL]{Work forms part of DFG-project \emph{Generic Algorithms and Complexity
% Bounds in Coalgebraic Modal Logic} (SCHR 1118/5-1)}
\thanks[myemail]{Email: \href{mailto:Daniel.Hausmann@dfki.de} {\texttt{\normalshape Daniel.Hausmann@dfki.de}}}
\thanks[coemail]{Email: \href{mailto:Lutz.Schroeder@dfki.de} {\texttt{\normalshape Lutz.Schroeder@dfki.de}}}
\begin{abstract}
The generic modal reasoner \COLOSS~covers a wide variety of logics
ranging from graded and probabilistic modal logic to coalition logic
and conditional logics, being based on a broadly applicable
coalgebraic semantics. In the present work, we discuss an
optimisation of the reasoning strategies employed in
\COLOSS. Specifically, we discuss strategies of memoisation and
dynamic programming that are based on the observation that short
sequents play a central role in many of the logics under
study. These optimisations seem to be particularly useful for the
case of conditional logics, for some of which dynamic programming
even improves the theoretical complexity of the algorithm. These
strategies have been implemented in \COLOSS; we give a detailed
comparison of the different heuristics, observing that in the
targeted domain of conditional logics, a substantial speed-up can be
achieved.
\end{abstract}
\begin{keyword}
Coalgebraic modal logic, conditional logic, automated reasoning,
optimisation, heuristics, memoisation, dynamic programming
\end{keyword}
\end{frontmatter}
\section{Introduction}\label{intro}
In recent decades, modal logic has seen a development towards semantic
heterogeneity, witnessed by an emergence of numerous logics that,
while still of manifestly modal character, are not amenable to
standard Kripke semantics. Examples include probabilistic modal
logic~\cite{FaginHalpern94}, coalition logic~\cite{Pauly02}, and
conditional logic~\cite{Chellas80}, to name just a few. The move away
from Kripke semantics, mirrored on the syntactical side by the failure
of normality, entails additional challenges for tableau and sequent
systems, as the correspondence between tableaus and models becomes
looser, and in particular demands created by modal formulas can no
longer be discharged by the creation of single successor nodes.
This problem is tackled on the theoretical side by introducing the
semantic framework of coalgebraic modal
logic~\cite{Pattinson03,Schroder05}, which covers all logics mentioned
above and many more. It turns out that coalgebraic modal logic does
allow the design of generic reasoning algorithms, including a generic
tableau method originating from~\cite{SchroderPattinson09}; this
generic method may in fact be separated from the semantics and
developed purely syntactically, as carried out
in~\cite{PattinsonSchroder08b,PattinsonSchroder09a}.
Generic tableau algorithms for coalgebraic modal logics, in particular
the algorithm described in~\cite{SchroderPattinson09}, have been
implemented in the reasoning tool
\COLOSS~\cite{CalinEA09}\footnote{available under
\url{http://www.informatik.uni-bremen.de/cofi/CoLoSS/} and \url{http://www.doc.ic.ac.uk/~dirk/COLOSS/}}. As indicated above, it
is a necessary feature of the generic tableau systems that they
potentially generate multiple successor nodes for a given modal
demand, so that in addition to the typical depth problem, proof search
faces a rather noticeable problem of breadth. The search for
optimisation strategies to increase the efficiency of reasoning thus
becomes all the more urgent. Here we present one such strategy, which
is particularly efficient in reducing both depth and branching for the
class of conditional logics. A notable feature of this class is that
many of the rules rely rather heavily on premises stating equivalence
between formulas; thus, conditional logics are a good candidate for
memoising strategies, applied judiciously to short sequents. We
discuss the implementation of memoising and dynamic programming
strategies within \COLOSS.
Structure of the paper: First we give a short introduction to
the theory of coalgebraic modal logics, the general abstract
setting in which this paper takes place. Then we describe the
satisfiability (and provability) algorithm and its
optimisation with regard to conditional logics. Further, we
analyze the quality of this optimisation by relating
the structure of the input formula to the amount of work which may
be saved by using the proposed optimisation. Furthermore,
we compare our tool with other provers for conditional logic,
and finally we discuss to what extent the described
optimisation could be used for modal logics other than
conditional logic.
\section{Coalgebraic conditional logic}
This section recalls the framework of coalgebraic modal logic, the
general abstract setting of this paper: in particular, the notion of
the set of formulae of a modal logic, and the notions of provability
and satisfiability, where a formula is provable if and only if its
negation is not satisfiable. We also briefly recall the basics of
conditional logic.
\section{The algorithm}
\subsection{The generic sequent calculus}
Within the framework just introduced, we now devise a generic
algorithm to decide the provability of formulae. By
instantiating the generic algorithm to a specific modal logic
(simply by using the defining modal rule of this logic), it is
possible to obtain an algorithm to decide provability of
formulae of this specific modal logic.
\begin{definition}
A \emph{sequent} $\Gamma$ is a set of formulae. If the provability
of a sequent is yet to be decided, it is called an \emph{open sequent}.
If the provability of a sequent has already been shown or refuted,
the sequent is called a \emph{treated sequent}. A \emph{premise}
$\Lambda$ is a set of open sequents.
\end{definition}
\begin{definition}
\noindent A \emph{rule} $r=(\Lambda,\Gamma)$ consists of a premise
and an open sequent (usually called the \emph{conclusion}).
\end{definition}
\noindent The generic sequent calculus is given by a set of rules
$\mathcal{R}_{sc}$ which consists of the finishing and the branching
rules $\mathcal{R}^b_{sc}$ (i.e. rules with no premise or more than
one premise), the linear rules $\mathcal{R}^l_{sc}$ (i.e. rules with
exactly one premise) and the modal rule $\mathcal R^m_{sc}$. The
finishing and the branching rules are presented in Figure~\ref{fig:branching}
(where $\top=\neg\bot$ and $p$ is an atom), the
linear rules are shown in Figure~\ref{fig:linear}. For now, we consider the modal
rule $\mathcal R^m_{sc}$ of the standard modal logic \textbf{K}
(as given by Figure~\ref{fig:modalK}). Note again that
it is possible to treat different modal logics with the same generic
calculus just by appropriately instantiating the modal rule.
\begin{figure}[!h]
\begin{center}
\begin{tabular}{| c c c |}
\hline
& & \\[-5pt]
(\textsc {$\neg$F}) \inferrule{ }{\Gamma, \neg\bot} &
(\textsc {Ax}) \inferrule{ }{\Gamma, p, \neg p} &
(\textsc {$\wedge$}) \inferrule{\Gamma, A \\ \Gamma, B}{\Gamma, A\wedge B} \\[-5pt]
& & \\
\hline
\end{tabular}
\end{center}
\caption{The finishing and the branching sequent rules $\mathcal{R}^b_{sc}$}
\label{fig:branching}
\end{figure}
\begin{figure}[!h]
\begin{center}
\begin{tabular}{| c c |}
\hline
& \\[-5pt]
(\textsc {$\neg\neg$})\inferrule{\Gamma, A}{\Gamma, \neg\neg A} &
(\textsc {$\neg\wedge$}) \inferrule{\Gamma, \neg A, \neg B}{\Gamma, \neg(A\wedge B)} \\[-5pt]
& \\
\hline
\end{tabular}
\end{center}
\caption{The linear sequent rules $\mathcal{R}^l_{sc}$}
\label{fig:linear}
\end{figure}
\begin{figure}[!h]
\begin{center}
\begin{tabular}{| c |}
\hline
\\[-5pt]
(\textsc {\textbf{K}})\inferrule{\neg A_1, \ldots , \neg A_n, A_0}
{\Gamma, \neg \Box A_1,\ldots,\neg \Box A_n, \Box A_0 } \\[-5pt]
\\
\hline
\end{tabular}
\end{center}
\caption{The modal rule of \textbf{K}}
\label{fig:modalK}
\end{figure}
Algorithm~\ref{alg:seq} makes use of the sequent rules in the following manner:
In order to show the \emph{provability} of a formula $\phi$, the
algorithm starts with the sequent $\{\phi\}$ and tries to apply all
of the sequent rules $r\in \mathcal{R}_{sc}$ to it.\footnote{The actual Haskell
implementation of the algorithm first tries to apply the linear rules
from $\mathcal{R}^l_{sc}$ and only afterwards tries to apply the other
rules from $\mathcal{R}^b_{sc}$ and $\mathcal{R}^m_{sc}$.}
% explain what 'applying' means
The result of this single step of rule application is a set
of premises $P$. The proving function is then recursively started on
each sequent of every premise in this set. If there is any premise
whose sequents are all provable, the application of the corresponding rule is
justified and hence the algorithm has succeeded.
\begin{algorithm}[h]
\begin{alg}
\begin{upshape}
Step 1: Take the input formula $\phi$, construct the initial set
of premises (containing just one premise with just one sequent)
$S = \{\{\phi\}\}$ from it.\\
Step 2: Try to apply all rules $r\in \mathcal{R}_{sc}$ to any open sequent
$\Gamma\in P$ from any premise $P\in S$. Let $\Lambda$ denote the set
of the resulting premises.\\
Step 3: If $\Lambda$ contains the empty premise, mark the open
sequent $\Gamma$ as proved (and if $P$ contains no more open
sequents, mark $P$ as proved). If $\Lambda$ contains no premise at
all, mark the open sequent $\Gamma$ and the premise $P$ as refuted.\\
Step 4: Else, add all premises from $\Lambda$ to the set of
premises (i.e. $S=S\cup\Lambda$). If there are any remaining open
sequents, continue with Step 2.\\
Step 5: If all premises resulting from the application of all rules
from $\mathcal{R}_{sc}$ to $\phi$ are marked as proved, finish with result \verb|True|;
else finish with result \verb|False|.
\end{upshape}
\label{alg:seq}
\end{alg}
\end{algorithm}
\begin{proposition}
\begin{upshape}
Let $\mathcal{R}_{sc}$ be strictly one-step complete, closed under contraction,
and PSPACE-tractable. Then Algorithm~\ref{alg:seq} is sound and complete w.r.t. provability
and it is in PSPACE.
\end{upshape}
\end{proposition}
\begin{proof}
We only note that Algorithm~\ref{alg:seq} is an equivalent implementation of the algorithm proposed
in~\cite{SchroderPattinson09}. For more details, refer to Theorem 6.13 of~\cite{SchroderPattinson09}.
\end{proof}
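To make the generic proof search concrete, the following Haskell fragment sketches a naive instance of Algorithm~\ref{alg:seq} for the modal logic \textbf{K}. It is a minimal illustration only: the datatype and the function \texttt{provable} are hypothetical names and do not reflect the actual \COLOSS~code.

```haskell
-- Illustrative sketch only: a naive instance of the generic proof search
-- for the modal logic K; names are hypothetical, not the CoLoSS code.
import Data.List (delete)

data Form = Atom String | Bot | Neg Form | And Form Form | Box Form
  deriving (Eq, Show)

type Sequent = [Form]

-- A sequent is provable iff an axiom applies, or some rule application
-- yields a premise all of whose sequents are provable.
provable :: Sequent -> Bool
provable g = axiom || any (all provable) premises
  where
    axiom = Neg Bot `elem` g                                -- (negF)
         || or [ Neg a `elem` g | a@(Atom _) <- g ]         -- (Ax)
    premises =
         [ [a : delete f g]                    | f@(Neg (Neg a))   <- g ]  -- (negneg)
      ++ [ [Neg a : Neg b : delete f g]        | f@(Neg (And a b)) <- g ]  -- (negand)
      ++ [ [a : delete f g, b : delete f g]    | f@(And a b)       <- g ]  -- (and)
      ++ [ [a0 : [ Neg a | Neg (Box a) <- g ]] | Box a0            <- g ]  -- (K)
```

For instance, \texttt{provable} confirms the \textbf{K}-theorem $\Box(p\wedge q)\rightarrow\Box p$ when the implication is encoded in the $\neg,\wedge$ fragment as the sequent $\{\neg\Box(p\wedge q),\Box p\}$.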
\subsection{The conditional logic instance}
The genericity of the introduced sequent calculus allows us to easily
create instantiations of Algorithm~\ref{alg:seq} for a large variety
of modal logics:
By setting $\mathcal R^m_{sc}$ for instance to the rule shown in
Figure~\ref{fig:modalCKCEM}
(where $A_a = A_b$ is shorthand for $(A_a\rightarrow A_b)\wedge (A_b\rightarrow A_a)$), we
obtain an algorithm for deciding provability (and satisfiability) of
conditional logic. Note, however, that we restrict ourselves to the
exemplary conditional logic \textbf{CKCEM} for the remainder of this section
(slightly adapted versions of the optimisation work for other conditional logics).
\begin{figure}[h!]
\begin{center}
\begin{tabular}{| c |}
\hline
\\[-5pt]
(\textsc {\textbf{CKCEM}})\inferrule{A_0 = \ldots = A_n \\ B_0,\ldots, B_j,\neg B_{j+1},\ldots,\neg B_n}
{\Gamma, (A_0\Rightarrow B_0),\ldots,(A_j\Rightarrow B_j),
\neg(A_{j+1}\Rightarrow B_{j+1}),\ldots,\neg(A_n\Rightarrow B_n) } \\[-5pt]
\\
\hline
\end{tabular}
\end{center}
\caption{The modal rule $\textbf{CKCEM}$ of conditional logic}
\label{fig:modalCKCEM}
\end{figure}
In the following, we use the notions of \emph{conditional antecedent} and
\emph{conditional consequent} to refer to the first and the second argument of
the modal operator of conditional logic, respectively.
In order to decide whether there is an
instance of the modal rule of conditional logic that can be applied to
the current sequent, it is necessary to create a preliminary premise
for each possible combination of equalities between the antecedents of the
modal operators in this sequent. This results in $2^n$ new premises for
a sequent with $n$ top-level modal operators.
\begin{example}
Consider the sequent $\Gamma=\{(A_0\Rightarrow B_0),(A_1\Rightarrow B_1),
(A_2\Rightarrow B_2)\}$. By instantiating the modal rule, several
new premises
such as $\{\{A_0=A_1=A_2\},\{B_0,B_1,B_2\}\}$ or $\{\{A_0=A_2\},\{B_0,B_2\}\}$
may be obtained. Only after the equalities in the first sequent of each
of these premises have been shown (or refuted), it is clear whether the
current instantiation of the modal rule is actually leading to a possibly
provable premise.
\label{ex:cond}
\end{example}
A more intelligent approach is to first
partition the set of all antecedents of the top-level modal operators in the
current sequent into equivalence classes with respect to logical equivalence.
This partition not only allows for a reduction of the number of newly
created open premises in each modal step,
but it also allows us to separate the two parts of the conditional modal step
(i.e.\ the first part, showing the necessary equalities, and the second
part, showing that the resulting sequent, consisting only of the appropriate
conditional consequents, is actually provable itself).
\begin{example}
Consider again the sequent from Example~\ref{ex:cond}. Using the exemplary
knowledge that $A_0=A_1$, $A_1\neq A_2$ and $A_0\neq A_2$, it is immediate
that there are just two reasonable instantiations of the modal rule, leading
to the two premises $\{\{B_0,B_1\}\}$ and $\{\{B_2\}\}$. For the
first of these two premises, note that it is not necessary to show the
equivalence of $A_0$ and $A_1$ again.
\end{example}
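The partitioning step can be sketched generically in Haskell; the oracle \texttt{eq} stands in for the provable-equivalence check (and is assumed to be an equivalence relation), and the function name is illustrative rather than part of \COLOSS:

```haskell
-- Sketch of the partition step: group antecedents into equivalence
-- classes w.r.t. an oracle 'eq' (assumed to be an equivalence relation).
-- Each resulting class gives rise to one instantiation of the modal rule.
classes :: (a -> a -> Bool) -> [a] -> [[a]]
classes eq = foldr insert []
  where
    insert x (c : cs)
      | eq x (head c) = (x : c) : cs   -- x belongs to an existing class
      | otherwise     = c : insert x cs
    insert x []       = [[x]]          -- open a new class for x
```

With the knowledge of the example above ($A_0=A_1$, and $A_2$ inequivalent to both), \texttt{classes} yields the two classes $\{A_0,A_1\}$ and $\{A_2\}$, i.e.\ two rule instantiations instead of $2^3$ candidate premises.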
\begin{remark}
In the case of conditional logic, observe the following: Since the
modal antecedents that appear in a formula are not changed by any
rule of the sequent calculus, it is possible to extract all possibly
relevant antecedents of a formula even \emph{before} the actual sequent
calculus is applied. This allows us to first compute the equivalence
classes of all the relevant antecedents and then feed this knowledge into
the actual sequent calculus.
\end{remark}
\section{The optimisation}
\begin{definition}
A conditional antecedent of \emph{modal nesting depth} $i$ is a
conditional antecedent which contains at least one antecedent of
modal nesting depth $i-1$ and which contains no antecedent
of modal nesting depth greater than $i-1$. A
conditional antecedent of nesting depth 0 is an antecedent
that does not contain any further modal operators.
Let $p_i$ denote the set of all conditional antecedents of modal
nesting depth $i$. Further, let $prems(n)$ denote the set of all
conditional antecedents of modal nesting depth at most $n$ (i.e.\
$prems(n)=\bigcup_{j=0}^{n} p_j$).
Finally, let $depth(\phi)$ denote the maximal modal nesting in
the formula $\phi$.
\end{definition}
\begin{example}
In the formula $\phi=((p_0\Rightarrow p_1) \wedge ((p_2\Rightarrow p_3)\Rightarrow p_4))
\Rightarrow (p_5\Rightarrow p_6)$,
the premise $((p_0\Rightarrow p_1) \wedge ((p_2\Rightarrow p_3)\Rightarrow p_4))$ has a
nesting depth of 2 (and not 1), $(p_0\Rightarrow p_1)$ and $(p_2\Rightarrow p_3)$ both
have a nesting depth of 1, and finally $p_0$, $p_2$ and $p_4$ all have a nesting depth of 0.
Furthermore, $prems(1)=\{p_0,p_2,p_4,(p_0\Rightarrow p_1),(p_2\Rightarrow p_3)\}$
and $depth(\phi)=3$.
\end{example}
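The definition above admits a direct transcription into Haskell; we assume the minimal formula type below (with \texttt{Cond} for $\Rightarrow$) and read ``conditional antecedent'' as ``left argument of some $\Rightarrow$''. The names are illustrative:

```haskell
-- Minimal formula type; Cond encodes the conditional modality =>.
data Form = Atom String | Neg Form | And Form Form | Cond Form Form
  deriving (Eq, Show)

-- all conditional antecedents (left arguments of =>) occurring in a formula
ants :: Form -> [Form]
ants (Atom _)   = []
ants (Neg a)    = ants a
ants (And a b)  = ants a ++ ants b
ants (Cond a b) = a : ants a ++ ants b

-- nesting depth of an antecedent: 0 if it contains no further antecedents,
-- otherwise one more than the deepest antecedent it contains
antDepth :: Form -> Int
antDepth f = case ants f of
  [] -> 0
  as -> 1 + maximum (map antDepth as)

-- maximal modal nesting of a whole formula
depth :: Form -> Int
depth (Atom _)   = 0
depth (Neg a)    = depth a
depth (And a b)  = max (depth a) (depth b)
depth (Cond a b) = max (1 + depth a) (depth b)
```

On the example formula $\phi$ above, \texttt{antDepth} assigns depth 2 to the outer antecedent and depth 1 to $(p_2\Rightarrow p_3)$, and \texttt{depth} yields $depth(\phi)=3$.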
\begin{definition}
A set $\mathcal{K}$ of treated sequents together with a function
$eval:\mathcal{K}\rightarrow \{\top,\bot\}$ is called \emph{knowledge}.
\end{definition}
We may now construct an optimized algorithm which allows us
to decide provability (and satisfiability) of formulae more efficiently
in some cases. The optimized algorithm is constructed from two functions
(namely from the actual proving function and from the so-called
\emph{pre-proving} function):
\begin{algorithm}[h]
\begin{alg}
\begin{upshape}
Step 1: Take the input formula $\phi$ and the knowledge $(\mathcal{K},eval)$,
construct the initial set of premises (containing just one premise with just
one sequent) $S = \{\{\phi\}\}$ from it.\\
Step 2: Try to apply all rules $r\in \mathcal{RO}_{sc}$ (supplying $\mathcal{RO}^m_{sc}$
with knowledge $(\mathcal{K},eval)$) to any open sequent
$\Gamma\in P$ from any premise $P\in S$. Let $\Lambda$ denote the set
of the resulting premises.\\
Step 3: If $\Lambda$ contains the empty premise, mark the open
sequent $\Gamma$ as proved (and if $P$ contains no more open
sequents, mark $P$ as proved). If $\Lambda$ contains no premise at
all, mark the open sequent $\Gamma$ and the premise $P$ as refuted.\\
Step 4: Else, add all premises from $\Lambda$ to the set of
premises (i.e. $S=S\cup\Lambda$). If there are any remaining open
sequents, continue with Step 2.\\
Step 5: If all premises resulting from the application of all rules
from $\mathcal{RO}_{sc}$ to $\phi$ are marked as proved, finish with result \verb|True|;
else finish with result \verb|False|.
\end{upshape}
\label{alg:optSeq}
\end{alg}
\end{algorithm}
Algorithm~\ref{alg:optSeq} is very similar to Algorithm~\ref{alg:seq};
however, it uses a modified
set of rules $\mathcal{RO}_{sc}$ and it also allows for the
input of some knowledge $(\mathcal{K},eval)$ in the form of a set of sequents
together with an evaluation function. This knowledge is passed on to
the modified modal rule of conditional logic, which makes appropriate use of it.
$\mathcal{RO}_{sc}$ is obtained from $\mathcal{R}_{sc}$ by replacing
the modal rule from Figure~\ref{fig:modalCKCEM} with the modified modal
rule from Figure~\ref{fig:modalCKCEMm}.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{| c |}
\hline
\\[-5pt]
(\textsc {\textbf{CKCEM}$^m$})\inferrule{\bigwedge_{k,l\in\{0..n\}}{eval(A_k=A_l)=\top}
\\ B_0,\ldots, B_j,\neg B_{j+1},\ldots,\neg B_n}
{\Gamma, (A_0\Rightarrow B_0),\ldots,(A_j\Rightarrow B_j),
\neg(A_{j+1}\Rightarrow B_{j+1}),\ldots,\neg(A_n\Rightarrow B_n) } \\[-5pt]
\\
\hline
\end{tabular}
\end{center}
\caption{The modified modal rule \textbf{CKCEM}$^m$ of conditional logic}
\label{fig:modalCKCEMm}
\end{figure}
\begin{algorithm}[h]
\begin{alg}
\begin{upshape}
Step 1: Take a formula $\phi$ as input. Set $i=0$, $\mathcal{K}_0=\emptyset$, $eval_0=\emptyset$.\\
Step 2: Generate the set $prems_i$ of all conditional antecedents of $\phi$
of nesting depth at most $i$. If $i<depth(\phi)$ continue
with Step 3; else set $\mathcal{K}=\mathcal{K}_{i}, eval=eval_{i}$ and continue with Step 4.\\
Step 3: Let $eq_i$ denote the set of all equalities $A_a = A_b$ for different
formulae $A_a,A_b\in prems_i$. Compute
Algorithm~\ref{alg:optSeq} ($\psi$, $(\mathcal{K}_i,eval_i)$) for all $\psi\in eq_i$.
For each equality $\psi\in eq_i$,
set $eval_{i+1}(\psi)=\top$ if the result of Algorithm~\ref{alg:optSeq} was \verb+True+
and $eval_{i+1}(\psi)=\bot$ otherwise. Then set $\mathcal{K}_{i+1} = eq_i$ and
$i = i + 1$, and continue with Step 2.\\
Step 4: Call Algorithm~\ref{alg:optSeq} ($\phi$, $(\mathcal{K},eval)$) and return its result
as result.
\label{alg:preprove}
\end{upshape}
\end{alg}
\end{algorithm}
Algorithm~\ref{alg:preprove} first computes the knowledge $(\mathcal{K},eval)$ about specific
subformulae of $\phi$ and then finally checks for provability of
$\phi$ (using this knowledge): In order to show the equivalence of
two conditional antecedents of nesting depth at most $i$, we assume
that the equalities $\mathcal{K}_{i}$ between modal antecedents of nesting depth less
than $i$ have already been computed and the result is stored in $eval_i$; hence,
two antecedents are equal if their equivalence is provable by
Algorithm~\ref{alg:optSeq} using only the knowledge $(\mathcal{K}_{i},eval_i)$.
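The bootstrapping of knowledge in Algorithm~\ref{alg:preprove} can be sketched generically in Haskell; here \texttt{decide} abstracts Algorithm~\ref{alg:optSeq} (a prover deciding the equivalence of two antecedents relative to the knowledge gathered so far) and \texttt{nd} abstracts the nesting-depth function. All names are illustrative and not part of \COLOSS:

```haskell
import qualified Data.Map as Map

-- Illustrative sketch of the knowledge bootstrapping: round i decides all
-- equalities between pairs of antecedents whose maximal nesting depth is
-- exactly i, using only the knowledge accumulated in earlier rounds.
preprove :: Ord f
         => (Map.Map (f, f) Bool -> f -> f -> Bool)  -- equivalence prover
         -> (f -> Int)                               -- nesting depth
         -> [f]                                      -- antecedents of phi
         -> Map.Map (f, f) Bool                      -- final knowledge
preprove decide nd as = foldl step Map.empty [0 .. maxD]
  where
    maxD = maximum (0 : map nd as)
    step k i = foldl add k
      [ (a, b) | a <- as, b <- as, a /= b, max (nd a) (nd b) == i ]
      where add acc (a, b) = Map.insert (a, b) (decide k a b) acc
```

Note that in round $i$, the prover is called with the knowledge \texttt{k} of the previous rounds only, exactly as in Algorithm~\ref{alg:preprove}.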
\subsection{Treatment of needed equivalences only}
Since Algorithm~\ref{alg:preprove} tries to show the logical equivalence of every pair
of conditional antecedents that appear in $\phi$, it will have a worse completion
time than Algorithm~\ref{alg:seq} on many formulae:
\begin{example}
Consider the formula
\begin{quote}
$\phi=(((p_0\Rightarrow p_1)\Rightarrow p_2)\Rightarrow p_4)\vee
(((p_5\Rightarrow p_6)\Rightarrow p_7)\Rightarrow p_8)$.
\end{quote}
Algorithm~\ref{alg:preprove} will not only
try to show the necessary equivalences between the pairs
$(((p_0\Rightarrow p_1)\Rightarrow p_2), ((p_5\Rightarrow p_6)\Rightarrow p_7))$,
$((p_0\Rightarrow p_1), (p_5\Rightarrow p_6))$ and $(p_0,p_5)$, but it will
also try to show equivalences between any two conditional antecedents (e.g. $(p_0,
(p_5\Rightarrow p_6))$), even though these equivalences will not be needed
during the execution of Algorithm~\ref{alg:optSeq}.
\end{example}
Based on this observation it is possible to assign a category to each pair of
antecedents that appear in $\phi$:
\begin{definition}
The \emph{paths (in $\phi$)} to a conditional antecedent $\psi$ describe the sequences
of modal arguments through which $\psi$ is reached when starting from the root
of $\phi$:
The path to a top-level antecedent is just $\{1\}$. If $\psi$ does not appear as
antecedent on the topmost level of $\phi$, the path to it is $\{1\}$ prepended to the set
of paths to $\psi$ in any top-level conditional antecedent of $\phi$ together with $\{0\}$
prepended to the set of paths to $\psi$ in any top-level conditional consequent of $\phi$.
\end{definition}
\begin{example}
Consider the formula $\phi=(p_0\Rightarrow p_2)\Rightarrow ((p_0\Rightarrow p_1)\Rightarrow p_3)$. Then the path to
$(p_0\Rightarrow p_2)$ is $\{1\}$, whereas the path to $(p_0\Rightarrow p_1)$ is $\{01\}$. The paths
to $p_0$ are $\{11,011\}$.
\end{example}
\begin{definition}
Let $A$ and $B$ be two conditional antecedents. $A$ and $B$ are called \emph{connected (in $\phi$)} if
at least one path to $A$ is also a path to $B$ (and hence vice-versa). If no path to $A$ is a path to $B$,
the two antecedents are said to be \emph{independent}.
\end{definition}
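Paths and connectedness admit a direct recursive implementation; we again assume the minimal formula type with \texttt{Cond} for $\Rightarrow$ and encode a path as a string over $\{0,1\}$, read left to right (names illustrative):

```haskell
data Form = Atom String | Neg Form | And Form Form | Cond Form Form
  deriving (Eq, Show)

-- paths phi psi: all paths through which psi is reached from the root of
-- phi; '1' descends into a conditional antecedent, '0' into a consequent.
paths :: Form -> Form -> [String]
paths phi psi = go "" phi
  where
    go p (Cond a b) = [ p ++ "1" | a == psi ]      -- psi found as antecedent
                   ++ go (p ++ "1") a ++ go (p ++ "0") b
    go p (Neg a)    = go p a
    go p (And a b)  = go p a ++ go p b
    go _ (Atom _)   = []

-- two antecedents are connected iff they share a path; independent otherwise
connected :: Form -> Form -> Form -> Bool
connected phi a b = any (`elem` paths phi b) (paths phi a)
```

On the formula of the preceding example, the code reproduces the stated paths: $(p_0\Rightarrow p_2)$ has path $1$, $(p_0\Rightarrow p_1)$ has path $01$, and $p_0$ has paths $11$ and $011$; the two compound antecedents are independent.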
Since two independent conditional antecedents will never appear in the scope of the
same application of the modal rule, it is never necessary to show (or
refute) the logical equivalence of independent conditional antecedents. Hence it suffices to restrict
our attention to the connected conditional antecedents. It is then obvious that
any possibly needed equivalence and its truth-value are already included in $(\mathcal{K},eval)$
when the main proof search is started. On the other hand, we have to be aware that we
may still prove equivalences of antecedents which are in fact not needed
(antecedents may be connected and yet never appear together in an application
of the modal rule; this is the case whenever two preceding antecedents are not logically equivalent).
As result of these considerations, we devise Algorithm~\ref{alg:optPreprove},
an improved version of Algorithm~\ref{alg:preprove}. The only difference is
that before proving any equivalence, Algorithm~\ref{alg:optPreprove} checks
whether the current pair of conditional antecedents is actually connected;
only then does it treat the equivalence. Hence independent pairs of antecedents
remain untreated.
\begin{algorithm}[h]
\begin{alg}
\begin{upshape}
Step 1: Take a formula $\phi$ as input. Set $i=0$, $\mathcal{K}_0=\emptyset$, $eval_0=\emptyset$.\\
Step 2: Generate the set $prems_i$ of all conditional antecedents of $\phi$
of nesting depth at most $i$. If $i<depth(\phi)$ continue
with Step 3; else set $\mathcal{K}=\mathcal{K}_{i}, eval=eval_{i}$ and continue with Step 4.\\
Step 3: Let $eq_i$ denote the set of all equalities $A_a = A_b$ for different and not independent
pairs of formulae $A_a,A_b\in prems_i$. Compute
Algorithm~\ref{alg:optSeq} ($\psi$, $(\mathcal{K}_i,eval_i)$) for all $\psi\in eq_i$.
For each equality $\psi\in eq_i$,
set $eval_{i+1}(\psi)=\top$ if the result of Algorithm~\ref{alg:optSeq} was \verb+True+
and $eval_{i+1}(\psi)=\bot$ otherwise. Then set $\mathcal{K}_{i+1} = eq_i$ and
$i = i + 1$, and continue with Step 2.\\
Step 4: Call Algorithm~\ref{alg:optSeq} ($\phi$, $(\mathcal{K},eval)$) and return its result
as result.
\label{alg:optPreprove}
\end{upshape}
\end{alg}
\end{algorithm}
\section{Implementation}
The proposed optimized algorithms have been implemented (using the programming
language Haskell) as part of the generic coalgebraic modal logic satisfiability
solver \COLOSS\footnote{As already mentioned above, more information about \COLOSS,
a web-interface to the tool and
the tested benchmarking formulae can be found at \url{http://www.informatik.uni-bremen.de/cofi/CoLoSS/}}.
\COLOSS~provides the general coalgebraic framework in which the generic
sequent calculus is embedded. It is easily possible to instantiate this generic sequent
calculus to specific modal logics, one particular example being conditional logic.
The matching function for conditional logic in \COLOSS~was hence adapted in order to realize
the different optimisations (closely following Algorithms~\ref{alg:seq},~\ref{alg:preprove}
and~\ref{alg:optPreprove}), so that \COLOSS~now provides an efficient algorithm for
deciding the provability (and satisfiability) of conditional logic formulae.
\subsection{Comparing the proposed algorithms}
In order to show the relevance of the proposed optimisations, we devise several classes
of conditional formulae. Each class has a characteristic general shape, defining its
complexity w.r.t. different parts of the algorithms and thus exhibiting specific
advantages or disadvantages of each algorithm:
\begin{itemize}
\item The formula \verb|bloat(|$i$\verb|)| is a full binary tree of depth $i$ (containing $2^i$ pairwise logically
inequivalent atoms and $2^i-1$ modal antecedents):
\begin{quote}
\verb|bloat(|$i$\verb|)| = $($\verb|bloat(|$i-1$\verb|)|$)\Rightarrow($\verb|bloat(|$i-1$\verb|)|$)$\\
\verb|bloat(|$0$\verb|)| = $p_{rand}$
\end{quote}
Formulae from this class expose the problematic performance of Algorithm~\ref{alg:preprove} whenever
a formula contains many modal antecedents which appear at different depths. A comparison of the different
algorithms w.r.t. formulae \verb|bloat(|$i$\verb|)| is depicted in Figure~\ref{fig:benchBloat}.
Since Algorithm~\ref{alg:preprove} does not check whether pairs of modal antecedents are independent or connected,
it performs considerably worse than Algorithm~\ref{alg:optPreprove} which only attempts to prove the logical
equivalence of formulae which are not independent. Algorithm~\ref{alg:seq} has the best performance in this
extreme case, as it only has to consider pairs of modal antecedents which actually appear during the course
of a proof. This is the price to pay for the optimisation by dynamic programming.
\end{itemize}
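The \verb|bloat| formulae can be generated as follows (with \texttt{Cond} for $\Rightarrow$); the indexed fresh atoms \texttt{p0}, \texttt{p1}, \dots{} are an implementation choice standing in for the random atoms $p_{rand}$, and the counting helpers merely check the sizes claimed above:

```haskell
data Form = Atom String | Cond Form Form
  deriving (Eq, Show)

-- bloat i: a full binary tree of conditionals of depth i; the fresh atoms
-- p0, p1, ... stand in for the random atoms used in the benchmarks.
bloat :: Int -> Form
bloat i = fst (go i 0)
  where
    go 0 n = (Atom ("p" ++ show n), n + 1)
    go d n = let (l, n')  = go (d - 1) n
                 (r, n'') = go (d - 1) n'
             in (Cond l r, n'')

-- sanity checks: bloat i has 2^i atoms and 2^i - 1 modal antecedents
atoms, conds :: Form -> Int
atoms (Atom _)   = 1
atoms (Cond a b) = atoms a + atoms b
conds (Atom _)   = 0
conds (Cond a b) = 1 + conds a + conds b
```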
\begin{figure}[!h]
\begin{center}
\begin{tabular}{| l | r | r | r |}
\hline
$i$ & Algorithm~\ref{alg:seq} & Algorithm~\ref{alg:preprove} & Algorithm~\ref{alg:optPreprove} \\
\hline
1 & 0.008s & 0.009s & 0.010s\\
2 & 0.008s & 0.011s & 0.010s\\
3 & 0.009s & 0.028s & 0.014s\\
4 & 0.010s & 0.233s & 0.022s\\
5 & 0.011s & 2.840s & 0.087s\\
6 & 0.013s & 33.476s & 0.590s\\
7 & 0.019s & 402.239s & 4.989s\\
\hline
\end{tabular}
\end{center}
\caption{Results for bloat($i$)}
\label{fig:benchBloat}
\end{figure}
\begin{itemize}
\item The formula \verb|conjunct(|$i$\verb|)| is just an $i$-fold conjunction of a specific formula $A$:
\begin{quote}
\verb|conjunct(|$i$\verb|)| = $A_1\wedge\ldots\wedge A_i$\\
$A=(((p_1\vee p_0)\Rightarrow p_2)\vee((p_0\vee p_1)\Rightarrow p_2))\vee\neg(((p_0\vee p_1)\Rightarrow p_2)\vee((p_1\vee p_0)\Rightarrow p_2))$
\end{quote}
This class consists of formulae which contain logically (but not syntactically) equivalent antecedents.
As $i$ increases, so does the number of occurrences of identical modal antecedents in different positions
of the considered formula. A comparison of the different algorithms w.r.t.\ formulae \verb|conjunct(|$i$\verb|)| is depicted in
Figure~\ref{fig:benchConjunct}. It is obvious that the optimized algorithms perform considerably better than the unoptimized
Algorithm~\ref{alg:seq}. The reason is that Algorithm~\ref{alg:seq} repeatedly proves equivalences between the same
pairs of modal antecedents. The optimized algorithms, on the other hand, are equipped with knowledge about the modal antecedents,
so that these equivalences have to be proved only once. However, even the runtime of the optimized algorithms is exponential in $i$,
due to the exponentially increasing complexity of the underlying propositional formula. Note that the use of propositional tautologies (such as
$A \leftrightarrow (A\wedge A)$ in this case) would help to greatly reduce the computing time for \verb|conjunct(|$i$\verb|)|.
Optimisation of propositional reasoning is beyond the scope of this paper, though; thus we devise the following exemplary class of formulae
(for which propositional tautologies would not help).
\end{itemize}
\begin{figure}[!h]
\begin{center}
\begin{tabular}{| l | r | r | r |}
\hline
$i$ & Algorithm~\ref{alg:seq} & Algorithm~\ref{alg:preprove} & Algorithm~\ref{alg:optPreprove} \\
\hline
1 & 0.012s & 0.013s & 0.012s\\
2 & 0.021s & 0.015s & 0.014s\\
3 & 0.681s & 0.021s & 0.021s\\
4 & 131.496s & 0.048s & 0.048s\\
5 & $>$600.000s & 0.199s & 0.201s\\
6 & $\gg$600.000s & 1.152s & 1.161s\\
7 & $\gg$600.000s & 8.746s & 8.667s\\
8 & $\gg$600.000s & 74.805s & 75.595s\\
9 & $\gg$600.000s & $>$600.000s & $>$600.000s\\
\hline
\end{tabular}
\end{center}
\caption{Results for conjunct($i$)}
\label{fig:benchConjunct}
\end{figure}
\begin{itemize}
\item The formula \verb|explode(|$i$\verb|)| contains equivalent, but not syntactically
equal, modal antecedents, nested in an interleaved fashion, of depth at most $i$:
\begin{quote}
\verb|explode(|$i$\verb|)| = $X^i_1\vee\ldots\vee X^i_i$\\
$X^i_1=(A^i_1\Rightarrow(\ldots(A^i_i\Rightarrow (c_1\wedge\ldots\wedge c_i))\ldots))$\\
$X^i_j=(A^i_{j\bmod i}\Rightarrow(\ldots(A^i_{(j+(i-1))\bmod i}\Rightarrow \neg c_j)\ldots))$\\
$A^i_j=p_{j \bmod i}\wedge\ldots\wedge p_{(j+(i-1)) \bmod i}$
\end{quote}
This class contains complex formulae for which the unoptimized algorithm is no longer
efficient: Only the combined knowledge about all appearing modal antecedents $A^i_j$ allows
the proving algorithm to reach all modal consequents $c_n$, and only the combined sequent
$\{(c_1\wedge\ldots\wedge c_i),\neg c_1,\ldots,\neg c_i\}$ (containing every appearing
consequent) is provable. For formulae from this class (specifically designed to show the
advantages of optimization by dynamic programming),
the optimized algorithms vastly outperform the unoptimized algorithm (see Figure~\ref{fig:benchExplode}).
\end{itemize}
\begin{figure}[!h]
\begin{center}
\begin{tabular}{| l | r | r | r |}
\hline
$i$ & Algorithm~\ref{alg:seq} & Algorithm~\ref{alg:preprove} & Algorithm~\ref{alg:optPreprove} \\
\hline
1 & 0.009s & 0.008s & 0.009s\\
2 & 0.011s & 0.010s & 0.010s\\
3 & 0.028s & 0.012s & 0.014s\\
4 & 0.268s & 0.018s & 0.018s\\
5 & 4.555s & 0.025s & 0.027s\\
6 & 10.785s & 0.035s & 0.039s\\
7 & $\gg$600.000s & 0.054s & 0.060s\\
8 & $\gg$600.000s & 0.079s & 0.089s\\
9 & $\gg$600.000s & 0.122s & 0.140s\\
\hline
\end{tabular}
\end{center}
\caption{Results for explode($i$)}
\label{fig:benchExplode}
\end{figure}
The tests were conducted on a Linux PC (Dual Core AMD Opteron 2220S (2800MHz), 16GB RAM).
The proposed optimisations evidently yield a significant increase in performance.
In general, the implementation of the proposed algorithms (realized
as a part of \COLOSS) performs comparably to
other conditional logic provers (such as CondLean~\cite{OlivettiEA07}). However, the authors of other
provers did not publish their benchmark sets of formulae, so that a direct
comparison was not possible. Since other provers do not implement the optimisation
strategies proposed in this paper, it is reasonable to assume that the optimized
implementation in \COLOSS~outperforms other systems at least on the
considered exemplary formulae \verb|explode(|$i$\verb|)| and \verb|conjunct(|$i$\verb|)|.
\section{Generalized optimisation}
As previously mentioned, the demonstrated optimisation is not restricted to the
case of conditional
modal logics.
\begin{definition}
If $\Gamma$ is a sequent, we denote the set of all arguments of
top-level modalities from $\Gamma$ by $arg(\Gamma)$.
A \emph{short sequent} is a sequent consisting of just one formula which
itself is a propositional formula over at most a fixed maximal number of modal arguments
from $arg(\Gamma)$. In the following, we fix this maximal number of modal arguments
in short sequents to be 2.
\end{definition}
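To illustrate the set $arg(\Gamma)$, the following minimal Python sketch (assuming, for simplicity, a single unary modality $\Box$ and formulae encoded as nested tuples; this is not taken from the \COLOSS~implementation) collects the arguments of top-level modalities:

```python
def top_args(gamma):
    """arg(Gamma): the arguments of the *top-level* modalities in the
    sequent Gamma; arguments of nested modalities are not collected."""
    out = set()
    def visit(f):
        if f[0] == 'box':
            out.add(f[1])          # top-level box reached: stop descending
        elif f[0] != 'p':          # 'neg', 'and', 'or': visit subformulae
            for g in f[1:]:
                visit(g)
    for f in gamma:
        visit(f)
    return out
```

For instance, for $\Gamma=\{\Box a\vee\neg\Box\Box b\}$ this yields the two arguments $a$ and $\Box b$.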
The general method of the optimisation then takes the following form:
Let $S_1,\ldots,S_n$ be short sequents and assume that there is
a (w.r.t.\ the considered modal logic) sound instance of the generic rule
depicted in Figure~\ref{fig:modalOpt} (where $\mathcal{S}$ is an arbitrary set of
sequents).
\begin{figure}[!h]
\begin{center}
\begin{tabular}{| c |}
\hline
\\[-5pt]
(\textsc {\textbf{Opt}}) \inferrule{ S_1 \\ \ldots \\ S_n \\ \mathcal{S} }
{ \Gamma } \\[-5pt]
\\
\hline
\end{tabular}
\end{center}
\caption{The general rule-scheme to which the optimisation may be applied}
\label{fig:modalOpt}
\end{figure}
We then devise a final version (Algorithm~\ref{alg:genOptPreprove}) of the
optimized algorithm: instead of considering only equivalences of conditional antecedents
for pre-proving, we now extend our attention to arbitrary short sequents over arbitrary modal arguments.
\begin{algorithm}[h]
\begin{alg}
\begin{upshape}
Step 1: Take a formula $\phi$ as input. Set $i=0$, $\mathcal{K}_0=\emptyset$, $eval_0=\emptyset$.\\
Step 2: Generate the set $args_i$ of all modal arguments of $\phi$
which have nesting depth at most $i$. If $i<depth(\phi)$, continue
with Step 3; otherwise set $\mathcal{K}=\mathcal{K}_{i}$, $eval=eval_{i}$ and continue with Step 4.\\
Step 3: Let $seq_i$ denote the set of all short sequents of the form $S_j$ (where $S_j$ is a sequent
from the premise of rule (\textbf{Opt})) over at most two formulae
$A_a,A_b\in args_i$. Run Algorithm~\ref{alg:optSeq} ($\psi$, $(\mathcal{K}_i,eval_i)$) for each
short sequent $\psi\in seq_i$, and set $eval_{i+1}(\psi)=\top$ if the result was
\verb+True+ and $eval_{i+1}(\psi)=\bot$ otherwise. Set $\mathcal{K}_{i+1} = seq_i$ and
$i = i + 1$; continue with Step 2.\\
Step 4: Run Algorithm~\ref{alg:optSeq} ($\phi$, $(\mathcal{K},eval)$) and return its result.
\end{upshape}
\label{alg:genOptPreprove}
\end{alg}
\end{algorithm}
This new Algorithm~\ref{alg:genOptPreprove} may then be used to decide provability of formulae,
provided that the employed rule set is extended by the generic modified rule given in Figure~\ref{fig:modModalOpt}.
\begin{figure}[!h]
\begin{center}
\begin{tabular}{| c |}
\hline
\\[-5pt]
(\textsc {\textbf{Opt}$^m$}) \inferrule{ eval(S_1)=\top \\ \ldots \\ eval(S_n)=\top \\ \mathcal{S} }
{ \Gamma } \\[-5pt]
\\
\hline
\end{tabular}
\end{center}
\caption{The general optimized rule}
\label{fig:modModalOpt}
\end{figure}
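To illustrate the interplay of Algorithm~\ref{alg:genOptPreprove} and the modified rule, the following self-contained Python sketch instantiates the scheme to plain monotone modal logic (cf.\ Example~\ref{ex:neighMon}): all short sequents $A\rightarrow B$ over the modal arguments are pre-proved bottom-up by nesting depth and memoised in a table, and the modal rule then becomes a mere lookup of the memoised value. This is an illustrative reconstruction under simplifying assumptions (one unary modality, tuple-encoded formulae), not the actual Haskell code of \COLOSS:

```python
# Formulae as nested tuples: ('p', name), ('neg', f),
# ('and', f, g), ('or', f, g), ('box', f).

def nnf(f, pos=True):
    """Negation normal form: negation occurs only on atoms and boxes."""
    tag = f[0]
    if tag == 'p':
        return f if pos else ('neg', f)
    if tag == 'neg':
        return nnf(f[1], not pos)
    if tag in ('and', 'or'):
        t = tag if pos else {'and': 'or', 'or': 'and'}[tag]
        return (t, nnf(f[1], pos), nnf(f[2], pos))
    g = ('box', nnf(f[1]))                  # tag == 'box'
    return g if pos else ('neg', g)

def depth(f):
    """Modal nesting depth of a formula."""
    if f[0] == 'p':
        return 0
    if f[0] == 'box':
        return 1 + depth(f[1])
    return max(depth(g) for g in f[1:])

def args(f):
    """All modal arguments (at any nesting depth) occurring in f."""
    if f[0] == 'p':
        return set()
    if f[0] == 'box':
        return {f[1]} | args(f[1])
    return set().union(*(args(g) for g in f[1:]))

def prove(seq, ev):
    """Sequent prover (disjunctive reading) for monotone modal logic;
    the modal rule is resolved by looking up the memo table via ev."""
    for i, f in enumerate(seq):
        if f[0] == 'or':
            return prove(seq[:i] + [f[1], f[2]] + seq[i+1:], ev)
        if f[0] == 'and':
            return (prove(seq[:i] + [f[1]] + seq[i+1:], ev) and
                    prove(seq[:i] + [f[2]] + seq[i+1:], ev))
    lits = set(seq)
    # axiom: the sequent contains both p and ~p
    if any(('neg', f) in lits for f in lits if f[0] == 'p'):
        return True
    # monotone rule as lookup: from ~[]A and []B, done if eval(A -> B) holds
    for f in lits:
        if f[0] == 'neg' and f[1][0] == 'box':
            for g in lits:
                if g[0] == 'box' and ev((f[1][1], g[1])):
                    return True
    return False

def preprove(phi):
    """Pre-prove all short sequents A -> B over modal arguments of phi,
    in order of increasing nesting depth, then decide phi itself."""
    phi = nnf(phi)
    table = {}
    ev = lambda ab: table[ab]
    margs = sorted(args(phi), key=depth)
    pairs = sorted(((a, b) for a in margs for b in margs),
                   key=lambda ab: max(depth(ab[0]), depth(ab[1])))
    for a, b in pairs:                       # bottom-up memoisation
        table[(a, b)] = prove([nnf(('neg', a)), b], ev)
    return prove([phi], ev)
```

For instance, \verb|preprove| decides $\Box(p\wedge q)\rightarrow\Box p$ positively while refuting the converse implication; note that every individual call to \verb|prove| performs purely propositional reasoning.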
\begin{example}
The following two cases are instances of the generic optimisation:
\begin{enumerate}
\item (Classical modal logics / neighbourhood semantics) Let $\Gamma = \{\Box A = \Box B\}$,
$n=1$, $S_1=\{A=B\}$ and $\mathcal{S}=\emptyset$. Algorithm~\ref{alg:genOptPreprove}
may then be applied whenever the following congruence rule is sound in the considered
logic:\\
\begin{quote}
\begin{center}
(\textsc {\textbf{Opt}$_{Cong}$}) \inferrule{ A=B }
{ \Box A = \Box B }
\end{center}
\end{quote}
\vspace{10pt}
The corresponding modified version of this rule is as follows:\\
\begin{quote}
\begin{center}
(\textsc {\textbf{Opt}$^m_{Cong}$}) \inferrule{ {eval(A=B)=\top} }
{ \Box A = \Box B }
\end{center}
\end{quote}
\item (Monotone modal logics) By setting $\Gamma = \{\Box A \rightarrow \Box B\}$,
$n=1$, $S_1=\{A\rightarrow B\}$ and $\mathcal{S}=\emptyset$, we may instantiate
the generic algorithm to the case of modal logics which are monotone w.r.t. their
modal operator. So assume the following rule to be sound in the considered modal
logic:\\
\begin{quote}
\begin{center}
(\textsc {\textbf{Opt}$_{Mon}$}) \inferrule{ A\rightarrow B }
{ \Box A \rightarrow \Box B }
\end{center}
\end{quote}
\vspace{10pt}
The corresponding modified version of this rule is as follows:\\
\begin{quote}
\begin{center}
(\textsc {\textbf{Opt}$^m_{Mon}$}) \inferrule{ {eval(A\rightarrow B)=\top} }
{ \Box A \rightarrow \Box B }
\end{center}
\end{quote}
In the case that (\textbf{Opt}$_{Mon}$) is the only modal rule of the
considered logic (i.e.\ the case of plain monotone modal logic), all the
proof work connected to the modal operator is shifted to the
pre-proving process. In particular, matching against the modal rules
$\mathcal{RO}^m_{sc}$ becomes a mere lookup of the value of $eval$.
This means that every call to the sequent algorithm (Algorithm~\ref{alg:optSeq})
corresponds in complexity just to ordinary satisfiability solving for propositional logic.
Furthermore, Algorithm~\ref{alg:optSeq} will be called $|\phi|$ times. This
observation may be generalized:
\end{enumerate}
\label{ex:neighMon}
\end{example}
\begin{remark}
In the case that all modal rules of the considered logic are instances of
the generic rule (\textbf{Opt}) with $\mathcal{S}=\emptyset$ (as seen in Example~\ref{ex:neighMon}),
the optimisation not only reduces computing time, but
also allows us to effectively reduce the sequent calculus to a sat-solving
algorithm.
Furthermore, the optimized algorithm will always be at least as efficient as the
original one in this case (since every occurrence of a short sequent over $arg(\Gamma)$
that matches the current instantiation of the rule (\textbf{Opt}) would
have to be shown or refuted in the course of the original algorithm anyway).
\end{remark}
\section{Summary}
We have presented optimisations, based on memoisation and dynamic programming over
modal antecedents, of the conditional logic reasoning algorithms implemented in \COLOSS,
and have shown how the optimisation generalizes to further modal logics such as
classical and monotone modal logic. Benchmarks indicate substantial performance
gains over the unoptimized algorithm.
\bibliographystyle{myabbrv}
\bibliography{coalgml}
\end{document}