\documentclass{entcs} \usepackage{entcsmacro}
\usepackage{graphicx}
\usepackage{mathpartir}
\usepackage{amsmath,amssymb}
\newcommand{\hearts}{\heartsuit}
\input{header}
\sloppy
% The following is enclosed to allow easy detection of differences in
% ascii coding.
% Upper-case A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
% Lower-case a b c d e f g h i j k l m n o p q r s t u v w x y z
% Digits 0 1 2 3 4 5 6 7 8 9
% Exclamation ! Double quote " Hash (number) #
% Dollar $ Percent % Ampersand &
% Acute accent ' Left paren ( Right paren )
% Asterisk * Plus + Comma ,
% Minus - Point . Solidus /
% Colon : Semicolon ; Less than <
% Equals = Greater than > Question mark ?
% At @ Left bracket [ Backslash \
% Right bracket ] Circumflex ^ Underscore _
% Grave accent ` Left brace { Vertical bar |
% Right brace } Tilde ~

% A couple of exemplary definitions:

%\newcommand{\Nat}{{\mathbb N}}
%\newcommand{\Real}{{\mathbb R}}
\newcommand{\COLOSS}{\textrm{CoLoSS}}
\def\lastname{Hausmann and Schr\"oder}
\begin{document}
\begin{frontmatter}
  \title{Optimizing Conditional Logic Reasoning within \COLOSS}
  \author[DFKI]{Daniel Hausmann\thanksref{myemail}}
  \author[DFKI,UBremen]{Lutz Schr\"oder\thanksref{coemail}}
  \address[DFKI]{DFKI Bremen, SKS}
  \address[UBremen]{Department of Mathematics and Computer Science, Universit\"at Bremen, Germany}
% \thanks[ALL]{Work forms part of DFG-project \emph{Generic Algorithms and Complexity
%   Bounds in Coalgebraic Modal Logic} (SCHR 1118/5-1)}
  \thanks[myemail]{Email: \href{mailto:Daniel.Hausmann@dfki.de} {\texttt{\normalshape Daniel.Hausmann@dfki.de}}}
  \thanks[coemail]{Email: \href{mailto:Lutz.Schroeder@dfki.de} {\texttt{\normalshape Lutz.Schroeder@dfki.de}}}
\begin{abstract}
  The generic modal reasoner \COLOSS~covers a wide variety of logics,
  ranging from graded and probabilistic modal logic to coalition logic
  and conditional logics. We present a broadly applicable
  optimisation of the reasoning strategies employed in
  \COLOSS. Specifically, we discuss strategies of memoisation and
  dynamic programming that are based on the observation that short
  sequents play a central role in many of the logics under
  study. These optimisations seem to be particularly useful for the
  case of conditional logics, for some of which dynamic programming
  even improves the theoretical complexity of the algorithm. These
  strategies have been implemented in \COLOSS; we give a detailed
  comparison of the different heuristics, observing that in the
  targeted domain of conditional logics, a substantial speed-up can be
  achieved.
\end{abstract}
\begin{keyword}
  Coalgebraic modal logic, conditional logic, automated reasoning,
  optimisation, heuristics, memoisation, dynamic programming
\end{keyword}
\end{frontmatter}
\section{Introduction}\label{intro}

In recent decades, modal logic has seen a development towards semantic
heterogeneity, witnessed by an emergence of numerous logics that,
while still of manifestly modal character, are not amenable to
standard Kripke semantics. Examples include probabilistic modal
logic~\cite{FaginHalpern94}, coalition logic~\cite{Pauly02}, and
conditional logic~\cite{Chellas80}, to name just a few. The move away
from Kripke semantics, mirrored on the syntactical side by the failure
of normality, entails additional challenges for tableau and sequent
systems, as the correspondence between tableaus and models becomes
looser, and in particular demands created by modal formulas can no
longer be discharged by the creation of single successor nodes.

This problem is tackled on the theoretical side by introducing the
semantic framework of coalgebraic modal
logic~\cite{Pattinson03,Schroder05}, which covers all logics mentioned
above and many more. It turns out that coalgebraic modal logic does
allow the design of generic reasoning algorithms, including a generic
tableau method originating from~\cite{SchroderPattinson09}; this
generic method may in fact be separated from the semantics and
developed purely syntactically, as carried out
in~\cite{PattinsonSchroder08b,PattinsonSchroder09a}.

Generic tableau algorithms for coalgebraic modal logics, in particular
the algorithm described in~\cite{SchroderPattinson09}, have been
implemented in the reasoning tool
\COLOSS~\cite{CalinEA09}\footnote{available under
  \url{http://www.informatik.uni-bremen.de/cofi/CoLoSS/}}. As
indicated above, it is a necessary feature of the generic tableau
systems that they potentially generate multiple successor nodes for a
given modal demand, so that in addition to the typical depth problem,
proof search faces a rather noticeable problem of breadth. The search
for optimisation strategies to increase the efficiency of reasoning
thus becomes all the more urgent. Here we present one such strategy,
which is generally applicable, but particularly efficient in reducing
both depth and branching for the class of conditional logics. We
exploit a notable feature of this class, namely that many of the
relevant rules rely rather heavily on premises stating equivalence
between formulas; thus, conditional logics are a good candidate for
memoising strategies, applied judiciously to short sequents. We
describe the implementation of memoising and dynamic programming
strategies within \COLOSS, and discuss the outcome of various
comparative experiments.

\section{Generic Sequent Calculi for Coalgebraic Modal Logic}


Coalgebraic modal logic, originally introduced as a specification
language for coalgebras, seen as generic reactive
systems~\cite{Pattinson03}, has since evolved into a generic framework
for modal logic beyond Kripke semantics~\cite{CirsteaEA09}. The basic
idea is to encapsulate the branching type of the systems relevant for
the semantics of a particular modal logic, say probabilistic or
game-theoretic branching, in the choice of a set functor, the
signature functor (e.g.\ the distribution functor and the games
functor in the mentioned examples), and to capture the semantics of
modal operators in terms of so-called predicate liftings. For the
purposes of the present work, details of the semantics are less
relevant than proof-theoretic aspects, which we shall recall
presently. The range of logics covered by the coalgebraic approach is
extremely broad, including, besides standard Kripke and neighbourhood
semantics, e.g.\ graded modal logic~\cite{Fine72}, probabilistic modal
logic~\cite{FaginHalpern94}, coalition logic~\cite{Pauly02}, various
conditional logics equipped with selection function
semantics~\cite{Chellas80}, and many more.
Syntactically, logics are parametrised by the choice of a \emph{modal
similarity type} $\Lambda$, i.e.\ a set of modal operators with
associated finite arities. This choice determines the set of formulas
$\phi,\psi$ via the grammar
\begin{equation*}
\phi, \psi ::= p \mid
\phi \land \psi \mid \lnot \phi \mid \hearts(\phi_1, \dots, \phi_n)
\end{equation*}
where $\hearts$ is an $n$-ary operator in $\Lambda$. Examples are
$\Lambda=\{L_p\mid p\in[0,1]\cap\mathbb{Q}\}$, the unary operators
$L_p$ of probabilistic modal logic read `with probability at least
$p$'; $\Lambda=\{\gldiamond{k}\mid k\in\Nat\}$, the operators
$\gldiamond{k}$ of graded modal logic read `in more than $k$
successors'; $\Lambda=\{[C]\mid C\subseteq N\}$, the operators $[C]$
of coalition logic read `coalition $C$ (a subset of the set $N$ of
agents) can jointly enforce that'; and, for our main example here,
$\Lambda=\{\Rightarrow\}$, the \emph{binary} modal operator
$\Rightarrow$ of conditional logic read e.g.\ `if \dots then normally
\dots'.
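In a Haskell implementation such as \COLOSS, this syntax can be represented directly as an algebraic data type. The following minimal sketch (with illustrative constructor names, not those of the actual \COLOSS~sources) instantiates the grammar for conditional logic, whose similarity type contains just the binary conditional:

```haskell
-- Formulas over atoms, conjunction and negation, with the binary
-- conditional A => B as the single modal operator of the similarity type.
data Formula
  = Atom String
  | And Formula Formula
  | Neg Formula
  | Cond Formula Formula   -- A => B, read "if A then normally B"
  deriving (Eq, Ord, Show)

-- Example formula: (p0 => p1) /\ not (p0 => p2)
example :: Formula
example = And (Cond (Atom "p0") (Atom "p1"))
              (Neg (Cond (Atom "p0") (Atom "p2")))
```

Other similarity types are obtained by swapping the modal constructor, e.g.\ a unary `Box` for \textbf{K}.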
Coalgebraic modal logic was originally limited to so-called
\emph{rank-$1$ logics} axiomatised by formulas with nesting depth of
modal operators uniformly equal to $1$~\cite{Schroder05}. It has since
been extended to the more general non-iterative
logics~\cite{SchroderPattinson08d} and to some degree to iterative
logics, axiomatised by formulas with nested
modalities~\cite{SchroderPattinson08PHQ}. The examples considered here
all happen to be rank-$1$, so we focus on this case. In the rank-$1$
setting, it has been shown~\cite{Schroder05} that all logics can be
axiomatised by \emph{one-step rules} $\phi/\psi$, where $\phi$ is
purely propositional and $\psi$ is a clause over formulas of the form
$\hearts(a_1,\dots,a_n)$, where the $a_i$ are propositional
variables. In the context of a sequent calculus, this takes the
following form~\cite{PattinsonSchroder08b}.
\begin{definition}
If $S$ is a set (of formulas or variables) then $\Lambda(S)$ denotes
the set $\lbrace \hearts(s_1, \dots, s_n) \mid \hearts \in \Lambda
\mbox{ is $n$-ary, } s_1, \dots, s_n \in S \rbrace$ of formulas
comprising exactly one application of a modality to elements of
$S$. An \emph{$S$-sequent}, or just a \emph{sequent} in case
$S=\Form(\Lambda)$, is a finite set of elements of $S \cup \lbrace
\neg A \mid A \in S \rbrace$. Then, a \emph{one-step rule}
$\Gamma_1,\dots,\Gamma_n/\Gamma_0$ over a set $V$ of variables
consists of $V$-sequents $\Gamma_1,\dots,\Gamma_n$ and a
$\Lambda(V)$-sequent $\Gamma_0$.
\end{definition}
\noindent A given set of one-step rules then induces an instantiation
of the \emph{generic sequent calculus}~\cite{PattinsonSchroder08b}
which is given by a set of rules $\mathcal{R}_{sc}$ consisting of the
finishing and the branching rules $\mathcal{R}^b_{sc}$ (i.e. rules
with no premise or more than one premise), the linear rules
$\mathcal{R}^l_{sc}$ (i.e. rules with exactly one premise) and the
modal rules $\mathcal R^m_{sc}$, i.e.\ the given one-step rules. The
finishing and the branching rules are presented in
Figure~\ref{fig:branching} (where $\top=\neg\bot$ and $p$ is an atom),
the linear rules are shown in Figure~\ref{fig:linear}. So far, all
these rules are purely propositional. As an example for a set of modal
one-step rules, consider the modal rules $\mathcal R^m_{sc}$ of the
standard modal logic \textbf{K} as given by Figure~\ref{fig:modalK}.
\begin{figure}[!h]
\begin{center}
\begin{tabular}{| c c c |}
\hline
& & \\[-5pt]
(\textsc {$\neg$F}) \inferrule{ }{\Gamma, \neg\bot} &
(\textsc {Ax}) \inferrule{ }{\Gamma, p, \neg p} &
(\textsc {$\wedge$}) \inferrule{\Gamma, A \\ \Gamma, B}{\Gamma, A\wedge B} \\[-5pt]
& & \\
\hline
\end{tabular}
\end{center}
\caption{The finishing and the branching sequent rules $\mathcal{R}^b_{sc}$}
\label{fig:branching}
\end{figure}
\begin{figure}[!h]
\begin{center}
\begin{tabular}{| c c |}
\hline
& \\[-5pt]
(\textsc {$\neg\neg$})\inferrule{\Gamma, A}{\Gamma, \neg\neg A} &
(\textsc {$\neg\wedge$}) \inferrule{\Gamma, \neg A, \neg B}{\Gamma, \neg(A\wedge B)} \\[-5pt]
& \\
\hline
\end{tabular}
\end{center}
\caption{The linear sequent rules $\mathcal{R}^l_{sc}$}
\label{fig:linear}
\end{figure}
\begin{figure}[!h]
\begin{center}
\begin{tabular}{| c |}
\hline
\\[-5pt]
(\textsc {\textbf{K}})\inferrule{\neg A_1, \ldots , \neg A_n, A_0}
{\Gamma, \neg \Box A_1,\ldots,\neg \Box A_n, \Box A_0 } \\[-5pt]
\\
\hline
\end{tabular}
\end{center}
\caption{The modal rule of \textbf{K}}
\label{fig:modalK}
\end{figure}
This calculus has been shown to be complete under suitable coherence
assumptions on the rule set and the coalgebraic
semantics~\cite{PattinsonSchroder08b}, provided that the set of rules
\emph{absorbs cut and contraction} in a precise sense. We say that a
formula is \emph{provable} if it can be derived in the relevant
instance of the sequent calculus; the algorithms presented below are
concerned with the \emph{provability problem}, i.e.\ to decide whether
a given sequent is provable. This is made possible by the fact that
the calculus does not include a cut rule, and hence enables automatic
proof search. For rank-1 logics, proof search can be performed in
$\mathit{PSPACE}$ under suitable assumptions on the representation of
the rule set~\cite{SchroderPattinson09,PattinsonSchroder08b}. The
corresponding algorithm has been implemented in the system
CoLoSS~\cite{CalinEA09} which remains under continuous
development. Particular attention is being paid to finding heuristic
optimisations to enable practically efficient proof search, as
described here.
Our running example used in the presentation of our optimisation
strategies below is conditional logic as mentioned above. The most
basic conditional logic is $\mathit{CK}$~\cite{Chellas80} (we shall
consider a slightly extended logic below), which is characterised by
assuming normality of the right-hand argument of the non-monotonic
conditional $\Rightarrow$, but only replacement of equivalents in the
left-hand arguments. Absorption of cut and contraction requires a
unified rule set consisting of the rules
\begin{equation*}
\infrule{A_0=A_1;\;\dots;\; A_0=A_n\quad \neg B_1,\dots,\neg B_n,B_0}
{\neg(A_1\Rightarrow B_1),\dots,\neg(A_n\Rightarrow B_n),(A_0\Rightarrow B_0)}
\end{equation*}
with $A=C$ abbreviating the pair of sequents $\neg A,C$ and $\neg
C,A$. This illustrates two important points that play a role in the
optimisation strategies described below. First, the rule has a large
branching degree both existentially and universally -- we have to
check that there \emph{exists} a rule proving some sequent such that
\emph{all} its premises are provable, and hence we have to guess one
of exponentially many subsequents to match the rule and then attempt
to prove linearly many premises; compare this to linearly many rules
and a constant number of premises per rule (namely, $1$) in the case
of $K$ shown above. Secondly, many of the premises are short
sequents. This will be exploited in our memoisation strategy. We note
that \emph{labelled} sequent calculi for conditional logics have been
designed previously~\cite{OlivettiEA07} and have been implemented in
the CondLean prover~\cite{OlivettiPozzato03}; contrastingly, our
calculus is unlabelled, and turns out to be conceptually simpler. The
comparison of the relative efficiency of labelled and unlabelled
calculi remains an open issue for the time being.
\section{The Algorithm}
Based on the framework introduced above, we now devise a generic
algorithm to decide the provability of formulae. By instantiating the
generic algorithm to a specific modal logic (simply by using the
defining modal rule of this logic), we obtain an algorithm for
deciding provability of formulae of this specific modal logic.
\noindent
Algorithm~\ref{alg:seq} makes use of the sequent rules in the following manner:
In order to show the \emph{provability} of a formula $\phi$, the
algorithm starts with the sequent $\{\phi\}$ and tries to apply all
of the sequent rules $r\in \mathcal{R}_{sc}$ to it.\footnote{The actual Haskell
implementation of the algorithm first tries to apply the linear rules
from $\mathcal{R}^l_{sc}$ and only afterwards tries to apply the
rules from $\mathcal{R}^b_{sc}$ and $\mathcal{R}^m_{sc}$.}
% explain what 'applying' means
The result of one such step of rule application is a set
of premises $P$. The proving function is then recursively invoked on
each sequent of every premise in this set. If there is any premise
whose sequents are all provable, the application of the corresponding rule
is justified and hence the algorithm has succeeded. Below, we refer to
a sequent as \emph{open} if it has not yet been checked for
provability, and otherwise as \emph{treated}.
\begin{algorithm}[h]
\begin{alg}
\begin{upshape}
Step 1: Take the input formula $\phi$, construct the initial set
of premises (containing just one premise with just one sequent)
$S = \{\{\phi\}\}$ from it.\\
Step 2: Try to apply all rules $r\in R_{sc}$ to any open sequent
$\Gamma\in P$ from any premise $P\in S$. Let $\Lambda$ denote the set
of the resulting premises.\\
Step 3: If $\Lambda$ contains the empty premise, mark the open
sequent $\Gamma$ as proved (and if $P$ contains no more open
sequents, mark $P$ as proved). If $\Lambda$ contains no premise at
all, mark the open sequent $\Gamma$ and the premise $P$ as refuted.\\
Step 4: Else, add all premises from $\Lambda$ to the set of
premises (i.e. $S=S\cup\Lambda$). If there are any remaining open
sequents, continue with Step 2.\\
Step 5: If all premises resulting from the application of all rules
from $R_{sc}$ to $\phi$ are marked as proved, finish with result \verb|True|
else finish with result \verb|False|.
\end{upshape}
\label{alg:seq}
\end{alg}
\end{algorithm}
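To illustrate how Algorithm~\ref{alg:seq} operates, the following self-contained Haskell sketch performs the same search for the concrete instance \textbf{K}, using the rules of Figures~\ref{fig:branching}--\ref{fig:modalK}: a sequent is provable iff some rule instance applies to it all of whose premises are provable. This is a simplified illustration under assumed names, not the actual \COLOSS~code:

```haskell
import Data.List (inits, tails)

-- Formulas of the modal logic K (illustrative constructor names).
data Formula = Bot | Atom String | Neg Formula
             | And Formula Formula | Box Formula
             deriving (Eq, Show)

type Sequent = [Formula]

-- All ways of picking one formula out of a sequent.
picks :: Sequent -> [(Formula, Sequent)]
picks xs = [ (x, before ++ after)
           | (before, x : after) <- zip (inits xs) (tails xs) ]

-- A sequent is provable iff some rule has only provable premises.
provable :: Sequent -> Bool
provable g =
     Neg Bot `elem` g                                    -- (notF)
  || or [ Neg (Atom p) `elem` g | Atom p <- g ]          -- (Ax)
  || or [ provable (a : rest)                            -- (notnot)
        | (Neg (Neg a), rest) <- picks g ]
  || or [ provable (Neg a : Neg b : rest)                -- (notAnd)
        | (Neg (And a b), rest) <- picks g ]
  || or [ provable (a : rest) && provable (b : rest)     -- (And): two premises
        | (And a b, rest) <- picks g ]
  || or [ provable (a0 : [ Neg a | Neg (Box a) <- g ])   -- (K): one modal premise
        | Box a0 <- g ]
```

Note the existential/universal pattern of Step 2 and Step 3: the outer `or` searches for \emph{some} applicable rule instance, while rules with several premises (here only ($\wedge$)) require \emph{all} premises to be provable.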
\begin{proposition}
\begin{upshape}
Let $\mathcal{R}_{sc}$ be strictly one-step complete, closed under contraction,
and PSPACE-tractable. Then Algorithm~\ref{alg:seq} is sound and complete w.r.t. provability
and it is in PSPACE.
\end{upshape}
\end{proposition}
\begin{proof}
We just note that Algorithm~\ref{alg:seq} is an equivalent implementation of the algorithm proposed
in~\cite{SchroderPattinson09}. For more details, refer to Theorem 6.13 of~\cite{SchroderPattinson09}.
\subsection{The conditional logic instance}
The genericity of the introduced sequent calculus allows us to easily
create instantiations of Algorithm~\ref{alg:seq} for a large variety
of modal logics: By setting $\mathcal R^m_{sc}$ for instance to the
rule shown in Figure~\ref{fig:modalCKCEM} (where $A_a = A_b$ is
shorthand for $A_a\rightarrow A_b\wedge A_b\rightarrow A_a$), we
obtain an algorithm for deciding provability (and satisfiability) of
conditional logic. We restrict ourselves to the exemplary conditional
logic \textbf{CKCEM} for the remainder of this section; slightly
adapted versions of the optimisation will work for other conditional
logics. \textbf{CKCEM} is characterised by the additional axiom of
\emph{conditional excluded middle} $(A\Rightarrow
B)\lor(A\Rightarrow\neg B)$, which, in order to absorb cut and
contraction, is integrated into the rule for \textbf{CK} as follows.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{| c |}
\hline
\\[-5pt]
(\textsc {\textbf{CKCEM}})\inferrule{A_0 = \ldots = A_n \\ B_0,\ldots, B_j,\neg B_{j+1},\ldots,\neg B_n}
{\Gamma, (A_0\Rightarrow B_0),\ldots,(A_j\Rightarrow B_j),
\neg(A_{j+1}\Rightarrow B_{j+1}),\ldots,\neg(A_n\Rightarrow B_n) } \\[-5pt]
\\
\hline
\end{tabular}
\end{center}
\caption{The modal rule $\textbf{CKCEM}$ of conditional logic}
\label{fig:modalCKCEM}
\end{figure}
It has been shown in~\cite{PattinsonSchroder09a} that the complexity
of \textbf{CKCEM} is $\mathit{coNP}$, using a dynamic programming
approach in the spirit of~\cite{Vardi89}; in fact, this was the
original motivation for exploring the optimisation strategies pursued
here.
In the following, we use the notions of \emph{conditional antecedent} and
\emph{conditional consequent} to refer to the first and the second argument
of the modal operator of conditional logic, respectively.
In order to decide whether there is an
instance of the modal rule of conditional logic which can be applied to
the current sequent, it is necessary to create a preliminary premise
for each possible combination of equalities between the antecedents of the
modal operators in this sequent. This results in $2^n$ new premises for
a sequent with $n$ top-level modal operators.
\begin{example}
Consider the sequent $\Gamma=\{(A_0\Rightarrow B_0),(A_1\Rightarrow B_1),
(A_2\Rightarrow B_2)\}$. By instantiating the modal rule, several
new premises
such as $\{\{A_0=A_1=A_2\},\{B_0,B_1,B_2\}\}$ or $\{\{A_0=A_2\},\{B_0,B_2\}\}$
may be obtained. Only after the equalities in the first sequent of each
of these premises have been shown (or refuted), it is clear whether the
current instantiation of the modal rule is actually leading to a possibly
provable premise.
\label{ex:cond}
\end{example}
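The blow-up can be made concrete by a small Haskell sketch (illustrative names; conditionals given as antecedent/consequent pairs): each non-empty subset of the top-level conditionals yields one preliminary premise, recorded here as the chain of antecedent equalities that still has to be checked, paired with the sequent of the corresponding consequents:

```haskell
import Data.List (subsequences)

-- Naive modal step: each non-empty subset of the top-level conditionals
-- (A_i => B_i) yields one preliminary premise, consisting of the pairwise
-- antecedent equations A_0 = A_1 = ... (still to be proved) together with
-- the sequent of the selected consequents.
naivePremises :: [(a, b)] -> [([(a, a)], [b])]
naivePremises conds =
  [ (zip as (tail as), map snd sub)
  | sub <- subsequences conds
  , not (null sub)
  , let as = map fst sub ]
```

For the three conditionals of Example~\ref{ex:cond} this produces $2^3-1=7$ preliminary premises, among them the instantiations $\{\{A_0=A_1=A_2\},\{B_0,B_1,B_2\}\}$ and $\{\{A_0=A_2\},\{B_0,B_2\}\}$ from the example.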
A more intelligent approach is to first
partition the set of all antecedents of the top-level modal operators in the
current sequent into equivalence classes with respect to logical equality.
This partition not only allows for a reduction of the number of newly
created open premises in each modal step,
but it also allows us to separate the two parts of the conditional modal step
(i.e. the first part, showing the necessary equalities, and the second
part, showing that the resulting sequent -- consisting only of the appropriate
conditional consequents -- is actually provable itself).
\begin{example}
Consider again the sequent from Example~\ref{ex:cond}. Using the exemplary
knowledge that $A_0=A_1$, $A_1\neq A_2$ and $A_0\neq A_2$, it is immediate
that there are just two reasonable instantiations of the modal rule, leading
to the two premises $\{\{B_0,B_1\}\}$ and $\{\{B_2\}\}$. For the
first of these two premises, note that it is not necessary to show the
equivalence of $A_0$ and $A_1$ again.
\end{example}
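The partitioning step can be sketched in Haskell as follows: given a predicate deciding equivalence of antecedents, the conditionals of a sequent are grouped into classes with equal antecedents, and each class contributes one premise consisting of its consequents. This is an illustrative sketch, not the actual \COLOSS~implementation:

```haskell
-- Group conditionals (antecedent, consequent) into equivalence classes of
-- antecedents, using an equivalence test "eqv"; each class yields one
-- premise, namely the list of its consequents.
groupByEquiv :: (a -> a -> Bool) -> [(a, b)] -> [[b]]
groupByEquiv _   []              = []
groupByEquiv eqv ((a, b) : rest) =
  (b : [ c | (a', c) <- rest, eqv a a' ])
    : groupByEquiv eqv [ p | p@(a', _) <- rest, not (eqv a a') ]
```

With the exemplary knowledge $A_0=A_1$, $A_1\neq A_2$, $A_0\neq A_2$, the three conditionals of Example~\ref{ex:cond} yield exactly the two premises $\{B_0,B_1\}$ and $\{B_2\}$.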
\begin{remark}
In the case of conditional logic, observe the following: Since the
modal antecedents that appear in a formula are not changed by any
rule of the sequent calculus, it is possible to extract all possibly
relevant antecedents of a formula even \emph{before} the actual sequent
calculus is applied. This allows us to first compute the equivalence
classes of all the relevant antecedents and then feed this knowledge into
the actual sequent calculus.
\end{remark}
\section{The Optimisation}
\begin{definition}
A conditional antecedent of \emph{modal nesting depth} $i$ is a
conditional antecedent which contains at least one antecedent of
modal nesting depth $i-1$ and which contains no antecedent
of modal nesting depth greater than $i-1$. A
conditional antecedent of nesting depth 0 is an antecedent
that does not contain any further modal operators.
Let $p_i$ denote the set of all conditional antecedents of modal
nesting depth $i$. Further, let $prems(n)$ denote the set of all
conditional antecedents of modal nesting depth at most $n$ (i.e.
$prems(n)=\bigcup_{j=0}^{n} p_j$).
Finally, let $depth(\phi)$ denote the maximal modal nesting in
the formula $\phi$.
\end{definition}
\begin{example}
In the formula $\phi=((p_0\Rightarrow p_1) \wedge ((p_2\Rightarrow p_3)\Rightarrow p_4))
\Rightarrow (p_5\Rightarrow p_6)$,
the premise $((p_0\Rightarrow p_1) \wedge ((p_2\Rightarrow p_3)\Rightarrow p_4))$ has a
nesting depth of 2 (and not 1), $(p_0\Rightarrow p_1)$ and $(p_2\Rightarrow p_3)$ both
have a nesting depth of 1, and finally $p_0$, $p_2$ and $p_4$ all have a nesting depth of 0.
Furthermore, $prems(1)=\{p_0,p_2,p_5,(p_2\Rightarrow p_3)\}$
and $depth(\phi)=3$.
\end{example}
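Both notions admit a direct recursive computation. The following Haskell sketch (illustrative names, with the conditional as the only modality) is one possible realisation of $depth$ and $prems$:

```haskell
import Data.List (nub)

data Formula = Atom String | And Formula Formula | Neg Formula
             | Cond Formula Formula        -- A => B
             deriving (Eq, Show)

-- maximal modal nesting depth of a formula
depth :: Formula -> Int
depth (Atom _)   = 0
depth (Neg a)    = depth a
depth (And a b)  = max (depth a) (depth b)
depth (Cond a b) = 1 + max (depth a) (depth b)

-- all conditional antecedents occurring (also nested) in a formula
antecedents :: Formula -> [Formula]
antecedents (Atom _)   = []
antecedents (Neg a)    = antecedents a
antecedents (And a b)  = antecedents a ++ antecedents b
antecedents (Cond a b) = a : antecedents a ++ antecedents b

-- prems n: antecedents of modal nesting depth at most n
prems :: Int -> Formula -> [Formula]
prems n phi = nub [ a | a <- antecedents phi, depth a <= n ]
```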
\begin{definition}
A set $\mathcal{K}$ of treated sequents together with a function
$eval:\mathcal{K}\rightarrow \{\top,\bot\}$ is called \emph{knowledge}.
\end{definition}
We may now construct an optimised algorithm which allows us
to decide provability (and satisfiability) of formulae more efficiently
in some cases. The optimised algorithm is built from two functions
(namely the actual proving function and the so-called
\emph{pre-proving} function):
\begin{algorithm}[h]
\begin{alg}
\begin{upshape}
Step 1: Take the input formula $\phi$ and the knowledge $(\mathcal{K},eval)$,
construct the initial set of premises (containing just one premise with just
one sequent) $S = \{\{\phi\}\}$ from it.\\
Step 2: Try to apply all rules $r\in R_{sc}$ (supplying $\mathcal{RO}^m_{sc}$
with knowledge $(\mathcal{K},eval)$) to any open sequent
$\Gamma\in P$ from any premise $P\in S$. Let $\Lambda$ denote the set
of the resulting premises.\\
Step 3: If $\Lambda$ contains the empty premise, mark the open
sequent $\Gamma$ as proved (and if $P$ contains no more open
sequents, mark $P$ as proved). If $\Lambda$ contains no premise at
all, mark the open sequent $\Gamma$ and the premise $P$ as refuted.\\
Step 4: Else, add all premises from $\Lambda$ to the set of
premises (i.e. $S=S\cup\Lambda$). If there are any remaining open
sequents, continue with Step 2.\\
Step 5: If all premises resulting from the application of all rules
from $R_{sc}$ to $\phi$ are marked as proved, finish with result \verb|True|
else finish with result \verb|False|.
\end{upshape}
\label{alg:optSeq}
\end{alg}
\end{algorithm}
Algorithm~\ref{alg:optSeq} is very similar to Algorithm~\ref{alg:seq};
however, it uses a modified
set of rules $\mathcal{RO}_{sc}$ and it also allows for the
input of some knowledge $(\mathcal{K},eval)$ in the form of a set of sequents
together with an evaluation function. This knowledge is passed on to
the modified modal rule of conditional logic, which makes appropriate use of it.
$\mathcal{RO}_{sc}$ is obtained from $\mathcal{R}_{sc}$ by replacing
the modal rule from Figure~\ref{fig:modalCKCEM} with the modified modal
rule from Figure~\ref{fig:modalCKCEMm}.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{| c |}
\hline
\\[-5pt]
(\textsc {\textbf{CKCEM}$^m$})\inferrule{\bigwedge_{i,j\in\{0,\ldots,n\}}{eval(A_i=A_j)=\top}
\\ B_0,\ldots, B_j,\neg B_{j+1},\ldots,\neg B_n}
{\Gamma, (A_0\Rightarrow B_0),\ldots,(A_j\Rightarrow B_j),
\neg(A_{j+1}\Rightarrow B_{j+1}),\ldots,\neg(A_n\Rightarrow B_n) } \\[-5pt]
\\
\hline
\end{tabular}
\end{center}
\caption{The modified modal rule \textbf{CKCEM}$^m$ of conditional logic}
\label{fig:modalCKCEMm}
\end{figure}
\begin{algorithm}[h]
\begin{alg}
\begin{upshape}
Step 1: Take a formula $\phi$ as input. Set $i=0$, $\mathcal{K}_0=\emptyset$, $eval_0=\emptyset$.\\
Step 2: Generate the set $prems_i$ of all conditional antecedents of $\phi$
of nesting depth at most $i$. If $i<depth(\phi)$, continue
with Step 3; else set $\mathcal{K}=\mathcal{K}_{i}, eval=eval_{i}$ and continue with Step 4.\\
Step 3: Let $eq_i$ denote the set of all equalities $A_a = A_b$ for different
formulae $A_a,A_b\in prems_i$. Compute
Algorithm~\ref{alg:optSeq} ($\psi$, $(\mathcal{K}_i,eval_i)$) for all $\psi\in eq_i$.
For each equality $\psi\in eq_i$,
set $eval_{i+1}(\psi)=\top$ if the result of Algorithm~\ref{alg:optSeq} was \verb+True+
and $eval_{i+1}(\psi)=\bot$ otherwise. Set $\mathcal{K}_{i+1} = eq_i$ and $i = i + 1$.
Continue with Step 2.\\
Step 4: Call Algorithm~\ref{alg:optSeq} ($\phi$, $(\mathcal{K},eval)$) and return its result
as result.
\label{alg:preprove}
\end{upshape}
\end{alg}
\end{algorithm}
Algorithm~\ref{alg:preprove} first computes the knowledge $(\mathcal{K},eval)$ about specific
subformulae of $\phi$ and then checks $\phi$ for provability
(using this knowledge): in order to show the equivalence of
two conditional antecedents of nesting depth at most $i$, we assume
that the equalities $\mathcal{K}_{i}$ between modal antecedents of nesting depth less
than $i$ have already been computed and that the results are stored in $eval_i$; hence
two antecedents are equal if their equivalence is provable by
Algorithm~\ref{alg:optSeq} using only the knowledge $(\mathcal{K}_{i},eval_i)$.
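The level-by-level computation of this knowledge can be sketched as follows (a hypothetical outline with formulae kept opaque; \texttt{proveEq} stands in for a call of the optimized sequent algorithm on an equality, and \texttt{premsAt} for the extraction of antecedents of bounded nesting depth):

```haskell
import qualified Data.Map as Map

-- eval, restricted to equalities: a map from pairs of antecedents to
-- the proved truth-value of their equivalence.
type Knowledge a = Map.Map (a, a) Bool

-- Layered pre-proving: level i proves the equalities between all
-- antecedents of depth <= i, using only the knowledge of level i-1.
preProve :: Ord a
         => ((a, a) -> Knowledge a -> Bool)  -- stands in for the sequent algorithm
         -> (Int -> [a])                     -- antecedents of depth <= i
         -> Int                              -- depth of the input formula
         -> Knowledge a
preProve proveEq premsAt d = go 0 Map.empty
  where
    go i know
      | i >= d    = know
      | otherwise = go (i + 1) know'
      where
        know' = Map.fromList
          [ ((a, b), proveEq (a, b) know)
          | let prems = premsAt i
          , a <- prems, b <- prems, a /= b ]
```

Note that each level consults only the knowledge of the previous level, mirroring the stratification by nesting depth in the algorithm above.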
\subsection{Treatment of needed equivalences only}
Since Algorithm~\ref{alg:preprove} tries to show the logical equivalence of every pair
of conditional antecedents that appears in $\phi$, it will have a worse completion
time than Algorithm~\ref{alg:seq} on many formulae:
\begin{example}
Consider the formula
\begin{quote}
$\phi=(((p_0\Rightarrow p_1)\Rightarrow p_2)\Rightarrow p_4)\vee
(((p_5\Rightarrow p_6)\Rightarrow p_7)\Rightarrow p_8)$.
\end{quote}
Algorithm~\ref{alg:preprove} will not only
try to show the necessary equivalences between the pairs
$(((p_0\Rightarrow p_1)\Rightarrow p_2), ((p_5\Rightarrow p_6)\Rightarrow p_7))$,
$((p_0\Rightarrow p_1), (p_5\Rightarrow p_6))$ and $(p_0,p_5)$, but it will
also try to show equivalences between any two conditional antecedents (e.g. $(p_0,
(p_5\Rightarrow p_6))$), even though these equivalences will not be needed
during the execution of Algorithm~\ref{alg:optSeq}.
\end{example}
Based on this observation, it is possible to assign a category to each pair of
antecedents that appears in $\phi$:
\begin{definition}
The \emph{paths (in $\phi$)} to a conditional antecedent $\psi$ record the sequences
of modal arguments through which $\psi$ is reached when starting from the root
of $\phi$:
the path to a top-level antecedent is just $\{1\}$. If $\psi$ does not appear as
an antecedent at the topmost level of $\phi$, the set of paths to it consists of $1$
prepended to every path to $\psi$ in any top-level conditional antecedent of $\phi$, together with $0$
prepended to every path to $\psi$ in any top-level conditional consequent of $\phi$.
\end{definition}
\begin{example}
Consider the formula $\phi=(p_0\Rightarrow p_2)\Rightarrow ((p_0\Rightarrow p_1)\Rightarrow p_3)$. Then the path to
$(p_0\Rightarrow p_2)$ is $\{1\}$, whereas the path to $(p_0\Rightarrow p_1)$ is $\{01\}$. The paths
to $p_0$ are $\{11,011\}$.
\end{example}
\begin{definition}
Let $A$ and $B$ be two conditional antecedents. $A$ and $B$ are called \emph{connected (in $\phi$)} if
at least one path to $A$ is also a path to $B$ (and hence vice-versa). If no path to $A$ is a path to $B$,
the two antecedents are said to be \emph{independent}.
\end{definition}
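Paths and connectedness are directly computable; the following sketch works over a minimal formula type (hypothetical: only the conditional connective is treated, boolean structure is omitted):

```haskell
import Data.List (intersect)

-- Minimal formula type for illustration: atoms and conditionals only.
data Form = Atom Int | Cond Form Form
  deriving (Eq, Show)

-- Paths to psi, written as strings over {'0','1'}: descending into a
-- top-level antecedent prepends '1', into a consequent '0'.
pathsTo :: Form -> Form -> [String]
pathsTo psi (Cond a c) =
     [ "1" | a == psi ]
  ++ map ('1':) (pathsTo psi a)
  ++ map ('0':) (pathsTo psi c)
pathsTo _ _ = []

-- Two antecedents are connected iff they share a path.
connected :: Form -> Form -> Form -> Bool
connected a b phi = not (null (pathsTo a phi `intersect` pathsTo b phi))
```

On the formula of the example above, this yields the paths $\{1\}$, $\{01\}$ and $\{11,011\}$ for $(p_0\Rightarrow p_2)$, $(p_0\Rightarrow p_1)$ and $p_0$, respectively.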
Since two independent conditional antecedents will never appear in the scope of the
same application of the modal rule, it is never necessary to show (or
refute) the logical equivalence of independent conditional antecedents. Hence it suffices to focus
our attention on the connected conditional antecedents. It is then obvious that
every possibly needed equivalence and its truth-value are already included in $(\mathcal{K},eval)$
when the main proof search is started. On the other hand, we have to be aware that we
may still show equivalences of antecedents which are in fact not needed:
antecedents may be connected and yet never appear together in an application
of the modal rule -- this is the case whenever two preceding antecedents are not logically equivalent.
As a result of these considerations, we devise Algorithm~\ref{alg:optPreprove},
an improved version of Algorithm~\ref{alg:preprove}. The only difference is
that before proving any equivalence, Algorithm~\ref{alg:optPreprove} checks
whether the current pair of conditional antecedents is actually connected;
only then does it treat the equivalence. Hence independent pairs of antecedents
remain untreated.
\begin{algorithm}[h]
\begin{alg}
\begin{upshape}
Step 1: Take a formula $\phi$ as input. Set $i=0$, $\mathcal{K}_0=\emptyset$, $eval_0=\emptyset$.\\
Step 2: Generate the set $prems_i$ of all conditional antecedents of $\phi$
of nesting depth at most $i$. If $i<depth(\phi)$, continue
with Step 3; else set $\mathcal{K}=\mathcal{K}_{i}, eval=eval_{i}$ and continue with Step 4.\\
Step 3: Let $eq_i$ denote the set of all equalities $A_a = A_b$ for distinct, connected
pairs of formulae $A_a,A_b\in prems_i$. Compute
Algorithm~\ref{alg:optSeq} ($\psi$, $(\mathcal{K}_i,eval_i)$) for all $\psi\in eq_i$.
For each equality $\psi\in eq_i$,
set $eval_{i+1}(\psi)=\top$ if the result of Algorithm~\ref{alg:optSeq} was \verb+True+
and $eval_{i+1}(\psi)=\bot$ otherwise. Set $\mathcal{K}_{i+1} = eq_i$ and $i = i + 1$.
Continue with Step 2.\\
Step 4: Call Algorithm~\ref{alg:optSeq} ($\phi$, $(\mathcal{K},eval)$) and return its result
as result.
\label{alg:optPreprove}
\end{upshape}
\end{alg}
\end{algorithm}
\section{Implementation}
The proposed optimized algorithms have been implemented (using the programming
language Haskell) as part of the generic coalgebraic modal logic satisfiability
solver (\COLOSS\footnote{As already mentioned above, more information about \COLOSS,
a web-interface to the tool and
the tested benchmarking formulae can be found at \url{http://www.informatik.uni-bremen.de/cofi/CoLoSS/}}).
\COLOSS~provides the general coalgebraic framework in which the generic
sequent calculus is embedded. This generic sequent calculus is easily instantiated
to specific modal logics, one particular example being conditional logic.
The matching function for conditional logic in \COLOSS~was hence adapted to realize
the different optimisations (closely following Algorithms~\ref{alg:seq},~\ref{alg:preprove}
and~\ref{alg:optPreprove}), so that \COLOSS~now provides an efficient algorithm for
deciding the provability (and satisfiability) of conditional logic formulae.
\subsection{Comparing the proposed algorithms}
In order to show the relevance of the proposed optimisations, we devise several classes
of conditional formulae. Each class has a characteristic general shape, defining its
complexity w.r.t. different parts of the algorithms and thus exhibiting specific
advantages or disadvantages of each algorithm:
\begin{itemize}
\item The formula \verb|bloat(|$i$\verb|)| is a full binary tree of depth $i$ (containing $2^i$ pairwise logically
inequivalent atoms and $2^i-1$ modal antecedents):
\begin{quote}
\verb|bloat(|$i$\verb|)| = $($\verb|bloat(|$i-1$\verb|)|$)\Rightarrow($\verb|bloat(|$i-1$\verb|)|)\\
\verb|bloat(|$0$\verb|)| = $p_{rand}$
\end{quote}
Formulae from this class should show the problematic performance of Algorithm~\ref{alg:preprove} whenever
a formula contains many modal antecedents which appear at different depths. A comparison of the different
algorithms w.r.t. formulae \verb|bloat(|$i$\verb|)| is depicted in Figure~\ref{fig:benchBloat}.
Since Algorithm~\ref{alg:preprove} does not check whether pairs of modal antecedents are independent or connected,
it performs considerably worse than Algorithm~\ref{alg:optPreprove} which only attempts to prove the logical
equivalence of formulae which are not independent. Algorithm~\ref{alg:seq} has the best performance in this
extreme case, as it only has to consider pairs of modal antecedents which actually appear during the course
of a proof. This is the price to pay for the optimisation by dynamic programming.
\end{itemize}
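For illustration, this family can be generated as sketched below (a hypothetical helper; consecutively numbered atoms stand in for the random atoms $p_{rand}$, which matches the requirement that the $2^i$ atoms be pairwise inequivalent):

```haskell
-- Minimal conditional-formula type for illustration.
data Form = Atom Int | Cond Form Form
  deriving (Eq, Show)

-- bloat(i): a full binary tree of conditionals of depth i, with
-- 2^i pairwise distinct atoms at the leaves.  A counter threads
-- through the construction to keep the atoms fresh.
bloat :: Int -> Form
bloat n = fst (go n 0)
  where
    go 0 k = (Atom k, k + 1)
    go d k = let (l, k')  = go (d - 1) k
                 (r, k'') = go (d - 1) k'
             in (Cond l r, k'')

-- Number of atoms (leaves) of a formula.
leaves :: Form -> Int
leaves (Atom _)   = 1
leaves (Cond a c) = leaves a + leaves c
```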
\begin{figure}[!h]
\begin{center}
\begin{tabular}{| l | r | r | r |}
\hline
$i$ & Algorithm~\ref{alg:seq} & Algorithm~\ref{alg:preprove} & Algorithm~\ref{alg:optPreprove} \\
\hline
1 & 0.008s & 0.009s & 0.010s\\
2 & 0.008s & 0.011s & 0.010s\\
3 & 0.009s & 0.028s & 0.014s\\
4 & 0.010s & 0.233s & 0.022s\\
5 & 0.011s & 2.840s & 0.087s\\
6 & 0.013s & 33.476s & 0.590s\\
7 & 0.019s & 402.239s & 4.989s\\
\hline
\end{tabular}
\end{center}
\caption{Results for bloat($i$)}
\label{fig:benchBloat}
\end{figure}
\begin{itemize}
\item The formula \verb|conjunct(|$i$\verb|)| is just an $i$-fold conjunction of a specific formula $A$:
\begin{quote}
\verb|conjunct(|$i$\verb|)| = $A_1\wedge\ldots\wedge A_i$\\
$A=(((p_1\vee p_0)\Rightarrow p_2)\vee((p_0\vee p_1)\Rightarrow p_2))\vee\neg(((p_0\vee p_1)\Rightarrow p_2)\vee((p_1\vee p_0)\Rightarrow p_2))$
\end{quote}
This class consists of formulae which contain logically (but not syntactically) equivalent antecedents.
As $i$ increases, so does the number of occurrences of identical modal antecedents in different positions
of the considered formula. A comparison of the different algorithms w.r.t.\ formulae \verb|conjunct(|$i$\verb|)| is depicted in
Figure~\ref{fig:benchConjunct}. It is obvious that the optimized algorithms perform considerably better than the unoptimized
Algorithm~\ref{alg:seq}. The reason for this is that Algorithm~\ref{alg:seq} repeatedly proves equivalences between the same
pairs of modal antecedents. The optimized algorithms, on the other hand, are equipped with knowledge about the modal antecedents,
so that these equivalences have to be proved only once. However, even the runtime of the optimized algorithms is exponential in $i$,
due to the exponentially increasing complexity of the underlying propositional formula. Note that the use of propositional tautologies (such as
$A \leftrightarrow (A\wedge A)$ in this case) would help to greatly reduce the computing time for \verb|conjunct(|$i$\verb|)|.
Optimisation of propositional reasoning is beyond the scope of this paper, though; thus we devise the following exemplary class of formulae
(for which propositional tautologies would not help).
\end{itemize}
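A generator for this family can be sketched as follows (hypothetical helpers; note that the antecedents $p_1\vee p_0$ and $p_0\vee p_1$ are built with deliberately swapped disjuncts, so they are logically but not syntactically equivalent):

```haskell
-- Minimal formula type for illustration.
data Form = Atom Int | Neg Form | And Form Form | Or Form Form
          | Cond Form Form
  deriving (Eq, Show)

-- The repeated disjunct A from the text: its conditional antecedents
-- differ only in the order of their disjuncts.
aForm :: Form
aForm = Or (Or (Cond (Or p1 p0) p2) (Cond (Or p0 p1) p2))
           (Neg (Or (Cond (Or p0 p1) p2) (Cond (Or p1 p0) p2)))
  where
    p0 = Atom 0; p1 = Atom 1; p2 = Atom 2

-- conjunct(i): the i-fold conjunction of A.
conjunct :: Int -> Form
conjunct i = foldr1 And (replicate i aForm)
```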
\begin{figure}[!h]
\begin{center}
\begin{tabular}{| l | r | r | r |}
\hline
$i$ & Algorithm~\ref{alg:seq} & Algorithm~\ref{alg:preprove} & Algorithm~\ref{alg:optPreprove} \\
\hline
1 & 0.012s & 0.013s & 0.012s\\
2 & 0.021s & 0.015s & 0.014s\\
3 & 0.681s & 0.021s & 0.021s\\
4 & 131.496s & 0.048s & 0.048s\\
5 & $>$600.000s & 0.199s & 0.201s\\
6 & $\gg$600.000s & 1.152s & 1.161s\\
7 & $\gg$600.000s & 8.746s & 8.667s\\
8 & $\gg$600.000s & 74.805s & 75.595s\\
9 & $\gg$600.000s & $>$600.000s & $>$600.000s\\
\hline
\end{tabular}
\end{center}
\caption{Results for conjunct($i$)}
\label{fig:benchConjunct}
\end{figure}
\begin{itemize}
\item The formula \verb|explode(|$i$\verb|)| contains logically equivalent but syntactically
different modal antecedents of depth at most $i$, nested in varying orders:
\begin{quote}
\verb|explode(|$i$\verb|)| = $X^i_1\vee\ldots\vee X^i_i$\\
$X^i_1=(A^i_1\Rightarrow(\ldots(A^i_i\Rightarrow (c_1\wedge\ldots\wedge c_i))\ldots))$\\
$X^i_j=(A^i_{j\bmod i}\Rightarrow(\ldots(A^i_{(j+(i-1))\bmod i}\Rightarrow \neg c_j)\ldots))$\\
$A^i_j=p_{j \bmod i}\wedge\ldots\wedge p_{(j+(i-1)) \bmod i}$
\end{quote}
This class contains complex formulae for which the unoptimized algorithm is no
longer efficient: only the combined knowledge about all occurring modal antecedents $A^i_j$ allows
the proving algorithm to reach all modal consequents $c_n$, and only the combined sequent
$\{(c_1\wedge\ldots\wedge c_i),\neg c_1,\ldots,\neg c_i\}$ (containing every occurring
consequent) is provable. For formulae from this class (specifically designed to show the
advantages of optimisation by dynamic programming),
the optimized algorithms vastly outperform the unoptimized algorithm (see Figure~\ref{fig:benchExplode}).
\end{itemize}
\begin{figure}[!h]
\begin{center}
\begin{tabular}{| l | r | r | r |}
\hline
$i$ & Algorithm~\ref{alg:seq} & Algorithm~\ref{alg:preprove} & Algorithm~\ref{alg:optPreprove} \\
\hline
1 & 0.009s & 0.008s & 0.009s\\
2 & 0.011s & 0.010s & 0.010s\\
3 & 0.028s & 0.012s & 0.014s\\
4 & 0.268s & 0.018s & 0.018s\\
5 & 4.555s & 0.025s & 0.027s\\
6 & 10.785s & 0.035s & 0.039s\\
7 & $\gg$600.000s & 0.054s & 0.060s\\
8 & $\gg$600.000s & 0.079s & 0.089s\\
9 & $\gg$600.000s & 0.122s & 0.140s\\
\hline
\end{tabular}
\end{center}
\caption{Results for explode($i$)}
\label{fig:benchExplode}
\end{figure}
The tests were conducted on a Linux PC (Dual Core AMD Opteron 2220S (2800MHz), 16GB RAM).
The results show that a significant increase in performance may be obtained through
the proposed optimisations. In general, the implementation of the proposed algorithms (realized
as a part of \COLOSS) performs comparably to
other conditional logic provers (such as CondLean~\cite{OlivettiEA07}). However, the authors of other
provers did not publish their benchmarking sets of formulae, so that a direct
comparison was not possible. Since other provers do not implement the optimisation
strategies proposed in this paper, it is reasonable to assume that the optimized
implementation in \COLOSS~outperforms other systems at least on the
considered exemplary formulae \verb|explode(|$i$\verb|)| and \verb|conjunct(|$i$\verb|)|.
\section{Generalized optimisation}
As previously mentioned, the demonstrated optimisation is not restricted to the
case of conditional
modal logics.
\begin{definition}
If $\Gamma$ is a sequent, we denote the set of all arguments of
top-level modalities from $\Gamma$ by $arg(\Gamma)$.
A \emph{short sequent} is a sequent consisting of just one formula, which
itself is a propositional formula over a fixed maximal number of modal arguments
from $arg(\Gamma)$. In the following, we fix the maximal number of modal arguments
in short sequents to be 2.
\end{definition}
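For a single unary box modality, $arg(\Gamma)$ can be computed as sketched below (a hypothetical minimal syntax; in \COLOSS~the extraction is generic in the modal signature):

```haskell
-- Minimal syntax with one unary modality (hypothetical illustration).
data Form = Atom Int | Neg Form | And Form Form | Box Form
  deriving (Eq, Ord, Show)

type Sequent = [Form]

-- arg(Gamma): the arguments of all top-level modalities, i.e. of the
-- boxes occurring under boolean connectives only (not under boxes).
args :: Sequent -> [Form]
args = concatMap top
  where
    top (Box a)   = [a]
    top (Neg f)   = top f
    top (And f g) = top f ++ top g
    top (Atom _)  = []
```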
The general method of the optimisation then takes the following form:
Let $S_1,\ldots,S_n$ be short sequents and assume that there is
a (w.r.t.\ the considered modal logic) sound instance of the generic rule
depicted in Figure~\ref{fig:modalOpt} (where $\mathcal{S}$ is an arbitrary set of
sequents).
\begin{figure}[!h]
\begin{center}
\begin{tabular}{| c |}
\hline
\\[-5pt]
(\textsc {\textbf{Opt}}) \inferrule{ S_1 \\ \ldots \\ S_n \\ \mathcal{S} }
{ \Gamma } \\[-5pt]
\\
\hline
\end{tabular}
\end{center}
\caption{The general rule-scheme to which the optimisation may be applied}
\label{fig:modalOpt}
\end{figure}
Then we devise a final version (Algorithm~\ref{alg:genOptPreprove}) of the
optimized algorithm: Instead of considering only equivalences of conditional antecedents
for pre-proving, we now extend our attention to any short sequents over any modal arguments.
\begin{algorithm}[h]
\begin{alg}
\begin{upshape}
Step 1: Take a formula $\phi$ as input. Set $i=0$, $\mathcal{K}_0=\emptyset$, $eval_0=\emptyset$.\\
Step 2: Generate the set $args_i$ of all modal arguments of $\phi$
which have nesting depth at most $i$. If $i<depth(\phi)$, continue
with Step 3; else set $\mathcal{K}=\mathcal{K}_{i}, eval=eval_{i}$ and continue with Step 4.\\
Step 3: Let $seq_i$ denote the set of all short sequents of the form $S_j$ (where $S_j$ is a sequent
from the premise of rule (\textbf{Opt})) over at most two formulae
$A_a,A_b\in args_i$. Compute Algorithm~\ref{alg:optSeq} ($\psi$, $(\mathcal{K}_i,eval_i)$) for all
$\psi\in seq_i$. For each short sequent $\psi\in seq_i$, set $eval_{i+1}(\psi)=\top$
if the result of Algorithm~\ref{alg:optSeq} was \verb+True+ and $eval_{i+1}(\psi)=\bot$
otherwise. Set $\mathcal{K}_{i+1} = seq_i$ and $i = i + 1$. Continue with Step 2.\\
Step 4: Call Algorithm~\ref{alg:optSeq} ($\phi$, $(\mathcal{K},eval)$) and return its result
as result.
\end{upshape}
\label{alg:genOptPreprove}
\end{alg}
\end{algorithm}
This new Algorithm~\ref{alg:genOptPreprove} may then be used to decide provability of formulae,
where the employed rule set has to be extended by the generic modified rule given in Figure~\ref{fig:modModalOpt}.
\begin{figure}[!h]
\begin{center}
\begin{tabular}{| c |}
\hline
\\[-5pt]
(\textsc {\textbf{Opt}$^m$}) \inferrule{ eval(S_1)=\top \\ \ldots \\ eval(S_n)=\top \\ \mathcal{S} }
{ \Gamma } \\[-5pt]
\\
\hline
\end{tabular}
\end{center}
\caption{The general optimized rule}
\label{fig:modModalOpt}
\end{figure}
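With all relevant short sequents pre-proved, an application of (\textbf{Opt}$^m$) only has to look up the stored values and, on success, leaves just the remaining premises $\mathcal{S}$ to prove. A sketch (hypothetical representation, with short sequents and sequents kept opaque):

```haskell
import qualified Data.Map as Map

-- eval on short sequents: a map to their proved truth-values.
type Eval s = Map.Map s Bool

-- Applying (Opt^m): if every pre-proved side sequent S_1 .. S_n holds
-- according to eval, only the remaining premises in S are left to
-- prove; otherwise the rule does not apply.  Missing entries count
-- as "not proved".
applyOptM :: Ord s => Eval s -> [s] -> [t] -> Maybe [t]
applyOptM ev sides rest
  | all (\s -> Map.findWithDefault False s ev) sides = Just rest
  | otherwise = Nothing
```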
\begin{example}
The following two cases are instances of the generic optimisation:
\begin{enumerate}
\item (Classical modal logics / neighbourhood semantics) Let $\Gamma = \{\Box A = \Box B\}$,
$n=1$, $S_1=\{A=B\}$ and $\mathcal{S}=\emptyset$. Algorithm~\ref{alg:genOptPreprove}
may then be applied whenever the following congruence rule is sound in the considered
logic:\\
\begin{quote}
\begin{center}
(\textsc {\textbf{Opt}$_{Cong}$}) \inferrule{ A=B }
{ \Box A = \Box B }
\end{center}
\end{quote}
\vspace{10pt}
The corresponding modified version of this rule is as follows:\\
\begin{quote}
\begin{center}
(\textsc {\textbf{Opt}$^m_{Cong}$}) \inferrule{ {eval(A=B)=\top} }
{ \Box A = \Box B }
\end{center}
\end{quote}
\item (Monotone modal logics) By setting $\Gamma = \{\Box A \rightarrow \Box B\}$,
$n=1$, $S_1=\{A\rightarrow B\}$ and $\mathcal{S}=\emptyset$, we may instantiate
the generic algorithm to the case of modal logics which are monotone w.r.t. their
modal operator. So assume the following rule to be sound in the considered modal
logic:\\
\begin{quote}
\begin{center}
(\textsc {\textbf{Opt}$_{Mon}$}) \inferrule{ A\rightarrow B }
{ \Box A \rightarrow \Box B }
\end{center}
\end{quote}
\vspace{10pt}
The corresponding modified version of this rule is as follows:\\
\begin{quote}
\begin{center}
(\textsc {\textbf{Opt}$^m_{Mon}$}) \inferrule{ {eval(A\rightarrow B)=\top} }
{ \Box A \rightarrow \Box B }
\end{center}
\end{quote}
In the case that (\textbf{Opt}$_{Mon}$) is the only modal rule of the
considered logic (i.e. the case of plain monotone modal logic), all the
proof work connected to the modal operator is shifted to the
pre-proving process. In particular, matching against the modal rules
$\mathcal{RO}^m_{sc}$ becomes a mere lookup of the value of $eval$.
This means that each call of the sequent algorithm (Algorithm~\ref{alg:optSeq})
corresponds in complexity just to ordinary sat-solving for propositional logic.
Furthermore, Algorithm~\ref{alg:optSeq} will be called $|\phi|$ times. This
observation may be generalized:
\end{enumerate}
\label{ex:neighMon}
\end{example}
\begin{remark}
In the case that all modal rules of the considered logic are instances of
the generic rule (\textbf{Opt}) with $\mathcal{S}=\emptyset$ (as seen in Example~\ref{ex:neighMon}),
the optimisation not only allows for a reduction of computing time, but
also allows us to effectively reduce the sequent calculus to a sat-solving
algorithm.
Furthermore, the optimized algorithm will always be at least as efficient as the
original one in this case (since every occurrence of short sequents over $arg(\Gamma)$
which conform to the current instantiation of the rule (\textbf{Opt}) would
have to be shown or refuted during the course of the original algorithm anyway).
\end{remark}
\section{Summary}
We have presented an optimisation of the generic sequent calculus for conditional
logic which pre-proves the equivalences of conditional antecedents by dynamic
programming, refined it to treat only connected antecedents, and implemented it
as part of \COLOSS. Benchmarks on exemplary classes of formulae show significant
performance gains over the unoptimized algorithm, and the optimisation generalizes
to any modal logic whose rules are instances of the generic scheme (\textbf{Opt}).
\bibliographystyle{myabbrv}
\bibliography{coalgml}
\end{document}