SCSS 2013: Papers with Abstracts

Abstract. In this talk, we will exemplify the spirit of a new type of mathematics by a report on the Theorema system being developed in the speaker's research group. Theorema is both a logic and a software frame for doing mathematics in the way sketched above. On the object level, Theorema allows one to prove and program within the same logical frame and, on the meta-level, it allows one to formulate reasoning techniques that support proving and programming on the object level. In particular, we will show how this type of doing mathematics allows us to mimic the invention process behind the speaker's theory of Gröbner bases, which provides a general method for dealing with multivariate nonlinear polynomial systems.

Abstract. This work aims to build a semantic framework for automated debugging. A debugging process consists of tracing, locating, and fixing processes carried out consecutively. The first two processes are accomplished by a tracing procedure and a locating procedure, respectively. The tracing procedure reproduces the execution of a failed test case with well-designed data structures and saves the information necessary for locating bugs. The locating procedure uses the information obtained from the tracing procedure to locate ill-designed statements and to generate a fix-equation, the solution of which is a function that will be used to fix the bugs. A structural operational semantics is given to define the functions of the tracing and locating procedures. Both procedures are proved to terminate and to produce one fix-equation. The main task of the fixing process is to solve the fix-equation. It turns out that for a given failed test case, there exist three different types of solutions: 1. The bug is solvable: there exists a solution of the fix-equation, and the program can be repaired. 2. There exists a nonlinear error in the program: the fix-equation generated at each round of the locating procedure is solvable, but a new bug arises when the old bug is being fixed. 3.
There exists a logical design error, and the fix-equation is not solvable.

Abstract. Linear recurrence sequences (LRS), such as the Fibonacci numbers, permeate vast areas of mathematics and computer science. In this talk, we consider three natural decision problems for LRS, namely the Skolem Problem (does a given LRS have a zero?), the Positivity Problem (are all terms of a given LRS positive?), and the Ultimate Positivity Problem (are all but finitely many terms of a given LRS positive?). Such problems (and assorted variants) have applications in a wide array of scientific areas, such as theoretical biology (analysis of L-systems, population dynamics), economics (stability of supply-and-demand equilibria in cyclical markets, multiplier-accelerator models), software verification (termination of linear programs), probabilistic model checking (reachability and approximation in Markov chains, stochastic logics), quantum computing (threshold problems for quantum automata), discrete linear dynamical systems (reachability and invariance problems), as well as combinatorics, statistical physics, term rewriting, formal languages, cellular automata, generating functions, etc.
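As a purely illustrative sketch (not part of the talk; all names below are this sketch's own), an LRS is determined by its recurrence coefficients and initial values. Enumerating terms can witness a zero, giving only a semi-decision procedure for the Skolem Problem; it can never confirm Positivity, which quantifies over all infinitely many terms:

```python
# Illustrative sketch: an LRS u[k] = coeffs[0]*u[k-1] + ... + coeffs[d-1]*u[k-d]
# is fixed by d initial values and d coefficients (we assume
# len(initial) == len(coeffs)).  A bounded search can only *witness*
# a zero, never establish that all terms are positive.

def lrs_terms(initial, coeffs, n):
    """Yield the first n terms of the LRS."""
    u = list(initial)
    for k in range(n):
        if k >= len(u):
            # Next term from the last d computed terms.
            u.append(sum(c * u[-i - 1] for i, c in enumerate(coeffs)))
        yield u[k]

def find_zero(initial, coeffs, bound):
    """Index of the first zero among the first `bound` terms, or None."""
    for k, term in enumerate(lrs_terms(initial, coeffs, bound)):
        if term == 0:
            return k
    return None
```

For instance, with u0 = 2, u1 = -1 and the Fibonacci recurrence the sequence runs 2, -1, 1, 0, ..., so `find_zero([2, -1], [1, 1], 20)` reports index 3, whereas starting from 1, 1 no zero is ever found within any bound; deciding that no zero exists at all is exactly where the hard number theory enters.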
We shall see that these problems have deep and fascinating connections to rich mathematical concepts and conjectures, particularly in the fields of analytic and algebraic number theory, Diophantine geometry and approximation, real algebraic geometry, mathematical logic, and complexity theory.

Abstract. This paper presents a parameterized technique for the inspection of Rewriting Logic computations that allows the nondeterministic execution of a given rewrite theory to be followed up in different ways. Starting from a selected state in the computation tree, the exploration is driven by a user-defined inspection criterion that specifies the exploration mode. By selecting different inspection criteria, one can automatically derive useful debugging facilities such as program steppers and more sophisticated dynamic trace slicers that facilitate the detection of control and data dependencies across the computation tree. Our methodology, which is implemented in the Anima graphical tool, allows users to capture the impact of a given criterion, validate input data, and detect improper program behaviors.

Abstract. In this paper we propose a parametric technique to automatically infer algebraic property-oriented specifications from Term Rewriting Systems. Namely, given the source code of a TRS, we infer a specification which consists of a set of most general equations relating terms that rewrite, for all possible instantiations, to the same set of normal forms.
The semantics-based inference method that we propose can cope with non-constructor-based TRSs and considers non-ground terms. Particular emphasis is placed on avoiding the generation of “redundant” equations that are a logical consequence of other ones.
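The kind of equation such a method targets can be illustrated with a toy, self-contained rewriting sketch (hypothetical code, unrelated to the paper's Haskell prototype): for the Peano-addition TRS below, all ground instances of the equation add(x, 0) ≈ x rewrite to the same normal form.

```python
# Toy TRS sketch: terms are nested tuples like ('add', t1, t2) or ('0',);
# variables in rule patterns are plain strings.

RULES = [  # Peano addition: add(0, y) -> y, add(s(x), y) -> s(add(x, y))
    (('add', ('0',), 'y'), 'y'),
    (('add', ('s', 'x'), 'y'), ('s', ('add', 'x', 'y'))),
]

def match(pat, term, sub):
    """Match pattern against a ground term, extending substitution sub."""
    if isinstance(pat, str):                      # pattern variable
        if pat in sub:
            return sub if sub[pat] == term else None
        sub = dict(sub)
        sub[pat] = term
        return sub
    if isinstance(term, str) or pat[0] != term[0] or len(pat) != len(term):
        return None
    for p, t in zip(pat[1:], term[1:]):
        sub = match(p, t, sub)
        if sub is None:
            return None
    return sub

def subst(term, sub):
    """Apply a substitution to a rule right-hand side."""
    if isinstance(term, str):
        return sub[term]
    return (term[0],) + tuple(subst(a, sub) for a in term[1:])

def step(term):
    """One leftmost-outermost rewrite step, or None at a normal form."""
    for lhs, rhs in RULES:
        sub = match(lhs, term, {})
        if sub is not None:
            return subst(rhs, sub)
    for i, arg in enumerate(term[1:], 1):
        red = step(arg)
        if red is not None:
            return term[:i] + (red,) + term[i + 1:]
    return None

def normal_form(term):
    """Rewrite to a normal form (this toy TRS is terminating)."""
    while True:
        nxt = step(term)
        if nxt is None:
            return term
        term = nxt
```

Here `normal_form(('add', t, ('0',)))` equals `normal_form(t)` for every ground numeral `t`, which is an instance of an equation an inference method of this kind could report; the paper's contribution is computing such equations in their most general form and pruning the redundant ones.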
To experiment with the validity of our proposal, we have considered an instance of the method employing a novel (condensed) semantics for left-linear TRSs, and we have implemented a “proof of concept” prototype in Haskell which is available online.

Abstract. In this paper we present a novel condensed narrowing-like semantics that contains the minimal information needed to describe compositionally all possible rewritings of a term rewriting system. We provide its goal-dependent top-down definition and, more importantly, an equivalent goal-independent bottom-up fixpoint characterization. We prove soundness and completeness w.r.t. the small-step behavior of rewriting for the full class of term rewriting systems.

Abstract. Making a knot on a rectangular origami, or more generally on a tape of finite length, gives rise to a regular polygon. We present an automated algebraic proof that making two knots leads to a regular heptagon. The knot fold is regarded as a double fold operation coupled with Huzita's fold operations. We specify the construction by describing the geometrical constraints on the fold lines to be used for the construction of a knot. The algebraic interpretation of the logical formulas allows us to solve the problem of how to find the fold operations, i.e., to find concrete fold lines. The logical and algebraic framework incorporated in a system called Eos (e-origami system) is used to simulate the knot construction as well as to prove the correctness of the construction based on algebraic proof methods.

Abstract. It is recognized that security verification of cryptographic protocols tends to be difficult and, in fact, some flaws in protocol designs or security proofs were found after they had been presented. The use of formal methods is a way to deal with such complexity. In particular, process calculi are suitable for describing parallel systems.
Bisimilarity, which denotes that two processes behave indistinguishably from the outside, is a key notion in process calculi. However, by-hand verification of bisimilarity is often tedious when the processes have many long branches in their transitions. We developed a software tool to automatically verify the bisimulation relation in the quantum process calculus qCCS and applied it to Shor and Preskill's security proof of BB84. The tool succeeds in verifying the equivalence of BB84 and an EDP-based protocol, which are discussed in their proof.

Abstract. In applications of symbolic computation, the computation of Gröbner bases is an often required but complex procedure, so it is natural to develop parallel algorithms to compute them. There are parallel flavours of the F4 algorithm that use the special structure of the occurring matrices to speed up the reduction. In this paper we start from this and present modifications allowing efficient computations of Gröbner bases on parallel architectures using shared as well as distributed memory. To achieve this we concentrate on one objective: reducing memory consumption and avoiding communication overhead. We remove unnecessary steps of the reduction, split the columns of the matrix into blocks for distribution, and review the effectiveness of the SIMPLIFY function. Finally, we provide benchmarks with up to 256 distributed threads of an implementation which will be available at https://github.com/svrnm/parallelGBC.

Abstract. We present the key notion of "asymptotically nonterminant initial variable values" for nonterminant loop programs. We show that those specific values are directly associated with initial variable values for which the loop program does not terminate. Considering linear diagonalizable programs, we describe powerful computational methods that automatically and symbolically generate a semilinear space represented by a linear system of equalities and inequalities.
Each element of this space provides us with asymptotically nonterminant initial variable values. Our approach is based on linear-algebraic methods and results. We obtain conditions using a decomposition on a specific basis, involving the loop condition and the matrix encoding the instructions of the loop.

Abstract. This paper presents the modeling of a binary tree that represents a natural number and gives inductive proofs of its properties using theorem provers. We define a function for converting data from a natural number into a binary tree and give an inductive proof of its well-definedness. We formalize this method, develop a computational model based on it, and apply it to an electronic cash protocol. We also define the payment function on the binary tree and go on to prove the divisibility of electronic cash using the theorem provers Isabelle/HOL and Coq, respectively. Furthermore, we discuss the effectiveness of this method.

Abstract. With the aid of the computer algebra systems COCO and GAP with its packages, we investigate all seven known primitive triangle-free strongly regular graphs on 5, 10, 16, 50, 56, 77 and 100 vertices. These graphs are rank 3 graphs with a rich automorphism group. The embeddings of each graph from this family into the other ones are described, the automorphic equitable partitions are classified, and all equitable partitions in the graphs on up to 50 vertices are enumerated. Based on the reported computer-aided results and using techniques of coherent configurations, a few new models of these graphs are suggested, relying on knowledge of just a small part of the symmetries of the graph under consideration.

Abstract. The paper presents the use of invariants for the symbolic verification of requirements for reactive systems. It includes the checking of safety, incompleteness, liveness, and consistency properties, as well as livelock detection.
The paper describes the iterative method of double approximation and the method of undetermined coefficients for invariant generation. Benefits, disadvantages, and a comparison of this technique with existing methods are considered. The paper is illustrated by examples of using the invariant technique for symbolic verification.

Abstract. Polynomial interpolation is a classical method for approximating continuous functions by polynomials. To measure the accuracy of the approximation, Lebesgue constants are introduced. For a given node system $X^{(n+1)}=\{x_1<\ldots<x_{n+1}\}\, (x_j\in [a,b])$, the Lebesgue function $\lambda_n(x)$ is the sum of the moduli of the Lagrange basis polynomials built on $X^{(n+1)}$. The Lebesgue constant $\Lambda_n$ assigned to the function $\lambda_n(x)$ is its maximum over $[a,b]$. The Lebesgue constant bounds the interpolation error: the interpolation polynomial is at most $(1+\Lambda_n)$ times worse than the best approximation. The minimum of the $\Lambda_n$'s for fixed $n$ and interval $[a,b]$ is called the optimal Lebesgue constant $\Lambda_n^*$. For specific interpolation node systems such as the equidistant system, numerical results for the Lebesgue constants $\Lambda_n$ and their asymptotic behavior are known \cite{3,7}. However, giving an explicit symbolic expression for the minimal Lebesgue constant $\Lambda_n^*$ is computationally difficult. In this work, motivated by Rack \cite{5,6}, we are interested in expressing the minimal Lebesgue constants symbolically on $[-1,1]$, and we also look for a characterization of those node systems which realize the minimal Lebesgue constants. We exploit the equioscillation property of the Lebesgue function \cite{4} and use quantifier elimination and Gröbner bases as tools \cite{1,2}. Most of the computation is done in Mathematica \cite{8}.
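To make these quantities concrete, here is a small numerical sketch (pure Python, independent of the paper's symbolic Mathematica computations) that estimates $\Lambda_n$ on a dense grid and contrasts equidistant nodes with Chebyshev nodes, whose Lebesgue constant is markedly smaller:

```python
import math

# Numerical sketch: lambda_n(x) is the sum of |l_j(x)| over the Lagrange
# basis polynomials; Lambda_n is its maximum over [-1, 1], estimated here
# on a grid (a grid search only lower-bounds the true maximum, which is
# one reason the symbolic treatment in the paper is of interest).

def lagrange_basis(nodes, j, x):
    """Evaluate the j-th Lagrange basis polynomial l_j at x."""
    val = 1.0
    for k, xk in enumerate(nodes):
        if k != j:
            val *= (x - xk) / (nodes[j] - xk)
    return val

def lebesgue_constant(nodes, grid_size=2000):
    """Estimate Lambda_n = max over [-1, 1] of sum_j |l_j(x)|."""
    best = 0.0
    for i in range(grid_size + 1):
        x = -1.0 + 2.0 * i / grid_size
        best = max(best, sum(abs(lagrange_basis(nodes, j, x))
                             for j in range(len(nodes))))
    return best

# Two node systems for n = 6 (seven nodes) on [-1, 1].
n = 6
equi = [-1.0 + 2.0 * j / n for j in range(n + 1)]
cheb = [math.cos((2 * j + 1) * math.pi / (2 * n + 2)) for j in range(n + 1)]
```

Running `lebesgue_constant` on both systems shows the equidistant constant already well above the Chebyshev one at $n = 6$; the gap grows exponentially in $n$, which is why the choice of node system, and in particular the optimal one studied in the paper, matters.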

