# Summer term 2013

• Friday, June 21, 2013, at 13:30 in S2|15-201
Angeliki Koutsoukou-Argyraki
• Friday, June 7, 2013, at 13:30 in S2|15-201
Andreas Döring

# Winter term 2012/13

• Friday, March 15, 2013, at 13:30 in S2|15-201
Dr. Jaime Gaspar*
Krivine's classical realisability and the unprovability of the axiom of choice and the continuum hypothesis

Proof interpretations are tools in mathematical logic with a wide range of applications: consistency results, independence results, and extraction of computational content from proofs, just to name a few. They are usually applied only in the context of logic or arithmetic. There is the dream of applying them even to set theory. Now the dream is fulfilled with a novel proof interpretation: Krivine's classical realisability. In this talk I present my personal digest of Krivine's classical realisability and one of its main applications to set theory: set theory ZF plus the statement "there is a set S between N (the set of natural numbers) and R (the set of real numbers) not equinumerous to its cartesian square S x S" is a consistent theory where both the axiom of choice and the continuum hypothesis fail. As a corollary we get two classical and celebrated results: ZF does not prove the axiom of choice nor the continuum hypothesis. This is a short, simple, and sweet talk, focusing on the ideas and setting aside technicalities.

* INRIA Paris-Rocquencourt, pi.r2, Univ Paris Diderot, Sorbonne Paris Cité, F-78153 Le Chesnay, France. Financially supported by the French Fondation Sciences Mathématiques de Paris.

• Friday, February 1, 2013, at 13:30 in S2|15-201
Prof. Dr. Martin Ziegler
Real Benefit of Promises and Advice
Promises are a standard way to formalize partial algorithms, and advice quantifies nonuniformity. For decision problems, the latter is captured in common complexity classes such as P/poly, that is, with advice growing in size with that of the input. We advertise constant-size advice and explore its theoretical impact on the complexity of classification problems -- a natural generalization of promise problems -- and on real functions and operators. Specifically, we exhibit problems that, without any advice, are decidable/computable but of high complexity while, with (each increase in the permitted size of) advice, (gradually) drop down to polynomial-time.
• Friday, January 25, 2013, at 13:30 in S2|15-201
Johanna Sokoli
On the Complexity of Satisfiability over Vector Product Terms
A real vector product term is a term constructed iteratively by forming vector products of R^3-valued variables X_1, ..., X_n. By defining a vector product for elements of the projective space P^2 in an appropriate way, one can also define projective vector product terms. Given a set of vector product terms, the question arises whether a single equation, or even an entire equation system formed by these terms, is satisfiable. In this talk, different definitions of such satisfiability problems are given and an overview of several polynomial reductions is provided. Furthermore, the languages are classified into complexity classes and a proof of completeness for some of these languages will be given.
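As a toy illustration (my own sketch, not taken from the talk), a vector product term can be represented as a nested pair of subterms and evaluated recursively; an equation between terms is then satisfied by an assignment making both sides equal:

```python
# Toy illustration: evaluating a real vector product term over R^3.
# A term is either a variable index i (for X_i) or a pair (s, t) meaning s x t.

def cross(a, b):
    """Vector (cross) product of two vectors in R^3."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def evaluate(term, assignment):
    """Evaluate a vector product term under an assignment X_i -> R^3."""
    if isinstance(term, int):          # a variable X_i
        return assignment[term]
    s, t = term                        # an iterated product s x t
    return cross(evaluate(s, assignment), evaluate(t, assignment))

# The equation (X_1 x X_2) x X_1 = X_2 is satisfied by orthonormal X_1, X_2:
assignment = {1: (1.0, 0.0, 0.0), 2: (0.0, 1.0, 0.0)}
print(evaluate(((1, 2), 1), assignment))  # (0.0, 1.0, 0.0), equal to X_2
```

The satisfiability question of the talk asks whether any such assignment exists, which is far harder than the single evaluation shown here.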
• Tuesday, January 15, 2013, at 16:00 in S2|15-201
Keita Yokoyama, Tokyo Institute of Technology
Combinatorial principles and Reverse Mathematics
In the Reverse Mathematics program, many mathematical theorems are classified into typical subsystems of second-order arithmetic called the "big five". Recently, combinatorial principles have attracted much attention in this program, since many of them are not classified into the big five. In this talk, I will discuss the role of Ramsey's theorem in Reverse Mathematics and related topics.
• Friday, December 14, 2012, at 15:00 in S2|15-201
Dr Kord Eickmeyer, National Institute of Informatics, Tokyo
Order-Invariant Logics on Restricted Classes of Structures
Order-invariant extensions of various classical logics pop up naturally in descriptive complexity and database theory. Formulas of these logics may speak about a linear order on structures, but have to be invariant under the particular choice of such an order. Strictly speaking, this does not even define a logic, because in general it is undecidable whether a given formula has this invariance property, and this is reflected in the failure of many powerful tools from finite model theory, such as Ehrenfeucht-Fraïssé games.
In my talk I will review known results about the expressiveness and model-checking complexity of order- and successor-invariant first-order logics. I will then go on to talk about my ongoing research (with Michael Elberfeld) about the expressive power of order-invariant logics on restricted classes of structures.
• Tuesday, December 4, 2012, at 16:15 in S2|15-201
Dr. Tomer Kotek, Haifa, Israel
Applications of logic in graph theory: definability of graph invariants
Logical methods can be used to unify results for individual graph-theoretic objects, by providing meta-theorems which apply to all such objects definable in an appropriate logic. In this talk we discuss graph properties and graph polynomials definable in First Order Logic and Monadic Second Order Logic in terms of their definability and combinatorial meta-theorems. In particular, we show a method of proving non-definability using connection matrices. We extend the results for graph properties by showing a Feferman-Vaught type theorem for First Order Logic extended with counting quantifiers.
• Wednesday, November 28, 2012, at 13:30 in S2|15-201
Rasmus Møgelberg, ITU Copenhagen, Denmark
Presheaf models for guarded recursion

A guarded recursive definition is one where the recursion variable only occurs in positions guarded by a delay operator. In this talk I will present a new model of guarded recursion and motivate it using two different examples: first as a model for computations with coinductive types such as streams, and second as a model of a metalanguage useful for defining models of programming languages with higher-order store. The model is a simple presheaf model (over the ordered natural numbers) and I will describe the computational intuition. If I have time I will describe work in progress on constructing models of intensional type theory with guarded recursion. Most of the talk describes joint work with Lars Birkedal, Kristian Stoevring and Jan Schwinghammer published at LICS 2011.

• November 23, 2012, at 13:30 in S2|15-201
Makoto Fujiwara, Tohoku University, Japan
Marriage Theorem for Countable Graphs and Computability
We investigate the so-called marriage theorem with respect to computability, making use of techniques from reverse mathematics. The marriage theorem has the form that a marriage problem (a bipartite graph) satisfying appropriate conditions has a solution (an injection). A computable marriage problem may not have a computable solution, but under strengthened conditions it does. Based on this fact, we investigate how the computational difficulty of marriage problems varies as extra information is added step by step. In the second part of this talk, we consider a new combinatorial condition which also makes a computable marriage problem have a computable solution. However, unlike the previous case, there is no uniform procedure to construct a solution from a given computable marriage problem with this condition. On the other hand, if we have a little more information about the marriage problem, we can construct a solution uniformly.
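In the finite case, the condition under which a marriage problem is solvable is Hall's condition: every k "boys" must collectively know at least k "girls". A brute-force sketch (my illustration of the finite criterion, not part of the talk, which concerns countable graphs):

```python
from itertools import combinations

def hall_condition(likes):
    """Check Hall's condition for a finite bipartite graph.

    likes: dict mapping each boy to the set of girls he knows.
    Returns True iff every group of k boys knows >= k girls in total,
    which (by the finite marriage theorem) is equivalent to the
    existence of a matching covering all boys.
    """
    boys = list(likes)
    for k in range(1, len(boys) + 1):
        for group in combinations(boys, k):
            known = set().union(*(likes[b] for b in group))
            if len(known) < k:
                return False
    return True

print(hall_condition({'a': {1, 2}, 'b': {1}, 'c': {2, 3}}))  # True
print(hall_condition({'a': {1}, 'b': {1}, 'c': {1, 2}}))     # False: a, b both know only girl 1
```

For countable graphs this pointwise check no longer suffices, which is exactly where the computability questions of the talk begin.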
• November 16, 2012, at 13:00 in S2|15-201
PD Dr. Olaf Beyersdorff, University of Leeds, UK
How difficult is it to verify proofs?
Traditionally proof complexity adopts the view that proofs are verified in polynomial time and a rich body of results is known for the complexity of proofs in this framework. The talk will start by providing an introduction to main notions and motivations in proof complexity in this classical setting. The second part of the talk is devoted to a recent line of research trying to understand the power of proof systems that use alternative computational resources to verify proofs. In the talk I will mention recent results where proofs are verified by either additional computational resources (such as advice, randomness, quantum computations) or by very weak computational models. In particular, we will look at a very weak model where proofs are verified by NC^0 circuits and show non-trivial lower and upper bounds for this restrictive model.
• November 9, 2012, at 13:00 in S2|15-201
PD Dr. Laurentiu Leustean, Institute of Mathematics of the Romanian Academy, Bucharest, Romania
Proof Mining in Nonlinear Analysis
The program of proof mining is concerned with the extraction of hidden finitary and combinatorial content from proofs that make use of highly infinitary principles. This new information can be both of quantitative nature, such as algorithms and effective bounds, as well as of qualitative nature, such as uniformities in the bounds or weakening of the premises. Thus, even if one is not particularly interested in the numerical details of the bounds themselves, in many cases such explicit bounds immediately show the independence of the quantity in question from certain input data. In this talk we present recent applications of proof mining in nonlinear analysis: effective results on the asymptotic behavior of Picard iterations of firmly nonexpansive mappings, a class of functions which play an important role in metric fixed point theory and optimization due to their correspondence with maximal monotone operators (joint work with David Ariza and Genaro Lopez-Acedo); and explicit and highly uniform rates of metastability (in the sense of Tao) for a nonlinear generalization of the classical von Neumann mean ergodic theorem, due to Shioji and Takahashi, for Banach spaces with a uniformly Gateaux differentiable norm (joint work with Ulrich Kohlenbach).
• November 2, 2012, at 13:00 in S2|15-201
Prof. Dr. Martin Otto
Groupoids and Hypergraphs
We look at a novel construction of finite hypergraphs of controlled acyclicity, based on finite groupoids with suitable acyclicity properties: their Cayley graphs have large girth even w.r.t. a discounted distance measure that contracts arbitrarily long sequences of edges from the same colour class (sub-groupoid), and only counts transitions between colour classes (cosets). These groupoids are shown to be useful, e.g., in the construction of generic finite hypergraph covers that avoid small cyclic configurations. The hypergraph constructions in question use reduced products with groupoids that are generated by the elementary local extension steps and can be made to preserve the symmetries of the given overlap pattern.
• October 26, 2012, at 13:00 in S2|15-201
Reiner Hähnle
Abstract Symbolic Execution

Modern software tends to undergo frequent requirement changes and typically is deployed in many different scenarios. This poses significant challenges to formal software verification, because it is not feasible to verify a software product from scratch after each change. It is essential to perform verification in a modular fashion instead. The goal must be to reuse not merely software artifacts, but also specification and verification effort. In our setting code reuse is realized by delta-oriented programming, an approach where a core program is gradually transformed by code "deltas", each of which corresponds to a product feature. The delta-oriented paradigm is then extended to contract-based formal specifications and to verification proofs. As a next step towards modular verification we transpose Liskov's behavioural subtyping principle to the delta world. Finally, based on the resulting theory, we perform a syntactic analysis of contract deltas that makes it possible to automatically factor out those parts of a verification proof that stay valid after applying a code delta. This is achieved by a novel verification paradigm called "abstract symbolic execution."

• October 26, 2012, at 10:30 in S2|15-201
Viktor Winschel
Coalgebraic Analysis of Subgame-perfect Equilibria in Infinite Games without Discounting - Towards Reflexive Economics

We present a novel coalgebraic formulation of infinite extensive games. We define both the game trees and the strategy profiles by possibly infinite systems of corecursive equations. Certain strategy profiles are proved to be subgame perfect equilibria using a novel proof principle of predicate coinduction, which is shown to be sound by reducing it to Kozen's metric coinduction. We characterize all subgame perfect equilibria for the dollar auction game. The economically interesting feature is that in order to prove these results we do not need to rely on continuity assumptions on the payoffs which amount to discounting the future. In particular, we prove a form of one-deviation principle without any such assumptions. This suggests that coalgebra supports a more adequate treatment of infinite-horizon models in game theory and economics. I will also sketch the possibilities that this (and more generalized) mathematics opens up for reflexive economics, which takes seriously the fact that modeling in social science takes place within the modeled system and essentially depends on self-referential paradoxes, corecursion, and similar mathematical constructions. This opens the door to a proper treatment of the Lucas critique; to evolutionary, institutional, agent-based, computational, and numerical economics; to the econometrics of non-experimental data; to money, value theory, political economics, and management theory; and also to a unification and cross-fertilization between systems sociology, computer science, and biology.

• Thursday, October 11, 2012, at 13:00 in S2|15-201
Stéphane Le Roux
From Determinacy to Nash equilibrium

Game theory is a modelling tool for economics and Nash equilibrium is its central concept, involving many players and many possible outcomes. Independently, logic has been using a game-theoretic terminology and determinacy is its central concept, involving two players and two possible outcomes. I will show that all determinacy results can be uniformly transferred into a result of existence of multi-outcome (possibly multi-player) Nash equilibrium, thus establishing a two-way bridge between logic and economics.

# 2012, January to September

• September 21, 2012, at 13:00 in S2|15-201
Arno Pauly, Cambridge University
Synthetic Descriptive Set Theory
• September 7, 2012, at 13:00 in S2|15-201
Pavol Safarik, TU Darmstadt
A functional interpretation for nonstandard arithmetic
We introduce constructive and classical systems for nonstandard arithmetic and show how variants of the functional interpretations due to Gödel and Shoenfield can be used to rewrite proofs performed in these systems into standard ones. These functional interpretations show in particular that our nonstandard systems are conservative extensions of E-HA^ω and E-PA^ω, strengthening earlier results by Moerdijk and Palmgren, and Avigad and Helzner. We will also indicate how our rewriting algorithm can be used for term extraction purposes. To conclude, we will point out some open problems and directions for future research, including some initial results on saturation principles.
• August 24, 2012, at 13:00 in S2|15-201
Tahereh Jafarikhah, University of Tarbiat Modares, Tehran, Iran; at present University of Hagen, Germany
Computable Riesz representation on the dual of C[0,1] revisited
By the Riesz representation theorem, for every linear functional F : C[0,1] → R there is a function g : [0,1] → R of bounded variation such that
F(h) = ∫ h dg  (h ∈ C[0,1]).
A computable version is proved in [LW07a]: a function g can be computed from F and its norm, and F can be computed from g and an upper bound of its total variation. In this article we present a much more transparent proof. We first give a new proof of the classical theorem, from which we can then derive the computable version easily. As in [LW07a] we use the framework of TTE, the representation approach for computable analysis, which allows us to define natural concepts of computability for the operators under consideration.
• August 24, 2012, at 10:00 in S2|15-201
Hiroyuki OTA, University of Tokyo, Japan
Computational Complexity of Smooth Differential Equations
The computational complexity of the solution~$h$ to the ordinary differential equation $h(0)=0$, $h'(t) = g(t, h(t))$ under various assumptions on the function $g$ has been investigated in the hope of understanding the intrinsic hardness of solving the equation numerically. Kawamura showed in 2010 that the solution~$h$ can be PSPACE-hard even if $g$ is assumed to be Lipschitz continuous and polynomial-time computable. We place further requirements on the smoothness of $g$ and obtain the following results: the solution~$h$ can still be PSPACE-hard if $g$ is assumed to be of class $C^1$;
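For orientation, the numerical task whose intrinsic hardness is at stake can be sketched with the simplest solver, forward Euler (my own illustration with a made-up Lipschitz right-hand side $g$, not taken from the abstract):

```python
def euler(g, t_end=1.0, n=100_000):
    """Forward Euler approximation of h(0)=0, h'(t)=g(t,h(t)) on [0, t_end]."""
    h, dt = 0.0, t_end / n
    for i in range(n):
        h += dt * g(i * dt, h)  # one Euler step: h <- h + dt * g(t, h)
    return h

# With g(t, y) = y + 1 the exact solution is h(t) = e^t - 1:
approx = euler(lambda t, y: y + 1)
print(approx)  # close to e - 1 ≈ 1.71828
```

The complexity results above say that, even for very smooth $g$, no algorithm (Euler or otherwise) can in general produce such approximations efficiently unless PSPACE collapses accordingly.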
• Tuesday, July 17, 2012, at 11:00 in S2|15-401
Prof. Hans-Peter Künzi, University of Cape Town, South Africa
The Katetov construction revisited
(joint work with M. Sanchis, Castellon, Spain)
Various concepts from the theory of metric spaces are generalized to an asymmetric setting. In particular we study the Katetov construction modified for a T_0-quasi-metric space. Our approach is related to the recent work of the first author with Kemajou and Otafudu about a concept of hyperconvexity in such spaces.
• July 6, 2012, at 13:00 in S2|15-201
Prof. Benjamin Miller, University of Münster
An anti-basis theorem for definable cardinals
We will discuss how local rigidity properties of the usual action of SL_2(Z) on R^2 can be used to rule out basis theorems near the base of the definable cardinal hierarchy. I hope to make the talk accessible to a broad audience. This is joint work with Clinton Conley.
• June 29, 2012, at 13:00 in S2|15-201
Prof. Martin Otto, TU Darmstadt
Pebble Games and Linear Equations (joint work with Martin Grohe)
We give a new, simplified and detailed account of the correspondence between levels of the Sherali-Adams relaxation of graph isomorphism and levels of pebble-game equivalence with counting (higher-dimensional Weisfeiler-Lehman colour refinement). The correspondence between basic colour refinement and fractional isomorphism (Ramana, Scheinerman, Ullman 1994) is re-interpreted as the base level of Sherali-Adams and generalised to higher levels in (Atserias, Maneva 2011), who prove that the two resulting hierarchies interleave. In carrying this analysis further, we give (a) a precise characterisation of the level k Sherali-Adams relaxation in terms of a modified counting pebble game; (b) a variant of the Sherali-Adams levels that precisely match the k-pebble counting game; (c) a proof that the interleaving between these two hierarchies is strict. We also investigate the variation based on boolean arithmetic instead of real/rational arithmetic and obtain analogous correspondences and separations for plain k-pebble equivalence (without counting). Our results are driven by considerably simplified accounts of the underlying combinatorics and linear algebra.
• June 22, 2012, at 13:00 in S2|15-201
Atefeh Keshavarzi Zafarghandi, Amirkabir University, Tehran, Iran
Dynamical Systems via Domain Theory
We consider dynamical systems from a computational point of view and study the associated dynamical systems on extended hyperspaces. We show that attractors of iterated function systems can be obtained as fixed points in these new hyperspaces.
• June 15, 2012, at 13:00 in S2|15-201
Achim Blumensath, TU Darmstadt
Recognisability and Algebras of Infinite Trees
• June 1, 2012 at 13:00 in S2|15-201
Prof. Andre Nies, University of Auckland, New Zealand
Algorithmic randomness and differentiability
We show that algorithmic randomness notions for a real correspond to differentiability at the real of all functions in an appropriate effective class.
• May 18, 2012 at 13:00 in S2|15-201
Andrey Morozov, Sobolev Institute of Mathematics, Novosibirsk
On Sigma-definability of structures over the reals

We give a brief survey of the results on structures Sigma-definable over HF(R) and present some results on the number of non-Sigma-isomorphic presentations of the ordered field R of reals and of its ordering. In particular, we will discuss the question whether it is possible to use R to 'improve' itself by creating a new classically isomorphic Sigma-copy.
• May 4, 2012
Prof. Takakazu Mori, Sangyo University, Kyoto, Japan
Fine Computable Functions - a computability notion for discontinuous functions -

The notion of computability of functions due to Pour-El and Richards requires Sequential Computability and Effective Continuity; hence a computable function must be continuous. The theory of Walsh series is based on the Fine metric, which is stronger than the Euclidean metric. A function is Fine continuous if and only if it is continuous at dyadic irrationals and right continuous at dyadic rationals. So Rademacher functions and Walsh functions are Fine continuous. Fine computability is defined by means of the Fine topology instead of the Euclidean topology.

• April 27, 2012 at 13:30 in S2|15-201
Proof mining for Halpern iterations in CAT(0) spaces
We examine a convergence result of Halpern iterations in CAT(0) spaces due to Saejung (2010) and explain how to apply proof mining methods to obtain effective uniform rates of metastability as outlined by Kohlenbach and Leustean in 2011. The novelty of this approach lies in the fact that Saejung's original proof used the existence of Banach limits. This requires a substantial reference to the axiom of choice which is not permitted in current metatheorems. We show how one can reduce this reference to fit the requirements of a metatheorem given in Gerhardy-Kohlenbach (2005).
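To fix intuition for the object under study, here is a sketch of mine of a Halpern iteration x_{n+1} = a_n u + (1 - a_n) T(x_n) on the real line (a trivial CAT(0) space), with the common choice a_n = 1/(n+1); this toy is not from the talk, which concerns general CAT(0) spaces:

```python
def halpern(T, u, x0, steps=2000):
    """Halpern iteration x_{n+1} = a_n*u + (1-a_n)*T(x_n) with a_n = 1/(n+1)."""
    x = x0
    for n in range(steps):
        a = 1.0 / (n + 1)
        x = a * u + (1 - a) * T(x)
    return x

# T(x) = x/2 is nonexpansive with unique fixed point 0; the iterates
# approach it slowly, since the anchor weights a_n decay only like 1/n:
print(abs(halpern(lambda x: x / 2, u=1.0, x0=5.0)))  # small
```

Rates of metastability, as extracted in the talk, quantify the convergence of such sequences in a uniform, finitary way even when no computable rate of convergence exists.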
• April 20, 2012 at 13:30 in S2|15-201
Samuele Maschio, University of Pavia, Italy
Initial algebras and internal syntax
In category theory, recursive objects are defined using initial algebras. Just as for natural numbers, objects of lists and objects of trees, it is possible to give a notion of *internal theory* in a category using *ad hoc* functors. I will give some examples and underline the connection between Gödel coding and the internalization of arithmetic in the syntactic category of Peano arithmetic.
• March 23, 2012 at 13:30 in S2|15-201
Costas Poulios, University of Athens, Greece

The fixed point property on tree-like Banach spaces
A Banach space $X$ is said to have the fixed point property if for every weakly compact convex set $K\subseteq X$ and every non-expansive $T:K\to K$ (i.e. $\|Tx-Ty\|\leq \|x-y\|$ for all $x,y\in K$), the map $T$ has a fixed point. This property is closely related to the geometry of the space under consideration. If the geometry of the space is "nice", i.e. if $X$ possesses normal structure, then $X$ has the fixed point property. On the other hand, a space without normal structure may fail this property (e.g. the space $L_1$) or it may enjoy this property (e.g. the space $c_0$).
In this lecture we will consider the important class of separable Banach spaces not containing $\ell_1$ with non-separable dual. Our purpose is to study these spaces with respect to the fixed point property. In particular, after discussing some known members of this class, we will present a Banach space belonging to the class under consideration which has the fixed point property even though it does not possess normal structure.

• March 2, 2012 at 11:00 in S2|15-201
Alexander Kreuzer

Non-principal ultrafilters, program extraction and higher order reverse mathematics
Abstract: We investigate the strength of the existence of a non-principal ultrafilter over fragments of higher order arithmetic. Let (U) be the statement that a non-principal ultrafilter on N exists and let ACA_0^\omega be the higher order extension of ACA_0. We show that ACA_0^\omega + (U) is \Pi^1_2-conservative over ACA_0^\omega and thus that ACA_0^\omega + (U) is conservative over PA. Moreover, we provide a program extraction method and show that from a proof of a strictly \Pi^1_2 statement \forall f \exists g A(f,g) in ACA_0^\omega + (U) a realizing term in Gödel's system T can be extracted. This means that one can extract a term t \in T such that \forall f A(f, t(f)).

• February 10, 2012 at 14:30 in S2|15-201
Benno van den Berg
About the Herbrand Topos
• January 27, 2012 at 13:30 in S2|15-201
The complexity of the set of points of continuity of a multi-valued function
We consider a notion of continuity concerning multi-valued functions between metric spaces. We focus on multi-valued functions assigning points to nonempty closed sets and we answer the question asked by M. Ziegler (Real Computation with Least Discrete Advice: A Complexity Theory of Nonuniform Computability with Applications to Effective Linear Algebra, to appear in Ann. Pure Appl. Logic) of determining the complexity of the set of points of continuity of such a multi-valued function. We will see that, under appropriate compactness assumptions, the complexity of this set is low in the hierarchy of Borel sets. On the other hand in the general case it might not even be a Borel set. In particular, this notion of continuity is not metrizable in the following sense: if we view a multi-valued function F from X to Y as a single-valued function from X to the family F(Y) of closed subsets of Y, it is not necessarily the case that there is a metrizable topology on F(Y) such that F is continuous at an arbitrary x in the usual sense (of single-valued functions) exactly when it is continuous at x in the sense of multi-valued functions. We will also present some similar results about a stronger notion of continuity, which is closely related to the Fell topology and thus in some cases turns out to be metrizable.
• Wednesday, January 18, 2012 at 14.25 in S2|15-201
Jean-Yves Beziau, Federal University of Rio de Janeiro, UFRJ
Visiting Professor University of Munich, LMU
Universal logic: A general completeness theorem

The aim of universal logic is to develop general concepts and proofs for a better understanding of logical systems. After giving a brief overview of universal logic I will present a general completeness theorem linking sequent systems with bivaluations.
• January 13, 2012 at 13.30 in S2|15-201
Sam Sanders, University of Ghent
Reuniting the antipodes: bringing together Nonstandard Analysis and Constructive Analysis
Constructive Analysis was introduced by Errett Bishop to identify the 'computational meaning' of mathematics. In the spirit of intuitionistic mathematics, notions like 'algorithm', 'explicit computation', and 'finite procedure' are central. The exact meaning of these vague terms was left open, to ensure the compatibility of Constructive Analysis with several traditions in mathematics. Constructive Reverse Mathematics (CRM) is a spin-off of Harvey Friedman's famous Reverse Mathematics program, based on Constructive Analysis. Bishop famously derided Nonstandard Analysis for its lack of computational meaning. In this talk, we introduce 'Ω-invariance': a simple and elegant definition of 'finite procedure' in (classical) Nonstandard Analysis. Using an intuitive interpretation, we obtain many results from CRM, thus showing that Ω-invariance is quite close to Bishop's notion of 'finite procedure'.
• Tuesday, January 12, 2012 at 11:00 in S2|15-201
Yoshihiro Maruyama, Oxford University
Chu Duality, Born Coalgebras, and Quantum Symmetries
After a brief overview of recent developments of quantum foundations in Oxford, we focus on Abramsky's representations of quantum systems as Chu spaces and as coalgebras, with the purpose of analysing them from the viewpoint of duality theory. We first develop a general theory of dualities between point-free and point-set spaces, based on the formalism of Chu space. The Chu duality theory includes, as a special case, a duality between quantum state spaces and property observables. Building upon the duality, we refine Abramsky's coalgebraic representation of quantum symmetries, so that our (non-fibred) category of coalgebras turns out to be strictly smaller than Abramsky's fibred one; at the same time, however, it is still big enough to represent quantum symmetries.

# Logic Seminar 2011

• December 9, 2011 at 13:30 in S2|15-201
Jaime Gaspar
The infinite pigeonhole principle

Abstract: The infinite pigeonhole principle is one of the world's most evident statements: if we colour the natural numbers with finitely many colours, then some colour occurs infinitely often. Despite its obviousness, its treatment can be surprisingly challenging. We are going to illustrate this with two case studies: 1. the finitary infinite pigeonhole principles in Tao's programme of finitisations in analysis; 2. the infinite pigeonhole principle in a quantitative version of Hillam's theorem characterising the convergence of fixed point iterations.
* Financially supported by the Portuguese Fundação para a Ciência e a Tecnologia under grant SFRH/BD/36358/2007 co-financed by Programa Operacional Potencial Humano / Quadro de Referência Estratégico Nacional / Fundo Social Europeu (União Europeia).
• Tuesday, December 6, 2011 at 13:30 in S2|15-201
Paulo Oliva, Queen Mary University of London
On the restricted form of Spector's bar recursion
• November 11, 2011 at 13:30 in S2|15-201
Feng Shiguang, Sun Yat-Sen University, Guangzhou

This talk is about the expressive power and complexity of second-order extended Horn logic and Krom logic.

Descriptive complexity is a bridge between complexity theory and logic. It asks for the logical resources that are necessary to define the computational resources, such as time, space or the amount of hardware, that are necessary to decide a property. Horn and Krom logic have their origins in propositional logic. Some researchers have extended them to second-order logic, yielding the logics SO-HORN and SO-KROM. We define revised and extended versions of these logics and obtain the following results: SO-HORN$^r$ and ($\exists$)SO-KROM$^r$ capture P and NL on ordered structures, respectively, and both SO-EHORN and $\Pi^{1}_{1}$-KROM capture co-NP.
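The propositional origin mentioned above can be made concrete: propositional Horn satisfiability is decidable in polynomial time by unit propagation. A toy sketch of mine (not from the talk), encoding a Horn clause as a pair (body, head) where head None stands for falsum:

```python
def horn_sat(clauses):
    """Decide satisfiability of a set of propositional Horn clauses.

    Each clause is (body, head): the conjunction of the body atoms
    implies head; head None encodes a goal clause (implies falsum).
    Unit propagation computes the least model of the definite clauses
    and checks whether it triggers any goal clause.
    """
    true = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if set(body) <= true:
                if head is None:
                    return False           # falsum derived: unsatisfiable
                if head not in true:
                    true.add(head)
                    changed = True
    return True

print(horn_sat([((), 'p'), (('p',), 'q')]))                   # True
print(horn_sat([((), 'p'), (('p',), 'q'), (('q',), None)]))   # False
```

Capturing P by SO-HORN on ordered structures is, in spirit, a second-order generalization of this tractability.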

• November 4, 2011 at 15:00 in S2|15-201
Jaime Gaspar
Gödel functional interpretation

This talk is a one-hour mini-tutorial introducing from scratch the bare bones of the Gödel functional interpretation.

• November 4, 2011 at 13:30 in S2|15-201
Pavol Safarik
A quantitative nonlinear strong ergodic theorem for Hilbert spaces

Abstract: We give a quantitative version of a strong nonlinear ergodic theorem for (a class of possibly even discontinuous) selfmappings of an arbitrary subset of a Hilbert space due to R.~Wittmann and outline how the existence of uniform bounds in such quantitative formulations of ergodic theorems can be proved by means of a general logical metatheorem. In particular these bounds depend neither on the operator nor on the initial point. Furthermore, we extract such uniform bounds in our quantitative formulation of Wittmann's theorem, implicitly using the proof-theoretic techniques on which the metatheorem is based. However, we present our result and its proof in analytic terms without any reference to logic as such. Our bounds turn out to involve nested iterations of relatively low computational complexity. While in theory these kinds of iterations ought to be expected, so far this seems to be the first occurrence of such a nested use observed in practice.

• October 28, 2011 at 13:30 in S2|15-201
The effective theory on arbitrary Polish spaces

Abstract: We will recall a constructive scheme (which was presented in a previous talk of this seminar) which assigns to every tree T on the naturals a Polish space N(T) which is recursively presented in T. The effective structure of N(T) depends on the combinatorial properties of T, so that for various choices of a recursive tree T the spaces N(T) are not effectively-Borel isomorphic to each other. In particular there are recursively presented Polish spaces which are uncountable sets and yet not effectively-Borel isomorphic to the Baire space. On the other hand every recursively presented Polish space is effectively-Borel isomorphic to one of the spaces N(T), so the latter spaces are the correct effective analogue of the Baire space. We are going to present the recent results concerning the structure of the spaces N(T) and give a characterization of N(T) being effectively-Borel isomorphic to the Baire space in terms of the intrinsic properties of T.
• October 7, 2011 at 13:30 in S2|15-201
Takayuki Kihara, Tohoku University
Non-computability of planar continua

Abstract: Every nonempty open set in a computable metric space contains a computable point. In contrast, the Non-Basis Theorem asserts that a nonempty co-c.e. closed set in Cantor space (hence, even in the real line) can avoid all computable points. Recent exciting progress in Computable Analysis naturally raises the question of whether Non-Basis Theorems exist for connected co-c.e. closed sets. In this talk, we show global and local non-computability results concerning connectivity, local connectivity, simple connectivity, and contractibility for planar continua.

• October 7, 2011 at 10:30 in S2|15-201
Hideki Tsuiki, Kyoto University

Unimodal Maps as Boundary-Restrictions of Two-Dimensional Full-Folding Maps

Abstract: I will start my talk with ideas on computational aspects of topological spaces. Then I will present what I have obtained for dynamical systems through this idea, with an interactive program showing beautiful two-dimensional figures. It is a generalization of the Gray code and the tent function to other topological spaces, and no background knowledge is required to understand it.
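The link between the tent map and the Gray code that the talk generalizes can be made concrete. The following sketch (my own illustration, not material from the talk) checks, with exact rational arithmetic, that the itinerary of the tent map (which half of [0,1] each iterate lands in) reproduces the Gray code g_i = b_i XOR b_{i+1} of the binary expansion of the starting point:

```python
from fractions import Fraction

def tent(x):
    # Tent map T(x) = 2x for x < 1/2, and 2 - 2x otherwise.
    return 2 * x if x < Fraction(1, 2) else 2 - 2 * x

def tent_itinerary(x, n):
    # Record whether each of the first n iterates lies in [1/2, 1].
    bits = []
    for _ in range(n):
        bits.append(1 if x >= Fraction(1, 2) else 0)
        x = tent(x)
    return bits

def binary_digits(x, n):
    # First n binary digits of x in [0, 1).
    bits = []
    for _ in range(n):
        x *= 2
        if x >= 1:
            bits.append(1)
            x -= 1
        else:
            bits.append(0)
    return bits

x = Fraction(1, 3)   # non-dyadic, so every digit is unambiguous
n = 12
b = [0] + binary_digits(x, n)              # b[0] = 0 is the integer part
gray = [b[i] ^ b[i + 1] for i in range(n)]
assert tent_itinerary(x, n) == gray
```

For dyadic rationals one iterate hits 1/2 exactly and the corresponding digit is ambiguous; this is precisely the partiality that Tsuiki's Gray-code representation handles with a "bottom" digit.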
• September 16, 2011 at 13:30 in S2|15-201
Akitoshi Kawamura, University of Tokyo
Distance k-Sectors and zone diagrams

Abstract: The bisector of two nonempty sets P and Q in a metric space is the set of all points with equal distance to P and to Q. A distance k-sector of P and Q, where k is an integer, is a (k-1)-tuple (C_1, C_2, ..., C_{k-1}) such that C_i is the bisector of C_{i-1} and C_{i+1} for every i = 1, 2, ..., k-1, where C_0 = P and C_k = Q. This notion, for the case where P and Q are points in the Euclidean plane, was introduced by Asano, Matoušek, and Tokuyama, motivated by a question of Murata in VLSI design. We prove the existence of a distance k-sector for all k and for every two disjoint, nonempty, closed sets P and Q in Euclidean spaces of any (finite) dimension, or more generally, in proper geodesic spaces (uniqueness remains open). Another generalization of the bisector is the Voronoi diagram, which is the "bisector between n sites". Likewise one can consider the "trisector between n sites", leading to the notion of zone diagrams. Asano, Matoušek, and Tokuyama proved the existence and uniqueness of a zone diagram for point sites in the Euclidean plane, and Reem and Reich showed existence for two arbitrary sites in an arbitrary metric space. We establish (by a simpler proof than Asano et al.'s) the existence and uniqueness of the zone diagram for n positively separated sites in a Euclidean space of arbitrary (finite) dimension. This talk is based on joint work with K. Imai, J. Matoušek, D. Reem and T. Tokuyama.
http://www-imai.is.s.u-tokyo.ac.jp/~kawamura/papers/k-sectors/
http://www-imai.is.s.u-tokyo.ac.jp/~kawamura/papers/zone/
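To make the bisector definition concrete, here is a small illustrative sketch (mine, not from the talk) that approximates the bisector of two finite point sites by sampling a grid; the exact bisector of P = {(0,0)} and Q = {(1,0)} is the vertical line x = 1/2:

```python
import math

def dist_to_set(p, S):
    # Distance from point p to a finite set of points S.
    return min(math.dist(p, q) for q in S)

def approximate_bisector(P, Q, step=0.01, tol=0.005):
    # Sample a grid over [-1, 1]^2 and keep points whose distances to
    # P and to Q agree up to tol: a discrete stand-in for the bisector.
    pts = []
    steps = int(2 / step)
    for i in range(steps + 1):
        for j in range(steps + 1):
            p = (-1 + i * step, -1 + j * step)
            if abs(dist_to_set(p, P) - dist_to_set(p, Q)) <= tol:
                pts.append(p)
    return pts

bis = approximate_bisector([(0.0, 0.0)], [(1.0, 0.0)])
# Every retained grid point should sit close to the line x = 1/2.
assert bis and all(abs(x - 0.5) < 0.02 for x, _ in bis)
```

A distance k-sector would interleave k-1 such curves, each the bisector of its two neighbours, which is exactly why its existence is a fixed-point rather than a pointwise question.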
• August 5, 2011 at 13:30 in S2|15-201
Akitoshi Kawamura, University of Tokyo
Why Lipschitz Continuous Ordinary Differential Equations are Polynomial-Space Complete

Abstract: How complex can the solution of a differential equation be, compared to the function used to describe the equation? This question can be given a precise sense consistent with standard computational complexity theory. In this talk, I will first explain how we define polynomial-time computability of real functions. Then I will discuss the complexity of the solution of initial value problems of differential equations under various assumptions. In particular, an initial value problem given by a polynomial-time computable, Lipschitz continuous function can have a polynomial-space complete solution. This answers Ko's question raised in 1983. Time permitting, I will also mention what results of this kind about the complexity of possible solutions often imply about the hardness of actually finding the solution when the differential equation is given.
• July 15, 2011 at 13:30 in S2|15-201
Jaime Gaspar*
Copies of Classical Logic in Intuitionistic Logic

Abstract: Classical logic is the usual logic in mathematics, and intuitionistic logic is the constructive part of classical logic. Despite their differences, classical logic can be embedded in intuitionistic logic, that is, there are copies of classical logic in intuitionistic logic. The copies found in the literature are all the same. So, is the copy unique? In this talk we answer this question.

* Financially supported by the Portuguese Fundação para a Ciência e a Tecnologia under grant SFRH/BD/36358/2007.
• July 8, 2011 at 13:30 in S2|15-201
Vasco Brattka, Cape Town
Computable Analysis in the Weihrauch Lattice

Abstract: We present recent results on the classification of the computational content of theorems in the Weihrauch lattice. This lattice is a refinement of the Borel hierarchy and makes it possible to classify the computational content of theorems easily. The essential idea is to view theorems as (possibly multi-valued) functions that transform certain input data into certain output data. The question is then which theorems can be computably and continuously transformed into each other. So far, theorems such as the Baire Category Theorem, the Banach Inverse Mapping Theorem, the Intermediate Value Theorem, the Hahn-Banach Theorem, the Bolzano-Weierstrass Theorem and several other theorems from analysis have been classified from this perspective. The results are not only natural refinements of what is known from reverse mathematics, but a classification of the Weihrauch degree of a theorem leads to a relatively complete understanding of the respective theorem regarding its uniform and non-uniform computability properties.
• July 7, 2011 at 13:30 in S2|15-201
Victor Poupet, Marseille
An Elementary Construction of an Aperiodic Tile Set

Abstract: Aperiodic tile sets have been widely studied since their introduction in 1962 by Hao Wang. It was initially conjectured by Wang that it was impossible to enforce the aperiodicity of a coloring of the discrete plane Z^2 with a finite set of local constraints (there either was a valid periodic coloring or none at all). This would imply that it was decidable whether there existed a valid coloring of the plane for a given set of local rules. This last problem was introduced as the domino problem, and eventually proved undecidable by Robert Berger in 1964. In doing so, Berger produced the first known aperiodic tile set: a set of local constraints that admitted valid colorings of the plane, none of them periodic. Berger's proof was later made significantly simpler by Raphael M. Robinson in 1971, who created the set of Wang tiles now commonly known as the Robinson tile set.

Since then many other aperiodic tile sets have been found, not only on the discrete plane, but also on the continuous plane (the most famous being probably Penrose's "kite and darts" set of two tiles).
In this talk, I will describe yet another construction of an aperiodic tile set. Although the resulting tile set will produce tilings quite similar to those of Robinson's (an infinite hierarchical structure of embedded  squares) the local constraints will be presented in a (hopefully) more natural way: we will start from simple geometrical figures and organize them step by step by adding new rules progressively to enforce the global structure.

• May 20, 2011 at 13:30 in S2|15-201
Martin Lotz, Edinburgh
Geometry and Complexity in Optimization

Abstract: The computational complexity of optimization problems depends crucially on the geometry of the problem. In this talk we will discuss geometric measures of condition for optimization problems such as linear programming, and show an application to compressed sensing: the problem of finding sparse solutions to under-determined systems of equations in polynomial time.
• May 13, 2011 at 13:30 in S2|15-201
Christian Ikenmeyer, Universität Paderborn
Introduction to Geometric Complexity Theory and Tensor Rank

Abstract: Mulmuley and Sohoni in 2001 proposed to view the P vs NP and related problems as specific orbit closure problems and to attack them by methods from geometric invariant and representation theory. We merge these ideas with work of Strassen towards the goal of showing lower bounds for the computational complexity of matrix multiplication. This talk explains some key ideas of this approach on a basic level.
• Tuesday, May 10, 2011 at 13:30 in S2|15-201
Jaap van Oosten, Universiteit Utrecht
Another Heyting Algebra for embedding the Turing degrees

Abstract: Simpson has been arguing in recent years that classical degree theory is "dead" and can be "redeemed" by studying the Medvedev lattice. The Medvedev lattice embeds the Turing and Muchnik degrees and its dual is a Heyting algebra. The Medvedev lattice is intimately related to the notion of realizability studied by Kleene and Vesley. In this talk another Heyting algebra will be presented in which the Turing degrees embed: the "lattice of local operators in the effective topos". The structure will be defined in a very concrete way, and some methods for determining equality/inequality between its members will be discussed.
• May 6, 2011 at 13:30 in S2|15-201
Davorin Lesnik

Synthetic topology

Abstract: Synthetic topology is the study of mathematical models in which objects possess an intrinsic topology in a suitable sense and all maps are continuous. Topological definitions and theorems thus become statements about (internal) logic. The upshot is that this approach gives new insight, topological theorems often become easier to prove, and their validity transfers to classical topology.

At the seminar we will discuss the origins, basic ideas and notions of
synthetic topology, and some proofs. Depending on time, we'll speak about the logical and computational aspects of the theory.

• April 21, 2011 at 14:00 in S2|15-201
Jaime Gaspar

Proof mining Hillam's theorem on fixed point iterations

Abstract: Hillam's theorem gives a characterisation of the convergence of fixed point iterations. We present a quantitative version of Hillam's theorem where we give an explicit rate of metastability (a kind of "finitary rate of convergence") for the fixed point iteration. We explain how this result is obtained by methods of proof theory used in the proof mining program, called functional interpretations, which extract computational content from proofs.
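As a toy illustration of the iteration in Hillam's theorem (a sketch of mine, not the quantitative content of the talk): for a continuous selfmap f of [0,1], the fixed point iteration x_{k+1} = f(x_k) converges exactly when the consecutive steps shrink to zero, and for f = cos this is easy to observe numerically:

```python
import math

def fixed_point_iteration(f, x0, n):
    # Iterate x_{k+1} = f(x_k) and return the whole trajectory.
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs

# cos maps [0, 1] into itself; the iterates approach its unique
# fixed point, and the step sizes x_{k+1} - x_k tend to 0, which
# by Hillam's criterion is equivalent to convergence.
xs = fixed_point_iteration(math.cos, 0.5, 100)
steps = [abs(b - a) for a, b in zip(xs, xs[1:])]
assert steps[-1] < 1e-9
assert abs(math.cos(xs[-1]) - xs[-1]) < 1e-8
```

The proof-mined version replaces the qualitative "steps tend to 0" by an explicit rate of metastability, which is the content extracted in the talk.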

• February 18, 2011 at 14:00 in S2|15-201
Arne Seehaus

Is Mathematics "(Big) Science" in the Sense of Thomas Kuhn and Derek John de Solla Price?
• February 11, 2011 at 14:00 in S2|15-201
Alexander Kartzow
First-order logic on generalisations of pushdown graphs

• January 28, 2011 at 14:00 in S2|15-201
Martin Ziegler
Relative Computability and Uniform Continuity of Relations

Abstract: In their search for a topological characterization of relatively (i.e. oracle-) computable relations (aka multivalued functions), Brattka and Hertling (1994) have considered two notions: weak continuity (which is weaker than relative computability) and strong continuity (which is stronger than relative computability). Based on the Henkin quantifier, we propose a notion of uniform continuity and explore its relations to weak/strong continuity as well as to relative computability.

• January 21, 2011 at 14:00 in S2|15-201
Takako Nemoto, Bern
Determinacy and reverse mathematics

Abstract: It is known that determinacy of infinite games characterizes many subsystems of second order arithmetic.

In this talk, we treat stronger systems beyond the so-called "big five" systems, and consider the possibility of determinacy as a tool to investigate strong systems of second order arithmetic.

• January 14, 2011 at 14:00 in S2|15-201
Nowhere dense graph classes, stability, and the independence property

Abstract: Recently, Nešetřil and Ossona de Mendez introduced nowhere dense classes of finite graphs, a generalisation of many natural and important graph classes such as graphs of bounded degree, planar graphs, graphs excluding a fixed minor and graphs of bounded expansion. These graph classes play an important role in algorithmic graph theory, as many computational problems that are hard in general become tractable when restricted to such classes.

We observe that the notion of nowhere density is essentially the earlier stability theoretic notion of superflatness, introduced by Podewski and Ziegler in 1978. With this, we show that for a subgraph-closed class C of (not necessarily finite) graphs, the following are equivalent:

1) C is nowhere dense
2) C is stable
3) C is dependent

The result can be generalised to classes of coloured digraphs. It fails if C is not subgraph-closed. For arbitrary graph classes, dependence is the most general notion, generalising both nowhere density and stability, as well as (locally) bounded clique-width.

This is joint work with Hans Adler.

• November 19, 2010 at 14:00 in S2|15-201
Dr. Vadim Puzarenko, Russian Academy of Sciences, Novosibirsk
A reducibility on structures
• November 12, 2010 at 14:00 in S2|15-201
Carsten Rösnick

Approximate Real Function Maximization and Query Complexity

Abstract: Let MAX,INT : C[0,1] -> R be the functionals for maximization and integration, respectively. I.e.,

MAX(f) = sup_{0 <= x <= 1} f(x) ,      INT(f) = \int_0^1 f(t) dt .

It has been shown by Friedman and Ko (1982) that, restricted to polynomial-time computable functions, both functionals can be approximated up to an error 2^{-n} in space polynomial in n. In the above framework of computation, each computable real function f : [0,1] -> R is essentially represented as a type-2 function (in the sense of Weihrauch), mapping a rational sequence to a rational sequence, i.e., f \in (N->Q) -> (N->Q). This enables us to discuss the computational complexity of a real-valued function on classical Turing machines with oracle access. Here, a TM M computing f is presented with an oracle phi \in (N->Q) for x in [0,1] and a precision n in N, such that M produces the output M^\phi(n) \in Q with

|M^\phi(n) - f(x)| < 2^{-n}.

When it comes to the computation of the functionals MAX and INT, the encoding of an input for a Turing machine becomes even more critical: instead of representing an x \in [0,1] by a fast converging sequence \phi \in (N->Q), we now have to encode a continuous real-valued function f : [0,1] -> R. The first goal of this thesis is to devise, compare and explore different ways of accessing an unknown continuous real function via black-box queries that model the actual computation of f as supported by certified numerics like interval arithmetic or the iRRAM. Building on this, the second goal of this Master's thesis is to determine the query complexity of the MAX-functional with respect to different protocols. In combination with a complexity analysis of the protocols themselves, this should yield more explicit, quantitative and parameterized upper and lower bounds (compared to those presented in the beginning) on the complexity of MAX (and INT).
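To illustrate what a query protocol for MAX can look like, here is a hedged sketch (my own simplification, not one of the thesis's protocols): assuming in addition a known Lipschitz bound on f, which stands in for the extra information certified numerics would provide, a plain grid of oracle queries already approximates MAX(f) to error 2^-n:

```python
def approx_max(f, n, lipschitz):
    # Approximate MAX(f) = sup_{x in [0,1]} f(x) up to error 2^-n by
    # querying f on a grid fine enough for the Lipschitz bound:
    # |f(x) - f(y)| <= L |x - y| limits how far f can rise between
    # two neighbouring sample points.
    eps = 2.0 ** -n
    m = max(1, int(lipschitz / eps) + 1)   # grid of m + 1 points
    return max(f(i / m) for i in range(m + 1))

# f(x) = x (1 - x) has Lipschitz constant 1 on [0, 1] and maximum 1/4.
val = approx_max(lambda x: x * (1 - x), 10, 1.0)
assert abs(val - 0.25) <= 2.0 ** -10
```

The query count here grows like L * 2^n, which is exactly the kind of parameterized bound the thesis aims to sharpen (and, for adaptive protocols, to beat).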

# Logic Seminar 2009, 2010

• August 6, 2010 at 14:00 in S215/201
René Hartung

Computation with infinitely presented groups

Abstract: We give an overview of the current methods for computing with a class of infinitely presented groups, the so-called finitely L-presented groups. These infinite presentations generalize finite presentations and they allow interesting applications. In particular, our methods make it possible to compute the lower central series quotients and the Dwyer quotients of the Schur multiplier.

• July 13, 2010 at 14:00 in S215/201
Russell Miller, City University of New York

Computable fields and their algebraic closures

Abstract: We present recent results about computable fields F, especially those algebraic over their prime subfields, and embeddings of them into their algebraic closures. The discussion begins with the basic definitions and a review of Rabin's Theorem, and continues with more exact computability-theoretic comparisons, by Steiner and by the speaker, among the set of irreducible polynomials in F[X], the set of polynomials there with roots in F, and the image of F within computable presentations of its algebraic closure. Finally, we will discuss the Blum-Shub-Smale model of computation on the real numbers, and explain how it changes the array of tools available for examination of these questions.

• June 25, 2010 at 14:00 in S215/201
Jaime Gaspar, TU Darmstadt

Proof interpretations with truth

Abstract: A proof interpretation I of a theory S in a theory T is a function mapping a formula A of S to a formula A_I(x) of T (with distinguished variables x) verifying: if S ⊢ A, then T ⊢ A_I(t) for some terms t extracted from a proof of A. Proof interpretations are used in:
• consistency results (e.g., if ⊥_I = ⊥, then S ⊢ ⊥ implies T ⊢ ⊥, i.e., S is consistent relative to T);
• conservation results (e.g., if T ⊢ A_I → A for Pi^0_2 formulas A, then S ⊢ A implies T ⊢ A, i.e., S is conservative over T for Pi^0_2 formulas);
• unprovability results (e.g., if there are no terms t such that T ⊢ A_I(t), then S ⊬ A);
• closure under rules (e.g., if S = T, (∃y A(y))_I(x) = A_I(x) and T ⊢ A_I(x) → A(x), then T ⊢ ∃y A(y) implies T ⊢ A(t) for some term t, i.e., T has the existence property);
• extracting computational content from proofs (e.g., extracting t in the previous point).

Closure under rules needs T ⊢ A_I(x) → A, which can be achieved by:
• upgrading T to the characterisation theory CT that proves ∃x A_I(x) ↔ A;
• or hardwiring truth in I, obtaining I_t verifying T ⊢ A_{I_t}(x) → A.

The first option does not work if:
• CT is classically inconsistent (e.g., bounded proof interpretations);
• or we want applications to theories weaker than CT.

So we turn to the second option and present a method to hardwire truth (in proof interpretations of Heyting arithmetic satisfying some mild conditions) by defining:
• a function c that replaces each subformula A of a formula by A ∧ A^c, where A^c is a "copy" of A;
• an "inverse" function c^{-1} that replaces the "copies" A^c by the "originals" A;
• I_t = c^{-1} ∘ I ∘ c.

As examples we hardwire truth in:
• modified realizability;
• the Diller-Nahm functional interpretation;
• bounded modified realizability;
• the bounded functional interpretation;
• the slash.

This is based on joint work with Paulo Oliva.

•  June 18, 2010 at 14:00 in S215/201
Prof. Hajime Ishihara, Japan Advanced Institute of Science and Technology

The monotone completeness theorem in constructive reverse mathematics

Abstract: We investigate the monotone completeness theorem (MCT): every bounded monotone sequence of real numbers converges. We work within a subsystem EL of elementary analysis as a formal framework of constructive reverse mathematics, aiming at classifying various theorems in intuitionistic, constructive recursive and classical mathematics. We distinguish a Cauchy sequence with modulus from a Cauchy sequence without modulus.

We give a logical principle which is stronger than LPO, and show that it is equivalent to the statement:

(1) Every bounded monotone sequence of real numbers is a Cauchy sequence without modulus.

We also give a function existence axiom which is weaker than the number-number choice for Pi^0_1 formulas, and show that it is equivalent to the statement:

(2) Every Cauchy sequence without modulus is a Cauchy sequence with modulus.

Therefore MCT is divided into the logical statement (1), the function existence statement (2), and the Cauchy completeness of the reals which is constructively provable without countable choice.


• June 16, 2010 at 14:00 in S215/201
Michal Stronkowski, Charles University, Prague

Finite axiomatization of quasivarieties of relational structures

Abstract: Much attention has been paid to the finite axiomatization of varieties and quasivarieties of algebras in the past, but the corresponding problem for relational structures has appeared only sporadically. I will review what is known about this and add one new theorem: we prove that the graphs of monoids or groups (viewed as relational structures) almost always generate quasivarieties that are not finitely axiomatizable.
• June 15, 2010 at 14:00 in S215/201
Prof. Jimmie Lawson, Louisiana State University, Baton Rouge

Clifford Algebras, Möbius Transformations, Vahlen Matrices, and Bruck Loops

• June 4, 2010 at 14:00 in S215/201
Prof. Klaus Meer, Brandenburgische TU Cottbus

On the expressive power of CNF formulas of bounded Tree- and Clique-Width

Abstract: We study the expressive power of certain families of polynomials over a field K. The latter are defined via Boolean formulas in conjunctive normal form. We attach in a natural way a graph to such a formula and study the resulting polynomial families when certain parameters of that graph, like tree- and clique-width, are assumed to be bounded.

• May 21, 2010 at 14:00 in S215/201
Priv.-Doz. Dr. Norbert Müller, Universität Trier

Exact Real Arithmetic: Real Numbers from the Perspective of Computer Science

Abstract: The set of real numbers is, unlike the algorithmically much more tractable natural numbers, uncountable. As a consequence, even defining the notion of "computability" for real numbers is considerably more involved than the classical definition of computability for natural numbers via Turing machines.

Implementations of this notion of computability are often referred to as "exact real arithmetic"; they are usually based on interval arithmetic with multiple-precision numbers, complemented by automated precision control. This makes numerical algorithms possible that would not be meaningful in the finite world of double-precision numbers.

In the talk we present some of the principles of such an implementation and, as examples, consider its application (and behaviour) on the logistic map and on simple differential equations.
• February 12, 2010 at 14:00 in S215/201
Jaime Gaspar, TU Darmstadt

Slash and completeness

Abstract: Gödel's incompleteness theorems dealt a fatal blow to Hilbert's program. Despite this, there were some attempts to salvage the program. One of them, by Hilbert himself, was to add the omega-rule (if A(n) is provable for each fixed n, then "for all n, A(n)" is provable) to Peano arithmetic. This rule evades Gödel's incompleteness and gives a complete arithmetical theory. Proof interpretations are proof-theoretic tools to prove consistency results, conservation results and closure under rules, and to extract computational content from proofs. The slash is a very simple toy case of a proof interpretation. However, the omega-rule boosts the slash, resulting in some interesting applications. We will talk about one of them: a very simple proof of the completeness of Peano arithmetic plus the omega-rule.

•  POSTPONED!

Martin Otto, TU Darmstadt
Acyclicity in hypergraph covers and applications

Abstract: Which degrees of acyclicity can be achieved in FINITE hypergraph covers? While actual acyclicity can always be had in generally infinite unfoldings, finite covers can only guarantee certain bounded forms of acyclicity at best. I want to discuss new constructions of weakly N-acyclic covers (joint work with Vince Barany and Georg Gottlob, Oxford) and N-acyclic covers with a view to their use in the finite model theory of guarded logics.

•  January 29, 2010 at 14:00 in S215/201
Martin Ziegler, TU Darmstadt

Expressiveness and Computational Complexity of Euclidean Quantum Logic

Abstract: Quantum logic generalizes, and in dimension one coincides with, Boolean logic. The satisfiability problem for quantum logic formulas is shown to be NP-complete in dimension two. We then extend these considerations to three- and higher-dimensional Euclidean spaces R^d and C^d. For fixed d>2, quantum satisfiability turns out to be polytime-equivalent to the real feasibility of a multivariate quartic polynomial equation: a well-known problem lying (probably strictly) between NP and PSPACE. We finally investigate the problem over INdefinite finite dimensions and relate it to the real feasibility of quartic NONcommutative *-polynomial equations.

•  November 27, 2009 at 14:00 in S215/201
Peter Scheiblechner, Purdue

Comparison of Complexity over the Real vs. Complex Numbers

Abstract: We compare the complexity of certain problems over the reals with their corresponding versions over the complex numbers. We argue in particular that the problem of deciding whether a system of (real) polynomial equations has a real solution is strictly harder than asking for a complex solution, since it cannot be solved by a complex BSS-machine. On the other hand, we describe a family of problems whose real versions are, in the model of algebraic decision trees, strictly easier than their complex versions. To this end we prove a new lower bound for the decision complexity of a closed algebraic set X in C^n in terms of the sum of its (compactly supported) Betti numbers b_c(X), which is for the first time better than logarithmic in b_c(X).

back to the top of the page

•  November 13, 2009 at 14:00 in S215/201
Matthias Schroeder, Munich

Topology in computable analysis

Abstract: Computable Analysis studies computability over the real numbers and related spaces. Since computable functions are continuous in some sense, topology plays an important role in this theory. The cartesian closed category QCB of quotients of countably based topological spaces turns out to be an appropriate category for modelling the topological aspects of higher type computation.

In this talk, we discuss topological properties of the Kleene-Kreisel continuous functionals and their surprising relationship to an open problem in exact real-number computation. This problem is whether two approaches in functional programming on higher type computability over the reals coincide.

We show that the sequential topology on the space N^(N^N), which is the space of Kleene-Kreisel continuous functionals of type 2, is neither zero-dimensional nor regular. Moreover, we establish non-regularity of further function spaces formed in the cartesian closed category of sequential spaces. This is one of the categories from which QCB inherits its cartesian closed structure.

•  November 6, 2009 at 14:00 in S215/201
Stephane Le Roux, LIX, Ecole Polytechnique

Sequential game theory: a formal and abstract approach

Abstract: In a game from game theory, a Nash equilibrium (NE) is a combination of one strategy per agent such that no agent can increase his payoff by unilaterally changing his strategy. Kuhn proved that all (tree-like) sequential games have NE. Osborne and Rubinstein abstracted over these games and Kuhn's result: they proved a *sufficient* condition on agent *preferences* for all games to have NE. I will present a *necessary and sufficient* condition, thus accounting for the game-theoretic frameworks that were left aside. The proof is *formalised* using Coq, and contrary to usual game theory it adopts an inductive approach to trees for definitions and proofs. By rephrasing a few game-theoretic concepts, by ignoring useless ones, and by characterising the proof-theoretic strength of Kuhn's and Osborne and Rubinstein's developments, the talk also intends to clarify sequential game theory.

Then I will sketch an alternative proof. Both proofs so far pre-process the preferences by topological sorting, which is surprising since it decreases the number of NE. Finally I will sketch two other proofs that pre-process the preferences only by transitive closure. The four proofs are all constructive and by induction, and I will compare them when seen as algorithms. The four proofs also constitute diverse viewpoints on the same concepts, the trial of diverse techniques against the same problem, and reusable techniques for similar structures.
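Kuhn's result lends itself to exactly the inductive treatment the talk advocates: on a finite game tree, backward induction builds an equilibrium by letting the owner of each node pick the child whose induced outcome he prefers. A minimal sketch (with a hypothetical tree encoding of my own, not the Coq formalization from the talk):

```python
def backward_induction(node):
    # node is either ("leaf", payoffs) with payoffs a dict agent -> number,
    # or ("node", agent, children).  Returns the payoff profile of the
    # strategy profile built by backward induction; by Kuhn's theorem this
    # profile is a Nash equilibrium of the whole game.
    if node[0] == "leaf":
        return node[1]
    _, agent, children = node
    outcomes = [backward_induction(c) for c in children]
    # The owner of this node picks the subtree whose outcome he prefers.
    return max(outcomes, key=lambda o: o[agent])

# A two-stage game: agent "A" moves first, then agent "B".
game = ("node", "A", [
    ("node", "B", [("leaf", {"A": 2, "B": 0}), ("leaf", {"A": 3, "B": 1})]),
    ("node", "B", [("leaf", {"A": 4, "B": 0}), ("leaf", {"A": 0, "B": 2})]),
])
assert backward_induction(game) == {"A": 3, "B": 1}
```

Note that A forgoes the branch containing his best payoff 4, anticipating B's reply: this anticipation is what makes the induction hypothesis on subtrees do all the work.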


•  October 9, 2009 at 14:00 in S215/201
Jaime Gaspar, TU Darmstadt

Proof interpretations with truth

Abstract: Proof interpretations are proof-theoretic tools to extract computational content (such as computable choice functions) from proofs. In some cases this only works (at least without strengthening the theories involved) for proofs of theorems of a particular logical form, but we wish to extract computational content from proofs of arbitrary theorems. Interpretations with truth have been developed to cope with this. We present two new methods to hardwire truth and apply them to new proof interpretations.

In a first part we make a brief introduction to proof interpretations and the extraction of computational content from proofs. Then in a second part we present our two methods. Finally in a third part we apply them in some case studies.

This is joint work with Paulo Oliva.

•  September 18, 2009 at 14:00 in S215/201

Real Computation with Least Discrete Advice: A Complexity Theory of Nonuniform Computability

Abstract: It is folklore, particularly in the numerical and computer sciences, that instead of solving some general problem f:A->B outright, one should take advantage of additional structural information about the input x in A (that is, any kind of promise that x belongs to a certain subset A' of A). Some examples from real number computation show that such discrete advice can even make the difference between computability and uncomputability. We turn this into a both topological and combinatorial complexity theory of information, investigating for several practical problems how much advice is necessary and sufficient to render them computable. Specifically, finding a nontrivial solution to a homogeneous linear equation A*x=0 for a given singular real NxN-matrix A is possible when knowing rank(A)=0,1,...,N-1; and we show this to be best possible. Similarly, diagonalizing (i.e. finding a BASIS of eigenvectors of) a given real symmetric NxN-matrix is possible when knowing the number of distinct eigenvalues: an integer between 1 and N (the latter corresponding to the nondegenerate case). And again we show that N-fold (i.e. roughly log N bits of) additional information is indeed necessary in order to render this problem (continuous and) computable; whereas for finding SOME SINGLE eigenvector of A, providing the truncated binary logarithm of the least-dimensional eigenspace of A (i.e. Theta(log N)-fold advice) is sufficient and optimal.
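The linear-algebra example can be made concrete. A minimal sketch (mine, not the paper's algorithm): once A is promised to be singular, exact Gaussian elimination over the rationals always finds a nontrivial solution of A x = 0; no advice about rank(A) is needed in this finite exact-arithmetic setting, whereas the subtlety in the talk concerns real matrices given only by approximations:

```python
from fractions import Fraction

def kernel_vector(A):
    # Exact Gaussian elimination over the rationals: returns a
    # nontrivial x with A x = 0, assuming A is square and singular.
    n = len(A)
    M = [[Fraction(v) for v in row] for row in A]
    pivots = {}                      # pivot column -> pivot row
    r = 0
    for c in range(n):
        row = next((i for i in range(r, n) if M[i][c] != 0), None)
        if row is None:
            continue                 # no pivot in this column
        M[r], M[row] = M[row], M[r]
        M[r] = [v / M[r][c] for v in M[r]]
        for i in range(n):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots[c] = r
        r += 1
    # Singularity guarantees a pivot-free column; set it to 1 and
    # read the pivot variables off the reduced rows.
    free = next(c for c in range(n) if c not in pivots)
    x = [Fraction(0)] * n
    x[free] = Fraction(1)
    for c, pr in pivots.items():
        x[c] = -M[pr][free]
    return x

A = [[1, 2, 3], [2, 4, 6], [1, 1, 1]]   # rank 2, hence singular
x = kernel_vector(A)
assert any(v != 0 for v in x)
assert all(sum(a * v for a, v in zip(row, x)) == 0 for row in A)
```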

• September 4, 2009 at 14:00 in S215/201

Effective Descriptive Set Theory and Applications in Analysis

Abstract: Descriptive Set Theory is the area of mathematics which studies the structure of definable sets in separable and complete metric spaces. A central part of this area is Effective Descriptive Set Theory (Effective Theory, for short), which uses ideas from Logic and in particular from Recursion Theory. This area provides a deeper understanding of the subject and has applications in Analysis. We begin this lecture with a brief summary of the basic concepts and we give a new approach to Effective Theory which yields some surprising results. Then we present some connections with General Topology; all theorems we talk about involve notions from both areas. Finally we give some results in Banach space theory. Here all theorems involve only classical (i.e., non-effective) notions, but still Effective Theory is essential for their proofs.

• August 14, 2009 at 14:00 in S215/201
Richard Garner, Cambridge

Ionads: a generalised notion of topological space

• July 17, 2009 at 14:00 in S215/201
Martin Otto, TU Darmstadt

Acyclicity in hypergraph covers

• July 10, 2009 at 14:00 in S215/201
Pavol Safarik

On the Computational Content of Bolzano-Weierstraß

Abstract: We will apply the methods developed in the field of 'proof mining' to the Bolzano-Weierstraß theorem BW and calibrate the computational contribution of using this theorem in a proof of a combinatorial statement. Using a simple proof based on weak König's lemma and arithmetical comprehension, we provide an explicit solution of the Gödel functional interpretation as well as the monotone functional interpretation of BW for the product space \prod_{i \in N} [-k_i, k_i] for a given sequence of rational numbers (k_i) \subseteq Q (with the standard product metric). In fact, we will use it to get optimal program and bound extraction theorems for proofs based on fixed instances of BW, i.e. for BW applied to fixed sequences in \prod_{i \in N} [-k_i, k_i].

• July 3, 2009 at 14:00 in S215/201
Anuj Dawar, Cambridge

Logics with Rank Operators

Abstract: It is a long-standing open problem in descriptive complexity theory whether there is a logic that expresses exactly the polynomial-time decidable properties of unordered structures. It has been known since the early 90s that fixed-point logic with counting (FPC) is not sufficient, and there have been several proposals for more expressive logics. Taking our inspiration from recent results that show that natural linear-algebraic operations are not definable in FPC, we propose FPR, an extension of fixed-point logic with an operator for matrix rank. We show that this is strictly more expressive than FPC and can express the polynomial-time problems that have been shown to be inexpressible in FPC. We also show that FO+R, the extension of first-order logic with rank operators, is surprisingly expressive. It can express some forms of transitive closure and, on ordered structures, captures the complexity classes Mod_p L.

In this talk, I will give the history and general background to the question and explain the context of the new results. I will also point to several interesting open problems. (Joint work with Martin Grohe, Bjarki Holm and Bastian Laubner).


# Contact

Department of Mathematics
Logic Group

S2|15
Schloßgartenstraße 7
GERMANY

Phone: +49-(0)6151-164686
Fax:     +49-(0)6151-163317

Secretaries' office: S2|15-206
Barbara Bergsträßer, Betina Schubotz
Opening hours: Mo.-Fr. 10:00-15:00