SS 2014

  • Monday, September 29, 2014, at 14:30 in S2|15-234
    Margarita Korovina, Institute of Informatics Systems, Novosibirsk
    Index sets for continuous data
    We will talk about our recent research on the theory of index sets in the context of computable analysis, considering classes of effectively open sets and computable real-valued functions. First, we construct principal computable numberings for effectively open sets and computable real-valued functions. Then we calculate the complexity of index sets for important problems that naturally arise in continuous constraint solving, such as root realisability, set equivalence and inclusion, and function equivalence. Using the developed techniques, we give a direct proof of the generalised Rice-Shapiro theorem for effectively open subsets of Euclidean spaces and present an analogue of Rice's theorem for computable real-valued functions. We illustrate how index sets can be used to estimate the complexity of continuous data problems in the setting of the Kleene-Mostowski arithmetical hierarchy.
  • Friday, July 18, 2014, at 13:30 in S2|15-201
    Matthew de Brecht, CiNet, NICT, Osaka
    Compact saturated subsets of quasi-Polish spaces
    Hyperspaces of compact sets have many interesting applications, ranging from object-classification algorithms in computer vision based on the Hausdorff metric, to the modeling and analysis of non-deterministic programs in domain theory. However, the analysis of these hyperspaces has been mostly divided into research on either complete metric spaces or on continuous domains.
    Here we will aim for a common framework by initiating research on the hyperspaces of compact subsets of quasi-Polish spaces. As a first step in this direction, we will show that the hyperspace of saturated compact subsets of a quasi-Polish space is also quasi-Polish when equipped with the upper Vietoris topology. Furthermore, we will show that the Scott-open filters (which can be identified with the compact saturated sets via the Hofmann-Mislove Theorem) form a basis for the Scott-topology on the open set lattice of a quasi-Polish space.
    (This talk contains results from joint work with M. Schröder and V. Selivanov).
  • Friday, July 11, 2014, at 13:30 in S2|15-201
    Takayuki Kihara, Japan Advanced Institute of Science and Technology (JAIST)
    Scott ideals in infinite dimensional topology
    A topological space X is piecewise homeomorphic to Y if there are countable covers (X_i), (Y_i) of X and Y such that each piece X_i is homeomorphic to Y_i. Every (trans-)finite dimensional uncountable Polish space is piecewise homeomorphic to Cantor space, and most known properly infinite dimensional Polish spaces are piecewise homeomorphic to the Hilbert cube. Motto Ros et al. asked whether there is another piecewise homeomorphism type of an uncountable Polish space. We solve this problem by using computability theory with the help of the notion of a Scott ideal (an omega-model of RCA_0 + WKL). We will also see that our space is piecewise homeomorphic to R. Pol's counterexample to P. Alexandrov's problem (a weakly infinite dimensional compact metrizable space that is not decomposable into countably many finite dimensional subspaces). By extending our idea, we also show that there are uncountably many piecewise homeomorphism types of compact metrizable spaces.
  • Thursday, July 10, 2014, at 11:00 in S2|15-201
    Prof. Dr. Walter Roth, University of Brunei Darussalam
    Locally Convex Cones: An Introduction
    This is an introduction to the foundations of the theory of locally convex cones, a generalization of the established and well-developed theory of locally convex topological vector spaces. The motivation comes from concepts that are algebraically close to the structure of vector spaces, but in which negatives of elements do not always exist, so that scalar multiplication is defined only for positive scalars. Examples include classes of functions that may take the value infinity, or collections of convex subsets of a vector space. If the cancellation law fails in such structures, they cannot be embedded into vector spaces, and results and techniques for vector spaces are not directly applicable. The theory of locally convex cones captures such examples and allows the development of a duality theory similar to that for locally convex vector spaces.
  • Friday, June 27, 2014, at 13:30 in S2|15-234
    Professor Dana S. Scott, UC Berkeley
    Stochastic Lambda-Calculus
    It is shown how the operators in the "graph model" for lambda-calculus (which can function as a programming language for Recursive Function Theory) can be expanded to allow for "random combinators". The result then is a model for a new language for random algorithms.
  • Friday, June 13, 2014, at 13:30 in S2|15-201
    Yasuyuki Tsukamoto, Kyoto University, Japan
    Imaginary Hypercubes
    Imaginary cubes are three-dimensional objects that have square projections in three orthogonal directions, just as a cube has. We study their higher-dimensional extensions. In the n-dimensional case, we say that an object A is an imaginary n-cube of an n-cube C if A has (n-1)-cube projections in n orthogonal directions, just as C has. It is known that if an object A is an imaginary 3-cube of two different 3-cubes, then A is a subset of the intersection of the two cubes, which share one of their diagonals. We show that this proposition remains true in higher dimensions, and that this situation cannot occur in five or more dimensions. We also talk about fractal imaginary n-cubes induced by Latin (n-1)-cubes.
  • Friday, May 16, 2014, at 13:30 in S2|15-201
    Felix Canavoi, RWTH Aachen
    Defining Winning Strategies
    We study the definability of winning strategies for infinite games in least fixed-point logic. In order to do that we specify what is meant by defining winning strategies in a logic. We show that the problems of defining winning regions and defining winning strategies are not always equivalent, although they are closely connected. We show that winning strategies are definable for reachability games, safety games, and parity games with a bounded number of priorities. Furthermore, we explore the case of parity games with an unbounded number of priorities. For this class, winning strategies are not LFP-definable in general, but on finite game graphs winning strategies are LFP-definable if, and only if, they are computable in polynomial time.

WS 2013/14

  • Friday, March 28, 2014, in S2|15-201 at 13:30
    Arno Pauly, Clare College Cambridge, UK
    On function spaces and polynomial-time computability
    In the context of second-order polynomial-time computability, we prove that there is no general function space construction. We proceed to identify restrictions on the domain or the codomain that do provide a function space with polynomial-time function evaluation containing all polynomial-time computable functions of that type. As side results we show that a polynomial-time counterpart to admissibility of a representation is not a suitable criterion for natural representations, and that the Weihrauch degrees embed into the polynomial-time Weihrauch degrees.
  • Friday, February 21, 2014, in S2|15-201 at 13:30
    Alexey Ostrovskiy, Universität der Bundeswehr München
    An Alternative Approach to the Decomposition of Functions
    Let a function f:X->Y with compact fibers take any open set into a union of an open and a closed set. Then there exist subsets T_n of X (n = 1,2,...) such that every restriction f|T_n is an open or closed function onto some subset Y_n, and the Y_n (n = 0,1,...) cover Y.
    This result is related to the problem of preservation of Borel classes by open-Borel functions and to Luzin's question about decomposition of Borel-measurable functions into countably many continuous ones.
  • Friday, February 14, 2014, in S2|15-201 at 13:30
    Vassilios Gregoriades, TU Darmstadt
    A recursion-theoretic view of the decomposability conjecture
    The decomposability conjecture states that every function from an analytic space to a separable metric space, for which the preimage of a Σ^0_{m+1} set is a Σ^0_{n+1} set (1 <= m <= n), is decomposable into countably many Σ^0_{n-m+1}-measurable functions on Π^0_n domains. In this talk we present recent results on this problem based on work of Kihara and the speaker. The key tools for the proofs come from recursion theory and effective descriptive set theory, and include a lemma by Kihara on canceling out Turing jumps, as well as Louveau separation.
  • Friday, February 7, 2014, in S2|15-201 at 13:30
    Prof. Anuj Dawar, University of Cambridge, UK
    Symmetric Circuits and Fixed-Point Logics
    We study queries on graphs (and other relational structures) defined by families of Boolean circuits that are invariant under permutations of the vertices. In particular, we study circuits that are symmetric, that is, circuits whose invariance is explicitly witnessed by automorphisms of the circuit induced by the permutation of their inputs. We show a close connection between queries defined on structures by uniform families of symmetric circuits and definability in fixed-point logics. Joint work with Matthew Anderson.
  • Friday, January 24, 2014, in S2|15-201 at 13:30
    Prof. Keita Yokoyama, JAIST, Japan
    On the second-order categoricity of the natural number system
    It is important in the foundations of mathematics that the natural number system is characterizable as a system of 0 and a successor function by second-order logic. In other words, the following second-order categoricity theorem of Dedekind holds: every Peano system (P,e,F) is isomorphic to the natural number system (N,0,S). In this talk, I will investigate Dedekind's theorem and other similar statements. We will first do reverse mathematics over RCA_0, and then weaken the base theory. This is joint work with Stephen G. Simpson and Leszek Kolodziejczyk.
  • Friday, December 20, 2013, in S2|15-201 at 13:30
    Yasuyuki Tsukamoto, Kyoto University
    Minimal Oriented Matroids with Disconnected Realization Space
  • Tuesday, December 17, 2013, in S2|15-201 at 13:00
    Yasuyuki Tsukamoto, Kyoto University
    A Hausdorff space with a strongly independent dyadic subbase
    Let X be a second countable Hausdorff space. Two points of X can be separated by a pair of open subsets of X that are exteriors of each other. A dyadic subbase of X is a subbase composed of countably many such pairs of open subsets. A dyadic subbase induces a domain representation of the topological space X. I will talk about the properties of this representation and give a strange example for it.
  • Friday, December 13, 2013, in S2|15-201 at 13:30
    Prof. Dr. Yichuan Yang, Beihang University, Beijing, China, currently at the University of Stuttgart
    Non-commutative logical algebras and algebraic quantales
    Quantum B-algebras are partially ordered implication algebras which are equivalent to subreducts of quantales. They were introduced axiomatically in [Rump-Yang: Ann. Pure Appl. Logic 165 (2014)]. They provide a unified semantics for non-commutative algebraic logic, and they cover the vast majority of implication algebras. Partially defined products on algebras related to effect algebras are handled efficiently in this way. The results of Galatos-Jonsson-Tsinakis on the splitting of generalized BL-algebras into a semidirect product of a po-group operating on an integral residuated poset are extended to a characterization of twisted semidirect products of a po-group by a quantum B-algebra.
  • Friday, December 6, 2013, in S2|15-201 at 13:30
    Stéphane Le Roux, TU Darmstadt
    Lazy beats crazy: a spurious yet mathematical justification for laissez-faire
    In perfect-information games in extensive form, common knowledge of rationality triggers backward induction, which yields a Nash equilibrium. That result assumes much about the players' knowledge, and it holds only for a subclass of the games in extensive form. Alternatively, this article defines a non-deterministic evolutionary process, by myopic and lazy improvements, that settles exactly at Nash equilibria (in extensive form). Importantly, the strategy changes that the process allows depend only on the structure of the game tree; they are independent of the actual preferences. Nonetheless, the process terminates if the players have acyclic preferences; and even if some preferences are cyclic, the players with acyclic preferences eventually stop improving! This result is then generalised to games played on DAGs or infinite trees, and it is also refined by assigning probabilities to the process and perturbing it.
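    The backward-induction procedure mentioned in the abstract can be sketched as follows (a generic textbook-style sketch, not code from the talk; the tree encoding is my own): each internal node is owned by a player, who selects the child whose solved payoff vector is best in that player's coordinate.

```python
def backward_induction(node):
    """Solve a finite two-player perfect-information game tree.

    A node is either a leaf ('payoffs', (p0, p1)) or an internal node
    ('player', i, children); player i picks the child whose solved
    payoff vector is largest in coordinate i.
    Returns (payoff_vector, chosen_path).
    """
    if node[0] == 'payoffs':
        return node[1], []
    _, player, children = node
    solved = [backward_induction(child) for child in children]
    best = max(range(len(children)), key=lambda j: solved[j][0][player])
    payoffs, path = solved[best]
    return payoffs, [best] + path

# A two-move example: player 0 can stop (payoffs (2, 0)) or continue,
# after which player 1 chooses between (1, 3) and (0, 0).
tree = ('player', 0, [('payoffs', (2, 0)),
                      ('player', 1, [('payoffs', (1, 3)),
                                     ('payoffs', (0, 0))])])
# Player 1 would pick (1, 3), leaving player 0 only 1; so player 0 stops.
```

    The resulting strategy profile is a Nash equilibrium, as the abstract recalls; the lazy-improvement process of the talk reaches such equilibria without assuming the players reason this way.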
  • Friday, November 22, 2013, in S2|15-201 at 13:30
    Anna-Sophie Heinemann, University of Paderborn
    Boole and Jevons on Logical Method
    Today's notion of formal logic was coined in the 19th century. In one sense, this concerns the form of logic, that is, its shape and operational techniques. But it also relates to the conception of logical form: the scope of meanings attributable to propositional forms. The same understanding of logical form allows for different methods that may arrive at the same results. However, the techniques differ in some important respects, the most obvious of which seems to lie in an appeal to the notions of functions and variables. In this talk, two approaches will be illustrated by the examples of George Boole's (quasi-)algebraic calculus of logic and what William Stanley Jevons terms a method of 'indirect deduction', to be accomplished by mechanical devices such as his 'logical piano'.

  • Friday, October 11, 2013, in S2|15-201 at 13:30
    Dr. Rupert Hölzl, Universität der Bundeswehr München
    Absolutely undecidable sets
    An infinite binary sequence A is absolutely undecidable if it is impossible to compute A on a set of positions of positive upper density. Absolute undecidability is a weakening of bi-immunity. Downey, Jockusch and Schupp asked whether, unlike the case for bi-immunity, there is an absolutely undecidable set in every non-zero Turing degree. We provide a positive answer to this question by applying techniques from coding theory. We show how to use Walsh-Hadamard codes to build a truth-table functional which maps any sequence A to a sequence B, such that given any restriction of B to a set of positive upper density, one can recover A. This implies that if A is non-computable, then B is absolutely undecidable. Using a forcing construction, we show that this result cannot be strengthened in any significant fashion.
    (Joint work with Laurent Bienvenu and Adam R. Day)
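    The Walsh-Hadamard code at the heart of the construction can be sketched as follows (a generic encoder for illustration; the truth-table functional of the talk is not reproduced here). A k-bit message m is mapped to the 2^k evaluations of the inner product <m, x> mod 2; any two distinct codewords then disagree on exactly half of all positions, which is what makes recovery from a set of positions of positive upper density conceivable.

```python
from itertools import product

def wh_encode(message):
    """Walsh-Hadamard encoding: a k-bit message maps to a codeword of
    length 2^k whose entry at index x is the inner product <m, x> mod 2."""
    return [sum(mi & xi for mi, xi in zip(message, x)) % 2
            for x in product((0, 1), repeat=len(message))]

# Linearity gives the distance property: for m != m', the codewords of m
# and m' differ exactly where <m + m', x> = 1, i.e. at 2^(k-1) positions.
```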
  • Friday, October 4, 2013, in S2|15-201 at 13:30
    Prof. Paul Potgieter, University of South Africa
    The theory of complex oscillations
    Complex oscillations, also known as algorithmically random Brownian motion, were first formulated by Asarin and Pokrovskii. The work of Fouche established a bijection between the sets of complex oscillations and Kolmogorov-Chaitin random strings. In this talk, we shall discuss the foundations and basic tools of the theory (such as effectively generated algebras) and survey some of the subsequent work, much of which concerns itself with the fractal properties of complex oscillations.

SS 2013

  • Friday, September 6, 2013, at 13:30 in S2|15-201
    Johanna Sokoli, TU Darmstadt
    Satisfiability of cross product terms is complete for real nondeterministic polytime Blum-Shub-Smale machines
    Nondeterministic polynomial-time Blum-Shub-Smale machines over the reals give rise to a natural discrete complexity class between NP and PSPACE. Several problems have been shown complete (under many-one reduction by polynomial-time Turing machines) for this class -- although mostly problems from real algebraic geometry / polynomial systems (e.g. by Pascal Koiran); notable exceptions being "stretchability of pseudolines" (Peter Shor), "realizability of chirotopes" (Nikolai Mnev/Juergen Richter-Gebert), and "satisfiability in quantum logic". We exhibit a new one based on questions about expressions built from cross products only.
    http://mcu2013.ini.uzh.ch/Program.html
  • Monday, August 19, 2013, at 14:00 in S2|15-201
    Ruy de Queiroz, Federal University of Pernambuco, Brazil
    Propositional equality, identity types, and direct computational paths
    In proof theory the notion of canonical proof is rather basic, and it is usually taken for granted that a canonical proof of a sentence must be unique up to certain minor syntactical details (such as, e.g., change of bound variables). When setting up a proof theory for equality one is faced with a rather unexpected situation: there may not be a unique canonical proof of an equality statement. Indeed, in a (1994--95) proposal for the formalisation of proofs of propositional equality in the Curry--Howard style, we have already uncovered such a peculiarity. Totally independently, and in a different setting, Hofmann & Streicher (1994) have shown how to build a model of Martin-Löf's Type Theory in which uniqueness of canonical proofs of identity types does not hold. The intention here is to show that, by considering proofs as sequences of rewrites and substitutions, it becomes rather natural that two (or more) distinct proofs may yet be canonical, with none to be preferred over another. By looking at proofs of equality as rewriting (or computational) paths, this approach is in line with the recently proposed connections between type theory and homotopy theory via identity types, since elements of identity types will be, concretely, paths (or homotopies).
    (Joint work with Anjolina G. de Oliveira)
  • Friday, July 5, 2013, at 13:30 in S2|15-201
    Aziz Erkal Selman, RWTH Aachen
    On Fractional Isomorphisms
    An isomorphism from a graph G to a graph H can be seen as a permutation matrix X satisfying the equation AX = XB, where A and B are the adjacency matrices of G and H, respectively. This means that the Graph Isomorphism Problem can be stated in terms of 0-1 integer linear programming. The solutions to the relaxation of this integer program are called fractional isomorphisms. In 1994, Ramana, Scheinerman and Ullman proved that there is a fractional isomorphism between two graphs if and only if the graphs cannot be distinguished by naive vertex classification, which is a widely used, intuitive method for isomorphism testing. I am going to present a new elementary proof of this theorem. There are other characterisations of the same concept, including a logical and a game characterisation. I will briefly talk about them and give a new combinatorial characterisation: two graphs are fractionally isomorphic if and only if there is a so-called migration from one to the other.
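    Naive vertex classification (colour refinement) can be sketched in a few lines (a from-scratch illustration with adjacency-dict graphs; the helper names are my own). By the Ramana-Scheinerman-Ullman theorem recalled in the abstract, the check below succeeds exactly when a fractional isomorphism exists.

```python
from collections import Counter

def stable_coloring(adj):
    """Refine vertex colours by the multiset of neighbour colours
    until the colouring stabilises (naive vertex classification)."""
    colors = {v: 0 for v in adj}
    while True:
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        rename = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: rename[sig[v]] for v in adj}
        if new == colors:
            return colors
        colors = new

def fractionally_isomorphic(g, h):
    """Refine on the disjoint union (so colours are comparable across the
    two graphs) and compare the colour histograms of the two sides."""
    union = {('g', v): [('g', u) for u in g[v]] for v in g}
    union.update({('h', v): [('h', u) for u in h[v]] for v in h})
    colors = stable_coloring(union)
    hist = lambda side: Counter(c for (s, _), c in colors.items() if s == side)
    return hist('g') == hist('h')
```

    For instance, a 6-cycle and a disjoint union of two triangles are both 2-regular, so colour refinement cannot tell them apart, and indeed they are fractionally isomorphic despite not being isomorphic.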

  • Thursday, July 4, 2013, at 15:00 in S2|15-201
    Florian Steinberg
    Computational Complexity of Poisson's Equation
    The last years have seen an increasing interest in classifying (existence claims in) classical mathematical theorems according to their strength -- e.g. proof-theoretically or in terms of Weihrauch-degrees of computability.
    We pursue this question under the refined view of asymptotic worst-case computational complexity. Ko & Friedman have, for instance, shown that the indefinite integral of a polynomial-time computable function is again polynomial-time computable iff FP = #P, a complexity class (believed to lie strictly) between NP and PSPACE. We prove that the same holds for the solution of Poisson's Equation on the Euclidean unit ball.
    (Joint work with Akitoshi Kawamura and Martin Ziegler)
  • Friday, June 28, 2013, at 13:30 in S2|15-201
    Eike Neumann
    The Weihrauch-Degree of the Browder-Göhde-Kirk Theorem in Separable Hilbert Space
    Let K be a closed, convex and bounded subset of a separable Hilbert space. The Browder-Göhde-Kirk Theorem asserts that any nonexpansive f:K->K has a fixed point. [Kohlenbach'01] studied the computational content of the Krasnoselski and Ishikawa fixed point theorems for nonexpansive mappings in uniformly convex normed spaces. He observed that the iterations employed there fail to yield uniform (i.e. depending only on f) rates of convergence already in the case K = [0;1]. This led him to extract instead rates of so-called asymptotic regularity. On the other hand, since the set of fixed points is known to be convex, it is easy to see that any computable nonexpansive f:[0;1]^d->[0;1]^d has a computable fixed point; cf. [LeRoux&Ziegler'08]. We show that the Browder-Göhde-Kirk Theorem is computationally equivalent to the principle of Weak Convex Choice in terms of so-called Weihrauch reductions, introduced by [Brattka&Gherardi'09]: a (possibly multivalued, i.e. non-extensional) mapping g:X=>Y between represented spaces X and Y is said to REDUCE to a mapping h:U=>V if there exist computable (multivalued) functions K:VxX=>Y and L:X=>U such that K(h(L(x)),x) = g(x) for all x in X. Convex Choice is the restriction of the so-called Closed Choice principle to convex sets. Closed Choice (in a separable metric space) in turn is a uniform version of the problem of finding some point in a non-empty co-r.e. closed set: given negative information in the form of an enumeration of open rational balls avoiding at least one point, produce such a point. Note that on Cantor space this is equivalent to Weak König's Lemma and hence uncomputable. Weak Convex Choice is a variant of Convex Choice with respect to the weak topology.
  • Friday, June 21, 2013, at 13:30 in S2|15-201
    Angeliki Koutsoukou-Argyraki, University of Copenhagen
    Automorphisms of the Calkin algebra under two different set-theoretic hypotheses; an independence result
    A question by Brown, Douglas and Fillmore from the late 1970's asks if the Calkin algebra has outer automorphisms. It has recently been shown that the answer to this question is undecidable in ZFC; Phillips and Weaver (2007) showed that, assuming the Continuum Hypothesis, the Calkin algebra has outer automorphisms, while Farah (2011), assuming Todorcevic's Axiom, showed that it is also consistent with ZFC that all automorphisms of the Calkin algebra are inner, and moreover provided an alternative proof of Phillips' and Weaver's result.
    In this talk I will sketch the proofs in Farah's paper.
  • Friday, June 14, 2013, at 14:00 in S2|15-201
    Florian Steinberg
    Computable analytic functions in iRRAM
    It is well known that knowing only the coefficient sequence is insufficient to computably evaluate the induced power series, even for a fixed radius of convergence. This can be mended by enriching a name of a real sequence with suitable discrete advice. More precisely, arXiv:1211.4974 has exhibited integer quantities serving both as enrichment and as gauge for uniform parameterized polynomial-time algorithms for evaluating, adding, multiplying, composing, differentiating, integrating, and maximizing real analytic functions. iRRAM is a C++ library for arbitrary precision real arithmetic and analysis, allowing one to compute limits of sequences of real numbers and functions. The above combination of continuous and discrete information canonically yields a declaration of a C++ class interface for real analytic functions. We describe an implementation of such a class and of the standard arithmetic operations thereon, as well as differentiation, integration and evaluation operators -- and several optimizations.
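    To give a flavour of how discrete advice enters evaluation (a much-simplified stand-in for the enrichment of arXiv:1211.4974: the advice format below is my own, and floats stand in for iRRAM's exact reals): a bound A with |a_j| <= A, together with the restriction |x| <= 1/2, yields the tail estimate sum_{j>=N} A|x|^j <= A*2^(1-N), which tells the evaluator how many terms suffice for a prescribed error 2^-n.

```python
import math

def eval_series(coeff, A, x, n):
    """Evaluate sum_j coeff(j) * x**j to absolute error 2**-n, given the
    discrete advice A with |coeff(j)| <= A, assuming |x| <= 0.5.
    (Floats stand in for iRRAM-style exact real arithmetic.)"""
    assert abs(x) <= 0.5 and A > 0
    # Choose N with A * 2**(1 - N) <= 2**-n, i.e. N >= n + 1 + log2(A).
    N = n + 1 + max(0, math.ceil(math.log2(A)))
    return sum(coeff(j) * x ** j for j in range(N))

# Example: exp(x) via coeff(j) = 1/j!, with the valid advice bound A = 1.
```

    Without some such advice, no finite portion of the coefficient sequence bounds the tail, which is exactly why the bare sequence is insufficient for computable evaluation.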
  • Friday, June 14, 2013, at 13:30 in S2|15-201
    Carsten Rösnick
    Parameterized Uniform Complexity in Numerics: from Smooth to Analytic, from NP-hard to Polytime
    The synthesis of classical Computational Complexity Theory with Recursive Analysis provides a quantitative foundation to reliable numerics. Here the seemingly simple operators of maximization, integration, and solving ordinary differential equations are known to map (even high-order differentiable) polynomial-time computable univariate functions to instances which are 'hard' for classical complexity classes NP, #P, and CH; but, restricted to analytic functions, map polynomial-time computable ones to polynomial-time computable ones -- non-uniformly! We investigate the uniform parameterized complexity of the above operators in the setting of Weihrauch's TTE and its second-order extension due to Kawamura & Cook (2010). That is, we explore which (both continuous and discrete, first- and second-order) information and parameters on some given f are sufficient to obtain similar data on Max(f) and the integral of f, and within what running time, in terms of these parameters and the guaranteed output precision 1/2^n. It turns out that Gevrey's classical hierarchy of functions climbing from analytic to smooth corresponds to the computational complexity of maximization growing from polytime to NP-hard. Proof techniques mainly involve the theory of (discrete) computation, hard analysis, and information-based complexity.
  • Friday, June 7, 2013, at 13:30 in S2|15-201
    Andreas Döring, University of Oxford, UK
    A new kind of logic for quantum systems
    Standard quantum logic in the Birkhoff-von Neumann style is plagued by a variety of conceptual and interpretational problems. In this talk, I will show that one can associate a completely different and much better-behaved form of logic to each quantum system. This is based on the so-called topos approach to quantum theory that provides a new notion of generalised state space for quantum systems. Subsets (or rather, subobjects) of this quantum state space form a complete Heyting algebra, and in certain cases a complete bi-Heyting algebra. Such subobjects represent propositions about the quantum system. There is a 'translation map', called daseinisation, from Birkhoff-von Neumann logic to the new form of topos-based logic. I will present some key properties of the topos-based form of logic for quantum systems, including the Heyting and co-Heyting negation, and will consider the role of quantum states as models of this logic.
  • Friday, May 31, 2013, at 13:30 in S2|15-201
    Carsten Rösnick
    Closed Sets and Operators thereon: Representations, Computability and Complexity
    In the past, many representations (in the sense of Weihrauch's Type-2 Theory of Effectivity, TTE) for subsets S of Euclidean space have been devised and systematically compared to each other regarding their computable equivalence. Fewer authors, however, have examined how these representations relate complexity-wise. We first give a unified (re-)formulation of four existing representations (one that permits encoding further properties of a set, such as a bound on its diameter), and then systematically compare which pairs of set representations admit polynomial-time translations, extending the aforementioned (so far non-uniform) complexity results and proving new ones. We then use these insights to show how the complexity (and even computability) of operators on sets (like intersection and projection) varies depending on the representation of their respective in- and outputs.

WS 2012/13

  • Friday, March 15, 2013, at 13:30 in S2|15-201
    Dr. Jaime Gaspar*
    Krivine's classical realisability and the unprovability of the axiom of choice and the continuum hypothesis
    Proof interpretations are tools in mathematical logic with a wide range of applications: consistency results, independence results, and extraction of computational content from proofs, just to name a few. They are usually applied only in the context of logic or arithmetic. There is the dream of applying them even to set theory. Now the dream is fulfilled by a novel proof interpretation: Krivine's classical realisability. In this talk I present my personal digest of Krivine's classical realisability and one of its main applications to set theory: the theory ZF plus the statement "there is a set S between N (the set of natural numbers) and R (the set of real numbers) not equinumerous to its Cartesian square S x S" is a consistent theory in which both the axiom of choice and the continuum hypothesis fail. As a corollary we get two classical and celebrated results: ZF proves neither the axiom of choice nor the continuum hypothesis. This is a short, simple, and sweet talk, focusing on the ideas and setting aside technicalities.

    * INRIA Paris-Rocquencourt, pi.r2, Univ Paris Diderot, Sorbonne Paris Cite, F-78153 Le Chesnay, France. Financially supported by the French Fondation Sciences Mathematiques de Paris.

  • Friday, February 1, 2013, at 13:30 in S2|15-201
    Prof. Dr. Martin Ziegler
    Real Benefit of Promises and Advice
    Promises are a standard way to formalize partial algorithms, and advice quantifies nonuniformity. For decision problems, the latter is captured in common complexity classes such as P/poly, that is, with advice growing in size with that of the input. We advertise constant-size advice and explore its theoretical impact on the complexity of classification problems -- a natural generalization of promise problems -- and on real functions and operators. Specifically, we exhibit problems that, without any advice, are decidable/computable but of high complexity while, with (each increase in the permitted size of) advice, (gradually) dropping down to polynomial time.
  • Friday, January 25, 2013, at 13:30 in S2|15-201
    Johanna Sokoli
    On the Complexity of Satisfiability over Vector Product Terms
    A real vector product term is a term constructed iteratively by forming vector products of R^3-valued variables X_1, ..., X_n. By defining a vector product for elements of the projective space P^2 in an appropriate way, one can also define projective vector product terms. Given a set of vector product terms, the question arises whether a single equation, or even an entire system of equations formed by these terms, is satisfiable. In this talk, different definitions of such satisfiability problems are given and an overview of several polynomial-time reductions is provided. Furthermore, the languages are classified into complexity classes, and a proof of completeness for some of these languages will be given.
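    To make the objects concrete (a toy instance of my own, not one from the talk): a vector product term is built from the R^3 cross product, and satisfiability asks for an assignment of the variables making the equation hold.

```python
def cross(u, v):
    """Vector (cross) product of two vectors in R^3, given as 3-tuples."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
neg = lambda v: tuple(-c for c in v)

# Toy instance: is the single equation  (X1 x X2) x X3 = X1  satisfiable?
# Yes: with X1 = e1, X2 = e2, X3 = -e2 one checks
#   (e1 x e2) x (-e2) = e3 x (-e2) = e1.
X1, X2, X3 = e1, e2, neg(e2)

# By contrast, X x X = 0 for every X, so  X x X = e1  is unsatisfiable.
```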
  • Tuesday, January 15, 2013, at 16:00 in S2|15-201
    Keita Yokoyama, Tokyo Institute of Technology
    Combinatorial principles and Reverse Mathematics
    In the Reverse Mathematics program, many mathematical theorems are classified into typical subsystems of second-order arithmetic called the "big five". Recently, combinatorial principles have received much attention in this program, since many of them are not classified into the big five. In this talk, I will discuss the role of Ramsey's theorem in Reverse Mathematics and related topics.
  • Friday, December 14, 2012, at 15:00 in S2|15-201
    Dr Kord Eickmeyer, National Institute of Informatics, Tokyo
    Order-Invariant Logics on Restricted Classes of Structures
    Order-invariant extensions of various classical logics pop up naturally in descriptive complexity and database theory. Formulas of these logics may speak about a linear order on structures, but have to be invariant under the particular choice of such an order. Strictly speaking, this does not even define a logic, because in general it is undecidable whether a given formula has this invariance property, and this is reflected in the failure of many powerful tools from finite model theory, such as Ehrenfeucht-Fraïssé games.
    In my talk I will review known results about the expressiveness and model-checking complexity of order- and successor-invariant first-order logics. I will then go on to talk about my ongoing research (with Michael Elberfeld) about the expressive power of order-invariant logics on restricted classes of structures.
  • Tuesday, December 4, 2012, at 16:15 in S2|15-201
    Dr. Tomer Kotek, Haifa, Israel
    Applications of logic in graph theory: definability of graph invariants
    Logical methods can be used to unify results for individual graph-theoretic objects, by providing meta-theorems which apply to all such objects definable in an appropriate logic. In this talk we discuss graph properties and graph polynomials definable in First Order Logic and Monadic Second Order Logic in terms of their definability and combinatorial meta-theorems. In particular, we show a method of proving non-definability using connection matrices. We extend the results for graph properties by showing a Feferman-Vaught type theorem for First Order Logic extended with counting quantifiers.
  • Wednesday, November 28, 2012, at 13:30 in S2|15-201
    Rasmus Møgelberg, ITU Copenhagen, Denmark
    Presheaf models for guarded recursion

    A guarded recursive definition is one where the recursion variable only occurs in positions guarded by a delay operator. In this talk I will present a new model of guarded recursion and motivate it using two different examples: first as a model for computations with coinductive types such as streams, and second as a model of a metalanguage useful for defining models of programming languages with higher-order store. The model is a simple presheaf model (over the ordered natural numbers) and I will describe the computational intuition. If I have time I will describe work in progress on constructing models of intensional type theory with guarded recursion. Most of the talk describes joint work with Lars Birkedal, Kristian Stoevring and Jan Schwinghammer published at LICS 2011.
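    The guarded pattern described above can be mimicked in an eager language by hiding each recursive occurrence behind a thunk playing the role of the delay operator; a minimal Python sketch (an illustration of guardedness for streams only, not of the presheaf model, with all names invented here):

```python
class Stream:
    """A stream with a head available now and a delayed (guarded) tail.

    The tail is a thunk: the recursive occurrence is only ever used
    under this delay, mirroring the guardedness condition.
    """
    def __init__(self, head, tail_thunk):
        self.head = head
        self._tail = tail_thunk

    def tail(self):
        # Force the delay exactly when the tail is demanded.
        return self._tail()

    def take(self, n):
        out, s = [], self
        for _ in range(n):
            out.append(s.head)
            s = s.tail()
        return out

def nats_from(n):
    # Guarded: the self-reference sits inside the lambda (the delay),
    # so this corecursive equation has a well-defined solution.
    return Stream(n, lambda: nats_from(n + 1))
```

An unguarded version, `Stream(n, nats_from(n + 1))`, would recurse forever before producing anything; the delay is what makes the definition productive.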

  • November 23, 2012, at 13:30 in S2|15-201
    Makoto Fujiwara, Tohoku University, Japan
    Marriage Theorem for Countable Graphs and Computability
    We investigate the so-called marriage theorem with respect to computability, making use of techniques from reverse mathematics. The marriage theorem states that a marriage problem (a bipartite graph) satisfying appropriate conditions has a solution (an injection). A computable marriage problem need not have a computable solution, but under strengthened conditions it does. Based on this fact, we investigate how the computational difficulty of marriage problems varies as extra information is added step by step. In the second part of this talk, we consider a new combinatorial condition which also makes a computable marriage problem have a computable solution. However, unlike the previous case, there is no uniform procedure constructing a solution from a given computable marriage problem with this condition. On the other hand, given a little more information about the marriage problem, we can construct a solution uniformly.
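    For finite bipartite graphs, the "appropriate condition" (Hall's condition) and the existence of a solution can be checked directly; a small Python sketch of this finite analogue (the graph encoding and function names are our own; the computability phenomena of the talk only appear in the countable case):

```python
from itertools import combinations

def hall_condition(adj):
    """Check Hall's condition: every set S of left vertices has |N(S)| >= |S|.

    adj maps each left vertex to the set of right vertices it may marry.
    (Exponential check; fine for small finite instances.)
    """
    left = list(adj)
    for r in range(1, len(left) + 1):
        for S in combinations(left, r):
            neighbours = set().union(*(adj[v] for v in S))
            if len(neighbours) < len(S):
                return False
    return True

def matching(adj):
    """Find a solution (an injection left -> right) by augmenting paths."""
    match = {}  # right vertex -> left vertex currently matched to it
    def augment(v, seen):
        for w in adj[v]:
            if w in seen:
                continue
            seen.add(w)
            # Either w is free, or its partner can be re-matched elsewhere.
            if w not in match or augment(match[w], seen):
                match[w] = v
                return True
        return False
    for v in adj:
        if not augment(v, set()):
            return None
    return {v: w for w, v in match.items()}
```

In the finite case Hall's condition is exactly the dividing line: `matching` succeeds if and only if `hall_condition` holds.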
  • November 16, 2012, at 13:00 in S2|15-201
    PD Dr. Olaf Beyersdorff, University of Leeds, UK
    How difficult is it to verify proofs?
    Traditionally proof complexity adopts the view that proofs are verified in polynomial time and a rich body of results is known for the complexity of proofs in this framework. The talk will start by providing an introduction to main notions and motivations in proof complexity in this classical setting. The second part of the talk is devoted to a recent line of research trying to understand the power of proof systems that use alternative computational resources to verify proofs. In the talk I will mention recent results where proofs are verified by either additional computational resources (such as advice, randomness, quantum computations) or by very weak computational models. In particular, we will look at a very weak model where proofs are verified by NC^0 circuits and show non-trivial lower and upper bounds for this restrictive model.
  • November 9, 2012, at 13:00 in S2|15-201
    PD Dr. Laurentiu Leustean, Institute of Mathematics of the Romanian Academy, Bucharest, Romania
    Proof Mining in Nonlinear Analysis
    The program of proof mining is concerned with the extraction of hidden finitary and combinatorial content from proofs that make use of highly infinitary principles. This new information can be both of quantitative nature, such as algorithms and effective bounds, as well as of qualitative nature, such as uniformities in the bounds or weakening of the premises. Thus, even if one is not particularly interested in the numerical details of the bounds themselves, in many cases such explicit bounds immediately show the independence of the quantity in question from certain input data. In this talk we present recent applications of proof mining in nonlinear analysis: (1) effective results on the asymptotic behavior of Picard iterations of firmly nonexpansive mappings, a class of functions which plays an important role in metric fixed point theory and optimization due to its correspondence with maximal monotone operators (joint work with David Ariza and Genaro Lopez-Acedo); (2) explicit and highly uniform rates of metastability (in the sense of Tao) for a nonlinear generalization of the classical von Neumann mean ergodic theorem, due to Shioji and Takahashi, for Banach spaces with a uniformly Gateaux differentiable norm (joint work with Ulrich Kohlenbach).
  • November 2, 2012, at 13:00 in S2|15-201
    Prof. Dr. Martin Otto
    Groupoids and Hypergraphs
    We look at a novel construction of finite hypergraphs of controlled acyclicity, based on finite groupoids with suitable acyclicity properties: their Cayley graphs have large girth even w.r.t. a discounted distance measure that contracts arbitrarily long sequences of edges from the same colour class (sub-groupoid) and only counts transitions between colour classes (cosets). These groupoids are shown to be useful, e.g., in the construction of generic finite hypergraph covers that avoid small cyclic configurations. The hypergraph constructions in question use reduced products with groupoids that are generated by the elementary local extension steps and can be made to preserve the symmetries of the given overlap pattern.
  • October 26, 2012, at 13:00 in S2|15-201
    Reiner Hähnle
    Abstract Symbolic Execution

    Modern software tends to undergo frequent requirement changes and is typically deployed in many different scenarios. This poses significant challenges to formal software verification, because it is not feasible to verify a software product from scratch after each change. It is essential to perform verification in a modular fashion instead. The goal must be to reuse not merely software artifacts, but also specification and verification effort. In our setting, code reuse is realized by delta-oriented programming, an approach where a core program is gradually transformed by code "deltas", each of which corresponds to a product feature. The delta-oriented paradigm is then extended to contract-based formal specifications and to verification proofs. As a next step towards modular verification we transpose Liskov's behavioural subtyping principle to the delta world. Finally, based on the resulting theory, we perform a syntactic analysis of contract deltas that makes it possible to automatically factor out those parts of a verification proof that stay valid after applying a code delta. This is achieved by a novel verification paradigm called "abstract symbolic execution."

  • October 26, 2012, at 10:30 in S2|15-201
    Viktor Winschel
    Coalgebraic Analysis of Subgame-perfect Equilibria in Infinite Games without Discounting - Towards Reflexive Economics

    We present a novel coalgebraic formulation of infinite extensive games. We define both the game trees and the strategy profiles by possibly infinite systems of corecursive equations. Certain strategy profiles are proved to be subgame perfect equilibria using a novel proof principle of predicate coinduction, which is shown to be sound by reducing it to Kozen's metric coinduction. We characterize all subgame perfect equilibria for the dollar auction game. The economically interesting feature is that in order to prove these results we do not need to rely on continuity assumptions on the payoffs, which amount to discounting the future. In particular, we prove a form of the one-deviation principle without any such assumptions. This suggests that coalgebra supports a more adequate treatment of infinite-horizon models in game theory and economics. I will also sketch the possibilities that this (and more general) mathematics opens up for reflexive economics, which takes seriously the fact that modeling in social science takes place within the modeled system and essentially depends on self-referential paradoxes, corecursion, and similar mathematical constructions. This opens the door to a proper treatment of the Lucas critique; to evolutionary, institutional, agent-based, computational, and numerical economics; to the econometrics of non-experimental data, money, value theory, political economics, and management theory; and to a unification and cross-fertilization between systems sociology, computer science, and biology.

  • Thursday, October 11, 2012, at 13:00 in S2|15-201
    Stéphane Le Roux
    From Determinacy to Nash equilibrium

    Game theory is a modelling tool for economics and Nash equilibrium is its central concept, involving many players and many possible outcomes. Independently, logic has been using a game-theoretic terminology and determinacy is its central concept, involving two players and two possible outcomes. I will show that all determinacy results can be uniformly transferred into a result of existence of multi-outcome (possibly multi-player) Nash equilibrium, thus establishing a two-way bridge between logic and economics.

2012, January to September


  • September 21, 2012, at 13:00 in S2|15-201
    Arno Pauly, Cambridge University
    Synthetic Descriptive Set Theory
  • September 7, 2012, at 13:00 in S2|15-201
    Pavol Safarik, TU Darmstadt
    A functional interpretation for nonstandard arithmetic
    We introduce constructive and classical systems for nonstandard arithmetic and show how variants of the functional interpretations due to Gödel and Shoenfield can be used to rewrite proofs performed in these systems into standard ones. These functional interpretations show in particular that our nonstandard systems are conservative extensions of E-HA^ω and E-PA^ω, strengthening earlier results by Moerdijk and Palmgren, and Avigad and Helzner. We will also indicate how our rewriting algorithm can be used for term extraction purposes. To conclude, we will point out some open problems and directions for future research, including some initial results on saturation principles.

  • August 24, 2012, at 13:00 in S2|15-201
    Tahereh Jafarikhah, University of Tarbiat Modares, Tehran, Iran; at present University of Hagen, Germany
    Computable Riesz representation on the dual of C[0,1] revisited
    By the Riesz representation theorem, for every linear functional F : C[0,1] → R there is a function g : [0,1] → R of bounded variation such that
    F(h) = ∫ h dg   (h ∈ C[0,1]).
    A computable version was proved in LW07a: a function g can be computed from F and its norm, and F can be computed from g and an upper bound on its total variation. In this talk we present a much more transparent proof. We first give a new proof of the classical theorem, from which we then derive the computable version easily. As in LW07a we use the framework of TTE, the representation approach to computable analysis, which allows one to define natural concepts of computability for the operators under consideration.
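    Numerically, the functional F(h) = ∫ h dg can be illustrated by a Riemann-Stieltjes sum; a toy Python sketch under the stated assumptions on h and g (the partition size and the test functions below are arbitrary illustrative choices, with no bearing on the computability results of the talk):

```python
def riemann_stieltjes(h, g, n=1000):
    """Approximate ∫_0^1 h dg by the sum Σ h(t_i) (g(t_{i+1}) - g(t_i)).

    Assumes h continuous and g of bounded variation on [0, 1],
    so the sums converge as the partition is refined.
    """
    total = 0.0
    for i in range(n):
        t = i / n
        total += h(t) * (g((i + 1) / n) - g(t))
    return total

# With g(t) = t the functional is ordinary integration: ∫_0^1 t dt = 1/2.
approx = riemann_stieltjes(lambda t: t, lambda t: t)
```

Taking g to be a step function instead turns F into a finite linear combination of point evaluations, which is the other extreme the representation theorem covers.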
  • August 24, 2012, at 10:00 in S2|15-201
    Hiroyuki OTA, University of Tokyo, Japan
    Computational Complexity of Smooth Differential Equations
    The computational complexity of the solution $h$ to the ordinary differential equation $h(0)=0$, $h'(t) = g(t, h(t))$ under various assumptions on the function $g$ has been investigated in the hope of understanding the intrinsic hardness of solving the equation numerically. Kawamura showed in 2010 that the solution $h$ can be PSPACE-hard even if $g$ is assumed to be Lipschitz continuous and polynomial-time computable. We place further requirements on the smoothness of $g$ and obtain the following results: the solution $h$ can still be PSPACE-hard if $g$ is assumed to be of class $C^1$;
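    For concreteness, the initial value problem $h(0)=0$, $h'(t)=g(t,h(t))$ can be approximated by Euler's method; a naive Python sketch (step count and test function are arbitrary choices of ours; the talk concerns the intrinsic complexity of $h$, not this scheme):

```python
def euler(g, T=1.0, steps=1000):
    """Approximate h(T) for the IVP h(0) = 0, h'(t) = g(t, h(t)) on [0, T],
    by the update h_{k+1} = h_k + dt * g(t_k, h_k)."""
    h, dt = 0.0, T / steps
    for i in range(steps):
        h += dt * g(i * dt, h)
    return h

# Example: g(t, h) = h + 1 has exact solution h(t) = e^t - 1.
approx = euler(lambda t, h: h + 1)
```

The hardness results say that for some polynomial-time computable $g$ no method, however clever, can evaluate $h$ efficiently unless PSPACE collapses; the sketch only shows what "solving numerically" means.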
  • Tuesday, July 17, 2012, at 11:00 in S2|15-401
    Prof. Hans-Peter Künzi, University of Cape Town, South Africa
    The Katetov construction revisited
    (joint work with M. Sanchis, Castellon, Spain)
    Various concepts from the theory of metric spaces are generalized to an asymmetric setting. In particular we study the Katetov construction modified for a T_0-quasi-metric space. Our approach is related to the recent work of the first author with Kemajou and Otafudu about a concept of hyperconvexity in such spaces.
  • July 6, 2012, at 13:00 in S2|15-201
    Prof. Benjamin Miller, University of Münster
    An anti-basis theorem for definable cardinals
    We will discuss how local rigidity properties of the usual action of SL_2(Z) on R^2 can be used to rule out basis theorems near the base of the definable cardinal hierarchy. I hope to make the talk accessible to a broad audience. This is joint work with Clinton Conley.
  • June 29, 2012, at 13:00 in S2|15-201
    Prof. Martin Otto, TU Darmstadt
    Pebble Games and Linear Equations (joint work with Martin Grohe)
    We give a new, simplified and detailed account of the correspondence between levels of the Sherali-Adams relaxation of graph isomorphism and levels of pebble-game equivalence with counting (higher-dimensional Weisfeiler-Lehman colour refinement). The correspondence between basic colour refinement and fractional isomorphism (Ramana, Scheinerman, Ullman 1994) is re-interpreted as the base level of Sherali-Adams and generalised to higher levels in (Atserias, Maneva 2011), who prove that the two resulting hierarchies interleave. In carrying this analysis further, we give (a) a precise characterisation of the level-k Sherali-Adams relaxation in terms of a modified counting pebble game; (b) a variant of the Sherali-Adams levels that precisely matches the k-pebble counting game; (c) a proof that the interleaving between these two hierarchies is strict. We also investigate the variation based on boolean arithmetic instead of real/rational arithmetic and obtain analogous correspondences and separations for plain k-pebble equivalence (without counting). Our results are driven by considerably simplified accounts of the underlying combinatorics and linear algebra.
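    The basic colour refinement mentioned above (1-dimensional Weisfeiler-Lehman) is easy to state as code; a short Python sketch, where the adjacency-dict graph encoding is an assumption of ours:

```python
def colour_refinement(adj):
    """Basic colour refinement (1-dimensional Weisfeiler-Lehman).

    adj: dict mapping each vertex to an iterable of its neighbours.
    Starting from a uniform colouring, repeatedly recolour each vertex
    by its old colour together with the multiset of neighbour colours,
    until the partition stabilises; returns the stable colouring.
    """
    colour = {v: 0 for v in adj}
    while True:
        signature = {v: (colour[v], tuple(sorted(colour[w] for w in adj[v])))
                     for v in adj}
        # Rename the signatures to small integers (canonical palette).
        palette = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new = {v: palette[signature[v]] for v in adj}
        if len(set(new.values())) == len(set(colour.values())):
            return new  # partition no longer refines: stable
        colour = new
```

On a three-vertex path the endpoints get one colour and the middle vertex another, while on any single cycle all vertices stay the same colour; such failures to distinguish are exactly why the higher-dimensional games and Sherali-Adams levels are studied.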
  • June 22, 2012, at 13:00 in S2|15-201
    Atefeh Keshavarzi Zafarghandi, Amirkabir University, Tehran, Iran
    Dynamical Systems via Domain Theory
    We consider Dynamical Systems from a computational point of view. Then we study the associated dynamical system in extended hyperspaces. We show that attractors of iterated function systems can be obtained as fixed points in these new hyperspaces.
  • June 15, 2012, at 13:00 in S2|15-201
    Achim Blumensath, TU Darmstadt
    Recognisability and Algebras of Infinite Trees
  • June 1, 2012 at 13:00 in S2|15-201
    Prof. Andre Nies, University of Auckland, New Zealand
    Algorithmic randomness and differentiability
    We show that algorithmic randomness notions for a real correspond to differentiability at the real of all functions in an appropriate effective class.
  • May 18, 2012 at 13:00 in S2|15-201
    Andrey Morozov, Sobolev Institute of Mathematics, Novosibirsk
    On Sigma-definability of structures over the reals

    We give a brief survey of the results on structures Sigma-definable over HF(R) and present some results on the number of non-Sigma-isomorphic presentations of the ordered field R of reals and of its ordering. In particular, we will discuss the question, whether it is possible to use R to 'improve' itself by creating its new classically isomorphic Sigma-copy.
  • May 4, 2012
    Prof. Takakazu Mori, Sangyo University, Kyoto, Japan
    Fine Computable Functions - a computability notion for discontinuous functions

    The computability of functions in the sense of Pour-El and Richards requires Sequential Computability and Effective Continuity. Hence a computable function must be continuous. The theory of Walsh series is based on the Fine metric, which is stronger than the Euclidean metric. A function is Fine continuous if and only if it is continuous at dyadic irrationals and right continuous at dyadic rationals. Thus Rademacher functions and Walsh functions are Fine continuous. Fine computability is defined by means of the Fine topology instead of the Euclidean topology.

  • April 27, 2012 at 13:30 in S2|15-201
    Katharina Schade
    Proof mining for Halpern iterations in CAT(0) spaces
    We examine a convergence result of Halpern iterations in CAT(0) spaces due to Saejung (2010) and explain how to apply proof mining methods to obtain effective uniform rates of metastability as outlined by Kohlenbach and Leustean in 2011. The novelty of this approach lies in the fact that Saejung's original proof used the existence of Banach limits. This requires a substantial reference to the axiom of choice which is not permitted in current metatheorems. We show how one can reduce this reference to fit the requirements of a metatheorem given in Gerhardy-Kohlenbach (2005).
  • April 20, 2012 at 13:30 in S2|15-201
    Samuele Maschio, University of Pavia, Italy
    Initial algebras and internal syntax
    In category theory, recursive objects are defined using initial algebras. As for natural numbers, objects of lists, and objects of trees, it is possible to give a notion of *internal theory* in a category using *ad hoc* functors. I will give some examples and underline the connection between Gödel coding and the internalization of arithmetic in the syntactic category of Peano arithmetic.
  • March 23, 2012 at 13:30 in S2|15-201
    Costas Poulios, University of Athens, Greece

    The fixed point property on tree-like Banach spaces
    A Banach space $X$ is said to have the fixed point property if for every weakly compact convex set $K\subseteq X$ and every non-expansive $T:K\to K$ (i.e. $\|Tx-Ty\|\leq \|x-y\|$ for all $x,y\in K$), the map $T$ has a fixed point. This property is closely related to the geometry of the space under consideration. If the geometry of the space is "nice", i.e. if $X$ possesses normal structure, then $X$ has the fixed point property. On the other hand, a space without normal structure may fail this property (e.g. the space $L_1$) or it may enjoy this property (e.g. the space $c_0$).
    In this lecture we will consider the important class of separable Banach spaces not containing $\ell_1$ with non-separable dual. Our purpose is to study these spaces with respect to the fixed point property. In particular, after discussing some known members of this class, we will present a Banach space belonging to the class under consideration which has the fixed point property even though it does not possess normal structure.

  • March 2, 2012 at 11:00 in S2|15-201
    Alexander Kreuzer

    Non-principal ultrafilters, program extraction and higher-order reverse mathematics
    We investigate the strength of the existence of a non-principal ultrafilter over fragments of higher order arithmetic. Let (U) be the statement that a non-principal ultrafilter on N exists and let ACA_0^w be the higher order extension of ACA_0. We show that ACA_0^w + (U) is Pi^1_2-conservative over ACA_0^w and thus that ACA_0^w + (U) is conservative over PA. Moreover, we provide a program extraction method and show that from a proof of a strictly Pi^1_2 statement \forall f \exists g A(f,g) in ACA_0^w + (U) a realizing term in Gödel's system T can be extracted. This means that one can extract a term t \in T such that \forall f A(f,t(f)).

  • February 10, 2012 at 14:30 in S2|15-201
    Benno van den Berg

    About the Herbrand Topos

  • January 27, 2012 at 13:30 in S2|15-201
    Vassilis Gregoriades
    The complexity of the set of points of continuity of a multi-valued function
    Abstract: We consider a notion of continuity concerning multi-valued functions between metric spaces. We focus on multi-valued functions assigning points to nonempty closed sets and we answer the question asked by M. Ziegler (Real Computation with Least Discrete Advice: A Complexity Theory of Nonuniform Computability with Applications to Effective Linear Algebra, to appear in Ann. Pure Appl. Logic) of determining the complexity of the set of points of continuity of such a multi-valued function. We will see that, under appropriate compactness assumptions, the complexity of this set is low in the hierarchy of Borel sets. On the other hand in the general case it might not even be a Borel set. In particular, this notion of continuity is not metrizable in the following sense: if we view a multi-valued function F from X to Y as a single-valued function from X to the family F(Y) of closed subsets of Y, it is not necessarily the case that there is a metrizable topology on F(Y) such that F is continuous at an arbitrary x in the usual sense (of single-valued functions) exactly when it is continuous at x in the sense of multi-valued functions. We will also present some similar results about a stronger notion of continuity, which is closely related to the Fell topology and thus in some cases turns out to be metrizable.

  • Wednesday, January 18, 2012 at 14:25 in S2|15-201
    Jean-Yves Beziau, Federal University of Rio de Janeiro, UFRJ 
    Visiting Professor University of Munich, LMU
    Universal logic: A general completeness theorem
     
    The aim of universal logic is to develop general concepts and proofs for a better understanding of logical systems. After giving a brief overview of universal logic I will present a general completeness theorem linking sequent systems with bivaluations. 

  • January 13, 2012 at 13:30 in S2|15-201
    Sam Sanders, Ghent University
    Reuniting the antipodes: bringing together Nonstandard Analysis and Constructive Analysis
    Constructive Analysis was introduced by Errett Bishop to identify the `computational meaning' of mathematics. In the spirit of intuitionistic mathematics, notions like `algorithm,' `explicit computation,' and `finite procedure' are central. The exact meaning of these vague terms was left open, to ensure the compatibility of Constructive Analysis with several traditions in mathematics. Constructive Reverse Mathematics (CRM) is a spin-off of Harvey Friedman's famous Reverse Mathematics program, based on Constructive Analysis. Bishop famously derided Nonstandard Analysis for its lack of computational meaning. In this talk, we introduce `Ω-invariance': a simple and elegant definition of `finite procedure' in (classical) Nonstandard Analysis. Using an intuitive interpretation, we obtain many results from CRM, thus showing that Ω-invariance is quite close to Bishop's notion of `finite procedure.'
  • Tuesday, January 12, 2012 at 11:00 in S2|15-201
    Yoshihiro Maruyama, Oxford University
    Chu Duality, Born Coalgebras, and Quantum Symmetries
    After a brief overview of recent developments of quantum foundations in Oxford, we focus on Abramsky's representations of quantum systems as Chu spaces and as coalgebras, with the purpose of analysing them from the viewpoint of duality theory. We first develop a general theory of dualities between point-free and point-set spaces, based on the formalism of Chu space. The Chu duality theory includes, as a special case, a duality between quantum state spaces and property observables. Building upon the duality, we refine Abramsky's coalgebraic representation of quantum symmetries, so that our (non-fibred) category of coalgebras turns out to be strictly smaller than Abramsky's fibred one; at the same time, however, it is still big enough to represent quantum symmetries. 

Logic Seminar 2011


  • December 9, 2011 at 13:30 in S2|15-201
    Jaime Gaspar
    The infinite pigeonhole principle

    Abstract: The infinite pigeonhole principle is one of the world's most evident statements: if we colour the natural numbers with finitely many colours, then some colour occurs infinitely often. Despite its obviousness, its treatment can be surprisingly challenging. We are going to illustrate this with two case studies: 1. the finitary infinite pigeonhole principles in Tao's programme of finitisations in analysis; 2. the infinite pigeonhole principle in a quantitative version of Hillam's theorem characterising the convergence of fixed point iterations. 
    * Financially supported by the Portuguese Fundação para a Ciência e a Tecnologia under grant SFRH/BD/36358/2007 co-financed by Programa Operacional Potencial Humano / Quadro de Referência Estratégico Nacional / Fundo Social Europeu (União Europeia). 
  • Tuesday, December 6, 2011 at 13:30 in S2|15-201
    Paulo Oliva, Queen Mary University of London
    On the restricted form of Spector's bar recursion.
  • November 11, 2011 at 13:30 in S2|15-201
    Feng Shiguang, Sun Yat-Sen University, Guangzhou
     
    This talk is about the expressive power and complexity of second-order extended Horn logic and Krom logic.

    Descriptive complexity is a bridge between complexity theory and logic. It asks which logical resources are necessary to define properties decidable within given computational resources such as time, space, or the amount of hardware. Horn and Krom formulas have their origins in propositional logic; they have been extended to second-order logic as SO-HORN and SO-KROM. We define revised and extended versions of these logics and obtain the following results: SO-HORN$^r$ and ($\exists$)SO-KROM$^r$ capture P and NL on ordered structures, respectively, and both SO-EHORN and $\Pi^{1}_{1}$-KROM capture co-NP.

  • November 4, 2011 at 15:00 in S2|15-201
    Jaime Gaspar
    Goedel functional interpretation

    This talk is a one-hour mini-tutorial introducing from scratch the bare-bones of the Goedel functional interpretation.
  • November 4, 2011 at 13:30 in S2|15-201
    Pavol Safarik
    A quantitative nonlinear strong ergodic theorem for Hilbert spaces

    Abstract: We give a quantitative version of a strong nonlinear ergodic theorem for (a class of possibly even discontinuous) selfmappings of an arbitrary subset of a Hilbert space due to R. Wittmann and outline how the existence of uniform bounds in such quantitative formulations of ergodic theorems can be proved by means of a general logical metatheorem. In particular these bounds depend neither on the operator nor on the initial point. Furthermore, we extract such uniform bounds in our quantitative formulation of Wittmann's theorem, implicitly using the proof-theoretic techniques on which the metatheorem is based. However, we present our result and its proof in analytic terms without any reference to logic as such. Our bounds turn out to involve nested iterations of relatively low computational complexity. While in theory this kind of iteration ought to be expected, so far this seems to be the first occurrence of such a nested use observed in practice.

  • October 28, 2011 at 13:30 in S2|15-201
    Vassilis Gregoriades
    The effective theory on arbitrary Polish spaces

    Abstract: We will recall a constructive scheme (which was presented in a previous talk of this seminar) which assigns to every tree T on the naturals a Polish space N(T) which is recursively presented in T. The effective structure of N(T) depends on the combinatorial properties of T, so that for various choices of a recursive tree T the spaces N(T) are not effectively-Borel isomorphic to each other. In particular there are recursively presented Polish spaces which are uncountable sets and yet not effectively-Borel isomorphic to the Baire space. On the other hand every recursively presented Polish space is effectively-Borel isomorphic to one of the spaces N(T), so the latter spaces are the correct effective analogue of the Baire space. We are going to present the recent results concerning the structure of the spaces N(T) and give a characterization of N(T) being effectively-Borel isomorphic to the Baire space in terms of the intrinsic properties of T. 

  • October 7, 2011 at 13:30 in S2|15-201
    Takayuki Kihara, Tohoku University
    Non-computability of planar continua

    Abstract: Every nonempty open set in a computable metric space contains a computable point. In contrast, the Non-Basis Theorem asserts that a nonempty co-c.e. closed set in Cantor space (hence, even in the real line) can avoid all computable points. Recent exciting progress in Computable Analysis naturally raises the question whether Non-Basis Theorems exist for connected co-c.e. closed sets. In this talk, we show global and local non-computability results concerning connectivity, local connectivity, simple connectivity, and contractibility for planar continua.

  • October 7, 2011 at 10:30 in S2|15-201
    Hideki Tsuiki, Kyoto University
    Unimodal Maps as Boundary-Restrictions of Two-Dimensional Full-Folding Maps

    Abstract: I will start my talk with ideas on computational aspects of topological spaces. Then I will present what these ideas yield for dynamical systems, with an interactive program showing beautiful two-dimensional figures. It is a generalization of Gray code and the tent function to other topological spaces, and no background knowledge is required to understand it.
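    The connection between Gray code and the tent function can be made concrete: iterating the tent map and recording on which side of 1/2 the orbit lies reads off the Gray-code expansion of the starting point. A Python sketch (the undefined boundary digit at 1/2, central to Tsuiki's bottom-including codes, is not modelled here):

```python
def tent(x):
    """The tent map on [0, 1]: full fold of the interval onto itself."""
    return 2 * x if x <= 0.5 else 2 * (1 - x)

def gray_expansion(x, n):
    """First n Gray-code digits of x in [0, 1], read off the tent-map orbit.

    Digit i is 1 exactly when the i-th iterate lies above 1/2; at exactly
    1/2 the digit is genuinely undetermined (here it defaults to 0).
    """
    digits = []
    for _ in range(n):
        digits.append(1 if x > 0.5 else 0)
        x = tent(x)
    return digits
```

For example, 0.3 has binary expansion 0.0100110011... and Gray code 01101010..., which is exactly what the orbit 0.3, 0.6, 0.8, 0.4, ... produces.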

  • September 16, 2011 at 13:30 in S2|15-201
    Akitoshi Kawamura, University of Tokyo
    Distance k-Sectors and zone diagrams

    Abstract: The bisector of two nonempty sets P and Q in a metric space is the set of all points with equal distance to P and to Q. A distance k-sector of P and Q, where k is an integer, is a (k-1)-tuple (C_1, C_2, ..., C_{k-1}) such that C_i is the bisector of C_{i-1} and C_{i+1} for every i = 1, 2, ..., k-1, where C_0 = P and C_k = Q. This notion, for the case where P and Q are points in the Euclidean plane, was introduced by Asano, Matousek, and Tokuyama, motivated by a question of Murata in VLSI design. We prove the existence of a distance k-sector for all k and for every two disjoint, nonempty, closed sets P and Q in Euclidean spaces of any (finite) dimension, or more generally, in proper geodesic spaces (uniqueness remains open). Another generalization of the bisector is the Voronoi diagram, which is the "bisector between n sites". Likewise one can consider the "trisector between n sites", leading to the notion of zone diagrams. Asano, Matousek, and Tokuyama proved the existence and uniqueness of a zone diagram for point sites in the Euclidean plane, and Reem and Reich showed existence for two arbitrary sites in an arbitrary metric space. We establish (by a simpler proof than Asano et al.'s) the existence and uniqueness of the zone diagram for n positively separated sites in a Euclidean space of arbitrary (finite) dimension. This talk is based on joint work with K. Imai, J. Matoušek, D. Reem and T. Tokuyama.
    http://www-imai.is.s.u-tokyo.ac.jp/~kawamura/papers/k-sectors/ 
    http://www-imai.is.s.u-tokyo.ac.jp/~kawamura/papers/zone/
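    The definition of the bisector can be illustrated by brute force on a grid; a toy Python sketch (the sets, grid, and tolerance below are illustrative choices of ours, while the talk's results concern the continuous objects in general geodesic spaces):

```python
def bisector_points(P, Q, grid, tol=1e-9):
    """Grid points (approximately) equidistant from the sets P and Q.

    P, Q: collections of points in the plane; grid: candidate points.
    Uses dist(x, A) = min over a in A of the Euclidean distance |x - a|.
    """
    def dist(x, A):
        return min(((x[0] - a[0]) ** 2 + (x[1] - a[1]) ** 2) ** 0.5 for a in A)
    return [x for x in grid if abs(dist(x, P) - dist(x, Q)) <= tol]
```

For two point sites this recovers the perpendicular bisector line, the k = 2 case; a distance k-sector inserts k - 2 further curves, each the bisector of its two neighbours.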
     
  • August 5, 2011 at 13:30 in S2|15-201
    Akitoshi Kawamura, University of Tokyo
    Why Lipschitz Continuous Ordinary Differential Equations are Polynomial-Space Complete

    Abstract: How complex can the solution of a differential equation be, compared to the function used to describe the equation? This question can be given a precise sense consistent with standard computational complexity theory. In this talk, I will first explain how we define polynomial-time computability of real functions. Then I will discuss the complexity of the solution of initial value problems of differential equations under various assumptions. In particular, an initial value problem given by a polynomial-time computable, Lipschitz continuous function can have a polynomial-space complete solution. This answers Ko's question raised in 1983. Time permitting, I will also mention what this kind of result about the complexity of possible solutions often implies about the hardness of actually finding the solution when the differential equation is given.
  • July 15, 2011 at 13:30 in S2|15-201
    Jaime Gaspar*
    Copies of Classical Logic in Intuitionistic Logic

    Abstract: Classical logic is the usual logic in mathematics, and intuitionistic logic is the constructive part of classical logic. Despite their differences, classical logic can be embedded in intuitionistic logic, that is, there are copies of classical logic in intuitionistic logic. The copies found
    in the literature are all the same. So, is the copy unique? In this talk we answer this question.

    * Financially supported by the Portuguese Fundação para a Ciência e a Tecnologia under grant SFRH/BD/36358/2007.
  • July 8, 2011 at 13:30 in S2|15-201
    Vasco Brattka, Cape Town
    Computable Analysis in the Weihrauch Lattice

    Abstract: We present recent results on the classification of the computational content of theorems in the Weihrauch lattice. This lattice is a refinement of the Borel hierarchy and allows one to classify the computational content of theorems easily. The essential idea is to view theorems as (possibly multi-valued) functions that transform certain input data into certain output data. The question is then which theorems can be computably and continuously transformed into each other. So far, theorems such as the Baire Category Theorem, the Banach Inverse Mapping Theorem, the Intermediate Value Theorem, the Hahn-Banach Theorem, the Bolzano-Weierstrass Theorem and several other theorems from analysis have been classified from this perspective. The results are not only natural refinements of what is known from reverse mathematics, but a classification of the Weihrauch degree of a theorem leads to a relatively complete understanding of the respective theorem regarding its uniform and non-uniform computability properties.
  • July 7, 2011 at 13:30 in S2|15-201
    Victor Poupet, Marseille
    An Elementary Construction of an Aperiodic Tile Set

    Abstract: Aperiodic tile sets have been widely studied since their introduction in 1962 by Hao Wang. It was initially conjectured by Wang that it was impossible to enforce the aperiodicity of a coloring of the discrete plane Z^2 with a finite set of local constraints (there either was a valid periodic coloring or none at all). This would imply that it was decidable whether there existed a valid coloring of the plane for a given set of local rules. This last problem was introduced as the domino problem, and eventually proved undecidable by Robert Berger in 1964. In doing so, Berger produced the first known aperiodic tile set: a set of local constraints that admitted valid colorings of the plane, none of them periodic. Berger's proof was later made significantly simpler by Raphael M. Robinson in 1971, who created the set of Wang tiles now commonly known as the Robinson tile set.

    Since then many other aperiodic tile sets have been found, not only on the discrete plane, but also on the continuous plane (the most famous being probably Penrose's "kite and darts" set of two tiles). 
    In this talk, I will describe yet another construction of an aperiodic tile set. Although the resulting tile set will produce tilings quite similar to those of Robinson's (an infinite hierarchical structure of embedded  squares) the local constraints will be presented in a (hopefully) more natural way: we will start from simple geometrical figures and organize them step by step by adding new rules progressively to enforce the global structure.
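    As a concrete illustration of "local constraints", here is a minimal sketch (the representation is ours: a Wang tile as a (north, east, south, west) tuple of edge colors) that checks whether a finite patch of tiles satisfies the edge-matching rules:

```python
def valid_patch(patch, tiles):
    """patch: dict mapping (row, col) -> tile index; tiles: list of
    (north, east, south, west) edge-color tuples. A patch is valid when
    every pair of horizontally or vertically adjacent tiles matches on
    the shared edge."""
    N, E, S, W = 0, 1, 2, 3
    for (r, c), t in patch.items():
        right = patch.get((r, c + 1))
        below = patch.get((r + 1, c))
        if right is not None and tiles[t][E] != tiles[right][W]:
            return False
        if below is not None and tiles[t][S] != tiles[below][N]:
            return False
    return True

# Two tiles whose east/west colors force alternating columns:
tiles = [('a', 0, 'a', 1), ('a', 1, 'a', 0)]
ok = valid_patch({(0, 0): 0, (0, 1): 1, (1, 0): 0, (1, 1): 1}, tiles)
bad = valid_patch({(0, 0): 0, (0, 1): 0}, tiles)
```

    An aperiodic tile set is one for which such checks succeed on arbitrarily large patches of some coloring, yet never on any periodic one.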

  • May 20, 2011 at 13:30 in S2|15-201
    Martin Lotz, Edinburgh
    Geometry and Complexity in Optimization 
     
    Abstract: The computational complexity of optimization problems depends crucially on the geometry of the problem. In this talk we will discuss geometric measures of condition for optimization problems such as linear programming, and show an application to compressed sensing: the problem of finding sparse solutions to under-determined systems of equations in polynomial time. 
  • May 13, 2011 at 13:30 in S2|15-201
    Christian Ikenmeyer, Universität Paderborn
    Introduction to Geometric Complexity Theory and Tensor Rank

    Abstract: Mulmuley and Sohoni in 2001 proposed to view the P vs NP and related problems as specific orbit closure problems and to attack them by methods from geometric invariant theory and representation theory. We merge these ideas with work of Strassen towards the goal of showing lower bounds for the computational complexity of matrix multiplication. This talk explains some key ideas of this approach on a basic level.
  • Tuesday, May 10, 2011 at 13:30 in S2|15-201
    Jaap van Oosten, Universiteit Utrecht
    Another Heyting Algebra for embedding the Turing degrees

    Abstract: Simpson has been arguing in recent years that classical degree theory is "dead" and can be "redeemed" by studying the Medvedev lattice. The Medvedev lattice embeds the Turing and Muchnik degrees and its dual is a Heyting algebra. The Medvedev lattice is intimately related to the notion of realizability studied by Kleene and Vesley. In this talk another Heyting algebra will be presented in which the Turing degrees embed: the "lattice of local operators in the effective topos". The structure will be defined in a very concrete way, and some methods for determining equality/inequality between its members will be discussed.
  • May 6, 2011 at 13:30 in S2|15-201
    Davorin Lesnik

    Synthetic topology

     Abstract: Synthetic topology is the study of mathematical models in which objects possess an intrinsic topology in a suitable sense and all maps are continuous. Topological definitions and theorems thus become statements about (internal) logic. The upshot is that this approach gives new insight: topological theorems often become easier to prove, and their validity transfers to classical topology.


    At the seminar we will discuss the origins, basic ideas and notions of
    synthetic topology, and some proofs. Depending on time, we'll speak about the logical and computational aspects of the theory.

  • April 21, 2011 at 14:00 in S2|15-201
    Jaime Gaspar

    Proof mining Hillam's theorem on fixed point iterations

     

    Abstract: Hillam's theorem gives a characterisation of the convergence of fixed point iterations. We present a quantitative version of Hillam's theorem where we give an explicit rate of metastability (a kind of "finitary rate of convergence") for the fixed point iteration. We explain how this result is obtained by methods of proof theory used in the proof mining program, called functional interpretations, which extract computational content from proofs.

  • February 18, 2011 at 14:00 in S2|15-201
    Arne Seehaus

    Is mathematics "(Big) Science" in the sense of Thomas Kuhn and Derek John de Solla Price?
  • February 11, 2011 at 14:00 in S2|15-201
    Alexander Kartzow
    First-order logic on generalisations of pushdown graphs

  • January 28, 2011 at 14:00 in S2|15-201
    Martin Ziegler
    Relative Computability and Uniform Continuity of Relations

     

    Abstract: In their search for a topological characterization of relatively (i.e. oracle-) computable relations (aka multivalued functions), Brattka and Hertling (1994) have considered two notions: weak continuity (which is weaker than relative computability) and strong continuity (which is stronger than relative computability). Based on the Henkin quantifier, we propose a notion of uniform continuity and explore its relations to weak/strong continuity as well as to relative computability.

  • January 21, 2011 at 14:00 in S2|15-201
    Takako Nemoto, Bern
    Determinacy and reverse mathematics

     

    Abstract: It is known that determinacy of infinite games characterizes many subsystems of second order arithmetic.

    In this talk, we treat relatively stronger systems, beyond so-called "big five systems", and consider the possibility of determinacy as a tool to investigate strong systems of second order arithmetic.

  • January 14, 2011 at 14:00 in S2|15-201
    Isolde Adler, Frankfurt
    Nowhere dense graph classes, stability, and the independence property
     
     Abstract: Recently, Nešetřil and Ossona de Mendez introduced nowhere dense classes of finite graphs, a generalisation of many natural and important graph classes such as graphs of bounded degree, planar graphs, graphs excluding a fixed minor and graphs of bounded expansion. These graph classes play an important role in algorithmic graph theory, as many computational problems that are hard in general become tractable when restricted to such classes.


    We observe that the notion of nowhere density is essentially the earlier stability theoretic notion of superflatness, introduced by Podewski and Ziegler in 1978. With this, we show that for a subgraph-closed class C of (not necessarily finite) graphs, the following are equivalent:

    1) C is nowhere dense
    2) C is stable
    3) C is dependent

    The result can be generalised to classes of coloured digraphs. It fails if C is not subgraph-closed. For arbitrary graph classes, dependence is the most general notion, generalising both nowhere density and stability, as well as (locally) bounded clique-width. 

    This is joint work with Hans Adler.

  • November 19, 2010 at 14:00 in S2|15-201
     Dr. Vadim Puzarenko, Russian Academy of Sciences, Novosibirsk
    A reducibility on structures 
  • November 12, 2010 at 14:00 in S2|15-201
     Carsten Rösnick
     
    Approximate Real Function Maximization and Query Complexity

     Abstract: Let MAX, INT : C[0,1] -> R be the functionals for maximization and integration, respectively, i.e.,

         MAX(f) = sup_{x in [0,1]} f(x) ,      INT(f) = \int_0^1 f(t) dt .

     

    It has been shown by Friedman and Ko (1982) that, for the restricted class of polynomial-time computable functions f, these functionals can be approximated up to an error of 2^{-n} in polynomial space in n. In this framework of computation, each computable real function f : [0,1] -> R is essentially represented as a type-2 function (in the sense of Weihrauch), mapping a rational sequence to a rational sequence, i.e., f \in (N->Q) -> (N->Q). This enables us to discuss the computational complexity of a real-valued function on classical Turing machines with oracle access. Here, a TM M computing f is presented with an oracle phi \in (N->Q) for x in [0,1] and a precision n in N, such that M produces the output M^\phi(n) \in Q with

     

         |M^\phi(n) - f(x)| < 2^{-n}.  

     

    When it comes to the computation of the functionals MAX and INT, the encoding of an input for a Turing machine becomes even more critical: Instead of representing an x \in [0,1] by a fast converging sequence \phi \in (N->Q), we now have to encode a continuous real-valued function f : [0,1] -> R. The first goal of this thesis is to devise, compare and explore different ways of accessing an unknown continuous real function via black-box queries that model the actual computation of f as supported by certified numerics like interval arithmetic or the iRRAM. Building on this, the second goal of this Master's Thesis is to determine the query complexity of the MAX-functional with respect to different protocols. In combination with a complexity analysis of the protocols themselves, this should yield more explicit quantitative and parameterized upper and lower bounds (compared to those presented in the beginning) on the complexity of MAX (and INT).
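    The oracle model sketched above is easy to mimic concretely; the following minimal sketch (illustrative names, and the precision bookkeeping is ours) computes f(x) = x^2 on [0,1] from an oracle delivering dyadic approximations of x:

```python
from fractions import Fraction

def sq_machine(phi, n):
    """A toy 'type-2 machine' for f(x) = x^2 on [0,1]: given an oracle
    phi with |phi(m) - x| <= 2^{-m}, output a rational q*q with
    |q*q - x^2| < 2^{-n}."""
    # |q^2 - x^2| = |q - x| * |q + x| <= 2^{-m} * 3 for q near x in [0,1],
    # so querying at precision m = n + 2 suffices: 3 * 2^{-(n+2)} < 2^{-n}.
    q = phi(n + 2)
    return q * q

# A valid oracle ("name") for x = 1/3: the nearest dyadic at each precision.
x = Fraction(1, 3)
phi = lambda m: Fraction(round(x * 2 ** m), 2 ** m)
approx = sq_machine(phi, 10)
```

    The black-box protocols studied in the thesis replace the oracle for a point x by query access to a whole function f, but the accounting of query precision against output precision is of the same kind.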

     

Logic Seminar 2009, 2010

  • August 6, 2010 at 14:00 in S215/201  
    René Hartung

    Computation with infinitely presented groups

    Abstract: We give an overview of the current methods for computing with a class of infinitely presented groups, the so-called finitely L-presented groups. These infinite presentations generalize finite presentations and allow interesting applications. In particular, our methods allow one to compute the lower central series quotients and the Dwyer quotients of the Schur multiplier.

     


     
  • July 13, 2010 at 14:00 in S215/201
    Russell Miller, City University of New York

    Computable fields and their algebraic closures

     

    Abstract: We present recent results about computable fields F, especially those algebraic over their prime subfields, and embeddings of them into their algebraic closures.  The discussion begins with the basic definitions and a review of Rabin's Theorem, and continues with more exact computability-theoretic comparisons, by Steiner and by the speaker, among the set of irreducible polynomials in F[X], the set of polynomials there with roots in F, and the image of F within computable presentations of its algebraic closure.  Finally, we will discuss the Blum-Shub-Smale model of computation on the real numbers, and explain how it changes the array of tools available for examination of these questions.

  • June 25, 2010 at 14:00 in S215/201
    Jaime Gaspar, TU Darmstadt

    Proof interpretations with truth

    Abstract: A proof interpretation I of a theory S in a theory T is a function mapping a formula A of S to a formula A_I(x) of T (with distinguished variables x) verifying: if S ⊢ A, then T ⊢ A_I(t) for some terms t extracted from a proof of A. Proof interpretations are used in:
    • consistency results (e.g., if ⊥_I = ⊥, then S ⊢ ⊥ implies T ⊢ ⊥, i.e., S is consistent relative to T);
    • conservation results (e.g., if T ⊢ A_I → A for Π⁰₂ formulas A, then S ⊢ A implies T ⊢ A, i.e., S is conservative over T for Π⁰₂ formulas);
    • unprovability results (e.g., if there are no terms t such that T ⊢ A_I(t), then S ⊬ A);
    • closure under rules (e.g., if S = T, (∃y A(y))_I(x) = A_I(x) and T ⊢ A_I(x) → A(x), then T ⊢ ∃y A(y) implies T ⊢ A(t) for some term t, i.e., T has the existence property);
    • extracting computational content from proofs (e.g., extracting t in the previous point).
    Closure under rules needs T ⊢ A_I(x) → A, which can be achieved by:
    • upgrading T to the characterization theory CT that proves ∃x A_I(x) ↔ A;
    • or hardwiring truth in I, obtaining I_t verifying T ⊢ A_{I_t}(x) → A.
    The first option does not work if:
    • CT is classically inconsistent (e.g., bounded proof interpretations);
    • or we want applications to theories weaker than CT.
    So we turn to the second option and present a method to hardwire truth (in proof interpretations of Heyting arithmetic satisfying some mild conditions) by defining:
    • a function c that replaces each subformula A of a formula by A ∧ A^c, where A^c is a "copy" of A;
    • an "inverse" function c^{-1} that replaces the "copies" A^c by the "originals" A;
    • I_t = c^{-1} ∘ I ∘ c.
    As examples we hardwire truth in:
    • modified realizability;
    • the Diller-Nahm functional interpretation;
    • bounded modified realizability;
    • the bounded functional interpretation;
    • the slash.

    This is based on joint work with Paulo Oliva.

  •  June 18, 2010 at 14:00 in S215/201
    Prof. Hajime Ishihara, Japan Advanced Institute for Science and Technology

    The monotone completeness theorem in constructive reverse mathematics

     Abstract: We investigate the monotone completeness theorem (MCT): every bounded monotone sequence of real numbers converges. We work within a subsystem of elementary analysis, EL, as a formal framework of constructive reverse mathematics, aiming at classifying various theorems in intuitionistic, constructive recursive and classical mathematics. We distinguish a Cauchy sequence with modulus from a Cauchy sequence without modulus.

    We give a logical principle which is stronger than LPO, and show that it is equivalent to the statement:

     

    (1) Every bounded monotone sequence of real numbers is a Cauchy sequence without modulus.

     

    We also give a function existence axiom which is weaker than the number-number choice for Pi0_1 formulas, and show that it is equivalent to the statement:

     

    (2) Every Cauchy sequence without modulus is a Cauchy sequence with modulus.

     Therefore MCT is divided into the logical statement (1), the function existence statement (2), and the Cauchy completeness of the reals which is constructively provable without countable choice.
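    The two notions of Cauchy sequence distinguished above can be written out explicitly (standard formulations; the notation is ours):

```latex
% Cauchy sequence with modulus: the rate M is given as a function.
\exists M\colon\mathbb{N}\to\mathbb{N}\ \forall k\ \forall m,n \ge M(k)\quad |x_m - x_n| \le 2^{-k}
% Cauchy sequence without modulus: only the existence of N is asserted.
\forall k\ \exists N\ \forall m,n \ge N\quad |x_m - x_n| \le 2^{-k}
```

    Constructively the two are not interchangeable: extracting M from the second statement is exactly the function existence principle (2) of the abstract.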


  • June 16, 2010 at 14:00 in S215/201
    Michal Stronkowski, Karls Universität Prag

    Finite axiomatization of quasivarieties of relational structures

    Abstract: Much attention was paid in the past to the finite axiomatization of varieties and quasivarieties of algebras. But the analogous problem for relational structures has appeared only sporadically. I will review what is known about this and add one new theorem. Namely, we prove that the graphs of monoids or groups (as relational structures) almost always generate non-finitely axiomatizable quasivarieties.
  • June 15, 2010 at 14:00 in S215/201
    Prof. Jimmie Lawson, Louisiana State University, Baton Rouge

    Clifford Algebras, Möbius Transformations, Vahlen Matrices, and Bruck Loops

     
  • June 4, 2010 at 14:00 in S215/201
    Prof. Klaus Meer, Brandenburgische TU Cottbus

    On the expressive power of CNF formulas of bounded Tree- and Clique-Width

    Abstract: We study the expressive power of certain families of polynomials over a field K. The latter are defined via Boolean formulas in conjunctive normal form. We attach in a natural way a graph to such a formula and study the resulting polynomial families when certain parameters of that graph, like tree- and clique-width, are assumed to be bounded.

  • May 21, 2010 at 14:00 in S215/201
    Priv.-Doz. Dr. Norbert Müller, Universität Trier

    Exact real arithmetic: real numbers from the perspective of computer science

     Abstract: The set of real numbers is, unlike the much more algorithmically tractable natural numbers, uncountable. As a consequence, even the definition of the notion of "computability" for real numbers is considerably more involved than the classical definition of computability for natural numbers via Turing machines.

    Implementations of this notion of computability are often called "exact real arithmetic"; the basis is usually an interval arithmetic with multiple-precision numbers, complemented by automated precision control. This makes numerical algorithms possible that would not be meaningful in the finite world of double-precision numbers.

     In the talk we present some of the principles of such an implementation and in particular consider, as examples, its application (and behaviour) to the "logistic map" and to simple differential equations.
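    The combination of interval arithmetic with automated precision control can be sketched in a few lines (a toy stand-in of our own, not the actual interface of the iRRAM): iterate the logistic map on exact rational interval endpoints and retry with a tighter input interval until the requested output width is reached.

```python
from fractions import Fraction

def step(lo, hi, r):
    """Exact image of [lo, hi] under x -> r*x*(1-x); the map is unimodal
    with its maximum at 1/2, so endpoints plus 1/2 suffice."""
    f = lambda x: r * x * (1 - x)
    cands = [f(lo), f(hi)]
    if lo <= Fraction(1, 2) <= hi:
        cands.append(f(Fraction(1, 2)))
    return min(cands), max(cands)

def logistic_enclosure(x0, r, steps, n):
    """Enclose the orbit point after `steps` iterations in an interval of
    width < 2^{-n}, retrying with a tighter input interval on failure
    (automated precision control, in toy form)."""
    prec = n
    while True:
        eps = Fraction(1, 2 ** prec)
        lo, hi = x0 - eps, x0 + eps
        for _ in range(steps):
            lo, hi = step(lo, hi, r)
        if hi - lo < Fraction(1, 2 ** n):
            return lo, hi
        prec += 1  # output too wide: refine the input precision

r, x0, steps = Fraction(4), Fraction(1, 3), 5
lo, hi = logistic_enclosure(x0, r, steps, n=10)
exact = x0
for _ in range(steps):
    exact = r * exact * (1 - exact)  # exact rational orbit for comparison
```

    The point of the chaotic logistic map as a benchmark is precisely that fixed double precision loses all digits after a few dozen iterations, while the refinement loop above keeps a guaranteed enclosure.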
  • February 12, 2010 at 14:00 in S215/201
    Jaime Gaspar, TU Darmstadt
     

    Slash and completeness 

     

    Abstract: Gödel's incompleteness theorems dealt a fatal blow to Hilbert's program. Despite this, there were some attempts to salvage the program. One of them, by Hilbert himself, was to add the omega-rule (if A(n) is provable for each fixed n, then "for all n, A(n)" is provable) to Peano arithmetic. This rule evades Gödel's incompleteness and gives a complete arithmetical theory.   Proof interpretations are proof-theoretic tools to prove consistency results, conservation results, closure under rules, and to extract computational content from proofs. The slash is a very simple toy case of a proof interpretation. However, the omega-rule boosts the slash, resulting in some interesting applications. We will talk about one of them: a very simple proof of the completeness of Peano arithmetic plus the omega-rule.

     
  •  POSTPONED! 

     Martin Otto, TU Darmstadt
    Acyclicity in hypergraph covers and applications 

     

    Abstract: Which degrees of acyclicity can be achieved in FINITE hypergraph covers? While actual acyclicity can always be had in generally infinite unfoldings, finite covers can only guarantee certain bounded forms of acyclicity at best.   I want to discuss new constructions of weakly N-acyclic covers (joint work with Vince Barany and Georg Gottlob, Oxford) and N-acyclic covers with a view to their use in the finite model theory of guarded logics.

     
  •  January 29, 2010 at 14:00 in S215/201
    Martin Ziegler, TU Darmstadt

    Expressiveness and Computational Complexity of Euclidean Quantum Logic 

     

    Abstract: Quantum logic generalizes, and in dimension one coincides with, Boolean logic. The satisfiability problem of quantum logic formulas is shown NP-complete in dimension two. We then extend these considerations to three and higher-dimensional Euclidean spaces R^d and C^d. For fixed d>2, quantum satisfiability turns out polytime-equivalent to the real feasibility of a multivariate quartic polynomial equation: a well-known problem lying (probably strictly) between NP and PSPACE. We finally investigate the problem over INdefinite finite dimensions and relate it to the real feasibility of quartic NONcommutative *-polynomial equations.

     
  •  November 27, 2009 at 14:00 in S215/201
    Peter Scheiblechner, Purdue 
     

     Comparison of Complexity over the Real vs. Complex Numbers 

     

    Abstract: We compare the complexity of certain problems over the reals with their corresponding versions over the complex numbers. We argue in particular that the problem of deciding whether a system of (real) polynomial equations has a real solution is strictly harder than asking for a complex solution, since it cannot be solved by a complex BSS-machine.   On the other hand, we describe a family of problems whose real versions are, in the model of algebraic decision trees, strictly easier than their complex versions.   To this end we prove a new lower bound for the decision complexity of a closed algebraic set X in C^n in terms of the sum of its (compactly supported) Betti numbers b_c(X), which is for the first time better than logarithmic in b_c(X).


     
  •  November 13, 2009 at 14:00 in S215/201
      Matthias Schroeder, Munich  

     Topology in computable analysis 

     

    Abstract: Computable Analysis studies computability over the real numbers and related spaces. Since computable functions are continuous in some sense, topology plays an important role in this theory. The cartesian closed category QCB of quotients of countably based topological spaces turns out to be an appropriate category for modelling the topological aspects of higher type computation.

     

    In this talk, we discuss topological properties of the Kleene-Kreisel continuous functionals and their surprising relationship to an open problem in exact real-number computation. This problem is whether two approaches in functional programming on higher type computability over the reals coincide.

     

    We show that the sequential topology on the space N^(N^N), which is the space of Kleene-Kreisel continuous functionals of type 2, is neither zero-dimensional nor regular. Moreover, we establish non-regularity of further function spaces formed in the cartesian closed category of sequential spaces. This is one of the categories from which QCB inherits its cartesian closed structure.

     
  •  November 6, 2009 at 14:00 in S215/201 
     Stephane Le Roux, LIX, Ecole Polytechnique  

     Sequential game theory: a formal and abstract approach 

     

    Abstract: In a game from game theory, a Nash equilibrium (NE) is a combination of one strategy per agent such that no agent can increase his payoff by unilaterally changing his strategy. Kuhn proved that all (tree-like) sequential games have NE. Osborne and Rubinstein abstracted over these games and Kuhn's result: they proved a *sufficient* condition on agent *preferences* for all games to have NE. I will present a *necessary and sufficient* condition, thus accounting for the game-theoretic frameworks that were left aside. The proof is *formalised* using Coq, and contrary to usual game theory it adopts an inductive approach to trees for definitions and proofs. By rephrasing a few game-theoretic concepts, by ignoring useless ones, and by characterising the proof-theoretic strength of Kuhn's/Osborne and Rubinstein's development, the talk also intends to clarify sequential game theory.

     

    Then I will sketch an alternative proof. Both proofs so far pre-process the preferences by topological sorting, which is surprising since it decreases the number of NE. Finally I will sketch two other proofs that pre-process the preferences only by transitive closure. The four proofs are all constructive and by induction, and I will compare them when seen as algorithms. The four proofs also constitute diverse viewpoints on the same concepts, the trial of diverse techniques against the same problem, and reusable techniques for similar structures. 
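    Kuhn's existence result mentioned above is constructive: backward induction over the game tree yields a (subgame-perfect, hence Nash) equilibrium. Here is a minimal sketch with our own tree representation, not the talk's Coq formalisation:

```python
def backward_induction(tree):
    """tree is either ('leaf', payoffs) or ('node', player, [subtrees]).
    Returns (choices along the induced play, resulting payoff vector);
    at each node the owning player maximises his own payoff coordinate."""
    if tree[0] == 'leaf':
        return [], tree[1]
    _, player, children = tree
    best = None
    for i, child in enumerate(children):
        play, pay = backward_induction(child)
        if best is None or pay[player] > best[1][player]:
            best = ([i] + play, pay)
    return best

# Player 0 moves first; choosing option 0 lets player 1 move next.
g = ('node', 0, [
        ('node', 1, [('leaf', (3, 1)), ('leaf', (0, 2))]),
        ('leaf', (2, 0)),
     ])
play, payoffs = backward_induction(g)
# player 1 would pick (0, 2), so player 0 takes the outside option (2, 0)
```

    The inductive structure of this recursion is exactly what makes the Coq-friendly inductive view of trees natural for the proof.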


     
  •  October 9, 2009 at 14:00 in S215/201  
    Jaime Gaspar, TU Darmstadt 
     

     Proof interpretations with truth 

     

    Abstract: Proof interpretations are proof-theoretic tools to extract computational content (such as computable choice functions) from proofs. In some cases this only works (at least without strengthening the theories involved) for proofs of theorems of a particular logical form, but we wish to extract computational content from proofs of arbitrary theorems. Interpretations with truth have been developed to cope with this. We present two new methods on how to do hardwire truth and apply them to new proof interpretations.

     

    In a first part we make a brief introduction to proof interpretations and the extraction of computational content from proofs. Then in a second part we present our two methods. Finally in a third part we apply them in some case studies.

     

    This is joint work with Paulo Oliva.

     
  •  September 18, 2009 at 14:00 in S215/201 
     Martin Ziegler, TU Darmstadt

     Real Computation with Least Discrete Advice: A Complexity Theory of Nonuniform Computability 

     

    Abstract: It is folklore particularly in numerical and computer sciences that, instead of solving some general problem f:A->B, additional structural information about the input x in A (that is any kind of promise that x belongs to a certain subset A' of A) should be taken advantage of. Some examples from real number computation show that such discrete advice can even make the difference between computability and uncomputability. We turn this into a both topological and combinatorial complexity theory of information, investigating for several practical problems how much advice is necessary and sufficient to render them computable. Specifically, finding a nontrivial solution to a homogeneous linear equation A*x=0 for a given singular real NxN-matrix A is possible when knowing rank(A)=0,1,...,N-1; and we show this to be best possible. Similarly, diagonalizing (i.e. finding a BASIS of eigenvectors of) a given real symmetric NxN-matrix is possible when knowing the number of distinct eigenvalues: an integer between 1 and N (the latter corresponding to the nondegenerate case). And again we show that N-fold (i.e. roughly log N bits of) additional information is indeed necessary in order to render this problem (continuous and) computable; whereas for finding SOME SINGLE eigenvector of A, providing the truncated binary logarithm of the least-dimensional eigenspace of A--i.e. Theta(log N)-fold advice--is sufficient and optimal.

     
  • September 4, 2009 at 14.00 in S215/201 
    Vassilis Gregoriades, Athens  

    Effective Descriptive Set Theory and Applications in Analysis 

     

    Abstract: Descriptive Set theory is the area of mathematics which studies the structure of definable sets in separable and complete metric spaces. A central part of this area is Effective Descriptive Set Theory (for shortness Effective Theory) which uses ideas from Logic and in particular from Recursion Theory. This area provides a deeper understanding of the subject and has applications in Analysis. We begin this lecture with a brief summary of the basic concepts and we give a new approach to Effective Theory which yields some surprising results. Then we present some connections with General Topology; all theorems we talk about involve notions from both areas. Finally we give some results in Banach space Theory. Here all theorems involve only classic (i.e., non-effective) notions but still Effective Theory is essential for their proof.

     
  • August 14, 2009 at 14:00 in S215/201 
    Richard Garner, Cambridge 
     

    Ionads: a generalised notion of topological space

     
  • July 17, 2009 at 14:00 in S215/201 
    Martin Otto, TU Darmstadt 
     

    Acyclicity in hypergraph covers

     
  • July 10, 2009 at 14:00 in S215/201 
    Pavol Safarik, TU Darmstadt

    On the Computational Content of Bolzano-Weierstraß

     

    Abstract: We will apply the methods developed in the field of 'proof mining' to the Bolzano-Weierstraß theorem BW and calibrate the computational contribution of using this theorem in a proof of a combinatorial statement. Using a simple proof based on weak König's lemma and arithmetical comprehension we provide an explicit solution of the Gödel functional interpretation as well as the monotone functional interpretation of BW for the product space \prod_{i \in N} [-k_i, k_i] for a given sequence of rational numbers (k_i) \subseteq Q (with the standard product metric). In fact, we will use it to get optimal program and bound extraction theorems for proofs based on fixed instances of BW, i.e. for BW applied to fixed sequences in \prod_{i \in N} [-k_i, k_i].

     
  • July 3, 2009 at 14:00 in S215/201 
    Anuj Dawar, Cambridge 
     

    Logics with Rank Operators

     

    Abstract: It is a long-standing open problem in descriptive complexity theory whether there is a logic that expresses exactly the polynomial-time decidable properties of unordered structures. It has been known since the early 90s that fixed-point logic with counting (FPC) is not sufficient and there have been several proposals for more expressive logics. Taking our inspiration from recent results that show that natural linear-algebraic operations are not definable in FPC, we propose FPR - an extension of fixed-point logic with an operator for matrix rank. We show that this is strictly more expressive than FPC and can express the polynomial-time problems that have been shown to be inexpressible in FPC. We also show that FO+R, the extension of first-order logic with rank operators, is surprisingly expressive. It can express some forms of transitive closure and, on ordered structures, captures the complexity classes Mod_p L.

     

    In this talk, I will give the history and general background to the question and explain the context of the new results. I will also point to several interesting open problems. (Joint work with Martin Grohe, Bjarki Holm and Bastian Laubner). 
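    The rank operator underlying FPR is matrix rank over a finite field, which can genuinely differ from rank over the rationals; a minimal sketch (standard Gaussian elimination, our own helper) illustrating this for GF(2):

```python
def rank_mod_p(matrix, p):
    """Rank of an integer matrix over the finite field GF(p), p prime,
    computed by Gaussian elimination modulo p."""
    m = [[x % p for x in row] for row in matrix]
    rank, cols = 0, len(m[0]) if m else 0
    for c in range(cols):
        # find a pivot in column c at or below row `rank`
        piv = next((r for r in range(rank, len(m)) if m[r][c]), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][c], p - 2, p)  # inverse via Fermat's little theorem
        m[rank] = [(x * inv) % p for x in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][c]:
                f = m[r][c]
                m[r] = [(a - f * b) % p for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

# Over GF(2) the rows (1,1,0), (0,1,1), (1,0,1) sum to zero, so the rank
# is 2, although the same matrix has rank 3 over the rationals.
r2 = rank_mod_p([[1, 1, 0], [0, 1, 1], [1, 0, 1]], 2)
```

    It is this dependence on the field that lets rank operators express properties (such as solvability of linear systems over GF(p)) that counting alone cannot.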


Kontakt

Technische Universität Darmstadt
Fachbereich Mathematik
Arbeitsgruppe Logik

S2|15
Schloßgartenstraße 7
64289 Darmstadt

Tel.: +49-(0)6151-164686
Fax: +49-(0)6151-163317

logik@mathematik.tu-darmstadt.de

Secretariat
Room S2|15-206
Barbara Bergsträßer
Betina Schubotz

Office hours:
Mon-Fri 10:00-15:00


 

Building S2|15 can be found on the TUD site map (Lageplan), section Stadtmitte Nord, or under Anreise (directions).
