David Danks (17th of February)
Discussions of the ethical (and societal) impact of AI often implicitly assume that ethical issues arise only once the AI is deployed or used. If AI is "just math" or "just a tool", then one might think that ethics is simply irrelevant to the research and development of AI systems. In contrast, I will argue that ethical issues arise throughout every step of AI creation, including research efforts that seem to be outside the scope of ethics. That is, ethics is an intrinsic part of AI, not something that arises only after the fact. Throughout this argument, I will provide examples of practical tools and practices to improve the ethics of one's AI systems. These insights and examples will apply to technology development in general, including fundamental mathematical research.
Bob Coecke (24th of February)
This talk will be fairly high-level and requires no prior technical background. We will introduce the notion of compositional intelligence that my Oxford-based CQ-team is trying to achieve.
In particular, starting from the compositional match between natural language and quantum, which also extends to other domains [1], we will argue that a new generation of AI can emerge when fully pushing this analogy while exploiting the completeness of categorical quantum mechanics [2].
We also discuss the notion of compositionality itself, which takes many different forms within many different contexts, and how the one we need goes beyond previous ones [3].
Glynn Winskel (3rd of March)
This talk bridges two major paradigms in computation: the functional, at base computation from input to output, and the interactive, where computation reacts to its environment while underway. Central to any compositional theory of interaction is the dichotomy between a system and its environment. Concurrent games and strategies address the dichotomy in fine detail, very locally, in a distributed fashion, through distinctions between Player moves (events of the system) and Opponent moves (those of the environment). A functional approach has to handle the dichotomy much more ingeniously, through its blunter distinction between input and output. This has led to a variety of functional approaches, specialised to particular interactive demands. Through concurrent games we can more clearly see what separates and connects the differing paradigms, and show how:
Zoé Christoff (17th of March)
This talk gives an introduction to the use of logical tools in understanding social influence and social network phenomena. Individuals often form their opinions by interpreting the behavior of others around them, and by reasoning about how those others have formed their opinions. This leads to several well-known herd phenomena, such as informational cascades, the bystander effect, pluralistic ignorance, bubbles, and polarization. For instance, in the case of informational cascades, agents in a sequence imitate one another's choices despite having diverging private evidence, sometimes leading the whole community to make the worst possible choice. Similar cascading mechanisms are at the heart of diffusion phenomena in social networks.
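For a concrete feel for the cascade mechanism, here is a toy simulation (a standard "count the public votes" model, my Python illustration rather than the talk's logical framework):

```python
# Each agent receives a noisy private signal about a binary state, sees all
# earlier choices, and follows the public tally, using its own signal only
# to break ties.
import random

def cascade(n_agents, p_correct=0.7, seed=0):
    random.seed(seed)
    true_state = 1
    choices = []
    for _ in range(n_agents):
        signal = true_state if random.random() < p_correct else 1 - true_state
        lead = choices.count(1) - choices.count(0)
        score = lead + (1 if signal == 1 else -1)
        choices.append(1 if score > 0 else 0 if score < 0 else signal)
    return choices

# Once the lead reaches 2, no private signal can override the tally, so all
# later agents imitate: a cascade has formed. With unlucky early signals,
# the whole community can lock in the wrong choice.
print(cascade(12))
```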
I first show how epistemic logic modelling allows us to make precise the conditions for such cascades to form, as well as to prove their inescapability. I then turn to what logical tools can do for analysing information flow and influence in social networks. I illustrate how extremely simplified models can yield surprising new results, for instance about stabilization conditions of diffusion processes. Finally, I discuss how logic might help us further understand how social networks affect collective decision-making processes.
Dominic Verity (24th of March)
You may well have heard the rumour that ∞-category theory is “really just like category theory with a little homotopy theory thrown in”. Inspired by that comment, you might even have headed to a book on ∞-categories or to the nLab to find out more, only to find that things in the ∞-world are far from that simple.
Firstly you will have discovered that there is no universal agreement on what an ∞-category actually is. Instead, you've been met with a proliferation of ∞-categorical models: simplicially enriched categories, quasi-categories, (iterated) complete Segal spaces, complicial sets, Θ-spaces, and so on. Then you discover, to your horror, that each one of these models supports its own particular interpretations of common categorical concepts, some of which appear far more familiar than others. Finally, you realise that the relationships between the category theories developed for each model are also much less than clear.
If this weren’t enough to leave anyone reaching for the Aspirin bottle, there is more bad news to come. It quickly becomes clear that there exists quite a large gap between the informal language used to describe ∞-categorical arguments, in blog posts and wiki articles, and the corresponding formal arguments expressed in any particular model of ∞-categories. Most of these are given either as concrete constructions with simplicial (or increasingly cubical) objects or as abstract model category theoretic arguments. Now I have no doubt that we “all” love a good simplicial argument, but encoding things in this way does very little to aid our categorical intuition.
There must be a better way!
In this talk we discuss recent developments in ∞-technology that seek to address these issues. Specifically, we review the current state of the art in model-agnostic ∞-category theory [2], which seeks to provide a unified account of ∞-category theory freed from the straitjacket of a specific model. This allows both for the model-independent, synthetic development of ∞-categorical results and for the transport of analytically derived results from one model to another. We shall also see how these techniques are rapidly being reframed in type-theoretic terms, through the development of directed type theory [1,3], thereby promising a more natural language for the formal development of ∞-categorical concepts and results.
Maaike Zwart (31st of March)
Composing monads via distributive laws is tricky business, as too often small mistakes are overlooked. After failing to find a distributive law for the list monad over itself, I started proving that finding such a distributive law is impossible. At the same time, Dan Marsden was studying the famous counterexample by Gordon Plotkin that probability does not distribute over non-determinism. Together we developed an algebraic method to find and generalise such counterexamples, resulting in our no-go theorems. In this talk I will explain the main ideas behind our method, illustrated by a proof that 'plus does not distribute over times'. Then, I will highlight some crucial steps in our method, which tell us which types of monads are at "high risk" of failing to compose with other monads. Lastly (time permitting), I will say a few words about my current research on combining monads with guarded recursion.
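For orientation, the object whose nonexistence such no-go theorems establish is standard (recalled here for reference; this is textbook material, not a result of the talk): a distributive law of one monad (S, η^S, μ^S) over another (T, η^T, μ^T) is a natural transformation subject to Beck's four axioms.

```latex
% A distributive law is a natural transformation
%   \lambda : TS \Rightarrow ST
% compatible with both units and both multiplications:
\[
\lambda \circ \eta^{T}S = S\eta^{T}, \qquad
\lambda \circ T\eta^{S} = \eta^{S}T,
\]
\[
\lambda \circ \mu^{T}S = S\mu^{T} \circ \lambda T \circ T\lambda, \qquad
\lambda \circ T\mu^{S} = \mu^{S}T \circ S\lambda \circ \lambda S.
\]
```

Roughly, the algebraic method derives from these axioms equations between terms of the two algebraic theories that particular pairs of monads (such as list over list, or plus over times) cannot satisfy.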
Giuseppe Rosolini (7th of April)
The elementary quotient completion of an elementary doctrine generalises the exact completion of a category with finite products and weak equalisers. I intend to present a characterisation of those elementary quotient completions which produce a quasitopos. As a corollary one gets a characterisation of the elementary quotient completions which give an elementary topos. Our work is reminiscent of, and gathers ideas from, the work of others: in particular, Carboni and Vitale's characterisation of exact completions in terms of their projective objects, Carboni and Rosolini's characterisation of locally cartesian closed exact completions (also in the revision by Emmenegger), and Menni's characterisation of the exact completions which are elementary toposes. This is joint work with Maria Emilia Maietti and Fabio Pasquali.
Leo McElroy (14th of April)
People learn best by making things they care about, which they share with others. The role of an educational technologist is to communicate this principle through tools provided to learners. A primary item in the toolmaker's toolkit is the microworld, a concept developed by Seymour Papert as a "growing place for a specific species of powerful ideas or intellectual structures." A microworld presents someone with a small set of ideas in an explorable environment which helps the learner recompose those ideas for their own personal expression. Following the development of a collection of microworlds, we will uncover the meta-structure of the "growing place" for intellectual structures, with the aim of identifying a framework for making tools, for making sense, by making things.
Martín Escardó (28th of April)
If a type X is finite, then for every map p: X → 𝟚 we can tell, by exhaustive search, whether p has a root (we can exhibit x with p x = 0) or not (for every x : X we have that p x = 1). Perhaps surprisingly, there are infinite types X for which this decision is constructively possible. We call such types compact. It turns out that there are plenty of infinite compact types in any 1-topos. A basic, but perhaps surprising, example is the type ℕ∞ of decreasing infinite binary sequences. There are plenty more, and the purpose of this talk is to exhibit them.

No matter how hard we tried to avoid it, all searchable types we could find happen to be well-ordered. But this is just as well, because we can use ordinals to measure how complicated the compact types that we can write down in constructive univalent type theory are. Much of the above discussion can be carried out in (the internal language of) a 1-topos. But then some of our constructions go beyond that, by considering the type of all ordinals in a universe, which is itself an ordinal in the next universe; this requires univalence and hence moves us to the realm of ∞-toposes.

It remains to address the notion of total separatedness of a type X. This is the condition that there are plenty of maps X → 𝟚 to "separate the points of X" in a suitable sense. I'll rigorously define this and exhibit plenty of compact, totally separated, well-ordered types in univalent type theory. I will mention Johnstone's topological topos as an example of where "compact" and "totally separated" acquire their usual topological meaning. There is an old, non-constructive theorem of topology that characterizes the compact countable Hausdorff spaces as the countable ordinals with the interval topology. In the topological topos, assuming a non-constructive meta-language to reason about it, the searchable objects we get are of this form.
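To make the opening claim concrete, here is a runnable sketch of the compactness of ℕ∞ (my Python illustration, not code from the talk): an exhaustive root search over an infinite type that terminates because a total predicate can only inspect finitely many bits of its argument.

```python
# Elements of N∞ are decreasing binary sequences, represented as functions
# from index to bit: 1^n 0^ω encodes the natural number n, and the
# constant-1 sequence encodes ∞.

def finite(n):
    """The element 1^n 0^ω of N∞."""
    return lambda i: 1 if i < n else 0

def root_search(p):
    """Decide whether a total p : N∞ → {0,1} has a root (an x with
    p(x) == 0) and produce a candidate witness."""
    def candidate(i):
        # bit i stays 1 as long as no finite element up to i is a root,
        # so candidate is the least finite root if one exists, else ∞
        return 0 if any(p(finite(k)) == 0 for k in range(i + 1)) else 1
    return p(candidate) == 0, candidate

# A predicate whose only root is 3 = 111000...; the search finds it.
has_root, x = root_search(lambda x: 0 if x(2) == 1 and x(3) == 0 else 1)
print(has_root, [x(i) for i in range(6)])  # True [1, 1, 1, 0, 0, 0]
```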
John Terilla (5th of May)
Language is essential to being human. Without it, no aspect of modern life could exist. But what is language exactly? How precisely does meaning emerge from sequences constructed from a small collection of basic symbols or sounds? Biologists, linguists, and philosophers have been debating this question—still unanswered—for thousands of years.
Now, at a pivotal moment when intelligent machines are beginning to acquire human language skills, new insights into the nature of language are emerging. Some old beliefs about language can probably be cast aside, and achievements in artificial intelligence are inspiring new ways of thinking about language.
I will present an argument for rethinking language and give a tour of some relevant mathematical ideas. The talk will be prepared keeping in mind the four themes of the Topos Institute Colloquium: ethics and societal impact of mathematics, applied category theory, foundation models, and technology and tools.
Andrej Bauer (12th of May)
Joint work with James E. Hanson from the University of Maryland.
In 1874 Georg Cantor published a theorem stating that every sequence of reals is avoided by some real, thereby showing that the reals are not countable. Cantor's proof uses classical logic. There are constructive proofs, although they all rely on the axiom of countable choice. Can the real numbers be shown uncountable without excluded middle and without the axiom of choice? An answer has not been found so far, although not for lack of trying.
We show that there is a topos in which the real numbers are countable, i.e., there is an epimorphism from the object of natural numbers to the object of Dedekind reals. Therefore, higher-order intuitionistic logic cannot show the reals to be uncountable.
The starting point of our construction is a sequence of reals, shown to exist by Joseph S. Miller from University of Wisconsin–Madison, with a strong counter-diagonalization property: if an oracle Turing machine computes a specific real number, when given any oracle representing Miller's sequence, then the number appears in the sequence. One gets the idea that the reals ought to be countable in a realizability topos built from Turing machines with oracles for Miller's sequence. However, we cannot just use ordinary realizability, because all realizability toposes validate countable choice and consequently the uncountability of the reals.
To obtain a topos with countable reals, we define a variant of realizability which we call parameterized realizability. First we define parameterized partial combinatory algebras (ppcas), which are partial combinatory algebras whose evaluation depends on a parameter. We then define parameterized realizability toposes; in these, realizers witness logical statements uniformly in the parameters. The motivating example is the parameterized realizability topos built from the ppca of oracle Turing machines parameterized by oracles for Miller's sequence. In this topos, Miller's sequence is the desired epimorphism from the natural numbers to the Dedekind reals.
Alexandra Silva (19th of May)
In the first part of this talk, we discuss active learning algorithms for weighted automata over a semiring. We show that a variant of Angluin's seminal L* algorithm works when the semiring is a principal ideal domain, but not for general semirings such as the natural numbers. In the second part, we present some preliminary work on active learning for probabilistic automata, and in particular discuss what the setup of the problem looks like and how that leads (or not) to impossibility results.
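As a point of reference, here is a minimal sketch of the Boolean core of Angluin's L* (my Python illustration with a hypothetical target language; the weighted variant replaces the {0,1} table entries with semiring values and row equality with module-theoretic dependence, which is roughly where the principal-ideal-domain hypothesis enters, hedged: see the talk for the precise statement):

```python
# Observation-table skeleton of L*: rows are indexed by prefixes, columns
# by suffixes, entries by membership queries; distinct rows become states.
def member(word):
    # hypothetical target: words over {a, b} with an even number of 'a's
    return word.count("a") % 2 == 0

ALPHABET = ["a", "b"]

def row(prefix, suffixes):
    return tuple(member(prefix + e) for e in suffixes)

def close(prefixes, suffixes):
    """Extend the prefix set until the table is closed: every one-letter
    extension of a prefix has the row of some existing prefix."""
    while True:
        rows = {row(p, suffixes) for p in prefixes}
        new = [p + x for p in prefixes for x in ALPHABET
               if row(p + x, suffixes) not in rows]
        if not new:
            return prefixes
        prefixes = prefixes + [new[0]]

prefixes = close([""], [""])
for p in prefixes:
    print(repr(p), row(p, [""]))
# A full learner also enforces consistency, builds a hypothesis automaton
# from the distinct rows, and adds counterexamples from equivalence queries.
```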
Tai-Danae Bradley (26th of May)
This talk features a small connection between information theory, algebra, and topology—namely, a correspondence between Shannon entropy and derivations of the operad of topological simplices. We will begin with a brief review of operads and their representations, with topological simplices and the real line as the main example. We then give a general definition for a derivation of an operad in any category with values in an abelian bimodule over the operad. The main result is that Shannon entropy defines a derivation of the operad of topological simplices, and that for every derivation of this operad there exists a point at which it is given by a constant multiple of Shannon entropy. We show this is compatible with, and relies heavily on, a well-known characterization of entropy given by Faddeev in 1956 and a recent variation given by Leinster.
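The derivation property at stake can be checked numerically; the following snippet (my Python illustration, not part of the talk) verifies the chain rule that expresses Shannon entropy's compatibility with operadic composition of simplices:

```python
# Check that H(p ∘ (q_1, ..., q_n)) = H(p) + Σ_i p_i · H(q_i), the
# Leibniz-like rule behind "entropy is a derivation of the simplex operad".
from math import log

def H(p):
    return -sum(x * log(x) for x in p if x > 0)

def compose(p, qs):
    # operadic composition of simplices: the joint distribution (p_i * q_ij)
    return [pi * qij for pi, q in zip(p, qs) for qij in q]

p = [0.5, 0.5]
qs = [[0.2, 0.8], [1.0]]
lhs = H(compose(p, qs))
rhs = H(p) + sum(pi * H(q) for pi, q in zip(p, qs))
print(abs(lhs - rhs) < 1e-12)  # True
```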
Thierry Coquand (2nd of June)
In the introduction of his book on Higher Topos Theory, Jacob Lurie motivates this theory by the fact that it allows an elegant and general treatment of sheaf cohomology. It was realised early on that these ideas could be expressed in the setting of univalent foundations/homotopy type theory (cf. Mike Shulman's blog post on cohomology, 24 July 2013). I will try to explain recent insights which show that this can be done in a perhaps surprisingly direct way. Furthermore, all of this can be formulated in a constructive meta-theory, avoiding the non-effective notion of injective resolutions.
Nicolas Behr (9th of June)
This presentation is based upon [1] (joint work with R. Harmer and J. Krivine).
Categorical rewriting theory is a research field in both computer science and applied category theory, with a rich history spanning over 50 years of active developments, starting with the pioneering work of Ehrig in the 1970s.
In this talk, I will present recent results [1] on a novel foundation for reasoning about rewriting theories via so-called compositional rewriting double categories (crDCs). The design principles followed in this approach are the typical "unify and conquer" strategy of applied category theory and a particular variant of the notion of compositionality.
To wit, crDCs permit the uniform formulation of a wide variety of categorical rewriting semantics: the horizontal category of a given semantics models the rewriting rules, the vertical category models matchings and co-matchings of rules into objects, and the squares model so-called direct derivations (i.e., individual rewriting steps). Even before considering compositionality, it is already noteworthy that for crDCs to be well-defined, strong constraints are imposed upon the horizontal and vertical categories and the squares of the crDC, which in effect suggest categories with adhesivity properties (e.g., toposes, quasi-toposes, adhesive HLR categories, ...) as natural starting points for constructing crDCs. In this sense, the notion of crDCs justifies, from a clear high-level mathematical perspective, the by now standard approach to categorical rewriting theories developed over the past 20 years, based upon categories with adhesivity properties (starting with the seminal work of Lack and Sobocinski on adhesive categories).
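Schematically (my rendering of the division of labour described above, not a diagram reproduced from [1]), a single square of a crDC can be pictured as follows:

```latex
% r is a rewriting rule (horizontal), m and m' are a matching and a
% co-matching of the rule into objects (vertical), and the square d is a
% direct derivation rewriting X into Y along r (schematic only).
\[
\begin{array}{ccc}
O & \xrightarrow{\;\;r\;\;} & O' \\[2pt]
{\scriptstyle m}\big\downarrow & \;\Downarrow d\; & \big\downarrow{\scriptstyle m'} \\[2pt]
X & \xrightarrow[\;\;r'\;\;]{} & Y
\end{array}
\]
```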
Compositionality, on the other hand, is a much deeper mathematical property that a categorical rewriting theory may carry. This property entails the existence of so-called Concurrency and Associativity Theorems. The former concerns the ability to reason about two-step rewriting sequences by implementing the overall effect of the two steps as a composite rule applied in a single rewriting step, and vice versa. The Associativity Theorem then implies that the operation of forming compositions of rewriting rules is, in a certain sense, associative. Compositionality is a necessary and crucial ingredient in the stochastic mechanics and rule algebra formalism developed by Behr et al. since 2015, which provides a mathematically fully consistent and universal formalism for continuous-time [2] and discrete-time [3] Markov chains (central to applications of rewriting to bio- and organo-chemistry, social network modeling, etc.) and for rule-algebraic combinatorics [4].
A central result of [1] is then that a sufficient set of conditions on a double category to model a *compositional* rewriting theory consists in requiring certain fibrational properties of the vertical source and target functors from the squares of the double category, namely that they are (residual) multi-opfibrations. I will sketch how these fibrational properties, in conjunction with the other crDC axioms, yield a completely uniform proof of both the Concurrency and the Associativity Theorems, and, time permitting, how a large variety of categorical rewriting semantics indeed fall under the umbrella of our novel crDC formalism.
Finally, I will provide an overview of open questions and potential fruitful cross-connections of crDC theory with the TIC and broader ACT communities.
Christoph Benzmueller (16th of June)
Symbolic knowledge representation and reasoning (KR&R) is a key aspect of human intelligence. Among other things, it enables scientists to explore new theories, declaratively describe them, and share them with colleagues. In the natural sciences, abstract theories often arise from observations and experiments; in other fields, such as metaphysics, they may result from pure thought experiments, possibly without data from which to start (and learn from). Strong AI without explicit symbolic KR&R capabilities thus seems unthinkable. But how can the exploration of abstract theories, especially fundamental theories of metaphysics and mathematics, including logical formalisms, be fruitfully supported on computers?
In this talk, I review recent contributions in which a logico-pluralistic KR&R methodology and infrastructure, called LogiKEy, has been successfully applied to the exploration and assessment of foundational theories in metaphysics and mathematics. One such exploratory study on the axiomatic foundations of category theory, conducted jointly with Dana Scott and students at FU Berlin, will be described in some more detail. Since LogiKEy also supports exploration and assessment of ethical and legal theories, it also has applications in the areas of trusted AI and AI&Law.
Moshe Vardi (23rd of June)
Over the past decade Artificial Intelligence, in general, and Machine Learning, in particular, have made impressive advancements in image recognition, game playing, natural-language understanding, and more. But there have also been several instances where we saw the harm that these technologies can cause when they are deployed too hastily. A Tesla crashed on Autopilot, killing the driver; a self-driving Uber crashed, killing a pedestrian; and commercial face-recognition systems performed terribly in audits on dark-skinned people. In response, there has been much recent talk of AI ethics. Many organizations have produced AI-ethics guidelines, and companies publicize their newly established responsible-AI teams.
But talk is cheap. "Ethics washing" — also called “ethics theater” — is the practice of fabricating or exaggerating a company’s interest in equitable AI systems that work for everyone. An example is when a company promotes “AI for good” initiatives with one hand, while selling surveillance tech to governments and corporate customers with the other. I will argue that the ethical lens is too narrow. The real issue is how to deal with technology's impact on society. Technology is driving the future, but who is doing the steering?
Noam Zeilberger (30th of June)
Joint work with Paul-André Melliès.
In "Functors are Type Refinement Systems" [1], we argued for the idea that rather than being modelled merely as categories, type systems should be modelled as functors p : D → T from a category D whose morphisms are typing derivations to a category T whose morphisms are the terms corresponding to the underlying subjects of those derivations. One advantage of this fibrational point of view is that the notion of typing judgment receives a simple mathematical status, as a triple (R,f,S) consisting of two objects R, S in D and a morphism f in T such that p(R)=dom(f) and p(S)=cod(f). The question of finding a typing derivation for a typing judgment (R,f,S) then reduces to the lifting problem of finding a morphism α : R → S such that p(α)=f. We developed this perspective in a series of papers, and believe that it may be usefully applied to a large variety of deductive systems, beyond type systems in the traditional sense. In this work [2], we focus on derivability in context-free grammars, a classic topic in formal language theory with wide applications in computer science.
The talk will begin by explaining how derivations in any context-free grammar G may be naturally encoded by a functor of operads p : Free S → W[Σ] from a freely generated operad into a certain “operad of spliced words”. This motivates the introduction of a more general notion of context-free grammar over any category C, defined as a finite species S equipped with a color denoting the start symbol, and a map of species φ : S → W[C] into the operad of spliced arrows in C, which induces a unique functor p : Free S → W[C] generating a context-free language of arrows in C. We will see that many standard concepts and properties of context-free grammars and languages can be formulated within this framework, thereby admitting simpler analysis, and that parsing may indeed be profitably considered from a fibrational perspective, as a lifting problem along a functor from a freely generated operad.
The second half of the talk will be devoted to a new proof of the Chomsky-Schützenberger Representation Theorem. An important ingredient is the identification of a left adjoint C[−] : Operad → Cat to the operad of spliced arrows functor W[−] : Cat → Operad. This construction builds the contour category C[O] of any operad O, whose arrows have a geometric interpretation as “oriented contours” of the operations of O. A direct consequence of the contour / splicing adjunction is that every finite species equipped with a color induces a universal context-free grammar, generating a language of tree contour words. Finally, we prove a generalization of the Chomsky-Schützenberger Representation Theorem, establishing that any context-free language of arrows over a category C is the functorial image of the intersection of a C-chromatic tree contour language and a regular language.
Josef Urban (7th of July)
The talk will give a brief overview of recent methods that combine learning and deduction over large formal libraries. I will mention the "hammer" linkups between interactive theorem provers (ITPs) and automated theorem provers (ATPs), systems that learn and perform direct tactical guidance of ITPs, learning of premise selection over large libraries, and learning-based guidance of saturation-style and tableau-style ATPs trained over large proof corpora. I will also mention feedback loops between proving and learning in this setting, and show some autoformalization and conjecturing experiments.
Andrea Censi (25th of August)
I will present some very recent and in-progress work about dealing with “negative information” categorically. This need arises naturally in applications. For example, in motion planning problems, providing an optimal solution is the same as giving a feasible solution (the “positive” information) together with a proof of the fact that there cannot be feasible solutions better than the one given (the “negative” information). We model negative information by introducing the concept of “norphisms”, as opposed to the positive information of morphisms. A “nategory” is a category that has “Nom”-sets in addition to hom-sets, and specifies the compatibility rules between norphisms and morphisms. With this setup we can choose to work in “coherent” “subnategories”: subcategories that describe a potential instantiation of the world in which all morphisms and norphisms are compatible. We derive the composition rules for norphisms in a coherent subnategory; we show that norphisms do not compose by themselves, but rather they need to use morphisms as catalysts. We have two distinct rules of the type morphism + norphism → norphism. We then show that those complex rules for norphism inference are actually as natural as the ones for morphisms, from the perspective of enriched category theory.
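To indicate the shape of these rules (an illustrative, hedged reading of the description above; the precise rules are the ones derived in the work itself): if a norphism n : A ⇸ C is read as evidence against morphisms from A to C, then a morphism on either side acts as a catalyst propagating the obstruction.

```latex
% Two rules of type "morphism + norphism -> norphism" (illustrative only):
% if some g : B -> C existed, then g ∘ f : A -> C would contradict n, and
% dually for the second rule.
\[
\frac{f : A \to B \qquad n : A \nrightarrow C}{B \nrightarrow C}
\qquad\qquad
\frac{g : B \to C \qquad n : A \nrightarrow C}{A \nrightarrow B}
\]
```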
This is joint work with Dr. Jonathan Lorand and Ph.D. student Gioele Zardini.
Chad Scherrer (1st of September)
Statistics is often framed in terms of probability distributions. Distributions are special cases of measures; we find that working in these more general terms leads more naturally to a composable system. For example, a univariate probability density function is defined relative to Lebesgue measure. Bayesian inference often leaves us with an unnormalized posterior, which (until normalized) is not a distribution at all, but a measure.
The MeasureTheory.jl ecosystem takes a principled approach to design, identifying primitives (measures like Lebesgue and Counting measure that cannot be described in terms of other measures), and a rich set of combinators for building new measures from existing ones. This approach gives good performance and makes it easy to describe measures and related structures (kernels, likelihoods, etc) that are awkward or impossible to express in other systems.
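To illustrate the primitives-plus-combinators idea in miniature (a conceptual Python sketch; this is not MeasureTheory.jl's actual API, and all names here are mine):

```python
# A measure is either a primitive or a log-density taken over a base
# measure; distributions and unnormalized posteriors then have the same type.
import math

class Lebesgue:
    def logdensity(self, x):
        return 0.0  # density w.r.t. itself

class WithDensity:
    """Combinator: a new measure given by a log-density over a base measure."""
    def __init__(self, base, logdens):
        self.base, self.logdens = base, logdens
    def logdensity(self, x):
        return self.logdens(x) + self.base.logdensity(x)

# A univariate normal, defined relative to Lebesgue measure:
normal = WithDensity(Lebesgue(),
                     lambda x: -0.5 * x * x - 0.5 * math.log(2 * math.pi))

# An unnormalized posterior is a measure too: the prior reweighted by a
# (toy, hypothetical) likelihood, with no normalization step required.
posterior = WithDensity(normal, lambda x: -((1.3 - x) ** 2))
print(posterior.logdensity(0.5))
```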
After introducing some fundamental concepts relevant to the field of measure theory, this talk will give an overview of the MeasureTheory.jl library, finally closing with a brief introduction to Tilde.jl, a probabilistic programming language that specifically targets MeasureTheory.jl.
David Jaz Myers (8th of September)
Orbifolds are smooth spaces where the points may have finitely many internal symmetries. These spaces often arise as quotients of manifolds by the actions of discrete groups — that is, in situations with discrete symmetries, such as in crystallography.
Formally, the notion of orbifold has been presented in a number of different guises — from Satake's V-manifolds to Moerdijk and Pronk's proper étale groupoids — which do not on their face resemble the informal definition. The reason for this divergence between formalism and intuition is that the points of spaces cannot have internal symmetries in traditional, set-level foundations. In this talk, we will see a formal definition which closely tracks the informal idea of an orbifold.
By working with the axioms of synthetic differential geometry in cohesive homotopy type theory, we will give a synthetic definition of orbifold (subsuming the traditional definitions) which closely resembles the informal definition: an orbifold is a microlinear type where the type of identifications between any two points is properly finite. In homotopy type theory, we can construct these orbifolds simply by giving their type of points.
Eduardo Dubuc (15th of September)
(joint work with J. Gilabert, based on work by Descotte–Dubuc–Szyld).
Let 𝒞 be a category and Σ a class of morphisms. The localization of 𝒞 at Σ is a category 𝒞[Σ⁻¹] together with a functor q: 𝒞 ⟶ 𝒞[Σ⁻¹] such that q(s) is an isomorphism for all s ∈ Σ, and which is initial among such functors. The 2-localization is a 2-category 𝒞[Σ∼¹] together with a functor q: 𝒞 ⟶ 𝒞[Σ∼¹] such that q(s) is an equivalence for all s ∈ Σ, and which is initial among such functors. In this talk I will consider the construction of such localizations by means of cylinders and their corresponding homotopies, which determine the 2-cells of 𝒞[Σ∼¹]. I will examine the case where Σ = 𝒲 is the class of weak equivalences of a Quillen model category, and in particular the role of the fibrant and cofibrant replacements. I will elaborate on functorial versus non-functorial factorizations in this construction. I will recall, informally but with sufficient precision, the necessary definitions so that non-experts can grasp the ideas and follow the proofs.
Jacques Carette (22nd of September)
An interesting side-effect of formalizing mathematics in a theorem prover is that such efforts frequently lead to learning new details (*) about a known topic. I'll discuss these "little gems" that become much more apparent via formalization. While the focus will be mostly on items learned through formalizing Category Theory, a few nuggets from other domains (Universal Algebra and the theory of programming languages) will also be thrown in.
(*) These details are frequently known to select experts, but rarely ever recorded in print.
Johan Commelin (29th of September)
In this talk I will argue that formal verification helps break the one-mind-barrier in mathematics. Indeed, formal verification allows a team of mathematicians to collaborate on a project without any one person understanding all of its parts. At the same time, it allows a mathematician to rapidly free mental RAM in order to work on a different component of a project; it thus also pushes back the one-mind-barrier by reducing cognitive load.
I will use the Liquid Tensor Experiment as an example, to illustrate the above two points. This project recently finished the formalization of the main theorem of liquid vector spaces, following up on a challenge by Peter Scholze.
Angeliki Koutsoukou Argyraki (6th of October)
The formalisation of mathematics with proof assistants has recently seen a considerable increase in activity, with fast-expanding, flourishing communities attracting computer scientists, mathematicians and students. I will discuss the philosophy and motivation behind the use of modern proof assistants to formalise mathematics, referring to the state of the art and potential of the area and to recent developments involving the formalisation of advanced, research-level mathematics. I will share my experiences from my participation in the ERC project "ALEXANDRIA: Large-Scale Formal Proof for the Working Mathematician" (https://www.cl.cam.ac.uk/~lp15/Grants/Alexandria/) at the University of Cambridge led by Professor Lawrence C. Paulson FRS and I will give an overview of the main contributions and achievements of the project so far.
Dan Koditschek (13th of October)
A long tradition in robotics has deployed dynamical primitives as empirical modules of behavior [1]. Physically grounded formalization of these practices offers the prospect of an expressive programming language of work for dynamically dexterous robotics whose programs lead to task and motion planners with associated controllers that are correct by design [2]. This talk will offer a brief progress report on a bottom-up robotics research agenda seeking to formalize the use of Lagrangian energy landscapes as “letters” whose hierarchical [3], parallel [4] and sequential [5] compositions yield “words” of hybrid dynamical systems with guaranteed behavioral properties. Attention then shifts to an emerging architecture for top-down task and motion planning [6] that offers the promise of abstract, semantic, formal specification [7] for mobile manipulation tasks carried out by general purpose robots [8] in learned, geometrically complicated environments [9].
The talk concludes with a more speculative appraisal of the prospects for a unified programming environment allowing the expressive top-down specification of robot behavior with automatic compilation into bottom-up words of work that are correct by design. A recent categorical treatment [10] of robot hybrid dynamical systems [11] encodes a well-grounded version of sequential composition [12] and a weak but still practicable version of hierarchical composition [3], while neglecting the consideration of cross-talk [4] in its idealization of parallel composition as a monoidal product. Subsequent results imbue a slightly restricted version of this hybrid dynamical category [10] with a version of Conley’s fundamental theorem [13] guaranteeing the existence of global Lyapunov functions that decompose a suitably formulated relaxation of the hybrid state space into a covering by attractor basins and their boundaries [14]. Thus equipped with generalized energy landscapes, more physically grounded refinements of this hybrid dynamical category may yield a practicable type theory whose associated resource-aware linear logic clauses might be integrated into the more expressive linear temporal logic interface to the task and motion planning architecture of [7]. Such future developments would greatly advance the longer term agenda toward an empirically practicable theory of “programming work” [2].
Robin Cockett (20th of October)
This talk is based on the following papers/notes:
Turing categories are the theory of "abstract computability". Their development followed my meeting Pieter Hofstra. He was in Ottawa at the time, and he subsequently joined me as a postdoc; the core theory was developed in Calgary before he returned to Ottawa as a faculty member. Tragically, he died earlier this year, when there was still so much to do and, indeed, so much that he had done but had not published.
Turing categories are important because they characterize computability in a minimal traditional context. These ideas are not original to Pieter and me: Di Paola, Heller, Longo, Moggi, and others had all travelled in this terrain before we did. Pieter and I simply took the ideas, polished them a bit, and moved them a step further along a road which still stretches ahead.
So, the purpose of the talk is to try and explain what all this was about ... and what we were striving to accomplish. To do this I have to introduce restriction categories and Turing categories in that context. Then I will describe a family of models which are fundamental to computer science. Finally, I will take a quick look along the road at some open issues.
Richard Zanibbi (27th of October)
Mathematical information is essential for technical work, but its creation, interpretation, and search are challenging. In the hopes of helping users locate mathematical information more easily, multimodal retrieval models and search interfaces that support both formulas and text have been developed.
In this talk we will start by discussing some differences between the information needs and search behaviors of mathematical experts vs. non-experts. We will then examine techniques used for retrieving math formulas, and math-aware search engines that support queries containing both formulas and text. Finally, we will consider future research directions for math-aware search, including tasks involving Natural Language Processing (NLP) for mathematical texts.
Bryce Clarke (3rd of November)
The goal of this talk is to draw a common thread between three concepts: double categories, lenses, and algebraic weak factorisation systems (AWFS). A double category is a 2-dimensional categorical structure consisting of objects, horizontal and vertical morphisms, and cells between them. In applied category theory, lenses are a kind of morphism which consist of a forwards component and a backwards component. An AWFS on a category C consists of a compatible comonad L and monad R on the arrow category of C, such that each morphism factors into an L-coalgebra followed by an R-algebra. In each case, two classes of morphisms play a central role — horizontal and vertical, forwards and backwards, coalgebras and algebras — and it is natural to wonder if there is a deeper relationship between these three concepts.
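For readers meeting lenses for the first time, the forwards/backwards structure mentioned above looks as follows (a minimal Python sketch of the general shape, not the talk's formal definition):

```python
# A lens packages a forwards 'get' with a backwards 'put'; lenses compose,
# which is what makes them morphisms of a category.
class Lens:
    def __init__(self, get, put):
        self.get = get          # forwards:  source -> view
        self.put = put          # backwards: (source, new view) -> new source

    def compose(self, other):
        """self : A -> B and other : B -> C give a composite A -> C."""
        return Lens(
            lambda a: other.get(self.get(a)),
            lambda a, c: self.put(a, other.put(self.get(a), c)),
        )

field = lambda k: Lens(lambda d: d[k], lambda d, v: {**d, k: v})
inner = field("address").compose(field("city"))
person = {"name": "Ada", "address": {"city": "London", "zip": "NW1"}}
print(inner.get(person))               # London
print(inner.put(person, "Cambridge"))  # person with the nested city updated
```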
In this talk, I will develop a double-categorical approach to lenses via AWFS. The approach builds upon the work of Bourke and Garner, which characterises AWFS as certain kinds of double categories, and the work of Johnson, Rosebrugh, and Wood, which implicitly characterises lenses as members of the right class of an AWFS. Combining these results leads indirectly to a double-categorical approach to lenses. However, there is also a direct approach which constructs a generalised notion of lens inside any double category, using a process called the "right-connected completion". I will compare these two approaches, and explore settings where they coincide.
Gilles Dowek (10th of November)
The development of computerized proof systems is a major step forward in the never-ending quest for mathematical rigor. But it jeopardizes once again the universality of mathematical truth, with each proof system defining its own language for mathematical statements and its own truth conditions for these statements. One way to address this issue is to view the formalisms implemented in these systems as theories expressed in a common logical framework, such as Predicate logic, λ-Prolog, Isabelle, the Edinburgh logical framework, or Dedukti. We show in this talk how theories such as Simple type theory, Simple type theory with polymorphism, Simple type theory with predicate subtyping, the Calculus of constructions... can be expressed in Dedukti. Proofs developed with proof-processing systems can then be expressed in these theories, independently of the system they have been developed with, and used in any system that supports the axioms they use.
Pawel Sobocinski (1st of December)
One of the goals of applied category theory is to develop new mathematics for reasoning about open systems of various kinds. In this talk, I will introduce a string-diagrammatic methodology for reasoning about and manipulating non-passive electrical circuits, which are a classical and well-known example of open systems. This joint work with Guillaume Boisseau yields, on the one hand, a rigorous, compositional, sound and complete equational calculus, while on the other hand retaining elements of the intuitive, classical diagrammatic syntax for circuits. It is based on previous work on Affine Graphical Algebra, joint work with Bonchi, Piedeleu, and Zanasi, which I will first introduce.
Alexandre Miquel (8th of December)
In this talk, I will present implicative algebras, a simple algebraic structure generalizing complete Heyting algebras and abstract Krivine structures, and based on a surprising identification between the notions of a realizer and of a type. I will show that this structure naturally induces a family of triposes, the implicative triposes, that encompasses all triposes known so far, namely: Heyting triposes, Boolean triposes, intuitionistic realizability triposes, and classical realizability triposes, thus providing a unified framework for expressing forcing and realizability, both in intuitionistic and classical logic. Finally, I will discuss some recent completeness results about the very notion of implicative model, both in higher-order logic and in first-order logic.
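For orientation (a hedged sketch recalled from the published literature on implicative algebras, not necessarily the talk's exact formulation): an implicative structure is a complete meet-semilattice (A, ≼) with an operation → that is antitone on the left, monotone on the right, and commutes with arbitrary meets on the right; an implicative algebra is such a structure together with a separator, which plays the role of the set of realized truth values.

```latex
\[
a' \preccurlyeq a \;\text{ and }\; b \preccurlyeq b'
\;\Longrightarrow\; (a \to b) \preccurlyeq (a' \to b'),
\qquad
a \to \bigwedge_{i \in I} b_i \;=\; \bigwedge_{i \in I}\,(a \to b_i).
\]
```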
Andrei Rodin (15th of December)
In a series of lectures given in 2003, Vladimir Voevodsky identified two strategic goals for his further mathematical research. Goal number one was to develop a "computerised library of mathematical knowledge". This line of research eventually led Voevodsky to the idea of Univalent Foundations and its implementation in the UniMath library. Goal number two was to "bridge pure and applied mathematics". Voevodsky's research towards the second goal did not lead to published results, but in an interview given in 2012 he expressed his intention to return to this project in the future and explained a possible role for Univalent Foundations in it.
In this talk based on archival sources I reconstruct Voevodsky’s original strategy of bridging pure and applied mathematics, illustrate it with some examples, and argue that Applied Univalent Foundations is a viable research program.